Building My AI Resume Analyzer

Synopsis: Here I create a resume analyzer that compares candidate documents with job descriptions, highlighting missing keywords and evaluating semantic alignment. It transforms resumes into structured insights, helping applicants identify gaps, strengthen applications, and better match opportunities through clear, data-driven feedback.

I recently built Rahul’s AI Resume Analyzer, a tool that compares a resume against a job description and highlights missing keywords while measuring semantic fit. Let me walk you through the code that powers it.


First, I listed the dependencies in requirements.txt: Gradio for the UI, sentence-transformers for embeddings, scikit-learn for TF-IDF and cosine similarity, pypdf for PDF parsing, and pandas/numpy/torch for data handling.
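A minimal sketch of what that requirements.txt looks like; I'm leaving the packages unpinned here, since the exact versions in my file aren't the point:

```text
# requirements.txt (sketch; version pins omitted)
gradio
sentence-transformers
scikit-learn
pypdf
pandas
numpy
torch
```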

In app.py, I began with imports: core libraries like os, re, io, json, and string for utilities; numpy and pandas for data manipulation; and Gradio for the interface. On the ML side, I used TfidfVectorizer for keyword ranking, cosine_similarity for text similarity, and SentenceTransformer for embeddings.
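Roughly, the top of app.py looks like this (the grouping and aliases are mine; the pypdf import is an assumption based on the PDF-parsing dependency):

```python
import os
import re
import io
import json
import string

import numpy as np
import pandas as pd
import gradio as gr

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import SentenceTransformer
from pypdf import PdfReader  # assumed: used by read_pdf for text extraction
```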

I defined configuration constants such as the embedding model (all-MiniLM-L6-v2), the number of keywords to extract, and similarity thresholds. Then I wrote text utilities (clean_text, normalize_tokens, split_sentences) to preprocess both resumes and job descriptions, and functions like read_pdf and read_text to extract raw text from uploaded files.
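Here's a simplified version of those constants and helpers. The function names come from the post, but the constant names, default values, and bodies are my own minimal reconstructions:

```python
import re
from pypdf import PdfReader

EMBED_MODEL_NAME = "all-MiniLM-L6-v2"  # embedding model used throughout
TOP_K_KEYWORDS = 25                     # assumed number of JD terms to rank
COVERED_THRESHOLD = 0.6                 # assumed cutoff for a term to count as "covered"

def clean_text(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def normalize_tokens(text: str) -> list[str]:
    """Split cleaned text into unique tokens, preserving first-seen order."""
    seen, tokens = set(), []
    for tok in clean_text(text).split():
        if tok not in seen:
            seen.add(tok)
            tokens.append(tok)
    return tokens

def split_sentences(text: str) -> list[str]:
    """Naive sentence splitter on ., !, and ? boundaries."""
    return [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]

def read_pdf(path: str) -> str:
    """Extract raw text from every page of a PDF."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def read_text(path: str) -> str:
    """Read a plain-text resume or job description."""
    with open(path, "r", encoding="utf-8", errors="ignore") as f:
        return f.read()
```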

The core logic lives in analyze(). It embeds the resume and job description, ranks job-specific terms with TF-IDF, checks which ones are “covered” or “missing,” and suggests bullet points to strengthen the resume.
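A condensed sketch of that flow is below. It follows the four steps just described, but the exact scoring, thresholds, and suggestion wording in my real analyze() are more involved; the helper logic here is illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def analyze(resume_text: str, jd_text: str, top_k: int = 25) -> dict:
    # 1. Semantic fit: embed both documents and take cosine similarity.
    emb = model.encode([resume_text, jd_text])
    fit_score = float(cosine_similarity([emb[0]], [emb[1]])[0][0])

    # 2. Rank job-specific terms with TF-IDF over the job description.
    vec = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
    scores = vec.fit_transform([jd_text]).toarray()[0]
    terms = vec.get_feature_names_out()
    top_terms = [t for t, _ in sorted(zip(terms, scores), key=lambda x: -x[1])[:top_k]]

    # 3. Mark each top term as covered or missing based on the resume text.
    resume_lower = resume_text.lower()
    covered = [t for t in top_terms if t in resume_lower]
    missing = [t for t in top_terms if t not in resume_lower]

    # 4. Suggest bullet points that address the missing terms.
    suggestions = [f"Add a bullet demonstrating experience with '{t}'." for t in missing]
    return {"fit_score": fit_score, "covered": covered,
            "missing": missing, "suggestions": suggestions}
```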

Finally, I wired it all into a Gradio Blocks UI, so users can upload a resume, paste a job description, and instantly see missing skills, strong matches, and suggestions.
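The wiring looks roughly like this; component labels and layout are assumptions, and it reuses the read_pdf and analyze functions sketched above:

```python
import gradio as gr

def run(resume_path, jd_text):
    resume_text = read_pdf(resume_path) if resume_path else ""
    result = analyze(resume_text, jd_text)
    return (result["fit_score"],
            ", ".join(result["missing"]),
            "\n".join(result["suggestions"]))

with gr.Blocks(title="AI Resume Analyzer") as demo:
    gr.Markdown("## AI Resume Analyzer")
    with gr.Row():
        resume_in = gr.File(label="Upload resume (PDF)", type="filepath")
        jd_in = gr.Textbox(label="Paste job description", lines=10)
    analyze_btn = gr.Button("Analyze")
    fit_out = gr.Number(label="Semantic fit score")
    missing_out = gr.Textbox(label="Missing keywords")
    tips_out = gr.Textbox(label="Suggestions", lines=6)
    analyze_btn.click(run, inputs=[resume_in, jd_in],
                      outputs=[fit_out, missing_out, tips_out])

demo.launch()
```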