An applicant tracking system (ATS) is the software employers use to receive, parse, and rank résumés. To optimize for one, you need three things: a résumé that parses cleanly (single-column, text-based, standard fonts), keyword overlap with the JD that's natural rather than forced, and quantified bullets that signal real impact. The good news: modern ATSes — Greenhouse, Workday, Lever, Ashby, SmartRecruiters — are all reasonably forgiving. The bad news: most rejections happen because the recruiter never opens the file, not because the ATS auto-rejected it.
The big six ATS platforms (and how they actually work)
applinity scrapes every major ATS used by Silicon Valley tech companies. Across the 300-company catalog, the platform distribution looks roughly like this:
- Greenhouse — startup and growth-stage default (Stripe, Anthropic, Figma, Notion, Cursor, Linear, plus most YC-graduated companies). Best résumé parsing of the bunch; the structured profile auto-fills are usually accurate.
- Workday — enterprise default (Apple's pre-2020 requisitions, Salesforce, Oracle, Cisco, Nvidia, Adobe, plus most Fortune 500s). Has the most complex application flow — multi-step wizards, account creation, often re-entering everything by hand even after résumé upload.
- Lever — common for mid-stage startups (Brex, Mercury, several growth-stage SaaS). Clean parsing, fast forms.
- Ashby — newer entrant favored by AI labs and design-forward startups (Anthropic for some teams, OpenAI for some teams, Vercel, Replit). Best UX of the modern crop.
- SmartRecruiters — common at enterprise consumer companies. Mid-tier parsing.
- iCIMS — older enterprise default. Heavy forms, weaker parsing. Plan to re-enter your work history by hand.
The practical takeaway: optimize for the worst parser in the bunch. If your résumé parses cleanly on iCIMS and Workday, it parses cleanly on everything.
The 4-axis ATS rubric (used by applinity's scorer)
We grade résumés on four 25-point axes — roughly the same checks a recruiter makes when quickly scanning a file:
1. Parseability (0–25)
Can the ATS extract your text? Issues that tank this score:
- Image-only PDFs (scanned résumés or résumé-builder exports that embed everything as a graphic)
- Two-column layouts where dates sit in a side rail (parsers don't always know which job a date belongs to)
- Exotic fonts that get substituted on the server side
- Section headers that aren't standard ("My Journey" instead of "Experience")
- Contact info in a header / footer (some parsers skip those zones)
2. Keyword density (0–25)
ATSes weight keyword overlap with the JD heavily. The applinity scorer checks your résumé against a library of ~250 commonly required engineering technologies (languages, frameworks, cloud services, data tools, observability) and surfaces what's missing. The aim isn't to stuff — it's to make sure that when a recruiter searches "Kubernetes" in their ATS, you actually show up.
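The overlap check reduces to a whole-word search of each JD keyword against the résumé text. This is a minimal sketch of the idea, not applinity's actual scorer; the function name and keyword set are illustrative:

```python
import re

def missing_keywords(resume_text: str, jd_keywords: set[str]) -> set[str]:
    """Return JD keywords that never appear in the résumé text.

    Matching is case-insensitive and whole-word, so a keyword like
    "Go" won't match inside "Django" or "Google".
    """
    found = set()
    for kw in jd_keywords:
        # (?<!\w) / (?!\w) enforce word boundaries around the keyword
        if re.search(r"(?<!\w)" + re.escape(kw) + r"(?!\w)",
                     resume_text, re.IGNORECASE):
            found.add(kw)
    return jd_keywords - found

jd = {"Kubernetes", "Go", "PostgreSQL", "Terraform"}
resume = "Deployed Go services to a Kubernetes cluster backed by PostgreSQL."
print(missing_keywords(resume, jd))  # {'Terraform'}
```

The whole-word constraint matters more than it looks: a substring match would happily "find" short keywords inside unrelated words and inflate the score.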
3. Quantified impact (0–25)
"Built X to handle 50k req/s" beats "Worked on scalability" every single time. We count the percentage of bullets that include a measurable number — throughput, latency, headcount, dollars, % improvement, error rate. The target is 60%+ for engineering résumés. Below 30% and you sound junior regardless of seniority.
4. Action-verb strength (0–25)
Bullets that lead with strong verbs ("Shipped", "Reduced", "Architected", "Owned") outperform passive openings ("Worked on", "Helped with", "Was responsible for"). We measure the ratio. Replace your weak verbs even if the underlying work is identical — recruiter response rates roughly double when this axis is clean.
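The verb-strength check is the same shape as the impact check: classify each bullet by its opening words. A sketch under the assumption of small hand-picked verb lists (the real scorer's lists would be longer):

```python
# Illustrative lists only — extend with your own strong/weak openings.
STRONG = {"shipped", "reduced", "architected", "owned", "built", "cut", "led"}
WEAK_OPENINGS = ("worked on", "helped with", "was responsible for")

def strong_verb_ratio(bullets: list[str]) -> float:
    """Fraction of bullets that lead with a strong verb, not a weak opening."""
    if not bullets:
        return 0.0
    strong = 0
    for b in bullets:
        text = b.strip().lower()
        if text.startswith(WEAK_OPENINGS):
            continue  # weak opening disqualifies the bullet outright
        first_word = text.split(maxsplit=1)[0] if text else ""
        if first_word in STRONG:
            strong += 1
    return strong / len(bullets)

bullets = ["Shipped the billing rewrite", "Was responsible for deployments"]
print(strong_verb_ratio(bullets))  # 0.5
```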
Format guidelines
- Single column. Always. Multi-column résumés break parsing at most ATSes.
- Standard fonts. Calibri, Arial, Helvetica, Georgia, Times. Anything else risks font substitution on the parser side.
- 10–11 pt body, 14–16 pt section headers. Larger is unprofessional; smaller is unreadable.
- Date format YYYY–YYYY or Mon YYYY – Mon YYYY. Avoid ambiguous "2022" alone — parsers can't tell start from end.
- Hyperlinks on portfolio / GitHub. Recruiters click them; ATSes pick them up.
- No headshots, no graphics, no progress bars. US tech hiring norms; in other markets the rules differ.
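The date-format rule above is easy to self-check before submitting. This is a rough sketch of what an unambiguous range looks like to a parser — the regex is illustrative, not any ATS's actual grammar:

```python
import re

# Accepts "2019–2023", "Mar 2021 – Jun 2024", or an open-ended "Jan 2022 – Present".
DATE_RANGE = re.compile(
    r"^(?:\d{4}\s*[–-]\s*\d{4}"
    r"|[A-Z][a-z]{2} \d{4}\s*[–-]\s*(?:[A-Z][a-z]{2} \d{4}|Present))$"
)

def is_parseable_range(s: str) -> bool:
    """True if the string is an unambiguous start–end date range."""
    return bool(DATE_RANGE.match(s.strip()))

print(is_parseable_range("Mar 2021 – Jun 2024"))  # True
print(is_parseable_range("2022"))                 # False — start or end?
```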
Keyword strategy that doesn't backfire
The lazy advice is "match the JD keywords." The practical reality is more nuanced:
- Tools you've actually used belong on the résumé. If the JD asks for Kubernetes and you've shipped to a K8s cluster, put it in your bullets — not in a "Skills" section dump.
- Categories beat lists. "Languages: Go, Python, TypeScript" beats a 30-item flat skill dump. ATSes parse both; recruiters only read the structured one.
- Don't add things you haven't used. Recruiters screen for fabrication in 90 seconds. The technical interview screens for it in 5 minutes.
- Tailor the top, leave the rest. Move the bullets most relevant to the JD into the top third of your most recent role. applinity's per-job tailoring does this automatically with a hard no-fabrication rule.
What to do right now
- Run your résumé through applinity's free scorer. Note the score per axis.
- Fix the lowest-scoring axis first. Parseability issues are usually 5-minute fixes (re-export the PDF; switch to single column). Keyword and quantification fixes take longer but compound.
- Re-score. Repeat. The target is 80+ for entry-level engineers, 90+ for senior and staff IC.
- When you're applying to a specific role, sign up for applinity and run per-job tailoring on the actual JD — that gets you the last 10 points of keyword overlap without fabricating anything.
Frequently asked questions
What is an ATS (applicant tracking system)?
An ATS is the software employers use to receive, parse, store, score, and route job applications. Common platforms include Greenhouse, Workday, Lever, Ashby, SmartRecruiters, and iCIMS. When you upload a résumé to a job posting, it's parsed by the ATS first, then surfaced to a recruiter — typically with a structured profile generated automatically from your file.
Do recruiters actually use the ATS scoring?
Most ATS platforms don't auto-reject anyone. What they do is rank applicants based on keyword match, experience parsing, and screening-question responses — and recruiters work the queue top-down. A low-ranking résumé doesn't get rejected, it gets ignored. The practical effect is the same.
PDF or DOCX — which format do ATSes prefer?
Both work at every major ATS. The format risk isn't the extension, it's the structure. Single-column, text-based PDFs and standard DOCX files parse fine. Image-based PDFs (especially scanned résumés), two-column layouts with tab-separated dates, and exotic fonts are where most parsing failures happen.
Does keyword stuffing work?
No. Modern ATSes parse keywords in context — duplicate keywords without backing experience get caught by recruiters in 5 seconds, and some ATSes downweight overly repetitive résumés as spam. Lead with quantified outcomes, and the keywords will appear naturally in context.
Should I match the job description word-for-word?
Match the substance and the keywords; don't copy the phrasing. Recruiters and ATSes both flag verbatim JD copies as low-effort. The applinity per-job tailoring rewrites the top of your résumé to lead with the bullets most relevant to the JD's keywords, without inventing anything.
How long should my résumé be?
One page for engineers with under 8 years of experience. Up to two pages for senior IC and management. ATSes don't care about page count; recruiters do, and they're who you're optimizing for in the end.
What's the single biggest ATS mistake?
Image-only PDFs. Some résumé builders export résumés as a single embedded image, which means the parser extracts zero text. The recruiter sees a blank profile, gives up, and moves on. Always check that text is selectable in your PDF before submitting.
Does the file name matter?
Slightly. Recruiters often see filenames in the queue view. "firstname-lastname-resume.pdf" is more professional than "untitled-3-final-v2.pdf", and some ATSes use the filename as a fallback when parsing fails.