What does the Readability Checker do?
The Readability Checker reads your text and tells you how hard it is to read. Paste in a paragraph, a blog post, an email, or a full article, and the tool reports two numbers writers have relied on for decades: the Flesch Reading Ease score (0 to 100, higher means easier) and the Flesch-Kincaid Grade Level (the U.S. school grade that can comfortably read it).
Paste in "The cat sat on the mat. It was a sunny day. The cat was happy." and you'll get a Reading Ease near 100 and a Grade Level around 0.5 — first-grade reading. Paste in a paragraph from a legal contract and you'll watch the Ease score crash into the 20s and the grade climb past college.
The math runs in your browser, instantly, as you type. Nothing leaves the page. There's no signup, no upload, no waiting for an AI server to wake up — and unlike a sentence-by-sentence rewriting assistant, this tool isn't here to change your writing for you. It just measures it. What you do with the number is your call.
Why readability scores exist (and why they still matter)
Rudolf Flesch published the Reading Ease formula in 1948, after years of studying why plain prose communicates and complex prose doesn't. The grade-level version came in 1975, when J. Peter Kincaid recalibrated the formula for the U.S. Navy, whose training manuals were written at college reading level while most recruits read at around eighth grade. The formulas caught on fast: the Department of Defense adopted Flesch-Kincaid as a readability standard for technical manuals, and many U.S. states passed laws requiring insurance policies to clear a minimum Reading Ease score.
It's still the standard. Microsoft Word ships with it. The U.S. government's plain-language guidelines reference it. Hemingway Editor uses a slightly different version of the same idea. And it's baked into the SEO tooling at Yoast, Surfer, and a dozen others, because Google's quality rater guidelines repeatedly stress readability for the intended audience.
The reason the formula has survived is that it measures the two things that actually make text hard:
- Long sentences — more clauses to track in working memory before you reach the period
- Long words — more syllables, usually rarer vocabulary, more time per word
That's it. The formula doesn't know about jargon, passive voice, or whether your topic is rocket science. It just counts. But those two signals — sentence length and syllable density — turn out to be excellent proxies for cognitive load, which is why they keep predicting real reading difficulty across decades, languages, and audiences.
The formula, in plain text
The Flesch-Kincaid Grade Level uses this formula:
FK Grade = 0.39 × (words ÷ sentences) + 11.8 × (syllables ÷ words) - 15.59
Two ratios, two constants, one subtraction. The Reading Ease score uses similar inputs in a different shape:
Reading Ease = 206.835 - 1.015 × (words ÷ sentences) - 84.6 × (syllables ÷ words)
The constants look arbitrary but they're not: they were fit to actual reading-comprehension data on schoolchildren. A score in the 90s is easy enough for a fifth grader; a score below 30 is best understood by college graduates, and a score near 0 is a struggle even for them. Most newspapers sit between 50 and 70.
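In code, the two formulas are one-liners. A minimal sketch in JavaScript (the function names are illustrative, not the tool's actual API):

```javascript
// Flesch-Kincaid Grade Level: higher means harder, expressed as a
// U.S. school grade. Inputs are raw counts for the whole text.
function fleschKincaidGrade(words, sentences, syllables) {
  return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59;
}

// Flesch Reading Ease: roughly a 0-100 scale, higher means easier.
function fleschReadingEase(words, sentences, syllables) {
  return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words);
}
```

Both scores come from the same two ratios, so when two tools disagree, the difference traces back to how words, sentences, and syllables were counted, never to the formula itself.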
A worked example
Take this paragraph, which is a fair imitation of a typical blog post:
"Most people think writing is hard. It isn't. The hard part is editing. When you write a first draft, you put down whatever comes to mind. When you edit, you cut the parts that don't work. That is the real skill."
Run the numbers:
- Words: 41
- Sentences: 6
- Syllables: 50 (approximately; the checker uses a vowel-cluster heuristic similar to Word's, so the exact count can vary by a syllable or two)
- Words per sentence: 41 ÷ 6 ≈ 6.8
- Syllables per word: 50 ÷ 41 ≈ 1.22
Plug into the grade formula: 0.39 × 6.8 + 11.8 × 1.22 - 15.59 = 2.65 + 14.40 - 15.59 ≈ 1.5. Plug into the ease formula: 206.835 - 1.015 × 6.8 - 84.6 × 1.22 ≈ 97.
So the paragraph reads at roughly a second-grade level with a near-perfect Reading Ease score. That's not "the writing is dumb" — it's "anyone with a basic education can read this without effort." Which is usually what you want from a blog post.
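The worked example can be recomputed end to end. The syllable counter below is a simplified vowel-cluster sketch, not the checker's actual code, so its syllable total (and therefore the exact scores) can drift a little from what the tool reports:

```javascript
const text =
  "Most people think writing is hard. It isn't. The hard part is " +
  "editing. When you write a first draft, you put down whatever comes " +
  "to mind. When you edit, you cut the parts that don't work. That is " +
  "the real skill.";

// Naive tokenization: whitespace for words, runs of ./!/? for sentences.
const words = text.trim().split(/\s+/);
const sentences = text.split(/[.!?]+/).filter((s) => s.trim().length > 0);

// Simplified vowel-cluster heuristic: count vowel runs, drop a
// trailing silent e, floor at one syllable per word.
function countSyllables(word) {
  const w = word.toLowerCase().replace(/[^a-z]/g, "");
  if (!w) return 0;
  let n = (w.match(/[aeiouy]+/g) || []).length;
  if (w.endsWith("e") && n > 1) n -= 1;
  return Math.max(n, 1);
}

const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0);
const wps = words.length / sentences.length;
const spw = syllables / words.length;

const grade = 0.39 * wps + 11.8 * spw - 15.59;
const ease = 206.835 - 1.015 * wps - 84.6 * spw;
```

Run it and you land in the same neighborhood: a grade level around 1 and a Reading Ease around 100 (the display caps at 100).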
Now compare to the same idea written formally:
"It is commonly assumed that the principal challenge of composition resides in the generative phase; however, experienced practitioners recognize that the editorial revision stage, during which non-functional material is systematically excised, constitutes the substantive craft."
Same idea, one 35-word sentence, much longer words. The grade level jumps past 20 (far beyond college) and the Reading Ease falls to roughly zero. Identical meaning, very different cognitive cost.
What score should you aim for?
There's no universal answer — the right target depends entirely on who you're writing for. But there are useful anchors. Here's where some well-known publications and document types tend to land:
| Audience / publication | Reading Ease | Grade Level | Feel |
|---|---|---|---|
| BuzzFeed, Reddit, casual blogs | 70-80 | 5-7 | Conversational, fast scan |
| Reader's Digest | 65 | 7-8 | Mainstream prose |
| The New York Times | 50-55 | 10-11 | Educated general reader |
| The Economist | 40-45 | 12-14 | College-educated, careful |
| Harvard Business Review | 35-40 | 13-15 | Professional, demanding |
| Academic journals | 20-30 | 16-20 | Specialist only |
| Legal contracts | 10-25 | 18-25 | Often near-unreadable |
| SEC 10-K filings | 25-35 | 15-18 | Required disclosure prose |
If you're writing for a general web audience, 60-70 Reading Ease is the sweet spot: roughly eighth- to ninth-grade reading. That's not dumbing down; it's where most popular non-fiction and well-edited web writing live, a notch easier than The New York Times. Pushing the Ease score lower costs you readers without making you sound smarter; it usually just makes you sound like a tax form.
Analyses of Stephen King's prose typically put it around 80 Reading Ease, and Hemingway's in the 90s. If two of the most respected writers in English literature wrote at a fifth- to sixth-grade level, you can too.
Where the formula gets it wrong
The Reading Ease score is a measurement, not a verdict. A few things it can't see:
- Jargon and named entities. "Isotope" counts the same as "holiday" (three syllables each), but if your reader doesn't know isotopes, the formula won't help.
- Logical complexity. "If A then B, unless C, in which case D" can be twenty short words but cognitively brutal.
- Topic familiarity. A paragraph about your reader's hobby reads easier than one about a stranger topic, regardless of score.
- Code blocks and equations. Strip these before checking — the formula treats them as ordinary words and the scores get weird.
- Other languages. Flesch-Kincaid is calibrated for English. Spanish has a separate Fernández Huerta variant; German has the Wiener Sachtextformel. The English score on non-English text is meaningless.
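The code-blocks-and-equations caveat is easy to act on: strip them with a couple of regex passes before pasting. A sketch for Markdown text (the function name is made up; indented code blocks and inline math aren't handled):

```javascript
// Remove fenced code blocks, inline code spans, and display equations
// so they don't get scored as ordinary words. Order matters: fenced
// blocks go first so their contents never reach the inline pass.
function stripCodeAndMath(markdown) {
  return markdown
    .replace(/`{3}[\s\S]*?`{3}/g, " ")   // fenced code blocks
    .replace(/`[^`\n]*`/g, " ")          // inline code spans
    .replace(/\$\$[\s\S]*?\$\$/g, " ");  // display equations
}
```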
Treat the score as one signal among several. If the number says "grade 14" and your readers are eighth graders, that's worth investigating. If the number says "grade 6" but readers are bouncing, the problem isn't the words — it's the argument, the structure, or the topic.
Practical ways to lower the score
If the checker tells you your draft reads at grade 15 and your audience is general readers, three moves get you most of the way:
- Break long sentences. Any sentence over 25 words is a candidate to split. Look for "and," "but," "which," and semicolons — they're often disguised periods.
- Swap long words for short ones. Utilize → use. Demonstrate → show. Approximately → about. In order to → to. Most three-syllable words have a one-syllable cousin doing the same job.
- Cut filler phrases. "It is important to note that," "due to the fact that," "in spite of the fact that" — these add words without adding sense. Edit them out and watch the score climb.
For checking the actual length of what you've written before and after edits, the Word Counter shows total words, characters, and sentences. If you want to look at sentence count and structure on its own, the Sentence Counter is the simpler version.
Privacy and how it runs
Everything happens in your browser. Your text is parsed locally with JavaScript, the syllables are counted with a vowel-cluster heuristic, the formula runs on the spot, and the result appears. No copy of your draft is sent to any server, ever. Closing the tab clears everything. That makes the Readability Checker safe for confidential drafts, unpublished writing, internal memos, and anything else you wouldn't paste into a third-party AI service.
Most cloud-based readability tools quietly retain inputs to train models or "improve service quality." This one doesn't, because it can't — there's no service to send anything to.
Related text tools
The Readability Checker is part of a small family of text analysis tools that all work the same way: paste, see results, no signup.
- Word Counter — total words, characters, sentences, paragraphs, and reading time in one paste.
- Sentence Counter — focused on sentence counts and average length, useful when you're auditing structure.
- Reading Time Calculator — converts word count into a reading-time estimate for posts, scripts, and speeches.
- Word Frequency Counter — shows which words you use most, helpful for spotting repetition or checking keyword density.
- Character Counter — when only character count matters, like for tweets, bios, or meta descriptions.
- Case Converter — switch between upper, lower, title, sentence, and camelCase without retyping.
Frequently asked questions
What's a good Reading Ease score?
For general web writing, aim for 60-70. That's eighth- to ninth-grade reading, roughly where most popular non-fiction sits. Lower scores (40-50) are fine for educated audiences; scores under 30 mean you're writing for specialists or losing readers. Higher scores (80+) are fine for casual blogs, kids' content, and conversational copy.
How is "syllable" counted?
The checker uses a vowel-cluster heuristic: it counts runs of vowels (a, e, i, o, u, sometimes y) and subtracts silent-e endings, similar to the approach Microsoft Word takes. It's about 95% accurate on English. A few words ("fire," "every," "chocolate") are tricky, but the small errors average out over any text long enough to matter.
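The description above translates to just a few lines. An illustrative reimplementation, not the checker's actual source:

```javascript
// Count vowel clusters (a, e, i, o, u, y), subtract a trailing
// silent e, and floor the result at one syllable per word.
function countSyllables(word) {
  const w = word.toLowerCase().replace(/[^a-z]/g, "");
  if (!w) return 0;
  let n = (w.match(/[aeiouy]+/g) || []).length;
  if (w.endsWith("e") && n > 1) n -= 1;
  return Math.max(n, 1);
}
```

On the tricky words mentioned above, this version returns 1 for "fire", 3 for "every", and 3 for "chocolate": exactly the kind of small disagreement that washes out over a full text.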
Why does the checker give a different score than Microsoft Word?
Word uses the same Flesch-Kincaid formulas but with a slightly different syllable counter and different sentence-boundary detection (it treats some abbreviations as sentence ends; this checker doesn't). For most text the gap is a point or two on the Ease score and half a grade on the Grade Level. If you're using the score for a hard requirement, use whichever tool the requirement was written against.
Does the score predict whether my article will rank on Google?
Not directly. Google has never confirmed using Flesch-Kincaid as a ranking signal. But Google's Search Quality Rater Guidelines repeatedly mention "readability for the intended audience," and there's strong evidence that pages matching their audience's reading level get more dwell time, lower bounce rate, and more shares — all of which do affect ranking. So the score is an indirect signal, not a direct one.
Can I check readability in other languages?
The Flesch-Kincaid formula is calibrated for English. Running it on Spanish, German, or French will produce a number, but the number won't mean what it means in English (Romance languages have systematically longer words; the formula reads them as harder than they are). For non-English text, look up the language-specific variant (Fernández Huerta for Spanish, Wiener Sachtextformel for German, etc.).
Does the checker work on long documents?
Yes. Everything runs in your browser, so document size is limited only by what your browser can hold in memory — usually well over a million characters. For very long texts, the score reflects the whole document; if you want per-section scores, run each section separately.
Is my text stored anywhere?
No. The Readability Checker runs entirely in JavaScript on your device. Your text never reaches any Microapp server, never gets logged, and never trains anything. Close the tab and it's gone.