Keyword density is the number of times a keyphrase is used in comparison to the total number of words in a piece of content.
If you’ve got 100 words in your article and use your keyphrase in there one time, then you’ve got a density of 1%. The density of any phrase is calculated by looking at the number of times it occurs in an article and dividing that by the total number of words in the article.
It’s pretty simple math even I can do.
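If it helps to see it, that same math can be sketched as a tiny function (the phrase-counting here is a rough substring count, just to illustrate the formula):

```python
def keyword_density(text: str, phrase: str) -> float:
    """Occurrences of `phrase` as a percentage of total words in `text`.
    Rough illustration only: uses a simple substring count."""
    total_words = len(text.split())
    occurrences = text.lower().count(phrase.lower())
    return occurrences / total_words * 100

# A 100-word article that uses the keyphrase one time -> 1% density
article = "filler " * 99 + "keyphrase"
print(keyword_density(article, "keyphrase"))  # 1.0
```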
Lots of people hoping for high rankings scour SEO forums and articles far and wide for the ideal keyword density.
You’ll see suggestions thrown out all over. Some say 5%, some say 3%, others say 2.2%.
The real answer is there’s no fool-proof keyword density that’s going to shoot your web pages to the top of Google.
High rankings are calculated based on a number of factors, both on-site and off-site.
There’s not going to be a single keyword density that’s optimal for every web page. I’ll cover a few reasons why later on when I get into my Keyword Density Sliding Scale idea.
But first, I conducted an SEO study where we looked at the top 5 ranking web pages for 100 random keyphrases. Part of the study looked at the body copy of each page. We found that on average, these high ranking web pages had a keyword density of under half a percent! Can that be right?
A half-percent density means that for every 200 words in the copy, your keyphrase appears just one time.
It may not fit your paradigm, but it’s what we found based on actual high ranking web pages. Again, this is an average; some percentages were higher, some were lower.
I’ve been successfully ranking web pages using densities around that number for a very long time now.
Here’s an example to help you visualize this:
This page was ranking high for the phrase ‘make money fast’. The web page has the phrase ‘make money fast’ in the heading and the title tag.
By my count the phrase ‘make money fast’ is in that article 4 times. When I copied and pasted the content into Word, I came out with 3,068 words (I did not count the citations, the related articles, and the stuff at the end, just the body of the article).
So 4 occurrences in over 3,000 words is a 0.13% density. That goes against what a lot of people are saying to do, but it’s right there in front of us.
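The arithmetic behind that figure, in case you want to check it yourself:

```python
occurrences = 4      # times 'make money fast' appears in the body copy
total_words = 3068   # word count of the article body
density = occurrences / total_words * 100
print(f"{density:.2f}%")  # 0.13%
```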
And this was just one of the 500 high ranking web pages we looked at. Across all of them we saw the same trend: very low densities in the body copy for the very keyphrases they rank highly for.
How can Google even figure out what this page is about?
For one the exact keyphrase is in the title tag and the headline.
And the word ‘money’ is in the article 27 times.
Even with a count of 27, that’s still only about a 0.9% keyword density (27/3,068 ≈ 0.88%). So it’s right under 1%, and that’s for the single keyword ‘money’.
It makes sense to have a single word like that in an article with a higher density because it’s hard to talk about anything dealing with money without using that word. A good writer might try to switch it up a little, like using ‘dollars’ and ‘pay’ (both of which are in the article at least a couple times each). But even then, sometimes you’ve just got to use the word ‘money’ to convey what you’re talking about.
Within the article, there are times when they use ‘get money’, ‘quick money’ and ‘earn money’. All these words help corroborate the idea that the article truly is about making money fast. They’ve just got the words slightly changed up. That’s how good writers do it.
One way to think about this is your headline or title tag shows what the article is about, while the content backs up that it’s truly about that topic with the use of related phrases and words.
It seems Google has figured out what natural writing looks like, and has put a heck of a lot of work into identifying related words (like money & dollars, and quick & fast).
So even though the string ‘make money fast’ isn’t in every other sentence (it’s barely in there at all), it’s still a good piece of content to show for that search string. And that’s because they’ve got all these related words in there that mean the same thing.
Many of the sites we looked at in our study did not have the keyphrase they were ranking high for in the body of the content at all.
That’s zero times for a density of zero.
Sometimes it was in the headline or title tag, but pretty much nowhere else on the entire page. However, there may have been very related words in the body copy that served as clues for understanding what the page was about.
Why are so many pages with very low keyword densities ranking high over pages with higher densities?
That’s a good question. How is Google figuring all this out and making it work?
We know Google started using a supplemental index many years ago. One reason they created this special index was so they had a place where they could segregate duplicate web pages from pages with unique content. This was all back in 2005 when people were taking articles off other sites like article directories and getting them ranked.
But Google squashed that tactic through the use of their supplemental index. Everyone started freaking out about how they didn’t want their pages winding up in there.
The deal was Google would pull the search results for any given keyphrase from their regular index first and if there weren’t any good matches, only then would they pull results from the supplemental index.
You can imagine how often web pages banished to the supplemental index got shown, like pretty much never. So your web pages were (and still are) worthless if they get put in that supplemental index.
At first you could find out if your pages were in the supplemental index. Then later on in standard Google ‘we must keep everyone else in the dark’ style, they made it so you couldn’t figure it out.
At this point they likely have more than one supplemental index and they use them to ‘quarantine off’ sites they just don’t like for one reason or another.
The point is, Google’s very good at sorting the ‘bad’ sites from the ‘good’ sites. They’ve been doing it for a very long time and they don’t tell you if they think your site’s ‘bad’ other than rewarding what they consider ‘good’ sites with higher rankings.
So let’s just say your pages (or likely even your entire site) are put in a special index (let’s call it the low quality index) and they don’t get shown for results until there’s nothing left to show from the main (good) index.
These low quality pages are shown only when there are no matches for any given keyphrase (so for very random keyphrases) or they get shown way after the best stuff. So that’s why these pages may be shown on page 10 of a Google results page. Although often, I don’t see them at all.
Whether this is how it works or not I think it’s a good way to think about things, especially if you want high rankings (and why wouldn’t you, that’s an awful lot of traffic someone else is going to get if you turn your back on SEO).
Basically, there are many factors that determine whether or not your web page is high quality. The number of times you repeat keyphrases and keywords is probably just one of them.
You’ve probably heard of the phrase ‘keyword stuffing’. It may actually take a lot less than you think for Google to decide your web page is guilty of it.
It’s not really a keyword, but a keyphrase.
Another thing to consider is that your keyphrase is a series of words and that the number of words in the phrase may impact the ideal density range (although I don’t think there’s an ideal number, I’m sure there’s a range).
Even though it’s often called keyword density, not many people go after a single word like ‘debt’. That’s going to be crazy hard to rank for, so most people don’t bother trying to get their sites ranked high for single words. So it’s really keyphrases that we’re looking at (because a phrase is 2 or more words together).
If a keyphrase has just 2 words within it then most of the time it’s going to wind up in your copy several times whether you do this intentionally or not.
For example, if I’m writing an article on ‘banana peels’ what can I substitute out for that phrase?
I suppose I could just say ‘the peel’ or ‘peeling’ a few times, but it’s going to be a struggle to intentionally not use that phrase.
The fewer times I use the phrase ‘banana peel’, the weirder my article is going to sound. So I’ll have to include it in my copy several times for it to sound natural.
However, if my keyphrase is a longer phrase like ‘can you eat banana peels’, then it’s going to be harder to put that exact phrase in the article more than once.
I could see a title, ‘can you eat banana peels’ and then that’s it.
Otherwise the article’s going to turn out sounding pretty strange.
Here’s the standard first sentence of an article where keyword stuffing is taking place:
“This article aims to answer the question ‘can you eat banana peels?’”
And then they have the rest of an article which may or may not have that phrase in there. At the end they say, ‘I hope you found the answer to the question “can you eat banana peels?”’
You’d be way better off leaving that right out of there.
It sounds bad.
And if you’re ordering articles from an outsourcer or a service and this is what they give you, stop using them. It’s probably a waste of your money.
If you’re trying to write naturally, you’d be lucky to fit the phrase ‘can you eat banana peels’ in your article at all. It’ll definitely work in your headline or a subheading (probably just one of those two) and that’s about it. You’ll use other phrases like ‘eating banana peels’ in your body copy because they’ll sound better.
My guess is Google probably even incorporates phrase length into their algorithm. If a searcher types a 2-word phrase into Google, the ideal keyword density range would be higher.
If a searcher types in a longer phrase, like one with 5 or 6 words, then the ideal keyword density range is probably lower. That’s because they want to show their visitors the best article, one that’s written naturally and not trying to rank high due to a loophole dealing with density.
If one of these longer phrases shows up in your article over and over again, it’s a pretty big tip-off that you’re intentionally stuffing it in there.
So if you’re targeting a longer phrase, just write naturally, which will mean you’ll probably end up with an ultra-low keyword density. If you’re targeting a shorter phrase, like one with 2 words, then again, write naturally. It’s fine to put it in your copy a few times. Any good writer would do just that.
It’s a completely different way to look at keyword density.
It’s not just a single percentage, but a range and there’s likely a sliding scale dependent on the number of words within the phrase. I call this the Keyword Density Sliding Scale.
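Here’s a rough way to picture the sliding scale in code. The cutoffs and ranges below are purely my own illustrative assumptions, not numbers published by Google or derived from the study:

```python
def suggested_density_range(phrase: str) -> tuple:
    """Hypothetical sliding scale: the longer the keyphrase, the lower
    the density range a naturally written article tends to land in.
    All numbers below are illustrative assumptions only."""
    words = len(phrase.split())
    if words <= 2:
        return (0.5, 1.5)   # short phrases recur naturally in the copy
    elif words <= 4:
        return (0.2, 0.8)
    else:
        return (0.0, 0.3)   # long phrases: the headline alone may be enough

print(suggested_density_range("banana peels"))              # (0.5, 1.5)
print(suggested_density_range("can you eat banana peels"))  # (0.0, 0.3)
```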
If this is freaking you out, don’t worry. All it means is you should write naturally (or ask your writer to write naturally, which is what they do best anyway).
When the article’s done, try to insert a few keyphrases into the content and the headline, even just one time each (as long as it makes sense). If you can’t include a keyphrase without making the article sound awkward, leave it out and try another.
That’s why I think grouping related keyphrases is so important. You can target several natural sounding phrases within the same piece of content instead of stuffing it with the same one over and over again. And again, that’s how any good writer will do it.
What do you think of the Keyword Density Sliding Scale idea? And do you have any other thoughts on keyword density like what it should be? Please share your comments below …