Does Google Detect Duplicate Content If Every 13th Word is Unique?

In one of our last blog posts, we scientifically proved 51% / 49% (unique to duplicate content) to be the minimum ratio required for a page to be determined unique. Our test was designed to find a “sweet spot” of words on a page - that number which would make a page unique in the eyes of Google. To that end, we ran 5 tests, each with its own target keyword. The goal was to compare 2 pages (Page A and Page B), each with a different percentage of duplicate content, and figure out the optimal percentage of unique text for various cases. Page A was indexed first; Page B was then indexed to test whether it was read as duplicate content.

We concluded that, regardless of how many words are used, the ratio appears to be the deciding factor. For instance, 100 unique words and 100 duplicate words are considered unique overall, whereas 400 unique words and 800 duplicate words are considered duplicate and won't index.

This experiment was run about a year ago, but we decided to take the results of the previous tests a step further and examine whether Google detects duplicate content if every 13th word is unique - essentially, breaking up a big block of duplicate text with single, unique words.

If you've been in SEO a while, then you know this was one of the original 'google leak' rumors. What do you think? Will this rumor sink or swim?

We set up the test like this..

This experiment was an extension of our previous experiments on duplicate content. In the previous experiment, we determined that if 51% of the page consists of unique content, it will be considered unique overall by Google. 

The current experiment was run in a few different ways to prove our new Hypothesis. 

Hypothesis

If every 13th word is unique, the page will be considered unique overall. 

We set up five pages, each optimized for the same keyword and ran the tests five times. 

All five pages were identical except for one difference: how often the target keyword + a unique word was inserted on the page. Page 1 was left unchanged. On Page 2, the target keyword + a unique word was added every 6th word. Page 3 had the same setup as the previous two pages, except the target keyword + a unique word was added every 13th word. On Page 4, the target keyword and a unique word were added every 24th word. The last page tested had the target keyword and a unique word added every 4th word.

The idea behind setting up the tests this way was simple: if the page is not unique enough, it will get filtered. If Google detects duplicate content, a message (shown below) will be displayed at the bottom of the SERP:

Here is what we discovered

The results of the five tests were astonishing: all five pages appeared in the SERP with no duplicate content issues.

Even Page 4, which had a unique word only every 24th word (seemingly the least unique of the set), still passed the duplicate content filter.

Final Takeaway

We actually discovered that it’s enough to have every 24th word unique in order for Google to see your page as “unique” and not de-index it. 

Disclaimer: This experiment was not done to “trick” Google. And we most certainly do not encourage you to spin content. Rather, our goal is to show the science behind Google’s algo. Even with no duplicate content “penalty,” it is widely accepted that Google rewards quality and unique content that brings value to the viewers. 

Does a .city Domain Beat a .com In Local Search?

Acquiring .city domain names has become extremely popular over the past few years among businesses with target markets located in particular cities. These new TLDs have turned out to be a popular alternative to the classic domain extensions such as .com and to country code extensions (.fr, .de, .co.uk...).

Because of the many questions and misconceptions around this topic of new TLDs, Google’s John Mueller published FAQs on the Google Webmaster Central blog addressing the issue. Specifically, he stated that “...our systems treat new gTLDs like other gTLDs (like .com and .org). Keywords in a TLD do not give any advantage or disadvantage in search.” As far as new region or city domains are concerned, John emphasized that they’ll be treated as gTLDs even if they look region specific.

That being said, an immediate question came to our minds: would a .city provide an advantage over a .com when trying to rank local or city-specific pages, and consequently give a local ranking edge? We decided to test it!

Daniel Furch - Ranking-Analyse neuer Top-Level-Domains: ‚.berlin‘-TLDs in der lokalen Google-Suche (Ranking analysis of new top-level domains: '.berlin' TLDs in local Google search)

Source: Daniel Furch - Searchmetrics

We could only locate one other previous experiment on this, done by Searchmetrics in 2014, which studied .berlin vs .com and .de domains for local Berlin searches (sorry, this article is in German). Their conclusion was that a .city domain name will help boost local rankings. That experiment was a field test done on existing SERPs, not in a test environment, so the variables and conditions were very different from ours. Stick around to the end of this blog post to find out our test results.

We set up the test like this..

The goal of the experiment was to find out whether a .com would beat a .city. As we were putting one domain against another, we only had two pages in play. Since it's possible that an unforeseen ranking factor could influence the result with just 2 pages in our test environment, we ran the test three times to see if we got a repeating result.

First, we acquired three sets of domains. For each set, we obtained both the .com and the .city for the same fake keyword. The domains were as follows:

[Screenshots of the three domain sets: June 6th, June 15th, June 27th]



Then we attempted to make pages that were “local” to Phoenix, Arizona. To make the pages “local” we:

1. Created the pages from a Phoenix IP address 

2. Added schema markup for a local Phoenix physical address to the pages (an example of this kind of markup follows this list)

3. Embedded a Google Map of Phoenix in the pages:

4. Added an image of Phoenix to the page; the alt text contained the word “Phoenix” and the image was geotagged with the latitude and longitude coordinates of Phoenix.

5. In the Page Title and in the Body Content the word “Phoenix” was placed next to the target fake keyword. As a variation, the phrase “keyword in Phoenix” was also used. 

6. Added authority outbound links on the page to the Phoenix Wikipedia page and the Phoenix.gov page.
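
For reference, here is a minimal sketch of the kind of local markup described in steps 2 and 3. The business name, street address, and map URL are made-up placeholders rather than the actual test values, but the coordinates are Phoenix's real latitude and longitude:

    <!-- Hypothetical LocalBusiness schema with a Phoenix address and geo coordinates -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "LocalBusiness",
      "name": "Example Test Business",
      "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Example St",
        "addressLocality": "Phoenix",
        "addressRegion": "AZ",
        "postalCode": "85004",
        "addressCountry": "US"
      },
      "geo": {
        "@type": "GeoCoordinates",
        "latitude": 33.4484,
        "longitude": -112.0740
      }
    }
    </script>

    <!-- Hypothetical embedded Google Map of Phoenix (step 3) -->
    <iframe src="https://www.google.com/maps?q=Phoenix,+AZ&output=embed" width="600" height="450"></iframe>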

After the setup was complete, all domains were submitted to the Google URL submitter. We then checked three variations of possible searches: 

1. Keyword 

2. Phoenix + Keyword 

3. Keyword + in Phoenix

Here is what we discovered

With three sets of domains and three different searches for each, there were a total of nine results. The .com won eight out of nine times.

For the first keyword

1. Keyword

2. Phoenix + Keyword

3. Keyword + in Phoenix

The only search the .city won was the plain keyword search. For this set, the .com still won both the “Phoenix + Keyword” and “Keyword in Phoenix” searches.

Final Takeaway

At the end of the day, we would recommend going with the .com. You do see anomalies from time to time where a .city wins, but if you have to make a decision, go with the .com.

Please note that these are single variable test results. We created an isolated situation for testing in order to determine results. At the same time, we are not saying that you cannot rank a .city and win. As always, great content on great pages will do its job.


How Much Original Text Is Required for a Page to be Considered “Unique”?

How do you determine if Google sees your content as unique? A quick, easy test is to do a highly specific search. Copy and paste a sentence from your page into Google. If the result looks normal, Google does not detect any duplicate content issues. If, on the other hand, Google has found duplicate content issues, you may see only one page cited in the SERP (hopefully it’s yours) along with the following message:

Google defines duplicate content in their guidelines as such: “Duplicate content generally refers to substantive blocks of content within or across domains that either completely match other content or are appreciably similar. Mostly, this is not deceptive in origin.”

According to Google’s Search Quality Senior Strategist, Andrey Lipattsev, Google does not have a duplicate content penalty. This statement was supported by Google's Webmaster Trends Analyst, John Mueller, during a regular Google Webmaster hangout session. He emphasized that there is “No duplicate content penalty” but “We do have some things around duplicate content … that are penalty worthy”. What does that mean exactly? Our interpretation is that you cannot expect to rank high in Google with content that is duplicated from other, more trusted sites.

Even with no duplicate content “penalty,” it is widely accepted that Google rewards quality, uniqueness, and the signals associated with adding value. Meanwhile, a critical component of cost-effective SEO is creating pages that are seen as unique but that also leverage existing content. This brings up a justified question: How much original information should appear on the page in order to be considered unique?

We tested this very question. Read on to find our results!

We set up the test like this..

Hypothesis: At least 50% of a page needs to be unique

Our test was designed to find a “sweet spot” of words on a page - that number which would make a page unique in the eyes of Google. To that end, we ran 5 tests, each with its own target keyword. The goal was to compare 2 pages (Page A and Page B), each with a different percentage of duplicate content, and figure out the optimal percentage of unique text for various cases. Page A was indexed first; Page B was then indexed to test whether it would be read as duplicate content.


Test 1

Objective: To test whether a 50/50 ratio of duplicate to unique content is considered unique.

Test 2

Objective: To test whether 2 blocks of duplicate content at a 50/50 ratio of duplicate to unique content is considered unique.

Test 3

Objective: To test whether a 33/66 percentage of unique to duplicate content is considered unique.

Test 4

Objective: To test whether an increased word count at a 33/66 percentage of unique to duplicate content is considered unique.

Test 5

Objective: To test whether 2 blocks of duplicate content at a 40/60 percentage of unique to duplicate content is considered unique.

Here is what we discovered

In order to test our hypothesis, we first had to determine in which situations Google considers a page ‘unique’. Fortunately, testing has given us insight into some of Google’s default tendencies. If SERPs contain duplicate content, Google will omit some of the results and display the following:

It means that if that notice is not displayed in the SERPs for a target keyword, then Google sees all the displayed results as unique. If that notice is displayed, then we know that Google sees the displayed results as duplicate. 

The results of five tests varied depending on the percentage of duplicate content.

Test 1

The objective for the first test was to determine whether a 50/50 ratio of unique to duplicate content is considered unique. In the image below, both pages were shown in SERPs, which leads to the conclusion that a 50/50 ratio of unique to duplicate content is seen as original by Google.

Test 2

In the second test, we tried to prove whether 2 blocks of duplicate content at a 50/50 ratio of unique to duplicate content would be considered unique. The test results appeared to be positive again.

Test 3

The goal of test #3 was to determine whether a 33/66 ratio of unique to duplicate content is considered unique. Unlike the previous 2 tests, this time the SERPs displayed only one page along with the already familiar notice “In order to show you the most relevant results, we have omitted some entries very similar to the 1 already displayed...”. This means that a 33/66% unique to duplicate ratio is considered duplicate content.

Test 4

The setup for test #4 was very similar to test #3, as we again wanted to see whether a 33/66% unique to duplicate ratio is treated as duplicate content. However, this time we wanted to test whether the results would stay the same if the unique word count was increased to 400. The content was again considered duplicate.

Test 5

The goal of the final test was to determine whether a page at a 40/60 ratio of unique to duplicate content (300 duplicate words) would be seen as unique. The results showed that this ratio of unique to duplicate content is still considered duplicate content.

Final Takeaway

The bottom line - it seems there needs to be at least a 50/50 ratio of unique to duplicate content for a page to be determined unique. Regardless of how many words are used (e.g., 100 unique words and 100 duplicate words are considered unique content, whereas 400 unique words and 800 duplicate words are considered duplicate), the ratio appears to be the deciding factor.

This is a pretty exciting result: not only does it significantly lessen the burden of copywriting, but it also helps answer a question that nearly every client will ask at some point.

These test results are highly valuable for e-commerce sites. The common practice of adding more unique words to product pages to improve rankings can take this study into account.

Note: In this study we were using duplicate content in the body copy of text excluding page titles.

This experiment was run about a year ago. There has since been a more recent and exciting test on a very similar subject that we will publish very soon. So stay tuned!


Can You Beat an H2 with multiple H3s?

In one of our previous experiments, we tested whether it was possible to trade one keyword optimized H1 for multiple H2s also optimized with a test keyword. We came to the conclusion that no matter how many optimized H2s you add to the page, you cannot outrank a page that has a single optimized H1.

Despite those results, we were not ready to give up on the idea of a possible trade-off economy of on-page signals. So, we decided to alter the experiment by looking at 2 lesser signals - H2s and H3s. Are they valued the same by Google, or will an optimized H2 always beat an optimized H3?

In other words, can we beat an optimized H2 with multiple optimized H3s and thus prove that the economy of ranking signals works for lesser signals? Or is it time to close the book on the topic of trade-offs? Read on to find out the answer!

We set up the test like this..


Hypothesis: The page with two optimized H3s will outrank the pages with only one optimized H2

The test was set up in the same way as the H1 vs H2s test: five pages were created and indexed in the normal fashion. Two lines were created in lorem ipsum, both containing the target keyword - these lines would be used for H2, H3, and paragraph text. 

On the experiment (variable) page (page #3), both lines were added as H3s with lorem ipsum text optimized for keywords, while all other pages (1, 2, 4, 5) had one H2 and a regular paragraph sentence, all with the test keyword.
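
As a rough illustration of the setup (the keyword and filler copy here are stand-ins, not the actual test pages), the variable and control pages looked something like this:

    <!-- Variable page (originally ranked #3): both optimized lines set as H3s -->
    <body>
      <h3>Lorem ipsum dolor sit amet testkeyword consectetur</h3>
      <h3>Sed do eiusmod testkeyword tempor incididunt</h3>
      <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit...</p>
    </body>

    <!-- Control pages (ranked 1, 2, 4, 5): one optimized H2 plus a regular paragraph sentence -->
    <body>
      <h2>Lorem ipsum dolor sit amet testkeyword consectetur</h2>
      <p>Sed do eiusmod testkeyword tempor incididunt</p>
      <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit...</p>
    </body>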


May 31st, Initial SERP, showing the page with two H3s:

Similar to the previous experiment, immediately after  keywords were removed from H2s and added to the H3s, the variable page (previously ranked #3) dropped to the bottom. At this point, the goal was to continue adding keyword optimized H3s to the variable page until its ranking increased.

Page with two H3s:

Page with 1 H2 and paragraph text:

June 1st - SERP shows the variable page dropping to the bottom of the results:

Another H3 was added to the variable page:

The same line was added to the other pages as a regular text line:

After no movement, another H3 was added to the variable page and the same line added as regular text to the rest of the pages.

June 27th - SERP still shows no movement of the variable page:





Here is what we discovered

At the start of the experiment, the variable page containing two H3s dropped in rank below the rest of the pages that contained only one H2 and a regular paragraph line optimized for the keyword. In spite of additional H3s being added to the page, the variable page never moved.

This test was run three times, and the result stayed the same - it was impossible to beat one H2 with multiple H3s, just like it was impossible to beat the H1 with H2s no matter how many were added to the page.

Final Takeaway

Since the goal of the experiment was to determine once and for all whether the economy of ranking signals works in real life for lesser signals or not, the results of the tests helped to let the cat out of the bag. As with the H1 vs H2s test, we couldn’t add enough H3s to the page to overtake an H2. Moreover, by the end of the test, the pages were over optimized. 

Based on the results of this experiment, we can state with certainty that H2 and H3 are treated as separate signals in the algorithm. As such, it’s useless to try to trade a number of H2s or H3s for something else, especially something that is higher up the ladder in terms of signal strength.

When it comes to optimization, as with H2s, we recommend paying attention to the number of optimized H3s on your pages, as it appears that Google does reward them. But don't look to adding H3s to a page with the goal of gaining an edge over your competitors with this signal alone, especially if you are in a competitive niche.

First, look at your competitors and analyze the number of H2s and H3s they are averaging. This is probably the range that you want on your page in order to provide what Google is looking for. Luckily, PageOptimizer Pro does exactly that - by comparing competitor pages, PageOptimizer Pro can tell you with 100% certainty which keyword to put where.


Mythbusting: Keyword Placement in the Title Tag

How do you write a great title tag? If you take a look at the full explanation of title tags from the Moz SEO Learning Center, you will find that one of the recommendations for writing a good title tag is to put important keywords first. Based on Moz’s testing and experience, the author suggests that keywords placed closer to the beginning of the title may positively affect ranking.

Since the meta title tag is one of the major factors in helping search engines understand what a page is about, we decided to scientifically prove whether the position of the keyword affects rankings by placing the target keyword in various positions in the meta title. You might be astonished by what we discovered, so make sure to stick around till the end of the blog post!

We set up the test like this..

 Myth: Putting the target keyword first in your meta title gives you an SEO advantage

The goal was to find out whether the position of the keyword in the meta title gives an SEO edge. We set up three identical pages with the test keyword placed at the beginning, in the middle, and at the end of the title tag. In order to ensure the consistency of the results, we repeated the test using an additional keyword. 
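
For illustration, the three title tags were structured roughly like this, with "testkeyword" standing in for the actual (fake) test keyword and the surrounding words as filler:

    <!-- Page 1: keyword at the beginning of the title tag -->
    <title>Testkeyword tips, tricks and ideas for everyone</title>

    <!-- Page 2: keyword in the middle of the title tag -->
    <title>Tips and tricks for testkeyword ideas everyone loves</title>

    <!-- Page 3: keyword at the end of the title tag -->
    <title>Tips, tricks and ideas for everyone about testkeyword</title>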

[SERP checks on June 6th, June 15th, and June 27th - still no movement of the test pages]

Here is what we discovered

After running two tests with different target keywords, we discovered that the keyword placed at the beginning of the title tag never won. More specifically, the keyword placed at the end of the title won in both tests.

Keyword 1


Rank #1: Keyword at the end

Rank #2: Keyword in the middle

Rank #3: Keyword at the beginning

Keyword 2


Rank #1: Keyword at the end

Rank #2: Keyword at the beginning

Rank #3: Keyword in the middle

Final Takeaway

The myth of positioning the keyword first in your meta title for an SEO edge is busted: there is no SEO benefit to a particular keyword placement in the title tag. It's also possible, though unlikely, that putting the keyword at the beginning might be a negative factor. 

The test results showed that the keyword placed at the end of the meta title ranked first. In our opinion, the main takeaway is that the position of the keyword in the title tag doesn’t really matter. However, it’s critically important to generally have your target keyword appear somewhere within the meta title tag.


Can You Beat an H1 with multiple H2s?

It’s pretty obvious that an H1 tag is a stronger ranking signal than an H2, right? This was proven in one of our previous experiments, where we combined on page elements into 4 groups representing the importance of keyword presence. H1 tags, meta title, body content and URL were confirmed to be the top weighted on page elements for keyword placement.

But what if your client decides not to use an H1 for design reasons? Is it possible to trade one ranking factor for another, or, in this case, outrank one H1 with several H2s? In other words, how many H2 sub headers do you need to surpass the “strength” of an H1? We ran several tests to find the answer and we can’t wait to share it with you!

We set up the test like this..


The goal was to find out whether it’s possible to trade multiple H2s for an H1. And, if so, to determine the sweet spot, or optimal number of H2s.

We set up five test pages and ran the experiment three times. For the test setup, we created two lines of lorem ipsum text, both containing the target keyword - we would use those lines in H1, H2 and paragraph text. The page ranked #3 received two H2s with lorem ipsum text optimized for keywords. The pages ranked 1, 2, 4, and 5 each received an H1 and one extra line of paragraph text, all with the test keyword. In every new round of the test, we added one H2 with the target keyword to the page that originally ranked #3, adding the same line as a paragraph to the pages that originally ranked 1, 2, 4, and 5. 


Hypothesis: A page with two optimized H2s will outrank a page with one optimized H1.

Immediately after the keywords were removed from the H1 and added to the H2s, the page dropped to the bottom. At this point, the main idea was to continue adding keyword optimized H2s until the page rose back up. 

Since we didn’t want to skew keyword density or word count, each time an H2 was added to the test page, an extra line of paragraph text (rather than another heading) was added to the control pages - this way, both keyword density and word count remained the same across all pages.
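
To picture the two templates (again with a stand-in keyword and filler copy rather than the real test pages), the variable and control pages started out roughly like this:

    <!-- Variable page (originally ranked #3): the H1 swapped for two optimized H2s -->
    <body>
      <h2>Lorem ipsum testkeyword dolor sit amet</h2>
      <h2>Consectetur testkeyword adipiscing elit</h2>
      <p>Filler lorem ipsum paragraph text...</p>
    </body>

    <!-- Control pages (ranked 1, 2, 4, 5): one optimized H1 plus the second line as paragraph text -->
    <body>
      <h1>Lorem ipsum testkeyword dolor sit amet</h1>
      <p>Consectetur testkeyword adipiscing elit</p>
      <p>Filler lorem ipsum paragraph text...</p>
    </body>

Each round after that, the variable page gained one more optimized H2 line while the control pages gained the same line as a plain paragraph, keeping word count and keyword density level across all five pages.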

Initial SERP, showing the page with two H2s:

WordPress dashboard of page with two H2s:

WordPress dashboard of pages with an H1:

May 31st

SERP shows that the page with two H2s dropped to rank #5:

June 1st

Added an optimized H2 to the test page:

Also added the same line as regular paragraph text to the rest of the pages:

June 6th

SERP result shows the test page remained at rank #5.

Another H2 was added to the test page:

The same line was added as regular text to the other pages:

June 15th

The test page remained at the #5 rank.

...

After no movement, an additional line was added in the same pattern as before - 6 H2s:

H1 with 5 lines as regular text:

June 27th

Still no movement of the test page.

Here is what we discovered

As we continued adding H2s to the page, the test page never rose. It actually ended up stuck in the number 5 spot through the end of the testing period. We got to the point where we had 6 H2s with the test keyword on the page and it simply would not move up. 

At this point, there were so many H2s with the keyword as a strong ranking signal, that we ended up over-optimizing the page even before we could overtake the H1. After running this test three times, the results stayed the same - it was impossible to beat the H1 with H2s no matter how many were added to the page.

Final Takeaway

The goal of this experiment was to prove whether the economy of ranking signals works in real life and whether Google approves of the trade-off in the first place. Based on the consistent results of three tests, we came to the conclusion that no matter how many keyword optimized H2s are added to the page, it’s not possible to outrank a page with the keyword in the H1. In a worst-case scenario, you can even end up over-optimizing your page. You can’t equal the value of a stronger signal with any number of a lesser signal.

This experiment showed that a page can be both under and over-optimized. Fortunately, with PageOptimizer Pro you know exactly which keyword signals to use and where exactly to put them.


Mythbusting: LSI or Keywords?

When Google reads website content is it looking for target keyword placement, or is it looking for LSI (Latent Semantic Indexing) terms that match the search intent? The prevailing idea from most SEO professionals is that Google is getting smarter and first looking for the latter - content on pages that matches search intent. We conducted an experiment to see what happens if you compare a single target keyword against a page with the target keyword and LSI keyword variations. The result of this experiment is a game changer!

Make sure to read the post till the end as we have an awesome announcement to share with you!

We set up the test like this..


Based on the observation that ranking pages do not repeat the keyword over and over in the body copy, we made the assumption that a page with LSI terms would outrank a page with a higher keyword density.


Latent Semantic Indexing - words that would naturally come up in a conversation about a particular topic. For example, if you are having a conversation about a kitchen, LSI terms would be: sink, refrigerator, pantry, etc.

Since LSI terms are words that would naturally come up in a conversation, we had to depart from our normal “Lorem Ipsum” pages and use an actual term that would produce usable LSI. To do this, we found a keyword phrase that had only a few ranking pages but could produce LSI words.

The keyword chosen was for a local service in a remote area and, when searched in quotes, returned just 4 results. Both articles were published as public Google Docs. The only difference between the articles was that instead of repeating the keyword “house demolition”, the second article used the keyword one time and then LSI terms in the rest of the copy. The LSI terms were determined by searching “house demolition” in the Keyword Planner and picking appropriate variations.

Here is what we discovered

After running the test, we discovered that a page that uses only the target keyword will beat a page that uses the keyword once and LSI variations the rest of the time.

Final Takeaway

Last week's episode of SEO Fight Club released the studies behind LSI, which proved Latent Semantic Indexing to be a ranking factor. However, this does not undermine the importance of getting the target keyword on the page. You might have heard many SEO experts talk about getting away from using keywords and just using LSI because of Hummingbird or RankBrain. But you need to help Google understand what the page is about in the first place, and Google does that by looking at the on page optimization signals containing your target keyword.



To conclude, don’t put the cart before the horse: optimize for keywords first and then add LSI terms to your pages. However, don’t get carried away while optimizing for keywords - putting as many of them on the page as humanly possible (keyword stuffing) can lead to a search penalty. Stressing the importance of good quality content, Google’s own Matt Cutts warned webmasters about keyword stuffing: “...all those people doing, for lack of a better word, over optimization or overly SEO – versus those making great content and a great site. We are trying to make GoogleBot smarter, make our relevance better, and we are also looking for those who abuse it, like too many keywords on a page...” 

The good news is that PageOptimizer Pro can tell you exactly which keyword signals to use, where, and at what frequency on your page. 

Announcement: New LSI Function in PageOptimizer Pro

We are excited to share that POP now has LSI built into the tool! In addition to providing LSI terms, POP also gives suggestions on which ones to use, where, and how many times.  

 

POP starts off with a wide set of possible terms and then runs them through different filtering processes to get down to terms where LSI (Latent Semantic Indexing), NLP (Natural Language Processing), and TF-IDF (Term Frequency - Inverse Document Frequency) are in relative agreement. The tool then calculates a “weight” or importance score - the number in the screenshot that is less than 1. The closer to 1, the more important the term. Lastly, the tool counts the number of times you have used the term and compares that to the average usage.


The goal is to show how many LSI words you are using right now versus how many times you might want to use them. Try it for yourself!


Is Schema Markup a Ranking Factor?

Most SEOs are already familiar with Google’s stance on schema: they are notoriously quiet on the subject. In January 2018, Roger Montti published a popular article in Search Engine Journal in which he states that Schema Markup is currently not a ranking factor. To our knowledge, Google has not responded to this statement. Schema Markup is supposed to help Google understand what a page is about and then categorise it. But is Schema Markup actually a ranking factor in Google’s algo? We know the answer…

We set up the test like this..

Hypothesis

A page with schema markup outranks a page without schema markup

  • Number of test pages: 2
  • 400 word articles
  • 2% keyword density
  • Experiment page had product and offer schema at the top of the body tag; the control page did not (see the example markup after this list)
  • Both published 9th May
  • Both submitted to Google at the same time
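
For illustration, product and offer schema of the kind placed at the top of the experiment page's body tag looks roughly like this (the product name, URL, and price are made-up placeholders, not the actual test values):

    <!-- Hypothetical product + offer schema placed at the top of the body tag -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Example Test Product",
      "description": "Placeholder description of the test product.",
      "offers": {
        "@type": "Offer",
        "url": "https://example.com/test-product",
        "price": "10.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock"
      }
    }
    </script>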

Here is what we discovered

In both tests, only the variable/experiment page (the one with schema) was indexed. The pages without schema did not appear in the results, even though they had been submitted to Google. Therefore, both tests confirm the hypothesis that a page with schema markup outranks a page without schema markup.

Final Takeaway

The bottom line - because Schema Markup is indeed a ranking factor, we absolutely recommend using it to your advantage. As an added bonus, Schema Markup is not difficult to implement. Insider tip: we use the JSON-LD Schema Markup Generator - it is easy to use and does not require any coding skills. Now it’s time to take action!

Schema Markup Warning: 


  1. The structured data should be consistent with the content on the webpage. If the information on the webpage doesn’t match the schema, it’s likely that your website will be penalized by Google. The same applies to information that is not visible to readers of the page.
  2. Don’t use aggregate rating schema across all pages, otherwise Google will assume that all your pages have been rated equally. For example, adding your overall business score to product pages would be misleading, as products have their own review scores. We suggest keeping your overall score within organization schema on the home page only (see the example markup after this list).
  3. In October 2016, Google updated their guidelines to state that you shouldn’t use third-party reviews within your LocalBusiness schema. Therefore, we recommend that you play it safe and review Google’s guidelines and policies for structured data.
  4. Before publishing your Schema Markup, make sure to test it in the Google Structured Data Testing Tool. Addressing the errors and warnings accordingly will definitely reduce the risk of having your website penalized.
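
As an example of point 2, an overall business score kept within organization schema on the home page only might look something like this (the business name, URL, and rating numbers are placeholders):

    <!-- Hypothetical Organization schema with an aggregate rating, used on the home page only -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Organization",
      "name": "Example Business",
      "url": "https://example.com/",
      "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.7",
        "reviewCount": "132"
      }
    }
    </script>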


Mythbusting: Optimizing Meta Titles for Clicks

It’s common SEO knowledge that meta titles are a critical ranking element. In one of Rand Fishkin’s Whiteboard Fridays back in 2016, he argued that a meta title optimized for clicks rather than keywords will have a higher CTR and therefore be a stronger ranking signal than a well keyword-optimized meta title.

Since we have already introduced the meta title as the number one signal for on-page optimization, we decided to conduct our own SEO experiment to bust or confirm this myth. Read on to find the answer!

Since 98% of people never go to the second page of Google, the experiment is only valid under one essential condition: there has to be a realistic chance that someone will click on the test page. That means the page must already be on page 1; otherwise, CTR would be too low for this to be a valid experiment.


If you haven't watched Rand Fishkin's video about 8 SEO practices that are no longer effective, or you just want to refresh your memory before moving on to the experiment and its astonishing results, this is your chance to do it:

We set up the test like this..

We set up two pages with the exact meta titles Rand used in his example (to keep the same structure), with one difference: the word “pipe” was substituted with a unique test keyword. The searches that we were interested in replicating were the searches for “pipes” and “wooden pipes”.

  "A meta title that is optimized for a click will beat a meta title optimized for keywords"

   

  Rand's Hypothesis

Here is what we discovered


The meta title that was optimized for keywords won both the simulated searches of “pipes” and “wooden pipes”. 

Final Takeaway

The myth is BUSTED: a meta title optimized for target keywords will beat a meta title that isn’t. Recognizing the importance of both factors, we would argue that keyword optimisation in the title tag is a stronger ranking signal than CTR, so you really need to get your keyword into the meta title for ranking purposes first. Why? If you are not ranking well, you will never get the click.

For more information on the top ten on page ranking factors from best to worst, see the next post below.

Use a tool such as PageOptimizer Pro to know how well your page is optimized for your keyword and which changes you can implement to improve your rankings.


The TOP 10 on page factors from top to bottom


It’s common SEO knowledge that there are roughly 12 different elements where a keyword can be placed on a web page - from your URL to body copy to title tags, the list goes on. However, where to place your keywords, and in what quantity, in order for the page to perform best in search engines is the question on every SEO’s mind.

So, to find out once and for all how Google weights these various on page elements, we set up a test...


We set up the test like this..

In order to find out the exact places to put a site's keywords, we created pages with the keyword in each area and then watched Google rank them from 1 to 10, thereby scientifically determining the on page factors from best to worst. The factors tested included URL placement, meta title, meta description, meta keyword, body copy at 2% density, H1, H2, H3, H4, the keyword as a bold word, as an italic word, and in image alt text.
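
For context, here is a rough sketch of the placements that were tested, with "testkeyword" standing in for the real test keywords. (In the actual experiment each placement had its own page; they are shown together here only to illustrate where each element lives.)

    <!-- URL placement: https://example.com/testkeyword -->
    <head>
      <title>Testkeyword</title>                               <!-- meta title -->
      <meta name="description" content="About testkeyword">    <!-- meta description -->
      <meta name="keywords" content="testkeyword">              <!-- meta keyword -->
    </head>
    <body>
      <h1>Testkeyword</h1>
      <h2>Testkeyword</h2>
      <h3>Testkeyword</h3>
      <h4>Testkeyword</h4>
      <p>Body copy mentioning testkeyword at roughly 2% density...</p>
      <p><b>testkeyword</b> and <i>testkeyword</i></p>          <!-- bold and italic -->
      <img src="photo.jpg" alt="testkeyword">                   <!-- image alt -->
    </body>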

Here is what we discovered

Based on the test results, we ranked the on page factors from best to worst and combined them into 4 groups representing the importance of keyword presence. 

Group A

Group A consists of Meta title, body content, URL and H1, as the tests confirmed that these are the top weighted on page elements for keyword placement. The meta title proved to be the undisputed highest weighted signal in on page SEO. Therefore, it’s critical that meta titles are unique for each target page and contain the target keyword. A signal many people miss is keyword placement in the URL: if you are building a new page, make sure you get it in there. If it's an existing page with Page Authority, then don't go and change the URL - just keep it in mind for next time.

Group B

The on-page factors that fall into group B include H2, H3, H4 and anchor text. A really important insight came from the H1 and H2 test pages. On those pages, Google ignored the meta title that was used and instead chose to display what it felt was more important for the particular search. We have known for some time that Google will sometimes ignore tagged meta descriptions and use what it wants, but now we have insight into the places Google will look for the title to display in SERPs. First it will look to the meta title you have written, then it will look to the H1 and H2 signals. Therefore, if you really want a particular title to display in Google, put it in your H1 and something similar in your H2. 

Group C

Bold, italic and image alt fall into group C. The keyword in bold ranked well, and it is not unusual for this traditionally secondary signal to land among the top 5 factors. We would not suggest putting much bold text on a target page, as secondary factors jump up and move around quite often. Interestingly, image alt ranked last as a weighted on page factor.

Group D

The last group is represented by schema, HTML tags, and Open Graph. Surprisingly, neither the meta description nor the meta keyword test page indexed at all.

Final takeaway

Knowing which keyword signals Google considers more and less important is empowering knowledge for SEOs. What’s even better is being told exactly which keyword signals to use where and in what frequency on your page. Page Optimizer Pro does exactly this. We invite you to give it a go and get 3 reports for free!

*Get 3 free reports. No credit card required.