Duplicate Content – Five Common Myths Debunked

23 August 2021 / by Archie Williamson / 5 mins

As the name suggests, duplicate content refers to any content that appears in more than one place on the internet, each instance with its own unique website address (URL). Multiple copies of the same content can affect your search rankings, as signals and ranking power are spread across the duplicates.

According to Google’s own guidelines: “Duplicate content generally refers to substantive blocks of content within or across domains that either completely match other content or are appreciably similar. Mostly, this is not deceptive in origin.”

A study by Raven Tools found that around 30% of pages on the web contain duplicate content. Google says common examples include product pages on an online store that can be reached via multiple distinct URLs, and printer-only versions of web pages. Other causes, illustrated with example URLs after this list, include:

  • HTTP and HTTPS versions of a page
  • Index pages
  • Session IDs
  • Scraped content
  • Country-specific pages
  • URLs with and without trailing slashes
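
To make this concrete, all of the following addresses could serve exactly the same page yet be treated as separate URLs by a search engine (example.com and the paths are placeholders):

    http://example.com/shoes
    https://example.com/shoes
    https://example.com/shoes/
    https://example.com/shoes?sessionid=12345
    https://example.com/shoes/print/

Unless you tell it otherwise, Google has to work out on its own which of these variants is the one that should rank.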

Deceptive practices occur when content is “deliberately duplicated” with the objective of manipulating search engine rankings or increasing clicks. When this happens, Google can remove the site from its index entirely. However, if you are sincere in your efforts to craft content, this will not happen to you.

This is why focusing on writing high-quality content is crucial in the fight to prevent copied and unoriginal content from undermining your SEO strategy. By outsourcing creation to a reputable agency, you can be sure that everything you produce is fresh, original, and authoritative.

Myth #1 – There is a ranking penalty for duplicate content

There is a myth that duplicate content leads to a ranking penalty, but what actually happens is that ranking signals are diluted across the duplicates. Search engines only want to show one version of an article or product page, and in Google’s case an algorithm chooses the best page to display in SERPs. This reduces the visibility of the duplicates, and the version that loses out could be the original piece.

The same is true for link equity, as inbound links can point to multiple pages rather than a single one. Because links are a ranking signal, this can hamper the visibility of content in search. Duplicates in this case do undermine SEO, but Google will only penalise sites when there is intent to manipulate.

Myth #2 – Blocking access to duplicate pages is the answer

While blocking access to duplicate pages may appear to be a common-sense solution to this problem, Google does not recommend it. A robots.txt rule stops Google from crawling those pages, which means it can no longer detect that they duplicate one another; Google says it will therefore “treat them as separate, unique pages”, which can exacerbate existing issues.
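
As an illustration, the kind of robots.txt rule Google is warning against might look like this (the /print/ path is a hypothetical duplicate section):

    # robots.txt – blocking a duplicate section (not recommended)
    User-agent: *
    Disallow: /print/

Because crawlers can no longer fetch anything under /print/, Google cannot see that those pages match their originals, so no signals get consolidated.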

The best thing to do is use canonical tags to inform search engines that a specific page is the master copy. “Canonical” is basically a coding term for “official version”. Once the tag is in place, Google can consolidate the signals from each of the duplicates into the version you want to be shown.
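
A minimal sketch of the tag, assuming the master copy lives at https://www.example.com/blue-widgets (a placeholder URL); it goes in the <head> of every duplicate version of the page:

    <head>
      <!-- Points search engines at the official version of this content -->
      <link rel="canonical" href="https://www.example.com/blue-widgets" />
    </head>

A self-referencing canonical tag on the master copy itself is also considered good practice, as it leaves no room for ambiguity.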

Myth #3 – Syndicated content is similar to duplicate content

Content syndication is completely different to duplicate content. Syndication is the process of republishing articles, blogs and videos on third-party sites to improve their reach and increase exposure. After a content writer has crafted a high-quality piece, it is normal for it to be published on other legitimate news sites – sometimes with additional commentary and analysis – and with a link or credit to the original content.

The only time republished content can be deemed malicious is when “scrapers” copy entire posts without crediting the original piece. However, SEO expert Neil Patel says scrapers “don’t help or hurt you”, as the copied content will not confuse Google and simply “just isn’t relevant”.

If your site isn’t the one doing the scraping, you have nothing to worry about. You can, however, improve the syndication process by making sure third-party websites add a link to your original content and by requesting the use of a noindex tag, so that Google only shows your original version in search.
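
A minimal sketch of the tag you would ask a syndication partner to place in the <head> of their republished copy:

    <!-- Asks search engines not to index this syndicated copy -->
    <meta name="robots" content="noindex">

With this in place, readers on the partner site still see the article, but only your original version competes in search results.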

Myth #4 – Translated copy cannot be a duplicate

Basic translations are another potential cause of duplicate content. Google is capable of identifying content that has been translated verbatim into another language with rudimentary tools like Google Translate. This again highlights the importance of creating content that is tailored specifically for another language, taking cultural and linguistic differences into account.

Where you do wish to carry your content over into multiple languages, ensure that translations are carried out by a native speaker of the target language. This will not only ensure better quality but also reduce the risk of like-for-like translations that may catch Google’s attention.

Myth #5 – Duplicate content only occurs when text is copied

There is a common misconception that duplicate content only refers to the text on a page being copied, but this is not the case. It also occurs when a page is accessible via several different URLs. As noted earlier, Google has trouble making sense of duplicate URLs and must choose one to present in SERPs. URL duplication is usually a result of technical mishaps and structural issues within a site.

To avoid issues in the future, Google recommends using 301 redirects and canonical tags, being consistent with internal linking, understanding how your content management system works, and minimising the use of similar content. It adds: “If you have many pages that are similar, consider expanding each page or consolidating the pages into one.” If you offer services in multiple cities, for example, rather than repeating the same flavour text on an individual page for each city, keep that shared text on one page with links pointing out to the city pages, and reserve those unique landing pages for genuinely city-specific information.
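
As a concrete example of the first recommendation, here is a minimal sketch of a site-wide HTTP-to-HTTPS 301 redirect, assuming an Apache server with mod_rewrite enabled and an .htaccess file (adapt the approach for nginx or other servers):

    RewriteEngine On
    # Permanently redirect every HTTP request to its HTTPS equivalent,
    # so only one version of each URL accumulates ranking signals
    RewriteCond %{HTTPS} off
    RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]

The same technique can be used to collapse trailing-slash and non-trailing-slash variants into a single preferred URL.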

While most duplicate content is not malicious, it can affect SEO in several ways. When content is not duplicated, Google knows which version to index and rank, while link power, trust and authority are attributed to a single piece. The opposite is true when multiple versions of the same content exist.

That’s why it is crucial to work with an SEO agency that can help you publish unique content tailored for search while conducting a free website audit to identify areas for improvement. Contact us today to talk about our full range of services, which include content creation and blog management.
