Writing content for a website is hard. Not only does it need to be captivating for the audience you’re trying to reach, but it also has to be optimized and ready to jump out of the gate from an SEO perspective. As it tends to go, though, not all content performs well, and you may find that you’re sabotaging yourself with some of the work you’re doing (or not doing).
So, what do you do when you have a piece that doesn’t rank, get much traffic, or resonate? You recycle it, of course.
You put time (and maybe even money) into producing the content that’s on your site. Why let it go to waste when you could instead put it to use somewhere else in a more efficient manner?
Obligatory disclaimer about duplicate content
For those who aren’t in the know, duplicate content is bad. Google’s absolutely not a fan, and they make their stance clear:
Google tries hard to index and show pages with distinct information. This filtering means, for instance, that if your site has a “regular” and “printer” version of each article, and neither of these is blocked with a noindex meta tag, we’ll choose one of them to list. In the rare cases in which Google perceives that duplicate content may be shown with intent to manipulate our rankings and deceive our users, we’ll also make appropriate adjustments in the indexing and ranking of the sites involved. As a result, the ranking of the site may suffer, or the site might be removed entirely from the Google index, in which case it will no longer appear in search results.
In the realm of SEO, your content is going to be the heavy hitter on the field for you, and it’s important that each of the pages on your website is fully distinct and unique. Of course, smaller calls to action, disclaimers, and the like are typically considered safe, but beyond that, you may be walking a fine line.
Attorneys, in general, should be aware of this. Say you have a practice area page for car accidents and a different one for medical malpractice. If you’ve got only a paragraph differentiating the two pages, and the rest is filler information about why your firm is fantastic and how Avvo just loves you guys, you’re going to have a difficult time. Much like in the regular vs. printer comparison from Google above, you run the risk of Google thinking these pages are different versions of each other and choosing one over the other to show your audience. If your user is searching for medical malpractice attorneys and your car accident page comes up, they may be dissuaded from clicking because it’s not what they’re looking for.
Embrace the merge
One of the biggest weapons in your SEO arsenal is the page merge. Consider the following example from a client of ours, the Safe Birth Project.
Safe Birth Project had the following splintered pages, each based on the primary issue of meconium aspiration syndrome:
- Causes of meconium aspiration syndrome
- Symptoms of meconium aspiration syndrome
- Meconium aspiration syndrome treatment
- Meconium aspiration syndrome legal issues
- Questions to ask your doctor about meconium aspiration syndrome
- What to expect with meconium aspiration syndrome
This isn’t by any means unheard of, and in fact it’s a tactic you see widely across the internet. If a person is searching for symptoms of the issue, you want your symptoms page to show, and so on. We see this a lot with law firms in particular: how insurance handles car accidents, how to start a car accident case, qualities to look for in a car accident attorney, etc.
But brace yourself for this knowledge bomb: you can optimize for all of the searches under a particular subject umbrella just as well with one single authoritative page.
We took each of those splintered pages and merged them into a “mega page” on meconium aspiration syndrome, complete with in-page navigation and every bit of information our users could possibly need on the subject.
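In-page navigation like this is typically just a set of anchor links pointing at id attributes on the section headings. A minimal sketch (the section names mirror the old splintered pages; the actual markup on the live page may differ):

```html
<!-- Jump menu at the top of the mega page (illustrative markup) -->
<nav>
  <ul>
    <li><a href="#causes">Causes</a></li>
    <li><a href="#symptoms">Symptoms</a></li>
    <li><a href="#treatment">Treatment</a></li>
    <li><a href="#legal-issues">Legal issues</a></li>
  </ul>
</nav>

<!-- Each merged section gets a matching id for the anchor link to target -->
<h2 id="causes">Causes of meconium aspiration syndrome</h2>
<h2 id="symptoms">Symptoms of meconium aspiration syndrome</h2>
```

Clicking an anchor link scrolls the visitor straight to the matching section, which is what keeps a long mega page from feeling unwieldy.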
The results speak for themselves, looking at the five months leading up to the merge, compared to the five months after the merge:
158% growth in 5 months. The real kicker? It’s all organic. There were no paid promotions of this page of any sort, just simple organic search results.
This page is ranking on average in position one for more specific queries related to meconium aspiration syndrome (associated with our previous splintered subjects):
And ranks on page 1 of Google for just “meconium aspiration syndrome” by itself.
Not too bad for content that was already written and just rearranged, right?
But wait, wouldn’t that mean we now have duplicate content spread across the site? Well, yes and no…
Rel canonical tags can help avoid duplicate content issues
We maintained the original subject-separated pages on this particular site for two primary reasons:
- User experience – Sometimes people don’t want to have to deal with a huge page to find the information they want. We addressed this concern via the navigation element that breaks the mega page down into easier-to-reach chunks, but the point still stands.
- Avoiding redirects – Typically the process would be to take that content, move it elsewhere, set up a 301 redirect from each old page to the new one that supersedes it, and delete the old pages. We didn’t want to throw additional redirects onto the site for a variety of reasons.
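For reference, the redirect step in that typical workflow is usually a one-line server rule. A sketch in Apache .htaccess syntax, with hypothetical URLs (not the approach we took here):

```apache
# Permanently (301) redirect an old splintered page to the mega page
Redirect 301 /meconium-aspiration-symptoms /meconium-aspiration-syndrome
```

The 301 status code tells Google the move is permanent, so ranking signals from the old URL are passed along to the new one.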
So now we’ve fully admitted that we’ve got word-for-word content spread across a few different pages of the website, which should be triggering alarm bells all over the place. Thankfully, rel canonical tags can help shut them down.
When it comes to the meta tags you can apply to a page, two important ones come to mind:
Noindex: This tag prohibits Google from including the page in the Google index. It’s useful for campaign-specific landing pages, or pages that simply don’t have enough content on them to be worthwhile. This is not what we want in this instance.
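For the curious, noindex is a single meta tag in the page’s head (shown for illustration; it isn’t what we used here):

```html
<!-- Placed in the <head> of a page you want kept out of Google's index -->
<meta name="robots" content="noindex">
```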
Rel=Canonical: This is a tag you can put on a page to essentially say that the authoritative version of that page is located elsewhere, and that Google should give preferential treatment to the version flagged as canonical. This is what we’re looking for.
In our example, we set a rel=canonical tag on each of the subdivided pages, pointing to the mega page as the canonical version Google should focus on. By doing this, we can keep the content in different places on the site (per the rationale above) and maintain the mega page as the authoritative version without running into duplicate content issues. Win-win.
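In markup terms, that amounts to one link tag in the head of each subdivided page, pointing at the mega page. A sketch with a hypothetical domain and URLs:

```html
<!-- Placed in the <head> of each subject-specific page, e.g. the symptoms page -->
<link rel="canonical" href="https://example.com/meconium-aspiration-syndrome/">
```

Each splintered page carries the same href, so all of the duplicate-content signals consolidate onto the single mega page.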
Finishing the process with resubmission and annotation
We’re data people here, and with that can sometimes counterintuitively come impatience. We’ll be the first to admit that we want to see data immediately, and we want our results to come just as fast — who doesn’t?
For this reason, finishing the process of a page merge involves two final steps: resubmission and annotation.
Resubmission is relatively straightforward: by manually submitting the newly merged page to the Google index, you ensure that Google has the latest version on hand immediately. Yes, websites are crawled fairly often anyway, but in a field filled with variables, having something within your control is always a good thing.
The process of resubmission is as simple as it gets. Open up Google Search Console (or Webmaster Tools; same tool, just a legacy name at this point) and find your way to the Fetch as Google section in the sidebar:
From there you drop in the URL of the page you’d like to have crawled, click submit, and then click to resubmit it to the index. Done!
Annotation is similarly easy, but it’s a vital step for the data-oriented. An annotation in Google Analytics is essentially just a note that’s visible to everyone who has access to the account.
Our process is pretty standard: drop the URL you resubmitted and the date you resubmitted it into a note on the analytics account, then save it. This quick note lets you easily check back in the future on the results of a particular effort, with a recorded date for when it was done.
Duplicate content is bad; accessible data is good.
While we’d like to say that we accomplished this all just by some simple copying and pasting, the process as a whole is one that takes some finesse. Not all pages are viable candidates for merges, but if the subject matter is similar enough that it could reasonably be lumped under the same umbrella while ultimately providing more use for the user, you’ve got no reason not to push forward and give it a shot.
Not sure which pages on your site may be good candidates for merges and optimizations? Drop us a line to get in touch for a free content audit of your site.
Taylor is a digital strategist at JSO Digital. He graduated from Millersville University, and currently resides in rural Pennsylvania.