Optimising rankings can be one of the toughest disciplines in digital marketing. Balancing keyword targeting, perfecting metadata and sourcing quality backlinks can be extremely draining on both time and resources. So naturally, no one wants to think their website build might be costing them some of that hard work. But quite often, that’s exactly the case. Five common website issues in particular can have massive repercussions on a site’s SEO.
1. Broken Links
Broken links are essentially links on a website that don’t lead anywhere. Perhaps a page was retitled ‘Contact Us’ rather than ‘Contact’, and the URL was changed to suit. But in the process, the footer link wasn’t updated to reflect the new URL. Now, the previous ‘/contact’ links lead to nowhere other than an ‘Error 404’ message.
Broken links can be both internal and external. Any link pointing to another website that doesn’t resolve is still ‘broken’.
From Google’s perspective, this is the sign of a dodgy website.
Google sends its crawler, Googlebot, through every website. The bot jumps aboard every link it finds, riding it to its destination – like a vagabond who catches a different train from Central Station each day, just to see where they all go. And if a single train leads nowhere, Google forms a bad opinion of that website. Naturally, Google is hardly going to enhance the rankings of a website full of dead ends.
If the website is too large to consistently check manually for broken links, it may be worth investing in an SEO auditing tool such as Ahrefs. It crawls websites as Google would and provides a complete “outgoing broken link” report, making a site’s broken links easy to repair.
The free (but less user-friendly) alternative would be a Chrome plug-in such as Check My Links. Simply open the plug-in on each individual page and it will flag any broken links on that page. So if a website consists of 10 or fewer pages, the free alternative will more than suffice.
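To make the idea concrete, here’s a minimal Python sketch of the first step any link checker performs – harvesting every href on a page. The page markup and URLs below are made up for illustration; a real checker would then request each URL and flag the ones that return a 404.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects every href found in <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

# Hypothetical page: the nav was updated to '/contact-us',
# but the footer still points at the retired '/contact' URL.
page = """
<html><body>
  <a href="/contact-us">Contact Us</a>
  <footer><a href="/contact">Contact</a></footer>
</body></html>
"""
print(extract_links(page))  # ['/contact-us', '/contact']
```

From there, a checker would send a request to each collected URL and report any that come back ‘404 Not Found’ – which is the work tools like Ahrefs and Check My Links do for you at scale.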
2. Orphan Pages
Any page that exists on a website but has absolutely no links pointing to it is deemed an orphan page. The key to these pages is that Google can still find them (via the sitemap or external links), but a link-following crawler such as Botify won’t.
There are classically two kinds of orphan pages – the expected and the unexpected. The expected orphans typically come from:
- Pages linked externally – Particular landing pages that are linked externally are viewed as ‘orphans’, but naturally, have a strong place in the site’s digital performance.
- Ex-errors – Perhaps Google crawled and reported a few errors that were recently corrected – it’s certainly not a smudge on a site’s reputation.
- Expired pages – If pages with a short-term life are left as orphans, it’s not particularly bad for SEO. Just ensure that not too many pages are left orphaned, or invest in rotating pages a little more effectively.
Otherwise, there are countless error-related reasons why orphan pages might exist.
Similarly, there are two separate reasons why orphan pages might be harming the site’s SEO. Firstly, if they aren’t internally linked, those pages will struggle to rank. Any investment in having that page and its content rank can be completely undermined if the page is orphaned.
On the other hand, orphan pages also consume more of the site’s ‘crawl budget’. The crawl budget allocated to any site is Google’s amalgamation of:
- The site’s crawl rate limit – How much Google can crawl without occupying too much of the server.
- Google’s crawl demand – How much Google wants to crawl the website, perhaps because of popularity or ‘freshness’ of content.
Orphan pages certainly aren’t simple to crawl, and by soaking up the site’s budget on these pages, Google can’t fully understand the map and intent of the rest of the website. And without that, it won’t return good rankings for the site.
Again, a paid SEO tool is the best solution to resolving orphan pages. SEMrush has a Site Audit tool that can quickly and simply identify orphaned pages on a website – making it simple to update content and links as necessary.
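Under the hood, finding orphans is a comparison between two lists: the pages a site says it has (its sitemap) and the pages its internal links actually reach. A minimal Python sketch, with made-up URLs for illustration:

```python
def find_orphans(sitemap_urls, linked_urls):
    """Pages listed in the sitemap that no internal link points to."""
    return sorted(set(sitemap_urls) - set(linked_urls))

# Hypothetical site: '/old-promo' is in the sitemap,
# but nothing on the site links to it any more.
sitemap = {"/", "/about", "/contact", "/old-promo"}
linked = {"/", "/about", "/contact"}

print(find_orphans(sitemap, linked))  # ['/old-promo']
```

An audit tool like SEMrush effectively automates both halves of this: building the linked-URL set by crawling, then diffing it against the sitemap.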
3. Incorrect Use of Canonical Tags
Canonical tags can be one of the most useful tools in any SEO manager’s arsenal. Occasionally, pages or sites will carry essentially the same content – such as the HTTP and HTTPS versions of a page, or URLs listed with session IDs and those without. Sometimes, a page can be served from two completely different URLs, depending on how the user is supposed to arrive at it.
There’s definitely no damage caused by the correct use of these tags. But tagging pages incorrectly can undo plenty of hard SEO work. Firstly, Google might not rank the preferred page. Or even worse – an incorrect canonical tag might point to the wrong page, or to no page at all. And perhaps the biggest weight on rankings: pointing to an external website.
Firstly, check if any duplicate content can be avoided. If it can, that should instantly remove the need for canonical tags.
Otherwise, the website audit tool in SEOprofiler can assist in identifying canonical tags that require attention.
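As an illustration of what such an audit checks, here’s a simplified Python sketch that pulls the `rel="canonical"` tag out of a page and flags the worst case above – a canonical pointing at an external domain. The URLs and markup are hypothetical, and this ignores real-world wrinkles such as multi-valued `rel` attributes.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class CanonicalFinder(HTMLParser):
    """Records the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        if tag == "link":
            d = dict(attrs)
            if d.get("rel") == "canonical" and self.canonical is None:
                self.canonical = d.get("href")

def check_canonical(page_url, html):
    finder = CanonicalFinder()
    finder.feed(html)
    if finder.canonical is None:
        return "no canonical tag"
    # A relative href ('' netloc) or the page's own domain is fine;
    # anything else hands the page's ranking signals to another site.
    if urlparse(finder.canonical).netloc not in ("", urlparse(page_url).netloc):
        return "canonical points to an external site"
    return "ok"

html = '<head><link rel="canonical" href="https://other.com/page"></head>'
print(check_canonical("https://example.com/page", html))
# canonical points to an external site
```

A full audit tool runs this kind of check across every page, and also verifies that the canonical target actually resolves and is the page you intended to rank.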
4. Duplicate Content
To be clear, there is quite a substantial difference between duplicate content and copied content. The latter is definitely the one Google punishes more harshly.
Duplicate content is often completely legitimate, and rather hard to avoid. Consider a job website, with two almost identical positions listed for the same company. Although they’re duplicate content, they do at least have very minor differences. These will typically be treated fairly by Google.
Copied content is the more malicious act some websites will attempt. It’s more often than not a direct copy (either in its entirety or a partial copy) of the content on another domain. This is what will cause the website the most SEO harm.
Specifically, Google will recognise that content has been cloned. And to provide the best user experience, it definitely won’t list both for any related search. So cloning content is a sure-fire way to cost both pages rankings.
Systems such as Copyscape can help identify any copied content that might be out there. They also help ensure the content published on your site won’t be flagged as copied by Google.
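To give a feel for how near-identical two pages can be, here’s a toy Python sketch that scores text similarity by comparing three-word ‘shingles’ – a common textbook approach, not a claim about how Google or Copyscape actually work. The job-ad snippets are invented for illustration.

```python
def shingles(text, n=3):
    """All overlapping n-word sequences in a text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard similarity of the two texts' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# Two near-duplicate job listings, differing in one word.
original = "We are hiring a senior engineer to join our Sydney team"
near_copy = "We are hiring a senior engineer to join our Melbourne team"

print(round(similarity(original, near_copy), 2))  # 0.64
```

A score that high between pages on two different domains is exactly the sort of signal a plagiarism checker surfaces – and the sort of cloning that costs both pages their rankings.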
5. Site Loading Speed
The speed of a site can be affected by a number of elements. Typically, the most significant reasons why a site might be slow include:
- The website’s host
- The size of images
- The theme and widgets
- Bloated or duplicated code
- Other resource-hungry media and content.
Even a single piece of media (such as an ad or video) can turn a simple site into a slug.
Since 2010, Google has considered the speed of a website an integral factor in determining rankings. After all, a site that doesn’t respond quickly across mobile, tablet and desktop isn’t offering users the best experience. If a search comes from a device that won’t load the website quickly (such as a mobile), Google knows better than to serve that site to the user.
Google’s PageSpeed Insights tool is a terrific way to check how a site is performing. If the website doesn’t rate well, it may take some tinkering in the website’s setup to reduce the amount of media and processes required to load.
Make sure your websites are optimised for organic traffic with these other Marketing.com.au articles:
- Learn 8 Things You Should See in SEO-friendly Hosting
- 11 SEO Tips for Beginners to Boost Website Traffic
- 9 SEO Mistakes You Should Avoid to Ensure Better Ranking