Ready to turn rankings into revenue? Discover everything you need to know about Search Engine Optimization (SEO).
Now that you know how content can turbocharge your SEO success, it’s time to learn the other side of SEO: the technical side.
In this chapter, we’ll reveal the essential technical SEO tips to ensure your website is up to scratch and is rewarded in search results.
Want to have the edge on your competitors and outrank them in search?
Here’s your secret weapon…
Search engines prefer websites that meet a certain set of technical requirements, including a secure connection, structured data, sitemaps and site architecture, canonical tags, and more. All of these play a crucial role in your website’s crawlability, indexability, and where you come out in search results.
If you master these technical elements, you’ll be rewarded by Google with higher rankings, more traffic, and, ultimately, more revenue.
But with a ton of jargon out there, it’s no surprise that most business owners overlook this part and focus on things like SEO content, link building, and keyword research instead.
If you can nail technical SEO, you instantly get a one-up over other websites vying for the number one spot.
So how can you separate the HTTPS from the SSL and XML, and use technical SEO to come out on top?
That’s exactly what we’re here to help you do.
In this post, we’ll take you through everything you need to know to get your website’s technical side up to scratch, including:
What technical SEO is and why it matters
The key elements of a technical SEO audit (based on the checklist that we use for our very own SEO clients)
13 of the most common technical SEO issues out there (and how to fix them).
Let’s get started.
Before jumping into common technical SEO issues, let’s take a look at all of the different elements of SEO and how they play together into a broader strategy.
This is a good way to understand exactly HOW technical SEO differs from on-page and off-page SEO.
On-page SEO involves optimizing the structure and content on your website so search engines know exactly what your page is about. It’s the act of optimizing everything ON your website for search engine algorithms.
This includes elements such as your headings, internal links, image alt text, site speed and page load times, author information, mobile-friendly design, and more.
On the flipside, off-page optimization includes all the elements you do OFF your website to help it rank higher in search.
Link building is generally the most popular off-page tactic. However, off-page SEO also includes actions like your brand’s social media presence, linked or unlinked brand mentions, guest blogging, online PR, and influencer marketing.
Technical SEO is an optimization of your website and web server that helps search engine bots crawl and index your site more effectively.
While this type of SEO primarily involves actions taken ON your website, it’s considered separate from on-page SEO because of its technical nature.
This is because most technical SEO involves diving into the code of your website, in order to update things at the most fundamental level.
It’s essentially the ‘back end’ component of SEO. None of your customers see it, but it’s incredibly, INCREDIBLY important if you want to rank.
Which takes us to our next point...
As we touched on just now, technical SEO is the process of ensuring your website meets all of the technical requirements of search engines like Google.
But what does that really involve?
The best way to think of technical optimization for search engines is in terms of a hierarchy of needs, like this handy pyramid illustrates:
Image source: Search Engine Land
All of these technical elements play a role in where your website ranks in search engine results. However, the most important are the two foundational parts at the base of the pyramid: crawlability and indexability.
Crawlability refers to a search engine’s ability to access and ‘crawl’ all of the information on a page.
In Google’s own words:
The web is like an ever-growing library with billions of books and no central filing system. We use software known as web crawlers to discover publicly available webpages. Crawlers look at webpages and follow links on those pages, much like you would if you were browsing content on the web. They go from link to link and bring data about those webpages back to Google’s servers.
If your site has no crawlability issues, this means that the crawlers that search engines use can find every page on your website, much like a spider ‘crawling’ along different threads on a web:
Image source: Google
If, however, you have broken links or dead ends, then search engines won’t be able to access that content.
After this comes indexability.
Once a search engine crawls your website, it takes all of the content that it crawled and adds it to a central Search Index.
A search index is like the index at the back of a book. It’s a HUGE database containing hundreds of billions of web pages and all of the words within them.
When someone searches for a specific word, Google trawls through its index for those words and determines the best results to display, based on its search algorithm.
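To make the idea concrete, here’s a toy sketch in Python of an inverted index: a vastly simplified version of the word-to-page lookup a real search index performs. The pages and their text are made-up examples.

```python
# A toy illustration (NOT Google's actual implementation) of how a
# search index maps words to the pages that contain them: an
# "inverted index" built with plain Python dictionaries.
from collections import defaultdict

# Hypothetical pages and their (already-crawled) text content.
pages = {
    "/brownie-recipe": "easy fudgy brownie recipe",
    "/seo-guide": "technical seo guide for beginners",
    "/keyword-tips": "seo keyword research tips",
}

# Build the index: each word points to the set of pages containing it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

# A search looks up the word and returns every page that contains it.
print(sorted(index["seo"]))  # ['/keyword-tips', '/seo-guide']
```

The key point: only pages that made it into the index can ever come back as results, which is why indexability matters so much.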
If Google can’t index all of your pages, then your content won’t be in the Search Index — and, more importantly, it won’t appear in search results.
Technical SEO is important because it’s the foundation that your website is built upon.
Your technical SEO needs to be sound if you want to maximize your SEO strategy, improve your rankings on search engines, drive traffic, and convert that traffic into revenue.
You can create the best content in the world BUT if Google’s crawlers can’t find it, neither will your audience.
You can have backlinks and brand mentions from massive websites like MSNBC and HuffPost BUT if Google can’t crawl your website, those backlinks won’t pass on their link juice.
You can have the best keyword research and keyword integration BUT if your website isn’t indexed properly, your page will never make it into Google’s index.
Think of it like the proverbial tree in the forest: If you have a web page and Google can’t crawl or index it, does it really even exist at all?
And when all else is equal, technical SEO is often the difference in which site comes out on top in search engine results pages — especially for highly competitive keywords.
Now that we’ve covered what technical SEO is and why it matters, it’s time to move on to the next part: improving your website’s technical elements to optimize it for rankings.
And to do that, you need to start with technical SEO audits of your domain.
Don’t worry — it’s not as scary as it sounds.
An SEO technical audit simply involves looking at how your current website stacks up from a technical perspective, based on Google’s key ranking factors.
From here, you can identify any issues and get your team, developer, or SEO agency to address them.
Below, we’ll take you through the most important elements you need to look at as part of your audit.
This is based on the exact framework we use at OMG to help our clients up their technical SEO game, and rank higher in search.
A website crawler will reveal any technical issues on your website, such as duplicate data, error codes, broken links, and more.
Once you have these, you can build a plan to fix them (we’ll touch on this a bit further down).
After looking at a website’s crawlability, it’s time to understand whether the pages are actually being indexed by Google. This can be done directly via Google Search Console.
Google Search Console’s Index Coverage Report shows the indexing state of all the pages on your website that Google has crawled, or tried to crawl.
The final report clusters your web pages into the following categories: error, valid with warnings, valid, or excluded:
Image source: Search Engine Land
Clicking into each of these sections also reveals the reason for each page’s status:
Another way to check how many of your web pages are being indexed is via a quick Google search for “site:https://www.yourwebsitehere.com”. This will show you how many pages are currently in Google’s index, like so:
Here, we can see that Google has indexed around 38,100 pages from Starbucks. If Starbucks actually had 45,000 pages, that would be a clear indication that some of its pages aren’t being indexed by Google.
Regardless of which method you opt for, checking indexability gives a snapshot of how many pages Google is currently indexing, and helps identify issues for improvement.
Your preferred domain tells Google whether you prefer the ‘www’ or ‘non-www’ version of your website to show up in search. The easiest way to identify your preferred domain is to run a search for your brand name online.
For example, a search for “Online Marketing Gurus” returns the ‘www’ version as the preferred domain:
However, further down in the search results, we can see that Clutch has chosen the non-www version as their preferred domain:
You can either manually set the preferred version of your domain, or leave it up to Google to select a version to show searchers.
Here’s the good news: there’s no ranking benefit to choosing one version over the other. It all boils down to a matter of preference. However, once you’ve identified your preferred domain, it’s important to ensure that every version of your website redirects there.
Not doing so means that Google will essentially treat the non-www and www version of your website as two separate websites — creating a whole raft of issues along the way.
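As a sketch of what that redirect can look like: on an Apache server, a few lines in your .htaccess file will 301-redirect every non-www request to the www version. The domain below is a placeholder, and the exact rules depend on your server setup, so treat this as an illustration rather than a drop-in fix.

```apache
# Send all non-www traffic to the www version with a permanent (301) redirect
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [L,R=301]
```

If you prefer the non-www version, the same pattern applies in reverse.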
SSL stands for Secure Sockets Layer. This is a security protocol that’s used to establish encrypted links between a web server and a browser.
When installed on a web server, the SSL certificate encrypts the site using HTTPS (Hypertext Transfer Protocol Secure) rather than HTTP (Hypertext Transfer Protocol), so any data passing between your web server and browser remains private and secure.
In practice, it looks like this:
The easiest way to check if you have SSL installed is to look at your URL in the URL bar. If there’s a little lock symbol next to your URL, you know that your website has SSL installed.
On the other hand, if your website doesn’t have SSL installed, you’ll see this:
Duplicate content is exactly what it sounds like: two or more pages on your site with identical or near-identical content.
This might not sound like a big deal, but it can be HUGELY detrimental to your rankings.
When you have two similar pages, Google won’t know which one to rank first.
In other words, your own pages end up competing against each other.
Luckily, it’s fairly easy to check if you have duplicate content on your website.
One quick way to do it is through SEMRush’s Site Audit tool.
Simply plug your website URL in:
Once you’ve done this, the tool will run a technical SEO audit for your website. If you have any duplicate content on your site, you’ll see it in the results section:
Image source: GrowCheap
Once you know about any issues, you can work at fixing them.
Going beyond duplicate content on your own website, it’s also critical to check that your site doesn't copy any content from other sites.
See, Google favors the original author of content in rankings. If the copy on your page is the same as, or extremely similar to, the copy on another web page, Google will prioritize the page it considers to be the original.
Here’s an example.
Let’s say you’re a gardening company and you’ve published a blog post on tips for gardening.
You may have conducted your research from a highly useful resource. However, if you didn’t update the copy before putting it on your site, then it’s likely that you’ve got a case of duplicate content — and you won’t rank as high in search as a result.
To check, you can run your pages through a plagiarism checker like Copyscape: simply type in your URL and it will return a list of similar content that appears around the web.
If you find a content overlap between your website and another site, run an exact search for the duplicate copy in question on Google.
If your website shows up first, then Google considers you as the original author. If not, then you might need to edit your content so that it’s completely unique, to ensure it performs better in search.
Once you’ve completed your site audit, it’s time to fix any issues you’ve flagged along the way.
There are a number of different issues that can spring up from a technical SEO audit, all with varying levels of importance and difficulty to address.
Below, we've identified 13 that we've seen pop up time and time again from working with our clients.
These SEO technical issues are fairly simple and straightforward to fix — and addressing them will give your website a much-needed boost in search engine results.
First things first:
If you haven’t done so already, it’s a good idea to register your site with Google Search Console.
Google Search Console is a set of tools designed to help you understand how Google sees your website, and why it ranks your site where it does.
Once you register with Google Search Console, you can easily monitor performance and fix issues that are potentially holding back your organic search performance, such as page speed, crawl errors, indexing issues, and more.
Search Console helps you pinpoint any crawl errors with your website, communicate data more effectively to Google, and analyze your link profile.
And the best part? It’s free, so there’s no reason not to register your website and use the tool.
Crawlability is a foundational part of your SEO strategy. If a search engine’s bots aren’t able to crawl your site, then your pages won’t be indexed — and you won’t be ranked in search results.
An XML sitemap is important because it helps Google’s bots understand your site structure. Once they understand how your website is structured, it’s easier for the bots to crawl and index your content.
To check if you have an XML sitemap in place, simply type your domain into your browser and add “/sitemap.xml” to the end, like so:
If your website has an XML sitemap, you’ll get a long page of code that looks a little like this:
If you don’t have a sitemap, your site will return a 404 error. In this case, there are three routes you can take to fix the problem:
Create one yourself using an XML sitemap generator tool, then submit it to Google Search Console and Bing Webmaster Tools.
Use the Yoast SEO plugin to automatically generate an XML sitemap (for WordPress websites only).
Hire a web developer to create one for you.
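Whichever route you take, the result is a simple XML file. For reference, a minimal sitemap follows a structure like this (the URLs and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/blog/seo-tips</loc>
    <lastmod>2024-01-10</lastmod>
  </url>
</urlset>
```

Each `<url>` entry lists one page you want search engines to crawl, with an optional last-modified date.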
As we mentioned earlier, duplicate content issues can be hugely detrimental to your rankings and cause your web pages to compete against each other in search results.
So once you’ve found the culprit, how can you resolve it?
There are a few options you can take, depending on the time and resources you have at hand:
Edit content on one of the pages to make sure it is completely unique. This is the most resource-intensive, as it requires a complete refresh of your content.
Remove duplicate pages entirely. Just be sure to use a 301 redirect to make sure the URL directs search engines to the correct link.
If you can’t change the content or delete the pages, use a rel="canonical" link to one of the pages. This lets search engines know which page you want them to prioritize in search results.
Google also has a list of guidelines that webmasters can use to avoid duplicate content issues.
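For reference, a canonical tag is a single line placed in the `<head>` of the duplicate page, pointing at the version you want search engines to prioritize (the URL here is a placeholder):

```html
<!-- Tells search engines which URL is the "master" version of this content -->
<link rel="canonical" href="https://www.example.com/original-page/" />
```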
Not having an “https://” URL affects your rankings AND turns visitors off your site.
In the best-case scenario, if you don’t have an SSL certificate installed, users will see the “Not Secure” warning in the navigation bar, as we highlighted earlier:
Worse yet, your browser might even present visitors with a warning page before they get to your website that instantly turns people away:
Image source: Microsoft
To fix this issue, you’ll need to obtain an SSL certificate from a certification authority, such as SSL.com. If you purchased your domain through a provider like GoDaddy or NameCheap, you can also add on SSL certification through there.
Once your SSL certificate has been purchased and installed, your website URL should display with “https://” and have a lock next to it.
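It’s also worth forcing all traffic onto the secure version so nobody lands on the old HTTP URLs. As a sketch, on an Apache server the following .htaccess rules would 301-redirect every HTTP request to its HTTPS equivalent; the exact approach depends on your hosting setup, and many hosts offer a one-click option instead.

```apache
# Redirect any request arriving over plain HTTP to the HTTPS version
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```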
Structured data markup (also known as schema mark-up) sounds complex, but it’s an important part of your technical SEO efforts.
Structured data is a code that’s written in a specific format so that search engines understand it and use it to display search results in a richer and more informative way. These are otherwise known as 'rich snippets'.
In other words, structured data tells a search engine what your information means.
Let’s take a search for a brownie recipe:
This listing includes a number of rich snippets, including recipe cooking time, calories, and user rating, and all of these appear in addition to the rest of the meta description.
How do they do it?
Simple: they use structured data.
Using structured data is a surefire way to make your web page stand out from the pack, and entice more users to click through to your page.
And the best part is that you don’t need to learn complex code to use structured data on your website. A structured data plugin, such as the Schema Creator or Google’s Structured Data Markup Helper will do all of the heavy lifting for you.
Once you’ve got your structured data markup, simply ask your developer to add the HTML into your website code.
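To illustrate what that markup can look like, here’s a sketch of recipe structured data in schema.org’s JSON-LD format. The values are made up for this example, and a real implementation should be validated with Google’s testing tools before going live.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org/",
  "@type": "Recipe",
  "name": "Fudgy Chocolate Brownies",
  "totalTime": "PT45M",
  "nutrition": {
    "@type": "NutritionInformation",
    "calories": "270 calories"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.8",
    "ratingCount": "312"
  }
}
</script>
```

It’s this block that gives Google the cooking time, calories, and rating it can surface as rich snippets.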
This issue isn’t applicable to every site.
However, if you’re running a website across multiple countries or in multiple languages, you NEED to get this element of technical search engine optimization right.
Not doing so could result in Google misinterpreting different localized versions of your website as duplicate content, or directing users to the English version of your website when they are looking for the Spanish version (and so on).
If you’re running an eCommerce site, this component is also critical as it ensures your visitors see the correct product and price:
So how do you signal to Google when you have language-specific or region-specific pages?
It’s all done using hreflang tags.
Hreflang tags are an important way to tell Google about localized versions of your page. In the code, it looks like this:
Image source: Woorank
While the code looks complex, it’s actually quite straightforward. The code is telling Google to direct visitors from France (“FR”) to https://www.woorank.com/fr, visitors from the Netherlands (NL) to https://www.woorank.com/nl, and so on.
At the most basic level, you need to make sure that your site is using the right hreflang tags, and that these tags are pointing to the right localized versions.
You can either do this yourself using a Hreflang generator, through your web developer, or via your SEO agency.
However, given the technical nature of hreflang tags, we generally recommend the latter two for this particular technical SEO issue.
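For reference, a typical set of hreflang tags sits in the `<head>` of each localized page and looks something like this (the URLs are placeholders, and each version should list all versions, including itself):

```html
<link rel="alternate" hreflang="en" href="https://www.example.com/" />
<link rel="alternate" hreflang="fr" href="https://www.example.com/fr/" />
<link rel="alternate" hreflang="nl" href="https://www.example.com/nl/" />
<!-- Fallback for visitors who match none of the languages above -->
<link rel="alternate" hreflang="x-default" href="https://www.example.com/" />
```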
A robots.txt file instructs search engines on which parts of your site they should and shouldn't crawl.
Not having a robots.txt file — or having a poorly configured robots.txt file — can be a HUGE traffic killer for your site.
Most site audit tools will pick up if you’re missing a robots.txt file. However, you can also perform a manual check by typing your website URL into your browser and adding “/robots.txt”:
If you’re missing a robots.txt file, Google will return a 404 error. In this case, you or your developer will need to create a robots.txt file and add it to your site. Google has a detailed list of guidelines here.
If you do have a robots.txt file, your browser will then return a result like this:
Notice the “Disallow: /xxxx” part? This means your robots.txt file is telling all robots to stay out of that particular part of the site.
In some cases, this might be an error — or it could have been done deliberately to keep search engines from crawling that part of your site.
In this case, it’s best to speak to your developer or SEO agency before touching your robots.txt file as it may have been done with a particular reason in mind.
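If you’re curious how bots actually read these rules, here’s a small sketch using Python’s built-in urllib.robotparser. The file contents and paths below are hypothetical, but the allow/disallow logic is exactly what a well-behaved crawler applies.

```python
# Sketch: how a crawler interprets robots.txt rules, using the
# standard-library urllib.robotparser. The paths are made-up examples.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A well-behaved bot checks each URL before crawling it:
print(parser.can_fetch("*", "https://www.example.com/blog/seo-tips"))  # True
print(parser.can_fetch("*", "https://www.example.com/admin/login"))    # False
```

Running your own important URLs through a check like this is a quick way to confirm a Disallow rule isn’t accidentally blocking pages you want crawled.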
Broken links are one of the main culprits behind crawlability issues on your site. These can prevent your page from appearing in search results and also deliver a poor user experience to boot.
Broken links occur when search engines try to crawl a specific page of your site and can’t. It might be because you deleted a page from your site or changed the URL, but other internal pages still have links to it.
There are a number of different types of broken links, but the most common is a 404 error.
Thankfully, these are an easy fix: once you’ve identified any broken links using a site crawler tool, you can either set up a 301 redirect to a more relevant and updated page, or update any links to point to the correct web page.
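To see how a crawler finds the links to check in the first place, here’s a minimal sketch using Python’s built-in html.parser to pull every `<a href>` out of a page. The HTML is a made-up example; in practice, each collected URL would then be requested so any 404 responses can be redirected or updated.

```python
# Sketch: extract every <a href> from a page's HTML so each URL
# can be tested for broken (404) responses. Sample HTML is hypothetical.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Collect the href attribute of every anchor tag.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

html = """
<a href="/blog/seo-tips">SEO tips</a>
<a href="/old-page">An old page that may 404</a>
"""

extractor = LinkExtractor()
extractor.feed(html)
print(extractor.links)  # ['/blog/seo-tips', '/old-page']
```

A site crawler tool does essentially this at scale, following every collected link and recording the status code it gets back.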
Broken links are bad from a user experience standpoint, but site errors are much, MUCH worse. These crawl errors prevent search engines from accessing your site.
It might be due to a DNS error (meaning the bot cannot communicate with the server), a server error (such as a flaw in your code that prevents a page from loading), or something else entirely.
The easiest way to pinpoint the cause of the error is via Google Search Console or through your SEO audit tool. Once you have this, it’s best to work with your tech team or SEO team to resolve the issue, as it could be a problem on your web host’s end.
Orphan pages are pages that aren’t linked to from any other page on your site. Without internal links pointing to them, search engine bots lack the context to find and understand these pages — and in most cases, they end up not being indexed and won’t show up in search results.
Image source: Backlinko
If you picked up some orphan pages in your SEO audit, the fix is very simple: add at least one or two internal links pointing to the page from other relevant pages on your site. This will help search engines find and index the page, so it’s displayed in search results.
Your site and URL structure play a critical role in the crawlability and indexability of your website.
See, most websites don’t have any challenge getting the homepage indexed by Google.
However, when you have “deep” web pages that are 3-4 clicks away from the homepage, it becomes much harder to get these crawled and indexed by Google:
Image source: Backlinko
As a result, they might not appear in the results for search engines at ALL — and even if they do, search engine bots are less likely to crawl them for updates.
The best fix for this is an intuitive, flat site and URL structure, with your most important content no more than three clicks away from your homepage.
And if you DO have deep pages that you want indexed, a good old-fashioned internal link can always help search engines find the page.
Images are a commonly overlooked part of technical SEO, despite the fact that using them can boost the SEO value of your page AND provide you with additional ranking opportunities on Google Image Search.
Alt tags help search engines index a page by telling a search engine bot what the image is about. In the back end, it looks like this:
Image source: Southern Web
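In plain HTML, an alt tag is just an attribute on the image element. For example (the file name and description here are placeholders):

```html
<img src="/images/chocolate-brownies.jpg"
     alt="Stack of fudgy chocolate brownies on a white plate" />
```

The alt text should describe the image plainly; stuffing it with keywords does more harm than good.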
Using image alt tags is a quick fix that will provide a much-needed boost to any web page. Most SEO tools will identify any images that are missing alt tags, which can then be updated in your site’s code or via your content management system’s media uploader, such as WordPress:
Title tags are an HTML element that specifies the title of a web page, both for users and search engines.
These little snippets are short, but they’re INCREDIBLY important to use if you want your site to be crawled and indexed effectively.
They help search engine bots understand what your page is about, and also play a key role in the clickability of your web page in Google search results.
Site audit tools should pick up on any missing or duplicate title tags. Once you’ve identified these, follow these best practices to update your title tags (and for any new ones you create):
Ensure your title tags are between 55 and 60 characters long. Anything longer will get cut off in search engine results.
Create a unique title tag for every page. Like duplicate content, duplicate title tags can result in your pages competing with one another in search results.
Ensure your page doesn’t have multiple title tags. Keep only the most relevant heading as your title tag, and update the remaining tags to header tags, such as H1 or H2.
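The first two checks above are easy to automate. Here’s a small sketch in Python that flags over-length and duplicate title tags from a list you’ve exported; the titles below are hypothetical examples.

```python
# Sketch: flag title tags that break the best practices above,
# i.e. longer than 60 characters or duplicated across pages.
from collections import Counter

def audit_titles(titles, max_length=60):
    """Return (too_long, duplicates) lists for a set of page titles."""
    too_long = [t for t in titles if len(t) > max_length]
    counts = Counter(titles)
    duplicates = [t for t, n in counts.items() if n > 1]
    return too_long, duplicates

titles = [
    "Technical SEO Guide | Example Co",
    "Technical SEO Guide | Example Co",  # duplicate across two pages
    "An Extremely Long Title Tag That Will Definitely Be Cut Off In Search Results",
]

too_long, duplicates = audit_titles(titles)
print(too_long)    # the over-length title
print(duplicates)  # ['Technical SEO Guide | Example Co']
```

Most SEO audit tools run equivalent checks for you, but a script like this is handy for spot-checking a title export before publishing.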
Like title tags, H1 tags are a little snippet of code that indicates the main topic of your web page to search engines and browsers. These appear in the code like so:
<h1>On-Page SEO: 22+ Actionable Tips to Boost Rankings</h1>
If you’re using a CMS, you can mark a piece of content as your H1 in the content editor:
Each page should have only one H1 tag, representing the main headline or subject of the page. If you have more than one, keep the main one and change the others to H2 or H3, depending on their importance.
Technical SEO isn’t as daunting as it sounds — not by a long shot.
With the right tools up your sleeve and a solid understanding of how search engines crawl and index your site, you’re well on your way to improving the technical structure of your website AND ranking higher on search results.
Combine this with sound on-page SEO elements, such as page speed, and effective off-page SEO efforts, and you’re well on your way to becoming an unstoppable force in organic search.
But these common technical SEO issues are just the tip of the iceberg.
If you truly want to improve your visibility on search engines, you need to leave no stone unturned.
That’s where we come in.
We’ve worked with thousands of businesses to improve their technical SEO in line with best practices, and we’re here to help you do the same. Just click on the link below to get started with an obligation-free chat today.
We’ll conduct a free audit of your current online marketing and pinpoint exactly what you need to do to get your site ranking higher in organic search. We’ll also give you a FREE six-month game plan to skyrocket your online marketing efforts, boost your rankings, and, ultimately, drive more revenue through digital marketing.