Technical SEO

What is Technical SEO?

Technical SEO is the practice of improving your website so that search engines can crawl, render, and index it more easily and efficiently. More broadly, it can refer to any technical change you make to your site to help it appear among the top search engine results.

Note: Because Google is the largest and most relevant search engine, we will be focusing on Google optimization, but this information generally applies to other search engines as well.

A Brief History of Technical SEO


As the internet has evolved, Google has updated its search algorithms to deliver the highest quality content and the highest performing websites (with quality page experience) to search engine users on various devices.

Historically, much of search engine optimization (SEO) has focused on off-page and on-page tactics that deal with optimizing content and links, and less on technical SEO strategies. In other words, a lot of SEO is intended to signal to search engines — like Google — that a website's content is relevant to a user's search query, and that the website is an authoritative and trusted source.

Technical SEO is intended to make it easy for Google to technically process a site (i.e., crawl, render, and index it), and to signal the quality of a website's performance. Technically optimizing your website is becoming increasingly important as websites become more technically complex — meaning they're harder for Google to process — and as user expectations continually rise.

How websites work

To understand technical SEO, it's helpful to know how a website works. Knowing how a website works will help you understand issues that a search engine might run into when trying to understand your website and ways that you can help search engines process your website more effectively and efficiently.

What is a Website?

A website is really just a way to publicly collect and display information. It's made up of pages of code. This code describes the structure, content, appearance, and behavior of a website. There are three primary types of code: HTML, CSS, and JavaScript.

HTML — The structure and content of a page (titles, body content, etc.)

CSS — The appearance of a page (color, fonts, etc.)

JavaScript — How a page moves or reacts to a specific input (interactive, dynamic, etc.)
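Taken together, a minimal web page (all of the content here is placeholder) combines the three types of code:

    <!DOCTYPE html>
    <html>
      <head>
        <title>My First Page</title>
        <style>
          /* CSS: the appearance of the page */
          h1 { color: navy; }
        </style>
      </head>
      <body>
        <!-- HTML: the structure and content of the page -->
        <h1>Hello, world!</h1>
        <button onclick="alert('You clicked the button!')">Click me</button>
        <script>
          // JavaScript: how the page reacts to input
          console.log('Page loaded');
        </script>
      </body>
    </html>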

How Does a Website Get on the Internet?

Each page is named with a URL (Uniform Resource Locator), and each URL is attached to a domain. Website domains don't simply exist on their own — they must first be set up and given a home.

First, you need to give your website a domain name and register it with a registrar like GoDaddy or HostGator. These are just companies that manage the reservation of domain names. Then, your domain name must be linked with an IP address and stored on a domain name server (DNS). A DNS does two things:

  1. A DNS translates domain names — for example, Huckabuy.com — into IP addresses (Internet Protocol addresses) — for example, 192.0.2.44. In other words, we need a DNS service to translate human-readable words into machine-readable numbers.
  2. A DNS hosts the IP address. This line of numbers must be stored on a massive computer called a server. DNS providers are companies that run many such servers and send that information to your computer via the World Wide Web (i.e., the internet).

Note: It's important to know the difference between a domain and a hosting service. You can buy a domain, but it doesn't really exist anywhere until you pay to have it hosted on a server.

How Users Interact with Websites 

When you access a website, your computer is using a browser — think Safari, Firefox, or Chrome. When you know the exact web address (URL) for a web page, you can type it into your browser, or you can click a link to that URL.

When a user requests a domain using a browser, a series of events take place.

  1. The browser will request the domain from the server.
  2. The browser will render — or decode — the HTML, CSS, and JavaScript that a developer has written out.
  3. The browser will then display that information to you in the form of a familiar webpage with recognizable elements like a title, paragraph, and hyperlinks.

Note: JavaScript can increase the time it takes for your browser to load a webpage because it takes longer to render than HTML and CSS.

When you don't know the exact address of a website, or you don't know which website you want to go to for information, you'll use a search engine by typing in a question or term related to whatever website or information you are looking for.

How search engines interact with websites

Search engines are designed to help humans find information on the web. One helpful analogy, introduced by Google Developer Advocate Martin Splitt, is to compare search engines to librarians and the internet to one massive library (or library catalog). A librarian must go through a collection of books, consider the content of the books, and then decide how those books relate to other books and how they should be labeled.

Similarly, search engines have to go through the vast collection of information on the internet and parse the code to organize it for users. They go through and parse this information by crawling and rendering it with search engine crawlers. Once the search engine has crawled and rendered a page, it can index it into its database. Once the page is in the index (or catalog), the search engine can match the content of a web page with a question (or terms) that a user types in.

[Infographic: The Technical SEO Journey]

Characteristics of a Technically Optimized Website

Fast

Page Speed has long been a Google ranking factor. In 2010, Google updated their algorithm to include desktop site speed as a ranking factor, in 2018 they announced speed for mobile users would be a ranking factor, and in June 2021 Google began rolling out another update focused on page speed and overall page performance (ie page experience).

Crawlable 

An integral part of technical optimization is ensuring that search engines can crawl your website effectively and efficiently. Crawlable pages are easy for Google to process, so its crawler can quickly move on to the next page. Google only allots a specific amount of time to crawling each website — this is called crawl budget. So, you want to maximize your crawl budget by making the crawler's experience as smooth as possible.

Live Links

It's important not to have any dead or broken links on your website — whether internal or external — because they result in 404 errors and can negatively impact the authority and crawlability of your site. One easy way to solve this problem for internal links and backlinks (when other websites link to you) is to redirect the broken or deleted page to a new, live page. The three most common redirects are 301s, 302s, and meta refreshes. Click here to learn more about redirects.
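For example, a permanent (301) redirect is communicated in the HTTP response a server sends back for the old URL (the URL below is a placeholder):

    HTTP/1.1 301 Moved Permanently
    Location: https://www.example.com/new-page

A browser or search bot that receives this response automatically requests the new URL instead, preserving the value of links pointing at the old page.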

Secure

Secure browsing is important to both users and search engines. HTTPS (Hypertext Transfer Protocol Secure) is an internet communication protocol that protects the integrity and confidentiality of data between the user's computer and the site. Google strongly encourages HTTPS as a best practice in their documentation and explains common pitfalls and how to solve them.

Structured Data

Structured data — also known as schema markup — is a machine-readable language that allows search engine spiders (search bots) to contextualize and understand each page's content, so that they can accurately index your web pages in search results and amplify them as rich features — like frequently asked questions, ratings, and reviews — embedded alongside standard blue links. This page provides a basic introduction to the concept, covers why it is important, and looks at the different options you have to implement it on your website.
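As a minimal illustration, here is what frequently-asked-question markup can look like in JSON-LD, the structured data format Google recommends (the question and answer text are placeholders):

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "FAQPage",
      "mainEntity": [{
        "@type": "Question",
        "name": "What is technical SEO?",
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "Improvements that help search engines crawl, render, and index a website."
        }
      }]
    }
    </script>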

To learn more about different types of structured data markup, and to see examples, read Examples of Structured Data.

Learn how different types of rich snippets can improve clicks, impressions, and CTR for your business in this article.

[Infographic: Structured Data]

Sitemap

Sitemaps allow webmasters to inform search engines about URLs on a website that they want indexed into search results. A sitemap is an XML file that lists the URLs for a site along with additional information about each URL. It tells Google when a URL was last updated, how often it changes, and how important it is in relation to other URLs on the site.
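A minimal sitemap containing a single (placeholder) URL looks like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/</loc>
        <lastmod>2021-06-01</lastmod>
        <changefreq>monthly</changefreq>
        <priority>0.8</priority>
      </url>
    </urlset>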

According to Google's Gary Illyes, an XML sitemap is the second most important source for finding URLs after internal links. Webmasters can submit a new XML sitemap, remove an old sitemap, and view their current sitemaps, all in Google Search Console.

Internationally Optimized

If your website targets people in more than one country or language, it's important to be familiar with international SEO. You want to make it easy for international visitors to find the version of your site meant for them. To optimize your content for multiple countries, choose a URL structure that makes it easy to connect specific domains or pages to specific countries (for example, www.example.uk). For multilingual sites, it's important to add hreflang tags to indicate to Google that there are multiple copies of a web page in different languages.
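For example, a page with (placeholder) UK English and German alternates can declare them with link elements in its head section, each pointing at the version meant for that audience:

    <link rel="alternate" hreflang="en-gb" href="https://www.example.uk/" />
    <link rel="alternate" hreflang="de" href="https://www.example.de/" />
    <link rel="alternate" hreflang="x-default" href="https://www.example.com/" />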

How Crawling Works

Crawling is how Google finds out what sites and pages exist on the web. Once Google crawls a website, they can add that information to their database. Google finds new web pages by following links from one webpage to another. Website owners can also tell Google to visit, or crawl, their website by submitting a list of pages called a sitemap. 

“The crawling process begins with a list of web addresses from past crawls and sitemaps provided by website owners. As our crawlers visit these websites, they use links on those sites to discover other pages. The software pays special attention to new sites, changes to existing sites and dead links.” 

— Google

Google doesn’t have infinite time and resources to crawl every page of every website all the time. Over the last decade, as the internet has grown in size and complexity, they have acknowledged their limitations and disclosed that they discover only a fraction of the internet’s content. That makes it the webmasters’ job to factor “crawl budget” into their technical SEO strategy, such that Google is able to discover and crawl the “right” URLs more often. Continue reading to learn more.

Control Crawling

You have some control over how Google crawls your site and what signals your site is giving to search engine crawlers. You want to signal to Google which URLs and sections of your site are most important.

Additionally, Google has provided a tool called Google Search Console to help you control crawling. With Google Search Console you can provide detailed instructions about how to process pages on your site, request a recrawl, or opt out of crawling altogether.
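Outside of Search Console, the standard place for crawl instructions is a robots.txt file at the root of your domain. A minimal example (the disallowed path is a placeholder):

    User-agent: *
    Disallow: /admin/
    Sitemap: https://www.example.com/sitemap.xml

This tells all crawlers to skip the /admin/ section of the site and points them to the sitemap.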

Organize Your Site Navigation

One way you can help search engines understand which parts of your site are most important is by organizing your pages by topical relevance and importance. For example, if your product pages are really important to you, your business, and the overall purpose of your website, you want to make that clear by putting those pages in your navigation menu. 

You can organize your navigation into categories and subcategories as well. So, if you want to be considered first and foremost a clothing brand, you might create a page on your site called “Clothing,” and then signal which clothing items you sell through subcategories like shorts, dresses, and jackets.

Organizing your website in this way helps Google know which pages to prioritize in search results ranking, and which pages to crawl more frequently. 

Internal Linking

Another way you can influence which pages Google crawls is through a strategy called internal linking. Internal linking is simply the practice of linking one page of your website to another page of the same website through hyperlinks.

The more frequently you link to a specific page from various other pages on your website, the more frequently Google will have an opportunity to crawl that page as they are crawling your website. This decreases the chances of that page getting missed and signals to Google the relevance and importance of this page to the rest of your overall website.
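In HTML, an internal link is just an anchor tag pointing to another URL on your own domain; descriptive anchor text (the page and wording below are placeholders) gives Google additional context about the destination page:

    <a href="/clothing/dresses">Shop our summer dresses</a>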

How Rendering Works

Rendering is the process by which Google retrieves your pages, runs your code, and assesses your content to understand the layout and structure of your site. This information is then used to rank the quality and value of your site content in search results. As the web has transitioned from primarily a document platform to an application platform, JavaScript has played a larger role in this rendering process and posed significant challenges to the user and bot experiences. Deciding which type of rendering to use depends on the nature and size of your website.

There are three different types of rendering: client-side, server-side, and dynamic rendering. The difference between the three types of rendering depends on the method of delivery of resources to the browser and to search engine bots. Each of the three options has pros and cons.

Client-Side Rendering

Client-side rendering is when the content of a website is rendered in the browser instead of the website's server. This is the default form of rendering. 

One way to understand the process of rendering is to think of the site assets as ingredients and the fully rendered site as a ready-to-eat meal. With client-side rendering, the ingredients are delivered to you, but you still need to take the time to assemble your meal. 

Client-side rendering is often more cost-effective than server-side or dynamic rendering, but it can come at the cost of a quality user experience. This is because client-side rendered websites often take longer to load. 
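This is also why a client-side rendered page can look nearly empty to a search bot before the JavaScript runs. The initial HTML of a typical client-side rendered application (file names here are placeholders) is little more than an empty container and a script:

    <!DOCTYPE html>
    <html>
      <head><title>My Store</title></head>
      <body>
        <div id="root"></div>
        <script src="/static/app.js"></script>
      </body>
    </html>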

Server-Side Rendering

Server-side rendering is when the content of a website is rendered by the server rather than the browser. 

If we go back to the food analogy, you can think of server-side rendering as a food delivery service. The hard part of cooking the meal is performed by the delivery service, so all you need to do is open it up, maybe warm it up a bit, and set the table. 

Overall, there are fewer page speed and crawl budget-related SEO issues with server-side rendering than client-side rendering. However, there are some cons. First, server-side rendering is more expensive, and while first contentful paint is improved substantially, time-to-interactive often lags. For the user, this means that the page appears to load faster, but they cannot interact with the page until all resources have been loaded.
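By contrast, a server-side rendered response for the same (placeholder) page already contains the content, so browsers and bots can read it immediately:

    <!DOCTYPE html>
    <html>
      <head><title>My Store</title></head>
      <body>
        <h1>Summer Dresses</h1>
        <p>Browse our latest collection of summer dresses...</p>
        <script src="/static/app.js"></script>
      </body>
    </html>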

Dynamic Rendering

Dynamic rendering is the process of serving content based on the user agent that calls it. This means serving a client-side rendered version of your site for human users and a separate, server-side version for search engines.

On the server side, JavaScript content is converted into a static HTML version preferred by search engine bots. This allows them to fully access, crawl, and index webpage content and to use your crawl budget effectively. Meanwhile, the client-side experience is unchanged. Dynamic rendering is one of the biggest technical SEO initiatives Google has endorsed in years.
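A minimal sketch of the idea, assuming a Node.js server running Express with a pre-rendered HTML snapshot saved to disk (these assumptions are for illustration only, not a production setup):

    const express = require('express');
    const app = express();

    // Very rough bot detection by user-agent string (illustrative only).
    const BOT_UA = /googlebot|bingbot|baiduspider/i;

    app.get('*', (req, res) => {
      if (BOT_UA.test(req.headers['user-agent'] || '')) {
        // Search bots receive the static, pre-rendered HTML snapshot.
        res.sendFile(__dirname + '/snapshots/index.html');
      } else {
        // Human visitors receive the normal client-side app shell.
        res.sendFile(__dirname + '/public/index.html');
      }
    });

    app.listen(3000);

In practice, dedicated renderers such as Rendertron or prerendering services handle the snapshot generation, but the routing decision looks like this.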

[Infographic: Dynamic Rendering]

How Indexing Works

After Google has found and processed your webpage, the search engine determines what your webpage is about and categorizes it accordingly. This process of identification and categorization is called indexing. After a page is indexed, it is stored in the Google Index, a large database spread across a massive network of servers.

Robots Directives

Robots meta directives include “meta tags” and “meta descriptions” — instructions that help Google understand how to crawl and index your web content.

Meta Tags 

Google can’t easily capture the meaning and contents of things like images, videos, and infographics, so webmasters and content creators can add concise descriptions to the code via meta tags to enable Google to quickly understand what that content (image, video, etc.) is about and correctly index it into image results, video results, or other types of search results. 

Accessibility SEO: Meta tags also help users that require technologies like screen readers to access the content of your site. These assistive technologies read meta tags to the user to help them understand the contents of the image and contextualize it with the rest of the webpage content. 
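For example, an image's alt attribute (the file name and description below are placeholders) gives both Google and screen readers a concise description of the image:

    <img src="/images/red-summer-dress.jpg" alt="Red sleeveless summer dress with a floral print">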

How to Control Indexing with Robot Meta Directives

You can also use meta tags to give crawlers directions on how to index your content. None of these directives are case-sensitive. Here are some of the most common indexation directions you can give search engines via meta tags: 
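Some of the most widely supported values, per Google's robots meta tag documentation, are noindex (don't show the page in search results), nofollow (don't follow the links on the page), noarchive (don't show a cached copy), and nosnippet (don't show a text snippet or video preview). They are placed in a robots meta tag in the page's head:

    <meta name="robots" content="noindex, nofollow">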

Meta Descriptions

Meta descriptions help Google to quickly understand the overall purpose of a web page and properly index it. Meta descriptions are also often displayed to users in the search result (under the blue link) to give them a preview of what the webpage will be about before they actually click on the page.
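A meta description is a simple tag in the page's head (the text below is a placeholder):

    <meta name="description" content="Learn how technical SEO helps search engines crawl, render, and index your website.">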

Technical SEO Tools

Tools like Google Search Console, PageSpeed Insights, and Lighthouse — covered throughout this page and in the glossary below — can help you improve the technical SEO of your website.

Technical SEO Checklist

Don't know how to approach technical SEO? Here's a series of tasks to help you improve your technical SEO!

  1. Find Errors — This is where a technical SEO site audit and the tools mentioned above will come in handy
  2. Revise Site Structure
  3. Eliminate Duplicate Content
  4. Improve Site Speed — You can do this manually or through a site enhancement tool
  5. Improve Security — The first step to improving site security is serving your website through HTTPS
  6. Introduce Device-Friendly Design — Make sure you are using responsive best practices
  7. Use Structured Data — You can implement structured data manually or automate the process via third-party software

Technical SEO Articles

Core Web Vitals

The Core Web Vitals are a set of Google ranking signals that Google began to roll out in June 2021. The Core Web Vitals measure page experience — including how a user experiences page speed and overall page performance. Read this article to learn how to fix Core Web Vitals issues.

Introduction to Largest Contentful Paint for SEO

Largest contentful paint is a key metric in Google’s new set of “Core Web Vitals” that measures how long it takes for the primary piece of content above the fold to be usable by a visitor. It could be a text block, image, video, or some other element — whatever is largest. Anything that extends beyond the initial viewport is not taken into consideration. Continue reading to learn more.

Introduction to First Input Delay for SEO

First input delay is a key metric in Google’s new set of “Core Web Vitals” that measures the delay in discrete event processing (like the click of a button) in order to capture a user’s first impression of a site’s interactivity and responsiveness. In other words, it quantifies the experience users feel when trying to interact with elements on a page. Continue reading to learn more.

Introduction to Cumulative Layout Shift for SEO

Cumulative layout shift is a key metric in Google’s new set of “Core Web Vitals” that measures the sum total of all the unexpected layout shifts that occur during the loading of a page. A layout shift occurs any time a visible element changes position from one frame to the next. To illustrate, say you went to click on the menu bar of a homepage and it shifted up and you accidentally clicked on a newsletter signup button that popped up instead. That’s an example of a layout shift. Continue reading to learn more.

Mobile-friendliness: SEO for Mobile

Mobile-friendliness is a big part of page experience, and it will only become more important with the upcoming Page Experience Update. Read this article to learn about mobile SEO, how your business can benefit from it, and what to do to optimize for it.

AMP for SEO

Mobile users continue to make up a larger portion of internet users worldwide, meaning that the need for webmasters to provide a quality mobile page experience is only becoming more important. Accelerated Mobile Pages are one way webmasters can keep up with this ever-growing population of mobile users. In this article, you'll learn what AMP is, the benefits of using AMP for SEO and business, the disadvantages of AMP and more.

Introduction to Canonicalization for Technical SEO

This article provides a brief overview of canonical tags and why they are an important part of a technical SEO strategy.

[Image: Canonical tags help search engine bots]

Search engines experience website content differently than humans. For search engines, every unique URL is a separate page. And if a single page on your website is accessible by multiple URLs with similar or near similar content, Google interprets them as duplicate versions of the same page. Consequently, Google will choose one URL as the original and most important piece of content and index that. Sometimes, they make a mistake and choose the wrong one. This makes it important to take the proactive step of telling Google which one is which. Continue reading to learn more.
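That proactive step is a canonical tag: a link element in the page's head that names the preferred URL (a placeholder here) for the content:

    <link rel="canonical" href="https://www.example.com/dresses" />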

Introduction to Log File Analysis for Technical SEO

This article provides an introduction to log file analysis, a task allowing marketers to study how Google interacts with their websites in order to inform changes for technical SEO.

[Image: Log file analysis]

For technical SEO purposes, a log file is a collection of server data from a given period of time showing requests to your webpages from humans and search engines. Marketers analyze the data from these log files in order to understand, for example, how their website is being crawled by Google’s bots. The insights from this data can be used to resolve bugs, errors, or hacks that are negatively impacting how Google is discovering, understanding, and adding your content to search results. Continue reading to learn more.
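For illustration, a single entry in a common combined-format access log (the values below are made up) records the requester's IP address, the timestamp, the requested URL, the response code, and the user agent — in this case Googlebot:

    66.249.66.1 - - [12/Nov/2020:06:25:24 +0000] "GET /products/dresses HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"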

Introduction to HTTP/2 for Technical SEO

HTTP/2 — Hypertext Transfer Protocol version 2 — works much like standard hypertext transfer protocol (HTTP), only more efficiently. With the HTTP/2 protocol, internet users experience improved page load times, and more complex web pages are better supported. Implementing HTTP/2 can be part of your technical SEO strategy to improve your website's performance.
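How you enable HTTP/2 depends on your server. As one hedged example, on an nginx server already configured for HTTPS, it is typically switched on in the listen directive (certificate and other directives omitted):

    server {
        listen 443 ssl http2;
        server_name www.example.com;
        # ssl_certificate, ssl_certificate_key, and other directives go here
    }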

Alternatives to the Structured Data Testing Tool

In July 2020, Google announced that the Rich Results Test was out of beta and that its Structured Data Testing Tool would be deprecated at the end of the year. After significant feedback from the SEO community concerning this announcement, Google changed course in December 2020, stating that the old tool would be migrated to the schema.org community in April 2021, where it would be used strictly to check the syntax and compliance of markup against schema.org standards. Continue reading to learn more.

How To Use Google's Crawl Stats Report For Technical SEO

In November 2020, Google released an improved Crawl Stats report that gives webmasters better insight into how Google crawls their websites and what they can do to improve their site's performance for search engine optimization. Continue reading to learn more.

How To Use Google's Lighthouse Report For Technical SEO

The latest version of Google's Lighthouse report includes the new Core Web Vitals metrics that will become ranking factors when the page experience update goes live in May 2021. Continue reading to learn how to use this report to optimize for the update.

SEO and Accessibility: 6 SEO Practices That Improve Web Accessibility

SEO and web accessibility are linked because many SEO best practices — things like adding image alt text — also help people with disabilities access your site more easily. This article will define SEO and web accessibility and outline six SEO practices that improve your website's accessibility.

Frequently Asked Technical SEO Questions

What is technical SEO?

Technical SEO refers to the practice of making improvements to your website to help search engines crawl, render and index your site more easily and efficiently, and can refer to any technical changes you make to your site to help it appear among the top results in Google Search Results.

How is technical SEO different from other types of SEO?

SEO can be split into three main categories: on-page SEO, off-page SEO, and technical SEO. On-page and off-page SEO have more to do with creating a website that is optimized for search engine users (humans) while technical SEO is about optimizing a site for bots.

A search engine is designed to effectively solve user problems and answer questions. On-page SEO tactics help search engines identify which pages offer the best content to meet a user's needs. Off-page SEO tactics help search engines identify which websites and specific pages are authoritative on a given topic. Technical SEO is about making it easy for search engines to efficiently understand your website with the technology that processes and categorizes web pages (crawling, rendering, and indexing).


Technical SEO Terms Glossary

API: An application programming interface is a defined set of rules and conventions that allows different pieces of software to communicate with one another across platforms.

Caching: The act of storing data in a cache. Search engines store a pre-rendered version of a page in the cache to serve pages to users more quickly.

CDNs: Content Delivery Networks are globally dispersed locations where data is stored. They are often used to solve page loading issues associated with the geographical distance between a user and a host.

CLS: Cumulative Layout Shift is a page experience metric that measures how much a page's visible elements unexpectedly shift position while the page loads. It measures visual stability.

Cloaking: A spam technique meant to trick a search bot. Cloaking happens when the content presented to users is different from the content presented to the search bot.

Core Web Vitals: CWV consists of three metrics: LCP, FID, and CLS. These three metrics measure page experience and overall page speed.

Crawl Budget: The number of pages that Google will crawl on your site in a day. Crawl budget can vary from site to site.

Critical Rendering Path: The steps a browser takes to convert HTML, CSS and JavaScript into a page that can be viewed by a user.

DNS: The Domain Name System (DNS) connects URLs with their corresponding IP addresses. Web browsers process data as numbers, and the DNS is what translates a searchable domain name (like Huckabuy.com) into a string of numbers that the browser can process.

Dynamic Rendering: The process of serving a client-side version of your site for users and a separate, server-side version for Search Bots. On the server-side, javascript content is converted into a flat HTML version preferred by Search Bots. Dynamic rendering is not cloaking.

Edge SEO: The practice of utilizing CDNs to speed up the delivery of content to users.

FID: First Input Delay is a Core Web Vitals page experience metric. It measures the time from a user's first interaction with a page (a click, tap, or key press) until the browser is able to respond to that interaction.

Google Lighthouse: One of Google's webmaster tools; it can be used to measure page performance and page experience.

HTML: Hypertext Markup Language is a standardized system for tagging text files to achieve design effects on web pages.

JavaScript: A programming language typically used to create interactive effects and dynamic elements on web pages.

JSON-LD: JavaScript Object Notation for Linked Data (JSON-LD) is a format for structuring the data of a web page to better communicate with search engines. There are other structured formats, but JSON-LD is the one preferred by Google.

LCP: Largest Contentful Paint is a Core Web Vitals page experience metric that measures how long it takes for the largest image or text block on a page to render. To a user, LCP is when the loading of a page visually appears to be complete.

PageSpeed Insights: One of Google's webmaster tools; it measures page speed.

Rich Result: Also called a rich snippet, a rich result is an enhanced snippet of information (an image, video, review, etc.) that appears at the top of a SERP, above the ten blue links.

Sitemap: A file that explains the connections between various pages on your website.

Structured Data: Another way to say “organized” data (as opposed to unorganized). Schema.org is a way to structure your data, for example, by labeling it with additional information that helps the search engine understand it.

Technical SEO: The practice of optimizing your site for the crawling and indexing of search engines; it can also refer to any technical changes you make to help your site appear better in search results.

URL Routing: The practice of defining URLs and their destinations.

Additional Technical SEO Resources