On-Site SEO

All good SEO activity starts with planning your strategy and selecting the right keywords for your site. But once that’s done, how do you actually go about optimizing your site so that search engines understand what you’re offering and display you on the results page for the most relevant queries? This is what’s known as ‘on-site SEO’ because it involves the actions you take on your site, as opposed to elsewhere on the web, to boost your site’s positioning within search engine results pages, also known as your search ranking. On-Site SEO is also called On-Page SEO. There are three main areas of your site that need optimizing for search: the site structure, the source code, and the body copy.

Website structure

When you hear people talking about the ‘structure’, ‘architecture’, or ‘hierarchy’ of a website, it just refers to how the site is organized – in other words, how different pages on the site link to one another. A well-structured site is important for both visitors and search engines. Visitors want to find what they need quickly and easily, while search engines use the structure to understand what the site is about and the relationships between different areas of content.

When it comes to designing the hierarchy of your site, you essentially have two choices: a deep hierarchy or a flat hierarchy. Deep hierarchies look more vertical when they’re mapped out: you start with a few headings, each linking to a small number of sub-headings, which in turn each link to another small set of sub-levels, and so on.

A flat hierarchy looks more horizontal – you start off with more headings, and the rest of your site’s content is listed under one of these.

The hierarchy that works best for your site will depend on your content. However, the flat hierarchy is often recommended as the best approach from an SEO perspective, simply because it reduces the number of ‘clicks’ that search crawlers need to make to find your content – the usual recommendation is that no page should sit more than four clicks deep.

When deciding on your site hierarchy, it can be very helpful to draw a diagram that maps out all of your main headings, categories, and sub-categories. This allows you to check the flow and logic of your design, and to ensure that all of your content is included.

However you choose to structure your site, your pages’ web addresses – also known as URLs – should reflect that structure and the keywords you’ve chosen. If you’re aiming for a maximum of four levels, then, rather than using generic headings that give no idea of the actual content of the page, structuring the URL as domain/category/subcategory/destination-page is a helpful place to start.
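To make that concrete, a hypothetical garden-supply site might give a product page a URL like:

https://example.com/plants/cacti/pincushion-cactus

This tells both visitors and crawlers exactly where the page sits in the hierarchy and what it’s about, in a way that something opaque like example.com/cat2/page17 never could.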

Your site hierarchy and URLs will come together to form what’s known as a sitemap – essentially a list of all the pages within your website, organized in a logical manner, with categories and headings. There are two kinds of sitemap: HTML and XML.

An HTML sitemap is an actual page on your website. These kinds of sitemaps are very helpful for visitors, especially if the site has a lot of content, and while they’re not essential for SEO, they are recommended.

An XML sitemap is a file, intended for use by search engines. It contains, among other things, your page URLs and the date they were last updated, allowing the crawler to check them against the date of its last visit and quickly see which pages need to be checked for updates.
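To give a sense of the format, here’s a minimal sitemap.xml containing a single entry – the URL and date are placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/plants/cacti/pincushion-cactus</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
</urlset>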

XML sitemaps can be generated automatically using one of the many free tools available online. Any pages that you don’t want to be indexed for public searches, such as login pages or password-protected material, should be excluded from your XML sitemap. Once your file is created, you’ll need to submit it to each search engine that you want to index your site – and each individual search engine will have instructions available for how to do this.

Beyond the site hierarchy, you’ll also want to consider your internal links.

Internal links are hyperlinks between pages on your site – a ‘related products’ link, for example, which leads to other, relevant content on your site. They can be very useful in limiting the number of clicks that visitors need to make, and in helping the search engines to crawl your site. Using links to connect pages with related content, products or services is much more visitor-friendly than trying to convey all of this information through the website menu and navigation tabs.

One particularly helpful form of internal linking is to use breadcrumbs. These are the navigational links you see at the top or bottom of a webpage that show visitors where they are in the site hierarchy and allow them to quickly access previous levels. For SEO purposes, breadcrumbs help search crawlers understand the site structure, and they will sometimes appear on the search engine results page in place of the full page URL.
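In HTML terms, a breadcrumb trail is often just a short run of links. A minimal sketch for our hypothetical cactus page might look like this – many sites also add structured data on top of the plain markup:

<nav>
  <a href="/">Home</a> &gt; <a href="/plants">Plants</a> &gt; <a href="/plants/cacti">Cacti</a> &gt; Pincushion Cactus
</nav>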

Internal links aren’t just useful for navigation – they can also improve your page’s search ranking through something called link equity, also known as link juice. Search engines will ascribe a certain level of link equity to individual pages on your site, according – among other things – to how recently they were updated, and the quality and relevance of the page with regards to your chosen keywords. Pages with higher link equity can pass ‘authority’ on to the pages they link to – so if you’ve got a high-ranking page, including internal links to other pages could help you boost those pages’ search rankings.

When you’re planning out your internal links, look at the common paths that customers take through your website that lead to conversions. What are the key pages – or pieces of content – that you want visitors and search crawlers to find? Make sure your anchor text – the actual words that users click to follow the link – contains your keywords and accurately reflects the topic of the destination page. That way, both visitors and search crawlers know where the link is taking them, and potential customers are more likely to click a descriptive link than one that just says ‘click here’ or ‘learn more’.

If you end up with a broken link – one that doesn’t lead anywhere, or leads somewhere you didn’t intend – you need to find it and fix it quickly. Online tools can report any missing pages or broken links back to you. If a broken link leads to a page that’s been deleted or permanently moved, the browser will display a 404 ‘page not found’ error. If this is the case, try including a custom 404 page that contains links to your other relevant content, to allow users and crawlers to find their way back into your site.

One word of warning when it comes to site structure: if the site you want to optimize already exists, be careful about tampering with its current structure. If some of its pages already have excellent search engine positioning, don’t rename or remove them – if you do, you’ll risk taking away the equity that you’ve built for those pages, and probably doing more harm than good to your overall search rankings in the process. A better practice is to include a permanent ‘301 redirect’ for the old URL, which basically means that users and search engines are rerouted to the new page, or to the most relevant page if the content has been removed.
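How you set up a 301 redirect depends on your server and platform. On an Apache server, for instance, one common approach is a single line in the .htaccess file – the paths here are hypothetical:

Redirect 301 /old-cactus-page https://example.com/plants/cacti/pincushion-cactus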

You might find there are areas of your website that you don’t want to appear in search results, like member-only, password-protected pages. In these cases, you need to specifically tell the search crawlers not to include these pages when they crawl your site. To do that, you’ll need to create a robots.txt file.

Robots.txt is a text file found at yoursite.com/robots.txt that tells search crawlers how to crawl your site. You can point them toward your high-priority pages to help ensure they’re indexed – that is, included in search engines’ databases of sites. You can also list URLs of pages on your site that should not be crawled, and provide the locations of your sitemaps so that the crawlers can find them easily and follow all of their links.
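A simple robots.txt might look like this – the disallowed paths are placeholders for whatever areas of your own site shouldn’t be crawled:

User-agent: *
Disallow: /members/
Disallow: /login/
Sitemap: https://example.com/sitemap.xml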

It’s not compulsory to create a robots.txt file – if search crawlers don’t find one for your site, they’ll crawl your site as normal and assume that every page should be indexed.

It’s worth noting that using robots.txt to tell crawlers not to index a certain page won’t necessarily prevent them from doing so if they can still get to the page via links from other parts of the site. Also, be aware that the robots.txt file is publicly available, so it’s not a good idea to list any pages that you wouldn’t want people to know about! In both of these situations, the only way to prevent crawlers from indexing these pages is to include a robots meta tag.

A robots meta tag is a piece of code included in the source code of your page – in other words, the ‘language’ read and used by a browser to display a webpage. It can prevent indexing at an individual page level by using the ‘noindex’ tag, and tell crawlers not to follow links from the page or create a cached copy of the page – in other words, a temporarily saved copy. This can be especially helpful if you need to create duplicate copies of a page – a printer-friendly version, for example – which would diminish the original page’s search ranking if crawlers were to find identical content available in multiple places.
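For example, the <head> section of a printer-friendly duplicate page might carry a tag like this one, telling crawlers not to index the page, not to follow its links, and not to keep a cached copy:

<meta name="robots" content="noindex, nofollow, noarchive">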

However you choose to structure your site, the main thing to keep in mind is: make it logical. If users can’t work out how to navigate to the content they’re looking for, they’ll leave – and if they can’t manage it, search crawlers won’t either, and you’ll end up missing out when it comes to your positioning on the results page.

Optimizing your pages

Once you’ve considered how to optimize your site overall, it’s time to think about optimizing individual pages for search.

The page source code is key to this, and among the most significant areas of the source code is the <head> section. This contains information about the page that is not visible in browsers, generally known as metadata.

When it comes to optimizing individual pages, one of the most important areas of the <head> section is the title tag. Make sure you don’t confuse it with the meta title tag, which is no longer used by search engines, or with the header tag that sits at the top of the page’s body copy.

The title tag forms the heading you see on the search results page, or when a link to the page is posted on social media – so rather than opting for something generic like ‘About Us’ or ‘Home Page’, choose a tag that actually tells users about the page and persuades them to follow the link. However, be aware that tags longer than 60 characters risk being cut short on the results page. You do want to include your primary keywords, but do it in a way that seems natural rather than stuffing your title tag full of them.

If you want to include your company name in the title tag, put it at the end rather than at the beginning, because keywords closer to the start of the tag carry greater weight with search engines.
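Putting those tips together, the title tag for our hypothetical cactus page might read something like this – descriptive, under 60 characters, with the keywords first and the (made-up) brand name last:

<title>Pincushion Cactus Care Guide | Example Plant Co</title>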

You’ll need to use a different title tag for every page of your site to avoid any potential issues with duplicate content. If search engines find the same content replicated in different locations, it could suggest that you’re copying and pasting rather than creating unique content – and, whether you’re copying from your own site or someone else’s, it’s a red flag to search engines that the contents of your page aren’t providing any unique value. Occasionally Google rewrites title tags for the results page, which may be a warning that your page has good content, but that your title tag needs work.

The next element of your source code we’re going to look at is the meta-description tag. It looks like this:

<meta name="description" content="your description goes here">

and it’s used to provide a brief description of the page. Although it doesn’t directly contribute to search ranking, it’s often used by search engines to form the descriptive text below the heading on the results page. This is your chance to write something that really showcases the value for the user of clicking on your link, so make sure it reads naturally, includes keywords, and talks about the benefits of the page content.

Just like your title tags, your meta-description tags need to be unique for every page. If search engines discover duplicate descriptions, or if they determine that your description doesn’t match the page content, you risk triggering spam flags and seeing a drop in your search rankings.

If Google does detect that something isn’t right with your meta-descriptions, it might select alternative text from somewhere else on your page. If this happens, try rewriting your meta-descriptions, or testing different versions.

Then there are canonical tags. They look like this:

<link rel="canonical" href="https://example.com/your-primary-page"/>

and they’re used when you have multiple URLs pointing to the same content.

In this situation, you can specify the primary page to be indexed by including the URL within the canonical tag. If you then include that same canonical tag in the <head> section of the duplicate versions of that page, crawlers will largely ignore them and you shouldn’t find your search ranking suffering.

Canonical issues can also be caused by the different ways in which a URL can be written. For example, it’s clear to us humans that these refer to the same home page:
domain.com/home
www.domain.com

However, to a search crawler these are different URLs, and so could lead to duplicate content issues. Because you can’t control exactly how other sites will write your URL when they link to you – which hopefully they will! – it’s good practice to always include a canonical tag on your home page so that crawlers will recognize yours as the original and ignore any other versions of the URL.

Something that can have a positive impact on your SEO rankings, perhaps counterintuitively, is linking away from your page to other high-quality sites and content. That’s because – provided the other site really is high-quality – outbound links demonstrate that you’re credible, that you have relationships in your industry or field, and that you’re providing value and additional resources for your visitors.

Any outbound links you do include should be clearly related to the topic of the page. You don’t want to fill your pages with outbound links – ideally, you shouldn’t have more than two or three per page. After all, remember that outbound links do take visitors away from your site, so make sure they open in a new browser tab so that your visitors can easily come back to you when they’re ready.
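In the markup, opening a link in a new tab means adding target="_blank" to the anchor tag; it’s also widely recommended to add rel="noopener" alongside it for security. The URL and anchor text here are placeholders:

<a href="https://example.org/cactus-care-research" target="_blank" rel="noopener">independent cactus care research</a>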


You might also want to include links or buttons to share your content on social media. Different search engines place different values on your content receiving likes and shares on social media, but ultimately, your content being shared demonstrates that it’s high-quality and valuable – which is exactly what search engines are looking for.

Choose the social platforms that are appropriate for your target audience and content, and make it as easy as possible for users to share your page’s content by offering a pre-formatted post that they can simply ‘click to share’ rather than expecting them to copy, paste and contextualize your URL.
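As one concrete illustration, a pre-formatted ‘click to share’ link for X (formerly Twitter) can be built with its tweet intent URL – the text and URL parameters below are placeholders, and both must be URL-encoded:

<a href="https://twitter.com/intent/tweet?text=How%20to%20care%20for%20a%20pincushion%20cactus&url=https%3A%2F%2Fexample.com%2Fplants%2Fcacti%2Fpincushion-cactus">Share on X</a>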

Outside of the individual elements of your page, there are some further technical considerations that will impact the search ranking for pages on your site. One of these is the page loading speed. Visitors will quickly leave your page if it takes too long to load; plus, search crawlers will only dedicate a certain amount of time to crawling your site, so if it’s wasted while your pages load, they might not get around to indexing the rest of your content.

Another technical consideration you’ll want to get to grips with is Transport Layer Security, or TLS. You might be more familiar with this under the term Secure Sockets Layer, or SSL, certificate – SSL being the older protocol that was retired and replaced by TLS.

Have you ever visited a website and spotted that the ‘http’ at the beginning of the URL has changed to ‘https’? That’s the indication that the site has TLS – a security protocol that keeps all the data you exchange with that site (credit card numbers or passwords, for example) private and safe from hacking.

Google takes this certification so seriously that it’s now included as a factor in how high your site will rank within the results, and its Chrome browser even warns users if they try to visit a site that isn’t certified.

All this talk of tags, links, and technology might seem complicated, but remember that it’s just a way of communicating to search crawlers what your site is about and how to navigate it. Think of these elements as signs that tell Google or other search engines that you’re credible and trustworthy, and that you deserve that prime placing on the results page.

SEO and body copy

As well as the way you structure your site and the code that tells the browser what to display when a user loads your website, the body copy of the site itself – in other words, the actual text that a user sees on your page – heavily impacts that page’s search rankings.

So how do you create content for your site that will help bolster your search rankings? One method is to use headings.

Headings in the body copy of the page are indicated by header tags. HTML – the standard language used in creating websites – allows for up to six ‘levels’ of headings, in descending order of significance, so <h1> is the most important. Any text you put between <h1> tags will generally appear in a larger, bold font. <h2> to <h6> tags are useful if you need further headings to break up the text, but they don’t carry the same weight when it comes to SEO.

<h1> tags tell both visitors and search engines what the page is about, so you should have unique text for each page. As with title tags, you’ll be penalized by the search engines if they deem your <h1> text irrelevant to the content of the page, or find you’ve duplicated the same text elsewhere.
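A sensible heading structure for our hypothetical cactus page, then, might look something like this sketch – one <h1> stating the page topic, with <h2> tags breaking up the sections beneath it:

<h1>Pincushion Cactus Care Guide</h1>
<h2>Watering</h2>
<h2>Light and temperature</h2>
<h2>Common problems</h2>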

When it comes to the content of the page, your keywords are important, and you should ensure they appear in the first 100-150 words of copy – but don’t sacrifice the quality of your content in doing so. Search engines will take engagement metrics such as bounce rate – the proportion of visitors who abandon your site without making a second click – and dwell time – how long people spend on your site on average – into account, so creating high-quality, grammatically correct content that’s readable and relevant is by far the most important on-site factor in determining your search rankings. And of course, if you’re ensuring the content relates to the topic of the page, you shouldn’t have to try to include keywords – they should just occur naturally within the text.

Just as you need to guard against duplicating your content across multiple pages, you should also be careful about having multiple pages focused on the same keywords. This can lead to a situation known as ‘keyword cannibalization’ where your pages end up competing with each other for positions on the results page for that keyword, and you lose control over which one is displayed. So, it’s a good idea to limit the focus of each page to just one keyword and to use variations of that keyword if you have other pages with similar content.

If the text of your page contains information that is naturally structured – for example, a recipe or a list – you might want to use Google’s ‘Structured Data Markup Helper’ tool. This allows you to indicate each separate element of your list to Google, which basically means that Google is able to better understand your content, and therefore to show it in response to relevant keywords. So, if you have a recipe for muffins that lists blueberries within the ingredients, Google can show your page when users search for ‘recipes with blueberries’.
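The markup the tool helps you produce can take more than one form; one common format is JSON-LD using the schema.org Recipe type, placed in the page’s source. A stripped-down sketch, with placeholder values, might look like this:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Blueberry Muffins",
  "recipeIngredient": ["2 cups flour", "1 cup blueberries"]
}
</script>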

It’s not just written copy that search engines take into account, either. Incorporating media such as audio, video, infographics and diagrams on your page can increase visitor engagement and dwell time, but be aware that any images you incorporate could be returned as results on Google’s Image Search – so if you’re using images, there’s a whole other results page for you to think about! The good news is, optimizing your images will help your results for both regular and image-based results pages.

Just as you would with URLs, you should use descriptive keywords as the filenames of your images rather than giving them generic labels; so, “pincushion-cactus.jpg” is much more effective than “image1.jpg”.

Alternative text tags are used to specify the words to display when an image can’t be shown in the browser – for example, when your internet connection is too slow for images to load. These should describe the image, and also include any relevant keywords.

For example, the alternative text tag for pincushion-cactus.jpg might be ‘small flowering pincushion cactus’:

<img src="pincushion-cactus.jpg" alt="small flowering pincushion cactus"/>

Essentially, your aim in writing body copy should be to make the message of your content as clear as possible to users and search engines, to offer real value, and to demonstrate why your page is relevant. Although there are a few technical tips for helping the search engines understand what you’re offering, you shouldn’t really find yourself having to go too far out of your way to optimize the text that visitors see for search – as long as your site genuinely provides what you claim it does, there shouldn’t be any reason to worry.
