A Guide to SEO and Feeding Social Media

It’s no secret that search engine optimization is a must if you want your website to be noticed. There are many tweaks that can be applied, and each CMS takes its own approach to getting the most out of SEO. Scrivito and the Example App were designed to make SEO as easy as possible.

There is on-site and off-site search engine optimization. The former refers to following a set of rules regarding your website’s markup and content to attract more visitors through searches, while the latter is about increasing the number of links that point to your site. This guide is about on-site SEO.

On-site SEO with single-page applications (SPAs)

Traditional web pages mainly consist of HTML markup delivered by a server. Browsers and search engine crawlers are given the same markup after requesting a page for rendering or, respectively, processing and indexing it.

However, with single-page applications like those based on Scrivito, there is only one HTML file that can be requested. Its <body> usually consists of little more than a <script> element that contains or fetches the JavaScript code to run, meaning that all pages are brought into existence on the client side, dynamically, as the visitor navigates the site. No server-side URLs, no visible HTML markup.
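The markup such an SPA delivers typically looks something like this (file and element names are illustrative, not taken from any particular app):

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>My App</title>
  </head>
  <body>
    <!-- The only markup the server delivers; every page the visitor
         sees is rendered into this element by JavaScript. -->
    <div id="root"></div>
    <script src="/bundle.js"></script>
  </body>
</html>
```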

Browsers are perfectly equipped to handle this, but some of today’s search engine crawlers still aren’t, causing the dynamically created pages of SPAs to go unindexed. Google’s crawlers, however, have JavaScript activated, allowing them to render the pages, index them, and recognize and respect SEO signals in the DOM.

You could also have your SPA prerendered for crawlers, meaning that the JavaScript code that renders a page is executed on the server side to deliver HTML markup to them. If you are hosting your SPA on Netlify, activating prerendering is a matter of just a single click. For Scrivito websites, this option is turned on automatically.

To make Google Analytics work with an SPA, the tracking code needs to be executed manually on every page view. Check out Google’s Single Page Application Tracking instructions to learn how to achieve this.
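With analytics.js, for instance, updating the tracker on each in-app navigation boils down to two calls. Here is a minimal sketch, assuming a hypothetical onRouteChange callback; how you actually detect in-app navigation depends on your router:

```js
// Assumes the standard analytics.js snippet has already been loaded
// and has created the global ga() command queue.
onRouteChange((newPath) => {  // hypothetical router callback
  ga('set', 'page', newPath); // point the tracker at the new page
  ga('send', 'pageview');     // record a pageview for it
});
```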

SEO facts and best practices

Title and meta description

The title tag and the description meta tag provide search engines with the information you want them to display in search results. Both are page-specific and should be well thought out to make users want to click the search result, thus increasing the traffic to your site.

As a best practice, keep page titles between 40 and 60 characters, and make them as relevant, inviting, pointed, or informative as possible.

Since Google aims to maximize visitor satisfaction, a title’s relevance, and hence its usefulness with respect to the content of the page, is a ranking factor, so including keywords and phrases likely to be used as search terms is an excellent idea.

In the meta description, boil down the key topic of a page to 140 to 160 characters, again using recurring keywords and phrases. However, don’t stuff the description with keywords as this may result in a ranking penalty.
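Rendered into a page’s <head> section, these two elements might look as follows (the content is, of course, illustrative):

```html
<head>
  <!-- Title: 40 to 60 characters, containing likely search terms -->
  <title>Serverless Computing Explained | Example Corp</title>
  <!-- Description: 140 to 160 characters summarizing the key topic -->
  <meta name="description" content="Learn what serverless computing is, how it differs from classic hosting, and when it pays off for your web projects.">
</head>
```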

Headings structure pages

Headings (HTML tags <h1> through <h6>) are often seen as a mere means of styling, but they are more than that: they structure a page hierarchically, from the page title (<h1>) along the main headlines down to smaller sections, ideally forming a consistent sequence of content levels. In practice, though, the heading levels of frames, tooltips, and other layered elements are often chosen solely for their styles.

Not only for letting Google, Bing, Yahoo, etc. know which parts of your content you consider important, but also for converting web pages to other representations (e.g. PDF files), it is highly recommended to have exactly one <h1> element on each and every page, and to keep the hierarchy gapless down to <h3>, at least.
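A gapless hierarchy of this kind could look as follows (the indentation merely illustrates the nesting of the content levels):

```html
<h1>A Guide to SEO</h1>                   <!-- exactly one per page -->
  <h2>On-site SEO</h2>                    <!-- main section -->
    <h3>Title and meta description</h3>   <!-- subsection -->
    <h3>Headings structure pages</h3>
  <h2>Connecting to social media</h2>     <!-- next main section -->
```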

The “Headline” widget included in the Scrivito example app allows you to take care of exactly this. It lets you choose a headline’s style and its level tag separately to give editors more freedom regarding page styling. This way, you can have, for example, level-two headlines that look like level-three ones with their slightly smaller font – or vice versa.

Note that intelligently composed headlines are as relevant to ranking and findability as technical aspects such as heading levels. Wherever possible, weave a page- or site-related keyword into your headlines.

Keep the URL structure simple

URLs often reflect a website’s structure by including the path of a specific page in the URL, as in “/articles/tech/computing/serverless.html”. While Scrivito supports hierarchical websites structured by paths, it doesn’t include these paths in URLs. Instead, it derives URLs from two page-specific properties, the title and an identifier, as in “/creating-and-using-links-0070d554cc952740”.

The textual part of such URLs (“creating-and-using-links”) not only gives visitors a clue about what the page is about but also adds to SEO because Google honors human-readable URLs. This textual part is flexible in the sense that it may change (after the page title has been altered) without causing links and bookmarks to become invalid, because the uniqueness of a page is based on the ID part only. This is ideal as the ID of a page (or any other CMS object) never changes, not even after moving it to a different location on the website. Note that, as an alternative to the default “slug-ID” pattern, editors may assign permalinks to pages.

All this also satisfies Google’s most relevant URL-related recommendations: keep the URL structure simple and comprehensible, and avoid having multiple URLs for the same content (or, if this cannot be avoided, declare canonical URLs).

Controlling crawlers

Search bots and other user agents that comply with the Robots Exclusion Standard request and process a file named “robots.txt” before they start crawling a website. Using directives such as Allow and Disallow, all or particular user agents can be instructed to crawl or skip specific locations and documents of the site, allowing you to explicitly request or deny crawling and indexing.

Using the Sitemap directive in the “robots.txt” file, you can specify the location of an XML sitemap (see below) for speeding up the indexing process.
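A minimal “robots.txt” combining these directives could look like this (the paths are examples only):

```
User-agent: *
Disallow: /drafts/
Allow: /

Sitemap: https://www.example.com/sitemap.xml
```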

To provide a global “robots.txt” file for your Scrivito-based app or the Example App, place it in the “public” folder. When the app is built, the contents of this folder end up in the root directory of your app.

On the page level, you can set the indexing option as well as the meta description used for search results directly in the page properties.

The “Page description” causes a description meta tag to be generated, and, if (and only if) indexing is switched off, a robots meta tag with the content set to noindex is included in the <head> section of the page.
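So, with a page description set and indexing switched off, the page’s <head> section would contain tags along these lines:

```html
<meta name="description" content="A concise summary of this page.">
<meta name="robots" content="noindex">
```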

Specifying different or additional directives

Google recognizes further keywords in the robots meta tag (e.g. nosnippet and noarchive), and any directive valid in robots meta tags can also be specified as an X-Robots-Tag HTTP header.
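For example, to keep compliant crawlers from indexing a document and from showing a snippet of it, a server could deliver the document with the following response header:

```
X-Robots-Tag: noindex, nosnippet
```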

Should you want to extend or change the contents of the meta tags produced for a page, note that they are generated in “utils/getMetaData.js”.

Providing a sitemap

A sitemap XML file enables search engine crawlers to quickly determine the pages a website consists of. Based on the sitemap protocol, it reflects the structure of a website, identifying the URLs and optionally providing meta data such as the date the pages were last changed, or the frequency at which they are updated. Using this information, search engine bots can index a site significantly faster.

A sitemap is typically provided as a file named “sitemap.xml” in the root directory of an application. To make the sitemap known to search engines, add a Sitemap directive to your “robots.txt” file, or submit it using the respective search engine’s web interface (e.g. the Google Search Console).
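A minimal sitemap following the protocol looks like this (URLs and dates are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/creating-and-using-links-0070d554cc952740</loc>
    <lastmod>2021-06-01</lastmod>
    <changefreq>monthly</changefreq>
  </url>
  <!-- one <url> entry per page -->
</urlset>
```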

Connecting to social media

Social media signals are taken into account by Google’s ranking algorithm. However, social media integration usually isn’t aimed at improving a website’s rank. Instead, SEO and social media activities should be seen as equally significant components of an overall marketing strategy aimed at increasing brand awareness, engagement, and, as a consequence, reach and sales figures. Delivering high-quality content is a key factor in successfully harnessing social media in the context of such a strategy.

So, delivering content that counts, and strengthening and enlarging the community that appreciates it, is a more rewarding approach than merely keeping an eye on your site’s rank.

As showcased in the Example App, page-specific meta data for Twitter tweets and Facebook posts can be specified on the properties view of each page. A preview of the resulting message is displayed, too.
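Under the hood, such page-specific social metadata is commonly expressed as Open Graph and Twitter Card meta tags, for example (the values are illustrative):

```html
<meta property="og:title" content="A Guide to SEO">
<meta property="og:description" content="How to get the most out of on-site SEO.">
<meta property="og:image" content="https://www.example.com/preview.png">
<meta name="twitter:card" content="summary_large_image">
```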

Via the “Site settings” tab, the Twitter site name and the Facebook app ID can also be set for Twitter’s and Facebook’s crawlers.