
Is Google’s Algorithm more human than you think?

Written by Jeanne Duhaze

Search Quality Raters: these users evaluate the quality of the results returned by Google’s algorithm. Behind the magic of automation, users’ questions and intentions and the search results themselves are analysed and evaluated, tightly framed by guidelines provided by Google. Their feedback is used by engineers to improve the quality of the search results offered to everyday users. However, SEO professionals are worried about the impact of these teams’ work on URL rankings. Despite claims to the contrary from Google managers, many remain sceptical about how all of this collected data is actually used.

The humans behind the algorithm

A little-known profession, but not a secret one

Since the early 2000s, people have been working on and analysing the results of Google’s algorithm. Today, there are approximately 10,000 of them around the world. They are ordinary people, users of search engines like everyone else. They applied for a part-time job offer at a third-party company such as Lionbridge or Leapforce and had to pass two tests in order to be selected: one testing their reasoning through questions, the other composed of ‘nearly real-life’ exercises. From home, they spend between 10 and 20 hours per week (paid between $12 and $15 per hour) studying and giving feedback on searches that have already taken place.

“In-our-shoes” analyses

The analysed results are mainly organic: text, images, videos and news results (and sometimes paid ad results as well). Each day, they are offered different tasks to evaluate search results. They can, for example, test a given URL and assess its relevance to a query on desktop or mobile. They also make side-by-side comparisons of the organic results of the same search and select the results that best match the query.

The companies provide them with information such as the language of the search, the location and sometimes a map of the queries previously searched, to better understand the user’s intention. Their purpose is to put themselves in the shoes of any user and determine whether the results are relevant to the query and its intent.


A closely monitored job

Each task has an estimated completion time, and the agencies time the Search Quality Raters during their tasks to judge their effectiveness. For example, evaluating the quality of a URL is estimated at 1 minute and 48 seconds. To ensure that the analysis is done without bias and with due care, the same tasks are assigned to several Search Quality Raters. If their results diverge, they are asked to reach an agreement together. In case of persistent disagreement, a moderator decides.


The Guidelines: Quality Made in Google

To frame the evaluation of search result quality as precisely as possible, Google provides guidelines (via the third-party companies). In 2015, after many leaks, Google finally decided to publish them officially.

Google updates them regularly to match the algorithm’s new objectives. The latest official publication dates back to July 20, 2018 and runs to 164 pages.

In the guidelines, Google explains to its Search Quality Raters how to evaluate the quality of the pages returned by its search engine. To do so, they carry out three types of rating.

Needs Met

The objective is to verify that the result matches the query and the user’s intention. For this, Google identifies four kinds of queries: those aiming to inquire (know), to act (do), to reach a specific site (website) and to visit a place (visit-in-person). The Search Quality Rater evaluates whether the result meets the need by placing a slider on a scale ranging from FailsM (Fails to Meet the need) to FullyM (Fully Meets the need). Some queries can be a mixture of several types.

Scale of the Needs Met Rating

A Search Quality Rater may decide not to assign a rating to content and to “flag” it instead in certain cases: if the material is pornographic, presented in a language different from that of the query, does not load, or contains upsetting and/or offensive content.
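
As a purely illustrative sketch of how the Needs Met scale, the query types and the flag cases described above could be represented, here is a short Python example. The type names (QueryIntent, NeedsMetRating, ContentFlag) and the flag-before-rating rule are assumptions for the illustration, not Google’s internal tooling; the intermediate levels of the slider come from the published guidelines.

```python
from enum import Enum
from typing import Optional, Union

class QueryIntent(Enum):
    """The four kinds of queries described in the guidelines (a query may mix several)."""
    KNOW = "know"                        # inquire about something
    DO = "do"                            # accomplish an action
    WEBSITE = "website"                  # reach a specific site
    VISIT_IN_PERSON = "visit-in-person"  # find a place to visit locally

class NeedsMetRating(Enum):
    """The Needs Met slider, from Fails to Meet to Fully Meets."""
    FAILS_M = 0
    SLIGHTLY_M = 1
    MODERATELY_M = 2
    HIGHLY_M = 3
    FULLY_M = 4

class ContentFlag(Enum):
    """Cases where a rater flags the result instead of rating it."""
    PORNOGRAPHIC = "porn"
    FOREIGN_LANGUAGE = "foreign language"
    DID_NOT_LOAD = "did not load"
    UPSETTING_OFFENSIVE = "upsetting and/or offensive"

def rate_result(meets_need: NeedsMetRating,
                flag: Optional[ContentFlag] = None) -> Union[NeedsMetRating, ContentFlag]:
    """Return the flag when one applies; otherwise return the Needs Met rating."""
    return flag if flag is not None else meets_need

# A "know" query whose result fully answers it, and a result that did not load at all.
query_intent = QueryIntent.KNOW
print(rate_result(NeedsMetRating.FULLY_M))
print(rate_result(NeedsMetRating.FAILS_M, ContentFlag.DID_NOT_LOAD))
```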


The E-A-T

The E-A-T acronym stands for Expertise, Authoritativeness and Trustworthiness. The Search Quality Raters assess the level of expertise of the content by verifying that the author of the main content has enough personal experience for it to be considered relevant.

They then assess the authoritativeness of the main content, the site and the author. A Search Quality Rater must find evidence of their reputation and recommendations from entities whose authority is already clearly established.

Finally, Trustworthiness is the confidence that users can place in the site. It is established for the main content, the website and the author.

This evaluation is in no way related to the query. Through these criteria, Google puts the emphasis on the benefit that the content brings to users. As it says on the Google Blog: “We built Google for the users, not for websites.” Through this rating, Google is also fighting back against the rise of fake news.

We built Google for the users, not for websites – The Google Blog

The Overall page quality rating


Unlike the Needs Met rating, this rating does not depend on the query. It combines five criteria: the purpose of the page, the E-A-T rating, the quality of the main content, the information found about the website, and the reputation of the website and the author.

Scale of the Overall Page Quality Rating
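
To make the relationship between these criteria concrete, here is a minimal Python sketch that groups them into a single record and derives one overall rating from it. The class names, the numeric scales and the aggregation rule are illustrative assumptions; the real guidelines leave the final judgement to the rater.

```python
from dataclasses import dataclass

@dataclass
class EATRating:
    """Expertise, Authoritativeness and Trustworthiness, each on an illustrative 0-2 scale."""
    expertise: int
    authoritativeness: int
    trustworthiness: int

@dataclass
class PageQualityEvaluation:
    """The five criteria listed above, feeding a single overall page quality rating."""
    page_purpose: str            # what the page is trying to achieve
    eat: EATRating               # the E-A-T rating
    main_content_quality: int    # quality of the main content (0 = low, 2 = high)
    website_information: int     # information found about the website and its author
    reputation: int              # reputation of the website and the author

    def overall(self) -> str:
        """Illustrative aggregation only; the guidelines leave the weighting to the rater."""
        score = (self.eat.expertise + self.eat.authoritativeness + self.eat.trustworthiness
                 + self.main_content_quality + self.website_information + self.reputation)
        if score >= 10:
            return "High"
        if score >= 5:
            return "Medium"
        return "Low"

evaluation = PageQualityEvaluation(
    page_purpose="explain how Search Quality Raters work",
    eat=EATRating(expertise=2, authoritativeness=1, trustworthiness=2),
    main_content_quality=2,
    website_information=1,
    reputation=1,
)
print(evaluation.overall())  # -> "Medium" with these illustrative scores
```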

The YMYL pages

Some pages are rated more strictly than others: the “Your Money, Your Life” (YMYL) page category, created by Google, groups pages containing medical, financial, legal, news and public/official information, as well as pages used for shopping or financial transactions. Their content can have a significant impact on the lives of the users reading them, which is why they must contain high-quality information.
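
A minimal sketch, under the assumption that the categories listed above are encoded as simple topic labels, of how one might check whether a page falls into the YMYL bucket and therefore deserves the stricter review; YMYL_TOPICS and requires_strict_review are hypothetical names.

```python
# Topics listed above as YMYL ("Your Money, Your Life"); illustrative labels only.
YMYL_TOPICS = {
    "medical", "financial", "legal", "news", "public or official information",
    "shopping", "financial transactions",
}

def requires_strict_review(page_topics) -> bool:
    """A page touching any YMYL topic is held to a higher quality bar by the raters."""
    return not YMYL_TOPICS.isdisjoint(page_topics)

print(requires_strict_review({"recipes"}))             # False
print(requires_strict_review({"medical", "recipes"}))  # True
```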

A quarter of the guidelines is dedicated to mobile queries and to assessing their results, especially for “visit-in-person” queries. Both the main content and the quality of the pages’ mobile optimisation play a full part in this evaluation.

Grey Areas around the ratings

The impact on the SERP ranking

Many experts have expressed concerns about the role of Search Quality Raters in the Search Engine Result Page (SERP). Can the evaluation of URL quality and the feedback from Search Quality Raters cause a downgrade? Is the collected data reused beyond refining the algorithm? In response, Matt Cutts, head of the webspam team at Google, said the feedback is only used to refine the algorithm: the webspam and quality-rater teams have two separate goals and are not connected.


Indeed, the process would begin with an evaluation of the quality of sites. Then, when engineers change the algorithm, Search Quality Raters assess the difference in quality through side-by-side evaluations, without knowing which side contains the result of the algorithm change and which is the old version. Engineers modify and improve the algorithm based on the Search Quality Raters’ feedback. They can then run a live test on a small percentage of users who are not Search Quality Raters.
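
The last step, a live test on a small share of users, can be pictured with a generic deterministic bucketing scheme such as the sketch below. This is a standard A/B-testing illustration in Python, not a description of Google’s actual experiment infrastructure; the function name in_live_experiment and the 1% default are assumptions for the example.

```python
import hashlib

def in_live_experiment(user_id: str, percentage: float = 1.0) -> bool:
    """Deterministically place roughly `percentage` percent of users in the new-ranking test."""
    # Hash the user id into a stable bucket between 0 and 9999.
    bucket = int(hashlib.sha256(user_id.encode("utf-8")).hexdigest(), 16) % 10_000
    return bucket < percentage * 100  # 1.0 % -> buckets 0..99 out of 10,000

# Only users falling into the experimental bucket would see results from the modified algorithm.
print(in_live_experiment("user-42"))
print(in_live_experiment("user-1337", percentage=5.0))
```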

However, even if in the short term the ranking of a page judged to be of poor quality is not altered, we can imagine that it will be in the long term. Indeed, the fact that a page presenting some of the characteristics considered to be of bad quality is rated as such by a Search Quality Rater will not, by itself, impact its ranking.

On the other hand, with each change to the algorithm, the engineers will make sure that only high-quality results appear among the top results.

The Search Quality Evaluator Guidelines as SEO bedtime reading  

The ratings of Search Quality Raters are therefore essential. Unfortunately, Google does not communicate them to content authors, but the guidelines framing them are public, which is why the Search Quality Evaluator Guidelines are an essential document for evaluating one’s own content. By carrying out this assessment ourselves, we are more than likely to find areas for improvement. Moreover, since SEO is an ongoing task, this evaluation should be repeated regularly, and especially whenever the guidelines are reworked.


About the author

Jeanne Duhaze
