
Operations of a Search Engine

How does a search engine determine content relevance?

Search Content Relevance

Hypothetically, the most relevant search engine would have a team of experts on every subject in the entire world, a staff large enough to read, study, and evaluate every document published on the web so they could return the most accurate results for each query submitted by users.
The fastest search engine, on the other hand, would crawl a new URL the second it is published and introduce it into the general index immediately, so it could appear in query results only seconds after going live. The challenge for Google and every other engine is to find the balance between those two scenarios: to combine rapid crawling and indexing with a relevance algorithm that can be applied instantly to new content. In other words, they are trying to build scalable relevance.

With very few exceptions, Google is not interested in hand-removing specific content. Instead, its model is built around identifying characteristics in web content that indicate the content is especially relevant or irrelevant, so that content across the web with those same characteristics can be similarly promoted or demoted.

This course frequently discusses the benefits of content created with the user in mind. To some hardcore SEOs, Google's "think about the user" advice is unsatisfying; they would much prefer to know a secret line of code or server technique that bypasses the need to create engaging content.

Focus on creating relevant Content

While it may seem strange, Google's focus on relevant, user-focused content really is the key to its algorithm of scalable relevance. Google is constantly trying to find ways to reward content that truly answers users' questions, and ways to minimize or filter out content built for content's sake. While this course discusses techniques for making your content visible and accessible to engines, remember that this means content constructed with users in mind: innovative, helpful, and designed to serve the query intent of human users.

Operations of a Search Engine - Past (1999) and Present

1) Automated robot or spider programs read information day after day from websites, following the links from the pages they are reading.

2) The information is stored and indexed in the search service's database.

3) The user composes a search query from keywords and symbols that restrict or expand a search, and submits the query to the search engine.

4) The search engine searches the service's database with its software for matches to your search query.

5) Matches or hits are then assembled into a list of search results.
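Steps 2, 4, and 5 above can be sketched with a toy inverted index: store each word alongside the documents that contain it, then intersect those document sets for a keyword query. This is a minimal illustration only; the document names and text below are hypothetical, and real engines layer ranking, stemming, and much more on top of this idea.

```python
# Toy sketch of steps 2, 4, and 5: index documents, match a query,
# and assemble the hits into a result list.
from collections import defaultdict

def build_index(docs):
    """Step 2: map each word to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, query):
    """Steps 4-5: find documents matching every query keyword,
    then assemble the hits into a sorted result list."""
    words = query.lower().split()
    if not words:
        return []
    hits = set.intersection(*(index.get(w, set()) for w in words))
    return sorted(hits)

# Hypothetical "crawled" pages standing in for the service's database.
docs = {
    "page1": "search engines crawl and index the web",
    "page2": "relevant content serves the query intent of users",
    "page3": "engines index content and rank query matches",
}
index = build_index(docs)
print(search(index, "index query"))  # -> ['page3']
```

Using a set per word makes the "match every keyword" step a simple set intersection; a real engine would instead score and rank partial matches rather than require all keywords.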