Type: Process Essays
Sample donated: Jacqueline Walton
Last updated: June 11, 2019
The most important reason why SEO is necessary is that it makes your website more useful for both users and search engines. Although search engines cannot yet see a web page the way a human does, SEO helps them understand what each page is about and whether or not it is useful for users.
Let’s look at an example to make this concrete. Suppose we run an e-commerce store dedicated to the sale of children’s books. For the term “coloring pictures” there are about 673,000 searches per month. Assuming that the first result in a Google search gets 22% of clicks (CTR = 22%), we would get about 148,000 visits per month.

Now, how much are those 148,000 visits worth? If the average cost per click for this term is €0.20, we are talking about more than €29,000 per month.
And that is only in Spain. If our business is oriented to several countries, consider that every hour 1.4 billion searches are made worldwide. Of those searches, 70% of clicks go to organic results, and 75% of users never reach the second page.
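The arithmetic in the example above can be checked with a few lines of Python. The figures are the ones quoted in the text; real CTRs and CPCs vary by market and keyword:

```python
def organic_traffic_value(searches: int, ctr: float, cpc: float) -> tuple[int, float]:
    """Estimate monthly visits from a keyword and their equivalent paid-ads value.

    searches: monthly search volume for the term
    ctr:      click-through rate of the ranking position (0..1)
    cpc:      average cost per click in euros
    """
    visits = round(searches * ctr)
    value_eur = visits * cpc
    return visits, value_eur

# Figures from the example: 673,000 searches, 22% CTR, €0.20 CPC.
visits, value = organic_traffic_value(673_000, 0.22, 0.20)
print(visits, value)  # about 148,000 visits worth roughly €29,600/month
```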
If we consider all this, it is clear that many clicks per month go to the first result. SEO is the best way for users to find you through the searches for which your website is relevant. These users are looking for what you offer, and the best way to reach them is through a search engine.

3. How do search engines work?

The operation of a search engine can be summarized in two steps: crawling and indexing.

Crawling

A search engine crawls the web with programs called bots (also known as crawlers or spiders). These move through the pages by following links.
Hence the importance of a good link structure. Just as a user browsing the Web would, the bots pass from one link to another and collect data about the pages they visit, which they send back to their servers.

The crawling process begins with a list of web addresses from previous crawls and from sitemaps provided by other websites. Once the bots access these sites, they look for links to other pages to visit. The bots are especially attracted to new sites and to changes in existing ones.

It is the bots themselves that decide which pages to visit, how often, and how long to crawl each site; that is why it is essential to have a fast loading time and updated content.

It is very common to need to restrict the crawling of some pages or specific content on a website so that they do not appear in the search results.
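The link-following crawl described above can be sketched as a breadth-first traversal: start from a list of seed URLs, extract the links on each page, and queue any page not yet seen. This is a minimal illustration, not how any production crawler works; `fetch` is a hypothetical function assumed to return a page’s HTML as a string:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags, resolved against a base URL."""
    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.links: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def crawl(seed_urls, fetch, max_pages: int = 100):
    """Breadth-first crawl: start from seeds, follow links, skip pages already seen."""
    queue = deque(seed_urls)
    seen = set(seed_urls)
    visited = []
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        visited.append(url)  # a real bot would parse and store the page here
        extractor = LinkExtractor(url)
        extractor.feed(fetch(url))
        for link in extractor.links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return visited
```

A real crawler would also respect robots.txt, throttle requests, and revisit pages based on how often they change, which the article touches on next.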
For this, you can tell search engine bots not to crawl certain pages through the “robots.txt” file.

Indexing

Once a bot has crawled a website and collected the necessary information, the pages are included in an index, where they are ordered according to their content, their authority, and their relevance.
This way, when we make a query, it is much easier for the search engine to show us the results most related to our question.

At first, search engines ranked pages based on the number of times a word was repeated. For a given search, they looked up those terms in their index to find which pages contained them, ranking higher the page that repeated them the most. Today they are far more sophisticated and base their indexes on hundreds of different factors: the date of publication, whether the page contains images, videos or animations, microformats, and so on.
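The early term-frequency approach described above can be sketched with a tiny inverted index: each term maps to the pages that contain it and how often, and a query ranks pages by that count. A toy illustration, nothing like a real engine:

```python
from collections import Counter, defaultdict

def build_index(pages: dict[str, str]) -> dict[str, dict[str, int]]:
    """Inverted index: term -> {page: number of occurrences}."""
    index: dict[str, dict[str, int]] = defaultdict(dict)
    for url, text in pages.items():
        for term, count in Counter(text.lower().split()).items():
            index[term][url] = count
    return index

def search(index, query: str) -> list[str]:
    """Rank the pages containing the query term by how often it appears."""
    postings = index.get(query.lower(), {})
    return sorted(postings, key=postings.get, reverse=True)
```

Ranking purely by repetition is exactly why keyword stuffing used to work, and why modern engines weigh many more signals.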
Now they give more priority to the quality of the content.

Once the pages are crawled and indexed, the algorithm comes into play: algorithms are the computer processes that decide which pages appear higher or lower in the search results. When a search is performed, the algorithms check the indexes to determine the most relevant pages, taking into account hundreds of ranking factors. And all of this happens in a matter of milliseconds.
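As a rough illustration of combining many signals into one ranking, here is a sketch with made-up weights and signal names; real algorithms use hundreds of proprietary factors and are far more complex:

```python
# Hypothetical signal weights, for illustration only.
WEIGHTS = {"term_frequency": 0.4, "freshness": 0.2, "authority": 0.3, "has_media": 0.1}

def relevance_score(signals: dict[str, float]) -> float:
    """Combine per-page signals (each normalized to 0..1) into a single score."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

def rank(pages: dict[str, dict[str, float]]) -> list[str]:
    """Order pages by descending relevance score."""
    return sorted(pages, key=lambda p: relevance_score(pages[p]), reverse=True)
```

With such a scheme, a fresh, authoritative page can outrank one that merely repeats the query term more often, which is the shift the article describes.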