Wednesday, October 27, 2010

Thoroughness Challenge




The Thoroughness Challenge is a post consisting of paragraphs that contain spelling and/or grammatical errors. The paragraphs with the errors corrected and highlighted in red can be found at the end of the post.

Note: The purpose of the Challenge is thoroughness. You're only looking for errors in spelling and/or grammar. Names and places will NOT be misspelled, nor will there be any changes to punctuation or sentence structure.



Your Challenge today is about the early days of search engine optimization and contains 37 errors.

Good Luck!

*******************************************************************************

Webmasters and content providers began optimzing sites for search engines in the mid 1990s. Initially, all webmasters needed to do was submit the address of a page, or URL, to the varous engines which would send a "spider" to "crawl" that page, extract links to other pages from it, and return infomation found on the page to be indexed. The prosess involves a search engine spider downloading a page and storing it on the search engine's own server, where a second progam, known as an indexer, extracts varous information about the page, such as the words it contains and where these are located, as well as any weight for specfic words, and all links the page contains, which are then placed into a schedular for crawling at a latter date.

Site owners started to recogize the value of having there sites highly ranked and visable in search engine results, and the phrase "search engine optimazation" probably came into use in 1997.

Early versions of search algorithyms relied on webmaster-provided infomation such as the keyword meta tag, or index files in engines. Meta tags provide a guide to each page's content. Using meta data to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentally be an innacurate representaton of the site's actual content. Inacurate, inconplete, and inconsistant data in meta tags could and did cause pages to rank for irrelavant searches. Web content providers also maniplated a number of atributes within the HTML source of a page in an attempt to rank well in search engines.

By relying so much on factors such as keyword density, which were exclusivley within a webmaster's control, early search engines suffered from abuse and ranking maniplation. To provide better results to their users, search engines had to adept to ensure their results pages showed the most relavant search results, rather than unrelated pages stuffed with numrous keywords by unscrupulus webmasters. Since the sucess and popularity of a search engine is determned by its ability to produce the most relavant results to any given search, allowing those results to be false would turn users to find other search sources. Search engines responded by develping more complex ranking algorythms, taking into account addtional factors that were more diffcult for webmasters to munipulate.


Now, let's see how thorough you are!


*******************************************************************************

Webmasters and content providers began optimizing sites for search engines in the mid 1990s. Initially, all webmasters needed to do was submit the address of a page, or URL, to the various engines which would send a "spider" to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed. The process involves a search engine spider downloading a page and storing it on the search engine's own server, where a second program, known as an indexer, extracts various information about the page, such as the words it contains and where these are located, as well as any weight for specific words, and all links the page contains, which are then placed into a scheduler for crawling at a later date.
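The crawl-and-index pipeline described above can be sketched in a few lines of Python. This is only an illustration of the idea, not any real engine's code; the sample page, URL, and function name are invented.

```python
import re
from collections import defaultdict

def index_page(url, html_text):
    """Toy indexer: record each word's positions and collect outbound links.

    A real spider would download the page first; here the HTML text is
    passed in directly for illustration.
    """
    # Extract hrefs, which would be placed into a scheduler for a later crawl
    links = re.findall(r'href="([^"]+)"', html_text)
    # Strip tags, then record where each remaining word occurs on the page
    text = re.sub(r"<[^>]+>", " ", html_text)
    positions = defaultdict(list)
    for pos, word in enumerate(text.lower().split()):
        positions[word].append(pos)
    return {"url": url, "positions": dict(positions), "links": links}

page = '<p>Search engines crawl pages.</p> <a href="/next">next</a>'
entry = index_page("http://example.com", page)
```

The word positions are what let an indexer weight terms (for example, favoring words that appear early or often), and the collected links feed the next round of crawling.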

Site owners started to recognize the value of having their sites highly ranked and visible in search engine results, and the phrase "search engine optimization" probably came into use in 1997.

Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag, or index files in engines. Meta tags provide a guide to each page's content. Using meta data to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content. Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches. Web content providers also manipulated a number of attributes within the HTML source of a page in an attempt to rank well in search engines.
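Extracting the keyword meta tag that early engines relied on is simple enough to sketch with Python's standard-library HTML parser; the class name and sample page below are invented for illustration.

```python
from html.parser import HTMLParser

class KeywordMetaParser(HTMLParser):
    """Collect the content of <meta name="keywords" content="..."> tags."""

    def __init__(self):
        super().__init__()
        self.keywords = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "keywords":
            # Split the comma-separated keyword list the webmaster supplied
            self.keywords += [k.strip() for k in a.get("content", "").split(",")]

sample = '<head><meta name="keywords" content="search, optimization"></head>'
parser = KeywordMetaParser()
parser.feed(sample)
# parser.keywords now holds whatever the webmaster claimed the page is about,
# which, as the paragraph notes, may not match the actual content at all.
```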

By relying so much on factors such as keyword density, which were exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters. Since the success and popularity of a search engine is determined by its ability to produce the most relevant results to any given search, allowing those results to be false would turn users to find other search sources. Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate.
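Keyword density, the easily gamed signal mentioned above, is just a term's share of the page's total words, which is exactly why stuffing worked. The spammy example text here is made up.

```python
def keyword_density(text, keyword):
    """Fraction of the page's words that are the given keyword."""
    words = text.lower().split()
    return words.count(keyword.lower()) / len(words) if words else 0.0

stuffed = "cheap cheap cheap flights cheap cheap"
density = keyword_density(stuffed, "cheap")  # 5 of the 6 words
```

Because a webmaster controls every word on the page, this number can be pushed arbitrarily high, which is what drove engines toward signals outside the webmaster's direct control.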




Wishing all you Cool Cats a totally Cool and HAPPY day!





6 comments:

  1. I need these sweet kitties' glasses after that exercise! LOL!

    Thank you!

    Take care
    x

  2. I missed one :(
    But this is the best I've done here :)

  3. Got all but one! Helps when I understand the material as well.

  4. One slipped by on first read (since you did give the total errors we should find.) I will say the purple background made it harder to read. I would have pasted it into Word to get the black on white, but then I'd have gotten the red squiggles, which seemed like cheating!

    Terry
    Terry's Place
    Romance with a Twist--of Mystery

  5. Thanks everyone!

    Terry: The number of misspelled words is at the beginning of the post.

    Too bad you find the purple difficult. It's actually been scientifically proven that reading too much *on* white is bad for the eyes. That's one of the reasons book pages are an off-white.

    Have a great week everyone!


Note: Only a member of this blog may post a comment.


Copyright © 2009–2010 Crystal Clear Proofing. All Rights Reserved.