Search engine
A search engine is a program designed to help find information stored on a computer system such as the World Wide Web or a personal computer. The search engine allows one to ask for content meeting specific criteria (typically content containing a given word or phrase) and retrieves a list of references that match those criteria. Search engines use regularly updated indexes to operate quickly and efficiently.
Without further qualification, search engine usually refers to a Web search engine, which searches for information on the public Web. Other kinds of search engines are enterprise search engines, which search intranets, and personal search engines, which search individual personal computers.
Some search engines also mine data available in newsgroups, large databases, or open directories like DMOZ.org. Unlike Web directories, which are maintained by human editors, search engines operate algorithmically.
History
The first Web search engine was "Wandex", a now-defunct index collected by the World Wide Web Wanderer, a web crawler developed by Matthew Gray at MIT in 1993. Another very early search engine, Aliweb, also appeared in 1993 and still runs today. One of the first engines to later become a major commercial endeavor was Lycos, which started at Carnegie Mellon University as a research project in 1994.
Soon after, many search engines appeared and vied for popularity. These included WebCrawler, Hotbot, Excite, Infoseek, Inktomi, Open Text, Northern Light, and AltaVista. In some ways they competed with popular directories such as Yahoo!. Later, the directories integrated or added on search engine technology for greater functionality.
In 2002, Yahoo! acquired Inktomi and in 2003, Yahoo! acquired Overture, which owned AlltheWeb and Altavista. In 2004, Yahoo! launched its own search engine based on the combined technologies of its acquisitions and providing a service that gave pre-eminence to the Web search engine over the directory.
Search engines were also some of the brightest stars in the Internet investing frenzy of the late 1990s. Several companies entered the market spectacularly, recording record gains during their initial public offerings. Some, such as Northern Light (http://www.northernlight.com/), one of the eight or nine early search engines that followed Lycos, have since taken down their public search engines and now market enterprise-only editions.
Before the advent of the Web, there were search engines for other protocols or uses, such as the Archie search engine for anonymous FTP sites and the Veronica search engine for the Gopher protocol.
Osmar R. Zaïane's From Resource Discovery to Knowledge Discovery on the Internet (http://citeseer.ist.psu.edu/117999.html) details the history of search engine technology prior to the emergence of Google.
Recent additions to the list of search engines include a9.com, AlltheWeb, Ask Jeeves, Clusty, Gigablast, Ez2Find, GoHook, Kartoo, Laplounge, Mamma, Plazoo, Snap, Teoma, Walhello and WiseNut.
Around 2001, the Google search engine rose to prominence. Its success was based in part on the concepts of link popularity and PageRank. PageRank takes into account how many other web sites and web pages link to a given page, on the premise that good or desirable pages are linked to more than others. The PageRank of the linking pages and the number of links on those pages contribute to the PageRank of the linked page. This makes it possible for Google to order its results by how many web sites link to each found page. Google's minimalist user interface was very popular with users and has since spawned a number of imitators.
Google and most other web search engines use not only PageRank but more than 150 criteria to determine relevancy. The algorithm "remembers" where it has been, indexes the number of cross-links, and relates these into groupings. PageRank is based on citation analysis, developed in the 1950s by Dr. Eugene Garfield at the University of Pennsylvania; Google's founders cite Garfield's work in their original paper. In this way, virtual communities of web pages are identified. Teoma's search technology uses a similar communities approach in its ranking algorithm, and NEC Research Institute has worked on related technology. Web link analysis was first developed by Dr. Jon Kleinberg and his team while working on the CLEVER project at IBM's Almaden research lab.
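The link-counting idea behind PageRank can be sketched in a few lines. This is a minimal illustration of the general principle described above (a made-up four-page link graph and the commonly cited damping factor of 0.85), not Google's actual implementation: a page's score is divided among the pages it links to, so a link from a highly ranked page with few outgoing links counts for more.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal scores
    for _ in range(iterations):
        # every page keeps a small base score, plus shares from its in-links
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, targets in links.items():
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical link graph: "c" is linked to by every other page.
graph = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
ranks = pagerank(graph)
# "c" ends up with the highest score, since all other pages link to it.
```

Real implementations must additionally handle pages with no outgoing links and scale to billions of pages, which this toy loop ignores.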
Challenges faced by search engines
- The web is growing much faster than any present-technology search engine can possibly index (see distributed web crawling).
- Many web pages are updated frequently, which forces the search engine to revisit them periodically.
- The queries one can make are currently limited to searching for key words, which may result in many false positives.
- Dynamically generated sites may be slow or difficult to index, or may result in excessive results from a single site.
- Many dynamically generated sites are not indexable by search engines; this phenomenon is known as the invisible web.
- Some search engines do not order the results by relevance, but rather according to how much money the sites have paid them.
- Some sites use tricks to manipulate the search engine to display them as the first result returned for some keywords. This can lead to some search results being polluted, with more relevant links being pushed down in the result list.
How search engines work
Web search engines work by storing information about a large number of web pages, which they retrieve from the Web itself. These pages are retrieved by a web crawler (sometimes also known as a spider), an automated web browser that follows every link it sees. The contents of each page are then analyzed to determine how it should be indexed (for example, words may be extracted from the titles, headings, or special fields called meta tags). Data about web pages is stored in an index database for use in later queries. Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages; others, such as AltaVista, store every word of every page they find. A cached page always holds the text that was actually indexed, so it can be very useful when the content of the live page has been updated and the search terms no longer appear in it. This problem might be considered a mild form of linkrot, and Google's handling of it increases usability by satisfying the principle of least astonishment: the user normally expects the search terms to appear on the returned pages. Beyond that relevance, cached pages may also contain data that is no longer available elsewhere.
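The indexing step described above is usually realized as an inverted index: each word maps to the set of pages containing it. The following toy sketch (with invented page names and text) shows the idea; real indexes also record word positions, frequencies, and field information such as titles and meta tags.

```python
from collections import defaultdict

# Hypothetical crawled pages: URL -> page text.
pages = {
    "page1.html": "search engines index the public web",
    "page2.html": "a web crawler follows every link it sees",
    "page3.html": "the index database answers later queries",
}

# Build the inverted index: word -> set of pages containing it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.lower().split():
        index[word].add(url)

# Looking up a word is now a single dictionary access,
# rather than a scan of every stored page.
index["web"]    # {"page1.html", "page2.html"}
index["index"]  # {"page1.html", "page3.html"}
```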
When a user comes to the search engine and makes a query, typically by giving key words, the engine looks up its index and provides a listing of the best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes parts of the text. Most search engines support the boolean operators AND, OR, and NOT to further specify the search query. An advanced feature is proximity search, which allows the user to define the maximum distance between keywords.
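Against an inverted index, those boolean operators reduce to set operations on the pages each word appears in. A minimal sketch, using a hypothetical hand-built index: AND intersects the page sets, OR unions them, and NOT subtracts from the set of all indexed pages.

```python
# Hypothetical inverted index: word -> set of pages containing it.
index = {
    "search": {"a.html", "b.html"},
    "engine": {"a.html", "c.html"},
    "crawler": {"b.html"},
}
all_pages = {"a.html", "b.html", "c.html"}

def AND(w1, w2):
    # pages containing both words
    return index.get(w1, set()) & index.get(w2, set())

def OR(w1, w2):
    # pages containing either word
    return index.get(w1, set()) | index.get(w2, set())

def NOT(w):
    # pages that do not contain the word
    return all_pages - index.get(w, set())

AND("search", "engine")                   # {"a.html"}
OR("engine", "crawler")                   # {"a.html", "b.html", "c.html"}
AND("search", "crawler") & NOT("engine")  # {"b.html"}
```

Proximity search goes one step further: the index must also store word positions within each page, so the engine can check that matching words occur within the requested distance of each other.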
The usefulness of a search engine depends on the relevance of the results it gives back. While there may be millions of Web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques evolve.
Most Web search engines are commercial ventures supported by advertising revenue and, as a result, some employ the controversial practice of allowing advertisers to pay money to have their listings ranked higher in search results.
The vast majority of search engines are run by private companies using proprietary algorithms and closed databases, the most popular currently being Google, MSN Search, and Yahoo!. Open-source search engine technology does exist, such as ht://Dig, Nutch, Egothor, and OpenFTS [1] (http://www.searchtools.com/tools/tools-opensource.html), but there is no publicly available World Wide Web search server using this technology.
See also
- Data mining
- History of the Internet
- List of search engines
- Metasearch engine
- Search engine optimization
- Search engine spammer
- Web indexing
- Search marketing
External links
- Articles and information on search engines and how they work (http://www.searchengineforums.com)
- On Search, the Series (http://www.tbray.org/ongoing/When/200x/2003/07/30/OnSearchTOC), Tim Bray, 2003 — A series of essays on search engine techniques.
- searchtools.com, a comprehensive list of free and commercial search software (http://www.searchtools.com/)
- Directory of Internet search engine resources (http://www.searchenginefinder.com/)