HOW SEARCH ENGINES WORK

A search engine is a computer program designed to search the files stored on WWW, FTP, mailing list, or newsgroup services on one or more servers in a network. A search engine is a tool for finding information in the documents available. Search results are generally displayed as a list, often sorted by relevance or by the visitor ratio of a file, called hits. The information being sought may exist in various file types, such as web pages, images, or other formats.

How search engines work
Web search engines work by storing information about many web pages, which they retrieve from the WWW. These pages are retrieved by a web crawler, an automated web browser that follows every link it sees. The contents of each page are then analyzed to determine how the page should be indexed (for example, words are taken from the title, subheadings, or special fields called meta tags). Data about web pages are stored in an index database for use in later queries. Most search engines, such as Google, store all or part of the source page (called a cache) as well as information about the web page itself.
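As an illustration, here is a minimal sketch of such a crawler in Python, using only the standard library. The seed URL, the page limit, and the idea of storing only page titles are assumptions made for the example, not details from this article.

```python
# A minimal crawler sketch: fetch pages, extract links and titles,
# and follow the links breadth-first. Assumes standard library only.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkAndTitleParser(HTMLParser):
    """Collects href links and the page <title> while parsing HTML."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.title = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def crawl(seed_url, max_pages=10):
    """Follow links breadth-first, storing each page's title as a toy 'index'."""
    queue, seen, index = [seed_url], set(), {}
    while queue and len(index) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except (OSError, ValueError):
            continue  # skip dead or malformed links instead of crashing
        parser = LinkAndTitleParser()
        parser.feed(html)
        index[url] = parser.title.strip()
        for link in parser.links:
            full = urljoin(url, link)
            if full.startswith(("http://", "https://")):
                queue.append(full)
    return index

if __name__ == "__main__":
    for url, title in crawl("https://example.com").items():
        print(url, "->", title)
```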
In addition to web pages, search engines also store and deliver information in the form of search result links that refer to files, such as audio files, video files, images, and photos, as well as information about a person, a product, a service, and various other kinds of information that continue to be developed in line with the development of information technology.
When someone visits a search engine and enters a query, usually by typing keywords, the search engine looks up its index and returns a list of the web pages that best fit the criteria, usually with a short summary containing the document's title and sometimes parts of the text.
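A minimal sketch of such an index lookup, assuming an in-memory Python dictionary stands in for the index database and three made-up documents:

```python
# Build a tiny inverted index and answer keyword queries against it.
from collections import defaultdict

documents = {
    1: "search engines index web pages",
    2: "a web crawler follows every link",
    3: "pages are stored in an index database",
}

# Inverted index: word -> set of document ids containing that word.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def query(keywords):
    """Return ids of documents containing every keyword (AND semantics)."""
    sets = [index.get(word.lower(), set()) for word in keywords.split()]
    return set.intersection(*sets) if sets else set()

print(query("index pages"))  # -> {1, 3}
```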
Google, the largest search engine in the world, is one among hundreds of search engines and the one most widely used by the online community for finding blog URLs and articles on blog pages and websites. Looking closely at how it indexes a new URL, the algorithm the search engine uses seems very difficult to understand fully; there are said to be many formulas for reading the Google algorithm, but is that true? Nobody can be certain. Google uses giant supercomputers that can process huge amounts of data when indexing hundreds of millions of blog and website URLs around the world.
There is another type of search engine: the real-time search engine, such as Orase. Such engines do not use an index; the required information is collected only when there is a new search. Compared to index-based systems like Google, a real-time system is superior in several respects: the information is always fresh, there are (almost) no dead links, and fewer system resources are required. (Google uses nearly 100,000 computers, Orase only one.) However, there is also a disadvantage: searches take longer to complete.
A search engine's usefulness depends on the relevance of the results it provides. While there may be millions of web pages that contain a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results so that the "best" results come first. How an engine determines which pages are the best matches, and in what order the pages should be shown, varies greatly. These methods also change over time as Internet usage changes and new techniques evolve.
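One of the simplest possible ranking signals is how often the query words occur in a page. The sketch below scores and sorts a few made-up pages this way; the scoring rule is purely illustrative and is not how Google actually ranks.

```python
# Rank pages by a naive term-frequency score for the query words.
def score(page_text, query_words):
    """Count how often the query words occur in the page text."""
    words = page_text.lower().split()
    return sum(words.count(q.lower()) for q in query_words)

pages = {
    "a.html": "search engines rank pages so relevant pages come first",
    "b.html": "this page mentions search once",
    "c.html": "no matching terms here at all",
}

query = ["search", "pages"]
ranked = sorted(pages, key=lambda p: score(pages[p], query), reverse=True)
print(ranked)  # pages with the most query-word hits appear first
```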
Most web search engines are commercial ventures supported by advertising revenue, and hence some employ the controversial practice of allowing advertisers to pay to have their listings ranked higher in search results.

Search process
Performing a search for documents posted on a website can seem easy, yet it can be difficult as well, especially considering how widely information is spread; the University of California has said there are now over 50 billion web pages on the Internet, although no one really knows the exact number.
The difficulty arises because the WWW is not catalogued in a standardized form, unlike the catalog in a library, which follows a worldwide standard based on the subject and title of a book, even though the number of books is not small either.
In searching the web, users always have to predict which words appear on the page they want to find, or roughly which subject someone has chosen for the site pages they manage and which topics are discussed there.
When users perform what is known as a search on a web page, they are not actually performing the search themselves; it is not possible to search the WWW directly.
The web is really made up of many web pages stored on various servers around the world. A user's computer does not directly search all of those computers.
What users can do is access those computers through one or more intermediaries: the search tools available today. The search is performed on the tool's own database, a database that collects the sites that have been found and saved.
The search tool provides results in the form of hypertext links with URLs to other pages. When you click such a link, you go to the address of the documents, images, sounds, and other forms of content that exist on the server providing them, in accordance with the information they contain. This service can reach anywhere in the world.

General principles of search engines
In the performance of such a system there are a few things to note, especially with regard to its architecture and mechanisms.

Spider
A spider is a program that downloads the pages it finds, similar to a browser. The difference is that a browser immediately displays the information (whether text, images, or anything else) for the benefit of the human using it at that moment, while the spider does not present it in such a visible form, because its consumer is a machine, not a human; spiders are run by the machine automatically. Their purpose is to fetch the pages they visit so the pages can be stored in the database owned by the search engine.

Search Engine Crawler

Google, like other search engines, uses automated software to read, analyze, compare, and rank your pages. This means that the visual elements of a website such as layout, color, animation, Flash, simple visualizations, and other graphics are ignored. The Google search engine is like a blind person reading a book in Braille. The following are the main points you should know about the Google search engine:
So, what is a ranking?
If you type, for example, "ebook" into the search box on Google, you will get a list (by default, 10 listings per page) of what Google considers most relevant, filtered according to relevance.
The most relevant and most important websites are displayed in descending order (starting from the most relevant down to the less relevant). For Google, the relevance of a web page depends on how well the page matches the searched word or phrase.
Importance, on the other hand, depends on the quality and quantity (number) of links pointing to your page from other web pages. If your website does not appear in the top 30, it is because Google or search engine traffic changes rapidly, or because few people are looking for those pages.
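The classic way to turn "quality and quantity of incoming links" into a number is a PageRank-style iteration. Below is a minimal sketch over a three-page toy link graph; the damping factor of 0.85 and the iteration count are common textbook defaults, not values given in this article.

```python
# A PageRank-style importance score over a tiny, invented link graph.
links = {
    "a.html": ["b.html", "c.html"],
    "b.html": ["c.html"],
    "c.html": ["a.html"],
}

rank = {page: 1.0 / len(links) for page in links}
damping = 0.85  # textbook default, not from the article

for _ in range(20):
    new_rank = {}
    for page in links:
        # A page's importance comes from the pages that link to it,
        # each passing on a share of its own importance.
        incoming = sum(rank[src] / len(outs)
                       for src, outs in links.items() if page in outs)
        new_rank[page] = (1 - damping) / len(links) + damping * incoming
    rank = new_rank

for page, r in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {r:.3f}")
```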
When will Google come to visit your website?
To get listed in Google's index database, Google will visit your website using:
- Robot
- Spider
Both of these programs read every page, starting from the main page and following any links to all of your pages.
Google will not add your web pages to its index database unless there is at least one other web page in the index that links to your pages. So do not just submit your website directly to Google.
Arrange every link on your web pages in advance, before the site is submitted to Google.
The crawler is a program owned by the search engine that tracks down and finds the links contained on every page it meets. Its job is to determine where the spider should go and to evaluate the links, based on the addresses specified at the start. Crawlers follow links and try to find documents that are not yet known to the search engine.

Indexer
This component describes each page and examines its various elements, such as the text, headers, structure or stylistic features of the writing, special HTML tags, and so on.
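A minimal sketch of this kind of element extraction, assuming Python's standard-library HTML parser; which tags to track, and the tiny sample page, are choices made for the example.

```python
# Extract title, headings, and a meta keywords field from a page.
from html.parser import HTMLParser

class PageIndexer(HTMLParser):
    """Records text found inside title and heading elements, plus meta keywords."""
    TRACKED = {"title", "h1", "h2", "h3"}

    def __init__(self):
        super().__init__()
        self.fields = {}          # element name -> collected text
        self._current = None

    def handle_starttag(self, tag, attrs):
        if tag in self.TRACKED:
            self._current = tag
        elif tag == "meta":
            # Meta tags carry their content in attributes, not text nodes.
            attrs = dict(attrs)
            if attrs.get("name") == "keywords":
                self.fields["meta.keywords"] = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

    def handle_data(self, data):
        if self._current:
            self.fields.setdefault(self._current, "")
            self.fields[self._current] += data

indexer = PageIndexer()
indexer.feed('<title>Demo</title><meta name="keywords" content="search">'
             '<h1>How engines index</h1>')
print(indexer.fields)  # {'title': 'Demo', 'meta.keywords': 'search', 'h1': ...}
```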
Database

This is the standard place where the data of pages that have been visited, downloaded, and analyzed is stored; it is sometimes also called the index of the search engine.

Engine Result
This is the engine that classifies and ranks the search results. It determines which pages best meet the criteria of the search based on the user's request, and in what form they should be displayed.
This process is carried out according to the ranking algorithms owned by the search engine. The rules the engines use for ranking pages are their own right, and researchers study the properties they use, especially to improve the searches produced by search engines.
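As a rough illustration of what such a result engine does, the sketch below blends a relevance score with a link-based importance score for a few made-up pages; both the scores and the multiplicative blending rule are assumptions for the example, since the real algorithms are not public.

```python
# Combine two precomputed signals into a final ranking order.
pages = {
    "a.html": {"relevance": 3, "importance": 0.12},
    "b.html": {"relevance": 1, "importance": 0.55},
    "c.html": {"relevance": 2, "importance": 0.33},
}

def final_score(signals):
    """Blend relevance with importance; real engines use many more signals."""
    return signals["relevance"] * signals["importance"]

results = sorted(pages, key=lambda p: final_score(pages[p]), reverse=True)
for rank, page in enumerate(results, start=1):
    print(rank, page, round(final_score(pages[page]), 3))
```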
Web Server

This is the component that serves requests and sends responses back. Web servers usually produce information or documents in HTML format; the page they serve lets a user fill in the desired search keyword. The web server is also responsible for delivering the search results back to the computer that requested the information.
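A minimal sketch of such a front end, assuming Python's standard-library HTTP server; the tiny in-memory index, the URLs in it, and port 8000 are all invented for the example.

```python
# Serve search results over HTTP: read a keyword, reply with an HTML list.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

INDEX = {
    "crawler": ["https://example.com/crawler-basics"],
    "ranking": ["https://example.com/ranking-explained"],
}

class SearchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Read the keyword from a query string such as /?q=crawler
        query = parse_qs(urlparse(self.path).query).get("q", [""])[0]
        hits = INDEX.get(query.lower(), [])
        body = "<html><body><h1>Results</h1><ul>"
        body += "".join(f'<li><a href="{u}">{u}</a></li>' for u in hits)
        body += "</ul></body></html>"
        # Respond in HTML, as the web server component described above does.
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), SearchHandler).serve_forever()
```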