Logan Mancini

From Hst250
Jump to: navigation, search

Wiki Entry #1: Census of 1890

As the population of the United States of America grew, so did the difficulty of counting it. The census is conducted roughly every 10 years, and the 1880 census took 8 years to complete (Watrall, 2012). With the ever-growing population, the 1890 census was bound to take even longer. Faced with this problem, the United States government needed a faster means of tabulating the census, so it asked for solutions and offered a contract to the winning idea, or in this case, the winning machine (Watrall, 2012).

Herman Hollerith, an eventual founder of International Business Machines (IBM), submitted the design of his "Hollerith Desk." Hollerith believed an electronic device was needed to aid in the tabulation of the 1890 census. Unlike most prior tabulation machines, which were purely mechanical, the Hollerith Desk was electronic. Hollerith's design won the government's contract for the census, and the Hollerith Desk was officially born (Watrall, 2012).

The Hollerith Desk was much smaller than previous tabulation machine designs. The machine read information off of punch cards and displayed the running totals on dial displays on its left side (Watrall, 2012). With the tabulation handled by the machine, the only task left for humans was to punch the information onto the cards. Because a machine performed the complex calculations, the results were both faster and more accurate than hand tallies. With the use of the Hollerith Desk, the Census of 1890 took only 3 years to complete (Watrall, 2012). America was shocked by the staggering results; some Americans refused to believe the census could be completed so quickly and assumed the numbers were incorrect.
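The tallying idea can be illustrated in modern code. The sketch below is a loose, hypothetical analogy, not a description of the actual hardware: each punch card records one person's answers, and a counter, standing in for a dial on the machine, advances for every hole read.

```python
from collections import Counter

# Hypothetical simplification: each card is one person's census answers,
# recorded as punched categories. The "machine" advances one dial per
# (field, value) combination as it reads each card.
cards = [
    {"sex": "male", "age_group": "20-29", "state": "NY"},
    {"sex": "female", "age_group": "30-39", "state": "NY"},
    {"sex": "male", "age_group": "20-29", "state": "OH"},
]

dials = Counter()  # one dial per (field, value) combination
for card in cards:
    for field, value in card.items():
        dials[(field, value)] += 1  # a dial advances when a hole is read

print(dials[("sex", "male")])   # 2
print(dials[("state", "NY")])   # 2
```

The point of the analogy is that the human only prepares the cards; the counting itself is mechanical and therefore far less error-prone.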

Because the Hollerith Desk was wildly successful, Herman Hollerith made history with his invention. This was a huge event for electronic computation machines. The Hollerith Desk shattered the public's expectations with its staggeringly short time to tabulate the Census of 1890. Not only was the census completed 5 years faster, but more accurately too. Because large tabulations are complicated, human error is to be expected; with the Hollerith Desk completing the calculations, most human error could be eliminated. The data was simply punched into cards and the machine completed the tabulations.

Herman Hollerith continued to produce tabulation machines for years after the creation of the Hollerith Desk. Because America saw the potential of electronic tabulation machines in the 1890 census, computers began to catch on. Hollerith eventually joined with other businessmen to form one of the most important companies in the computer industry: IBM (Watrall, 2012). IBM still exists to this day, one of the longest-standing computer companies of all time. Without the Census of 1890 to showcase the Hollerith Desk, IBM and electronic computation may never have gotten such an early boost toward the computer era.


Watrall, Ethan. HST 250 Week 2 Lecture. Lecture, [PowerPoint slides]. History 250. 23 May 2012. Retrieved from http://history.msu.edu/hst250/files/2009/04/week3lecture.mov


Wiki Entry #2: The Tech Model Railroad Club

History

The Tech Model Railroad Club (TMRC) was formed in 1946 at the Massachusetts Institute of Technology (MIT) by diehard technology enthusiasts. The club provided a major boost to the video gaming era in America by producing the game "Spacewar!" The club still exists at MIT to this day and continues to be on the cutting edge of technology.

The members of the Tech Model Railroad Club were young students with a passion for technology. Originally, as the name implies, the students were interested in model railroads. As new technology developed, so did the interests of the students in the TMRC. When computers became more widely available at universities, the club shifted its focus to computing.

Spacewar! Video Game

MIT received a PDP-1 that was made available to the TMRC in the early 1960s, and the club wanted to show off the capabilities of the new computer. After deliberation, the students came up with the idea of a game that would pit rockets against each other in outer space. In 1962 the game was completed and made available to play. The game became very popular across the country, being played at universities everywhere. ARPAnet linked universities across the country, distributing it widely, and DEC, the developer of the PDP-1, included the game on all new PDP-1s sold. This gave a drastic, positive push to the era of video game development.

Importance of the TMRC for video games

The TMRC was huge for the development of video games. The students did not know it at the time, but they had created the first widely successful video game. The students who created Spacewar! were among the first hackers in America. The term "hacker" had a different meaning than it does today; a "hack" was more of a compliment than anything. A hacker was someone who could modify systems to complete a task the machine was not originally meant to perform. These hackers pushed the capabilities of early computers beyond what people thought possible, and the TMRC did just that. With the new PDP-1 available to the club, the hackers were able to take advantage of its vast capabilities. The PDP-1 had a very large screen and enough computing power to run a real game.

The principles of hacker culture were of extreme importance to the video game industry. Because of the TMRC, Spacewar! became available to new hackers across the country. Those interested in video games were the same people doing research in the computer field. The students, and even professional researchers, researched by day and hacked by night.

At a cost of $100,000 per PDP-1, the developers of the game were not concerned with patenting it. In hindsight, this can be seen as a very big mistake: Spacewar! was wildly popular, and the creators stood to make quite a large sum of money off the game. But as the game spread across the country, new variations sprang up from school to school. Had the game been patented, that new development might not have been encouraged, stifling the video game generation.

Citation

Watrall, Ethan. HST 250 Week 3 Lecture, Part 1. Lecture, [PowerPoint slides]. History 250. 28 May 2012. Retrieved from http://history.msu.edu/hst250/files/2009/04/videogamelecturept1.mov

Wiki Entry #3: DARPA

The Defense Advanced Research Projects Agency (DARPA) was created in 1958 to prevent strategic surprises from befalling the United States of America. The goal was to keep America's technology more advanced than that of any other country in the world. The agency was established in response to the Russians beating the U.S. to space: with the surprise launch of Sputnik, America realized it had fallen behind. DARPA was founded and has produced some of the most sophisticated technology known to man ever since (DARPA, 1).

DARPA consists of six program offices, each with its own specific goals and work areas.

Adaptive Execution Office (AEO): “Working with Combatant Commands (COCOMs) and Service partners, AEO establishes relationships that enable the rapid insertion of these technologies into military operations and exercises to address requirements and enhance war fighting capabilities” (DoD).

Defense Sciences Office (DSO): “bridge the gap from fundamental science to applications by identifying and pursuing the most promising ideas within the science and engineering research communities” (DoD).

Information Innovation Office (I2O): “I2O aims to ensure U.S. technological superiority in all areas where information can provide a decisive military advantage. I2O works to ensure U.S. technological superiority in these areas by conceptualizing and executing advanced research and development (R&D) projects” (DoD).

Microsystems Technology Office (MTO): "MTO is leading pioneering research in Integrated Microsystems as 'platforms-on-a-chip' to enable revolutionary performance and functionality for future DoD systems" (DoD).

Strategic Technology Office (STO): “undertakes research and development of innovative technologies to support the Department of Defense” (DoD).

Tactical Technology Office (TTO): “transforms the future of warfighting by pursuing high-risk, high payoff tactical technology and development of rapid, mobile and responsive combat capability for advanced weapons, platforms and space systems” (DoD).

DARPA's advanced research has led to major breakthroughs. DARPA has had a huge hand in creating many technologies the entire world uses today, such as computer networking and NLS (West's Encyclopedia of American Law). These advancements were inspired and funded by DARPA and have changed how the world operates.

NLS, or the "oN-Line System," was the first system that truly linked together a mouse, keyboard, monitor, and the use of hyperlinks. This system of using hyperlinks to navigate to information is the core of the internet (Engelbart, pg). Linking bits of information together, to be accessed by an individual clicking on the screen, was the foundation for the internet. By networking computers together, research DARPA also funded, the formulation of the internet was born. Because of DARPA, these technologies were created far before any other researcher or company could have produced such products. DARPA has a huge budget that allows it to employ the most advanced and recognized sciences and personnel the world has to offer, and it does its best to keep America ahead of its opponents in this new technological era. When the public is lucky enough to receive some of its products, the whole world benefits. DARPA continues to produce the most sophisticated and advanced systems, which the world will continue to use, just like computer networking, the internet, and NLS.

Citations

Defense Advanced Research Projects Agency. “Bridging the Gap Powered By Ideas.” Defense Advanced Research Projects Agency, Arlington, VA, 22201.

Douglas C. Engelbart (June 1986). "The Augmented Knowledge Workshop". Proceedings of the ACM Conference on the History of Personal Workstations (Palo Alto, California: ACM). DOI:10.1145/12178.12184. ISBN 0-89791-176-8.

DoD. "Defense Advanced Research Projects Agency." Defense Advanced Research Projects Agency. Web. 22 June 2012. <http://www.darpa.mil/>.

West's Encyclopedia of American Law, edition 2. "Internet legal definition of Internet". Free Online Law Dictionary. July 15, 2009.



Wiki Article: Web Search Engines

Overview

Web search engines have revolutionized our lives and the way we interact with the internet. During the mid-to-late 1990s, search engines became very popular because they can locate electronic information so quickly. A web search engine is defined as "a software program that searches the internet based on the words that you designate as search terms" (History of Search Engines). This is a rudimentary definition, considering everything that takes place when a user searches the internet with a search engine. Since their creation, search engines have grown into tailor-made tools that scour the depths of the internet in an instant. Search engines like Google and Bing bring endless amounts of information to people's fingertips in a way that was once never possible. The changes that made this possible are enormous, and the way the world operates has been changed forever.

History

To begin, the first search engine was created back in 1990, when computer science students at McGill University in Montreal, Canada needed to search files on their public file transfer sites. The search engine was called "Archie," which is just the word "archive" without the "v" (Seymour, 47). Still, this was not quite a tool for the web. Over the next couple of years, search engines became more refined and then made their debut on the web. It was not until the summer of 1993 that the first web search engine was created, when Matthew Gray of MIT built "what was probably the first web robot, the Perl-based World Wide Web Wanderer, and used it to generate an index called 'Wandex'" (Seymour, 48). This encompasses the defining element of high-tech search engines: the use of a web robot to search the web for information. One of the most important aspects of search engines today is that they allow the user to search for any word on a webpage. The first search engine to utilize this profound technique was called WebCrawler, released in 1994 (Seymour, 48). With information from entire webpages available to be searched, the search engine was well on its way to becoming the power tool it is today. Even more importantly, this was the first engine to become widely known by the public, and its newfound fame exploded. Companies like Magellan, Infoseek, Northern Light, and Yahoo! were all among the early contenders for the search engine market. The true innovation of search engines began to come into play at this point. Many companies fell into the trap of forcing users through their portal or internet program to use their search engine; the successful search engines let users search from any web browser page, and those obviously worked the best.

The workings of an internet search engine have become extremely advanced and even personalized. When the first search engines were introduced, as with anything new, they were fairly simple. The first system, Archie, simply told the user where the file name they searched for was located; it could list many paths and directories if the file was found in numerous locations. Archie was entirely UNIX-based and required a great deal of knowledge to use to its full capacity (Seymour, 49). In 1993, the next major breakthrough for web search engines occurred. A new browser/search engine called W3 Catalog "exploited the fact that many high-quality, manually maintained lists of web resources… W3 Catalog simply mirrored these pages, reformatted the contents and provided dynamic querying" (Seymour, 50). This allowed W3 Catalog to use the legwork of website developers to do the hard part of providing dynamic querying. The World Wide Web Wanderer followed quickly in W3 Catalog's footsteps with the invention of the web robot. The idea of a machine "crawling" or "wandering" all over the internet, collecting information wherever it goes, is the quintessential point of search engines. This was a huge innovation for search engines and for the progress of those that followed.

December 1993 saw the release of Jump Station, the very first true WWW search engine. Jump Station used a combination of everything the search engines before it had done, making it truly efficient: it employed a web robot to compile websites for its index and then used a "web form as the interface to its query program" (Seymour, 50). By combining the three major features of crawling, indexing, and searching, it was rightly the first web search engine ever. Because technology was still lacking, Jump Station could not store all the data its crawler came across; the only information the crawler stored and indexed were the titles and headings found on web pages. This was still quite rudimentary compared to the capabilities search engines possess today, but nonetheless, it was the first true search engine, a truly iconic event that would change how the world learns forever. Shortly behind the release of Jump Station, WebCrawler was released in the spring of 1994. WebCrawler was essentially the same as Jump Station, but it provided full-text search, not just titles and headings, yet another advancement just months after Jump Station's release (Seymour, 50).

With the release of MetaCrawler, yet another advancement was made. MetaCrawler did not use just one search engine to find results; it pooled results from multiple search engines to find what the user was looking for. Right behind MetaCrawler came AltaVista, the fastest search engine of its day, which made it one of the most popular because there was no lag or degradation even with a million hits a day. What made AltaVista unique among the other sites was its "natural language search." This allowed the user to type "Where is London? without getting a million-plus pages referring to 'where' and 'is'" (Seymour, 51). This allowed for much more user-friendly searches, making search engines even more popular and useful. During 1995, search engines became wildly successful, and from '95 to '98, many similar search engines came out and vied for position.
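MetaCrawler's pooling approach can be sketched in a few lines of code. This is a hypothetical illustration with stubbed-in engines and made-up URLs, not MetaCrawler's actual implementation: each "engine" returns its own ranked list, and the meta-searcher merges them while dropping duplicates.

```python
# Hypothetical stand-ins for two underlying search engines; a real
# meta-searcher would send the query to each engine over the network.
def engine_a(query):
    return ["london.gov.uk", "wikipedia.org/London", "visitlondon.com"]

def engine_b(query):
    return ["wikipedia.org/London", "timeout.com/london"]

def meta_search(query, engines):
    """Merge result lists from several engines, deduplicating URLs."""
    seen, merged = set(), []
    for engine in engines:
        for url in engine(query):
            if url not in seen:   # keep only the first occurrence
                seen.add(url)
                merged.append(url)
    return merged

results = meta_search("London", [engine_a, engine_b])
print(len(results))  # 4 unique URLs from 5 total results
```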

Not until 1998 did the search engine world get a radical new competitor: Google. Google quickly rose above its competition because of its own patented algorithm, PageRank. This revolutionary system ranks web pages by analyzing "human generated links, assuming that web pages linked from many important pages are themselves likely to be important" (Seymour, 52). This is strikingly different from the generic search engines of 1998. Typically, search engines used a keyword-based method, ranking results by how many times the search terms occurred or how related the terms were to the page (Seymour, 52). With this new, advanced way of providing search results, Google started to dominate the field. PageRank was not the only thing that distinguished Google from the others: Google also has many "secret" criteria it uses to produce results, and it is thought to provide results that "correlate well with human concepts of importance" (Seymour, 52).
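The core PageRank idea can be shown with a tiny power-iteration sketch. The three-page link graph below is made up for illustration, and the damping factor of 0.85 comes from the original PageRank paper; this is a minimal demonstration of the principle, not Google's actual code.

```python
# Hypothetical three-page web: page -> pages it links to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Iteratively redistribute rank along links until it stabilizes."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal rank
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)  # split rank among outlinks
            for target in outlinks:
                new[target] += damping * share  # links from important pages count more
        rank = new
    return rank

ranks = pagerank(links)
# "C" is linked from both A and B, so it ends up with the highest rank
```

Note how the ranking comes entirely from the link structure, not from counting keyword occurrences as the older engines did.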

Other competitors have tried to compete with Google, but most fall far short. Bing was released by Microsoft on June 3rd, 2009 and has failed to capture much of the search engine market. Microsoft tried to implement search criteria similar to Google's, but it has not proved to be any better (Seymour, 52). The workings of a good web search engine lie in the three base functions all search engines must perform: web crawling, indexing, and searching.

How Search Engines Work

Every search engine must complete three base tasks: find information on the internet, store it, and then search it when the user asks for it. How well the search engine performs all three of these tasks (web crawling, indexing, and searching) determines how successful the engine is. First, the search engine must develop and deploy a "web crawler" or "spider" that goes out into the depths of the internet and gathers information, storing the mass amounts of data it encounters. Then the information the crawler finds is indexed, making it easier to find when "queried," or in other words, searched for. How the information is indexed determines which data comes back as a result when queried (Seymour, 53). Think of the process this way: when a user searches, or "queries," for information, the search engine first looks into the index (like you would find in a book), locates where the information is, and then relays that information back to the user via the results. These three base tasks are extremely important for the success of the engine. If the crawler is not fast enough or cannot gather and store sufficient information, then search results will come from only a small portion of the web. Even if the web crawler grabs every single bit of data on the internet, if the indexer does not index the data properly and efficiently, search results will not be accurate and will give the user useless information.
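The index-then-search half of this pipeline can be sketched with a toy inverted index. The handful of "already-crawled" pages and their URLs below are hypothetical; a real crawler would fetch them over HTTP.

```python
# Stand-in for the crawler's haul: url -> page text (hypothetical data).
pages = {
    "a.com": "model railroad club history",
    "b.com": "history of search engines",
    "c.com": "search engine market share",
}

# Indexing: build an inverted index mapping each word to the URLs
# whose text contains it (the "index like you would find in a book").
index = {}
for url, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

# Searching: a query returns the pages containing every query term.
def search(query):
    results = [index.get(word, set()) for word in query.split()]
    return set.intersection(*results) if results else set()

print(sorted(search("search history")))  # ['b.com']
```

If the index step were skipped, every query would require rescanning every page, which is exactly why indexing quality matters as much as crawl coverage.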

Profit and Business Models

Search engines operate on the basis that advertisers will pay sums of money for ads displayed within search results or in the whitespace surrounding them. The bulk of a search engine's profits come from search ads, or "paid search" in industry speak (Guth). When a user searches, "the site spits out the results, plus paid advertiser links. Whenever someone clicks on one of the paid links, the search engine collects a fee from the advertiser" (Guth). The U.S. paid search market reached $12.3 billion in 2009, which paints a clear picture of why this market has exploded so drastically in the past decade: big money is to be made in this industry. In the early days of search engines, large corporations such as MSN and Yahoo did not focus on paid search. Google quickly learned that paid search was a primary source of income and capitalized on the failure of its competitors. This move to actively target paid search was a huge advantage and led to Google's utter dominance of the industry (Guth).

Market Shares

As of late 2010, there are three main contenders in the search engine market, although two stand in the rather large shadow of the leader. Google ranks #1 with a staggering 91% of the market share, followed by Yahoo at 4% and Bing at 3% (Seymour, 56). Google has dominated the market because of its patented PageRank system and its extraordinary use of paid search, both of which it took advantage of early on, before other companies realized the profits to be made in the industry. Google has gone on to own numerous other companies, such as YouTube and Picasa (Blakeman, 48).

Search Engine Bias

Throughout search engines' short lifespan, they have already experienced biases and discrepancies. Studies have shown that due to "various political, economic, and social biases," search results are not always displayed properly (Seymour, 57). These discrepancies sometimes arise honestly, but many instances have shown that they are a direct result of economic and commercial processes. Search engines embedding specific ads into search results is a prime example of a real scenario that produces bias in results. SOPA and PROTECT IP have a very real potential to cause bias in search results too. If passed, advertisements will operate differently than they do today; there is much speculation that speech will be limited and the government will have greater control over what is censored on the internet. The removal of information from the internet to comply with national regulations is a conceivable outcome. Beyond governments altering internet content, there are activist groups that try to alter search results. Groups like Anonymous are infamous for planting or skewing information on search engines, and a popular event called "Google bombing" is the act of attempting to manipulate search results for "political, social or commercial reasons" (Seymour, 57).

What Search Engines Know About You

As search engines become more and more popular, they become more advanced too. Companies like Google, Yahoo, and Bing have improved their engines to the point of gathering information on the user as well. In the effort to tailor search results to the specific user, giving even more accurate results, search engines gather vast amounts of data about the user (Blakeman, 46). "This information is used to build up a profile of us as individual searchers and to provide a personalized service," says Blakeman. Google is usually cast as the villain, but Yahoo and Bing are just as guilty, just not as successful. Google is a master at personalization: by default, it personalizes your search results according to your web history. This feature can be turned off, but if you switch machines or clear your browser's cookies, it defaults back to on (Blakeman, 46). All this data collection gives a more specific response to the user, and even to the advertising companies that post ads on sites such as Google. As companies like Google expand into new areas through acquisitions, even more information can be gathered on users. Because search engine companies collect this data, they can market to their advertisers that the ads they run will be more likely to reach the correct target audience. The effects are truly two-fold.


Filter Bubble

Eli Pariser coined the term "filter bubble" to describe the phenomenon of internet search engines displaying biased or over-tailored information to the user. Big corporations understand that computers and the internet are everywhere in our daily lives, and they take steps to ensure that your internet experience is tailored to their product. This customization of the internet is causing issues, though. Pariser says it creates a "filter bubble" that causes things the average user is not interested in, like real-world news, to drop out of their internet experience (Pariser, 2). This customization definitely alters our perception of what is to be found on the internet. Pariser views it this way: if Google or Yahoo chooses what I see, who is really in control of my search? Because of the way search engines are designed, some information is now ranked as less important and never appears for some users. Two people can search for the exact same thing via a search engine and yet get two very different results. Companies such as Google and Bing build a history of what you search and view and attempt to produce results you would find interesting. Because of this altering of search results, many important pages may never arise in your results (Blakeman, 47).



Works Cited

Blakeman, Karen. "What Search Engines Know about You." Online 34.5 (2010): 46-8. ABI/INFORM Complete; ProQuest Research Library. Web. 24 June 2012.

Guth, Robert A. "Microsoft Bid to Beat Google Builds on a History of Misses." Wall Street Journal: A.1. ABI/INFORM Complete. Jan 16 2009. Web. 21 June 2012.

Pariser, Eli. "The Filter Bubble: What the Internet Is Hiding from You." NPR. N.p., n.d. Web. 11 Jun 2012.

Seymour, Tom, Dean Frantsvog, and Satheesh Kumar. "History of Search Engines." International Journal of Management and Information Systems 15.4 (2011): 47-58. ABI/INFORM Complete. Web. 24 June 2012.