Computers In Everything Essay, Research Paper
Computers in some form are in almost everything these days. From toasters to televisions, just about every electronic device has some form of processor in it. This is a very large change from the way it used to be, when a computer that took up an entire room and weighed tons had about the same amount of power as a scientific calculator. The changes that computers have undergone in the last 40 years have been colossal. Yet even though so much has changed from the ENIAC, which had very little power, broke down once every 15 minutes, and took another 15 minutes to repair, to today's Pentium Pro 200s and powerful Silicon Graphics workstations, the core of the machine has stayed basically the same. The main thing that has really changed in the processor is the speed at which it translates commands from 1's and 0's into data that actually means something to a normal computer user. Just in the last few years, computers have undergone major changes. PC users moved from MS-DOS and Windows 3.1 to Windows 95, a whole new operating system. Computer speeds have increased enormously as well: in 1995 a normal computer was a 486 running at 33 MHz, while by 1997 a blazing-fast Pentium (also known as the 586) ran at 200 MHz and up. The next generation of processors is slated to come out this year as well: the next CPU from Intel, code-named Merced, running at 233 MHz and up. Another major innovation has been the Internet. This is a massive change not only to the computer world but to the entire world as well. The Internet has many different facets: newsgroups, where you can discuss almost any topic with people ranging from university professors to professionals in the field of your choice to the average person; IRC, where you can chat in real time with other people around the world; and the World Wide Web, a mass of information networked from places around the world. Nowadays, no matter where you look, computers are somewhere, doing something.
Changes in computer hardware and software have come in great leaps since the first video games and word processors. Video games started out with a game called Pong: it was monochrome (two colors, typically amber and black or green and black), you had two controller paddles, and the game resembled a slow version of air hockey. The first word processors had their roots in MS-DOS; they were not very sophisticated and not much better than a good typewriter of the time. About the only benefits were the editing tools that came with them. But since these first two dinosaurs of software, both have gone through major changes. Video games now take place in fully 3-D environments, and word processors can now check your grammar and spelling.
Hardware has also undergone some fairly major changes. When computers entered their fourth generation with the 8088 processor, the machine was just a base computer: a physically large processor with little power, running at 3-4 MHz, with no sound to speak of other than blips and bleeps from an internal speaker. Graphics cards were limited to two colors (monochrome), and RAM was limited to 640 KB or less. By this time, though, computers had already undergone massive changes. The first computers were massive beasts that weighed thousands of pounds. The first computer was known as the ENIAC; it was the size of a room, used punched cards as input, and didn't have much more power than a calculator. The reason it was so large is that it used vacuum tubes to process data. It also broke down very often, to the tune of once every fifteen minutes, and it would then take another 15 minutes to locate the problem and fix it. This beast also used massive amounts of power, and people used to joke that the lights would dim in its home city whenever the computer was switched on.
The Early Days of Computers
The very first computer, in the roughest sense of the term, was the abacus. Consisting of beads strung on wires, the abacus was the very first desktop calculator. The first actual mechanical computer came from an individual named Blaise Pascal, who built an adding machine based on gears and wheels. This invention was not improved upon significantly until a person named Charles Babbage came along and made a machine called the difference engine. It is for this that Babbage is known as the "Father of the Computer."
Born in England in 1791, Babbage was a mathematician and an inventor. He
decided a machine could be built to solve polynomial equations more easily
and accurately by calculating the differences between them. The model of
this was named the Difference Engine. The model was so well received that
he began to build a full scale working version, with money that he
received from the British Government as a grant.
Babbage soon found that even the tightest design specifications could not produce an accurate machine. The smallest imperfection was enough to throw off the tons of mechanical rods and gears and put the entire machine out of whack. After spending 17,000 pounds, the British Government withdrew financial support. Even though this was a major setback, Babbage was not discouraged. He came up with another machine of wheels and cogs, which he called the analytical engine and which he hoped would carry out many different kinds of calculations. This machine was also never built, at least not by Babbage (although a model was later put together by his son), but its main importance was that it manifested five key concepts of modern computers (a toy sketch of how these parts fit together follows the list):
- Input device
- Processor or number calculator
- Storage unit to hold numbers waiting to be processed
- Control unit to direct the tasks to be performed and the sequence of calculations
- Output device
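To make these five concepts concrete, here is a minimal sketch in modern Python of how the same parts fit together in a tiny program. All of the function names and numbers below are hypothetical, chosen only to show how the roles line up; they are not taken from Babbage's design.

```python
# Hypothetical toy mapping of the five concepts onto a tiny program.

def input_device():
    """Input device: supplies the numbers waiting to be processed."""
    return [2, 4, 6, 8]

def processor(a, b):
    """Processor / number calculator: performs one arithmetic step."""
    return a + b

def control_unit(numbers):
    """Control unit: directs the tasks and the sequence of calculations."""
    storage = 0                      # storage unit holding the intermediate result
    for n in numbers:                # carry out each task in sequence
        storage = processor(storage, n)
    return storage

def output_device(result):
    """Output device: presents the final result."""
    print("Result:", result)

output_device(control_unit(input_device()))   # -> Result: 20
```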
Parts of Babbage's inventions were similar to an invention built by Joseph Jacquard. Jacquard, noting the repetitive work of weavers at their looms, came up with a stiff card with a series of holes in it that let certain threads into the loom while blocking others from entering the weave. Babbage saw that this punched card system could be used to control the calculations of the analytical engine, and he brought it into his machine.
Ada Lovelace is known as the first computer programmer. The daughter of the English poet Lord Byron, she went to work with Babbage and helped develop instructions for doing calculations on the analytical engine. Lovelace's contributions were very great: her interest gave Babbage encouragement, she was able to see that his approach was workable, and she published a series of notes that led others to complete what he had envisioned.
Since 1790, the US Congress has required that a census of the population be taken every ten years. For the census of 1880, the counting took seven and a half years because all of it had to be done by hand. There was also considerable apprehension in official circles as to whether the counting of the next census could be completed before the next century.
A competition was held to find some way to speed the counting process. In the final test, involving a count of the population of St. Louis, Herman Hollerith's tabulating machine completed the count in only five and a half hours. As a result of his system's adoption, an unofficial count of the 1890 population was announced only six weeks after the census was taken. Like the cards that Jacquard used for the loom, Hollerith's punched cards used stiff paper with holes punched at certain points. In his tabulating machine, rods passed through the holes to complete a circuit, which caused a counter to advance one unit. This capability pointed up the principal difference between the analytical engine and the tabulating machine: Hollerith was able to use electrical power rather than mechanical power to drive the device.
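The counting mechanism described above can be sketched in a few lines of Python. The category names and card layouts below are invented purely for illustration; the point is only that each hole that completes a circuit advances the matching counter by one.

```python
# Toy sketch of punched-card tabulation: each card is a set of hole positions,
# and every hole that "completes a circuit" advances the matching counter.
# Categories and cards are hypothetical, for illustration only.
CATEGORIES = {0: "male", 1: "female", 2: "farm worker", 3: "city dweller"}

cards = [
    {0, 3},   # one respondent: male, city dweller
    {1, 2},   # another respondent: female, farm worker
    {0, 2},   # another respondent: male, farm worker
]

counters = {name: 0 for name in CATEGORIES.values()}
for card in cards:
    for hole in card:
        counters[CATEGORIES[hole]] += 1   # a rod through a hole advances one counter

print(counters)   # {'male': 2, 'female': 1, 'farm worker': 2, 'city dweller': 1}
```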
Hollerith, who had been a statistician with the Census Bureau, realized that punched card processing had high sales potential. In 1896, he started the Tabulating Machine Company, which was very successful in selling machines to railroads and other clients. In 1924, this company merged with two other companies to form the International Business Machines Corporation, still well known today as IBM.
IBM, Aiken & Watson
For over 30 years, from 1924 to 1956, Thomas Watson, Sr., ruled IBM with
an iron grip. Before becoming the head of IBM, Watson had worked for the
Tabulating Machine Company. While there, he had a running battle with
Hollerith, whose business talent did not match his technical abilities.
Under Watson's leadership, IBM became a force to be reckoned with in the
business machine market, first as a purveyor of calculators, then as a
developer of computers.
IBM's entry into computers was started by a young person named Howard Aiken. In 1936, after reading Babbage's and Lovelace's notes, Aiken thought that a modern analytical engine could be built. The important difference was that this new version of the analytical engine would be electromechanical. Because IBM was such a power in the market, with plenty of money and resources, Aiken worked out a proposal and approached Thomas Watson. Watson approved the deal and gave him 1 million dollars with which to build the new machine, later called the Harvard Mark I, which began the modern era of computers.
Nothing close to the Mark I had ever been built previously. It was 55 feet long and 8 feet high, and when it processed information it made a clicking sound equivalent, according to one observer, to a room full of people knitting with metal needles. When the Mark I was unveiled in 1944, the scene was marked by the presence of many uniformed Navy officers. It was now World War II, and Aiken had become a naval lieutenant, released to Harvard to help build the computer that was supposed to solve the Navy's problems.
During the war, German scientists made impressive advances in computer design. In 1940 they even made a formal development proposal to Hitler, who rejected further work on the scheme, thinking the war was already won. In Britain, however, scientists succeeded in building a computer called Colossus, which helped in cracking supposedly unbreakable German radio codes. The Nazis unsuspectingly continued to use these codes throughout the war. As great as this accomplishment was, imagine the possibilities if the reverse had been true, and the Nazis had had the computer technology while the British did not.
In the same time frame, American military officers approached Dr. Mauchly
at the University of Pennsylvania and asked him to develop a machine that
would quickly calculate the trajectories for artillery and missiles.
Mauchly and his student, Presper Eckert, relied on the work of Dr. John
Atanasoff, a professor of physics at Iowa State University.
During the late 1930s, Atanasoff had spent time trying to build an electronic calculating device to help his students solve complicated math problems. One night, the idea came to him for linking the computer memory and the associated logic. Later, he and an associate, Clifford Berry, succeeded in building the "ABC," for Atanasoff-Berry Computer. After Mauchly met with Atanasoff and Berry, he used the ABC as the basis for his next computer development. From this association ultimately came a lawsuit concerning attempts to patent a commercial version of the machine that Mauchly built. The suit was finally decided in 1974, when it was ruled that Atanasoff had been the true developer of the ideas required to make an electronic digital computer actually work, although some computer historians dispute this decision. But during the war years, Mauchly and Eckert were able to use the ABC principles to dramatic effect to create the ENIAC.
Computers Become More Powerful
The size of ENIAC's numerical "word" was 10 decimal digits, and it could multiply two of these numbers at a rate of 300 products per second by looking up the value of each product in a multiplication table stored in its memory. ENIAC was about 1,000 times faster than the previous generation of computers. It used 18,000 vacuum tubes, occupied about 1,800 square feet of floor space, and consumed about 180,000 watts of electrical power. It had punched card input, one multiplier, one divider/square rooter, and 20 adders using decimal ring counters, which served both as adders and as quick-access (0.0002-second) read-write register storage. The executable instructions making up a program were embodied in the separate "units" of ENIAC, which were plugged together to form a "route" for the flow of information. The problem with ENIAC was reliability: the average life of a vacuum tube is about 3,000 hours, and with 18,000 tubes in the machine, one would burn out roughly every 15 minutes. It would then take about another 15 minutes on average to find the burnt-out tube and replace it.
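As a rough sanity check on that failure rate, the expected interval between failures is simply the average tube lifetime divided by the number of tubes. A quick back-of-the-envelope calculation in Python, using only the figures quoted above, is sketched below.

```python
# Back-of-the-envelope estimate of ENIAC's mean time between tube failures,
# using the figures quoted above (18,000 tubes, ~3,000-hour average tube life).
TUBE_COUNT = 18_000
TUBE_LIFE_HOURS = 3_000

# If failures are spread evenly, one tube fails roughly every
# (average life / number of tubes) hours.
mtbf_hours = TUBE_LIFE_HOURS / TUBE_COUNT
print(f"Roughly one failure every {mtbf_hours * 60:.0f} minutes")
# -> about 10 minutes, the same order of magnitude as the
#    "once every 15 minutes" figure given in the text.
```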
Enthralled by the success of ENIAC, the mathematician John von Neumann undertook, in 1945, a study of computation that showed that a computer could have a very simple, fixed physical structure and yet carry out any kind of computation by means of proper programmed control, without any change to the hardware itself. Von Neumann contributed a new understanding of how practical, fast computers should be organized and built. These ideas, usually referred to as the stored-program technique, became essential to future generations of high-speed digital computers and were universally adopted. The stored-program technique involves many features of computer design and function besides the one it is named after; in combination, these features make very high-speed operation attainable. Some impression of this may be gained by considering what 1,000 operations per second means. If each instruction in a job program were used only once, in consecutive order, no human programmer could write instructions fast enough to keep the computer busy. Arrangements must therefore be made for parts of the job program (called subroutines) to be used repeatedly, in a manner that depends on how the computation goes. It would also clearly be helpful if instructions could be changed as needed during a computation to make them behave differently. Von Neumann met these two requirements with a special type of machine instruction called a conditional control transfer, which allowed the program sequence to be interrupted and resumed at any point, and by storing all program instructions together with data in the same memory unit, so that instructions could, when needed, be modified in the same way as data.
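A minimal sketch of this idea in Python is given below, assuming a made-up instruction set: instructions and data sit side by side in one memory list, and a conditional control transfer (here called JUMP_NEG) sends the machine back to repeat a section until a condition is met. None of the opcodes correspond to any real machine.

```python
# Toy stored-program machine: instructions and data share the same memory,
# and a conditional control transfer (JUMP_NEG) repeats part of the program.
# The instruction set is hypothetical, invented only for this illustration.
memory = [
    ("LOAD", 9),      # 0: load the value at address 9 into the accumulator
    ("ADD", 10),      # 1: add the value at address 10
    ("STORE", 9),     # 2: store the accumulator back into address 9
    ("SUB", 11),      # 3: subtract the loop limit at address 11
    ("JUMP_NEG", 0),  # 4: conditional control transfer: back to 0 while negative
    ("PRINT", 9),     # 5: print the value at address 9
    ("HALT", 0),      # 6: stop
    None, None,       # 7-8: unused cells
    0,                # 9: running total (data)
    3,                # 10: increment (data)
    12,               # 11: loop limit (data)
]

pc, acc = 0, 0                       # program counter and accumulator
while True:
    op, addr = memory[pc]
    pc += 1
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "SUB":
        acc -= memory[addr]
    elif op == "STORE":
        memory[addr] = acc           # instructions and data share one memory
    elif op == "JUMP_NEG" and acc < 0:
        pc = addr                    # the conditional control transfer
    elif op == "PRINT":
        print(memory[addr])          # -> 12
    elif op == "HALT":
        break
```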
As a result of these techniques, computing and programming became much faster, more flexible, and more efficient. Regularly used
subroutines did not have to be reprogrammed for each new program, but
could be kept in “libraries” and read into memory only when needed. Hence,
much of a given program could be created from the subroutine library. The
computer memory became the collection site in which all parts of a long
computation were kept, worked on piece by piece, and put together to form
the final results. When the advantage of these techniques became clear,
they became a standard practice.
The first generation of modern programmed electronic computers to take advantage of these improvements was built in 1947. This group included computers using random-access memory (RAM), a memory designed to give almost constant access time to any particular piece of information. These machines had punched-card or tape I/O devices and were physically much smaller than ENIAC. Some were about the size of a grand piano and used only 2,500 electron tubes, far fewer than the earlier ENIAC required. The first-generation stored-program computers needed a lot of maintenance, reached roughly 70 to 80% reliability of operation, and were used for 8 to 12 years. This group of computers included EDVAC and UNIVAC, the first commercially available computers.
Early in the 1950s, two important engineering discoveries changed the image of the electronic computer field from one of fast but unreliable hardware to one of relatively high reliability and even greater capability. These discoveries were the magnetic core memory and the transistor circuit element. Both quickly found their way into new models of digital computers. RAM capacities increased from 8,000 to 64,000 words in commercially available machines by the 1960s, with access times of 2 to 3 milliseconds. These machines were very expensive to purchase or even to rent and were particularly expensive to operate because of the cost of expanding programming. Such computers were mostly found in large computer centers operated by industry, government, and private laboratories, staffed with many programmers and support personnel. This situation led to modes of operation that let many users share the machines' capacity. During this time, another important development was the move from machine language to assembly language, also known as symbolic language. Assembly languages use abbreviations for instructions rather than numbers. This made programming a computer a lot easier.
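As a small illustration of that difference, here is a hedged sketch of what an assembler does, assuming a made-up set of mnemonics and opcode numbers (none of them belong to a real CPU): the programmer writes short abbreviations, and the assembler translates them into the numbers the machine actually executes.

```python
# Hypothetical mini-assembler: mnemonics stand in for numeric opcodes.
# The mnemonics and opcode values are invented purely for illustration.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "HALT": 0xFF}

def assemble(lines):
    """Translate 'MNEMONIC operand' source lines into (opcode, operand) pairs."""
    program = []
    for line in lines:
        mnemonic, _, operand = line.partition(" ")
        program.append((OPCODES[mnemonic], int(operand or 0)))
    return program

source = ["LOAD 9", "ADD 10", "STORE 9", "HALT 0"]
print(assemble(source))   # [(1, 9), (2, 10), (3, 9), (255, 0)]
```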
After the implementation of assembly languages came high-level languages. The first language to be universally accepted was FORTRAN, developed in the mid-1950s as an engineering, mathematical, and scientific language. Then, in 1959, COBOL was developed for business programming. Both languages, still in use today, are far more English-like than assembly. Higher-level languages allow programmers to give more attention to solving problems rather than coping with the minute details of the machines themselves. Disk storage complemented magnetic tape systems and enabled users to have rapid access to the data they required.
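To show what "more English-like" means in practice, here is a brief sketch in a modern high-level language; the compound-interest formula is an invented example, not taken from the essay. The calculation reads almost exactly as it would on paper, with no registers, opcodes, or jumps in sight.

```python
# Invented example of a problem expressed at the level of the mathematics,
# not the machine: the formula is written almost exactly as on paper.
def compound_interest(principal, rate, years):
    """Value of `principal` after `years` of annual interest at `rate`."""
    return principal * (1 + rate) ** years

print(round(compound_interest(1000, 0.05, 10), 2))   # -> 1628.89
```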
All these new developments made second-generation computers easier and less costly to operate. This began a surge of growth in computer systems, although computers were still mostly used by business, university, and government establishments; they had not yet reached the general public. The real part of the computer revolution was about to begin.
One of the most abundant elements in the earth is silicon, a non-metallic substance found in sand as well as in most rocks and clay. The element has given rise to the name "Silicon Valley" for Santa Clara County, about 50 km south of San Francisco. In 1965, Silicon Valley became the principal site of the computer industry, making the so-called silicon chip.
An integrated circuit is a complete electronic circuit on a small chip of silicon. The chip may be less than 3 mm square and contain hundreds to thousands of electronic components. Beginning in 1965, the integrated circuit began to replace the transistor in what were now called third-generation computers. An integrated circuit could replace an entire circuit board of transistors with one chip of silicon much smaller than a single transistor. Silicon is used because it is a semiconductor: a crystalline substance that will conduct electric current when it has been doped with chemical impurities introduced into the structure of the crystal. A cylinder of silicon is sliced into wafers, each about 76 mm in diameter. The wafer is then etched repeatedly with a pattern of electrical circuitry; up to ten layers may be etched onto a single wafer. The wafer is then divided into several hundred chips, each with a circuit so small that it is half the size of a fingernail, yet under a microscope it is as complex as a railroad yard. A chip one centimeter square is so powerful that it can hold 10,000 words, roughly the text of an average newspaper.
Integrated circuits entered the market with the simultaneous announcement
in 1959 by Texas Instruments and Fairchild Semiconductor that they had
each independently produced chips containing several complete electronic
circuits. The chips were hailed as a generational breakthrough because
they had four desirable characteristics.
- Reliability: They could be used over and over again without failure, whereas vacuum tubes failed every fifteen minutes. Chips rarely failed, perhaps one failure in 33 million hours of operation. This reliability was due not only to the fact that they had no moving parts but also to the rigid work/no-work testing that semiconductor firms gave them.
- Compactness: Circuitry packed into a small space reduces equipment size. Machine speed also increases because circuits are closer together, reducing the travel time for the electricity.
- Low cost: Mass-production techniques have made it possible to manufacture inexpensive integrated circuits. That is, miniaturization has allowed manufacturers to produce many chips cheaply.
- Low power use: Miniaturization of integrated circuits has meant that less power is required for computer use than in previous generations. In an energy-conscious time, this was important.
The Microprocessor
Throughout the 1970s, computers gained dramatically in speed, reliability, and storage capacity, but entry into the fourth generation was evolutionary rather than revolutionary. The fourth generation was, in fact, a continuation of the third generation's progress. Early in the third generation, specialized chips were developed for memory and logic. Therefore, all the parts were in place for the next technological development: the microprocessor, a general-purpose processor on a chip. Ted Hoff of Intel developed the chip in 1969, and the microprocessor became commercially available in 1971.
Nowadays microprocessors are everywhere. From watches and calculators to computers, processors can be found in virtually every machine in the home or business. The environments computers need have changed, too: there is no more need for climate-controlled rooms, and most models of microcomputers can be placed almost anywhere.
New Stuff
After the technological improvements of the 1960s and 1970s, computers have not changed much, aside from becoming faster, smaller and more user-friendly. The base architecture of the computer itself is fundamentally the same. New improvements from the 1980s on have been mostly "comfort stuff": sound cards (for high-quality sound and music), CD-ROMs (large-capacity storage disks), bigger monitors and faster video cards. Computers have come a long way, but there have not really been a lot of vast technological improvements, architecture-wise.