
Neural Networks

A neural network, also known as an artificial neural network (ANN), provides a unique computing architecture whose potential has only begun to be tapped. Neural networks are used to address problems that are intractable or cumbersome with traditional methods. These new computing architectures are radically different from the computers that are widely used today. ANNs are massively parallel systems that rely on dense arrangements of interconnections and surprisingly simple processors (Cr95, Ga93).

Artificial neural networks take their name from the networks of nerve cells in the brain. Although a great deal of biological detail is eliminated in these computing models, ANNs retain enough of the structure observed in the brain to provide insight into how biological neural processing may work (He90).

Neural networks provide an effective approach to a broad spectrum of applications. They excel at problems involving patterns, including pattern mapping, pattern completion, and pattern classification (He95). Neural networks may be applied to translate images into keywords or even financial data into financial predictions (Wo96).

Neural networks utilize a parallel processing structure that has large numbers of processors and many interconnections between them. These processors are much simpler than typical central processing units (He90). In a neural network, each processor is linked to many of its neighbors so that there are many more interconnections than processors. The power of the neural network lies in the tremendous number of interconnections (Za93).

ANNs are generating much interest among engineers and scientists. Artificial neural network models contribute to our understanding of biological models. They also provide a novel type of parallel processing that has powerful capabilities and potential for creative hardware implementations, meets the demand for fast computing hardware, and provides the potential for solving application problems (Wo96).

Neural networks excite our imagination and relentless desire to understand the self, and in addition, equip us with an assemblage of unique technological tools. But what has triggered the most interest in neural networks is that models similar to biological nervous systems can actually be made to do useful computations, and furthermore, the capabilities of the resulting systems provide an effective approach to previously unsolved problems (Da90).

Neural network architectures are strikingly different from traditional single-processor computers. Traditional von Neumann machines have a single CPU that performs all of its computations in sequence (He90). A typical CPU is capable of a hundred or more basic commands, including additions, subtractions, loads, and shifts. The commands are executed one at a time, at successive steps of a time clock. In contrast, a neural network processing unit may do only one or, at most, a few calculations: a summation function is performed on its inputs, and incremental changes are made to the parameters associated with its interconnections. This simple structure nevertheless provides a neural network with the capabilities to classify and recognize patterns, to perform pattern mapping, and to be useful as a computing tool (Vo94).
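
As a rough illustration (a sketch added here, not part of the source essay; all names are hypothetical), such a processing unit can be written in a few lines: it merely sums weighted inputs and applies a threshold.

    # Minimal sketch of one neural processing unit: a summation over
    # weighted inputs followed by a simple threshold. Illustrative only.
    def unit_output(inputs, weights, threshold=0.0):
        total = sum(x * w for x, w in zip(inputs, weights))  # summation function
        return 1.0 if total >= threshold else 0.0            # threshold activation

    # Three inputs arriving over three weighted interconnections.
    print(unit_output([1.0, 0.0, 1.0], [0.5, -0.3, 0.2]))    # -> 1.0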

The processing power of a neural network is measured mainly by the number of interconnection updates per second. In contrast, von Neumann machines are benchmarked by the number of instructions performed per second, in sequence, by a single processor (He90). During their learning phase, neural networks adjust the parameters associated with the interconnections between neurons. Thus, the rate of learning depends on the rate of interconnection updates (Kh90).

Neural network architectures depart from typical parallel processing architectures in some basic respects. First, the processors in a neural network are massively interconnected; as a result, the number of interconnections usually far exceeds the number of processing units (Vo94). State-of-the-art parallel processing architectures typically have a smaller ratio of interconnections to processing units (Za93). In addition, parallel processing architectures tend to incorporate processing units that are comparable in complexity to those of von Neumann machines (He90). Neural network architectures depart from this organizational scheme by containing simpler processing units, which are designed for the summation of many inputs and the adjustment of interconnection parameters.

From a computational viewpoint, the two primary attractions of neural networks are learning and knowledge representation. Many researchers believe that machine learning techniques offer the best hope of eventually being able to perform difficult artificial intelligence tasks (Ga93).

Most neural networks learn from examples, just as children learn to recognize dogs from examples of dogs (Wo96). Typically, a neural network is presented with a training set consisting of a group of examples from which the network can learn. These examples, known as training patterns, are represented as vectors and can be taken from such sources as images, speech signals, sensor data, and diagnostic information (Cr95, Ga93).

The most common training scenarios utilize supervised learning, in which the network is presented with an input pattern together with the target output for that pattern. The target output usually constitutes the correct answer, or correct classification, for the input pattern. In response to these paired examples, the neural network adjusts the values of its internal weights (Cr95). If training is successful, the internal parameters are adjusted to the point where the network can produce the correct answer in response to each input pattern (Za93).
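
To make this concrete, here is a hedged sketch of supervised learning, assuming a single-unit network and the classic perceptron update rule (the essay names no specific paradigm): each paired example drives an incremental adjustment of the internal weights.

    # Sketch of supervised learning with the perceptron rule (an assumed
    # paradigm). Each training pattern is a vector paired with a target.
    def step(total):
        return 1.0 if total >= 0.0 else 0.0

    def train(patterns, targets, rate=0.1, epochs=20):
        weights = [0.0] * len(patterns[0])
        for _ in range(epochs):
            for x, t in zip(patterns, targets):
                y = step(sum(xi * wi for xi, wi in zip(x, weights)))
                error = t - y
                # Incremental change to each interconnection weight.
                weights = [wi + rate * error * xi
                           for xi, wi in zip(x, weights)]
        return weights

    # Learn logical OR from paired examples (first input is a bias term).
    patterns = [[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]]
    targets = [0, 1, 1, 1]
    print(train(patterns, targets))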

Because they learn by example, neural networks have the potential for building computing systems that do not need to be programmed (Wo96). This reflects a radically different approach to computing compared with traditional methods, which involve the development of computer programs. In a computer program, every step that the computer executes is specified in advance by the programmer. In contrast, a neural net begins with sample inputs and outputs and learns to provide the correct output for each input (Za93).

The neural network approach does not require human identification of features, nor does it require human development of algorithms or programs specific to the classification problem at hand. All of this suggests that time and human effort can be saved (Wo96). There are drawbacks to the neural network approach, however. The time required to train the network may not be known in advance, and the process of designing a network that successfully solves an application problem may be involved. The potential of the approach, however, appears significantly better than that of past approaches (Ga93).

Neural network architectures encode information in a distributed fashion. Typically, the information stored in a neural network is shared by many of its processing units. This type of coding is in stark contrast to traditional memory schemes, where particular pieces of information are stored in particular locations of memory. Traditional speech recognition systems, for example, contain a lookup table of template speech patterns that are compared one by one to spoken inputs, and such templates are stored in a specific location of the computer memory. Neural networks, in contrast, identify spoken syllables by using a number of processing units simultaneously. The internal representation is thus distributed across all or part of the network. Furthermore, more than one syllable or pattern may be stored at the same time by the same network (Za93).

Neural networks have far-reaching potential as building blocks in tomorrow’s computational world. Already, useful applications have been designed, built, and commercialized, and much research continues in hopes of extending this success (He95).

Neural network applications emphasize areas where they appear to offer a more appropriate approach than traditional computing does. Neural networks offer possibilities for solving problems that require pattern recognition, pattern mapping, dealing with noisy data, pattern completion, and associative lookups, and for building systems that learn or adapt during use (Fr93, Za93). Examples of specific areas where these types of problems appear include speech synthesis and recognition, image processing and analysis, sonar and seismic signal classification, and adaptive control. In addition, neural networks can perform some knowledge-processing tasks and can be used to implement associative memory (Kh90). Some optimization tasks can also be addressed with neural networks. The range of potential applications is impressive.

The first highly developed application was handwritten character identification. A neural network is trained on a set of handwritten characters, such as printed letters of the alphabet. The training set consists of the handwritten characters as inputs together with the correct identification for each character. At the completion of training, the network identifies handwritten characters in spite of their variations (Za93).
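
As an illustration of how such a training set might be laid out (a hypothetical arrangement; the essay specifies no data format), each handwritten character can be flattened into a pixel vector and paired with its correct identification:

    # Hypothetical training-set layout for character identification:
    # each 3x3 bitmap is flattened into a vector and paired with a label.
    letter_T = [1, 1, 1,
                0, 1, 0,
                0, 1, 0]
    letter_L = [1, 0, 0,
                1, 0, 0,
                1, 1, 1]
    training_set = [(letter_T, "T"), (letter_L, "L")]
    for pixels, label in training_set:
        print(label, pixels)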

Another impressive application study involved NETtalk, a neural network that learns to produce phonetic strings, which in turn specify pronunciation for written text. The input to the network in this case was English text in the form of successive letters that appear in sentences. The output of the network was phonetic notation for the proper sound to produce given the text input. The output was linked to a speech generator so that an observer could hear the network learn to speak. This network, trained by Sejnowski and Rosenberg, learned to pronounce English text with a high level of accuracy (Za93).

Neural network studies have also been done for adaptive control applications. A classic implementation of a neural network control system was the broom-balancing experiment, originally done by Widrow and Smith in 1963. The network learned to move a cart back and forth in such a way that a broom, balanced upside-down on its handle tip on the cart, remained on end (Da90). More recently, application studies have been done on teaching a robotic arm to reach its target position, on steadying a robotic arm, and on teaching a neural network to control an autonomous vehicle in simulated, simplified vehicle control situations (Wo96).

Neural networks are expected to complement rather than replace other technologies. Tasks that are done well by traditional computing methods need not be addressed with neural networks, but the range of technologies that neural networks can complement is far-reaching (He90). For example, expert systems and rule-based knowledge-processing techniques are adequate for some applications, although neural networks have the ability to learn rules more flexibly. More sophisticated systems may be built in some cases from a combination of expert systems and neural networks (Wo96). Sensors for visual or acoustic data may be combined in a system that includes a neural network for analysis and pattern recognition. Robotics and control systems may use neural network components in the future. Simulation techniques, such as simulation languages, may be extended to include structures that allow us to simulate neural networks. Neural networks may also play a new role in the optimization of engineering designs and industrial resources (Za93).

Many design choices are involved in developing a neural network application. The first is choosing the general area of application, usually an existing problem that appears amenable to solution with a neural network. Next, the problem must be defined specifically enough that inputs and outputs to the network can be selected. Choices for inputs and outputs involve identifying the types of patterns to go into and out of the network; in addition, the researcher must design how those patterns are to represent the needed information. Then internal design choices must be made, including the topology and size of the network (Kh90). The number of processing units is specified, along with the specific interconnections that the network is to have. Processing units are usually organized into distinct layers, which are either fully or partially interconnected (Vo94).
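
The sketch below (an assumed representation, not the essay's prescription) shows one way to specify a fully interconnected layered topology: each pair of adjacent layers gets one weight matrix, with one entry per interconnection, so the interconnections quickly outnumber the units.

    import random

    # Sketch: a fully interconnected layered topology. Layer sizes are
    # illustrative; each adjacent layer pair gets one weight matrix.
    def build_network(layer_sizes, seed=0):
        rng = random.Random(seed)
        return [[[rng.uniform(-0.5, 0.5) for _ in range(n_out)]
                 for _ in range(n_in)]
                for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

    net = build_network([9, 5, 2])  # 9 inputs, 5 hidden units, 2 outputs
    n_weights = sum(len(m) * len(m[0]) for m in net)
    print(n_weights)  # 55 interconnections for only 16 processing units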

There are additional choices for the dynamic activity of the processing units. A variety of neural net paradigms are available, and each paradigm dictates how the readjustment of parameters takes place. This readjustment is what produces learning in the network. Next, there are internal parameters that must be tuned to optimize the ANN design (Kh90). One such parameter is the learning rate from the back-error propagation paradigm. The value of this parameter influences the rate of learning by the network, and may also influence how successfully the network learns (Cr95). Experiments indicate that learning occurs more successfully if this parameter is decreased during a learning session. Some paradigms utilize more than one parameter that must be tuned. Typically, network parameters are tuned with the help of experimental results and experience on the specific application problem under study (Kh90).
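
As an example of decreasing this parameter during a learning session, the following sketch uses a simple exponential schedule (a generic choice for illustration; the essay prescribes none, and the starting rate and decay factor would themselves need tuning):

    # Illustrative learning-rate schedule: start high, decay each epoch.
    # The initial rate and decay factor are assumptions to be tuned.
    def learning_rate(epoch, initial=0.5, decay=0.9):
        return initial * (decay ** epoch)

    for epoch in range(5):
        print(epoch, round(learning_rate(epoch), 4))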

Finally, the selection of training data presented to the neural network influences whether or not the network learns a particular task. Like a child, a network learns only as well as the examples presented to it allow. A good set of examples, one that illustrates the tasks to be learned well, is necessary for the desired learning to take place. The set of training examples must also reflect the variability in the patterns that the network will encounter after training (Wo96).
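
One simple precaution along these lines (a generic sketch, not taken from the essay) is to shuffle the collected patterns and hold a portion out, so that the held-out patterns can check whether the training examples reflected enough of the variability the network will face:

    import random

    # Sketch: shuffle labeled patterns and hold some out to test whether
    # the training examples reflected enough variability.
    def split(examples, holdout_fraction=0.25, seed=1):
        examples = list(examples)
        random.Random(seed).shuffle(examples)
        cut = int(len(examples) * (1 - holdout_fraction))
        return examples[:cut], examples[cut:]

    data = [([i, i % 2], i % 2) for i in range(8)]  # toy labeled vectors
    train_set, test_set = split(data)
    print(len(train_set), "training /", len(test_set), "held out")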

Although a variety of neural network paradigms have already been established, there are many variations currently being researched. Typically these variations add more complexity to gain more capabilities (Kh90). Examples of additional structures under investigation include the incorporation of delay components, the use of sparse interconnections, and the inclusion of interaction between different interconnections. More than one neural net may be combined, with outputs of some networks becoming the inputs of others. Such combined systems sometimes provide improved performance and faster training times (Da90).

Implementations of neural networks come in many forms. The most widely used implementations of neural networks today are software simulators. These are computer programs that simulate the operation of the neural network. The speed of the simulation depends on the speed of the hardware upon which the simulation is executed. A variety of accelerator boards are available for individual computers to speed the computations (Wo96).

Simulation is key to the development and deployment of neural network technology. With a simulator, one can establish most of the design choices in a neural network system. The choice of inputs and outputs can be tested, as well as the capabilities of the particular paradigm used (Wo96).

Implementations of neural networks are not limited to computer simulation, however. An implementation could be an individual calculating the changing parameters of the network using pencil and paper. Another implementation would be a collection of people, each one acting as a processing unit, using a hand-held calculator (He90). Although these implementations are not fast enough to be effective for applications, they are nevertheless methods for emulating a parallel computing structure based on neural network architectures (Za93).

One challenge to neural network applications is that they require more computational power than readily available computers provide, and the tradeoffs in scaling up such a network are sometimes not apparent from a small-scale simulation. The performance of a neural network must be tested using a network the same size as that to be used in the application (Za93).

The response of an ANN may be accelerated through the use of specialized hardware. Such hardware may be designed using analog computing technology or a combination of analog and digital. Development of such specialized hardware is underway, but there are many problems yet to be solved. Such technological advances as custom logic chips and logic-enhanced memory chips are being considered for neural network implementations (Wo96).

No discussion of implementation would be complete without mention of the original neural networks: biological nervous systems. These systems provided the first implementation of neural network architectures. Both biological and artificial systems are based on heavily interconnected parallel computing units, and both include feature detectors, redundancy, massive parallelism, and modulation of connections (Vo94, Gr93).

However, the differences between biological systems and artificial neural networks are substantial. Artificial neural networks usually have regular interconnection topologies, based on a fully connected, layered organization. While biological interconnections do not precisely fit the fully connected, layered model, they nevertheless have a defined structure at the systems level, including specific areas that aggregate synapses and fibers, and a variety of other interconnections (Lo94, Gr93). Although many connections in the brain may seem random or statistical, it is likely that considerable precision exists at the cellular and ensemble levels as well as the system level. Another difference between artificial and biological systems arises from the fact that the brain organizes itself dynamically during a developmental period and can permanently fix its wiring based on experiences during certain critical periods of development. This influence on connection topology does not occur in current ANNs (Lo94, Da90).

The future of neurocomputing can benefit greatly from biological studies. Structures found in biological systems can inspire new design architectures for ANN models (He90). Similarly, biology and cognitive science can benefit from the development of neurocomputing models. Artificial neural networks do, for example, illustrate ways of modeling characteristics that appear in the human brain (Le91). Conclusions, however, must be carefully drawn to avoid confusion between the two types of systems.

REFERENCES

[Cr95] Cross, et al., "Introduction to Neural Networks", Lancet, Vol. 346 (October 21, 1995), p. 1075.
[Da90] Dayhoff, J. E., Neural Networks: An Introduction, Van Nostrand Reinhold, New York, 1990.
[Fr93] Franklin, Hardy, "Neural Networking", Economist, Vol. 329 (October 9, 1993), p. 19.
[Ga93] Gallant, S. I., Neural Network Learning and Expert Systems, MIT Press, Massachusetts, 1993.
[Gr93] Gardner, D., The Neurobiology of Neural Networks, MIT Press, Massachusetts, 1993.
[He90] Hecht-Nielsen, R., Neurocomputing, Addison-Wesley Publishing Company, Massachusetts, 1990.
[He95] Helliar, Christine, "Neural Computing", Management Accounting, Vol. 73 (April 1, 1995), p. 30.
[Kh90] Khanna, T., Foundations of Neural Networks, Addison-Wesley Publishing Company, Massachusetts, 1990.
[Le91] Levine, D. S., Introduction to Neural & Cognitive Modeling, Lawrence Erlbaum Associates Publishers, New Jersey, 1991.
[Lo94] Loofbourrow, Tod, "When Computers Imitate the Workings of the Brain", Boston Business Journal, Vol. 14 (June 10, 1994), p. 24.
[Vo94] Vogel, William, "Minimally Connective, Auto-Associative, Neural Networks", Connection Science, Vol. 6 (January 1, 1994), p. 461.
[Wo96] Internet Information:
http://www.mindspring.com/~zsol/nnintro.html
http://ourworld.compuserve.com/homepages/ITechnologies/
http://sharp.bu.edu/inns/nn.html
http://www.eeb.ele.tue.nl/neural/contents/neural_networks.html
http://www.ai.univie.ac.at/oefai/nn/
http://www.nd.com/welcome/whatisnn.htm
http://www.mindspring.com/~edge/neural.html
http://vita.mines.colorado.edu:3857/lpratt/applied-nnets.html
[Za93] Zahedi, F., Intelligent Systems for Business: Expert Systems with Neural Networks, Wadsworth Publishing Company, California, 1993.

