E-Book Details:
Title: Neural Networks and Learning Machines
Publisher: Prentice Hall, 2009
Author: Simon S. Haykin
Edition: 3, illustrated (2009)
Format: PDF
ISBN: 0131471392
EAN: 9780131471399
No. of Pages: 906
Book Description:
Fluid and authoritative, this well-organized book represents the first comprehensive treatment of neural networks from an engineering perspective, providing extensive, state-of-the-art coverage that will expose readers to the myriad facets of neural networks and help them appreciate the technology's origin, capabilities, and potential applications. It examines all the important aspects of this emerging technology, covering the learning process, back-propagation, radial basis functions, recurrent networks, self-organizing systems, modular networks, temporal processing, neurodynamics, and VLSI implementation. Computer experiments are integrated throughout to demonstrate how neural networks are designed and perform in practice. Chapter objectives, problems, worked examples, a bibliography, photographs, illustrations, and a thorough glossary all reinforce concepts throughout. New chapters delve into such areas as support vector machines and reinforcement learning/neurodynamic programming, and readers will find an entire chapter of case studies illustrating the real-life, practical applications of neural networks. A highly detailed bibliography is included for easy reference. For professional engineers and research scientists.
FEATURES:
• Computer-oriented experiments distributed throughout the text.
• Extensive, state-of-the-art coverage exposes students to the many facets of neural networks and helps them appreciate the technology's capabilities and potential applications.
• Reinforces key concepts with chapter objectives, problems, worked examples, a bibliography, photographs, illustrations, and a thorough glossary.
• Detailed analysis of back-propagation learning and multi-layer perceptrons.
• Explores the intricacies of the learning process—an essential component for understanding neural networks.
• Considers recurrent networks, such as Hopfield networks, Boltzmann machines, and mean-field theory machines, as well as modular networks, temporal processing, and neurodynamics.
• Integrates computer experiments throughout, giving students the opportunity to see how neural networks are designed and perform in practice.
• Includes a detailed and extensive bibliography for easy reference.
• On-line learning algorithms rooted in stochastic gradient descent; small-scale and large-scale learning problems.
• Kernel methods, including support vector machines, and the representer theorem.
• Information-theoretic learning models, including copulas, independent-components analysis (ICA), coherent ICA, and information bottleneck.
• Stochastic dynamic programming, including approximate and neurodynamic procedures.
• Sequential state-estimation algorithms, including Kalman and particle filters.
• Recurrent neural networks trained using sequential-state estimation algorithms.
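To give a flavor of the on-line, stochastic-gradient-descent learning covered in the book, here is a minimal sketch of the least-mean-squares (LMS) rule training a single linear neuron one sample at a time. The target function, learning rate, and sample count below are illustrative assumptions, not values taken from the text.

```python
import random

random.seed(0)

def target(x1, x2):
    # Ground-truth linear function the neuron should learn
    # (an arbitrary choice for this sketch).
    return 2.0 * x1 - 3.0 * x2 + 1.0

w = [0.0, 0.0]   # synaptic weights
b = 0.0          # bias
eta = 0.05       # learning rate (assumed value)

for _ in range(5000):
    x1 = random.uniform(-1.0, 1.0)
    x2 = random.uniform(-1.0, 1.0)
    d = target(x1, x2)              # desired response
    y = w[0] * x1 + w[1] * x2 + b   # neuron output
    e = d - y                       # instantaneous error signal
    # LMS update: step each weight along the per-sample gradient
    w[0] += eta * e * x1
    w[1] += eta * e * x2
    b    += eta * e

print(w, b)  # weights and bias approach 2, -3 and 1
```

Because each update uses only the current sample, the memory cost is constant regardless of dataset size, which is why this style of learning suits the large-scale problems mentioned above.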