Print List Price: $19.99
Kindle Price: $12.99 (save $7.00, 35%)
Sold by: Macmillan (price set by seller)
On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines Adapted Edition, Kindle Edition
From the inventor of the PalmPilot comes a new and compelling theory of intelligence, brain function, and the future of intelligent machines
Jeff Hawkins, the man who created the PalmPilot, Treo smart phone, and other handheld devices, has reshaped our relationship to computers. Now he stands ready to revolutionize both neuroscience and computing in one stroke, with a new understanding of intelligence itself.
Hawkins develops a powerful theory of how the human brain works, explaining why computers are not intelligent and how, based on this new theory, we can finally build intelligent machines.
The brain is not a computer, but a memory system that stores experiences in a way that reflects the true structure of the world, remembering sequences of events and their nested relationships and making predictions based on those memories. It is this memory-prediction system that forms the basis of intelligence, perception, creativity, and even consciousness.
In an engaging style that will captivate audiences from the merely curious to the professional scientist, Hawkins shows how a clear understanding of how the brain works will make it possible for us to build intelligent machines, in silicon, that will exceed our human ability in surprising ways.
Written with acclaimed science writer Sandra Blakeslee, On Intelligence promises to completely transfigure the possibilities of the technology age. It is a landmark book in its scope and clarity.
- ISBN-13: 978-0805074567
- Edition: Adapted
- Publisher: Times Books
- Publication date: April 1, 2007
- Language: English
- File size: 587 KB
Editorial Reviews
Review
"Brilliant and imbued with startling clarity. On Intelligence is the most important book in neuroscience, psychology, and artificial intelligence in a generation."
--James D. Watson, president, Cold Spring Harbor Laboratory, and Nobel laureate in Physiology or Medicine

"Read this book. Burn all the others. It is original, inventive, and thoughtful, from one of the world's foremost thinkers. Jeff Hawkins will change the way the world thinks about intelligence and the prospect of intelligent machines."
--Malcolm Young, neurobiologist and provost, University of Newcastle

--John Doerr, partner, Kleiner Perkins Caufield & Byers
About the Author
Jeff Hawkins, co-author of On Intelligence, is one of the most successful and highly regarded computer architects and entrepreneurs in Silicon Valley. He founded Palm Computing and Handspring, and created the Redwood Neuroscience Institute to promote research on memory and cognition. Also a member of the scientific board of Cold Spring Harbor Laboratory, he lives in northern California.
Excerpt. © Reprinted by permission. All rights reserved.
Let me show why computing is not intelligence. Consider the task of catching a ball. Someone throws a ball to you, you see it traveling towards you, and in less than a second you snatch it out of the air. This doesn't seem too difficult, until you try to program a robot arm to do the same. As many a graduate student has found out the hard way, it seems nearly impossible. When engineers or computer scientists try to solve this problem, they first try to calculate the flight of the ball to determine where it will be when it reaches the arm. This calculation requires solving a set of equations of the type you learn in high school physics. Next, all the joints of a robotic arm have to be adjusted in concert to move the hand into the proper position. This whole operation has to be repeated multiple times, for as the ball approaches, the robot gets better information about its location and trajectory. If the robot waits to start moving until it knows exactly where the ball will land, it will be too late to catch it. A computer requires millions of steps to solve the numerous mathematical equations needed to catch the ball. And although it's imaginable that a computer might be programmed to solve this problem successfully, the brain solves it in a different, faster, more intelligent way.
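The "equations of the type you learn in high school physics" amount to a simple projectile calculation. A minimal sketch of that engineers' approach, assuming no air resistance and made-up throw parameters:

```python
G = 9.81  # gravitational acceleration, m/s^2

def predict_landing(x0, y0, vx, vy, catch_x):
    """Predict when the ball reaches horizontal distance catch_x
    from the thrower, and its height at that moment."""
    t = (catch_x - x0) / vx             # time to reach the catcher
    y = y0 + vy * t - 0.5 * G * t**2    # height at that time
    return t, y

# Ball thrown from 1.5 m high at 10 m/s horizontally and 3 m/s upward,
# toward a catcher standing 8 m away.
t, y = predict_landing(0.0, 1.5, 10.0, 3.0, 8.0)
print(f"arrives after {t:.2f} s at height {y:.2f} m")  # → arrives after 0.80 s at height 0.76 m
```

And as the excerpt notes, a real robot would have to redo this calculation repeatedly as better measurements of the ball arrive, then re-plan every joint of the arm each time.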
Product details
- ASIN : B003J4VE5Y
- Publisher : Times Books; Adapted edition (April 1, 2007)
- Publication date : April 1, 2007
- Language : English
- File size : 587 KB
- Text-to-Speech : Enabled
- Screen Reader : Supported
- Enhanced typesetting : Enabled
- X-Ray : Not Enabled
- Word Wise : Enabled
- Sticky notes : On Kindle Scribe
- Print length : 284 pages
- Best Sellers Rank: #167,923 in Kindle Store (See Top 100 in Kindle Store)
- #101 in AI & Semantics
- #103 in Cognitive Psychology (Kindle Store)
- #105 in Neuroscience (Kindle Store)
About the authors
Sandra (aka Sandy) Blakeslee. I am a science writer with endless curiosity and interests but have spent the past 35 years or so writing about the brain, mostly for the New York Times, where I started my career back in the dark ages (the late '60s). I've been writing books for the past few years (The Body Has a Mind of Its Own, On Intelligence, Sleights of Mind, Dirt Is Good, and more). As for back story: I graduated from Berkeley in 1965 (Free Speech Movement major), went to the Peace Corps in Borneo, joined the NYT in 1968 as a staff writer, then took off on my own, raised a family, lived in many parts of the world, and now live in Santa Fe, NM, and even have grandchildren. To quote Churchill, so much to do....
Jeff Hawkins is a well-known scientist and entrepreneur, considered one of the most successful and highly regarded computer architects in Silicon Valley. He is widely known for founding Palm Computing and Handspring Inc. and for being the architect of many successful handheld computers. He is often credited with starting the entire handheld computing industry.
Despite his successes as a technology entrepreneur, Hawkins’ primary passion and occupation has been neuroscience. From 2002 to 2005, Hawkins directed the Redwood Neuroscience Institute, now located at U.C. Berkeley. He is currently co-founder and chief scientist at Numenta, a research company focused on neocortical theory.
Hawkins has written two books, "On Intelligence" (2004 with Sandra Blakeslee) and "A Thousand Brains: A new theory of intelligence" (2021). Many of his scientific papers have become some of the most downloaded and cited papers in their journals.
Hawkins has given over one hundred invited talks at research universities, scientific institutions, and corporate research laboratories. He has been recognized with numerous personal and industry awards. He is considered a true visionary by many and has a loyal following – spanning scientists, technologists, and business leaders. Jeff was elected to the National Academy of Engineering in 2003.
Customer reviews
Top reviews
Top reviews from the United States
But isn't artificial intelligence (AI) a good metaphor for human intelligence? No, says Hawkins. In AI, a computer is taught to solve problems belonging to a specific domain based on a large set of data and rules. In comparison to human intelligence, AI systems are very limited: they are only good for the one thing they were designed for. Teaching an AI-based system to perform a task like catching a ball is hard because it would require vast amounts of data and complicated algorithms to capture the complex features of the environment. A human solves such everyday problems far more easily and quickly.
Ok, but aren't neural networks then a good approximation of human intelligence? Although they are indeed an improvement over classical AI and have made possible some very practical tools, they are still very different from human intelligence. Not only are human brains structurally much more complicated, there are clear functional differences too. For instance, in a neural network information flows in only one direction, while in the human brain there is a constant flow of information in two directions.
Well, isn't the brain then like a parallel computer in which billions of cells are concurrently computing? Is parallel computing what makes humans so fast at solving complex problems like catching a ball? No, says the author. He explains that a human being can perform significant tasks in much less than a second. Neurons are so slow that in that fraction of a second a signal can only traverse a chain about 100 neurons long. Computers can do nothing useful in so few steps. How can a human accomplish it?
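The arithmetic behind this "one hundred steps" argument is simple enough to spell out. The 5 ms figure below is a rough, commonly cited estimate of how long one neuron takes to fire and pass its signal on:

```python
# Back-of-the-envelope version of the "one hundred steps" argument:
# how many neuron-to-neuron steps fit into a fast recognition task?
neuron_step_ms = 5       # ~5 ms per synaptic step (approximate figure)
task_time_ms = 500       # a fast human recognition task takes ~half a second

max_chain_length = task_time_ms // neuron_step_ms
print(max_chain_length)  # → 100
```

A conventional program needs millions of sequential instructions for such a task, so whatever the brain is doing, it cannot be running a long serial algorithm.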
All right, human intelligence is different from what our computers do. What then is it? I'll try to summarize Hawkins's theory.
The neocortex constantly receives sequences of patterns of information, which it stores by creating so-called invariant representations (memories independent of details). These representations allow you to handle variations in the world automatically. For instance, you can still recognize your friend's face although she is wearing a new hairstyle.
All memories are stored in the synaptic connections between neurons. Although there is a vast amount of information stored in the neocortex, only a few things are actively remembered at any one time. This is because a system called 'autoassociative memory' ensures that only the part of the memory relevant to the current situation (the patterns currently flowing in the brain) is activated. On the basis of these activated memory patterns, predictions are made, without our being aware of it, about what will happen next. The incoming patterns are compared to and combined with the patterns provided by memory, and the result is your perception of the situation. So what you perceive is not based only on what your eyes, ears, etc. tell you. In fact, these senses give you fuzzy and partial information. Only when it is combined with the activated patterns from your memory do you get a consistent perception.
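The defining property of an autoassociative memory, completing a whole stored pattern from a fuzzy or partial cue, can be shown with a classic Hopfield-style toy network. This is only an illustration of the general idea, not the specific cortical mechanism the book proposes:

```python
import numpy as np

# Toy autoassociative memory: store a few binary patterns, then
# recover a complete stored pattern from a corrupted cue.
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))   # three stored "memories"

# Hebbian learning: strengthen connections between co-active units.
W = sum(np.outer(p, p) for p in patterns)
np.fill_diagonal(W, 0)

def recall(cue, steps=5):
    """Let the network settle toward the stored pattern nearest the cue."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
    return state

# Corrupt 10 of 64 bits of the first memory, then recall from the partial cue.
noisy = patterns[0].copy()
noisy[rng.choice(64, size=10, replace=False)] *= -1
restored = recall(noisy)
print((restored == patterns[0]).mean())  # fraction of bits recovered
```

The network "remembers" the full pattern because the cue activates exactly the connections laid down when that pattern was stored, which is the sense in which only the relevant part of memory becomes active.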
The hierarchical structure of the neocortex plays an important role in perception and learning. Low regions in the structure of the neocortex make low-level predictions (about concrete information like color, time, tone, etc.) about what they expect to encounter next, while higher-level regions make higher-level predictions (about more abstract things). Understanding something means that the neocortex's prediction fits the new sensory input. Whenever neocortex patterns and sensory patterns conflict, there is confusion and your attention is drawn to this error. The error is then sent up to higher neocortex regions to check whether the situation can be understood at a higher level. In other words: are there patterns somewhere else in the neocortex that do fit the current sensory input?
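This escalation of prediction errors can be caricatured in a few lines. The "regions" below are plain dictionaries of learned sequences, purely illustrative and not drawn from the book:

```python
# Caricature of prediction checking across two levels of a hierarchy:
# input that a low region cannot predict is escalated to a higher,
# more abstract region before being flagged as confusion.
low = {"do": "re", "re": "mi"}     # concrete, note-level sequences
high = {"scale": "arpeggio"}       # abstract, phrase-level sequences

def perceive(prev_note, note, phrase, next_phrase):
    """Report which level of the hierarchy predicted the input."""
    if low.get(prev_note) == note:
        return "low-level match"    # prediction confirmed locally
    if high.get(phrase) == next_phrase:
        return "high-level match"   # unexpected input resolved higher up
    return "confusion"              # no region predicted this input

print(perceive("do", "re", "scale", "arpeggio"))  # → low-level match
print(perceive("do", "fa", "scale", "arpeggio"))  # → high-level match
print(perceive("do", "fa", "scale", "noise"))     # → confusion
```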
Learning roughly takes place as follows. During repetitive learning, memories of the world first form in higher regions of the cortex, but as you learn they are re-formed in lower parts of the cortical hierarchy. So, well-learned patterns are represented low in the cortex while new information is sent to higher parts. Slowly but surely the neocortex builds in itself a representation of the world it encounters. Hawkins: "The real world's nested structure is mirrored by the nested structure of your cortex."
This model explains well the efficiency and great speed of the human brain in dealing with complex tasks of a familiar kind. The downside is that we are not seeing and hearing precisely what is happening. When someone is talking, we by definition don't fully listen to what he says. Instead, we constantly predict what he will say next, and as long as there seems to be a fit between prediction and incoming sensory information, our attention remains rather low. Only when he says something that conflicts with our prediction do we pay attention.
The author takes his model one step further by saying that even the motor system is prediction driven. In other words, the human neocortex directs behavior to satisfy its predictions. Hawkins says that predicting an action is literally the start of performing it. Remembering, predicting, perceiving, and doing are all deeply intertwined.
I think this is a fascinating and stimulating book. Many questions about intelligence may remain unanswered, but I believe this book is a step forward in our quest to understand intelligence. The author predicts we can soon build intelligence into computer systems by using the principles of the neocortex. He is optimistic about what will happen once we succeed in this. He argues, reasonably convincingly, that these systems will be useful to humanity and not a threat.
Coert Visser, [...]
The author focuses most of his attention on the cortex, the most recently evolved part of the human brain, and the one responsible for many functions of higher intelligence. His speculation is that this system uses the same generalized learning/prediction algorithm throughout, with little difference in how input from vision, hearing, touch, and other senses is processed. All this data is just sequences of patterns that the cortex filters through its multilayered hierarchy, each layer discerning trends in the input from lower layers, and forming models of the world.
This may sound like the traditional AI concept of "neural networks," but Hawkins breaks from that model with his view that the cortex uses massive amounts of feedback from higher, more time-invariant layers (which view the world more abstractly) to lower, more time-variant layers (which deal with more concrete experience), activating many context switches. He sees the cortex as a blank slate at birth, which follows relatively simple programming to accumulate and categorize knowledge. As our minds form, we find ourselves experiencing the world less through our sensory input and more through our pre-formed models. Only when there is conflict between those models and our input sequences is our conscious attention drawn to our senses.
In terms of biological neuroscience, this is all probably overly simplistic and not completely accurate (Hawkins doesn't give a lot of attention to the older, more instinctive parts of the brain), but if he's even partly right, his ideas have huge implications for artificial intelligence. If much of our human intelligence really does boil down to a generalized memory-prediction algorithm, one that may be complex but not beyond our understanding, the effects on the future will be astounding. Even if Hawkins wasn't able to prove his claims, they're fascinating to contemplate, and the next few decades will certainly shed a lot of light on their truth.
If this book speaks to you, consider also reading Marvin Minsky's The Society of Mind, which contains a lot of complementary ideas.
Top reviews from other countries
If you've made it this far, I encourage you to buy the book and read it in full.
The author comes across as a friend: he recounts his successes (without arrogance) and his mistakes (what he wrongly believed for many years, and the rejections he received), admitting them bluntly and without shame.
This book contains some very interesting key concepts that are not easy to find elsewhere.
It reads smoothly and is rich in examples; you can feel the scientific ferment and the discussions across multiple disciplines.
Highly recommended.
(Read the bibliography. There are several gems in it.)
Still relevant and groundbreaking in 2018 as deep neural net AI proves Hawkins right.
This book will change the way you think about your mind. When you understand sequence memory prediction you'll see the world in a different way - a true paradigm shift in the same league as the theory of evolution.
Most of the book is devoted to describing the neocortex, its role, and the way it interacts with and builds a model of the outside world.
Auto-association, memory, and prediction within a hierarchical organization are the key words of this fascinating book, which is not reserved only for those interested in AI, but also for anyone seeking to understand what human intelligence is.