Part One Background
Chapter 1 The Roots of Artificial Intelligence
- The concept of supervised learning is a key part of modern AI.
- Training set vs. test set: the system learns from the former and is evaluated on the latter, which it never sees during training.
- The perceptron learning algorithm: adjust the weights only when the perceptron misclassifies a training example (a minimal sketch follows this chapter's notes).
- Even at its beginning, AI suffered from a hype problem.
- "Easy things are hard" dictum: human workers are hired to perform the "easy" tasks that are currently too hard for computers.
Chapter 2 Neural Networks and the Ascent of Machine Learning
- rebutted: refuted
- to soar: to rise sharply
- pep: liveliness, energy
- imbued: infused, permeated
- disparagingly: in a belittling way
- "activation"
- "back-propagation"
- I trained both a perceptron and a two-layer neural network, each with 324 inputs and 10 outputs, on the handwritten-digit-recognition task, using sixty thousand examples, and then tested how well each was able to recognize ten thousand new examples (a rough recreation is sketched after this chapter's notes).
- The term "connectionist" refers to the idea that knowledge in these networks resides in weighted connections between units.
Chapter 3 AI Spring
- overlord: a supreme ruler
- eerily: uncannily, in an unsettling way
- pun: a play on words
- derogatory: disparaging
- to tout: to promote or praise insistently
- shallow: superficial
- self-awareness: consciousness of one's own existence and mental states
- dogged: stubbornly persistent
- contrivance: a device, a contraption
- ergo: therefore
- to imbue: to permeate, to infuse
- to spur: to prompt, to urge on
- to scoff at: to mock, to dismiss scornfully
- glaringly: blatantly, conspicuously
- malevolent: wishing harm, ill-intentioned
- to ascribe: to attribute
- wryly: with dry, ironic humor
- rapture: ecstasy
- post-haste: with all possible speed
- conscripted: drafted, compelled into service
- dire: urgent, desperate
- to relent: to give in, to soften
- a scant: barely any, very little
- surfeit: an excess
- to straddle: to sit or stand astride; to span both sides of
- zaniest: most wildly comical
- ploy: a stratagem
- stung: hurt, smarting (as from a defeat or a remark)
- wagering: betting
- foil: a contrasting counterpart that sets something off
- wherein: in which
- harbinger: a forerunner, a sign of what is to come
- bearing: orientation, sense of position
- The terms narrow and weak are used to contrast with strong, human-level or full-blown AI (sometimes called AGI, or Artificial General Intelligence).
- We're back to the philosophical question I was discussing with my mother: is there a difference between 'simulating a mind' and 'literally having a mind'?
- Ray Kurzweil, who is now director of engineering at Google.
- Singularity: "a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed".
- Kurzweil agrees: "most of the brain's complexity comes from its own interaction with a complex world. Thus, it will be necessary to provide an artificial intelligence with an education just as we do with a natural intelligence."
- Kurzweil's thinking has been particularly influential in the tech industry, where people often believe in exponential technological progress as the means to solve all of society's problems.
- Kapor: "Perception of and physical interactions with the environment is the equal partner of cognition in shaping experience... Emotions bound and shape the envelope of what is thinkable."
- Crucial abilities underlying our distinctive human intelligence, such as perception, language, decision-making, common sense reasoning and learning.
Part Two Looking and Seeing
Chapter 4 Who, What, When, Where, Why
- The neural networks dominating deep learning are directly modeled after discoveries in neuroscience.
- Today's most influential and widely used approach: convolutional neural networks, or (as most people in the field call them) ConvNets.
- This calculation - multiplying each value in a receptive field by its corresponding weight and summing the results - is called a convolution. Hence the name "convolutional neural network" (a tiny worked example follows this chapter's notes).
- Would you like to experiment with a well-trained ConvNet? Simply take a photo of an object, and upload it to Google's "search by image" engine. Google will run a ConvNet on your image and, based on the resulting confidences (over thousands of possible object categories), will tell you its "best guess" for the image.
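- Sketch (mine): a small worked example of the convolution described above: slide a 2x2 weight matrix (the unit's receptive field weights) across an image and, at each position, multiply element-wise and sum. All numbers are made up for illustration.

```python
import numpy as np

image = np.array([[1, 2, 0, 1],
                  [0, 1, 3, 1],
                  [2, 1, 0, 0],
                  [1, 0, 1, 2]], dtype=float)

weights = np.array([[ 1, 0],
                    [-1, 2]], dtype=float)     # the 2x2 receptive field's weights

h = image.shape[0] - weights.shape[0] + 1
w = image.shape[1] - weights.shape[1] + 1
activation_map = np.zeros((h, w))

for i in range(h):
    for j in range(w):
        receptive_field = image[i:i+2, j:j+2]
        # the convolution: multiply each value by its weight, then sum
        activation_map[i, j] = np.sum(receptive_field * weights)

print(activation_map)
```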
Chapter 5 ConvNets and ImageNet
- bunch: a group, a cluster
- upended: overturned
- jolt: a sudden jarring shock
- snooping: prying into others' affairs
- terse: curt, brief
- stuffed: taxidermied (of an animal)
- to tease out: to carefully draw out, to extract
- Yann LeCun, the inventor of ConvNets.
- A cardinal rule in machine learning is "Don't train on the test data". It seems obvious.
- It turns out that the recent success of deep learning is due less to new breakthroughs in AI than to the availability of huge amounts of data (thank you, internet!) and very fast parallel computer hardware.
- Facebook labelled your uploaded photos with the names of your friends and filed a patent on classifying the emotions behind facial expressions in uploaded photos.
- ConvNets can be applied to video and used in self-driving cars to track pedestrians, or to read lips and classify body language. ConvNets can even diagnose breast and skin cancer from medical images, determine the stage of diabetic retinopathy, and assist physicians in treatment planning for prostate cancer.
- It could be that the knowledge needed for humanlike visual intelligence - for example, making sense of the "soldier and dog" photo at the beginning of the previous chapter - can't be learned from millions of pictures downloaded from the web, but has to be experienced in some way in the real world.
Chapter 6 A Closer Look at Machines That Learn
- to veer: to change direction suddenly, to swerve
- speckling: a scattering of small spots
- repellent: something that drives away or repels
- skewed: distorted, biased
- inconspicuous: not attracting notice
- whack-a-mole: a struggle in which problems keep popping back up as fast as they are knocked down
- adversarial: involving opposition or attack
- ostrich: the large flightless bird
- I'll explore how differences between learning in ConvNets and in humans affect the robustness and trustworthiness of what is learned.
- The learning process of ConvNets is not very humanlike.
- The most successful ConvNets learned via a supervised-learning procedure: they gradually change their weights as they process the examples in the training set again and again, over many epochs (that is, many passes through the training set), learning to classify each input as one of a fixed set of possible output categories (an epoch-by-epoch sketch appears at the end of these notes).
- Demis Hassabis, co-founder of Google DeepMind.
- Deep learning requires big data.
- Have you ever put a photo of a friend on your Facebook page and commented on it? Facebook thanks you!
- Deep learning, as always, requires a profusion of training examples.
- Upon purchasing a Tesla vehicle, buyers must agree to a data-sharing policy with the company.
- Requiring so much data is a major limitation of deep learning today. Yoshua Bengio, another high-profile AI researcher, agrees: "We can't realistically label everything in the world and meticulously explain every last detail to the computer."
- The term unsupervised learning refers to a broad group of methods for learning categories or actions without labelled data.
- In machine-learning jargon, Will's network "overfitted" to its specific training set.
- They are overfitting to their training data and learning something different from what we are trying to teach them.
- Commercial face-recognition systems tend to be more accurate on white male faces than on female or nonwhite faces. Camera software for face detection is sometimes prone to missing faces with dark skin and to classifying Asian faces as "blinking".
- The spread of real-world AI systems trained on biased data can magnify these biases and do real damage.
- Should the data sets being used to train AI accurately mirror our own biased society - as they often do now - or should they be tinkered with specifically to achieve social reform aims? And who should be allowed to specify the aims or do the tinkering?
- More generally, you can often trust that people know what they are doing if they can explain to you how they arrived at an answer or a decision.
- The dark secret at the heart of AI.
- Ian Goodfellow, an AI expert who is part of the Google Brain team, says, "Almost anything bad you can think of doing to a machine-learning model can be done right now...and defending it is really, really hard".
- It's misleading to say that deep networks "learn on their own" or that their training is "similar to human learning". Recognition of the success of these networks must be tempered with a realization that they can fail in unexpected ways because of overfitting to their training data, long-tail effects, and vulnerability to hacking.
- The formidable challenges of balancing the benefits of AI with the risks of its unreliability and misuse.
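- Sketch (mine, not from the book): a minimal illustration of supervised training over many epochs and of overfitting. With a small, partly mislabeled training set, a high-capacity network's training accuracy keeps climbing while its accuracy on unseen test data lags behind. Dataset sizes and network settings are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, train_size=200, random_state=0)

# Corrupt a fifth of the training labels: the only way for the network to get these
# examples "right" is to memorize them rather than learn the general rule.
rng = np.random.default_rng(0)
y_train = y_train.copy()
noisy = rng.choice(len(y_train), size=len(y_train) // 5, replace=False)
y_train[noisy] = rng.integers(0, 10, size=len(noisy))

net = MLPClassifier(hidden_layer_sizes=(256,), random_state=0)
for epoch in range(1, 201):                    # each call makes one more pass over the training set
    net.partial_fit(X_train, y_train, classes=np.arange(10))
    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}   "
              f"train accuracy {net.score(X_train, y_train):.2f}   "
              f"test accuracy {net.score(X_test, y_test):.2f}")
```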