Part One Background
Chapter 1 The Roots of Artificial Intelligence
- The concept of supervised learning is a key part of modern AI.
- Training set. Test set.
- Perceptron learning algorithm (a minimal sketch appears at the end of this chapter's notes).
- Even at its beginning, AI suffered from a hype problem.
- "Easy things are hard" dictum: the human workers are hired to perform the "easy" tasks that are currently too hard for computers.
Chapter 2 Neural Networks and the Ascent of Machine Learning
- rebutted: réfuter
- to soar: monter en flèche
- pep: vitalité
- imbued: imprégné
- disparagingly: de façon désobligeante
- "activation"
- "back-propagation"
- I trained both a perceptron and a two-layer neural network, each with 324 inputs and 10 outputs, on the handwritten-digit-recognition task, using sixty thousand examples, and then tested how well each was able to recognize ten thousand new examples (a toy training sketch appears at the end of this chapter's notes).
- The term "connectionist" refers to the idea that knowledge in these networks resides in weighted connections between units.
Chapter 3 AI Spring
- overlord: chef suprême
- eerily: sinistrement
- pun: jeux de mots
- derogatory: désobligeant
- tout: racoler
- shallow: superficiel
- self-awareness: conscience de soi
- dogged: obstiné
- contrivance: dispositif
- ergo: par conséquent
- to imbue: imprégner
- to spur: inciter
- to scoff at: se moquer de
- glaringly: extrêmement
- malevolent: malveillant
- to ascribe: attribuer
- wryly: ironiquement
- rapture: extase
- post-haste: en toute hâte
- conscripted: enrôlé
- dire: pressant
- to relent: céder
- a scant: peu de
- surfeit: excès
- to straddle: enfourcher
- zaniest: loufoque
- ploy: stratagème
- stung: piqué
- wagering: pari
- foil: faire-valoir
- wherein: où
- harbinger: messager
- bearing: position
- The terms narrow and weak are used to contrast with strong, human-level or full-blown AI (sometimes called AGI, or Artificial General Intelligence).
- We're back to the philosophical question I was discussing with my mother: is there a difference between 'simulating a mind' and 'literally having a mind'?
- Ray Kurzweil, who is now director of engineering at Google.
- Singularity: "a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed".
- Kurzweil agrees: "most of the brain's complexity comes from its own interaction with a complex world. Thus, it will be necessary to provide an artificial intelligence with an education just as we do with a natural intelligence."
- Kurzweil's thinking has been particularly influential in the tech industry, where people often believe in exponential technological progress as the means to solve all of society's problems.
- Kapor: "Perception of and physical interaction with the environment is the equal partner of cognition in shaping experience... Emotions bound and shape the envelope of what is thinkable"
- Crucial abilities underlying our distinctive human intelligence, such as perception, language, decision-making, common sense reasoning and learning.
Part Two Looking and Seeing
Chapter 4 Who, What, When, Where, Why
- The neural networks dominating deep learning are directly modeled after discoveries in neuroscience.
- Today's most influential and widely used approach: convolutional neural networks, or (as most people in the field call them) ConvNets.
- This calculation - multiplying each value in a receptive field by its corresponding weight and summing the results - is called a convolution. Hence the name "convolutional neural network" (a one-step sketch appears at the end of this chapter's notes).
- Would you like to experiment with a well-trained ConvNet? Simply take a photo of an object, and upload it to Google's "search by image" engine. Google will run a ConvNet on your image and, based on the resulting confidences (over thousands of possible object categories), will tell you its "best guess" for the image.
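- A single convolution step in code, assuming a 3x3 receptive field and a made-up filter; the numbers are only there to show the multiply-and-sum calculation.

```python
# One convolution step: multiply each value in a 3x3 receptive field by its
# corresponding weight and sum the results. The image patch and the filter
# weights are made-up numbers for illustration.
receptive_field = [
    [0.0, 0.5, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.5, 0.0],
]
weights = [   # a small "vertical edge"-style filter, chosen arbitrarily
    [-1.0, 2.0, -1.0],
    [-1.0, 2.0, -1.0],
    [-1.0, 2.0, -1.0],
]

activation = sum(
    receptive_field[i][j] * weights[i][j]
    for i in range(3)
    for j in range(3)
)
print(activation)  # 4.0: one unit's activation in the next layer
```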
Chapter 5 ConvNets and ImageNet
- bunch: groupe
- upended: renversé
- jolt: soubresaut
- snooping: fouiner
- terse: sec
- stuffed: empaillé
- to tease out: extraire
- Yann LeCun, the inventor of ConvNets.
- A cardinal rule in machine learning is "Don't train on the test data". It seems obvious.
- It turns out that the recent success of deep learning is due less to new breakthroughs in AI than to the availability of huge amounts of data (thank you, internet!) and very fast parallel computer hardware.
- Facebook labelled your uploaded photos with the names of your friends and registered a patent on classifying the emotions behind facial expressions in uploaded photos.
- ConvNets can be applied to video and used in self-driving cars to track pedestrians, or to read lips and classify body language. ConvNets can even diagnose breast and skin cancer from medical images, determine the stage of diabetic retinopathy, and assist physicians in treatment planning for prostate cancer.
- It could be that the knowledge needed for humanlike visual intelligence - for example, making sense of the "soldier and dog" photo at the beginning of the previous chapter - can't be learned from millions of pictures downloaded from the web, but has to be experienced in some way in the real world.
Chapter 6 A Closer Look at Machines That Learn
- to veer: virer
- speckling: moucheté
- repellent: répulsif
- skewed: faussé
- inconspicuous: qui passe inaperçu
- whack-a-mole: jeu du chat et de la souris
- adversarial: conflictuel
- ostrich: autruche
- I'll explore how differences between learning in ConvNets and in humans affect the robustness and trustworthiness of what is learned.
- The learning process of ConvNets is not very humanlike.
- The most successful ConvNets learned via a supervised-learning procedure: they gradually change their weights as they process the examples in the training set again and again, over many epochs (that is, many passes through the training set), learning to classify each input as one of a fixed set of possible output categories.
- Demis Hassabis, co-founder of Google DeepMind.
- Deep learning requires big data.
- Have you ever put a photo of a friend on your Facebook page and commented on it? Facebook thanks you!
- Deep learning, as always, requires a profusion of training examples.
- Upon purchase of a Tesla vehicle, buyers must agree to a data-sharing policy with the company.
- Requiring so much data is a major limitation of deep learning today. Yoshua Bengio, another high-profile AI researcher, agrees: "We can't realistically label everything in the world and meticulously explain every last detail to the computer."
- The term unsupervised learning refers to a broad group of methods for learning categories or actions without labelled data.
- In machine-learning jargon, Will's network "overfitted" to its specific training set.
- They are overfitting to their training data and learning something different from what we are trying to teach them (a toy illustration of overfitting appears at the end of this chapter's notes).
- Commercial face-recognition systems tend to be more accurate on white male faces than on female or nonwhite faces. Camera software for face detection is sometimes prone to missing faces with dark skin and to classifying Asian faces as "blinking".
- The spread of real-world AI systems trained on biased data can magnify these biases and do real damage.
- Should the data sets being used to train AI accurately mirror our own biased society - as they often do now - or should they be tinkered with specifically to achieve social reform aims? And who should be allowed to specify the aims or do the tinkering?
- More generally, you can often trust that people know what they are doing if they can explain to you how they arrived at an answer or a decision.
- The dark secret at the heart of AI.
- Ian Goodfellow, an AI expert who is part of the Google Brain team, says, "Almost anything bad you can think of doing to a machine-learning model can be done right now...and defending it is really, really hard".
- It's misleading to say that deep networks "learn on their own" or that their training is "similar to human learning". Recognition of the success of these networks must be tempered with a realization that they can fail in unexpected ways because of overfitting to their training data, long-tail effects and vulnerability to hacking.
- The formidable challenges of balancing the benefits of AI with the risks of its unreliability and misuse.
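- A toy illustration of overfitting: a high-degree polynomial fits its small, noisy training set almost perfectly but does worse on held-out data; the data and polynomial degrees are invented for illustration.

```python
# Overfitting in miniature: a high-degree polynomial memorizes its noisy
# training points but generalizes worse to held-out points.
import numpy as np

rng = np.random.default_rng(1)

def noisy_line(n):
    x = rng.uniform(-1, 1, n)
    y = 2 * x + rng.normal(0, 0.3, n)   # the true relationship is linear
    return x, y

x_train, y_train = noisy_line(15)    # small training set
x_test, y_test = noisy_line(100)     # held-out test set

for degree in (1, 12):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.3f}, test error {test_err:.3f}")
```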
Chapter 7 On Trustworthy and Ethical AI
- tipsy: pompette
- sobering: qui donne à réfléchir
- menial: subalterne
- rickshaw: pousse-pousse
- to canvass: sonder l'opinion
- creepy: flippant
- hamstrung: couper les tendons d'Achille
- spur: voie secondaire
- staple: de base
- contrived: imaginé
- Facebook, for example, applies a face-recognition algorithm to every photo that is uploaded to its site, trying to detect the faces in the photo and to match them with known users (at least those users who haven't disabled this feature).
- Privacy is an obvious issue. Even if I'm not on Facebook (or any other social media platform with face recognition), photos including me might be tagged and later automatically recognized on the site, without my permission.
- "We deserve a world where we're not empowering governments to categorize, track and control citizens"
- My own opinion is that too much attention has been given to the risks from superintelligent AI and far too little to deep learning's lack of reliability and transparency and its vulnerability to attacks.
PART THREE Learning to Play
Chapter 8 Rewards for Robots
- preposterously: absurde
- nagging: tenace
- covertly: secrètement
- oblivious: inconscient
- wag: remuer
- puddle: flaque
- awash: inondé
- wry: ironique
- treat: friandise
- "Reward behavior like and ignore behavior I don't"
- Operant conditioning inspired an important machine-learning approach called reinforcement learning. Reinforcement learning contrasts with the supervised learning method.
- Reinforcement learning requires no labelled training examples. Instead, an agent - the learning program - performs actions in an environment (usually a computer simulation) and occasionally receives rewards from the environment.
- Reinforcement learning: learning too much at one time can be detrimental.
- Q-learning
- Exploration versus exploitation balance (both ideas appear in the Q-learning sketch at the end of this chapter's notes).
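- A minimal Q-learning sketch on a made-up five-state corridor, with epsilon-greedy action choice for the exploration-exploitation balance; all states, rewards and parameters are illustrative, not from the book.

```python
# Minimal Q-learning on a made-up five-state corridor: the agent starts at
# state 0 and receives a reward of 1 only when it reaches state 4.
import random

random.seed(0)

n_states = 5
actions = [-1, +1]                       # step left, step right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    state = 0
    while state != n_states - 1:
        # Exploration vs. exploitation: mostly pick the best-known action,
        # but occasionally try a random one.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1
        next_state = min(max(state + actions[a], 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Q-learning update: move Q(s, a) toward reward + discounted best future value.
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

for s, values in enumerate(Q):
    print(s, [round(v, 2) for v in values])
```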
Chapter 9 Game On
- stance: position
- paddle: raquette
- to tally up: faire le total
- devious: sournois
- windfall: aubaine
- gobsmacked: estomaqué
- pruning: élagage
- In reinforcement learning we have no labels.
- Learning a guess from a better guess.
- Three all-important concepts: the game tree, the evaluation function and learning by self-play (the first two are illustrated in the sketch at the end of this chapter's notes).
- AlphaGo acquired its abilities by reinforcement learning via self-play.
- The program chooses moves probabilistically.
- AlphaGo learns by playing against itself over many games.
- With its AlphaGo project, DeepMind demonstrated that one of AI's long-time grand challenges could be conquered by an inventive combination of reinforcement learning, convolutional neural networks and Monte Carlo tree search.
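- A toy game tree searched with plain minimax and a hand-written evaluation function; AlphaGo itself uses Monte Carlo tree search and learned evaluations, so this only illustrates the two basic concepts, and the tree and scores below are invented.

```python
# Toy game tree searched with minimax: interior nodes are positions, leaves
# get scores from a static "evaluation function".
game_tree = {
    "start": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
# Evaluation function: a made-up score for each leaf position
# (positive = good for the maximizing player).
evaluation = {"a1": 3, "a2": -2, "b1": 5, "b2": -8}

def minimax(position, maximizing):
    if position in evaluation:            # leaf: just evaluate it
        return evaluation[position]
    children = [minimax(child, not maximizing) for child in game_tree[position]]
    return max(children) if maximizing else min(children)

# The maximizing player looks ahead: each move leads to the opponent's best reply.
best_move = max(game_tree["start"], key=lambda move: minimax(move, maximizing=False))
print(best_move, minimax("start", maximizing=True))   # "a", -2
```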
Chapter 10 Beyond Games
- imbuing: imprégner
- adversarial: conflictuel
- prowess: talent
- pesky: fichu
- Unlike supervised learning, reinforcement learning holds the promise of programs that can truly learn on their own, simply by performing actions in their "environment" and observing the outcomes.
- How to think better: how to think logically, reason abstractly and plan strategically.
- Andrej Karpathy, Tesla's director of AI
PART FOUR Artificial Intelligence Meets Natural Language
Chapter 11 Words, and the Company They Keep
- to surmise: présumer
- beak: bec
- bent out of shape: upset and angry
- mind-boggling: impressionnant
- to impinge: empiéter
- wit: esprit
- alluring: séduisant
- to seep through: suinter
- You shall know a word by the company it keeps.
- In linguistics, this idea is known more formally as distributional semantics.
- The semantics of words might actually require many dozens if not hundreds of dimensions.
- It turns out that using word vectors as numerical inputs to represent words, as opposed to the simple one-hot scheme, greatly improves the performance of neural networks in NLP tasks (a toy comparison appears at the end of this chapter's notes).
- "word2vec": shorthand for "word to vector".
- The idea is to train the word2vec network to predict what words are likely to be paired with a given input word. Word vectors are also called word embeddings.
- Let's remember that the goal of this whole process is to find a numerical representation - a vector - for each word in the vocabulary, one that captures something of the semantics of the word.
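- A toy comparison of one-hot codes with dense word vectors; the "embeddings" below are made-up numbers standing in for word2vec output, and real word vectors have far more dimensions.

```python
# One-hot codes vs. dense word vectors: one-hot vectors treat every pair of
# words as equally unrelated, while learned word vectors let similar words
# end up close together. The vocabulary and vector values are invented.
import math

vocabulary = ["cat", "dog", "car"]

def one_hot(word):
    return [1.0 if w == word else 0.0 for w in vocabulary]

# Pretend these came out of word2vec training.
word_vectors = {
    "cat": [0.9, 0.1, 0.8],
    "dog": [0.8, 0.2, 0.7],
    "car": [0.1, 0.9, 0.2],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine(one_hot("cat"), one_hot("dog")))            # 0.0: no similarity at all
print(cosine(word_vectors["cat"], word_vectors["dog"]))  # close to 1.0
print(cosine(word_vectors["cat"], word_vectors["car"]))  # much lower
```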
Chapter 12 Translation as Encoding and Decoding
- mildew: moisissure, mildiou
- cringe: avoir un mouvement de recul
- spurred: incité
- jarring: qui secoue
- dazzled: éblouir
- wrongheaded: erroné
- gist: sens général
- Encoder, Meet Decoder
- Long short-term memory (LSTM) units: the idea is that these units allow for "short-term" memory that can last throughout the processing of the sentence.
- To measure the quality of a translation, BLEU essentially counts the number of matches between words and phrases of varying lengths (a drastically simplified sketch appears at the end of this chapter's notes).
- My general experience is that the translation quality of, say, Google Translate declines significantly when it is given whole paragraphs instead of single sentences.
- While the skeletal meaning of this story comes through, subtle but important nuances get lost in all the translations.
- The main obstacle is this: like speech-recognition systems, machine-translation systems perform their task without actually understanding the text they are processing.
- "Machine translation...often involves problems of ambiguity that can only be resolved by achieving an actual understanding of the text - and bringing real-world knowledge to bear".
- The images were downloaded from repositories such as Flickr.com, and the captions for these images were produced by humans - namely, Amazon Mechanical Turk workers, who were hired by Google for this study.
- I'm certain that these systems will improve as researchers apply more data and new algorithms. However, I believe that the fundamental lack of understanding in caption-generating networks inevitably means that, as in language translation, these systems will remain untrustworthy.
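- A drastically simplified sketch of BLEU-style match counting; real BLEU combines clipped n-gram precisions over several lengths with a brevity penalty, and the sentences here are invented.

```python
# Simplified sketch of the counting idea behind BLEU: compare a machine
# translation against a human reference by counting matching words (1-grams)
# and two-word phrases (2-grams). The sentences are made up.
from collections import Counter

reference = "the cat sat on the mat".split()
candidate = "the cat sat on a mat".split()

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

for n in (1, 2):
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    # Count candidate n-grams that also appear in the reference (clipped counts).
    matches = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    print(f"{n}-gram precision: {matches}/{total} = {matches / total:.2f}")
```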
Chapter 13 Ask Me Anything
- lodestar: guide
- puns: jeu de mots
- godsend: aubaine
- uncanny: mystérieux
- stunt: acrobatie
- parlour: petit salon
- dearth: manque
- bestowing: conférer
- to suss out: piger
- dubious: douteux
- to vie: concourir
- to forestall: prévenir
- inching: avancer doucement
- adversarial: conflictuel
- adversary: adversaire
- nefarious: abominable
- Overpromising and under-delivering are, of course, an all-too-common story in AI.
- The Winograd schemas are designed precisely to be easy for humans but tricky for computers.
- It seems to me to be extremely unlikely that machines could ever reach the level of humans on translation, reading comprehension and the like by learning exclusively from online data, with no real understanding of the language they process.
- Language also relies on commonsense knowledge of the other people with whom we communicate.
PART FIVE The Barrier of Meaning
Chapter 14 On Understanding
- endowed: doté
- insight: perspicacité
- bland: fade
- teeming: grouillant
- fraught: tendu
- what the heck? : c'est quoi ce bordel ?
- indulge: céder
- pun: jeu de mots
- yucky: dégoutant
- libel: diffamation
- feat: exploit
- Humans, in some deep and essential way, understand the situations they encounter, whereas no AI system yet possesses such understanding. While state-of-the-art AI systems have nearly equalled (and in some cases surpassed) humans on certain narrowly defined tasks, these systems all lack a grasp of the rich meanings humans bring to bear in perception, language and reasoning.
- Psychologists have coined a term - intuitive physics - for the basic knowledge and beliefs humans share about objects and how they behave. As very young children, we also develop intuitive biology: knowledge about how living things differ from inanimate objects.
- Because humans are a profoundly social species, from infancy on we additionally develop intuitive psychology: the ability to sense and predict the feelings, beliefs and goals of other people.
- Simulations appear central to the representation of meaning.
- For example, Lakoff and Johnson note that we talk about the abstract concept of time using terms that apply to the more concrete concept of money: You "spend" or "save" time. You often "don't have enough time".
- "I was given a warm welcome", "She gave me an icy stare", "He gave me the cold shoulder". Such phrasings are so ingrained that we don't realize we're speaking metaphorically. These metaphors reveal the physical basis of our understanding of concepts.
- Abstraction and analogy.
- Analogy-making in a very general sense as "the perception of a common essence between two things".
- "Without concepts there can be no thought, and without analogies there can be no concepts"
- Everyone in AI research agrees that core commonsense knowledge and the capacity for sophisticated abstraction and analogy are among the missing links required for future progress in AI.
Chapter 15 Knowledge, Abstraction and Analogy in Artificial Intelligence
- elusive: insaisissable
- rut: ornière
- commonsense: du bon sens; sensé; raisonnable
- imbue: imprégner
- mind-boggling: impressionnant
- grappling: s'agripper
- Lenat concluded that real progress in AI would require machines to have common sense.
- Unwritten knowledge that humans have.
- Our commonsense knowledge is governed by abstraction and analogy.
- AI research often uses so-called microworlds - idealized domains, such as Bongard problems, in which a researcher can develop ideas before testing them in more complex domains.
- Conceptual slippage, an idea at the heart of analogy-making.
- The concept of website slipped to the concept of wall, and the concept of writing a blog slipped to the concept of spray-painting graffiti.
- Have you ever struggled unsuccessfully to solve a problem, finally recognizing that you have been repeating the same unproductive thought process? This happens to me all the time; however, once I recognize this pattern, I can sometimes break out of the rut.
- "We Are really, Really Far Away"
- The modern age of artificial intelligence is dominated by deep learning, with its triumvirate of deep neural networks, big data and ultrafast computers.
- A small segment of the AI community has consistently argued for the so-called embodiment hypothesis: the premise that a machine cannot attain human-level intelligence without having some kind of body that interacts with the world.
Chapter 16 Questions, Answers and Speculations
- jaywalk: traverser en dehors des clous
- dart across: foncer
- inconspicuous: qui passe inaperçu
- pesky: fichu
- oxymoron: réalité paradoxale; exemple "mort-vivant"
- bewildering: déroutant
- beset: frappé
- ballparked: approximatif
- to elude: échapper
- witty: plein d'esprit
- elusive: insaisissable
- vexing: épineux
- foibles: manies
- addle: embrouillé
- headlong: tête la première
- The sort of core intuitive knowledge: intuitive physics, biology and especially psychology.
- It's worth remembering the maxim that the first 90 percent of a complex technology project takes 10 percent of the time and the last 10 percent takes 90 percent of the time.
- I believe that it is possible, in principle, for a computer to be creative.
- I've seen numerous computer-generated artworks that I consider beautiful.
- The creativity results from the teamwork of human and computer: the computer generates initial artworks and then successive variations, and the human provides judgment of the resulting works, which comes from the human's understanding of abstract artistic concepts.
- "Human intelligence is a marvelous, subtle, and poorly understood phenomenon. There is no danger of duplicating it anytime soon"
- "Prediction is hard, especially about the future"
- The annoying limitations of humans, such as our slowness of thought and learning, our irrationality and cognitive biases, our susceptibility to boredom, our need for sleep and our emotions, all of which get in the way of productive thinking.
- Above all, the take-home message from this book is that we humans tend to overestimate AI advances and underestimate the complexity of our own intelligence.
- AI systems are brittle; that is, they make errors when their input varies too much from the examples on which they've been trained.
- We tend to anthropomorphize AI systems: we impute human qualities to them and end up overestimating the extent to which these systems can actually be fully trusted.