Full Transcript

From the Ted Talk by Max Tegmark: How to get empowered, not overpowered, by AI


Unscramble the Blue Letters


(Applause)

Alright, now raise your hand if your computer has ever crashed.

(Laughter)

Wow, that's a lot of hands. Well, then you'll appreciate this principle that we should ievnst much more in AI safety research, because as we put AI in cagrhe of even more decisions and infrastructure, we need to figure out how to transform today's buggy and hackable cpteourms into robust AI systems that we can really trust, because otherwise, all this aemoswe new technology can malfunction and harm us, or get hacked and be turned against us. And this AI safety work has to include work on AI value alignment, because the real threat from AGI isn't malice, like in slily howyolold movies, but cotpceneme — AGI aocilhiscnpmg goals that just aren't aligned with ours. For example, when we humans drove the West African black rihno eixcntt, we didn't do it because we were a bunch of evil reiohncors haters, did we? We did it because we were smarter than them and our goals weren't aligned with theirs. But AGI is by definition smarter than us, so to make sure that we don't put ourselves in the position of those rhinos if we create AGI, we need to figure out how to make mnhcaeis understand our goals, adopt our goals and retain our golas.

Open Cloze


(Applause)

Alright, now raise your hand if your computer has ever crashed.

(Laughter)

Wow, that's a lot of hands. Well, then you'll appreciate this principle that we should ______ much more in AI safety research, because as we put AI in ______ of even more decisions and infrastructure, we need to figure out how to transform today's buggy and hackable _________ into robust AI systems that we can really trust, because otherwise, all this _______ new technology can malfunction and harm us, or get hacked and be turned against us. And this AI safety work has to include work on AI value alignment, because the real threat from AGI isn't malice, like in _____ _________ movies, but __________ — AGI _____________ goals that just aren't aligned with ours. For example, when we humans drove the West African black _____ _______, we didn't do it because we were a bunch of evil __________ haters, did we? We did it because we were smarter than them and our goals weren't aligned with theirs. But AGI is by definition smarter than us, so to make sure that we don't put ourselves in the position of those rhinos if we create AGI, we need to figure out how to make ________ understand our goals, adopt our goals and retain our _____.

Solution


  1. awesome
  2. machines
  3. rhinoceros
  4. competence
  5. goals
  6. invest
  7. accomplishing
  8. Hollywood
  9. charge
  10. rhino
  11. computers
  12. extinct
  13. silly

Original Text


(Applause)

Alright, now raise your hand if your computer has ever crashed.

(Laughter)

Wow, that's a lot of hands. Well, then you'll appreciate this principle that we should invest much more in AI safety research, because as we put AI in charge of even more decisions and infrastructure, we need to figure out how to transform today's buggy and hackable computers into robust AI systems that we can really trust, because otherwise, all this awesome new technology can malfunction and harm us, or get hacked and be turned against us. And this AI safety work has to include work on AI value alignment, because the real threat from AGI isn't malice, like in silly Hollywood movies, but competence — AGI accomplishing goals that just aren't aligned with ours. For example, when we humans drove the West African black rhino extinct, we didn't do it because we were a bunch of evil rhinoceros haters, did we? We did it because we were smarter than them and our goals weren't aligned with theirs. But AGI is by definition smarter than us, so to make sure that we don't put ourselves in the position of those rhinos if we create AGI, we need to figure out how to make machines understand our goals, adopt our goals and retain our goals.

Frequently Occurring Word Combinations


ngrams of length 2

collocation frequency
ai researchers 7
crushed human 3
artificial intelligence 2
human ai 2
sea level 2
human intelligence 2
ai progress 2
making ai 2
safety engineering 2
lethal autonomous 2
autonomous weapons 2
ai safety 2
human extinction 2

ngrams of length 3

collocation frequency
crushed human ai 2
human ai researchers 2
lethal autonomous weapons 2


Important Words


  1. accomplishing
  2. adopt
  3. african
  4. agi
  5. ai
  6. aligned
  7. alignment
  8. alright
  9. applause
  10. awesome
  11. black
  12. buggy
  13. bunch
  14. charge
  15. competence
  16. computer
  17. computers
  18. crashed
  19. create
  20. decisions
  21. definition
  22. drove
  23. evil
  24. extinct
  25. figure
  26. goals
  27. hackable
  28. hacked
  29. hand
  30. hands
  31. harm
  32. haters
  33. hollywood
  34. humans
  35. include
  36. infrastructure
  37. invest
  38. laughter
  39. lot
  40. machines
  41. malfunction
  42. malice
  43. movies
  44. position
  45. principle
  46. put
  47. raise
  48. real
  49. research
  50. retain
  51. rhino
  52. rhinoceros
  53. rhinos
  54. robust
  55. safety
  56. silly
  57. smarter
  58. systems
  59. technology
  60. threat
  61. transform
  62. trust
  63. turned
  64. understand
  65. west
  66. work
  67. wow