lectures.alex.balgavy.eu

Lecture notes from university.
git clone git://git.alex.balgavy.eu/lectures.alex.balgavy.eu.git

philosophy.html (3635B)


      1 <!DOCTYPE html>
      2 <html>
      3 <head>
      4 <script type="text/javascript" async src="https://cdn.jsdelivr.net/gh/mathjax/MathJax@2.7.5/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
      5 <link rel="stylesheet" type="text/css" href="style.css">
      6 <title>philosophy</title>
      7 <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
      8 </head>
      9 <body>
     10 
     11 <div id="Philosophy of AI"><h1 id="Philosophy of AI">Philosophy of AI</h1></div>
     12 <p>
     13 what is intelligence?
     14 </p>
     15 <ul>
     16 <li>
     17 classically, you test this using the Turing Test
     18 
     19 <ul>
     20 <li>
     21 interrogation game
     22 
     23 <li>
     24 the interrogator questions two parties; the goal of both parties is to convince the interrogator that they are human
     25 
     26 <li>
     27 if the interrogator can't reliably tell which party is the human, the computer counts as intelligent
     28 
     29 </ul>
     30 <li>
     31 the objections:
     32 
     33 <ul>
     34 <li>
     35 the test is subjective
     36 
     37 <li>
     38 why are we basing intelligence on <em>human</em> intelligence? analogy with flight: we only got off the ground once we stopped imitating natural flight
     39 
     40 </ul>
     41 </ul>
     42 
     43 <p>
     44 "intelligence is everything a computer can't do yet" (the so-called AI effect).
     45 </p>
     46 
     47 <p>
     48 can a computer be intelligent?
     49 </p>
     50 <ul>
     51 <li>
     52 substitution argument: yes. if you replace the neurons of a human brain one at a time with computer chips, you eventually end up with a computer, without your consciousness or thought process changing at any point.
     53 
     54 <li>
     55 medium argument: no. "carbon chauvinism": there's something special about carbon-based biology that allows us to do things silicon computers can't.
     56 
     57 <li>
     58 formal systems argument: no. formal systems are inherently limited (Gödel's incompleteness theorems); since computers are just formal systems, they inherit those limitations. we are (debatably) not formal systems, so we do not have them.
     59 
     60 <li>
     61 symbol-grounding problem: reasoning systems only manipulate symbols
     62 
     63 <ul>
     64 <li>
     65 symbols can only refer to other symbols, so how can a computer ever know what's "red", "heavy", "sad" in the 'real' world?
     66 
     67 <li>
     68 so simulated intelligence ≠ real intelligence
     69 
     70 <li>
     71 thought experiment - the Chinese Room:
     72 
     73 <ul>
     74 <li>
     75 a room with Chinese symbols coming in
     76 
     77 <li>
     78 one person inside follows a rulebook that maps incoming Chinese symbols to outgoing ones, producing sensible replies
     79 
     80 <li>
     81 there's nothing in this system that understands Chinese
     82 
     83 </ul>
     84 </ul>
     85 </ul>
     86 
     87 <p>
     88 Mind-body problem:
     89 </p>
     90 <ul>
     91 <li>
     92 we have a physical body, and seemingly metaphysical thoughts
     93 
     94 <li>
     95 what could be the relationship between the physical and the metaphysical? 
     96 
     97 <li>
     98 opinions:
     99 
    100 <ul>
    101 <li>
    102 mind-body dualism, interactionism: we consist of two parts (physical <em>and</em> metaphysical) --  Descartes 
    103 
    104 <li>
    105 materialism: the mind and the body are one thing
    106 
    107 <li>
    108 gradualism: we evolved the mind (intelligence, consciousness) over time
    109 
    110 </ul>
    111 </ul>
    112 
    113 <p>
    114 Intentional stance:
    115 </p>
    116 <ul>
    117 <li>
    118 intelligence/consciousness is "attributed" and "gradual" (Dennett)
    119 
    120 <li>
    121 so the question isn't "will computers ever be conscious?", but rather "will we ever use consciousness-related words to describe them?"
    122 
    123 <li>
    124 if it's useful to talk about consciousness, motivation, feeling, etc., then we are allowed to (or should) do so equally for both humans and machines
    125 
    126 <li>
    127 people have a strong tendency to take the intentional stance, so we will <em>call</em> our computers "intelligent"
    128 
    129 </ul>
    130 
    131 <p>
    132 Free will:
    133 </p>
    134 <ul>
    135 <li>
    136 reasons why it can't be true:
    137 
    138 <ul>
    139 <li>
    140 physics is deterministic: given the current state you can predict the next, so your brain doesn't <em>physically</em> allow free will
    141 
    142 <li>
    143 inconsistent with psychology and neuroscience -- motor areas begin activity 2 seconds before we think we want to do something (<a href="https://www.youtube.com/watch?v=IQ4nwTTmcgs">Libet's experiment</a>)
    144 
    145 </ul>
    146 </ul>
    147 
    148 </body>
    149 </html>