lectures.alex.balgavy.eu

Lecture notes from university.
git clone git://git.alex.balgavy.eu/lectures.alex.balgavy.eu.git

ethics.html (2490B)


<!DOCTYPE html>
<html>
<head>
<script type="text/javascript" async src="https://cdn.jsdelivr.net/gh/mathjax/MathJax@2.7.5/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
<link rel="Stylesheet" type="text/css" href="style.css">
<title>ethics</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body>

<div id="Ethics of AI"><h1 id="Ethics of AI">Ethics of AI</h1></div>
<p>
three main questions:
</p>
<ul>
<li>
how do we encode ethical behavior?

<li>
how should we behave towards AI?

<li>
how does the existence of AI affect our daily lives?

</ul>
<blockquote>
"Ethics begins when elements of a moral system conflict."
</blockquote>

<p>
Fundamental ethics: moral absolutism; certain things are simply not allowed, e.g. because of religion.
</p>

<p>
Pragmatic ethics: humans always have a choice; you have the freedom to choose at any point in time.
</p>

<div id="Ethics of AI-Sci-fi ethics (problems down the road)"><h2 id="Sci-fi ethics (problems down the road)">Sci-fi ethics (problems down the road)</h2></div>
<p>
Asimov's laws:
</p>
<ol>
<li>
A robot may not injure a human being or, through inaction, allow a human being to come to harm.

<li>
A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.

<li>
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

</ol>

<p>
The trolley problem is a good example of an ethical dilemma, and can be extended to self-driving cars (should it kill the driver or bystanders?).
</p>

<p>
How do we treat AI? How should we?
</p>

<div id="Ethics of AI-Today's problems"><h2 id="Today's problems">Today's problems</h2></div>
<ul>
<li>
Autonomous weapons: weapons that decide what to do by themselves

<ul>
<li>
what are we allowing these systems to do?

<li>
the Dutch government said it's fine "if there's a human in the wider loop", but this is very vague: what is the wider loop?

</ul>
<li>
Privacy

<ul>
<li>
big companies hold a lot of data about people

<li>
often, people give this data away for free.

</ul>
<li>
Profiling (e.g. racial)

<ul>
<li>
e.g. a black person driving an expensive car was stopped because the system assumed he could only be driving it if he had stolen it.

</ul>
</ul>

<p>
Prosecutor's fallacy:
</p>
<ul>
<li>
using probabilities incorrectly. \(P(\text{black} | \text{uses drugs}) \neq P(\text{uses drugs} | \text{black})\)

</ul>
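<p>
A quick worked example of why the two directions differ (the numbers below are made up purely for illustration, not real statistics). By Bayes' rule,
</p>
<p>
\[ P(\text{uses drugs} \mid \text{black}) = \frac{P(\text{black} \mid \text{uses drugs}) \, P(\text{uses drugs})}{P(\text{black})} \]
If, hypothetically, \(P(\text{black} \mid \text{uses drugs}) = 0.3\), \(P(\text{uses drugs}) = 0.05\), and \(P(\text{black}) = 0.2\), then \(P(\text{uses drugs} \mid \text{black}) = \frac{0.3 \times 0.05}{0.2} = 0.075\), far below 0.3. Treating the two conditionals as interchangeable is exactly the prosecutor's fallacy.
</p>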

</body>
</html>