lectures.alex.balgavy.eu

Lecture notes from university.
git clone git://git.alex.balgavy.eu/lectures.alex.balgavy.eu.git
Log | Files | Refs | Submodules

commit 8d47712b875cc2f61eedf99ba01c6c57ca003e1f
parent 41e2f99661497d1e61d8f2444f8a78fe9cfd9169
Author: Alex Balgavy <alex@balgavy.eu>
Date:   Tue, 23 Mar 2021 15:14:48 +0100

Philosophy notes migrated

Diffstat:
M content/_index.md | 2 +-
D content/philosophy-notes/.nojekyll | 0
A content/philosophy-notes/Lecture 1 subjectivism & objectivism.md | 22 ++++++++++++++++++++++
D content/philosophy-notes/Lecture 1_ subjectivism & objectivism.html | 52 ----------------------------------------------------
A content/philosophy-notes/Lecture 2 egoism, contractualism.md | 28 ++++++++++++++++++++++++++++
D content/philosophy-notes/Lecture 2_ egoism, contractualism.html | 69 ---------------------------------------------------------------------
A content/philosophy-notes/Lecture 3 theories of well-being.md | 66 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
D content/philosophy-notes/Lecture 3_ theories of well-being.html | 84 -------------------------------------------------------------------------------
A content/philosophy-notes/Lecture 4 utilitarianism.md | 36 ++++++++++++++++++++++++++++++++++++
D content/philosophy-notes/Lecture 4_ utilitarianism.html | 68 --------------------------------------------------------------------
A content/philosophy-notes/Lecture 5 Kantian ethics.md | 19 +++++++++++++++++++
D content/philosophy-notes/Lecture 5_ Kantian ethics.html | 47 -----------------------------------------------
A content/philosophy-notes/_index.md | 12 ++++++++++++
D content/philosophy-notes/index.html | 21 ---------------------
D content/philosophy-notes/style.css | 38 --------------------------------------
15 files changed, 184 insertions(+), 380 deletions(-)

diff --git a/content/_index.md b/content/_index.md
@@ -19,7 +19,7 @@ title = "Alex's university course notes"
 * [Equational Programming](equational-notes/)
 * [Machine Learning](ml-notes/) **(unfinished)**
 * [Automata & Complexity](automata-complexity-notes/) **(unfinished)**
-* [Philosophy](https://thezeroalpha.github.io/philosophy-notes)
+* [Philosophy](philosophy-notes/)
 
 ## Subject notes: Year 2
diff --git a/content/philosophy-notes/.nojekyll b/content/philosophy-notes/.nojekyll
diff --git a/content/philosophy-notes/Lecture 1 subjectivism & objectivism.md b/content/philosophy-notes/Lecture 1 subjectivism & objectivism.md
@@ -0,0 +1,22 @@
++++
+title = 'Lecture 1: subjectivism & objectivism'
++++
+# Lecture 1: subjectivism & objectivism
+cultural relativism: no objective moral standard, only relative to a culture
+
+ethics: study of moral standards. these standards are:
+- subjectivism: based on feelings
+  - example: "boooo i think this sucks"
+  - can't account for disagreements (if someone criticises you, you could just say "muh culture")
+  - even if agreement in _belief_, may have a conflict of attitudes: I agree you like something, but think you shouldn't and encourage you not to
+    - objectivist: "you should change your attitude for good reasons"
+  - why people like it: moral standards not 'scientifically testable' so seem up to us
+  - why people don't: there's stuff that seems clearly criticizable (slavery, racism) for good reasons
+- objectivism: based on reasons
+  - example: "this objectively sucks for good reasons"
+
+why ethics? helps resolve moral dilemmas and make decisions in a systematic, intelligent way.
+
+Todo:
+- read chapter 2 (challenge of cultural relativism):
+- section 3.4
diff --git a/content/philosophy-notes/Lecture 1_ subjectivism & objectivism.html b/content/philosophy-notes/Lecture 1_ subjectivism & objectivism.html
@@ -1,52 +0,0 @@
-
-<!DOCTYPE html>
-<html>
- <head>
- <meta charset="UTF-8">
-
- <title>Lecture 1: subjectivism &amp; objectivism</title>
- <link rel="stylesheet" href="style.css"/></head>
- <body>
- <style type="text/css">
-nav a {
- text-align: left;
-}
-nav #name {
- text-align: right;
- float: right;
- font-style: italic;
-}
- </style>
- <nav>
- <a href="index.html">Index</a>
- <span id="name">Alex Balgavy</span>
- </nav>
- <hr>
- <div class="exported-note"><h1>Lecture 1: subjectivism &amp; objectivism</h1>
-
- <div id="rendered-md"><p>cultural relativism: no objective moral standard, only relative to a culture<br>
- ethics: study of moral standards. these standards are:</p>
- <ul>
- <li>subjectivism: based on feelings
- <ul>
- <li>example: &quot;boooo i think this sucks&quot;</li>
- <li>can't account for disagreements (if someone criticises you, you could just say &quot;muh culture&quot;)</li>
- <li>even if agreement in <em>belief</em>, may have a conflict of attitudes: I agree you like something, but think you shouldn't and encourage you not to
- <ul>
- <li>objectivist: &quot;you should change your attitude for good reasons&quot;</li>
- </ul>
- </li>
- <li>why people like it: moral standards not 'scientifically testable' so seem up to us</li>
- <li>why people don't: there's stuff that seems clearly criticizable (slavery, racism) for good reasons</li>
- </ul>
- </li>
- <li>objectivism: based on reasons
- <ul>
- <li>example: &quot;this objectively sucks for good reasons&quot;</li>
- </ul>
- </li>
- </ul>
- <p>why ethics? helps resolve moral dilemmas and make decisions in a systematic, intelligent way.</p>
- </div></div>
- </body>
-</html>
diff --git a/content/philosophy-notes/Lecture 2 egoism, contractualism.md b/content/philosophy-notes/Lecture 2 egoism, contractualism.md
@@ -0,0 +1,28 @@
++++
+title = 'Lecture 2: egoism, contractualism'
++++
+# Lecture 2: egoism, contractualism
+egoism: if people's nature is egoistic, then saying they have moral reasons to help others will be pointless and they won't do it.
+- but, is it even true? in many crises, people go volunteer and help
+  - counter: maybe that's still in self-interest. we want to feel good that we helped or maybe get compliments from others for helping ("reward seeking"), avoid feeling guilty for not helping ("punishment avoidance"), and relieve our own distress that people need help ("aversive-arousal reduction")
+- even if humans fail, we could try to make a machine that could do better
+
+ethical egoism: people should/may be egoistic. we don't have any duties to help others or contribute to common good.
+- tragedy of the commons: we can't sustain common goods (fresh air, public safety, energy) if we're all ethical egoists (i.e. we don't contribute to them)
+  - e.g. you'll have more toilet paper if you buy more, but if everyone and their grandma buys more toilet paper, there won't be any in the shop
+  - makes ethical egoism self-defeating
+
+so what do?
+
+contractualism: we make a deal that we cooperate (act morally not egoistically), on the condition that others do too
+- morality is like a contract to limit each other's egoism
+- why accept:
+  - it's in one's self-interest, this is why we can have nice things. if nobody steals shit, nobody will steal your shit (well, in general, and doesn't count for bikes because those seem to get stolen no matter what you do).
+  - it's also in one's self-interest to try to secretly free-ride (benefit from others without cooperating, basically leeching). the state might try to make people cooperate by penalizing, and we'd accept this because it's in our self-interest if everyone cooperates.
+- doesn't imply that we should always cooperate: if I benefit, and if I have reason to believe that others will cooperate, then I should cooperate.
+- counters:
+  - "I never signed a contract" - yeah no shit. But you would, if you understand the logic of the situation.
+  - "I won't sign a contract with everyone, but only if I will benefit from the cooperation. Like why sign such a contract with animals or oppressed groups?"
+    - counter: imagine signing "behind a veil of ignorance", i.e. you don't know what your position will be in the society
+
+
diff --git a/content/philosophy-notes/Lecture 2_ egoism, contractualism.html b/content/philosophy-notes/Lecture 2_ egoism, contractualism.html
@@ -1,69 +0,0 @@
-
-<!DOCTYPE html>
-<html>
- <head>
- <meta charset="UTF-8">
-
- <title>Lecture 2: egoism, contractualism</title>
- <link rel="stylesheet" href="style.css"/></head>
- <body>
- <style type="text/css">
-nav a {
- text-align: left;
-}
-nav #name {
- text-align: right;
- float: right;
- font-style: italic;
-}
- </style>
- <nav>
- <a href="index.html">Index</a>
- <span id="name">Alex Balgavy</span>
- </nav>
- <hr>
- <div class="exported-note"><h1>Lecture 2: egoism, contractualism</h1>
-
- <div id="rendered-md"><p>egoism: if people's nature is egoistic, then saying they have moral reasons to help others will be pointless and they won't do it.</p>
- <ul>
- <li>but, is it even true? in many crises, people go volunteer and help
- <ul>
- <li>counter: maybe that's still in self-interest. we want to feel good that we helped or maybe get compliments from others for helping (&quot;reward seeking&quot;), avoid feeling guilty for not helping (&quot;punishment avoidance&quot;), and relieve our own distress that people need help (&quot;aversal-arousal reduction&quot;)</li>
- </ul>
- </li>
- <li>even if humans fail, we could try to make a machine that could do better</li>
- </ul>
- <p>ethical egoism: people should/may be egoistic. we don't have any duties to help others or contribute to common good.</p>
- <ul>
- <li>tragedy of the commons: we can't sustain common goods (fresh air, public safety, energy) if we're all ethical egoists (i.e. we don't contribute to them)
- <ul>
- <li>e.g. you'll have more toilet paper if you buy more, but if everyone and their grandma buys more toilet paper, there won't be any in the shop</li>
- <li>makes ethical egoism self-defeating</li>
- </ul>
- </li>
- </ul>
- <p>so what do?</p>
- <p>contractualism: we make a deal that we cooperate (act morally not egoistically), on the condition that others do too</p>
- <ul>
- <li>morality is like a contract to limit each other's egoism</li>
- <li>why accept:
- <ul>
- <li>it's in one's self interest, this is why we can have nice things. if nobody steals shit, nobody will steal your shit (well, in general, and doesn't count for bikes because those seem to get stolen no matter what you do).</li>
- <li>it's also in one's self-interest to try to secretly free-ride (benefit from others without cooperating, basically leeching). the state might try to make people cooperate by penalizing, and we'd accept this because it's in our self-interest if everyone cooperates.</li>
- </ul>
- </li>
- <li>doesn't imply that we should always cooperate: if I benefit, and if I have reason to believe that others will cooperate, then I should cooperate.</li>
- <li>counters:
- <ul>
- <li>&quot;I never signed a contract&quot; - yeah no shit. But you would, if you understand the logic of the situation.</li>
- <li>&quot;I won't sign a contract with everyone, but only if I will benefit from the cooperation. Like why sign such a contract with animals or oppressed groups?&quot;
- <ul>
- <li>counter: imagine signing &quot;behind a veil of ignorance&quot;, i.e. you don't know what your position will be in the society</li>
- </ul>
- </li>
- </ul>
- </li>
- </ul>
- </div></div>
- </body>
-</html>
diff --git a/content/philosophy-notes/Lecture 3 theories of well-being.md b/content/philosophy-notes/Lecture 3 theories of well-being.md
@@ -0,0 +1,65 @@
++++
+title = 'Lecture 3: theories of well-being'
++++
+# Lecture 3: theories of well-being
+theories don't necessarily disagree, but if they agree, they agree for different reasons.
+
+utilitarianism justifies choices by referring to well-being of everyone involved.
+
+but what is well-being? what's ultimately good for one?
+
+## Hedonism
+your well-being depends on whether you enjoy your life.
+something matters because it brings you pleasure.
+
+but pleasure is valuable in itself.
+
+but if you had an experience machine that could give you any experience you wanted, would you plug in?
+
+why not, based on Nozick:
+1. I want to be someone, not just a set of experiences
+   - but there could be a machine in which we could be transformed into any character we want
+2. I want to do things, achieve things through pain and effort, not just sit and wait for things to happen.
+   - but there could be a machine in which we work on and _accomplish_ all sorts of great projects
+3. I want contact with reality, not a reality restricted to what humans can imagine and program.
+
+## Preference satisfaction
+your well-being depends on whether you get what you want.
+life goes better if more of your preferences are satisfied
+i.e. not just about pleasure.
+
+is it always good to get what you want? people make bad choices...maybe because they are misinformed?
+amend: what matters is if your laundered preferences are satisfied (i.e. those you'd have if you were sufficiently informed).
+
+policies can influence people's preferences.
+some might seem good, like smokers losing their desire for smoking.
+but what's the justification? at what point does it just become propaganda, i.e. people like stuff because of the policy?
+though preference satisfaction says it's good if people get what they want after being informed, not after having their desires manipulated.
+
+also, what if someone's only fully-informed desire is to count blades of grass?
+is life going well for this person if they get all the time to count blades of grass?
+based on preference satisfaction, yes.
+but like, for real?
+
+## Objective list theory
+your well-being depends on whether you have the items that are on the objective list.
+i.e. there are things that are good for everyone.
+
+so what's on the list?
+
+based on Martha Nussbaum:
+- life: being able to live a life of normal length
+- health: good health, including reproductive
+- bodily integrity: being able to move freely, be secure against assault/violence, and having opportunities for sexual satisfaction
+- senses, imagination, thought: being able to use senses, to imagine, to think and reason with adequate education; being able to experience and produce self-expressive works of religion/literature/music/etc.; being able to have pleasurable experiences and avoid unnecessary pain.
+- emotions: being able to have attachments to other things and people, to love, to grieve, to experience justified anger
+- practical reason: being able to form conception of good, and to engage in critical reflection about planning of one's life
+- affiliation: being able to live with/toward others, to show concern for other people, being able to work as a human being, having social bases of self-respect and non-humiliation
+- other species: being able to live with concern for and relation to animals, plants, and nature.
+- play: being able to laugh, play, enjoy recreational activities.
+- control over one's environment: political (participating in political choices, protections of free speech and association), material (being able to have property, having right to equally seek employment, having freedom from unwarranted search and seizure)
+
+advantage: this view might be more suitable for policy.
+
+disadvantage: less room for choice and diversity.
+<!-- vim: set spc=: -->
\ No newline at end of file
diff --git a/content/philosophy-notes/Lecture 3_ theories of well-being.html b/content/philosophy-notes/Lecture 3_ theories of well-being.html
@@ -1,84 +0,0 @@
-
-<!DOCTYPE html>
-<html>
- <head>
- <meta charset="UTF-8">
-
- <title>Lecture 3: theories of well-being</title>
- <link rel="stylesheet" href="style.css"/></head>
- <body>
- <style type="text/css">
-nav a {
- text-align: left;
-}
-nav #name {
- text-align: right;
- float: right;
- font-style: italic;
-}
- </style>
- <nav>
- <a href="index.html">Index</a>
- <span id="name">Alex Balgavy</span>
- </nav>
- <hr>
- <div class="exported-note"><h1>Lecture 3: theories of well-being</h1>
-
- <div id="rendered-md"><p>theories don't necessarily disagree, but if they agree, they agree for different reasons.</p>
- <p>utilitarianism justifies choices by referring to well-being of everyone involved.<br>
- but what is well-being? what's ultimately good for one?</p>
- <h2 id="hedonism">Hedonism</h2>
- <p>your well-being depends on whether you enjoy your life.<br>
- something matters because it brings you pleasure.<br>
- but pleasure is valuable in itself.</p>
- <p>but if you had an experience machine that could give you any experience you wanted, would you plug in?<br>
- why not, based on Nozick:</p>
- <ol>
- <li>I want to be someone, not just a set of experiences
- <ul>
- <li>but there could be a machine in which we could be transformed into any character we ant</li>
- </ul>
- </li>
- <li>I want to do things, achieve things through pain and effort, not just sit and wait for things to happen.
- <ul>
- <li>but there could be a machine in which we work on and <em>accomplish</em> all sorts of great projects</li>
- </ul>
- </li>
- <li>I want contact with reality, not a reality restricted to what humans can imagine and program.</li>
- </ol>
- <h2 id="preference-satisfaction">Preference satisfaction</h2>
- <p>your well-being depends on whether you get what you want.<br>
- life goes better if more of your preferences are satisfied<br>
- i.e. not just about pleasure.</p>
- <p>is it always good to get what you want? people make bad choices...maybe because they are misinformed?<br>
- amend: what matters is if your laundered preferences are satisfied (i.e. those you'd have if you were sufficiently informed).</p>
- <p>policies can influence people's preferences.<br>
- some might seem good, like smokers losing their desire for smoking.<br>
- but what's the justification? at what point does it just become propaganda, i.e. people like stuff because of the policy?<br>
- though preference satisfaction says it's good if people get what they want after being informed, not after having their desires manipulated.</p>
- <p>also, what if someone's only fully-informed desire is to count blades of grass?<br>
- if life going well for this person if they get all the time to count blades of grass?<br>
- based on preference satisfaction, yes.<br>
- but like, for real?</p>
- <h2 id="objective-list-theory">Objective list theory</h2>
- <p>your well-being depends on whether you have the items that are on the objective list.<br>
- i.e. there are things that are good for everyone.</p>
- <p>so what's on the list?<br>
- based on Martha Nussbaum:</p>
- <ul>
- <li>life: being able to live a life of normal length</li>
- <li>health: good health, including reproductive</li>
- <li>bodily integrity: being able to move freely, be secure against assault/violence, and having opportunities for sexual satisfaction</li>
- <li>senses, imagination, thought: being bale to use senses, to imagine, to think and reason with adequate education; being able to experience and produce self-expressive works of religion/literature/music/etc.; being able to have pleasurable experiences and avoid unnecessary pain.</li>
- <li>emotions: being able to have attachments to other things and people, to love, to grieve, to experience justified anger</li>
- <li>practical reason: being able to form conception of good, and to engage in critical reflection about planning of one's life</li>
- <li>affiliation: being able to live with/toward others, to show concern for other people, being able to work as a human being, having social bases of self-respect and non-humiliation</li>
- <li>other species: being able to live with concern for and relation to animals, plants, and nature.</li>
- <li>play: being able to laugh, play, enjoy recreational activities.</li>
- <li>control over one's environment: political (participating in political choices, protections of free speech and association), material (being able to have property, having right to equally seek employment, having freedom from unwarranted search and seizure)</li>
- </ul>
- <p>advantage: this view might be more suitable for policy.<br>
- disadvantage: less room for choice and diversity.</p>
- </div></div>
- </body>
-</html>
diff --git a/content/philosophy-notes/Lecture 4 utilitarianism.md b/content/philosophy-notes/Lecture 4 utilitarianism.md
@@ -0,0 +1,35 @@
++++
+title = 'Lecture 4: utilitarianism'
++++
+# Lecture 4: utilitarianism
+utilitarianism (consequentialism):
+- weigh costs/benefits of all options, see which option is best
+- only factor that morally matters are consequences of an action on the well-being of everyone, where everyone gets equal consideration.
+
+comparisons:
+- vs contractualism: starts from assumption that everyone matters equally, while contractualism says that we should do what's in self-interest.
+- vs ethical egoism: looks at well-being of everyone involved, not just own well-being.
+
+"well-being" has different meanings, in principle any of the ones mentioned in [Lecture 3: theories of well-being](lecture-3-theories-of-well-being) can be used
+
+if you don't know how an act will play out, you have to work with all of the _potential_ consequences.
+- that doesn't mean we can't take risks
+- don't always evaluate this, sometimes it takes long time to evaluate all possible consequences, and you won't have time to act
+
+how do you approach issues of e.g. health vs privacy (like tracking people with Corona)?
+
+utilitarians: weigh costs/benefits of mass surveillance vs other strategies
+- but how assign costs to mass surveillance if don't know what's valuable about privacy?
+- argument: I have nothing to hide, please track me
+  - counter 1: prevention of harm - info you share now might be used against you later
+  - counter 2: intentional inequity - usually citizens don't see how their data is used
+  - counter 3: injustice and discrimination - personal data can be used to discriminate against you
+  - counter 4: autonomy and human dignity - mass surveillance threatens our image of private mental life
+
+problems for utilitarianism:
+- "an individual's rights may be trampled upon if enough other people benefit" (e.g. killing one person with a rare blood type to transplant their organs and save 5 other people)
+  - response 1: violating people's rights will typically not have the best consequences (e.g. if sacrificing people was common, society would be in stress and fear)
+  - response 2: update view so it's typically not ok to violate people's rights. i.e. maximize everyone's well-being, where everyone gets equal consideration
+  - response 3: in some select cases, people's rights may be violated (like with the privacy issue)
+
+<!-- vim: set spc=: -->
\ No newline at end of file
diff --git a/content/philosophy-notes/Lecture 4_ utilitarianism.html b/content/philosophy-notes/Lecture 4_ utilitarianism.html
@@ -1,68 +0,0 @@
-
-<!DOCTYPE html>
-<html>
- <head>
- <meta charset="UTF-8">
-
- <title>Lecture 4: utilitarianism</title>
- <link rel="stylesheet" href="style.css"/></head>
- <body>
- <style type="text/css">
-nav a {
- text-align: left;
-}
-nav #name {
- text-align: right;
- float: right;
- font-style: italic;
-}
- </style>
- <nav>
- <a href="index.html">Index</a>
- <span id="name">Alex Balgavy</span>
- </nav>
- <hr>
- <div class="exported-note"><h1>Lecture 4: utilitarianism</h1>
-
- <div id="rendered-md"><p>utilitarianism (consequentialism):</p>
- <ul>
- <li>weigh costs/benefits of all options, see which option is best</li>
- <li>only factor that morally matters are consequences of an action on the well-being of everyone, where everyone gets equal consideration.</li>
- </ul>
- <p>comparisons:</p>
- <ul>
- <li>vs contractualism: starts from assumption that everyone matters equally, while contractualism says that we should do what's in self-interest.</li>
- <li>vs ethical egoism: looks at well-being of everyone involved, not just own well-being.</li>
- </ul>
- <p>&quot;well-being&quot; has different meanings, in principle any of the ones mentioned in <a data-from-md data-resource-id='97ee2dae8b644b3b802094b618067169' title='' href='Lecture 3_ theories of well-being.html' type=''>Lecture 3: theories of well-being</a> can be used<br>
- if you don't know how an act will play out, you have to work with all of the <em>potential</em> consequences.</p>
- <ul>
- <li>that doesn't mean we can't take risks</li>
- <li>don't always evaluate this, sometimes it takes long time to evaluate all possible consequences, and you won't have time to act</li>
- </ul>
- <p>how do you approach issues of e.g. health vs privacy (like tracking people with Corona)?<br>
- utilitarians: weigh costs/benefits of mass surveillance vs other strategies</p>
- <ul>
- <li>but how assign costs to mass surveillance if don't know what's valuable about privacy?</li>
- <li>argument: I have nothing to hide, please track me
- <ul>
- <li>counter 1: prevention of harm - info you share now might be used against you later</li>
- <li>counter 2: intentional inequity - usually citizens don't see how their data is used</li>
- <li>counter 3: injustice and discrimination - personal data can be used to discriminate against you</li>
- <li>counter 4: autonomy and human dignity - mass surveillance threatens our image of private mental life</li>
- </ul>
- </li>
- </ul>
- <p>problems for utilitarianism:</p>
- <ul>
- <li>&quot;an individual's rights may be trampled upon if enough other people benefit&quot; (e.g. killing one person with a rare blood type to transplant their organs and save 5 other people)
- <ul>
- <li>response 1: violating people's rights will typically not have the best consequences (e.g. if sacrificing people was common, society would be in stress and fear)</li>
- <li>response 2: update view so it's typically not ok to violate people's rights. i.e. maximize everyone's well-being, where everyone gets equal consideration</li>
- <li>response 3: in some select cases, people's rights may be violated (like with the privacy issue)</li>
- </ul>
- </li>
- </ul>
- </div></div>
- </body>
-</html>
diff --git a/content/philosophy-notes/Lecture 5 Kantian ethics.md b/content/philosophy-notes/Lecture 5 Kantian ethics.md
@@ -0,0 +1,18 @@
++++
+title = 'Lecture 5: Kantian ethics'
++++
+# Lecture 5: Kantian ethics
+Kantian ethics: asks about intentions for action - are they good or bad?
+contracts or consequences don't matter.
+
+how do you judge your intentions?
+- formula of humanity: whether you disrespect others and use them against their will for your purposes
+  - humanity should be used as an end, not only as a means
+  - if you're using someone only as a means, you don't care about their consent
+- formula of universal law: whether you make an exception for yourself
+  - only act on intentions (maxims) that could be universal (i.e. you'd be fine if everyone did it)
+  - i.e. if everyone does some action, would you still be able to achieve your goal by doing the same? if not, don't do the action.
+
+in summary: we shouldn't think we are more important than others. respect others and don't make exceptions for yourself.
+
+<!-- vim: set spc=: -->
\ No newline at end of file
diff --git a/content/philosophy-notes/Lecture 5_ Kantian ethics.html b/content/philosophy-notes/Lecture 5_ Kantian ethics.html
@@ -1,47 +0,0 @@
-
-<!DOCTYPE html>
-<html>
- <head>
- <meta charset="UTF-8">
-
- <title>Lecture 5: Kantian ethics</title>
- <link rel="stylesheet" href="style.css"/></head>
- <body>
- <style type="text/css">
-nav a {
- text-align: left;
-}
-nav #name {
- text-align: right;
- float: right;
- font-style: italic;
-}
- </style>
- <nav>
- <a href="index.html">Index</a>
- <span id="name">Alex Balgavy</span>
- </nav>
- <hr>
- <div class="exported-note"><h1>Lecture 5: Kantian ethics</h1>
-
- <div id="rendered-md"><p>Kantian ethics: asks about intentions for action - are they good or bad?<br>
- contracts or consequences don't matter.</p>
- <p>how do you judge your intentions?</p>
- <ul>
- <li>formula of humanity: whether you disrespect others and use them against their will for your purposes
- <ul>
- <li>humanity should be used as an end, not only as a means</li>
- <li>if you're using someone only as a means, you don't care about their consent</li>
- </ul>
- </li>
- <li>formula of universal law: whether you make an exception for yourself
- <ul>
- <li>only act on intentions (maxims) that could be universal (i.e. you'd be fine if everyone did it)</li>
- <li>i.e. if everyone does some action, would you still be able to achieve your goal by doing the same? if not, don't do the action.</li>
- </ul>
- </li>
- </ul>
- <p>in summary: we shouldn't think we are more important than others. respect others and don't make exceptions for yourself.</p>
- </div></div>
- </body>
-</html>
diff --git a/content/philosophy-notes/_index.md b/content/philosophy-notes/_index.md
@@ -0,0 +1,11 @@
++++
+title = 'Philosophy'
++++
+# Philosophy
+
+Lectures:
+1. [Lecture 1: subjectivism & objectivism](lecture-1-subjectivism-objectivism)
+2. [Lecture 2: egoism, contractualism](lecture-2-egoism-contractualism)
+3. [Lecture 3: theories of well-being](lecture-3-theories-of-well-being)
+4. [Lecture 4: utilitarianism](lecture-4-utilitarianism)
+5. [Lecture 5: Kantian ethics](lecture-5-kantian-ethics)
\ No newline at end of file
diff --git a/content/philosophy-notes/index.html b/content/philosophy-notes/index.html
@@ -1,21 +0,0 @@
-<!DOCTYPE html>
-<html>
- <head>
- <meta charset="UTF-8">
-
- <title>Philosophy</title>
- <link rel="stylesheet" href="style.css"/></head>
- <body>
- <div class="exported-note"><h1>Philosophy</h1>
-
- <div id="rendered-md"><p>Lectures:</p>
- <ul>
- <li><a data-from-md data-resource-id='f06c32217c3848ae9f8c7fec4ea2e84e' title='' href='Lecture 1_ subjectivism & objectivism.html' type=''>Lecture 1: subjectivism & objectivism</a></li>
- <li><a href="Lecture 2_ egoism, contractualism.html">Lecture 2: egoism, contractualism</a></li>
- <li><a href="Lecture 3_ theories of well-being.html">Lecture 3: theories of well-being</a></li>
- <li><a href="Lecture 4_ utilitarianism.html">Lecture 4: utilitarianism</a></li>
- <li><a href="Lecture 5_ Kantian ethics.html">Lecture 5: Kantian ethics</a></li>
- </ul>
- </div></div>
- </body>
-</html>
diff --git a/content/philosophy-notes/style.css b/content/philosophy-notes/style.css
@@ -1,38 +0,0 @@
-@charset 'UTF-8';
-
-body {
- margin: 0px;
- padding: 1em;
- background: #f3f2ed;
- font-family: 'Lato', sans-serif;
- font-size: 12pt;
- font-weight: 300;
- color: #8A8A8A;
- padding-left: 50px;
- line-height: 1.5;
-}
-h1 {
- margin: 0px;
- padding: 0px;
- font-weight: 300;
- text-align: center;
-}
-ul.toc li {
- margin: 8px 0;
-}
-h3.name {
- font-style: italic;
- text-align: center;
- font-weight: 300;
- font-size: 20px;
-}
-a {
- color: #D1551F;
- }
-a:hover {
- color: #AF440F;
-}
- strong {
- font-weight: 700;
- color: #2A2A2A;
- }