ENGL 108D

Professor Stephen Fernandez, Spring 2019

June 5th

  • selfie:
    • photographic object that transmits human feeling
    • a practice/gesture whose meaning is relative to the community it’s shared in, e.g. via likes, comments, shares
  • although selfies are taken with our consent, they are distributed to the digital superpublic, where they can live indefinitely without our consent
  • some psychiatrists and psychologists in the US see selfies as a disease to be treated e.g. narcissism, body dysmorphia, psychosis
    • this may not be scientifically credible
    • selfies can be political, humorous, sports-related, fan-related, illness-related, etc., so it’s unfair to assume they are only taken out of vanity
    • resembles “moral panic”
  • people have fixed ideas of what selfies should look like for a particular group of people
    • causes moral/media panic when a selfie doesn’t fit that mould, e.g. selfies of people from minority groups
  • other point of view is that selfies are a form of culture
  • sometimes, governments/media will treat selfies as a pathology so that they can have a discussion about the selfie (dangers of taking selfies etc.) rather than address the message of the selfie
  • users online self-select selfies for authenticity
  • users online interact with selfies by “grabbing” them rather than gazing at them, i.e. they interact with selfies in ways the creator cannot control
    • They can download images
    • Users can hack other users
    • Users can share images
    • Data can be sold to advertisers/governments

June 12th

  • selfie essay due tonight
  • Next week:
    • what if the world was filled with empathetic robots? are they intelligent? what is intelligence? do you need to go to school?
  • artificially intelligent agents have started to make decisions for us e.g. healthcare diagnosis
  • Karches, a doctor, argues that artificially intelligent doctors should not replace physician judgement
    • pressure to use AI in medicine from government, insurance, and corporations that build AI
    • in a JAMA (Journal of the American Medical Association) article, doctors argue in favor of predictive analytics ⇒ more accurate than a doctor due to perfect memory of personal historical data
      • Insurance companies like this because they can also use predictive analytics for pricing, e.g. flagging someone as likely to develop ulcers in order to collect more insurance cash
      • Predictive analytics examples: predictive text, suggested Facebook ads, video recommendations, music recommendations (a toy predictive-text sketch appears at the end of this day’s notes)
        • YouTube recommendations are optimized to surface videos that you will rewatch the most, which can be radicalizing
        • But the YouTube algorithm itself is not “bad”; it doesn’t know whether it’s hurting people ⇒ more on ethics in later classes
    • no room for negotiation with predictive analytics, unlike with doctors ⇒ limits what doctors can do
    • electronic health record (EHR): all information related to a patient, e.g. vital signs, insurance, lab data, radiology reports, previous doctors, previous hospitals checked into, etc.
      • Doctors no longer need to examine you directly; they can rely on the EHR. The more a doctor uses the EHR, the higher their measured quality of service. This makes clinic visits faster/more efficient, and lets insurance companies easily fact-check claims
      • Unfortunately, healthcare can’t easily be standardized the way the fast-food industry is
        • empathy: a computer cannot empathize with a patient’s symptoms
        • counterpoints
          • does empathy matter for quality of care?
          • doctors are corrupt?
  • moral panic ⇒ knee-jerk negative reaction to new technology
  • Awake Labs’ wristband measures anxiety, warning parents of children with autism of oncoming panic attacks
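
A toy sketch of the “predictive text” idea from the predictive-analytics bullet above, in Python. This is only an illustration of the general technique (a bigram frequency model); the corpus and function names are invented for illustration, and this is not how any real keyboard or recommender is actually implemented:

    from collections import Counter, defaultdict

    # Count which word follows which in a training corpus, then suggest
    # the most frequent followers of the word just typed.
    def train(corpus):
        followers = defaultdict(Counter)
        words = corpus.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            followers[current_word][next_word] += 1
        return followers

    def suggest(followers, word, k=3):
        # Return up to k most likely next words after `word`.
        return [w for w, _ in followers[word.lower()].most_common(k)]

    model = train("the doctor checked the record and the doctor "
                  "ordered a test and the doctor checked the chart")
    print(suggest(model, "the"))     # ['doctor', 'record', 'chart']
    print(suggest(model, "doctor"))  # ['checked', 'ordered']

Scaled up (longer histories, vastly more data), the same frequency-driven idea underlies the ad, video, and music recommendations listed above.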

June 17th

  • Turing test: a person asks questions to try to determine whether the responses come from a computer or a person

  • Searle’s Chinese Room: a person who doesn’t understand Chinese follows instructions on index cards in order to answer questions in Chinese ⇒ likewise, a computer program only manipulates symbols without “knowing” Chinese, so rule-following alone is not actual intelligence (a toy sketch appears at the end of this day’s notes)
  • These tests are antiquated, since computers can “learn” now (e.g. predictive text)
  • Humans have emotional intelligence, computers do not
    • Allows us to be sensitive to the feelings of other people, not offend them
    • Developed through experiences with other humans, unlike logical intelligence, which is developed through systematic education/training
    • May cause us to act illogically (unpredictably)
  • According to Nicholas Carr, humans have 2 types of knowledge
    • tacit knowledge: acts we perform without thinking e.g. walking, riding a bike
    • explicit knowledge: transformation of practical experiences into abstract information
  • Bugeja: computers are not good at tacit knowledge, but can feign it

  • “mirror neurons” ⇒ we empathize with others by literally feeling the same things they do when we observe them
  • AI developers are using facial recognition to parse human emotional states and respond accordingly
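
A toy version of Searle’s Chinese Room in Python, as promised above. The “rule book” contents are invented for illustration; the point is that the program produces fluent-looking replies by pure symbol lookup, with no understanding anywhere:

    # The "room" matches incoming symbols against a rule book and copies
    # out the prescribed reply; no meaning is involved at any step.
    RULE_BOOK = {
        "你好吗?": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
        "你叫什么名字?": "我没有名字。",    # "What's your name?" -> "I have no name."
    }

    def chinese_room(question):
        return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(chinese_room("你好吗?"))  # fluent reply, zero comprehension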

June 19th

  • Facebook social currency?
  • Could we build social communities with chatbots the way that we do with humans?
  • In artificial communities, we may be less willing to interact/negotiate because of prior data from predictive analytics e.g. healthcare corporations refuse claims based on prior data, cannot negotiate
  • Bugeja: Computers don’t empathize with us
  • Barrat: Computers are amoral, we must program morality into computers
  • We have to decide what is worth transferring over from humans to computers
  • Penrose’s 4 Possibilities for the Future of AI:

    • Option A: Strong artificial intelligence ⇒ computers develop awareness
    • Option B: Computational simulation ⇒ computers can simulate awareness
    • Option C: Physical consciousness ⇒ humans possess awareness that can’t be replicated
    • Option D: Inexplicable consciousness ⇒ human awareness cannot be explained in scientific terms
  • Asimov’s Three Laws of Robotics (plus the later Zeroth Law; a toy priority check follows this list):
    • Zeroth Law: A robot may not harm humanity, or by inaction, allow humanity to come to harm
    • First Law: A robot may not injure a human being, or through inaction, allow a human being to come to harm
    • Second Law: A robot must obey the orders given it by human beings, except when such orders conflict with the first or zeroth law
    • Third Law: A robot must protect its own existence as long as such protection does not conflict with the first or second law
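
The laws above form a strict priority ordering: a lower-numbered law always overrides a higher-numbered one. A minimal sketch of that ordering in Python (the boolean action flags are an invented representation, not from Asimov):

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_humanity: bool = False    # Zeroth Law
        harms_human: bool = False       # First Law
        disobeys_order: bool = False    # Second Law
        self_destructive: bool = False  # Third Law

    def law_violations(a):
        # Tuple position = law number, so comparing tuples makes a
        # violation of an earlier law dominate any later violations.
        return (a.harms_humanity, a.harms_human,
                a.disobeys_order, a.self_destructive)

    def choose(actions):
        # Pick the action whose highest-priority violation is least severe.
        return min(actions, key=law_violations)

    options = [
        Action("obey an order to injure a bystander", harms_human=True),
        Action("refuse the order", disobeys_order=True),
    ]
    print(choose(options).name)  # "refuse the order": First Law beats Second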

June 24th

  • Why use emojis?
    • Show exaggerated expression, convey emotions (symbolism)
    • Shorter to send these, easier to read
    • Friendlier
    • Can be used to fake emotions
  • Where did emojis start?
    • Japanese mobile carriers had spare code points in their character encodings, so they used them to represent smileys and other pictographs (see the code-point example at the end of this day’s notes)
    • “emoji” comes from the Japanese e (picture) + moji (character)
  • MIT developed DeepMoji, a model that predicts emojis based on text input
  • Bitmoji: personalizes emojis
    • Some people may use them because they’re not comfortable with their real picture
    • Used to hide real blemishes
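
The encoding point above is easy to check in code: each emoji is just a character with a code point (modern emoji live in dedicated Unicode blocks, roughly U+1F300 and up, rather than carrier-specific slots). A quick Python illustration:

    smiley = "😀"
    print(hex(ord(smiley)))        # 0x1f600 -> code point U+1F600
    print("\N{GRINNING FACE}")     # the same character by its Unicode name
    print(smiley.encode("utf-8"))  # b'\xf0\x9f\x98\x80' -> 4 bytes on the wire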

June 26th

  • Smiling people make better first impressions, seem warmer, more cooperative, more competent
  • First impressions are often virtual nowadays
  • Smileys are perceived as inappropriate for the workplace, seem childish
  • Experiment 1
    • People who smile are seen as more competent and warm
    • Pilot: Participants read an email with 0 to 4 smileys, then rated the appropriateness of the email and reported whether there were smileys in it
      • Results: people found emails with smileys less appropriate, and didn’t notice a smiley if only one was included
    • Experiment: participants judged perceived warmth and competence of someone based on photo of neutral face, photo of smiling face, text without smileys, and text with smileys
      • Photo of smile conveys competence and warmth
      • Text without smileys conveys competence, less warmth
      • Text with smileys conveys less competence, marginally more warmth
  • Experiment 2
    • People are more cooperative when others smile at them
    • Women may use more emoticons than men
    • Experiment: participants judged perceived warmth and competence of someone based on text without smileys, and text with smileys. Participants also guessed gender of sender, and wrote an email in reply to the sender
      • no significant difference in warmth between text with smileys and text without
      • less information shared (lower word count) in reply when sender email contained smileys
      • when smileys were included, participants more frequently judged sender was female
  • Experiment 3
    • smileys are inappropriate for work context, unprofessional
    • Experiment: participants judged perceived warmth, competence, and appropriateness based on email asking questions about work
      • smiley has lower perceived competence, no impact on perceived warmth in a formal setting, positive impact on perceived warmth in an informal setting ⇒ smileys only appropriate for informal settings
  • Smileys are not smiles

July 3rd

  • tactical media: fosters critical thinking by subverting the normal ways we think about media in order to deliver a message
  • tactics are the detailed application of strategy, e.g. the goal is to get a seat, the strategy is to board the bus at the back, and the tactics are moving quickly or pushing past people
  • tactical media is a performance:
    • the audience should be active participants; the weight is on the experience
    • is fluid and temporary
  • virtuosity: art that exists as a performance, only recorded in the minds of audience members