CS 709 Text Analytics Seminar (HWS 2018: Ethics in NLP)

NOTE: This seminar will NOT be offered in FSS 2019, since the module leader (Simone Ponzetto) is on sabbatical.

This semester we will look at major discussions and contributions at the intersection of ethics and Natural Language Processing.

Organization

Registration

NOTE: registration starts in August via Portal2.

Schedule

The seminar will take place in C1.01 from 10:00 to 11:30 am.

Dates:

  • 14.09.18 (Kickoff)
  • 12.10.18
  • 19.10.18
  • 09.11.18
  • 23.11.18
  • 30.11.18

Goals

To successfully complete this seminar, you must meet the following requirements:

  • Read, understand, and explore the scientific literature related to one of the topics listed below
  • Learn how your topic is reflected in the media and in society
  • Discuss and debate with the other participants
  • Summarize your topic in a concise report
  • Give a presentation on your topic, focusing on two scientific publications (25-minute presentation + 20 minutes of questions and discussion)

Prerequisites

  • Successful completion of IE 661 “Text Analytics” or IE 663 “Web Search and Information Retrieval”.
  • Fundamental notions of linear algebra and probability theory.

Topics + Literature

The seminar will touch on the following topics. Please use the literature below as a starting point and also check out the proceedings of the Workshop on Ethics in NLP (e.g., 2017).*

  • exclusion / discrimination / bias
  1. Park, J. H., Shin, J., & Fung, P. (2018). Reducing Gender Bias in Abusive Language Detection. arXiv.
  2. Angwin, J., & Larson, J. (Dec 30, 2016). Bias in criminal risk scores is mathematically inevitable, researchers say. ProPublica.
  3. boyd, d. (2015). What world are we building? (Everett C. Parker Lecture, Washington, DC, October 20)
  4. Brennan, M. (2015). Can computers be racist? big data, inequality, and discrimination. (online; Ford Foundation)
  5. Clark, J. (Jun 23, 2016). Artificial intelligence has a 'sea of dudes' problem. Bloomberg Technology.
  6. Crawford, K. (Apr 1, 2013). The hidden biases in big data. Harvard Business Review.
  7. Daumé III, H. (Nov 8, 2016). Bias in ML, and teaching AI. (Blog post, accessed 1/17/17)
  8. Emspak, J. (Dec 29, 2016). How a machine learns prejudice: Artificial intelligence picks up bias from human creators--not from hard, cold logic. Scientific American.
  9. Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems (TOIS), 14(3), 330-347.
  10. Guynn, J. (Jun 10, 2016). 'Three black teenagers' Google search sparks outrage. USA Today.
  11. Hardt, M. (Sep 26, 2014). How big data is unfair: Understanding sources of unfairness in data driven decision making. Medium.
  12. Jacob. (May 8, 2016). Deep learning racial bias: The Avenue Q theory of ubiquitous racism. Medium.
  13. Larson, J., Angwin, J., & Parris Jr., T. (Oct 19, 2016). Breaking the black box: How machines learn to be racist. ProPublica.
  14. Morrison, L. (Jan 9, 2017). Speech analysis could now land you a promotion. BBC Capital.
  15. Rao, D. (n.d.). Fairness in machine learning. (slides)
  16. Sweeney, L. (May 1, 2013). Discrimination in online ad delivery. Communications of the ACM, 56 (5), 44-54.
  17. Zliobaite, I. (2015). On the relation between accuracy and fairness in binary classification. CoRR, abs/1505.05723.
  • democracy and the language of manipulation
  1. Cutler, A., & Kulis, B. (2018). Inferring Human Traits From Facebook Statuses. arXiv.
  2. Yao, M. (n.d.). Can bots manipulate public opinion? (Web page, accessed 12/29/16)
  3. www.bloomberg.com/features/2016-how-to-hack-an-election/ (Web page, accessed 12/29/16)
  4. https://www.nytimes.com/interactive/2018/05/14/technology/facebook-ads-congress.html (News Article, accessed 09/10/18)
  5. https://www.nytimes.com/2017/11/01/us/politics/russia-2016-election-facebook.html (News Article, accessed 09/10/18)
  6. https://www.politico.eu/article/cambridge-analytica-chris-wylie-brexit-trump-britain-data-protection-privacy-facebook/ (News Article, accessed 09/10/18)
  • privacy / intellectual property
  1. Abadi, M., Chu, A., Goodfellow, I., McMahan, H. B., Mironov, I., Talwar, K., et al. (2016). Deep Learning with Differential Privacy. arXiv e-prints.
  2. Amazon.com. (2017). Memorandum of Law in Support of Amazon's Motion to Quash Search Warrant.
  3. Brant, T. (Dec 27, 2016). Amazon Alexa data wanted in murder investigation. PC Mag.
  4. Friedman, B., Kahn Jr, P. H., Hagman, J., Severson, R. L., & Gill, B. (2006). The watcher and the watched: Social judgments about privacy in a public place. Human-Computer Interaction, 21(2), 235-272.
  5. Golbeck, J., & Mauriello, M. L. (2016). User perception of facebook app data access: A comparison of methods and privacy concerns. Future Internet, 8(2), 9.
  6. Narayanan, A., & Shmatikov, V. (2010). Myths and fallacies of "personally identifiable information". Communications of the ACM, 53 (6), 24-26.
  7. Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford: Stanford University Press.
  8. Solove, D. J. (2007). 'I've got nothing to hide' and other misunderstandings of privacy. San Diego Law Review, 44 (4), 745-772.
  9. Steel, E., & Angwin, J. (Aug 4, 2010). On the Web's cutting edge, anonymity in name only. The Wall Street Journal.
  10. Tene, O., & Polonetsky, J. (2012). Big data for all: Privacy and user control in the age of analytics. Northwestern Journal of Technology and Intellectual Property, 11(45), 239-273.
  11. Vitak, J., Shilton, K., & Ashktorab, Z. (2016). Beyond the Belmont principles: Ethical challenges, practices, and beliefs in the online data research community. In Proceedings of the 19th ACM conference on computer-supported cooperative work & social computing (pp. 941-953).
  12. usableprivacy.org/publications (compiled list of NLP publications related to the topic)
  • chat bots
  1. Fessler, L. (Feb 22, 2017). Siri, define patriarchy: We tested bots like Siri and Alexa to see who would stand up to sexual harassment. Quartz.
  2. Fung, P. (Dec 3, 2015). Can robots slay sexism? World Economic Forum.
  3. Mott, N. (Jun 8, 2016). Why you should think twice before spilling your guts to a chatbot. Passcode.
  4. Paolino, J. (Jan 4, 2017). Google home vs Alexa: Two simple user experience design gestures that delighted a female user. Medium.
  5. Seaman Cook, J. (Apr 8, 2016). From Siri to sexbots: Female AI reinforces a toxic desire for passive, agreeable and easily dominated women. Salon.
  6. Twitter. (Apr 7, 2016). Automation rules and best practices. (Web page, accessed 12/29/16)
  7. Yao, M. (n.d.). Can bots manipulate public opinion? (Web page, accessed 12/29/16)
  • word embeddings and language behavior (an illustrative code sketch follows after this list)
  1. Bolukbasi, T., Chang, K., Zou, J. Y., Saligrama, V., & Kalai, A. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. CoRR, abs/1607.06520.
  2. Caliskan-Islam, A., Bryson, J., & Narayanan, A. (2016). A story of discrimination and unfairness. (Talk presented at HotPETS 2016; video of the presentation available online)
  3. Daumé III, H. (2016). Language bias and black sheep. (Blog post, accessed 12/29/16)
  4. Herbelot, A., Redecker, E. von, & Müller, J. (2012, April). Distributional techniques for philosophical enquiry. In Proceedings of the 6th workshop on language technology for cultural heritage, social sciences, and humanities (pp. 45-54). Avignon, France: Association for Computational Linguistics.
  5. Schmidt, B. (2015). Rejecting the gender binary: A vector-space operation. (Blog post, accessed 12/29/16)
  • NLP techniques and applications for addressing ethical issues
  1. Fokkens, A. (2016). Reading between the lines. (Slides presented at Language Analysis Portal Launch event, University of Oslo, Sept 2016)
  2. Gershgorn, D. (Feb 27, 2017). Not there yet: Alphabet's hate-fighting AI doesn't understand hate yet. Quartz.
  3. Google.com. (2017). The women missing from the silver screen and the technology used to find them. (Blog post, accessed March 1, 2017)
  4. Greenberg, A. (2016). Inside Google's Internet Justice League and Its AI-Powered War on Trolls. Wired.
  5. Kelion, L. (Mar 1, 2017). Facebook artificial intelligence spots suicidal users. BBC News.
  6. Munger, K. (2016). Tweetment effects on the tweeted: Experimentally reducing racist harassment. Political Behavior, 1-21.
  7. Munger, K. (Nov 17, 2016). This researcher programmed bots to fight racism on twitter. It worked. Washington Post.
  8. Murgia, M. (Feb 23, 2017). Google launches robo-tool to flag hate speech online. Financial Times.
  9. The Times is partnering with Jigsaw to expand comment capabilities. (Sep 20, 2016). The New York Times.
  10. Fake News Challenge
  11. Jigsaw Challenges
  12. Perspective (from Jigsaw). But see: Hosseini, H., Kannan, S., Zhang, B., & Poovendran, R. (2017). Deceiving Google's Perspective API Built for Detecting Toxic Comments. arXiv.
  13. Textio. See also: CEO Kieran Snyder's posts on medium.com; recording of Kieran Snyder's NLP Meetup talk from Aug 15, 2016.
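
For the topic "word embeddings and language behavior", the following minimal sketch illustrates the kind of bias probe discussed in Bolukbasi et al. (2016): completing an analogy and projecting occupation words onto a crude gender direction. It is only an illustration under stated assumptions: it uses the gensim library with the pretrained word2vec-google-news-300 vectors from gensim-data, and the phrase token computer_programmer may have to be replaced by programmer if it is not in the vocabulary.

    # Illustrative sketch only (assumes gensim and the gensim-data
    # "word2vec-google-news-300" vectors; the first load downloads ~1.6 GB).
    import numpy as np
    import gensim.downloader as api

    kv = api.load("word2vec-google-news-300")

    # Analogy probe from Bolukbasi et al.: man : computer_programmer :: woman : ?
    # (if the phrase token is missing from the vocabulary, use "programmer")
    print(kv.most_similar(positive=["woman", "computer_programmer"],
                          negative=["man"], topn=5))

    # Project occupation words onto a crude gender direction (he - she);
    # positive scores lean towards "he", negative scores towards "she".
    direction = kv["he"] - kv["she"]
    direction /= np.linalg.norm(direction)
    for word in ["nurse", "engineer", "receptionist", "architect"]:
        vec = kv[word] / np.linalg.norm(kv[word])
        print(word, float(vec @ direction))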


* The literature is mostly taken from http://faculty.washington.edu/ebender/2017_575/.