Tutorial 2: Ethics#
Week 2, Day 5: Mysteries
By Neuromatch Academy
Content creators: Megan Peters, Joshua Shepherd, Jana Schaich Borg
Content reviewers: Samuele Bolotta, Lily Chamakura, RyeongKyung Yoon, Yizhou Chen, Ruiyi Zhang
Production editors: Konstantine Tsafatinos, Ella Batty, Spiros Chavlis, Samuele Bolotta, Hlib Solodzhuk
Tutorial Objectives#
Estimated timing of tutorial: 30-50 minutes (depends on chosen trajectory; see below)
By the end of this tutorial, participants will be able to:
Understand the relationship between consciousness, intelligence, and moral status.
Discuss responsible, moral, ethical, and safe artificial intelligence.
Setup#
⚠ Experimental LLM-enhanced tutorial ⚠
This notebook includes Neuromatch’s experimental Chatify 🤖 functionality. The Chatify notebook extension adds support for a large language model-based “coding tutor” to the course materials. The tutor provides automatically generated text to help explain any code cell in this notebook.
Note that using Chatify may cause breaking changes and/or provide incorrect or misleading information. If you wish to proceed by installing and enabling the Chatify extension, you should run the next two code blocks (hidden by default). If you do not want to use this experimental version of the Neuromatch materials, please use the stable materials instead.
To use the Chatify helper, insert the %%explain magic command at the start of any code cell and then run it (shift + enter) to access an interface for receiving LLM-based assistance. You can then select different options from the dropdown menus depending on what sort of assistance you want. Press the Submit button to generate a response. To disable Chatify and run the code block as usual, simply delete the %%explain command and re-run the cell.
Thanks for giving Chatify a try! Love it? Hate it? Either way, we’d love to hear from you about your Chatify experience! Please consider filling out our brief survey to provide feedback and help us make Chatify more awesome!
Run the next two cells to install and configure Chatify…
%pip install -q davos
import davos
davos.config.suppress_stdout = True
smuggle chatify # pip: git+https://github.com/ContextLab/chatify.git
%load_ext chatify
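Once the extension is loaded, any cell can be annotated for the tutor. As an illustrative sketch (the cell contents below are arbitrary example code, not part of this tutorial), a Chatify-enabled cell looks like:

```python
%%explain
# Chatify will display an LLM-generated explanation of this cell
# in an interface above the output.
values = [2, 4, 6, 8]
mean = sum(values) / len(values)
print(mean)
```

Deleting the `%%explain` line and re-running the cell restores normal execution.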
Install and import feedback gadget#
# @title Install and import feedback gadget
!pip install vibecheck --quiet
from vibecheck import DatatopsContentReviewContainer
def content_review(notebook_section: str):
    return DatatopsContentReviewContainer(
        "",  # No text prompt
        notebook_section,
        {
            "url": "https://pmyvdlilci.execute-api.us-east-1.amazonaws.com/klab",
            "name": "neuromatch_neuroai",
            "user_key": "wb2cxze8",
        },
    ).render()
feedback_prefix = "W2D5_T2"
Section 1: Ethics Intro & Moral Status#
Video 1: Ethics Lecture 1#
Submit your feedback#
# @title Submit your feedback
content_review(f"{feedback_prefix}_Video_1")
Discussion activity: moral status#
There are many reasons to ascribe moral status to some system, depending upon one’s view of the grounds of moral status.
Discuss! What is your view (or your intuition) about what is important for moral status – consciousness, affective consciousness, cognitive sophistication, etc. – and what would this view imply about how we approach design of and interaction with different forms of AI?
Both breakout rooms discuss the same topic.
Section 2: Ethical AI#
Before starting the next sections, see how much time you have left in today’s schedule.
If you have at least 30 minutes left, work through both of the following sections together as one group. If you have less than 30 minutes left, split into two groups, cover the next two sections in parallel, and then come back together to discuss.

Video 2: Ethics Lecture 2#
Submit your feedback#
# @title Submit your feedback
content_review(f"{feedback_prefix}_Video_2")
Discussion activity: Can AI be safe? Can it respect privacy? Can AI (or its creators/users) be responsible?#
Discuss!
Room 1: How can we maximize AI safety?
Room 2: How can we protect our privacy from AI threats?
Room 3: How can we decide who is responsible for AI behavior?
Section 3: Fair AI#
Video 3: Ethics Lecture 3#
Submit your feedback#
# @title Submit your feedback
content_review(f"{feedback_prefix}_Video_3")
Discussion activity: Can AI be fair? Can it exhibit human-like morality?#
Discuss!
Room 1: What can we do to help AI be more fair? – better training data? interpretable/explainable AI? alignment?
Room 2: What else can we do to help AI be more moral? – Is the top-down, bottom-up, or hybrid approach more promising? Why?
Summary#
Video 4: Ethics Lecture 4#
Submit your feedback#
# @title Submit your feedback
content_review(f"{feedback_prefix}_Video_4")