Public Forums

The Center for Ethics and Values holds regular public forums on significant ethical issues faced by researchers across the university, by students, and by society more broadly.

To receive regular updates about these events, follow us on myUMBC, Facebook, or LinkedIn, or sign up here to receive Center news.

Free & Open to the Public

Come Join the Conversation!

Register at the links below

Directions, Parking, & Campus Map

These public forums are open to full participation by all individuals regardless of race, color, religion, sex, national origin, or any other protected category under applicable federal law, state law, and the University’s nondiscrimination policy.

UMBC is committed to creating an accessible and inclusive environment for all faculty, staff, students, and visitors. To request accessibility accommodations, please contact us at ethics@umbc.edu.

The Ethics of Artificial Intelligence

April 8, 7 – 8:30 pm

UMBC Fine Arts Recital Hall

Click Here to Preregister

Preregistration is recommended but not required

Panelists

Kathy Baxter is Principal Architect of Ethical AI Practice at Salesforce, where she develops research-informed best practices to educate Salesforce employees, customers, and the industry on the development of responsible AI. She collaborates with external AI and ethics experts to continuously evolve Salesforce policies, practices, and products. Before Salesforce, she worked in user experience research at Google, eBay, and Oracle. She received her MS in Engineering Psychology and BS in Applied Psychology from the Georgia Institute of Technology. The second edition of her book, “Understanding Your Users,” was published in May 2015. You can read about her current research at einstein.ai/ethics.

One part of David Danks’ research examines the ethical, psychological, and policy issues around AI and robotics across multiple sectors. He also develops novel AI systems and computational cognitive models. Danks currently serves on multiple advisory boards, including the National AI Advisory Committee.

Gabriella Waters is an artificial intelligence and machine learning researcher. She is the Director of Operations and Director of the Cognitive and Neurodiversity AI Lab (CoNA) at the Center for Equitable AI & Machine Learning Systems at Morgan State University in Baltimore, MD. She is also a research associate in the AI Innovation Lab at NIST, where she leads AI testing and evaluation across three teams, and serves as Principal AI Scientist at the Propel Center, where she is a professor of Culturally Relevant AI/ML Systems.

She is passionate about increasing the diversity of thought around technology and focuses on interdisciplinary collaborations to drive innovation, equity, explainability, transparency, and ethics in the development and application of AI tools. Her research interests include the intersections between human neurobiology and learning, quantifying ethics and equity in AI/ML systems, neuro-symbolic architectures, and intelligent systems that build on those foundations for improved human-computer synergy. She develops technology innovations with an emphasis on support for neurodiverse populations.

Moderated by Blake Francis
Assistant Professor of Philosophy & Director of the Human Context of Science and Technology, UMBC

Many thanks to our cosponsors: College of Arts, Humanities, and Social Sciences; Dresher Center for the Humanities; Center for Social Science Scholarship; Department of Computer Science and Electrical Engineering; Department of Psychology; Human Context of Science and Technology Program.


Existential Catastrophe and the Love of Humanity

The Evelyn Barker Memorial Lecture

May 8, 7 – 8:30 pm

UMBC Fine Arts Recital Hall

This event is part of the Dresher Center’s Humanities Forum

Click Here to Preregister

Preregistration is recommended but not required

Samuel Scheffler works primarily in the areas of moral and political philosophy and the theory of value. His writings have addressed central questions in ethical theory, and he has also written on topics as diverse as equality, nationalism and cosmopolitanism, toleration, terrorism, immigration, tradition, and the moral significance of personal relationships. He is the author of seven books: The Rejection of Consequentialism, Human Morality, Boundaries and Allegiances, Equality and Tradition, Death and the Afterlife (Niko Kolodny ed.), Why Worry about Future Generations?, and, most recently, One Life to Lead: The Mysteries of Time and the Goods of Attachment.

The Oxford philosopher Toby Ord estimates that there is a one in six chance that humanity will experience an “existential catastrophe” within the next hundred years. By an existential catastrophe he means either the extinction of humanity or some other event, such as the irreversible collapse of civilization, that destroys what he calls humanity’s “long-term potential.” Ord divides the risks we face into three categories: natural risks, such as those posed by asteroids and supervolcanic eruptions; current human-caused risks, such as those resulting from nuclear weapons, climate change, and environmental destruction; and newly emerging human-caused risks, such as those posed by artificial intelligence and engineered pandemics.

If it is true that humanity faces a serious risk of existential catastrophe within the next hundred years, how should we respond? In recent discussions of this question, two diametrically opposed answers stand out. One answer, favored by Ord and other advocates of “longtermism,” is that we should make the avoidance of existential risk our highest priority, or at least one of our highest priorities. The second answer, favored by so-called “antinatalists,” is that we should welcome the prospect of humanity’s disappearance and do what we can to accomplish it with as little suffering as possible, simply by ceasing to have children. This lecture will offer a different and more compelling response to the prospect of existential catastrophe.

Many thanks to our cosponsors: Dresher Center for the Humanities; Department of Geography and Environmental Systems; Human Context of Science and Technology Program.