Risk Averse Preferences as an AGI Safety Technique - Carl Shulman & Anna Salamon
AGI Safety & Understanding - Tom Everitt - AGI17
AI Safety & Definitions of Intelligence - Allison Duettmann
AISafety.com Reading Group (Session 67)
Whole Brain Emulation as a Platform for Creating Safe AGI - Anna Salamon & Carl Shulman
AGI 2011: The Future of AGI Workshop Part 1 - Ethics of Advanced AGI
AGI-13 Tarek Besold - Human-Level Artificial Intelligence Must Be a Science
Carl Shulman (Pt 1) — Intelligence explosion, primate evolution, robot doublings, & alignment
How to classify different types of risk takers
Robin Hanson on AI Takeoff Scenarios - AI Go Foom?
Carl Shulman: Could we use untrustworthy human brain emulations to make trustworthy ones?
BBC News: Ireland banks are risk averse
2020-12-02 CERIAS - Maximizing Cyber Deception to Improve Security: An Empirical Analysis
Lecture 9: Risk-Sharing with Production
Carl Shulman (Pt 2) — AI Takeover, bio & cyber attacks, detecting deception, & humanity's far future
Survival in the Margins of the Singularity? - Anna Salamon [UKH+]
UMass VOICE/it: What Obama should do about the troops in Iraq
Determining a woman's preference for treatment options in ovarian cancer - Dr. Laura Havrilesky
Anna Salamon: "Shaping the Intelligence Explosion"