Oliver Klingefjord about the Meaning Alignment Institute and how to bring up wisdom in a collective
Ep. 10

Episode description

Interview with Oliver Klingefjord: AI Alignment and Human Values. In this episode of Democracy or Spot Guest, Alessandro Oppo interviews Oliver Klingefjord, co-founder of the Meaning Alignment Institute. They discuss how AI systems can be aligned with human values, the challenges facing current democratic systems, and potential futures in an AI-centered world. Oliver explains the institute's approach to understanding human values at a level deeper than slogans and stated preferences, and how that understanding can transform decision-making processes.

0:00 Introduction
0:15 What does aligning AI mean?
0:58 Process of discovering human moral values
3:00 The Meaning Alignment Institute
3:30 History and formation of the organization
5:00 Technical methodology
7:00 Using AI to understand core values
9:00 Research results and test outcomes
11:00 Training AI on moral graphs
13:00 Current implementation status
14:00 Oliver's background
16:00 Team structure and collaborations
18:00 Challenges and interdisciplinary nature
19:00 Collaboration opportunities
20:00 Future vision
22:00 Concerns about black box AI
23:00 Horizontal coordination
25:00 Potential risks and dystopian scenarios
27:00 Digital divide
28:00 Related projects and inspiration
30:00 Advice for innovators
32:00 Institutional adoption
33:20 Web3 and coordination technology
35:00 Final message and conclusion