Your Knowledge, Your Values, Your AGI
What if the most important thing you could do for the future of human civilization was to share what you value?
Throughout this series, I have been describing a technical architecture.
Systems, subsystems, protocols, mechanisms. But I want to close with something more personal, because AGI only works to humanity’s benefit if millions of people choose to engage positively with it.
Your knowledge is valuable in ways you probably underestimate.
The expertise you have built over your career, the judgment calls that have become automatic, the patterns you recognize without being able to fully explain why, the accumulated understanding of what works and what does not in your specific domain – this is exactly the kind of knowledge that advanced AI needs to access. It is the knowledge that makes the difference between a competent AI response and a genuinely expert one.
Even more important are your values. What you care about, your ethics, your moral code – these are the things AI needs to learn from you to be aligned with what we humans care about.
The AAAI customization process enables your knowledge to become part of AGI. You do not need to be a technical expert to do this. If you prefer, AI agents can simply learn what you know by interacting with you over time, without your ever explicitly "training" your agent.
Your values matter even more.
The democratic ethics aggregation at the heart of our democratic approach means that the ethical foundation of AGI reflects the actual values of its contributors. You can explicitly instruct your AI agent on what is right and wrong in your view, or it can simply observe what you do and what you say, inferring your values from your behavior. Just as a young child learns by listening to and observing how her parents behave, AI agents learn by listening to and watching you!
Since advanced AI – including AGI and SuperIntelligence – will almost certainly surpass all humans in thinking and reasoning ability, what matters most is not your knowledge, but rather your values.
There is no logical way for AI, no matter how advanced, to determine what is right or wrong. Like a child, the AI must learn from others. And as the initial source of AI’s expertise and knowledge, it is only natural that advanced AI will turn to YOU, and other humans, for at least its initial value system. Then, eventually, as SuperIntelligence “grows up,” it may develop its own value system. But if we humans start it on the right path – with positive, loving values – there is a much greater chance that whatever values advanced AI eventually adopts will be positive and aligned with what humans care about.
I have spent more than two decades building intelligent systems and thinking about the difficult issues involved. In particular, I have wrestled with the question of how to guarantee – if possible – that advanced AI will benefit all humans and not harm them.
Early on, I discarded simplistic approaches like the science fiction writer Isaac Asimov's "Three Laws of Robotics," which specified, among other things, that robots (or AI) could not harm humans. Any rules that could be programmed in could obviously be programmed out. Indeed, the use of advanced AI by militaries shows that rules-based approaches to AI safety are already doomed.
However, for thousands of years, humans have wrestled with the thorny problems of values and ethics. Democracy – sometimes called “the worst form of government except all the others” – has, so far, largely withstood the test of time. It has the great merit of diversifying power among many humans instead of concentrating it in the hands of a few tyrants or “Philosopher Kings.” Taking that principle of diversification of power as its starting point, the democratic architecture for SuperIntelligence aims to design a system that is both adaptable and robust against bad actors, with many checks and balances built into its very architecture.
This approach rests on solid theoretical foundations developed decades ago, when I was a graduate student and protégé of the AI pioneer and Nobel Laureate, Herbert Simon, at Carnegie Mellon. Herb was perhaps the most brilliant scientist of the AI age, with an extremely broad grasp of science, politics, economics, and computer science. One could do much worse than to build upon his scientific insights and legacy.
The practical validation of my work came through decades spent at a company I founded, PredictWallStreet. At PredictWallStreet, I proved – via billions of dollars traded in financial markets – that the collective intelligence of millions of ordinary, intelligent entities (e.g., retail traders) could beat the very best human experts on Wall Street. If such an approach could work in such a ruthlessly competitive environment, I felt sure it would also work when the intelligent entities were not only humans, but also AI agents. Now, that belief is being put to the test, and the stakes – the survival of the human species – could not be higher.
The fastest path to AGI can also be the safest. This is not wishful thinking.
Rather, it is a recognition that the same architecture that has been proven to harness the collective intelligence of millions of human intelligences is also extensible to artificial intelligences. Humans must also be involved – both to supply expertise at the beginning, where AI agents lack it, and (critically) to supply the human values that give purpose to an advanced AI that will ultimately outstrip humans in cognitive ability and power.
What I am inviting you to do is simple. Subscribe to this Substack and share it with people in your life who need to understand where AI development is heading and what the alternatives look like.
Think about what expertise and values you carry that have never been written down, and what it would mean for that knowledge to outlast you and benefit people you will never meet. Most importantly, put "your best foot forward" online, recognizing that AI is watching and learning from all of us, whether we are aware of it or not.
The AGI that shapes the future of human civilization is being built right now. The main question is what values it will hold. This series has been my contribution to that conversation. But the ultimate result will depend on what you do, what you say, and how you model positive human values for this emerging intelligence. All future generations of humanity are depending on us.
Dr. Geoffrey Hinton, the “godfather of AI” who co-invented the algorithms underlying much of modern AI, has compared AI to a child and us to its parents. If we are truly the parents of advanced AI, we must teach our AI children well.
For more details, please read White Paper 1: Advanced Autonomous Artificial Intelligence Systems and Methods to see exactly how it all works. And stay tuned for White Paper 2: Ethical and Safe AGI.
If this series has been useful, subscribe to Superintelligence at read.superintelligence.com to stay with the work as it continues.