In the United States, everyone seems split on artificial intelligence (AI).
Some—corporations and enterprising individuals—believe these large language models will make them more efficient, lowering the costs of making and delivering everything from products to university papers. But many of us fear that AI will destroy the value of human life and labor, with the most vulnerable of us the first victims. Neither is the full story, but we need to focus on the latter.
Who is most vulnerable in the AI age? Existing research makes clear that the group includes, but is not limited to, people who create or labor1 and people facing discrimination – those marginalized by race, gender, sexuality, disability, religion, nationality and more.2 It makes sense that the Writers and Screen Actors Guilds, more diverse after Hollywood’s streaming push, were the first workers to go on strike with AI on the bargaining table. The lesson is clear: when corporations own our data and creativity, many more people are disempowered.
We can avoid the worst of AI and use it to advance our society, but we must center workers and historically marginalized communities, and the value of their knowledge, over corporations’ desire to own more of both. I have experience doing so as the co-founder of an app and platform, OTV | Open Television, which centers historically marginalized creators. We overcorrected for media and tech’s exclusion of people of color, disabled, queer, women, trans and gender-nonconforming people by creating platforms to cultivate and release their stories. Our work at OTV shows that when diverse perspectives are shared, everyone benefits, including white, cisgender, able-bodied people.
While writing my book about OTV, Reparative Media (MIT Press, forthcoming), I realized we were drawing on “Ancestral Intelligence” as a framework for developing media and technology in ways that can heal our deepest cultural wounds.
Ancestral Intelligences are collectively cultivated knowledges: locally specific, balancing our cultural differences with our ever-changing ecosystems. Ancestral Intelligence includes the ways our minds process information through story, our spiritual and religious practices (particularly those that center love and connection over violence and division), and our many systems of governance over the course of human history, from democracies to seasonal hierarchies.
Ancestral Intelligences are as diverse as our cultures, rejecting the utility of a “general intelligence.” Ancestral Intelligences are based in stories, and working with storytellers showed me the power of story to shift culture. Our ancestral stories take inspiration from their immediate environment: the people and other living beings that sustain them, from families to food. Others find inspiration in the skies, as seen in how most cultures have nature-based myths and rituals that derive lessons from stories about the stars. Ancestral Intelligences can be hundreds or thousands of years old.
Across cultures, Ancestral Intelligences are distinct but share some common traits.
I take an expansive perspective of Ancestral Intelligence. From ancient myths and knowledge systems to practices of gathering and organizing, we must root ourselves in human ways of being as AI expands our capacities. Unfortunately, most of these Indigenous practices have been dismissed as frivolous by the scientists, technologists, engineers, and mathematicians who developed AI, and that may explain the palpable fear of the tools they created. There is little understanding of intelligence beyond what they can calculate and test, which their tools can now do better than most of them can.
We all have Ancestral Intelligence, but some have been marginalized since the Enlightenment and the rise of corporate capitalism in the 18th century, the marriage of science and markets that led to the creation of AI. According to David Graeber and David Wengrow in their provocative critique of civilization theory, Enlightenment philosophers imagined “the isolated, rational, self-conscious individual…as the normal default state of human beings anywhere.”3 This philosophy prioritizes capital markets and private property, supported by the individual as isolated from and master over nature, thereby enabling him to extract endlessly the Earth’s resources. (Sounds like OpenAI’s imagined ChatGPT user, alone talking to a machine to extract ideas from the natural well of human thought.) Against this Enlightenment standard, Indigenous people were seen as “primitive or savage…in a mythological dreamworld,” even though Indigenous societies had maintained complex political systems for thousands of years. Western philosophers saw egalitarian, matriarchal, myth-based societies as pre-civilized. But Graeber and Wengrow find that many of these societies likely had hierarchies and notions of private property; their social hierarchies were likely seasonal, connected to the sun and earth, and their private property was protected by notions of the sacred or divine.
The Ancestral Intelligences shaping our culture in the United States, chief among them Enlightenment-based capitalist rationality, have eclipsed many other diverse, Indigenous ways of knowing cultivated by communities for millennia. This is a core reason why our society continues to be so unequal and out of step with our climate. The dominance of capitalist and scientific ways of thinking shows how letting one intelligence system prevail over many others for too long can do harm.
While developing my perspective on Ancestral Intelligence, I was very inspired by Robin Wall Kimmerer’s Braiding Sweetgrass. Dr. Kimmerer argues that Indigenous stories and myths created scientific protocols for sustainably using the earth’s resources, which Western scientists are now learning from:4 see our newfound zeal for psychedelics in treating epidemics of depression and addiction, or the rise of regenerative farming. Indigenous myths have proliferated and shapeshifted across cultures, with Yoruba in Africa spreading to the Americas through Santeria, Voodoo, and Hoodoo. These ancient knowledge systems are culturally specific but also connect cultures, as the Greek and Roman star myths most U.S. Americans know are connected to earlier epistemologies (Sumerian, Egyptian), which relate to South Asian or Vedic practices. (Interestingly, both Alphabet and Meta have developed tech inspired by these systems, e.g. Google’s Gemini and Facebook’s Libra.)
Many scholars are now realizing how important Indigenous ways of knowing are to navigating a world where machines gain more power in shaping our reality. Eun Seo Jo and Timnit Gebru find inspiration for ethical AI in community-based and Indigenous archiving practices, and Indigenous scholars around the world have been actively exploring new paths to cultivating AI in sustainable ways.5
Reading Ancestral Intelligences reveals cross-cultural values. We must be skeptical of the “Artificial General Intelligence” corporations like OpenAI are selling, particularly given its apparent relationship to eugenics.6 All knowledge systems are different, and any ethical AI should be culturally specific. Indeed, AI is already heading in this direction, with various companies across sectors acknowledging that they will need industry-specific AI.
Across time and history, humans have developed related values and frameworks for living. These cross-cultural values are critical for us to remember in this new age and can be applied to cultivate a more ethical AI framework.
The most striking value across Ancestral Intelligences is the most important and challenging to uphold in these hyper-capitalist times: all knowledge is commonly owned. All ancient knowledge systems were cultivated by intellectuals and spiritual leaders alongside their communities. Myths only have power if they are molded by collective voices and needs. No one person or institution “owns” the Ojibwe Star Map or African Orishas, even if individuals or powerful leaders can create sacred objects representing these symbols that belong to them. Some Yoruban communities believe people need to visit shrines or shamans to access their deities, yet there are hundreds of local and family-specific Orishas that people cultivate on their own.
All told, Ancestral Intelligence tells us the first problem with AI is its corporate ownership. Data as public domain or utility is critical to any ethical AI, and, frankly, may be inevitable, as AI developers are getting sued left and right for all the data they colonized without consent or payment.
The idea of an Artificial General Intelligence goes against everything humans have ever created. While there are many values and bodies of knowledge that connect our cultures, those are tested and refined in local contexts. Across all mythic systems, there are similar figures and stories, from deities for war and motherhood to various creation myths. Yet the details and meanings of these stories shift across time and place. Even within systems, there are multiple significations. In Hellenistic star systems, everything from planets to signs has a dozen meanings or more. We see variations of spellings and myths in Yoruba as it migrated across Africa and then to the Americas.
If AI is to be effective, specific communities will need the power to reshape and adapt tools for their needs. Some companies like Stability AI and many artists and collectives are already proposing this kind of decentralized access, but the biggest corporations seem intent on policing and excluding others from their allegedly proprietary data and algorithms.
Many Ancestral Intelligences are gendered, with characters and symbols labeled “masculine” or “feminine,” with some variation, as in Chinese knowledge systems that use yin and yang. Yet there is ample space to read these stories beyond the binary. For example, many Yoruban followers believe the creator of the universe, Olodumare, to be neither masculine nor feminine, one who delegated rulership of the sky to the masculine Olorun and of the waters to Olokun. In Greek and Roman astrologies, there are planets that shift gender (Mercury) or exist beyond it (planets beyond Saturn). In many Indigenous myths, women or feminine energies are given primacy over creation. Corporate AI does not disclose the gender balance of its datasets, but given rampant inequalities in virtually all knowledge fields, it’s easy to assume it is similarly male-dominated. This poses a grave risk to a world where most people are not cisgender men.
Ancestral Intelligences see connections between the material and immaterial. Humans create physical things from what we imagine. From the notion of Ashe to Chi and Chakras, the spiritual has natural manifestations and consequences, where the natural inspires human belief (including the widely held belief that we all come from water). This means that nature is always divine, to be respected and protected.
The tremendous amount of computing power and energy required for general AI is deeply out of step with all Ancestral Intelligences that place the climate on equal or higher status than human life. This is a sign that we need smaller, more localized AIs developed with communities in specific local contexts.
Of course, we must remember that these systems are not pure and have been used to oppress, too. Astrology has been a tool of monarchies to retain power (the British royal family still uses it). Knowledge systems are only as powerful as the institutions that use them, but Ancestral Intelligences have fallen out of favor with the world’s most powerful institutions. In Postcolonial Astrology, Alice Sparkly Kat writes that “neither race nor astrology have a biological basis, but racial biases are upheld by institutions, while astrological ones are not.”7 We are now more aware of how institutions oppress, and we can avoid the traps of Ancestral Intelligences if we follow the lead of the most oppressed, just as we should with AI.
There is much we can learn from Ancestral Intelligence, from the widespread belief in reincarnation (inspiring us not to fear living digitally) to the reality of imperfection (nearly all gods are imperfect, and disability exists as natural in many knowledge systems).
In my next project, I’ll be testing Ancestral Intelligence to generate empirical data on the efficacy of ancient cross-cultural principles.
In my last project, The Cookout, I tested a more recent one.
In Reparative Media and a forthcoming comic book, I show how developing OTV drew on a much more recent ancestral practice: The Cookout, a Black American tradition of gathering to nourish each other.
Most Black Americans probably do not think of the Cookout as an ancestral form of intelligence. We may think of more spiritual traditions like gospel songs and sermons, or rootwork and hoodoo. Those are absolutely forms of ancestral intelligence. But I like the Cookout for the simple way it offers a framework for gathering and collaboration, sharing nourishment, stories, knowledge, and our specific local cultures.
The Cookout holds me accountable to my ancestors, some of whom gave their freedom so others could be free, and others who sacrificed their own pleasures to help others achieve their dreams. If you are not Black, you may not identify with the Cookout. You may have other ancestral practices of gathering. The framework relates to a wide range of ancestral practices. Investigate your history, no matter how you identify. For many of us, our (white) ancestors were not motivated by care but instead by greed, violence, extraction, and domination. We are accountable to that, too (myself included). Understanding how we and our ancestors have harmed others can inspire us to break the pattern and choose different practices. If you are worried about replicating the worst forms of your ancestral intelligences, the book offers a way forward.
What makes the Cookout a particularly powerful form of ancestral intelligence is its inclusivity. It produced new forms of art and sustenance like soul food, inspired by African and Indigenous American culinary and other artistic traditions. We invite our family, friends, neighbors, and the people they love into a process of collectively nurturing each other. There are simple protocols in place for doing so, which have been refined over generations and have sustained us even when churches have failed us or as other spiritual practices have been marginalized.
In contrasting my Cookout framework with the White House’s inaugural AI Bill of Rights8, we can uncover how ancestral knowledge pushes us to act more radically and equitably as we enter this new AI age. The proposed AI Bill of Rights is a productive framework developed in collaboration with equity-minded scholars and practitioners. Using the Cookout, we can push it even further!
OTV’s critical contribution was not our app but our process for developing stories. Instead of Hollywood’s closed doors (not everyone can get a Netflix meeting) or Silicon Valley’s wide-open doors (anyone can upload to YouTube), we extended an open invitation to any multiply marginalized creator in our specific locale of Chicago. These creators were our leaders, producing their stories and executing their own releases with our support. Just like the Cookout, we were open to anyone locally who could get an invitation, but your invitation also meant you had to contribute to the collective.
The White House Bill of Rights argues for AI “development in consultation from diverse communities.”9 Based on the framework of the Cookout, we should flip that concept. AI development should empower communities in creating, owning, and benefitting from their own data and intellectual property. Development should be led by and centered on diverse communities. Instead of just being “safe and protected from” AI, they should be leading, guiding, and shaping it.
The stories we produced at OTV were community funded, produced, and centered. Unlike the reductive narratives of TikTok or the sensational, traumatizing stories on streaming platforms, our stories were connective, collective, and care-centered. OTV didn’t manufacture this. It emerged naturally as we simply helped creators tell the story they wanted to tell. By centering the most marginalized, our platform, an archive of original intellectual property, “cooked,” or produced, a body of work far more representative of the global population than most platforms, even though it was developed locally.
The White House states AI companies should conduct “proactive equity assessments,” which sounds nice but is rather toothless, particularly since most developers say they don’t even know all the data and IP that went into their platforms. Instead, the Cookout shows us how to embed equity into system design by centering historically marginalized communities in producing and uploading the data and stories they feel will best serve the culture.
Above all, the Cookout is a platform for distributing culture and hosting communities to participate in its cultivation. All platforms are hosts: Netflix hosts global films and series, Instagram hosts our images, and ChatGPT hosts global knowledge. Algorithmic distribution obscures the protocols for distributing culture – we don’t know why some data are centered and others are not. The complexity of these LLMs makes it impossible for communities to know the values of the host, which is critical to any successful Cookout. Indeed, many of these platforms claim to be neutral, without core values. We can imagine platforms guided by deep values, like OTV prioritizing intersectionality. But basic values like privacy and sovereignty over our contributions are essential to building this trust; at the Cookout, I need to know you won’t steal my recipe unless I share it with you!
The White House says Americans should be protected from surveillance and violations of privacy when using AI platforms, and I agree. But the only way to do this in a way that builds trust in those hosting our knowledge is to give users sovereignty over our contributions and data. Ideally, if AI is to be market-driven, we should be paid a basic income to ensure fair and full participation. Beyond that, what would it mean for knowledge to be hosted by specific platforms guided by their cultural values, so long as those values align with our laws (e.g. banning violent or hateful speech)?
Cultivating knowledge is critical to sustaining the Cookout. During and after each gathering, hosts organically collect information on how it went. We ask people how they liked the food, the music, the vibes. This form of checking in ensures everyone’s individual experience influences the collective. Consent is critical. If I tell you about a problem with another person whose behavior was violent, I have to trust the host to find a way to seek restorative justice that values my transparency in sharing my experience (data) so vulnerably.
As a data-driven system, AI needs clear and robust protocols for learning. The White House Bill of Rights encourages the “use of representative data,” but how are platforms going to be held accountable for the worst effects of data discrimination? Centering the data of those whose data has been most devalued is a start; better yet is asking us about our experiences and letting us suggest alternatives. The White House argues for “plain language” transparency, but given the complexity of these systems, open source code and co-created explanations of data will encourage far more trust that our knowledge is being seen, valued, and understood. I agree with the White House that we should be able to “opt out” of our data being used, with a human connection to remedy any mistakes, but in the spirit of the Cookout, these humans should be close to us, accessible in person, and part of our communities, not underpaid labor outsourced to faraway places, as has been the case in corporate AI’s development. What would it take for communities to bravely “opt in” to a process of generating collective intelligence, in ways that honor our individual beliefs and experiences because we know exactly how they will be used?
The White House’s initial Bill of Rights goes farther than any AI company wants to go. In fact, as Emily Bender and Alex Hanna discuss on their necessary Mystery AI Hype Theater 3000 podcast, the White House released an even more toothless set of recommendations in September after consulting with the leading AI corporations without prominent critics present. These recommendations include ensuring trust, safety, and security, with benign suggestions and several loopholes companies can use to sidestep accountability.
The Cookout goes farther than any government that seeks corporate cooperation can go. It may not be possible to restructure our media and tech industries in such decentralized, community-controlled ways. But the point of this framework is that it gives us the chance to create our own experiments in AI/Ancestral Intelligence outside of corporations.
Through the questions we naturally ask ourselves when planning a cookout, we can develop platforms that heal our culture. The Cookout gives us a clear view on how to develop technology sustainably and ethically. This process achieves balance by rejecting corporations’ extractive logics for managing data and intellectual property. The more of us who create platforms for people to own and control their knowledge, the more we can slowly, over time, take back collective ownership over who we are and can become in the 21st century.
Ancestral Intelligence is not the only thing we need. We need regulation, technologies like blockchain, and much more. But our ancestral knowledges are important frameworks for being as we enter a new age of human development.
I’m not the first to realize this. Storytellers around the world have been using the term Ancestral Intelligence for everything from NFT collections to community gatherings in Lagos. In a recent LinkedIn post, Guild of Future Architects co-designer Tony Patrick proposed drawing on Indigenous value systems as a framework for living with AI:
“it’s our original GPS. It’s the way our ancestors built wonders that humble us as the Pyramids and the Parthenon. And it’s the way people have survived unimaginable horrors and terrors - both collective and individual.”10
We have all the knowledge we need to live in an age of data abundance, but we all need to have power in this emerging environment. Only if we all – individually and collectively – have sovereignty over our knowledge, data, and stories can we leverage automated, algorithmic intelligence to become the ancestors who saved our world.