Four Approaches to Democratizing AI

Divya Siddarth

AI is on track to lead to profound societal shifts, and must be actively directed towards the collective good. Our governance approaches currently fall prey to the transformative technology trilemma. They assume significant trade-offs between progress, participation, and safety. Market-forward builders tend to sacrifice safety for progress; risk-averse technocrats tend to sacrifice participation for safety; participation-centered democrats tend to sacrifice progress for participation. Collective flourishing requires all three. 

We need work on risk mitigation and redress for harms, as covered in our previous work on standardizing and requiring model evaluation for extreme scenarios, concretizing liability, regulating sensitive use cases, and the like. We also need to articulate an affirmative vision for a democratic approach to AI that can simultaneously advance technological capabilities, spread the benefits, and enable individual and collective self-determination. Here, we focus on the latter point, laying out four forms of AI democratization and directions to be taken on each. While most public discussions of AI democratization focus on the first two—democratizing AI use and development—we believe that very little progress can be made without commensurate investment in the democratization of AI benefits and governance. 

Democratization of use: Addressing the AI divide

Benefits from AI should not just accrue to a small subset of the population, exacerbating existing inequality and deepening the digital divide. Building the infrastructure, protocols, and tools for the AI commons will ensure that access to AI is broad-based and benefits are widely shared. Specific policies could cover:

  • Closing the AI divide via broad public education, reskilling, and generally making it easier for people of all backgrounds to access helpful capabilities. 

  • Leveraging existing initiatives like those on rural broadband, internet access, technology literacy, etc. to deepen public access to and understanding of AI systems.

Democratization of development: Co-design of AI systems

AI systems that are built without real-world testing and participation from affected stakeholders (whether application-specific, model-specific, or a broader polity as in the case of environmental effects) are less likely to serve a broad range of communities, and may be more likely to result in unexpected harms when deployed. Further, applications with broad public benefit may not by default be pursued by AI corporations or startups, and state funding could bridge that gap. Specific policies could cover: 

  • Expanding the pipeline of AI developers, researchers, and designers to ensure that diverse viewpoints and backgrounds are represented.

  • Directing public funding for AI toward use cases and models that require broad-based participation for deployment, and investing in open-source models specifically designed for public interest use cases (such as health, education, and financial literacy).

Democratization of benefits: Broadly distributing gains from AI

The monocultural funding landscape for much of the technology space makes the sector fragile, while driving its development in particular directions: asset-light, high-growth, Silicon Valley-legible startups. Taking advantage of AI for the collective good requires new institutional forms with which to build the future of technology and experiment with different ways of allocating risk and reward. Specific policies could cover:

  • Pooling public-private funding into an ‘AI commons’, with shared compute, datasets, benchmarks, and other resources to allow for greater access to cutting-edge benefits. 

  • Creating new corporate structures that combine innovations of existing startup models with insights from PBCs and perpetual purpose trusts to build in windfall distribution, stakeholder input, and transparency from the start. 

  • Building a world-class resourcing system for high-impact, high-risk ideas in frontier AI for the collective good, ensuring safe development and building on existing models like DARPA and ARPA-H. 

Democratization of governance: Collective decision-making over AI futures

The public should be actively included in making critical value decisions and adjudicating complex value tradeoffs, for example between safety and innovation. This will require expanded public input processes to elicit and integrate collective values over the direction of AI development, in partnership with developers of AI as well as with civil society. Specific policies could cover:

  • Creating institutional models for co-ownership of inputs to AI, including worker-owned sector-specific generative models.

  • Building data cooperatives and establishing collective ownership rights over data to ensure public accountability for models trained on the digital commons (i.e. Common Crawl and other public data). 

  • Expanding CIP’s pilot alignment assemblies into a repeatable, scalable pipeline for public input into AI across the development lifecycle, focusing on risk assessments, API access, and speed of deployment. 

  • Creating multi-stakeholder standards-setting bodies for third-party auditing of AI systems (covering red-teaming, evaluations, and more traditional audits), modeled after polycentric fora for Internet governance.
