The National Economic Council convened a symposium at NYU's Information Law Institute in July, and they've released their report: 25 crisp (if slightly wonky) pages on how AI could increase inequality, erode accountability, and lead us into temptation, along with recommendations for how to prevent this, from involving marginalized and displaced people in AI oversight, to increasing the diversity of AI researchers, to modifying the Computer Fraud and Abuse Act and the Digital Millennium Copyright Act to clarify that neither stands in the way of independent auditing of AI systems.
As many noted during the AI Now Experts’ Workshop, the means to create and train AI systems are expensive and limited to a handful of large actors. Or, put simply, it’s not possible to DIY AI without significant resources. Training AI models requires a huge amount of data – the more the better. It also requires significant computing power, which is expensive. This limits fundamental research to those who can afford such access, and thus limits the possibility of democratically creating AI systems that serve the goals of diverse populations. Investing in foundational infrastructure and access to appropriate training data could help even the playing field. Similarly, opening up the development and design process within existing industry and institutional settings to diverse disciplines and external comment could help create AI systems that better serve and reflect diverse contexts and needs.
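To put rough numbers on "expensive" (a back-of-envelope sketch only; every constant below is an illustrative placeholder I've picked for this post, not a figure from the report), the arithmetic has a simple shape: compute grows with model size times data size, and compute rents by the hour.

```python
# Rough cost model for a single training run. Every constant here is a
# hypothetical placeholder, chosen only to show the shape of the math.
params = 1e9               # model parameters
tokens = 100e9             # training tokens seen ("the more the better")
flops_per_param_token = 6  # rule-of-thumb FLOPs per parameter per token
device_flops = 10e12       # sustained throughput of one rented accelerator (FLOP/s)
price_per_hour = 2.50      # hourly rental price of that accelerator (USD)

total_flops = params * tokens * flops_per_param_token
device_hours = total_flops / device_flops / 3600
cost = device_hours * price_per_hour

print(f"{total_flops:.2e} FLOPs ≈ {device_hours:,.0f} device-hours ≈ ${cost:,.0f}")
```

Under these made-up numbers a single run lands in the tens of thousands of dollars, and doubling the data doubles the bill, which is exactly why "the more the better" locks out anyone without deep pockets.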
…AI systems also have manifold impacts within labor markets, beyond "replacing workers." They shift power relationships, employee expectations, and the role of work itself. These shifts are already having profound impacts on workers, so it is important that our understanding of these impacts take into account the ways in which fair and unfair practices are constituted as AI systems are introduced. For example, if a company whose AI system effectively acts as management is treated as a technology service company rather than as an employer, its employees may be left without existing legal protections.
…AI and predictive systems increasingly determine whether people are granted or denied opportunities. In many cases, people are unaware that a machine, and not a human process, is making life-defining decisions. Even when they are aware, there is no standard process for contesting an incorrect characterization or pushing back against an adverse decision. We need to invest in research and technical prototyping to ensure that basic rights and liberties are respected in contexts where AI systems are increasingly used to make important decisions.
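One direction that kind of technical prototyping could take (my sketch, not a design from the report; all names and fields are hypothetical) is making every automated decision emit a structured, contestable record: what was decided, by which model version, on which inputs, with an explicit channel for the affected person to push back.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Hypothetical audit trail for one automated decision, so the
    affected person has something concrete to see and contest."""
    subject_id: str
    outcome: str        # e.g. "loan_denied"
    model_version: str  # which system made the call
    inputs_used: dict   # the features the model actually saw
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    contested: bool = False
    contest_reason: str = ""

    def contest(self, reason: str) -> None:
        """Flag the decision for human review -- the standard
        pushback process the report says is missing today."""
        self.contested = True
        self.contest_reason = reason

record = DecisionRecord(
    subject_id="applicant-0042",
    outcome="loan_denied",
    model_version="credit-model-2.3",
    inputs_used={"income": 52000, "zip": "10003"},
)
record.contest("Income field is outdated; see attached pay stubs.")
```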
…In order to conduct the research necessary for examining, measuring, and evaluating the impact of AI systems on public and private institutional decision-making, especially in terms of key social concerns such as fairness and bias, researchers must be clearly allowed to test systems across numerous domains and via numerous methodologies. However, certain U.S. laws, such as the Computer Fraud and Abuse Act (CFAA) and the Digital Millennium Copyright Act (DMCA), threaten to limit or prohibit this research by outlawing "unauthorized" interactions with computing systems, even publicly accessible ones on the internet. These laws should be clarified or amended to explicitly allow for interactions that promote such critical research.
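To see what that research looks like in practice, here's a minimal sketch of one classic methodology, the paired-probe audit: send a publicly accessible scoring system two profiles identical except for a protected attribute and compare the outcomes. The endpoint URL and payload fields below are hypothetical; the point is that a broad reading of "unauthorized" under the CFAA could treat exactly this kind of probing as illegal.

```python
import json
from urllib.request import Request, urlopen

# Hypothetical public scoring endpoint, standing in for any deployed
# AI decision system a researcher might audit from the outside.
SCORE_URL = "https://example.com/api/v1/score"

def get_score(profile: dict) -> float:
    """POST one applicant profile and return the model's score."""
    req = Request(
        SCORE_URL,
        data=json.dumps(profile).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)["score"]

def paired_probe(base: dict, attribute: str, a: str, b: str) -> float:
    """Audit-study design: two profiles that differ only in one
    protected attribute; a persistent score gap across many such
    pairs is evidence of disparate treatment."""
    return get_score({**base, attribute: a}) - get_score({**base, attribute: b})

if __name__ == "__main__":
    base_profile = {"income": 52000, "years_employed": 6, "zip": "10003"}
    gap = paired_probe(base_profile, "gender", "female", "male")
    print(f"score gap (female - male): {gap:+.3f}")
```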
…In many cases, those living with the impacts of AI systems will be the foremost experts on their contexts and outcomes. Especially given the current lack of diversity within the AI field, it is imperative that those affected by the deployment of AI systems be substantively engaged in providing feedback and design direction, and that their suggestions form a feedback loop that can directly influence AI systems' development and the broader policy frameworks around them.
…As a field, computer science suffers from a lack of diversity. Women in particular are heavily underrepresented, and the situation is even more severe in AI: while a handful of AI academic labs are run by women, only 13.7 percent of attendees at the most recent Neural Information Processing Systems conference, one of the field's most important annual gatherings, were women. A community that lacks diversity is less likely to consider the needs and concerns of those not among its membership. When those needs and concerns are central to the social and economic institutions in which AI is being deployed, it is essential that they be understood and that AI development reflect these critical perspectives. Focusing on diversity among those creating AI is key. Beyond gender and the representation of protected classes, it is also important that diversity extend to disciplines beyond computer science (CS), creating development practices that draw on expertise from those trained in the study of the applicable social and economic domains.
The AI Now Report [New York University’s Information Law Institute]