
Yann LeCun of Meta, alongside more than 70 other signatories, has endorsed an open approach to AI development, challenging the proprietary model. Their open letter, timed to coincide with the U.K.'s AI Safety Summit, calls for transparency to mitigate AI risks and throws the debate between open-source and corporate-controlled AI into sharp relief. The signatories argue that public access and scrutiny improve AI safety by enabling independent research, accountability, and innovation, countering the belief that proprietary control is the only way to protect society from AI threats.
The appeal came as the U.K. convened global leaders at the AI Safety Summit in Bletchley Park. The group, backed by Mozilla, stressed the urgent need for openness and transparency in AI to address both current and future risks, framing that openness as a global priority.
The ongoing debate in AI between open and proprietary approaches echoes similar arguments that have played out in software over the past few decades. Meta's Chief AI Scientist, Yann LeCun, took to X (formerly Twitter) to accuse companies such as OpenAI and DeepMind of attempting to monopolize AI through "regulatory capture," warning of the danger of a few corporations dominating the field.
This dispute is part of a broader conversation about AI governance, including concerns raised by leaders at the AI Safety Summit. Some argue that open-source AI could be misused by malicious actors, for example to help design chemical weapons, while others see the centralization of AI control as the greater threat, one that hinders both innovation and safety.
The open letter signed by LeCun and others, including Andrew Ng of Google Brain and Coursera, Julien Chaumond of Hugging Face, and Brian Behlendorf of the Linux Foundation, acknowledges the risks and potential abuses of AI. Yet it argues that openness, by enabling independent research, public scrutiny, and lower barriers to entry, can foster safer and more responsible AI development. It counters the belief that strict proprietary control is the best way to safeguard society from AI-induced harm, advocating instead for policy debate and decision-making informed by open models.