
AI regulation needn’t be a reinvention. We are in the midst of a transformative technological revolution, and AI’s impact on work and society is already unmistakable. As investment pours in, calls for responsible regulation are growing louder. Rather than starting from scratch, regulators can follow the balanced approach that has successfully governed the internet for decades. A parallel to the Secure Sockets Layer (SSL)/Transport Layer Security (TLS) protocols is proposed: a lightweight, transparent certification standard overseen by independent bodies, one that protects consumers and core principles while preserving innovation.
We find ourselves amidst one of the most transformative technological revolutions of the past century. This era, akin to the tech boom of the 2000s or even the Industrial Revolution, is marked by the disruption of essential societal functions through tools that some hail as innovative and others find unsettling. While the perceived advantages continue to divide public opinion, there’s little doubt about AI’s far-reaching impact on the future of work and communication.
This sentiment is echoed by institutional investors. Over the last three years alone, venture capital investment in generative AI has surged by 425%, reaching a staggering $4.5 billion in 2022, as reported by PitchBook. This surge in funding is driven largely by the technology’s rapid adoption across industries. Consulting giants like KPMG and Accenture are channeling billions into generative AI to enhance their client services. Airlines leverage new AI models to optimize routes, and even biotechnology firms employ generative AI to advance antibody therapies for life-threatening diseases.
Naturally, this revolutionary technology has quickly caught the attention of regulators. Figures like Lina Khan of the Federal Trade Commission warn of AI’s potential societal risks, citing the prospect of increased fraud, automated discrimination, and collusive price manipulation if the technology is not properly managed.
One of the most prominent instances highlighting the regulatory focus on AI is Sam Altman’s recent testimony before Congress. As the CEO of one of the world’s largest AI startups, Altman emphasized the necessity of government intervention to mitigate the risks posed by increasingly powerful AI models. He, along with other industry leaders, penned an open letter advocating that safeguarding against AI-related threats should be a global priority akin to addressing pandemics and nuclear war.
Altman and regulators like Khan agree on the importance of regulation for ensuring safer technological applications. However, the scope of regulation remains a point of contention. While entrepreneurs seek minimal restrictions to foster an innovative economic environment, government officials strive for broader limits to protect consumers.
Yet both sides overlook the fact that comparable technologies have been regulated successfully for years. The emergence of the internet, search engines, and social media prompted government oversight through laws like the Telecommunications Act, the Children’s Online Privacy Protection Act (COPPA), and the California Consumer Privacy Act (CCPA). Instead of implementing a sweeping framework of restrictive policies that could stifle tech innovation, the U.S. employs a patchwork of policies rooted in fundamental laws governing intellectual property, privacy, contracts, harassment, cybercrime, data protection, and cybersecurity.
These frameworks often draw inspiration from established technological standards and encourage their integration into services and emerging technologies. They also ensure the presence of trusted organizations responsible for upholding these standards in practice.
Consider the example of the Secure Sockets Layer (SSL) protocol and its successor, Transport Layer Security (TLS). These encryption protocols ensure secure data transfer between browsers and servers, satisfying mandates like the CCPA and the EU’s General Data Protection Regulation (GDPR) and shielding customer information, credit card details, and personal data from malicious exploitation. Certificates issued by trusted certificate authorities (CAs) validate a server’s identity, assuring users that their data travels encrypted to its intended destination.
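This trust mechanism is easy to observe firsthand. The short Python sketch below, using only the standard library, opens a TLS connection and inspects the CA-issued certificate; the hostname is just a placeholder, and the handshake fails automatically if the certificate chain can’t be validated against trusted roots.

```python
import socket
import ssl

# Fetch and inspect a server's TLS certificate. The default context
# verifies the certificate chain against trusted CA roots; the
# handshake raises an error if validation does not succeed.
hostname = "example.com"  # placeholder; any HTTPS host works

context = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()

# The certificate names the issuing CA and its validity window,
# which is the basis on which the browser extends trust.
print("Issued by:", dict(pair[0] for pair in cert["issuer"]))
print("Valid until:", cert["notAfter"])
```

Crucially, the checking is done by software, not by the user: trust is delegated to independent authorities and enforced mechanically on every connection.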
A similar symbiotic relationship should exist for AI. Rigid licensing standards imposed by government bodies could halt the industry’s progress and disproportionately benefit major players like OpenAI, Google, and Meta, fostering an anti-competitive environment. A lightweight, user-friendly certification standard akin to SSL, overseen by independent CAs, could safeguard consumer interests while still allowing room for innovation.
Such standards could give consumers transparency into AI usage: whether a model is in operation, which foundation model it is built on, and whether it meets an agreed bar of trustworthiness (a sketch of what such a disclosure might look like follows below). In this scenario, the government’s role lies in collaboratively creating and promoting these protocols so they become widely accepted standards.
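No such certificate format exists today, so the following Python sketch is purely illustrative: every field name and value is an assumption about what an SSL-style AI certification might disclose.

```python
from dataclasses import dataclass
from datetime import date

# Purely hypothetical sketch of an SSL-style "AI certificate".
# Every field here is an assumption for illustration; no such
# standard or issuing body exists today.
@dataclass
class AICertificate:
    subject: str             # the AI-powered service being certified
    foundation_model: str    # the underlying model the service builds on
    issuer: str              # the independent certificate authority
    attestations: list[str]  # properties the CA has audited and attested to
    expires: date            # certification lapses without periodic renewal

cert = AICertificate(
    subject="example-support-chatbot",  # placeholder service name
    foundation_model="(disclosed foundation model)",
    issuer="(hypothetical independent AI CA)",
    attestations=["data privacy", "bias testing", "provenance disclosure"],
    expires=date(2025, 1, 1),
)
print(f"{cert.subject}: certified by {cert.issuer}, valid until {cert.expires}")
```

As with TLS, the value would come less from the format itself than from the ecosystem around it: independent issuers, periodic renewal, and mechanical verification by the tools consumers already use.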
At its core, regulation aims to protect fundamental interests such as consumer privacy, data security, and intellectual property, not to stifle technology that users interact with daily. These protections are already upheld on the internet and can extend to AI through similar frameworks.
Since the internet’s inception, regulation has effectively balanced consumer protection with incentives for innovation. Even amid rapid technological advancement, government entities should not deviate from that approach. Regulating AI shouldn’t mean reinventing the wheel, however polarized the political discourse becomes.