We’re Not Ready to Regulate AI


It’s an AI world, and we’re all just living in it. It drafts contracts, powers call centers, and analyzes medical scans. Depending on who you ask, it is either transforming society or straight-up ruining it. Every major tech company is racing to integrate artificial intelligence. Startups are springing up by the dozen, hyping their use of AI the way dot-com companies used to brag about “e-commerce.”

But hype aside, AI isn’t all smoke and mirrors. It is rapidly moving from novelty to infrastructure, and with that shift comes the teeny, inconvenient question: what happens when the software goes sideways? Is anyone actually regulating this?

So far, the answer feels like a resounding “No, but we probably should figure something out.” The real question is whether we’re ready to do that, and whether our lawmakers are. Based on their track record with technology over the last few decades... let’s just say it’s not a sure thing.

We Regulate Planes. We Regulate Cars. But Not AI?

We’re already seeing problems that challenge the boundaries of current law. In 2023, a self-driving GM Cruise vehicle in San Francisco ran over a pedestrian who had just been struck by a human-driven car and thrown into its path, then dragged her roughly 20 feet. The company initially blamed "confusion" in the sensor readings. The California DMV responded by suspending Cruise’s autonomous vehicle permits, but who is ultimately responsible here?

Or take the infamous Air Canada chatbot incident, where the airline’s AI assistant invented a fake policy about bereavement fare discounts. A Canadian tribunal ruled that, yes, Air Canada was still liable even though no such policy existed.

Cases like these are just the tip of the iceberg. As AI becomes further integrated into our society, it will take on more tasks that once relied on human judgment: triaging patients and interpreting scans in hospitals, assisting law enforcement, screening job applicants, and more. To be fair, humans aren’t perfect at these tasks either. Bias in law enforcement or hiring practices? Say it ain’t so! But the key point is that when humans make those mistakes, there’s someone to hold accountable.

What Are the Current Laws?

AI regulation in the U.S. is a bit of a mess. There are some rules here and there, but no one really has a handle on the whole thing. For example, the National Highway Traffic Safety Administration (NHTSA) issued voluntary guidelines for self-driving cars, and of course, each state has its own laws because consistency is overrated.

As for Congress, they’ve made a few attempts. In the self-driving car realm, they introduced bills like the SELF DRIVE Act and AV START, but both failed to gain traction thanks to the usual infighting. But self-driving cars aren’t the only AI-related issue on the table. In 2019, Congress introduced the Algorithmic Accountability Act, which would have required companies to audit automated decision systems (think facial recognition and predictive policing) for bias and privacy risks. That one didn’t pass either.

There have been a few committee hearings: the Senate Judiciary Committee discussed AI and privacy in 2023, and the House Oversight Committee looked at AI’s potential impact on jobs in 2024. The result? Mostly more hot air and the realization that lawmakers may not fully understand what they’re talking about (more on that later).

Meanwhile, across the pond, the European Union seems to have figured things out a bit better. The Artificial Intelligence Act (AI Act), adopted in 2024, classifies AI systems by risk and mandates transparency and accountability obligations scaled to that risk.

Is Congress Even Ready for This?

Sooner or later, as AI takes over more and more corners of business, transportation, medicine, and basically everything else that probably should be regulated, Congress will have to get serious about it.

AI is a technically complex subject. But don’t worry—the United States Congress is filled with the sharpest minds our nation has to offer. Surely they’ll rise to the occasion. In the meantime, let’s take a look at how they’ve handled technology in the past.

TikTok Uses Wi-Fi

Hailing from my home state of North Carolina, we have Rep. Richard Hudson delivering a real gem during the 2023 TikTok hearings. He asked TikTok CEO Shou Chew, “Does TikTok access the home Wi-Fi network?”

Chew, visibly confused, replied, “Only if the user turns on the Wi-Fi... I’m sorry, I may not understand the question.” Which, to his credit, is probably the most diplomatic way to say, “What the fuck are you talking about?”

To be fair, TikTok’s data practices aren’t exactly above suspicion. There are real concerns about privacy, data sharing, and potentially shady behavior on user devices. But the shocking revelation that TikTok connects to Wi-Fi to access the internet feels less like a cybersecurity bombshell and more like me trying to explain to my grandmother that her emails are not gone forever because she dropped her iPad on the kitchen floor.

Hudson's concern seemed to be that TikTok might use your home Wi-Fi to spy on other devices. That’s a serious accusation, but also one that would be easy to confirm or debunk with basic network monitoring.
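As a rough illustration of what that monitoring might look like, here’s a minimal sketch using Python’s scapy library (assumptions on my part: scapy installed, root privileges, and "aa:bb:cc:dd:ee:ff" as a placeholder for the device’s MAC address). It simply watches for ARP "who-has" requests, the kind of traffic a device would generate if it were sweeping your home network looking for other devices.

# Minimal sketch: watch whether a specific device is probing the local network.
# Assumes scapy is installed (pip install scapy) and the script runs as root.
from scapy.all import ARP, sniff

SUSPECT_MAC = "aa:bb:cc:dd:ee:ff"  # placeholder: MAC address of the device under test

def report_probe(pkt):
    # ARP op 1 is a "who-has" request; a device scanning the subnet for
    # neighbors will emit a burst of these in quick succession.
    if pkt[ARP].op == 1 and pkt.src.lower() == SUSPECT_MAC:
        print(f"{pkt.src} is asking who has {pkt[ARP].pdst}")

# Capture only ARP traffic and report anything coming from the suspect device.
sniff(filter="arp", prn=report_probe, store=False)

Run something like this while the app is open: a flood of requests sweeping the subnet would be genuinely suspicious, while silence suggests the app is just, well, using Wi-Fi to reach the internet.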

The hearing was meant to tackle real issues, like how TikTok handles user data and whether it follows industry regulations. Instead, we got a sideshow from Concord, North Carolina asking if TikTok connects to Wi-Fi. It does. Thank you, Representative. Next question.

Senator, We Run Ads

One of the more meme-worthy moments from the 2018 Facebook hearings, held in the wake of the Facebook–Cambridge Analytica scandal, came from Senator Orrin Hatch. The octogenarian senator asked:

Hatch: “How do you sustain a business model in which users don’t pay for your service?”
Zuckerberg: “Senator, we run ads.”

It’s the kind of moment that makes you wonder how any useful conversation could follow. If a sitting senator doesn’t understand the most basic part of Facebook’s business model, what exactly are we expecting from the rest of the hearing? A detailed discussion on data retention policies or user consent under GDPR/CCPA? Probably not. Maybe we can give Senator Hatch the benefit of the doubt and say he was just asking this question rhetorically.

In the end, Facebook was fined $5 billion by the FTC for data privacy violations. But thank you for your contributions, Senator.

A Series of Tubes?

Back in 2006, net neutrality was a hot topic, and Alaska Senator Ted Stevens was one of the leading voices of opposition. Arguing against net neutrality, Stevens reached for a bizarre metaphor to describe the internet, apparently hoping to lend weight to his judgment on how it should be regulated. The result was a quote so absurd that it has its own Wikipedia article:

"Ten movies streaming across that, that Internet, and what happens to your own personal Internet? I just the other day got... an Internet [email] was sent by my staff at 10 o'clock in the morning on Friday. I got it yesterday [Tuesday]. Why? Because it got tangled up with all these things going on the Internet commercially. [...] They want to deliver vast amounts of information over the Internet. And again, the Internet is not something that you just dump something on. It's not a big truck. It's a series of tubes. And if you don't understand, those tubes can be filled and if they are filled, when you put your message in, it gets in line and it's going to be delayed by anyone that puts into that tube enormous amounts of material, enormous amounts of material."

Putting aside the fact that he refers to your "personal Internet," calls email "Internet," and seems confused about why his email was delayed (spoiler: it probably had nothing to do with net neutrality), the "series of tubes" part became the headline from that rant. Stevens was widely mocked in the press for being a loud voice trying to regulate something he clearly did not understand very well.

It’s an apt parallel to today’s push to regulate AI. Once again, we find ourselves at a crossroads where those tasked with understanding and regulating complex technologies may be just a little bit out of their depth.

Conclusion

AI isn’t some distant future concept; it’s already here, reshaping everything from customer service to criminal justice. When the consequences include misdiagnosed patients, wrongful arrests, or lives lost on the road, regulation isn’t optional. It’s essential.

The point of this post isn't that Congress is full of idiots (though... some of the transcripts make a strong case). The point is that tech evolves faster than our legislative system is built to handle, and unless we find a way to bridge that gap, we’re going to keep making laws about tomorrow’s tech using yesterday’s understanding.


The team at /dev/null Digest is dedicated to offering lighthearted commentary and insights into the world of software development. Have opinions to share? Want to write your own articles? We’re always accepting new submissions, so feel free to contact us.
