- cross-posted to:
- [email protected]
I am extremely disappointed in the recent Senate hearing on AI regulation. The level of authoritarianism and control required to truly regulate AI would bring about a draconian world in which the very values we wish to protect are destroyed.
The three witnesses failed to address the positives AI will bring and did not focus on solving the root of the problem at hand. Instead they targeted the very freedoms of our world in an attempt to regulate AI. They asked Congress to control online speech to prevent election interference, to mandate ID verification on social platforms, and even to require DRM on all chips to ensure users cannot jailbreak models.
The witnesses were Dario Amodei (CEO of Anthropic), Yoshua Bengio (Mila – Quebec AI Institute), and Stuart Russell (Professor at the University of California, Berkeley).
I have listed the most significant and concerning statements first.
- 1:25:00: Stuart Russell tells Senate members that the only way to stop users from jailbreaking models is to have the chips themselves reject certain prompts and models (DRM).
- 38:24: Yoshua Bengio discusses how fine-tuning of open source models is used to circumvent censorship, and says he wants the government to regulate this.
- 41:30: Stuart Russell wants every chat with any LLM to be saved so it can be compared against a database to determine the origin of a conversation.
- 39:50: Yoshua Bengio wants the government to prevent open source AI from being released going forward.
- 37:59: Dario Amodei wants to require watermarks on the output of all AIs, including open source ones. This is a privacy violation that is unjustified given the good LLMs are doing.
- 56:24: Senator Klobuchar seems to be pushing for an answer on regulating open models to prevent them from being used for scams, even though the benefits open source models provide significantly outweigh the downsides.
- 2:07:18: Yoshua Bengio discusses regulation of open source, and especially of open source going into the future.
It is also important to mention that not all points raised in the hearing were unjustified. Some suggestions, such as requiring larger corporations to be transparent about red-flag capabilities in an AI, are reasonable. However, some of the requests made to Congress show a disregard for the basic freedoms and values of democracy, which will be destroyed if regulation is not considered extremely carefully.
Full Hearing: https://piped.video/watch?v=hm1zexCjELo
Yeah, they’re eager to gain a monopoly on AI even (or maybe especially) if it means average people miss out on all the potential advantages. I could list positive outcomes of AI for hours. Enabling small and independent businesses to compete with current monopolies is one of the key advantages, and that scares the rich; enabling people to easily access things like healthcare and key services scares them even more…
Imagine a world where a poor person like you and a rich person like Elon both go to court on the same charge. Your low-paid legal team has about a dozen billable hours for research, done by one not especially motivated lawyer, while Elon’s team has the top minds, a vast staff using the most expensive tools to dig up information, and the budget to hire in experts. Who is more likely to get off? But a good legal AI able to guide your tired lawyer to the right objections and filings, and to instantly pull up all the relevant information on the charge, would mean rich people no longer have such an advantage…
And take work and housing. You’re working long hours and need somewhere to live, so you’ll likely feel pressured into clinging to whatever you can get: researching a move is hard, and getting financing approved is hard too, especially when every attempt involves endless fees, forms, and hoops to jump through. So when your landlord does something you don’t like, you just eat it. The legal AI mentioned above would totally change that landscape. You could simply ask your computer ‘is my landlord allowed to let himself in to watch me sleep?’ and, no matter how poor you are or how little you know the culture or legal system, it would tell you your rights and how to enforce them. An AI able to ask you basic questions about your needs, search all available rental options, and cross-reference other sources of information would be a game changer too. For regular people it could vastly reduce the stress and fear around housing, which sucks for rich people who use the threat of destitution to keep people locked into bad situations.
These two AI implementations would also benefit average people by letting them learn about and collect any help or resources they’re entitled to. If an AI knew the law and my situation, it could tell me ‘you’re entitled to a carer subsidy because you look after your parents’, or ‘you can claim these items against your tax; I’ve added them to the records and we can choose the best option when filing’, or ‘your car insurance is higher than other equivalent options, do you want me to switch it and save you $87?’
Again, rich people have accountants who reduce their tax to zero by knowing exactly what to do, while we have to struggle through purposely obtuse and opaque systems, so we end up paying more than we should and missing out on everything we’re entitled to. All the extra money we pay simply because we don’t know we have other options goes into rich people’s pockets. This is even more true for people with physical and educational handicaps, people in difficult situations, immigrants, and victims of domestic abuse or poor home situations…
Something that really scares them is the possibility of the AI telling you ‘according to personal records kept privately on your secure home system, you purchased a product that has been discovered to have been manufactured using illegal and dangerous materials; do you want to join the class action lawsuit against DuPont?’, then 18 months later saying ‘the DuPont lawsuit was successful, do you want me to send them the information required to claim $1530?’
The law is actually often pretty fair and reasonable; long debates go into making sure it all makes sense and is even-handed. The problem is that access to the law depends entirely on how much money you have. Being able to file a case with all the right boxes ticked would ruin their little game.
And that’s only one strand of not especially complex AI: all of these are tasks that an LLM paired with verifier networks and task structuring could do, probably only one or two generations beyond the current models. Enabling collaborative design and community projects is another thing that could hugely benefit regular people. DRM ink for printers exists because most people don’t know they have other options and feel brand names are better supported, but an AI able to code printer drivers on the fly removes that issue: everything becomes true plug-and-play, and it also lets consumers pick better options. I obsessively research tech before a purchase and still often discover better options later that I would have chosen. Imagine being able to ask ‘what are my best options for buying a printer?’, answer a couple of follow-up questions, and get a short list of ones I can actually buy, with comparisons and test data factored into its reasoning that would have taken me an evening in pandas just to get a ballpark understanding of. And because it has read all the help forums, it can tell me things like ‘people have reported issues with this printer and your current hardware, though we could upgrade your firmware to work around the problem’.
The possibilities for AI making our lives better are huge. When I hear people who have nothing good to say about it, all I can think is that they must consider everyday people being able to live better lives a bad thing; they must see it as something that threatens their existence as modern-day robber barons.
You can’t give powerful tools to the populace that might threaten your control over them.
Exactly why they want corporations in charge of them, but it’ll only delay the inevitable.
So what I’m reading is that we should download those open source, non-DRM AI projects now while we still can and hoard all the data. Thanks for the warning.
Of course. I know some open source devs who advise backing up raw training data, LoRAs, and the original base models used for fine-tuning.
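For anyone doing those backups, an integrity check helps confirm the archived weights haven’t bit-rotted or been swapped out. A minimal sketch in Python (the backup directory layout is hypothetical), writing a SHA-256 manifest you can re-verify later:

```python
import hashlib
from pathlib import Path


def sha256_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so multi-GB weight files fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def write_manifest(backup_dir: Path) -> Path:
    """Record a checksum line for every file under the backup directory."""
    manifest = backup_dir / "SHA256SUMS"
    lines = []
    for p in sorted(backup_dir.rglob("*")):
        if p.is_file() and p.name != "SHA256SUMS":
            lines.append(f"{sha256_file(p)}  {p.relative_to(backup_dir)}")
    manifest.write_text("\n".join(lines) + "\n")
    return manifest
```

The resulting `SHA256SUMS` file uses the same format as `sha256sum`, so a later `sha256sum -c SHA256SUMS` run inside the backup directory verifies everything.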
Politicians sent out an open letter in protest when Meta released LLaMA 2. It is not unreasonable to assume they will intervene before the next release unless we speak out against this.
I saw this coming. The people who have power are directly threatened by AI in the hands of the working class. Chips will be designed to report on their users, or simply refuse to work unless you are an “authority”. There will be a big divide between what the populace can do and what those in power can do. They also know this is the best time to implement draconian controls, because 98% of the population doesn’t understand the implications.
Thanks for giving us the highlights. I just hope that, if AI has as big an impact on our lives as some of us think, it somehow gets democratized and isn’t just something under the tight control of the big corporations with $50M to spare.
I definitely don’t agree with their opinions, and I think it would be unconstitutional as hell to implement the measures that they’re advocating for, but it should be noted that if they are successful, we’ll see such a spectacular torrenting and dark web LLM scene. I don’t think there’s any stopping this. They can try and they will lose.
What do they mean by watermarks? Why is it a bad idea to know which AI, if any, produced something?
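In this context “watermarking” usually means statistically biasing a model’s token choices so generated text can later be identified. A toy sketch of the detection side of one published family of schemes, where a “green list” of tokens is re-derived from each preceding token (real schemes work on tokenizer IDs and use proper significance tests, so this is only an illustration):

```python
import hashlib
import random


def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens that fall in the 'green list' seeded by the previous token.

    A watermarking sampler biases generation toward each green list; a
    detector then checks whether this fraction is suspiciously high.
    Ordinary human text should score near 0.5 with a half-size green list.
    """
    hits = 0
    for prev, cur in zip(tokens, tokens[1:]):
        # Re-derive the same green list the sampler would have used at this step.
        seed = int(hashlib.sha256(prev.encode()).hexdigest(), 16)
        rng = random.Random(seed)
        green = set(rng.sample(vocab, len(vocab) // 2))  # half the vocab is "green"
        if cur in green:
            hits += 1
    return hits / max(len(tokens) - 1, 1)
```

The privacy objection in the thread is that a mandatory watermark ties text back to a specific model (and potentially a user or API key), and for open source models anyone can simply remove the sampling bias, so honest users bear the cost while bad actors route around it.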
Thanks for the post
DRM on the chip doesn’t seem feasible to me. In the end, the chip doesn’t know what it is doing; it just does math. So how could DRM at that level tell that it is running a forbidden model, or that a jailbreak prompt is being executed? Finding out what a program does is already non-trivial when you have the source code, and the DRM on the chip wouldn’t even have that.
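To make that point concrete: at the level the chip sees, “running a model” is mostly multiply-accumulate loops over numbers, and the same instruction stream serves any weights. A toy illustration (the weights here are made up, and integer-valued only to keep the arithmetic exact):

```python
def matmul(a: list[list[int]], b: list[list[int]]) -> list[list[int]]:
    """The kind of primitive a chip actually executes: multiply-accumulate
    loops. Nothing here reveals whose weights `b` holds or what they 'say'."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [
        [sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
        for i in range(rows)
    ]


# Two hypothetical models with identical shapes but different weights.
approved_weights = [[1, 2], [3, 4]]
forbidden_weights = [[4, 3], [2, 1]]
activations = [[1, 2]]

# From the chip's perspective, both calls are the same sequence of operations.
out_a = matmul(activations, approved_weights)
out_b = matmul(activations, forbidden_weights)
```

The only difference between the “approved” and “forbidden” runs is the data sitting in memory; there is no instruction-level signature for DRM to latch onto unless the chip somehow inspects and interprets the weights themselves.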
It’s impossible to regulate open source, AI or not. Doing so would be another brick in the wall, crippling whatever region tries it. Even though some countries want to regulate AI to try to control its power, it’s already too late. Bad actors aren’t waiting for anything; they have a head start. They don’t operate on ethics or morals, so they have no problem doing what they want, and even discounting that, other countries have their own citizens who can work on AI. The cat’s out of the bag. The elite see the power and danger AI can bring, and think that by restricting who can utilise it they can stay safe. But for how long?