Honestly, I’m increasingly feeling that things like this are a decent use for a technology like ChatGPT. People suck and definitely have ulterior motives to forward their group. With AI, there’s at least some degree of impartiality. We definitely need to regulate the shit out of it and make clear expectations for transparency in its use, but we’re not necessarily doomed. (At least in this specific case.)
There’s no impartiality in the training data an LLM derives it’s answers from. This is no better than anyone who owns a media consortium or lobbying group writing a bill for a politician. An LLM can easily be directed to reflect or mirror the prompts that it is given. Prime example are the exploit prompts that have been found that can get chat gpt to reveal training data.
The issue is when bills are not written by politicians or when they skirt committee which is what lobbyists do. LLMs are just another tool for that, except they’re even worse as there are fewer humans employed in the process.
As far as answering
*What prompts exactly were used? Is it at all independently repeatable? *
did you read the article? the draft was voted on by a committee, so it had to be read by other people. honestly, work like this is perfect for LLMs like chatGPT. what is concerning about this for you?
Removed by mod
Honestly, I’m increasingly feeling that things like this are a decent use for a technology like ChatGPT. People suck and definitely have ulterior motives to forward their group. With AI, there’s at least some degree of impartiality. We definitely need to regulate the shit out of it and make clear expectations for transparency in its use, but we’re not necessarily doomed. (At least in this specific case.)
There’s no impartiality in the training data an LLM derives it’s answers from. This is no better than anyone who owns a media consortium or lobbying group writing a bill for a politician. An LLM can easily be directed to reflect or mirror the prompts that it is given. Prime example are the exploit prompts that have been found that can get chat gpt to reveal training data.
https://www.businessinsider.com/google-researchers-openai-chatgpt-to-reveal-its-training-data-study-2023-12?op=1
https://news.mit.edu/2023/large-language-models-are-biased-can-logic-help-save-them-0303
https://www.technologyreview.com/2020/12/10/1013617/racism-data-science-artificial-intelligence-ai-opinion/
https://arxiv.org/pdf/2304.00612.pdf
I think that’s where the transparency comes it. What prompts exactly were used? Is it at all independently repeatable?
That’s where the advantage lies. With humans, the reasoning is truely a black box.
Also, I’m not arguing that LLMs are free of bias, just that they have a better shot at impartiality than any given politician.
The issue is when bills are not written by politicians or when they skirt committee which is what lobbyists do. LLMs are just another tool for that, except they’re even worse as there are fewer humans employed in the process.
As far as answering
*What prompts exactly were used? Is it at all independently repeatable? *
That’s all in the provided links.
did you read the article? the draft was voted on by a committee, so it had to be read by other people. honestly, work like this is perfect for LLMs like chatGPT. what is concerning about this for you?
Removed by mod
why should it concern me? I don’t understand the danger.
Removed by mod
fair point to make, and I mostly agree.
Removed by mod
Because life requires actual human participation and you can’t be a lazy asshole who lets AI, or anything else for that matter, do the living for you.
Removed by mod
Wall-E
Removed by mod
I agree, but this is work for language written in law. how does what you say tie into this scenario?
side note, I think everyone who believes what happened here is not good has never collaborated on writing a large document before.
Removed by mod
Removed by mod
Projection