“I don’t have a wife” Uno card!
Oh poor baby, do you need the dishwasher to wash your dishes? Do you need the washing machine to wash your clothes? You can’t do it?
how about fucking your wife?
nah, I want ChatGPT to be my wife, since I don’t have a real one
/s
I can see how this is going to be a real cuckold kink in a few years
I just had Copilot hallucinate 4 separate functions, despite me giving it 2 helper files for context that contain all possible functions available to use.
AI iS tHe FuTuRE.
I want to ask a chatbot for an AI slop picture, then an essay, then to fuck my wife.
Kwebbelkop making an AI generated video from an AI generated prompt for his AI to react to:
“you wouldnt let AI fuck your wife”
I don’t need ChatGPT to fuck my wife, but if I had one and she and ChatGPT were into it, then I would like to watch ChatGPT fuck my wife.
good on you!
Had it write a simple shader yesterday because I have no idea how those work. It told me how to use the mix and step functions to optimize for GPUs, then promptly added some errors I had to find myself. Actually not that bad, because now, after fixing it, I do understand the code. Very educational.
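For anyone curious, the `mix` and `step` functions mentioned here are GLSL built-ins; their names and semantics follow the GLSL spec, but this minimal Python sketch of their behavior is purely illustrative:

```python
# Python sketches of the GLSL built-ins `mix` and `step`.
# On a GPU these are branchless, which is why shader code prefers
# them over if/else (branches cause thread divergence).

def mix(a, b, t):
    # Linear interpolation: returns a when t == 0.0, b when t == 1.0.
    return a * (1.0 - t) + b * t

def step(edge, x):
    # 0.0 below the edge, 1.0 at or above it.
    return 0.0 if x < edge else 1.0

# Branchless version of "if x >= 0.5 use value_b, else value_a":
def pick(value_a, value_b, x):
    return mix(value_a, value_b, step(0.5, x))
```

Combining the two, as in `pick`, is the standard trick for replacing a conditional with straight-line arithmetic.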
This is my experience using it for electrical engineering and programming. It will give me 80% of the answer, and the remaining 20% is hidden errors. Turns out the best way to learn math from GPT is to ask it a question you know the answer (but not the process) to. Then, reverse engineer the process and determine what mistakes were made and why they impact the result.
Alternatively, just refer to existing materials in the textbook and online. Then you learn it right the first time.
thank you for that last sentence because I thought I was going crazy reading through these responses.
Some people retain information easier by doing than reading.
ok, I believe I finally figured out my view on this. I was worried I was being a grumpy old man just yelling at the AI (I still probably am, but at least I can articulate why I feel this way about my concerns):
It’s not reproducible.
I personally don’t believe that prompting an AI and then “troubleshooting” the output is the best educational tool for the masses to promote to each other. It works for some individuals, but as you can see, the results will always vary with time.
There are so many excellent educational tools that emphasize the “doing” part instead of reading. You don’t need to prompt an AI and then try to fix all the horrible shit when there is always a real statistical chance you will never be able to solve it because the AI gave you an impossible answer to fix.
I get some people do it, some people succeed, and some people are maybe so lonely that this interaction is actually preferable since it seems like some weird sort of collaboration. The reality is that the AI was trained unethically and has so many moral and ethical repercussions that just finding a decent educator or forum/discord to actually engage with is whole magnitudes better for society and your own mental processes.
Shaders are black magic so understandable. However, they’re worth learning precisely because they are black magic. Makes you feel incredibly powerful once you start understanding them.
“I have no math talent, but that’s ok I’ll use a tool to help” - absolutely no issues, math is hard and you don’t need most of it in “real life” (nonsense of course)
“I can’t code so I’ll use a web page maker to help” - all good, learning to code is optional, it’s what you create that matters right?
“Hey AI, break this concept down for me to help me learn it” - surprisingly, still good (though very ill-advised, and also built on plagiarism and putting private tutors out of work…).
“I have no art talent, but that’s ok I’ll use a tool to help” - society melts down because…?
I suppose it could just be a case of being happy to see talents we don’t have replaced by a tool? Then again, it might be artists are better at generating attractive looking arguments for their case.
So basically there are now a couple of studies showing that critical thinking skills are on the decline due to AI use, which is bad because: 1. it makes you easier to manipulate, and 2. how do you check if the AI is right?
Ooh, links please! Might be worth throwing a few cogs into the university’s strategic goals.
“I have no math talent, but that’s ok I’ll use a tool to help”
What tools do you need to replace “math talent”? If you’re talking about calculators - first of all, they’re for arithmetic, not math - and second, they still do not help you “solve math problems”. You need logic, experience and intellect to do that. The only “tool” that can help you is an online forum where someone has already solved it.
“I can’t code so I’ll use a web page maker to help”
You still need to do stuff, think with your brain and spend time to build the web page. You need to have taste and work your sweat (and some tears) into it.
“Hey AI, break this concept down for me to help me learn it”
No, not good, unless you want to be misinformed and/or manipulated.
society melts down because…?
Because a massive group of people were screwed over without their consent to make a tool that is going to devalue their work. If you look closely at the examples you yourself provided, you can see that they all respect the copyright of others and are themselves often good and productive work. AI, on the other hand, was made “at the expense” of us, and we are rightfully mad.
What tools are there to replace math work besides a calculator?
Mathematica is one example that solves integrals and does some elementary proof run-downs for you.
Granted, it is used mostly by STEM students. But I rarely see someone totally forbidding the use of Mathematica as a learning tool.
If you want a more high-school-level tool, then GeoGebra is another great example (and also open source).
Pretty useful to plot the graphs and help you see what you’re getting wrong.
I’m answering just to show that there are indeed mathematical tools used for the in-between of a full math major and a “paltry peasant” who only needs to compute a good-enough function for his problem, be it an engineer working out beam loads, a chemist working out reaction enthalpies, or a biologist trying to find an ODE that best fits the prey/predator data in a given ecosystem.
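The prey/predator case is the classic Lotka–Volterra ODE system; a minimal sketch in plain Python of what such a tool simulates under the hood (the parameter values here are made up purely for illustration, not fitted to any real data):

```python
# Minimal Euler integration of the Lotka-Volterra predator/prey ODEs:
#   dx/dt = alpha*x - beta*x*y    (prey)
#   dy/dt = delta*x*y - gamma*y   (predator)
# Parameter values are illustrative, not fitted to real data.

def lotka_volterra(x0, y0, alpha, beta, delta, gamma, dt=0.001, steps=10000):
    x, y = x0, y0
    for _ in range(steps):
        dx = alpha * x - beta * x * y
        dy = delta * x * y - gamma * y
        x += dx * dt
        y += dy * dt
    return x, y

# Simulate 10 time units; both populations oscillate and stay positive.
x, y = lotka_volterra(10.0, 5.0, alpha=1.1, beta=0.4, delta=0.1, gamma=0.4)
```

A real fitting workflow would wrap this simulation in an optimizer that tunes the four parameters against observed population counts.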
Never used a computer algebra system? Like Wolfram Mathematica, SageMath or Maple? Then we have proof-assistant software like Coq and SMT solvers like cvc5 or Z3.
This is all software that can solve real math problems in an easy way.
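As a toy illustration of the kind of constraint problem SMT solvers handle (a solver like Z3 does this symbolically and at vastly larger scale; this brute-force sketch only shows the shape of the problem):

```python
# Toy constraint problem: find integers x, y with
#   x + y == 10  and  x * y == 21.
# An SMT solver like Z3 solves such systems symbolically; here we
# brute-force a small range just to show what is being asked.

def solve():
    for x in range(-50, 51):
        for y in range(-50, 51):
            if x + y == 10 and x * y == 21:
                return x, y
    return None
```

The real tools scale to constraints over huge or unbounded domains, where enumeration like this is hopeless.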
Oh dear…
Yes, copyright owners, but not the rights of the creator. Mathematical research is part of the publishing industry, and that strips the rights from the creators of such works. Their work is mislabelled as discovery, and no protection is offered.
That lovely tool you use to make a website? Yeah, £10 says there is open-source code misappropriated in there (much as AI-generated code is pirated from GitHub, a lot of programs “borrow” code).
Surely the mathematician and coder have equal claims to anger? It is their works being stolen too?
People are constantly getting upset about new technologies. It’s a good thing they’re too inept to stop these technologies.
People are also always using one example to illustrate another, also known as a false equivalence.
There is no rule that states all technology must be considered safe.
Every technology is a tool - both safe and unsafe depending on the user.
Nuclear technology can be used to kill every human on earth. It can also be used to provide power and warmth for every human.
AI is no different. It can be used for good or evil. It all depends on the people. Vilifying the tool itself is a fool’s argument that has been used since the days of the printing press.
While this may be true for technologies, tools are distinctly NOT inherently neutral. Consider the automatic rifle or the nuclear bomb. In the rifle, the technology of the mechanisms in the gun is the same precision-milled clockwork engineering that is used for worldwide production automation. The technology of the harnessing of a nuclear chain reaction is the same, whether enriching uranium for a bomb or a power plant.
HOWEVER, BOTH the automatic rifle and the nuclear bomb are tools, and tools have a specific purpose. In these cases, that SOLE purpose is to, in an incredibly short period of time, with little effort or skill, enable the user to end the lives of as many people as possible. You can never use a bomb as a power plant, nor a rifle to alleviate supply shortages (except, perhaps, by a very direct reduction in demand). Here, our problem has never been with the technology of Artificial Neural Nets, which have been around for decades. It isn’t even with “AI” (note that no extant “AI” is actually “intelligent”)! No, our problem is with the tools. These tools are made with purpose and intent. Intent to defraud, intent to steal credit for the works of others, and the purpose of allowing corporations to save money on coding, staffing, and accountability for their actions, the purpose of having a black box a CEO can point to, shrug their shoulders, and say “what am I supposed to do? The AI agent told me to fire all of these people! Is it my fault that they were all <insert targetable group here>?!”
These tools cannot be used to know things. They are probabilistic models. These tools cannot be used to think for you. They are Chinese Rooms. For you to imply that the designers of these models are blameless — when their AI agents misidentify black men as criminals in facial recognition software; when their training data breaks every copyright law on the fucking planet, only to allow corporations to deepfake away any actual human talent in existence; when the language models spew vitriol and raging misinformation with the slightest accidental prompting, and can be hard-limited to only allow propagandized slop to be produced, or tailored to the whims of whatever despot directs the trolls today; when everyone now has to question whether they are even talking to a real person, or just a dim reflection, echoing and aping humanity like some unseen monster in the woods — is irreconcilable with even an iota of critical thought. Consider more carefully when next you speak, for your corporate-apologist principles will only help you long enough for someone to train your beloved “tool” on you. May you be replaced quickly.
You’ve made many incorrect assumptions and set up several strawman fallacies. Rather than try to converse with someone who is only looking to feed their confirmation bias, I’ll suggest you continue your learning by looking up the Dunning–Kruger effect.
Can you point out and explain each strawman in detail? It sounds more like someone made good analogies that counter your point and you buzzword vomited in response.
Dissecting his wall of text would take longer than I’d like, but I would be happy to provide a few examples:
- I have “…corporate-apologist principles”.
— Though wolfram claims to have read my post history, he seems to have completely missed my many posts hating on TSLA, robber barons, Reddit execs, etc. I completely agree with him that AI will be used for evil by corporate assholes, but I also believe it will be used for good (just like any other technology).
- “…tools are distinctly NOT inherently neutral. Consider the automatic rifle or the nuclear bomb” “HOWEVER, BOTH the automatic rifle and the nuclear bomb are tools, and tools have a specific purpose”
— Tools are neutral. They have more than one purpose. A nuclear bomb could be used to warm the atmosphere of another planet to make it habitable. Not to mention any weapon can be used to defend humanity, or to attack it. Tools might be designed with a specific purpose in mind, but they can always be used for multiple purposes.
There are a ton of invalid assumptions about machine learning as well, but I’m not interested in wasting time on someone who believes they know everything.
I understand that you disagree with their points, but I’m more interested in where the strawman arguments are. I don’t see any, and I’d like to understand if I’m missing a clear fallacy due to my own biases or not.
EDIT: now I understand. After going through your comments, I can see that you just claim confirmation bias rather than actually having to support your own arguments. Ironic that you seem to show all of this erudition in your comments, but as soon as anyone questions your beliefs, you just resort to logical buzzwords. The literal definition of the bias you claim to find. Tragic. Blocked.
Blocking an individual on Lemmy is actually quite pointless, as they can still reply to your comments and posts; you just will not know about it, while there can be whole pages of slander about you right under your nose.
I’d say it’s by design to spread tankie propaganda unabated
You know what? They can go ahead and slander me. Fine. Good for them. They’ve shown they aren’t interested in actual argument. I agree with your point about the whole slander thing, and maybe there is some sad little invective, “full of sound and fury, signifying nothing”, further belittling my intelligence to try to console themself. If other people read it and think “yeah that dude’s right”, then that’s their prerogative. I’ve made my case, and it seems the best they can come up with is projection and baseless accusation by buzzword. I need no further proof of their disingenuity.
isn’t that comic gas company propaganda, or am i remembering it wrong?
Do you need a car to go 55 mph? AI is a tool. The problem is the ownership of these tools.
The ownership, energy cost, reliability of responses and the ethics of scraping and selling other people’s work, but yeah.
I wonder if “pete” drew those sad faces himself or if they were computer generated.
they were made by a designer, no?
yeah imma keep it real with you, i ain’t wasting my time writing an essay or sum when i can take one sentence into an AI instead. more free time for me.
however i do not possess the audacity to then claim that i am an “AI writer” or sum
i feel like this is emblematic of the shit state of the world. like ppl should write essays because they wanna say something. if you can have an ai write your essay for you, then why are u even writing an essay in the first place? i know the answer to that is because your school or work commands it, and that’s such alienating bs.
Yeah the AI can by definition only regurgitate ideas. It’s precisely the opposite of why humans write essays.
Imagine having your job be to learn and you make an autocorrect do it
Yay! More free time to eat the crayons!
or, you know, pursue my hobbies instead of slaving away at my job
You write essays at your job?
it was an example.
i am still an apprentice so i have to write a lot at vocational college. which i outsource to ChatGPT quite often
Did you edit your comment so that mine doesn’t make sense?
That’s skeevy.
no i didn’t. i didn’t edit a single comment in this thread
My bad then.
I write code at my job, and it makes me faster at writing code.
yeah your precious time