
ChatGPT and code generation

Chuck(G)

The curiosity bug has hit me, so I decided to try OpenAI ChatGPT's code generation. I asked it to generate code for a composite USB device (CDC ACM + MSD) using a specific dev library and MCU. I haven't done extensive checking, but it certainly got the descriptors correct--it took two tries, however. The first try just generated code for two separate USB devices, not a composite one. After I pointed that flaw out, it did succeed in generating the correct descriptors and endpoint definitions.
I was favorably impressed--and glad that I'm old. I feel that this technology is going to get very disruptive as it's refined.
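For anyone who hasn't wrangled composite devices: the difference between its first attempt and the second is essentially the shape of the configuration descriptor. Below is a generic sketch of that layout--not the generated code, and not tied to any particular stack or MCU (endpoint numbers, packet sizes, and power values are placeholder assumptions). One configuration, an Interface Association Descriptor binding the two CDC ACM interfaces into a single function, then the mass storage interface, as opposed to two separate devices each with its own configuration.

/* Generic sketch of a composite CDC ACM + MSD configuration descriptor.
 * Not the generated code; no particular USB stack is assumed, and the
 * endpoint numbers, packet sizes, and bMaxPower are placeholders.
 * The structure is the point: one configuration, an Interface
 * Association Descriptor (IAD) tying the two CDC interfaces into one
 * function, then the mass storage interface.  (When an IAD is used, the
 * device descriptor should also carry class 0xEF / subclass 0x02 /
 * protocol 0x01.)
 */
#include <stdint.h>

#define CFG_TOTAL_LEN 98u   /* 9+8+9+5+5+4+5+7+9+7+7+9+7+7 */

static const uint8_t composite_cfg_desc[CFG_TOTAL_LEN] = {
    /* Configuration descriptor */
    9, 0x02,                        /* bLength, bDescriptorType = CONFIGURATION */
    CFG_TOTAL_LEN & 0xFF, 0x00,     /* wTotalLength, little-endian */
    3,                              /* bNumInterfaces: CDC comm + CDC data + MSD */
    1, 0,                           /* bConfigurationValue, iConfiguration */
    0x80, 50,                       /* bmAttributes (bus powered), bMaxPower = 100 mA */

    /* IAD: interfaces 0..1 form one CDC ACM function */
    8, 0x0B, 0, 2, 0x02, 0x02, 0x00, 0,

    /* Interface 0: CDC Communications class, ACM subclass, 1 endpoint */
    9, 0x04, 0, 0, 1, 0x02, 0x02, 0x00, 0,
    5, 0x24, 0x00, 0x10, 0x01,      /* CDC Header functional descriptor, bcdCDC 1.10 */
    5, 0x24, 0x01, 0x00, 1,         /* CDC Call Management, data interface = 1 */
    4, 0x24, 0x02, 0x02,            /* CDC ACM functional descriptor */
    5, 0x24, 0x06, 0, 1,            /* CDC Union: control interface 0, data interface 1 */
    7, 0x05, 0x82, 0x03, 8, 0, 16,  /* Endpoint 0x82: interrupt IN, 8 bytes, 16 ms */

    /* Interface 1: CDC Data class, 2 endpoints */
    9, 0x04, 1, 0, 2, 0x0A, 0x00, 0x00, 0,
    7, 0x05, 0x01, 0x02, 64, 0, 0,  /* Endpoint 0x01: bulk OUT, 64 bytes */
    7, 0x05, 0x81, 0x02, 64, 0, 0,  /* Endpoint 0x81: bulk IN, 64 bytes */

    /* Interface 2: Mass Storage class, SCSI transparent, bulk-only transport */
    9, 0x04, 2, 0, 2, 0x08, 0x06, 0x50, 0,
    7, 0x05, 0x02, 0x02, 64, 0, 0,  /* Endpoint 0x02: bulk OUT, 64 bytes */
    7, 0x05, 0x83, 0x02, 64, 0, 0,  /* Endpoint 0x83: bulk IN, 64 bytes */
};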
 
I've been instructing ChatGPT in IBM 1620 machine code. It's getting better, but the initial answers were wildly wrong. I provided it with a link to the appropriate IBM manual and will check back later to see whether it has digested the information.

On the other hand, asking questions about IBM 7080 autocoder failed miserably, with ChatGPT professing complete ignorance.
 
Okay, one thing that I've observed is not to depend on ChatGPT for accurate answers. I've been playing with it on the fringes of computing where there are few public documents available. What I've found is that there seems to be a compulsion (if you can call it that) to always give an answer, even if it's wildly wrong. You can correct it and get somewhat better results, but heck, if you know an answer's wrong, why are you asking an idiot machine? :)
In other words, kids, don't ask it to write your dissertation--it's not going to get you where you want to be.
 
Just got my first complaint that a URL it came up with pointed to a non-existent document on bitsavers.
The other URL pointed to a cctlk post about something completely different along with a bogus VCF wiki page.
Future so bright, all you can see is brown.
 

Attachments: poo.png (174.3 KB)
Maybe someone can ask ChatGPT to write a novel about a dystopian/idiotic future where AI worked *just* well enough to put the last generation of human experts out of a job, only to have all of society’s working parts fall to pieces: with the only source of new information for these fundamentally moronic algorithms to digest being increasingly corrupt regurgitations/remixes of the final human dataset, all semblance of intelligence is completely lost. All online text essentially turns into lorem ipsum-esque (but riddled with racism) nonsense, and representative illustration all turns into DeepDream-ed hellscapes of random body parts and cat memes…

No matter how broken the result it’ll be a work of Shakespeare compared to what an AI like ChatGPT will be churning out once it starts really eating its own tail.
 
A friend described ChatGPT as "a parrot with a big vocabulary".
I liked that!

A friend attempted to generate small code snippets using ChatGPT for actual work. The thought made sense: apart from curiosity, he needed to use some libraries just once and didn't want to spend the time getting familiar with them only to write 10-20 line functions that no one would revisit. Results looked OK initially, but not even a single function really worked without corrections. Some fixes were simple but required reading the library's documentation, which defeats the purpose. Others were nasty bugs and really bad, silly code, programming 101 type of code, that basically had to be rewritten entirely. So I have to agree we're not there yet. I don't know if we'll get there eventually, but currently I classify it as an advanced first-level customer support chat bot that can pop up in a web page and actually succeed in being helpful.
 
That it's a parrot becomes evident when one tries to "educate" the AI. It can parrot back statements that you've made earlier, but when asked, will often return an answer that directly conflicts with what you just told it.

For example, if I inform it that "mock turtle soup" doesn't actually have any turtles in it, it will acknowledge that and then proceed to tell me that I need turtle meat to make mock turtle soup.

After a bit, it's like talking to a parrot.
 
A lot of ink has been spilled over the years about the theoretical existential threats AI poses to humanity; one of the favorite doomsday scenarios is the "Paperclip Maximizer", where a self-improving AI that's tasked with optimizing the output of a paperclip factory runs amok and uses its geometrically-increasing intelligence to grab the power and resources necessary to convert the entire planet, if not eventually the entire galaxy, into trillions upon trillions of paperclips. ChatGPT and friends are obviously nothing like that; they have no actual intelligence (when they're correct it's only statistically so; they never actually *know* anything about what they're talking about in a way most philosophers and scientists would argue an actual brain "knows" things). But... I can't help but feel like these things are approaching the threshold of having the power to destroy human intellect by gobbling up all of what we actually know and recycling/mangling it into such a fetid stew of garbled disinformation that even the sharpest minds will drown in s**t trying to sort out what is actually real anymore.

The parrot analogy is real. The only reason they sound even remotely sensible is because they're parrots with extremely large photographic memories filled with at least sometimes-sensible conversations they can grab pieces of and spit back out based on their prompts. It's been demonstrated over and over again (with Microsoft's "Tay" back in 2016, the weird garbage that's already coming out of Bing's new chatbot after only about a week, etc) that it's trivial to intentionally poison the well with these things, and the same thing will happen, potentially at a geometric rate, once they start consuming their own garbage.

We already have enough human-generated mis/disinformation circulating around the Internet; the last thing we need is even more of it generated as a result of these hallucinating, moronic AIs running off at the mouth. Feh.
 
I'd seen some examples of ChatGPT and really, it's mostly useless. There are already shills in various places pushing it along, and I even received some spam email promoting it. This, I think, is a very good reason to stay away from it - far away.

The real problem is that the usual stupid mob mentality is going to start believing the rubbish it puts out. Soon, nobody will know what is truth and what isn't (worse than the situation now).
 
I was always thinking about these applications:

1- AI has never-ending patience as a companion for people who are physically or mentally impaired.
2- AI has tremendous potential as a teaching tool for all ages.
3- A multilingual translator for people who travel or interact with speakers of foreign languages.
4- I am standing in line for a decent offline home assistant.

......and the list goes on and on and on!

ziloo 😊
 
I think using ChatGPT is like outsourcing research to a junior high school class. Lots of enthusiasm and ability to find sources but a lack of the knowledge needed to verify the sources. But if having AI restate the documentation gets more programmers to read the documentation, that will be a good thing.
 
I experimented with ChatGPT a few weeks ago. I had recently read part of a collection of personal letters written by Thomas Jefferson. I asked ChatGPT to summarize Jefferson's viewpoint on an arcane topic and it did great. Then I started to inquire about an atmospheric electrical phenomenon, and it authoritatively pointed me to a written resource, giving the author's name... which I was unable to find. When I asked for additional information and the ISBN of the book, it told me it wasn't a search engine. I kind of left the session feeling like it had BS'd me. Ha!
 
Well, at least you have a big vocabulary. Me, not so much.
Permit me to back up on my previous post a little bit. By "big" I meant just "a lot of words," none of them having to be particularly fancy or out of the ordinary. I'm with you; my vocabulary is no better than the average bear's.
 
I submitted the following query to ChatGPT:
Please furnish a schematic diagram for the Altos 586 computer.

And ChatGPT replied:

I'm sorry, but as an AI language model, I do not have access to specific schematics or diagrams for the Altos 586 computer. However, I can provide you with some information about the Altos 586 computer.

The Altos 586 computer was a PC-compatible computer that was introduced by Altos Computer Systems in 1984. It was based on the Intel 80286 processor and supported up to 16 MB of RAM. It also had a built-in hard disk controller that supported up to four hard disk drives.

The Altos 586 computer was typically sold as a server or workstation and was used in various industries such as finance, government, and education. It was also popular among software developers and enthusiasts.

Oooookaaay: PC Compatible? 80286 with 16 MB RAM? :)
 