
AI articles on vintage computers...

3lectr1c

You know how a handful of websites have been using AI to write articles of questionable quality as of late? Well, look what I just found: https://ts2.shop/en/posts/exploring-the-legacy-of-the-toshiba-satellite-pro-430cdt
By about two sentences in I'd already picked up on it. It's weird how recognizable GPT writing has gotten. It probably helps that I asked it the other day about a couple of vintage laptops out of curiosity, and what it spat out read about the same as this article. It's already high in Google search results, so I guess it works... I hope this doesn't become common.
 
I'm sure it will... at least right now it's pretty easy to tell. It's so hard to describe, just that vague, trying-to-be-expressive yet clinical writing style. It lacks humanity, which I suppose makes sense!
Not to mention how much of the information will end up being incorrect. This one article doesn't have anything glaring, probably because of how vague it is, but it has happened before and is going to keep happening.
 
[Attached image from the article: an AI-generated "realistic photo of a man reading local website news"]

Perhaps more obvious was the (completely unrelated to the article) image that they used. Those newspaper headlines that are pure nonsense.
 
I had the “pleasure” recently of spiking on some generative AI stuff for work, and for laughs we played with running programmatically determined image prompts through DALL-E. By the end of it I had the Soviet national anthem stuck on repeat in my brain, because when AI art makes the mistake of trying to render an object that should have text on it, the result looks like the Cyrillic in vintage Russian propaganda.
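
For anyone curious, the loop was conceptually something like this (a rough sketch, not the actual work script; it assumes the current OpenAI Python client and a made-up list of subjects):

# Rough sketch: programmatically assembled prompts fed to DALL-E via the OpenAI API.
# Assumes the openai package is installed and OPENAI_API_KEY is set in the environment;
# the subject list here is invented purely for illustration.
from openai import OpenAI

client = OpenAI()

subjects = ["man reading a local news website", "vintage laptop on a workbench"]

for subject in subjects:
    prompt = f"realistic photo of {subject}, natural lighting"
    result = client.images.generate(model="dall-e-3", prompt=prompt, n=1, size="1024x1024")
    print(result.data[0].url)  # link to the generated image

The text-rendering failures show up as soon as a prompt implies signage, headlines, or labels on the object.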
 
I wonder if AI is immune to Steve's Reality Distortion Field. 🤔
I can only imagine some of the platinum-certified BS it could come up with.

Edited: Actually, since I doubt these programs can go deep into fact-verification, if you figure out where one draws its datasets from you could quite easily poison it.
 
I asked the Bing assistant (GPT-4, basically, though it can also search the web) about a few of the obscure laptop brands I've documented, and it pulled information straight from here and from my site (it tells you where it sourced things from). At some point it started talking to me as if I were the person who had done that documentation (which I AM), so either it somehow figured out it was talking to 3lectr1c, or, hopefully more likely, it just glitched.
So yeah, it probably wouldn't be hard to poison that one. I could probably just make up a nonexistent computer, post a thread about it, ask Bing Chat a while later, and it would start telling me all about it.
 
AI nonsense is ruining search engines as a way to find anything technical. I think I am going to start collecting more printed books.
 
Yup. That's kind of it. It actually started long ago with people trying to manipulate Google search results to drive traffic to their sites, drowning out the better resources. Now we have algorithmically generated content popping up everywhere, replacing the authors themselves. It concerns me when every site I get in a result has the same basic wordy outline format with little real information.
 
See? There's no Autocoder there, just a bunch of JCL. And the JCL isn't even 7070--it looks like S/360. For a 7070 Autocoder example, see PDF page 265 here.
So GPT-4, like its predecessors, makes garbage up when it can't come up with an answer. It can't even read online manuals.
Relying on AI can be very dangerous, IMHO.

This is what I found too. I asked ChatGPT about a very rare germanium power diode whose specs are practically impossible to find unless you know which data book to look in.

It made up a whole load of nonsense about the specs. When I corrected it with some of the real figures, it just confabulated more nonsense. It would have been better if it had said there was no reliable data currently online, or perhaps even said (like V.I.C.K.I.) "I'm sorry, there appears to be data corruption." At least that would have been funny.
 
I like to talk to real people in person. That way I can read their body language.
They sort of did this in the movie I, Robot, where the robot winked at the detective to gain an advantage in a fight.
I have always preferred to hear a person's voice on the phone rather than just seeing a text; in a sense it's a similar thing, where more information is conveyed than can be in text alone.
 
The problem, from a business standpoint, is that most people do not speak with the same precision that writing calls for. I prefer to get things in writing--there's none of this "...but you said..."
 