
AI articles on vintage computers...

Tell me why I would ever want to read something that was crapped out by an "AI".

When I am searching for information, I am looking for original content with original experiences and original research. An "AI" can NEVER have that.

At best, an AI is comparable to a bored college research student who has no understanding of the topic, access to mediocre information, and the ability to bullshit out the yin-yang.

Use it to create a template? Fine. Use it to make a fluff piece that might as well be a wall of ipsum text? Sounds like a fine idea until it starts spewing dangerously inaccurate or insulting gibberish.
 
I don't understand how the AI can invent incorrect information. The AI may not know what is accurate and possibly do something like copy a phony case out of a Law & Order script. The creation of complete fantasies without some basis should not be something the AI is programmed to do.
 
There is a type of personal connection you get from speaking to a person, and even more so in person than over the phone or a video call. I learned this as a computer nerd who forced myself to accept a sales role. I ended up meeting lots of different people, and it was actually better to meet them face to face.

Now you can get a lot of technical things done over email, but you benefit a great deal from that first in-person meeting. I was extremely disappointed when the role changed due to COVID. Going all virtual really put the relationships at a distance.

I can't imagine what it would be like in a post AI world. Productivity probably would never recover.
 
I don't understand how the AI can invent incorrect information. The AI may not know what is accurate and possibly do something like copy a phony case out of a Law & Order script. The creation of complete fantasies without some basis should not be something the AI is programmed to do.
It’s because it’s not real AI. It’s a text generation model. At an extreme simplification, it’s just guessing what word would fit best next. It’s like a really advanced version of the suggested work field on your phone keyboard. It happens to be able to guess the right words to say if it’s something it’s been fed information about (which is why it’s accurate in most cases when asked something that isn’t obscure), but when asked about something it doesn’t know, it will just guess what it should say next.
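The "guessing the next word" idea above can be sketched with a toy bigram model. This is purely illustrative (a real LLM uses a neural network over tokens, not word counts), and the corpus here is made up for the example; the point is that the model always produces *something*, even for input it has never seen.

```python
import random
from collections import Counter, defaultdict

# Hypothetical toy corpus -- just enough text to build word-pair statistics.
corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

# Count which word follows which (a bigram table).
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word, or a guess if unseen."""
    if word in successors:
        return successors[word].most_common(1)[0][0]
    # Never-seen input: the model still "answers" -- it just guesses.
    # It has no way to say "I don't know".
    return random.choice(corpus)

print(predict("the"))    # "cat" -- the most common successor in the corpus
print(predict("zebra"))  # an arbitrary guess; confidently wrong by design
```

The failure mode the posters describe falls out directly: asked about something well represented in the data, the guess is usually right; asked about something obscure, the machinery produces an equally fluent, equally confident, wrong answer.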
 
I don't understand how the AI can invent incorrect information. The AI may not know what is accurate and possibly do something like copy a phony case out of a Law & Order script. The creation of complete fantasies without some basis should not be something the AI is programmed to do.
I was surprised by this finding too. The AI did give the impression of confabulating data when I asked it about a rare germanium rectifier (made by Philips in 1959), but it may have turned out this way because it found some data, albeit incorrect, relating to other parts with similar numbers that were not the same. For example, it reported it as a silicon rectifier with completely different specs.

In any case, the AI appeared "unaware" that the data it provided had a very low probability of being correct. I think it should be re-programmed to provide some approximate uncertainty figures with its answers.

For example, if you asked it the chemical makeup of water, which it can check against a myriad of databases that all agree, it would answer and give a probability figure of 1 that the answer is correct.

But if you ask it about something it has little to no data on, it can't find agreeing data across many databases (especially if it finds nothing), and it presents all the "loose information" it can find (which amounts to confabulation), it should attach a probability of accuracy of, say, 0.001 to warn you that the answer is highly suspect. Or, below some reasonable probability level, it should simply admit it was "guessing", just like we do sometimes.

For a less challenging question than the one I asked, but still not an easy one, the AI could say something like: "From the data I have access to, I'm 90% sure that this is the correct answer." That is the sort of thing a HI (human intelligence) would say when they knew the database they had access to was not ideal and they were not sure that the information from different data sources was in perfect agreement.
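The poster's suggestion, reporting an answer together with a rough confidence figure, can be sketched in a few lines. This is a deliberately naive model (confidence as the fraction of sources that agree); real calibrated uncertainty estimation is much harder, and the function and example inputs here are invented for illustration.

```python
from collections import Counter

def answer_with_confidence(sources):
    """Pick the majority answer and report source agreement as a rough confidence."""
    if not sources:
        return None, 0.0  # nothing found: admit it instead of confabulating
    best, votes = Counter(sources).most_common(1)[0]
    return best, votes / len(sources)

# Many agreeing "databases" -> confidence 1.0, like the water example above.
print(answer_with_confidence(["H2O"] * 10))

# Scraps of conflicting data about an obscure part -> low confidence,
# so the answer gets flagged as a guess rather than stated as fact.
print(answer_with_confidence(["Si", "Ge", "Se"]))
```

The design point is the second case: when the "databases" disagree, the honest output is an answer plus a warning, not a confident fabrication.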
 
When dealing with ChatGPT-4, one thing I realized was that one could correct it with accurate information. It would acknowledge that and parrot it back, but then there was no "learning". It was like trying to teach a dog calculus. I guess this is why ChatGPT is sometimes referred to as an LLM, a large language model: an idiot with a huge vocabulary.

But then, this is ChatGPT, a generative AI, whose roots lie in Eliza.

Right now, I'm seeing a lot of use of the technology in promoting online scams. Don't believe me? Try searching for meaningful information on a scam product: "Esaver Watt". Even the YT videos on it have AI-generated comments. The idea is to drown out real criticism by swamping the system with fake data.

It's still fairly crude, but you'd better believe that in a few years, much online content will be utterly useless.
 
It's still fairly crude, but you'd better believe that in a few years, much online content will be utterly useless.
Ha ha, I like that, a billion grains of sand on a beach created by AI and one or two mixed in there, diluted into statistical oblivion, a few HI created grains.

With this sort of model you could write the future and re-write the past.

Perhaps AI does represent the existential threat to humanity some people believe, especially the conspiracy theorists.

To console myself, I know that the conspiracy theorists think they have found a plot, but in reality they have lost the plot.

(Also, we can pull out the memory cards on the AI, like Dave did in 2001: A Space Odyssey; hopefully they will still have S-100-style edge connectors.)
 
This topic needs this quote from Cory Doctorow:
This "AI debate" is pretty stupid, proceeding as it does from the foregone conclusion that adding compute power and data to the next-word-predictor program will eventually create a conscious being, which will then inevitably become a superbeing. This is a proposition akin to the idea that if we keep breeding faster and faster horses, we'll get a locomotive.
 
I don’t truly think we’ll be able to create conscious AI until we figure out how consciousness works in humans. And that’s one of life’s biggest mysteries that I’m not positive we’ll ever solve.
 
On the topic of AI, its lack of abstract thought and auto-generated publications, this gem popped up:



Are you ready to embark on an exciting coding journey with THEC64? Whether you're a complete novice or have some coding experience, this book is your ultimate companion for learning how to code on THEC64. THEC64, a beloved retro computer, offers a fantastic platform for coding enthusiasts to create their own programs, games, and applications. In "C64 Programming For Non-Coders," we've meticulously crafted a comprehensive guide that takes you through coding on THEC64, starting from
[Attached: Getty stock photo and two screenshots of the book listing]


Now the fact that "THEC64" is repeatedly used tells me it's learned from somewhere that a Commodore 64 was spelled "THEC64". Sure enough, another book was found.
[Attached: cover of the other book]

(The rest of the story is here.)

So it's an existing book, plagiarized and repackaged with a new cover made from a cheap Getty image and ten minutes in Photoshop, and posted with AI fudging the book details to sell a copy at half the price of the original.
 
I don't understand how the AI can invent incorrect information. The AI may not know what is accurate and possibly do something like copy a phony case out of a Law & Order script. The creation of complete fantasies without some basis should not be something the AI is programmed to do.

The same way idiots invent incorrect information. They make something up because they think it is right, then other idiots believe them and spread the incorrect information.

And this is how we have religion.

AIs are good at creating misinformation because they can calculate what sounds most right. Try zooming in on an AI-enhanced graphic. What you will see is called a "hallucination": it filled in details with what statistically looks right, even though it had no freaking idea what was there.

At the moment I'm working with a database that is being filled with data collected from the web using nothing more than Google (R)(TM) searches (company information like names and addresses). The data that is out there is horrible as it is. Most of it is on sites whose only real purpose is to display advertising.

Why would a site whose entire purpose is to advertise NOT want to make stuff up? Push a button on an "AI", and suddenly there are millions more pages and even hundreds or thousands more sites dishing out your advertising. Idiots searching won't know any better. I fully expect the Internet to be filled with AI-produced gibberish in a few more years. The human-created gibberish for the same purpose is bad enough already.

It does boggle my mind that, unlike the bored research student, AIs are given such leniency to make stuff up without providing sources.

Then, what ARE its sources? Facebook? X-Twitter? Crap spewed out by other gibberish-producing "AI" programs? Garbage in, garbage out.

They clearly cannot evaluate their sources for accuracy. Even reading a fairly well fact-checked 1980s periodical, I may come across some piece of information that I pick up on as probably being wrong.

Back to what I was talking about earlier - over the years, I have gone to a lot of effort to explore obscure software, hardware, and such, and then write up my experiences. I would hope that these small experiences might somehow help someone or make the world slightly better somehow.

I guess no one wants that sort of thing any more. It doesn't sell enough advertising.

If this is the future, I hate to think how it will expand into everything else. When every single product is designed by an AI, without any human creativity or thought, imagine shelves filled with a surreal, distorted mish-mash of products. Is this a glitzy beauty product, a food, or a mind-numbing corporate mainframe product? Oh, it's all three.

But each product has been designed using unbelievably huge amounts of information collected from everyone on the planet. From purchasing history, search history, where you go, what you do, every communication to others, all the way down to every single eye-movement.

This is the scary part, because THIS is where the AI of today can actually succeed. Because they can pull from such huge giga-sets of data, they can find "optimal" solutions that have not already been thought of, usually because those pesky ethics, common sense, or even reality were in the way.

Hmmm, there is a lot of sediment shaped sediment in this cake.
 
The back-cover image is a stock image from gettyimages. Somehow, I doubt that a guy with a beard goes by the name "Ava".
That guy looks like he drives a panel van and offers free candy.
 
AI will certainly be used to set traps for the gullible and influence popular opinion. Heck, it's probably already doing that today and will only get better at it as time goes by.
But then, when has the human race ever discarded a transformative technology?
 
Hallmark of AI bots: wordiness.
I watch random videos on YouTube and feel as if the narrator may not be a real person because the language feels so unnatural. Maybe this is how younger people are talking, or maybe it's AI. Or maybe one is affecting the other...
 
I watch random videos on YouTube and feel as if the narrator may not be a real person because the language feels so unnatural. Maybe this is how younger people are talking, or maybe it's AI. Or maybe one is affecting the other...
There is certainly a "youtuber voice" that a lot of people put on, quite exaggerated and unnatural-sounding. There's also a generic new male voice and female voice that get used a lot; both sound a bit more real than the old text-to-speech ones, but still unnatural. I think one or both may come from TikTok.
 
I detest the "youtuber voice." It always sounds like they are presenting something to a room full of idiots.
 
That's because they technically kind of are. It's meant to "stand out" and "engage" in order to capture the most watch time possible, so the algorithm pushes their content more, meaning more views, meaning more money. As long as it works, people will keep doing it. And generally, in most cases (not all), people using that voice to chase views means the content itself isn't that good.
 