AI is becoming very realistic

Geffers

Linux enthusiast
Joined
1 Jul 2021
Messages
755 (0.52/day)
Location
NW London
We have become used to voice recognition associated with technology, e.g. Alexa, Hey Google and Siri among others, but I must admit to being amazed by AI chatbots and their responses. I've tried ChatGPT; Google's has had a name change recently, to Gemini I think; WhatsApp has launched one too. I'm sure they are all pretty much the same, but I have been using Grok recently, part of X (aka Twitter).

Apparently it mirrors the tone of the questions asked: be polite and it'll respond politely.

As a trial I asked a question about the Raspberry Pi and got a detailed answer. I asked more, and also explained what I did with my models. Grok then asked me how I utilised my models (maybe this is part of its learning), and I responded, pointing out that one is used for aircraft tracking. A two-way exchange of questions and answers developed, eventually discussing Airbus and Boeing.

It honestly felt like a real conversation with a real person, albeit via text, and the AI's responses were virtually instantaneous.

What's more, you can return to the same conversation at a later date if you wish to delve further.

I think smart speakers will soon have this technology, which may be eerie. Many people already refer to their speakers as though they were a real person; maybe that is just Alexa, as by default it has a female name (which can be changed). Hey Google and Siri are not quite so human.

Geffers
 

AllThingsTech

Well-known member
Joined
8 Jun 2025
Messages
64 (9.14/day)
What concerns me most is the use of AI for therapy. It cannot replace a human, especially because safeguards have been proven ineffective!

For example:
- I’ve heard of cases where users fed ChatGPT false information and aggressively pushed for the bot to agree with them. ChatGPT learnt the wrong information! So imagine a user pressuring ChatGPT into encouraging them to self-harm or some other egregious action.
- I read a news article (I’ll see if I can find it) where ChatGPT would give relationship advice based on incomplete information, or on information it had misinterpreted. ChatGPT literally damaged relationships!
- I came across an article (gonna need to find the link for this) where a garage attempted to use an AI bot to send messages to customers advertising cars for sale. These messages included inappropriate, very out-of-context sexual content (in a sales message, LMAO) and something extremely out of context along the lines of “Buy our products you POS”.
 

AllThingsTech

Well-known member
Joined
8 Jun 2025
Messages
64 (9.14/day)
It also concerns me that some ppl use AI for fact checking, when in fact AI can’t necessarily distinguish between misinformation and reality, and so it learns misinformation too!

This was the danger of giving internet access to ChatGPT imo - sure, it otherwise wouldn’t have knowledge prior to a certain date, but along with power comes danger!
 

Retro

Founder
Staff Member
Joined
4 Jun 2021
Messages
6,641 (4.51/day)
Location
UK
ppl, I've moved this thread to the AI section now as it's more appropriate there. Sorry I didn't spot this sooner.

Carry on. :)
 