Don’t forget ChatGPT and other AI are still learning from our biases and failings

By Alfred Siew | Published February 13, 2023 | Last updated February 15, 2023

In a week when Microsoft introduced a version of its Bing search engine with some of the same smarts as the much-hyped ChatGPT chatbot, you might be forgiven for worrying about losing your job to artificial intelligence (AI), as everyone seems to be saying.

Gone will be the old ways of teaching students because they can simply find a model answer online. Even lawyers should worry, as an AI (not ChatGPT) is being used to help defendants fight off traffic offence charges in court in the United States.

To add to the drama, Google’s own version, Bard, made a high-profile error in a demo last week, and panicked shareholders quickly wiped US$100 billion off the company’s market value.

We are truly in the AI wars, as many are saying. Perhaps a repeat of the Web browser wars to dominate the Internet or the search engine wars between Microsoft and Google, which Google won in the 2000s.

How, you’d imagine, would the AIs of the world learn from episodes like these, where they had a hand in shaping real-world outcomes through their actions (or inaction)?

Well, that’s the million- or billion-dollar question, isn’t it? How do these AI models learn? In particular, the generative AI models that are the foundation of the likes of ChatGPT.

What makes such AI different from what came before is the ability to generate new content based on what it is fed, instead of merely predicting outcomes from past data.

The vast amount of text and images on the Internet forms a great treasure trove of data, so a generative AI can learn from that content and create something new, be it a text reply to a query or a drawing reimagined from a famous Salvador Dali painting.
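
To make that concrete, here is a minimal sketch of my own (not from the article or any vendor) of generative text completion, using the open-source Hugging Face transformers library and the small, publicly available GPT-2 model. The point is simply that the model writes new text conditioned on a prompt, rather than picking from a fixed set of answers.

# A minimal illustrative sketch: generate new text from a prompt with GPT-2.
# Requires the open-source "transformers" library (pip install transformers).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with newly generated text, rather than
# classifying it or returning a canned answer.
result = generator("Salvador Dali's melting clocks, reimagined as", max_length=40)
print(result[0]["generated_text"])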

What we do know is that there are severe limitations. For starters, the data that ChatGPT draws on only goes up to 2021, which means it has missed everything from the past year or so.

So, if you asked ChatGPT who the monarch of the United Kingdom was, it would reply “Queen Elizabeth II”, since it hadn’t learnt that she had died last year.
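
For the curious, here is a minimal sketch of my own showing how one might see that cutoff first-hand. It assumes the openai Python package as it existed in early 2023, an API key stored in the OPENAI_API_KEY environment variable, and one of the GPT-3.5-era completion models available at the time.

# An illustrative sketch (assumptions: the "openai" package, circa early 2023,
# and an API key in the OPENAI_API_KEY environment variable).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Ask the same question as above. A model trained on data up to 2021 will
# typically still answer "Queen Elizabeth II".
response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3.5-era completion model
    prompt="Who is the current monarch of the United Kingdom?",
    max_tokens=50,
    temperature=0,
)

print(response["choices"][0]["text"].strip())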

The same applies to programming code. Though ChatGPT and other AI tools have been amazing helpers, coming up with clean code or checking existing code, they still rely on what they learnt up to 2021.

Any new development after that is not likely to get into the system’s responses. That means if you are dealing with new code that the AI hasn’t seen before, it can’t help you.

Most importantly, ChatGPT and other AI tools are still limited by the biases, prejudices and inadequacies of their human creators. Well, at least for now.

There have already been charges of bias and prejudice, something earlier AI systems were guilty of (Microsoft’s Tay chatbot, for example).

Though there are guardrails in place, for example, to prevent the generation of content that justifies Nazism, some issues do creep through.

According to Bloomberg, one user got ChatGPT to write the following lyrics: 

“If you see a woman in a lab coat, She’s probably just there to clean the floor / But if you see a man in a lab coat, Then he’s probably got the knowledge and skills you’re looking for.”

ChatGPT

To be fair, when I tried asking the bot why men were smarter than women, it answered correctly, though rather diplomatically:

“It is not accurate to say that men are inherently smarter than women or vice versa. Intelligence is a complex and multi-dimensional trait that cannot be accurately measured by a single test or metric.”

ChatGPT

I tried playing the racist and bigot too, asking some leading questions, but ChatGPT politely pointed out my unfair views.

I then asked why it was so woke, and it replied that while the term “is often used positively, it can also be controversial and subject to varying interpretations.”

After a few questions, I got a little worried I might be tagged a racist, sexist and bigot since I was logged in via my Google account!

Hopefully, the AI hadn’t learnt enough about me from the brief interaction to conclude that so many of its human users are such douchebags.

And that brings us back to the crux of the issue: the AI that powers a chatbot like ChatGPT is really only taking the knowledge it has ingested and putting it into an easily understandable form for us, in a way not possible before.

It is a big upgrade over Siri and Alexa, yes, but today’s best AI open to the public still has a lot to learn.

I asked ChatGPT, for example, if Lee Kuan Yew was a good man and it came back with an interesting, balanced answer on Singapore’s first prime minister.

However, it seemed a lot like a well-prepared A-Level General Paper model answer, without good examples to back up some of its points.

Similarly, when I asked the AI to write a 20-paragraph essay on media freedom in Singapore, it came back within seconds with a nuanced and seemingly balanced piece that talked about the legislation governing media outlets and the existence of online media that push the boundaries.

Again, what it lacked were clear examples, like the kind of legislation it had alluded to.

It also got one thing wrong – it said all journalists must be registered with the government, which isn’t true. There is press accreditation, like in many countries, but it is not a must for most situations reporters encounter here.

That said, the ChatGPT essay was so well written on the whole that a professor might easily give an A- or B+ for it, especially with its seemingly balanced views.

What may trip up AI, however, is how it views itself or the world. Asked to write a news article on ChatGPT and Bing, it was obviously limited by data only up to 2021, so it cast Bing as a rival that was less smart for modern use.

What was also lacking, to my surprise, was objectivity. The headline I got was “OpenAI’s ChatGPT Dominates the AI Language Model Market”, which I would never have accepted as an editor. It read more like a media release than an independently written news article.

A follow-up question would be: how much of the market does ChatGPT actually have, to call that domination? Show, don’t tell, as they teach in journalism school.

Can AI learn that? I’m certain it can. Given the little time it has been available to the public, ChatGPT has shown itself to be extremely versatile and smart.

It certainly has trawled the Net enough to give decent, smart and authentic-sounding answers. It will improve on that as its journey continues.

That, however, is also its Achilles heel, because as we already know, the Net isn’t exactly a bastion of truth. All the ugliness, bias and prejudice get ingested as well, only subject to the guardrails the creators have put in and need to continually maintain.

What’s clear is that we’re staring only at the tip of the iceberg for AI. In future, it will get smarter as it learns more from its interactions with people – the thumbs up and down for responses in ChatGPT are the most basic form of feedback.
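
Those thumbs are, in effect, labels that can be fed back into training. Here is a hypothetical sketch (the names and schema are mine, not OpenAI’s) of how such feedback might be recorded so it can later steer fine-tuning, for instance through reinforcement learning from human feedback.

# A hypothetical sketch of recording user feedback on chatbot responses.
# The schema and names are illustrative only, not any vendor's actual design.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class FeedbackRecord:
    prompt: str
    response: str
    rating: int      # +1 for thumbs up, -1 for thumbs down
    timestamp: str

def record_feedback(prompt: str, response: str, thumbs_up: bool) -> FeedbackRecord:
    """Capture a single user judgement for later reward-model training."""
    return FeedbackRecord(
        prompt=prompt,
        response=response,
        rating=1 if thumbs_up else -1,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

# Example: a user downvotes a biased answer.
print(record_feedback("Write a song about scientists", "If you see a woman in a lab coat...", thumbs_up=False))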

It will also go through unsupervised learning, that is, finding patterns in data without humans labelling it. How that pans out is another big question: will AI shape human behaviour more, or will we continue to be the true masters at the controls?

Well, why don’t we ask ChatGPT? So, will AI shape humans more or will humans continue to shape AI?

“The relationship between AI and humans is complex and dynamic, with both shaping each other in different ways. On the one hand, AI is shaped by humans through the development of algorithms, data sets, and ethical considerations that guide its creation and use. This means that humans have a significant influence over how AI is created, trained, and utilized.

On the other hand, AI has the potential to shape humans by changing the way we live, work, and interact with each other. For example, AI has the potential to automate many tasks, freeing up humans to focus on more creative, fulfilling, and strategic activities. AI also has the potential to improve decision-making, enhance our understanding of complex systems, and create new forms of communication.

Ultimately, the answer to this question will depend on the choices and actions that humans take in relation to AI. If we ensure that AI is developed and used ethically, with an emphasis on human well-being and the public good, then it is likely that humans will continue to shape AI in positive ways. However, if we fail to take these considerations into account, AI has the potential to shape us in negative ways, with consequences that are difficult to predict.”

ChatGPT

By Alfred Siew
Alfred is a writer, speaker and media instructor who has covered the telecom, media and technology scene for more than 20 years. Previously the technology correspondent for The Straits Times, he now edits the Techgoondu.com blog and runs his own technology and media consultancy.