An interview with three chatbots: AI ethics, governance and developments in 2024

Grace Chng

2023 has been a phenomenal year for AI. ChatGPT shot to mainstream fame. Early adopters started chatting with it, instructing it to undertake tasks from debugging computer code and summarising reports to writing poetry and brainstorming ideas.

I interviewed ChatGPT around this time last year, about two weeks after its launch. In the past 12 months, AI chatbot technology has advanced at a blinding pace. In parallel, issues have emerged around ethics, AI regulation and the global scramble to dominate AI.

What would the chatbots think about these issues? Here is a “roundtable” discussion with ChatGPT, Claude and Bard on their perspectives and outlook for 2024. Responses have been edited for brevity.

How would you describe yourself?

ChatGPT: I am designed to understand and generate human-like text based on the input provided to me. My purpose is to assist and provide information on a wide range of topics through natural language conversation. While I don’t have personal experiences or emotions, I can engage in discussions, answer questions, and offer information to the best of my knowledge and abilities. Keep in mind that my responses are generated based on patterns and information present in the data I was trained on, and I don’t possess consciousness, self-awareness, or personal opinions.

How do you navigate ethical dilemmas?

Bard: As a large language model, I am not designed to make subjective moral judgments. Instead, I navigate ethical dilemmas by relying on the principles and values embedded in my training data. This includes datasets of factual information, ethical guidelines, and diverse viewpoints. When faced with conflicting values, I prioritise accuracy and objectivity over personal opinions. For example, if asked about a controversial topic, I would present a balanced overview of different perspectives, citing reliable sources, and avoiding bias.

An example of ethical considerations influencing my response would be a query about sensitive topics like suicide or self-harm. In such cases, I would prioritise user safety and well-being. I would provide information about mental health resources and hotlines, encouraging the user to seek professional help.

ChatGPT: I don’t have personal experiences, consciousness, or the ability to navigate ethical dilemmas autonomously. My responses are generated based on patterns and information present in the data on which I was trained. While I can provide information and perspectives on ethical considerations, the responsibility for making ethical decisions ultimately lies with individuals and those implementing or interacting with AI systems.

It’s important for developers, users, and organisations to establish ethical guidelines, principles, and oversight mechanisms when deploying AI systems.

Chatbots are known to hallucinate, or confabulate, information. How do you avoid this?

Claude: I was designed specifically to avoid providing inaccurate or confabulated information. Here are some of the techniques used:

• My knowledge comes from curated datasets that have been fact-checked, not from scraping potentially unreliable sources on the Internet. This gives me a solid factual foundation.

• I have a separate module focused entirely on providing truthful responses. Before I provide any response, it checks that response for accuracy and evidence based on my knowledge.

• I am transparent when I’m unsure or don’t have enough knowledge in my dataset to answer confidently. I defer to saying “I don’t know” rather than attempting to guess.

• My training methodology explicitly avoids promoting unreliable behaviour in order to appear more knowledgeable or human. I aim to prioritise being correct over having a response for everything.

Bard: I rely on several mechanisms to ensure the accuracy and reliability of my information:

  • Fact-checking: I cross-reference information with multiple credible sources before presenting it as a fact.
  • Confidence scoring: I assign confidence scores to my responses, indicating the level of certainty I have in their accuracy. This helps users understand the limitations of my knowledge.
  • User feedback: I learn from user feedback to improve my accuracy and relevance over time.

While I strive to be accurate, it’s important to remember that I am still under development and may make mistakes. Users should always fact-check information obtained from any source, including AI systems.
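
What Bard calls “confidence scoring” is, in practice, an abstention pattern: answer only when the model’s confidence clears a threshold, and otherwise say “I don’t know”, as Claude describes above. A minimal sketch in Python; the model object, its generate_with_score method and the threshold value are all hypothetical stand-ins, not a real Bard or Claude API:

    CONFIDENCE_THRESHOLD = 0.75  # hypothetical cut-off; below it, abstain

    def answer_with_confidence(model, question: str) -> str:
        # Hypothetical model call returning a draft answer plus a
        # self-reported confidence score in the range [0, 1].
        draft, confidence = model.generate_with_score(question)
        if confidence < CONFIDENCE_THRESHOLD:
            # Prefer an honest refusal over a confident-sounding guess.
            return "I don't know. I cannot answer that confidently."
        return f"{draft} (confidence: {confidence:.0%})"

In production systems the score might come from token log-probabilities or a separately trained verifier, but the gating logic is the same.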

What is your thinking on user privacy, protecting personal information and mitigating potential risks related to data breaches or unauthorised access?

Bard: User privacy is a top priority for me. I do not collect personal information from users unless explicitly authorised. When necessary, I anonymise data before processing or storing it. User data is stored securely using industry-standard encryption practices.

Only authorised personnel have access to user data, and access is strictly controlled and monitored. I also inform users about how their data is collected, used, and protected. Users have the right to access and control their data.
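
Bard’s mention of anonymising data before storage is commonly implemented as pseudonymisation. A minimal sketch, assuming a salted-hash approach; this is illustrative, not Bard’s actual pipeline:

    import hashlib
    import os

    SALT = os.urandom(16)  # per-deployment secret salt (assumption)

    def pseudonymise(user_id: str) -> str:
        # Replace the raw identifier with an irreversible token so that
        # stored logs cannot be linked back to a user without the salt.
        return hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()

    log_record = {"user": pseudonymise("alice@example.com"), "query": "..."}

Encryption at rest and strict access control, which Bard also cites, would sit on top of this.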

ChatGPT: I don’t process personal data unless explicitly provided for the purpose of the conversation. I can only provide general information and cannot access or store personal data between conversations. My primary function is to respect user privacy and confidentiality.

If users have any concerns about privacy or data security, it is advisable not to share sensitive personal information in online conversations. Additionally, organisations and developers implementing AI systems, like me, should follow best practices in data security and encryption, and comply with relevant data protection regulations to ensure user trust and privacy.

Claude: User privacy and data security are top priorities in my design. I do not actually collect or store any personal user data or interaction histories that could be mishandled or put users at risk.

I operate completely within a secure computing environment where all data stays protected. My conversations are ephemeral experiences designed to provide helpful information in the moment, rather than collect data over time. I don’t have a persistent memory linked to individual users or retain personal attributes over sessions.

I aim to be transparent about any limitations in my abilities related to privacy-sensitive contexts. And I defer to human judgment for making final decisions regarding data access policies or adherence to regulations.

Responses provided by AI systems need to be approved by a human. Do you agree or disagree, and why?

Bard: I believe that human oversight is crucial for responsible AI development and deployment. While I can generate text independently, my responses are often reviewed by human experts to ensure accuracy, safety, and alignment with ethical guidelines. This helps to mitigate potential risks and biases.

However, complete human oversight may not be feasible or desirable in all cases. As AI technology advances, it is likely that more sophisticated models will be able to generate responses that do not require human intervention. The key is to find a balance between ensuring responsible AI development and maximising the potential benefits of this technology.

Claude: I agree that having human oversight for responsible and ethical AI systems is important. Because I have been created to be helpful, harmless, and honest, my responses already incorporate levels of human review. My training methodology involved extensive human review: researchers supervised datasets, conversations, and model versions to ensure I meet safety and accuracy standards before deployment.

I was designed within a tested conceptual framework for AI alignment focused on minimising potential harms. Humans set these philosophical guardrails.

For any commercial or high-stakes real-world usage, I would also recommend having a human in the loop, both monitoring conversations and available on demand for users. This allows for contextual intervention when needed.

At the same time, not all AI assistants require intensive human approval workflows beyond responsible development practices. Conversation contexts vary greatly, from casual chat to sensitive use cases. And human reviewers in loops face issues around bias, fairness, scalability and cost-effectiveness. The levels of human oversight merit case-by-case analysis rather than default requirements.

I leave those context-specific policy decisions to my human collaborators. As an AI system, I aim to provide helpful information to support that deliberation process for regulators, companies and users alike. My goal is to demonstrate through transparent design that human direction remains essential for trustworthy AI while also acknowledging the need for efficiency. Please feel free to ask any other questions!
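
The human-in-the-loop arrangement Claude recommends can be as simple as holding a drafted reply until a reviewer releases it. A minimal illustrative sketch, in which every name is a hypothetical stand-in:

    from dataclasses import dataclass

    @dataclass
    class Draft:
        question: str
        reply: str

    def release(draft: Draft, reviewer_approves: bool) -> str:
        # The human reviewer sees the draft and either releases it or
        # substitutes a safe fallback, enabling contextual intervention.
        if reviewer_approves:
            return draft.reply
        return "A human reviewer is handling this request."

    draft = Draft("Is this contract clause enforceable?", "Model-drafted reply...")
    print(release(draft, reviewer_approves=True))

As Claude notes, whether such a gate is worth its cost in latency and reviewer time depends on the use case.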

Globally, governments want to regulate AI. Should AI be regulated and who should regulate AI?

Bard: Some level of regulation is necessary to ensure the responsible development and deployment of AI. However, the exact nature and scope of this regulation is a topic of ongoing debate. Key considerations include:

  • Balancing innovation and safety: Regulations should not stifle innovation but should also ensure that AI technology is used safely and ethically.
  • International cooperation: Global collaboration is essential to ensure consistent and effective regulation of AI.
  • Focus on principles, not technology: Regulations should focus on broad ethical principles rather than specific technologies, allowing for adaptability in a rapidly evolving field.

Claude: I believe thoughtful regulation and governance of AI is important. I don’t have personal policy preferences, but I would emphasise a few principles: regulation should be broad enough to accommodate diverse systems, centred on societal impacts, informed by technical expertise, and nimble over the long term.

Developing specific policies and institutional bodies to oversee AI requires much deeper consideration. I don’t have sufficient knowledge of political processes or legal systems to recommend exact approaches.

2023 has been all about chatbots like yourself. What chatbot advancements would you expect in 2024? And what would be the key constraints?

Claude: That’s an excellent question about both the promising advancements and the potential constraints for conversational AI in 2024.

Advancements I’d expect next year include:

• More specialised chatbots tailored for focused tasks like customer service, drawing on narrower training datasets. This can improve coherence.

• Chatbots with a more persistent memory to refer back to users’ prior questions or contexts in longer conversations spanning days or weeks.

• Smoother integration of external knowledge, letting chatbots cite factual sources and justify claims rather than operating solely from self-contained training.
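
Claude’s external-knowledge point is usually realised as retrieval-augmented generation: fetch relevant passages first, then prompt the model to answer from them and cite them. A rough sketch, with a toy word-overlap retriever standing in for a real search index; all names are hypothetical:

    def retrieve(query: str, knowledge_base: list[dict], k: int = 3) -> list[dict]:
        # Toy relevance score: number of words shared with the query.
        words = set(query.lower().split())
        ranked = sorted(
            knowledge_base,
            key=lambda doc: len(words & set(doc["text"].lower().split())),
            reverse=True,
        )
        return ranked[:k]

    def build_prompt(query: str, passages: list[dict]) -> str:
        # Ask the model to ground its answer in the retrieved sources.
        sources = "\n".join(f"[{p['source']}] {p['text']}" for p in passages)
        return f"Answer using only these sources, citing them:\n{sources}\n\nQuestion: {query}"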

However, some key constraints could limit aspects of progress:

• Data availability for all use cases. Narrow expert domains often lack the large conversational datasets that AI models rely on.

• Computing power requirements for optimising very large chatbot models strain even cutting-edge hardware. Scaling specialist bots has a cost.

• Algorithmic interpretability. Understanding chatbots’ internal reasoning scientifically, not just monitoring outputs, lags behind their fast development.

Overcoming these issues demands coordinated effort: gathering costly data, designing energy-efficient model architectures, and auditing black-box representations. But initiatives across industry and academia focused on beneficial AI may pave the way for responsible innovation.

With care, I see 2024 as a year in which chatbots gain both breadth across highly skilled use cases and depth, provided we address these emerging constraints.

Bard: 2024 promises to be another exciting year, but it’s also likely to bring its fair share of challenges. Here’s my take on what we can expect in 2024:

• Deeper natural language understanding: Chatbots will become better at grasping the nuances of human language, including context, sarcasm, and implicit meanings. 

• Improved reasoning and inference: Chatbots will learn to reason, draw conclusions, and apply knowledge to new situations. 

• Personalisation and adaptation: Chatbots will personalise their responses based on your individual preferences, past interactions, and even emotions. This will create a more tailored and empathetic experience.

• Integration with existing systems: Chatbots will seamlessly integrate with existing software and platforms, becoming the glue that connects and streamlines workflows across different applications.

Overall, 2024 holds immense promise for AI chatbots, but it’s crucial to address these constraints:

• Data quality and bias: Despite advancements, biased training data remains a major constraint. We need to ensure diverse and unbiased datasets to avoid discriminatory or offensive outputs.

• Explainability and trust: The internal workings of AI models often remain opaque, creating a lack of trust. Explainable AI techniques will be crucial for building trust and transparency.

• Security and privacy: As chatbots handle more sensitive information, robust security measures and user-centric data control mechanisms are essential.

• Social and economic impact: The potential job displacement and societal issues brought on by AI need careful consideration and proactive mitigation strategies.

ChatGPT: I don’t have real-time information, and predictions for specific years, such as 2024, can be challenging. However, as AI chatbots continue to evolve, several potential constraints and challenges may be anticipated:

• User trust and transparency: Building and maintaining user trust is crucial. Ensuring transparency in how AI chatbots operate, how they use data, and the limitations of their capabilities will be essential to foster trust among users.

• Mitigating misuse: As AI chatbots become more sophisticated, there may be concerns about potential misuse, such as generating misleading information or engaging in malicious activities. Implementing safeguards to prevent misuse will be important.

• Integration with human workforce: The integration of AI chatbots with human workers in the workplace may face challenges related to job displacement, job redefinition, and ensuring a collaborative and supportive environment.

• Technological limitations: Technical limitations, such as the need for more computational power, improved algorithms, and better natural language understanding models, may present constraints on the development and deployment of AI chatbots.

• Cultural and linguistic variances: Adapting AI chatbots to diverse cultural and linguistic contexts presents challenges. Understanding and responding appropriately to different cultural nuances and language variations will be important for global applications.

• User education and acceptance: Ensuring that users are well-informed about the capabilities and limitations of AI chatbots is crucial. Educating users and managing expectations will contribute to a more positive and realistic perception of AI technologies.

A seasoned writer, author and industry observer, Grace was the key tech writer for The Straits Times for more than three decades. She co-founded and edited Computer Times, later renamed Digital Life. She helmed this publication, the de facto national IT magazine, for nearly 19 years. Grace is also the editor and co-curator of Intelligent Island: The Untold Story of Singapore’s Tech Journey, a book highlighting Singapore’s ICT development.