aster.cloud

An AI Dilemma: How To Implement Generative AI Tools Safely And Ethically

  • aster.cloud
  • August 23, 2023
  • 4 minute read

Artificial intelligence is being used in all sorts of ways, from chatbots and virtual assistants to self-driving cars, and 97% of business owners believe that ChatGPT will help their business. But with any new technology, there are concerns about safety and ethics – and it’s no different with AI. 

Some business leaders have recently called for a six-month pause on the development of new models more powerful than GPT-4, warning of “profound risks to society and humanity.” With the introduction of the Biden Administration’s roadmap to promote responsible innovation and focus investment in AI research and development, it’s clear that these risks must be properly mitigated to ensure that safety and the public good remain at the center of all innovation.


For companies looking to adopt AI at the enterprise level, there is hesitation about the longevity and safety of new generative AI tools, which raises a necessary question: is all AI bad? What ethical concerns do we need to be aware of?

As we work to answer these questions, there are tangible steps that can be taken to avoid the ethical dilemmas brought on by data bias. Companies using generative AI must be cognizant of the damage that bias can cause: while large language models (LLMs) are useful, they rely on large sets of data, and that data must be reliable and unbiased.

Ethical challenges of AI

While ChatGPT and other new generative AI tools are tempting, and the opportunities seem endless, integrating them into existing products without caution and careful review can reinforce existing stereotypes and discriminatory practices. These generative AI models rely on large sets of data to form their reasoning and explanations, and if those data sets are flawed, biases will be reflected in the responses and work they produce.


Bias in the data used to train these tools can lead to catastrophic results, which is one of the many reasons why an ethical code must be developed and enforced among organizations creating, adopting and integrating these tools into existing products and platforms. For instance, a study by two researchers at the University of Washington found that ChatGPT perpetuates gender stereotypes for occupations across several spoken languages.

How to harness the benefits of AI – without causing harm

Avoid bias

The most obvious step in creating AI tools that do not suffer bias is to ensure that the data on which the AI is trained is itself free of bias. This is at odds with models trained on the public internet: there is no way to ensure that data pulled randomly from the internet is free of bias (in fact, it virtually guarantees that bias will exist). However, when targeting very specific use cases, you can limit the input data and, in turn, vet the training data for bias.
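As a minimal sketch of what that vetting might look like, the snippet below audits a toy training set for representation imbalance across a sensitive attribute. The dataset, the `gender` attribute, and the `tolerance` threshold are all illustrative assumptions, not a prescribed method:

```python
from collections import Counter

def audit_representation(records, attribute, tolerance=0.2):
    """Flag values of `attribute` that are under-represented.

    A value is flagged when its share of the dataset falls below
    (1 - tolerance) of an even split across all observed values.
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    expected_share = 1 / len(counts)
    return {
        value: count / total
        for value, count in counts.items()
        if count / total < expected_share * (1 - tolerance)
    }

# Toy dataset: occupation sentences labeled with a gender attribute.
training_rows = [
    {"text": "The engineer fixed the bug", "gender": "male"},
    {"text": "The engineer shipped the fix", "gender": "male"},
    {"text": "The engineer wrote the tests", "gender": "male"},
    {"text": "The engineer led the review", "gender": "female"},
]

print(audit_representation(training_rows, "gender"))  # {'female': 0.25}
```

A real pipeline would audit many attributes and use domain-appropriate baselines rather than an even split, but the principle is the same: measure the skew before you train on the data.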

Choose use cases wisely

When deciding whether or not to use AI in a particular use case, think about whether and how AI might be affected by bias. You may find use cases that are much less likely to suffer from bias (for example, in my industry, generating Kubernetes YAML from an English description of a deployment topology) than others (for example, writing a job description for an engineering position, which could accidentally introduce gendered pronouns indicating bias).  
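For the job-description example, a lightweight pre-publication check might look like the following sketch. The pronoun list and the `gendered_pronouns` helper are illustrative, not an exhaustive bias test:

```python
import re

# Flag gendered pronouns in generated text before publishing,
# e.g. in an LLM-drafted job description.
GENDERED = re.compile(r"\b(he|him|his|she|her|hers)\b", re.IGNORECASE)

def gendered_pronouns(text):
    """Return the distinct gendered pronouns found in `text`, lowercased."""
    return sorted({m.lower() for m in GENDERED.findall(text)})

draft = "The ideal candidate is driven; he should know Kubernetes."
print(gendered_pronouns(draft))  # ['he']
```

A hit from a check like this is a prompt for human review, not an automatic rejection; the point is that some use cases admit cheap guardrails while others do not.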

Protect user privacy

We are more aware than ever of how data is being used – think about the number of times a day you get asked about “cookies” on a website. AI and language models represent yet another way that data can be used, and just like with waves of innovation that preceded this one, we need to ensure that we are protecting data privacy. 


If you are planning on using user-submitted content as part of your training dataset, you must at least notify your users that their data can be used in that way. And ideally, you would allow users to opt out of having their data used in training.
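A minimal sketch of honoring that choice, assuming each submission records an explicit consent flag (the `consented_to_training` field name is hypothetical):

```python
def training_corpus(submissions):
    """Yield text only from users who explicitly opted in to training use."""
    for item in submissions:
        if item.get("consented_to_training") is True:
            yield item["text"]

submissions = [
    {"user": "a", "text": "How do I scale a deployment?", "consented_to_training": True},
    {"user": "b", "text": "My cluster keeps crashing", "consented_to_training": False},
    {"user": "c", "text": "What is a sidecar?"},  # no recorded answer: excluded
]

print(list(training_corpus(submissions)))  # ['How do I scale a deployment?']
```

Note the default: a missing flag is treated as a refusal, so users who never answered are excluded rather than silently opted in.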

Be transparent about how AI is being used

While ChatGPT can be a useful tool and ease many monotonous and routine tasks, it is crucial to be transparent about AI usage – both internally and externally. A thorough understanding of not only how the work was created but also the data set that was used to inform the work is required to ensure proper fact-checking and bias-reducing actions can be taken. 

Transparency helps police bias and builds trust with users: openly sharing how these tools are used lets employees, customers and users make decisions based on their own comfort level, and it encourages a two-way dialogue about the use of such tools.

Large language models

LLMs are the engines behind ChatGPT and other generative AI tools. The best part? They can be trained on private and personalized data sets, mitigating many of the ethical issues that may arise in other use cases.

Enterprise companies looking to adopt generative AI can use LLMs to build AI-driven chatbots ranging from technical support portals to blog post generators. However, a disclaimer is needed here: like all code, whether written by a colleague, copied from Stack Overflow or generated by an LLM, it must be carefully reviewed and tested before being put into production.
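One way to make that review concrete is to gate generated code behind a plain test suite before it ships. In this sketch, `generated_slugify` stands in for a hypothetical LLM-produced function; the gate is nothing more exotic than assertions it must pass:

```python
def generated_slugify(title):
    # Imagine this body came from an LLM; review it line by line,
    # then make it pass the same tests any human-written patch would.
    return "-".join(title.lower().split())

def review_gate():
    """Run the acceptance checks a generated function must pass."""
    cases = {
        "Hello World": "hello-world",
        "Generative AI": "generative-ai",
    }
    for given, expected in cases.items():
        actual = generated_slugify(given)
        assert actual == expected, f"{given!r}: got {actual!r}"
    return "all checks passed"

print(review_gate())  # all checks passed
```

The specific checks will vary by codebase; the point is that provenance (colleague, Stack Overflow, or LLM) changes nothing about the review bar.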


While it is important for the industry as a whole to take steps to ensure ethical models are being enforced, companies themselves must also take on the responsibility of reducing bias when implementing new generative AI tools. As the generative AI landscape continues to evolve and new models are introduced to the market, companies should keep a close eye on not only how these new models can benefit their organizations, but also the broader impacts of implementing these technologies on a larger scale.

By: Dan Ciruli, VP of Product at D2iQ
Originally published at Cloud Native Computing Foundation

Source: cyberpogo.com



Related Topics
  • AI
  • Artificial Intelligence
  • Ethics
  • Generative AI
  • Large Language Models
  • LLM
  • Responsible AI