
PromptOps in Application Delivery: Empowering Your Workflow with ChatGPT

  • aster.cloud
  • April 30, 2023
  • 7 minute read

ChatGPT is taking the tech industry by storm, thanks to its unparalleled natural language processing capabilities. As a powerful AI language model, it has the ability to understand and generate human-like responses, revolutionizing communication in various industries. From streamlining customer service chatbots to enabling seamless language translation tools, ChatGPT has already proved its mettle in creating innovative solutions that improve efficiency and user experience. 

Now the question is: can we leverage ChatGPT to transform the way we deliver applications? With the integration of ChatGPT into DevOps workflows, we may be witnessing the emergence of a new era of automation called PromptOps. This advancement in AIOps technology promises faster and more efficient application delivery.



In this article, we will explore how to integrate ChatGPT into your DevOps workflow to deliver applications.

Integrate ChatGPT into Your DevOps Workflow

When it comes to integrating ChatGPT into DevOps workflows, many developers face the challenge of managing extra resources and writing complicated shell scripts. However, there is a better way: KubeVela Workflow. This open-source cloud-native workflow project offers a streamlined solution that eliminates the need for extra pods or complex scripting.

In KubeVela Workflow, every step has a type that can be easily abstracted and reused. Step types are written in the CUE language, making them easy to customize and letting you use atomic capabilities, such as HTTP requests, like function calls in every step. With these atomic capabilities, you can integrate ChatGPT in just five minutes by writing a new step type.
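To illustrate the shape of a step type before we get to the real one, here is a minimal, hypothetical example. The `say-hello` name, greeting, and `who` parameter are our illustration, not part of KubeVela; `op.#Log` is the same logging capability used by the chat-gpt step type later in this article:

```cue
// A minimal hypothetical step type that only logs a greeting.
import (
  "vela/op"
)

// the name of the step type
"say-hello": {
  description: "Log a greeting message"
  type:        "workflow-step"
}

// the logic of the step type
template: {
  // log the greeting as this step's log data
  log: op.#Log & {
    data: "Hello, \(parameter.who)!"
  }
  // user-facing parameter with a default value
  parameter: {
    who: *"world" | string
  }
}
```

The same description/type header plus `template` body structure appears in the chat-gpt step type we build below.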

Check out the Installation Guide to get started with KubeVela Workflow. The complete code of the chat-gpt step type is available on GitHub.

Now that we've chosen the right tool, let's look at what ChatGPT can do in application delivery.

Case 1: Diagnose the resource

It’s quite common in the DevOps world to encounter problems like “I don’t know why the pod is not running” or “I don’t know why the service is not available”. In such cases, we can use ChatGPT to diagnose the resource.

For example, in our workflow, we can apply a Deployment with an invalid image in the first step. Since the Deployment will never become ready, we add a timeout to the step so the workflow doesn't get stuck there. Then, by passing the unhealthy resource deployed in the first step to the second step, we can use the `chat-gpt` step type to diagnose the resource and determine the issue. Note that the second step is only executed if the first one fails.

[Figure: the resource passed from step 1 to step 2]

The complete workflow is shown below:

apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: chat-gpt-diagnose
  namespace: default
spec:
  workflowSpec:
    steps:
    # Apply an invalid deployment with a timeout
    - name: apply
      type: apply-deployment
      timeout: 3s
      properties:
        image: invalid
      # output the resource to the next step
      outputs:
        - name: resource
          valueFrom: output.value

    # Use chat-gpt to diagnose the resource
    - name: chat-diagnose
      # only execute this step if the `apply` step fails
      if: status.apply.failed
      type: chat-gpt
      # use the resource as inputs and pass it to prompt.content
      inputs:
        - from: resource
          parameterKey: prompt.content
      properties:
        token:
          value: <your token>
        prompt:
          type: diagnose

Apply this Workflow and check the result: the first step fails because of the timeout, then the second step executes and ChatGPT's diagnosis is shown in the log:


vela workflow logs chat-gpt-diagnose


Visualize in the dashboard

If you want to visualize the process and the result in the dashboard, it’s time to enable the velaux addon.

vela addon enable velaux

Copy all the steps in the above YAML to create a pipeline.

[Figure: the pipeline in the dashboard]

Run this pipeline, and you can check out the failure reason analyzed by ChatGPT in the logs of the second step.

[Figure: the diagnose result]

Write the chat-gpt step from scratch

How is this chat-gpt step type written? Is it easy to write a step type like this yourself? Let's walk through building it from scratch.

We can first define what this step type needs from the user: the user's token for ChatGPT, and the resource to diagnose. For other parameters, like the model or the request timeout, we can set default values with * as below:

parameter: {
  token: value: string
  // +usage=the model name
  model: *"gpt-3.5-turbo" | string
  // +usage=the prompt to use
  prompt: {
    type:    *"diagnose" | string
    lang:    *"English" | string
    content: {...}
  }
  timeout: *"30s" | string
}

Let’s complete the step type by writing its logic. We first import the vela/op package, which provides the op.#HTTPDo capability to send a request to the ChatGPT API. If the request fails, the step fails with op.#Fail. We can also set the step's log data to ChatGPT's answer. The complete step type is shown below:

// import packages
import (
  "vela/op"
  "encoding/json"
)

// this is the name of the step type
"chat-gpt": {
  description: "Send request to chat-gpt"
  type:        "workflow-step"
}

// this is the logic of the step type
template: {
  // send http request to chat gpt
  http: op.#HTTPDo & {
    method: "POST"
    url:    "https://api.openai.com/v1/chat/completions"
    request: {
      timeout: parameter.timeout
      body: json.Marshal({
        model: parameter.model
        messages: [{
          if parameter.prompt.type == "diagnose" {
            content: """
              You are a professional kubernetes administrator.
              Carefully read the provided information, being certain to spell out the diagnosis & reasoning, and don't skip any steps.
              Answer in \(parameter.prompt.lang).
              ---
              \(json.Marshal(parameter.prompt.content))
              ---
              What is wrong with this object and how to fix it?
              """
          }
          role: "user"
        }]
      })
      header: {
        "Content-Type":  "application/json"
        "Authorization": "Bearer \(parameter.token.value)"
      }
    }
  }

  response: json.Unmarshal(http.response.body)

  // fail the step if the request returns an error status
  fail: op.#Steps & {
    if http.response.statusCode >= 400 {
      requestFail: op.#Fail & {
        message: "\(http.response.statusCode): failed to request: \(response.error.message)"
      }
    }
  }

  result: response.choices[0].message.content
  // log ChatGPT's answer as this step's log data
  log: op.#Log & {
    data: result
  }

  parameter: {
    token: value: string
    // +usage=the model name
    model: *"gpt-3.5-turbo" | string
    // +usage=the prompt to use
    prompt: {
      type:    *"diagnose" | string
      lang:    *"English" | string
      content: {...}
    }
    timeout: *"30s" | string
  }
}

That’s it! Apply this step type and we can use it in our Workflow as shown above.


vela def apply chat-gpt.cue

Case 2: Audit the resource

Now ChatGPT is our Kubernetes expert and can diagnose resources. Can it also give us security advice for a resource? Definitely! It's just a matter of prompting. Let's modify the step type from the previous case to add an audit feature: a new prompt type, audit, that passes the resource to the prompt. You can check out the whole step type on GitHub.
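For reference, the audit prompt can live next to the diagnose prompt in the step type's message construction. A sketch of what that branch might look like (the prompt wording here is our illustration, not the published one):

```cue
// Inside the messages list of the chat-gpt step type:
if parameter.prompt.type == "audit" {
  content: """
    You are a professional kubernetes administrator.
    Review the provided object for security risks and best-practice violations.
    Answer in \(parameter.prompt.lang).
    ---
    \(json.Marshal(parameter.prompt.content))
    ---
    What security issues does this object have and how can they be mitigated?
    """
}
```

Because the prompt type is just a parameter, adding a new capability never touches the request logic, only the message construction.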

In the Workflow, we can apply a Deployment with the nginx image and pass it to the second step. The second step uses the audit prompt to audit the resource.

[Figure: pass the applied resource to the audit step]

The complete Workflow is shown below:

apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: chat-gpt-audit
  namespace: default
spec:
  workflowSpec:
    steps:
    - name: apply
      type: apply-deployment
      # output the resource to the next step
      outputs:
        - name: resource
          valueFrom: output.value
      properties:
        image: nginx

    - name: chat-audit
      type: chat-gpt
      # use the resource as inputs and pass it to prompt.content
      inputs:
        - from: resource
          parameterKey: prompt.content
      properties:
        token:
          value: <your token>
        prompt:
          type: audit

Use Diagnose & Audit in one Workflow

Now that we can both diagnose and audit the resource, we can use both capabilities in one Workflow and use if conditions to control the execution of the steps: if the apply step fails, diagnose the resource; if it succeeds, audit it.


The complete Workflow is shown below:

apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: chat-gpt
  namespace: default
spec:
  workflowSpec:
    steps:
    - name: apply
      type: apply-deployment
      outputs:
        - name: resource
          valueFrom: output.value
      properties:
        image: nginx

    # if the apply step fails, then diagnose the resource
    - name: chat-diagnose
      if: status.apply.failed
      type: chat-gpt
      inputs:
        - from: resource
          parameterKey: prompt.content
      properties:
        token:
          value: <your token>
        prompt:
          type: diagnose
       
    # if the apply step succeeds, then audit the resource
    - name: chat-audit
      if: status.apply.succeeded
      type: chat-gpt
      inputs:
        - from: resource
          parameterKey: prompt.content
      properties:
        token:
          value: <your token>
        prompt:
          type: audit

Case 3: Use ChatGPT as a quality gate

If we want to apply resources to a production environment, can we let ChatGPT rate the quality of the resource first, and only apply it to production if the quality is high enough? Absolutely!


Note that to make the score evaluated by chat-gpt more convincing, it's better to pass metrics rather than the raw resource in this case.
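Since the scoring step parses ChatGPT's answer with strconv.Atoi, the quality-gate prompt must push the model to reply with a bare integer. A hypothetical sketch of such a prompt branch (our wording, not the published one):

```cue
// Inside the messages list of the chat-gpt step type:
if parameter.prompt.type == "quality-gate" {
  content: """
    You are a professional kubernetes administrator.
    Rate the quality of the provided object on a scale from 0 to 100.
    ---
    \(json.Marshal(parameter.prompt.content))
    ---
    Reply with only the integer score and nothing else.
    """
}
```

Constraining the output format in the prompt is what makes the score machine-readable by the steps that follow.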

Let’s write our Workflow. KubeVela Workflow can apply resources to multiple clusters. The first step applies the Deployment to the test environment. The second step uses ChatGPT to rate the quality of the resource. If the quality is high enough, the third step applies the resource to the production environment.


The complete Workflow is shown below:

apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: chat-gpt-quality-gate
  namespace: default
spec:
  workflowSpec:
    steps:
    # apply the resource to the test environment
    - name: apply
      type: apply-deployment
      # output the resource to the next step
      outputs:
        - name: resource
          valueFrom: output.value
      properties:
        image: nginx
        cluster: test

    - name: chat-quality-check
      # this step will always be executed
      if: always
      type: chat-gpt
      # get the inputs from resource and pass it to the prompt.content
      inputs:
        - from: resource
          parameterKey: prompt.content
      # output the score of ChatGPT and use strconv.Atoi to convert the score string to int
      outputs:
        - name: chat-result
          valueFrom: |
            import "strconv"
            strconv.Atoi(result)
      properties:
        token:
          value: <your token>
        prompt:
          type: quality-gate

    # if the score is higher than 60, then apply the resource to the production environment
    - name: apply-production
      type: apply-deployment
      # get the score from chat-result
      inputs:
        - from: chat-result
      # check if the score is higher than 60
      if: inputs["chat-result"] > 60
      properties:
        image: nginx
        cluster: prod

Apply this Workflow and we can see that if the score is higher than 60, then the resource will be applied to the production environment.

In the End

ChatGPT brings imagination to the world of Kubernetes. Diagnosing, auditing, and rating are just the beginning. In the new AI era, the most precious thing is the idea. What do you want to do with ChatGPT? Share your insights with us in the KubeVela Community.

By Fog Dong, Engineer at Alibaba Cloud, and Maintainer of KubeVela 
Originally published at Cloud Native Computing Foundation




Related Topics
  • AIOps
  • ChatGPT
  • CNCF
  • KubeVela
  • Kubernetes
  • PromptOps