AgroScout, a startup in the agritech (agricultural technology) sector dedicated to the early detection of pests and diseases in field crops, is a prime example of a cutting-edge company using Oracle Cloud Native Services to migrate its application to Kubernetes and deliver an automated deployment pipeline. Cloud native technologies are all the rage right now, with a huge range of options available for both the application platform and the continuous integration/continuous delivery (CI/CD) tooling used to deliver applications to it. Now up and running, the AgroScout development team enjoys much easier management of their application with Kubernetes and a streamlined CI/CD platform, better performance from Oracle’s Gen 2 cloud, and much more.
The Customer
AgroScout surveys fields via auto-piloted drones with cameras, then processes, detects, and classifies any issues in the crops before recommending treatment. The application relies on Graphics Processing Unit (GPU) based machine learning as well as a set of microservices backed by a SQL database. Their initial technology stack was based on Heroku and AWS, and the small development team found that the time spent managing it detracted from their ability to deliver new features. As well as being hard to maintain, the existing solution suffered from poor performance and was difficult to scale. In addition, the outcome and status of new deployments weren’t immediately clear.
Oracle Cloud Native Services
Oracle Cloud Native Services include services for containers, serverless functions, streaming (compatible with Apache Kafka), infrastructure automation (compatible with Terraform), APIs, and the associated monitoring and notification capabilities. A relatively recent addition to the Oracle Cloud Infrastructure portfolio, these services have gained momentum over the last year. Adopters include scientific organizations, healthcare organizations, large financial services companies, innovative AI-centric startups, logistics and transportation companies, and government entities. You can get more details about these services and their adoption from the update we made in November 2019.
The Solution
Kubernetes was chosen as the application platform for this project, provided by Oracle Container Engine for Kubernetes (OKE). OKE is a developer-friendly, enterprise-ready managed Kubernetes service for running highly available clusters with the control, security, and predictable performance of Oracle Cloud Infrastructure. New clusters can be created via the console, API, CLI, or Terraform, with a choice of virtual machines or bare metal servers for the worker nodes. OKE runs standard Kubernetes: all the usual tools, such as kubectl, Helm, and the Kubernetes dashboard, are available for a pure k8s user experience and portability across platforms.
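As a sketch of the Terraform route, an OKE cluster and worker node pool can be declared with the OCI Terraform provider along these lines. All OCIDs, names, versions, and shapes below are illustrative placeholders, and the referenced VCN and subnet resources are assumed to be defined elsewhere in the configuration:

```hcl
resource "oci_containerengine_cluster" "oke" {
  compartment_id     = var.compartment_ocid        # placeholder
  name               = "example-cluster"           # illustrative name
  kubernetes_version = "v1.15.7"                   # example version
  vcn_id             = oci_core_vcn.oke_vcn.id     # VCN defined elsewhere
}

resource "oci_containerengine_node_pool" "workers" {
  cluster_id         = oci_containerengine_cluster.oke.id
  compartment_id     = var.compartment_ocid
  kubernetes_version = "v1.15.7"
  name               = "example-pool"
  node_shape         = "VM.Standard2.2"            # or a bare metal shape

  node_config_details {
    size = 3                                       # three worker nodes
    placement_configs {
      availability_domain = var.availability_domain  # placeholder
      subnet_id           = oci_core_subnet.workers.id
    }
  }
}
```

The same cluster could equally be created with a few clicks in the console; Terraform simply makes the topology repeatable across environments.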
Any container technology relies on images and requires a registry to store and access them. Oracle Cloud Infrastructure includes a managed, Docker v2 compatible container registry, Oracle Cloud Infrastructure Registry (OCIR). Our solution also required tooling to take the customer’s code, build it, and deploy it as pods on a Kubernetes cluster. This final piece was provided by a Continuous Integration/Delivery (CI/CD) platform that brings a set of prebuilt integrations for Kubernetes and container registries, which can be used to build code or container images and then deploy them to Kubernetes or other platforms.
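Because OCIR is Docker v2 compatible, pushing an image outside the pipeline follows the standard Docker workflow. In this sketch the region key (`fra`), tenancy namespace (`mytenancy`), username, and repository name are all placeholders; the login password is an OCI auth token, not the account password:

```shell
# Log in to the regional OCIR endpoint
# (the username takes the form <tenancy-namespace>/<username>)
docker login fra.ocir.io -u 'mytenancy/jdoe@example.com'

# Tag a locally built image with its full OCIR repository path
docker tag example-service:latest fra.ocir.io/mytenancy/example-service:latest

# Push it to the registry
docker push fra.ocir.io/mytenancy/example-service:latest
```

The CI/CD pipeline performs the equivalent of these steps automatically on every build.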
The goal of the solution was to allow a developer to commit code changes to their source control system and have this automatically trigger a build that created a Docker container image, which was then pushed to a repository in OCIR. A further automated step then deployed this image as a Kubernetes deployment, along with the Kubernetes services needed to expose it to the outside world. Fortunately, our CI/CD platform integrates with most popular hosted git offerings, so the customer could continue to use their preferred Bitbucket git repositories, minimizing the impact on their development workflow.

Each microservice has a dedicated git repository, and each of these was associated with a build pipeline, meaning that each microservice could be built and deployed in isolation. Commits to the source code now trigger the first step of the associated pipeline, which builds the application, in this case using Node.js, and creates a Docker image. The CI/CD pipeline can use native steps to build an image or work from a standard Dockerfile stored in git; we chose the latter for portability and readability. Once the image is created, a second step pushes it to a repository hosted in OCIR using the pipeline’s native capabilities.

There was also a requirement to deploy the application to multiple Kubernetes environments depending on which git branch the developer was working on. Here the pipeline’s ability to branch deployment workflows allowed us to create a set of Kubernetes manifest files for each environment, selected by the developer’s git branch. A final step then deployed these to the correct Kubernetes cluster. The application’s Kubernetes deployments were exposed to the outside world via an ingress controller that leveraged the integration of the Oracle Cloud Infrastructure Load Balancer service with OKE, providing a highly available public load balancer for internet access to the customer’s website.
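To make the deployment step concrete, a manifest along these lines pairs a Deployment pulling its image from OCIR with a Service that, on OKE, provisions an Oracle Cloud Infrastructure Load Balancer. The service name, image path, port, and the `ocirsecret` pull secret are illustrative, not taken from the customer’s actual configuration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service            # hypothetical microservice name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
        - name: example-service
          # image pushed to OCIR by the pipeline; region key, tenancy
          # namespace, and repository name are placeholders
          image: fra.ocir.io/mytenancy/example-service:latest
          ports:
            - containerPort: 3000  # the Node.js app's listen port
      imagePullSecrets:
        - name: ocirsecret         # secret holding OCIR credentials
---
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  # on OKE, type LoadBalancer provisions an OCI Load Balancer
  type: LoadBalancer
  selector:
    app: example-service
  ports:
    - port: 80
      targetPort: 3000
```

In the actual solution the pipeline generated a variant of such manifests per git branch and applied them to the matching cluster.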
The application also included a set of batch jobs to be run periodically. These too could be built using the same approach outlined above and deployed by our CI/CD platform to the same OKE clusters as Kubernetes cronjobs. Periodic maintenance jobs, backups, and processing of image data all use this pattern, with the cronjob launching a job at the desired interval. The machine learning side of the application also makes use of Jupyter Notebook; here again an image containing the Notebook and all required files was built and deployed via build pipelines on Oracle Cloud Infrastructure.
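A periodic job of this kind can be expressed as a Kubernetes CronJob. The job name, schedule, and image below are illustrative placeholders rather than the customer’s actual jobs:

```yaml
apiVersion: batch/v1beta1          # batch/v1 on Kubernetes 1.21 and later
kind: CronJob
metadata:
  name: nightly-backup             # hypothetical maintenance job
spec:
  schedule: "0 2 * * *"            # run every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: backup
              # image built and pushed by the same CI/CD pipeline
              image: fra.ocir.io/mytenancy/backup-job:latest
          restartPolicy: OnFailure
          imagePullSecrets:
            - name: ocirsecret     # same OCIR pull secret as above
```

At each scheduled interval the CronJob controller launches a Job, which runs the containerized task to completion.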
The proof is in the pudding, of course, and we’re happy to say that once up and running, the development team enjoyed much easier management of their application with Kubernetes and a streamlined CI/CD platform, as well as better performance from Oracle’s Gen 2 cloud for both their microservices and GPU-based machine learning. Scalability and resilience were improved by using the Oracle Cloud Infrastructure Load Balancer to forward traffic to Kubernetes services. Integration with Slack ensured that the team was always notified of the state of each build. The end solution on Oracle Cloud Infrastructure freed up precious developer time to deliver new features and bug fixes to their growing customer base.
The Results
AgroScout has seen significant improvements in three areas:
- Performance: The time taken to download pictures of crops from the fields, thousands of them, has been reduced from minutes to a few seconds. Tagging, viewing, and working with pictures is much faster, improving the overall user experience.
- Agility: Oracle Solution Center engineers and Oracle Cloud Infrastructure technology have made committing code and building and delivering new releases automatic, fast, and simple. The prior manual process took at least a day and included no capability for notifications. The DevOps team now gets notified right away on their cell phones and can fix bugs much faster.
- Scalability: With Oracle Cloud, AgroScout can scale dynamically, based on demand. They expect to have tens of thousands of users in the next 2-5 years.
Next Steps
Try Oracle Cloud today and experience Oracle Container Engine for Kubernetes, Oracle Cloud Infrastructure Registry, and other services. Additional resources:
- Free workshops on PluralSight.com – get 12 months of free access to Oracle Cloud courses
- Oracle Cloud Infrastructure training (for learners of all levels)
- Learn more about Oracle Cloud Native Services
by Angus Myles, Sales Consulting and Akshai Parthasarathy, Product Marketing