Vertex AI launched with the premise “one AI platform, every ML tool you need.” Let’s talk about how Vertex AI streamlines modeling for a broad range of use cases.
The overall purpose of Vertex AI is to simplify modeling so that enterprises can fast-track innovation, accelerate time to market, and ultimately increase the return on their ML investments. Vertex AI facilitates this in several ways. Features like Vertex AI Workbench, for example, speed up the training and deployment of models by five times compared to traditional notebooks. Vertex AI Workbench’s native integration with BigQuery and Spark means that users without data science expertise can more easily perform machine learning work. Tools integrated into the unified Vertex AI platform, such as state-of-the-art pre-trained APIs and AutoML, make it easier for data scientists to build models in less time. And for work that lends itself best to custom modeling, Vertex AI’s custom model tooling supports advanced ML coding, requiring nearly 80% fewer lines of code than competitive platforms to train a model with custom libraries. Vertex AI delivers all this while maintaining a strong focus on Explainable AI.
But simplified ML modeling isn’t relegated to simple use cases. Organizations with the largest investments in AI and machine learning, staffed with teams of ML experts, require extremely advanced toolsets to deliver on their most complex problems.
Let’s look at Vertex AI Neural Architecture Search (NAS), for instance.
Vertex AI NAS enables ML experts at the highest level to perform their most complex tasks with higher accuracy, lower latency, and lower power consumption. Vertex AI NAS originates from Alphabet’s deep experience building advanced AI at scale. In 2017, the Google Brain team recognized the need for a better way to scale AI modeling, so they developed Neural Architecture Search technology: an AI that generates other neural networks, trained to optimize their performance on a specific task the user provides. To the astonishment of many in the field, these AI-optimized models beat a number of state-of-the-art benchmarks, such as ImageNet classification and the best MobileNets of the day, setting a new standard for many of the applications in use today, including many Google-internal products. Google Cloud saw the potential of the technology and, in less than a year, shipped a productized version of the technique under the AutoML brand. Vertex AI NAS is the newest and most powerful version of this idea, incorporating the most sophisticated innovations that have emerged since the initial research.
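To make the core idea concrete, here is a minimal sketch of what architecture search means: enumerate a discrete space of candidate architectures and keep the one that scores best under an objective trading capacity against latency. Everything here (the search space, the `proxy_score` function, the random-search loop) is an illustrative assumption, not the Vertex AI NAS API; a real NAS system trains and evaluates each candidate network and typically uses learned controllers or evolutionary search rather than a toy proxy.

```python
import itertools
import random

# Toy discrete search space over a few architecture hyperparameters.
# (Illustrative only -- not the Vertex AI NAS search-space format.)
SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "width": [64, 128, 256],
    "kernel_size": [3, 5],
}

def proxy_score(arch):
    """Made-up objective trading accuracy-like reward against latency.

    A real NAS run would train the candidate network and measure
    validation accuracy and on-device latency instead of this proxy.
    """
    capacity = arch["num_layers"] * arch["width"]
    latency_penalty = 0.001 * capacity * arch["kernel_size"]
    return capacity ** 0.5 - latency_penalty

def random_search(trials=20, seed=0):
    """Sample candidate architectures and return the best-scoring one."""
    rng = random.Random(seed)
    candidates = [dict(zip(SEARCH_SPACE, values))
                  for values in itertools.product(*SEARCH_SPACE.values())]
    sampled = rng.sample(candidates, min(trials, len(candidates)))
    return max(sampled, key=proxy_score)

best = random_search()
print(best)
```

Under this toy objective the search favors the largest network with the cheapest kernel, which mirrors the accuracy/latency trade-off Vertex AI NAS is designed to navigate automatically at far larger scale.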
Customer organizations are already implementing Vertex AI NAS for their most advanced workloads. Autonomous vehicle company Nuro is using Vertex AI NAS, and Jack Guo, Head of Autonomy Platform at the company, states, “Nuro’s perception team has accelerated their AI model development with Vertex AI NAS. Vertex AI NAS has enabled us to innovate AI models to achieve good accuracy and optimize memory and latency for the target hardware. Overall, this has increased our team’s productivity for developing and deploying perception AI models.”
And the partner ecosystem around Vertex AI NAS is growing. Google Cloud and Qualcomm Technologies have collaborated to bring Vertex AI NAS to the Qualcomm Technologies Neural Processing SDK, optimized for Snapdragon 8. This will bring AI to a range of device types and use cases, including IoT, mixed reality, automotive, and mobile.
Google Cloud’s commitments to making machine learning more accessible and useful for data users, from the novice to the expert, and to increasing the efficacy of machine learning for enterprises are at the core of everything we do. With the suite of unified machine learning tools within Vertex AI, organizations can take advantage of every ML tool they need on one AI platform.
By: Craig Wiley (Director of PM, Cloud AI)
Source: Google Cloud Blog