Scaling Machine Learning Inference With NVIDIA Tensorrt And Google Dataflow

  • relay
  • January 26, 2023
  • 3 minute read

A collaboration between Google Cloud and NVIDIA has enabled Apache Beam users to maximize the performance of ML models within their data processing pipelines, using NVIDIA TensorRT and NVIDIA GPUs alongside the new Apache Beam TensorRTEngineHandler.

The NVIDIA TensorRT SDK provides high-performance neural network inference that lets developers optimize and deploy trained ML models on NVIDIA GPUs with the highest throughput and lowest latency, while preserving model prediction accuracy. TensorRT was specifically designed to support multiple classes of deep learning models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and Transformer-based models.
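
As a rough sketch of that optimization step (not code from the original post), the following uses the TensorRT Python API to parse a trained model exported to ONNX and serialize an optimized engine. The file names and FP16 setting are illustrative assumptions, and the exact builder calls vary across TensorRT versions.

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Build an optimized TensorRT engine from a trained model exported to ONNX.
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)
with open('model.onnx', 'rb') as f:  # hypothetical exported model
    parser.parse(f.read())

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # reduced precision, if the GPU supports it

# Serialize the engine so a pipeline can load it later for inference.
with open('single_tensor_features_engine.trt', 'wb') as f:
    f.write(builder.build_serialized_network(network, config))

The serialized engine file is what a data processing pipeline points to at inference time, as the code sample later in this post shows.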

Deploying and managing end-to-end ML inference pipelines while maximizing infrastructure utilization and minimizing total costs is a hard problem. Integrating ML models in a production data processing pipeline to extract insights requires addressing challenges associated with the three main workflow segments:

  1. Preprocess large volumes of raw data from multiple data sources to use as inputs to train ML models to “infer / predict” results, and then leverage the ML model outputs downstream for incorporation into business processes.
  2. Call ML models within data processing pipelines while supporting different inference use cases: batch, streaming, ensemble models, remote inference, or local inference. Pipelines are not limited to a single model and often require an ensemble of models to produce the desired business outcomes (see the sketch after this list).
  3. Optimize the performance of the ML models to deliver results within the application’s accuracy, throughput, and latency constraints. For pipelines that use complex, compute-intensive models for use cases like NLP, or that require multiple ML models together, the response time of these models often becomes a performance bottleneck. This can cause poor hardware utilization and require more compute resources to deploy your pipelines in production, leading to potentially higher costs of operation.
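
As a minimal sketch of the ensemble case in point 2 (the model handlers and input data below are hypothetical placeholders, not part of the original post), two RunInference steps can be composed in a single Beam pipeline so that one model's output feeds the next:

import apache_beam as beam
from apache_beam.ml.inference.base import RunInference

# Hypothetical placeholders: in practice these would be concrete model handlers
# (for example, TensorRTEngineHandlerNumPy instances) and real input data.
embedding_handler = ...
classifier_handler = ...
raw_examples = [...]

with beam.Pipeline() as pipeline:
    predictions = (
        pipeline
        | 'ReadInputs' >> beam.Create(raw_examples)
        | 'EmbeddingModel' >> RunInference(embedding_handler)
        # RunInference emits PredictionResult objects; extract the inference
        # before passing it on to the next model in the ensemble.
        | 'ExtractEmbeddings' >> beam.Map(lambda result: result.inference)
        | 'Classifier' >> RunInference(classifier_handler))

Because Beam uses a single API for batch and streaming, the same structure applies whether the examples arrive from a bounded or an unbounded source.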

Google Cloud Dataflow is a fully managed runner for stream or batch processing pipelines written with Apache Beam. To enable developers to easily incorporate ML models in data processing pipelines, Dataflow recently announced support for Apache Beam’s generic machine learning prediction and inference transform, RunInference. The RunInference transform simplifies the ML pipeline creation process by allowing developers to use models in production pipelines without needing lots of boilerplate code.

You can see an example of its usage with Apache Beam in the following code sample. Note that the engine_handler is passed as a configuration to the RunInference transform, which shields the user from the implementation details of running the model.

import apache_beam as beam
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.tensorrt_inference import TensorRTEngineHandlerNumPy

# Configure the handler with the serialized TensorRT engine to load on the workers.
engine_handler = TensorRTEngineHandlerNumPy(
    min_batch_size=4,
    max_batch_size=4,
    engine_path='gs://gcp_bucket/single_tensor_features_engine.trt')

# Create a PCollection of input examples and run TensorRT inference on it.
pcoll = pipeline | beam.Create(SINGLE_FEATURE_EXAMPLES)
predictions = pcoll | RunInference(engine_handler)

Along with the Dataflow runner and the TensorRT engine, Apache Beam enables users to address the three main challenges. The Dataflow runner takes care of pre-processing data at scale, preparing the data for use as model input. Apache Beam’s single API for batch and streaming pipelines means that RunInference is automatically available for both use cases. Apache Beam’s ability to define complex multi-path pipelines also makes it easier to create pipelines that have multiple models. With TensorRT support, Dataflow now also has the ability to optimize the inference performance of models on NVIDIA GPUs.
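
As a hedged sketch of what running such a pipeline on GPU-backed Dataflow workers can look like (the project, region, bucket, and accelerator values below are placeholders, not settings from the original post), the Dataflow service options attach an NVIDIA GPU to each worker:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Illustrative Dataflow options; adjust the project, region, bucket, and GPU
# type to your own environment.
options = PipelineOptions([
    '--runner=DataflowRunner',
    '--project=my-gcp-project',
    '--region=us-central1',
    '--temp_location=gs://my-bucket/tmp',
    '--dataflow_service_options='
    'worker_accelerator=type:nvidia-tesla-t4;count:1;install-nvidia-driver',
])

pipeline = beam.Pipeline(options=options)

The worker_accelerator option tells the Dataflow service to provision workers with the requested GPU and install the NVIDIA driver, so the TensorRT engine can run locally on each worker.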

For more details and samples to start using this feature today, see the NVIDIA Technical Blog post, “Simplifying and Accelerating Machine Learning Predictions in Apache Beam with NVIDIA TensorRT.” Documentation for RunInference is available on the Apache Beam documentation site and in the Dataflow docs.


By: Reza Rokni (Senior Staff Developer Advocate Dataflow) and Ruichao Ren (Deep Learning Specialist)
Source: Google Cloud Blog


Related Topics
  • Apache Beam
  • Google Cloud
  • Google Dataflow
  • Machine Learning
  • NVIDIA
  • TensorRT