Ingesting Google Cloud Storage Files To BigQuery Using Cloud Functions And Serverless Spark

  • aster.cloud
  • April 11, 2022
  • 4 minute read

Apache Spark has become a popular platform because it can serve data engineering, data exploration, and machine learning use cases alike. However, Spark still requires the on-premises approach of managing clusters and tuning infrastructure for each job. This increases costs, reduces agility, and makes governance extremely hard, preventing enterprises from making insights available to the right users at the right time.

Dataproc Serverless lets you run Spark batch workloads without provisioning and managing your own cluster. Specify the workload parameters, and then submit the workload to the Dataproc Serverless service. The service runs the workload on managed compute infrastructure, autoscaling resources as needed. Dataproc Serverless charges apply only to the time when the workload is executing.

You can run the following Spark workload types on the Dataproc Serverless for Spark service (a sample batch submission follows the list):


  • PySpark
  • Spark SQL
  • SparkR
  • Spark Java/Scala
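
For illustration, here is a minimal sketch of submitting a PySpark batch workload to Dataproc Serverless with gcloud (the bucket, file, and region names below are placeholders, not part of this tutorial):

gcloud dataproc batches submit pyspark gs://your-bucket/your_job.py \
    --region=us-central1 \
    --subnet=default

No cluster needs to exist beforehand; the service provisions and autoscales the compute for the batch.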

This post walks you through the process of ingesting files into BigQuery using serverless services: Cloud Functions, Pub/Sub, and Serverless Spark. We use the Daily Shelter Occupancy data in this example.

You can find the complete source code for this solution on GitHub.

Here’s a look at the architecture we’ll be using:

[Architecture diagram]

Here’s how to get started with ingesting GCS files to BigQuery using Cloud Functions and Serverless Spark:

1. Create a bucket; this bucket holds the data to be ingested into GCP. Once an object is uploaded to the bucket, a notification is published to a Pub/Sub topic:

 

PROJECT_ID=<<project_id>>
GCS_BUCKET_NAME=<<Bucket name>>
gsutil mb gs://${GCS_BUCKET_NAME}
gsutil notification create \
    -t projects/${PROJECT_ID}/topics/create_notification_${GCS_BUCKET_NAME} \
    -e OBJECT_FINALIZE \
    -f json gs://${GCS_BUCKET_NAME}
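
You can verify the notification configuration with a quick check (this command is an addition for convenience, not part of the original post):

gsutil notification list gs://${GCS_BUCKET_NAME}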

 


2. Build the JAR and copy it to a GCS bucket (create a GCS bucket to store the JAR if you don’t have one).

The following commands create the artifact bucket and copy the JAR to it:

 

GCS_ARTIFACT_REPO=<<artifact repo name>>
gsutil mb gs://${GCS_ARTIFACT_REPO}
cd gcs2bq-spark
mvn clean install
gsutil cp target/GCS2BQWithSpark-1.0-SNAPSHOT.jar gs://${GCS_ARTIFACT_REPO}/

 

3. Enable the network configuration required to run Serverless Spark.

 

  • Open subnet connectivity: The subnet must allow internal communication on all ports. The following gcloud command creates a firewall rule that allows ingress communication using all protocols on all ports, provided the source and destination are tagged with “serverless-spark”:

 

gcloud compute firewall-rules create allow-internal-ingress \
    --network="default" \
    --source-tags="serverless-spark" \
    --target-tags="serverless-spark" \
    --direction="ingress" \
    --action="allow" \
    --rules="all"

 

Note: The default VPC network in a project, together with its default-allow-internal firewall rule, which allows ingress communication on all ports (tcp:0-65535, udp:0-65535, and icmp), meets this requirement. However, it also allows ingress from any VM instance on the network.

  • Private Google Access: The subnet must have Private Google Access enabled. Drivers and executors have internal IP addresses only; if the workload needs external network access, you can set up Cloud NAT to allow outbound traffic using internal IPs on your VPC network. A sample command for enabling Private Google Access follows.
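
A minimal sketch (the subnet name and region below are assumptions, not from the original tutorial) of enabling Private Google Access on a subnet:

gcloud compute networks subnets update default \
    --region=us-central1 \
    --enable-private-ip-google-access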

4. Create the GCP resources required by Serverless Spark.

  • Create BQ dataset: Create a dataset into which the CSV files will be loaded.

 

DATASET_NAME=<<dataset_name>>
bq --location=US mk -d \
    --default_table_expiration 3600 \
    ${DATASET_NAME}

 

  • Create BQ table: Create a table using the schema in schema/schema.json.

 

TABLE_NAME=<<table_name>>
bq mk --table ${PROJECT_ID}:${DATASET_NAME}.${TABLE_NAME} \
    ./schema/schema.json

 

  • Create a service account and grant the permissions required to read from the GCS bucket and write to the BigQuery table.

 

SERVICE_ACCOUNT_ID="gcs-to-bq-sa"
gcloud iam service-accounts create ${SERVICE_ACCOUNT_ID} \
    --description="GCS to BQ service account for Serverless Spark" \
    --display-name="GCS2BQ-SA"

roles=("roles/dataproc.worker" "roles/bigquery.dataEditor" "roles/bigquery.jobUser" "roles/storage.objectViewer")
for role in "${roles[@]}"; do
    gcloud projects add-iam-policy-binding ${PROJECT_ID} \
        --member="serviceAccount:${SERVICE_ACCOUNT_ID}@${PROJECT_ID}.iam.gserviceaccount.com" \
        --role="$role"
done

 

  • Create a GCS bucket used as temporary storage when loading data into BigQuery.

 

GCS_TEMP_BUCKET=<<temp_bucket>>
gsutil mb gs://${GCS_TEMP_BUCKET}

 

  • Create Dead Letter Topic and Subscription

 

ERROR_TOPIC=err_gcs2bq_${GCS_BUCKET_NAME}
gcloud pubsub topics create ${ERROR_TOPIC}
gcloud pubsub subscriptions create err_sub_${GCS_BUCKET_NAME} \
    --topic=${ERROR_TOPIC}

 

Note: Once all resources are created, update the variable values in trigger-serverless-spark-fxn/main.py (lines 27 to 31). The code for the function is on GitHub.

5. Deploy the Cloud Function.

The Cloud Function is triggered once an object is copied to the bucket. The function invokes Serverless Spark, which loads the data into BigQuery.

 

cd trigger-serverless-spark-fxn
gcloud functions deploy trigger-serverless-spark-fxn \
    --entry-point invoke_sreverless_spark \
    --runtime python37 \
    --trigger-resource create_notification_${GCS_BUCKET_NAME} \
    --trigger-event google.pubsub.topic.publish

 

6. Invoke the end-to-end pipeline.

Invoke the end-to-end pipeline by downloading the 2020 Daily Center Data and uploading it to the GCS bucket created in Step 1 (GCS_BUCKET_NAME), as sketched below.
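
A minimal sketch of the upload (the local CSV file name is a placeholder assumption):

gsutil cp daily-shelter-occupancy-2020.csv gs://${GCS_BUCKET_NAME}/

The OBJECT_FINALIZE notification from Step 1 then fires, and the Cloud Function submits the Spark batch.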

Debugging Pipelines

Error messages for failed data pipelines are published to the Pub/Sub topic (ERROR_TOPIC) created in Step 4 (Create Dead Letter Topic and Subscription). Errors from both the Cloud Function and Spark are forwarded to Pub/Sub, and the topic might contain multiple entries for the same data-pipeline instance. Messages can be filtered using the “oid” attribute, which is unique for each pipeline run and holds the full object name with its generation id. You can pull the messages for inspection as sketched below.
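
A minimal sketch (assuming the subscription name created in Step 4) of pulling error messages from the dead-letter subscription:

gcloud pubsub subscriptions pull err_sub_${GCS_BUCKET_NAME} \
    --limit=10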

Conclusion

In this post, we’ve shown you how to ingest GCS files to BigQuery using Cloud Functions and Serverless Spark. Register interest here to request early access to the new solutions for Spark on Google Cloud.


 

 

By: Swati Sindwani (Big Data and Analytics Cloud Consultant) and Bipin Upadhyaya (Strategic Cloud Engineer)
Source: Google Cloud Blog



Related Topics
  • Apache Spark
  • BigQuery
  • Cloud Functions
  • Data Analytics
  • Google Cloud
  • Serverless
  • Tutorial