<![CDATA[David Nguyen]]><![CDATA[A passionate full-stack developer from VIETNAM.
]]>https://eplus.devRSS for NodeThu, 30 Apr 2026 07:13:53 GMT<![CDATA[en]]>60<![CDATA[Measuring Speech-to-Text Accuracy - GSP758]]><![CDATA[Overview
Automated Speech Recognition (ASR), also known as machine transcription or Speech-to-Text, uses machine learning to turn spoken audio into text. ASR has many applications, from subtitling, to virtual assistants, to IVRs, to dictation, and more. However, machine learning systems are rarely 100% accurate and ASR is no exception. If you plan to rely on ASR for critical systems it's important to measure its accuracy or overall quality to understand how it will perform in your broader system.
In this lab, you use Speech-to-Text to transcribe an audio file and then measure the quality of the transcription.
Objectives
In this lab, you learn the following:
Define what speech quality is and how to measure it.
Measure the quality of the transcription.
ASR quality concepts
The following are some key concepts and steps involved in evaluating the quality and accuracy of Automated Speech Recognition (ASR) systems.
Define speech accuracy
Although speech accuracy can be measured in many ways, the industry standard method is word error rate (WER). WER measures the percentage of words transcribed incorrectly across an entire test set. A lower WER indicates a more accurate system.
The ground truth is the 100% accurate (typically human) transcription you compare a Speech-to-Text or hypothesis transcript against to measure the accuracy.
Word Error Rate (WER)
Word error rate is the combination of the three types of transcription errors that can occur:
Insertion errors - A word appears in the hypothesis transcript that is not in the ground truth.
Substitution errors - A word appears in both the hypothesis and the ground truth, but is not transcribed correctly.
Deletion errors - A word is missing from the hypothesis but present in the ground truth.
The following formula calculates WER:
WER = (S + I + D) / N
where:
I: Number of insertion errors
S: Number of substitution errors
D: Number of deletion errors
N: Total number of words in the ground truth transcript
You add the total number of each error type (S plus I plus D), and then divide that by the total number of words (N) in the ground truth transcript to find the WER. In situations with very low accuracy, the WER can be greater than 100%.
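The formula is simple once the three error counts are known; counting them requires aligning the two transcripts with a word-level edit distance. A minimal sketch in Python (an illustration only, not the lab's simple_wer_v2 script, which also normalizes the text first):

```python
def wer(ground_truth: str, hypothesis: str) -> float:
    """Compute word error rate via a word-level edit-distance alignment."""
    ref = ground_truth.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution cost
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + cost, # match or substitution
            )
    # Total errors (S + I + D) divided by words in the ground truth (N).
    return d[len(ref)][len(hyp)] / len(ref)
```

Note how a very short ground truth with many insertions can push the result above 1.0 (that is, above 100%), matching the caveat above.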
Other metrics
Other metrics are useful for tracking things like readability or measuring how many of your most important terms were transcribed correctly. Examples are:
Jaccard Index - Measures the similarity of the hypothesis and ground truth (or a subset of the ground truth).
F1 Score - Measures precision vs recall on a dataset. This is useful when tuning speech systems towards specific terms to ensure good recall without sacrificing too much precision.
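As a rough illustration, both metrics can be sketched over sets of words or key terms (real evaluations may define them over n-grams or weighted term lists instead):

```python
def jaccard_index(ground_truth: str, hypothesis: str) -> float:
    """Similarity of two transcripts as word sets: |A & B| / |A | B|."""
    a, b = set(ground_truth.split()), set(hypothesis.split())
    return len(a & b) / len(a | b)

def f1_score(relevant: set, retrieved: set) -> float:
    """Harmonic mean of precision and recall over a set of key terms."""
    true_positives = len(relevant & retrieved)
    if true_positives == 0:
        return 0.0
    precision = true_positives / len(retrieved)  # how much retrieved is correct
    recall = true_positives / len(relevant)      # how much relevant was found
    return 2 * precision * recall / (precision + recall)
```

For example, tuning a speech system toward a term list means watching `f1_score` on those terms: aggressive biasing raises recall but can drag precision (and the F1 score) down.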
How to measure speech accuracy
Now that you're familiar with accuracy metrics, the following are generic steps to follow when measuring accuracy on your own audio transcripts.
Note: In this lab, a sample audio file of about 10 minutes is provided with its associated ground truth. The steps below are not necessary to complete this lab, but they are necessary to measure quality on your own data.
Gather test audio files
You should gather a representative sample of the audio files for which you want to measure quality. This sample should be random and as close to the target environment as possible. For example, to transcribe conversations from a call center to aid in quality assurance, you would randomly select a few actual calls recorded on the same equipment that your production audio would come through, not recorded on your cell phone or computer microphone.
You need at least 30 minutes of audio to get a statistically significant accuracy metric. Using between 30 minutes and 3 hours of audio is recommended. This lab provides the audio.
Get ground truth transcriptions
Next you need an accurate transcription of the audio. This usually involves a single or double pass of a human transcription of the target audio. The goal is a 100% accurate transcription to measure against the automated results.
It's important when doing this to match the transcription conventions of your target ASR system as closely as possible. For example, ensure that punctuation, numbers, and capitalization are consistent. This lab provides the ground truth.
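In practice, matching conventions usually means running the same normalization pass over both the ground truth and the hypothesis before comparing them. One possible sketch (the exact rules should mirror your target ASR system's conventions, so treat these as assumptions):

```python
import re

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace so that the
    ground truth and hypothesis follow the same conventions."""
    text = text.lower()
    text = re.sub(r"[^\w\s']", " ", text)  # drop punctuation, keep apostrophes
    return " ".join(text.split())          # collapse runs of whitespace
```

Without a pass like this, "Hello, world!" versus "hello world" would count as two substitution errors even though the words were recognized correctly.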
Get the machine transcription
Send the audio to the Google Speech-to-Text API and get your hypothesis transcription. You can do this using one of Google Cloud's many client libraries or command-line tools. This lab provides the code to do this.
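As an illustration of the parameters involved, the following plain-Python helper (hypothetical, for this sketch only) assembles a recognize request as dictionaries. The field names mirror the Speech-to-Text v1 RecognitionConfig; the real client library builds typed objects and sends the call for you:

```python
def build_recognize_request(gcs_uri: str, language_code: str = "en-US") -> dict:
    """Assemble the pieces of a Speech-to-Text recognize request as plain dicts.

    Hypothetical helper for illustration; the actual client library
    (google.cloud.speech) constructs RecognitionConfig/RecognitionAudio objects.
    """
    return {
        "config": {
            "language_code": language_code,
            # Ask the API to add punctuation so the hypothesis matches
            # ground-truth transcription conventions more closely.
            "enable_automatic_punctuation": True,
        },
        # Reference audio stored in a Cloud Storage bucket rather than
        # inlining the bytes, which is required for longer files.
        "audio": {"uri": gcs_uri},
    }
```

The notebook in this lab performs the equivalent call for each audio file and collects the returned transcripts as the hypotheses.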
Compute the WER
Now you would count the insertions, substitutions, deletions, and total words using the ground truth and the machine transcription.
This lab uses code created by Google to normalize the output and calculate the WER.
Setup and requirements
Before you click the Start Lab button
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources are made available to you.
This hands-on lab lets you do the lab activities in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito (recommended) or private browser window to run this lab. This prevents conflicts between your personal account and the student account, which could cause extra charges to be incurred on your personal account.
Time to complete the lab. Remember, once you start, you cannot pause a lab.
Note: Use only the student account for this lab. If you use a different Google Cloud account, you may incur charges to that account.
How to start your lab and sign in to the Google Cloud console
Click the Start Lab button. If you need to pay for the lab, a dialog opens for you to select your payment method. On the left is the Lab Details pane with the following:
The Open Google Cloud console button
Time remaining
The temporary credentials that you must use for this lab
Other information, if needed, to step through this lab
Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
Note: If you see the Choose an account dialog, click Use Another Account.
If necessary, copy the Username below and paste it into the Sign in dialog.
[email protected]
Copied!
You can also find the Username in the Lab Details pane.
Click Next.
Copy the Password below and paste it into the Welcome dialog.
91GSHoYnBxe3
Copied!
You can also find the Password in the Lab Details pane.
Click Next.
Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials.
Note: Using your own Google Cloud account for this lab may incur extra charges.
Click through the subsequent pages:
Accept the terms and conditions.
Do not add recovery options or two-factor authentication (because this is a temporary account).
Do not sign up for free trials.
After a few moments, the Google Cloud console opens in this tab.
Note: To access Google Cloud products and services, click the Navigation menu or type the service or product name in the Search field.
Task 1. Create a Vertex AI Workbench instance
This lab provides a focused dataset, curated from public domain books and audio from the LibriSpeech project, along with all the code you need to measure the accuracy of the Cloud Speech-to-Text API on this dataset.
In this task, you learn how to set up and use this code.
In the Google Cloud console, from the Navigation menu, select Vertex AI > Dashboard.
Click Enable All Recommended APIs.
On the left-hand side, click Workbench.
At the top of the Workbench page, ensure you are in the Instances view.
Click Create New.
Configure the Instance:
Name: lab-workbench
Region: Set the region to us-east4
Zone: Set the zone to us-east4-b
Advanced Options (Optional): If needed, click "Advanced Options" for further customization (e.g., machine type, disk size)
Click Create.
Note: The instance will take a few minutes to create. A green checkmark will appear next to its name when it's ready.
Click Open JupyterLab next to the instance name to launch the JupyterLab interface. This will open a new tab in your browser.
Click the Terminal icon to open a terminal window.
Your terminal window will open in a new tab. You can now run commands in the terminal to interact with your Workbench instance.
Load the notebook
In the terminal window you just opened, run the following commands to copy files to use in this lab:
gsutil cp gs://spls/gsp758/notebook/measuring-accuracy.ipynb .
Copied!
gsutil cp gs://spls/gsp758/notebook/simple_wer_v2.py .
Copied!
Perform the following tasks to play audio files in an Incognito window
Within Chrome click the 3 dots > Settings.
In the Search Settings type Incognito.
In the results, click Third-party cookies.
Go to Allowed to use third-party cookies.
Click Add.
Copy the JUPYTERLAB domain, do not include https.
It should be something like:
[YOUR_NOTEBOOK_ID].notebooks.googleusercontent.com
Copied!
Check Current incognito session only, and then click Add.
You can now continue to the notebook.
Open the measuring-accuracy.ipynb notebook to follow the instructions inside to compute the WER on the provided dataset.
Click Check my progress to verify the objective.
Create the Vertex AI Workbench Notebook instance
In the following sections, you run the notebook cells to measure the quality and accuracy of Automated Speech Recognition (ASR) systems.
Task 2. Gather audio files and the ground truth
In this task, you gather audio files and ground truth. Run the Gather Audio Files and Ground Truth section of the notebook.
Click Check my progress to verify the objective.
Gather Audio Files and Ground Truth
Task 3. Get the machine transcript
In this task, you import the Speech client library and call the Recognize method for each audio file. Run the Get the Machine Transcript section of the notebook.
Click Check my progress to verify the objective.
Get the Machine Transcript
Task 4. Compute the WER
In this task, you add transcripts to a WER analysis object and compute the results. Run the Compute the WER section of the notebook.
Click Check my progress to verify the objective.
Solution of Lab
Quick
curl -LO raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/GSP758/lab.sh
source lab.sh
curl -LO raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/GSP758/vetex.sh
source vetex.sh
Manual
https://www.youtube.com/watch?v=ti1ucDNP0LA
]]>https://eplus.dev/measuring-speech-to-text-accuracy-gsp758https://eplus.dev/measuring-speech-to-text-accuracy-gsp758<![CDATA[Measuring Speech-to-Text Accuracy - GSP758]]><![CDATA[Measuring Speech-to-Text Accuracy]]><![CDATA[GSP758]]><![CDATA[David Nguyen]]>Wed, 08 Apr 2026 04:31:05 GMT<![CDATA[Enhance Application Reliability and Scalability with Internal Load Balancing - GSP216]]><![CDATA[Overview
Google Cloud's Internal Load Balancing (ILB) is a crucial service for managing and scaling your private application infrastructure. It enables you to distribute TCP/UDP-based traffic efficiently across internal virtual machine instances, ensuring your applications are highly available and performant within your private network. By providing a single, stable private IP address for your services, ILB simplifies internal application communication and enhances system resilience.
In this lab, you'll set up an internal service by creating two managed instance groups in the same region, representing a common deployment pattern for highly available applications. Then, you'll configure and thoroughly test an Internal Load Balancer, using these instance groups as its backends. This setup mimics a real-world scenario where an internal application, such as a microservice, an API endpoint, or a database, needs to be accessible to other internal services or applications without exposure to the public internet.
Objectives
In this lab you learn how to perform the following tasks:
Configure essential firewall rules to allow secure HTTP traffic and health checks for internal backends.
Design and implement instance templates for consistent and scalable VM deployments.
Create and manage managed instance groups for automated scaling and self-healing of your application backends.
Set up and test an Internal Load Balancer, demonstrating its ability to distribute internal traffic effectively and ensure service availability.
Setup and requirements
Before you click the Start Lab button
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources are made available to you.
This hands-on lab lets you do the lab activities in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito (recommended) or private browser window to run this lab. This prevents conflicts between your personal account and the student account, which could cause extra charges to be incurred on your personal account.
Time to complete the lab. Remember, once you start, you cannot pause a lab.
Note: Use only the student account for this lab. If you use a different Google Cloud account, you may incur charges to that account.
How to start your lab and sign in to the Google Cloud console
Click the Start Lab button. If you need to pay for the lab, a dialog opens for you to select your payment method. On the left is the Lab Details pane with the following:
The Open Google Cloud console button
Time remaining
The temporary credentials that you must use for this lab
Other information, if needed, to step through this lab
Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
Note: If you see the Choose an account dialog, click Use Another Account.
If necessary, copy the Username below and paste it into the Sign in dialog.
[email protected]
Copied!
You can also find the Username in the Lab Details pane.
Click Next.
Copy the Password below and paste it into the Welcome dialog.
Q3TUzBFaL9x2
Copied!
You can also find the Password in the Lab Details pane.
Click Next.
Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials.
Note: Using your own Google Cloud account for this lab may incur extra charges.
Click through the subsequent pages:
Accept the terms and conditions.
Do not add recovery options or two-factor authentication (because this is a temporary account).
Do not sign up for free trials.
After a few moments, the Google Cloud console opens in this tab.
Note: To access Google Cloud products and services, click the Navigation menu or type the service or product name in the Search field.
Task 1. Configure HTTP and health check firewall rules
Proper firewall rules are the foundation of a secure and functional internal load-balanced environment. They ensure that only authorized traffic reaches your backend services and that the Load Balancer can accurately assess the health of your instances.
Configure firewall rules to allow HTTP traffic to the backends and TCP traffic from the Google Cloud health checker.
Explore the my-internal-app network
The my-internal-app network, with subnet-a and subnet-b, along with firewall rules for RDP, SSH, and ICMP traffic, has been configured for you.
In the console, navigate to Navigation menu > VPC network > VPC networks.
Scroll down and notice the my-internal-app network with its subnets: subnet-a and subnet-b.
Each Google Cloud project starts with the default network. In addition, the my-internal-app network has been created for you, as part of your network diagram.
You will create the managed instance groups in subnet-a and subnet-b. Both subnets are in the europe-west3 region because an Internal Load Balancer is a regional service. The managed instance groups will be in different zones, making your service immune to zonal failures.
Create the HTTP firewall rule
Create a firewall rule to allow HTTP traffic to the backends from the Load Balancer and the internet (to install Apache on the backends).
Still in VPC network, in the left pane click Firewall.
Notice the app-allow-icmp and app-allow-ssh-rdp firewall rules.
These firewall rules have been created for you.
Click + Create Firewall Rule.
Set the following values, leave all other values at their defaults:
Name: app-allow-http
Network: my-internal-app
Targets: Specified target tags
Target tags: lb-backend
Source filter: IPv4 Ranges
Source IPv4 ranges: 10.10.0.0/16
Protocols and ports: Specified protocols and ports, and then check tcp, type: 80
Note: Make sure to include the /16 in the Source IPv4 ranges to specify all networks.
Click Create.
Create the health check firewall rules
Health checks determine which backend instances of a load balancer can receive new connections. For internal load balancing, the health check probes to your load-balanced instances come from addresses in the ranges 130.211.0.0/22 and 35.191.0.0/16. Your firewall rules must allow these connections.
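If you ever need to confirm that a request really came from a Google Cloud health checker rather than some other source, the two ranges above can be tested with Python's standard ipaddress module; a small sketch:

```python
import ipaddress

# Source ranges used by Google Cloud health check probes (from the lab text).
HEALTH_CHECK_RANGES = [
    ipaddress.ip_network("130.211.0.0/22"),
    ipaddress.ip_network("35.191.0.0/16"),
]

def is_health_check_probe(source_ip: str) -> bool:
    """Return True if the source address belongs to a health check range."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in HEALTH_CHECK_RANGES)
```

This is the same membership test the firewall rule you create below performs: traffic from these ranges is allowed through to the lb-backend instances.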
Still in the Firewall rules page, click + Create Firewall Rule.
Set the following values, leave all other values at their defaults:
Name: app-allow-health-check
Network: my-internal-app
Targets: Specified target tags
Target tags: lb-backend
Source filter: IPv4 Ranges
Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
Protocols and ports: Specified protocols and ports, and then check tcp
Note: Make sure to enter the two Source IPv4 ranges one by one, pressing SPACE between them.
Click Create.
Click Check my progress to verify the objective.
Configure HTTP and health check firewall rules
Task 2. Configure instance templates and create instance groups
Instance templates and managed instance groups are the backbone of scalable, resilient, and manageable applications. They allow you to define a standard configuration for your VMs and then automatically manage their lifecycle, ensuring consistency and enabling automated scaling and self-healing.
A managed instance group uses an instance template to create a group of identical instances. Use these to create the backends of the Internal Load Balancer.
Configure the instance templates
An instance template is an API resource that you can use to create VM instances and managed instance groups. Instance templates define the machine type, boot disk image, subnet, labels, and other instance properties. Create an instance template for both subnets of the my-internal-app network.
In the Console, navigate to Navigation menu > Compute Engine > Instance templates.
Click Create instance template.
For Name, type instance-template-1.
For Location, Select Global.
For Series, select E2.
For Machine type, select Shared-core > e2-micro.
Click Advanced options.
Click Networking.
For Network tags, specify lb-backend.
Note: The network tag lb-backend ensures that the HTTP and Health Check firewall rules apply to these instances.
For Network interfaces, click the dropdown icon to edit.
Set the following values, leave all other values at their defaults:
Network: my-internal-app
Subnetwork: subnet-a
External IPv4 Address: None
Click Done.
Click Management.
Under Metadata, click Add item and specify the following:
Key: startup-script-url
Value: gs://spls/gsp216/startup.sh
Note: The startup-script-url specifies a script that will be executed when instances are started. This script installs Apache and changes the welcome page to include the client IP and the name, region and zone of the VM instance. Feel free to explore this script.
Click Create.
Wait for the instance template to be created.
Configure the next instance template
Create another instance template for subnet-b by copying instance-template-1. This demonstrates how easy it is to replicate configurations across different subnets or zones for high availability and disaster recovery strategies.
Still in Instance templates, check the box next to instance-template-1, then click Copy. Make sure to update the name as instance-template-2.
Click Advanced options.
Click the Networking tab.
For Network interfaces, click the dropdown icon to edit.
Select subnet-b as the Subnetwork.
Click Done and then click Create.
Create the managed instance groups
Managed instance groups (MIGs) are key to robust, self-healing, and dynamically scaling applications. They automatically replace unhealthy instances and can scale your application capacity based on demand, ensuring your services are always available and performant without constant manual intervention. This is crucial for handling variable loads and maintaining Service Level Objectives (SLOs).
Configure one managed instance group in subnet-a and one in subnet-b.
Note: Identify one of the other zones in the same region as subnet-a. For example, if the zone you use in subnet-a is europe-west3-c, you could select europe-west3-b for subnet-b.
Still in Compute Engine, in the left pane click Instance groups, and then click Create Instance group.
Set the following values, leave all other values at their defaults:
Name: instance-group-1
Instance template: instance-template-1
Location: Single-zone
Region: europe-west3
Zone: europe-west3-c
Autoscaling > Minimum number of instances: 1
Autoscaling > Maximum number of instances: 1
Autoscaling > Autoscaling signals (click the dropdown icon to edit) > Signal type: CPU utilization
Target CPU utilization: 80
Initialization period: 45
Note: Autoscaling is a critical feature of managed instance groups that dynamically scales resources based on measured load. This capability lets your application smoothly handle variable traffic and optimizes cloud spend.
Click Create.
Repeat the same procedure for instance-group-2, in a different zone of the same region as subnet-a:
Click Create Instance group.
Set the following values, leave all other values at their defaults:
Name: instance-group-2
Instance template: instance-template-2
Location: Single-zone
Region: europe-west3
Zone: Use a different zone in the same region as subnet-a
Autoscaling > Minimum number of instances: 1
Autoscaling > Maximum number of instances: 1
Autoscaling > Autoscaling signals (click the dropdown icon to edit) > Signal type: CPU utilization
Target CPU utilization: 80
Initialization period: 45
Click Create.
Verify the backends
Verify that VM instances are being created in both subnets and create a utility VM to access the backends' HTTP sites directly. This step confirms individual backend functionality before introducing the load balancer, ensuring proper setup of your service tier.
Still in Compute Engine, click VM instances.
Notice two instances that start with instance-group-1 and instance-group-2.
These instances are in separate zones and their internal IP addresses are part of the subnet-a and subnet-b CIDR blocks.
To create a new instance, click Create Instance.
In the Machine configuration section, select the following values:
Name: utility-vm
Region: europe-west3
Zone: europe-west3-c
Series: E2
Machine Type: e2-micro (1 shared vCPU)
Click Networking.
For Network interfaces, click Toggle to Edit network interface.
Specify the following:
Network: my-internal-app
Subnetwork: subnet-a
Primary internal IPv4 address: Ephemeral (Custom)
Custom ephemeral IP address: 10.10.20.50
Click Done and then click Create.
Click Check my progress to verify the objective.
Configure instance templates and create instance groups
The internal IP addresses for the backends are 10.10.20.2 and 10.10.30.2.
Note: If these IP addresses are different, replace them in the two curl commands below.
For utility-vm, click SSH to launch a terminal and connect.
To verify the welcome page for instance-group-1-xxxx, run the following command:
curl 10.10.20.2
Copied!
The output should look like this:
<h1>Internal Load Balancing Lab</h1><h2>Client IP</h2>Your IP address : 10.10.20.50<h2>Hostname</h2>Server Hostname:
instance-group-1-1zn8<h2>Server Location</h2>Region and Zone: us-central1-a
To verify the welcome page for instance-group-2-xxxx, run the following command:
curl 10.10.30.2
Copied!
The output should look like this:
<h1>Internal Load Balancing Lab</h1><h2>Client IP</h2>Your IP address : 10.10.20.50<h2>Hostname</h2>Server Hostname:
instance-group-2-q5wp<h2>Server Location</h2>Region and Zone: us-central1-b
Which of these fields identify the location of the backend?
Server Location
Client IP
Server Hostname
Note: The curl commands demonstrate that each VM instance lists the Client IP and its own name and location. This will be useful when verifying that the Internal Load Balancer sends traffic to both backends.
Close the SSH terminal to utility-vm:
exit
Copied!
Task 3. Configure the Internal Load Balancer
Configuring the ILB centralizes access to your backend services, provides a single point of entry for internal traffic, and ensures intelligent traffic distribution based on the health and capacity of your instances. This step is crucial for achieving the high availability and scalability benefits discussed earlier, acting as a centralized access point for your distributed service.
Configure the Internal Load Balancer to balance traffic between the two backends (instance-group-1 and instance-group-2), as illustrated in this diagram:
Start the configuration
From the Navigation Menu, select View All Products. Under Networking, select Network Services.
Select the Load balancing page.
Click Create load balancer.
For Type of load balancer, select Network Load Balancer (TCP/UDP/SSL).
For Proxy or passthrough, select Passthrough load balancer.
For Public facing or internal, select Internal.
Click CONFIGURE.
For Name, type my-ilb.
For Region, select europe-west3.
For Network, select my-internal-app.
Configure the regional backend service
The backend service is the intelligence behind the ILB, defining how traffic is distributed and how instance health is monitored. It's essential for ensuring traffic only flows to operational instances and preventing overload. This is also where you configure advanced features like session affinity, to keep a user's connection to the same backend or connection draining, for graceful backend updates.
The backend service monitors instance groups and prevents them from exceeding configured usage.
Click on Backend configuration.
Set the following values, leave all other values at their defaults:
Instance group: instance-group-1
Click Add a backend.
For Instance group, select instance-group-2.
For Health Check, select Create a health check.
Set the following values, leave all other values at their defaults:
Name: my-ilb-health-check
Protocol: TCP
Port: 80
Note: Health checks determine which instances can receive new connections. This TCP health check polls instances every 5 seconds, waits up to 5 seconds for a response, and treats 2 consecutive successful or failed attempts as healthy or unhealthy, respectively. This continuous monitoring is vital for quick recovery from instance failures and maintaining your Service Level Agreements (SLAs).
Click Create.
Verify that there is a blue check mark next to Backend configuration in the Cloud Console. If not, double-check that you have completed all the steps above.
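The consecutive-success and consecutive-failure behavior described in the health check note can be sketched as a small counter in Python (the 2/2 thresholds are assumptions taken from the note above, not universal defaults for every check type):

```python
class HealthState:
    """Track a backend's health using consecutive-probe thresholds."""

    def __init__(self, healthy_threshold: int = 2, unhealthy_threshold: int = 2):
        self.healthy_threshold = healthy_threshold
        self.unhealthy_threshold = unhealthy_threshold
        self.healthy = True   # backends start out receiving traffic
        self._streak = 0      # consecutive probes contradicting the current state

    def record_probe(self, success: bool) -> bool:
        """Feed one probe result; return the (possibly updated) health state."""
        if success == self.healthy:
            self._streak = 0  # probe agrees with current state, reset the streak
        else:
            self._streak += 1
            threshold = (self.unhealthy_threshold if self.healthy
                         else self.healthy_threshold)
            if self._streak >= threshold:
                self.healthy = not self.healthy  # flip after enough probes agree
                self._streak = 0
        return self.healthy
```

A single failed probe (a transient network blip, say) does not remove a backend from rotation; only a sustained streak flips the state in either direction.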
Configure the frontend
The frontend is the exposed interface of your ILB. By assigning a static internal IP address, you provide a consistent and predictable endpoint for other internal services to connect to, simplifying your application architecture, enabling easy service discovery within your VPC, and improving reliability.
The frontend forwards traffic to the backend.
Click on Frontend configuration.
Specify the following, leaving all other values with their defaults:
Subnetwork: subnet-b
Internal IP: Under IP address, select Create IP address
Specify the following, leaving all other values with their defaults:
Name: my-ilb-ip
Static IP address: Let me choose
Custom IP address: 10.10.30.5
Click Reserve.
In Port number, type 80.
Click Done.
Review and create the Internal Load Balancer
Click on Review and finalize.
Review the Backend and Frontend.
Click on Create. Wait for the Load Balancer to be created, before moving to the next task.
Click Check my progress to verify the objective.
Configure the Internal Load Balancer
Task 4. Test the Internal Load Balancer
The final test validates that the ILB is correctly distributing traffic across healthy backend instances. This confirms that your internal services are now more resilient and scalable, leveraging the core benefits of the ILB, and that the private connectivity is correctly established.
Verify that the my-ilb IP address forwards traffic to instance-group-1 and instance-group-2.
Access the Internal Load Balancer
In the Cloud Console, navigate to Navigation menu > Compute Engine > VM instances.
For utility-vm, click SSH to launch a terminal and connect.
To verify that the Internal Load Balancer forwards traffic, run the following command:
curl 10.10.30.5
Copied!
The output should look like this:
<h1>Internal Load Balancing Lab</h1><h2>Client IP</h2>Your IP address : 10.10.20.50<h2>Hostname</h2>Server Hostname:
instance-group-1-1zn8<h2>Server Location</h2>Region and Zone: us-central1-a
Note: As expected, traffic is forwarded from the Internal Load Balancer (10.10.30.5) to the backend.
Run the same command a couple more times.
In the output, you should see responses from instance-group-1 in europe-west3-c and from instance-group-2 in a different zone of the same region. The load balancer is distributing traffic across the backend instances, proving its efficacy in ensuring high availability and distributing load.
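If you capture the output of several curl calls, you can tally which backend served each response instead of eyeballing it; a sketch using hypothetical response strings in the format shown above:

```python
import re
from collections import Counter

def tally_backends(responses: list) -> Counter:
    """Count how many responses each backend hostname served."""
    hostnames = []
    for body in responses:
        # The welcome page prints "Server Hostname:" followed by the VM name.
        match = re.search(r"Server Hostname:\s*([\w-]+)", body)
        if match:
            hostnames.append(match.group(1))
    return Counter(hostnames)
```

Seeing both instance-group-1-xxxx and instance-group-2-xxxx in the tally confirms the Internal Load Balancer is spreading traffic across both backends.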
Solution of Lab
https://www.youtube.com/watch?v=CJDVlzKurBg
curl -LO raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/GSP216/lab.sh
source lab.sh
Script Alternative
curl -LO raw.githubusercontent.com/prateekrajput08/Arcade-Google-Cloud-Labs/refs/heads/main/Enhance%20Application%20Reliability%20and%20Scalability%20with%20Internal%20Load%20Balancing/TechCode.sh
sudo chmod +x TechCode.sh
./TechCode.sh
]]>https://eplus.dev/enhance-application-reliability-and-scalability-with-internal-load-balancing-gsp216https://eplus.dev/enhance-application-reliability-and-scalability-with-internal-load-balancing-gsp216<![CDATA[Enhance Application Reliability and Scalability with Internal Load Balancing - GSP216]]><![CDATA[Enhance Application Reliability and Scalability with Internal Load Balancing]]><![CDATA[GSP216]]><![CDATA[David Nguyen]]>Tue, 07 Apr 2026 04:31:36 GMT<![CDATA[Configure Global Extended Application LB using HTTPS - GSP652]]><![CDATA[Overview
In this lab you will learn how to use the Google Cloud console to deploy a Global External Application Load Balancer to securely distribute HTTPS traffic across multiple backend web servers. The load balancer will communicate with backends over HTTP, while clients will connect to the load balancer over HTTPS.
Objectives
Deploy and configure backend infrastructure suitable for a Global External Application Load Balancer on Google Cloud.
Set up a Global External Application Load Balancer on Google Cloud with an HTTPS frontend, including SSL/TLS certificate management.
Verify successful traffic distribution and HTTPS termination by testing the load balancer's external IP address.
Understand the role of health checks in maintaining backend availability.
Setup
Before you click the Start Lab button
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources are made available to you.
This hands-on lab lets you do the lab activities in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito (recommended) or private browser window to run this lab. This prevents conflicts between your personal account and the student account, which can cause extra charges to be billed to your personal account.
Time to complete the lab. Remember, once you start, you cannot pause a lab.
Note: Use only the student account for this lab. If you use a different Google Cloud account, you may incur charges to that account.
How to start your lab and sign in to the Google Cloud console
Click the Start Lab button. If you need to pay for the lab, a dialog opens for you to select your payment method. On the left is the Lab Details pane with the following:
The Open Google Cloud console button
Time remaining
The temporary credentials that you must use for this lab
Other information, if needed, to step through this lab
Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
Note: If you see the Choose an account dialog, click Use Another Account.
If necessary, copy the Username below and paste it into the Sign in dialog.
[email protected]
Copied!
You can also find the Username in the Lab Details pane.
Click Next.
Copy the Password below and paste it into the Welcome dialog.
qWMEPZWxCJed
Copied!
You can also find the Password in the Lab Details pane.
Click Next.
Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials.
Note: Using your own Google Cloud account for this lab may incur extra charges.
Click through the subsequent pages:
Accept the terms and conditions.
Do not add recovery options or two-factor authentication (because this is a temporary account).
Do not sign up for free trials.
After a few moments, the Google Cloud console opens in this tab.
Note: To access Google Cloud products and services, click the Navigation menu or type the service or product name in the Search field.
Resources built for this lab
(2) Compute Engine VM instances with web server running on each.
Firewall rules are configured to allow incoming HTTP traffic on port 80 to the VMs.
Verify that the resources are ready:
In the console, go to Compute Engine.
Copy the External IP for one of the VMs that have been created.
Open a new browser tab and paste the copied IP. For backend-vm-us-west1 you should see "Hello from backend-vm-us-west1!"
Repeat this step for the other instance, backend-vm-europe-west4, and read its message.
Task 1. Create Instance Groups
In the Google Cloud Console, go to Navigation Menu > Compute Engine > Instance groups.
First, create an unmanaged instance group for the us-west1 region. Click Create Instance Group:
Property
Value
Name
us-west1-instance-group
Region
us-west1
Zone
Select the zone us-west1-b
Network
default
VM instances
Select backend-vm-us-west1
Click Create.
Next, create an instance group for the 2nd region, europe-west4:
Repeat the process for the second instance group:
Property
Value
Name
europe-west4-instance-group
Region
europe-west4
Zone
Select the zone europe-west4-b
Network
default
VM instances
Select backend-vm-europe-west4
Click Create.
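The console steps above can also be scripted. A sketch using gcloud from Cloud Shell (instance-group, zone, and VM names are taken from the tables above):

```shell
# Create the unmanaged instance group and add the backend VM (us-west1)
gcloud compute instance-groups unmanaged create us-west1-instance-group \
    --zone=us-west1-b
gcloud compute instance-groups unmanaged add-instances us-west1-instance-group \
    --zone=us-west1-b --instances=backend-vm-us-west1

# Repeat for the europe-west4 group
gcloud compute instance-groups unmanaged create europe-west4-instance-group \
    --zone=europe-west4-b
gcloud compute instance-groups unmanaged add-instances europe-west4-instance-group \
    --zone=europe-west4-b --instances=backend-vm-europe-west4
```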
Task 2. Create a health check
Navigate to Health Checks: In the Google Cloud Console, go to Navigation Menu > Compute Engine > Health checks.
Click Create a health check.
Name: http-health-check
Protocol: HTTP
Port: 80
Request path: /
Check interval: 5 seconds
Timeout: 5 seconds
Unhealthy threshold: 2
Healthy threshold: 2
Click Create.
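The same health check can be created from the command line. A sketch mirroring the values entered in the console above:

```shell
# HTTP health check matching the console configuration in Task 2
gcloud compute health-checks create http http-health-check \
    --port=80 \
    --request-path="/" \
    --check-interval=5s \
    --timeout=5s \
    --unhealthy-threshold=2 \
    --healthy-threshold=2
```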
Task 3. Create a backend service
Type load balancing in the search bar on the top. Under Products & Pages, click on Load balancing.
Click Create load balancer.
Load balancer type: Select Application Load Balancer (HTTP/HTTPS) and click Next.
Public facing or internal: Select Public facing (external) and click Next.
Global or single region deployment: Select Best for global workloads and click Next.
Load balancer generation: Select Global external Application Load Balancer and click Next.
Click Configure.
Now create the Backend configuration.
Click Backend configuration.
Click the dropdown next to Backend services & backend buckets, then click Create a backend service:
Property
Value
Name
global-backend-service
Backend type
Instance group
Protocol
HTTP (The ALB handles HTTPS, then communicates to backends via HTTP)
Port name
Leave default or set to http
Health Check: Select http-health-check.
Under New Backend specify the following.
Property
Value
Instance group
Select us-west1-instance-group
Port numbers
80
Click Done.
Click Add a backend then specify:
Property
Value
Instance group
Select europe-west4-instance-group
Port numbers
80
Click Done.
Click Create, then click OK.
Configure frontend with HTTPS
Click Frontend configuration.
Add the following under Add Frontend IP and port:
Property
Value
Name
https-frontend
Protocol
HTTPS (includes HTTP/2)
IP version
IPv4
IP address
click Create IP Address
Name
global-lb-ip, click Reserve
Port
443
Move to the next section for how to set up the certificate.
Create a Certificate
For a real-world scenario, use a Google-managed SSL certificate. Next you'll create a self-signed certificate for use in this lab only.
Click Activate Cloud Shell
at the top of the Google Cloud console and open it in a new window.
Run the command below to generate a private key.
openssl genrsa -out key.pem 2048
Copied!
View the private key. You will need it in this task.
cat key.pem
Copied!
Run the command below to generate a self-signed public key certificate. For the "Common Name", use a placeholder like example.com. Press ENTER to leave the remaining fields blank.
openssl req -new -x509 -key key.pem -out cert.pem -days 365
Copied!
View the certificate. You will need it in this task.
cat cert.pem
Copied!
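As an alternative to pasting the key and certificate into the console form in the next steps, the classic certificate resource can be uploaded with gcloud. A sketch using the files and certificate name from this lab:

```shell
# Upload the self-signed certificate as a classic (global) SSL certificate
gcloud compute ssl-certificates create self-signed-lb-cert \
    --certificate=cert.pem \
    --private-key=key.pem
```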
Under Choose certificate repository, select Use Classic Certificates.
Click dropdown next to Certificate then click Create a new certificate.
Property
Value
Name
self-signed-lb-cert
Mode
Upload my certificate
Certificate
Paste the public key certificate you viewed earlier (the contents of cert.pem).
Private key
Paste the private key you viewed earlier (the contents of key.pem).
Important: In a production environment, never use a self-signed certificate. Always use a trusted Certificate Authority (CA) or Google-managed certificates.
Click Create.
Select the newly created self-signed-lb-cert certificate to use it for the frontend.
Select the checkbox for Enable HTTP to HTTPS redirect.
Click Done.
Review and create the Load Balancer
Review all configurations on the summary page.
Set Load Balancer Name to global-ext-application-lb
Click Create.
Wait for the load balancer to provision. This may take a few minutes.
Task 4. Test and verify Load Balancing
Access the Load Balancer
Once the load balancer is provisioned (status will show a green checkmark), copy the IP address displayed under Frontend configuration to use in the next step.
Open a new web browser and navigate to:
https://[YOUR_LOAD_BALANCER_IP_ADDRESS]
Copied!
You will likely encounter a "Your connection is not private" warning due to the self-signed certificate. Proceed past this warning.
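You can also test from Cloud Shell and skip the browser warning with curl's -k flag, which disables certificate verification. This is acceptable here only because the lab uses a self-signed certificate:

```shell
# -k (--insecure) skips certificate verification for the self-signed cert
curl -k https://[YOUR_LOAD_BALANCER_IP_ADDRESS]
```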
Observe Load Balancing
Refresh the page multiple times. You should observe the content changing between "Hello from backend-vm-us-west1!" and "Hello from backend-vm-europe-west4!".
This demonstrates that the Global External Application Load Balancer is distributing traffic across your backend instances in different regions.
Observe the URL: It should remain https://[YOUR_LOAD_BALANCER_IP_ADDRESS], indicating that the ALB is handling HTTPS termination.
Task 5. Understand health checks
Simulate a backend failure
SSH into backend-vm-us-west1 by clicking the SSH button next to the instance on the VM instances page.
Stop the Nginx service:
sudo systemctl stop nginx
Copied!
Monitor Health Checks: In the Google Cloud console, navigate back to Network services > Load balancing, click on your load balancer, and then click on the Backend services tab. You should see the health status of us-west1-instance-group change to UNHEALTHY after a short period.
Test Load Balancer Again: Refresh your browser accessing the load balancer's IP. You should now consistently see "Hello from backend-vm-europe-west4!", as the load balancer has detected the backend-vm-us-west1 backend as unhealthy and is routing all traffic to the healthy backend-vm-europe-west4 backend.
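Instead of refreshing the browser manually, you can watch the failover from Cloud Shell. A sketch (substitute your load balancer's IP for the placeholder):

```shell
# Poll the load balancer every 2 seconds to watch traffic shift
# to the healthy backend after nginx is stopped
while true; do
  curl -k -s https://[YOUR_LOAD_BALANCER_IP_ADDRESS]
  echo
  sleep 2
done
```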
Restore backend and verify
SSH into backend-vm-us-west1 by clicking the SSH button next to the instance on the VM instances page.
Start the Nginx service:
sudo systemctl start nginx
Copied!
Monitor the health check in the Google Cloud console until us-west1-instance-group returns to HEALTHY.
Refresh your browser accessing the load balancer's IP. You should again see traffic being distributed between both backends.
Task 6. Clean up
In a production environment, to avoid incurring charges, you should delete the resources that you are not using. Resources for this lab will be automatically deleted. Practice the steps to delete resources.
Delete Load Balancer: In the console, navigate to Network services > Load balancing. Select your load balancer and click Delete. Confirm the deletion. This will also delete the associated IP address and backend service.
Delete Instance Groups: In the console, navigate to Compute Engine > Instance groups. Select both us-west1-instance-group and europe-west4-instance-group and click Delete.
Delete VM Instances: In the console, navigate to Compute Engine > VM instances. Select both backend-vm-us-west1 and backend-vm-europe-west4 and click Delete.
Solution of Lab
https://www.youtube.com/watch?v=uaSOyg_CS6E
]]>https://eplus.dev/configure-global-extended-application-lb-using-https-gsp652https://eplus.dev/configure-global-extended-application-lb-using-https-gsp652<![CDATA[Configure Global Extended Application LB using HTTPS - GSP652]]><![CDATA[Configure Global Extended Application LB using HTTPS]]><![CDATA[GSP652]]><![CDATA[David Nguyen]]>Tue, 07 Apr 2026 01:21:37 GMT<![CDATA[Cloud NGFW: Migrate VPC Firewall Rules that use Network Tags - GSP609]]><![CDATA[
Solution of Lab
https://www.youtube.com/watch?v=wPqSTrPtG7I
curl -LO raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/GSP609/lab.sh
source lab.sh
Script Alternative
curl -LO https://raw.githubusercontent.com/Itsabhishek7py/GoogleCloudSkillsboost/refs/heads/main/Cloud%20NGFW%20Migrate%20VPC%20Firewall%20Rules%20that%20use%20Network%20Tags/drabhishek.sh
sudo chmod +x drabhishek.sh
./drabhishek.sh
]]>https://eplus.dev/cloud-ngfw-migrate-vpc-firewall-rules-that-use-network-tags-gsp609https://eplus.dev/cloud-ngfw-migrate-vpc-firewall-rules-that-use-network-tags-gsp609<![CDATA[Cloud NGFW: Migrate VPC Firewall Rules that use Network Tags - GSP609]]><![CDATA[Cloud NGFW: Migrate VPC Firewall Rules that use Network Tags]]><![CDATA[GSP609]]><![CDATA[David Nguyen]]>Mon, 06 Apr 2026 08:21:43 GMT<![CDATA[The Arcade Base Camp April 2026]]><![CDATA[🏕 Arcade Base Camp April 2026
Welcome to Arcade Base Camp April 2026, where you'll develop key Google Cloud skills and earn an exclusive credential that will open doors to the cloud for you. No prior experience is required!
🔗 Main Link: Google Cloud Skills Boost - Games 5703
📝 Solution: eplus.dev/start-here-dont-skip-this-arcade-lab
🎯 Levels & Learning Zones
Section
Game Link
Code
Deadline
The Arcade Base Camp April 2026
Games 7112
1q-basecamp-40139
30/04/26, 11:59 PM
Base Camp Levels
Level
Game Link
Code
Deadline
Arcade Voyage: Modern Application Development
Games 7109
1q-appsdev-01996
30/04/26, 11:59 PM
Arcade Adventure: GKE Operations and Networking
Games 7107
1q-operations-0529
30/04/26, 11:59 PM
Arcade Trail: Data Migration
Games 7110
1q-basecamp-40139
30/04/26, 11:59 PM
🧩 Trivia Challenges
Sprint
Game Link
Code
Deadline
Sprint 1
Coming Soon
Coming Soon
30/04/26, 11:59 PM
Sprint 2
Coming Soon
Coming Soon
30/04/26, 11:59 PM
Sprint 3
Coming Soon
Coming Soon
30/04/26, 11:59 PM
Sprint 4
Coming Soon
Coming Soon
30/04/26, 11:59 PM
👨 Guide
]]>https://eplus.dev/the-arcade-base-camp-april-2026https://eplus.dev/the-arcade-base-camp-april-2026<![CDATA[David Nguyen]]>Thu, 02 Apr 2026 08:42:44 GMT<![CDATA[Arcade March 2026 Sprint 4 (Solution)]]><![CDATA[Overview
Welcome to Arcade March 2026 Sprint 4! This quick quiz will help you check your understanding and stay on track as you continue to build your Google Cloud skills.
Click Start Lab to begin.
Note: Take a moment to read each question carefully and double-check your answers before submitting. To ensure your completion is recorded, keep the quiz open for at least 10 minutes. Submitting earlier may result in an incomplete attempt.
Quiz
In Google Cloud, how will you send messages to a Pub/Sub topic using the Python library?
Select ONE answer that would be relevant
DNS client
Storage client
SQL client
Publisher client
In Google Cloud, how will you pull messages from a Pub/Sub topic to your application?
Select ONE answer that would be relevant
Registry
Snapshot
Inventory
Subscription
In the "Multimodal Content Generation with Gemini on Vertex AI" lab, which specific model version are you tasked to invoke?
Select ONE answer that would be relevant
imagen-3.0
gemini-2.0-flash
text-bison@001
gemini-1.5-pro
What specific type of "multimodal" inputs does the generative model have in the "Multimodal Content Generation with Gemini on Vertex AI" lab process?
Select ONE answer that would be relevant
Audio and Video
Text only
A mix of text and images
Images only
According to the schema provided in Data Ingestion into BigQuery from Cloud Storage lab, which data type is required for the employee_id column?
Select ONE answer that would be relevant
STRING
INTEGER
FLOAT
BOOLEAN
In Data Ingestion into BigQuery from Cloud Storage lab, You are tasked with importing data into the new table. Where is the source employees.csv file stored?
Select ONE answer that would be relevant
Google Drive
Cloud Storage
Local Disk
Cloud SQL
]]>https://eplus.dev/arcade-march-2026-sprint-4-solutionhttps://eplus.dev/arcade-march-2026-sprint-4-solution<![CDATA[Arcade March 2026 Sprint 4 (Solution)]]><![CDATA[Arcade March 2026 Sprint 4]]><![CDATA[Arcade March 2026]]><![CDATA[David Nguyen]]>Mon, 16 Mar 2026 03:04:46 GMT<![CDATA[GitHub Copilot for Students: What Changed in March 2026]]><![CDATA[GitHub Updates Copilot Access for Students (March 2026)
GitHub recently announced an update regarding how GitHub Copilot will be provided to verified students. The goal is to ensure that Copilot remains free and sustainable for millions of students worldwide.
Key Changes
Starting March 12, 2026, Copilot access for verified students will be managed under a new plan called:
GitHub Copilot Student Plan
Students who already have GitHub Education benefits do not need to take any action. Their Copilot access will continue automatically.
Model Availability Changes
As part of this transition, some premium models will no longer be available for manual selection under the student plan, including:
GPT-5.4
Claude Opus
Claude Sonnet
Although these models are removed from manual selection, students will still have access to powerful AI models through Auto mode.
Auto Mode
With Auto mode, Copilot automatically selects the most suitable model for the task. These models may come from providers such as:
OpenAI
Anthropic
Google
GitHub plans to continue improving Auto mode and adding new models over time.
Why This Change?
GitHub states that these adjustments are necessary to:
Keep Copilot free for verified students
Support a growing global student community
Maintain long-term sustainability of the service
Future Updates
GitHub will continue collecting feedback from students and educators and may adjust:
Available models
Feature limits
Usage policies
Additionally, GitHub is working on making it easier for students to upgrade from the Copilot Student plan to Copilot Pro in the future.
Source
GitHub Official Announcement GitHub Copilot for Students Update (March 2026)
At GitHub, we believe the next generation of developers should have access to the latest industry technology. That's why we provide students with free access to the GitHub Student Developer Pack, run the Campus Experts program to help student leaders build tech communities, and partner with Major League Hacking (MLH) and Hack Club to support student hackathons and youth-led coding communities. It's also why we offer verified students free access to GitHub Copilot. Today, nearly two million students are using it to build, learn, and explore new ideas.
Copilot is evolving quickly, with new capabilities, models, and experiences shipping fast. As Copilot evolves and the student community continues to grow, we need to make some adjustments to ensure we can provide sustainable, long-term GitHub Copilot access to students worldwide.
Our commitment to providing free access to GitHub Copilot for verified students is not changing. What is changing is how Copilot is packaged and managed for students.
What this means for you
Starting today, March 12, 2026, your complimentary Copilot access will be managed under a new GitHub Copilot Student plan, alongside your existing GitHub Education benefits. Your academic verification status will not change, and there is nothing you need to do to continue using Copilot. You will see that you are on the GitHub Copilot Student plan in the UI, and your existing premium request unit (PRU) entitlements will remain unchanged.
As part of this transition, however, some premium models, including GPT-5.4 and the Claude Opus and Sonnet models, will no longer be available for self-selection under the GitHub Copilot Student Plan. We know this will be disappointing, but we're making this change so we can keep Copilot free and accessible for millions of students around the world.
That said, through Auto mode, you'll continue to have access to a powerful set of models from providers such as OpenAI, Anthropic, and Google. We'll keep adding new models and expanding the intelligence in Auto mode that helps match the right model to your task and workflow. We support a global community of students across thousands of universities and dozens of time zones, so we're being intentional about how we roll out changes. Over the coming weeks, we will be making additional adjustments to available models or usage limits on certain features, the specifics of which we'll be testing with your feedback.
We want your input
Your experience matters to us, and your feedback will directly shape how this plan evolves. Leave a comment below: what's working for you, what gets in the way, and what you need most. We will also continue to host 1:1 conversations with students, educators, and Campus Experts, and use insights from our recent November 2025 student survey to help inform what's next.
GitHub's investment in students is not slowing down. We are committed to ensuring that Copilot remains a powerful, free tool for verified students, and we will continue to improve and expand the student experience over time.
We will share updates as we learn more from testing and your feedback. Thank you for building with us.
We're currently working on making it easier to upgrade from your GitHub Student plan to GitHub Copilot Pro. We'll share an update here soon.
https://github.com/orgs/community/discussions/189268#discussioncomment-16108204]]>https://eplus.dev/github-copilot-for-students-what-changed-in-march-2026https://eplus.dev/github-copilot-for-students-what-changed-in-march-2026<![CDATA[GitHub Copilot for Students: What Changed in March 2026]]><![CDATA[Understanding the New GitHub Copilot Student Plan (2026 Update)]]><![CDATA[GitHub Copilot Student Plan – 2026 Update]]><![CDATA[David Nguyen]]>Fri, 13 Mar 2026 01:54:00 GMT<![CDATA[Arcade March 2026 Sprint 3 (Solution)]]><![CDATA[Overview
Welcome to Arcade March 2026 Sprint 3! This quick quiz will help you check your understanding and stay on track as you continue to build your Google Cloud skills.
Click Start Lab to begin.
Note: Take a moment to read each question carefully and double-check your answers before submitting. To ensure your completion is recorded, keep the quiz open for at least 10 minutes. Submitting earlier may result in an incomplete attempt.
Quiz
In Google Cloud, how will you count specific occurrences within your log entries using Cloud Logging?
Select ONE answer that would be relevant
BigQuery export
Cloud SQL
Pub/Sub trigger
Logs-based metrics
In Google Cloud, how will you receive an automatic notification when a Cloud Logging metric reaches a threshold?
Select ONE answer that would be relevant
Alerting policy
Firewall rule
IAM role
Load balancer
In Google Cloud, how will you troubleshoot code errors for your Cloud Run Functions?
Select ONE answer that would be relevant
Cloud Artifacts
Cloud Logging
Cloud Build
Cloud Domains
In Google Cloud, how will you check the execution duration of your Cloud Run Function in the Console?
Select ONE answer that would be relevant
Metadata tab
Monitoring tab
Source tab
Variables tab
In Google Cloud, how will you create a virtual machine running a Windows Server operating system?
Select ONE answer that would be relevant
Compute Engine
App Engine
Cloud Run
Cloud Functions
In Google Cloud, how will you connect to your Windows VM instance to manage it remotely?
Select ONE answer that would be relevant
HTTP
SMTP
RDP
SSH
]]>https://eplus.dev/arcade-march-2026-sprint-3-solutionhttps://eplus.dev/arcade-march-2026-sprint-3-solution<![CDATA[Arcade March 2026 Sprint 3 (Solution)]]><![CDATA[Arcade March 2026 Sprint 3]]><![CDATA[Arcade March 2026]]><![CDATA[David Nguyen]]>Fri, 13 Mar 2026 01:29:13 GMT<![CDATA[Build an AI Science Tutor Application with Vertex AI (Solution)]]><![CDATA[Overview
Labs are timed and cannot be paused. The timer starts when you click Start Lab.
The included cloud terminal is preconfigured with the gcloud SDK.
Use the terminal to execute commands and then click Check my progress to verify your work.
Challenge scenario
Scenario: You're a developer at an educational technology company that provides online tutoring and educational resources. The company wants to create an interactive science tutoring assistant to help students with questions related to astronomy and other scientific topics, and decides to use Google Cloud's Vertex AI SDK to build a chat-based solution that can provide informative answers. You need to finish the following tasks:
Task: Develop a Python function named science_tutoring(prompt). This function should invoke the gemini-2.5-flash-lite model using the supplied prompt and generate the response. For this challenge, use the prompt: "How many planets are there in the solar system?"
Follow these steps to interact with the Generative AI APIs using Vertex AI Python SDK.
Click File > New File to open a new file within the Code Editor.
Write the Python code to use Google's Vertex AI SDK to interact with the pre-trained Text Generation AI model.
Create and save the python file.
Execute the Python file by invoking the below command by replacing the FILE_NAME inside the terminal within the Code Editor pane to view the output.
/usr/bin/python3 /FILE_NAME.py
Note: You can ignore any warnings related to Python version dependencies.
Click Check my progress to verify the objective.
Create and run a file to send a chat prompt to Gen AI and receive a response
Solution of Lab
https://www.youtube.com/watch?v=Yn_4Ij-7ilw
```plaintext
curl -LO raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/build-an-ai-science-tutor-application-with-vertex-ai-solution/lab.sh
source lab.sh
```
**Script Alternative**
```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Replace with your actual project details
PROJECT_ID = "your-project-id"
LOCATION = "us-central1"

# Initialize Vertex AI
vertexai.init(project=PROJECT_ID, location=LOCATION)

def science_tutoring(prompt):
    """
    Sends a prompt to the Gemini 2.5 Flash Lite model
    and returns the generated response.
    """
    try:
        # Load the Gemini 2.5 Flash Lite model
        model = GenerativeModel("gemini-2.5-flash-lite")
        # Generate response
        response = model.generate_content(prompt)
        return response.text
    except Exception as e:
        return f"Error occurred: {str(e)}"

if __name__ == "__main__":
    test_prompt = "How many planets are there in the solar system?"
    result = science_tutoring(test_prompt)
    print("Response:")
    print(result)
```
]]>https://eplus.dev/build-an-ai-science-tutor-application-with-vertex-ai-solutionhttps://eplus.dev/build-an-ai-science-tutor-application-with-vertex-ai-solution<![CDATA[Build an AI Science Tutor Application with Vertex AI (Solution)]]><![CDATA[Build an AI Science Tutor Application with Vertex AI]]><![CDATA[David Nguyen]]>Wed, 11 Mar 2026 12:05:54 GMT<![CDATA[Arcade March 2026 Sprint 2 (Solution)]]><![CDATA[Overview
Welcome to Arcade March 2026 Sprint 2! This quick quiz will help you check your understanding and stay on track as you continue to build your Google Cloud skills.
Click Start Lab to begin.
Note: Take a moment to read each question carefully and double-check your answers before submitting. To ensure your completion is recorded, keep the quiz open for at least 10 minutes. Submitting earlier may result in an incomplete attempt.
Quiz
In Google Cloud, how will you scale your Managed Instance Group (MIG) based on application-specific metrics?
Select ONE answer that would be relevant
Custom metrics
CPU usage
Static sizing
Manual toggle
In Google Cloud, how will you create a logical grouping of keys within Cloud KMS?
Select ONE answer that would be relevant
Project settings
Billing reports
Cloud Monitoring
User roles
In Google Cloud, how will you create a visual representation of your resource health using Cloud Monitoring?
Select ONE answer that would be relevant
Datasets
Topics
Buckets
Dashboards
In Google Cloud, how will you verify your application is globally accessible using Cloud Monitoring?
Select ONE answer that would be relevant
Uptime checks
Log exports
Data streams
Code traces
In Google Cloud, how will you aggregate monitoring data from several projects into a single unified view?
Select ONE answer that would be relevant
Service account
Folder sync
Shared VPC
Metrics Scope
In Google Cloud, how will you define the primary project used to view multi-project data in Cloud Monitoring?
Select ONE answer that would be relevant
Scoping project
Host project
Target project
Guest project
]]>https://eplus.dev/arcade-march-2026-sprint-2-solutionhttps://eplus.dev/arcade-march-2026-sprint-2-solution<![CDATA[Arcade March 2026 Sprint 2 (Solution)]]><![CDATA[Arcade March 2026 Sprint 2]]><![CDATA[Arcade March 2026 Sprint]]><![CDATA[David Nguyen]]>Wed, 11 Mar 2026 06:40:40 GMT<![CDATA[Arcade March 2026 Sprint 1 (Solution)]]><![CDATA[Overview
Welcome to Arcade March 2026 Sprint 1! This quick quiz will help you check your understanding and stay on track as you continue to build your Google Cloud skills.
Click Start Lab to begin.
Note: Take a moment to read each question carefully and double-check your answers before submitting. To ensure your completion is recorded, keep the quiz open for at least 10 minutes. Submitting earlier may result in an incomplete attempt.
Quiz
How will you create a new Linux server instance in Google Cloud using the Console?
Select ONE answer that would be relevant
Use Compute Engine
Use Cloud Spanner
Use Cloud Functions
Use Google Drive
In Google Cloud, what does the "Machine Type" configuration primarily determine?
Select ONE answer that would be relevant
OS version
Network speed
Hardware resources
Disk type
Which gcloud command is used to display all the configuration properties of your current environment?
Select ONE answer that would be relevant
gcloud info
gcloud help
gcloud config list
gcloud auth list
Which gcloud command is used to view a list of active account names in your environment?
Select ONE answer that would be relevant
gcloud info
gcloud help
gcloud config list
gcloud auth list
In Google Cloud, how will you create a new persistent disk in a specific zone using the command line?
Select ONE answer that would be relevant
gcloud storage new
gcloud compute disks create
gcloud disk provision
gcloud make disk
Which Google Cloud command is used to attach an existing Persistent Disk to a virtual machine instance?
Select ONE answer that would be relevant
Click Delete
Send it to a printer
gcloud compute instances attach-disk
gcloud vm mount-disk
]]>https://eplus.dev/arcade-march-2026-sprint-1-solutionhttps://eplus.dev/arcade-march-2026-sprint-1-solution<![CDATA[Arcade March 2026 Sprint 1 (Solution)]]><![CDATA[Arcade March 2026 Sprint 1]]><![CDATA[David Nguyen]]>Wed, 11 Mar 2026 06:31:46 GMT<![CDATA[Data Ingestion into BigQuery from Cloud Storage (Solution)]]><![CDATA[Overview
Labs are timed and cannot be paused. The timer starts when you click Start Lab.
The included cloud terminal is preconfigured with the gcloud SDK.
Use the terminal to execute commands and then click Check my progress to verify your work.
Challenge scenario
You are managing Google BigQuery, a data warehouse service that lets you store, manage, and analyze large datasets. In this scenario, you need to create a dataset and a table within BigQuery to store employee details. The dataset will act as a container for your tables, while the table will hold the actual employee information.
You need to complete the following tasks:
Create a BigQuery dataset: work_day
Create a table named employee with the following schema:
column
Type
employee_id
INTEGER
device_id
STRING
username
STRING
department
STRING
office
STRING
Import the CSV data into your newly created table from the pre-created Cloud Storage bucket named qwiklabs-gcp-02-a85ba8626654-a1f8-bucket. The pre-created bucket already contains the employees.csv file.
Click Check my progress to verify the objective.
Create BigQuery Schema and upload csv data
Solution of Lab
curl -LO raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/build-an-ai-science-tutor-application-with-vertex-ai-solution/lab.sh
source lab.sh
Script Alternative
export BUCKET=
bq mk work_day && bq load --source_format=CSV --skip_leading_rows=1 work_day.employee gs://$BUCKET/employees.csv employee_id:INTEGER,device_id:STRING,username:STRING,department:STRING,office:STRING
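After the load completes, you can optionally confirm the rows arrived. A sketch using the dataset and table names from this lab:

```shell
# Verify the import by counting rows in the new table
bq query --use_legacy_sql=false \
  'SELECT COUNT(*) AS row_count FROM `work_day.employee`'
```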
]]>https://eplus.dev/data-ingestion-into-bigquery-from-cloud-storage-solutionhttps://eplus.dev/data-ingestion-into-bigquery-from-cloud-storage-solution<![CDATA[Data Ingestion into BigQuery from Cloud Storage (Solution)]]><![CDATA[Data Ingestion into BigQuery from Cloud Storage]]><![CDATA[David Nguyen]]>Mon, 09 Mar 2026 01:43:32 GMT<![CDATA[The Arcade Base Camp March 2026]]><![CDATA[🏕 Arcade Base Camp March 2026
Welcome to Base Camp March 2026, where you'll develop key Google Cloud skills and earn an exclusive credential that will open doors to the cloud for you. No prior experience is required!
🔗 Main: https://www.skills.google/games/5703/labs/36448
📝 Solution: http://eplus.dev/start-here-dont-skip-this-arcade-lab
Deadline (all): 31/03/2026, 11:59 PM
🎯 Levels & Learning Zones
Arcade Base Camp March 2026 https://www.skills.google/games/7054 1q-basecamp-10550
Work Meets Play: Metrics in Motion https://www.skills.google/games/7058 1q-worknplay-31032
Base Camp Levels
Arcade Adventure: Security, Data, and Cloud Operations https://www.skills.google/games/7055 1q-cloudops-31269
Arcade Voyage: AI and Cloud Deployment https://www.skills.google/games/7056 1q-deploy-02057
Arcade Trail: Automation and Analytics https://www.skills.google/games/7057 1q-automation-5931
🧩 Trivia Challenges
Sprint 1 https://www.skills.google/games/7050 1q-sprint-10247
Sprint 2 https://www.skills.google/games/7051 1q-sprint-10284
Sprint 3 https://www.skills.google/games/7052 1q-sprint-10269
Sprint 4 https://www.skills.google/games/7053 1q-sprint-10229
👨 Guide
]]>https://eplus.dev/the-arcade-base-camp-march-2026https://eplus.dev/the-arcade-base-camp-march-2026<![CDATA[David Nguyen]]>Tue, 03 Mar 2026 06:24:15 GMT<![CDATA[Using Cloud Trace on Kubernetes Engine - GSP484]]><![CDATA[Overview
When supporting a production system that services HTTP requests or provides an API, it is important to measure the latency of your endpoints to detect when a system's performance is not operating within specification. In monolithic systems this single latency measure may be useful to detect and diagnose deteriorating behavior. With modern microservice architectures, however, this becomes much more difficult because a single request may result in numerous additional requests to other systems before the request can be fully handled.
Deteriorating performance in an underlying system may impact all other systems that rely on it. While latency can be measured at each service endpoint, it can be difficult to correlate slow behavior in the public endpoint with a particular sub-service that is misbehaving.
Enter distributed tracing. Distributed tracing uses metadata passed along with requests to correlate requests across service tiers. By collecting telemetry data from all the services in a microservice architecture and propagating a trace id from an initial request to all subsidiary requests, developers can much more easily identify which service is causing slowdowns affecting the rest of the system.
Google Cloud provides the Operations suite of products to handle logging, monitoring, and distributed tracing. This lab will be deployed to Kubernetes Engine and will demonstrate a multi-tier architecture implementing distributed tracing. It will also take advantage of Terraform to build out necessary infrastructure.
This lab was created by GKE Helmsman engineers to give you a better understanding of distributed tracing on GKE. You can view this demo by running the git clone https://github.com/GoogleCloudPlatform/gke-tracing-demo and cd gke-tracing-demo commands in Cloud Shell. We encourage any and all to contribute to our assets!
Setup and requirements
Before you click the Start Lab button
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources are made available to you.
This hands-on lab lets you do the lab activities in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito (recommended) or private browser window to run this lab. This prevents conflicts between your personal account and the student account, which may cause extra charges incurred to your personal account.
Time to complete the lab (remember, once you start, you cannot pause a lab).
Note: Use only the student account for this lab. If you use a different Google Cloud account, you may incur charges to that account.
How to start your lab and sign in to the Google Cloud console
Click the Start Lab button. If you need to pay for the lab, a dialog opens for you to select your payment method. On the left is the Lab Details pane with the following:
The Open Google Cloud console button
Time remaining
The temporary credentials that you must use for this lab
Other information, if needed, to step through this lab
Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
Note: If you see the Choose an account dialog, click Use Another Account.
If necessary, copy the Username below and paste it into the Sign in dialog.
[email protected]
Copied!
You can also find the Username in the Lab Details pane.
Click Next.
Copy the Password below and paste it into the Welcome dialog.
vGCaxTeSxpgN
Copied!
You can also find the Password in the Lab Details pane.
Click Next.
Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials.
Note: Using your own Google Cloud account for this lab may incur extra charges.
Click through the subsequent pages:
Accept the terms and conditions.
Do not add recovery options or two-factor authentication (because this is a temporary account).
Do not sign up for free trials.
After a few moments, the Google Cloud console opens in this tab.
Note: To access Google Cloud products and services, click the Navigation menu or type the service or product name in the Search field.
Activate Cloud Shell
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on the Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
Click Activate Cloud Shell at the top of the Google Cloud console.
Click through the following windows:
Continue through the Cloud Shell information window.
Authorize Cloud Shell to use your credentials to make Google Cloud API calls.
When you are connected, you are already authenticated, and the project is set to your Project_ID, qwiklabs-gcp-00-86734d2ce627. The output contains a line that declares the Project_ID for this session:
Your Cloud Platform project in this session is set to qwiklabs-gcp-00-86734d2ce627
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
(Optional) You can list the active account name with this command:
gcloud auth list
Copied!
Click Authorize.
Output:
ACTIVE: *
ACCOUNT: [email protected]
To set the active account, run:
$ gcloud config set account `ACCOUNT`
(Optional) You can list the project ID with this command:
gcloud config list project
Copied!
Output:
[core]
project = qwiklabs-gcp-00-86734d2ce627
Note: For full documentation of gcloud, in Google Cloud, refer to the gcloud CLI overview guide.
Clone demo
Clone the resources needed for this lab by running:
git clone https://github.com/GoogleCloudPlatform/gke-tracing-demo
Copied!
Go into the directory for this demo:
cd gke-tracing-demo
Copied!
Set your region and zone
Certain Compute Engine resources live in regions and zones. A region is a specific geographical location where you can run your resources. Each region has one or more zones.
Note: Learn more about regions and zones and see a complete list in Regions & Zones documentation.
Run the following to set a region and zone for your lab (you can use the region/zone that's best for you):
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-f
Copied!
Architecture
The lab begins by deploying a Kubernetes Engine cluster. To this cluster will be deployed a simple web application fronted by a load balancer. The web app will publish messages provided by the user to a Cloud Pub/Sub topic. The application is instrumented such that HTTP requests to it will result in the creation of a trace whose context will be propagated to the Cloud Pub/Sub publish API request. The correlated telemetry data from these requests will be available in the Cloud Trace Console.
Introduction to Terraform
Following the principles of infrastructure as code and immutable infrastructure, Terraform supports the writing of declarative descriptions of the desired state of infrastructure. When the descriptor is applied, Terraform uses Google Cloud APIs to provision and update resources to match. Terraform compares the desired state with the current state so incremental changes can be made without deleting everything and starting over. For instance, Terraform can build out Google Cloud projects and compute instances, etc., even set up a Kubernetes Engine cluster and deploy applications to it. When requirements change, the descriptor can be updated and Terraform will adjust the cloud infrastructure accordingly.
This example will start up a Kubernetes Engine cluster using Terraform. Then you will use Kubernetes commands to deploy a demo application to the cluster. By default, Kubernetes Engine clusters in Google Cloud are launched with a pre-configured Fluentd-based collector that forwards logging events for the cluster to Cloud Monitoring. Interacting with the demo app will produce trace events that are visible in the Cloud Trace UI.
Running Terraform
There are three Terraform files provided with this demo, located in the /terraform subdirectory of the project. The first one, main.tf, is the starting point for Terraform. It describes the features that will be used, the resources that will be manipulated, and the outputs that will result. The second file is provider.tf, which indicates which cloud provider and version will be the target of the Terraform commands, in this case Google Cloud. The final file is variables.tf, which contains a list of variables that are used as inputs into Terraform. Any variables referenced in main.tf that do not have defaults configured in variables.tf will result in prompts to the user at runtime.
Task 1. Initialization
Given that authentication was configured above, you are now ready to deploy the infrastructure.
Run the following command from the root directory of the project:
cd terraform
Copied!
Update the provider.tf file
Remove the provider version constraint from the provider.tf file.
Edit the provider.tf script file:
nano provider.tf
Copied!
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "3.84.0"
}
}
}
provider "google" {
project = var.project
}
Copied!
Then save the file with CTRL + X > Y > Enter.
After modification your provider.tf script file should look like:
...
provider "google" {
project = var.project
}
From here, initialize Terraform.
Enter:
terraform init
Copied!
This will download the dependencies that Terraform requires. Terraform also needs two input values: the Google Cloud project and the Google Cloud zone to which the demo application should be deployed. Terraform will prompt for these values if it does not know them already. By default, it will look for a file called terraform.tfvars or files with a suffix of .auto.tfvars in the current directory to obtain those values.
This demo provides a convenience script to prompt for project and zone and persist them in a terraform.tfvars file.
Run:
../scripts/generate-tfvars.sh
Copied!
Note: If the file already exists you will receive an error.
The script uses previously-configured values from the gcloud command. If they have not been configured, the error message will indicate how they should be set. The existing values can be viewed with the following command:
gcloud config list
Copied!
If the displayed values don't correspond to where you intend to run the demo application, change the values in terraform.tfvars to your preferred values.
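For reference, terraform.tfvars is just a set of key/value assignments. Given the project and zone configured earlier in this lab, the generated file should look roughly like the following sketch (exact key names are defined in variables.tf and may differ):

```
project="qwiklabs-gcp-00-86734d2ce627"
zone="us-central1-f"
```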
Task 2. Deployment
Having initialized Terraform you can see the work that Terraform will perform with the following command:
terraform plan
Copied!
This command can be used to visually verify that settings are correct and Terraform will inform you if it detects any errors. While not necessary, it is a good practice to run it every time prior to changing infrastructure using Terraform.
After verification, tell Terraform to set up the necessary infrastructure:
terraform apply
Copied!
Terraform displays the changes that will be made and asks you to confirm with yes.
Note: If you get deprecation warnings related to the zone variable, please ignore it and move forward in the lab.
While you're waiting for your build to complete, set up a Cloud Monitoring workspace to be used later in the lab.
Test completed task
Click Check my progress to verify your performed task. If you have successfully deployed necessary infrastructure with Terraform, you will see an assessment score.
Use Terraform to set up the necessary infrastructure
Create a Monitoring Metrics Scope
Set up a Monitoring Metrics Scope that's tied to your Google Cloud Project. The following steps create a new account that has a free trial of Monitoring.
In the Cloud Console, click Navigation menu > View All Products > Observability > Monitoring.
When the Monitoring Overview page opens, your metrics scope project is ready.
Task 3. Deploy demo application
Back in Cloud Shell, after you see the Apply complete! message, return to the Console.
In the Navigation menu, go to Kubernetes Engine > Clusters to see your cluster.
Click Navigation menu > View All Products, then scroll down to the Analytics section and click Pub/Sub to see the Topics and Subscriptions.
Now, deploy the demo application using Kubernetes's kubectl command:
kubectl apply -f tracing-demo-deployment.yaml
Copied!
Once the app has been deployed, it can be viewed in the Kubernetes Engine > Workloads. You can also see the load balancer that was created for the application in the Gateways, Services & Ingress > Services section of the console.
It may take a few minutes for the application to be deployed. Initially, your workloads console may show a status of "Does not have minimum availability".
Refresh the page until you see an "OK" in the status bar:
Incidentally, the endpoint can be programmatically acquired using the following command:
echo http://$(kubectl get svc tracing-demo -n default -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
Copied!
Test completed task
Click Check my progress to verify your performed task. If you have successfully deployed demo application, you will see an assessment score.
Deploy demo application
Task 4. Validation
Generating telemetry data
Once the demo application is deployed, you should see a list of your exposed services.
Still in the Kubernetes window, under Gateways, Services & Ingress click on Services to view the exposed services.
Click on the endpoint listed next to the tracing-demo load balancer to open the demo app web page in a new tab of your browser.
Note that your IP address will likely be different from the example above. The page displayed is simple:
To the URL, add the query string ?string=CustomMessage and see that the message is displayed:
As you can see, if a string parameter is not provided it uses a default value of Hello World. The app is used to generate trace telemetry data.
Replace "CustomMessage" with your own messages to generate some data to look at.
Test completed task
Click Check my progress to verify your performed task. If you have successfully generated telemetry data, you will see an assessment score.
Generate Telemetry Data
Examining traces
In the Console, select Navigation menu > View all products, scroll to the Observability section, and click Trace > Trace explorer. You should see a chart displaying trace events plotted on a timeline with latency as the vertical metric.
If not, click the Auto Run toggle button to see the most up to date data.
Click on a dark block in the top graph. The top graph is a "heatmap" view, which shows the density of spans occurring at a specific duration and time.
The top entry in the list is known as the root span and represents the duration of the HTTP request, from the moment the first byte arrives until the moment the last byte of the response is sent. The bottom entry in the list represents the duration of the request made when sending the message to the Pub/Sub topic.
Since the handling of the HTTP request is blocked by the call to the Pub/Sub API it is clear that the vast majority of the time spent within the HTTP request is taken up by the Pub/Sub interaction. This demonstrates that by instrumenting each tier of your application you can easily identify where the bottlenecks are.
Pulling Pub/Sub messages
As described in the Architecture section of this document, messages from the demo app are published to a Pub/Sub topic.
These messages can be consumed from the topic using the gcloud CLI:
gcloud pubsub subscriptions pull --auto-ack --limit 10 tracing-demo-cli
Copied!
Output:
DATA: Hello World
MESSAGE_ID: 4117341758575424
ORDERING_KEY:
ATTRIBUTES:
DELIVERY_ATTEMPT:
DATA: CustomMessage
MESSAGE_ID: 4117243358956897
ORDERING_KEY:
ATTRIBUTES:
DELIVERY_ATTEMPT:
Pulling the messages from the topic has no impact on tracing. This section simply provides a consumer of the messages for verification purposes.
Monitoring and logging
Cloud Monitoring and Logging are not the subject of this demo, but it is worth noting that the application you deployed will publish logs to Cloud Logging and metrics to Cloud Monitoring.
In the Console, select Navigation menu > Monitoring > Metrics Explorer.
In the Select a metric field, select VM Instance > Instance > CPU Usage then click Apply.
You should see a chart of this metric plotted for different pods running in the cluster.
To see logs, select Navigation menu > View all products scroll to Observability section and click on Logging.
In Log fields section, set the following:
RESOURCE TYPE: Kubernetes Container
CLUSTER NAME: tracing-demo-space
NAMESPACE NAME: default
Task 5. Troubleshooting in your own environment
Several possible errors can be diagnosed using the kubectl command. For instance, a deployment can be shown:
kubectl get deployment tracing-demo
Copied!
Output:
NAME READY UP-TO-DATE AVAILABLE AGE
tracing-demo 1/1 1 1 21m
More details can be shown with describe:
kubectl describe deployment tracing-demo
Copied!
This command will show a list of deployed pods:
kubectl get pod
Copied!
Output:
NAME READY STATUS RESTARTS AGE
tracing-demo-59cc7988fc-h5w27 1/1 Running 0 23m
Again, details of the pod can be seen with describe:
kubectl describe pod tracing-demo
Copied!
Note the pod Name to use in the next step.
Once you know the pod name, use the name to view logs locally:
kubectl logs <POD_NAME>
Copied!
Output:
10.60.0.1 - - [22/Jun/2018:19:42:23 +0000] "HEAD / HTTP/1.0" 200 - "-" "-"
Publishing string: Hello World
10.60.0.1 - - [22/Jun/2018:19:42:23 +0000] "GET / HTTP/1.1" 200 669 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36"
If the install script fails with a Permission denied error when running Terraform, the credentials that Terraform is using do not provide the necessary permissions to create resources in the selected project. Ensure that the account listed in gcloud config list has the necessary permissions to create resources. If it does, regenerate the application default credentials using gcloud auth application-default login.
Task 6. Teardown
Qwiklabs will take care of shutting down all the resources used for this lab, but here's what you would need to do to clean up your own environment, to save on cost and to be a good cloud citizen:
terraform destroy
Copied!
As with apply, Terraform will prompt for a yes to confirm your intent.
Since Terraform tracks the resources it created it can tear down the cluster, the Pub/Sub topic, and the Pub/Sub subscription.
Note: If you get deprecation warnings related to the zone variable, ignore it.
Lab Solution
Quick
curl -LO raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/GSP484/lab.sh
source lab.sh
Manual
]]>https://eplus.dev/using-cloud-trace-on-kubernetes-engine-gsp484https://eplus.dev/using-cloud-trace-on-kubernetes-engine-gsp484<![CDATA[GSP484]]><![CDATA[Using Cloud Trace on Kubernetes Engine]]><![CDATA[Using Cloud Trace on Kubernetes Engine - GSP484]]><![CDATA[David Nguyen]]>Sat, 28 Feb 2026 06:10:01 GMT<![CDATA[How to Use a Network Policy on Google Kubernetes Engine - GSP480]]><![CDATA[Overview
This lab will show you how to improve the security of your Kubernetes Engine by applying fine-grained restrictions to network communication.
The Principle of Least Privilege is widely recognized as an important design consideration in enhancing the protection of critical systems from faults and malicious behavior. It suggests that every component must be able to access only the information and resources that are necessary for its legitimate purpose. This document demonstrates how the Principle of Least Privilege can be implemented within the Kubernetes Engine network layer.
Network connections can be restricted at two tiers of your Kubernetes Engine infrastructure. The first, and coarser grained, mechanism is the application of Firewall Rules at the Network, Subnetwork, and Host levels. These rules are applied outside of the Kubernetes Engine at the VPC level.
While Firewall Rules are a powerful security measure, Kubernetes enables you to define even finer grained rules via Network Policies. Network Policies are used to limit intra-cluster communication. Note that network policies do not apply to pods attached to the host's network namespace.
For this lab you will provision a private Kubernetes Engine cluster and a bastion host with which to access it. A bastion host provides a single host that has access to the cluster, which, when combined with a private Kubernetes network, ensures that the cluster isn't exposed to malicious behavior from the internet at large. Bastions are particularly useful when you do not have VPN access to the cloud network.
Within the cluster, a simple HTTP server and two client pods will be provisioned. You will learn how to use a Network Policy and labeling to only allow connections from one of the client pods.
This lab was created by GKE Helmsman engineers to give you a better understanding of GKE network policies. You can view this demo by running the gsutil cp -r gs://spls/gsp480/gke-network-policy-demo . and cd gke-network-policy-demo commands in Cloud Shell. We encourage any and all to contribute to our assets!
Architecture
You will define a private, standard mode Kubernetes cluster that uses Dataplane V2. Dataplane V2 has network policies enabled by default.
Since the cluster is private, neither the API nor the worker nodes will be accessible from the internet. Instead, you will define a bastion host and use a firewall rule to enable access to it. The bastion's IP address is defined as an authorized network for the cluster, which grants it access to the API.
Within the cluster, provision three workloads:
hello-server: this is a simple HTTP server with an internally-accessible endpoint
hello-client-allowed: this is a single pod that repeatedly attempts to access hello-server. The pod is labeled such that the Network Policy will allow it to connect to hello-server.
hello-client-blocked: this runs the same code as hello-client-allowed but the pod is labeled such that the Network Policy will not allow it to connect to hello-server.
Setup and requirements
Before you click the Start Lab button
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources are made available to you.
This hands-on lab lets you do the lab activities in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito (recommended) or private browser window to run this lab. This prevents conflicts between your personal account and the student account, which may cause extra charges incurred to your personal account.
Time to complete the lab (remember, once you start, you cannot pause a lab).
Note: Use only the student account for this lab. If you use a different Google Cloud account, you may incur charges to that account.
How to start your lab and sign in to the Google Cloud console
Click the Start Lab button. If you need to pay for the lab, a dialog opens for you to select your payment method. On the left is the Lab Details pane with the following:
The Open Google Cloud console button
Time remaining
The temporary credentials that you must use for this lab
Other information, if needed, to step through this lab
Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
Note: If you see the Choose an account dialog, click Use Another Account.
If necessary, copy the Username below and paste it into the Sign in dialog.
[email protected]
You can also find the Username in the Lab Details pane.
Click Next.
Copy the Password below and paste it into the Welcome dialog.
uEaNw77wW5EL
You can also find the Password in the Lab Details pane.
Click Next.
Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials.
Note: Using your own Google Cloud account for this lab may incur extra charges.
Click through the subsequent pages:
Accept the terms and conditions.
Do not add recovery options or two-factor authentication (because this is a temporary account).
Do not sign up for free trials.
After a few moments, the Google Cloud console opens in this tab.
Note: To access Google Cloud products and services, click the Navigation menu or type the service or product name in the Search field.
Activate Cloud Shell
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on the Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
Click Activate Cloud Shell at the top of the Google Cloud console.
Click through the following windows:
Continue through the Cloud Shell information window.
Authorize Cloud Shell to use your credentials to make Google Cloud API calls.
When you are connected, you are already authenticated, and the project is set to your Project_ID, qwiklabs-gcp-02-668b9ffe0190. The output contains a line that declares the Project_ID for this session:
Your Cloud Platform project in this session is set to qwiklabs-gcp-02-668b9ffe0190
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
(Optional) You can list the active account name with this command:
gcloud auth list
Click Authorize.
Output:
ACTIVE: *
ACCOUNT: [email protected]
To set the active account, run:
$ gcloud config set account `ACCOUNT`
(Optional) You can list the project ID with this command:
gcloud config list project
Output:
[core]
project = qwiklabs-gcp-02-668b9ffe0190
Note: For full documentation of gcloud, in Google Cloud, refer to the gcloud CLI overview guide.
Clone demo
Copy the resources needed for this lab exercise from a Cloud Storage bucket:
gsutil cp -r gs://spls/gsp480/gke-network-policy-demo .
Go into the directory for the demo:
cd gke-network-policy-demo
Make the demo files executable:
chmod -R 755 *
Task 1. Lab setup
First, set the Google Cloud region and zone.
Set the Google Cloud region.
gcloud config set compute/region "europe-west1"
Set the Google Cloud zone.
gcloud config set compute/zone "europe-west1-d"
This lab uses the following Google Cloud service APIs, which have been enabled for you:
compute.googleapis.com
container.googleapis.com
cloudbuild.googleapis.com
In addition, the Terraform configuration takes three parameters to determine where the Kubernetes Engine cluster should be created:
project ID
region
zone
For simplicity, these parameters are specified in a file named terraform.tfvars, in the terraform directory.
To ensure the appropriate APIs are enabled and to generate the terraform/terraform.tfvars file based on your gcloud defaults, run:
make setup-project
Type y when asked to confirm.
This will enable the necessary Service APIs, and it will also generate a terraform/terraform.tfvars file with keys for the project, region, and zone.
Verify that the values match the output of gcloud config list by running:
cat terraform/terraform.tfvars
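Given the region and zone set at the start of this lab, the generated file should contain assignments roughly like the following sketch (exact key names are defined by the demo's Terraform configuration):

```
project="qwiklabs-gcp-02-668b9ffe0190"
region="europe-west1"
zone="europe-west1-d"
```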
Provisioning the Kubernetes Engine cluster
Next, apply the Terraform configuration within the project root:
make tf-apply
When prompted, review the generated plan and enter yes to deploy the environment.
This will take several minutes to deploy.
Task 2. Validation
Terraform outputs a message when the cluster's been successfully created.
...snip...
google_container_cluster.primary: Still creating... (3m0s elapsed)
google_container_cluster.primary: Still creating... (3m10s elapsed)
google_container_cluster.primary: Still creating... (3m20s elapsed)
google_container_cluster.primary: Still creating... (3m30s elapsed)
google_container_cluster.primary: Creation complete after 3m34s (ID: gke-demo-cluster)
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
Test completed task
Click Check my progress to verify your performed task. If you have successfully deployed necessary infrastructure with Terraform, you will see an assessment score.
Use Terraform to set up the necessary infrastructure (Lab setup)
Now ssh into the bastion for the remaining steps:
gcloud compute ssh gke-demo-bastion
Existing versions of kubectl and custom Kubernetes clients contain provider-specific code to manage authentication between the client and Google Kubernetes Engine. Starting with v1.26, this code will no longer be included as part of the OSS kubectl. GKE users will need to download and use a separate authentication plugin to generate GKE-specific tokens. This new binary, gke-gcloud-auth-plugin, uses the Kubernetes Client-go Credential Plugin mechanism to extend kubectl's authentication to support GKE. For more information, you can check out the following documentation.
To have kubectl use the new binary plugin for authentication instead of using the default provider-specific code, use the following steps.
Once connected, run the following command to install the gke-gcloud-auth-plugin on the VM.
sudo apt-get install google-cloud-sdk-gke-gcloud-auth-plugin
Set export USE_GKE_GCLOUD_AUTH_PLUGIN=True in ~/.bashrc:
echo "export USE_GKE_GCLOUD_AUTH_PLUGIN=True" >> ~/.bashrc
Run the following command:
source ~/.bashrc
Run the following command to force the config for this cluster to be updated to the Client-go Credential Plugin configuration.
gcloud container clusters get-credentials gke-demo-cluster --zone europe-west1-d
On success, you should see this message:
kubeconfig entry generated for gke-demo-cluster.
The newly-created cluster will now be available for the standard kubectl commands on the bastion.
Task 3. Installing the hello server
The test application consists of one simple HTTP server, deployed as hello-server, and two clients, one of which will be labeled app=hello and the other app=not-hello.
All three services can be deployed by applying the hello-app manifests.
On the bastion, run:
kubectl apply -f ./manifests/hello-app/
Output:
deployment.apps/hello-client-allowed created
deployment.apps/hello-client-blocked created
service/hello-server created
deployment.apps/hello-server created
Verify all three pods have been successfully deployed:
kubectl get pods
You will see one running pod for each of hello-client-allowed, hello-client-blocked, and hello-server deployments.
NAME                                    READY   STATUS    RESTARTS   AGE
hello-client-allowed-7d95fcd5d9-t8fsk   1/1     Running   0          14m
hello-client-blocked-6497db465d-ckbn8   1/1     Running   0          14m
hello-server-7df58f7fb5-nvcvd           1/1     Running   0          14m
Test completed task
Click Check my progress to verify your performed task. If you have successfully deployed a simple HTTP hello server, you will see an assessment score.
Installing the hello server
Task 4. Confirming default access to the hello server
First, tail the "allowed" client:
kubectl logs --tail 10 -f $(kubectl get pods -oname -l app=hello)
Press CTRL+C to exit.
Second, tail the logs of the "blocked" client:
kubectl logs --tail 10 -f $(kubectl get pods -oname -l app=not-hello)
Press CTRL+C to exit.
You will notice that both pods are successfully able to connect to the hello-server service. This is because you have not yet defined a Network Policy to restrict access. In each of these windows you should see successful responses from the server.
Hostname: hello-server-7df58f7fb5-nvcvd
Hello, world!
Version: 1.0.0
Hostname: hello-server-7df58f7fb5-nvcvd
Hello, world!
Version: 1.0.0
Hostname: hello-server-7df58f7fb5-nvcvd
...
Task 5. Restricting access with a Network Policy
Now you will block access to the hello-server pod from all pods that are not labeled with app=hello.
The policy definition you'll use is contained in manifests/network-policy.yaml.
Apply the policy with the following command:
kubectl apply -f ./manifests/network-policy.yaml
Output:
networkpolicy.networking.k8s.io/hello-server-allow-from-hello-client created
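The manifest itself is not reproduced in this lab, but a NetworkPolicy with the effect described — allowing ingress to hello-server only from pods labeled app=hello — would look roughly like this sketch (the hello-server pod label app: hello-server is an assumption, not taken from the lab files):

```yaml
# Illustrative reconstruction of manifests/network-policy.yaml.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: hello-server-allow-from-hello-client
spec:
  podSelector:
    matchLabels:
      app: hello-server   # assumed label on the server pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: hello       # only the "allowed" client carries this label
```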
Tail the logs of the "blocked" client again:
kubectl logs --tail 10 -f $(kubectl get pods -oname -l app=not-hello)
You'll now see that the output looks like this in the window tailing the "blocked" client:
wget: download timed out
wget: download timed out
wget: download timed out
wget: download timed out
wget: download timed out
wget: download timed out
wget: download timed out
wget: download timed out
wget: download timed out
...
The network policy has now prevented communication to the hello-server from the unlabeled pod.
Press CTRL+C to exit.
Task 6. Restricting namespaces with Network Policies
In the previous example, you defined a network policy that restricts connections based on pod labels. It is often useful to instead label entire namespaces, particularly when teams or applications are granted their own namespaces.
You'll now modify the network policy to only allow traffic from a designated namespace, then you'll move the hello-allowed pod into that new namespace.
First, delete the existing network policy:
kubectl delete -f ./manifests/network-policy.yaml
Output:
networkpolicy.networking.k8s.io "hello-server-allow-from-hello-client" deleted
Create the namespaced version:
kubectl create -f ./manifests/network-policy-namespaced.yaml
Output:
networkpolicy.networking.k8s.io/hello-server-allow-from-hello-client created
Now observe the logs of the hello-allowed-client pod in the default namespace:
kubectl logs --tail 10 -f $(kubectl get pods -oname -l app=hello)
You will notice it is no longer able to connect to the hello-server.
Press CTRL+C to exit.
Finally, deploy a second copy of the hello-clients app into the new namespace.
kubectl -n hello-apps apply -f ./manifests/hello-app/hello-client.yaml
Output:
deployment.apps/hello-client-allowed created
deployment.apps/hello-client-blocked created
Test completed task
Click Check my progress to verify your performed task. If you have successfully deployed a second copy of the hello-clients app into the new namespace, you will see an assessment score.
Deploy a second copy of the hello-clients app into the new namespace
Task 7. Validation
Next, check the logs for the two new hello-app clients.
View the logs for the "hello"-labeled app in the hello-apps namespace by running:
kubectl logs --tail 10 -f -n hello-apps $(kubectl get pods -oname -l app=hello -n hello-apps)
Output:
Hostname: hello-server-6c6fd59cc9-7fvgp
Hello, world!
Version: 1.0.0
Hostname: hello-server-6c6fd59cc9-7fvgp
Hello, world!
Version: 1.0.0
Hostname: hello-server-6c6fd59cc9-7fvgp
Hello, world!
Version: 1.0.0
Hostname: hello-server-6c6fd59cc9-7fvgp
Hello, world!
Version: 1.0.0
Hostname: hello-server-6c6fd59cc9-7fvgp
Both clients are able to connect successfully because, as of Kubernetes 1.10.x, NetworkPolicies do not support restricting access to a specific set of pods within a given namespace. You can allowlist by pod label, by namespace label, or allowlist the union (i.e. OR) of both. But you cannot yet allowlist the intersection (i.e. AND) of pod labels and namespace labels.
Press CTRL+C to exit.
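The union-versus-intersection distinction shows up in the shape of the from clause. Illustrative fragments (the team label key is an assumption):

```yaml
# Union (OR): two separate "from" items -- a pod labeled app=hello,
# OR any pod in a namespace labeled team=hello-apps, may connect.
ingress:
- from:
  - podSelector:
      matchLabels:
        app: hello
  - namespaceSelector:
      matchLabels:
        team: hello-apps
---
# Intersection (AND): both selectors in ONE "from" item. Not supported
# in Kubernetes 1.10.x; later releases accept this form.
ingress:
- from:
  - podSelector:
      matchLabels:
        app: hello
    namespaceSelector:
      matchLabels:
        team: hello-apps
```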
Task 8. Teardown
Qwiklabs will take care of shutting down all the resources used for this lab, but here's what you would need to do to clean up your own environment to save on cost and to be a good cloud citizen:
Log out of the bastion host:
exit
Run the following to destroy the environment:
make teardown
Output:
...snip...
google_compute_subnetwork.cluster-subnet: Still destroying... (ID: us-east1/kube-net-subnet, 20s elapsed)
google_compute_subnetwork.cluster-subnet: Destruction complete after 25s
google_compute_network.gke-network: Destroying... (ID: kube-net)
google_compute_network.gke-network: Still destroying... (ID: kube-net, 10s elapsed)
google_compute_network.gke-network: Still destroying... (ID: kube-net, 20s elapsed)
google_compute_network.gke-network: Destruction complete after 26s
Destroy complete! Resources: 5 destroyed.
Task 9. Troubleshooting in your own environment
The install script fails with a "permission denied" error when running Terraform
The credentials that Terraform is using do not provide the necessary permissions to create resources in the selected projects. Ensure that the account listed in gcloud config list has the necessary permissions to create resources. If it does, regenerate the application default credentials using gcloud auth application-default login.
Invalid fingerprint error during Terraform operations
Terraform occasionally complains about an invalid fingerprint when updating certain resources.
If you see the error below, simply re-run the command.
Solution of Lab
Quick
curl -LO https://raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/GSP480/lab.sh
source lab.sh
Manual
]]>https://eplus.dev/how-to-use-a-network-policy-on-google-kubernetes-engine-gsp480https://eplus.dev/how-to-use-a-network-policy-on-google-kubernetes-engine-gsp480<![CDATA[How to Use a Network Policy on Google Kubernetes Engine]]><![CDATA[How to Use a Network Policy on Google Kubernetes Engine - GSP480]]><![CDATA[David Nguyen]]>Sat, 28 Feb 2026 05:36:26 GMT<![CDATA[Build an AI-Powered Interview Question Generator using Gemini (Solution)]]><![CDATA[Overview
Labs are timed and cannot be paused. The timer starts when you click Start Lab.
The included IDE is preconfigured with the gcloud SDK.
Use the terminal to execute commands and then click Check my progress to verify your work.
Challenge scenario
Scenario: You're a developer at a recruitment firm that specializes in tech talent acquisition. You are looking for ways to streamline the interview preparation process for hiring managers by generating tailored interview questions for various roles using AI. You need to complete the following task:
Task: Develop a Python function named interview(prompt). This function should invoke the gemini-2.5-flash-lite model using the supplied prompt and generate a response. For this challenge, use the prompt: "Give me ten interview questions for the role of program manager."
Follow these steps to interact with the Generative AI APIs using Vertex AI Python SDK.
Click File > New File to open a new file within the Code Editor.
Write the Python code to use Google's Vertex AI SDK to interact with the pre-trained Text Generation AI model.
Create and save the Python file.
Execute the Python file by running the command below in the terminal within the Code Editor pane, replacing FILE_NAME, to view the output.
/usr/bin/python3 /FILE_NAME.py
Note: You can ignore any warnings related to Python version dependencies.
Click Check my progress to verify the objective.
Create and run a file to send a text prompt to Gen AI and receive a response
Solution of Lab
curl -LO https://raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/build-an-ai-powered-interview-question-generator-using-gemini-solution/lab.sh
source lab.sh
Script Alternative
cat <<'EOF' > lab.py
#!/usr/bin/python3
import vertexai
from vertexai.generative_models import GenerativeModel
# Prompt required by the lab
PROMPT = "Give me ten interview questions for the role of program manager."
def interview(prompt: str) -> str:
"""
Invoke Vertex AI Gemini model (gemini-2.5-flash-lite) with the supplied prompt
and return the generated text response.
"""
# Auto-detect project and region from the gcloud environment (Qwiklabs usually sets these)
project_id = None
location = None
try:
import subprocess
project_id = subprocess.check_output(
["gcloud", "config", "get-value", "project"],
text=True
).strip()
location = subprocess.check_output(
["gcloud", "config", "get-value", "ai/region"],
text=True
).strip()
except Exception:
pass
# Sensible defaults for most labs if ai/region isn't set
if not project_id:
raise RuntimeError("Could not detect GCP project. Run: gcloud config get-value project")
if not location or location == "(unset)":
location = "us-central1"
vertexai.init(project=project_id, location=location)
model = GenerativeModel("gemini-2.5-flash-lite")
response = model.generate_content(
prompt,
generation_config={
"temperature": 0.7,
"max_output_tokens": 512,
},
)
# Return the text output
return response.text if hasattr(response, "text") else str(response)
if __name__ == "__main__":
print(interview(PROMPT))
EOF
Run
/usr/bin/python3 lab.py
]]>https://eplus.dev/build-an-ai-powered-interview-question-generator-using-gemini-solutionhttps://eplus.dev/build-an-ai-powered-interview-question-generator-using-gemini-solution<![CDATA[Build an AI-Powered Interview Question Generator using Gemini]]><![CDATA[Build an AI-Powered Interview Question Generator using Gemini (Solution)]]><![CDATA[David Nguyen]]>Sat, 28 Feb 2026 05:24:48 GMT<![CDATA[Generate AI Images and Summarize them Using Gemini and Python (Solution)]]><![CDATA[Overview
Labs are timed and cannot be paused. The timer starts when you click Start Lab.
The included IDE is preconfigured with the gcloud SDK.
Use the terminal to execute commands and then click Check my progress to verify your work.
In a challenge lab you're given a scenario and a set of tasks. Instead of following step-by-step instructions, you will use the skills learned from the labs in the course to figure out how to complete the tasks on your own! An automated scoring system (shown on this page) will provide feedback on whether you have completed your tasks correctly.
When you take a challenge lab, you will not be taught new Google Cloud concepts. You are expected to extend your learned skills, like changing default values and reading and researching error messages to fix your own mistakes.
To score 100% you must successfully complete all tasks within the time period! Are you ready for the challenge?
Follow these steps to interact with the Generative AI APIs using Vertex AI Python SDK.
Click File > New File to open a new file within the Code Editor.
Write the Python code to use Google's Vertex AI SDK to interact with the pre-trained Text Generation AI model.
Create and save the Python file.
Execute the Python file by running the command below in the terminal within the Code Editor pane, replacing FILE_NAME, to view the output.
/usr/bin/python3 /FILE_NAME.py
To view the generated image, use EXPLORER.
Note: You can ignore any warnings related to Python version dependencies.
Challenge scenario
Scenario: You're a developer at Cymbal Inc., an AI-powered bouquet design company. Your clients can describe their dream bouquet, and your system generates realistic images for their approval. To further enhance the experience, you're integrating cutting-edge image analysis to provide descriptive summaries of the generated bouquets. Your main application will invoke the relevant methods based on the users' interactions, and to facilitate that, you need to complete the following tasks:
Task 1: Develop a Python function named generate_bouquet_image(prompt). This function should invoke the imagen-4.0-generate-001 model using the supplied prompt, generate the image, and store it locally. For this challenge, use the prompt: "Create an image containing a bouquet of 2 sunflowers and 3 roses".
Click Check my progress to verify the objective.
Generate an image by sending a text prompt
Task 2: Develop a second Python function called analyze_bouquet_image(image_path). This function will take the image path as input along with a text prompt to generate birthday wishes based on the image passed and send it to the gemini-2.5-flash model. To ensure responses can be obtained as and when they are generated, enable streaming on the prompt requests.
Click Check my progress to verify the objective.
Analyze the saved image by using a multimodal model
Solution of Lab
curl -LO https://raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/generate-ai-images-and-summarize-them-using-gemini-and-python-solution/lab.sh
source lab.sh
Script Alternative
cat <<'EOF' > lab.py
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel
from vertexai.generative_models import GenerativeModel, Part
def generate_bouquet_image(prompt: str) -> str:
vertexai.init()
model = ImageGenerationModel.from_pretrained(
"imagen-4.0-generate-001"
)
images = model.generate_images(
prompt=prompt,
number_of_images=1
)
image_path = "bouquet.jpeg"
images[0].save(image_path)
print(f"Image generated and saved as {image_path}")
return image_path
def analyze_bouquet_image(image_path: str):
model = GenerativeModel("gemini-2.5-flash")
# Read image as binary (required)
with open(image_path, "rb") as f:
image_bytes = f.read()
image_part = Part.from_data(
data=image_bytes,
mime_type="image/jpeg"
)
prompt = (
"Analyze this bouquet image and generate a short birthday wish "
"based on the flowers you see."
)
# STREAMING DISABLED (checker requirement)
response = model.generate_content(
[prompt, image_part],
stream=False
)
print("Birthday wish:")
print(response.text)
if __name__ == "__main__":
prompt = "Create an image containing a bouquet of 2 sunflowers and 3 roses"
image_path = generate_bouquet_image(prompt)
analyze_bouquet_image(image_path)
EOF
Run
/usr/bin/python3 lab.py
]]>https://eplus.dev/generate-ai-images-and-summarize-them-using-gemini-and-python-solutionhttps://eplus.dev/generate-ai-images-and-summarize-them-using-gemini-and-python-solution<![CDATA[Generate AI Images and Summarize them Using Gemini and Python]]><![CDATA[Generate AI Images and Summarize them Using Gemini and Python (Solution)]]><![CDATA[David Nguyen]]>Sat, 28 Feb 2026 05:07:50 GMT<![CDATA[Firebase Essentials: Firestore Database Write with TypeScript - gem-firebase-firestore-write-typescript]]><![CDATA[Activate Cloud Shell
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
Click Activate Cloud Shell
at the top of the Google Cloud console.
When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session:
Your Cloud Platform project in this session is set to YOUR_PROJECT_ID
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
(Optional) You can list the active account name with this command:
gcloud auth list
Click Authorize.
Your output should now look like this:
Output:
ACTIVE: *
ACCOUNT: [email protected]
To set the active account, run:
$ gcloud config set account `ACCOUNT`
(Optional) You can list the project ID with this command:
gcloud config list project
Output:
[core]
project = <project_ID>
Example output:
[core]
project = qwiklabs-gcp-44776a13dea667a6
Note: For full documentation of gcloud, in Google Cloud, refer to the gcloud CLI overview guide.
Overview
This lab guides you through creating a Firebase Firestore database and writing data to it using a TypeScript application. You'll learn how to initialize Firebase, structure your data, and use the Firebase CLI for authentication. This eliminates the need for a custom service account.
Task 1. Adding a Firebase Project to Google Cloud
Attach a new Firebase project to your Google Cloud project by visiting the Firebase console.
Go to the Firebase Console.
https://console.firebase.google.com/
Note: Navigate to the Firebase Console in your browser.
Click Create a Firebase Project and follow the instructions to create a new project.
Note: On the Create a project page, scroll down to the bottom of the screen and click Add Firebase to Google Cloud Project.
On the following screen, enter the Google Cloud project identifier shown below.
qwiklabs-gcp-02-a5615c175bbc
Note: This project identifier is linked to a Google Cloud project. Accept the Firebase terms and conditions to create the Firebase project.
Follow the remaining instructions to create a new Firebase project.
Note: Firebase includes options for billing and analytics. These options are not used in this lab, so accept the default options to complete the creation of the Firebase project.
Task 2. Set Up Your Environment
Return to Google Cloud and use CloudShell to configure your Google Cloud project and initialize Firebase.
Set your project ID.
gcloud config set project qwiklabs-gcp-02-a5615c175bbc
Note: This command sets your active project.
Set your default region.
gcloud config set run/region us-east1
Note: This command sets your active region.
Set your default zone.
gcloud config set compute/zone us-east1-c
Note: This command sets your active zone.
Enable the necessary APIs.
gcloud services enable compute.googleapis.com container.googleapis.com iap.googleapis.com firebase.googleapis.com firebaseextensions.googleapis.com eventarc.googleapis.com pubsub.googleapis.com storage.googleapis.com run.googleapis.com
Note: This command enables the Google APIs required for this lab.
Create a Firestore database in Native mode.
gcloud firestore databases create --location=nam5 --database='(default)'
Note: This command provisions a Firestore database in the nam5 (North America) multi-region. The database must exist before you can deploy or run code that interacts with it. You can choose a different region if needed.
Task 3. Configure the Firebase Environment
Enable the Firebase environment to use for development.
Install the Firebase CLI.
npm install -g firebase-tools
Note: This command installs the Firebase CLI globally.
Create a new directory for the project.
mkdir firestore-app && cd firestore-app
Note: This command creates a folder for the lab content. This folder will contain the code and configurations generated during the lab.
Log in to Firebase using the CLI:
firebase login --no-localhost
Note: This command authenticates the Firebase CLI with your Google account.
Initialize Firebase in your project directory.
firebase init
Note: This command initializes a Firebase project in the current directory. When prompted:
Select Firestore and Functions.
For Firestore, accept the default location.
For Functions, choose TypeScript and decline ESLint.
Task 4. Write Data to Firestore
Now, write some data to your Firestore database using TypeScript. For convenience, a Firebase Cloud Function will be used to populate the Firestore database.
Replace the functions/src/index.ts file with the following code:
// functions/src/index.ts
// Import types for request and response objects
import {onRequest, Request} from "firebase-functions/v2/https";
import {Response} from "express";
import {initializeApp} from "firebase-admin/app";
import {getFirestore} from "firebase-admin/firestore";
import * as logger from "firebase-functions/logger";
initializeApp();
// Note: The 'addMessage' function name from the JS example has been preserved.
export const addMessage = onRequest({region: "us-east1"}, async (req: Request, res: Response) => {
if (req.method !== "POST") {
res.status(405).set("Allow", "POST").send({error: "Method Not Allowed! Please use POST."});
return;
}
const {text} = req.body as { text: unknown };
if (typeof text !== "string" || text.trim() === "" || text.length > 200) {
res.status(400).send({
error: "The message text must be a string and between 1 and 200 characters.",
});
return;
}
try {
const writeResult = await getFirestore()
.collection("messages")
.add({original: text});
logger.log(`Message with ID: ${writeResult.id} added.`);
res.status(200).send({message: `Message with ID: ${writeResult.id} added to Firestore.`});
} catch (error) {
logger.error("Error writing to Firestore:", error);
res.status(500).send({error: "An internal error occurred."});
}
});
Note: This code defines a Firebase Function that writes a message to the messages collection in Firestore. It uses the Firebase Admin SDK, which leverages the Firebase CLI's authentication for simplified access.
Replace the functions/package.json file with the following configuration to set the correct TypeScript engine and add the required dependencies.
{
"name": "functions",
"description": "Cloud Functions for Firebase",
"scripts": {
"lint": "eslint --ext .js,.ts .",
"build": "tsc",
"build:watch": "tsc --watch",
"serve": "npm run build && firebase emulators:start --only functions",
"shell": "npm run build && firebase functions:shell",
"start": "npm run shell",
"deploy": "firebase deploy --only functions",
"logs": "firebase functions:log"
},
"engines": {
"node": "22"
},
"main": "lib/index.js",
"dependencies": {
"firebase-admin": "^11.8.0",
"firebase-functions": "^4.3.1"
},
"devDependencies": {
"@typescript-eslint/eslint-plugin": "^5.62.0",
"@typescript-eslint/parser": "^5.62.0",
"@types/node": "^18.19.0",
"eslint": "^8.57.0",
"eslint-config-google": "^0.14.0",
"eslint-plugin-import": "^2.29.1",
"firebase-functions-test": "^3.1.0",
"typescript": "^5.4.5"
},
"private": true,
"overrides": {
"glob": "^10.3.10",
"lru-cache": "^10.2.2"
}
}
Note: Ensure the engines/node field is set to v22, the firebase-admin dependency is included, and firebase-functions is v4.6.0 or later.
Replace the functions/tsconfig.json file with the following configuration to set the correct TypeScript requirements.
{
"compilerOptions": {
"module": "commonjs",
"noImplicitReturns": true,
"noUnusedLocals": true,
"outDir": "lib",
"sourceMap": true,
"strict": true,
"target": "es2021",
"lib": [
"es2021"
],
"skipLibCheck": true
},
"compileOnSave": true,
"include": [
"src"
]
}
Note: This configuration compiles the TypeScript sources in src to JavaScript in lib, matching the main entry (lib/index.js) in package.json.
Install the dependencies.
cd functions && npm install
Note: This command installs all the necessary packages defined in your package.json file.
Perform a test build.
npm run build
Note: This command builds the TypeScript sources defined in the functions folder.
Return to the Firebase application folder.
cd ~/firestore-app
Note: This command returns to the parent folder, ready for deployment.
Deploy the function to Firebase.
firebase deploy --only functions
Note: This command deploys your Firebase Function to the cloud.
If you see an error relating to "There was an issue deploying your functions. Verify that your project has a Google App Engine instance setup at https://console.cloud.google.com/appengine and try again.", this indicates that background processes have not completed.
Please wait a couple of minutes before trying the deploy command again.
Task 5. Test the Function
Verify that your Firebase Cloud Function is writing data to Firestore correctly.
List the available Firebase Cloud Functions.
firebase functions:list
Note: This command lists the available Firebase Functions for the active project.
EXPECTED OUTPUT
Function Version Trigger Location Memory Runtime
addMessage v2 https us-east1 256 nodejs22
Get the URI for the Firebase Cloud Function.
FUNCTION_URI=$(gcloud functions describe addMessage --region us-east1 --format=json | jq -r .serviceConfig.uri)
Note: This command retrieves the addMessage function object and extracts the URI.
Call the Firebase Cloud Function using curl.
MESSAGE_TEXT="Hello from the CLI!"
curl -X POST "$FUNCTION_URI" -H "Content-Type: application/json" -d '{"text":"'"$MESSAGE_TEXT"'"}'
Note: This command invokes the addMessage function with the provided data. The function name is case-sensitive.
{"message":"Message with ID: 9GMxSOZp0yynY0I57Dav added to Firestore."}
Check the Firestore console to confirm the data has been written.
Open the Firebase console for your project. Navigate to Firestore Database, and you should see a new document in the 'messages' collection.
Note: Verify that the data has been written to Firestore.
Solution of Lab
💡
No need to do anything; wait about 5 minutes and the lab will complete automatically.
]]>https://eplus.dev/firebase-essentials-firestore-database-write-with-type-script-gem-firebase-firestore-write-typescripthttps://eplus.dev/firebase-essentials-firestore-database-write-with-type-script-gem-firebase-firestore-write-typescript<![CDATA[Firebase Essentials: Firestore Database Write with TypeScrip]]><![CDATA[David Nguyen]]>Thu, 26 Feb 2026 03:47:18 GMT<![CDATA[Firebase Essentials: Firestore Database Write with JavaScript - gem-firebase-firestore-write-javascript]]><![CDATA[Activate Cloud Shell
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
Click Activate Cloud Shell
at the top of the Google Cloud console.
When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session:
Your Cloud Platform project in this session is set to YOUR_PROJECT_ID
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
(Optional) You can list the active account name with this command:
gcloud auth list
Click Authorize.
Your output should now look like this:
Output:
ACTIVE: *
ACCOUNT: [email protected]
To set the active account, run:
$ gcloud config set account `ACCOUNT`
(Optional) You can list the project ID with this command:
gcloud config list project
Output:
[core]
project = <project_ID>
Example output:
[core]
project = qwiklabs-gcp-44776a13dea667a6
Note: For full documentation of gcloud, in Google Cloud, refer to the gcloud CLI overview guide.
Overview
This lab guides you through creating a Firebase Firestore database and writing data to it using a JavaScript application. You'll learn how to initialize Firebase, structure your data, and use the Firebase CLI for authentication. This eliminates the need for a custom service account.
Task 1. Adding a Firebase Project to Google Cloud
Attach a new Firebase project to your Google Cloud project by visiting the Firebase console.
Go to the Firebase Console.
https://console.firebase.google.com/
Note: Navigate to the Firebase Console in your browser.
Click Create a Firebase Project and follow the instructions to create a new project.
Note: On the Create a project page, scroll down to the bottom of the screen and click Add Firebase to Google Cloud Project.
On the following screen, enter the Google Cloud project identifier shown below.
qwiklabs-gcp-01-d7976e499e7a
Note: This project identifier is linked to a Google Cloud project. Accept the Firebase terms and conditions to create the Firebase project.
Follow the remaining instructions to create a new Firebase project.
Note: Firebase includes options for billing and analytics. These options are not used in this lab, so accept the default options to complete the creation of the Firebase project.
Task 2. Set Up Your Environment
Return to Google Cloud and use CloudShell to configure your Google Cloud project and initialize Firebase.
Set your project ID.
gcloud config set project qwiklabs-gcp-01-d7976e499e7a
Note: This command sets your active project.
Set your default region.
gcloud config set run/region us-east1
Note: This command sets your active region.
Set your default zone.
gcloud config set compute/zone us-east1-c
Note: This command sets your active zone.
Enable the necessary APIs.
gcloud services enable compute.googleapis.com container.googleapis.com iap.googleapis.com firebase.googleapis.com firebaseextensions.googleapis.com eventarc.googleapis.com pubsub.googleapis.com storage.googleapis.com run.googleapis.com
Note: This command enables the Google APIs required for this lab.
Create a Firestore database in Native mode.
gcloud firestore databases create --location=nam5 --database='(default)'
Note: This command provisions a Firestore database in the nam5 (North America) multi-region. The database must exist before you can deploy or run code that interacts with it. You can choose a different region if needed.
Task 3. Configure the Firebase Environment
Enable the Firebase environment to use for development.
Install the Firebase CLI.
npm install -g firebase-tools
Note: This command installs the Firebase CLI globally.
Create a new directory for the project.
mkdir firestore-app && cd firestore-app
Note: This command creates a folder for the lab content. This folder will contain the code and configurations generated during the lab.
Log in to Firebase using the CLI:
firebase login --no-localhost
Note: This command authenticates the Firebase CLI with your Google account.
Initialize Firebase in your project directory.
firebase init
Note: This command initializes a Firebase project in the current directory. When prompted:
Select Firestore and Functions.
For Firestore, accept the default location.
For Functions, choose JavaScript and decline ESLint.
Task 4. Write Data to Firestore
Now, write some data to your Firestore database using JavaScript. For convenience, a Firebase Cloud Function will be used to populate the Firestore database.
Replace the functions/index.js file with the following code:
// functions/index.js
const {initializeApp} = require("firebase-admin/app");
const {getFirestore} = require("firebase-admin/firestore");
// Import onRequest instead of onCall
const {onRequest} = require("firebase-functions/v2/https");
const {setGlobalOptions} = require("firebase-functions/v2");
initializeApp();
setGlobalOptions({ region: 'us-east1' });
// Use onRequest for a standard HTTP endpoint
exports.addMessage = onRequest(async (req, res) => {
// Check that the request method is POST
if (req.method !== 'POST') {
res.status(405).send({ error: 'Method Not Allowed! Please use POST.' });
return;
}
// Get the text from the request body directly.
// The {"data": ...} wrapper is not needed for onRequest functions.
const text = req.body.text;
// Validate the input and send back a standard HTTP error response
if (!text || text.length > 200) {
res.status(400).send({
error: 'The message text is either missing or too long (max 200 characters).',
});
return;
}
try {
const writeResult = await getFirestore()
.collection('messages')
.add({ original: text });
console.log(`Message with ID: ${writeResult.id} added.`);
// Send a success response
res.status(200).send({ message: `Message with ID: ${writeResult.id} added to Firestore.` });
} catch (error) {
console.error("Error writing to Firestore:", error);
res.status(500).send({ error: 'An internal error occurred.' });
}
});
Note: This code defines a Firebase Function that writes a message to the messages collection in Firestore. It uses the Firebase Admin SDK, which leverages the Firebase CLI's authentication for simplified access.
Replace the functions/package.json file with the following configuration to set the correct JavaScript engine and add the required dependencies.
{
  "name": "functions",
  "scripts": {
    "lint": "eslint .",
    "serve": "firebase emulators:start --only functions",
    "shell": "firebase functions:shell",
    "start": "npm run shell",
    "deploy": "firebase deploy --only functions",
    "logs": "firebase functions:log"
  },
  "engines": {
    "node": "22"
  },
  "main": "index.js",
  "dependencies": {
    "firebase-admin": "^11.8.0",
    "firebase-functions": "^4.6.0"
  },
  "devDependencies": {
    "@firebase/rules-unit-testing": "^2.0.2",
    "eslint": "^8.15.0",
    "eslint-config-google": "^0.14.0",
    "firebase-functions-test": "^3.0.0"
  },
  "private": true
}
Note: Ensure the engines.node field is set to 22, the firebase-admin dependency is included, and firebase-functions is v4.6.0 or later.
Install the dependencies.
cd functions && npm install
Note: This command installs all the necessary packages defined in your package.json file.
Return to the Firebase application folder.
cd ~/firestore-app
Note: This command returns to the parent folder, ready for deployment.
Deploy the function to Firebase.
firebase deploy --only functions
Note: This command deploys your Firebase Function to the cloud.
If you see an error such as "There was an issue deploying your functions. Verify that your project has a Google App Engine instance setup at https://console.cloud.google.com/appengine and try again.", it indicates that background processes have not yet completed.
Wait a couple of minutes before trying the deploy command again.
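If you would rather not retry by hand, the wait-and-retry advice can be sketched as a small shell loop. The function name, the three-attempt limit, and the 60-second pause are arbitrary illustrative choices, not values the lab requires:

```shell
# deploy_with_retry runs the given command up to 3 times, pausing
# PAUSE seconds between attempts while background setup finishes.
PAUSE=${PAUSE:-60}

deploy_with_retry() {
  cmd=$1
  attempt=1
  while [ "$attempt" -le 3 ]; do
    if $cmd; then
      echo "Deploy succeeded on attempt $attempt"
      return 0
    fi
    echo "Attempt $attempt failed; waiting ${PAUSE}s before retrying..."
    sleep "$PAUSE"
    attempt=$((attempt + 1))
  done
  echo "Deploy failed after 3 attempts"
  return 1
}

# In the lab you would run:
#   deploy_with_retry "firebase deploy --only functions"
```

The loop stops as soon as one attempt succeeds, so a transient App Engine setup delay only costs one or two pauses.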
Task 5. Test the Function
Verify that your Firebase Cloud Function is writing data to Firestore correctly.
List the available Firebase Cloud Functions.
firebase functions:list
Note: This command lists the available Firebase Functions for the active project.
EXPECTED OUTPUT
Function Version Trigger Location Memory Runtime
addMessage v2 https us-east1 256 nodejs22
Get the URI for the Firebase Cloud Function.
FUNCTION_URI=$(gcloud functions describe addMessage --region us-east1 --format=json | jq -r .serviceConfig.uri)
Note: This command retrieves the addMessage function object and extracts the URI.
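You can check the jq filter against a sample payload before relying on it. The JSON below is a hand-written stand-in for what gcloud functions describe returns; the URI value is made up for illustration:

```shell
# Feed a minimal mock of the describe output through the same jq filter.
MOCK_DESCRIBE='{"serviceConfig":{"uri":"https://addmessage-abc123-ue.a.run.app"}}'
FUNCTION_URI=$(echo "$MOCK_DESCRIBE" | jq -r .serviceConfig.uri)
echo "$FUNCTION_URI"   # prints https://addmessage-abc123-ue.a.run.app
```

The -r flag makes jq emit the raw string rather than a JSON-quoted one, which is what curl needs.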
Call the Firebase Cloud Function using curl.
MESSAGE_TEXT='Hello from the CLI!'
curl -X POST "$FUNCTION_URI" -H "Content-Type: application/json" -d '{"text":"'"$MESSAGE_TEXT"'"}'
Note: This command invokes the addMessage function with the provided data. The function name is case-sensitive.
EXPECTED OUTPUT
{"message":"Message with ID: 9GMxSOZp0yynY0I57Dav added to Firestore."}
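Hand-assembling JSON inside nested shell quotes is easy to get wrong. As an alternative sketch, jq -n can build the request body from the shell variable and handle the escaping (assuming jq is available, as it is in Cloud Shell):

```shell
# Let jq do the JSON escaping instead of splicing quotes by hand.
MESSAGE_TEXT='Hello from the CLI!'
PAYLOAD=$(jq -n --arg text "$MESSAGE_TEXT" '{text: $text}')
echo "$PAYLOAD"

# Messages containing double quotes are escaped correctly too:
jq -n --arg text 'He said "hi"' '{text: $text}'
```

The resulting PAYLOAD could then be passed to curl as -d "$PAYLOAD" in place of the hand-quoted string.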
Check the Firestore console to confirm the data has been written.
Open the Firebase console for your project. Navigate to Firestore Database, and you should see a new document in the 'messages' collection.
Note: Verify that the data has been written to Firestore.
Solution of Lab
💡
No need to do anything; please wait about 5 minutes and the lab will complete automatically.
]]>https://eplus.dev/firebase-essentials-firestore-database-write-with-java-script-gem-firebase-firestore-write-javascripthttps://eplus.dev/firebase-essentials-firestore-database-write-with-java-script-gem-firebase-firestore-write-javascript<![CDATA[Firebase Essentials: Firestore Database Write with JavaScript - gem-firebase-firestore-write-javascript]]><![CDATA[Firebase Essentials: Firestore Database Write with JavaScript]]><![CDATA[gem-firebase-firestore-write-javascript]]><![CDATA[Firebase Essentials]]><![CDATA[Firestore Database]]><![CDATA[David Nguyen]]>Thu, 26 Feb 2026 03:36:36 GMT<![CDATA[Respond to a Security Incident (Solution)]]><![CDATA[Overview
Labs are timed and cannot be paused. The timer starts when you click Start Lab.
The included cloud terminal is preconfigured with the gcloud SDK.
Use the terminal to execute commands and then click Check my progress to verify your work.
Challenge scenario
You're the cloud architect for a cybersecurity firm. One of your client's virtual machines (VM) in a Google Cloud VPC network (client-vpc) has been compromised by a sophisticated attacker. The attacker is attempting to pivot laterally to other VMs within the network. Your task is to:
Isolate the compromised VM: Immediately isolate the VM (compromised-vm) from the rest of the client-vpc network to prevent further lateral movement. Do this by updating the existing firewall rule called critical-fw-rule to remove all ingress access and deny the traffic instead.
Click Check my progress to verify the objective.
Update the firewall rule.
Maintain Limited Access: Allow SSH access to the compromised-vm from a specific bastion host (bastion-host) so that your incident response team can investigate the attack. Create this as a new firewall rule called allow-ssh-from-bastion.
Click Check my progress to verify the objective.
Create the firewall rule.
Log and Monitor: Enable VPC flow logs for the subnet my-subnet to capture all network traffic to and from the isolated VM for further analysis.
Click Check my progress to verify the objective.
Enable VPC flow logs for the subnet.
Solution of Lab
curl -LO https://raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/respond-to-a-security-incident-solution/lab.sh
source lab.sh
Script Alternative
gcloud compute firewall-rules delete critical-fw-rule --quiet 2>/dev/null; gcloud compute firewall-rules create critical-fw-rule --network=client-vpc --direction=INGRESS --priority=1000 --action=DENY --rules=tcp:80,tcp:22 --target-tags=compromised-vm --enable-logging
gcloud compute firewall-rules delete allow-ssh-from-bastion --quiet 2>/dev/null; gcloud compute firewall-rules create allow-ssh-from-bastion --network=client-vpc --action allow --direction=ingress --rules tcp:22 --source-ranges=$(gcloud compute instances describe bastion-host --zone=$(gcloud compute instances list --filter="name=bastion-host" --format="get(zone)") --format="get(networkInterfaces[0].accessConfigs[0].natIP)") --target-tags=compromised-vm
gcloud compute networks subnets update my-subnet --region=$(gcloud compute networks subnets list --filter="name=my-subnet" --format="get(region)") --enable-flow-logs
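The one-liners above lean on nested $( ... ) command substitution: an inner command resolves the zone or region, and its output is spliced into the outer gcloud call. The pattern can be illustrated with plain shell functions; the zone and IP values here are made up:

```shell
# Inner substitution produces a value that the outer command consumes,
# mirroring how the zone lookup feeds the instance describe call.
lookup_zone() { echo "us-east1-b"; }
describe_ip() { echo "instance in $1 has IP 203.0.113.7"; }

RESULT=$(describe_ip "$(lookup_zone)")
echo "$RESULT"   # prints: instance in us-east1-b has IP 203.0.113.7
```

Quoting each substitution, as in "$(lookup_zone)", keeps the spliced value intact even if it ever contains spaces.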
]]>https://eplus.dev/respond-to-a-security-incident-solutionhttps://eplus.dev/respond-to-a-security-incident-solution<![CDATA[Respond to a Security Incident]]><![CDATA[Respond to a Security Incident (Solution)]]><![CDATA[David Nguyen]]>Thu, 26 Feb 2026 02:48:01 GMT