

Google Cloud Certified - Professional Cloud DevOps Engineer Exam Question and Answers


Last Update Sep 17, 2025
Total Questions : 194


Questions 1

Your team has recently deployed an NGINX-based application into Google Kubernetes Engine (GKE) and has exposed it to the public via an HTTP Google Cloud Load Balancer (GCLB) ingress. You want to scale the deployment of the application's frontend using an appropriate Service Level Indicator (SLI). What should you do?

Options:

A.  

Configure the horizontal pod autoscaler to use the average response time from the Liveness and Readiness probes.

B.  

Configure the vertical pod autoscaler in GKE and enable the cluster autoscaler to scale the cluster as pods expand.

C.  

Install the Stackdriver custom metrics adapter and configure a horizontal pod autoscaler to use the number of requests provided by the GCLB.

D.  

Expose the NGINX stats endpoint and configure the horizontal pod autoscaler to use the request metrics exposed by the NGINX deployment.

Discussion 0
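Option C can be sketched as a HorizontalPodAutoscaler that scales on the request count the load balancer reports to Cloud Monitoring. This assumes the custom/external metrics adapter is already installed in the cluster; the Deployment name, replica bounds, and target value below are placeholders.

```yaml
# Sketch of option C: scale the frontend on load balancer request count.
# Assumes the custom metrics (Stackdriver) adapter is installed; names and
# numbers are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend            # placeholder Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        # request count exported by the HTTP(S) load balancer to Cloud Monitoring
        name: loadbalancing.googleapis.com|https|request_count
      target:
        type: AverageValue
        averageValue: "100"   # aim for ~100 requests per Pod
```

Request rate at the load balancer is a user-facing SLI, which is why it is a better scaling signal here than probe response times or Pod resource usage.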
Questions 2

You need to build a CI/CD pipeline for a containerized application in Google Cloud. Your development team uses a central Git repository for trunk-based development. You want to run all your tests in the pipeline for any new version of the application to improve quality. What should you do?

Options:

A.  

1. Install a Git hook to require developers to run unit tests before pushing the code to a central repository. 2. Trigger Cloud Build to build the application container, deploy the application container to a testing environment, and run integration tests. 3. If the integration tests are successful, deploy the application container to your production environment and run acceptance tests.

B.  

1. Install a Git hook to require developers to run unit tests before pushing the code to a central repository. If all tests are successful, build a container. 2. Trigger Cloud Build to deploy the application container to a testing environment, and run integration tests and acceptance tests. 3. If all tests are successful, tag the code as production ready. Trigger Cloud Build to build and deploy the application container to the production environment.

C.  

1. Trigger Cloud Build to build the application container and run unit tests with the container. 2. If the unit tests are successful, deploy the application container to a testing environment and run integration tests. 3. If the integration tests are successful, the pipeline deploys the application container to the production environment. After that, run acceptance tests.

D.  

1. Trigger Cloud Build to run unit tests when the code is pushed. If all unit tests are successful, build and push the application container to a central registry. 2. Trigger Cloud Build to deploy the container to a testing environment, and run integration tests and acceptance tests. 3. If all tests are successful, the pipeline deploys the application to the production environment and runs smoke tests.

Discussion 0
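The trigger-on-push flow described in option D can be sketched as a Cloud Build configuration. This is a minimal, hypothetical cloudbuild.yaml: the Artifact Registry repository name, image name, and npm-based test step are placeholders, not anything the question specifies.

```yaml
# Hypothetical cloudbuild.yaml for the push-triggered flow in option D.
# Repository, image name, and test runner are illustrative assumptions.
steps:
- id: unit-tests
  name: gcr.io/cloud-builders/npm
  args: ['test']                      # run unit tests first; the build stops here on failure
- id: build
  name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'us-docker.pkg.dev/$PROJECT_ID/app/app:$SHORT_SHA', '.']
- id: push
  name: gcr.io/cloud-builders/docker
  args: ['push', 'us-docker.pkg.dev/$PROJECT_ID/app/app:$SHORT_SHA']
images:
- us-docker.pkg.dev/$PROJECT_ID/app/app:$SHORT_SHA
```

A second trigger (or a Cloud Deploy pipeline) would then pick up the pushed image to run integration and acceptance tests in the testing environment.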
Questions 3

You are deploying a Cloud Build job that deploys Terraform code when a Git branch is updated. While testing, you noticed that the job fails. You see the following error in the build logs:

Initializing the backend...

Error: Failed to get existing workspaces: querying Cloud Storage failed: googleapi: Error 403

You need to resolve the issue by following Google-recommended practices. What should you do?

Options:

A.  

Change the Terraform code to use local state.

B.  

Create a storage bucket with the name specified in the Terraform configuration.

C.  

Grant the roles/owner Identity and Access Management (IAM) role to the Cloud Build service account on the project.

D.  

Grant the roles/storage.objectAdmin Identity and Access Management (IAM) role to the Cloud Build service account on the state file bucket.

Discussion 0
Questions 4

You have an application deployed to Cloud Run. A new version of the application has recently been deployed using the canary deployment strategy. Your Site Reliability Engineering (SRE) teammate informs you that an SLO has been exceeded for this application. You need to make the application healthy as quickly as possible. What should you do first?

Options:

A.  

Configure traffic splitting to send 100% of the traffic to the latest revision.

B.  

Configure traffic splitting to send 100% of the traffic to the previous revision.

C.  

Create a new revision using the last known good version of the application.

D.  

Identify the cause of the latency by using Cloud Trace.

Discussion 0
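The rollback in option B can be expressed as the traffic block of a Cloud Run service manifest. The service and revision names below are placeholders for the last known good revision; the same effect can be achieved with gcloud run services update-traffic.

```yaml
# Sketch of option B: send all traffic back to the previous, healthy revision.
# Service and revision names are placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  traffic:
  - revisionName: my-service-00041-good   # previous known-good revision
    percent: 100
```

Rolling traffic back first restores the SLO quickly; diagnosing the canary with tracing can happen afterwards without user impact.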
Questions 5

You manage a retail website for your company. The website consists of several microservices running in a GKE Standard node pool with node autoscaling enabled. Each microservice has resource limits and a Horizontal Pod Autoscaler configured. During a busy period, you receive alerts for one of the microservices. When you check the Pods, half of them have the status OOMKilled, and the number of Pods is at the minimum autoscaling limit. You need to resolve the issue. What should you do?

Options:

A.  

Increase the memory resource limit of the microservice.

B.  

Increase the maximum number of nodes in the node pool.

C.  

Increase the maximum replica limit of the Horizontal Pod Autoscaler.

D.  

Update the node pool to use a machine type with more memory.

Discussion 0
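Option A corresponds to raising the container's memory limit, since OOMKilled means the container exceeded that limit. A minimal sketch of the relevant Pod spec fragment, with illustrative names and sizes:

```yaml
# Sketch of option A: raise the memory limit of the OOMKilled microservice.
# Container name, image, and sizes are placeholders.
spec:
  containers:
  - name: checkout
    image: europe-docker.pkg.dev/example-project/shop/checkout:v1
    resources:
      requests:
        memory: "512Mi"
      limits:
        memory: "1Gi"       # raised after observing OOMKilled at the old limit
```

Adding nodes or replicas would not help here: each Pod is individually exceeding its own limit, so more Pods would simply be OOMKilled too.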
Questions 6

You are running a web application deployed to a Compute Engine managed instance group. Ops Agent is installed on all instances. You recently noticed suspicious activity from a specific IP address. You need to configure Cloud Monitoring to view the number of requests from that specific IP address with minimal operational overhead. What should you do?

Options:

A.  

Configure the Ops Agent with a logging receiver. Create a logs-based metric.

B.  

Create a script to scrape the web server log. Export the IP address request metrics to the Cloud Monitoring API.

C.  

Update the application to export the IP address request metrics to the Cloud Monitoring API

D.  

Configure the Ops Agent with a metrics receiver

Discussion 0
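The first half of option A can be sketched as an Ops Agent configuration (/etc/google-cloud-ops-agent/config.yaml) that tails the web server's access log; the log path and receiver name are assumptions. A logs-based metric filtering on the suspicious IP can then be defined over the ingested entries in Cloud Logging.

```yaml
# Sketch of option A: an Ops Agent logging receiver for the access log.
# Receiver name and log path are placeholders for this deployment.
logging:
  receivers:
    web_access:
      type: files
      include_paths:
      - /var/log/nginx/access.log
  service:
    pipelines:
      default_pipeline:
        receivers: [web_access]
```

Once the entries reach Cloud Logging, a counter logs-based metric with a filter on the IP address gives the request count in Cloud Monitoring with no custom code.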
Questions 7

Your team of Infrastructure DevOps Engineers is growing, and you are starting to use Terraform to manage infrastructure. You need a way to implement code versioning and to share code with other team members. What should you do?

Options:

A.  

Store the Terraform code in a version-control system. Establish procedures for pushing new versions and merging with the master.

B.  

Store the Terraform code in a network shared folder with child folders for each version release. Ensure that everyone works on different files.

C.  

Store the Terraform code in a Cloud Storage bucket using object versioning. Give access to the bucket to every team member so they can download the files.

D.  

Store the Terraform code in a shared Google Drive folder so it syncs automatically to every team member’s computer. Organize files with a naming convention that identifies each new version.

Discussion 0
Questions 8

You have an application that runs in Google Kubernetes Engine (GKE). The application consists of several microservices that are deployed to GKE by using Deployments and Services. One of the microservices is experiencing an issue where a Pod returns 403 errors after the Pod has been running for more than five hours. Your development team is working on a solution, but the issue will not be resolved for a month. You need to ensure continued operations until the microservice is fixed. You want to follow Google-recommended practices and use the fewest number of steps. What should you do?

Options:

A.  

Create a cron job to terminate any Pods that have been running for more than five hours

B.  

Add an HTTP liveness probe to the microservice's Deployment.

C.  

Monitor the Pods and terminate any Pods that have been running for more than five hours

D.  

Configure an alert to notify you whenever a Pod returns 403 errors

Discussion 0
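Option B can be sketched as a probe stanza in the container spec: when the endpoint starts failing, the kubelet restarts the container automatically, covering the five-hour 403 issue with no manual intervention. The path, port, and timings below are placeholders.

```yaml
# Sketch of option B: a liveness probe that restarts the container once it
# starts returning errors. Path, port, and timings are illustrative.
livenessProbe:
  httpGet:
    path: /healthz        # assumed health endpoint
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 30
  failureThreshold: 3     # restart after ~90s of consecutive failures
```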
Questions 9

You have a set of applications running on a Google Kubernetes Engine (GKE) cluster, and you are using Stackdriver Kubernetes Engine Monitoring. You are bringing a new containerized application required by your company into production. This application is written by a third party and cannot be modified or reconfigured. The application writes its log information to /var/log/app_messages.log, and you want to send these log entries to Stackdriver Logging. What should you do?

Options:

A.  

Use the default Stackdriver Kubernetes Engine Monitoring agent configuration.

B.  

Deploy a Fluentd daemonset to GKE. Then create a customized input and output configuration to tail the log file in the application's pods and write to Stackdriver Logging.

C.  

Install Kubernetes on Google Compute Engine (GCE) and redeploy your applications. Then customize the built-in Stackdriver Logging configuration to tail the log file in the application's pods and write to Stackdriver Logging.

D.  

Write a script to tail the log file within the pod and write entries to standard output. Run the script as a sidecar container with the application's pod. Configure a shared volume between the containers to allow the script to have read access to /var/log in the application container.

Discussion 0
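The sidecar pattern in option D can be sketched as a Pod with a shared emptyDir volume: the unmodifiable application writes its log file, and a tailer container streams it to stdout, where the GKE logging agent picks it up. Image and volume names are illustrative.

```yaml
# Sketch of option D: a log-tailing sidecar with a shared /var/log volume.
# Image names and the Pod name are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app
    image: example/third-party-app:1.0   # unmodifiable third-party application
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: log-tailer
    image: busybox
    # stream the file to stdout so the node logging agent forwards it
    args: [/bin/sh, -c, 'tail -n +1 -F /var/log/app_messages.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
      readOnly: true
  volumes:
  - name: varlog
    emptyDir: {}
```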
Questions 10

Your organization is using Helm to package containerized applications. Your applications reference both public and private charts. Your security team flagged that using a public Helm repository as a dependency is a risk. You want to manage all charts uniformly, with native access control and VPC Service Controls. What should you do?

Options:

A.  

Store public and private charts in OCI format by using Artifact Registry

B.  

Store public and private charts by using GitHub Enterprise with Google Workspace as the identity provider

C.  

Store public and private charts by using a Git repository. Configure Cloud Build to synchronize the contents of the repository into a Cloud Storage bucket. Connect Helm to the bucket by using https://[bucket].storage.googleapis.com/[helmchart] as the Helm repository.

D.  

Configure a Helm chart repository server to run in Google Kubernetes Engine (GKE) with Cloud Storage bucket as the storage backend

Discussion 0
Questions 11

Your team is running microservices in Google Kubernetes Engine (GKE). You want to detect consumption of an error budget to protect customers and define release policies. What should you do?

Options:

A.  

Create SLIs from metrics. Enable alert policies if the services do not pass.

B.  

Use the metrics from Anthos Service Mesh to measure the health of the microservices

C.  

Create an SLO. Create an alert policy on select_slo_burn_rate.

D.  

Create an SLO and configure uptime checks for your services. Enable alert policies if the services do not pass.

Discussion 0
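Option C can be sketched as an alert-policy file of the kind accepted by gcloud alpha monitoring policies create --policy-from-file. This is a hedged sketch: the project, service, and SLO identifiers, the lookback window, and the threshold are all placeholders.

```yaml
# Sketch of option C: alert when the error-budget burn rate is too high.
# Project/service/SLO IDs, window, and threshold are placeholders.
displayName: Error budget burn rate too high
combiner: OR
conditions:
- displayName: SLO burn rate over a 1h window
  conditionThreshold:
    filter: >
      select_slo_burn_rate("projects/my-project/services/my-service/serviceLevelObjectives/my-slo", "3600s")
    comparison: COMPARISON_GT
    thresholdValue: 10
    duration: 0s
```

Burn-rate alerting ties releases directly to the error budget: a fast burn can gate or roll back a release before the budget is exhausted.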
Questions 12

You support a multi-region web service running on Google Kubernetes Engine (GKE) behind a Global HTTPS Cloud Load Balancer (CLB). For legacy reasons, user requests first go through a third-party Content Delivery Network (CDN), which then routes traffic to the CLB. You have already implemented an availability Service Level Indicator (SLI) at the CLB level. However, you want to increase coverage in case of a potential load balancer misconfiguration, CDN failure, or other global networking catastrophe. Where should you measure this new SLI?

Choose 2 answers

Options:

A.  

Your application servers' logs

B.  

Instrumentation coded directly in the client

C.  

Metrics exported from the application servers

D.  

GKE health checks for your application servers

E.  

A synthetic client that periodically sends simulated user requests

Discussion 0
Questions 13

Your application images are built and pushed to Google Container Registry (GCR). You want to build an automated pipeline that deploys the application when the image is updated while minimizing the development effort. What should you do?

Options:

A.  

Use Cloud Build to trigger a Spinnaker pipeline.

B.  

Use Cloud Pub/Sub to trigger a Spinnaker pipeline.

C.  

Use a custom builder in Cloud Build to trigger a Jenkins pipeline.

D.  

Use Cloud Pub/Sub to trigger a custom deployment service running in Google Kubernetes Engine (GKE).

Discussion 0
Questions 14

Your company allows teams to self-manage Google Cloud projects, including project-level Identity and Access Management (IAM). You are concerned that the team responsible for the Shared VPC project might accidentally delete the project, so a lien has been placed on the project. You need to design a solution to restrict Shared VPC project deletion to those with the resourcemanager.projects.updateLiens permission at the organization level. What should you do?

Options:

A.  

Enable VPC Service Controls for the container.googleapis.com API service.

B.  

Revoke the resourcemanager.projects.updateLiens permission from all users associated with the project.

C.  

Enable the compute.restrictXpnProjectLienRemoval organization policy constraint.

D.  

Instruct teams to only perform IAM permission management as code with Terraform.

Discussion 0
Questions 15

You support a web application that runs on App Engine and uses CloudSQL and Cloud Storage for data storage. After a short spike in website traffic, you notice a big increase in latency for all user requests, increase in CPU use, and the number of processes running the application. Initial troubleshooting reveals:

After the initial spike in traffic, load levels returned to normal but users still experience high latency.

Requests for content from the CloudSQL database and images from Cloud Storage show the same high latency.

No changes were made to the website around the time the latency increased.

There is no increase in the number of errors to the users.

You expect another spike in website traffic in the coming days and want to make sure users don’t experience latency. What should you do?

Options:

A.  

Upgrade the GCS buckets to Multi-Regional.

B.  

Enable high availability on the CloudSQL instances.

C.  

Move the application from App Engine to Compute Engine.

D.  

Modify the App Engine configuration to have additional idle instances.

Discussion 0
Questions 16

You are leading a DevOps project for your organization. The DevOps team is responsible for managing the service infrastructure and being on-call for incidents. The Software Development team is responsible for writing, submitting, and reviewing code. Neither team has any published SLOs. You want to design a new joint-ownership model for a service between the DevOps team and the Software Development team. Which responsibilities should be assigned to each team in the new joint-ownership model?

Options:

A.  

Option A

B.  

Option B

C.  

Option C

D.  

Option D

Discussion 0
Questions 17

Your company is using HTTPS requests to trigger a public Cloud Run-hosted service accessible at the https://booking-engine-abcdef.a.run.app URL. You need to give developers the ability to test the latest revisions of the service before the service is exposed to customers. What should you do?

Options:

A.  

Run the gcloud run deploy booking-engine --no-traffic --tag dev command. Use the https://dev---booking-engine-abcdef.a.run.app URL for testing.

B.  

Run the gcloud run services update-traffic booking-engine --to-revisions LATEST=1 command. Use the https://booking-engine-abcdef.a.run.app URL for testing.

C.  

Pass the curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" auth token. Use the https://booking-engine-abcdef.a.run.app URL to test privately.

D.  

Grant the roles/run.invoker role to the developers testing the booking-engine service. Use the https://booking-engine-abcdef.private.run.app URL for testing.

Discussion 0
Questions 18

You are creating a CI/CD pipeline in Cloud Build to build an application container image. The application code is stored in GitHub. Your company requires that production image builds are only run against the main branch and that the change control team approves all pushes to the main branch. You want the image build to be as automated as possible. What should you do?

Choose 2 answers

Options:

A.  

Create a trigger on the Cloud Build job. Set the repository event setting to 'Pull request'.

B.  

Add the owners file to the Included files filter on the trigger

C.  

Create a trigger on the Cloud Build job. Set the repository event setting to 'Push to a branch'.

D.  

Configure a branch protection rule for the main branch on the repository

E.  

Enable the Approval option on the trigger

Discussion 0
Questions 19

You are responsible for creating and modifying the Terraform templates that define your Infrastructure. Because two new engineers will also be working on the same code, you need to define a process and adopt a tool that will prevent you from overwriting each other's code. You also want to ensure that you capture all updates in the latest version. What should you do?

Options:

A.  

• Store your code in a Git-based version control system.
• Establish a process that allows developers to merge their own changes at the end of each day.
• Package and upload code to a versioned Cloud Storage bucket as the latest master version.

B.  

• Store your code in a Git-based version control system.
• Establish a process that includes code reviews by peers and unit testing to ensure integrity and functionality before integration of code.
• Establish a process where the fully integrated code in the repository becomes the latest master version.

C.  

• Store your code as text files in Google Drive in a defined folder structure that organizes the files.
• At the end of each day, confirm that all changes have been captured in the files within the folder structure.
• Rename the folder structure with a predefined naming convention that increments the version.

D.  

• Store your code as text files in Google Drive in a defined folder structure that organizes the files.
• At the end of each day, confirm that all changes have been captured in the files within the folder structure and create a new .zip archive with a predefined naming convention.
• Upload the .zip archive to a versioned Cloud Storage bucket and accept it as the latest version.

Discussion 0
Questions 20

Your company stores a large volume of infrequently used data in Cloud Storage. The projects in your company's CustomerService folder access Cloud Storage frequently, but store very little data. You want to enable Data Access audit logging across the company to identify data usage patterns. You need to exclude the CustomerService folder projects from Data Access audit logging. What should you do?

Options:

A.  

Enable Data Access audit logging for Cloud Storage for all projects and folders, and configure exempted principals to include users of the CustomerService folder.

B.  

Enable Data Access audit logging for Cloud Storage at the organization level, with no additional configuration.

C.  

Enable Data Access audit logging for Cloud Storage at the organization level, and configure exempted principals to include users of the CustomerService folder.

D.  

Enable Data Access audit logging for Cloud Storage for all projects and folders other than the CustomerService folder.

Discussion 0
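The exempted-principals approach in option C can be sketched as the auditConfigs block of the organization-level IAM policy. The group address below is a placeholder for the principals that work in the CustomerService folder.

```yaml
# Sketch of option C: org-level Data Access audit logging for Cloud Storage
# with the CustomerService principals exempted. Group address is a placeholder.
auditConfigs:
- service: storage.googleapis.com
  auditLogConfigs:
  - logType: DATA_READ
    exemptedMembers:
    - group:customerservice-team@example.com
  - logType: DATA_WRITE
    exemptedMembers:
    - group:customerservice-team@example.com
```

Enabling at the organization level gives company-wide coverage in one place, while the exemptions keep the high-frequency CustomerService access from inflating log volume.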
Questions 21

You are reviewing your deployment pipeline in Google Cloud Deploy. You must reduce toil in the pipeline, and you want to minimize the amount of time it takes to complete an end-to-end deployment. What should you do?

Choose 2 answers

Options:

A.  

Create a trigger to notify the required team to complete the next step when manual intervention is required

B.  

Divide the automation steps into smaller tasks

C.  

Use a script to automate the creation of the deployment pipeline in Google Cloud Deploy

D.  

Add more engineers to finish the manual steps.

E.  

Automate promotion approvals from the development environment to the test environment

Discussion 0
Questions 22

Your product is currently deployed in three Google Cloud Platform (GCP) zones with your users divided between the zones. You can fail over from one zone to another, but it causes a 10-minute service disruption for the affected users. You typically experience a database failure once per quarter and can detect it within five minutes. You are cataloging the reliability risks of a new real-time chat feature for your product. You catalog the following information for each risk:

• Mean Time to Detect (MTTD) in minutes

• Mean Time to Repair (MTTR) in minutes

• Mean Time Between Failure (MTBF) in days

• User Impact Percentage

The chat feature requires a new database system that takes twice as long to successfully fail over between zones. You want to account for the risk of the new database failing in one zone. What would be the values for the risk of database failover with the new system?

Options:

A.  

MTTD: 5, MTTR: 10, MTBF: 90, Impact: 33%

B.  

MTTD: 5, MTTR: 20, MTBF: 90, Impact: 33%

C.  

MTTD: 5, MTTR: 10, MTBF: 90, Impact: 50%

D.  

MTTD: 5, MTTR: 20, MTBF: 90, Impact: 50%

Discussion 0
Questions 23

You are leading a DevOps project for your organization. The DevOps team is responsible for managing the service infrastructure and being on-call for incidents. The Software Development team is responsible for writing, submitting, and reviewing code. Neither team has any published SLOs. You want to design a new joint-ownership model for a service between the DevOps team and the Software Development team. Which responsibilities should be assigned to each team in the new joint-ownership model?

Options:

A.  

DevOps team responsibilities:
• Manage the service infrastructure
• Be on-call for incidents
• Perform code reviews
Software Development team responsibilities:
• Submit code to be reviewed by the DevOps team
• Publish the SLOs that the DevOps team must meet

B.  

DevOps team responsibilities:
• Manage the service infrastructure
• Perform code reviews
Software Development team responsibilities:
• Submit code to be reviewed by the DevOps team
• Be on-call for incidents
• Publish the SLOs that the DevOps team must meet

C.  

DevOps team responsibilities:
• Shared responsibilities for code reviews
Software Development team responsibilities:
• Manage the service infrastructure
• Be on-call for incidents on a rotation basis
• Adopt and publish SLOs for the service
• Submit code to be reviewed

D.  

DevOps team responsibilities:
• Manage the service infrastructure
• Be on-call for incidents
Software Development team responsibilities:
• Adopt and publish SLOs for the service
• Submit code to be reviewed
• Shared responsibilities for code reviews

Discussion 0
Questions 24

Your company's security team needs to have read-only access to Data Access audit logs in the _Required bucket. You want to provide your security team with the necessary permissions following the principle of least privilege and Google-recommended practices. What should you do?

Options:

A.  

Assign the roles/logging.viewer role to each member of the security team.

B.  

Assign the roles/logging.viewer role to a group with all the security team members.

C.  

Assign the roles/logging.privateLogViewer role to each member of the security team

D.  

Assign the roles/logging.privateLogViewer role to a group with all the security team members.

Discussion 0
Questions 25

You are running an application on Compute Engine and collecting logs through Stackdriver. You discover that some personally identifiable information (PII) is leaking into certain log entry fields. You want to prevent these fields from being written in new log entries as quickly as possible. What should you do?

Options:

A.  

Use the filter-record-transformer Fluentd filter plugin to remove the fields from the log entries in flight.

B.  

Use the fluent-plugin-record-reformer Fluentd output plugin to remove the fields from the log entries in flight.

C.  

Wait for the application developers to patch the application, and then verify that the log entries are no longer exposing PII.

D.  

Stage log entries to Cloud Storage, and then trigger a Cloud Function to remove the fields and write the entries to Stackdriver via the Stackdriver Logging API.

Discussion 0
Questions 26

You are managing an application that exposes an HTTP endpoint without using a load balancer. The latency of the HTTP responses is important for the user experience. You want to understand what HTTP latencies all of your users are experiencing. You use Stackdriver Monitoring. What should you do?

Options:

A.  

• In your application, create a metric with a metricKind set to DELTA and a valueType set to DOUBLE.
• In Stackdriver's Metrics Explorer, use a Stacked Bar graph to visualize the metric.

B.  

• In your application, create a metric with a metricKind set to CUMULATIVE and a valueType set to DOUBLE.
• In Stackdriver's Metrics Explorer, use a Line graph to visualize the metric.

C.  

• In your application, create a metric with a metricKind set to GAUGE and a valueType set to DISTRIBUTION.
• In Stackdriver's Metrics Explorer, use a Heatmap graph to visualize the metric.

D.  

• In your application, create a metric with a metricKind set to METRIC_KIND_UNSPECIFIED and a valueType set to INT64.
• In Stackdriver's Metrics Explorer, use a Stacked Area graph to visualize the metric.

Discussion 0
Questions 27

You are configuring your CI/CD pipeline natively on Google Cloud. You want builds in a pre-production Google Kubernetes Engine (GKE) environment to be automatically load-tested before being promoted to the production GKE environment. You need to ensure that only builds that have passed this test are deployed to production. You want to follow Google-recommended practices. How should you configure this pipeline with Binary Authorization?

Options:

A.  

Create an attestation for the builds that pass the load test by requiring the lead quality assurance engineer to sign the attestation by using a key stored in Cloud Key Management Service (Cloud KMS).

B.  

Create an attestation for the builds that pass the load test by using a private key stored in Cloud Key Management Service (Cloud KMS) authenticated through Workload Identity.

C.  

Create an attestation for the builds that pass the load test by using a private key stored in Cloud Key Management Service (Cloud KMS) with a service account JSON key stored as a Kubernetes Secret.

D.  

Create an attestation for the builds that pass the load test by requiring the lead quality assurance engineer to sign the attestation by using their personal private key.

Discussion 0
Questions 28

You are managing the production deployment to a set of Google Kubernetes Engine (GKE) clusters. You want to make sure only images which are successfully built by your trusted CI/CD pipeline are deployed to production. What should you do?

Options:

A.  

Enable Cloud Security Scanner on the clusters.

B.  

Enable Vulnerability Analysis on the Container Registry.

C.  

Set up the Kubernetes Engine clusters as private clusters.

D.  

Set up the Kubernetes Engine clusters with Binary Authorization.

Discussion 0
Questions 29

Your company runs an ecommerce website built with JVM-based applications and a microservice architecture in Google Kubernetes Engine (GKE). The application load increases during the day and decreases during the night. Your operations team has configured the application to run enough Pods to handle the evening peak load. You want to automate scaling by only running enough Pods and nodes for the load. What should you do?

Options:

A.  

Configure the Vertical Pod Autoscaler but keep the node pool size static

B.  

Configure the Vertical Pod Autoscaler and enable the cluster autoscaler

C.  

Configure the Horizontal Pod Autoscaler but keep the node pool size static

D.  

Configure the Horizontal Pod Autoscaler and enable the cluster autoscaler

Discussion 0
Questions 30

You use Google Cloud Managed Service for Prometheus with managed collection to gather metrics from your service running on Google Kubernetes Engine (GKE). After deploying the service, there is no metric data appearing in Cloud Monitoring, and you have not encountered any error messages. You need to troubleshoot this issue. What should you do?

Options:

A.  

Determine if your service has exceeded its quota for writes to the Cloud Monitoring API.

B.  

Check if the Grafana service is installed on your GKE cluster.

C.  

Confirm that your service has the monitoring.servicesViewer IAM role.

D.  

Verify that your PodMonitoring configuration references a valid port.

Discussion 0
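Option D refers to the PodMonitoring resource used by managed collection; silently missing data is commonly a selector or port that doesn't match the Pod spec. A minimal sketch, with placeholder names:

```yaml
# Sketch of option D: a PodMonitoring resource for managed collection.
# The selector label and port must match the Pod spec; names are placeholders.
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: my-service-monitoring
spec:
  selector:
    matchLabels:
      app: my-service
  endpoints:
  - port: metrics       # must match a declared containerPort name or number
    interval: 30s
```

If the port does not resolve to a real containerPort, the collector scrapes nothing and reports no error, matching the symptom in the question.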
Questions 31

You are troubleshooting a failed deployment in your CI/CD pipeline. The deployment logs indicate that the application container failed to start due to a missing environment variable. You need to identify the root cause and implement a solution within your CI/CD workflow to prevent this issue from recurring. What should you do?

Options:

A.  

Run integration tests in the CI pipeline.

B.  

Implement static code analysis in the CI pipeline.

C.  

Use a canary deployment strategy.

D.  

Enable Cloud Audit Logs for the deployment.

Discussion 0
Questions 32

Your company runs services by using Google Kubernetes Engine (GKE). The GKE clusters in the development environment run applications with verbose logging enabled. Developers view logs by using the kubectl logs command and do not use Cloud Logging. Applications do not have a uniform logging structure defined. You need to minimize the costs associated with application logging while still collecting GKE operational logs. What should you do?

Options:

A.  

Run the gcloud container clusters update --logging=SYSTEM command for the development cluster.

B.  

Run the gcloud container clusters update --logging=WORKLOAD command for the development cluster.

C.  

Run the gcloud logging sinks update _Default --disabled command in the project associated with the development environment.

D.  

Add the severity>=DEBUG AND resource.type="k8s_container" exclusion filter to the _Default logging sink in the project associated with the development environment.

Discussion 0
Questions 33

You support an application running on App Engine. The application is used globally and accessed from various device types. You want to know the number of connections. You are using Stackdriver Monitoring for App Engine. What metric should you use?

Options:

A.  

flex/connections/current

B.  

tcp_ssl_proxy/new_connections

C.  

tcp_ssl_proxy/open_connections

D.  

flex/instance/connections/current

Discussion 0
Questions 34

Some of your production services are running in Google Kubernetes Engine (GKE) in the eu-west-1 region. Your build system runs in the us-west-1 region. You want to push the container images from your build system to a scalable registry to maximize the bandwidth for transferring the images to the cluster. What should you do?

Options:

A.  

Push the images to Google Container Registry (GCR) using the gcr.io hostname.

B.  

Push the images to Google Container Registry (GCR) using the us.gcr.io hostname.

C.  

Push the images to Google Container Registry (GCR) using the eu.gcr.io hostname.

D.  

Push the images to a private image registry running on a Compute Engine instance in the eu-west-1 region.

Discussion 0
Questions 35

You created a Stackdriver chart for CPU utilization in a dashboard within your workspace project. You want to share the chart with your Site Reliability Engineering (SRE) team only. You want to ensure you follow the principle of least privilege. What should you do?

Options:

A.  

Share the workspace Project ID with the SRE team. Assign the SRE team the Monitoring Viewer IAM role in the workspace project.

B.  

Share the workspace Project ID with the SRE team. Assign the SRE team the Dashboard Viewer IAM role in the workspace project.

C.  

Click "Share chart by URL" and provide the URL to the SRE team. Assign the SRE team the Monitoring Viewer IAM role in the workspace project.

D.  

Click "Share chart by URL" and provide the URL to the SRE team. Assign the SRE team the Dashboard Viewer IAM role in the workspace project.

Discussion 0
Questions 36

You are using Terraform to manage infrastructure as code within a CI/CD pipeline. You notice that multiple copies of the entire infrastructure stack exist in your Google Cloud project, and a new copy is created each time a change to the existing infrastructure is made. You need to optimize your cloud spend by ensuring that only a single instance of your infrastructure stack exists at a time. You want to follow Google-recommended practices. What should you do?

Options:

A.  

Create a new pipeline to delete old infrastructure stacks when they are no longer needed.

B.  

Confirm that the pipeline is storing and retrieving the terraform.tfstate file from Cloud Storage with the Terraform gcs backend.

C.  

Verify that the pipeline is storing and retrieving the terraform.tfstate file from source control.

D.  

Update the pipeline to remove any existing infrastructure before you apply the latest configuration.
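To illustrate the remote-state idea in option B: with a GCS backend, every pipeline run reads and updates the same terraform.tfstate, so terraform apply changes the existing stack instead of creating a new copy. A minimal sketch; the bucket name and prefix are placeholders:

```shell
# Write a Terraform backend block that keeps state in Cloud Storage.
# Successive pipeline runs then share a single canonical state file.
cat > backend.tf <<'EOF'
terraform {
  backend "gcs" {
    bucket = "my-tf-state-bucket"  # placeholder bucket name
    prefix = "env/dev"
  }
}
EOF
cat backend.tf
```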

Discussion 0
Questions 37

You are configuring the frontend tier of an application deployed in Google Cloud. The frontend tier is hosted in nginx and deployed using a managed instance group with an Envoy-based external HTTP(S) load balancer in front. The application is deployed entirely within the europe-west2 region and only serves users based in the United Kingdom. You need to choose the most cost-effective network tier and load balancing configuration. What should you use?

Options:

A.  

Premium Tier with a global load balancer

B.  

Premium Tier with a regional load balancer

C.  

Standard Tier with a global load balancer

D.  

Standard Tier with a regional load balancer

Discussion 0
Questions 38

You are developing a Node.js utility on a workstation in Cloud Workstations by using Code OSS. The utility is a simple web page, and you have already confirmed that all necessary firewall rules are in place. You tested the application by starting it on port 3000 on your workstation in Cloud Workstations, but you need to be able to access the web page from your local machine. You need to follow Google-recommended security practices. What should you do?

Options:

A.  

Allow public IP addresses in the Cloud Workstations configuration.

B.  

Use a browser running on a bastion host VM.

C.  

Run the gcloud compute start-iap-tunnel command to the Cloud Workstations VM.

D.  

Click the preview link in the Code OSS panel.

Discussion 0
Questions 39

You need to deploy a new service to production. The service needs to automatically scale using a Managed Instance Group (MIG) and should be deployed over multiple regions. The service needs a large number of resources for each instance and you need to plan for capacity. What should you do?

Options:

A.  

Use the n2-highcpu-96 machine type in the configuration of the MIG.

B.  

Monitor results of Stackdriver Trace to determine the required amount of resources.

C.  

Validate that the resource requirements are within the available quota limits of each region.

D.  

Deploy the service in one region and use a global load balancer to route traffic to this region.

Discussion 0
Questions 40

Your company experiences bugs, outages, and slowness in its production systems. Developers use the production environment for new feature development and bug fixes. Configuration and experiments are done in the production environment, causing outages for users. Testers use the production environment for load testing, which often slows the production systems. You need to redesign the environment to reduce the number of bugs and outages in production and to enable testers to load test new features. What should you do?

Options:

A.  

Create an automated testing script in production to detect failures as soon as they occur.

B.  

Create a development environment with smaller server capacity and give access only to developers and testers.

C.  

Secure the production environment to ensure that developers can't change it and set up one controlled update per year.

D.  

Create a development environment for writing code and a test environment for configurations, experiments, and load testing.

Discussion 0
Questions 41

Your company has recently experienced several production service issues. You need to create a Cloud Monitoring dashboard to troubleshoot the issues, and you want to use the dashboard to distinguish between failures in your own service and those caused by a Google Cloud service that you use. What should you do?

Options:

A.  

Enable Personalized Service Health annotations on the dashboard.

B.  

Create an alerting policy for the system error metrics.

C.  

Create a log-based metric to track cloud service errors, and display the metric on the dashboard.

D.  

Create a logs widget to display system errors from Cloud Logging on the dashboard.

Discussion 0
Questions 42

You are using Stackdriver to monitor applications hosted on Google Cloud Platform (GCP). You recently deployed a new application, but its logs are not appearing on the Stackdriver dashboard.

You need to troubleshoot the issue. What should you do?

Options:

A.  

Confirm that the Stackdriver agent has been installed in the hosting virtual machine.

B.  

Confirm that your account has the proper permissions to use the Stackdriver dashboard.

C.  

Confirm that port 25 has been opened in the firewall to allow messages through to Stackdriver.

D.  

Confirm that the application is using the required client library and the service account key has proper permissions.

Discussion 0
Questions 43

As a Site Reliability Engineer, you support an application written in Go that runs on Google Kubernetes Engine (GKE) in production. After releasing a new version of the application, you notice the application runs for about 15 minutes and then restarts. You decide to add Cloud Profiler to your application and now notice that the heap usage grows constantly until the application restarts. What should you do?

Options:

A.  

Add high memory compute nodes to the cluster.

B.  

Increase the memory limit in the application deployment.

C.  

Add Cloud Trace to the application, and redeploy.

D.  

Increase the CPU limit in the application deployment.

Discussion 0
Questions 44

Your organization uses a change advisory board (CAB) to approve all changes to an existing service. You want to revise this process to eliminate any negative impact on the software delivery performance. What should you do?

Choose 2 answers

Options:

A.  

Replace the CAB with a senior manager to ensure continuous oversight from development to deployment.

B.  

Let developers merge their own changes, but ensure that the team's deployment platform can roll back changes if any issues are discovered.

C.  

Move to a peer-review based process for individual changes that is enforced at code check-in time and supported by automated tests.

D.  

Batch changes into larger but less frequent software releases.

E.  

Ensure that the team's development platform enables developers to get fast feedback on the impact of their changes.

Discussion 0
Questions 45

You are building and deploying a microservice on Cloud Run for your organization. Your service is used by many applications internally. You are deploying a new release, and you need to test the new version extensively in the staging and production environments. You must minimize user and developer impact. What should you do?

Options:

A.  

Deploy the new version of the service to the staging environment. Split the traffic, and allow 1% of traffic through to the latest version. Test the latest version. If the test passes, gradually roll out the latest version to the staging and production environments.

B.  

Deploy the new version of the service to the staging environment. Split the traffic, and allow 50% of traffic through to the latest version. Test the latest version. If the test passes, send all traffic to the latest version. Repeat for the production environment.

C.  

Deploy the new version of the service to the staging environment with a new-release tag without serving traffic. Test the new-release version. If the test passes, gradually roll out this tagged version. Repeat for the production environment.

D.  

Deploy a new environment with the green tag to use as the staging environment. Deploy the new version of the service to the green environment and test the new version. If the tests pass, send all traffic to the green environment and delete the existing staging environment. Repeat for the production environment.

Discussion 0
Questions 46

You are running a web application that connects to an AlloyDB cluster by using a private IP address in your default VPC. You need to run a database schema migration in your CI/CD pipeline by using Cloud Build before deploying a new version of your application. You want to follow Google-recommended security practices. What should you do?  

Options:

A.  

Set up a Cloud Build private pool to access the database through a static external IP address. Configure the database to only allow connections from this IP address. Execute the schema migration script in the private pool.

B.  

Create a service account that has permission to access the database. Configure Cloud Build to use this service account and execute the schema migration script in a private pool.

C.  

Add the database username and encrypted password to the application configuration file. Use these credentials in Cloud Build to execute the schema migration script.

D.  

Add the database username and password to Secret Manager. When running the schema migration script, retrieve the username and password from Secret Manager.
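A sketch of the Secret Manager pattern from option D, where credentials are fetched at runtime instead of being stored in configuration files; the secret names and the migration script are placeholder assumptions:

```shell
# Fetch placeholder secrets from Secret Manager at runtime, then pass them
# to a hypothetical schema migration script.
DB_USER=$(gcloud secrets versions access latest --secret=db-username)
DB_PASS=$(gcloud secrets versions access latest --secret=db-password)
./run-schema-migration.sh "$DB_USER" "$DB_PASS"
```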

Discussion 0
Questions 47

You work for a global organization and are running a monolithic application on Compute Engine. You need to select the machine type for the application to use that optimizes CPU utilization by using the fewest number of steps. You want to use historical system metrics to identify the machine type for the application to use. You want to follow Google-recommended practices. What should you do?

Options:

A.  

Use the Recommender API and apply the suggested recommendations.

B.  

Create an Agent Policy to automatically install Ops Agent in all VMs.

C.  

Install the Ops Agent in a fleet of VMs by using the gcloud CLI.

D.  

Review the Cloud Monitoring dashboard for the VM and choose the machine type with the lowest CPU utilization.

Discussion 0
Questions 48

Your organization is running multiple Google Kubernetes Engine (GKE) clusters in a project. You need to design a highly-available solution to collect and query both domain-specific workload metrics and GKE default metrics across all clusters, while minimizing operational overhead. What should you do?

Options:

A.  

Use Prometheus Operator to install Prometheus in every cluster and scrape the metrics. Ensure that a Thanos sidecar is enabled on every Prometheus instance. Configure Thanos in the central cluster. Query the central Thanos instance.

B.  

Use Prometheus Operator to install Prometheus in every cluster and scrape the metrics. Configure remote-write to one central Prometheus. Query the central Prometheus instance.

C.  

Enable managed collection on every GKE cluster. Query the metrics in Cloud Monitoring.

D.  

Enable managed collection on every GKE cluster. Query the metrics in BigQuery.
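For reference, enabling managed collection (Google Cloud Managed Service for Prometheus) on an existing cluster, as in options C and D, is a single flag; the cluster name and location are placeholders:

```shell
# Enable managed Prometheus collection on an existing GKE cluster.
gcloud container clusters update my-cluster \
    --location us-central1 \
    --enable-managed-prometheus
```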

Discussion 0
Questions 49

You use Spinnaker to deploy your application and have created a canary deployment stage in the pipeline. Your application has an in-memory cache that loads objects at start time. You want to automate the comparison of the canary version against the production version. How should you configure the canary analysis?

Options:

A.  

Compare the canary with a new deployment of the current production version.

B.  

Compare the canary with a new deployment of the previous production version.

C.  

Compare the canary with the existing deployment of the current production version.

D.  

Compare the canary with the average performance of a sliding window of previous production versions.

Discussion 0
Questions 50

Your company processes IoT data at scale by using Pub/Sub, App Engine standard environment, and an application written in Go. You noticed that the performance inconsistently degrades at peak load. You could not reproduce this issue on your workstation. You need to continuously monitor the application in production to identify slow paths in the code. You want to minimize performance impact and management overhead. What should you do?

Options:

A.  

Install a continuous profiling tool into Compute Engine. Configure the application to send profiling data to the tool.

B.  

Periodically run the go tool pprof command against the application instance. Analyze the results by using flame graphs.

C.  

Configure Cloud Profiler, and initialize the cloud.google.com/go/profiler library in the application.

D.  

Use Cloud Monitoring to assess the App Engine CPU utilization metric.

Discussion 0
Questions 51

Your Cloud Run application writes unstructured logs as text strings to Cloud Logging. You want to convert the unstructured logs to JSON-based structured logs. What should you do?

Options:

A.  

Install a Fluent Bit sidecar container, and use a JSON parser.

B.  

Install the log agent in the Cloud Run container image, and use the log agent to forward logs to Cloud Logging.

C.  

Configure the log agent to convert log text payload to JSON payload.

D.  

Modify the application to use the Cloud Logging software development kit (SDK), and send log entries with a jsonPayload field.
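As background for option D, Cloud Run also treats a one-line JSON object written to stdout as a structured entry: the fields land in jsonPayload, and a severity field sets the entry's severity. A minimal sketch with illustrative field values:

```shell
# On Cloud Run, Cloud Logging parses single-line JSON on stdout into jsonPayload.
echo '{"severity":"INFO","message":"user signup","userId":"1234"}'
```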

Discussion 0
Questions 52

You manage an application that runs in Google Kubernetes Engine (GKE) and uses the blue/green deployment methodology. Extracts of the Kubernetes manifests are shown below:

The Deployment app-green was updated to use the new version of the application. During post-deployment monitoring, you notice that the majority of user requests are failing. You did not observe this behavior in the testing environment. You need to mitigate the incident impact on users and enable the developers to troubleshoot the issue. What should you do?

Options:

A.  

Update the Deployment app-blue to use the new version of the application.

B.  

Update the Deployment app-green to use the previous version of the application.

C.  

Change the selector on the Service app-svc to app: my-app.

D.  

Change the selector on the Service app-svc to app: my-app, version: blue.

Discussion 0
Questions 53

You are writing a postmortem for an incident that severely affected users. You want to prevent similar incidents in the future. Which two of the following sections should you include in the postmortem? (Choose two.)

Options:

A.  

An explanation of the root cause of the incident

B.  

A list of employees responsible for causing the incident

C.  

A list of action items to prevent a recurrence of the incident

D.  

Your opinion of the incident’s severity compared to past incidents

E.  

Copies of the design documents for all the services impacted by the incident

Discussion 0
Questions 54

Your organization is starting to containerize with Google Cloud. You need a fully managed storage solution for container images and Helm charts. You need to identify a storage solution that has native integration into existing Google Cloud services, including Google Kubernetes Engine (GKE), Cloud Run, VPC Service Controls, and Identity and Access Management (IAM). What should you do?

Options:

A.  

Use Docker to configure a Cloud Storage driver pointed at the bucket owned by your organization.

B.  

Configure Container Registry as an OCI-based container registry for container images.

C.  

Configure Artifact Registry as an OCI-based container registry for both Helm charts and container images.

D.  

Configure an open source container registry server to run in GKE with a restrictive role-based access control (RBAC) configuration.

Discussion 0
Questions 55

You are configuring connectivity across Google Kubernetes Engine (GKE) clusters in different VPCs. You notice that the nodes in Cluster A are unable to access the nodes in Cluster B. You suspect that the workload access issue is due to the network configuration. You need to troubleshoot the issue but do not have execute access to workloads and nodes. You want to identify the layer at which the network connectivity is broken. What should you do?

Options:

A.  

Install a toolbox container on the node in Cluster A. Confirm that the routes to Cluster B are configured appropriately.

B.  

Use Network Connectivity Center to perform a Connectivity Test from Cluster A to Cluster B.

C.  

Use a debug container to run the traceroute command from Cluster A to Cluster B and from Cluster B to Cluster A. Identify the common failure point.

D.  

Enable VPC Flow Logs in both VPCs and monitor packet drops.

Discussion 0
Questions 56

Your company operates in a highly regulated domain that requires you to store all organization logs for seven years. You want to minimize logging infrastructure complexity by using managed services. You need to avoid any future loss of log capture or stored logs due to misconfiguration or human error. What should you do?

Options:

A.  

Use Cloud Logging to configure an aggregated sink at the organization level to export all logs into a BigQuery dataset.

B.  

Use Cloud Logging to configure an aggregated sink at the organization level to export all logs into Cloud Storage with a seven-year retention policy and Bucket Lock.

C.  

Use Cloud Logging to configure an export sink at each project level to export all logs into a BigQuery dataset.

D.  

Use Cloud Logging to configure an export sink at each project level to export all logs into Cloud Storage with a seven-year retention policy and Bucket Lock.
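A hedged sketch of the pieces option B combines; the bucket name, organization ID, and sink name are placeholders, and a retention lock is irreversible once applied:

```shell
# Create a log archive bucket with a locked seven-year retention policy.
gsutil mb -l europe-west2 gs://org-log-archive-bucket
gsutil retention set 7y gs://org-log-archive-bucket
gsutil retention lock gs://org-log-archive-bucket  # cannot be removed once locked

# Create an organization-level aggregated sink that captures child project logs.
gcloud logging sinks create org-archive-sink \
    storage.googleapis.com/org-log-archive-bucket \
    --organization=123456789012 --include-children
```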

Discussion 0
Questions 57

Your team uses Cloud Build for all CI/CD pipelines. You want to use the kubectl builder for Cloud Build to deploy new images to Google Kubernetes Engine (GKE). You need to authenticate to GKE while minimizing development effort. What should you do?

Options:

A.  

Assign the Container Developer role to the Cloud Build service account.

B.  

Specify the Container Developer role for Cloud Build in the cloudbuild.yaml file.

C.  

Create a new service account with the Container Developer role and use it to run Cloud Build.

D.  

Create a separate step in Cloud Build to retrieve service account credentials and pass these to kubectl.
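With option A, the kubectl builder authenticates on its own when the build's service account holds the Container Developer role and the cluster is identified through environment variables. A sketch of the pipeline file; the cluster, zone, deployment, and image names are placeholders:

```shell
# Write a minimal cloudbuild.yaml that deploys with the kubectl builder.
cat > cloudbuild.yaml <<'EOF'
steps:
  - name: gcr.io/cloud-builders/kubectl
    args: ['set', 'image', 'deployment/my-app', 'my-app=gcr.io/$PROJECT_ID/my-app:$SHORT_SHA']
    env:
      - CLOUDSDK_COMPUTE_ZONE=europe-west2-a
      - CLOUDSDK_CONTAINER_CLUSTER=my-cluster
EOF
cat cloudbuild.yaml
```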

Discussion 0
Questions 58

You are configuring a CI pipeline. The build step for your CI pipeline integration testing requires access to APIs inside your private VPC network. Your security team requires that you do not expose API traffic publicly. You need to implement a solution that minimizes management overhead. What should you do?

Options:

A.  

Use Cloud Build private pools to connect to the private VPC.

B.  

Use Spinnaker for Google Cloud to connect to the private VPC.

C.  

Use Cloud Build as a pipeline runner. Configure Internal HTTP(S) Load Balancing for API access.

D.  

Use Cloud Build as a pipeline runner. Configure External HTTP(S) Load Balancing with a Google Cloud Armor policy for API access.

Discussion 0