Google Certified Professional - Cloud Developer Question and Answers

Last Update: Feb 14, 2025
Total Questions: 265

Questions 1

You want to create “fully baked” or “golden” Compute Engine images for your application. You need to bootstrap your application to connect to the appropriate database according to the environment the application is running on (test, staging, production). What should you do?

Options:

A.  

Embed the appropriate database connection string in the image. Create a different image for each environment.

B.  

When creating the Compute Engine instance, add a tag with the name of the database to be connected. In your application, query the Compute Engine API to pull the tags for the current instance, and use the tag to construct the appropriate database connection string.

C.  

When creating the Compute Engine instance, create a metadata item with a key of “DATABASE” and a value for the appropriate database connection string. In your application, read the “DATABASE” environment variable, and use the value to connect to the appropriate database.

D.  

When creating the Compute Engine instance, create a metadata item with a key of “DATABASE” and a value for the appropriate database connection string. In your application, query the metadata server for the “DATABASE” value, and use the value to connect to the appropriate database.

Discussion 0
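
If the metadata approach described in option D were adopted, the lookup is a plain HTTP GET against the Compute Engine metadata server. A minimal sketch using only the Python standard library (the "DATABASE" key comes from the question; the endpoint and header are the documented metadata-server conventions):

    import urllib.request

    METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1"
                    "/instance/attributes/DATABASE")

    def get_database_connection_string() -> str:
        # Custom metadata values are only served when the
        # Metadata-Flavor header is present.
        request = urllib.request.Request(
            METADATA_URL, headers={"Metadata-Flavor": "Google"})
        with urllib.request.urlopen(request) as response:
            return response.read().decode("utf-8")
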
Questions 2

Your company wants to expand their users outside the United States for their popular application. The company wants to ensure 99.999% availability of the database for their application and also wants to minimize the read latency for their users across the globe.

Which two actions should they take? (Choose two.)

Options:

A.  

Create a multi-regional Cloud Spanner instance with "nam-asia-eur1" configuration.

B.  

Create a multi-regional Cloud Spanner instance with "nam3" configuration.

C.  

Create a cluster with at least 3 Spanner nodes.

D.  

Create a cluster with at least 1 Spanner node.

E.  

Create a minimum of two Cloud Spanner instances in separate regions with at least one node.

F.  

Create a Cloud Dataflow pipeline to replicate data across different databases.

Discussion 0
Questions 3

You work for a web development team at a small startup. Your team is developing a Node.js application using Google Cloud services, including Cloud Storage and Cloud Build. The team uses a Git repository for version control. Your manager calls you over the weekend and instructs you to make an emergency update to one of the company’s websites, and you’re the only developer available. You need to access Google Cloud to make the update, but you don’t have your work laptop. You are not allowed to store source code locally on a non-corporate computer. How should you set up your developer environment?

Options:

A.  

Use a text editor and the Git command line to send your source code updates as pull requests from a public computer.

B.  

Use a text editor and the Git command line to send your source code updates as pull requests from a virtual machine running on a public computer.

C.  

Use Cloud Shell and the built-in code editor for development. Send your source code updates as pull requests.

D.  

Use a Cloud Storage bucket to store the source code that you need to edit. Mount the bucket to a public computer as a drive, and use a code editor to update the code. Turn on versioning for the bucket, and point it to the team’s Git repository.

Discussion 0
Questions 4

You are tasked with using C++ to build and deploy a microservice for an application hosted on Google Cloud. The code needs to be containerized and use several custom software libraries that your team has built. You do not want to maintain the underlying infrastructure of the application. How should you deploy the microservice?

Options:

A.  

Use Cloud Functions to deploy the microservice.

B.  

Use Cloud Build to create the container, and deploy it on Cloud Run.

C.  

Use Cloud Shell to containerize your microservice, and deploy it on GKE Standard.

D.  

Use Cloud Shell to containerize your microservice, and deploy it on a Container-Optimized OS Compute Engine instance.

Discussion 0
Questions 5

Your application is built as a custom machine image. You have multiple unique deployments of the machine image. Each deployment is a separate managed instance group with its own template. Each deployment requires a unique set of configuration values. You want to provide these unique values to each deployment but use the same custom machine image in all deployments. You want to use out-of-the-box features of Compute Engine. What should you do?

Options:

A.  

Place the unique configuration values in the persistent disk.

B.  

Place the unique configuration values in a Cloud Bigtable table.

C.  

Place the unique configuration values in the instance template startup script.

D.  

Place the unique configuration values in the instance template instance metadata.

Discussion 0
Questions 6

Your team is creating a serverless web application on Cloud Run. The application needs to access images stored in a private Cloud Storage bucket. You want to give the application Identity and Access Management (IAM) permission to access the images in the bucket, while also securing the services using Google-recommended best practices. What should you do?

Options:

A.  

Enforce signed URLs for the desired bucket. Grant the Storage Object Viewer IAM role on the bucket to the Compute Engine default service account.

B.  

Enforce public access prevention for the desired bucket. Grant the Storage Object Viewer IAM role on the bucket to the Compute Engine default service account.

C.  

Enforce signed URLs for the desired bucket. Create and update the Cloud Run service to use a user-managed service account. Grant the Storage Object Viewer IAM role on the bucket to the service account.

D.  

Enforce public access prevention for the desired bucket. Create and update the Cloud Run service to use a user-managed service account. Grant the Storage Object Viewer IAM role on the bucket to the service account.

Discussion 0
Questions 7

Your service adds text to images that it reads from Cloud Storage. During busy times of the year, requests to Cloud Storage fail with an HTTP 429 "Too Many Requests" status code.

How should you handle this error?

Options:

A.  

Add a cache-control header to the objects.

B.  

Request a quota increase from the GCP Console.

C.  

Retry the request with a truncated exponential backoff strategy.

D.  

Change the storage class of the Cloud Storage bucket to Multi-regional.

Discussion 0
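
Truncated exponential backoff (option C) retries with exponentially growing delays that are capped at a maximum and randomized with jitter. A minimal sketch, assuming the wrapped callable raises google.api_core.exceptions.TooManyRequests on HTTP 429, as the Cloud Storage client library does:

    import random
    import time

    from google.api_core.exceptions import TooManyRequests

    def with_backoff(operation, max_retries=5, max_delay=32.0):
        # Retry `operation` using truncated exponential backoff with jitter.
        for attempt in range(max_retries):
            try:
                return operation()
            except TooManyRequests:
                # Delay grows as 2^attempt seconds, truncated at max_delay,
                # plus up to one second of random jitter.
                time.sleep(min(2 ** attempt, max_delay) + random.random())
        return operation()  # final attempt; let any error propagate
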
Questions 8

You are developing an application that will allow users to read and post comments on news articles. You want to configure your application to store and display user-submitted comments using Firestore. How should you design the schema to support an unknown number of comments and articles?

Options:

A.  

Store each comment in a subcollection of the article.

B.  

Add each comment to an array property on the article.

C.  

Store each comment in a document, and add the comment’s key to an array property on the article.

D.  

Store each comment in a document, and add the comment’s key to an array property on the user profile.

Discussion 0
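
With the subcollection design from option A, each article document owns its own comments subcollection, so the number of comments is not constrained by the 1 MiB document size limit that an array property would eventually hit. A sketch using the google-cloud-firestore client (collection and field names are illustrative):

    from google.cloud import firestore

    db = firestore.Client()

    def add_comment(article_id: str, user: str, text: str) -> None:
        # Comments live in a subcollection under the article document.
        comments = (db.collection("articles")
                      .document(article_id)
                      .collection("comments"))
        comments.add({"user": user, "text": text,
                      "created": firestore.SERVER_TIMESTAMP})
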
Questions 9

You have an on-premises application that authenticates to the Cloud Storage API using a user-managed service account with a user-managed key. The application connects to Cloud Storage using Private Google Access over a Dedicated Interconnect link. You discover that requests from the application to access objects in the Cloud Storage bucket are failing with a 403 Permission Denied error code. What is the likely cause of this issue?

Options:

A.  

The folder structure inside the bucket and object paths have changed.

B.  

The permissions of the service account’s predefined role have changed.

C.  

The service account key has been rotated but not updated on the application server.

D.  

The Interconnect link from the on-premises data center to Google Cloud is experiencing a temporary outage.

Discussion 0
Questions 10

Your team is developing a new application using a PostgreSQL database and Cloud Run. You are responsible for ensuring that all traffic is kept private on Google Cloud. You want to use managed services and follow Google-recommended best practices. What should you do?

Options:

A.  

1. Enable Cloud SQL and Cloud Run in the same project.

2. Configure a private IP address for Cloud SQL. Enable private services access.

3. Create a Serverless VPC Access connector.

4. Configure Cloud Run to use the connector to connect to Cloud SQL.

B.  

1. Install PostgreSQL on a Compute Engine virtual machine (VM), and enable Cloud Run in the same project.

2. Configure a private IP address for the VM. Enable private services access.

3. Create a Serverless VPC Access connector.

4. Configure Cloud Run to use the connector to connect to the VM hosting PostgreSQL.

C.  

1. Use Cloud SQL and Cloud Run in different projects.

2. Configure a private IP address for Cloud SQL. Enable private services access.

3. Create a Serverless VPC Access connector.

4. Set up a VPN connection between the two projects. Configure Cloud Run to use the connector to connect to Cloud SQL.

D.  

1. Install PostgreSQL on a Compute Engine VM, and enable Cloud Run in different projects.

2. Configure a private IP address for the VM. Enable private services access.

3. Create a Serverless VPC Access connector.

4. Set up a VPN connection between the two projects. Configure Cloud Run to use the connector to access the VM hosting PostgreSQL

Discussion 0
Questions 11

You are trying to connect to your Google Kubernetes Engine (GKE) cluster using kubectl from Cloud Shell. You have deployed your GKE cluster with a public endpoint. From Cloud Shell, you run the following command:

You notice that the kubectl commands time out without returning an error message. What is the most likely cause of this issue?

Options:

A.  

Your user account does not have privileges to interact with the cluster using kubectl.

B.  

Your Cloud Shell external IP address is not part of the authorized networks of the cluster.

C.  

The Cloud Shell is not part of the same VPC as the GKE cluster.

D.  

A VPC firewall is blocking access to the cluster’s endpoint.

Discussion 0
Questions 12

Your team develops services that run on Google Kubernetes Engine. You need to standardize their log data using Google-recommended practices and make the data more useful in the fewest number of steps. What should you do? (Choose two.)

Options:

A.  

Create aggregated exports on application logs to BigQuery to facilitate log analytics.

B.  

Create aggregated exports on application logs to Cloud Storage to facilitate log analytics.

C.  

Write log output to standard output (stdout) as single-line JSON to be ingested into Cloud Logging as structured logs.

D.  

Mandate the use of the Logging API in the application code to write structured logs to Cloud Logging.

E.  

Mandate the use of the Pub/Sub API to write structured data to Pub/Sub and create a Dataflow streaming pipeline to normalize logs and write them to BigQuery for analytics.

Discussion 0
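
For option C, the logging agent on GKE parses any single-line JSON written to stdout into a structured Cloud Logging entry, mapping fields such as severity onto the entry itself. A minimal sketch using only the standard library:

    import json
    import sys

    def log_structured(severity: str, message: str, **fields) -> None:
        # One JSON object per line; the GKE logging agent ingests it
        # into Cloud Logging as a structured entry.
        entry = {"severity": severity, "message": message, **fields}
        print(json.dumps(entry), file=sys.stdout, flush=True)

    log_structured("INFO", "order processed", order_id="12345")
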
Questions 13

You are developing an application that will store and access sensitive unstructured data objects in a Cloud Storage bucket. To comply with regulatory requirements, you need to ensure that all data objects are available for at least 7 years after their initial creation. Objects created more than 3 years ago are accessed very infrequently (less than once a year). You need to configure object storage while ensuring that storage cost is optimized. What should you do? (Choose two.)

Options:

A.  

Set a retention policy on the bucket with a period of 7 years.

B.  

Use IAM Conditions to provide access to objects 7 years after the object creation date.

C.  

Enable Object Versioning to prevent objects from being accidentally deleted for 7 years after object creation.

D.  

Create an object lifecycle policy on the bucket that moves objects from Standard Storage to Archive Storage after 3 years.

E.  

Implement a Cloud Function that checks the age of each object in the bucket and moves the objects older than 3 years to a second bucket with the Archive Storage class. Use Cloud Scheduler to trigger the Cloud Function on a daily schedule.

Discussion 0
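
Options A and D combine a bucket retention policy with a lifecycle rule that changes the storage class after three years. A sketch of how that configuration might look with the google-cloud-storage client (the bucket name is illustrative, and the durations are approximated as 365-day years):

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("regulated-objects")  # illustrative name

    # Retention policy: objects cannot be deleted for 7 years (in seconds).
    bucket.retention_period = 7 * 365 * 24 * 60 * 60

    # Lifecycle rule: move objects to Archive Storage after ~3 years (in days).
    bucket.add_lifecycle_set_storage_class_rule("ARCHIVE", age=3 * 365)

    bucket.patch()
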
Questions 14

You are designing a schema for a Cloud Spanner customer database. You want to store a phone number array field in a customer table. You also want to allow users to search customers by phone number. How should you design this schema?

Options:

A.  

Create a table named Customers. Add an Array field in a table that will hold phone numbers for the customer.

B.  

Create a table named Customers. Create a table named Phones. Add a CustomerId field in the Phones table to find the CustomerId from a phone number.

C.  

Create a table named Customers. Add an Array field in a table that will hold phone numbers for the customer. Create a secondary index on the Array field.

D.  

Create a table named Customers as a parent table. Create a table named Phones, and interleave this table into the Customer table. Create an index on the phone number field in the Phones table.

Discussion 0
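
The interleaved design in option D is expressed in Spanner DDL: Phones shares the Customers primary-key prefix so child rows are stored with their parent, and a secondary index makes lookups by phone number efficient. A sketch of the schema change (instance, database, and index names are illustrative):

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("customers")  # illustrative

    operation = database.update_ddl([
        """CREATE TABLE Customers (
             CustomerId INT64 NOT NULL,
             Name STRING(MAX)
           ) PRIMARY KEY (CustomerId)""",
        # Child rows are physically co-located with their parent Customer row.
        """CREATE TABLE Phones (
             CustomerId INT64 NOT NULL,
             PhoneNumber STRING(32) NOT NULL
           ) PRIMARY KEY (CustomerId, PhoneNumber),
             INTERLEAVE IN PARENT Customers ON DELETE CASCADE""",
        # Secondary index supports searching customers by phone number.
        "CREATE INDEX PhonesByNumber ON Phones (PhoneNumber)",
    ])
    operation.result()  # wait for the schema change to complete
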
Questions 15

You have an application deployed in Google Kubernetes Engine (GKE) that reads and processes Pub/Sub messages. Each Pod handles a fixed number of messages per minute. The rate at which messages are published to the Pub/Sub topic varies considerably throughout the day and week, including occasional large batches of messages published at a single moment.

You want to scale your GKE Deployment to be able to process messages in a timely manner. What GKE feature should you use to automatically adapt your workload?

Options:

A.  

Vertical Pod Autoscaler in Auto mode

B.  

Vertical Pod Autoscaler in Recommendation mode

C.  

Horizontal Pod Autoscaler based on an external metric

D.  

Horizontal Pod Autoscaler based on resources utilization

Discussion 0
Questions 16

You work at a rapidly growing financial technology startup. You manage the payment processing application written in Go and hosted on Cloud Run in the Singapore region (asia-southeast1). The payment processing application processes data stored in a Cloud Storage bucket that is also located in the Singapore region.

The startup plans to expand further into the Asia Pacific region. You plan to deploy the Payment Gateway in Jakarta, Hong Kong, and Taiwan over the next six months. Each location has data residency requirements that require customer data to reside in the country where the transaction was made. You want to minimize the cost of these deployments. What should you do?

Options:

A.  

Create a Cloud Storage bucket in each region, and create a Cloud Run service of the payment processing application in each region.

B.  

Create a Cloud Storage bucket in each region, and create three Cloud Run services of the payment processing application in the Singapore region.

C.  

Create three Cloud Storage buckets in the Asia multi-region, and create three Cloud Run services of the payment processing application in the Singapore region.

D.  

Create three Cloud Storage buckets in the Asia multi-region, and create three Cloud Run revisions of the payment processing application in the Singapore region.

Discussion 0
Questions 17

You are deploying a Python application to Cloud Run using Cloud Build. The Cloud Build pipeline is shown below:

You want to optimize deployment times and avoid unnecessary steps. What should you do?

Options:

A.  

Remove the step that pushes the container to Artifact Registry.

B.  

Add the --cache-from argument to the Docker build step in your build config file.

C.  

Store image artifacts in a Cloud Storage bucket in the same region as the Cloud Run instance.

D.  

Deploy a new Docker registry in a VPC and use Cloud Build worker pools inside the VPC to run the build pipeline.

Discussion 0
Questions 18

Your team is developing an ecommerce platform for your company. Users will log in to the website and add items to their shopping cart. Users will be automatically logged out after 30 minutes of inactivity. When users log back in, their shopping cart should be saved. How should you store users’ session and shopping cart information while following Google-recommended best practices?

Options:

A.  

Store the session information in Pub/Sub, and store the shopping cart information in Cloud SQL.

B.  

Store the shopping cart information in a file on Cloud Storage where the filename is the SESSION ID.

C.  

Store the session and shopping cart information in a MySQL database running on multiple Compute Engine instances.

D.  

Store the session information in Memorystore for Redis or Memorystore for Memcached, and store the shopping cart information in Firestore.

Discussion 0
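
Option D pairs an expiring session store with durable cart storage keyed by user. A sketch, assuming a Memorystore for Redis endpoint reachable from the application (the host, key prefix, and collection names are illustrative), using the redis and google-cloud-firestore clients:

    import redis
    from google.cloud import firestore

    r = redis.Redis(host="10.0.0.3", port=6379)  # Memorystore endpoint (illustrative)
    db = firestore.Client()

    SESSION_TTL_SECONDS = 30 * 60  # users are logged out after 30 minutes

    def save_session(session_id: str, user_id: str) -> None:
        # The session key expires automatically after 30 minutes.
        r.setex(f"session:{session_id}", SESSION_TTL_SECONDS, user_id)

    def save_cart(user_id: str, items: list) -> None:
        # The cart survives logout because it is keyed by user, not by session.
        db.collection("carts").document(user_id).set({"items": items})
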
Questions 19

You work for an organization that manages an online ecommerce website. Your company plans to expand across the world; however, the estore currently serves one specific region. You need to select a SQL database and configure a schema that will scale as your organization grows. You want to create a table that stores all customer transactions and ensure that the customer (CustomerId) and the transaction (TransactionId) are unique. What should you do?

Options:

A.  

Create a Cloud SQL table that has TransactionId and CustomerId configured as primary keys. Use an incremental number for the TransactionId.

B.  

Create a Cloud SQL table that has TransactionId and CustomerId configured as primary keys. Use a random string (UUID) for the TransactionId.

C.  

Create a Cloud Spanner table that has TransactionId and CustomerId configured as primary keys. Use a random string (UUID) for the TransactionId.

D.  

Create a Cloud Spanner table that has TransactionId and CustomerId configured as primary keys. Use an incremental number for the TransactionId.

Discussion 0
Questions 20

You need to deploy a new European version of a website hosted on Google Kubernetes Engine. The current and new websites must be accessed via the same HTTP(S) load balancer's external IP address, but have different domain names. What should you do?

Options:

A.  

Define a new Ingress resource with a host rule matching the new domain.

B.

Modify the existing Ingress resource with a host rule matching the new domain.

C.

Create a new Service of type LoadBalancer specifying the existing IP address as the loadBalancerIP.

D.

Generate a new Ingress resource and specify the existing IP address as the kubernetes.io/ingress.global-static-ip-name annotation value.

Discussion 0
Questions 21

You manage a microservices application on Google Kubernetes Engine (GKE) using Istio. You secure the communication channels between your microservices by implementing an Istio AuthorizationPolicy, a Kubernetes NetworkPolicy, and mTLS on your GKE cluster. You discover that HTTP requests between two Pods to specific URLs fail, while other requests to other URLs succeed. What is the cause of the connection issue?

Options:

A.  

A Kubernetes NetworkPolicy resource is blocking HTTP traffic between the Pods.

B.  

The Pod initiating the HTTP requests is attempting to connect to the target Pod via an incorrect TCP port.

C.  

The Authorization Policy of your cluster is blocking HTTP requests for specific paths within your application.

D.  

The cluster has mTLS configured in permissive mode, but the Pod's sidecar proxy is sending unencrypted traffic in plain text.

Discussion 0
Questions 22

You are developing a microservice-based application that will be deployed on a Google Kubernetes Engine cluster. The application needs to read and write to a Spanner database. You want to follow security best practices while minimizing code changes. How should you configure your application to retrieve Spanner credentials?

Options:

A.  

Configure the appropriate service accounts, and use Workload Identity to run the pods.

B.  

Store the application credentials as Kubernetes Secrets, and expose them as environment variables.

C.  

Configure the appropriate routing rules, and use a VPC-native cluster to directly connect to the database.

D.  

Store the application credentials using Cloud Key Management Service, and retrieve them whenever a database connection is made.

Discussion 0
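
With Workload Identity (option A), the Kubernetes service account is bound to a Google service account, so the client library resolves credentials through Application Default Credentials and the application code handles no keys at all. A minimal sketch (instance and database names are illustrative):

    from google.cloud import spanner

    # No key file and no explicit credentials: with Workload Identity enabled,
    # Application Default Credentials come from the pod's Kubernetes service
    # account, which is mapped to a Google service account.
    client = spanner.Client()
    database = client.instance("my-instance").database("orders")

    with database.snapshot() as snapshot:
        rows = snapshot.execute_sql("SELECT 1")
        print(list(rows))
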
Questions 23

You are designing a resource-sharing policy for applications used by different teams in a Google Kubernetes Engine cluster. You need to ensure that all applications can access the resources needed to run. What should you do? (Choose two.)

Options:

A.  

Specify the resource limits and requests in the object specifications.

B.  

Create a namespace for each team, and attach resource quotas to each namespace.

C.  

Create a LimitRange to specify the default compute resource requirements for each namespace.

D.  

Create a Kubernetes service account (KSA) for each application, and assign each KSA to the namespace.

E.  

Use the Anthos Policy Controller to enforce label annotations on all namespaces. Use taints and tolerations to allow resource sharing for namespaces.

Discussion 0
Questions 24

Your API backend is running on multiple cloud providers. You want to generate reports for the network latency of your API.

Which two steps should you take? (Choose two.)

Options:

A.  

Use Zipkin collector to gather data.

B.  

Use Fluentd agent to gather data.

C.  

Use Stackdriver Trace to generate reports.

D.  

Use Stackdriver Debugger to generate reports.

E.

Use Stackdriver Profiler to generate reports.

Discussion 0
Questions 25

Your existing application keeps user state information in a single MySQL database. This state information is very user-specific and depends heavily on how long a user has been using an application. The MySQL database is causing challenges to maintain and enhance the schema for various users.

Which storage option should you choose?

Options:

A.  

Cloud SQL

B.  

Cloud Storage

C.  

Cloud Spanner

D.  

Cloud Datastore/Firestore

Discussion 0
Questions 26

Your team recently deployed an application on Google Kubernetes Engine (GKE). You are monitoring your application and want to be alerted when the average memory consumption of your containers is under 20% or above 80%. How should you configure the alerts?

Options:

A.  

Create a Cloud Function that consumes the Monitoring API. Create a schedule to trigger the Cloud Function hourly and alert you if the average memory consumption is outside the defined range.

B.

In Cloud Monitoring, create an alerting policy to notify you if the average memory consumption is outside the defined range.

C.

Create a Cloud Function that runs on a schedule, executes kubectl top on all the workloads on the cluster, and sends an email alert if the average memory consumption is outside the defined range.

D.

Write a script that pulls the memory consumption of the instance at the OS level and sends an email alert if the average memory consumption is outside the defined range.

Discussion 0
Questions 27

You have written a Cloud Function that accesses other Google Cloud resources. You want to secure the environment using the principle of least privilege. What should you do?

Options:

A.  

Create a new service account that has Editor authority to access the resources. The deployer is given permission to get the access token.

B.  

Create a new service account that has a custom IAM role to access the resources. The deployer is given permission to get the access token.

C.  

Create a new service account that has Editor authority to access the resources. The deployer is given permission to act as the new service account.

D.  

Create a new service account that has a custom IAM role to access the resources. The deployer is given permission to act as the new service account.

Discussion 0
Questions 28

You are using the Cloud Client Library to upload an image in your application to Cloud Storage. Users of the application report that occasionally the upload does not complete and the client library reports an HTTP 504 Gateway Timeout error. You want to make the application more resilient to errors. What changes to the application should you make?

Options:

A.  

Write an exponential backoff process around the client library call.

B.  

Write a one-second wait time backoff process around the client library call.

C.  

Design a retry button in the application and ask users to click if the error occurs.

D.  

Create a queue for the object and inform the users that the application will try again in 10 minutes.

Discussion 0
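
For option A, rather than hand-rolling the loop, the Cloud Storage client accepts a retry policy whose delays grow exponentially up to a cap. A sketch with google-cloud-storage and google-api-core (the bucket and file names are illustrative):

    from google.api_core.retry import Retry
    from google.cloud import storage

    # Exponential backoff: 1 s initial delay, doubling up to 60 s,
    # giving up after 300 s overall.
    upload_retry = Retry(initial=1.0, multiplier=2.0, maximum=60.0, timeout=300.0)

    client = storage.Client()
    blob = client.bucket("user-images").blob("avatar.png")
    blob.upload_from_filename("avatar.png", retry=upload_retry)
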
Questions 29

You developed a JavaScript web application that needs to access Google Drive’s API and obtain permission from users to store files in their Google Drives. You need to select an authorization approach for your application. What should you do?

Options:

A.  

Create an API key.

B.  

Create a SAML token.

C.  

Create a service account.

D.  

Create an OAuth Client ID.

Discussion 0
Questions 30

Your code is running on Cloud Functions in project A. It is supposed to write an object in a Cloud Storage bucket owned by project B. However, the write call is failing with the error "403 Forbidden".

What should you do to correct the problem?

Options:

A.  

Grant your user account the roles/storage.objectCreator role for the Cloud Storage bucket.

B.  

Grant your user account the roles/iam.serviceAccountUser role for the service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com service account.

C.  

Grant the service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com service account the roles/storage.objectCreator role for the Cloud Storage bucket.

D.  

Enable the Cloud Storage API in project B.

Discussion 0
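
Option C amounts to adding an IAM binding on the project B bucket for project A's Cloud Functions service agent. A sketch of how the grant might be made with the google-cloud-storage client (the bucket name is illustrative; the service account address is the one named in the option):

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("project-b-bucket")  # illustrative

    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append({
        "role": "roles/storage.objectCreator",
        "members": {
            "serviceAccount:service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com"
        },
    })
    bucket.set_iam_policy(policy)
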
Questions 31

You have a web application that publishes messages to Pub/Sub. You plan to build new versions of the application locally and need to quickly test Pub/Sub integration for each new build. How should you configure local testing?

Options:

A.  

Run the gcloud config set api_endpoint_overrides/pubsub https://pubsubemulator.googleapis.com/ command to change the Pub/Sub endpoint prior to starting the application.

B.  

In the Google Cloud console, navigate to the API Library and enable the Pub/Sub API. When developing locally, configure your application to call pubsub.googleapis.com.

C.  

Install Cloud Code on the integrated development environment (IDE). Navigate to Cloud APIs, and enable Pub/Sub against a valid Google Project ID. When developing locally, configure your application to call pubsub.googleapis.com.

D.  

Install the Pub/Sub emulator using gcloud, and start the emulator with a valid Google Project ID. When developing locally, configure your application to use the local emulator by exporting the PUBSUB_EMULATOR_HOST variable.

Discussion 0
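
With the emulator approach in option D, the only application-side change is the PUBSUB_EMULATOR_HOST environment variable, which the client library checks before connecting. A sketch, assuming the emulator was started locally on port 8085 (the project and topic names are illustrative):

    import os

    # Point the client library at the local emulator before it is created.
    os.environ["PUBSUB_EMULATOR_HOST"] = "localhost:8085"

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-test-project", "builds")
    publisher.create_topic(name=topic_path)
    publisher.publish(topic_path, b"test message").result()
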
Questions 32

You are planning to deploy hundreds of microservices in your Google Kubernetes Engine (GKE) cluster. How should you secure communication between the microservices on GKE using a managed service?

Options:

A.  

Use global HTTP(S) Load Balancing with managed SSL certificates to protect your services.

B.

Deploy open source Istio in your GKE cluster, and enable mTLS in your Service Mesh.

C.  

Install cert-manager on GKE to automatically renew the SSL certificates.

D.  

Install Anthos Service Mesh, and enable mTLS in your Service Mesh.

Discussion 0
Questions 33

Your development team has built several Cloud Functions using Java along with corresponding integration and service tests. You are building and deploying the functions and launching the tests using Cloud Build. Your Cloud Build job is reporting deployment failures immediately after successfully validating the code. What should you do?

Options:

A.  

Check the maximum number of Cloud Function instances.

B.  

Verify that your Cloud Build trigger has the correct build parameters.

C.  

Retry the tests using the truncated exponential backoff polling strategy.

D.  

Verify that the Cloud Build service account is assigned the Cloud Functions Developer role.

Discussion 0
Questions 34

Your company has deployed a new API to a Compute Engine instance. During testing, the API is not behaving as expected. You want to monitor the application over 12 hours to diagnose the problem within the application code without redeploying the application. Which tool should you use?

Options:

A.  

Cloud Trace

B.  

Cloud Monitoring

C.  

Cloud Debugger logpoints

D.  

Cloud Debugger snapshots

Discussion 0
Questions 35

You plan to deploy a new application revision with a Deployment resource to Google Kubernetes Engine (GKE) in production. The container might not work correctly. You want to minimize risk in case there are issues after deploying the revision. You want to follow Google-recommended best practices. What should you do?

Options:

A.  

Perform a rolling update with a PodDisruptionBudget of 80%.

B.  

Perform a rolling update with a HorizontalPodAutoscaler scale-down policy value of 0.

C.  

Convert the Deployment to a StatefulSet, and perform a rolling update with a PodDisruptionBudget of 80%.

D.  

Convert the Deployment to a StatefulSet, and perform a rolling update with a HorizontalPodAutoscaler scale-down policy value of 0.

Discussion 0
Questions 36

You are building an API that will be used by Android and iOS apps. The API must:

• Support HTTPS

• Minimize bandwidth cost

• Integrate easily with mobile apps

Which API architecture should you use?

Options:

A.  

RESTful APIs

B.  

MQTT for APIs

C.  

gRPC-based APIs

D.  

SOAP-based APIs

Discussion 0
Questions 37

You are running a web application on Google Kubernetes Engine that you inherited. You want to determine whether the application is using libraries with known vulnerabilities or is vulnerable to XSS attacks. Which service should you use?

Options:

A.  

Google Cloud Armor

B.  

Debugger

C.  

Web Security Scanner

D.  

Error Reporting

Discussion 0
Questions 38

You work for a financial services company that has a container-first approach. Your team develops microservices applications. You have a Cloud Build pipeline that creates a container image, runs regression tests, and publishes the image to Artifact Registry. You need to ensure that only containers that have passed the regression tests are deployed to Google Kubernetes Engine (GKE) clusters. You have already enabled Binary Authorization on the GKE clusters. What should you do next?

Options:

A.  

Deploy Voucher Server and Voucher Client Components. After a container image has passed the regression tests, run Voucher Client as a step in the Cloud Build pipeline.

B.  

Set the Pod Security Standard level to Restricted for the relevant namespaces. Digitally sign the container images that have passed the regression tests as a step in the Cloud Build pipeline.

C.  

Create an attestor and a policy. Create an attestation for the container images that have passed the regression tests as a step in the Cloud Build pipeline.

D.  

Create an attestor and a policy. Run a vulnerability scan to create an attestation for the container image as a step in the Cloud Build pipeline.

Discussion 0
Questions 39

Your application is running in multiple Google Kubernetes Engine clusters. It is managed by a Deployment in each cluster. The Deployment has created multiple replicas of your Pod in each cluster. You want to view the logs sent to stdout for all of the replicas in your Deployment in all clusters. Which command should you use?

Options:

A.  

kubectl logs [PARAM]

B.  

gcloud logging read [PARAM]

C.  

kubectl exec -it [PARAM] journalctl

D.  

gcloud compute ssh [PARAM] --command="sudo journalctl"

Discussion 0
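
Because stdout from GKE containers is ingested into Cloud Logging, logs from every replica in every cluster can be read with a single filter; gcloud logging read [PARAM] is the CLI form, and an equivalent call through the google-cloud-logging client might look like this (the container name is illustrative):

    from google.cloud import logging

    client = logging.Client()
    log_filter = (
        'resource.type="k8s_container" '
        'AND resource.labels.container_name="my-app"'
    )

    # Entries are returned across all clusters in the project.
    for entry in client.list_entries(filter_=log_filter):
        print(entry.payload)
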
Questions 40

Your team is developing a Cloud Function triggered by Cloud Storage Events. You want to accelerate testing and development of your Cloud Function while following Google-recommended best practices. What should you do?

Options:

A.  

Install the Functions Framework library, and configure the Cloud Function on localhost. Make a copy of the function, and make edits to the new version. Test the new version using curl.

B.  

Make a copy of the Cloud Function, and rewrite the code to be HTTP-triggered. Edit and test the new version by triggering the HTTP endpoint. Send mock requests to the new function to evaluate the functionality.

C.  

Make a copy of the Cloud Function in the Google Cloud console. Use the Cloud console's in-line editor to make source code changes to the new function. Modify your web application to call the new function, and test the new version in production.

D.  

Create a new Cloud Function that is triggered when Cloud Audit Logs detects the cloudfunctions.functions.sourceCodeSet operation in the original Cloud Function. Send mock requests to the new function to evaluate the functionality.

Discussion 0
Questions 41

Which database should HipLocal use for storing user activity?

Options:

A.  

BigQuery

B.  

Cloud SQL

C.  

Cloud Spanner

D.  

Cloud Datastore

Discussion 0
Questions 42

In order for HipLocal to store application state and meet their stated business requirements, which database service should they migrate to?

Options:

A.  

Cloud Spanner

B.  

Cloud Datastore

C.  

Cloud Memorystore as a cache

D.  

Separate Cloud SQL clusters for each region

Discussion 0
Questions 43

For this question, refer to the HipLocal case study.

Which Google Cloud product addresses HipLocal’s business requirements for service level indicators and objectives?

Options:

A.  

Cloud Profiler

B.  

Cloud Monitoring

C.  

Cloud Trace

D.  

Cloud Logging

Discussion 0
Questions 44

For this question, refer to the HipLocal case study.

How should HipLocal redesign their architecture to ensure that the application scales to support a large increase in users?

Options:

A.  

Use Google Kubernetes Engine (GKE) to run the application as a microservice. Run the MySQL database on a dedicated GKE node.

B.  

Use multiple Compute Engine instances to run MySQL to store state information. Use a Google Cloud-managed load balancer to distribute the load between instances. Use managed instance groups for scaling.

C.  

Use Memorystore to store session information and CloudSQL to store state information. Use a Google Cloud-managed load balancer to distribute the load between instances. Use managed instance groups for scaling.

D.  

Use a Cloud Storage bucket to serve the application as a static website, and use another Cloud Storage bucket to store user state information.

Discussion 0
Questions 45

In order to meet their business requirements, how should HipLocal store their application state?

Options:

A.  

Use local SSDs to store state.

B.  

Put a memcache layer in front of MySQL.

C.  

Move the state storage to Cloud Spanner.

D.  

Replace the MySQL instance with Cloud SQL.

Discussion 0
Questions 46

HipLocal's APIs are showing occasional failures, but they cannot find a pattern. They want to collect some metrics to help them troubleshoot.

What should they do?

Options:

A.  

Take frequent snapshots of all of the VMs.

B.  

Install the Stackdriver Logging agent on the VMs.

C.  

Install the Stackdriver Monitoring agent on the VMs.

D.  

Use Stackdriver Trace to look for performance bottlenecks.

Discussion 0
Questions 47

For this question, refer to the HipLocal case study.

HipLocal's application uses Cloud Client Libraries to interact with Google Cloud. HipLocal needs to configure authentication and authorization in the Cloud Client Libraries to implement least privileged access for the application. What should they do?

Options:

A.  

Create an API key. Use the API key to interact with Google Cloud.

B.  

Use the default compute service account to interact with Google Cloud.

C.  

Create a service account for the application. Export and deploy the private key for the application. Use the service account to interact with Google Cloud.

D.  

Create a service account for the application and for each Google Cloud API used by the application. Export and deploy the private keys used by the application. Use the service account with one Google Cloud API to interact with Google Cloud.

Discussion 0
Questions 48

Which service should HipLocal use to enable access to internal apps?

Options:

A.  

Cloud VPN

B.  

Cloud Armor

C.  

Virtual Private Cloud

D.  

Cloud Identity-Aware Proxy

Discussion 0
Questions 49

For this question, refer to the HipLocal case study.

HipLocal wants to reduce the latency of their services for users in global locations. They have created read replicas of their database in locations where their users reside and configured their service to read traffic using those replicas. How should they further reduce latency for all database interactions with the least amount of effort?

Options:

A.  

Migrate the database to Bigtable and use it to serve all global user traffic.

B.  

Migrate the database to Cloud Spanner and use it to serve all global user traffic.

C.  

Migrate the database to Firestore in Datastore mode and use it to serve all global user traffic.

D.  

Migrate the services to Google Kubernetes Engine and use a load balancer service to better scale the application.

Discussion 0
Questions 50

HipLocal wants to reduce the number of on-call engineers and eliminate manual scaling.

Which two services should they choose? (Choose two.)

Options:

A.  

Use Google App Engine services.

B.  

Use serverless Google Cloud Functions.

C.  

Use Knative to build and deploy serverless applications.

D.  

Use Google Kubernetes Engine for automated deployments.

E.  

Use a large Google Compute Engine cluster for deployments.

Discussion 0
Questions 51

For this question, refer to the HipLocal case study.

How should HipLocal increase their API development speed while continuing to provide the QA team with a stable testing environment that meets feature requirements?

Options:

A.  

Include unit tests in their code, and prevent deployments to QA until all tests have a passing status.

B.  

Include performance tests in their code, and prevent deployments to QA until all tests have a passing status.

C.  

Create health checks for the QA environment, and redeploy the APIs at a later time if the environment is unhealthy.

D.  

Redeploy the APIs to App Engine using Traffic Splitting. Do not move QA traffic to the new versions if errors are found.

Discussion 0
Questions 52

HipLocal’s data science team wants to analyze user reviews.

How should they prepare the data?

Options:

A.  

Use the Cloud Data Loss Prevention API for redaction of the review dataset.

B.  

Use the Cloud Data Loss Prevention API for de-identification of the review dataset.

C.  

Use the Cloud Natural Language Processing API for redaction of the review dataset.

D.  

Use the Cloud Natural Language Processing API for de-identification of the review dataset.

Discussion 0
Questions 53

HipLocal wants to improve the resilience of their MySQL deployment, while also meeting their business and technical requirements.

Which configuration should they choose?

Options:

A.  

Use the current single instance MySQL on Compute Engine and several read-only MySQL servers on Compute Engine.

B.

Use the current single instance MySQL on Compute Engine, and replicate the data to Cloud SQL in an external master configuration.

C.

Replace the current single instance MySQL instance with Cloud SQL, and configure high availability.

D.

Replace the current single instance MySQL instance with Cloud SQL, and Google provides redundancy without further configuration.

Discussion 0
Questions 54

HipLocal is configuring their access controls.

Which firewall configuration should they implement?

Options:

A.  

Block all traffic on port 443.

B.  

Allow all traffic into the network.

C.  

Allow traffic on port 443 for a specific tag.

D.  

Allow all traffic on port 443 into the network.

Discussion 0
Questions 55

For this question, refer to the HipLocal case study.

HipLocal is expanding into new locations. They must capture additional data each time the application is launched in a new European country. This is causing delays in the development process due to constant schema changes and a lack of environments for conducting testing on the application changes. How should they resolve the issue while meeting the business requirements?

Options:

A.  

Create new Cloud SQL instances in Europe and North America for testing and deployment. Provide developers with local MySQL instances to conduct testing on the application changes.

B.  

Migrate data to Bigtable. Instruct the development teams to use the Cloud SDK to emulate a local Bigtable development environment.

C.  

Move from Cloud SQL to MySQL hosted on Compute Engine. Replicate hosts across regions in the Americas and Europe. Provide developers with local MySQL instances to conduct testing on the application changes.

D.  

Migrate data to Firestore in Native mode and set up instan

Discussion 0
Questions 56

HipLocal has connected their Hadoop infrastructure to GCP using Cloud Interconnect in order to query data stored on persistent disks.

Which IP strategy should they use?

Options:

A.  

Create manual subnets.

B.  

Create an auto mode subnet.

C.  

Create multiple peered VPCs.

D.  

Provision a single instance for NAT.

Discussion 0
Questions 57

HipLocal's .NET-based auth service fails under intermittent load.

What should they do?

Options:

A.  

Use App Engine for autoscaling.

B.  

Use Cloud Functions for autoscaling.

C.  

Use a Compute Engine cluster for the service.

D.  

Use a dedicated Compute Engine virtual machine instance for the service.

Discussion 0
Questions 58

Which service should HipLocal use for their public APIs?

Options:

A.  

Cloud Armor

B.  

Cloud Functions

C.  

Cloud Endpoints

D.  

Shielded Virtual Machines

Discussion 0
Questions 59

For this question, refer to the HipLocal case study.

A recent security audit discovers that HipLocal’s database credentials for their Compute Engine-hosted MySQL databases are stored in plain text on persistent disks. HipLocal needs to reduce the risk of these credentials being stolen. What should they do?

Options:

A.  

Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain the database credentials.

B.  

Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain a key used to decrypt the database credentials.

C.  

Create a service account and grant it the roles/iam.serviceAccountUser role. Impersonate this account and authenticate using the Cloud SQL Proxy.

D.  

Grant the roles/secretmanager.secretAccessor role to the Compute Engine service account. Store and access the database credentials with the Secret Manager API.

Discussion 0
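
Option D replaces the plain-text file with a Secret Manager lookup at startup; granting roles/secretmanager.secretAccessor to the Compute Engine service account is the only IAM change needed. A sketch with the google-cloud-secret-manager client (the project and secret names are illustrative):

    from google.cloud import secretmanager

    client = secretmanager.SecretManagerServiceClient()
    name = "projects/hiplocal-prod/secrets/mysql-password/versions/latest"

    # The VM's service account authenticates automatically through
    # Application Default Credentials.
    response = client.access_secret_version(name=name)
    db_password = response.payload.data.decode("utf-8")
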