
Google Certified Professional - Cloud Developer Questions and Answers

Google Certified Professional - Cloud Developer

Last Update: Mar 29, 2024
Total Questions: 254

We are offering free Professional-Cloud-Developer Google exam questions. Simply sign up and provide your details, work through the free Professional-Cloud-Developer practice questions, and then move on to the complete pool of Google Certified Professional - Cloud Developer test questions for further preparation.

Professional-Cloud-Developer PDF

$35  $99.99

Professional-Cloud-Developer Testing Engine

$42  $119.99

Professional-Cloud-Developer PDF + Testing Engine

$56  $159.99
Questions 1

Your company has deployed a new API to a Compute Engine instance. During testing, the API is not behaving as expected. You want to monitor the application over 12 hours to diagnose the problem within the application code without redeploying the application. Which tool should you use?

Options:

A.  

Cloud Trace

B.  

Cloud Monitoring

C.  

Cloud Debugger logpoints

D.  

Cloud Debugger snapshots

Questions 2

You want to view the memory usage of your application deployed on Compute Engine. What should you do?

Options:

A.  

Install the Stackdriver Client Library.

B.  

Install the Stackdriver Monitoring Agent.

C.  

Use the Stackdriver Metrics Explorer.

D.  

Use the Google Cloud Platform Console.

Questions 3

You recently developed a new application. You want to deploy the application on Cloud Run without a Dockerfile. Your organization requires that all container images are pushed to a centrally managed container repository. How should you build your container using Google Cloud services? (Choose two.)

Options:

A.  

Push your source code to Artifact Registry.

B.  

Submit a Cloud Build job to push the image.

C.  

Use the pack build command with pack CLI.

D.  

Include the --source flag with the gcloud run deploy CLI command.

E.  

Include the --platform=kubernetes flag with the gcloud run deploy CLI command.

Questions 4

You need to containerize a web application that will be hosted on Google Cloud behind a global load balancer with SSL certificates. You don't have the time to develop authentication at the application level, and you want to offload SSL encryption and management from your application. You want to configure the architecture using managed services where possible. What should you do?

Options:

A.  

Host the application on Compute Engine, and configure Cloud Endpoints for your application.

B.  

Host the application on Google Kubernetes Engine and use Identity-Aware Proxy (IAP) with Cloud Load Balancing and Google-managed certificates.

C.  

Host the application on Google Kubernetes Engine, and deploy an NGINX Ingress Controller to handle authentication.

D.  

Host the application on Google Kubernetes Engine, and deploy cert-manager to manage SSL certificates.

Questions 5

You recently developed an application that monitors a large number of stock prices. You need to configure Pub/Sub to receive a high volume of messages and update the current stock price in a single large in-memory database. The downstream service needs only the most up-to-date prices in the in-memory database to perform stock trading transactions. Each message contains three pieces of information:

• Stock symbol

• Stock price

• Timestamp for the update

How should you set up your Pub/Sub subscription?

Options:

A.  

Create a pull subscription with both ordering and exactly-once delivery turned off

B.  

Create a pull subscription with exactly-once delivery enabled

C.  

Create a push subscription with exactly-once delivery enabled

D.  

Create a push subscription with both ordering and exactly-once delivery turned off
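For reference, these delivery properties are set when the subscription is created. Below is a minimal sketch using the google-cloud-pubsub Python library, assuming placeholder project, topic, and subscription names; the enable_exactly_once_delivery and enable_message_ordering fields correspond to the behaviors the options describe.

    from google.cloud import pubsub_v1

    project_id = "my-project"  # placeholder project ID
    subscriber = pubsub_v1.SubscriberClient()
    topic_path = subscriber.topic_path(project_id, "stock-prices")  # placeholder topic
    subscription_path = subscriber.subscription_path(project_id, "stock-prices-sub")  # placeholder

    # Create a pull subscription; toggle the two fields below to match the
    # delivery behavior you want (both shown disabled here for illustration).
    subscription = subscriber.create_subscription(
        request={
            "name": subscription_path,
            "topic": topic_path,
            "enable_exactly_once_delivery": False,
            "enable_message_ordering": False,
        }
    )
    print("Created subscription:", subscription.name)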

Questions 6

Your code is running on Cloud Functions in project A. It is supposed to write an object in a Cloud Storage bucket owned by project B. However, the write call is failing with the error "403 Forbidden".

What should you do to correct the problem?

Options:

A.  

Grant your user account the roles/storage.objectCreator role for the Cloud Storage bucket.

B.  

Grant your user account the roles/iam.serviceAccountUser role for the service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com service account.

C.  

Grant the service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com service account the roles/storage.objectCreator role for the Cloud Storage bucket.

D.  

Enable the Cloud Storage API in project B.
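For context, granting a role on a bucket can be done with the google-cloud-storage Python library. This is a minimal sketch, assuming a placeholder bucket name in project B and the project A Cloud Functions service account mentioned in the options above.

    from google.cloud import storage

    bucket_name = "project-b-bucket"  # placeholder bucket owned by project B
    member = "serviceAccount:service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com"

    client = storage.Client()
    bucket = client.bucket(bucket_name)

    # Add an IAM binding that lets the service account create objects in the bucket.
    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append({
        "role": "roles/storage.objectCreator",
        "members": {member},
    })
    bucket.set_iam_policy(policy)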

Questions 7

You are using Cloud Build for your CI/CD pipeline to complete several tasks, including copying certain files to Compute Engine virtual machines. Your pipeline requires a flat file that is generated in one builder in the pipeline to be accessible by subsequent builders in the same pipeline. How should you store the file so that all the builders in the pipeline can access it?

Options:

A.  

Store and retrieve the file contents using Compute Engine instance metadata.

B.  

Output the file contents to a file in /workspace. Read from the same /workspace file in the subsequent build step.

C.  

Use gsutil to output the file contents to a Cloud Storage object. Read from the same object in the subsequent build step.

D.  

Add a build argument that runs an HTTP POST via curl to a separate web server to persist the value in one builder. Use an HTTP GET via curl from the subsequent build step to read the value.

Questions 8

You made a typo in a low-level Linux configuration file that prevents your Compute Engine instance from booting to a normal run level. You just created the Compute Engine instance today and have done no other maintenance on it, other than tweaking files. How should you correct this error?

Options:

A.  

Download the file using scp, change the file, and then upload the modified version

B.  

Configure and log in to the Compute Engine instance through SSH, and change the file

C.  

Configure and log in to the Compute Engine instance through the serial port, and change the file

D.  

Configure and log in to the Compute Engine instance using a remote desktop client, and change the file

Questions 9

You need to load-test a set of REST API endpoints that are deployed to Cloud Run. The API responds to HTTP POST requests. Your load tests must meet the following requirements:

• Load is initiated from multiple parallel threads.

• User traffic to the API originates from multiple source IP addresses.

• Load can be scaled up using additional test instances.

You want to follow Google-recommended best practices. How should you configure the load testing?

Options:

A.  

Create an image that has cURL installed and configure cURL to run a test plan. Deploy the image in a managed instance group, and run one instance of the image for each VM.

B.  

Create an image that has cURL installed and configure cURL to run a test plan. Deploy the image in an unmanaged instance group, and run one instance of the image for each VM.

C.  

Deploy a distributed load testing framework on a private Google Kubernetes Engine cluster. Deploy additional Pods as needed to initiate more traffic and support the number of concurrent users.

D.  

Download the container image of a distributed load testing framework on Cloud Shell. Sequentially start several instances of the container on Cloud Shell to increase the load on the API.
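As an illustration of the first requirement, load can be initiated from multiple parallel threads with a short script. This is a minimal sketch, not a full distributed framework; the Cloud Run URL and payload are placeholders, and scaling the script out across additional Pods or VMs is what provides multiple source IP addresses and additional test instances.

    import concurrent.futures
    import requests

    URL = "https://my-api-abc123-uc.a.run.app/orders"  # placeholder Cloud Run endpoint
    PAYLOAD = {"item": "test"}                         # placeholder request body
    THREADS = 20
    REQUESTS_PER_THREAD = 50

    def worker(_):
        latencies = []
        for _ in range(REQUESTS_PER_THREAD):
            response = requests.post(URL, json=PAYLOAD, timeout=10)
            latencies.append(response.elapsed.total_seconds())
        return latencies

    with concurrent.futures.ThreadPoolExecutor(max_workers=THREADS) as pool:
        results = [latency for batch in pool.map(worker, range(THREADS)) for latency in batch]

    print(f"requests: {len(results)}, avg latency: {sum(results) / len(results):.3f}s")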

Questions 10

For this question, refer to the HipLocal case study.

How should HipLocal increase their API development speed while continuing to provide the QA team with a stable testing environment that meets feature requirements?

Options:

A.  

Include unit tests in their code, and prevent deployments to QA until all tests have a passing status.

B.  

Include performance tests in their code, and prevent deployments to QA until all tests have a passing status.

C.  

Create health checks for the QA environment, and redeploy the APIs at a later time if the environment is unhealthy.

D.  

Redeploy the APIs to App Engine using Traffic Splitting. Do not move QA traffic to the new versions if errors are found.

Questions 11

HipLocal wants to reduce the number of on-call engineers and eliminate manual scaling.

Which two services should they choose? (Choose two.)

Options:

A.  

Use Google App Engine services.

B.  

Use serverless Google Cloud Functions.

C.  

Use Knative to build and deploy serverless applications.

D.  

Use Google Kubernetes Engine for automated deployments.

E.  

Use a large Google Compute Engine cluster for deployments.

Questions 12

In order for HipLocal to store application state and meet their stated business requirements, which database service should they migrate to?

Options:

A.  

Cloud Spanner

B.  

Cloud Datastore

C.  

Cloud Memorystore as a cache

D.  

Separate Cloud SQL clusters for each region

Questions 13

For this question, refer to the HipLocal case study.

How should HipLocal redesign their architecture to ensure that the application scales to support a large increase in users?

Options:

A.  

Use Google Kubernetes Engine (GKE) to run the application as a microservice. Run the MySQL database on a dedicated GKE node.

B.  

Use multiple Compute Engine instances to run MySQL to store state information. Use a Google Cloud-managed load balancer to distribute the load between instances. Use managed instance groups for scaling.

C.  

Use Memorystore to store session information and Cloud SQL to store state information. Use a Google Cloud-managed load balancer to distribute the load between instances. Use managed instance groups for scaling.

D.  

Use a Cloud Storage bucket to serve the application as a static website, and use another Cloud Storage bucket to store user state information.

Questions 14

HipLocal has connected their Hadoop infrastructure to GCP using Cloud Interconnect in order to query data stored on persistent disks.

Which IP strategy should they use?

Options:

A.  

Create manual subnets.

B.  

Create an auto mode subnet.

C.  

Create multiple peered VPCs.

D.  

Provision a single instance for NAT.

Questions 15

For this question, refer to the HipLocal case study.

A recent security audit discovers that HipLocal’s database credentials for their Compute Engine-hosted MySQL databases are stored in plain text on persistent disks. HipLocal needs to reduce the risk of these credentials being stolen. What should they do?

Options:

A.  

Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain the database credentials.

B.  

Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain a key used to decrypt the database credentials.

C.  

Create a service account and grant it the roles/iam.serviceAccountUser role. Impersonate as this account and authenticate using the Cloud SQL Proxy.

D.  

Grant the roles/secretmanager.secretAccessor role to the Compute Engine service account. Store and access the database credentials with the Secret Manager API.
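For context, once the credentials are stored in Secret Manager the application can read them at runtime with the Python client library. A minimal sketch, assuming placeholder project and secret names:

    from google.cloud import secretmanager

    client = secretmanager.SecretManagerServiceClient()

    # Placeholders: substitute the real project ID and secret name.
    name = "projects/my-project/secrets/mysql-password/versions/latest"

    response = client.access_secret_version(request={"name": name})
    db_password = response.payload.data.decode("utf-8")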

Questions 16

For this question, refer to the HipLocal case study.

Which Google Cloud product addresses HipLocal’s business requirements for service level indicators and objectives?

Options:

A.  

Cloud Profiler

B.  

Cloud Monitoring

C.  

Cloud Trace

D.  

Cloud Logging

Questions 17

For this question, refer to the HipLocal case study.

HipLocal is expanding into new locations. They must capture additional data each time the application is launched in a new European country. This is causing delays in the development process due to constant schema changes and a lack of environments for conducting testing on the application changes. How should they resolve the issue while meeting the business requirements?

Options:

A.  

Create new Cloud SQL instances in Europe and North America for testing and deployment. Provide developers with local MySQL instances to conduct testing on the application changes.

B.  

Migrate data to Bigtable. Instruct the development teams to use the Cloud SDK to emulate a local Bigtable development environment.

C.  

Move from Cloud SQL to MySQL hosted on Compute Engine. Replicate hosts across regions in the Americas and Europe. Provide developers with local MySQL instances to conduct testing on the application changes.

D.  

Migrate data to Firestore in Native mode and set up instan

Questions 18

In order to meet their business requirements, how should HipLocal store their application state?

Options:

A.  

Use local SSDs to store state.

B.  

Put a memcache layer in front of MySQL.

C.  

Move the state storage to Cloud Spanner.

D.  

Replace the MySQL instance with Cloud SQL.

Questions 19

HipLocal’s data science team wants to analyze user reviews.

How should they prepare the data?

Options:

A.  

Use the Cloud Data Loss Prevention API for redaction of the review dataset.

B.  

Use the Cloud Data Loss Prevention API for de-identification of the review dataset.

C.  

Use the Cloud Natural Language Processing API for redaction of the review dataset.

D.  

Use the Cloud Natural Language Processing API for de-identification of the review dataset.
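For reference, de-identifying free-text reviews with the Cloud Data Loss Prevention API looks roughly like the following. A minimal sketch using the Python client library, assuming a placeholder project ID, a sample review string, and two example infoTypes:

    from google.cloud import dlp_v2

    dlp = dlp_v2.DlpServiceClient()
    parent = "projects/my-project"  # placeholder project ID

    review = "Great service! Contact me at jane@example.com."  # example review text

    response = dlp.deidentify_content(
        request={
            "parent": parent,
            "inspect_config": {"info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PERSON_NAME"}]},
            "deidentify_config": {
                "info_type_transformations": {
                    "transformations": [
                        {"primitive_transformation": {"replace_with_info_type_config": {}}}
                    ]
                }
            },
            "item": {"value": review},
        }
    )
    print(response.item.value)  # sensitive values replaced with infoType placeholders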

Questions 20

For this question, refer to the HipLocal case study.

HipLocal's application uses Cloud Client Libraries to interact with Google Cloud. HipLocal needs to configure authentication and authorization in the Cloud Client Libraries to implement least privileged access for the application. What should they do?

Options:

A.  

Create an API key. Use the API key to interact with Google Cloud.

B.  

Use the default compute service account to interact with Google Cloud.

C.  

Create a service account for the application. Export and deploy the private key for the application. Use the service account to interact with Google Cloud.

D.  

Create a service account for the application and for each Google Cloud API used by the application. Export and deploy the private keys used by the application. Use the service account with one Google Cloud API to interact with Google Cloud.
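For background, Cloud Client Libraries resolve credentials through Application Default Credentials, so the identity attached to the runtime determines what the application can access. A minimal sketch, assuming the google-cloud-storage library; the key file path in the commented alternative is a placeholder.

    import google.auth
    from google.cloud import storage

    # With Application Default Credentials the library uses whatever identity
    # is attached to the environment (for example a dedicated service account).
    credentials, project_id = google.auth.default()
    client = storage.Client(credentials=credentials, project=project_id)

    # Alternatively, credentials can be loaded from an exported key file
    # (path is a placeholder); exported keys then need careful storage and rotation.
    # from google.oauth2 import service_account
    # credentials = service_account.Credentials.from_service_account_file("sa-key.json")

    for bucket in client.list_buckets():
        print(bucket.name)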

Questions 21

Which service should HipLocal use for their public APIs?

Options:

A.  

Cloud Armor

B.  

Cloud Functions

C.  

Cloud Endpoints

D.  

Shielded Virtual Machines

Questions 22

HipLocal wants to improve the resilience of their MySQL deployment, while also meeting their business and technical requirements.

Which configuration should they choose?

Options:

A.  

Use the current single instance MySQL on Compute Engine and several read-only MySQL servers on Compute Engine.

B.  

Use the current single instance MySQL on Compute Engine, and replicate the data to Cloud SQL in an external master configuration.

C.  

Replace the current single instance MySQL instance with Cloud SQL, and configure high availability.

D.  

Replace the current single instance MySQL instance with Cloud SQL, and Google provides redundancy without further configuration.

Questions 23

HipLocal's APIs are showing occasional failures, but they cannot find a pattern. They want to collect some metrics to help them troubleshoot.

What should they do?

Options:

A.  

Take frequent snapshots of all of the VMs.

B.  

Install the Stackdriver Logging agent on the VMs.

C.  

Install the Stackdriver Monitoring agent on the VMs.

D.  

Use Stackdriver Trace to look for performance bottlenecks.

Questions 24

For this question, refer to the HipLocal case study.

HipLocal wants to reduce the latency of their services for users in global locations. They have created read replicas of their database in locations where their users reside and configured their service to read traffic using those replicas. How should they further reduce latency for all database interactions with the least amount of effort?

Options:

A.  

Migrate the database to Bigtable and use it to serve all global user traffic.

B.  

Migrate the database to Cloud Spanner and use it to serve all global user traffic.

C.  

Migrate the database to Firestore in Datastore mode and use it to serve all global user traffic.

D.  

Migrate the services to Google Kubernetes Engine and use a load balancer service to better scale the application.

Questions 25

Which database should HipLocal use for storing user activity?

Options:

A.  

BigQuery

B.  

Cloud SQL

C.  

Cloud Spanner

D.  

Cloud Datastore

Questions 26

HipLocal's .NET-based auth service fails under intermittent load.

What should they do?

Options:

A.  

Use App Engine for autoscaling.

B.  

Use Cloud Functions for autoscaling.

C.  

Use a Compute Engine cluster for the service.

D.  

Use a dedicated Compute Engine virtual machine instance for the service.

Questions 27

HipLocal is configuring their access controls.

Which firewall configuration should they implement?

Options:

A.  

Block all traffic on port 443.

B.  

Allow all traffic into the network.

C.  

Allow traffic on port 443 for a specific tag.

D.  

Allow all traffic on port 443 into the network.

Questions 28

Which service should HipLocal use to enable access to internal apps?

Options:

A.  

Cloud VPN

B.  

Cloud Armor

C.  

Virtual Private Cloud

D.  

Cloud Identity-Aware Proxy

Questions 29

Your teammate has asked you to review the code below. Its purpose is to efficiently add a large number of small rows to a BigQuery table.

Which improvement should you suggest your teammate make?

Options:

A.  

Include multiple rows with each request.

B.  

Perform the inserts in parallel by creating multiple threads.

C.  

Write each row to a Cloud Storage object, then load into BigQuery.

D.  

Write each row to a Cloud Storage object in parallel, then load into BigQuery.
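The code sample referenced by the question is not reproduced here. As an illustration of batching many small rows into a single request, a minimal sketch with the google-cloud-bigquery Python library, assuming a placeholder table ID:

    from google.cloud import bigquery

    client = bigquery.Client()
    table_id = "my-project.my_dataset.transactions"  # placeholder table ID

    # Batch many small rows into a single streaming-insert request instead of
    # issuing one request per row.
    rows = [
        {"account": "A-100", "amount": 12.50},
        {"account": "A-101", "amount": 7.25},
        # ... hundreds of rows per call
    ]

    errors = client.insert_rows_json(table_id, rows)
    if errors:
        print("Insert errors:", errors)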

Questions 30

You are deploying a single website on App Engine that needs to be accessible via the URL http://www.altostrat.com/. What should you do?

Options:

A.  

Verify domain ownership with Webmaster Central. Create a DNS CNAME record to point to the App Engine canonical name ghs.googlehosted.com.

B.  

Verify domain ownership with Webmaster Central. Define an A record pointing to the single global App Engine IP address.

C.  

Define a mapping in dispatch.yaml to point the domain www.altostrat.com to your App Engine service. Create a DNS CNAME record to point to the App Engine canonical name ghs.googlehosted.com.

D.  

Define a mapping in dispatch.yaml to point the domain www.altostrat.com to your App Engine service. Define an A record pointing to the single global App Engine IP address.

Questions 31

You are developing an online gaming platform as a microservices application on Google Kubernetes Engine (GKE). Users on social media are complaining about long loading times for certain URL requests to the application. You need to investigate performance bottlenecks in the application and identify which HTTP requests have a significantly high latency span in user requests. What should you do?

Options:

A.  

Instrument your microservices by installing the OpenTelemetry tracing package. Update your application code to send traces to Trace for inspection and analysis. Create an analysis report on Trace to analyze user requests.

B.  

Configure GKE workload metrics using kubectl. Select all Pods to send their metrics to Cloud Monitoring. Create a custom dashboard of application metrics in Cloud Monitoring to determine performance bottlenecks of your GKE cluster.

C.  

Install tcpdump on your GKE nodes. Run tcpdump to capture network traffic over an extended period of time to collect data. Analyze the data files using Wireshark to determine the cause of high latency.

D.  

Update your microservices to log HTTP request methods and URL paths to STDOUT. Use the logs router to send container logs to Cloud Logging. Create filters in Cloud Logging to evaluate the latency of user requests across different methods and URL paths.
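For context, instrumenting a service with OpenTelemetry and exporting spans to Cloud Trace follows this general shape. A minimal sketch, assuming the opentelemetry-sdk and opentelemetry-exporter-gcp-trace packages; the span name and handler are placeholders.

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.cloud_trace import CloudTraceSpanExporter

    # Send spans to Cloud Trace in batches.
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(CloudTraceSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer(__name__)

    def load_player_profile(player_id):  # placeholder request handler
        with tracer.start_as_current_span("load-player-profile"):
            ...  # calls to downstream microservices each get their own latency span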

Questions 32

You are parsing a log file that contains three columns: a timestamp, an account number (a string), and a transaction amount (a number). You want to calculate the sum of all transaction amounts for each unique account number efficiently.

Which data structure should you use?

Options:

A.  

A linked list

B.  

A hash table

C.  

A two-dimensional array

D.  

A comma-delimited string
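As an illustration, a hash table (a dict in Python) keyed by account number accumulates the totals in a single pass with constant-time lookups on average. A minimal sketch, assuming a comma-delimited file named transactions.log (a placeholder):

    from collections import defaultdict

    # Hash table keyed by account number; values are running totals.
    totals = defaultdict(float)

    with open("transactions.log") as log:  # placeholder log file name
        for line in log:
            timestamp, account, amount = line.strip().split(",")
            totals[account] += float(amount)

    for account, total in totals.items():
        print(account, total)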

Questions 33

You are using Cloud Run to host a web application. You need to securely obtain the application project ID and region where the application is running and display this information to users. You want to use the most performant approach. What should you do?

Options:

A.  

Use HTTP requests to query the available metadata server at the http://metadata.google.internal/ endpoint with the Metadata-Flavor: Google header.

B.  

In the Google Cloud console, navigate to the Project Dashboard and gather configuration details. Navigate to the Cloud Run “Variables & Secrets” tab, and add the desired environment variables in Key:Value format.

C.  

In the Google Cloud console, navigate to the Project Dashboard and gather configuration details. Write the application configuration information to Cloud Run's in-memory container filesystem.

D.  

Make an API call to the Cloud Asset Inventory API from the application and format the request to include instance metadata.
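For reference, querying the metadata server from inside the running container is a couple of HTTP GETs. A minimal sketch using the requests library; on Cloud Run the region value is expected to come back as projects/PROJECT_NUMBER/regions/REGION, so only the last path segment is kept.

    import requests

    METADATA = "http://metadata.google.internal/computeMetadata/v1"
    HEADERS = {"Metadata-Flavor": "Google"}

    project_id = requests.get(f"{METADATA}/project/project-id", headers=HEADERS).text

    # Keep only the final segment of projects/PROJECT_NUMBER/regions/REGION.
    region = requests.get(f"{METADATA}/instance/region", headers=HEADERS).text.split("/")[-1]

    print(project_id, region)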

Questions 34

You are designing an application that uses a microservices architecture. You are planning to deploy the application in the cloud and on-premises. You want to make sure the application can scale up on demand and also use managed services as much as possible. What should you do?

Options:

A.  

Deploy open source Istio in a multi-cluster deployment on multiple Google Kubernetes Engine (GKE) clusters managed by Anthos.

B.  

Create a GKE cluster in each environment with Anthos, and use Cloud Run for Anthos to deploy your application to each cluster.

C.  

Install a GKE cluster in each environment with Anthos, and use Cloud Build to create a Deployment for your application in each cluster.

D.  

Create a GKE cluster in the cloud and install open-source Kubernetes on-premises. Use an external load balancer service to distribute traffic across the two environments.

Questions 35

You have a web application that publishes messages to Pub/Sub. You plan to build new versions of the application locally and need to quickly test Pub/Sub integration for each new build. How should you configure local testing?

Options:

A.  

Run the gcloud config set api_endpoint_overrides/pubsub https://pubsubemulator.googleapis.com/ command to change the Pub/Sub endpoint prior to starting the application.

B.  

In the Google Cloud console, navigate to the API Library and enable the Pub/Sub API. When developing locally, configure your application to call pubsub.googleapis.com.

C.  

Install Cloud Code on the integrated development environment (IDE). Navigate to Cloud APIs, and enable Pub/Sub against a valid Google Project ID. When developing locally, configure your application to call pubsub.googleapis.com.

D.  

Install the Pub/Sub emulator using gcloud, and start the emulator with a valid Google Project ID. When developing locally, configure your application to use the local emulator by exporting the PUBSUB_EMULATOR_HOST variable.
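For context, the client library is pointed at a locally running emulator through the PUBSUB_EMULATOR_HOST environment variable. A minimal sketch, assuming placeholder project, topic, and host values and that the emulator has already been started with gcloud:

    import os

    # Must be set before the client is created so the library talks to the emulator
    # (e.g. started with: gcloud beta emulators pubsub start --project=my-project).
    os.environ["PUBSUB_EMULATOR_HOST"] = "localhost:8085"  # placeholder host:port

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-project", "build-test-topic")  # placeholders

    publisher.create_topic(request={"name": topic_path})
    future = publisher.publish(topic_path, b"integration test message")
    print("Published message ID:", future.result())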

Questions 36

You are developing an ecommerce web application that uses App Engine standard environment and Memorystore for Redis. When a user logs into the app, the application caches the user’s information (e.g., session, name, address, preferences), which is stored for quick retrieval during checkout.

While testing your application in a browser, you get a 502 Bad Gateway error. You have determined that the application is not connecting to Memorystore. What is the reason for this error?

Options:

A.  

Your Memorystore for Redis instance was deployed without a public IP address.

B.  

You configured your Serverless VPC Access connector in a different region than your App Engine instance.

C.  

The firewall rule allowing a connection between App Engine and Memorystore was removed during an infrastructure update by the DevOps team.

D.  

You configured your application to use a Serverless VPC Access connector on a different subnet in a different availability zone than your App Engine instance.

Questions 37

Your API backend is running on multiple cloud providers. You want to generate reports for the network latency of your API.

Which two steps should you take? (Choose two.)

Options:

A.  

Use Zipkin collector to gather data.

B.  

Use Fluentd agent to gather data.

C.  

Use Stackdriver Trace to generate reports.

D.  

Use Stackdriver Debugger to generate reports.

E.  

Use Stackdriver Profiler to generate reports.

Questions 38

Your development team has built several Cloud Functions using Java along with corresponding integration and service tests. You are building and deploying the functions and launching the tests using Cloud Build. Your Cloud Build job is reporting deployment failures immediately after successfully validating the code. What should you do?

Options:

A.  

Check the maximum number of Cloud Function instances.

B.  

Verify that your Cloud Build trigger has the correct build parameters.

C.  

Retry the tests using the truncated exponential backoff polling strategy.

D.  

Verify that the Cloud Build service account is assigned the Cloud Functions Developer role.
