
Google Cloud Certified - Professional Cloud Architect Exam Questions and Answers


Last Update: Apr 24, 2024
Total Questions: 275

We are offering FREE Professional-Cloud-Architect Google exam questions. Just sign up, provide your details, practice with the free Professional-Cloud-Architect exam questions, and then move on to the complete pool of Google Cloud Certified - Professional Cloud Architect exam questions.

Professional-Cloud-Architect PDF: $35 (regular price $99.99)

Professional-Cloud-Architect Testing Engine: $42 (regular price $119.99)

Professional-Cloud-Architect PDF + Testing Engine: $56 (regular price $159.99)
Questions 1

For this question, refer to the JencoMart case study.

The JencoMart security team requires that all Google Cloud Platform infrastructure is deployed using a least privilege model with separation of duties for administration between production and development resources. What Google domain and project structure should you recommend?

Options:

A.  

Create two G Suite accounts to manage users: one for development/test/staging and one for production. Each account should contain one project for every application.

B.  

Create two G Suite accounts to manage users: one with a single project for all development applications and one with a single project for all production applications.

C.  

Create a single G Suite account to manage users with each stage of each application in its own project.

D.  

Create a single G Suite account to manage users with one project for the development/test/staging environment and one project for the production environment.

Questions 2

For this question, refer to the JencoMart case study.

The migration of JencoMart’s application to Google Cloud Platform (GCP) is progressing too slowly. The infrastructure is shown in the diagram. You want to maximize throughput. What are three potential bottlenecks? (Choose 3 answers.)

Options:

A.  

A single VPN tunnel, which limits throughput

B.  

A tier of Google Cloud Storage that is not suited for this task

C.  

A copy command that is not suited to operate over long distances

D.  

Fewer virtual machines (VMs) in GCP than on-premises machines

E.  

A separate storage layer outside the VMs, which is not suited for this task

F.  

Complicated internet connectivity between the on-premises infrastructure and GCP

Questions 3

For this question, refer to the JencoMart case study.

A few days after JencoMart migrates the user credentials database to Google Cloud Platform and shuts down the old server, the new database server stops responding to SSH connections. It is still serving database requests to the application servers correctly. Which three steps should you take to diagnose the problem? (Choose 3 answers.)

Options:

A.  

Delete the virtual machine (VM) and disks and create a new one.

B.  

Delete the instance, attach the disk to a new VM, and investigate.

C.  

Take a snapshot of the disk and connect to a new machine to investigate.

D.  

Check inbound firewall rules for the network the machine is connected to.

E.  

Connect the machine to another network with very simple firewall rules and investigate.

F.  

Print the Serial Console output for the instance for troubleshooting, activate the interactive console, and investigate.

Questions 4

Your company creates rendering software which users can download from the company website. Your company has customers all over the world. You want to minimize latency for all your customers. You want to follow Google-recommended practices.

How should you store the files?

Options:

A.  

Save the files in a Multi-Regional Cloud Storage bucket.

B.  

Save the files in a Regional Cloud Storage bucket, one bucket per zone of the region.

C.  

Save the files in multiple Regional Cloud Storage buckets, one bucket per zone per region.

D.  

Save the files in multiple Multi-Regional Cloud Storage buckets, one bucket per multi-region.

Questions 5

For this question, refer to the JencoMart case study.

JencoMart has decided to migrate user profile storage to Google Cloud Datastore and the application servers to Google Compute Engine (GCE). During the migration, the existing infrastructure will need access to Datastore to upload the data. What service account key-management strategy should you recommend?

Options:

A.  

Provision service account keys for the on-premises infrastructure and for the GCE virtual machines (VMs).

B.  

Authenticate the on-premises infrastructure with a user account and provision service account keys for the VMs.

C.  

Provision service account keys for the on-premises infrastructure and use Google Cloud Platform (GCP) managed keys for the VMs.

D.  

Deploy a custom authentication service on GCE/Google Container Engine (GKE) for the on-premises infrastructure and use GCP managed keys for the VMs.
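
For context, a minimal sketch (under assumed names, not the answer key) of how an on-premises process can authenticate to Datastore with a downloaded service account key; GCE VMs would typically rely on their attached, Google-managed service account instead. The key path, project ID, and entity here are hypothetical:

    # Sketch: on-premises authentication to Datastore with a service account
    # key file; GCE VMs would instead use their attached service account.
    from google.cloud import datastore
    from google.oauth2 import service_account

    creds = service_account.Credentials.from_service_account_file(
        "/secure/keys/migration-sa.json")  # hypothetical key file path
    client = datastore.Client(project="jencomart-migration", credentials=creds)

    entity = datastore.Entity(key=client.key("UserProfile", "user-123"))
    entity.update({"displayName": "Example User"})
    client.put(entity)  # upload one profile record during the migration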

Questions 6

Your company places a high value on being responsive and meeting customer needs quickly. Their primary business objectives are release speed and agility. You want to reduce the chance of security errors being accidentally introduced. Which two actions can you take? (Choose 2 answers.)

Options:

A.  

Ensure every code check-in is peer reviewed by a security SME.

B.  

Use source code security analyzers as part of the CI/CD pipeline.

C.  

Ensure you have stubs to unit test all interfaces between components.

D.  

Enable code signing and a trusted binary repository integrated with your CI/CD pipeline.

E.  

Run a vulnerability security scanner as part of your continuous-integration/continuous-delivery (CI/CD) pipeline.

Questions 7

Your company has an application running on App Engine that allows users to upload music files and share them with other people. You want to allow users to upload files directly into Cloud Storage from their browser session. The payload should not be passed through the backend. What should you do?

Options:

A.  

1. Set a CORS configuration in the target Cloud Storage bucket where the base URL of the App Engine application is an allowed origin.

2. Use the Cloud Storage Signed URL feature to generate a POST URL.

B.  

1. Set a CORS configuration in the target Cloud Storage bucket where the base URL of the App Engine application is an allowed origin.

2. Assign the Cloud Storage WRITER role to users who upload files.

C.  

1. Use the Cloud Storage Signed URL feature to generate a POST URL.

2. Use App Engine default credentials to sign requests against Cloud Storage.

D.  

1. Assign the Cloud Storage WRITER role to users who upload files.

2. Use App Engine default credentials to sign requests against Cloud Storage.
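
As background for the CORS half of these options, here is a minimal sketch of attaching a CORS rule to a bucket with the Python client; the bucket name and App Engine origin are placeholders:

    # Sketch: allow browser POST uploads from the App Engine app's origin.
    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("music-uploads")  # hypothetical bucket
    bucket.cors = [{
        "origin": ["https://myapp.appspot.com"],  # App Engine base URL
        "method": ["POST"],
        "responseHeader": ["Content-Type"],
        "maxAgeSeconds": 3600,
    }]
    bucket.patch()  # persist the CORS configuration on the bucket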

Questions 8

You have developed an application using Cloud ML Engine that recognizes famous paintings from uploaded images. You want to test the application and allow specific people to upload images for the next 24 hours. Not all users have a Google Account. How should you have users upload images?

Options:

A.  

Have users upload the images to Cloud Storage. Protect the bucket with a password that expires after 24 hours.

B.  

Have users upload the images to Cloud Storage using a signed URL that expires after 24 hours.

C.  

Create an App Engine web application where users can upload images. Configure App Engine to disable the application after 24 hours. Authenticate users via Cloud Identity.

D.  

Create an App Engine web application where users can upload images for the next 24 hours. Authenticate users via Cloud Identity.
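
For reference, a hedged sketch of generating a V4 signed URL that expires after 24 hours using the Python client; the bucket, object name, and content type are hypothetical, and the caller needs credentials capable of signing (for example, a service account):

    # Sketch: a signed URL anyone holding it can use to upload one image
    # for 24 hours, with no Google Account required.
    import datetime
    from google.cloud import storage

    client = storage.Client()
    blob = client.bucket("painting-uploads").blob("submissions/image-001.jpg")
    url = blob.generate_signed_url(
        version="v4",
        expiration=datetime.timedelta(hours=24),  # hard 24-hour cutoff
        method="PUT",
        content_type="image/jpeg",
    )
    print(url)  # hand this URL to the invited tester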

Questions 9

You are working at a sports association whose members range in age from 8 to 30. The association collects a large amount of health data, such as sustained injuries. You are storing this data in BigQuery. Current legislation requires you to delete such information upon request of the subject. You want to design a solution that can accommodate such a request. What should you do?

Options:

A.  

Use a unique identifier for each individual. Upon a deletion request, delete all rows from BigQuery with this identifier.

B.  

When ingesting new data in BigQuery, run the data through the Data Loss Prevention (DLP) API to identify any personal information. As part of the DLP scan, save the result to Data Catalog. Upon a deletion request, query Data Catalog to find the column with personal information.

C.  

Create a BigQuery view over the table that contains all data. Upon a deletion request, exclude the rows that affect the subject’s data from this view. Use this view instead of the source table for all analysis tasks.

D.  

Use a unique identifier for each individual. Upon a deletion request, overwrite the column with the unique identifier with a salted SHA256 of its value.
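
As context for the identifier-based options, a minimal sketch of deleting one subject's rows from BigQuery with a parameterized DML statement; the dataset, table, and column names are hypothetical:

    # Sketch: delete all rows belonging to one subject by unique identifier.
    from google.cloud import bigquery

    client = bigquery.Client()
    job = client.query(
        "DELETE FROM `health.injury_records` WHERE subject_id = @subject_id",
        job_config=bigquery.QueryJobConfig(
            query_parameters=[
                bigquery.ScalarQueryParameter("subject_id", "STRING", "subject-42")
            ]
        ),
    )
    job.result()  # block until the delete has committed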

Questions 10

Your architecture calls for the centralized collection of all admin activity and VM system logs within your project.

How should you collect these logs from both VMs and services?

Options:

A.  

All admin and VM system logs are automatically collected by Stackdriver.

B.  

Stackdriver automatically collects admin activity logs for most services. The Stackdriver Logging agent must be installed on each instance to collect system logs.

C.  

Launch a custom syslogd compute instance and configure your GCP project and VMs to forward all logs to it.

D.  

Install the Stackdriver Logging agent on a single compute instance and let it collect all audit and access logs for your environment.

Questions 11

Your customer wants to capture multiple GBs of aggregate real-time key performance indicators (KPIs) from their game servers running on Google Cloud Platform and monitor the KPIs with low latency. How should they capture the KPIs?

Options:

A.  

Store time-series data from the game servers in Google Bigtable, and view it using Google Data Studio.

B.  

Output custom metrics to Stackdriver from the game servers, and create a Dashboard in Stackdriver Monitoring Console to view them.

C.  

Schedule BigQuery load jobs to ingest analytics files uploaded to Cloud Storage every ten minutes, and visualize the results in Google Data Studio.

D.  

Insert the KPIs into Cloud Datastore entities, and run ad hoc analysis and visualizations of them in Cloud Datalab.

Questions 12

TerramEarth has about 1 petabyte (PB) of vehicle testing data in a private data center. You want to move the data to Cloud Storage for your machine learning team. Currently, a 1-Gbps interconnect link is available for you. The machine learning team wants to start using the data in a month. What should you do?

Options:

A.  

Request Transfer Appliances from Google Cloud, export the data to appliances, and return the appliances to Google Cloud.

B.  

Configure the Storage Transfer Service from Google Cloud to send the data from your data center to Cloud Storage.

C.  

Make sure there are no other users consuming the 1 Gbps link, and use multi-thread transfer to upload the data to Cloud Storage.

D.  

Export files to an encrypted USB device, send the device to Google Cloud, and request an import of the data to Cloud Storage.
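
A quick back-of-the-envelope calculation shows why the 1-Gbps link is the crux here (this assumes the link could be fully dedicated to the transfer, which is optimistic):

    # Rough transfer-time estimate for 1 PB over a sustained 1 Gbps link.
    petabyte_bits = 1e15 * 8         # 1 PB in bits, decimal units
    seconds = petabyte_bits / 1e9    # at a constant 1 Gbps
    print(seconds / 86400)           # ~92.6 days, well past the one-month goal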

Questions 13

Your company has an application running on a deployment in a GKE cluster. You have a separate cluster for development, staging and production. You have discovered that the team is able to deploy a Docker image to the production cluster without first testing the deployment in development and then staging. You want to allow the team to have autonomy but want to prevent this from happening. You want a Google Cloud solution that can be implemented quickly with minimal effort. What should you do?

Options:

A.  

Create a Kubernetes admission controller to prevent the container from starting if it is not approved for usage in the given environment

B.  

Configure a Kubernetes lifecycle hook to prevent the container from starting if it is not approved for usage in the given environment

C.  

Implement a corporate policy to prevent teams from deploying a Docker image to an environment unless the Docker image was tested in an earlier environment

D.  

Configure the binary authorization policies for the development, staging, and production clusters. Create attestations as part of the continuous integration pipeline.

Questions 14

For this question, refer to the TerramEarth case study. To be compliant with European GDPR regulation, TerramEarth is required to delete data generated from its European customers after a period of 36 months when it contains personal data. In the new architecture, this data will be stored in both Cloud Storage and BigQuery. What should you do?

Options:

A.  

Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.

B.  

Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.

C.  

Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.

D.  

Create a BigQuery time-partitioned table for the European data, and set the partition period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.
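
For background on the partition-expiration options, a minimal sketch of creating a day-partitioned BigQuery table whose partitions expire after roughly 36 months; the table ID and schema are hypothetical:

    # Sketch: partitions older than ~36 months are dropped automatically.
    from google.cloud import bigquery

    client = bigquery.Client()
    table = bigquery.Table(
        "terramearth.eu_telemetry.events",  # hypothetical project.dataset.table
        schema=[bigquery.SchemaField("vehicle_id", "STRING")],
    )
    table.time_partitioning = bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY,
        expiration_ms=36 * 30 * 24 * 60 * 60 * 1000,  # ~36 months in ms
    )
    client.create_table(table)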

Questions 15

TerramEarth has a legacy web application that you cannot migrate to the cloud. However, you still want to build a cloud-native way to monitor the application. If the application goes down, you want the URL to point to a "Site is unavailable" page as soon as possible. You also want your Ops team to receive a notification for the issue. You need to build a reliable solution for minimum cost.

What should you do?

Options:

A.  

Create a scheduled job in Cloud Run to invoke a container every minute. The container will check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.

B.  

Create a cron job on a Compute Engine VM that runs every minute. The cron job invokes a Python program to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.

C.  

Create a Cloud Monitoring uptime check to validate the application URL. If it fails, put a message in a Pub/Sub queue that triggers a Cloud Function to switch the URL to the "Site is unavailable" page, and notify the Ops team.

D.  

Use Cloud Error Reporting to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.
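
As context for the uptime-check option, a sketch of a Pub/Sub-triggered background Cloud Function (1st-gen signature) that could react to the alert message; both helper calls are hypothetical placeholders:

    # Sketch: react to an uptime-check alert delivered via Pub/Sub.
    import base64

    def handle_uptime_alert(event, context):
        payload = base64.b64decode(event["data"]).decode("utf-8")
        print(f"Uptime check alert received: {payload}")
        # switch_dns_to_unavailable_page()  # hypothetical helper
        # notify_ops_team(payload)          # hypothetical helper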

Questions 16

For this question, refer to the TerramEarth case study. TerramEarth has decided to store data files in Cloud Storage. You need to configure a Cloud Storage lifecycle rule to store 1 year of data and minimize file storage cost.

Which two actions should you take?

Options:

A.  

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Standard”, and Action: “Set to Coldline”, and create a second GCS life-cycle rule with Age: “365”, Storage Class: “Coldline”, and Action: “Delete”.

B.  

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Coldline”, and Action: “Set to Nearline”, and create a second GCS life-cycle rule with Age: “91”, Storage Class: “Coldline”, and Action: “Set to Nearline”.

C.  

Create a Cloud Storage lifecycle rule with Age: “90”, Storage Class: “Standard”, and Action: “Set to Nearline”, and create a second GCS life-cycle rule with Age: “91”, Storage Class: “Nearline”, and Action: “Set to Coldline”.

D.  

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Standard”, and Action: “Set to Coldline”, and create a second GCS life-cycle rule with Age: “365”, Storage Class: “Nearline”, and Action: “Delete”.
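
For reference only (the ages and classes below are placeholders, not the answer key), lifecycle rules like those in the options can be expressed with the Python client roughly as follows:

    # Sketch: demote objects over time, then delete them after one year.
    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("terramearth-data")  # hypothetical bucket
    bucket.add_lifecycle_set_storage_class_rule(
        "NEARLINE", age=30, matches_storage_class=["STANDARD"])
    bucket.add_lifecycle_set_storage_class_rule(
        "COLDLINE", age=90, matches_storage_class=["NEARLINE"])
    bucket.add_lifecycle_delete_rule(age=365)  # drop objects after a year
    bucket.patch()  # persist the rules on the bucket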

Questions 17

For this question, refer to the TerramEarth case study. A new architecture that writes all incoming data to BigQuery has been introduced. You notice that the data is dirty, and want to ensure data quality on an automated daily basis while managing cost.

What should you do?

Options:

A.  

Set up a streaming Cloud Dataflow job that receives data from the ingestion process. Clean the data in a Cloud Dataflow pipeline.

B.  

Create a Cloud Function that reads data from BigQuery and cleans it. Trigger the Cloud Function from a Compute Engine instance.

C.  

Create a SQL statement on the data in BigQuery, and save it as a view. Run the view daily, and save the result to a new table.

D.  

Use Cloud Dataprep and configure the BigQuery tables as the source. Schedule a daily job to clean the data.

Questions 18

For this question, refer to the TerramEarth case study. You are asked to design a new architecture for the ingestion of the data of the 200,000 vehicles that are connected to a cellular network. You want to follow Google-recommended practices.

Considering the technical requirements, which components should you use for the ingestion of the data?

Options:

A.  

Google Kubernetes Engine with an SSL Ingress

B.  

Cloud IoT Core with public/private key pairs

C.  

Compute Engine with project-wide SSH keys

D.  

Compute Engine with specific SSH keys

Questions 19

You are migrating a Linux-based application from your private data center to Google Cloud. The TerramEarth security team sent you several recent Linux vulnerabilities published by Common Vulnerabilities and Exposures (CVE). You need assistance in understanding how these vulnerabilities could impact your migration. What should you do?

Options:

A.  

Open a support case regarding the CVE and chat with the support engineer.

B.  

Read the CVEs from the Google Cloud Status Dashboard to understand the impact.

C.  

Read the CVEs from the Google Cloud Platform Security Bulletins to understand the impact.

D.

Post a question regarding the CVE on Stack Overflow to get an explanation.

E.

Post a question regarding the CVE in a Google Cloud discussion group to get an explanation.

Questions 20

You have an application that makes HTTP requests to Cloud Storage. Occasionally the requests fail with HTTP status codes of 5xx and 429.

How should you handle these types of errors?

Options:

A.  

Use gRPC instead of HTTP for better performance.

B.  

Implement retry logic using a truncated exponential backoff strategy.

C.  

Make sure the Cloud Storage bucket is multi-regional for geo-redundancy.

D.  

Monitor https://status.cloud.google.com/feed.atom and only make requests if Cloud Storage is not reporting an incident.
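
A minimal sketch of the truncated exponential backoff strategy named in option B, with jitter; the URL and retry limits are illustrative, and most Google client libraries already implement this internally:

    # Sketch: retry 5xx/429 responses with capped exponential delays.
    import random
    import time
    import requests

    def fetch_with_backoff(url, max_retries=6, cap_seconds=32):
        for attempt in range(max_retries):
            resp = requests.get(url)
            if resp.status_code not in (429, 500, 502, 503, 504):
                return resp
            # wait min(2^attempt, cap) seconds plus up to 1 s of jitter
            time.sleep(min(2 ** attempt, cap_seconds) + random.random())
        raise RuntimeError("retries exhausted")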

Questions 21

For this question, refer to the TerramEarth case study.

You start to build a new application that uses a few Cloud Functions for the backend. One use case requires a Cloud Function func_display to invoke another Cloud Function func_query. You want func_query only to accept invocations from func_display. You also want to follow Google's recommended best practices. What should you do?

Options:

A.  

Create a token and pass it in as an environment variable to func_display. When invoking func_query, include the token in the request. Pass the same token to func_query and reject the invocation if the tokens are different.

B.  

Make func_query 'Require authentication.' Create a unique service account and associate it to func_display. Grant the service account the invoker role for func_query. Create an ID token in func_display and include the token in the request when invoking func_query.

C.  

Make func_query 'Require authentication' and only accept internal traffic. Create those two functions in the same VPC. Create an ingress firewall rule for func_query to only allow traffic from func_display.

D.  

Create those two functions in the same project and VPC. Make func_query only accept internal traffic. Create an ingress firewall for func_query to only allow traffic from func_display. Also, make sure both functions use the same service account.
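
For background on the ID-token approach, a hedged sketch of func_display minting an ID token for func_query's URL and passing it as a bearer token; the function URL is a placeholder:

    # Sketch: authenticated function-to-function call using an ID token.
    import requests
    import google.auth.transport.requests
    from google.oauth2 import id_token

    def call_func_query(payload):
        audience = "https://us-central1-myproj.cloudfunctions.net/func_query"
        auth_req = google.auth.transport.requests.Request()
        token = id_token.fetch_id_token(auth_req, audience)  # signed for audience
        return requests.post(
            audience,
            json=payload,
            headers={"Authorization": f"Bearer {token}"},
        )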

Questions 22

For this question, refer to the TerramEarth case study. You need to implement a reliable, scalable GCP solution for the data warehouse for your company, TerramEarth. Considering the TerramEarth business and technical requirements, what should you do?

Options:

A.  

Replace the existing data warehouse with BigQuery. Use table partitioning.

B.  

Replace the existing data warehouse with a Compute Engine instance with 96 CPUs.

C.  

Replace the existing data warehouse with BigQuery. Use federated data sources.

D.  

Replace the existing data warehouse with a Compute Engine instance with 96 CPUs. Add an additional Compute Engine pre-emptible instance with 32 CPUs.

Questions 23

For this question, refer to the JencoMart case study.

JencoMart has built a version of their application on Google Cloud Platform that serves traffic to Asia. You want to measure success against their business and technical goals. Which metrics should you track?

Options:

A.  

Error rates for requests from Asia

B.  

Latency difference between US and Asia

C.  

Total visits, error rates, and latency from Asia

D.  

Total visits and average latency for users in Asia

E.  

The number of character sets present in the database

Questions 24

For this question, refer to the Helicopter Racing League (HRL) case study. The HRL development team releases a new version of their predictive capability application every Tuesday evening at 3 a.m. UTC to a repository. The security team at HRL has developed an in-house penetration test Cloud Function called Airwolf. The security team wants to run Airwolf against the predictive capability application as soon as it is released every Tuesday. You need to set up Airwolf to run at the recurring weekly cadence. What should you do?

Options:

A.  

Set up Cloud Tasks and a Cloud Storage bucket that triggers a Cloud Function.

B.  

Set up a Cloud Logging sink and a Cloud Storage bucket that triggers a Cloud Function.

C.  

Configure the deployment job to notify a Pub/Sub queue that triggers a Cloud Function.

D.  

Set up Identity and Access Management (IAM) and Confidential Computing to trigger a Cloud Function.

Questions 25

For this question, refer to the Helicopter Racing League (HRL) case study. HRL wants better prediction accuracy from their ML prediction models. They want you to use Google’s AI Platform so HRL can understand and interpret the predictions. What should you do?

Options:

A.  

Use Explainable AI.

B.  

Use Vision AI.

C.  

Use Google Cloud’s operations suite.

D.  

Use Jupyter Notebooks.

Questions 26

For this question, refer to the Helicopter Racing League (HRL) case study. Recently HRL started a new regional racing league in Cape Town, South Africa. In an effort to give customers in Cape Town a better user experience, HRL has partnered with the Content Delivery Network provider, Fastly. HRL needs to allow traffic coming from all of the Fastly IP address ranges into their Virtual Private Cloud network (VPC network). You are a member of the HRL security team and you need to configure the update that will allow only the Fastly IP address ranges through the External HTTP(S) load balancer. Which command should you use?

Options:

A.  

gcloud compute firewall rules update hlr-policy \
--priority 1000 \
--target-tags=sourceiplist-fastly \
--allow tcp:443

B.  

gcloud compute security-policies rules update 1000 \
--security-policy hlr-policy \
--expression "evaluatePreconfiguredExpr('sourceiplist-fastly')" \
--action "allow"

C.  

gcloud compute firewall rules update sourceiplist-fastly \
--priority 1000 \
--allow tcp:443

D.  

gcloud compute priority-policies rules update 1000 \
--security-policy from-fastly \
--src-ip-ranges \
--action "allow"

Questions 27

For this question, refer to the Dress4Win case study. You are responsible for the security of data stored in Cloud Storage for your company, Dress4Win. You have already created a set of Google Groups and assigned the appropriate users to those groups. You should use Google best practices and implement the simplest design to meet the requirements.

Considering Dress4Win’s business and technical requirements, what should you do?

Options:

A.  

Assign custom IAM roles to the Google Groups you created in order to enforce security requirements. Encrypt data with a customer-supplied encryption key when storing files in Cloud Storage.

B.  

Assign custom IAM roles to the Google Groups you created in order to enforce security requirements. Enable default storage encryption before storing files in Cloud Storage.

C.  

Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Utilize Google’s default encryption at rest when storing files in Cloud Storage.

D.  

Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Ensure that the default Cloud KMS key is set before storing files in Cloud Storage.

Questions 28

For this question, refer to the Dress4Win case study. Dress4Win is expected to grow to 10 times its size in 1 year with a corresponding growth in data and traffic that mirrors the existing patterns of usage. The CIO has set the target of migrating production infrastructure to the cloud within the next 6 months. How will you configure the solution to scale for this growth without making major application changes and still maximize the ROI?

Options:

A.  

Migrate the web application layer to App Engine, MySQL to Cloud Datastore, and NAS to Cloud Storage. Deploy RabbitMQ, and deploy Hadoop servers using Deployment Manager.

B.  

Migrate RabbitMQ to Cloud Pub/Sub, Hadoop to BigQuery, and NAS to Compute Engine with Persistent Disk storage. Deploy Tomcat, and deploy Nginx using Deployment Manager.

C.  

Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Compute Engine with Persistent Disk storage.

D.  

Implement managed instance groups for the Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Cloud Storage.

Questions 29

For this question, refer to the Dress4Win case study. Considering the given business requirements, how would you automate the deployment of web and transactional data layers?

Options:

A.  

Deploy Nginx and Tomcat using Cloud Deployment Manager to Compute Engine. Deploy a Cloud SQL server to replace MySQL. Deploy Jenkins using Cloud Deployment Manager.

B.  

Deploy Nginx and Tomcat using Cloud Launcher. Deploy a MySQL server using Cloud Launcher. Deploy Jenkins to Compute Engine using Cloud Deployment Manager scripts.

C.  

Migrate Nginx and Tomcat to App Engine. Deploy a Cloud Datastore server to replace the MySQL server in a high-availability configuration. Deploy Jenkins to Compute Engine using Cloud Launcher.

D.  

Migrate Nginx and Tomcat to App Engine. Deploy a MySQL server using Cloud Launcher. Deploy Jenkins to Compute Engine using Cloud Launcher.

Questions 30

For this question, refer to the TerramEarth case study.

Your development team has created a structured API to retrieve vehicle data. They want to allow third parties to develop tools for dealerships that use this vehicle event data. You want to support delegated authorization against this data. What should you do?

Options:

A.  

Build or leverage an OAuth-compatible access control system.

B.  

Build SAML 2.0 SSO compatibility into your authentication system.

C.  

Restrict data access based on the source IP address of the partner systems.

D.  

Create secondary credentials for each dealer that can be given to the trusted third party.

Questions 31

For this question, refer to the Dress4Win case study. To be legally compliant during an audit, Dress4Win must be able to give insight into all administrative actions that modify the configuration or metadata of resources on Google Cloud.

What should you do?

Options:

A.  

Use Stackdriver Trace to create a trace list analysis.

B.  

Use Stackdriver Monitoring to create a dashboard on the project’s activity.

C.  

Enable Cloud Identity-Aware Proxy in all projects, and add the group of Administrators as a member.

D.  

Use the Activity page in the GCP Console and Stackdriver Logging to provide the required insight.

Questions 32

For this question, refer to the Dress4Win case study. You want to ensure that your on-premises architecture meets business requirements before you migrate your solution.

What change in the on-premises architecture should you make?

Options:

A.  

Replace RabbitMQ with Google Pub/Sub.

B.  

Downgrade MySQL to v5.7, which is supported by Cloud SQL for MySQL.

C.  

Resize compute resources to match predefined Compute Engine machine types.

D.  

Containerize the microservices and host them in Google Kubernetes Engine.

Questions 33

For this question, refer to the Dress4Win case study. Which of the compute services should be migrated as-is and would still be an optimized architecture for performance in the cloud?

Options:

A.  

Web applications deployed using App Engine standard environment

B.  

RabbitMQ deployed using an unmanaged instance group

C.  

Hadoop/Spark deployed using Cloud Dataproc Regional in High Availability mode

D.  

Jenkins, monitoring, bastion hosts, security scanners services deployed on custom machine types

Questions 34

For this question, refer to the TerramEarth case study.

TerramEarth has equipped unconnected trucks with servers and sensors to collect telemetry data. Next year they want to use the data to train machine learning models. They want to store this data in the cloud while reducing costs. What should they do?

Options:

A.  

Have the vehicle's computer compress the data in hourly snapshots, and store it in a Google Cloud Storage (GCS) Nearline bucket.

B.  

Push the telemetry data in real time to a streaming Cloud Dataflow job that compresses the data, and store it in Google BigQuery.

C.  

Push the telemetry data in real time to a streaming Cloud Dataflow job that compresses the data, and store it in Cloud Bigtable.

D.  

Have the vehicle's computer compress the data in hourly snapshots, and store it in a GCS Coldline bucket.

Questions 35

For this question, refer to the TerramEarth case study.

The TerramEarth development team wants to create an API to meet the company's business requirements. You want the development team to focus their development effort on business value versus creating a custom framework. Which method should they use?

Options:

A.  

Use Google App Engine with Google Cloud Endpoints. Focus on an API for dealers and partners.

B.  

Use Google App Engine with a JAX-RS Jersey Java-based framework. Focus on an API for the public.

C.  

Use Google App Engine with the Swagger (Open API Specification) framework. Focus on an API for the public.

D.  

Use Google Container Engine with a Django Python container. Focus on an API for the public.

E.  

Use Google Container Engine with a Tomcat container with the Swagger (Open API Specification) framework. Focus on an API for dealers and partners.

Questions 36

For this question, refer to the Mountkirk Games case study.

Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?

Options:

A.  

Create a scalable environment in GCP for simulating production load.

B.  

Use the existing infrastructure to test the GCP-based backend at scale.

C.  

Build stress tests into each component of your application using resources internal to GCP to simulate load.

D.  

Create a set of static environments in GCP to test different levels of load — for example, high, medium, and low.

Questions 37

For this question, refer to the Mountkirk Games case study.

Mountkirk Games wants to set up a continuous delivery pipeline. Their architecture includes many small services that they want to be able to update and roll back quickly. Mountkirk Games has the following requirements:

• Services are deployed redundantly across multiple regions in the US and Europe.

• Only frontend services are exposed on the public internet.

• They can provide a single frontend IP for their fleet of services.

• Deployment artifacts are immutable.

Which set of products should they use?

Options:

A.  

Google Cloud Storage, Google Cloud Dataflow, Google Compute Engine

B.  

Google Cloud Storage, Google App Engine, Google Network Load Balancer

C.  

Google Container Registry, Google Container Engine, Google HTTP(S) Load Balancer

D.  

Google Cloud Functions, Google Cloud Pub/Sub, Google Cloud Deployment Manager

Questions 38

For this question, refer to the Mountkirk Games case study.

Mountkirk Games wants you to design their new testing strategy. How should the test coverage differ from their existing backends on the other platforms?

Options:

A.  

Tests should scale well beyond the prior approaches.

B.  

Unit tests are no longer required, only end-to-end tests.

C.  

Tests should be applied after the release is in the production environment.

D.  

Tests should include directly testing the Google Cloud Platform (GCP) infrastructure.

Questions 39

For this question, refer to the Mountkirk Games case study.

Mountkirk Games needs to create a repeatable and configurable mechanism for deploying isolated application environments. Developers and testers can access each other's environments and resources, but they cannot access staging or production resources. The staging environment needs access to some services from production.

What should you do to isolate development environments from staging and production?

Options:

A.  

Create a project for development and test and another for staging and production.

B.  

Create a network for development and test and another for staging and production.

C.  

Create one subnetwork for development and another for staging and production.

D.  

Create one project for development, a second for staging and a third for production.

Questions 40

For this question, refer to the Mountkirk Games case study.

Mountkirk Games wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements. Which combination of Google technologies will meet all of their requirements?

Options:

A.  

Container Engine, Cloud Pub/Sub, and Cloud SQL

B.  

Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery

C.  

Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow

D.  

Cloud Dataproc, Cloud Pub/Sub, Cloud SQL, and Cloud Dataflow

E.  

Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc

Questions 41

For this question, refer to the Mountkirk Games case study.

Mountkirk Games' gaming servers are not automatically scaling properly. Last month, they rolled out a new feature, which suddenly became very popular. A record number of users are trying to use the service, but many of them are getting 503 errors and very slow response times. What should they investigate first?

Options:

A.  

Verify that the database is online.

B.  

Verify that the project quota hasn't been exceeded.

C.  

Verify that the new feature code did not introduce any performance bugs.

D.  

Verify that the load-testing team is not running their tool against production.

Questions 42

For this question, refer to the JencoMart case study.

JencoMart wants to move their User Profiles database to Google Cloud Platform. Which Google Database should they use?

Options:

A.  

Cloud Spanner

B.  

Google BigQuery

C.  

Google Cloud SQL

D.  

Google Cloud Datastore

Questions 43

You are implementing Firestore for Mountkirk Games. Mountkirk Games wants to give a new game programmatic access to a legacy game's Firestore database. Access should be as restricted as possible. What should you do?

Options:

A.  

Create a service account (SA) in the legacy game's Google Cloud project, add this SA in the new game's IAM page, and then give it the Firebase Admin role in both projects.

B.  

Create a service account (SA) in the legacy game's Google Cloud project, add a second SA in the new game's IAM page, and then give the Organization Admin role to both SAs.

C.  

Create a service account (SA) in the legacy game's Google Cloud project, give it the Firebase Admin role, and then migrate the new game to the legacy game's project.

D.  

Create a service account (SA) in the legacy game's Google Cloud project, give the SA the Organization Admin role, and then give it the Firebase Admin role in both projects.

Questions 44

For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the compute workloads for your company, Mountkirk Games. Considering the Mountkirk Games business and technical requirements, what should you do?

Options:

A.  

Create network load balancers. Use preemptible Compute Engine instances.

B.  

Create network load balancers. Use non-preemptible Compute Engine instances.

C.  

Create a global load balancer with managed instance groups and autoscaling policies. Use preemptible Compute Engine instances.

D.  

Create a global load balancer with managed instance groups and autoscaling policies. Use non-preemptible Compute Engine instances.

Questions 45

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to migrate from their current analytics and statistics reporting model to one that meets their technical requirements on Google Cloud Platform.

Which two steps should be part of their migration plan? (Choose two.)

Options:

A.  

Evaluate the impact of migrating their current batch ETL code to Cloud Dataflow.

B.  

Write a schema migration plan to denormalize data for better performance in BigQuery.

C.  

Draw an architecture diagram that shows how to move from a single MySQL database to a MySQL cluster.

D.  

Load 10 TB of analytics data from a previous game into a Cloud SQL instance, and run test queries against the full dataset to confirm that they complete successfully.

E.  

Integrate Cloud Armor to defend against possible SQL injection attacks in analytics files uploaded to Cloud Storage.

Questions 46

For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for hybrid connectivity between EHR's on-premises systems and Google Cloud. You want to follow Google's recommended practices for production-level applications. Considering the EHR Healthcare business and technical requirements, what should you do?

Options:

A.  

Configure two Partner Interconnect connections in one metro (City), and make sure the Interconnect connections are placed in different metro zones.

B.  

Configure two VPN connections from on-premises to Google Cloud, and make sure the VPN devices on-premises are in separate racks.

C.  

Configure Direct Peering between EHR Healthcare and Google Cloud, and make sure you are peering at least two Google locations.

D.  

Configure two Dedicated Interconnect connections in one metro (City) and two connections in another metro, and make sure the Interconnect connections are placed in different metro zones.

Questions 47

For this question, refer to the EHR Healthcare case study. You are responsible for ensuring that EHR's use of Google Cloud will pass an upcoming privacy compliance audit. What should you do? (Choose two.)

Options:

A.  

Verify EHR's product usage against the list of compliant products on the Google Cloud compliance page.

B.  

Advise EHR to execute a Business Associate Agreement (BAA) with Google Cloud.

C.  

Use Firebase Authentication for EHR's user facing applications.

D.  

Implement Prometheus to detect and prevent security breaches on EHR's web-based applications.

E.  

Use GKE private clusters for all Kubernetes workloads.

Questions 48

For this question, refer to the EHR Healthcare case study. You are responsible for designing the Google Cloud network architecture for Google Kubernetes Engine. You want to follow Google best practices. Considering the EHR Healthcare business and technical requirements, what should you do to reduce the attack surface?

Options:

A.  

Use a private cluster with a private endpoint with master authorized networks configured.

B.  

Use a public cluster with firewall rules and Virtual Private Cloud (VPC) routes.

C.  

Use a private cluster with a public endpoint with master authorized networks configured.

D.  

Use a public cluster with master authorized networks enabled and firewall rules.

Questions 49

For this question, refer to the EHR Healthcare case study. In the past, configuration errors put public IP addresses on backend servers that should not have been accessible from the Internet. You need to ensure that no one can put external IP addresses on backend Compute Engine instances and that external IP addresses can only be configured on frontend Compute Engine instances. What should you do?

Options:

A.  

Create an Organizational Policy with a constraint to allow external IP addresses only on the frontend Compute Engine instances.

B.  

Revoke the compute.networkAdmin role from all users in the project with front end instances.

C.  

Create an Identity and Access Management (IAM) policy that maps the IT staff to the compute.networkAdmin role for the organization.

D.  

Create a custom Identity and Access Management (IAM) role named GCE_FRONTEND with the compute.addresses.create permission.

Questions 50

For this question, refer to the EHR Healthcare case study. You are a developer on the EHR customer portal team. Your team recently migrated the customer portal application to Google Cloud. The load has increased on the application servers, and now the application is logging many timeout errors. You recently incorporated Pub/Sub into the application architecture, and the application is not logging any Pub/Sub publishing errors. You want to improve publishing latency. What should you do?

Options:

A.  

Increase the Pub/Sub Total Timeout retry value.

B.  

Move from a Pub/Sub subscriber pull model to a push model.

C.  

Turn off Pub/Sub message batching.

D.  

Create a backup Pub/Sub message queue.
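
As context for the batching option, a sketch of publisher settings that effectively disable batching so each message is sent immediately; the project and topic names are hypothetical:

    # Sketch: publish every message as soon as it is queued.
    from google.cloud import pubsub_v1

    batch_settings = pubsub_v1.types.BatchSettings(
        max_messages=1,  # send once a single message is buffered
        max_bytes=1,     # or once any payload bytes are buffered
        max_latency=0,   # do not wait to fill a batch
    )
    publisher = pubsub_v1.PublisherClient(batch_settings=batch_settings)
    topic_path = publisher.topic_path("ehr-portal", "portal-events")
    future = publisher.publish(topic_path, b"event-payload")
    print(future.result())  # message ID once the publish completes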

Questions 51

For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for securely deploying workloads to Google Cloud. You also need to ensure that only verified containers are deployed using Google Cloud services. What should you do? (Choose two.)

Options:

A.  

Enable Binary Authorization on GKE, and sign containers as part of a CI/CD pipeline.

B.  

Configure Jenkins to utilize Kritis to cryptographically sign a container as part of a CI/CD pipeline.

C.  

Configure Container Registry to only allow trusted service accounts to create and deploy containers from the registry.

D.  

Configure Container Registry to use vulnerability scanning to confirm that there are no vulnerabilities before deploying the workload.

Questions 52

For this question, refer to the EHR Healthcare case study. EHR has a single Dedicated Interconnect connection between their primary data center and Google's network. This connection satisfies EHR’s network and security policies:

• On-premises servers without public IP addresses need to connect to cloud resources without public IP addresses.

• Traffic flowing from production network management servers to Compute Engine virtual machines should never traverse the public internet.

You need to upgrade the EHR connection to comply with their requirements. The new connection design must support business-critical needs and meet the same network and security policy requirements. What should you do?

Options:

A.

Add a new Dedicated Interconnect connection.

B.

Upgrade the bandwidth on the Dedicated Interconnect connection to 100 Gbps.

C.

Add three new Cloud VPN connections.

D.

Add a new Carrier Peering connection.

Questions 53

For this question, refer to the Dress4Win case study.

You want to ensure Dress4Win's sales and tax records remain available for infrequent viewing by auditors for at least 10 years. Cost optimization is your top priority. Which cloud services should you choose?

Options:

A.  

Google Cloud Storage Coldline to store the data, and gsutil to access the data.

B.  

Google Cloud Storage Nearline to store the data, and gsutil to access the data.

C.  

Google Bigtable with US or EU as the location to store the data, and gcloud to access the data.

D.

BigQuery to store the data, and a web server cluster in a managed instance group to access the data.

E.

Google Cloud SQL mirrored across two distinct regions to store the data, and a Redis cluster in a managed instance group to access the data.
