
ExamsBrite Dumps

Google Professional Data Engineer Exam Question and Answers

Google Professional Data Engineer Exam

Last Update Jul 17, 2025
Total Questions : 376

We are offering free Professional-Data-Engineer Google exam questions. Simply sign up with your details, practice the free Professional-Data-Engineer exam questions, and then move on to the complete pool of Google Professional Data Engineer Exam test questions for further preparation.

Questions 1

You are choosing a NoSQL database to handle telemetry data submitted from millions of Internet-of-Things (IoT) devices. The volume of data is growing at 100 TB per year, and each data entry has about 100 attributes. The data processing pipeline does not require atomicity, consistency, isolation, and durability (ACID). However, high availability and low latency are required.

You need to analyze the data by querying against individual fields. Which three databases meet your requirements? (Choose three.)

Options:

A.  

Redis

B.  

HBase

C.  

MySQL

D.  

MongoDB

E.  

Cassandra

F.  

HDFS with Hive

Discussion 0
Questions 2

You are designing the database schema for a machine learning-based food ordering service that will predict what users want to eat. Here is some of the information you need to store:

    The user profile: What the user likes and doesn’t like to eat

    The user account information: Name, address, preferred meal times

    The order information: When orders are made, from where, to whom

The database will be used to store all the transactional data of the product. You want to optimize the data schema. Which Google Cloud Platform product should you use?

Options:

A.  

BigQuery

B.  

Cloud SQL

C.  

Cloud Bigtable

D.  

Cloud Datastore

Discussion 0
Questions 3

You are developing an application that uses a recommendation engine on Google Cloud. Your solution should display new videos to customers based on past views. Your solution needs to generate labels for the entities in videos that the customer has viewed. Your design must be able to provide very fast filtering suggestions based on data from other customer preferences on several TB of data. What should you do?

Options:

A.  

Build and train a complex classification model with Spark MLlib to generate labels and filter the results. Deploy the models using Cloud Dataproc. Call the model from your application.

B.  

Build and train a classification model with Spark MLlib to generate labels. Build and train a second classification model with Spark MLlib to filter results to match customer preferences. Deploy the models using Cloud Dataproc. Call the models from your application.

C.  

Build an application that calls the Cloud Video Intelligence API to generate labels. Store data in Cloud Bigtable, and filter the predicted labels to match the user’s viewing history to generate preferences.

D.  

Build an application that calls the Cloud Video Intelligence API to generate labels. Store data in Cloud SQL, and join and filter the predicted labels to match the user’s viewing history to generate preferences.

Discussion 0
Questions 4

You work for a manufacturing plant that batches application log files together into a single log file once a day at 2:00 AM. You have written a Google Cloud Dataflow job to process that log file. You need to make sure the log file is processed once per day as inexpensively as possible. What should you do?

Options:

A.  

Change the processing job to use Google Cloud Dataproc instead.

B.  

Manually start the Cloud Dataflow job each morning when you get into the office.

C.  

Create a cron job with Google App Engine Cron Service to run the Cloud Dataflow job.

D.  

Configure the Cloud Dataflow job as a streaming job so that it processes the log data immediately.

Discussion 0
Questions 5

Your new customer has requested daily reports that show their net consumption of Google Cloud compute resources and who used the resources. You need to quickly and efficiently generate these daily reports. What should you do?

Options:

A.  

Do daily exports of Cloud Logging data to BigQuery. Create views filtering by project, log type, resource, and user.

B.  

Filter data in Cloud Logging by project, resource, and user; then export the data in CSV format.

C.  

Filter data in Cloud Logging by project, log type, resource, and user, then import the data into BigQuery.

D.  

Export Cloud Logging data to Cloud Storage in CSV format. Cleanse the data using Dataprep, filtering by project, resource, and user.

Discussion 0
Questions 6

You are building a report-only data warehouse where the data is streamed into BigQuery via the streaming API. Following Google's best practices, you have both a staging and a production table for the data. How should you design your data loading to ensure that there is only one master dataset without affecting performance on either the ingestion or reporting pieces?

Options:

A.  

Have a staging table that is an append-only model, and then update the production table every three hours with the changes written to staging

B.  

Have a staging table that is an append-only model, and then update the production table every ninety minutes with the changes written to staging

C.  

Have a staging table that moves the staged data over to the production table and deletes the contents of the staging table every three hours

D.  

Have a staging table that moves the staged data over to the production table and deletes the contents of the staging table every thirty minutes

Discussion 0
Questions 7

You need to migrate a Redis database from an on-premises data center to a Memorystore for Redis instance. You want to follow Google-recommended practices and perform the migration for minimal cost, time, and effort. What should you do?

Options:

A.  

Make a secondary instance of the Redis database on a Compute Engine instance, and then perform a live cutover.

B.  

Write a shell script to migrate the Redis data, and create a new Memorystore for Redis instance.

C.  

Create a Dataflow job to read the Redis database from the on-premises data center, and write the data to a Memorystore for Redis instance.

D.  

Make an RDB backup of the Redis database, use the gsutil utility to copy the RDB file into a Cloud Storage bucket, and then import the RDB file into the Memorystore for Redis instance.

Discussion 0
Questions 8

You are configuring networking for a Dataflow job. The data pipeline uses custom container images with the libraries that are required for the transformation logic preinstalled. The data pipeline reads the data from Cloud Storage and writes the data to BigQuery. You need to ensure cost-effective and secure communication between the pipeline and Google APIs and services. What should you do?

Options:

A.  

Leave external IP addresses assigned to worker VMs while enforcing firewall rules.

B.  

Disable external IP addresses and establish a Private Service Connect endpoint IP address.

C.  

Disable external IP addresses from worker VMs and enable Private Google Access.

D.  

Enable Cloud NAT to provide outbound internet connectivity while enforcing firewall rules.
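
As a point of reference for options B and C, here is a minimal sketch (not from the source) of launching a Beam Python pipeline on Dataflow with worker external IP addresses disabled. The project, region, subnetwork, and container image names are hypothetical, and keeping workers on internal IPs assumes Private Google Access (or a Private Service Connect endpoint) is available on the chosen subnet.

    from apache_beam.options.pipeline_options import PipelineOptions

    # Hypothetical names throughout; --no_use_public_ips keeps worker VMs on
    # internal IP addresses, so reaching Cloud Storage and BigQuery relies on
    # Private Google Access being enabled on the subnetwork.
    options = PipelineOptions([
        "--runner=DataflowRunner",
        "--project=my-project",
        "--region=us-central1",
        "--subnetwork=regions/us-central1/subnetworks/my-subnet",
        "--no_use_public_ips",
        "--sdk_container_image=us-docker.pkg.dev/my-project/repo/custom-beam:latest",
    ])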

Discussion 0
Questions 9

You need to copy millions of sensitive patient records from a relational database to BigQuery. The total size of the database is 10 TB. You need to design a solution that is secure and time-efficient. What should you do?

Options:

A.  

Export the records from the database as an Avro file. Upload the file to GCS using gsutil, and then load the Avro file into BigQuery using the BigQuery web UI in the GCP Console.

B.  

Export the records from the database as an Avro file. Copy the file onto a Transfer Appliance and send it to Google, and then load the Avro file into BigQuery using the BigQuery web UI in the GCP Console.

C.  

Export the records from the database into a CSV file. Create a public URL for the CSV file, and then use Storage Transfer Service to move the file to Cloud Storage. Load the CSV file into BigQuery using the BigQuery web UI in the GCP Console.

D.  

Export the records from the database as an Avro file. Create a public URL for the Avro file, and then use Storage Transfer Service to move the file to Cloud Storage. Load the Avro file into BigQuery using the BigQuery web UI in the GCP Console.

Discussion 0
Questions 10

You store and analyze your relational data in BigQuery on Google Cloud with all data that resides in US regions. You also have a variety of object stores across Microsoft Azure and Amazon Web Services (AWS), also in US regions. You want to query all your data in BigQuery daily with as little movement of data as possible. What should you do?

Options:

A.  

Load files from AWS and Azure to Cloud Storage with Cloud Shell gsutil rsync arguments.

B.  

Create a Dataflow pipeline to ingest files from Azure and AWS to BigQuery.

C.  

Use the BigQuery Omni functionality and BigLake tables to query files in Azure and AWS.

D.  

Use BigQuery Data Transfer Service to load files from Azure and AWS into BigQuery.

Discussion 0
Questions 11

You issue a new batch job to Dataflow. The job starts successfully, processes a few elements, and then suddenly fails and shuts down. You navigate to the Dataflow monitoring interface where you find errors related to a particular DoFn in your pipeline. What is the most likely cause of the errors?

Options:

A.  

Exceptions in worker code

B.  

Job validation

C.  

Graph or pipeline construction

D.  

Insufficient permissions

Discussion 0
Questions 12

You are selecting services to write and transform JSON messages from Cloud Pub/Sub to BigQuery for a data pipeline on Google Cloud. You want to minimize service costs. You also want to monitor and accommodate input data volume that will vary in size with minimal manual intervention. What should you do?

Options:

A.  

Use Cloud Dataproc to run your transformations. Monitor CPU utilization for the cluster. Resize the number of worker nodes in your cluster via the command line.

B.  

Use Cloud Dataproc to run your transformations. Use the diagnose command to generate an operational output archive. Locate the bottleneck and adjust cluster resources.

C.  

Use Cloud Dataflow to run your transformations. Monitor the job system lag with Stackdriver. Use the default autoscaling setting for worker instances.

D.  

Use Cloud Dataflow to run your transformations. Monitor the total execution time for a sampling of jobs. Configure the job to use non-default Compute Engine machine types when needed.

Discussion 0
Questions 13

You are deploying a new storage system for your mobile application, which is a media streaming service. You decide the best fit is Google Cloud Datastore. You have entities with multiple properties, some of which can take on multiple values. For example, in the entity ‘Movie’ the property ‘actors’ and the property ‘tags’ have multiple values but the property ‘date released’ does not. A typical query would ask for all movies with actor= ordered by date_released or all movies with tag=Comedy ordered by date_released. How should you avoid a combinatorial explosion in the number of indexes?

Options:

A.  

Option A

B.  

Option B

C.  

Option C

D.  

Option D

Discussion 0
Questions 14

You work for a large fast food restaurant chain with over 400,000 employees. You store employee information in Google BigQuery in a Users table consisting of a FirstName field and a LastName field. A member of IT is building an application and asks you to modify the schema and data in BigQuery so the application can query a FullName field consisting of the value of the FirstName field concatenated with a space, followed by the value of the LastName field for each employee. How can you make that data available while minimizing cost?

Options:

A.  

Create a view in BigQuery that concatenates the FirstName and LastName field values to produce the FullName.

B.  

Add a new column called FullName to the Users table. Run an UPDATE statement that updates the FullName column for each user with the concatenation of the FirstName and LastName values.

C.  

Create a Google Cloud Dataflow job that queries BigQuery for the entire Users table, concatenates the FirstName value and LastName value for each user, and loads the proper values for FirstName, LastName, and FullName into a new table in BigQuery.

D.  

Use BigQuery to export the data for the table to a CSV file. Create a Google Cloud Dataproc job to process the CSV file and output a new CSV file containing the proper values for FirstName, LastName and FullName. Run a BigQuery load job to load the new CSV file into BigQuery.
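
For illustration, a minimal sketch of the view approach described in option A, using the BigQuery Python client; the my-project.hr project and dataset names are hypothetical.

    from google.cloud import bigquery

    client = bigquery.Client()
    # A view computes FullName at query time, so no data is rewritten or stored twice.
    client.query("""
        CREATE OR REPLACE VIEW `my-project.hr.UsersWithFullName` AS
        SELECT FirstName,
               LastName,
               CONCAT(FirstName, ' ', LastName) AS FullName
        FROM `my-project.hr.Users`
    """).result()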

Discussion 0
Questions 15

You are implementing a chatbot to help an online retailer streamline their customer service. The chatbot must be able to respond to both text and voice inquiries. You are looking for a low-code or no-code option, and you want to be able to easily train the chatbot to provide answers to keywords. What should you do?

Options:

A.  

Use the Speech-to-Text API to build a Python application in App Engine.

B.  

Use the Speech-to-Text API to build a Python application in a Compute Engine instance.

C.  

Use Dialogflow for simple queries and the Speech-to-Text API for complex queries.

D.  

Use Dialogflow to implement the chatbot, defining the intents based on the most common queries collected.

Discussion 0
Questions 16

You are planning to load some of your existing on-premises data into BigQuery on Google Cloud. You want to either stream or batch-load data, depending on your use case. Additionally, you want to mask some sensitive data before loading into BigQuery. You need to do this in a programmatic way while keeping costs to a minimum. What should you do?

Options:

A.  

Use the BigQuery Data Transfer Service to schedule your migration. After the data is populated in BigQuery, use the connection to the Cloud Data Loss Prevention (Cloud DLP) API to de-identify the necessary data.

B.  

Create your pipeline with Dataflow through the Apache Beam SDK for Python, customizing separate options within your code for streaming, batch processing, and Cloud DLP. Select BigQuery as your data sink.

C.  

Use Cloud Data Fusion to design your pipeline, use the Cloud DLP plug-in to de-identify data within your pipeline, and then move the data into BigQuery.

D.  

Set up Datastream to replicate your on-premises data to BigQuery.

Discussion 0
Questions 17

You want to analyze hundreds of thousands of social media posts daily at the lowest cost and with the fewest steps.

You have the following requirements:

    You will batch-load the posts once per day and run them through the Cloud Natural Language API.

    You will extract topics and sentiment from the posts.

    You must store the raw posts for archiving and reprocessing.

    You will create dashboards to be shared with people both inside and outside your organization.

You need to store both the data extracted from the API to perform analysis as well as the raw social media posts for historical archiving. What should you do?

Options:

A.  

Store the social media posts and the data extracted from the API in BigQuery.

B.  

Store the social media posts and the data extracted from the API in Cloud SQL.

C.  

Store the raw social media posts in Cloud Storage, and write the data extracted from the API into BigQuery.

D.  

Feed the social media posts into the API directly from the source, and write the extracted data from the API into BigQuery.

Discussion 0
Questions 18

You work for a farming company. You have one BigQuery table named sensors, which is about 500 MB and contains the list of your 5000 sensors, with columns for id, name, and location. This table is updated every hour. Each sensor generates one metric every 30 seconds along with a timestamp, which you want to store in BigQuery. You want to run an analytical query on the data once a week for monitoring purposes. You also want to minimize costs. What data model should you use?

Options:

A.  

1. Create a metrics column in the sensors table.

2. Set RECORD type and REPEATED mode for the metrics column.

3. Use an UPDATE statement every 30 seconds to add new metrics.

B.  

1. Create a metrics column in the sensors table.

2. Set RECORD type and REPEATED mode for the metrics column.

3. Use an INSERT statement every 30 seconds to add new metrics.

C.  

1. Create a metrics table partitioned by timestamp.

2. Create a sensorId column in the metrics table that points to the id column in the sensors table.

3. Use an INSERT statement every 30 seconds to append new metrics to the metrics table.

4. Join the two tables, if needed, when running the analytical query.

D.  

1. Create a metrics table partitioned by timestamp.

2. Create a sensorId column in the metrics table that points to the id column in the sensors table.

3. Use an UPDATE statement every 30 seconds to append new metrics to the metrics table.

4. Join the two tables, if needed, when running the analytical query.
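
To make the data models in options C and D concrete, here is a minimal sketch of a timestamp-partitioned metrics table that references the sensors table; the project and dataset names are hypothetical.

    from google.cloud import bigquery

    client = bigquery.Client()
    # Partitioning by the metric timestamp keeps the weekly analytical query cheap,
    # because only the scanned partitions are billed.
    client.query("""
        CREATE TABLE IF NOT EXISTS `my-project.iot.metrics` (
          sensorId INT64,      -- points to the id column of the sensors table
          value    FLOAT64,
          ts       TIMESTAMP
        )
        PARTITION BY DATE(ts)
    """).result()
    # New readings are appended (INSERT or streaming inserts), not applied as UPDATEs.
    client.query("""
        INSERT INTO `my-project.iot.metrics` (sensorId, value, ts)
        VALUES (42, 21.7, CURRENT_TIMESTAMP())
    """).result()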

Discussion 0
Questions 19

You’re training a model to predict housing prices based on an available dataset with real estate properties. Your plan is to train a fully connected neural net, and you’ve discovered that the dataset contains the latitude and longitude of the property. Real estate professionals have told you that the location of the property is highly influential on price, so you’d like to engineer a feature that incorporates this physical dependency.

What should you do?

Options:

A.  

Provide latitude and longitude as input vectors to your neural net.

B.  

Create a numeric column from a feature cross of latitude and longitude.

C.  

Create a feature cross of latitude and longitude, bucketize it at the minute level, and use L1 regularization during optimization.

D.  

Create a feature cross of latitude and longitude, bucketize it at the minute level, and use L2 regularization during optimization.
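
For illustration, a minimal sketch of a bucketized latitude/longitude feature cross with the TensorFlow feature-column API. The bucket boundaries here are degree-level for brevity (the options above describe minute-level granularity), and the feature names are hypothetical.

    import tensorflow as tf

    lat = tf.feature_column.numeric_column("latitude")
    lon = tf.feature_column.numeric_column("longitude")

    # Bucketize each coordinate, then cross the buckets so the model can learn
    # price effects that depend on the combination of latitude and longitude.
    lat_buckets = tf.feature_column.bucketized_column(lat, boundaries=list(range(-90, 91)))
    lon_buckets = tf.feature_column.bucketized_column(lon, boundaries=list(range(-180, 181)))
    location = tf.feature_column.crossed_column([lat_buckets, lon_buckets],
                                                hash_bucket_size=10000)
    location_feature = tf.feature_column.indicator_column(location)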

Discussion 0
Questions 20

You have several Spark jobs that run on a Cloud Dataproc cluster on a schedule. Some of the jobs run in sequence, and some of the jobs run concurrently. You need to automate this process. What should you do?

Options:

A.  

Create a Cloud Dataproc Workflow Template

B.  

Create an initialization action to execute the jobs

C.  

Create a Directed Acyclic Graph in Cloud Composer

D.  

Create a Bash script that uses the Cloud SDK to create a cluster, execute jobs, and then tear down the cluster

Discussion 0
Questions 21

You are loading CSV files from Cloud Storage to BigQuery. The files have known data quality issues, including mismatched data types, such as STRINGs and INT64s in the same column, and inconsistent formatting of values such as phone numbers or addresses. You need to create the data pipeline to maintain data quality and perform the required cleansing and transformation. What should you do?

Options:

A.  

Use Data Fusion to transform the data before loading it into BigQuery.

B.  

Load the CSV files into a staging table with the desired schema, perform the transformations with SQL, and then write the results to the final destination table.

C.  

Create a table with the desired schema, load the CSV files into the table, and perform the transformations in place using SQL.

D.  

Use Data Fusion to convert the CSV files to a self-describing data format, such as Avro, before loading the data into BigQuery.

Discussion 0
Questions 22

Your car factory is pushing machine measurements as messages into a Pub/Sub topic in your Google Cloud project. A Dataflow streaming job that you wrote with the Apache Beam SDK reads these messages, sends an acknowledgment to Pub/Sub, applies some custom business logic in a DoFn instance, and writes the result to BigQuery. You want to ensure that if your business logic fails on a message, the message will be sent to a Pub/Sub topic that you want to monitor for alerting purposes. What should you do?

Options:

A.  

Use an exception handling block in your Dataflow DoFn code to push the messages that failed to be transformed through a side output and to a new Pub/Sub topic. Use Cloud Monitoring to monitor the topic/num_unacked_messages_by_region metric on this new topic.

B.  

Enable retaining of acknowledged messages in your Pub/Sub pull subscription. Use Cloud Monitoring to monitor the subscription/num_retained_acked_messages metric on this subscription.

C.  

Enable dead lettering in your Pub/Sub pull subscription, and specify a new Pub/Sub topic as the dead letter topic. Use Cloud Monitoring to monitor the subscription/dead_letter_message_count metric on your pull subscription.

D.  

Create a snapshot of your Pub/Sub pull subscription. Use Cloud Monitoring to monitor the snapshot/num_messages metric on this snapshot.
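
As a sketch of the side-output pattern described in option A (the managed alternative in option C is configured on the subscription itself rather than in pipeline code), assuming hypothetical topic and table names:

    import json
    import apache_beam as beam

    class ApplyBusinessLogic(beam.DoFn):
        DEAD_LETTER = "dead_letter"

        def process(self, message):
            try:
                record = json.loads(message.decode("utf-8"))
                # ... custom business logic on the measurement goes here ...
                yield record
            except Exception:
                # Route the raw failing message to a side output instead of crashing.
                yield beam.pvalue.TaggedOutput(self.DEAD_LETTER, message)

    # Inside the pipeline (names are hypothetical):
    # results = messages | beam.ParDo(ApplyBusinessLogic()).with_outputs(
    #     ApplyBusinessLogic.DEAD_LETTER, main="ok")
    # results.ok | beam.io.WriteToBigQuery("my-project:factory.measurements")
    # results.dead_letter | beam.io.WriteToPubSub(
    #     topic="projects/my-project/topics/failed-measurements")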

Discussion 0
Questions 23

You need to look at BigQuery data from a specific table multiple times a day. The underlying table you are querying is several petabytes in size, but you want to filter your data and provide simple aggregations to downstream users. You want to run queries faster and get up-to-date insights quicker. What should you do?

Options:

A.  

Run a scheduled query to pull the necessary data at specific intervals daily.

B.  

Create a materialized view based off of the query being run.

C.  

Use a cached query to accelerate time to results.

D.  

Limit the query columns being pulled in the final result.
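
For reference, a minimal sketch of the materialized-view approach in option B, with hypothetical project, dataset, and column names; BigQuery refreshes the precomputed aggregation incrementally as the base table changes.

    from google.cloud import bigquery

    client = bigquery.Client()
    client.query("""
        CREATE MATERIALIZED VIEW `my-project.analytics.daily_event_totals` AS
        SELECT customer_id,
               DATE(event_ts) AS day,
               COUNT(*) AS events,
               SUM(amount) AS total_amount
        FROM `my-project.analytics.events`
        GROUP BY customer_id, day
    """).result()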

Discussion 0
Questions 24

You designed a database for patient records as a pilot project to cover a few hundred patients in three clinics. Your design used a single database table to represent all patients and their visits, and you used self-joins to generate reports. The server resource utilization was at 50%. Since then, the scope of the project has expanded. The database must now store 100 times more patient records. You can no longer run the reports, because they either take too long or they encounter errors with insufficient compute resources. How should you adjust the database design?

Options:

A.  

Add capacity (memory and disk space) to the database server by the order of 200.

B.  

Shard the tables into smaller ones based on date ranges, and only generate reports with prespecified date ranges.

C.  

Normalize the master patient-record table into the patient table and the visits table, and create other necessary tables to avoid self-join.

D.  

Partition the table into smaller tables, with one for each clinic. Run queries against the smaller table pairs, and use unions for consolidated reports.

Discussion 0
Questions 25

Your company uses a proprietary system to send inventory data every 6 hours to a data ingestion service in the cloud. Transmitted data includes a payload of several fields and the timestamp of the transmission. If there are any concerns about a transmission, the system re-transmits the data. How should you deduplicate the data most efficiently?

Options:

A.  

Assign global unique identifiers (GUID) to each data entry.

B.  

Compute the hash value of each data entry, and compare it with all historical data.

C.  

Store each data entry as the primary key in a separate database and apply an index.

D.  

Maintain a database table to store the hash value and other metadata for each data entry.

Discussion 0
Questions 26

You are designing a basket abandonment system for an ecommerce company. The system will send a message to a user based on these rules:

    No interaction by the user on the site for 1 hour

    Has added more than $30 worth of products to the basket

    Has not completed a transaction

You use Google Cloud Dataflow to process the data and decide if a message should be sent. How should you design the pipeline?

Options:

A.  

Use a fixed-time window with a duration of 60 minutes.

B.  

Use a sliding time window with a duration of 60 minutes.

C.  

Use a session window with a gap time duration of 60 minutes.

D.  

Use a global window with a time based trigger with a delay of 60 minutes.
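
To make option C concrete, a minimal Beam sketch of session windowing keyed by user, assuming a hypothetical events PCollection whose elements carry a user_id field; the 60-minute gap mirrors the one-hour inactivity rule.

    import apache_beam as beam
    from apache_beam.transforms import window

    sessions = (
        events  # hypothetical PCollection of basket events
        | "KeyByUser" >> beam.Map(lambda e: (e["user_id"], e))
        | "SessionWindow" >> beam.WindowInto(window.Sessions(gap_size=60 * 60))
        | "GroupPerSession" >> beam.GroupByKey()
    )
    # Downstream logic can then check basket value and transaction status per session.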

Discussion 0
Questions 27

You create an important report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. You notice that visualizations are not showing data that is less than 1 hour old. What should you do?

Options:

A.  

Disable caching by editing the report settings.

B.  

Disable caching in BigQuery by editing table details.

C.  

Refresh your browser tab showing the visualizations.

D.  

Clear your browser history for the past hour, then reload the tab showing the visualizations.

Discussion 0
Questions 28

You are building a model to make clothing recommendations. You know a user’s fashion preference is likely to change over time, so you build a data pipeline to stream new data back to the model as it becomes available. How should you use this data to train the model?

Options:

A.  

Continuously retrain the model on just the new data.

B.  

Continuously retrain the model on a combination of existing data and the new data.

C.  

Train on the existing data while using the new data as your test set.

D.  

Train on the new data while using the existing data as your test set.

Discussion 0
Questions 29

You are running a pipeline in Cloud Dataflow that receives messages from a Cloud Pub/Sub topic and writes the results to a BigQuery dataset in the EU. Currently, your pipeline is located in europe-west4 and has a maximum of 3 workers, instance type n1-standard-1. You notice that during peak periods, your pipeline is struggling to process records in a timely fashion, when all 3 workers are at maximum CPU utilization. Which two actions can you take to increase performance of your pipeline? (Choose two.)

Options:

A.  

Increase the number of max workers

B.  

Use a larger instance type for your Cloud Dataflow workers

C.  

Change the zone of your Cloud Dataflow pipeline to run in us-central1

D.  

Create a temporary table in Cloud Bigtable that will act as a buffer for new data. Create a new step in your pipeline to write to this table first, and then create a new pipeline to write from Cloud Bigtable to BigQuery

E.  

Create a temporary table in Cloud Spanner that will act as a buffer for new data. Create a new step in your pipeline to write to this table first, and then create a new pipeline to write from Cloud Spanner to BigQuery

Discussion 0
Questions 30

You have spent a few days loading data from comma-separated values (CSV) files into the Google BigQuery table CLICK_STREAM. The column DT stores the epoch time of click events. For convenience, you chose a simple schema where every field is treated as the STRING type. Now, you want to compute web session durations of users who visit your site, and you want to change its data type to the TIMESTAMP. You want to minimize the migration effort without making future queries computationally expensive. What should you do?

Options:

A.  

Delete the table CLICK_STREAM, and then re-create it such that the column DT is of the TIMESTAMP type. Reload the data.

B.  

Add a column TS of the TIMESTAMP type to the table CLICK_STREAM, and populate the numeric values from the column DT for each row. Reference the column TS instead of the column DT from now on.

C.  

Create a view CLICK_STREAM_V, where strings from the column DT are cast into TIMESTAMP values. Reference the view CLICK_STREAM_V instead of the table CLICK_STREAM from now on.

D.  

Add two columns to the table CLICK STREAM: TS of the TIMESTAMP type and IS_NEW of the BOOLEAN type. Reload all data in append mode. For each appended row, set the value of IS_NEW to true. For future queries, reference the column TS instead of the column DT, with the WHERE clause ensuring that the value of IS_NEW must be true.

E.  

Construct a query to return every row of the table CLICK_STREAM, while using the built-in function to cast strings from the column DT into TIMESTAMP values. Run the query into a destination table NEW_CLICK_STREAM, in which the column TS is the TIMESTAMP type. Reference the table NEW_CLICK_STREAM instead of the table CLICK_STREAM from now on. In the future, new data is loaded into the table NEW_CLICK_STREAM.
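
For illustration, a minimal sketch of the view described in option C, assuming a hypothetical dataset name and that DT stores epoch seconds as strings:

    from google.cloud import bigquery

    client = bigquery.Client()
    client.query("""
        CREATE OR REPLACE VIEW `my-project.web.CLICK_STREAM_V` AS
        SELECT * EXCEPT (DT),
               TIMESTAMP_SECONDS(CAST(DT AS INT64)) AS TS
        FROM `my-project.web.CLICK_STREAM`
    """).result()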

Discussion 0
Questions 31

Your company is migrating their 30-node Apache Hadoop cluster to the cloud. They want to re-use Hadoop jobs they have already created and minimize the management of the cluster as much as possible. They also want to be able to persist data beyond the life of the cluster. What should you do?

Options:

A.  

Create a Google Cloud Dataflow job to process the data.

B.  

Create a Google Cloud Dataproc cluster that uses persistent disks for HDFS.

C.  

Create a Hadoop cluster on Google Compute Engine that uses persistent disks.

D.  

Create a Cloud Dataproc cluster that uses the Google Cloud Storage connector.

E.  

Create a Hadoop cluster on Google Compute Engine that uses Local SSD disks.

Discussion 0
Questions 32

Flowlogistic is rolling out their real-time inventory tracking system. The tracking devices will all send package-tracking messages, which will now go to a single Google Cloud Pub/Sub topic instead of the Apache Kafka cluster. A subscriber application will then process the messages for real-time reporting and store them in Google BigQuery for historical analysis. You want to ensure the package data can be analyzed over time.

Which approach should you take?

Options:

A.  

Attach the timestamp on each message in the Cloud Pub/Sub subscriber application as they are received.

B.  

Attach the timestamp and Package ID on the outbound message from each publisher device as they are sent to Cloud Pub/Sub.

C.  

Use the NOW() function in BigQuery to record the event’s time.

D.  

Use the automatically generated timestamp from Cloud Pub/Sub to order the data.

Discussion 0
Questions 33

You create a new report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. It is company policy to ensure employees can view only the data associated with their region, so you create and populate a table for each region. You need to enforce the regional access policy to the data.

Which two actions should you take? (Choose two.)

Options:

A.  

Ensure all the tables are included in global dataset.

B.  

Ensure each table is included in a dataset for a region.

C.  

Adjust the settings for each table to allow a related region-based security group view access.

D.  

Adjust the settings for each view to allow a related region-based security group view access.

E.  

Adjust the settings for each dataset to allow a related region-based security group view access.

Discussion 0
Questions 34

You need to compose visualizations for operations teams with the following requirements:

    Telemetry must include data from all 50,000 installations for the most recent 6 weeks (sampling once every minute)

    The report must not be more than 3 hours delayed from live data.

    The actionable report should only show suboptimal links.

    Most suboptimal links should be sorted to the top.

    Suboptimal links can be grouped and filtered by regional geography.

    User response time to load the report must be <5 seconds.

You create a data source to store the last 6 weeks of data, and create visualizations that allow viewers to see multiple date ranges, distinct geographic regions, and unique installation types. You always show the latest data without any changes to your visualizations. You want to avoid creating and updating new visualizations each month. What should you do?

Options:

A.  

Look through the current data and compose a series of charts and tables, one for each possible combination of criteria.

B.  

Look through the current data and compose a small set of generalized charts and tables bound to criteria filters that allow value selection.

C.  

Export the data to a spreadsheet, compose a series of charts and tables, one for each possible combination of criteria, and spread them across multiple tabs.

D.  

Load the data into relational database tables, write a Google App Engine application that queries all rows, summarizes the data across each criteria, and then renders results using the Google Charts and visualization API.

Discussion 0
Questions 35

Flowlogistic’s CEO wants to gain rapid insight into their customer base so his sales team can be better informed in the field. This team is not very technical, so they’ve purchased a visualization tool to simplify the creation of BigQuery reports. However, they’ve been overwhelmed by all the data in the table, and are spending a lot of money on queries trying to find the data they need. You want to solve their problem in the most cost-effective way. What should you do?

Options:

A.  

Export the data into a Google Sheet for visualization.

B.  

Create an additional table with only the necessary columns.

C.  

Create a view on the table to present to the visualization tool.

D.  

Create identity and access management (IAM) roles on the appropriate columns, so only they appear in a query.

Discussion 0
Questions 36

Flowlogistic’s management has determined that the current Apache Kafka servers cannot handle the data volume for their real-time inventory tracking system. You need to build a new system on Google Cloud Platform (GCP) that will feed the proprietary tracking software. The system must be able to ingest data from a variety of global sources, process and query in real-time, and store the data reliably. Which combination of GCP products should you choose?

Options:

A.  

Cloud Pub/Sub, Cloud Dataflow, and Cloud Storage

B.  

Cloud Pub/Sub, Cloud Dataflow, and Local SSD

C.  

Cloud Pub/Sub, Cloud SQL, and Cloud Storage

D.  

Cloud Load Balancing, Cloud Dataflow, and Cloud Storage

Discussion 0
Questions 37

MJTelco needs you to create a schema in Google Bigtable that will allow for the historical analysis of the last 2 years of records. Each record that comes in is sent every 15 minutes, and contains a unique identifier of the device and a data record. The most common query is for all the data for a given device for a given day. Which schema should you use?

Options:

A.  

Rowkey: date#device_id; Column data: data_point

B.  

Rowkey: date; Column data: device_id, data_point

C.  

Rowkey: device_id; Column data: date, data_point

D.  

Rowkey: data_point; Column data: device_id, date

E.  

Rowkey: date#data_point; Column data: device_id
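
As an illustration of how a date#device_id row key supports the most common query (all data for a given device on a given day), a minimal sketch with the Cloud Bigtable Python client; the instance, table, and key formats are hypothetical.

    from google.cloud import bigtable
    from google.cloud.bigtable.row_set import RowSet

    client = bigtable.Client(project="my-project", admin=False)
    table = client.instance("telemetry").table("device_records")

    # All records for one device on one day share a key prefix, so the query is a
    # single contiguous range scan.
    row_set = RowSet()
    row_set.add_row_range_with_prefix("20240115#device-0042")
    for row in table.read_rows(row_set=row_set):
        print(row.row_key)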

Discussion 0
Questions 38

MJTelco is building a custom interface to share data. They have these requirements:

    They need to do aggregations over their petabyte-scale datasets.

    They need to scan specific time range rows with a very fast response time (milliseconds).

Which combination of Google Cloud Platform products should you recommend?

Options:

A.  

Cloud Datastore and Cloud Bigtable

B.  

Cloud Bigtable and Cloud SQL

C.  

BigQuery and Cloud Bigtable

D.  

BigQuery and Cloud Storage

Discussion 0
Questions 39

You want to use Google Stackdriver Logging to monitor Google BigQuery usage. You need an instant notification to be sent to your monitoring tool when new data is appended to a certain table using an insert job, but you do not want to receive notifications for other tables. What should you do?

Options:

A.  

Make a call to the Stackdriver API to list all logs, and apply an advanced filter.

B.  

In the Stackdriver logging admin interface, enable a log sink export to BigQuery.

C.  

In the Stackdriver logging admin interface, enable a log sink export to Google Cloud Pub/Sub, and subscribe to the topic from your monitoring tool.

D.  

Using the Stackdriver API, create a project sink with advanced log filter to export to Pub/Sub, and subscribe to the topic from your monitoring tool.

Discussion 0
Questions 40

Flowlogistic wants to use Google BigQuery as their primary analysis system, but they still have Apache Hadoop and Spark workloads that they cannot move to BigQuery. Flowlogistic does not know how to store the data that is common to both workloads. What should they do?

Options:

A.  

Store the common data in BigQuery as partitioned tables.

B.  

Store the common data in BigQuery and expose authorized views.

C.  

Store the common data encoded as Avro in Google Cloud Storage.

D.  

Store the common data in the HDFS storage for a Google Cloud Dataproc cluster.

Discussion 0
Questions 41

MJTelco’s Google Cloud Dataflow pipeline is now ready to start receiving data from the 50,000 installations. You want to allow Cloud Dataflow to scale its compute power up as required. Which Cloud Dataflow pipeline configuration setting should you update?

Options:

A.  

The zone

B.  

The number of workers

C.  

The disk size per worker

D.  

The maximum number of workers

Discussion 0
Questions 42

You need to compose visualizations for operations teams with the following requirements:

Which approach meets the requirements?

Options:

A.  

Load the data into Google Sheets, use formulas to calculate a metric, and use filters/sorting to show only suboptimal links in a table.

B.  

Load the data into Google BigQuery tables, write Google Apps Script that queries the data, calculates the metric, and shows only suboptimal rows in a table in Google Sheets.

C.  

Load the data into Google Cloud Datastore tables, write a Google App Engine Application that queries all rows, applies a function to derive the metric, and then renders results in a table using the Google charts and visualization API.

D.  

Load the data into Google BigQuery tables, write a Google Data Studio 360 report that connects to your data, calculates a metric, and then uses a filter expression to show only suboptimal rows in a table.

Discussion 0
Questions 43

Given the record streams MJTelco is interested in ingesting per day, they are concerned about the cost of Google BigQuery increasing. MJTelco asks you to provide a design solution. They require a single large data table called tracking_table. Additionally, they want to minimize the cost of daily queries while performing fine-grained analysis of each day’s events. They also want to use streaming ingestion. What should you do?

Options:

A.  

Create a table called tracking_table and include a DATE column.

B.  

Create a partitioned table called tracking_table and include a TIMESTAMP column.

C.  

Create sharded tables for each day following the pattern tracking_table_YYYYMMDD.

D.  

Create a table called tracking_table with a TIMESTAMP column to represent the day.

Discussion 0
Questions 44

What are two of the characteristics of using online prediction rather than batch prediction?

Options:

A.  

It is optimized to handle a high volume of data instances in a job and to run more complex models.

B.  

Predictions are returned in the response message.

C.  

Predictions are written to output files in a Cloud Storage location that you specify.

D.  

It is optimized to minimize the latency of serving predictions.

Discussion 0
Questions 45

Which of these are examples of a value in a sparse vector? (Select 2 answers.)

Options:

A.  

[0, 5, 0, 0, 0, 0]

B.  

[0, 0, 0, 1, 0, 0, 1]

C.  

[0, 1]

D.  

[1, 0, 0, 0, 0, 0, 0]

Discussion 0
Questions 46

You are developing a software application using Google's Dataflow SDK, and want to use conditionals, for loops, and other complex programming structures to create a branching pipeline. Which component will be used for the data processing operation?

Options:

A.  

PCollection

B.  

Transform

C.  

Pipeline

D.  

Sink API

Discussion 0
Questions 47

Which Cloud Dataflow / Beam feature should you use to aggregate data in an unbounded data source every hour based on the time when the data entered the pipeline?

Options:

A.  

An hourly watermark

B.  

An event time trigger

C.  

The withAllowedLateness method

D.  

A processing time trigger
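
For reference, a minimal Beam sketch of a processing-time trigger firing every hour over an unbounded source, assuming a hypothetical events PCollection:

    import apache_beam as beam
    from apache_beam.transforms import trigger, window

    hourly = events | "HourlyByProcessingTime" >> beam.WindowInto(
        window.GlobalWindows(),
        trigger=trigger.Repeatedly(trigger.AfterProcessingTime(60 * 60)),
        accumulation_mode=trigger.AccumulationMode.DISCARDING,
    )
    # Each pane contains the elements that entered the pipeline during the last hour
    # of processing time, independent of their event timestamps.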

Discussion 0
Questions 48

Which row keys are likely to cause a disproportionate number of reads and/or writes on a particular node in a Bigtable cluster (select 2 answers)?

Options:

A.  

A sequential numeric ID

B.  

A timestamp followed by a stock symbol

C.  

A non-sequential numeric ID

D.  

A stock symbol followed by a timestamp

Discussion 0
Questions 49

By default, which of the following windowing behaviors does Dataflow apply to unbounded data sets?

Options:

A.  

Windows at every 100 MB of data

B.  

Single, Global Window

C.  

Windows at every 1 minute

D.  

Windows at every 10 minutes

Discussion 0
Questions 50

Which is not a valid reason for poor Cloud Bigtable performance?

Options:

A.  

The workload isn't appropriate for Cloud Bigtable.

B.  

The table's schema is not designed correctly.

C.  

The Cloud Bigtable cluster has too many nodes.

D.  

There are issues with the network connection.

Discussion 0
Questions 51

Which of these sources can you not load data into BigQuery from?

Options:

A.  

File upload

B.  

Google Drive

C.  

Google Cloud Storage

D.  

Google Cloud SQL

Discussion 0
Questions 52

When a Cloud Bigtable node fails, ____ is lost.

Options:

A.  

all data

B.  

no data

C.  

the last transaction

D.  

the time dimension

Discussion 0
Questions 53

What are all of the BigQuery operations that Google charges for?

Options:

A.  

Storage, queries, and streaming inserts

B.  

Storage, queries, and loading data from a file

C.  

Storage, queries, and exporting data

D.  

Queries and streaming inserts

Discussion 0
Questions 54

You want to use a BigQuery table as a data sink. In which writing mode(s) can you use BigQuery as a sink?

Options:

A.  

Both batch and streaming

B.  

BigQuery cannot be used as a sink

C.  

Only batch

D.  

Only streaming

Discussion 0
Questions 55

Cloud Dataproc is a managed Apache Hadoop and Apache _____ service.

Options:

A.  

Blaze

B.  

Spark

C.  

Fire

D.  

Ignite

Discussion 0
Questions 56

Which is the preferred method to use to avoid hotspotting in time series data in Bigtable?

Options:

A.  

Field promotion

B.  

Randomization

C.  

Salting

D.  

Hashing
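
To illustrate field promotion, a short sketch of building a row key that leads with an identifying field instead of the raw timestamp; the field names and the reverse-timestamp trick are illustrative only.

    # Leading with the device ID spreads writes across the keyspace, while keeping
    # one device's readings contiguous; a reversed timestamp keeps newest rows first.
    def make_row_key(device_id: str, ts_epoch_seconds: int) -> bytes:
        reverse_ts = 2**63 - 1 - ts_epoch_seconds
        return f"{device_id}#{reverse_ts}".encode("utf-8")

    # make_row_key("sensor-0042", 1700000000)
    # -> b"sensor-0042#9223372035154775807"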

Discussion 0
Questions 57

Cloud Bigtable is Google's ______ Big Data database service.

Options:

A.  

Relational

B.  

mySQL

C.  

NoSQL

D.  

SQL Server

Discussion 0
Questions 58

Which TensorFlow function can you use to configure a categorical column if you don't know all of the possible values for that column?

Options:

A.  

categorical_column_with_vocabulary_list

B.  

categorical_column_with_hash_bucket

C.  

categorical_column_with_unknown_values

D.  

sparse_column_with_keys
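
For illustration, a minimal sketch of the hash-bucket column named in option B; the column name and bucket size are hypothetical.

    import tensorflow as tf

    # When the vocabulary is unknown up front, raw string values are hashed into a
    # fixed number of buckets instead of being looked up in a vocabulary list.
    device_type = tf.feature_column.categorical_column_with_hash_bucket(
        key="device_type", hash_bucket_size=1000)
    device_type_feature = tf.feature_column.indicator_column(device_type)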

Discussion 0
Questions 59

How would you query specific partitions in a BigQuery table?

Options:

A.  

Use the DAY column in the WHERE clause

B.  

Use the EXTRACT(DAY) clause

C.  

Use the __PARTITIONTIME pseudo-column in the WHERE clause

D.  

Use DATE BETWEEN in the WHERE clause
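
For reference, a minimal sketch of filtering on the _PARTITIONTIME pseudo-column from the BigQuery Python client; the project, dataset, and date range are hypothetical.

    from google.cloud import bigquery

    client = bigquery.Client()
    # Only the partitions inside the filter are scanned (and billed).
    rows = client.query("""
        SELECT *
        FROM `my-project.logs.events`
        WHERE _PARTITIONTIME BETWEEN TIMESTAMP('2024-01-01') AND TIMESTAMP('2024-01-02')
    """).result()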

Discussion 0
Questions 60

What Dataflow concept determines when a Window's contents should be output based on certain criteria being met?

Options:

A.  

Sessions

B.  

OutputCriteria

C.  

Windows

D.  

Triggers

Discussion 0
Questions 61

What is the recommended way to switch between SSD and HDD storage for your Google Cloud Bigtable instance?

Options:

A.  

create a third instance and sync the data from the two storage types via batch jobs

B.  

export the data from the existing instance and import the data into a new instance

C.  

run parallel instances where one is HDD and the other is SSD

D.  

the selection is final and you must resume using the same storage type

Discussion 0
Questions 62

Which of the following statements about Legacy SQL and Standard SQL is not true?

Options:

A.  

Standard SQL is the preferred query language for BigQuery.

B.  

If you write a query in Legacy SQL, it might generate an error if you try to run it with Standard SQL.

C.  

One difference between the two query languages is how you specify fully-qualified table names (i.e. table names that include their associated project name).

D.  

You need to set a query language for each dataset and the default is Standard SQL.

Discussion 0
Questions 63

Your company produces 20,000 files every hour. Each data file is formatted as a comma separated values (CSV) file that is less than 4 KB. All files must be ingested on Google Cloud Platform before they can be processed. Your company site has a 200 ms latency to Google Cloud, and your Internet connection bandwidth is limited to 50 Mbps. You currently deploy a secure FTP (SFTP) server on a virtual machine in Google Compute Engine as the data ingestion point. A local SFTP client runs on a dedicated machine to transmit the CSV files as is. The goal is to make reports with data from the previous day available to the executives by 10:00 a.m. each day. This design is barely able to keep up with the current volume, even though the bandwidth utilization is rather low.

You are told that due to seasonality, your company expects the number of files to double for the next three months. Which two actions should you take? (Choose two.)

Options:

A.  

Introduce data compression for each file to increase the rate of file transfer.

B.  

Contact your internet service provider (ISP) to increase your maximum bandwidth to at least 100 Mbps.

C.  

Redesign the data ingestion process to use gsutil tool to send the CSV files to a storage bucket in parallel.

D.  

Assemble 1,000 files into a tape archive (TAR) file. Transmit the TAR files instead, and disassemble the CSV files in the cloud upon receiving them.

E.  

Create an S3-compatible storage endpoint in your network, and use Google Cloud Storage Transfer Service to transfer on-premises data to the designated storage bucket.

Discussion 0
Questions 64

Your company has recently grown rapidly and is now ingesting data at a significantly higher rate than it was previously. You manage the daily batch MapReduce analytics jobs in Apache Hadoop. However, the recent increase in data has meant the batch jobs are falling behind. You were asked to recommend ways the development team could increase the responsiveness of the analytics without increasing costs. What should you recommend they do?

Options:

A.  

Rewrite the job in Pig.

B.  

Rewrite the job in Apache Spark.

C.  

Increase the size of the Hadoop cluster.

D.  

Decrease the size of the Hadoop cluster but also rewrite the job in Hive.

Discussion 0
Questions 65

Your company is loading comma-separated values (CSV) files into Google BigQuery. The data is fully imported successfully; however, the imported data is not matching byte-to-byte to the source file. What is the most likely cause of this problem?

Options:

A.  

The CSV data loaded in BigQuery is not flagged as CSV.

B.  

The CSV data has invalid rows that were skipped on import.

C.  

The CSV data loaded in BigQuery is not using BigQuery’s default encoding.

D.  

The CSV data has not gone through an ETL phase before loading into BigQuery.

Discussion 0
Questions 66

You work for an economic consulting firm that helps companies identify economic trends as they happen. As part of your analysis, you use Google BigQuery to correlate customer data with the average prices of the 100 most common goods sold, including bread, gasoline, milk, and others. The average prices of these goods are updated every 30 minutes. You want to make sure this data stays up to date so you can combine it with other data in BigQuery as cheaply as possible. What should you do?

Options:

A.  

Load the data every 30 minutes into a new partitioned table in BigQuery.

B.  

Store and update the data in a regional Google Cloud Storage bucket and create a federated data source in BigQuery

C.  

Store the data in Google Cloud Datastore. Use Google Cloud Dataflow to query BigQuery and combine the data programmatically with the data stored in Cloud Datastore

D.  

Store the data in a file in a regional Google Cloud Storage bucket. Use Cloud Dataflow to query BigQuery and combine the data programmatically with the data stored in Google Cloud Storage.

Discussion 0