
ExamsBrite Dumps

Google Cloud Associate Data Practitioner (ADP Exam) Questions and Answers

Last Update Nov 30, 2025
Total Questions : 106


Question 1

You are designing an application that will interact with several BigQuery datasets. You need to grant the application’s service account permissions that allow it to query and update tables within the datasets, and list all datasets in a project within your application. You want to follow the principle of least privilege. Which pre-defined IAM role(s) should you apply to the service account?

Options:

A.  

roles/bigquery.jobUser and roles/bigquery.dataOwner

B.  

roles/bigquery.connectionUser and roles/bigquery.dataViewer

C.  

roles/bigquery.admin

D.  

roles/bigquery.user and roles/bigquery.filteredDataViewer

Question 2

You work for a financial organization that stores transaction data in BigQuery. Your organization has a regulatory requirement to retain data for a minimum of seven years for auditing purposes. You need to ensure that the data is retained for seven years using an efficient and cost-optimized approach. What should you do?

Options:

A.  

Create a partition by transaction date, and set the partition expiration policy to seven years.

B.  

Set the table-level retention policy in BigQuery to seven years.

C.  

Set the dataset-level retention policy in BigQuery to seven years.

D.  

Export the BigQuery tables to Cloud Storage daily, and enforce a lifecycle management policy that has a seven-year retention rule.

Question 3

You recently inherited a task for managing Dataflow streaming pipelines in your organization and noticed that proper access had not been provisioned to you. You need to request a Google-provided IAM role so you can restart the pipelines. You need to follow the principle of least privilege. What should you do?

Options:

A.  

Request the Dataflow Developer role.

B.  

Request the Dataflow Viewer role.

C.  

Request the Dataflow Worker role.

D.  

Request the Dataflow Admin role.

Question 4

Your data science team needs to collaboratively analyze a 25 TB BigQuery dataset to support the development of a machine learning model. You want to use Colab Enterprise notebooks while ensuring efficient data access and minimizing cost. What should you do?

Options:

A.  

Export the BigQuery dataset to Google Drive. Load the dataset into the Colab Enterprise notebook using Pandas.

B.  

Use BigQuery magic commands within a Colab Enterprise notebook to query and analyze the data.

C.  

Create a Dataproc cluster connected to a Colab Enterprise notebook, and use Spark to process the data in BigQuery.

D.  

Copy the BigQuery dataset to the local storage of the Colab Enterprise runtime, and analyze the data using Pandas.

Question 5

Your organization’s ecommerce website collects user activity logs using a Pub/Sub topic. Your organization’s leadership team wants a dashboard that contains aggregated user engagement metrics. You need to create a solution that transforms the user activity logs into aggregated metrics, while ensuring that the raw data can be easily queried. What should you do?

Options:

A.  

Create a Dataflow subscription to the Pub/Sub topic, and transform the activity logs. Load the transformed data into a BigQuery table for reporting.

B.  

Create an event-driven Cloud Run function to trigger a data transformation pipeline to run. Load the transformed activity logs into a BigQuery table for reporting.

C.  

Create a Cloud Storage subscription to the Pub/Sub topic. Load the activity logs into a bucket using the Avro file format. Use Dataflow to transform the data, and load it into a BigQuery table for reporting.

D.  

Create a BigQuery subscription to the Pub/Sub topic, and load the activity logs into the table. Create a materialized view in BigQuery using SQL to transform the data for reporting.
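As a point of reference, the materialized-view step described in option D might look like the following sketch. All dataset, table, and column names (`mydataset.activity_logs`, `user_id`, `event_timestamp`) are hypothetical placeholders, not from the question.

```python
# Hypothetical SQL for a materialized view that aggregates raw activity
# logs delivered to BigQuery by a Pub/Sub BigQuery subscription. The raw
# rows remain queryable in the base table; the view keeps aggregates fresh.
materialized_view_sql = """
CREATE MATERIALIZED VIEW mydataset.user_engagement AS
SELECT
  user_id,
  DATE(event_timestamp) AS event_date,
  COUNT(*) AS event_count
FROM mydataset.activity_logs
GROUP BY user_id, event_date
"""
print(materialized_view_sql.strip())
```

A dashboard tool such as Looker Studio could then read from the view directly while analysts query the raw table with standard SQL.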

Question 6

You are predicting customer churn for a subscription-based service. You have a 50 PB historical customer dataset in BigQuery that includes demographics, subscription information, and engagement metrics. You want to build a churn prediction model with minimal overhead. You want to follow the Google-recommended approach. What should you do?

Options:

A.  

Export the data from BigQuery to a local machine. Use scikit-learn in a Jupyter notebook to build the churn prediction model.

B.  

Use Dataproc to create a Spark cluster. Use the Spark MLlib within the cluster to build the churn prediction model.

C.  

Create a Looker dashboard that is connected to BigQuery. Use LookML to predict churn.

D.  

Use the BigQuery Python client library in a Jupyter notebook to query and preprocess the data in BigQuery. Use the CREATE MODEL statement in BigQueryML to train the churn prediction model.
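For context, the BigQuery ML training statement that option D refers to might look like this sketch. The model, dataset, and label column names (`churn_model`, `customer_features`, `churned`) are hypothetical.

```python
# Hypothetical BigQuery ML statement: train a logistic regression churn
# model directly in BigQuery, so the 50 PB dataset never leaves the
# warehouse. Names below are placeholders.
create_model_sql = """
CREATE OR REPLACE MODEL mydataset.churn_model
OPTIONS (
  model_type = 'LOGISTIC_REG',
  input_label_cols = ['churned']
) AS
SELECT * FROM mydataset.customer_features
"""
print(create_model_sql.strip())
```

Training in-place with `CREATE MODEL` avoids exporting data, which is the main reason this style of approach carries minimal overhead at this scale.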

Question 7

You need to design a data pipeline that ingests data from CSV, Avro, and Parquet files into Cloud Storage. The data includes raw user input. You need to remove all malicious SQL injections before storing the data in BigQuery. Which data manipulation methodology should you choose?

Options:

A.  

EL

B.  

ELT

C.  

ETL

D.  

ETLT

Question 8

Your company is adopting BigQuery as their data warehouse platform. Your team has experienced Python developers. You need to recommend a fully-managed tool to build batch ETL processes that extract data from various source systems, transform the data using a variety of Google Cloud services, and load the transformed data into BigQuery. You want this tool to leverage your team’s Python skills. What should you do?

Options:

A.  

Use Dataform with assertions.

B.  

Deploy Cloud Data Fusion and included plugins.

C.  

Use Cloud Composer with pre-built operators.

D.  

Use Dataflow and pre-built templates.

Question 9

You work for a financial services company that handles highly sensitive data. Due to regulatory requirements, your company is required to have complete and manual control of data encryption. Which type of keys should you recommend to use for data storage?

Options:

A.  

Use customer-supplied encryption keys (CSEK).

B.  

Use a dedicated third-party key management system (KMS) chosen by the company.

C.  

Use Google-managed encryption keys (GMEK).

D.  

Use customer-managed encryption keys (CMEK).

Question 10

You need to create a data pipeline for a new application. Your application will stream data that needs to be enriched and cleaned. Eventually, the data will be used to train machine learning models. You need to determine the appropriate data manipulation methodology and which Google Cloud services to use in this pipeline. What should you choose?

Options:

A.  

ETL; Dataflow -> BigQuery

B.  

ETL; Cloud Data Fusion -> Cloud Storage

C.  

ELT; Cloud Storage -> Bigtable

D.  

ELT; Cloud SQL -> Analytics Hub

Question 11

Your organization uses Dataflow pipelines to process real-time financial transactions. You discover that one of your Dataflow jobs has failed. You need to troubleshoot the issue as quickly as possible. What should you do?

Options:

A.  

Set up a Cloud Monitoring dashboard to track key Dataflow metrics, such as data throughput, error rates, and resource utilization.

B.  

Create a custom script to periodically poll the Dataflow API for job status updates, and send email alerts if any errors are identified.

C.  

Navigate to the Dataflow Jobs page in the Google Cloud console. Use the job logs and worker logs to identify the error.

D.  

Use the gcloud CLI tool to retrieve job metrics and logs, and analyze them for errors and performance bottlenecks.

Question 12

You work for a home insurance company. You are frequently asked to create and save risk reports with charts for specific areas using a publicly available storm event dataset. You want to be able to quickly create and re-run risk reports when new data becomes available. What should you do?

Options:

A.  

Export the storm event dataset as a CSV file. Import the file to Google Sheets, and use cell data in the worksheets to create charts.

B.  

Copy the storm event dataset into your BigQuery project. Use BigQuery Studio to query and visualize the data in Looker Studio.

C.  

Reference and query the storm event dataset using SQL in BigQuery Studio. Export the results to Google Sheets, and use cell data in the worksheets to create charts.

D.  

Reference and query the storm event dataset using SQL in a Colab Enterprise notebook. Display the table results and document with Markdown, and use Matplotlib to create charts.

Question 13

Your team wants to create a monthly report to analyze inventory data that is updated daily. You need to aggregate the inventory counts by using only the most recent month of data, and save the results to be used in a Looker Studio dashboard. What should you do?

Options:

A.  

Create a materialized view in BigQuery that uses the SUM( ) function and the DATE_SUB( ) function.

B.  

Create a saved query in the BigQuery console that uses the SUM( ) function and the DATE_SUB( ) function. Re-run the saved query every month, and save the results to a BigQuery table.

C.  

Create a BigQuery table that uses the SUM( ) function and the _PARTITIONDATE filter.

D.  

Create a BigQuery table that uses the SUM( ) function and the DATE_DIFF( ) function.
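To make the function combinations in these options concrete, an aggregation limited to the most recent month using SUM( ) and DATE_SUB( ) might look like the following sketch (table and column names are hypothetical):

```python
# Hypothetical query: sum inventory counts over only the most recent
# month of daily snapshots. All names are placeholders.
monthly_sql = """
SELECT
  product_id,
  SUM(inventory_count) AS total_inventory
FROM mydataset.inventory
WHERE snapshot_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 1 MONTH)
GROUP BY product_id
"""
print(monthly_sql.strip())
```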

Question 14

Your organization uses scheduled queries to perform transformations on data stored in BigQuery. You discover that one of your scheduled queries has failed. You need to troubleshoot the issue as quickly as possible. What should you do?

Options:

A.  

Navigate to the Logs Explorer page in Cloud Logging. Use filters to find the failed job, and analyze the error details.

B.  

Set up a log sink using the gcloud CLI to export BigQuery audit logs to BigQuery. Query those logs to identify the error associated with the failed job ID.

C.  

Request access from your admin to the BigQuery information_schema. Query the jobs view with the failed job ID, and analyze error details.

D.  

Navigate to the Scheduled queries page in the Google Cloud console. Select the failed job, and analyze the error details.

Question 15

Your company’s customer support audio files are stored in a Cloud Storage bucket. You plan to analyze the audio files’ metadata and file content within BigQuery to create inference by using BigQuery ML. You need to create a corresponding table in BigQuery that represents the bucket containing the audio files. What should you do?

Options:

A.  

Create an external table.

B.  

Create a temporary table.

C.  

Create a native table.

D.  

Create an object table.
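For reference, the DDL for an object table over a Cloud Storage bucket (option D) generally follows the shape below. The connection name and bucket URI are hypothetical placeholders.

```python
# Hypothetical object table DDL: exposes unstructured files (here, audio)
# in a Cloud Storage bucket as a read-only BigQuery table, suitable for
# BigQuery ML inference. Connection and URI are placeholders.
object_table_sql = """
CREATE EXTERNAL TABLE mydataset.audio_files
WITH CONNECTION `us.my_connection`
OPTIONS (
  object_metadata = 'SIMPLE',
  uris = ['gs://my-audio-bucket/*']
)
"""
print(object_table_sql.strip())
```

The `object_metadata = 'SIMPLE'` option is what distinguishes an object table (over unstructured files) from an ordinary external table over structured data.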

Question 16

You work for a global financial services company that trades stocks 24/7. You have a Cloud SQL for PostgreSQL user database. You need to identify a solution that ensures that the database is continuously operational, minimizes downtime, and will not lose any data in the event of a zonal outage. What should you do?

Options:

A.  

Continuously back up the Cloud SQL instance to Cloud Storage. Create a Compute Engine instance with PostgreSQL in a different region. Restore the backup in the Compute Engine instance if a failure occurs.

B.  

Create a read replica in another region. Promote the replica to primary if a failure occurs.

C.  

Configure and create a high-availability Cloud SQL instance with the primary instance in zone A and a secondary instance in any zone other than zone A.

D.  

Create a read replica in the same region but in a different zone.

Question 17

Your organization stores highly personal data in BigQuery and needs to comply with strict data privacy regulations. You need to ensure that sensitive data values are rendered unreadable whenever an employee leaves the organization. What should you do?

Options:

A.  

Use AEAD functions and delete keys when employees leave the organization.

B.  

Use dynamic data masking and revoke viewer permissions when employees leave the organization.

C.  

Use customer-managed encryption keys (CMEK) and delete keys when employees leave the organization.

D.  

Use column-level access controls with policy tags and revoke viewer permissions when employees leave the organization.
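To illustrate the AEAD approach mentioned in option A: each subject's data is encrypted with a per-subject keyset, so deleting that keyset row makes the ciphertext permanently unreadable (crypto-shredding). The sketch below is hypothetical; all table and column names are placeholders.

```python
# Hypothetical use of BigQuery AEAD functions. Keysets would typically be
# created with KEYS.NEW_KEYSET('AEAD_AES_GCM_256') and stored one row per
# employee; deleting an employee's keyset row destroys access to their data.
aead_sql = """
SELECT
  t.employee_id,
  AEAD.ENCRYPT(
    (SELECT k.keyset
     FROM mydataset.employee_keysets AS k
     WHERE k.employee_id = t.employee_id),
    t.review_text,      -- plaintext to protect
    t.employee_id       -- additional authenticated data
  ) AS encrypted_review
FROM mydataset.performance_reviews AS t
"""
print(aead_sql.strip())
```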

Question 18

You are using your own data to demonstrate the capabilities of BigQuery to your organization’s leadership team. You need to perform a one-time load of the files stored on your local machine into BigQuery using as little effort as possible. What should you do?

Options:

A.  

Write and execute a Python script using the BigQuery Storage Write API library.

B.  

Create a Dataproc cluster, copy the files to Cloud Storage, and write an Apache Spark job using the spark-bigquery-connector.

C.  

Execute the bq load command on your local machine.

D.  

Create a Dataflow job using the Apache Beam FileIO and BigQueryIO connectors with a local runner.
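For reference, a one-time load from a local machine with the bq CLI (option C) is a single command. The table and file names below are hypothetical placeholders.

```python
# Hypothetical bq CLI invocation: load a local CSV straight into a
# BigQuery table, letting --autodetect infer the schema.
bq_load_cmd = (
    "bq load --source_format=CSV --autodetect "
    "mydataset.demo_table ./sales_data.csv"
)
print(bq_load_cmd)
```

`bq load` accepts local file paths as well as `gs://` URIs, which is why no intermediate Cloud Storage staging step is needed for a small one-off demo.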

Question 19

You used BigQuery ML to build a customer purchase propensity model six months ago. You want to compare the current serving data with the historical serving data to determine whether you need to retrain the model. What should you do?

Options:

A.  

Compare the two different models.

B.  

Evaluate the data skewness.

C.  

Evaluate data drift.

D.  

Compare the confusion matrix.

Question 20

You have created a LookML model and dashboard that shows daily sales metrics for five regional managers to use. You want to ensure that the regional managers can only see sales metrics specific to their region. You need an easy-to-implement solution. What should you do?

Options:

A.  

Create a sales_region user attribute, and assign each manager’s region as the value of their user attribute. Add an access_filter Explore filter on the region_name dimension by using the sales_region user attribute.

B.  

Create five different Explores with the sql_always_filter Explore filter applied on the region_name dimension. Set each region_name value to the corresponding region for each manager.

C.  

Create separate Looker dashboards for each regional manager. Set the default dashboard filter to the corresponding region for each manager.

D.  

Create separate Looker instances for each regional manager. Copy the LookML model and dashboard to each instance. Provision viewer access to the corresponding manager.

Question 21

You are a database administrator managing sales transaction data by region stored in a BigQuery table. You need to ensure that each sales representative can only see the transactions in their region. What should you do?

Options:

A.  

Add a policy tag in BigQuery.

B.  

Create a row-level access policy.

C.  

Create a data masking rule.

D.  

Grant the appropriate IAM permissions on the dataset.
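For context, a row-level access policy (option B) is declared with a single DDL statement. The policy, table, principal, and region values below are hypothetical.

```python
# Hypothetical BigQuery row-level access policy: the named user only sees
# rows whose region column matches the filter. All names are placeholders.
row_policy_sql = """
CREATE ROW ACCESS POLICY west_region_filter
ON mydataset.transactions
GRANT TO ('user:rep-west@example.com')
FILTER USING (region = 'WEST')
"""
print(row_policy_sql.strip())
```

One policy per region (or a filter keyed to `SESSION_USER()`) keeps the table itself unchanged while restricting what each representative can query.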

Question 22

You need to create a new data pipeline. You want a serverless solution that meets the following requirements:

• Data is streamed from Pub/Sub and is processed in real-time.

• Data is transformed before being stored.

• Data is stored in a location that will allow it to be analyzed with SQL using Looker.

Which Google Cloud services should you recommend for the pipeline?

Options:

A.  

1. Dataproc Serverless

2. Bigtable

B.  

1. Cloud Composer

2. Cloud SQL for MySQL

C.  

1. BigQuery

2. Analytics Hub

D.  

1. Dataflow

2. BigQuery

Question 23

You need to transfer approximately 300 TB of data from your company's on-premises data center to Cloud Storage. You have 100 Mbps internet bandwidth, and the transfer needs to be completed as quickly as possible. What should you do?

Options:

A.  

Use Cloud Client Libraries to transfer the data over the internet.

B.  

Use the gcloud storage command to transfer the data over the internet.

C.  

Compress the data, upload it to multiple cloud storage providers, and then transfer the data to Cloud Storage.

D.  

Request a Transfer Appliance, copy the data to the appliance, and ship it back to Google.

Question 24

You have millions of customer feedback records stored in BigQuery. You want to summarize the data by using the large language model (LLM) Gemini. You need to plan and execute this analysis using the most efficient approach. What should you do?

Options:

A.  

Query the BigQuery table from within a Python notebook, use the Gemini API to summarize the data within the notebook, and store the summaries in BigQuery.

B.  

Use a BigQuery ML model to pre-process the text data, export the results to Cloud Storage, and use the Gemini API to summarize the pre-processed data.

C.  

Create a BigQuery Cloud resource connection to a remote model in Vertex AI, and use Gemini to summarize the data.

D.  

Export the raw BigQuery data to a CSV file, upload it to Cloud Storage, and use the Gemini API to summarize the data.
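To make option C concrete: a remote model is a BigQuery model object that points at a Vertex AI endpoint through a Cloud resource connection, after which `ML.GENERATE_TEXT` runs over table rows in SQL. The connection, endpoint, and table names below are hypothetical, and endpoint identifiers change as model versions evolve.

```python
# Hypothetical two-step sketch: (1) register a remote Gemini model in
# BigQuery, (2) summarize each feedback row with ML.GENERATE_TEXT.
# All names, including the endpoint, are placeholders.
remote_model_sql = """
CREATE OR REPLACE MODEL mydataset.gemini_model
REMOTE WITH CONNECTION `us.my_vertex_connection`
OPTIONS (endpoint = 'gemini-1.5-flash')
"""
summarize_sql = """
SELECT ml_generate_text_llm_result
FROM ML.GENERATE_TEXT(
  MODEL mydataset.gemini_model,
  (SELECT CONCAT('Summarize this feedback: ', feedback_text) AS prompt
   FROM mydataset.customer_feedback),
  STRUCT(TRUE AS flatten_json_output))
"""
print(remote_model_sql.strip())
print(summarize_sql.strip())
```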

Question 25

Your organization has a BigQuery dataset that contains sensitive employee information such as salaries and performance reviews. The payroll specialist in the HR department needs to have continuous access to aggregated performance data, but they do not need continuous access to other sensitive data. You need to grant the payroll specialist access to the performance data without granting them access to the entire dataset using the simplest and most secure approach. What should you do?

Options:

A.  

Use authorized views to share query results with the payroll specialist.

B.  

Create row-level and column-level permissions and policies on the table that contains performance data in the dataset. Provide the payroll specialist with the appropriate permission set.

C.  

Create a table with the aggregated performance data. Use table-level permissions to grant access to the payroll specialist.

D.  

Create a SQL query with the aggregated performance data. Export the results to an Avro file in a Cloud Storage bucket. Share the bucket with the payroll specialist.

Question 26

You are working with a large dataset of customer reviews stored in Cloud Storage. The dataset contains several inconsistencies, such as missing values, incorrect data types, and duplicate entries. You need to clean the data to ensure that it is accurate and consistent before using it for analysis. What should you do?

Options:

A.  

Use the PythonOperator in Cloud Composer to clean the data and load it into BigQuery. Use SQL for analysis.

B.  

Use BigQuery to batch load the data into BigQuery. Use SQL for cleaning and analysis.

C.  

Use Storage Transfer Service to move the data to a different Cloud Storage bucket. Use event triggers to invoke Cloud Run functions to load the data into BigQuery. Use SQL for analysis.

D.  

Use Cloud Run functions to clean the data and load it into BigQuery. Use SQL for analysis.
