
ExamsBrite Dumps

SnowPro Advanced: Architect Certification Exam Question and Answers

SnowPro Advanced: Architect Certification Exam

Last Update Feb 28, 2026
Total Questions : 182

We are offering free ARA-C01 Snowflake exam questions. Simply sign up and provide your details, prepare with the free ARA-C01 exam questions, and then move on to the complete pool of SnowPro Advanced: Architect Certification Exam test questions.

Questions 1

What integration object should be used to place restrictions on where data may be exported?

Options:

A.  

Stage integration

B.  

Security integration

C.  

Storage integration

D.  

API integration

Discussion 0
Questions 2

A company is following the Data Mesh principles, including domain separation, and chose one Snowflake account for its data platform.

An Architect created two data domains to produce two data products. The Architect needs a third data domain that will use both of the data products to create an aggregate data product. The read access to the data products will be granted through a separate role.

Based on the Data Mesh principles, how should the third domain be configured to create the aggregate product if it has been granted the two read roles?

Options:

A.  

Use secondary roles for all users.

B.  

Create a hierarchy between the two read roles.

C.  

Request a technical ETL user with the sysadmin role.

D.  

Request that the two data domains share data using the Data Exchange.

Discussion 0
Questions 3

Why does a conditional multi-table insert option support the Data Vault data model?

Options:

A.  

Data can be inserted in parallel to hubs and satellites using surrogate keys.

B.  

Data can be inserted in parallel to dimensions and facts using surrogate keys.

C.  

Data can be inserted in sequence to hubs and satellites using surrogate keys.

D.  

Data can be inserted in sequence to dimensions and facts using surrogate keys.

Discussion 0
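For context, a conditional multi-table insert lets a single scan of the source feed hubs and satellites at the same time. A minimal sketch, with all table and column names hypothetical:

-- One pass over the staging data populates both a hub and a satellite;
-- the surrogate key is derived from the business key
insert all
  when is_new_hub = 1 then
    into hub_customer (hub_customer_key, customer_bk, load_ts, record_source)
    values (hub_customer_key, customer_bk, load_ts, record_source)
  when is_changed = 1 then
    into sat_customer (hub_customer_key, customer_name, load_ts, record_source)
    values (hub_customer_key, customer_name, load_ts, record_source)
select
  md5(customer_bk) as hub_customer_key,
  customer_bk,
  customer_name,
  load_ts,
  record_source,
  is_new_hub,
  is_changed
from staging_customer;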
Questions 4

An Architect is designing partitioned external tables for a Snowflake data lake. The data lake size may grow over time, and partition definitions may need to change in the future.

How can these requirements be met?

Options:

A.  

Use the PARTITION BY () clause when creating the external table.

B.  

Use partition_type = USER_SPECIFIED when creating the external table.

C.  

Set METADATA$EXTERNAL_TABLE_PARTITION = MANUAL.

D.  

Alter the table using ADD_PARTITION_COLUMN before defining a new partition column.

Discussion 0
Questions 5

What are some of the characteristics of result set caches? (Choose three.)

Options:

A.  

Time Travel queries can be executed against the result set cache.

B.  

Snowflake persists the data results for 24 hours.

C.  

Each time persisted results for a query are used, a 24-hour retention period is reset.

D.  

The data stored in the result cache will contribute to storage costs.

E.  

The retention period can be reset for a maximum of 31 days.

F.  

The result set cache is not shared between warehouses.

Discussion 0
Questions 6

An Architect is designing a solution that will be used to process changed records in an ORDERS table. Newly inserted orders must be loaded into the F_ORDERS fact table, which will aggregate all the orders by multiple dimensions (time, region, channel, etc.). Existing orders can be updated by the sales department within 30 days after the order creation. In case of an order update, the solution must perform two actions:

1. Update the order in the F_ORDERS fact table.

2. Load the changed order data into the special table ORDER_REPAIRS.

This table is used by the Accounting department once a month. If the order has been changed, the Accounting team needs to know the latest details and perform the necessary actions based on the data in the ORDER_REPAIRS table.

What data processing logic design will be the MOST performant?

Options:

A.  

Use one stream and one task.

B.  

Use one stream and two tasks.

C.  

Use two streams and one task.

D.  

Use two streams and two tasks.

Discussion 0
Questions 7

Company A would like to share data in Snowflake with Company B. Company B is not on the same cloud platform as Company A.

What is required to allow data sharing between these two companies?

Options:

A.  

Create a pipeline to write shared data to a cloud storage location in the target cloud provider.

B.  

Ensure that all views are persisted, as views cannot be shared across cloud platforms.

C.  

Set up data replication to the region and cloud platform where the consumer resides.

D.  

Company A and Company B must agree to use a single cloud platform; data sharing is only possible if the companies share the same cloud provider.

Discussion 0
Questions 8

A global company needs to securely share its sales and inventory data with a vendor using a Snowflake account.

The company has its Snowflake account in the AWS eu-west-2 Europe (London) region. The vendor's Snowflake account is on the Azure platform in the West Europe region. How should the company's Architect configure the data share?

Options:

A.  

1. Create a share. 2. Add objects to the share. 3. Add a consumer account to the share for the vendor to access.

B.  

1. Create a share. 2. Create a reader account for the vendor to use. 3. Add the reader account to the share.

C.  

1. Create a new role called db_share. 2. Grant the db_share role privileges to read data from the company database and schema. 3. Create a user for the vendor. 4. Grant the db_share role to the vendor's users.

D.  

1. Promote an existing database in the company's local account to primary. 2. Replicate the database to Snowflake on Azure in the West Europe region. 3. Create a share and add objects to the share. 4. Add a consumer account to the share for the vendor to access.

Discussion 0
Questions 9

Which SQL ALTER command will MAXIMIZE memory and compute resources for a Snowpark stored procedure when executed on the snowpark_opt_wh warehouse?

Options:

A.  

ALTER WAREHOUSE snowpark_opt_wh SET MAX_CONCURRENCY_LEVEL = 1;

B.  

ALTER WAREHOUSE snowpark_opt_wh SET MAX_CONCURRENCY_LEVEL = 2;

C.  

ALTER WAREHOUSE snowpark_opt_wh SET MAX_CONCURRENCY_LEVEL = 8;

D.  

ALTER WAREHOUSE snowpark_opt_wh SET MAX_CONCURRENCY_LEVEL = 16;

Discussion 0
Questions 10

A company is storing large numbers of small JSON files (ranging from 1-4 bytes) that are received from IoT devices and sent to a cloud provider. In any given hour, 100,000 files are added to the cloud provider.

What is the MOST cost-effective way to bring this data into a Snowflake table?

Options:

A.  

An external table

B.  

A pipe

C.  

A stream

D.  

A copy command at regular intervals

Discussion 0
Questions 11

A company’s daily Snowflake workload consists of a huge number of concurrent queries triggered between 9pm and 11pm. At the individual level, these queries are smaller statements that get completed within a short time period.

What configuration can the company’s Architect implement to enhance the performance of this workload? (Choose two.)

Options:

A.  

Enable a multi-clustered virtual warehouse in maximized mode during the workload duration.

B.  

Set the MAX_CONCURRENCY_LEVEL to a higher value than its default value of 8 at the virtual warehouse level.

C.  

Increase the size of the virtual warehouse to size X-Large.

D.  

Reduce the amount of data that is being processed through this workload.

E.  

Set the connection timeout to a higher value than its default.

Discussion 0
Questions 12

A company is using Snowflake on Azure in the Netherlands. The company's analyst team also has data in JSON format that is stored in an Amazon S3 bucket in the AWS Singapore region that the team wants to analyze.

The Architect has been given the following requirements:

1. Provide access to frequently changing data

2. Keep egress costs to a minimum

3. Maintain low latency

How can these requirements be met with the LEAST amount of operational overhead?

Options:

A.  

Use a materialized view on top of an external table against the S3 bucket in AWS Singapore.

B.  

Use an external table against the S3 bucket in AWS Singapore and copy the data into transient tables.

C.  

Copy the data between providers from S3 to Azure Blob storage to collocate, then use Snowpipe for data ingestion.

D.  

Use AWS Transfer Family to replicate data between the S3 bucket in AWS Singapore and an Azure Netherlands Blob storage, then use an external table against the Blob storage.

Discussion 0
Questions 13

An Architect needs to design a Snowflake account and database strategy to store and analyze large amounts of structured and semi-structured data. There are many business units and departments within the company. The requirements are scalability, security, and cost efficiency.

What design should be used?

Options:

A.  

Create a single Snowflake account and database for all data storage and analysis needs, regardless of data volume or complexity.

B.  

Set up separate Snowflake accounts and databases for each department or business unit, to ensure data isolation and security.

C.  

Use Snowflake's data lake functionality to store and analyze all data in a central location, without the need for structured schemas or indexes.

D.  

Use a centralized Snowflake database for core business data, and use separate databases for departmental or project-specific data.

Discussion 0
Questions 14

An Architect needs to grant a group of ORDER_ADMIN users the ability to clean old data in an ORDERS table (deleting all records older than 5 years), without granting any privileges on the table. The group’s manager (ORDER_MANAGER) has full DELETE privileges on the table.

How can the ORDER_ADMIN role be enabled to perform this data cleanup, without needing the DELETE privilege held by the ORDER_MANAGER role?

Options:

A.  

Create a stored procedure that runs with caller’s rights, including the appropriate "> 5 years" business logic, and grant USAGE on this procedure to ORDER_ADMIN. The ORDER_MANAGER role owns the procedure.

B.  

Create a stored procedure that can be run using both caller’s and owner’s rights (allowing the user to specify which rights are used during execution), and grant USAGE on this procedure to ORDER_ADMIN. The ORDER_MANAGER role owns the procedure.

C.  

Create a stored procedure that runs with owner’s rights, including the appropriate "> 5 years" business logic, and grant USAGE on this procedure to ORDER_ADMIN. The ORDER_MANAGER role owns the procedure.

D.  

This scenario would actually not be possible in Snowflake – any user performing a DELETE on a table requires the DELETE privilege to be granted to the role they are using.

Discussion 0
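As background, the pattern this question probes is the owner's-rights stored procedure: such a procedure executes with the privileges of its owning role, so callers need only USAGE on the procedure itself. A minimal sketch, assuming the ORDERS table has an ORDER_DATE column (all names are illustrative):

-- Created while using the ORDER_MANAGER role, which holds DELETE on ORDERS
create or replace procedure clean_old_orders()
  returns string
  language sql
  execute as owner
as
$$
begin
  delete from orders where order_date < dateadd(year, -5, current_date());
  return 'cleanup complete';
end;
$$;

-- Callers need only USAGE; they never receive DELETE on the table itself
grant usage on procedure clean_old_orders() to role order_admin;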
Questions 15

When loading data into a table that captures the load time in a column with a default value of either CURRENT_TIME() or CURRENT_TIMESTAMP(), what will occur?

Options:

A.  

All rows loaded using a specific COPY statement will have varying timestamps based on when the rows were inserted.

B.  

Any rows loaded using a specific COPY statement will have varying timestamps based on when the rows were read from the source.

C.  

Any rows loaded using a specific COPY statement will have varying timestamps based on when the rows were created in the source.

D.  

All rows loaded using a specific COPY statement will have the same timestamp value.

Discussion 0
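As background, the scenario describes a table definition like the following sketch (names hypothetical). It is worth noting that Snowflake evaluates CURRENT_TIMESTAMP() once per statement rather than once per row:

create table load_log (
  id integer,
  payload variant,
  -- the default expression is evaluated when the statement starts,
  -- not separately for each inserted row
  loaded_at timestamp_ltz default current_timestamp()
);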
Questions 16

The diagram shows the process flow for Snowpipe auto-ingest with Amazon Simple Notification Service (SNS) with the following steps:

Step 1: Data files are loaded in a stage.

Step 2: An Amazon S3 event notification, published by SNS, informs Snowpipe — by way of Amazon Simple Queue Service (SQS) - that files are ready to load. Snowpipe copies the files into a queue.

Step 3: A Snowflake-provided virtual warehouse loads data from the queued files into the target table based on parameters defined in the specified pipe.

If an AWS Administrator accidentally deletes the SQS subscription to the SNS topic in Step 2, what will happen to the pipe that references the topic to receive event messages from Amazon S3?

Options:

A.  

The pipe will continue to receive the messages as Snowflake will automatically restore the subscription to the same SNS topic and will recreate the pipe by specifying the same SNS topic name in the pipe definition.

B.  

The pipe will no longer be able to receive the messages and the user must wait for 24 hours from the time when the SNS topic subscription was deleted. Pipe recreation is not required as the pipe will reuse the same subscription to the existing SNS topic after 24 hours.

C.  

The pipe will continue to receive the messages as Snowflake will automatically restore the subscription by creating a new SNS topic. Snowflake will then recreate the pipe by specifying the new SNS topic name in the pipe definition.

D.  

The pipe will no longer be able to receive the messages. To restore the system immediately, the user needs to manually create a new SNS topic with a different name and then recreate the pipe by specifying the new SNS topic name in the pipe definition.

Discussion 0
Questions 17

An Architect needs to meet a company requirement to ingest files from the company's AWS storage accounts into the company's Snowflake Google Cloud Platform (GCP) account. How can the ingestion of these files into the company's Snowflake account be initiated? (Select TWO).

Options:

A.  

Configure the client application to call the Snowpipe REST endpoint when new files have arrived in Amazon S3 storage.

B.  

Configure the client application to call the Snowpipe REST endpoint when new files have arrived in Amazon S3 Glacier storage.

C.  

Create an AWS Lambda function to call the Snowpipe REST endpoint when new files have arrived in Amazon S3 storage.

D.  

Configure AWS Simple Notification Service (SNS) to notify Snowpipe when new files have arrived in Amazon S3 storage.

E.  

Configure the client application to issue a COPY INTO command to Snowflake when new files have arrived in Amazon S3 Glacier storage.

Discussion 0
Questions 18

A company needs to have the following features available in its Snowflake account:

1. Support for Multi-Factor Authentication (MFA)

2. A minimum of 2 months of Time Travel availability

3. Database replication in between different regions

4. Native support for JDBC and ODBC

5. Customer-managed encryption keys using Tri-Secret Secure

6. Support for Payment Card Industry Data Security Standards (PCI DSS)

In order to provide all the listed services, what is the MINIMUM Snowflake edition that should be selected during account creation?

Options:

A.  

Standard

B.  

Enterprise

C.  

Business Critical

D.  

Virtual Private Snowflake (VPS)

Discussion 0
Questions 19

A company has built a data pipeline using Snowpipe to ingest files from an Amazon S3 bucket. Snowpipe is configured to load data into staging database tables. Then a task runs to load the data from the staging database tables into the reporting database tables.

The company is satisfied with the availability of the data in the reporting database tables, but the reporting tables are not pruning effectively. Currently, a size 4X-Large virtual warehouse is being used to query all of the tables in the reporting database.

What step can be taken to improve the pruning of the reporting tables?

Options:

A.  

Eliminate the use of Snowpipe and load the files into internal stages using PUT commands.

B.  

Increase the size of the virtual warehouse to a size 5X-Large.

C.  

Use an ORDER BY command to load the reporting tables.

D.  

Create larger files for Snowpipe to ingest and ensure the staging frequency does not exceed 1 minute.

Discussion 0
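For reference, loading a table in sorted order is one way to improve natural clustering and therefore pruning. A sketch of the idea, with hypothetical table and column names:

-- Sorting on the common filter column co-locates similar values
-- in the same micro-partitions, which improves partition pruning
insert into reporting.orders
select *
from staging.orders
order by order_date;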
Questions 20

A table for IoT devices that measures water usage is created. The table quickly becomes large and contains more than 2 billion rows.

The general query patterns for the table are:

1. DeviceId, IoT_timestamp, and CustomerId are frequently used in the filter predicate for the select statement

2. The columns City and DeviceManufacturer are often retrieved

3. There is often a count on UniqueId

Which field(s) should be used for the clustering key?

Options:

A.  

IoT_timestamp

B.  

City and DeviceManufacturer

C.  

DeviceId and CustomerId

D.  

UniqueId

Discussion 0
Questions 21

Is it possible for a data provider account with a Snowflake Business Critical edition to share data with an Enterprise edition data consumer account?

Options:

A.  

A Business Critical account cannot be a data sharing provider to an Enterprise consumer. Any consumer accounts must also be Business Critical.

B.  

If a user in the provider account with role authority to create or alter share adds an Enterprise account as a consumer, it can import the share.

C.  

If a user in the provider account with a share owning role sets share_restrictions to False when adding an Enterprise consumer account, it can import the share.

D.  

If a user in the provider account has a share-owning role that also holds the OVERRIDE SHARE RESTRICTIONS privilege, and sets SHARE_RESTRICTIONS to False when adding an Enterprise consumer account, the consumer can import the share.

Discussion 0
Questions 22

An Architect has a VPN_ACCESS_LOGS table in the SECURITY_LOGS schema containing timestamps of the connection and disconnection, username of the user, and summary statistics.

What should the Architect do to enable the Snowflake search optimization service on this table?

Options:

A.  

Assume role with OWNERSHIP on future tables and ADD SEARCH OPTIMIZATION on the SECURITY_LOGS schema.

B.  

Assume role with ALL PRIVILEGES including ADD SEARCH OPTIMIZATION in the SECURITY_LOGS schema.

C.  

Assume role with OWNERSHIP on VPN_ACCESS_LOGS and ADD SEARCH OPTIMIZATION in the SECURITY_LOGS schema.

D.  

Assume role with ALL PRIVILEGES on VPN_ACCESS_LOGS and ADD SEARCH OPTIMIZATION in the SECURITY_LOGS schema.

Discussion 0
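For reference, once the prerequisite privileges are held, enabling the service is a single statement:

-- Requires OWNERSHIP on the table and the ADD SEARCH OPTIMIZATION
-- privilege on its schema
alter table security_logs.vpn_access_logs add search optimization;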
Questions 23

User1 and User2 are new users that were granted different functional roles.

User1 was granted the IT_ANALYST_ROLE

User2 was granted the FIN_ANALYST_ROLE

Review the following security design (as shown in the diagram):

A database (DB) grants USAGE and SELECT on all tables to DB_IT_RO_ROLE

DB_IT_RO_ROLE is granted to IT_ANALYST_ROLE

IT_SCHEMA contains TABLE1

FINANCE_SCHEMA grants USAGE and SELECT to DB_FIN_ROLE

DB_FIN_ROLE is granted to FIN_ANALYST_ROLE

FINANCE_SCHEMA contains FIN_TABLE

Which tables can each user read?

Options:

A.  

User1 will be the only user able to read tables from both schemas, since the DB_IT_RO_ROLE has SELECT privileges on all database tables.

B.  

User1 will be able to read tables from both schemas, while User2 will be able to read only the FINANCE_SCHEMA tables.

C.  

User2 will be able to read tables from the FINANCE_SCHEMA, while User1 will be unable to read any table.

D.  

User2 will be able to read tables from both schemas, while User1 will be able to read tables only in IT_SCHEMA.

Discussion 0
Questions 24

What transformations are supported in the below SQL statement? (Select THREE).

CREATE PIPE ... AS COPY ... FROM (...)

Options:

A.  

Data can be filtered by an optional where clause.

B.  

Columns can be reordered.

C.  

Columns can be omitted.

D.  

Type casts are supported.

E.  

Incoming data can be joined with other tables.

F.  

The ON ERROR - ABORT statement command can be used.

Discussion 0
This is the COPY INTO <table> statement used by Snowpipe to load data from an ingestion queue into tables [1]. The statement uses a subquery in the FROM clause to transform the data from the staged files before loading it into the table [2].

The transformations supported in the subquery are as follows [2]:

Columns can be reordered, which means the columns selected from the staged files can be listed in a different order to match the target table. For example:

create pipe mypipe as
copy into mytable (col1, col2, col3)
from (
    select $3, $1, $2 from @mystage
);

Columns can be omitted, which means columns in the staged files that are not needed in the target table can be excluded. For example:

create pipe mypipe as
copy into mytable (col1, col2)
from (
    select $1, $2 from @mystage
);

Type casts are supported, which means a column can be converted to another data type during the load. For example:

create pipe mypipe as
copy into mytable (col1, col2)
from (
    select $1::date, $2 from @mystage
);

The other options are not supported in the subquery because [2]:

Filtering with a WHERE clause is not supported in COPY transformations, so rows cannot be excluded during the load; any filtering must happen after the data lands in the target table.

Incoming data cannot be joined with other tables; the subquery may reference only the staged files, so a statement such as the following is invalid:

create pipe mypipe as
copy into mytable (col1, col2, col3)
from (
    select s.$1, s.$2, t.col3 from @mystage s
    join othertable t on s.$1 = t.col1
);

The ON_ERROR = ABORT_STATEMENT copy option cannot be used, because pipes support only the SKIP_FILE error-handling behaviors.

1: CREATE PIPE | Snowflake Documentation

2: Transforming Data During a Load | Snowflake Documentation

Questions 25

A Snowflake Architect is setting up database replication to support a disaster recovery plan. The primary database has external tables.

How should the database be replicated?

Options:

A.  

Create a clone of the primary database then replicate the database.

B.  

Move the external tables to a database that is not replicated, then replicate the primary database.

C.  

Replicate the database ensuring the replicated database is in the same region as the external tables.

D.  

Share the primary database with an account in the same region that the database will be replicated to.

Discussion 0
Questions 26

The following DDL command was used to create a task based on a stream:

Assuming MY_WH is set to AUTO_SUSPEND = 60 and is used exclusively for this task, which statement is true?

Options:

A.  

The warehouse MY_WH will be made active every five minutes to check the stream.

B.  

The warehouse MY_WH will only be active when there are results in the stream.

C.  

The warehouse MY_WH will never suspend.

D.  

The warehouse MY_WH will automatically resize to accommodate the size of the stream.

Discussion 0
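The DDL referenced above is not reproduced in this copy; a typical stream-based task of the kind being described looks like the following sketch (all names are hypothetical):

create task process_orders_task
  warehouse = my_wh
  schedule = '5 MINUTE'
  -- the WHEN condition is evaluated without resuming the warehouse;
  -- MY_WH becomes active only when the stream actually contains rows
  when system$stream_has_data('orders_stream')
as
  insert into f_orders select * from orders_stream;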
Questions 27

An Architect with the ORGADMIN role wants to change a Snowflake account from an Enterprise edition to a Business Critical edition.

How should this be accomplished?

Options:

A.  

Run an ALTER ACCOUNT command and create a tag of EDITION and set the tag to Business Critical.

B.  

Use the account's ACCOUNTADMIN role to change the edition.

C.  

Failover to a new account in the same region and specify the new account's edition upon creation.

D.  

Contact Snowflake Support and request that the account's edition be changed.

Discussion 0
Questions 28

A company is designing a process for importing a large amount of IoT JSON data from cloud storage into Snowflake. New sets of IoT data get generated and uploaded approximately every 5 minutes.

Once the IoT data is in Snowflake, the company needs up-to-date information from an external vendor to join to the data. This data is then presented to users through a dashboard that shows different levels of aggregation. The external vendor is a Snowflake customer.

What solution will MINIMIZE complexity and MAXIMIZE performance?

Options:

A.  

1. Create an external table over the JSON data in cloud storage. 2. Create a task that runs every 5 minutes to run a transformation procedure on new data, based on a saved timestamp. 3. Ask the vendor to expose an API so an external function can be used to generate a call to join the data back to the IoT data in the transformation procedure. 4. Give the transformed table access to the dashboard tool. 5. Perform the aggregations on the dashboard.

B.  

1. Create an external table over the JSON data in cloud storage. 2. Create a task that runs every 5 minutes to run a transformation procedure on new data based on a saved timestamp. 3. Ask the vendor to create a data share with the required data that can be imported into the company's Snowflake account. 4. Join the vendor's data back to the IoT data using a transformation procedure. 5. Create views over the larger dataset to perform the aggregations required by the dashboard.

C.  

1. Create a Snowpipe to bring the JSON data into Snowflake. 2. Use streams and tasks to trigger a transformation procedure when new JSON data arrives. 3. Ask the vendor to expose an API so an external function call can be made to join the vendor's data back to the IoT data in a transformation procedure. 4. Create materialized views over the larger dataset to perform the aggregations required by the dashboard. 5. Give the materialized views access to the dashboard tool.

D.  

1. Create a Snowpipe to bring the JSON data into Snowflake. 2. Use streams and tasks to trigger a transformation procedure when new JSON data arrives. 3. Ask the vendor to create a data share with the required data that is then imported into the Snowflake account. 4. Join the vendor's data back to the IoT data in a transformation procedure. 5. Create materialized views over the larger dataset to perform the aggregations required by the dashboard.

Discussion 0
Questions 29

A retail company has over 3000 stores all using the same Point of Sale (POS) system. The company wants to deliver near real-time sales results to category managers. The stores operate in a variety of time zones and exhibit a dynamic range of transactions each minute, with some stores having higher sales volumes than others.

Sales results are provided in a uniform fashion using data engineered fields that will be calculated in a complex data pipeline. Calculations include exceptions, aggregations, and scoring using external functions interfaced to scoring algorithms. The source data for aggregations has over 100M rows.

Every minute, the POS sends all sales transactions files to a cloud storage location with a naming convention that includes store numbers and timestamps to identify the set of transactions contained in the files. The files are typically less than 10MB in size.

How can the near real-time results be provided to the category managers? (Select TWO).

Options:

A.  

All files should be concatenated before ingestion into Snowflake to avoid micro-ingestion.

B.  

A Snowpipe should be created and configured with AUTO_INGEST = true. A stream should be created to process INSERTS into a single target table using the stream metadata to inform the store number and timestamps.

C.  

A stream should be created to accumulate the near real-time data and a task should be created that runs at a frequency that matches the real-time analytics needs.

D.  

An external scheduler should examine the contents of the cloud storage location and issue SnowSQL commands to process the data at a frequency that matches the real-time analytics needs.

E.  

The copy into command with a task scheduled to run every second should be used to achieve the near-real time requirement.

Discussion 0
Questions 30

An Architect has chosen to separate their Snowflake Production and QA environments using two separate Snowflake accounts.

The QA account is intended to run and test changes on data and database objects before pushing those changes to the Production account. It is a requirement that all database objects and data in the QA account need to be an exact copy of the database objects, including privileges and data in the Production account on at least a nightly basis.

Which is the LEAST complex approach to use to populate the QA account with the Production account’s data and database objects on a nightly basis?

Options:

A.  

1) Create a share in the Production account for each database. 2) Share access to the QA account as a Consumer. 3) The QA account creates a database directly from each share. 4) Create clones of those databases on a nightly basis. 5) Run tests directly on those cloned databases.

B.  

1) Create a stage in the Production account. 2) Create a stage in the QA account that points to the same external object-storage location. 3) Create a task that runs nightly to unload each table in the Production account into the stage. 4) Use Snowpipe to populate the QA account.

C.  

1) Enable replication for each database in the Production account. 2) Create replica databases in the QA account. 3) Create clones of the replica databases on a nightly basis. 4) Run tests directly on those cloned databases.

D.  

1) In the Production account, create an external function that connects into the QA account and returns all the data for one specific table. 2) Run the external function as part of a stored procedure that loops through each table in the Production account and populates each table in the QA account.

Discussion 0
Questions 31

When using the Snowflake Connector for Kafka, what data formats are supported for the messages? (Choose two.)

Options:

A.  

CSV

B.  

XML

C.  

Avro

D.  

JSON

E.  

Parquet

Discussion 0
Questions 32

What is a characteristic of event notifications in Snowpipe?

Options:

A.  

The load history is stored in the metadata of the target table.

B.  

Notifications identify the cloud storage event and the actual data in the files.

C.  

Snowflake can process all older notifications when a paused pipe is resumed.

D.  

When a pipe is paused, event messages received for the pipe enter a limited retention period.

Discussion 0
Questions 33

A Data Engineer is designing a near real-time ingestion pipeline for a retail company to ingest event logs into Snowflake to derive insights. A Snowflake Architect is asked to define security best practices to configure access control privileges for the data load for auto-ingest to Snowpipe.

What are the MINIMUM object privileges required for the Snowpipe user to execute Snowpipe?

Options:

A.  

OWNERSHIP on the named pipe, USAGE on the named stage, target database, and schema, and INSERT and SELECT on the target table

B.  

OWNERSHIP on the named pipe, USAGE and READ on the named stage, USAGE on the target database and schema, and INSERT and SELECT on the target table

C.  

CREATE on the named pipe, USAGE and READ on the named stage, USAGE on the target database and schema, and INSERT and SELECT on the target table

D.  

USAGE on the named pipe, named stage, target database, and schema, and INSERT and SELECT on the target table

Discussion 0
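For context, the privilege combinations in these options correspond to explicit grants to the Snowpipe user's role. A sketch of how such grants are expressed (object names hypothetical; which exact set is the minimum is what the question tests):

grant usage on database ingest_db to role snowpipe_role;
grant usage on schema ingest_db.raw to role snowpipe_role;
grant usage on stage ingest_db.raw.events_stage to role snowpipe_role;
grant insert, select on table ingest_db.raw.event_logs to role snowpipe_role;
grant ownership on pipe ingest_db.raw.events_pipe to role snowpipe_role;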
Questions 34

A company has an inbound share set up with eight tables and five secure views. The company plans to make the share part of its production data pipelines.

Which actions can the company take with the inbound share? (Choose two.)

Options:

A.  

Clone a table from a share.

B.  

Grant modify permissions on the share.

C.  

Create a table from the shared database.

D.  

Create additional views inside the shared database.

E.  

Create a table stream on the shared table.

Discussion 0
Questions 35

An Architect wants to stream website logs near real time to Snowflake using the Snowflake Connector for Kafka.

What characteristics should the Architect consider regarding the different ingestion methods? (Select TWO).

Options:

A.  

Snowpipe Streaming is the default ingestion method.

B.  

Snowpipe Streaming supports schema detection.

C.  

Snowpipe has lower latency than Snowpipe Streaming.

D.  

Snowpipe Streaming automatically flushes data every one second.

E.  

Snowflake can handle jumps or resetting offsets by default.

Discussion 0
Questions 36

An Architect runs the following SQL query:

How can this query be interpreted?

Options:

A.  

FILEROWS is a stage. FILE_ROW_NUMBER is line number in file.

B.  

FILEROWS is the table. FILE_ROW_NUMBER is the line number in the table.

C.  

FILEROWS is a file. FILE_ROW_NUMBER is the file format location.

D.  

FILEROWS is the file format location. FILE_ROW_NUMBER is a stage.

Discussion 0
Questions 37

An Architect needs to automate the daily import of two files from an external stage into Snowflake. One file contains Parquet-formatted data and the other contains CSV-formatted data.

How should the data be joined and aggregated to produce a final result set?

Options:

A.  

Use Snowpipe to ingest the two files, then create a materialized view to produce the final result set.

B.  

Create a task using Snowflake scripting that will import the files, and then call a User-Defined Function (UDF) to produce the final result set.

C.  

Create a JavaScript stored procedure to read, join, and aggregate the data directly from the external stage, and then store the results in a table.

D.  

Create a materialized view to read, join, and aggregate the data directly from the external stage, and use the view to produce the final result set.

Discussion 0
Questions 38

Which of the below commands will use warehouse credits?

Options:

A.  

SHOW TABLES LIKE 'SNOWFL%';

B.  

SELECT MAX(FLAKE_ID) FROM SNOWFLAKE;

C.  

SELECT COUNT(*) FROM SNOWFLAKE;

D.  

SELECT COUNT(FLAKE_ID) FROM SNOWFLAKE GROUP BY FLAKE_ID;

Discussion 0
Questions 39

The following table exists in the production database:

A regulatory requirement states that the company must mask the username for events that are older than six months based on the current date when the data is queried.

How can the requirement be met without duplicating the event data and making sure it is applied when creating views using the table or cloning the table?

Options:

A.  

Use a masking policy on the username column using an entitlement table with valid dates.

B.  

Use a row level policy on the user_events table using an entitlement table with valid dates.

C.  

Use a masking policy on the username column with event_timestamp as a conditional column.

D.  

Use a secure view on the user_events table using a case statement on the username column.
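The six-month condition in this question is evaluated at query time relative to the current date. As a minimal Python sketch (not Snowflake SQL) of that conditional logic, assuming a fixed mask string and approximating six months as 182 days, the username is hidden only when the event is old enough:

```python
from datetime import datetime, timedelta, timezone

SIX_MONTHS = timedelta(days=182)  # assumption: six months modeled as ~182 days

def masked_username(username, event_timestamp, now=None):
    """Return the username, masked if the event is older than six months."""
    now = now or datetime.now(timezone.utc)
    if now - event_timestamp > SIX_MONTHS:
        return "*****"   # event too old: mask
    return username      # recent event: show as-is

now = datetime(2024, 7, 1, tzinfo=timezone.utc)
print(masked_username("alice", datetime(2024, 6, 1, tzinfo=timezone.utc), now))  # alice
print(masked_username("alice", datetime(2023, 6, 1, tzinfo=timezone.utc), now))  # *****
```

Because the condition depends on the query-time clock rather than on stored data, the same row can appear masked today that was visible five months ago, without duplicating any event data.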

Discussion 0
Questions 40

What are purposes for creating a storage integration? (Choose three.)

Options:

A.  

Control access to Snowflake data using a master encryption key that is maintained in the cloud provider’s key management service.

B.  

Store a generated identity and access management (IAM) entity for an external cloud provider regardless of the cloud provider that hosts the Snowflake account.

C.  

Support multiple external stages using one single Snowflake object.

D.  

Avoid supplying credentials when creating a stage or when loading or unloading data.

E.  

Create private VPC endpoints that allow direct, secure connectivity between VPCs without traversing the public internet.

F.  

Manage credentials from multiple cloud providers in one single Snowflake object.

Discussion 0
Questions 41

When using the COPY INTO <table> command with the CSV file format, how does the MATCH_BY_COLUMN_NAME parameter behave?

Options:

A.  

It expects a header to be present in the CSV file, which is matched to a case-sensitive table column name.

B.  

The parameter will be ignored.

C.  

The command will return an error.

D.  

The command will return a warning stating that the file has unmatched columns.

Discussion 0
Questions 42

A company has a table named Data that contains corrupted data. The company wants to recover the data as it was 5 minutes ago using cloning and Time Travel.

What command will accomplish this?

Options:

A.  

CREATE CLONE TABLE Recover_Data FROM Data AT(OFFSET => -60*5);

B.  

CREATE CLONE Recover_Data FROM Data AT(OFFSET => -60*5);

C.  

CREATE TABLE Recover_Data CLONE Data AT(OFFSET => -60*5);

D.  

CREATE TABLE Recover_Data CLONE Data AT(TIME => -60*5);
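The OFFSET variant of Time Travel takes a number of seconds relative to the current time, so the arithmetic in these options matters. A trivial sketch of what `-60*5` evaluates to:

```python
# Time Travel OFFSET values are seconds relative to the current time,
# so "5 minutes ago" is expressed as a negative number of seconds.
minutes_ago = 5
offset_seconds = -60 * minutes_ago
print(offset_seconds)  # -300, i.e. 300 seconds (5 minutes) in the past
```

Note that the options also differ in statement shape: Snowflake clones with `CREATE <object> ... CLONE`, not a `CREATE CLONE` keyword pair.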

Discussion 0
Questions 43

An Architect needs to allow a user to create a database from an inbound share.

To meet this requirement, the user’s role must have which privileges? (Choose two.)

Options:

A.  

IMPORT SHARE;

B.  

IMPORT PRIVILEGES;

C.  

CREATE DATABASE;

D.  

CREATE SHARE;

E.  

IMPORT DATABASE;

Discussion 0
Questions 44

An Architect needs to design a data unloading strategy for Snowflake that will be used with the COPY INTO <location> command.

Which configuration is valid?

Options:

A.  

Location of files: Snowflake internal location. File formats: CSV, XML. File encoding: UTF-8. Encryption: 128-bit

B.  

Location of files: Amazon S3. File formats: CSV, JSON. File encoding: Latin-1 (ISO-8859). Encryption: 128-bit

C.  

Location of files: Google Cloud Storage. File formats: Parquet. File encoding: UTF-8. Compression: gzip

D.  

Location of files: Azure ADLS. File formats: JSON, XML, Avro, Parquet, ORC. Compression: bzip2. Encryption: User-supplied key

Discussion 0
Questions 45

An Architect is troubleshooting a query with poor performance using the QUERY_HISTORY function. The Architect observes that the COMPILATION_TIME is greater than the EXECUTION_TIME.

What is the reason for this?

Options:

A.  

The query is processing a very large dataset.

B.  

The query has overly complex logic.

C.  

The query is queued for execution.

D.  

The query is reading from remote storage.

Discussion 0
Questions 46

The Business Intelligence team reports that when some team members run queries for their dashboards in parallel with others, the query response time gets significantly slower. What can a Snowflake Architect do to identify what is occurring and troubleshoot this issue?

A)

B)

C)

D)

Options:

A.  

Option A

B.  

Option B

C.  

Option C

D.  

Option D

Discussion 0
Questions 47

Files arrive in an external stage every 10 seconds from a proprietary system. The files range in size from 500 KB to 3 MB. The data must be accessible by dashboards as soon as it arrives.

How can a Snowflake Architect meet this requirement with the LEAST amount of coding? (Choose two.)

Options:

A.  

Use Snowpipe with auto-ingest.

B.  

Use a COPY command with a task.

C.  

Use a materialized view on an external table.

D.  

Use the COPY INTO command.

E.  

Use a combination of a task and a stream.

Discussion 0
Questions 48

What step will improve the performance of queries executed against an external table?

Options:

A.  

Partition the external table.

B.  

Shorten the names of the source files.

C.  

Convert the source files' character encoding to UTF-8.

D.  

Use an internal stage instead of an external stage to store the source files.

Discussion 0
Questions 49

An Architect has a table called leader_follower that contains a single column named JSON. The table has one row with the following structure:

{

"activities": [

{ "activityNumber": 1, "winner": 5 },

{ "activityNumber": 2, "winner": 4 }

],

"follower": {

"name": { "default": "Matt" },

"number": 4

},

"leader": {

"name": { "default": "Adam" },

"number": 5

}

}

Which query will produce the following results?

ACTIVITY_NUMBER    WINNER_NAME
1                  Adam
2                  Matt

Options:

A.  

SELECT lf.json:activities.activityNumber AS activity_number,

IFF(

lf.json:activities.activityNumber = lf.json:leader.number,

lf.json:leader.name.default,

lf.json:follower.name.default

)::VARCHAR

FROM leader_follower lf;

B.  

SELECT
    value:activityNumber AS activity_number,
    IFF(
        value:winner = lf.json:leader.number,
        lf.json:leader.name.default,
        lf.json:follower.name.default
    )::VARCHAR AS winner_name
FROM leader_follower lf,
LATERAL FLATTEN(input => json:activities) p;

C.  

SELECT
    value:activityNumber AS activity_number,
    IFF(
        value:winner = lf.json:leader.number,
        lf.json:leader,
        lf.json:follower
    )::VARCHAR AS winner_name
FROM leader_follower lf,
LATERAL FLATTEN(input => json:activities) p;
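What LATERAL FLATTEN plus IFF does over this document can be mirrored in plain Python: iterate over the "activities" array and, for each element, pick the leader's or follower's default name depending on whose number matches the winner. This is a toy model for illustration, not Snowflake itself:

```python
import json

# The single row from the leader_follower table, as given in the question.
doc = json.loads("""
{
  "activities": [
    {"activityNumber": 1, "winner": 5},
    {"activityNumber": 2, "winner": 4}
  ],
  "follower": {"name": {"default": "Matt"}, "number": 4},
  "leader":   {"name": {"default": "Adam"}, "number": 5}
}
""")

def winner_rows(d):
    rows = []
    for activity in d["activities"]:                     # like LATERAL FLATTEN
        if activity["winner"] == d["leader"]["number"]:  # like the IFF condition
            name = d["leader"]["name"]["default"]
        else:
            name = d["follower"]["name"]["default"]
        rows.append((activity["activityNumber"], name))
    return rows

print(winner_rows(doc))  # [(1, 'Adam'), (2, 'Matt')]
```

The key difference between the options is whether the IFF branches return the scalar `name.default` values (which cast cleanly to VARCHAR) or the whole `leader`/`follower` objects.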

Discussion 0
Questions 50

A new table and streams are created with the following commands:

CREATE OR REPLACE TABLE LETTERS (ID INT, LETTER STRING) ;

CREATE OR REPLACE STREAM STREAM_1 ON TABLE LETTERS;

CREATE OR REPLACE STREAM STREAM_2 ON TABLE LETTERS APPEND_ONLY = TRUE;

The following operations are processed on the newly created table:

INSERT INTO LETTERS VALUES (1, 'A');

INSERT INTO LETTERS VALUES (2, 'B');

INSERT INTO LETTERS VALUES (3, 'C');

TRUNCATE TABLE LETTERS;

INSERT INTO LETTERS VALUES (4, 'D');

INSERT INTO LETTERS VALUES (5, 'E');

INSERT INTO LETTERS VALUES (6, 'F');

DELETE FROM LETTERS WHERE ID = 6;

What would be the output of the following SQL commands, in order?

SELECT COUNT (*) FROM STREAM_1;

SELECT COUNT (*) FROM STREAM_2;

Options:

A.  

2 & 6

B.  

2 & 3

C.  

4 & 3

D.  

4 & 6
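The documented difference between the two stream types can be modeled in a toy Python sketch (an illustration of the semantics, not Snowflake itself): a standard stream reports the net row changes since its offset, so an insert followed by a delete or truncate of the same row nets to nothing, while an append-only stream records every insert and ignores deletes and truncates.

```python
def replay(operations):
    """Model a standard stream (net changes) and an append-only stream."""
    standard = {}      # row id -> row; net state changes since the offset
    append_only = []   # every inserted row; deletes/truncates ignored
    for op, rows in operations:
        if op == "insert":
            for r in rows:
                standard[r[0]] = r
                append_only.append(r)
        elif op in ("delete", "truncate"):
            for r in rows:
                standard.pop(r[0], None)  # insert + delete nets to nothing
    return len(standard), len(append_only)

ops = [
    ("insert", [(1, "A")]), ("insert", [(2, "B")]), ("insert", [(3, "C")]),
    ("truncate", [(1, "A"), (2, "B"), (3, "C")]),
    ("insert", [(4, "D")]), ("insert", [(5, "E")]), ("insert", [(6, "F")]),
    ("delete", [(6, "F")]),
]
print(replay(ops))  # (2, 6): standard stream nets to 2 rows, append-only sees 6 inserts
```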

Discussion 0
Questions 51

A company's Architect needs to find an efficient way to get data from an external partner, who is also a Snowflake user. The current solution is based on daily JSON extracts that are placed on an FTP server and uploaded to Snowflake manually. The files are changed several times each month, and the ingestion process needs to be adapted to accommodate these changes.

What would be the MOST efficient solution?

Options:

A.  

Ask the partner to create a share and add the company's account.

B.  

Ask the partner to use the data lake export feature and place the data into cloud storage where Snowflake can natively ingest it (schema-on-read).

C.  

Keep the current structure but request that the partner stop changing files, instead only appending new files.

D.  

Ask the partner to set up a Snowflake reader account and use that account to get the data for ingestion.

Discussion 0
Questions 52

How can the Snowpipe REST API be used to keep a log of data load history?

Options:

A.  

Call insertReport every 20 minutes, fetching the last 10,000 entries.

B.  

Call loadHistoryScan every minute for the maximum time range.

C.  

Call insertReport every 8 minutes for a 10-minute time range.

D.  

Call loadHistoryScan every 10 minutes for a 15-minute time range.
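The trade-off in these options is window coverage: insertReport only retains recent events (roughly the last 10 minutes, up to 10,000 entries), while loadHistoryScan accepts an explicit time range, so the requested range must be at least as long as the polling interval or events can fall between windows. A small Python sketch of that coverage argument, with hypothetical helper names:

```python
def windows(interval_min, range_min, calls):
    """Model polling: each call at minute i*interval scans the last range_min minutes."""
    return [(i * interval_min - range_min, i * interval_min)
            for i in range(1, calls + 1)]

def has_gap(wins):
    """True if any time between consecutive scan windows goes uncovered."""
    wins = sorted(wins)
    return any(nxt[0] > cur[1] for cur, nxt in zip(wins, wins[1:]))

# Calling every 10 minutes for a 15-minute range: windows overlap, nothing is missed.
print(has_gap(windows(10, 15, 6)))   # False
# Calling every 20 minutes for only a 10-minute range would leave gaps.
print(has_gap(windows(20, 10, 6)))   # True
```

Overlapping windows can return the same load event twice, so duplicates would need to be de-duplicated downstream; gaps, by contrast, lose history entirely.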

Discussion 0
Questions 53

An Architect needs to design a solution for building environments for development, test, and pre-production, all located in a single Snowflake account. The environments should be based on production data.

Which solution would be MOST cost-effective and performant?

Options:

A.  

Use zero-copy cloning into transient tables.

B.  

Use zero-copy cloning into permanent tables.

C.  

Use CREATE TABLE ... AS SELECT (CTAS) statements.

D.  

Use a Snowflake task to trigger a stored procedure to copy data.

Discussion 0
Questions 54

An Architect for a multi-national transportation company has a system that is used to check the weather conditions along vehicle routes. The data is provided to drivers.

The weather information is delivered regularly by a third-party company, and this information is generated as a JSON structure. The data is then loaded into Snowflake in a column with a VARIANT data type. This table is queried directly to deliver the statistics to the drivers with minimal time lapse.

A single entry includes (but is not limited to):

- Weather condition; cloudy, sunny, rainy, etc.

- Degree

- Longitude and latitude

- Timeframe

- Location address

- Wind

The table holds more than 10 years' worth of data in order to deliver the statistics from different years and locations. The amount of data on the table increases every day.

The drivers report that they are not receiving the weather statistics for their locations in time.

What can the Architect do to deliver the statistics to the drivers faster?

Options:

A.  

Create an additional table in the schema for longitude and latitude. Define a regular task to fill this information by extracting it from the JSON dataset.

B.  

Add the search optimization service on the VARIANT column for longitude and latitude in order to query the information by using specific metadata.

C.  

Divide the table into several tables for each year by using the timeframe information from the JSON dataset in order to process the queries in parallel.

D.  

Divide the table into several tables for each location by using the location address information from the JSON dataset in order to process the queries in parallel.

Discussion 0