

SnowPro Advanced: Architect Certification Exam Question and Answers

SnowPro Advanced: Architect Certification Exam

Last Update Oct 9, 2024
Total Questions: 162


Questions 1

Company A would like to share data in Snowflake with Company B. Company B is not on the same cloud platform as Company A.

What is required to allow data sharing between these two companies?

Options:

A.  

Create a pipeline to write shared data to a cloud storage location in the target cloud provider.

B.  

Ensure that all views are persisted, as views cannot be shared across cloud platforms.

C.  

Setup data replication to the region and cloud platform where the consumer resides.

D.  

Company A and Company B must agree to use a single cloud platform: Data sharing is only possible if the companies share the same cloud provider.

Questions 2

Which of the following are characteristics of how row access policies can be applied to external tables? (Choose three.)

Options:

A.  

An external table can be created with a row access policy, and the policy can be applied to the VALUE column.

B.  

A row access policy can be applied to the VALUE column of an existing external table.

C.  

A row access policy cannot be directly added to a virtual column of an external table.

D.  

External tables are supported as mapping tables in a row access policy.

E.  

While cloning a database, both the row access policy and the external table will be cloned.

F.  

A row access policy cannot be applied to a view created on top of an external table.
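
For reference, a minimal sketch of attaching a row access policy to the VALUE column when creating an external table (stage, policy, and role names are illustrative):

CREATE OR REPLACE ROW ACCESS POLICY region_rap AS (val VARIANT) RETURNS BOOLEAN ->
  CURRENT_ROLE() = 'SALES_ANALYST';   -- return rows only for this role

CREATE EXTERNAL TABLE sales_ext
  WITH LOCATION = @sales_stage
  FILE_FORMAT = (TYPE = 'PARQUET')
  ROW ACCESS POLICY region_rap ON (VALUE);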

Questions 3

What are some of the characteristics of result set caches? (Choose three.)

Options:

A.  

Time Travel queries can be executed against the result set cache.

B.  

Snowflake persists the data results for 24 hours.

C.  

Each time persisted results for a query are used, a 24-hour retention period is reset.

D.  

The data stored in the result cache will contribute to storage costs.

E.  

The retention period can be reset for a maximum of 31 days.

F.  

The result set cache is not shared between warehouses.

Questions 4

Consider the following scenario where a masking policy is applied on the CREDITCARDNO column of the CREDITCARDINFO table. The masking policy definition is as follows:

Sample data for the CREDITCARDINFO table is as follows:

NAME | EXPIRYDATE | CREDITCARDNO

JOHN DOE | 2022-07-23 | 4321 5678 9012 1234

If the Snowflake system roles have not been granted any additional roles, what will be the result?

Options:

A.  

The sysadmin can see the CREDITCARDNO column data in clear text.

B.  

The owner of the table will see the CREDITCARDNO column data in clear text.

C.  

Anyone with the PI_ANALYTICS role will see the last 4 characters of the CREDITCARDNO column data in clear text.

D.  

Anyone with the PI_ANALYTICS role will see the CREDITCARDNO column as '***MASKED***'.
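
Note that the masking policy definition and the full sample data are shown as images in the original question and are not reproduced here. For orientation only, a policy of the general kind described, revealing the last four digits to a single role and masking everything else, might look like the sketch below; the PI_ANALYTICS role comes from the options, while all other names and the exact logic are illustrative assumptions, not the policy from the question:

CREATE OR REPLACE MASKING POLICY mask_ccno AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() = 'PI_ANALYTICS' THEN '***MASKED***' || RIGHT(val, 4)
    ELSE '***MASKED***'
  END;

ALTER TABLE CREDITCARDINFO MODIFY COLUMN CREDITCARDNO SET MASKING POLICY mask_ccno;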

Questions 5

A company has an external vendor who puts data into Google Cloud Storage. The company's Snowflake account is set up in Azure.

What would be the MOST efficient way to load data from the vendor into Snowflake?

Options:

A.  

Ask the vendor to create a Snowflake account, load the data into Snowflake and create a data share.

B.  

Create an external stage on Google Cloud Storage and use the external table to load the data into Snowflake.

C.  

Copy the data from Google Cloud Storage to Azure Blob storage using external tools and load data from Blob storage to Snowflake.

D.  

Create a Snowflake Account in the Google Cloud Platform (GCP), ingest data into this account and use data replication to move the data from GCP to Azure.

Questions 6

Which of the following ingestion methods can be used to load near real-time data by using the messaging services provided by a cloud provider?

Options:

A.  

Snowflake Connector for Kafka

B.  

Snowflake streams

C.  

Snowpipe

D.  

Spark

Questions 7

A table, EMP_TBL, has three records as shown:

The following variables are set for the session:

Which SELECT statements will retrieve all three records? (Select TWO).

Options:

A.  

Select * FROM $tbl_ref WHERE $col_ref IN ('Name1','Name2','Name3');

B.  

SELECT * FROM EMP_TBL WHERE identifier($col_ref) IN ('Name1','Name2', 'Name3');

C.  

SELECT * FROM identifier WHERE NAME IN ($var1, $var2, $var3);

D.  

SELECT * FROM identifier($tbl_ref) WHERE ID IN ('var1','var2','var3');

E.  

SELECT * FROM $tbl_ref WHERE $col_ref IN ($var1, $var2, $var3);
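
The table contents and variable assignments are shown as an image in the original question and are not reproduced here. For reference, session variables and IDENTIFIER() work roughly as follows (the values are illustrative):

SET tbl_ref = 'EMP_TBL';
SET var1 = 'Name1';

-- IDENTIFIER() lets a session variable stand in for an object name,
-- while $var1 on its own is substituted as a literal value.
SELECT * FROM IDENTIFIER($tbl_ref) WHERE NAME IN ($var1);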

Questions 8

A Developer is having a performance issue with a Snowflake query. The query receives up to 10 different values for one parameter and then performs an aggregation over the majority of a fact table. It then joins against a smaller dimension table. This parameter value is selected by the different query users when they execute it during business hours. Both the fact and dimension tables are loaded with new data in an overnight import process.

On a Small or Medium-sized virtual warehouse, the query performs slowly. Performance is acceptable on a size Large or bigger warehouse. However, there is no budget to increase costs. The Developer needs a recommendation that does not increase compute costs to run this query.

What should the Architect recommend?

Options:

A.  

Create a task that will run the 10 different variations of the query corresponding to the 10 different parameters before the users come in to work. The query results will then be cached and ready to respond quickly when the users re-issue the query.

B.  

Create a task that will run the 10 different variations of the query corresponding to the 10 different parameters before the users come in to work. The task will be scheduled to align with the users' working hours in order to allow the warehouse cache to be used.

C.  

Enable the search optimization service on the table. When the users execute the query, the search optimization service will automatically adjust the query execution plan based on the frequently-used parameters.

D.  

Create a dedicated size Large warehouse for this particular set of queries. Create a new role that has USAGE permission on this warehouse and has the appropriate read permissions over the fact and dimension tables. Have users switch to this role and use this warehouse when they want to access this data.

Questions 9

An Architect has chosen to separate their Snowflake Production and QA environments using two separate Snowflake accounts.

The QA account is intended to run and test changes on data and database objects before pushing those changes to the Production account. It is a requirement that all database objects and data in the QA account need to be an exact copy of the database objects, including privileges and data in the Production account on at least a nightly basis.

Which is the LEAST complex approach to use to populate the QA account with the Production account’s data and database objects on a nightly basis?

Options:

A.  

1) Create a share in the Production account for each database

2) Share access to the QA account as a Consumer

3) The QA account creates a database directly from each share

4) Create clones of those databases on a nightly basis

5) Run tests directly on those cloned databases

B.  

1) Create a stage in the Production account

2) Create a stage in the QA account that points to the same external object-storage location

3) Create a task that runs nightly to unload each table in the Production account into the stage

4) Use Snowpipe to populate the QA account

C.  

1) Enable replication for each database in the Production account

2) Create replica databases in the QA account

3) Create clones of the replica databases on a nightly basis

4) Run tests directly on those cloned databases

D.  

1) In the Production account, create an external function that connects into the QA account and returns all the data for one specific table

2) Run the external function as part of a stored procedure that loops through each table in the Production account and populates each table in the QA account
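
For reference, a minimal sketch of the replication setup referenced in option C, assuming both accounts belong to the same organization (organization, account, and database names are illustrative):

-- In the Production (source) account:
ALTER DATABASE prod_db ENABLE REPLICATION TO ACCOUNTS myorg.qa_account;

-- In the QA (target) account:
CREATE DATABASE prod_db AS REPLICA OF myorg.prod_account.prod_db;
ALTER DATABASE prod_db REFRESH;   -- typically scheduled nightly, e.g. via a task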

Questions 10

Files arrive in an external stage every 10 seconds from a proprietary system. The files range in size from 500 K to 3 MB. The data must be accessible by dashboards as soon as it arrives.

How can a Snowflake Architect meet this requirement with the LEAST amount of coding? (Choose two.)

Options:

A.  

Use Snowpipe with auto-ingest.

B.  

Use a COPY command with a task.

C.  

Use a materialized view on an external table.

D.  

Use the COPY INTO command.

E.  

Use a combination of a task and a stream.

Questions 11

A group of Data Analysts have been granted the role ANALYST_ROLE. They need a Snowflake database where they can create and modify tables, views, and other objects to load with their own data. The Analysts should not have the ability to give other Snowflake users outside of their role access to this data.

How should these requirements be met?

Options:

A.  

Grant ANALYST_ROLE OWNERSHIP on the database, but make sure that ANALYST_ROLE does not have the MANAGE GRANTS privilege on the account.

B.  

Grant SYSADMIN ownership of the database, but grant the create schema privilege on the database to the ANALYST_ROLE.

C.  

Make every schema in the database a managed access schema, owned by SYSADMIN, and grant create privileges on each schema to the ANALYST_ROLE for each type of object that needs to be created.

D.  

Grant ANALYST_ROLE ownership on the database, but grant the ownership on future [object type] s in database privilege to SYSADMIN.

Questions 12

Which Snowflake objects can be used in a data share? (Select TWO).

Options:

A.  

Standard view

B.  

Secure view

C.  

Stored procedure

D.  

External table

E.  

Stream

Questions 13

A Snowflake Architect created a new data share and would like to verify that only specific records in secure views are visible within the data share by the consumers.

What is the recommended way to validate data accessibility by the consumers?

Options:

A.  

Create reader accounts as shown below and impersonate the consumers by logging in with their credentials.

create managed account reader_acct1 admin_name = user1, admin_password = 'Sdfed43da!44T', type = reader;

B.  

Create a row access policy as shown below and assign it to the data share.

create or replace row access policy rap_acct as (acct_id varchar) returns boolean -> case when 'acct1_role' = current_role() then true else false end;

C.  

Set the session parameter called SIMULATED_DATA_SHARING_CONSUMER as shown below in order to impersonate the consumer accounts.

alter session set simulated_data_sharing_consumer = 'ConsumerAcct1';

D.  

Alter the share settings as shown below, in order to impersonate a specific consumer account.

alter share sales_share set accounts = 'Consumer1' share_restrictions = true;
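
For reference, the session parameter mentioned in option C is used roughly as follows (the account and view names are illustrative):

ALTER SESSION SET SIMULATED_DATA_SHARING_CONSUMER = 'ConsumerAcct1';

-- Queries against the secure views now return the rows that the named
-- consumer account would see through the share.
SELECT COUNT(*) FROM sales_db.public.secure_sales_v;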

Questions 14

The Business Intelligence team reports that when some team members run queries for their dashboards in parallel with others, the query response time is getting significantly slower. What can a Snowflake Architect do to identify what is occurring and troubleshoot this issue?

Options:

A.  

[Image option, not reproduced: a computer error message]

B.  

[Image option, not reproduced: a close-up of text]

C.  

[Image option, not reproduced: black text on a white background]

D.  

[Image option, not reproduced: a screenshot of a computer]

Questions 15

A data platform team creates two multi-cluster virtual warehouses with the AUTO_SUSPEND value set to NULL on one, and '0' on the other. What would be the execution behavior of these virtual warehouses?

Options:

A.  

Setting a '0' or NULL value means the warehouses will never suspend.

B.  

Setting a '0' or NULL value means the warehouses will suspend immediately.

C.  

Setting a '0' or NULL value means the warehouses will suspend after the default of 600 seconds.

D.  

Setting a '0' value means the warehouses will suspend immediately, and NULL means the warehouses will never suspend.
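
For reference, AUTO_SUSPEND is expressed in seconds and is set per warehouse; a minimal sketch (the warehouse name is illustrative):

ALTER WAREHOUSE analytics_wh SET AUTO_SUSPEND = 300;   -- suspend after 300 seconds of inactivity
ALTER WAREHOUSE analytics_wh SET AUTO_SUSPEND = NULL;  -- NULL (like 0) disables automatic suspension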

Questions 16

An Architect has been asked to clone schema STAGING as it looked one week ago, Tuesday June 1st at 8:00 AM, to recover some objects.

The STAGING schema has 50 days of retention.

The Architect runs the following statement:

CREATE SCHEMA STAGING_CLONE CLONE STAGING at (timestamp => '2021-06-01 08:00:00');

The Architect receives the following error: Time travel data is not available for schema STAGING. The requested time is either beyond the allowed time travel period or before the object creation time.

The Architect then checks the schema history and sees the following:

CREATED_ON|NAME|DROPPED_ON

2021-06-02 23:00:00 | STAGING | NULL

2021-05-01 10:00:00 | STAGING | 2021-06-02 23:00:00

How can cloning the STAGING schema be achieved?

Options:

A.  

Undrop the STAGING schema and then rerun the CLONE statement.

B.  

Modify the statement: CREATE SCHEMA STAGING_CLONE CLONE STAGING at (timestamp => '2021-05-01 10:00:00');

C.  

Rename the STAGING schema and perform an UNDROP to retrieve the previous STAGING schema version, then run the CLONE statement.

D.  

Cloning cannot be accomplished because the STAGING schema version was not active during the proposed Time Travel time period.
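
For reference, the statements referenced in the options behave as follows; renaming the current schema first is needed because UNDROP fails if an active object with the same name already exists (the new name is illustrative):

ALTER SCHEMA STAGING RENAME TO STAGING_CURRENT;   -- frees up the name STAGING
UNDROP SCHEMA STAGING;                            -- restores the most recently dropped STAGING
CREATE SCHEMA STAGING_CLONE CLONE STAGING
  AT (TIMESTAMP => '2021-06-01 08:00:00'::TIMESTAMP_LTZ);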

Questions 17

A company wants to integrate its main enterprise identity provider with federated authentication with Snowflake.

The authentication integration has been configured and roles have been created in Snowflake. However, the users are not automatically appearing in Snowflake when created and their group membership is not reflected in their assigned roles.

How can the missing functionality be enabled with the LEAST amount of operational overhead?

Options:

A.  

OAuth must be configured between the identity provider and Snowflake. Then the authorization server must be configured with the right mapping of users and roles.

B.  

OAuth must be configured between the identity provider and Snowflake. Then the authorization server must be configured with the right mapping of users, and the resource server must be configured with the right mapping of role assignment.

C.  

SCIM must be enabled between the identity provider and Snowflake. Once both are synchronized through SCIM, their groups will get created as group accounts in Snowflake and the proper roles can be granted.

D.  

SCIM must be enabled between the identity provider and Snowflake. Once both are synchronized through SCIM, users will automatically get created and their group membership will be reflected as roles in Snowflake.
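
For reference, SCIM provisioning from an identity provider is enabled with a security integration of this general shape (the Azure AD client type and provisioner role are illustrative assumptions):

CREATE OR REPLACE SECURITY INTEGRATION aad_scim
  TYPE = SCIM
  SCIM_CLIENT = 'AZURE'
  RUN_AS_ROLE = 'AAD_PROVISIONER';

-- The identity provider then pushes users and groups to Snowflake via SCIM.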

Questions 18

Which organization-related tasks can be performed by the ORGADMIN role? (Choose three.)

Options:

A.  

Changing the name of the organization

B.  

Creating an account

C.  

Viewing a list of organization accounts

D.  

Changing the name of an account

E.  

Deleting an account

F.  

Enabling the replication of a database

Questions 19

The diagram shows the process flow for Snowpipe auto-ingest with Amazon Simple Notification Service (SNS) with the following steps:

Step 1: Data files are loaded in a stage.

Step 2: An Amazon S3 event notification, published by SNS, informs Snowpipe — by way of Amazon Simple Queue Service (SQS) - that files are ready to load. Snowpipe copies the files into a queue.

Step 3: A Snowflake-provided virtual warehouse loads data from the queued files into the target table based on parameters defined in the specified pipe.

If an AWS Administrator accidentally deletes the SQS subscription to the SNS topic in Step 2, what will happen to the pipe that references the topic to receive event messages from Amazon S3?

Options:

A.  

The pipe will continue to receive the messages as Snowflake will automatically restore the subscription to the same SNS topic and will recreate the pipe by specifying the same SNS topic name in the pipe definition.

B.  

The pipe will no longer be able to receive the messages and the user must wait for 24 hours from the time when the SNS topic subscription was deleted. Pipe recreation is not required as the pipe will reuse the same subscription to the existing SNS topic after 24 hours.

C.  

The pipe will continue to receive the messages as Snowflake will automatically restore the subscription by creating a new SNS topic. Snowflake will then recreate the pipe by specifying the new SNS topic name in the pipe definition.

D.  

The pipe will no longer be able to receive the messages. To restore the system immediately, the user needs to manually create a new SNS topic with a different name and then recreate the pipe by specifying the new SNS topic name in the pipe definition.
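
For reference, an auto-ingest pipe that relies on the S3/SNS/SQS notification chain in the diagram is defined roughly like this (stage, table, and topic ARN are illustrative):

CREATE OR REPLACE PIPE sales_pipe
  AUTO_INGEST = TRUE
  AWS_SNS_TOPIC = 'arn:aws:sns:us-east-1:111122223333:snowpipe_sns_topic'
AS
  COPY INTO sales_raw FROM @sales_stage FILE_FORMAT = (TYPE = 'JSON');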

Questions 20

What are characteristics of Dynamic Data Masking? (Select TWO).

Options:

A.  

A masking policy that is currently set on a table can be dropped.

B.  

A single masking policy can be applied to columns in different tables.

C.  

A masking policy can be applied to the value column of an external table.

D.  

The role that creates the masking policy will always see unmasked data in query results.

E.  

A masking policy can be applied to a column with the GEOGRAPHY data type.

Questions 21

A company is designing high availability and disaster recovery plans and needs to maximize redundancy and minimize recovery time objectives for their critical application processes. Cost is not a concern as long as the solution is the best available. The plan so far consists of the following steps:

1. Deployment of Snowflake accounts on two different cloud providers.

2. Selection of cloud provider regions that are geographically far apart.

3. The Snowflake deployment will replicate the databases and account data between both cloud provider accounts.

4. Implementation of Snowflake client redirect.

What is the MOST cost-effective way to provide the HIGHEST uptime and LEAST application disruption if there is a service event?

Options:

A.  

Connect the applications using the - URL. Use the Business Critical Snowflake edition.

B.  

Connect the applications using the - URL. Use the Virtual Private Snowflake (VPS) edition.

C.  

Connect the applications using the - URL. Use the Enterprise Snowflake edition.

D.  

Connect the applications using the - URL. Use the Business Critical Snowflake edition.

Questions 22

A media company needs a data pipeline that will ingest customer review data into a Snowflake table, and apply some transformations. The company also needs to use Amazon Comprehend to do sentiment analysis and make the de-identified final data set available publicly for advertising companies who use different cloud providers in different regions.

The data pipeline needs to run continuously and efficiently as new records arrive in the object storage, leveraging event notifications. Also, the operational complexity, maintenance of the infrastructure, including platform upgrades and security, and the development effort should be minimal.

Which design will meet these requirements?

Options:

A.  

Ingest the data using COPY INTO and use streams and tasks to orchestrate transformations. Export the data into Amazon S3 to do model inference with Amazon Comprehend and ingest the data back into a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.

B.  

Ingest the data using Snowpipe and use streams and tasks to orchestrate transformations. Create an external function to do model inference with Amazon Comprehend and write the final records to a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.

C.  

Ingest the data into Snowflake using Amazon EMR and PySpark using the Snowflake Spark connector. Apply transformations using another Spark job. Develop a python program to do model inference by leveraging the Amazon Comprehend text analysis API. Then write the results to a Snowflake table and create a listing in the Snowflake Marketplace to make the data available to other companies.

D.  

Ingest the data using Snowpipe and use streams and tasks to orchestrate transformations. Export the data into Amazon S3 to do model inference with Amazon Comprehend and ingest the data back into a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.

Questions 23

Which columns can be included in an external table schema? (Select THREE).

Options:

A.  

VALUE

B.  

METADATA$ROW_ID

C.  

METADATA$ISUPDATE

D.  

METADATA$FILENAME

E.  

METADATA$FILE_ROW_NUMBER

F.  

METADATA$EXTERNAL_TABLE_PARTITION

Questions 24

An Architect needs to design a data unloading strategy for Snowflake, that will be used with the COPY INTO command.

Which configuration is valid?

Options:

A.  

Location of files: Snowflake internal location

. File formats: CSV, XML

. File encoding: UTF-8

. Encryption: 128-bit

B.  

Location of files: Amazon S3

. File formats: CSV, JSON

. File encoding: Latin-1 (ISO-8859)

. Encryption: 128-bit

C.  

Location of files: Google Cloud Storage

. File formats: Parquet

. File encoding: UTF-8

. Compression: gzip

D.  

Location of files: Azure ADLS

. File formats: JSON, XML, Avro, Parquet, ORC

. Compression: bzip2

. Encryption: User-supplied key
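
For reference, a data unload with COPY INTO <location> takes this general form (stage, path, and table names are illustrative):

COPY INTO @my_unload_stage/export/
  FROM sales
  FILE_FORMAT = (TYPE = 'CSV' COMPRESSION = 'GZIP')
  OVERWRITE = TRUE;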

Questions 25

Which feature provides the capability to define an alternate cluster key for a table with an existing cluster key?

Options:

A.  

External table

B.  

Materialized view

C.  

Search optimization

D.  

Result cache

Questions 26

An Architect is troubleshooting a query with poor performance using the QUERY_HISTORY function. The Architect observes that the COMPILATION_TIME is greater than the EXECUTION_TIME.

What is the reason for this?

Options:

A.  

The query is processing a very large dataset.

B.  

The query has overly complex logic.

C.  

The query is queued for execution.

D.  

The query is reading from remote storage.
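
For reference, compilation and execution times (in milliseconds) can be compared directly with the QUERY_HISTORY table function; the arguments below are illustrative:

SELECT query_id, query_text, compilation_time, execution_time
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(RESULT_LIMIT => 100))
WHERE compilation_time > execution_time
ORDER BY start_time DESC;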

Questions 27

A Snowflake Architect is working with Data Modelers and Table Designers to draft an ELT framework specifically for data loading using Snowpipe. The Table Designers will add a timestamp column that inserts the current timestamp as the default value as records are loaded into a table. The intent is to capture the time when each record gets loaded into the table; however, when tested, the timestamps are earlier than the LOAD_TIME column values returned by the COPY_HISTORY function or the COPY_HISTORY view (Account Usage).

Why is this occurring?

Options:

A.  

The timestamps are different because there are parameter setup mismatches. The parameters need to be realigned

B.  

The Snowflake timezone parameter Is different from the cloud provider's parameters causing the mismatch.

C.  

The Table Designer team has not used the localtimestamp or systimestamp functions in the Snowflake copy statement.

D.  

The CURRENT_TIME is evaluated when the load operation is compiled in cloud services rather than when the record is inserted into the table.
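
For reference, the pattern described in the scenario, a column defaulting to the current timestamp plus a load-history lookup, looks roughly like this (table and column names are illustrative):

CREATE OR REPLACE TABLE events (
  payload   VARIANT,
  loaded_at TIMESTAMP_LTZ DEFAULT CURRENT_TIMESTAMP()   -- intended to capture the load time
);

SELECT file_name, last_load_time
FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(
       TABLE_NAME => 'EVENTS',
       START_TIME => DATEADD(hour, -24, CURRENT_TIMESTAMP())));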

Questions 28

A Data Engineer is designing a near real-time ingestion pipeline for a retail company to ingest event logs into Snowflake to derive insights. A Snowflake Architect is asked to define security best practices to configure access control privileges for the data load for auto-ingest to Snowpipe.

What are the MINIMUM object privileges required for the Snowpipe user to execute Snowpipe?

Options:

A.  

OWNERSHIP on the named pipe, USAGE on the named stage, target database, and schema, and INSERT and SELECT on the target table

B.  

OWNERSHIP on the named pipe, USAGE and READ on the named stage, USAGE on the target database and schema, and INSERT and SELECT on the target table

C.  

CREATE on the named pipe, USAGE and READ on the named stage, USAGE on the target database and schema, and INSERT and SELECT on the target table

D.  

USAGE on the named pipe, named stage, target database, and schema, and INSERT and SELECT on the target table
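
For reference, grants of the kind listed in the options are written like this (role and object names are illustrative; READ applies to internal stages, USAGE to external stages):

GRANT USAGE ON DATABASE raw_db TO ROLE snowpipe_role;
GRANT USAGE ON SCHEMA raw_db.events TO ROLE snowpipe_role;
GRANT USAGE ON STAGE raw_db.events.src_stage TO ROLE snowpipe_role;
GRANT INSERT, SELECT ON TABLE raw_db.events.event_log TO ROLE snowpipe_role;
GRANT OWNERSHIP ON PIPE raw_db.events.event_pipe TO ROLE snowpipe_role;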

Questions 29

A table contains five columns and it has millions of records. The cardinality distribution of the columns is shown below:

Columns C4 and C5 are mostly used by SELECT queries in the GROUP BY and ORDER BY clauses, whereas columns C1, C2, and C3 are heavily used in filter and join conditions of SELECT queries.

The Architect must design a clustering key for this table to improve the query performance.

Based on Snowflake recommendations, how should the clustering key columns be ordered while defining the multi-column clustering key?

Options:

A.  

C5, C4, C2

B.  

C3, C4, C5

C.  

C1, C3, C2

D.  

C2, C1, C3
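
The cardinality figures are shown as an image in the original question and are not reproduced here. For reference, a multi-column clustering key is declared in the chosen order; Snowflake's general guidance is to order columns from lower to higher cardinality and to favor columns used in selective filters and joins (table and column names are illustrative):

ALTER TABLE big_fact CLUSTER BY (c_low_cardinality, c_mid_cardinality, c_high_cardinality);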

Questions 30

A company has an inbound share set up with eight tables and five secure views. The company plans to make the share part of its production data pipelines.

Which actions can the company take with the inbound share? (Choose two.)

Options:

A.  

Clone a table from a share.

B.  

Grant modify permissions on the share.

C.  

Create a table from the shared database.

D.  

Create additional views inside the shared database.

E.  

Create a table stream on the shared table.

Questions 31

A company's Architect needs to find an efficient way to get data from an external partner, who is also a Snowflake user. The current solution is based on daily JSON extracts that are placed on an FTP server and uploaded to Snowflake manually. The files are changed several times each month, and the ingestion process needs to be adapted to accommodate these changes.

What would be the MOST efficient solution?

Options:

A.  

Ask the partner to create a share and add the company's account.

B.  

Ask the partner to use the data lake export feature and place the data into cloud storage where Snowflake can natively ingest it (schema-on-read).

C.  

Keep the current structure but request that the partner stop changing files, instead only appending new files.

D.  

Ask the partner to set up a Snowflake reader account and use that account to get the data for ingestion.

Questions 32

Database DB1 has schema S1 which has one table, T1.

DB1 --> S1 --> T1

The retention period of DB1 is set to 10 days.

The retention period of S1 is set to 20 days.

The retention period of T1 is set to 30 days.

The user runs the following command:

Drop Database DB1;

What will the Time Travel retention period be for T1?

Options:

A.  

10 days

B.  

20 days

C.  

30 days

D.  

37 days
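
For reference, the retention periods in the scenario would be set with DATA_RETENTION_TIME_IN_DAYS at each level before the drop:

ALTER DATABASE DB1 SET DATA_RETENTION_TIME_IN_DAYS = 10;
ALTER SCHEMA DB1.S1 SET DATA_RETENTION_TIME_IN_DAYS = 20;
ALTER TABLE DB1.S1.T1 SET DATA_RETENTION_TIME_IN_DAYS = 30;

DROP DATABASE DB1;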

Questions 33

Why might a Snowflake Architect use a star schema model rather than a 3NF model when designing a data architecture to run in Snowflake? (Select TWO).

Options:

A.  

Snowflake cannot handle the joins implied in a 3NF data model.

B.  

The Architect wants to remove data duplication from the data stored in Snowflake.

C.  

The Architect is designing a landing zone to receive raw data into Snowflake.

D.  

The BI tool needs a data model that allows users to summarize facts across different dimensions, or to drill down from the summaries.

E.  

The Architect wants to present a simple flattened single view of the data to a particular group of end users.

Questions 34

Which Snowflake data modeling approach is designed for BI queries?

Options:

A.  

3 NF

B.  

Star schema

C.  

Data Vault

D.  

Snowflake schema

Questions 35

A company is using Snowflake in Azure in the Netherlands. The company analyst team also has data in JSON format that is stored in an Amazon S3 bucket in the AWS Singapore region that the team wants to analyze.

The Architect has been given the following requirements:

1. Provide access to frequently changing data

2. Keep egress costs to a minimum

3. Maintain low latency

How can these requirements be met with the LEAST amount of operational overhead?

Options:

A.  

Use a materialized view on top of an external table against the S3 bucket in AWS Singapore.

B.  

Use an external table against the S3 bucket in AWS Singapore and copy the data into transient tables.

C.  

Copy the data between providers from S3 to Azure Blob storage to collocate, then use Snowpipe for data ingestion.

D.  

Use AWS Transfer Family to replicate data between the S3 bucket in AWS Singapore and an Azure Netherlands Blob storage, then use an external table against the Blob storage.

Questions 36

What is a characteristic of loading data into Snowflake using the Snowflake Connector for Kafka?

Options:

A.  

The Connector only works in Snowflake regions that use AWS infrastructure.

B.  

The Connector works with all file formats, including text, JSON, Avro, ORC, Parquet, and XML.

C.  

The Connector creates and manages its own stage, file format, and pipe objects.

D.  

Loads using the Connector will have lower latency than Snowpipe and will ingest data in real time.

Questions 37

What Snowflake system functions are used to view and or monitor the clustering metadata for a table? (Select TWO).

Options:

A.  

SYSTEM$CLUSTERING

B.  

SYSTEM$TABLE_CLUSTERING

C.  

SYSTEM$CLUSTERING_DEPTH

D.  

SYSTEM$CLUSTERING_RATIO

E.  

SYSTEM$CLUSTERING_INFORMATION
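
For reference, clustering metadata system functions are called like this (the table name and clustering key are illustrative):

SELECT SYSTEM$CLUSTERING_INFORMATION('SALES', '(store_id, sale_date)');
SELECT SYSTEM$CLUSTERING_DEPTH('SALES', '(store_id, sale_date)');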

Questions 38

What is the MOST efficient way to design an environment where data retention is not considered critical, and customization needs are to be kept to a minimum?

Options:

A.  

Use a transient database.

B.  

Use a transient schema.

C.  

Use a transient table.

D.  

Use a temporary table.

Questions 39

An Architect is designing a data lake with Snowflake. The company has structured, semi-structured, and unstructured data. The company wants to save the data inside the data lake within the Snowflake system. The company is planning on sharing data among its corporate branches using Snowflake data sharing.

What should be considered when sharing the unstructured data within Snowflake?

Options:

A.  

A pre-signed URL should be used to save the unstructured data into Snowflake in order to share data over secure views, with no time limit for the URL.

B.  

A scoped URL should be used to save the unstructured data into Snowflake in order to share data over secure views, with a 24-hour time limit for the URL.

C.  

A file URL should be used to save the unstructured data into Snowflake in order to share data over secure views, with a 7-day time limit for the URL.

D.  

A file URL should be used to save the unstructured data into Snowflake in order to share data over secure views, with the "expiration_time" argument defined for the URL time limit.
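
For reference, scoped URLs for staged unstructured files are generated with BUILD_SCOPED_FILE_URL, typically wrapped in a secure view for sharing; this sketch assumes a stage with a directory table enabled (stage and view names are illustrative):

CREATE OR REPLACE SECURE VIEW shared_docs_v AS
SELECT relative_path,
       BUILD_SCOPED_FILE_URL(@docs_stage, relative_path) AS scoped_url
FROM DIRECTORY(@docs_stage);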

Questions 40

Which of the following commands will use warehouse credits?

Options:

A.  

SHOW TABLES LIKE 'SNOWFL%';

B.  

SELECT MAX(FLAKE_ID) FROM SNOWFLAKE;

C.  

SELECT COUNT(*) FROM SNOWFLAKE;

D.  

SELECT COUNT(FLAKE_ID) FROM SNOWFLAKE GROUP BY FLAKE_ID;

Questions 41

What is a key consideration when setting up search optimization service for a table?

Options:

A.  

Search optimization service works best with a column that has a minimum of 100 K distinct values.

B.  

Search optimization service can significantly improve query performance on partitioned external tables.

C.  

Search optimization service can help to optimize storage usage by compressing the data into a GZIP format.

D.  

The table must be clustered with a key having multiple columns for effective search optimization.

Questions 42

A retail company has over 3000 stores all using the same Point of Sale (POS) system. The company wants to deliver near real-time sales results to category managers. The stores operate in a variety of time zones and exhibit a dynamic range of transactions each minute, with some stores having higher sales volumes than others.

Sales results are provided in a uniform fashion using data engineered fields that will be calculated in a complex data pipeline. Calculations include exceptions, aggregations, and scoring using external functions interfaced to scoring algorithms. The source data for aggregations has over 100M rows.

Every minute, the POS sends all sales transactions files to a cloud storage location with a naming convention that includes store numbers and timestamps to identify the set of transactions contained in the files. The files are typically less than 10MB in size.

How can the near real-time results be provided to the category managers? (Select TWO).

Options:

A.  

All files should be concatenated before ingestion into Snowflake to avoid micro-ingestion.

B.  

A Snowpipe should be created and configured with AUTO_INGEST = true. A stream should be created to process INSERTS into a single target table using the stream metadata to inform the store number and timestamps.

C.  

A stream should be created to accumulate the near real-time data and a task should be created that runs at a frequency that matches the real-time analytics needs.

D.  

An external scheduler should examine the contents of the cloud storage location and issue SnowSQL commands to process the data at a frequency that matches the real-time analytics needs.

E.  

The copy into command with a task scheduled to run every second should be used to achieve the near-real time requirement.

Questions 43

A company is storing large numbers of small JSON files (ranging from 1-4 bytes) that are received from IoT devices and sent to a cloud provider. In any given hour, 100,000 files are added to the cloud provider.

What is the MOST cost-effective way to bring this data into a Snowflake table?

Options:

A.  

An external table

B.  

A pipe

C.  

A stream

D.  

A copy command at regular intervals

Questions 44

A new table and streams are created with the following commands:

CREATE OR REPLACE TABLE LETTERS (ID INT, LETTER STRING) ;

CREATE OR REPLACE STREAM STREAM_1 ON TABLE LETTERS;

CREATE OR REPLACE STREAM STREAM_2 ON TABLE LETTERS APPEND_ONLY = TRUE;

The following operations are processed on the newly created table:

INSERT INTO LETTERS VALUES (1, 'A');

INSERT INTO LETTERS VALUES (2, 'B');

INSERT INTO LETTERS VALUES (3, 'C');

TRUNCATE TABLE LETTERS;

INSERT INTO LETTERS VALUES (4, 'D');

INSERT INTO LETTERS VALUES (5, 'E');

INSERT INTO LETTERS VALUES (6, 'F');

DELETE FROM LETTERS WHERE ID = 6;

What would be the output of the following SQL commands, in order?

SELECT COUNT (*) FROM STREAM_1;

SELECT COUNT (*) FROM STREAM_2;

Options:

A.  

2 & 6

B.  

2 & 3

C.  

4 & 3

D.  

4 & 6

Questions 45

At which object type level can the APPLY MASKING POLICY, APPLY ROW ACCESS POLICY and APPLY SESSION POLICY privileges be granted?

Options:

A.  

Global

B.  

Database

C.  

Schema

D.  

Table
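
For reference, these APPLY privileges are granted with statements of the following form (the role name is illustrative):

GRANT APPLY MASKING POLICY ON ACCOUNT TO ROLE policy_admin;
GRANT APPLY ROW ACCESS POLICY ON ACCOUNT TO ROLE policy_admin;
GRANT APPLY SESSION POLICY ON ACCOUNT TO ROLE policy_admin;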

Questions 46

A healthcare company is deploying a Snowflake account that may include Personal Health Information (PHI). The company must ensure compliance with all relevant privacy standards.

Which best practice recommendations will meet data protection and compliance requirements? (Choose three.)

Options:

A.  

Use, at minimum, the Business Critical edition of Snowflake.

B.  

Create Dynamic Data Masking policies and apply them to columns that contain PHI.

C.  

Use the Internal Tokenization feature to obfuscate sensitive data.

D.  

Use the External Tokenization feature to obfuscate sensitive data.

E.  

Rewrite SQL queries to eliminate projections of PHI data based on current_role().

F.  

Avoid sharing data with partner organizations.
