

H13-321_V2.5: HCIP-AI EI Developer V2.5 Exam Questions and Answers

Last Update: Sep 30, 2025
Total Questions: 60

Question 1

Which of the following has never been used as a method in the history of NLP?

Options:

A. Recursion-based method
B. Deep learning-based method
C. Rule-based method
D. Statistics-based method

Question 2

In an image preprocessing experiment, the cv2.imread("lena.png", 1) function provided by OpenCV is used to read images. The parameter "1" in this function represents a _______-channel image. (Fill in the blank with a number.)
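
As a quick check of the flag's meaning, here is a minimal Python sketch (assuming a local "lena.png" exists); flags 1 and 0 correspond to OpenCV's documented cv2.IMREAD_COLOR and cv2.IMREAD_GRAYSCALE constants:

    import cv2

    color = cv2.imread("lena.png", 1)   # flag 1 = cv2.IMREAD_COLOR: 3-channel BGR
    gray = cv2.imread("lena.png", 0)    # flag 0 = cv2.IMREAD_GRAYSCALE: 1 channel

    print(color.shape)                  # (height, width, 3)
    print(gray.shape)                   # (height, width)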


Question 3

How many parameters need to be learned when a 3 × 3 convolution kernel is used to perform the convolution operation on two three-channel color images?

Options:

A. 10
B. 9
C. 28
D. 55
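
A worked count behind the options: a 3 × 3 kernel over a 3-channel input learns one weight per kernel cell per channel, plus one bias term; the number of images convolved does not add parameters, because the same kernel is reused. A minimal Python check:

    kernel_h, kernel_w, in_channels = 3, 3, 3
    weights = kernel_h * kernel_w * in_channels   # 27 learnable weights
    bias = 1                                      # one bias per output channel
    print(weights + bias)                         # 28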

Question 4

Which of the following statements about the functions of layer normalization and residual connection in the Transformer is true?

Options:

A. Residual connections and layer normalization help prevent vanishing gradients and exploding gradients in deep networks.
B. Residual connections primarily add depth to the model but do not aid in gradient propagation.
C. Layer normalization accelerates model convergence and does not affect model stability.
D. In shallow networks, residual connections are beneficial, but they aggravate the vanishing gradient problem in deep networks.
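
For context, a minimal post-norm sketch in PyTorch (the linear sublayer is a stand-in, not the exam's reference implementation): the residual addition gives gradients a direct path through deep stacks, and layer normalization stabilizes the activations.

    import torch
    import torch.nn as nn

    class ResidualNorm(nn.Module):
        def __init__(self, d_model):
            super().__init__()
            self.norm = nn.LayerNorm(d_model)

        def forward(self, x, sublayer):
            # Residual connection: add the sublayer output back to its input,
            # then normalize, as in the original post-norm Transformer block.
            return self.norm(x + sublayer(x))

    block = ResidualNorm(64)
    out = block(torch.randn(2, 10, 64), nn.Linear(64, 64))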

Question 5

A text classification task has only one final output, while a sequence labeling task has an output at each input position.

Options:

A. TRUE
B. FALSE
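
The difference in output shapes can be sketched directly in PyTorch (all sizes here are hypothetical):

    import torch
    import torch.nn as nn

    x = torch.randn(8, 20, 64)                 # (batch, seq_len, features)

    clf_head = nn.Linear(64, 4)                # text classification: 4 classes
    tag_head = nn.Linear(64, 9)                # sequence labeling: 9 tags

    sentence_logits = clf_head(x.mean(dim=1))  # one output per sequence: (8, 4)
    token_logits = tag_head(x)                 # one output per position: (8, 20, 9)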

Question 6

When training a deep neural network model, a loss function measures the difference between the model's predictions and the actual labels.

Options:

A. TRUE
B. FALSE
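
A minimal illustration with cross-entropy loss in PyTorch (the values are arbitrary):

    import torch
    import torch.nn as nn

    criterion = nn.CrossEntropyLoss()
    logits = torch.randn(4, 3)           # model predictions: 4 samples, 3 classes
    labels = torch.tensor([0, 2, 1, 2])  # actual labels
    loss = criterion(logits, labels)     # scalar gap between predictions and labels
    print(loss.item())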

Question 7

In image recognition algorithms, the structural design of the convolutional layer has a great impact on performance. Which of the following statements are true about the structure and mechanism of the convolutional layer? (Transposed convolution is not considered.)

Options:

A. In the convolutional layer, each neuron only receives part of the input information. This effectively reduces the memory required.
B. The convolutional layer uses parameter sharing so that features at different positions share the same group of parameters. This reduces the number of network parameters required but reduces the expressive capability of the model.
C. The stride of the convolutional layer controls the spatial resolution of the output feature map. A larger stride produces a smaller output feature map and requires less computation.
D. The convolutional layer slides a convolution kernel of a fixed size over the input feature map to extract local features without these features being explicitly defined.
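
The stride claim in option C can be checked numerically with the standard output-size formula, output = floor((W + 2P - K) / S) + 1 (a sketch with assumed sizes):

    # Output width of a convolution given input width w, kernel k, stride s, padding p.
    def conv_output_size(w, k, s, p=0):
        return (w + 2 * p - k) // s + 1

    print(conv_output_size(32, 3, 1))  # 30: stride 1 keeps resolution high
    print(conv_output_size(32, 3, 2))  # 15: a larger stride shrinks the feature map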

Question 8

In 2017, the Google machine translation team proposed the Transformer in their paper "Attention Is All You Need". In a Transformer model, there is a customized LSTM with CNN layers.

Options:

A. TRUE
B. FALSE

Question 9

If a scanned document is not properly placed, and the text is tilted, it is difficult to recognize the characters in the document. Which of the following techniques can be used for correction in this case?

Options:

A. Perspective transformation
B. Grayscale transformation
C. Rotational transformation
D. Affine transformation
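
As an illustration of rotational correction, a minimal OpenCV deskew sketch, assuming the skew angle has already been estimated elsewhere (the file name and angle are placeholders):

    import cv2

    img = cv2.imread("scanned_doc.png", 0)   # hypothetical tilted scan, grayscale
    angle = -7.5                             # assumed estimated skew, in degrees
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    deskewed = cv2.warpAffine(img, M, (w, h),
                              flags=cv2.INTER_LINEAR,
                              borderValue=255)  # fill exposed borders with white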

Question 10

Which of the following are object detection algorithms?

Options:

A. R-CNN
B. YOLO
C. SSD
D. Faster R-CNN

Question 11

Which of the following is a learning algorithm used for Markov chains?

Options:

A. Baum-Welch algorithm
B. Viterbi algorithm
C. Exhaustive search
D. Forward-backward algorithm

Question 12

Transformer models outperform LSTMs when analyzing and processing long-distance dependencies, making them more effective for sequence data processing.

Options:

A. TRUE
B. FALSE

Question 13

The U-Net uses an upsampling mechanism and has a fully-connected layer.

Options:

A. TRUE
B. FALSE
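
For reference, one decoder step of a U-Net-style upsampling path can be sketched as follows (sizes are hypothetical); note that the decoder fuses the encoder's skip connection by concatenation, with no fully-connected layer involved:

    import torch
    import torch.nn as nn

    up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)  # learned upsampling
    x = torch.randn(1, 128, 16, 16)      # decoder feature map
    skip = torch.randn(1, 64, 32, 32)    # matching encoder feature map

    x = up(x)                            # -> (1, 64, 32, 32)
    x = torch.cat([x, skip], dim=1)      # -> (1, 128, 32, 32)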

Question 14

The Vision Transformer (ViT) performs well in image classification tasks. Which of the following is the main advantage of ViT?

Options:

A. It can handle small datasets with minimal labeling required.
B. It achieves fast convergence without using pre-trained models.
C. It can process high-resolution images to enhance classification accuracy.
D. The self-attention mechanism is used to capture global features of images, improving classification accuracy.

Question 15

The basic operations of morphological processing include dilation and erosion. These operations can be combined to achieve practical algorithms such as opening and closing operations.

Options:

A. TRUE
B. FALSE
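
A minimal OpenCV sketch of all four operations (the input file is a placeholder for any binary image):

    import cv2
    import numpy as np

    img = cv2.imread("binary.png", 0)    # hypothetical binary image
    kernel = np.ones((3, 3), np.uint8)   # 3 x 3 structuring element

    eroded = cv2.erode(img, kernel)      # basic operation: erosion
    dilated = cv2.dilate(img, kernel)    # basic operation: dilation
    opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)   # erosion then dilation
    closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)  # dilation then erosion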

Question 16

The technologies underlying ModelArts support a wide range of heterogeneous compute resources, allowing you to flexibly use the resources that fit your needs.

Options:

A. TRUE
B. FALSE

Question 17

In NLP tasks, Transformer models perform well across multiple tasks due to their self-attention mechanism and parallel computing capability. Which of the following statements about Transformer models are true?

Options:

A. Transformer models outperform RNN and CNN in processing long texts because they can effectively capture global dependencies.
B. Multi-head attention is the core component of a Transformer model. It computes multiple attention heads in parallel to capture semantic information in different subspaces.
C. A Transformer model directly captures the dependency between different positions in the input sequence through the self-attention mechanism, without using a recurrent neural network (RNN) or convolutional neural network (CNN).
D. Positional encoding is optional in a Transformer model because the self-attention mechanism can naturally process the order information of sequences.
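
The self-attention computation the options refer to can be sketched in a few lines (a single head, no masking; all shapes are hypothetical):

    import torch
    import torch.nn.functional as F

    def scaled_dot_product_attention(q, k, v):
        # Every position attends to every other position directly, which is
        # how long-range dependencies are captured without an RNN or CNN.
        d_k = q.size(-1)
        scores = q @ k.transpose(-2, -1) / d_k ** 0.5
        weights = F.softmax(scores, dim=-1)
        return weights @ v

    q = k = v = torch.randn(1, 10, 64)           # (batch, seq_len, d_model)
    out = scaled_dot_product_attention(q, k, v)  # -> (1, 10, 64)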

Question 18

Which of the following statements about the levels of natural language understanding are true?

Options:

A. Syntactic analysis is to find out the meanings of words, their structural meaning, and their combined meaning, so as to determine the true meaning or concept expressed by a language.
B. Semantic analysis is to analyze the structure of sentences and phrases to find out the relationships between words and phrases, as well as their functions in sentences.
C. Speech analysis involves distinguishing independent phonemes from a speech stream based on phoneme rules, and then identifying syllables and their lexemes or words according to phoneme form rules.
D. Lexical analysis is to find the lexemes of a word and obtain linguistic information from them.
E. Pragmatic analysis is to study the influence of the language's external environment on language users.
