
iSQI CT-AI Exam Questions - Navigate Your Path to Success

The iSQI Certified Tester AI Testing (CT-AI) exam is a good choice for a Customer Service Administration Manager; candidates who pass the iSQI Certified Tester AI Testing exam earn the iSQI ISTQB Certified Tester certification. Below are some essential facts for iSQI CT-AI exam candidates:

  • TrendyCerts offers 80 questions based on the actual iSQI CT-AI syllabus.
  • Our iSQI CT-AI Exam Practice Questions were last updated on: Mar 11, 2025

Sample Questions for iSQI CT-AI Exam Preparation

Question 1

A software component uses machine learning to recognize the digits from a scan of handwritten numbers. In the scenario above, which type of Machine Learning (ML) is this an example of?

SELECT ONE OPTION

Correct: C

Recognizing digits from a scan of handwritten numbers using machine learning is an example of classification. Here's a breakdown:

Classification: This type of machine learning involves categorizing input data into predefined classes. In this scenario, the input data (handwritten digits) are classified into one of the 10 digit classes (0-9).

Why Not Other Options:

Reinforcement Learning: This involves learning by interacting with an environment to achieve a goal, which does not fit the problem of recognizing digits.

Regression: This is used for predicting continuous values, not discrete categories like digit recognition.

Clustering: This involves grouping similar data points together without predefined classes, which is not the case here.
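The distinction above can be illustrated with a toy classifier. The sketch below is not from the exam material: it uses a minimal nearest-centroid classifier on invented 2-D feature vectors (stand-ins for pixel features extracted from scanned digits) to show the defining trait of classification, namely that every input is assigned to one of a fixed set of predefined classes.

```python
# Toy nearest-centroid classifier: every input is mapped to one of a fixed
# set of class labels, the defining property of classification.
# All feature vectors below are made up for illustration.

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(x, centroids):
    """Return the label whose centroid is closest to x (squared Euclidean distance)."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Labeled training examples: the classes ("0", "1") are predefined,
# just as the ten digit classes are in handwritten-digit recognition.
training = {
    "0": [(0.1, 0.2), (0.2, 0.1), (0.0, 0.0)],
    "1": [(0.9, 1.0), (1.0, 0.9), (1.1, 1.1)],
}
centroids = {label: centroid(pts) for label, pts in training.items()}

print(classify((0.05, 0.1), centroids))  # near the "0" cluster -> "0"
print(classify((1.0, 1.0), centroids))   # near the "1" cluster -> "1"
```

Contrast this with clustering, where the two groups would be discovered from the unlabeled points rather than given in advance, and with regression, where the output would be a continuous number rather than a label.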


Question 2

Which ONE of the following statements about adversarial examples is CORRECT in the context of machine learning systems that work on image classifiers?

SELECT ONE OPTION

Correct: D

A. Black box attacks based on adversarial examples create an exact duplicate model of the original.

Black box attacks do not create an exact duplicate model. Instead, they exploit the model by querying it and using the outputs to craft adversarial examples without knowledge of the internal workings.

B. These attack examples cause a model to predict the correct class with slightly less accuracy even though they look like the original image.

Adversarial examples typically cause the model to predict the incorrect class rather than just reducing accuracy. These examples are designed to be visually indistinguishable from the original image but lead to incorrect classifications.

C. These attacks can't be prevented by retraining the model with these examples augmented to the training data.

This statement is incorrect because retraining the model with adversarial examples included in the training data can help the model learn to resist such attacks, a technique known as adversarial training.

D. These examples are model specific and are not likely to cause another model trained on the same task to fail.

Adversarial examples are often model-specific, meaning that they exploit the specific weaknesses of a particular model. While some adversarial examples might transfer between models, many are tailored to the specific model they were generated for and may not affect other models trained on the same task.

Therefore, the correct answer is D because adversarial examples are typically model-specific and may not cause another model trained on the same task to fail.
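The mechanics behind an adversarial example can be sketched on a toy linear classifier. The example below is not from the exam material: it applies an FGSM-style step (nudging each input feature along the sign of the corresponding weight, which is the sign of the loss gradient for a linear score) so that a small, bounded perturbation flips the predicted class. The weights and inputs are invented for illustration.

```python
# Sketch of an adversarial perturbation on a toy linear classifier.
# FGSM-style step: for a linear score w.x + b, the loss gradient w.r.t. the
# input points along w, so shifting each feature by epsilon * sign(w_i)
# is the perturbation that most efficiently pushes the score across the
# decision boundary. All numbers are illustrative only.

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def predict(w, x, b):
    """Linear classifier: class 1 if w.x + b > 0, else class 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

w = [0.6, -0.4, 0.8]
b = -0.1
x = [0.2, 0.5, 0.1]          # clean input, predicted as class 0

# Small perturbation along sign(w): visually a tiny change per feature,
# but enough to flip the prediction.
epsilon = 0.25
x_adv = [xi + epsilon * sign(wi) for wi, xi in zip(w, x)]

print(predict(w, x, b))      # class 0 on the clean input
print(predict(w, x_adv, b))  # class 1 on the perturbed input
```

Because the perturbation is computed from this model's weights, it exploits this model's specific decision boundary; a differently trained model may or may not be fooled by the same perturbed input, which is the model-specificity that option D describes. Adversarial training, mentioned under option C, amounts to adding such perturbed inputs with their correct labels back into the training data.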

