01. How is the totalTrainingSteps parameter calculated during fine-tuning in OCI Generative AI?
a) totalTrainingSteps = (totalTrainingEpochs * size(trainingDataset)) / trainingBatchSize
b) totalTrainingSteps = (totalTrainingEpochs * trainingBatchSize) / size(trainingDataset)
c) totalTrainingSteps = (size(trainingDataset) * trainingBatchSize) / totalTrainingEpochs
d) totalTrainingSteps = (totalTrainingEpochs + size(trainingDataset)) * trainingBatchSize
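The correct relationship (option a) can be sanity-checked with a short sketch; the function and parameter names simply mirror the terms in the question, and the rounding-up is an assumption for partial final batches.

```python
import math

def total_training_steps(total_training_epochs, dataset_size, training_batch_size):
    """One step per batch: steps = epochs * dataset size / batch size."""
    return math.ceil(total_training_epochs * dataset_size / training_batch_size)

# e.g., 10 epochs over 1,000 examples with batch size 8
steps = total_training_steps(10, 1000, 8)
print(steps)  # 1250
```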
02. In which phase of the RAG pipeline are additional context and user query used by LLMs to respond to the user?
a) Evaluation
b) Ingestion
c) Retrieval
d) Generation
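In the generation phase (option d), the retrieved context and the user query are combined into a single prompt that the LLM answers from. A minimal sketch of that assembly; the template wording and function name are illustrative, not part of any OCI API:

```python
def build_rag_prompt(context_chunks, user_query):
    """Combine retrieved context and the user query into one LLM prompt."""
    context = "\n\n".join(context_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {user_query}\nAnswer:"
    )

prompt = build_rag_prompt(["The home region is us-ashburn-1."], "Which region is used?")
```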
03. A researcher is exploring generative models for various tasks. While diffusion models have shown excellent results in generating high-quality images, adapting them to text poses significant challenges.
What is the primary reason diffusion models are difficult to apply to text generation?
a) Because text is not categorical
b) Because text representation is categorical, unlike images
c) Because diffusion models can only produce images
d) Because text generation does not require complex models
04. What happens when this line of code is executed?
embed_text_response = generative_ai_inference_client.embed_text(embed_text_detail)
a) It processes and configures the OCI profile settings for the inference session.
b) It initializes a pretrained OCI Generative AI model for use in the session.
c) It sends a request to the OCI Generative AI service to generate an embedding for the input text.
d) It initiates a connection to OCI and authenticates using the user’s credentials.
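The call in option (c) sends the input text wrapped in embed_text_detail to the service and receives back one embedding vector per input. To illustrate just the shape of that result without a real OCI call, here is a toy stand-in; the hash-based "embedding" is purely illustrative and has none of the semantics of a real model:

```python
import hashlib

def toy_embed_text(texts, dim=4):
    """Stand-in for an embedding call: map each text to a fixed-length float vector."""
    vectors = []
    for text in texts:
        digest = hashlib.sha256(text.encode()).digest()
        vectors.append([b / 255.0 for b in digest[:dim]])
    return vectors

embeddings = toy_embed_text(["hello OCI"])
print(len(embeddings), len(embeddings[0]))  # 1 4
```

A real response from the service carries such vectors for each input string, which is why the call is a remote embedding request rather than local configuration or authentication.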
05. In an OCI Generative AI chat model, which of these parameter settings is most likely to induce hallucinations and factually incorrect information?
a) temperature = 0.9, top_p = 0.8, and frequency_penalty = 0.1
b) temperature = 0.2, top_p = 0.6, and frequency_penalty = 0.8
c) temperature = 0.0, top_p = 0.7, and frequency_penalty = 1.0
d) temperature = 0.5, top_p = 0.9, and frequency_penalty = 0.5
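Option (a) is the riskiest setting because a high temperature flattens the next-token distribution, giving low-probability (often wrong) tokens a better chance of being sampled. A small sketch of the effect, using softmax over made-up logits:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Higher temperature -> flatter distribution -> more random sampling."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 1.0]
low = softmax_with_temperature(logits, 0.2)   # sharply peaked on the top token
high = softmax_with_temperature(logits, 0.9)  # probability mass spreads out
print(max(low) > max(high))  # True
```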
06. When activating content moderation in OCI Generative AI Agents, which of these can you specify?
a) The maximum file size for input data
b) The threshold for language complexity in responses
c) The type of vector search used for retrieval
d) Whether moderation applies to user prompts, generated responses, or both
07. How can you affect the probability distribution over the vocabulary of a Large Language Model (LLM)?
a) By adjusting the token size during the training phase
b) By restricting the vocabulary used in the model
c) By using techniques like prompting and training
d) By modifying the model’s training data
08. You want to build an LLM application that can connect application components easily and allow for component replacement in a declarative manner.
What approach would you take?
a) Use LangChain Expression Language (LCEL).
b) Use prompts.
c) Use agents.
d) Use Python classes like LLMChain.
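LCEL (option a) lets you compose components declaratively with the `|` operator, so replacing a component changes only one term in the pipeline. A toy re-implementation of that piping idea, not the actual LangChain classes:

```python
class Runnable:
    """Minimal stand-in for an LCEL-style composable component."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # Piping two runnables yields a new one: left output feeds right input.
        return Runnable(lambda value: other.invoke(self.invoke(value)))

prompt = Runnable(lambda topic: f"Tell me about {topic}.")
fake_llm = Runnable(lambda text: text.upper())   # stand-in for a model call
parser = Runnable(lambda text: text.rstrip("."))

chain = prompt | fake_llm | parser
print(chain.invoke("LCEL"))  # TELL ME ABOUT LCEL
```

Swapping `fake_llm` for a different model leaves `prompt | model | parser` otherwise unchanged, which is the declarative-replacement property the question asks about.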
09. What must be done before you can delete a knowledge base in Generative AI Agents?
a) Disconnect the database tool connection.
b) Delete the data sources and agents using that knowledge base.
c) Reassign the knowledge base to a different agent.
d) Archive the knowledge base for future use.
10. Which phase of the RAG pipeline includes loading, splitting, and embedding of documents?
a) Evaluation
b) Generation
c) Retrieval
d) Ingestion
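The ingestion phase (option d) loads documents, splits them into chunks, and embeds each chunk for later retrieval. A compressed sketch of those three steps; the fixed-size character splitter and hash-based embedding are placeholders for real loaders and embedding models:

```python
import hashlib

def split_into_chunks(text, chunk_size=40):
    """Split a loaded document into fixed-size character chunks."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def embed_chunk(chunk, dim=4):
    """Placeholder embedding: real pipelines call an embedding model here."""
    digest = hashlib.sha256(chunk.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]

def ingest(document):
    """Load -> split -> embed, yielding (chunk, vector) pairs for an index."""
    chunks = split_into_chunks(document)
    return [(chunk, embed_chunk(chunk)) for chunk in chunks]

index = ingest("OCI Generative AI supports retrieval-augmented generation pipelines.")
print(len(index))  # one (chunk, vector) pair per chunk
```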