This browser is no longer supported.
Upgrade to Microsoft Edge to take advantage of the latest features, security updates, and technical support.
What can an attacker achieve through prompt injection attacks?
Attacker can increase the processing speed of the AI model.
Attacker can manipulate the AI’s behavior by altering input prompts and injecting malicious instructions.
Attacker can enhance the quality of the AI-generated content.
How can developers control which content is trusted in the Semantic Kernel SDK?
Developers must trust all content by default for the system to function.
Developers can set AllowUnsafeContent = true to trust specific input variables or function call results.
Developers cannot configure trusted content; the SDK automatically handles everything.
Which of the following is a feature of the Function Invocation Filter in the Semantic Kernel SDK?
It can modify prompts before submission to the AI model.
It automatically terminates the function invocation process.
It can log function invocations and validate actions before and after execution.
You must answer all questions before checking your work.
Was this page helpful?