java.lang.Object
ai.tutor.cab302exceptionalhandlers.Utils.AIUtils

public class AIUtils extends Object
Singleton utility class for interacting with the Ollama AI API.

This class handles the construction of prompts, communication with the Ollama API, and parsing of AI responses.

Usage Example:

 AIUtils.ModelResponseFormat response = AIUtils.getInstance().generateResponse(
     chatHistory,
     chatConfig,
     isQuizMode
 );
Author:
Justin.
  • Method Details

    • getInstance

      public static AIUtils getInstance()
      Gets the singleton instance of AIUtils.
      Returns:
      The singleton instance.
      Throws:
      RuntimeException - if initialization fails.
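      Since initialization can fail, callers may wish to handle the exception; the exact failure cause is implementation-specific. An illustrative usage:

       try {
           AIUtils ai = AIUtils.getInstance();
       } catch (RuntimeException e) {
           // Initialization failed; report and fall back to a non-AI mode.
           System.err.println("AIUtils unavailable: " + e.getMessage());
       }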
    • validateQuizResponse

      public static boolean validateQuizResponse(AIUtils.ModelResponseFormat response)
      Validates the structure and content of an AI-generated quiz response.

      This method checks the response format to ensure it contains valid quizzes, questions, and options. The response must not be null or empty. If that holds, each quiz is checked for a title and a non-empty list of questions.

      Each question must satisfy all the fields of the AIUtils.Question class: a valid question number, non-empty content, and at least one option. Each question must also have at least one correct answer option; otherwise, the quiz response is considered invalid.
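
      For example, a quiz response can be validated before use (illustrative usage; history and chatConfig are assumed to already exist):

       AIUtils.ModelResponseFormat response =
               AIUtils.getInstance().generateResponse(history, chatConfig, true);
       if (!AIUtils.validateQuizResponse(response)) {
           // The response did not meet the quiz schema; re-prompt or fall back.
       }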

      Parameters:
      response - The AIUtils.ModelResponseFormat to validate.
      Returns:
      True if the quiz response is valid, false otherwise.
    • isOllamaRunning

      public boolean isOllamaRunning()
      Pings the Ollama host to check whether it is running.

      By default, Ollama runs locally at http://127.0.0.1:11434/.

      On Windows, setting the Ollama host to "localhost" is not equivalent to 127.0.0.1.

      To start Ollama, run `ollama serve` in a terminal.
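
      Illustrative usage:

       if (!AIUtils.getInstance().isOllamaRunning()) {
           System.err.println("Ollama is not reachable. Start it with: ollama serve");
       }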

      Returns:
      True if Ollama is running, false otherwise.
    • hasModel

      public boolean hasModel()
      Calls /api/tags on the Ollama host to check whether the configured model (modelName) is available.
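
      Illustrative usage, checking model availability before generating (getModelName() is documented below):

       AIUtils ai = AIUtils.getInstance();
       if (!ai.hasModel()) {
           System.err.println("Model missing. Pull it with: ollama pull " + ai.getModelName());
       }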
      Returns:
      True if the model is available, false otherwise.
    • getModelName

      public String getModelName()
      Gets the name of the Ollama model in use.
      Returns:
      The model name.
    • setVerbose

      public void setVerbose(boolean verbose)
      Sets the verbosity of OllamaAPI output.

      By default, this is false when OllamaAPI is initialized.

      Parameters:
      verbose - True to enable verbose output, false to disable.
    • generateResponse

      public AIUtils.ModelResponseFormat generateResponse(List<Message> history, Chat chatConfig, boolean isQuizMode)
      Calls /api/chat on the Ollama host to generate an AI response based on the chat history and configuration.

      The AI model is configured using OptionsBuilder with the specified temperature, number of predictions, and context size. Suitable values for these parameters are highly dependent on which model is being used.

      After the model generates a response, its thinking tokens are stripped out. The response is then processed by either the processQuizResponse(String) or processChatResponse(String) method, assuming the response is in JSON format, and returned as an AIUtils.ModelResponseFormat.
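
      For instance, models that emit reasoning traces often wrap them in <think>...</think> tags; a minimal sketch of stripping such tokens (the exact tag format handled by AIUtils is an assumption) might be:

       // Remove any <think>...</think> spans, including newlines, then trim.
       String cleaned = raw.replaceAll("(?s)<think>.*?</think>", "").trim();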

      If response generation fails, a default error message is returned instead.

      A system prompt for the AI to follow is expected but not enforced. The system prompt is constructed based on the provided chat configuration and whether the response is for a quiz or a general chat.

      System prompts are loaded via the loadPrompts() method.
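
      Putting the checks above together, a typical call sequence might look like the following (illustrative; construction of history and chatConfig is elided):

       AIUtils ai = AIUtils.getInstance();
       if (ai.isOllamaRunning() && ai.hasModel()) {
           AIUtils.ModelResponseFormat reply =
                   ai.generateResponse(history, chatConfig, false);
       }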

      Parameters:
      history - The list of previous Message objects in the chat.
      chatConfig - The Chat configuration (e.g., personality, quiz settings).
      isQuizMode - True if a quiz response is requested, false for a general chat response.
      Returns:
      An AIUtils.ModelResponseFormat containing the AI's response.