This first exercise introduces LLM interaction with Spring AI by implementing a simple prompt use case. Modify the `LLMService` class.
We will use a `ChatClient` object to interact with the LLM. This object can be built from the `ChatClient.Builder` that is already instantiated thanks to autoconfiguration.
- Create a private final `ChatClient` attribute named `chatClient`
- Create a private final `SystemMessage` attribute named `systemMessage`
- In the `LLMService` constructor, set `chatClient` to the result of calling `build()` on the builder
```java
private final ChatClient chatClient;
private final SystemMessage systemMessage;

public LLMService(ChatClient.Builder builder, @Value("classpath:/prompt-system.md") Resource promptSystem) {
    this.systemMessage = new SystemMessage(promptSystem);
    this.chatClient = builder.build();
}
```
Update the `prompt-system.md` file in the `src/main/resources` folder with the following content:

```
Please answer the question asked and provide the shortest possible response without extra text nor line-breaks, using formal English language.
```
Create an `OllamaOptions` attribute and initialize it in the constructor using the `OllamaOptions.create()` method, setting the model to `mistral:7b` and the temperature to `0.8`.
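A possible sketch of this step is shown below; the `withModel` and `withTemperature` fluent setter names are assumed from the Spring AI milestone API and may differ in your Spring AI version:

```java
private final OllamaOptions options;

public LLMService(ChatClient.Builder builder, @Value("classpath:/prompt-system.md") Resource promptSystem) {
    this.systemMessage = new SystemMessage(promptSystem);
    // Assumed fluent setters; adjust to the OllamaOptions API of your Spring AI version.
    this.options = OllamaOptions.create()
            .withModel("mistral:7b")   // the model Ollama should serve
            .withTemperature(0.8f);    // 0.8 leaves room for some variability in answers
    this.chatClient = builder.build();
}
```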
Complete the existing `getResponse` method with the following steps:
- Create a new `Prompt` object using the `Prompt(List<Message> messages, OllamaOptions options)` constructor. Pass the previously created objects as arguments: the `SystemMessage` and the `UserMessage` in a list, along with the `OllamaOptions` object
- Call the `chatClient` streaming API (`chatClient.prompt(prompt).stream()`), passing the `Prompt` object as argument
- Map the streamed result and return it as a `Stream<String>`
```java
private Stream<String> getResponse(final Message userMessage) {
    // The system message defines the overall behavior; the user message carries the question.
    List<Message> messages = new ArrayList<>();
    messages.add(systemMessage);
    messages.add(userMessage);
    Prompt prompt = new Prompt(messages, options);
    // Stream the response and map each chunk down to its text content.
    return chatClient.prompt(prompt).stream()
            .chatResponse().toStream()
            .map(ChatResponse::getResults)
            .flatMap(List::stream)
            .map(Generation::getOutput)
            .map(AssistantMessage::getContent);
}
```
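As a hypothetical usage example (the question string is just an illustration), the returned stream can be consumed chunk by chunk as the tokens arrive:

```java
// Hypothetical caller: print each streamed chunk as soon as it arrives.
getResponse(new UserMessage("List the names of the top 5 places to visit in Canada"))
        .forEach(System.out::print);
```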
If needed, the solution can be checked in the `solution/exercise-1` folder.
- Make sure the ollama container is running
- Run the application
- In the application prompt, type this command to ask the model: `llm List the names of the top 5 places to visit in Canada`
- Check the response
- Try a new question: `llm What is the best place to visit in July?`
- What do you think about the response?
In this first exercise, we implemented a simple prompt use case and covered a few key concepts:
- LLMs handle different types of messages called "roles": System, User, and Assistant (illustrated in the sketch after this list)
- The System role sets the model's overall behavior
- The User role provides the user input
- Spring AI provides classes and APIs to interact with an LLM
- Spring AI provides options that can be changed for each query to the LLM
- An LLM does not automatically keep conversation history: conversational memory is not handled by default
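For illustration, here is a hypothetical snippet showing the three roles with the Spring AI message classes (the assistant content is invented for the example):

```java
import org.springframework.ai.chat.messages.AssistantMessage;
import org.springframework.ai.chat.messages.Message;
import org.springframework.ai.chat.messages.SystemMessage;
import org.springframework.ai.chat.messages.UserMessage;

// System role: sets the overall behavior of the model.
Message system = new SystemMessage("Answer briefly, in formal English.");
// User role: carries the user's input.
Message user = new UserMessage("List the names of the top 5 places to visit in Canada");
// Assistant role: a model answer, useful e.g. when replaying a conversation.
Message assistant = new AssistantMessage("Banff National Park, Niagara Falls, ...");
```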
Now we can move on to the next exercise, which provides some memory to the LLM.