bob_llm.llm_node
Classes
- LLMNode: A ROS 2 node that interfaces with an OpenAI-compatible LLM.
Functions
- main(args=None)
Module Contents
- class bob_llm.llm_node.LLMNode
Bases: rclpy.node.Node
A ROS 2 node that interfaces with an OpenAI-compatible LLM.
This node handles chat history, tool execution, and communication with an LLM backend, configured entirely through ROS parameters.
- chat_history = []
- _prefix_history_len
- _is_generating = False
- _cancel_requested = False
- sub
- pub_response
- pub_stream
- pub_latest_turn
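Taken together, these attributes imply the node's wiring: one subscription for incoming prompts and three publishers for the full response, the token stream, and the latest turn. A minimal sketch of what that constructor might look like; only the 'llm_prompt' topic name is documented (under prompt_callback below), so the publisher topic names and queue depths here are assumptions:

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class LLMNode(Node):
    def __init__(self):
        super().__init__('llm_node')
        self.chat_history = []
        self._is_generating = False
        self._cancel_requested = False
        # Incoming user prompts (topic name documented in prompt_callback).
        self.sub = self.create_subscription(
            String, 'llm_prompt', self.prompt_callback, 10)
        # Hypothetical topic names for the three documented publishers.
        self.pub_response = self.create_publisher(String, 'llm_response', 10)
        self.pub_stream = self.create_publisher(String, 'llm_stream', 10)
        self.pub_latest_turn = self.create_publisher(String, 'llm_latest_turn', 10)

    def prompt_callback(self, msg: String) -> None:
        # Placeholder; the real flow is sketched under prompt_callback below.
        self.get_logger().info(f'prompt: {msg.data}')
```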
- _initialize_chat_history()
Populates the initial chat history from ROS parameters.
This method adds the system prompt and any few-shot examples provided in the ‘system_prompt’ and ‘initial_messages_json’ parameters, respectively, to guide the LLM’s behavior.
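A minimal sketch of that initialization, assuming the two parameters named above have already been declared elsewhere in the node, and that 'initial_messages_json' holds a JSON list of chat messages (inferred from the name):

```python
import json


# Method of LLMNode (sketch).
def _initialize_chat_history(self) -> None:
    system_prompt = self.get_parameter('system_prompt').value
    if system_prompt:
        self.chat_history.append({'role': 'system', 'content': system_prompt})
    # Optional few-shot examples, e.g. alternating user/assistant messages.
    initial_json = self.get_parameter('initial_messages_json').value
    if initial_json:
        self.chat_history.extend(json.loads(initial_json))
    # Record the fixed prefix so later trimming can preserve it.
    self._prefix_history_len = len(self.chat_history)
```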
- load_llm_client()
Loads and configures the LLM client based on ROS parameters.
This method reads the ‘api_*’ and generation parameters (e.g., temperature, top_p) to instantiate and configure the appropriate backend client for communicating with the Large Language Model.
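A hedged sketch of that configuration, assuming the backend is reached through the official openai Python client; the concrete parameter names beyond the documented 'api_*' prefix (api_base_url, api_key, api_model) are assumptions:

```python
from openai import OpenAI


# Method of LLMNode (sketch); parameter names are assumptions.
def load_llm_client(self) -> None:
    self.client = OpenAI(
        base_url=self.get_parameter('api_base_url').value,
        api_key=self.get_parameter('api_key').value,
    )
    # Generation settings forwarded with every chat-completion request.
    self.gen_kwargs = {
        'model': self.get_parameter('api_model').value,
        'temperature': self.get_parameter('temperature').value,
        'top_p': self.get_parameter('top_p').value,
    }
```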
- _load_tools()
Dynamically loads tool modules specified in ‘tool_interfaces’.
Supports loading from both Python module names (e.g., ‘my_package.tools’) and absolute file paths. It generates an OpenAI-compatible schema for each function and maps the function name to its callable object (see the sketch after the return description).
- Returns:
A tuple containing a list of tool schemas for the LLM and a dictionary mapping function names to their callable objects.
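A sketch of the loading and schema generation described above. The real implementation presumably maps Python type hints to JSON-schema parameter types; the empty 'parameters' object here is a simplification:

```python
import importlib
import importlib.util
import inspect
from pathlib import Path


# Method of LLMNode (sketch).
def _load_tools(self):
    schemas, callables = [], {}
    for spec_str in self.get_parameter('tool_interfaces').value or []:
        if spec_str.endswith('.py'):
            # Absolute file path: load it as an ad-hoc module.
            spec = importlib.util.spec_from_file_location(
                Path(spec_str).stem, spec_str)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
        else:
            # Dotted module name, e.g. 'my_package.tools'.
            module = importlib.import_module(spec_str)
        for name, func in inspect.getmembers(module, inspect.isfunction):
            if name.startswith('_'):
                continue
            # Simplified OpenAI-style tool schema; a full version would
            # derive parameter types from the function signature.
            schemas.append({
                'type': 'function',
                'function': {
                    'name': name,
                    'description': inspect.getdoc(func) or '',
                    'parameters': {'type': 'object', 'properties': {}},
                },
            })
            callables[name] = func
    return schemas, callables
```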
- _publish_latest_turn(user_prompt: str, assistant_message: dict)
Processes the latest conversational turn for publishing and logging.
- Args:
user_prompt: The string content of the user’s latest prompt.
assistant_message: The final message dictionary from the assistant, e.g., {‘role’: ‘assistant’, ‘content’: ‘…’}.
- _get_truncated_history()
Returns a copy of chat history with long strings truncated for logging.
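For example, a helper along these lines (the truncation threshold is an assumption):

```python
# Method of LLMNode (sketch); max_len is an assumed default.
def _get_truncated_history(self, max_len: int = 200):
    truncated = []
    for msg in self.chat_history:
        msg = dict(msg)  # shallow copy so the live history is untouched
        content = msg.get('content')
        if isinstance(content, str) and len(content) > max_len:
            msg['content'] = content[:max_len] + '…'
        truncated.append(msg)
    return truncated
```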
- _trim_chat_history()
Prevents the chat history from growing indefinitely.
It trims the oldest conversational turns to stay within the ‘max_history_length’ limit, preserving the system prompt and any initial few-shot examples. A turn is defined as a user message and all subsequent assistant/tool messages that follow it.
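A sketch of that policy, assuming 'max_history_length' counts conversational turns and that the prefix length was recorded during initialization:

```python
# Method of LLMNode (sketch).
def _trim_chat_history(self) -> None:
    max_turns = self.get_parameter('max_history_length').value
    prefix = self.chat_history[:self._prefix_history_len]
    body = self.chat_history[self._prefix_history_len:]
    # A turn starts at each user message and includes the
    # assistant/tool messages that follow it.
    turn_starts = [i for i, m in enumerate(body) if m['role'] == 'user']
    if len(turn_starts) > max_turns:
        # Drop everything before the first of the last max_turns turns.
        body = body[turn_starts[-max_turns]:]
    self.chat_history = prefix + body
```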
- prompt_callback(msg)
Processes an incoming prompt from the ‘llm_prompt’ topic.
This is the core callback for the node. It manages the conversation flow by first entering a loop to handle potential tool calls from the LLM. Once the LLM decides to respond with text, it exits the loop and generates the final response, either by streaming it token-by-token or as a single message, based on the ‘stream’ parameter (see the sketch after the argument list).
- Args:
msg: The std_msgs/String message containing the user’s prompt.
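A condensed sketch of that flow against the OpenAI chat-completions API, using the names introduced in the sketches above (tool_schemas, tool_callables, gen_kwargs); the streaming branch, cancellation flag, and error handling are omitted:

```python
import json

from std_msgs.msg import String


# Method of LLMNode (sketch); streaming variant omitted.
def prompt_callback(self, msg: String) -> None:
    self._is_generating = True
    self.chat_history.append({'role': 'user', 'content': msg.data})
    while True:
        response = self.client.chat.completions.create(
            messages=self.chat_history,
            tools=self.tool_schemas,
            **self.gen_kwargs,
        )
        message = response.choices[0].message
        if not message.tool_calls:
            break  # the LLM answered with text; leave the tool loop
        # Record the assistant's tool request, then run each tool.
        self.chat_history.append(message.model_dump(exclude_none=True))
        for call in message.tool_calls:
            result = self.tool_callables[call.function.name](
                **json.loads(call.function.arguments))
            self.chat_history.append({
                'role': 'tool',
                'tool_call_id': call.id,
                'content': str(result),
            })
    # Final text response; the 'stream' parameter would select a
    # token-by-token variant publishing on pub_stream instead.
    self.chat_history.append({'role': 'assistant', 'content': message.content})
    self.pub_response.publish(String(data=message.content))
    self._publish_latest_turn(msg.data, self.chat_history[-1])
    self._trim_chat_history()
    self._is_generating = False
```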
- bob_llm.llm_node.main(args=None)
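The entry point carries no docstring here; a conventional rclpy main for this node would look roughly like the following (the exact shutdown handling is an assumption):

```python
import rclpy


def main(args=None):
    rclpy.init(args=args)
    node = LLMNode()
    try:
        rclpy.spin(node)
    except KeyboardInterrupt:
        pass
    finally:
        node.destroy_node()
        rclpy.shutdown()
```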