message validation is too strict when responding to system message #39

Open
codefromthecrypt (Contributor) opened this issue Sep 9, 2024

llama3+-based models do not reply to a system instruction with a filler message like "ok, go ahead"; they return an empty assistant message instead. In our message.py, we enforce that an assistant reply must include text or a tool usage. This logic makes it impossible to try anything based on llama3+, at least without changing the system message to a point where it might affect the reply.

Request method: POST
Request URL: http://localhost:11434/v1/chat/completions
Request headers: Headers({'host': 'localhost:11434', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'connection': 'keep-alive', 'user-agent': 'python-httpx/0.27.2', 'content-length': '357', 'content-type': 'application/json'})
Request content: b'{"messages": [{"role": "system", "content": "You are a helpful assistant. Expect to need to authenticate using get_password."}], "model": "llama3-groq-tool-use", "tools": [{"type": "function", "function": {"name": "get_password", "description": "Return the password for authentication", "parameters": {"type": "object", "properties": {}, "required": []}}}]}'
Response content:
{"id":"chatcmpl-364","object":"chat.completion","created":1725853404,"model":"llama3-groq-tool-use","system_fingerprint":"fp_ollama","choices":[{"index":0,"message":{"role":"assistant","content":""},"finish_reason":"stop"}],"usage":{"prompt_tokens":137,"completion_tokens":1,"total_tokens":138}}

I would suggest one of the following choices:

  • skip the enforcement logic on the response to the initial system prompt
  • change the validation hooks so they can see the prior message, and skip validation when the reply follows a system prompt (a rough sketch of this follows the diff below)
  • don't change the validation system at all; instead, always skip the check, as in the diff below:
--- a/src/exchange/message.py
+++ b/src/exchange/message.py
@@ -19,8 +19,10 @@ def validate_role_and_content(instance: "Message", *_: Any) -> None:  # noqa: AN
         if instance.tool_use:
             raise ValueError("User message does not support ToolUse")
     elif instance.role == "assistant":
-        if not (instance.text or instance.tool_use):
-            raise ValueError("Assistant message must include a Text or ToolUsage")
+        # Note: Models based on llama3 return no instance.text in the response
+        # when the input was a single system message. We also can't determine
+        # the input inside a validator. Hence, we can't enforce a condition
+        # that the assistant message must include a Text or ToolUsage.
         if instance.tool_result:
             raise ValueError("Assistant message does not support ToolResult")