
gradio chat exception #103

Open · faev999 opened this issue Oct 23, 2024 · 3 comments

Comments

@faev999 commented Oct 23, 2024

Hi all,
I hit the following exception when trying to run the Gradio example with: python -m mlx_vlm.chat_ui --model mlx-community/Qwen2-VL-72B-Instruct-4bit

.../mlx_vlm/chat_ui.py", line 32, in chat
    if len(message.files) >= 1:
    
AttributeError: 'dict' object has no attribute 'files'
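
For context, newer Gradio multimodal chat interfaces appear to pass the message into the callback as a plain dict with "text" and "files" keys rather than an object with attributes, which would explain this error. A minimal sketch of the mismatch (values are illustrative, not taken from mlx_vlm):

# Rough shape of what Gradio seems to hand to chat() (illustrative values)
message = {"text": "What is in this image?", "files": ["/tmp/example.jpg"]}

# message.files               # AttributeError: 'dict' object has no attribute 'files'
print(len(message["files"]))  # dict-style access works: prints 1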

When using the CLI example with: python -m mlx_vlm.generate --model mlx-community/Qwen2-VL-72B-Instruct-4bit --max-tokens 100 --temp 0.0 --image http://images.cocodataset.org/val2017/000000039769.jpg there is no exception.

I installed the package with pip install mlx-vlm and tried both Python 3.12 and Python 3.10, with the same result.

@kaimerklein commented Oct 25, 2024

I modified the chat function in chat_ui.py like this:

def chat(message, history, temperature, max_tokens):
    chat = []
    if len(message["files"]) >= 1:
        chat.append(message["text"])
    else:
        raise gr.Error("Please upload an image. Text only chat is not supported.")

    files = message["files"][-1]
    if model.config.model_type != "paligemma":
        messages = apply_chat_template(processor, config, message["text"], num_images=1)
    else:
        messages = message["text"]

    response = ""
    for chunk in stream_generate(
        model, processor, files, messages, image_processor, max_tokens, temp=temperature
    ):
        response += chunk
        yield response

Seems to work.
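
For anyone trying this patch, a rough smoke test outside the UI, assuming chat_ui.py has already loaded its module-level model, processor, config, and image_processor, would be to feed the function a dict shaped like the Gradio message (values here are made up):

# Hypothetical smoke test for the patched chat(); requires the module-level
# objects from chat_ui.py (model, processor, ...) to be loaded already.
fake_message = {"text": "Describe the image.", "files": ["/tmp/example.jpg"]}
for partial in chat(fake_message, history=[], temperature=0.0, max_tokens=64):
    print(partial)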

@faev999 (Author) commented Oct 28, 2024

I also modified that function like that, but then I got a different kind of error:

...prompt_utils.py", line 138, in apply_chat_template
if p.get("role") in ["system", "assistant"]:
AttributeError: 'str' object has no attribute 'get'

Did you also modify that file?
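
The traceback suggests the loop in apply_chat_template sometimes sees plain strings instead of {"role": ..., "content": ...} dicts, so .get() fails. A tiny illustration of the mismatch (shapes assumed from the error message, not copied from prompt_utils.py):

# .get() only exists on the dict form of a prompt element
p_dict = {"role": "user", "content": "Describe the image."}
p_str = "Describe the image."

print(p_dict.get("role"))      # 'user' -> the shape the loop expects
print(isinstance(p_str, str))  # True   -> needs a separate string branch
# p_str.get("role")            # AttributeError: 'str' object has no attribute 'get'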


@faev999 (Author) commented Oct 28, 2024

I added a string type check for p like this and now it works:

else:
    for prompts in prompt:
        for i, p in enumerate(prompts):
            if not isinstance(p, str):
                messages.append(process_single_prompt(p, i == 0))
                if p.get("role") in ["system", "assistant"]:
                    messages.append(process_single_prompt(p, False))
            else:
                is_first = i == 0 or i == 1
                messages.append(process_single_prompt(p, is_first))
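
For anyone wanting to check the guard in isolation, here is a self-contained, stubbed version of that loop (process_single_prompt is faked, and the input shapes are assumptions, not what mlx_vlm actually passes):

def process_single_prompt(p, is_first):
    # Stub: the real helper lives in prompt_utils.py.
    return {"prompt": p, "is_first": is_first}

def build_messages(prompt):
    # Same branching as the patch above.
    messages = []
    for prompts in prompt:
        for i, p in enumerate(prompts):
            if not isinstance(p, str):
                messages.append(process_single_prompt(p, i == 0))
                if p.get("role") in ["system", "assistant"]:
                    messages.append(process_single_prompt(p, False))
            else:
                is_first = i == 0 or i == 1
                messages.append(process_single_prompt(p, is_first))
    return messages

print(build_messages([["Describe the image."]]))              # string prompt: no crash
print(build_messages([[{"role": "user", "content": "Hi"}]]))  # role dict: original path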
