MIA-Bench: Towards Better Instruction Following Evaluation of Multimodal LLMs
We introduce MIA-Bench, a new benchmark designed to evaluate multimodal large language models (MLLMs) on their ability to strictly adhere ...