Commit 1b3f660

chore(model gallery): add opengvlab_internvl3_5-30b-a3b (#6143)
Signed-off-by: Ettore Di Giacinto <[email protected]>
1 parent 4381e89 commit 1b3f660


gallery/index.yaml

Lines changed: 44 additions & 0 deletions
@@ -1,4 +1,48 @@
 ---
+- &internvl35
+  name: "opengvlab_internvl3_5-30b-a3b"
+  url: "github:mudler/LocalAI/gallery/qwen3.yaml@master"
+  icon: https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/zJsd2hqd3EevgXo6fNgC-.png
+  urls:
+    - https://huggingface.co/OpenGVLab/InternVL3_5-30B-A3B
+    - https://huggingface.co/bartowski/OpenGVLab_InternVL3_5-30B-A3B-GGUF
+  license: apache-2.0
+  tags:
+    - multimodal
+    - gguf
+    - GPU
+    - Cpu
+    - image-to-text
+    - text-to-text
+  description: |
+    We introduce InternVL3.5, a new family of open-source multimodal models that significantly advances versatility, reasoning capability, and inference efficiency along the InternVL series. A key innovation is the Cascade Reinforcement Learning (Cascade RL) framework, which enhances reasoning through a two-stage process: offline RL for stable convergence and online RL for refined alignment. This coarse-to-fine training strategy leads to substantial improvements on downstream reasoning tasks, e.g., MMMU and MathVista. To optimize efficiency, we propose a Visual Resolution Router (ViR) that dynamically adjusts the resolution of visual tokens without compromising performance. Coupled with ViR, our Decoupled Vision-Language Deployment (DvD) strategy separates the vision encoder and language model across different GPUs, effectively balancing computational load. These contributions collectively enable InternVL3.5 to achieve up to a +16.0% gain in overall reasoning performance and a 4.05x inference speedup compared to its predecessor, i.e., InternVL3. In addition, InternVL3.5 supports novel capabilities such as GUI interaction and embodied agency. Notably, our largest model, i.e., InternVL3.5-241B-A28B, attains state-of-the-art results among open-source MLLMs across general multimodal, reasoning, text, and agentic tasks, narrowing the performance gap with leading commercial models like GPT-5. All models and code are publicly released.
+  overrides:
+    parameters:
+      model: OpenGVLab_InternVL3_5-30B-A3B-Q4_K_M.gguf
+    mmproj: mmproj-OpenGVLab_InternVL3_5-30B-A3B-f16.gguf
+  files:
+    - filename: OpenGVLab_InternVL3_5-30B-A3B-Q4_K_M.gguf
+      sha256: c352004ac811cf9aa198e11f698ebd5fd3c49b483cb31a2b081fb415dd8347c2
+      uri: huggingface://bartowski/OpenGVLab_InternVL3_5-30B-A3B-GGUF/OpenGVLab_InternVL3_5-30B-A3B-Q4_K_M.gguf
+    - filename: mmproj-OpenGVLab_InternVL3_5-30B-A3B-f16.gguf
+      sha256: fa362a7396c3dddecf6f9a714144ed86207211d6c68ef39ea0d7dfe21b969b8d
+      uri: huggingface://bartowski/OpenGVLab_InternVL3_5-30B-A3B-GGUF/mmproj-OpenGVLab_InternVL3_5-30B-A3B-f16.gguf
+- !!merge <<: *internvl35
+  name: "opengvlab_internvl3_5-30b-a3b-q8_0"
+  urls:
+    - https://huggingface.co/OpenGVLab/InternVL3_5-30B-A3B
+    - https://huggingface.co/bartowski/OpenGVLab_InternVL3_5-30B-A3B-GGUF
+  overrides:
+    parameters:
+      model: OpenGVLab_InternVL3_5-30B-A3B-Q8_0.gguf
+    mmproj: mmproj-OpenGVLab_InternVL3_5-30B-A3B-f16.gguf
+  files:
+    - filename: OpenGVLab_InternVL3_5-30B-A3B-Q8_0.gguf
+      sha256: 79ac13df1d3f784cd5702b2835ede749cdfd274f141d1e0df25581af2a2a6720
+      uri: huggingface://bartowski/OpenGVLab_InternVL3_5-30B-A3B-GGUF/OpenGVLab_InternVL3_5-30B-A3B-Q8_0.gguf
+    - filename: mmproj-OpenGVLab_InternVL3_5-30B-A3B-f16.gguf
+      sha256: fa362a7396c3dddecf6f9a714144ed86207211d6c68ef39ea0d7dfe21b969b8d
+      uri: huggingface://bartowski/OpenGVLab_InternVL3_5-30B-A3B-GGUF/mmproj-OpenGVLab_InternVL3_5-30B-A3B-f16.gguf
 - &lfm2
   url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
   name: "lfm2-vl-450m"
