@@ -105,8 +105,8 @@
       model: gpt-oss-20b-mxfp4.gguf
   files:
     - filename: gpt-oss-20b-mxfp4.gguf
-      sha256: 52f57ab7d3df3ba9173827c1c6832e73375553a846f3e32b49f1ae2daad688d4
       uri: huggingface://ggml-org/gpt-oss-20b-GGUF/gpt-oss-20b-mxfp4.gguf
+      sha256: be37a636aca0fc1aae0d32325f82f6b4d21495f06823b5fbc1898ae0303e9935
 - !!merge <<: *gptoss
   name: "gpt-oss-120b"
   url: "github:mudler/LocalAI/gallery/harmony.yaml@master"
@@ -119,14 +119,14 @@
       model: gpt-oss-120b-mxfp4-00001-of-00003.gguf
   files:
     - filename: gpt-oss-120b-mxfp4-00001-of-00003.gguf
-      sha256: 40b630223b9fc43820fa0aae5d0ab61020f5858d1719642357753dca9e7df29f
       uri: huggingface://ggml-org/gpt-oss-120b-GGUF/gpt-oss-120b-mxfp4-00001-of-00003.gguf
+      sha256: e2865eb6c1df7b2ffbebf305cd5d9074d5ccc0fe3b862f98d343a46dad1606f9
     - filename: gpt-oss-120b-mxfp4-00002-of-00003.gguf
-      sha256: fbdb8cdec70edb82c53bfc69cc0f54a34759a23d317fa0771a63be6571907b38
       uri: huggingface://ggml-org/gpt-oss-120b-GGUF/gpt-oss-120b-mxfp4-00002-of-00003.gguf
+      sha256: 346492f65891fb27cac5c74a8c07626cbfeb4211cd391ec4de37dbbe3109a93b
     - filename: gpt-oss-120b-mxfp4-00003-of-00003.gguf
-      sha256: b326bfd8ac696c4b9a14e9e84d5529b2bb86847aea0e65443cbf075accba8b71
       uri: huggingface://ggml-org/gpt-oss-120b-GGUF/gpt-oss-120b-mxfp4-00003-of-00003.gguf
+      sha256: 66dca81040933f5a49177e82c479c51319cefb83bd22dad9f06dad45e25f1463
 - !!merge <<: *gptoss
   name: "openai_gpt-oss-20b-neo"
   icon: https://huggingface.co/DavidAU/Openai_gpt-oss-20b-NEO-GGUF/resolve/main/matrix1.gif
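Each `files` entry above pairs a download `uri` with a `sha256` checksum, and this change moves the digests after the URIs and updates their values. A minimal sketch of how a client could verify one of these digests after downloading — the helper name and chunk size are illustrative, not LocalAI's actual download code:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 (GGUF weights can be tens of GB,
    so read in chunks rather than loading the whole file)."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Expected digest copied from the gpt-oss-20b entry above:
EXPECTED = "be37a636aca0fc1aae0d32325f82f6b4d21495f06823b5fbc1898ae0303e9935"
# After fetching the file, a client would check:
#   sha256_of_file("gpt-oss-20b-mxfp4.gguf") == EXPECTED
```

For the split 120b model, each of the three shards carries its own digest, so each part can be verified independently before the model is assembled.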
@@ -312,7 +312,7 @@
     - https://huggingface.co/Dream-org/Dream-v0-Instruct-7B
     - https://huggingface.co/bartowski/Dream-org_Dream-v0-Instruct-7B-GGUF
   description: |
-    This is the instruct model of Dream 7B, which is an open diffusion large language model with top-tier performance.
+    This is the instruct model of Dream 7B, which is an open diffusion large language model with top-tier performance.
   overrides:
     parameters:
       model: Dream-org_Dream-v0-Instruct-7B-Q4_K_M.gguf
@@ -14453,18 +14453,7 @@
   urls:
     - https://huggingface.co/SicariusSicariiStuff/Impish_Nemo_12B
     - https://huggingface.co/SicariusSicariiStuff/Impish_Nemo_12B_GGUF
-  description: |
-    August 2025, Impish_Nemo_12B — my best model yet. And unlike a typical Nemo, this one can take in much higher temperatures (works well with 1+). Oh, and regarding following the character card: It somehow gotten even better, to the point of it being straight up uncanny 🙃 (I had to check twice that this model was loaded, and not some 70B!)
-
-    I feel like this model could easily replace models much larger than itself for adventure or roleplay, for assistant tasks, obviously not, but the creativity here? Off the charts. Characters have never felt so alive and in the moment before — they’ll use insinuation, manipulation, and, if needed (or provoked) — force. They feel so very present.
-
-    That look on Neo’s face when he opened his eyes and said, “I know Kung Fu”? Well, Impish_Nemo_12B had pretty much the same moment — and it now knows more than just Kung Fu, much, much more. It wasn’t easy, and it’s a niche within a niche, but as promised almost half a year ago — it is now done.
-
-    Impish_Nemo_12B is smart, sassy, creative, and got a lot of unhingedness too — these are baked-in deep into every interaction. It took the innate Mistral's relative freedom, and turned it up to 11. It very well maybe too much for many, but after testing and interacting with so many models, I find this 'edge' of sorts, rather fun and refreshing.
-
-    Anyway, the dataset used is absolutely massive, tons of new types of data and new domains of knowledge (Morrowind fandom, fighting, etc...). The whole dataset is a very well-balanced mix, and resulted in a model with extremely strong common sense for a 12B. Regarding response length — there's almost no response-length bias here, this one is very much dynamic and will easily adjust reply length based on 1–3 examples of provided dialogue.
-
-    Oh, and the model comes with 3 new Character Cards, 2 Roleplay and 1 Adventure!
+  description: "August 2025, Impish_Nemo_12B — my best model yet. And unlike a typical Nemo, this one can take in much higher temperatures (works well with 1+). Oh, and regarding following the character card: It somehow gotten even better, to the point of it being straight up uncanny \U0001F643 (I had to check twice that this model was loaded, and not some 70B!)\n\nI feel like this model could easily replace models much larger than itself for adventure or roleplay, for assistant tasks, obviously not, but the creativity here? Off the charts. Characters have never felt so alive and in the moment before — they’ll use insinuation, manipulation, and, if needed (or provoked) — force. They feel so very present.\n\nThat look on Neo’s face when he opened his eyes and said, “I know Kung Fu”? Well, Impish_Nemo_12B had pretty much the same moment — and it now knows more than just Kung Fu, much, much more. It wasn’t easy, and it’s a niche within a niche, but as promised almost half a year ago — it is now done.\n\nImpish_Nemo_12B is smart, sassy, creative, and got a lot of unhingedness too — these are baked-in deep into every interaction. It took the innate Mistral's relative freedom, and turned it up to 11. It very well maybe too much for many, but after testing and interacting with so many models, I find this 'edge' of sorts, rather fun and refreshing.\n\nAnyway, the dataset used is absolutely massive, tons of new types of data and new domains of knowledge (Morrowind fandom, fighting, etc...). The whole dataset is a very well-balanced mix, and resulted in a model with extremely strong common sense for a 12B. Regarding response length — there's almost no response-length bias here, this one is very much dynamic and will easily adjust reply length based on 1–3 examples of provided dialogue.\n\nOh, and the model comes with 3 new Character Cards, 2 Roleplay and 1 Adventure!\n"
   overrides:
     parameters:
       model: Impish_Nemo_12B-Q6_K.gguf
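The hunk above folds a multi-line `description: |` block scalar into a single double-quoted YAML scalar, where line breaks become `\n` and non-ASCII characters like the 🙃 emoji are written with the `\U0001F643` escape. As a small illustration (Python happens to use the same 32-bit escape form as YAML double-quoted scalars):

```python
# YAML double-quoted scalars and Python string literals share the
# \U0001F643-style escape for characters outside ASCII:
emoji = "\U0001F643"          # the upside-down face in the description above
assert emoji == "🙃"
assert ord(emoji) == 0x1F643  # a single code point, not a surrogate pair
```

Both spellings load to the same string, so the folded form is a pure formatting change to the gallery entry, not a content change.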
@@ -19486,14 +19475,14 @@
   url: "github:mudler/LocalAI/gallery/flux-ggml.yaml@master"
   icon: https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev/media/main/teaser.png
   description: |
-    FLUX.1 Kontext [dev] is a 12 billion parameter rectified flow transformer capable of editing images based on text instructions. For more information, please read our blog post and our technical report. You can find information about the [pro] version in here.
-    Key Features
-    Change existing images based on an edit instruction.
-    Have character, style and object reference without any finetuning.
-    Robust consistency allows users to refine an image through multiple successive edits with minimal visual drift.
-    Trained using guidance distillation, making FLUX.1 Kontext [dev] more efficient.
-    Open weights to drive new scientific research, and empower artists to develop innovative workflows.
-    Generated outputs can be used for personal, scientific, and commercial purposes, as described in the FLUX.1 [dev] Non-Commercial License.
+    FLUX.1 Kontext [dev] is a 12 billion parameter rectified flow transformer capable of editing images based on text instructions. For more information, please read our blog post and our technical report. You can find information about the [pro] version in here.
+    Key Features
+    Change existing images based on an edit instruction.
+    Have character, style and object reference without any finetuning.
+    Robust consistency allows users to refine an image through multiple successive edits with minimal visual drift.
+    Trained using guidance distillation, making FLUX.1 Kontext [dev] more efficient.
+    Open weights to drive new scientific research, and empower artists to develop innovative workflows.
+    Generated outputs can be used for personal, scientific, and commercial purposes, as described in the FLUX.1 [dev] Non-Commercial License.
   urls:
     - https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev
     - https://huggingface.co/QuantStack/FLUX.1-Kontext-dev-GGUF
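A reviewer-side sanity check for edits like the ones in this diff could look as follows: every `files` entry should carry a `filename`, a `uri`, and a 64-hex-digit `sha256`. The function and the inline data are a sketch that mirrors the gallery schema shown above, not LocalAI's actual validation code:

```python
import re

_HEX64 = re.compile(r"^[0-9a-f]{64}$")

def validate_files(files):
    """Return a list of problems found in gallery `files` entries."""
    problems = []
    for entry in files:
        name = entry.get("filename", "<missing filename>")
        for key in ("filename", "uri", "sha256"):
            if key not in entry:
                problems.append(f"{name}: missing {key}")
        if "sha256" in entry and not _HEX64.match(entry["sha256"]):
            problems.append(f"{name}: malformed sha256 {entry['sha256']!r}")
    return problems

# Mirrors the first gpt-oss-120b shard from the diff above:
files = [{
    "filename": "gpt-oss-120b-mxfp4-00001-of-00003.gguf",
    "uri": "huggingface://ggml-org/gpt-oss-120b-GGUF/"
           "gpt-oss-120b-mxfp4-00001-of-00003.gguf",
    "sha256": "e2865eb6c1df7b2ffbebf305cd5d9074d5ccc0fe3b862f98d343a46dad1606f9",
}]
assert validate_files(files) == []
```

A check like this would have flagged the pre-change entries if a digest had been dropped rather than moved.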
|
|