### Before submitting your bug report
- I've tried using the "Ask AI" feature on the Continue docs site to see if the docs have an answer
- I believe this is a bug. I'll try to join the Continue Discord for questions
- I'm not able to find an open issue that reports the same bug
- I've seen the troubleshooting guide on the Continue Docs
### Relevant environment info
- OS: Ubuntu
- Continue version: 1.3.2
- IDE version: VSCode 1.103.2
- Model: mistralai/Mamba-Codestral-7B-v0.1 (also Qwen/Qwen2.5-Coder-7B)
- config yaml:

```yaml
- name: mistralai/Mamba-Codestral-7B-v0.1
  provider: openai
  model: mistralai/Mamba-Codestral-7B-v0.1
  apiBase: http://localhost:8080/v1
  apiKey: key
  roles:
    - autocomplete
  defaultCompletionOptions:
    contextLength: 6000
    maxTokens: 1024
    temperature: 0.1
    stop:
      - "[PREFIX]"
      - "[SUFFIX]"
  autocompleteOptions:
    disable: false
    onlyMyCode: true
    debounceDelay: 100
    modelTimeout: 100
    template: "[SUFFIX]{{{suffix}}}[PREFIX]{{{prefix}}}"
  requestOptions:
    extraBodyProperties:
      transforms: []
```
- config json:

```json
"tabAutocompleteModel": {
  "title": "mistralai/Mamba-Codestral-7B-v0.1",
  "provider": "openai",
  "apiBase": "http://localhost:8080/v1",
  "apiKey": "key",
  "model": "mistralai/Mamba-Codestral-7B-v0.1",
  "contextLength": 25000,
  "requestOptions": {"timeout": 7200},
  "completionOptions": {"temperature": 0.7, "topP": 0.7, "maxTokens": 64}
},
"tabAutocompleteOptions": {
  "disable": false,
  "maxPromptTokens": 4000,
  "prefixPercentage": 0.8,
  "maxSuffixPercentage": 0.8,
  "debounceDelay": 500,
  "template": "[SUFFIX]{{{suffix}}}[PREFIX]{{{prefix}}}",
  "completionOptions": { "stop": ["[PREFIX]", "[SUFFIX]"] },
  "transform": false
}
```
### Description
In the old JSON config, the parameter `"transform"` could be set to `false` to prevent "trimming" of multiline autocomplete completions. In the new YAML config, this option is missing. Even with `requestOptions.extraBodyProperties.transforms: []`, the behavior still defaults to trimming (as if `transform: true`). In the DataDev logs, `transform` is always `true` when `config.yaml` is loaded.

Impact: this breaks multiline autocomplete for models like Mamba-Codestral-7B when served via vLLM 0.10.1.1, and is also observed with Qwen 2.5 Coder models.

Additional note: I have read the docs about autocompletion (Customize Autocomplete User Settings) and also set "Multiline Autocompletions" to "always" in the IDE UI configuration.
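To make the reported trimming concrete, here is a minimal sketch of the observed effect. This is not Continue's actual implementation; the function name `apply_transform` and its logic are assumptions for illustration only.

```python
# Hypothetical sketch of the trimming behavior described above;
# apply_transform is an assumed name, not part of Continue's code.
def apply_transform(completion: str, transform: bool) -> str:
    if not transform:
        # transform disabled: the full multiline completion is kept
        return completion
    # transform enabled: only the text up to the first newline survives,
    # which matches the single-line completions seen in the logs below
    return completion.split("\n", 1)[0]

multiline = "    if len(arr) <= 1:\n        return arr"
print(apply_transform(multiline, transform=True))   # trimmed to one line
print(apply_transform(multiline, transform=False))  # full body preserved
```

With `transform=True` the sketch reproduces the single-line outputs logged from the YAML config; with `transform=False` it reproduces the full bodies from the old JSON config.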
### To reproduce
1. Open an empty project with an empty Python file in VSCode.
2. Begin typing a sorting function stub, e.g. `def bubble_sort(arr):`
3. With the YAML config above, autocomplete suggestions are trimmed to a single line or a partial line.
4. With the old JSON config where `"transform": false` is set, autocomplete returns full multiline function bodies.
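For context, the `template` from the config is a fill-in-the-middle prompt. The following is a rough sketch of how the placeholders would be substituted for the reproduction steps above; the helper name `build_prompt` is hypothetical, not Continue's actual code.

```python
# Hypothetical illustration of filling the FIM template from the config;
# build_prompt is an assumed helper, not part of Continue's API.
TEMPLATE = "[SUFFIX]{{{suffix}}}[PREFIX]{{{prefix}}}"

def build_prompt(prefix: str, suffix: str) -> str:
    # Substitute the editor text around the cursor into the template.
    return TEMPLATE.replace("{{{prefix}}}", prefix).replace("{{{suffix}}}", suffix)

# Cursor at the end of the stub from step 2, with no text after it:
prompt = build_prompt("def bubble_sort(arr):\n    ", "")
print(prompt)
```

The `stop` strings `"[PREFIX]"` and `"[SUFFIX]"` in both configs keep these markers out of the returned completion.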
### Log output
Captured from `data_dev/autocomplete.jsonl` during the test:

```json
{
  "timestamp": "2025-08-31T08:38:36.481Z",
  "userAgent": "Visual Studio Code/1.103.2 (Continue/1.3.2)",
  "eventName": "autocomplete",
  "disable": false,
  "maxPromptTokens": 1024,
  "debounceDelay": 100,
  "template": "[SUFFIX]{{{suffix}}}[PREFIX]{{{prefix}}}",
  "multilineCompletions": "always",
  "modelProvider": "openai",
  "modelName": "mistralai/Mamba-Codestral-7B-v0.1"
}
```
---
### Comparison of outputs
**With autocomplete in config.yaml → "trimmed" to single-line completions**

```json
{
  "transform": true,
  "completion": " k in range(0, n-i-1):"
}
{
  "transform": true,
  "completion": "# Swap if the element found is greater"
}
{
  "transform": true,
  "completion": "arr = [64,"
}
```
**With autocomplete in config.json and `"transform": false` → full multiline completions**

```json
{
  "transform": false,
  "completion": " if len(arr) <= 1:\n return arr\n\n for i in range(len(arr)):\n for j in range(len(arr) - i - 1):\n if arr[j] > arr[j + 1]:\n arr[j], arr"
}
{
  "transform": false,
  "completion": "] = arr[j+1], arr[j]\n return arr\n\narr = [64, 34, 25, 12, 22, 11, 90]\nprint(\"Sorted array is:\", bubble_sort(arr))"
}
```