Description
What specific problem does this solve?

Currently, the statistics show only the price for each API request. However, many models are now free or very cheap, which makes these figures less informative.
I believe it would make optimizing prompts and requests much easier if, in addition to the price, the number of tokens for each request were visible. The number of cached tokens would be especially valuable.
Optionally, it would also be interesting to see the speed of incoming and outgoing tokens, although this is somewhat less important.
Since some users may find this information excessive, I would also add an option to hide these statistics (just in case).
I could potentially try to implement this improvement myself if the maintainers agree to accept this feature.
Additional context (optional)
Below is an HTML example taken from the dev tools that shows how the view could look; it may help with implementing it.
```html
<div style="display: flex; align-items: center; gap: 10px; flex-grow: 1;">
  <div style="width: 16px; height: 16px; display: flex; align-items: center; justify-content: center;">
    <span class="codicon codicon-check" style="color: var(--vscode-charts-green); font-size: 16px; margin-bottom: -1.5px;"></span>
  </div>
  <span style="color: var(--vscode-foreground); font-weight: bold;">API Request</span>
  <vscode-badge circular="" class="circular" style="opacity: 1;">$0.0011</vscode-badge>
  <div class="flex items-center gap-1 flex-wrap">
    <span>↑ 24.8k (20k cache)</span>
    <span>↓ 177</span>
  </div>
</div>
```
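For illustration only, here is a minimal sketch of how such a row could be rendered in a React-based webview. The component, prop names, and class names are hypothetical and not taken from the existing codebase; the markup above suggests the real UI uses the vscode-badge web component for the cost, which a plain span stands in for here.

```tsx
// Hypothetical sketch: component and prop names are invented, not from the Roo Code codebase.
import React from "react"

interface ApiRequestRowProps {
	cost: number             // request cost in USD
	inputTokens: number      // tokens sent with the request
	outputTokens: number     // tokens returned by the model
	cacheReadTokens?: number // tokens read from the prompt cache, when the provider reports them
}

// 24800 -> "24.8k", 20000 -> "20k", 177 -> "177"
export const formatTokens = (n: number): string =>
	n >= 1000 ? `${parseFloat((n / 1000).toFixed(1))}k` : String(n)

export const ApiRequestRow: React.FC<ApiRequestRowProps> = ({ cost, inputTokens, outputTokens, cacheReadTokens }) => (
	<div className="flex items-center gap-2 flex-grow">
		<span className="font-bold">API Request</span>
		{/* The existing UI appears to use the vscode-badge web component here; a span stands in for it. */}
		<span className="rounded-full px-1">{`$${cost.toFixed(4)}`}</span>
		<div className="flex items-center gap-1 flex-wrap">
			<span>
				↑ {formatTokens(inputTokens)}
				{cacheReadTokens ? ` (${formatTokens(cacheReadTokens)} cache)` : ""}
			</span>
			<span>↓ {formatTokens(outputTokens)}</span>
		</div>
	</div>
)
```

Rendering the cache figure only when the provider actually reports it keeps the row compact for models without prompt caching.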
Roo Code Task Links (Optional)
No response
Request checklist
- I've searched existing Issues and Discussions for duplicates
- This describes a specific problem with clear impact and context
Interested in implementing this?
- Yes, I'd like to help implement this feature
Implementation requirements
- I understand this needs approval before implementation begins
How should this be solved? (REQUIRED if contributing, optional otherwise)
See the screenshot above.
How will we know it works? (Acceptance Criteria - REQUIRED if contributing, optional otherwise)
Given I have a task for the AI
When I run it
Then API requests occur
And in addition to the price of each request, I also see the number of input tokens (with cache hits in parentheses) and the number of output tokens, shown in k, i.e. thousands of tokens (see the test sketch after this block)
And this happens for every API request as the AI executes the task
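To make the "numbers in k" criterion checkable, a unit test along these lines could be added, assuming a Vitest-style runner and the hypothetical formatTokens helper from the sketch above (the module path is invented):

```ts
import { describe, expect, it } from "vitest"
import { formatTokens } from "./ApiRequestRow" // hypothetical module path

describe("API request token display", () => {
	it("shows thousands with a k suffix and leaves small counts untouched", () => {
		expect(formatTokens(24800)).toBe("24.8k")
		expect(formatTokens(20000)).toBe("20k")
		expect(formatTokens(177)).toBe("177")
	})
})
```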
Technical considerations (REQUIRED if contributing, optional otherwise)
Adjust the user interface to show the details that the provider supplies with every request (see the screenshot above). Some calculations might be required; a rough sketch follows below.
Adding an option to disable this output would make the task significantly more complex, so I would leave it for a later iteration (depending on user feedback?).
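As a sketch of the "some calculations" part: the per-request display values could be derived from whatever usage numbers the provider returns plus the request's wall-clock duration. The types and field names below are assumptions for illustration, since the exact shape varies per provider:

```ts
// Illustrative only: field names are assumptions; actual provider usage shapes vary.
interface ProviderUsage {
	inputTokens: number
	outputTokens: number
	cacheReadTokens?: number
}

interface RequestTokenStats {
	inputTokens: number
	outputTokens: number
	cacheReadTokens: number
	outputTokensPerSecond?: number // the optional "speed" figure from the description
}

// Derives the values the UI needs from one request's usage and its duration.
function toRequestTokenStats(usage: ProviderUsage, durationMs?: number): RequestTokenStats {
	return {
		inputTokens: usage.inputTokens,
		outputTokens: usage.outputTokens,
		cacheReadTokens: usage.cacheReadTokens ?? 0,
		outputTokensPerSecond:
			durationMs && durationMs > 0
				? Math.round((usage.outputTokens / durationMs) * 1000)
				: undefined,
	}
}

// Example: 177 output tokens over 8 s -> roughly 22 tokens/s.
// toRequestTokenStats({ inputTokens: 24800, outputTokens: 177, cacheReadTokens: 20000 }, 8000)
```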
Trade-offs and risks (REQUIRED if contributing, optional otherwise)
- Some users may find this information excessive
- It may look worse in a very narrow window