Conversation

@lukaszkorecki commented Aug 3, 2025

Picks up from https://github.com/editor-code-assistant/eca/pull/17/files

Warning

Early days: this PR aims to provide basic functionality. Other features, like web search and reasoning support, will be implemented as follow-up PRs.

Authentication

  • support for the Gemini Developer API
  • support for Gemini + Vertex AI (ADC & auth token)
  • tests

TBD

  • chat
  • built-in tool calling

@@ -34,6 +35,7 @@
(string/join "\n" (subvec lines start end)))
content)))

;; NOTE: this is not going to scale ;-)
Member:

yeah, I think we can easily create a util that receives the provider and does this check automagically; I was waiting for more cases to understand usage

Author:

Yeah, I figured that until this point there was no need for more provider abstractions, but I had to leave that comment after adding all the possible configs for Gemini. Perhaps a provider protocol would make sense? It would also simplify the dispatch code: half of the args are needed only once (auth, URLs) and could be pre-computed on a record instance just once.
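To make the idea concrete, here's a minimal sketch of what such a protocol could look like. All names here (Provider, GeminiProvider, make-gemini-provider, the config keys) are illustrative assumptions, not ECA's actual API; the point is that auth and the endpoint are resolved once at construction time, so the per-request call only carries request data:

```clojure
;; Hypothetical sketch (not the actual ECA API): a Provider protocol where
;; auth and the URL are pre-computed on the record, so completion! only
;; receives per-request args.
(defprotocol Provider
  (completion! [this request]
    "Runs a completion using the pre-computed auth/URL on the record."))

(defrecord GeminiProvider [api-url auth-token]
  Provider
  (completion! [_ request]
    ;; auth + endpoint were computed once in the constructor below
    {:url api-url :auth auth-token :body request}))

(defn make-gemini-provider
  "Resolves auth and the endpoint once, returning a reusable record."
  [config]
  (->GeminiProvider (or (:api-url config)
                        "https://generativelanguage.googleapis.com")
                    (:auth-token config)))
```

Dispatch then becomes a lookup that returns a record instead of a case over function vars, and the auth/URL arguments disappear from every call site.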

@@ -168,6 +210,7 @@
provider-fn (case (:api provider-config)
"openai" llm-providers.openai/completion!
"anthropic" llm-providers.anthropic/completion!
"gemini" llm-providers.google/completion!
Member:

👍🏻

Author:

Actually, I'll change that to Google - it will follow the established pattern of using company names rather than products (OpenAI vs ChatGPT, Anthropic vs Claude).
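A sketch of what the renamed dispatch could look like, assuming the rename lands; the symbols are quoted here only so the sketch runs without the real namespaces loaded:

```clojure
;; Hypothetical sketch: dispatch keys use company names (openai, anthropic,
;; google), matching the provider namespace names.
(def provider-dispatch
  {"openai"    'llm-providers.openai/completion!
   "anthropic" 'llm-providers.anthropic/completion!
   "google"    'llm-providers.google/completion!})

(defn provider-fn [provider-config]
  ;; returns nil for unknown keys, e.g. the old "gemini" key
  (get provider-dispatch (:api provider-config)))
```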

Comment on lines 186 to 187
;; NOTE: no :api-url here - the Google provider figures it out
;; based on how we're authenticated
Member:

that's new to me, interesting. I wonder if there is any case where a user wants to customize the URL, but it's not a problem right now

Author:

I did some research (well... Perplexity did most of it ;-)), and outside of running the code within GCP there's no real support for providing custom API URLs, apart from custom telemetry endpoints. I'd say we add a TODO and wait for someone to request it.

@ericdallo (Member) left a comment:

So far it's looking good. LMK if you wanna discuss any TODO. BTW, we can start supporting basic chat features and leave MCP, reasoning, and other features to the next PRs - what do you think?

@lukaszkorecki (Author):

> So far looking good, LMK if you wanna discuss any TODO, BTW we can start supporting basic chat features and leave MCP/reason and other features to next PRs what you think?

I think that makes sense - with all of the plumbing out of the way, chat was the next thing I was going to tackle.

[:api-key-found "o4-mini"])
(when-let [ollama-model (first (filter #(string/starts-with? % config/ollama-model-prefix) (keys (:models db))))]
[:ollama-running ollama-model])
(when (google-any-auth? config)
Member:

maybe this check should come before Ollama?

Author:

Yeah, good call - I just updated the order: external APIs first, then Ollama, then the default.
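The reordering can be sketched as an `or` chain that returns the first non-nil branch. The predicate names, config keys, and model names below are illustrative assumptions, not ECA's actual ones:

```clojure
;; Hypothetical sketch of the check order: external API providers first,
;; then a locally running Ollama model, then a fallback default.
(defn choose-default-model [config running-ollama-models]
  (or (when (:openai-api-key config) "o4-mini")
      (when (:google-auth config) "gemini-pro")
      (first running-ollama-models)
      "default-model"))
```

Because `or` short-circuits, a configured external API wins even when a local Ollama model is running, which matches the updated order.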
