[Bug]: get_model_info return structure is not consistent. #4711

@SmartManoj

Description

What happened?

import litellm
from pprint import pprint

model_info = litellm.get_model_info("gpt-4")  # returns a dict of model metadata, not a single token count
pprint(model_info)

get_model_info now returns max_input_tokens and max_output_tokens, but the docstring does not mention these keys.
Please return the same keys for the huggingface custom_llm_provider and update the docstring.
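
For comparison, a minimal repro sketch for the huggingface case (the model id below is only an illustrative assumption, not from the original report); on affected versions the dict returned for a huggingface model lacks these keys:

import litellm
from pprint import pprint

# Illustrative huggingface model id (an assumption; any mapped huggingface/* id shows the issue)
hf_info = litellm.get_model_info("huggingface/WizardLM/WizardCoder-Python-34B-V1.0")
pprint(hf_info)  # compare the keys with the gpt-4 output below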

Relevant log output

{'input_cost_per_token': 3e-05,
 'litellm_provider': 'openai',
 'max_input_tokens': 8192,
 'max_output_tokens': 4096,
 'max_tokens': 4096,
 'mode': 'chat',
 'output_cost_per_token': 6e-05,
 'supported_openai_params': ['frequency_penalty',
                             'logit_bias',
                             'logprobs',
                             'top_logprobs',
                             'max_tokens',
                             'n',
                             'presence_penalty',
                             'seed',
                             'stop',
                             'stream',
                             'stream_options',
                             'temperature',
                             'top_p',
                             'tools',
                             'tool_choice',
                             'function_call',
                             'functions',
                             'max_retries',
                             'extra_headers',
                             'user'],
 'supports_function_calling': True}
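
Until the structure is consistent across providers, callers can read the limits defensively (a minimal sketch; get_token_limits is a hypothetical helper, not part of litellm, and it falls back to max_tokens when the split keys are absent):

import litellm

def get_token_limits(model: str):
    # Hypothetical helper: tolerate providers that only populate max_tokens.
    info = litellm.get_model_info(model)
    max_input = info.get("max_input_tokens", info.get("max_tokens"))
    max_output = info.get("max_output_tokens", info.get("max_tokens"))
    return max_input, max_output

print(get_token_limits("gpt-4"))  # (8192, 4096), per the log output above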
