Expose the max tokens each model accepts — this lets users adjust their requests to fit each model's context window. <img width="697" alt="Screenshot 2023-08-03 at 9 27 17 AM" src="https://github.com/BerriAI/litellm/assets/17561003/d30ceb38-ff7d-4d56-9b48-0ed44c4a5c5e">
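As a rough sketch of how a caller might use this: once max-token limits are exposed per model, a client can check whether a request fits before sending it. The mapping and helper below are illustrative stand-ins, not litellm's actual API.

```python
# Hypothetical per-model max-token mapping; values are illustrative only.
MODEL_MAX_TOKENS = {
    "gpt-3.5-turbo": 4097,
    "claude-instant-1": 100000,
}

def fits_context(model: str, prompt_tokens: int, completion_tokens: int) -> bool:
    """Return True if prompt + completion fits the model's context window."""
    limit = MODEL_MAX_TOKENS.get(model)
    if limit is None:
        return True  # no limit info for this model; let the API decide
    return prompt_tokens + completion_tokens <= limit

print(fits_context("gpt-3.5-turbo", 3000, 500))  # True
print(fits_context("gpt-3.5-turbo", 4000, 500))  # False
```

A caller could use a check like this to pick a larger-context model, or to truncate the prompt, before the request fails server-side.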