Documentation for the Llama 3.2 11B Vision Instruct model says 128K context window, but the model cannot process more than 8K tokens
Maheshbabu Boggu
I am writing to inquire about the context window of the Llama 3.2 11B Vision Instruct model.
The documentation states that the context window is 128K tokens. However, when using the model, any input exceeding 8,192 tokens is rejected. I would appreciate it if you could clarify this discrepancy and provide guidance on how to utilize the full 128K context window.
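For reference, this is roughly how I size my prompts before sending them. It is a sketch using the common ~4-characters-per-token heuristic (an approximation, not the model's actual tokenizer), and `estimate_tokens` is my own helper, not part of any SDK:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the common ~4 chars/token heuristic.

    This is an approximation only; the real count depends on the
    model's tokenizer.
    """
    return len(text) // 4

# Limit I observe in practice, despite the documented 128K window.
OBSERVED_MAX_INPUT_TOKENS = 8192

# A long prompt well under 128K tokens still gets rejected.
long_prompt = "word " * 50_000          # ~250,000 characters
estimate = estimate_tokens(long_prompt)
print(estimate)                          # well above 8,192, far below 128K
print(estimate > OBSERVED_MAX_INPUT_TOKENS)
```

Prompts of this size are well within a 128K window, yet the service rejects them at the 8,192-token boundary.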
Thank you for your time and assistance.