Hi Leo Chow,
Currently, the DeepSeek-R1 model is in preview, and it supports a maximum context length of 128K tokens. This large context window lets the model handle complex reasoning tasks, including language understanding, scientific reasoning, and coding.
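If you want a quick pre-flight check that a prompt will fit within that 128K-token limit before sending a request, a minimal sketch could look like the following. Note that the `fits_in_context` helper and the ~4-characters-per-token ratio are illustrative assumptions (a rough heuristic for English text), not part of any SDK or the model's actual tokenizer:

```python
# Rough pre-flight check that a prompt fits DeepSeek-R1's 128K-token context.
# The ~4 chars/token ratio is a common heuristic, not an exact tokenizer count.

MAX_CONTEXT_TOKENS = 128_000  # DeepSeek-R1 preview context limit

def fits_in_context(prompt: str, reserved_for_output: int = 4_000) -> bool:
    """Return True if the prompt likely fits, leaving room for the response."""
    estimated_tokens = len(prompt) // 4  # heuristic estimate only
    return estimated_tokens + reserved_for_output <= MAX_CONTEXT_TOKENS

print(fits_in_context("Summarize this paragraph."))  # short prompt -> True
print(fits_in_context("x" * 1_000_000))              # ~250K tokens -> False
```

For production use, you would want to count tokens with the model's actual tokenizer rather than a character-based estimate.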
Hope this helps. Do let us know if you have any further queries.
If this answers your query, please click "Accept Answer" and select "Yes" for "Was this answer helpful?".