Hello Ching-Wei,
Thank you to @Sina Salam for the suggestions. The answer is related to your suggestion 3.
Issue: The model the customer deployed was the wrong version. I missed that the docs list only specific model versions as supporting fine-tuning; for gpt-4o, it is only gpt-4o-2024-08-06. My deployment used version 2024-11-20, which is the latest version but apparently does not support fine-tuning.
Error Message: Error code: 400 - {'error': {'code': 'invalidPayload', 'message': 'The specified base model does not support fine-tuning.'}}
Solution: According to the customer: "When calling the fine-tuning method for gpt-4o, you have to use the full model string with version, in this case gpt-4o-2024-08-06. Using just the model's name gives an error that the base model doesn't support fine-tuning. BTW this is not true for gpt-35-turbo or gpt-4o-mini - when calling fine-tuning with those model strings alone (no versions) the fine-tuning job will be created no problem."
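For anyone hitting the same 400 error, here is a minimal sketch of what the working call looks like with the OpenAI Python SDK against Azure OpenAI. The endpoint, API key, api-version, and training file ID are placeholders/assumptions, not values from this thread; the point is simply that the `model` parameter carries the full versioned string.

```python
# Sketch only: create an Azure OpenAI fine-tuning job using the full
# versioned model string. Credentials and file IDs below are placeholders.

# Per the behavior described above (verify against current Azure docs):
GPT4O_FINE_TUNABLE = "gpt-4o-2024-08-06"  # accepted for fine-tuning
GPT4O_BASE_ONLY = "gpt-4o"                # rejected: "base model does not support fine-tuning"


def create_fine_tuning_job() -> str:
    """Submit a fine-tuning job and return its ID (requires valid credentials)."""
    import os
    from openai import AzureOpenAI  # pip install openai

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-08-01-preview",  # assumption: any fine-tuning-capable version
    )
    job = client.fine_tuning.jobs.create(
        model=GPT4O_FINE_TUNABLE,        # full versioned string avoids the 400 error
        training_file="file-xxxxxxxx",   # placeholder: ID of your uploaded JSONL file
    )
    return job.id
```

Nothing runs at import time; `create_fine_tuning_job()` only works with real credentials and an uploaded training file.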
I hope this is helpful! Don't hesitate to let me know if you have any other questions or need clarification.
Please don't forget to close the thread by upvoting and accepting this as the answer if it helped.