Greetings & Welcome to the Microsoft Q&A forum! Thank you for sharing your query.
I understand you're encountering a common issue when fine-tuning language models: the model's responses draw on its pre-existing knowledge rather than your fine-tuned dataset. In addition, a high temperature makes the model's token sampling more random, which is why you're seeing a different answer each time. Lowering the temperature will make the outputs more consistent and reliable.
- Try setting the temperature between 0 and 0.5. A lower temperature makes the model more focused and less random, which should help stabilize the answers; experiment with different values in this range to see if the results improve. In Azure AI Foundry or a similar platform, you can usually find the temperature setting under the generation (sampling) parameters for your deployment.
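To see why this works, here is a minimal, self-contained sketch of how temperature scales the softmax over next-token logits (the logit values below are hypothetical, just for illustration): dividing the logits by a small temperature sharpens the distribution so the top token is chosen almost every time, while a large temperature flattens it and spreads probability across alternatives.

```python
import math

def softmax_with_temperature(logits, temperature):
    # Scale logits by 1/temperature before normalizing.
    # Lower temperature -> sharper distribution -> more deterministic sampling.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical next-token logits

low = softmax_with_temperature(logits, 0.2)   # low temperature
high = softmax_with_temperature(logits, 1.5)  # high temperature

# At temperature 0.2 the top token dominates the distribution;
# at 1.5 the probability mass is spread much more evenly.
```

This is why, with your fine-tuned data, a low temperature makes the model consistently pick its highest-probability answer instead of sampling different plausible continuations on each call.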
I hope this helps you. Thank you!