Latin1_General_100_CI_AS_SC_UTF8 is not a bad choice for a collation. However, I am not sure that the encoding mentioned in the requirement is relevant here. That may relate to what the application uses internally, but it does not have to match how the data is stored in SQL Server.
In SQL Server you have nvarchar and varchar. With nvarchar, data is stored in the UTF-16 encoding, and you have access to the full range of Unicode. Or, well, exactly how big that range is depends on the version of the collation. Latin1_General_CI_AS is based on a quite old definition of Unicode and lacks support for some major languages like Hindi and Pinyin (i.e. Chinese written with Latin letters). It also has no understanding of fun things like emojis.
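If you want to see which collations are available and what they support, you can ask the server directly. This query uses the built-in sys.fn_helpcollations function; the description column spells out things like supplementary-character and UTF-8 support:

```sql
-- List the version-100 Latin1_General collations and their descriptions.
SELECT name, description
FROM   sys.fn_helpcollations()
WHERE  name LIKE 'Latin1_General_100%';
```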
A collation with 100 in the name supports more characters, and if you add _SC (supplementary characters) to the name, you get full understanding of those emojis.
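As a quick illustration, here is a small test you can run. With an _SC collation, LEN counts a surrogate pair as one character, while an old collation sees two UTF-16 code units:

```sql
-- The emoji U+1F600 is a supplementary character,
-- stored as a surrogate pair in UTF-16.
DECLARE @emoji nvarchar(10) = N'😀';
SELECT LEN(@emoji COLLATE Latin1_General_CI_AS)        AS old_len,  -- 2
       LEN(@emoji COLLATE Latin1_General_100_CI_AS_SC) AS sc_len;   -- 1
```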
With varchar and a non-UTF8 collation, you only have support for around 250 characters; with Latin1_General_CI_AS that is code page 1252. With Latin1_General_100_CI_AS_SC_UTF8, you have support for the same set of characters with both varchar and nvarchar; the only difference is how the data is stored. For text that is mainly in English, UTF-8 gives you some space savings, but you can achieve that with row compression as well.
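You can see the space difference for plain English text with DATALENGTH (this assumes SQL Server 2019 or later, where UTF-8 collations are available):

```sql
-- For ASCII text, UTF-8 stores one byte per character, UTF-16 two.
SELECT DATALENGTH('Hello, world!'
           COLLATE Latin1_General_100_CI_AS_SC_UTF8) AS utf8_bytes,   -- 13
       DATALENGTH(N'Hello, world!')                  AS utf16_bytes;  -- 26
```

Keep in mind that the saving only applies to characters in the ASCII range; for many other scripts, UTF-8 takes the same space as UTF-16 or more.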
But how you store the data internally in SQL Server has little relevance to the application. The client APIs will take care of the conversions needed. The application may, however, need to state that it is sending and expecting UTF-8 so that the APIs understand it, and this matters no matter whether you have varchar or nvarchar in the database.
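To illustrate that the two storage formats are interchangeable on the server side as well, here is a small sketch (again assuming SQL Server 2019 or later) that round-trips a string through the UTF-8 code page and back without loss; this is the same kind of conversion the client APIs carry out for you:

```sql
-- The COLLATE clause on the source expression makes the cast
-- to varchar use the UTF-8 code page instead of a legacy one.
DECLARE @original  nvarchar(20) = N'héllo 😀';
DECLARE @roundtrip nvarchar(20) =
    CAST(CAST(@original COLLATE Latin1_General_100_CI_AS_SC_UTF8
              AS varchar(40)) AS nvarchar(20));
SELECT CASE WHEN @roundtrip = @original
            THEN 'lossless' ELSE 'lossy' END AS result;   -- lossless
```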