Microsoft Support helped me get to a fix for this issue - it was NOT a problem with the AlmaLinux8 HPC image as I had first surmised...
There were two issues. Firstly, yum had updated my CycleCloud installation to version 8.5 (from 8.4), which meant my Slurm 3.0.1 cluster templates were out of date with respect to the CycleCloud version. I first needed to download the default cyclecloud-slurm 3.0.5 template (https://github.com/Azure/cyclecloud-slurm/blob/master/templates/slurm.txt), merge it with my custom template, and then create a new 8.5 cluster using the corrected template.
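For anyone else doing this, a rough sketch of the steps I mean (the template name "Slurm" and the local filename are just examples from my setup, adjust to yours):

# after downloading slurm.txt from the repo above and merging in my customisations
cyclecloud import_template Slurm -f slurm.txt
# then create a new cluster from that template in the CycleCloud UI (or CLI)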
This was the main cause of the problem. For some reason only Ampere GPU jobs ran with the out-of-date configuration, which confused things; we don't really know why. The broken configuration also meant there was no gres.conf (listing the GPUs available) in /sched, and nothing linked to it from /etc/slurm.
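For reference, once things were working the gres.conf contained entries along these lines (the node names below are hypothetical and the device paths assume a 4-GPU node; the file lives under /sched and is symlinked as /etc/slurm/gres.conf on the nodes):

# Example gres.conf entry for a set of 4-GPU compute nodes
Nodename=hpc-gpu-[1-4] Name=gpu Count=4 File=/dev/nvidia[0-3]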
Once the cluster template was fixed and a new cluster built, I just had to make sure my Slurm job script included the following option so that GPUs were available to my job:
## Specify the number of GPUs for the task
#SBATCH --gres=gpu:4
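For context, here is the shape of the job script that worked for me once the cluster was rebuilt (the job name, partition name and the nvidia-smi check are just illustrative, swap in your own):

#!/bin/bash
#SBATCH --job-name=gpu-test
#SBATCH --partition=hpc
#SBATCH --nodes=1
## Specify the number of GPUs for the task
#SBATCH --gres=gpu:4

# Quick sanity check that the job can actually see the GPUs
srun nvidia-smi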
With that in place, the GPU nodes initialised correctly in Slurm and I could run jobs against the GPUs.
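If you want to confirm the nodes are actually advertising their GPUs before submitting real work, something like this does the trick (node names will differ on your cluster):

# Show each node's generic resources (gres) and state
sinfo -N -o "%N %G %T"
# Or inspect a single node in detail
scontrol show node hpc-gpu-1 | grep -i gres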