Deploying an Azure OpenAI Service with GPT models is pretty straightforward. However, I ran into an issue where deploying two models at the same time resulted in the following error most of the time:
“Another operation is being performed on the parent resource ‘/subscriptions/xxxxxxxxxxxxxxxx/resourceGroups/openaibiceptest/providers/Microsoft.CognitiveServices/accounts/mlbiceptest’. Please try again later.”
Turns out Azure does not like deploying multiple models to OpenAI simultaneously, so we need to make sure these deployments run in sequence. Luckily, this is pretty straightforward to do with Bicep. The only thing we need to add is a dependency (via dependsOn, as in the sketch below) from one model deployment to the other, so the first deployment finishes before the next one starts.
You could also add even more models this way; just make sure the deployments form a chain so they run in sequence.
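A minimal sketch of how this could look. Note that the account and deployment names, API version, SKU capacities, and model versions here are placeholders, not the exact values from my template:

```bicep
resource openAi 'Microsoft.CognitiveServices/accounts@2023-05-01' = {
  name: 'mlbiceptest'
  location: resourceGroup().location
  kind: 'OpenAI'
  sku: {
    name: 'S0'
  }
  properties: {
    customSubDomainName: 'mlbiceptest'
  }
}

// First model deployment
resource gpt35Deployment 'Microsoft.CognitiveServices/accounts/deployments@2023-05-01' = {
  parent: openAi
  name: 'gpt-35-turbo'
  sku: {
    name: 'Standard'
    capacity: 30
  }
  properties: {
    model: {
      format: 'OpenAI'
      name: 'gpt-35-turbo'
      version: '0613' // placeholder model version
    }
  }
}

// Second model deployment
resource gpt4Deployment 'Microsoft.CognitiveServices/accounts/deployments@2023-05-01' = {
  parent: openAi
  name: 'gpt-4'
  sku: {
    name: 'Standard'
    capacity: 10
  }
  properties: {
    model: {
      format: 'OpenAI'
      name: 'gpt-4'
      version: '0613' // placeholder model version
    }
  }
  // Wait for the first deployment to finish before starting this one,
  // otherwise the account rejects the parallel operation.
  dependsOn: [
    gpt35Deployment
  ]
}
```

The parent property only creates an implicit dependency on the account itself, not between sibling deployments, which is why the explicit dependsOn is needed here.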