Hi,
Thanks for reaching out to Microsoft Q&A.
To implement a solution where a runbook monitors a resource (such as an Azure SQL database or a storage account) and automatically adjusts its max data size based on usage thresholds, you can follow the steps below, using Azure Automation runbooks together with the Azure Resource Manager (ARM) APIs or SDKs.
- Plan the Workflow
- Monitor Resource Size: Query the current size and usage percentage.
- Check Threshold: If the resource is 90% full, wait for 5 minutes and re-check to confirm.
- Adjust Size: If usage is still above 90%, increase the max size to the next available tier/size.
- Optional Downgrade: Include logic to check for downsizing when usage consistently drops below a lower threshold.
- Determine Max Size Levels
- If the resource has predefined size levels (e.g., Azure SQL Database tiers or storage account limits), you can store these levels in a configuration file (or hardcode them) or use an Azure API to fetch them dynamically.
Example: For Azure SQL Database, size levels might include 5 GB, 10 GB, 20 GB, etc. For Azure Storage, tiers and limits vary by account type.
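If you hardcode the levels, a minimal helper like the following keeps the "next size" logic in one place (the tier values here are illustrative, not the full list of supported Azure SQL sizes):

```python
# Illustrative size tiers in GB; confirm the max sizes your service
# objective actually supports before hardcoding values like these.
SIZE_TIERS_GB = [5, 10, 20, 50, 100, 250]

def next_tier(current_gb):
    """Return the smallest tier strictly larger than current_gb, or None at the cap."""
    for tier in SIZE_TIERS_GB:
        if tier > current_gb:
            return tier
    return None  # already at the largest configured tier
```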
- Set Up an Azure Automation Runbook
Use a PowerShell or Python runbook for the implementation. Below is a Python sketch using the `azure-mgmt-sql` and `azure-mgmt-monitor` packages. Note that used space is not a property of the `Database` object itself, so the sketch reads the `storage` metric (data space used, in bytes) from Azure Monitor; all `Your...` values are placeholders.
```python
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import DatabaseUpdate

SUBSCRIPTION_ID = "YourSubscriptionId"
RESOURCE_GROUP = "YourResourceGroupName"
SERVER_NAME = "YourServerName"
DATABASE_NAME = "YourDatabaseName"
THRESHOLD_PERCENTAGE = 90
CHECK_INTERVAL_SECONDS = 300  # 5 minutes

def get_resource_usage(sql_client, monitor_client):
    """Return (used_gb, max_gb) for the database."""
    db = sql_client.databases.get(RESOURCE_GROUP, SERVER_NAME, DATABASE_NAME)
    # Used space is not exposed on the Database object; read the 'storage'
    # metric (data space used, in bytes) from Azure Monitor instead.
    metrics = monitor_client.metrics.list(
        db.id, metricnames="storage", aggregation="Maximum"
    )
    used_bytes = 0
    for metric in metrics.value:
        for series in metric.timeseries:
            for point in series.data:
                if point.maximum is not None:
                    used_bytes = max(used_bytes, point.maximum)
    return used_bytes / 1e9, db.max_size_bytes / 1e9  # bytes -> GB

def update_max_size(sql_client, new_size_gb):
    # begin_update returns a poller; block until the resize completes
    sql_client.databases.begin_update(
        RESOURCE_GROUP,
        SERVER_NAME,
        DATABASE_NAME,
        DatabaseUpdate(max_size_bytes=int(new_size_gb * 1e9)),
    ).result()

def main():
    credential = DefaultAzureCredential()
    sql_client = SqlManagementClient(credential, SUBSCRIPTION_ID)
    monitor_client = MonitorManagementClient(credential, SUBSCRIPTION_ID)
    while True:
        used_gb, max_gb = get_resource_usage(sql_client, monitor_client)
        usage_percentage = (used_gb / max_gb) * 100
        if usage_percentage >= THRESHOLD_PERCENTAGE:
            print(f"Threshold reached: {usage_percentage:.1f}%")
            time.sleep(CHECK_INTERVAL_SECONDS)
            # Re-check to confirm this is sustained growth, not a brief spike
            used_gb, max_gb = get_resource_usage(sql_client, monitor_client)
            usage_percentage = (used_gb / max_gb) * 100
            if usage_percentage >= THRESHOLD_PERCENTAGE:
                print("Confirmed threshold. Increasing size.")
                # Azure SQL only accepts specific max sizes per tier; in
                # practice, round up to the next supported size level.
                new_size = max_gb + 10  # Increment by 10 GB
                update_max_size(sql_client, new_size)
                print(f"Max size updated to {new_size} GB")
        else:
            print(f"Usage within limits: {usage_percentage:.1f}%")
        time.sleep(CHECK_INTERVAL_SECONDS)

if __name__ == "__main__":
    main()
```
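One design note: Azure Automation sandbox jobs are subject to a fair-share limit (around three hours), so a long-running `while True` loop may be interrupted. A more robust pattern is to have the runbook perform a single check-and-adjust pass and trigger it on an Automation schedule (or run it on a Hybrid Runbook Worker for long-running jobs).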
- Deploy and Test
- Deploy the runbook in Azure Automation.
- Add parameters to allow flexibility (e.g., `ThresholdPercentage`, `CheckIntervalMinutes`, `IncrementSize`); a parameter-handling sketch follows this list.
- Test thoroughly in a non-production environment to avoid unexpected scaling or costs.
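As a sketch of the parameter handling: Azure Automation passes Python runbook parameters as positional command-line arguments, so you can read them from `sys.argv` with sensible defaults (the parameter order below is an assumption for illustration):

```python
import sys

# Positional runbook parameters: ThresholdPercentage, CheckIntervalMinutes, IncrementSize.
# Defaults apply when the runbook is started without arguments.
THRESHOLD_PERCENTAGE = float(sys.argv[1]) if len(sys.argv) > 1 else 90
CHECK_INTERVAL_SECONDS = int(sys.argv[2]) * 60 if len(sys.argv) > 2 else 300
INCREMENT_SIZE_GB = float(sys.argv[3]) if len(sys.argv) > 3 else 10
```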
- Optional: Auto-Scaling for Downgrade
- Include logic to detect when usage drops below a lower threshold (e.g., 30%) for a sustained period (e.g., 24 hours); see the sketch after this list.
- Automate tier downgrades based on this logic.
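A minimal sketch of that sustained-low-usage check (the threshold and window values are examples, and the helper name is hypothetical):

```python
import time

DOWNGRADE_THRESHOLD = 30        # percent
SUSTAINED_SECONDS = 24 * 3600   # 24 hours

low_usage_since = None  # timestamp of the first low-usage observation

def should_downgrade(usage_percentage, now=None):
    """Return True only after usage has stayed below the floor for the full window."""
    global low_usage_since
    now = now if now is not None else time.time()
    if usage_percentage >= DOWNGRADE_THRESHOLD:
        low_usage_since = None  # any spike resets the window
        return False
    if low_usage_since is None:
        low_usage_since = now
    return (now - low_usage_since) >= SUSTAINED_SECONDS
```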
Notes
- APIs: Use the relevant Azure API for your resource (e.g., Azure SQL, Blob Storage) to retrieve usage and adjust limits.
- Costs: Be aware of potential cost implications with automatic scaling.
- Hardcoding: Hardcoding size levels is acceptable if there's no API to fetch them dynamically (see the tier sketch above), but ensure it's well-documented for future changes.
Please feel free to click the 'Upvote' (Thumbs-up) button and 'Accept as Answer'. This helps the community by allowing others with similar queries to easily find the solution.