

Azure networking - Public IP addresses in Classic vs ARM

This post was contributed by Stefano Gagliardi, Pedro Perez, Telma Oliveira, and Leonid Gagarin

As you know, we recently introduced the Azure Resource Manager deployment model as an enhancement of the previous Classic deployment model. You can read more about the two models here: https://azure.microsoft.com/en-us/documentation/articles/resource-manager-deployment-model/

The two models differ in important ways across several Azure technologies. In this article we want to clarify, in particular, what has changed when it comes to the public IP addresses that you can assign to your resources.

Azure Service Management / Classic deployment / v1

In ASM, we have the concept of the Cloud Service.

The Cloud Service is a container of instances, either IaaS VMs or PaaS roles. (more about cloud services here https://blogs.msdn.com/b/plankytronixx/archive/2014/04/24/i-m-confused-between-azure-cloud-services-and-azure-vms-what-the.aspx )

Cloud Services are bound to a public IP address, called the VIP, and have a name that is registered in the public DNS infrastructure with the cloudapp.net suffix.

For example, my Cloud Service is called chicago and has 23.101.69.53 as its VIP.

 

Note: it is possible to assign multiple VIPs to the same cloud service: https://azure.microsoft.com/en-us/documentation/articles/load-balancer-multivip/

It is also possible to reserve a cloud service VIP so that you don't risk your VIP changing when VMs restart: https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-reserved-public-ip/

In the Azure Service Management model, you deploy IaaS VMs inside Cloud Services. You can reach resources located on the VM from the Internet only on specific TCP/UDP ports (no ICMP!) for which you have created endpoints.

 

(read here for more info about endpoints https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-set-up-endpoints/ )

Endpoints are simply a mapping between a certain private port on the VM's internal dedicated IP address (DIP) and a public port to be opened on the cloud service public IP (VIP). Azure takes care of all the NATting; you do not need to configure anything else. Notice that in ASM you don't necessarily need to add the VM to a Virtual Network. If you do, the VM will have a DIP in the private address range of your choice. Otherwise, Azure will assign the VM a random internal IP. In some datacenters, a VM that is not in a VNet can receive a public IP address as its DIP, but the machine won't be reachable from the internet on that IP! It will, again, be reachable only on the endpoints of the VIP.

Security is taken care of by Azure for you as well: no connection will ever be possible from the outside on ports for which you haven't defined an endpoint. Traffic on opened ports can instead be filtered by means of Endpoint ACLs https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-acl/ or Network Security Groups https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-nsg/
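For illustration, here is a minimal sketch of an endpoint ACL using the classic Azure PowerShell module (the service name, VM name, endpoint name and remote subnet are hypothetical):

#build an ACL that only permits a specific remote subnet
$acl = New-AzureAclConfig
Set-AzureAclConfig -AddRule -ACL $acl -Action permit -RemoteSubnet "203.0.113.0/24" -Order 100 -Description "office subnet"

#apply the ACL to an existing endpoint on the VM
Get-AzureVM -ServiceName "chicago" -Name "vm1" | Set-AzureEndpoint -Name "web" -Protocol tcp -LocalPort 80 -PublicPort 80 -ACL $acl | Update-AzureVM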

Now, you can deploy several IaaS VMs inside the same cloud service. By default, all VMs in the same cloud service will inherit the same cloud service public IP address (VIP).

This has a consequence: you cannot expose different services on different VMs to the public internet using the same public port. You will need to create an endpoint on each VM referencing a different public port. For example, in order to connect via RDP to vm1 and vm2 in my chicago Cloud Service, I have done the following:

  • Created an endpoint on vm1 that maps internal port 3389 to public port 50001
  • Created an endpoint on vm2 that maps internal port 3389 to public port 50002

Then I can connect to vm1 via chicago.cloudapp.net:50001 and to vm2 via chicago.cloudapp.net:50002.
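For reference, this is a sketch of how those two endpoints could be created with the classic Azure PowerShell module (assuming the cloud service and VM names above):

#map public port 50001 on the VIP to private port 3389 on vm1
Get-AzureVM -ServiceName "chicago" -Name "vm1" | Add-AzureEndpoint -Name "RDP" -Protocol tcp -LocalPort 3389 -PublicPort 50001 | Update-AzureVM

#map public port 50002 on the VIP to private port 3389 on vm2
Get-AzureVM -ServiceName "chicago" -Name "vm2" | Add-AzureEndpoint -Name "RDP" -Protocol tcp -LocalPort 3389 -PublicPort 50002 | Update-AzureVM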

It is worth noticing that the client you are starting RDP from does not need any knowledge of the hostname of the destination machine (vm1, vm2). Also, the destination machine is completely unaware of the Cloud Service, its public DNS name and its VIP: the machine is just listening on its private DIP on the appropriate ports. You will not be able to see the VIP on the Azure VM's network interface.

You can, however, create load-balanced endpoints to expose the same port of different VMs on the same port of the VIP (think of an array of web servers handling HTTP requests for the same website).
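A minimal sketch of such a load-balanced endpoint in the classic model (the load-balanced set name "webfarm" is hypothetical; both VMs join the same set on public port 80):

#add vm1 to the load-balanced set, probing port 80 to check instance health
Get-AzureVM -ServiceName "chicago" -Name "vm1" | Add-AzureEndpoint -Name "http" -Protocol tcp -LocalPort 80 -PublicPort 80 -LBSetName "webfarm" -ProbePort 80 -ProbeProtocol tcp | Update-AzureVM

#add vm2 to the same set: traffic to chicago.cloudapp.net:80 is now balanced across both VMs
Get-AzureVM -ServiceName "chicago" -Name "vm2" | Add-AzureEndpoint -Name "http" -Protocol tcp -LocalPort 80 -PublicPort 80 -LBSetName "webfarm" -ProbePort 80 -ProbeProtocol tcp | Update-AzureVM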

There is a limit of 150 endpoints that you can open on a cloud service. (check back here for updated info on service limits: https://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/)

This means that you cannot open the whole range of TCP dynamic ports for a VM. If you have applications that need to be contacted on dynamic TCP ports (for example, passive FTP), you may want to consider assigning your machine an Instance Level Public IP (ILPIP). ILPIPs are assigned exclusively to the VM and are not shared with other VMs in the same cloud service. Hence, the whole range of TCP/UDP ports is available, with a 1-to-1 mapping between the public port on the ILPIP and the private port on the VM's DIP; again, no ICMP! (more about ILPIPs here https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-instance-level-public-ip/)
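Assigning an ILPIP to a classic VM is a one-liner with the classic Azure PowerShell module; a sketch, with a hypothetical ILPIP name:

#request an instance-level public IP for vm1 (no endpoints needed on it)
Get-AzureVM -ServiceName "chicago" -Name "vm1" | Set-AzurePublicIP -PublicIPName "vm1-ilpip" | Update-AzureVM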

The ILPIP does not replace the cloud service VIP; it is an additional public IP for the VM. However, the VM uses the ILPIP as its outbound IP address.

Note that you do not need to open endpoints for ILPIPs, as there is no NAT involved. All TCP/UDP ports are "open" by default, so make sure you take care of security with a proper firewall configuration on the guest VM and/or by applying Network Security Groups.

As of January 2016, you cannot reserve an ILPIP address as static for your classic VM. Check back on these official documents for future announcements: https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-reserved-public-ip/ https://azure.microsoft.com/en-us/updates/static-public-ip-addresses-available-for-azure-virtual-machines/

Azure Resource Manager / ARM / v2

In this new deployment model, we have changed how Azure works under the covers. In ARM, there is no longer a Cloud Service concept; instead, we have the Resource Group. While you can still think of the Resource Group as a container for your VMs (and other resources), it is very different from the Cloud Service.

What is interesting to notice from a networking perspective is that the Resource Group doesn't have a VIP bound to it by default. Also, in ARM every VM must be deployed in a Virtual Network.

ARM has introduced the concept of the Public IP, an object that can be bound to VM NICs, load balancers, and other PaaS resources such as VPN gateways or Application Gateways.

As you create VMs, you will then assign them a NIC and a public IP. The public IP will be different for every VM. Simplifying, in the ARM model all VMs have their own public IP (it's as if they were classic VMs with an ILPIP).
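As a sketch, this is how a Public IP object can be created and bound to a new NIC (the names are hypothetical, and $rgname, $location and $subnet are assumed to exist already):

#create a dynamic public IP and bind it to a new NIC at creation time
$pip = New-AzureRmPublicIpAddress -Name "vm1-pip" -ResourceGroupName $rgname -Location $location -AllocationMethod Dynamic
$nic = New-AzureRmNetworkInterface -Name "vm1-nic" -ResourceGroupName $rgname -Location $location -Subnet $subnet -PublicIpAddress $pip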

Hence, you no longer need to open endpoints as you did in ASM/Classic, because all ports are potentially open and no longer NATted: there are no endpoints in ARM. Again, you need to take care of your VM's security with a proper firewall configuration on the guest VM and/or by applying Network Security Groups. Notice that, as a security enhancement over the classic model, an NSG is automatically assigned to every VM, with a single rule that allows RDP traffic on port 3389. You will need to modify the NSG to open other TCP/UDP ports.
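For example, opening an extra port means adding a rule to that NSG. A minimal sketch (the NSG name and rule priority are hypothetical):

#fetch the NSG, add an inbound allow rule for TCP 80, and commit the change
$nsg = Get-AzureRmNetworkSecurityGroup -Name "vm1-nsg" -ResourceGroupName $rgname
Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "allow-http" -Access Allow -Protocol Tcp -Direction Inbound -Priority 110 -SourceAddressPrefix "*" -SourcePortRange "*" -DestinationAddressPrefix "*" -DestinationPortRange "80"
Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg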

 

By default, all public IP addresses in ARM are dynamic. You can now change to static the public IP addresses that are assigned to a NIC bound to a Virtual Machine: https://azure.microsoft.com/en-us/updates/static-public-ip-addresses-available-for-azure-virtual-machines/

Note: you will need to stop/deallocate the VM to make this effective; it doesn't work on a running VM, so plan some downtime ahead. Then you will have to perform something like the following:

#create a new static public IP

$PubIP = New-AzureRmPublicIpAddress -Name $IPname -ResourceGroupName $rgname -AllocationMethod Static -Location $location

#fetch the current NIC of the VM

$NIC = Get-AzureRmNetworkInterface -Name $NICname -ResourceGroupName $rgname

#assign the new public static IP to the NIC

$NIC.IpConfigurations[0].PublicIpAddress = $PubIP

#commit changes

Set-AzureRmNetworkInterface -NetworkInterface $NIC
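Once the change is committed, you can check that the allocation method is now Static and see the address that was assigned:

#verify the public IP created above
Get-AzureRmPublicIpAddress -Name $IPname -ResourceGroupName $rgname | Select-Object Name, IpAddress, PublicIpAllocationMethod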

This is a sample script: consider extensive testing before applying any kind of change in a production environment.

Now, there are circumstances in which we would still like to take advantage of port forwarding/NATting in ARM, just like endpoints did in classic. This is possible: you will have to reproduce the V1 mechanism of traffic going through a load balancer.

However, be aware of the requirements:

  • You will have to stop/deallocate the VM. This will cause downtime.
  • In order to add a VM to a load balancer, it must be in an availability set.
  • You are going to use the load balancer's IP address to connect to the VM on the NATted port, not the VM's public IP.

Once you’re ok with the above, the procedure is “simple” and can be derived from here

https://azure.microsoft.com/en-us/documentation/articles/load-balancer-get-started-internet-arm-ps/

 

For your reference, here is the sample script I have used.

 #necessary variables

$vmname="<the name of the VM>"

$rgname="<the name of the Resource Group>"

$vnetname="<the name of the Vnet where the VM is>"

$subnetname="<the name of the Subnet where the VM is>"

$location="West Europe"

 

#This creates a new loadbalancer and creates a NAT rule from public port 50000 to private port 80

$publicIP = New-AzureRmPublicIpAddress -Name PublicIP -ResourceGroupName $rgname -Location $location -AllocationMethod Static

$externalIP = New-AzureRmLoadBalancerFrontendIpConfig -Name LBconfig -PublicIpAddress $publicIP

$internaladdresspool= New-AzureRmLoadBalancerBackendAddressPoolConfig -Name "LB-backend"

$inboundNATRule1= New-AzureRmLoadBalancerInboundNatRuleConfig -Name "natrule" -FrontendIpConfiguration $externalIP -Protocol TCP -FrontendPort 50000 -BackendPort 80

$NRPLB = New-AzureRmLoadBalancer -ResourceGroupName $rgname -Name IrinaLB -Location $location -FrontendIpConfiguration $externalIP -InboundNatRule $inboundNATRule1 -BackendAddressPool $internaladdresspool

 

#These retrieve the vnet and VM settings (necessary for later steps)

$vm= Get-AzureRmVM -name $vmname -ResourceGroupName $rgname

$vnet = Get-AzureRmVirtualNetwork -Name $vnetname -ResourceGroupName $rgname

$internalSubnet = Get-AzureRmVirtualNetworkSubnetConfig -Name $subnetname -VirtualNetwork $vnet

 

#This creates a new NIC with the LB settings

$lbNic= New-AzureRmNetworkInterface -ResourceGroupName $rgname -Name LBnic -Location $location -Subnet $internalSubnet -LoadBalancerBackendAddressPool $nrplb.BackendAddressPools[0] -LoadBalancerInboundNatRule $nrplb.InboundNatRules[0]

 

#This removes the old NIC from the VM

Remove-AzureRmVMNetworkInterface -vm $vm -NetworkInterfaceIDs $vm.NetworkInterfaceIDs[0]

 

#This adds the new NIC we just created to the VM

Add-AzureRmVMNetworkInterface -vm $vm -id $lbNic.id -Primary

 

#This Stops the VM

Stop-AzureRmVM -Name $vmname -ResourceGroupName $rgname

 

#This commits changes to the Fabric

Update-AzureRmVM -vm $vm -ResourceGroupName $rgname

 

#This restarts the VM

Start-AzureRmVM -Name $vmname -ResourceGroupName $rgname

 

After this, you can access port 80 on the VM by accessing port 50000 on the load balancer’s $publicIP.
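A quick way to test the NAT rule from a client machine (Test-NetConnection is available on Windows 8.1 / Server 2012 R2 and later):

#check that the load balancer forwards public port 50000 to the VM
Test-NetConnection -ComputerName $publicIP.IpAddress -Port 50000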

Again, this is a sample script: consider extensive testing before applying any kind of change in a production environment.

 
