Ethan's Blog



Azure Application Gateway and Azure WAF

Posted on 2022-04-25

Azure Application Gateway is essentially a load balancer for web traffic, but it also provides us with better traffic control. Traditional load balancers operate on the transport layer and allow us to route traffic based on protocol (TCP or UDP) and IP address, mapping IP addresses and protocols in the frontend to IP addresses and protocols in the backend. This classic operation mode is often referred to as layer 4.

Application gateway expands on that and allows us to use hostnames and paths to determine where traffic should go, making it a layer 7 load balancer. For example, we can have multiple servers that are optimized for different things. If one of our servers is optimized for video, then all video requests should be routed to that specific server based on the incoming URL request.
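As a minimal sketch of this kind of path-based routing with the Azure CLI, the following assumes an existing gateway and uses placeholder names (appgw-demo, rg-demo, videoPool, server IPs) that are not from the post:

```shell
# Backend pool holding the video-optimized servers (placeholder addresses)
az network application-gateway address-pool create \
  --gateway-name appgw-demo --resource-group rg-demo \
  --name videoPool --servers 10.0.2.4 10.0.2.5

# URL path map: /video/* goes to videoPool, everything else to the default pool
az network application-gateway url-path-map create \
  --gateway-name appgw-demo --resource-group rg-demo \
  --name pathMap --rule-name videoRule \
  --paths "/video/*" --address-pool videoPool \
  --default-address-pool appGatewayBackendPool \
  --http-settings appGatewayBackendHttpSettings \
  --default-http-settings appGatewayBackendHttpSettings
```

A path-based rule then references this URL path map, so requests are split by URL before they ever reach a server.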

Create an application gateway

Azure Application Gateway can be used as a simple load balancer to perform traffic distribution from the frontend to the backend based on protocols and ports. But it can also expand on that and perform additional routing based on hostnames and paths. This allows us to have resource pools based on rules and also allows us to optimize performance. Using these options and performing routing based on context will increase application performance, along with providing high availability. Of course, in this case, we need to have multiple resources for each performance type in each backend pool (each performance type requires a separate backend pool).

Using these additional rules, we can route incoming requests to endpoints that are optimized for certain roles. For example, we can have multiple backend pools with different settings that are optimized to perform only specific tasks. Based on the nature of the incoming requests, the application gateway will route the requests to the appropriate backend pool. This approach, along with high availability, will provide better performance by routing each request to a backend pool that will process the request in a more optimized way.

We can set up autoscaling for application gateway (available only in the V2 SKU) by specifying the minimum and maximum number of capacity units. This way, application gateway will scale based on demand and ensure that performance is not impacted, even with the maximum number of requests.
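A minimal sketch of creating a V2 gateway that autoscales between 2 and 10 capacity units, assuming placeholder names and a region chosen for illustration:

```shell
# Resource group and a Standard public IP for the gateway frontend
az group create --name rg-demo --location westeurope
az network public-ip create --resource-group rg-demo --name appgw-pip \
  --sku Standard --allocation-method Static

# V2 gateway with autoscaling: min 2, max 10 capacity units
az network application-gateway create \
  --name appgw-demo --resource-group rg-demo --location westeurope \
  --sku Standard_v2 --min-capacity 2 --max-capacity 10 \
  --vnet-name vnet-demo --subnet appgw-subnet \
  --public-ip-address appgw-pip --priority 100
```

Note that `--min-capacity`/`--max-capacity` replace the fixed `--capacity` used by V1 gateways.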

Read more »

Traffic Manager

Posted on 2022-04-23

Azure Load Balancer provides high availability and scalability only to Azure VMs, and a single load balancer is limited to VMs in a single Azure region. If we want to provide high availability and scalability to other Azure services that are globally distributed, we must introduce a new component—Azure Traffic Manager. Traffic manager is DNS-based and provides the ability to distribute traffic over services and spread traffic across Azure regions. But traffic manager is not limited to Azure services only; we can add external endpoints as well.

Create a traffic manager profile

Traffic manager provides load balancing to services, but traffic is routed and directed using DNS entries. The frontend is an FQDN assigned during creation, and all traffic coming to traffic manager is distributed to endpoints in the backend. The default routing method is Performance, which directs each client to the endpoint offering the lowest network latency for that client.

For example, if we have more than one backend endpoint in the same region, traffic will be spread evenly. If the endpoints are located across different regions, traffic manager will direct traffic to the endpoint closest to the incoming traffic in terms of geographical location and minimum network latency.
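A sketch of creating a Performance-routed profile with one Azure endpoint and one external endpoint; the DNS name must be globally unique, and every name, resource ID, and hostname below is a placeholder:

```shell
# Profile with Performance routing; the unique DNS name becomes the frontend FQDN
az network traffic-manager profile create \
  --name tm-demo --resource-group rg-demo \
  --routing-method Performance --unique-dns-name tm-demo-example

# Azure endpoint (e.g. the public IP or App Service behind one region)
az network traffic-manager endpoint create \
  --name azure-ep --profile-name tm-demo --resource-group rg-demo \
  --type azureEndpoints --target-resource-id "<resource-id-of-endpoint>"

# External endpoint outside Azure; Performance routing needs a location hint
az network traffic-manager endpoint create \
  --name external-ep --profile-name tm-demo --resource-group rg-demo \
  --type externalEndpoints --target app.example.com \
  --endpoint-location "West Europe"
```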

Read more »

Load Balancers

Posted on 2022-04-23

Azure Load Balancer is used to support scaling and high availability for applications and services. A load balancer is primarily composed of three components—a frontend, a backend, and routing rules. Requests coming to the frontend of a load balancer are distributed based on routing rules to the backend, where we place multiple instances of a service.

This can be used for performance-related reasons, where we would like to distribute traffic equally between endpoints in the backend, or for high availability, where multiple instances of services are used to increase the chances that at least one endpoint will be available at all times. Azure supports two types of load balancers—internal and public.

Create an internal load balancer

An internal load balancer is assigned a private IP address (from the address range of subnets in the VNet) for a frontend IP address, and it targets the private IP addresses of our services (usually an Azure VM) in the backend. An internal load balancer is usually used by services that are not internet-facing and are accessed only from within our VNet.

Traffic can come from other networks (other VNets or local networks) if there is some kind of VPN in place. The traffic coming to the frontend of the internal load balancer will be distributed across the endpoints in the backend of the load balancer. Internal load balancers are usually used for services that are not placed in a demilitarized zone (DMZ) (and are therefore not accessible over the internet), but rather for middle- or back-tier services in a multitier application architecture.
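A minimal sketch of an internal Standard load balancer whose frontend is a private IP from an existing subnet; names and the IP address are placeholders:

```shell
# Internal LB: supplying a subnet and private IP makes the frontend internal
az network lb create \
  --name lb-internal --resource-group rg-demo --sku Standard \
  --vnet-name vnet-demo --subnet backend-subnet \
  --private-ip-address 10.0.1.10 \
  --frontend-ip-name lb-frontend --backend-pool-name lb-backend

# Routing rule distributing TCP 443 from the frontend to the backend pool
az network lb rule create \
  --lb-name lb-internal --resource-group rg-demo --name https-rule \
  --protocol Tcp --frontend-port 443 --backend-port 443 \
  --frontend-ip-name lb-frontend --backend-pool-name lb-backend
```

Backend VMs are then added to `lb-backend` via their NIC IP configurations.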

We also need to keep in mind the differences between the Basic and Standard SKUs. The main differences are performance (better in the Standard SKU) and the SLA (Standard guarantees 99.99% availability, while Basic has no SLA). Also, note that the Standard SKU requires an NSG: if an NSG is not present on the subnet or on the network interface (NIC) of the VM in the backend, traffic will not be allowed to reach its target. For more information on load balancer SKUs, see https://docs.microsoft.com/azure/load-balancer/skus.
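Because the Standard SKU blocks backend traffic unless an NSG explicitly allows it, a sketch of attaching one to the backend subnet might look like this (all names are placeholders):

```shell
# NSG with an inbound allow rule for the load-balanced port
az network nsg create --name nsg-backend --resource-group rg-demo

az network nsg rule create \
  --nsg-name nsg-backend --resource-group rg-demo --name allow-https \
  --priority 100 --direction Inbound --access Allow --protocol Tcp \
  --destination-port-ranges 443

# Associate the NSG with the subnet holding the backend VMs
az network vnet subnet update \
  --vnet-name vnet-demo --name backend-subnet --resource-group rg-demo \
  --network-security-group nsg-backend
```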

Read more »

Connect to Resources Securely

Posted on 2022-04-22

Exposing management endpoints (RDP, SSH, HTTP, and others) over a public IP address is not a good idea. Any kind of management access should be controlled and allowed only over a secure connection. Usually, this is done by connecting to a private network (via a Site-to-Site or Point-to-Site VPN) and accessing resources over private IP addresses. In some situations, this is not easy to achieve, whether due to insufficient local infrastructure or an overly complex scenario. Fortunately, there are other ways to achieve the same goal. We can safely connect to our resources using Azure Bastion, Azure Virtual WAN, and Azure Private Link.

Create a bastion instance

Azure Bastion allows us to connect securely to our Azure resources without additional infrastructure; all we need is a browser. It is essentially a PaaS service provisioned in our VNet that provides secure RDP/SSH connections to Azure VMs. The connection is made directly from the Azure Portal through a browser session over Transport Layer Security (TLS), and no public IP address on the target VM is required. This means that we don’t need to expose any of the management ports over a public IP address.
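A sketch of provisioning Bastion in an existing VNet; Bastion requires a dedicated subnet literally named AzureBastionSubnet and a Standard public IP, and all other names and address ranges below are placeholders:

```shell
# Dedicated subnet for Bastion (the name AzureBastionSubnet is mandatory)
az network vnet subnet create \
  --vnet-name vnet-demo --resource-group rg-demo \
  --name AzureBastionSubnet --address-prefixes 10.0.254.0/26

# Standard static public IP for the Bastion host itself
az network public-ip create \
  --name bastion-pip --resource-group rg-demo \
  --sku Standard --allocation-method Static

az network bastion create \
  --name bastion-demo --resource-group rg-demo \
  --vnet-name vnet-demo --public-ip-address bastion-pip
```

Note the public IP belongs to the Bastion host, not to any VM behind it.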

Connect to a VM with bastion

With Azure Bastion, we can connect to a VM through the browser without exposing it over a public IP address. Azure Bastion uses a dedicated subnet in the VNet to connect to VMs in that specific network, providing a secure connection over TLS.
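Besides the portal's browser session, connections can also be tunnelled through Bastion from a local Azure CLI; this is a sketch assuming the CLI bastion extension is installed, and the VM name, username, and resource names are placeholders:

```shell
# Resource ID of the target VM (no public IP needed on the VM)
vm_id=$(az vm show --name vm-demo --resource-group rg-demo --query id -o tsv)

# SSH to a Linux VM through Bastion
az network bastion ssh \
  --name bastion-demo --resource-group rg-demo \
  --target-resource-id "$vm_id" --auth-type password --username azureuser

# Or open an RDP session to a Windows VM via the native client
az network bastion rdp \
  --name bastion-demo --resource-group rg-demo --target-resource-id "$vm_id"
```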

Read more »

Create Hybrid Connections

Posted on 2022-04-22

Hybrid connections allow us to create secure connections with VNets, either from on-premises networks or from other VNets. Establishing these connections enables the secure exchange of network traffic with services located in different VNets, different subscriptions, or outside Azure (in other clouds or on-premises). Using secure connections removes the need for publicly exposed endpoints, which present a potential security risk. This is especially important for management, where opening public endpoints is a major security issue.

For example, if we consider managing VMs, it’s common practice to use either Remote Desktop Protocol (RDP) or PowerShell for management. Exposing these ports to public access presents a serious risk. A best practice is to disable any kind of public access to such ports and manage the VMs only from an internal network. In this case, we use either a Site-to-Site or a Point-to-Site connection to enable secure management.

In another scenario, we might need the ability to access a service or a database on another network, either on-premises or via another VNet. Again, exposing these services might present a risk, and we use either Site-to-Site, VNet-to-VNet, or VNet peering to enable such a connection in a secure way.
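Of the options above, VNet peering is the simplest to sketch with the CLI; peering must be created in both directions, and the VNet and group names here are placeholders:

```shell
# Forward peering: vnet1 can reach vnet2 over private addresses
az network vnet peering create \
  --name vnet1-to-vnet2 --resource-group rg-demo \
  --vnet-name vnet1 --remote-vnet vnet2 --allow-vnet-access

# Reverse peering: required for traffic to flow in both directions
az network vnet peering create \
  --name vnet2-to-vnet1 --resource-group rg-demo \
  --vnet-name vnet2 --remote-vnet vnet1 --allow-vnet-access
```

Once both peerings show as Connected, the services can communicate over private IPs with no public endpoint involved.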

Read more »
© 2016 - 2025 necusjz