HON’s Wiki # Azure


Contents

General

IPv6

Azure CLI

Install

Usage

Virtual Machine (VM)

Setup (Web Example)

This sets up a simple VM (called Yolo) with its own resource group and its own resources.

  1. Create a resource group (Yolo-RG) in the desired region.
    • This will be used by all other resources for the VM.
    • You may want to put all your VMs and resources in the same resource group, in which case you probably want to call it something else.
  2. Create a virtual network (Yolo-VNet).
    • Note: Remove any leading zeroes from IPv6 addresses and zero-compress everything. Azure doesn’t like zeroes, apparently.
    • Press “add IPv6 address space” and add a valid and randomized /48 ULA prefix (e.g. from here), so you’ll get internal address spaces for both IPv4 (/16) and IPv6 (/48). Remove any existing IPv6 prefixes.
    • Remove the “default” subnet and add a new “default” containing the first IPv4 /24 and IPv6 /64 subnets from the address spaces. No NAT gateways or service endpoints are needed.
    • No bastion host, DDoS protection or firewall is needed.
    • If you plan on using an outbound NAT gateway, it can be configured later.
  3. Create public IP addresses for the VM (IPv4 and IPv6) (Yolo-IPv{4,6}).
    • Note: This can be done differently if using a NAT gateway.
    • Select “both IPv4 and IPv6”.
    • Use the “standard” SKU.
    • Use static assignment.
    • Use “Microsoft network” routing preference.
    • Use the “zone-redundant” availability zone.
    • Take note of the allocated IPv4 and IPv6 addresses so you can add them to DNS records.
    • TODO See the docs about “IPs created before the availability zone are not zone redundant” etc.
  4. (Optional) Create a NAT gateway for outbound connections (Yolo-NATGW):
    • This is required when multiple VMs sit behind a limited number of public IPv4 addresses, since many outbound connections may cause port exhaustion. It's not required if all VMs have dedicated public IPv4 addresses.
    • Create the NAT gateway with a TCP idle timeout of e.g. 10 minutes.
    • TODO Add public IPv4/IPv6 addresses/prefixes and select the VNet. I haven't done this since all my VMs use public addresses.
  5. Create a network security group (Yolo-NSG).
    • This one is configured after its creation.
    • Add the following inbound rules (depending on the purpose of the VM):
      • (Note) Use source “any”, destination “any”, source port “any” and action “allow”.
      • (Note) The predefined services are a bit dumb, just use custom specifications instead.
      • ICMPv4: Port *, protocol ICMP.
      • SSH: Port 22, protocol TCP.
      • HTTP(S): Ports 80,443, protocol any (TCP suffices unless you want UDP 443 for HTTP/3).
    • Go to the “subnets” tab and associate it with the just-created virtual network and subnet. This will implicitly associate it with NICs in the subnet too (no need to associate NICs explicitly).
  6. Create a virtual machine (Yolo-VM).
    • Instance availability: Don’t require infrastructure redundancy.
    • Instance security: Use standard security.
    • Instance OS: Use your desired OS image, e.g. Debian.
    • Instance type: Use an appropriate size. This might require a bit of research. The B-series is fine for e.g. smaller web hosting servers.
    • Admin account: If you plan on provisioning the server with e.g. Ansible after creation, use an appropriate username and SSH pubkey.
    • Inbound ports: Allow public inbound port SSH. The NSG can be changed later. (TODO: )
    • OS disk:
      • Use standard SSD unless you need high IOPS.
      • Use default encryption type (at-rest with platform-managed key).
      • Delete disk with VM.
    • Data disk (if needed):
      • Create a new disk.
      • The auto-generated name is fine IMO.
      • Use the same options as the OS disk, where applicable, except maybe “delete with VM”.
    • Network:
      • Use the created virtual network and subnet.
      • Use the created IPv4 address. The created IPv6 address can be added later.
      • Don't use a NIC NSG; the created NSG is already associated with the used subnet.
      • Delete the NIC when the VM is deleted, but don't delete the IP addresses when the VM is deleted.
      • Don’t use load balancing.
    • Monitoring: You choose.
    • Backup:
      • Enable if not using other backup solutions.
      • Create a new recovery services vault (Yolo-RSV) within the RG.
      • Use policy subtype “standard”.
      • Use the default, new backup policy or create a custom one.
    • Cloud-Init (optional): Add custom data and user data.
  7. Fix the NIC:
    • (TODO) Was it pointless to select any inbound ports during VM creation when the NSG rules will be applied anyways?
    • Go to the “IP configurations” tab and add a new secondary config for IPv6 named ipconfig2, with dynamic assignment and associated with the created public IPv6 address.
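The portal steps above can be roughly sketched with the Azure CLI instead. This is a non-authoritative sketch using the example names from the steps (Yolo-RG etc.); the region, ULA prefix, VM size, image, admin username and SSH key path are placeholder assumptions you should adjust.

```shell
# Sketch of the portal steps above using Azure CLI.
# Region, ULA prefix, size, image and admin details are assumptions.

# 1. Resource group
az group create --name=Yolo-RG --location=westeurope

# 2. Dual-stack virtual network with a "default" subnet (first /24 and /64)
az network vnet create --resource-group=Yolo-RG --name=Yolo-VNet \
    --address-prefixes 10.0.0.0/16 fd00:1234:5678::/48 \
    --subnet-name=default \
    --subnet-prefixes 10.0.0.0/24 fd00:1234:5678::/64

# 3. Public IP addresses (standard SKU, static, zone-redundant)
az network public-ip create --resource-group=Yolo-RG --name=Yolo-IPv4 \
    --version=IPv4 --sku=Standard --allocation-method=Static --zone 1 2 3
az network public-ip create --resource-group=Yolo-RG --name=Yolo-IPv6 \
    --version=IPv6 --sku=Standard --allocation-method=Static --zone 1 2 3

# 5. NSG with inbound rules, associated with the subnet (not the NIC)
az network nsg create --resource-group=Yolo-RG --name=Yolo-NSG
az network nsg rule create --resource-group=Yolo-RG --nsg-name=Yolo-NSG \
    --name=AllowSSH --priority=100 --direction=Inbound --access=Allow \
    --protocol=Tcp --destination-port-ranges=22
az network nsg rule create --resource-group=Yolo-RG --nsg-name=Yolo-NSG \
    --name=AllowHTTP --priority=110 --direction=Inbound --access=Allow \
    --protocol='*' --destination-port-ranges 80 443
az network vnet subnet update --resource-group=Yolo-RG --vnet-name=Yolo-VNet \
    --name=default --network-security-group=Yolo-NSG

# 6. Virtual machine (B-series Debian example; --nsg '""' avoids a NIC NSG)
az vm create --resource-group=Yolo-RG --name=Yolo-VM \
    --image=Debian11 --size=Standard_B2s \
    --vnet-name=Yolo-VNet --subnet=default \
    --public-ip-address=Yolo-IPv4 \
    --admin-username=ansible --ssh-key-values=~/.ssh/id_ed25519.pub \
    --nsg='""'
```

The secondary IPv6 IP configuration (step 7) still has to be added to the NIC afterwards, as in the portal flow.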

Usage

Miscellanea

Azure Kubernetes Service (AKS)

UPDATE: I have given up on this … IPv6 support is too poor, and deploying AKS with dual-stack Cilium fails with an error, even when following the guides exactly. IPv4-only is not an option …

Resources

Info

Nodes

Networking

Outbound Node Networking Types
CNI Plugin
Network Policy Engines

Ingress Controllers

Setup (CLI Example)

This creates a public Linux AKS cluster using dual-stack Azure CNI with Cilium networking.

Using Azure CLI and example resource names.

  1. Log into Azure CLI and set an active subscription.
  2. (TODO Maybe) Enable the Cilium dataplane preview for e.g. dual-stack support: az feature register --namespace=Microsoft.ContainerService --name=CiliumDataplanePreview
  3. Create AKS cluster: az aks create --resource-group=test_rg --name=test_aks --tier=standard --load-balancer-sku=standard --vm-set-type=VirtualMachineScaleSets -s Standard_B2als_v2 --node-count=3 --kubernetes-version=1.29 --network-plugin=azure --network-plugin-mode=overlay --network-dataplane=cilium --ip-families=ipv4,ipv6 --pod-cidrs=172.16.0.0/16,fdfb:4d1e:98bf::/48 --service-cidrs=10.0.0.0/16,fd92:d839:1c02::/108 --os-sku=AzureLinux --generate-ssh-keys --api-server-authorized-ip-ranges=a.b.c.d/e
    • To give the cluster identity rights to pull images from an ACR (see the optional first step), add argument --attach-acr=<acr_id>. You need owner/admin privileges to orchestrate this.
    • --tier=standard is for production. Use free for simple testing.
    • --load-balancer-sku=standard is for using the recommended load balancer variant, which is required for e.g. authorized IP ranges and multiple node pools.
    • -s Standard_DS2_v2 is the default and has 2 vCPU, 7GiB RAM and 14GiB SSD temp storage. Standard_B2als_v2 is a cheaper B-series alternative with 4GiB RAM.
    • --node-count=3 creates 3 nodes of the specified size. As this node pool is the system node pool hosting critical pods, it's important to have at least 3 nodes for proper redundancy.
    • --node-osdisk-type=Ephemeral uses a host-local OS disk instead of a network-attached disk, yielding better performance and zero cost. This only works if the VM cache for the given size is big enough for the VM image, which is not the case for small VM sizes.
    • --ip-families=ipv4,ipv6 enables IPv6 support (only dual-stack supported for IPv6).
    • --api-server-authorized-ip-ranges=<cidr1>,[cidr2]... is used to limit access to the k8s controller from any ranges you want access from. The cluster egress IP address is added to this list automatically. Up to 200 IP ranges can be specified. Does not support IPv6.
    • TODO “(UnsupportedDualStackNetworkPolicy) Network policy cilium is not supported for dual-stack clusters.”
  4. Add k8s credentials to local kubectl config: az aks get-credentials --resource-group=test_rg --name=test_aks
  5. Check stuff:
    • Show nodes: kubectl get nodes -o wide
    • Show node resource usage: kubectl top nodes
    • Show all pods: kubectl get pods -A -o wide
    • Check Cilium status: kubectl -n kube-system exec ds/cilium -- cilium status
    • Check Cilium health: kubectl -n kube-system exec ds/cilium -- cilium-health status
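To sanity-check dual-stack networking once the cluster is up, a small test deployment can be exposed as a dual-stack service. This is a hedged sketch; the deployment/service name web-test is made up, and it assumes kubectl is already pointed at the cluster (step 4).

```shell
# Deploy a throwaway web server (hypothetical name).
kubectl create deployment web-test --image=nginx

# Expose it as a dual-stack service; RequireDualStack makes creation fail
# fast if the cluster cannot allocate an IPv6 cluster IP.
kubectl expose deployment web-test --port=80 \
    --overrides='{"spec":{"ipFamilyPolicy":"RequireDualStack","ipFamilies":["IPv4","IPv6"]}}'

# The service should list both an IPv4 and an IPv6 cluster IP.
kubectl get service web-test -o jsonpath='{.spec.clusterIPs}'

# Clean up.
kubectl delete service,deployment web-test
```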

TODO First

TODO Next

Usage

Using example resource names.

