Monday, 4 August 2025

🌐 Kubernetes Service Types

Kubernetes Services expose your pods to other services or the outside world. Here are the main types:

1. ClusterIP (default)

Access: Internal only (within the cluster)

Use case: Microservice-to-microservice communication

type: ClusterIP

🧠 Tip: Not reachable from outside the cluster.
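
A minimal Service manifest, showing where type fits (names and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-backend
spec:
  type: ClusterIP        # the default; internal-only virtual IP
  selector:
    app: my-backend      # must match the pod labels
  ports:
    - port: 80           # port exposed inside the cluster
      targetPort: 8080   # container port on the pods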

2. NodePort

Access: Exposes service on a static port on each Node’s IP

Use case: Basic external access (dev/test), or when using a load balancer manually

type: NodePort

🧠 Tip: Accessed via http://<NodeIP>:<NodePort>
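
The same Service as a NodePort; a sketch with a hypothetical port from the default 30000-32767 range:

apiVersion: v1
kind: Service
metadata:
  name: my-backend
spec:
  type: NodePort
  selector:
    app: my-backend
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080    # optional; omit and Kubernetes picks one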

3. LoadBalancer

Access: Exposes service using an external cloud load balancer (like AWS ELB)

Use case: Production-grade external access

type: LoadBalancer

🧠 Tip: AWS assigns a public ELB automatically
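
Switching the manifest above to type: LoadBalancer is the only change needed; once the cloud provisions the load balancer, its address shows up in the Service status:

kubectl get svc my-backend \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'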

✅ Types of Probes in EKS (Kubernetes)

🔹 1. Liveness Probe

Purpose: Checks if the container is still alive.

Action: If it fails, Kubernetes kills the container and restarts it.

Use case: Useful when your app hangs or enters a deadlock.

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15

Example: A Spring Boot app hangs due to a memory leak — the liveness probe fails, and the pod restarts automatically.

🔹 2. Readiness Probe

Purpose: Checks if the container is ready to receive traffic.

Action: If it fails, the pod is removed from Service load balancing.

Use case: Ensures traffic is only routed to healthy and initialized containers.

readinessProbe:
  httpGet:
    path: /readiness
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10

Example: A microservice needs time to warm up and connect to a DB — readiness ensures no traffic hits it before it's ready.

🔹 3. Startup Probe

Purpose: Checks if the container has started successfully.

Action: Gives the container more time to start before liveness checks begin.

Use case: Ideal for slow-starting applications.

startupProbe:
  httpGet:
    path: /startup
    port: 8080
  failureThreshold: 30
  periodSeconds: 10

Example: A legacy Java app takes 3 minutes to start — startup probe delays liveness checks to avoid premature kills.
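
Putting the three together: a sketch of a single container using all probes (paths and ports reuse the examples above; the image is a placeholder). While the startup probe runs, liveness and readiness checks are held off:

containers:
  - name: app
    image: your-app:latest        # placeholder
    ports:
      - containerPort: 8080
    startupProbe:
      httpGet:
        path: /startup
        port: 8080
      failureThreshold: 30        # 30 x 10s = up to 300s to start
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 15
    readinessProbe:
      httpGet:
        path: /readiness
        port: 8080
      periodSeconds: 10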

VPC CNI and Istio Overview

✅ VPC CNI Plugin (aws-node DaemonSet)

  • Purpose: Pod networking — assigns ENIs and IP addresses from your VPC subnets to pods.
  • Scope: Controls how pods communicate at the network layer (Layer 3/4).
  • Key Features:
    • Each pod gets a VPC IP — visible inside the VPC.
    • Native integration with AWS networking/security groups.
    • Enables Kubernetes services, DNS, etc.

✅ Istio (Service Mesh)

  • Purpose: Handles application-level traffic control (Layer 7).
  • Scope: Service-to-service communication inside your cluster.
  • Key Features:
    • mTLS encryption between services.
    • Traffic routing, retries, circuit breaking.
    • Observability: metrics, tracing, logging.
    • Policy enforcement.
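
As a sketch of the Layer-7 control Istio adds, a hypothetical VirtualService that routes a service and retries failed calls (all names are placeholders; the subset assumes a DestinationRule not shown here):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews                  # in-mesh service name (placeholder)
  http:
    - route:
        - destination:
            host: reviews
            subset: v1         # defined by a DestinationRule
      retries:
        attempts: 3
        perTryTimeout: 2s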

🧠 Analogy

Layer | Role             | Tool    | Analogy
L3/L4 | IP, ports        | VPC CNI | Road + traffic lanes
L7    | HTTP, gRPC logic | Istio   | Traffic lights + checkpoints

🧠 What is the VPC CNI Plugin?

The Amazon VPC CNI plugin (aws-node DaemonSet) allows EKS pods to receive native VPC IPs, letting them directly communicate within the VPC.

🔍 Key Features of the VPC CNI Plugin

Feature                  | Description
Pod IPs from VPC         | Assigned from the ENI's secondary IP pool in the VPC subnet.
Native VPC routing       | No overlay network; uses native AWS networking.
Security groups for pods | Different SGs per pod (via Security Groups for Pods).
PrivateLink compatible   | Pods can communicate over VPC PrivateLink.
CloudWatch metrics       | CNI metrics can be exported.

🧩 How It Works

  1. ENI Allocation: EC2 nodes get multiple ENIs with multiple IPs.
  2. aws-node DaemonSet: Allocates IPs by managing ENIs and assigning IPs to pods.
  3. Pods Get VPC IPs: No NAT, direct VPC communication.

📏 Default IP Allocation Strategy

Each EC2 type has ENI and IP limits. For m5.large:

  • 3 ENIs (1 primary + 2 secondary)
  • 10 IPv4 addresses per ENI
  • Max pods: (3 * 10) - 1 = 29 (AWS's published formula, ENIs * (IPs per ENI - 1) + 2, also comes to 29)

Look up the per-instance-type limits with:

curl https://raw.githubusercontent.com/awslabs/amazon-eks-ami/master/files/eni-max-pods.txt
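
To see what a running node actually allows, the allocatable pod count is on the node object:

kubectl get nodes \
  -o custom-columns=NAME:.metadata.name,MAXPODS:.status.allocatable.pods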

⚙️ Custom Networking Mode (Secondary CIDR)

  • Assign pod IPs from subnets other than the node's primary subnet.
  • Helps with IP exhaustion or network segmentation.
  • Requires ENIConfig custom resources describing the pod subnets and security groups (see the sketch below).
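
A sketch of an ENIConfig for one availability zone, assuming a secondary-CIDR subnet and a security group for pod ENIs (IDs are placeholders); the CNI uses it when AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG is set to true:

apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-east-1a                       # conventionally named after the AZ
spec:
  subnet: subnet-0123456789abcdef0       # placeholder secondary-CIDR subnet
  securityGroups:
    - sg-0123456789abcdef0               # placeholder SG for pod ENIs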

📉 Observability & Metrics

Enable metrics with Helm:

helm install aws-vpc-cni \
  --namespace kube-system \
  --set enableNetworkPolicy=true \
  --set env.ENABLE_PROMETHEUS=true \
  aws/aws-vpc-cni

Metrics exposed:

  • vpc_cni_ip_assigned
  • vpc_cni_ip_in_use
  • vpc_cni_eni_allocations_failed_total

⚠️ Common Issues

Problem               | Cause                     | Fix
Pods stuck in Pending | No IPs left on ENIs       | Scale the node group or increase ENIs/IPs
ENI allocation fails  | Missing IAM permissions   | Check policy for ec2:AssignPrivateIpAddresses
Pod IPs not released  | CNI bug/misconfiguration  | Update the CNI plugin
IP exhaustion         | No free IPs in the subnet | Use secondary CIDRs or split traffic

📘 Configuration Parameters

Variable                 | Description                     | Example
WARM_IP_TARGET           | Free IPs to keep warm per node  | 3
MAX_ENI                  | Max ENIs to allocate            | 4
ENABLE_POD_ENI           | Enable security groups for pods | true
AWS_VPC_K8S_CNI_LOGLEVEL | Log level                       | DEBUG

Check with:

kubectl -n kube-system describe daemonset aws-node
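
These variables are plain environment variables on the aws-node DaemonSet, so they can be tuned in place, for example:

kubectl -n kube-system set env daemonset/aws-node WARM_IP_TARGET=3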

🔐 Security Groups for Pods (Advanced)

  • Needs prefix delegation or custom networking
  • Assign SGs per pod
  • Compatible with Calico network policies

✅ When to Use VPC CNI

Use Case                               | Use VPC CNI?
Need VPC-native networking             | ✅ Yes
Pods need access to RDS, S3, ELB       | ✅ Yes
Using a service mesh (e.g., Istio)     | ✅ Use alongside
Large clusters with IP pressure        | ⚠️ Use custom networking
High-density, purely internal clusters | ❌ Consider Calico/Cilium

📏 EC2 ENI/IP Limits and Pod IP Assignment

  1. Check ENI & IP limits per instance type (e.g., m5.large)
  2. Max pods = (ENIs * IPs per ENI) - 1 = 29
  3. Pod IPs are assigned from the subnet CIDR (e.g., 10.0.1.0/24)
  4. Example allocation:

     ENI            | IP Addresses
     eth0 (primary) | 10.0.1.10 to 10.0.1.19
     eni1           | 10.0.1.20 to 10.0.1.29
     eni2           | 10.0.1.30 to 10.0.1.39

  5. aws-node assigns the IPs as secondary IPs on the ENIs, directly usable by pods

📌 Important Notes

  • Pods per node limited by ENI/IP limits
  • Need more? Use larger instances or custom networking mode
  • Ensure VPC subnet has enough free IPs

Summary Table (m5.large)

Resource      | Count
ENIs per node | 3
IPs per ENI   | 10
Total IPs     | 30
Pods allowed  | 29

Sunday, 3 August 2025

Debugging a Pod in Kubernetes

🔍 Step-by-Step: Debugging a Crashing or Problematic Pod in Kubernetes

✅ 1. Check Pod Status

kubectl get pods -n <namespace>
  • STATUS: CrashLoopBackOff, Error, ImagePullBackOff, Pending, etc.
  • RESTARTS: Helps understand how often it's failing.

✅ 2. Describe the Pod

kubectl describe pod <pod-name> -n <namespace>
  • Check Events at the bottom: scheduling issues, volume mount errors, etc.
  • Check Container State (waiting, terminated, reason)

✅ 3. Get Pod Logs

kubectl logs <pod-name> -n <namespace>
kubectl logs <pod-name> -n <namespace> -c <container-name>
  • Use --previous if the pod restarted and you want logs from the prior container:
kubectl logs --previous <pod-name> -n <namespace>

✅ 4. Common Pod Failure States

  • CrashLoopBackOff: The container keeps crashing on startup
  • ImagePullBackOff / ErrImagePull: Image is incorrect or unauthenticated
  • OOMKilled: Out of memory — check resource limits
  • ContainerCreating: Volume or node issues
  • Completed: Pod exited successfully (common for Jobs)

✅ 5. Exec Into the Pod (If Running)

kubectl exec -it <pod-name> -n <namespace> -- /bin/sh
  • Explore logs/configs/environment manually
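
If the image has no shell, an ephemeral debug container is an alternative on recent Kubernetes versions; a sketch using a busybox image:

kubectl debug -it <pod-name> -n <namespace> \
  --image=busybox:1.36 --target=<container-name>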

✅ 6. Check Events at Namespace Level

kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp

✅ 7. Look for Liveness/Readiness Probe Failures

kubectl describe pod <pod-name>
  • Check if probes are misconfigured and causing restarts.

✅ 8. Resource Limits

  • Check if the pod is being OOMKilled (killed due to memory)
kubectl describe pod <pod-name>
  • Look for: State: Terminated Reason: OOMKilled
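
If the reason is OOMKilled, raise the memory limit in the container spec; a sketch with hypothetical values:

resources:
  requests:
    memory: "256Mi"
  limits:
    memory: "512Mi"    # increase if the app keeps getting OOMKilled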

✅ 9. Pod Stuck in Pending

  • No nodes available, missing resources, or bad nodeSelector/toleration
kubectl describe pod <pod-name>

✅ 10. Look at Node or DaemonSet Logs (for CNI/containerd issues)

  • If the pod never gets created or stays stuck, it may be a CNI/networking issue. Find the aws-node pod on the affected node, then read its logs:
kubectl -n kube-system get pods -o wide | grep <node-name>
kubectl -n kube-system logs <aws-node-pod-name>

🛠️ Optional: Use Stern to Tail Pod Logs Across Containers

stern <pod-name> -n <namespace>

📌 TL;DR - Common Fixes for Crashing Pods

  • CrashLoopBackOff: Application error or misconfigured command
  • OOMKilled: Memory limit too low — increase it
  • ImagePullBackOff: Bad image or no access to private registry
  • Pending: No schedulable nodes or resource constraints
  • Probe failures: Health check misconfigured

Troubleshoot EKS Node Not Joining

🔍 Step-by-Step: Troubleshoot EKS Node Not Joining the Cluster

✅ 1. Check Node Status in EC2

  • Log into AWS Console > EC2:
    • Verify that the EC2 instance for the node is running and in a public/private subnet as expected.
    • Check tags – ensure they include:
      kubernetes.io/cluster/<cluster-name> = owned or shared

✅ 2. Check Node IAM Role

  • Go to the EC2 instance > Check the IAM role attached.
  • Confirm that the role has the following AWS managed policies:
AmazonEKSWorkerNodePolicy
AmazonEC2ContainerRegistryReadOnly
AmazonEKS_CNI_Policy

✅ Also ensure the role is listed in your aws-auth ConfigMap.

✅ 3. Check aws-auth ConfigMap in EKS

If the IAM role of the EC2 node is not mapped to Kubernetes, the node won't join.

kubectl get configmap aws-auth -n kube-system -o yaml

Look for:

mapRoles: |
  - rolearn: arn:aws:iam::<account-id>:role/<your-node-instance-role>
    username: system:node:{{EC2PrivateDNSName}}
    groups:
      - system:bootstrappers
      - system:nodes

If missing, add it:

kubectl edit configmap aws-auth -n kube-system

✅ 4. Check Logs on the Node (via SSH)

  • SSH into the instance using the key pair.
  • Check kubelet and bootstrap logs:
# Bootstrap logs
cat /var/log/cloud-init-output.log

# Kubelet logs
journalctl -u kubelet -xe

You may see common errors like:

  • IAM role not authorized
  • Incorrect cluster endpoint
  • TLS certificate errors

✅ 5. Check Cluster Endpoint and Bootstrap Script

If you’re using a custom AMI or self-managed node group, ensure the bootstrap script is being run properly.

Look for this in user-data:

#!/bin/bash
/etc/eks/bootstrap.sh <cluster-name>

Validate it's running:

cat /var/log/cloud-init-output.log

✅ 6. Check Security Groups and Networking

  • The node's security group allows outbound HTTPS (443) to EKS and S3 endpoints.
  • The control plane security group allows traffic from the node's security group.
  • If you're using private subnets, ensure NAT Gateway or interface endpoints (VPC endpoints) are properly set.

✅ 7. Check Node Group Events (if using managed node group)

aws eks describe-nodegroup \
  --cluster-name <cluster> \
  --nodegroup-name <nodegroup>

Look under status, health.issues, or use:

kubectl get nodes

✅ 8. Check if Node is Registered with Cluster

kubectl get nodes
  • If the node is missing: It failed to register (likely bootstrap or IAM issue)
  • If the node is in NotReady state: There’s a runtime issue (e.g., containerd, kubelet, CNI)

🛠 Common Fixes

Problem                      | Fix
IAM role not in aws-auth     | Add the role via kubectl edit configmap aws-auth -n kube-system
Kubelet errors               | Check /var/log/messages and journalctl -u kubelet
Networking issue             | Update SGs; check subnet routing/NAT
Bootstrap script not running | Verify user-data and cloud-init logs
Missing policies             | Attach AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, etc.

🧪 Optional: Run a Quick Node Debug DaemonSet

kubectl apply -f https://raw.githubusercontent.com/Azure/aks-periscope/main/deployment/debug-daemonset.yaml

This runs a privileged pod on all nodes and gives more insight.

✅ Required Tags for Discovery

These tags are required for:

  • The EKS control plane to discover worker nodes
  • The VPC CNI plugin to discover subnets
  • The Cluster Autoscaler to manage scaling

Examples:

  • For EC2 Instances (Nodes):
    Key: kubernetes.io/cluster/my-eks-cluster
    Value: owned
  • For Subnets:
    Key: kubernetes.io/cluster/my-eks-cluster
    Value: shared
  • For ELBs (provisioned by Kubernetes):
    Key: kubernetes.io/cluster/my-eks-cluster
    Value: owned
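
Tags can also be applied from the CLI; a sketch for the subnet case (the subnet ID and cluster name are placeholders):

aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/my-eks-cluster,Value=shared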

🧪 Example in Practice

  • A VPC shared across multiple EKS clusters.
  • You want Cluster A to use subnet-1, and Cluster B to use the same subnet.

Then you tag subnet-1 as:

kubernetes.io/cluster/cluster-a = shared
kubernetes.io/cluster/cluster-b = shared

If instead subnet-1 is only used by cluster-a:

kubernetes.io/cluster/cluster-a = owned

✅ TL;DR

Resource       | Required? | Tag Example
EC2 node       | ✅ Yes    | kubernetes.io/cluster/my-cluster = owned
Subnet         | ✅ Yes    | kubernetes.io/cluster/my-cluster = shared
Security group | Optional* | Sometimes required for load balancer discovery
ELB            | ✅ Yes    | kubernetes.io/cluster/my-cluster = owned

Tuesday, 22 July 2025

Boto3


🧠 Understanding Boto3: Overview

Boto3 is the official AWS SDK for Python, used to interact with AWS services like S3, EC2, Lambda, DynamoDB, etc.


⚙️ 1. boto3.client() vs boto3.resource() vs boto3.session()

boto3.client(service_name, ...)

  • Low-level client.

  • Maps 1:1 to AWS service APIs.

  • Returns response dicts (JSON-like).

  • Example: client('s3')

boto3.resource(service_name, ...)

  • High-level abstraction.

  • Uses Python objects.

  • Easier for common operations (like bucket.upload_file(...))

⚠️ Not available for all AWS services.

boto3.session.Session(...)

  • Used to manage configuration: profiles, credentials, and regions.

  • You can have multiple sessions, for example for multi-account or multi-region setups.


🔍 When to Use What

Feature                 | Use client()                | Use resource()  | Use session()
Needs raw API access    | ✅ Yes                      | ❌ No           | ❌ Use session.client()
Object-based actions    | ❌ Too verbose              | ✅ Ideal        | ✅ For multi-profile access
Working across profiles | ❌ Only default or env vars | ❌ Same         | ✅ Fully supported
Need flexibility        | ✅ Advanced control         | ❌ Less control | ✅ Multi-region and credential flexibility


✅ Example 1: Using boto3.client to List S3 Buckets


import boto3

s3_client = boto3.client('s3', region_name='us-east-1')

def list_buckets():
    response = s3_client.list_buckets()
    for bucket in response['Buckets']:
        print(f"- {bucket['Name']}")

Why use client?
We want direct access to AWS API to fetch raw data like bucket names.


✅ Example 2: Using boto3.resource to Upload File to S3


import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket-name')

def upload_file():
    bucket.upload_file(Filename='file.txt', Key='uploaded_file.txt')

Why use resource?
This is a high-level operation (upload_file) which is easier with resource than calling put_object() manually with client.


✅ Example 3: Using boto3.session for Multiple Profiles

Let's say you have 2 AWS profiles: dev and prod.


import boto3

def get_instance_count(profile_name):
    session = boto3.Session(profile_name=profile_name)
    ec2 = session.client('ec2', region_name='us-east-1')
    instances = ec2.describe_instances()
    total = sum(len(reservation['Instances'])
                for reservation in instances['Reservations'])
    print(f"{profile_name} has {total} EC2 instance(s).")

get_instance_count('dev')
get_instance_count('prod')

Why use session?
Each session uses its own credentials and region. Useful for multi-account management.


✅ Example 4: Using boto3.session to Assume Role into Another Account


import boto3

def assume_role_and_list_s3(role_arn):
    base_session = boto3.Session()
    sts_client = base_session.client('sts')
    assumed_role = sts_client.assume_role(
        RoleArn=role_arn,
        RoleSessionName='CrossAccountSession'
    )
    credentials = assumed_role['Credentials']
    temp_session = boto3.Session(
        aws_access_key_id=credentials['AccessKeyId'],
        aws_secret_access_key=credentials['SecretAccessKey'],
        aws_session_token=credentials['SessionToken']
    )
    s3 = temp_session.client('s3')
    buckets = s3.list_buckets()
    for b in buckets['Buckets']:
        print(b['Name'])

assume_role_and_list_s3("arn:aws:iam::123456789012:role/SomeRole")

Why use session?
You can create temporary sessions with assumed roles — essential in enterprise, multi-account setups.


🧪 Quick Summary Table

Use Case                                  | Method Used      | Why?
List S3 buckets                           | boto3.client()   | Raw API for precise data
Upload files to S3                        | boto3.resource() | High-level object methods
Switch between dev and prod accounts      | boto3.Session()  | Supports multiple profiles
Cross-account access with STS assume role | boto3.Session()  | Temporary credentials via STS

🧰 Pro Tip

Use session.client() or session.resource() like this:


session = boto3.Session(profile_name='dev')
s3_client = session.client('s3')

It gives you flexibility + cleaner multi-env support.



✅ 1. S3 File Operations (Upload, List, Download)

import boto3

# High-level resource
s3 = boto3.resource('s3')
bucket_name = 'my-demo-bucket'

# Upload a file
s3.Bucket(bucket_name).upload_file('local.txt', 'uploaded.txt')

# List objects
for obj in s3.Bucket(bucket_name).objects.all():
    print(f'File in bucket: {obj.key}')

# Download a file
s3.Bucket(bucket_name).download_file('uploaded.txt', 'downloaded.txt')

✅ Use resource for S3 when you want file operations, cleaner syntax, and auto-pagination.


✅ 2. EC2: Launch Instance, List Instances

import boto3

ec2 = boto3.resource('ec2')

# Launch a new EC2 instance
instances = ec2.create_instances(
    ImageId='ami-0c55b159cbfafe1f0',
    InstanceType='t2.micro',
    MinCount=1,
    MaxCount=1
)
print("Launched instance ID:", instances[0].id)

# List running instances
for instance in ec2.instances.filter(
        Filters=[{'Name': 'instance-state-name', 'Values': ['running']}]):
    print(instance.id, instance.instance_type, instance.state['Name'])

boto3.resource('ec2') is great for managing instances in an object-oriented way.


✅ 3. EKS: List Clusters and Get Cluster Info

import boto3

eks = boto3.client('eks')

# List all EKS clusters
response = eks.list_clusters()
print("Clusters:", response['clusters'])

# Get details for a specific cluster
cluster_info = eks.describe_cluster(name='my-eks-cluster')
print("Cluster status:", cluster_info['cluster']['status'])

❗ EKS supports only client, not resource.


✅ 4. Lambda: List and Invoke a Function

import boto3
import json

lambda_client = boto3.client('lambda')

# List Lambda functions
functions = lambda_client.list_functions()
for func in functions['Functions']:
    print(func['FunctionName'])

# Invoke a function
response = lambda_client.invoke(
    FunctionName='my-function-name',
    InvocationType='RequestResponse',
    Payload=json.dumps({'key1': 'value1'}),
)
print("Function output:", response['Payload'].read().decode())

client is required for AWS Lambda.


✅ 5. DynamoDB: Add Item, Query Table

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('MyTable')

# Put an item
table.put_item(Item={'id': '123', 'name': 'John Doe'})

# Get the item back
response = table.get_item(Key={'id': '123'})
print(response['Item'])

resource is perfect for table access in DynamoDB.
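
The heading also mentions querying: for tables with a key schema, table.query with a key condition works as below (a minimal sketch against the same hypothetical table):

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('MyTable')

# Query all items whose partition key 'id' equals '123'
response = table.query(KeyConditionExpression=Key('id').eq('123'))
for item in response['Items']:
    print(item)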


Wednesday, 16 July 2025

EKS - 1

✅ Web App (3 Pods)
✅ MongoDB (3 Pods)
✅ AWS ALB Ingress
✅ Uses: Deployment, Service, ConfigMap, Secret, Ingress

# ===========================
# 1. Secret for MongoDB creds
# ===========================
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
stringData:
  mongo-username: mongouser
  mongo-password: mongopass
---
# ===========================
# 2. ConfigMap for WebApp
# ===========================
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-config
data:
  mongo-uri: mongodb://mongo-service:27017/mydb
---
# ===========================
# 3. MongoDB Deployment
# ===========================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:6
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: mongo-username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: mongo-password
          # MongoDB speaks TCP, not HTTP, so TCP probes on 27017 replace
          # the original httpGet probes on port 80 (which would always fail)
          livenessProbe:
            tcpSocket:
              port: 27017
            initialDelaySeconds: 10
            periodSeconds: 15
          readinessProbe:
            tcpSocket:
              port: 27017
            initialDelaySeconds: 5
            periodSeconds: 10
---
# ===========================
# 4. MongoDB Service
# ===========================
apiVersion: v1
kind: Service
metadata:
  name: mongo-service
spec:
  selector:
    app: mongo
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
  type: ClusterIP
---
# ===========================
# 5. WebApp Deployment
# ===========================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: your-ecr-repo/your-webapp:latest   # Replace this
          ports:
            - containerPort: 80
          env:
            - name: MONGO_URI
              valueFrom:
                configMapKeyRef:
                  name: webapp-config
                  key: mongo-uri
            - name: MONGO_USER
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: mongo-username
            - name: MONGO_PASS
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: mongo-password
---
# ===========================
# 6. WebApp Service
# ===========================
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
  labels:
    app: webapp
spec:
  selector:
    app: webapp
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP
---
# ===========================
# 7. Ingress for WebApp via AWS ALB
# ===========================
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
spec:
  rules:
    - host: webapp.example.com   # Update with your domain or local entry
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp-service
                port:
                  number: 80

✅ Deploy It

kubectl apply -f k8s-all.yaml

✅ How External Access Works in EKS

In EKS, to expose a service (like your web app) externally, you use:

🔸 Step 1: Internal Communication

  • webapp-service (ClusterIP): Exposes port 80 inside the cluster.
  • Pods can talk to it, but it is not accessible from outside.

🔸 Step 2: Expose via Ingress

  • webapp-ingress defines routing rules for HTTP requests from outside.
  • Uses the AWS Load Balancer Controller to provision an ALB (Application Load Balancer).
✅ Flow of External Access

INTERNET
   │
   ▼
 [ALB]   ← created by the Ingress via the AWS ALB Controller
   │
   ▼
 [Ingress]   ──> matches path/host → routes to
   │
   ▼
 [webapp-service]   (type: ClusterIP)
   │
   ▼
 [webapp pods]

🧪 Example: Customer Service Endpoints

Let's say your FastAPI app exposes:

Endpoint       | HTTP Verb | Description
/customer      | GET       | List all customers
/customer      | POST      | Add a new customer
/customer/{id} | GET       | Get a specific customer
/customer/{id} | PUT       | Update a customer

🌐 Accessing Endpoints via ALB

Assume your Ingress sets the host as:

host: webapp.example.com

And the AWS ALB provides a DNS name:

➡ a1b2c3d4e5f6.elb.us-east-1.amazonaws.com

You can:

🔁 Option 1: Map the Host via /etc/hosts (Local Dev)

sudo nano /etc/hosts

Add an entry mapping the ALB's IP (resolve the ALB DNS name first, e.g. with dig) to the host:

<ALB-IP> webapp.example.com

🔁 Option 2: Use Route 53 DNS (Production)

Point webapp.example.com to the ALB DNS name in a Route 53 hosted zone.

🧪 Test from Local

Use curl or Postman:

# List all customers
curl http://webapp.example.com/customer

# Get a specific customer
curl http://webapp.example.com/customer/123

# Add a new customer (POST)
curl -X POST http://webapp.example.com/customer \
  -H "Content-Type: application/json" \
  -d '{"name": "John", "email": "john@example.com"}'

# Update a customer
curl -X PUT http://webapp.example.com/customer/123 \
  -H "Content-Type: application/json" \
  -d '{"email": "john.new@example.com"}'

✅ What Are Liveness and Readiness Probes in Kubernetes?

Probes are used by Kubernetes to check the health of your application:

Probe Type      | Purpose
Liveness Probe  | Checks if the app is alive and should continue running
Readiness Probe | Checks if the app is ready to serve traffic

⚙️ What Happens

Situation                      | Liveness | Readiness    | Effect
App starts booting up          | ✅ alive | ❌ not ready | No traffic routed yet
App fully ready                | ✅ alive | ✅ ready     | Traffic routed
App gets stuck (infinite loop) | ❌ dead  | ❌ not ready | Pod is restarted
App healthy but DB is down     | ✅ alive | ❌ not ready | Pod not restarted, but no traffic routed

✅ What is a Kubernetes Operator?

A Kubernetes Operator is a method of automating the management of complex, stateful applications on Kubernetes using custom resources and custom controllers. Examples: MongoDB, ArgoCD.