Topic outline

  • Whoami, Whoareyou and Whereami Problems

    What We’ll Do

    We’ll use a pre-made container image — containous/whoami — a tiny web server that tells you where it is hosted and what it receives when you call it.

    If you'd like to build your own container image with Docker, run:

    git clone
    docker build -t whoami .
    # tag and push using your own Docker Hub account
    docker tag whoami kubernautslabs/whoami
    docker push kubernautslabs/whoami
    docker images | head

    We’ll define two different deployments, whoami and whoareyou, both using the containous/whoami container image.

    We’ll ask Kubernetes to deploy 2 replicas of whoami and 3 replicas of whoareyou.

    We’ll define two services, one for each set of Pods.
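    As a sketch, the Service for the whoami Pods could look like this (the service name and port are assumptions; the selector matches the app: whoami label used by the deployment):

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: whoami-service      # name is an assumption
    spec:
      selector:
        app: whoami             # routes to Pods carrying this label
      ports:
        - port: 80
          targetPort: 80        # whoami listens on port 80 by default
    ```

    The whoareyou service would look the same, with its own name and selector.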

    We’ll define Ingress objects that route traffic from the outside world to our services.

    We’ll use our Nginx Ingress Controller on our Rancher Cluster.
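    An Ingress for the whoami service might look roughly like this (the object name, hostname, and backend service name are all assumptions; only the Nginx ingress class comes from the text):

    ```yaml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: whoami-ingress              # name is an assumption
      annotations:
        kubernetes.io/ingress.class: nginx
    spec:
      rules:
        - host: whoami.example.com      # hostname is an assumption
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: whoami-service   # assumed service name
                    port:
                      number: 80
    ```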

    Explanations about the file content of whoami-deployment.yaml:

    We define a “deployment” (kind: Deployment)

    The name of the object is “whoami-deployment” (name: whoami-deployment)

    We want two replicas (replicas: 2)

    It will manage Pods that have the label app: whoami (selector: matchLabels: app: whoami)

    Then we define the Pods with the Pod template (template: …), which gives them the app: whoami label (metadata: labels: app: whoami)

    The Pods will host a container using the image containous/whoami (image: containous/whoami)
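    Putting those fields together, whoami-deployment.yaml looks roughly like this (the container name and port are assumptions; only the fields explained above come from the text):

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: whoami-deployment
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: whoami
      template:
        metadata:
          labels:
            app: whoami
        spec:
          containers:
            - name: whoami                # container name is an assumption
              image: containous/whoami
              ports:
                - containerPort: 80       # whoami listens on port 80 by default
    ```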

    Please run the command and talk to your trainer :-)
    • DNS based Service discovery with whereami kubia pod

      What We’ll Do

      We'll use a slightly extended Node.js app (a simple web server) from the book Kubernetes in Action by Marko Lukša, deployed in 2 different namespaces, ns1 and ns2, to demonstrate DNS-based service discovery.

      A service provides a Virtual IP (VIP) address, which means the Service IP is not bound to a physical network interface. A service acts like an internal load balancer in K8s! The magic of routing traffic through the VIP is implemented by iptables rules managed by kube-proxy!

      A service can be called through its FQDN in the form of:

      <service-name>.<namespace>.svc.cluster.local

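      For example, a service named kubia in namespace ns1 gets the FQDN shown below (a sketch; cluster.local is the default cluster domain and may differ in your cluster):

      ```shell
      # Build the FQDN of a Kubernetes service from its name and namespace.
      service=kubia
      namespace=ns1
      fqdn="${service}.${namespace}.svc.cluster.local"
      echo "$fqdn"    # kubia.ns1.svc.cluster.local
      ```

      A pod in ns2 can reach the kubia service in ns1 through exactly this name, while a pod in ns1 can use the short name kubia.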
      • Headless Services for Stickiness

        As we learned, services are exposed by default through the type ClusterIP; they work as an internal layer-4 load balancer and provide a VIP with a stable DNS address where clients can connect. The service forwards connections, via round robin, to one of the Pods backing the service.

        This works fine and is desired for stateless apps, which can connect to any of the pods and gain more performance through traffic distribution via load balancing.

        But in cases where stickiness is needed and clients must connect to a particular pod for session or data stickiness, we need to define our service without a ClusterIP, which is by default the head of the service (that's the VIP).

        To do that we define the service as a headless service; let's see that in action with the whereami service and our utils pod.

        In the following we expose the kubia deployment as a headless service by setting the ClusterIP to None, scale the deployment, and do a DNS query against both services with host kubia-headless and host kubia-clusterip from within the util client pod. As you'll see, our client pod always connects to the first IP from the DNS response if we curl the headless service. This means no load balancing happens; the call is sticky!

        The second curl to the service with ClusterIP does load balancing and distributes the traffic between pods.
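        A headless service is declared by setting clusterIP to None; a sketch for the kubia deployment (the selector label and ports are assumptions beyond what's stated above):

        ```yaml
        apiVersion: v1
        kind: Service
        metadata:
          name: kubia-headless
        spec:
          clusterIP: None          # headless: DNS returns the Pod IPs directly
          selector:
            app: kubia             # label is an assumption
          ports:
            - port: 80
              targetPort: 8080     # assumed app port
        ```

        The kubia-clusterip variant is identical except that the clusterIP: None line is removed, so Kubernetes allocates a VIP and load-balances across the Pods.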