Kubernetes and IBM Bluemix: How to deploy, manage, and secure your container-based workloads – Part 5


A single microservice is easy to write, and now easy to deploy, especially with the IBM Bluemix Container Service based on Kubernetes. But once you have dozens of microservices communicating with each other inside the Kubernetes cluster, it becomes hard to make sure they're not doing something they're not supposed to. Traditionally, a firewall on the edge of the network controls traffic entering and leaving the application, but with many more microservices that approach becomes much harder to manage.

In the IBM Bluemix Container Service, the Calico networking project can be used to create cluster-wide, application-aware network policies that distribute firewall rules across the worker nodes. We can put up mini-firewalls not only for traffic between Kubernetes pods, but also for each pod's ingress and egress traffic outside of the cluster. This complements an edge firewall, which doesn't necessarily know what's going on inside the VLAN.

This blog series is based on my team’s experience deploying our Microservices reference architecture; you can find the code for our simple storefront application on GitHub.

This post is a continuation of a series on the networking topology of the IBM Bluemix Container Service.

NetworkPolicy resources in Kubernetes – a firewall for your microservice

As we mentioned before, Calico is deployed in the Kubernetes cluster and is used by Kubernetes to distribute firewall rules to each worker node.

Initially, it's a good idea to isolate each Kubernetes namespace from the others on the network using a default network policy, as shown in the Kubernetes documentation. You may place different development teams in different namespaces and don't want them stepping on each other's toes, or in smaller deployments you might run development and QA in the same cluster but want to keep them network isolated. Turning on network isolation in a namespace configures Kubernetes to drop all incoming traffic for every pod in that namespace.
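For example, at the time of writing network isolation is switched on with a namespace annotation (a minimal sketch, assuming your namespace is named refarch like the one used later in this post):

# Drop all incoming traffic to pods in the refarch namespace (Kubernetes 1.6-era annotation)
kubectl annotate ns refarch \
  "net.beta.kubernetes.io/network-policy={\"ingress\": {\"isolation\": \"DefaultDeny\"}}"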

Once all incoming traffic for each pod is denied, you can create Kubernetes NetworkPolicy resources that whitelist incoming traffic to each pod based on label selectors. In our reference implementation, for example, we know the frontend Web BFF service talks to each of the microservices in our backend tier, so we can whitelist traffic between those pods.

IBM Bluemix Container Services / Kubernetes Cluster

Because pods get redeployed and restarted, Kubernetes uses label selectors instead of IP addresses or subnets to identify workloads. This lets the services scale without the firewall rules having to change. I have deployed the reference implementation to the refarch namespace, and labeled the Web BFF pods with tier=frontend and micro=web-bff, and the Catalog microservice pods with tier=backend and micro=catalog. The following NetworkPolicy YAML then allows the Web BFF pods to talk to the Catalog service pods on port 8081:

apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: allow-web-catalog
  namespace: refarch
spec:
  podSelector:
    matchLabels:
      micro: catalog
      tier: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          micro: web-bff
          tier: frontend
    ports:
    - protocol: TCP
      port: 8081
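
Assuming the policy above is saved as allow-web-catalog.yaml (the file name is just an example), it can be created and verified with kubectl:

# Create the policy and list the NetworkPolicy resources in the refarch namespace
kubectl apply -f allow-web-catalog.yaml
kubectl get networkpolicy -n refarch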

Similarly, only the Catalog microservice should be talking to its ElasticSearch backend, so a similar NetworkPolicy can be created.
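Here is a sketch of what that might look like, assuming the ElasticSearch pods are labelled micro=elasticsearch and tier=backend and listen on the default port 9200:

apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: allow-catalog-elasticsearch
  namespace: refarch
spec:
  podSelector:
    matchLabels:
      micro: elasticsearch
      tier: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          micro: catalog
          tier: backend
    ports:
    - protocol: TCP
      port: 9200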

Securing ingress traffic from outside the cluster using calicoctl

Kubernetes NetworkPolicy resources handle traffic between pods, but we can also use Calico directly to control ingress traffic from outside the cluster to a particular pod running in Kubernetes. The IBM Bluemix Container Service documentation shows how to install calicoctl to interact with the Calico components running on each worker node. The Kubernetes cluster is provisioned with some default network policies that drop most public traffic from the internet.
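Once calicoctl is pointed at the cluster as described in the documentation, you can list those pre-installed policies:

# List the Calico policies provisioned with the cluster, including the default drop rules
calicoctl get policy -o wide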

As mentioned in earlier blog posts, pods can run anywhere, so once again we use label selectors to identify the workload. Pods created in Kubernetes are automatically exposed in Calico as workload endpoints, which carry the same labels as the pods do in Kubernetes. In the microservices reference application, one of the front doors to our application is the Web BFF, an Angular and Node.js web app listening on port 8000.
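You can see those workload endpoints, and confirm your pods are represented, with calicoctl:

# Show the Calico workload endpoints that back the Kubernetes pods
calicoctl get workloadEndpoint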

Routing between the public network and the Kubernetes cluster

To allow all traffic to my Web BFF running in the refarch namespace, I created the following Calico Policy. This allows anybody to open a connection to the Web frontend on port 8000, making my Web BFF accessible from the internet.

apiVersion: v1
kind: policy
metadata:
  name: refarch.allow-external-web
spec:
  egress:
  - action: allow
    destination: {}
    source: {}
  ingress:
  - action: allow
    protocol: tcp
    source: {}
    destination:
      ports:
      - 8000
  order: 1000
  selector: calico/k8s_ns == 'refarch' && tier == 'frontend' && micro == 'web-bff'
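
Assuming the policy is saved as allow-external-web.yaml (again, the file name is illustrative), it is created with calicoctl rather than kubectl:

# Create or update the Calico policy that opens port 8000 on the Web BFF pods
calicoctl apply -f allow-external-web.yaml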

Securing egress traffic to external services with calicoctl

Not everything runs in a pod in Kubernetes, so we may want to utilize some managed Bluemix services or on-premise data sources outside of the cluster. However, since the microservices run in pods, we need some pod-aware network policy to allow only particular pods running in Kubernetes to open connections to the data sources.

The Calico documentation contains a policy to disable egress, which prevents pods from opening connections outside of the namespace (and by extension, the Internet). One notable exception is to allow outbound access to kube-dns, which is a set of pods running in the kube-system namespace. I added these policies to the refarch namespace to deny all outbound traffic outside of the namespace.
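The policies I used look roughly like the following sketch, based on the Calico documentation of the time; the names, order values, and exact rule shapes here are illustrative rather than exact. The first policy lets pods in the refarch namespace reach kube-dns, and the second drops any other connection opened to a destination outside the namespace:

apiVersion: v1
kind: policy
metadata:
  name: refarch.allow-kube-dns
spec:
  order: 500
  selector: calico/k8s_ns == 'refarch'
  egress:
  # Allow DNS lookups against kube-dns in the kube-system namespace
  - action: allow
    protocol: udp
    destination:
      selector: calico/k8s_ns == 'kube-system'
      ports:
      - 53

---

apiVersion: v1
kind: policy
metadata:
  name: refarch.deny-egress
spec:
  order: 600
  selector: calico/k8s_ns == 'refarch'
  egress:
  # Drop any other outbound connection to a destination outside the refarch namespace
  - action: deny
    destination:
      notSelector: calico/k8s_ns == 'refarch'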

In the reference application, I have a Customer microservice that stores some JSON data in CouchDB, which I wanted to move to the managed Cloudant offering on Bluemix.

Customer microservice with JSON data in CouchDB

Following the instructions on how to bind Bluemix services to clusters, I wanted to whitelist the Customer service so that it can talk to Cloudant on port 443. I resolved the Cloudant service to an IP address W.X.Y.Z and used a Calico policy to allow outgoing connections (and the incoming responses) to and from this IP address for the pods I've labelled as my Customer microservice:

apiVersion: v1
kind: policy
metadata:
  name: refarch.allow-customer-cloudantdb
spec:
  ingress:
  - action: allow
    protocol: tcp
    source: 
      net: W.X.Y.Z/32
      ports:
      - 443
    destination: {}
  egress:
  - action: allow
    protocol: tcp
    destination: 
      net: W.X.Y.Z/32
      ports:
      - 443
    source: {}
  order: 400
  selector: calico/k8s_ns == 'refarch' && tier == 'backend' && micro == 'customer'

Note that the order is 400, lower than the order of the rule that denies all egress traffic, so it is processed before the outbound traffic is dropped. We could use a similar rule to connect to on-premise services, for example, substituting the CIDR of the on-premise subnet instead.

Don’t throw away your hardware firewall just yet!

While network policies let us create powerful, application-aware rules, there is still a place for dedicated hardware firewalls in production to protect the front door of our applications from DDoS attacks. Calico lets us define network policies that control traffic at the pod level, but at the end of the day these are still iptables rules on worker nodes and may not be enough to keep the infrastructure from being overwhelmed with traffic.

Conclusion

Kubernetes and Calico provide nice abstractions for creating firewall rules around microservices running in the IBM Bluemix Container Service. This makes network operations and security more manageable at the cluster level, especially when developers are pushing out pods at the rapid pace that agile microservices architectures and Kubernetes make possible. However, there is still a place for hardware firewalls in your architecture to secure applications from attacks originating outside of your infrastructure.

For a broader introduction to microservices and other related topics, check out the Architecture Center in the IBM Garage Method.

This concludes our look at the IBM Bluemix Container Service and Application networking.
