System requirements

Node requirements

  • AMD64 processor

  • Linux kernel 3.10 or later with required dependencies. The following distributions ship the required kernel and its dependencies, and are known to work well with Calico and Kubernetes; a quick kernel version check is sketched after this list.

    • Red Hat Enterprise Linux (RHEL) 7
    • CentOS 7
    • CoreOS Container Linux stable
    • Ubuntu 16.04
    • Debian 8
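
To verify the kernel version on a host, a minimal check is:

    # Print the running kernel version; anything 3.10 or later satisfies the requirement
    uname -r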

Key/value store

Calico master requires a key/value store accessible by all Calico components. On Kubernetes, you can configure Calico to access an etcdv3 cluster directly or to use the Kubernetes API datastore.
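
As an illustrative sketch only: Calico components such as calicoctl select the datastore through environment variables. The variable names below (DATASTORE_TYPE, KUBECONFIG, ETCD_ENDPOINTS) reflect common Calico v3 configuration, and the etcd endpoint is a placeholder; confirm the details against the datastore configuration documentation for your version.

    # Use the Kubernetes API datastore...
    export DATASTORE_TYPE=kubernetes
    export KUBECONFIG=~/.kube/config

    # ...or point directly at an etcdv3 cluster (placeholder endpoint)
    export DATASTORE_TYPE=etcdv3
    export ETCD_ENDPOINTS=http://10.0.0.1:2379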

Network requirements

Ensure that your hosts and firewalls allow the necessary traffic based on your configuration.

Configuration                                      | Host(s)             | Connection type | Port/protocol
---------------------------------------------------|---------------------|-----------------|------------------------------------------------------
Calico networking (BGP)                            | All                 | Bidirectional   | TCP 179
Calico networking with IP-in-IP enabled (default)  | All                 | Bidirectional   | IP-in-IP, often represented by its protocol number 4
Calico networking with Typha enabled               | Typha agent hosts   | Incoming        | TCP 5473 (default)
flannel networking (VXLAN)                         | All                 | Bidirectional   | UDP 4789
All                                                | kube-apiserver host | Incoming        | Often TCP 443 or 6443*
etcd datastore                                     | etcd hosts          | Incoming        | Officially TCP 2379 but can vary

* The value passed to kube-apiserver using the --secure-port flag. If you cannot locate this, check the targetPort value returned by kubectl get svc kubernetes -o yaml.
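
As a hedged example only, the commands below show one way to open the ports in the table above using iptables on a host; adapt them to your firewall tooling and omit rules for configurations you do not use.

    # Allow BGP between Calico nodes (TCP 179)
    iptables -A INPUT -p tcp --dport 179 -j ACCEPT
    # Allow IP-in-IP encapsulated traffic (IP protocol number 4)
    iptables -A INPUT -p 4 -j ACCEPT
    # Allow incoming connections to Typha (TCP 5473), if Typha is enabled
    iptables -A INPUT -p tcp --dport 5473 -j ACCEPT
    # Allow VXLAN traffic (UDP 4789), if using flannel networking
    iptables -A INPUT -p udp --dport 4789 -j ACCEPT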

Kubernetes requirements

Supported versions

We test Calico master against the following Kubernetes versions.

  • 1.9
  • 1.10
  • 1.11

Other versions are likely to work, but we do not actively test Calico master against them.

Application Layer Policy requires Kubernetes 1.9 or later.
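
To see which Kubernetes version your cluster is running, a quick check is:

    # Reports the client and server versions; the server version is what matters here
    kubectl version --short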

CNI plug-in enabled

Calico is installed as a CNI plugin. The kubelet must be configured to use CNI networking by passing the --network-plugin=cni argument. (On kubeadm, this is the default.)
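
A quick, non-authoritative way to confirm this on a node is to inspect the kubelet's command line; the file path mentioned below is the usual kubeadm location and may differ in your environment.

    # Check whether the running kubelet was started with --network-plugin=cni
    ps -ef | grep -v grep | grep kubelet | tr ' ' '\n' | grep network-plugin
    # On kubeadm clusters the flag is typically set in the kubelet systemd drop-in,
    # for example /etc/systemd/system/kubelet.service.d/10-kubeadm.conf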

Other network providers

Calico must be the only network provider in each cluster. We do not currently support migrating a cluster with another network provider to use Calico networking.

Supported kube-proxy modes

Calico supports the following kube-proxy modes:

  • iptables (default)
  • ipvs (requires Kubernetes 1.9.3 or later)
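
As an illustrative check, assuming kube-proxy is configured through a ConfigMap named kube-proxy in the kube-system namespace (as on kubeadm clusters), you can see which mode is configured:

    # An empty or missing mode falls back to the default (iptables)
    kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"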

IP pool configuration

The IP range selected for pod IP addresses cannot overlap with any other IP ranges in your network, including:

  • The Kubernetes service cluster IP range
  • The range from which host IPs are allocated

Our manifests default to 192.168.0.0/16 for the pod IP range except Canal/flannel, which defaults to 10.244.0.0/16. Refer to Configuring the pod IP range for information on modifying the defaults.
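
As a hedged sketch of how to check for overlaps, the grep targets below assume the relevant flags appear in the API server and controller manager command lines captured by cluster-info dump (as on kubeadm clusters):

    # Service cluster IP range configured on the kube-apiserver
    kubectl cluster-info dump | grep -m 1 service-cluster-ip-range
    # Cluster (pod) CIDR configured on the controller manager, if set
    kubectl cluster-info dump | grep -m 1 cluster-cidr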

Mutating webhooks

Application Layer Policy requires the MutatingAdmissionWebhook admission controller to be enabled on the kube-apiserver.
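
One way to confirm this, assuming the API server runs as a static pod whose manifest lives at the usual kubeadm path, is:

    # Kubernetes 1.10+ uses --enable-admission-plugins; 1.9 uses --admission-control
    grep -E "enable-admission-plugins|admission-control" /etc/kubernetes/manifests/kube-apiserver.yaml
    # The admissionregistration API group should also be served
    kubectl api-versions | grep admissionregistration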

Kernel dependencies

Tip: If you are using one of the recommended distributions, you will already satisfy these; a quick way to check for them is sketched after this list.

  • nf_conntrack_netlink subsystem
  • ip_tables (for IPv4)
  • ip6_tables (for IPv6)
  • ip_set
  • xt_set
  • ipt_set
  • ipt_rpfilter
  • ipt_REJECT
  • ipip (if using Calico networking)
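
As a rough check, assuming these are built as loadable modules rather than compiled into the kernel, you can ask the host which of them are loaded or loadable:

    # Modules currently loaded
    lsmod | grep -E "ip_tables|ip6_tables|ip_set|xt_set|ipip"
    # modprobe exits non-zero if a module cannot be found or loaded
    modprobe ipip && echo "ipip available"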