62 changes: 46 additions & 16 deletions README.md
@@ -13,27 +13,56 @@
[![Go Report Card](https://goreportcard.com/badge/github.com/aws/aws-cloud-map-mcs-controller-for-k8s)](https://goreportcard.com/report/github.com/aws/aws-cloud-map-mcs-controller-for-k8s)

## Introduction
AWS Cloud Map multi-cluster service discovery for Kubernetes (K8s) is a controller that implements existing multi-cluster services API that allows services to communicate across multiple clusters. The implementation relies on [AWS Cloud Map](https://aws.amazon.com/cloud-map/) for enabling cross-cluster service discovery.
The AWS Cloud Map Multi-cluster Service Discovery Controller for Kubernetes (K8s) implements the Kubernetes [multi-cluster services API](https://github.com/kubernetes/enhancements/tree/master/keps/sig-multicluster/1645-multi-cluster-services-api) specification, which allows services to communicate across multiple clusters. The implementation relies on [AWS Cloud Map](https://aws.amazon.com/cloud-map/) for enabling cross-cluster service discovery.

See the demo from AWS Container Day x KubeCon!

[![Watch the video](https://img.youtube.com/vi/3f0Tv7IiQQw/0.jpg)](https://youtu.be/3f0Tv7IiQQw?t=24458)

## Usage
> ⚠ **There must be network connectivity (e.g. VPC peering, security group rules, ACLs) between clusters**: Undefined behavior may occur if the controller is set up without network connectivity between clusters.
## Installation

Perform the following installation steps on each participating cluster.

- For multi-cluster service discovery and consumption, the controller should be installed on a minimum of 2 EKS clusters.
- Participating clusters should be provisioned into a single AWS account, within a single AWS region.

### Dependencies

#### Network

> ⚠ **The AWS Cloud Map MCS Controller for K8s provides service discovery and communication across multiple clusters; implementations therefore depend on end-to-end network connectivity between the workloads provisioned in each participating cluster.**

- In deployment scenarios where participating clusters are provisioned into separate VPCs, connectivity depends on correctly configured [VPC Peering](https://docs.aws.amazon.com/vpc/latest/peering/create-vpc-peering-connection.html), [inter-VPC routing](https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-routing.html), and security groups. The [VPC Reachability Analyzer](https://docs.aws.amazon.com/vpc/latest/reachability/getting-started.html) can be used to test and validate end-to-end connectivity between worker nodes in each cluster (see the CLI sketch below).
- Undefined behavior may occur if controllers are deployed without the required network connectivity between clusters.
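
As a quick end-to-end check, the Reachability Analyzer can also be driven from the AWS CLI. The following is an illustrative sketch only: the instance IDs and port are placeholders to be replaced with worker nodes from two different clusters and the port your workloads actually use.

```sh
# Illustrative only: replace the instance IDs with worker nodes from two different clusters.
SRC_INSTANCE=i-0123456789abcdef0   # worker node in cluster 1 (placeholder)
DST_INSTANCE=i-0fedcba9876543210   # worker node in cluster 2 (placeholder)

# Define a path between the two nodes on the port your service listens on (8080 assumed here).
PATH_ID=$(aws ec2 create-network-insights-path \
  --source "$SRC_INSTANCE" --destination "$DST_INSTANCE" \
  --protocol tcp --destination-port 8080 \
  --query 'NetworkInsightsPath.NetworkInsightsPathId' --output text)

# Run the analysis, then inspect the result once it completes (NetworkPathFound should be true).
ANALYSIS_ID=$(aws ec2 start-network-insights-analysis \
  --network-insights-path-id "$PATH_ID" \
  --query 'NetworkInsightsAnalysis.NetworkInsightsAnalysisId' --output text)
aws ec2 describe-network-insights-analyses \
  --network-insights-analysis-ids "$ANALYSIS_ID" \
  --query 'NetworkInsightsAnalyses[0].NetworkPathFound'
```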

#### Configure CoreDNS

### Setup clusters
Install the CoreDNS multicluster plugin in each participating cluster. The multicluster plugin enables CoreDNS to manage the lifecycle of DNS records for `ServiceImport` objects.

First, install the controller with latest release on at least 2 AWS EKS clusters. Nodes must have sufficient IAM permissions to perform CloudMap operations.
To install the plugin, run the following commands.

> **_NOTE:_** The AWS region can _optionally_ be set via an environment variable, e.g. `export AWS_REGION=us-west-2`. Otherwise, the controller infers the region in the following order: the `AWS_REGION` environment variable, the `~/.aws/config` file, then EC2 metadata (in an EKS environment).
```bash
kubectl apply -k "github.com/aws/aws-cloud-map-mcs-controller-for-k8s/samples/coredns-clusterrole.yaml
kubectl apply -k "github.com/aws/aws-cloud-map-mcs-controller-for-k8s/samples/coredns-configmap.yaml
kubectl apply -k "github.com/aws/aws-cloud-map-mcs-controller-for-k8s/samples/coredns-deployment.yaml
```
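
After applying the manifests, it can be worth confirming that CoreDNS is now running the multicluster-enabled image from the sample deployment (the image tag below matches `samples/coredns-deployment.yaml` at the time of writing).

```sh
# Confirm the CoreDNS deployment is using the multicluster-enabled image
# (the sample deployment pins ghcr.io/aws/aws-cloud-map-mcs-controller-for-k8s/coredns-multicluster/coredns:v1.8.4).
kubectl -n kube-system get deployment coredns \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'

# Verify the rollout completed and the replicas are ready.
kubectl -n kube-system rollout status deployment/coredns
```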

### Install Controller

To install the latest release of the controller, run the following commands.

> **_NOTE:_** The AWS region can _optionally_ be set via an environment variable, e.g. `export AWS_REGION=us-west-2`. Otherwise, the controller infers the region in the following order: the `AWS_REGION` environment variable, the `~/.aws/config` file, then EC2 metadata (in an EKS environment).

```sh
kubectl apply -k "github.com/aws/aws-cloud-map-mcs-controller-for-k8s/config/controller_install_release"
```

> 📌 See [Releases](#Releases) section for details on how to install other versions.

The controller must have sufficient IAM permissions to perform the required Cloud Map operations. Grant the `AWSCloudMapFullAccess` IAM policy to the controller's Service Account so that the controller can manage Cloud Map resources.
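
On EKS, one way to grant this access is IAM Roles for Service Accounts (IRSA) via `eksctl`, assuming an IAM OIDC provider is already associated with the cluster. The following is a sketch only: the cluster name, namespace, and service account name are assumptions; verify the names actually created by the install manifest before running it.

```sh
# Sketch only: the cluster name, namespace, and service account name below are assumptions.
# Verify the actual names created by the install manifest, e.g.:
#   kubectl get serviceaccounts -A | grep cloud-map
eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace cloud-map-mcs-system \
  --name cloud-map-mcs-controller-manager \
  --attach-policy-arn arn:aws:iam::aws:policy/AWSCloudMapFullAccess \
  --override-existing-serviceaccounts \
  --approve
```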

## Usage

### Export services

Assuming you already have a Service deployed, apply a `ServiceExport` YAML in the cluster from which you want to export that service. Do this for each service you want to export.
@@ -55,17 +84,18 @@ metadata:
name: my-amazing-service
```
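
For reference, a complete `ServiceExport` can also be applied inline with a heredoc. This is a minimal sketch that assumes the Service is named `my-amazing-service` and lives in the `example` namespace used by the samples; adjust both to match your Service.

```sh
# Minimal sketch: assumes a Service named my-amazing-service in the example namespace.
cat <<EOF | kubectl apply -f -
kind: ServiceExport
apiVersion: multicluster.x-k8s.io/v1alpha1
metadata:
  namespace: example
  name: my-amazing-service
EOF
```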

*See the `samples` directory for a set of example yaml files to set up a service and export it. To apply the sample files run*
*See the `samples` directory for a set of example yaml files to set up a service and export it. To apply the sample files run the following commands.*

```sh
kubectl create namespace example
kubectl apply -f https://raw.githubusercontent.com/aws/aws-cloud-map-mcs-controller-for-k8s/main/samples/example-deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/aws/aws-cloud-map-mcs-controller-for-k8s/main/samples/example-service.yaml
kubectl apply -f https://raw.githubusercontent.com/aws/aws-cloud-map-mcs-controller-for-k8s/main/samples/example-serviceexport.yaml
```

### Import services

In your other cluster, the controller will automatically sync services registered in AWS Cloud Map by applying the appropriate `ServiceImport`. To list them all, run
In your other cluster, the controller will automatically sync services registered in AWS Cloud Map by applying the appropriate `ServiceImport`. To list them all, run the following command.
```sh
kubectl get ServiceImport -A
```
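
Once a `ServiceImport` exists, consumers can resolve it through the `clusterset.local` zone served by the CoreDNS multicluster plugin configured above. The lookup below is a sketch that assumes the sample service name and namespace; substitute your own.

```sh
# Sketch: resolve an imported service via the clusterset.local zone
# (assumes the sample service my-amazing-service in the example namespace).
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup my-amazing-service.example.svc.clusterset.local
```
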
@@ -76,24 +106,24 @@ AWS Cloud Map MCS Controller for K8s adheres to the [SemVer](https://semver.org/

> **_NOTE:_** The AWS region can _optionally_ be set via an environment variable, e.g. `export AWS_REGION=us-west-2`. Otherwise, the controller infers the region in the following order: the `AWS_REGION` environment variable, the `~/.aws/config` file, then EC2 metadata (in an EKS environment).

To install from a release run
The following command format is used to install from a particular release.
```sh
kubectl apply -k "github.com/aws/aws-cloud-map-mcs-controller-for-k8s/config/controller_install_release[?ref=*git version tag*]"
```

Example to install latest release
Run the following command to install the latest release.
```sh
kubectl apply -k "github.com/aws/aws-cloud-map-mcs-controller-for-k8s/config/controller_install_release"
```

Example to install v0.1.0
The following example will install release v0.1.0.
```sh
kubectl apply -k "github.com/aws/aws-cloud-map-mcs-controller-for-k8s/config/controller_install_release?ref=v0.1.0"
```

We also maintain a `latest` tag, which is updated to stay in line with the `main` branch. We **do not** recommend installing this on any production cluster, as new major versions landing on the `main` branch can introduce breaking changes.

To install from `latest` tag run
To install from `latest` tag run the following command.
```sh
kubectl apply -k "github.com/aws/aws-cloud-map-mcs-controller-for-k8s/config/controller_install_latest"
```
@@ -109,4 +139,4 @@ Join the channel with this [invite](https://join.slack.com/t/awsappmesh/shared_i

This project is distributed under the
[Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0),
see [LICENSE](https://github.com/aws/aws-cloud-map-mcs-controller-for-k8s/blob/main/LICENSE) and [NOTICE](https://github.com/aws/aws-cloud-map-mcs-controller-for-k8s/blob/main/NOTICE) for more information.
58 changes: 58 additions & 0 deletions samples/coredns-clusterrole.yaml
@@ -0,0 +1,58 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
eks.amazonaws.com/component: coredns
k8s-app: kube-dns
kubernetes.io/bootstrapping: rbac-defaults
name: system:coredns
rules:
- apiGroups:
- ""
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- create
- get
- list
- patch
- update
- watch
- apiGroups:
- multicluster.x-k8s.io
resources:
- serviceimports
verbs:
- create
- get
- list
- patch
- update
- watch
- apiGroups:
- multicluster.x-k8s.io
resources:
- serviceexports
verbs:
- create
- get
- list
- patch
- update
- watch
26 changes: 26 additions & 0 deletions samples/coredns-configmap.yaml
@@ -0,0 +1,26 @@
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health
multicluster clusterset.local
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
kind: ConfigMap
metadata:
annotations:
labels:
eks.amazonaws.com/component: coredns
k8s-app: kube-dns
name: coredns
namespace: kube-system
137 changes: 137 additions & 0 deletions samples/coredns-deployment.yaml
@@ -0,0 +1,137 @@
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
eks.amazonaws.com/component: coredns
k8s-app: kube-dns
kubernetes.io/name: CoreDNS
name: coredns
namespace: kube-system
spec:
progressDeadlineSeconds: 600
replicas: 2
revisionHistoryLimit: 10
selector:
matchLabels:
eks.amazonaws.com/component: coredns
k8s-app: kube-dns
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
annotations:
eks.amazonaws.com/compute-type: ec2
creationTimestamp: null
labels:
eks.amazonaws.com/component: coredns
k8s-app: kube-dns
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/os
operator: In
values:
- linux
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- arm64
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: k8s-app
operator: In
values:
- kube-dns
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- args:
- -conf
- /etc/coredns/Corefile
image: ghcr.io/aws/aws-cloud-map-mcs-controller-for-k8s/coredns-multicluster/coredns:v1.8.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 5
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: coredns
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /health
port: 8080
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/coredns
name: config-volume
readOnly: true
- mountPath: /tmp
name: tmp
dnsPolicy: Default
priorityClassName: system-cluster-critical
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: coredns
serviceAccountName: coredns
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
- key: CriticalAddonsOnly
operator: Exists
volumes:
- emptyDir: {}
name: tmp
- configMap:
defaultMode: 420
items:
- key: Corefile
path: Corefile
name: coredns
name: config-volume