Service Graph Connector for AWS - Amazon EKS Integration

Summary
This article describes the Service Graph Connector for AWS (SGC-AWS) integration with Amazon EKS and the steps required to configure it on the AWS side. Before you set up the SGC-AWS and EKS integration, please read our introduction article.

Release
Service Graph Connector for AWS v2.2

Instructions

Table of Contents
- Amazon EKS Integration
  - Introduction
  - Architecture
  - Core Components
    - AWS Systems Manager
    - Kubectl
    - EKS Cluster
- Setup Steps
  - 1. Security - ServiceNow user EKS API access
  - 2. Security - EKS Role setup
  - 3. EC2 Bastion Host Setup
  - 4. Register EC2 instance details in SG-AWS Guided Setup
- Functional Spec
- FAQ
  - Why do we need multiple bastion hosts (EC2)?
- References

Amazon EKS Integration

Introduction
AWS does not provide a direct API to pull EKS details such as Pods, Nodes, Services, and Volumes into the CMDB, due to the cluster's security model. After exploring various options, it was decided to use an EC2 bastion host. You need a bastion host in your AWS environment that has the kubeconfig setup in place. This Linux/Unix-based EC2 instance can be one of your existing EC2 instances (shared with other applications) or a small dedicated instance.

Architecture
The SGC-AWS import schedule invokes the SSM SendCommand API, which initiates the kubectl command execution. Systems Manager executes the SSM document on an EC2 instance. The kubectl commands invoke the EKS cluster APIs and write the output to the terminal. The terminal output is then transferred to an S3 bucket. SG-AWS reads the output from the S3 bucket and parses it to populate CIs; afterwards, it deletes the file from the S3 bucket. (An illustrative invocation is sketched after the setup-steps overview below.)

Core Components

AWS Systems Manager
Systems Manager is used by SGC-AWS to issue an on-demand request to an EC2 instance to execute Kubernetes commands. This is similar to the existing SSM --> EC2 integration used for TCP/running-process data collection. We provide SSM document scripts along with a guided setup, and these scripts can be deployed in AWS SSM. Our application invokes the SSM document scripts, which contain the kubectl commands needed to get the required Kubernetes information. As part of this integration, we share two SSM document scripts:
- SG-AWS-RunKubeCtlEKSNamesShellScript.yml
- SG-AWS-RunKubeCtlShellScript.yml

Please note that for this release the EKS SendCommand documents provided only work with IMDSv1 enabled. Support for IMDSv2 is planned for a future release. Therefore, when configuring EC2 bastion instances, please make sure IMDSv1 is enabled. We have attached a few documents in the References section.

Kubectl
The kubectl command-line tool is used to run commands against the EKS clusters to get the desired details. It must be installed on the EC2 instance that is used to get the EKS details. When the kubeconfig file is created, it contains all of the cluster configuration details. The following kubectl commands are executed to get the details:
- kubectl config view
- kubectl config get-contexts
- kubectl get nodes
- kubectl get services
- kubectl get pods
- kubectl get deployments

EKS Cluster
Your clusters may have private or public endpoints. To access a private EKS cluster, the EC2 instance needs to be in the same VPC; hence you need to configure one EC2 instance per VPC. The integration needs read-only access in the EKS cluster, and the read-only cluster role needs to be mapped to the AWS IAM role, as described in Setup Step 2.

Setup Steps
1. Security - ServiceNow user EKS API access
2. Security - EKS Role setup
3. EC2 Bastion Host Setup
4. Register EC2 instance details in SG-AWS Guided Setup
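Before going through the setup steps, the sketch below illustrates the Run Command flow described in the Architecture section above. It is illustrative only: the instance ID, bucket name, and key prefix are hypothetical, the SSM document is assumed to be registered under its file name without the .yml extension, and any parameters the real documents require are omitted.

```bash
# Illustrative sketch only - not the connector's actual implementation.
# Invoke the shared SSM document on the bastion host and send the terminal
# output to an S3 bucket (instance ID, bucket, and prefix are placeholders).
COMMAND_ID=$(aws ssm send-command \
  --document-name "SG-AWS-RunKubeCtlShellScript" \
  --instance-ids "i-1234567890abcdef0" \
  --output-s3-bucket-name "sgc-aws-deep-discovery-bucket" \
  --output-s3-key-prefix "eks-output" \
  --query "Command.CommandId" --output text)

# Check the invocation status; the connector then reads the full output from
# the S3 bucket, parses it into CIs, and deletes the file.
aws ssm get-command-invocation \
  --command-id "$COMMAND_ID" \
  --instance-id "i-1234567890abcdef0" \
  --query "Status"
```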
1. Security - ServiceNow user EKS API access
ServiceNow uses an Assume Role which has read-only access. In the case of EKS API access, the API calls are made from the EC2 bastion host. To access the EKS APIs, add the policy below to the ServiceNow user. This access is needed to discover the clusters in the network, create the kubeconfig, and make EKS API calls.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster",
        "eks:ListClusters"
      ],
      "Resource": "*"
    }
  ]
}
```

2. Security - EKS Role setup
The integration needs read-only access to your clusters. Below are the Cluster Role, Cluster Role Binding, and AWS IAM role-to-cluster-role mapping details. These should be applied in each EKS cluster.

Cluster Role:

```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: eks-ro-group-cluster-role
rules:
- apiGroups:
  - ""
  - "*"
  - apps
  - extensions
  - batch
  - autoscaling
  resources: ["*"]
  verbs: ["get", "watch", "list", "describe"]
```

Cluster Role Binding:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-ro-group-binding
subjects:
- kind: Group
  name: snow.group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: eks-ro-group-cluster-role
  apiGroup: rbac.authorization.k8s.io
```

AWS IAM role mapping (mapRoles entry):

```yaml
mapRoles: |
  - groups:
    - snow.group
    rolearn: arn:aws:iam::#EC2ACCNUMBER#:role/#InstanceProfileRoleName#
    username: AmazonSSMRoleForInstances
```

Where:
- #EC2ACCNUMBER# - the account number where the bastion host is hosted.
- #InstanceProfileRoleName# - the instance profile attached to the EC2 instance.

Example:

```yaml
mapRoles: |
  - groups:
    - snow.group
    rolearn: arn:aws:iam::1234567890:role/AmazonSSMRoleForInstances
    username: AmazonSSMRoleForInstances
```

3. EC2 Bastion Host Setup
You can spin up a small (t2.nano/t2.micro) Unix/Linux-based EC2 instance, or use an existing EC2 instance shared with other applications. Make sure the following setup is done on the EC2 instance:
- SSM Agent installed on the EC2 instance.
- AmazonSSMManagedInstanceCore role attached to the EC2 instance.
- S3 bucket access instance profile attached to the EC2 instance.
- Kubectl installed.
- Kubeconfig setup.

Operational Aspects
You need to manage the security, upgrades, patching, etc. of the EC2 instance as per your company standards.

3.1 SSM Agent
For the connector to communicate via AWS Systems Manager, the SSM Agent must be installed on the EC2 instance. Most EC2 instances have the SSM Agent pre-installed.

3.2 AmazonSSMManagedInstanceCore role attached to EC2
For Systems Manager to communicate with the EC2 instance, the 'AmazonSSMManagedInstanceCore' role must be attached to the EC2 instance. You can refer to our existing CloudFormation (CFT) script - AmazonSSMForInstancesRoleSetup.yml.

3.3 S3 bucket access instance profile attached to EC2
Because the output of the kubectl commands is large, the terminal output is published to an S3 bucket. You can use the S3 bucket defined for deep discovery. Follow the steps described in "AWS Systems Manager for getting EC2 System Information (aka Deep Discovery)".

3.4 Install Kubectl
You need to install the kubectl command-line tool on the instance.

3.5 Kubeconfig Setup
'SG-AWS-RunKubeCtlEKSNamesShellScript' scans the available EKS clusters in the network using eks list-clusters. With this list, the script creates the kubeconfig file at '/root/.kube/config'. Using this context, 'SG-AWS-RunKubeCtlShellScript' executes the kubectl commands and gets the pod, node, and deployment details.
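For reference, the commands below sketch roughly what this kubeconfig setup amounts to on the bastion host. This is not the contents of the shipped SSM documents, which may differ; the region and cluster name are examples only.

```bash
# Illustrative sketch only - roughly what the kubeconfig setup performs on the
# bastion host; the shipped SSM documents may differ.

# Discover the clusters reachable from this host (requires the
# eks:ListClusters / eks:DescribeCluster permissions from section 1).
aws eks list-clusters --region us-east-1 --query "clusters[]" --output text

# Add an entry for one cluster to the kubeconfig (written to /root/.kube/config
# when run as root); the cluster name here is an example.
aws eks update-kubeconfig --name itxlab-cloud-eks --region us-east-1

# Confirm that the read-only mapping from section 2 works.
kubectl config get-contexts
kubectl get nodes
kubectl get pods --all-namespaces
```

If the kubectl commands above return the cluster's nodes and pods, the instance profile and the read-only RBAC mapping are set up correctly.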
4. Register EC2 instance details in SG-AWS Guided Setup
So that the connector knows where the kubectl setup has been done, provide the following details in a spreadsheet, which can be uploaded in the guided setup. This data is stored in the sn_aws_integ_sg_aws_eks_ec2_resourceids table.

| EKS Account | EKS AWS Region | Resource ID         |
|-------------|----------------|---------------------|
| EKACC1      | us-east-1      | i-1234567890abcdef0 |
| EKACC1      | us-east-1      | i-1234567890abcdef1 |
| EKACC1      | us-east-2      | i-1234567890abcdef3 |
| EKACC2      | us-east-1      | i-1234567890abcdef4 |

Once the scripts are executed, the master table (sn_aws_integ_sg_aws_eks_master) is populated; it holds a one-to-one mapping between each cluster and its source.

| Account | AWS Region | Cluster Name | EKS Server URL | Resource ID | EKS Account ID | EKS Region |
|---------|------------|--------------|----------------|-------------|----------------|------------|
| ACC1 | us-east-1 | itxlab-cloud-eks | https://endpoint1.gr7.us-east-1.eks.amazonaws.com | i-1234567890abcdef0 | EKACC1 | us-east-1 |
| ACC1 | us-east-1 | itxlab-cloud-eks1 | https://endpoint2.gr7.us-east-1.eks.amazonaws.com | i-1234567890abcdef1 | EKACC1 | us-east-1 |
| ACC1 | us-east-2 | itxlab-cloud-eks2 | https://endpoint3.gr7.us-east-2.eks.amazonaws.com | i-1234567890abcdef3 | EKACC1 | us-east-2 |
| ACC2 | us-east-1 | itxlab-cloud-eks | https://endpoint4.gr7.us-east-1.eks.amazonaws.com | i-1234567890abcdef4 | EKACC2 | us-east-1 |

Functional Spec
As part of this integration, the following EKS details are pulled into the CMDB.

| # | Name | CI Name |
|---|------|---------|
| 1 | Kubernetes Cluster | cmdb_ci_kubernetes_cluster |
| 2 | Kubernetes Service | cmdb_ci_kubernetes_service |
| 3 | Kubernetes Node | cmdb_ci_kubernetes_node |
| 4 | Kubernetes Pod | cmdb_ci_kubernetes_pod |
| 5 | Kubernetes Volume | cmdb_ci_kubernetes_volume |
| 6 | Kubernetes Deployment | cmdb_ci_kubernetes_deployment |
| 7 | Kubernetes DaemonSet | cmdb_ci_kubernetes_daemonset |
| 8 | Kubernetes Namespace | cmdb_ci_kubernetes_namespace |
| 9 | Kubernetes Component | cmdb_ci_kubernetes_component |
| 10 | Kubernetes Workload | cmdb_ci_kubernetes_workload |
| 11 | Docker Container | cmdb_ci_docker_container |
| 12 | Docker Image | cmdb_ci_docker_image |

For more details on the attributes, refer to this article.

FAQ

Why do we need multiple bastion hosts (EC2)?
- EKS endpoints can be private or public. To access a private EKS cluster, the EC2 instance needs to be in the same VPC; hence a bastion host must be set up locally in each VPC where EKS is installed.
- On the other hand, if many or all of your endpoints are public, a single bastion host can talk to all of the public-endpoint EKS clusters.
- If the VPCs in an account are connected via VPC peering or a Transit Gateway, one bastion host is enough to communicate with all of the EKS clusters in those VPCs.

References
- https://aws.amazon.com/premiumsupport/knowledge-center/amazon-eks-cluster-access/
- To learn more about how to configure IMDS:
  - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-IMDS-existing-instances.html#modify-restore-IMDSv1
  - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-options.html
  - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html