SG-AWS Deep Discovery - How It Works

Introduction

This article describes the prerequisites and setup, how it runs, and the debugging steps required for running Deep Discovery in Service Graph Connector for AWS.

Pre-Requisites and Setup

For a successful run of SG-AWS Deep Discovery, we need the following prerequisites set up in AWS.

1. AWS Systems Manager Setup

AWS Systems Manager Inventory is not enabled by default; we need to enable the Inventory setup to import the applications deployed in the EC2 instances. AWS Systems Manager provides features for the DevOps team. As part of the SG-AWS connector integration, we use AWS Systems Manager Inventory to collect the software installed in the EC2 instances. To enable this feature, AWS Systems Manager needs access to the EC2 instance; hence, we need to create an EC2 instance profile and attach it to the EC2 instance. If you would like to get Serial Number, TCP, and Process information, you need to attach additional privileges to the instance profile. This is discussed in the next section, AWS SSM SendCommand Setup.

For a successful ServiceNow - AWS Systems Manager Inventory integration, you need to have these setups completed:

- SSM Agent installed in the EC2 instance. The SSM Agent is deployed by default in some AWS AMIs.
- Instance profile created with the IAM policy AmazonSSMManagedInstanceCore.
- SSM instance profile attached to the EC2 instances.
- AWS Systems Manager Inventory setup enabled.

1.1 Prerequisites

- SSM Agent is installed in all EC2 instances. Please refer to the AWS documentation for more details.
- The SSM instance profile is attached to the EC2 instances. Please refer to the AWS documentation for more details.

1.2 AWS Systems Manager Inventory Setup

AWS provides an automation script, available as part of AWS Systems Manager Automation, which sets up AWS Systems Manager Inventory in an account. AWS has also provided a one-step operation where you can set up Systems Manager from a management account to all of the member accounts. To do that, you need to set up special IAM privileges for an admin user to execute from the management account; more details are found here. If you would like to set up AWS Systems Manager Inventory from a management account to member accounts, you need the IAM setup described below for the admin. If you have already set up AWS Systems Manager Inventory in another way, you can skip this section.
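If you prefer scripting over the console, here is a minimal boto3 sketch of kicking off the AWS-SetupInventory runbook from the management account across member accounts. The account IDs, region, and concurrency values are placeholders, and the runbook's own parameters are left at their defaults; check the runbook documentation for the full parameter list.

    # Minimal sketch: run the AWS-SetupInventory runbook across member
    # accounts/regions from the management account. All values are placeholders.
    import boto3

    ssm = boto3.client("ssm", region_name="us-east-1")  # example region

    response = ssm.start_automation_execution(
        DocumentName="AWS-SetupInventory",
        TargetLocations=[
            {
                "Accounts": ["123456789001", "123456789002"],  # placeholder member accounts
                "Regions": ["us-east-1"],
                "ExecutionRoleName": "AWS-SystemsManager-AutomationExecutionRole",
                "TargetLocationMaxConcurrency": "10",
                "TargetLocationMaxErrors": "100%",
            }
        ],
    )
    print("Automation execution id:", response["AutomationExecutionId"])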
1.3 AWS-SystemsManager-AutomationAdministrationRole.yml

This script needs to be executed in the management account; it gives the admin the privileges to execute the automation script.

1. Login to the management account as an admin.
2. Navigate to the CloudFormation page.
3. Click on View Stack.
4. Click on Create Stack.
5. Under Prerequisite - Prepare template, select Template is ready.
6. Under Specify template, select Upload a template file.
7. Choose the AWS-SystemsManager-AutomationAdministrationRole.yml downloaded earlier and click Next.
8. On the Configure stack options page, leave the default values and select Next.
9. Review the changes, check the acknowledgment box at the bottom of the page, and click Create stack.

    AWSTemplateFormatVersion: 2010-09-09
    Description: >-
      Configure the AWS-SystemsManager-AutomationAdministrationRole to enable use of
      AWS Systems Manager Cross Account/Region Automation execution.
    Resources:
      AWSSMAutomationAdministrationRole:
        Type: 'AWS::IAM::Role'
        Properties:
          RoleName: AWS-SystemsManager-AutomationAdministrationRole
          AssumeRolePolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Principal:
                  Service: ssm.amazonaws.com
                Action:
                  - 'sts:AssumeRole'
          Path: /
          Policies:
            - PolicyName: AssumeRole-AWSSystemsManagerAutomationExecutionRole
              PolicyDocument:
                Version: 2012-10-17
                Statement:
                  - Effect: Allow
                    Action:
                      - 'sts:AssumeRole'
                    Resource: !Sub >-
                      arn:${AWS::Partition}:iam::*:role/AWS-SystemsManager-AutomationExecutionRole
                  - Effect: Allow
                    Action:
                      - 'organizations:ListAccountsForParent'
                    Resource:
                      - '*'

1.4 AWS-SystemsManager-AutomationExecutionRole.yml

This script needs to be executed in all member account(s); it gives the admin the privileges to execute the automation script in the member accounts.

1. Login to the member account as an admin.
2. Navigate to the CloudFormation page.
3. Click on View Stack.
4. Click on Create Stack.
5. Under Prerequisite - Prepare template, select Template is ready.
6. Under Specify template, select Upload a template file.
7. Choose the AWS-SystemsManager-AutomationExecutionRole.yml downloaded earlier and click Next.
8. Under Parameters, enter the Account ID of the management (or designated) account from which the Systems Manager Automation will be initiated, and click Next.
9. On the Configure stack options page, leave the default values and select Next.
10. Review the changes, check the acknowledgment box at the bottom of the page, and click Create stack.

    Parameters:
      MasterAccountId:
        Type: String
        Description: >-
          AWS Account ID of the primary account (the account from which AWS Systems
          Manager Automation will be initiated).
        MaxLength: 12
        MinLength: 12
    Resources:
      AWSSystemsManagerAutomationExecutionRole:
        Type: 'AWS::IAM::Role'
        Properties:
          RoleName: AWS-SystemsManager-AutomationExecutionRole
          AssumeRolePolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Principal:
                  AWS: !Ref MasterAccountId
                Action:
                  - 'sts:AssumeRole'
              - Effect: Allow
                Principal:
                  Service: ssm.amazonaws.com
                Action:
                  - 'sts:AssumeRole'
          ManagedPolicyArns:
            - 'arn:aws:iam::aws:policy/service-role/AmazonSSMAutomationRole'
          Path: /
          Policies:
            - PolicyName: ExecutionPolicy
              PolicyDocument:
                Version: 2012-10-17
                Statement:
                  - Effect: Allow
                    Action:
                      - 'resource-groups:ListGroupResources'
                      - 'tag:GetResources'
                    Resource: '*'
                  - Effect: Allow
                    Action:
                      - 'iam:PassRole'
                    Resource: !Sub >-
                      arn:${AWS::Partition}:iam::${AWS::AccountId}:role/AWS-SystemsManager-AutomationExecutionRole
                  - Effect: Allow
                    Action:
                      - 'iam:ListUserTags'
                      - 'iam:ListRoleTags'
                      - 'iam:TagUser'
                      - 'iam:TagRole'
                      - 'iam:UntagUser'
                      - 'iam:UntagRole'
                      - 'iam:CreateRole'
                      - 'iam:PutRolePolicy'
                      - 'iam:GetRole'
                      - 'iam:GetRolePolicy'
                    Resource:
                      - '*'
                  - Effect: Allow
                    Action:
                      - 'iam:PassRole'
                    Resource:
                      - 'arn:aws:iam::*:role/SetupInventoryStack*'
                  - Effect: Allow
                    Action:
                      - 'lambda:GetFunction'
                      - 'lambda:DeleteFunction'
                      - 'lambda:CreateFunction'
                      - 'lambda:InvokeFunction'
                    Resource:
                      - '*'
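As an informal sanity check (not part of the official setup), you can verify from the management account that the execution role in a member account is assumable. The account ID below is a placeholder, and your admin user needs sts:AssumeRole permission.

    # Minimal sketch: confirm the member-account execution role trusts the
    # management account. The member account ID is a placeholder.
    import boto3

    sts = boto3.client("sts")
    member_account = "123456789001"  # placeholder member account ID

    resp = sts.assume_role(
        RoleArn=f"arn:aws:iam::{member_account}:role/AWS-SystemsManager-AutomationExecutionRole",
        RoleSessionName="sg-aws-setup-check",
    )
    print("Assumed role; credentials expire at:", resp["Credentials"]["Expiration"])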
1.5 Create an IAM instance profile for Systems Manager

As described in the previous section, you need to create an IAM instance profile in each account and attach it to the EC2 instances. You can use the AmazonSSMForInstancesRoleSetup.yml CFT template to create the instance profile. More details can be found here. If you already have the instance profile role available in IAM, then this step can be skipped.

Execution Steps for AmazonSSMForInstancesRoleSetup.yml

The AmazonSSMForInstancesRoleSetup.yml script is executed in CloudFormation using the StackSets option.

1. Login to the management account as an admin.
2. Navigate to the CloudFormation page.
3. Click on View StackSets.
4. Click on Create StackSet.
5. Under Prerequisite - Prepare template, select Template is ready.
6. Under Specify template, select Upload a template file.
7. In Specify stack details, specify a stack name: AmazonSSMForInstancesRoleSetup. Parameters can be left with the default values; click Next.
8. Under Permissions, select Service-managed permissions (default) and click Next.
9. On the Set deployment options page:
   - Accounts: select Deploy to organization.
   - Specify regions: select any one region of your choice. Since it is an IAM role, it is applied globally; StackSets simply needs a region in which to deploy the script.
   - Deployment options:
     - Maximum concurrent accounts: leave the default provided by AWS.
     - Failure tolerance: select Percentage from the dropdown and enter 100 in the text box.
     - Region concurrency: Parallel.
10. Click Next when you're finished.
11. Review the changes and, at the bottom of the page, click Submit.

    AWSTemplateFormatVersion: 2010-09-09
    Description: 'This CF Template creates AmazonSSMRoleForInstances in account. '
    Metadata:
      'AWS::CloudFormation::Interface':
        ParameterGroups:
          - Label:
              default: S3 Bucket Details
            Parameters:
              - S3Bucket
        ParameterLabels:
          S3Bucket:
            default: S3 Bucket Name
    Parameters:
      S3Bucket:
        Type: String
        Description: >-
          Enter the S3 bucket name you have created/chosen to publish SendCommand output.
    Resources:
      AmazonSSMRoleForInstancesRole:
        Type: 'AWS::IAM::Role'
        Properties:
          RoleName: AmazonSSMForInstancesRole
          AssumeRolePolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Principal:
                  Service:
                    - ec2.amazonaws.com
                Action:
                  - 'sts:AssumeRole'
          Policies:
            - PolicyName: SSMSendCommand
              PolicyDocument:
                Version: 2012-10-17
                Statement:
                  - Sid: SSMSendCommand
                    Effect: Allow
                    Action:
                      - 's3:PutObject'
                      - 's3:GetObject'
                      - 's3:PutObjectAcl'
                    Resource:
                      - !Join
                        - ''
                        - - 'arn:aws:s3:::'
                          - !Ref S3Bucket
                          - '/*'
          Path: /
          ManagedPolicyArns:
            - 'arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore'
      AmazonSSMRoleForInstancesProfile:
        Type: 'AWS::IAM::InstanceProfile'
        Properties:
          InstanceProfileName: AmazonSSMForInstancesProfile
          Path: /
          Roles:
            - !Ref AmazonSSMRoleForInstancesRole

1.6 Attach an IAM instance profile to an Amazon EC2 instance

Once the instance profile is created, you need to attach the profile to an EC2 instance. If you already have an EC2 instance and need to attach the instance profile, you can find the details here. If you already have the instance profile attached to the EC2 instance, then this step can be skipped.
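Attaching the profile can also be scripted. Here is a minimal boto3 sketch, with a placeholder region and instance ID.

    # Minimal sketch: attach the instance profile created above to a running
    # EC2 instance. Region and instance ID are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

    ec2.associate_iam_instance_profile(
        IamInstanceProfile={"Name": "AmazonSSMForInstancesProfile"},
        InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    )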
Here is the checklist for setting up SSM Inventory:

| # | CloudFormation Template | Execution Scope | Comments |
|---|---|---|---|
| 1 | AWS-SystemsManager-AutomationAdministrationRole.yml | Management Account | Required if you would like to set up AWS Systems Manager from the management account to all member accounts. |
| 2 | AWS-SystemsManager-AutomationExecutionRole.yml | Member Account | |
| 3 | AmazonSSMForInstancesRoleSetup.yml | Member Account | The instance profile needs to be attached to the EC2 instance. If you have your own instance role, you can skip this script. |
| 4 | Attach instance profile to EC2 instance | Member Accounts | There is no script for this step. You have to set it up on your own. |
| 5 | Set up Systems Manager Inventory using the Systems Manager Automation script | Member Account | You have to log in to the account to execute the script created by AWS. |

The figure below shows the SSM Inventory Dashboard in a region after completing all the required setups described above. Note that to view the Inventory dashboard, you need to have EC2 instance(s) created in the region; otherwise the 'Setup Inventory' page will show instead.

The figure below shows the software inventory list of an EC2 instance.

1.7 Links & References

1. Setting up management account permissions for multi-Region and multi-account automation: https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation-multiple-accounts-and-regions.html
2. AWS-SetupInventory: https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-aws-setupinventory.html
3. Create an IAM instance profile for Systems Manager - Quick Setup Host Management: https://docs.aws.amazon.com/systems-manager/latest/userguide/quick-setup-host-management.html
4. Attach an IAM instance profile to an Amazon EC2 instance: https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-launch-managed-instance.html
5. Systems Manager - Inventory Setup page: https://console.aws.amazon.com/systems-manager/automation/execute/AWS-SetupInventory
6. Systems Manager prerequisites: https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-prereqs.html
7. Create an IAM instance profile for Systems Manager: https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-instance-profile.html

2. AWS SSM SendCommand Setup

This setup is optional. It is required only if you need to import Serial Number, TCP connections, and Process information from an EC2 instance. This setup depends on SSM enablement, so you need to first follow the steps described in 'AWS Systems Manager Setup'.

2.1 Legal Disclaimer: S3 Bucket Costs

This feature uses an S3 bucket to collect the terminal output in a central location. SSM SendCommand captures the output in the S3 bucket; the SG-AWS application reads each file and then deletes it immediately. There is a cost for the files created in the S3 bucket. Depending on the number of EC2 instances running in the organization and the size of the terminal output, the monthly cost varies. For more information on pricing, visit the AWS S3 Pricing page.

2.2 AWS EC2 System Information

Currently there is no AWS API that provides Serial Number, TCP connections, and Process information. We can obtain this information by executing OS-specific commands. AWS SSM provides the SendCommand API, through which we can execute these OS-specific commands. The terminal output is collected in an S3 bucket; SG-AWS parses the output and populates the CMDB CIs. At a high level, here is the flow to execute commands and get the data into the CMDB (a sketch of step 1 follows the list):

1. SG-AWS makes the SSM SendCommand API call to execute the custom SSM document defined by SG-AWS.
2. SSM executes the custom SSM document script in an EC2 instance.
3. The EC2 instance executes the commands and publishes the output to the common/centralized S3 bucket.
4. SG-AWS then reads the files generated in the S3 bucket, parses the command output, and populates the relevant CMDB data. SG-AWS then deletes the files in S3 and completes the process.
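Here is a minimal boto3 sketch of step 1, assuming the custom SSM document and the central S3 bucket already exist. The instance ID, bucket name, and key prefix are placeholders.

    # Minimal sketch: invoke the custom SSM document via SendCommand and
    # publish the output to the central bucket. Values are placeholders.
    import boto3

    ssm = boto3.client("ssm", region_name="us-east-1")  # example region

    resp = ssm.send_command(
        InstanceIds=["i-0123456789abcdef0"],   # placeholder instance ID
        DocumentName="SG-AWS-RunShellScript",  # custom document, no input parameters
        OutputS3BucketName="myS3Bucket",       # central bucket (placeholder)
        OutputS3KeyPrefix="sg-aws",            # placeholder prefix
    )
    # The command ID is what SG-AWS stores to correlate the output files later.
    print("CommandId:", resp["Command"]["CommandId"])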
2.3 Security Risk and Mitigation

2.3.1 Malicious Command Execution

SendCommand executes SSM document scripts by taking inputs from the API request body. There are pre-existing documents available for Windows (AWS-RunPowerShellScript) and Linux (AWS-RunShellScript) which take commands as inputs and execute them in an EC2 instance. This can cause serious security risks, such as shutting down an instance, killing processes, etc. To avoid these risks, we have created custom SSM documents which do not take any input parameters and execute only predefined commands; these are discussed later in the sections below.

2.3.2 S3 Bucket Access

The terminal output is large and the API response truncates it partway; hence we collect the output in an S3 bucket. To make the setup easier, we request that you create one S3 bucket for this feature, on which the ServiceNow user will have the privileges to read and delete files once the process is completed. It is recommended to create an S3 bucket specifically for the SG-AWS application and allow the ServiceNow user to have access to this bucket in the organization. For enhanced security in the AWS environment, the ServiceNow user needs access only to the custom SSM documents (SG-AWS-RunShellScript, SG-AWS-RunPowerShellScript) and the specific S3 bucket. The IAM policies defined later in the sections describe the policy to be set.

2.3.3 Setup Instructions

1. Create the S3 bucket.
2. Set the target S3 bucket policy.
3. Attach IAM permissions to the EC2 instance profile to publish command output to the S3 bucket.
4. Set up the SSM documents for Windows (SG-AWS-RunPowerShellScript) and Linux (SG-AWS-RunShellScript).
5. Add the additional SSM and S3 IAM permissions to the ServiceNow user.

2.3.3.1 Create S3 Bucket

Create the S3 bucket in the account and region of your choice, with the permissions listed below:

| Setting | Value |
|---|---|
| Permissions overview - Access | Bucket and objects not public |
| Block public access (bucket settings) | Block all public access |
| Bucket policy | See the next section |

2.3.3.2 Target S3 Bucket Policy

The target S3 bucket must allow the instance profile role that is attached to the managed EC2 instance to access the bucket. You can either create a bucket policy or grant access to the source AWS account in the bucket access control list (ACL).

Warning: It is a security best practice to create a bucket policy. Adding the source AWS account to the bucket ACL allows all users and roles in the source AWS account to access the S3 bucket.

The following is an example bucket policy for the target S3 bucket. The principal entry needs to be repeated for each source account, as shown in the example:

- Replace DOC-EXAMPLE-BUCKET with the S3 bucket name in the target account. Make sure you have '/*' after the bucket name.
- Replace SOURCE-AWS-ACCOUNT with the source AWS account ID.
- Replace INSTANCE-PROFILE-ROLE-NAME with the IAM instance profile role that is attached to the EC2 instance.

Template:

    {
      "Version": "2012-10-17",
      "Id": "Policy1589684413780",
      "Statement": [
        {
          "Sid": "Stmt1589684412557",
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::SOURCE-AWS-ACCOUNT:role/INSTANCE-PROFILE-ROLE-NAME"
          },
          "Action": [
            "s3:GetObject",
            "s3:PutObject",
            "s3:PutObjectAcl"
          ],
          "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
        }
      ]
    }

Example:

    {
      "Version": "2012-10-17",
      "Id": "Policy123456789000",
      "Statement": [
        {
          "Sid": "EC2S3Access",
          "Effect": "Allow",
          "Principal": {
            "AWS": [
              "arn:aws:iam::123456789000:role/AmazonSSMRoleForInstances",
              "arn:aws:iam::123456789001:role/AmazonSSMRoleForInstances",
              "arn:aws:iam::123456789002:role/AmazonSSMRoleForInstances",
              "arn:aws:iam::123456789003:role/AmazonSSMRoleForInstances",
              "arn:aws:iam::123456789004:role/AmazonSSMRoleForInstances"
            ]
          },
          "Action": [
            "s3:GetObject",
            "s3:PutObject",
            "s3:PutObjectAcl"
          ],
          "Resource": "arn:aws:s3:::myS3Bucket/*"
        }
      ]
    }
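Applying the bucket policy can also be scripted. The following is a minimal boto3 sketch using the bucket and account placeholders from the example above, with a single principal for brevity.

    # Minimal sketch: apply the target bucket policy from the example above.
    # Bucket name and account ID are placeholders.
    import boto3
    import json

    s3 = boto3.client("s3")

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "EC2S3Access",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789000:role/AmazonSSMRoleForInstances"},
            "Action": ["s3:GetObject", "s3:PutObject", "s3:PutObjectAcl"],
            "Resource": "arn:aws:s3:::myS3Bucket/*",
        }],
    }
    s3.put_bucket_policy(Bucket="myS3Bucket", Policy=json.dumps(policy))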
2.3.3.3 Attach IAM Permissions to the Instance Profile so the EC2 Instance Can Publish Command Output to the S3 Bucket

In the AWS Systems Manager setup, we created the 'AmazonSSMForInstancesRole' role and attached it to each EC2 instance to get software information. You need to attach the policy below to that existing instance profile role. The AmazonSSMForInstancesRoleSetup.yml script already includes this update.

The IAM instance profile role attached to your managed Amazon Elastic Compute Cloud (Amazon EC2) instance must allow the following actions on the S3 bucket. Replace DOC-EXAMPLE-BUCKET with the S3 bucket name in the target account.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "PublishTerminalOutputToS3",
          "Effect": "Allow",
          "Action": [
            "s3:PutObject",
            "s3:GetObject",
            "s3:PutObjectAcl"
          ],
          "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
        }
      ]
    }

Note: Make sure you add the '/*' suffix at the end of the bucket name to allow files to be created under this bucket.

2.3.3.4 Setup SSM Documents

The SG-AWS-RunShellScript document (for Linux-based instances) and the SG-AWS-RunPowerShellScript document (for Windows-based instances) are invoked by SG-AWS via the SendCommand API.

SG-AWS-RunShellScript (Linux):

- Serial Number, Manufacturer, Product Name: dmidecode system | grep -E '(Manufacturer|Product Name|Serial Number)'
- Process Information: ps awwxo pid,ppid,command
- TCP: netstat -anpt
- CPU Info: grep -E '(model name|vendor_id|cpu MHz|cpu cores)' /proc/cpuinfo

SG-AWS-RunPowerShellScript (Windows):

- Serial Number: wmic bios get serialnumber
- Process Information: wmic process get ProcessId, ParentProcessId, Name, ExecutablePath, Description, CommandLine /format:rawxml
- TCP: netstat -anop TCP
- System Info: wmic computersystem get model,name,systemtype,manufacturer,DNSHostName,domain,TotalPhysicalMemory,NumberOfProcessors /format:list
- CPU Info: wmic cpu get Manufacturer,MaxClockSpeed,DeviceID,Name,Caption /format:list

Here is the content of the custom SSM documents. These scripts do not take any input parameters and execute only the listed OS commands. They need to be deployed in all of the account regions from which you need to get EC2 system information.

SG-AWS-RunShellScript-Setup.yml

    AWSTemplateFormatVersion: 2010-09-09
    Resources:
      SSMDocument:
        Type: 'AWS::SSM::Document'
        Properties:
          Content:
            schemaVersion: '2.2'
            description: 'Service Graph AWS - aws:runShellScript'
            mainSteps:
              - action: 'aws:runShellScript'
                name: runShellScript
                inputs:
                  timeoutSeconds: '3600'
                  runCommand:
                    - "echo '####SG-AWS-06-02-2022####'"
                    - "dmidecode system | grep -E '(Manufacturer|Product Name|Serial Number)' | sed 's/^/#DMI#/'"
                    - "ps awwxo pid,ppid,command | sed 's/^/#PS#/'"
                    - "netstat -anpt | sed 's/^/#NETSTAT#/'"
                    - "grep -E '(model name|vendor_id|cpu MHz|cpu cores)' /proc/cpuinfo | sed 's/^/#CPU#/'"
                    - "awk '/MemTotal/ {print $2}' /proc/meminfo | sed 's/^/#RAM-KB#/'"
                    - "lsblk -dn | grep -v '^loop' | sed 's/^/#DISK#/'"
          DocumentType: Command
          Name: SG-AWS-RunShellScript
          VersionName: '1.0'
SG-AWS-RunPowerShellScript-Setup.yml

    AWSTemplateFormatVersion: 2010-09-09
    Resources:
      SSMDocument:
        Type: 'AWS::SSM::Document'
        Properties:
          Content:
            schemaVersion: '2.2'
            description: 'Service Graph AWS - aws:runPowerShellScript'
            mainSteps:
              - action: 'aws:runPowerShellScript'
                name: runPowerShellScript
                inputs:
                  timeoutSeconds: '3600'
                  runCommand:
                    - 'echo ''####SG-AWS-06-02-2022####'''
                    - 'echo ''####-WINDOWS-####'''
                    - wmic bios get serialnumber | foreach {"###SERIAL###"+ $_}
                    - netstat -anop TCP | foreach {"###TCP###"+ $_}
                    - cmd /a /c 'wmic computersystem get model,name,systemtype,manufacturer,DNSHostName,domain,TotalPhysicalMemory,NumberOfProcessors /format:list' | foreach {"###CS###"+ $_}
                    - cmd /a /c 'wmic cpu get Manufacturer,MaxClockSpeed,DeviceID,Name,Caption /format:list' | foreach {"###CPU###"+ $_}
                    - cmd /a /c 'wmic process get ProcessId, ParentProcessId, Name, ExecutablePath, Description, CommandLine /format:rawxml' | foreach {"###PS###"+ $_}
                    - (Get-Disk | measure-object -Property size -Sum).Sum / 1GB | foreach {"###DISK###"+ $_}
                    - (Get-WmiObject Win32_PhysicalMemory | measure-object Capacity -sum).sum/1gb | foreach {"###RAM-GB###"+ $_}
          DocumentType: Command
          Name: SG-AWS-RunPowerShellScript
          VersionName: '1.0'

2.3.3.4.1 Deployment Instructions - StackSet Option

The StackSet deployment option is used to deploy the script to multiple accounts and regions from the management/designated account. However, StackSets will not deploy the script into the management account itself; to deploy into the management account, follow the steps described in the Stack deployment option.

1. Login to the management (or designated) account as an admin.
2. Navigate to the CloudFormation page.
3. Click on View StackSets.
4. Click on Create StackSet.
5. Under Prerequisite - Prepare template, select Template is ready.
6. Under Specify template, select Upload a template file.
7. Choose the SG-AWS-RunShellScript-Setup.yml downloaded earlier and click Next.
8. In Specify stack details, specify a StackSet name and click Next.
9. Parameters can be left with the default values; click Next.
10. Under Permissions, select Service-managed permissions (default) and click Next.
11. On the Set deployment options page:
    - Add stacks to stack set: leave the default, Deploy new stacks.
    - Accounts: select Deploy stacks in accounts and enter the account numbers separated by commas.
    - Specify regions: select one or more regions.
    - Deployment options: leave the defaults provided by AWS.
12. Click Next when you're finished.
13. Review the changes, check the acknowledgment box at the bottom of the page, and click Submit.
14. Repeat the steps above for the SG-AWS-RunPowerShellScript-Setup.yml script.

2.3.3.4.2 Deployment Instructions - Stack Option

1. Login to the management account as an admin.
2. Navigate to the CloudFormation page.
3. Click on View Stack.
4. Click on Create Stack.
5. Under Prerequisite - Prepare template, select Template is ready.
6. Under Specify template, select Upload a template file.
7. Choose the SG-AWS-RunShellScript-Setup.yml downloaded earlier and click Next.
8. On the Configure stack options page, leave the default values and select Next.
9. Review the changes, check the acknowledgment box at the bottom of the page, and click Create stack.
10. Repeat the steps above for the SG-AWS-RunPowerShellScript-Setup.yml script.

2.3.3.5 Add Additional SSM and S3 IAM Permissions to the ServiceNow User

The ServiceNow user needs access to the SSM APIs to send commands and to the S3 bucket to read the output. The CloudFormation scripts are updated to reflect these permissions:

- ssm:List*
- ssm:Get*
- ssm:Send*
- ssm:Describe*
- s3:GetObject
- s3:GetBucketLocation
- s3:ListBucket

2.3.3.5.1 Restricted SendCommand Access for the ServiceNow User

To avoid the security risk described in the previous section, the ServiceNow user is given restricted access to execute only the custom SSM documents SG-AWS-RunShellScript and SG-AWS-RunPowerShellScript. This IAM setup ensures the ServiceNow user cannot execute any other SSM document(s).

    {
      "Action": [
        "ssm:SendCommand"
      ],
      "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ssm:*:*:document/SG-AWS-RunShellScript",
        "arn:aws:ssm:*:*:document/SG-AWS-RunPowerShellScript"
      ],
      "Effect": "Allow",
      "Sid": "SendCommandAccess"
    }
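One informal way to verify the restriction (an assumption, not part of the official setup) is to call SendCommand with a non-whitelisted document using the ServiceNow user's credentials and confirm it is rejected. The instance ID below is a placeholder.

    # Minimal sketch: with the ServiceNow user's credentials configured,
    # a non-whitelisted document such as AWS-RunShellScript should be denied.
    import boto3
    from botocore.exceptions import ClientError

    ssm = boto3.client("ssm", region_name="us-east-1")  # example region

    try:
        ssm.send_command(
            InstanceIds=["i-0123456789abcdef0"],  # placeholder instance ID
            DocumentName="AWS-RunShellScript",
            Parameters={"commands": ["id"]},
        )
        print("Unexpected: command was accepted")
    except ClientError as err:
        # Expect an AccessDeniedException if the restriction is in place.
        print("Rejected as expected:", err.response["Error"]["Code"])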
2.3.3.5.2 Restricted S3 Bucket Access for the ServiceNow User

The ServiceNow user gets read and delete access only to the specific S3 bucket. As described earlier, the SG-AWS application deletes the files from the S3 bucket once they are processed.

    {
      "Action": [
        "s3:GetObject",
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::myS3Bucket/*"
      ],
      "Effect": "Allow",
      "Sid": "S3BucketAccess"
    }

Here is the summary of the complete IAM permissions for the ServiceNow user for this feature:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": [
            "organizations:DescribeOrganization",
            "organizations:ListAccounts",
            "config:ListDiscoveredResources",
            "config:SelectAggregateResourceConfig",
            "config:BatchGetAggregateResourceConfig",
            "config:SelectResourceConfig",
            "config:BatchGetResourceConfig",
            "ec2:DescribeRegions",
            "ec2:DescribeImages",
            "ec2:DescribeInstances",
            "ec2:DescribeInstanceTypes",
            "ssm:DescribeInstanceInformation",
            "ssm:ListInventoryEntries",
            "ssm:GetInventory",
            "ssm:SendCommand",
            "s3:GetObject",
            "s3:DeleteObject",
            "tag:GetResources",
            "iam:CreateAccessKey",
            "iam:DeleteAccessKey"
          ],
          "Resource": "*",
          "Effect": "Allow",
          "Sid": "ServiceNowUserReadOnlyAccess"
        },
        {
          "Action": [
            "ssm:SendCommand"
          ],
          "Resource": [
            "arn:aws:ec2:*:*:instance/*",
            "arn:aws:ssm:*:*:document/SG-AWS-RunShellScript",
            "arn:aws:ssm:*:*:document/SG-AWS-RunPowerShellScript"
          ],
          "Effect": "Allow",
          "Sid": "SendCommandAccess"
        },
        {
          "Action": [
            "s3:GetObject",
            "s3:GetBucketLocation",
            "s3:ListBucket",
            "s3:DeleteObject"
          ],
          "Resource": [
            "arn:aws:s3:::myS3Bucket/*"
          ],
          "Effect": "Allow",
          "Sid": "S3BucketAccess"
        }
      ]
    }

Once this setup is complete, the 'AWS SSM SendCommand Setup' step of the Guided Setup configuration requires the following inputs:

- Account ID where the S3 bucket is created.
- AWS Region where the S3 bucket is created.
- S3 bucket name.

References:

- https://aws.amazon.com/getting-started/hands-on/remotely-run-commands-ec2-instance-systems-manager/
- https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_SendCommand.html
- https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-rc-setting-up.html#sysman-rc-setting-up-cmdsec

3. How it works

In Service Graph Connector for AWS, Deep Discovery is done via two data sources:

- SG-AWS-SSM-SendCommand
- SG-AWS-GetS3Object

3.1 SG-AWS-SSM-SendCommand

There are essentially four steps in running Deep Discovery in SG-AWS:

1. SG-AWS makes the SSM SendCommand API call to execute the custom SSM document defined by SG-AWS.
2. SSM executes the custom SSM document script in an EC2 instance.
3. The EC2 instance executes the commands and publishes the output to the common/centralized S3 bucket.
4. SG-AWS then reads the files generated in the S3 bucket, parses the command output, and populates the relevant CMDB data. SG-AWS then deletes the files in S3 and completes the process.

Of these four steps, the first three are performed by the SG-AWS-SSM-SendCommand data source. This is because when the EC2 instance executes the commands in step 3, it takes some time to produce the output and push it to the S3 bucket; instead of waiting for the EC2 instance to complete this, we run other data sources in the meantime. Once the EC2 instance starts executing the commands, SSM generates command IDs, which are stored in the import set table of this data source.
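For debugging, you can check whether a stored command ID has finished before its S3 output is fetched. This is a minimal boto3 sketch with placeholder IDs, not SG-AWS internals.

    # Minimal sketch: check the status of a command ID recorded in the
    # import set table. Command ID and instance ID are placeholders.
    import boto3

    ssm = boto3.client("ssm", region_name="us-east-1")  # example region

    inv = ssm.get_command_invocation(
        CommandId="11111111-2222-3333-4444-555555555555",  # placeholder command ID
        InstanceId="i-0123456789abcdef0",                  # placeholder instance ID
    )
    # Status progresses through Pending/InProgress to Success, Failed, etc.
    print(inv["Status"])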
3.2 SG-AWS-GetS3Object

As mentioned in the previous section, step 4 (SG-AWS reads the files generated in the S3 bucket, parses the command output, populates the relevant CMDB data, then deletes the files in S3 and completes the process) is handled by the SG-AWS-GetS3Object data source. Once the SG-AWS-SSM-SendCommand data source's execution is completed, it stores all the command IDs and the corresponding EC2 instance IDs in its import set table. The SG-AWS-GetS3Object data source looks at the command IDs in the SG-AWS-SSM-SendCommand data source's import set table, forms the S3 URLs, and retrieves the files generated in the S3 bucket. After processing, the data from the S3 bucket is inserted into the SG-AWS-GetS3Object import set table. The classes mapped by this data source are cmdb_tcp (TCP connections) and cmdb_running_process (running processes). It also uses ADM to classify all the applications we receive in the payload, such as Apache Web Server, Tomcat server, etc.
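As an illustration of this step, here is a minimal boto3 sketch that fetches, splits, and deletes one instance's output. It assumes the usual SendCommand S3 layout of prefix/command-id/instance-id/... and uses placeholder bucket, prefix, and ID values; the marker parsing mirrors the custom documents above, while SG-AWS's actual parsing logic is internal to the connector.

    # Minimal sketch: read one instance's command output from S3, group lines
    # by the markers the custom documents prepend, then delete the file.
    import boto3

    s3 = boto3.client("s3")
    bucket, prefix = "myS3Bucket", "sg-aws"                # placeholders
    command_id = "11111111-2222-3333-4444-555555555555"    # from the import set table
    instance_id = "i-0123456789abcdef0"                    # placeholder instance ID

    listed = s3.list_objects_v2(Bucket=bucket, Prefix=f"{prefix}/{command_id}/{instance_id}/")
    for obj in listed.get("Contents", []):
        body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read().decode("utf-8", "replace")
        # Group lines by marker, e.g. #PS# (processes) and #NETSTAT# (TCP connections).
        for marker in ("#DMI#", "#PS#", "#NETSTAT#", "#CPU#"):
            rows = [ln[len(marker):] for ln in body.splitlines() if ln.startswith(marker)]
            print(marker, len(rows), "lines")
        # SG-AWS deletes each file once it has been processed.
        s3.delete_object(Bucket=bucket, Key=obj["Key"])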