How SGC AWS brings the CPU-related deep discovery information (cpu_core_count, cpu_manufacturer, cpu_speed, etc.)

Issue

1. The CPU-related fields are empty in the CMDB for records that are discovered by the Service Graph Connector (SGC) for AWS.
2. Refer to the document CMDB classes targeted in Service Graph Connector for AWS -- https://www.servicenow.com/docs/bundle/zurich-servicenow-platform/page/product/configuration-management/reference/cmdb-aws-classes.html. The document clearly shows that, for the cmdb_ci_server table, SGC AWS should populate the CPU-related information; however, this information is not available even in the staging table.
3. There is also no direct mapping to these target CPU-related fields of the server in the transform map.
Release

All

Resolution

Step 1:
1. The data source SG-AWS-SSM-SendCommand gets triggered.
2. This data source, using the script include SgAwsSendCommandDataSourceUtils, makes an outbound POST API call to AWS with the instance IDs.
3. The queryInstanceIds() method, at line 177 of the script include, is responsible for selecting and sending the object IDs for which the CPU-related details are requested in the API calls.
4. Access the staging table: https://instance.service-now.com/sn_aws_integ_sg_aws_ssm_sendcommand_list.do
5. In the staging table above, refer to the Data field, Command Status, Command ID, and Failed Instances columns.
6. The _sendSSMCommand method, at line 298 of the same script include, is used to execute the SSM documents that capture the CPU details.

Step 2:
1. The SG-AWS application uses the SendCommand API (_sendSSMCommand) to invoke the SG-AWS-RunShellScript and SG-AWS-RunPowerShellScript scripts for Linux-based and Windows-based instances, respectively. SSM documents were introduced in version 1.4.2, and more features continue to be added over time. Depending on the version you download, the corresponding attributes are received in the system.
2. Follow this document for details on how to download the scripts: Download the AWS scripts -- https://www.servicenow.com/docs/bundle/xanadu-servicenow-platform/page/product/configuration-management/task/sgc-cmdb-aws-scripts-dwld.html
3. Sample SG-AWS-RunPowerShellScript-Setup.yml defined in the SSM documents for Windows:

    AWSTemplateFormatVersion: 2010-09-09
    Resources:
      SSMDocument:
        Type: 'AWS::SSM::Document'
        Properties:
          Content:
            schemaVersion: '2.2'
            description: 'Service Graph AWS - aws:runPowerShellScript'
            mainSteps:
              - action: 'aws:runPowerShellScript'
                name: runPowerShellScript
                inputs:
                  timeoutSeconds: '3600'
                  runCommand:
                    - 'echo ''####SG-AWS-06-02-2022####'''
                    - 'echo ''####-WINDOWS-####'''
                    - wmic bios get serialnumber | foreach {"###SERIAL###"+ $_}
                    - netstat -anop TCP | foreach {"###TCP###"+ $_}
                    - cmd /a /c 'wmic computersystem get model,name,systemtype,manufacturer,DNSHostName,domain,TotalPhysicalMemory,NumberOfProcessors /format:list' | foreach {"###CS###"+ $_}
                    - cmd /a /c 'wmic cpu get Manufacturer,MaxClockSpeed,DeviceID,Name,Caption /format:list' | foreach {"###CPU###"+ $_}
                    - cmd /a /c 'wmic process get ProcessId, ParentProcessId, Name, ExecutablePath, Description, CommandLine /format:rawxml' | foreach {"###PS###"+ $_}
                    - (Get-Disk | measure-object -Property size -Sum).Sum / 1GB | foreach {"###DISK###"+ $_}
                    - (Get-WmiObject Win32_PhysicalMemory | measure-object Capacity -sum).sum/1gb | foreach {"###RAM-GB###"+ $_}
          DocumentType: Command
          Name: SG-AWS-RunPowerShellScript
          VersionName: '1.0'

The output for a Windows server looks like:

    {
      "cpu": {
        "Manufacturer": "xxx",
        "ModelId": "xxx",
        "serialnumber": "xxxxxxxx",
        "CPUManufacturer": "xxxxxxx",
        "model name": "xxxxxxxx",
        "CPUSpeed": "xxxxxxx",
        "cpu cores": "n",
        "Vendor": "xxxxx",
        "CPUName": "xxxxxx",
        "CPUCount": "n",
        "CPUCores": "1",
        "CPUCoreThreads": "n",
        "ramInMB": nnn,
        "diskSizeinGB": "nn"
      },
      "connection_alias_id": "xxxxxxxxxxxxxxxxxxxxxx"
    }

4. These scripts run on the instance, fetch the data, and place the transformed data into the respective S3 bucket.

Step 3:
1. Next, the data source SG-AWS-SSM-GetS3Object makes an outbound GET API call to the S3 bucket with all the instance IDs fetched in the process above (SgAwsSendCommandDataSourceUtils), to retrieve the CPU-related information for those instance IDs.
2. After enabling outbound debugging, you can see the response returned from the AWS S3 bucket. The response is parsed into the staging table.
3. The transformation does not take place via direct mapping. An onBefore transform script calls SgAwsSendCommandTransformUtils to map all the staging CPU-related data into the target records.

Steps to investigate:
1. Get the object ID of the server record for which the CPU-related details are missing.
2. Access the staging table mentioned in Step 1, point 4 above. Verify whether the Data field shows that this instance ID was sent for running the scripts and for creation of the related object in the S3 bucket. If the instance ID was sent but the CPU-related details did not come back, check the Failed Instances column for that instance ID: it shows whether the data was successfully updated in the S3 bucket.
3. Along with this, also check the outbound logs for the API calls made for this instance ID by the SG-AWS-SSM-GetS3Object data source, and observe the response.
4. In some situations, to investigate further, you may need to run these scripts manually in AWS to verify whether they execute properly or there is an issue, as these scripts are responsible for fetching the CPU-related data from the respective EC2 instance and placing it in the S3 bucket. Follow the KB below for more information on how to execute these scripts manually in AWS: Where to run the SG-AWS-RunShellScript-Setup.yml Script and SG-AWS-RunPowerShellScript-Setup.yml Script in AWS?
5. Using the above KB, check whether the data appears in the S3 bucket after running the script manually for the affected instance IDs.
6. If the data is visible in the script output but not in the S3 bucket for that specific instance ID, there is a permission issue.
7. Access the KB below and grant the required permission to the user account in AWS: What is the Policy required by SGC AWS Account to publish the created deep discovery information in S3 Bucket
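For orientation, the SendCommand call described in Step 1 can be pictured as building an SSM SendCommand request body. This is an illustrative sketch only, not the actual SgAwsSendCommandDataSourceUtils implementation: the function name, its parameters, and the platform check are assumptions, although the request fields themselves (DocumentName, InstanceIds, OutputS3BucketName, TimeoutSeconds) are standard SSM SendCommand parameters.

```javascript
// Hypothetical sketch of the SSM SendCommand request body built in Step 1.
// NOT the actual connector code: the function name and parameters are assumptions.
function buildSendCommandPayload(instanceIds, platform, s3Bucket) {
  return {
    // Windows instances use the PowerShell document, Linux instances the shell document (Step 2)
    DocumentName: platform === 'windows' ? 'SG-AWS-RunPowerShellScript' : 'SG-AWS-RunShellScript',
    InstanceIds: instanceIds,          // EC2 instance IDs selected by queryInstanceIds()
    OutputS3BucketName: s3Bucket,      // results land in S3 for SG-AWS-SSM-GetS3Object to fetch
    TimeoutSeconds: 3600               // matches timeoutSeconds in the SSM document
  };
}

var payload = buildSendCommandPayload(['i-0abc123', 'i-0def456'], 'windows', 'my-sgc-bucket');
```

If such a payload is never built for the affected instance ID (see the Data field check in the investigation steps above), the CPU details can never reach the staging table.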
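The marker-prefixed lines emitted by the SSM document in Step 2 (###SERIAL###, ###CPU###, ###DISK###, ###RAM-GB###) are ultimately turned into the cpu JSON payload shown in Step 2. As a rough illustration only, assuming nothing about the real SgAwsSendCommandTransformUtils code (the function name and field mapping below are invented; for example, the real payload exposes ramInMB, while this sketch keeps the raw GB value), such parsing could look like:

```javascript
// Illustrative sketch of parsing the marker-prefixed SSM output into a cpu object.
// NOT the actual SGC AWS parsing code; names and mapping choices are assumptions.
function parseSsmOutput(rawOutput) {
  var cpu = {};
  rawOutput.split('\n').forEach(function (line) {
    var m = line.match(/^###(SERIAL|CPU|DISK|RAM-GB)###(.*)$/);
    if (!m) return;                        // ignore TCP/CS/PS and unmarked lines
    var marker = m[1], rest = m[2].trim();
    if (marker === 'DISK') {
      cpu.diskSizeinGB = rest;             // total disk size in GB
    } else if (marker === 'RAM-GB') {
      cpu.ramInGB = rest;                  // raw GB value; real payload carries ramInMB
    } else if (rest.indexOf('=') > -1) {   // wmic /format:list emits Key=Value pairs
      var kv = rest.split('=');
      if (marker === 'CPU') cpu['CPU' + kv[0]] = kv[1];   // e.g. CPUManufacturer
      else cpu[kv[0].toLowerCase()] = kv[1];
    } else if (marker === 'SERIAL' && rest && rest !== 'SerialNumber') {
      cpu.serialnumber = rest;             // value line following the wmic column header
    }
  });
  return cpu;
}

var cpuInfo = parseSsmOutput('###CPU###Manufacturer=GenuineIntel\n###DISK###100');
```

If the raw S3 object contains these markers for the affected instance but the staging table row has no CPU data, focus the investigation on the parsing/transform stage rather than on the SSM scripts themselves.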