OAuth 2.0 credential configured with an extremely short TTL (<1 minute) causes ECC queue flooding (credentials_reload command) and leads to semaphore exhaustion on the instance

Description

After configuring an OAuth2 token with a short TTL, you observe an increasing number of credentials_reload records in the ECC queue and a growing performance impact on the instance and on activities that use the MID Server.

Symptoms:

- All MID Servers receive the same credentials_reload jobs, so all of them run flat-out fetching, processing, and returning the results to the instance via the API_INT semaphores. This delays or blocks other features that use those semaphores; for example, inbound REST and SOAP integrations and other API calls may fail with status code 429 rejections, including MID Server connections.
- The MID Servers' queues of jobs are likely to be delayed by hours, causing many of those jobs to time out.
- Import sets may run for hours.
- Discovery may time out at max runtime.

Confirming you have the problem:

To have this problem, you need an active OAuth2 credential in the Discovery Credentials table that is set to "All MID Servers".

When you open the OAuth2 credential record, you are likely to see a warning banner for imminent token expiry. Reloading the form a minute or so later will likely show a different expiry time, confirming that the token has a very short TTL:

"OAuth Access token is available but will expire soon at 25-Jun-20 10:42:47. Verify the OAuth configuration and click the 'Get OAuth Token' link below to request a new token."
"OAuth Access token is available but will expire soon at 25-Jun-20 10:43:07. Verify the OAuth configuration and click the 'Get OAuth Token' link below to request a new token."

Look in the ECC Queue table and filter on Source=credentials_reload. The ECC Queue is rotated and holds only 4-5 days' worth of data, so if you see thousands of these records you probably have this problem. If you filter further on Queue=Output and State=Ready, the records shown are the backlog that the MID Servers are struggling to process.

Instances running versions older than Orlando and New York Patch 8 are likely to have an additional problem as well. However, if you also have a huge number of credentials_reload records in the ecc_queue, then this is the problem to link with the case, because this problem's workaround is required in addition to the steps for the token TTL change and cleanup:

PRB1342894 / KB0748823 - "An error occurred while decrypting credentials from instance" - Creating an OAuth 2.0 credential results in no credentials being retrieved by the MID Server and Probes no longer being able to use them

Steps to Reproduce

1. Configure an OAuth2 token with a TTL of less than 1 minute.
2. Set the OAuth2 discovery_credential record to "All MID Servers", which is the default.
3. Configure a significant number of MID Servers (>100).
4. Let the MID Servers run.

Note: This problem has been seen with considerably fewer MID Servers. It is the combination of the short TTL and the number of MID Servers that causes the impact.

Workaround

- Disable the "Sync Credentials to MID servers" business rule.
- Set the problematic OAuth2 credential to "Specific MIDs" with an empty MID Server assignment.
- Set the "credentials_reload" ecc_queue records in the Output queue and Ready state to "Error", either through the database or with a background script (a sketch follows at the end of this article).
- Increase the OAuth2 token TTL to more than 30 minutes.
- After these steps are done, the "Sync Credentials to MID servers" business rule can be reactivated.

Related Problem: PRB1411442
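
For the background-script option in the third workaround step, the following is a minimal sketch, not an official script. It assumes the standard ecc_queue columns (source, queue, state) and their usual choice values (output, ready, error); verify these on your instance before running it. The script first logs the size of the backlog, then marks it as Error. Run it only after the "Sync Credentials to MID servers" business rule has been disabled, so new records do not keep arriving; if you want to check the count first, comment out the update loop on the first run.

    // Background script sketch: mark the credentials_reload backlog as Error
    // so the MID Servers stop trying to process it.
    var gr = new GlideRecord('ecc_queue');
    gr.addQuery('source', 'credentials_reload');
    gr.addQuery('queue', 'output');
    gr.addQuery('state', 'ready');
    gr.query();
    gs.info('credentials_reload backlog (output/ready): ' + gr.getRowCount());

    gr.setWorkflow(false); // skip business rules/engines during the bulk update
    while (gr.next()) {
        gr.state = 'error'; // take the record out of the MID Servers' work queue
        gr.update();
    }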