Auto MID selection causes excessive metrics to accumulate when connection issues occur

Description
During auto-mid-selection, the MID server can experience excessive memory allocation and CPU spikes when agents with network connectivity issues attempt to connect. As part of auto-mid-selection, agents make REST-based calls to the MID server to query the number of connected agents, so that load can be balanced and the maximum number of allowed agent connections honored. Each such call generates a metric instance for tracking; these instances accumulate and can block other threads calling into the MID web server that hosts the REST endpoints. As a result, the MID server experiences heavy CPU spikes and excessive memory retention while agents attempt to find the correct MID server to connect to. Combined with network interruptions, this can cause the MID server to run out of memory.

Steps to Reproduce
This was observed via heap analysis in CSTASK1241306. Several threads were blocked on the REST service call for /api/mid/mon, which is used only by the agent during auto-mid-selection. Analysis of the code indicates that the RateCounter is retrieved via a sync() call on the existing object to ensure data is available when toString() is called on it. This places a new instance of the data into memory, which accumulates and is not released for 24 hours.

To reproduce:
- Create a default MID server and enable auto-mid-selection.
- Install and configure an agent using auto-mid-selection on Linux, for ease of testing.
- On the agent, create a cron job that restarts the agent every 15 seconds.
- Take a heap snapshot of the MID server JVM to record baseline usage and note the number of TimeSlot objects allocated.
- Stop the MID server.
- Repeat the steps above on a build containing the fix.
- Compare the number of TimeSlot instances. The count should be lower, if not eliminated entirely.

Workaround
Disable auto-mid-selection in the acc.yml file.

Related Problem: PRB1966095
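
The accumulation pattern described in the analysis can be sketched in Java. This is a minimal, hypothetical illustration only: the class and method names (RateCounterSketch, LeakyRateCounter, TimeSlot, sync()) mirror the identifiers mentioned above, but the implementation is assumed and is not the actual MID server code. It contrasts a counter that allocates a fresh TimeSlot on every sync() (so a tight agent reconnect loop grows the heap until a 24-hour eviction) with one that reuses the slot for the current time bucket.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the leak pattern; not the actual MID server code.
public class RateCounterSketch {

    static class TimeSlot {
        final long epochSecond;
        long count;
        TimeSlot(long epochSecond) { this.epochSecond = epochSecond; }
    }

    // Leaky variant: every sync() appends a new TimeSlot, and nothing
    // releases old instances until a (simulated) 24-hour window lapses.
    static class LeakyRateCounter {
        final List<TimeSlot> slots = new ArrayList<>();
        TimeSlot sync(long nowSec) {
            TimeSlot slot = new TimeSlot(nowSec);
            slots.add(slot);              // one new instance per REST call
            return slot;
        }
    }

    // Fixed variant: reuse the slot for the current second instead of
    // allocating a new instance on every call.
    static class ReusingRateCounter {
        final List<TimeSlot> slots = new ArrayList<>();
        TimeSlot sync(long nowSec) {
            if (!slots.isEmpty() && slots.get(slots.size() - 1).epochSecond == nowSec) {
                return slots.get(slots.size() - 1);
            }
            TimeSlot slot = new TimeSlot(nowSec);
            slots.add(slot);
            return slot;
        }
    }

    public static void main(String[] args) {
        LeakyRateCounter leaky = new LeakyRateCounter();
        ReusingRateCounter reusing = new ReusingRateCounter();
        // Simulate an agent restart loop hammering the monitoring endpoint:
        // 1000 calls landing within the same second.
        for (int i = 0; i < 1000; i++) {
            leaky.sync(42L).count++;
            reusing.sync(42L).count++;
        }
        System.out.println(leaky.slots.size());   // prints 1000
        System.out.println(reusing.slots.size()); // prints 1
    }
}
```

Under the repro above (an agent restarting every 15 seconds), the leaky shape explains both symptoms: retained TimeSlot instances in the heap snapshot, and GC pressure contributing to the observed CPU spikes.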