Data Residency and Privacy for AI

Advanced AI security concerns

Artificial intelligence (AI) is a transformative technology that has introduced a host of new possibilities and opportunities for industries around the world. As with any new technology, however, AI comes with significant potential security risks and challenges that companies should carefully evaluate. ServiceNow is committed to developing responsible AI solutions by mitigating common risks associated with AI, including hallucinations, bias, accuracy, and consistency in responses. Please read the Responsible AI at ServiceNow white paper for more details.

Customers may also have concerns related to data residency, architecture, and privacy when implementing AI solutions. It is important to understand where data resides at rest or in transit, how data is used, and the steps ServiceNow takes to ensure data privacy.

Data residency for advanced AI

ServiceNow processes AI workloads using specialized infrastructure that supports secure, high-performance inference. Routing and execution behavior depend on the model provider, region, and customer configuration.
ServiceNow AI features, including Now Assist and AI Agents

To execute ServiceNow advanced AI and data products like Now Assist and AI Agents efficiently, ServiceNow uses specialized compute infrastructure provisioned for generative AI when the ServiceNow language models are used. ServiceNow concentrates this compute capacity in regional and in-country compute hubs to deliver consistent AI performance for customers.

Customer workloads are securely transmitted from the ServiceNow instance to one of these compute hubs using Transport Layer Security (TLS) 1.2 at a minimum. Inference processing occurs in memory only, and the data used to generate the response is deleted immediately afterward. The result is then returned to the instance.

In periods of high demand for ServiceNow language models, ServiceNow may leverage Microsoft Azure Public Cloud Infrastructure to temporarily burst traffic and maintain system performance. Bursting is handled via ServiceNow-managed Azure networks within the same region. Customers can opt out of bursting using the AI Control Tower.

Third-party model providers

ServiceNow provides secure access to both third-party language models and additional AI services through OEM integrations, including the OEM Microsoft Azure AI Service (Translator), Microsoft Azure OpenAI, Google Gemini, and Anthropic Claude on Amazon Web Services (AWS). Data sent to a third-party endpoint is processed entirely within that model provider's (or model provider service's) endpoint. Furthermore, data processed by third-party endpoints is not subject to use or access by the third-party providers, with the exception of AWS, which uses automated abuse detection mechanisms to detect harmful content. ServiceNow has disabled abuse monitoring for Microsoft Azure OpenAI and Google Gemini. For more information on data processing by third-party language model providers, visit the Data Processing for Advanced AI & Data Products FAQ.
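The TLS 1.2 minimum described above is a transport-level floor that clients can also enforce on their own side. As a minimal sketch (not ServiceNow's actual client code, and the hub routing itself is managed by ServiceNow, not the customer), Python's standard ssl module can pin a context to TLS 1.2 or newer:

```python
import ssl

def build_tls12_context() -> ssl.SSLContext:
    """Create a client TLS context that refuses anything below TLS 1.2,
    matching the minimum protocol version described above."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0/1.1 handshakes
    return ctx

ctx = build_tls12_context()
# Certificate verification and hostname checking stay on by default.
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
```

Any connection opened with such a context fails the handshake if the peer only offers an older protocol version.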
Protecting sensitive data

ServiceNow provides multiple privacy-preserving capabilities to protect customer data during the processing of generative AI prompts and responses. During interactions, users may accidentally share sensitive data such as Social Security numbers, credit card information, or other personally identifiable information that should not be sent to AI for processing. ServiceNow provides out-of-the-box data privacy features for Now Assist and Virtual Agent that mask sensitive data using real-time anonymization techniques so that it is not processed from interactions with AI. Placeholder text and anonymized data are sent with the prompt instead, and these values are replaced with the original text after the response has been received. This two-way masking ensures that end users receive accurate responses while sensitive data is not exposed to the language model.

Customers can also define privacy policies and data patterns to identify and mask many types of sensitive data, ensuring that their privacy standards meet any policy requirements. These safeguards are applied across all AI integrations and ensure that data remains secure, isolated, and transient throughout its lifecycle.

Customers retain control over how their data is used

Data plays a critical role in delivering the best possible AI experience. ServiceNow collects specific usage data from opted-in customers to help develop, evaluate, and improve the performance of its AI models. This data helps determine whether model outputs are accurate, relevant, and aligned to expected business outcomes. As the data controller, customers retain full control over their data and how it is used. ServiceNow does not share customer-contributed data with third-party language model providers for their model improvement purposes. This includes providers such as Microsoft Azure OpenAI, Google Gemini, Anthropic Claude on AWS, and others.
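The two-way masking described under "Protecting sensitive data" can be sketched as a pair of steps: replace sensitive values with placeholders before the prompt leaves the instance, keep the mapping locally, and restore the original values in the returned response. The patterns and placeholder names below are illustrative assumptions, not the platform's actual Data Privacy configuration:

```python
import re

# Illustrative patterns only; real privacy policies and data patterns are
# configured in the platform and cover many more data types.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def anonymize(prompt: str) -> tuple[str, dict[str, str]]:
    """Swap sensitive values for placeholders; keep the mapping locally."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        def _sub(match, label=label):
            token = f"<{label}_{len(mapping)}>"
            mapping[token] = match.group(0)
            return token
        prompt = pattern.sub(_sub, prompt)
    return prompt, mapping

def deanonymize(response: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the model response."""
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response

masked, mapping = anonymize("My SSN is 123-45-6789.")
# The language model only ever sees the placeholder, never the raw value.
assert "123-45-6789" not in masked
assert deanonymize(masked, mapping) == "My SSN is 123-45-6789."
```

Because the mapping never leaves the instance, the model receives only placeholders, yet the end user still sees a fully restored response.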
Sharing data with ServiceNow to help develop and improve ServiceNow language models is optional. Following the adoption of ServiceNow AI solutions, customers have 30 days to opt out of sharing their data with ServiceNow. No customer data is collected during these first 30 days, and customers can change their data sharing preferences at any time after the 30-day period.

Whenever possible, ServiceNow may use filtered AI content from opted-in customers to create synthetic data for the development of ServiceNow language models. Data is extracted nightly and sent to a dedicated development environment, which resides in a data center distinct from the ServiceNow processing compute hubs. ServiceNow performs a multi-step cleansing process designed to remove sensitive information both within the customer's instance and within ServiceNow's AI Development Environment:

1. The first stage occurs in the customer instance, where a default set of data masking patterns is used to identify and sanitize personal data prior to extraction.
2. In the second stage, ServiceNow applies industry-standard tooling and in-house techniques in its AI Development Environment to further detect and remove residual personal data before the data is used for model improvement activities.

With Data Privacy for Now Assist, customers can also take advantage of the same cleansing process ServiceNow uses to protect sensitive data. Data Privacy for Now Assist offers customers the option to mask sensitive data such as age, phone number, and other personally identifiable information so that it is not processed from generative AI prompts. Placeholder text and anonymized data are sent with the prompt instead, and these values are replaced with the original text after the language model response has been received. This two-way masking ensures that end users receive accurate responses while sensitive data is not exposed to the language models.
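The two-stage cleansing process described above can be sketched as a pipeline: stage one masks known personal-data patterns before extraction, and stage two screens the extracted records for anything residual. The pattern sets and function names below are illustrative assumptions; ServiceNow's actual default masking patterns and development-environment tooling are not published here:

```python
import re

# Stage 1 (customer instance): default masking patterns, illustrative only.
STAGE1_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
]
# Stage 2 (AI Development Environment): a residual-data check, e.g. any
# long digit run that slipped past stage 1. Hypothetical heuristic.
STAGE2_RESIDUAL = re.compile(r"\b\d{7,}\b")

def stage1_sanitize(record: str) -> str:
    """Mask personal data with default patterns prior to extraction."""
    for pattern, placeholder in STAGE1_PATTERNS:
        record = pattern.sub(placeholder, record)
    return record

def stage2_filter(records: list[str]) -> list[str]:
    """Drop any extracted record that still carries residual personal data."""
    return [r for r in records if not STAGE2_RESIDUAL.search(r)]

extracted = [stage1_sanitize(r) for r in [
    "Contact jane@example.com about case 12345678",
    "Ticket closed, no further action needed",
]]
usable = stage2_filter(extracted)
# Only the record with no residual personal data survives both stages.
assert usable == ["Ticket closed, no further action needed"]
```

The key design point is defense in depth: records are sanitized before they leave the instance, and the development environment independently re-checks them before any model improvement use.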
It is the customer's responsibility to ensure all data sharing activities comply with their organization's security and privacy policies. For more information, visit the Security Best Practices Guide.