Trusted AI

Unlocking your path to AI trust and assurance

Understanding how your technology partners use AI is the first step to unlocking its potential for your firm. With Intapp technology, your platform safeguards are guided by over a decade of expertise integrating AI capabilities into our tools.

And you can trust that as AI capabilities change and grow, we’ll continue to develop new security protocols that prioritize the integrity and safety of your data.

Unlock the power of AI for your firm with confidence — and without putting your data at risk.

Frequently asked questions about Intapp Applied AI

AI practices

We use two types of models:

  • Private models: Client-specific models fine-tuned on client data. These models may use Intapp proprietary data as well.
  • Shared models: Built using Intapp proprietary data and client-agnostic data, or based on services provided by other vendors such as Microsoft Azure OpenAI.

The table below identifies the various ways our models are used in Intapp products:

Product | Feature | Model
Intapp Terms | Analyze documents to discover contract terms | Private
Intapp Conflicts | Discover conflicts of interest | Private
Intapp Time | Assist in reporting work | Private
Intapp DealCloud | Relationship Intelligence | Shared and private
Intapp DealCloud | Generative experiences, Recommendations | Shared and/or private, depending on which Intapp DealCloud features are enabled


AI data governance

Intapp Applied AI uses data that clients create in various Intapp products in the course of using those products.

In addition, Intapp also captures:

  • Client data: This is data generated through a client’s use of various features. It is securely stored in client-specific data stores and used, with client permission, to deliver private, client-specific AI models.
  • Application data: This is all the metadata associated with that data and activity, along with any additional features and metrics that can be derived from them. This data is collected and stored in our infrastructure to monitor system performance and user satisfaction.

Client data is processed in different ways depending on the AI use case. Typically, the first step converts client data into a machine-comprehensible format; this may include tokenization, vector encoding, and other processing steps. For example, for generative AI experiences, client data is first processed in a client’s tenant environment to generate a prompt appropriate for the AI feature being used.
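To make that preprocessing step concrete, here is a minimal, hypothetical sketch of assembling a prompt from client records under a token budget. The token counting, budget, and function names are illustrative assumptions, not Intapp’s actual pipeline.

```python
# Illustrative sketch only -- not Intapp's actual pipeline. It shows the
# general shape of the step described above: client data is trimmed to a
# token budget and assembled into a prompt inside the client's tenant
# before any model call is made.

MAX_PROMPT_TOKENS = 3000  # hypothetical budget for the downstream model


def rough_token_count(text: str) -> int:
    """Crude whitespace-based token estimate; real systems use the
    model's own tokenizer."""
    return len(text.split())


def build_prompt(feature_instructions: str, client_records: list[str]) -> str:
    """Assemble a prompt from client data, staying under a token budget."""
    context_parts: list[str] = []
    used = rough_token_count(feature_instructions)
    for record in client_records:
        cost = rough_token_count(record)
        if used + cost > MAX_PROMPT_TOKENS:
            break  # stop adding context once the budget is exhausted
        context_parts.append(record)
        used += cost
    context = "\n".join(context_parts)
    return f"{feature_instructions}\n\nContext:\n{context}"
```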

AI services using shared models are stateless and therefore designed to retain no data. AI services using private models retain some request metadata to support ongoing training. AI service responses, along with the inputs supplied to generate them, are captured, transferred back to the Intapp product that initiated the call, and stored, when appropriate, with other client data. Client data is encrypted in transit throughout these steps.

Client data is processed exclusively within Intapp’s infrastructure footprint. Private models live alongside a client’s own tenant in its regional cluster. Shared models are deployed across our global clusters as the required infrastructure becomes available in different regions. For non-US regions that do not yet support the infrastructure Intapp AI requires, calls are routed to the EU infrastructure.

Client data is not used for training shared models. However, our private models do use client data for training. Explicit client permission is required before we access client data to train a private model.

Each request processed through our generative AI service is assigned a unique inference identifier, which is retained throughout the lifecycle of the request. This measure ensures that data remains segregated between clients, and that the progression of each request can be traced from initiation to the response presented in the user interface.
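A minimal sketch of that traceability pattern follows. The logging scheme and the run_model stand-in are hypothetical, not Intapp’s service code.

```python
# Illustrative sketch only: a unique inference ID is minted when a request
# enters the service and carried through every stage, so logs from each
# stage can be joined back to one request without mixing clients' data.

import logging
import uuid

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("inference")


def run_model(payload: str) -> str:
    """Stand-in for the actual model invocation."""
    return f"model output for: {payload[:40]}"


def handle_request(client_id: str, payload: str) -> dict:
    inference_id = str(uuid.uuid4())  # retained for the request's lifecycle
    log.info("inference_id=%s client=%s stage=received", inference_id, client_id)

    response = run_model(payload)

    log.info("inference_id=%s client=%s stage=responded", inference_id, client_id)
    # The ID travels with the response so the UI can trace it end to end.
    return {"inference_id": inference_id, "response": response}
```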

Intapp’s products are designed to avoid commingling client data when processing calls. When an Intapp product calls a Microsoft Azure OpenAI service, the content is not stored by the stateless OpenAI service. View the Microsoft Azure OpenAI data policy.

Intapp maintains industry-standard security controls on the client tenant environments where client data is stored. Intapp is regularly audited and maintains security and privacy certifications to validate its security and privacy controls. Learn more.

Intapp uses third-party sub-processors to provide our services to clients. The list of sub-processors is available here.

Intapp does not sell or make available any client data. View the Intapp Privacy Policy to learn more.

Intapp has specific procedures defined for data access, rectification, and removal, which are described in the Intapp Data Processing Addendum.

Any model that has been directly trained with client data will be destroyed when that data is destroyed.

It depends on whether the model is private or shared.

  • Shared models: Deployed centrally and trained using our proprietary data sets. Client data is not used to train shared models.
  • Private models: Trained on client-specific data and deployed within a client’s individual tenant security perimeter.

The table below identifies the various ways our models are used in Intapp products:

Product | Feature | Model
Intapp Terms | Analyze documents to discover contract terms | Private
Intapp Conflicts | Discover conflicts of interest | Private
Intapp Time | Assist in reporting work | Private
Intapp DealCloud | Relationship Intelligence | Shared and private
Intapp DealCloud | Generative experiences, Recommendations | Shared and/or private, depending on which Intapp DealCloud features are enabled


We use dedicated cloud monitoring infrastructure to track the performance of AI services. User interactions and AI results are also monitored and stored in client-specific databases.

Data that fits Intapp’s application data definition can be used by automated benchmarking and analytic processes to track algorithm performance and reliability. On an as-needed basis, and only with written permission, specialized Intapp personnel can be given access to feedback data containing client data to conduct client support and debugging activities.

Intapp AI products are designed to perform inference on business processes, which by the nature of the task minimizes the risk of discrimination and harm. In addition, we implement explicit content-moderation filters where applicable.

Our AI-driven features provide responses that help users complete their tasks efficiently. Intapp Applied AI does not take action or make decisions — the user is ultimately responsible for the action taken or decision made. The output of our AI models is presented with context reminding users their review is necessary before acting on AI-generated information. No outputs of Intapp AI pass directly to systems outside of Intapp infrastructure automatically.

Intapp Applied AI services are provided with adequate documentation of the behavior of each specific AI functionality. We continually test AI systems to confirm that they work as intended. In addition, we use Azure AI Content Safety, provided by the Azure AI Service, to safeguard against potentially obscene content.
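For reference, a minimal sketch of a text-moderation call using the azure-ai-contentsafety Python SDK is shown below. The endpoint and key are placeholders, and this wiring is an assumption about how such a safeguard could be attached, not Intapp’s actual integration.

```python
# Hypothetical wiring of an Azure AI Content Safety text screen.
# Endpoint and key are placeholders; this is not Intapp's integration.

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",  # placeholder
    credential=AzureKeyCredential("<your-key>"),  # placeholder
)

# Screen a piece of model output before it is shown to a user.
result = client.analyze_text(AnalyzeTextOptions(text="text to screen"))

# Each category (hate, self-harm, sexual, violence) comes back with a
# severity score; the caller decides what severity level to block at.
for item in result.categories_analysis:
    print(item.category, item.severity)
```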

Intapp’s Security, Legal, Data, and AI teams work in constant coordination to determine what adjustments must be made to our AI services and product features in response to regulatory changes and technical opportunities.

Intapp AI solutions are continuously benchmarked and tested. From time to time, Intapp also conducts comparative analysis of existing solutions against new technologies, algorithms, and platforms. This can lead to a partial or complete replacement or upgrade of some solutions when deemed necessary.

Model considerations

Our AI products primarily use well-understood, documented services. In addition, our models provide confidence indicators for AI inferences and, where appropriate, explanatory content in our product interfaces.

We constantly monitor and evaluate model performance against both quantitative and qualitative benchmarks. Specific attention is paid to potential data and concept drift (a common problem for AI models), where, over long periods of time, actual data comes to differ substantially from the training data. Techniques such as model adaptation, incremental training, and parameter calibration are used periodically to ensure that model integrity and reliability are preserved over time.
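As a concrete illustration of drift monitoring (not Intapp’s actual tooling), the sketch below computes the Population Stability Index, a common statistic for detecting when live data has drifted away from the training distribution. All data here is synthetic.

```python
# Illustrative drift check: Population Stability Index (PSI) between the
# score distribution at training time and the distribution in production.

import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between two score samples.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) and division by zero in sparse bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))


training_scores = np.random.normal(0.6, 0.10, 5000)    # synthetic training data
production_scores = np.random.normal(0.5, 0.15, 5000)  # synthetic live data
print(f"PSI = {psi(training_scores, production_scores):.3f}")
```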

Most of our AI models return confidence scores with inference events. Such scores can be interpreted as the model’s degree of certainty regarding its outputs. While these are good indicators of relative relevance among outputs (i.e., useful for ranking), they cannot be used as absolute quality metrics.
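A small sketch of that distinction, with hypothetical results: confidence is used to order candidates, while treating the raw number as an absolute quality guarantee is avoided.

```python
# Confidence scores are safe for ordering candidates, but applying one
# fixed threshold as an absolute quality gate across models is not.

results = [
    {"answer": "Clause 4.2 limits liability", "confidence": 0.81},
    {"answer": "Clause 7.1 covers indemnity", "confidence": 0.64},
    {"answer": "Clause 2.3 defines terms", "confidence": 0.58},
]

# Good: use confidence to rank, so the most likely answer appears first.
ranked = sorted(results, key=lambda r: r["confidence"], reverse=True)
for r in ranked:
    print(f'{r["confidence"]:.2f}  {r["answer"]}')

# Risky: treating the raw score as an accuracy guarantee. A 0.81 from one
# model or model version is not comparable to a 0.81 from another.
```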

Past queries might be used (either explicitly or implicitly) in autocompletion or query-augmentation mechanisms to improve retrieval and inference efficiency and quality. Past queries may be used for as long as the service is available.

Our model APIs are only accessible through secure internal product calls with the corresponding client access keys and user IDs. The service behind the API will ignore any invalid requests. In addition, content filtering and moderation are used to prevent attempts to elicit malformed or malicious content from the model.
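A hypothetical sketch of that gatekeeping behavior follows; the key store, function names, and moderation check are illustrative, not Intapp’s implementation.

```python
# Illustrative sketch (not Intapp's implementation): an internal model API
# that silently drops requests lacking a valid client access key and user
# ID, matching the "ignore invalid requests" behavior described above.

VALID_KEYS = {"client-a": "key-123"}  # hypothetical key store


def is_malicious(prompt: str) -> bool:
    """Stand-in for a real content-moderation check."""
    return "ignore previous instructions" in prompt.lower()


def call_model_api(client_id: str, access_key: str, user_id: str, prompt: str):
    if VALID_KEYS.get(client_id) != access_key or not user_id:
        return None  # invalid requests are ignored, not answered with errors
    if is_malicious(prompt):
        return None  # content filtering blocks malformed/malicious inputs
    return f"inference for {client_id}/{user_id}"
```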

Model ethics

Most of the generative AI use cases supported at Intapp follow a retrieval-augmented generation (RAG) approach, which allows for explicitly referencing the source of information used to generate the model response. For those generative AI use cases not implementing RAG — but still using LLMs pre-trained with publicly available data (such as Microsoft Azure OpenAI GPT-3.5) — there is no way to prevent copyrighted content from potentially surfacing in model outputs.
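For illustration, the sketch below shows the general RAG shape described above: retrieve the most relevant documents, generate from them, and return source references alongside the answer. The toy word-overlap retriever and the call_llm stand-in are assumptions, not Intapp code.

```python
# Illustrative RAG sketch: because generation is grounded in retrieved
# documents, the source IDs can be returned and cited with the answer.

def retrieve(query: str, documents: dict[str, str], top_k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc_id: len(q_words & set(documents[doc_id].lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def call_llm(prompt: str) -> str:
    """Stand-in for the actual LLM call."""
    return "generated answer grounded in the retrieved context"


def answer_with_sources(query: str, documents: dict[str, str]) -> dict:
    source_ids = retrieve(query, documents)
    context = "\n".join(documents[d] for d in source_ids)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    # Returning the source IDs lets the UI cite where the answer came from.
    return {"answer": call_llm(prompt), "sources": source_ids}
```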

Although algorithmic bias doesn’t pose a real threat in most of the ways we apply AI at Intapp, we do still use several techniques to reduce bias in our models. These techniques include, but are not limited to, training data filtering, balancing, and augmentation, and client-agnostic model weight calibrations. Intapp also supports prompting controls that provide explicit instructions to AI models.
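As one concrete example of the balancing technique named above, the sketch below downsamples over-represented groups in a training set. It is illustrative only, not Intapp’s training pipeline.

```python
# Illustrative bias-reduction step: balance training data by downsampling
# each group to the size of the smallest group.

import random
from collections import defaultdict


def balance_by_group(examples: list[dict], group_key: str, seed: int = 0) -> list[dict]:
    """Downsample each group to the size of the smallest group."""
    groups: dict[str, list[dict]] = defaultdict(list)
    for ex in examples:
        groups[ex[group_key]].append(ex)
    target = min(len(members) for members in groups.values())
    rng = random.Random(seed)  # fixed seed keeps the sampling reproducible
    balanced: list[dict] = []
    for members in groups.values():
        balanced.extend(rng.sample(members, target))
    return balanced
```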

We constantly monitor and evaluate model performance against both testing benchmarks and, with explicit permission, actual client data. Techniques such as model adaptation, incremental training, and parameter calibration are used periodically to reduce bias and improve model performance and reliability.

User consent and control

You have the same extent of control over the data available to AI algorithms as you currently have in Intapp products. For instance, if a client prevents an Intapp product from accessing an email inbox, AI algorithms will not have access to that inbox either.

The private models trained with your data belong to Intapp. Intapp uses models directly trained with your data only to process or make inferences on your own data.

You can obtain a copy of the private models trained with your data after signing the corresponding contract or agreement with us and releasing us from any responsibility for undesirable outcomes resulting from your use of the model.

The outputs generated by a model for a given client input are owned by the client. We may periodically use the output for evaluation and training to improve client-specific models.

Get started today