Security & Data Policies
Patterns is engineered to guarantee that data access is limited to authorized users only. Its architecture ensures the segregation of organizations and their respective data sets, with additional layers of isolation applied on a per-user basis.
Our approach to product and technical design adheres to several key principles:
All customer data is encrypted both at rest and during transmission across public networks. Furthermore, customer credentials undergo an extra layer of encryption within the application, decryptable only by specific application components that need access.
User authentication is managed by an external identity provider, offering the flexibility to include extra user information. This information can be utilized in authorization processes to, for instance, restrict data access levels for users.
Every request received by Patterns undergoes immediate authentication and authorization checks. Once a request is authenticated, an authorization context is established for all subsequent actions, confining them to the data and settings relevant to the specific user and their organization (see the sketch following these principles).
The ability of Patterns staff to access a customer's instance for support purposes is transparent and can be regulated by the customer, ensuring visibility and control.
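To make the principles above more concrete, here is a minimal sketch, assuming a claims-bearing token from the identity provider, of how a per-request authorization context could gate access to an application-layer-encrypted credential. The class and function names, claim fields, and key handling are illustrative assumptions, not Patterns' actual implementation.

```python
# Illustrative sketch only: AuthorizationContext, build_context, and the claim
# fields below are hypothetical names, not the Patterns codebase.
from dataclasses import dataclass

from cryptography.fernet import Fernet  # symmetric application-layer encryption


@dataclass(frozen=True)
class AuthorizationContext:
    """Scopes every subsequent action to one user and one organization."""
    user_id: str
    org_id: str
    roles: tuple[str, ...]


def build_context(id_token_claims: dict) -> AuthorizationContext:
    # Claims are issued by the external identity provider; extra attributes
    # (such as roles) can be used to restrict what data a user may access.
    return AuthorizationContext(
        user_id=id_token_claims["sub"],
        org_id=id_token_claims["org_id"],
        roles=tuple(id_token_claims.get("roles", ())),
    )


# Customer credentials are stored encrypted; in practice the key would live in a
# secrets manager and be available only to the component that needs it.
CREDENTIAL_KEY = Fernet.generate_key()
_fernet = Fernet(CREDENTIAL_KEY)


def store_credential(plaintext: bytes) -> bytes:
    return _fernet.encrypt(plaintext)       # encrypted at rest


def use_credential(ciphertext: bytes, ctx: AuthorizationContext) -> bytes:
    if "connection:read" not in ctx.roles:  # authorization check before decryption
        raise PermissionError("not allowed to use this connection")
    return _fernet.decrypt(ciphertext)      # decrypted only at the moment of use
```

The properties this illustrates are the ones stated above: authorization happens before any data or credential access, and credentials are decrypted only by the component that needs them.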
Patterns processes the following data:
Information about Patterns users, for example name and email. This does not include user passwords, since authentication is delegated to a third-party identity provider
Patterns configuration data, for example data schemas, data samples, prompt context, connection parameters, and chart and report configuration
Data contained in the data sources connected to Patterns, referred to as "Customer Data"
All Customer Data is stored and persisted in the customer's data warehouse. The Patterns Query Engine writes query results to a new table in a dedicated Patterns schema in the customer's data warehouse. Results from query execution are temporarily cached on Patterns servers to power the user interface (a simplified sketch follows below).
Credentials to access Customer Data, referred to as "Customer Credentials"
Customer Data and Customer Credentials are logically segregated on Patterns systems by customer tenant ID and unique dataset identifiers. Customer Data and Customer Credentials are always encrypted at rest and in transit over public networks. Ownership of Customer Data is retained by the customer.
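As a rough illustration of how query results stay inside the customer's warehouse while only a short-lived copy is cached to power the UI, consider the sketch below. The schema name, table-naming scheme, cache TTL, and the `warehouse` connection object are hypothetical placeholders, not the actual Patterns Query Engine.

```python
# Illustrative sketch only: the `warehouse` object stands in for a DB-API-style
# connection to the customer's data warehouse.
import time
import uuid

RESULT_SCHEMA = "patterns"        # dedicated schema inside the customer's warehouse
_CACHE_TTL_SECONDS = 600          # cached only briefly to power the user interface
_result_cache: dict[str, tuple[float, list]] = {}


def run_query(warehouse, sql: str) -> str:
    """Materialize results in the customer's warehouse; keep a short-lived cached copy."""
    table = f"{RESULT_SCHEMA}.result_{uuid.uuid4().hex}"
    # Results are persisted as a new table in the dedicated Patterns schema,
    # so Customer Data never leaves the customer's warehouse.
    warehouse.execute(f"CREATE TABLE {table} AS {sql}")
    rows = warehouse.execute(f"SELECT * FROM {table} LIMIT 1000").fetchall()
    _result_cache[table] = (time.time(), rows)   # temporary cache on Patterns servers
    return table


def cached_rows(table: str):
    entry = _result_cache.get(table)
    if entry is None or time.time() - entry[0] > _CACHE_TTL_SECONDS:
        _result_cache.pop(table, None)           # expired entries are dropped
        return None
    return entry[1]
```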
We use a combination of proprietary and third-party large language models from OpenAI, Anthropic, and Mistral AI to ensure the best performance, accuracy and resilience.
We use these third-party providers alongside self-hosted open-source models. We require these providers to use customer information only for the purpose of facilitating the Patterns tool, and we do not allow them to train their AI models on customers' personal information. We require these providers to delete personal information within 30 days, unless otherwise required by law.
We implement comprehensive security measures to protect against malicious prompts, including input validation, behavior analysis, and constant monitoring of AI interactions. Our systems are designed to detect and mitigate potential threats or unusual patterns of activity, ensuring that our AI models respond appropriately to all inputs.
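The snippet below is a deliberately simplified illustration of the kind of input validation and monitoring described above. The specific rules and the logger name are hypothetical, and real defenses combine many more signals than a static pattern list.

```python
# Illustrative only: real prompt-injection defenses use far more than a pattern list.
import logging
import re

logger = logging.getLogger("ai_guardrails")   # hypothetical monitoring channel

_SUSPICIOUS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal.*(system prompt|credentials)", re.IGNORECASE),
]


def screen_prompt(user_id: str, prompt: str) -> bool:
    """Return True if the prompt may proceed; log anything suspicious for review."""
    for rule in _SUSPICIOUS:
        if rule.search(prompt):
            logger.warning("suspicious prompt from %s: %r", user_id, prompt[:200])
            return False    # blocked and surfaced to monitoring
    return True
```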
At Patterns, we prioritize the privacy and security of customer data. Importantly, none of your data is used for training our AI models. While customer data may be temporarily sent to third-party large language model (LLM) providers such as OpenAI, or processed by our self-hosted models, it is used solely to facilitate the Patterns tool's functionality. Third-party providers are contractually bound to delete customer data within 30 days and are prohibited from using it for model training purposes.
We leverage in-context learning to enhance the performance of our AI models. This approach allows the AI to understand and respond based on the specific context of each query without any fine-tuning or permanent training involving your data. Consequently, there is no risk of your information being inadvertently retained or leaked through AI model training. This ensures that your data remains confidential and is used strictly for the purpose of delivering immediate, context-specific responses.
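In practical terms, in-context learning means assembling a fresh prompt for each request from transient context such as schemas and sample rows, with nothing ever fed back into model training. The prompt layout and parameter names below are hypothetical, sketched only to show the shape of the approach.

```python
# Illustrative only: context is supplied per request and discarded; no fine-tuning occurs.
def build_prompt(question: str, schema_ddl: str, sample_rows: list[dict]) -> str:
    """Assemble a single, per-request prompt: context in, answer out, nothing retained."""
    samples = "\n".join(str(row) for row in sample_rows[:5])   # a few rows for grounding
    return (
        "You are a data analyst. Answer using only the context below.\n\n"
        f"Schema:\n{schema_ddl}\n\n"
        f"Sample rows:\n{samples}\n\n"
        f"Question: {question}\n"
    )
```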
Patterns maintains written policies and procedures designed to ensure the security of our employees, partners, and customers. Patterns' CTO is responsible for the Information Security Program, which is reviewed and updated periodically. Compliance with the policies and procedures is audited through a SOC 2 Type II audit (report available upon request). Some key aspects of the program include:
Employees and contractors with access to company data and resources must sign a confidentiality agreement, agree to comply with the policies of the information security program, and pass a background check by a third-party provider.
We classify Customer Data as our most sensitive asset, and protect it as follows:
Customer Data is not permitted to be copied to destinations outside of the production infrastructure, and is not used for testing, development, or any purpose other than providing the product.
Access:
Access to the production infrastructure and systems is granted on a least-privilege basis, requires authentication with multiple factors, and is logged. Production infrastructure is configured and deployed through automated processes, so direct human access is needed only in rare cases and is not granted to employees other than those responsible for maintaining the automated processes.
Access to Customer Data via the Patterns application by Patterns personnel for support can be controlled and revoked by the customer.
Use of third-party subprocessors on customer data is minimized and, when necessary, subject to thorough review.
Customer Data is only stored in Patterns systems temporarily, and can be permanently deleted upon request.