AI (Artificial Intelligence) And Your Data

Security details on Vurvey's suite of artificial intelligence (“AI”) tools that help users analyze, summarize, and generate outputs.

Written by Andrew @ Vurvey
Updated over a week ago

Vurvey includes a suite of artificial intelligence (“AI”) tools that help users analyze, summarize, and generate outputs. We recognize that the use of AI tools and particularly Generative AI tools can increase productivity and innovation, and Vurvey supports the use of AI tools in a safe, ethical, and secure manner. Vurvey utilizes responsible AI practices while protecting and mitigating risks of misuse, legal implications, unethical outcomes, potential biases, inaccuracy, and information security or data security breaches.

AI and Your Data

  • Vurvey AI combines data collection, data processing, and a user interface.

  • The process of generating AI outputs starts with collecting accurate and trustworthy data through consumer responses and/or customer-supplied datasets that may include documents, files, and instructions.

  • Datasets are converted into numerical representations, called embeddings, that our machine learning (ML) and AI systems use to understand complex knowledge domains. Together, these create a holistic AI capability which includes agents (multi-step processes) and personas (tones, guidelines).

  • The source data, along with the resulting embeddings, and Vurvey AI configurations are all securely stored in the Vurvey platform (hosted within a Virtual Private Network on the Google Cloud Platform).

  • Vurvey’s chat-style interface allows workspace users to submit chosen source data (for grounding), along with an agent and a persona, to a large language model for reasoning. The resulting output can be saved within the workspace if desired.

  • Our chat-style interface means users are in control of what data to use and what outputs to produce.

  • In addition, the chat-style interface provides a simple feedback feature where users can give results a thumbs up or thumbs down. This information, combined with additional metrics such as precision and recall, is used to reduce the likelihood of hallucinations.
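As a conceptual sketch only (not Vurvey’s actual implementation), the grounding flow described above — embed source data, retrieve the closest matches to a user’s question, and pass them to an LLM as context — can be illustrated with a toy bag-of-words embedding. Real systems use dense neural embeddings; the function and document names here are illustrative:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    # Production systems use dense neural embeddings instead.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def ground(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank source documents by similarity to the query and keep the
    # top-k; these would accompany the prompt sent to the LLM.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Consumers prefer recyclable packaging for snacks.",
    "Quarterly revenue grew in the beverage category.",
    "Survey responses show packaging drives purchase intent.",
]
context = ground("What do consumers think about packaging?", docs)
```

Here `context` holds the two documents most relevant to the question, which is the essence of grounding: the model reasons over user-selected source data rather than its general training corpus.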

Large Language Models (LLMs)

Vurvey has contracted with Anthropic, Google Cloud, and other third-party providers to use their Large Language Models (LLMs). These LLMs serve as a “reasoning” engine. Per our service agreements with Anthropic and Google Cloud, the data we exchange with the LLMs is not used for training purposes. Any prompts and outputs are automatically deleted on the provider’s backend within 28 days of receipt or generation.

  • We use caution with confidential customer information in AI tools, avoiding submission of sensitive data unless a) explicitly authorized and/or b) we have platform assurances that such data will not be used for training publicly available Large Language Models (LLMs).

  • We do not fine-tune large language models. Fine-tuning alters the parameters and weights of an existing model by supplying labeled data. This has the potential to weaken built-in safety measures as well as leak sensitive data.

  • We anonymize all users by generating pseudo-identifiers. Personally identifiable information for users is never shared with the large language model.

  • We comply with all customer agreements, policies, and directives in our AI deployments.
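To illustrate the pseudo-identifier approach described above (a generic sketch, not Vurvey’s internal code — the key and helper names are hypothetical), personally identifiable information can be replaced with a keyed hash before any prompt leaves the platform:

```python
import hashlib
import hmac

SECRET_KEY = b"workspace-secret"  # hypothetical per-workspace key

def pseudonymize(user_id: str) -> str:
    # Derive a stable pseudo-identifier with a keyed hash (HMAC-SHA256).
    # The same user always maps to the same token, but the mapping
    # cannot be reversed without the secret key.
    digest = hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return f"user_{digest[:12]}"

def redact_prompt(prompt: str, pii: dict[str, str]) -> str:
    # Replace each known PII value (e.g. an email address) with the
    # pseudo-identifier before the prompt is sent to an LLM.
    for value, user_id in pii.items():
        prompt = prompt.replace(value, pseudonymize(user_id))
    return prompt

safe = redact_prompt(
    "Summarize feedback from jane@example.com",
    {"jane@example.com": "user-123"},
)
```

Because the hash is deterministic, the same user keeps the same pseudo-identifier across prompts, so analysis remains consistent while the LLM never sees the underlying identity.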

If you have further questions about how your data is protected from training public LLMs, or if your team requires additional trust-related documentation, please request access to our Trust Center at trust.vurvey.com.
