Privacy

AssessForge is built around a simple privacy principle: your client data should stay under your control, never ours. This page explains how that works in practice, which AI providers we support, what privacy protections each offers, and what you need to do to use the software responsibly with Personal Health Information (PHI) or other sensitive client data.

How AssessForge Handles Your Data

AssessForge software runs on your own computer as a desktop application. When you work on a file, type a prompt, or generate report content, that information travels directly from your computer to the AI provider you have chosen — Anthropic, Amazon Web Services, or Google. It does not pass through any server operated by AssessForge AI Technologies on the way.

The only information AssessForge software sends to our own servers is a periodic license check to confirm that your subscription is valid. That check contains no client data, no prompts, no AI responses, no file contents, and nothing about the work you are doing. We do not see, store, or have any access to the information you process through the software.

This architecture is deliberate. Because we are not in the path of your client data, we cannot lose, leak, misuse, or be compelled to disclose something we never received. It also means the privacy and compliance relationship that matters for your client data is the one between you and the AI provider you choose — not between you and us.

A Common Misconception

Before going further, it is worth correcting a misconception that comes up often: the belief that sending a prompt to a commercial AI API means your data will be shared with other users, made available online, or used to train future models. It will not. The confusion arises because consumer chat products (the free tiers of Claude, ChatGPT, and similar) have different policies than the commercial APIs the same companies offer for business use. AssessForge uses commercial APIs exclusively. On every supported provider, the contractual default is that your prompts and the AI's responses are confidential to you, are not used for model training, and are not visible to anyone outside the provider's own systems for purposes other than abuse prevention and required legal compliance.

Choosing an AI Provider

AssessForge supports three AI providers. You decide which one to use, you sign up for it directly, and you pay them directly for usage. To configure the software, you enter the API credentials provided to you by that provider into AssessForge's software settings, where they are stored securely on your computer and used only to make AI calls on your behalf.

  • Anthropic (Claude API, direct)

  • Amazon Web Services (AWS) Bedrock — our recommended option

  • Google Gemini Enterprise Agent Platform (formerly Vertex AI)

All three offer access to Anthropic's Claude models, which is what AssessForge is built around. The differences come down to privacy protections, how quickly you can get set up for PHI work, and the contractual safeguards each provider offers.

You only need a Business Associate Agreement with one of them — the one you actually use. If you choose AWS Bedrock, you sign AWS's BAA. If you choose Gemini Enterprise, you sign Google's BAA. If you choose the Anthropic API directly, you sign Anthropic's BAA. The other providers' agreements are not required.

Option 1: Anthropic (Direct)

Anthropic is the company that builds Claude. You can use Claude through Anthropic's own API by signing up at https://platform.claude.com/ and generating an API key.

Privacy protections on the Anthropic API:

  • Anthropic does not use commercial API customer data to train its models. This is a contractual commitment under Anthropic's Commercial Terms, and it applies regardless of any consumer-product settings you may have seen elsewhere.

  • Anthropic deletes API inputs and outputs from its backend within thirty days of receipt or generation. Longer retention may apply only where required by law, where required to enforce Anthropic's Usage Policy, or where you have specifically chosen a longer-retention service such as the Files API.

  • Qualifying enterprise customers can sign a Zero Data Retention (ZDR) addendum, under which prompts and responses are not stored at rest at all beyond what is needed to screen for abuse.

  • For organizations handling PHI, Anthropic offers HIPAA-ready API access. Until recently, Anthropic required customers to enable ZDR in order to qualify for HIPAA support; that is no longer the case. With a signed Business Associate Agreement (BAA) and a HIPAA-enabled organization, you can process PHI on the supported feature set, with controls enforced automatically by Anthropic at the platform level.

  • Anthropic publishes its compliance certifications (SOC 2 Type 2, ISO 27001, ISO 42001, CSA STAR, HIPAA, NIST 800-171) for the Claude API on the Anthropic Trust Center, which is publicly accessible.

Caveats:

  • The BAA is discretionary and sales-mediated. Unlike AWS, Anthropic does not currently offer a self-service BAA. Anthropic's published policy is that, after reviewing your HIPAA-related compliance posture and your use case, they may provide a BAA — not that they will. You must contact their team, describe what you intend to do with Claude, and wait for them to assess and respond. At the time of writing, response times are inconsistent: some customers are signed within days, others wait weeks, and small or solo practices sometimes do not get a timely response at all. If you need to start working today, this process is often a deal-breaker.

  • The BAA only covers specific Anthropic products. Anthropic's BAA covers Claude Enterprise Chat, the native Anthropic API (which is what AssessForge uses), and Claude Code with Zero Data Retention enabled. It does not cover the Free, Pro, Max, or Team plans, and it does not cover beta features or beta products. If you are working under an Anthropic BAA, you must use the products that fall within its scope.

  • HIPAA-ready API has feature restrictions. Once your organization is HIPAA-enabled, Anthropic automatically blocks API requests that use features not yet covered by their HIPAA program. Most core features are eligible — the Messages API, structured outputs, prompt caching, extended thinking, PDF input, citations, tool use, web search — but several are not, including web fetch, code execution, batch processing, the Files API, computer use, and the MCP connector. AssessForge is built to work within the eligible feature set, but if you use the Anthropic API for other purposes you may run into restrictions.

  • Separate organizations for HIPAA and non-HIPAA work. Anthropic enforces HIPAA readiness at the organization level. If you want to use Claude for both PHI workflows and unrelated experimentation, you need two separate Anthropic organizations — one HIPAA-enabled, one not. This is more administrative overhead than the AWS or Google routes.
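To make the feature boundary concrete, here is a minimal sketch of a request that stays within the eligible feature set listed above: a plain Messages API call with no Files API, batch processing, or code execution. It only assembles the request body as a dictionary, without sending anything, so no credentials or network access are needed. The model name and helper function are illustrative assumptions, not a confirmed AssessForge implementation.

```python
# Illustrative sketch: a Messages API request body that uses only
# features named as HIPAA-eligible above. The model identifier is
# an assumption for illustration; substitute the model you use.

def build_messages_request(prompt: str, model: str = "claude-sonnet-4-20250514") -> dict:
    """Assemble a minimal Messages API request body as a plain dict."""
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [
            {"role": "user", "content": prompt},
        ],
    }

request = build_messages_request("Summarize the de-identified assessment notes.")
print(sorted(request.keys()))  # → ['max_tokens', 'messages', 'model']
```

A request shaped like this uses nothing outside the HIPAA program's scope; it is the features you add on top (file uploads, batching, code execution) that a HIPAA-enabled organization will see blocked.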

Option 2: AWS Bedrock (Recommended)

Amazon Bedrock is Amazon Web Services' managed service for foundation models, including the full Claude lineup from Anthropic. Bedrock serves the same Claude models from Amazon's infrastructure, governed by AWS's security and compliance controls.

We recommend AWS Bedrock for most AssessForge users handling PHI for one practical reason: you can be set up with a fully signed BAA in about ten minutes, entirely self-service, with no sales calls and no waiting.

Privacy protections on AWS Bedrock:

  • Anthropic does not receive your prompts or use them for training. Bedrock runs Claude on Amazon's infrastructure under contract; your data flows between your computer and AWS, not through Anthropic.

  • All data is encrypted in transit using TLS 1.2 or higher, and encrypted at rest using AES-256, by default. No configuration is required for this baseline encryption.

  • Amazon Bedrock is on AWS's list of HIPAA-Eligible Services, meaning it can be used to process PHI under the AWS Business Associate Addendum.

  • The AWS BAA contractually obligates AWS to safeguard PHI consistent with HIPAA requirements, to limit how PHI is used and disclosed, to report breaches and security incidents, and to flow equivalent obligations down to its subcontractors.

  • On the Anthropic Trust Center, HIPAA compliance for "Claude in Amazon Bedrock" is shown as "Partner-Managed." This means AWS — not Anthropic — handles the HIPAA contractual relationship. You do not need a separate Anthropic BAA when using Bedrock; your AWS BAA is the one that covers your use of Claude through Bedrock.

Why we suggest AWS Bedrock:

  • BAA is self-service and instant. You sign in to AWS Artifact (Amazon's compliance portal), review the BAA, click accept, and the BAA is in effect immediately. No sales process, no waiting on lawyers, no phone calls.

  • Account creation is free. AWS does not charge you to open an account or to sign the BAA. You only pay for AI usage you actually consume, billed monthly to a credit card you provide.

  • Free credits to start. New AWS accounts typically receive promotional credits when signing up — at the time of writing, around one hundred dollars in general AWS Free Tier credit, plus additional Bedrock-specific credit for trying out a foundation model. The exact amount and terms are set by Amazon and may change. For most AssessForge users, this is more than enough to test the software and complete dozens of real reports before any paid usage starts.

  • Same Claude models, same quality. The Claude models on Bedrock are the same ones Anthropic offers directly. Bedrock pricing is essentially identical to Anthropic's direct pricing.

  • Pay only for what you use. Bedrock bills per token of input and output. There is no monthly minimum and no subscription.

  • No feature restrictions for HIPAA workflows. Unlike Anthropic's HIPAA-ready direct API, AWS Bedrock does not selectively disable features for HIPAA-designated accounts. The full Claude capability set in Bedrock is available under the AWS BAA, provided you configure your account in line with the BAA's requirements (encryption, eligible services only for PHI, and so on).
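Because billing is purely per token, you can estimate what a report will cost before running it. The sketch below shows the arithmetic; the per-token prices are placeholders for illustration, not Amazon's actual rates, so check the current Bedrock pricing page for the model you use.

```python
# Rough per-request cost estimate for token-based billing.
# PRICES ARE PLACEHOLDERS for illustration only; look up current
# Bedrock pricing for the specific Claude model you actually use.

INPUT_PRICE_PER_1K = 0.003   # USD per 1,000 input tokens (assumed)
OUTPUT_PRICE_PER_1K = 0.015  # USD per 1,000 output tokens (assumed)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return ((input_tokens / 1000) * INPUT_PRICE_PER_1K
            + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K)

# Example: a draft built from ~10,000 tokens of source material
# that produces ~2,000 tokens of generated text.
cost = estimate_cost(10_000, 2_000)
print(f"${cost:.2f}")  # → $0.06
```

Even at realistic prices the per-report figure stays in the cents-to-dollars range, which is why the promotional credits mentioned above go a long way.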

The AWS Business Associate Addendum. A copy of the AWS BAA is provided below so you can review the full contractual terms yourself before signing up.

Download the AWS Business Associate Addendum (PDF)

Note: the AWS BAA is a confidential agreement under AWS's terms. The copy provided here is for your review in deciding whether to proceed; the contractually binding version is the one you accept inside your own AWS account through AWS Artifact.

Option 3: Google Gemini Enterprise Agent Platform

Google's Gemini Enterprise Agent Platform — the rebranded successor to Vertex AI — also offers access to Anthropic's Claude models, alongside Google's own Gemini models and a number of others. Google Cloud is a HIPAA-eligible platform, and Google offers a BAA covering its full Cloud infrastructure and the platform's covered services.

Privacy protections on Gemini Enterprise:

  • Customer prompts and responses on Gemini Enterprise are not used to train Google's or Anthropic's models.

  • Encryption in transit and at rest is enabled by default.

  • Once a BAA is in place and the project is configured for regulated workloads, Gemini Enterprise can be used for HIPAA-covered AI workflows, including those that process PHI.

  • The Google Cloud BAA covers the company's entire infrastructure, and regional data residency options are available for organizations with specific location requirements.

  • As with Bedrock, the Anthropic Trust Center shows HIPAA compliance for "Claude on Google Cloud's Vertex AI" as "Partner-Managed." Google handles the HIPAA contractual relationship under its own BAA. You do not need a separate Anthropic BAA when using Claude through Gemini Enterprise.

How the BAA works at Google:

Google Cloud's BAA process is straightforward but not as instantaneous as AWS's. You generally need to contact your Google Cloud account representative, request the BAA, accept the terms in the Google Cloud Console, and configure your project for HIPAA workloads. In practice this typically takes one to two business days, rather than the few minutes AWS requires. If you have an existing Google Cloud relationship, or if you specifically prefer Google's ecosystem, Gemini Enterprise is a perfectly valid choice.

A Note for Canadian Users: PIPEDA and Provincial Health Privacy Laws

HIPAA is a United States federal law. It does not directly apply to most Canadian assessors, clinics, or businesses. If you are operating in Canada with Canadian clients, the laws that actually govern your handling of personal information are different.

Federally, the Personal Information Protection and Electronic Documents Act (PIPEDA) sets the baseline rules for how private-sector organizations collect, use, and disclose personal information in the course of commercial activities.

Provincially, most provinces have their own health privacy legislation that applies to personal health information held by health information custodians. The specifics depend on where you practice:

  • Alberta: Health Information Act (HIA)

  • British Columbia: Personal Information Protection Act (PIPA), with additional public-sector rules under FIPPA

  • Manitoba: Personal Health Information Act (PHIA)

  • New Brunswick: Personal Health Information Privacy and Access Act (PHIPAA)

  • Newfoundland and Labrador: Personal Health Information Act (PHIA)

  • Nova Scotia: Personal Health Information Act (PHIA)

  • Ontario: Personal Health Information Protection Act (PHIPA)

  • Prince Edward Island: Health Information Act (HIA)

  • Quebec: Law 25 (an Act respecting the protection of personal information in the private sector)

  • Saskatchewan: Health Information Protection Act (HIPA)

Even though HIPAA itself does not apply to you, signing a BAA with a US cloud provider is still useful in a Canadian context. Here is why. Canadian privacy law generally permits cross-border transfers of personal information, but it requires you to remain accountable for that information and to ensure it receives "comparable protection" wherever it is processed. The Office of the Privacy Commissioner of Canada recommends contractual safeguards with foreign service providers, documented in writing. A signed BAA — with its commitments around permitted uses, encryption, breach notification, subcontractor obligations, and audit rights — is exactly the kind of contractual safeguard the OPC has in mind. It does not make you "PIPEDA compliant" on its own, but it is a meaningful piece of the cross-border accountability story you are expected to be able to demonstrate.

A few additional considerations for Canadian assessors:

  • Inform your clients. Most Canadian privacy regimes expect organizations to tell individuals when their personal information is processed outside Canada. Update your intake forms and privacy notice to disclose that AI processing may occur on US-based cloud infrastructure.

  • Be aware of stricter provincial regimes. Some jurisdictions — notably British Columbia's public sector and Quebec under Law 25 — have stricter rules on cross-border data flows. Ontario's PHIPA, which applies to most assessors working with health information custodians in Ontario, also imposes specific obligations around audit logging and notice. If you are subject to those regimes, you may need additional safeguards or Canadian data residency, and the analysis is best done with a privacy professional familiar with your sector.

  • Consider data minimization. Whatever framework applies to you, sending less PHI to any external service is always better than sending more. AssessForge does not require you to send full medical records into AI prompts; the more you can summarize, de-identify, or excerpt before prompting, the lower your risk profile.
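As a concrete (and deliberately simplistic) illustration of data minimization, the sketch below strips a few obvious identifier patterns from text before it would be placed in a prompt. This is a toy regex pass, not a validated de-identification tool; a real PHI workflow needs a reviewed, documented process.

```python
import re

# Toy de-identification pass: replaces a few obvious identifier
# patterns before text is used in an AI prompt. This illustrates
# the principle only; it is NOT a validated PHI scrubber.

PATTERNS = [
    (re.compile(r"\b\d{3}-\d{3}-\d{4}\b"), "[PHONE]"),        # phone numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{4}-\d{2}-\d{2}\b"), "[DATE]"),         # ISO dates
]

def redact(text: str) -> str:
    """Apply each redaction pattern in turn and return the result."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

note = "Client called 604-555-0199 on 2024-03-15; follow up at jane@example.com."
print(redact(note))
# → Client called [PHONE] on [DATE]; follow up at [EMAIL].
```

Names, addresses, health card numbers, and free-text identifiers are far harder to catch reliably, which is exactly why summarizing or excerpting by hand before prompting remains the lowest-risk habit.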

None of the above is legal advice. Your specific obligations depend on your province, your role (organization, custodian, contractor), the nature of the information you handle, and the contractual relationships you have with your clients and their counsel. If you are unsure, consult a Canadian privacy lawyer or your provincial privacy commissioner's office before processing real client information through any AI service.

Your Responsibilities

AssessForge is a software tool. It is not, by itself, a privacy or compliance product, and using it does not automatically make your practice compliant with any particular law. The legal relationship that protects your clients' information is between you and the AI provider you choose, not between you and AssessForge AI Technologies.

If you intend to process Personal Health Information or other regulated sensitive data through AssessForge, you are responsible for:

  • Selecting an AI provider whose terms permit processing of the kind of information you handle.

  • Executing a Business Associate Agreement, Data Processing Addendum, or equivalent contractual instrument with that provider before sending any regulated information through the software.

  • Configuring your provider account in line with the security requirements of that agreement (encryption, access controls, logging, and similar safeguards).

  • Safeguarding the API credentials you enter into AssessForge. Anyone with your API key can incur usage charges on your provider account. Do not share keys, store them in unencrypted documents, or commit them to shared drives.

  • Informing your clients that AI processing forms part of your workflow and obtaining whatever consent your professional and legal obligations require.

  • Complying with your own provincial, federal, and professional obligations regarding the collection, use, disclosure, retention, and destruction of personal information.

  • Maintaining your own records of consent, data flows, and incident response procedures.

AssessForge AI Technologies makes no representation that your use of the software, in any particular configuration, will satisfy the requirements of HIPAA, PIPEDA, any provincial health privacy law, or any other regulatory regime. We strongly recommend that any practice processing regulated information work with a qualified privacy professional to review their workflow, contracts, and consent practices before going live with PHI.

In Summary

AssessForge is designed so that your client data flows directly between your computer and an AI provider you have contracted with. We are not in that path. The privacy protections that apply to your work are the ones you put in place with that provider — most easily through AWS Bedrock, which offers an instant self-service BAA, free starter credits, and the same Claude models you would get anywhere else.

Setting up AWS Bedrock with a signed BAA takes about ten minutes and costs nothing. For most AssessForge users handling PHI or sensitive Canadian client information, it is the fastest, simplest, and most defensible way to start working with AI safely.