Your Law Firm Has an AI Security Problem. Here's How to Solve It.

· CableKnit Team

In July 2024, the American Bar Association issued Formal Opinion 512, its first comprehensive guidance on lawyers’ ethical obligations when using generative AI. The opinion doesn’t ban AI. It recognizes that AI tools can improve the efficiency and quality of legal services. But it draws clear lines around how lawyers can and cannot use these tools, and those lines create a serious problem for any firm relying on cloud-based AI.

Here’s what the ABA said, and why it matters for how your firm adopts AI.

What the ABA Opinion Actually Says

Opinion 512 examines generative AI through six ethical lenses: competence, confidentiality, client communication, candor to courts, supervisory duties, and fees. The confidentiality section is the most consequential for firms evaluating AI tools.

The Confidentiality Problem

Under Model Rule 1.6, lawyers must protect all information relating to client representation from unauthorized disclosure. The ABA opinion warns that AI tools that learn from user input create a direct risk of violating this duty. When a lawyer inputs client information into a cloud AI tool, that information may be stored on external servers, used to train the model, or surfaced in responses to other users, including lawyers at other firms using the same tool.

The opinion states that before inputting client information into any AI tool that could disclose it to others, lawyers must obtain informed consent from the client. That consent must be specific, not boilerplate language buried in an engagement letter. The lawyer must explain what information will be shared, how others might access it, and why the tool is being used.

For “self-learning” AI tools, the risks compound. Information from one client’s matter could influence the tool’s responses on another client’s matter, even within the same firm. The opinion notes that this could violate ethical walls and conflict-of-interest obligations.

There is also a risk the opinion does not address directly but which any litigator will recognize: cloud AI providers can be subpoenaed. Chat transcripts, API request histories, and prompt data sitting on OpenAI’s or Anthropic’s servers are discoverable. A party adverse to your client could, in principle, seek production of the AI conversations in which your firm worked through case strategy, drafted arguments, or analyzed evidence. Even where attorney work-product and privilege arguments ultimately succeed, the cost and disruption of fighting those motions is substantial. And the metadata alone — which matters were queried, when, by whom, and how often — can reveal more than the firm would prefer.

The Competence Obligation

Model Rule 1.1 requires lawyers to understand the technologies they use. The opinion makes clear that lawyers don’t need to become AI experts, but they must have a reasonable understanding of the capabilities and limitations of any AI tool they employ. This includes understanding where the AI gets its information, how it might produce inaccurate results, and the well-documented problem of AI “hallucinations,” where the tool generates plausible but entirely fabricated content.

The opinion emphasizes that lawyers cannot rely on AI output without independent verification. A lawyer who submits AI-generated content to a court without checking it for accuracy may violate duties of competence and candor.

Supervisory Duties

Managing partners and senior lawyers must establish clear policies governing AI use within the firm. They must ensure that all attorneys and staff are trained on the ethical and practical implications of these tools. When AI services are provided by third parties, lawyers must vet those providers, reviewing their security policies, data retention practices, hiring standards, and breach notification procedures.

Fee Implications

Lawyers billing hourly must bill for actual time spent, not the time the work would have taken without AI. If an AI tool generates a draft brief in 15 minutes that would have taken 4 hours manually, the lawyer bills for the 15 minutes of input plus whatever time is spent reviewing and refining the output. Firms cannot charge clients for time spent learning how to use AI tools that will be used across multiple matters, as that is considered maintaining general competence rather than client-specific work.

The Core Tension

The ABA opinion creates a practical dilemma for law firms. AI tools offer genuine value: faster research, more efficient document review, better contract analysis. But the most capable and accessible AI tools on the market are cloud services that process data on external servers owned by companies like OpenAI, Google, and Microsoft.

Using these tools with real client data triggers the full weight of the confidentiality analysis. Lawyers must evaluate the provider’s terms of service, understand how data is stored and used, assess the risk of disclosure, explain all of this to each client, and obtain specific informed consent. For firms handling sensitive matters involving litigation strategy, financial records, medical information, or privileged communications, the compliance burden may outweigh the benefits.

Enterprise agreements with major AI providers can address some of these concerns through contractual safeguards, data isolation, and compliance certifications. But these agreements typically cost six figures annually and involve months of procurement and legal review. A 40-person law firm doesn’t have the budget or the bandwidth for that process.

The result is a gap in the market. Small and mid-size firms in regulated industries need AI. They know their competitors are adopting it. But the tools available to them either create unacceptable ethical risks or are priced beyond their reach.

How On-Premises AI Changes the Analysis

CableKnit takes a fundamentally different approach. Instead of sending your data to the cloud, CableKnit runs the AI model on a dedicated machine inside your office. The hardware sits in your server closet. The data stays on your local network. Your information never leaves your building.

This architecture doesn’t just reduce the ethical risks identified in ABA Opinion 512. It eliminates the confidentiality risk at its source, and it makes the remaining duties far easier to meet.

Confidentiality: Resolved

The opinion’s confidentiality analysis centers on the risk that client information will be disclosed to or accessed by third parties through the AI tool. With CableKnit, there is no third party. The AI model runs on hardware your firm owns. No external server processes your queries. No cloud provider stores your data. No other law firm shares the same AI instance.

The informed consent analysis becomes dramatically simpler. Instead of explaining complex data-sharing risks and third-party access policies, you can tell your clients: “Our AI system runs on a computer in our office. Your information is never transmitted to any outside service.”

The opinion also warns about “self-learning” tools that might leak information between clients. CableKnit’s document access controls prevent this at the system level. Documents uploaded by one attorney with restricted visibility cannot appear in another attorney’s AI responses. These permissions are enforced before the AI ever sees the content, not after.
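The shape of “enforced before the AI ever sees the content” can be sketched in a few lines of Python. The document store, field names, and `visible_documents` helper below are illustrative assumptions for this post, not CableKnit’s actual implementation: the point is simply that the permission filter runs before retrieval, so restricted material never enters the context of an unauthorized attorney’s query.

```python
from dataclasses import dataclass, field


@dataclass
class Document:
    doc_id: str
    content: str
    allowed_users: set = field(default_factory=set)  # attorneys cleared to see it


def visible_documents(docs, user):
    """Return only the documents this user may see.

    The filter runs *before* retrieval, so a restricted document can never
    be selected into the AI's context for an unauthorized user, regardless
    of how relevant it is to the query.
    """
    return [d for d in docs if user in d.allowed_users]


# Hypothetical corpus: "bob" is walled off from the merger matter.
corpus = [
    Document("memo-1", "Merger strategy notes", {"alice"}),
    Document("brief-2", "Public filing summary", {"alice", "bob"}),
]
print([d.doc_id for d in visible_documents(corpus, "bob")])  # ['brief-2']
```

Whatever retrieval step follows operates only on the filtered list, which is what makes the cross-client leakage the opinion worries about structurally impossible rather than merely discouraged.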

The subpoena exposure disappears as well. There is no third-party provider holding transcripts of your strategy sessions, no API log on a vendor’s server, no metadata trail outside your firm’s own systems. Opposing counsel cannot subpoena a record that does not exist. Your AI conversations live and die on your network, governed by the same retention and litigation-hold policies your firm already applies to email and document management.

Competence: Supported

The opinion requires lawyers to understand their AI tools and verify their output. CableKnit supports both requirements. The system is transparent about its sources: every AI response includes citations showing exactly which documents the answer came from. Attorneys can click through to the original document and verify the information in seconds.

Because CableKnit answers questions using your firm’s actual documents rather than general internet training data, the hallucination risk is significantly reduced. The AI is not inventing case citations from its training set. It is retrieving information from documents your firm uploaded and generating responses grounded in that content.
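Grounding of this kind is generally implemented as retrieval-augmented generation: retrieve passages from the firm’s own store, then instruct the model to answer only from those passages and cite them. A minimal sketch of the prompt-assembly step, using a hypothetical `build_grounded_prompt` helper (an assumption for illustration, not CableKnit’s actual code):

```python
def build_grounded_prompt(question, retrieved):
    """Assemble a prompt that confines the model to retrieved passages.

    `retrieved` is a list of (doc_id, passage) pairs pulled from the firm's
    own document store. Each passage is tagged with its bracketed doc_id so
    the model can cite it, and an attorney can click through from the
    citation to the source document to verify the claim.
    """
    context = "\n".join(f"[{doc_id}] {passage}" for doc_id, passage in retrieved)
    return (
        "Answer using ONLY the passages below. Cite the bracketed id for "
        "every claim. If the passages do not answer the question, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )


prompt = build_grounded_prompt(
    "What did we summarize about the filing?",
    [("brief-2", "Public filing summary")],
)
```

Because the model is steered toward the tagged passages rather than its general training data, a fabricated citation has nowhere to hide: every bracketed id in the answer either resolves to a real firm document or is immediately visible as unverified.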

Supervisory Duties: Simplified

The opinion requires firms to vet their AI providers, evaluate security policies, and understand data handling practices. With CableKnit, the “provider vetting” process is straightforward: the AI runs on your hardware, managed by CableKnit’s team via secure remote access. There is no complex vendor security assessment because the data never enters a vendor’s infrastructure.

CableKnit also provides an administrative dashboard where managing partners can monitor system usage, manage user accounts, control document permissions, and review system health. Supervisory oversight is built into the product.

Client Communication: Straightforward

Disclosure to clients becomes a strength rather than a liability. Telling a client “we use AI to improve our research efficiency, and the system runs entirely within our office on hardware we own” builds confidence. It demonstrates both technological sophistication and respect for client privacy.

Fees: Clean

CableKnit operates on a flat monthly subscription. There are no per-query charges to pass through to clients. Firms absorb the cost as practice overhead, the same way they handle Westlaw or LexisNexis subscriptions. The efficiency gains benefit both the firm (faster work) and the client (lower bills), without creating complex billing allocation questions.

The Practical Result

ABA Opinion 512 is not an obstacle to AI adoption. It is a framework that, when followed, makes AI use in legal practice both ethical and defensible. The challenge is that most AI tools available today make compliance difficult, expensive, or both.

On-premises AI is the cleanest path to compliance because it removes the variable that creates the most ethical complexity: sending client data to a third party. When the data never leaves your office, the hardest questions in the ABA’s analysis simply don’t arise.

CableKnit was built specifically for this reality. Private AI, running on your hardware, in your building, under your control. The ABA opinion describes what responsible AI adoption looks like. CableKnit is how you get there.