Artificial intelligence in law firms: opportunities, risks and requirements under the guidelines of the German Federal Bar Association (BRAK)
From data protection and liability to practical implementation.
Artificial intelligence (AI) is increasingly finding its way into legal practice. For law firms, it offers considerable efficiency gains and new service opportunities - but at the same time, it also raises new liability, data protection and ethical questions. In December 2024, the German Federal Bar Association (BRAK) published comprehensive guidelines on the use of AI in law firms.
In the following, we summarise the key points of these guidelines and show what law firms need to pay attention to.
The use of AI technologies can offer great practical benefits for law firms. The BRAK assumes that AI tools can enable significant efficiency gains in law firms. Routine tasks can be accelerated and large amounts of information can be analysed more quickly. AI systems are already being used in a wide range of areas:
- Document analysis and creation: Automatic review of large amounts of data, contract reviews, drafting of pleadings.
- Research services: Faster searches for case law, literature and standards.
- Communication support: Pre-formulation of e-mails and client correspondence.
- Law firm organisation: Automated scheduling, deadline monitoring, time recording.
- Translation of legal texts.
Advanced language models (so-called large language models such as ChatGPT) are particularly impressive in that they can generate legal texts seemingly effortlessly in fractions of a second. There are also specialised legal AI applications that are trained on legal data and support contract review or case analysis, for example.
The potential of these technologies lies primarily in saving time and increasing productivity. AI can automate routine work and relieve lawyers so that they can concentrate on more complex legal activities.
In large law firms or in mass proceedings with a flood of similar cases, the use of AI can even be crucial in order to manage the processes efficiently. Law firms that make sensible use of AI can gain a competitive advantage and deliver results to clients more quickly.
However, the BRAK emphasises in the same breath that the opportunities also come with risks. AI models work statistically and without real understanding, which means that although their answers often sound plausible, their content can be false. This phenomenon is known as "hallucination" - the AI invents seemingly logical information that is in fact incorrect. There is also a risk of bias due to inadequate or one-sided training material, which can distort results. In the legal context in particular, such errors can have serious consequences, for example if incorrect information is provided in written pleadings or during counselling.
The BRAK emphasises that without careful monitoring of AI results by lawyers, there is a risk of liability problems. The following professional guidelines and precautionary measures are therefore key to being able to utilise the potential of AI without risk.
Lawyers are subject to strict professional and legal requirements when practising their profession - and these also apply to the use of AI regardless of technology. The BRAK clarifies that the Professional Code of Conduct for Lawyers (BORA) and the Federal Lawyers' Act (BRAO) are formulated in a technology-neutral manner and that their obligations must therefore also be complied with without restriction when using AI tools. In particular, the use of AI does not release the legal profession from its obligation to advise clients independently and on its own responsibility. This means that all decisions and advice must continue to be the responsibility of the lawyer - AI may only serve as an aid, never as a substitute for legal expertise.
A central principle of professional law is the principle of the conscientious exercise of the profession (Section 43 BRAO) and the strictly personal performance of services. The latter means that, in case of doubt, lawyers must perform their work personally (Section 613 BGB). Applied to AI, it follows that the core activity of a lawyer must not be fully automated. AI tools can provide support, but the final check and final legal assessment must always be carried out by a human lawyer. Otherwise, there is a risk of violating professional law.
In addition to the profession-specific rules, general legal framework conditions must also be observed when using AI. In particular, data protection law (see below) and criminal law (keyword: violation of private secrets in accordance with Section 203 of the German Criminal Code) set limits on what may be done with sensitive data and AI. Copyright law may also be affected, for example if AI-generated texts reproduce protected third-party content. Law firms should also check contractual regulations: If AI tools are used on behalf of clients, for example, contractual duties of disclosure or information may arise. The BRAK emphasises that transparency obligations outside of professional law - for example from civil law or the Unfair Competition Act (UWG) - may be relevant.
Transparency towards clients
As things stand at present, neither the BRAO nor the BORA impose a professional obligation to inform clients about the use of AI. However, we would like to point out that corresponding transparency obligations may also arise outside of the law governing the legal profession - e.g. from contract law or unfair competition law.
If AI is used for a mandate, we nevertheless recommend communicating this openly and, in case of doubt, regulating it contractually. Although the client's express consent is not legally mandatory, obtaining it should nevertheless be considered in the interests of transparency.
A look into the near future
At EU level, the Regulation on Artificial Intelligence (the so-called AI Act) is coming into force in stages. The BRAK guidelines describe the most important upcoming requirements of the AI Regulation and how they relate to the law governing the legal profession. For example, providers and operators of certain AI systems must ensure that their staff have sufficient AI expertise (Art. 4 AI Regulation). According to the BRAK, lawyers are often likely to be considered "operators" in the context of legal tech and should therefore already pay attention to training and competence development.
From 2 August 2026, transparency obligations will also apply under the AI Regulation, according to which AI-generated content must be identified as such (Art. 50 AI Regulation). For law firms, this means that they will in future have to disclose if draft pleadings, for example, originate unchanged from an AI. However, the disclosure obligation does not apply if a lawyer has reviewed the text and is responsible for it. Overall, professional law and the AI Regulation stand side by side, i.e. an AI system may be permissible from a regulatory point of view, but still violate a lawyer's duties - and vice versa. It is therefore important to keep an eye on both sets of regulations.
Data protection and confidentiality play a key role in the use of AI in law firms. Lawyers are legally obliged to maintain confidentiality (Section 43a (2) BRAO) - and this obligation also applies without restriction when using AI tools. All information that becomes known in the course of a mandate is subject to professional secrecy and may not be disclosed to third parties without authorisation. Consequently, great caution is required when law firm data is fed into AI systems operated by third-party providers.
In principle, confidential client information may only be disclosed to external AI providers under very strict conditions. Section 43e BRAO regulates the permissibility of IT outsourcing - this also includes the use of cloud services or AI services. Accordingly, it must be ensured that the service provider complies with comparable confidentiality standards. The BRAK recommends only making anonymised or abstracted entries to AI systems if at all possible. In concrete terms, this means that enquiries to ChatGPT & Co. should be formulated in such a way that no conclusions can be drawn about the identity of the client or the specific circumstances of the case. If documents need to be uploaded for analysis, all personal data and case names should be removed or anonymised beforehand. This ensures that client confidentiality is maintained.
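The anonymisation step described above can be sketched in a few lines of code. This is a minimal, purely illustrative example - the patterns and the `redact` helper are our own assumptions, not part of the BRAK guidelines; real-world redaction would need far more robust tooling (e.g. named-entity recognition for personal names plus a human review step):

```python
import re

# Illustrative sketch only: mask obvious identifiers before a prompt
# is sent to an external AI service. Personal names, addresses and
# case numbers would need additional (e.g. NER-based) handling.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d /-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a neutral placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Client enquiry (max@example.com, +49 170 1234567) regarding..."
print(redact(prompt))
```

The point of the sketch is the workflow, not the patterns themselves: identifying data is stripped locally before anything leaves the firm's environment.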
In addition to professional confidentiality, general data protection law is also relevant. As soon as personal data is processed, the provisions of the General Data Protection Regulation (GDPR) and the Federal Data Protection Act (BDSG) must be complied with. For law firms, this means, for example, that there must be a clear legal basis for data processing (usually a legitimate interest will come into consideration if the use of AI serves to process a mandate). Furthermore, appropriate data processing agreements should be concluded with AI service providers if they act as processors, in order to ensure data protection. In this context, the BRAK refers to current information from the Data Protection Conference - for example the resolution of 6 May 2024, which formulates requirements for the data protection-compliant use of AI. Among other things, it emphasises that the data protection requirements for international data transfers must be observed when using US-based AI services.
In practice, this can be summarised as follows: Law firms should preferably use secure AI solutions that meet high data protection standards - ideally those that are hosted on-premise or in the EU so that sensitive data does not leave the protected environment. Examples of this include the solutions we offer - such as the AI agent AIMAX® and the RPA solution EMMA with cognitive AI. Where this is not possible, technical and organisational measures must be taken to ensure confidentiality and data protection (e.g. encryption, pseudonymisation, strict access rights). The protection of client data and compliance with the duty of confidentiality have absolute priority over any gains in convenience through external AI services.
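Pseudonymisation, one of the technical measures mentioned above, can be illustrated with a small sketch. The `Pseudonymizer` class below is a hypothetical example of ours, not a product feature from the source: sensitive terms are replaced by random tokens before data leaves the firm, and the mapping stays on the firm's own systems so that responses can be re-identified locally:

```python
import secrets

class Pseudonymizer:
    """Illustrative sketch: replace sensitive terms with random tokens
    before data is sent out; keep the mapping local for re-identification."""

    def __init__(self):
        self._forward = {}   # real term -> token (never leaves the firm)
        self._reverse = {}   # token -> real term

    def pseudonymize(self, text: str, sensitive_terms: list[str]) -> str:
        for term in sensitive_terms:
            if term not in self._forward:
                token = f"PERSON_{secrets.token_hex(4)}"
                self._forward[term] = token
                self._reverse[token] = term
            text = text.replace(term, self._forward[term])
        return text

    def reidentify(self, text: str) -> str:
        """Restore the original terms in a response received back."""
        for token, term in self._reverse.items():
            text = text.replace(token, term)
        return text
```

In use, a prompt would be pseudonymised locally, sent to the external service, and the answer re-identified on the firm's own systems - the external provider only ever sees the tokens.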
The guide provides specific tips on how law firms can organise the introduction of AI responsibly:
Step 1: Analyse needs and risks
- Where can AI really create added value?
- Which data is affected?
- What risks arise?
Step 2: Select suitable systems
- Carefully check providers (data protection, support, transparency).
- Conclude contracts for commissioned data processing.
Step 3: Sensitise employees
- Offer training on AI expertise and liability issues.
- Establish clear rules for dealing with AI systems in the law firm.
Step 4: Continuous monitoring
- Regular review of AI results.
- Document utilisation and decisions.
"Especially in times of a shortage of skilled labour, the potential to make work easier through the capabilities of artificial intelligence is more welcome than ever before.
If you have any questions - e.g. on the selection of the system or its implementation in your day-to-day work - I will be happy to discuss them with you personally."
The BRAK sees AI as an important tool, but warns against overestimating it:
Opportunities:
- Increased efficiency, especially for routine tasks.
- Error avoidance in standardised processes.
- New advisory formats for clients (e.g. self-service offerings).
Limitations:
- Complex legal assessments require human expertise.
- Empathy, strategic process management and ethical considerations remain irreplaceable.
1. Inadequate checking of AI results
AI delivers drafts, not finished solutions. A careful plausibility check remains mandatory.
2. Data protection violations due to insecure systems
The use of non-GDPR-compliant tools jeopardises client confidentiality and can result in a warning letter.
3. Lack of transparency towards clients
If the use of AI is not communicated openly, this can jeopardise the relationship of trust.
4. Inadequate employee training
Without training, there is often a lack of awareness of the risks and limitations of the technology.
5. Overestimation of AI capabilities
AI is strong in providing support, but cannot replace complex legal assessments or strategic decisions.
The BRAK guidelines (as of December 2024) make it clear that artificial intelligence has great future potential in law firms, but at the same time must be introduced and used carefully. Law firms wishing to use AI tools will find practical recommendations in the guide to help them weigh up the opportunities and risks. The following are particularly important: strict confidentiality and data protection, conscientious review of all AI results, transparent communication - both internally and externally - as well as ongoing training and documentation. If these requirements are met, AI can become a valuable aid in the day-to-day running of a law firm - as the AI agent AIMAX® and the RPA solution EMMA with cognitive AI demonstrate time and again, not least in the context of the various use cases in everyday law firm work.
Finally, it should be emphasised that although the recommendations presented here have been formulated specifically for lawyers, they also essentially apply to other sectors in which sensitive data is handled. Ultimately, AI should support human work, not replace it. If used responsibly, law firms can utilise the advantages of AI and at the same time fulfil their high level of professional responsibility - for the benefit of their clients and the quality of the administration of justice.