
Artificial intelligence (AI) that takes over purchasing in companies, autonomously monitors product inventories, researches prices and triggers orders. The proactive assistant that answers emails, manages calendars and checks in for flights. Or AI-to-AI and multi-agent systems that handle the entire checkout process, with every step on both the buyer's and the seller's side running entirely within an AI interface – the use of AI agents or agentic AI promises completely new possibilities.

Companies, and in particular retailers, that decide to participate in this new AI-based ecosystem must above all be aware of the legal pitfalls: What are the legal rules of the game when AI suddenly negotiates and concludes contracts independently? Who is liable for what?

From AI assistants to agentic AI: what is what?

1. Classification of AI systems

From a technical point of view, what is commonly understood as agentic AI does not usually consist of a single system, but of several layers that build on each other:

  • The basis is formed by AI models, such as large language models, which process information and prepare decisions.
  • This is followed by agent orchestration: software that formulates goals, plans tasks and coordinates intermediate steps.
  • Agents can then use tools, APIs and system access to access external systems – such as merchandise management, booking systems, payment services or e-mail functions.
  • Agent ecosystems, in which several systems interact with each other, only emerge at the highest level. These include multi-agent systems or standardised communication protocols such as the Universal Commerce Protocol (UCP), which AI agents can use to negotiate directly with retailers' backend systems and initiate transactions (find out more here). A simplified sketch of how these layers interact follows after this list.
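To make the distinction between model, orchestration and tool access more tangible, here is a minimal sketch in Python. All function and tool names are purely hypothetical assumptions; the planning step is stubbed out and would in practice be handled by an AI model, while the tools would wrap real merchandise-management or payment APIs:

```python
# Minimal sketch (hypothetical names throughout): an orchestration layer that
# turns a goal into calls to external systems via registered tools.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[dict], dict]

def check_stock(args: dict) -> dict:
    # Placeholder for a call to an inventory / ERP API.
    return {"sku": args["sku"], "stock": 12}

def place_order(args: dict) -> dict:
    # Placeholder for a call to an ordering / payment API.
    return {"order_id": "A-1001", "sku": args["sku"], "quantity": args["quantity"]}

TOOLS: Dict[str, Tool] = {
    "check_stock": Tool("check_stock", "Query current stock for a SKU", check_stock),
    "place_order": Tool("place_order", "Create a purchase order", place_order),
}

def plan(goal: str) -> list:
    # Stand-in for the AI model: in practice a large language model would
    # decide which tools to call and in which order to reach the goal.
    return [
        {"tool": "check_stock", "args": {"sku": "SKU-42"}},
        {"tool": "place_order", "args": {"sku": "SKU-42", "quantity": 100}},
    ]

def run_agent(goal: str) -> list:
    # Orchestration layer: execute the planned steps through the registered tools.
    return [TOOLS[step["tool"]].run(step["args"]) for step in plan(goal)]

print(run_agent("Reorder SKU-42 if stock is running low"))
```

The further down this chain a declaration is ultimately triggered, the harder it becomes to point to a single human decision behind it – which is exactly where the legal questions below begin.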

The higher a system is positioned in this architecture, the greater its autonomy, complexity and therefore also the legal risks. In practice, a rough distinction can be made between three levels of autonomy:

  • AI assistants have a low level of autonomy, respond primarily to user input and work reactively. Classic examples are ChatGPT, the AI chatbot on an online shop site or the travel assistant "Romie" from Expedia.
  • AI agents act more autonomously than AI assistants: they pursue goals independently within a predefined framework, access external tools (e.g. via APIs) to do so and plan their own actions – a sales agent is a typical example.
  • Agentic AI: these highly autonomous systems combine logical reasoning, memory and decision-making to solve problems independently. One example is Walmart's "Conversational Checkout", in which the AI controls the entire purchasing process.

2. Legal classification and risks

Some companies have already discovered with their AI chatbots that the autonomy of AI-based systems has its pitfalls: if these "AI customer advisors" promise refunds, agree to a disadvantageous contract conclusion or insult customers and competitors, the company is usually liable. "It wasn't me, it was my AI" is not an excuse that courts will accept.

Remember: the more autonomous, self-learning and networked an AI system is, the higher the legal and economic risk.

What applies when AI concludes contracts?

When AI agents conclude contracts, the same legal rules apply as for human employees: companies remain responsible.

1. Conclusion of contract and attribution: who becomes a contractual partner?

Does the AI itself become a contracting party when it submits or accepts an offer? No. Under current law, an AI has no legal capacity of its own. It can therefore only act as a kind of tool for a person, a company or another legally recognised organisation.

In practice, this means that the statements made by an AI are legally attributed to the user or the company deploying it. The open question is when the contractually relevant declaration is made: already when the agent is set up, or only after a further intermediate human step?

And what happens if the AI deviates from the human specifications and orders 1,000 units instead of 100 or accepts a price that is far too high? What already applies without the involvement of AI will probably also apply here:

  • If the AI does not declare what the user wanted to declare, the user will probably be able to withdraw from the contract due to a (technical) error in transmission – but not if the user merely "did not want to make the declaration made in this way".
  • If the AI itself is mistaken, the user must generally allow this mistake to be attributed to them as well. However, a cancellation is conceivable if the AI system makes a declaration that falls outside the scope specified by the user.

2. Liability for AI errors

The use of AI does not protect against liability. The general principles of liability also apply here, in particular liability within and along the respective contractual chains. In other words:

  • the provider is liable to its contractual partners for defect-free performance, i.e. the error-free provision of the AI infrastructure;
  • the retailer is liable to the customer for the contractually agreed service – in particular the accuracy of the data provided, including price information, product descriptions, etc. – and to the provider for the contractually agreed performance (usually payment); and
  • the customer is liable to the retailer for the declaration of intent attributed to them and to the provider for the contractually agreed performance (usually payment).

Incidentally, a blanket exclusion of liability, e.g. in the general terms and conditions (GTC) for "AI-related errors", is generally invalid. The decisive factor is always which service has been promised.

3. Consumer protection: new manipulations?

The use of AI also does not release the retailer from the statutory pre- and post-contractual information obligations (e.g. cancellation policy, price information, etc.). However, the exciting question is how these are implemented if the retailer is no longer faced with the "average informed consumer", but the "average custobot". Companies must consider how they can technically implement the fulfilment of their obligations when using AI agents and at the same time defend themselves against new risks such as prompt injections.

AI governance: what companies should do now

If AI negotiates prices, triggers orders or even concludes contracts in the future, companies should check a few key points in advance. The following questions will help to recognise typical risks at an early stage:

  1. What impact will AI-to-AI commerce and multi-agent systems have on your own business model?
  2. Should and can your own systems interact with AI agents from other companies or customers?
  3. How autonomous is the AI system used and which internal or external systems (APIs, tools, payment services, ERP, etc.) does it access?
  4. Which decisions can the AI make independently (e.g. price negotiations, orders or contract acceptance) and which not?
  5. Who in the company is responsible for configuring, monitoring and approving the AI?
  6. How are responsibilities and liability risks distributed along the contractual chain (technology provider – retailer – customer)?
  7. To what extent may the AI make legally binding declarations and what technical limits apply (e.g. price or quantity limits – see the sketch after this list)?
  8. Where are human authorisations required and how are AI decisions and transactions documented and made traceable?
  9. How can the AI be prevented from deviating from the specified scope of action and how can incorrect decisions be recognised and corrected?
  10. Are compliance, consumer protection and security requirements (e.g. data protection, AI regulation, information obligations, protection against manipulation such as prompt injections) sufficiently implemented technically and contractually?
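How technical limits, human authorisations and documentation (points 7 to 9) can be implemented in practice is outlined in the following minimal sketch. All thresholds, function names and values are purely illustrative assumptions, not a ready-made implementation:

```python
# Minimal guardrail sketch: hard limits on what the agent may declare,
# a human-approval step above a threshold, and an audit log for every decision.
import json
import time

# All limits are illustrative assumptions, not recommendations.
MAX_QUANTITY = 100             # hard quantity limit per order
MAX_ORDER_VALUE = 5_000.00     # hard value limit per order (e.g. in EUR)
APPROVAL_THRESHOLD = 1_000.00  # above this value a human must approve

audit_log = []

def log_decision(action, payload, outcome):
    # Documentation: every AI decision is recorded and remains traceable.
    audit_log.append({"ts": time.time(), "action": action,
                      "payload": payload, "outcome": outcome})

def submit_order(sku, quantity, unit_price, human_approved=False):
    total = quantity * unit_price
    if quantity > MAX_QUANTITY or total > MAX_ORDER_VALUE:
        log_decision("order", {"sku": sku, "quantity": quantity, "total": total},
                     "rejected: outside configured limits")
        return False
    if total > APPROVAL_THRESHOLD and not human_approved:
        log_decision("order", {"sku": sku, "quantity": quantity, "total": total},
                     "held: human authorisation required")
        return False
    log_decision("order", {"sku": sku, "quantity": quantity, "total": total},
                 "accepted")
    return True  # here the order would be forwarded to the ERP or payment system

print(submit_order("SKU-42", quantity=100, unit_price=9.90))   # within limits
print(submit_order("SKU-42", quantity=1000, unit_price=9.90))  # blocked by limits
print(json.dumps(audit_log, indent=2))
```

Whether such limits are enforced in code, in the orchestration layer or contractually is a design decision – but without them, the answers to questions 7 to 9 remain theoretical.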

Brief summary

AI agents and agentic AI can fundamentally change purchasing, customer service and checkout processes. At the same time, much remains the same in legal terms: Contracts, declarations and errors made by the AI are attributed to the company. Anyone using AI agents should therefore define clear rules for autonomy, technical limits, responsibilities and documentation – and adapt existing processes, contracts and general terms and conditions accordingly.

Are you planning to use AI agents or would you like to check which legal requirements apply to your company? We can support you with the legal categorisation, the design of your AI governance and the adaptation of your contracts and processes. Please feel free to contact us.
