In exchange for disclosing their discoveries, inventors who hold patent rights have the right to exclude others from practising (or otherwise infringing) those inventions. Under US patent law, someone who “makes, uses, offers to sell, or sells any patented invention, within the United States or imports into the United States any patented invention during the term of the patent therefor” infringes the patent.
Establishing infringement requires a two-step inquiry: (1) construe the meaning of each term in the patent claim; and (2) show that the accused device meets each claim term (i.e., claim limitation), either literally or under the doctrine of equivalents.
Given that many AI systems now have the technical capability to infringe patent claims, another crucial patent law problem that AI is likely to disrupt is liability in circumstances where the AI itself is the infringer.
The liability problem raises the question of who should be held accountable for acts performed by AI – the end user, the developer, or the AI itself – as well as the associated question of how to measure culpability.
Possible solutions include:
- As stated in the Resolution, the “laws regulating culpability for harmful activities – where the user of a product is accountable for a behaviour that leads to harm” may apply to damages caused by AI. One potential defendant would be the AI’s end user. However, this could make customers more wary and less eager to adopt potentially beneficial AI. In many cases, especially when they are individuals rather than sophisticated organisations, end users are unable to anticipate the patent infringement.
- This raises the alternative of holding the AI developer or manufacturer responsible. In patent litigation, it is standard practice to hold the maker of a product liable for patent infringement. This may be appropriate in the context of artificial intelligence as well, because developers are ultimately responsible for creating the (infringing) AI, are typically better positioned than end users to anticipate the infringement, and have probably reaped financial benefits from the AI.
- However, in the case of fully autonomous AI, can a human agent genuinely foresee or adequately supervise the AI’s behaviour so as to prevent infringement? If humans were held accountable for unforeseen behaviours, such as patent infringement, would this discourage the development and use of artificial intelligence (AI)? If so, how would this affect innovation?