FAQs on the Commercial Use of Elsevier Content with Generative AI

Last updated on September 11, 2023

What is Elsevier’s position on using Elsevier content or products in public AI tools?
Elsevier’s legal agreements do not permit its content and products to be processed through public AI tools (such as public versions of chat-based LLMs). This applies to content from all of our dotcom products (for example, ScienceDirect, Scopus, Reaxys, Knovel, PharmaPendium, Embase, etc.) and to our separate dataset offerings. Processing Elsevier content and products through public tools runs counter to our Responsible AI Principles.

What is the risk to Elsevier customers if Elsevier content ends up in public LLMs?
As outlined in the Responsible AI Principles, there is potential real-world impact that we and our customers should be mindful of. Public generative AI tools are not created transparently, and bias, inaccuracy, and outright hallucinations are all well-documented risks. The provenance of a public LLM’s responses can also be unclear: it may be impossible to determine which source an answer was extracted from, which carries further real-world consequences. Decisions in research-intensive environments have critical real-world impact, and without provenance, accurate data-driven decisions cannot be guaranteed.

What is Elsevier’s position on using Elsevier content or products in private AI tools?
Use of private AI tools (secure, ‘enclosed’ versions of LLMs) with our data may be permitted, provided that the corporate customer has a corresponding data license and the private AI tool has been discussed with and assessed by Elsevier. The use must align with the dataset’s legal terms and conditions.

What can I do if I want to use Elsevier data in a secure, private generative AI tool, or to train my organization’s internal LLM with an Elsevier dataset?
Please speak in more detail with your existing Elsevier account manager, or get in touch using the Email option at the bottom of this page. A separate data license will need to be in place, and specific terms and conditions around using Elsevier’s datasets will need to be adhered to.

Can you give any examples of permitted private AI tools that we can work with?
Elsevier reviews each project on a case-by-case basis to ensure adherence to specific criteria. Please contact us using the Email option at the bottom of this page to discuss this in more detail.

For further assistance: