RL-BASED LONG-TERM MEMORY for LARGE LANGUAGE MODELS

Student: Belova Yuliya

Supervisor: Mikhail Mukhin

Faculty: St. Petersburg School of Physics, Mathematics, and Computer Science

Educational Programme: Machine Learning and Data Analysis (Master)

Year of Graduation: 2024

This study addresses the difficulty large language models (LLMs) have in handling long texts, which stems from the quadratic complexity of the attention mechanism in the Transformer architecture. This problem is particularly relevant for domains that require the analysis of large amounts of textual information, such as medicine, science, and dialogue systems. The study proposes a novel approach to forming long-term memory in LLMs using reinforcement learning (RL), which has been applied successfully in other domains but remains little explored for this task. The thesis includes an overview of the subject area, an analysis of existing solutions, a description of the proposed method, and experimental results.
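The quadratic cost mentioned above can be illustrated with a minimal sketch: the self-attention score matrix over a length-n sequence has shape (n, n), so its size grows quadratically with sequence length. The code below is illustrative only; random matrices stand in for learned query/key projections and are not part of the thesis.

```python
import numpy as np

def attention_scores(n, d=64):
    """Score matrix for self-attention over a length-n sequence.

    Illustrative sketch: random Q/K stand in for learned projections.
    """
    rng = np.random.default_rng(0)
    q = rng.standard_normal((n, d))  # queries, one per token
    k = rng.standard_normal((n, d))  # keys, one per token
    # Every token attends to every token: an (n, n) matrix,
    # so memory and compute scale as O(n^2) in sequence length.
    return q @ k.T / np.sqrt(d)

# Doubling the sequence length quadruples the score matrix.
print(attention_scores(128).size)  # 16384 entries
print(attention_scores(256).size)  # 65536 entries
```

This quadratic growth is what makes external long-term memory attractive: instead of attending over the full history, the model attends over a short window plus a compact memory state.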
