
Improving Language Models With Graph Embeddings

Student: Stanislav Ilyushin

Supervisor: Sergei Kuznetsov

Faculty: Faculty of Computer Science

Educational Programme: Data Science (Master)

Year of Graduation: 2024

In recent years, language models have become key tools for tasks requiring text processing. These models have evolved significantly, from simple statistical approaches to complex systems built on neural networks such as Recurrent Neural Networks (RNNs) and Transformers. Despite significant advances in Natural Language Processing (NLP), these models remain limited in how they use structured information and knowledge. This paper explores improving language models with graph neural networks (GNNs) such as the Graph Autoencoder (GAE). The main idea is to integrate structured information from knowledge graphs into the training process of language models. We propose a novel graph structure tokenization algorithm that represents a graph vertex as a subgraph of tokens, which can effectively capture relationships between entities and concepts. The research goals include developing a GAE-based model for encoding the vertices and edges of association graphs, designing the graph tokenization model architecture and training procedure, integrating a pipeline of two graph neural networks, and evaluating the impact of the proposed approach on the quality of language models in classical NLP tasks. The work combined data from two multilingual sources of word associations and constructed two associative graphs: an association word graph and a word token graph. A scheme for learning representations of the association graph, with adaptive learning via the word token graph, was designed and implemented. The resulting model shows good results on word-semantics tasks, addressing the out-of-vocabulary (OOV) problem and the lack of an adequate graph vertex tokenization mechanism. The main results of the paper include representing the association data as a graph structure, applying graph encoders to NLP tasks, constructing embeddings, reporting model metrics and embedding visualizations, testing and comparing the GAE/TokenGAE models, and evaluating metrics on word-similarity benchmarks against popular small, fast encoders.
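To illustrate the general idea of encoding the vertices of an association word graph with a Graph Autoencoder, the sketch below shows a minimal GAE over a toy association graph. This is not the thesis code: the use of PyTorch Geometric, the one-hot node features, the toy edges, and all hyperparameters are illustrative assumptions, and the proposed token-subgraph vertex tokenization is not shown.

import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, GAE

class Encoder(torch.nn.Module):
    """Two-layer GCN encoder: maps word vertices to embeddings."""
    def __init__(self, in_dim, hidden_dim, emb_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, emb_dim)

    def forward(self, x, edge_index):
        h = self.conv1(x, edge_index).relu()
        return self.conv2(h, edge_index)

# Toy association graph: 4 word vertices, undirected association edges
# (placeholder data, not the multilingual association graphs from the thesis).
x = torch.eye(4)                                   # one-hot node features
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
data = Data(x=x, edge_index=edge_index)

model = GAE(Encoder(in_dim=4, hidden_dim=16, emb_dim=8))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(100):
    optimizer.zero_grad()
    z = model.encode(data.x, data.edge_index)      # vertex embeddings
    loss = model.recon_loss(z, data.edge_index)    # reconstruct association edges
    loss.backward()
    optimizer.step()

word_embeddings = model.encode(data.x, data.edge_index).detach()

The resulting embeddings can then be compared on word-similarity benchmarks, which is the evaluation setting described in the abstract.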
