Abstract
This paper studies how model architecture and data configurations influence the empirical memorization capacity of generative transformers. The models are trained on two synthetic text datasets derived from the Systematized Nomenclature of Medicine (SNOMED) knowledge graph: triplets, which represent static connections, and sequences, which simulate complex relation patterns. The results show that embedding size is the primary determinant of learning speed and capacity, while additional layers provide limited benefits and may hinder performance on simpler datasets. The choice of activation function is also crucial, with Softmax demonstrating greater stability and capacity. Furthermore, increasing dataset complexity appears to improve final memorization. These insights improve our understanding of transformer memory mechanisms and provide a framework for optimizing model design with structured real-world data.
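The following is a minimal, hypothetical sketch in PyTorch of the kind of setup the abstract describes: SNOMED-style triplets serialized as flat token sequences, and a sweep over embedding size (`d_model`) in a small decoder-only transformer, measuring whether each tail token is recalled exactly from its (head, relation) prefix. The example triplets, hyperparameters, and names (`TinyTransformer`, the 300-step training loop) are illustrative assumptions, not the paper's actual data or code.

```python
import torch
import torch.nn as nn

# Illustrative SNOMED-style (head, relation, tail) triplets; not the
# paper's actual dataset.
triplets = [
    ("myocardial_infarction", "finding_site", "heart_structure"),
    ("appendicitis", "associated_morphology", "inflammation"),
]

# Serialize each triplet as a flat token sequence the model must memorize.
sequences = [list(t) for t in triplets]
vocab = {tok: i for i, tok in enumerate(sorted({t for s in sequences for t in s}))}
ids = torch.tensor([[vocab[t] for t in s] for s in sequences])

class TinyTransformer(nn.Module):
    """Small decoder-only transformer (encoder stack + causal mask)."""

    def __init__(self, vocab_size, d_model=64, n_layers=1, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, ids):
        x = self.embed(ids)
        # Causal mask so each position attends only to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(ids.size(1))
        return self.head(self.blocks(x, mask=mask))

# Sweep embedding size, the factor the abstract identifies as the primary
# determinant of learning speed and memorization capacity.
for d_model in (32, 64, 128):
    torch.manual_seed(0)
    model = TinyTransformer(len(vocab), d_model=d_model)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(300):
        logits = model(ids[:, :-1])  # predict next token at each position
        loss = nn.functional.cross_entropy(
            logits.reshape(-1, len(vocab)), ids[:, 1:].reshape(-1)
        )
        opt.zero_grad()
        loss.backward()
        opt.step()
    # A triplet counts as memorized when the tail token is predicted
    # exactly from the (head, relation) prefix.
    pred = model(ids[:, :-1]).argmax(-1)[:, -1]
    acc = (pred == ids[:, -1]).float().mean().item()
    print(f"d_model={d_model}: tail-token accuracy={acc:.2f}")
```

In this sketch, memorization capacity would be probed by growing the triplet set until tail-token accuracy drops below a threshold for a given `d_model`; varying `n_layers` or the attention activation would follow the same pattern.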
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the First Workshop on Large Language Model Memorization (L2M2) |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 227-238 |
| Number of pages | 12 |
| ISBN (Print) | 9798891762787 |
| Publication status | Published - 2025 |
| Event | 1st Workshop on Large Language Model Memorization (L2M2), Vienna International Centre, Vienna, Austria, 1 Aug 2025 |
Conference
| Conference | 1st Workshop on Large Language Model Memorization (L2M2) |
|---|---|
| Country/Territory | Austria |
| City | Vienna |
| Period | 1/08/25 → 1/08/25 |
| Internet address | https://sites.google.com/view/memorization-workshop/ |