Transformer Architecture — The Backbone of LLMs

Part 4 of a 13-part series about LLMs

Janani Srinivasan Anusha

Introduction to Transformers

Transformers were introduced in the seminal paper Attention Is All You Need (Vaswani et al., 2017). They addressed the limitations of Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks by offering parallel processing and better handling of long-range dependencies.

1. Core Components of the Transformer Architecture

The transformer model can be divided into several core components, each playing a vital role in processing and learning from data.

1.1. Input Embedding and Positional Encoding

Before input data enters the transformer, it must be tokenized and embedded into high-dimensional vectors. However, since transformers process sequences in parallel, they lack a natural understanding of the order of tokens. Positional encodings are added to embeddings to introduce this sequential information.

Formula for Positional Encoding
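The sinusoidal encoding from the original paper assigns each position a fixed pattern of sines and cosines:

```latex
PE_{(pos,\,2i)} = \sin\!\left(\frac{pos}{10000^{2i/d_{\text{model}}}}\right), \qquad
PE_{(pos,\,2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/d_{\text{model}}}}\right)
```

Here pos is the token's position in the sequence, i indexes the embedding dimension, and d_model is the embedding size, so each dimension corresponds to a sinusoid of a different frequency.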

1.2. Self-Attention Mechanism

The self-attention mechanism computes the relationships between different tokens in a sequence, enabling the model to focus on relevant parts of the input.

The computation proceeds in four steps:

  1. Project each token embedding into a query, key, and value vector.
  2. Compute attention scores as scaled dot products between each query and all keys.
  3. Apply a softmax to turn the scores into attention weights.
  4. Take the weighted sum of the value vectors to produce each token’s output representation.
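As a minimal NumPy sketch of these steps (the matrix names and shapes here are illustrative assumptions, not the paper’s exact implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention for one sequence.

    X: (seq_len, d_model) token embeddings
    W_q, W_k, W_v: (d_model, d_k) projection matrices
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v      # 1. queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # 2. scaled pairwise scores
    weights = softmax(scores, axis=-1)       # 3. attention weights
    return weights @ V                       # 4. weighted sum of values
```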

1.3. Multi-Head Attention

Self-attention is further enhanced using multi-head attention, which allows the model to focus on different parts of the input simultaneously. Each attention “head” operates independently, capturing diverse contextual relationships, and the results are concatenated for richer representations.

Multi-Head Attention Formula
Source: Attention Is All You Need (Vaswani et al., 2017)
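Written out, the formula from the paper is:

```latex
\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)\, W^O,
\quad \text{where } \mathrm{head}_i = \mathrm{Attention}(Q W_i^Q,\; K W_i^K,\; V W_i^V)
```

Each head has its own projection matrices, so different heads can attend to different aspects of the sequence.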

1.4. Feedforward Neural Network (FFN)

After self-attention, each token is processed independently through a feedforward neural network (FFN), adding depth and complexity to the model’s understanding.

The FFN consists of two linear transformations separated by a non-linear activation function (typically ReLU):
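In the notation of the original paper:

```latex
\mathrm{FFN}(x) = \max(0,\; x W_1 + b_1)\, W_2 + b_2
```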

The FFN allows the model to learn complex patterns and transformations for each token.

Source: https://learnopencv.com/understanding-feedforward-neural-networks/

1.5. Layer Normalization

Layer normalization stabilizes training by normalizing the output of each sub-layer (self-attention and FFN). It rescales the activations to zero mean and unit variance (followed by a learned scale and shift), which helps prevent divergence during training.
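In its standard form:

```latex
\mathrm{LayerNorm}(x) = \gamma \odot \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta
```

where μ and σ² are the mean and variance over the feature dimension, ε is a small constant for numerical stability, and γ, β are learned parameters.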

1.6. Residual Connections

Residual connections add the input of each sub-layer to its output, preserving the original information and allowing gradients to flow effectively during backpropagation:

Output = LayerNorm(x + SubLayer(x))

These connections mitigate the problem of vanishing gradients and enable deeper architectures.
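Putting sections 1.2 through 1.6 together, a single encoder block can be sketched as follows (a simplified PyTorch-style sketch; the hyperparameter defaults are assumptions taken from the original paper’s base configuration):

```python
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One transformer encoder block: self-attention and an FFN,
    each wrapped in a residual connection followed by layer normalization."""

    def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.ReLU(),
            nn.Linear(d_ff, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        # Residual connection around self-attention, then layer norm (post-norm, as in the paper)
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + self.dropout(attn_out))
        # Residual connection around the FFN, then layer norm
        x = self.norm2(x + self.dropout(self.ffn(x)))
        return x
```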

Transformer Architecture

Transformer Architecture: Encoder & Decoder

Encoders

Over the years, various encoder-only architectures have been developed based on the encoder module of the original transformer model outlined above. Notable examples include BERT (Pre-training of Deep Bidirectional Transformers for Language Understanding, 2018) and RoBERTa (A Robustly Optimized BERT Pretraining Approach, 2019).

BERT (Bidirectional Encoder Representations from Transformers) is an encoder-only architecture based on the Transformer’s encoder module. The BERT model is pre-trained on a large text corpus using masked language modeling and next-sentence prediction tasks.

RoBERTa (Robustly optimized BERT approach) is an optimized version of BERT. It maintains the same overall architecture as BERT but employs several training and optimization improvements, such as larger batch sizes, more training data, and eliminating the next-sentence prediction task. These changes resulted in RoBERTa achieving better performance on various natural language understanding tasks than BERT.

Decoders

Over time, researchers have built upon the original encoder-decoder transformer architecture, leading to the development of several decoder-only models. These models focus on generating text rather than understanding input sequences, allowing them to excel in tasks such as text generation, translation, and summarization. One of the most notable examples of these models is the GPT (Generative Pre-trained Transformer) series. These models are pre-trained on vast amounts of unsupervised text data and later fine-tuned for specific tasks, including sentiment analysis, text classification, and question-answering.

The GPT family, including GPT-2, GPT-3 (Language Models are Few-Shot Learners, 2020), and the more recent GPT-4, has shown remarkable performance across a wide range of benchmarks. These models are now among the most popular and influential architectures in NLP due to their ability to generate coherent and contextually relevant text. Their success across tasks has solidified their position as one of the leading approaches in the field, further pushing the boundaries of what AI systems can achieve in language understanding and generation.

Encoder-Decoder Hybrids

In addition to the traditional encoder and decoder architectures, there have been developments in new encoder-decoder models that combine the advantages of both components. These models typically integrate innovative methods, pre-training goals, or architectural adjustments to improve their performance across different natural language processing tasks. Notable examples of these encoder-decoder models include T5 and BART.

Encoder-decoder models are commonly employed in natural language processing tasks that require both understanding input sequences and producing output sequences, which may vary in length and structure. These models excel in situations where there is a complex relationship between the input and output sequences, and it is essential to capture the connections between elements in both. Typical applications of encoder-decoder models include text translation and summarization.

2. Encoder-Decoder Architecture

The original transformer is composed of an encoder stack and a decoder stack:

2.1. Encoder

  • Processes the input sequence.
  • Contains stacked layers of self-attention and FFN sub-layers.

2.2. Decoder

  • Generates the output sequence.
  • Uses masked self-attention to ensure predictions depend only on previous tokens (see the sketch after this list).
  • Includes encoder-decoder attention to incorporate information from the encoder.
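As a small illustration of that masking (a NumPy sketch in the style of section 1.2; the sizes are arbitrary assumptions), a causal mask sets the scores for future positions to negative infinity before the softmax, so their attention weights become zero:

```python
import numpy as np

seq_len = 4
scores = np.random.randn(seq_len, seq_len)   # raw attention scores (query x key)

# Causal mask: position i may only attend to positions <= i
mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
scores[mask] = -np.inf                        # block attention to future tokens

weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
print(np.round(weights, 2))                   # the upper triangle is all zeros
```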

3. Data Flow Through Transformers

  1. Input Embedding: Text is tokenized and embedded into high-dimensional vectors.
  2. Positional Encoding: Sequential information is added to embeddings.
  3. Encoder Layers:
  • Self-attention identifies relationships within the input sequence.
  • The FFN processes each token independently.

  4. Decoder Layers:
  • Masked self-attention processes the target sequence.
  • Encoder-decoder attention incorporates information from the encoder.
  5. Output Generation: The decoder produces the final predictions.
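To make this flow concrete, here is a minimal sketch using PyTorch’s built-in nn.Transformer (positional encoding is omitted for brevity, and the vocabulary size, dimensions, and sequence lengths are arbitrary assumptions):

```python
import torch
import torch.nn as nn

vocab_size, d_model = 1000, 512

embed = nn.Embedding(vocab_size, d_model)            # 1. token embedding
model = nn.Transformer(d_model=d_model, nhead=8,
                       num_encoder_layers=6, num_decoder_layers=6,
                       batch_first=True)             # 3-4. encoder and decoder stacks
to_vocab = nn.Linear(d_model, vocab_size)            # 5. project back to vocabulary logits

src = torch.randint(0, vocab_size, (1, 10))          # source token ids (batch, seq_len)
tgt = torch.randint(0, vocab_size, (1, 7))           # target tokens generated so far

# Causal mask so each target position attends only to earlier positions
tgt_mask = model.generate_square_subsequent_mask(tgt.size(1))

out = model(embed(src), embed(tgt), tgt_mask=tgt_mask)
logits = to_vocab(out)                               # (1, 7, vocab_size) next-token scores
print(logits.shape)
```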

4. Integration into LLMs

Transformers are the backbone of LLMs, enabling their impressive capabilities:

  • GPT (Generative Pre-trained Transformer): Decoder-only transformer for generative tasks.
  • BERT (Bidirectional Encoder Representations from Transformers): Encoder-only transformer for understanding tasks.
  • T5 (Text-to-Text Transfer Transformer): Combines encoder and decoder for versatility.

These models leverage massive datasets and billions of parameters to achieve state-of-the-art performance in NLP tasks.

Conclusion

The transformer architecture, with its innovations like self-attention, multi-head attention, and feedforward networks, has transformed the landscape of NLP and AI. Its ability to process sequences in parallel and capture long-range dependencies makes it the perfect foundation for LLMs.

Further Reading:

  • Tay et al., 2020. Efficient Transformers: A Survey.
  • Radford et al., 2021. Learning Transferable Visual Models From Natural Language Supervision.
  • Vig, J., 2019. A Multiscale Visualization of Attention in the Transformer Model.
  • Jaegle et al., 2021. Perceiver: General Perception with Iterative Attention.

Stay tuned for Part 5: Training Large Language Models — An In-Depth Guide, where we’ll dive into the essential steps, tools, and techniques that power the training of LLMs. From data preprocessing and tokenization to hardware requirements, we’ll cover it all to ensure your models are ready for action!!
