In this chapter, we implement a GPT-like LLM architecture; the next chapter will focus on training this LLM
4.1 Coding an LLM architecture
Chapter 1 discussed models like GPT and Llama, which generate words sequentially and are based on the decoder part of the original transformer architecture
Therefore, these LLMs are often referred to as “decoder-like” LLMs
Compared to conventional deep learning models, LLMs are larger, mainly due to their vast number of parameters, not the amount of code
We’ll see that many elements are repeated in an LLM’s architecture
In previous chapters, we used small embedding dimensions for token inputs and outputs for ease of illustration, ensuring they fit on a single page
In this chapter, we consider embedding and model sizes akin to a small GPT-2 model
We’ll specifically code the architecture of the smallest GPT-2 model (124 million parameters), as outlined in Radford et al.’s Language Models are Unsupervised Multitask Learners (note that the initial report lists it as 117M parameters, but this was later corrected in the model weight repository)
Chapter 5 will show how to load pretrained weights into our implementation, which will be compatible with model sizes of 345, 762, and 1542 million parameters
Configuration details for the 124 million parameter GPT-2 model include:
In [2]:
GPT_CONFIG_124M = {
    "vocab_size": 50257,     # Vocabulary size
    "context_length": 1024,  # Context length
    "emb_dim": 768,          # Embedding dimension
    "n_heads": 12,           # Number of attention heads
    "n_layers": 12,          # Number of layers
    "drop_rate": 0.1,        # Dropout rate
    "qkv_bias": False        # Query-Key-Value bias
}
We use short variable names to avoid long lines of code later
"vocab_size" indicates a vocabulary size of 50,257 words, supported by the BPE tokenizer discussed in Chapter 2
"context_length" represents the model’s maximum input token count, as enabled by positional embeddings covered in Chapter 2
"emb_dim" is the embedding size for token inputs, converting each input token into a 768-dimensional vector
"n_heads" is the number of attention heads in the multi-head attention mechanism implemented in Chapter 3
"n_layers" is the number of transformer blocks within the model, which we’ll implement in upcoming sections
"drop_rate" is the dropout mechanism’s intensity, discussed in Chapter 3; 0.1 means dropping 10% of hidden units during training to mitigate overfitting
"qkv_bias" decides if the Linear layers in the multi-head attention mechanism (from Chapter 3) should include a bias vector when computing query (Q), key (K), and value (V) tensors; we’ll disable this option, which is standard practice in modern LLMs; however, we’ll revisit this later when loading pretrained GPT-2 weights from OpenAI into our reimplementation in chapter 5
In [3]:
import torch
import torch.nn as nn


class DummyGPTModel(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.tok_emb = nn.Embedding(cfg["vocab_size"], cfg["emb_dim"])
        self.pos_emb = nn.Embedding(cfg["context_length"], cfg["emb_dim"])
        self.drop_emb = nn.Dropout(cfg["drop_rate"])

        # Use a placeholder for TransformerBlock
        self.trf_blocks = nn.Sequential(
            *[DummyTransformerBlock(cfg) for _ in range(cfg["n_layers"])]
        )

        # Use a placeholder for LayerNorm
        self.final_norm = DummyLayerNorm(cfg["emb_dim"])
        self.out_head = nn.Linear(cfg["emb_dim"], cfg["vocab_size"], bias=False)

    def forward(self, in_idx):
        batch_size, seq_len = in_idx.shape
        tok_embeds = self.tok_emb(in_idx)
        pos_embeds = self.pos_emb(torch.arange(seq_len, device=in_idx.device))
        x = tok_embeds + pos_embeds
        x = self.drop_emb(x)
        x = self.trf_blocks(x)
        x = self.final_norm(x)
        logits = self.out_head(x)
        return logits


class DummyTransformerBlock(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        # A simple placeholder

    def forward(self, x):
        # This block does nothing and just returns its input
        return x


class DummyLayerNorm(nn.Module):
    def __init__(self, normalized_shape, eps=1e-5):
        super().__init__()
        # The parameters here are just to mimic the LayerNorm interface

    def forward(self, x):
        # This layer does nothing and just returns its input
        return x
Since the model is untrained and its outputs are just random numbers, any discrepancy with the values shown here is not a reason for concern, and you can proceed with the remainder of the chapter without issues
One possible reason for this discrepancy is the differing behavior of nn.Dropout across operating systems, depending on how PyTorch was compiled, as discussed here on the PyTorch issue tracker
4.2 Normalizing activations with layer normalization
Layer normalization, also known as LayerNorm (Ba et al. 2016), centers the activations of a neural network layer around a mean of 0 and normalizes their variance to 1
This stabilizes training and enables faster convergence to effective weights
Layer normalization is applied both before and after the multi-head attention module within the transformer block, which we will implement later; it’s also applied before the final output layer
Let’s see how layer normalization works by passing a small input sample through a simple neural network layer:
In [6]:
torch.manual_seed(123)

# create 2 training examples with 5 dimensions (features) each
batch_example = torch.randn(2, 5)

layer = nn.Sequential(nn.Linear(5, 6), nn.ReLU())
out = layer(batch_example)
print(out)
The normalization is applied to each of the two inputs (rows) independently; using dim=-1 applies the calculation across the last dimension (in this case, the feature dimension) instead of the row dimension
Subtracting the mean and dividing by the square-root of the variance (standard deviation) centers the inputs to have a mean of 0 and a variance of 1 across the column (feature) dimension:
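As a minimal sketch of this calculation (reusing the out tensor produced by the layer above), we can compute the mean and variance along the last dimension and normalize:

# Compute mean and variance across the feature dimension (dim=-1)
mean = out.mean(dim=-1, keepdim=True)
var = out.var(dim=-1, keepdim=True)

# Subtract the mean and divide by the standard deviation
out_norm = (out - mean) / torch.sqrt(var)
print("Normalized layer outputs:\n", out_norm)
print("Mean:\n", out_norm.mean(dim=-1, keepdim=True))
print("Variance:\n", out_norm.var(dim=-1, keepdim=True))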
Now, using the same idea, we can implement a LayerNorm class:
In [10]:
class LayerNorm(nn.Module):
    def __init__(self, emb_dim):
        super().__init__()
        self.eps = 1e-5
        self.scale = nn.Parameter(torch.ones(emb_dim))
        self.shift = nn.Parameter(torch.zeros(emb_dim))

    def forward(self, x):
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, keepdim=True, unbiased=False)
        norm_x = (x - mean) / torch.sqrt(var + self.eps)
        return self.scale * norm_x + self.shift
Scale and shift
Note that in addition to performing the normalization by subtracting the mean and dividing by the square root of the variance (i.e., the standard deviation), we added two trainable parameters, a scale and a shift parameter
The initial scale (multiplying by 1) and shift (adding 0) values don’t have any effect; however, scale and shift are trainable parameters that the LLM automatically adjusts during training if it is determined that doing so would improve the model’s performance on its training task
This allows the model to learn appropriate scaling and shifting that best suit the data it is processing
Note that we also add a small value (eps) to the variance before computing its square root; this avoids division-by-zero errors if the variance is 0
Biased variance
In the variance calculation above, setting unbiased=False means using the formula \(\frac{\sum_i (x_i - \bar{x})^2}{n}\) to compute the variance, where n is the sample size (here, the number of features or columns); this formula does not include Bessel’s correction (which uses n-1 in the denominator) and thus provides a biased estimate of the variance
For LLMs, where the embedding dimension n is very large, the difference between using n and n-1 is negligible
However, GPT-2 was trained with a biased variance in the normalization layers, which is why we also adopted this setting for compatibility reasons with the pretrained weights that we will load in later chapters
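To sanity-check the LayerNorm implementation, we can apply it to the batch_example tensor from above and verify that each row ends up with a mean of roughly 0 and a (biased) variance of roughly 1; a quick sketch:

ln = LayerNorm(emb_dim=5)
out_ln = ln(batch_example)

# Each row should now have mean ~0 and variance ~1 across the feature dimension
print("Mean:\n", out_ln.mean(dim=-1, keepdim=True))
print("Variance:\n", out_ln.var(dim=-1, unbiased=False, keepdim=True))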
4.3 Implementing a feed forward network with GELU activations
In this section, we implement a small neural network submodule that is used as part of the transformer block in LLMs
We start with the activation function
In deep learning, ReLU (Rectified Linear Unit) activation functions are commonly used due to their simplicity and effectiveness in various neural network architectures
In LLMs, various other types of activation functions are used beyond the traditional ReLU; two notable examples are GELU (Gaussian Error Linear Unit) and SwiGLU (Swish-Gated Linear Unit)
GELU and SwiGLU are more complex, smooth activation functions incorporating Gaussian and sigmoid-gated linear units, respectively, offering better performance for deep learning models, unlike the simpler, piecewise linear function of ReLU
GELU (Hendrycks and Gimpel 2016) can be implemented in several ways; the exact version is defined as GELU(x)=x⋅Φ(x), where Φ(x) is the cumulative distribution function of the standard Gaussian distribution.
In practice, it’s common to implement a computationally cheaper approximation: \(\text{GELU}(x) \approx 0.5 \cdot x \cdot \left(1 + \tanh\left[\sqrt{\frac{2}{\pi}} \cdot \left(x + 0.044715 \cdot x^3\right)\right]\right)\) (the original GPT-2 model was also trained with this approximation)
In [13]:
class GELU(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return 0.5 * x * (1 + torch.tanh(
            torch.sqrt(torch.tensor(2.0 / torch.pi)) *
            (x + 0.044715 * torch.pow(x, 3))
        ))
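The transformer block later in this chapter uses this GELU activation inside a FeedForward submodule (referenced there as FeedForward(cfg)); a minimal sketch consistent with that usage, following GPT-2’s design of expanding the embedding dimension by a factor of 4 before projecting it back, could look as follows:

class FeedForward(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.layers = nn.Sequential(
            # Expand the embedding dimension by a factor of 4,
            # apply GELU, then project back to the original dimension
            nn.Linear(cfg["emb_dim"], 4 * cfg["emb_dim"]),
            GELU(),
            nn.Linear(4 * cfg["emb_dim"], cfg["emb_dim"]),
        )

    def forward(self, x):
        return self.layers(x)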
4.4 Adding shortcut connections
Next, let’s talk about the concept behind shortcut connections, also called skip or residual connections
Originally, shortcut connections were proposed in deep networks for computer vision (residual networks) to mitigate vanishing gradient problems
A shortcut connection creates an alternative shorter path for the gradient to flow through the network
This is achieved by adding the output of one layer to the output of a later layer, usually skipping one or more layers in between
Let’s illustrate this idea with a small example network:
In code, it looks like this:
In [18]:
class ExampleDeepNeuralNetwork(nn.Module):
    def __init__(self, layer_sizes, use_shortcut):
        super().__init__()
        self.use_shortcut = use_shortcut
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Linear(layer_sizes[0], layer_sizes[1]), GELU()),
            nn.Sequential(nn.Linear(layer_sizes[1], layer_sizes[2]), GELU()),
            nn.Sequential(nn.Linear(layer_sizes[2], layer_sizes[3]), GELU()),
            nn.Sequential(nn.Linear(layer_sizes[3], layer_sizes[4]), GELU()),
            nn.Sequential(nn.Linear(layer_sizes[4], layer_sizes[5]), GELU())
        ])

    def forward(self, x):
        for layer in self.layers:
            # Compute the output of the current layer
            layer_output = layer(x)
            # Check if shortcut can be applied
            if self.use_shortcut and x.shape == layer_output.shape:
                x = x + layer_output
            else:
                x = layer_output
        return x


def print_gradients(model, x):
    # Forward pass
    output = model(x)
    target = torch.tensor([[0.]])

    # Calculate loss based on how close the target
    # and output are
    loss = nn.MSELoss()
    loss = loss(output, target)

    # Backward pass to calculate the gradients
    loss.backward()

    for name, param in model.named_parameters():
        if 'weight' in name:
            # Print the mean absolute gradient of the weights
            print(f"{name} has gradient mean of {param.grad.abs().mean().item()}")
Let’s print the gradient values first without shortcut connections:
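One way to set up this comparison (the layer sizes, sample input, and random seed below are illustrative assumptions; the printed gradient values depend on these choices):

layer_sizes = [3, 3, 3, 3, 3, 1]  # 5 layers mapping 3 inputs down to 1 output
sample_input = torch.tensor([[1., 0., -1.]])

torch.manual_seed(123)  # seed so both models start from the same initial weights
model_without_shortcut = ExampleDeepNeuralNetwork(layer_sizes, use_shortcut=False)
print_gradients(model_without_shortcut, sample_input)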
layers.0.0.weight has gradient mean of 0.00020173587836325169
layers.1.0.weight has gradient mean of 0.00012011159560643137
layers.2.0.weight has gradient mean of 0.0007152039906941354
layers.3.0.weight has gradient mean of 0.0013988736318424344
layers.4.0.weight has gradient mean of 0.005049645435065031
Next, let’s print the gradient values with shortcut connections:
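And analogously with shortcut connections enabled (re-seeding so the initial weights match the previous model):

torch.manual_seed(123)
model_with_shortcut = ExampleDeepNeuralNetwork(layer_sizes, use_shortcut=True)
print_gradients(model_with_shortcut, sample_input)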
layers.0.0.weight has gradient mean of 0.22169792652130127
layers.1.0.weight has gradient mean of 0.20694106817245483
layers.2.0.weight has gradient mean of 0.32896995544433594
layers.3.0.weight has gradient mean of 0.2665732204914093
layers.4.0.weight has gradient mean of 1.3258540630340576
As we can see based on the output above, shortcut connections prevent the gradients from vanishing in the early layers (towards layers.0)
We will use this concept of a shortcut connection next when we implement a transformer block
4.5 Connecting attention and linear layers in a transformer block
In this section, we now combine the previous concepts into a so-called transformer block
A transformer block combines the causal multi-head attention module from the previous chapter with the linear layers of the feed forward neural network we implemented in an earlier section
In addition, the transformer block also uses dropout and shortcut connections
In [21]:
# If the `previous_chapters.py` file is not available locally,
# you can import it from the `llms-from-scratch` PyPI package.
# For details, see: https://github.com/rasbt/LLMs-from-scratch/tree/main/pkg
# E.g.,
# from llms_from_scratch.ch03 import MultiHeadAttention

from previous_chapters import MultiHeadAttention


class TransformerBlock(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.att = MultiHeadAttention(
            d_in=cfg["emb_dim"],
            d_out=cfg["emb_dim"],
            context_length=cfg["context_length"],
            num_heads=cfg["n_heads"],
            dropout=cfg["drop_rate"],
            qkv_bias=cfg["qkv_bias"])
        self.ff = FeedForward(cfg)
        self.norm1 = LayerNorm(cfg["emb_dim"])
        self.norm2 = LayerNorm(cfg["emb_dim"])
        self.drop_shortcut = nn.Dropout(cfg["drop_rate"])

    def forward(self, x):
        # Shortcut connection for attention block
        shortcut = x
        x = self.norm1(x)
        x = self.att(x)   # Shape [batch_size, num_tokens, emb_size]
        x = self.drop_shortcut(x)
        x = x + shortcut  # Add the original input back

        # Shortcut connection for feed forward block
        shortcut = x
        x = self.norm2(x)
        x = self.ff(x)
        x = self.drop_shortcut(x)
        x = x + shortcut  # Add the original input back

        return x
Suppose we have 2 input samples with 6 tokens each, where each token is a 768-dimensional embedding vector; then this transformer block applies self-attention, followed by linear layers, to produce an output of the same shape
You can think of the output as an augmented version of the context vectors we discussed in the previous chapter
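A quick sketch illustrating this shape-preserving behavior (the random input below is just for illustration):

torch.manual_seed(123)

x = torch.rand(2, 6, 768)  # 2 samples, 6 tokens each, 768-dimensional embeddings
block = TransformerBlock(GPT_CONFIG_124M)
output = block(x)

print("Input shape:", x.shape)    # torch.Size([2, 6, 768])
print("Output shape:", output.shape)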
4.6 Coding the GPT model
We are almost there: now let’s plug the transformer block into the architecture we coded at the very beginning of this chapter so that we obtain a usable GPT architecture
Note that the transformer block is repeated multiple times; in the case of the smallest 124M GPT-2 model, we repeat it 12 times:
The corresponding code implementation, where cfg["n_layers"] = 12:
In [23]:
class GPTModel(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.tok_emb = nn.Embedding(cfg["vocab_size"], cfg["emb_dim"])
        self.pos_emb = nn.Embedding(cfg["context_length"], cfg["emb_dim"])
        self.drop_emb = nn.Dropout(cfg["drop_rate"])

        self.trf_blocks = nn.Sequential(
            *[TransformerBlock(cfg) for _ in range(cfg["n_layers"])]
        )

        self.final_norm = LayerNorm(cfg["emb_dim"])
        self.out_head = nn.Linear(cfg["emb_dim"], cfg["vocab_size"], bias=False)

    def forward(self, in_idx):
        batch_size, seq_len = in_idx.shape
        tok_embeds = self.tok_emb(in_idx)
        pos_embeds = self.pos_emb(torch.arange(seq_len, device=in_idx.device))
        x = tok_embeds + pos_embeds  # Shape [batch_size, num_tokens, emb_size]
        x = self.drop_emb(x)
        x = self.trf_blocks(x)
        x = self.final_norm(x)
        logits = self.out_head(x)
        return logits
Using the configuration of the 124M parameter model, we can now instantiate this GPT model with random initial weights as follows:
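For example (the seed is an arbitrary choice for reproducibility; this model variable is the one used in the parameter count below):

torch.manual_seed(123)
model = GPTModel(GPT_CONFIG_124M)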
However, a quick note about its size: we previously referred to it as a 124M parameter model; we can double check this number as follows:
In [25]:
total_params = sum(p.numel() for p in model.parameters())
print(f"Total number of parameters: {total_params:,}")
Total number of parameters: 163,009,536
As we see above, this model has 163M, not 124M parameters; why?
In the original GPT-2 paper, the researchers applied weight tying, i.e., they reused the token embedding layer (tok_emb) as the output layer, which is equivalent to setting self.out_head.weight = self.tok_emb.weight
The token embedding layer projects the 50,257-dimensional one-hot encoded input tokens to a 768-dimensional embedding representation
The output layer projects 768-dimensional embeddings back into a 50,257-dimensional representation so that we can convert these back into words (more about that in the next section)
So, the embedding and output layer have the same number of weight parameters, as we can see based on the shape of their weight matrices
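We can verify this by printing the weight shapes of both layers; a quick sketch:

print("Token embedding layer shape:", model.tok_emb.weight.shape)  # torch.Size([50257, 768])
print("Output layer shape:", model.out_head.weight.shape)          # torch.Size([50257, 768])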
Correspondingly, if we subtracted the number of parameters of the output layer, we’d get a 124M parameter model:
In [27]:
total_params_gpt2 = total_params - sum(p.numel() for p in model.out_head.parameters())
print(f"Number of trainable parameters considering weight tying: {total_params_gpt2:,}")
Number of trainable parameters considering weight tying: 124,412,160
In practice, I found it easier to train the model without weight-tying, which is why we didn’t implement it here
However, we will revisit and apply this weight-tying idea later when we load the pretrained weights in chapter 5
Lastly, we can compute the memory requirements of the model as follows, which can be a helpful reference point:
In [28]:
# Calculate the total size in bytes (assuming float32, 4 bytes per parameter)
total_size_bytes = total_params * 4

# Convert to megabytes
total_size_mb = total_size_bytes / (1024 * 1024)

print(f"Total size of the model: {total_size_mb:.2f} MB")
Total size of the model: 621.83 MB
Exercise: you can also try the other model configurations referenced in the GPT-2 paper, listed below (see the sketch after the list for one way to adapt the config dictionary)
GPT2-small (the 124M configuration we already implemented):
“emb_dim” = 768
“n_layers” = 12
“n_heads” = 12
GPT2-medium:
“emb_dim” = 1024
“n_layers” = 24
“n_heads” = 16
GPT2-large:
“emb_dim” = 1280
“n_layers” = 36
“n_heads” = 20
GPT2-XL:
“emb_dim” = 1600
“n_layers” = 48
“n_heads” = 25
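As mentioned in the exercise above, one possible way to try these configurations is to copy and update the base dictionary; GPT2-medium is shown here as an example (the variable names are just suggestions):

GPT_CONFIG_MEDIUM = GPT_CONFIG_124M.copy()
GPT_CONFIG_MEDIUM.update({"emb_dim": 1024, "n_layers": 24, "n_heads": 16})

model_medium = GPTModel(GPT_CONFIG_MEDIUM)
total_params_medium = sum(p.numel() for p in model_medium.parameters())
print(f"Total number of parameters: {total_params_medium:,}")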
4.7 Generating text
LLMs like the GPT model we implemented above are used to generate one word at a time
The following generate_text_simple function implements greedy decoding, which is a simple and fast method to generate text
In greedy decoding, at each step, the model chooses the word (or token) with the highest probability as its next output (the highest logit corresponds to the highest probability, so we technically wouldn’t even have to compute the softmax function explicitly)
In the next chapter, we will implement a more advanced generate_text function
The figure below depicts how the GPT model, given an input context, generates the next word token
In [29]:
def generate_text_simple(model, idx, max_new_tokens, context_size):
    # idx is (batch, n_tokens) array of indices in the current context
    for _ in range(max_new_tokens):
        # Crop current context if it exceeds the supported context size
        # E.g., if LLM supports only 5 tokens, and the context size is 10
        # then only the last 5 tokens are used as context
        idx_cond = idx[:, -context_size:]

        # Get the predictions
        with torch.no_grad():
            logits = model(idx_cond)

        # Focus only on the last time step
        # (batch, n_tokens, vocab_size) becomes (batch, vocab_size)
        logits = logits[:, -1, :]

        # Apply softmax to get probabilities
        probas = torch.softmax(logits, dim=-1)  # (batch, vocab_size)

        # Get the idx of the vocab entry with the highest probability value
        idx_next = torch.argmax(probas, dim=-1, keepdim=True)  # (batch, 1)

        # Append sampled index to the running sequence
        idx = torch.cat((idx, idx_next), dim=1)  # (batch, n_tokens+1)

    return idx
The generate_text_simple function above implements an iterative process that creates one token at a time
Let’s prepare an input example:
In [30]:
start_context ="Hello, I am"encoded = tokenizer.encode(start_context)print("encoded:", encoded)encoded_tensor = torch.tensor(encoded).unsqueeze(0)print("encoded_tensor.shape:", encoded_tensor.shape)