import trax
from trax import layers as tl
import trax.fastmath.numpy as np
import numpy

# Setting random seeds
# set random seeds to make this notebook easier to replicate
from trax import fastmath

seed = 10
rng = fastmath.random.get_prng(seed)
# trax.supervised.trainer_lib.init_random_number_generators(10)
numpy.random.seed(seed)
L2 Normalization
Before building the model you will need to define a function that applies L2 normalization to a tensor. This is very important because in this week’s assignment you will create a custom loss function which expects the tensors it receives to be normalized. Luckily this is pretty straightforward:
def normalize(x):
    return x / np.sqrt(np.sum(x * x, axis=-1, keepdims=True))
Notice that the denominator can be replaced by np.linalg.norm(x, axis=-1, keepdims=True) to achieve the same result, and that Trax's numpy (trax.fastmath.numpy) is being used within the function.
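For reference, here is that alternative written out as a minimal sketch (normalize_alt is a hypothetical name, not part of the original notebook):

def normalize_alt(x):
    # np.linalg.norm computes the same L2 norm along the last axis
    # as np.sqrt(np.sum(x * x, axis=-1, keepdims=True))
    return x / np.linalg.norm(x, axis=-1, keepdims=True)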
tensor = numpy.random.random((2,5))
print(f'The tensor is of type: {type(tensor)}\n\nAnd looks like this:\n\n{tensor}')
The tensor is of type: <class 'numpy.ndarray'>
And looks like this:
[[0.77132064 0.02075195 0.63364823 0.74880388 0.49850701]
[0.22479665 0.19806286 0.76053071 0.16911084 0.08833981]]
norm_tensor = normalize(tensor)
print(f'The normalized tensor is of type: {type(norm_tensor)}\n\nAnd looks like this:\n\n{norm_tensor}')
The normalized tensor is of type: <class 'jaxlib.xla_extension.ArrayImpl'>
And looks like this:
[[0.5739379 0.01544148 0.4714962 0.5571832 0.37093794]
[0.26781026 0.23596111 0.9060541 0.20146926 0.10524315]]
Notice that the initial tensor was converted from a NumPy array to a JAX array in the process.
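If you need a plain NumPy array again, numpy.asarray converts back; a minimal sketch (back_to_numpy is a hypothetical name):

back_to_numpy = numpy.asarray(norm_tensor)  # JAX arrays support the NumPy array protocol
print(type(back_to_numpy))                  # <class 'numpy.ndarray'>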
Siamese Model
To create a Siamese model you will first need to create an LSTM model using the Serial combinator layer, and then use another combinator layer called Parallel to create the Siamese model. You should be familiar with the following layers (notice each layer can be clicked to go to the docs):

- Serial: a combinator layer that allows you to stack layers serially using function composition.
- Embedding: maps discrete tokens to vectors. Its weight matrix has shape (vocabulary length x dimension of output vectors). The dimension of output vectors (also called d_feature) is the number of elements in the word embedding.
- LSTM: the LSTM layer. It leverages another Trax layer called LSTMCell. The number of units should be specified and should match the number of elements in the word embedding.
- Mean: computes the mean across a desired axis. Mean uses one tensor axis to form groups of values and replaces each group with the mean value of that group (see the shape sketch after this list).
- Fn: a layer with no weights that applies the function f, which should be specified using lambda syntax.
- Parallel: a combinator layer (like Serial) that applies a list of layers in parallel to its inputs.
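Since Mean is what turns the per-token LSTM outputs into a single vector per input, here is a minimal sketch of its effect on shapes (the array and its shapes are illustrative assumptions; weightless Trax layers can be called directly on arrays):

# Mean(axis=1) collapses the sequence axis:
# (batch, seq_len, d_feature) -> (batch, d_feature)
seq_batch = numpy.random.random((2, 3, 4))  # hypothetical batch of per-token vectors
print(tl.Mean(axis=1)(seq_batch).shape)     # expected: (2, 4)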
Putting everything together, the Siamese model will look like this:
vocab_size = 500
model_dimension = 128

# Define the LSTM model
LSTM = tl.Serial(
    tl.Embedding(vocab_size=vocab_size, d_feature=model_dimension),
    tl.LSTM(model_dimension),
    tl.Mean(axis=1),
    tl.Fn('Normalize', lambda x: normalize(x))
)

# Use the Parallel combinator to create a Siamese model out of the LSTM
Siamese = tl.Parallel(LSTM, LSTM)
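As a sanity check, you can initialize the model and push a pair of dummy token batches through it; a minimal sketch, assuming Trax's usual init/call API (the batch shapes and the names q1/q2 are illustrative assumptions, not part of the original notebook):

from trax import shapes

# Two hypothetical batches of 2 tokenized inputs, each of length 10
q1 = numpy.random.randint(0, vocab_size, size=(2, 10))
q2 = numpy.random.randint(0, vocab_size, size=(2, 10))

# Initialize the weights of both branches from the input signature
Siamese.init(shapes.signature((q1, q2)))

# Each branch outputs one L2-normalized vector per input
v1, v2 = Siamese((q1, q2))
print(v1.shape, v2.shape)  # expected: (2, 128) (2, 128)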
Next is a helper function that prints information for every layer (sublayer within Serial):
def show_layers(model, layer_prefix):
    print(f"Total layers: {len(model.sublayers)}\n")
    for i in range(len(model.sublayers)):
        print('========')
        print(f'{layer_prefix}_{i}: {model.sublayers[i]}\n')

print('Siamese model:\n')
show_layers(Siamese, 'Parallel.sublayers')

print('Detail of LSTM models:\n')
show_layers(LSTM, 'Serial.sublayers')
Siamese model:
Total layers: 2
========
Parallel.sublayers_0: Serial[
Embedding_500_128
LSTM_128
Mean
Normalize
]
========
Parallel.sublayers_1: Serial[
Embedding_500_128
LSTM_128
Mean
Normalize
]
Detail of LSTM models:
Total layers: 4
========
Serial.sublayers_0: Embedding_500_128
========
Serial.sublayers_1: LSTM_128
========
Serial.sublayers_2: Mean
========
Serial.sublayers_3: Normalize
Try changing the parameters defined before the Siamese model and see how it changes!
You will actually train this model in this week's assignment. For now, you should be more familiar with creating Siamese models using Trax.
Keep it up!
Citation
BibTeX citation:
@online{bochman2020,
author = {Bochman, Oren},
title = {Creating a {Siamese} Model Using {Trax:} {Ungraded} {Lecture}
{Notebook}},
date = {2020-11-19},
url = {https://orenbochman.github.io/notes-nlp/notes/c3w4/lab01.html},
langid = {en}
}