
BERT Variants and their Differences

  • July 12, 2023

Meet the Author : Mr. Bharani Kumar

Bharani Kumar Depuru is a well-known IT personality from Hyderabad. He is the Founder and Director of Innodatatics Pvt Ltd and 360DigiTMG. An alumnus of IIT and ISB with more than 17 years of experience, he has held prominent positions at IT majors such as HSBC, ITC Infotech, Infosys, and Deloitte. He is a sought-after IT consultant specialising in Industrial Revolution 4.0 implementation, Data Analytics practice setup, Artificial Intelligence, Big Data Analytics, Industrial IoT, Business Intelligence, and Business Management. Bharani Kumar is also the chief trainer at 360DigiTMG, with more than ten years of training experience, and has been making the IT transition journey easy for his students. 360DigiTMG is at the forefront of delivering quality education, thereby bridging the gap between academia and industry.


Over time, BERT has evolved and spread into many other fields. BERT (Bidirectional Encoder Representations from Transformers), which is derived from the Transformer architecture, trains the model to predict words from their surrounding context in both directions. BERT models undergo extensive pre-training on billions of words of unannotated text, which enables us to fine-tune the model for specialised tasks and particular datasets. Training doesn't need to start from scratch: through transfer learning, a pre-trained BERT can reach high accuracy with far less computation. Open source and heavily studied, BERT generates cutting-edge predictions, and since its debut a number of alternative versions have been introduced. Sentiment analysis, sentence prediction, abstractive summarization, question answering, natural language inference, and a host of other NLP tasks have all been revolutionised by BERT. BERT comes in several model configurations, the most basic of which is BERT-Base with 12 encoder layers; BERT-Large follows with 24 encoder layers. Some common characteristics of the fundamental BERT variants are shown in the figure below. Over time, many new models inspired by the BERT architecture have been trained in other languages or optimised on domain-specific datasets, and improved versions are released frequently. This article discusses the variations and traits of a few of the publicly available pre-trained BERT models.


Figure 1 Common Characteristics of pre-trained NLP models (Source: Humboldt Universität)
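To make the transfer-learning workflow described above concrete, here is a minimal sketch using the Hugging Face Transformers library (referenced at the end of this article). The checkpoint name and the two-label sentiment setup are illustrative choices, not part of any specific experiment discussed here.

# A minimal sketch of fine-tuning a pre-trained BERT for a downstream task,
# using Hugging Face Transformers. The two-label setup (e.g. binary sentiment
# analysis) is an illustrative assumption.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # classifier head is newly initialised
)

# Tokenize a sample sentence and run a forward pass; fine-tuning would
# continue from here with task-specific labelled data.
inputs = tokenizer("BERT makes transfer learning straightforward.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2]) - one score per class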


RoBERTa

Known as the ‘Robustly Optimized BERT Pretraining Approach’, RoBERTa is a BERT variant developed to enhance the training phase. RoBERTa was created by training the BERT model for longer, on more data, with longer sequences and larger mini-batches. The researchers behind RoBERTa obtained substantially improved results with some modifications to BERT's hyperparameters. Unlike BERT, RoBERTa uses the following techniques for its robust training.

  • Dynamic Masking

In BERT's original pre-processing, sequences are masked statically: the masking pattern is generated once during data preparation, so the model keeps seeing the same masked positions. RoBERTa instead uses dynamic masking, where a new pattern covering 15% of the tokens is generated every time a sequence is fed to the model. This lets the model see many unique masking patterns for the same sequence while avoiding the need for far more training examples (a short sketch of dynamic masking follows this list).

  • Trained without Next Sentence Prediction (NSP)

The RoBERTa model uses full sentences as input, sampled contiguously from the data with a separator token marking document boundaries. The researchers also experimented with removing the next sentence prediction loss and found that eliminating NSP slightly improves downstream task performance.

  • Trained on Large Mini-Batches

The researchers found that the model produced superior results when trained with large mini-batches, and greater improvements were obtained by increasing the batch size along with the training data. The original BERT is trained for 1 million steps with a batch size of just 256 sequences; RoBERTa can instead be trained for 125,000 steps with a batch size of 2,000 sequences.
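As a concrete illustration of the dynamic masking idea described above, the following minimal sketch uses Hugging Face's DataCollatorForLanguageModeling, which samples a fresh masking pattern (15% of tokens) every time a batch is built; the checkpoint name is simply a standard public RoBERTa checkpoint used for illustration.

# A minimal sketch of dynamic masking: the collator draws a new 15% masking
# pattern each time a batch is assembled, so the same sentence is masked
# differently across epochs.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

encoded = tokenizer("RoBERTa masks tokens dynamically at training time.")
# Calling the collator twice on the same example generally yields
# different mask positions.
batch_1 = collator([encoded])
batch_2 = collator([encoded])
print(batch_1["input_ids"])
print(batch_2["input_ids"])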

ALBERT

Known as ‘A Lite BERT’, ALBERT was proposed to improve the training and results of the BERT architecture by using parameter-sharing and factorisation techniques. The BERT model contains millions of parameters; BERT-Base alone holds about 110 million, which makes it hard to train, and so many parameters also slow down computation. To overcome these challenges, ALBERT was introduced with far fewer parameters than BERT. ALBERT uses two techniques:

  • Cross-Layer Parameter Sharing

Cross-layer parameter sharing is a method for reducing the number of parameters in BERT. BERT has N encoder layers; BERT-Base, for instance, has 12. In standard BERT, the parameters of every encoder layer are learned separately during training. With cross-layer sharing, rather than learning parameters for every encoder layer, only the first encoder layer's parameters are learned and all the other encoder layers reuse them. Every encoder layer contains two sub-layers: multi-head attention and a feed-forward network. ALBERT offers several ways to share these sub-layers (a minimal sketch follows this list):


Figure 2 Bidirectional Encoder Representations from Transformers (BERT) (Source: Humboldt Universität)

    • All-shared – the parameters of the entire encoder layer, including both sub-layers (the multi-head attention layer and the feed-forward layer), are shared across all encoder layers of the model.
    • Feed-forward – only the feed-forward sub-layer parameters are shared across all encoder layers of the model.
    • Shared attention – only the multi-head attention sub-layer parameters are shared across all encoder layers of the model.

By default, ALBERT uses the all-shared technique, where the parameters of both the feed-forward and attention sub-layers are shared across all encoder layers.
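The toy sketch below illustrates the all-shared idea in plain PyTorch: one encoder layer is created and reused at every depth, so its attention and feed-forward parameters are shared across all layers. This is only an illustration of the concept, not ALBERT's actual implementation, and the layer sizes are arbitrary.

# A toy sketch of cross-layer parameter sharing (all-shared): a single
# transformer encoder layer is applied N times, so all "layers" reuse
# the same weights.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, hidden_size=256, num_heads=4, num_layers=12):
        super().__init__()
        # One set of parameters (attention + feed-forward) shared by all layers.
        self.shared_layer = nn.TransformerEncoderLayer(
            d_model=hidden_size, nhead=num_heads, batch_first=True
        )
        self.num_layers = num_layers

    def forward(self, x):
        for _ in range(self.num_layers):
            x = self.shared_layer(x)  # same weights reused at every depth
        return x

model = SharedEncoder()
hidden_states = torch.randn(2, 16, 256)  # (batch, sequence, hidden)
print(model(hidden_states).shape)        # torch.Size([2, 16, 256])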

  • Factorized Embedding Parameterization

This method is a form of parameter reduction. In BERT, the size of the input (WordPiece) embeddings and the hidden layer representations is the same. In factorized embedding parameterization, the two are decoupled. This works because BERT tokenises text with a WordPiece tokenizer: the WordPiece embeddings, learned from one-hot encoded tokens, only need to capture non-contextual token representations, while the hidden layers learn the context-dependent representations. If the hidden size H were increased, the WordPiece embedding size E would otherwise have to grow with it. To prevent this increase in parameters, the tie between the hidden size and the embedding size is removed: the large vocabulary embedding matrix is factored into two smaller matrices (vocabulary-to-E and E-to-H), which substantially reduces the number of embedding parameters and thus the model size. As seen in the figure below, the ALBERT models have far fewer parameters than the corresponding BERT models (a numerical illustration follows the figure).


    Figure 3 ALBERT - A Lite BERT Parameters comparison (Source: @pawangfg)
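As a rough numerical illustration of the savings from factorized embedding parameterization, the snippet below compares a single V×H embedding matrix with the factorized V×E plus E×H version, using BERT-Base-scale sizes (V ≈ 30,000, H = 768) and the E = 128 embedding size reported in the ALBERT paper.

# Back-of-the-envelope comparison of embedding parameter counts.
V, H, E = 30000, 768, 128

untied = V * H              # one V x H embedding matrix, as in BERT
factorized = V * E + E * H  # V x E matrix followed by an E x H projection

print(f"BERT-style embedding parameters:    {untied:,}")      # 23,040,000
print(f"ALBERT-style factorized parameters: {factorized:,}")  # 3,938,304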

ELECTRA

Known as ‘Efficiently Learning an Encoder that Classifies Token Replacements Accurately’, ELECTRA is a BERT variant that uses a replaced token detection (RTD) pre-training task to improve results instead of the masked language modelling (MLM) used in the original BERT. The MLM objective in BERT replaces some input tokens with a [MASK] token and trains the model to predict the original content. In ELECTRA, tokens are instead replaced with plausible alternative samples rather than being masked. Two networks are involved: a small generator, which proposes replacement tokens for the masked positions, and a discriminator encoder, which maps the input sequence into contextualised vector representations and is trained to predict, for every token, whether it is the original or a replacement produced by the generator. As illustrated in the figure below, this pre-training approach is more efficient than MLM because the model learns from all input tokens rather than only the small masked-out subset, making ELECTRA faster to pre-train while performing with comparable or higher accuracy on downstream NLP tasks.


Figure 4 Replaced Token Detection (Source: ai.googleblog)
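The following minimal sketch shows replaced token detection in action with a publicly released ELECTRA discriminator checkpoint from Hugging Face Transformers; the example sentence, with "flew" standing in for a more plausible word, is purely illustrative.

# A minimal sketch of replaced token detection: for each input token the
# discriminator outputs a score, and higher values indicate the token is
# predicted to be a replacement rather than the original.
import torch
from transformers import ElectraTokenizer, ElectraForPreTraining

tokenizer = ElectraTokenizer.from_pretrained("google/electra-small-discriminator")
model = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")

# "flew" is an implausible replacement for a word such as "cooked".
sentence = "The chef flew the meal in the kitchen"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predictions = (logits > 0).int().squeeze().tolist()
for token, flag in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), predictions):
    print(f"{token:>10}  replaced={bool(flag)}")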

XLNet

Known as ‘Generalized Autoregressive Pretraining for Language Understanding’, XLNet is built on the Transformer-XL architecture and combines autoregressive language modelling with the bidirectional context that BERT obtains through denoising autoencoding. Instead of masking, it uses a permutation language modelling objective: tokens are predicted in random factorization orders, which lets the model capture bidirectional context. BERT, by contrast, masks 15% of the tokens and predicts them given the remaining fixed sequence. Permutation modelling allows XLNet to learn word relationships and long-term dependencies. Experiments on around 20 tasks showed that XLNet outperformed BERT and produced state-of-the-art results on a variety of tasks, including sentiment analysis, question answering, and natural language inference. To determine the likelihood of a sequence, XLNet considers permutations of the factorization order of the words in a sentence; this means that, to predict a word, it can draw on bidirectional context, looking at words both before and after that token. XLNet also incorporates ideas from Transformer-XL, such as its segment recurrence mechanism, into its pretraining to improve it further.


Figure 5 Illustration of the permutation language modeling for predicting X3 given the same input sequence X but with different factorization orders (Source: XLNet Research Journal arxiv.org)
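The toy snippet below (not XLNet's implementation) mirrors the idea in Figure 5: for the same four-token sequence, different factorization orders give the target token x3 different sets of visible context tokens.

# A toy illustration of permutation language modelling: only tokens that come
# earlier in the sampled factorization order are visible when predicting x3.
from itertools import permutations
import random

tokens = ["x1", "x2", "x3", "x4"]
target = "x3"

random.seed(0)
sampled_orders = random.sample(list(permutations(tokens)), 3)
for order in sampled_orders:
    position = order.index(target)
    context = order[:position]  # tokens visible to the model for this order
    print(f"order={order}  context for {target}: {list(context) or ['<none>']}")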

DistilBERT

Known as the distilled version of BERT, DistilBERT is a compressed variant that is smaller, faster, cheaper, and lighter. BERT has millions of parameters, and its large size makes it challenging to apply in real-world tasks. Pre-trained models with such complex layered architectures can achieve high accuracy, but the enormous number of parameters makes them expensive in resources, especially when the model is too heavy to run on mobile devices. Hence, a lighter, efficient, and effective model is needed that can perform nearly as well as BERT while greatly reducing the model size. Compared to BERT, RoBERTa, and XLNet, which aim for improved performance, DistilBERT aims to reduce computation time and model size. To compress the model, DistilBERT applies the “teacher-student” framework, also referred to as knowledge distillation, where a larger “teacher” network is trained first and its knowledge is transferred to a smaller “student” network. Building on the earlier research “Distilling the Knowledge in a Neural Network”, DistilBERT trains a smaller language model by removing the token-type embeddings and reducing the number of layers, which greatly improves computational efficiency. On the GLUE (General Language Understanding Evaluation) benchmark, DistilBERT retains 97% of BERT's performance with 40% fewer parameters and faster inference. A related approach, TinyBERT, combines transformer distillation with a two-stage learning framework to obtain accuracy close to general BERT with a model that is even smaller and faster. As illustrated in the figure below, a transformer distillation process is performed between the teacher BERT and a general TinyBERT, which can then be further fine-tuned for downstream NLP tasks.


Figure 6 Transformer Distillation (Source: @synced)
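As a quick sanity check of the size difference discussed above, the following sketch loads the standard public BERT-Base and DistilBERT checkpoints with Hugging Face Transformers and counts their parameters; the exact figures depend on the checkpoints downloaded.

# Compare parameter counts of BERT-base and DistilBERT.
from transformers import AutoModel

for name in ["bert-base-uncased", "distilbert-base-uncased"]:
    model = AutoModel.from_pretrained(name)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.1f}M parameters")
# Expected: roughly 110M for BERT-base vs. roughly 66M for DistilBERT.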

SpanBERT

SpanBERT is another BERT variant, used mostly for question answering tasks. The SpanBERT model was created as an enhancement of BERT for predicting spans of text. It works as follows: 1. Contiguous spans of text, rather than single tokens, are randomly masked. 2. A span boundary objective is used to train the model to predict the entire masked span of text. So, unlike BERT, which masks random individual tokens in the input, SpanBERT applies masking to whole spans, i.e. random contiguous stretches of text. SpanBERT routinely beats BERT because it learns to predict the complete content of a masked span without depending on the individual token representations inside it; by employing the boundary tokens at the beginning and end of the span, this strategy produces a fixed-length representation of the span. The training of SpanBERT is shown in the figure below.


Figure 7 Training SpanBERT (Source: @prakharr0y)
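The toy function below illustrates the span-masking idea: a contiguous run of tokens is masked instead of isolated tokens. It is only a sketch; SpanBERT itself samples span lengths from a geometric distribution and uses the tokens just outside the span in its span boundary objective.

# A toy sketch of span masking: replace one random contiguous span with [MASK].
import random

def mask_random_span(tokens, span_length=3, mask_token="[MASK]"):
    # Keep the span away from the sequence edges so boundary tokens exist.
    start = random.randrange(1, len(tokens) - span_length)
    masked = list(tokens)
    for i in range(start, start + span_length):
        masked[i] = mask_token
    # Positions of the boundary tokens just outside the span, as used by
    # a span-boundary-style objective.
    return masked, (start - 1, start + span_length)

random.seed(1)
tokens = "an American football game in Denver was played last night".split()
masked, boundary = mask_random_span(tokens)
print(masked)
print("boundary token positions:", boundary)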

BERTSUM

Text summarization is one of the most popular applications of natural language processing. A BERT model fine-tuned for summarization tasks is often referred to as BERTSUM, or BERT for Summarization. Text summarization is the process of compressing long text into a short summary, and it takes two forms: 1. Extractive summarization – a summary is created by extracting the key sentences that carry the essential meaning of the text. 2. Abstractive summarization – a summary is created by paraphrasing or rewriting the long text in a condensed form. To fine-tune the pre-trained BERT for summarization, the input format is slightly modified. The figure below illustrates the BERTSUM architecture on the right-hand side. For each token, the input sequence at the top is the sum of three kinds of embeddings; the summed vectors are then fed as input embeddings into several transformer layers that generate a contextual vector for every token. BERTSUM extends the original BERT by inserting a [CLS] token before every sentence so that the model learns sentence-level representations, and it uses interval segment embeddings (illustrated in red and green) to distinguish the multiple sentences in the input.


Figure 8 Original BERT Architecture (left) and BERTSUM (right) (Source: BERT for summarization @programmerSought)
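A rough sketch of the BERTSUM-style input construction described above follows: a [CLS] token is inserted before every sentence and interval segment ids alternate between sentences. The tokenizer checkpoint is a standard public BERT checkpoint; this is illustrative and not the BERTSUM code itself.

# Build a BERTSUM-style input: one [CLS] per sentence, alternating segments.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
sentences = [
    "BERTSUM inserts a classification token before each sentence.",
    "Interval segment embeddings alternate between sentences.",
]

input_tokens, segment_ids = [], []
for i, sentence in enumerate(sentences):
    tokens = ["[CLS]"] + tokenizer.tokenize(sentence) + ["[SEP]"]
    input_tokens += tokens
    segment_ids += [i % 2] * len(tokens)  # interval segments: 0, 1, 0, 1, ...

input_ids = tokenizer.convert_tokens_to_ids(input_tokens)
cls_positions = [i for i, t in enumerate(input_tokens) if t == "[CLS]"]
print(cls_positions)    # positions whose vectors represent each sentence
print(segment_ids[:12])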

The use of transfer learning and the BERT mechanism in downstream NLP tasks is among the most active areas of AI research. Research into natural language processing is ongoing, and recent advancements in BERT models have greatly accelerated the development of creative methods for improving performance and computational efficiency while creating lighter, faster, more precise, and more powerful language modelling tools. Additional information and a number of other pre-trained BERT model variants are open source and accessible through the huggingface.co/transformers documentation, which offers a thorough analysis, comparison, and feature overview of BERT and its many variants.
