
    Behind the Magic: How Tensors Drive Transformers

    By ProfitlyAI | April 25, 2025


    Transformers have changed the way artificial intelligence works, especially in understanding language and learning from data. At the core of these models are tensors, a generalized type of mathematical matrix used to process information. As data moves through the different parts of a Transformer, these tensors undergo a series of transformations that help the model make sense of things like sentences or images. Understanding how tensors work inside Transformers helps you see how today's smartest AI systems actually work.

    What This Article Covers and What It Doesn’t

    ✅ This Article IS About:

    • The flow of tensors from input to output within a Transformer model.
    • How dimensional coherence is maintained throughout the computation.
    • The step-by-step transformations that tensors undergo in the different Transformer layers.

    ❌ This Article IS NOT About:

    • A general introduction to Transformers or deep learning.
    • The detailed architecture of Transformer models.
    • The training process or hyper-parameter tuning of Transformers.

    How Tensors Act Inside Transformers

    A Transformer consists of two main components:

    • Encoder: Processes input data, capturing contextual relationships to create meaningful representations.
    • Decoder: Uses these representations to generate coherent output, predicting each element sequentially.

    Tensors are the fundamental data structures that pass through these components, undergoing a series of transformations that preserve dimensional coherence and proper information flow.

    Image from research paper: the general Transformer architecture

    Input Embedding Layer

    Before entering the Transformer, raw input tokens (words, subwords, or characters) are converted into dense vector representations by the embedding layer. This layer functions as a lookup table that maps each token to a vector, capturing its semantic relationships with other words.

    Image by author: tensors passing through the embedding layer

    For a batch of 5 sentences, each with a sequence length of 12 tokens and an embedding dimension of 768, the tensor shape is:

    • Tensor shape: [batch_size, seq_len, embedding_dim] → [5, 12, 768]

    After embedding, positional encoding is added, ensuring that order information is preserved without altering the tensor shape.
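
    A minimal PyTorch sketch of this step, assuming a vocabulary size of 10,000 (not specified in the article) and the standard sinusoidal positional encoding:

    import math
    import torch
    import torch.nn as nn

    batch_size, seq_len, embedding_dim = 5, 12, 768
    vocab_size = 10_000  # assumed for illustration; not given in the article

    tokens = torch.randint(0, vocab_size, (batch_size, seq_len))  # integer token IDs
    embedding = nn.Embedding(vocab_size, embedding_dim)           # the lookup table
    x = embedding(tokens)
    print(x.shape)  # torch.Size([5, 12, 768])

    # Sinusoidal positional encoding is added element-wise, so the shape is unchanged.
    position = torch.arange(seq_len).unsqueeze(1).float()         # [12, 1]
    div_term = torch.exp(torch.arange(0, embedding_dim, 2).float()
                         * (-math.log(10_000.0) / embedding_dim))
    pe = torch.zeros(seq_len, embedding_dim)
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    x = x + pe          # broadcasts over the batch dimension
    print(x.shape)      # still torch.Size([5, 12, 768])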

    Modified image from research paper: current position in the workflow

    Multi-Head Attention Mechanism

    One of the most crucial components of the Transformer is the Multi-Head Attention (MHA) mechanism. It operates on three matrices derived from the input embeddings:

    • Query (Q)
    • Key (K)
    • Value (V)

    These matrices are generated using learnable weight matrices:

    • Wq, Wk, Wv of shape [embedding_dim, d_model] (e.g., [768, 512]).
    • The resulting Q, K, V matrices have dimensions [batch_size, seq_len, d_model], as the sketch below shows.
    Image by author: table showing the shapes/dimensions of the Embedding, Q, K, V tensors
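
    A short sketch of these projections, assuming bias-free linear layers (the names W_q, W_k, W_v mirror the weight matrices above):

    import torch
    import torch.nn as nn

    batch_size, seq_len, embedding_dim, d_model = 5, 12, 768, 512
    x = torch.randn(batch_size, seq_len, embedding_dim)  # embedded input

    # Each nn.Linear holds a learnable weight matrix of shape [embedding_dim, d_model].
    W_q = nn.Linear(embedding_dim, d_model, bias=False)
    W_k = nn.Linear(embedding_dim, d_model, bias=False)
    W_v = nn.Linear(embedding_dim, d_model, bias=False)

    Q, K, V = W_q(x), W_k(x), W_v(x)
    print(Q.shape, K.shape, V.shape)  # each torch.Size([5, 12, 512])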

    Splitting Q, K, V into Multiple Heads

    For effective parallelization and improved learning, MHA splits Q, K, and V into multiple heads. Suppose we have 8 attention heads:

    • Each head operates on a subspace of size d_model / head_count.
    Image by author: multi-head attention
    • The reshaped tensor dimensions are [batch_size, seq_len, head_count, d_model / head_count].
    • Example: [5, 12, 8, 64] → rearranged to [5, 8, 12, 64] so that each head receives a separate sequence slice (see the sketch after this list).
    Image by author: reshaping the tensors
    • So each head gets its own share of Qi, Ki, Vi.
    Image by author: each Qi, Ki, Vi sent to a different head
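
    A sketch of the split for Q (K and V are reshaped the same way):

    import torch

    batch_size, seq_len, d_model, head_count = 5, 12, 512, 8
    head_dim = d_model // head_count  # 64

    Q = torch.randn(batch_size, seq_len, d_model)
    Q = Q.view(batch_size, seq_len, head_count, head_dim)  # [5, 12, 8, 64]
    Q = Q.transpose(1, 2)                                  # [5, 8, 12, 64]
    print(Q.shape)  # torch.Size([5, 8, 12, 64])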

    Attention Calculation

    Each head computes attention using the scaled dot-product formula:

    Attention(Q, K, V) = softmax(Q·Kᵀ / √d_k) · V

    Once attention is computed for all heads, the outputs are concatenated and passed through a linear transformation, restoring the initial tensor shape.

    Image by author: concatenating the outputs of all heads
    Modified image from research paper: current position in the workflow
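
    A sketch of the per-head attention and the concatenation step. The output projection (called W_o here, a standard component whose name the article does not give) maps d_model back to embedding_dim so that the initial [5, 12, 768] shape is restored:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    batch_size, head_count, seq_len, head_dim = 5, 8, 12, 64
    d_model, embedding_dim = head_count * head_dim, 768  # 512 and 768

    Q = torch.randn(batch_size, head_count, seq_len, head_dim)
    K = torch.randn(batch_size, head_count, seq_len, head_dim)
    V = torch.randn(batch_size, head_count, seq_len, head_dim)

    scores = Q @ K.transpose(-2, -1) / head_dim ** 0.5   # [5, 8, 12, 12]
    weights = F.softmax(scores, dim=-1)                  # attention weights per head
    heads = weights @ V                                  # [5, 8, 12, 64]

    # Concatenate the heads and project back to the embedding dimension.
    concat = heads.transpose(1, 2).reshape(batch_size, seq_len, d_model)  # [5, 12, 512]
    W_o = nn.Linear(d_model, embedding_dim)
    out = W_o(concat)
    print(out.shape)  # torch.Size([5, 12, 768])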

    Residual Connection and Normalization

    After the multi-head attention mechanism, a residual connection is added, followed by layer normalization:

    • Residual connection: Output = Embedding Tensor + Multi-Head Attention Output
    • Normalization: (Output − μ) / σ to stabilize training
    • Tensor shape remains [batch_size, seq_len, embedding_dim]
    Image by author: residual connection
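
    A minimal sketch using PyTorch's built-in LayerNorm, which implements the (Output − μ) / σ normalization (plus a learnable scale and shift):

    import torch
    import torch.nn as nn

    batch_size, seq_len, embedding_dim = 5, 12, 768
    x = torch.randn(batch_size, seq_len, embedding_dim)         # embedding tensor
    attn_out = torch.randn(batch_size, seq_len, embedding_dim)  # MHA output

    norm = nn.LayerNorm(embedding_dim)
    out = norm(x + attn_out)  # residual connection, then normalization
    print(out.shape)          # torch.Size([5, 12, 768])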

    Masked Multi-Head Attention

    In the decoder, Masked Multi-Head Attention ensures that each token attends only to earlier tokens, preventing leakage of future information.

    Modified image from research paper: masked multi-head attention

    This is achieved using a lower triangular mask of shape [seq_len, seq_len] with -inf values in the upper triangle. Applying this mask ensures that the Softmax function assigns zero weight to future positions, as the sketch below shows.

    Image by author: mask matrix
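
    A sketch of the mask and its effect on a single head's score matrix:

    import torch
    import torch.nn.functional as F

    seq_len = 12
    # -inf above the diagonal, 0 on and below it.
    mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)

    scores = torch.randn(seq_len, seq_len)      # unmasked attention scores
    weights = F.softmax(scores + mask, dim=-1)  # future positions get zero weight
    print(weights[0])  # row 0 attends only to position 0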

    Cross-Attention in Decoding

    Since the decoder does not fully understand the input sentence on its own, it uses cross-attention to refine its predictions. Here:

    • The decoder generates queries (Qd) from its input ([batch_size, target_seq_len, embedding_dim]).
    • The encoder output serves as keys (Ke) and values (Ve).
    • The decoder computes attention between Qd and Ke, extracting relevant context from the encoder's output (see the sketch after this list).
    Modified image from research paper: cross-attention
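
    A single-head shape sketch (splitting into multiple heads works exactly as before); target_seq_len = 10 is an assumed value for illustration:

    import torch
    import torch.nn.functional as F

    batch_size, src_len, tgt_len, d_model = 5, 12, 10, 512
    Qd = torch.randn(batch_size, tgt_len, d_model)  # queries from the decoder input
    Ke = torch.randn(batch_size, src_len, d_model)  # keys from the encoder output
    Ve = torch.randn(batch_size, src_len, d_model)  # values from the encoder output

    scores = Qd @ Ke.transpose(-2, -1) / d_model ** 0.5  # [5, 10, 12]
    out = F.softmax(scores, dim=-1) @ Ve                 # [5, 10, 512]
    print(out.shape)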

    Conclusion

    Transformers use tensors to help them learn and make good decisions. As data moves through the network, these tensors go through different steps: being turned into numbers the model can understand (embedding), focusing on the important parts (attention), staying balanced (normalization), and being passed through layers that learn patterns (feed-forward). These transformations keep the data in the right shape the whole time. By understanding how tensors move and change, we can get a better idea of how AI models work and how they understand and generate human-like language.



