
    Modular Arithmetic in Data Science

    By ProfitlyAI | August 19, 2025


    Modular arithmetic is a mathematical system in which numbers cycle back after reaching a value called the modulus. The system is often referred to as "clock arithmetic" because of its similarity to how analog 12-hour clocks represent time. This article provides a conceptual overview of modular arithmetic and explores practical use cases in data science.

    Conceptual Overview

    The Fundamentals

    Modular arithmetic defines a system of operations on integers based on a chosen integer called the modulus. The expression x mod d is equal to the remainder obtained when x is divided by d. If r ≡ x mod d, then r is said to be congruent to x mod d. In other words, r and x differ by a multiple of d, or equivalently, x - r is divisible by d. The symbol '≡' (three horizontal lines) is used instead of '=' in modular arithmetic to emphasize that we are dealing with congruence rather than equality in the usual sense.

    For example, in modulo 7, the number 10 is congruent to 3 because 10 divided by 7 leaves a remainder of 3. So, we can write 3 ≡ 10 mod 7. In the case of a 12-hour clock, 2 a.m. is congruent to 2 p.m. (which is 14 mod 12). In programming languages such as Python, the percent sign ('%') serves as the modulus operator (e.g., 10 % 7 evaluates to 3).
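
    As a quick illustration (a minimal sketch added here, not from the original article), these congruences can be checked directly in Python with the % operator:

    # Remainder of 10 divided by 7
    x, d = 10, 7
    print(x % d)  # 3

    # Two integers are congruent mod d exactly when their difference is divisible by d
    def is_congruent(a, b, d):
        return (a - b) % d == 0

    print(is_congruent(3, 10, 7))    # True: 3 ≡ 10 (mod 7)
    print(is_congruent(2, 14, 12))   # True: hour 14 (2 p.m.) lands on 2 on a 12-hour clock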


    Solving Linear Congruences

    A linear congruence is a modular expression of the form n ⋅ y ≡ x (mod d), where the coefficient n, target x, and modulus d are known integers, and the unknown integer y has degree 1 (i.e., it is not squared, cubed, and so on). The expression 2017 ⋅ y ≡ 2025 (mod 10000) is an example of a linear congruence; it states that when 2017 is multiplied by some integer y, the product leaves a remainder of 2025 when divided by 10000. To solve for y in the expression n ⋅ y ≡ x (mod d), follow these steps (a brute-force sanity check of the 2017 example appears right after the list):

    1. Find the greatest common divisor (GCD) of the coefficient n and modulus d, also written GCD(n, d), which is the largest positive integer that divides both n and d. The Extended Euclidean Algorithm can be used to compute the GCD efficiently; it also yields a candidate for n⁻¹, the modular inverse of the coefficient n.
    2. Determine whether a solution exists. If the target x is not divisible by GCD(n, d), then the equation has no solution, because the congruence is only solvable when the GCD divides the target.
    3. Simplify the modular expression, if needed, by dividing the coefficient n, target x, and modulus d by GCD(n, d) to reduce the problem to a simpler equivalent form; call these simplified quantities n₀, x₀, and d₀, respectively. This ensures that n₀ and d₀ are coprime (i.e., 1 is their only common divisor), which is necessary for finding a modular inverse.
    4. Compute the modular inverse n₀⁻¹ of n₀ mod d₀ (again, using the Extended Euclidean Algorithm).
    5. Find one solution for the unknown value y. To do this, multiply the modular inverse n₀⁻¹ by the reduced target x₀ to obtain one valid solution for y mod d₀.
    6. Finally, building on the result of step 5, generate all possible solutions. Since the original equation was reduced by GCD(n, d), there are GCD(n, d) distinct solutions. These solutions are spaced evenly apart by the reduced modulus d₀, and all are valid with respect to the original modulus d.
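
    Before implementing the full procedure, a brute-force search over all residues can serve as a sanity check for the 2017 example above (an illustrative sketch, not part of the efficient method):

    n, x, d = 2017, 2025, 10000

    # Scan every residue y in [0, d) and keep those satisfying n * y ≡ x (mod d)
    brute_force_solutions = [y for y in range(d) if (n * y) % d == x % d]
    print(brute_force_solutions)  # [4825] -- exactly one solution, since gcd(2017, 10000) = 1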

    The following is a Python implementation of the above procedure:

    def extended_euclidean_algorithm(a, b):
        """
        Computes the greatest common divisor of positive integers a and b,
        together with coefficients x and y such that: a*x + b*y = gcd(a, b)
        """
        if b == 0:
            return (a, 1, 0)
        else:
            gcd, x_prev, y_prev = extended_euclidean_algorithm(b, a % b)
            x = y_prev
            y = x_prev - (a // b) * y_prev
            return (gcd, x, y)

    def solve_linear_congruence(coefficient, target, modulus):
        """
        Solves the linear congruence: coefficient * y ≡ target (mod modulus)
        Returns all integer solutions for y with respect to the modulus.
        """
        # Step 1: Compute the gcd
        gcd, _, _ = extended_euclidean_algorithm(coefficient, modulus)

        # Step 2: Check whether a solution exists
        if target % gcd != 0:
            print("No solution exists: target is not divisible by gcd.")
            return None

        # Step 3: Reduce the equation by the gcd
        reduced_coefficient = coefficient // gcd
        reduced_target = target // gcd
        reduced_modulus = modulus // gcd

        # Step 4: Find the modular inverse of reduced_coefficient with respect to the reduced_modulus
        _, inverse_reduced, _ = extended_euclidean_algorithm(reduced_coefficient, reduced_modulus)
        inverse_reduced = inverse_reduced % reduced_modulus

        # Step 5: Compute one solution
        base_solution = (inverse_reduced * reduced_target) % reduced_modulus

        # Step 6: Generate all solutions modulo the original modulus
        all_solutions = [(base_solution + i * reduced_modulus) % modulus for i in range(gcd)]

        return all_solutions

    Here are some example tests:

    solutions = solve_linear_congruence(coefficient=2009, target=2025, modulus=10000)
    print(f"Solutions for y: {solutions}")

    solutions = solve_linear_congruence(coefficient=20, target=16, modulus=28)
    print(f"Solutions for y: {solutions}")

    Results:

    Solutions for y: [225]
    Solutions for y: [5, 12, 19, 26]
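
    As a quick check (a verification snippet added here, not from the original article), each returned solution can be substituted back into its congruence:

    solutions = solve_linear_congruence(coefficient=20, target=16, modulus=28)

    # Every solution y must leave remainder 16 when 20 * y is divided by 28
    assert all((20 * y) % 28 == 16 % 28 for y in solutions)
    print("All solutions satisfy 20 * y ≡ 16 (mod 28).")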


    Data Science Use Cases

    Use Case 1: Feature Engineering

    Modular arithmetic has various interesting use cases in data science. An intuitive one is in the context of feature engineering, for encoding cyclical features like hours of the day. Since time wraps around every 24 hours, treating hours as linear values can misrepresent relationships (e.g., 11 PM and 1 AM are numerically far apart but temporally close). By applying modular encoding (e.g., using sine and cosine transformations of the hour modulo 24), we can preserve the circular nature of time, allowing machine learning (ML) models to recognize patterns that occur during specific periods like nighttime. The following Python code shows how such an encoding can be implemented:

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    # Example: List of incident hours (in 24-hour format)
    incident_hours = [22, 23, 0, 1, 2]  # 10 PM to 2 AM

    # Convert to a DataFrame
    df = pd.DataFrame({'hour': incident_hours})

    # Encode using sine and cosine transformations
    df['hour_sin'] = np.sin(2 * np.pi * df['hour'] / 24)
    df['hour_cos'] = np.cos(2 * np.pi * df['hour'] / 24)

    The resulting DataFrame df:

       hour  hour_sin  hour_cos
    0    22 -0.500000  0.866025
    1    23 -0.258819  0.965926
    2     0  0.000000  1.000000
    3     1  0.258819  0.965926
    4     2  0.500000  0.866025

    Notice how the sine encoding still differentiates between the hours before and after 12 (e.g., encoding 11 p.m. and 1 a.m. as -0.258819 and 0.258819, respectively), whereas the cosine encoding does not (e.g., both 11 p.m. and 1 a.m. map to the value 0.965926). The optimal choice of encoding will depend on the business context in which the ML model is to be deployed. Ultimately, the technique enhances feature engineering for tasks such as anomaly detection, forecasting, and classification where temporal proximity matters.
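
    To make the benefit concrete, the short sketch below (an illustration added here, not part of the original article; the helper encode_hour is a hypothetical convenience function) compares distances between hours before and after encoding:

    import numpy as np

    def encode_hour(hour):
        """Map an hour of the day to a point on the unit circle."""
        angle = 2 * np.pi * hour / 24
        return np.array([np.sin(angle), np.cos(angle)])

    # Raw hours make 11 p.m. and 1 a.m. look 22 units apart ...
    print(abs(23 - 1))  # 22

    # ... but after encoding, any 2-hour gap has the same small distance (about 0.518)
    print(np.linalg.norm(encode_hour(23) - encode_hour(1)))
    print(np.linalg.norm(encode_hour(1) - encode_hour(3)))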

    In the following sections, we will consider two larger data science use cases of linear congruence that involve solving for y in modular expressions of the form n ⋅ y ≡ x (mod d).

    Use Case 2: Resharding in Distributed Database Systems

    In distributed databases, data is often partitioned (or sharded) across multiple nodes using a hash function. When the number of shards changes, say from d to d', we need to reshard the data efficiently without rehashing everything from scratch.

    Suppose each data item is assigned to a shard as follows:

    shard = hash(key) mod d

    When redistributing items to a new set of d' shards, we may want to map the old shard indices to the new ones in a way that preserves balance and minimizes data movement. This can lead to solving for y in the expression n ⋅ y ≡ x (mod d), where:

    • x is the original shard index,
    • d is the old number of shards,
    • n is a scaling factor (or transformation coefficient),
    • y is the new shard index that we are solving for

    Using modular arithmetic in this context ensures a consistent mapping between old and new shard layouts, minimizes reallocation, preserves data locality, and enables deterministic and reversible transformations during resharding.

    Below is a Python implementation of this scenario:

    def extended_euclidean_algorithm(a, b):
        """
        Computes gcd(a, b) and coefficients x, y such that: a*x + b*y = gcd(a, b)
        Used to find modular inverses.
        """
        if b == 0:
            return (a, 1, 0)
        else:
            gcd, x_prev, y_prev = extended_euclidean_algorithm(b, a % b)
            x = y_prev
            y = x_prev - (a // b) * y_prev
            return (gcd, x, y)

    def modular_inverse(a, m):
        """
        Returns the modular inverse of a modulo m, if it exists.
        """
        gcd, x, _ = extended_euclidean_algorithm(a, m)
        if gcd != 1:
            return None  # Inverse does not exist if a and m are not coprime
        return x % m

    def reshard(old_shard_index, old_num_shards, new_num_shards):
        """
        Maps an old shard index to a new one using modular arithmetic.

        Solves: n * y ≡ x (mod d)
        Where:
            x = old_shard_index
            d = old_num_shards
            n = new_num_shards
            y = new shard index (to solve for)
        """
        x = old_shard_index
        d = old_num_shards
        n = new_num_shards

        # Step 1: Check whether the modular inverse of n modulo d exists
        inverse_n = modular_inverse(n, d)
        if inverse_n is None:
            print(f"No modular inverse exists for n = {n} mod d = {d}. Cannot reshard deterministically.")
            return None

        # Step 2: Solve for y using the modular inverse
        y = (inverse_n * x) % d
        return y

    Example test:

    import hashlib

    def custom_hash(key, num_shards):
        hash_bytes = hashlib.sha256(key.encode('utf-8')).digest()
        hash_int = int.from_bytes(hash_bytes, byteorder='big')
        return hash_int % num_shards

    # Example usage
    old_num_shards = 10
    new_num_shards = 7

    # Simulate resharding for a few keys
    keys = ['user_123', 'item_456', 'session_789']
    for key in keys:
        old_shard = custom_hash(key, old_num_shards)
        new_shard = reshard(old_shard, old_num_shards, new_num_shards)
        print(f"Key: {key} | Old Shard: {old_shard} | New Shard: {new_shard}")

    Note that we are using a custom hash function that is deterministic with respect to key and num_shards to ensure reproducibility.

    Results:

    Key: user_123 | Old Shard: 9 | New Shard: 7
    Key: item_456 | Old Shard: 7 | New Shard: 1
    Key: session_789 | Old Shard: 2 | New Shard: 6
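
    As a sanity check (not shown in the original post), each new shard index above can be verified against the congruence n ⋅ y ≡ x (mod d): new_num_shards * new_shard should leave the old shard index as the remainder modulo old_num_shards.

    old_num_shards, new_num_shards = 10, 7

    # (old shard x, new shard y) pairs taken from the output above
    for old_shard, new_shard in [(9, 7), (7, 1), (2, 6)]:
        assert (new_num_shards * new_shard) % old_num_shards == old_shard
    print("All resharded indices satisfy n * y ≡ x (mod d).")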

    Use Case 3: Differential Privacy in Federated Learning

    In federated learning, ML models are trained across decentralized devices while preserving user privacy. Differential privacy adds noise to gradient updates in order to obscure individual contributions across devices. Often, this noise is sampled from a discrete distribution and must be modulo-reduced to fit within bounded ranges.

    Suppose a client sends an update x, and the server applies a transformation of the form n ⋅ (y + k) ≡ x (mod d), where:

    • x is the noisy gradient update sent to the server,
    • y is the original (or true) gradient update,
    • k is the noise term (drawn at random from a range of integers),
    • n is the encoding factor,
    • d is the modulus (e.g., the size of the finite field or quantization range in which all operations take place)

    Because of the privacy-preserving nature of this setup, the server can only recover y + k, the noisy update, but not the true update y itself.

    Below is the now-familiar Python setup:

    def extended_euclidean_algorithm(a, b):
        if b == 0:
            return a, 1, 0
        else:
            gcd, x_prev, y_prev = extended_euclidean_algorithm(b, a % b)
            x = y_prev
            y = x_prev - (a // b) * y_prev
            return gcd, x, y
    
    def modular_inverse(a, m):
        gcd, x, _ = extended_euclidean_algorithm(a, m)
        if gcd != 1:
            return None
        return x % m

    Example test simulating some clients:

    import random

    # Parameters
    d = 97  # modulus (finite field)
    noise_scale = 20  # controls the magnitude of the noise

    # Simulated clients
    clients = [
        {"id": 1, "y": 12, "n": 17},
        {"id": 2, "y": 23, "n": 29},
        {"id": 3, "y": 34, "n": 41},
    ]

    # Step 1: Clients add noise and mask their gradients
    random.seed(10)
    for client in clients:
        noise = random.randint(-noise_scale, noise_scale)
        client["noise"] = noise
        noisy_y = client["y"] + noise
        client["x"] = (client["n"] * noisy_y) % d

    # Step 2: Server receives x, knows n, and recovers the noisy gradients
    for client in clients:
        inv_n = modular_inverse(client["n"], d)
        client["y_noisy"] = (client["x"] * inv_n) % d

    # Output
    print("Client-side masking with noise:")
    for client in clients:
        print(f"Client {client['id']}:")
        print(f"  True gradient y       = {client['y']}")
        print(f"  Added noise           = {client['noise']}")
        print(f"  Masked value x        = {client['x']}")
        print(f"  Recovered y + noise   = {client['y_noisy']}")
        print()

    Results:

    Client-side masking with noise:
    Client 1:
      True gradient y       = 12
      Added noise           = 16
      Masked value x        = 88
      Recovered y + noise   = 28

    Client 2:
      True gradient y       = 23
      Added noise           = -18
      Masked value x        = 48
      Recovered y + noise   = 5

    Client 3:
      True gradient y       = 34
      Added noise           = 7
      Masked value x        = 32
      Recovered y + noise   = 41

    Notice that the server is only able to derive the noisy gradients rather than the original ones.
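
    A small consistency check (added here for illustration, building on the clients list and modulus d from the example above) confirms that each recovered value re-encodes to exactly the masked update the server received:

    # For every client, n * (recovered noisy gradient) should reduce back to the masked value x
    for client in clients:
        assert (client["n"] * client["y_noisy"]) % d == client["x"]
    print("Recovered noisy gradients are consistent with the masked updates.")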

    The Wrap

    Modular arithmetic, with its elegant cyclical structure, offers far more than just a clever way to tell time; it underpins some of the most important mechanisms in modern data science. By exploring modular transformations and linear congruences, we have seen how this mathematical framework becomes a powerful tool for solving real-world problems. In use cases as diverse as feature engineering, resharding in distributed databases, and safeguarding user privacy in federated learning through differential privacy, modular arithmetic provides both the abstraction and precision needed to build robust, scalable systems. As data science continues to evolve, the relevance of these modular techniques will likely grow, suggesting that sometimes, the key to innovation lies in the remainder.


