
    A new AI translation system for headphones clones multiple voices simultaneously

By ProfitlyAI · May 9, 2025


Spatial Speech Translation consists of two AI models. The first divides the space surrounding the person wearing the headphones into small regions and uses a neural network to search for potential speakers and pinpoint their direction.
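The first model's job can be illustrated with a minimal sketch. This is not the authors' system: the region scores here are assumed to come from a neural network operating on the headset's microphone signals, and `find_speaker_directions` simply thresholds and ranks them.

```python
def find_speaker_directions(score_by_region, threshold=0.5):
    """Hypothetical sketch: the space around the wearer is divided into
    small angular regions, and each region carries a speech-likelihood
    score in [0, 1] (produced by a neural network in the real system).
    Returns candidate speaker directions, best-scoring first.

    score_by_region: dict mapping azimuth in degrees -> score.
    """
    candidates = [(az, s) for az, s in score_by_region.items() if s > threshold]
    return sorted(candidates, key=lambda c: -c[1])

# Toy usage: two regions score above threshold, so two speakers are found.
scores = {0: 0.1, 40: 0.9, 120: 0.2, 200: 0.7, 280: 0.05}
print(find_speaker_directions(scores))  # -> [(40, 0.9), (200, 0.7)]
```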

The second model then translates the speakers’ words from French, German, or Spanish into English text using publicly available data sets. The same model extracts the unique characteristics and emotional tone of each speaker’s voice, such as the pitch and the amplitude, and applies those properties to the text, essentially creating a “cloned” voice. That means that when the translated version of a speaker’s words is relayed to the headphone wearer a few seconds later, it sounds as if it’s coming from the speaker’s direction, and the voice sounds a lot like the speaker’s own rather than a robotic-sounding computer.
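The second model's pipeline can be sketched as three stages: translate the utterance, extract the speaker's voice traits, and resynthesize English speech in that voice, tagged with the speaker's direction for spatialized playback. Every function below is a toy stand-in, not the actual models; the single-entry phrasebook and the amplitude-only voice profile are illustrative assumptions.

```python
# Toy stand-ins so the sketch runs end to end.
PHRASEBOOK = {("bonjour", "fr"): "hello"}

def translate(text, lang):
    # Stand-in for the translation model (French/German/Spanish -> English).
    return PHRASEBOOK.get((text, lang), text)

def extract_voice(audio_samples):
    # Stand-in for voice-trait extraction; the real system captures
    # characteristics such as pitch and amplitude. Here: peak amplitude only.
    return {"amplitude": max(abs(s) for s in audio_samples)}

def synthesize(text, voice):
    # Stand-in for voice-cloned text-to-speech.
    return f"<{text} @ amp={voice['amplitude']}>"

def translate_and_clone(utterance, direction):
    """Hypothetical end-to-end sketch of the second model's pipeline."""
    english_text = translate(utterance["text"], utterance["lang"])
    voice = extract_voice(utterance["audio"])
    # Tag output with the speaker's direction so playback can be spatialized.
    return {"audio": synthesize(english_text, voice), "direction": direction}

out = translate_and_clone(
    {"text": "bonjour", "lang": "fr", "audio": [0.1, -0.4, 0.2]}, direction=40
)
print(out)  # -> {'audio': '<hello @ amp=0.4>', 'direction': 40}
```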

Given that separating out human voices is hard enough for AI systems, being able to incorporate that ability into a real-time translation system, map the distance between the wearer and each speaker, and achieve decent latency on a real device is impressive, says Samuele Cornell, a postdoc researcher at Carnegie Mellon University’s Language Technologies Institute, who did not work on the project.

“Real-time speech-to-speech translation is incredibly hard,” he says. “Their results are very good in the limited testing settings. But for a real product, one would need much more training data, possibly including noise and real-world recordings from the headset, rather than relying purely on synthetic data.”

Gollakota’s team is now focusing on reducing the amount of time it takes for the AI translation to kick in after a speaker says something, which will allow for more natural-sounding conversations between people speaking different languages. “We want to really get that latency down significantly, to less than a second, so that you can still have the conversational vibe,” Gollakota says.

This remains a major challenge, because the speed at which an AI system can translate one language into another depends on the languages’ structure. Of the three languages Spatial Speech Translation was trained on, the system was quickest to translate French into English, followed by Spanish and then German, reflecting how German, unlike the other languages, places a sentence’s verbs and much of its meaning at the end rather than at the beginning, says Claudio Fantinuoli, a researcher at the Johannes Gutenberg University of Mainz in Germany, who did not work on the project.

Reducing the latency could make the translations less accurate, he warns: “The longer you wait [before translating], the more context you have, and the better the translation will be. It’s a balancing act.”


