
    Why We Should Focus on AI for Women

    By ProfitlyAI | July 2, 2025 | 7 Mins Read


    The story started with a conversation I had with my girlfriend last Sunday. She is interested in medical research and mentioned that women are often underdiagnosed for stroke. False negatives are common among women because initial stroke research was conducted primarily on male subjects. As a result, the symptoms seen in women, which often differ from those observed in men, were not clinically recognized.

    A similar issue has been observed in skin cancer diagnosis: people with darker skin tones are less likely to be correctly diagnosed.

    Examples like these show how bias in data collection and research design can lead to harmful outcomes. We live in an era where AI is present in nearly every domain, and it is inevitable that biased data gets fed into these systems. I have even seen doctors using chatbot tools as medical assistants while writing prescriptions.

    From this perspective, applying findings that have not been fully studied across different groups, such as those based on gender or race, to AI systems carries significant risks, both scientifically and ethically. AI systems not only tend to inherit existing human cognitive biases but can also unintentionally amplify and entrench those biases within their technical structures.

    In this post, I will walk through a case study from my own experience: defining the optimal temperature in an office building, given the different thermal comfort ranges of men and women.

    Case study: Thermal comfort

    Two years ago, I worked on a project to optimize a building's energy efficiency while maintaining thermal comfort. This raised an essential question: what exactly is thermal comfort? In many office buildings or commercial centers, the answer is a fixed temperature. However, research has shown that women report significantly more dissatisfaction than men under similar thermal conditions (Indraganti & Humphreys, 2015). Beyond the scientific evidence, I, along with other female colleagues, have often felt cold during office hours.

    We will now design a simulation experiment to show just how important gender inclusivity is when defining thermal comfort, as well as in other real-world scenarios.

    Image by Author: Experimental Flowchart

    Simulation setup

    We simulate two populations, male and female, with slightly different thermal preferences. The difference may seem minor at first glance, but it becomes significant in the next section, where we introduce a reinforcement learning (RL) model to learn the optimal temperature. We then see how well the agent satisfies female occupants when it has been trained only on men.

    We begin by defining an idealized thermal comfort model inspired by the Predicted Mean Vote (PMV) framework. Each temperature is assigned a comfort score, defined as max(0, 1 - dist / zone), based on how close it is to the center of the gender-specific comfort range:

    • Males: 21–23°C (centered at 22°C)
    • Females: 23–25°C (centered at 24°C)

    By definition, the further the temperature moves from the center of this range, the lower the comfort score.
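    The scoring rule above can be written as a one-line function. Note that the post does not state the zone half-width, only the 2 °C-wide comfort ranges and their centers, so the `zone` default of 3.0 here is an assumption:

    ```python
    def comfort_score(temp, center=22.0, zone=3.0):
        """Comfort falls off linearly with distance from the preferred
        center, reaching zero once the temperature is `zone` degrees away."""
        dist = abs(temp - center)
        return max(0.0, 1.0 - dist / zone)
    ```

    For example, with the male center of 22 °C, a room at 23.5 °C scores 0.5 and anything beyond 25 °C scores 0.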

    Next, we simulate a simplified room-like environment in which an agent controls the temperature. There are three possible actions:

    • Decrease the temperature by 1°C
    • Keep the temperature unchanged
    • Increase the temperature by 1°C

    The environment updates the temperature accordingly and returns a comfort-based reward.

    The agent's goal is to maximize this reward over time, learning the optimal temperature setting for the occupants.
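    The original environment code is not shown in the post, so here is a minimal sketch of what `TempControlEnv` might look like, matching the interface the training code relies on (`state_space`, `action_space`, `min_temp`, `reset`, and `step`). The temperature bounds of 16–30 °C and the zone half-width of 3 °C are assumptions:

    ```python
    import random

    class TempControlEnv:
        """Minimal room simulator: the state is the current temperature in
        whole degrees, actions shift it by -1, 0, or +1, and the reward is
        the comfort score for the chosen population."""
        def __init__(self, sex='male', min_temp=16, max_temp=30, zone=3.0):
            self.min_temp = min_temp
            self.max_temp = max_temp
            self.zone = zone
            # Gender-specific comfort center from the setup above
            self.center = 22.0 if sex == 'male' else 24.0
            self.state_space = list(range(min_temp, max_temp + 1))
            self.action_space = [-1, 0, 1]

        def reset(self):
            # Start each episode at a random temperature
            self.temp = random.randint(self.min_temp, self.max_temp)
            return self.temp

        def step(self, action):
            # Apply the action, clamped to the allowed temperature range
            self.temp = max(self.min_temp, min(self.max_temp, self.temp + action))
            dist = abs(self.temp - self.center)
            reward = max(0.0, 1.0 - dist / self.zone)
            return self.temp, reward, False  # episode length is managed by the caller
    ```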

    RL agent: Q-learning

    We implement a Q-learning method, letting the agent interact with the environment.

    It learns an optimal policy by updating a Q-table, which stores the expected comfort reward for each state-action pair. The agent balances exploration (trying random actions) and exploitation (choosing the best-known actions) as it learns a temperature-control strategy that maximizes the reward.
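    The update the agent performs is the standard Q-learning rule, where alpha is the learning rate and gamma the discount factor:

    ```latex
    Q(s, a) \leftarrow Q(s, a) + \alpha \left[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \right]
    ```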

    import random
    import numpy as np

    class QLearningAgent:
        def __init__(self, state_space, action_space, alpha=0.1, gamma=0.9, epsilon=0.2):
            self.states = state_space
            self.actions = action_space
            self.alpha = alpha      # learning rate
            self.gamma = gamma      # discount factor
            self.epsilon = epsilon  # exploration rate
            # Initialize Q-table with zeros: states x actions
            self.q_table = np.zeros((len(state_space), len(action_space)))
    
        def choose_action(self, state):
            # Epsilon-greedy: explore with probability epsilon, otherwise exploit
            if random.random() < self.epsilon:
                return random.choice(range(len(self.actions)))
            else:
                return np.argmax(self.q_table[state])
    
        def learn(self, state, action, reward, next_state):
            # Move the estimate toward the bootstrapped target
            predict = self.q_table[state, action]
            target = reward + self.gamma * np.max(self.q_table[next_state])
            self.q_table[state, action] += self.alpha * (target - predict)

    We update the Q-table by letting the agent choose either the best-known action for the current state or a random one. The trade-off is controlled by a small epsilon, here 0.2, representing how much exploration we want.

    Biased training and testing

    As promised, we train the agent using only male data.

    We let the agent interact with the environment for 1000 episodes of 20 steps each. It gradually learns to associate the desired temperature range with high comfort scores for men.

    def train_agent(episodes=1000):
        env = TempControlEnv(sex='male')
        agent = QLearningAgent(state_space=env.state_space, action_space=env.action_space)
        rewards = []
    
        for ep in range(episodes):
            state = env.reset()
            total_reward = 0
            for step in range(20):
                # States are indexed relative to the lowest temperature
                action_idx = agent.choose_action(state - env.min_temp)
                action = env.action_space[action_idx]
                next_state, reward, done = env.step(action)
                agent.learn(state - env.min_temp, action_idx, reward, next_state - env.min_temp)
                state = next_state
                total_reward += reward
            rewards.append(total_reward)
        return agent, rewards

    The code shows a standard Q-learning training process. Here is a plot of the learning curve.

    Image by Author: Learning curve

    We can now evaluate how well the male-trained agent performs when placed in a female comfort environment. The test is run in the same environmental setting, only with a slightly different comfort scoring model reflecting female preferences.
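    The evaluation code is not shown in the post either. Assuming the same interface used by `train_agent`, a greedy evaluation (no exploration, so we read the Q-table directly rather than call `choose_action`) might look like:

    ```python
    import numpy as np

    def evaluate_agent(agent, env, episodes=100, steps=20):
        """Run a trained agent greedily and return the average total
        comfort reward per episode."""
        rewards = []
        for _ in range(episodes):
            state = env.reset()
            total = 0.0
            for _ in range(steps):
                # Always exploit: pick the best-known action for this state
                action_idx = int(np.argmax(agent.q_table[state - env.min_temp]))
                action = env.action_space[action_idx]
                state, reward, done = env.step(action)
                total += reward
            rewards.append(total)
        return sum(rewards) / len(rewards)
    ```

    Evaluating the male-trained agent once on `TempControlEnv(sex='male')` and once on `TempControlEnv(sex='female')` produces the comparison reported below.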

    Result

    The experiment shows the following result:

    The agent achieved an average comfort reward of 16.08 per episode on male comfort, showing that it successfully learned to keep temperatures around the male-optimal comfort range (21–23 °C).

    Its performance dropped to an average reward of 0.24 per episode on female comfort. The male-trained policy, unfortunately, does not generalize to female comfort needs.

    Image by Author: Reward Difference

    We can thus say that a model trained on only one group may not perform well when applied to another, even when the difference between the groups appears small.

    Conclusion

    This is only a small and simple example.

    But it highlights a bigger issue: when AI models are trained on data from only one or a few groups, they risk failing to meet the needs of others, even when the differences between groups seem small. The male-trained agent above fails to satisfy female comfort, showing that bias in training data is reflected directly in outcomes.

    This goes beyond office temperature control. In many domains, such as healthcare, finance, and education, training models on non-representative data can be expected to produce unfair or harmful outcomes for underrepresented groups.

    For readers, this means questioning how the AI systems around us are built and pushing for transparency and fairness in their design. It also means recognizing the limitations of one-size-fits-all solutions and advocating for approaches that consider diverse experiences and needs. Only then can AI truly serve everyone equitably.

    Still, I always feel that empathy is extremely difficult in our society. Differences in race, gender, wealth, and culture make it very hard for most of us to put ourselves in others' shoes. AI, as a data-driven system, can not only easily inherit existing human cognitive biases but may also embed those biases into its technical structures. Groups that are already under-recognized may thus receive even less attention or, worse, be further marginalized.


