    How to Control a Robot with Python

By ProfitlyAI | October 23, 2025 | 10 min read


PyBullet is an open-source simulation platform, built on the Bullet physics engine, designed for training physical agents (such as robots) in a 3D environment. It provides a physics engine for both rigid and soft bodies, and it's commonly used for robotics, artificial intelligence, and game development.

Robot arms are very popular due to their speed, precision, and ability to work in hazardous environments. They are used for tasks such as welding, assembly, and material handling, especially in industrial settings (like manufacturing).

There are two ways for a robot to perform a task:

• Manual Control – requires a human to explicitly program every movement. Better suited to fixed tasks, but struggles with uncertainty and requires tedious parameter tuning for every new scenario.
• Artificial Intelligence – allows a robot to learn the best actions through trial and error to achieve a goal. It can therefore adapt to changing environments by learning from rewards and penalties, without a predefined plan (Reinforcement Learning).

In this article, I'm going to show how to build a 3D environment with PyBullet for manually controlling a robot arm. I'll present some useful Python code that can easily be applied to other similar cases (just copy, paste, run) and walk through every line of code with comments so you can replicate this example.

    Setup

PyBullet can easily be installed with one of the following commands (if pip fails, conda should work):

pip install pybullet

conda install -c conda-forge pybullet

PyBullet comes with a collection of preset URDF files (Unified Robot Description Format), which are XML schemas describing the physical structure of objects in the 3D environment (e.g. cube, sphere, robot).
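To make the schema concrete, here is a minimal hand-written URDF for a static box, sketched in plain Python. The element names follow the URDF format, but the robot name, link name, and all dimensions are arbitrary illustration values, not one of PyBullet's presets:

```python
import xml.etree.ElementTree as ET

# A minimal URDF: a single link with visual, collision, and inertial blocks.
# All names and dimensions below are illustration values.
BOX_URDF = """<?xml version="1.0"?>
<robot name="my_box">
  <link name="base_link">
    <visual>
      <geometry><box size="0.5 0.5 0.5"/></geometry>
    </visual>
    <collision>
      <geometry><box size="0.5 0.5 0.5"/></geometry>
    </collision>
    <inertial>
      <mass value="1.0"/>
      <inertia ixx="0.04" ixy="0" ixz="0" iyy="0.04" iyz="0" izz="0.04"/>
    </inertial>
  </link>
</robot>
"""

def validate_urdf(xml_text):
    # Parse the XML and return (robot name, list of link names).
    root = ET.fromstring(xml_text)
    return root.attrib["name"], [l.attrib["name"] for l in root.findall("link")]
```

Saved to a file on the search path (e.g. my_box.urdf), such a description could be loaded with p.loadURDF("my_box.urdf") just like the presets.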

import pybullet as p
import pybullet_data
import time

## setup
p.connect(p.GUI)
p.resetSimulation()
p.setGravity(gravX=0, gravY=0, gravZ=-9.8)
p.setAdditionalSearchPath(path=pybullet_data.getDataPath())

## load URDF
plane = p.loadURDF("plane.urdf")

cube = p.loadURDF("cube.urdf", basePosition=[0,0,0.1], globalScaling=0.5, useFixedBase=True)

obj = p.loadURDF("cube_small.urdf", basePosition=[1,1,0.1], globalScaling=1.5)
p.changeVisualShape(objectUniqueId=obj, linkIndex=-1, rgbaColor=[1,0,0,1]) #red

while p.isConnected():
    p.setRealTimeSimulation(True)

Let's go through the code above:

• When you connect to the physics engine, you must specify whether you want to open the graphical interface (p.GUI) or not (p.DIRECT).
• The Cartesian space has 3 dimensions: x-axis (forward/backward), y-axis (left/right), z-axis (up/down).
• It's common practice to set the gravity to (x=0, y=0, z=-9.8), simulating Earth's gravity, but you can change it based on the objective of the simulation.
• Usually, you need to import a plane to create a floor, otherwise objects would just fall indefinitely. If you want an object to be fixed to the floor, set useFixedBase=True (it's False by default). I imported the objects with basePosition=[0,0,0.1], meaning that they start 10 cm above the ground.
• The simulation can be rendered in real time with p.setRealTimeSimulation(True) or faster (CPU time) with p.stepSimulation(). Another way to run in real time is:
import time

for _ in range(240):   #240 steps per second, a timestep commonly used in videogame development
    p.stepSimulation() #advance one physics step
    time.sleep(1/240)  #slow down to real time (240 steps × 1/240 second sleep = 1 second)
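To see why 240 steps of 1/240 s add up to exactly one second of simulated time, here is a pure-Python sketch of the same fixed-timestep idea, integrating a falling body with semi-implicit Euler (the integration scheme here is an illustration of the concept, not necessarily PyBullet's internal one):

```python
# Fixed-timestep free fall: 240 steps of dt = 1/240 s = 1 simulated second.
DT = 1.0 / 240.0
G = -9.8  # same gravity as p.setGravity(gravZ=-9.8)

def simulate_fall(z0, steps=240):
    z, vz = z0, 0.0
    for _ in range(steps):
        vz += G * DT   # gravity changes velocity at each physics step
        z += vz * DT   # velocity changes position (semi-implicit Euler)
    return z, vz

# object dropped from 10 cm, like basePosition=[0,0,0.1]
z, vz = simulate_fall(z0=0.1)
```

After one simulated second the body is moving at roughly -9.8 m/s and has fallen about 4.9 m, which is why, without a plane to land on, objects just keep falling.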

Robot

At the moment, our 3D environment is made of a little object (tiny red cube) and a table (big cube) fixed to the ground (plane). I shall add a robot arm with the task of picking up the smaller object and putting it on the table.

PyBullet has a default robot arm modeled after the Franka Panda robot.

robot = p.loadURDF("franka_panda/panda.urdf", 
                   basePosition=[1,0,0.1], useFixedBase=True)

The first thing to do is to analyze the links (the solid parts) and the joints (connections between two rigid parts) of the robot. For this purpose, you can simply use p.DIRECT, as there is no need for the graphical interface.

import pybullet as p
import pybullet_data

## setup
p.connect(p.DIRECT)
p.resetSimulation()
p.setGravity(gravX=0, gravY=0, gravZ=-9.8)
p.setAdditionalSearchPath(path=pybullet_data.getDataPath())

## load URDF
robot = p.loadURDF("franka_panda/panda.urdf", 
                   basePosition=[1,0,0.1], useFixedBase=True)

## joints
dic_info = {
    0:"joint Index",  #starts at 0
    1:"joint Name",
    2:"joint Type",  #0=revolute (rotational), 1=prismatic (sliding), 4=fixed
    3:"state vectorIndex",
    4:"velocity vectorIndex",
    5:"flags",  #always 0
    6:"joint Damping",  
    7:"joint Friction",  #coefficient
    8:"joint lowerLimit",  #min angle
    9:"joint upperLimit",  #max angle
    10:"joint maxForce",  #max force allowed
    11:"joint maxVelocity",  #max speed
    12:"link Name",  #child link connected to this joint
    13:"joint Axis",
    14:"parent FramePos",  #position
    15:"parent FrameOrn",  #orientation
    16:"parent Index"  #-1 = base
}
for i in range(p.getNumJoints(bodyUniqueId=robot)):
    joint_info = p.getJointInfo(bodyUniqueId=robot, jointIndex=i)
    print(f"--- JOINT {i} ---")
    print({dic_info[k]:joint_info[k] for k in dic_info.keys()})

## links
for i in range(p.getNumJoints(robot)):
    link_name = p.getJointInfo(robot, i)[12].decode('utf-8')  #field 12="link Name"
    dyn = p.getDynamicsInfo(robot, i)
    pos, orn, *_ = p.getLinkState(robot, i)
    dic_info = {"Mass":dyn[0], "Friction":dyn[1], "Position":pos, "Orientation":orn}
    print(f"--- LINK {i}: {link_name} ---")
    print(dic_info)

Every robot has a "base", the root part of its body that connects everything (like our spine). Looking at the output of the code above, we know that the robot arm has 12 joints and 12 links, connected in a chain from the base to the fingers.
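Based on that printout and on the joint indices used in the control code later in this article, the layout can be summarized in a small lookup table (the grouping below is inferred from those indices; it is a convenience for this tutorial, not an official PyBullet structure):

```python
# Joint groups of the Panda model, as used by the control code in this article.
PANDA_JOINTS = {
    "arm": [0, 1, 2, 3, 4, 5, 6],   # the 7 revolute joints of the arm
    "fingers": [9, 10],             # the 2 prismatic finger joints (range 0-0.04 m)
    "grasp_target": 11,             # fixed link used as the end effector for IK
}

def all_controlled_joints():
    # Every joint we actively command: arm first, then the gripper fingers.
    return PANDA_JOINTS["arm"] + PANDA_JOINTS["fingers"]
```

Keeping the indices in one place like this avoids sprinkling magic numbers through the motor-control calls.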

Motion Control

In order to make a robot do something, you have to move its joints. There are 3 main types of control, all applications of Newton's laws of motion:

• Position control: basically, you tell the robot "go here". Technically, you set a target position, then apply force to gradually move the joint from its current position towards the target. As it gets closer, the applied force decreases to avoid overshooting, and eventually balances exactly against resistive forces (i.e. friction, gravity) to hold the joint steady in place.
• Velocity control: basically, you tell the robot "move at this speed". Technically, you set a target velocity, and apply force to gradually bring the velocity from zero to the target, then maintain it by balancing applied force and resistive forces (i.e. friction, gravity).
• Force/Torque control: basically, you "push the robot and see what happens". Technically, you directly set the force applied at the joint, and the resulting motion depends purely on the object's mass, inertia, and the environment. As a side note, in physics, the word "force" is used for linear motion (push/pull), while "torque" indicates rotational motion (twist/turn).
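The "force decreases as it gets closer" behaviour of position control is essentially a damped feedback loop. Here is a toy 1D version on a point mass — an illustration of the idea under simplified assumptions, not PyBullet's actual motor model (the gains kp and kd are arbitrary):

```python
# Toy position control of a 1 kg point mass: force is proportional to the
# remaining error (kp) minus a damping term (kd), and clamped to max_force,
# mirroring the `forces=` argument of setJointMotorControlArray.
DT = 1.0 / 240.0

def position_control(target, kp=50.0, kd=10.0, max_force=50.0, steps=2400):
    x, v = 0.0, 0.0
    for _ in range(steps):
        force = kp * (target - x) - kd * v           # shrinks as x nears target
        force = max(-max_force, min(max_force, force))
        v += force * DT                              # a = F/m with m = 1
        x += v * DT
    return x, v

x, v = position_control(target=1.0)   # "go here": settles at x = 1.0
```

Near the target the error term kp*(target - x) vanishes, and the controller only counteracts residual velocity — the "balances against resistive forces" behaviour described above.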

We can use p.setJointMotorControl2() to move a single joint, and p.setJointMotorControlArray() to apply force to multiple joints at the same time. For instance, I shall perform position control by giving a random target for each arm joint.

## setup
p.connect(p.GUI)
p.resetSimulation()
p.setGravity(gravX=0, gravY=0, gravZ=-9.8)
p.setAdditionalSearchPath(path=pybullet_data.getDataPath())

## load URDF
plane = p.loadURDF("plane.urdf")
cube = p.loadURDF("cube.urdf", basePosition=[0,0,0.1], globalScaling=0.5, useFixedBase=True)
robot = p.loadURDF("franka_panda/panda.urdf", basePosition=[1,0,0.1], useFixedBase=True)
obj = p.loadURDF("cube_small.urdf", basePosition=[1,1,0.1], globalScaling=1.5)
p.changeVisualShape(objectUniqueId=obj, linkIndex=-1, rgbaColor=[1,0,0,1]) #red

## move arm
joints = [0,1,2,3,4,5,6]
target_positions = [1,1,1,1,1,1,1] #<--random numbers
p.setJointMotorControlArray(bodyUniqueId=robot, jointIndices=joints,
                            controlMode=p.POSITION_CONTROL,
                            targetPositions=target_positions,
                            forces=[50]*len(joints))
for _ in range(240*3):
    p.stepSimulation()
    time.sleep(1/240)

The real question is, "what's the right target position for each joint?" The answer is Inverse Kinematics, the mathematical process of calculating the parameters needed to place a robot in a given position and orientation relative to a starting point. Every joint defines an angle, and you don't want to guess several joint angles by hand. So, we'll ask PyBullet to figure out the joint targets for a position in Cartesian space with p.calculateInverseKinematics().

obj_position, _ = p.getBasePositionAndOrientation(obj)
obj_position = list(obj_position)

target_positions = p.calculateInverseKinematics(
    bodyUniqueId=robot,
    endEffectorLinkIndex=11, #grasp-target link
    targetPosition=[obj_position[0], obj_position[1], obj_position[2]+0.25], #25cm above object
    targetOrientation=p.getQuaternionFromEuler([0,-3.14,0]) #[roll,pitch,yaw]=[0,-π,0] -> hand pointing down
)

arm_joints = [0,1,2,3,4,5,6]
p.setJointMotorControlArray(bodyUniqueId=robot, controlMode=p.POSITION_CONTROL,
                            jointIndices=arm_joints,
                            targetPositions=target_positions[:len(arm_joints)],
                            forces=[50]*len(arm_joints))

Please note that I used p.getQuaternionFromEuler() to convert the 3D Euler angles (easy for humans to understand) into a 4D quaternion, which is easier for physics engines to compute with. If you want to get technical, in a quaternion (x, y, z, w), the first three components are the rotation axis scaled by sin(θ/2), while the fourth component, cos(θ/2), encodes the amount of rotation.
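As a concrete check of the (x, y, z, w) convention, here is the axis-angle to quaternion formula in plain Python, applied to the single-axis rotation used above (pure math, independent of PyBullet):

```python
import math

def axis_angle_to_quaternion(axis, angle):
    # (x, y, z) = axis * sin(angle/2), w = cos(angle/2)
    ax, ay, az = axis
    s = math.sin(angle / 2)
    return (ax * s, ay * s, az * s, math.cos(angle / 2))

# A rotation of -pi around the y-axis: the "hand pointing down" orientation.
q = axis_angle_to_quaternion(axis=(0, 1, 0), angle=-math.pi)
```

This gives (0, -1, 0, ~0): the axis components carry the full sin(π/2) = 1 while w = cos(π/2) = 0 encodes the half-turn — the same quaternion that p.getQuaternionFromEuler([0, -3.14, 0]) approximates.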

The code above commands the robot to move its hand to a specific position above an object using Inverse Kinematics. We grab the position where the little red cube sits in the world with p.getBasePositionAndOrientation(), and use that information to calculate the target positions for the joints.
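For intuition about what calculateInverseKinematics solves, here is the closed-form IK for a simple 2-link planar arm — a textbook toy, not the Panda's actual 7-DoF solver, which works numerically (link lengths l1 = l2 = 1 are arbitrary):

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Joint angles (q1, q2) placing the tip of a 2-link planar arm at (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)                     # elbow angle ("elbow up" solution)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * c2)
    return q1, q2

def two_link_fk(q1, q2, l1=1.0, l2=1.0):
    """Forward kinematics: tip position for given joint angles."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

q1, q2 = two_link_ik(1.0, 0.5)   # joint angles that reach the point (1.0, 0.5)
```

Feeding the angles back through forward kinematics recovers the target — exactly the property we rely on when passing target_positions to the position controller.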

Interact with Objects

The robot arm has a hand ("gripper") that can be opened and closed to grab objects. From the previous analysis, we know that the "fingers" can move within 0 to 0.04 m. So, you can set the target position to the lower limit (hand closed) or the upper limit (hand open).

Moreover, I want to make sure that the arm holds the little red cube while moving around. You can use p.createConstraint() to create a temporary physics bond between the robot's gripper and the object, so that it moves together with the robot hand. In the real world, the gripper would apply force through friction and contact to squeeze the object so it doesn't fall. But, since PyBullet is a fairly simple simulator, I'll just take this shortcut.

## close hand
p.setJointMotorControl2(bodyUniqueId=robot, controlMode=p.POSITION_CONTROL,
                        jointIndex=9, #finger_joint1
                        targetPosition=0, #lower limit for finger_joint1
                        force=force)
p.setJointMotorControl2(bodyUniqueId=robot, controlMode=p.POSITION_CONTROL,
                        jointIndex=10, #finger_joint2
                        targetPosition=0, #lower limit for finger_joint2
                        force=force)

## hold the object
constraint = p.createConstraint(
    parentBodyUniqueId=robot,
    parentLinkIndex=11,
    childBodyUniqueId=obj,
    childLinkIndex=-1,
    jointType=p.JOINT_FIXED,
    jointAxis=[0,0,0],
    parentFramePosition=[0,0,0],
    childFramePosition=[0,0,0]
)

After that, we can move the arm towards the table, using the same process as before: Inverse Kinematics -> target positions -> position control.

Finally, when the robot reaches the target position in Cartesian space, we can open the hand and release the constraint between the object and the arm.

## open hand
p.setJointMotorControl2(bodyUniqueId=robot, controlMode=p.POSITION_CONTROL,
                        jointIndex=9, #finger_joint1
                        targetPosition=0.04, #upper limit for finger_joint1
                        force=force)
p.setJointMotorControl2(bodyUniqueId=robot, controlMode=p.POSITION_CONTROL,
                        jointIndex=10, #finger_joint2
                        targetPosition=0.04, #upper limit for finger_joint2
                        force=force)

## drop the object
p.removeConstraint(constraint)

Full Manual Control

In PyBullet, you need to render the simulation for every action the robot takes. Therefore, it's convenient to have a utility function that can speed up (e.g. sec=0.1) or slow down (e.g. sec=2) the rendering relative to real time (sec=1).

import pybullet as p
import time

def render(sec=1):
    for _ in range(int(240*sec)):
        p.stepSimulation()
        time.sleep(1/240)

Here's the full code for the simulation we have built in this article.

import pybullet_data

## setup
p.connect(p.GUI)
p.resetSimulation()
p.setGravity(gravX=0, gravY=0, gravZ=-9.8)
p.setAdditionalSearchPath(path=pybullet_data.getDataPath())

plane = p.loadURDF("plane.urdf")
cube = p.loadURDF("cube.urdf", basePosition=[0,0,0.1], globalScaling=0.5, useFixedBase=True)
robot = p.loadURDF("franka_panda/panda.urdf", basePosition=[1,0,0.1], globalScaling=1.3, useFixedBase=True)
obj = p.loadURDF("cube_small.urdf", basePosition=[1,1,0.1], globalScaling=1.5)
p.changeVisualShape(objectUniqueId=obj, linkIndex=-1, rgbaColor=[1,0,0,1])

render(0.1)
force = 100


## open hand
print("### open hand ###")
p.setJointMotorControl2(bodyUniqueId=robot, controlMode=p.POSITION_CONTROL,
                        jointIndex=9, #finger_joint1
                        targetPosition=0.04, #upper limit for finger_joint1
                        force=force)
p.setJointMotorControl2(bodyUniqueId=robot, controlMode=p.POSITION_CONTROL,
                        jointIndex=10, #finger_joint2
                        targetPosition=0.04, #upper limit for finger_joint2
                        force=force)
render(0.1)
print(" ")


## move arm
print("### move arm ###")
obj_position, _ = p.getBasePositionAndOrientation(obj)
obj_position = list(obj_position)

target_positions = p.calculateInverseKinematics(
    bodyUniqueId=robot,
    endEffectorLinkIndex=11, #grasp-target link
    targetPosition=[obj_position[0], obj_position[1], obj_position[2]+0.25], #25cm above object
    targetOrientation=p.getQuaternionFromEuler([0,-3.14,0]) #[roll,pitch,yaw]=[0,-π,0] -> hand pointing down
)
print("target positions:", target_positions)

arm_joints = [0,1,2,3,4,5,6]
p.setJointMotorControlArray(bodyUniqueId=robot, controlMode=p.POSITION_CONTROL,
                            jointIndices=arm_joints,
                            targetPositions=target_positions[:len(arm_joints)],
                            forces=[force/3]*len(arm_joints))

render(0.5)
print(" ")


## close hand
print("### close hand ###")
p.setJointMotorControl2(bodyUniqueId=robot, controlMode=p.POSITION_CONTROL,
                        jointIndex=9, #finger_joint1
                        targetPosition=0, #lower limit for finger_joint1
                        force=force)
p.setJointMotorControl2(bodyUniqueId=robot, controlMode=p.POSITION_CONTROL,
                        jointIndex=10, #finger_joint2
                        targetPosition=0, #lower limit for finger_joint2
                        force=force)
render(0.1)
print(" ")


## hold the object
print("### hold the object ###")
constraint = p.createConstraint(
    parentBodyUniqueId=robot,
    parentLinkIndex=11,
    childBodyUniqueId=obj,
    childLinkIndex=-1,
    jointType=p.JOINT_FIXED,
    jointAxis=[0,0,0],
    parentFramePosition=[0,0,0],
    childFramePosition=[0,0,0]
)
render(0.1)
print(" ")


## move towards the table
print("### move towards the table ###")
cube_position, _ = p.getBasePositionAndOrientation(cube)
cube_position = list(cube_position)

target_positions = p.calculateInverseKinematics(
    bodyUniqueId=robot,
    endEffectorLinkIndex=11, #grasp-target link
    targetPosition=[cube_position[0], cube_position[1], cube_position[2]+0.80], #80cm above the table
    targetOrientation=p.getQuaternionFromEuler([0,-3.14,0]) #[roll,pitch,yaw]=[0,-π,0] -> hand pointing down
)
print("target positions:", target_positions)

arm_joints = [0,1,2,3,4,5,6]
p.setJointMotorControlArray(bodyUniqueId=robot, controlMode=p.POSITION_CONTROL,
                            jointIndices=arm_joints,
                            targetPositions=target_positions[:len(arm_joints)],
                            forces=[force*3]*len(arm_joints))
render()
print(" ")


## open hand and drop the object
print("### open hand and drop the object ###")
p.setJointMotorControl2(bodyUniqueId=robot, controlMode=p.POSITION_CONTROL,
                        jointIndex=9, #finger_joint1
                        targetPosition=0.04, #upper limit for finger_joint1
                        force=force)
p.setJointMotorControl2(bodyUniqueId=robot, controlMode=p.POSITION_CONTROL,
                        jointIndex=10, #finger_joint2
                        targetPosition=0.04, #upper limit for finger_joint2
                        force=force)
p.removeConstraint(constraint)
render()

    Conclusion

This article has been a tutorial on how to manually control a robot arm. We learned about motion control, Inverse Kinematics, and grabbing and moving objects. New tutorials with more advanced robots will follow.

Full code for this article: GitHub

I hope you enjoyed it! Feel free to contact me with questions and feedback, or just to share your interesting projects.

    👉 Let’s Connect 👈

(All images are by the author unless otherwise noted)


