<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>CTO Robotics Media - Global Robotics &amp; AI News</title>
	<atom:link href="https://ctorobotics.com/category/robotics/humanoid-robots/feed/" rel="self" type="application/rss+xml" />
	<link>https://ctorobotics.com/</link>
	<description>Global Robotics, AI &#38; Technology Media</description>
	<lastBuildDate>Tue, 21 Apr 2026 19:15:58 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://ctorobotics.com/wp-content/uploads/2025/10/cropped-ctomedialogo-2-32x32.jpg</url>
	<title>CTO Robotics Media - Global Robotics &amp; AI News</title>
	<link>https://ctorobotics.com/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Nvidia-powered humanoid clears 8-hour Siemens factory shift at 60 totes per hour</title>
		<link>https://ctorobotics.com/nvidia-powered-humanoid-clears-8-hour-siemens-factory-shift-at-60-totes-per-hour/</link>
					<comments>https://ctorobotics.com/nvidia-powered-humanoid-clears-8-hour-siemens-factory-shift-at-60-totes-per-hour/#respond</comments>
		
		<dc:creator><![CDATA[CTO Robotics]]></dc:creator>
		<pubDate>Tue, 21 Apr 2026 19:15:58 +0000</pubDate>
				<category><![CDATA[Humanoid Robots]]></category>
		<category><![CDATA[Robotics]]></category>
		<guid isPermaLink="false">https://ctorobotics.com/?p=2511</guid>

					<description><![CDATA[<p><img width="150" height="150" src="https://ctorobotics.com/wp-content/uploads/2026/04/3fixjy2gwtg-XOTHsO-150x150.jpg" class="attachment-thumbnail size-thumbnail wp-post-image" alt="" decoding="async" />Siemens, UK robotics startup Humanoid, and Nvidia announced at Hannover Messe 2026 the results of...</p>
<p>The post <a href="https://ctorobotics.com/nvidia-powered-humanoid-clears-8-hour-siemens-factory-shift-at-60-totes-per-hour/">Nvidia-powered humanoid clears 8-hour Siemens factory shift at 60 totes per hour</a> appeared first on <a href="https://ctorobotics.com">CTO ROBOTICS Media</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img width="150" height="150" src="https://ctorobotics.com/wp-content/uploads/2026/04/3fixjy2gwtg-XOTHsO-150x150.jpg" class="attachment-thumbnail size-thumbnail wp-post-image" alt="" decoding="async" /><p>Siemens, UK robotics startup Humanoid, and Nvidia announced at Hannover Messe 2026 the results of a two-week live factory deployment conducted in January, which exceeded all predefined benchmarks.</p>
<p>Humanoid’s HMND 01 Alpha wheeled robot operated continuously for over eight hours at Siemens’ electronics factory in Erlangen, Germany, performing tote destacking at 60 container moves per hour, with a pick-and-place success rate above 90 per cent.</p>
<p>The robot operated alongside human workers and existing automated systems in a live production environment, where performance directly impacted operations.</p>
<h2 class="wp-block-heading">What the robot actually did</h2>
<p>The robot’s task involved picking storage totes, transporting them across the facility, and placing them on conveyor belts at designated handover points for human workers.</p>
<p>This cycle repeated until each stack was cleared. The task is repetitive and physically demanding, representing a challenge for industrial automation in unpredictable environments or where real-time human coordination is needed.</p>
<p>Siemens’ Global Head of Manufacturing Motion Control, Stephan Schlauss, described the Erlangen plant as “customer zero,” noting that Siemens prioritized its own factory before offering the capability to external customers.</p>
<p>This approach positions Siemens as the first paying customer and validator of the technology, rather than a passive evaluator.</p>
<h2 class="wp-block-heading">The technology stack behind it</h2>
<p>The HMND 01 Alpha is built on <a href="https://interestingengineering.com/ai-robotics/siemens-nvidia-industrial-ai-operating-system" rel="dofollow">Nvidia’s physical AI stack</a>, with on-board computing powered by Nvidia Jetson Thor.</p>
<p>Training was conducted using Nvidia Isaac Lab for reinforcement learning and policy development, with Nvidia Isaac Sim handling simulation-first validation before any physical deployment.</p>
<p>Integration into Siemens’ production systems was handled through the Siemens Xcelerator platform, which provided digital twin capability, AI-enabled perception, PLC-robot interfaces, fleet management, and industrial communication networks.</p>
<p>This enabled the robot to coordinate in real time with production systems, other autonomous guided vehicles, and human workers, demonstrating the level of enterprise integration that distinguishes a true factory deployment from a demonstration.</p>
<p>Deepu Talla, Nvidia’s vice president of robotics and edge AI, stated that the deployment paves the way for humanoid robots to achieve real production targets on active factory floors.</p>
<h2 class="wp-block-heading">Seven months instead of two years</h2>
<p>One of the more striking claims in the announcement is the development timeline.</p>
<p>The simulation-first approach, which involves training and validating behaviors in a virtual environment before physical deployment, allowed Humanoid to reduce prototype development from the typical 18 to 24 months to <a href="https://interestingengineering.com/ai-robotics/bts-of-fastest-humanoid-robot-development" rel="dofollow">approximately 7 months</a>.</p>
<p>That speed is itself a product pitch: faster iteration cycles mean faster deployment readiness for potential customers.</p>
<p>Founded in 2024 by Artem Sokolov, Humanoid is headquartered in London with offices in Boston and Vancouver, and employs over 200 engineers from companies such as Apple, Tesla, Google, and Boston Dynamics.</p>
<p>The company also produces a bipedal version of the HMND 01 Alpha, featuring 29 degrees of freedom and a comprehensive sensor suite, including RGB cameras, depth sensors, and 6D force/torque sensors.</p>
<h2 class="wp-block-heading">A reference architecture, not a one-off</h2>
<p>Siemens and Humanoid did not provide a commercial rollout timeline, but presented the Erlangen deployment as establishing a “<a href="https://interestingengineering.com/ai-robotics/humanoid-robot-completes-siemens-trial" rel="dofollow">factory-grade model</a>” that other manufacturers can replicate, serving as a reference architecture for humanoid deployment rather than a standalone demonstration.</p>
<p>This positioning reflects a broader industrial trend, as humanoid robots capable of operating in human-centered environments are increasingly viewed as solutions to labor shortages in manufacturing, where fully automated lines are impractical due to product variability, safety requirements, or the need for human-robot collaboration.</p>
<p>The post <a href="https://ctorobotics.com/nvidia-powered-humanoid-clears-8-hour-siemens-factory-shift-at-60-totes-per-hour/">Nvidia-powered humanoid clears 8-hour Siemens factory shift at 60 totes per hour</a> appeared first on <a href="https://ctorobotics.com">CTO ROBOTICS Media</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://ctorobotics.com/nvidia-powered-humanoid-clears-8-hour-siemens-factory-shift-at-60-totes-per-hour/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Tesla’s Optimus humanoid robot greets runners, poses for photos at Boston Marathon</title>
		<link>https://ctorobotics.com/teslas-optimus-humanoid-robot-greets-runners-poses-for-photos-at-boston-marathon/</link>
					<comments>https://ctorobotics.com/teslas-optimus-humanoid-robot-greets-runners-poses-for-photos-at-boston-marathon/#respond</comments>
		
		<dc:creator><![CDATA[CTO Robotics]]></dc:creator>
		<pubDate>Tue, 21 Apr 2026 18:50:23 +0000</pubDate>
				<category><![CDATA[Humanoid Robots]]></category>
		<category><![CDATA[Robotics]]></category>
		<guid isPermaLink="false">https://ctorobotics.com/?p=2482</guid>

					<description><![CDATA[<p><img width="150" height="150" src="https://ctorobotics.com/wp-content/uploads/2026/04/jsh7yelrrxw-M1nnpN-150x150.jpg" class="attachment-thumbnail size-thumbnail wp-post-image" alt="" decoding="async" />US carmaker Tesla turned the Boston Marathon into a live stage for its Optimus humanoid...</p>
<p>The post <a href="https://ctorobotics.com/teslas-optimus-humanoid-robot-greets-runners-poses-for-photos-at-boston-marathon/">Tesla’s Optimus humanoid robot greets runners, poses for photos at Boston Marathon</a> appeared first on <a href="https://ctorobotics.com">CTO ROBOTICS Media</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img width="150" height="150" src="https://ctorobotics.com/wp-content/uploads/2026/04/jsh7yelrrxw-M1nnpN-150x150.jpg" class="attachment-thumbnail size-thumbnail wp-post-image" alt="" decoding="async" loading="lazy" /><p>US carmaker Tesla turned the Boston Marathon into a live stage for its Optimus humanoid robot after placing it a few steps away from the finish line, at one of the race’s most photographed spots.</p>
<p>On Marathon Monday, the company stationed the robot at its 888 Boylston Street showroom in Boston, displaying it in front of thousands of runners from roughly 130 countries, as well as hundreds of thousands of spectators.</p>
<p>Used as part of a broader marketing push, the humanoid cheered on runners, posed for photos, and engaged with spectators, turning the global event into a high-visibility showcase at no advertising cost.</p>
<p>“Join us from April 19 to 20, 2026, at Tesla Boston Boylston Street showroom to meet Optimus, our humanoid robot, for Marathon Monday,” the company said. “Optimus will be cheering with you on the sidelines and posing for photos.”</p>
<h2 class="wp-block-heading">Optimus meets runners</h2>
<p>According to <em><a href="https://www.teslarati.com/tesla-optimus-boston-marathon/" target="_blank" rel="noopener noreferrer nofollow">Teslarati</a></em>, a California-based media company covering Tesla, SpaceX, and Elon Musk, the humanoid’s location was a well-planned strategic move, since the final mile of the Boston Marathon runs along Boylston Street.</p>
<p>The 26.2-mile race, which is hosted by eight cities and towns, finishes at Copley Square, where more than 30,000 runners cross the line as hundreds of thousands watch, putting <a href="https://interestingengineering.com/ai-robotics/optimus-humanoid-robot-masters-running" target="_blank" rel="dofollow noopener">Optimus</a> at the center of attention.</p>
<p>Also referred to as <a href="https://interestingengineering.com/ai-robotics/chinese-parts-cloud-tesla-humanoid-robot-growth" target="_blank" rel="dofollow noopener">Tesla Bot, Optimus</a> was first announced during Tesla’s Artificial Intelligence (AI) Day event on August 19, 2021. The humanoid represents CEO Elon Musk’s vision to create a general-purpose robot designed to handle risky, repetitive, or undesirable tasks.</p>
<p>The first prototype debuted in 2022, standing five feet, eight inches tall. Powered by <a href="https://interestingengineering.com/transportation/autonomous-self-driving-cars-tesla" target="_blank" rel="dofollow noopener">Tesla’s Autopilot hardware</a>, it demonstrated basic motor skills with 40 degrees of freedom and custom actuators.</p>
<p>After serving popcorn at the Tesla Diner in Hollywood in July 2025 and appearing at a Miami showroom event in December 2025, the humanoid was showcased at the Appliance and Electronics World Expo in Shanghai in March 2026.</p>
<p>At the expo, staff said that mass production could begin by late 2026. According to Musk, at scale, the robot could cost between USD 20,000 and 30,000, roughly in line with a typical car.</p>
<h2 class="wp-block-heading">Marathon robot rival</h2>
<p>While Tesla showed Optimus as a static attraction at the Boston Marathon, China pushed humanoid robotics in a far more dynamic direction at the Beijing E-Town Humanoid Robot Half-Marathon.</p>
<p>Held on Sunday, April 19, a day before the Boston race, <a href="https://www.theguardian.com/sport/2026/apr/19/humanoid-robots-race-beijing-half-marathon" target="_blank" rel="noopener noreferrer nofollow">the event</a> saw dozens of robots run alongside humans on a parallel course. It represented one of the first real-world endurance trials for humanoid machines.</p>
<p>Around half the robots completed the 13-mile (21-kilometer) race autonomously, without remote control. The <a href="https://www.reuters.com/sports/humanoid-robots-race-past-humans-beijing-half-marathon-showing-rapid-advances-2026-04-19/" target="_blank" rel="noopener noreferrer nofollow">winning robot “Lightning,”</a> a bipedal humanoid built by Honor, a Huawei spin-off, finished the race in 50 minutes and 26 seconds.</p>
<p>It reportedly surpassed the human half-marathon <a href="https://edition.cnn.com/2026/04/19/china/china-robot-half-marathon-intl-hnk" target="_blank" rel="noopener noreferrer nofollow">world record</a> of 57 minutes and 20 seconds, held by Ugandan runner Jacob Kiplimo, according to World Athletics.</p>
<p>“The future will definitely be an AI era,” Chu Tianqi, a 23-year-old engineering student at Beijing University of Posts and Telecommunications, <a href="https://www.reuters.com/sports/humanoid-robots-race-past-humans-beijing-half-marathon-showing-rapid-advances-2026-04-19/" target="_blank" rel="noopener noreferrer nofollow">told<em> Reuters</em></a>. “If people don’t know how to use AI now, especially if some are still resistant to it, they will definitely become obsolete.”</p>
<p>The post <a href="https://ctorobotics.com/teslas-optimus-humanoid-robot-greets-runners-poses-for-photos-at-boston-marathon/">Tesla’s Optimus humanoid robot greets runners, poses for photos at Boston Marathon</a> appeared first on <a href="https://ctorobotics.com">CTO ROBOTICS Media</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://ctorobotics.com/teslas-optimus-humanoid-robot-greets-runners-poses-for-photos-at-boston-marathon/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The USC Professor Who Pioneered Socially Assistive Robotics</title>
		<link>https://ctorobotics.com/the-usc-professor-who-pioneered-socially-assistive-robotics/</link>
					<comments>https://ctorobotics.com/the-usc-professor-who-pioneered-socially-assistive-robotics/#respond</comments>
		
		<dc:creator><![CDATA[CTO Robotics]]></dc:creator>
		<pubDate>Tue, 21 Apr 2026 15:58:15 +0000</pubDate>
				<category><![CDATA[Humanoid Robots]]></category>
		<category><![CDATA[Robotics]]></category>
		<guid isPermaLink="false">https://ctorobotics.com/?p=2458</guid>

					<description><![CDATA[<p><img width="150" height="150" src="https://ctorobotics.com/wp-content/uploads/2026/04/a-smiling-blonde-woman-poses-with-a-humanoid-robotic-torso-wearing-a-usc-sweatshirt-DPS4DD-150x150.jpg" class="attachment-thumbnail size-thumbnail wp-post-image" alt="" decoding="async" loading="lazy" />When the robotics engineering field that Maja Matarić wanted to work in didn’t exist, she helped create it. In 2005 she helped define the new area of...</p>
<p>The post <a href="https://ctorobotics.com/the-usc-professor-who-pioneered-socially-assistive-robotics/">The USC Professor Who Pioneered Socially Assistive Robotics</a> appeared first on <a href="https://ctorobotics.com">CTO ROBOTICS Media</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img width="150" height="150" src="https://ctorobotics.com/wp-content/uploads/2026/04/a-smiling-blonde-woman-poses-with-a-humanoid-robotic-torso-wearing-a-usc-sweatshirt-DPS4DD-150x150.jpg" class="attachment-thumbnail size-thumbnail wp-post-image" alt="" decoding="async" loading="lazy" /><p>When the robotics engineering field that <a href="https://www.linkedin.com/in/maja-mataric-5b670014/" target="_blank" rel="noopener noreferrer">Maja Matarić</a> wanted to work in didn’t exist, she helped create it. In 2005 she helped define the new area of socially assistive robotics.</p>
<p>As an associate professor of computer science, neuroscience, and pediatrics at the <a href="https://www.usc.edu/" target="_blank" rel="noopener noreferrer">University of Southern California</a>, in Los Angeles, she developed robots to provide personalized therapy and care through social interactions.</p>
<h3>Maja Matarić</h3>
<p><strong>Employer </strong></p>
<p>University of Southern California, Los Angeles</p>
<p><strong>Job Title </strong></p>
<p>Professor of computer science, neuroscience, and pediatrics</p>
<p><strong>Member grade</strong></p>
<p>Fellow</p>
<p><strong>Alma maters </strong></p>
<p>University of Kansas and MIT</p>
<p>The robots could have conversations, play games, and respond to emotions.</p>
<p>Today the IEEE Fellow is a professor at USC. She studies how robots can help students with anxiety and depression undergo cognitive behavioral therapy. CBT focuses on changing a person’s negative thought patterns, behaviors, and emotional responses.</p>
<p>For her work, she received a 2025 Robotics Medal from <a href="https://www.massrobotics.org/" target="_blank" rel="noopener noreferrer">MassRobotics</a>, which recognizes female researchers advancing robotics. The Boston-based nonprofit provides robotics startups with a workspace, prototyping facilities, mentorship, and networking opportunities.</p>
<p>When receiving the award at the ceremony in Boston, Matarić was overcome with joy, she says.</p>
<p>“I’ve been very fortunate to be honored with several awards, which I am grateful for. But there was something very special about getting the MassRobotics medal, because I knew at least half the people in the room,” she says. “Everyone was just smiling, and there was a great sense of love.”</p>
<h2>Seeing herself as an engineer</h2>
<p>Matarić grew up in Belgrade, Serbia. Her father was an engineer, and her mother was a writer. After her father died when she was 16, Matarić and her mother moved to the United States.</p>
<p>She credits her father for igniting her interest in engineering, and her uncle, an aerospace engineer, for introducing her to computer science.</p>
<p>Matarić says she didn’t consider herself an engineer until she joined USC’s faculty, since she had always worked in computer science.</p>
<p>“In retrospect, I’ve always been an engineer,” Matarić says. “But I didn’t set out specifically thinking of myself as one—which is just one of the many things I like to convey to young people: You don’t always have to know exactly everything in advance.”</p>
<p class="shortcode-media shortcode-media-youtube"><small class="image-media media-caption">Maja Matarić and her lab are exploring how socially assistive robots can help improve the communication skills of children with autism spectrum disorder.</small> <small class="image-media media-photo-credit">National Science Foundation News</small></p>
<p>While pursuing her bachelor’s degree in computer science at the <a href="https://www.ku.edu/" target="_blank" rel="noopener noreferrer">University of Kansas</a> in Lawrence, she was introduced to industrial robotics through a textbook. After earning her degree in 1987, she had an opportunity to continue her education as a graduate student at MIT’s AI Lab (now the <a href="https://www.csail.mit.edu/node/2873" target="_blank" rel="noopener noreferrer">Computer Science and Artificial Intelligence Lab</a>). During her first year, she explored the different research projects being conducted by faculty members, she said in a <a href="https://ethw.org/Oral-History:Maja_Mataric" target="_blank" rel="noopener noreferrer">2010 oral history</a> conducted by the <a href="https://www.ieee.org/content/dam/ieee-org/ieee/web/org/about/history-center/ieee-history-center-newsletter-114.pdf" target="_blank" rel="noopener noreferrer">IEEE History Center</a>. She met IEEE Life Fellow <a href="https://spectrum.ieee.org/rodney-brooks-three-laws-robotics" target="_self">Rodney Brooks</a>, who was working on novel reactive and behavior-based robotic systems. His work so excited her that she joined his lab and conducted her master’s thesis under his tutelage.</p>
<p>Inspired by the way animals use landmarks to navigate, Matarić developed <a href="https://dspace.mit.edu/bitstream/handle/1721.1/7027/AITR-1228.pdf?...#:~:text=Toto%20is%20an%20example%20of,learn-%20ing%20and%20path%20planning." target="_blank" rel="noopener noreferrer">Toto</a>, the first navigating behavior-based robot. Toto used distributed models to map the AI Lab building where Matarić worked and to plan its path to different rooms, relying on sonar to detect walls, doors, and furniture, according to Matarić’s book, “<a href="https://pages.ucsd.edu/~ehutchins/cogs8/mataric-primer.pdf" target="_blank" rel="noopener noreferrer">The Robotics Primer</a>.”</p>
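<p>The landmark-based navigation described above can be sketched roughly in code. This is a hypothetical Python illustration, not Toto’s actual software: the landmark names, the graph helper, and the routing function are all invented for the example. It captures the core idea that landmarks detected while wandering become nodes in a graph, and a route to a goal landmark is found by spreading outward from the goal through that graph.</p>

```python
# Sketch (hypothetical, not Toto's code) of behavior-based landmark navigation:
# landmarks observed while exploring form a graph; a route is found by
# breadth-first spreading from the goal landmark back to the robot's position.
from collections import deque

def build_landmark_graph(edges):
    """edges: (landmark_a, landmark_b) pairs observed while exploring."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    return graph

def route_to(graph, start, goal):
    """Spread from the goal through the landmark graph, then read off the
    chain of landmarks the robot should follow from its current position."""
    parent = {goal: None}
    frontier = deque([goal])
    while frontier:
        node = frontier.popleft()
        if node == start:
            break
        for nbr in graph.get(node, ()):
            if nbr not in parent:
                parent[nbr] = node
                frontier.append(nbr)
    if start not in parent:
        return None  # goal unreachable from the current landmark
    path, node = [], start
    while node is not None:
        path.append(node)
        node = parent[node]
    return path

# Landmarks a wandering robot might have logged: corridors and walls.
graph = build_landmark_graph([
    ("corridor-1", "left-wall-A"),
    ("left-wall-A", "corridor-2"),
    ("corridor-2", "right-wall-B"),
])
print(route_to(graph, "corridor-1", "right-wall-B"))
```

<p>Toto’s real implementation was distributed across behaviors rather than centralized like this sketch, but the planning effect is analogous: activation flowing from the goal landmark tells the robot which neighboring landmark to head for next.</p>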
<p>After earning her master’s degree in AI and robotics in 1990, she continued to work under Brooks as a doctoral student, pioneering distributed algorithms that allowed a team of up to 20 robots to execute complex tasks in tandem, including searching for objects and exploring their environment.</p>
<p>Matarić earned her Ph.D. in AI and robotics in 1994 and joined <a href="https://www.brandeis.edu/" target="_blank" rel="noopener noreferrer">Brandeis University</a>, in Waltham, Mass., as an assistant professor of computer science. There she founded the Interaction Lab, where she developed autonomous robots that work together to accomplish tasks.</p>
<p>Three years later, she relocated to California and joined USC’s <a href="https://viterbischool.usc.edu/" target="_blank" rel="noopener noreferrer">Viterbi School of Engineering</a> as an assistant professor in computer science and neuroscience.</p>
<p>In 2002 she helped to found the Center for Robotics and Embedded Systems (now the <a href="https://rasc.usc.edu/" target="_blank" rel="noopener noreferrer">Robotics and Autonomous Systems Center</a>). The RASC focuses on research into human-centric and scalable robotic systems and promotes interdisciplinary partnerships across USC.</p>
<p>The shift in Matarić’s research came after she gave birth to her first child in 1998. When her daughter was a bit older and asked Matarić why she worked with robots, she wanted to be able to “say something better than ‘I publish a lot of research papers,’ or ‘it’s well-recognized,’” she says.</p>
<p class="pull-quote">“In academia, you can be in a leadership role and still do research. It’s a wonderful and important opportunity that lets academics be on top of our field and also train the next generation of students and help the next generation of faculty colleagues.”</p>
<p>“Kids don’t consider those good answers, and they’re probably right,” she says. “This made me realize I was in a position to do something different. And I really wanted the answer to my daughter’s future question to be, ‘Mommy’s robots help people.’”</p>
<p>Matarić and her doctoral student <a href="https://www.unr.edu/cse/people/david-feil-seifer" target="_blank" rel="noopener noreferrer">David Feil-Seifer</a> presented a paper defining socially assistive robotics at the 2005 <a href="https://icorr-c.org/" target="_blank" rel="noopener noreferrer">International Conference on Rehabilitation Robotics</a>. It was the only paper that talked about helping people complete tasks and learn skills by speaking with them rather than by performing physical jobs, she says.</p>
<p>Feil-Seifer is now a professor of computer science and engineering at the <a href="https://www.unr.edu/" target="_blank" rel="noopener noreferrer">University of Nevada</a> in Reno.</p>
<p>At the same time, she reestablished the <a href="https://uscinteractionlab.web.app/" target="_blank" rel="noopener noreferrer">Interaction Lab at USC</a>, making its focus creating robots that provide social, rather than physical, support.</p>
<p>“At this point in my career journey, I’ve matured to a place where I don’t want to do just curiosity-driven research alone,” she says. “Plenty of what my team and I do today is still driven by curiosity, but it is answering the question: ‘How can we help someone live a better life?’”</p>
<p>In 2006 she was promoted to full professor and named senior associate dean for research in USC’s Viterbi School of Engineering. In 2012 she became vice dean for research.</p>
<p>“In academia, you can be in a leadership role and still do research,” she says. “It’s a wonderful and important opportunity that lets academics be on top of our field and also train the next generation of students and help the next generation of faculty colleagues.”</p>
<h2>Research in socially assistive robotics</h2>
<p>One of the longest research projects Matarić has led at her Interaction Lab is exploring how socially assistive robots can help improve the communication skills of children with <a href="https://www.mayoclinic.org/diseases-conditions/autism-spectrum-disorder/symptoms-causes/syc-20352928" target="_blank" rel="noopener noreferrer">autism spectrum disorder</a>. ASD is a lifelong neurological condition that affects the way people interact with others, and the way they learn. Children with ASD often struggle with social behaviors such as reading nonverbal cues, playing with others, and making eye contact.</p>
<p>Matarić and her team developed a robot, <a href="https://spectrum.ieee.org/041910-bandit-little-dog-and-more-usc-shows-off-its-robots" target="_self">Bandit</a>, that can play games with a child and give the youngster words of affirmation. Bandit is 56 centimeters tall and has a humanlike head, torso, and arms. Its head can pan and tilt. The robot uses two <a href="https://www.edmundoptics.com/c/firewire-cameras/1014/?srsltid=AfmBOopjvhJQdzbmxyRP-Bgi50iYGeAIcQp3WkFHPM4R78EHqgr4buL0" target="_blank" rel="noopener noreferrer">FireWire</a> cameras as its eyes, and it has a movable mouth and eyebrows, allowing it to exhibit a variety of facial expressions, according to <a href="https://spectrum.ieee.org/" target="_self"><em>IEEE Spectrum</em></a>’s <a href="https://robotsguide.com/robots/bandit" target="_blank" rel="noopener noreferrer">robots guide</a>. Its torso is attached to a wheeled base.</p>
<p>The study showed that when interacting with Bandit, children with ASD exhibited social behaviors that were out of the ordinary for them, such as initiating play and imitating the robot.</p>
<p>Matarić and her team also studied how the robot could serve as a social and cognitive aid for elderly people and stroke patients. Bandit was programmed to instruct and motivate users to perform daily movement exercises such as seated aerobics.</p>
<p class="shortcode-media shortcode-media-rebelmouse-image"><img decoding="async" class="rm-shortcode aligncenter" src="https://spectrum.ieee.org/media-library/a-smiling-blonde-woman-gestures-at-a-customizable-tabletop-robot-that-wears-a-knit-outfit-of-a-cute-animal-over-its-shell.jpg?id=65574186&amp;width=980" alt="A smiling blonde woman gestures at a customizable tabletop robot that wears a knit outfit of a cute animal over its shell." data-rm-shortcode-id="d0240a8f48f895ca49e2fdac2114e5f9" data-rm-shortcode-name="rebelmouse-image" /> <small class="image-media media-caption">Maja Matarić and doctoral student Amy O’Connell testing Blossom, which is being used to study how it can aid students with anxiety or depression.</small><small class="image-media media-photo-credit">University of Southern California</small></p>
<p>Over the years, Matarić’s lab developed other robots including <a href="https://magazine.viterbi.usc.edu/spring-2020/features/say-hi-to-kiwi/" target="_blank" rel="noopener">Kiwi</a> and <a href="https://dl.acm.org/doi/10.1145/3310356" target="_blank" rel="noopener noreferrer">Blossom</a>. Kiwi, which looked like an owl, helped children with ASD learn social and cognitive skills, motivated elderly people living alone to be more physically active, and mediated discussions among family members. Blossom, originally developed at <a href="https://www.cornell.edu/" target="_blank" rel="noopener noreferrer">Cornell</a>, was adapted by the Interaction Lab to make it less expensive and more personalizable for individuals. The robot is being used to study how it can aid students with anxiety or depression to practice cognitive behavioral therapy.</p>
<p>Matarić’s latest line of research began when she learned that large language model (LLM) chatbots were being promoted to help people with mental health struggles, she said in an <a href="https://edhub.ama-assn.org/jn-learning/audio-player/18985349" target="_blank" rel="noopener noreferrer">episode of the AMA Medical News podcast</a>.</p>
<p>“It is generally not easy to get [an appointment with a] therapist, or there might not be insurance coverage,” she said. “These, combined with the rates of anxiety and depression, created a real need.”</p>
<p>That made the chatbot idea appealing, she says, but she wanted to see whether chatbots were as effective as a friendly robot such as Blossom.</p>
<p>Matarić and her team used the same LLMs to power CBT practice with a chatbot and with Blossom. They ran a two-week study in the USC dorms, where students were randomly assigned to complete CBT exercises daily with either a chatbot or the robot. Participants filled out a clinical assessment to measure their psychiatric distress before and after each session.</p>
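<p>The study design described above, random assignment to a condition plus pre- and post-session distress scores, can be sketched in a few lines. All names, numbers, and function signatures here are hypothetical; this is an illustration of the design, not the team’s actual analysis code.</p>

```python
# Sketch (hypothetical data and names) of a randomized pre/post study:
# participants are split between a chatbot arm and a robot arm, and each
# arm's average change in a distress score is computed after the sessions.
import random
import statistics

def assign_conditions(participants, seed=0):
    """Randomly split participants between the chatbot and robot arms."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"chatbot": shuffled[:half], "robot": shuffled[half:]}

def mean_change(scores):
    """scores: {participant: (pre, post)} distress measurements.
    A negative mean change means distress decreased after the sessions."""
    return statistics.mean(post - pre for pre, post in scores.values())

# Hypothetical pre/post distress scores for one study arm.
robot_scores = {"p1": (18, 12), "p2": (22, 15), "p3": (15, 14)}
print(mean_change(robot_scores))  # negative: distress decreased on average
```

<p>The random assignment is what lets the two arms be compared directly: with the same underlying LLM powering both conditions, any difference in the arms’ mean change can be attributed to the embodiment rather than the model.</p>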
<p>The study showed that students who interacted with the robot experienced a significant decrease in psychiatric distress, Matarić said in the podcast, while students who interacted with the chatbot did not.</p>
<p class="pull-quote">“Joining an [IEEE] society has an impact, and it can be personal. That’s why I recommend my students join the organization—because it’s important to get out there and get connected.”</p>
<p>She and her team also reviewed transcripts of conversations between the students and the robot to evaluate how well the LLM responded to the participants. They found the robot was more effective than the chatbot, even though both were using the same model.</p>
<p>Based on those findings, in 2024 Matarić received a <a href="https://reporter.nih.gov/search/l8sqmMXycEaOMmv3hQHU1A/project-details/11064932" target="_blank" rel="noopener noreferrer">grant</a> from the U.S. <a href="https://www.nimh.nih.gov/" target="_blank" rel="noopener noreferrer">National Institute of Mental Health</a> to conduct a six-week clinical trial to explore how effective a socially assistive robot could be at delivering CBT practice. The trial, currently underway, is also expected to study how Blossom can be personalized to adapt to each user’s preferences and progress, including the way the robot moves, which exercises it recommends, and what feedback it gives.</p>
<p>During the trial, the 120 students participating are wearing <a href="https://spectrum.ieee.org/fitbit" target="_self">Fitbits</a> to study their physiological responses. The participants fill out a clinical assessment to measure their psychiatric distress before and after each session.</p>
<p>Data including the participants’ feelings of relating to the robot, intrinsic motivation, engagement, and adherence will be assessed by the research team, Matarić says.</p>
<p>She says she’s proud of the graduate students working on this project, and seeing them grow as engineers is one of the most rewarding parts of working in academia.</p>
<p>“Engineers generally don’t anticipate having to work with human study participants and needing to understand psychology in addition to the hardcore engineering,” she says. “So the students who choose to do this research are just wonderful, caring people.”</p>
<h2>Finding a community at IEEE</h2>
<p>Matarić joined IEEE as a graduate student in 1992, the year she published her first paper in <a href="https://ieeexplore.ieee.org/document/1303682" target="_blank" rel="noopener noreferrer">IEEE Transactions on Robotics and Automation</a>. The paper, “<a href="https://ieeexplore.ieee.org/document/143349/" target="_blank" rel="noopener noreferrer">Integration of Representation Into Goal-Driven Behavior-Based Robots</a>,” described her work on Toto.</p>
<p>As a member of the <a href="https://www.ieee-ras.org/" target="_blank" rel="noopener noreferrer">IEEE Robotics and Automation Society</a>, she says she has gained a community of like-minded people. She enjoys attending conferences including the <a href="https://2025.ieee-icra.org/" target="_blank" rel="noopener noreferrer">IEEE International Conference on Robotics and Automation</a>, the <a href="https://www.ieee-ras.org/conferences-workshops/financially-co-sponsored/iros/" target="_blank" rel="noopener noreferrer">IEEE/RSJ International Conference on Intelligent Robots and Systems</a>, and the <a href="https://humanrobotinteraction.org/2026/" target="_blank" rel="noopener noreferrer">ACM/IEEE International Conference on Human-Robot Interaction</a>, which is closest to her field of research.</p>
<p>Matarić credits IEEE Life Fellow <a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10896982" target="_blank" rel="noopener noreferrer">George Bekey</a>, the founding editor in chief of the <a href="https://dl.acm.org/journal/tor" target="_blank" rel="noopener noreferrer"><em>IEEE Transactions on Robotics</em></a>, for recruiting her for the USC engineering faculty position. He knew of her work through her graduate advisor Brooks, who published a paper in the journal that introduced reactive control and the subsumption architecture, which became the foundation of a new way to control robots. It is his <a href="https://ieeexplore.ieee.org/document/108703" target="_blank" rel="noopener noreferrer">most cited paper</a>. Bekey, who was editor in chief at the time, helped guide Brooks through the challenging review process. Matarić joined Brooks’s lab at MIT two years after its publication, and her work on Toto built on that foundation.</p>
<p>“Joining a society has an impact, and it can be personal,” she says. “That’s why I recommend my students join the organization—because it’s important to get out there and get connected.”</p>
<p>The post <a href="https://ctorobotics.com/the-usc-professor-who-pioneered-socially-assistive-robotics/">The USC Professor Who Pioneered Socially Assistive Robotics</a> appeared first on <a href="https://ctorobotics.com">CTO ROBOTICS Media</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://ctorobotics.com/the-usc-professor-who-pioneered-socially-assistive-robotics/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>NEURA Robotics and Amazon Web Services (AWS) Announce Strategic AI Robotics Partnership</title>
		<link>https://ctorobotics.com/neura-robotics-and-amazon-web-services-aws-announce-strategic-ai-robotics-partnership/</link>
					<comments>https://ctorobotics.com/neura-robotics-and-amazon-web-services-aws-announce-strategic-ai-robotics-partnership/#respond</comments>
		
		<dc:creator><![CDATA[CTO Robotics]]></dc:creator>
		<pubDate>Tue, 21 Apr 2026 14:03:30 +0000</pubDate>
				<category><![CDATA[Humanoid Robots]]></category>
		<category><![CDATA[Robotics]]></category>
		<guid isPermaLink="false">https://ctorobotics.com/?p=2449</guid>

					<description><![CDATA[<p><img width="150" height="150" src="https://ctorobotics.com/wp-content/uploads/2026/04/AWS_NEURA_20.04.26_05_v2-scaled-1-150x150.webp" class="attachment-thumbnail size-thumbnail wp-post-image" alt="" decoding="async" loading="lazy" />NEURA Robotics and Amazon Web Services have announced a strategic agreement to accelerate Physical AI at scale, bringing cognitive robots that can perceive, reason, and act alongside humans from development into global deployment. The collaboration combines NEURA’s cognitive robotics platform with AWS’s cloud and AI infrastructure to help train, validate, and deploy the next generation<br />
</p>
<p>The post <a href="https://ctorobotics.com/neura-robotics-and-amazon-web-services-aws-announce-strategic-ai-robotics-partnership/">NEURA Robotics and Amazon Web Services (AWS) Announce Strategic AI Robotics Partnership</a> appeared first on <a href="https://ctorobotics.com">CTO ROBOTICS Media</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img width="150" height="150" src="https://ctorobotics.com/wp-content/uploads/2026/04/AWS_NEURA_20.04.26_05_v2-scaled-1-150x150.webp" class="attachment-thumbnail size-thumbnail wp-post-image" alt="" decoding="async" loading="lazy" /><p><a href="https://humanoidroboticstechnology.com/company/neura-robotics/" target="_blank" rel="noreferrer noopener">NEURA Robotics</a> and Amazon Web Services have announced a strategic agreement to accelerate Physical AI at scale, bringing cognitive robots that can perceive, reason, and act alongside humans from development into global deployment. The collaboration combines NEURA’s cognitive robotics platform with AWS’s cloud and AI infrastructure to help train, validate, and deploy the next generation of intelligent robots.</p>
<p>The collaboration tackles one of the most critical challenges of Physical AI: while large language models benefit from trillions of data points drawn from the internet, robots have only a fraction of that – making real-world training data the key to unlocking the next era of AI. NEURA’s intelligence layer enables robots to perceive, adapt, and work reliably in the real world and, together with AWS’s global cloud infrastructure, forms the full stack to scale Physical AI at speed. The agreement spans three areas:</p>
<ul class="wp-block-list">
<li>Cloud infrastructure: AWS will provide the cloud backbone for the Neuraverse, enabling Physical AI training, data processing, and intelligence sharing across robot fleets.</li>
<li>AI development: NEURA Gym training environments – purpose-built facilities where robots practice complex tasks in controlled settings, supported by high-fidelity simulation – will integrate with Amazon SageMaker to accelerate joint AI training pipelines across NEURA and partner use cases.</li>
<li>Real-world validation: NEURA will join the AWS Partner Network to expand go-to-market activities, while Amazon will explore deploying NEURA’s robotic systems in select fulfilment centers, providing real-world data and use cases to accelerate the development of new robotic capabilities for logistics and warehouse operations.</li>
</ul>
<h2 class="wp-block-heading"><strong>Scaling Physical AI Requires Real-World Data, Trusted Infrastructure, and Global Reach</strong></h2>
<p>Building cognitive robots that can perceive, reason, and act reliably alongside humans requires more than hardware. It demands continuous learning loops between simulation and reality, robust cloud infrastructure, and real-world environments where intelligence can be validated under production conditions.</p>
<p>NEURA’s partnership with AWS is rooted in the conviction that leading in Physical AI requires not just raw compute, but advanced training infrastructure and a managed service network to make AI training faster, more efficient, and reproducible across robotic platforms and fleets. AWS’s position as the world’s leading cloud provider, with unmatched compute availability and a comprehensive portfolio of AI and machine learning services, makes it NEURA’s partner of choice. By running the Neuraverse on AWS and connecting NEURA Gym to AWS services, NEURA can accelerate how robotic intelligence is trained, tested, and continuously improved across customer, partner, and internal use cases.</p>
<p>For NEURA, this partnership is part of a broader mission: building a global Physical AI ecosystem where every breakthrough can benefit all. By combining European robotics innovation with Amazon’s global infrastructure and operational reach, the collaboration creates a foundation to bring Physical AI from vision into real-world scale.</p>
<p>David Reger, CEO and founder of NEURA Robotics, commented: “Physical AI will only reach its full potential if intelligence can be trained, validated, and continuously improved in the real world. With AWS, we gain the infrastructure to scale the Neuraverse globally. With Amazon, we have the opportunity to bring Physical AI into one of the most advanced operational environments in the world. This is how Physical AI moves from vision to global reality, from Europe, together for the world.”</p>
<p>Jason Bennett, VP and Global Head of Startups and Venture Capital at AWS, stated: “NEURA represents exactly the kind of transformative thinking required to unlock the full potential of Physical AI. Their open platform approach addresses the industry’s most critical challenge, the data gap, and we’re excited to support their mission with AWS’s scalable cloud infrastructure. As NEURA scales production, AWS will provide the reliable, global foundation needed to power the Neuraverse and enable real-time intelligence sharing across its entire fleet.”</p>
<h2 class="wp-block-heading"><strong>Building the Infrastructure for a Growing Global Robotics Ecosystem</strong></h2>
<p>The collaboration with AWS and Amazon marks another milestone in NEURA’s growing ecosystem of global technology partners, spanning cloud infrastructure, AI, semiconductors, industrial deployment, and data creation. This ecosystem includes four of the world’s 10 largest robotics companies, such as Kawasaki, alongside industrial leaders including Schaeffler, Bosch, and Qualcomm Technologies. Together, these partnerships form the foundation for a new robotics ecosystem in which robots learn faster, scale more reliably, and create value across industries, with the shared goal of enabling millions of cognitive robots by 2030.</p>
<p>Source: NEURA Robotics</p>
<p>The post <a href="https://ctorobotics.com/neura-robotics-and-amazon-web-services-aws-announce-strategic-ai-robotics-partnership/">NEURA Robotics and Amazon Web Services (AWS) Announce Strategic AI Robotics Partnership</a> appeared first on <a href="https://ctorobotics.com">CTO ROBOTICS Media</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://ctorobotics.com/neura-robotics-and-amazon-web-services-aws-announce-strategic-ai-robotics-partnership/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Siemens and Humanoid Deploy Humanoids in Industrial Operations With NVIDIA</title>
		<link>https://ctorobotics.com/siemens-and-humanoid-deploy-humanoids-in-industrial-operations-with-nvidia/</link>
					<comments>https://ctorobotics.com/siemens-and-humanoid-deploy-humanoids-in-industrial-operations-with-nvidia/#respond</comments>
		
		<dc:creator><![CDATA[CTO Robotics]]></dc:creator>
		<pubDate>Mon, 20 Apr 2026 22:51:11 +0000</pubDate>
				<category><![CDATA[Humanoid Robots]]></category>
		<category><![CDATA[Robotics]]></category>
		<guid isPermaLink="false">https://ctorobotics.com/?p=2407</guid>

					<description><![CDATA[<p><img width="150" height="150" src="https://ctorobotics.com/wp-content/uploads/2026/04/siemen-humanoid-deployment-150x150.jpg" class="attachment-thumbnail size-thumbnail wp-post-image" alt="" decoding="async" loading="lazy" />Siemens and Humanoid have announced a landmark milestone in the journey to bring physical AI from vision to industrial reality. Humanoid’s HMND 01 wheeled Alpha humanoid robot, built using the NVIDIA physical AI stack, has been successfully tested in operations at Siemens’ electronics factory in Erlangen, Germany, performing autonomous logistics tasks. This builds on the Siemens<br />
</p>
<p>The post <a href="https://ctorobotics.com/siemens-and-humanoid-deploy-humanoids-in-industrial-operations-with-nvidia/">Siemens and Humanoid Deploy Humanoids in Industrial Operations With NVIDIA</a> appeared first on <a href="https://ctorobotics.com">CTO ROBOTICS Media</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img width="150" height="150" src="https://ctorobotics.com/wp-content/uploads/2026/04/siemen-humanoid-deployment-150x150.jpg" class="attachment-thumbnail size-thumbnail wp-post-image" alt="" decoding="async" loading="lazy" /><p>Siemens and Humanoid have announced a landmark milestone in the journey to bring physical AI from vision to industrial reality. Humanoid’s HMND 01 wheeled Alpha humanoid robot, built using the NVIDIA physical AI stack, has been successfully tested in operations at Siemens’ electronics factory in Erlangen, Germany, performing autonomous logistics tasks. This builds on the Siemens and NVIDIA strategic partnership, announced at CES, to build the world’s first fully AI-driven, adaptive manufacturing sites.</p>
<p>&nbsp;</p>
<p><img decoding="async" class="aligncenter" src="https://humanoidroboticstechnology.com/wp-content/uploads/2026/04/siemen-humanoid-deployment.jpg" alt="humanoid" /></p>
<h2 class="wp-block-heading"><strong>The Dawn of Physical AI in Manufacturing</strong></h2>
<p>Physical AI – the discipline of training intelligent machines to perceive, reason and act in the physical world – is poised to transform how goods are made. Bridging the gap between AI research and the demands of a real factory requires a high-performing ecosystem: world-class AI compute and simulation, a proven robotics platform, and the deep industrial automation infrastructure to tie it all together.</p>
<p>The HMND 01 Alpha robot was deployed in Siemens’ logistics operations, where it autonomously executed tote-handling tasks – picking, transporting and placing containers for human operators. All target performance metrics were met, including a throughput of 60 tote moves per hour, uptime exceeding 8 hours, and autonomous pick-and-place success rates above 90 percent.</p>
<h2 class="wp-block-heading"><strong>Building the Industrial Backbone with Siemens Xcelerator</strong></h2>
<p>A humanoid robot’s true value is in becoming a fully integrated, collaborative asset on the shop floor. That means real-time data exchange with production systems and other Autonomous Guided Vehicles, synchronized workflows with other machinery and human operators, and adaptive behavior that responds dynamically to changing conditions. Without this deep integration, even the most sophisticated robot remains an isolated asset.</p>
<p>Siemens provides this critical layer through its Siemens Xcelerator portfolio, from a comprehensive digital twin to AI-enabled perception, to integrated control and PLC-robot interfaces, along with fleet management, industrial communication networks and high-performance drives. Together, these technologies form the digital backbone and automation infrastructure that help to ensure humanoid robots operate efficiently and in concert with the broader factory environment. The outcome is a factory-grade model for deploying humanoids in any industrial setting.</p>
<h2 class="wp-block-heading"><strong>Accelerating Intelligence with NVIDIA Libraries, Frameworks and AI Infrastructure</strong></h2>
<p>Humanoid has integrated NVIDIA’s full physical AI stack into the HMND 01 platform, including NVIDIA Jetson Thor for edge compute, NVIDIA Isaac Sim for simulation and NVIDIA Isaac Lab for reinforcement learning and policy training. The result is a dramatic compression of development timelines. Simulation-first hardware design has also enabled the team to optimize actuator selection, joint strength and mass distribution virtually, cutting prototype development from a typical 18–24 months to just 7 months.</p>
<p>“Factories of the future demand robots that can perceive, reason, and adapt autonomously alongside human workers, tackling the labor shortages and operational complexity that traditional automation struggled to handle,” said Deepu Talla, vice president of Robotics and Edge AI at NVIDIA. “With Siemens providing the industrial integration backbone and Humanoid deploying NVIDIA’s full physical AI stack – from simulation-first training to real-time edge inference – this deployment paves the way for humanoid robots meeting real production targets on a live factory floor.”</p>
<h2 class="wp-block-heading"><strong>Humanoid: Building Factory-Grade Humanoids</strong></h2>
<p>Humanoid, a UK-based AI and robotics company, developed the HMND 01 Alpha, a humanoid robot purpose-built for industrial environments. Combining an omnidirectional wheeled mobility platform with advanced manipulation capabilities, powered by KinetIQ, a proprietary AI framework, the HMND 01 is engineered to work in human-centric spaces, adapting to diverse tasks and handling complex actions.</p>
<p>“Our mission is to create humanoid robots that perform not only in controlled lab settings, but also in real-world factory environments, handling meaningful industrial tasks. Our collaboration with Siemens and NVIDIA gives us a powerful advantage by combining NVIDIA’s leading AI infrastructure, simulation tools, and frameworks with Siemens’ deep industrial expertise and integration capabilities,” said Artem Sokolov, CEO and Founder of Humanoid. “Together, we’ve proven that humanoid robots are ready for real-world industrial deployment.”</p>
<p>Source: Humanoid</p>
<p>The post <a href="https://humanoidroboticstechnology.com/industry-news/siemens-and-humanoid-deploy-humanoids-in-industrial-operations-with-nvidia/">Siemens and Humanoid Deploy Humanoids in Industrial Operations With NVIDIA</a> appeared first on <a href="https://humanoidroboticstechnology.com/">Humanoid Robotics Technology</a>.</p>
<p>The post <a href="https://ctorobotics.com/siemens-and-humanoid-deploy-humanoids-in-industrial-operations-with-nvidia/">Siemens and Humanoid Deploy Humanoids in Industrial Operations With NVIDIA</a> appeared first on <a href="https://ctorobotics.com">CTO ROBOTICS Media</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://ctorobotics.com/siemens-and-humanoid-deploy-humanoids-in-industrial-operations-with-nvidia/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Pudu Robotics Announced a Strategic Partnership with Gom Schoonhouden B.V.</title>
		<link>https://ctorobotics.com/pudu-robotics-announced-a-strategic-partnership-with-gom-schoonhouden-b-v/</link>
					<comments>https://ctorobotics.com/pudu-robotics-announced-a-strategic-partnership-with-gom-schoonhouden-b-v/#respond</comments>
		
		<dc:creator><![CDATA[CTO Robotics]]></dc:creator>
		<pubDate>Mon, 20 Apr 2026 22:50:26 +0000</pubDate>
				<category><![CDATA[Humanoid Robots]]></category>
		<category><![CDATA[Robotics]]></category>
		<guid isPermaLink="false">https://ctorobotics.com/?p=2404</guid>

					<description><![CDATA[<p><img width="150" height="150" src="https://ctorobotics.com/wp-content/uploads/2026/04/pudu-robotics-s.Gom-Schoonhouden-B.V.--150x150.jpg" class="attachment-thumbnail size-thumbnail wp-post-image" alt="" decoding="async" loading="lazy" />Pudu Robotics has announced a strategic partnership with Gom Schoonhouden B.V., one of the Netherlands’ premier professional cleaning service providers. The agreement, facilitated by Pudu’s regional partner Fulin Robot Technologie B.V., marks the first deployment of the AI-Native Large Scrubber-Dryer Robot PUDU BG1 Series in Europe. This milestone represents a significant step forward in the adoption of next-generation cleaning automation across<br />
The post Pudu Robotics Announced a Strategic Partnership with Gom Schoonhouden B.V. appeared first on Humanoid Robotics Technology.</p>
<p>The post <a href="https://ctorobotics.com/pudu-robotics-announced-a-strategic-partnership-with-gom-schoonhouden-b-v/">Pudu Robotics Announced a Strategic Partnership with Gom Schoonhouden B.V.</a> appeared first on <a href="https://ctorobotics.com">CTO ROBOTICS Media</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img width="150" height="150" src="https://ctorobotics.com/wp-content/uploads/2026/04/pudu-robotics-s.Gom-Schoonhouden-B.V.--150x150.jpg" class="attachment-thumbnail size-thumbnail wp-post-image" alt="" decoding="async" loading="lazy" /><p>Pudu Robotics has announced a strategic partnership with Gom Schoonhouden B.V., one of the Netherlands’ premier professional cleaning service providers. The agreement, facilitated by Pudu’s regional partner Fulin Robot Technologie B.V., marks the first deployment of the AI-Native Large Scrubber-Dryer Robot PUDU BG1 Series in Europe.</p>
<p>This milestone represents a significant step forward in the adoption of next-generation cleaning automation across the European market and builds on the long-standing collaboration between Gom Schoonhouden B.V., Pudu Robotics, and Fulin Robot Technologie B.V. across multiple projects.</p>
<p><img decoding="async" class="aligncenter" src="https://humanoidroboticstechnology.com/wp-content/uploads/2026/04/pudu-robotics-s.Gom-Schoonhouden-B.V.-.jpg" alt="pudu" /></p>
<h2 class="wp-block-heading"><strong>Leading the AI-Native Cleaning Era in Europe</strong></h2>
<p>Recognized for its operational scale, service quality, and forward-looking approach to innovation, Gom Schoonhouden B.V. continues to set new benchmarks in the cleaning industry. Its decision to become the first adopter of the BG1 Series in Europe reflects Gom’s focus on practical innovation and continuous improvement.</p>
<p>“With this deployment, we are not simply introducing a new machine, but a new way of operating,” said Geoffrey Nouws, Head of Innovation at Gom. “We continuously look for innovations that can elevate both quality and efficiency in large-scale cleaning operations. For us, this deployment is a next step in continuously improving our service delivery. By applying this technology, we can support our clients with more consistent quality and more efficient operations in large-scale environments. Innovation is only valuable when it contributes to better results for our clients.”</p>
<h2 class="wp-block-heading"><strong>Redefining Intelligence: AI Magic Cleaning</strong></h2>
<p>At the core of the PUDU BG1 Series is an AI-native architecture, which integrates perception, decision-making, and execution into a continuous operational loop. Unlike traditional robots that rely on predefined routes and schedules, the BG1 is designed to dynamically respond to real-world environments, enabling more adaptive, efficient, and consistent cleaning outcomes.</p>
<p>This intelligence is powered by Pudu’s proprietary AI Magic Cleaning system, which transforms cleaning from a task-based process into a proactive, outcome-driven operation. Through real-time mess detection, adaptive cleaning strategies, and automatic optimization of key parameters such as chemical usage and brush pressure, the system ensures high performance across complex, large-scale environments while reducing manual intervention.</p>
<p>In addition, the BG1 Series introduces an industry-first extendable edge cleaning mechanism, enabling the scrubbing brush to reach flush against walls and shelving to eliminate traditional cleaning blind spots. Its integrated sweep-and-scrub system further enhances efficiency by combining dry debris collection and wet scrubbing in a single pass, reducing redundant operations and maximising productivity.</p>
<p>“This partnership is a strong validation of where the cleaning industry is heading,” said Felix Zhang, Founder and CEO of Pudu Robotics. “AI-native robotics is not just about improving efficiency, it’s about fundamentally redefining how work gets done. With the BG1 Series and our AI Magic Cleaning system, we are enabling machines to move beyond execution to true understanding and decision-making. We’re excited to work with forward-thinking partners like Gom to bring this new operational paradigm into real-world environments at scale.”</p>
<h2 class="wp-block-heading"><strong>A Proven Leader in Global Growth</strong></h2>
<p>The partnership also highlights Pudu Robotics’ accelerating growth in the global cleaning robotics sector. According to data from Frost &amp; Sullivan’s “Market Research on Global Commercial Service Robotics (2023),” Pudu Robotics was recognised as the global leader, commanding a 23% market share—the highest in the industry. This leadership is further evidenced by Pudu’s massive operational scale, with over 120,000 robots shipped worldwide to date and a remarkable 100% year-over-year revenue growth in 2025. Today, cleaning robotics has emerged as the company’s primary growth engine, now accounting for over 70% of its total revenue.</p>
<p>Among its flagship products, the PUDU CC1 has surpassed 20,000 units in cumulative global shipments, with 60% deployed across Europe and North America—markets known for their stringent requirements in reliability, compliance, and service standards.</p>
<h2 class="wp-block-heading"><strong>Setting a New Standard for the Industry</strong></h2>
<p>Through this collaboration, Pudu Robotics, Gom Schoonhouden B.V., and Fulin Robot Technologie B.V. will continue to explore scalable, intelligent solutions for commercial cleaning, setting new benchmarks for efficiency, consistency, and automation in complex operational environments.</p>
<p>As the industry evolves, this partnership signals a broader transition toward AI-native infrastructure, where cleaning is no longer defined by repetitive tasks, but by intelligent systems capable of continuous optimisation and autonomous decision-making.</p>
<p>Source: Pudu Robotics</p>
<p>The post <a href="https://humanoidroboticstechnology.com/industry-news/at-interclean-amsterdam-pudu-robotics-announced-a-strategic-partnership-with-gom-schoonhouden-b-v/">Pudu Robotics Announced a Strategic Partnership with Gom Schoonhouden B.V.</a> appeared first on <a href="https://humanoidroboticstechnology.com/">Humanoid Robotics Technology</a>.</p>
<p>The post <a href="https://ctorobotics.com/pudu-robotics-announced-a-strategic-partnership-with-gom-schoonhouden-b-v/">Pudu Robotics Announced a Strategic Partnership with Gom Schoonhouden B.V.</a> appeared first on <a href="https://ctorobotics.com">CTO ROBOTICS Media</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://ctorobotics.com/pudu-robotics-announced-a-strategic-partnership-with-gom-schoonhouden-b-v/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>X-Humanoid Announces Tien Kung 3.0 Wins Beijing Robot Warrior Challenge</title>
		<link>https://ctorobotics.com/x-humanoid-announces-tien-kung-3-0-wins-beijing-robot-warrior-challenge/</link>
					<comments>https://ctorobotics.com/x-humanoid-announces-tien-kung-3-0-wins-beijing-robot-warrior-challenge/#respond</comments>
		
		<dc:creator><![CDATA[CTO Robotics]]></dc:creator>
		<pubDate>Mon, 20 Apr 2026 22:49:28 +0000</pubDate>
				<category><![CDATA[Humanoid Robots]]></category>
		<category><![CDATA[Robotics]]></category>
		<guid isPermaLink="false">https://ctorobotics.com/?p=2403</guid>

					<description><![CDATA[<p><img width="150" height="150" src="https://ctorobotics.com/wp-content/uploads/2026/04/tien-kung-3-humanoid-robot-winner-150x150.jpg" class="attachment-thumbnail size-thumbnail wp-post-image" alt="" decoding="async" loading="lazy" />Beijing Innovation Center of Humanoid Robotics (X-Humanoid) entered the competition with Embodied Tien Kung 3.0. The full-size general-purpose humanoid robot completed the course fully autonomously, successfully navigating multiple competition tasks designed around real-world high-risk scenarios, including pendulum traversal, forward progression, barrier breaching, and obstacle clearance. It achieved the highest overall score, becoming the first<br />
The post X-Humanoid Announces Tien Kung 3.0 Wins Beijing Robot Warrior Challenge appeared first on Humanoid Robotics Technology.</p>
<p>The post <a href="https://ctorobotics.com/x-humanoid-announces-tien-kung-3-0-wins-beijing-robot-warrior-challenge/">X-Humanoid Announces Tien Kung 3.0 Wins Beijing Robot Warrior Challenge</a> appeared first on <a href="https://ctorobotics.com">CTO ROBOTICS Media</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img width="150" height="150" src="https://ctorobotics.com/wp-content/uploads/2026/04/tien-kung-3-humanoid-robot-winner-150x150.jpg" class="attachment-thumbnail size-thumbnail wp-post-image" alt="" decoding="async" loading="lazy" /><p>Beijing Innovation Center of Humanoid Robotics (X-Humanoid) entered the competition with Embodied Tien Kung 3.0. The full-size general-purpose humanoid robot completed the course fully autonomously, successfully navigating multiple competition tasks designed around real-world high-risk scenarios, including pendulum traversal, forward progression, barrier breaching, and obstacle clearance. It achieved the highest overall score, becoming the first winner of a fully autonomous Robot Warrior Challenge and receiving the Warrior Intelligent Mobility Award.</p>
<p><img decoding="async" class="aligncenter" src="https://humanoidroboticstechnology.com/wp-content/uploads/2026/04/tien-kung-3-humanoid-robot-winner-740x572.jpg" alt="tien kung x humanoid" /></p>
<p>Following its victory in the 2025 humanoid robot marathon, X-Humanoid again took the top spot, establishing itself as a dual-title winner. As the only full-size robot in the competition, Embodied Tien Kung 3.0 outperformed more agile small humanoid robots, marking a critical leap from basic locomotion to real-world operational capability.</p>
<p>During the event, X-Humanoid not only fielded four teams of its own, but also worked with joint laboratory teams from several universities, including Hunan University and Renmin University of China. Through the open interface provided by X-Humanoid, the teams carried out further development, drawing on Embodied Tien Kung 3.0’s developer accessibility, reduced sim-to-real gap, high performance potential, and ease of tuning. Together, they generated validation data and repeatable development approaches for scaling robotic deployment in post-earthquake recovery, chemical operations, firefighting, and other high-risk scenarios.</p>
<h2 class="wp-block-heading"><strong>A Hard-Core Victory in Full Autonomy: From Passive Execution to Active Decision-Making</strong></h2>
<p>The competition course closely mirrored real disaster environments, including unstructured terrain, dynamic obstacles, fine manipulation, and high-impact motion challenges. Most participating robots still required manual remote assistance, preset routes, or on-site intervention. Embodied Tien Kung 3.0, by contrast, operated fully autonomously, powered by its proprietary “Wise KaiWu” embodied intelligence platform, and achieved closed-loop operation across perception, planning, control, and fault recovery, with no human intervention, no remote control, and no preset scripts.</p>
<h2 class="wp-block-heading"><strong>Four Core Technologies: The Embodied-Intelligence Strength Behind the Championship</strong></h2>
<h3 class="wp-block-heading">Embodied Hierarchical Control Architecture: Deep Integration of Cognitive Decision-Making and Motion Execution</h3>
<p>Based on the “Wise KaiWu” platform’s coordinated architecture, the high-level autonomy stack is responsible for environmental understanding and task decision-making, while the low-level control layer handles motion generation and real-time execution. Through high-speed, low-latency communication, the two operate in a closed loop, translating high-level decisions into stable, precise, human-like full-body motion. This enables fully autonomous, continuous, and robust operation in unstructured, high-disturbance, multi-task environments, eliminating reliance on pre-programmed behaviours and human intervention, and enabling real-time translation of decision-making into precise, goal-directed motion.</p>
<h3 class="wp-block-heading"><strong>Multimodal Terrain Perception and Environmental Understanding: Centimeter-Level Modeling, Millisecond-Level Response</strong></h3>
<p>Faced with complex environments, Embodied Tien Kung 3.0 uses an end-to-end perception-to-motion model, integrating multimodal sensor input and directly learning motion decision-making and control strategies from raw observations, enabling a unified pipeline from perception to action.</p>
<h3 class="wp-block-heading"><strong>Precise Foot Placement and Integrated Autonomous Decision-Making and Planning: Global Optimization with Step-by-Step Control</strong></h3>
<p>Based on an integrated architecture combining global semantic path planning with local foot placement optimization, the robot manages both path-level decisions and real-time step execution, coordinating body posture and joint motion under complex constraints to ensure stable and continuous traversal in narrow passages and dynamic environments.</p>
<h3 class="wp-block-heading"><strong>Highly Dynamic Full-Body Motion Control: Stable, Continuous, and Highly Resistant to Disturbance</strong></h3>
<p>With an integrated stand-walk-run control framework driven by reinforcement learning and imitation techniques, Embodied Tien Kung 3.0 delivers smooth, stable movement across continuous obstacles, with strong disturbance resistance and posture recovery. The team was also among the first in the industry to achieve highly dynamic full-body motion control during physical interaction, enabling the robot to maintain coordination and stability under direct contact with the environment, and significantly expanding its operational capability in complex scenarios.</p>
<h2 class="wp-block-heading"><strong>Accelerating Deployment in High-Risk Scenarios: From Competition Champion to Industry Pioneer</strong></h2>
<p>The Robot Warrior Challenge was modeled on real rescue and hazardous-operation scenarios, testing performance in earthquake debris, chemical hazard environments, and collapsed structures. X-Humanoid focuses on fully autonomous systems designed for real-world use, and through open hardware/software interfaces, development tools, and shared code frameworks, it continues to reduce barriers to development and deployment. This breakthrough extends the use of these systems into emergency response and hazardous operations, and it provides mature, transferable solutions for industrial manufacturing, logistics, and commercial services. It supports deployment across a broad range of real-world environments and use cases, accelerating the transition of embodied intelligence from controlled testing to practical, real-world operation.</p>
<p>Source: X-Humanoid</p>
<p>The post <a href="https://humanoidroboticstechnology.com/industry-news/x-humanoid-announces-tien-kung-3-0-wins-beijing-robot-warrior-challenge/">X-Humanoid Announces Tien Kung 3.0 Wins Beijing Robot Warrior Challenge</a> appeared first on <a href="https://humanoidroboticstechnology.com/">Humanoid Robotics Technology</a>.</p>
<p>The post <a href="https://ctorobotics.com/x-humanoid-announces-tien-kung-3-0-wins-beijing-robot-warrior-challenge/">X-Humanoid Announces Tien Kung 3.0 Wins Beijing Robot Warrior Challenge</a> appeared first on <a href="https://ctorobotics.com">CTO ROBOTICS Media</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://ctorobotics.com/x-humanoid-announces-tien-kung-3-0-wins-beijing-robot-warrior-challenge/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>At APC 2026, AGIBOT Declares 2026 as “Deployment Year One” </title>
		<link>https://ctorobotics.com/at-apc-2026-agibot-declares-2026-as-deployment-year-one/</link>
					<comments>https://ctorobotics.com/at-apc-2026-agibot-declares-2026-as-deployment-year-one/#respond</comments>
		
		<dc:creator><![CDATA[CTO Robotics]]></dc:creator>
		<pubDate>Mon, 20 Apr 2026 22:33:53 +0000</pubDate>
				<category><![CDATA[Humanoid Robots]]></category>
		<category><![CDATA[Robotics]]></category>
		<guid isPermaLink="false">https://ctorobotics.com/?p=2406</guid>

					<description><![CDATA[<p><img width="150" height="150" src="https://ctorobotics.com/wp-content/uploads/2026/04/edward-deng-founder-chairman-and-ceo-of-agibot-speaks-on-stage-at-apc-2026-pexEbS-150x150.png" class="attachment-thumbnail size-thumbnail wp-post-image" alt="" decoding="async" loading="lazy" />AGIBOT outlined its long-term strategic vision for embodied intelligence at the AGIBOT Partner Conference (APC) 2026, identifying 2026 as the first year of large-scale commercial deployment of physical AI systems that deliver measurable productivity gains. Building on three years of rapid progress from R&#38;D to mass production and now commercialisation, AGIBOT highlighted a fundamental industry<br />
The post At APC 2026, AGIBOT Declares 2026 as “Deployment Year One”  appeared first on Humanoid Robotics Technology.</p>
<p>The post <a href="https://ctorobotics.com/at-apc-2026-agibot-declares-2026-as-deployment-year-one/">At APC 2026, AGIBOT Declares 2026 as “Deployment Year One” </a> appeared first on <a href="https://ctorobotics.com">CTO ROBOTICS Media</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img width="150" height="150" src="https://ctorobotics.com/wp-content/uploads/2026/04/edward-deng-founder-chairman-and-ceo-of-agibot-speaks-on-stage-at-apc-2026-pexEbS-150x150.png" class="attachment-thumbnail size-thumbnail wp-post-image" alt="" decoding="async" loading="lazy" /><p>AGIBOT outlined its long-term strategic vision for embodied intelligence at the AGIBOT Partner Conference (APC) 2026, identifying 2026 as the first year of large-scale commercial deployment of physical AI systems that deliver measurable productivity gains.</p>
<p>Building on three years of rapid progress from R&amp;D to mass production and now commercialisation, AGIBOT highlighted a fundamental industry shift. Artificial intelligence is moving beyond digital cognition into real-world execution. As embodied systems begin to operate reliably in physical environments, the industry is entering a critical phase where scalable deployment and tangible productivity value are becoming achievable. Within this context, AGIBOT is positioning itself as a key architect of the emerging Physical AI ecosystem.</p>
<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" class="wp-image-13156" src="https://humanoidroboticstechnology.com/wp-content/uploads/2026/04/edward-deng-founder-chairman-and-ceo-of-agibot-speaks-on-stage-at-apc-2026.png" alt="edward-deng-" width="740" height="493" /></figure>
<p>“The industry is moving from proving what robots can do to proving what value they can consistently deliver at scale,” said Edward Deng, Founder, Chairman and CEO of AGIBOT. “At AGIBOT, we focus on making embodied intelligence deployable by combining motion, interaction and manipulation intelligence into one system that can operate under real-world constraints. Our goal is not only to build capable robotic machines, but to turn them into reliable units of productivity that can be scaled across industries.”</p>
<h2 class="wp-block-heading"><strong>A Full-Series Portfolio Built on a Unified Physical Intelligence Architecture</strong></h2>
<p>At APC 2026, AGIBOT presented its technological architecture, positioning itself as the only company offering a full-series, full-scenario lineup that spans humanoids, wheeled platforms and multi-form robots across different sizes and applications.</p>
<p>At the core of this portfolio is AGIBOT’s “One Robotic Body with Three Intelligences” framework, an engineering-ready paradigm that integrates motion, interaction and operation intelligence into a unified system. The robotic body functions not only as the physical carrier of intelligence, but also as the interface to the real world, where perception, decision-making and execution must operate under constraints such as force, precision, timing and safety. This close coupling between intelligence and embodiment enables robots to progress from isolated capabilities to full-domain generalisation in complex environments.</p>
<p>Supported by one of the most comprehensive full-stack technology systems in embodied AI, covering both the “brain” and the “body”, and reinforced by industry-leading mass production capabilities, AGIBOT continues to iterate its product lineup while scaling deployment across increasingly complex real-world scenarios.</p>
<h2 class="wp-block-heading"><strong>The XYZ Framework for Embodied Intelligence</strong></h2>
<p>At APC 2026, AGIBOT introduced the XYZ-curve framework to define the development trajectory of the embodied intelligence industry.</p>
<ul class="wp-block-list">
<li>The X curve (2022–2026) represents the value exploration phase, where foundational breakthroughs enabled robots to achieve human-like movement. This stage is defined by a development-state data flywheel, rapid advances in motion intelligence and the stabilisation of robotic hardware for mass production.</li>
<li>The Y curve (2026–2030) marks the deployment growth phase, where the focus shifts from capability validation to large-scale value creation. Productivity begins to approach human levels, driven by a deployment-state data flywheel, the scaling of interaction intelligence and scenario-based deployment of operation intelligence, leading to the emergence of embodied agents capable of executing real tasks.</li>
<li>The Z curve (2030 onward) represents the deployment popularisation phase, where intelligence evolves from quantitative accumulation to qualitative breakthroughs. Generalisation capabilities expand, collective intelligence emerges and robots begin to surpass human productivity in selected domains.</li>
</ul>
<p>With 2026 designated as “Deployment Year One”, AGIBOT is formally transitioning the industry into the era of measurable productivity. This milestone is reinforced by the rollout of its 10,000th robot as of March 2026, demonstrating both manufacturing scale and accelerating real-world adoption. Combined with rapid revenue growth, AGIBOT has become one of the fastest-scaling embodied AI companies globally.</p>
<h2 class="wp-block-heading"><strong>Seven Standardised Solutions Driving Real-World Adoption</strong></h2>
<p>To accelerate commercialisation, AGIBOT introduced seven standardised productivity solutions for high-value industry scenarios: loading and unloading, industrial handling, logistics sorting, guidance and retail assistance, retail service stations, security patrol and industrial and commercial cleaning. Each solution integrates hardware, AI models and data systems into a unified, repeatable deployment package that enables faster rollout cycles and reduces integration complexity. Unlike traditional robotics deployments that rely heavily on customisation, AGIBOT’s approach prioritises modularity, scalability and measurable ROI.</p>
<p>Supported by real-world deployments across manufacturing, logistics, retail and public infrastructure, these solutions have demonstrated quantifiable impact, including improved efficiency, enhanced precision, reduced labour costs and stronger service capabilities. This solution-driven model represents a key step in moving embodied AI from pilot projects to scalable productivity infrastructure.</p>
<h2 class="wp-block-heading"><strong>Launching AIMA, a Full-Stack Open Architecture for Embodied AI</strong></h2>
<p>Strengthening its role as an ecosystem builder, AGIBOT announced AIMA (AI Machine Architecture), the industry’s first complete open technology system for embodied intelligence. Designed as a “1+3+X” architecture, AIMA includes a unified robot operating system (Link-U OS), three core development platforms (LinkCraft for motion creation, LinkSoul for interaction design and Genie Studio for task development) and an extensible ecosystem layer that supports a wide range of applications. The “X” represents the AGIBOT Embodied Agent Framework, enabling deployment across commercial, industrial and home scenarios while supporting developers and partners.</p>
<p>This full-stack architecture provides an end-to-end toolchain, from low-level system control to high-level application development, significantly reducing the complexity and cost of building embodied AI solutions. Through ongoing open-sourcing and platform expansion, AGIBOT has already attracted a rapidly growing global community of developers and partners, laying the groundwork for scalable ecosystem innovation.</p>
<h2 class="wp-block-heading"><strong>Building a Global Ecosystem for the Next Phase of Productivity</strong></h2>
<p>Over the next five years, AGIBOT plans to invest more than RMB 2 billion to expand its ecosystem, working with leading universities, industry partners and developers to build a globally competitive embodied AI infrastructure. AGIBOT aims to support thousands of partners and cultivate a large-scale developer community, driving both technological innovation and commercial adoption. Looking toward 2030, AGIBOT envisions embodied intelligence reaching widespread adoption, unlocking trillion-scale market potential and enabling robots to become a foundational layer of productivity across industries.</p>
<p>By aligning technology, ecosystem development and commercial deployment, AGIBOT aims to usher in a new era of embodied AI-driven productivity.</p>
<p>Source: AGIBOT</p>
<p>The post <a href="https://humanoidroboticstechnology.com/news/at-apc-2026-agibot-declares-2026-as-deployment-year-one/">At APC 2026, AGIBOT Declares 2026 as “Deployment Year One” </a> appeared first on <a href="https://humanoidroboticstechnology.com/">Humanoid Robotics Technology</a>.</p>
<p>The post <a href="https://ctorobotics.com/at-apc-2026-agibot-declares-2026-as-deployment-year-one/">At APC 2026, AGIBOT Declares 2026 as “Deployment Year One” </a> appeared first on <a href="https://ctorobotics.com">CTO ROBOTICS Media</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://ctorobotics.com/at-apc-2026-agibot-declares-2026-as-deployment-year-one/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AGIBOT Announces New Generation of Embodied AI Robots and Models</title>
		<link>https://ctorobotics.com/agibot-announces-new-generation-of-embodied-ai-robots-and-models/</link>
					<comments>https://ctorobotics.com/agibot-announces-new-generation-of-embodied-ai-robots-and-models/#respond</comments>
		
		<dc:creator><![CDATA[CTO Robotics]]></dc:creator>
		<pubDate>Mon, 20 Apr 2026 22:33:35 +0000</pubDate>
				<category><![CDATA[Humanoid Robots]]></category>
		<category><![CDATA[Robotics]]></category>
		<guid isPermaLink="false">https://ctorobotics.com/?p=2405</guid>

					<description><![CDATA[<p><img width="150" height="150" src="https://ctorobotics.com/wp-content/uploads/2026/04/peng-zhihui-co-founder-president-and-cto-of-agibot-demonstrates-interactive-intelligence-with-agibot-x2-rrzk6G-150x150.jpg" class="attachment-thumbnail size-thumbnail wp-post-image" alt="" decoding="async" loading="lazy" />AGIBOT has announced a new generation of embodied AI products and foundation models at its 2026 Partner Conference, marking a major step toward large-scale real-world deployment of physical AI. The releases centre on the company’s “One Robotic Body, Three Intelligences” full-stack architecture and introduce four new robotic platforms and multiple AI models designed to close<br />
The post AGIBOT Announces New Generation of Embodied AI Robots and Models appeared first on Humanoid Robotics Technology.</p>
<p>The post <a href="https://ctorobotics.com/agibot-announces-new-generation-of-embodied-ai-robots-and-models/">AGIBOT Announces New Generation of Embodied AI Robots and Models</a> appeared first on <a href="https://ctorobotics.com">CTO ROBOTICS Media</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img width="150" height="150" src="https://ctorobotics.com/wp-content/uploads/2026/04/peng-zhihui-co-founder-president-and-cto-of-agibot-demonstrates-interactive-intelligence-with-agibot-x2-rrzk6G-150x150.jpg" class="attachment-thumbnail size-thumbnail wp-post-image" alt="" decoding="async" loading="lazy" /><p>AGIBOT has announced a new generation of embodied AI products and foundation models at its 2026 Partner Conference, marking a major step toward large-scale real-world deployment of physical AI. The releases centre on the company’s “One Robotic Body, Three Intelligences” full-stack architecture and introduce four new robotic platforms and multiple AI models designed to close the gap between advanced intelligence and real-world productivity. According to the company, “embodied AI is rapidly evolving into a new production infrastructure”, and its latest products aim to accelerate the shift from showcasing capabilities to delivering measurable outcomes across industrial, commercial and service environments.</p>
<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" class="wp-image-13157" src="https://humanoidroboticstechnology.com/wp-content/uploads/2026/04/peng-zhihui-co-founder-president-and-cto-of-agibot-demonstrates-interactive-intelligence-with-agibot-x2.jpg" alt="peng-zhihui" width="740" height="493" /></figure>
<p>“Embodied intelligence is no longer a concept, it is becoming a new form of productive infrastructure,” said Peng Zhihui, Co-founder, President and CTO of AGIBOT. “We are moving embodied intelligence from laboratory curiosity to production-line reality, enabling robots to truly integrate into human workflows and create measurable value across major scenarios.”</p>
<h2 class="wp-block-heading">Reliable Bodies as the Foundation of Embodied AI</h2>
<p>AGIBOT introduced five new products designed for diverse real-world scenarios, spanning entertainment, retail, industrial operations and field inspection.</p>
<h2 class="wp-block-heading">AGIBOT A3: High-Performance Customisable Humanoid Platform</h2>
<p>AGIBOT A3 is a high-performance, highly customisable humanoid platform designed for interactive environments. Standing 173 cm tall and weighing 55 kg, it uses lightweight magnesium, titanium and TPU materials to achieve a leading 0.218 kW/kg power-to-weight ratio. With 10-hour endurance, 10-second battery swap, advanced UWB centimetre-level swarm positioning for synchronised 100-robot performances, shoulder tactile sensing and 360-degree multi-array microphones, the A3 supports seamless multi-robot coordination. Its enhanced interaction system makes it suitable for entertainment, education and customer engagement.</p>
<h2 class="wp-block-heading">AGIBOT G2 Air: Lightweight “Human-Machine Collaborative New Paradigm”</h2>
<p>AGIBOT G2 Air is a compact and highly agile single‑arm mobile manipulator designed for light‑duty, human‑in‑the‑loop operations. It offers 7 DOF, a 3 kg payload, a 750–800 mm reach, a sub‑800 mm width and speeds of at least 1.5 m/s. Built for seamless human–robot collaboration, it enhances efficiency and consistency while addressing the cost and quality challenges associated with manual work. Its responsiveness and rapid deployment make it suitable for retail, hospitality, logistics and structured industrial workflows.</p>
<p>AGIBOT G2 Air also integrates task execution and data collection into a unified workflow. Unlike traditional methods that separate manual operations from AI training, it enables real‑time data capture during task execution. Its UMI‑isomorphic layout ensures alignment between egocentric and real‑machine data. With an agile, compact design capable of operating in sub‑800 mm spaces with zero‑radius turning, it provides a clear upgrade path from assisted operation to full autonomy, safeguarding investment as AI capabilities advance.</p>
<h2 class="wp-block-heading">OmniHand 3 Ultra-T: Flagship of the New Omni 3 Series</h2>
<p>OmniHand 3 Ultra-T is the next-generation upgrade of the OmniHand portfolio, delivering human-level dexterity. It features a 22+3 DOF tendon-driven system, a lightweight 500 g design and a 10:1 load-to-weight ratio. With full-hand 3D tactile sensing, an integrated palm camera, sub-0.3 second response time and a wide wrist range, it enables precise manipulation across industrial assembly, domestic tasks and multi-axis operations. Two additional products expand the lineup: the industrial-grade OmniPicker 3 gripper and OmniHand 3 Lite, a ruggedised dexterous hand for high-impact environments.</p>
<h2 class="wp-block-heading">D2 Max: The First All-Terrain Level 3 Autonomous Quadruped Robot</h2>
<p>The D2 Max is AGIBOT’s next-generation flagship quadruped robot and the world’s first all-terrain Level 3 autonomous quadruped. Designed for mission-critical scenarios, it excels in security patrol, industrial inspection, emergency rescue, logistics, agriculture and education, transforming quadrupeds from remote-controlled tools into autonomous intelligent systems.</p>
<h2 class="wp-block-heading">MEgo: Body-Free Data Collection System for Scalable Physical AI</h2>
<p>MEgo is a next-generation body-free data collection system that redefines how physical AI data is generated. By removing reliance on robotic hardware, it enables operators to collect high-quality multimodal data across real-world environments at lower cost and greater scale. According to AGIBOT, it “captures synchronized vision, motion, and tactile data with high precision” and provides an end-to-end pipeline for ready-to-train datasets.</p>
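<p>The synchronized multimodal capture described above can be sketched in a few lines of Python: the toy example below aligns vision, motion and tactile streams recorded at different rates by attaching, to each camera frame, the nearest-timestamp sample from the other two streams. All stream names, rates and payloads here are illustrative assumptions, not MEgo's actual data format.</p>

```python
from bisect import bisect_left
from dataclasses import dataclass

@dataclass
class Sample:
    t: float       # timestamp in seconds
    value: object  # payload: camera frame, body pose, or tactile reading

def nearest(stream, t):
    """Return the sample in `stream` whose timestamp is closest to t.
    `stream` must be sorted by timestamp."""
    i = bisect_left([s.t for s in stream], t)
    candidates = stream[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s.t - t))

def align(vision, motion, tactile):
    """For each vision frame, attach the nearest motion and tactile
    samples, producing one synchronized multimodal record per frame."""
    return [
        {"t": v.t, "frame": v.value,
         "pose": nearest(motion, v.t).value,
         "touch": nearest(tactile, v.t).value}
        for v in vision
    ]

# Toy streams at different rates: 10 Hz vision, 100 Hz motion, 50 Hz tactile.
vision  = [Sample(i / 10, f"frame{i}") for i in range(3)]
motion  = [Sample(i / 100, f"pose{i}") for i in range(30)]
tactile = [Sample(i / 50, f"touch{i}") for i in range(15)]
records = align(vision, motion, tactile)
```

<p>A real pipeline would also handle clock offsets between sensors and interpolate rather than snap to the nearest sample, but the core idea of keying everything off a common timeline is the same.</p>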
<h2 class="wp-block-heading">Unveiling Eight Foundational AI Models Across Three Pillars of Embodied Intelligence</h2>
<p>AGIBOT introduced eight foundational AI models under its “One Robotic Body, Three Intelligences” architecture, covering Locomotion Intelligence, Manipulation Intelligence and Interactive Intelligence. Together, they form a unified Physical AI platform that integrates motion, task execution and human interaction into a closed-loop system.</p>
<h3 class="wp-block-heading">Locomotion Intelligence</h3>
<ul class="wp-block-list">
<li>BFM (Behavioural Foundation Model) enables robots to imitate human movements from a single demonstration or short video, maintaining stability even in noisy environments.</li>
<li>GCFM (Generative Control Foundation Model) converts text, audio or video inputs into natural, context-aware robot motions in real time.</li>
</ul>
<h3 class="wp-block-heading">Manipulation Intelligence</h3>
<ul class="wp-block-list">
<li>AGIBOT WORLD 2026 is an open-source, production-grade real-world dataset collected from industrial, logistics, home, hotel and commercial scenarios.</li>
<li>GO-2 (ViLLA Embodied Foundation Model) bridges planning and execution with Action Chain-of-Thought for consistent long-horizon task performance.</li>
<li>GE-2 (World Action Model) creates interactive virtual worlds for safe, high-speed strategy testing.</li>
<li>Genie Sim 3.0 generates accurate digital twins using natural language for rapid training and near-perfect sim-to-real transfer.</li>
<li>SOP (Real-World Distributed Online Learning System) enables deployed fleets to learn continuously from real operations.</li>
</ul>
<h3 class="wp-block-heading">Interactive Intelligence</h3>
<ul class="wp-block-list">
<li>WITA Omni is the first robot-native end-to-end multimodal interaction model, fusing vision, audio, language and action to deliver context-aware, emotionally intelligent responses.</li>
</ul>
<h2 class="wp-block-heading">Full-Stack Architecture and Open Ecosystem for Scalable Deployment</h2>
<p>AGIBOT is extending its “One Robotic Body, Three Intelligences” architecture into a full-stack ecosystem to support large-scale deployment. The AIMA platform integrates operating systems, interaction frameworks, development tools and deployment platforms.</p>
<p>Core components include:</p>
<ul class="wp-block-list">
<li>Link-U OS: Native operating system for embodied intelligence</li>
<li>LinkSoul Platform: Personality, memory and long-term interaction engine</li>
<li>LinkCraft Platform: No-code environment for motion and behaviour creation</li>
<li>Genie Studio: Full-stack development platform for data collection, training, simulation and deployment</li>
</ul>
<p>These platforms reduce development complexity and enable partners to build and scale applications more efficiently.</p>
<h2 class="wp-block-heading">Advancing Real-World Deployment at Scale</h2>
<p>AGIBOT emphasised that the true inflection point for embodied AI lies in reliable large-scale deployment within real-world workflows. To support this, the company introduced production-ready solutions across industrial handling, logistics sorting, retail services, security inspection and commercial operations. With hundreds of robots already deployed and a growing partner ecosystem, AGIBOT is shifting the industry from standalone robotic systems to outcome-driven solutions.</p>
<p>As embodied AI enters its next phase, AGIBOT’s integrated approach positions it to transform intelligent machines into a scalable productive force.</p>
<p>Source: AGIBOT</p>
<p>The post <a href="https://ctorobotics.com/agibot-announces-new-generation-of-embodied-ai-robots-and-models/">AGIBOT Announces New Generation of Embodied AI Robots and Models</a> appeared first on <a href="https://ctorobotics.com">CTO ROBOTICS Media</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://ctorobotics.com/agibot-announces-new-generation-of-embodied-ai-robots-and-models/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Teleoperating a 22-DoF Sharpa Hand Inside NVIDIA Isaac Lab using MANUS Gloves</title>
		<link>https://ctorobotics.com/teleoperating-a-22-dof-sharpa-hand-inside-nvidia-isaac-lab-using-manus-gloves/</link>
					<comments>https://ctorobotics.com/teleoperating-a-22-dof-sharpa-hand-inside-nvidia-isaac-lab-using-manus-gloves/#respond</comments>
		
		<dc:creator><![CDATA[CTO Robotics]]></dc:creator>
		<pubDate>Sat, 14 Mar 2026 23:28:46 +0000</pubDate>
				<category><![CDATA[AI Tools & Software]]></category>
		<category><![CDATA[Humanoid Robots]]></category>
		<category><![CDATA[Robotics]]></category>
		<category><![CDATA[humanoid]]></category>
		<category><![CDATA[robotics]]></category>
		<guid isPermaLink="false">https://ctorobotics.com/?p=2314</guid>

					<description><![CDATA[<p><img width="150" height="150" src="https://ctorobotics.com/wp-content/uploads/2026/03/manus-sharpa-teleoperation-150x150.png" class="attachment-thumbnail size-thumbnail wp-post-image" alt="" decoding="async" loading="lazy" />Collecting high-quality dexterous manipulation data in simulation demands an input device that can faithfully capture the full range of human...</p>
<p>The post <a href="https://ctorobotics.com/teleoperating-a-22-dof-sharpa-hand-inside-nvidia-isaac-lab-using-manus-gloves/">Teleoperating a 22-DoF Sharpa Hand Inside NVIDIA Isaac Lab using MANUS Gloves</a> appeared first on <a href="https://ctorobotics.com">CTO ROBOTICS Media</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img width="150" height="150" src="https://ctorobotics.com/wp-content/uploads/2026/03/manus-sharpa-teleoperation-150x150.png" class="attachment-thumbnail size-thumbnail wp-post-image" alt="" decoding="async" loading="lazy" /><div style="width: 1280px;" class="wp-video"><video class="wp-video-shortcode" id="video-2314-1" width="1280" height="720" preload="metadata" controls="controls"><source type="video/mp4" src="https://ctorobotics.com/wp-content/uploads/2026/03/Teleoperating-a-22-DoF-Sharpa-Hand-Inside-NVIDIA-Isaac-Lab-using-MANUS-Gloves.mp4?_=1" /><a href="https://ctorobotics.com/wp-content/uploads/2026/03/Teleoperating-a-22-DoF-Sharpa-Hand-Inside-NVIDIA-Isaac-Lab-using-MANUS-Gloves.mp4">https://ctorobotics.com/wp-content/uploads/2026/03/Teleoperating-a-22-DoF-Sharpa-Hand-Inside-NVIDIA-Isaac-Lab-using-MANUS-Gloves.mp4</a></video></div>
<p>Collecting high-quality dexterous manipulation data in simulation demands an input device that can faithfully capture the full range of human hand motion. <a href="https://www.manus-meta.com/products/overview" target="_blank" rel="noreferrer noopener">MANUS gloves</a>, now natively integrated into <a href="https://www.manus-meta.com/blog/manus-gloves-are-natively-supported-in-nvidia-isaac-lab" target="_blank" rel="noreferrer noopener">NVIDIA Isaac Lab 2.3</a>, address this requirement directly. In this demonstration, operators use MANUS gloves to teleoperate the Sharpa Wave, a 22-DoF dexterous robotic hand inside NVIDIA Isaac Lab, translating natural hand motion into real-time robot joint control with millimeter-level fidelity.</p>
<h2 class="wp-block-heading">The Data Quality Bottleneck in Dexterous Manipulation</h2>
<p>Simulation-first robot policy training offers advantages such as lower cost, faster iteration, and safer deployment pipelines. However, the quality of trained policies is bounded by the quality of demonstration data. For dexterous manipulation tasks requiring highly coordinated multi-finger control, most teleoperation input devices fall short. Vision-based hand tracking introduces occlusion errors and latency, while low-DoF controllers cannot capture the nuanced finger kinematics that complex manipulation tasks require.</p>
<h2 class="wp-block-heading">From Human Hand to Robot Policy</h2>
<p><a id="https://humanoidroboticstechnology.com/company/manus/metagloves-pro-haptic/" href="https://humanoidroboticstechnology.com/company/manus/metagloves-pro-haptic/" target="_blank" rel="noreferrer noopener" type="link">MANUS gloves</a> are built to capture the full range of hand motion with millimeter-level precision, remaining stable across extended operation sessions without drift. Natively supported in NVIDIA Isaac Lab 2.3, they stream high-fidelity hand tracking data directly into simulation, eliminating the setup friction that typically slows down data collection pipelines.</p>
<p>In this use case, operators wearing MANUS gloves teleoperate the 1:1 anthropomorphic 22-DoF Sharpa Wave inside Isaac Lab in real time. Hand configurations map directly to robot joint positions, creating an egocentric teleoperation interface that captures the full range of finger kinematics as naturally as possible.</p>
<p>The recorded demonstrations feed directly into Isaac Lab Mimic for augmentation and scaling, then into imitation learning pipelines, all within simulation, before any real-world deployment.</p>
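<p>As a rough illustration of the glove-to-robot mapping described above, the sketch below linearly retargets tracked human joint angles into a robot hand's joint ranges, clamps to mechanical limits, and logs each (input, command) pair into a demonstration buffer of the kind an imitation learning pipeline would consume. The joint names, ranges and 1:1 linear mapping are hypothetical placeholders, not MANUS SDK or Sharpa Wave values.</p>

```python
import math

# Hypothetical per-joint calibration: (human_range, robot_range) in radians.
# These names and limits are illustrative, not real Sharpa Wave parameters.
JOINT_MAP = {
    "index_mcp": ((0.0, math.pi / 2), (0.0, 1.4)),
    "index_pip": ((0.0, math.pi / 2), (0.0, 1.6)),
    "thumb_cmc": ((-0.3, 0.9), (-0.2, 1.0)),
}

def retarget(glove_angles):
    """Linearly rescale each tracked human joint angle into the matching
    robot joint's range, clamping to the robot's mechanical limits."""
    targets = {}
    for joint, ((h_lo, h_hi), (r_lo, r_hi)) in JOINT_MAP.items():
        u = (glove_angles[joint] - h_lo) / (h_hi - h_lo)  # normalize to [0, 1]
        u = min(max(u, 0.0), 1.0)                          # clamp out-of-range input
        targets[joint] = r_lo + u * (r_hi - r_lo)          # rescale to robot range
    return targets

demo_buffer = []  # recorded (glove input, robot command) pairs for training

def step(glove_angles):
    """One teleoperation tick: retarget, log the pair, return the command."""
    cmd = retarget(glove_angles)
    demo_buffer.append({"glove": dict(glove_angles), "command": cmd})
    return cmd

cmd = step({"index_mcp": math.pi / 4, "index_pip": 2.0, "thumb_cmc": 0.3})
```

<p>Note how an over-flexed input (here, 2.0 rad on the PIP joint) saturates at the robot joint's upper limit rather than producing an unreachable command; real retargeting stacks add filtering and per-operator calibration on top of this.</p>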
<h2 class="wp-block-heading">Precision That Manipulation Policies Depend on</h2>
<p>MANUS data gloves capture every finger and micro-movement in real time, delivering the precise, stable, occlusion-free hand tracking that dexterous manipulation research requires, natively integrated into the NVIDIA Isaac Lab workflow. <a href="https://www.manus-meta.com/blog/manus-gloves-are-natively-supported-in-nvidia-isaac-lab" target="_blank" rel="noreferrer noopener">Learn more about the integration</a>.</p>
<h2 style="font-size: 24px; color: #333333; margin-bottom: 15px; text-align: center;">Connect with the CTO ROBOTICS Media Community</h2>
<p style="font-size: 16px; color: #666666; margin-bottom: 30px; text-align: center;">Follow us and join our community channels for the latest insights in AI, Robotics, Smart Manufacturing and Smart Tech.</p>
<p><!-- HORIZONTAL LAYOUT: display: flex; and flex-wrap: wrap; keep the buttons side by side --></p>
<div style="display: flex; justify-content: center; gap: 10px; flex-wrap: wrap; margin-top: 25px;">
<p><!-- LinkedIn --><br />
<a style="display: inline-flex; align-items: center; padding: 10px 15px; border: 1px solid #e0e0e0; border-radius: 8px; text-decoration: none; font-weight: 600; color: #333; background-color: white; transition: all 0.2s;" href="https://www.linkedin.com/company/ctorobotics" target="_blank" rel="noopener"><br />
<img decoding="async" style="margin-right: 8px; width: 24px; height: 24px;" src="https://img.icons8.com/color/48/linkedin.png" alt="LinkedIn Icon" /><br />
<span style="font-size: 16px; white-space: nowrap;">LinkedIn</span><br />
</a></p>
<p><!-- X (Twitter) --><br />
<a style="display: inline-flex; align-items: center; padding: 10px 15px; border: 1px solid #e0e0e0; border-radius: 8px; text-decoration: none; font-weight: 600; color: #333; background-color: white; transition: all 0.2s;" href="https://x.com/ctorobotics" target="_blank" rel="noopener"><br />
<img decoding="async" style="margin-right: 8px; width: 24px; height: 24px;" src="https://img.icons8.com/color/48/twitterx--v1.png" alt="X (Twitter) Icon" /><br />
<span style="font-size: 16px; white-space: nowrap;">X (Twitter)</span><br />
</a></p>
<p><!-- YouTube --><br />
<a style="display: inline-flex; align-items: center; padding: 10px 15px; border: 1px solid #e0e0e0; border-radius: 8px; text-decoration: none; font-weight: 600; color: #333; background-color: white; transition: all 0.2s;" href="https://www.youtube.com/@ctorobotics" target="_blank" rel="noopener"><br />
<img decoding="async" style="margin-right: 8px; width: 24px; height: 24px;" src="https://img.icons8.com/color/48/youtube-play.png" alt="YouTube Icon" /><br />
<span style="font-size: 16px; white-space: nowrap;">YouTube</span><br />
</a></p>
<p><!-- Instagram --><br />
<a style="display: inline-flex; align-items: center; padding: 10px 15px; border: 1px solid #e0e0e0; border-radius: 8px; text-decoration: none; font-weight: 600; color: #333; background-color: white; transition: all 0.2s;" href="https://www.instagram.com/ctorobotics/" target="_blank" rel="noopener"><br />
<img decoding="async" style="margin-right: 8px; width: 24px; height: 24px;" src="https://img.icons8.com/color/48/instagram-new--v1.png" alt="Instagram Icon" /><br />
<span style="font-size: 16px; white-space: nowrap;">Instagram</span><br />
</a></p>
<p><!-- Facebook --><br />
<a style="display: inline-flex; align-items: center; padding: 10px 15px; border: 1px solid #e0e0e0; border-radius: 8px; text-decoration: none; font-weight: 600; color: #333; background-color: white; transition: all 0.2s;" href="https://www.facebook.com/ctorobotics/" target="_blank" rel="noopener"><br />
<img decoding="async" style="margin-right: 8px; width: 24px; height: 24px;" src="https://img.icons8.com/color/48/facebook-new.png" alt="Facebook Icon" /><br />
<span style="font-size: 16px; white-space: nowrap;">Facebook</span><br />
</a></p>
<p><!-- TikTok --><br />
<a style="display: inline-flex; align-items: center; padding: 10px 15px; border: 1px solid #e0e0e0; border-radius: 8px; text-decoration: none; font-weight: 600; color: #333; background-color: white; transition: all 0.2s;" href="https://www.tiktok.com/@ctorobotics" target="_blank" rel="noopener"><br />
<img decoding="async" style="margin-right: 8px; width: 24px; height: 24px;" src="https://img.icons8.com/color/48/tiktok--v1.png" alt="TikTok Icon" /><br />
<span style="font-size: 16px; white-space: nowrap;">TikTok</span><br />
</a></p>
<p><!-- WhatsApp Channel --><br />
<a style="display: inline-flex; align-items: center; padding: 10px 15px; border: 1px solid #e0e0e0; border-radius: 8px; text-decoration: none; font-weight: 600; color: #333; background-color: white; transition: all 0.2s;" href="https://whatsapp.com/channel/0029VawVaJgGOj9rKTAdPn0E" target="_blank" rel="noopener"><br />
<img decoding="async" style="margin-right: 8px; width: 24px; height: 24px;" src="https://img.icons8.com/color/48/whatsapp--v1.png" alt="WhatsApp Icon" /><br />
<span style="font-size: 16px; white-space: nowrap;">WhatsApp Channel</span><br />
</a></p>
<p><!-- Telegram Channel --><br />
<a style="display: inline-flex; align-items: center; padding: 10px 15px; border: 1px solid #e0e0e0; border-radius: 8px; text-decoration: none; font-weight: 600; color: #333; background-color: white; transition: all 0.2s;" href="https://t.me/ctorobotics" target="_blank" rel="noopener"><br />
<img decoding="async" style="margin-right: 8px; width: 24px; height: 24px;" src="https://img.icons8.com/color/48/telegram-app--v1.png" alt="Telegram Icon" /><br />
<span style="font-size: 16px; white-space: nowrap;">Telegram Channel</span></a></p>
</div>
<p>The post <a href="https://ctorobotics.com/teleoperating-a-22-dof-sharpa-hand-inside-nvidia-isaac-lab-using-manus-gloves/">Teleoperating a 22-DoF Sharpa Hand Inside NVIDIA Isaac Lab using MANUS Gloves</a> appeared first on <a href="https://ctorobotics.com">CTO ROBOTICS Media</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://ctorobotics.com/teleoperating-a-22-dof-sharpa-hand-inside-nvidia-isaac-lab-using-manus-gloves/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure url="https://ctorobotics.com/wp-content/uploads/2026/03/Teleoperating-a-22-DoF-Sharpa-Hand-Inside-NVIDIA-Isaac-Lab-using-MANUS-Gloves.mp4" length="5161882" type="video/mp4" />

			</item>
	</channel>
</rss>
