AIPA Lab Projects

AIPA Lab develops experimental projects and prototypes to validate research ideas in real-world settings. These projects demonstrate how theoretical research in Physical AI can be translated into functioning systems that perceive, decide, and act in physical environments. Each project follows a structured workflow from problem definition through simulation, algorithm development, system integration, and physical deployment.

[Figure: Featured AIPA Lab project prototype demonstration]

Vision-Based Robotic Grasping Prototype

This project focuses on developing a vision-guided manipulation system that enables a robotic arm to grasp and manipulate objects with varying shapes, sizes, and poses. The prototype integrates perception, grasp planning, and motion control modules to improve adaptability and robustness when operating in unstructured or semi-structured environments. The system uses depth cameras to acquire three-dimensional information about target objects and employs learned models to predict stable grasp configurations.

The technical stack includes PyTorch for deep learning model training, ROS2 for robot middleware and communication, depth cameras for visual perception, and Isaac Sim for simulation-based development and validation. A feedback-loop architecture links perception, grasp planning, and motion control, enabling continuous refinement of manipulation behaviors.
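The grasp-selection step of such a pipeline can be sketched as follows. This is a minimal illustration, not the lab's implementation: the `GraspCandidate` type, the quality scores (assumed to come from a learned model), and the `plan_grasp` threshold are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class GraspCandidate:
    position: tuple   # (x, y, z) in metres, camera frame
    quality: float    # predicted stability score in [0, 1]

def plan_grasp(candidates, threshold=0.5):
    """Pick the highest-quality candidate above a stability threshold.

    Returning None signals the loop to re-acquire a depth image
    rather than attempt an unstable grasp.
    """
    viable = [c for c in candidates if c.quality >= threshold]
    if not viable:
        return None
    return max(viable, key=lambda c: c.quality)

# Example: two candidates predicted from one depth frame.
candidates = [
    GraspCandidate((0.10, 0.02, 0.30), 0.42),
    GraspCandidate((0.12, 0.00, 0.31), 0.87),
]
best = plan_grasp(candidates)
```

In a closed feedback loop, a `None` result or a failed grasp attempt would trigger re-perception, which is how the continuous refinement described above typically works.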

The project is currently in the prototype phase, with evaluation focused on grasp success rate, cycle time, and robustness under varying object conditions and environmental factors.

Robot Arm Sorting and Pick-and-Place Cell

This project aims to build an automated sorting system capable of classifying and manipulating mixed industrial components using robotic arms. The system combines computer vision for object detection and classification with trajectory planning algorithms that coordinate pick-and-place operations along a production flow. The goal is to achieve reliable sorting with minimal human intervention.

The technical stack includes ROS2 for robotics middleware, OpenCV for image processing and classification, and PLC integration for industrial control system connectivity. The system architecture follows a pipeline from vision acquisition through classification to trajectory planning and execution.
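The classification-to-routing step of this pipeline can be illustrated with a simple sketch. The bin names, area ranges, and the idea of classifying by measured part area are assumptions for illustration; a real cell would classify from OpenCV features or a learned model before routing each part to its place pose.

```python
def classify_part(area_mm2, bins):
    """Return the first bin whose [lo, hi) area range contains the part.

    Parts matching no bin are routed to a reject chute, keeping
    human intervention minimal.
    """
    for name, (lo, hi) in bins.items():
        if lo <= area_mm2 < hi:
            return name
    return "reject"

# Hypothetical component classes for a mixed-parts feed.
BINS = {
    "washer": (50, 200),     # projected area in mm^2
    "nut":    (200, 500),
    "bolt":   (500, 1500),
}

label = classify_part(120, BINS)   # a small part lands in the washer bin
```

Each label would then index a table of pre-taught place poses for the trajectory planner, so adding a new part class means adding one bin entry and one pose.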

The project is in the pilot demonstration stage, with integrated hardware and software testing underway. Evaluation metrics include throughput, classification accuracy, and system reliability over extended operation periods.

Physical AI Digital Twin Platform

The objective of this project is to establish a digital twin platform for robotic workcells that enables simulation-driven learning, validation, and optimization before deployment in real-world scenarios. The platform replicates the physics, kinematics, and sensor characteristics of physical robot systems, allowing researchers to train and test control policies in a virtual environment.

The technical stack includes Isaac Sim for physics-based simulation, Python for algorithm development and scripting, CUDA for GPU-accelerated computation, and a ROS2 bridge for communication between simulation and real hardware. The platform supports domain randomization techniques to improve sim-to-real transfer performance.
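Domain randomization of the kind mentioned above can be sketched in plain Python. The randomized parameters and their ranges here are hypothetical stand-ins; in practice these would be applied through the simulator's APIs (e.g. Isaac Sim's randomization tooling) to lighting, materials, and sensor noise.

```python
import random

def randomize_scene(rng):
    """Sample one randomized variant of the simulated workcell.

    Training a policy across many such variants reduces overfitting
    to any single simulated appearance, which is what narrows the
    sim-to-real gap.
    """
    return {
        "light_intensity": rng.uniform(0.4, 1.6),        # relative to nominal
        "object_friction": rng.uniform(0.3, 1.0),
        "camera_jitter_deg": rng.gauss(0.0, 1.5),        # extrinsic perturbation
        "table_height_m": 0.75 + rng.uniform(-0.02, 0.02),
    }

# A seeded generator makes each training run reproducible.
rng = random.Random(0)
scenes = [randomize_scene(rng) for _ in range(1000)]
```

A policy trained over this distribution of scenes, rather than one fixed scene, is then evaluated against the sim-to-real criteria described below.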

The project is currently in beta, with early sim-to-real evaluation underway. Key evaluation criteria include the measured sim-to-real gap, inference latency, and system stability during transfer from simulation to physical deployment.

Edge AI Deployment for Industrial Robots

This project addresses the deployment of low-latency AI models on embedded computing systems for real-time inference in industrial robotics applications. The work focuses on optimizing model performance and enabling fast decision making at the edge without reliance on remote servers or cloud infrastructure. Edge deployment reduces communication latency and improves system responsiveness for time-critical control tasks.

The technical stack includes TensorRT for inference optimization, NVIDIA Jetson platforms for embedded computing, and ONNX for model interoperability. The system architecture forms a closed loop from sensor input through edge inference to control output and feedback, enabling operation with minimal delay.
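One cycle of that closed loop, with the latency accounting that edge deployment makes critical, can be sketched as below. This is an illustrative skeleton under assumptions: the 10 ms budget and `dummy_model` are placeholders, and a real system would invoke a TensorRT engine (loaded from an ONNX export) in place of the Python callable.

```python
import time

def control_step(inference_fn, sensor_reading, budget_s=0.010):
    """Run one sensor -> edge inference -> control cycle.

    Measures wall-clock inference latency and flags deadline
    overruns, since time-critical control cannot tolerate them.
    """
    start = time.perf_counter()
    command = inference_fn(sensor_reading)   # model runs locally on the edge device
    latency = time.perf_counter() - start
    return command, latency, latency <= budget_s

# Stand-in for an optimized model: a trivial proportional response.
def dummy_model(reading):
    return -0.5 * reading

command, latency, on_time = control_step(dummy_model, 0.2)
```

Logging the per-cycle latency and overrun flag over long runs is one way to produce the inference-latency figures used in the evaluation below.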

The project is in the experimental phase, with ongoing evaluation of inference latency, power consumption, and prediction accuracy under real-time operating conditions.

Learn More About Our Work

These projects represent ongoing research efforts at AIPA Lab. To explore the laboratory infrastructure that supports this work or to discuss potential collaboration opportunities, please visit the related pages.