Human-Robot Interaction (2021)
Human Collision Avoidance based on Global Skeleton Detection
The quick glance (left) is a demo exploring collision avoidance between a human and a manipulator through global skeleton detection. (The camera is not shown in the animation, but it is visible in the picture of the workspace setup on the right.)
We use Nuitrack to detect the human's skeleton keypoints and then model these points as spheres with collision volumes, so that MoveIt! can plan a trajectory that avoids them. If a potential collision appears along the trajectory, the manipulator first stops and then replans a new path around the obstacle.
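The stop-and-replan trigger can be sketched as a simple distance check between trajectory waypoints and the keypoint spheres. This is a minimal, self-contained sketch: the keypoint coordinates, sphere radius, and clearance values are illustrative assumptions, and in the real system the collision checking is handled by MoveIt!'s planning scene rather than by hand.

```python
import math

# Hypothetical skeleton keypoints in the robot base frame (metres) --
# in the demo these come from Nuitrack, here they are made up.
keypoints = [(0.4, 0.1, 0.5), (0.4, 0.1, 0.3), (0.35, 0.2, 0.4)]
SPHERE_RADIUS = 0.10   # collision volume given to each keypoint sphere
CLEARANCE = 0.02       # extra safety margin (assumed value)

def in_collision(waypoint, spheres, radius, clearance):
    """True if a trajectory waypoint enters any keypoint sphere."""
    return any(math.dist(waypoint, c) < radius + clearance for c in spheres)

def check_trajectory(waypoints, spheres):
    """Return the index of the first colliding waypoint, or None if clear."""
    for i, p in enumerate(waypoints):
        if in_collision(p, spheres, SPHERE_RADIUS, CLEARANCE):
            return i
    return None

path = [(0.2, 0.0, 0.4), (0.3, 0.05, 0.4), (0.4, 0.1, 0.42)]
hit = check_trajectory(path, keypoints)
if hit is not None:
    print(f"potential collision at waypoint {hit}: stop and replan")
```

In the actual demo this logic lives inside MoveIt!, which receives the spheres as collision objects in its planning scene and replans automatically.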


GGCNN-guided Grasping on the Plane and for Handover


The first quick glance is a demo of grasping objects fixed on a plane, and the second shows a human-to-robot handover task, in which the robot tries to grasp an object from a human hand while avoiding collision with the person. Both demos are based on GGCNN, a lightweight network that produces grasp pose candidates.
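GGCNN predicts per-pixel grasp quality, angle, and gripper width maps from a depth image; decoding a grasp candidate then amounts to taking the pixel with the highest quality and reading off its angle and width. The sketch below uses random maps as stand-ins for the network output (the real values come from the trained model), so only the decoding step is shown.

```python
import numpy as np

# Synthetic stand-ins for the GGCNN output maps -- in the demo these are
# predicted per-pixel by the network from a depth image.
H, W = 8, 8
rng = np.random.default_rng(0)
quality = rng.random((H, W))                          # grasp quality in [0, 1]
angle = rng.uniform(-np.pi / 2, np.pi / 2, (H, W))    # grasp rotation (rad)
width = rng.uniform(0.0, 0.1, (H, W))                 # gripper opening (m)

def best_grasp(quality, angle, width):
    """Decode the best grasp candidate: the pixel with the highest
    quality, together with its predicted angle and gripper width."""
    v, u = np.unravel_index(np.argmax(quality), quality.shape)
    return (u, v), angle[v, u], width[v, u]

(u, v), theta, w = best_grasp(quality, angle, width)
print(f"grasp at pixel ({u}, {v}), angle {theta:.2f} rad, width {w:.3f} m")
```

The chosen pixel is then back-projected with the depth value and camera intrinsics to obtain a grasp pose in the robot frame, a step omitted here.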
The attached link leads to the full video of the two experiments (not sped up).
PBVS tracking for marker and hand


These two demos implement a servoing interface for the UR3e robot based on PBVS (Position-Based Visual Servoing) instead of MoveIt. The first demo uses ArUco markers to define the target pose, while the second tracks a hand segmented by BiSeNet. To use the network for this task, I trained it myself on the LIP and EgoHands datasets.
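The core of PBVS is a proportional control law that drives the pose error to zero: the commanded velocity is the negative error scaled by a gain. A minimal sketch of the translational part, with an illustrative gain (the real interface also commands angular velocity from the rotation error and streams the result to the UR3e servo interface):

```python
import numpy as np

LAMBDA = 0.5  # proportional gain (illustrative value, tuned per robot)

def pbvs_velocity(t_cur, t_goal, lam=LAMBDA):
    """Classic PBVS proportional law for translation: command a velocity
    that moves the end effector towards the target position."""
    err = np.asarray(t_cur, dtype=float) - np.asarray(t_goal, dtype=float)
    return -lam * err

# Current end-effector position vs. the pose detected from the marker/hand
v = pbvs_velocity([0.4, 0.0, 0.3], [0.5, 0.1, 0.3])
print(v)  # velocity vector pointing from the current towards the target
```

Each control cycle re-detects the target (marker or hand), recomputes the error, and sends the new velocity, so the robot keeps tracking even when the target moves.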
The attached link leads to the full video of the two experiments (not sped up).