I recently graduated from the Master of Science in Robotics program at Northwestern University.
I have always been fascinated by how robots can be controlled under different
conditions and how they make our lives more convenient. I am mainly interested in
controlling robots for manufacturing, packaging, mapping, and path planning. While completing my
master's final project, I also developed an interest in machine learning.
I am looking for job and research opportunities at a leading technology company, working
on robotic systems and control problems. I plan to gain more experience manipulating robots and
operating them on real-life problems.
You can check out the projects I have worked on in the My Work section. These projects
are sorted from newest to oldest.
This project is the final project of Embedded Systems in Robotics. Its goal is to control a Baxter robot to stack a fixed number of cups into a tower, built on a table placed in front of Baxter. The source code includes several nodes that operate Baxter with or without computer vision and build a tower out of 3, 6, or 10 cups. More detailed descriptions can be found on the project's GitHub page.

To reduce the difficulty of vision detection and precise manipulation, the project is separated into two tasks. Starting with cups placed randomly in the middle of the table, task 1 grabs the cups and places them in order at each side of the table. Task 2 then grabs the sorted cups and places them back in the middle of the table to stack them into a tower. To avoid collisions between the two arms, the tasks are executed alternately: once one arm finishes placing a cup, it moves away from the middle of the table and leaves the workspace to the other arm. The image below shows how the table is divided into the working area and the sorted area.

For this project, my main job was to control both arms of Baxter so that each could grab a cup at one specified location and place it at another. The locations are either pre-defined or provided by AprilTags, depending on whether the node uses computer vision. The image below illustrates the logic of the robot's behavior. The image on the left below shows Baxter grabbing a sorted cup; the image on the right shows Baxter placing the cup in the workstation.

The movement of Baxter's arms is controlled using the ROS MoveIt package for Baxter, and the grippers are controlled through the Baxter ROS interface. The team mainly uses the Cartesian planning methods provided by MoveIt to control the arms, adding a short settling time between moves.
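The alternating two-arm workflow described above can be sketched in plain Python. This is only an illustrative model, not the project's actual code: `plan_alternating_moves`, the arm names, and the action labels are all hypothetical stand-ins for the MoveIt-driven motions.

```python
# Illustrative sketch (hypothetical names): the sort-then-stack workflow
# in which the two arms take turns in the shared middle workspace.

def plan_alternating_moves(num_cups):
    """Return the sequence of (arm, action) steps for sorting then stacking.

    Only one arm occupies the middle of the table at a time: after an arm
    places a cup, it retreats and leaves the workspace to the other arm.
    """
    steps = []
    # Task 1: move cups from the middle to the sorted area at the sides,
    # alternating arms so the middle workspace is never contested.
    for i in range(num_cups):
        arm = "left" if i % 2 == 0 else "right"
        steps.append((arm, "sort_cup"))
        steps.append((arm, "retreat"))
    # Task 2: bring the sorted cups back to the middle and stack the tower,
    # alternating in the same way.
    for i in range(num_cups):
        arm = "left" if i % 2 == 0 else "right"
        steps.append((arm, "stack_cup"))
        steps.append((arm, "retreat"))
    return steps
```

Each real step would be executed as a Cartesian motion with a short settling time before the next arm enters the workspace.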
As shown in the image below, the node initially falls back to pose-target planning when the output of Cartesian planning is insufficient to complete a task (i.e., when the fraction of the computed path is less than 30%). However, planning with a pose target sometimes results in unexpected behavior. For instance, Baxter may rotate its arm in a full circle even when the task is simply to move a cup horizontally from left to right. Planners may also generate trajectories that pass too close to MoveIt collision objects, leading to collisions in reality. This problem was fixed by enlarging the collision thresholds in MoveIt. However, these thresholds must remain small enough to allow proper collision detection when attaching cup objects inside RViz. Rather than tuning the right thresholds for every parameter, the team decided to generate all of Baxter's movements using Cartesian planning only. The revised code diagram is shown in the image below.
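The original fallback decision can be sketched as follows. This is a hedged illustration, not the project's actual code: `plan_move` is a hypothetical helper, and `group` stands in for a MoveIt `MoveGroupCommander`, whose `compute_cartesian_path(waypoints, eef_step, jump_threshold)` returns a plan together with the fraction of the requested path it covers.

```python
# Illustrative sketch (hypothetical helper): try Cartesian planning first,
# and fall back to pose-target planning only when the computed fraction of
# the requested path falls below the 30% cutoff mentioned above.

FRACTION_THRESHOLD = 0.3  # fraction of the path the Cartesian plan must cover

def plan_move(group, waypoints):
    """Plan one move; `group` mimics a MoveIt MoveGroupCommander."""
    plan, fraction = group.compute_cartesian_path(waypoints, 0.01, 0.0)
    if fraction >= FRACTION_THRESHOLD:
        return plan, "cartesian"
    # Original design: fall back to a pose target. This branch is what
    # occasionally produced surprising trajectories (e.g. full arm
    # rotations), so the revised design drops it and uses Cartesian
    # planning exclusively.
    group.set_pose_target(waypoints[-1])
    return group.plan(), "pose_target"
```

In the revised design, the `pose_target` branch is removed and insufficient Cartesian plans are simply re-planned.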
Tasks completed by other team members include Gazebo simulation, computer vision using AprilTags, and integration of all components. Please refer to the project's GitHub page for more information.