Rico graduated from the University of British Columbia (UBC) in Vancouver, Canada, with a B.A.Sc. degree in Electrical Engineering in 2019. During his undergraduate years, he developed a strong interest in robotics and control systems, along with skills in C, C++, Python, MATLAB, Simulink, circuit design, and many others!

For his graduate studies at Northwestern University, he aspires to become a well-rounded full-stack robotics engineer. He is extending his knowledge in machine learning, dynamics and kinematics, SLAM, and computer vision. He is also honing his skills in Python, ROS, and real-world robotics design.

As part of an independent study class, I implemented a collection of essential motion planning algorithms and necessary utility functions.

As the foundation of planning, I implemented a graph-based Probabilistic Roadmap (PRM). I also implemented a grid map to discretize the search space.
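The core of a PRM can be sketched in a few lines. This is a minimal illustration, not the project's actual code: `is_free` is an assumed user-supplied collision checker, and a full roadmap would also collision-check each edge before adding it.

```python
import random
import math

def build_prm(num_samples, k, is_free, width, height):
    """Build a simple Probabilistic Roadmap: sample collision-free
    points uniformly, then connect each node to its k nearest neighbors."""
    nodes = []
    while len(nodes) < num_samples:
        p = (random.uniform(0, width), random.uniform(0, height))
        if is_free(p):
            nodes.append(p)
    edges = {i: [] for i in range(len(nodes))}
    for i, p in enumerate(nodes):
        # Sort the other nodes by Euclidean distance to p.
        dists = sorted(
            (math.dist(p, q), j) for j, q in enumerate(nodes) if j != i
        )
        for _, j in dists[:k]:
            # A complete PRM would collision-check the edge (p, nodes[j]) here.
            edges[i].append(j)
    return nodes, edges
```

A query is then answered by connecting the start and goal to the roadmap and running a graph search (such as the A* described further down this page) over `edges`.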

For planning algorithms, I implemented (or am implementing): global incremental planning algorithms such as Lifelong Planning A* (LPA*) and D* Lite; potential field algorithms such as the continuous potential field, the Brushfire algorithm, and the Wavefront algorithm; and local planning algorithms such as the Dynamic Window Approach and MPPI (Model Predictive Path Integral control).

This is a research project that I conducted under the supervision of Dr. Ying Wu at Northwestern University, as part of the course EECE432 - Advanced Computer Vision. In this paper, I explored core techniques of each step of Robust Visual Simultaneous Localization and Mapping (SLAM), a group of SLAM techniques that employ static features for localization and mapping.

In the last few decades, visual Simultaneous Localization and Mapping (visual SLAM) and Structure from Motion (SfM) have been hotspots of research in both the computer vision and robotics communities. Many variants of these techniques have started to make an impact in a wide range of applications, including robot navigation and augmented reality. Most SfM and visual SLAM techniques are developed under the assumption of a static environment; methods that cope with this assumption by localizing and mapping against static features can be categorized as Robust Visual SLAM methods. In this survey, we present multiple techniques that are commonly used in the major steps of Robust Visual SLAM. We identify five major steps of Robust Visual SLAM: feature extraction, feature matching, motion segmentation, localization, and 3D reconstruction. In each section, we present the core ideas of every method and discuss their advantages and disadvantages. Finally, future research trends in Robust VSLAM are discussed and the survey is concluded.

In navigation, robotic mapping, and odometry for virtual or augmented reality, simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. Due to its importance, it has been a hot research topic for years, and a large body of research papers has appeared recently.

In this project, I implemented a lightweight simulator that can be used to verify landmark-based SLAM algorithms (EKF, UKF, landmark-based FastSLAM, etc.). All essentials are provided, including simulation of the actual robot pose with Gaussian noise, landmark observations, laser scan data, and visualization of the environment and frames. With this lightweight RViz-based simulator, there is no need to turn on a heavy physics engine to verify your SLAM algorithms! Check out my Github Repo or shoot me a message!

The node structure of the simulator is shown below. Note that you can substitute your own filter node in the circled section.

A robot can draw like an artist! In this project, I drew a custom picture on a piece of letter-size paper using a Rethink Sawyer robot. The drawing process was conducted pixel by pixel, ordered by Depth First Search. The drawing quality is further enhanced by a force control feature, so don't worry about bumpy or slanted surfaces! For more information, check out my Github source code. Also, don't forget to check out my YouTube channel for a live demonstration!

In this project, I developed a 3R robot arm that can draw any shape within its workspace, for example a heart curve. As part of the project, I implemented the Newton-Raphson inverse kinematics algorithm to solve for all joint angles given the marker position at a given time. Other simple continuous shapes, such as a line or a square, can be drawn too.
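The idea behind Newton-Raphson inverse kinematics is to iterate theta <- theta + J^+ * (target - fk(theta)) until the end-effector error vanishes. Below is a minimal sketch on a planar 2R arm for brevity (the project uses a 3R arm); link lengths and function names are illustrative assumptions.

```python
import numpy as np

def fk(theta, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2R arm: end-effector (x, y)."""
    x = l1 * np.cos(theta[0]) + l2 * np.cos(theta[0] + theta[1])
    y = l1 * np.sin(theta[0]) + l2 * np.sin(theta[0] + theta[1])
    return np.array([x, y])

def jacobian(theta, l1=1.0, l2=1.0):
    """Analytic Jacobian d(fk)/d(theta) of the 2R arm."""
    s1, s12 = np.sin(theta[0]), np.sin(theta[0] + theta[1])
    c1, c12 = np.cos(theta[0]), np.cos(theta[0] + theta[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def newton_raphson_ik(target, theta0, tol=1e-6, max_iter=100):
    """Iteratively correct the joint angles with the pseudoinverse
    of the Jacobian until the position error is below tol."""
    theta = np.array(theta0, dtype=float)
    for _ in range(max_iter):
        err = target - fk(theta)
        if np.linalg.norm(err) < tol:
            break
        theta += np.linalg.pinv(jacobian(theta)) @ err
    return theta
```

For a 3R arm drawing in the plane, the pseudoinverse handles the extra (redundant) joint the same way; one would just extend `fk` and `jacobian` with the third link.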

For more details, check out my Github Repo. Also, don't forget to check out a live demo of this project on my YouTube channel!

The Tic Tac Toe artist is a Sawyer-based interactive robot Tic Tac Toe player. This project allows a human to play a game of Tic Tac Toe against the Sawyer robot on a dry-erase board. We programmed the robot arm to execute three objectives: read the current board state, compute its next optimal move, and execute the trajectory needed to draw an X in that board space. At the current stage of development, the robot goes first and plays the perfect Tic Tac Toe strategy, making it nearly impossible to win against the robot. This level of brutality will be neutralized in future work.

Project Setup

Equipment

Sawyer arm

Handheld dry-erase board (with electrical tape as the board lines and a 2 cm x 2 cm brightly-colored piece of tape placed at the top left corner of the top left board space)

2 Dry-erase markers

Eraser

2 Clamps

Table

Hybrid Motion-Force Control

When the robot needs to draw, the force controller reads the force along the vertical (z) axis and issues a desired end-effector z position. The force control problem is thus converted to a position control problem, and the force controller can always make sure the pen (attached to the end-effector gripper) stays in contact with the board. The desired position along the z axis (the axis perpendicular to the board) is set by a simple P controller: z_position = -K * Force

When the robot needs to break contact with the board, the desired z position is set 1.5 cm above the current end-effector position. When the robot needs to move in the air, the desired z position is set to the current end-effector position. Once the desired z position is found, it is combined with the desired (x, y) position from the Trajectory Generator and given as input to the Sawyer robot's built-in inverse kinematics function.
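The three z-position rules above can be condensed into one small function. This is a sketch of the logic only; the gain `k_p` and the mode names are assumptions for illustration, not values from the actual controller.

```python
def desired_z(mode, z_current, force_z, k_p=0.002, lift=0.015):
    """Commanded end-effector z position (meters) for the hybrid controller.

    mode: 'draw' -> press toward the board, z_position = -K * Force
          'lift' -> break contact: 1.5 cm above the current position
          'move' -> travel in the air: hold the current height
    """
    if mode == 'draw':
        return -k_p * force_z
    if mode == 'lift':
        return z_current + lift
    return z_current  # 'move'
```

The returned z is then paired with the (x, y) from the Trajectory Generator before calling the arm's inverse kinematics.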

Trajectory Generation

This part encompasses all motions of the robot except drawing under force control. Along the entire trajectory there should not be any jerks or unsafe moves. The motions of the robot are illustrated in the state machine diagram below.

How the state machine works

The robot starts off at a pre-configured camera position in idle mode. When the AI sends the position of the center of a new cross, the robot moves to the Write Standoff Position, which is the pre-configured center position. Afterwards, the robot generates a trajectory for the cross and moves to its first point. When the robot finishes drawing the cross, it goes back to the camera position.
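The transitions above can be sketched as a small state machine. The state and event names here are my own shorthand for the diagram, not identifiers from the project code.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()       # waiting at the camera position
    STANDOFF = auto()   # moving to the Write Standoff Position
    DRAWING = auto()    # tracing the cross trajectory
    RETURNING = auto()  # going back to the camera position

def step(state, new_cross=False, done_drawing=False, at_camera=False):
    """Advance the drawing state machine by one transition."""
    if state is State.IDLE and new_cross:
        return State.STANDOFF            # AI sent a new cross center
    if state is State.STANDOFF:
        return State.DRAWING             # trajectory generated, start drawing
    if state is State.DRAWING and done_drawing:
        return State.RETURNING           # cross finished
    if state is State.RETURNING and at_camera:
        return State.IDLE                # back at the camera position
    return state
```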

For more information, especially about computer vision and AI of this project, come check out my Github Repo, check out the video below, or shoot me a message!

Fidget Spinner Faceoff with Euler Lagrange Dynamics

In this project I simulated a fundamental physical phenomenon: collision under external force. The goal of this project is to construct an essential building block of a physics simulator, game engine, etc. The project is built on Lagrangian dynamics and its numerical evaluation at each time step.

The simulation models the collision of two fidget spinners on a horizontal plane (gravity perpendicular to the plane). The winning condition of this "fight" is to see which spinner reaches a wall first. One spinner is a rectangular prism and is modelled as a rectangle with all sides 1 m long; the other is a triangular cylinder and is modelled as a triangle with all sides 1 m long. There is friction on the ground.

How to Detect Impact? - This is the Tricky Part of This Project

As shown in the illustration below, there is one frame at each apex of a disk. The x axis always points along a side of the disk towards another apex, and the y axis is set up following the right-hand rule. This frame setup allows for a simple way to detect impacts and return the correct impact condition: when a point enters an object (unless it enters exactly through one of the apexes), the point's y coordinate is positive in all of the body's apex frames.

For example, if point 1 of the triangle enters the rectangle, we can represent point 1 in the frames of the rectangle's apexes a, b, c, d, giving ga1, gb1, gc1, gd1. Then we can easily extract the y coordinate of point 1 in these frames and check whether they are all positive. If so, an impact has occurred.
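The apex-frame test can be written in a few lines of NumPy. This is a minimal sketch under the frame convention described above: each apex frame is given as a 3x3 planar homogeneous transform from that frame to the world, with the x axis along a side.

```python
import numpy as np

def point_inside(point_world, apex_frames):
    """Impact test: the point is inside the body iff its y coordinate
    is positive in every apex frame (excluding apex crossings).

    apex_frames: list of 3x3 homogeneous transforms g (frame -> world),
    one per apex, x axis along a side of the body."""
    p = np.array([point_world[0], point_world[1], 1.0])
    for g in apex_frames:
        p_local = np.linalg.inv(g) @ p  # express the point in this apex frame
        if p_local[1] <= 0:
            return False
    return True
```

On a unit square with frames at its corners and x axes running counterclockwise along the sides, an interior point has positive y in all four frames, while an exterior point fails in at least one.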

OK, Now How to Compute Velocities of the Two Objects After Impact?

The way to determine velocities after impact is the canonical Lagrangian impact update method.

H is the Hamiltonian, L is the Lagrangian, q is the vector of configuration variables, and phi(q) = 0 is the "impact surface" (think of this as a set of equations that all configuration variables must satisfy during impact). The unknowns are q-dot-plus, the derivatives of the configuration variables right after the impact, and the impulse coefficient lambda. Lastly, tau-minus and tau-plus represent the times right before and right after the impact.
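For reference, the impact update can be written in its standard form (this is the textbook statement, reconstructed here rather than copied from the original figure, with phi(q) = 0 the impact surface and lambda the impulse coefficient):

```latex
\left.\frac{\partial L}{\partial \dot{q}}\right|_{\tau^-}^{\tau^+}
  = \lambda \,\frac{\partial \phi}{\partial q},
\qquad
\left. H \right|_{\tau^-}^{\tau^+} = 0
```

The first equation says the generalized momentum jumps only along the normal of the impact surface; the second says the Hamiltonian (energy) is conserved across an elastic impact.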

For calculating the unknowns, H and L follow from the canonical Euler-Lagrange formulation, so they can be easily computed. The other tricky part is calculating the impact surface phi. In order to compute which side a point crosses, say point 1 of the triangle, we need to keep track of the y coordinate of point 1 in all of ga1, gb1, gc1, gd1 from the previous time instance. Once an impact is identified, the y coordinates of point 1 in all the aforementioned frames are positive. If the y coordinate in one particular frame was negative at the previous time instance (there can be only one such frame if the point did not cross through an apex), we know for sure that the point crossed the side that that frame's x axis corresponds to.

For example, if point 1 of the triangle crosses side AD into the rectangle, we can identify the impact by the sign change of point 1's y coordinate in gd1 (yd1), because frame d has its x axis aligned with side AD. Then the impact condition will be

Woody the Wood Inspector is the baby of my undergraduate capstone project at UBC. The purpose of this project is to facilitate the wood scanning process for quality control in the forestry industry. Presently, logs are trucked to dedicated scanning facilities for near-infrared, color-imaging, X-ray, and other types of scans. With Woody, a proof-of-concept wood scanning robot, this laborious process can be simplified. Woody can move parallel to a straight surface while keeping a set distance from it, so onboard scanning devices can conduct high-quality scanning. Due to intellectual property concerns, the code and detailed design of this project are confidential. However, feel free to contact me about the project.

Below is a typical log yard where parallel scanning takes place.

As an overview, the project is composed of:

Traxxas Rustler 2WD 1/10 Scale RC Truck

Intel NUC

PhidgetSpatial Precision 3/3/3 High Resolution

HC-SR04 ultrasonic sensors

Arduino Nano

DCDC-NUC Power Supply

I designed and developed a controller that maintains a constant scanning distance from the log, using linear state-space control techniques. I also implemented a linear Kalman filter to fuse orientation readings from the ultrasonic sensors and the IMU.
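The fusion step of a linear Kalman filter is compact enough to show here. This is a generic sketch of the measurement update, not Woody's confidential code; the state, noise values, and sensor readings below are illustrative assumptions (state = heading, with both sensors measuring heading directly).

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One linear Kalman filter measurement update."""
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)          # corrected state
    P = (np.eye(len(x)) - K @ H) @ P # corrected covariance
    return x, P

# Fuse heading estimates from two sensors in sequence.
x = np.array([0.0])            # prior heading (rad)
P = np.array([[1.0]])          # prior covariance
H = np.array([[1.0]])          # both sensors observe heading directly
x, P = kalman_update(x, P, np.array([0.10]), H, np.array([[0.05]]))  # ultrasonic
x, P = kalman_update(x, P, np.array([0.12]), H, np.array([[0.02]]))  # IMU
```

After both updates the estimate lies between the two measurements, weighted toward the lower-noise IMU, and the covariance shrinks with each fusion.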

On top of our customized mobile platform, the robot can scan along a straight line at a specified distance from the surface. For a live demo, check out this video, and feel free to contact me if you're interested in learning more about this project!

A* is the most popular choice for pathfinding, because it’s fairly flexible and can be used in a wide range of contexts. The secret to its success is that it combines the pieces of information that Dijkstra’s Algorithm uses (favoring vertices that are close to the starting point) and information that Greedy Best-First-Search uses (favoring vertices that are close to the goal). In the standard terminology used when talking about A*, g(n) represents the exact cost of the path from the starting point to any vertex n, and h(n) represents the heuristic estimated cost from vertex n to the goal.

Here, h(n) is large for vertices far from the goal, and g(n) is large for vertices far from the starting point. A* balances the two as it moves from the starting point to the goal. Each time through the main loop, it examines the vertex n with the lowest estimated total cost f(n) = g(n) + h(n). The A* algorithm thus uses both the actual distance from the start and the estimated distance to the goal.

A* will find the shortest path by repeating the following steps:

Get the cell S on the open list which has the lowest f score.

Remove S from the open list and add S to the closed list.

For each cell T among the 8 walkable tiles adjacent to S:

If T is in the closed list, ignore it.

If T is not in the open list, add it and compute its score.

If T is already in the open list, check if the f score is lower when we use the currently generated path to get there. If it is, update its score and update its parent as well.
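The steps above can be sketched in Python. This version uses the common lazy-deletion variant: instead of updating entries in place, it pushes a new heap entry for an improved path and skips stale ones via the closed set. `neighbors` and `h` are assumed user-supplied.

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* search. neighbors(cell) yields (adjacent_cell, step_cost);
    h(cell) is the heuristic estimate to the goal."""
    open_heap = [(h(start), start)]   # entries are (f score, cell)
    g = {start: 0}                    # best known cost from start
    parent = {start: None}
    closed = set()
    while open_heap:
        _, s = heapq.heappop(open_heap)   # cell with lowest f score
        if s == goal:                     # reconstruct path via parents
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        if s in closed:
            continue                      # stale duplicate entry
        closed.add(s)
        for t, cost in neighbors(s):
            if t in closed:
                continue
            g_new = g[s] + cost
            if t not in g or g_new < g[t]:  # found a better path to t
                g[t] = g_new
                parent[t] = s
                heapq.heappush(open_heap, (g_new + h(t), t))
    return None  # no path exists
```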

The heuristic function I use is

A note on online A*: the term online indicates that the map is not known beforehand; only the neighbors of the current node are known, and the rest of the map is unexplored. In such scenarios, where backtracking is prohibited or extremely costly, a fast data structure that keeps the open list ordered and quickly returns the node with the lowest f score is very desirable. In my implementation, I used Python's heap queue for that purpose.
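Using the heap queue for the open list looks like this (node names here are purely illustrative): pushes and pops both cost O(log n), and the entry with the smallest f score is always returned first.

```python
import heapq

# The open list as a binary heap of (f score, node) tuples.
open_list = []
heapq.heappush(open_list, (7.2, 'B'))
heapq.heappush(open_list, (3.5, 'A'))
heapq.heappush(open_list, (5.1, 'C'))

f, node = heapq.heappop(open_list)  # node with the lowest f score
# f == 3.5, node == 'A'
```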