As a PhD student, I find time to be my most valuable resource. Unfortunately, there is never enough of it to do all the things I wish I could. Sometimes I get crazy project ideas, but between research, teaching, and family, I never find the time to work on them.
Luckily, every once in a while I share one of these ideas with a student or two and see a spark in their eyes, and once in a blue moon they actually take it on and create something new and exciting that exceeds my expectations.
This post is about one of these unique cases where Dan Nabel and Raz Kochavi created their “Part Validation using Kinect and Augmented Reality” project which received the “best poster” award in the Technion’s mechanical engineering undergraduate projects competition.
You can view the poster here (Hebrew).
*The work was done under the supervision of Prof. Anath Fischer.
Intro (mostly for non-mechanical engineers): The design of a new product encompasses many stages: characterizing the customer's requirements, creating a detailed specification, conceptual design, detailed mechanical design, manufacturing, assembly, and evaluation testing. Between the manufacturing and assembly stages there is an internal stage called "validation". In this stage, the manufacturer (and usually the client too) inspects and measures the manufactured part to check whether it meets the design specification. In a perfect world, every manufacturer would supply every single part exactly as the CAD model and drawing specify. However, we don't live in a perfect world. Validation and inspection take time (and money). Sometimes a person stands and manually measures the part with a caliper, and sometimes a very big, very expensive, very calibrated machine does it automatically for a subset of the manufactured parts. One of the hardest cases to validate is free-form surfaces: their geometry is usually so complex that traditional measuring instruments are simply not enough.
The goal of this work was to create a small, portable and low-cost part validation system for mechanical parts with free-form surfaces.
The challenges are numerous (especially for undergraduates); here are some of the questions Dan and Raz faced (and the answers they came up with):
- How to scan the part? (Microsoft Kinect V2 – a low-cost 3D camera)
- How to separate the scanned part from its surroundings? (Segmentation using an adjustable color threshold on the image)
- How to align the scanned part with the CAD model? (Registration using ICP, with a user interface for initial coarse alignment)
- How to display the computed errors? (Augmented reality – project the error map onto the part using a projector)
The approach consists of two main parallel input processing branches:
- Digital processing
- Physical “measuring”
These branches unite in the registration stage and finally, the error is evaluated and visualized.
In the digital processing branch, we start with the CAD model of the manufactured part. Commercial CAD packages (SolidWorks, Creo, NX, etc.) save their models as parametric representations of the part's surfaces. We then discretize the model by converting it to a triangle mesh (faces and vertices). The conversion algorithm in the CAD software creates a highly non-uniform mesh (a few large triangles on planes, many small triangles on curved surfaces). Since we want a relatively uniform sampling of points on the surface of the model, we remesh and then sample the points. At the end of this branch, we have a digitally sampled point cloud, evaluated on the faces of the original CAD model. We use this point cloud as the reference for the scanned point cloud produced by the physical "measuring" branch.
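To make the sampling step concrete, here is a minimal MATLAB sketch (not the students' actual code) that draws roughly uniform surface samples directly from a mesh, by picking triangles with probability proportional to their area and drawing uniform barycentric coordinates. The file name and sample count are placeholders, and this area-weighted shortcut stands in for the remesh-then-sample step described above:

```matlab
% Roughly uniform sampling of points on a triangle mesh (illustrative sketch).
TR = stlread('part.stl');          % hypothetical file; triangulation object (R2018b+)
V  = TR.Points;                    % vertices, Nx3
F  = TR.ConnectivityList;          % faces, Mx3

% Triangle areas from the cross product of two edges
e1 = V(F(:,2),:) - V(F(:,1),:);
e2 = V(F(:,3),:) - V(F(:,1),:);
A  = 0.5 * sqrt(sum(cross(e1, e2, 2).^2, 2));

% Pick triangles with probability proportional to their area
nSamples = 5000;                   % assumed sample count
idx = randsample(size(F,1), nSamples, true, A);

% Uniform barycentric coordinates inside each chosen triangle
r1 = sqrt(rand(nSamples,1));
r2 = rand(nSamples,1);
refPts = (1 - r1)       .* V(F(idx,1),:) + ...
         r1 .* (1 - r2) .* V(F(idx,2),:) + ...
         r1 .* r2       .* V(F(idx,3),:);   % reference point cloud, nSamples x 3
```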
In the physical “measuring” branch we take the manufactured part and scan it using a Kinect V2. The Kinect is a low-cost but rather accurate 3D camera. It produces two images – an RGB image (just like a regular camera) and a depth image. Each pixel in the depth image contains a value proportional to the distance of that point in space from the camera. Using the camera parameters, the two images are aligned and a 3D point cloud is generated, containing the XYZ coordinates of a list of points. The next problem is that the camera “doesn’t know” that we only want points on the part, so it “gives us” points both on the part and around it. To discard the background points, we place the part in an area with a distinct background color (for example, white) and then discard the white points. In practice, we allowed several different background colors with an adjustable threshold. At the end of this branch, we get a measured point cloud sampled on the manufactured part.
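A minimal MATLAB sketch of the color-based segmentation might look as follows, assuming ptCloud is a colored pointCloud already grabbed from the Kinect (e.g. via pcfromkinect from the toolbox Kinect support); the background color and threshold values are placeholders:

```matlab
% Background removal by color threshold (illustrative sketch, not the project's code).
bgColor   = [255 255 255];    % assumed background color (white)
threshold = 60;               % adjustable: minimum RGB distance from the background

xyz = reshape(double(ptCloud.Location), [], 3);   % flatten organized clouds to Nx3
rgb = reshape(double(ptCloud.Color),    [], 3);

distToBg = sqrt(sum((rgb - bgColor).^2, 2));          % color distance to the background
keep = distToBg > threshold & all(isfinite(xyz), 2);  % drop background and invalid depth

partCloud = pointCloud(xyz(keep,:), 'Color', uint8(rgb(keep,:)));
pcshow(partCloud); title('Segmented part');
```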
Next, we take the two point clouds and align them using a two-stage registration process. First, we perform coarse registration using user-specified matching points. Second, we run a variation of the well-known Iterative Closest Point (ICP) algorithm for the final fine alignment.
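Here is a minimal MATLAB sketch of such a two-stage pipeline, assuming P and Q hold the user-picked corresponding points (scan and CAD, Kx3 each) and refPts holds the sampled CAD points; pcregistericp is the Computer Vision Toolbox's stock ICP, not necessarily the exact variant used in the project:

```matlab
% Two-stage registration (illustrative sketch).
% --- Coarse: closed-form rigid fit to user-picked correspondences (Kabsch) ---
muP = mean(P, 1);  muQ = mean(Q, 1);
[U, ~, W] = svd((P - muP)' * (Q - muQ));       % 3x3 cross-covariance of the picks
R = W * diag([1 1 sign(det(W*U'))]) * U';      % rotation, guarded against reflection
t = muQ - muP * R';                            % translation

coarseCloud = pointCloud(partCloud.Location * R' + t);

% --- Fine: ICP refinement against the sampled CAD points ---
refCloud    = pointCloud(refPts);
tformFine   = pcregistericp(coarseCloud, refCloud);
scanAligned = pctransform(coarseCloud, tformFine);
```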
Once the two point clouds are aligned, we compute the distance from each point in one cloud to its closest point in the other. To do this efficiently we use a k-nearest-neighbor search (K = 1 in this case) backed by a k-d tree data structure. If there are no manufacturing errors, all of the computed distances should be zero (or very small), but if there is a defect, the distance will spike. For visualization purposes, we map all of the distances to a color map (from blue = accurate to red = high error). Next, we project these points onto a plane and display them on screen. At this point we realized that just looking at it on a screen is simply not enough anymore (in a world where augmented reality is a great buzzword), so we connected a projector and projected the error map onto the part itself (I still want to get an Oculus Rift or a Microsoft HoloLens for it). Now we could really see the errors on the part.
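The distance computation and coloring can be sketched in MATLAB as follows (knnsearch from the Statistics and Machine Learning Toolbox builds a k-d tree under the hood; the 99th-percentile clipping is my own addition to keep a single outlier from washing out the color scale):

```matlab
% Error-map computation and coloring (illustrative sketch).
scanPts = scanAligned.Location;

% 1-nearest-neighbor distances from each scan point to the CAD samples
[~, d] = knnsearch(refPts, scanPts);       % k-d tree based search, K = 1

% Map distances to colors: blue = accurate, red = high error
dClamped = min(d, prctile(d, 99));         % clip outliers for a readable scale
maxd   = max(dClamped);
bin    = 1 + round(255 * dClamped / max(maxd, eps));
cmap   = jet(256);
colors = uint8(255 * cmap(bin, :));

errCloud = pointCloud(scanPts, 'Color', colors);
pcshow(errCloud); title('Error map (blue = accurate, red = high error)');
```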
The approach is summarized visually in the flowchart below (click on it to see the full-size version). For further details, you can read their full report here (Hebrew).
In the image below you can see the visualization results on a test part. We placed a big chunk of Plasticine on the part (right); the projected visualization is shown on the left. The system shows the expected high errors on the Plasticine. (Note that even though the part is shiny and red, the projection is clearly visible.)
Remember how a picture is worth a thousand words? So how many words is a demo worth?
Demo: The video below shows each of the stages of the solution (prototype)
Disclaimer: I must say that Dan and Raz built a great prototype, but, as with any other system, there are some drawbacks. We all agreed that the overall accuracy of the system can be improved. Each of the steps contributes some inaccuracy, starting from the Kinect itself, which has limited depth resolution, and ending with the registration, which may be inaccurate (especially in scenarios of high deformation). In addition, when projecting the error map onto the part, some manual alignment is performed. All of these issues are solvable, but when squeezed into an undergraduate project we just didn't get to them (future work, anyone?).
You can download their source code here. It contains MATLAB files and some sample STL files (the package also includes its dependencies, such as Dragzoom).
In summary, I think Dan and Raz did a great job. They created a new low-cost, portable system for mechanical part validation. Most importantly, they learned a lot in the process and had some fun along the way – I know I did.