Terry Tham

Introduction to Anomaly Detection with Squidify Vision

Updated: Jan 19, 2022


Suppose you have a task of identifying abnormal rejects from images, where the reject criteria are unknown. With anomaly detection, we want to detect whether or not an image contains anomalies, which makes it a perfect fit for this kind of task.


With Squidify Vision, we have created a completely no-code platform that lets users build their own anomaly detection applications.


In the example we are going to discuss, our task is to build a model that can automatically recognize foreign material, damage, or any other unknown reject that might occur on a die. Such an implementation can be used to quickly filter out obvious and undetermined possible rejects before they enter the next stage of production.


Before getting started, a few resources need to be ready:

  • Squidify Vision System (Licensed with DL Tools)

  • 200–400 images categorized as "good".

In general, the workflow of building an anomaly detection model consists of 4 basic steps:

  • Resources – prepare and import the "good" sample images
  • Dataset – create the dataset and downscale the images
  • Training – train the anomaly detection model
  • Infer – evaluate the trained model on new images

Resources


The data used was captured and processed from a sample of wafers. The dies on the wafer were visually categorized by humans as "good" or "nok", where a "nok" image contains dirt, damage, or other foreign particles. This dataset consists of 721 "good" images and 269 "nok" images. All images are 1208 x 1308 pixels (width x height) with a mono channel. Examples of each group are shown below.
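If you would like to sanity-check the raw data before importing it, a short script like the one below will do. The "good/" and "nok/" folder names and the .png extension are assumptions made for illustration; only the counts and the image size come from the dataset described above.

```python
# Illustrative sketch only: verifies the dataset described above outside of
# Squidify Vision. The "good/" and "nok/" folder layout and the .png extension
# are assumptions, not part of the product.
from pathlib import Path
from PIL import Image

for label in ("good", "nok"):
    files = sorted(Path(label).glob("*.png"))
    print(f"{label}: {len(files)} images")          # expect 721 "good" and 269 "nok"
    if files:
        with Image.open(files[0]) as sample:
            print(f"  size: {sample.size}, mode: {sample.mode}")  # expect (1208, 1308), "L" (mono)
```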

In Squidify Vision, we can add an anomaly framework to the system, and then import "good" images by clicking the "import good image" button.


 

Dataset


All imported sample images will eventually be scaled down to a smaller size for training. On the Dataset tab, the downscaled image size should still be large enough to keep the defects visibly detectable. In this case, we are using 320 x 320 pixels (width x height), with a complexity index of 20.
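Outside of Squidify Vision, an equivalent downscaling step could look roughly like the sketch below. The complexity index is a Squidify-specific setting with no direct counterpart here, and the file name is only an example.

```python
# Illustrative sketch only: Squidify Vision performs this downscaling
# internally when the dataset is built.
import cv2

def downscale(path, size=(320, 320)):
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # mono channel, originally 1208 x 1308
    if image is None:
        raise FileNotFoundError(path)
    # INTER_AREA is a reasonable choice when shrinking, to avoid losing small defects.
    return cv2.resize(image, size, interpolation=cv2.INTER_AREA)

small = downscale("good/die_0001.png")   # hypothetical file name
print(small.shape)                       # (320, 320)
```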

Once the dataset is created successfully, we are ready for the next step: training the model.


 

Training


There is an important hyperparameter to be determined by the user:

  • Epochs: a full iteration over the entire training data is called an epoch. It is beneficial to iterate over the training data several times: too many iterations make the training process exhausting, while too few make the model less reliable. When a larger number of epochs is set, training will stop early at whichever epoch the error reaches the user-defined threshold. A conceptual sketch of this behaviour is shown below.
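For readers curious about what this looks like conceptually, here is a generic training loop with an epoch count and the stop-at-error-threshold behaviour described above. The small reconstruction autoencoder is purely an assumption for illustration; it is not Squidify Vision's internal model.

```python
# Illustrative sketch only: a generic "train for N epochs, stop early once the
# error falls below a user-defined threshold" loop, using a tiny reconstruction
# autoencoder trained on "good" images (a common anomaly detection approach,
# assumed here for illustration).
import torch
import torch.nn as nn

def build_autoencoder():
    # Tiny convolutional autoencoder for 320 x 320 mono images.
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),            # 320 -> 160
        nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),           # 160 -> 80
        nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),  # 80 -> 160
        nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1), nn.Sigmoid() # 160 -> 320
    )

def train(good_images, epochs=50, error_threshold=1e-3):
    """good_images: tensor of shape (N, 1, 320, 320) with values in [0, 1]."""
    model = build_autoencoder()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for epoch in range(epochs):                 # one epoch = one full pass over the data
        optimiser.zero_grad()
        loss = loss_fn(model(good_images), good_images)
        loss.backward()
        optimiser.step()
        print(f"epoch {epoch + 1}: reconstruction error {loss.item():.5f}")
        if loss.item() < error_threshold:       # stop early once the threshold is met
            break
    return model

if __name__ == "__main__":
    train(torch.rand(8, 1, 320, 320), epochs=5)  # dummy data stands in for the "good" images
```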

One click starts the model training process; grab a coffee, and the system will run the training in the background. Once the model is ready, a window notification will indicate that the training process is done.


 

Infer


Once the model is ready, you can browse some "good" or "nok" images to run inference and evaluate the trained model. You can also control the anomaly sensitivity by adjusting the segmentation and classification threshold values (a sketch of how such thresholds can be applied is shown after the list below). Finally, the available results are:

  • Anomaly score

  • Abnormal region

  • Classification: "good" or "nok"

  • Classification ID: "good" as 0; "nok" as 1
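As a rough illustration of how these thresholds can turn a per-pixel anomaly map into the four outputs above, consider the sketch below. The exact logic inside Squidify Vision is not documented here, so the function and its default thresholds are assumptions.

```python
# Illustrative sketch only: one common way to derive the four listed outputs
# from a per-pixel anomaly map; not Squidify Vision's internal logic.
import numpy as np

def interpret(anomaly_map, segmentation_threshold=0.5, classification_threshold=0.5):
    """anomaly_map: 2-D array of per-pixel anomaly values in [0, 1]."""
    anomaly_score = float(anomaly_map.max())                 # overall anomaly score
    abnormal_region = anomaly_map > segmentation_threshold   # boolean mask of abnormal pixels
    class_id = 1 if anomaly_score > classification_threshold else 0
    return {
        "anomaly_score": anomaly_score,
        "abnormal_region": abnormal_region,
        "classification": "nok" if class_id == 1 else "good",
        "classification_id": class_id,                       # "good" = 0, "nok" = 1
    }

print(interpret(np.random.rand(320, 320)))
```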


 

Conclusion


As a result, users can easily build multiple specific yet meaningful deep-learning capabilities into their own applications.
