As AI adoption accelerates across business verticals, a market leader in the mobile phone refurbishment and replacement industry approached New Tier Technologies to automatically detect the extent of screen damage on mobile devices. Automating this step would save the substantial time, cost, and effort spent manually inspecting and tagging all inventory arriving at their warehouses.
After reviewing the client's requirements and expectations, New Tier Technologies developed the Blimp app, deploying a Convolutional Neural Network (CNN)-based visual recognition model built on TensorFlow. The solution automates the classification of screen damage on mobile devices and has classified more than 200,000 devices at around 90% accuracy.
Automated visual inspection of the screen from varying angles, ensuring a robust data capture mechanism with minimal effort.
Seamless detection and grading of screen damage across multiple mobile phone models and screen types, with high efficiency.
Trained to operate across the latest mobile device brands; as supplementary information, the algorithm also recognizes the phone's make and model.
New Tier Technologies established a machine vision-based identification system for detecting cracks on mobile screens. A CCD (charge-coupled device) camera captures images of each device's screen on the industrial assembly line. The camera is rotated around the detected device to capture multiple viewing angles, so that cracks invisible from any single angle are not missed. The captured RGB images are then converted to the HSI (hue, saturation, intensity) color space.
Backed by Nanonets capabilities such as NSFW classification, general object detection, and OCR, building image processing and deep learning models becomes hassle-free. The simple, user-friendly platform lets users rapidly build their own custom model by uploading and labeling data, and the Nanonets API then helps identify whether a screen is cracked.
Delivered a next-generation ML model based on Convolutional Neural Networks, built with TensorFlow and trained on 8,000+ images spanning 200+ cracked and undamaged devices.
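The case study does not disclose the production architecture, but a binary cracked-vs-normal classifier of this kind can be sketched in TensorFlow as follows (layer sizes and input resolution are illustrative assumptions):

```python
import tensorflow as tf

def build_crack_classifier(input_shape=(224, 224, 3)):
    """Minimal CNN sketch for binary cracked/normal screen classification.

    Hypothetical architecture; the production model's layers, input size,
    and hyperparameters are not disclosed in the case study.
    """
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Rescaling(1.0 / 255),          # normalize pixel values
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(screen is cracked)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Training would then call `model.fit` on the labeled crack/no-crack image set; a sigmoid output above a chosen threshold flags the device as cracked.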
The resulting TensorFlow model was deployed at three of the customer's warehouses for processing mobile devices; training is now underway to identify cracked tablet screens as well.
The model automated the processing of 80% of incoming inventory, cutting overall processing time by approximately 90% and saving significant effort, cost, and resources.