In today’s era of artificial intelligence (AI), the availability of data has grown significantly, along with the ability to process it on advanced hardware. These processing units now make it possible to incorporate AI and computer vision into embedded systems and devices, using intelligent algorithms to interpret images and videos. IRIDA Labs puts these technologies to use to bridge the gap between a camera and the human eye. By integrating AI into devices and equipping them with visual perception, the company enables the automatic extraction and analysis of useful information from images and videos in real time. The innovative embedded vision software that IRIDA Labs develops for Industry 4.0 and IoT applications is highly efficient in terms of power consumption, memory, and processing speed, and can be deployed in mobile devices, action cameras, drones, surveillance systems, and automobiles.
“Our vast experience in the field of AI and embedded vision, along with multiple collaborations with big companies around the world, allows us to develop impactful tools for Industry 4.0 and the IoT era,” remarks Vassilis Tsagaris, CEO of IRIDA Labs. One of these tools—AEye4—integrates AI with embedded vision and heterogeneous computing techniques to address complex tasks such as quality inspection, classification, and defect detection. AEye4 provides a human-like workflow consisting of distinct ‘training’ and ‘execution’ phases, delivering specialized, flexible visual inspection at low cost, with all the repeatability and reliability that AI and embedded systems can offer.
“Most of our clients bear the misconception that AI only concerns big companies and are unaware of how it can also enhance their processes,” points out Tsagaris.
IRIDA Labs is bridging the gap between a camera and the human eye by integrating AI into devices and equipping them with visual perception
IRIDA Labs engages the C-level executives of organizations and explains how embedded vision and AI can be incorporated into their systems or processes to disrupt the markets they operate in. “We offer a software library based on deep learning—EVLib—which consists of more than a hundred learning models to help organizations use data efficiently,” adds Christos Theocharatos, the company’s Chief of R&D and Chairman of the BoD. These models are highly optimized to power AI on edge devices such as cameras, IoT devices, and embedded CPUs. EVLib also offers flexibility for specific object detection tasks, including food and product catalog recognition, using case-relevant data.
The effectiveness of these solutions is illustrated by a project IRIDA Labs managed for a multinational company aiming to build an entirely new revenue stream based on AI and computer vision. IRIDA Labs devised a way to integrate embedded vision into the client’s processes to detect and track their products in a real work environment. Combined with the client’s deep knowledge and understanding of the market, this embedded vision capability enabled the multinational company to provide disruptive services to its customers.
IRIDA Labs also assists clients in expanding their AI and computer vision expertise into data gathering, data augmentation, model selection, model training, and data optimization to fully exploit the capabilities of embedded vision. This holistic approach of providing a complete AI processing chain is what makes IRIDA Labs unique. “We are now working on consolidating all of the aforementioned fields into a unified framework,” mentions Nikos Fragoulis, CTO of the company. With the rising need to employ AI across industries, IRIDA Labs plans to invest further in disruptive solutions and tools for optimizing and democratizing embedded vision and AI. Furthermore, the company will take part in the IoT Solutions World Congress in Barcelona and VISION 2018 in Stuttgart later this year, and will continue to expand its global reach across the US, Europe, and Asia.