We provide mobile apps for creating instructions on both Android and iOS. To add augmented reality objects to visual work instructions, we need to recognise the objects visible in the images, and in particular their depth. There are two options for doing so:
Visual information: Here, we use image recognition technology to estimate the type and depth of an object. This is the method we use on all Android devices and on Apple iOS devices that are NOT Pro devices.
Object scan with LiDAR sensors: LiDAR sensors scan the environment almost instantly and are used, for example, in autonomous vehicles. They work by emitting laser pulses that bounce off objects and return to the sensor. The time it takes the light to travel to the object and back is measured, and using the known speed of light, the distance to the object is calculated. This process is repeated rapidly, allowing the sensor to build a detailed 3D map of the environment. This approach is both significantly more accurate and significantly faster than image recognition.
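The time-of-flight principle above can be sketched in a few lines. This is a simplified illustration, not the actual sensor firmware: a real LiDAR unit performs this calculation in hardware, millions of times per second, and the example values below are hypothetical.

```python
# Illustrative sketch of the LiDAR time-of-flight principle:
# distance = (speed of light * round-trip time) / 2,
# halved because the pulse travels to the object and back.
SPEED_OF_LIGHT_M_PER_S = 299_792_458  # metres per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """One-way distance to an object from a laser pulse's round-trip time."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2

# Example: a pulse returning after ~13.34 nanoseconds
# corresponds to an object roughly 2 metres away.
print(round(distance_from_round_trip(13.34e-9), 2))  # → 2.0
```

The tiny timescales involved (nanoseconds per metre) are why dedicated LiDAR hardware achieves the accuracy and speed that camera-based depth estimation cannot match.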
The disadvantages of option 1 are lower accuracy and speed; its advantage is the affordability of Android devices compared with Apple devices.
We strongly recommend Apple Pro devices for recording instructions.