AR SDKs for Best Practices
In 2017, Apple released iOS 11, and the accompanying launch of ARKit was arguably the most seismic event in the history of augmented reality technology. ARKit is a framework that enables brands and developers to design and create unparalleled experiences for compatible iPhone and iPad devices (those equipped with an A9 processor or above). The ARKit SDK functions in the same way as most AR SDKs, blending digital information and 3D objects with the real world, but offers largely unparalleled accessibility in terms of the number of existing devices it supports.
ARKit can run on any device equipped with an Apple A9, A10, or A11 processor and uses VIO (visual-inertial odometry) to track the surrounding environment with a high degree of accuracy.
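The inertial half of VIO can be illustrated with a toy dead-reckoning step in Python. This is a conceptual sketch only, not ARKit's API: real VIO continuously corrects this drifting inertial prediction against tracked camera features, which the sketch omits.

```python
import math

def integrate_pose(pose, accel_body, yaw_rate, dt):
    """One dead-reckoning step: rotate the body-frame acceleration into
    the world frame, then integrate velocity and position.
    pose is (x, y, yaw, vx, vy); accel_body is (ax, ay) in m/s^2."""
    x, y, yaw, vx, vy = pose
    yaw += yaw_rate * dt
    ax = accel_body[0] * math.cos(yaw) - accel_body[1] * math.sin(yaw)
    ay = accel_body[0] * math.sin(yaw) + accel_body[1] * math.cos(yaw)
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt
    return (x, y, yaw, vx, vy)

pose = (0.0, 0.0, 0.0, 0.0, 0.0)
for _ in range(100):                      # 1 s of IMU samples at 100 Hz
    pose = integrate_pose(pose, (1.0, 0.0), 0.0, 0.01)
print(round(pose[0], 2))  # ≈ 0.5 m after accelerating at 1 m/s^2 for 1 s
```

Without the camera correction, small IMU errors compound quickly, which is exactly why VIO pairs the inertial prediction with visual tracking.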
ARKit provides the following functionalities:
SLAM (simultaneous localization and mapping) tracking and sensor fusion
Ambient lighting estimation
Vertical and horizontal plane estimation with basic boundaries
Stable and fast motion tracking
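To give a flavour of what plane estimation involves, here is a deliberately simplified Python sketch that buckets 3D feature points by height and reports well-populated buckets as horizontal planes with rectangular boundaries. It is a toy illustration of the idea, not ARKit's algorithm (ARKit exposes detected planes as anchors through its own API).

```python
from collections import defaultdict

def detect_horizontal_planes(points, bin_size=0.05, min_points=4):
    """Toy horizontal-plane finder: bucket 3D points by height (y), then
    report each well-populated bucket as a plane with a crude
    rectangular boundary in the x/z plane."""
    buckets = defaultdict(list)
    for x, y, z in points:
        buckets[round(y / bin_size)].append((x, z))
    planes = []
    for key, pts in buckets.items():
        if len(pts) < min_points:
            continue  # too few points to treat as a surface
        xs = [p[0] for p in pts]
        zs = [p[1] for p in pts]
        planes.append({
            "height": key * bin_size,
            "extent": (min(xs), min(zs), max(xs), max(zs)),
        })
    return planes

# A flat "table top" at y = 0.75 m plus a couple of stray points.
cloud = [(x * 0.1, 0.75, z * 0.1) for x in range(5) for z in range(5)]
cloud += [(0.2, 0.1, 0.3), (1.0, 1.4, 0.2)]
print(detect_horizontal_planes(cloud))
```

The stray points fall into sparsely populated height buckets and are ignored, while the dense cluster at 0.75 m is reported as a single bounded plane.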
The Vuforia augmented reality SDK supports a wide variety of 2D and 3D targets, including 3D multi-target configurations and fiducial markers known as 'VuMarks.' Additional features of the Vuforia SDK include localized occlusion detection using virtual buttons, the capacity to create and calibrate target sets at runtime, and runtime target image selection.
Vuforia provides APIs (application programming interfaces) in C++, Java, Objective-C++, and .NET via an extension of the Unity game engine. The SDK therefore supports both native development for iOS and Android and the development of AR apps and prototypes in Unity that can easily be ported across both platforms. This makes it a great option for businesses and brands seeking to cover both iOS and Android whilst minimizing commercial and technical risk: AR apps can be developed for the widest possible range of target mobile devices in the shortest possible time scale.
The EasyAR SDK is available to businesses and developers in two pricing packages: EasyAR SDK Basic and EasyAR SDK Pro. The Basic package promises developers enhanced APIs, an improved workflow, and increased compatibility. The Pro package is brand new and is equipped with exclusive features that are not available in the Basic package. The Basic package is free to developers building AR applications and supports the Java API for Android, the Swift API for iOS, and Windows. Additional features supported by the Basic package include video playback, transparent video playback, QR code scanning, and comprehensive Unity integration.
The EasyAR Pro package includes all of the features of the free Basic version, plus support for SLAM, 3D object tracking, screen recording, and simultaneous detection and tracking of multiple target types.
The core feature offering of the EasyAR Pro package focuses on the following:
SLAM, including monocular real-time 6-DOF camera pose tracking with full mobile compatibility
3D object tracking, able to recognise and track a common textured 3D object in real time
Screen recording, providing a simple and efficient way of capturing AR content
Planar image tracking, able to identify and track planar images in real time
A concise API that integrates with all major mobile AR platforms and content
Interaction support, for displaying the most compelling AR content with additional functionality
The EasyAR website is packed with useful information to get your AR app up and running in the shortest possible timeframe.
Onirix promises effortless mobile AR development and was designed primarily to offer developers a fast and intuitive experience. Onirix Studio enables businesses and brands to develop, host, and publish the visual elements of each new AR project. For each project, the platform provides a range of features, including location-based points of interest, routes and wayfinding, 3D models, and other information. Onirix runs on a cloud-based platform that assigns each project the optimal level of resources and performance, which in turn provides an optimal experience for the mobile user. The Onirix AR SDK is tightly integrated with the company's native iOS and Android apps.
The Onirix SDK itself was developed specifically to interact with AR-enabled smartphones and tablets. The SDK provides utilities and libraries for simple and quick application development on Unity, iOS, and Android. A complementary REST API enables existing data sets to be baked into new AR apps and experiences with ease. The Onirix team do a great job of constantly updating documentation for all supported components and compatible devices, including iOS, Android, and the associated ARKit and ARCore libraries, with support for Magic Leap and HoloLens expected imminently.
9.) Pikkart AR SDK
The Pikkart SDK enables developers to create AR apps with what promises to be lightweight, simple-to-use, quick, robust, and 'computationally inexpensive' on-device detection and tracking. Headquartered in Italy, Pikkart offers its AR SDK in four pricing tiers, starting with a basic version that is completely free to use. The free tier equips developers with an unlimited number of local markers, one demo app (on either iOS or Android), and twenty cloud-based markers. For a fixed fee of €299, developers can access all of the features of the free SDK plus email support for assistance and guidance on using the platform to optimal effect. The two premium tiers, Cloud Recognition and Cloud API (each costing €99 per month), provide a broad range of functionality including unlimited databases, 1,500 cloud markers, email support, and cloud recognition.
The Pikkart SDK promises to enable developers to create highly engaging and immersive AR experiences that can be up and running on-device in a matter of minutes. The platform includes native plugins for iOS and Android and also integrates with existing Unity and Xamarin projects. The SDK also enables developers to add geolocated augmented markers in order to develop integrated navigation services.
11.) Lumin (Magic Leap)
Magic Leap is a US-based start-up founded by Rony Abovitz in 2010. To date, the company has raised in excess of $1.4 billion from a list of investors that includes the likes of Google and the China-based Alibaba Group. Back in December 2016, Forbes suggested that Magic Leap was valued at $4.5 billion, and in 2018 the Magic Leap One was launched and made available to AR developers in the US. The Magic Leap One HUD superimposes 3D computer-generated imagery on top of real-world objects by 'projecting a digital light field into the user's eye.'
Magic Leap's augmented reality SDK is called the 'Lumin SDK' and provides everything Unity developers require to get started developing for Magic Leap One. The Lumin SDK includes a simulator for exploring the capabilities of the SDK without having to purchase the HUD beforehand, a Unity package compatible with Magic Leap Zero Iteration and Magic Leap Remote to get things up and running quickly, and a range of samples demonstrating all of the features available to AR developers.
The Lumin SDK Technical Preview has been developed against Unity 2018.1 and includes a new platform under the build window that specifically targets Magic Leap's Lumin OS. In addition, there is a comprehensive C/C++ toolchain, debugger, and build/packaging system for creating native plugins. Technical previews are intended to provide a first glance at the technology, so some minor instability is to be expected.
ARCore is Google's proprietary augmented reality SDK. Similar to ARKit, it enables brands and developers to get AR apps up and running on compatible smartphones and tablets. Notably, ARCore also supports iOS devices, giving developers access to users across both platforms. ARCore possesses three significant features that enable developers to merge the real world with the virtual:
Light estimation: Estimates real-world lighting conditions
Environmental understanding: Detects the size and location of vertical, horizontal and angled surfaces
Motion tracking: Understands the phone’s position relative to its surroundings
The entire ARCore offering is built around two key elements: real-time tracking and calculation of the device's location, paired with the integration of virtual objects into the real-world environment. This enables businesses and brands to develop rich, immersive mobile AR experiences, placing 3D objects, text, and digital information directly into the surrounding real-world environment. ARCore is free for developers to use and supports a range of Android (and iOS) smartphones and tablets, including the Samsung Galaxy and Google Pixel lines, plus many more.
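Light estimation, the first feature listed above, essentially boils down to summarising how bright the camera frame is so that virtual objects can be shaded to match their surroundings. The Python sketch below is a toy illustration of that idea, not ARCore's API (which also reports colour correction, not just intensity).

```python
def estimate_ambient_light(frame):
    """Toy light estimator: mean luma of an RGB frame, normalised to
    the range 0..1, where 0 is pitch black and 1 is pure white."""
    total, count = 0.0, 0
    for row in frame:
        for r, g, b in row:
            total += 0.299 * r + 0.587 * g + 0.114 * b  # ITU-R BT.601 luma
            count += 1
    return total / (count * 255.0)

# Two tiny synthetic 4x4 frames: a dim room and a bright one.
dim = [[(20, 20, 20)] * 4] * 4
bright = [[(240, 240, 240)] * 4] * 4
print(round(estimate_ambient_light(dim), 3),
      round(estimate_ambient_light(bright), 3))
```

A renderer would feed an estimate like this into the lighting of its virtual objects each frame, so content dims and brightens with the room.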
Wikitude is an SDK specifically designed for developing mobile AR apps and prototypes. The company was founded back in 2008 in Salzburg, Austria. When the Wikitude SDK was initially launched, the platform was designed with a core objective: to enable AR developers to create location-centric augmented reality experiences through the Wikitude World Browser app. Fast forward to 2012 and Wikitude repositioned its core technology offering by launching the Wikitude SDK with geolocation features, tracking, and image recognition all baked directly into the core platform.
The Wikitude SDK is now the company's core product offering and promises developers the ability to create immersive mobile AR experiences in the shortest possible time frame. The SDK now also includes functionality such as 3D model rendering, location-based AR, and video overlay. The company later rolled out SLAM (simultaneous localization and mapping) technology, which facilitates seamless object tracking and recognition alongside markerless instant tracking.
Kudan, with offices in Tokyo, Japan, and Bristol, United Kingdom, designed its AR SDK as a 'one-stop shop' platform supporting both marker-based and markerless location and tracking requirements. The core Kudan SDK engine is developed entirely in C++, with architecture-specific optimizations written in assembly to provide the quickest and most robust performance without negatively impacting memory footprint. As a result, the Kudan AR SDK can be leveraged across a range of development scenarios, from supporting specialist HUDs to being integrated into a chipset, and data size, speed, and sensitivity can all be adjusted to suit the requirements of individual AR projects.
The Kudan AR SDK possesses native platform APIs with seamless support for Objective-C (iOS) and Java (Android), whilst cross-platform support is also provided for the Unity game engine. The Kudan SDK supports both marker-based and markerless tracking, which is great for AR developers who need to create functionality without marker-based initialization.
The company's goal is to accelerate the evolution of Virtuality (covering all aspects of augmented, virtual, and mixed reality) and robotics (cars, drones, and robots) by creating algorithms classified as Artificial Perception (AP). Kudan's mission is to develop AP algorithms that are, in effect, the machine equivalent of human eyes. By combining AI (artificial intelligence) and AP, machines are nearly at a stage where they can sense and interact with the surrounding world as humans do, by leveraging both the eyes and the brain.
The MaxST augmented reality SDK provides a comprehensive cross-platform AR engine equipped with all of the features required by brands and developers to build AR experiences and apps. The MaxST platform promises competitive pricing combined with speed and ease of AR app development. The MaxST AR SDK provides the following functionality:
Instant tracking: identifies horizontal/vertical planes in order to overlay relevant content
Visual SLAM: uses the smartphone camera to create a 'virtual map' of the surrounding area
Object tracking: imports map files created by visual SLAM
Image tracking: superimposes 3D content, videos, and images
Marker tracking: overlays content on top of markers, with 8,192 markers provided
QR code and barcode scanning
The MaxST AR SDK also provides a range of useful features such as cross-platform development, running on all major platforms including macOS, iOS, Android, Windows, and Unity 3D. The platform is also compatible with a wide range of HUDs and smart eyewear products such as the Epson MOVERIO BT-300 and BT-350 and the ODG R-7.
The DeepAR augmented reality SDK was originally created for app developers seeking to build high-quality, fully mobile-optimised, Facebook- and Snapchat-style 3D face lenses, masks, and special effects for iOS, Android, HTML5, and Unity. The DeepAR SDK is lightweight, quick to integrate into existing projects, and supports a huge range of lenses, effects, masks, and filters for creating highly immersive consumer-facing AR apps and prototypes.
The DeepAR platform provides facial detection functionality to detect faces and facial characteristics, achieved by combining a variety of data models with sophisticated 3D machine learning. The DeepAR SDK offers extremely precise and fast facial detection, combined with chin, eye, and nose detection, and is capable of detecting 68 facial feature points at nearly 60 frames per second. The platform is heavily optimized to detect multiple faces in real time on compatible smartphones and tablets.
DeepAR also possesses real-time emotion detection functionality, capable of detecting all of the core human emotions: anger, disgust, fear, happiness, sadness, and surprise (plus neutral). The technology leverages proprietary deep learning and neural network models.
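Landmark-based expression detection of this kind ultimately reduces to geometry over detected feature points. As a toy illustration only (not DeepAR's API; the landmark names and threshold below are invented for the example), the following Python snippet derives a crude 'surprise' cue from four hypothetical mouth landmarks:

```python
import math

def mouth_aspect_ratio(landmarks):
    """Toy expression cue: ratio of mouth opening height to mouth width,
    computed from four hypothetical mouth landmarks. Real landmark-based
    systems feed many such geometric features into a trained classifier."""
    left, right, top, bottom = (landmarks[k] for k in ("left", "right", "top", "bottom"))
    width = math.dist(left, right)
    height = math.dist(top, bottom)
    return height / width

def looks_surprised(landmarks, threshold=0.6):
    """Crude rule: a mouth that is tall relative to its width (wide open)
    suggests surprise. The threshold is an arbitrary illustrative value."""
    return mouth_aspect_ratio(landmarks) > threshold

# Hypothetical 2D landmark positions for two expressions.
neutral = {"left": (0, 0), "right": (4, 0), "top": (2, 0.5), "bottom": (2, -0.5)}
surprised = {"left": (0, 0), "right": (4, 0), "top": (2, 1.5), "bottom": (2, -1.5)}
print(looks_surprised(neutral), looks_surprised(surprised))  # prints: False True
```

A production system tracks dozens of such points per frame and classifies expressions with a learned model rather than a single hand-written threshold.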
12.) MixedReality Toolkit (HoloLens)
The MixedReality Toolkit comprises a number of components and scripts intended to accelerate the development of augmented reality applications targeting the Microsoft HoloLens and other Windows-based Mixed Reality headsets. The most recent version of the MixedReality Toolkit has extended capabilities and comes equipped with a range of new features, including support for a wide range of virtual and augmented reality platforms beyond Microsoft's own Mixed Reality range of products.
The Mixed Reality Toolkit vNext includes numerous APIs to speed up the development of mixed reality projects for a wide range of supported devices, which includes:
Microsoft Immersive headsets (IHMD)
To start developing apps using the MixedReality Toolkit, you'll require Windows 10 FCU (Fall Creators Update), Unity 3D (which supports developing mixed reality projects on Windows 10), and Visual Studio 2017 (used for code editing and for developing and deploying Universal Windows Platform app packages).