Interactive AR: Using MindAR to display video in Augmented Reality for Android, iOS, and other devices
Welcome to this new augmented reality tutorial!
If you’ve already mastered AR with AR.js, it’s time for an upgrade! In this tutorial, we will explore MindAR, a powerful alternative that will allow you to create more robust and fluid experiences. We will learn to integrate videos into your AR projects, bringing them to life with play, pause, and volume controls. Unlike previous solutions, MindAR offers superior performance and an optimized configuration, making this a step forward for your projects.
If you are interested in creating a video augmented reality app using AR.js, here are the earlier posts:
How to put a video in augmented reality on Android and PC
How to put a video in augmented reality on iOS and desktop
Working with videos in MindAR has a great advantage: you only need to develop a single application that works on all devices.
Previously, with the version of AR.js I worked with, it was necessary to maintain two different codebases to handle the security restrictions of iOS and Android. This resulted in two separate applications with only minor differences between them, which could not be combined without causing errors.
Thanks to MindAR, this problem is eliminated, allowing for more efficient and unified development.
Code Explanation
The HTML code is divided into several key sections that define the AR scene and its components.
- Header and Scripts
The <head> section includes the page title and essential libraries. Here, we load A-Frame, a framework for building virtual and augmented reality experiences, and MindAR, which integrates with A-Frame for image recognition.
<script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/mind-ar@1.2.5/dist/mindar-image-aframe.prod.js"></script>
- The Augmented Reality Scene
The <a-scene> element is the main container for all AR content. The mindar-image attribute is the heart of the application, where the target image file (targets.mind) and the tracking parameters are configured.
<a-scene mindar-image="imageTargetSrc: ./targets.mind; ..." color-space="sRGB" renderer="colorManagement: true, physicallyCorrectLights: true" vr-mode-ui="enabled: false" device-orientation-permission-ui="enabled: false">
The mindar-image attribute uses an optimized configuration that ensures robust and stable marker detection.
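As a reference, a scene opening tag might look like the sketch below. The extra tracking parameters (filterMinCF, filterBeta, warmupTolerance, missTolerance) are real MindAR options for smoothing and detection tolerance, but the values shown here are illustrative, not the only valid configuration.

```html
<a-scene
  mindar-image="imageTargetSrc: ./targets.mind; filterMinCF: 0.0001; filterBeta: 0.001; warmupTolerance: 5; missTolerance: 5"
  color-space="sRGB"
  renderer="colorManagement: true, physicallyCorrectLights: true"
  vr-mode-ui="enabled: false"
  device-orientation-permission-ui="enabled: false">
  <!-- assets, camera, and marker entities go here -->
</a-scene>
```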
- Assets
The <a-assets> section is fundamental for preloading resources like videos and images before the scene renders. This improves performance and prevents delays.
<video>: The videoAR.mp4 video is loaded here. Attributes like crossorigin, playsinline, and webkit-playsinline are vital to ensure the video plays correctly on iOS devices.
<img>: PNG images for the control buttons (play, pause, and volume) are loaded here.
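A possible assets block is sketched below. The IDs (#videoAR, #play-img, #pause-img, #volume-on-img, #volume-off-img) match the ones referenced later in this article; the icon file names are assumptions for illustration.

```html
<a-assets>
  <!-- playsinline / webkit-playsinline keep iOS from forcing fullscreen playback -->
  <video id="videoAR" src="./videoAR.mp4" preload="auto" crossorigin="anonymous"
         playsinline webkit-playsinline></video>
  <!-- PNG icons for the control buttons (file names are illustrative) -->
  <img id="play-img" src="./play.png" crossorigin="anonymous" />
  <img id="pause-img" src="./pause.png" crossorigin="anonymous" />
  <img id="volume-on-img" src="./volume-on.png" crossorigin="anonymous" />
  <img id="volume-off-img" src="./volume-off.png" crossorigin="anonymous" />
</a-assets>
```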
- The Camera and the Cursor
The <a-camera> defines the user’s point of view. The magic of interactivity happens here with the cursor and raycaster attributes.
cursor="fuse: false; rayOrigin: mouse;": This allows the mouse or a screen tap to act as a cursor, enabling clicks.
raycaster="…": This component detects which 3D objects are "hit" by the cursor. The objects: .clickable property tells the raycaster to only pay attention to elements with the CSS class .clickable.
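Putting the two attributes together, the camera could be declared as in this sketch (the look-controls setting and the raycaster's far distance are assumptions, not requirements):

```html
<a-camera position="0 0 0" look-controls="enabled: false"
          cursor="fuse: false; rayOrigin: mouse;"
          raycaster="objects: .clickable; far: 10000">
</a-camera>
```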
- The Marker and the Container
The <a-entity mindar-image-target="targetIndex: 0"> element is the main container for the 3D content. Everything inside this tag will appear superimposed on the first image that MindAR detects.
- The Video in Augmented Reality
The video is inserted into the scene with the <a-video> tag.
src=»#videoAR»: This links the video from the assets.
width=»2″ height=»1″: This defines the video’s size in the 3D scene.
position=»0 0.5 0″: This places the video slightly above the marker.
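Combining these attributes, the video entity inside the marker container might look like this sketch:

```html
<a-entity mindar-image-target="targetIndex: 0">
  <!-- The video plane, linked to the preloaded asset and raised above the marker -->
  <a-video src="#videoAR" width="2" height="1" position="0 0.5 0"></a-video>
</a-entity>
```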
- The Interactive Buttons
The buttons are created with <a-circle> to give them a round shape and <a-image> to display the PNG icon.
class="clickable": This class is crucial so that the camera's raycaster can detect clicks on the buttons.
position="…": These position values place the buttons below the video.
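One way to build the two buttons is sketched below. The button IDs (playPauseButton, muteButton) match the names used in the JavaScript section later; the icon IDs, radii, colors, and exact positions are illustrative assumptions.

```html
<!-- Play/pause button: a white circle backing with the icon image slightly in front -->
<a-circle id="playPauseButton" class="clickable" radius="0.15" color="#FFFFFF"
          position="-0.3 -0.4 0">
  <a-image id="playPauseIcon" src="#play-img" width="0.2" height="0.2" position="0 0 0.01"></a-image>
</a-circle>

<!-- Volume button: same structure, starting with the volume-on icon -->
<a-circle id="muteButton" class="clickable" radius="0.15" color="#FFFFFF"
          position="0.3 -0.4 0">
  <a-image id="muteIcon" src="#volume-on-img" width="0.2" height="0.2" position="0 0 0.01"></a-image>
</a-circle>
```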
The Logic: The JavaScript Code
The <script> block at the bottom of the file contains the JavaScript logic that controls the application’s behavior.
Variables and Events: References to the DOM elements (video, buttons, images) are obtained, and "listeners" are added to them, which wait for a click event.
playPauseButton.addEventListener('click', () => { ... });
muteButton.addEventListener('click', () => { ... });
video.addEventListener('ended', () => { ... });

Play/Pause Button Functionality
When clicked, the code checks if the video is paused (video.paused).
If it’s paused, it calls video.play() and changes the button’s image to the pause icon (#pause-img).
If it’s playing, it pauses the video (video.pause()) and changes the image back to the play icon (#play-img).
Volume Button Functionality
The muteButton toggles the video.muted property, muting or unmuting the audio. The code also dynamically changes the button’s image between #volume-off-img (mute) and #volume-on-img (sound).
Automatic Restart
The video.addEventListener('ended', …) ensures that when the video finishes, it restarts at second 0 and automatically pauses, displaying the play icon so the user can watch it again.
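The three behaviors above can be captured as small pure functions that return the next state and the icon to show. This is a sketch for clarity, not the article's exact script: the function names are assumptions, and the comment at the end shows roughly how they would be wired to the real DOM listeners.

```javascript
// Play/pause click: if paused, play and show the pause icon; otherwise pause and show play.
function nextPlaybackState(isPaused) {
  return isPaused
    ? { action: 'play',  icon: '#pause-img' }
    : { action: 'pause', icon: '#play-img' };
}

// Volume click: toggle muted and swap between the volume-off and volume-on icons.
function nextMuteState(isMuted) {
  return { muted: !isMuted, icon: !isMuted ? '#volume-off-img' : '#volume-on-img' };
}

// 'ended' event: rewind to second 0, stay paused, show the play icon again.
function onVideoEnded() {
  return { currentTime: 0, paused: true, icon: '#play-img' };
}

// In the page, the wiring would look roughly like:
//
//   playPauseButton.addEventListener('click', () => {
//     const next = nextPlaybackState(video.paused);
//     next.action === 'play' ? video.play() : video.pause();
//     playPauseIcon.setAttribute('src', next.icon);
//   });
```

Keeping the decision logic separate from the DOM calls makes the rules easy to read and to test outside the browser.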
This code is a robust and well-structured example for creating an AR experience that displays a video. By combining the power of MindAR for image tracking and the simplicity of A-Frame for the 3D scene, it’s possible to create immersive experiences that work on multiple platforms with a single codebase.