How to code an augmented reality marker


Augmented reality has been around for a while now, but with browser support for WebRTC (real-time communication), users on Android and desktop devices can now access the device's camera straight from a web page.

At present, iOS doesn't support this, as it hasn't been implemented in the WebKit engine that powers Safari, but it is in development and you can check the status on the WebKit feature status page. If you do have an iOS device, you don't have to miss out, as you can still use the webcam on your desktop computer.

Note: To get this working in the mobile Chrome browser, the content must be served over a secure connection (i.e. HTTPS rather than standard HTTP). Desktop currently works with regular HTTP, though.
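A quick runtime check for camera availability can save some head-scratching here: getUserMedia only works in secure contexts and in browsers that implement it. This helper is not part of the tutorial's code; it's a minimal sketch (the function name canUseCamera is my own), taking the window object as a parameter so it can be tested in isolation:

```javascript
// Hypothetical helper: returns true when the page can request camera
// access. getUserMedia requires a secure context (HTTPS or localhost)
// and a browser that exposes navigator.mediaDevices.getUserMedia.
function canUseCamera(win) {
  var isSecure = win.isSecureContext === true ||
    win.location.protocol === 'https:' ||
    win.location.hostname === 'localhost';
  var hasMedia = !!(win.navigator &&
    win.navigator.mediaDevices &&
    win.navigator.mediaDevices.getUserMedia);
  return isSecure && hasMedia;
}
```

In the page you could call `canUseCamera(window)` before setting anything up, and show a fallback message instead of the canvas when it returns false.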

  • To download the files you need for this tutorial, go to FileSilo, select Free Stuff and Free Content next to the tutorial.

In this tutorial I’m going to show you how to place an augmented reality marker in front of a phone camera. This will be picked up by the browser and AR.js, and content will be mapped over the top in 3D, sticking to the AR marker. 

There are lots of possible uses for this technique. For example, you might want to create a simple 3D creative resume, and then the AR marker could be printed on your business card. Because you can walk around the marker, this is great for content that you might want to see from different angles – think of a certain Swedish furniture manufacturer giving you animated steps that can be viewed from any angle! There are so many possibilities that this can be useful for.

01. Add the libraries

Start by linking up your project libraries

Once you’ve downloaded the tutorial files, go to the project folder, open the start folder in your code editor and then open the index.html file for editing. At this stage the libraries need to be linked up – and there are quite a few for this project! They fall into three groups: Three.js (with its Collada loader), JSARToolKit, and the THREEx extensions that connect the ARToolKit and marker to Three.js.

<script src="js/three.js"></script>
<script src="js/ColladaLoader.js"></script>
<script src="vendor/jsartoolkit5/build/artoolkit.min.js"></script>
<script src="vendor/jsartoolkit5/js/artoolkit.api.js"></script>
<script src="threex-artoolkitsource.js"></script>
<script src="threex-artoolkitcontext.js"></script>
<script src="threex-armarkercontrols.js"></script>
<script>THREEx.ArToolkitContext.baseURL = '/'</script>

02. Take care of CSS styling

In the head section of the page, add some style tags and drop in the style rules for the body and the canvas element. This ensures they are placed correctly on the page without the default margins added by the browser.

body {
margin: 0px;
overflow: hidden;
}
canvas {
position: absolute;
top: 0;
left: 0;
}

03. Add global variables

In the body section of the page, add some script tags where the remaining JavaScript code for this tutorial will go. There are a number of variables needed: the first line is for Three.js, the second for AR.js, the third for the model and then a variable to load the model.

var renderer, scene, camera;
var arToolkitContext, onRenderFcts, arToolkitSource, markerRoot, artoolkitMarker, lastTimeMsec;
var model, tube1, tube2, mid, details, pulse;
var loader = new THREE.ColladaLoader();

04. Load the model

Before the scene is set up, the model is loaded so that it can be displayed when markers are detected. It is scaled down by a factor of 10 to fit exactly onto the AR marker: the model is 10cm in width and height, while the marker is 1cm, which translates to one unit in Three.js.

loader.load('model/scene.dae', function(collada) {
model = collada.scene;
model.scale.x = model.scale.y = model.scale.z = 0.1;
details = model.getObjectByName("details", true);

05. Fix some display issues

Still inside the Collada loading code, once the model is loaded there will be a couple of tubes that spin around so they are found in the Collada scene. The first tube is found and its material is grabbed. Here the material is set to just render on the inside of the model, not the outside.

tube1 = model.getObjectByName("tube1", true);
var a = tube1.children[0].material;
a.transparent = true;
a.side = THREE["BackSide"];
a.blending = THREE["AdditiveBlending"];
a.opacity = 0.9;

06. Repeat the fix

If the transparency and additive blending are not enabled, the model looks like this when loaded and displayed on top of the AR marker – not very exciting and barely visible!

As in the last step, the same principle is applied to the second tube: the blending mode, similar to those found in After Effects and Photoshop, is set to an additive blend. This gives the edge pixels a softer transition into the camera image.

tube2 = model.getObjectByName("tube2", true);
var c = tube2.children[0].material;
c.transparent = true;
c.side = THREE["BackSide"];
c.blending = THREE["AdditiveBlending"];
c.opacity = 0.9;

07. Final fix

The last element is a spinning disc at the middle of the design. This follows the same rules as before but doesn’t render the back of the object, just the front. The opacity of each of these materials has been set to 90% just to make it slightly softer. Once the model is loaded, the init function is called.

mid = model.getObjectByName("mid", true);
var b = mid.children[0].material;
b.transparent = true;
b.blending = THREE["AdditiveBlending"];
b.opacity = 0.9;
init();
});

08. Initialise the scene

The init function is set up, and inside it the renderer settings are created. The renderer uses WebGL to give the fastest render speed, and the background alpha value is set to transparent so that the camera image can be seen behind it.

function init() {
renderer = new THREE.WebGLRenderer({
alpha: true
});
renderer.setClearColor(new THREE.Color('lightgrey'), 0);
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

09. Create the scene display

The renderer is made the same size as the browser window and added to the Document Object Model of the page. Now an empty array is created that will store functions to run on every render. A new scene is created so that content can be displayed inside it.

onRenderFcts = [];
scene = new THREE.Scene();

10. Light up

To be able to see content in the scene, just like in the real world, lights are needed. One is an ambient grey light while the directional light is a muted blue colour just to give a slight tint to the 3D content on display in the scene.

Experiment with the lighting colours to give some different tints

var ambient = new THREE.AmbientLight(0x666666);
scene.add(ambient);
var directionalLight = new THREE.DirectionalLight(0x4e5ba0);
directionalLight.position.set(-1, 1, 1).normalize();
scene.add(directionalLight);

11. Lights, camera, action!

With the lights added to the scene, the next part to set up is the camera. As previously with the lights, once created it has to be added into the scene to be used. This camera will auto align with the position of the webcam or phone camera through AR.js.

camera = new THREE.Camera();
scene.add(camera);

12. Set up AR.js

Enabling the webcam means that both desktop webcam and the phone’s camera can be used to view the content

Now AR.js is set up so that it takes the webcam as its input; it can also take an image or a prerecorded video. The AR toolkit is told to initialise, and when resized it will match the renderer on the HTML page.

arToolkitSource = new THREEx.ArToolkitSource({
sourceType: 'webcam',
});
arToolkitSource.init(function onReady() {
arToolkitSource.onResize(renderer.domElement)
});

13. Keep it together

Because resizing happens a lot on mobile screens (the device can easily rotate and re-orientate), the browser window is given an event listener that checks for resizing and resizes the AR toolkit to match.

window.addEventListener('resize', function() {
arToolkitSource.onResize(renderer.domElement)
});

14. AR renderer

AR.js needs a context, set up by calling the Three.js extension. It takes the camera parameter file, which is included in the data folder, and detects at up to 30 frames per second with the canvas width and height set for it.

arToolkitContext = new THREEx.ArToolkitContext({
cameraParametersUrl: 'data/camera_para.dat',
detectionMode: 'mono',
maxDetectionRate: 30,
canvasWidth: 80 * 3,
canvasHeight: 60 * 3,
});

15. Get the camera data

The AR toolkit is initialised now and the camera in the WebGL scene gets the same projection matrix as the input camera from the AR toolkit. The AR toolkit is pushed into the render queue so that it can be displayed on the screen every frame.

arToolkitContext.init(function onCompleted() {
camera.projectionMatrix.copy(arToolkitContext.getProjectionMatrix());
});
onRenderFcts.push(function() {
if (arToolkitSource.ready === false) return
arToolkitContext.update(arToolkitSource.domElement)
});

16. Match the marker

The markerRoot is a group that will be used to match the shape in augmented reality. It’s first added to the scene, then this is used along with the AR toolkit to detect the pattern, which is also located in the data folder.

markerRoot = new THREE.Group();
scene.add(markerRoot);
artoolkitMarker = new THREEx.ArMarkerControls(arToolkitContext, markerRoot, {
type: 'pattern',
patternUrl: 'data/patt.hiro'
});

17. Add the model

Here the tubes and discs spin, while the hexagon in the centre moves up and down

Back in the early steps a model was loaded and stored in the model variable. This is added to the markerRoot group from the previous step. The model has some specific elements within it that are going to be animated every frame, so an update function is also pushed into the render queue.

markerRoot.add(model);
onRenderFcts.push(function() {
tube1.rotation.y -= 0.01;
tube2.rotation.y += 0.005;
mid.rotation.y -= 0.008;
details.position.y = (5 + 3 * Math.sin(1.2 * pulse));
});

18. Finish the init function

The renderer is told to render the scene with the camera every frame by adding it into the render queue, which is the array set up in step 9. The animate function is called, and this will render every frame to display content. The closing bracket finishes and closes the init function.

onRenderFcts.push(function() {
renderer.render(scene, camera)
});
lastTimeMsec = null;
animate();
}

19. Just keep going

The animate function is created now and uses the browser’s requestAnimationFrame, which asks the browser to call the function again before the next repaint. This continues to call itself, and the browser attempts to run it at 60 frames per second.

function animate(nowMsec) {
// keep looping
requestAnimationFrame(animate);

20. Timing issues

Mobile browsers sometimes struggle to reach 60 frames per second with other apps running. Here the elapsed time between frames is worked out so that updates are based on real time rather than on a fixed frame count. This means that if frames drop, the animation still looks smooth.

lastTimeMsec = lastTimeMsec || nowMsec - 1000 / 60;
var deltaMsec = Math.min(200, nowMsec - lastTimeMsec);
lastTimeMsec = nowMsec;
pulse = Date.now() * 0.0009;
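The deltaMsec value computed above is what makes frame-rate-independent animation possible: a fixed per-frame increment (like the earlier tube1.rotation.y -= 0.01) slows down whenever frames drop. As a minimal sketch of the idea (the function name and the speed value are my own, not from the tutorial):

```javascript
// Advance a rotation by a speed expressed in radians per second,
// scaled by the elapsed time, so dropped frames don't slow the spin.
function advanceRotation(rotation, speedPerSec, deltaMsec) {
  return rotation + speedPerSec * (deltaMsec / 1000);
}

// At a steady 60fps, a speed of -0.6 radians per second matches the
// tutorial's -0.01 per frame: -0.6 * (1/60) = -0.01.
```

You could pass deltaMsec into the functions in the render queue (they already receive it as their first argument, divided by 1000) and drive the rotations this way instead.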

21. Finish it up

This is the image that will be detected by the camera as an AR marker; as you can see, it shares some similarities with a QR code, which you might be familiar with

Finally, each of the functions in the render queue is run to draw to the screen. Save the page, view it from an HTTPS server on mobile (or a regular HTTP server on desktop), print the supplied marker and hold it in front of the camera to see the augmented content.

onRenderFcts.forEach(function(onRenderFct) {
onRenderFct(deltaMsec / 1000, nowMsec / 1000);
});
}

This article originally appeared in Web Designer issue 262; buy it here!

Source: http://www.creativebloq.com/how-to/how-to-code-an-augmented-reality-marker
