FAQ

What is KudanAR?

Kudan AR SDK is an engine that gives applications the ability to see and understand their physical surroundings using the camera and sensors within the device. It provides various forms of tracking, as well as 3D rendering. The engine allows developers to easily integrate this capability into their own apps in order to position digital content within the real world.

Which devices are supported?

Devices running iOS and Android, including smartphones and tablets, are supported.

How much does the AR SDK cost?

The SDK is free to develop with. In order to release an app to the Apple or Android app stores, a license for a bundle ID will be required. More information on pricing can be found here: https://www.kudan.eu/pricing/

Is there a tutorial?

Please go to https://wiki.kudan.eu/ and follow the steps to create your first project.

Is there any sample code to demonstrate the SDK?

You can visit GitHub for the samples repository: https://github.com/kudan-eu/

Where is the API Documentation?

Head to https://wiki.kudan.eu for a list of classes and their functionality.

What specialised knowledge is required?

Developers using our SDK require no knowledge of complex computer vision algorithms or augmented reality. Development can be carried out natively in the iOS and Android IDEs, or with our Unity3D plugin.

What kind of content can I use in Augmented Reality?

Anything that can be rendered natively on the device is also supported by the AR engine. Content can be highly realistic 3D models, motion graphics, videos, or even sound.

What is KudanCV?

KudanCV is the computer vision component of KudanAR that can be used in a stand-alone manner. This is useful for integrating with third-party renderers and platforms.

What kinds of tracking does KudanAR provide?

KudanAR currently provides 2D image tracking, as well as Arbitrary Tracking.

What is Arbitrary Tracking?

Arbitrary Tracking is KudanAR's own SLAM-based tracking for arbitrary parts of the camera image. This can be the entire camera image, including any new parts that come into view, or it can be constrained to a narrower area such as a person. It initialises instantly, without requiring different viewpoints, and can expand to include new parts of the world as they appear. It is especially suited to tracking desk surfaces or floors in 6 degrees of freedom.

What is the difference between Arbitrary Tracking and SLAM?

Arbitrary Tracking initialises instantly, without having to move the device to different viewpoints. It also works especially well on flat surfaces such as distant landscapes, floors, or walls, and it will expand to include new features instantly. SLAM is better suited to mapping out environments, which it can recognise even after tracking has been lost.

How many different markers can I detect?

There is no limit on the number of markers you can detect. However, the more active markers there are at one time, the more latency there will be in recognising a marker.

What makes a good marker?

A good marker has lots of high-contrast corners that are distributed fairly evenly. The pattern should be non-repetitive, and the aspect ratio of the marker shouldn't be too large. For more information on what makes a good marker, please read: https://wiki.kudan.eu/What_Makes_a_Good_Marker%3F

Does your image tracker work with bad markers?

The Kudan image tracker can still work with poor markers, but tracking quality may be degraded. There are several controls available to fine-tune the tracking parameters to better suit poor markers.

My marker jitters. What can I do?

Firstly, you should attempt to improve the quality of your marker. If this isn't an option, you can adjust the tracker parameters for that marker. If the marker fails to be detected, you can lower the detection threshold; for unstable tracking, you can adjust the number of features tracked as well as the pose filtering.
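
As a rough illustration, tuning for a weak marker might look something like the Swift sketch below. The property names here (detectionSensitivity, maxTrackedFeatures, poseFilteringEnabled) are hypothetical stand-ins for the detection-threshold, feature-count, and pose-filtering controls mentioned above, so check the API documentation for the actual names on your platform.

 import KudanAR  // assumed module name
 
 // Hypothetical tuning sketch for a weak marker; none of these property
 // names are guaranteed to exist under these exact names.
 func tuneForPoorMarker(_ trackable: ARImageTrackable) {
     trackable.detectionSensitivity = 0.5   // lower threshold so a weak marker is still found
     trackable.maxTrackedFeatures = 200     // track more features for a steadier pose
     trackable.poseFilteringEnabled = true  // smooth the reported pose across frames
 }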

Can I mix different kinds of tracking?

Yes. Multiple tracker types can run on the same camera stream, but this may have a performance impact. You can, for example, initialise with a marker and then switch to Arbitrary Tracking.
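
A minimal sketch of that marker-to-ArbiTrack handover is shown below, assuming an ARArbiTrackerManager-style class with targetNode and start() members; the exact class and method names depend on your SDK version and platform, so treat this as an outline rather than a verbatim API.

 import KudanAR  // assumed module name
 
 // Outline of handing over from a detected marker to Arbitrary Tracking.
 // The manager, property, and method names below are assumptions.
 func switchToArbiTrack(markerWorld: ARNode, arbiTrack: ARArbiTrackerManager) {
     // Anchor arbitrary tracking at the pose currently established by the marker.
     arbiTrack.targetNode = markerWorld
     arbiTrack.start()
     // Content attached to the marker can now be re-parented onto the
     // ArbiTrack world so it stays in place if the marker is lost.
 }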

What 3D model formats does the renderer support?

The renderer requires a custom lightweight format for fast loading. The tool provided to convert to this format supports OBJ, COLLADA and FBX.

Which 3D model format is preferred?

FBX is the preferred format and has the best support from the major 3D editing tools. OBJ cannot represent animation or scene graphs.

What 3D model features does the renderer support?

The renderer supports complex scene graphs, bone animations, and blend shapes.

What properties are keyable in animations?

Node transformations, blend shape influences and node visibility are all keyable.

What do I need to know when creating a 3D model?

The renderer will import a large variety of 3D models, but there are some guidelines to follow. Try to keep the total number of meshes small, as this has a big impact on rendering performance; no more than around 50 different meshes should be rendered at one time. A single mesh shouldn't exceed around 15 bones, and meshes should be designed to look good with a maximum of 4 bone influences per vertex.

How many blend shapes can be active at once?

Currently the renderer will morph between two different shapes, whether or not they are from separate channels. In-between shapes on a blend shape deformer are fine, since only two contribute at once.

Is there a maximum polygon count?

There is no limit on the number of polygons. As many as 1 million triangles can be rendered without much performance impact, but be aware that models this large will bloat the app size.

What material types do you support?

Per-pixel lighting with Fresnel reflections, occlusion maps, and normal maps. We also provide custom AR-specific materials for use with object occlusion or for working with the camera texture.

Can I create my own materials?

We will be adding programmable shader support in a future release.

What video formats are supported?

Anything that iOS and Android can decode natively.

What guidelines are there for encoding videos?

HD videos are fully supported. AR is rendered at 30fps, so exceeding this framerate has no benefit. Lower framerates such as 20fps are often imperceptible and can result in a much smaller video file.

What are alpha videos?

Alpha videos are videos with a transparency channel. These are especially useful in AR, where it is desirable to have the camera background show through behind the video.

I have a green screened video, how do I use it as an alpha video?

The video must be chroma keyed in software such as After Effects and exported as a series of transparent PNGs. The SDK provides a tool for converting these into a video.

Do you plan to release a tool to handle the chroma keying in the future?

Yes.

My marker is poor and the loss of tracking causes video audio to stutter. Is there anything I can do?

Videos have a configurable jitter threshold, which causes the audio to continue playing across momentary losses of tracking.

Can I make my video go full screen?

Yes. There is a sample coming shortly demonstrating this.

How can I share content across nodes?

Because our scenegraph nodes have a single parent, sharing content across markers is problematic. You can clone the nodes but share the content (meshes, materials), or use an ARMultiTrackableNode, which moves nodes around depending on which marker is currently detected.
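
A rough sketch of the ARMultiTrackableNode approach, assuming an addTrackable-style method and the usual image-trackable and model-node classes; exact initialisers and method names may differ between SDK versions.

 import KudanAR  // assumed module name
 
 // One model node shared across two markers via ARMultiTrackableNode.
 // The addTrackable method name is an assumption based on the behaviour
 // described above.
 func shareModel(between first: ARImageTrackable,
                 and second: ARImageTrackable,
                 model: ARModelNode) {
     let multiNode = ARMultiTrackableNode()
     multiNode.addChild(model)       // the shared content
     multiNode.addTrackable(first)   // the node follows whichever of these
     multiNode.addTrackable(second)  // markers is currently detected
 }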

Can I use the camera texture deform without a marker?

Yes. The camera texture extractor will work no matter what is controlling the position, so it will always work in ArbiTrack and gyro tracking modes. Just position the target node as usual.

How can I integrate an AR component with a larger, regular app?

Everything is usually encapsulated within a subclass of UIViewController. This can simply be dropped into a larger app and presented in the usual ways.
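
For example, presenting the AR screen from the rest of an app is plain UIKit; only the ARViewController subclass itself comes from the SDK, and the MyARViewController and KudanAR names below are placeholders for your own subclass and the SDK module.

 import UIKit
 import KudanAR  // assumed module name for the SDK
 
 // Your AR screen: a subclass of the SDK's ARViewController, as described
 // above. Trackable and node setup is omitted here.
 final class MyARViewController: ARViewController {}
 
 // Presenting it from elsewhere in the app is ordinary UIKit.
 final class MainMenuViewController: UIViewController {
     @objc func showARExperience() {
         let arViewController = MyARViewController()
         arViewController.modalPresentationStyle = .fullScreen
         present(arViewController, animated: true)
     }
 }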

My app has a base class that inherits from UIViewController so I can't use your ARViewController. What can I do?

You can work with the ARView directly. A sample is provided which demonstrates what needs to be implemented in your custom view controller to properly handle the AR view.

Can I load content from the web?

All content loaders, such as the model importer and the texture and video loaders, can load from a full path name. Your downloader should save content to the app's cache directory and pass the full path as appropriate.
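
As an example, a downloader using only standard iOS APIs might look like the sketch below; it saves the file into the Caches directory and hands back the full path, which you would then pass to whichever loader (model, texture, or video) you are using.

 import Foundation
 
 // Downloads a remote asset into the app's Caches directory and returns the
 // full local path via the completion handler.
 func downloadAsset(from url: URL, completion: @escaping (String?) -> Void) {
     let task = URLSession.shared.downloadTask(with: url) { tempURL, _, error in
         guard let tempURL = tempURL, error == nil else {
             completion(nil)
             return
         }
         do {
             let caches = try FileManager.default.url(for: .cachesDirectory,
                                                      in: .userDomainMask,
                                                      appropriateFor: nil,
                                                      create: true)
             let destination = caches.appendingPathComponent(url.lastPathComponent)
             try? FileManager.default.removeItem(at: destination)  // replace any stale copy
             try FileManager.default.moveItem(at: tempURL, to: destination)
             completion(destination.path)
         } catch {
             completion(nil)
         }
     }
     task.resume()
 }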

Can I control the scenegraph from the web?

You would need to write your own importer for a format such as JSON or XML that describes the scene, and use the appropriate node creation API accordingly. However, the 3D format is capable of representing complex animations, so you may find your setup is best encoded in that instead.
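
A minimal sketch of such an importer, assuming a home-grown JSON schema; the SceneNodeDescription fields are entirely illustrative, and the final mapping onto nodes would use whichever node creation API your platform provides.

 import Foundation
 
 // Illustrative JSON scene description; adjust the fields to whatever your
 // content pipeline needs.
 struct SceneNodeDescription: Codable {
     let name: String
     let model: String?                  // path to a converted 3D model, if any
     let position: [Float]               // x, y, z
     let scale: Float
     let children: [SceneNodeDescription]?
 }
 
 func parseScene(from jsonData: Data) throws -> SceneNodeDescription {
     try JSONDecoder().decode(SceneNodeDescription.self, from: jsonData)
 }
 
 // From here, walk the tree and call the appropriate node creation API for
 // each entry, e.g. creating a model node for "model" and applying
 // "position" and "scale" (the mapping itself is SDK-specific).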

Can I use the 3D format for 2D animations?

Yes.

What do I need to link with in order to use this SDK?

On iOS, libc++ is required.