You asked for it, and now it's here: continuous scan and optimized search have arrived in our new SDK v1.0, released this week. The best part? Just download it and start integrating. It's that easy.
Here at Catchoom, we're continuously improving our core technology so you get the best results from your work. That means extending our APIs and the web panel's functionality, so we felt it was time to update our SDK and streamline its integration into your mobile app development.
Our new SDK does some pretty cool things:
- Easier integration to start your app development faster.
- Faster and seamless image capture to improve the response time of your app.
- Re-skinnable native camera preview so you don’t need to worry about image resolution or JPEG compression rates.
- Two modes: Single Shot and Finder Mode, which let you choose the user experience when capturing objects.
Easier Image Recognition integration for Android and iOS
You no longer need to download the SDK from GitHub and compile our library. It now ships in binary form (a Framework for iOS and a jar file for Android). Take our example app, which comes with all dependencies included, and start your project right away.
Faster and seamless camera image capture
Open the camera, snap a picture and send it to Catchoom. Sounds simple, right? Behind the scenes, we use a few tricks to reduce the round-trip time before the image is sent. Your users won't even notice, but Catchoom's engine will achieve better recognition rates and you'll deliver better experiences to your happy customers.
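To give you a feel for the kind of preprocessing that shrinks that round trip, here is a minimal sketch in plain Java: downscale the capture so its longest side stays small, then JPEG-encode it at a modest quality before upload. This is an illustration using only the standard `javax.imageio` APIs, not Catchoom's actual SDK code; the `QueryImagePrep` class, the 640-pixel cap and the 0.75 quality factor are our own assumptions.

```java
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.MemoryCacheImageOutputStream;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

public class QueryImagePrep {

    // Downscale so the longest side is at most maxSide, then JPEG-encode
    // at the given quality (0.0-1.0). A smaller payload means a shorter
    // round trip to the recognition service.
    static byte[] prepare(BufferedImage src, int maxSide, float quality) {
        int w = src.getWidth(), h = src.getHeight();
        double scale = Math.min(1.0, (double) maxSide / Math.max(w, h));
        int nw = (int) Math.round(w * scale), nh = (int) Math.round(h * scale);

        // Resample with bilinear interpolation to avoid harsh aliasing.
        BufferedImage scaled = new BufferedImage(nw, nh, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = scaled.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                           RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(src, 0, 0, nw, nh, null);
        g.dispose();

        // JPEG-encode with an explicit compression quality.
        try {
            ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
            ImageWriteParam param = writer.getDefaultWriteParam();
            param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
            param.setCompressionQuality(quality);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            writer.setOutput(new MemoryCacheImageOutputStream(out));
            writer.write(null, new IIOImage(scaled, null, null), param);
            writer.dispose();
            return out.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // A full-resolution camera frame would be several megabytes raw;
        // after downscaling and JPEG compression it is a small upload.
        BufferedImage camera = new BufferedImage(3264, 2448, BufferedImage.TYPE_INT_RGB);
        byte[] payload = prepare(camera, 640, 0.75f);
        System.out.println("payload bytes: " + payload.length);
    }
}
```

The real SDK does this (and more) transparently on a background thread, so your UI never waits on image processing.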
Re-skinnable native camera preview
Mobile operating systems (and mobile manufacturers in their own way) offer many ways to open a camera view, but not all of them are practical for image recognition: oversized images, odd focus effects, etc. Don't worry, we took care of that problem and built a native camera preview that improves recognition.
You get full control over your user interface: the native camera is easily re-skinnable with your own buttons, navbars and any graphic assets you want to show. Nifty, right?
Two modes: Single Shot and Finder Mode
Some of you want the user to tap a button to scan: capture the object and send the picture for recognition. Others prefer the camera to open right away and search continuously for matches while the user just points (or tosses) the camera around.
What's different about these modes? In Single Shot, a single image is taken, so we have more time to prepare and send it. In Finder Mode, two images are sent to our cloud every second, and as soon as we find a match in the database the app is notified of the positive result.
Which one should you use? That's up to you and your UX expert. We just want you to know how each mode counts against the scans included in your monthly plan. With Single Shot, each capture sends one image. With Finder Mode, a user will typically send several images before we find a match, unless the camera opens while the user is already pointing at the object (in which case recognition happens almost immediately). So really, we're talking options. How nice is that?
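If you like to budget on the back of an envelope, the arithmetic sketched below shows how the two modes consume scans. The two-images-per-second figure comes from the description above; the `ScanBudget` class and its method names are purely illustrative, not part of the SDK.

```java
public class ScanBudget {

    // Finder Mode streams roughly two query images per second until a match.
    static final int FINDER_IMAGES_PER_SECOND = 2;

    // Single Shot: one capture sends exactly one image, so one scan.
    static int singleShotScans() {
        return 1;
    }

    // Finder Mode: scans consumed by a session that finds a match after
    // secondsToMatch seconds of pointing the camera around. At least one
    // image is always sent, even for a near-instant match.
    static int finderModeScans(double secondsToMatch) {
        return Math.max(1, (int) Math.ceil(secondsToMatch * FINDER_IMAGES_PER_SECOND));
    }

    public static void main(String[] args) {
        System.out.println(singleShotScans());    // 1
        System.out.println(finderModeScans(0.2)); // already pointing at the object: 1
        System.out.println(finderModeScans(3.0)); // 3 s of continuous scanning: 6
    }
}
```

In short: Finder Mode buys a hands-free experience at the cost of a few extra scans per session, while Single Shot stays at one scan per tap.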
Wanna see it in action?
Check out the demo we did in July at Augmented World Expo 2013:
Want to start? Sure you do. We've made example apps for both iOS and Android, ready to download from our public GitHub repository.
Request your copy of the SDK now and start coding your app in minutes:
Download iOS SDK
Download Android SDK