The main idea was to build a demo for AWS Summit Stockholm, where it was used on 16 May.
We started the project by familiarizing ourselves with DeepLens - it was a completely new device for us, so there was a lot to learn. Our initial idea was to build our own models, but due to lack of time we decided to use existing models from AWS. (All the sample projects can be seen at https://aws.amazon.com/deeplens/.)
After a fruitful brainstorming session we decided to build a system where a light bulb (Philips HUE) and AWS Alexa could be controlled by different movements - we wanted to test how IoT devices can be controlled by DeepLens. Unfortunately, this approach proved too problematic: the available DeepLens models were not accurate enough for reliable gesture recognition.
This resulted in a quick change of plans - we decided to use the face detection model instead, which worked quite well. In the final demo, DeepLens detects faces, blurs them, and takes a photo when the face covers a large enough percentage of the frame. We connected it to Twitter, so that whenever a picture was taken it was posted automatically. In addition, the Philips HUE light bulb flashed whenever DeepLens took a photo.
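The capture trigger - take a photo only when a detected face fills a large enough share of the frame - can be sketched roughly as follows. This is a minimal illustration, not the actual demo code; the function names and the 15% threshold are our assumptions:

```python
def face_area_percentage(box, frame_w, frame_h):
    """Share of the frame covered by a face bounding box, in percent.
    box = (x_min, y_min, x_max, y_max) in pixels."""
    x_min, y_min, x_max, y_max = box
    face_area = max(0, x_max - x_min) * max(0, y_max - y_min)
    return 100.0 * face_area / (frame_w * frame_h)

def should_capture(boxes, frame_w, frame_h, threshold_pct=15.0):
    """Trigger a photo when any detected face is close enough to the
    camera, i.e. its area exceeds the (assumed) threshold percentage."""
    return any(face_area_percentage(b, frame_w, frame_h) >= threshold_pct
               for b in boxes)

# Example: a 200x300 px face in a 640x480 frame covers about 19.5%
# of the frame, so it would trigger a capture.
print(should_capture([(100, 50, 300, 350)], 640, 480))  # True
```

In the real demo, the bounding boxes would come from the DeepLens face detection model's inference output, and the capture event would then fan out to the Twitter post and the HUE flash.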
We have since used the demo at several events to showcase our expertise in utilizing new technologies. Applications based on DeepLens are being developed further within our InnovationZone.