

We crunch(ed) Artificial Intelligence Meetup – part three

For our third Meetup about putting Artificial Intelligence (A.I.) into practice, we were very happy to welcome Thomas Stalman and Peter van Lith. They had two quite different, but very interesting stories. And despite the beautiful weather and the national strike in regional public transport, they had a full room of people to share their stories with.


Thomas started the evening with his story about welkrestaurant.nl. Have you ever found yourself in the situation that you were in a city which you didn’t know so well and were looking for a nice place to have some dinner? Fear no more, because now we have AI, in the form of welkrestaurant.nl, which can help you find a nice restaurant in the city you’re in, based on another restaurant you do know and like in any other Dutch city. During his presentation Thomas took us on the journey of building a recommender system using, amongst other information, text mining on the reviews people gave about a restaurant.


Peter started his talk by showing us some videos of (humanoid) robot football, with his own humorous commentary as voice-over, which made for a brilliant introduction to the subject. Robot football is something TU Eindhoven is really good at, seeing that they have won the World Championship for the fourth time in the last seven years. Peter took us on a whole different journey than Thomas did: our second journey of the evening took us along the path of creating (distorted) images for training purposes, training neural networks combined with the robots' existing programmed behaviour, and how this should all come together on the robots in the end.


All in all we really enjoyed the evening and we want to thank our speakers, but also our audience! It was so great to see you all coming (again, for some of you) to our headquarters in Amersfoort. Thank you for your attention and questions, and we hope to see you at our next Meetup, which will be on Wednesday 17 October 2018. Please sign up for our The Analytics Lab Meetup group to make sure you stay informed about our Meetups!

We definitely hope to see you there!


Education for the Next Generation: a Handsign recognition project in Python

“Could you create a handsign recognition model which we can use to teach High School students a bit more about A.I. in a fun way?”
This is the question a few colleagues asked a couple of weeks ago, and of course, the only real response here could be YES! I was immediately enthusiastic and started working on this fun project.

After a lot of messing around with different models, among which xgboost and neural networks, I found a real goldmine: the GitHub page of loicmarie, where he created a script to not only train such a model using an Inception Model (a convolutional neural network classifier), but also to use it. So I combined my own scripts with those of loicmarie and we were ready to go!

The Inception Model V3 is a deep learning model created by Google based on images from ImageNet.


The Inception Model is capable of classifying images into 1,000 classes with a near-human error rate. It is an impressive model, which isn't only cool on its own, but can also be used for Transfer Learning. This means we can take the knowledge from this model and expand it with our own images, which makes it quite "easy" and "fast" to create a well-performing model on our own images: in this case, different handsigns.
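To make the idea of transfer learning concrete, here is a minimal sketch in Keras (note: this is not the retraining script from loicmarie's repo, and the number of handsign classes here is made up; in practice you would pass `weights="imagenet"` to reuse the pre-trained features):

```python
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # hypothetical number of handsigns

# Load Inception V3 without its 1,000-class top layer.
# Use weights="imagenet" in practice; None keeps this sketch offline.
base = InceptionV3(weights=None, include_top=False, pooling="avg")
base.trainable = False  # freeze the pre-trained feature extractor

# Stack a small trainable classifier on top for our own classes.
model = models.Sequential([
    base,
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

Only the small top layers are trained on the handsign photos, which is why this is so much faster than training a deep network from scratch.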

When we arrived at the High School, we first gave the students an introduction to what A.I. actually is and where they encounter A.I. in their world. After that we introduced them to our handsign recognition model and gave them the assignment to create their own handsigns.


They then used a script to take their own pictures for each handsign.


And then it was time for us to put our computers to work! It started with a script to generate 10,000 pictures for each handsign. As soon as this script was done, the training of the model started.
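Generating thousands of training pictures from a handful of photos comes down to data augmentation: making many slightly distorted copies of each photo. A minimal sketch of the idea (the shifts, noise levels and image size here are made up, not taken from the actual script):

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return one randomly distorted copy of a grayscale image array."""
    out = image.astype(np.int16)
    if rng.random() < 0.5:                     # random horizontal flip
        out = out[:, ::-1]
    dy, dx = rng.integers(-3, 4, size=2)       # small random shift
    out = np.roll(out, shift=(dy, dx), axis=(0, 1))
    out = out + rng.normal(0, 5, size=out.shape)  # mild pixel noise
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(42)
photo = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in photo
variants = [augment(photo, rng) for _ in range(10)]  # scale up to 10,000
```

Each variant keeps the handsign recognizable while teaching the network to ignore exact position, orientation and camera noise.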


After 23 hours (!) all the models were successfully trained and it was battle time! The group that could write the most flawless text, using their own handsigns, within 5 minutes was the project winner!


Check out this video to see how it works:


Are you interested in our code? Please feel free to take a look at our GitHub repo!


Project Friday 2.2: let’s fly!

A little while ago we started with our second Project Friday; once a month (or so) we lock some colleagues in a room with a couple of beers and a fun project. This project: give a drone a brain and an eye, so we can call it and make it do stuff for us. Why do we do this? Well, because it's fun, and we learn a lot.

During our first session we mostly discovered how difficult it was to maneuver the drone around inside our office building. This didn’t put us off even a tiny little bit, we love a good challenge!

We spent most of this session thinking of a way to use the camera on the bottom of the drone to make it follow a path we've laid out for it. Put simply: we want the drone to be able to follow a line on the floor. The first thing we did was create a line made out of white adhesive tape on our dark carpet. After that we held the drone above this line to take pictures. And then the thinking started… We had to make sure we took every possible deviation into account and thought of the best way to correct the drone if that deviation occurred. Believe it or not, this drawing helped us do that.

While thinking of every possible deviation and the correction that had to be applied for it, we immediately programmed it into our Python script for the drone. As soon as this script was done, it was time for our first test flight, which you can see in this video:
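The core of the correction logic is simpler than it sounds: threshold the bottom-camera frame, find where the white tape is, and steer toward it based on how far the line's centre is from the image's centre. A simplified sketch (the thresholds, frame size and command names are illustrative, not our actual script):

```python
import numpy as np

def line_correction(frame: np.ndarray, threshold: int = 200) -> str:
    """Return a steering command for a grayscale bottom-camera frame."""
    mask = frame > threshold                  # white tape on dark carpet
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return "hover"                        # line lost: hold position
    offset = xs.mean() - frame.shape[1] / 2   # deviation from image centre
    if offset < -5:
        return "move_left"
    if offset > 5:
        return "move_right"
    return "forward"

frame = np.zeros((120, 160), dtype=np.uint8)
frame[:, 100:110] = 255                       # tape to the right of centre
print(line_correction(frame))                 # -> "move_right"
```

Each deviation from the drawing maps to one of these corrections, applied frame by frame while the drone flies.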

Ok… Not successful yet. Enough work left for some more Friday afternoons. As far as we're concerned: bring it on!


Read more about what we did before


Project Friday 2.1: let’s fly!

After all the fun we had, while also learning a lot, during our first Project Friday project "Artificial Intelligence meets Coffee", we felt it was time for a second project. So this time, instead of giving an eye and a brain to a coffee machine, why not try to do the same to a drone?! What if we could make a drone fly up to us when we call for it and tell it what to do after it recognizes who we are…

Our second Project Friday was born!

You might wonder what Project Friday actually is… Well, that's an easy one: once a month (or so) we lock some colleagues in a room with a couple of beers and a fun project. Why do we do this? Well, because it's fun and we also learn a lot.

To get started with this Project Friday, we first needed a drone! We chose the Parrot AR 2.0 Drone, because you can easily connect this drone to your computer to take over control.


Most of this first afternoon was spent trying to fly the drone inside. Which, we found out the hard way, isn't so easy! A few walls were hit and we did see some people running for their lives, but at the end of the day everybody, including the drone, survived. All is good!

We even managed to give a few commands to the drone from the computer. Although the effect of the commands wasn't as successful as we had hoped…
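What makes the AR.Drone 2.0 so approachable is that it is controlled with plain-text "AT commands" sent over UDP (port 5556) once you join its Wi-Fi network. A stripped-down sketch of a take-off command (the full protocol, with sequence numbering and a watchdog, is more involved):

```python
import socket

DRONE_IP = "192.168.1.1"   # the drone acts as its own Wi-Fi access point
AT_PORT = 5556             # UDP port the firmware listens on for AT commands

def at_ref(seq: int, takeoff: bool) -> bytes:
    """Build an AT*REF command; bit 9 set means take off, cleared means land."""
    flags = 0x11540000      # constant bits the firmware expects to be set
    if takeoff:
        flags |= 1 << 9
    return f"AT*REF={seq},{flags}\r".encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Requires being connected to the drone's Wi-Fi, so not sent here:
# sock.sendto(at_ref(1, takeoff=True), (DRONE_IP, AT_PORT))
```

Movement commands (AT*PCMD) work the same way, which is why connecting was the easy part; getting the arguments right was not.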


What we’ve learned so far:

  • flying a drone inside is difficult,
  • connecting to the drone from the computer is easy,
  • giving the right commands isn’t easy at all,
  • we love being pilots!



“We crunch(ed) Artificial Intelligence” Meetup

On December 13 we had our very first The Analytics Lab Meetup at our headquarters in Amersfoort. Despite the traffic jams around Amersfoort and a broken train that blocked the track between Utrecht and Amersfoort, we could welcome around thirty people on this cold Wednesday. Unfortunately one of our expected speakers, Thomas Stalman from welkrestaurant.nl, wasn't one of them; the broken train made it impossible for him to get to Amersfoort on time. But on the bright side, this means we already have an amazing speaker planned for our next meetup!

After a simple but nice dinner, our other three speakers made grateful use of the space Thomas left for them and brought their stories, full of enthusiasm, to the bedazzled attendees of the meetup. Our first speaker was Klaas Tjepkema, who told us about the Advanced Data Sampler and about his plans to evolve it. A story about dreaming big and starting small: a perfect example of putting AI into practice. Our second speakers were Joost van der Leegte and Willem van der Geest, who told us about their Project Friday project: AI and Coffee. This project is part of our The Analytics Lab Playground, where we can play around, have fun and learn a lot of new skills! In their presentation Joost and Willem told us more about the coffee machine that was extended with facial recognition. We even got a live demonstration of this coffee machine.

All in all it was a really interesting and fun evening, which gave a nice opportunity to learn more about artificial intelligence in general, and data sampling and facial recognition in particular.

Thanks to everybody who was there at our very first meetup, we definitely enjoyed it and hope you did as well!

If you want to know more, or you want to be informed about our next meetup, please sign up at meetup.com and join our The Analytics Lab group. We’d love to see you there!


Does your coffee machine know who you are? Ours does!

When we told our friends or family we were adding face recognition to a coffee machine, often the first question was 'Why?'. A valid question, which is actually quite easy to answer: because we want to know whether we can, it's fun, and it's another, just not your ordinary, reason to drink some beers on a Friday afternoon. This project is part of an ongoing idea, which we call Project Friday: doing stuff we're excited about, which doesn't necessarily generate revenue. When we answer our friends and family that we do this just for fun, you can deduce from their facial expressions that the words 'Nerd Alert' are going through their minds. The question that often follows is 'Does it actually work?'. Well, check out this video.

Credits video: Jan Persoon

So, how does it work? We placed a small camera on top of the coffee machine, which captures everything and sends the images to a small computer (a Raspberry Pi). An algorithm looks at these images in real time and tries to detect faces. Once it has detected a face, it tries to recognize it by comparing it with photos of colleagues. After recognizing the face, the computer can determine the favorite drink by using a database filled with these preferences. It then sends a signal to an Arduino board which we've soldered to the motherboard of the coffee machine. When the signal is sent, the coffee machine knows whether to brew coffee, espresso or cappuccino, for example. When you want to deviate from your preferences, you can just say "Stop"; the computer also has a microphone attached, which is able to recognize some basic commands. And we've even made the time you have to wait for your coffee a bit more fun, by making sure the computer plays a part of a song that you like.
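The decision step in the middle of that pipeline is simple to sketch. The names, drinks and signal codes below are made up for illustration (the real preferences live in a database, and the face recognition itself runs via OpenCV on the Raspberry Pi):

```python
from typing import Optional

# Hypothetical colleague-to-drink table.
PREFERENCES = {"joost": "cappuccino", "willem": "espresso"}

def choose_drink(recognized_name: Optional[str]) -> str:
    """Map a recognized face to a preferred drink; default to plain coffee."""
    if recognized_name is None:
        return "coffee"                      # unknown face: default brew
    return PREFERENCES.get(recognized_name, "coffee")

def arduino_signal(drink: str) -> bytes:
    """Hypothetical one-byte code sent to the Arduino on the machine."""
    return {"coffee": b"C", "espresso": b"E", "cappuccino": b"P"}[drink]

print(arduino_signal(choose_drink("joost")))   # b'P'
```

The Arduino on the other end only has to translate that one byte into pressing the right button on the machine's motherboard.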

Still not convinced? Just drop by (but send a photo beforehand)! You can reach us at info@theanalyticslab.nl

The code is on GitHub, check it out if you're interested.


Read more about what happened before, during the Friday afternoons we were working on this cool project!


Project Friday 1.3: Artificial Intelligence meets coffee

Last Friday the third afternoon of Project Friday took place. In Project Friday we spend an afternoon about once a month on something completely useless. Why do we do this? Because we can, it's fun and interesting, and it's a good reason to grab a couple of beers. This Project Friday is all about mixing espresso machines with Artificial Intelligence: adding facial recognition to the machine so that you don't need to push the button to get your favorite coffee.

In the previous afternoons (day 1 and day 2), we installed everything on the Raspberry Pi (which was quite a hassle), learned what relays are and how to use them, soldered the first buttons and were able to control these buttons via the computer. This Friday we set ourselves the goal of getting real-time face detection working on the Pi cam.

As the weather was warm and sunny, we decided it was better to leave the office and reach our goal in a more suitable environment, somewhere the Raspberry Pi (and we) wouldn't overheat. So we drove to my place and settled ourselves in the garden, brought a television outside, hooked up the Raspberry Pi and off coding we went. An additional benefit was the BBQ!

Astonishingly enough, not only the Raspberry Pi had trouble with the warm weather; so did our cognitive capabilities. The move didn't make us more productive (but was still the right choice in this kind of weather). So progress was slow, and we were at half strength, as three team members were on holiday. We did have some success: we managed to implement the pre-trained Haar cascades for detecting faces, and our own trained cascade for detecting your middle finger when you flip it. But we didn't get as far as getting default face recognition in place. So we'll leave that for the next afternoon!


Read more about what happened before or read more about what happened next