How we use Artificial Intelligence to increase customer satisfaction and reduce costs

Anyone who has ever called the service number of a government agency or a large service provider will undoubtedly recognize it: a long wait, being transferred several times, and not immediately getting the right answer. Since Youp van het Hek started a Twitter storm in 2010 about T-Mobile and its, in his eyes, dreadful customer service, a lot has improved in most customer contact centers. But there is still plenty of room for improvement. Because we love a challenge, we at The Analytics Lab built a dashboard that uses artificial intelligence to help companies improve their customer contact while reducing their costs.

A large part of the extra costs in customer contact centers is due to unforeseen process costs. Think of additional phone inquiries or a larger share of complaints. Usually this leads not only to extra costs, but also to lower customer satisfaction. Often it seems as if these questions and complaints take the call center by surprise: the nature and volume of the questions are unforeseen. Yet such "unforeseen" events can be predicted quite well with the right knowledge of the business processes, combined with artificial intelligence.

Using artificial intelligence, we can deliver a dashboard that uses predictive algorithms to make real-time recommendations. These recommendations for process improvement can be given at the hourly, daily, weekly and monthly level. Based on a dataset we generated ourselves with our data sampler, we built a first version of a dashboard to get a better grip on unexpected costs. In addition, we believe that converting speech to text in real time generates valuable information, making it much quicker to see the actual reason behind a phone inquiry. Using sentiment analysis, we can also provide insight into the sentiment of each conversation and build a psychometric profile of a caller. The first version of our dashboard is now live.
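To give a flavour of what a sentiment score per conversation could look like, here is a deliberately minimal sketch that scores a transcript against a tiny hand-made word list. The word lists and scoring rule are illustrative assumptions only, not the models behind our dashboard:

```python
# Minimal sentiment sketch: score a call transcript with a tiny word list.
# These word lists and the scoring rule are illustrative assumptions,
# not the model used in the actual dashboard.

POSITIVE = {"great", "thanks", "helpful", "solved", "good"}
NEGATIVE = {"waiting", "complaint", "wrong", "bad", "again"}

def sentiment_score(transcript: str) -> float:
    """Return a score in [-1, 1]; negative means an unhappy caller."""
    words = [w.strip(".,!?") for w in transcript.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos + neg == 0:
        return 0.0
    return (pos - neg) / (pos + neg)

print(sentiment_score("I have been waiting again, this is a complaint"))  # -1.0
print(sentiment_score("thanks, that was helpful, my issue is solved"))    # 1.0
```

A production version would of course use a trained model on the speech-to-text output, but the idea of condensing each conversation into a single score per call is the same.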

Curious about our dashboard? Want to know how your company can use artificial intelligence to get a better grip on customer interactions? Take a look at the page Het ongeplande voorspellen or get in touch with us.

This article previously appeared on cmotions.nl.


“We crunch(ed) Artificial Intelligence” Meetup

On December 13 we had our very first The Analytics Lab Meetup at our headquarters in Amersfoort. Despite the traffic jams around Amersfoort and a broken-down train that blocked the track between Utrecht and Amersfoort, we could welcome around thirty people on this cold Wednesday. Unfortunately, one of our expected speakers, Thomas Stalman from welkrestaurant.nl, wasn't one of them; the broken-down train made it impossible for him to get to Amersfoort on time. But on the bright side, this means we already have an amazing speaker lined up for our next meetup!

After a simple but nice dinner, our other three speakers made grateful use of the space Thomas left for them and brought their stories, full of enthusiasm, to the bedazzled attendees of the meetup. Our first speaker was Klaas Tjepkema, who told us about the Advanced Data Sampler and his plans to evolve it. A story about dreaming big and starting small, a perfect example of putting AI into practice. Our second speakers were Joost van der Leegte and Willem van der Geest, who told us about their Project Friday project: AI and Coffee. This project is part of our The Analytics Lab Playground, where we can play around, have fun and learn a lot of new skills! In their presentation, Joost and Willem told us more about the coffee machine that was extended with facial recognition. We even got a live demonstration of the machine.

All in all it was a really interesting and fun evening, which gave us a nice opportunity to learn more about artificial intelligence in general and about data sampling and facial recognition in particular.

Thanks to everybody who was there at our very first meetup, we definitely enjoyed it and hope you did as well!

If you want to know more, or you want to be informed about our next meetup, please sign up at meetup.com and join our The Analytics Lab group. We’d love to see you there!


Our Advanced Data Sampler Beta version is live!

Great news! Our Advanced Data Sampler is live!

Why great? Well, did you ever find yourself preparing to teach a course by spending hours on Kaggle (or other websites) to find the right dataset? One that contained data fitting both the business sector of your students and the subject of your course, and that had the right type of data issues? We did, and we resented it; such a waste of precious time… Don't get us wrong, we absolutely love Kaggle! Just not when we are looking for exactly the right dataset to use in teaching.

Or did you ever find yourself wanting to show your colleagues or customers a really fancy and shiny dashboard, maybe by using some new/advanced/cool tool, but you couldn’t use or didn’t have the right data?

After hours on Kaggle and adjusting data by hand, we usually end up with something we can classify as "OK", but not great. Sometimes it gets even more frustrating: the data is available, but you're simply not allowed to use it due to security or privacy restrictions.

This is where our Advanced Data Sampler comes into the picture. It all started with the idea of a tool that could help us mimic a database, but without any of the privacy and security issues, and with only the data issues (outliers, missing values) that we want or need.


Because we live by the motto 'think big, start small', that is exactly what we did, and here we are with the first beta version of the Advanced Data Sampler. Within this version you, as a user, can create two datasets. One dataset contains customers with all the characteristics you defined; the other contains all their orders.
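To give an impression of the kind of linked customers-and-orders output the sampler produces, here is a minimal sketch that generates a toy version of both tables. The column names, value ranges and distributions are illustrative assumptions, not the sampler's actual output:

```python
import random

def sample_customers(n, seed=42):
    """Generate a toy customers table (list of dicts). Columns are illustrative."""
    rng = random.Random(seed)
    sectors = ["retail", "finance", "telecom"]
    return [
        {"customer_id": i,
         "age": rng.randint(18, 80),
         "sector": rng.choice(sectors)}
        for i in range(1, n + 1)
    ]

def sample_orders(customers, max_orders=5, seed=7):
    """Generate a matching orders table: every order links to one customer."""
    rng = random.Random(seed)
    orders, order_id = [], 1
    for c in customers:
        for _ in range(rng.randint(0, max_orders)):
            orders.append({"order_id": order_id,
                           "customer_id": c["customer_id"],
                           "amount": round(rng.uniform(5, 500), 2)})
            order_id += 1
    return orders

customers = sample_customers(10)
orders = sample_orders(customers)
print(len(customers), len(orders))
```

The important property, which the real sampler guarantees as well, is referential integrity: every order points to a customer that actually exists in the first table.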

This is just our first small step on the path towards our vision. We're really curious how you rate this first version and where you think we should put the focus when further developing the Advanced Data Sampler. We invite you to try and test it!

Please share your opinion, feedback and ideas with us at info@theanalyticslab.nl.


Does your coffee machine know who you are? Ours does!

When we tell our friends or family that we are adding face recognition to a coffee machine, the first question is often 'Why?'. A valid question, which is actually quite easy to answer: because we want to know whether we can, it's fun, and it's another, not quite ordinary, reason to drink some beers on a Friday afternoon. This project is part of an ongoing idea, which we call Project Friday: doing stuff we're excited about, which doesn't necessarily generate revenue. When we answer our friends and family that we do this just for fun, you can deduce from their facial expressions that the words 'Nerd Alert' are going through their minds. The question that often follows is 'does it actually work?'. Well, check out this video.

Credits video: Jan Persoon

So, how does it work? We placed a small camera on top of the coffee machine, which captures everything and sends the images to a small computer (a Raspberry Pi). An algorithm looks at these images in real time and tries to detect faces. Once it has detected a face, it tries to recognize it by comparing it with photos of colleagues. After recognizing the face, the computer determines the person's favorite drink using a database filled with these preferences. It then sends a signal to an Arduino board that we soldered onto the motherboard of the coffee machine. When the signal is sent, the coffee machine knows whether to brew, for example, coffee, espresso or cappuccino. When you want to deviate from your preferences, you can just say "Stop"; the computer also has a microphone attached, which can recognize some basic commands. And we've even made the time you have to wait for your coffee a bit more fun, by making sure the computer plays part of a song that you like.
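The last step in that chain, deciding which signal to send to the Arduino, boils down to a simple lookup. A minimal sketch of that decision logic; the preference table, drink names and signal bytes are made up for illustration, and the camera, recognition and serial code live elsewhere:

```python
# Hypothetical preference table: recognized name -> favorite drink.
PREFERENCES = {"Joost": "cappuccino", "Willem": "espresso"}

# Hypothetical one-byte signals an Arduino sketch could map to button presses.
SIGNALS = {"coffee": b"C", "espresso": b"E", "cappuccino": b"P"}

def signal_for(name, stop_heard=False, default="coffee"):
    """Pick the drink for a recognized face and return the Arduino signal.

    If the microphone heard "Stop", we assume here that the machine falls
    back to the default drink instead of the stored preference.
    """
    drink = default if stop_heard else PREFERENCES.get(name, default)
    return SIGNALS[drink]

print(signal_for("Joost"))                   # stored preference: cappuccino
print(signal_for("Joost", stop_heard=True))  # "Stop" overrides the preference
```

In the real setup the returned byte would be written to the Arduino over a serial connection, and the Arduino closes the relay that "pushes" the matching button.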

Still not convinced? Just drop by (but send a photo beforehand)! You can reach us at info@theanalyticslab.nl

The code is on GitHub, check it out if you're interested.



Create your DIY remote for Philips Hue with Raspberry Pi

You know those Philips Hue lights, those amazing and really cool gadgets for your house? Well, thanks to Philips it is also easy to create your very own DIY remote control for them. Yes, of course you can buy a remote for these lights in a shop as well, but why would you if you can program one yourself?!

To do this I’ve used my Raspberry Pi 2 model B, and from my Arduino Starter Kit I’ve used a breadboard, a remote control and an IR (infrared) receiver. I’ve named my remote control “SpecialForMP3”, since this is the only text I could find on it. But you can use any infrared remote control you want, so if you have some useless remote controls lying around, just give them a purpose again.


First, we need to set up the Raspberry Pi so it can receive the infrared signals sent by the remote control. Furthermore, we have to make sure the Raspberry Pi can not only receive the signals, but is also capable of deciphering them. In other words, we want the Raspberry Pi to "hear" the remote control and also understand what is being said; they need to speak the same language.

The first step towards this goal is connecting the IR receiver to the Raspberry Pi. After that, we can set up the LIRC (Linux Infrared Remote Control) package on the Raspberry Pi. LIRC is a package that allows you to decode and send the infrared signals of many (but not all) commonly used remote controls. To set up both the IR receiver and LIRC, you can follow the steps described here. You can find the resulting LIRC configuration file on GitHub, together with the rest of the code needed to finish this project.

When finished, my setup looks like this:
[photos of the finished setup]

As a side note, I had some trouble using mode2 -d /dev/lirc0: after pushing a button on my remote, instead of seeing something like the example I got the message "Partial read 8 bytes" and then it just stopped. Changing the driver from devinput to default in lirc_options.conf fixed this issue.

When you find yourself without permission to change files, look at the chmod 777 option; it gives everybody read-write-execute permissions on the specified files and/or folders.

Now we're ready to install the Python packages that make it possible to use LIRC and to connect to the Philips Hue Bridge, which controls the lights:

  • sudo pip install phue
  • sudo apt-get install python-lirc

We get started by connecting to the Philips Hue Bridge from a Python script, which can easily be done using its IP address. You can find out the correct IP address in multiple ways; an easy one is using the (official) Philips Hue app:

  • Go to the settings menu in the Philips Hue app and go to My Bridge, click on Network settings and switch off the DHCP toggle; the IP address of the bridge will show.

The next thing we need to do is connect to the Philips Hue Bridge and determine the names of the available lights and light groups.

# import the necessary packages
from phue import Bridge

# identify the bridge by its IP address
b = Bridge('')

# connect to the bridge (first press the button on the bridge)
# this only has to be done the first time you set up the connection
b.connect()

# get the names of all the lights
print(b.get_light_objects('name'))

# get the names of the light groups
print(b.get_group())

In order to change the colour of a light, we need to know the XY code of this colour. Since we're more familiar with RGB colour codes, we can use this function to convert RGB to XY.
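For completeness, a conversion along these lines might look as follows. The gamma correction and matrix below follow the "Wide RGB D65" conversion that Philips describes in its developer documentation, so treat this as an uncalibrated sketch rather than a per-lamp-accurate implementation:

```python
# Convert an RGB colour (0-255 per channel) to the CIE xy pair the Hue API
# expects. Gamma correction and matrix follow the "Wide RGB D65" conversion
# from the Philips Hue developer documentation.

def rgb_to_xy(r, g, b):
    """Return the (x, y) chromaticity for an 8-bit RGB colour."""
    def gamma(c):
        c /= 255.0  # normalize to 0-1
        return ((c + 0.055) / 1.055) ** 2.4 if c > 0.04045 else c / 12.92

    r, g, b = gamma(r), gamma(g), gamma(b)
    # linear RGB -> XYZ (Wide RGB D65)
    X = r * 0.664511 + g * 0.154324 + b * 0.162028
    Y = r * 0.283881 + g * 0.668433 + b * 0.047685
    Z = r * 0.000088 + g * 0.072310 + b * 0.986039
    total = X + Y + Z
    if total == 0:
        return (0.0, 0.0)
    return (X / total, Y / total)

print(rgb_to_xy(255, 0, 0))  # pure red lands around (0.70, 0.30)
```

The resulting pair can be passed straight to phue, e.g. b.set_light('Zithoek', 'xy', rgb_to_xy(255, 0, 0)).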

Finally, we can connect to LIRC and create the loop in which we change the colour of the lights. In my case the following lights are available: Zithoek, Zithoek bloom, Raamkant, Midden, Keukenkant, Slaapkamer; and the following light groups: Zithoek, Eetkamer, Slaapkamer. You can find the complete script to control the Philips Hue lights with the remote on GitHub as well; it connects to the Philips Hue Bridge and to LIRC and contains ten different scenes for the lights, but you can of course adapt it to your own needs and wishes.

Good luck and enjoy!


Project Friday 1.5: AI & Coffee – When you flip your coffee machine the finger

Everybody has those mornings when your mood is not as wonderful as usual and your energy levels seem to be lagging, especially before you've had your coffee. Those mornings when you just want to give everything and everyone the finger and tell the world to fuck off. When "working" on our A.I. coffee machine, we recognized this feeling and felt it was our duty to take it seriously.

So, here's what we did: we trained our coffee machine to recognize when someone flips it the finger. Thomas spent an entire afternoon collecting training data, meaning that he was just flipping the finger at his computer and looking for YouTube clips of people flipping the finger. Which is why I put "working" in quotes in the first paragraph. In the picture below you'll see some of our training data.


And the results after two days of training:


When it spots the middle finger, it serves a double shot of espresso. Now, we're not claiming it fixes your mood entirely, but it sure as hell makes it a bit better!

Last session we had our Dr. Frankenstein moment: it was working. It was, however, a dry run; we saw that the buttons worked, but didn't connect the coffee machine to the water. This time we did. Apparently there's still a little bug, as it decided to serve a cappuccino every 20 seconds.


A model in practice in a single afternoon

Sound familiar? You'd simply like to know whether the data you have is suitable for predicting certain customer behavior, but you have no tools at your disposal and don't really know where to start…

One of our clients found that revenue potential was being left on the table due to incorrect estimates of the potential customer value of prospects. Potential customer value was determined by looking at a few generic characteristics of prospects and making an estimate based on those. The account managers noticed that this estimate was not good (enough), because they were (partly) approaching the wrong prospects. In addition, evaluations of campaigns aimed at low-value prospects showed that quite a few high-value customers were among them. High time for improvement, then. For this problem a business case was drawn up with multiple scenarios; its potential was estimated between five hundred thousand and 2.5 million euros.

Besides helping the client set up this business case, we also included a number of qualitative aspects: we introduced one of their employees to several methods and techniques for data analytics and taught him the basics of R. This also provided direct input for a larger business case on phasing out SPSS in favor of R. After getting started together with the client, we were able to deliver a first model within five days; its added value is currently being tested in a pilot project.

Now I can hear you thinking: "you have a model in one afternoon, so why did this take five days?". A fair question, of course. The extra time we invested was mainly spent on teaching the employee the basics of data analytics and R. This way he could get a first taste of the field and (better) decide whether he would enjoy developing further in that direction. In our view this was not only an interesting and fun project, but above all a very useful one, well worth repeating!

Interested, or have you been wrestling with a similar question for a while? Don't hesitate to get in touch with us!



Project Friday 1.4: A.I. & Coffee – It’s Alive!

It's raining cats and dogs, the fridge is stuffed with cold beers and the Spotify playlist Coffee House is blasting through the office. In other words, a perfect time to work on our coffee machine with face recognition capabilities. And today we had our own Dr. Frankenstein moment: it's alive and it works!

Last Christmas, everyone at Cmotions was photographed professionally. These photos are the input for the face recognition algorithm. We started out by using the face_recognition package in Python, which detects key facial points (edges of the eyes, nose, mouth, etc.) and estimates the distances between those points. On its own the algorithm was way too slow, so we apply face detection first: using pretrained Haar cascades, we extract a face from a picture and then pass that to the recognition algorithm. The next step will be to train our own recognition algorithm and to implement some easter eggs, like a double shot of espresso when you flip it the finger, or identifying your facial expression and brewing your coffee extra strong when you're angry. If you have any ideas, let us know!
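The recognition step itself comes down to comparing face encodings by distance. Here is a toy sketch of that idea using made-up two-dimensional "encodings"; the real face_recognition package works with 128-dimensional vectors, and the names and threshold below are purely illustrative:

```python
import math

# Hypothetical known encodings; in practice these would come from running
# face_recognition.face_encodings() on the Christmas photos.
KNOWN = {"Joost": [0.1, 0.9], "Willem": [0.8, 0.2]}

def recognize(encoding, threshold=0.5):
    """Return the closest known colleague, or None if nobody is close enough."""
    best_name, best_dist = None, float("inf")
    for name, known in KNOWN.items():
        dist = math.dist(encoding, known)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

print(recognize([0.15, 0.85]))  # close to Joost's encoding
print(recognize([0.0, 0.0]))    # too far from everyone: None
```

The threshold is what keeps strangers from getting someone else's cappuccino: a face that isn't close enough to any known encoding is simply not recognized.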


As stated in earlier articles, we do this for the fun of it and to learn new stuff.

Pandas, does that make the picture black and white? – Joost

One of the things we had to learn about is electrical engineering. At Cmotions, we know data and how to use it to retrieve information (like extracting faces from a bunch of pixels in a photo). The soldering part, and especially figuring out which points we needed to solder to, took some trial and error. See part one here, where a colleague short-circuited the Arduino within ten minutes. The first Frankenstein moment was when we were able to say, from Python, that we wanted a coffee and the machine actually made a cup of coffee.

The second eureka moment was when we integrated this with the face recognition scripts on the Raspberry Pi. It's alive! And how do we know it's working? Well, after testing it on our own faces, we came up with the (quite brilliant) idea of showing it Facebook photos of colleagues.




Project Friday 1.3: Artificial Intelligence meets coffee

Last Friday, the third afternoon of Project Friday took place. In Project Friday we spend an afternoon about once a month on something completely useless. Why do we do this? Because we can, it's fun and interesting, and it's a good reason to grab a couple of beers. This Project Friday is all about mixing espresso machines with artificial intelligence: adding facial recognition to the machine so that you don't need to push a button to get your favorite coffee.

In the previous afternoons (day 1 and day 2), we installed everything on the Raspberry Pi (which was quite a hassle), learned what relays are and how to use them, soldered the first buttons and were able to control these buttons from the computer. This Friday we set ourselves the goal of real-time face detection on the Pi camera.

As the weather was warm and sunny, we decided it was better to leave the office and pursue our goal in a more suitable environment, somewhere the Raspberry Pi (and we) wouldn't overheat. So we drove to my place and settled in the garden, brought a television outside, hooked up the Raspberry Pi, and off coding we went. An additional benefit was the BBQ!

Astonishingly enough, it wasn't only the Raspberry Pi that had trouble with the warm weather; so did our cognitive capabilities. The move didn't make us more productive (but was still the right choice in this kind of weather), so progress was slow, and we were at half strength, as three team members were on holiday. We did have some success: we managed to implement the pretrained Haar cascades for detecting faces, as well as our own trained cascade for detecting a middle finger when you flip it. But we didn't get as far as default face recognition, so we'll leave that for the next afternoon!



Project Friday 1.2: Artificial Intelligence meets coffee

A month ago we started 'Project Friday': once a month we lock some colleagues in a room with a couple of beers and a fun project. This project: give a coffee machine a brain and an eye, so it can see who's standing in front of the machine and knows what he or she wants. Why do we do this? Well, because it's fun.

This Friday we made some significant progress. First of all, we didn't short-circuit the Arduino board, which we had accomplished within the first ten minutes last time. We managed to reverse engineer some of the buttons on the circuit board of the coffee machine and, after learning about transistors and relays, we managed to "push" a few of the machine's buttons using the Arduino. We bought a cheap soldering iron around the corner from the office and soldered the first three buttons.

Meanwhile, on the Raspberry Pi, we succeeded in extracting faces from the images using Haar cascades; when multiple faces are detected, it selects the biggest one. Face recognition consists of two parts: detection (where in the image or frame is a face) and recognition (whose face is it). The first part works; for the second part we had some trouble installing libraries, which apparently takes a long time!
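Selecting the biggest face is a one-liner once you have the bounding boxes. A small sketch, with detectMultiScale-style (x, y, w, h) boxes made up for illustration:

```python
def biggest_face(boxes):
    """Given (x, y, w, h) bounding boxes, return the one with the largest area."""
    if not boxes:
        return None
    return max(boxes, key=lambda b: b[2] * b[3])

# Two hypothetical detections: the second face covers more pixels, so it wins.
print(biggest_face([(10, 10, 40, 40), (100, 30, 80, 90)]))
```

Picking the largest detection is a cheap heuristic for "the person standing closest to the machine", which is exactly who should get the coffee.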

Next time, we're going to solder the rest of the buttons, program the Arduino, create training data by taking pictures of ourselves, and try to train the first algorithms.
