An AI on the Edge: a Microsoft Azure Story

Remember when I started my latest internship four months ago? Well, now it is time to deploy my work using Microsoft Azure IoT Edge!

This is my first experience working with the Azure IoT Edge solutions provided by Microsoft, and I have to say: this is real fun. What I want to achieve here is to deploy my AI algorithm on "edge" devices. They will then be able to work disconnected from the cloud, while keeping all of its advantages.

Here is the architecture that I am working on:


The key to this strategy is that everything is deployed as an Edge Module inside a Docker container. Each of those modules is autonomous, and they are all able to communicate with each other through the IoT Edge Runtime (purple in the diagram). All those modules can do some processing on the device itself and then send their conclusions to the cloud through the Azure IoT Hub. This solution avoids a lot of message traffic that could cost a lot of money. Indeed, if my AI DCOP were in the cloud, every device would have to send its data to it (let's say every second), which would make my system a really slow one.

What I also really like is that the system is modular, which allows me to plug in other processes to improve my artificial intelligence. For instance, I can add to the DCOP system another AI (like a Machine Learning model) that does some video recognition (Emotion Recognition in the diagram). In some specific scenarios, we would like to add a camera in the room able to detect facial emotions, and therefore detect when a patient is in pain.

Here, all my modules (AI DCOP, Emotion Recognition and Database) are written in Python and run in Docker Ubuntu Xenial containers. I also added an Azure Stream Analytics job that aggregates values coming from my different modules.

To use all of this Microsoft magic, I use the Azure Portal together with the Azure CLI and the Azure IoT Edge SDK.

For those interested, I also started a little cheat sheet where I store all the azure-iot-edge-runtime-ctl and docker commands that I use: https://github.com/SachaLhopital/azure-iot-edge-cheatsheet.
   

Custom Vision - A Service to Create Your Own Image Classifiers and Deploy Them

Custom Vision is a Microsoft service that creates a custom computer vision model based on a specific set of images. The website is built to help people train and deploy image classifiers for their specific needs.

What is really interesting here is that it is very simple to use! The service works in 3 steps: upload the data set, train the model and deploy it. Here is a little "How To": give it a try!

1. Upload images & Tag them

The first step is to gather as many images as you can in order to create a good data set. You also need to assign one or more "tags" to each of your images. For this example, I downloaded some photos of dotted and leopard clothes from another Microsoft tutorial. You can download them using this script (from the same tutorial) with Python 3.

Be aware that you must have at least 30 images per tag for the model to be effective. But that is not the only prerequisite: the quality of your data set is also very important. Depending on the quality and variety of your images, your final model can be very efficient – or not!

Once you have all your data, you can create a new project. Go to the Custom Vision web page and click on New Project (Note: you need a Microsoft account to sign in). Choose the domain that best fits your needs; if you are following my clothes example, choose General. Also select Classification as the project type.


Once the project is created, you can upload your images by clicking on Add Images. I suggest uploading your images tag by tag in order to save time, but you can change the tags later on every image anyway (Note: follow the instructions - the website is really intuitive!).

When all your images are uploaded, you should see all of your data classified by tag.


2. Train

To train the classifier, use the green Train button (top right of the page). The training may take a little while depending on the amount of data you provided. Afterwards, you can see the performance of your model on the Performance panel.


Note: for those interested, these estimates were obtained through k-fold cross-validation (a data scientist's trick). Precision and Recall are common metrics in this field.
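To make these metrics concrete, here is a tiny, self-contained Python sketch computing precision and recall for one tag (the labels below are invented for illustration):

```python
def precision_recall(expected, predicted, tag):
    """Compute precision and recall for a single tag.
    precision = TP / (TP + FP), recall = TP / (TP + FN)."""
    tp = sum(1 for e, p in zip(expected, predicted) if p == tag and e == tag)
    fp = sum(1 for e, p in zip(expected, predicted) if p == tag and e != tag)
    fn = sum(1 for e, p in zip(expected, predicted) if p != tag and e == tag)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Invented ground truth vs. model output for 5 test images.
expected  = ["dotted", "dotted", "leopard", "dotted", "leopard"]
predicted = ["dotted", "leopard", "leopard", "dotted", "dotted"]
print(precision_recall(expected, predicted, "dotted"))
```

Here the "dotted" tag has 2 true positives, 1 false positive and 1 false negative, so both metrics come out to 2/3.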

If you need to, you can add more data and train your model again.

You can also click on Quick Test (next to the Train button) to submit your own custom image to the model. For instance, here my model gave me the tag "dotted" when I expected "leopard".


3. Deploy…

And that’s it! When you find a model that fits your needs, you can download it and deploy it as a REST API service!

On the Performance panel, click on the Export button. Multiple platforms are available.



For our example, choose the Dockerfile format (Note: this format is really useful for working with other Microsoft services). You can now build and run the Docker image as usual (the image name, here customvision, is up to you):

docker build -t customvision .
docker run -p 127.0.0.1:80:80 -d customvision

Once your container is running, you can access the API with curl. For instance, POST an image and get a JSON response from the model API (Note: take a look at the Readme.md of the project you just downloaded!).
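The same call can be made from Python with only the standard library. A word of caution: the endpoint path below (/image, raw bytes as application/octet-stream) is an assumption based on typical Custom Vision exports; check the Readme.md shipped with your download for the exact route.

```python
import json
import urllib.request

# Assumed endpoint of the exported container (see its Readme.md).
ENDPOINT = "http://127.0.0.1/image"

def build_request(image_bytes, endpoint=ENDPOINT):
    """Build the POST request that sends raw image bytes to the model."""
    return urllib.request.Request(
        endpoint,
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )

def classify(image_path, endpoint=ENDPOINT):
    """Send an image file to the container and return the parsed JSON prediction."""
    with open(image_path, "rb") as f:
        request = build_request(f.read(), endpoint)
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))
```

With the container up, `classify("my_photo.jpg")` should return the tag predictions as a dictionary.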

Enjoy !

My First Developers' Meeting as a Speaker

Since I started my latest project (Distributed AI in IoT Devices), I have to admit that I have learned a lot of new things in very different fields: Artificial Intelligence, Mathematical Modelling, Project Architecture, Craftsmanship, IoT, ... This project has also given me the opportunity to experiment a lot: new languages, new tools, new methods.

With that kind of experience, I soon got in touch with an association of developers: the Microsoft User Group of Lyon (MUG Lyon). They offered me a new challenge: present my project as feedback on my experience in front of other developers. After some thought, I decided to take up this challenge and to present my project from a very specific angle: "Are Craftsmanship Good Practices Achievable in an AI and IoT Project?".

Why did I say yes ?

This was a great opportunity for me to reconcile the two things I love most in my work: Artificial Intelligence and Craftsmanship best practices.

When I started my double degree in AI, a lot of people told me that engineering and science are two very separate fields that do not mix. I believe they are wrong, since I am mixing them in my current project. Indeed, I currently use all of my skills (from all of my past experiences) to carry out this project. And I am very proud of that.

There is no reason to reject good practices just because a project involves complex mathematical calculations. Moreover, good practices make the code more easily accessible to any developer: no need to be an expert in mathematics.

Last but not least, this was a great opportunity to improve my social and communication skills. I worked hard on this presentation to explain the project as simply as possible and to produce a speech accessible to anyone. Those kinds of skills are very useful to develop, and I am happy to have tested them in a real professional context.


Thanks to the MUG for this great opportunity!

The meetup event :
https://www.meetup.com/fr-FR/MUGLyon/events/250854003/



How To “Clean Code”

A long time ago, during my first internship, I read Clean Code. Immediately after, I started to apply the global ideas hidden in this bible for developers. But this task can be quite complicated, because it takes a long time and the result can end up worse than before. Here are the 4 ideals that I follow in order to apply Clean Code's principles without getting lost.

1. Start by defining what is most urgent

Clean Code gives a lot of little rules to apply in order to produce a beautiful and readable program: don't write global functions, don't use flags as function parameters, functions should only be one level of abstraction, and so on… But implementing all of them can take a lot of time and requires a lot of practice, because not all of them are that simple to achieve.
In the ocean of rules that you should apply (according to Clean Code), I believe it is important to focus on just a few of them at the beginning. The benefit is to achieve more readable code sooner than expected, but also to turn your "most important" rules into automatisms faster.
In my own case, here are my “most important” rules that I focus on every day :
  1. Every element of the code should have a meaningful name (file, variable, function, class, …)
  2. Max indent per method should be 2 or 3
  3. Use Constants
  4. Don’t Repeat Yourself
If you noticed, those rules only address the readability of the code itself. But choosing to focus on them first does not mean that I don't try to apply the other rules. It is just my priority.
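As an illustration of rules 1 and 3, here is a small before/after refactoring in Python (the pricing example itself is invented):

```python
# Before: cryptic names and magic numbers hide the intent.
def f(a):
    return [x * 1.2 for x in a if x > 10]

# After: meaningful names (rule 1) and constants (rule 3)
# make the same logic read like a sentence.
TAX_RATE = 1.2
MINIMUM_TAXABLE_PRICE = 10

def apply_tax(prices):
    """Apply the tax rate to every price above the taxable minimum."""
    return [price * TAX_RATE
            for price in prices
            if price > MINIMUM_TAXABLE_PRICE]

print(apply_tax([5, 20, 50]))
```

Both functions behave identically, but only the second one can be understood without reverse-engineering what 1.2 and 10 mean.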

2. SOLID is difficult, but the Single Responsibility Principle is the key

The SOLID principles are kind of difficult to achieve, especially when the project is big. (Know that the Dependency Inversion Principle gives me nightmares!)

If there is one that is the most important and the easiest to apply, it surely is the Single Responsibility Principle: a class or a function has one and only one responsibility. Refactoring my code this way helps me achieve more readable code quickly. It is also very convenient for testing.
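Here is a small, invented Python illustration of the principle: a class that used to both compute a report and write it to disk is split so that each class has a single reason to change.

```python
class ReportBuilder:
    """Only knows how to build the report's content."""
    def build(self, measurements):
        average = sum(measurements) / len(measurements)
        return f"{len(measurements)} measurements, average {average:.1f}"

class ReportWriter:
    """Only knows how to persist a report."""
    def write(self, report, path):
        with open(path, "w") as f:
            f.write(report)

report = ReportBuilder().build([2, 4, 6])
print(report)  # 3 measurements, average 4.0
```

Because the computation no longer touches the file system, ReportBuilder can be unit-tested with a one-line assertion, which is exactly the testing convenience mentioned above.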

3. Don’t lose your mind

This point is important for me because I am a perfectionist. But clearly, producing perfect code is not possible.
First, because it requires too much time.
Secondly, because I may see a piece of code as perfect while my coworkers are still unable to read it: it is only my own perception.
Finally, because sometimes splitting the code too much makes it unreadable.
Even though refactoring is good, I try not to apply every rule literally, because that can have the opposite effect.

4. Don’t forget tests

Last, but not least, you need to clean the tests too!
They can be read by any developer, and it is not that hard to refactor them as well. It will then be easier to fix failing tests afterwards ;-).

Et Voilà !

I gave you all of the tricks that help me write better code. Of course, it is just my way of doing it, so feel free to adapt this advice to your own situation.
Now, if you will excuse me, I have some refactoring to do.

BDD - My Own Specification Sheet

I like to say that I don't have much time ahead of me in this project, since my internship is only six months long. On the one hand, this short time frame is not an excuse to skip tests. But on the other hand, I cannot spend most of my time writing them. A good compromise for me was to write behavior-driven development tests.

The Behavior-Driven Development (BDD) method allows me to test the really important features without testing every single line of my code. It helps me create minimal code that answers a specific need.

To apply this method to my Python code, I use behave. Behave is a framework, similar to Cucumber, that allows me to merge my specifications and my tests. This is really powerful, since I can now show my test results to anyone on my project: everybody can understand what is working and what is not.

Here is a little illustration.


First, I describe my feature just as behave expects. I write it myself, since the project is specific to my internship, but it could be written by an analyst, a customer, or whoever is in charge of those scenarios.
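For illustration, a behave feature file looks like this (the scenario below is an invented example, not my actual specification):

```gherkin
Feature: Syringe pump supervision
  In order to avoid alarms ringing unattended,
  nurses should be called before a syringe pump runs empty.

  Scenario: A pump is about to ring
    Given a syringe pump with 5 minutes of product left
    When the remaining time drops below the alert threshold
    Then a nurse intervention is requested
```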


Then, I - the developer - can bind the scenario to the right test code using specific behave tags: @given, @when and @then. It can seem quite annoying to separate those 3 steps, but everything can be stored in the context. It also makes every step reusable across multiple tests!
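Since behave may not be installed everywhere, here is a toy, stdlib-only sketch of the mechanism it relies on: decorators register step functions in a registry, and a shared context object carries state between the @given, @when and @then steps (real behave additionally parses the feature files and matches the sentences for you; the scenario below is invented):

```python
import types

STEP_REGISTRY = {}

def step(sentence):
    """Register a step implementation under its sentence
    (a toy stand-in for behave's decorators)."""
    def decorator(func):
        STEP_REGISTRY[sentence] = func
        return func
    return decorator

# In real behave these would be three distinct decorators.
given = when = then = step

context = types.SimpleNamespace()  # behave passes a similar context object

@given("a syringe pump with 5 minutes left")
def setup_pump(context):
    context.minutes_left = 5

@when("the remaining time drops below the threshold")
def drop_below(context):
    context.alert = context.minutes_left < 10

@then("a nurse intervention is requested")
def check_alert(context):
    assert context.alert

# Run the scenario: look each sentence up in the registry.
for sentence in ("a syringe pump with 5 minutes left",
                 "the remaining time drops below the threshold",
                 "a nurse intervention is requested"):
    STEP_REGISTRY[sentence](context)
print("scenario passed")
```

Because each step only talks to the context, any step can be reused in another scenario, which is exactly the benefit described above.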

Behave also provides a lot of other features for writing very particular tests, like the concept of "work in progress" scenarios or the use of step data if needed.

Finally, when I run those tests, the results are shown in a very understandable way:


Note that the path following the '#' indicates the location of the method associated with each step. This is really useful for refactoring the code while still being able to fix failing tests.





How I Made My First P.O.C

Everything has been going really fast since the start of this project. After only two months of hard work, I am starting the development of (kind of) a Proof Of Concept (P.O.C)!

I will not explain here what a Proof of Concept is or what it is for, but I will try to explain how I proceeded in my very specific case.

1. State of the art

Since I had considered a DCOP algorithm as part of my solution for a long time, I first reviewed a lot of different algorithms, trying to find the best one for my project. I took a look at ADOPT and DPOP, and even at some more specific ones like CoCoA. Finally, I chose the DPOP algorithm (Distributed Pseudo-tree Optimization Procedure), which gives me the most advantages:
  • It is one of the fastest in terms of execution time, which is always a nice advantage.

  • All agents are ranked within a DFS tree, which allows them to organize themselves during the process.

  • Also, DPOP is a 100% decentralized method, since all agents execute the same code: there is no "intelligent mediator" to manage them. This is really convenient, since the system does not rely on a central process.
In parallel with this research phase, I made my own inquiries about the medical field and, more precisely, nurses' work. My objective was to gather enough information about their processes to define a clear purpose for my project. And to do so, I needed to understand their needs.

2. Mathematical aspect

With this first step done, I defined the main goal of my system:
"Avoid syringe pumps ringing without involving nurses too often"
Now that this was settled, it was time to start the tough part: I needed to translate my constraints into mathematical language in order to encode them in the DPOP algorithm. This is required because the algorithm optimizes constraints through a (mathematical) minimization function. Therefore, I needed to describe my constraints as functions. Below are some examples.
A first function expresses that, for Mi = {m1, …, ml} the set of devices linked to agent i: if the agent has no devices, then there is no need to call the nurse.
A second function transcribes the following constraint: if two agents are in the same geographical area (i.e., they are neighbors), they can synchronize themselves in order to avoid two interventions within a t_synchro lapse of time.
To understand those functions, note that vi is the hypothetical time remaining before the next nurse passage in the room. For instance, if vi = 5, the next passage of a nurse is encouraged within the next 5 minutes. Thus, the algorithm tries to find the best vi assignment for each agent i of the system, given those constraints (by trying to minimize their results).
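As a rough illustration of how such constraints could be encoded, here is a hedged Python sketch. The exact formulas and penalty values are invented (they are not my real cost functions); the point is only that DPOP needs each constraint to return a cost to minimize:

```python
# Hypothetical cost functions for the DPOP minimization.
# v_i is the time (in minutes) before the next encouraged nurse passage.

HIGH_COST = 1000      # invented penalty value
HORIZON = 30          # invented 30-minute planning horizon

def no_device_cost(v_i, devices):
    """If an agent has no devices, calling the nurse soon is useless:
    penalize small v_i values."""
    if not devices and v_i < HORIZON:
        return HIGH_COST
    return 0

def synchro_cost(v_i, v_j, t_synchro=10):
    """Neighboring agents should synchronize: two distinct interventions
    closer than t_synchro minutes apart are penalized."""
    gap = abs(v_i - v_j)
    if 0 < gap < t_synchro:
        return HIGH_COST
    return 0

# Two neighbors proposing passages 4 minutes apart get penalized;
# perfectly synchronized passages do not.
print(synchro_cost(5, 9))   # 1000
print(synchro_cost(5, 5))   # 0
```

DPOP would then search for the assignment of all vi values that minimizes the sum of such costs across the system.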

3. Hands on keyboard

With those constraints, I finally started to code! Here is a photo of the current installation.
I am working with two Raspberry Pis: each one of them is an agent, which means that my DPOP algorithm is running on both of them at the same time.
I also have an AVNET Linux server which runs specific "server" code. This process is here to give a kind of Let's go! signal that allows all agents to start the DPOP algorithm at the same time. This is just for the implementation; maybe I will remove it later, when my agents are more advanced.
For those who are interested, I coded this algorithm in Python 3, and my agents/server communicate over the MQTT protocol.
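An MQTT client library would handle the real transport; as a stdlib-only illustration of the coordination idea, here is a simulation of the "Let's go!" broadcast where a threading.Event stands in for the MQTT topic the agents subscribe to (all names are invented):

```python
import threading

start_signal = threading.Event()   # stands in for the MQTT "start" topic
results = []
results_lock = threading.Lock()

def agent(agent_id):
    """Each agent blocks until the Let's go! before starting DPOP."""
    start_signal.wait()            # like waiting for the MQTT message
    with results_lock:
        results.append(f"agent {agent_id} started DPOP")

# Two threads play the role of my two Raspberry Pi agents.
threads = [threading.Thread(target=agent, args=(i,)) for i in range(2)]
for t in threads:
    t.start()

start_signal.set()                 # the server's Let's go! broadcast
for t in threads:
    t.join()
print(sorted(results))
```

The important property is the same as with the real MQTT setup: no agent starts the algorithm before the single broadcast, so both start (nearly) simultaneously.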

3 Things That Inspired Me After #SIdO18

On the 4th and 5th of April 2018, I attended a French IoT showroom: the SIdO in Lyon. On this occasion, I was able to dive deeper into the IoT world and the place of Artificial Intelligence in it. Here are 3 takeaways that I will try to keep in mind:

1. AI & IoT to upgrade our process


Nowadays, Lean Management is a well-known method in the industry. It has surely helped a lot of companies with their development and their organisation. But we are now at the beginning of the 4th industrial revolution, which is leading to the creation of "smart factories" built on the Internet of Things.

Following that path, it appears that every company is now on its way to 4.0, but no one has anything concrete to show yet. This inability to get projects into production reveals a problem in the analysis phase: the new services that those projects will create need to be considered wisely. The key here is to focus on concrete use cases that will give real benefits to consumers. This is a trending topic: even in the industrial world, everything is about "services" rather than the product itself (customers prefer to use Uber instead of their own cars, for instance!). Taking this idea into consideration is part of the challenge!

Therefore, 4.0 Industry brings two main kinds of value to companies:
  • Lower costs and predictive maintenance
  • Improved process quality (by using cognitive services that aggregate unstructured and structured data). For instance, sound is becoming more and more important, and there are lots of opportunities to explore.
In a way, this mix of IoT and Artificial Intelligence offers a computer science complement to Lean Management. We can take examples from Edge AI (i.e., when we put AI directly on devices), which can give better results than the Machine Learning - Cloud combination. We can also take inspiration from Asset Intelligence, which allows us to rely on real industry experts instead of meaningless data.

The success of industrial projects using these kinds of technologies is based, once again, on how clearly the use cases are defined. The idea is to know exactly which question we need to answer (through KPIs). It is also vital not to forget the human factor, because bringing Artificial Intelligence (and IoT) into industry is a sensitive task which requires specific attention to the human and social aspects (to give a concrete example: it needs to be explained to the staff properly; otherwise, the human factor will lead to rejection of the technology, which can result in complete failure of the entire project).

2. AI & IoT : it’s a match !


Artificial Intelligence is a smart way to revolutionize every business. Not only can it bring more autonomy to connected objects, but it will above all give them the ability to adapt themselves to situations without human intervention.

I believe that the AI - IoT couple can change our lives, because it opens new opportunities: we are now able to solve (previously unsolvable) problems with Artificial Intelligence. We are already seeing it in fields where AI was not expected: construction of aircraft, fridges, tractors, and so on… Those successful examples happened thanks to good collaboration between customers and developers.

The main goal of IoT here is to provide Artificial Intelligence with good data. By "good data" I mean qualitative data that can potentially be useful now or in the future. From a technical perspective, sensors get smaller every day and can retrieve more and more data. Therefore, the question is not "What information can I get?" but "How can I get this information?". Thus, we mainly need to focus on data management (storage, cleaning, …).

In any project, it is clear that data is very valuable because it helps algorithms/AI understand specific scenarios. But to obtain sustainable results, we need to be sure that our data does not carry too much bias, which is very difficult and can lead to unexpected costs.

3. AI & Medical is the new sexy!


In a different context, the medical field has interested me since the beginning of my new project. Nowadays, integrating Artificial Intelligence into that field can seem quite difficult considering the current political and ethical climate. But it is also a domain that inspires a lot! Many little (r)evolutions are on their way. We can classify medical innovations into 5 technological groups:
  • Telemedicine (Artificial Intelligence prescribing medications, medical web forums, …)
  • The use of Big Data (which gives a better understanding of patients and their environment)
  • Augmented reality
  • The augmented patient (with an artificial heart, for instance)
  • The augmented surgeon
These fields are encouraging for the improvement of care. For instance, human relations can be an obstacle to diagnosis, simply because the patient does not use the same vocabulary as the doctor. Also, doctors and nurses can be overwhelmed by their duties. We can find many other problems that shrink patient care services!

As a response to all those issues, a first idea is to use data in combination with Artificial Intelligence (through Machine Learning and Data Science). While this solution is quite "basic" and common, it can surely increase the doctor's listening time, which will lead the domain in a more human direction.

Conclusion


I would say that we are already in the process of appropriating Artificial Intelligence, and it is no longer a utopian subject. There are more and more intelligent systems around us (Siri, self-driving cars, Machine Learning marketing processes, …). But in the end, those programs need to strike a smarter balance between intrusion and the value they bring to our lives. Indeed, the integration of Artificial Intelligence into our products/services is quite a sensitive aspect that we need to focus on.

IoT and Artificial Intelligence are a great match, but they surely raise more and more difficulties that we need to keep in mind if we want to bring more improvements to our lives.

Improve the Configuration of Docker Compose with Environment Variables

I recently started working on a new python project (yeah!). This project is really interesting, but the first lines of code are at least a d...