Published On October 31, 2017
- posted in Blog, Opinion, Projects & Know-How

What was once considered science fiction is becoming reality now that developers have the resources and tools to build artificial intelligence programs. Increased computing power in hardware, massive amounts of data and the wide availability of dedicated frameworks are all propelling the development of machine learning.

Still, artificial intelligence and machine learning are both a challenge and an opportunity for business and developers alike. While they can secure a competitive advantage for companies, developing such applications is largely uncharted territory.

As we constantly invest in research, we’ve started experimenting with these technologies ourselves. Here’s what we learned from it.

Note – This is a lengthy post. We’ve covered what machine learning is, how it works, the types of machine learning algorithms, our machine learning projects at Qubiz and what we learned.

Machine learning, artificial intelligence, Big Data – what’s what?

All of these are relatively new concepts, both for business and for technology, so let’s clear things up before we dive in.

Artificial intelligence is a branch of computer science that deals with the simulation of intelligent behavior in computers. The term is also used when a machine mimics cognitive functions associated with human behavior, such as learning and problem-solving.

Machine learning is an application of artificial intelligence that gives systems the ability to automatically learn and improve from experience. It focuses on developing programs that can access data and use it to learn, in order to make better decisions (source).

Big Data refers to large amounts of structured, semi-structured and unstructured data that can be mined for information. It covers capturing, storing and analyzing data. Big Data has made machine learning possible by providing the data needed to build machine learning programs.

How does machine learning work?

Machine learning has two basic approaches:

  • From top to bottom: This approach goes from high level to detail. It starts with an understanding of the type of problem you need to solve and of the approaches and algorithms that have been used to solve similar challenges. The next step is finding a library that implements that algorithm and applying it to your problem. Once you get the correct result, study the details to understand how it works.
  • From bottom to top: This approach is a favorite with developers who need to understand every step. Understanding basic machine learning concepts is a must, followed by an understanding of what type of algorithm is required to solve a problem. The next step is finding the libraries and frameworks and learning what goes on behind the scenes. From bottom to top is a good approach for research and for tackling new problems.

Teaching machines involves a process in which each stage builds a better version of the machine. The process can be broken down into 3 parts, as seen below.

Largely, machine learning projects have 3 major steps:

  • Collecting and cleaning data: This step lays the foundation. Data can be collected in various formats: CSV, text files, spreadsheets, database output, etc. To get good results, the data must also be appropriately formatted and cleaned – no missing values or outliers that might skew the results.
  • Training: In this stage, developers select the appropriate algorithm to form the model. The data collected beforehand is used to train and develop the machine learning model.
  • Evaluation: This step determines the accuracy of the selected algorithm. To check the accuracy level, test with data that hasn’t been used in the training phase. If the algorithm doesn’t return the right results, developers can select a different algorithm or add more data.
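To make these steps concrete, here is a minimal sketch in Python. The toy dataset and the deliberately simple nearest-centroid classifier are made up for illustration – they are not the algorithms or data from our projects:

```python
# Step 1 – collect and clean: drop records with missing values or
# obvious outliers that would skew the results.
def clean(rows):
    return [
        (x, label) for x, label in rows
        if x is not None and 0 <= x <= 100  # assumed valid range
    ]

# Step 2 – train: fit a model on the cleaned data. Here the "model"
# is just one centroid (mean value) per class.
def train(rows):
    sums, counts = {}, {}
    for x, label in rows:
        sums[label] = sums.get(label, 0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, x):
    return min(model, key=lambda label: abs(model[label] - x))

# Step 3 – evaluate: measure accuracy on data NOT seen during training.
def evaluate(model, test_rows):
    hits = sum(predict(model, x) == label for x, label in test_rows)
    return hits / len(test_rows)

# Toy dataset: (value, label) pairs with one missing value and one outlier.
data = [(1, "low"), (2, "low"), (None, "low"), (950, "high"),
        (90, "high"), (95, "high"), (5, "low"), (88, "high")]
train_rows, test_rows = clean(data)[:4], clean(data)[4:]
model = train(train_rows)
print(evaluate(model, test_rows))  # → 1.0 on this tiny held-out set
```

If the accuracy in step 3 is too low, you loop back: pick a different algorithm or add more data, exactly as described above.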

Types of machine learning algorithms

Machine learning algorithms can be classified into:

  • Supervised learning (Predictive models)
    This type of algorithm predicts an outcome based on labeled historical data. It can be used, for example, to find out which customers are likely to churn or for budget forecasts. Predictive models receive clear instructions from the beginning as to what needs to be learned.
  • Unsupervised learning (Descriptive models)
    This type of algorithm doesn’t have a clear goal or outcome; instead, it discovers patterns hidden in the data. One popular application is product recommendations – what other products customers might be interested in after buying a TV, for instance.
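As a toy illustration of the unsupervised case, here is a simplistic one-dimensional k-means in Python: it is given numbers with no labels at all and has to find the groups on its own (the data is made up for illustration):

```python
# Simplistic 1-D k-means: group unlabeled points into k clusters by
# repeatedly assigning each point to its nearest centroid and then
# recomputing the centroids.
def kmeans_1d(points, k=2, iterations=10):
    centroids = sorted(points)[:k]  # naive initialisation
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(centroids[i] - p))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its cluster (keep it if empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

print(kmeans_1d([1, 2, 3, 50, 51, 52]))  # → [[1, 2, 3], [50, 51, 52]]
```

Nobody told the algorithm which numbers belong together – unlike the supervised case, where the labels are part of the training data.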

Machine learning projects at Qubiz

While developing Big Data solutions, we worked with tools and frameworks that could scale across thousands of systems. But to get real value from collected data, companies need more than traditional data analysis solutions. This is where machine learning comes in, and that’s how we started researching the topic.

We have worked on two projects involving machine learning.

Project 1: This project centered on natural language processing, the part of artificial intelligence that deals with speech recognition and understanding human language. The available data set consisted of recorded customer support conversations. The challenge here was two-fold. First, we had to transcribe the audio conversations without having someone do this manually, a process referred to as speech to text. The next step was to perform sentiment analysis and entity detection to understand what each sentence was about and what feelings it conveyed.
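To give an idea of what sentiment analysis does, here is a greatly simplified lexicon-based scorer in Python. The word lists are made up for illustration – the actual project relied on trained models, not a fixed lexicon:

```python
# Greatly simplified sentiment scoring: count positive and negative
# lexicon hits in a transcript and clip the score to [-1, 1].
# The word lists below are illustrative, not a real sentiment lexicon.
POSITIVE = {"great", "helpful", "thanks", "resolved"}
NEGATIVE = {"broken", "angry", "refund", "unacceptable"}

def sentiment(transcript):
    """Return -1 (negative), 0 (neutral) or 1 (positive)."""
    words = transcript.lower().split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    return max(-1, min(1, score))

print(sentiment("thanks the issue is resolved"))    # two positive hits
print(sentiment("this is broken i want a refund"))  # two negative hits
```

A real pipeline would run this kind of scoring on the speech-to-text output, alongside entity detection to identify what the conversation is actually about.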

Project 2: Object recognition and computer vision are among the oldest problems in artificial intelligence. Our goal was to build a program that could detect and localize a small object in a given image. We collected the data ourselves and started experimenting. The first machine learning library we experimented with was OpenCV, as it’s a mature solution with lots of resources, available across multiple devices. It also didn’t require a lot of resources to run.

Since the object we wanted to detect was rather small, OpenCV with Haar features didn’t return the appropriate results. TensorFlow has a module dedicated to object detection, so we tried it next, selecting a model pre-trained on a larger dataset. We retrained it with the same data we had fed to OpenCV and got better results. As we increased the training set, the quality of the results improved considerably. We’re now experimenting with other frameworks: Caffe, DIGITS, and CNTK.
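As a side note on how such results get compared: detection quality is commonly scored with intersection over union (IoU) between a predicted bounding box and the ground-truth box. Here is a generic Python sketch of that metric (not code from our project):

```python
# Intersection over union (IoU) for axis-aligned boxes (x1, y1, x2, y2).
# A prediction is typically counted as correct when IoU >= 0.5.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # half-overlapping boxes
```

Tracking a metric like this across frameworks is what lets you say one setup “got better results” than another, rather than eyeballing the detections.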

6 things we learned from machine learning projects

1. Data is paramount

Currently, supervised deep learning algorithms provide the best results. However, they need a lot of data to generate good results. We knew this beforehand, but it’s something different to see it in practice. In the computer vision project, when we increased the amount of data in the training set, the algorithm’s accuracy improved considerably. We started with a set of 4,000 images, going up to 10,000 images with 20,000 objects. To build a machine learning algorithm you need huge amounts of high-quality data, and that data is what can provide a competitive advantage. There are models that don’t require a lot of data, but they don’t deliver comparable results.

2. Data takes up a lot of time

Collecting and cleaning data takes up a lot of time. Still, the output depends on the data you feed into the model during the training phase. In our experience, collecting and cleaning data took up about 70% of the time allocated to the project. Only 30% of the time was spent on actually solving the problem.

3. A lot of people talk, but few of them do

We learned this while working on the first project. While researching and drilling through articles on the subject, we found a lot of material about machine learning and a lot of similar tutorials. At some point, however, we started running into problems that weren’t covered in those tutorials. In those cases we had to dig deeper and search forums for potential answers.

4. You need the computing resources

For the first project, running a simple test required a machine with considerable GPU power, for days or even weeks. This was also one of the reasons we chose OpenCV for object detection: it didn’t need a lot of resources.

5. You need to experiment to find the right algorithm

For the computer vision project, we tested different libraries; there was no way to tell beforehand which one would solve our particular problem best. OpenCV, TensorFlow, Caffe, DIGITS, CNTK – we tested all of them to see which one provided the best results. Also, there is no “best machine learning framework” – don’t get drawn into that debate. Each framework has its advantages and disadvantages. Experiment with various algorithms, libraries and frameworks to see which one works best for you.

6. Working with bleeding-edge technology has its perks

One of the things we liked about OpenCV was that it’s a mature solution: it has a large community, lots of examples and solid documentation. By comparison, the TensorFlow object detection API is still in beta. Sometimes we ran into bugs that were fixed the same day; other times we ran into problems we didn’t even know existed. Still, it’s challenging, interesting work.
