In the modern age, where machines have overtaken humans in many areas, it is fascinating to see how machines learn just by looking at data, analyzing it, and making decisions based on what they find. But as simple as it may seem, it's not that easy when you do it yourself.

If you are working with offline machine learning, the main reasons are probably that it doesn't need an always-on server and that it costs less than online machine learning. Even so, you may find yourself in trouble when you don't have a high-spec machine and you need to run a big training job. There is no need to worry, though, if you are familiar with the concept of mini-batches.

Before we dive into mini-batches, I'd like to go over some basic machine learning concepts, namely epoch and weight. These will make mini-batches easier to understand.

So, let's talk about weight in machine learning first. According to Deepai.org:

“Weight is the parameter of a neural network that transforms input data into the hidden layers of the network. A neural network is a series of nodes, or neurons. Within each node is a set of inputs, a weight, and a bias value.”

Simply put, the weights are the learnable parameters of the model you've created; training adjusts them so that the model's output moves closer to the desired output.

The word epoch in machine learning refers to one full pass over the entire training dataset, during which the model's weights are updated.
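To make the first of these two terms concrete, here is a minimal sketch of a single node matching the quote above: one weight per input plus a bias value. The numbers are made up for illustration, not taken from any real network.

```python
import numpy as np

# Hypothetical single node with three inputs: one weight per input,
# plus a bias value, exactly as in the Deepai.org definition.
weights = np.array([0.5, -0.2, 0.1])
bias = 0.0

def node(x):
    # Weighted sum of the inputs plus the bias.
    return np.dot(weights, x) + bias

x = np.array([1.0, 2.0, 3.0])
print(node(x))  # 0.5*1.0 + (-0.2)*2.0 + 0.1*3.0 + 0.0 = 0.4
```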

Mini-Batch Learning

Before introducing mini-batches, let's imagine a scenario where you have a model to train on a dataset of millions of images. When you run this model, all of the images pass through it, the data is analyzed and the corresponding output is generated, and only after all of that processing are the weights updated. This is the standard rule of offline (batch) learning, and as you may have noticed, the process can take a long time; if anything goes wrong with your hardware along the way, the whole run can be ruined. This is where mini-batches come in and solve these problems.
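To see what that standard rule looks like in code, here is a minimal sketch of batch (offline) learning on a hypothetical toy dataset, fitting a simple line instead of the millions of images. The key point is that the weights are touched exactly once per epoch, after the whole dataset has been processed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dataset: noisy samples of y = 2x
# (standing in for the millions of images).
X = rng.uniform(-1.0, 1.0, size=1000)
y = 2.0 * X + rng.normal(0.0, 0.1, size=1000)

w, b = 0.0, 0.0  # the weights to be learned
lr = 0.5         # learning rate

for epoch in range(50):  # one epoch = one full pass over the data
    y_pred = w * X + b   # the ENTIRE dataset goes through the model
    error = y_pred - y
    # Only after every example has been processed are the weights
    # updated, exactly once per epoch.
    w -= lr * np.mean(error * X)
    b -= lr * np.mean(error)

print(f"w = {w:.3f}, b = {b:.3f}")  # should approach w = 2, b = 0
```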

Now let's imagine the same scenario again, but with mini-batch learning.

In mini-batch learning, that big chunk of millions of images is divided into smaller batches, called mini-batches, to be processed one by one; each mini-batch carries both the inputs and the corresponding outputs.
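Here is a minimal sketch of that splitting step, assuming the dataset fits in NumPy arrays X and y. The function name make_mini_batches is my own, not a library call.

```python
import numpy as np

def make_mini_batches(X, y, batch_size):
    # Shuffle the data, then yield it one (inputs, targets) mini-batch
    # at a time; the last batch may be smaller than batch_size.
    indices = np.random.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = indices[start:start + batch_size]
        yield X[batch], y[batch]
```

With batch_size=32, for example, a dataset of one million images would be split into 31,250 mini-batches per epoch.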


In this scenario, the mini-batches are fed into the model one by one as input, the error on each batch is computed, and the weights are updated based on it. This process continues until the last mini-batch has been analyzed and the weights updated, which completes one epoch. Whether training runs for a single epoch or thousands of epochs, the pattern stays the same: instead of one massive run, we give our neural network one manageable batch at a time to run, analyze, and update the weights.
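Continuing the toy example from above, and reusing the make_mini_batches helper, here is what the same training looks like with mini-batches. The only real change from the batch version is that the weight update now happens inside the inner loop, once per mini-batch instead of once per epoch.

```python
# Same toy problem as the batch-learning sketch above; X, y and
# make_mini_batches are defined in the earlier snippets.
w, b = 0.0, 0.0
lr = 0.1
batch_size = 32

for epoch in range(5):  # each epoch still visits the whole dataset
    for X_batch, y_batch in make_mini_batches(X, y, batch_size):
        y_pred = w * X_batch + b  # only this mini-batch is processed
        error = y_pred - y_batch
        # Weights are updated after EACH mini-batch.
        w -= lr * np.mean(error * X_batch)
        b -= lr * np.mean(error)

print(f"w = {w:.3f}, b = {b:.3f}")  # again approaches w = 2, b = 0
```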

Mini-Batch Size

The reason for using mini-batches is that smaller batches let us run data through the neural network and update the weights more frequently, which is why the mini-batch size needs to be set carefully to get the most out of the technique. If the size is very large, each pass takes longer and the network is updated less frequently, which can eventually hurt accuracy. If the size is very small, the network is updated more frequently, but a handful of mislabeled or noisy examples can dominate a batch and push the weights in the wrong direction, and the constant updates add overhead that slows down processing and increases the time it takes. That's why it's important to choose a batch size that is neither too big nor too small. Generally, the batch size is set to a power of 2, such as 32, 64, or 128.
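As a rough illustration of that trade-off, here is how the number of weight updates per epoch changes with a few power-of-two batch sizes on a hypothetical dataset of a million examples:

```python
dataset_size = 1_000_000  # e.g. a million training images

# How often the weights get updated in one epoch for a few
# power-of-two batch sizes (ceiling division covers a partial last batch).
for batch_size in [16, 32, 64, 128, 256]:
    updates = -(-dataset_size // batch_size)
    print(f"batch size {batch_size:>3} -> {updates:>6} weight updates per epoch")
```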
