Tuning parameters to improve accuracy in ConvNets
So far we have built convolutional neural networks with regularization techniques in TensorFlow, but we have not focused on improving the accuracy of the ConvNets. In this tutorial, we will tune the parameters to get the best accuracy possible. I was able to get 96%; let's see how much accuracy you can get.
I ran tests with several variations: dropout, L2 regularization, learning rate decay, batch size, and number of steps. The best result came from simply adding dropout and tuning the batch size and the number of steps.
In all cases I used a two-layer conv network with two fully connected layers, ReLU activations, and max pooling (a sketch of this architecture follows the list). The following three properties remain the same in all cases:

- Patch size = 5
- Depth = 16
- Hidden units = 64
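For reference, here is a minimal sketch of that architecture using the Keras API; the input shape (28×28 grayscale, notMNIST-style) and the 10 output classes are assumptions on my part, while the patch size, depth, and hidden units match the properties above.

```python
import tensorflow as tf

# Two conv layers (patch size 5, depth 16) with ReLU and max pooling,
# followed by two fully connected layers (64 hidden units).
# Input shape and number of classes are assumptions, not from the original code.
patch_size, depth, num_hidden, num_labels = 5, 16, 64, 10

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(depth, patch_size, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2, strides=2),
    tf.keras.layers.Conv2D(depth, patch_size, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2, strides=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(num_hidden, activation="relu"),  # first FC layer
    tf.keras.layers.Dense(num_labels),                     # second FC layer (logits)
])
```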
With L2 Regularization:
| Iterations | Batch size | Reg. constant | Accuracy (%) |
|-----------:|-----------:|--------------:|-------------:|
| 5001       | 16         | 0.01          | 93.6         |
| 8001       | 16         | 0.01          | 93.6         |
| 12001      | 16         | 0.01          | 91.2         |
| 8001       | 50         | 0.005         | 93.6         |
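To show how the reg. constant from the table enters the model, here is a sketch of adding an L2 penalty to a fully connected layer via Keras; applying it per layer on the weights like this is one common choice, not necessarily exactly how my runs wired it in.

```python
import tensorflow as tf

# L2 penalty on the fully connected weights; 0.01 is the reg. constant
# from the first three rows of the table above.
fc = tf.keras.layers.Dense(
    64,
    activation="relu",
    kernel_regularizer=tf.keras.regularizers.l2(0.01),
)
```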
With Dropout:
| Iterations | Batch size | Reg. constant | Accuracy (%) |
|-----------:|-----------:|--------------:|-------------:|
| 4001       | 50         | –             | 95           |
| 8001       | 50         | –             | 95.6         |
| 20001      | 50         | –             | 96           |
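Dropout slots in between the two fully connected layers; here is a minimal sketch (the 0.5 rate is an assumption, not a value from the table). Keras applies dropout only during training and disables it at evaluation time, which is exactly what you want.

```python
import tensorflow as tf

# Classifier head with dropout between the two FC layers.
# The 0.5 rate is an assumed placeholder; tune it along with batch size and steps.
head = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),   # active only during training
    tf.keras.layers.Dense(10),
])
```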
With Dropout and L2:
| Iterations | Batch size | Learning rate | Accuracy (%) |
|-----------:|-----------:|--------------:|-------------:|
| 12001      | 16         | 0.01          | 93           |
| 4001       | 30         | 0.005         | 93.2         |
| 4001       | 50         | 0.005         | 94.6         |
| 6001       | 50         | 0.005         | 95.2         |
| 10001      | 50         | 0.005         | 94.4         |
I also tested learning rate decay along with both regularization techniques, but the accuracy did not cross `91%` in my case, possibly due to the small number of iterations.
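For reference, this is the kind of schedule I mean by learning rate decay; the starting rate, decay steps, and decay factor below are illustrative placeholders, not my exact values.

```python
import tensorflow as tf

# Exponential learning-rate decay: rate = 0.05 * 0.9 ** (step / 1000).
# All three numbers here are placeholders.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.05, decay_steps=1000, decay_rate=0.9)
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule)
```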
I also removed the ReLU from the conv layers and added L2 with a constant of 0.01 and 8001 steps, and got 93.6%, no better than the other scenarios. Notably, it is the same accuracy as in the first table, where ReLU was present.
Here is the full code with which I got 96%.
I recommend playing with it and spending some time trying to beat my record; you can do it.
Tip: first increase only the number of steps if you can afford it, because dropout's impact shows up late in training. Also try adding more conv layers.