Welcome back to the Tiny Giant series, a series where I share what I learned about MobileNet architectures. In the past two articles I covered MobileNetV1 and MobileNetV2. Check out references [1] and [2] if you're interested in reading them. In today's article I would like to continue with the next version of the model: MobileNetV3.
MobileNetV3 was first proposed in a paper titled "Searching for MobileNetV3," written by Howard et al. in 2019 [3]. Just a quick recap: the main idea of the first MobileNet version was replacing full convolutions with depthwise separable convolutions, which reduced the number of parameters by nearly 90% compared to its standard CNN counterpart. In the second MobileNet version, the authors introduced the so-called inverted residual and linear bottleneck mechanisms, which they integrated into the original MobileNetV1 building blocks. Now in the third MobileNet version, the authors tried to push the performance of the network even further by incorporating Squeeze-and-Excitation (SE) modules and hard activation functions into the building blocks. Additionally, the overall structure of MobileNetV3 itself is partially designed using NAS (Neural Architecture Search), which essentially works somewhat like parameter tuning operating at the architectural level, maximizing accuracy while minimizing latency. However, note that in this article I won't go into how NAS works in detail. Instead, I'll focus on the final design of MobileNetV3 proposed in the paper.
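To get a feel for where that roughly 90% figure comes from, here is a quick back-of-the-envelope calculation of my own (the channel counts are arbitrary and only for illustration, not taken from the paper):

# Parameter count of a standard 3x3 convolution vs. a depthwise separable one,
# using hypothetical channel counts of 128 -> 256.
k, c_in, c_out = 3, 128, 256

standard_conv = k * k * c_in * c_out                 # 294,912 weights
depthwise_separable = k * k * c_in + c_in * c_out    # 1,152 + 32,768 = 33,920 weights

print(standard_conv, depthwise_separable)
print(f'reduction: {1 - depthwise_separable / standard_conv:.1%}')   # roughly 88%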
The Detailed MobileNetV3 Architecture
The authors propose two variants of this model, which they refer to as MobileNetV3-Large and MobileNetV3-Small. You can see the details of the two architectures in Figure 1 below.
Taking a closer look at the architecture, we can see that the two networks mainly consist of bneck (bottleneck) blocks. The configuration of the blocks themselves is described in the columns exp size, #out, SE, NL, and s. The internal structure of these blocks as well as the corresponding parameter configurations will be discussed further in the following subsection.
The Bottleneck
MobileNetV3 uses a modified version of the building blocks used in MobileNetV2. As I mentioned earlier, what makes the two different is the presence of the SE module and the use of hard activation functions. You can see the two building blocks in Figure 2, with MobileNetV2 on the top and MobileNetV3 on the bottom.

Notice that the first two convolution layers in both building blocks are basically the same: a pointwise convolution followed by a depthwise convolution. The former is used for expanding the number of channels to exp size (expansion size), while the latter is responsible for processing each channel of the resulting tensor independently. The only difference between the two building blocks lies in the activation functions used, which the authors refer to as NL (nonlinearity). In MobileNetV2, the activation functions placed after the two convolution layers are fixed to ReLU6, whereas in MobileNetV3 they can either be ReLU6 or hard-swish. The RE and HS you saw earlier in Figure 1 basically refer to these two types of activations.
Next, in MobileNetV3 we place the SE module after the depthwise convolution layer. If you're not yet familiar with the SE module, it is essentially a building block we can attach to any kind of CNN-based model. This component is useful for assigning weights to different channels, allowing the model to pay more attention to the important channels only. I also have a separate article discussing the SE module in detail. Click the link at reference number [4] if you want to read that one. It is important to note that the SE module used here is slightly different, in that the last FC layer uses hard-sigmoid rather than the standard sigmoid activation function. (I'll talk more about the hard activations used in MobileNetV3 in the next subsection.) In fact, the SE module itself is not always included in every bottleneck block. If you go back to Figure 1, you'll notice that some of the bottleneck blocks have a checkmark in the SE column, indicating that the SE module is applied. On the other hand, some blocks don't include the module, which is probably because the NAS process didn't find any performance improvement from using SE modules in those blocks.
Once the SE module has been attached, we need to place another pointwise convolution, which is responsible for adjusting the number of output channels according to the #out column in Figure 1. This pointwise convolution doesn't include any activation function, aligning with the linear bottleneck design originally introduced in MobileNetV2. I actually need to clarify something here. If you take a look at the MobileNetV2 building block in Figure 2 above, you'll notice that the last pointwise convolution has a ReLU6 placed after it. I believe this is a mistake made by the authors, because according to the MobileNetV2 paper [6], the ReLU6 should come after the first pointwise convolution at the beginning of the block instead.
Last but not least, notice that there is also a residual connection that skips across all layers in the bottleneck block. This connection is only present when the output tensor has exactly the same dimensions as the input, i.e., when the number of input and output channels is the same and the s (stride) is 1.
Hard-Sigmoid and Hard-Swish
The activation functions used in MobileNetV3 are not commonly found in other deep learning models. To start with, let's look at the hard-sigmoid activation first, which is the one used in the SE module as a substitute for the conventional sigmoid. Take a look at Figure 3 below to see the difference between the two.

Here you might be wondering: why don't we just use the conventional sigmoid? Why do we need a piecewise linear function that looks less smooth instead? To answer this question, we first need to understand the mathematical definition of the sigmoid function, which I show in Figure 4 below.

We can clearly see in the figure above that the sigmoid function involves an exponential term in the denominator. This term makes the function computationally expensive, which in turn makes the activation less suitable for low-power devices. Not only that, the output of the sigmoid function itself is a high-precision floating-point value, which is also not preferable for low-power devices due to their limited support for handling such values.
If you look at Figure 3 again, you might think that the hard-sigmoid function is directly derived from the original sigmoid. In fact, that's not quite right. Despite having a similar shape, hard-sigmoid is actually built using ReLU6 instead, which can formally be expressed as in Figure 5 below. Here you can see that the equation is much simpler since it only consists of basic arithmetic operations and clipping, allowing it to be computed much faster.

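As a quick sanity check, the sketch below (my own, not from the paper) builds hard-sigmoid out of ReLU6 exactly as described above and compares it against PyTorch's built-in nn.Hardsigmoid, which follows the same ReLU6(x + 3) / 6 definition:

# Hard-sigmoid constructed from ReLU6 vs. PyTorch's built-in implementation.
import torch
import torch.nn as nn

x = torch.linspace(-6, 6, steps=13)
manual = nn.ReLU6()(x + 3) / 6          # hard_sigmoid(x) = ReLU6(x + 3) / 6
builtin = nn.Hardsigmoid()(x)

print(torch.allclose(manual, builtin))  # True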
The next activation function we're going to use in MobileNetV3 is the so-called hard-swish, which is applied after each of the first two convolution layers in the bottleneck block. Just like sigmoid and hard-sigmoid, the graph of the hard-swish function looks similar to the original one.

The original swish function itself can mathematically be expressed as in the equation in Figure 7. Again, since the equation involves sigmoid, it would definitely slow down the computation. Hence, to speed up the process, we can simply replace the sigmoid function with the hard-sigmoid we just discussed. By doing so, we get the hard version of the swish activation function as shown in Figure 8.


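The same kind of check works for hard-swish: it is just the input multiplied by the hard-sigmoid above, which is also how PyTorch's nn.Hardswish is defined. A minimal sketch:

# Hard-swish built from ReLU6 vs. PyTorch's built-in implementation.
import torch
import torch.nn as nn

x = torch.linspace(-6, 6, steps=13)
manual = x * nn.ReLU6()(x + 3) / 6      # hard_swish(x) = x * hard_sigmoid(x)
builtin = nn.Hardswish()(x)

print(torch.allclose(manual, builtin))  # True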
Some Experimental Results
Before we get into the experimental results, you need to know that there are two parameters in MobileNetV3 that allow us to adjust the model size according to our needs. These two parameters are the width multiplier and the input resolution, which in MobileNetV1 are referred to as α and ρ, respectively. Although we can technically set the two values freely, the authors already provided several numbers we can use. For the width multiplier, we can set it to either 0.35, 0.5, 0.75, 1.0, or 1.25, where using a value smaller than 1.0 causes the model to have fewer channels than those listed in Figure 1, effectively reducing the model size. For instance, if we set this parameter to 0.35, the model will only have 35% of its default width (i.e., channel count) throughout the entire network.
Meanwhile, the input resolution can either be 96, 128, 160, 192, 224, or 256, which, as the name suggests, directly controls the spatial dimension of the input image. It's worth noting that even though using a small input size reduces the number of operations during inference, it doesn't affect the model size at all. So, if your goal is to reduce model size, you need to adjust the width multiplier, whereas if your goal is to lower computational cost, you can play around with both the width multiplier and the input resolution.
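The toy example below (my own numbers, not from the paper) illustrates this point with a single convolution layer: its weight count depends only on the channel configuration, while the number of multiply-adds grows with the input resolution.

# A single 3x3 convolution: parameters depend on channels, operations depend on resolution.
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=1, bias=False)
print(sum(p.numel() for p in conv.parameters()))   # 864 weights, regardless of input size

for res in (96, 224):
    out = conv(torch.randn(1, 3, res, res))
    madds = out.numel() * 3 * 3 * 3                # each output value costs 3x3x3 multiply-adds
    print(res, out.shape, madds)

# Shrinking the width instead (e.g., a 0.35 multiplier) does reduce the parameter count:
conv_small = nn.Conv2d(3, int(32 * 0.35), kernel_size=3, padding=1, bias=False)
print(sum(p.numel() for p in conv_small.parameters()))   # 297 weights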
Now looking at the experimental results in Figure 9, we can clearly see that MobileNetV3 outperforms MobileNetV2 in terms of accuracy at similar latency. The MobileNetV3-Small with the default configuration (i.e., width multiplier 1.0 and input resolution 224×224) does have lower accuracy than the largest MobileNetV2 variant. But if you take the default MobileNetV3-Large into account, it gets an easy win over the largest MobileNetV2 both in terms of accuracy and latency. Furthermore, we can still push the accuracy of MobileNetV3 even higher by enlarging the model size by 1.25 times (the blue datapoint at the top right), but keep in mind that doing so significantly sacrifices computational speed.

The authors also conducted a comparative analysis with other lightweight models, the results of which are shown in the table in Figure 10.

The rows of the table above are divided into two groups, where the upper group is used to compare models with complexity similar to MobileNetV3-Large, while the lower group consists of models comparable to MobileNetV3-Small. Here you can see that both V3-Large and V3-Small obtained the best ImageNet accuracy within their respective groups. It's worth noting that although MnasNet-A1 and V3-Large have exactly the same accuracy, the number of operations (MAdds) of the former model is higher, which results in higher latency, as seen in columns P-1, P-2, and P-3 (measured in milliseconds). In case you're wondering, the labels P-1, P-2, and P-3 correspond to the different Google Pixel phones used to benchmark the actual computational speed. Next, it's important to acknowledge that both MobileNetV3 variants have the highest parameter count (the params column) compared to the other models in their group. However, this doesn't seem to be a major concern for the authors, as the primary goal of MobileNetV3 is to minimize computational latency, even if that means having a slightly larger model.
The next experiment the authors conducted was about the effects of value quantization, i.e., a technique that reduces the precision of floating-point numbers to speed up computation. While the networks already incorporate hard activation functions, which are compatible with quantized values, this experiment takes quantization a step further by applying it to the entire network to see how much the speed improves. The experimental results when value quantization was applied are shown in Figure 11 below.

If you compare the results of V2 and V3 in Figure 11 with the corresponding models in Figure 10, you'll notice that there is a decrease in latency, proving that the use of low-precision numbers does improve computational speed. However, it is important to keep in mind that this also leads to a decrease in accuracy.
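To give a concrete picture of what value quantization means, here is a tiny illustration of my own using PyTorch's per-tensor quantization utility (just the concept of mapping floats to 8-bit integers, not the exact pipeline used by the authors):

# Mapping real-valued numbers onto 8-bit integers with a scale and zero point.
import torch

x = torch.randn(4)
q = torch.quantize_per_tensor(x, scale=0.05, zero_point=0, dtype=torch.qint8)

print(x)               # original float32 values
print(q.int_repr())    # the underlying int8 representation
print(q.dequantize())  # reconstructed values, slightly off from the originals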
MobileNetV3 Implementation
I think the explanations above cover pretty much everything you need to know about the theory behind MobileNetV3. Now in this section I'm going to bring you into the most fun part of this article: implementing MobileNetV3 from scratch.
As always, the very first thing we do is import the required modules.
# Codeblock 1
import torch
import torch.nn as nn
Afterwards, we need to initialize the configurable parameters of the model, namely WIDTH_MULTIPLIER, INPUT_RESOLUTION, and NUM_CLASSES, as shown in Codeblock 2 below. I believe the first two variables are straightforward, as I explained them thoroughly in the previous section. Here I decided to assign default values for the two. You can definitely change these numbers based on the values provided in the paper if you want to adjust the complexity of the model. Next, the third variable corresponds to the number of output neurons in the classification head. Here I set it to 1000 because the model is originally trained on the ImageNet-1K dataset. It's worth noting that the MobileNetV3 architecture is actually not limited to classification tasks only. Instead, it can also be used for object detection and semantic segmentation, as demonstrated in the paper. However, since the focus of this article is to implement the backbone, let's just use the standard classification head for the output layer to keep things simple.
# Codeblock 2
WIDTH_MULTIPLIER = 1.0
INPUT_RESOLUTION = 224
NUM_CLASSES = 1000
What we’re going to do subsequent is to wrap the repeating parts into separate courses. By doing this, we’ll later be capable to merely instantiate them every time wanted as an alternative of rewriting the identical code time and again. Now let’s start with the Squeeze-and-Excitation module first.
The Squeeze-and-Excitation Module
The implementation of this component is shown in Codeblock 3. I'm not going to go very deep into the code since it's almost exactly the same as the one in my previous article [4]. Generally speaking, though, this code works by representing each input channel with a single number (line #(1)), processing the resulting vector with a sequence of linear layers (#(2–3)), then converting it into a weight vector (#(4)). Keep in mind that in the original SE module we typically use the standard sigmoid activation function to obtain the weight vector, but here in MobileNetV3 we use hard-sigmoid instead. This weight vector will then be multiplied with the original tensor, which allows us to reduce the influence of channels that don't contribute to the final output (#(5)).
# Codeblock 3
class SEModule(nn.Module):
    def __init__(self, num_channels, r):
        super().__init__()
        self.global_pooling = nn.AdaptiveAvgPool2d(output_size=(1,1))
        self.fc0 = nn.Linear(in_features=num_channels,
                             out_features=num_channels//r,
                             bias=False)
        self.relu6 = nn.ReLU6()
        self.fc1 = nn.Linear(in_features=num_channels//r,
                             out_features=num_channels,
                             bias=False)
        self.hardsigmoid = nn.Hardsigmoid()

    def forward(self, x):
        print(f'original\t\t: {x.size()}')
        squeezed = self.global_pooling(x)       #(1)
        print(f'after avgpool\t\t: {squeezed.size()}')
        squeezed = torch.flatten(squeezed, 1)
        print(f'after flatten\t\t: {squeezed.size()}')
        excited = self.fc0(squeezed)            #(2)
        print(f'after fc0\t\t: {excited.size()}')
        excited = self.relu6(excited)
        print(f'after relu6\t\t: {excited.size()}')
        excited = self.fc1(excited)             #(3)
        print(f'after fc1\t\t: {excited.size()}')
        excited = self.hardsigmoid(excited)     #(4)
        print(f'after hardsigmoid\t: {excited.size()}')
        excited = excited[:, :, None, None]
        print(f'after reshape\t\t: {excited.size()}')
        scaled = x * excited                    #(5)
        print(f'after scaling\t\t: {scaled.size()}')
        return scaled
Now let’s test if the above code works correctly by creating an SEModule occasion and passing a dummy tensor by way of it. See Codeblock 4 under for the main points. Right here I configure the SE module to simply accept a 512-channel picture for the enter. In the meantime, the r (discount ratio) parameter is ready to 4, that means that the vector size between the 2 FC layers goes to be 4 instances smaller than that of its enter and output. It is likely to be price understanding that this quantity is completely different from the one talked about within the authentic Squeeze-and-Excitation paper [7], the place r = 16 is claimed to be the candy spot for balancing accuracy and complexity.
# Codeblock 4
semodule = SEModule(num_channels=512, r=4)
x = torch.randn(1, 512, 28, 28)
out = semodule(x)
If the code above produces the following output, it confirms that our SE module implementation is correct, since the input tensor successfully passed through all layers within the entire SE module.
# Codeblock 4 Output
original           : torch.Size([1, 512, 28, 28])
after avgpool      : torch.Size([1, 512, 1, 1])
after flatten      : torch.Size([1, 512])
after fc0          : torch.Size([1, 128])
after relu6        : torch.Size([1, 128])
after fc1          : torch.Size([1, 512])
after hardsigmoid  : torch.Size([1, 512])
after reshape      : torch.Size([1, 512, 1, 1])
after scaling      : torch.Size([1, 512, 28, 28])
The Convolution Block
The next component I'm going to create is the one wrapped in the ConvBlock class, whose detailed implementation can be seen in Codeblock 5. This is actually just a standard convolution layer, but we don't simply use nn.Conv2d because in CNNs we typically use the Conv-BN-ReLU structure. Hence, it is convenient to group these three layers together inside a single class. However, instead of strictly following this standard structure, we're going to customize it to match the requirements of the MobileNetV3 architecture.
# Codeblock 5
class ConvBlock(nn.Module):
    def __init__(self,
                 in_channels,              #(1)
                 out_channels,             #(2)
                 kernel_size,              #(3)
                 stride,                   #(4)
                 padding,                  #(5)
                 groups=1,                 #(6)
                 batchnorm=True,           #(7)
                 activation=nn.ReLU6()):   #(8)
        super().__init__()

        bias = False if batchnorm else True   #(9)

        self.conv = nn.Conv2d(in_channels=in_channels,
                              out_channels=out_channels,
                              kernel_size=kernel_size,
                              stride=stride,
                              padding=padding,
                              groups=groups,
                              bias=bias)
        self.bn = nn.BatchNorm2d(num_features=out_channels) if batchnorm else nn.Identity()   #(10)
        self.activation = activation

    def forward(self, x):   #(11)
        print(f'original\t\t: {x.size()}')
        x = self.conv(x)
        print(f'after conv\t\t: {x.size()}')
        x = self.bn(x)
        print(f'after bn\t\t: {x.size()}')
        x = self.activation(x)
        print(f'after activation\t: {x.size()}')
        return x
There are several parameters you need to pass when instantiating a ConvBlock. The first five (#(1–5)) are quite straightforward, as they're basically just the standard parameters of the nn.Conv2d layer. Here I made the groups parameter configurable (#(6)) so that this class can flexibly be used not only for standard convolutions but also for depthwise convolutions. Next, at line #(7) I create a parameter called batchnorm, which determines whether or not a ConvBlock instance implements a batch normalization layer. This is done because there are some cases where we don't implement this layer, i.e., the last two convolutions with the NBN label (which stands for no batch normalization) in Figure 1. The last parameter we have here is the activation function (#(8)). Later on, there will be cases that require us to set it to either nn.ReLU6(), nn.Hardswish(), or nn.Identity() (no activation).
Inside the __init__() method, two things happen depending on the argument we pass to the batchnorm parameter. When we set it to True, firstly, the bias term of the convolution layer is deactivated (#(9)), and secondly, bn becomes an nn.BatchNorm2d() layer (#(10)). The bias term is not used in this case because applying batch normalization after the convolution cancels it out, so there's basically no point in using a bias in the first place. Meanwhile, if we set the batchnorm parameter to False, the bias variable is going to be True, since in this scenario it won't be canceled out. The bn itself will just be an identity layer, meaning that it won't do anything to the tensor.
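If you're curious why the bias can safely be dropped, the small experiment below (my own sketch) shows that in training mode a batch normalization layer produces the same output whether or not the preceding convolution carries a bias, since the per-channel mean subtraction removes any constant shift:

# Batch normalization cancels the convolution bias: both paths give the same result.
import torch
import torch.nn as nn

conv_bias = nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=True)
conv_nobias = nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=False)
conv_nobias.weight.data = conv_bias.weight.data.clone()   # share the same weights

bn = nn.BatchNorm2d(8)
x = torch.randn(2, 3, 16, 16)

print(torch.allclose(bn(conv_bias(x)), bn(conv_nobias(x)), atol=1e-5))  # True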
Regarding the forward() method (#(11)), I don't think I need to explain anything, because what we do here is just passing a tensor through the layers sequentially. Now let's move on to Codeblock 6 to see whether our ConvBlock implementation is correct. Here I create two ConvBlock instances, where the first one uses the default batchnorm and activation, while the second omits the batch normalization layer (#(1)) and uses the hard-swish activation function (#(2)). Instead of passing a tensor through them, here I want you to see from the resulting output that our code correctly implements both structures according to the input arguments we pass.
# Codeblock 6
convblock1 = ConvBlock(in_channels=64,
                       out_channels=128,
                       kernel_size=3,
                       stride=2,
                       padding=1)

convblock2 = ConvBlock(in_channels=64,
                       out_channels=128,
                       kernel_size=3,
                       stride=2,
                       padding=1,
                       batchnorm=False,             #(1)
                       activation=nn.Hardswish())   #(2)

print(convblock1)
print('')
print(convblock2)
# Codeblock 6 Output
ConvBlock(
  (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
  (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (activation): ReLU6()
)

ConvBlock(
  (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
  (bn): Identity()
  (activation): Hardswish()
)
The Bottleneck
Now that the SEModule and the ConvBlock are done, we can move on to the main component of the MobileNetV3 architecture: the bottleneck. What we essentially do in the bottleneck is just placing one layer after another, following the general structure shown earlier in Figure 2. In the case of MobileNetV2, it only consists of three convolution layers, whereas here in MobileNetV3 we have an additional SE block placed between the second and the third convolutions. Look at Codeblocks 7a and 7b to see how I implement the bottleneck block for MobileNetV3.
# Codeblock 7a
class Bottleneck(nn.Module):
    def __init__(self,
                 in_channels,
                 out_channels,
                 kernel_size,
                 stride,
                 padding,
                 exp_size,     #(1)
                 se,           #(2)
                 activation):
        super().__init__()

        self.add = in_channels == out_channels and stride == 1   #(3)

        self.conv0 = ConvBlock(in_channels=in_channels,    #(4)
                               out_channels=exp_size,      #(5)
                               kernel_size=1,              #(6)
                               stride=1,
                               padding=0,
                               activation=activation)

        self.conv1 = ConvBlock(in_channels=exp_size,       #(7)
                               out_channels=exp_size,      #(8)
                               kernel_size=kernel_size,    #(9)
                               stride=stride,
                               padding=padding,
                               groups=exp_size,            #(10)
                               activation=activation)

        self.semodule = SEModule(num_channels=exp_size, r=4) if se else nn.Identity()   #(11)

        self.conv2 = ConvBlock(in_channels=exp_size,       #(12)
                               out_channels=out_channels,  #(13)
                               kernel_size=1,              #(14)
                               stride=1,
                               padding=0,
                               activation=nn.Identity())   #(15)
The input parameters of the Bottleneck class look similar to those of the ConvBlock class at a glance. This definitely makes sense because we will indeed use them to instantiate ConvBlock objects inside the Bottleneck. However, if you take a closer look at them, you'll notice that there are some other parameters you haven't seen before, namely exp_size (#(1)) and se (#(2)). Later on, the input arguments for these parameters will be taken from the configuration provided in the table in Figure 1.
Inside the __init__() method, what we need to do first is check whether the input and output tensor dimensions are the same using the code at line #(3). By doing this, our add variable will contain either True or False. This dimensionality check is important because we need to decide whether or not to perform an element-wise summation between the two, which implements the skip connection that jumps over all layers within the bottleneck block.
Next, let's instantiate the layers themselves, the first two of which are a pointwise convolution (conv0) and a depthwise convolution (conv1). For conv0, we need to set the kernel size to 1×1 (#(6)), whereas for conv1 the kernel size should match the one in the input argument (#(9)), which can either be 3×3 or 5×5. It is necessary to apply padding in the ConvBlock to prevent the image size from shrinking after every convolution operation. For kernel sizes of 1×1, 3×3, and 5×5, the required padding values are 0, 1, and 2, respectively. Regarding the number of channels, conv0 is responsible for expanding it from in_channels to exp_size (#(4–5)). Meanwhile, the numbers of input and output channels of conv1 are exactly the same (#(7–8)). Additionally, for the conv1 layer, the groups parameter needs to be set to exp_size (#(10)) because we want each input channel to be processed independently of the others.
After the first two convolution layers are done, what we need to instantiate next is the Squeeze-and-Excitation module (#(11)). Here we need to set the input channel count to exp_size, matching the tensor size produced by the conv1 layer. Remember that the SE module is not always used, hence the instantiation of this component needs to be placed inside a conditional, where it is only actually instantiated when the se parameter is True. Otherwise, it will just be an identity layer.
Finally, the last convolution layer (conv2) is responsible for mapping the number of output channels from exp_size to out_channels (#(12–13)). Just like the conv0 layer, this one is also a pointwise convolution, hence we set the kernel size to 1×1 (#(14)) so that it only focuses on aggregating information along the channel dimension. The activation function of this layer is fixed to nn.Identity() (#(15)) because here we implement the idea of the linear bottleneck.
And that's pretty much everything for the layers within the bottleneck block. All we need to do afterwards is create the flow of the network in the forward() method, as shown in Codeblock 7b below.
# Codeblock 7b
    def forward(self, x):
        residual = x
        print(f'original\t\t: {x.size()}')

        x = self.conv0(x)
        print(f'after conv0\t\t: {x.size()}')

        x = self.conv1(x)
        print(f'after conv1\t\t: {x.size()}')

        x = self.semodule(x)
        print(f'after semodule\t\t: {x.size()}')

        x = self.conv2(x)
        print(f'after conv2\t\t: {x.size()}')

        if self.add:
            x += residual
            print(f'after summation\t\t: {x.size()}')

        return x
Now I would like to test the Bottleneck class we just created by simulating the third row of the MobileNetV3-Large architecture in the table in Figure 1. Look at Codeblock 8 below to see how I do that. If you go back to the architectural details, you'll notice that this bottleneck accepts a tensor of size 16×112×112 (#(7)). In this case, the bottleneck block is configured to expand the number of channels to 64 (#(3)) before eventually shrinking it to 24 (#(1)). The kernel size of the depthwise convolution is set to 3×3 (#(2)) and the stride is set to 2 (#(4)), which reduces the spatial dimension by half. Here we use ReLU6 as the activation function (#(6)) of the first two convolutions. Finally, the SE module is not applied (#(5)) since there is no checkmark in the SE column of the table.
# Codeblock 8
bottleneck = Bottleneck(in_channels=16,
                        out_channels=24,         #(1)
                        kernel_size=3,           #(2)
                        exp_size=64,             #(3)
                        stride=2,                #(4)
                        padding=1,
                        se=False,                #(5)
                        activation=nn.ReLU6())   #(6)

x = torch.randn(1, 16, 112, 112)   #(7)
out = bottleneck(x)
If you run the code above, the following output should appear on your screen.
# Codeblock 8 Output
original        : torch.Size([1, 16, 112, 112])
after conv0     : torch.Size([1, 64, 112, 112])
after conv1     : torch.Size([1, 64, 56, 56])
after semodule  : torch.Size([1, 64, 56, 56])
after conv2     : torch.Size([1, 24, 56, 56])
This output confirms that our implementation is correct in terms of tensor shape, where the spatial dimension halves from 112×112 to 56×56 while the number of channels correctly expands from 16 to 64 and then reduces from 64 to 24. Speaking more specifically about the SE module, we can see in the above output that the tensor is still passed through this component even though we set the se parameter to False. In fact, if you print out the detailed architecture of this bottleneck as I do in Codeblock 9, you will see that semodule is just an identity layer, which effectively makes this structure behave as if we were passing the output of conv1 directly to conv2.
# Codeblock 9
bottleneck
# Codeblock 9 Output
Bottleneck(
  (conv0): ConvBlock(
    (conv): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (activation): ReLU6()
  )
  (conv1): ConvBlock(
    (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=64, bias=False)
    (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (activation): ReLU6()
  )
  (semodule): Identity()
  (conv2): ConvBlock(
    (conv): Conv2d(64, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (activation): Identity()
  )
)
The above bottleneck behaves differently if we instantiate it with the se parameter set to True. In Codeblock 10 below, I create the bottleneck block from the fifth row of the MobileNetV3-Large architecture. In this case, if you print out the detailed structure, you will see that semodule consists of all the layers in the SEModule class we created earlier instead of just being an identity layer like before.
# Codeblock 10
bottleneck = Bottleneck(in_channels=24,
                        out_channels=40,
                        kernel_size=5,
                        exp_size=72,
                        stride=2,
                        padding=2,
                        se=True,
                        activation=nn.ReLU6())
bottleneck
# Codeblock 10 Output
Bottleneck(
  (conv0): ConvBlock(
    (conv): Conv2d(24, 72, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (activation): ReLU6()
  )
  (conv1): ConvBlock(
    (conv): Conv2d(72, 72, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), groups=72, bias=False)
    (bn): BatchNorm2d(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (activation): ReLU6()
  )
  (semodule): SEModule(
    (global_pooling): AdaptiveAvgPool2d(output_size=(1, 1))
    (fc0): Linear(in_features=72, out_features=18, bias=False)
    (relu6): ReLU6()
    (fc1): Linear(in_features=18, out_features=72, bias=False)
    (hardsigmoid): Hardsigmoid()
  )
  (conv2): ConvBlock(
    (conv): Conv2d(72, 40, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(40, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (activation): Identity()
  )
)
The Complete MobileNetV3
Now that all components have been created, what we need to do next is assemble the main class of the MobileNetV3 model. But before doing so, I would like to initialize a list that stores the input arguments used for instantiating the bottleneck blocks, as shown in Codeblock 11 below. Keep in mind that these arguments are written according to the MobileNetV3-Large version. You'll need to adjust the values in the BOTTLENECKS list if you want to create the small version instead.
# Codeblock 11
HS = nn.Hardswish()
RE = nn.ReLU6()
BOTTLENECKS = [[16, 16, 3, 16, False, RE, 1, 1],
[16, 24, 3, 64, False, RE, 2, 1],
[24, 24, 3, 72, False, RE, 1, 1],
[24, 40, 5, 72, True, RE, 2, 2],
[40, 40, 5, 120, True, RE, 1, 2],
[40, 40, 5, 120, True, RE, 1, 2],
[40, 80, 3, 240, False, HS, 2, 1],
[80, 80, 3, 200, False, HS, 1, 1],
[80, 80, 3, 184, False, HS, 1, 1],
[80, 80, 3, 184, False, HS, 1, 1],
[80, 112, 3, 480, True, HS, 1, 1],
[112, 112, 3, 672, True, HS, 1, 1],
[112, 160, 5, 672, True, HS, 2, 2],
[160, 160, 5, 960, True, HS, 1, 2],
[160, 160, 5, 960, True, HS, 1, 2]]
The arguments listed above are structured in the following order (from left to right): in channels, out channels, kernel size, expansion size, SE, activation, stride, and padding. Keep in mind that padding is not explicitly stated in the original table, but I include it here because it is required as an input when instantiating the bottleneck blocks.
Now let's actually create the MobileNetV3 class. See the code implementation in Codeblocks 12a and 12b below.
# Codeblock 12a
class MobileNetV3(nn.Module):
    def __init__(self):
        super().__init__()

        self.first_conv = ConvBlock(in_channels=3,   #(1)
                                    out_channels=int(WIDTH_MULTIPLIER*16),
                                    kernel_size=3,
                                    stride=2,
                                    padding=1,
                                    activation=nn.Hardswish())

        self.blocks = nn.ModuleList([])   #(2)
        for config in BOTTLENECKS:        #(3)
            in_channels, out_channels, kernel_size, exp_size, se, activation, stride, padding = config
            self.blocks.append(Bottleneck(in_channels=int(WIDTH_MULTIPLIER*in_channels),
                                          out_channels=int(WIDTH_MULTIPLIER*out_channels),
                                          kernel_size=kernel_size,
                                          exp_size=int(WIDTH_MULTIPLIER*exp_size),
                                          stride=stride,
                                          padding=padding,
                                          se=se,
                                          activation=activation))

        self.second_conv = ConvBlock(in_channels=int(WIDTH_MULTIPLIER*160),   #(4)
                                     out_channels=int(WIDTH_MULTIPLIER*960),
                                     kernel_size=1,
                                     stride=1,
                                     padding=0,
                                     activation=nn.Hardswish())

        self.avgpool = nn.AdaptiveAvgPool2d(output_size=(1,1))   #(5)

        self.third_conv = ConvBlock(in_channels=int(WIDTH_MULTIPLIER*960),   #(6)
                                    out_channels=int(WIDTH_MULTIPLIER*1280),
                                    kernel_size=1,
                                    stride=1,
                                    padding=0,
                                    batchnorm=False,
                                    activation=nn.Hardswish())

        self.dropout = nn.Dropout(p=0.8)   #(7)

        self.output = ConvBlock(in_channels=int(WIDTH_MULTIPLIER*1280),   #(8)
                                out_channels=int(NUM_CLASSES),            #(9)
                                kernel_size=1,
                                stride=1,
                                padding=0,
                                batchnorm=False,
                                activation=nn.Identity())
Notice in Figure 1 that we initially start from a standard convolution layer. In the codeblock above, I refer to this layer as first_conv (#(1)). It's worth noting that the input arguments for this layer are not included in the BOTTLENECKS list, hence we need to define them manually. Remember to multiply the channel counts at each step by WIDTH_MULTIPLIER, since we want the model size to be adjustable through that variable. Next, we initialize a placeholder named blocks for storing all the bottleneck blocks (#(2)). With a simple loop at line #(3), we iterate through all items in the BOTTLENECKS list to actually instantiate the bottleneck blocks and append them one by one to blocks. In fact, this loop constructs the majority of the layers in the network, since it covers nearly all components listed in the table.
Once the sequence of bottleneck blocks is complete, we continue with the next convolution layer, which I refer to as second_conv (#(4)). Again, since the configuration parameters of this layer are not stored in the BOTTLENECKS list, we need to hard-code them manually. The output of this layer is then passed through a global average pooling layer (#(5)), which drops the spatial dimension to 1×1. Afterwards, we connect this layer to two consecutive pointwise convolutions (#(6) and #(8)) with a dropout layer in between (#(7)).
Speaking more specifically about the two convolutions, it is important to know that applying a 1×1 convolution to a tensor that has a 1×1 spatial dimension is essentially equivalent to applying an FC layer to the flattened tensor, where the number of channels corresponds to the number of neurons. This is the reason I set the output channel count of the last layer equal to the number of classes in the dataset (#(9)). The batchnorm parameter of both the third_conv and output layers is set to False, as suggested in the architecture.
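To convince yourself of this equivalence, here is a small sketch (my own check, not from the paper) that copies the weights of a 1×1 convolution into an nn.Linear layer and shows that both produce the same output on a tensor with a 1×1 spatial dimension:

# A 1x1 convolution on a 1x1 spatial map is equivalent to a fully connected layer.
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=1280, out_channels=1000, kernel_size=1)
fc = nn.Linear(in_features=1280, out_features=1000)

# copy the convolution weights into the linear layer so both use the same parameters
with torch.no_grad():
    fc.weight.copy_(conv.weight.view(1000, 1280))
    fc.bias.copy_(conv.bias)

x = torch.randn(1, 1280, 1, 1)
out_conv = conv(x).flatten(start_dim=1)
out_fc = fc(x.flatten(start_dim=1))
print(torch.allclose(out_conv, out_fc, atol=1e-6))  # True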
Meanwhile, the activation function of third_conv is set to nn.Hardswish(), whereas the output layer uses nn.Identity(), which is equivalent to not applying any activation function at all. This is done because during training softmax is already included in the loss function (nn.CrossEntropyLoss()). Later, in the inference phase, we need to replace nn.Identity() with nn.Softmax() in the output layer so that the model directly returns the probability score of each class.
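Since the output layer is just a ConvBlock with an activation attribute, one way to do this swap at inference time (a usage sketch based on the classes defined in this article, assuming a trained instance called mobilenetv3 like the one created later in Codeblock 13) is:

# Replace the identity activation with softmax over the channel (class) dimension.
mobilenetv3.output.activation = nn.Softmax(dim=1)
mobilenetv3.eval()   # also disables dropout during inference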
Next, let's take a look at the forward() method below, which I won't explain any further since I think it's quite easy to understand.
# Codeblock 12b
    def forward(self, x):
        print(f'original\t\t: {x.size()}')

        x = self.first_conv(x)
        print(f'after first_conv\t: {x.size()}')

        for i, block in enumerate(self.blocks):
            x = block(x)
            print(f"after bottleneck #{i}\t: {x.shape}")

        x = self.second_conv(x)
        print(f'after second_conv\t: {x.size()}')

        x = self.avgpool(x)
        print(f'after avgpool\t\t: {x.size()}')

        x = self.third_conv(x)
        print(f'after third_conv\t: {x.size()}')

        x = self.dropout(x)
        print(f'after dropout\t\t: {x.size()}')

        x = self.output(x)
        print(f'after output\t\t: {x.size()}')

        x = torch.flatten(x, start_dim=1)
        print(f'after flatten\t\t: {x.size()}')

        return x
The code in Codeblock 13 demonstrates how we initialize a MobileNetV3 instance and pass a dummy tensor through it. Remember that here we use the default input resolution, so we can basically think of this tensor as a batch containing a single RGB image of size 224×224.
# Codeblock 13
mobilenetv3 = MobileNetV3()
x = torch.randn(1, 3, INPUT_RESOLUTION, INPUT_RESOLUTION)
out = mobilenetv3(x)
And below is what the resulting output looks like, in which the tensor dimensions after each block match exactly with the MobileNetV3-Large architecture in Figure 1.
# Codeblock 13 Output
original              : torch.Size([1, 3, 224, 224])
after first_conv      : torch.Size([1, 16, 112, 112])
after bottleneck #0   : torch.Size([1, 16, 112, 112])
after bottleneck #1   : torch.Size([1, 24, 56, 56])
after bottleneck #2   : torch.Size([1, 24, 56, 56])
after bottleneck #3   : torch.Size([1, 40, 28, 28])
after bottleneck #4   : torch.Size([1, 40, 28, 28])
after bottleneck #5   : torch.Size([1, 40, 28, 28])
after bottleneck #6   : torch.Size([1, 80, 14, 14])
after bottleneck #7   : torch.Size([1, 80, 14, 14])
after bottleneck #8   : torch.Size([1, 80, 14, 14])
after bottleneck #9   : torch.Size([1, 80, 14, 14])
after bottleneck #10  : torch.Size([1, 112, 14, 14])
after bottleneck #11  : torch.Size([1, 112, 14, 14])
after bottleneck #12  : torch.Size([1, 160, 7, 7])
after bottleneck #13  : torch.Size([1, 160, 7, 7])
after bottleneck #14  : torch.Size([1, 160, 7, 7])
after second_conv     : torch.Size([1, 960, 7, 7])
after avgpool         : torch.Size([1, 960, 1, 1])
after third_conv      : torch.Size([1, 1280, 1, 1])
after dropout         : torch.Size([1, 1280, 1, 1])
after output          : torch.Size([1, 1000, 1, 1])
after flatten         : torch.Size([1, 1000])
To make sure that our implementation is correct, we can print out the number of parameters contained in the model using the following code.
# Codeblock 14
total_params = sum(p.numel() for p in mobilenetv3.parameters())
total_params
# Codeblock 14 Output
5476416
Here you can see that this model contains around 5.5 million parameters, which is roughly the same as the figure disclosed in the original paper (see Figure 10). Additionally, the parameter count given in the PyTorch documentation [8] is also similar to this number, as you can see in Figure 12 below. Based on these facts, I believe I can confirm that our MobileNetV3-Large implementation is correct.
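As an additional sanity check (assuming a reasonably recent torchvision is installed in your environment), you can compare this against the parameter count of the reference implementation:

# Parameter count of torchvision's MobileNetV3-Large for comparison.
from torchvision.models import mobilenet_v3_large

reference = mobilenet_v3_large(weights=None)
print(sum(p.numel() for p in reference.parameters()))   # also roughly 5.5 million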

Ending
Well, that's pretty much everything about the MobileNetV3 architecture. Here I encourage you to actually train this model from scratch on any dataset you want. Not only that, I also want you to play around with the parameter configurations of the bottleneck blocks to see whether we can still improve the performance of MobileNetV3 even further. By the way, the code used in this article is also available in my GitHub repo, which you can find at the link at reference number [9].
Thanks for reading. Feel free to reach out to me via LinkedIn [10] if you spot any mistake in my explanation or in the code. See ya in my next article!
References
[1] Muhammad Ardi. MobileNetV1 Paper Walkthrough: The Tiny Giant. AI Advances. https://medium.com/ai-advances/mobilenetv1-paper-walkthrough-the-tiny-giant-987196f40cd5 [Accessed October 24, 2025].
[2] Muhammad Ardi. MobileNetV2 Paper Walkthrough: The Smarter Tiny Giant. Towards Data Science. https://towardsdatascience.com/mobilenetv2-paper-walkthrough-the-smarter-tiny-giant/ [Accessed October 24, 2025].
[3] Andrew Howard et al. Searching for MobileNetV3. Arxiv. https://arxiv.org/abs/1905.02244 [Accessed May 1, 2025].
[4] Muhammad Ardi. SENet Paper Walkthrough: The Channel-Wise Attention. AI Advances. https://medium.com/ai-advances/senet-paper-walkthrough-the-channel-wise-attention-8ac72b9cc252 [Accessed October 24, 2025].
[5] Image originally created by the author.
[6] Mark Sandler et al. MobileNetV2: Inverted Residuals and Linear Bottlenecks. Arxiv. https://arxiv.org/abs/1801.04381 [Accessed May 12, 2025].
[7] Jie Hu et al. Squeeze-and-Excitation Networks. Arxiv. https://arxiv.org/abs/1709.01507 [Accessed May 12, 2025].
[8] Mobilenet_v3_large. PyTorch. https://docs.pytorch.org/vision/main/models/generated/torchvision.models.mobilenet_v3_large.html#torchvision.models.mobilenet_v3_large [Accessed May 12, 2025].
[9] MuhammadArdiPutra. The Tiny Giant Getting Even Smarter — MobileNetV3. GitHub. https://github.com/MuhammadArdiPutra/medium_articles/blob/main/The%20Tiny%20Giant%20Getting%20Even%20Smarter%20-%20MobileNetV3.ipynb [Accessed May 12, 2025].
[10] Muhammad Ardi Putra. LinkedIn. https://www.linkedin.com/in/muhammad-ardi-putra-879528152/ [Accessed May 12, 2025].
