


Recent work has shown that convolutional networks can be substantially deeper, more accurate, and more efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper we embrace this observation and introduce the Dense Convolutional Network (DenseNet), in which each layer is directly connected to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections, one between each layer and its subsequent layer (treating the input as layer 0), our network has L(L+1)/2 direct connections. For each layer, the feature maps of all preceding layers are treated as separate inputs, while its own feature maps are passed on as inputs to all subsequent layers.
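
One way to see the connection count, treating the input as layer 0: layer l receives one direct connection from each of the l layers that precede it, so summing over the L layers gives 1 + 2 + ... + L = L(L+1)/2 direct connections in total.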

There has always been a need and an interest for improved performance in algorithm implementation. Matrix multiplication in particular is of paramount interest to machine learning, lightweight matrix-based key management protocols for IoT networks, animation, and many other applications, and it has been implemented in various programming languages, with improved performance reported in many articles under various settings. In one such work, the authors compared the run times of matrix multiplication in popular languages such as C++, Java, and Python; their analysis showed that Python's implementation performed poorly, while Java was relatively slower than the C++ implementation.
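
As a point of reference, the straightforward implementation that such comparisons typically benchmark is the textbook triple loop. The sketch below is not taken from the cited work; it is a minimal C++ example assuming square n-by-n matrices stored row-major in flat arrays.

#include <cstddef>
#include <vector>

// Naive i-j-k matrix multiplication: C = A * B for n x n row-major matrices.
// The innermost loop walks down a column of B (stride n), the access pattern
// responsible for the cache misses discussed below.
void matmul_ijk(const std::vector<double>& A,
                const std::vector<double>& B,
                std::vector<double>& C,
                std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) {
            double sum = 0.0;
            for (std::size_t k = 0; k < n; ++k)
                sum += A[i * n + k] * B[k * n + j];  // B accessed with stride n
            C[i * n + j] = sum;
        }
}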

All the aforementioned languages use a row-major storage scheme, and hence many cache misses are incurred when matrix multiplication is implemented through simple looping. In contrast, the authors show that by changing the loop order, further performance gains are possible: they observed tremendous improvements due to better spatial locality. In addition, the authors implemented a parallel version of the same algorithm using OpenMP with eight logical cores and achieved a speed-up of seven times over the serial implementation. Moreover, we evaluated the performance of matrix multiplication by comparing the execution time under various loop settings.
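
The two ideas above can be sketched as follows, under the same assumptions as the earlier example (square n-by-n row-major matrices in flat arrays; an illustrative sketch, not code from the cited work): reordering the loops to i-k-j so that the innermost loop streams through contiguous rows of B and C, and parallelizing the outer loop with OpenMP.

#include <algorithm>
#include <cstddef>
#include <vector>

// i-k-j loop order: the innermost loop reads B and writes C along contiguous
// rows (unit stride), improving spatial locality in a row-major layout.
void matmul_ikj(const std::vector<double>& A,
                const std::vector<double>& B,
                std::vector<double>& C,
                std::size_t n) {
    std::fill(C.begin(), C.end(), 0.0);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t k = 0; k < n; ++k) {
            const double a = A[i * n + k];
            for (std::size_t j = 0; j < n; ++j)
                C[i * n + j] += a * B[k * n + j];  // unit-stride accesses
        }
}

// The same kernel with the outer loop split across threads via OpenMP
// (compile with -fopenmp; the pragma is ignored otherwise). Each thread
// writes disjoint rows of C, so no synchronization is needed.
void matmul_ikj_parallel(const std::vector<double>& A,
                         const std::vector<double>& B,
                         std::vector<double>& C,
                         std::size_t n) {
    std::fill(C.begin(), C.end(), 0.0);
    #pragma omp parallel for
    for (std::ptrdiff_t i = 0; i < static_cast<std::ptrdiff_t>(n); ++i)
        for (std::size_t k = 0; k < n; ++k) {
            const double a = A[i * n + k];
            for (std::size_t j = 0; j < n; ++j)
                C[i * n + j] += a * B[k * n + j];
        }
}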
