Understanding Capsule Networks: will they overcome Convolutional Neural Networks?
Speaker: Riccardo Renzulli
Robotics Seminar Series. First Session – 18th March 2022, 15:00-16:00, via Zoom (use this link): https://aalto.zoom.us/j/62124942899
Abstract: The ambition of Capsule Networks (CapsNets) is to build an interpretable and biologically inspired neural network model. They were recently introduced to overcome the shortcomings of Convolutional Neural Networks (CNNs). CNNs lose part-whole relationships because of max-pooling layers, which progressively discard spatial information. CapsNets' innovations rely on the explicit representation of entities and on how information is sent to upper layers. CapsNets group neurons into capsules, namely activity vectors, where each capsule accounts for an object or one of its parts. Each element of these vectors encodes a different property of the object, such as its pose, color, or deformation. Furthermore, the routing mechanism carves a parse tree from an input image: its main purpose is to explicitly build relationships between capsules. This can be seen as a parallel attention mechanism, where each active capsule chooses a capsule in the layer above to be its parent in the tree. This seminar gives a brief overview of the main components of a CapsNet mentioned above and of recent developments on this new architecture. Will CapsNets overcome Convolutional Neural Networks? Attendees will leave with a better understanding of how to answer this question.
Sabour, Sara, Nicholas Frosst, and Geoffrey E. Hinton. “Dynamic routing between capsules.” Advances in Neural Information Processing Systems 30 (2017).
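The routing-by-agreement mechanism mentioned in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration based on the dynamic routing algorithm of Sabour et al. (2017), not the authors' implementation; the shapes, names, and number of iterations are illustrative assumptions.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Squash nonlinearity (Sabour et al., 2017): shrinks short vectors
    # toward zero and scales long vectors to just under unit length,
    # so a capsule's length can be read as an existence probability.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iterations=3):
    # u_hat: prediction vectors from lower capsules for each upper capsule,
    #        shape (num_lower, num_upper, dim) -- illustrative layout.
    # Returns upper-layer capsule outputs, shape (num_upper, dim).
    num_lower, num_upper, _ = u_hat.shape
    b = np.zeros((num_lower, num_upper))  # routing logits, start uniform
    for _ in range(num_iterations):
        # Each lower capsule distributes its output over candidate parents.
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        # Weighted sum of predictions per upper capsule, then squash.
        s = (c[..., None] * u_hat).sum(axis=0)
        v = squash(s)
        # Increase logits where prediction and output agree (dot product),
        # carving the parse tree the abstract describes.
        b = b + np.einsum('ijk,jk->ij', u_hat, v)
    return v

rng = np.random.default_rng(0)
u_hat = rng.normal(size=(8, 3, 4))  # 8 lower capsules, 3 upper, dim 4
v = dynamic_routing(u_hat)
print(v.shape)  # (3, 4); each row's length stays below 1 after squashing
```

The agreement update is what lets each active capsule "choose a parent": predictions that align with an upper capsule's output reinforce that routing path over the iterations.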