{ localUrl: '../page/8s1.html', arbitalUrl: 'https://arbital.com/p/8s1', rawJsonUrl: '../raw/8s1.json', likeableId: '0', likeableType: 'page', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], pageId: '8s1', edit: '1', editSummary: '', prevEdit: '0', currentEdit: '1', wasPublished: 'true', type: 'wiki', title: 'Note on 1x1 Convolutions', clickbait: 'what's the purpose?', textLength: '6142', alias: '8s1', externalUrl: '', sortChildrenBy: 'likes', hasVote: 'false', voteType: '', votesAnonymous: 'false', editCreatorId: 'AltoClef', editCreatedAt: '2017-10-27 14:31:23', pageCreatorId: 'AltoClef', pageCreatedAt: '2017-10-27 14:31:23', seeDomainId: '0', editDomainId: '2835', submitToDomainId: '0', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '0', isEditorComment: 'false', isApprovedComment: 'false', isResolved: 'false', snapshotText: '', anchorContext: '', anchorText: '', anchorOffset: '0', mergedInto: '', isDeleted: 'false', viewCount: '9', text: '## What does it do?\n\nThis is fairly straightforward. Just like a normal convolution, it maps a piece of data of shape `[channels, height, width]` to `[kernel_number, same_height, same_width]`, using a set of learnable parameters indexed as `[kernel_number, channel_number]`, i.e. one weight for each input channel of each kernel. On a single channel this operation is just a meaningless scaling by a constant; it only becomes useful when it is performed across channels, in which case it is usually called a cascaded cross channel pooling (CCCP) layer.\n\n## What's the purpose?\n\nAcross the materials I have read, this operation is widely used for the following purposes:\n\n### Dimension augmentation/reduction\nThis operation can map an input with any number of channels to an output with any number of channels while preserving the spatial size of the picture (a short code sketch below illustrates this). Be aware that this operation, especially when used to increase dimensionality, adds a large number of extra parameters, which may make the network more prone to overfitting.\n\n### Rescaling the last layer\nAs mentioned above, a 1x1 convolution can serve as a plain rescaling of a whole channel, which is usually unnecessary. However, if such a need exists, it can be met through this operation.\n\n### Increasing non-linearity\nBecause it applies a non-linear mapping (through its activation function) without drastically altering the input, it increases the non-linearity of the network without resorting to plain fully-connected layers, which destroy the relationship between nearby pixels. At the same time, the original spatial size of the input is preserved.\n
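As a concrete illustration of the dimension reduction/augmentation and the extra non-linearity described above, here is a minimal sketch. It assumes PyTorch (the original note does not name a library), and the channel counts are arbitrary illustrative choices.

```python
import torch
import torch.nn as nn

# A feature map with 256 channels and spatial size 32x32
# (a batch dimension is added because PyTorch layers expect one).
x = torch.randn(1, 256, 32, 32)

# A 1x1 convolution that reduces 256 channels to 64. Its weight tensor
# has shape [64, 256, 1, 1]: one scalar weight per (kernel, input channel).
reduce = nn.Conv2d(in_channels=256, out_channels=64, kernel_size=1)

# A 1x1 convolution that expands 64 channels back up to 512.
expand = nn.Conv2d(in_channels=64, out_channels=512, kernel_size=1)

y = torch.relu(reduce(x))   # shape [1, 64, 32, 32]
z = torch.relu(expand(y))   # shape [1, 512, 32, 32]

print(reduce.weight.shape)  # torch.Size([64, 256, 1, 1])
print(y.shape, z.shape)     # the 32x32 spatial size is preserved in both
```

Wrapping each 1x1 convolution in a ReLU is what provides the extra non-linearity mentioned above, at the cost of the additional `256*64` and `64*512` weights (plus biases) noted in the overfitting caveat.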
\n\n### The Network in Network (NIN) Structure\n\nI first encountered this concept while reading a research paper called [Network in Network](https://arxiv.org/pdf/1312.4400v3.pdf), which seems to be another powerful modification of CNNs. One of the key ideas the paper adopts is replacing the traditional convolution kernel with a small multilayer perceptron (MLP).\n\nWhile I was wondering how this model could be implemented with the higher-level APIs of popular machine learning libraries, without modifying any lower-level code, it turned out the paper itself states that sliding a mini-MLP over the picture, across the previous layer's channels, is equivalent to cross-channel convolution with 1x1 kernels: a few 1x1 convolutional layers (CCCP) are appended to a normal convolutional layer to reach the goal.\n\nThat appending CCCP layers to a normal convolutional layer yields the equivalent of an MLP serving as a sliding convolution kernel is hard to believe at first. It confused me, and I did not understand it until I unrolled the whole process.\n\nUsing 1d convolution rather than 2d convolution makes things more straightforward, and the same rule applies to convolution in any number of dimensions.\n\nSuppose we have a 1d input with 2 channels:\n<br>\n![](https://github.com/D0048/makeyourownneuralnetwork/blob/master/img_bed/1d_input.png?raw=true)\n<br>\nWe perform a normal convolution with one kernel of size 2:\n<br>\n![](https://github.com/D0048/makeyourownneuralnetwork/blob/master/img_bed/1d_input_conv2.png?raw=true)\n<br>\nAfter appending a 1x1 (in 1d convolution, just size 1) convolution layer with **two** kernels, it looks like this:\n<br>\n![](https://github.com/D0048/makeyourownneuralnetwork/blob/master/img_bed/1d_input_conv2_conv1.png?raw=true)\n<br>\nAnd with another 1x1 layer with **two** kernels added:\n<br>\n![](https://github.com/D0048/makeyourownneuralnetwork/blob/master/img_bed/1d_input_conv2_conv1_conv1.png?raw=true)\n<br>\nIt is not hard to see that this structure is indeed a sliding MLP: its input size is the kernel size of the convolution layer it is appended to (times the number of input channels), its depth is the number of 1x1 layers appended, and the number of units in each hidden layer is the number of kernels in the corresponding 1x1 layer, so the weight counts are the products of consecutive layer widths from the input through the hidden layers to the last layer.
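The same unrolling can be written as code. Here is a minimal sketch of the 1d example above (again assuming PyTorch, with ReLU activations between the layers as in an MLP; the input length of 8 is arbitrary): one normal convolution with a single kernel of size 2, followed by two 1x1 convolutions with two kernels each.

```python
import torch
import torch.nn as nn

# 1d input with 2 channels and length 8 (batch dimension first).
x = torch.randn(1, 2, 8)

# The structure from the figures above: one normal convolution with a
# single kernel of size 2, then two size-1 ("1x1") convolutions with
# two kernels each, acting as CCCP layers.
mlpconv = nn.Sequential(
    nn.Conv1d(in_channels=2, out_channels=1, kernel_size=2),  # sliding window over the input
    nn.ReLU(),
    nn.Conv1d(in_channels=1, out_channels=2, kernel_size=1),  # first CCCP layer, 2 kernels
    nn.ReLU(),
    nn.Conv1d(in_channels=2, out_channels=2, kernel_size=1),  # second CCCP layer, 2 kernels
)

y = mlpconv(x)
print(y.shape)  # torch.Size([1, 2, 7]): at each of the 7 window positions,
                # the same small MLP has been applied to that window
```

Because the size-1 convolutions mix channels only at a single position, stacking them after the size-2 convolution applies one small MLP per sliding window, which is exactly the equivalence the paper states.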
### The Inception module\nThe [inception module](https://hacktilldawn.com/2016/09/25/inception-modules-explained-and-implemented/) was first used in the GoogLeNet architecture and has proved to be really useful.\n![](https://raw.githubusercontent.com/iamaaditya/iamaaditya.github.io/master/images/inception_1x1.png)\n<center>Picture retrieved from this [website](https://wiki.tum.de/display/lfdv/Layers+of+a+Convolutional+Neural+Network).</center>\n\nWhat this module does is simply concatenate the outputs of convolutions/poolings of different sizes and let the network choose for itself which ones to use. The advantage of this method is that the network becomes more resistant to changes in the size of the target, and manually tuning the kernel sizes is no longer required--most of the sizes we could possibly need are already there.\n\nAs a result, the 1x1 convolution naturally becomes one of the choices.\n\n## Ideas for Future Research\n\nFrom what I have read, neurons that fire together form a relationship with each other, and each of them becomes easier to fire the next time the related neurons fire.\n\nNeurons in regular DNNs have no knowledge of the state of the other neurons in the same layer, so maybe it would be possible to create such a relation and somehow build **"logic"** into networks?\n\nTo do so, maybe each neuron needs another set of weights used to scale the states of the other neurons in the same layer and add them to the output, somewhat like this:\n$$\n\\vec{y_{n}}=(\\mathbf{W_n}^T \\times \\vec{y_{n-1}} + \\vec{b_n})+\\mathbf{W_{new}}^T (\\mathbf{W_n}^T \\times \\vec{y_{n-1}} + \\vec{b_n})\n$$\n<center>**If the formula is not displayed correctly, please allow insecure (HTTP) cross-site scripts in your browser.**</center>\nThe new weight matrix for the new output will be an `n*n` matrix, where `n` is the size of the output.\n\nThe detailed implementation remains to be researched.\n\nEdit: Okay... this seems to be just a really stupid way of adding an extra layer, and won't really make any difference from simply adding one. (2017/10/27)\n\n\nReferences:<br>\nhttp://blog.csdn.net/yiliang_/article/details/60468655<br>\nhttp://blog.csdn.net/mounty_fsc/article/details/51746111<br>\nhttp://jntsai.blogspot.com/2015/03/paper-summary-network-in-network-deep.html<br>\nhttps://www.zhihu.com/question/64098749', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '1', maintainerCount: '1', userSubscriberCount: '0', lastVisit: '', hasDraft: 'false', votes: [], voteSummary: 'null', muVoteSummary: '0', voteScaling: '0', currentUserVote: '-2', voteCount: '0', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: {}, creatorIds: [ 'AltoClef' ], childIds: [], parentIds: [], commentIds: [], questionIds: [], tagIds: [], relatedIds: [], markIds: [], explanations: [], learnMore: [], requirements: [], subjects: [], lenses: [], lensParentId: '', pathPages: [], learnMoreTaughtMap: {}, learnMoreCoveredMap: {}, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22859', pageId: '8s1', userId: 'AltoClef', edit: '1', type: 'newEdit', createdAt: '2017-10-27 14:31:23', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [], searchStrings: {}, hasChildren: 'false', hasParents: 'false', 
redAliases: {}, improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }