Joint seminar of IITP RAS, NRU HSE, and Skoltech
Abstract:
Convolutional neural networks excel at image recognition tasks, but this comes at the cost of high computational and memory complexity. CNNs require millions of floating-point operations to process a single image, so real-time applications need powerful CPU or GPU devices. Moreover, these networks contain millions of trainable parameters and consume hundreds of megabytes of storage and memory bandwidth. As a result, CNNs are forced to use RAM instead of relying solely on the processor cache, a memory device that is orders of magnitude more energy-efficient, which increases energy consumption even further. These factors hinder the adoption of CNNs on mobile devices. I will talk about our work on a tensor factorization framework for compressing the fully-connected and convolutional layers of CNNs. Another research direction (besides compression) is to increase the size of the layers by training them in a compact tensor format, thereby improving accuracy.
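The core idea of factorization-based compression can be illustrated with the simplest case: replacing a dense fully-connected weight matrix by a low-rank product. This is only a minimal sketch, not the speaker's actual framework; tensor methods generalize the same idea to higher-order reshapings of the weight tensor. All names and sizes below are illustrative assumptions.

```python
import numpy as np

# Sketch: compress a fully-connected layer's weight matrix W with a
# truncated SVD (a rank-r factorization). This is the matrix special case
# of the tensor-factorization idea; the seminar's framework works with
# higher-order tensor formats.
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 512))   # hypothetical dense layer weights
rank = 64                              # chosen compression rank

U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * s[:rank]             # (1024, 64) factor
B = Vt[:rank, :]                       # (64, 512) factor

# The layer map y = W @ x is replaced by y = A @ (B @ x),
# reducing both parameter count and multiply-adds.
x = rng.standard_normal(512)
y_full = W @ x
y_low = A @ (B @ x)

orig_params = W.size
comp_params = A.size + B.size
print(f"params: {orig_params} -> {comp_params} "
      f"({orig_params / comp_params:.1f}x fewer)")
```

For random `W` the truncation error is large; in practice the factorized layer is fine-tuned after compression so the network recovers most of its accuracy.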
seminar page
22.11.2016