ICLR 2020: Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network
1) The document presents a new compression-based generalization error bound for large deep neural networks that applies even when the network is never explicitly compressed.
2) It shows that if a trained network's weight and covariance matrices are nearly low rank, the network has a small intrinsic dimensionality and can be efficiently compressed (see the sketch after this list).
3) This yields a tighter generalization bound than existing compression-based analyses, offering insight into why overparameterized networks generalize well despite having more parameters than training examples.