Pruning and quantization are common approaches to neural network compression. However, the Strong Lottery Ticket Hypothesis (SLTH) demonstrated that, even before training, a random initialization contains a sparse subnetwork capable of achieving high accuracy. Instead of training the weights, a supermask is learned to determine which connections are active, allowing the network to be reconstructed at inference time from only a random seed and the mask. Our work generalizes the SLTH into a new theoretical framework, the Trichromatic SLTH (T-SLTH), in which trichromatic supermasks separately optimize three components of each weight: connectivity (C), sign (S), and magnitude scaling (M).
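
To make the idea concrete, the following is a minimal PyTorch sketch of one plausible parameterization. The class names (`TrichromaticLinear`, `BinaryMaskSTE`), the straight-through estimator for the binary masks, and the per-row magnitude scale are illustrative assumptions, not the exact method of the paper; it only shows how frozen seed-regenerable weights can be recomposed from separately learned C, S, and M components.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinaryMaskSTE(torch.autograd.Function):
    """Threshold scores to {0, 1} in the forward pass; pass gradients
    straight through in the backward pass (straight-through estimator)."""

    @staticmethod
    def forward(ctx, scores):
        return (scores > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output


class TrichromaticLinear(nn.Module):
    """Hypothetical linear layer whose random weights stay frozen
    (regenerable from a seed); only the supermasks (C, S, M) are trained."""

    def __init__(self, in_features, out_features, seed=0):
        super().__init__()
        gen = torch.Generator().manual_seed(seed)
        # Frozen random weights: at inference time they can be rebuilt from
        # the seed alone, so only the masks need to be stored.
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features, generator=gen),
            requires_grad=False,
        )
        # Learnable scores, one tensor per supermask component.
        self.c_scores = nn.Parameter(torch.randn(out_features, in_features))  # connectivity
        self.s_scores = nn.Parameter(torch.randn(out_features, in_features))  # sign
        self.m_scale = nn.Parameter(torch.ones(out_features, 1))              # magnitude scaling

    def forward(self, x):
        c = BinaryMaskSTE.apply(self.c_scores)              # {0, 1}: keep or prune
        s = 2.0 * BinaryMaskSTE.apply(self.s_scores) - 1.0  # {-1, +1}: sign choice
        w = c * s * self.m_scale * self.weight.abs()        # recomposed weight
        return F.linear(x, w)


# Usage: gradients flow only into the supermask parameters.
layer = TrichromaticLinear(8, 4, seed=42)
layer(torch.randn(2, 8)).sum().backward()
assert layer.weight.grad is None and layer.c_scores.grad is not None
```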


T-SLTH provides a unified theoretical foundation spanning pruning, quantization, and random-weight networks. Future work aims to apply this framework to heterogeneous hardware (optical, analog, quantum) and to develop compression methods that leverage structural priors. Ultimately, this line of research pursues algorithm–hardware co-design for energy efficiency.
Ángel López García-Arias
Recognition Research Group, Media Information Laboratory, NTT Communication Science Laboratories