Ming-Yang Zhang, Xin-Yi Yu, Lin-Lin Ou. Effective Model Compression via Stage-wise Pruning[J]. Machine Intelligence Research, 2023, 20(6): 937-951. DOI: 10.1007/s11633-022-1357-9

Effective Model Compression via Stage-wise Pruning

Automated machine learning (AutoML) pruning methods aim to search automatically for a pruning strategy that reduces the computational complexity of deep convolutional neural networks (deep CNNs). However, previous work found that the results of many AutoML pruning methods cannot even surpass those of uniform pruning. This paper shows that the ineffectiveness of AutoML pruning is caused by insufficient and unfair training of the supernet. A deep supernet suffers from insufficient training because it contains too many candidate subnets. To overcome this, a stage-wise pruning (SWP) method is proposed, which splits a deep supernet into several stage-wise supernets to reduce the number of candidates and uses in-place distillation to supervise the stage training. In addition, a wide supernet suffers from unfair training because the sampling probability of each channel is unequal; therefore, the full network (fullnet) and the tiniest network (tinynet) are sampled in every training iteration to ensure that each channel is sufficiently trained. Remarkably, the proxy performance of subnets trained with SWP is closer to their actual performance than in most previous AutoML pruning work. Furthermore, experiments show that SWP achieves state-of-the-art results on both CIFAR-10 and ImageNet under the mobile setting.
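The unfair-training problem and the fullnet/tinynet remedy described in the abstract can be illustrated with a small simulation. The sketch below is not the authors' implementation; it assumes the common slimmable-network convention that a subnet of width w keeps the first w channels, and simply counts how often each channel index is included under the two sampling schemes.

```python
import random

def count_channel_updates(num_channels, iters, add_fullnet_and_tinynet, min_width=1):
    """Count how often each channel is part of the sampled subnet.

    Assumption (slimmable-network convention, not from the paper's code):
    a subnet of width w uses channels 0..w-1. Under purely random width
    sampling, later channels are updated far less often; additionally
    sampling the fullnet and the tinynet in every iteration guarantees
    that every channel is updated at least once per iteration.
    """
    counts = [0] * num_channels
    rng = random.Random(0)  # fixed seed for reproducibility
    for _ in range(iters):
        widths = [rng.randint(min_width, num_channels)]  # random subnet
        if add_fullnet_and_tinynet:
            widths += [num_channels, min_width]  # fullnet and tinynet
        for w in widths:
            for i in range(w):
                counts[i] += 1
    return counts

random_only = count_channel_updates(8, 1000, add_fullnet_and_tinynet=False)
with_endpoints = count_channel_updates(8, 1000, add_fullnet_and_tinynet=True)
# The last channel is trained only when the random width equals 8 in the
# first scheme, but in every iteration once the fullnet is always sampled.
print(random_only[-1], with_endpoints[-1])
```

With random sampling alone, the last channel is included in roughly one iteration in eight, while the first channel is included in all of them; adding the fullnet and tinynet to each iteration guarantees every channel at least one update per iteration, which is the fairness property the abstract describes.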
