
- Journal Papers
- [1] T. Cheng, Y. Masuda, J. Chen, J. Yu, and M. Hashimoto, "Logarithm-Approximate Floating-Point Multiplier Is Applicable to Power-Efficient Neural Network Training," Integration, the VLSI Journal, vol. 74, pp. 19--31, September 2020. [pdf]
- International Conferences
- [1] T. Cheng and M. Hashimoto, "Minimizing Energy of DNN Training with Adaptive Bit-Width and Voltage Scaling," Proceedings of IEEE International Symposium on Circuits and Systems (ISCAS), May 2021. [pdf]
- [2] K. Onishi, J. Yu, and M. Hashimoto, "Memory Efficient Training Using Lookup-Table-Based Quantization for Neural Network," Proceedings of International Conference on Artificial Intelligence Circuits and Systems (AICAS), August 2020. [pdf]
- [3] T. Tanio, J. Yu, and M. Hashimoto, "Training Data Reduction Using Support Vectors for Neural Networks," Proceedings of Asia-Pacific Signal and Information Processing Association (APSIPA) Annual Summit and Conference (ASC), November 2019. [pdf]
- [4] T.-Y. Cheng, J. Yu, and M. Hashimoto, "Minimizing Power for Neural Network Training with Logarithm-Approximate Floating-Point Multiplier," Proceedings of International Symposium on Power and Timing Modeling, Optimization and Simulation (PATMOS), July 2019. [pdf]