Sunday, July 5, 2020

[Paper List] Spiking Neural Networks (SNN)



Title: Conversion of continuous-valued deep networks to efficient event-driven networks for image classification
Authors: Rueckauer, Bodo and Lungu, Iulia-Alexandra and Hu, Yuhuang and Pfeiffer, Michael and Liu, Shih-Chii
Source: Frontiers in Neuroscience
Year: 2017
Notes: DNN-to-SNN conversion

Title: Direct training for spiking neural networks: Faster, larger, better
Authors: Wu, Yujie and Deng, Lei and Li, Guoqi and Zhu, Jun and Xie, Yuan and Shi, Luping
Source: Proceedings of the AAAI Conference on Artificial Intelligence
Year: 2019
Notes: direct training of SNNs

Title: SpikeGrad: An ANN-equivalent Computation Model for Implementing Backpropagation with Spikes
Authors: Thiele, Johannes Christian and Bichler, Olivier and Dupret, Antoine
Source: arXiv preprint arXiv:1906.00851
Year: 2019
Notes: both the forward and backward passes are computed with spikes

Title: S4NN: temporal backpropagation for spiking neural networks with one spike per neuron
Authors: Kheradpisheh, Saeed Reza and Masquelier, Timothee
Source: International Journal of Neural Systems
Year: 2020
Notes: uses TTFS (time-to-first-spike) encoding; see the sketch below
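
The papers above differ mainly in how activations are represented as spikes. As a quick illustration of the TTFS idea used by S4NN, here is a minimal NumPy sketch; it is my own illustration, not code from the paper, and the linear latency mapping and function names are assumptions. Each neuron fires exactly one spike, and stronger inputs fire earlier.

import numpy as np

def ttfs_encode(x, t_max=100):
    """Time-to-first-spike encoding (hypothetical helper): larger input
    values fire earlier. Assumes x is normalized to [0, 1]; returns one
    integer spike time per neuron via a simple linear latency mapping."""
    x = np.clip(x, 0.0, 1.0)
    return np.round((1.0 - x) * t_max).astype(int)  # x=1 -> t=0, x=0 -> t=t_max

def to_spike_train(times, t_max=100):
    """Expand spike times into a (t_max + 1) x n_neurons binary spike train."""
    train = np.zeros((t_max + 1, len(times)), dtype=np.uint8)
    train[times, np.arange(len(times))] = 1  # exactly one spike per neuron
    return train

pixels = np.array([0.9, 0.5, 0.1])  # normalized input intensities
print(ttfs_encode(pixels))          # [10 50 90]: the brightest pixel spikes first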


Wednesday, March 18, 2020

[Paper List] Convolutional Neural Networks (CNN)

Convolutional Neural Networks (CNN)

LeNet
Title: Gradient-based learning applied to document recognition
Authors: LeCun, Yann and Bottou, Léon and Bengio, Yoshua and Haffner, Patrick and others
Source: Proceedings of the IEEE
Year: 1998

AlexNet
Title: ImageNet classification with deep convolutional neural networks
Authors: Krizhevsky, Alex and Sutskever, Ilya and Hinton, Geoffrey E
Source: Advances in Neural Information Processing Systems
Year: 2012

VGG
Title: Very deep convolutional networks for large-scale image recognition
Authors: Simonyan, Karen and Zisserman, Andrew
Source: arXiv
Year: 2014

Inception V1
Title: Going deeper with convolutions
Authors: Szegedy, Christian and Liu, Wei and Jia, Yangqing and Sermanet, Pierre and Reed, Scott and Anguelov, Dragomir and Erhan, Dumitru and Vanhoucke, Vincent and Rabinovich, Andrew
Source: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
Year: 2015

Inception V2
Title: Batch normalization: Accelerating deep network training by reducing internal covariate shift
Authors: Ioffe, Sergey and Szegedy, Christian
Source: arXiv
Year: 2015

Inception V3
Title: Rethinking the inception architecture for computer vision
Authors: Szegedy, Christian and Vanhoucke, Vincent and Ioffe, Sergey and Shlens, Jon and Wojna, Zbigniew
Source: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
Year: 2016

Inception V4
Title: Inception-v4, Inception-ResNet and the impact of residual connections on learning
Authors: Szegedy, Christian and Ioffe, Sergey and Vanhoucke, Vincent and Alemi, Alexander A
Source: Thirty-First AAAI Conference on Artificial Intelligence
Year: 2017

ResNet
Title: Deep residual learning for image recognition
Authors: He, Kaiming and Zhang, Xiangyu and Ren, Shaoqing and Sun, Jian
Source: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
Year: 2016

ResNeXt
Title: Aggregated residual transformations for deep neural networks
Authors: Xie, Saining and Girshick, Ross and Dollár, Piotr and Tu, Zhuowen and He, Kaiming
Source: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
Year: 2017

AllConvNet
Title: Striving for simplicity: The all convolutional net
Authors: Springenberg, Jost Tobias and Dosovitskiy, Alexey and Brox, Thomas and Riedmiller, Martin
Source: arXiv
Year: 2014

DenseNet
Title: Densely connected convolutional networks
Authors: Huang, Gao and Liu, Zhuang and Van Der Maaten, Laurens and Weinberger, Kilian Q
Source: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
Year: 2017

EfficientNet
Title: EfficientNet: Rethinking model scaling for convolutional neural networks
Authors: Tan, Mingxing and Le, Quoc V
Source: arXiv
Year: 2019

Noisy Student
Title: Self-training with Noisy Student improves ImageNet classification
Authors: Xie, Qizhe and Hovy, Eduard and Luong, Minh-Thang and Le, Quoc V
Source: arXiv
Year: 2019

Lightweight Convolutional Neural Networks (Light-Weight CNN)

MobileNets
Title: MobileNets: Efficient convolutional neural networks for mobile vision applications
Authors: Howard, Andrew G and Zhu, Menglong and Chen, Bo and Kalenichenko, Dmitry and Wang, Weijun and Weyand, Tobias and Andreetto, Marco and Adam, Hartwig
Source: arXiv
Year: 2017

ShuffleNet
Title: ShuffleNet: An extremely efficient convolutional neural network for mobile devices
Authors: Zhang, Xiangyu and Zhou, Xinyu and Lin, Mengxiao and Sun, Jian
Source: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
Year: 2018

CondenseNet
Title: CondenseNet: An efficient DenseNet using learned group convolutions
Authors: Huang, Gao and Liu, Shichen and Van der Maaten, Laurens and Weinberger, Kilian Q
Source: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
Year: 2018

Neural Network Quantization

Quantization
Title: Quantization and training of neural networks for efficient integer-arithmetic-only inference
Authors: Jacob, Benoit and Kligys, Skirmantas and Chen, Bo and Zhu, Menglong and Tang, Matthew and Howard, Andrew and Adam, Hartwig and Kalenichenko, Dmitry
Source: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
Year: 2018

TWN
Title: Ternary weight networks
Authors: Li, Fengfu and Zhang, Bo and Liu, Bin
Source: arXiv preprint arXiv:1605.04711
Year: 2016
Notes: weights are quantized to {+1, 0, -1} (see the sketch after this list)

XNORNet
Title: XNOR-Net: ImageNet classification using binary convolutional neural networks
Authors: Rastegari, Mohammad and Ordonez, Vicente and Redmon, Joseph and Farhadi, Ali
Source: European Conference on Computer Vision
Year: 2016
Notes: weights are quantized to {+1, -1}

DoReFa-Net
Title: DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients
Authors: Zhou, Shuchang and Wu, Yuxin and Ni, Zekun and Zhou, Xinyu and Wen, He and Zou, Yuheng
Source: arXiv preprint arXiv:1606.06160
Year: 2016
Notes: quantizes both the forward and the backward pass

WAGE
Title: Training and inference with integers in deep neural networks
Authors: Wu, Shuang and Li, Guoqi and Chen, Feng and Shi, Luping
Source: arXiv preprint arXiv:1802.04680
Year: 2018
Notes: quantizes both the forward and the backward pass
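
To make the notes above concrete, here is a minimal sketch of TWN-style ternary weight quantization, following the approximate threshold and scaling-factor rules described in the TWN paper; the helper name and shapes are my own assumptions, not the authors' code.

import numpy as np

def ternarize(w):
    """TWN-style ternary quantization: map weights to alpha * {-1, 0, +1}.
    Uses the paper's approximate threshold delta = 0.7 * E[|w|]; alpha is
    the mean magnitude of the weights that survive the threshold."""
    delta = 0.7 * np.abs(w).mean()
    mask = np.abs(w) > delta
    codes = np.where(mask, np.sign(w), 0.0)              # in {-1, 0, +1}
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * codes

w = np.random.randn(256)
w_q = ternarize(w)
# During training, such quantizers are typically applied in the forward pass,
# with gradients passed through the non-differentiable step via a
# straight-through estimator (as in XNOR-Net and DoReFa-Net).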


Wednesday, February 19, 2020

Seminar Presentation Tips (Part 1): The Title Page


Junior: "Ah! Senior, it's my turn to present at seminar next week. What do I do?"

Senior: "Don't panic. Let me walk you through a few things to watch out for when preparing your talk.
After you read this, go over your presentation carefully one more time, and you shouldn't end up nailed to the podium with no way down!
Prepare thoroughly, and then calmly march to your doom!"

Junior: "Wait a minute, Senior, are you sure you're using that idiom correctly? I'd still like to graduate alive!"


The First Slide

Title

You can usually get a rough idea of what a paper is about just from its title. Yet many students open their presentation by racing through the title like a teleprompter and then jumping straight to the next slide, which is a real shame. So what should you do instead? In short,

"Use your own words to explain the key point the title is trying to convey."

What, you think that's easy? Fine; once you've run into a few papers with utterly baffling titles, you won't say that anymore.


Authors

On the title page, list the authors' names in full. Why? Because your professor may well know these authors, and then you might get to hear some interesting stories, even a bit of insider gossip.

Junior: "Senior, can I abbreviate the names?"
Senior: "If you abbreviate them, do you think the professor can tell who they are?"
Junior: "Senior, there are so many authors. Can I just write '..., etc.'?"
Senior: "...... Are there a hundred of them?"


Affiliation

Same as above: if you want to hear the stories, list it in full.

Source

Listen, if you want your seminar presentation to go smoothly, take my advice.

When the professor asks, "Where does this paper come from?"
never answer, "From Google."

I'd be breaking into a cold sweat for you down in the audience.

So how do you find out where a paper was published? After you enter the paper's title into Google Scholar, you should see a result like this.


See the part highlighted by the red box? This paper comes from the journal IEEE Computer. Click through and you will land on the IEEE page shown below.



The part highlighted by the red box is where this paper was published. When we cite this paper in a reference list, in IEEE format it would be written as:

C. Merkel, R. Hasan, N. Soures, D. Kudithipudi, T. Taha, S. Agarwal, and M. Marinella, "Neuromemristive systems: Boosting efficiency through brain-inspired computing," IEEE Computer, vol. 49, no. 10, pp. 56-64, Oct. 2016.

So next time, don't tell your professor the paper came from Google.

Junior: "Professor, today I really am presenting a paper that comes from Google. It's about the TPU."



Wednesday, January 8, 2020

[Paper List] RRAM Model


RRAM Cell Model


One-Dimensional Conductive Path

Title: Device and SPICE modeling of RRAM devices
Authors: Sheridan, Patrick and Kim, Kuk-Hwan and Gaba, Siddharth and Chang, Ting and Chen, Lin and Lu, Wei
Source: Nanoscale
Year: 2011

Title: A SPICE compact model of metal oxide resistive switching memory with variations
Authors: Guan, Ximeng and Yu, Shimeng and Wong, H-S Philip
Source: IEEE Electron Device Letters
Year: 2012

Title: A neuromorphic visual system using RRAM synaptic devices with sub-pJ energy and tolerance to variability: Experimental characterization and large-scale modeling
Authors: Yu, Shimeng and Gao, Bin and Fang, Zheng and Yu, Hongyu and Kang, Jinfeng and Wong, H-S Philip
Source: 2012 International Electron Devices Meeting
Year: 2012

Title: Verilog-A compact model for oxide-based resistive random access memory (RRAM)
Authors: Jiang, Zizhen and Yu, Shimeng and Wu, Yi and Engel, Jesse H and Guan, Ximeng and Wong, H-S Philip
Source: 2014 International Conference on Simulation of Semiconductor Processes and Devices (SISPAD)
Year: 2014
Notes: models cycle-to-cycle variation; the released source code seems to be problematic

Title: Compact modeling of RRAM devices and its applications in 1T1R and 1S1R array design
Authors: Chen, Pai-Yu and Yu, Shimeng
Source: IEEE Transactions on Electron Devices
Year: 2015
Notes: the temperature dynamics equation in the source code seems to be problematic


Two-Dimensional Conductive Path

Title: A SPICE model of resistive random access memory for large-scale memory array simulation
Authors: Li, Haitong and Huang, Peng and Gao, Bin and Chen, Bing and Liu, Xiaoyan and Kang, Jinfeng
Source: IEEE Electron Device Letters
Year: 2013
Notes: includes RC models of both the cell and the array

Title: Variation-aware, reliability-emphasized design and optimization of RRAM using SPICE model
Authors: Li, Haitong and Jiang, Zizhen and Huang, Peng and Wu, Yi and Chen, H-Y and Gao, Bin and Liu, XY and Kang, JF and Wong, H-SP
Source: 2015 Design, Automation & Test in Europe Conference & Exhibition (DATE)
Year: 2015
Notes: includes RC models of both the cell and the array

RRAM Array Model


1T1R Array

Title: An N40 256Kx44 embedded RRAM macro with SL-precharge SA and low-voltage current limiter to improve read and write performance
Authors: Chou, Chung-Cheng and Lin, Zheng-Jun and Tseng, Pei-Ling and Li, Chih-Feng and Chang, Chih-Yang and Chen, Wei-Chi and Chih, Yu-Der and Chang, Tsung-Yung Jonathan
Source: 2018 IEEE International Solid-State Circuits Conference-(ISSCC)
Year: 2018
Notes: adopts the common source line architecture

Title: A 40nm 2Mb ReRAM Macro with 85% Reduction in FORMING Time and 99% Reduction in Page-Write Time Using Auto-FORMING and Auto-Write Schemes
Authors: Chiu, Yen-Cheng and Hu, Han-Wen and Lai, Li-Ya and Huang, Tsung-Yuan and Kao, Hui-Yao and Chang, Kuang-Tang and Ho, Mon-Shu and Chou, Chung-Cheng and Chih, Yu-Der and Chang, Tsung-Yung and others
Source: 2019 Symposium on VLSI Technology
Year: 2019
Notes: adopts the common source line architecture

Crossbar Array

Title: Modeling and analysis of passive switching crossbar arrays
Authors: Fouda, Mohammed E and Eltawil, Ahmed M and Kurdahi, Fadi
Source: IEEE Transactions on Circuits and Systems I: Regular Papers
Year: 2017
Notes: includes RC models of the array; see the sketch below
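
For context on why array-level RC modeling matters: an idealized crossbar (no wire resistance, no sneak paths) computes a vector-matrix product in a single read, with each bit-line current summing the word-line voltages weighted by the cell conductances. Below is a minimal sketch of that ideal case; it is my own illustration, and the models in the papers above capture the non-idealities it ignores.

import numpy as np

def crossbar_read(v_wl, G):
    """Ideal crossbar read-out: by Kirchhoff's current law, the current on
    bit line j is I_j = sum_i V_i * G_ij, i.e. a vector-matrix multiply
    between word-line voltages and the conductance matrix."""
    return v_wl @ G  # (rows,) @ (rows, cols) -> (cols,) column currents

G = np.random.uniform(1e-6, 1e-4, size=(4, 3))  # cell conductances [S]
v = np.array([0.2, 0.0, 0.2, 0.2])              # word-line read voltages [V]
print(crossbar_read(v, G))                       # bit-line currents [A]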


