[PR] Temporal-Spatial-Frequency Depth Extraction of Brain-Computer Interface based on mental tasks
Updated:
Journal : Biomedical Signal Processing and Control
Introduction
Prior problems:
Too many parameters (DNN)
CNN+LSTM : 2D convolutions are too time-consuming, or temporal-spatial-frequency features are not embedded jointly
Contribution :
- Proposes a DNN architecture that jointly extracts temporal, spatial, and frequency features
- Implements and compares CNN→LSTM in series and parallel configurations (series works better)
- Compact CNN architecture
Method
Preprocessing :
- 4–35 Hz 6th-order Butterworth band-pass filter
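A minimal sketch of this filtering step with SciPy; the 250 Hz sampling rate and the zero-phase (filtfilt) choice are my assumptions, not stated in the notes.

```python
# Sketch of the 4-35 Hz 6th-order Butterworth band-pass step.
# fs=250 Hz is an assumed sampling rate; zero-phase filtering is my choice.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_eeg(eeg, fs=250.0, low=4.0, high=35.0, order=6):
    """eeg: array of shape (channels, time_points); returns the filtered signal."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    # Filter along the time axis; filtfilt avoids phase distortion.
    return sosfiltfilt(sos, eeg, axis=-1)

# Example: 22 channels, 1000 time points of synthetic data.
filtered = bandpass_eeg(np.random.randn(22, 1000))
```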
CNN:
- Weight sharing and sparse connectivity are the main advantages of CNNs.
- Because EEG datasets are small, keeping the network compact is important.
- Architecture
- Compact CNN
- Uses a 2D temporal convolution: 8 filters of size (1, 64)
- Spatial embedding via a depthwise convolution (depth multiplier = 2)
- Separable convolution: (1, 32) depthwise + (1, 1) pointwise
So this is basically EEGNet? (a rough sketch of this block follows after the architecture list)
- Shallow CNN
- Shallow ConvNet
- Compact CNN
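The compact CNN block reads like the EEGNet recipe, so here is a rough Keras sketch of how I picture it. The channel count (22), segment length (200), pooling sizes, and ELU activations are my assumptions, not taken from the paper.

```python
# Hedged sketch of the compact CNN: 8 temporal filters of size (1, 64),
# depthwise spatial convolution with depth multiplier 2, then a separable
# (1, 32) + (1, 1) convolution. Shapes and activations are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def compact_cnn(n_channels=22, n_samples=200):
    inp = layers.Input(shape=(n_channels, n_samples, 1))
    # Temporal convolution: 8 filters spanning 64 time points per channel.
    x = layers.Conv2D(8, (1, 64), padding="same", use_bias=False)(inp)
    x = layers.BatchNormalization()(x)
    # Depthwise convolution across the electrode axis (spatial embedding);
    # depth multiplier 2 -> 16 feature maps.
    x = layers.DepthwiseConv2D((n_channels, 1), depth_multiplier=2,
                               use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("elu")(x)
    x = layers.AveragePooling2D((1, 4))(x)
    # Separable convolution: (1, 32) depthwise followed by (1, 1) pointwise.
    x = layers.SeparableConv2D(16, (1, 32), padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("elu")(x)
    x = layers.AveragePooling2D((1, 8))(x)
    return models.Model(inp, x)
```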
LSTM :
Two stacked LSTM layers are used
SCCRNN (series compact convolutional RNN) : the compact CNN feature maps are fed into the stacked LSTM in series
PCCRNN (parallel compact convolutional RNN) : the CNN branch and the LSTM branch run in parallel and their features are concatenated
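How I understand the series (SCCRNN) vs. parallel (PCCRNN) wiring, as a hedged Keras sketch reusing compact_cnn from the block above; the 64-unit LSTMs and 4-class output are my guesses, not values from the paper.

```python
# Sketch of series vs. parallel CNN+LSTM wiring (my reading of SCCRNN/PCCRNN).
from tensorflow.keras import layers, models

def sccrnn(n_channels=22, n_samples=200, n_classes=4):
    # Series: the compact CNN output feeds the two stacked LSTM layers.
    cnn = compact_cnn(n_channels, n_samples)        # from the sketch above
    x = layers.Reshape((-1, 16))(cnn.output)        # (time_steps, features)
    x = layers.LSTM(64, return_sequences=True)(x)   # first stacked LSTM
    x = layers.LSTM(64)(x)                          # second stacked LSTM
    out = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(cnn.input, out)

def pccrnn(n_channels=22, n_samples=200, n_classes=4):
    # Parallel: CNN features and LSTM-on-raw-signal features are concatenated.
    inp = layers.Input(shape=(n_channels, n_samples, 1))
    cnn_feat = layers.Flatten()(compact_cnn(n_channels, n_samples)(inp))
    seq = layers.Permute((2, 1))(layers.Reshape((n_channels, n_samples))(inp))
    lstm_feat = layers.LSTM(64)(layers.LSTM(64, return_sequences=True)(seq))
    x = layers.Concatenate()([cnn_feat, lstm_feat])
    out = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inp, out)
```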
Results
- Segmentation : each 1000-time-point trial is split into 5 non-overlapping segments of 200 time points (see the NumPy sketch after this list)
- Feeding the raw signal straight into the LSTM works poorly → concatenating CNN features with LSTM features is also poor: the two feature sets differ too much in scale, so the smaller feature values end up being treated as noise
- Kappa value : (acc - p) / (1 - p), where p = 0.25 for 4 classes and p = 0.5 for 2 classes (a small helper follows after this list)
- The authors' claim: the compact CNN extracts spectral-spatial features and the LSTM extracts temporal features, which is why their performance is good!
- But the average accuracy is only 64%????
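The segmentation bullet above, as a small NumPy sketch; the (trials, channels, time) array layout and the example dimensions are my assumptions.

```python
# Cut each 1000-point trial into five non-overlapping 200-point segments.
import numpy as np

def segment_trials(trials, seg_len=200):
    """trials: (n_trials, n_channels, 1000) -> (n_trials * 5, n_channels, seg_len)."""
    n_trials, n_channels, n_time = trials.shape
    n_segs = n_time // seg_len                      # 1000 // 200 = 5
    segs = trials[:, :, :n_segs * seg_len].reshape(
        n_trials, n_channels, n_segs, seg_len)
    # Move the segment axis next to trials, then merge: each segment is a sample.
    return segs.transpose(0, 2, 1, 3).reshape(n_trials * n_segs, n_channels, seg_len)

# Example dimensions (288 trials, 22 channels) are arbitrary here.
segments = segment_trials(np.random.randn(288, 22, 1000))   # -> (1440, 22, 200)
```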
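And the kappa formula from the bullet above as a tiny helper, using the chance level p = 1 / n_classes given in the notes.

```python
# Kappa value: (acc - p) / (1 - p), with p the chance level for balanced classes.
def kappa(acc, n_classes):
    p = 1.0 / n_classes          # 0.25 for 4 classes, 0.5 for 2 classes
    return (acc - p) / (1.0 - p)

print(kappa(0.64, 4))            # mean accuracy ~64% -> kappa ~0.52
```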