Figure 1: Neural Decoder Architecture for EEG-to-Image Reconstruction
The decoder is a multi-layer perceptron with an expand-and-contract (inverted bottleneck) hidden structure that transforms 512-dimensional EEG embeddings
into 28×28 pixel reconstructions. Each hidden layer applies ReLU activation followed by dropout regularization (p=0.2)
and batch normalization for stable training dynamics.
Input Layer: 512D EEG embeddings (enhanced transformer features)
  → Hidden Layer 1: Linear + ReLU, 1024D, Dropout (0.2), BatchNorm1d
  → Hidden Layer 2: Linear + ReLU, 2048D, Dropout (0.2), BatchNorm1d (inverted bottleneck: widest layer)
  → Hidden Layer 3: Linear + ReLU, 2048D → 1024D, Dropout (0.2), BatchNorm1d
  → Output Layer: Linear + Tanh, 784D, reshaped to a 28×28 image
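As a concrete reading of the figure, the following is a minimal PyTorch sketch of the decoder. The class name EEGImageDecoder and the Linear → ReLU → Dropout → BatchNorm ordering are inferred from the layer labels above, not taken from a released codebase.

```python
import torch
import torch.nn as nn

class EEGImageDecoder(nn.Module):
    """MLP decoder mapping 512D EEG embeddings to 28x28 reconstructions."""

    def __init__(self, embed_dim: int = 512, dropout: float = 0.2):
        super().__init__()
        self.net = nn.Sequential(
            # Hidden Layer 1: 512 -> 1024
            nn.Linear(embed_dim, 1024),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.BatchNorm1d(1024),
            # Hidden Layer 2: 1024 -> 2048 (widest, inverted-bottleneck stage)
            nn.Linear(1024, 2048),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.BatchNorm1d(2048),
            # Hidden Layer 3: 2048 -> 1024
            nn.Linear(2048, 1024),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.BatchNorm1d(1024),
            # Output Layer: 1024 -> 784, Tanh keeps pixel values in [-1, 1]
            nn.Linear(1024, 784),
            nn.Tanh(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 512) EEG embeddings -> (batch, 1, 28, 28) images
        return self.net(x).view(-1, 1, 28, 28)
```

A quick smoke test: EEGImageDecoder()(torch.randn(8, 512)) returns a tensor of shape (8, 1, 28, 28). Note that BatchNorm1d requires a batch size greater than one in training mode.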
Model Complexity Analysis:
Total Parameters: 5,526,304 (≈ 5.5M)
Computational Complexity: O(d₁d₂ + d₂d₃ + d₃d₄ + d₄d₅), where (d₁, …, d₅) = (512, 1024, 2048, 1024, 784) are the successive layer widths
Memory Footprint: ~22 MB (FP32, 4 bytes per parameter)
Forward Pass Operations: ~11.4 MFLOPs per sample
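These headline numbers can be sanity-checked from the layer widths alone; the back-of-the-envelope script below (plain Python, no framework required) counts linear-layer parameters and estimates forward-pass cost at roughly two FLOPs per multiply-accumulate.

```python
dims = [512, 1024, 2048, 1024, 784]  # d1..d5: layer widths from the figure

# Linear-layer parameters: d_in * d_out weights plus d_out biases per layer.
params = sum(d_in * d_out + d_out for d_in, d_out in zip(dims, dims[1:]))
print(f"linear parameters: {params:,}")  # 5,526,288 -> ~5.5M, ~22 MB at FP32

# Forward pass: one multiply-accumulate per weight, ~2 FLOPs per MAC.
macs = sum(d_in * d_out for d_in, d_out in zip(dims, dims[1:]))
print(f"~{2 * macs / 1e6:.1f} MFLOPs per sample")  # ~11.0 MFLOPs
```

The small gaps relative to the stated totals (5,526,304 parameters, ~11.4 MFLOPs) are consistent with the figure also counting batch-normalization parameters and elementwise activation operations.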