## Abstract
Advancements in bionic technology are transforming the possibilities for restoring hand function in individuals with amputations or paralysis. This paper introduces a **cost-effective bionic arm** design that leverages **mind-controlled functionality** and integrates a **sense of touch** to replicate natural hand movements. The system utilizes a **non-invasive EEG-based control mechanism**, enabling users to operate the arm with brain signals processed into PWM commands for its servo motors. Additionally, the design incorporates a touch sensor (tactile feedback) in the gripper, offering sensory feedback to enhance user safety and dexterity<d-cite key="nprnews"></d-cite>.

The proposed bionic arm prioritizes three essential features:

1. **Integrated Sensory Feedback**: Providing users with a tactile experience that mimics the sense of touch (signals going directly to the brain). This capability is crucial for safe object manipulation and injury prevention.
2. **Mind-Control Potential**: Harnessing EEG signals for seamless, thought-driven operation.
This novel approach aims to deliver an intuitive, natural, and efficient solution.

## Methodology
### 1. Data Collection and Dataset Overview
The model development utilized a publicly available EEG dataset comprising data from **60 volunteers** performing **8 distinct activities**<d-cite key="asanza2023"></d-cite>. The dataset includes a total of **8,680 four-second EEG recordings**, collected using **16 dry electrodes** configured according to the **international 10-10 system**<d-cite key="asanza2023"></d-cite>.

* Electrode Configuration: Monopolar, with each electrode's potential measured relative to neutral electrodes placed on both earlobes (ground references).
* Signal Sampling: EEG signals were sampled at **125 Hz** and preprocessed using:
  - **A bandpass filter (5–50 Hz)** to isolate relevant frequencies<d-cite key="asanza2023"></d-cite>.
  - **A notch filter (60 Hz)** to remove powerline interference<d-cite key="asanza2023"></d-cite>.
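These filters were applied by the dataset's authors before release; purely to make the step concrete, here is a minimal Python sketch of equivalent filtering with SciPy (the Butterworth order and the notch quality factor are assumptions, not values from the dataset description):

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 125  # sampling rate of the EEG recordings, in Hz

def preprocess_channel(eeg: np.ndarray) -> np.ndarray:
    """Band-pass one EEG channel to 5-50 Hz and notch out 60 Hz mains interference."""
    # 4th-order Butterworth band-pass (order chosen arbitrarily for this sketch).
    b_bp, a_bp = butter(4, [5, 50], btype="bandpass", fs=FS)
    filtered = filtfilt(b_bp, a_bp, eeg)
    # Narrow notch at 60 Hz (quality factor is an assumed value).
    b_n, a_n = iirnotch(60, Q=30, fs=FS)
    return filtfilt(b_n, a_n, filtered)

# Example on synthetic data: one 4-second recording at 125 Hz.
clean = preprocess_channel(np.random.randn(4 * FS))
```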
### 2. Data Preprocessing
The dataset, originally provided in **CSV format**, underwent a comprehensive preprocessing workflow:
* The data was split into individual CSV files for each of the 16 channels, resulting in an increase from **74,441** files to **1,191,056** files.
* Each individual channel's EEG data was converted into **audio signals** and saved in **.wav format**, allowing the brain signals to be analyzed audibly (a minimal conversion sketch follows this list).
* The entire preprocessing workflow was implemented in **Python** to ensure scalability and accuracy.
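As a rough sketch of that conversion (the file names, the single-column CSV layout, the 16-bit scaling, and reuse of the 125 Hz rate as the .wav sample rate are all assumptions rather than details taken from the project), one channel could be written out as audio like this:

```python
import numpy as np
from scipy.io import wavfile

FS = 125  # EEG sampling rate reused as the .wav sample rate (an assumption for this sketch)

def channel_csv_to_wav(csv_path: str, wav_path: str) -> None:
    """Write one channel's EEG samples (assumed one value per row) as a 16-bit .wav file."""
    samples = np.loadtxt(csv_path, delimiter=",")
    peak = np.max(np.abs(samples))
    scaled = np.int16(samples / peak * 32767) if peak > 0 else np.int16(samples)
    wavfile.write(wav_path, FS, scaled)

# Hypothetical usage for one subject, run, and channel:
# channel_csv_to_wav("S01_run1_ch01.csv", "S01_run1_ch01.wav")
```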
The dataset captured brainwave signals corresponding to the following activities:
1. **BEO** (Baseline with Eyes Open): One-time recording at the beginning of each run<d-cite key="asanza2023"></d-cite>.
2. **CLH** (Closing Left Hand): Five recordings per run<d-cite key="asanza2023"></d-cite>.
3. **CRH** (Closing Right Hand): Five recordings per run<d-cite key="asanza2023"></d-cite>.
4. **DLF** (Dorsal Flexion of Left Foot): Five recordings per run<d-cite key="asanza2023"></d-cite>.
5. **PLF** (Plantar Flexion of Left Foot): Five recordings per run<d-cite key="asanza2023"></d-cite>.
6. **DRF** (Dorsal Flexion of Right Foot): Five recordings per run<d-cite key="asanza2023"></d-cite>.
7. **PRF** (Plantar Flexion of Right Foot): Five recordings per run<d-cite key="asanza2023"></d-cite>.
8. **Rest**: Recorded between each task to capture the resting state<d-cite key="asanza2023, gigadb"></d-cite>.
### 3. Feature Extraction and Classification
Feature extraction and activity classification were performed using **transfer learning** with **YamNet**<d-cite key="yamnetgithub"></d-cite>, a deep neural network model.
* **Audio Representation**: Audio files were imported into **MATLAB** using an **Audio Datastore**<d-cite key="audiodatastore"></d-cite>. Mel-spectrograms, a time-frequency representation of the audio signals, were extracted using the yamnetPreprocess function<d-cite key="yamnetpreprocess, transferlearningmatlab"></d-cite> (an illustrative sketch follows this list).
* **Dataset Split**: The data was divided into **training (70%)**, **validation (20%)**, and **testing (10%)** sets<d-cite key="transferlearningmatlab"></d-cite>.
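The project performs this step in MATLAB via yamnetPreprocess; as a loose Python analogue rather than the actual pipeline, a comparable log-mel spectrogram can be computed with librosa (the 16 kHz resampling, 25 ms window, 10 ms hop, 64 mel bands, and the file name are assumptions that mirror YAMNet's usual input format):

```python
import numpy as np
import librosa

# Resample the converted EEG audio to 16 kHz, the rate YAMNet-style models expect.
waveform, sr = librosa.load("S01_run1_ch01.wav", sr=16000)

# Mel-spectrogram: 25 ms windows, 10 ms hop, 64 mel bands (YAMNet-like settings).
mel = librosa.feature.melspectrogram(
    y=waveform, sr=sr, n_fft=400, hop_length=160, n_mels=64
)
log_mel = np.log(mel + 1e-6)  # log compression, as spectrogram front ends typically apply
print(log_mel.shape)          # (64 mel bands, number of frames)
```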
Transfer Learning with YamNet:
- The **pre-trained YamNet model** (86 layers)<d-cite key="yamnetgithub"></d-cite> was adapted for an 8-class classification task:
  + The initial layers of YamNet were **frozen** to retain previously learned representations<d-cite key="transferlearningmatlab, yamnetgithub"></d-cite>.
  + A **new classification layer** was added to the model<d-cite key="transferlearningmatlab"></d-cite>.
- Training details (a brief sketch of this training setup follows the list):
  + **Learning Rate**: Initial rate of **3e-4**, with an exponential learning rate decay schedule<d-cite key="transferlearningmatlab"></d-cite>.
  + **Mini-Batch Size**: 128 samples per batch<d-cite key="transferlearningmatlab"></d-cite>.
  + **Validation**: Performed every **651 iterations**.
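The training above was done in MATLAB; the sketch below is a loosely analogous Python/TensorFlow version of the same freeze-and-replace idea, not the project's code: a pre-trained YAMNet from TensorFlow Hub stays frozen as a feature extractor while only a new 8-class layer is trained with the reported learning rate and batch size (the hub model, mean-pooled embeddings, decay parameters, and placeholder arrays are assumptions).

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Pre-trained YAMNet from TensorFlow Hub; its weights are never updated,
# which plays the role of the "frozen" initial layers.
yamnet = hub.load("https://tfhub.dev/google/yamnet/1")

def embed(waveform_16k: np.ndarray) -> np.ndarray:
    """Mean-pool YAMNet's 1024-dim embeddings for one mono float32 waveform at 16 kHz."""
    _, embeddings, _ = yamnet(waveform_16k)
    return embeddings.numpy().mean(axis=0)

# New trainable classification layer for the 8 activity classes.
head = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="softmax", input_shape=(1024,)),
])

# Reported settings: 3e-4 initial learning rate with exponential decay and
# mini-batches of 128 (decay_steps and decay_rate here are assumed values).
lr = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=3e-4, decay_steps=1000, decay_rate=0.9
)
head.compile(
    optimizer=tf.keras.optimizers.Adam(lr),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# X_train / y_train would hold stacked embeddings and integer activity labels.
# head.fit(X_train, y_train, batch_size=128, validation_data=(X_val, y_val))
```

Keeping the backbone frozen keeps training cheap and lets the comparatively small labeled set go entirely into fitting the new output layer, which is the main appeal of this transfer-learning setup.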