Alexandros Sfakianakis
Otorhinolaryngologist
Anapafseos 5, Agios Nikolaos
Crete 72100
00302841026182
00306932607174
alsfakia@gmail.com




Tuesday, 14 March 2017

Improving zero-training brain-computer interfaces by mixing model estimators.


J Neural Eng. 2017 Mar 13.

Authors: Verhoeven T, Hübner D, Tangermann M, Mueller KR, Dambre J, Kindermans PJ

Abstract
OBJECTIVE: Brain-computer interfaces (BCI) based on event-related potentials (ERP) incorporate a decoder to classify recorded brain signals and subsequently select a control signal that drives a computer application. Standard supervised BCI decoders require a tedious calibration procedure prior to every session. Several unsupervised classification methods have been proposed that tune the decoder during actual use and as such omit this calibration. Each of these methods has its own strengths and weaknesses. Our aim is to improve overall accuracy of ERP-based BCIs without calibration.
APPROACH: We consider two approaches for unsupervised classification of ERP signals. Learning from label proportions (LLP) was recently shown to be guaranteed to converge to a supervised decoder when enough data is available. In contrast, the previously proposed expectation maximization (EM) based decoding for ERP-BCI does not have this guarantee. However, while this decoder has high variance due to random initialization of its parameters, it obtains a higher accuracy faster than LLP when the initialization is good. We introduce a method to optimally combine these two unsupervised decoding methods, letting one method's strengths compensate for the weaknesses of the other and vice versa. The new method is compared to the aforementioned methods in a resimulation of an experiment with a visual speller.
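The abstract does not spell out the combination rule, so as an illustration only: a standard way to "optimally combine" two estimates of the same quantity (here, hypothetically, the LLP and EM estimates of a classifier parameter) is inverse-variance weighting, where the lower-variance estimator receives the larger weight. The function name and the toy numbers below are assumptions for the sketch, not the paper's actual implementation.

```python
def mix_estimators(mu_a, var_a, mu_b, var_b):
    """Inverse-variance weighted combination of two estimates of the
    same quantity. The weight of each estimate is proportional to the
    reciprocal of its variance, so the more reliable estimate dominates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    gamma = w_a / (w_a + w_b)          # mixing coefficient for estimate A
    mixed = gamma * mu_a + (1.0 - gamma) * mu_b
    mixed_var = 1.0 / (w_a + w_b)      # variance of the combined estimate
    return mixed, mixed_var

# Toy example: estimator A (variance 4.0) is noisier than B (variance 1.0),
# so the mix sits much closer to B's estimate.
mu, var = mix_estimators(mu_a=1.0, var_a=4.0, mu_b=0.0, var_b=1.0)
print(mu, var)  # 0.2 0.8
```

Note that the combined variance (0.8) is smaller than either input variance, which mirrors the abstract's claim that the mixed decoder is less dependent on any single method's weakness.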
MAIN RESULTS: Analysis of the experimental results shows that the new method exceeds the performance of the previous unsupervised classification approaches in terms of ERP classification accuracy and symbol selection accuracy during the spelling experiment. Furthermore, the method shows less dependency on random initialization of model parameters and is consequently more reliable.
SIGNIFICANCE: Improving the accuracy and subsequent reliability of calibrationless BCIs makes these systems more appealing for frequent use.

PMID: 28287076 [PubMed - as supplied by publisher]



http://ift.tt/2nzYzsd

