Hebbian learning in linear-nonlinear networks with tuning curves leads to near-optimal, multi-alternative decision making

Abstract

Optimal performance and physically plausible mechanisms for achieving it have been completely characterized for a general class of two-alternative, free-response decision-making tasks, and data suggest that humans can implement the optimal procedure. The situation is more complicated when the number of alternatives exceeds two and subjects are free to respond at any time, partly because no generally applicable statistical test exists for deciding optimally in such cases. Here, too, however, analytical approximations to optimality that are physically and psychologically plausible have been analyzed. These analyses leave open questions that have only begun to be addressed: (1) How are near-optimal model parameterizations learned from experience? (2) What if a continuum of decision alternatives exists? (3) How can neurons’ broad tuning curves be incorporated into an optimal-performance theory? We present a possible answer to all of these questions in the form of an extremely simple, reward-modulated Hebbian learning rule by which a neural network learns to approximate the multihypothesis sequential probability ratio test.
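
To make the abstract's two ingredients concrete, the sketch below pairs a direct implementation of the multihypothesis sequential probability ratio test (MSPRT) with a reward-modulated, three-factor Hebbian update on a bank of Gaussian tuning curves. This is a minimal illustration under stated assumptions, not the paper's actual model: the Gaussian observation model, the tuning-curve parameters, the winner-take-all readout, and all constants (N, M, eta, threshold) are assumptions chosen for brevity.

import numpy as np

rng = np.random.default_rng(0)

# ----- MSPRT sketch: N alternatives, Gaussian observations (assumed model) -----
N = 4                        # number of alternatives
means = np.eye(N)            # assumed mean observation vector under each hypothesis
sigma = 1.0                  # assumed observation noise
threshold = 0.99             # posterior probability required to commit to a choice

def msprt_trial(true_h, max_steps=1000):
    """Accumulate log likelihoods; stop when one posterior crosses threshold."""
    log_l = np.zeros(N)
    for t in range(1, max_steps + 1):
        x = rng.normal(means[true_h], sigma)  # one noisy observation vector
        # log-likelihood increment of x under each hypothesis
        log_l += -np.sum((x - means) ** 2, axis=1) / (2.0 * sigma ** 2)
        post = np.exp(log_l - log_l.max())
        post /= post.sum()                    # normalized posteriors
        if post.max() >= threshold:
            break
    return int(post.argmax()), t              # (choice, decision time)

# ----- Reward-modulated Hebbian learning of a linear-nonlinear readout -----
M = 32                                        # tuned input units (assumption)
prefs = np.linspace(0.0, 1.0, M)              # preferred stimuli of input units
width = 0.1                                   # tuning-curve width (assumption)
W = rng.normal(0.0, 0.01, size=(N, M))        # readout weights
eta = 0.05                                    # learning rate (assumption)

def tuning(s):
    """Gaussian tuning curves: presynaptic rates evoked by stimulus s."""
    return np.exp(-((s - prefs) ** 2) / (2.0 * width ** 2))

for trial in range(2000):
    h = rng.integers(N)                        # true alternative on this trial
    s = (h + 0.5) / N + rng.normal(0.0, 0.05)  # noisy stimulus for h (assumption)
    f = tuning(s)                              # presynaptic activity
    choice = int(np.argmax(W @ f))             # winner-take-all decision
    r = 1.0 if choice == h else -1.0           # binary reward signal
    W[choice] += eta * r * f                   # reward x pre x (chosen) post

In this toy setting the Hebbian rule strengthens weights from active tuned inputs onto the chosen readout unit only when the choice is rewarded, which is the sense in which a reward signal gates an otherwise local pre-times-post update; the paper's claim is that such a rule can drive a linear-nonlinear network toward approximating the MSPRT computation that msprt_trial performs explicitly.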

Publisher

Elsevier

Publication Date

January 1, 2011

Publication Title

Neural Networks

Department

Neuroscience

Document Type

Article

DOI

https://dx.doi.org/10.1016/j.neunet.2011.01.005

Language

English

Format

text
