Synthetic System Theory and a Cortex-Like Learning Machine


March 5, 2016

James Ting-Ho Lo
Professor
Department of Mathematics and Statistics
University of Maryland Baltimore County

Thursday, March 3, 2016
2:00pm
800 22nd Street NW, SEH B1220
Washington, DC 20052

Hosted by: Professor James Lee ([email protected])

Abstract
A synthetic approach based on deep learning machines is discussed for nonlinear filtering/prediction, system identification/control, time-series analysis, and fault/change-point detection/identification. Exponential convexification is presented for tilting plateaus and eliminating saddle points and non-global local minima on the surface of the error criterion used for training deep learning machines. Numerical results illustrate the effectiveness of the approach and of the convexification method.
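To give a flavor of what "exponential convexification" can mean (this is a minimal illustrative sketch, not necessarily the speaker's exact formulation), one published family of such methods replaces the mean squared error with a risk-averting exponential criterion controlled by a parameter `lam`: as `lam` grows, more of the criterion's surface becomes convex, and as `lam` shrinks toward zero the criterion reduces to the ordinary mean squared error. The function name and the log-sum-exp normalization below are illustrative choices:

```python
import numpy as np

def risk_averting_error(residuals, lam):
    """Exponential (risk-averting) error criterion.

    Computes (1/lam) * log( mean( exp(lam * e_i^2) ) ) over the
    residuals e_i.  As lam -> 0 this tends to the mean squared
    error; larger lam weights large errors exponentially more,
    which is the source of the convexifying effect.  A log-sum-exp
    form is used to avoid overflow for large lam.
    """
    z = lam * np.square(residuals)
    m = z.max()
    return (m + np.log(np.mean(np.exp(z - m)))) / lam
```

In a training loop one would typically start with a large `lam` (heavily convexified surface), minimize, and then gradually decrease `lam` toward the original least-squares criterion, a schedule the abstract's companion term "deconvexification" suggests.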
For cognitive computing (e.g., computer vision and speech processing), a computational model of biological neural networks is proposed that integrates novel models of dendrites, spiking/nonspiking somas, synapses, unsupervised/supervised learning rules, and maximal-generalization mechanisms. The model explains how these networks encode, learn, memorize, recall, and generalize. As a learning machine, it has a large number of properties highly desirable for artificial intelligence.


Biography:
Dr. James Lo is a Professor in the Department of Mathematics and Statistics at the University of Maryland Baltimore County. He received his Ph.D. from the University of Southern California and was a Postdoctoral Research Associate at Stanford University and Harvard University. His research interests include optimal filtering, system control and identification, active noise and vibration control, and computational intelligence. In 1992, he solved the long-standing problem of optimal nonlinear filtering in its most general setting, for which he received a best paper award. Subsequently, he conceived adaptive neural networks with long- and short-term memories, accommodative neural networks for adaptive processing without online processor adjustment, and robust adaptive/accommodative neural networks with a continuous spectrum of robustness, and proved that they are universal approximators of dynamical systems for adaptive, accommodative, and/or robust identification, control, and filtering.

He has been developing convexification and deconvexification methods for avoiding poor local minima in data fitting (e.g., training neural networks and estimating regression models), with the aim of removing a main obstacle in the neural network approach and in nonlinear regression in statistics.

In recent years, Dr. Lo developed a functional model and a low-order model of biological neural networks. The former, called the Temporal Hierarchical Probabilistic Associative Memory (THPAM) and the Clustering Interpreting Probabilistic Associative Memory (CIPAM), is a new paradigm of learning machines. The latter, the low-order model, comprises biologically plausible models of dendritic nodes/trees, synapses, spiking/nonspiking somas, unsupervised/supervised learning mechanisms, a maximal-generalization scheme, and feedback connections with different delay durations; these components integrate into a biologically plausible learning/retrieving algorithm and answer numerous fundamental questions in neuroscience.