SNU Biointelligence Lab

Course

Artificial Neural Networks (인공신경망), 2025-2

Course Objectives

We study “self-learning” networks, i.e., models that learn in an unsupervised or self-supervised way, without the help of an explicit teacher. These models are neurobiologically inspired and are typically self-organizing, dynamic, recurrent, or auto-encoding networks. We examine the principles of neural learning algorithms through historical models such as Willshaw–von der Malsburg feature maps, Linsker's models, Kohonen's self-organizing maps, Grossberg's models, recurrent networks, Anderson's brain-state-in-a-box, actor-critic networks, Hopfield's associative memory, Boltzmann machines, and deep belief networks.
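As a small illustration of one of the models listed above, the following is a minimal sketch of Kohonen's self-organizing map (SOM): each input is matched to its best-matching unit on a 2-D grid, and the winner and its neighbors are pulled toward the input. The grid size, learning-rate schedule, and neighborhood width here are illustrative choices, not course settings.

```python
import numpy as np

def som_train(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small 2-D Kohonen self-organizing map on `data` (n, d)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))            # codebook vectors
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)  # grid positions
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1 - t / n_steps)                   # decaying rate
            sigma = sigma0 * (1 - t / n_steps) + 1e-2      # shrinking radius
            # best-matching unit: grid cell whose codebook vector is closest to x
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), (h, w))
            # Gaussian neighborhood around the BMU, measured on the grid
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            # pull the winner and its neighbors toward the input
            weights += lr * g[..., None] * (x - weights)
            t += 1
    return weights

# usage: fit the map to 200 random 2-D points in the unit square
data = np.random.default_rng(1).random((200, 2))
w = som_train(data)
print(w.shape)
```

After training, nearby grid cells hold similar codebook vectors, which is the topology-preserving property these feature maps are studied for.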

We study mathematical tools for the approximation and optimization of neural learning models. These include information-theoretic concepts such as maximum entropy, mutual information, and KL divergence, as well as statistical-mechanical methods such as Markov chains, the Metropolis algorithm, Gibbs sampling, and simulated annealing. We also examine neurodynamic models of self-supervised, end-to-end learning for challenging problems such as time-series prediction and reconstruction. These include Markov decision processes, approximate dynamic programming, reinforcement learning, sequential Bayesian estimation, Kalman filtering, particle filtering, real-time recurrent learning, and dynamic reconstruction of chaotic processes.
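To give a flavor of the sequential Bayesian estimation topic above, here is a one-dimensional Kalman filter sketch: a random-walk state model with Gaussian measurement noise, where each step predicts, then folds in the new observation weighted by the Kalman gain. The noise variances `q` and `r` and the constant ground truth are assumptions made only for this example.

```python
import numpy as np

def kalman_1d(observations, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    """Filter a scalar signal: state estimate x with variance p."""
    x, p = x0, p0
    estimates = []
    for z in observations:
        # predict: random-walk dynamics x_t = x_{t-1} + process noise (var q)
        p = p + q
        # update: blend prediction with measurement z (noise var r)
        k = p / (p + r)          # Kalman gain in [0, 1]
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

# usage: recover a constant value from noisy measurements
rng = np.random.default_rng(0)
truth = 1.0
zs = truth + rng.normal(0.0, 0.3, size=200)   # noisy observations
est = kalman_1d(zs, r=0.3 ** 2)
print(est[-1])
```

The filtered sequence fluctuates far less around the truth than the raw measurements do, which is the variance-reduction behavior that motivates the Bayesian filtering weeks in the schedule.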

Textbooks

Grading Policy

Category        Weight
Attendance      10%
Assignments     10%
Midterm exam    40%
Final exam      40%

Office Hours

Bldg. 303, 4th floor; Mon/Wed 12:30–13:30 (contact the TA to arrange a meeting)

Teaching Method

Theory-oriented lectures

Course Schedule

Week 1 (9/1, 9/3)

Learning in Neurodynamic Self-organizing Systems

Week 2 (9/8, 9/10)

Self-organizing Maps (Ch. 9)

Week 3 (9/15, 9/17)

Information-Theoretic Learning Models (Ch. 10)

Week 4 (9/22, 9/24)

Statistical-Mechanical Learning Methods (Ch. 11)

Week 5 (9/29, 10/1)

Deep Neural Networks (Ch. 11)

Week 6 (10/6, 10/8)

Chuseok (Korean Thanksgiving) holiday

Week 7 (10/13, 10/15)

Dynamic Programming (Ch. 12)

Week 8 (10/20, 10/22)

Week 9 (10/27, 10/29)

Dynamic Programming (Ch. 12)

Week 10 (11/3, 11/5)

Neurodynamic Models (Ch. 13)

Week 11 (11/10, 11/12)

Bayesian Filtering (Ch. 14)

Week 12 (11/17, 11/19)

Particle Filters (Ch. 14)

Dynamic Recurrent Networks (Ch. 15)

Week 13 (11/24, 11/26)

Real-Time Recurrent Learning (Ch. 15)

Week 14 (12/1, 12/3)

Review and discussion