Course
Artificial Neural Networks (인공신경망), 2025-2
Course Objectives
We study “self-learning” networks, i.e., models that learn in an unsupervised or “self-supervised” way, without the help of an explicit teacher. These models are neuro-biologically inspired and are usually self-organizing, dynamic, recurrent, or auto-encoding networks. We examine the principles of neural learning algorithms through historical models such as the Willshaw-von der Malsburg feature maps, Linsker’s models, Kohonen’s self-organizing maps, Grossberg’s models, recurrent networks, Anderson’s brain-state-in-a-box, actor-critic networks, Hopfield’s associative memory, Boltzmann machines, and deep belief networks.
We study mathematical tools for the approximation and optimization of neural learning models. These include information-theoretic algorithms, such as maximum entropy, mutual information, and the Kullback-Leibler (KL) divergence, as well as statistical-mechanical methods, such as Markov chains, the Metropolis algorithm, Gibbs sampling, and simulated annealing. We also examine neurodynamic models of self-supervised, end-to-end learning for challenging problems such as time-series prediction and reconstruction. These include Markov decision processes, approximate dynamic programming, reinforcement learning, sequential Bayesian estimation, Kalman filtering, particle filtering, real-time recurrent learning, and the dynamic reconstruction of a chaotic process.
Textbooks
- Simon Haykin, Neural Networks and Learning Machines, Pearson, 2009
- 장병탁 (Byoung-Tak Zhang), 장교수의 딥러닝, 홍릉과학출판사, 2017
Grading Policy
| Component | Weight |
| --- | --- |
| Attendance | 10% |
| Assignments | 10% |
| Midterm exam | 40% |
| Final exam | 40% |
Office Hours
303, 4th floor, Mon/Wed 12:30~13:30 (contact the TA to arrange a meeting)
Teaching Method
Theory-focused lectures
Course Schedule
Week 1 (9/1, 9/3)
Learning in Neurodynamic Self-organizing Systems
- Neural Networks, Unsupervised / Self-supervised Learning
- Mathematics for Neural Learning
Principal-Components Analysis (Ch. 8)
- Principal Component Analysis
- Hebbian-Based Maximum Eigenfilter (see the sketch after this list)
Hebbian-Based PCA (Ch. 8)
- Generalized Hebbian Algorithm
- Kernel PCA
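As a quick illustration of the Hebbian-based maximum eigenfilter, here is a minimal NumPy sketch of Oja's learning rule, under which a single linear neuron's weight vector converges, up to sign, to the first principal component. The synthetic data, learning rate, and iteration count are illustrative assumptions, not values from the textbook.

```python
# Oja's rule: a Hebbian update with a stabilizing decay term.
import numpy as np

rng = np.random.default_rng(0)
# Zero-mean, correlated 2-D Gaussian data (made up for the example)
X = rng.multivariate_normal([0, 0], [[3.0, 1.0], [1.0, 1.0]], size=5000)

w = rng.normal(size=2)
w /= np.linalg.norm(w)              # random unit-norm initial weights
eta = 0.01                          # learning rate (assumed)
for x in X:
    y = w @ x                       # linear neuron output
    w += eta * y * (x - y * w)      # Hebbian term plus weight decay

# Compare with the leading eigenvector of the sample covariance (up to sign)
_, eigvecs = np.linalg.eigh(np.cov(X.T))
print("Oja weight:     ", np.round(w / np.linalg.norm(w), 3))
print("Top eigenvector:", np.round(eigvecs[:, -1], 3))
```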
Week 2 (9/8, 9/10)
Self-organizing Maps (Ch. 9)
- Willshaw-von der Malsburg Model
- Kohonen’s SOM Model (see the sketch after this list)
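A minimal sketch of Kohonen's SOM update on a 1-D chain of units with uniform 2-D inputs; the lattice size, schedules, and neighborhood width are illustrative assumptions.

```python
# Kohonen SOM: move the winner and its lattice neighbors toward each input.
import numpy as np

rng = np.random.default_rng(1)
n_units, n_steps = 20, 2000
W = rng.random((n_units, 2))                 # codebook vectors on a 1-D lattice

for t in range(n_steps):
    x = rng.random(2)                        # random 2-D input
    winner = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
    eta = 0.5 * (1 - t / n_steps)            # decaying learning rate (assumed)
    sigma = 1 + 4 * (1 - t / n_steps)        # shrinking neighborhood (assumed)
    d = np.arange(n_units) - winner          # lattice distance to the winner
    h = np.exp(-d**2 / (2 * sigma**2))       # Gaussian neighborhood function
    W += eta * h[:, None] * (x - W)          # cooperative update
```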
Week 3 (9/15, 9/17)
Information-Theoretic Learning Models (Ch. 10)
- Maximum Entropy, Kullback-Leibler Divergence
- Mutual Information, Infomax, ICA (worked example after this list)
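A worked example of two Week 3 quantities: the KL divergence between discrete distributions, and mutual information computed as the KL divergence between a joint distribution and the product of its marginals. The distributions are made up for the example.

```python
import numpy as np

# KL divergence D(p||q) for two discrete distributions, in nats
p = np.array([0.7, 0.2, 0.1])
q = np.full(3, 1 / 3)
print("D(p||q) =", float(np.sum(p * np.log(p / q))))

# Mutual information I(X;Y) = D(p(x,y) || p(x)p(y))
P = np.array([[0.4, 0.1],
              [0.1, 0.4]])                    # joint p(x, y)
px, py = P.sum(axis=1), P.sum(axis=0)         # marginals
print("I(X;Y) =", float(np.sum(P * np.log(P / np.outer(px, py)))))
```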
Week 4 (9/22, 9/24)
Statistical-Mechanical Learning Methods (Ch. 11)
- Statistical Mechanics, Markov Chains
- Metropolis Algorithm, Gibbs Sampling, Simulated Annealing (see the sketch after this list)
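A minimal Metropolis sampler targeting a 1-D standard Gaussian, as a sketch of the Week 4 material; the proposal width and chain length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
log_p = lambda x: -0.5 * x**2        # target log-density, up to a constant

x, samples = 0.0, []
for _ in range(10000):
    x_prop = x + rng.normal()        # symmetric random-walk proposal
    # Accept with probability min(1, p(x_prop) / p(x))
    if np.log(rng.random()) < log_p(x_prop) - log_p(x):
        x = x_prop
    samples.append(x)

print("sample mean:", round(np.mean(samples), 3),
      "sample std:", round(np.std(samples), 3))   # should be near 0 and 1
```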
Week 5 (9/29, 10/1)
Deep Neural Networks (Ch. 11)
- Boltzmann Machines (see the RBM sketch after this list)
- Deep Belief Networks
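A minimal restricted Boltzmann machine trained with one step of contrastive divergence (CD-1), the building block stacked to form deep belief networks; the toy data, layer sizes, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
sigm = lambda z: 1.0 / (1.0 + np.exp(-z))

nv, nh, eta = 6, 3, 0.1
W = 0.01 * rng.normal(size=(nv, nh))     # visible-hidden weights
b, c = np.zeros(nv), np.zeros(nh)        # visible / hidden biases

# Two binary prototype patterns as toy training data (assumed)
patterns = np.array([[1, 1, 1, 0, 0, 0],
                     [0, 0, 0, 1, 1, 1]], dtype=float)
data = patterns[rng.integers(0, 2, size=200)]

for _ in range(20):                      # epochs
    for v0 in data:
        ph0 = sigm(v0 @ W + c)           # hidden probabilities given data
        h0 = (rng.random(nh) < ph0).astype(float)
        pv1 = sigm(W @ h0 + b)           # one-step reconstruction
        v1 = (rng.random(nv) < pv1).astype(float)
        ph1 = sigm(v1 @ W + c)
        W += eta * (np.outer(v0, ph0) - np.outer(v1, ph1))   # CD-1 update
        b += eta * (v0 - v1)
        c += eta * (ph0 - ph1)

# Reconstruct a corrupted version of the first pattern
v = np.array([1, 1, 0, 0, 0, 0], dtype=float)
print(np.round(sigm(W @ sigm(v @ W + c) + b), 2))
```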
Week 6 (10/6, 10/8)
Korean Thanksgiving (Chuseok) holiday
Week 7 (10/13, 10/15)
Dynamic Programming (Ch. 12)
- Markov Decision Processes, DP, Bellman Equation (see the sketch after this list)
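A minimal value-iteration sketch on a toy 3-state MDP, illustrating the Bellman optimality backup; the transition and reward tables are made up for the example.

```python
import numpy as np

n_states, gamma = 3, 0.9
# P[a, s, s'] = transition probability, R[s, a] = expected reward (assumed)
P = np.array([[[0.8, 0.2, 0.0], [0.0, 0.8, 0.2], [0.1, 0.0, 0.9]],
              [[0.0, 0.9, 0.1], [0.1, 0.0, 0.9], [0.9, 0.1, 0.0]]])
R = np.array([[0.0, 1.0], [0.0, 1.0], [1.0, 0.0]])

V = np.zeros(n_states)
for _ in range(200):                       # Bellman backup to a fixed point
    Q = R + gamma * np.einsum('ask,k->sa', P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print("V* =", np.round(V, 3), "greedy policy:", Q.argmax(axis=1))
```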
Week 8 (10/20, 10/22)
- Summary (10/20)
- Mid-term Exam (10/22)
Week 9 (10/27, 10/29)
Dynamic Programming (Ch. 12)
- Approximate Dynamic Programming (ADP), Reinforcement Learning, TD Learning, Q-Learning (see the sketch after this list)
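A minimal tabular Q-learning sketch on a toy 5-state chain whose rightmost state is terminal and pays reward 1, illustrating the TD update on action values; the environment and all constants are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
n_states, gamma, alpha = 5, 0.9, 0.1
Q = np.zeros((n_states, 2))                  # actions: 0 = left, 1 = right

for _ in range(2000):                        # episodes
    s = 0
    while s != n_states - 1:
        a = int(rng.integers(2))             # random behavior policy (off-policy)
        s_next = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        done = s_next == n_states - 1
        td_target = r + gamma * (0.0 if done else Q[s_next].max())
        Q[s, a] += alpha * (td_target - Q[s, a])     # TD(0) update on Q
        s = s_next

print(np.round(Q, 3))                        # greedy policy: always "right"
```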
Week 10 (11/3, 11/5)
Neurodynamic Models (Ch. 13)
- Dynamic Systems, Attractors, Chaos
- Hopfield Models, Dynamic Reconstruction (see the sketch after this list)
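A minimal Hopfield associative-memory sketch: store one bipolar pattern with the Hebb rule and recall it from a corrupted cue. The pattern size and corruption level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
p = np.sign(rng.normal(size=16))             # stored bipolar (+/-1) pattern
W = np.outer(p, p)                           # Hebbian outer-product weights
np.fill_diagonal(W, 0)                       # no self-connections

x = p.copy()
x[:4] *= -1                                  # corrupt 4 of the 16 bits
for _ in range(5):                           # iterate to a fixed point
    x = np.sign(W @ x)

print("pattern recovered:", np.array_equal(x, p))
```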
Week 11 (11/10, 11/12)
Bayesian Filtering (Ch. 14)
- State Space Models
- Kalman Filters, Extended Kalman Filter (EKF), Cubature Kalman Filter (CKF) (see the sketch after this list)
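A minimal 1-D Kalman filter for a random-walk state observed in noise, as a sketch of the linear-Gaussian state-space setting; the noise variances are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
q, r = 0.01, 0.5                 # process / measurement noise variances (assumed)
x_true, x_hat, P = 0.0, 0.0, 1.0 # true state, estimate, error variance

for _ in range(100):
    x_true += rng.normal(scale=np.sqrt(q))     # state evolves as a random walk
    z = x_true + rng.normal(scale=np.sqrt(r))  # noisy scalar measurement
    P += q                                     # predict (identity dynamics)
    K = P / (P + r)                            # Kalman gain
    x_hat += K * (z - x_hat)                   # correct with the innovation
    P *= 1 - K                                 # posterior error variance

print("estimate:", round(x_hat, 3), "truth:", round(x_true, 3))
```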
Week 12 (11/17, 11/19)
Particle Filters (Ch. 14)
- Approximate Bayesian Filtering
- Particle Filters, SIR Algorithm (see the sketch after this list)
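A minimal SIR particle filter for the same random-walk model used in the Kalman sketch above, illustrating the propagate-weight-resample cycle; the particle count and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
N, q, r = 500, 0.01, 0.5
particles = rng.normal(size=N)               # initial particle cloud
x_true = 0.0

for _ in range(100):
    x_true += rng.normal(scale=np.sqrt(q))
    z = x_true + rng.normal(scale=np.sqrt(r))
    particles += rng.normal(scale=np.sqrt(q), size=N)   # propagate particles
    w = np.exp(-0.5 * (z - particles) ** 2 / r)         # likelihood weights
    w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]   # resample (SIR)

print("PF estimate:", round(particles.mean(), 3), "truth:", round(x_true, 3))
```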
Dynamic Recurrent Networks (Ch. 15)
- Recurrent Network Architectures
- Backpropagation through Time
Week 13 (11/24, 11/26)
Real-Time Recurrent Learning (Ch. 15)
- RTRL Algorithm, Vanishing Gradients
- EKF Algorithm for Training RMLP
- Final Exam (11/26)
Week 14 (12/1, 12/3)
Review and discussion