
william威廉亚洲官方 Online Academic Seminar: Theory of Deep Learning

Published: 2020/12/02

Time: 15:00-16:00, Saturday afternoon, December 5, 2020

Venue: Tencent Meeting, ID: 630 224 427

Join via the link below, or add it to your meeting list: https://meeting.tencent.com/s/LjCdkarwvKIB

Speaker: Professor Ding-Xuan Zhou (周定轩)


Abstract: Deep learning has been widely applied and has brought breakthroughs in speech recognition, computer vision, natural language processing, and many other domains. The deep neural network architectures and computational issues involved have been well studied in machine learning, but a theoretical foundation for understanding the modelling, approximation, and generalization abilities of deep learning models with these network architectures is still lacking. Here we are interested in deep convolutional neural networks (CNNs). The convolutional architecture makes deep CNNs essentially different from fully-connected deep neural networks, so the classical theory for fully-connected networks, developed around 30 years ago, does not apply. This talk describes a mathematical theory of deep CNNs associated with the rectified linear unit (ReLU) activation function. In particular, we give the first proof of the universality of deep CNNs, meaning that a deep CNN can approximate any continuous function to arbitrary accuracy when the depth of the network is large enough. We also show that deep CNNs perform at least as well as fully-connected neural networks for approximating general functions, and much better for approximating radial functions in high dimensions. Our quantitative estimate, stated tightly in terms of the number of free parameters to be computed, verifies the efficiency of deep CNNs in dealing with big data.
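For a concrete picture of the kind of architecture the abstract refers to, below is a minimal NumPy sketch of a deep ReLU CNN in the one-dimensional, pooling-free setting studied in this line of work: each layer computes h = σ(w * h − b), where * is a 1-D convolution with a finite filter (so the representation widens by the filter length minus one at every layer) and σ(u) = max(u, 0) is the ReLU. This is an illustrative sketch under those assumptions, not the speaker's construction; the function names and toy parameters are made up for the example.

```python
import numpy as np

def relu(u):
    """ReLU activation: sigma(u) = max(u, 0)."""
    return np.maximum(u, 0.0)

def conv_layer(v, w, b):
    """One layer h = sigma(w * v - b), where * is 1-D convolution.

    'full' mode turns a length-n input into length n + s - 1 for a
    filter of length s, so the representation widens at every layer,
    mirroring the widening Toeplitz-type matrices in the analysis.
    """
    return relu(np.convolve(w, v, mode="full") - b)

def deep_cnn(x, filters, biases):
    """Forward pass through a stack of ReLU convolutional layers."""
    h = x
    for w, b in zip(filters, biases):
        h = conv_layer(h, w, b)
    return h

# Toy forward pass: input dimension d = 8, filter length s = 4, depth J = 3.
rng = np.random.default_rng(0)
d, s, J = 8, 4, 3
x = rng.standard_normal(d)
filters, biases, n = [], [], d
for _ in range(J):
    filters.append(rng.standard_normal(s))   # illustrative random filter
    n += s - 1                               # output width of this layer
    biases.append(rng.standard_normal(n))    # illustrative random bias
print(deep_cnn(x, filters, biases).shape)    # (17,) = (d + J*(s-1),)
```

The universality result quoted in the abstract concerns how well such stacks, with trained rather than random parameters, can approximate a target continuous function as the depth J grows.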

About the speaker:

Ding-Xuan Zhou was awarded the Distinguished Young Scholars Fund in 2005 and was listed as a Highly Cited Researcher from 2014 to 2017.

He served as Head of the Department of Mathematics at City University of Hong Kong from 2006 to 2012, has been Associate Dean of the School of Data Science since 2018, and has been Director of the Liu Bie Ju Centre for Mathematical Sciences since 2019. He has served as a member of the Physical Sciences Panel of the Hong Kong Research Grants Council and as Vice President of the Hong Kong Mathematical Society. He is Editor-in-Chief of the journals Analysis and Applications and Mathematical Foundations of Computing and of the book series Progress in Data Science, and serves on the editorial boards of more than ten other journals, including Applied and Computational Harmonic Analysis.

