Talk Title: Algorithms for Large-Scale Optimization and Machine Learning
Venue: Tencent Meeting live stream (May 22, 2021, 15:00–17:00; Meeting ID: 353 838 143; Link: https://meeting.tencent.com/s/TK4UDjCILCPj)
Organizer: School of Mechano-Electronic Engineering
Speaker: Yankai Cao
Chairs: Xubin Ping, Zhiwu Li
About the Speaker:
Yankai Cao received his B.S. in Bioengineering from Zhejiang University in 2010 and his Ph.D. in Chemical Engineering from Purdue University in 2015. From 2016 to 2018 he was a research associate at the University of Wisconsin–Madison, and since 2018 he has been an Assistant Professor in the Department of Chemical and Biological Engineering at the University of British Columbia. His research focuses on the design and implementation of large-scale local and global optimization algorithms for problems arising in a variety of decision-making processes, such as machine learning, stochastic optimization, model predictive control, and complex networks.

報(bào)告1:Large-Scale Local Optimization: Algorithms and Applications
Time: May 22, 15:00
Abstract:
This talk presents our recent work on algorithms and software implementations for solving large-scale optimization problems to local optimality. Our algorithms exploit both problem structure and emerging high-performance computing hardware (e.g., multi-core CPUs, GPUs, and computing clusters) to achieve computational scalability. We are currently using these capabilities to address engineering and scientific questions that arise in diverse application domains, including the design of neural network controllers, predictive control of wind turbines, power management in large networks, and multiscale model predictive control of battery systems. The problems we are addressing are of unprecedented complexity and defy state-of-the-art solvers. For example, designing a control system for wind turbines leads to a nonlinear programming problem (NLP) with 7.5 million variables that takes days to solve with existing solvers. We have solved this problem in less than 1.3 hours using our parallel solvers.
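One common way such solvers exploit problem structure is through block-structured linear algebra, where the per-scenario (or per-period) diagonal blocks can be factorized independently and hence in parallel. The sketch below is a rough, hypothetical illustration of that idea (not the speaker's actual solver): it solves a small block-arrow linear system via a Schur complement, with all block sizes and matrices invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical block-arrow system: independent diagonal blocks A_i,
# coupled only through a small border block C (made-up sizes).
#   A_i x_i + B_i y = b_i,   sum_i B_i^T x_i + C y = c
n_blocks, nb, nc = 3, 6, 2
A = [np.eye(nb) * 20 + rng.standard_normal((nb, nb)) for _ in range(n_blocks)]
B = [rng.standard_normal((nb, nc)) for _ in range(n_blocks)]
C = np.eye(nc) * 50 + rng.standard_normal((nc, nc))
b = [rng.standard_normal(nb) for _ in range(n_blocks)]
c = rng.standard_normal(nc)

# Schur-complement solve: each A_i solve is independent of the others,
# so this loop is the part that parallelizes across cores or nodes.
S = C.copy()
r = c.copy()
AinvB, Ainvb = [], []
for Ai, Bi, bi in zip(A, B, b):
    AiB = np.linalg.solve(Ai, Bi)   # A_i^{-1} B_i
    Aib = np.linalg.solve(Ai, bi)   # A_i^{-1} b_i
    S -= Bi.T @ AiB
    r -= Bi.T @ Aib
    AinvB.append(AiB)
    Ainvb.append(Aib)

# Small coupled system for the border variables, then block back-solves.
y = np.linalg.solve(S, r)
x = [Aib - AiB @ y for AiB, Aib in zip(AinvB, Ainvb)]
```

The only serial bottleneck is the small `nc`-dimensional Schur system, which is why this decomposition scales well when the number of blocks grows.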
報(bào)告2: Large-Scale Global Optimization and Machine Learning
Time: May 22, 16:00
Abstract:
This talk presents a reduced-space spatial branch-and-bound (BB) strategy for solving two-stage stochastic nonlinear programs to global optimality. At each node, a lower bound is constructed by relaxing the non-anticipativity constraints, and an upper bound is constructed by fixing the first-stage variables. We also extend this algorithm to clustering problems, a prototypical unsupervised learning task and a special class of stochastic programs. One key advantage of this reduced-space algorithm is that it needs to branch only on the cluster centers to guarantee convergence, and the number of centers is independent of the number of data samples. Another critical property is that the bounds can be computed by solving individual small-scale subproblems. These two properties allow the algorithm to scale to large datasets while still finding a globally optimal solution. Our global optimization algorithm can now solve clustering problems on a dataset of 200,000 samples, at least 100 times larger than the instances handled by state-of-the-art methods in the literature.
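To make the two bounding steps concrete, here is a toy, hypothetical sketch (not the authors' implementation) of reduced-space BB for one-dimensional 2-means clustering. It branches only on interval boxes for the cluster centers, never on per-sample assignments; the lower bound relaxes each sample's center choice to the nearest point of the nearest box (the analogue of relaxing non-anticipativity, and it decomposes sample by sample), and the upper bound fixes the first-stage variables (the centers) at the box midpoints.

```python
import heapq

def cluster_bb(data, k, lo, hi, tol=1e-3):
    """Toy reduced-space branch and bound for 1-D k-means.

    A node is a box: one (lo, hi) interval per cluster center, so the
    search tree size is independent of the number of samples.
    """
    def point_to_box(x, l, h):
        # Minimum squared distance from sample x to any center in [l, h].
        if x < l:
            return (l - x) ** 2
        if x > h:
            return (x - h) ** 2
        return 0.0

    def lower_bound(box):
        # Each sample may pick the nearest point of its nearest box:
        # a valid relaxation that splits into per-sample subproblems.
        return sum(min(point_to_box(x, l, h) for (l, h) in box) for x in data)

    def upper_bound(box):
        # Fix centers (first-stage variables) at box midpoints and
        # evaluate the resulting feasible clustering cost.
        centers = [(l + h) / 2 for (l, h) in box]
        return sum(min((x - c) ** 2 for c in centers) for x in data), centers

    root = tuple((lo, hi) for _ in range(k))
    ub, best = upper_bound(root)
    heap = [(lower_bound(root), root)]
    while heap:
        lb, box = heapq.heappop(heap)
        if lb >= ub - tol:
            break  # smallest lower bound within tol of ub: certificate
        # Branch: bisect the widest center interval.
        j = max(range(k), key=lambda i: box[i][1] - box[i][0])
        l, h = box[j]
        m = (l + h) / 2
        for child in (box[:j] + ((l, m),) + box[j + 1:],
                      box[:j] + ((m, h),) + box[j + 1:]):
            clb = lower_bound(child)
            cub, cc = upper_bound(child)
            if cub < ub:
                ub, best = cub, cc
            if clb < ub - tol:
                heapq.heappush(heap, (clb, child))
    return ub, sorted(best)
```

For example, `cluster_bb([0.0, 1.0, 4.0, 5.0], 2, 0.0, 5.0, tol=0.05)` returns a cost within the tolerance of the global optimum with centers near 0.5 and 4.5. A production solver would of course use much tighter bounding subproblems; this sketch only shows why branching on centers alone suffices.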