Lecture Title: The Insecurity of Machine Learning: Problems and Solutions
Time: 2019-10-23 14:20:00
Venue: Lecture Hall 210, Office Building, South Campus
Speaker: Adi Shamir
Speaker Bio:
Professor Adi Shamir is a world-renowned cryptographer, a professor at the Weizmann Institute of Science in Israel, a foreign member of the U.S. National Academy of Sciences, and one of the founders of modern cryptography. In 2002 he received the 37th Turing Award jointly with R. L. Rivest and L. M. Adleman. Professor Shamir has made outstanding contributions to cryptography: together with R. L. Rivest and L. M. Adleman he designed the famous RSA public-key cryptosystem; he was the first to propose the ideas of identity-based cryptography and threshold signature schemes; he was the first to break the Merkle-Hellman knapsack cryptosystem and the first to analyze the RSA public-key cryptosystem under partial information leakage; in addition, he has produced original work on side-channel attacks, the cryptanalysis of multivariate cryptosystems, and the cryptanalysis of symmetric ciphers. His honors include the Israel Prize (Israel's highest state award), the Paris Kanellakis Theory and Practice Award, the Erdős Prize, the IEEE W.R.G. Baker Prize, the UAP Science Prize, the PIUS XI Gold Medal, and the IEEE Koji Kobayashi Computers and Communications Award.
Abstract:
The development of deep neural networks in the last decade has revolutionized machine learning and led to major improvements in the precision with which we can perform many computational tasks. However, the discovery five years ago of adversarial examples, in which tiny changes in the input can fool well-trained neural networks, makes it difficult to trust such results when the input can be manipulated by an adversary. This problem has many applications and implications in object recognition, autonomous driving, cyber security, etc., but it is still far from being understood. In particular, there has been no convincing explanation of why such adversarial examples exist, or of which parameters determine the number of input coordinates one has to change in order to mislead the network. In this talk I will describe a simple mathematical framework which enables us to think about this problem from a fresh perspective, turning the existence of adversarial examples in deep neural networks from a baffling phenomenon into an unavoidable consequence of the geometry of R^n under the Hamming distance, which can be quantitatively analyzed.
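The phenomenon the abstract describes can be illustrated with a minimal sketch. The snippet below is not the framework from the talk (which analyzes the Hamming distance, i.e. how many coordinates must change); it is a standard gradient-based illustration on a toy linear classifier standing in for a trained network, with all weights and inputs synthetic. It shows that a perturbation that is tiny in every coordinate can still flip the model's decision.

```python
import numpy as np

# Toy linear "network" f(x) = sign(w . x); the weights are random
# stand-ins for trained parameters (illustrative assumption).
rng = np.random.default_rng(42)
n = 10_000                       # input dimension
w = rng.standard_normal(n)

def predict(x):
    """Binary decision of the linear model."""
    return 1 if w @ x > 0 else -1

# A clean input, arranged to lie on the +1 side of the boundary.
x = rng.standard_normal(n)
if predict(x) == -1:
    x = -x

# Adversarial step: nudge every coordinate against the gradient of the
# score; for a linear model that gradient is simply w.  eps is the
# smallest uniform (L_inf) step size that crosses the boundary.
score = w @ x
eps = 1.01 * score / np.sum(np.abs(w))
x_adv = x - eps * np.sign(w)

print(f"per-coordinate change: {eps:.4f}")  # small next to typical |x_i| ~ 1
print(predict(x), predict(x_adv))           # the decision flips
```

Because the score is a sum over n coordinates, an adversary who may touch every coordinate only needs a per-coordinate change on the order of 1/n of the decision margin, which is why such perturbations can be imperceptibly small in high dimension.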
Host: School of Computer Science and Technology