

Academic Reports


Machine Learning and Machine Vision Seminar Series

Posted: 2015-07-11

 

Speakers: Jian Yu, Deyu Meng, Lin Ma, Shiguang Shan, Huchuan Lu, Yebin Liu

Time: July 11, 2015, 9:00–18:00

Venue: Round Lecture Hall, second floor of the office building

Chairs: Xinshun Xu, Wei Zhang

Talk 1:

Title: Research on the Categorization Problem

Speaker: Jian Yu

Abstract: In the big-data era, extracting commonsense knowledge from data is a pressing problem, and concepts are the basic units of such knowledge. Extracting concepts from data is called the categorization problem. In machine learning, categorization typically covers dimensionality reduction, density estimation, regression, clustering, and classification, and the literature offers a great many algorithms for these problems, whose theoretical foundations are diverse and whose interrelations are intricate. This talk attempts to propose a unified representation of categorization. Its basic assumption is: an object belongs to the class it resembles, and resembles the class to which it belongs. On this basis, an axiomatic system for categorization is established for the first time. From this system, three design principles for categorization methods can be derived; it re-explains dimensionality reduction, density estimation, regression, clustering, and classification in a unified way, and it is consistent with everyday cognitive principles.

 

Biography: Jian Yu, Ph.D., is a professor and doctoral supervisor. He received his bachelor's, master's, and doctoral degrees from Peking University. He is currently director of the Beijing Key Laboratory of Traffic Data Analysis and Mining, head of the Department of Computer Science at Beijing Jiaotong University, vice chair and secretary-general of the CCF Technical Committee on Artificial Intelligence and Pattern Recognition, vice chair of the CAAI Machine Learning Technical Committee, an editorial board member of Chinese Journal of Computers, Journal of Software, Journal of Computer Research and Development, and other journals, and a member of the academic committee of the State Key Laboratory of Digital Publishing Technology. He has led several NSFC projects and Ministry of Education key projects. His research interests include pattern recognition, machine learning, and data mining, and he has published many papers in major venues including TPAMI, CVPR, TIP, TFS, TNN, and TSMCB.

Talk 2:

Title: Self-paced Learning: Challenge and Opportunity

Speaker: Deyu Meng

Abstract: Self-paced learning (SPL) is a recently proposed learning regime, inspired by the learning processes of humans and animals, that gradually incorporates samples into training from easy to more complex. While several simple SPL implementation strategies have been proposed, a general paradigm for constructing rational SPL regimes for specific applications is still lacking. To resolve this problem, we provide an axiom that formulates the underlying principles of self-paced learning. This axiomatic understanding not only subsumes previous SPL schemes as special cases, but can also be used to derive a series of new SPL implementation regimes tailored to particular application aims. In the past two years we have constructed several SPL realizations based on this axiom, including SPaR, SPLD, SPCL, and SPMF, and achieved the best performance on several well-known benchmark datasets, e.g., Web Query, Hollywood2, and Olympic Sports. In particular, this paradigm has been integrated into the system developed by the CMU Informedia team, which achieved the leading performance in the challenging semantic query (SQ)/000Ex tasks of the TRECVID MED/MER competition organized by NIST in 2014.

In this talk, I will introduce some of our main developments along this line, and attempt to discuss several of the typical challenges and possible opportunities for future SPL research.
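The alternating regime described above can be sketched in a few lines. This is a toy illustration under assumed settings (hard 0/1 sample weights, a squared loss, and invented pace parameters `lam0` and `mu`), not the speaker's implementation:

```python
import numpy as np

def self_paced_fit(X, y, lam0=0.5, mu=1.3, n_rounds=8):
    """Toy self-paced regression. Alternate between (a) selecting the
    currently 'easy' samples, i.e. those whose loss is below the age
    parameter lam, and (b) refitting the model on the selected set.
    lam grows by a factor mu each round, so harder samples enter
    training gradually."""
    w = np.zeros(X.shape[1])
    lam = lam0
    for _ in range(n_rounds):
        losses = (X @ w - y) ** 2              # per-sample squared loss
        v = (losses < lam).astype(float)       # hard SPL weights: easy -> 1
        if v.sum() == 0:                       # nothing easy yet: seed with
            v[np.argmin(losses)] = 1.0         # the single easiest sample
        Xv = X * v[:, None]                    # weighted least-squares refit
        w = np.linalg.solve(Xv.T @ X + 1e-6 * np.eye(X.shape[1]), Xv.T @ y)
        lam *= mu                              # grow the model 'age'
    return w

# demo: clean linear data y = 1 + 2x plus a few gross outliers,
# whose losses stay above lam in every round, so they are never selected
rng = np.random.default_rng(0)
X = np.c_[np.ones(50), rng.normal(size=50)]
y = X @ np.array([1.0, 2.0]) + 0.05 * rng.normal(size=50)
y[:3] += 20.0
w = self_paced_fit(X, y)   # close to [1, 2] despite the outliers
```

Published SPL variants differ mainly in how the sample weights are parameterized and regularized; the hard 0/1 selection used here is the simplest instance.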

 

Biography: Deyu Meng is an associate professor in the School of Mathematics and Statistics at Xi'an Jiaotong University. He has held visiting and collaborative positions at The Hong Kong Polytechnic University, the University of Essex, and Carnegie Mellon University. He has published or had accepted more than 50 papers, in international journals including TIP, TKDE, TNNLS, TSMCB, and PR, and at international conferences including ICML, NIPS, CVPR, ICCV, ECCV, AAAI, and ACM MM. He serves as a reviewer for TPAMI, TIP, TNNLS, and other journals, and as a program committee member for ICCV, NIPS, ICML, ACM MM, and other conferences. He has received the Shaanxi Youth Science and Technology Award and the Shaanxi Excellent Doctoral Dissertation Award, and was selected for the first cohort of Xi'an Jiaotong University's Top Young Talents program. He is a member of CCF, ACM, and IEEE. His current research focuses on machine learning, data mining, computer vision, and multimedia analysis.

Talk 3:

Title: Multimodal Learning: From Image and Sentence Matching to Image Question Answering

Speaker: Lin Ma

Abstract: Deep neural networks have been successfully applied to single modalities, such as text, image, and audio. In practice, multiple modalities often accompany one another, such as image and language, or video and audio, and deep learning is now being studied across modalities to model the associations and correlations between them. In this talk we discuss multimodal learning, specifically the multimodal convolutional neural network (m-CNN), for image and sentence matching and for image question answering. m-CNN provides an end-to-end framework with convolutional architectures that exploits image representation, word composition for sentence representation, and the matching relations between the two modalities, thereby capturing the matching properties that associate image and sentence. For bidirectional image and sentence retrieval, m-CNN achieves state-of-the-art performance on the Flickr8K, Flickr30K, and Microsoft COCO databases. For image question answering, m-CNN fuses the multimodal input of image and question into a joint representation used for classification over the space of candidate answer words. The efficacy of m-CNN is demonstrated on DAQUAR and COCO-QA, two datasets recently created for image question answering (QA), where it substantially outperforms the state of the art.
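A toy numpy sketch can illustrate the overall shape of such a matching network: a 1-D convolution with max pooling composes word vectors into a sentence feature, the image vector is projected into the same joint space, and a final layer scores the pair. All names, dimensions, and the random untrained weights here are invented for illustration; an actual m-CNN is trained end-to-end on paired data:

```python
import numpy as np

rng = np.random.default_rng(1)
d_word, d_img, d_joint, k = 8, 16, 8, 3   # toy sizes, illustrative only

def sentence_vec(words, W_conv):
    """1-D convolution over word embeddings plus max pooling: the rough
    shape of the sentence side of a multimodal CNN."""
    n = len(words)                                   # words: (n, d_word)
    windows = [words[i:i + k].reshape(-1) for i in range(n - k + 1)]
    feats = np.tanh(np.stack(windows) @ W_conv.T)    # (n-k+1, d_joint)
    return feats.max(axis=0)                         # max-pool over positions

def match_score(img, words, W_img, W_conv, w_out):
    """Project the image into the joint space, fuse it with the sentence
    feature, and score how well the pair matches."""
    joint = np.concatenate([np.tanh(W_img @ img), sentence_vec(words, W_conv)])
    return float(w_out @ joint)

W_img = rng.normal(size=(d_joint, d_img))
W_conv = rng.normal(size=(d_joint, k * d_word))
w_out = rng.normal(size=2 * d_joint)
img = rng.normal(size=d_img)                 # stand-in image feature
sent = rng.normal(size=(5, d_word))          # stand-in 5-word sentence
s = match_score(img, sent, W_img, W_conv, w_out)
```

With trained weights, ranking candidate sentences by `match_score` would give the retrieval behavior described above; replacing the scorer with a softmax over candidate answer words gives the QA variant.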

 

Biography: Lin Ma is a researcher at Huawei Noah's Ark Lab, Hong Kong. His current research interests lie in deep learning and multimodal learning, specifically for image and language. His Ph.D. research topics were image/video processing and quality tuning. He received his Ph.D. degree from the Department of Electronic Engineering at the Chinese University of Hong Kong (CUHK) in 2013, and his B.E. and M.E. degrees, both in computer science, from Harbin Institute of Technology, Harbin, China, in 2006 and 2008 respectively. He received the Best Paper Award at the Pacific-Rim Conference on Multimedia (PCM) 2008, was awarded a Microsoft Research Asia fellowship in 2011, and was a finalist for the HKIS young scientist award in engineering science in 2012.

 

Talk 4:

Title: Deep Learning and Its Application to Face Recognition

Speaker: Shiguang Shan

Abstract: The success of deep learning in speech and image recognition has rapidly influenced every research direction in computer vision, and face analysis and recognition are no exception. Although simply transplanting deep models that have succeeded on other vision problems already shows the strength of deep learning for extracting discriminative face features, naive application does not solve every problem. This talk surveys our recent practice in deep-learning-based face analysis and recognition, including convolutional neural network feature learning for face recognition and expression recognition, coarse-to-fine multi-stage deep nonlinear face shape extraction, and pose-robust progressive deep learning of face features. It concludes with lessons learned in applying deep models when only relatively small face datasets are available.

 

Biography: Shiguang Shan is a professor and doctoral supervisor at the Institute of Computing Technology, Chinese Academy of Sciences, and executive deputy director of the CAS Key Laboratory of Intelligent Information Processing. His research covers computer vision, pattern recognition, and machine learning. He has published or had accepted more than 200 papers in domestic and international journals and conferences, including more than 40 in CCF rank-A venues; his work received the Best Student Poster Award Runner-up at CVPR 2008, a CCF rank-A conference. His papers have been cited more than 7,000 times (Google Scholar), and face recognition systems developed by his group have repeatedly won first place in domestic and international face recognition competitions. He serves as an associate editor of IEEE Transactions on Image Processing (a CCF rank-A journal), as well as Neurocomputing, EURASIP Journal on Image and Video Processing, Frontiers of Computer Science, and Journal of Computer Research and Development, and has served as an area chair of major conferences in the field, including ICCV 2011, ICPR 2012, ACCV 2012, FG 2013, ICASSP 2014, and ICPR 2014. His face recognition research won the 2005 National Science and Technology Progress Award, Second Class (third contributor). In 2012 he was among the first recipients of the NSFC Excellent Young Scientists Fund.

 

Talk 5:

Title: Salient Object Detection: From Contrast-Based Methods to Supervised Methods

Speaker: Huchuan Lu

Abstract: Salient object detection, which aims to identify the most important and conspicuous object regions in an image, has received increasing interest in recent years. Salient object detection methods can be categorized into bottom-up, stimuli-driven and top-down, task-driven approaches. Bottom-up methods are usually based on low-level visual information and are more effective at detecting fine details. In contrast, top-down saliency models are able to detect objects of certain sizes and categories based on more representative features learned from training samples. In this talk, I will introduce Bayesian saliency, manifold-ranking saliency, and Markov-chain saliency, which are all contrast-based methods from the bottom-up perspective, and bootstrap-learning saliency and deep-learning saliency, which are supervised methods from the top-down perspective.
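Of the bottom-up methods named above, manifold ranking has a particularly compact closed form, sketched here on an invented 5-node chain graph. In the actual saliency method the graph is built over image superpixels and boundary superpixels serve as background seeds; this sketch only illustrates the ranking step:

```python
import numpy as np

def manifold_rank(W, y, alpha=0.99):
    """Closed-form graph ranking f = (D - alpha*W)^{-1} y: nodes strongly
    connected to the seed nodes marked in y receive high ranking scores.
    W is a symmetric affinity matrix, D its diagonal degree matrix."""
    D = np.diag(W.sum(axis=1))
    return np.linalg.solve(D - alpha * W, y)

# toy chain graph; node 0 plays the role of a background seed
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
y = np.array([1.0, 0, 0, 0, 0])       # seed indicator vector
f = manifold_rank(W, y)               # decays with distance from the seed
saliency = 1 - f / f.max()            # far from background seed => salient
```

In the saliency setting, ranking is typically run from boundary (background) seeds and the complement of the normalized ranking score is taken as saliency, as the last line suggests.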

 

Biography: Huchuan Lu received the B.Eng. and M.Eng. degrees in electronic engineering from the Department of Electronic Engineering, Dalian University of Technology (DUT), China, in 1995 and 1998 respectively, and the Ph.D. degree in system engineering from DUT in 2008. He joined the School of Electronic and Information Engineering at DUT as a faculty member in 1998, and has been a vice dean since 2009 and a professor since 2012 in the School of Information and Communication Engineering, DUT.

His major research interests include computer vision and pattern recognition. He has received several honors and awards, including Most Remembered Poster (ICCV 2011), Best Student Paper Award finalist (ICIP 2012), and Best Paper (IET Image Processing, 2014). He is an associate editor of IEEE Transactions on Systems, Man, and Cybernetics – Part B.

Talk 6:

Title: Markerless Motion Capture and 4D Reconstruction

Speaker: Yebin Liu

Abstract: Markerless motion capture and spatio-temporal 4D reconstruction of dynamic 3D subjects remain active and difficult problems in computer vision and computer graphics. This talk presents 3D reconstruction and markerless motion capture of dynamic subjects using multi-camera or depth-sensing devices, including basic methods for motion capture and 4D reconstruction, 4D reconstruction of multiple dynamic subjects, and 4D reconstruction with multiple Kinects or a single Kinect. The talk concludes with an outlook on future developments in markerless motion capture and 4D reconstruction.

 

Biography: Yebin Liu is an associate researcher in the Department of Automation, Tsinghua University. He received his bachelor's degree in automation from Beijing University of Posts and Telecommunications in 2002 and his Ph.D. from the Department of Automation, Tsinghua University, in 2009; he then did postdoctoral research there and joined the faculty in 2011, with a visiting research stay at the Max Planck Institute for Informatics in Germany in 2010. His research interests include image-based 3D reconstruction, motion capture, stereoscopic video, computational photography, and computational optics. He received the National Technology Invention Award, Second Class, in 2008 and First Class in 2012 (third contributor on both), and the Tsinghua University Academic Rising Star Award in 2013.
