RESEARCH KEYNOTE SERIES
(Professor, University of Texas at Austin, USA)
Bio: Al Bovik is the Cockrell Family Regents Endowed Chair Professor at The University of Texas at Austin. His research interests land squarely at the nexus of visual neuroscience and digital pictures. His many international honors include the 2019 Progress Medal of the Royal Photographic Society, the 2019 IEEE Fourier Award, the 2017 OSA Edwin H. Land Medal, a 2015 Primetime Emmy Award from the Academy of Television Arts and Sciences, and the Norbert Wiener and ‘Sustained Impact’ Awards of the IEEE Signal Processing Society.
Title of Talk: Here Comes Even More Video Data: High Frame Rates, Compression, and Video Quality
Abstract: Modern communication networks must continuously contend with increases in the sheer volume of streaming videos, as providers improve consumer experiences by delivering higher-quality, denser content. In addition to larger formats (UHD and beyond) and increased bit depths (HDR), high-frame-rate (HFR) videos are becoming more common as well. This requires carefully balancing frame rate, compression, and perceptual video quality. Towards advancing progress on these problems, I will describe a large-scale perceptual video quality study we conducted focused on these issues. I will also introduce new video quality models and algorithms that can be used to mediate frame rate selection versus compression, and to decide how to combine space-time sampling with compression. My hope is that this work will help advance and enhance the global delivery of HFR video content.
(Professor, Pennsylvania State University, USA)
Bio: Prof. Martin Fürer graduated from ETH Zürich in 1978 with a doctorate in mathematics. He received the Silver Medal of the ETH for his thesis. He worked at various Computer Science Departments in the US and Europe, before returning to Applied Mathematics at the University of Zürich. He is currently a professor in the Department of Computer Science and Engineering at Penn State, where he has been for over three decades. During this time, he held longer visiting positions at Princeton University, ETH Zürich, EPFL Lausanne, and the University of Zürich. His research has been in computational complexity and algorithms, in particular approximation algorithms. With varying coauthors, he has obtained significant results on independent sets, k-set covers, and minimum degree spanning and Steiner trees. His best-known results are on graph isomorphism and the complexity of integer multiplication. For the latter he received a best paper award at the prestigious STOC conference, because the previous result had not been improved for 35 years. More recently, his research has concentrated on parameterized complexity and width parameters.
Title of Talk: On the Construction and Use of Tree Decompositions
Abstract: Many interesting and useful graph problems are NP-complete. Therefore, it is hopeless to solve them optimally for worst case graphs of significant size. In practice, the situation is not so dire. Many graphs are sparse, and often have small treewidth. Obviously, most combinatorial graph problems are trivial for trees. Interestingly, this fact has an important generalization. Usually, there are still efficient dynamic programming algorithms when small width tree decompositions of the graphs are known. Such a graph problem is called fixed parameter tractable (FPT), meaning that it has an algorithm whose running time is the product of a function of the parameter k (here the treewidth) and a polynomial in the size n of the problem instance. The function of k is allowed to be exponential or worse. This is fine, because we want to handle instances with small values of k.
There are FPT algorithms to compute the treewidth and a tree decomposition of a graph. Still, this is often more time consuming than running an application algorithm that uses the tree decomposition. Thus, the search for better algorithms to compute tree decompositions is still going on. Sometimes, it makes sense to study FPT algorithms where the function of k is polynomial too. With any sparse linear system of equations, we can associate a sparse graph. If this graph has treewidth k, then we aspire to solve problems in time O(k^2 n) instead of the customary O(n^3). We will also consider cliquewidth and multi-cliquewidth, which are generalizations of treewidth to include dense graphs.
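The "trivial for trees" observation in the abstract can be made concrete with a short sketch. The example below (an illustration, not code from the talk) computes a maximum independent set on a tree by bottom-up dynamic programming, i.e., the simplest (treewidth-1) instance of the dynamic programming on tree decompositions described above; all names are illustrative.

```python
def max_independent_set(tree, root=0):
    """tree: adjacency list {node: [neighbors]} of an undirected tree.
    Returns the size of a maximum independent set, computed bottom-up
    in O(n) time."""
    # inc[v]: best solution in v's subtree that includes v
    # exc[v]: best solution in v's subtree that excludes v
    inc, exc = {}, {}
    order, parent = [], {root: None}
    stack = [root]
    while stack:                      # iterative DFS to fix a top-down order
        v = stack.pop()
        order.append(v)
        for w in tree[v]:
            if w != parent[v]:
                parent[w] = v
                stack.append(w)
    for v in reversed(order):         # process children before parents
        inc[v] = 1
        exc[v] = 0
        for w in tree[v]:
            if parent.get(w) == v:
                inc[v] += exc[w]                # children must be excluded
                exc[v] += max(inc[w], exc[w])   # children are free to choose
    return max(inc[root], exc[root])

# A path on 5 vertices, 0-1-2-3-4: {0, 2, 4} is a maximum independent set.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
assert max_independent_set(path) == 3
```

For graphs of treewidth k, the same two-table pattern generalizes to tables indexed by the (at most 2^(k+1)) subsets of each bag of the tree decomposition, which is where the exponential function of k in the FPT running time comes from.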
(Professor, Massachusetts Institute of Technology, USA)
Bio: Harold (Hal) Abelson is Class of 1922 Professor of Electrical Engineering and Computer Science at MIT and a Fellow of the IEEE. He holds an A.B. degree from Princeton University and a Ph.D. degree in mathematics from MIT. Abelson leads the development of MIT App Inventor, a major focus of the MIT Center for Mobile Learning. App Inventor, originally started by Abelson when he was a visiting faculty member at Google Research, is a Web-based development system aimed at making it easy for young students -- or anyone -- to create their own mobile applications. In 1992, Abelson was designated as one of MIT's six inaugural MacVicar Faculty Fellows, in recognition of his significant and sustained contributions to teaching and undergraduate education. Abelson received the 1992 Bose Award (MIT's School of Engineering teaching award), the 1995 Taylor L. Booth Education Award given by the IEEE Computer Society -- cited for his continued contributions to the pedagogy and teaching of introductory computer science -- the 2011 ACM Karl Karlstrom Outstanding Educator Award, and the 2012 ACM Special Interest Group on Computer Science Education Award for Outstanding Contribution to Computer Science Education. Abelson has played key roles in fostering MIT institutional educational technology initiatives including MIT OpenCourseWare and the MIT DSpace institutional repository, and he has served as co-chair of the MIT Council on Educational Technology, which oversees MIT's strategic educational technology activities and investments. He is a leader in the worldwide movement towards openness and democratization of culture and intellectual resources. He is a founding director of Creative Commons, Public Knowledge, and the Free Software Foundation, and a former director of the Center for Democracy and Technology -- organizations that are devoted to strengthening the global intellectual commons.
Title of Talk: From Computational Thinking to Computational Action
Abstract: Starting from roots in the late 1960s, exposing young people to the ideas of computational thinking has emerged as an important theme in preparing them for effective citizenship in the information society. Beyond just exposure to ideas, we’re now seeing how students can use those ideas to improve life at the personal level, the community level, even the national level. This empowerment through computational action results from the past decade’s developments in mobile computing, cloud services, the Internet of Things, and machine learning, which bring the world’s most powerful computing tools within the grasp of even beginning learners. As educators, we have the responsibility to make our students aware of these possibilities, and we have the opportunity to help our students mature as empowered citizens in a world increasingly transformed by information technology.
(Professor, Harvard University, USA)
Bio: Krzysztof Gajos is a Gordon McKay Professor of Computer Science at the Harvard Paulson School of Engineering and Applied Sciences. Krzysztof’s current interests include (1) principles and applications of intelligent interactive systems; (2) tools and methods for behavioral research at scale; and (3) design for equity and social justice. He has also made contributions in the areas of accessible computing, creativity support tools, social computing, and health informatics.
Title of Talk: AI, Design, and Social Justice
Abstract: Current practices for designing interactive systems inadvertently exacerbate existing social inequities and sometimes even create new ones. In our work, we uncover and characterize such technology-related inequities, and we develop novel AI-enabled approaches to provide more equitable access to digital resources.
With our Supple system, for example, we demonstrated how casting user interface design as an optimization problem allowed us to close more than half of the performance gap between able-bodied and motor-impaired users. Our IdeaHound project illustrates the concept of “integrated crowdsourcing”, a novel approach that combines innovations in design and computation to leverage implicit work in volunteer-powered peer production systems. Compared to existing peer production systems (e.g., Wikipedia), this approach makes more efficient use of volunteers’ effort and thus makes peer production systems more accessible to communities with fewer volunteer resources. Lastly, our recent results highlight two emerging threats to equity: First, some AI-driven productivity enhancements common in modern user interfaces disproportionately benefit people with high Need for Cognition (a stable personality trait that reflects general cognitive motivation). Second, predictive text entry technologies impact the content of what people write in a manner that systematically privileges majority voices.
I will conclude by making explicit the ethical aspirations that inform our technical work, and by sharing some of the intellectual tools we use to guide our everyday decisions to ensure that our work meets our aspirations.
(Professor, Columbia University, USA)
Bio: Julia Hirschberg is Percy K. and Vida L.W. Hudson Professor of CS at Columbia and was previously at Bell Laboratories/AT&T Labs. She works on speech and Natural Language Processing: text-to-speech synthesis and the detection of emotional, charismatic, abusive, deceptive, and trusted speech and language. She served on the ACL Executive Board, the ISCA board (2005-7 as president), the NAACL executive board, the CRA Executive Board, the AAAI Council, and is currently on the CRA-WP board and the IEEE SLTC. She was editor of Computational Linguistics and Speech Communication, is a fellow of AAAI, ISCA, ACL, ACM, and IEEE, and a member of the NAE, the American Academy, and the APS. She received the IEEE Flanagan Award and the ISCA Medal for Scientific Achievement.
Title of Talk: Who do you trust? Cues to Deception and Trust in Text and Speech
Abstract: Humans rarely perform better than chance at lie detection. In prior research on deception detection, we collected a very large corpus of interviews with subjects asked to provide true and false answers to 24 personal questions and built classifiers from their lexical and acoustic-prosodic features in truth and lie that performed significantly better than their human interviewers. To better understand human trust and mistrust, we then created a LieCatcher game to crowd-source ratings of the same statements from multiple raters. From these we analyzed the language features of trusted vs. mistrusted speech and compared these to features of statements that were in fact truthful or that were deceptive. We also collected information on the strategies that raters told us they used to discriminate between truth and lie, finding that many of the features they consistently believed to be signs of truth or of lie were not in fact reliable cues. We then built ML models that can predict trusted vs. mistrusted speech significantly better than chance. Our current research focuses on using LieCatcher to actually train humans to be better at lie detection.
(Professor, California Institute of Technology, USA)
Bio: Houman Owhadi is Professor of Applied and Computational Mathematics and Control and Dynamical Systems in the Department of Computing and Mathematical Sciences of the California Institute of Technology. Owhadi serves as an associate editor of the SIAM Journal on Numerical Analysis, the SIAM/ASA Journal on Uncertainty Quantification, the International Journal of Uncertainty Quantification, the Journal of Computational Dynamics, and Foundations of Data Science. He is one of the main editors of the Springer Handbook of Uncertainty Quantification. His research interests concern the exploration of interplays among numerical approximation, statistical inference, and learning, especially the facilitation/automation possibilities emerging from these interplays. Owhadi was awarded the 2019 Germund Dahlquist Prize by the Society for Industrial and Applied Mathematics.
Title of Talk: Plato, Delphi, and AI
Abstract: We show that artificial neural networks (ANNs) are essentially discretized solvers for a generalization of image registration variational problems. In this generalization, two-dimensional images are replaced by high dimensional shapes/images, material/landmark points are replaced by data points, and grayscale intensities are replaced by Plato's space of forms. As a consequence, we show that Deep Learning is equivalent to kernel regression with a (warped) kernel learned from data. We present a simple cross-validation alternative (Kernel Flows) to learning kernels from data and illustrate its efficacy (and low computational complexity) in predicting time series.
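The kernel regression that the abstract equates with deep learning can be sketched in a few lines. The example below is a generic illustration, not the talk's method: it uses a fixed Gaussian kernel rather than a warped kernel learned from data (as in Kernel Flows), and the `lengthscale` and `reg` parameters are assumptions for the demo.

```python
import numpy as np

def gaussian_kernel(X, Y, lengthscale=1.0):
    """K[i, j] = exp(-||x_i - y_j||^2 / (2 * lengthscale^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

def kernel_regression(X_train, y_train, X_test, lengthscale=1.0, reg=1e-6):
    """Kernel ridge interpolant: f(x) = K(x, X) (K(X, X) + reg*I)^(-1) y.
    reg is a small jitter for numerical stability."""
    K = gaussian_kernel(X_train, X_train, lengthscale)
    weights = np.linalg.solve(K + reg * np.eye(len(X_train)), y_train)
    return gaussian_kernel(X_test, X_train, lengthscale) @ weights

# Fit a 1-D function from 10 samples and evaluate it at two new points.
X = np.linspace(0, 1, 10)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
X_new = np.array([[0.25], [0.75]])
pred = kernel_regression(X, y, X_new, lengthscale=0.2)
```

In the Kernel Flows view, the kernel family itself would be parameterized (e.g., the lengthscale, or a learned warping of the inputs) and those parameters selected by cross-validation on held-out subsets of the data, instead of being fixed in advance as here.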
(Professor, Georgia Institute of Technology, USA)
Bio: Amy Bruckman is Professor and Senior Associate Chair in the School of Interactive Computing at the Georgia Institute of Technology. Her research focuses on social computing, with interests in content moderation, collaboration, social movements, and internet research ethics. Bruckman is an ACM Fellow and a member of the ACM CHI Academy. She received her Ph.D. from the MIT Media Lab in 1997, and a B.A. in physics from Harvard University in 1987. Her book “Should You Believe Wikipedia?” is forthcoming from Cambridge University Press in 2021.
Title of Talk: Beyond the Technology: A Sociotechnical View of Managing Online Bad Behavior
Abstract: With the rising tide of disinformation, misinformation, hate speech and harassment on the internet, platforms face hard choices about content moderation. What content should be removed, and what content should be annotated as dubious? Are there ways to discourage bad behavior and sharing of bad content rather than simply removing it after the fact? In this talk, first, I will present a framework of approaches to managing bad behavior online, drawing on Larry Lessig’s classic book Code. Second, I’ll demonstrate this framework by applying it to a study from my lab at Georgia Tech on the impact of quarantining communities on Reddit. We found quarantining to be surprisingly effective, and it offers an alternative to censorship. Finally, I’ll share some big picture thoughts on how we can improve online interaction, and the critical role of research.
(Professor, Massachusetts Institute of Technology, USA)
Bio: Randall Davis is a Professor of Electrical Engineering and Computer Science at MIT, and a Fellow of the Association for the Advancement of Artificial Intelligence. He graduated summa cum laude from Dartmouth and earned his PhD at Stanford. At MIT he and his group have designed and built systems that provide interaction with multiple modalities (e.g., drawing, gesture, speech, gaze tracking), facilitating novel forms of interaction. He has collaborated with Dr. Dana Penney (Beth Israel Lahey Health) to create a variety of novel neurological tests and corresponding software, based on the insight that when analyzed properly, very subtle human motions can be cognitive biomarkers indicative of cognitive health. This work has been recognized in several venues (e.g., INFORMS 2016 Innovative Applications in Analytics Award, finalist in the Geoffrey Beene Alzheimer’s Initiative, and selected as a winning team in MIT’s SOLVE competition). It aims to produce tools capable of detecting cognitive decline far earlier than is currently possible.
Title of Talk: Next Generation Testing of Cognitive Status
Abstract: Populations around the world are “greying,” i.e., the high end of the age distribution is becoming a larger percentage of the total. This is in part the consequence of advances in healthcare that allow people to live longer. But it also means that more people are living long enough to be susceptible to the diseases of cognitive decline (e.g., Alzheimer’s) that occur in the late 60s and beyond.
The toll these diseases take is astonishing. In the US alone, it was estimated that in 2019 5.8 million people suffered from some form of dementia and that their care cost $290 billion annually. This cost is projected to top 1 trillion dollars by 2050.
While there is as yet no cure for these neurodegenerative diseases, there are ways of slowing the progress of decline. That in turn means that early detection takes on special importance – the earlier the problem can be detected, the earlier steps can be taken toward mitigation.
Our research group at MIT and Lahey Health has been developing novel versions of traditional pen-and-paper neuropsychological tests, taking advantage of digital technology to capture detailed behavior, then extracting more information from each test by using AI technology to interpret the signals. I will describe our efforts, including a test created and seen through to FDA approval, discuss tests under development, and sketch out our next steps.
(Professor, Carnegie Mellon University, USA)
Bio: Jonathan Aldrich is a Professor of Computer Science at Carnegie Mellon University. He teaches courses in programming languages, software engineering, object-oriented design, and program analysis for quality and security. Prof. Aldrich directed CMU's Software Engineering Ph.D. program from 2013-2019. Dr. Aldrich’s research centers on programming languages and type systems that are deeply informed by software engineering considerations. His research contributions include verifying the correct implementation of an architectural design, modular formal reasoning about code, and API protocol specification and verification. His notable awards include an NSF CAREER award, the Dahl-Nygaard Junior Prize, the DARPA Computer Science Study Group, and an ICSE most influential paper award. He served as general chair, program chair, and steering committee chair of SPLASH and OOPSLA. Aldrich holds a bachelor's degree in Computer Science from Caltech and a Ph.D. from the University of Washington.
Title of Talk: Penrose: From Mathematical Notation to Beautiful Diagrams
Abstract: A diagram can be worth a thousand words when describing a mathematical concept. But drawing good diagrams is difficult, leading mathematician William Thurston to quip that "mathematicians usually have fewer and poorer figures in their papers and books than in their heads." In the Penrose project, we are developing a system to democratize the creation of mathematical diagrams. A user can describe a concept using familiar mathematical notation, and our system will translate that notation into one or more possible visual representations. Rather than rely on a fixed library of visualization tools, the visual representation is user-defined in a constraint-based specification language; diagrams are then generated automatically via constrained numerical optimization. In this talk, I'll describe the key ideas that make Penrose general and enable even novice users to develop high-quality diagrams. In the future, we hope that Penrose will enable students to better learn mathematical ideas and researchers to communicate their ideas more effectively.