KEYNOTE SERIES
Michael Bronstein
(Professor, University of Oxford, UK)
Bio: Michael Bronstein is the DeepMind Professor of AI at the University of Oxford and Head of Graph Learning Research at Twitter. He was previously a professor at Imperial College London and held visiting appointments at Stanford, MIT, and Harvard, and has also been affiliated with three Institutes for Advanced Study (at TUM as a Rudolf Diesel Fellow (2017-2019), at Harvard as a Radcliffe Fellow (2017-2018), and at Princeton as a short-time scholar (2020)). Michael received his PhD from the Technion in 2007. He is the recipient of the Royal Society Wolfson Research Merit Award, the Royal Academy of Engineering Silver Medal, five ERC grants, two Google Faculty Research Awards, and two Amazon AWS ML Research Awards. He is a Member of the Academia Europaea, a Fellow of IEEE, IAPR, BCS, and ELLIS, an ACM Distinguished Speaker, and a World Economic Forum Young Scientist. In addition to his academic career, Michael is a serial entrepreneur and founder of multiple startup companies, including Novafora, Invision (acquired by Intel in 2012), Videocites, and Fabula AI (acquired by Twitter in 2019).
Abstract: The message-passing paradigm has been the “battle horse” of deep learning on graphs for several years, making graph neural networks a big success in a wide range of applications, from particle physics to protein design. From a theoretical viewpoint, it established the link to the Weisfeiler-Lehman hierarchy, which allows the expressive power of GNNs to be analysed. We argue that the very “node-and-edge”-centric mindset of current graph deep learning schemes may hinder future progress in the field. As an alternative, we propose physics-inspired “continuous” learning models that open up a new trove of tools from the fields of differential geometry, algebraic topology, and differential equations, so far largely unexplored in graph ML.
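For readers new to the paradigm, the following is a minimal sketch of one message-passing round, where each node aggregates its neighbours' features and updates its own state (illustrative NumPy only; the weights, sum aggregation, and ReLU update are assumptions for the sketch, not the speaker's models):

```python
import numpy as np

def message_passing_layer(A, H, W_self, W_neigh):
    """One message-passing round: every node sums its neighbours'
    features (the "messages") and combines them with its own state.

    A       : (n, n) binary adjacency matrix
    H       : (n, d) node-feature matrix
    W_self  : (d, d_out) weights applied to a node's own features
    W_neigh : (d, d_out) weights applied to the aggregated messages
    """
    messages = A @ H  # sum-aggregate neighbour features
    return np.maximum(0.0, H @ W_self + messages @ W_neigh)  # ReLU update

# Toy usage: a 4-node path graph with random 8-dimensional features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
H = rng.normal(size=(4, 8))
H = message_passing_layer(A, H, rng.normal(size=(8, 8)), rng.normal(size=(8, 8)))
```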

Aldo Faisal
(Professor, Imperial College London, UK)
Bio: Professor Aldo Faisal (@FaisalLab) is the Professor of AI & Neuroscience at the Dept. of Computing and the Dept. of Bioengineering at Imperial College London (UK) and Chair of Digital Health at the University of Bayreuth (Germany). In 2021 he was awarded a prestigious 5-year UKRI Turing AI Fellowship. Since 2019, Aldo has been the founding director of the £20M UKRI Centre for Doctoral Training in AI for Healthcare, and he leads the Behaviour Analytics Lab at the Data Science Institute, London. Aldo works at the interface of machine learning, medicine, and translational biomedical engineering to help people in disease and in health. He is currently one of the few engineers worldwide who lead their own clinical trials to validate their technology. In this space his work focuses on digital biomarkers and AI for medical intervention (Makin et al., Nat Biomed Eng; Komorowski et al., Nat Med, 2018; Gottesman et al., Nat Med, 2019). His work has received a number of prizes and awards, including the $50,000 Research Discovery Prize from the Toyota Foundation.
Abstract: Here we present a novel AI-driven approach to human behaviour analytics called Ethomics (Kadirvelu and Faisal, 2023, Nature Medicine; Ricotti and Faisal, 2023, Nature Medicine) that allows unprecedented resolution in diagnosing and measuring disease progression. We apply the same AI framework to two very different degenerative diseases, one affecting adults (Friedreich's ataxia) and one affecting children (Duchenne muscular dystrophy). Crucially, the AI method detects imperceptible changes in human behaviour, which allows us to measure gene transcription rates from movement changes alone and to predict each individual patient's disease trajectory up to a year into the future. Our ethomics technology therefore dramatically de-risks and accelerates drug development for rare diseases: it allows us to cut the duration of clinical trials in half and requires only a fraction of the patients needed by current “gold-standard” methods to determine whether a treatment is working.
Faramarz Fekri
(Professor, Georgia Institute of Technology, USA)
Bio: Dr. Fekri is a Professor of ECE and a GTRI Fellow at Georgia Tech. He is one of the leading researchers in statistical signal processing, information theory, graphical models, inductive logic reasoning, and machine learning, with applications to communications, biology, robotics, and artificial intelligence. He is an IEEE Fellow and a faculty member of the Center for Machine Learning at Georgia Tech. Dr. Fekri received the Faculty Research Innovation Award from Sony, the Samsung GRO Award, the National Science Foundation CAREER Award, the Southern Center for Electrical Engineering Education Young Faculty Development Award, and the Outstanding Young Faculty Award of the School of ECE. He serves on the technical program committees of several IEEE/ACM conferences. He is currently an Associate Editor of the IEEE Transactions on Molecular, Biological, and Multi-Scale Communications. In the past, he served on the editorial boards of the IEEE Transactions on Communications and the Elsevier journal Physical Communication (PHYCOM).
Abstract: Deep learning has revolutionized machine learning and has expanded its reach into many diverse fields, from autonomous driving to augmented reality and distributed IoT devices. Not unexpectedly, this has also led to deep-learning-based design of communication systems. In all these applications, we often need to compute specific target functions that do not have any simple form, e.g., obstacle detection, object recognition, etc. However, traditional cloud-based methods that focus on transferring data to a central location, either for training or inference, place enormous strain on wireless network resources. To address this, we develop a machine learning framework for distributed functional compression over wireless channels. We advocate that our machine learning framework can, by design, compute any arbitrary function for the desired functional compression task in IoT. In particular, the raw sensory data are never transferred to a central node for training or inference, thus reducing communication overhead. For these algorithms, we provide theoretical convergence guarantees and upper bounds on communication cost. Our simulations show that the learned encoders and decoders for functional compression perform significantly better than traditional approaches and are robust to channel condition changes and sensor outages. Compared to the cloud-based scenario, our algorithms reduce channel use by two orders of magnitude. Finally, we turn our attention to the problem of privacy in distributed functional compression, where the nodes seek to hide private attributes correlated with the function value. We first study the single-node, single-receiver problem. We then return to the distributed functional compression problem and devise a framework that demonstrates a state-of-the-art privacy-utility trade-off in the distributed scenario.
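As a toy illustration of the setup (illustrative PyTorch; the target function, code length, and channel noise below are assumptions for the sketch, not the speaker's design), each sensor learns an encoder that transmits a short code, and the receiver learns a decoder that recovers the function value directly, so raw data never crosses the channel:

```python
import torch
import torch.nn as nn

k = 2  # code length per sensor (channel uses)
enc1 = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, k))
enc2 = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, k))
dec = nn.Sequential(nn.Linear(2 * k, 32), nn.ReLU(), nn.Linear(32, 1))
params = [*enc1.parameters(), *enc2.parameters(), *dec.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

for step in range(2000):
    x1, x2 = torch.rand(256, 1), torch.rand(256, 1)  # raw sensor readings
    target = torch.maximum(x1, x2)        # the function to be computed
    code = torch.cat([enc1(x1), enc2(x2)], dim=1)
    noisy = code + 0.05 * torch.randn_like(code)  # AWGN channel
    loss = ((dec(noisy) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Training end-to-end through the simulated channel noise is what makes the learned codes robust to channel variations.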
Sami Abu-El-Haija
(Senior Research Scientist, Google Research)
Bio: Sami is a Senior Research Scientist at Google Research, working in the Algorithms & Optimizations research group. He has published several papers in top-tier venues and studied at top-tier institutions, most recently the University of Southern California, where he received his PhD.
Abstract: Feed-forward neural networks such as Graph Neural Networks (GNNs) achieve outstanding empirical performance on several graph prediction tasks, such as link prediction and node classification, e.g., on social or biological graphs. However, state-of-the-art (SOTA) models require long training times (hours to days, even on expensive GPUs). On the other hand, shallow (1-layer) neural networks pose convex objective functions, and in some cases their optimal parameters can be estimated in closed form, without calculating gradients. Sami will describe his journey in designing a new kind of deep neural network, hand-crafted such that, on one hyperplane in its parameter space, the network is equivalent to a standard MLP with ReLU activations, while on another hyperplane, the network becomes linear in its parameters. Such networks can be initialized in closed form by projecting their parameters onto the linear hyperplane; afterwards, they can be fine-tuned in the usual regime. In his experiments, this training paradigm can speed up training by hundreds or thousands of times.
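The exact construction is the speaker's own; the following is only a rough, hedged illustration of the two-phase idea (a closed-form solve where the model is linear in its parameters, then ordinary fine-tuning), with all sizes and data made up for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))
y = np.sin(X @ rng.normal(size=16))            # toy regression target

# Phase 1: with the first layer held fixed, the model is linear in its
# last-layer parameters, so the objective is convex and the optimum is
# a least-squares solution -- no gradient computation needed.
W1 = rng.normal(size=(16, 64)) / 4.0
H = np.maximum(0.0, X @ W1)                    # ReLU features
w2, *_ = np.linalg.lstsq(H, y, rcond=None)     # closed-form initialization

# Phase 2: fine-tune all parameters in the usual gradient regime.
lr = 1e-3
for _ in range(200):
    H = np.maximum(0.0, X @ W1)
    err = H @ w2 - y
    w2 -= lr * H.T @ err / len(X)
    W1 -= lr * X.T @ (np.outer(err, w2) * (H > 0)) / len(X)
```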
Yuejie Chi
(Professor, Carnegie Mellon University)
Bio: Dr. Yuejie Chi is a Professor in the Department of Electrical and Computer Engineering, and a faculty affiliate with the Machine Learning Department and CyLab, at Carnegie Mellon University. She received her Ph.D. and M.A. from Princeton University, and her B.Eng. (Hon.) from Tsinghua University, all in electrical engineering. Her research interests lie in the theoretical and algorithmic foundations of data science, signal processing, machine learning, and inverse problems, with applications in sensing, imaging, decision making, and societal systems, broadly defined. Among other honors, Dr. Chi received the Presidential Early Career Award for Scientists and Engineers (PECASE) and the inaugural IEEE Signal Processing Society Early Career Technical Achievement Award for contributions to high-dimensional structured signal processing. She is an IEEE Fellow (Class of 2023) for contributions to statistical signal processing with low-dimensional structures.
Abstract: Many problems encountered in science and engineering can be formulated as estimating a low-rank object from incomplete, and possibly corrupted, linear measurements; prominent examples include matrix completion and tensor completion. Through the lens of matrix and tensor factorization, one of the most popular approaches is to employ simple iterative algorithms such as gradient descent to recover the low-rank factors directly, which allow for small memory and computation footprints. However, the convergence rate of gradient descent depends linearly, and sometimes even quadratically, on the condition number of the low-rank object, and therefore, slows down painstakingly when the problem is ill-conditioned. This talk introduces a new algorithmic approach, dubbed scaled gradient descent (ScaledGD), that provably converges linearly at a constant rate independent of the condition number of the low-rank object, while maintaining the low per-iteration cost of gradient descent. A nonsmooth variant of ScaledGD provides further robustness to corruptions by optimizing the least absolute deviation loss. In addition, ScaledGD continues to admit fast global convergence, again almost independent of the condition number, from a small random initialization when the rank is over-specified. In total, ScaledGD highlights the power of appropriate preconditioning in accelerating nonconvex statistical estimation, where the iteration-varying preconditioners promote desirable invariance properties of the trajectory with respect to the symmetry in low-rank factorization.
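For concreteness, the ScaledGD update for a rank-r factorization M ≈ L R^T differs from plain gradient descent only in the preconditioners (R^T R)^{-1} and (L^T L)^{-1} applied to each factor's gradient. A minimal NumPy sketch for the fully observed toy case (problem sizes, step size, and initialization scale are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 100, 5
# Ill-conditioned rank-r ground truth: singular values span 1 to 1000.
U, _ = np.linalg.qr(rng.normal(size=(n, r)))
V, _ = np.linalg.qr(rng.normal(size=(n, r)))
M = U @ np.diag(np.logspace(0, 3, r)) @ V.T

L = 0.1 * rng.normal(size=(n, r))   # small random initialization
R = 0.1 * rng.normal(size=(n, r))
eta = 0.5
for _ in range(300):
    E = L @ R.T - M  # residual of f(L, R) = 0.5 * ||L R^T - M||_F^2
    # The preconditioners rescale each factor's gradient so the iterates
    # become insensitive to the condition number of M.
    L_new = L - eta * E @ R @ np.linalg.inv(R.T @ R)
    R = R - eta * E.T @ L @ np.linalg.inv(L.T @ L)
    L = L_new
print(np.linalg.norm(L @ R.T - M) / np.linalg.norm(M))  # relative error
```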

Pin-Yu Chen
(Principal Research Scientist, IBM)
Bio: Dr. Pin-Yu Chen is a principal research scientist at IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. He is also the chief scientist of the RPI-IBM AI Research Collaboration and PI of ongoing MIT-IBM Watson AI Lab projects. Dr. Chen received his Ph.D. in electrical engineering and computer science from the University of Michigan, Ann Arbor, USA, in 2016. His recent research focuses on adversarial machine learning and the robustness of neural networks, and his long-term research vision is to build trustworthy machine learning systems. At IBM Research, he has received several research accomplishment awards, including being named an IBM Master Inventor and receiving the IBM Corporate Technical Award in 2021. His research contributes to IBM open-source libraries including the Adversarial Robustness Toolbox (ART 360) and AI Explainability 360 (AIX 360). He has published more than 50 papers related to trustworthy machine learning at major AI and machine learning conferences, given tutorials at NeurIPS '22, AAAI '22, IJCAI '21, CVPR ('20, '21), ECCV '20, ICASSP '20, KDD '19, and Big Data '18, and organized several workshops on adversarial machine learning. He received the IEEE GLOBECOM 2010 GOLD Best Paper Award and the UAI 2022 Best Paper Runner-Up Award.
Abstract: In this talk, I will share my research journey toward building an AI model inspector for evaluating, improving, and exploiting adversarial robustness for deep learning. I will start by providing an overview of research topics concerning adversarial robustness and machine learning, including attacks, defenses, verification, and novel applications. For each topic, I will summarize my key research findings, such as (i) practical optimization-based attacks and their applications to explainability and scientific discovery, (ii) plug-and-play defenses for model repairing and patching, and (iii) attack-agnostic robustness assessment. Finally, I will conclude my talk with my vision of preparing deep learning for the real world and the research methodology of learning with an adversary.
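As a concrete example of the attack side, below is a standard white-box projected-gradient (PGD) attack, a common baseline in this literature (illustrative PyTorch; the model is a placeholder, and Dr. Chen's own work also covers gradient-free, black-box attacks):

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iterated gradient ascent on the loss, projected back into an
    L-infinity ball of radius eps around the clean input x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep a valid image
    return x_adv

# Toy usage with a placeholder linear classifier on CIFAR-sized inputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
x_adv = pgd_attack(model, x, y)
```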

Ehsan Afshari
(Professor, University of Michigan)
Abstract: The increasing demand for compact, low-cost, high-resolution imaging radar systems has pushed operating frequencies into the terahertz range, owing to the shorter wavelength and larger available bandwidth. These radars can be used in security screening, industrial quality control, and biological hydration sensing applications. In this talk, we review the basics of imaging radar systems as well as recent advances in this area.
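The pull toward terahertz frequencies is easy to quantify: range resolution is set by the sweep bandwidth via delta_R = c / (2B), and far more bandwidth is available at terahertz carriers. A quick back-of-the-envelope check (the example bandwidths are assumptions for illustration):

```python
# Range resolution improves linearly with bandwidth: delta_R = c / (2B).
c = 3e8  # speed of light, m/s
for B in (4e9, 80e9):  # e.g., a 4 GHz mm-wave sweep vs. an 80 GHz THz-band sweep
    print(f"B = {B / 1e9:5.0f} GHz -> range resolution = {c / (2 * B) * 1e3:.2f} mm")
```

With 80 GHz of bandwidth the resolution reaches roughly 1.9 mm, which is what enables the imaging applications above.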
Ben Adlam
(Research Scientist, Google Brain)
Abstract: While kernel regression remains an important practical method, its connection to neural networks as their width becomes large has initiated fresh research. These neural kernels have drastically increased performance on diverse and nonstandard data modalities but require significantly more compute, which previously limited their application to smaller datasets. In this work, we address this by massively parallelizing their computation across many GPUs. We combine this with a distributed, preconditioned conjugate gradients algorithm to enable kernel regression at a large scale (i.e. up to five million examples). Using this approach, we study scaling laws of several neural kernels across many orders of magnitude for the CIFAR-5m dataset. Using data augmentation to expand the original CIFAR-10 training dataset by a factor of 20, we obtain a test accuracy of 91.2% (SotA for a pure kernel method). Moreover, we explore neural kernels on other data modalities, obtaining results on protein and small molecule prediction tasks that are competitive with SotA methods.
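The scaling hinge here is the linear solve: kernel regression needs alpha = (K + lambda*I)^{-1} y, and for millions of examples K cannot be factored directly, so the solve uses (preconditioned) conjugate gradients, which touches K only through matrix-vector products that can be sharded across GPUs. A minimal single-machine sketch of that solver (illustrative NumPy; the RBF kernel, data, and tolerances are assumptions, and the preconditioning and multi-GPU distribution of the actual work are omitted):

```python
import numpy as np

def kernel_cg(K, y, reg=1e-3, iters=200, tol=1e-10):
    """Solve (K + reg*I) a = y by conjugate gradients, touching K only
    through matrix-vector products (the part that parallelizes)."""
    a = np.zeros_like(y)
    r = y - (K @ a + reg * a)   # initial residual (= y, since a = 0)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Kp = K @ p + reg * p
        step = rs / (p @ Kp)
        a += step * p
        r -= step * Kp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return a

# Toy usage: RBF-kernel regression on random 1-D data.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.1)
alpha = kernel_cg(K, np.sin(3 * x))
pred = K @ alpha  # in-sample predictions
```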
Alexandre Bayen
(Professor, UC Berkeley)
Bio: Alexandre Bayen is the Associate Provost for Moffett Field Program Development at UC Berkeley and the Liao-Cho Professor of Engineering at UC Berkeley. He is a Professor of Electrical Engineering and Computer Science, and of Civil and Environmental Engineering. From 2014 to 2021, he served as the Director of the Institute of Transportation Studies (ITS) at UC Berkeley. He is also a Faculty Scientist in Mechanical Engineering at the Lawrence Berkeley National Laboratory (LBNL). He received the Engineering Degree in applied mathematics from the Ecole Polytechnique, France, in 1998, and the M.S. and Ph.D. in aeronautics and astronautics from Stanford University in 1999 and 2004, respectively. He was a Visiting Researcher at NASA Ames Research Center from 2000 to 2003. Between January and December 2004, he worked as the Research Director of the Autonomous Navigation Laboratory at the Laboratoire de Recherches Balistiques et Aerodynamiques (Ministere de la Defense, Vernon, France), where he holds the rank of Major. He has been on the faculty at UC Berkeley since 2005. Bayen has authored two books and over 200 articles in peer-reviewed journals and conferences. He received the Ballhaus Award from Stanford University in 2004 and the CAREER Award from the National Science Foundation in 2009, and he was named a NASA Top 10 Innovator on Water Sustainability in 2010. His projects Mobile Century and Mobile Millennium received the 2008 Best of ITS Award for ‘Best Innovative Practice’ at the ITS World Congress and a TRANNY Award from the California Transportation Foundation in 2009. Mobile Millennium has been featured more than 200 times in the media, including TV channels and radio stations (CBS, NBC, ABC, CNET, NPR, KGO, the BBC) and the popular press (Wall Street Journal, Washington Post, LA Times). Bayen received the Presidential Early Career Award for Scientists and Engineers (PECASE) from the White House in 2010. He is also the recipient of the Okawa Research Grant Award, the Ruberti Prize from the IEEE, and the Huber Prize from the ASCE.

Torsten Hoefler
(Professor, ETH Zurich)
Bio: Torsten Hoefler is a Professor of Computer Science at ETH Zurich, a member of Academia Europaea, and a Fellow of the ACM and IEEE. Following a “Performance as a Science” vision, he combines mathematical models of architectures and applications to design optimized computing systems. Before joining ETH Zurich, he led the performance modeling and simulation efforts for the first sustained Petascale supercomputer, Blue Waters, at the University of Illinois at Urbana-Champaign. He is also a key contributor to the Message Passing Interface (MPI) standard, where he chaired the "Collective Operations and Topologies" working group. Torsten won best paper awards at ACM/IEEE Supercomputing in 2010, 2013, 2014, 2019, and 2022, and at other international conferences. He has published numerous peer-reviewed scientific articles and authored chapters of the MPI-2.2 and MPI-3.0 standards. For his work, Torsten received the IEEE CS Sidney Fernbach Memorial Award in 2022, the ACM Gordon Bell Prize in 2019, the IEEE TCSC Award of Excellence (MCR), ETH Zurich's Latsis Prize, the SIAM SIAG/Supercomputing Junior Scientist Prize, the IEEE TCSC Young Achievers in Scalable Computing Award, and the BenchCouncil Rising Star Award. Following his Ph.D., he received the 2014 Young Alumni Award and the 2022 Distinguished Alumni Award of his alma mater, Indiana University. Torsten was elected to the first steering committee of ACM's SIGHPC in 2013 and has been re-elected every term since. He was the first European to receive many of these honors; he has also received both an ERC Starting Grant and an ERC Consolidator Grant. His research interests revolve around the central topic of performance-centric system design and include scalable networks, parallel programming techniques, and performance modeling for large-scale simulations and artificial intelligence systems. Additional information about Torsten can be found on his homepage at htor.inf.ethz.ch.
Important Deadlines
Full Paper Submission: 25th January 2023
Acceptance Notification: 11th February 2023
Final Paper Submission: 22nd February 2023
Early Bird Registration: 21st February 2023
Presentation Submission: 28th February 2023
Conference: 8 - 11 March 2023
Previous Conferences
IEEE CCWC 2022
IEEE CCWC 2019
IEEE CCWC 2018
Announcements
• Conference proceedings will be submitted for publication in the IEEE Xplore Digital Library.
• A Best Paper Award will be given for each track.
• Conference Record No.: 57344