
RESEARCH KEYNOTE SERIES

Jeffrey D. Ullman
(Professor, Stanford University)

Bio: Jeff Ullman is the Stanford W. Ascherman Professor of Engineering (Emeritus) in the Department of Computer Science at Stanford and CEO of Gradiance Corp. He received the B.S. degree from Columbia University in 1963 and the PhD from Princeton in 1966. Prior to his appointment at Stanford in 1979, he was a member of the technical staff of Bell Laboratories from 1966 to 1969 and on the faculty of Princeton University from 1969 to 1979. From 1990 to 1994, he was chair of the Stanford Computer Science Department. Ullman was elected to the National Academy of Engineering in 1989, the American Academy of Arts and Sciences in 2012, and the National Academy of Sciences in 2020, and has held Guggenheim and Einstein Fellowships. He has received the SIGMOD Contributions Award (1996), the ACM Karl V. Karlstrom Outstanding Educator Award (1998), the Knuth Prize (2000), the SIGMOD E. F. Codd Innovations Award (2006), the IEEE John von Neumann Medal (2010), the NEC C&C Foundation Prize (2017), and the ACM A.M. Turing Award (2020). He is the author of 16 books, including books on database systems, data mining, compilers, automata theory, and algorithms.

Title of the Talk:  Computational Complexity Theory for MapReduce

Abstract:  MapReduce, as embodied in systems such as Hadoop and Spark, has proven to be an important tool for developing reliable, resilient, parallel software.  But the mechanics of MapReduce require that we think differently about the cost of different algorithms to accomplish a given task.  We shall give the elements of a useful algorithm-design theory that involves tradeoffs between the amount of data permitted at any one reducer ("reducer size") and the amount of communication from mappers to reducers ("replication rate").  We shall give the exact form of this tradeoff for several interesting problems, including "all-pairs" comparisons, and detecting strings that differ by one bit.
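As a toy illustration (my own sketch, not code from the talk), the standard grouping construction for the all-pairs problem shows the tradeoff concretely: split n inputs into g groups and send each pair of groups to one reducer. The reducer size is then q = 2n/g, while each input is replicated to g - 1 reducers, so the replication rate r is roughly 2n/q, within a factor of two of the n/q lower bound.

```python
# Toy sketch of the reducer-size / replication-rate tradeoff for all-pairs:
# partition n inputs into g groups; one reducer per pair of groups.
from itertools import combinations

def all_pairs_mapping(n, g):
    """Assign inputs 0..n-1 to reducers, one reducer per pair of groups."""
    groups = [list(range(i, n, g)) for i in range(g)]  # g groups, round-robin
    return [sorted(groups[a] + groups[b]) for a, b in combinations(range(g), 2)]

n, g = 12, 4
reducers = all_pairs_mapping(n, g)

q = max(len(red) for red in reducers)          # reducer size: q = 2n/g
r = sum(len(red) for red in reducers) / n      # replication rate: r = g - 1

# Correctness: every pair of distinct inputs meets at some reducer.
covered = {frozenset(p) for red in reducers for p in combinations(red, 2)}
assert all(frozenset(p) in covered for p in combinations(range(n), 2))

print(q, r)  # q = 2n/g = 6, r = g - 1 = 3
```

Shrinking the groups (larger g) lowers the reducer size q but raises the replication rate r, which is exactly the tradeoff the talk analyzes.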


Danijela Cabric

(Professor, University of California, Los Angeles)

Bio: Danijela Cabric is a Professor in the Electrical and Computer Engineering Department at the University of California, Los Angeles. She received her M.S. from the University of California, Los Angeles in 2001 and her Ph.D. from the University of California, Berkeley in 2007, both in Electrical Engineering. In 2008, she joined UCLA as an Assistant Professor, where she heads the Cognitive Reconfigurable Embedded Systems lab. Her current research projects include novel radio architectures, signal processing, communications, machine learning, and networking techniques for spectrum sharing, 5G millimeter-wave, massive MIMO, and IoT systems.
Prof. Cabric was a recipient of the Samueli Fellowship in 2008, the Okawa Foundation Research Grant in 2009, the Hellman Fellowship in 2012, the National Science Foundation Faculty Early Career Development (CAREER) Award in 2012, and the Qualcomm Faculty Award in 2020 and 2021. She serves as an Associate Editor for several IEEE journals and on the IEEE Signal Processing for Communications and Networking Technical Committee. She was the General Chair of the IEEE Vehicular Networking Conference (VNC) in 2019 and IEEE Dynamic Spectrum Access (DySPAN) in 2021, and a Distinguished Lecturer for the IEEE Communications Society from 2018 to 2019. Prof. Cabric is an IEEE Fellow.

Title of the Talk: Ultra-Low-Latency Millimeter Wave Networking using True-Time-Delay Array Architecture

Abstract: Future generations of millimeter-wave networks (mmW-nets) will operate in the upper mmW frequency band, where ≥ 10 GHz of bandwidth can be used to meet ever-increasing demands. Their realization will require addressing a completely new set of challenges, including wider bandwidths, larger antenna array sizes, and higher cell density. These new system requirements demand a fundamental rethinking of radio architectures, signal processing, and networking protocols. Major breakthroughs are required in radio front-end architectures to enable wideband mmW-nets, as the most commonly adopted phased-antenna-array (PAA) radios face many challenges in achieving fast beam acquisition, interference suppression, and wideband data communication. This talk will present the potential of true-time-delay (TTD) array architectures for low-latency initial beam training and data communication in wideband mmW-nets.
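The wideband limitation of phase shifters can be seen in a few lines of standard array theory (an illustrative sketch, not code from the talk): a phase shifter is computed at the carrier only, so the beam "squints" away from the target angle at band-edge frequencies, while a true time delay produces a phase that scales with frequency and keeps the beam fixed.

```python
# Beam squint demo: phase-shifter steering vs. true-time-delay (TTD) steering
# for a uniform linear array, evaluated at the carrier and at the band edge.
import numpy as np

C = 3e8  # speed of light, m/s

def array_peak(n, d, f, elem_phase):
    """Angle (rad) where an n-element array with per-element phases peaks at freq f."""
    theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)
    k = np.arange(n)[:, None]
    af = np.abs(np.sum(np.exp(1j * (2 * np.pi * f * d * k * np.sin(theta) / C
                                    + elem_phase[:, None])), axis=0))
    return theta[np.argmax(af)]

n, f0, bw = 16, 60e9, 5e9            # 16 elements, 60 GHz carrier, 5 GHz to band edge
d = C / (2 * f0)                     # half-wavelength spacing at the carrier
target = np.deg2rad(30)
k = np.arange(n)

ps = -2 * np.pi * f0 * d * k * np.sin(target) / C   # phase shifter: fixed phase, set at f0
tau = -d * k * np.sin(target) / C                   # TTD: fixed delay; phase = 2*pi*f*tau

for f in (f0, f0 + bw):
    peak_ps = np.rad2deg(array_peak(n, d, f, ps))
    peak_ttd = np.rad2deg(array_peak(n, d, f, 2 * np.pi * f * tau))
    print(f / 1e9, peak_ps, peak_ttd)   # TTD stays at 30 deg; phase shifter drifts
```

At 65 GHz the phase-shifter beam lands near 27.5° instead of 30°, while the TTD beam stays on target across the whole band, which is why TTD arrays are attractive for ≥ 10 GHz bandwidths.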


Stephen Boyd

(Professor, Stanford University)

Bio: Stephen Boyd is the Samsung Professor of Engineering, and Professor of Electrical Engineering at Stanford University, with courtesy appointments in Computer Science and Management Science and Engineering. He received the A.B. degree in Mathematics from Harvard University in 1980, and the Ph.D. in Electrical Engineering and Computer Science from the University of California, Berkeley, in 1985, before joining the faculty at Stanford. His current research focus is on convex optimization applications in control, signal processing, machine learning, finance, and circuit design. He is a member of the US National Academy of Engineering, a foreign member of the Chinese Academy of Engineering, and a foreign member of the National Academy of Korea.

Title of the Talk: Convex Optimization

Abstract: Convex optimization has emerged as a useful tool for applications that include data analysis and model fitting, resource allocation, engineering design, network design and optimization, finance, and control and signal processing. After an overview of the mathematics, algorithms, and software frameworks for convex optimization, we turn to common themes that arise across applications, such as sparsity and relaxation. We describe recent work on real-time embedded convex optimization, in which small problems are solved repeatedly in millisecond or microsecond time frames.
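As a toy illustration of the "small problems solved repeatedly" setting (my own sketch, not the speaker's code), here is a tiny convex problem, nonnegative least squares, solved by projected gradient descent. Solves of this size run in microseconds, which is what makes embedded, real-time use feasible.

```python
# Nonnegative least squares: minimize ||Ax - b||^2 subject to x >= 0,
# solved by projected gradient descent (gradient step, then clip to x >= 0).
import numpy as np

def nnls_pgd(A, b, iters=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = np.maximum(x - grad / L, 0.0)  # descent step, then project onto x >= 0
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.maximum(rng.standard_normal(5), 0.0)   # a feasible ground truth
b = A @ x_true

x = nnls_pgd(A, b)
print(np.linalg.norm(A @ x - b))   # residual is near zero at the optimum
```

The projection step here is just a clip; for more general constraint sets the same scheme works whenever the projection is cheap, which is one reason structured convex problems suit embedded solvers.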


Edith Elkind

(Professor, Oxford University)

Bio: Edith Elkind is a Professor of Computer Science at the University of Oxford. She obtained her PhD from Princeton in 2005 and worked in the UK, Israel, and Singapore before joining Oxford in 2013. She works in algorithmic game theory, with a focus on algorithms for collective decision making and coalition formation. Edith has published over 100 papers in leading AI conferences and journals, and has served as a program chair of WINE, AAMAS, ACM EC, and COMSOC; she will serve as a program chair of IJCAI in 2023.

Title of the Talk: United for Change: Deliberative Coalition Formation to Change the Status Quo

Abstract: We study a setting in which a community wishes to identify a strongly supported proposal from a space of alternatives, in order to change the status quo. We describe a deliberation process in which agents dynamically form coalitions around proposals that they prefer over the status quo. We formulate conditions on the space of proposals and on the ways in which coalitions are formed that guarantee that deliberation succeeds, that is, terminates by identifying a proposal with the largest possible support. Our results provide theoretical foundations for the analysis of deliberative processes, in particular in systems for democratic deliberation support, such as LiquidFeedback or Polis.

Based on joint work with Davide Grossi, Nimrod Talmon, Udi Shapiro and Abheek Ghosh.
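To make the dynamics concrete, here is a deliberately simplified toy (my own, not the model from the talk): each agent approves the proposals it prefers to the status quo, and coalitions greedily merge whenever their members share a commonly approved proposal. In this small instance the greedy process reaches a maximum-support proposal; in general it need not, which is precisely why structural conditions on proposals and coalition formation matter.

```python
# Toy deliberation: coalitions merge while they share an approved proposal.
from itertools import combinations

def deliberate(approvals):
    """approvals: one set of approved proposals per agent.
    Returns final coalitions as (agent_set, shared_proposals) pairs."""
    coalitions = [({i}, set(a)) for i, a in enumerate(approvals)]
    merged = True
    while merged:
        merged = False
        for i, j in combinations(range(len(coalitions)), 2):
            (ai, pi), (aj, pj) = coalitions[i], coalitions[j]
            if pi & pj:                          # a proposal both coalitions approve
                coalitions[i] = (ai | aj, pi & pj)
                del coalitions[j]
                merged = True
                break
    return coalitions

# Five agents, three proposals 'a', 'b', 'c' preferred to the status quo.
approvals = [{'a', 'b'}, {'b'}, {'b', 'c'}, {'a', 'c'}, {'c'}]
final = deliberate(approvals)
best = max(final, key=lambda c: len(c[0]))
print(best)   # a coalition of 3 agents around proposal 'b'
```

Here the maximum possible support is 3 (proposal 'b' is approved by agents 0, 1, 2; proposal 'c' by agents 2, 3, 4), and greedy merging finds a coalition of that size.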

Geoffrey Fox

(Professor, University of Virginia)

Bio: Dr. Fox received a Ph.D. in Theoretical Physics from Cambridge University, where he was Senior Wrangler. He is now a Professor in the Biocomplexity Institute & Initiative and the Computer Science Department at the University of Virginia. He previously held positions at Caltech, Syracuse University, Florida State University, and Indiana University, after being a postdoc at the Institute for Advanced Study at Princeton, Lawrence Berkeley Laboratory, and Peterhouse, Cambridge. He has supervised the Ph.D. theses of 75 students. He has an h-index of 86 with over 41,000 citations. He received the High-Performance Parallel and Distributed Computing (HPDC) Achievement Award and the ACM-IEEE CS Ken Kennedy Award for foundational contributions to parallel computing in 2019. He is a Fellow of the APS (Physics) and the ACM (Computing) and works at the interdisciplinary interface between computing and applications. He is currently active in the industry consortium MLCommons/MLPerf.

Title of the Talk: AI for Science illustrated by Deep Learning for Geospatial Time Series

Abstract: AI is expected to transform both science and the approach to science. As an example, we take the use of deep learning to describe geospatial time series. We present a general approach building on previous work on recurrent neural networks and transformers. We give three examples of so-called spatial bags from earthquake nowcasting, medical time series, and particle dynamics and focus on the earthquake case. The latter is presented as an MLCommons benchmark challenge with three different implementations: a pure recurrent network, a Spatio-temporal science transformer, and a version of the Google Temporal Fusion Transformer. We discuss how deep learning is used to both clean up the inputs and describe hidden dynamics. We show that both data engineering (wrangling data into desired input format) and data science (the deep learning training/inference) are needed and comment on achieving high performance in both. We briefly speculate how such particular examples can drive broad progress in AI for science.


Ken Birman

(Professor, Cornell University)

Bio: Ken Birman joined Cornell after receiving his Ph.D. degree in Computer Science from U.C. Berkeley. He currently holds the N. Rama Rao Chair in Computer Science. A researcher in distributed systems, Professor Birman focuses on high-assurance applications. His past work was used in settings that include the New York Stock Exchange, the French Air Traffic Control System, and the US Navy AEGIS. More recent systems have transitioned to companies such as IBM, Microsoft, Cisco, and Amazon. Professor Birman has been the Editor-in-Chief of the ACM Transactions on Computer Systems and has chaired or served on program committees for numerous conferences. He has also run a number of studies on behalf of the Air Force, NSF, DARPA, and DOE aimed at understanding how best to exploit cloud computing in sensitive settings. At present, Professor Birman is working on a new software platform for reliable cloud computing, intended to support the use of machine learning in cloud environments that track sensor data and need to take actions under tight time pressure. One concrete example involves managing the smart power grid, a topic he is exploring in collaboration with the New England ISO, the New York Power Authority, and the New York ISO. He has several recent publications on this work, and one of the main components, a system he calls Derecho, is available on GitHub.com for open-source download. Beyond the smart power grid, Derecho has applications to other kinds of smart infrastructures (such as highways, homes, and cities), and can be used to create cloud storage infrastructures that use machine intelligence to decide what to store, how to preprocess it, and what forms of indexing to run now in anticipation of future requests.

Professor Birman is a member of the Computer Science graduate field, and plays an active role in advising Cornell NYC Tech post-docs through the Jacobs' Institute's Runway program.

Title of the Talk: Cascade: Ultra-fast Edge Computing for Intelligent IoT

Abstract: Cascade is a new open-source computing platform designed to host AI or ML software close to cameras, other sensors, and actuators, in settings where it is important to obtain ultra-low latencies and very high data rates. While preserving a standard programming model, Cascade maps data movement and computing to accelerators such as RDMA and 5G networking and NVMe memory, and leverages GPUs when available. Cascade is dramatically faster than widely popular platforms such as Spark and Apache Flink, yet just as easy to use; indeed, code from those platforms can often be ported to run on Cascade with little or no change.


Alyssa B. Apsel

(Professor, Cornell University)

Bio: Alyssa Apsel received the B.S. from Swarthmore College in 1995 and the Ph.D. from Johns Hopkins University, Baltimore, MD, in 2002.  She joined Cornell University in 2002, where she is currently Director of Electrical and Computer Engineering.  The focus of her research is on power-aware mixed signal circuits and design for highly scaled CMOS and modern electronic systems.  Her current research is on the leading edge of ultra-low power and flexible RF interfaces for multi-standard wireless and IoT.  Her group has pioneered the use of coupled oscillators for network synchronization of mesh networks and a variety of techniques for tunable narrowband RF systems.  She has authored or coauthored over 100 refereed publications including one book in related fields of RF mixed signal circuit design, ultra-low power radio, interconnect design and planning, photonic integration, and process invariant circuit design techniques resulting in ten patents.  She has received a number of best paper awards and the National Science Foundation CAREER Award in addition to being selected by Technology Review Magazine as one of the Top Young Innovators in 2004.  More recently Professor Apsel served as a Distinguished Lecturer for IEEE CAS from 2018-2019 and was named an IEEE Fellow.

Title of the Talk:  Ubiquitous, Seamless, and Future Proofed: How Wireless Circuits Can Push IoT

Abstract: In 2021 the number of IoT devices reached 46 billion, a 200% increase over the number in 2016. By 2030 this number is expected to jump to 125 billion. While the FCC and other regulators have added licensed and unlicensed spectrum across several bands over the past few years to accommodate these new users, the need remains for increased wireless capacity and for radios that can quickly adapt to new standards. Needless to say, the RF circuit designer has a significant role to play in solving these problems.

As the market continues to grow, regulating bodies in various countries will undoubtedly continue to work to free up and reallocate spectrum, and users will continue to find more ways to use that spectrum. Users will need both short-reach, low-power IoT devices that can operate independently and share spectrum, as well as new WiFi and cellular radios that can quickly adapt to new environments and standards.

In this talk I will look at two approaches to these related problems that require unconventional radio designs. First, I will look at an approach from the network side: how to use hardware support to build functional mesh networks that communicate point to point in a scalable fashion. Such radios can reduce communication bottlenecks in centralized systems and enable more devices and sensors with greater flexibility. The second part of the talk will examine how to add flexibility to the RF front end itself to accommodate changing standards and environments while keeping design and circuit costs low. I will show techniques for both broadband and tunable narrowband systems that enable flexibility while maintaining high performance. With these examples I will discuss the potential of future flexible analog RF designs and the current limits of this approach.
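Decentralized synchronization of a mesh can be illustrated with the classic Kuramoto coupled-oscillator model (a standard textbook model, offered here as a toy sketch and not the speaker's hardware design): each node repeatedly nudges its phase toward its neighbors', and the network locks into a common phase without any central coordinator.

```python
# Kuramoto-style phase synchronization on a small fully connected mesh.
import numpy as np

def kuramoto_step(phases, adj, k, dt):
    """One Euler step: node i moves toward its neighbors' phases."""
    diff = np.sin(phases[None, :] - phases[:, None])  # diff[i, j] = sin(phase_j - phase_i)
    return phases + dt * k * (adj * diff).sum(axis=1)

rng = np.random.default_rng(1)
n = 8
adj = np.ones((n, n)) - np.eye(n)        # every node hears every other node

phases = rng.uniform(0, 2 * np.pi, n)    # start fully unsynchronized
for _ in range(2000):
    phases = kuramoto_step(phases, adj, k=0.1, dt=0.05)

# Kuramoto order parameter: magnitude 1 means all phases coincide.
sync = np.abs(np.exp(1j * phases).mean())
print(sync)
```

The order parameter climbs to essentially 1, i.e. the nodes agree on a common phase using only pairwise interactions, which is the property that makes oscillator-based synchronization attractive for decentralized mesh radios.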


Brian A. Barsky

(Professor, University of California, Berkeley)

Bio: Brian A. Barsky is Professor of the Graduate School at the University of California, Berkeley, where he is a Warren and Marjorie Minner Faculty Fellow in Engineering Ethics and Professional/Social Responsibility. Prof. Barsky has faculty affiliations in Electrical Engineering and Computer Sciences (EECS), Optometry, Vision Science, Bioengineering, the Berkeley Institute of Design (BID), the Berkeley Center for New Media (BCNM), the Arts Research Center (ARC), and the Berkeley Canadian Studies Program. He attended McGill University in Montréal, where he received a D.C.S. in engineering and a B.Sc. in mathematics and computer science. He studied computer graphics and computer science at Cornell University in Ithaca, where he earned an M.S. degree. His Ph.D. degree is in computer science from the University of Utah in Salt Lake City. His research interests include computational photography, contact lens design, computer methods for optometry and ophthalmology, image synthesis, computer-aided geometric design and modeling, CAD/CAM/CIM, interactive and realistic three-dimensional computer graphics, visualization in scientific computing, computer-aided cornea modeling and visualization, medical imaging, vision-correcting displays, and virtual environments for surgical simulation.

Title of the Talk: How Prioritizing Profits over Safety Created the Deadly Boeing 737 MAX and its Ill-Conceived Automated Software

Abstract: The Boeing 737 MAX airplane crashed twice with no survivors within two years of its first commercial flight. It was grounded worldwide; the grounding lasted 19 months in the U.S., and the plane remains prohibited from flying in some countries' airspace. Examination of the many factors that led to these disastrous consequences illuminates disquieting ethical issues of corporate behavior and lack of government oversight. There is a complex web of concerns involved. At the heart of the tragedy is an ill-conceived automated computer software approach to a flawed aerodynamic design. Prof. Barsky became involved in this topic when his friend's granddaughter was killed in the second crash. He met with the head of the Aviation Accident Investigation Sub-Committee of the National Transportation Safety Committee of Indonesia in Jakarta to obtain first-hand the details of the first crash. He was featured prominently in a recent Smithsonian documentary shown in the U.S. and U.K. His full-page op-ed in the Globe and Mail was discussed in the Parliament of Canada. In this talk, Prof. Barsky will elucidate how these tragedies were the consequence of a corporation prioritizing profits over safety, as well as of regulatory capture of the government agency that was derelict in its duty to protect the flying public.

 

Important Deadlines

Full Paper Submission: 15th January 2023
Acceptance Notification: 1st February 2023
Final Paper Submission: 15th February 2023
Early Bird Registration: 15th February 2023
Presentation Submission: 17th February 2023
Conference: 8 - 11 March 2023

Previous Conference

IEEE CCWC 2020

Sister Conferences

IEEE UEMCON 2020

IEEE IEMCON 2020

IEEE AIIOT 2021


Announcements

•    Best Paper Award will be given for each track
•    Conference Record No. will be updated