KEYNOTE TALK SERIES
Prof. Shahram Latifi
(University of Nevada, Las Vegas)
Abstract: Over the past two decades, AI has advanced at an extraordinary pace. Breakthroughs in Deep Learning, Generative Adversarial Networks, Transfer Learning, and Large Language Models have accelerated progress and transformed nearly every sector — including education, healthcare, aerospace, manufacturing, security, e-commerce, and the arts. But alongside these achievements come serious concerns. How do we ensure training data is fair and unbiased? How do we protect privacy in increasingly data-driven systems? And most importantly, how do we maintain human control over technologies that are becoming more autonomous? In this talk, I will present a concise overview of AI, Machine Learning (ML), and Deep Learning (DL). I will highlight the challenges not only in building general-purpose AI but, more urgently, in developing AI systems that are safe, transparent, and trustworthy. I will also discuss current national and international initiatives aimed at establishing Responsible AI practices.

Prof. Danijela Cabric
(University of California, Los Angeles)
Bio: Danijela Cabric is a Professor in the Electrical and Computer Engineering Department at the University of California, Los Angeles. She received her M.S. from the University of California, Los Angeles in 2001 and her Ph.D. from the University of California, Berkeley in 2007, both in Electrical Engineering. In 2008, she joined UCLA as an Assistant Professor, where she heads the Cognitive Reconfigurable Embedded Systems lab. Her current research projects include novel radio architectures, signal processing, communications, machine learning, and networking techniques for spectrum sharing, millimeter-wave, massive MIMO, and IoT systems. She is a principal investigator in three large cross-disciplinary multi-university centers: SRC/JUMP ComSenTer, CONIX, and NSF SpectrumX. Prof. Cabric received the Samueli Fellowship in 2008, the Okawa Foundation Research Grant in 2009, the Hellman Fellowship in 2012, the National Science Foundation Faculty Early Career Development (CAREER) Award in 2012, and Qualcomm Faculty Awards in 2020 and 2021. Prof. Cabric is an IEEE Fellow.
Title of the Talk: Meeting 6G demands for energy efficiency and access to mid-band spectrum
Abstract: Each generation of wireless networks has taken a big step forward, introducing new technologies that raise the performance of networks and devices to support ever-richer services. In 5G, the telecommunications industry focused particularly on improving user experience, especially data rates and latency. The key objectives for 6G, however, have shifted significantly: operators are asking for lower operating costs, better energy efficiency, and access to mid-band spectrum, while embedding and leveraging AI/ML technology. This talk will discuss technologies and architectures for energy-efficient mobile and fixed wireless access that use new antenna array designs, beamforming modes, ultra-wideband multiple access, and scalable processing architectures to support the diverse coverage and connectivity requirements of 6G cellular and massive IoT deployments. It will also explore solutions for spectrum sharing in mid-band spectrum between cellular networks and incumbents, including radars and satellites.
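To make the energy-efficiency connection concrete: larger antenna arrays concentrate radiated power toward the user, so the same received signal quality can be achieved with less transmit power. The NumPy sketch below (illustrative parameters only, not drawn from the talk) computes the array gain of matched beamforming on a uniform linear array.

```python
import numpy as np

def ula_steering_vector(n_antennas: int, theta_rad: float, spacing: float = 0.5) -> np.ndarray:
    """Steering vector of a uniform linear array (antenna spacing in wavelengths)."""
    k = np.arange(n_antennas)
    return np.exp(1j * 2 * np.pi * spacing * k * np.sin(theta_rad))

def beamforming_gain_db(n_antennas: int, theta_rad: float) -> float:
    """Array gain when the beamformer is matched to the user's direction.

    Matched (conjugate) beamforming gives a coherent gain of n_antennas,
    i.e. 10*log10(n) dB over a single antenna.
    """
    a = ula_steering_vector(n_antennas, theta_rad)
    w = a / np.linalg.norm(a)          # unit-norm matched beamformer
    gain = np.abs(w.conj() @ a) ** 2   # |w^H a|^2
    return 10 * np.log10(gain)

for n in (4, 16, 64, 256):
    print(f"{n:4d} antennas -> {beamforming_gain_db(n, np.deg2rad(30)):.1f} dB array gain")
```

Each doubling of the array adds roughly 3 dB of gain, which an operator can trade for a corresponding cut in transmit power.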

Prof. Raj Jain
(Washington University, Saint Louis)
Bio: Raj Jain is currently the Barbara J. and Jerome R. Cox, Jr., Professor of Computer Science and Engineering at Washington University in St. Louis. Dr. Jain is a Life Fellow of IEEE, a Fellow of ACM, a Fellow of AAAS, and a recipient of the 2018 James B. Eads Award from the St. Louis Academy of Science and the 2017 ACM SIGCOMM Lifetime Achievement Award. Previously, he was one of the co-founders of Nayna Networks, Inc., a Senior Consulting Engineer at Digital Equipment Corporation in Littleton, Massachusetts, and then a professor of Computer and Information Sciences at Ohio State University in Columbus, Ohio. With 47,000+ citations on Google Scholar, he is among the most highly cited authors in computer science. Further information is at http://www.cse.wustl.edu/~jain/
Title of the Talk: Quantum Networking: Challenges and Research Opportunities
Abstract: Quantum computing is at the top of the hype curve and offers unique research opportunities. It is well known that quantum computers will break current public-key security methods, and post-quantum cryptography has already been standardized in response. Both industry and governments are investing heavily in this emerging technology, yet significant challenges remain. In this talk, I will highlight the opportunities for research in quantum networking.
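The threat to current public-key methods comes from Shor's algorithm, which factors large integers efficiently on a quantum computer, while RSA's security rests on factoring being hard. The toy Python sketch below (tiny illustrative primes, with brute-force trial division standing in for Shor's algorithm) shows how recovering an RSA private key reduces to factoring the public modulus.

```python
# Toy RSA with tiny primes; real deployments use 2048-bit+ moduli,
# which classical factoring cannot handle but Shor's algorithm could.
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                      # private exponent (Python 3.8+)

msg = 42
cipher = pow(msg, e, n)                  # public-key encryption

# An attacker who can factor n recovers the private key outright.
# Trial division stands in here for quantum factoring.
p_found = next(f for f in range(2, n) if n % f == 0)
q_found = n // p_found
d_found = pow(e, -1, (p_found - 1) * (q_found - 1))
assert pow(cipher, d_found, n) == msg    # plaintext recovered: 42
```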
Dr. Niranjan Hasabnis
(Principal Research Scientist, CodeMetal)
Bio: Dr. Niranjan Hasabnis is a Principal Research Scientist at CodeMetal, where he leverages his experience in formal methods, program analysis, and AI to solve problems in code transpilation. Niranjan brings years of research and engineering experience from both academia and industry, including companies such as Intel. At Intel, he explored applications of AI, ML, and formal-methods techniques to problems in compilers, high-performance computing (HPC), and software engineering. He implemented and open-sourced an autonomous system, named ControlFlag, that learns to detect programming errors in code. ControlFlag has been covered by several news outlets, including Communications of the ACM, VentureBeat, ZDNet, and TechRepublic.
Previously, Niranjan obtained his PhD in Computer Science from Stony Brook University, where he conducted research in program analysis, ML, and compilers. Niranjan has published in top-tier conferences such as NeurIPS, CGO, ASPLOS, and FSE. He regularly serves on the program committees of various conferences, including ICSE, FSE, and USENIX ATC. Niranjan has been a recipient of the Outstanding Paper Award at HPEC’24. He holds 11 patents in the areas of compilers, computer architecture, machine learning, and code optimizations.
Title of the Talk: Conquering the Code Abyss: Navigating the Perilous Minefield of LLM-Powered Translation
Abstract: Code transpilation, the process of transforming code from one programming language to another, is a long-standing and challenging research area with significant industrial applications. While recent advances in large language models (LLMs) have made this problem more accessible, several challenges persist. For example, beyond the obvious problem of the correctness of AI-generated code, questions about the ability of LLMs to reason about code complexity, code cost, and related properties remain open.
At CodeMetal, we specialize in deploying AI-based code transpilation pipelines for a diverse set of customers. In this presentation, I will share insights into the specific challenges we face and the innovative solutions we implement to overcome them. I will also incorporate relevant findings from my academic collaborations. The talk will conclude by outlining open research questions to guide future work in this field.
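One widely used way to gain confidence in translated code (a general technique, not necessarily CodeMetal's pipeline) is differential testing: run the original and the translated function on the same randomized inputs and compare outputs. A minimal Python sketch, with both functions written here purely as stand-ins:

```python
import random

def reference_sum_of_squares(xs):          # original source function
    return sum(x * x for x in xs)

def transpiled_sum_of_squares(xs):         # stand-in for an LLM-produced translation
    total = 0
    for x in xs:
        total += x ** 2
    return total

def differential_test(f, g, trials=1000, seed=0):
    """Compare f and g on random inputs; return a counterexample or None."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        if f(xs) != g(xs):
            return xs                      # divergence found
    return None                            # no divergence observed

assert differential_test(reference_sum_of_squares, transpiled_sum_of_squares) is None
```

Agreement on thousands of random inputs builds confidence but never proves equivalence, which is one reason formal methods remain relevant alongside LLMs in this space.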

Prof. Tara Javidi
(University of California, San Diego)

Prof. Anant Sahai
(University of California, Berkeley)
Bio: Anant Sahai is currently the Qualcomm Chair Professor in Berkeley’s Electrical Engineering and Computer Sciences (EECS) Department, and is also a part-time Visiting Faculty Researcher at Google. After graduating with his PhD from MIT, and before joining the Berkeley faculty, he was on the theoretical/algorithmic side of a team at the startup Enuvis, Inc. He has previously served as the Treasurer for the IEEE Information Theory Society. He has coordinated machine learning efforts for SpectrumX, the NSF’s Center for Spectrum Innovation, and is also very involved with the data engineering efforts there. At Berkeley, he regularly teaches the main deep learning course.
His research interests span machine learning, wireless communication, information theory, signal processing, and decentralized control — with a particular interest at the intersections of these fields. Within wireless communication, he is particularly interested in Spectrum Sharing as well as very-low-latency ultra-reliable wireless communication protocols for control. He is also interested in the foundations of machine learning, particularly as it pertains to why overparameterized models do or do not work. Recently, he has also become quite interested in in-context learning in modern ML models.
Title of the Talk: How traditional systems theory-inspired models shed light on emergent capabilities of LLMs
Abstract: By looking at interleaved traces from stochastic linear systems, we can create a playground that sheds light on the inductive biases and training dynamics of the kinds of transformer models that underlie LLMs. In relatively tiny models (a few million parameters, as distinct from the billions to trillions involved in frontier LLMs), we can show LLM-type phenomena such as in-context learning, emergence (when an ability is absent for a long stretch of training but then suddenly shows up), differential emergence (when different abilities emerge at different points), the effect of scale (emergence can happen sooner in larger models than in smaller ones), and even transitions from “in-context learning” to “in-weights learning.” And because our playground is mathematical and we have insights from systems theory to lean on, we can also discover new phenomena.
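To make the playground concrete, here is a minimal NumPy sketch of the kind of training data the abstract describes: short traces of x_{t+1} = A x_t + w_t from several randomly drawn stable linear systems, interleaved into one sequence. The dimensions, segment lengths, and noise level are illustrative guesses, not the speaker's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_stable_system(dim: int) -> np.ndarray:
    """Draw a random state matrix A, rescaled so the dynamics are stable."""
    A = rng.normal(size=(dim, dim))
    return 0.9 * A / np.abs(np.linalg.eigvals(A)).max()  # spectral radius 0.9

def trace(A: np.ndarray, steps: int, noise_std: float = 0.1) -> np.ndarray:
    """Roll out x_{t+1} = A x_t + w_t from a random initial state."""
    x = rng.normal(size=A.shape[0])
    out = [x]
    for _ in range(steps):
        x = A @ x + noise_std * rng.normal(size=x.shape)
        out.append(x)
    return np.stack(out)

# Interleave short segments from several independent systems into a
# single training stream for a small next-state-prediction transformer.
systems = [random_stable_system(dim=2) for _ in range(3)]
segments = [trace(A, steps=10) for A in systems for _ in range(2)]
order = rng.permutation(len(segments))
training_sequence = np.concatenate([segments[i] for i in order])
print(training_sequence.shape)  # (66, 2)
```

A small transformer trained to predict the next state from such a stream must infer each segment's dynamics on the fly, which is the in-context-learning behavior the talk examines.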
Important Deadlines
| Milestone | Deadline |
| --- | --- |
| Full Paper Submission | 21st November 2025 |
| Acceptance Notification | 3rd December 2025 |
| Final/Camera-ready Paper Submission | 22nd December 2025 |
| Early Bird Registration | 11th December 2025 |
| Presentation Submission | 28th December 2025 |
| Conference | 5th - 7th January 2026 |
Announcements
- A Best Paper Award will be given for each track.
- Conference Record no-
