KEYNOTE TALK SERIES


                      Prof. Shahram Latifi

                       (University of Nevada, Las Vegas)

Bio: Shahram Latifi received his M.S. and Ph.D. degrees in Electrical and Computer Engineering from Louisiana State University in 1986 and 1989, respectively. He is a Professor of Electrical Engineering at the University of Nevada, Las Vegas (UNLV), where he also serves as Co-Director of the Center for Information Technology and Algorithms (CITA). For nearly four decades, Dr. Latifi has designed and taught a wide range of undergraduate and graduate courses spanning Computer Science, Computer Engineering, and Electrical Engineering. He is an internationally recognized educator and researcher who has delivered invited keynotes, plenary lectures, and seminars on Machine Learning, Artificial Intelligence, and Information Technology across the globe. Dr. Latifi has authored more than 300 technical publications in networking, AI/ML, cybersecurity, image processing, biometrics, fault-tolerant computing, parallel processing, and data compression. His research has been supported by major federal agencies and industry leaders, including NSF, NASA, DOE, DoD, Boeing, and Lockheed Martin. He has held several prominent leadership roles, including Associate Editor of the IEEE Transactions on Computers (1999–2006), IEEE Distinguished Speaker (1997–2000), Co-founder and Chair of the IEEE International Conference on Information Technology (2000–2004), and Founder and Chair of the International Conference on Information Technology – New Generations (2005–present). Dr. Latifi is the recipient of numerous research awards, most recently the Barrick Distinguished Research Award (2021). In 2020, he was recognized among the top 2% of researchers worldwide, according to the Stanford/Elsevier global citation database. He is a Fellow of the IEEE (elected 2002) and a Registered Professional Engineer in the State of Nevada.
 
Title of the Talk: AI at the Crossroads: Power, Risk, and the Path to Responsible Intelligence
 

Abstract: Over the past two decades, AI has advanced at an extraordinary pace. Breakthroughs in Deep Learning, Generative Adversarial Networks, Transfer Learning, and Large Language Models have accelerated progress and transformed nearly every sector — including education, healthcare, aerospace, manufacturing, security, e-commerce, and the arts. But alongside these achievements come serious concerns. How do we ensure training data is fair and unbiased? How do we protect privacy in increasingly data-driven systems? And most importantly, how do we maintain human control over technologies that are becoming more autonomous? In this talk, I will present a concise overview of AI, Machine Learning (ML), and Deep Learning (DL). I will highlight the challenges not only in building general-purpose AI but, more urgently, in developing AI systems that are safe, transparent, and trustworthy. I will also discuss current national and international initiatives aimed at establishing Responsible AI practices.


                      Prof. Danijela Cabric

                    (University of California, Los Angeles)

Bio: Danijela Cabric is a Professor in the Electrical and Computer Engineering Department at the University of California, Los Angeles. She received her M.S. from the University of California, Los Angeles in 2001 and her Ph.D. from the University of California, Berkeley in 2007, both in Electrical Engineering. In 2008, she joined UCLA as an Assistant Professor, where she heads the Cognitive Reconfigurable Embedded Systems lab. Her current research projects include novel radio architectures, signal processing, communications, machine learning, and networking techniques for spectrum sharing, millimeter-wave, massive MIMO, and IoT systems. She is a principal investigator in three large cross-disciplinary multi-university centers: SRC/JUMP ComSenTer, CONIX, and NSF SpectrumX. Prof. Cabric was a recipient of the Samueli Fellowship in 2008, the Okawa Foundation Research Grant in 2009, the Hellman Fellowship in 2012, the National Science Foundation Faculty Early Career Development (CAREER) Award in 2012, and Qualcomm Faculty Awards in 2020 and 2021. Prof. Cabric is an IEEE Fellow.

Title of the Talk: Meeting 6G demands for energy efficiency and access to mid-band spectrum

Abstract: Each generation of cellular technology has taken a major step forward, introducing new technologies to increase the performance of networks and devices and to support ever richer services. In 5G, the telecommunications industry focused in particular on improving the user experience, such as data rates and latency. The key objectives of 6G, however, have shifted significantly: operators are requesting improvements in operating costs and energy efficiency, and access to mid-band spectrum, while embedding and leveraging AI/ML technology. This talk will discuss technologies and architectures for energy-efficient mobile and fixed wireless access using new antenna array designs, beamforming modes, ultra-wideband multiple access, and scalable processing architectures to support the different coverage and connectivity requirements of 6G cellular and massive IoT connectivity. It will also explore solutions for enabling mid-band spectrum sharing between cellular networks and incumbents, including radars and satellites.


                           Prof. Raj Jain

                      (Washington University, Saint Louis) 

Bio: Raj Jain is currently the Barbara J. and Jerome R. Cox, Jr., Professor of Computer Science and Engineering at Washington University in St. Louis. Dr. Jain is a Life Fellow of IEEE, a Fellow of ACM, a Fellow of AAAS, and a recipient of the 2018 James B. Eads Award from the St. Louis Academy of Science and the 2017 ACM SIGCOMM Lifetime Achievement Award. Previously, he was a co-founder of Nayna Networks, Inc., a Senior Consulting Engineer at Digital Equipment Corporation in Littleton, Massachusetts, and then a Professor of Computer and Information Sciences at Ohio State University in Columbus, Ohio. With 47,000+ citations on Google Scholar, he is among the most highly cited authors in computer science. Further information is available at http://www.cse.wustl.edu/~jain/

Title of the Talk: Quantum Networking: Challenges and Research Opportunities

Abstract: Quantum Computing is at the top of the hype curve and offers unique research opportunities. It is well known that Quantum Computing will break current security methods, and post-quantum cryptography has already been standardized. Both industry and governments are investing heavily in this upcoming technology, but significant challenges remain. In this talk, I will highlight the opportunities for research in quantum networking.

                           


                      Dr. Niranjan Hasabnis

                 (Principal Research Scientist, CodeMetal)

Bio: Dr. Niranjan Hasabnis is a Principal Research Scientist at CodeMetal, where he leverages his experience in formal methods, program analysis, and AI to solve problems in code transpilation. Niranjan brings years of research and engineering experience from both academia and companies such as Intel. At Intel, he explored applications of AI, ML, and formal-methods techniques to problems in compilers, high-performance computing (HPC), and software engineering. He implemented and open-sourced an autonomous system, named ControlFlag, that learns to detect programming errors in code. ControlFlag has been covered by several news outlets, including Communications of the ACM, VentureBeat, ZDNet, and TechRepublic.

Previously, Niranjan obtained his PhD in Computer Science from Stony Brook University, where he conducted research in program analysis, ML, and compilers. Niranjan has published in top-tier conferences such as NeurIPS, CGO, ASPLOS, and FSE. He regularly serves on the program committees of various conferences, including ICSE, FSE, and USENIX ATC. Niranjan has been a recipient of the Outstanding Paper Award at HPEC’24. He holds 11 patents in the areas of compilers, computer architecture, machine learning, and code optimizations.

Title of the Talk: Conquering the Code Abyss: Navigating the Perilous Minefield of LLM-Powered Translation

Abstract: Code transpilation, the process of transforming and compiling code between different programming languages, is a long-standing and challenging research area with significant industrial applications. While recent advances in large language models (LLMs) have made this problem more accessible, several challenges persist. For example, beyond the obvious problem of the correctness of AI-generated code, questions about the ability of LLMs to reason about properties such as code complexity and cost remain open.

At CodeMetal, we specialize in deploying AI-based code transpilation pipelines for a diverse set of customers. In this presentation, I will share insights into the specific challenges we face and the innovative solutions we implement to overcome them. I will also incorporate relevant findings from my academic collaborations. The talk will conclude by outlining open research questions to guide future work in this field.
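One common way to probe the correctness problem mentioned above is differential testing: run the original and the transpiled code on the same inputs and flag any divergence. The sketch below is only an illustration of that general idea, not CodeMetal's actual pipeline; the names `src_gcd`, `transpiled_gcd`, and `differential_test` are hypothetical, and in practice the candidate would be generated code in the target language, executed in a sandbox.

```python
import math
import random

def src_gcd(a, b):
    """Reference implementation in the source language (here, plain Python)."""
    while b:
        a, b = b, a % b
    return a

def transpiled_gcd(a, b):
    """Stand-in for an LLM-transpiled candidate (hypothetical)."""
    return math.gcd(a, b)

def differential_test(f_src, f_dst, n=1000, seed=0):
    """Compare the two implementations on random inputs; any mismatch
    is a concrete counterexample showing the transpilation is wrong."""
    rng = random.Random(seed)
    for _ in range(n):
        a, b = rng.randrange(0, 10**6), rng.randrange(0, 10**6)
        if f_src(a, b) != f_dst(a, b):
            return (a, b)          # counterexample found
    return None                    # no divergence observed

print(differential_test(src_gcd, transpiled_gcd))  # None
```

Passing such a test only raises confidence; it is not a proof of equivalence, which is one reason formal methods remain relevant alongside LLMs.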


                         Prof. Tara Javidi

                    (University of California, San Diego)

Bio: Tara Javidi studied electrical engineering and computer science at the University of Michigan, Ann Arbor. She joined the University of California, San Diego, in 2005, where she is currently the inaugural holder of the Jerzy (George) Lewak Chair and a Professor of Electrical and Computer Engineering, with a joint appointment in the Halicioglu Data Science Institute. At UCSD, she is a founding co-director of the UCSD Center for Machine-Intelligence, Computing and Security, and a co-PI of the National Science Foundation (NSF) Institute for Learning-enabled Optimization at Scale (TILOS).
 
Tara Javidi is a Fellow of the IEEE. She previously served as Editor-in-Chief of the IEEE Journal on Selected Areas in Information Theory (2022–2024), on the Board of Governors of the IEEE Information Theory Society (elected member 2018–2023, ex-officio 2024), as a Distinguished Lecturer of the IEEE Information Theory Society (2017–2018), and as a Distinguished Lecturer of the IEEE Communications Society (2019–2020). She and her former PhD students are recipients of the 2021 IEEE Communications Society & Information Theory Society Joint Paper Award. She has received numerous awards recognizing her research, educational, DEI, and leadership contributions. Tara is also the founding CSO/CTO of KavAI, a startup that develops an adaptive and integrated sensing and AI platform to scale intelligence to the industrial and large-scale operations that most need it.
 
Title of the Talk: Physical Attention and Active Inference for Physically Embedded AI at Scale
 
Abstract: Beyond existing applications of AI, there is a critical need for artificial intelligence models and methodologies that can accurately and proactively interpret the physical world and help us monitor our increasingly complex and large-scale industrial footprint on the planet. To achieve this, we need to simultaneously acquire data across large physical spaces and actively interpret, in time, the diverse range and resolution of sensory inputs that make up the physical world. In this talk, I will first discuss how this can be achieved by an integrated approach to connected, embodied, and generative AI. I will then discuss how advances in integrated communication and computing platforms, including ubiquitous connectivity, integrated sensing and communication, and AI-enabled embedded devices, bring this to reality at scale.


                        Prof. Anant Sahai

                    (University of California, Berkeley)

Bio: Anant Sahai is currently the Qualcomm Chair Professor in Berkeley’s Electrical Engineering and Computer Sciences (EECS) Department, and is also a part-time Visiting Faculty Researcher at Google.  After graduating with his PhD from MIT, and before joining the Berkeley faculty, he was on the theoretical/algorithmic side of a team at the startup Enuvis, Inc. He has previously served as the Treasurer for the IEEE Information Theory Society. He has coordinated machine learning efforts for SpectrumX, the NSF’s Center for Spectrum Innovation, and is also very involved with the data engineering efforts there. At Berkeley, he regularly teaches the main deep learning course.
His research interests span machine learning, wireless communication, information theory, signal processing, and decentralized control — with a particular interest at the intersections of these fields. Within wireless communication, he is particularly interested in Spectrum Sharing as well as very-low-latency ultra-reliable wireless communication protocols for control. He is also interested in the foundations of machine learning, particularly as it pertains to why overparameterized models do or do not work. Recently, he has also become quite interested in in-context learning in modern ML models.

Title of the Talk: How traditional systems theory-inspired models shed light on emergent capabilities of LLMs

Abstract: By looking at interleaved traces from stochastic linear systems, we can create a playground that sheds light on the inductive biases and training dynamics of the kinds of transformer models that underlie LLMs. In relatively tiny models (a few million parameters, as distinct from the billions to trillions in frontier LLMs), we can show LLM-type phenomena such as in-context learning, emergence (when an ability is absent for a long stretch of training but then suddenly appears), differential emergence (when different abilities emerge at different points), the effect of scale (emergence can happen sooner in larger models than in smaller ones), and even transitions from "in-context learning" to "in-weights learning." But because our playground is mathematical and we have insights from systems theory to lean on, we can also discover new phenomena.
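The "interleaved traces" playground can be pictured with a toy data generator. The sketch below is my own illustration of the setup described in the abstract, not the speaker's code; the names `sample_system`, `trace`, and `interleave` are hypothetical. It rolls out x_{t+1} = A x_t + w_t for several random stable systems and interleaves the trajectories into one sequence, the kind of mixed context an in-context learner must disentangle.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_system(d=2, rho=0.9):
    """Sample a random stable system matrix A (spectral radius scaled to rho)."""
    A = rng.normal(size=(d, d))
    A *= rho / max(abs(np.linalg.eigvals(A)))
    return A

def trace(A, T=20, noise=0.1):
    """Roll out x_{t+1} = A x_t + w_t from a random initial state."""
    x = rng.normal(size=A.shape[0])
    xs = [x]
    for _ in range(T - 1):
        x = A @ x + noise * rng.normal(size=A.shape[0])
        xs.append(x)
    return np.stack(xs)            # shape (T, d)

def interleave(traces):
    """Round-robin interleave several trajectories into one token sequence."""
    T = min(len(t) for t in traces)
    return np.stack([t[i] for i in range(T) for t in traces])

systems = [sample_system() for _ in range(3)]
seq = interleave([trace(A) for A in systems])
print(seq.shape)  # (60, 2): 3 systems x 20 steps, 2-dim states
```

A transformer trained to predict the next state from such sequences must implicitly identify which system each token came from, which is what makes this a clean testbed for studying in-context learning.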

Important Deadlines

Full Paper Submission: 21st November 2025
Acceptance Notification: 3rd December 2025
Final/Camera-ready Paper Submission: 22nd December 2025
Early Bird Registration: 11th December 2025
Presentation Submission: 28th December 2025
Conference: 5 - 7 January 2026


Announcements

  • Best Paper Award will be given for each track.
  • Conference Record no-