CORPORATE KEYNOTE SERIES
Ernesto Zamora Ramos
(Intel Corporation, USA)
Bio: Ernesto Zamora Ramos is a deep learning software engineer at Intel Corporation. His work entails creating cutting-edge innovations across Intel’s broad silicon offering of accelerators, both general-purpose and specialized, for artificial intelligence, neural network training, and inference. Dr. Zamora received his Ph.D. in Computer Science from the University of Nevada, Las Vegas in 2017 for his work in computer vision and artificial neural networks. He worked under NSF grants on machine learning applications for solar panel energy optimization, and he has several publications in artificial intelligence and computer vision in IEEE and other journals and conferences.
Title of the talk: The Prevalence of Artificial Neural Networks in Everyday Life
Abstract: As technology and electronics permeate most aspects of our lives today, many people have access to an electronic device that can perform some prediction or reach a decision, either on the device itself or in the cloud. We would be surprised to know how many of these decisions are reached by applying some form of artificial neural network that has replaced a classical statistical algorithm. State-of-the-art neural networks have managed to surpass human capabilities at their tasks, and the best products apply them to stay competitive. This talk will discuss how neural networks have been replacing many classical and statistical algorithms, from pattern recognition to image processing, and can be found even in the least likely applications. It will also discuss current industry challenges in the implementation, training, and deployment of neural networks, current solutions, and research opportunities to overcome these challenges.
Daniel Bryce
(Smart Information Flow Technologies (SIFT), USA)
Bio: Dr. Daniel Bryce is a Principal Research Scientist at Smart Information Flow Technologies (SIFT), LLC. Dr. Bryce conducts research on Human-Aware AI systems that specifically account for the human in the loop. Human-Aware AI addresses the uncertainty and imprecision inherent to man-machine communication. Within this area, he has conducted research on knowledge representation, automated planning, verification, and machine learning. His primary contributions relate to synthesizing plans when humans provide incomplete, incorrect, and changing constraints.
Dr. Bryce received his Ph.D. in Computer Science from Arizona State University, and his dissertation, entitled “Scalable Planning Under Uncertainty”, was awarded the ICAPS Distinguished Dissertation award. Dr. Bryce held positions at NASA, Honeywell Laboratories, SRI International, and Utah State University prior to joining SIFT. His work has been applied to unmanned vehicles, manufacturing, healthcare, systems biology, and synthetic biology. His work has been funded by NASA, NSF, AFRL, OSD, ONR, and DARPA.
Title of the talk: Human-Aware AI
Abstract: AI systems are increasingly present in a number of applications. Humans are involved in both AI model creation and use. Human imperfection, either through hand engineering or data set selection, leads to errors in modeling and interpretation of the results. Human-Aware AI systems can anticipate these errors and provide affordances to mitigate them. Such systems must not only reason about the application problem, but also the uncertainty due to the human in the loop.
I will focus on Human-Aware AI in automated planning, where the task is to construct a series of actions to achieve a goal. Humans (or machine learning techniques) model not only how the actions transform the world, but also the goal itself. Humans often model actions incorrectly because they misunderstand the application, because the application requirements change over time, or through simple human error, in the same way that humans introduce bugs into computer programs. Similarly, humans can state their planning goal too narrowly or too broadly, resulting in solution plans that fail to meet their implicit preferences. Human-Aware AI planning systems must relax the common assumption that the model is complete and correct. I will discuss strategies for relaxing models, learning updates to improve models, and interacting with humans to reach their intended solutions.
Mrinal K. Das
(Texas Instruments, USA)
Bio: Dr. Mrinal K. Das is a globally recognized expert in the area of high-power semiconductor devices. He studied electrical engineering and Plan II liberal arts dual honors at The University of Texas at Austin, where he received his BSEE and BA degrees in 1993. He pursued graduate studies and research at Purdue University, where he received his MSEE and PhD in 1995 and 1999, respectively. His doctoral research involved the fundamental understanding of the SiC MOS structure to enable advanced power MOSFETs. In his first 13 years at Cree Inc., Mrinal’s research advanced the evolution of many SiC power devices, especially the power MOSFET, from process development and device design to characterization and reliability. In his final 5 years at Cree, his focus shifted to marketing, where he worked with customers and partners to catalyze market acceptance. With the SiC MOSFET rapidly becoming a mature and accepted technology in power electronics, Dr. Das has moved on to his current role as a technology expert in Texas Instruments’ vaunted Kilby Labs R&D division. At TI, he is addressing the next phase of SiC MOSFET evolution: optimizing the surrounding ecosystem. Dr. Das has authored 60+ publications, presented 12 invited seminars, and received 25+ US patents. He regularly provides critical review of submitted manuscripts for leading journals, conferences, and symposia.
Title of the talk: From Watts to Electrons: How a Semiconductor Revolution is Ushering in a More-Electric World
Abstract: With the ever-growing demand for power and the rapid consumption of generating resources, modern power electronic systems have steadily gravitated toward higher-efficiency systems that help curb demand, as well as lighter, more compact systems that enable insertion into traditionally non-electronic solutions. The former is well exemplified by the success of solar power, where higher system efficiencies have allowed more power to be delivered to the consumer, with a growing excess of power that can be sold back to the grid. The latter is illustrated by the growing success of electric vehicles, where power electronics and electric machines now provide the propulsion instead of the century-old internal combustion engine. This is already leading to fast electric charging stations that will eventually render their petrol-based counterparts obsolete. This efficient, more-electric world is being enabled by a power semiconductor revolution occurring at the heart of these modern power systems: wide bandgap semiconductors like silicon carbide and gallium nitride are becoming a viable alternative to the decades-old silicon technology that is simply running out of steam. This presentation begins with the demands at the top (global) level, focusing on watts, then works all the way down to the electrons in the power semiconductor devices themselves, concluding with open areas of research that should stimulate thought among corporate researchers and academicians alike across a wide range of technical fields.
Peter Shirley
(Nvidia Corporation, USA)
Bio: Peter Shirley is a Principal Research Scientist at Nvidia Corporation, USA. He also holds the position of Adjunct Professor at the University of Utah. He received his BA in Physics from Reed College and a PhD in Computer Science from the University of Illinois. He has held academic positions at Indiana University, Cornell University, and the University of Utah. He has authored a number of books on computer graphics and ray tracing.
Title of the talk: State of the field: GPU computing
Abstract: Computer graphics is a rare example where a special-purpose chip (the GPU) has never been displaced by the increasingly powerful general-purpose chip (the CPU). Now the GPU industry is booming more than ever and is being heavily influenced both by ecosystem changes and by non-graphics applications, particularly AI. Trends and uncertainties related to AI, VR, cloud gaming, scientific computing, and 5G will be covered, and their implications for the future discussed.
Hayden Melton
(Refinitiv)
Bio: Hayden Melton is Quantitative Trading Proposition Manager at Refinitiv. His role involves analysis of trading dynamics and behavior, and technology-related research and design to support the market-leading foreign exchange (FX) trading venues that Refinitiv operates, which average ~$450 billion in trades daily. Dr. Melton holds a Bachelor of Engineering with first-class honors from the University of Auckland, New Zealand, and a PhD in the area of empirical software engineering from Deakin University, Australia. His recent research is in the area of financial market design, with a particular focus on fairness in such markets. He has given seminars on this research at several R1 universities as well as at other institutions, is sole inventor on a number of pending and granted patents for electronic trading, and has published his current and past research in international, peer-reviewed conference proceedings and journals.
Title of the talk: A Market Operator’s Perspective on the Arms Race for Speed in Electronic Trading
Abstract: There has been much recent debate in the field of economics about the ‘arms race’ induced by the prevailing first-come, first-served (FCFS) resource allocation discipline in financial markets. Those who characterize the technology investments made by participants in pursuit of speed as an ‘arms race’ argue that these investments provide no social benefit and instead exclusively serve a flawed market design: FCFS. Examples of such investments include the construction of a $300 million ‘straight-line’ fiber communication connection between Chicago’s futures markets and New York’s equities markets that involved tunneling through mountains and beneath rivers to reduce transmission time from 17 to 13 milliseconds, its subsequent replacement by a chain of microwave towers between the two centers to save another 4.5 ms, a $300 million underwater transatlantic cable built specifically to save market participants single-digit milliseconds when transmitting data between London’s markets and New York’s, and so on. Those who do not view competition on speed in financial markets as wasteful point out: that notions of FCFS-ness are entrenched in Western culture, as evidenced by the old English proverb ‘the early bird gets the worm’ dating to around 1600 AD; that the fact milliseconds matter is an obvious consequence of computers operating at nanosecond timescales; and that computerization has demonstrably democratized financial markets and improved their efficiency. In this talk I’ll describe a carefully designed refinement to the resource allocation discipline implemented by the spot FX venue Refinitiv Matching that we think has successfully established some middle ground in the ‘arms race’ debate.
Hossam Fattah
(Microsoft Corporation, USA)
Bio: Hossam Fattah received his Ph.D. in Electrical and Computer Engineering from the University of British Columbia, Vancouver, Canada in 2003. In 2000, he received his Master of Applied Science in Electrical and Computer Engineering from the University of Victoria, Victoria, Canada. He completed his B.Sc. degree in Computers and Systems Engineering from Al-Azhar University, Cairo, Egypt in 1995.
Between 2004 and 2013, he worked in academia and industry, including Texas A&M University and Spirent Communications, USA, on wireless communication technology, cellular systems, and research and development for several networking and wireless standards and protocol stacks, including Zigbee, WiFi, WiMAX, CDMA, and 3G/4G/5G systems. Since 2013, he has been with Microsoft Corporation, USA, working on networking products and services for Windows and cloud networking technology.
He has contributed many technical publications to refereed conferences and journals, holds patents, and is a book author. He is a registered Professional Engineer with the Engineers and Geoscientists of British Columbia, Canada.
Title of the talk: 5G Narrowband Internet of Things (NB-IoT): 3GPP Protocols and Applications
Abstract: This talk presents the 3GPP technical specifications for the new 5G Narrowband Internet of Things (NB-IoT) technology based on the latest Releases 15 and 16. The talk covers details of the LTE protocol stack of an NB-IoT device, its architecture and framework, how devices function and communicate with the cellular infrastructure, and the supported features and capabilities. NB-IoT devices are one category of the Machine Type Communications (MTC) introduced in the LTE set of standards released by 3GPP. NB-IoT targets several goals for IoT devices, such as a radio interface optimized for IoT, low data rates, limited mobility support, guard-band operation, extended coverage, very low power consumption, and low module cost.
NB-IoT is designed to connect a large number of devices in a wide range of application domains, forming the so-called Internet of Things (IoT). Connected devices communicate through the cellular infrastructure. This technology is new within the 3GPP specifications and is part of the upcoming wireless technology known as 5G.
Shankar N. Swamy
(DCM Systems, USA)
Bio: Shankar’s experience spans both hardware and software. Most recently he has been working on a hardware accelerator and system software for navigating a swarm of drones. He is the chief architect of the on-board software system for the drone swarm, where he architected both the hardware that provides AI algorithm acceleration for navigation and the software stack. Previously he worked at Intel, AMD, and Boeing Computing Research, where he was involved with drivers for multiple generations of graphics cards, architected a texture cache, and worked on flight simulators, robotic system software, and a real-time OS kernel. This decade, in addition to the current project, he worked on a system that uses deep learning to optimize the physical design of integrated circuits, and on an IoT cloud-based medical diagnostic system. Shankar went to school at the Indian Institute of Technology, Madras and at Indiana University, Bloomington.
Title of the talk: Emerging Paradigm of Software Architectures
Abstract: Soft computing, Internet of Things devices, the extensive application of artificial intelligence algorithms, and the hardware that makes all of these feasible have imposed new requirements on contemporary software. The object-oriented paradigm was the automatic choice for generations, but a purely object-oriented approach is becoming inadequate in an increasing number of contemporary systems. We need new approaches to meet the demands of the current era. This in turn requires new thinking in hardware/software interfaces, system-software architectures, development, validation, and even software delivery and updates. The good news is that language features are scaling: type systems are getting better, compile-time validations are much richer, and languages support multiple programming paradigms and allow for their seamless integration in the same software. Compilers are keeping up with the standards much more quickly than ever before. The development and validation tools are also scaling well with the demands of the new era of software development. We will visit these novelties, along with some areas of development that could use more help.
I will use examples from my experience architecting the on-board navigation management system for drone swarms to illustrate my points: how the requirements drove us to approaches beyond pure object-oriented architecture and beyond traditional system validation methods, and how the demands of reinforcement learning and deep learning, along with the strict latency requirements of the sensor-actuator systems, influenced our decisions and forced us to look beyond the traditional ways.
Full Paper Submission: 20th November 2019
Acceptance Notification: 12th December 2019
Early Bird Registration: 20th December 2019
Final Paper Submission: 22nd December 2019
Presentation Submission: 31st December 2019
Conference: 6 - 8 January 2020
• Conference Proceedings will be submitted for publication to the IEEE Xplore Digital Library
• Best Paper Award will be given for each track
• There will be one workshop on IoT on Jan 8, 2020
• Conference Record No.: 47524