Research

Lab Research Overview

Welcome to the RISING Research page. This page provides a brief synopsis of some of the research projects in the lab. Note that the projects here only provide a sampling: research interests of the lab encompass the spectrum of safe, secure, and reliable Internet-of-Things systems, cross-cutting hardware, software, and applications. See our Publications page for a more comprehensive idea of the research in the lab.


Security Assurance in Autonomous Vehicular Communications

How would you feel if a hacker could remotely push a button that would cause your vehicle to veer off the highway into a ditch? Research over the last decade has shown that not only is this possible, it is depressingly easy for a trained hacker to do. The reason is that as vehicles are infused with electronics and software to support and create various autonomous features, they are starting to look more like computers than like traditional cars. That also means they are inheriting the problem that has plagued computers for decades: cyber-security. The only difference is that these are computers driving at 70 miles per hour with people inside. Cyber-attacks on these systems can cause catastrophic accidents, cost human lives, and bring down transportation infrastructure. As we add autonomous features to vehicles and move toward self-driving cars, we are sorely in need of a robust vehicular design that is resilient to cyber-attacks.

A key feature of emergent vehicles is connectivity. Vehicles can “talk” to other vehicles as well as to the transportation infrastructure through sensors and inter-vehicular communications (called V2X) to enable smooth and efficient traffic flow and infrastructure utilization. Connected autonomous vehicle (CAV) applications being designed today include platooning, cooperative collision detection, cooperative on-ramp merging, etc. Connectivity, however, is also one of the most vulnerable components of autonomous vehicles and one of the crucial entry points for cyber-attacks. A key feature of such attacks is that they can be conducted without requiring an adversary to actually hack into the hardware, software, or physical components of the target vehicle. An adversary can simply send misleading or even malformed communications to “confuse” the communication or sensor systems.

The goal of this project is to address this crucial problem of cyber-resiliency of CAV applications. A key result from the team is a unique AI-based resiliency architecture against arbitrary cyber-attacks on perception channels (e.g., communication and sensor channels). To our knowledge, this is the first (and so far, the only) comprehensive resiliency framework for connected vehicle applications against arbitrary cyber-attacks. The architecture exploits recent advances in AI and machine learning to create a unique, on-board predictor to detect, identify, and respond to malicious communications and sensory subversions. A unique feature of the approach is that it can provide assured resiliency against a large class of adversaries, including unknown attacks. We have instantiated the approach on several CAV applications and developed an extensive experimental evaluation methodology for demonstrating such resiliency. A key outcome of the research is a resiliency framework, named RACCON, for Cooperative Adaptive Cruise Control. Details of the RACCON framework are available here.
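
To make the predictor-based detection idea concrete, the following Python sketch illustrates how an on-board monitor might flag implausible V2X messages in a cooperative adaptive cruise control setting. The message fields, the constant-velocity predictor, and the residual threshold are illustrative assumptions for exposition only; they are not the actual RACCON models, and the sketch keeps only the detection step, omitting attack identification and response.

    # Hypothetical sketch of predictor-based attack detection for CACC.
    # All names, thresholds, and the simple kinematic model are assumptions,
    # not the RACCON implementation.
    from dataclasses import dataclass

    @dataclass
    class V2XMessage:
        speed: float         # preceding vehicle's reported speed (m/s)
        acceleration: float  # preceding vehicle's reported acceleration (m/s^2)

    class KinematicPredictor:
        """Predicts the preceding vehicle's speed from its last trusted state."""
        def __init__(self):
            self.last_speed = None

        def predict(self) -> float:
            # Constant-velocity prediction; a learned model would go here.
            return self.last_speed

        def update(self, speed: float) -> None:
            self.last_speed = speed

    class ResiliencyMonitor:
        """Flags V2X messages that deviate too far from the on-board prediction."""
        def __init__(self, predictor: KinematicPredictor, threshold: float = 2.0):
            self.predictor = predictor
            self.threshold = threshold

        def check(self, msg: V2XMessage) -> bool:
            if self.predictor.last_speed is None:
                self.predictor.update(msg.speed)
                return True  # no history yet: accept and start tracking
            residual = abs(msg.speed - self.predictor.predict())
            if residual > self.threshold:
                return False  # suspected attack: fall back to sensor-only control
            self.predictor.update(msg.speed)
            return True

    monitor = ResiliencyMonitor(KinematicPredictor())
    print(monitor.check(V2XMessage(speed=25.0, acceleration=0.0)))  # True
    print(monitor.check(V2XMessage(speed=40.0, acceleration=0.0)))  # False: implausible jump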

Another recent research thrust in this area is the development of exploration platforms to enable early and comprehensive understanding of vehicular functionality. We have been working on a variety of platforms for exploring automotive functionality. One key contribution is a platform called ViVE, which enables architects to explore system-level use cases in automotive systems, with implications for optimization and security. Details of ViVE are available here.


Assured Wireless Communication

Imagine what would happen if all wireless communication stopped suddenly. There would be mass panic since many people have never known any other way of communicating. There would also be catastrophic failure of many systems and processes that have been built on top of a wireless communication system, e.g., autonomous vehicles, navigation systems, and increasingly, smart devices in the home, office, and hospitals. In other words, the consequences of a breakdown of the wireless communications plane are severe.

One can argue that this scenario is unlikely because of the standards process by which wireless communication devices and networks are designed. On the other hand, while the standards process produces efficient protocols, rigorous mathematical assurance of their safe and predictable behavior in diverse, complex, distributed deployment scenarios has been lacking. Consequently, we face the very real possibility of (rare) unexpected protocol behaviors that result in failures, as well as unknown vulnerabilities that can be exploited by unfriendly third parties. Given the ease of using powerful software-defined radios, the potential for unintentional or intentional malicious behaviors also increases. As we move toward 5G and future networks with dynamic spectrum allocation, the set of potential failure modes multiplies. For instance, unintentional spectrum squatting, or intentional attacks that force legitimate users to vacate spectrum or trigger their devices into conservative protocol behaviors that use less bandwidth, become possible.

This project focuses on addressing this crucial problem. The project will develop a flexible, powerful distributed platform for monitoring deployed networks, together with analysis tools that use the collected data to certify safe and robust protocol operation in the field. The project is funded by the National Science Foundation under the “Formal Methods in the Field” program and is titled “DOPaMINe: Distributed Opportunistic Platform for Monitoring In-Situ Networks.” The research will be performed in close collaboration between the University of Florida and Portland State University. The proposed approach involves a tight mix of analytic and experimental components. The analytic work extends existing formal verification methods to certify assurance of trustworthy execution of deployed protocols. The experimental work utilizes traces from deployed networks to demonstrate the viability of the analysis. The anticipated result will be a powerful assurance infrastructure that certifies the robustness of wireless communication systems against failures and malicious exploitation, and detects vulnerabilities and non-compliance of system executions in real time. More details of the project are available in the project page maintained at Portland State University.
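
As a concrete illustration of what in-field compliance checking could look like, the following Python sketch replays a logged trace of protocol events and checks one simple safety rule offline. The event schema and the particular rule (a device must stop transmitting within a fixed number of steps of an “evacuate” command) are hypothetical examples for exposition, not the project's specification language or monitoring platform.

    # Minimal sketch of trace-based compliance monitoring (illustrative only).
    MAX_DELAY = 3  # assumed compliance window, in trace steps

    def check_evacuation(trace):
        """Return the steps at which a device kept transmitting too long
        after being told to vacate the spectrum."""
        violations = []
        pending_since = None  # step at which an 'evacuate' command was seen
        for step, event in enumerate(trace):
            if event == "evacuate":
                pending_since = step
            elif event == "transmit" and pending_since is not None:
                if step - pending_since > MAX_DELAY:
                    violations.append(step)
            elif event == "idle":
                pending_since = None  # device complied; reset the window
        return violations

    trace = ["transmit", "transmit", "evacuate", "transmit", "transmit",
             "transmit", "transmit", "idle"]
    print(check_evacuation(trace))  # [6]: still transmitting 4 steps after 'evacuate'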

Resilient System-on-Chip Architecture

System-on-Chip (SoC) design involves the composition of pre-designed hardware blocks into a single integrated circuit. SoCs are widely deployed in domains such as healthcare, cyber-physical systems, and automotive. A modern SoC design includes a variety of processor cores, memory systems, cryptographic blocks, communication modules (e.g., wireless and LTE modules), debug and peripheral interfaces (e.g., JTAG, HDMI, etc.), power management units, and many others. The individual hardware blocks, referred to as intellectual properties (“IPs” for short), are procured through a global supply chain of IP vendors. System functionality is implemented through communication among IPs over a variety of system interconnects. SoC designs promise much faster design time, robustness, and configurability than custom hardware. Unsurprisingly, SoC designs have seen explosive proliferation in electronic computing system architectures, particularly as we move into the era of hundreds of billions of devices.

A key concern with the proliferation of SoC designs is security. As we move towards the Internet-of-Things, these systems continually track, analyze, and communicate some of our most intimate personal information, including sleep patterns, health, location, etc. In addition to private end-user information, sensitive information in modern SoC designs includes security-critical parameters introduced during system design, e.g., fuses, cryptographic and Digital Rights Management (DRM) keys, firmware execution flows, on-chip debug modes, etc. Sensitive information or data in a modern computing system is collectively referred to as “assets”. Unauthorized or malicious access to these assets can result in leakage of company trade secrets for device manufacturers or content providers, identity theft for end users, subversion of national security, and destruction of human life. It is vital to our well-being to ensure that these assets in computing devices are protected from unauthorized, malicious access.

The goal of this project is to enable a streamlined, disciplined approach for implementing and validating diverse security requirements in modern SoC designs. A key contribution of the project has been the establishment of an architectural framework for implementing a spectrum of security policies, including access control, information flow, time-of-check vs. time-of-use, etc. The framework introduces a centralized policy implementation IP that communicates with the various hardware modules in the SoC design to implement target policies. We demonstrated how the architecture can facilitate implementation of diverse security requirements (even in the presence of untrusted IPs) and streamline their validation. The approach holds the promise of ushering in a new “patchable” SoC architecture, i.e., systems whose hardware can be reconfigured and updated in the field just as we update software or firmware components today.
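
The access-control flavor of such policies can be summarized with a small, self-contained example. The Python sketch below encodes a deny-by-default access-control table of the kind a centralized policy IP might enforce; the IP names, asset names, and lifecycle states are hypothetical and are not the project's policy language.

    # Illustrative deny-by-default access-control policy (hypothetical names).
    POLICY = {
        # (requesting IP, asset, system state) -> allowed?
        ("crypto_engine", "drm_key",   "production"):    True,
        ("debug_port",    "drm_key",   "production"):    False,  # no key access via debug in the field
        ("debug_port",    "drm_key",   "manufacturing"): True,
        ("cpu_firmware",  "fuse_bank", "boot"):          True,
    }

    def access_allowed(ip: str, asset: str, state: str) -> bool:
        """Anything not explicitly whitelisted is rejected."""
        return POLICY.get((ip, asset, state), False)

    print(access_allowed("crypto_engine", "drm_key", "production"))  # True
    print(access_allowed("debug_port", "drm_key", "production"))     # False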


Firmware Validation

Recent years have seen a dramatic increase in the size and complexity of firmware in both custom and System-on-Chip (SoC) designs. Unlike custom hardware, system functionality implemented through firmware can be updated after deployment in response to bugs, performance limitations, or security vulnerabilities discovered in-field, or to adapt to changing requirements. Today, it is common to find more than a dozen hardware blocks (also referred to as “intellectual properties” or “IPs”) in a commercial SoC design implementing significant functionality through firmware that executes on diverse microcontrollers with different instruction set architectures, e.g., IA32, ARM™, 8051, etc., as well as proprietary custom instruction sets. Firmware implementations are particularly common for many core algorithms in machine learning systems, since these algorithms typically need to be adapted and updated in-field in response to evolving data patterns, workloads, and use cases throughout the lifetime of the system. Given this extensive use, it is increasingly crucial to ensure that firmware implementations are functionally correct, robust against various microarchitectural side channels, and protected against malicious security exploitation.

A key challenge with firmware validation is the need to comprehend its interaction with hardware as well as with other firmware in an SoC. For instance, the act of simply waking up a mobile or IoT SoC from a power-save mode to accept an environmental stimulus entails communication among the firmware of the radio, the power management unit, the CPU, etc. In general, firmware in any SoC IP operates in close interaction with hardware as well as with other firmware modules. Indeed, this illustrates one of the key distinctions between firmware and software: firmware is software running in one agent of a concurrent system that interacts with other agents. Furthermore, the interaction involves low-level interface code that is typically written manually. Finally, since firmware is developed in a resource-constrained environment, a significant component of firmware implementation entails managing these resources manually.

The goal of this project is to develop an automated framework for firmware validation that enables exploration of subtle hardware/firmware and firmware/firmware interactions while maintaining scalability and performance. Our approach involves innovative application of formal verification techniques for verification of specific hardware-firmware and firmware-firmware interfaces, as well as techniques based on state exploration to develop systematic dynamic validation. A key contribution is a framework for creating plug-and-play “virtual” SoC environments. The outcome is a comprehensive, automated methodology for firmware analysis that (1) can be employed early in the system exploration life-cycle, (2) accounts for the interaction of the firmware with the underlying hardware and other IPs, and (3) enables focused, targeted exploration of firmware code to identify functional error conditions and security vulnerabilities.
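
To give a flavor of the virtual-SoC idea, the Python sketch below models a single register-level handshake between firmware-like code and a hypothetical power-management IP, the kind of hardware/firmware interaction the methodology targets. The register names, the wake-up protocol, and the check are illustrative assumptions, not the project's actual platform.

    # Toy register-level model of a hypothetical power-management IP.
    class PowerManagementIP:
        """Firmware writes WAKE_REQ; the hardware model later sets WAKE_ACK."""
        def __init__(self):
            self.regs = {"WAKE_REQ": 0, "WAKE_ACK": 0}

        def write(self, reg, value):
            self.regs[reg] = value

        def read(self, reg):
            return self.regs[reg]

        def clock(self):
            # Model the hardware response: acknowledge a pending wake request.
            if self.regs["WAKE_REQ"] == 1:
                self.regs["WAKE_ACK"] = 1

    def firmware_wake_radio(pmu, max_polls=10):
        """Firmware-like routine: request wake-up, then poll for the acknowledgement."""
        pmu.write("WAKE_REQ", 1)
        for _ in range(max_polls):
            pmu.clock()                    # advance the hardware model one step
            if pmu.read("WAKE_ACK") == 1:
                return True
        return False                       # handshake bug: ack never arrived

    pmu = PowerManagementIP()
    assert firmware_wake_radio(pmu), "wake-up handshake failed"
    print("wake-up handshake exercised on the virtual model")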


Speculative Side Channel Detection

Modern microprocessors perform out-of-order and speculative execution; in fact, speculation is a critical ingredient of the efficiency of modern microarchitectures. But these optimizations can also result in sophisticated, complex security vulnerabilities. The Spectre and Meltdown vulnerabilities identified in 2018 had, in fact, been present in microprocessors for decades, and virtually every modern microprocessor could be shown to be vulnerable to these attacks.

In this project, we explore ways to make microarchitectures resilient against such attacks. We have developed one of the most comprehensive solutions for adding run-time resiliency to the microprocessor without requiring significant architectural changes. Our approach makes use of existing Design-for-Debug and Design-for-Security collateral, repurposing it in non-traditional ways to address the problem.


Post-silicon Validation

The size and complexity of modern electronic computing systems preclude catching all design errors by functional verification (formal or otherwise) on pre-silicon models alone. Indeed, an estimated 50% of modern SoC designs contain functional errors in the first fabricated silicon. Consequently, post-silicon verification has become a critical component of the overall verification flow. Post-silicon verification entails running tests on first-pass fabricated silicon to detect functional bugs that are missed during pre-silicon validation. Since post-silicon execution proceeds at target clock speeds, it permits a much deeper exploration of the design space than is afforded by pre-silicon simulation. However, a key challenge in post-silicon verification is limited observability: only a few of the thousands of important internal signals are observable during normal chip operation. Observability is constrained by the number of pins and the size of available internal buffers in the design. The situation is exacerbated by the current architectural trend away from a conglomeration of individual chips aggregated on a motherboard toward System-on-Chip (SoC) designs with system functionality on a single silicon substrate: with the high integration afforded by the SoC architecture, the number of pins, and correspondingly the external observability, is increasingly diminished.

The goal of this project is to streamline and systematize post-silicon validation of hardware systems targeted toward current and emergent applications. We aim to develop a comprehensive TFM (tool, flow, methodology) with (1) effective and scalable algorithms for selection and qualification of post-silicon observability that ameliorate the limited observability problem without reducing validation quality, (2) automation in post-silicon triage and diagnosis, and (3) formally guaranteed post-silicon coverage. Finally, we are investigating a clear and tight connection between pre-silicon and post-silicon validation to facilitate comprehensive assurance of system functionality and performance.
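
One common formulation of the observability problem is trace-signal selection: with a trace buffer that can record only a handful of signals per cycle, choose the signals whose recorded values allow the most internal state to be reconstructed offline. The greedy heuristic and the restoration sets in the Python sketch below are illustrative assumptions, not the selection algorithms developed in the project.

    # Illustrative greedy trace-signal selection under a buffer-width budget.
    def select_trace_signals(restores, buffer_width):
        """restores maps each candidate signal to the set of internal signals
        whose values it helps reconstruct offline."""
        selected, covered = [], set()
        for _ in range(buffer_width):
            # Pick the signal that adds the most not-yet-restorable signals.
            best = max(restores, key=lambda s: len(restores[s] - covered))
            if not restores[best] - covered:
                break  # no remaining candidate improves restoration
            selected.append(best)
            covered |= restores[best]
        return selected, covered

    restores = {
        "fsm_state": {"req", "gnt", "busy"},
        "bus_valid": {"req", "addr_ok"},
        "cache_hit": {"way0", "way1"},
    }
    sel, cov = select_trace_signals(restores, buffer_width=2)
    print(sel)  # ['fsm_state', 'cache_hit'] under this toy restoration model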


Certified Behavioral Synthesis

Recent years have witnessed continuing miniaturization of VLSI design and fabrication technologies, producing chips with higher and higher transistor density. A consequence of this advancement, together with the high computational demands of modern applications, is that hardware designs have continued to rise in complexity to make use of all the available transistors. This makes it challenging to produce reliable, high-quality hardware designs through hand-crafted Register-Transfer Level (RTL) or gate-level implementations. The problem is exacerbated by aggressive time-to-market requirements, leading to what is often referred to as the design productivity gap. Electronic System Level (ESL) design is often seen as a solution to this gap. The idea of ESL is to raise the design abstraction by specifying a hardware design behaviorally with a high-level language such as SystemC, C, or C++. ESL specifications are synthesized to hardware through a process known as behavioral synthesis. Behavioral synthesis consists of a number of interdependent components, e.g., compilation, scheduling, resource allocation, control synthesis, etc., each of which involves applying a sequence of transformations, ultimately compiling an ESL description into RTL.

Research in behavioral synthesis is now relatively mature, with several available tools such as AutoPilot, Bluespec, Cynthesizer, SPARK, etc. Nevertheless, despite the great need for it, behavioral synthesis has not yet found wide acceptance in current industrial practice. A major barrier to its adoption is the lack of designers’ confidence in the correctness of the synthesis tools themselves. Behavioral synthesis tools are complex software systems, often with thousands of lines of code, and consequently they are prone to logical and implementation errors. Errors in synthesis tools can result in RTL designs with bugs, with expensive ramifications. Thus, it is critical to develop techniques for certifying that the synthesized RTL conforms to its behavioral specification. Indeed, the state of behavioral synthesis today has sometimes been likened to that of logic synthesis (automatic translation of synthesizable RTL to gates), whose adoption in the 1990s was boosted significantly by the advancement of equivalence checking technology to automatically check conformance of the synthesized netlist with the RTL. Unfortunately, the significant gap in abstraction level between an ESL description and RTL (the very reason for adopting ESL for design specification) makes it difficult to check conformance between the two. Consequently, behaviorally synthesized RTL designs are either (a) error-prone or (b) overly conservative, producing circuits of poor quality and performance.

The goal of this project is to develop a generic, mechanized framework for applying formal analysis and verification techniques to certify the correctness of hardware designs produced through behavioral synthesis. Here, “correctness” means that the synthesized RTL implements the input ESL description. Our research involves investigating how to decompose the certification of correctness into pieces that can be effectively handled by a synergistic combination of different formal verification techniques. We have used our framework to certify several RTL designs synthesized from AutoPilot. Our experiments indicate that the framework can scale to designs with tens of thousands of lines of synthesized RTL.
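
The conformance question itself can be illustrated with a toy example: an untimed ESL-level function and a “scheduled” version of the same computation, split across cycles the way a behavioral synthesis tool might map it onto functional units. In the project this agreement is established by formal analysis rather than by testing; the Python sketch below only makes the two levels of description, and the property to be certified, concrete.

    # Toy illustration of the ESL-vs-scheduled-RTL conformance question.
    import itertools

    def esl_spec(a, b, c):
        """Behavioral (ESL) specification: a single untimed expression."""
        return (a + b) * c

    def scheduled_model(a, b, c):
        """The same computation split across two 'cycles', as a synthesis
        tool might schedule it onto an adder and a multiplier."""
        reg_sum = a + b      # cycle 1: adder result latched in a register
        return reg_sum * c   # cycle 2: multiplier consumes the register

    # Exhaustive check over a small input box; the project replaces this
    # kind of testing with formal equivalence proofs.
    for a, b, c in itertools.product(range(-4, 5), repeat=3):
        assert esl_spec(a, b, c) == scheduled_model(a, b, c)
    print("spec and scheduled model agree on all sampled inputs")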


Theorem Proving for Reactive System Analysis

Reactive concurrent systems consist of a number of interacting processes that perform non-terminating computations while receiving and transmitting messages. The design of such systems (e.g., distributed routing protocols) is error-prone, and the non-determinism inherent in concurrent communications makes such errors hard to detect and diagnose.

The goal of this project is to develop a scalable, mechanized infrastructure for certifying the correct and secure execution of reactive routing system implementations through: a generic framework for modeling and specifying systems at a number of abstraction layers; a compositional methodology for mathematically analyzing such models; and a suite of tools and techniques that mechanize and automate such analysis within a unified logical foundation. Our research exploits the general-purpose reasoning afforded by theorem proving while augmenting the tool suite with a streamlined modeling and specification methodology. We are developing a collection of targeted tools for verifying safety, liveness, and security properties of such systems while staying within a single logic and proof system.


Formal Analysis for Machine Code Verification

With the pervasive use of software artifacts in every aspect of human life, the cost of malfunctioning software today is staggering. According to a report from the US Department of Commerce’s National Institute of Standards and Technology, the cost to the US economy is as high as 60 billion dollars per year. In addition to crashes and failures in shipped software artifacts requiring significant investments to develop complicated patches for deployed systems, bugs make systems susceptible to malware, viruses, and other attacks. At the same time, it is impossible today to fully understand or predict the behavior of large software systems, and this problem is getting worse as system complexity continues to grow.

Formal methods have shown significant promise to dramatically improve software reliability. The idea of formal methods is to model software executions in a formal, mathematical logic and use mathematical reasoning to ensure and check that the executions satisfy the desired properties and specifications. The process is well-known to catch rare, corner-case software bugs that are difficult to identify through simulation and testing, but can have catastrophic consequences in real execution, especially in the presence of adversaries actively looking for opportunities to exploit such vulnerabilities. In spite of such promise, however, the adoption of formal methods in software engineering practice has been limited. The key reason for this is that the state-of-the-art formal methods solutions available today are still too expensive for day-to-day use in large-scale software assurance. In particular, deductive or theorem proving solutions, based on human-guided, semi-automatic mechanical reasoning often involve prohibitive manual effort, while fully automatic approaches based on decision procedures (e.g., model checking) can quickly run out of available time and memory.

Ameliorating the scalability problem for future systems requires synergistic and collaborative progress in computer architecture, software design, and formal methods:

  • Machines and software must be architected with amenability to formal analysis in mind.
  • Formal verification techniques must be scaled up sufficiently, so that they can bring substantial automation to the analysis of software programs running on realistic machine architectures.

The goal of this project is to develop scalable formal analysis tools and techniques for verification of software running on realistic machine architectures. The anticipated outcome is a comprehensive analysis tool-suite for machine-code verification, with the following characteristic features.

  • Analysis of simple properties is routine and automatic.
  • The tool suite is easily extensible by a human with program-specific insights (e.g., assertions, algorithm-level invariants, etc.) to discharge more subtle and deeper properties.
  • The tool suite takes into account features of realistic machine architectures (e.g., page tables, translation look-aside buffers, page protections, etc.).
  • There is a clear correspondence between the formal model used for reasoning and the machine architecture and software that are the targets of analysis. In particular, machine and language features not modeled in the formalization are made explicit.
  • The tool suite itself is independent of the nuances of a specific machine architecture, but can be instantiated with a broad range of architectural features. Thus, as machine architectures evolve to satisfy both the security and usability demands of modern applications, our tool suite can be easily extended and configured to target such systems.