As the complexity of Cyber-Physical Systems (CPS) increases, it becomes increasingly challenging to ensure CPS reliability, especially in the presence of software and/or physical failures. The Simplex architecture has been shown to be an effective tool for addressing software failures in such systems. When physical failures occur, however, Simplex may not function correctly, because physical failures can change the system dynamics, and the original Simplex design may not work for the new faulty system. To address concurrent software and physical failures, this paper presents the RSimplex architecture, which integrates robust fault-tolerant control (RFTC) techniques into the Simplex architecture. It comprises an uncertainty monitor, a high-performance controller (HPC), a robust high-assurance controller (RHAC), and decision logic that triggers switching between the controllers. Based on the uncertainty monitor for physical failures, we introduce a monitor-based switching rule in the decision logic in addition to the traditional stability-envelope-based rule. The RHAC is designed using robust fault-tolerant controllers. We show that RSimplex can efficiently handle a class of concurrent software and physical failures.
The emerging trend in Internet of Things (IoT) applications is to move the computation (cyber) closer to the source of the data (physical). This paradigm is often referred to as edge computing. If edge resources are pooled together, they can be used as decentralized shared resources for IoT applications, providing increased capacity to scale up computations and minimize end-to-end latency. Managing applications on these edge resources is hard, however, due to their remote, distributed, and possibly dynamic nature, which necessitates autonomous management mechanisms that facilitate application deployment, failure avoidance, failure management, and incremental updates. To address this need, we present CHARIOT, an orchestration middleware capable of autonomously managing IoT systems that comprise edge resources and applications. CHARIOT implements a three-layer architecture. The topmost layer comprises a system description language; the middle layer comprises a persistent data storage layer and the corresponding schema to store system information; and the bottom layer comprises a management engine, which uses information stored in persistent data storage to formulate constraints that encode system properties and requirements, thereby enabling the use of Satisfiability Modulo Theories (SMT) solvers to compute optimal system (re)configurations dynamically at runtime. This paper describes the structure and functionality of CHARIOT and evaluates its efficacy through a smart parking system case study responsible for parking space management.
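The constraint-based (re)configuration idea can be illustrated with a toy sketch in which exhaustive search stands in for an SMT solver; the process names, memory figures, and node names below are invented for illustration and do not reflect CHARIOT's actual data model.

```python
# Toy illustration of constraint-driven (re)configuration: processes with
# memory requirements are assigned to nodes without exceeding capacity.
# Exhaustive search stands in for the SMT solver; all values are illustrative.
from itertools import product

def configure(processes, nodes):
    """processes: {name: mem_needed}; nodes: {name: mem_capacity}.
    Returns a feasible assignment {process: node} or None."""
    proc_names, node_names = list(processes), list(nodes)
    for assign in product(node_names, repeat=len(proc_names)):
        load = {n: 0 for n in node_names}
        for p, n in zip(proc_names, assign):
            load[n] += processes[p]
        if all(load[n] <= nodes[n] for n in node_names):
            return dict(zip(proc_names, assign))
    return None  # no feasible configuration exists

def reconfigure(processes, nodes, failed):
    """On node failure, drop the node from the pool and re-solve."""
    return configure(processes, {n: c for n, c in nodes.items() if n != failed})
```

The same structure carries over to a real solver: capacities become arithmetic constraints over assignment variables, and failure handling is just re-solving with an updated resource set.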
Internet of Things (IoT) domains generate large volumes of high-velocity event streams from sensors, which need to be analyzed with low latency to drive decisions. Complex Event Processing (CEP) is a Big Data technique to enable such analytics, and is traditionally performed on Cloud Virtual Machines (VMs). Leveraging captive IoT edge resources in combination with Cloud VMs can offer better performance, flexibility and monetary costs for CEP. Here, we formulate an optimization problem for placing CEP queries, composed as an analytics dataflow, across a collection of edge and Cloud resources, with the goal of minimizing the end-to-end latency for the dataflow. We propose a brute-force optimal algorithm (BF) and a Genetic Algorithm (GA) meta-heuristic to solve this problem. We perform comprehensive real-world benchmarks on the compute, network and energy capacity of edge and Cloud resources for over 17 CEP query configurations. These results are used to define a realistic simulation study that validates the BF and GA solutions for 45 diverse dataflows. Our results show that the GA comes within 99% of the optimal BF solution, which takes hours to compute, maps dataflows with 4–50 queries within 1–25 secs, and is unable to offer a feasible solution in fewer than 10% of the experiments.
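A minimal sketch of the GA placement idea, under simplifying assumptions not in the paper: a linear dataflow, illustrative per-query execution latencies `exec_lat[q][r]` and inter-resource link latencies `link_lat[r1][r2]`, one-point crossover, and random mutation.

```python
import random

def dataflow_latency(placement, exec_lat, link_lat):
    """End-to-end latency of a linear dataflow under a given placement."""
    total = sum(exec_lat[q][r] for q, r in enumerate(placement))
    total += sum(link_lat[placement[i]][placement[i + 1]]
                 for i in range(len(placement) - 1))
    return total

def ga_place(exec_lat, link_lat, pop_size=30, gens=100, seed=1):
    """Evolve query-to-resource placements to minimize dataflow latency."""
    rng = random.Random(seed)
    n_q, n_r = len(exec_lat), len(exec_lat[0])
    pop = [[rng.randrange(n_r) for _ in range(n_q)] for _ in range(pop_size)]
    fit = lambda p: dataflow_latency(p, exec_lat, link_lat)
    for _ in range(gens):
        pop.sort(key=fit)
        elite = pop[:pop_size // 2]            # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_q)
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.2:             # random mutation
                child[rng.randrange(n_q)] = rng.randrange(n_r)
            children.append(child)
        pop = elite + children
    return min(pop, key=fit)
```

A brute-force counterpart would enumerate all `n_r ** n_q` placements, which is exactly what makes the BF baseline take hours on larger dataflows.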
As the density of wireless, resource-constrained sensors grows, so does the need to choreograph their actions across both time and space. Recent advances in ultra-wideband RF communication have enabled accurate packet timestamping, which can be used to precisely synchronize time. Location may be further estimated by timing signal propagation, but this requires additional communication overhead to mitigate the effect of relative clock drift. This additional communication lowers overall channel efficiency and increases energy consumption. This paper describes a novel approach to simultaneously localizing and time synchronizing networked mobile devices. An Extended Kalman Filter is used to estimate all devices' positions and clock errors, and packet timestamps serve as measurements that constrain time and overall network geometry. By inspection of the uncertainty in our state estimate, we can adapt the number of messages sent in each communication round to balance accuracy with communication cost. This reduces communication overhead, which decreases channel congestion and power consumption compared to traditional time of arrival and time difference of arrival localization techniques. We demonstrate the performance and scalability of our approach using a real network of custom RF devices and mobile quadrotors.
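The clock-estimation part of this idea can be sketched with a 2-state Kalman filter; the paper's EKF additionally estimates 3-D positions, so this sketch only isolates the predict/update structure for clock offset and drift, with illustrative noise parameters.

```python
def kf_clock(measurements, dt=0.1, q_offset=1e-8, q_drift=1e-9, r=1e-6):
    """Track one device's clock offset and drift from offset measurements.
    measurements: observed clock offsets (local minus reference time)."""
    x = [0.0, 0.0]                      # state: [offset, drift]
    P = [[1.0, 0.0], [0.0, 1.0]]        # state covariance
    for z in measurements:
        # Predict: offset grows by drift * dt (F = [[1, dt], [0, 1]]).
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q_offset,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q_drift]]
        # Update with a direct measurement of the offset (H = [1, 0]).
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        y = z - x[0]
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x, P
```

The covariance `P` is what an adaptive scheme can inspect: when the offset/drift uncertainty is small, fewer synchronization messages per round are needed.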
The environmental impacts of medium to large scale buildings receive substantial attention in research, industry, and media. This paper studies the energy savings potential of a commercial soccer stadium during day-to-day operation. Buildings of this kind are characterized by special-purpose system installations like grass heating systems and by event-driven usage patterns. This work presents a methodology to holistically analyze the stadium's characteristics and integrate its existing instrumentation into a Cyber-Physical System, enabling flexible deployment of different control strategies. In total, seven different strategies for controlling the studied stadium's grass heating system are developed and tested in operation. Experiments in the winter season 2014/2015 validated the strategies' impacts within the real operational setup of the Commerzbank Arena, Frankfurt, Germany. With 95% confidence, these experiments saved up to 66% of median daily weather-normalized energy consumption. Extrapolated to an average heating season, this corresponds to savings of 775 MWh and 148 t of CO2 emissions. In winter 2015/2016, an additional predictive nighttime heating experiment targeted lower temperatures. This experiment increased the savings to up to 85%, equivalent to 1 GWh (197 t CO2) in an average winter. In addition to achieving significant levels of energy savings, the different control strategies also met the target temperature levels to the satisfaction of the stadium's operational staff.
Modern automotive Cyber-Physical Systems (CPS) are increasingly adopting wireless communications for Intra-Vehicular, Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) protocols as a promising solution for challenges such as the wire harnessing problem, collision detection and avoidance, traffic control, and environmental hazards. Regrettably, this new trend results in new security challenges that can put the safety and privacy of the automotive CPS and passengers at great risk. In addition, automotive wireless communication security is constrained by the strict energy and performance limitations of electronic control units and sensors. As a result, key generation and management for secure automotive CPS wireless communication is an open research challenge. This paper aims to help solve these security challenges by presenting a practical key generation technique based on the reciprocity and high spatial and temporal variation properties of the automotive wireless communication channel. Accompanying this technique is a key length optimization algorithm to improve performance (in terms of time and energy) for safety-related applications constrained by small communication windows. To validate the practicality and effectiveness of our approach, we have conducted simulations alongside real-world experiments with vehicles and RC cars. Lastly, we demonstrate through simulations that we can generate keys with high security strength (keys with 67% min-entropy) with up to 10X improvement in performance and 20X reduction in code size overhead in comparison to state-of-the-art security techniques.
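A toy sketch of reciprocity-based bit extraction (not the paper's exact scheme): both parties quantize their correlated channel measurements (e.g. RSSI) against mean ± α·σ guard bands, discard guard-band samples, and keep only the bit positions both parties retained, which reduces bit mismatches.

```python
import statistics

def quantize(samples, alpha=0.2):
    """Turn channel samples into bits; drop samples inside the guard band."""
    mu = statistics.mean(samples)
    sd = statistics.pstdev(samples)
    upper, lower = mu + alpha * sd, mu - alpha * sd
    bits, kept = [], []
    for i, s in enumerate(samples):
        if s > upper:
            bits.append(1); kept.append(i)
        elif s < lower:
            bits.append(0); kept.append(i)
        # samples inside [lower, upper] are discarded as unreliable
    return bits, kept

def common_bits(bits_a, kept_a, bits_b, kept_b):
    """Keep only the bit positions both parties retained (indices are
    exchanged publicly; the bit values themselves never leave each party)."""
    common = sorted(set(kept_a) & set(kept_b))
    a = [bits_a[kept_a.index(i)] for i in common]
    b = [bits_b[kept_b.index(i)] for i in common]
    return a, b
```

In a real protocol the agreed bit strings would still pass through information reconciliation and privacy amplification before use as a key.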
A new mobile healthcare solution for neuro-cognitive function monitoring and treatment is presented. The technique is based on spatio-temporal detection and characterization of a specific brain potential, called P300. The diagnosis of cognitive deficit is achieved by analyzing the data collected by the system with a new algorithm called tuned-Residue Iteration Decomposition (t-RIDE). The system has been tested on 12 subjects involved in three different cognitive tasks of increasing difficulty. The system enables fast diagnosis of cognitive deficits, including mild and severe cognitive impairment: t-RIDE converges in 79 iterations (i.e., 1.95 s), yielding 80% accuracy in P300 amplitude evaluation with only 13 trials on a single EEG channel.
Internet-of-Things (IoT) envisions an infrastructure of ubiquitous networked smart devices offering advanced monitoring and control services. Current art in IoT architectures utilizes gateways to enable application-specific connectivity to IoT devices. In typical configurations, IoT gateways are shared among several IoT edge devices. Given the limited available bandwidth and processing capabilities of an IoT gateway, the service quality (SQ) of connected IoT edge devices must be adjusted over time, not only to fulfill the needs of individual IoT device users, but also to accommodate the SQ needs of the other IoT edge devices sharing the same gateway. However, having multiple gateways introduces an interdependent problem: binding, i.e., which IoT device should connect to which gateway. In this paper, we jointly address the binding and allocation problems of IoT edge devices in a multi-gateway system under the constraints of available bandwidth, processing power, and battery lifetime. We propose a distributed trade-based mechanism in which, after an initial setup, gateways negotiate and trade the IoT edge devices to increase the overall SQ. We evaluate the efficiency of the proposed approach with a case study and through extensive experimentation over different IoT system configurations with regard to the number and type of the employed IoT edge devices. Experiments show that our solution improves the overall SQ by up to 56% compared to an unsupervised system. Our solution also achieves up to 24.6% improvement in overall SQ compared to the state-of-the-art SQ management scheme, while both meet the battery lifetime constraints of the IoT devices.
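The trade-based binding step can be sketched as a greedy improvement loop; the SQ values, bandwidth needs, and capacities below are illustrative assumptions, and for brevity the gateways are modeled centrally here rather than as distributed negotiators.

```python
def trade(binding, sq, bw, cap):
    """Move devices between gateways while a move raises total SQ.
    binding: {device: gateway}; sq[device][gateway]: service quality of the
    device on that gateway; bw[device]: bandwidth need; cap[gateway]: capacity."""
    improved = True
    while improved:
        improved = False
        for d in list(binding):
            g = binding[d]
            for g2 in cap:
                load2 = sum(bw[x] for x, gx in binding.items() if gx == g2)
                if g2 != g and load2 + bw[d] <= cap[g2] and sq[d][g2] > sq[d][g]:
                    binding[d] = g2      # hand the device to the better gateway
                    g = g2
                    improved = True
    return binding
```

Every accepted move strictly increases total SQ, and total SQ is bounded, so the loop terminates at a binding where no single device can be profitably moved.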
Engine control applications include functions that need to be executed at specific rotation angles of the crankshaft. The tasks performing these functions are activated at variable rates and are programmed to be adaptive with respect to the rotation speed of the engine to avoid overloading the CPU. Simplified control implementations are used at high speeds, for example reducing the number of fuel injections or the complexity of the computations. Such different control implementations define execution modes with different execution times for different ranges of the rotation speed. The selection of the switching speeds for the operating modes of such tasks is an optimization problem, which consists of determining the optimal transition speeds that maximize engine performance while guaranteeing schedulability. This paper presents three methods for tackling such an optimization problem under a set of assumptions about the performance metrics: two heuristics and a branch and bound method that guarantees finding the optimal solution within a given speed granularity. In addition, a simple method to compute a performance upper bound is presented. The approach and the hypotheses are validated using a Simulink model of the engine and the computational tasks, considering the engine efficiency and the production of pollutants (NO2) as the metrics of interest. Simulation experiments show that the performance of the proposed heuristics is quite close to that of the upper bound and to the optimum within a finite granularity.
Cyber-Physical Systems facilitate the seamless integration of devices in the physical world with cyberspace, and have attracted substantial interest from the academic, research, and industrial communities. Among the enabling services, localization is particularly important. For example, data are meaningless without position information, and localization is a precondition for various operations in Cyber-Physical Systems, such as routing and data collection. Localization for mobile group users is one of the important applications in Cyber-Physical Systems. However, due to the sparse deployment of anchors and the instability of signals in the wireless environment, users may not receive adequate anchor information, so the localization quality is neither dependable nor acceptable. To solve this problem, we propose to exploit localized users as mobile anchors for localizing the non-localized users. These mobile users cooperate as a whole group to improve their localization accuracy. Moreover, to decrease the communication cost among these users, an algorithm for electing mobile anchors is designed, with several provable properties. This election algorithm is a distributed method that requires no advance negotiation among mobile users. In addition, for scenarios with crowds of users, we divide the users into different groups according to their distance information, which ensures that only dependable anchors are used for localization. Extensive experimental results demonstrate that the localization dependability can be improved significantly.
Building an efficient, smart, and multifunctional power grid while maintaining high reliability and security is an extremely challenging task, particularly in the ever-evolving cyber threat landscape. The challenge is also compounded by the increasing complexity of power grids in both cyber and physical domains. In this article, we develop a stochastic Petri net based analytical model to assess and analyze the reliability of smart grids against topology attacks, accounting for system countermeasures (i.e., intrusion detection systems and malfunction recovery techniques). Topology attacks, evolving from false data injection attacks, are growing security threats to smart grids. In our analytical model, we define and consider both conservative and aggressive topology attacks, and two types of unreliable consequences (i.e., system disturbances and failures). The IEEE 14-bus power system is employed as a case study to clearly explain the model construction and parameterization process. The benefit of having this analytical model is the capability to measure system reliability through both transient- and steady-state analysis. Finally, extensive simulation experiments are conducted to demonstrate the feasibility and efficiency of our proposed model.
With the ongoing development of sensor devices and network techniques, big data is being generated by cyber-physical systems. Because of occasional sensor equipment failures and unreliable network transmission, a large amount of low-quality data, such as noisy data and incomplete data, is collected from cyber-physical systems. Low-quality data poses a remarkable challenge for deep learning models for big data feature learning. As a novel deep learning model, the deep computation model achieves superior performance for big data feature learning. However, it is difficult for the deep computation model to learn dependable features from low-quality data, since it uses a nonlinear function as the encoder. In this paper, a dependable deep computation model is proposed for feature learning on low-quality big data in cyber-physical systems. Specifically, a regularization term is added to the objective function of the deep computation model to obtain reliable features in the intermediate-level representation space. Furthermore, a learning algorithm based on the back-propagation strategy is devised to train the parameters of the proposed model. Finally, experiments are conducted to evaluate the effectiveness of the dependable deep computation model for low-quality big data feature learning. Results indicate that the proposed model performs better than the conventional deep computation model and the denoising deep computation model for classification and restoration of low-quality data in cyber-physical systems.
Automotive functionalities typically consist of a large set of periodic/cyclic tasks scheduled under a time-triggered operating system (OS), and a large fraction of them are feedback control applications. OSEK/VDX is a common time-triggered automotive OS that offers preemptive periodic schedules supporting a pre-configured set of periods. The feedback controllers implemented on such OSEK/VDX-compliant systems need to use one of the pre-configured (sampling) periods. A shorter period is often desired for a feedback controller for higher control performance; on the other hand, this implies a higher processor load. For a given performance requirement, the longest sampling period that meets this requirement is the optimal one. Given a limited set of pre-configured periods, such optimal sampling periods are often not available, and the practice is to choose a shorter available period -- leading to a higher processor load. To address this, we propose a controller that cyclically switches among the available periods, thereby leading to an average sampling period closer to the optimal one. This way, we reduce the processor load and are able to pack more control applications onto the same processor. The main challenge in this paper is the design of controllers that take into account such cyclic switching of sampling periods (i.e., use non-uniform sampling) and meet specified performance requirements (in settling time, which is the key metric for many real-time control applications and more difficult to optimize than a quadratic cost) and system constraints (e.g., input saturation). This non-convex constrained controller optimization problem, arising in OS-aware automotive systems design, has not been addressed in the control theory literature, and a new approach based on adaptively parameterized particle swarm optimization (PSO) is proposed to solve it.
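A standard PSO loop (not the paper's adaptively parameterized variant) gives a feel for the search; the settling-time cost is replaced by a generic cost callback, and the bound clipping stands in for saturation-style constraints. All parameter values are illustrative.

```python
import random

def pso(cost, dim, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize cost(x) over [lo, hi]^dim with particle swarm optimization."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clip to bounds, playing the role of a saturation constraint
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

In the paper's setting, `cost` would run a closed-loop simulation of the switched-period controller and return the settling time (plus constraint penalties), which is exactly the kind of non-convex black-box objective PSO handles.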
Cyber-physical systems have been deployed with considerable success in many industries. However, the implementation of cyber-physical systems in hospitals has been far more limited. The safety-critical nature of the application domain may be one reason for this slow development, but it is not the only one. Revenues from Operating Room (OR) time and surgery account for about 50 percent of the income of major hospitals, but the efficiency of OR utilization is often reported to be relatively low. Therefore, improving OR management with a cyber-physical system should be a priority. By nature, patient safety and consideration for health outcomes are of utmost importance in clinical operations, possibly slowing the implementation of innovative solutions with limited history. In this paper we report on our experience implementing a cyber-physical system at Houston Methodist Hospital and discuss some of the difficulties and potential drivers for success. Our pilot study was done in the context of the management of a large suite of ORs and used the agile co-development of a cyber-physical system through intense collaboration between clinicians and computational scientists. While technology remains the foundation of a cyber-physical system, this experience taught us that the human factor is certainly the driving force behind a design that promotes user acceptance.
Decisions on how best to optimize today's energy systems operations are becoming ever more complex and conflicting, such that model-based predictive control algorithms must play a key role. However, learning dynamical models of energy-consuming systems such as buildings using grey/white box approaches is very cost- and time-prohibitive. Demand response (DR) is becoming increasingly important as the volatility on the grid continues to increase. We consider the problem of data-driven end-user demand response and peak power reduction for large buildings, which involves predicting the demand response baseline, evaluating fixed rule-based DR strategies, synthesizing DR control actions, and reducing peak power consumption. We provide a model-based control with regression trees algorithm (mbCRT), which allows us to perform closed-loop control for DR strategy synthesis for large commercial buildings. Our data-driven control synthesis algorithm outperforms rule-based DR by 17% for a large DoE commercial reference building and leads to a curtailment of 380 kW and over $45,000 in DR revenue. A data predictive control with regression trees (DPCRT) algorithm is also presented. DPCRT is a finite receding horizon method with which the building operator can optimally trade off peak power reduction against thermal comfort without having to learn white/grey box models of the system's dynamics. Our methods have been integrated into an open-source tool called DR-Advisor, which acts as a recommender system for the building's facilities manager and provides suitable control actions to meet the desired load curtailment while maintaining operations and maximizing the economic reward. DR-Advisor achieves 92.8% to 98.9% prediction accuracy for 8 buildings on Penn's campus. We compare DR-Advisor with other data-driven methods and rank second on ASHRAE's benchmarking data-set for energy prediction.
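The kind of regression-tree baseline model such data-driven approaches rely on can be sketched as follows; this is standard CART-style fitting (recursive splits on the squared-error-minimizing feature/threshold), not DR-Advisor's actual implementation, and the feature data are illustrative.

```python
def fit_tree(X, y, depth=3, min_leaf=2):
    """Fit a tiny regression tree: split on the feature/threshold pair that
    minimizes the summed squared error of the two resulting leaves."""
    if depth == 0 or len(y) < 2 * min_leaf:
        return ("leaf", sum(y) / len(y))
    best = None
    for f in range(len(X[0])):
        for t in sorted(set(row[f] for row in X)):
            left = [i for i, row in enumerate(X) if row[f] <= t]
            right = [i for i in range(len(X)) if i not in left]
            if len(left) < min_leaf or len(right) < min_leaf:
                continue
            def sse(idx):
                m = sum(y[i] for i in idx) / len(idx)
                return sum((y[i] - m) ** 2 for i in idx)
            score = sse(left) + sse(right)
            if best is None or score < best[0]:
                best = (score, f, t, left, right)
    if best is None:
        return ("leaf", sum(y) / len(y))
    _, f, t, li, ri = best
    return ("node", f, t,
            fit_tree([X[i] for i in li], [y[i] for i in li], depth - 1, min_leaf),
            fit_tree([X[i] for i in ri], [y[i] for i in ri], depth - 1, min_leaf))

def predict(tree, x):
    while tree[0] == "node":
        _, f, t, left, right = tree
        tree = left if x[f] <= t else right
    return tree[1]
```

In a DR setting, `X` would hold weather and schedule features and `y` the measured building power; the fitted tree then serves as the demand-response baseline predictor.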
The use of robots in operating rooms improves safety and decreases patient recovery time and surgeon fatigue, but introduces new potential hazards that can lead to severe injury, or even the loss of human life. Thus, safety has been perceived as a crucial system property since the early days by industry, the medical community, and regulatory agencies alike. In this paper we discuss the application of the mathematically rigorous technique known as Formal Verification to analyze the safety properties of a laser incision case study, and to assess its safe and predictable operation. Like all formal methods approaches, our analysis has three distinct components: a method to create a model of the system, a language to specify the properties, and a strategy to prove rigorously that the behavior of the model fulfills the desired properties. The model of the system takes the form of a hybrid automaton consisting of a discrete control part that operates in a continuous environment. The safety constraints are formalized as reachability properties of the hybrid automaton model, while the verification strategy exploits the capabilities of the tool ARIADNE to address the verification problem and answer the related questions, ranging from safety to efficiency and effectiveness.
In this paper, we perform a comprehensive survey of the technical aspects related to the implementation of demand response and smart buildings. Specifically, we discuss various smart loads such as heating, ventilating, and air-conditioning (HVAC) systems and plug-in electric vehicles (PEVs); the power architecture with multi-bus characteristics; different control algorithms, such as hybrid centralized and decentralized control and distributed coordination among buildings; the communication technologies and network architectures; and the potential cyber-physical security issues and possible mechanisms for enhancing system security at both the cyber and physical layers. The current status of demand response in the United States, Europe, Japan, and China is reviewed, and the benefits, costs, and challenges of implementing and operating demand response and smart buildings are also discussed.
The heart is a vital organ that relies on the orchestrated propagation of electrical stimuli to coordinate each heart beat. Abnormalities in the heart's electrical behaviour can be managed with a cardiac pacemaker. Recently, the closed-loop testing of pacemakers with an emulation (real-time simulation) of the heart has been proposed. This enables developers to interrogate their pacemaker design without having to engage in costly or lengthy clinical trials. Many high-fidelity heart models have been developed, but they are too computationally intensive to be simulated in real time. Heart models designed specifically for the closed-loop testing of pacemakers are too abstract to be useful in the testing of physical pacemakers. In the context of pacemaker testing, this paper presents a more computationally efficient heart model that generates realistic continuous-time electrical signals. The heart model is composed of cardiac cells that are connected by paths. Significant improvements were made to an existing cardiac cell model to stabilise its activation behaviour, and to an existing path model to capture the behaviour of continuous electrical propagation. We provide simulation results that show our ability to faithfully model complex re-entrant circuits (which cause arrhythmia) that existing heart models cannot.
As the number, complexity, and heterogeneity of connected devices in the Internet of Things (IoT) increase, so does our need to secure these devices, the environment in which they operate, and the assets they manage or control. Collaborative security exploits the capabilities of these connected devices and opportunistically composes them in order to protect assets from potential harm. By dynamically composing these capabilities, collaborative security implements the security controls through which security (and other) requirements are satisfied. However, this dynamic composition is often hampered by the heterogeneity of the devices available in the environment and the diversity of their behaviours. In this paper we present a systematic, tool-supported approach for collaborative security where the analysis of requirements drives the opportunistic composition of capabilities in order to realise the appropriate security control in the operating environment. This opportunistic composition is supported through a combination of feature modelling and mediator synthesis. We use features and transition systems to represent and reason about capabilities and requirements. We formulate the selection of the optimal set of features to implement adequate security control as a multi-objective constrained optimisation problem and use constraint programming to solve it efficiently. The selected features are then used to scope the behaviours of the capabilities and thereby restrict the state space for synthesising the appropriate mediator. The synthesised mediator coordinates the behaviours of the capabilities to satisfy the behaviour specified by the security control. Our approach ensures that the implemented security controls are the optimal ones given the capabilities available in the operating environment. 
We demonstrate the validity of our approach by implementing a Feature-driven medIation for Collaborative Security (FICS) tool and applying it to a collaborative robots case study.
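The feature-selection step can be illustrated with a toy sketch in which exhaustive search stands in for the constraint-programming solver; the capability names, costs, and energy figures are invented for illustration and are not from the FICS tool.

```python
from itertools import combinations

FEATURES = {
    # name: (capabilities provided, cost, energy) -- illustrative values
    "camera":       ({"detect"}, 3, 2),
    "lidar":        ({"detect"}, 5, 4),
    "siren":        ({"alert"}, 1, 1),
    "robot_patrol": ({"detect", "alert"}, 7, 6),
}

def select(required, budget):
    """Return the feasible feature set that covers all required capabilities
    within budget, minimising (cost, energy) lexicographically."""
    best, best_key = None, None
    names = list(FEATURES)
    for k in range(1, len(names) + 1):
        for combo in combinations(names, k):
            provides = set().union(*(FEATURES[n][0] for n in combo))
            cost = sum(FEATURES[n][1] for n in combo)
            energy = sum(FEATURES[n][2] for n in combo)
            if required <= provides and cost <= budget:
                key = (cost, energy)
                if best_key is None or key < best_key:
                    best, best_key = combo, key
    return best
```

A constraint-programming formulation replaces the enumeration with Boolean selection variables, coverage and budget constraints, and a multi-objective search, which is what makes the approach scale; the selected features then scope the state space for mediator synthesis.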