
We have shown that this can be done, rather inexpensively, in time O(T): a result that makes our algorithm as good as the best state-of-the-art algorithms for exact inference in HMMs. However, probabilistic property checking in c-HMMs requires an additional step: the identification of those trajectories of states that actually enforce a certain property.
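For reference, the O(T) exact-inference step mentioned above can be realized with the standard forward (filtering) recursion for HMMs. The following is a minimal sketch of that recursion, not the algorithm of S1 Appendix; the two-state model and its matrices are hypothetical placeholders.

```python
import numpy as np

def forward(pi, A, B, observations):
    """Standard HMM forward pass: returns P(O_1..T) and the filtered
    state distributions.  pi: initial distribution (n,), A: transition
    matrix (n, n), B: sensor matrix (n, m), observations: list of
    observation indices.  Runtime is O(T * n^2), i.e. linear in T."""
    alpha = pi * B[:, observations[0]]          # joint P(S_1, O_1)
    filtered = [alpha / alpha.sum()]
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]           # predict, then correct
        filtered.append(alpha / alpha.sum())
    return alpha.sum(), filtered

# Tiny hypothetical 2-state, 2-observation example.
pi = np.array([0.6, 0.4])
A  = np.array([[0.9, 0.1],
               [0.2, 0.8]])
B  = np.array([[0.8, 0.2],
               [0.3, 0.7]])
likelihood, beliefs = forward(pi, A, B, [0, 0, 1, 0])
```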

The complexity of this step is, in general, property-dependent (unless a number of impossible, i.e., zero-probability, cases can be ruled out a priori). However, it can be noted that the resilient properties are functions of the trajectory of costs rather than of the trajectory of states. Proposition 1. Given a finite time horizon T and a c-HMM, the number of possible assignments of the trajectory of costs is equal to or smaller than the cardinality of the set of trajectories of states. On the other hand, the number of trajectories of states that share the same trajectory of costs is $\prod_{i=1}^{T} \lvert \{ s \in S : c(s) = k_i \} \rvert$, where c(·) denotes the cost function and k_i is the cost of the state at the i-th time step. The main challenge of dealing with long time horizons T is that, as the number of possible trajectories grows exponentially with their length, the subset of trajectories that satisfy a certain (resilient or not) property might grow as well.
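As a concrete reading of the counting argument in Proposition 1, the short sketch below (with a hypothetical state space and cost function) computes how many state trajectories collapse onto a single trajectory of costs.

```python
from math import prod

# Hypothetical c-HMM state space and cost function c : S -> costs.
states = ["ok", "degraded", "critical", "failed"]
cost = {"ok": 0, "degraded": 1, "critical": 1, "failed": 2}

def trajectories_per_cost_trajectory(cost_trajectory):
    """Number of state trajectories mapped onto one trajectory of costs:
    the product, over time steps, of how many states share cost k_i."""
    return prod(sum(1 for s in states if cost[s] == k) for k in cost_trajectory)

# Example: the cost trajectory (1, 1, 0) is shared by 2 * 2 * 1 = 4 state trajectories.
print(trajectories_per_cost_trajectory([1, 1, 0]))
```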

This is also true, and especially important, for the resilient property of l-resistance. Proposition 2. This additional layer of complexity cannot really be circumvented, i.e., not without giving up exact results. Instead, we can use the algorithm detailed in S1 Appendix to compute approximate probability values of l-resistance that are strictly smaller, or larger, than the actual value, i.e., pessimistic or optimistic estimates, respectively.

To compute a pessimistic estimate of this probability, we must construct new pseudo-transition and sensor models. Approximate inference methodologies are widely used in the context of Bayesian networks, dynamic Bayesian networks, and probabilistic graphical models in general. Unlike the upper and lower bound approximation proposed above, however, most approximate inference methods for probabilistic graphical models are based on repeated randomized sampling, also known as Monte Carlo methods or algorithms [9].

Rejection sampling, likelihood weighting, Markov chain Monte Carlo, and Gibbs sampling are all approaches for Bayesian network inference that, with certain limitations and adjustments, can be adapted to the family of dynamic networks [9] to which HMMs belong. In many real-world applications, approximate inference becomes necessary when the complexity of the model renders exact inference intractable.
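To illustrate what such sampling methods look like when adapted to an HMM, the sketch below implements basic likelihood weighting on a hypothetical two-state model: it estimates the probability that the hidden state trajectory satisfies an arbitrary predicate, given the observations. It is meant only as a point of comparison with the exact approach used in this work.

```python
import random

# Hypothetical two-state HMM (same shape as the models discussed in the text).
states = [0, 1]
pi = [0.6, 0.4]                      # initial distribution
A  = [[0.9, 0.1], [0.2, 0.8]]        # transition model P(S_t+1 | S_t)
B  = [[0.8, 0.2], [0.3, 0.7]]        # sensor model P(O_t | S_t)

def likelihood_weighting(observations, satisfies, n_samples=10_000):
    """Estimate P(property | O_1..T) by sampling state trajectories from
    the prior and weighting each by the likelihood of the evidence."""
    num = den = 0.0
    for _ in range(n_samples):
        s = random.choices(states, weights=pi)[0]
        w, traj = B[s][observations[0]], [s]
        for o in observations[1:]:
            s = random.choices(states, weights=A[s])[0]
            w *= B[s][o]
            traj.append(s)
        den += w
        if satisfies(traj):
            num += w
    return num / den if den > 0 else float("nan")

# Example property: the trajectory never visits state 1 ("faulty").
print(likelihood_weighting([0, 0, 1, 0], lambda tr: all(s == 0 for s in tr)))
```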

On the contrary, we observe that the complexity of models and inference in this work is always kept within the realm of the exactly tractable, with the issue of unrestrained growth being circumscribed to the number of state trajectories satisfying any specific property. Sampling, therefore, could be used orthogonally to the proposed exact inference, as a tool to explore the space of state trajectories (see S1 Appendix).

To demonstrate the potential of the proposed methodology, we apply both its modelling and probabilistic inference facets to four practical scenarios. These examples show that resilience and the resilient properties have a prominent role in several different domains. The first two qualitative examples are inspired by the domains of disaster management and macroeconomics. The third and fourth examples are drawn from the fields of self-adaptive computing for aerospace applications and swarm robotics, respectively, and they are used to evaluate the quantitative aspects of the proposed approach as well.

When dangerous disruptive events occur, proper disaster management is crucial to protect human lives and minimize casualties [21]. Effective disaster management cannot be decoupled from good modelling and decision-making strategies [22]. In this first scenario, we model a four-island archipelago whose status is monitored by a central control room. Having defined the observation of the i-th island's status at time t as o^i_t and its distance from the control centre as d_i, the overall observation model is defined as the product of the per-island sensor models, $P(O_t \mid S_t) = \prod_i P(o^i_t \mid s^i_t, d_i)$, whose reliability depends on the distance d_i. The cost function states how many resources, e.g., search and rescue teams, are required to cope with each situation in the state domain.
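To make the structure of such a model concrete, the sketch below encodes a deliberately simplified, hypothetical single-island fragment of the c-HMM (states, transition and sensor models, and a cost expressed in rescue teams). All state names and numerical values are illustrative placeholders, not the values used for the archipelago scenario.

```python
# Hypothetical single-island c-HMM fragment for the archipelago scenario.
# All numerical values are illustrative placeholders.
states = ["calm", "flooded", "devastated"]

# Transition model P(S_t+1 | S_t): rows sum to 1.
transition = {
    "calm":       {"calm": 0.90, "flooded": 0.08, "devastated": 0.02},
    "flooded":    {"calm": 0.30, "flooded": 0.60, "devastated": 0.10},
    "devastated": {"calm": 0.05, "flooded": 0.25, "devastated": 0.70},
}

# Sensor model P(o | s, d): observations from islands farther away from
# the control centre (larger d, in km) are assumed to be less reliable.
def sensor(observation, state, d):
    p_correct = max(0.5, 0.95 - 0.005 * d)
    return p_correct if observation == state else (1 - p_correct) / 2

# Cost function: search and rescue teams needed in each state.
cost = {"calm": 0, "flooded": 2, "devastated": 5}
```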

The four island archipelago modelled in the first application scenario.

The figure shows the geographical distribution of the islands and, for each island, its evolving state, cost, and partial observation from the point of view of the control room. This figure is similar but not identical to the one in the original submission and is for illustrative purposes only. The numerical values in the transition and the sensor model can be found, or improved upon, using historical data and expert knowledge. Performing inference on the system model makes it possible to answer different queries of interest.

If a limited number of search and rescue teams is present on the archipelago, one can compute the probability that the l-resistance property holds true and verify that it is above a desirable threshold. Furthermore, if the archipelago can temporarily recall an additional q resources, the same analysis can be repeated with the correspondingly larger budget, i.e., checking (l+q)-resistance.
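The sketch below, which reuses the hypothetical island fragment above, shows one way such a query could be evaluated, assuming that l-resistance is read here as "the resources required never exceed the budget l at any time step of the horizon". It is an illustration of the idea, not the S1 Appendix algorithm.

```python
def prob_resources_never_exceed(budget, observations, d, pi=None):
    """Forward-style recursion over the hypothetical island c-HMM above:
    returns P(cost(S_t) <= budget for all t | O_1..T).  Only a sketch of
    an l-resistance-like check, not the S1 Appendix algorithm."""
    pi = pi or {s: 1.0 / len(states) for s in states}
    # alpha_ok[s]: P(path within budget so far, S_t = s, O_1..t)
    alpha_ok = {s: pi[s] * sensor(observations[0], s, d) * (cost[s] <= budget)
                for s in states}
    evidence = {s: pi[s] * sensor(observations[0], s, d) for s in states}
    for o in observations[1:]:
        alpha_ok = {s2: sum(alpha_ok[s1] * transition[s1][s2] for s1 in states)
                        * sensor(o, s2, d) * (cost[s2] <= budget)
                    for s2 in states}
        evidence = {s2: sum(evidence[s1] * transition[s1][s2] for s1 in states)
                        * sensor(o, s2, d)
                    for s2 in states}
    return sum(alpha_ok.values()) / sum(evidence.values())

# Probability that 2 rescue teams suffice throughout, vs. 2 + q with q = 3.
obs = ["calm", "flooded", "flooded", "calm"]
print(prob_resources_never_exceed(2, obs, d=12.0))
print(prob_resources_never_exceed(5, obs, d=12.0))
```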

Probabilistic and statistical models are already widely exploited tools in the fields of economics and finance, the latter especially. A notable example is the research on the expected return and risk of efficient portfolios by Harry Markowitz [24]. The deterministic modelling approach of traditional macroeconomics, on the other hand, has come to be questioned over the last decade by the crisis of 2008 and the growing prominence of experimental and behavioural economics [25]. Acknowledging the existence of many remaining unknowns in modern macroeconomics, probability theory seems the natural development direction for models that need to be able to account for the uncertainties of this domain and the irrationality of human behaviours.

Applying the proposed modelling to the context of macroeconomics gives us a tool to derive valuable insights into this world. Assessing the resilience of a macroeconomic system is important for multiple reasons: smoothly running economies guarantee the development, stability, and fairness of our societies.

The housing market crisis proved that existing models are not enough to protect us from rare, not directly observable, and counterintuitively correlated events [26]. The argument that macroeconomics should be revisited to deal with the uncertainty of the real world is not new [27], and statistical models for certain phenomena have been proposed [28].

The general consensus on the still incomplete understanding of macroeconomics lends itself perfectly to a partially observable modelling approach, i.e., one built around a sensor model $P(O_t \mid S_t)$. As economists are well aware of the limitations of existing models, they often rely on stress tests [29] to evaluate the resilience of financial institutions [30]. Stress tests are experimental tools that go beyond statistical analysis but for which statistical meta-analyses exist [31]; these can be used to construct the observation model required by our approach. Intuitively, the cost function of the c-HMM describing this scenario tells how much money in cash, deposits, or bonds a government would need to prevent a default in a given state.

Governments, banks, and investment funds typically monitor time horizons of 5 or 10, and sometimes 15 or 20, years.

In this context, the f-functionality property represents the amount of funding that has to be made available, on average, across multiple yearly budgets.

Self-adaptive computers possess ad hoc capabilities, e.g., the ability to adjust their own configuration at run time. Because they do not require the supervision of a human operator, these systems are especially suitable for critical, advanced applications such as space systems and robotic exploration. An autonomous computer and a resilient ecological system share several properties, for example the ability to self-protect and self-heal [33], and assessing the resilience of the former is of primary importance both at design and run time.

Previous research [34–36] proved that probabilistic models have the potential to enable autonomous computing systems. We now demonstrate how they can be exploited for the analysis of their resilience.

A stochastic transition model describes the ageing of a processing element (PE) [34] and is parameterized by the impact rate of particle radiation, r, and the mean time to failure (MTTF) of a PE. These parameters are responsible for transient and permanent faults, respectively.
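A minimal sketch of what such a per-PE transition model could look like, assuming a three-state abstraction (healthy, transient fault, permanent fault) and exponentially distributed fault arrivals; the state names, rate values, and discretization step are hypothetical.

```python
import numpy as np

# States of a single processing element (PE); a hypothetical abstraction.
PE_STATES = ["healthy", "transient_fault", "permanent_fault"]

def pe_transition_matrix(r, mttf, dt=1.0, p_scrub=0.9):
    """Hypothetical per-PE transition model P(S_t+1 | S_t).
    r: particle-radiation impact rate (transient upsets per time unit);
    mttf: mean time to failure (drives permanent faults); dt: time step;
    p_scrub: probability that a transient fault is corrected in one step."""
    p_t = 1.0 - np.exp(-r * dt)        # transient fault within dt
    p_p = 1.0 - np.exp(-dt / mttf)     # permanent fault within dt
    return np.array([
        # healthy -> healthy, transient, permanent
        [(1 - p_t) * (1 - p_p), p_t * (1 - p_p), p_p],
        # transient fault -> scrubbed, still faulty, or worn out
        [p_scrub * (1 - p_p), (1 - p_scrub) * (1 - p_p), p_p],
        # permanent faults are absorbing
        [0.0, 0.0, 1.0],
    ])

A_pe = pe_transition_matrix(r=0.01, mttf=10_000.0)
```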

Observations of each PE, however, are not perfect, for two reasons: (1) errors can slip into the observers too; and (2) transient and permanent faults are, a priori, indistinguishable. Having defined the observation of each PE as either working or faulty, and the system observation as the set of observations of all PEs, we can use any suitable memoryless probability distribution for the sensor model [34], with false positive and false negative rates p_fp and p_fn. The cost function expresses the utility [9] of a configuration, that is, its scientific data throughput.
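Continuing the sketch, a possible per-PE sensor model with false positive and false negative rates, and a throughput-style cost (utility) for a configuration of PEs; again, names and values are illustrative assumptions rather than the paper's actual parameterization.

```python
# Per-PE sensor model P(observation | state), observation in {working, faulty}.
# A PE with a (transient or permanent) fault is truly "faulty".
def pe_sensor(observation, state, p_fp=0.02, p_fn=0.05):
    truly_faulty = state in ("transient_fault", "permanent_fault")
    if truly_faulty:
        return p_fn if observation == "working" else 1 - p_fn
    return p_fp if observation == "faulty" else 1 - p_fp

# Cost (utility) of a system configuration: hypothetical data throughput,
# assuming each healthy PE contributes a fixed number of Mbit/s.
def throughput(pe_states, mbps_per_healthy_pe=10.0):
    return mbps_per_healthy_pe * sum(s == "healthy" for s in pe_states)

print(throughput(["healthy", "healthy", "transient_fault", "permanent_fault"]))
```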

Fig 4 offers a visual reference for a subset of these mappings. To associate meaningful semantics with the resilient properties, we introduce a helper cost function defined in terms of the theoretical maximum throughput attainable by the satellite.

Using this helper cost function, computing the probability distributions of f-functionality and l-resistance reveals the expected and worst-case data throughput sent to Earth, respectively. The advantage of using the algorithm proposed in this work (see S1 Appendix) to assess these properties is the ability to keep the computation within reasonable time limits, even for relatively long traces and complex models. The same problem would simply be intractable for any other algorithm that requires evaluating the 10^75 entries in the conditional joint probability distribution (CJPD). Because of the exponential growth of the CJPD, the savings are remarkable (in the order of 10^59) even for properties that are satisfied by a large number of state trajectories.
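To illustrate why the full CJPD never needs to be materialized, the sketch below shows a generic way to evaluate a cumulative-cost query in time linear in T: a forward recursion over (current state, running cost) pairs instead of an enumeration of trajectories. It assumes discrete, bounded costs and is a textbook-style dynamic-programming illustration, not the S1 Appendix algorithm.

```python
from collections import defaultdict

def cumulative_cost_distribution(pi, A, B, cost, observations):
    """P(sum of costs = k | O_1..T) for a discrete c-HMM, computed by a
    forward recursion over (state, running total) pairs.  pi: dict state->p,
    A: dict state->dict, B: dict state->dict obs->p, cost: dict state->int."""
    # alpha[(s, k)] = P(S_t = s, cumulative cost = k, O_1..t)
    alpha = defaultdict(float)
    for s, p in pi.items():
        alpha[(s, cost[s])] += p * B[s][observations[0]]
    for o in observations[1:]:
        nxt = defaultdict(float)
        for (s1, k), a in alpha.items():
            for s2, p12 in A[s1].items():
                nxt[(s2, k + cost[s2])] += a * p12 * B[s2][o]
        alpha = nxt
    evidence = sum(alpha.values())
    dist = defaultdict(float)
    for (s, k), a in alpha.items():
        dist[k] += a / evidence
    return dict(dist)

# The recursion touches at most |S| * (T * max_cost + 1) entries, whereas
# enumerating state trajectories would require |S|^T terms.
```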

The potential of self-aware computing is not limited to satellites. Studying the challenges of Mars rover operations, Gaines et al. describe the typical command cycle as follows. Commands are uploaded to the rover every sol during an overnight orbiter pass or direct-from-Earth at local midmorning. Data that are necessary to plan the activities of the following sol are then returned via an orbiter telecom pass in the midafternoon. Non-essential information is stored and returned during the following overnight orbiter pass [42]. As a consequence, if the rover fails to send the required information during the correct orbiter pass, the tactical team might not be able to plan the activities for the following sol.

This is an issue that will be aggravated in the near future, as the current fleet of sun-synchronous orbiters is replaced with non-sun-synchronous orbiters [41]. Yet, self-aware computing might have the potential to further improve performance, e.g., through an on-board classifier that assesses which data must be returned during the correct orbiter pass. The sensitivity and specificity of the classifier are affected by several factors, including the noisiness of the on-board sensors and the time horizon of the assessment algorithm.

However, the benefits of such an approach would persist even assuming relatively weak classifier performance.

As many-robot systems, or robot swarms, become more and more pervasive, researchers must devise new, efficient ways to control and coordinate them [43]. In the fourth practical scenario, we test our framework in the context of the networked multi-robot system of Fig 5, where robots move independently and have a limited communication range.

We remark that computing the CJPD is already a more efficient approach than blindly expanding the entire joint probability distribution of a c-HMM.

We analyze the scenario with two examples: a small one with 4 robots in a 20 cm by 20 cm arena, and a large one with 20 robots in a 40 cm by 40 cm arena.

A robotic swarm, as described in the fourth application scenario. Each robot possesses a position, a velocity, a state (the number of its neighbors), and a partial observation of its neighborhood, all evolving over time.

The inference algorithm is executed locally to assess the probability of losing connectivity with respect to the rest of the swarm at each time step. We want to assess the probabilities of two such resilient properties. Table 1 reports the results of specific experiments, taking typical series of observations as inputs. It is worth noting that, because the proposed approach is exact, the obtained probability values are identical to those computed through the CJPD.
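As a sketch of what such a local check could look like, the code below models a single robot's neighbor count as a small chain and computes the probability that the robot is disconnected (zero neighbors) at some point within the horizon, given noisy neighbor-count observations. The chain, its parameters, and this reading of "losing connectivity" are illustrative assumptions.

```python
import numpy as np

# Hypothetical per-robot model: the state is the number of neighbors (0..3).
N = 4                                   # neighbor counts 0, 1, 2, 3
pi_r = np.full(N, 1.0 / N)              # uniform initial belief
# Neighbors tend to be gained or lost one at a time (illustrative values).
A_r = np.array([[0.7, 0.3, 0.0, 0.0],
                [0.2, 0.6, 0.2, 0.0],
                [0.0, 0.2, 0.6, 0.2],
                [0.0, 0.0, 0.3, 0.7]])
# Noisy neighbor counting: correct with prob 0.8, off by one otherwise.
B_r = np.array([[0.8, 0.2, 0.0, 0.0],
                [0.1, 0.8, 0.1, 0.0],
                [0.0, 0.1, 0.8, 0.1],
                [0.0, 0.0, 0.2, 0.8]])

def prob_ever_disconnected(observations):
    """P(neighbor count hits 0 at some t <= T | O_1..T), via the same
    forward-style recursion used earlier, restricted to 'never 0' paths."""
    never0 = pi_r * B_r[:, observations[0]]
    never0[0] = 0.0                                   # exclude a disconnected start
    total = pi_r * B_r[:, observations[0]]
    for o in observations[1:]:
        never0 = (never0 @ A_r) * B_r[:, o]
        never0[0] = 0.0
        total = (total @ A_r) * B_r[:, o]
    return 1.0 - never0.sum() / total.sum()

print(prob_ever_disconnected([2, 1, 1, 0, 1]))
```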

Fig 6 compares the time delay of the proposed approach with that required by the computation of the CJPD.

For large scenarios, the CJPD delay rapidly goes off the chart. The proposed algorithm, instead, can deal with 20 robots with a delay comparable to, but smaller than, the one required by the computation of the CJPD in the 4-robot scenario. In particular, we observe that the advantage of the proposed approach over the use of the CJPD actually increases with the length of the time horizon and the number of robots in the scenario.

Being able to perform exact inference in only seconds in large scenarios accounting for tens of robots, this approach can effectively be implemented in several practical multi-robot applications, such as target tracking, area coverage, or task allocation. Unlike previous work on resilient robot formations and partially observable robot swarms [16], the proposed approach does not restrict the robots' movement to configurations whose resilience can be established a priori; rather, it allows the a posteriori assessment of resilience in a distributed fashion.

In summary, this work proposed a probabilistic approach to the modelling and assessment of resilience. To do so, we defined the extended framework of c-HMMs. In S1 Appendix, we outline a state-of-the-art inference algorithm able to answer the queries required for the probabilistic property checking of resilience over this model. Furthermore, we studied the space and time complexity of this inference algorithm as well as that of property checking.

We demonstrated the practical applicability of our approach in four qualitative and quantitative scenarios of growing technical complexity. The experimental results show that, even in small domains, the proposed approach is approximately four orders of magnitude faster than expanding the full conditional joint probability distribution.