Oh, there are lots of ways to measure resilience in particular aspects of particular systems. Like if I'm building a phone network, I might want to know how frequently a call fails, either by getting dropped or by failing to connect in the first place. I might also measure how call failures increase when too many people pack into one place (like a soccer match), when atmospheric conditions degrade (like a thunderstorm), or when a phone goes haywire and starts broadcasting all the time.
But these sorts of measures leave a lot to be desired, since they only look at particular aspects of a system's behavior and don't have anything to say about what happens when we link systems together to form a bigger system. That's why I'm interested in generic ways to measure the resilience of a system. My hope is that if we can design highly resilient components, then when they're connected together to form bigger components, we will more easily be able to ensure that those larger components are resilient as well.
Even better would be if we could get compositional proofs, so that we know that certain types of composition are guaranteed to produce resilient systems, just as there are compositions of linear systems that produce linear systems, compositions of digital systems that produce digital systems, and so on. This is the type of foundation that lays the groundwork for explosions in the complexity and variety of artifacts we can engineer, just as we've seen previously with digital computers and clockwork mechanical systems. I want to see the same thing happen for systems that live in more open worlds, so that we can have an infrastructure for our civilization that helps to maintain itself and that can tolerate more of the insults that we crazy humans throw at it.
But first, small and humble steps. In order to even formulate these problems of resilience sanely, we need to better quantify what this "resilience" thing might mean. In my paper at the Evaluation for SASO workshop at IEEE SASO, I take a crack at the problem, proposing a way to quantify "graceful degradation" using dimensionless numbers. The notion of graceful degradation is an important one for understanding resilience, because it gets at the idea of margins of error in the operation of a system. When you push on a system that degrades gracefully, you start seeing problems in its behavior long before it collapses. For example, on an overloaded internet connection that shows graceful degradation, things start going slower and slower, rather than jumping directly from fast communication to none at all.
In my paper, I propose that we can measure how gracefully a system degrades in a relatively simple manner. Consider the space formed by all the parameters describing the structure of a system and of the environment in which it operates. We break that space into three parts: the acceptable region where things are going well, the failing region where things have collapsed entirely, and the degraded region in between.
If we draw a line slicing through this space, then we get a sequence of intervals of acceptable, degraded, and failing behavior. We can then compare the length of each acceptable interval with the lengths of the degraded intervals on its borders. The longer the degraded intervals that separate acceptable from failing intervals are, relative to the acceptable intervals they border, the more gracefully the system degrades. So in order to find the weakest point of a system, we just look for the lowest ratio of degraded length to acceptable length on any line through the space.
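To make that concrete, here's a rough sketch of how the ratio could be computed along a single line, assuming a hypothetical classify function that labels each point of the parameter space as "acceptable", "degraded", or "failing" (the names, the sampling scheme, and the Python are mine for illustration, not the exact formulation in the paper):

    # A rough sketch of the degraded/acceptable ratio described above.
    # `classify` is a hypothetical stand-in that labels a point of the
    # parameter space; the real metric in the paper may differ in detail.
    import numpy as np

    def intervals_along_line(classify, start, end, samples=1000):
        """Walk a straight line through parameter space and return the
        sequence of (label, length) runs encountered along it."""
        start, end = np.asarray(start, float), np.asarray(end, float)
        ts = np.linspace(0.0, 1.0, samples)
        step = np.linalg.norm(end - start) / (samples - 1)
        labels = [classify(start + t * (end - start)) for t in ts]
        runs = []
        for label in labels:
            if runs and runs[-1][0] == label:
                runs[-1][1] += step
            else:
                runs.append([label, step])
        return runs

    def worst_ratio_on_line(classify, start, end, samples=1000):
        """Minimum (degraded length / acceptable length) over every degraded
        interval that separates an acceptable interval from a failing one."""
        runs = intervals_along_line(classify, start, end, samples)
        ratios = []
        for i in range(1, len(runs) - 1):
            label, length = runs[i]
            prev_label, prev_len = runs[i - 1]
            next_label, next_len = runs[i + 1]
            if label != "degraded":
                continue
            if {prev_label, next_label} == {"acceptable", "failing"}:
                acc_len = prev_len if prev_label == "acceptable" else next_len
                ratios.append(length / acc_len)
        return min(ratios) if ratios else float("inf")

    # Toy example: a 1-D "load" parameter where the system is acceptable up
    # to load 10, degraded up to 12, and failing beyond that.
    def classify(p):
        load = p[0]
        return "acceptable" if load <= 10 else ("degraded" if load <= 12 else "failing")

    print(worst_ratio_on_line(classify, start=[0.0], end=[20.0]))  # roughly 2/10 = 0.2

To get the metric for the whole system, you would then take the minimum of this ratio over lines through the space (in practice, over some sampling of lines), which is where the weakest point shows up.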
What this metric really tells us is how painful the tradeoff is between speed of adaptation and safety of adaptation. The lower the number, the easier it is for changes to drive the system into failure before it can effectively react, or for the system to accidentally drive itself off the cliff. The higher the number, the more margin for error there is.
So, here's a start. There are scads of open questions about how to apply this metric, how to understand what it's telling us, etc., but it may be a good point to start from, since it can pull out the weak points of a system and tell us what they are...
2 comments:
Are you not going to have severe problems relating to measurement? It doesn't seem like this is going to be invariant to various simple transformations of the systems that don't really affect its behavior. [This is based on the description in your blog post; I have not read the paper.]
I do talk about this some in the paper... one of the advantages of the way I've set up the proposed metric is that it is invariant to linear transformations. So changes of units, scaling, etc., should not affect the metric.
Nonlinear transformations are a different matter. For example, switching a measurement from side length to area or volume, or switching from a linear scale to a log scale, does change the values.
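As a quick made-up illustration (the boundary values are invented for this comment, not taken from the paper), suppose that along one measured axis the system is acceptable up to 10 and degraded up to 12:

    # Numeric illustration of the point above, with invented interval
    # boundaries along one measured axis: acceptable up to 10, degraded up to 12.
    def ratio(acceptable_end, degraded_end, acceptable_start=0.0):
        """Degraded interval length divided by the adjacent acceptable interval length."""
        return (degraded_end - acceptable_end) / (acceptable_end - acceptable_start)

    print(ratio(10, 12))          # 0.2 in the original units
    print(ratio(2 * 10, 2 * 12))  # 0.2 again: rescaling the units changes nothing
    print(ratio(10**3, 12**3))    # 0.728: cubing (e.g. length -> volume) changes the value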
I'm not sure that's a bad thing, though, if the measures you choose are the ones that you are actually regulating.
For example, I'm intending to apply this metric to the robotic redesign problem we've been working on in the MADV project. There, we have controllers that will need to act semi-independently on the robot's mass and on various dimensions. These are, of course, coupled, but since we're acting on them semi-independently, I want to know which type of incremental step is most likely to get us into trouble, whether that be mass (cubic) or side length (linear). My guess is that we'll find mass to be more sensitive when going down and side length to be more sensitive when going up...
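To give a rough sense of the cubic coupling, here's a toy calculation that assumes the robot scales isometrically, so mass goes as side length cubed (an assumption for illustration, not a claim about the actual MADV designs):

    # Toy illustration of the cubic coupling, assuming isometric scaling:
    # mass proportional to side length cubed.
    side = 1.0
    mass = side ** 3

    for step in (+0.10, -0.10):          # a 10% step up or down in side length
        new_side = side * (1 + step)
        new_mass = new_side ** 3
        print(f"side {step:+.0%} -> mass {new_mass / mass - 1:+.1%}")
    # side +10% -> mass +33.1%
    # side -10% -> mass -27.1%

So a 10% step in side length corresponds to roughly a 30% step in mass, which is the kind of asymmetry I'd want this metric to surface.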