Everything is a stream
There is a nice consequence of our fixation on time-oriented data: we can treat everything as a data stream. Any input stream processed by our framework is required to include a time variable, from a simple timestamp field to more complex time structures. We use this variable to organize the data, but also to establish a shared dimension for connecting data elements from different sources. This approach lets us operate on multiple streams and implement interesting scenarios using various stream transformations (e.g. time aggregation) and analysis methods.
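As a minimal sketch of one such transformation, the snippet below aggregates a stream of timestamped values by calendar month. The `(timestamp, value)` event shape, the function name, and the monthly granularity are illustrative assumptions, not part of the framework itself:

```python
from collections import defaultdict
from datetime import datetime

def aggregate_monthly(events):
    """Group (timestamp, value) events by calendar month and average them.

    A sketch of time aggregation; any aggregate (sum, max, ...) could
    be swapped in for the mean used here.
    """
    buckets = defaultdict(list)
    for ts, value in events:
        buckets[(ts.year, ts.month)].append(value)
    return {month: sum(vals) / len(vals) for month, vals in sorted(buckets.items())}

events = [
    (datetime(2023, 1, 5), 10.0),
    (datetime(2023, 1, 20), 14.0),
    (datetime(2023, 2, 3), 8.0),
]
print(aggregate_monthly(events))  # {(2023, 1): 12.0, (2023, 2): 8.0}
```

Because every stream carries the same time dimension, the same bucketing key can also serve to align two different streams before joining them.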
We define data streams as sequences of events (or state snapshots/changes) that are time ordered, i.e. an event with a timestamp earlier than that of the previous one is flagged as a potential error. The simplest example of such a data stream is a univariate time series: a sequence of timestamps, each paired with a single value, often with additional requirements such as equal spacing between consecutive data points. More complex objects in a data stream can include several variables describing an event, often closely related (e.g. the time and distance of a running workout, enabling calculation of average speed). In many practical application scenarios, these objects can be models (e.g. snapshots of a system's attack surface) or collections of unstructured data (e.g. articles published on a subject).
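The ordering check described above can be sketched in a few lines. Here events are assumed to be `(timestamp, payload)` pairs; the function name is hypothetical:

```python
from datetime import datetime

def flag_out_of_order(events):
    """Return indices of events whose timestamp precedes the previous one.

    Each flagged index marks a potential error in the stream's time order.
    """
    flagged = []
    for i in range(1, len(events)):
        if events[i][0] < events[i - 1][0]:
            flagged.append(i)
    return flagged

stream = [
    (datetime(2023, 1, 1), "a"),
    (datetime(2023, 1, 3), "b"),
    (datetime(2023, 1, 2), "c"),  # earlier than its predecessor -> flagged
]
print(flag_out_of_order(stream))  # [2]
```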
The beauty of a time variable being present in all our data is that it can become a shared dimension enabling integration of streams from different sources, building a bigger (or just cleaner) picture from seemingly disconnected elements. Let's illustrate this with a case study based on very simple historical rent data. The input stream includes monthly rent payments plus additional variables: the location of the rented place and tags indicating lease renewal events. An interim visualization of the rent series is presented in Figure 1, with location and renewal events added as annotations (Point Marker artifacts).
The history of rent payments is an obvious example of time-oriented data, but the more interesting parts of this case study are the location and lease renewal variables, all connected by time (of monthly granularity). Let's start with the location variable: we can detect when its value changes and automatically create an annotation (Time Marker artifact) indicating a move between apartments. The lease renewal variable, on the other hand, can be used to identify the critical data points at which rent is most likely to change, and to produce a new series with the percentage difference between consecutive renewals. Figure 2 shows the results of these operations.
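Both derivations can be sketched as below. The record layout, field names, and sample values are illustrative assumptions, not the framework's actual API or the real data behind the figures:

```python
def location_change_markers(records):
    """Return months where the location value changes (Time Marker candidates)."""
    markers = []
    prev = None
    for r in records:
        if prev is not None and r["location"] != prev:
            markers.append(r["month"])
        prev = r["location"]
    return markers

def renewal_pct_changes(records):
    """Percentage difference in rent between consecutive lease-renewal events."""
    renewals = [r for r in records if r["renewal"]]
    return [
        (b["month"], round(100.0 * (b["rent"] - a["rent"]) / a["rent"], 1))
        for a, b in zip(renewals, renewals[1:])
    ]

rent_stream = [
    {"month": "2021-01", "rent": 1000, "location": "A", "renewal": True},
    {"month": "2021-02", "rent": 1000, "location": "A", "renewal": False},
    {"month": "2022-01", "rent": 1100, "location": "A", "renewal": True},
    {"month": "2022-06", "rent": 1250, "location": "B", "renewal": True},
]
print(location_change_markers(rent_stream))  # ['2022-06']
print(renewal_pct_changes(rent_stream))      # [('2022-01', 10.0), ('2022-06', 13.6)]
```

Note that both derived outputs keep the time variable, so they remain streams themselves and can be annotated or combined further.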
The requirement of a time variable is one of the foundations of the framework we are creating. Time gives order to a stream of events, but it also provides a frame of reference for data analysis. We can use time variables to manage multiple streams effectively, create useful new streams from multiple input sources, or extract key information from streams of unstructured data. There are, of course, challenges related to differing time granularities, irregular sampling, and overlapping intervals and data points. All of these problems, however, can be solved with the right technology.