Blog

A milestone in development

Friday, July 31, 2015
The Vital Consortium

Reporting our technical progress to date

First integrated prototype

The VITAL Consortium is pleased to report that we have successfully integrated the first prototype of the VITAL framework. This is a significant first step towards a platform that will provide a cost-efficient system for developing, deploying and operating next-generation Smart City applications.

This prototype, though still at an early stage, already offers some key functionalities:

  • The “Platform Provider Interface” function offers a set of RESTful web services to enable platform-agnostic integration of data sources and internet-connected objects, as well as complete IoT systems
  • The “Data Management Service” provides data storage and access to a wide range of metadata about integrated systems
  • A range of Horizontal Added-Value functionalities, including:
    – “Service Discovery”, offering discovery of ICOs, data streams and services
    – “Complex Event Processing”, offering event processing over observation streams
    – “Adaptive Filtering”, which provides data and event filtering
    – The “Orchestrator”, which provides management and orchestration of IoT services within the VITAL platform
  • A range of development and deployment tools to facilitate building and deploying Smart City applications in the VITAL framework

The prototype allows us to develop two different use case scenarios, in Istanbul and London. Our focus in Istanbul is on Smart Traffic Management, while the “Spacefinder” mobile app to be trialed in the Camden Town area of London focuses initially on offering more flexible and useful ad hoc workspace options for the fast-growing number of mobile workers.

Two further iterations of the VITAL framework are due to be deployed in the coming months.

Platform Provider Interfaces: adapting legacy IoT systems to VITAL

VITAL has published specifications for a new functionality called a “Platform Provider Interface” (PPI), which defines the interface between the VITAL platform and third-party IoT systems (including external IoT platforms and applications). Implementing a PPI is all that IoT platform and application providers need to do to make their platforms and/or applications compliant with VITAL.

Ease of use is at the core of the VITAL philosophy. In the current iteration, the PPI is defined as a set of RESTful web services, which are marked either mandatory or optional. By using popular web standards such as REST and JSON, we make implementation of the interface significantly easier. By classifying PPI services as optional or mandatory, we minimise the effort required from IoT system providers to integrate their systems into VITAL.

IoT system providers who wish to connect their platforms and/or applications into VITAL must, at a minimum, implement the subset of PPI services marked as mandatory. Doing so not only provisions a set of RESTful web services, but also handles the more tedious task of transforming data and metadata from their existing models to the VITAL data models and back again, supporting ongoing interoperability.

The process of implementing a PPI can be summarized in the following steps.

Step 1: Describe the managed sensors
The provider must employ the VITAL ontology to describe the sensors managed by its IoT system. Information that can be conveyed about a sensor includes its type, the properties it observes, accuracy, reliability, the quality of its network connection and much more besides. With mobile sensors, VITAL can learn movement patterns and predict location using the VITAL ontology. The provider can optionally enhance the particular VITAL installation by extending the VITAL ontology to accommodate, for example, sensor or property types that are not already included. The final task in this step is to implement the RESTful web service (i.e. the PPI primitive) that will make all this information available in the VITAL platform.
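
As an illustration, a sensor description of the kind served by this PPI primitive might be assembled as follows. The field names and context URI below are placeholders, not the actual VITAL ontology terms:

```javascript
// Hypothetical sketch of a sensor description of the kind a PPI
// primitive would serve. Field names and the context URI are
// placeholders, not the actual VITAL ontology terms.
function describeSensor(id, type, observedProperty, lat, lon) {
  return {
    "@context": "http://example.org/vital-context", // placeholder context
    "id": id,
    "type": type,                 // e.g. a traffic sensor
    "observes": observedProperty, // the property the sensor measures
    "location": { "lat": lat, "lon": lon }
  };
}

// A description the PPI could return for a managed traffic sensor:
const desc = describeSensor("urn:sensor:traffic-42", "TrafficSensor",
                            "AverageSpeed", 41.01, 28.97);
console.log(JSON.stringify(desc));
```

The RESTful web service implemented in this step would serialize documents of this kind and return them to the VITAL platform on request.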

Step 2: Describe observations
As part of this step, the IoT system provider needs to map sensor measurements to observations. In VITAL’s Semantic Sensor Network-based ontology, an observation is a single value observed for a single property of a single feature of interest by a single sensor. Thus, if observation of multiple features of interest or of multiple properties is called for, this might require a new property or feature type to be defined. To complete Step 2, the IoT system provider must implement the RESTful web services required to support at least one of the two observation access mechanisms (push or pull).
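
The one-value-per-observation rule can be sketched as follows: a platform reading that covers two properties must be split into two observations. Field names below loosely follow SSN terms and are illustrative, not the exact VITAL model:

```javascript
// Illustrative sketch of the one-value-per-observation rule: a raw
// reading covering two properties is split into two observations.
// Field names loosely follow SSN terms and are not the exact VITAL model.
function toObservation(sensor, property, featureOfInterest, value, time) {
  return {
    sensor,                     // the single observing sensor
    observedProperty: property, // the single observed property
    featureOfInterest,          // the single feature of interest
    result: { value },          // the single observed value
    resultTime: time
  };
}

// A raw reading from the provider's platform, covering two properties:
const reading = { sensor: "urn:sensor:weather-7", temp: 21.5,
                  humidity: 0.6, time: "2015-07-31T10:00:00Z" };
const observations = [
  toObservation(reading.sensor, "Temperature", "OutdoorAir", reading.temp, reading.time),
  toObservation(reading.sensor, "Humidity", "OutdoorAir", reading.humidity, reading.time)
];
console.log(observations.length); // 2
```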

Step 3: Describe the provided services
To make one or more services available through the VITAL platform, the IoT system provider must first describe them using the VITAL ontology. The type of service, how to access it and the expected input and output are just some of the metadata that may be specified and made available for the services the IoT system provides. The provider must implement the relevant PPI primitive to give VITAL access to this information.

Step 4: Describe the system
The final step is to implement the RESTful web service that gives VITAL access to the IoT system’s metadata – for example, about the person responsible for operating it.

Development and deployment tools

Within the platform we have created an environment for assembling IoT applications that take advantage of the functionalities of the VITAL architecture. This environment exploits the semantic modeling and added-value functionalities of the architecture to facilitate development of semantically interoperable applications. It is built using Node-RED, the popular open-source visual tool for wiring the Internet of Things.

The two fundamental concepts in Node-RED are nodes and flows. A node is a well-defined piece of functionality which, depending on the number of its input and output ports, can be one of the following: (a) an input node, which has one or more output ports; (b) an output node, which has one input port; (c) a function node, which has one input port and one or more output ports. Nodes can be wired together into a flow, with input nodes at the start, output nodes at the end, and function nodes in between. Flows can be regarded as programs, and nodes as the blocks used to build them. The Node-RED platform comprises two components: a browser-based editor used for designing flows (i.e. programs), and a light-weight runtime where they can be deployed and executed. The flow editor palette can be extended with new nodes.
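
A function node, for example, is defined by a short piece of JavaScript that transforms the message passed along the flow. The sketch below mimics that contract outside Node-RED; in the flow editor only the function body is typed in, and the wrapper here exists so the snippet can run standalone:

```javascript
// Sketch of a Node-RED function node. In the flow editor a developer
// types only the body (the two lines inside the function); the wrapper
// below exists so the snippet can run outside Node-RED.
function functionNode(msg) {
  // Transform the message payload and pass the message downstream.
  msg.payload = msg.payload.toUpperCase();
  return msg;
}

// Simulate a message arriving on the node's input port:
const out = functionNode({ payload: "hello flows" });
console.log(out.payload); // HELLO FLOWS
```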

VITAL’s development and deployment environment allows developers to use all the capabilities which the architecture offers for implementing smart city applications. Within the environment we have the VITAL development tool, a single entry point for developers who want to create IoT applications that use multiple IoT platforms, architectures and business contexts. These capabilities, and all mechanisms that are to be integrated in the tool, are exposed through Virtualized Unified Access Interfaces (VUAIs), which are implemented as RESTful web services. This makes Node-RED an ideal basis for implementing the VITAL development tool.

The development tool has been further enhanced with various nodes relating to the architecture, as well as with R, the programming language and environment for statistical computing and graphics developed by the R Project. The result is an easy-to-use tool that enables its users to perform tasks relating to the semantic interoperability architecture (for example, retrieving IoT system metadata), along with data analysis tasks such as data value prediction or clustering. Thus, in addition to the default nodes present in Node-RED, the development tool palette contains one node for each piece of functionality offered by components of VITAL. Developers can exploit any such VITAL functionality simply by adding the relevant node to their flow and setting its properties. As the flow runs, the node interacts with the appropriate components of the platform to perform its tasks. Nodes thus facilitate the use of VITAL capabilities by hiding low-level implementation and formatting details from the developer.

In order to validate these tools, we have implemented workflows using data from the two smart cities represented in the VITAL consortium: Camden Town and Istanbul. In one of these workflows, a web service accepts HTTP GET requests with the URI of a traffic sensor in its parameters, and responds with a static HTML page displaying a map marking the location of that sensor. We use the PPI to connect the Istanbul traffic sensors to this web service.

This flow, illustrated in Fig. 1, comprises six nodes:

  1. An http node accepts HTTP GET requests at /locate-sensor
  2. A function node extracts the sensor URI from the query string of the request
  3. A sensors node retrieves metadata about the sensor with that URI
  4. A function node extracts the name, longitude and latitude of the sensor from the retrieved metadata
  5. A template node creates an HTML page with the name of the sensor in the title, and a map with a marker at the location of the sensor
  6. An http response node responds to the initial HTTP request with that HTML page
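
Node 4 in the flow above can be sketched as a Node-RED-style function node. The metadata shape used here is illustrative, not the exact structure returned by the sensors node:

```javascript
// Hedged sketch of node 4 in the flow above: a function node that pulls
// the sensor's name and coordinates out of retrieved metadata. The
// metadata shape used here is illustrative, not the exact structure
// returned by the sensors node.
function extractLocation(msg) {
  const meta = msg.payload;
  msg.payload = {
    name: meta.name,
    lat: meta.location.lat,
    lon: meta.location.lon
  };
  return msg;
}

// Example message, as it might leave the sensors node:
const result = extractLocation({
  payload: { name: "E5-Avcilar", location: { lat: 40.98, lon: 28.72 } }
});
console.log(result.payload.name); // E5-Avcilar
```

The template node downstream would then substitute these three fields into the HTML page it generates.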


Figure 1: Workflow implemented using the VITAL Node-RED tool, showing the location of a specific traffic sensor on a map

 

In summary, VITAL’s development and deployment environment makes optimal use of the VITAL architecture to minimise the time and effort needed for deploying smart city applications.

Horizontal value-added functionalities

Our main objective is to build a platform for semantically interoperable applications in smart cities. To achieve this, we focus on how best to deliver semantic unification of all the heterogeneous types of IoT platforms and applications used in cities today, by offering value-added functionalities such as dynamic discovery of internet-connected objects and services, data filtering, and complex event processing (CEP). Business process management (BPM) functionalities, in turn, help with configuring elementary functions into more complex ones.

These horizontal value-added functionalities leverage the data management service (DMS) virtualization infrastructure to implement mechanisms that are agnostic as regards both platform and business context, thus enabling horizontal integration across the vertical silos.

The main objectives of this layer are:

  • To achieve intelligent semantic discovery of ICOs and their services, regardless of the underlying IoT platforms where they are accessed and the business applications they support
  • To design and implement virtualized filtering mechanisms that can operate across multiple IoT platforms and filter events from diverse applications
  • To design and implement platform-agnostic mechanisms for the complex processing and management of IoT-related events
  • To research and implement novel techniques for the business management of sensor-driven processes to take account of the peculiarities of the IoT environment

Discovery and filtering

The service discovery (SD) module is designed to discover ICOs, data streams and services that are horizontally integrated in the VITAL platform. It queries the IoT resources stored by the data management service (DMS) in response to requests from other modules, such as CEP and orchestration.

The first version of the SD module offers discovery of ICOs only. Discovery of data streams and services will be implemented in the final version, scheduled for release in the coming months – details are given in Project Deliverable 4.1.1.

As part of the value-added services, the functionalities of the filtering module are accessed directly through the Virtualized Unified Access Interfaces (VUAIs).

Figure 2 illustrates the interaction between the orchestrator, discovery and filtering modules. In the example shown, the orchestrator directly activates filtering in order to resample the temperature value in Room 105 every 30 minutes over a defined, previously recorded period. The filtering module applies its functions and sends the results back to the orchestrator.
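
The resampling in this example can be illustrated with a small sketch that keeps one observation per 30-minute bucket from a recorded stream. This mimics the behaviour described above and is not the filtering module's actual API:

```javascript
// Illustrative sketch of the resampling described above: keep one
// observation per 30-minute bucket from a recorded stream. This mimics
// the behaviour only, not the filtering module's actual API.
function resample(observations, intervalMs) {
  const buckets = new Map();
  for (const obs of observations) {
    const bucket = Math.floor(Date.parse(obs.time) / intervalMs);
    if (!buckets.has(bucket)) buckets.set(bucket, obs); // first value per bucket
  }
  return [...buckets.values()];
}

// A previously recorded stream of temperature observations:
const recorded = [
  { time: "2015-07-31T10:00:00Z", value: 21.0 },
  { time: "2015-07-31T10:10:00Z", value: 21.4 },
  { time: "2015-07-31T10:40:00Z", value: 22.1 }
];
const resampled = resample(recorded, 30 * 60 * 1000);
console.log(resampled.length); // 2 (10:00 and 10:10 fall in the same bucket)
```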

Figure 2: Example of filtering interactions

 

Complex event processing (CEP)

Complex event processing is an integral part of the horizontal value-added functionalities of the VITAL framework. Its function is to manage event processing across observation streams. Version 1 of the VITAL CEP has been designed and developed in the current year of the project. This delivers basic event processing as well as various types of data aggregation. A “data subscriber” role has been defined in the CEP for consuming data flows from VITAL data management services. Like all other VITAL modules, the CEP is accessed and managed through VUAIs.
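
As an illustration of the kind of processing involved, the sketch below emits an event whenever a sliding-window average over an observation stream exceeds a threshold. It is a toy example of CEP-style aggregation, not the VITAL CEP's actual interface:

```javascript
// Toy example of CEP-style aggregation over an observation stream:
// emit an event whenever a sliding-window average exceeds a threshold.
// This illustrates the idea only, not the VITAL CEP's actual interface.
function detectHighAverage(stream, windowSize, threshold) {
  const events = [];
  for (let i = 0; i + windowSize <= stream.length; i++) {
    const window = stream.slice(i, i + windowSize);
    const avg = window.reduce((a, b) => a + b, 0) / windowSize;
    if (avg > threshold) events.push({ endIndex: i + windowSize - 1, avg });
  }
  return events;
}

// Three of the four sliding windows over this stream exceed the threshold:
const events = detectHighAverage([10, 12, 30, 35, 11], 2, 20);
console.log(events.length); // 3
```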

Figure 3: Architecture of the CEP module

 

Business process management

VITAL’s business process management module, known as the “orchestrator”, was designed in the current year of the project and a first prototype has been implemented. The rationale behind managing and orchestrating IoT services in the platform is as follows.

  • Composition
    The aim of VITAL is to enable developers to rapidly develop new functionalities, applications and services. This requires structured mechanisms for producing composite meta-services.
  • Re-use
    To keep software development and management costs down, the platform seeks to maximize the potential for reuse. The orchestrator provides a mechanism for combining multiple services into more complex services, while structuring the latter as services that can be readily reused in application development.
  • Support for data processing and analytics
    The orchestrator also allows development of the more complex data processing functions required by data processing frameworks, including newly emerging ones for analytics and Big Data processing.

To support data-driven applications in smart cities, the orchestrator provides a layer where data captured from various IoT systems can be processed and structured in the form of reusable functions. This represents a key link in the data processing chain in smart cities. An important feature is that the orchestrator also implements functionalities typical in Big Data systems, for example for combining elementary processing functions into more complex and sophisticated ones.

Figure 4: VITAL Orchestrator architecture

 

Architecturally, the orchestrator sits on top of the other components, acting as the bridge between users or external applications and the systems, services and data that VITAL offers. Direct access to VITAL modules is supported, but the orchestrator provides access not only to single modules but also to complex functionalities that draw on larger sets of services and data. The building blocks that deliver this are as follows.

The workflow engine is at the core of the system, providing a scripting engine to execute code dynamically, whether as standalone operations or as part of more complex workflows.

Operations are the smallest functional elements in the orchestrator. Smallest, however, does not necessarily mean simple: the tasks they perform range from simple ones, such as retrieving an observation from a specific sensor, to performing fuzzy-logic data analysis on complex data streams.

Workflows are defined in the orchestrator as a composition of operations. Users can select and combine compatible operations to produce complex workflows. Workflows can then be tested for their functionality prior to deployment as new meta-services.

Adapters are software modules that act as interfaces with external components – including for example VITAL systems, services, sensors, databases and logging. In essence, they provide a utility toolbox for operations that hide the internal complexities of discovering, connecting, authenticating, parsing and retrieving data and observations from external sources.
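
To illustrate how these building blocks fit together, the sketch below composes two hypothetical operations into a workflow. The operation names and the composition mechanism are illustrative, not the orchestrator's actual scripting engine:

```javascript
// Hypothetical sketch of the orchestrator's composition model: a workflow
// chains operations, each a small function from input to output. Names
// and mechanism are illustrative, not the actual scripting engine.
const operations = {
  // Stand-in for an adapter-backed operation fetching temperatures:
  fetchTemperatures: () => [21.0, 22.5, 20.5],
  // An analysis operation computing the mean of its input:
  average: (values) => values.reduce((a, b) => a + b, 0) / values.length
};

// Run a workflow by threading data through the named operations in order.
function runWorkflow(opNames, input) {
  return opNames.reduce((data, name) => operations[name](data), input);
}

// A meta-service for "average temperature in an area":
const avgTemp = runWorkflow(["fetchTemperatures", "average"]);
console.log(avgTemp.toFixed(2)); // 21.33
```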

Figure 5 shows a simple example of a workflow for a meta-service to calculate the average temperature in an area:


Figure 5: Workflow example showing the current version of the user interface for editing and testing workflows