Energy Aware Early Detection

Goal

Implementation of the Energy Aware Early Detection (EAED) protocol and testing it with a real application.

Introduction

This research is based on initial ideas first presented in the PIMRC'06 conference paper [2]. A comprehensive description can be found in NC54326 [3].

The goal of the work is to implement and test a packet classification and queuing algorithm called Energy Aware Early Detection (EAED). The algorithm is targeted at routing in wireless ad-hoc or mesh networks. In a nutshell, selected packets are dropped from the queue based on the packet type, the length of the packet queue, and the remaining battery capacity of the wireless router. Using the algorithm saves energy on the devices and thus increases the lifetime of the network through longer network connectivity.
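To make the drop decision concrete, here is a minimal Python sketch of an EAED-style rule. The RED-like queue thresholds, the battery scaling factor, and the per-type weights are illustrative assumptions for this sketch, not the exact rule from [2, 3].

```python
import random

# Illustrative EAED-style drop decision (assumed parameters, not the
# exact rule from the EAED papers). A RED-like drop probability grows
# with queue occupancy, and a low battery makes droppable packets more
# likely to be discarded early.

MIN_THRESH = 20      # queue length below which nothing is dropped (assumed)
MAX_THRESH = 80      # queue length above which everything is dropped (assumed)
TYPE_WEIGHT = {      # assumed per-type drop weights: higher = drop sooner
    "control": 0.0,      # never drop routing/control traffic
    "video": 0.5,
    "best_effort": 1.0,
}

def drop_probability(queue_len: int, battery: float, pkt_type: str) -> float:
    """battery is the remaining capacity, in [0.0, 1.0]."""
    if queue_len <= MIN_THRESH:
        base = 0.0
    elif queue_len >= MAX_THRESH:
        base = 1.0
    else:
        base = (queue_len - MIN_THRESH) / (MAX_THRESH - MIN_THRESH)
    # The lower the battery, the more aggressively droppable packets go.
    energy_factor = 1.0 + (1.0 - battery)   # 1.0 (full) to 2.0 (empty)
    return min(1.0, base * energy_factor * TYPE_WEIGHT.get(pkt_type, 1.0))

def should_drop(queue_len: int, battery: float, pkt_type: str) -> bool:
    return random.random() < drop_probability(queue_len, battery, pkt_type)

# Example: a best-effort packet, half-full queue, 30% battery remaining.
print(should_drop(queue_len=50, battery=0.3, pkt_type="best_effort"))
```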

An initial protocol research description and implementation notes can be found in [1, 4]. In this project, the EAED protocol will be validated with a real application. This means implementing and testing EAED with a real application (voice or video transmission, e.g. Linphone) in a simple 3-node scenario (source-dropper-destination). The Nokia N810 is considered as the target device for the packet-dropper node. One of the tasks here is the formulation of criteria for EAED performance evaluation. Such criteria should take into account quality measures (i.e. the quality of the reconstructed signal) and network lifetime.

Project description

  1. Obtaining initial knowledge about Linux kernel development 
    Subtasks: studying Scratchbox, studying kernel modules, studying Maemo kernel compilation and reflashing
    Duration: 0.5 month
  2. Development of the EAED packet dropper 
    Duration: 1 month
  3. Development of battery management routines 
    Duration: 1 month
    Milestone 1
    Deliverables: packet dropper source code, battery routines source code
  4. Creation of test-bed 
    Subtasks: choosing an application for testing, studying the usage of packet priorities in the chosen application, creating the test-bed (PC1-Maemo-PC2)
    Duration: 1.5 month
  5. Testing of EAED using test-bed 
    Subtasks: running experiments, results analysis, investigation of improvements
    Duration: 1 month
  6. Preparing final report 
    Duration: 1 month
    Milestone 2
    Deliverables: final report describing the experimental results and their analysis
    Project duration: 6 months
    Project start date: 17.08.2009
  7. Long-term plan 
    There are several tasks [3] that could be considered after the pilot project:
    - Integration with energy-aware routing. 
    Testing EAED in more complex scenarios, i.e. with many nodes and a routing protocol. From our point of view, the topic here is the integration of EAED with some energy-aware routing protocol; for example, EAED informs the routing protocol that the battery is low and that packets will be dropped in the near future. 
    - Generalization of the approach to several real-time streams of different priorities. 
    Some current video compression standards (e.g. H.264 SVC) support coding into several streams. These streams have different priorities and provide different quality levels. Currently EAED does not support several real-time streams with different priorities; the goal here is to generalize EAED for such traffic sources.
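As a hedged illustration of this generalization, the toy policy below sheds enhancement layers of a layered (e.g. H.264 SVC) stream as the battery drains; the linear ramp and the 80%/20% thresholds are assumptions for the sketch, not part of EAED.

```python
# Hypothetical rule for layered traffic: keep the base layer as long as
# possible and shed enhancement layers as the battery drops. The
# thresholds and the linear ramp are assumed for illustration only.

def layers_to_forward(battery: float, n_layers: int) -> int:
    """Return how many layers (base + enhancements) to forward."""
    if battery >= 0.8:
        return n_layers              # full quality while battery is high
    if battery <= 0.2:
        return 1                     # base layer only when nearly empty
    frac = (battery - 0.2) / 0.6     # linear ramp in between
    return max(1, round(1 + frac * (n_layers - 1)))

for b in (1.0, 0.5, 0.1):
    print(b, layers_to_forward(b, n_layers=4))
```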

Project members

Supervisor from Nokia: Harri Paloheimo
Supervisor from SUAI: Vladimir Bashun
Graduate student: Dmitry Malichenko

References

[1] Harri Paloheimo, Johanna Nieminen, Energy Aware Early Detection Research Plan

[2] Harri Paloheimo, Jukka Manner, Johanna Nieminen and Antti Ylä-Jääski, Challenges in Packet Scheduling in 4G Wireless Networks, IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), Helsinki, 2006

[3] NC54326: Energy Aware Early Detection

[4] Laura Takkinen, Implementation Notes for Energy Aware Early Detection (EAED)

Distributed Objects Allocation/Retrieval System for Heterogeneous P2P Network

Goal


The goal of the project is the development of management algorithms for content distribution and content search in a heterogeneous p2p network.

Introduction


Consider a heterogeneous network consisting of nodes with different capabilities (e.g. mobile phones, UMPCs, desktops, NAS devices, and RF sensors). A service for storing and searching files in this network is required. To create such a service, two sets of algorithms must be developed: algorithms for working with file descriptors and algorithms for working with the files themselves. A file descriptor consists of two parts: the file description (e.g. the file name and a set of keywords) and the file location. File search is interpreted as searching for the file location using the file description.

Nodes can join and leave the network at arbitrary moments, so it is impossible to establish a central server that stores all file descriptors. The network may consist of many nodes with many files, so it is also impractical to replicate the full list of file descriptors on all nodes.

A possible solution for storing and searching file descriptors is a p2p network. Such a network can be considered a distributed database.
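As a minimal illustration of this "distributed database" view, the sketch below stores file descriptors in a toy hash-ring index keyed by keyword. The hashing scheme, the class and node names, and the single-process stand-in for the overlay are all assumptions for the example, not the proposed design.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class FileDescriptor:
    # Description part: what the file is.
    name: str
    keywords: frozenset
    # Location part: where the file can be fetched from.
    node_id: str

def key_for(label: str, ring_size: int = 2**16) -> int:
    """Map a keyword or node id to a point on a hash ring."""
    digest = hashlib.sha1(label.lower().encode()).digest()
    return int.from_bytes(digest[:4], "big") % ring_size

class DescriptorIndex:
    """Toy distributed index: each node stores the descriptors whose
    keyword hashes fall into its ring segment. Here all segments live
    in one process, which stands in for the p2p overlay."""
    def __init__(self, node_ids):
        self.nodes = sorted(node_ids, key=key_for)
        self.store = {n: [] for n in node_ids}

    def responsible_node(self, keyword: str) -> str:
        k = key_for(keyword)
        for n in self.nodes:
            if key_for(n) >= k:
                return n
        return self.nodes[0]  # wrap around the ring

    def publish(self, desc: FileDescriptor):
        for kw in desc.keywords:
            self.store[self.responsible_node(kw)].append(desc)

    def search(self, keyword: str):
        """File search = resolve a description to file locations."""
        node = self.responsible_node(keyword)
        return [d.node_id for d in self.store[node] if keyword in d.keywords]

index = DescriptorIndex(["phone-1", "desktop-1", "nas-1"])
index.publish(FileDescriptor("trip.mp4", frozenset({"video", "trip"}), "nas-1"))
print(index.search("video"))  # -> ['nas-1']
```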

Approach Description


This proposal presents the concept design and prototype development plan of an incremental routing and distributed object allocation/retrieval system that scales within highly dynamic heterogeneous environments. The application areas are not limited to distributed infrastructure usage; the resulting solution should therefore be considered not as a fixed design but as a synergistic concept, presented formally, that can easily be rendered to an actual case.

Despite multi-domain convergence, distributed systems management still rests on complex and computationally expensive discovery and extrapolation of cross-domain relationships. As a result, most systems encounter issues with scalability, robustness, and durability.

The main source of these issues is the intelligent engine, which is usually based on complicated cross-correlation discovery and analysis. Even though such complexity, and the corresponding overhead, is compensated by the resulting performance, its applicability is clearly limited on energy- and computationally-constrained devices. This raises the question of constructing distributed systems management that is balanced in energy and computational cost: managing content allocation at minimum cost, with an allocation mechanism that is aware of content features and network properties at any moment.

To narrow down the scope, the following two items are essential for further consideration:

  1. planning the path to the target content (constructing the most efficient path to the content given the network and data properties, or at least giving a "hint" about how to reach the target and where to go);
  2. content concentration (showing how much the locality of content slices diverges, in the sense of query satisfaction and content slices).

As highlighted above, a multitier architecture emerges: an incremental low-level routing mechanism, a distributed query planner, and a distributed directory management mechanism.

Any layer above distributed directory management is defined to serve as a distributed content location and retrieval mechanism. Its main purpose is to guarantee sustained content evolution management, serialization, and access control. To undertake such actions in the most efficient (intelligent) way, a decision update mechanism is considered. The decision update mechanism uses information gathered from meta-data (actual content and query related) and from the network, which is fused and delivered as a conditional rule.

The incremental low-level routing mechanism provides routing and message-passing facilities between the network mapping facilities and the actual physical connection selection/transfer (the connectivity layer).

Below the incremental low-level routing layer, corresponding connectivity should be provided. Through it, network-specific information is delivered by the connectivity layer and, optionally, via system performance control together with service information that includes connection-specific details. The granularity of the transferred information can be adjusted.

System performance control can be based on external service specifications, namely performance requirements and utilization, e.g. access patterns. The central role in system performance control is played by the infrastructure resource provisioning subsystem, which is based on workload-resource mapping and distribution-admission control. These two parts absorb information from the actual network topology and service availability, and from network conditions and traffic patterns. All of the above elements are converged by means of resource management and actual performance measurements.

There are two main domains to gather information from: the content-specific and network-specific domains. Content-specific information can be delivered, for example, by a distributed object filesystem infrastructure (which implies task arbitration) and can consist of commonly used meta-data, object distribution, and hierarchy. Network-specific information is delivered by the connectivity layer and can consist of the actual network topology, network conditions, traffic pattern information, etc.

From the network side, the following information, for example, is relevant to network-specific domain analysis:

  • Interface properties
  • Adjacent nodes properties
  • Last action type
  • Timestamp of the last action
  • Node access info

Since there are several types of distributed storage infrastructure, two content-related analyses are essential here: content locality and content concentration. Content locality is analyzed in terms of temporal and spatial locality.

Content locality shows the actual proximity of data to a potential consumer in terms of cost. Since the cost function is a compound dependency on several parameters, proximity is determined in terms of those parameters and is non-linear by nature.

Content concentration shows the number of available data pieces per locality (e.g. within a certain proximity). Content concentration serves as an input parameter of the local workload model and is derived from the content dispersion estimate and the content tracker.
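The toy computation below illustrates the two measures. The cost values, the radius, and the function names are assumed for the example, since the text defines only the concepts.

```python
# Toy illustration of content locality and content concentration. The
# pairwise costs (e.g. fused from latency, energy, hop count) and the
# replica placement are invented for the example.

cost = {
    ("A", "B"): 1.0, ("A", "C"): 3.0, ("A", "D"): 6.0,
}

def pair_cost(u: str, v: str) -> float:
    if u == v:
        return 0.0
    return cost.get((u, v), cost.get((v, u), float("inf")))

# Which nodes hold slices of a given content item.
replicas = {"clip.mp4": ["B", "C", "D"]}

def locality(consumer: str, item: str) -> float:
    """Content locality: cost of the nearest replica to the consumer."""
    return min(pair_cost(consumer, n) for n in replicas[item])

def concentration(consumer: str, item: str, radius: float) -> int:
    """Content concentration: replicas within a cost radius of the consumer."""
    return sum(1 for n in replicas[item] if pair_cost(consumer, n) <= radius)

print(locality("A", "clip.mp4"))            # 1.0 (nearest replica on B)
print(concentration("A", "clip.mp4", 3.0))  # 2   (replicas on B and C)
```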

Path construction therefore converges to an efficient query-request update-rule mechanism based on the analysis and fusion of information from the two domains. Since the information used to update the management rule is largely independent (orthogonal), correlation analysis and its derivatives cannot be used. The proposed, considerably different approach consists of decomposing the domains (content and network) and fusing them based on analysis of their specific features.

It is important to note that the current approach can be extended to content concentration management, as mentioned above. There, the updated estimate will be used to track the allocation of certain content distributed in the network and to tie it to the set of aggregate queries targeted at that content. Thus, dual-sided optimization is possible, from the query side and from the content location side concurrently.

Work Plan


This pilot project covers the following topics:

  1. Distributed directory management mechanism, i.e. a content distribution algorithm that is aware of network-specific information.
  2. Incremental low-level routing mechanism and a distributed query planner, i.e. content search algorithms that are aware of network-specific information.

Other topics mentioned above, namely

  1. content evolution management (the decision update mechanism),
  2. optimization using the notions of content locality and content concentration,

are considered as future work in subsequent projects.

The project starts on 18 February, lasts 4 months, and finishes on 18 June.

The expected deliverables are:

  1. Technical report containing a literature survey, the solution description, and simulation results.
  2. Simulation code

The work plan has the following structure.

Task name                Duration    Start date  Finish date
Literature survey        1 month     18 Feb      18 March
Solution development     1.5 months  19 March    30 April
Analysis and simulation  1 month     1 May       30 May
Technical report         0.5 months  1 June      18 June


Final presentation: 4th FRUCT seminar

Project team


Evgeny Linsky, Alexandra Afanasieva

Tutor: Sergey Boldyrev

Security Analysis of Provisioning Protocol Used in On-board Credentials Platform

Goal


Security analysis of the provisioning protocol used inside the On-board Credentials (ObC) system by Nokia Research Center.

Introduction


ObC [1, 2] is an open platform for executing code that operates on secrets which must not leak to an attacker. One use case of the system is to let third parties write credentials that operate inside the secure hardware of a legacy phone, including the secure provisioning of the secrets related to the credential. The platform uses the M-Shield security architecture by Texas Instruments (hardware) and a virtual machine (software). This platform needs to be validated from a security point of view.

The requirement to validate the environment comes from two main sources:

  1. Internal trust guarantees: separating credentials from company-internal, business-critical code that shares the environment;
  2. External trust guarantees: if security-conscious third parties are to deploy credentials on the platform, they need confirmation regarding the achievable level of security for these credentials.

Smart cards are usually used as a platform satisfying these requirements; they are often validated and certified, both in hardware and in software, to standards like EAL4+.

Such certification cannot be achieved in the short term for ObC, especially with respect to the hardware. However, for the security of the software environment, argumentation should be built for both requirements 1 and 2. There are many validation aspects to consider, but in this proposal only the provisioning protocol is considered. The provisioning protocol is the protocol that provides credentials with access to secured data.

Tasks that could be considered as future work are described in the Long-term plan section below.

Project Description


General info:
Duration: 4.5 months
Dates: 15 June 2009 – 31 October 2009
Deliverables: 3 technical reports, simulation code

The following subtasks could be formulated:

1. Studying details of ObC platform 
Dates: 15 June 2009 – 29 June 2009
Duration: 2 weeks.

2. Comparative analysis of the cryptographic primitive used in ObC and alternative ones
Dates: 1 July 2009 – 31 July 2009
Duration: 1 month.

Description:
The platform uses a single cryptographic primitive: AES-EAX. This primitive is used heavily by the provisioning protocol, i.e. during the protocol the same primitive is used several times for different purposes: encryption, decryption, hashing, authentication, and key derivation.
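For illustration, the snippet below exercises AES-EAX with the PyCryptodome library. It stands in for ObC's internal implementation (which it is not), and the label-based key-derivation step at the end is an illustrative construction rather than ObC's actual scheme.

```python
# AES-EAX as a single primitive providing both confidentiality and
# authentication, shown with PyCryptodome (an illustration only; this
# is not the ObC implementation).
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)
nonce = get_random_bytes(16)

# Encrypt and authenticate in one pass.
cipher = AES.new(key, AES.MODE_EAX, nonce=nonce)
ciphertext, tag = cipher.encrypt_and_digest(b"provisioned secret")

# Decrypt and verify: a wrong tag or key raises ValueError.
cipher = AES.new(key, AES.MODE_EAX, nonce=nonce)
plaintext = cipher.decrypt_and_verify(ciphertext, tag)
assert plaintext == b"provisioned secret"

# The EAX tag is built from OMAC (CMAC), so the same primitive can also
# serve as a MAC; with an empty message and a fixed label as the nonce
# it acts as a simple PRF, usable as an illustrative key-derivation
# step (again: not ObC's actual scheme).
kdf = AES.new(key, AES.MODE_EAX, nonce=b"derive-session-key")
_, derived = kdf.encrypt_and_digest(b"")
session_key = derived  # 16-byte tag used as a derived key (illustrative)
```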

For all mentioned uses comparative analysis of AES-EAX with some alternative primitives should be done for the following criteria:

  • Efficiency
  • Possible pitfalls
  • Security
  • Meeting implementation requirements (memory footprint, cycles)

Deliverables: technical report including comparative analysis.

3. Analytical cryptanalysis of provisioning protocol

Dates: 1 August 2009 – 31 August 2009
Duration: 1 month.

Description:
A discussion of possible attack vectors against the provisioning protocol and the data it protects is needed. An additional goal could also be studied: the efficiency of alternative solutions, e.g. the possibility of using public-key cryptography rather than symmetric-key cryptography.

Deliverables: technical report covering possible attacks and the alternatives found.

4. Automatic analysis of provisioning protocol using AVISPA tool
Dates: 1 September 2009 – 31 October 2009
Duration: 2 months.

Description:
The AVISPA tool [4] is a technology for the analysis of large-scale Internet security-sensitive protocols and applications. In this task, its applicability to the analysis of the provisioning protocol will be studied. However, the validation object could be changed by mutual agreement during the project; e.g. the provisioning protocol could be replaced by the subroutine invocation protocol.

Deliverables: technical report and simulation code.

Long-term plan


There are several tasks [3] that could be considered after the pilot project:

  1. Cryptographic analysis of sealing protocol (E)
    - Efficiency analysis of sealing protocol
    - Cryptanalysis of sealing protocol
  2. Cryptanalysis of key management schemes for these protocols
  3. Side-channel attacks analysis (F)

Project members


Supervisor from Nokia: Jan-Erik Ekberg
Supervisor from SUAI: Sergey Bezzateev
PhD student: Alexandra Afanasieva

References


[1] Jan-Erik Ekberg, N. Asokan, Kari Kostiainen, Pasi Eronen, OnBoard Credentials Platform Design and Implementation, Technical report NRC-TR-2008-001

[2] Jan-Erik Ekberg, N. Asokan, Kari Kostiainen, Aarne Rantala, On-board Credentials with Open Provisioning, Technical report NRC-TR-2008-007

[3] Jan-Erik Ekberg, Validation of On-Board Credentials, 08/04/2009

[4] AVISPA project, http://avispa-project.org/

End-to-End Delay Estimation in IP Networks

Introduction


Quality of Service (QoS) is an important factor in telecommunications networks. For time-division multiplexing systems, comprehensive methods of QoS attribute analysis have been elaborated. For networks based on packet technologies, there are a number of new problems related to QoS analysis. One of the complicated problems is the estimation of packet delays between user terminals; in this case, the end-to-end delay of the transmitted IP packets has to be estimated. The purpose of this research is to create analytical methods for end-to-end delay estimation in IP networks.

State of the art


The delivery time of IP packets can be considered a random quantity. According to the standardization of QoS parameters proposed by the ITU and ETSI, two characteristics of the delivery time are of interest: the first is the mean value of the analyzed random quantity; the second is a quantile of the corresponding distribution function (jitter). For end-to-end delay estimation, the chain of routers between two terminals has to be considered as a multiphase queuing system.

NGN is an important class of IP networks. The ITU recommendations related to NGN assume priority-service disciplines for delivering IP packets. With priorities, the analysis of multiphase queuing systems becomes more complex.

A model of the multiphase queuing system can be studied by analytical methods or by simulation, and both approaches are widely used. Calculation of the mean value and of the distribution function are examples of the direct problem; for this problem, simulation is an efficient method for complicated models where analytical results cannot be obtained. A typical example of an inverse problem is the calculation of the service rate μ for known offered traffic λ and established QoS parameters; for inverse problems, analytical methods are preferable.
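As a worked illustration of the direct and inverse problems, the sketch below assumes the simplest possible model: a chain of independent M/M/1 hops. Real IP hops are neither Poisson-fed nor independent, so this is only a baseline, and the function names and parameters are illustrative.

```python
# Direct problem: mean end-to-end delay for a given offered traffic
# lambda and per-hop service rates mu_i, assuming independent M/M/1
# hops, where each hop's mean sojourn time is W_i = 1 / (mu_i - lambda).
def mean_e2e_delay(lam: float, mus: list) -> float:
    assert all(mu > lam for mu in mus), "each hop must be stable"
    return sum(1.0 / (mu - lam) for mu in mus)

# Inverse problem: service rate per hop that meets a delay budget,
# assuming n identical hops: n / (mu - lambda) = W_target.
def required_mu(lam: float, n_hops: int, w_target: float) -> float:
    return lam + n_hops / w_target

lam = 800.0                      # offered traffic, packets/s
mus = [1000.0, 1200.0, 1000.0]   # three routers on the path
print(mean_e2e_delay(lam, mus))  # 0.0125 s

print(required_mu(lam, n_hops=3, w_target=0.010))  # 1100.0 packets/s
```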

Research directions


First of all, the main characteristics of the arrival process have to be defined. For telephone traffic, the arrival process can be described by a Poisson process; for data and video traffic, such an assumption is not acceptable.

Secondly, the main characteristics of the service time have to be selected. In general, the service time distribution is described by a step function, but some approximations of this function are useful. Recently, Pareto, Weibull, and other heavy-tailed distributions associated with self-similar traffic have been applied to describe random processes in queuing systems.

Finally, priority-service disciplines need to be chosen. There are a number of solutions related to QoS provisioning; undoubtedly, the applied priority-service disciplines have to comply with international standards.

The main mathematical method is based on teletraffic theory. The Laplace transform and some rough-and-ready approximations will be used as well. Simulation will be applied to estimate the accuracy of the accepted assumptions (see the sketch below), and real measurement data of IP traffic should also be used for accuracy estimation.
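As a minimal example of the simulation side, the Monte-Carlo sketch below estimates the mean delay and a delay quantile (the jitter characteristic) for a single FIFO queue with Poisson arrivals and Pareto service times, via the Lindley recursion. All parameters are illustrative, and priority disciplines or multiphase chains would extend this loop.

```python
import random

# Single M/G/1 queue with Pareto service times: estimate the mean
# sojourn time and its 99% quantile via the Lindley recursion
#   W_{n+1} = max(0, W_n + S_n - T_{n+1}),
# where S_n is a service time and T_{n+1} an exponential interarrival.

random.seed(1)

LAM = 0.8      # arrivals/s (Poisson arrivals)
MEAN_S = 1.0   # mean service time, so utilization rho = 0.8
ALPHA = 2.5    # Pareto shape; the tail gets heavier as ALPHA -> 2

def pareto_service() -> float:
    # random.paretovariate(a) has mean a / (a - 1); rescale to MEAN_S.
    return MEAN_S * (ALPHA - 1) / ALPHA * random.paretovariate(ALPHA)

delays = []
w = 0.0  # waiting time of the current packet
for _ in range(200_000):
    s = pareto_service()
    delays.append(w + s)  # sojourn time = waiting + service
    w = max(0.0, w + s - random.expovariate(LAM))

delays.sort()
print("mean delay  :", sum(delays) / len(delays))
print("99% quantile:", delays[int(0.99 * len(delays))])
```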

Project results


The methods of multiphase queuing system analysis have theoretical value: they will be useful for analyzing the complicated models that describe a number of telecommunications systems. As for practical value, some of the results will be useful for NGN planning.

Contacts


Contact person: Andrew Sokolov
