LATEST TITLES OF PROJECTS FOR COMPUTER SCIENCE (IEEE) STANDARD
CONFERENCE-IEEE
INFOCOM
DOUBLE-COVERED
BROADCAST (DCB): A SIMPLE RELIABLE BROADCAST ALGORITHM IN MANETS:--JAVA--2004
Mobile ad hoc networks (MANETs)
suffer from high transmission error rate because of the nature of radio
communications. The broadcast operation, as a fundamental service in MANETs, is
prone to the broadcast storm problem if forward nodes are not carefully
designated. The objective of reducing the broadcast redundancy while still
providing high delivery ratio for each broadcast packet is a major challenge in
a dynamic environment. In this paper, we propose a simple, reliable broadcast
algorithm, called double-covered broadcast (DCB), that takes advantage of
broadcast redundancy to improve the delivery ratio in the environment that has
rather high transmission error rate. Among 1-hop neighbors of the sender, only
selected forward nodes retransmit the broadcast message. Forward nodes are
selected in such a way that (1) the sender’s 2-hop neighbors are covered and
(2) each of the sender’s 1-hop neighbors is either a forward node itself or a
non-forward node covered by at least two forwarding neighbors. The retransmissions of
the forward nodes are received by the sender as confirmation of their receiving
the packet. The non-forward 1-hop neighbors of the sender do not acknowledge
the reception of the broadcast. If the sender does not detect all of its
forward nodes’ retransmissions, it resends the packet until the maximum number
of retries is reached. Simulation results show that the algorithm performs well
for broadcast operations in environments with high transmission error rates.
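The double-coverage selection rule can be sketched greedily: first cover every 2-hop neighbor, then keep promoting forward nodes until each remaining 1-hop neighbor hears at least two of them. The neighbor map, node names, and the promotion fallback below are illustrative assumptions, not the paper's exact algorithm:

```python
def select_forward_nodes(sender, adj):
    """Greedy sketch of DCB-style forward-node selection.
    adj maps each node to the set of its 1-hop neighbors."""
    one_hop = set(adj[sender])
    two_hop = set().union(*(adj[n] for n in one_hop)) - one_hop - {sender}
    forward, uncovered = set(), set(two_hop)
    # Condition (1): every 2-hop neighbor is covered by some forward node.
    while uncovered:
        best = max(sorted(one_hop - forward),
                   key=lambda n: len(adj[n] & uncovered))
        forward.add(best)
        uncovered -= adj[best]
    # Condition (2): every non-forward 1-hop neighbor hears >= 2 forward
    # nodes; promote helper nodes (or the neighbor itself) until it holds.
    changed = True
    while changed:
        changed = False
        for n in sorted(one_hop - forward):
            if sum(1 for f in forward if n in adj[f]) < 2:
                helpers = sorted((one_hop & adj[n]) - forward)
                forward.add(helpers[0] if helpers else n)
                changed = True
                break
    return forward
```

Because forward nodes implicitly acknowledge by retransmitting, the sender only needs to listen for this set rather than collect explicit ACKs.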
COMBINATORIAL
APPROACH FOR PREVENTING SQL INJECTION ATTACKS:--J2EE--2009
A combinatorial approach for
protecting Web applications against SQL injection is discussed in this paper:
a novel idea that combines the strengths of signature-based detection and
auditing. SQL injection is a major Web application security issue; it can give
attackers unrestricted access to the databases that underlie Web applications,
and such attacks have become increasingly frequent and serious. From the
signature-based standpoint, the approach detects SQL injection using pairwise
sequence alignment of an amino acid code formulated from the Web application
form parameters sent via the Web server. From the auditing standpoint, it
analyzes transactions to find malicious access. The signature-based method uses
the Hirschberg algorithm, a divide-and-conquer alignment technique that reduces
the space requirement. This system was able to stop all of the attempted
attacks and did not generate any false positives.
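To illustrate the signature-matching half, the linear-space scoring pass that Hirschberg's divide-and-conquer algorithm builds on can score a request parameter against known injection signatures. The signatures, scoring weights, and threshold below are hypothetical, and plain character strings stand in for the paper's amino-acid encoding:

```python
def nw_score(a, b, match=2, mismatch=-1, gap=-2):
    """Needleman-Wunsch global alignment score in O(min-row) space --
    the scoring pass that Hirschberg's algorithm recurses on."""
    prev = [j * gap for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        cur = [i * gap]
        for j, cb in enumerate(b, 1):
            cur.append(max(prev[j - 1] + (match if ca == cb else mismatch),
                           prev[j] + gap,      # gap in b
                           cur[j - 1] + gap))  # gap in a
        prev = cur
    return prev[-1]

def looks_injected(param, signatures, threshold=8):
    """Flag a form parameter that aligns strongly with any known signature."""
    return any(nw_score(param.upper(), s) >= threshold for s in signatures)

# Hypothetical signature set for demonstration only.
SIGS = ["' OR '1'='1", "UNION SELECT", "; DROP TABLE"]
```

A real deployment would pair such scoring with the auditing layer the paper describes, rather than rely on a fixed threshold alone.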
INTERNATIONAL
CONFERENCE ON INTELLIGENT AND ADVANCED SYSTEMS
PREDICTIVE
JOB SCHEDULING IN A CONNECTION LIMITED SYSTEM USING PARALLEL GENETIC
ALGORITHM:--JAVA--2005
Job scheduling is the key feature of
any computing environment and the efficiency of computing depends largely on
the scheduling technique used. Intelligence is the key factor which is lacking
in the job scheduling techniques of today. Genetic algorithms are powerful
search techniques based on the mechanisms of natural selection and natural
genetics. The scheduler handles multiple jobs whose required resources reside
at remote locations. Here we assume that the resources a job needs reside at a
single node, not split across nodes, and that each node holding a resource runs
a fixed number of jobs. Existing algorithms are non-predictive and employ
greedy strategies or variants of them. The efficiency of job scheduling would
increase if previous experience and genetic algorithms were used. In this
paper, we propose a scheduling model in which the scheduler learns from
previous experience, so that scheduling becomes more effective as time
progresses.
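The genetic-algorithm core can be sketched on a toy version of the problem: evolve job-to-node assignments that minimize makespan. Population size, rates, and the fitness function below are illustrative choices, not the paper's model:

```python
import random

def makespan(assign, jobs, n_nodes):
    """Fitness: load of the busiest node (lower is better)."""
    load = [0] * n_nodes
    for dur, node in zip(jobs, assign):
        load[node] += dur
    return max(load)

def ga_schedule(jobs, n_nodes, pop_size=30, gens=60, seed=1):
    rng = random.Random(seed)
    # Population of candidate job -> node assignments.
    pop = [[rng.randrange(n_nodes) for _ in jobs] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda a: makespan(a, jobs, n_nodes))
        survivors = pop[:pop_size // 2]        # selection: keep fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(jobs))  # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.3:             # mutation: reassign one job
                child[rng.randrange(len(jobs))] = rng.randrange(n_nodes)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: makespan(a, jobs, n_nodes))
```

Keeping the fittest half each generation makes the search elitist, so the best schedule found never gets worse as generations progress — the "learning over time" the abstract alludes to.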
PERFORMANCE
OF A SPECULATIVE TRANSMISSION SCHEME FOR SCHEDULING LATENCY
REDUCTION:--JAVA-2008
This work was motivated by the need
to achieve low latency in an input-queued, centrally scheduled cell switch for
high-performance computing applications; specifically, the aim is to reduce the
latency incurred between the issuance of a request and the arrival of the
corresponding grant. We introduce a speculative transmission scheme that
significantly reduces the average latency by allowing cells to proceed without
waiting for a grant. It operates in conjunction with any centralized matching
algorithm to achieve high maximum utilization. An analytical model is presented
to investigate the efficiency of the speculative transmission scheme employed
in a non-blocking N × NR input-queued crossbar switch with R receivers per
output. The results demonstrate that the latency can be almost entirely
eliminated for loads up to 50%. Our simulations confirm the analytical results.
RATE
ALLOCATION & NETWORK LIFETIME PROBLEM FOR WIRELESS SENSOR
NETWORKS:--DOTNET--2008
In this paper, we consider an
overarching problem that encompasses both performance metrics. In particular,
we study the network capacity problem under a given network lifetime
requirement. Specifically, for a wireless sensor network in which each node is
provisioned with an initial energy, if all nodes are required to live up to a
certain lifetime criterion, the question is how data rates should be allocated
among the nodes. Since the objective of maximizing the sum of the rates of all
nodes in the network can lead to a severe bias in rate allocation among the
nodes, we advocate the use of lexicographic max-min (LMM) rate allocation. To
calculate the LMM rate allocation vector, we develop a polynomial-time
algorithm that exploits the parametric analysis (PA) technique from linear
programming (LP), which we call serial LP with Parametric Analysis (SLP-PA). We
show that SLP-PA can also be employed to address the LMM node lifetime problem
much more efficiently than a state-of-the-art algorithm proposed in the
literature. More importantly, we show that there exists an elegant duality
relationship between the LMM rate allocation problem and the LMM node lifetime
problem. Therefore, it is sufficient to solve only one of the two problems;
important insights can then be obtained for the other by inferring the duality
results.
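The LMM objective can be illustrated with progressive filling on a toy model where each node has an individual rate cap and all nodes share one total budget — a stand-in for the paper's full LP-based formulation with energy constraints:

```python
def lmm_rates(caps, total):
    """Progressive filling: raise all rates equally, freezing a node when
    it hits its individual cap, until the shared budget is spent. The
    resulting vector is lexicographic max-min for this toy model."""
    rates = {i: 0.0 for i in range(len(caps))}
    active = set(rates)
    remaining = total
    while active and remaining > 1e-12:
        # Largest equal increment every active node can still absorb.
        step = min(remaining / len(active),
                   min(caps[i] - rates[i] for i in active))
        for i in active:
            rates[i] += step
        remaining -= step * len(active)
        active = {i for i in active if caps[i] - rates[i] > 1e-12}
    return [rates[i] for i in range(len(caps))]
```

Note how the smallest rate is made as large as possible first, then the next smallest, and so on — the defining property of an LMM allocation.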
STATISTICAL
TECHNIQUES FOR DETECTING TRAFFIC ANOMALIES THROUGH PACKET HEADER
DATA:--DOTNET--2008
The frequent attacks on network
infrastructure, using various forms of denial-of-service (DoS) attacks and
worms, have led to an increased need for techniques to analyze and monitor
network traffic. If efficient analysis tools were available, it could become
possible to detect attacks and anomalies, and to take action to suppress them
before they have had much time to propagate across the network.
In this paper, we study the possibilities of traffic-analysis based mechanisms
for attack and anomaly detection. The motivation for this work came from a need
to reduce the likelihood that an attacker may hijack the campus machines to
stage an attack on a third party. A campus may want to prevent or limit misuse
of its machines in staging attacks, and possibly limit the liability from such
attacks. In particular, we study the utility of observing packet header data of
outgoing traffic, such as destination addresses, port numbers and the number of
flows, in order to detect attacks/anomalies originating from the campus at the
edge of a campus. Detecting anomalies/attacks close to the source allows us to
limit the potential damage close to the attacking machines. Traffic monitoring
close to the source may enable the network operator to identify potential
anomalies more quickly and allow better control of the administrative domain’s
resources. Attack propagation could be slowed through early detection. Our
approach passively monitors network traffic at regular intervals and analyzes
it to find any abnormalities in the aggregated traffic. By observing the
traffic and correlating it to previous states of traffic, it may be possible to
see whether the current traffic is behaving in a similar (i.e., correlated)
manner. The network traffic could look different because of flash crowds,
changing access patterns, infrastructure problems such as router failures, and
DoS attacks. In the case of bandwidth attacks, the usage of network may be increased
and abnormalities may show up in traffic volume. Flash crowds could be observed
through sudden increase in traffic volume to a single destination. Sudden
increase of traffic on a certain port could signify the onset of an anomaly
such as worm propagation. Our approach relies on analyzing packet header data
in order to provide indications of possible abnormalities in the traffic.
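One simple instance of correlating current traffic with previous states is a Pearson correlation over per-destination (or per-port) volume vectors; the vectors and threshold below are illustrative:

```python
def correlation(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def is_anomalous(baseline, current, threshold=0.7):
    """Flag an interval whose per-port volumes stop correlating with
    the recent baseline, e.g. a sudden surge on one port."""
    return correlation(baseline, current) < threshold
```

A flash crowd, a bandwidth attack, or worm propagation on a single port all break the correlation in this way, which is why the abstract treats them under one detection mechanism.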
EFFICIENT
ROUTING IN INTERMITTENTLY CONNECTED MOBILE NETWORKS: THE MULTIPLE COPY
CASE:--DOTNET--2008
Intermittently connected mobile
networks are wireless networks where most of the time there does not exist a
complete path from the source to the destination. There are many real networks
that follow this model, for example, wildlife tracking sensor networks,
military networks, vehicular ad hoc networks, etc. In this context,
conventional routing schemes fail because they try to establish complete
end-to-end paths before any data is sent. To deal with such networks,
researchers have suggested using flooding-based routing schemes. While
flooding-based schemes have a high probability of delivery, they waste a lot of
energy and suffer from severe contention, which can significantly degrade their
performance. Furthermore, proposed efforts to reduce the overhead of
flooding-based schemes have often been plagued by large delays. With this in
mind, we introduce a new family of routing schemes that “spray” a few message
copies into the network and then route each copy independently towards the
destination. We show that, if carefully designed, spray routing can achieve
both significantly fewer transmissions per message and lower average delivery
delays than existing schemes.
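A minimal sketch of one member of this family, binary spraying: a relay with c > 1 copies hands half to each new node it meets and keeps the rest; with a single copy left it waits to meet the destination directly. The class and method names are illustrative:

```python
class SprayAndWait:
    """Binary spray phase sketch: a relay carrying c > 1 copies hands
    floor(c/2) to each newly met node; once down to one copy it enters
    the wait phase and only delivers directly to the destination."""

    def __init__(self, copies):
        self.copies = copies

    def on_encounter(self):
        """Return the number of copies handed to the encountered node."""
        if self.copies <= 1:
            return 0            # wait phase: no further spraying
        handed = self.copies // 2
        self.copies -= handed
        return handed
```

Halving at each encounter spreads the initial copies in O(log c) relay hops, which is what bounds the total number of transmissions per message.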
TWO
TECHNIQUES FOR FAST COMPUTATION OF CONSTRAINED SHORTEST PATHS:--JAVA--2008
Computing constrained shortest paths
is fundamental to some important network functions such as QoS routing, MPLS
path selection, ATM circuit routing, and traffic engineering. The problem is to
find the cheapest path that satisfies certain constraints. In particular,
finding the cheapest delay-constrained path is critical for real-time data
flows such as voice/video calls. Because the problem is NP-complete, much
research has focused on designing heuristic algorithms that solve an
ε-approximation of the problem with adjustable accuracy. A common approach is
to discretize (i.e., scale and round) the link delay or link cost, which
transforms the original problem into a simpler one solvable in polynomial time.
The efficiency of the algorithms directly relates to the magnitude of the
errors introduced during discretization. In this paper, we propose two
techniques that reduce the discretization errors, which allow faster algorithms
to be designed. Reducing the overhead of computing constrained shortest paths
is practically important for the successful design of a high-throughput QoS
router, which is limited in both processing power and memory. Our simulations
show that the new
algorithms reduce the execution time by an order of magnitude on power-law
topologies with 1000 nodes.
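The scale-and-round idea behind such algorithms can be sketched as a dynamic program over discretized delay: rounding delays up to multiples of δ keeps every accepted path genuinely feasible, while the rounding error can exclude some feasible paths — exactly the error the proposed techniques aim to shrink. The graph and parameters below are illustrative:

```python
import math

def cheapest_delay_constrained(n, edges, src, dst, delay_bound, delta):
    """DP over (node, discretized delay units). edges: (u, v, cost, delay).
    Delays are rounded UP to multiples of delta, so an accepted path truly
    meets the bound; a coarser delta runs faster but may reject feasible
    paths -- the discretization error."""
    B = int(delay_bound // delta)
    INF = float("inf")
    # cost[v][t] = cheapest cost to reach v using at most t delay units
    cost = [[INF] * (B + 1) for _ in range(n)]
    for t in range(B + 1):
        cost[src][t] = 0.0
    for _ in range(n - 1):                    # Bellman-Ford-style passes
        for u, v, c, d in edges:
            du = math.ceil(d / delta)         # conservative rounding
            for t in range(du, B + 1):
                if cost[u][t - du] + c < cost[v][t]:
                    cost[v][t] = cost[u][t - du] + c
    return cost[dst][B] if cost[dst][B] < INF else None
```

The running time grows with B = delay_bound/δ, which is why reducing the error for a given δ (or using a larger δ for a given error) directly speeds up the router's path computation.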
PROBABILISTIC
PACKET MARKING FOR LARGE-SCALE IP TRACE BACK:--DOTNET
We present an approach to IP traceback based
on the probabilistic packet marking paradigm. Our approach, which we call
randomize-and-link, uses large checksum cords to “link” message fragments in a
way that is highly scalable, for the checksums serve both as associative
addresses and data integrity verifiers. The main advantage of these checksum
cords is that they spread the addresses of possible router messages across a
spectrum that is too large for the attacker to easily create messages that
collide with legitimate messages.
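The underlying paradigm can be sketched independently of the checksum-cord machinery: each router overwrites a single mark field with probability p, so the victim sees the router d hops upstream of it with probability p(1−p)^d and can statistically reconstruct the attack path from mark frequencies. Router names and parameters are illustrative:

```python
import random

def transit(path, p, rng):
    """One packet crossing the path (attacker -> victim). Each router
    overwrites the mark with probability p; the surviving mark is what
    the victim observes."""
    mark = None
    for router in path:
        if rng.random() < p:
            mark = router
    return mark

def collect_marks(path, p, n_packets, seed=7):
    """Tally which router's mark survives on each of n_packets packets."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_packets):
        m = transit(path, p, rng)
        if m is not None:
            counts[m] = counts.get(m, 0) + 1
    return counts
```

Routers nearer the victim dominate the tally, and the expected frequencies fall off geometrically with distance — the signal a traceback scheme inverts to order the routers along the path.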
DUAL-LINK
FAILURE RESILIENCY THROUGH BACKUP LINK MUTUAL EXCLUSION:--JAVA
Networks employ link protection to
achieve fast recovery from link failures. While the first link failure can be
protected using link protection, there are several alternatives for protecting
against the second failure. This paper formally classifies the approaches to
dual-link failure resiliency. One of the strategies to recover from dual-link
failures is to employ link protection for the two failed links independently,
which requires that two links may not use each other in their backup paths if they
may fail simultaneously. Such a requirement is referred to as backup link
mutual exclusion (BLME) constraint and the problem of identifying a backup path
for every link that satisfies the above requirement is referred to as the BLME
problem. This paper develops the necessary theory to establish the sufficient
conditions for the existence of a solution to the BLME problem. Solution
methodologies for the BLME problem are developed using two approaches: 1)
formulating the backup path selection as an integer linear program and 2)
developing a polynomial-time heuristic based on minimum-cost path routing. The
ILP formulation and heuristic are applied to six networks and their performance
is compared with approaches that assume precise knowledge of the dual-link
failure. It is observed that a solution exists for all six networks considered.
The heuristic approach is shown to obtain feasible solutions that are resilient
to most dual-link failures, although the backup path lengths may be
significantly longer than optimal. In addition, the paper illustrates the
significance of knowing the failure location by showing that a network with
higher connectivity may require less capacity than one with lower connectivity
to recover from dual-link failures.
A
DISTRIBUTED DATABASE ARCHITECTURE FOR GLOBAL ROAMING IN NEXT-GENERATION MOBILE
NETWORKS:--JAVA--2004
The next-generation mobile network
will support terminal mobility, personal mobility, and service provider
portability, making global roaming seamless. A location-independent personal
telecommunication number (PTN) scheme is conducive to implementing such a
global mobile system. However, the non-geographic PTNs coupled with the
anticipated large number of mobile users in future mobile networks may
introduce very large centralized databases. This necessitates research into the
design and performance of high-throughput database technologies used in mobile
systems to ensure that future systems will be able to carry efficiently the
anticipated loads. This paper proposes a scalable, robust, efficient location
database architecture based on the location-independent PTNs. The proposed
multi-tree database architecture consists of a number of database subsystems,
each of which is a three-level tree structure and is connected to the others
only through its root. By exploiting the localized nature of calling and
mobility patterns, the proposed architecture effectively reduces the database
loads as well as the signaling traffic incurred by the location registration
and call delivery procedures. In addition, two memory-resident database
indices, memory-resident direct file and T-tree, are proposed for the location
databases to further improve their throughput. An analytical model and
numerical results are presented to evaluate the efficiency of the proposed
database architecture. The results reveal that the proposed database architecture for
location management can effectively support the anticipated high user density
in the future mobile networks.
NETWORK
BORDER PATROL: PREVENTING CONGESTION COLLAPSE AND PROMOTING FAIRNESS IN THE
INTERNET:--JAVA--2004
The Internet's excellent scalability
and robustness result in part from the end-to-end nature of Internet congestion
control. End-to-end congestion control algorithms alone, however, are unable to
prevent the congestion collapse and unfairness created by applications that are
unresponsive to network congestion. To address these maladies, we propose and
investigate a novel congestion-avoidance mechanism called network border patrol
(NBP). NBP entails the exchange of feedback between routers at the borders of a
network in order to detect and restrict unresponsive traffic flows before they
enter the network, thereby preventing congestion within the network. Moreover,
NBP is complemented with the proposed enhanced core-stateless fair queueing
(ECSFQ) mechanism, which provides fair bandwidth allocations to competing
flows. Both NBP and ECSFQ are compliant with the Internet philosophy of pushing
complexity toward the edges of the network whenever possible. Simulation
results show that NBP effectively eliminates congestion collapse and that, when
combined with ECSFQ, approximately max-min fair bandwidth allocations can be
achieved for competing flows.
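As a toy illustration of restricting a flow at the border once feedback flags it as unresponsive, a per-flow token bucket caps its ingress rate. This is a hypothetical stand-in for NBP's actual feedback and rate-control exchange between edge routers, not the paper's mechanism:

```python
class TokenBucket:
    """Illustrative ingress rate limiter: once egress feedback flags a
    flow as unresponsive, the ingress border router can cap its rate
    with a token bucket of `rate` tokens/sec and `burst` capacity."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, now, size=1):
        """Admit a packet of `size` tokens arriving at time `now`."""
        # Refill tokens accrued since the last packet, up to the burst cap.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False
```

Dropping the excess at the network's edge, rather than deep inside it, is the same "push complexity to the edges" philosophy the abstract invokes.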
IEEE Software Engineering Projects
ATOMICITY
ANALYSIS OF SERVICE COMPOSITION ACROSS ORGANIZATIONS:--J2EE--2009
Atomicity is a highly desirable
property for achieving application consistency in service compositions. To
achieve atomicity, a service composition should satisfy the atomicity sphere, a
structural criterion for the backend processes of involved services. Existing
analysis techniques for the atomicity sphere generally assume complete
knowledge of all involved backend processes. Such an assumption is invalid when
some service providers do not release all details of their backend processes to
service consumers outside the organizations. To address this problem, we
propose a process algebraic framework to publish atomicity-equivalent public
views from the backend processes. These public views extract relevant task
properties and reveal only partial process details that service providers need
to expose. Our framework enables the analysis of the atomicity sphere for
service compositions using these public views instead of their backend
processes. This allows service consumers to choose suitable services such that
their composition satisfies the atomicity sphere without disclosing the details
of their backend processes. Based on the theoretical result, we present
algorithms to construct atomicity-equivalent public views and to analyze the
atomicity sphere for a service composition. Two case studies from the supply
chain and insurance domains are given to evaluate our proposal and demonstrate
the applicability of our approach.
USING
THE CONCEPTUAL COHESION OF CLASSES FOR FAULT PREDICTION IN OBJECT ORIENTED
SYSTEMS:--JAVA --2008
High cohesion is a desirable property
in software systems for achieving reusability and maintainability. In this
project we use measures of cohesion in object-oriented (OO) software that
reflect particular interpretations of cohesion and capture different aspects of
it. Existing approaches calculate cohesion from structural information, for
example, method attributes and references. For the conceptual cohesion of
classes, our project instead uses the unstructured information embedded in the
source code, such as comments and identifiers. Latent Semantic Indexing is used
to retrieve this unstructured information from the source code. A large case
study on three open-source software systems is presented, which compares the
new measure with an extensive set of existing metrics and uses them to
construct models that predict software faults. In our project we thus measure
cohesion conceptually and predict faults in object-oriented systems.
THE
EFFECT OF PAIRS IN PROGRAM DESIGN TASKS:--DOTNET--2008
In this project efficiency of pairs
in program design tasks is identified by using pair programming concept. Pair
programming involves two developers simultaneously collaborating with each
other on the same programming task to design and code a solution. Algorithm
design and its implementation are normally merged and it provides feedback to
enhance the design. Previous controlled pair programming experiments did not
explore the efficacy of pairs against individuals in program design-related
tasks. Variations in programmer skill in a particular language or integrated
development environment, and in the understanding of programming instructions,
can mask the skill of subjects in program design-related tasks. Programming
aptitude tests (PATs) have been shown to correlate with programming
performance, yet they require neither an understanding of programming
instructions nor skill in any specific computer language. We therefore
conducted two controlled experiments with full-time professional programmers
as subjects, who worked on increasingly complex programming aptitude tasks
related to problem solving and algorithmic design. In both experiments, pairs
significantly outperformed individuals, providing evidence of the value of
pairs in program design-related tasks.
ESTIMATION
OF DEFECTS BASED ON DEFECT DECAY MODEL: ED3M:--DOTNET--2008
An accurate prediction of the number
of defects in a software product during system testing contributes not only to
the management of the system testing process but also to the estimation of the
product’s required maintenance. Here, a new approach, called Estimation of
Defects based on Defect Decay Model (ED3M), is presented that computes an
estimate of the defects in an ongoing testing process. ED3M is based on estimation
theory. Unlike many existing approaches, the technique presented here does not
depend on historical data from previous projects or any assumptions about the
requirements and/or testers’ productivity. It is a completely automated
approach that relies only on the data collected during an ongoing testing
process. This is a key advantage of the ED3M approach as it makes it widely
applicable in different testing environments. Here, the ED3M approach has been
evaluated using five data sets from large industrial projects and two data sets
from the literature. In addition, a performance analysis has been conducted
using simulated data sets to explore its behavior using different models for
the input data. The results are very promising; they indicate that the ED3M
approach provides accurate estimates, with convergence times as fast as or
better than those of well-known alternative techniques, while using only defect
data as the input.
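The flavor of estimating totals from a decay model can be sketched by fitting D(t) = N(1 − e^(−λt)) to cumulative defect counts: grid-search the decay rate λ and solve the scale N in closed form by least squares. This toy estimator only illustrates the idea and is not the ED3M estimator itself:

```python
import math

def estimate_total_defects(cumulative, lambdas=None):
    """Fit D(t) = N * (1 - exp(-lam * t)) to cumulative defect counts
    observed at t = 1, 2, ...: grid-search lam, solve N by least squares,
    and return the N of the best-fitting curve (the estimated total)."""
    if lambdas is None:
        lambdas = [i / 100 for i in range(1, 201)]
    ts = range(1, len(cumulative) + 1)
    best_sse, best_n = float("inf"), None
    for lam in lambdas:
        f = [1 - math.exp(-lam * t) for t in ts]
        # Closed-form least-squares scale for this decay rate.
        n_hat = (sum(d * fi for d, fi in zip(cumulative, f))
                 / sum(fi * fi for fi in f))
        sse = sum((d - n_hat * fi) ** 2 for d, fi in zip(cumulative, f))
        if sse < best_sse:
            best_sse, best_n = sse, n_hat
    return best_n
```

Like ED3M, this uses only the defect data collected during the ongoing test process — no historical projects or productivity assumptions — though the real approach rests on estimation theory rather than a grid search.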
IEEE Mobile Computing Projects
A
TABU SEARCH ALGORITHM FOR CLUSTER BUILDING IN WIRELESS SENSOR
NETWORKS:--DOTNET--2009
The main challenge in wireless
sensor network deployment pertains to optimizing energy consumption when
collecting data from sensor nodes. This paper proposes a new centralized
clustering method for a data collection mechanism in wireless sensor networks,
which is based on network energy maps and Quality-of-Service (QoS) requirements.
The clustering problem is modeled as a hypergraph partitioning and its
resolution is based on a tabu search heuristic. Our approach defines moves
using largest size cliques in a feasibility cluster graph. Compared to other
methods (CPLEX-based method, distributed method, simulated annealing-based
method), the results show that our tabu search-based approach returns
high-quality solutions in terms of cluster cost and execution time. As a
result, this approach is suitable for handling network extensibility in a
satisfactory manner.
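The tabu search component can be sketched on a one-dimensional toy: sensors (points on a line) are assigned to fixed cluster heads to minimize total distance, with recently moved sensors kept tabu for a fixed tenure so the search can escape local minima. The move set, cost function, and parameters are illustrative simplifications of the paper's hypergraph formulation:

```python
import random

def cost(assign, points, heads):
    """Total distance from each sensor to its assigned cluster head."""
    return sum(abs(points[i] - heads[a]) for i, a in enumerate(assign))

def tabu_cluster(points, heads, iters=100, tenure=5, seed=3):
    rng = random.Random(seed)
    assign = [rng.randrange(len(heads)) for _ in points]
    best, best_cost = assign[:], cost(assign, points, heads)
    tabu = {}                       # sensor -> last step it stays tabu
    for step in range(iters):
        candidates = []
        for i in range(len(points)):
            if tabu.get(i, -1) >= step:
                continue            # this sensor was moved too recently
            for h in range(len(heads)):
                if h != assign[i]:
                    trial = assign[:]
                    trial[i] = h
                    candidates.append((cost(trial, points, heads), i, h, trial))
        if not candidates:
            break
        c, i, h, trial = min(candidates)   # best move, even if worsening
        assign = trial
        tabu[i] = step + tenure
        if c < best_cost:
            best, best_cost = trial, c
    return best, best_cost
```

Always taking the best non-tabu move — even a worsening one — is what distinguishes tabu search from plain hill climbing; the best solution ever visited is recorded separately.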
ROUTE
STABILITY IN MANETS UNDER THE RANDOM DIRECTION MOBILITY MODEL:--DOTNET--2009
A fundamental issue arising in
mobile ad hoc networks (MANETs) is the selection of the optimal path between
any two nodes. A method that has been advocated to improve routing efficiency
is to select the most stable path so as to reduce the latency and the overhead
due to route reconstruction. In this work, we study both the availability and
the duration probability of a routing path that is subject to link failures
caused by node mobility. In particular, we focus on the case where the network
nodes move according to the Random Direction model, and we derive both exact
and approximate (but simple) expressions of these probabilities. Through our
results, we study the problem of selecting an optimal route in terms of path
availability. Finally, we propose an approach to improve the efficiency of
reactive routing protocols.
GREEDY
ROUTING WITH ANTI-VOID TRAVERSAL FOR WIRELESS SENSOR NETWORKS:--DOTNET--2009
The unreachability problem (i.e.,
the so-called void problem) that exists in the greedy routing algorithms has
been studied for the wireless sensor networks. Some of the current research
work cannot fully resolve the void problem, while there exist other schemes
that can guarantee the delivery of packets with the excessive consumption of
control overheads. In this paper, a greedy antivoid routing (GAR) protocol is
proposed to solve the void problem with increased routing efficiency by
exploiting the boundary finding technique for the unit disk graph (UDG). The
proposed rolling-ball UDG boundary traversal (RUT) is employed to completely
guarantee the delivery of packets from the source to the destination node under
the UDG network. The boundary map (BM) and the indirect map searching (IMS)
scheme are proposed as efficient algorithms for the realization of the RUT
technique. Moreover, the hop count reduction (HCR) scheme is utilized as a
short-cutting technique to reduce the routing hops by listening to the neighbor’s
traffic, while the intersection navigation (IN) mechanism is proposed to obtain
the best rolling direction for boundary traversal with the adoption of shortest
path criterion. In order to maintain the network requirement of the proposed
RUT scheme under the non-UDG networks, the partial UDG construction (PUC)
mechanism is proposed to transform the non-UDG into UDG setting for a portion
of nodes that facilitate boundary traversal. These three schemes are
incorporated within the GAR protocol to further enhance the routing performance
with reduced communication overhead. The proofs of correctness for the GAR
scheme are also given in this paper. Comparing with the existing localized
routing algorithms, the simulation results show that the proposed GAR-based
protocols can provide better routing efficiency.
CELL
BREATHING TECHNIQUES FOR LOAD BALANCING IN WIRELESS LANS:--DOTNET--2009
Maximizing network throughput while
providing fairness is one of the key challenges in wireless LANs (WLANs). This
goal is typically achieved when the load of access points (APs) is balanced.
Recent studies on operational WLANs, however, have shown that AP load is often
substantially uneven. To alleviate such imbalance of load, several load
balancing schemes have been proposed. These schemes commonly require
proprietary software or hardware at the user side for controlling the user-AP
association. In this paper we present a new load balancing technique by
controlling the size of WLAN cells (i.e., AP’s coverage range), which is conceptually
similar to cell breathing in cellular networks. The proposed scheme requires no
modification either to user devices or to the IEEE 802.11 standard. It only
requires the ability of dynamically changing the transmission power of the AP
beacon messages. We develop a set of polynomial time algorithms that find the
optimal beacon power settings which minimize the load of the most congested AP.
We also consider the problem of network-wide min-max load balancing. Simulation
results show that the performance of the proposed method is comparable with or
superior to the best existing association-based methods.
LOCAL
CONSTRUCTION OF NEAR-OPTIMAL POWER SPANNERS FOR WIRELESS AD-HOC
NETWORKS:--DOTNET
We present a local distributed
algorithm that, given a wireless ad hoc network modeled as a unit disk graph U
in the plane, constructs a planar power spanner of U whose degree is bounded by
k and whose stretch factor is bounded by 1 + (2sin pi/k)p, where k ges 10 is an
integer parameter and p isin [2, 5] is the power exponent constant. For the
same degree bound k, the stretch factor of our algorithm significantly improves
the previous best bounds by Song et al. We show that this bound is near-optimal
by proving that the slightly smaller stretch factor of 1 + (2sin pi/k+1)p is
unattainable for the same degree bound k. In contrast to previous algorithms
for the problem, the presented algorithm is local. As a consequence, the
algorithm is highly scalable and robust. Finally, while the algorithm is
efficient and easy to implement in practice, it relies on deep insights on the
geometry of unit disk graphs and novel techniques that are of independent
interest.
INTRUSION
DETECTION IN HOMOGENEOUS & HETEROGENEOUS WIRELESS SENSOR
NETWORKS:--JAVA--2008
Intrusion detection in Wireless
Sensor Network (WSN) is of practical interest in many applications such as
detecting an intruder in a battlefield. The intrusion detection is defined as a
mechanism for a WSN to detect the existence of inappropriate, incorrect, or
anomalous moving attackers. In this paper, we consider this issue according to
heterogeneous WSN models. Furthermore, we consider two sensing detection
models: single-sensing detection and multiple-sensing detection. Our
simulation results show the advantage of multiple-sensor heterogeneous WSNs.
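The single-sensing versus multiple-sensing comparison can be sketched with a Monte Carlo estimate: drop an intruder uniformly into the field and count how often at least m sensors cover it, with mixed sensing ranges standing in for a heterogeneous WSN. Sensor placements and parameters are illustrative:

```python
import random

def detection_probability(sensors, m, area, trials=20000, seed=11):
    """Monte Carlo estimate of the probability that an intruder dropped
    uniformly into an area x area field lies within the sensing range of
    at least m sensors. sensors: list of (x, y, range); mixed ranges
    model a heterogeneous WSN. m=1 is single-sensing detection, m>1 is
    multiple-sensing detection."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        ix, iy = rng.uniform(0, area), rng.uniform(0, area)
        covering = sum(1 for x, y, r in sensors
                       if (x - ix) ** 2 + (y - iy) ** 2 <= r * r)
        if covering >= m:
            hits += 1
    return hits / trials
```

Requiring m > 1 sensors always yields a lower (or equal) detection probability than single sensing over the same deployment, which is the trade-off the two detection models quantify.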
LOCATION
BASED SPATIAL QUERY PROCESSING IN WIRELESS BROADCAST ENVIRONMENTS:--JAVA--2008
Location-based spatial queries (LBSQs)
refer to spatial queries whose answers rely on the location of the inquirer.
Efficient processing of LBSQs is of critical importance with the
ever-increasing deployment and use of mobile technologies. We show that LBSQs
have certain unique characteristics that traditional spatial query processing
in centralized databases does not address. For example, a significant challenge
is presented by wireless broadcasting environments, which have excellent
scalability but often exhibit high-latency database access. In this paper, we
present a novel query processing technique that, while maintaining high
scalability and accuracy, manages to reduce the latency considerably in
answering LBSQs. Our approach is based on peer-to-peer sharing, which enables
us to process queries without delay at a mobile host by using query results
cached in its neighboring mobile peers. We demonstrate the feasibility of our
approach through a probabilistic analysis, and we illustrate the appeal of our
technique through extensive simulation results.
BANDWIDTH
ESTIMATION FOR IEEE 802.11 BASED ADHOC NETWORK:--JAVA--2008
Since 2005, IEEE 802.11-based
networks have been able to provide a certain level of quality of service (QoS)
by means of service differentiation, thanks to the IEEE 802.11e amendment.
However, no mechanism or method has been standardized to accurately evaluate
the amount of resources remaining on a given channel. Such an evaluation would,
however, be a good asset for bandwidth-constrained applications. In multihop ad
hoc networks, such evaluation becomes even more difficult. Consequently,
despite the various contributions around this research topic, the estimation of
the available bandwidth still represents one of the main issues in this field.
In this paper, we propose an improved mechanism to estimate the available
bandwidth in IEEE 802.11-based ad hoc networks. Through simulations, we compare
the accuracy of the estimation we propose to the estimation performed by other
state-of-the-art QoS protocols, BRuIT, AAC, and QoS-AODV.
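A common building block of such estimators (a hedged sketch, not the paper's mechanism) is that a transmission needs the medium idle at both endpoints at once; assuming the two idle periods are independent, the usable overlap is approximated by the product of the idle fractions:

```python
def available_bandwidth(capacity_mbps, idle_frac_sender, idle_frac_receiver,
                        collision_prob=0.0):
    """Estimate residual bandwidth on a link: both endpoints must sense the
    medium idle simultaneously; with independent idle periods the usable
    overlap is approximated by the product of the idle fractions, optionally
    discounted by an observed collision probability."""
    overlap = idle_frac_sender * idle_frac_receiver
    return capacity_mbps * overlap * (1.0 - collision_prob)
```

The independence assumption is exactly what makes multihop estimation hard: neighbors' busy periods are correlated, which is why refined mechanisms add collision and backoff corrections.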
SPREAD SPECTRUM WATERMARKING SECURITY
This paper presents both theoretical
and practical analyses of the security offered by watermarking and data hiding
methods based on spread spectrum. In this context, security is understood as
the difficulty of estimating the secret parameters of the embedding function
based on the observation of watermarked signals. On the theoretical side, the
security is quantified from an information-theoretic point of view by means of
the equivocation about the secret parameters. The main results reveal
fundamental limits and bounds on security and provide insight into other
properties, such as the impact of the embedding parameters, and the tradeoff
between robustness and security. On the practical side, workable estimators of
the secret parameters are proposed and theoretically analyzed for a variety of
scenarios, providing a comparison with previous approaches, and showing that
the security of many schemes used in practice can be fairly low.
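The kind of leakage the paper quantifies can be illustrated with a toy estimator (all dimensions, strengths, and the averaging attack itself are a simplified assumption, not the paper's estimators): when the same symbol is embedded with the same secret carrier across many signals, averaging the observations cancels the host noise and exposes the carrier.

```python
import random

random.seed(0)
dim, num_obs, strength = 64, 500, 0.5
carrier = [random.choice((-1.0, 1.0)) for _ in range(dim)]  # secret sequence

# each observation: i.i.d. Gaussian host signal plus the same embedded symbol
observations = [[random.gauss(0.0, 1.0) + strength * c for c in carrier]
                for _ in range(num_obs)]

# attacker averages the observations: host noise averages out, carrier remains
estimate = [sum(col) / num_obs for col in zip(*observations)]
dot = sum(e * c for e, c in zip(estimate, carrier))
corr = dot / ((sum(e * e for e in estimate) ** 0.5) *
              (sum(c * c for c in carrier) ** 0.5))
```

A correlation near 1 means the secret parameters are essentially recovered, which is the sense in which the security of many practical schemes "can be fairly low."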
RESOURCE
ALLOCATION IN OFDMA WIRELESS COMMUNICATIONS SYSTEMS SUPPORTING MULTIMEDIA
SERVICES:--DOTNET--2009
We design a resource allocation
algorithm for the downlink of orthogonal frequency division multiple access
(OFDMA) systems supporting real-time (RT) and best-effort (BE) services
simultaneously over a time-varying wireless channel. The proposed algorithm
aims at maximizing system throughput while satisfying quality of service (QoS)
requirements of the RT and BE services. We take two kinds of QoS requirements
into account. One is the required average transmission rate for both RT and BE
services. The other is the tolerable average absolute deviation of transmission
rate (AADTR) just for the RT services, which is used to control the fluctuation
in transmission rates and to limit the RT packet delay to a moderate level. We
formulate the optimization problem representing the resource allocation under
consideration and solve it by using the dual optimization technique and the
projection stochastic subgradient method. Simulation results show that the
proposed algorithm meets the QoS requirements well while achieving high
throughput, and outperforms the modified largest weighted delay first (M-LWDF)
algorithm that supports similar QoS requirements.
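The flavor of the allocation problem can be shown with a deliberately simple per-frame heuristic (not the paper's dual/subgradient solver; the rate table and minimum-rate figure are invented): satisfy RT users' minimum rates first, then hand leftover subcarriers to whoever gets the most rate from them.

```python
def assign_subcarriers(rates, rt_min):
    """Toy OFDMA frame allocation: RT users grab their best subcarriers until
    their minimum rates are met; remaining subcarriers go to the user with
    the highest achievable rate on each, maximizing throughput."""
    remaining = {s for user_rates in rates.values() for s in user_rates}
    assignment, got = {}, {u: 0.0 for u in rates}
    for u, need in rt_min.items():                 # RT users first
        while got[u] < need and remaining:
            s = max(remaining, key=lambda s: rates[u].get(s, 0.0))
            assignment[s] = u
            got[u] += rates[u].get(s, 0.0)
            remaining.discard(s)
    for s in sorted(remaining):                    # throughput-maximizing rest
        u = max(rates, key=lambda u: rates[u].get(s, 0.0))
        assignment[s] = u
        got[u] += rates[u].get(s, 0.0)
    return assignment, got

rates = {"rt": {0: 4.0, 1: 1.0, 2: 2.0},
         "be": {0: 5.0, 1: 3.0, 2: 1.0}}
assignment, got = assign_subcarriers(rates, rt_min={"rt": 4.0})
```

The paper's contribution is doing this optimally over a time-varying channel, with the extra AADTR constraint damping rate fluctuation for RT users.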
ANALYSIS
OF SHORTEST PATH ROUTING FOR LARGE MULTI-HOP WIRELESS NETWORKS:--DOTNET--2009
In this paper, we analyze the impact
of straight line routing in large homogeneous multi-hop wireless networks. We
estimate the nodal load, which is defined as the number of packets served at a
node, induced by straight line routing. For a given total offered load on the
network, our analysis shows that the nodal load at each node is a function of
the node’s Voronoi cell, the node’s location in the network, and the traffic pattern
specified by the source and destination randomness and straight line routing.
In the asymptotic regime, we show that the probability that a node serves a
packet arriving to the network approaches the product of half the perimeter of
the node’s Voronoi cell and the load density function evaluated at the node’s
location. The density function depends on the
traffic pattern generated by straight line routing, and determines where the
hot spot is created in the network. Hence, contrary to conventional wisdom,
straight line routing can balance the load over the network, depending on the
traffic patterns.
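A quick Monte Carlo check of the hot-spot effect (an illustration under made-up parameters, not the paper's analysis): with uniform random sources and destinations in a disk, straight-line routes pass near the center far more often than near the edge.

```python
import math
import random

def dist_point_segment(p, a, b):
    """Euclidean distance from point p to segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    l2 = dx * dx + dy * dy
    t = 0.0 if l2 == 0 else max(0.0, min(1.0,
        ((px - ax) * dx + (py - ay) * dy) / l2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def random_point_in_disk(r=1.0):
    while True:
        x, y = random.uniform(-r, r), random.uniform(-r, r)
        if x * x + y * y <= r * r:
            return x, y

def relay_load(node, trials=4000, node_radius=0.05):
    """Fraction of random straight-line routes (uniform source/destination
    in the unit disk) passing within node_radius of the given node."""
    hits = sum(1 for _ in range(trials)
               if dist_point_segment(node, random_point_in_disk(),
                                     random_point_in_disk()) <= node_radius)
    return hits / trials

random.seed(7)
center_load = relay_load((0.0, 0.0))
edge_load = relay_load((0.85, 0.0))
```

The paper's point is subtler: for other source/destination distributions the density function moves the hot spot, so straight-line routing can in fact balance load.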
SECURE
AND POLICY-COMPLIANT SOURCE ROUTING:--DOTNET--2009
In today’s Internet, inter-domain
route control remains elusive; nevertheless, such control could improve the
performance, reliability, and utility of the network for end users and ISPs
alike. While researchers have proposed a number of source routing techniques to
combat this limitation, there has thus far been no way for independent ASes to
ensure that such traffic does not circumvent local traffic policies, nor to
accurately determine the correct party to charge for forwarding the traffic. We
present Platypus, an authenticated source routing system built around the
concept of network capabilities, which allow for accountable, fine-grained path
selection by cryptographically attesting to policy compliance at each hop along
a source route. Capabilities can be composed to construct routes through
multiple ASes and can be delegated to third parties. Platypus caters to the
needs of both end users and ISPs: users gain the ability to pool their
resources and select routes other than the default, while ISPs maintain control
over where, when, and whose packets traverse their networks. We describe the
design and implementation of an extensive Platypus policy framework that can be
used to address several issues in wide-area routing at both the edge and the
core, and evaluate its performance and security. Our results show that
incremental deployment of Platypus can achieve immediate gains.
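The capability idea can be sketched with plain HMACs (a minimal sketch assuming a shared per-AS secret; Platypus's actual capability format and key handling are richer):

```python
import hashlib
import hmac

def mint_capability(as_secret: bytes, flow_id: bytes) -> bytes:
    """An AS cryptographically attests that the named flow may transit it."""
    return hmac.new(as_secret, flow_id, hashlib.sha256).digest()

def compose_route(flow_id: bytes, as_secrets: dict) -> list:
    """Capabilities compose into a multi-AS source route."""
    return [(asn, mint_capability(secret, flow_id))
            for asn, secret in as_secrets.items()]

def verify_at_hop(as_secret: bytes, flow_id: bytes, capability: bytes) -> bool:
    """Each hop checks policy compliance before forwarding."""
    return hmac.compare_digest(mint_capability(as_secret, flow_id), capability)

secrets = {"AS100": b"k100", "AS200": b"k200"}   # hypothetical AS keys
route = compose_route(b"flow-42", secrets)
ok = all(verify_at_hop(secrets[asn], b"flow-42", cap) for asn, cap in route)
forged = verify_at_hop(secrets["AS100"], b"flow-42", b"\x00" * 32)
```

Because a capability is bound to both the AS and the flow, an AS can meter and charge exactly the traffic it authorized, and capabilities can be delegated by handing the token to a third party.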
MOBILITY
MANAGEMENT APPROACHES FOR MOBILE IP NETWORKS: PERFORMANCE COMPARISON AND USE
RECOMMENDATIONS:--JAVA--2009
In wireless networks, efficient
management of mobility is a crucial issue to support mobile users. The Mobile
Internet Protocol (MIP) has been proposed to support global mobility in IP
networks. Several mobility management strategies have been proposed that aim at
reducing the signaling traffic related to the registration of Mobile Terminals
(MTs) with their Home Agents (HAs) whenever their Care-of-Addresses (CoAs)
change. They
use different Foreign Agents (FAs) and Gateway FAs (GFAs) hierarchies to
concentrate the registration processes. For high-mobility MTs, the Hierarchical
MIP (HMIP) and Dynamic HMIP (DHMIP) strategies localize the registrations in
FAs and GFAs, thereby confining the mobility signaling. The Multicast HMIP
strategy limits the registration processes to the GFAs. For high-mobility MTs,
it provides the lowest mobility signaling delay compared to the HMIP and DHMIP
approaches. However, it is a resource-consuming strategy except in the case of
frequent MT mobility. Hence, we propose an analytic model to evaluate the mean
signaling
delay and the mean bandwidth per call according to the type of MT mobility. In our
analysis, the MHMIP outperforms the DHMIP and MIP strategies in almost all the
studied cases. The main contribution of this paper is the analytic model that
allows the performance evaluation of the mobility management approaches.
SINGLE-LINK
FAILURE DETECTION IN ALL-OPTICAL NETWORKS USING MONITORING CYCLES AND
PATHS:--DOTNET--2009
In this paper, we consider the
problem of fault localization in all-optical networks. We introduce the concept
of monitoring cycles (MCs) and monitoring paths (MPs) for unique identification
of single-link failures. MCs and MPs are required to pass through one or more
monitoring locations. They are constructed such that any single-link failure
results in the failure of a unique combination of MCs and MPs that pass through
the monitoring location(s). For a network with only one monitoring location, we
prove that three-edge connectivity is a necessary and sufficient condition for
constructing MCs that uniquely identify any single-link failure in the network.
For this case, we formulate the problem of constructing MCs as an integer
linear program (ILP). We also develop heuristic approaches for constructing MCs
in the presence of one or more monitoring locations. For an arbitrary network
(not necessarily three-edge connected), we describe a fault localization
technique that uses both MPs and MCs and that employs multiple monitoring
locations. We also provide a linear-time algorithm to compute the minimum
number of required monitoring locations. Through extensive simulations, we
demonstrate the effectiveness of the proposed monitoring technique.
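The localization logic reduces to alarm signatures (a minimal sketch of the concept on an invented three-link example, not the paper's ILP or heuristics): each link's signature is the set of monitoring cycles/paths that traverse it, and localization works exactly when all signatures are distinct and non-empty.

```python
def alarm_signatures(cycles):
    """cycles: name -> set of links the MC/MP traverses. A link's signature
    is the set of cycles/paths that raise an alarm when that link fails."""
    sigs = {}
    for name, links in cycles.items():
        for link in links:
            sigs.setdefault(link, set()).add(name)
    return {link: frozenset(names) for link, names in sigs.items()}

def localize(alarms, sigs):
    """Return the failed link if the alarm pattern matches exactly one."""
    matches = [link for link, sig in sigs.items() if sig == alarms]
    return matches[0] if len(matches) == 1 else None

cycles = {"MC1": {"a", "b"}, "MC2": {"b", "c"}}
sigs = alarm_signatures(cycles)
failed = localize(frozenset({"MC1", "MC2"}), sigs)  # only link b alarms both
```

The three-edge-connectivity condition in the paper is precisely what guarantees that a cycle set with all-distinct signatures exists for a single monitor.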
MULTIPLE
ROUTING CONFIGURATIONS FOR FAST IP NETWORK RECOVERY:--JAVA--2009
As the Internet takes an
increasingly central role in our communications infrastructure, the slow
convergence of routing protocols after a network failure becomes a growing
problem. To assure fast recovery from link and node failures in IP networks, we
present a new recovery scheme called Multiple Routing Configurations (MRC). Our
proposed scheme guarantees recovery in all single failure scenarios, using a
single mechanism to handle both link and node failures, and without knowing the
root cause of the failure. MRC is strictly connectionless, and assumes only
destination based hop-by-hop forwarding. MRC is based on keeping additional
routing information in the routers, and allows packet forwarding to continue on
an alternative output link immediately after the detection of a failure. It can
be implemented with only minor changes to existing solutions. In this paper we
present MRC, and analyze its performance with respect to scalability, backup
path lengths, and load distribution after a failure. We also show how an
estimate of the traffic demands in the network can be used to improve the
distribution of the recovered traffic, and thus reduce the chances of
congestion when MRC is used.
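The mechanism can be miniaturized (a toy forwarding sketch on the invented topology A-B-D / A-C-D; real MRC derives configurations and weights algorithmically): each backup configuration "isolates" some nodes so no traffic transits them, and a router that detects a dead next hop simply switches configuration.

```python
# default configuration plus one backup per potentially failed node
CONFIGS = [
    {"isolated": set(),  "next_hop": {"A": "B", "B": "D", "C": "D"}},
    {"isolated": {"B"},  "next_hop": {"A": "C", "C": "D"}},
    {"isolated": {"C"},  "next_hop": {"A": "B", "B": "D"}},
]

def pick_config(failed):
    """Choose a configuration in which the failed node carries no traffic;
    no knowledge of the failure's root cause is needed."""
    if failed is None:
        return CONFIGS[0]
    return next(c for c in CONFIGS if failed in c["isolated"])

def route(src, dst="D", failed=None):
    """Hop-by-hop, destination-based forwarding under one configuration."""
    table = pick_config(failed)["next_hop"]
    path = [src]
    while path[-1] != dst:
        path.append(table[path[-1]])
    return path
```

Because every configuration is a complete, loop-free routing, recovery is immediate and purely local, which is what lets MRC sidestep slow protocol reconvergence.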
VIRUS
SPREAD IN NETWORKS:--DOTNET--2009
We study how the spread of computer
viruses, worms, and other self-replicating malware is affected by the logical
topology of the network over which they propagate. We consider a model in which
each host can be in one of three possible states: susceptible, infected, or
removed (cured and no longer susceptible to infection). We characterize how the
size of the population that eventually becomes infected depends on the network
topology. Specifically, we show that if the ratio of cure to infection rates is
larger than the spectral radius of the graph, and the initial infected
population is small, then the final infected population is also small in a
sense that can be made precise. Conversely, if this ratio is smaller than the
spectral radius, then we show in some graph models of practical interest
(including power law random graphs) that the final infected population is
large. These results yield insights into what the critical parameters are in
determining virus spread in networks.
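The threshold is easy to check numerically (a sketch under the stated threshold condition; the star graph and rates are an invented example):

```python
def matvec(adj, v):
    return [sum(a * x for a, x in zip(row, v)) for row in adj]

def spectral_radius(adj, iters=100):
    """Power iteration on A*A (A symmetric): plain iteration on A can
    oscillate on bipartite graphs, and rho(A) = sqrt(rho(A^2))."""
    v = [1.0] * len(adj)
    lam = 1.0
    for _ in range(iters):
        w = matvec(adj, matvec(adj, v))
        lam = max(abs(x) for x in w) or 1.0
        v = [x / lam for x in w]
    return lam ** 0.5

def dies_out(cure_rate, infection_rate, adj):
    """Epidemic threshold: the infection dies out when the cure/infection
    ratio exceeds the spectral radius of the adjacency matrix."""
    return cure_rate / infection_rate > spectral_radius(adj)

# star graph, one hub and four leaves: spectral radius = sqrt(4) = 2
STAR = [[0, 1, 1, 1, 1],
        [1, 0, 0, 0, 0],
        [1, 0, 0, 0, 0],
        [1, 0, 0, 0, 0],
        [1, 0, 0, 0, 0]]
```

The star example already shows why topology matters: adding leaves raises the spectral radius as the square root of the hub degree, so hub-heavy (power-law) graphs are far more fragile than their average degree suggests.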
MINING
FILE DOWNLOADING TIME IN STOCHASTIC PEER TO PEER NETWORKS:--DOTNET--2008
On-demand routing protocols use
route caches to make routing decisions. Due to mobility, cached routes easily
become stale. To address the cache staleness issue, prior work in DSR used
heuristics with ad hoc parameters to predict the lifetime of a link or a route.
However, heuristics cannot accurately estimate timeouts because topology changes
are unpredictable. In this paper, we propose proactively disseminating the
broken link information to the nodes that have that link in their caches. We
define a new cache structure called a cache table and present a distributed
cache update algorithm. Each node maintains in its cache table the information
necessary for cache updates. When a link failure is detected, the algorithm
notifies all reachable nodes that have cached the link in a distributed manner.
The algorithm does not use any ad hoc parameters, thus making route caches
fully adaptive to topology changes. We show that the algorithm outperforms DSR
with path caches and with Link-MaxLife, an adaptive timeout mechanism for link
caches. We conclude that proactive cache updating is key to the adaptation of
on-demand routing protocols to mobility.
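The core update step can be sketched in a few lines (a simplified illustration with invented node and link names; the paper's algorithm works distributedly over the cache tables themselves):

```python
def propagate_link_failure(cache_tables, broken_link):
    """cache_tables: node -> set of links currently in its route cache.
    When a link breaks, notify every node whose cache contains it so that
    stale routes are evicted at once, with no ad hoc timeout parameters."""
    notified = []
    for node, cached_links in cache_tables.items():
        if broken_link in cached_links:
            cached_links.discard(broken_link)
            notified.append(node)
    return sorted(notified)

caches = {"n1": {("a", "b"), ("b", "c")},
          "n2": {("b", "c")},
          "n3": {("c", "d")}}
notified = propagate_link_failure(caches, ("b", "c"))
```

The contrast with timeout heuristics is the point: eviction is triggered by the actual topology change, so caches adapt exactly as fast as failures are detected.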
RATE
& DELAY GUARANTEES PROVIDED BY CLOS PACKET SWITCHES WITH LOAD
BALANCING:--JAVA--2008
In this paper, we consider an
overarching problem that encompasses both performance metrics. In particular,
we study the network capacity problem under a given network lifetime
requirement. Specifically, for a wireless sensor network where each node is
provisioned with an initial energy and all nodes are required to live up to a
certain lifetime criterion, we ask how much traffic the network can carry.
Since the objective of maximizing the sum of rates of all the nodes in the
network can lead to a severe bias in rate allocation among the nodes, we
advocate the use of lexicographical max-min (LMM) rate allocation. To calculate
the LMM rate allocation vector, we develop a
polynomial-time algorithm by exploiting the parametric analysis (PA) technique
from linear programming (LP), which we call serial LP with Parametric Analysis
(SLP-PA). We show that SLP-PA can also be employed to address the LMM node
lifetime problem much more efficiently than a state-of-the-art algorithm
proposed in the literature. More importantly, we show that there exists an
elegant duality relationship between the LMM rate allocation problem and the
LMM node lifetime problem. Therefore, it is sufficient to solve only one of the
two problems. Important insights can be obtained by inferring duality results
for the other problem.
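The intuition behind max-min fairness can be shown with classic progressive filling (an illustrative toy over a simple link-capacity model with invented flows and capacities; SLP-PA solves the real sensor-network version via parametric LP analysis):

```python
def max_min_rates(flows, links):
    """Progressive filling: raise all unfixed flow rates together; when a
    link saturates, freeze its flows at that level and continue. Yields a
    max-min fair rate allocation for this toy shared-link model."""
    alloc = {f: 0.0 for f in flows}
    fixed = set()
    while len(fixed) < len(flows):
        best = None
        for capacity, members in links:
            free = [f for f in members if f not in fixed]
            if not free:
                continue
            used = sum(alloc[f] for f in members if f in fixed)
            level = (capacity - used) / len(free)
            if best is None or level < best[0]:
                best = (level, free)
        if best is None:
            break  # remaining flows cross no link; leave them at 0
        level, free = best
        for f in free:
            alloc[f] = level
            fixed.add(f)
    return alloc

# link 1: capacity 10 shared by A and B; link 2: capacity 8 shared by B and C
alloc = max_min_rates(["A", "B", "C"],
                      [(10.0, {"A", "B"}), (8.0, {"B", "C"})])
```

No flow's rate can be raised without lowering that of a flow with an equal or smaller rate, which is exactly the bias-avoidance property the abstract argues for.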
GEOMETRIC
APPROACH TO IMPROVING ACTIVE PACKET LOSS MEASUREMENT:--JAVA--2008
Measurement and estimation of packet
loss characteristics are challenging due to the relatively rare occurrence and
typically short duration of packet loss episodes. While active probe tools are
commonly used to measure packet loss on end-to-end paths, there has been little
analysis of the accuracy of these tools or their impact on the network. The
objective of our study is to understand how to measure packet loss episodes
accurately with end-to-end probes. We begin by testing the capability of
standard Poisson-modulated end-to-end measurements of loss in a controlled
laboratory environment using IP routers and commodity end hosts. Our tests show
that loss characteristics reported from such Poisson-modulated probe tools can
be quite inaccurate over a range of traffic conditions. Motivated by these observations,
we introduce a new algorithm for packet loss measurement that is designed to
overcome the deficiencies in standard Poisson-based tools. Specifically, our
method entails probe experiments that follow a geometric distribution to 1)
enable an explicit trade-off between accuracy and impact on the network, and 2)
enable more accurate measurements than standard Poisson probing at the same
rate. We evaluate the capabilities of our methodology experimentally by
developing and implementing a prototype tool, called BADABING. The experiments
demonstrate the trade-offs between impact on the network and measurement
accuracy. We show that BADABING reports loss characteristics far more
accurately than traditional loss measurement tools.
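The geometric-probing idea itself is simple to sketch (the slot probability and horizon below are arbitrary illustration values, not BADABING's parameters): starting a probe in each discrete slot with probability p yields geometrically distributed gaps with mean 1/p, and p is the knob trading accuracy against network impact.

```python
import random

def geometric_probe_schedule(p, slots, seed=42):
    """Begin a probe in each discrete time slot with probability p; gaps
    between successive probes are then geometric with mean 1/p."""
    rng = random.Random(seed)
    return [t for t in range(slots) if rng.random() < p]

times = geometric_probe_schedule(0.2, 50000)
gaps = [b - a for a, b in zip(times, times[1:])]
mean_gap = sum(gaps) / len(gaps)
```

Unlike Poisson-modulated probing at the same rate, probes aligned to slots let consecutive probes straddle a loss episode, which is what improves the episode-frequency and episode-duration estimates.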