COMPUTER SCIENCE LATEST PROJECT ABSTRACTS
PERFORMANCE
OF A SPECULATIVE TRANSMISSION SCHEME FOR SCHEDULING LATENCY
REDUCTION:--JAVA--2008
This work was motivated by the need
to achieve low latency in an input centrally-scheduled cell switch for
high-performance computing applications; specifically, the aim is to reduce the
latency incurred between issuance of a request and arrival of the corresponding
grant. We introduce a speculative transmission scheme to significantly reduce
the average latency by allowing cells to proceed without waiting for a grant.
It operates in conjunction with any centralized matching algorithm to achieve a
high maximum utilization. An analytical model is presented to investigate the
efficiency of the speculative transmission scheme employed in a non-blocking
N*NR input-queued crossbar switch with R receivers per output. The results
demonstrate that the scheduling latency can be almost entirely eliminated for loads up to 50%. Our
simulations confirm the analytical results.
RATE
ALLOCATION & NETWORK LIFETIME PROBLEM FOR WIRELESS SENSOR
NETWORKS:--DOTNET--2008
In this paper, we consider an
overarching problem that encompasses both performance metrics. In particular,
we study the network capacity problem under a given network lifetime
requirement. Specifically, for a wireless sensor network where each node is
provisioned with an initial energy, if all nodes are required to live up to a
certain lifetime criterion, what is the maximum network capacity that can be achieved? Since the objective of maximizing the sum of rates
of all the nodes in the network can lead to a severe bias in rate allocation
among the nodes, we advocate the use of lexicographical max-min (LMM) rate
allocation. To calculate the LMM rate allocation vector, we develop a
polynomial-time algorithm by exploiting the parametric analysis (PA) technique
from linear programming (LP), which we call serial LP with Parametric Analysis (SLP-PA). We show that SLP-PA can also be employed to address the LMM node lifetime problem much more efficiently than a state-of-the-art algorithm proposed in the literature. More importantly, we show that there exists an
elegant duality relationship between the LMM rate allocation problem and the
LMM node lifetime problem. Therefore, it is sufficient to solve only one of the
two problems. Important insights can be obtained by inferring duality results
for the other problem.
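The lexicographic max-min idea can be illustrated with a toy progressive-filling sketch. This is not the SLP-PA algorithm from the paper; the link capacities, node names, and routes below are invented for illustration:

```python
# Toy lexicographic max-min (LMM) rate allocation by progressive filling:
# all rates rise together until some link saturates; nodes on a saturated
# link are then frozen at their current rate.

def lmm_allocate(links, routes):
    """links: {link: capacity}; routes: {node: [links used]}.
    Returns {node: rate} that is lexicographically max-min fair."""
    rates = {n: 0.0 for n in routes}
    frozen = set()
    remaining = dict(links)
    while len(frozen) < len(routes):
        # For each unsaturated link, the fair share of its remaining
        # capacity among the active (unfrozen) nodes crossing it.
        active_on = {l: [n for n in routes if l in routes[n] and n not in frozen]
                     for l in remaining}
        shares = [remaining[l] / len(active_on[l])
                  for l in remaining if active_on[l]]
        if not shares:
            break
        inc = min(shares)  # largest common increment before some link saturates
        for n in routes:
            if n not in frozen:
                rates[n] += inc
        # Subtract consumed capacity and freeze nodes on saturated links.
        for l in list(remaining):
            remaining[l] -= inc * len(active_on[l])
            if remaining[l] <= 1e-12 and active_on[l]:
                frozen.update(active_on[l])
                del remaining[l]
    return rates

# Three nodes, two links: A and B share link1 (cap 1.0); C alone uses link2 (cap 2.0).
rates = lmm_allocate({"link1": 1.0, "link2": 2.0},
                     {"A": ["link1"], "B": ["link1"], "C": ["link2"]})
print(rates)  # A and B get 0.5 each; C gets 2.0
```

No node's rate can be raised without lowering the rate of a node holding an equal or smaller rate, which is exactly the bias the LMM objective avoids.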
STATISTICAL
TECHNIQUES FOR DETECTING TRAFFIC ANOMALIES THROUGH PACKET HEADER
DATA:--DOTNET--2008
The frequent attacks on network
infrastructure, using various forms of denial of service (DoS) attacks and
worms, have led to an increased need for developing techniques for analyzing
and monitoring network traffic. If efficient analysis tools were available, it
could become possible to detect attacks and anomalies and to take action to
suppress them before they have had much time to propagate across the network.
In this paper, we study the possibilities of traffic-analysis based mechanisms
for attack and anomaly detection. The motivation for this work came from a need
to reduce the likelihood that an attacker may hijack the campus machines to
stage an attack on a third party. A campus may want to prevent or limit misuse
of its machines in staging attacks, and possibly limit the liability from such
attacks. In particular, we study the utility of observing packet header data of
outgoing traffic, such as destination addresses, port numbers and the number of
flows, in order to detect attacks/anomalies originating from the campus at the
edge of a campus. Detecting anomalies/attacks close to the source allows us to
limit the potential damage close to the attacking machines. Traffic monitoring
close to the source may enable the network operator to identify potential anomalies more quickly and to better control the administrative domain’s resources. Attack propagation could be slowed through early detection. Our
approach passively monitors network traffic at regular intervals and analyzes
it to find any abnormalities in the aggregated traffic. By observing the
traffic and correlating it to previous states of traffic, it may be possible to
see whether the current traffic is behaving in a similar (i.e., correlated)
manner. The network traffic could look different because of flash crowds,
changing access patterns, infrastructure problems such as router failures, and
DoS attacks. In the case of bandwidth attacks, network usage may increase
and abnormalities may show up in traffic volume. Flash crowds could be observed
through a sudden increase in traffic volume to a single destination. A sudden
increase of traffic on a certain port could signify the onset of an anomaly
such as worm propagation. Our approach relies on analyzing packet header data
in order to provide indications of possible abnormalities in the traffic.
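As a minimal illustration of the correlation idea, not the paper's statistical techniques, one can compare a current per-port traffic sample against a baseline and flag an interval when the correlation breaks down; the port mix, counts, and threshold below are hypothetical:

```python
# Flag a measurement interval as anomalous when the Pearson correlation
# between the current per-port packet counts and a baseline drops below
# a chosen threshold.

import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def is_anomalous(baseline, current, threshold=0.8):
    """baseline/current: per-port packet counts sampled at the same interval."""
    return pearson(baseline, current) < threshold

normal_day = [120, 80, 300, 40, 10]      # e.g. counts on ports 80, 443, 25, 22, other
similar    = [130, 75, 290, 35, 15]
worm_burst = [120, 80, 300, 40, 9000]    # sudden surge on one port

print(is_anomalous(normal_day, similar))     # False: traffic is still correlated
print(is_anomalous(normal_day, worm_burst))  # True: correlation breaks down
```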
EFFICIENT
ROUTING IN INTERMITTENTLY CONNECTED MOBILE NETWORKS: THE MULTIPLE COPY
CASE:--DOTNET--2008
Intermittently connected mobile
networks are wireless networks where most of the time there does not exist a
complete path from the source to the destination. There are many real networks
that follow this model, for example, wildlife tracking sensor networks,
military networks, vehicular ad hoc networks, etc. In this context,
conventional routing schemes fail, because they try to establish complete
end-to-end paths, before any data is sent. To deal with such networks
researchers have suggested to use flooding-based routing schemes. While
flooding-based schemes have a high probability of delivery, they waste a lot of
energy and suffer from severe contention which can significantly degrade their
performance. Furthermore, proposed efforts to reduce the overhead of
flooding-based schemes have often been plagued by large delays. With this in
mind, we introduce a new family of routing schemes that “spray” a few message
copies into the network, and then route each copy independently towards the
destination. We show that, if carefully designed, spray routing not only performs significantly fewer transmissions per message but also has lower average delivery delays than existing schemes; furthermore, it is highly scalable and retains good performance under a large range of scenarios.
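The "spray" phase can be sketched as binary spraying, one member of this family of schemes: a relay holding more than one copy hands over half of them at each new encounter, and a relay left with a single copy waits to deliver it directly. The node names and contact sequence below are hypothetical:

```python
# Binary spraying: a holder with n > 1 copies gives floor(n/2) copies to a
# newly encountered node; holders with one copy keep it for direct delivery.

def binary_spray(copies, encounters):
    """copies: initial copy budget L at the source.
    encounters: list of (holder, met_node) contact events, in order.
    Returns {node: copies held} after processing the contacts."""
    held = {"src": copies}
    for holder, met in encounters:
        n = held.get(holder, 0)
        if n > 1 and met not in held:
            held[met] = n // 2        # hand over half of the copies
            held[holder] = n - n // 2
    return held

# L = 8 copies; the source meets a, then a meets b, then the source meets c.
state = binary_spray(8, [("src", "a"), ("a", "b"), ("src", "c")])
print(state)  # {'src': 2, 'a': 2, 'b': 2, 'c': 2}
```

The copy budget is conserved, so the total overhead is capped at L transmissions of the message body regardless of network size.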
TWO
TECHNIQUES FOR FAST COMPUTATION OF CONSTRAINED SHORTEST PATHS:--JAVA--2008
Computing constrained shortest paths
is fundamental to some important network functions such as QoS routing, MPLS
path selection, ATM circuit routing, and traffic engineering. The problem is to
find the cheapest path that satisfies certain constraints. In particular,
finding the cheapest delay-constrained path is critical for real-time data
flows such as voice/video calls. Because the problem is NP-complete, much research has focused on designing heuristic algorithms that solve the ε-approximation of the
problem with an adjustable accuracy. A common approach is to discretize (i.e.,
scale and round) the link delay or link cost, which transforms the original
problem to a simpler one solvable in polynomial time. The efficiency of the
algorithms directly relates to the magnitude of the errors introduced during
discretization. In this paper, we propose two techniques that reduce the
discretization errors, which allow faster algorithms to be designed. Reducing
the overhead of computing constrained shortest paths is practically important
for the successful design of a high-throughput QoS router, which is limited in
both processing power and memory space. Our simulations show that the new
algorithms reduce the execution time by an order of magnitude on power-law
topologies with 1000 nodes.
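The discretization approach can be sketched as a generic scale-and-round dynamic program; this is not the paper's two specific error-reduction techniques, and the toy graph below is invented. Rounding delays up to multiples of a step keeps every returned path feasible (the error is one-sided), at the cost of possibly missing a slightly cheaper path:

```python
# Delay-constrained cheapest path after discretizing link delays:
# Bellman-Ford style relaxation over (node, rounded-delay-budget) states.

import math

def cheapest_delay_constrained(edges, src, dst, delay_bound, step):
    """edges: list of (u, v, cost, delay). Returns the min cost of a path
    from src to dst whose rounded delay fits within delay_bound."""
    budget = int(delay_bound // step)
    nodes = {u for u, v, c, d in edges} | {v for u, v, c, d in edges}
    INF = float("inf")
    # best[v][t] = min cost to reach v using at most t delay units
    best = {v: [INF] * (budget + 1) for v in nodes}
    best[src] = [0.0] * (budget + 1)
    for _ in range(len(nodes) - 1):
        for u, v, cost, delay in edges:
            d_units = math.ceil(delay / step)   # round delay up: stays feasible
            for t in range(d_units, budget + 1):
                if best[u][t - d_units] + cost < best[v][t]:
                    best[v][t] = best[u][t - d_units] + cost
    return best[dst][budget]

edges = [("s", "a", 1.0, 2.0), ("a", "t", 1.0, 2.0),  # cheap but slow
         ("s", "t", 5.0, 1.0)]                        # costly but fast
print(cheapest_delay_constrained(edges, "s", "t", 4.0, 1.0))  # 2.0
print(cheapest_delay_constrained(edges, "s", "t", 3.0, 1.0))  # 5.0
```

A coarser step shrinks the state space (and hence the running time) but inflates the one-sided rounding error, which is exactly the trade-off the error-reduction techniques target.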
PROBABILISTIC
PACKET MARKING FOR LARGE-SCALE IP TRACEBACK:--DOTNET
We present an approach to IP traceback based
on the probabilistic packet marking paradigm. Our approach, which we call
randomize-and-link, uses large checksum cords to “link” message fragments in a
way that is highly scalable, for the checksums serve both as associative
addresses and data integrity verifiers. The main advantage of these checksum
cords is that they spread the addresses of possible router messages across a
spectrum that is too large for the attacker to easily create messages that
collide with legitimate messages.
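For contrast, here is a toy sketch of the classic single-field node-sampling variant of probabilistic packet marking, deliberately much simpler than the randomize-and-link scheme with checksum cords described above; the router names and marking probability are made up:

```python
# Node sampling: each router on the attack path overwrites the packet's
# single mark field with probability p. Over many packets, the victim
# recovers the set of routers on the path from the marks it observes.

import random

def send_packet(path, p, rng):
    mark = None
    for router in path:
        if rng.random() < p:
            mark = router   # a later router may overwrite an earlier mark
    return mark

rng = random.Random(42)
path = ["r1", "r2", "r3", "r4"]
marks = [send_packet(path, 0.3, rng) for _ in range(2000)]
recovered = {m for m in marks if m is not None}
print(sorted(recovered))  # with enough packets, every router on the path appears
```

Routers far from the victim survive as the final mark less often, which is why practical schemes need many packets and, as the abstract notes, stronger protection against spoofed marks.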
DUAL-LINK
FAILURE RESILIENCY THROUGH BACKUP LINK MUTUAL EXCLUSION:--JAVA
Networks employ link protection to
achieve fast recovery from link failures. While the first link failure can be
protected using link protection, there are several alternatives for protecting
against the second failure. This paper formally classifies the approaches to
dual-link failure resiliency. One of the strategies to recover from dual-link
failures is to employ link protection for the two failed links independently,
which requires that two links may not use each other in their backup paths if they
may fail simultaneously. Such a requirement is referred to as backup link
mutual exclusion (BLME) constraint and the problem of identifying a backup path
for every link that satisfies the above requirement is referred to as the BLME
problem. This paper develops the necessary theory to establish the sufficient
conditions for existence of a solution to the BLME problem. Solution
methodologies for the BLME problem are developed using two approaches: 1) formulating the backup path selection as an integer linear program and 2) developing a polynomial-time heuristic based on minimum-cost path routing. The
ILP formulation and heuristic are applied to six networks and their performance
is compared with approaches that assume precise knowledge of dual-link failure.
It is observed that a solution exists for all of the six networks considered.
The heuristic approach is shown to obtain feasible solutions that are resilient
to most dual-link failures, although the backup path lengths may be
significantly higher than optimal. In addition, the paper illustrates the
significance of the knowledge of failure location by showing that a network
with higher connectivity may require less capacity than one with lower
connectivity to recover from dual-link failures.
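The BLME constraint itself is easy to state in code. A minimal sketch with hypothetical link names (this is only the feasibility check, not the ILP or the heuristic): two links that may fail simultaneously violate the constraint exactly when each appears on the other's backup path.

```python
# Backup link mutual exclusion (BLME) check: links a and b, which may fail
# together, must not use each other in their backup paths.

def violates_blme(backup, a, b):
    """backup: {link: [links on its backup path]}.
    True if a and b mutually depend on each other for recovery."""
    return b in backup.get(a, []) and a in backup.get(b, [])

backup = {
    "e1": ["e3", "e4"],   # e1's backup path avoids e2: fine
    "e2": ["e1", "e5"],   # e2's backup uses e1
}
print(violates_blme(backup, "e1", "e2"))  # False: only one direction depends
backup["e1"] = ["e2", "e4"]               # now they mutually depend
print(violates_blme(backup, "e1", "e2"))  # True
```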
A
DISTRIBUTED DATABASE ARCHITECTURE FOR GLOBAL ROAMING IN NEXT-GENERATION MOBILE
NETWORKS:--JAVA--2004
The next-generation mobile network
will support terminal mobility, personal mobility, and service provider
portability, making global roaming seamless. A location-independent personal
telecommunication number (PTN) scheme is conducive to implementing such a
global mobile system. However, the non-geographic PTNs coupled with the
anticipated large number of mobile users in future mobile networks may
introduce very large centralized databases. This necessitates research into the
design and performance of high-throughput database technologies used in mobile
systems to ensure that future systems will be able to carry efficiently the
anticipated loads. This paper proposes a scalable, robust, efficient location
database architecture based on the location-independent PTNs. The proposed
multitree database architecture consists of a number of database subsystems,
each of which is a three-level tree structure and is connected to the others
only through its root. By exploiting the localized nature of calling and
mobility patterns, the proposed architecture effectively reduces the database
loads as well as the signaling traffic incurred by the location registration
and call delivery procedures. In addition, two memory-resident database
indices, memory-resident direct file and T-tree, are proposed for the location
databases to further improve their throughput. An analytical model and numerical
results are presented to evaluate the efficiency of the proposed database
architecture. Results have revealed that the proposed database architecture for
location management can effectively support the anticipated high user density
in future mobile networks.
NETWORK
BORDER PATROL: PREVENTING CONGESTION COLLAPSE AND PROMOTING FAIRNESS IN THE
INTERNET:--JAVA--2004
The Internet's excellent scalability
and robustness result in part from the end-to-end nature of Internet congestion
control. End-to-end congestion control algorithms alone, however, are unable to
prevent the congestion collapse and unfairness created by applications that are
unresponsive to network congestion. To address these maladies, we propose and
investigate a novel congestion-avoidance mechanism called network border patrol
(NBP). NBP entails the exchange of feedback between routers at the borders of a
network in order to detect and restrict unresponsive traffic flows before they
enter the network, thereby preventing congestion within the network. Moreover,
NBP is complemented with the proposed enhanced core-stateless fair queueing
(ECSFQ) mechanism, which provides fair bandwidth allocations to competing
flows. Both NBP and ECSFQ are compliant with the Internet philosophy of pushing
complexity toward the edges of the network whenever possible. Simulation
results show that NBP effectively eliminates congestion collapse and that, when
combined with ECSFQ, approximately max-min fair bandwidth allocations can be
achieved for competing flows.
IEEE Software Engineering Projects
ATOMICITY
ANALYSIS OF SERVICE COMPOSITION ACROSS ORGANIZATIONS:--J2EE--2009
Atomicity is a highly desirable
property for achieving application consistency in service compositions. To
achieve atomicity, a service composition should satisfy the atomicity sphere, a
structural criterion for the backend processes of involved services. Existing
analysis techniques for the atomicity sphere generally assume complete
knowledge of all involved backend processes. Such an assumption is invalid when
some service providers do not release all details of their backend processes to
service consumers outside the organizations. To address this problem, we
propose a process algebraic framework to publish atomicity-equivalent public
views from the backend processes. These public views extract relevant task
properties and reveal only partial process details that service providers need
to expose. Our framework enables the analysis of the atomicity sphere for
service compositions using these public views instead of their backend
processes. This allows service consumers to choose suitable services such that
their composition satisfies the atomicity sphere without disclosing the details
of their backend processes. Based on the theoretical result, we present
algorithms to construct atomicity-equivalent public views and to analyze the
atomicity sphere for a service composition. Two case studies from the supply
chain and insurance domains are given to evaluate our proposal and demonstrate
the applicability of our approach.
USING
THE CONCEPTUAL COHESION OF CLASSES FOR FAULT PREDICTION IN OBJECT ORIENTED
SYSTEMS:--JAVA--2008
High cohesion is a desirable property in software systems, as it promotes reusability and maintainability. Measures for cohesion in object-oriented (OO) software reflect particular interpretations of cohesion and capture different aspects of it. Existing approaches calculate cohesion from structural information, for example, method attributes and references. The conceptual cohesion of classes, used in this project, is instead calculated from the unstructured information embedded in the source code, such as comments and identifiers. Latent Semantic Indexing is used to retrieve this unstructured information from the source code. A large case study on three open-source software systems is presented, which compares the new measure with an extensive set of existing metrics and uses them to construct models that predict software faults. The project thus both measures conceptual cohesion and predicts faults in object-oriented systems.
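The intuition can be sketched with plain term-frequency cosine similarity standing in for Latent Semantic Indexing (the real measure; the class texts below are invented): a class is conceptually cohesive when its methods' comments and identifiers talk about the same things.

```python
# Rough stand-in for conceptual cohesion: average pairwise cosine similarity
# of the methods' identifier/comment text, using raw term frequencies.

import math
from collections import Counter

def cosine(a, b):
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def conceptual_cohesion(method_texts):
    """Average pairwise similarity over all method pairs of a class."""
    pairs = [(i, j) for i in range(len(method_texts))
             for j in range(i + 1, len(method_texts))]
    return sum(cosine(method_texts[i], method_texts[j]) for i, j in pairs) / len(pairs)

cohesive  = ["read account balance", "update account balance", "print account balance"]
scattered = ["read account balance", "render html page", "parse xml config"]
print(conceptual_cohesion(cohesive) > conceptual_cohesion(scattered))  # True
```

LSI would additionally capture synonymy and topic structure that raw term matching misses, which is why the paper uses it instead of simple term vectors.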
THE
EFFECT OF PAIRS IN PROGRAM DESIGN TASKS:--DOTNET--2008
In this project efficiency of pairs
in program design tasks is identified by using pair programming concept. Pair
programming involves two developers simultaneously collaborating with each
other on the same programming task to design and code a solution. Algorithm
design and its implementation are normally merged and it provides feedback to
enhance the design. Previous controlled pair programming experiments did not
explore the efficacy of pairs against individuals in program design-related
tasks. Variations in programmer skills in a particular language or an
integrated development environment and the understanding of programming
instructions can mask the skill of subjects in program design-related tasks.
Programming aptitude tests (PATs) have been shown to correlate with programming
performance. PATs do not require understanding of programming instructions and
do not require a skill in any specific computer language. We conducted two controlled experiments with full-time professional programmers as the subjects, who worked on increasingly complex programming aptitude tasks related to problem solving and algorithmic design. In both experiments, pairs
significantly outperformed individuals, providing evidence of the value of
pairs in program design-related tasks.
ESTIMATION
OF DEFECTS BASED ON DEFECT DECAY MODEL: ED3M:--DOTNET--2008
An accurate prediction of the number
of defects in a software product during system testing contributes not only to
the management of the system testing process but also to the estimation of the
product’s required maintenance. Here, a new approach, called Estimation of
Defects based on Defect Decay Model (ED3M) is presented that computes an
estimate of the defects in an ongoing testing process. ED3M is based on estimation
theory. Unlike many existing approaches, the technique presented here does not
depend on historical data from previous projects or any assumptions about the
requirements and/or testers’ productivity. It is a completely automated
approach that relies only on the data collected during an ongoing testing
process. This is a key advantage of the ED3M approach as it makes it widely
applicable in different testing environments. Here, the ED3M approach has been
evaluated using five data sets from large industrial projects and two data sets
from the literature. In addition, a performance analysis has been conducted
using simulated data sets to explore its behavior using different models for
the input data. The results are very promising; they indicate the ED3M approach
provides accurate estimates with as fast or better convergence time in
comparison to well-known alternative techniques, while only using defect data
as the input.
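The decay idea can be sketched with a deliberately simplified estimator; this is not the actual ED3M maximum-likelihood machinery, and the decay form and grid bounds are assumptions. Cumulative defects found by interval t are modeled as N * (1 - exp(-b*t)), and a grid search fits the total defect count N to the data collected so far:

```python
# Fit a defect-decay curve N * (1 - exp(-b*t)) to cumulative defect counts
# by brute-force grid search over (N, b), and report the estimated total N.

import math

def estimate_total_defects(cumulative):
    """cumulative: defects found by the end of each test interval (1, 2, ...)."""
    best, best_err = None, float("inf")
    observed_max = cumulative[-1]
    for N in range(observed_max, observed_max * 4):
        for b in [i / 100 for i in range(1, 200)]:
            err = sum((N * (1 - math.exp(-b * (t + 1))) - y) ** 2
                      for t, y in enumerate(cumulative))
            if err < best_err:
                best, best_err = N, err
    return best

# Synthetic data generated with N = 100, b = 0.3, so the fit should land near 100.
data = [round(100 * (1 - math.exp(-0.3 * t))) for t in range(1, 8)]
print(estimate_total_defects(data))
```

Like ED3M, this uses only the defect data from the ongoing test process itself, with no historical inputs; unlike ED3M, it fixes the model family and fits it crudely.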
IEEE Mobile Computing Projects
A
TABU SEARCH ALGORITHM FOR CLUSTER BUILDING IN WIRELESS SENSOR
NETWORKS:--DOTNET--2009
The main challenge in wireless
sensor network deployment pertains to optimizing energy consumption when
collecting data from sensor nodes. This paper proposes a new centralized
clustering method for a data collection mechanism in wireless sensor networks,
which is based on network energy maps and Quality-of-Service (QoS) requirements.
The clustering problem is modeled as a hypergraph partitioning and its
resolution is based on a tabu search heuristic. Our approach defines moves
using largest size cliques in a feasibility cluster graph. Compared to other
methods (CPLEX-based method, distributed method, simulated annealing-based
method), the results show that our tabu search-based approach returns
high-quality solutions in terms of cluster cost and execution time. As a
result, this approach is suitable for handling network extensibility in a
satisfactory manner.
ROUTE
STABILITY IN MANETS UNDER THE RANDOM DIRECTION MOBILITY MODEL:--DOTNET--2009
A fundamental issue arising in
mobile ad hoc networks (MANETs) is the selection of the optimal path between
any two nodes. A method that has been advocated to improve routing efficiency
is to select the most stable path so as to reduce the latency and the overhead
due to route reconstruction. In this work, we study both the availability and
the duration probability of a routing path that is subject to link failures
caused by node mobility. In particular, we focus on the case where the network
nodes move according to the Random Direction model, and we derive both exact
and approximate (but simple) expressions of these probabilities. Through our
results, we study the problem of selecting an optimal route in terms of path
availability. Finally, we propose an approach to improve the efficiency of
reactive routing protocols.
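A toy version of the path-availability criterion can make the route-selection idea concrete; the paper derives exact and approximate expressions under the Random Direction model, whereas here per-link availabilities are simply given and links are assumed to fail independently:

```python
# If links fail independently, a route's availability is the product of its
# links' availabilities; the most stable path maximizes that product.

def path_availability(link_avail, path):
    prob = 1.0
    for link in path:
        prob *= link_avail[link]
    return prob

link_avail = {"ab": 0.9, "bc": 0.9, "ac": 0.75}
two_hop = path_availability(link_avail, ["ab", "bc"])   # 0.81
one_hop = path_availability(link_avail, ["ac"])         # 0.75
print(two_hop > one_hop)  # a longer route can still be the more stable one
```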
GREEDY
ROUTING WITH ANTI-VOID TRAVERSAL FOR WIRELESS SENSOR NETWORKS:--DOTNET--2009
The unreachability problem (i.e.,
the so-called void problem) that exists in the greedy routing algorithms has
been studied for the wireless sensor networks. Some of the current research
work cannot fully resolve the void problem, while there exist other schemes
that can guarantee the delivery of packets with the excessive consumption of
control overheads. In this paper, a greedy antivoid routing (GAR) protocol is
proposed to solve the void problem with increased routing efficiency by
exploiting the boundary finding technique for the unit disk graph (UDG). The
proposed rolling-ball UDG boundary traversal (RUT) is employed to completely
guarantee the delivery of packets from the source to the destination node under
the UDG network. The boundary map (BM) and the indirect map searching (IMS)
scheme are proposed as efficient algorithms for the realization of the RUT
technique. Moreover, the hop count reduction (HCR) scheme is utilized as a
short-cutting technique to reduce the routing hops by listening to the neighbor’s
traffic, while the intersection navigation (IN) mechanism is proposed to obtain
the best rolling direction for boundary traversal with the adoption of shortest
path criterion. In order to maintain the network requirement of the proposed
RUT scheme under the non-UDG networks, the partial UDG construction (PUC)
mechanism is proposed to transform the non-UDG into UDG setting for a portion
of nodes that facilitate boundary traversal. These three schemes are
incorporated within the GAR protocol to further enhance the routing performance
with reduced communication overhead. The proofs of correctness for the GAR
scheme are also given in this paper. Compared with existing localized
routing algorithms, the simulation results show that the proposed GAR-based
protocols can provide better routing efficiency.
CELL
BREATHING TECHNIQUES FOR LOAD BALANCING IN WIRELESS LANS:--DOTNET--2009
Maximizing network throughput while
providing fairness is one of the key challenges in wireless LANs (WLANs). This
goal is typically achieved when the load of access points (APs) is balanced.
Recent studies on operational WLANs, however, have shown that AP load is often
substantially uneven. To alleviate such imbalance of load, several load
balancing schemes have been proposed. These schemes commonly require
proprietary software or hardware at the user side for controlling the user-AP
association. In this paper we present a new load balancing technique by
controlling the size of WLAN cells (i.e., AP’s coverage range), which is conceptually
similar to cell breathing in cellular networks. The proposed scheme does not
require any modification to the user side or to the IEEE 802.11 standard. It only
requires the ability of dynamically changing the transmission power of the AP
beacon messages. We develop a set of polynomial time algorithms that find the
optimal beacon power settings which minimize the load of the most congested AP.
We also consider the problem of network-wide min-max load balancing. Simulation
results show that the performance of the proposed method is comparable with or
superior to the best existing association-based methods.
LOCAL
CONSTRUCTION OF NEAR-OPTIMAL POWER SPANNERS FOR WIRELESS AD-HOC
NETWORKS:--DOTNET
We present a local distributed
algorithm that, given a wireless ad hoc network modeled as a unit disk graph U
in the plane, constructs a planar power spanner of U whose degree is bounded by
k and whose stretch factor is bounded by 1 + (2 sin(π/k))^p, where k ≥ 10 is an integer parameter and p ∈ [2, 5] is the power exponent constant. For the
same degree bound k, the stretch factor of our algorithm significantly improves
the previous best bounds by Song et al. We show that this bound is near-optimal
by proving that the slightly smaller stretch factor of 1 + (2 sin(π/(k+1)))^p is
unattainable for the same degree bound k. In contrast to previous algorithms
for the problem, the presented algorithm is local. As a consequence, the
algorithm is highly scalable and robust. Finally, while the algorithm is
efficient and easy to implement in practice, it relies on deep insights on the
geometry of unit disk graphs and novel techniques that are of independent
interest.
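The two stretch-factor bounds, 1 + (2 sin(π/k))^p achieved and 1 + (2 sin(π/(k+1)))^p unattainable, are easy to evaluate numerically for the stated parameter ranges (k ≥ 10, p ∈ [2, 5]); the specific (k, p) samples below are arbitrary:

```python
# Evaluate the achieved stretch bound against the proven-unattainable one
# for a few parameter choices in the stated ranges.

import math

def stretch_bound(k, p):
    return 1 + (2 * math.sin(math.pi / k)) ** p

for k in (10, 14):
    for p in (2, 5):
        achieved = stretch_bound(k, p)
        lower = 1 + (2 * math.sin(math.pi / (k + 1))) ** p
        assert lower < achieved   # the near-optimality gap is strictly positive
        print(f"k={k}, p={p}: bound={achieved:.4f}, unattainable={lower:.4f}")
```

The gap between the two expressions shrinks as k grows, which is the sense in which the construction is near-optimal.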
INTRUSION
DETECTION IN HOMOGENEOUS & HETEROGENEOUS WIRELESS SENSOR
NETWORKS:--JAVA--2008
Intrusion detection in Wireless
Sensor Network (WSN) is of practical interest in many applications such as
detecting an intruder in a battlefield. The intrusion detection is defined as a
mechanism for a WSN to detect the existence of inappropriate, incorrect, or
anomalous moving attackers. In this paper, we consider this issue according to
both homogeneous and heterogeneous WSN models. Furthermore, we consider two sensing detection
models: single-sensing detection and multiple-sensing detection. Our
simulation results show the advantage of multiple sensor heterogeneous WSNs.
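The gap between the two sensing models can be illustrated with a small Monte Carlo sketch; the deployment area, sensor count, and sensing range are invented, and this is not the paper's analytical model:

```python
# Sensors scattered uniformly in a unit square; an intruder at the center is
# detected when at least m sensors lie within sensing range r
# (m = 1: single-sensing; m > 1: multiple-sensing).

import random

def detection_probability(n_sensors, r, m, trials=2000, rng=None):
    rng = rng or random.Random(7)   # fixed seed for reproducibility
    hits = 0
    for _ in range(trials):
        sensors = [(rng.random(), rng.random()) for _ in range(n_sensors)]
        in_range = sum(1 for x, y in sensors
                       if (x - 0.5) ** 2 + (y - 0.5) ** 2 <= r * r)
        hits += in_range >= m
    return hits / trials

single = detection_probability(50, 0.15, m=1)
triple = detection_probability(50, 0.15, m=3)
print(single, triple)  # requiring m sensors lowers the detection probability
```

Multiple-sensing trades detection probability for robustness against a single faulty or compromised sensor, which is why heterogeneous deployments with a few stronger sensors help.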
LOCATION
BASED SPATIAL QUERY PROCESSING IN WIRELESS BROADCAST ENVIRONMENTS:--JAVA--2008
Location-based spatial queries (LBSQs) refer to spatial queries whose answers rely on the location of the inquirer. Efficient processing of LBSQs is of critical importance with the ever-increasing deployment and use of mobile technologies. We show that LBSQs have certain unique characteristics that traditional spatial query
processing in centralized databases does not address. For example, a
significant challenge is presented by wireless broadcasting environments, which
have excellent scalability but often exhibit high-latency database access. In
this paper, we present a novel query processing technique that, while
maintaining high scalability and accuracy, manages to reduce the latency
considerably in answering LBSQs. Our approach is based on peer-to-peer
sharing, which enables us to process queries without delay at a mobile host by
using query results cached in its neighboring mobile peers. We demonstrate the
feasibility of our approach through a probabilistic analysis, and we illustrate
the appeal of our technique through extensive simulation results.
BANDWIDTH
ESTIMATION FOR IEEE 802.11 BASED ADHOC NETWORK:--JAVA--2008
Since 2005, IEEE 802.11-based
networks have been able to provide a certain level of quality of service (QoS)
by the means of service differentiation, due to the IEEE 802.11e amendment.
However, no mechanism or method has been standardized to accurately evaluate
the amount of resources remaining on a given channel. Such an evaluation would,
however, be a good asset for bandwidth-constrained applications. In multihop ad
hoc networks, such evaluation becomes even more difficult. Consequently,
despite the various contributions around this research topic, the estimation of
the available bandwidth still represents one of the main issues in this field.
In this paper, we propose an improved mechanism to estimate the available
bandwidth in IEEE 802.11-based ad hoc networks. Through simulations, we compare
the accuracy of the estimation we propose to the estimation performed by other
state-of-the-art QoS protocols, BRuIT, AAC, and QoS-AODV.
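A back-of-the-envelope sketch of the underlying idea, far simpler than the mechanism the paper proposes: a node estimates its available bandwidth as the fraction of time it senses the channel idle times the raw capacity, minus a safety margin for collisions and protocol overhead. The capacity, busy time, and overhead factor below are hypothetical:

```python
# Idle-fraction estimate of available bandwidth on a sensed channel.

def available_bandwidth(capacity_mbps, busy_time, window, overhead=0.15):
    """busy_time/window in the same time units; overhead in [0, 1)."""
    idle_fraction = 1 - busy_time / window
    return capacity_mbps * idle_fraction * (1 - overhead)

# An 11 Mb/s channel sensed busy for 600 ms out of a 1 s measurement window.
print(round(available_bandwidth(11, 0.6, 1.0), 2))  # 3.74 Mb/s
```

In multihop ad hoc networks the hard part, which this sketch ignores entirely, is that sender and receiver sense different idle periods and their overlap must be estimated.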