LATEST CSE IEEE PROJECTS | COMPUTER SCIENCE FINAL YEAR PROJECT TITLES | 2012
IEEE Computer Science Projects
RESEQUENCING ANALYSIS OF STOP-AND-WAIT ARQ FOR PARALLEL MULTICHANNEL COMMUNICATIONS:--DOTNET--2009
Abstract—In this paper, we consider a multichannel data communication system in which the stop-and-wait automatic-repeat-request protocol for parallel channels with an in-sequence delivery guarantee (MSW-ARQ-inS) is used for error control. We evaluate the resequencing delay and the resequencing buffer occupancy. Under the assumption that all channels have the same transmission rate but possibly different time-invariant error rates, we derive the probability generating function of the resequencing buffer occupancy and the probability mass function of the resequencing delay. Then, by assuming the Gilbert–Elliott model for each channel, we extend our analysis to time-varying channels. Through examples, we compute the probability mass functions of the resequencing buffer occupancy and the resequencing delay for time-invariant channels. From numerical and simulation results, we analyze trends in the mean resequencing buffer occupancy and the mean resequencing delay as functions of system parameters. We expect that the modeling technique and analytical approach used in this paper can be applied to the performance evaluation of other ARQ protocols (e.g., selective-repeat ARQ) over multiple time-varying channels.
Index Terms—In-sequence delivery, modeling and performance, multichannel data communications, resequencing buffer occupancy, resequencing delay, SW-ARQ.
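As an illustrative sketch only (not the paper's analytical derivation), the in-sequence delivery mechanism can be simulated directly: frames are assigned round-robin to parallel stop-and-wait channels, each transmission attempt succeeds with a per-channel probability, and a frame's resequencing delay is the time it waits in the buffer for all earlier frames to arrive. All function and parameter names here are hypothetical.

```python
import random

def simulate(num_channels, error_rates, num_frames, seed=0):
    """Simplified simulation of stop-and-wait ARQ over parallel channels
    with in-sequence delivery. Frame k goes to channel k % num_channels;
    an attempt on channel c succeeds with probability 1 - error_rates[c].
    Returns the mean resequencing delay in slots."""
    rng = random.Random(seed)
    clock = [0] * num_channels          # next free time slot per channel
    arrival = []                        # slot at which each frame is received correctly
    for k in range(num_frames):
        c = k % num_channels
        attempts = 1
        while rng.random() < error_rates[c]:
            attempts += 1               # geometric number of retransmissions
        clock[c] += attempts            # one slot per attempt (frame + ACK)
        arrival.append(clock[c])
    # in-sequence release: frame k leaves the buffer once all frames j <= k arrived
    release, latest = [], 0
    for t in arrival:
        latest = max(latest, t)
        release.append(latest)
    delays = [r - a for r, a in zip(release, arrival)]
    return sum(delays) / len(delays)
```

With equal, error-free channels the delay is zero; making one channel much lossier than the others forces frames on the fast channels to wait, which is the trend the paper studies analytically.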
COLLUSIVE PIRACY PREVENTION IN P2P CONTENT DELIVERY NETWORKS:--J2EE--2009
Collusive piracy is the main source of intellectual property violations within the boundary of a P2P network. Paid clients (colluders) may illegally share copyrighted content files with unpaid clients (pirates). Such online piracy has hindered the use of open P2P networks for commercial content delivery. We propose a proactive content poisoning scheme to stop colluders and pirates from committing copyright infringement in P2P file sharing. The basic idea is to detect pirates in a timely manner using identity-based signatures and time-stamped tokens. The scheme stops collusive piracy without hurting legitimate P2P clients by targeting poisoning exclusively at detected violators. We developed a new peer authorization protocol (PAP) to distinguish pirates from legitimate clients. Detected pirates receive poisoned chunks in their repeated download attempts and are thus severely penalized, with no chance of downloading successfully in tolerable time. Based on simulation results, we find a 99.9 percent prevention rate in Gnutella, KaZaA, and Freenet, and an 85–98 percent prevention rate on eMule, eDonkey, Morpheus, and others. The scheme is shown to be less effective in protecting poison-resilient networks such as BitTorrent and Azureus. Our work opens up low-cost P2P technology for copyrighted content delivery; its advantages lie mainly in minimal delivery cost, higher content availability, and copyright compliance in exploiting P2P network resources.
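To make the time-stamped token idea concrete, here is a minimal sketch, not the actual PAP design: the distributor issues a token binding a peer's identity to an expiry time, and any peer serving a chunk verifies it before sending clean data. The secret key, token layout, and TTL are assumptions for illustration; the real protocol uses identity-based signatures rather than a shared secret.

```python
import hmac
import hashlib

SECRET = b"distributor-secret"   # hypothetical shared key for the sketch

def issue_token(peer_id: str, now: float, ttl: float = 3600.0) -> str:
    """Issue a time-stamped authorization token for a paid client."""
    expiry = int(now + ttl)
    mac = hmac.new(SECRET, f"{peer_id}:{expiry}".encode(), hashlib.sha256).hexdigest()
    return f"{peer_id}:{expiry}:{mac}"

def verify_token(token: str, now: float) -> bool:
    """A serving peer checks the requester's token; a missing, forged,
    or expired token marks the requester as a pirate, who would then
    receive poisoned chunks instead of clean ones."""
    try:
        peer_id, expiry, mac = token.rsplit(":", 2)
    except ValueError:
        return False                 # malformed token
    expected = hmac.new(SECRET, f"{peer_id}:{expiry}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected) and now < int(expiry)
```

The key design point the abstract relies on is selectivity: verification must be cheap enough to run on every chunk request, so only detected violators are poisoned and legitimate clients are untouched.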
NOISE REDUCTION BY FUZZY IMAGE FILTERING:--JAVA--2006
A new fuzzy filter is presented for
the noise reduction of images corrupted with additive noise. The filter
consists of two stages. The first stage computes a fuzzy derivative for eight
different directions. The second stage uses these fuzzy derivatives to perform
fuzzy smoothing by weighting the contributions of neighboring pixel values.
Both stages are based on fuzzy rules which make use of membership functions.
The filter can be applied iteratively to effectively reduce heavy noise. In
particular, the shape of the membership functions is adapted according to the
remaining noise level after each iteration, making use of the distribution of
the homogeneity in the image. A statistical model for the noise distribution
can be incorporated to relate the homogeneity to the adaptation scheme of the
membership functions. Experimental results are obtained to show the feasibility
of the proposed approach. These results are also compared to other filters by
numerical measures and visual inspection.
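The two-stage idea can be sketched in a few lines: stage one estimates the derivative toward each of the eight neighbors, and stage two smooths by weighting neighbors whose derivative is "small" in the fuzzy sense. The triangular membership function and the threshold `k` below are assumptions for illustration; the paper adapts the membership shape per iteration from the remaining noise level.

```python
import numpy as np

def small(value, k=30.0):
    """Triangular membership degree for 'the derivative is small'
    (assumed shape; adapted per iteration in the actual filter)."""
    return np.maximum(0.0, 1.0 - np.abs(value) / k)

def fuzzy_filter(img, k=30.0):
    """One iteration of a simplified two-stage fuzzy filter:
    stage 1: derivatives toward the 8 neighbors;
    stage 2: smoothing weighted by the 'small derivative' degree,
    so strong edges (large derivatives) contribute little."""
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros((h, w), dtype=float)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for y in range(h):
        for x in range(w):
            c = padded[y + 1, x + 1]
            num, den = 0.0, 0.0
            for dy, dx in offsets:
                n = padded[y + 1 + dy, x + 1 + dx]
                wgt = small(n - c, k)     # fuzzy degree that no edge lies here
                num += wgt * n
                den += wgt
            out[y, x] = (num + c) / (den + 1.0)   # center pixel always included
    return out
```

Iterating this filter, with `k` shrunk as the noise level drops, is what lets it reduce heavy additive noise while weighting edge pixels out of the average.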
PATTERN ANALYSIS AND MACHINE INTELLIGENCE
FACE RECOGNITION USING LAPLACIANFACES:--JAVA--2005
Abstract: Face recognition is a fairly controversial subject right now. A system such as this can recognize and track dangerous criminals and terrorists in a crowd, but some contend that it is an extreme invasion of privacy. Proponents of large-scale face recognition feel it is a necessary evil to make our country safer. It could also benefit the visually impaired, allowing them to interact more easily with their environment, and a computer-vision-based authentication system could grant access to a computer or a specific room using face recognition. Another possible application is integrating this technology into an artificial intelligence system for more realistic interaction with humans.

We propose an appearance-based face recognition method called the Laplacianface approach. Using Locality Preserving Projections (LPP), the face images are mapped into a face subspace for analysis. Unlike Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which effectively see only the Euclidean structure of the face space, LPP finds an embedding that preserves local information and obtains a face subspace that best detects the essential face manifold structure. The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace–Beltrami operator on the face manifold. In this way, unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced. Theoretical analysis shows that PCA, LDA, and LPP can all be obtained from different graph models. We compare the proposed Laplacianface approach with the Eigenface and Fisherface methods on three different face data sets. Experimental results suggest that the Laplacianface approach provides a better representation and achieves lower error rates in face recognition.

Principal Component Analysis (PCA) is a statistical method under the broad title of factor analysis. The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables) needed to describe the data economically. This is the case when there is strong correlation between observed variables. PCA can be used for prediction, redundancy removal, feature extraction, data compression, and more. Because PCA is a well-known, powerful technique in the linear domain, it suits applications with linear models, such as signal processing, image processing, system and control theory, and communications. The main idea of using PCA for face recognition is to project the large 1-D vector of pixels constructed from a 2-D face image onto the compact principal components of the feature space. This is called eigenspace projection. The eigenspace is calculated by identifying the eigenvectors of the covariance matrix derived from a set of face images (vectors).
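Eigenspace projection as described above can be sketched with a singular value decomposition, which yields the covariance eigenvectors without forming the covariance matrix explicitly. This is a generic PCA sketch, not the paper's LPP method; function names are hypothetical.

```python
import numpy as np

def pca_project(images, num_components):
    """Eigenspace projection: flatten each 2-D face image into a 1-D
    vector, mean-center, and project onto the top principal components
    (eigenvectors of the covariance matrix, obtained via SVD)."""
    X = np.stack([img.ravel() for img in images]).astype(float)  # (n, d)
    mean = X.mean(axis=0)
    centered = X - mean
    # rows of Vt are the principal axes, ordered by decreasing variance
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:num_components]                  # (k, d)
    return centered @ basis.T, basis, mean       # (n, k) feature vectors

def reconstruct(features, basis, mean):
    """Map feature-space vectors back to pixel space."""
    return features @ basis + mean
```

For recognition, a probe image is projected with the same `basis` and `mean` and compared to the gallery's feature vectors, typically by Euclidean distance; LPP replaces this variance-maximizing basis with one that preserves local neighborhood structure.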
INFORMATION TECHNOLOGY IN BIOMEDICINE
ENHANCING PRIVACY AND AUTHORIZATION CONTROL SCALABILITY IN THE GRID THROUGH ONTOLOGIES:--JAVA--2009
The use of data Grids for sharing relevant data has proven successful in many research disciplines. However, the use of these environments when personal data are involved (such as in healthcare) remains limited due to a lack of trust. Many approaches provide encrypted storage and key sharing to prevent access by unauthorized users, but these are additional layers that must be managed alongside the authorization policies. We present in this paper a privacy-enhancing technique that uses encryption tied to the structure of the data and of the organizations involved, providing a natural way to propagate authorization, together with a framework that fits many use cases. The paper describes the architecture and processes, and also shows results obtained on a medical imaging platform.
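One way authorization can propagate naturally along an organizational structure is hierarchical key derivation, where each node's key is derived from its parent's. This is a hypothetical sketch of the general idea, not the paper's ontology-based scheme; the hash construction and path layout are assumptions.

```python
import hashlib

def derive_key(parent_key: bytes, node_name: str) -> bytes:
    """Derive a child node's key from its parent's key, so that granting
    the key for an organizational node implicitly grants its whole subtree."""
    return hashlib.sha256(parent_key + node_name.encode()).digest()

def key_for_path(root_key: bytes, path):
    """Walk the organization tree, e.g. ['hospital-a', 'radiology', 'study-7'],
    deriving the key for the node at the end of the path."""
    key = root_key
    for node in path:
        key = derive_key(key, node)
    return key
```

Because a department's key determines every key beneath it, revoking or granting access at one level covers all data below, which is what keeps the number of managed authorization entries from growing with the data.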