Adaptive Optics in Ground-Based Telescopes
Adaptive optics is a technology now being used in ground-based telescopes to remove atmospheric tremor and thus provide a clearer and brighter view of stars. Without this system, the images obtained through telescopes on Earth appear blurred, which is caused by the turbulent mixing of air at different temperatures.
Adaptive optics in effect removes this atmospheric tremor. It brings together the latest in computers, materials science, electronic detectors, and digital control in a system that warps and bends a mirror in a telescope to counteract, in real time, the atmospheric distortion.
The advance promises to let ground-based telescopes reach their fundamental limits of resolution and sensitivity, outperforming space-based telescopes and ushering in a new era in optical astronomy. Finally, with this technology, it will be possible to see gas-giant-type planets in nearby solar systems in our Milky Way galaxy. Although about 100 such planets have been discovered in recent years, all were detected through indirect means, such as their gravitational effects on their parent stars, and none has actually been detected directly.
WHAT IS ADAPTIVE OPTICS?
Adaptive optics refers to optical systems which adapt to compensate for optical effects introduced by the medium between the object and its image. In theory, a telescope's resolving power is directly proportional to the diameter of its primary light-gathering lens or mirror. But in practice, images from large telescopes are blurred to a resolution no better than would be seen through a 20 cm aperture with no atmospheric blurring. At scientifically important infrared wavelengths, atmospheric turbulence degrades resolution by at least a factor of 10.
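The scale of the problem can be checked against the Rayleigh criterion for a circular aperture, theta ≈ 1.22 λ/D. The following minimal Python sketch (the 550 nm wavelength and the aperture sizes are illustrative assumptions, not values from the text) reproduces the 20 cm figure quoted above:

    import math

    def rayleigh_limit_arcsec(wavelength_m, aperture_m):
        # Diffraction-limited angular resolution of a circular aperture,
        # converted from radians to arcseconds.
        theta_rad = 1.22 * wavelength_m / aperture_m
        return math.degrees(theta_rad) * 3600.0

    lam = 550e-9                                  # visible light, 550 nm
    print(rayleigh_limit_arcsec(lam, 8.0))        # ~0.017" for an 8 m mirror
    print(rayleigh_limit_arcsec(lam, 0.20))       # ~0.69" for a 20 cm aperture

An 8 m mirror is thus diffraction-limited near 0.02 arcseconds, yet without adaptive optics atmospheric seeing holds it to roughly the 0.7 arcsecond resolution of a 20 cm aperture.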
Space telescopes avoid problems with the atmosphere, but they are enormously expensive and the limit on their aperture size is quite restrictive. The Hubble Space Telescope, the world's largest telescope in orbit, has an aperture of only 2.4 metres, while terrestrial telescopes can have diameters four times that size.
ANN for Misuse Detection
Because of the increasing dependence which companies and government agencies have on their computer networks, the importance of protecting these systems from attack is critical. A single intrusion into a computer network can result in the loss or unauthorized use or modification of large amounts of data, and can cause users to question the reliability of all of the information on the network. There are numerous methods of responding to a network intrusion, but they all require the accurate and timely identification of the attack.
Intrusion Detection                Systems
The timely and accurate detection of computer and network system intrusions has always been an elusive goal for system administrators and information security researchers. The individual creativity of attackers, the wide range of computer hardware and operating systems, and the ever-changing nature of the overall threat to target systems have contributed to the difficulty of effectively identifying intrusions. While the complexities of host computers already made intrusion detection a difficult endeavor, the increasing prevalence of distributed network-based systems and insecure networks such as the Internet has greatly increased the need for intrusion detection.
There are two general categories of intrusion detection technologies: anomaly detection and misuse detection. Anomaly detection identifies activities that vary from established patterns for users or groups of users. It typically involves the creation of knowledge bases that contain the profiles of the monitored activities.
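As an illustration of profile-based anomaly detection, the following minimal Python sketch treats a user's profile as just the mean and standard deviation of one monitored metric; the metric (failed logins per hour) and the 3-sigma threshold are illustrative assumptions, not taken from any particular system:

    from statistics import mean, stdev

    def build_profile(history):
        # Knowledge-base entry: summarize a user's established activity.
        return {"mean": mean(history), "stdev": stdev(history)}

    def is_anomalous(profile, observed, k=3.0):
        # Flag activity more than k standard deviations from the profile.
        return abs(observed - profile["mean"]) > k * profile["stdev"]

    profile = build_profile([2, 1, 3, 2, 2, 1, 3])  # typical failed logins/hour
    print(is_anomalous(profile, 25))                # True: sharp deviation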
The second general approach to intrusion detection is misuse detection. This technique involves the comparison of a user's activities with the known behaviors of attackers attempting to penetrate a system. While anomaly detection typically utilizes threshold monitoring to indicate when a certain established metric has been reached, misuse detection techniques frequently utilize a rule-based approach. When applied to misuse detection, the rules become scenarios for network attacks. The intrusion detection mechanism identifies a potential attack if a user's activities are found to be consistent with the established rules. The use of comprehensive rules is critical in the application of expert systems for intrusion detection.
Current approaches                to intrusion detection systems
Most current approaches to the process of detecting intrusions utilize some form of rule-based analysis. Rule-based analysis relies on sets of predefined rules that are provided by an administrator, automatically created by the system, or both. Expert systems are the most common form of rule-based intrusion detection approach. Early intrusion detection research efforts recognized the inefficiency of any approach that required a manual review of a system audit trail. While the information necessary to identify attacks was believed to be present within the voluminous audit data, an effective review of the material required the use of an automated system.
The use of expert system techniques in intrusion detection mechanisms was a significant milestone in the development of effective and practical detection-based information security systems.
An expert system consists of a set of rules that encode the knowledge of a human "expert". These rules are used by the system to draw conclusions about the security-related data from the intrusion detection system. Expert systems permit the incorporation of an extensive amount of human experience into a computer application that then utilizes that knowledge to identify activities that match the defined characteristics of misuse and attack.
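A minimal Python sketch of how such rules might be encoded and matched is given below; the rule names, thresholds, and event fields are hypothetical illustrations, not an actual deployed rule base:

    RULES = {
        "port-scan":   lambda e: e["distinct_ports"] > 100 and e["window_s"] < 10,
        "brute-force": lambda e: e["failed_logins"] > 20,
    }

    def match_rules(event):
        # Return the attack scenarios whose rules the event satisfies.
        return [name for name, rule in RULES.items() if rule(event)]

    event = {"distinct_ports": 250, "window_s": 4, "failed_logins": 0}
    print(match_rules(event))  # ['port-scan']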
Anthropomorphic Robot Hand
This paper presents an anthropomorphic robot hand called the Gifu hand II, which has a thumb and four fingers, all the joints of which are driven by servomotors built into the fingers and the palm. The thumb has four joints with four degrees of freedom (DOF); the other fingers have four joints with 3 DOF; and two axes of the joints near the palm cross orthogonally at one point, as is the case in the human hand. The Gifu hand II can be equipped with a six-axis force sensor at each fingertip and a developed distributed tactile sensor with 624 detecting points on its surface. The design concepts and the specifications of the Gifu hand II, the basic characteristics of the tactile sensor, and the pressure distributions at the time of object grasping are described and discussed herein. Our results demonstrate that the Gifu hand II has a high potential to perform dexterous object manipulations like the human hand.
INTRODUCTION
It is highly expected that forthcoming humanoid robots will execute various complicated tasks via communication with a human user. These humanoid robots will be equipped with anthropomorphic multifingered hands very much like the human hand; we call this a humanoid hand robot. Humanoid hand robots will eventually supplant human labor in the execution of intricate and dangerous tasks in areas such as manufacturing, space, the seabed, and so on. Further, the anthropomorphic hand will be provided as a prosthetic application for handicapped individuals.
Many multifingered robot hands (e.g., the Stanford-JPL hand by Salisbury et al. [1], the Utah/MIT hand by Jacobsen et al. [2], the JPL four-fingered hand by Jau [3], and the Anthrobot hand by Kyriakopoulos et al. [4]) have been developed. These robot hands are driven by actuators that are located in a place remote from the robot hand frame and connected by tendon cables. The elasticity of the tendon cable causes inaccurate joint angle control, and the long wiring of tendon cables may obstruct the robot's motion when the hand is attached to the tip of a robot arm. Moreover, these hands have been problematic as commercial products, particularly in terms of maintenance, due to their mechanical complexity.
To solve these problems, robot hands in which the actuators are built into the hand (e.g., the Belgrade/USC hand by Venkataraman et al. [5], the Omni hand by Rosheim [6], the NTU hand by Lin et al. [7], and the DLR hand by Liu et al. [8]) have been developed. However, these hands present a problem in that their movement is unlike that of the human hand, because the number of fingers and the number of joints in the fingers are insufficient. Recently, many reports on the use of tactile sensors [9]-[13] have been presented, all of which attempted to realize adequate object manipulation involving contact with the finger and palm. However, the development of a hand that combines a six-axial force sensor attached at the fingertip with a distributed tactile sensor mounted on the hand surface has been limited.
Our group developed the Gifu hand I [14], [15], a five-fingered hand driven by built-in servomotors. We investigated the hand's potential, basing the platform of the study on dexterous grasping and manipulation of objects. Because it had a non-negligible backlash in the gear transmission, we redesigned the anthropomorphic robot hand based on finite element analysis to reduce the backlash and enhance the output torque. We call this version the Gifu hand II.
A 64-Point Fourier Transform Chip
Fourth-generation wireless and mobile systems are currently the focus of research and development. Broadband wireless systems based on orthogonal frequency division multiplexing will allow packet-based high-data-rate communication suitable for video transmission and mobile Internet applications. Considering this, we propose a datapath architecture using dedicated hardware for the baseband processor. The most computationally intensive parts of such a high-data-rate system are the 64-point inverse FFT in the transmit direction and the Viterbi decoder in the receive direction. Accordingly, an appropriate design methodology for constructing them has to be chosen, considering a) how much silicon area is needed, b) how easily the particular architecture can be made flat for implementation in VLSI, c) how many wire crossings and how many long wires carrying signals to remote parts of the design are necessary in an actual implementation, and d) how small the power consumption can be. This paper describes a novel 64-point FFT/IFFT processor which has been developed as part of a larger research project to develop a single-chip wireless modem.
ALGORITHM FORMULATION
The discrete Fourier transform A(r) of a complex data sequence B(k) of length N, where r, k ∈ {0, 1, ..., N-1}, can be described as

A(r) = \sum_{k=0}^{N-1} B(k) \, W_N^{rk},

where W_N = e^{-2\pi j/N}. Let us consider N = MT, r = s + Tt, and k = l + Mm, where s ∈ {0, 1, ..., T-1}, t ∈ {0, 1, ..., M-1}, l ∈ {0, 1, ..., M-1}, and m ∈ {0, 1, ..., T-1}. Applying these substitutions to the first equation, and using W_N^{MT} = 1, W_N^{Mms} = W_T^{ms}, and W_N^{lTt} = W_M^{lt}, we get

A(s + Tt) = \sum_{l=0}^{M-1} \left[ W_N^{ls} \sum_{m=0}^{T-1} B(l + Mm) \, W_T^{ms} \right] W_M^{lt}.

This shows that it is possible to realize the FFT of length N by first decomposing it into one M-point and one T-point FFT, where N = MT, and combining them. However, this results in a two-dimensional rather than a one-dimensional FFT structure. We can formulate the 64-point FFT by setting M = T = 8.
This shows that it is possible to express the 64-point FFT in terms of a two-dimensional structure of 8-point FFTs plus 64 complex inter-dimensional constant multiplications. First, appropriate data samples undergo an 8-point FFT computation, and the outputs of each 8-point FFT are multiplied by the inter-dimensional constants W_{64}^{ls}. Eight such computations are needed to generate a full set of 64 intermediate data, which then undergo a second 8-point FFT operation; as in the first stage, eight such 8-point computations are required. Proper reshuffling of the data coming out of the second 8-point FFT stage generates the final output of the 64-point FFT.
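The decomposition can be checked numerically. The following Python sketch uses numpy's FFT as the 8-point building block and follows the indexing of the derivation above (M = T = 8); it illustrates the algorithm only, not the chip's datapath:

    import numpy as np

    def fft64_two_stage(x):
        # 64-point FFT as two banks of 8-point FFTs plus twiddle factors.
        M = T = 8
        B = x.reshape(T, M)                        # B[m, l] = x[l + M*m]
        stage1 = np.fft.fft(B, axis=0)             # T-point FFTs over m -> index s
        ls = np.outer(np.arange(T), np.arange(M))  # exponent l*s
        twiddled = stage1 * np.exp(-2j * np.pi * ls / 64)  # W_64^{l s}
        stage2 = np.fft.fft(twiddled, axis=1)      # M-point FFTs over l -> index t
        return stage2.T.reshape(64)                # reshuffle to A[s + T*t]

    x = np.random.randn(64) + 1j * np.random.randn(64)
    print(np.allclose(fft64_two_stage(x), np.fft.fft(x)))  # True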
Fig. Signal flow graph of an 8-point DIT FFT.
Realizing the 8-point FFT using the conventional DIT structure requires no true multiplication operation. The constants to be multiplied in the first two columns of the 8-point FFT structure are either 1 or j. In the third column, the multiplications by the constants are actually addition/subtraction operations followed by a multiplication by 1/√2, which can easily be realized using only a hardwired shift-and-add operation. Thus an 8-point FFT can be carried out without any true digital multiplier, which provides a way to realize a low-power 64-point FFT at reduced hardware cost. On the other hand, the number of non-trivial complex multiplications for the conventional 64-point radix-2 DIT FFT is 66. Thus the present approach results in a reduction of about 26% in complex multiplications compared to the conventional radix-2 64-point FFT. This reduction in arithmetic complexity further enhances the scope for realizing a low-power 64-point FFT processor. However, the arithmetic complexity of the proposed scheme is almost the same as that of the radix-4 FFT algorithm, since the radix-4 64-point FFT algorithm needs 52 non-trivial complex multiplications.
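As a rough illustration of the hardwired shift-and-add idea, 1/√2 ≈ 0.70710678 can be approximated by the five-term sum 2^-1 + 2^-3 + 2^-4 + 2^-6 + 2^-8 = 0.70703125; the number of terms here is an arbitrary precision choice for this sketch, not the precision used on the chip:

    def mul_inv_sqrt2(x):
        # Approximate x / sqrt(2) with shifts and adds only (integer x).
        return (x >> 1) + (x >> 3) + (x >> 4) + (x >> 6) + (x >> 8)

    print(mul_inv_sqrt2(1000), round(1000 / 2 ** 0.5))  # 705 vs 707 (truncation error)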
BIT for Intelligent System Design
The principle of built-in test (BIT) and self-test has been widely applied to the design and testing of complex, mixed-signal electronic systems, such as integrated circuits (ICs) and multifunctional instrumentation [1]. A system with BIT is characterized by its ability to identify its operating condition by itself, through the testing and diagnosis capabilities built into its structure. To ensure reliable performance, testability needs to be incorporated into the early stages of system and product design. Various techniques have been developed over the past decades to implement BIT. In the semiconductor industry, the objective of applying BIT is to improve the yield of chip fabrication, enable robust and efficient chip testing, and better cope with increasing circuit complexity and integration density. This has been achieved by having an IC chip generate its own test stimuli and measure the corresponding responses from the various elements within the chip to determine its condition. In recent years, BIT has seen increasing application in other branches of industry, e.g., manufacturing, aerospace, and transportation, for the purposes of system condition monitoring. In manufacturing systems, BIT facilitates automatic detection of tool wear and breakage and assists in corrective actions to ensure part quality and reduce machine downtime.
2. BIT TECHNIQUES
BIT techniques are classified as follows:
a. on-line BIT
b. off-line BIT
On-line BIT:
It includes concurrent and nonconcurrent techniques. Testing occurs during normal functional operation.
Concurrent on-line BIST - Testing occurs simultaneously with the normal operation mode; usually coding techniques or duplication and comparison are used [3] (a duplication-and-comparison sketch appears below).
Nonconcurrent on-line BIST - Testing is carried out while a system is in an idle state, often by executing diagnostic software or firmware routines.
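A minimal Python sketch of concurrent on-line testing by duplication and comparison, assuming a duplicated 8-bit adder checked on every operation (the module and its width are illustrative assumptions):

    def adder_primary(a, b):
        return (a + b) & 0xFF     # functional unit

    def adder_duplicate(a, b):
        return (a + b) & 0xFF     # duplicated copy of the same logic

    def checked_add(a, b):
        # Run both copies during normal operation; flag any mismatch at once.
        r1, r2 = adder_primary(a, b), adder_duplicate(a, b)
        if r1 != r2:
            raise RuntimeError("BIT: mismatch detected during normal operation")
        return r1

    print(checked_add(100, 59))   # 159, checked transparently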
Off-line BIT:
The system is not in its normal working mode; it usually uses on-chip test generators and output response analysers, or microdiagnostic routines (a minimal sketch of this style follows below). Functional off-line BIT is based on a functional description of the Component Under Test (CUT) and uses functional high-level fault models.
Structural off-line BIT is based on the structure of the CUT and uses structural fault models.
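A minimal Python sketch of the off-line style, assuming a 4-bit LFSR as the on-chip test-pattern generator and a crude XOR-based signature register as the response analyser; the tap positions and the toy circuit under test are illustrative assumptions:

    def lfsr_patterns(seed=0b1001, taps=(3, 0), width=4, count=15):
        # Pseudo-random test patterns from a Fibonacci LFSR.
        state = seed
        for _ in range(count):
            yield state
            feedback = 0
            for t in taps:
                feedback ^= (state >> t) & 1
            state = ((state << 1) | feedback) & ((1 << width) - 1)

    def cut(x):
        # Toy circuit under test: a 4-bit incrementer.
        return (x + 1) & 0xF

    def signature(responses, width=4):
        # Compact the response stream into one signature word.
        sig = 0
        for r in responses:
            sig = ((sig << 1) ^ r) & ((1 << width) - 1)
        return sig

    golden = signature(cut(p) for p in lfsr_patterns())    # known-good reference
    observed = signature(cut(p) for p in lfsr_patterns())  # re-run in the field
    print("PASS" if observed == golden else "FAIL")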
3. BIT FOR THE IC INDUSTRY
ICs entering the market today are more complex in design, with higher integration density. This leads to increased vulnerability of the chip to problems such as crosstalk, noise contamination, and internal power dissipation. These problems reduce the reliability of the chip. Furthermore, with increased chip density, it becomes more difficult to access test points on a chip for external testing. Also, testing procedures currently in use are time-consuming, presenting a bottleneck for higher productivity [2]. These factors have led to the emergence of BIT in the semiconductor industry as a cost-effective, reliable, and efficient quality control technique. Generally, adding testing circuitry onto the same IC chip increases the chip area requirement, conflicting with the need for system miniaturization and power consumption reduction. On the other hand, techniques have been developed to allow the circuit under test (CUT) to be tested using existing on-chip hardware, thus keeping the area overhead to a minimum [1]. Also, the built-in test functions obviate the need for expensive external testers. Furthermore, since the chip testing procedure is generated and performed on the chip itself, it takes less time compared to an external testing procedure.