2011 electronics seminars

FPGAS IN SPACE

A quiet revolution is taking place. Over the past few years, the density of the average programmable logic device has begun to skyrocket. The maximum number of gates in an FPGA is currently around 500,000 and doubling every 18 months. Meanwhile, the price of these chips is dropping. What all of this means is that the price of an individual NAND or NOR is rapidly approaching zero! And the designers of embedded systems are taking note. Some system designers are buying processor cores and incorporating them into system-on-a-chip designs; others are eliminating the processor and software altogether, choosing an alternative hardware-only design.
As this trend continues, it becomes more difficult to separate hardware from software. After all, both hardware and software designers are now describing logic in high-level terms, albeit in different languages, and downloading the compiled result to a piece of silicon. Surely no one would claim that language choice alone marks a real distinction between the two fields. Turing's notion of machine-level equivalence and the existence of language-to-language translators have long ago taught us all that that kind of reasoning is foolish. There are even now products that allow designers to create their hardware designs in traditional programming languages like C. So language differences alone are not enough of a distinction.
Both hardware and software designs are compiled from a human-readable form into a machine-readable one. And both designs are ultimately loaded into some piece of silicon. Does it matter that one chip is a memory device and the other a piece of programmable logic? If not, how else can we distinguish hardware from software?
Regardless of where the line is drawn, there will continue to be engineers like you and me who cross the boundary in our work. So rather than try to nail down a precise boundary between hardware and software design, we must assume that there will be overlap in the two fields. And we must all learn about new things. Hardware designers must learn how to write better programs, and software developers must learn how to utilize programmable logic.
TYPES OF PROGRAMMABLE LOGIC
Many types of programmable logic are available. The current range of offerings includes everything from small devices capable of implementing only a handful of logic equations to huge FPGAs that can hold an entire processor core (plus peripherals!). In addition to this incredible difference in size there is also much variation in architecture. In this section, I'll introduce you to the most common types of programmable logic and highlight the most important features of each type.
PLDs
At the low end of the spectrum are the original Programmable Logic Devices (PLDs). These were the first chips that could be used to implement a flexible digital logic design in hardware. In other words, you could remove a couple of the 7400-series TTL parts (ANDs, ORs, and NOTs) from your board and replace them with a single PLD. Other names you might encounter for this class of device are Programmable Logic Array (PLA), Programmable Array Logic (PAL), and Generic Array Logic (GAL).
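To make "a handful of logic equations" concrete, the Python sketch below (the two equations are invented purely for illustration) evaluates the kind of sum-of-products outputs a small PAL realizes in its AND/OR planes:

def NOT(x):
    return x ^ 1

def pld_outputs(a, b, c):
    # Two sum-of-products equations, PAL style: an AND plane feeding an OR plane.
    y0 = (a & b) | (NOT(a) & c)      # y0 = a*b + /a*c
    y1 = (a & NOT(b)) | (b & c)      # y1 = a*/b + b*c
    return y0, y1

# Exhaustive check over all eight input combinations (the device's truth table).
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, "->", pld_outputs(a, b, c))

A single device like this replaces the discrete AND, OR, and inverter packages that would otherwise implement the same two equations.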


Electrodynamic Tether

"Tether" is not a word heard often. A tether is 'a rope or chain used to fasten an animal so that it can graze only within a certain limited area'. We see animals like cows and goats tethered to trees and posts.
In space, tethers serve a purpose similar to their word meaning, but instead of animals they fasten spacecraft and satellites. If a tether connects two spacecraft, one at a lower orbital altitude and the other at a higher one, momentum can be exchanged between them; the tether is then called a momentum-exchange space tether. A tether is deployed by pushing one object up or down from the other. Gravitational and centrifugal forces balance each other at the center of mass. The lower satellite, which orbits faster, then tows its companion along like an orbital water-skier. The outer satellite thereby gains momentum at the expense of the lower one, causing its orbit to expand and that of the lower one to contract. This was the original use of tethers.
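A quick calculation shows why the lower satellite orbits faster. For a circular orbit the speed is v = sqrt(mu/r), so speed falls as altitude rises; the Python sketch below evaluates it at the two ends of a tether (the 300 km and 320 km altitudes are illustrative choices):

import math

MU_EARTH = 3.986e14          # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6            # mean Earth radius, m

def circular_orbit_speed(altitude_m):
    # Speed of a circular orbit at the given altitude (vis-viva with e = 0).
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

v_low = circular_orbit_speed(300e3)    # lower end of the tether
v_high = circular_orbit_speed(320e3)   # upper end, 20 km above
print(f"300 km: {v_low:.0f} m/s, 320 km: {v_high:.0f} m/s")
# The lower end moves faster (~7730 vs ~7718 m/s here), so it "tows"
# the upper end, which gains momentum at the lower end's expense.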

But tethers are now being made of electrically conducting materials such as aluminium or copper, which provides additional advantages. Electrodynamic tethers, as they are called, can convert orbital energy into electrical energy, working on the principle of electromagnetic induction; this can be used for power generation. In addition, when the conductor moves through a magnetic field, its charge carriers experience an electromagnetic force perpendicular to both the direction of motion and the field; this can be used for orbit raising and lowering and for debris removal. Another application of tethers discussed here is artificial gravity inside spacecraft.
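A back-of-the-envelope estimate shows the magnitude of the effect. For a tether roughly perpendicular to both the velocity and the field, the induced EMF is about v*B*L; the Python sketch below uses assumed round numbers for low earth orbit:

v = 7700.0      # orbital speed in LEO, m/s
B = 3.0e-5      # typical geomagnetic field strength in LEO, tesla
L = 20e3        # assumed tether length, m (20 km)
emf = v * B * L
print(f"Induced EMF ~ {emf/1000:.1f} kV")   # ~4.6 kV along the tether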
NEED AND ORIGIN OF TETHERS

Space tethers have been studied theoretically since early in the 20th century, but it wasn't until 1974 that Giuseppe Colombo came up with the idea of using a long tether to support a satellite from an orbiting platform. That, however, was a simple momentum-exchange space tether. Now let's see what made scientists think of electrodynamic tethers.

Every spacecraft on every mission has to carry all the energy sources required to get its job done, typically in the form of chemical propellants, photovoltaic arrays or nuclear reactors. The sole alternative, a delivery service, can be very expensive. For example, the International Space Station (ISS) will need an estimated 77 metric tons of booster propellant over its anticipated 10-year life span just to keep itself from gradually falling out of orbit. Assuming a minimal price of $7,000 a pound (dirt cheap by current standards) to get fuel up to the station's 360 km altitude, that comes to about $1.2 billion simply to maintain the orbital status quo.
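The arithmetic is easy to check in Python (the only assumption is the conversion 1 metric ton ≈ 2204.6 lb):

propellant_lb = 77 * 2204.6        # 77 metric tons expressed in pounds
cost = propellant_lb * 7000        # at $7,000 per pound to orbit
print(f"${cost/1e9:.2f} billion")  # ~$1.19 billion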
So scientists are taking a new look at the space tether, making it electrically conductive. In 1996, NASA launched a shuttle mission to deploy a satellite on a tether and study the electrodynamic effects of a conducting tether as it passes through the earth's magnetic field. As predicted by the laws of electromagnetism, a current was produced in the tether as it passed through the field, the tether acting as an electrical generator. This was the origin of electrodynamic tethers.


MEDICAL IMAGE FUSION

The fusion of medical images is the process of combining two or more images, acquired from the same or different imaging modalities, into a single image that retains the important features of each.
Automatic fusion of anatomic (CT, MRI) with metabolic (PET, SPECT) scans has improved the diagnostic accuracy of tumor imaging. Recent advances in imaging hardware and computer software have made this exciting technique significantly easier to adopt.
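To make the idea concrete, here is a deliberately minimal fusion rule in Python: for two co-registered images, keep at each pixel the stronger of the two responses. Real CT/MRI-PET pipelines add image registration and usually fuse in a multiresolution (e.g., wavelet) domain, so this is a sketch only:

import numpy as np

def fuse_max_abs(img_a, img_b):
    # Pixel-wise "choose max" fusion: keep the stronger response of the two inputs.
    return np.where(np.abs(img_a) >= np.abs(img_b), img_a, img_b)

a = np.random.rand(4, 4)   # stand-ins for two co-registered modalities
b = np.random.rand(4, 4)
print(fuse_max_abs(a, b))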

BIOFUEL CELLS


The world's energy consumption is increasing at an alarming rate, leading to unbalanced energy management. We depend mainly on non-renewable resources for our power, and these are fast being depleted. While there is no sign that this growth in demand will abate, particularly amongst the developing nations, there is now an awareness of the transience of non-renewable resources and of the irreversible damage being caused to the environment. Fast-growing technology demands small, light power sources that are able to sustain operation over a long period of time.
Extending the life span of batteries, and thus miniaturizing equipment, would be of particular benefit in the bio-engineering field. Supplying energy to implanted medical devices is extremely challenging, even though continuous research has led to the miniaturization of the batteries used. There is also the problem that, once these devices are implanted, a surgical intervention is needed after a fixed period of time. Meanwhile, advances in medical science are leading to an increasing number of implantable, electrically operated devices.
These are areas where biofuel cells become a blessing. The original concept of the biofuel cell was derived from the normal working of the human body. Biological cells use enzymes to break down glucose to form adenosine triphosphate (ATP), which acts as a potential energy store for a wide variety of metabolic and cellular processes. Biofuel cells try to imitate this ability of cells to generate energy from glucose.



STREAM CONTROL TRANSMISSION PROTOCOL
 

There is an increasing need for internetworking between telephone and computer networks. Applications such as voice over IP (VoIP) and the deployment of 3rd Generation mobile telephony networks make this integration a necessity. The Signaling Transport (SIGTRAN) working group of the Internet Engineering Task Force (IETF) is in charge of designing the standards needed to make this internetworking possible. The primary purpose of this working group is to address the transport of packet-based Public Switched Telephone Network (PSTN) signaling over IP networks, taking into account the functional and performance requirements of PSTN signaling.
Among the multiple standards defined by SIGTRAN there is one new reliable transport protocol, the Stream Control Transmission Protocol (SCTP). SCTP is the evolution of a previous transport protocol, the Multi-Network Datagram Transmission Protocol (MDTP), itself closely based on TCP. SCTP has several new features that make it more suitable for PSTN signaling transport than TCP. SCTP can take advantage of a multihomed host, using all the IP addresses the host owns. SCTP avoids a very simple attack that affects TCP, the so-called SYN attack. The new protocol also provides a mechanism, streams, that protects an application from so-called head-of-line (HOL) blocking. Moreover, many features that are optional in TCP have been included in the basic specification of SCTP, such as selective acknowledgements, the ability to report the receipt of duplicate datagrams, and support for Explicit Congestion Notification (ECN).
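As a small sketch of how an application might open a one-to-one SCTP association (Python; it assumes a host whose kernel supports SCTP, such as Linux with the SCTP module loaded, and the peer address is a placeholder):

import socket

# One-to-one style SCTP socket; requires kernel SCTP support (e.g., Linux).
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
sock.connect(("192.0.2.10", 2905))   # placeholder peer address and port
sock.sendall(b"signaling payload")
sock.close()

Multihoming and the number of streams per association are configured through SCTP-specific socket options not shown in this minimal fragment.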
On the whole, SCTP has many advantages over TCP and very few drawbacks, and we can expect that, apart from being used for signaling transport, SCTP will eventually replace TCP in the Internet. That will not happen overnight, however. Meanwhile, SCTP and TCP implementations share resources fairly (they have the same congestion avoidance algorithms), a behavior that is highly desirable because it facilitates a gradual conversion of applications from TCP to SCTP and makes the coexistence of both protocols easier.


Adaptive Optics in Ground Based Telescopes

Adaptive optics is a new technology being used nowadays in ground-based telescopes to remove atmospheric tremor and thus provide a clearer and brighter view of stars. Without such a system, the images obtained through telescopes on earth are blurred by the turbulent mixing of air at different temperatures.
Adaptive optics in effect removes this atmospheric tremor. It brings together the latest in computers, material science, electronic detectors, and digital control in a system that warps and bends a mirror in the telescope to counteract, in real time, the atmospheric distortion.
The advance promises to let ground-based telescopes reach their fundamental limits of resolution and sensitivity, outperforming space-based telescopes and ushering in a new era in optical astronomy. Finally, with this technology, it will be possible to see gas-giant planets in nearby solar systems in our Milky Way galaxy. Although about 100 such planets have been discovered in recent years, all were detected through indirect means, such as their gravitational effects on their parent stars; none has been detected directly.
WHAT IS ADAPTIVE OPTICS?
Adaptive optics refers to optical systems that adapt to compensate for optical effects introduced by the medium between the object and its image. In theory, a telescope's resolving power is directly proportional to the diameter of its primary light-gathering lens or mirror. In practice, however, images from large telescopes are blurred to a resolution no better than would be seen through a 20 cm aperture with no atmospheric blurring. At scientifically important infrared wavelengths, atmospheric turbulence degrades resolution by at least a factor of 10.
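These limits are easy to quantify with the Rayleigh criterion, theta ≈ 1.22 * lambda / D. The short Python calculation below (assuming a visible wavelength of 500 nm) compares an 8 m mirror with a 20 cm aperture:

import math

def rayleigh_limit_arcsec(wavelength_m, aperture_m):
    # Rayleigh criterion: theta = 1.22 * lambda / D, converted to arcseconds.
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600

print(rayleigh_limit_arcsec(500e-9, 8.0))   # ~0.016" for an 8 m mirror
print(rayleigh_limit_arcsec(500e-9, 0.2))   # ~0.63" for a 20 cm aperture

The 8 m mirror is diffraction-limited near 0.016 arcsecond, yet uncorrected atmospheric seeing holds it to roughly the 0.6 arcsecond figure of the 20 cm aperture.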
Space telescopes avoid problems with the atmosphere, but they are enormously expensive, and the limit on their aperture size is quite restrictive. The Hubble Space Telescope, the world's largest telescope in orbit, has an aperture of only 2.4 metres, while terrestrial telescopes can have diameters four times that size.

ANN for misuse detection

Because of the increasing dependence of companies and government agencies on their computer networks, protecting these systems from attack is critical. A single intrusion into a computer network can result in the loss, unauthorized utilization, or modification of large amounts of data, and can cause users to question the reliability of all of the information on the network. There are numerous methods of responding to a network intrusion, but they all require the accurate and timely identification of the attack.
Intrusion Detection Systems
The timely and accurate detection of computer and network system intrusions has always been an elusive goal for system administrators and information security researchers. The individual creativity of attackers, the wide range of computer hardware and operating systems, and the ever changing nature of the overall threat to target systems have contributed to the difficulty in effectively identifying intrusions. While the complexities of host computers already made intrusion detection a difficult endeavor, the increasing prevalence of distributed network-based systems and insecure networks such as the Internet has greatly increased the need for intrusion detection.
There are two general categories of technique by which intrusion detection systems attempt to identify attacks: anomaly detection and misuse detection. Anomaly detection identifies activities that vary from established patterns for users or groups of users. It typically involves the creation of knowledge bases that contain profiles of the monitored activities.
The second general approach to intrusion detection is misuse detection. This technique involves the comparison of a user's activities with the known behaviors of attackers attempting to penetrate a system. While anomaly detection typically utilizes threshold monitoring to indicate when a certain established metric has been reached, misuse detection techniques frequently utilize a rule-based approach. When applied to misuse detection, the rules become scenarios for network attacks. The intrusion detection mechanism identifies a potential attack if a user's activities are found to be consistent with the established rules. The use of comprehensive rules is critical in the application of expert systems for intrusion detection.
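A toy illustration of the rule-based approach in Python (the rule names, events, and thresholds are invented, not taken from any particular IDS):

# Hypothetical attack scenarios encoded as rules.
RULES = [
    {"name": "ssh-bruteforce", "event": "failed_login", "threshold": 5},
    {"name": "port-scan",      "event": "syn_probe",    "threshold": 20},
]

def match_rules(event_counts):
    # Flag a potential attack when a user's activity matches a rule scenario.
    return [r["name"] for r in RULES
            if event_counts.get(r["event"], 0) >= r["threshold"]]

print(match_rules({"failed_login": 7}))   # -> ['ssh-bruteforce']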
Current approaches to intrusion detection systems
Most current approaches to the process of detecting intrusions utilize some form of rule-based analysis. Rule-based analysis relies on sets of predefined rules that are provided by an administrator, automatically created by the system, or both. Expert systems are the most common form of rule-based intrusion detection approach. Early intrusion detection research efforts realized the inefficiency of any approach that required a manual review of a system audit trail. While the information necessary to identify attacks was believed to be present within the voluminous audit data, an effective review of the material required the use of an automated system.
The use of expert system techniques in intrusion detection mechanisms was a significant milestone in the development of effective and practical detection-based information security systems.
An expert system consists of a set of rules that encode the knowledge of a human "expert". These rules are used by the system to make conclusions about the security-related data from the intrusion detection system. Expert systems permit the incorporation of an extensive amount of human experience into a computer application that then utilizes that knowledge to identify activities that match the defined characteristics of misuse and attack.


Anthropomorphic Robot hand

This paper presents an anthropomorphic robot hand called the Gifu hand II, which has a thumb and four fingers, all the joints of which are driven by servomotors built into the fingers and the palm. The thumb has four joints with four degrees of freedom (DOF); the other fingers each have four joints with three DOF; and two axes of the joints near the palm cross orthogonally at one point, as is the case in the human hand. The Gifu hand II can be equipped with a six-axis force sensor at each fingertip and a newly developed distributed tactile sensor with 624 detecting points on its surface. The design concepts and specifications of the Gifu hand II, the basic characteristics of the tactile sensor, and the pressure distributions at the time of object grasping are described and discussed herein. Our results demonstrate that the Gifu hand II has a high potential to perform dexterous object manipulations like the human hand.
INTRODUCTION
It is highly expected that forthcoming humanoid robots will execute various complicated tasks via communication with a human user. They will be equipped with anthropomorphic multifingered hands very much like the human hand; we call such a robot a humanoid hand robot. Humanoid hand robots will eventually supplant human labor in the execution of intricate and dangerous tasks in areas such as manufacturing, space, the seabed, and so on. Further, the anthropomorphic hand could serve as a prosthetic application for handicapped individuals.
Many multifingered robot hands (e.g., the Stanford-JPL hand by Salisbury et al. [1], the Utah/MIT hand by Jacobsen et al. [2], the JPL four-fingered hand by Jau [3], and the Anthrobot hand by Kyriakopoulos et al. [4]) have been developed. These robot hands are driven by actuators that are located in a place remote from the robot hand frame and connected by tendon cables. The elasticity of the tendon cable causes inaccurate joint angle control, and the long wiring of tendon cables may obstruct the robot motion when the hand is attached to the tip of the robot arm. Moreover, these hands have been problematic commercial products, particularly in terms of maintenance, due to their mechanical complexity.
To solve these problems, robot hands in which the actuators are built into the hand (e.g., the Belgrade/USC hand by Venkataraman et al. [5], the Omni hand by Rosheim [6], the NTU hand by Lin et al. [7], and the DLR hand by Liu et al. [8]) have been developed. However, these hands present a problem in that their movement is unlike that of the human hand, because the number of fingers and the number of joints in the fingers are insufficient. Recently, many reports on the use of tactile sensors [9]-[13] have been presented, all attempting to realize adequate object manipulation involving contact with the finger and palm. Little work, however, has combined a six-axis force sensor attached at the fingertip with a distributed tactile sensor mounted on the hand surface.
Our group developed the Gifu hand I [14], [15], a five-fingered hand driven by built-in servomotors, and investigated its potential as a platform for studying dexterous grasping and manipulation of objects. Because it had non-negligible backlash in its gear transmission, we redesigned the hand, guided by finite element analysis, to reduce the backlash and enhance the output torque. We call this version the Gifu hand II.

A 64 Point Fourier Transform Chip

Fourth-generation wireless and mobile systems are currently the focus of research and development. Broadband wireless systems based on orthogonal frequency division multiplexing (OFDM) will allow packet-based, high-data-rate communication suitable for video transmission and mobile Internet applications. Considering this, we propose a data-path architecture using dedicated hardware for the baseband processor. The most computationally intensive parts of such a high-data-rate system are the 64-point inverse FFT in the transmit direction and the Viterbi decoder in the receive direction. Accordingly, an appropriate design methodology for constructing them has to be chosen, considering a) how much silicon area is needed, b) how easily the particular architecture can be made flat for implementation in VLSI, c) how many wire crossings and long wires carrying signals to remote parts of the design are necessary in the actual implementation, and d) how small the power consumption can be. This paper describes a novel 64-point FFT/IFFT processor developed as part of a larger research project to develop a single-chip wireless modem.
ALGORITHM FORMULATION
The discrete Fourier transform A(r) of a complex data sequence B(k) of length N, where r, k ∈ {0, 1, …, N−1}, can be described as

A(r) = \sum_{k=0}^{N-1} B(k) \, W_N^{kr},

where W_N = e^{-2\pi j / N}. Let us consider N = MT, r = s + Tt and k = l + Mm, where s, l ∈ {0, 1, …, 7} and m, t ∈ {0, 1, …, T−1}. Applying these values in the first equation, we get

A(s + Tt) = \sum_{l=0}^{M-1} W_M^{lt} \left[ W_N^{ls} \left( \sum_{m=0}^{T-1} B(l + Mm) \, W_T^{ms} \right) \right].
This shows that it is possible to realize an FFT of length N = MT by first decomposing it into one M-point and one T-point FFT and then combining the two; the result, however, is a two-dimensional rather than a one-dimensional FFT structure. We can formulate the 64-point FFT by taking M = T = 8:

A(s + 8t) = \sum_{l=0}^{7} W_8^{lt} \left[ W_{64}^{ls} \left( \sum_{m=0}^{7} B(l + 8m) \, W_8^{ms} \right) \right].
This shows that the 64-point FFT can be expressed as a two-dimensional structure of 8-point FFTs plus 64 complex inter-dimensional constant multiplications. First, appropriate data samples undergo an 8-point FFT computation; eight such computations are needed to generate the full set of 64 intermediate data. Each intermediate value is then multiplied by the corresponding inter-dimensional constant W_64^{ls}, and the 64 products undergo a second set of eight 8-point FFT operations. Proper reshuffling of the data coming out of the second 8-point FFT stage generates the final output of the 64-point FFT.
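The decomposition is easy to verify numerically. The Python sketch below uses numpy's FFT as the 8-point building block, applies the 64 inter-dimensional constants W_64^{ls} between the two 8-point stages, and checks the reshuffled result against a direct 64-point FFT:

import numpy as np

N, M, T = 64, 8, 8
B = np.random.rand(N) + 1j * np.random.rand(N)     # arbitrary complex input B(k)

x = B.reshape(T, M).T                               # x[l, m] = B(l + 8m)
stage1 = np.fft.fft(x, axis=1)                      # first 8-point FFTs (over m), index s
l = np.arange(M).reshape(M, 1)
s = np.arange(T).reshape(1, T)
stage1 = stage1 * np.exp(-2j * np.pi * l * s / N)   # 64 inter-dimensional constants W_64^{ls}
result = np.fft.fft(stage1, axis=0)                 # second 8-point FFTs (over l), index t

# result[t, s] = A(s + 8t); reshuffle the direct FFT the same way and compare.
assert np.allclose(result, np.fft.fft(B).reshape(M, T))
print("8x8 two-dimensional decomposition matches the direct 64-point FFT")

Any mistake in the index mapping (the "reshuffling") shows up immediately as a failed assertion.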

Fig. Signal flow graph of an 8-point DIT FFT.

Realizing the 8-point FFT with the conventional DIT structure requires no true multiplication. The constants to be multiplied in the first two columns of the 8-point FFT structure are trivial (1 or −j). In the third column, the constant multiplications reduce to addition/subtraction operations followed by a multiplication by 1/√2, which can be realized with a hardwired shift-and-add operation. Thus an 8-point FFT can be carried out without any true digital multiplier, providing a way to realize a low-power 64-point FFT at reduced hardware cost. By contrast, the conventional radix-2 DIT 64-point FFT requires 66 non-trivial complex multiplications, so the present approach reduces the number of complex multiplications by about 26%. This reduction in arithmetic complexity further enhances the scope for realizing a low-power 64-point FFT processor. However, the arithmetic complexity of the proposed scheme is almost the same as that of the radix-4 FFT algorithm, since the radix-4 64-point FFT needs 52 non-trivial complex multiplications.
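To illustrate the shift-and-add realization of the 1/√2 factor, the Python sketch below uses a five-term binary approximation (0.10110101b = 0.70703125); an actual design would choose the number of terms to meet its own precision budget:

def mul_inv_sqrt2(x):
    # Multiplication by 1/sqrt(2) ~ 1/2 + 1/8 + 1/16 + 1/64 + 1/256,
    # realized purely with hardwired shifts and adds.
    return (x >> 1) + (x >> 3) + (x >> 4) + (x >> 6) + (x >> 8)

x = 1 << 15
print(mul_inv_sqrt2(x), round(x / 2 ** 0.5))   # 23168 vs the exact 23170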

BIT for Intelligent system design

The principle of built-in test (BIT) and self-test has been widely applied to the design and testing of complex, mixed-signal electronic systems, such as integrated circuits (ICs) and multifunctional instrumentation [1]. A system with BIT is characterized by its ability to identify its operating condition by itself, through testing and diagnosis capabilities built into its structure. To ensure reliable performance, testability needs to be incorporated at an early stage of system and product design. Various techniques have been developed over the past decades to implement BIT. In the semiconductor industry, the objective of applying BIT is to improve the yield of chip fabrication, enable robust and efficient chip testing, and better cope with increasing circuit complexity and integration density. This is achieved by having an IC chip generate its own test stimuli and measure the corresponding responses from the various elements within the chip to determine its condition. In recent years, BIT has seen increasing application in other branches of industry, e.g., manufacturing, aerospace, and transportation, for the purposes of system condition monitoring. In manufacturing systems, BIT facilitates automatic detection of tool wear and breakage and assists in corrective actions to ensure part quality and reduce machine downtime.
2. BIT TECHNIQUES
BIT techniques are classified as:
a. on-line BIT
b. off-line BIT
On-line BIT:
It includes concurrent and nonconcurrent techniques; testing occurs during normal functional operation.
Concurrent on-line BIST - testing occurs simultaneously with the normal operation mode; usually coding techniques or duplication and comparison are used [3].
Nonconcurrent on-line BIST - testing is carried out while the system is in an idle state, often by executing diagnostic software or firmware routines.
Off-line BIT:
The system is not in its normal working mode; it usually uses on-chip test generators and output response analysers, or microdiagnostic routines. Functional off-line BIT is based on a functional description of the component under test (CUT) and uses functional, high-level fault models.
Structural off-line BIT is based on the structure of the CUT and uses structural fault models.
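As a minimal sketch of this off-line machinery in Python (the 5-bit register width, tap pattern, and stand-in circuit are all illustrative), a linear feedback shift register (LFSR) generates pseudo-random test stimuli, and a signature register compacts the circuit's responses into a single word for comparison against a fault-free "golden" signature:

def lfsr_patterns(seed, taps=0b10111, count=10):
    # 5-bit Galois LFSR used as a pseudo-random test-pattern generator.
    state = seed
    for _ in range(count):
        yield state
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= taps

def signature(responses, taps=0b10111):
    # Compact the response stream into one word (a serial signature register).
    sig = 0
    for r in responses:
        sig ^= r              # fold in the next response word
        lsb = sig & 1         # then advance the register one LFSR step
        sig >>= 1
        if lsb:
            sig ^= taps
    return sig

def cut(v):
    # Stand-in for the combinational circuit under test.
    return v ^ 0b00110

golden = signature(cut(v) for v in lfsr_patterns(0b00001))
print(f"golden signature: {golden:05b}")  # a fault altering any response changes this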
3. BIT FOR THE IC INDUSTRY
ICs entering the market today are more complex in design and have a higher integration density. This leads to increased vulnerability of the chip to problems such as crosstalk, noise contamination, and internal power dissipation, which reduce the chip's reliability. Furthermore, with increased chip density it becomes more difficult to access test points on a chip for external testing. Also, the testing procedures currently in use are time consuming, presenting a bottleneck to higher productivity [2]. These factors have led to the emergence of BIT in the semiconductor industry as a cost-effective, reliable, and efficient quality control technique. Generally, adding test circuitry to the same IC chip increases the chip area requirement, conflicting with the need for system miniaturization and power consumption reduction. On the other hand, techniques have been developed that allow the circuit under test (CUT) to be tested using existing on-chip hardware, keeping the area overhead to a minimum [1]. Also, the built-in test functions obviate the need for expensive external testers. Furthermore, since the chip testing procedure is generated and performed on the chip itself, it takes less time compared to an external testing procedure.