IJSER JOURNAL

Effect of Temperature on Deformation Characteristics of Gold Ball Bond in Au-Al Thermosonic Wire Bonding

Aug 7th, 2014

1. INTRODUCTION

In recent years, thermosonic wire bonding has been prevalent in solid-state interconnect technology. In the process of making an interconnection, two wire bonds are formed. The first bond involves the formation of a ball by the Electric Flame Off (EFO) process. The ball is placed in direct contact with the bond pad opening on the die. With the application of load (bond force) and ultrasonic energy within a few milliseconds (bond time) under the influence of heat, a ball bond is formed at the aluminum bond pad. Factors such as ultrasonic energy, temperature, and pressure may influence bonding quality [1, 2].

Once the primary factors have formed an intermetallic layer that makes the connection on the bond pad of a die, the wire is then lifted to form a loop and placed in contact with the desired bond area of a leadframe to form a wedge bond. In this process, the bonding temperature is one of the main bonding parameters and plays an important role in the bonding [3]. Essentially, different temperatures lead to different bonding output responses, since different temperature conditions mean different bonding environments. Previous studies have established a parabolic relationship between temperature and bond strength: a temperature that is too low or too high can lead to unsuccessful bonding or low bonding strength [4, 5].
Although many studies of the temperature effect in wire bonding have been carried out, it is worth investigating the criticality of temperature in thermosonic wire bonding while keeping the other factors constant. This study depicts the actual deformation characteristics of the gold (Au) ball bond with respect to temperature, with all other bonding factors held constant. Sixteen groups of bonding data at various temperature settings were compared to establish the relationship between ball deformation and temperature.

Read the full paper: http://www.ijser.org/onlineResearchPaperViewer.aspx?Effect_of_Temperature_on_Deformation_Characteristics_of_Gold_Ball_Bond_in_Au-Al_Thermosonic_Wire_Bonding.pdf

Load Forecasting Using New Error Measures In Neural Networks

Aug 7th, 2014

1 INTRODUCTION

There is a growing tendency towards unbundling the electricity system. This continually confronts the different sectors of the industry (generation, transmission, and distribution) with increasing demands on the planning, management, and operation of the network. The operation and planning of a power utility company requires an adequate model for electric power load forecasting. Load forecasting plays a key role in helping an electric utility make important decisions on power, load switching, voltage control, network reconfiguration, and infrastructure development.

Methodologies of load forecasting can be divided into categories that include short-term, medium-term, and long-term forecasts. Short-term forecasting, which forms the focus of this paper, gives a forecast of electric load one hour ahead of time. Such a forecast can help in making decisions aimed at preventing imbalance between power generation and load demand, thus leading to greater network reliability and power quality.

Many methods have been used for load forecasting in the past. These include statistical methods such as regression and the similar-day approach, fuzzy logic, expert systems, support vector machines, econometric models, end-use models, etc. [2].

A supervised artificial neural network has been used in this work. Here, the neural network is trained on input data as well as the associated target values. The trained network can then make predictions based on the relationships learned during training. A real-life case study of the power industry in Nigeria was used in this work.

The error measures used in the presented model differ from those used in conventional models. England ISO data has been used for the simulation.

In the following sections, we discuss neural networks in greater detail, introduce the conventional artificial neural network model, and describe the mathematical error measures used in our model.

2 NEURAL NETWORK

2.1 INTRODUCTION

The artificial neuron model is inspired by the biological neuron. Biological neurons are the brain's basic units for information processing. An Artificial Neural Network (ANN) is a massively parallel distributed processing system that has a natural propensity for storing experiential knowledge and making it available for use [1]. The basic processing units in an ANN are neurons. Artificial Neural Networks can be divided into two categories: feed-forward and recurrent networks. In feed-forward neural networks, data flows from input to output units in a feed-forward direction, and no feedback connections are present in the network. Widely known feed-forward neural networks are the Multilayer Perceptron (MLP) [3], the Probabilistic Neural Network (PNN) [4], the General Regression Neural Network (GRNN) [5], and Radial Basis Function (RBF) Neural Networks [6].

The Multilayer Perceptron is composed of a hierarchy of processing units (perceptrons), organized in a series of two or more mutually exclusive layers. It consists of an input layer, which serves as the holding site for the input applied to the network; one or more hidden layers with the desired number of neurons; and an output layer at which the overall mapping of the network input is available [7].

There are three types of learning in Artificial Neural Networks: supervised, unsupervised, and reinforcement learning. In supervised learning, the network is trained by providing it with input and target output patterns. In unsupervised learning, or self-organization, an (output) unit is trained to respond to clusters of patterns within the input; in this paradigm the system is supposed to discover statistically salient features of the input population. Reinforcement learning may be considered an intermediate form of supervised and unsupervised learning: the learning machine performs some action on the environment and gets a feedback response from it.

The most popular learning algorithm for feed-forward networks is backpropagation. First-order optimization techniques in backpropagation, which use the steepest gradient descent algorithm, show poor convergence. The ANN is a very popular tool in the field of engineering. It has been applied to various problems such as time series prediction, classification, optimization, function approximation, control systems, and vector quantization. Many real-life applications fall into one of these categories, and various neural network learning algorithms have been used for them in the existing literature. In this paper, an MLP neural network has been used successfully with different error measures for load forecasting.
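Since the model described above is a supervised MLP trained with backpropagation, a minimal sketch of one-hour-ahead load forecasting may help fix ideas. The lagged-load features, network size, synthetic data, and use of scikit-learn's MLPRegressor are illustrative assumptions rather than the paper's setup; and because the paper's new error measures are not specified in this excerpt, the conventional MAPE is computed instead.

```python
# Minimal sketch of one-hour-ahead load forecasting with an MLP.
# Feature choice (24 lagged hourly loads) and hyperparameters are
# illustrative assumptions, not the paper's exact configuration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic hourly load: daily cycle plus noise (stand-in for real data).
hours = np.arange(24 * 60)
load = 1000 + 300 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 30, hours.size)

# Inputs: the previous 24 hourly loads; target: the next hour's load.
X = np.array([load[t - 24:t] for t in range(24, load.size)])
y = load[24:]

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100  # conventional error measure
print(f"MAPE: {mape:.2f}%")
```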

Read the full paper: http://www.ijser.org/onlineResearchPaperViewer.aspx?Load_Forecasting_Using_New_Error_Measures_In_Neural_Networks.pdf

Receding Horizon Control on Large Scale Supply Chain

Aug 7th, 2014

1 INTRODUCTION

The network of suppliers, manufacturers, distributors, and retailers constitutes a supply chain management system. Between interconnected entities, there are two types of process flows: information flows, e.g., an order requesting goods, and material flows, i.e., the actual shipment of goods. Key elements of an efficient supply chain are accurate pinpointing of process flows and timing of supply needs at each entity, both of which enable entities to request items as they are needed, thereby reducing safety stock levels to free space and capital. The operational planning and direct control of the network can in principle be addressed by a variety of methods, including deterministic analytical models, stochastic analytical models, and simulation models, coupled with the desired optimization objectives and network performance measures [1].
The significance of the basic idea implicit in receding horizon control (RHC) was recognized long ago in the operations management literature, under the term receding horizon, as a tractable scheme for solving stochastic multi-period optimization problems such as production planning and supply chain management [2]. In a recent paper [3], an RHC strategy was employed for the optimization of production/distribution systems, including a simplified scheduling model for the manufacturing function. The suggested control strategy considers only a deterministic type of demand, which reduces the need for an inventory control mechanism [4,5].
For the purposes of our study and the time scales of interest, a discrete time difference model is developed [6]. The model is applicable to multi-echelon supply chain networks of arbitrary structure. To treat process uncertainty within the deterministic supply chain network model, an RHC approach is suggested [7,8].

Typically, RHC is implemented in a centralized fashion [9]. The algorithm uses a receding horizon to allow the incorporation of past and present control actions into future predictions [10,11,12,13].
In this paper, a centralized receding horizon controller is applied to a supply chain management system consisting of one plant (supplier), two distribution centers, and three retailers.
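To make the receding horizon mechanism concrete, the sketch below shows the generic RHC loop: optimize the ordering decisions over a finite horizon, apply only the first decision, let the system evolve one period, and repeat. The single-node inventory model, horizon length, and cost weights are illustrative assumptions, not the paper's six-node network or objective.

```python
# Generic receding-horizon control loop (sketch).
# One inventory node, constant forecast demand, quadratic tracking cost.
import numpy as np
from scipy.optimize import minimize

H = 5            # prediction horizon (periods)
demand = 20.0    # forecast demand per period (assumed constant here)
target = 100.0   # desired inventory level

def simulate(inv0, orders):
    """Inventory balance: x(t+1) = x(t) + order(t) - demand."""
    x, path = inv0, []
    for u in orders:
        x = x + u - demand
        path.append(x)
    return np.array(path)

def cost(orders, inv0):
    path = simulate(inv0, orders)
    return np.sum((path - target) ** 2) + 0.1 * np.sum(np.asarray(orders) ** 2)

x = 60.0
for t in range(10):
    res = minimize(cost, np.full(H, demand), args=(x,),
                   bounds=[(0.0, 50.0)] * H)   # order quantities are bounded
    u = res.x[0]                               # apply only the first move
    x = x + u - demand                         # plant evolves one period
    print(f"t={t}: order={u:.1f}, inventory={x:.1f}")
```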

2 DISCRETE TIME DIFFERENCE MODEL

In this work, a discrete time difference model is developed [4]. The model is applicable to multi-echelon supply chain networks of arbitrary structure. Let DP denote the set of desired products in the supply chain; these can be manufactured at plants, P, by utilizing various resources, RS. The manufacturing function considers independent production lines for the distributed products. The products are subsequently transported to and stored at warehouses, W. Products from warehouses are transported, upon customer demand, either to distribution centers, D, or directly to retailers, R. Retailers receive time-varying orders from different customers for different products. Satisfaction of customer demand is the primary target in the supply chain management mechanism. Unsatisfied demand is recorded as backorders for the next time period. A discrete time difference model is used to describe the supply chain network dynamics. It is assumed that decisions are taken within equally spaced time periods (e.g., hours, days, or weeks). The duration of the base time period depends on the dynamic characteristics of the network. As a result, dynamics of higher frequency than that of the selected time scale are considered negligible and completely attenuated by the network [4,14].
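The balance equations themselves are not reproduced in this excerpt; a plausible form of the inventory balance at a node k, following standard multi-echelon formulations, is sketched below. The symbols are assumed for illustration and may differ from the paper's notation.

```latex
% Inventory balance at node k (assumed notation):
% y_k(t): inventory; x_{k',k}(t): shipment from upstream node k' to k;
% x_{k,k''}(t): shipment from k to downstream node k''; L_{k',k}: transport delay.
\[
  y_k(t) = y_k(t-1)
         + \sum_{k' \in U_k} x_{k',k}\bigl(t - L_{k',k}\bigr)
         - \sum_{k'' \in D_k} x_{k,k''}(t)
\]
```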

Plants P, warehouses W, distribution centers D, and retailers R constitute the nodes of the system. For each node, k, there is a set of upstream nodes and a set of downstream nodes, indexed by k' and k'' respectively. Upstream nodes can supply node k, and downstream nodes can be supplied by it. The formulation does not require a separate balance for customer orders at nodes other than the final retailer nodes [4,15].

Read the full paper: http://www.ijser.org/onlineResearchPaperViewer.aspx?Receding_Horizon_Control_on_Large_Scale_Supply_Chain.pdf

Decreasing Inventory Levels Fluctuations by Moving Horizon Control Method and Move Suppression in the Demand Network

Aug 7th, 2014

1 INTRODUCTION

Key elements of an efficient supply chain are accurate pinpointing of process flows and timing of supply needs at each entity, both of which enable entities to request items as they are needed, thereby reducing safety stock levels to free space and capital. The operational planning and direct control of the network can in principle be addressed by a variety of methods, including deterministic analytical models, stochastic analytical models, and simulation models, coupled with the desired optimization objectives and network performance measures [1].
The significance of the basic idea implicit in moving horizon control (MHC) was recognized long ago in the operations management literature, under the term moving horizon, as a tractable scheme for solving stochastic multi-period optimization problems such as production planning and supply chain management [2]. In a recent paper [3], an MHC strategy was employed for the optimization of production/distribution systems, including a simplified scheduling model for the manufacturing function. The suggested control strategy considers only a deterministic type of demand, which reduces the need for an inventory control mechanism [4,5].
For the purposes of our study and the time scales of interest, a discrete time difference model is developed [6]. The model is applicable to multi-echelon supply chain networks of arbitrary structure. To treat process uncertainty within the deterministic supply chain network model, an MHC approach is suggested [7,8].
Typically, MHC is implemented in a centralized fashion [9]. The algorithm uses a moving horizon to allow the incorporation of past and present control actions into future predictions [10,11,12,13]. In this paper, a moving horizon controller with a move suppression term is used for inventory management of the demand network.
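The move suppression term is not written out in this excerpt; a typical form, penalizing changes in the ordering decisions between successive periods, is sketched below. The weights and notation are assumptions.

```latex
% Horizon cost with a move-suppression penalty (assumed notation):
% y(t): inventory; y^{sp}: target level; u(t): ordering decision;
% Q, R: weighting matrices; H: horizon length.
\[
  J = \sum_{t=1}^{H} \bigl\| y(t) - y^{sp} \bigr\|_Q^2
    + \sum_{t=1}^{H} \bigl\| u(t) - u(t-1) \bigr\|_R^2
\]
```

Penalizing the change u(t) − u(t−1) damps period-to-period ordering fluctuations, which is the decrease in inventory-level fluctuations referred to in the title.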

2 MODELLING AND CONTROL

In this work, a discrete time difference model is developed [4]. The model is applicable to multi-echelon supply chain networks of arbitrary structure. Let DP denote the set of desired products in the supply chain; these can be manufactured at plants, P, by utilizing various resources, RS. The manufacturing function considers independent production lines for the distributed products. The products are subsequently transported to and stored at warehouses, W. Products from warehouses are transported, upon customer demand, either to distribution centers, D, or directly to retailers, R. Retailers receive time-varying orders from different customers for different products. Satisfaction of customer demand is the primary target in the supply chain management mechanism. Unsatisfied demand is recorded as backorders for the next time period. A discrete time difference model is used to describe the supply chain network dynamics. It is assumed that decisions are taken within equally spaced time periods (e.g., hours, days, or weeks). The duration of the base time period depends on the dynamic characteristics of the network. As a result, dynamics of higher frequency than that of the selected time scale are considered negligible and completely attenuated by the network [4,14]. Plants P, warehouses W, distribution centers D, and retailers R constitute the nodes of the system. For each node, k, there is a set of upstream nodes and a set of downstream nodes, indexed by k' and k'' respectively. Upstream nodes can supply node k, and downstream nodes can be supplied by it. MHC originated in the late seventies and has developed considerably since then.

Read the full paper: http://www.ijser.org/onlineResearchPaperViewer.aspx?Decreasing_Inventory_Levels_Fluctuations_by_Moving_Horizon_Control_Method_and_Move_Suppression_in_the_Demand_Network.pdf

Mining Knowledge Using Decision Tree Algorithm

Aug 7th, 2014

1 INTRODUCTION

Extensive research in data mining [1] has been done on discovering distributional knowledge about the underlying data. Models such as Bayesian models, decision trees, support vector machines, and association rules have been applied to various industrial applications such as customer relationship management (CRM) [2]. Despite such phenomenal success, most of these techniques stop short of the final objective of data mining, which is to maximize profit while reducing costs, relying instead on post-processing techniques such as visualization and interestingness ranking. While these techniques are essential for moving the data mining result to eventual applications, they nevertheless require a great deal of manual work by experts. Often, in industrial practice, one needs to walk an extra mile to automatically extract the real "nuggets" of knowledge, the actions, in order to maximize the final objective functions. In this paper, a novel post-processing technique is presented to extract actionable knowledge from decision trees.

To illustrate the motivation, customer relationship management is considered; in particular, the mobile communications industry is taken as an example [3]. This industry has been experiencing more and more competition in recent years. With massive industry deregulation across the world, each customer faces an ever-growing number of choices in communications and financial services. As a result, an increasing number of customers are switching from one service provider to another. This phenomenon is called customer "churning" or "attrition," and it is a major problem for these companies that makes it hard for them to stay profitable. In addition, a CRM data set is often unbalanced: the most valuable customers who actually churn can be only a small fraction of the customers who stay. In the past, many researchers have tackled the direct marketing problem as a classification problem, where the cost-sensitive and unbalanced nature of the problem is taken into account. In management and marketing sciences, stochastic models are used to describe the response behavior of customers. In the data mining area, a main approach is to rank the customers by the estimated likelihood of responding to direct marketing actions and to compare the rankings using a lift chart or the area-under-curve measure from the ROC curve.
A common problem in current applications of data mining in intelligent CRM is that people tend to focus on, and be satisfied with, building and interpreting models, but not on using them to obtain profit explicitly.
In this paper, a novel algorithm for post-processing decision trees is presented to obtain actions that are associated with attribute-value changes, in order to maximize profit-based objective functions. This allows a large number of candidate actions to be considered, which complicates the computation. More specifically, two broad cases are considered. One case corresponds to the unlimited-resource situation, which is only an approximation to real-world situations, although it allows a clear introduction to the problem. The other, more realistic case is the limited-resource situation, where the actions must be restricted to be below a certain cost level. In both cases, the aim is to maximize the expected net profit of all the customers as well as the industry. It can be shown that finding the optimal solution for the limited-resource problem is computationally hard [4]; a greedy heuristic algorithm is therefore designed to solve it efficiently. An important contribution of the paper is that it integrates data mining and decision making, such that the discovery of actions is guided by the result of data mining algorithms (decision trees in this case) [5]. The approach is considered a new step in this direction: discovering action sets from attribute-value changes in a non-sequential data set through optimization. The rest of the paper is organized as follows.
First, a base algorithm for finding unrestricted actions is presented in Section 2. Two versions of the action-set extraction problem are then formulated, and finding the optimal solution is shown to be computationally difficult in the limited-resources case (Section 3); the greedy algorithms, however, are efficient while achieving results very close to the optimal ones obtained by exhaustive search (which is exponential in time complexity). A case study of a mobile handset manufacturing and selling company is discussed in Section 4. Conclusions and future work are presented in Section 5.
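To illustrate the flavor of the post-processing step, the sketch below reassigns a customer from one decision tree leaf to a more profitable one by comparing the expected net profit gain of each candidate action. The tree leaves, stay probabilities, customer value, and action costs are invented for illustration; the paper's actual objective functions and algorithms may differ.

```python
# Sketch of extracting a profitable action from a decision tree's leaves.
# Leaf probabilities, customer value, and action costs are made up here.
CUSTOMER_VALUE = 100.0  # expected profit if the customer stays

# Each leaf: probability that a customer in it stays, plus the
# attribute-value setting that defines it (simplified to one attribute).
leaves = {
    "L1": {"p_stay": 0.2, "service_level": "low"},
    "L2": {"p_stay": 0.5, "service_level": "medium"},
    "L3": {"p_stay": 0.9, "service_level": "high"},
}

# Cost of moving a customer between attribute values (e.g., upgrades).
action_cost = {("low", "high"): 40.0, ("low", "medium"): 15.0,
               ("medium", "high"): 25.0}

def best_action(src):
    """Pick the destination leaf maximizing expected net profit gain."""
    best, best_gain = None, 0.0
    for dst, leaf in leaves.items():
        cost = action_cost.get((leaves[src]["service_level"],
                                leaf["service_level"]))
        if cost is None:
            continue  # no action moves the customer to this leaf
        gain = (leaf["p_stay"] - leaves[src]["p_stay"]) * CUSTOMER_VALUE - cost
        if gain > best_gain:
            best, best_gain = dst, gain
    return best, best_gain

print(best_action("L1"))  # ('L3', 30.0): the low -> high upgrade pays off
```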


Read the full paper: http://www.ijser.org/onlineResearchPaperViewer.aspx?Mining_knowledge_using_Decision_Tree_Algorithm.pdf

Economic Power Dispatch using Artificial Immune System

Aug 7th, 2014

1 INTRODUCTION

The definition of economic dispatch provided in EPAct section 1234 is: "The operation of generation facilities to produce energy at the lowest cost to reliably serve consumers, recognizing any operational limits of generation and transmission facilities". Most electric power systems dispatch their own generating units and their own purchased power in a way that may be said to meet this definition. There are two fundamental components to economic dispatch:
I Planning for tomorrow’s dispatch
II Dispatching the power system today

I Planning for tomorrow's dispatch
a. Scheduling generating units for each hour of the next day's dispatch
b. Based on forecast load for the next day
c. Select generating units to be running and available for dispatch the next day (operating day)
d. Recognize each generating unit's operating limits, including its: ramp rate (how quickly the generator's output can be changed), maximum and minimum generation levels, minimum amount of time the generator must run, and minimum amount of time the generator must stay off once turned off
e. Recognize generating unit characteristics, in particular the cost of generation, which depends on the unit's efficiency (heat rate), variable operating costs (fuel and non-fuel), the variable cost of environmental compliance, and start-up costs
Next-day scheduling is typically performed by a generation group or an independent market operator and is followed by a reliability assessment.

Analyze the forecasted load and transmission conditions in the area to ensure that the scheduled generation dispatch can reliably meet the load. If the scheduled dispatch is not feasible within the limits of the transmission system, it is revised. The reliability assessment is typically performed by a transmission operations group.
II Dispatching the power system today
a) Monitor load, generation and interchange (imports/exports) to ensure balance of supply and load.
b) Monitor and maintain system frequency at 50/60 Hz during dispatch according to NERC standards, using Automatic Generation Control (AGC) to change generation dispatch as needed.
c) Monitor hourly dispatch schedules to ensure that dispatch for the next hour will be in balance.
d) Monitor flows on transmission system.
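For reference, the optimization problem underlying both the planning and real-time components above can be stated in its classical textbook form; the notation below is assumed, not taken from the paper.

```latex
% Classical economic dispatch (standard textbook form, assumed notation):
% C_i: cost function of unit i; P_i: its output; P_D: total demand.
\[
  \min_{P_1,\dots,P_N} \; \sum_{i=1}^{N} C_i(P_i)
  \quad \text{subject to} \quad
  \sum_{i=1}^{N} P_i = P_D, \qquad
  P_i^{\min} \le P_i \le P_i^{\max}.
\]
```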

Read the full paper: http://www.ijser.org/onlineResearchPaperViewer.aspx?Economic_Power_Dispatch_Problem_using_Artificial_Immune_System.pdf

Different Approaches of Spectral Subtraction method for Enhancing the Speech Signal in Noisy Environments

Aug 7th, 2014

1 INTRODUCTION

Speech signals from an uncontrolled environment may contain degradation components along with the required speech components. The degradation components include background noise, speech from other speakers, etc. A speech signal degraded by additive noise makes the listening task difficult for a direct listener and gives poor performance in automatic speech processing tasks such as speech recognition, speaker identification, hearing aids, and speech coding. The degraded speech therefore needs to be processed to enhance the speech components. The aim of speech enhancement is to improve the quality and intelligibility of the degraded speech signal; its main objective is to improve perceptual aspects of speech such as overall quality, intelligibility, and the degree of listener fatigue. Improving the quality and intelligibility of speech signals reduces listener fatigue and improves the performance of hearing aids, cockpit communication, videoconferencing, speech coders, and many other speech systems. Quality can be measured in terms of signal distortion, but intelligibility and pleasantness are difficult to measure by any mathematical algorithm. Perceptual quality and intelligibility are two measures of speech signals, and they are not correlated. In this study, speech signal enhancement using basic spectral subtraction and modified versions of spectral subtraction, such as spectral subtraction with over-subtraction, nonlinear spectral subtraction, multiband spectral subtraction, MMSE spectral subtraction, selective spectral subtraction, and spectral subtraction based on perceptual properties, is explained in detail together with performance evaluation.

2 METHODOLOGIES

2.1 Basic spectral subtraction algorithm

Speech enhancement algorithms are based on theory from signal processing. The spectral subtractive algorithm is historically one of the first algorithms proposed for noise reduction [4]. Simple and easy to implement, it is based on the principle that one can estimate and update the noise spectrum when the speech signal is not present, and subtract it from the noisy speech spectrum to obtain the clean speech spectrum [7]. The assumption is that the noise is additive and its spectrum does not change with time; that is, the noise is stationary or a slowly time-varying signal whose spectrum does not change significantly between the updating periods. Let y(n) be the noise-corrupted input speech signal, composed of the clean speech signal x(n) and the additive noise signal d(n). In equation form,

y(n) = x(n) + d(n) (1)
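A minimal software sketch of the basic algorithm follows, assuming the first few frames of the recording are noise-only so that the noise spectrum can be estimated there. The frame sizes and the noise-estimation rule are illustrative choices, not the paper's settings.

```python
# Minimal sketch of basic (magnitude) spectral subtraction, assuming the
# leading frames of the recording contain noise only.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(y, fs, noise_frames=6, nperseg=256):
    f, t, Y = stft(y, fs=fs, nperseg=nperseg)
    mag, phase = np.abs(Y), np.angle(Y)
    # Estimate the noise magnitude spectrum from the leading frames.
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)
    # Subtract and half-wave rectify (negative magnitudes are set to 0).
    clean_mag = np.maximum(mag - noise_mag, 0.0)
    _, x_hat = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return x_hat

# Example: a 1 kHz tone in white noise at fs = 8 kHz, with a noise-only lead-in.
fs = 8000
n = np.arange(2 * fs)
x = np.sin(2 * np.pi * 1000 * n / fs)
y = np.concatenate([np.zeros(fs // 4), x]) + 0.3 * np.random.randn(n.size + fs // 4)
enhanced = spectral_subtract(y, fs)
```

The half-wave rectification step (clamping negative magnitudes to zero) is known to introduce "musical noise", which is what the modified versions listed above aim to suppress.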

Read the full paper: http://www.ijser.org/onlineResearchPaperViewer.aspx?Different_Approaches_of_Spectral_Subtraction_method_for_Enhancing_the_Speech_Signal_in_Noisy_Environments.pdf

Speedy Deconvolution using Vedic Mathematics

Aug 7th, 2014

1 INTRODUCTION

The concept of deconvolution is widely used in signal processing and image processing. It has applications in reflection seismology, in reversing optical distortion, in sharpening images, etc. Fast additions, multiplications, and divisions are of extreme importance in DSP for deconvolution. Speeding up deconvolution using a Hardware Description Language for design entry not only raises the level of abstraction but also opens new possibilities for using programmable devices.
In this paper, a novel method for computing the linear deconvolution of two finite-length sequences is used. The method is explained in detail in [1]. It is similar to long-hand division and polynomial division.
As required by the project, all possible adders were studied. These adders were synthesized using Xilinx 9.2i, and their delays and areas were compared. The adders with the highest speed and comparatively small area were selected for implementing the deconvolution. Since a 4×4-bit multiplier is needed in this project, different 4×4-bit multipliers were studied, and the Urdhva Tiryagbhyam algorithm, which gives the lowest delay among all the multipliers, is used here. For division, different division algorithms were studied; after comparing the drawbacks and advantages of each, the non-restoring algorithm was modified according to need and then used.
This paper can be considered an extension of [2], where the discrete linear convolution of two finite-length sequences (4×4) is implemented. The convolved output of [2] is one input to the proposed design; the known impulse response of the system is the other input. This paper proposes a design that carries out high-speed deconvolution and extracts the input samples.
The paper is organized as follows: Section 2 gives a brief introduction to the novel method for deconvolution. Section 3 describes the division algorithm. Section 4 discusses Vedic mathematics and the Urdhva Tiryagbhyam algorithm for multiplication. Section 5 presents the selection of a speedy adder. In Section 6, the design verification is given. Finally, the conclusion is drawn.

2 NOVEL METHOD FOR CALCULATING DECONVOLUTION

In general, the object of deconvolution is to find the solution of a convolution equation of the form:
f * g = h (1)

Usually, h is some recorded signal and f is some signal that we wish to recover, but which has been convolved with some other signal g before being recorded. The function g might represent the transfer function of an instrument or a driving force that was applied to a physical system. If one knows g, or at least the form of g, then one can perform deterministic deconvolution.
If the two sequences f(n) and g(n) are causal, then the convolution sum is:

h(n) = Σ_{k=0}^{n} f(k) g(n − k), n ≥ 0 (2)

Therefore, solving for f(n) given g(n) and h(n) results in

f(n) = [ h(n) − Σ_{k=0}^{n−1} f(k) g(n − k) ] / g(0), n ≥ 1 (3)

where

f(0) = h(0) / g(0) (4)

The solution requires that g(0) ≠ 0.

This recursion can be carried out in a manner similar to long division. As an example, let h[n] = [16 36 56 17 28 12] and g[n] = [4 4 3 2], and solve for f(n) given g(n) and h(n). The sequences are set up in a fashion similar to long division, as shown below, but no carries are performed out of a column.
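As a point of reference, the recursion of Eqs. (2)-(4) can also be expressed in software. The paper's hardware method works on digit columns without carries; this floating-point sketch, with an invented exactly-convolved example, only illustrates the recursion itself.

```python
# Floating-point sketch of the deconvolution recursion in Eqs. (2)-(4).
# The example sequences are invented so that h is an exact convolution;
# the paper's hardware operates on digit columns without carries instead.
import numpy as np

def deconvolve(h, g):
    """Recover f from h = f * g, assuming g[0] != 0."""
    assert g[0] != 0, "recursion requires g(0) != 0"
    n_f = len(h) - len(g) + 1
    f = np.zeros(n_f)
    f[0] = h[0] / g[0]                      # Eq. (4)
    for n in range(1, n_f):
        acc = sum(f[k] * g[n - k] for k in range(max(0, n - len(g) + 1), n))
        f[n] = (h[n] - acc) / g[0]          # Eq. (3)
    return f

g = np.array([4.0, 4.0, 3.0, 2.0])
f_true = np.array([1.0, 2.0, 3.0])
h = np.convolve(f_true, g)                  # h = [4, 12, 23, 20, 13, 6]
print(deconvolve(h, g))                     # recovers [1, 2, 3]
```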

Read the full paper: http://www.ijser.org/onlineResearchPaperViewer.aspx?Speedy_Deconvolution_using_Vedic_Mathematics.pdf

A Novel Dynamic Key Management Scheme Based On Hamming Distance for Wireless Sensor Networks

Aug 7th, 2014

1. INTRODUCTION

The envisioned growth in utilizing sensor networks in a wide variety of sensitive applications, ranging from healthcare to warfare, is stimulating numerous efforts to secure these networks. Sensor networks comprise a large number of tiny sensor nodes that collect and (partially) process data from the surrounding environment. The data is then communicated, using wireless links, to aggregation and forwarding nodes (or gateways) that may further process the data and communicate it to the outside world through one or more base stations (or command nodes). Base stations are the entry points to the network, where user requests begin and network responses are received. Typically, gateways and base stations are higher-end nodes. It is to be noted, however, that the various sensor, gateway, and base station functions can be performed by the same or different nodes. The sensitivity of the collected data makes encryption keys essential to securing sensor networks.

1.1 Key Management

The term key may refer to a simple key (e.g., a 128-bit string) or a more complex key construct (e.g., a symmetric bivariate key polynomial). A large number of keys need to be managed in order to encrypt and authenticate the sensitive data exchanged. The objective of key management is to dynamically establish and maintain secure channels among communicating parties.
Typically, key management schemes use administrative keys (key encryption keys) for the secure and efficient (re-)distribution and, at times, generation of the secure channel communication keys (data encryption keys) for the communicating parties. Communication keys may be pair-wise keys used to secure a communication channel between two nodes that are in direct or indirect communication, or they may be group keys shared by multiple nodes. Network keys (both administrative and communication keys) may need to be changed (re-keyed) to maintain secrecy and resilience to attacks, failures, or network topology changes. Numerous key management schemes have been proposed for sensor networks. Most existing schemes build on the seminal random key pre-distribution scheme introduced by Eschenauer and Gligor [1]. Subsequent extensions to that scheme include using deployment knowledge [2] and key polynomials [3] to enhance scalability and resilience to attacks. This set of schemes is referred to as static key management schemes, since they do not update the administrative keys after network deployment.
An example of a dynamic keying scheme is proposed by Jolly et al. [4], in which a key management scheme based on identity-based symmetric keying is given. This scheme requires very few keys (typically two) to be stored at each sensor node and shared with the base station as well as the cluster gateways. Rekeying involves the reestablishment of clusters and the redistribution of keys. Although the storage requirement is very affordable, the rekeying procedure is inefficient due to the large number of messages exchanged for key renewals. Another emerging category of schemes employs a combinatorial formulation of the group key management problem to effect efficient rekeying [5, 6]. These are examples of dynamic key management schemes. While static schemes primarily assume that administrative keys will outlive the network and emphasize pair-wise communication keys, dynamic schemes advocate rekeying to achieve resilience to attack in long-lived networks and primarily emphasize group communication keys. Since a dynamic scheme has the advantages of a long-lived network and rekeying when compared to static schemes, dynamic key management is chosen as the security scheme for WSNs.
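This excerpt does not detail how Hamming distance enters the proposed scheme; for reference, the Hamming distance between two equal-length keys is simply the number of bit positions in which they differ. A minimal sketch, with an arbitrary 128-bit key width:

```python
# Hamming distance between two equal-length keys: the number of bit
# positions in which they differ. How the paper's scheme uses this
# distance is not described in the excerpt above.
def hamming_distance(a: bytes, b: bytes) -> int:
    assert len(a) == len(b), "keys must be the same length"
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

k1 = bytes.fromhex("0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f")  # 128-bit keys
k2 = bytes.fromhex("0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f00")
print(hamming_distance(k1, k2))  # 4: the last byte differs in four bits
```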

Read the full paper: http://www.ijser.org/onlineResearchPaperViewer.aspx?A_Novel_Dynamic_Key_Management_Scheme_Based_On_Hamming_Distance_for_Wireless_Sensor_Networks.pdf

A Novel Real-time Intelligent Tele Cardiology System Using Wireless Technology To Detect Cardiac Abnormalities

Aug 7th, 2014

1 INTRODUCTION

1.1 General Introduction

Cardiovascular disease (CVD) is one of the most prevalent and serious health problems in the world. An estimated 17.5 million people died from CVD in 2005, representing 30% of all deaths worldwide. Based on current trends, over 20 million people will die from CVD annually by 2015. In 2000, 56% of CVD deaths occurred before the age of 75. CVD is also becoming more common in younger people, with most of the people affected now aged between 34 and 65 years [1]. In addition to the fatal cases, at least 20 million people experience nonfatal heart attacks and strokes every year, many requiring continuing costly medical care. Developed countries around the world continue to experience significant problems in providing healthcare services, as follows:

1) The increasing proportion of the elderly, whose lifestyle changes are increasing the demand for chronic disease healthcare services;
2) Demand for increased accessibility to hospitals and mobile healthcare services, as well as in-home care [2];
3) Financial constraints on efficiently improving personalized and quality-oriented healthcare. Though the current trend of centralizing specialized clinics can certainly reduce clinical costs, decentralized healthcare allows the alternatives of in-hospital and out-of-hospital care and, even further, home healthcare [3].

Rapid developments in information and communication technologies have made it possible to overcome the challenges mentioned earlier and to provide such services.

1.2 Sinus Tachycardia

Sinus tachycardia (also colloquially known as sinus tach or sinus tachy) is a heart rhythm with an elevated rate of impulses originating from the sinoatrial node, defined as a rate greater than 100 beats/min in an average adult. The normal heart rate in the average adult ranges from 60 to 100 beats/min. Note that the normal heart rate varies with age, from infants, with normal heart rates of 110–150 bpm, to the elderly, who have slower normal rates. Tachycardia is often asymptomatic. If the heart rate is too high, cardiac output may fall due to the markedly reduced ventricular filling time. Rapid rates, though they may be compensating for ischemia elsewhere, increase myocardial oxygen demand and reduce coronary blood flow, thus precipitating an ischemic heart condition or valvular disease.

1.3 Sinus Bradycardia

Sinus bradycardia is a heart rhythm that originates from the sinus node with a rate of under 60 beats per minute. The decreased heart rate can cause a decreased cardiac output, resulting in symptoms such as lightheadedness, dizziness, hypotension, vertigo, and syncope. The slow heart rate may also lead to atrial, junctional, or ventricular ectopic rhythms. Sinus bradycardia is not necessarily problematic. People who regularly practice sports may have sinus bradycardia, because their trained hearts can pump enough blood in each contraction to allow a low resting heart rate. Sinus bradycardia can even aid in the sport of free diving, which includes the various aquatic activities that share the practice of breath-hold underwater diving; bradycardia aids in this process through the drop in pulse rate. These adaptations enable the human body to endure depth and lack of oxygen far beyond what would be possible without the mammalian diving reflex. Sinus bradycardia, a sinus rhythm of less than 60 bpm, is a common condition found both in healthy individuals and in well-conditioned athletes. Studies have found that 50–85 percent of conditioned athletes have benign sinus bradycardia, compared to 23 percent of the general population studied. Trained athletes or young, healthy individuals may also have a slow resting heart rate.
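Given the thresholds defined in Sections 1.2 and 1.3 (tachycardia above 100 beats/min, bradycardia below 60 beats/min in an average adult), the detection rule applied to a measured heart rate can be sketched as follows. The function and its interface are illustrative; the paper's actual acquisition and detection pipeline is not reproduced here.

```python
# Classify a measured adult heart rate using the thresholds defined in
# Sections 1.2 and 1.3. This rule-based sketch is illustrative; the
# paper's full pipeline (ECG acquisition, rate estimation, wireless
# transmission) is not reproduced here.
def classify_rhythm(bpm: float) -> str:
    if bpm > 100:
        return "sinus tachycardia"   # rate above 100 beats/min
    if bpm < 60:
        return "sinus bradycardia"   # rate below 60 beats/min
    return "normal sinus rhythm"     # 60-100 beats/min

for rate in (45, 72, 130):
    print(rate, "->", classify_rhythm(rate))
```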

Read the full paper: http://www.ijser.org/onlineResearchPaperViewer.aspx?A_Novel_Real-time_Intelligent_Tele_Cardiology_System_Using_Wireless_Technology_to_Detect_Cardiac_Abnormalities.pdf