Analysis of Normal and Epileptic EEG Signal with Filtering Method

Chaitanya Anturkar1, Kanchan Kathoute2, Divya Bahadure3, Akash Dudhe4, Dr. P. D. Khandait5

Final-year students1,2,3,4; Guide & H.O.D.5

Department of Electronics Engineering1,2,3,4,5

K.D.K College of Engineering, Nagpur, Maharashtra1,2,3,4,5

Abstract –

In the automatic detection of epileptic seizures, monitoring critically ill patients with time-varying EEG signals is an essential procedure in intensive care units. There is growing interest in using EEG analysis to detect seizures, and in this study we aim to gain a better understanding of how to visualize the information in the EEG time-frequency features, and to design and train a novel random forest algorithm for EEG decoding, particularly across multiple levels of disease. Here, we propose an automatic detection framework for epileptic seizures based on multiple time-frequency analysis approaches; it combines a novel random forest model with grid search optimization. The short-time Fourier transform visualizes seizure features after normalization. The dimensionality of the features is reduced through principal component analysis before they are fed into the classification model. The training parameters are tuned using grid search optimization to improve detection performance and diagnostic accuracy in the recognition of three different epileptic conditions (healthy subjects, seizure-free intervals, and seizure activity). Our proposed model was used to classify 500 samples of raw EEG data, and multiple cross-validations were adopted to support the modeling accuracy. Experimental results were evaluated with accuracy, a confusion matrix, a receiver operating characteristic curve, and the area under the curve. The evaluations showed that our model achieved more effective classification than several previous conventional methods. Such a scheme for computer-aided clinical diagnosis of seizures has potential guiding significance: it not only relieves the suffering of patients with epilepsy and improves their quality of life, but also helps neurologists reduce their workload.

Key Words: EEG, sampling, quantization, personality inference, emotion analysis

INTRODUCTION

The electroencephalogram (EEG) has long been used to diagnose various disorders of the nervous system, such as epilepsy, to characterize stages of sleep in patients, and to study seizures and brain damage. The EEG is the electrical activity recorded from the scalp surface, acquired through conductive media and electrodes [1, 2]. EEG has played a vital role in investigating brain activity in clinical applications and scientific research for a long time [3], [4], [5]. Current de-noising techniques that rely on frequency-selective filtering suffer from a considerable loss of EEG information. Preventing patients from normal activity is not considered a feasible solution; indeed, this may have significant effects on the recorded EEG. Considering these issues, frequency-selective filtering for eliminating noise from EEG is regarded as a challenging task nowadays [6]. An attractive alternative is wavelet-based filtering, given its ability to examine both frequency and time maps simultaneously [7], [8], [9]. The stationary wavelet transform (SWT) was used for de-noising EEG signals by Zikov et al. [10]. The signal reconstructed by the SWT method is not a good approximation of the original EEG, since artifacts associated with the recorded EEG signal are considerably uncorrelated. A different method of de-noising EEG signals using higher-order Haar wavelets is described by Venkataramanan et al. [11]. However, that procedure is valid only for removing noise associated with eye movements.

In this study, discrete wavelet-based noise elimination is carried out to remove artifacts from the EEG signal. In de-noising physiological signals, wavelet de-noising is effective because it tends to preserve signal characteristics while reducing noise, which is preferable to frequency-domain filtering [12]. The reason is that thresholding approaches are available that allow reconstruction based on chosen coefficients [13].
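The thresholding idea can be stated concretely. The sketch below is an illustrative Python version (not the paper's MATLAB implementation), using the simple one-level Haar wavelet instead of the db8/dmey wavelets discussed next: detail coefficients are soft-thresholded so that small, noise-like coefficients vanish while large signal features survive reconstruction.

```python
import math

def haar_dwt(x):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar DWT."""
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / math.sqrt(2))
        x.append((ai - di) / math.sqrt(2))
    return x

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero; small (noise-like) ones vanish."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(x, t):
    """De-noise by thresholding only the detail coefficients."""
    a, d = haar_dwt(x)
    return haar_idwt(a, soft_threshold(d, t))
```

In practice a multilevel transform with a smoother wavelet and a data-driven threshold (e.g. universal threshold) would be used; the structure is the same.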

Daubechies (db8, db6, db2) and Meyer (dmey) wavelet functions (WFs) are used in this study for the wavelet transform stage of noise elimination. These WFs are chosen based on their mother wavelet shapes [12, 14]. Figure 1 shows the Meyer wavelet. Using these wavelets, the RMS difference was computed to quantify the effectiveness of noise removal.

1.1 ELECTROENCEPHALOGRAPHY (EEG)

The electroencephalogram, popularly known as brain waves, is the electrical activity of the brain. Physiological control processes, thought processes, and external stimuli generate signals in the corresponding parts of the brain that may be recorded at the scalp using surface electrodes [15]. EEG potentials have random-appearing waveforms with peak-to-peak amplitudes ranging from less than 10 µV to over 100 µV, and the bandwidth of the EEG signal extends from below 1 Hz to 100 Hz [8], [25]. The brain wave is extracted and the signal undergoes several stages, such as data acquisition, filtering, feature extraction, and then analysis. In data acquisition, the recorded signals are converted into a form that can be processed further. Any signal other than the one of interest can be termed noise; such components are removed using filters. Since the EEG is a non-stationary signal, feature extraction from the filtered data is done either in the time domain or in the frequency domain. Once the features are extracted, the signals are studied and compared with the normal EEG signal. After undergoing the above stages, brain waves can be compared and distinguished from any perspective. Bioelectric events are picked up from the surface of the body before they can be passed into the amplifier for subsequent recording or display; this is done using electrodes [15].

In this project, the classification of EEG signals using an experimental setup is carried out. The analysis of the EEG signal is performed on a real-time basis. The system uses a four-electrode arrangement, placing the electrodes on the forehead and above the ears, and attaching the ear clip to the left ear of the person whose EEG signal is to be analyzed.

EEG signals acquired in this manner can easily be contaminated with noise. We pre-process the records by removing artifacts introduced during EEG recording, to maintain signal stability and retain the informative segments. The continuous EEG data are filtered with a band-pass (1-45 Hz) filter to remove linear trends and minimize the introduction of artifacts. We used standard statistical methods for feature extraction. It is observed that when subjects are exposed to a specific emotion such as happiness or sadness, higher-frequency signals are more prominent than lower-frequency signals in particular regions of the brain. During intense emotional activity, changes were noticed in the alpha signal in the occipital and frontal areas of the brain. In the case of very intense sad emotion, beta signals were also visible over the temporal and frontal regions. For classification of the data we used linear discriminant analysis (LDA). The classification rate for sad emotions is 84.37%, for happiness it is 78.12%, and for the relaxed state it is found to be 92.70%.

1.2 IMAGE PROCESSING

Image processing is the process of converting an image signal into a physical image or into information derived from it. The image signal can be either digital or analog. The output can be an actual physical image or a set of characteristics of the image. The most common example of image processing is photography.

Image processing basically includes the following three steps:

  • Importing the image via image acquisition tools;
  • Analysing and manipulating the image;
  • Output in which result can be altered image or report that is based on image analysis.

There are two types of techniques used for image processing: analog and digital image processing. Analog image processing can be used for hard copies such as printouts and photographs. Image analysts apply various fundamentals of interpretation when using these visual techniques. Digital image processing techniques enable the manipulation of digital images by means of computers. The three general phases that all kinds of data must undergo in the digital technique are pre-processing, enhancement and display, and information extraction.

1.2.1 Sampling and Quantization

The sampling rate determines the spatial resolution of the digitized image, while the quantization level determines the number of gray levels in the digitized image. The magnitude of a sampled image is expressed as a digital value in image processing. The transition between continuous values of the image function and its digital equivalent is called quantization.
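A uniform quantizer is easy to state concretely. The helper below maps a continuous sample into one of `levels` gray levels; the function name and signature are illustrative, not from the paper.

```python
def quantize(value, v_min, v_max, levels):
    """Map a continuous sample in [v_min, v_max] to one of `levels` steps.

    Out-of-range values saturate at the first or last level.
    """
    if value <= v_min:
        return 0
    if value >= v_max:
        return levels - 1
    step = (v_max - v_min) / levels        # width of one quantization bin
    return int((value - v_min) / step)
```

For an 8-bit image, `levels` is 256, so a mid-scale value lands in bin 128.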

2.    SYSTEM ANALYSIS

The WT-based strategies presented in the literature for the investigation of epilepsy in EEG mainly apply the discrete wavelet transform (DWT) or wavelet packet decomposition (WPD). The WT is a time-frequency technique, which provides both time and frequency views of a signal [23]. It can therefore accurately capture and localize transient features in the data, such as epileptic spikes. In wavelet analysis, a linear combination of specific functions represents the underlying signal. These functions are obtained by dilation and translation of the mother wavelet. The signal is decomposed into segments of half its size and spectrum using the mother wavelet. In particular, in the DWT the scaling and translation parameters are expressed in powers of two. A series of quadrature mirror filters (QMF) is used, serving as high-pass and low-pass filters. At the first level, the conjugate filters (high-pass and low-pass) are applied to the input signal, resulting in a set of coefficients, named wavelet coefficients. The "approximation" is the output of the low-pass filter and is sub-decomposed, extending this procedure to the next level. In contrast, the output of the high-pass filter (the "detail") is not decomposed further. At each subsequent level, the procedure is repeated only for the approximation, until the signal is decomposed far enough to reveal the band of interest.
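The level-by-level scheme just described (split, keep the detail, re-split only the approximation) can be sketched in a few lines, with the Haar filter pair standing in as an illustrative QMF:

```python
import math

def haar_step(x):
    """One QMF stage: low-pass -> approximation, high-pass -> detail."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def dwt_multilevel(x, levels):
    """DWT tree: only the approximation is re-decomposed at each level.

    Returns [D1, D2, ..., Dn, An], matching the usual detail/approximation
    naming used in the text.
    """
    coeffs, a = [], list(x)
    for _ in range(levels):
        a, d = haar_step(a)
        coeffs.append(d)
    coeffs.append(a)
    return coeffs
```

Each detail list is half the length of the previous one, which is why an n-level decomposition of a 0-86.8 Hz recording halves the analyzed band at every level.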

WPD is a wavelet transform and can also be interpreted as an extension of the DWT, in which the signal is analyzed with a set of QMFs that divide the frequency axis into discrete intervals of varying sizes [24]. However, in the WPD the signal passes through a larger number of filters than in the DWT, and both the detail and approximation coefficients are decomposed. At the first level of decomposition, the obtained wavelet packet coefficients are referred to as the first-level approximation and detail, respectively. At the second level, the approximation of the approximation (AA), the detail of the approximation (DA), the approximation of the detail (AD), and the detail of the detail (DD) coefficients are computed, and this recursive algorithm makes each newly computed wavelet packet coefficient the root of its own analysis tree. This recursive splitting is represented as a binary tree. The steps of the methodological approaches presented in the literature are common in both cases: the EEG signal is decomposed into several frequency sub-bands and features are extracted, forming a feature vector that is most commonly used as input to a classifier.

DWT-based studies

The sampling frequency of the EEG recordings in the Bonn database is 173.61 Hz, and therefore the frequency range is 0-86.8 Hz. In most techniques, the whole spectrum of the EEG recordings was analyzed. However, frequencies higher than 60 Hz are usually characterized as noise and are therefore discarded. Thus, several researchers have first applied a band-pass filter, which removes the redundant frequencies and concentrates only on the range that corresponds to the five medically established EEG rhythms, namely delta (0-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz), and gamma (30-60 Hz or 30-80 Hz).
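The five clinical rhythm bands fit in a small lookup. The `rhythm_of` helper below is an illustrative encoding of the boundaries quoted above, using the 30-60 Hz gamma variant and treating anything above 60 Hz as discardable noise:

```python
# Band edges in Hz, half-open intervals [lo, hi), as quoted in the text.
RHYTHMS = [("delta", 0, 4), ("theta", 4, 8), ("alpha", 8, 13),
           ("beta", 13, 30), ("gamma", 30, 60)]

def rhythm_of(freq_hz):
    """Name the clinical EEG rhythm containing a given frequency."""
    for name, lo, hi in RHYTHMS:
        if lo <= freq_hz < hi:
            return name
    return "noise"  # above 60 Hz is typically discarded as noise
```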

Subasi [3] used the DWT to decompose the EEG signals into six frequency sub-bands. However, only the wavelet coefficients that correspond to the frequency range of interest, 0-21.7 Hz, meaning the details D3-D5 and the approximation A5, were used to compute the features and train a mixture-of-experts (ME)-based classifier. Guo et al. [4] also used the DWT to decompose the EEG signals, applying a four-level decomposition and separating the chosen EEG recordings into five frequency sub-bands. The line length feature was extracted from each of the five sub-signals (D1-D4 and A4), forming the feature vector that trained a multilayer perceptron neural network (MLP). Ocak [5] applied a three-level decomposition over the whole spectrum (0-86.8 Hz). Approximate entropy (ApEn) values, calculated for all the frequency bands, were used to define a threshold that classified the EEG segments. Kumar et al. [6] applied a five-level decomposition and calculated the ApEn at each decomposition level. The resulting feature vector was fed to an MLP classifier. In a subsequent study, the same group applied a five-level decomposition (as previously proposed in [6]), using fuzzy approximate entropy (fApEn) and support vector machines (SVM) for classification.

2.1 EXISTING SCHEME

A comparison of three feature extraction methods, principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA), was presented in [8]. The EEG recordings were subjected to a five-level decomposition, and statistical features were extracted only from the sub-signals D3, D4, D5, and A5, which correspond to the frequency range of 0-21.7 Hz. The dimension of the resulting feature set was reduced using PCA, ICA, and LDA, and the feature vector was used as input to an SVM classifier. In another DWT-based study [9], the authors' main goal was the implementation of a feature extraction framework based on genetic programming. Accordingly, they applied a four-level decomposition to analyze the signal into sub-signals and then genetic programming, aiming to reduce the dimension of the extracted feature vector. The full set of extracted features and the reduced set were used separately to train a k-nearest neighbor (KNN) classifier. Results showed that the reduced feature vector improved the classifier's performance. A comprehensive method based on an optimized extreme learning machine (OELM) was proposed in [10]. In this approach, wavelet-based statistical features were extracted from a four-level decomposition, and the OELM classifier was trained on features extracted from the whole spectrum (0-86.8 Hz). Five classification problems were examined (among them the five-class problem Z-O-N-F-S), and the performance was measured with accuracy, which reached above 94% for all of the problems.
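Dimensionality reduction with PCA amounts to projecting features onto the leading eigenvectors of their covariance matrix. As a minimal two-dimensional illustration (real pipelines operate on much higher-dimensional wavelet feature vectors), the leading eigenvector of a 2x2 covariance matrix has a closed form:

```python
import math

def first_pc_2d(points):
    """First principal component of 2-D points, via the closed-form
    leading eigenvector of the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Larger eigenvalue of [[sxx, sxy], [sxy, syy]].
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(max(tr * tr / 4 - det, 0.0))
    if abs(sxy) > 1e-12:
        vx, vy = sxy, lam - sxx          # eigenvector for eigenvalue lam
    elif sxx >= syy:
        vx, vy = 1.0, 0.0                # covariance already axis-aligned
    else:
        vx, vy = 0.0, 1.0
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm
```

Projecting each feature vector onto the leading components and discarding the rest is exactly the reduction step fed to the SVM in the studies above.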

Another approach is to separate the frequency band of interest, containing the five EEG rhythms, from the redundant frequencies of the signal by applying a band-pass filter. A wavelet-chaos technique was presented by Adeli et al. [11], where a low-pass finite impulse response (FIR) filter was used to limit the EEG signal to the 0-60 Hz band. The EEG recordings were then subjected to a four-level decomposition, and the average values and standard deviations of several parameters (namely the correlation dimension and the largest Lyapunov exponent) were calculated in every wavelet sub-signal (D1-D4 and A4), representing the system's chaoticity. In a subsequent study [12], the aforementioned authors applied wavelet analysis, decomposed the signals into the same frequency sub-bands, and evaluated various methods of classification. A similar approach is described in study [13], wherein the authors applied a band-pass filter and removed all signal activity outside the 0-60 Hz range to prepare the EEG signals for further processing. In the next step, a four-level decomposition was applied and the calculated autoregressive (AR) parameters of each sub-band were fed to an MLP classifier. Wang et al. [14] presented a novel classification algorithm based on a voting methodology and a hardware implementation. The authors used a band-pass filter to concentrate only on the 0-32 Hz range, then applied a three-level decomposition and extracted the sample entropy (SampEn) only from the detail coefficients (D1, D2, D3).

Ocak [15] analyzed the EEG segments through a four-level wavelet packet decomposition. ApEn values of the wavelet coefficients of all 31 nodes of the decomposition tree were used as a feature vector, while a genetic algorithm was used to reduce the number of features and locate the optimal feature subset that maximizes the classification performance of a learning vector quantization (LVQ) scheme. The authors of [16] used wavelet packet decomposition to extract relevant information from the EEG signal. A six-level wavelet packet decomposition yielding 64 nodes was performed, and several statistical features were extracted from every node. The authors tested seven different combinations of the feature vector and arrived at the best pair, reaching high levels of accuracy. Table 1 summarizes the WT-based techniques (DWT and WPD) presented in the literature.

2.2    PROPOSED SCHEME

2.2.1    ADVANTAGES OF THE PROPOSED SYSTEM

  • Three-dimensional condition detection
  • Emotion recognition helps obtain accurate information

3.    REQUIREMENT SPECIFICATIONS

The requirements specification is a technical specification of requirements for the software product. It is the first step in the requirements analysis process and lists the requirements of a particular software system, including functional, performance, and security requirements. The requirements also provide usage scenarios from a user, an operational, and an administrative perspective. The purpose of a software requirements specification is to provide a detailed overview of the software project, its parameters, and its goals. It describes the project audience and its user interface, hardware, and software requirements, and defines how the client, the team, and the audience see the project and its functionality.

3.1 HARDWARE REQUIREMENTS

  • Hard Disk                 :              40 GB and above
  • RAM                       :              512 MB and above
  • Processor                 :              Pentium III and above

3.2 SOFTWARE REQUIREMENTS

  • Windows XP operating system and above
  • MATLAB 7.6.0 (R2008a)

4.    FEASIBILITY STUDY

4.1 TECHNICAL FEASIBILITY

It is evident that the necessary hardware and software are available for the development and implementation of the proposed system; it uses MATLAB.

The smoothed pseudo Wigner-Ville distribution (SPWVD) was applied in study [17]. Different time-frequency resolutions (64, 128, 256, and 512), time windows (3 and 5), and frequency sub-bands (4, 5, 7, and 13) were investigated, aiming to extract several features from the spectrum of the signal reflecting the energy distribution over the time-frequency plane. PCA was applied to the obtained features, and then an artificial neural network (ANN) was used for classification. In [18], the same group presented an extensive study in which the short-time Fourier transform (STFT) and 12 other TFDs were evaluated. The power spectral density (PSD) of each segment was also extracted and used as input to an ANN classifier.
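Sweeping parameter combinations such as the resolutions and windows above is a plain exhaustive grid search. A hedged sketch, with a caller-supplied scoring function standing in for cross-validated classifier accuracy:

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Exhaustive grid search: evaluate every parameter combination and
    return (best_params, best_score), where higher scores are better.

    param_grid maps parameter names to the candidate values to try;
    score_fn takes a dict of parameters and returns a scalar score.
    """
    names = sorted(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        s = score_fn(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score
```

In the proposed scheme the scoring function would be cross-validated random forest accuracy; here the structure of the search is what matters.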

A framework based on the fast Fourier transform (FFT) and ApEn was proposed in [19]. The average power spectrum was extracted in each 4 Hz sub-band along with the ApEn. In total, 16 features were extracted, and the ability of genetic programming and PCA to reduce the dimension of the feature vector was examined. An SVM classifier with linear and radial basis (kernel) functions was also used.

In study [20], EEG analysis using TFDs, in particular the spectrogram (SP), the Choi-Williams distribution (CWD), and the SPWVD, is performed. The purpose of the study was both the identification of the seizure peaks and the classification of the EEG signals. For the identification of the seizure peaks, the TFDs were calculated and their maximum values were located. The normalized Renyi marginal entropy (RME) was extracted for various window lengths (11, 17, 27, 41, 49, 93, 151, 205, 255) for the SP and the SPWVD, and the best value for the CWD was obtained from the best window lengths of the SP and the SPWVD. The SPWVD with the RME gave the best results in terms of time-frequency resolution for the peak identification problem. Every signal of the entire dataset was segmented into six sub-bands, and the energy of the sub-bands B1, B2, and B3, corresponding to the frequency range of interest of 0.5 to 12 Hz, was extracted. A vector of 200 energy values for the three sub-bands of interest was obtained, and moving averages were extracted. The classification of the signals was performed with a threshold defined as the mean of the moving average of the energy for each band. The obtained results were used as input to a score function to classify each signal.
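The moving-average thresholding step described above can be sketched as follows; the window length and the 0/1 labels are illustrative assumptions, not values from study [20]:

```python
def moving_average(x, w):
    """Length-w moving average over the valid positions only."""
    return [sum(x[i:i + w]) / w for i in range(len(x) - w + 1)]

def threshold_classify(energies, w=5):
    """Label each smoothed band-energy value against the mean of the
    moving average, mirroring the thresholding described above."""
    ma = moving_average(energies, w)
    thr = sum(ma) / len(ma)              # threshold = mean of the moving average
    return [1 if v > thr else 0 for v in ma]   # 1 = above threshold (seizure-like)
```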

4.2    ECONOMICAL FEASIBILITY

The cost of the proposed system is comparatively low compared with other medical software.

4.3    OPERATIONAL FEASIBILITY

This project requires configuring the software, and a technical background is necessary to work with the microcontroller and the software.

5. SYSTEM ARCHITECTURE

5.1 MODULE DESIGN SPECIFICATION: OVERALL ARCHITECTURE OF THE SYSTEM

Figure 1 – ARCHITECTURE OF THE PROPOSED SYSTEM

DESCRIPTION

The EEG (electroencephalogram) signals generated using the programmable microcontroller are given as input to empirical mode decomposition (EMD), which decomposes the signals into intrinsic mode functions (IMFs) along with a trend and obtains instantaneous frequency data. Here the EMD decomposes the EEG signals into five intrinsic mode functions, namely IMF1, IMF2, IMF3, IMF4, and IMF5, for the five types of signals alpha, beta, gamma, delta, and theta, respectively.

Based on the type of IMF signal, the corresponding feature is extracted, and the extracted feature is given to the SVM classifier. The SVM classifier classifies each feature using the kernel function k(x, y) to recognize the emotion. The recognized results are used to predict emotion.
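The kernel function k(x, y) referred to above is typically the Gaussian (RBF) kernel. Below is a minimal sketch of the kernel and of the resulting SVM decision rule over stored support vectors; the training step that produces the alphas and the bias is omitted, and the default gamma is an illustrative assumption.

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel: k(x, y) = exp(-gamma * ||x - y||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def svm_decision(x, support_vecs, alphas, labels, bias, gamma=0.5):
    """Sign of the SVM decision function over stored support vectors.

    labels are +1/-1; alphas and bias come from a trained SVM.
    """
    s = sum(a * y * rbf_kernel(sv, x, gamma)
            for sv, a, y in zip(support_vecs, alphas, labels)) + bias
    return 1 if s >= 0 else -1
```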

5.1.1 MODULE 1: Pre-processing

The input RGB image is resized to a height of 320 pixels. The resized image undergoes two separate processing pipelines: a saturation-based one and a color-texture one. In the first, the image is initially gamma corrected and then the RGB values are converted to HSV to extract the saturation channel. These values are automatically thresholded, and morphological operations are carried out to clean up the obtained binary image. The second pipeline is based on a segmentation algorithm that works on both color and texture features.
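The saturation branch of this module can be sketched with the standard-library `colorsys` module; the gamma value and threshold below are illustrative assumptions, and the morphological clean-up step is omitted.

```python
import colorsys

def saturation_mask(pixels, gamma=2.2, thr=0.5):
    """Gamma-correct RGB pixels (channel values in [0, 1]), convert to HSV,
    and threshold the saturation channel into a binary mask."""
    mask = []
    for r, g, b in pixels:
        # Gamma correction before the color-space conversion.
        r, g, b = r ** (1 / gamma), g ** (1 / gamma), b ** (1 / gamma)
        _, s, _ = colorsys.rgb_to_hsv(r, g, b)
        mask.append(1 if s > thr else 0)
    return mask
```

A saturated color such as pure red passes the threshold, while a neutral gray (zero saturation) does not.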

5.1.2 MODULE 2: EEG Dataset Creation and Group Segmentation

This module consists of the methods involved in obtaining the EEG data sets that match mood swings. The EEG data set is processed in the MATLAB environment to segment it by its alpha, beta, gamma, and theta ranges.

5.1.3 MODULE 3: Classification

This module classifies the EEG information with respect to the wave region in order to provide the individual's emotional status, such as normal or abnormal, happy or sad. These features are further used to analyze the individual's emotional condition and mood swings for psychological analysis.

6.  APPENDICES

6.1 SAMPLE SCREENSHOTS FROM THE PROJECT

6.1.1 IN HAPPY MOOD

Figure 2 – Stem plot of the ranges of the detected signals for 300 samples

6.1.2 ANGRY/ABNORMAL

Figure 3 – Frequency graph of the classified signals for 1000 samples

7. CONCLUSIONS

A strategy for the systematic examination of frequency sub-band definitions in EEG analysis for epilepsy is presented in this work, in order to evaluate the effect of different numbers and alternative definitions of frequency sub-bands on this problem. The methodology is based on the definition of different spectral boundaries, from which a set of frequency sub-bands is created. A set of spectral features is then extracted and used to train a random forest classifier. For every particular number of spectral thresholds (ranging from 0 to 12), all combinations of sub-band definitions are examined, with the restriction that each sub-band range must be at least 2 Hz wide, resulting in a total of approximately 1.32 × 10^8 frequency sub-band combinations. The methodology has been applied to a benchmark dataset, the Bonn EEG database, for the five-class (Z-O-N-F-S) and the three-class (ZO-NF-S) problems.

There are three essential steps in the overall performance evaluation framework: data collection, data transformation, and data visualization. Data collection is the process by which data about application performance are acquired from an executing program. Data are normally accumulated in a file, either during or after execution, although in certain circumstances they can be presented to the user in real time. Three simple data collection techniques can be distinguished: profiles, counters, and traces. The raw data produced by profiles, counters, or traces are not usually in the form required to answer performance questions. Hence, data transformations are applied, often with the goal of reducing the total amount of data. Transformations can be used to compute mean values or other higher-order statistics, or to extract profile and counter data from traces. For example, a profile recording the time spent in each subroutine on each processor might be transformed to determine the average time spent in each subroutine across processors, and the standard deviation from this mean. Similarly, a trace can be processed to produce a histogram giving the distribution of message sizes. Each of the various performance tools described in subsequent sections incorporates some set of built-in transformations; more specialized transformations can also be coded by the developer. Parallel performance data are inherently multidimensional, comprising execution times, communication costs, and so on, for multiple program components, on different processors, and for different problem sizes. Although data reduction techniques can be used in certain circumstances to compress performance data to scalar values, it is often necessary to explore the raw multidimensional data. As is well known in computational science and engineering, this process can benefit enormously from the use of data visualization techniques, and both conventional and more specialized display techniques can be applied to performance data.

It is therefore concluded that this project can be realized with further improvements and cost-effective components. As future enhancements, live sensors could be added to detect a person's mood, and facial recognition could be performed live, without per-pixel preprocessing, using machine learning.

8. REFERENCES

  1. D. P. McAdams and B. D. Olson, "Personality development: Continuity and change over the life course," Annual Review of Psychology, vol. 61, pp. 517-542, 2010.
  2. O. P. John, S. E. Hampson, and L. R. Goldberg, "Is there a basic level in personality-trait hierarchies? Studies of trait use and accessibility in different contexts," Journal of Personality and Social Psychology, vol. 60, no. 3, pp. 348-361, 1991.
  3. A. Panaccio and C. Vandenberghe, "Five-factor model of personality and organizational commitment: The mediating role of positive and negative affective states," Journal of Vocational Behavior, vol. 80, no. 3, pp. 647-658, 2012.
  4. J. M. Digman, "Five robust trait dimensions: Development, stability, and utility," Journal of Personality, vol. 57, pp. 195-214, 1989.
  5. R. R. McCrae and P. T. Costa, "Validation of the five-factor model of personality across instruments and observers," Journal of Personality and Social Psychology, vol. 52, no. 1, pp. 81, 1987.
  6. L. R. Goldberg, "The structure of phenotypic personality traits," American Psychologist, vol. 48, no. 1, pp. 26-34, 1993.
  7. O. P. John and S. Srivastava, "The Big Five trait taxonomy: History, measurement, and theoretical perspectives," in Handbook of Personality: Theory and Research, L. A. Pervin and O. P. John, Eds., pp. 102-138.
  8. Uma Somani, “Implementing Digital Signature with RSA Encryption Algorithm to Enhance the Data Security of Cloud in Cloud Computing,”2010 1st International Conference on Parallel, Distributed and Grid Computing (PDGC-2010).
  9. M. Komarraju, S. J. Karau, R. R. Schmeck, and A. Avdic, “The Big Five personality traits, learning styles, and academic achievement,” Personality and Individual Differences, vol. 51, no. 4, pp. 472-477, 2011.
  10. M. R. Barrick, and M. K. Mount, “The big five personality dimensions and job performance: a meta‐analysis,”Personnel Psychology, vol. 44, no. 1, pp. 1-26, 1991.
  11. E. E. Noftle, and P. R. Shaver, “Attachment dimensions and the big five personality traits: Associations and comparative ability to predict relationship quality,” Journal of Research in Personality, vol. 40, no. 2, pp. 179-208, 2006.
