Step Size Regression Based on Absolute Value of Gradient


Figure 2.20: EEG before and after noise filtering, segment n20013000 .



Figure 2.21: EEG before and after noise filtering, segment n30014000 .


Comparison of the filtered EEG signal (top) with the clean EEG signal (middle) in Figure 2.19 shows that the filtered signal approaches, but has not yet reached, the noise-free EEG signal S(n). This is also reflected in the mean square error in Figures 2.12 and 2.13: the fluctuation of the MSE increases as the adaptive step size increases from 0.05 to 0.5.


2.3. LMS algorithm with variable step size

The experimental results presented in the section above give a preliminary classification of step sizes for the LMS algorithm and suggest "mixing" models (see [29]), using different step-size regimes to exploit the advantages of each. However, the proposal in [29] only addresses EEG signals, under a strong assumption about the initial weight values. This is an important result, but it cannot be applied directly to ECG noise filtering because of the quite different nature of these two signal classes. The distribution of the magnitude of the gradient provides the idea and the basis for the thesis's proposal on how to vary the adaptive step size, creating a "mixing" model applicable to several classes of noise-filtering problems for ECG and EEG signals. This section introduces the mathematical basis of the method and experimental results for a new rule for updating the adaptive step size during noise filtering of recorded electrocardiogram and electroencephalogram signals. The experiments illustrate and compare the performance of the proposed algorithm against the LMS algorithm with a fixed step size.

2.3.1. Step size change based on absolute value of Gradient

To speed up the convergence of the LMS algorithm, Daniel Olguín Olguín in [29] proposed to change the adaptive step size according to the formula:

μ(n+1) = α·μ(n) + γ·ε²(n),   (2.22)

where:

α: forgetting factor, with a value in the range 0 < α < 1, often chosen as α = 0.98;

γ: adaptation parameter of the step size μ, usually chosen to satisfy the condition γ > 0.
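Assuming (2.22) has the reconstructed form μ(n+1) = α·μ(n) + γ·ε²(n), the update can be sketched as follows. The values of α, γ and the clipping bounds are illustrative choices, not taken from the source:

```python
# Sketch of the variable-step-size rule attributed to [29] (eq. 2.22),
# assuming the form mu(n+1) = alpha * mu(n) + gamma * eps(n)**2.
# alpha, gamma, mu_min and mu_max are illustrative values.

def update_step_size(mu, eps, alpha=0.98, gamma=1e-4,
                     mu_min=1e-6, mu_max=0.5):
    """Return the next adaptive step size given the current error eps."""
    mu_next = alpha * mu + gamma * eps ** 2
    # Clipping keeps mu inside a stability range (a common practical guard,
    # not part of the original formula).
    return min(max(mu_next, mu_min), mu_max)

# With a decaying error sequence, the step size decays as well.
mu = 0.1
for eps in [0.5, 0.3, 0.1, 0.05]:
    mu = update_step_size(mu, eps)
```

Because α < 1, the forgetting term shrinks μ geometrically, while the γ·ε² term re-inflates it whenever the error is large, which is the behaviour the text describes.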

The above proposal originates from the problem of noise filtering for electroencephalogram (EEG) signals, its main contribution being the step-size update rule (2.22). However, (2.22) is only suitable for noise filtering of EEG signals, owing to the uniform variation of this signal class, whose values remain within the bounded range [−0.15·max, 0.15·max] (see [5], [29]). The width of the suppression (rejection) notch reflects the degree of attenuation applied to signals with frequencies near the current cancellation frequency ω₀ (see [44]). The rejection notch width is calculated as follows:

BW = 2·μ·C²,

where:

BW: notch (rejection band) width;

μ: adaptive step size;

C: amplitude of the reference noise signal (see formulas (2.8) and (2.9)).
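The relation BW = 2μC² makes the trade-off explicit: the notch narrows in direct proportion to the step size. A minimal numeric sketch (the μ and C values are illustrative):

```python
# Notch (rejection band) width of the frequency canceller: BW = 2 * mu * C**2,
# where mu is the adaptive step size and C the reference-noise amplitude.
# The sample values below are illustrative, not from the source.

def notch_bandwidth(mu, C):
    """Rejection-notch width of the adaptive canceller."""
    return 2.0 * mu * C ** 2

bw_wide = notch_bandwidth(0.05, 1.0)     # larger mu -> wider notch
bw_narrow = notch_bandwidth(0.001, 1.0)  # smaller mu -> narrower notch
```

This is why a small converged μ is desirable: it keeps the notch narrow and avoids attenuating useful signal components near ω₀.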

Therefore, when the algorithm converges, μ can take a value small enough to make the suppression notch sufficiently narrow. In the electrocardiogram signal (Figure 1.1), the R peak in each cardiac cycle changes abruptly. The term ε²(n) in (2.22) may therefore violate the stability condition of the algorithm and cause loss of useful information when the notch width becomes too large. Furthermore, the author in [29] assigns a large value to μ at initialization and relies on formula (2.22) to decrease μ toward its best value. This may cause the algorithm not to converge, or to converge slowly, if the initial point of the weight matrix happens to be chosen near the minimum point.

According to [3], [27], [30], noise is also modified during propagation from the noise source to the reference receiver. This deviation is modeled by a random quantity with a Gaussian distribution and is described by the following formula:

N(n) = x₁(n) + normrnd(mean, sigma).

The standard deviation sigma reflects the distance from the minimum point to the initial point of the weight matrix. The relationship between the standard deviation and the number of iterations is described in Table 2.1 and in Figures 2.22 and 2.23.
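The model N(n) = x₁(n) + normrnd(mean, sigma) (normrnd is MATLAB's Gaussian sampler) can be sketched in Python with the standard library; the sinusoidal source x₁(n) and all parameter values here are illustrative assumptions:

```python
import math
import random

# Sketch of the reference-noise model N(n) = x1(n) + normrnd(mean, sigma):
# the noise reaching the reference sensor is the source noise x1(n) plus a
# Gaussian deviation. Amplitude, frequency, mean and sigma are illustrative.

def reference_noise(n_samples, C=1.0, omega0=0.1 * math.pi,
                    mean=0.0, sigma=0.01, seed=0):
    """Generate a sinusoidal source corrupted by a Gaussian deviation."""
    rng = random.Random(seed)
    x1 = [C * math.cos(omega0 * n) for n in range(n_samples)]
    # random.gauss plays the role of MATLAB's normrnd(mean, sigma).
    return [x + rng.gauss(mean, sigma) for x in x1]

N = reference_noise(1000)
```

Increasing sigma in this model corresponds, per Table 2.1, to a harder starting condition for the weight matrix and hence more iterations to converge.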


Table 2.1: Relationship between standard deviation and number of iterations required for the algorithm to converge.

| No. | Standard deviation | Iterations to converge (step size per formula 2.20) | Iterations to converge (step size per formula 2.21) |
|-----|--------------------|-----------------------------------------------------|-----------------------------------------------------|
| 1   | 0.01               | 500                                                 | 450                                                 |
| 2   | 0.005              | 500                                                 | 400                                                 |
| 3   | 0.0001             | 520                                                 | 370                                                 |
| 4   | 0.00005            | 520                                                 | 350                                                 |
| 5   | 0.000001           | 530                                                 | 300                                                 |



It is easy to see that if the strict assumption on the choice of initial weight values is not satisfied, the algorithm converges slowly. Unfortunately, in many experiments this assumption does not hold. The thesis's proposal is based on exploiting information about the change in magnitude of the gradient vector in the LMS algorithm.


Figure 2.22: Convergence of the LMS algorithm using formula (2.22) to adjust μ(n), when the initial coordinates (w₁(0), w₂(0)) are chosen appropriately.

Figure 2.23: Convergence of the LMS algorithm using formula (2.22) to adjust μ(n), when (w₁(0), w₂(0)) are not suitable.


For a positive definite second-order function, the gradient has a large magnitude far from the minimum point and a small magnitude near it (Figure 2.24). The idea of the thesis can be described on the plane (w₁, w₂) (see [44]). At time n, the adaptive step size should take a large value when the coordinates (w₁(n), w₂(n)) are far from the coordinates (w₁*, w₂*) of the minimum point of the second-order performance surface (see [44]). Conversely, the adaptive step size should take a small value when (w₁(n), w₂(n)) is close to the coordinates of the minimum point. Such a choice of adaptive step size helps the filtering algorithm satisfy the conditions on both convergence speed and stability. We find that the magnitude distribution of the gradient on the plane (w₁, w₂) has properties that almost satisfy the above idea (see [38]), and the proposed update formula for the adaptive step size is as follows:

μ(n+1) = |x₁(n)·ε(n)| + δ / (2·max{x₁²(m) : m = n−1, …, n−N+2}),   (2.23)

where:

N: the number of samples in one cycle of the reference signal;

δ: the ideal width of the suppression band (see [3], [29], [44]);

max{x₁²(m) : m = n−1, …, n−N+2}: returns the value C² at time n;

μ(n): the step size for weight adjustment at time n.

Note that the first term on the right-hand side of (2.23), |x₁(n)·ε(n)| = (1/2)·|∇(n)|, reflects the distribution of the magnitude of the gradient on the plane (w₁, w₂), where:

x₁(n): the noise obtained at the reference input at time n;

ε(n): the output of the noise filter at time n.
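Assuming the reconstruction of (2.23) above, a minimal sketch of the proposed step-size update inside a two-weight LMS noise canceller might look as follows. The sinusoidal reference, the toy primary input, δ and μ(0) are all illustrative assumptions, not values from the source:

```python
import math

# Sketch of the proposed update (2.23) inside a two-weight LMS canceller
# (cf. (2.8), (2.11), (2.12)):
#   mu(n+1) = |x1(n)*eps(n)| + delta / (2 * max x1(m)^2 over the last cycle).
# delta, mu0 and the signal model are illustrative assumptions.

def lms_vss(d, x1, x2, N, delta=0.1, mu0=0.01):
    """Run the canceller; return final weights and the error history."""
    w1 = w2 = 0.0
    mu = mu0
    eps_hist = []
    for n in range(len(d)):
        eps = d[n] - (w1 * x1[n] + w2 * x2[n])    # canceller output (error)
        w1 += 2 * mu * eps * x1[n]                # LMS weight update (2.11)
        w2 += 2 * mu * eps * x2[n]                # LMS weight update (2.12)
        window = x1[max(0, n - N + 2): n + 1]     # samples of the last cycle
        c2 = max(v * v for v in window)           # estimate of C^2
        mu = abs(x1[n] * eps) + delta / (2 * c2)  # proposed update (2.23)
        eps_hist.append(eps)
    return w1, w2, eps_hist

# Toy experiment: the primary input d is pure sinusoidal noise that the
# two-weight filter can cancel exactly, so eps should decay toward zero.
omega0 = 0.1 * math.pi                  # 20 samples per cycle -> N = 20
x1 = [math.cos(omega0 * n) for n in range(400)]
x2 = [math.sin(omega0 * n) for n in range(400)]
d = [0.5 * x1[n] + 0.2 * x2[n] for n in range(400)]
w1, w2, eps_hist = lms_vss(d, x1, x2, N=20)
```

In this sketch the first term of the μ update is large while the error (and hence the gradient magnitude) is large, and the second term pins the converged step size near δ/(2C²), i.e., near the value giving the ideal notch width δ.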



Figure 2.24: Magnitude of the gradient on the plane (w₁, w₂).


We have determined the convergence condition of the frequency-cancelling filter using formula (2.23), given in the following lemma.

Lemma. With δ < 1/2, the sequences (2.11) and (2.12) with μ(n) given by (2.23) converge.

Proof. From formulas (2.8), (2.11) and (2.23) we have

ε(n) = S(n) − Σ_{i=1}^{2} wᵢ(n)·xᵢ(n),   (2.24)

and

w₁(n+1) = w₁(n) + 2·μ(n+1)·ε(n)·x₁(n) = (1 − (δ/C²)·x₁²(n))·w₁(n) + F(n),   (2.25)

where F(n) is bounded by a constant M independent of n.

The convergence of the above recursion is reflected in the expectation of v₁(n) of the associated linear system. Consider the approximate linear system for v₁(n) given by

v₁(n+1) = (1 − (δ/C²)·x₁²(n))·v₁(n).   (2.26)

The expectation of v₁(n+1) is calculated based on (2.16) and is determined by

E[v₁(n+1)] = Π_{l=1}^{n} (1 − (δ/C²)·E[x₁²(l)]) · E[v₁(0)];   E[v₁(n)] → 0 as n → ∞.   (2.27)

The assumption δ < 1/2 ensures the convergence of the linear system. The convergence of the expectation of the sequence (2.11), with μ(n) given by (2.23), is then proved based on the evaluation (2.27) and the estimate

|E[ Π_{l=1}^{k} (1 − (δ/C²)·x₁²(l)) · F(n−k) ]| ≤ M·(1 − δ/2)^k.

The proof of convergence for the sequence w₂(n) is carried out similarly.
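A supporting step for (2.27), assuming the reference noise is sinusoidal, x₁(l) = C·cos(ω₀l + φ), consistent with the amplitude C of formulas (2.8) and (2.9) (this form is an assumption, not stated explicitly here):

```latex
E\!\left[x_1^2(l)\right] = E\!\left[C^2\cos^2(\omega_0 l + \varphi)\right] = \frac{C^2}{2},
\qquad
1 - \frac{\delta}{C^2}\,E\!\left[x_1^2(l)\right] = 1 - \frac{\delta}{2}.
```

Each factor of the product in (2.27) therefore has magnitude strictly less than 1 whenever 0 < δ < 2; in particular, under the lemma's condition δ < 1/2 the factors lie in (3/4, 1), so the product, and hence E[v₁(n)], tends to 0.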

2.3.2. Experiment and results

To evaluate the results, we use formula (2.7). The convergence speed of the algorithm is reflected in how quickly the MSE approaches 0 or a value close to 0. The stability of the algorithm is reflected in the variation of the MSE after the algorithm has converged.
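The two criteria described above (convergence speed and post-convergence stability of the MSE) can be quantified from an error sequence with a sliding-window MSE; the window length and tolerance below are illustrative choices, not from the source:

```python
# Sketch of the evaluation in 2.3.2: convergence speed is read from how fast
# the windowed MSE approaches 0, stability from how much it varies afterward.
# The window length and tolerance are illustrative choices.

def windowed_mse(errors, window=50):
    """MSE over consecutive non-overlapping windows of the error sequence."""
    return [sum(e * e for e in errors[i:i + window]) / window
            for i in range(0, len(errors) - window + 1, window)]

def convergence_index(mse, tol=1e-3):
    """Index of the first window whose MSE falls below tol, or None."""
    for i, v in enumerate(mse):
        if v < tol:
            return i
    return None

# Example with a synthetic, geometrically decaying error sequence.
errs = [2 ** (-n / 10) for n in range(300)]
mse = windowed_mse(errs, 50)
idx = convergence_index(mse, tol=1e-3)
```

The spread of the windowed MSE values after `idx` then serves as a simple stability measure for comparing the fixed-step and variable-step algorithms.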

2.3.2.1. For electrocardiogram signals

In Figure 2.25 we show the MSE in three cases.
