13. Adaptive Filters

Previous: 12. Multirate Signal Processing | Next: 14. Time-Frequency Analysis


An adaptive filter is a filter whose coefficients are adjusted automatically by an optimization algorithm. Unlike fixed filters, which are designed with complete prior knowledge of the signal and noise statistics, an adaptive filter continuously updates its parameters from the data, so it can operate in unknown or time-varying environments. This is the core technology behind countless real-world systems, including noise-cancelling headphones, echo cancellation in telephony, and channel equalization in modems.

๋‚œ์ด๋„: โญโญโญโญ

์„ ์ˆ˜ ์ง€์‹: FIR/IIR ํ•„ํ„ฐ ์„ค๊ณ„, ์„ ํ˜•๋Œ€์ˆ˜, ๊ธฐ๋ณธ ์ตœ์ ํ™” ๊ฐœ๋…

Learning objectives:

  • Derive the Wiener filter as the optimal MMSE linear filter
  • Understand the method of steepest descent and its convergence properties
  • Derive and implement the LMS algorithm and analyze its convergence behavior
  • Implement the normalized LMS (NLMS) for improved convergence
  • Derive and implement the RLS algorithm using the matrix inversion lemma
  • Compare LMS and RLS in terms of complexity, convergence, and tracking
  • Apply adaptive filters to system identification, noise cancellation, echo cancellation, and equalization


Table of Contents

  1. Why Adaptive Filtering?
  2. The Wiener Filter: Optimal MMSE Solution
  3. Method of Steepest Descent
  4. The LMS Algorithm
  5. LMS Convergence Analysis
  6. Normalized LMS (NLMS)
  7. The RLS Algorithm
  8. Comparison: LMS vs RLS
  9. Application: System Identification
  10. Application: Noise Cancellation
  11. Application: Echo Cancellation
  12. Application: Channel Equalization
  13. Application: Adaptive Beamforming
  14. Python Implementation: A Complete Adaptive Filtering Toolkit
  15. Exercises
  16. Summary
  17. References

1. ์ ์‘ ํ•„ํ„ฐ๋ง์ด ํ•„์š”ํ•œ ์ด์œ 

1.1 ๊ณ ์ • ํ•„ํ„ฐ์˜ ํ•œ๊ณ„

๊ธฐ์กด FIR ๋ฐ IIR ํ•„ํ„ฐ๋Š” ์„ค๊ณ„ ์‹œ์ ์— ์‹ ํ˜ธ ๋ฐ ์žก์Œ ํŠน์„ฑ์— ๋Œ€ํ•œ ์™„์ „ํ•œ ์ง€์‹์„ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฒฝ์šฐ๋ฅผ ๊ณ ๋ คํ•ด ๋ณด์„ธ์š”:

  • ํ†ต๊ณ„๋ฅผ ๋ชจ๋ฅผ ๋•Œ: ์žก์Œ์˜ ์ŠคํŽ™ํŠธ๋Ÿผ ํŠน์„ฑ์„ ๋ชจ๋ฅด๋ฉด ์ตœ์  ํ•„ํ„ฐ๋ฅผ ์„ค๊ณ„ํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค.
  • ํ†ต๊ณ„๊ฐ€ ์‹œ๋ณ€์ผ ๋•Œ: ๋ฌด์„  ์ฑ„๋„์€ ์†ก์ˆ˜์‹ ๊ธฐ์˜ ์ด๋™์— ๋”ฐ๋ผ ๋ณ€ํ•ฉ๋‹ˆ๋‹ค. ํ•œ ์ฑ„๋„ ์‹คํ˜„(realization)์— ๋งž๊ฒŒ ์„ค๊ณ„๋œ ํ•„ํ„ฐ๋Š” ์ž ์‹œ ํ›„ ์ค€์ตœ์ (suboptimal)์ด ๋ฉ๋‹ˆ๋‹ค.
  • ์‹ค์‹œ๊ฐ„ ๋™์ž‘์ด ํ•„์š”ํ•  ๋•Œ: ์ผ๋ถ€ ํ™˜๊ฒฝ์—์„œ๋Š” ์˜คํ”„๋ผ์ธ ์„ค๊ณ„ ๋‹จ๊ณ„ ์—†์ด ์ง€์†์ ์ธ ์ ์‘์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.

1.2 ์ ์‘ ํ•„ํ„ฐ๋ง ํ”„๋ ˆ์ž„์›Œํฌ

์ ์‘ ํ•„ํ„ฐ๋Š” ๋‘ ๋ถ€๋ถ„์œผ๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค:

  1. ํŒŒ๋ผ๋ฏธํ„ฐํ™”๋œ ํ•„ํ„ฐ ๊ตฌ์กฐ (๋ณดํ†ต FIR): ์ž…๋ ฅ $x(n)$์œผ๋กœ๋ถ€ํ„ฐ ์ถœ๋ ฅ $y(n)$์„ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค.
  2. ์ ์‘ ์•Œ๊ณ ๋ฆฌ์ฆ˜: ์–ด๋–ค ๋น„์šฉ ํ•จ์ˆ˜๋ฅผ ์ตœ์†Œํ™”ํ•˜๋„๋ก ํ•„ํ„ฐ ๊ณ„์ˆ˜ $\mathbf{w}(n)$์„ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค.
                    โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
     x(n) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ–ถโ”‚   Adaptive Filter    โ”‚โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ–ถ y(n)
                    โ”‚   w(n)               โ”‚
                    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
                               โ”‚
                               โ”‚  e(n) = d(n) - y(n)
                               โ”‚
     d(n) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ–ถ Error
     (desired signal)                      Computation
                                              โ”‚
                                              โ–ผ
                                     Adaptation Algorithm
                                     (update w(n+1))

The error signal is:

$$e(n) = d(n) - y(n) = d(n) - \mathbf{w}^T(n) \mathbf{x}(n)$$

where:

  • $d(n)$ is the desired (reference) signal
  • $\mathbf{x}(n) = [x(n), x(n-1), \ldots, x(n-M+1)]^T$ is the input vector
  • $\mathbf{w}(n) = [w_0(n), w_1(n), \ldots, w_{M-1}(n)]^T$ is the filter weight vector
  • $M$ is the filter order

1.3 Principal Configurations

Adaptive filters are used in four principal configurations:

| Configuration | Input $x(n)$ | Desired signal $d(n)$ | Purpose |
|---|---|---|---|
| System identification | Input of the unknown system | Output of the unknown system | Model an unknown system |
| Inverse modeling | Output of the unknown system | Delayed input | Channel equalization |
| Noise cancellation | Correlated noise reference | Signal + noise | Signal extraction |
| Prediction | Delayed version of the signal | Current signal | Predict future values |

2. The Wiener Filter: Optimal MMSE Solution

2.1 The Cost Function

The minimum mean square error (MMSE) criterion minimizes the expected squared error:

$$J(\mathbf{w}) = E\left[|e(n)|^2\right] = E\left[|d(n) - \mathbf{w}^T \mathbf{x}(n)|^2\right]$$

Expanding:

$$J(\mathbf{w}) = E[d^2(n)] - 2\mathbf{w}^T E[d(n)\mathbf{x}(n)] + \mathbf{w}^T E[\mathbf{x}(n)\mathbf{x}^T(n)] \mathbf{w}$$

Define:

  • the autocorrelation matrix $\mathbf{R} = E[\mathbf{x}(n)\mathbf{x}^T(n)]$ (an $M \times M$ positive definite matrix)
  • the cross-correlation vector $\mathbf{p} = E[d(n)\mathbf{x}(n)]$ (an $M \times 1$ vector)
  • $\sigma_d^2 = E[d^2(n)]$

The cost function is then a quadratic bowl:

$$J(\mathbf{w}) = \sigma_d^2 - 2\mathbf{w}^T \mathbf{p} + \mathbf{w}^T \mathbf{R} \mathbf{w}$$

2.2 The Wiener-Hopf Equation

Taking the gradient and setting it to zero:

$$\nabla_{\mathbf{w}} J = -2\mathbf{p} + 2\mathbf{R}\mathbf{w} = \mathbf{0}$$

This yields the Wiener-Hopf equation (the normal equations):

$$\boxed{\mathbf{R}\mathbf{w}_{opt} = \mathbf{p}}$$

The optimal (Wiener) filter is:

$$\mathbf{w}_{opt} = \mathbf{R}^{-1}\mathbf{p}$$

The minimum MSE at the optimal solution is:

$$J_{min} = \sigma_d^2 - \mathbf{p}^T \mathbf{R}^{-1} \mathbf{p}$$
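The closed-form solution above can be checked numerically by replacing the expectations with sample averages. A minimal sketch (the FIR system `h` and noise level below are illustrative assumptions, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 50_000, 3

# White input passed through an assumed "unknown" FIR system, plus noise
x = rng.standard_normal(N)
h = np.array([0.8, 0.5, -0.3])
d = np.convolve(x, h, mode='full')[:N] + 0.1 * rng.standard_normal(N)

# Rows of X are input vectors [x(n), x(n-1), x(n-2)]
X = np.column_stack([np.roll(x, k) for k in range(M)])[M:]
dv = d[M:]

R = X.T @ X / len(X)          # sample estimate of the autocorrelation matrix
p = X.T @ dv / len(X)         # sample estimate of the cross-correlation vector

w_opt = np.linalg.solve(R, p)            # solve R w = p (Wiener-Hopf)
J_min = np.mean(dv**2) - p @ w_opt       # sigma_d^2 - p^T w_opt

print(w_opt)   # close to h
print(J_min)   # close to the observation-noise variance 0.01
```

Since the input here is white, $\mathbf{R} \approx \sigma_x^2\mathbf{I}$ and the Wiener solution essentially recovers the system coefficients.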

2.3 The Performance Surface

Since $\mathbf{R}$ is positive definite, the cost function $J(\mathbf{w})$ is a convex quadratic forming a bowl-shaped surface (an elliptic paraboloid). Any descent algorithm converges to the unique global minimum.

Using the eigendecomposition $\mathbf{R} = \mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^T$, the cost function in the rotated coordinates $\mathbf{v} = \mathbf{Q}^T(\mathbf{w} - \mathbf{w}_{opt})$ is:

$$J(\mathbf{v}) = J_{min} + \sum_{k=0}^{M-1} \lambda_k v_k^2$$

where the $\lambda_k$ are the eigenvalues of $\mathbf{R}$. The contours of $J$ are ellipses aligned with the eigenvector directions and scaled by the eigenvalues.

2.4 Limitations of the Wiener Solution

The Wiener filter requires:

  1. Knowledge of $\mathbf{R}$ and $\mathbf{p}$ (second-order statistics)
  2. Stationarity of the signals
  3. Computation of $\mathbf{R}^{-1}$ (an $O(M^3)$ operation)

In practice these conditions are rarely satisfied exactly, which motivates iterative, adaptive approaches.


3. Method of Steepest Descent

3.1 Gradient Descent on the MSE Surface

Instead of solving the Wiener-Hopf equation directly, we can reach $\mathbf{w}_{opt}$ iteratively via gradient descent:

$$\mathbf{w}(n+1) = \mathbf{w}(n) - \mu \nabla_{\mathbf{w}} J(n)$$

The true gradient of the MSE cost function is:

$$\nabla_{\mathbf{w}} J = -2\mathbf{p} + 2\mathbf{R}\mathbf{w}(n)$$

so the update rule becomes:

$$\boxed{\mathbf{w}(n+1) = \mathbf{w}(n) + 2\mu\left(\mathbf{p} - \mathbf{R}\mathbf{w}(n)\right)}$$

This is the method of steepest descent. It still requires knowledge of $\mathbf{R}$ and $\mathbf{p}$, so it is not yet truly adaptive.

3.2 Convergence Analysis

Define the weight error vector $\boldsymbol{\epsilon}(n) = \mathbf{w}(n) - \mathbf{w}_{opt}$.

Substituting into the update:

$$\boldsymbol{\epsilon}(n+1) = (\mathbf{I} - 2\mu\mathbf{R})\boldsymbol{\epsilon}(n)$$

Using the eigendecomposition $\mathbf{R} = \mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^T$, in the rotated coordinates $\mathbf{v}(n) = \mathbf{Q}^T \boldsymbol{\epsilon}(n)$:

$$v_k(n+1) = (1 - 2\mu\lambda_k) v_k(n)$$

Convergence requires $|1 - 2\mu\lambda_k| < 1$ for all $k$, which gives:

$$\boxed{0 < \mu < \frac{1}{\lambda_{max}}}$$

where $\lambda_{max}$ is the largest eigenvalue of $\mathbf{R}$.

3.3 Convergence Speed and Eigenvalue Spread

The convergence speed of each mode $v_k$ is determined by $|1 - 2\mu\lambda_k|$. The optimal step size for mode $k$ alone would be $\mu_k = 1/(2\lambda_k)$, but a single $\mu$ is shared by all modes, so:

  • the fastest-converging mode corresponds to $\lambda_{max}$
  • the slowest-converging mode corresponds to $\lambda_{min}$

The eigenvalue spread (condition number) is:

$$\chi(\mathbf{R}) = \frac{\lambda_{max}}{\lambda_{min}}$$

It governs the overall convergence speed: a large eigenvalue spread means slow convergence, because the algorithm "zig-zags" across the narrow valley of the performance surface.

3.4 The Learning Curve

The MSE as a function of iteration number is the learning curve:

$$J(n) = J_{min} + \sum_{k=0}^{M-1} \lambda_k v_k^2(0)(1 - 2\mu\lambda_k)^{2n}$$

Each mode decays geometrically with time constant:

$$\tau_k = \frac{-1}{2\ln|1 - 2\mu\lambda_k|} \approx \frac{1}{4\mu\lambda_k} \quad \text{(for small } \mu\text{)}$$

The time constant of the slowest mode is $\tau_{max} \approx 1/(4\mu\lambda_{min})$.


4. The LMS Algorithm

4.1 Derivation

The steepest-descent algorithm requires the true gradient $\nabla J = -2\mathbf{p} + 2\mathbf{R}\mathbf{w}(n)$. The key insight of Widrow and Hoff (1960) was to replace the true gradient with an instantaneous estimate:

$$\hat{\nabla} J(n) = -2e(n)\mathbf{x}(n)$$

obtained by replacing the expectations with instantaneous samples:

  • $\mathbf{R}\mathbf{w}(n) \approx \mathbf{x}(n)\mathbf{x}^T(n)\mathbf{w}(n) = \mathbf{x}(n)y(n)$
  • $\mathbf{p} \approx d(n)\mathbf{x}(n)$

Absorbing the factor of 2 into the step size, the LMS algorithm is:

$$\boxed{\mathbf{w}(n+1) = \mathbf{w}(n) + \mu \, e(n) \, \mathbf{x}(n)}$$

where $e(n) = d(n) - \mathbf{w}^T(n)\mathbf{x}(n)$.

4.2 Algorithm Summary

LMS Algorithm
โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€
Initialize: w(0) = 0 (or small random values)
Parameters: step size ฮผ, filter order M

For each new sample n = 0, 1, 2, ...
  1. Form input vector: x(n) = [x(n), x(n-1), ..., x(n-M+1)]^T
  2. Compute output:    y(n) = w^T(n) x(n)
  3. Compute error:     e(n) = d(n) - y(n)
  4. Update weights:    w(n+1) = w(n) + ฮผ e(n) x(n)

๊ณ„์‚ฐ ๋ณต์žก๋„: ์ƒ˜ํ”Œ๋‹น $O(M)$ ๊ณฑ์…ˆ๊ณผ ๋ง์…ˆ - ๋งค์šฐ ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค.

4.3 LMS์˜ ํŠน์„ฑ

  1. ๋‹จ์ˆœ์„ฑ: ํ–‰๋ ฌ ์—ญ์‚ฐ ์—†์Œ, ์ž๊ธฐ์ƒ๊ด€ ์ถ”์ • ์—†์Œ
  2. ๋‚ฎ์€ ๋ณต์žก๋„: ๋ฐ˜๋ณต๋‹น $2M$ ๊ณฑ์…ˆ
  3. ํ™•๋ฅ ์  ๊ธฐ์šธ๊ธฐ: ๊ธฐ์šธ๊ธฐ ์ถ”์ •๊ฐ’์— ์žก์Œ์ด ์žˆ์ง€๋งŒ ๋ถˆํŽธํ–ฅ(unbiased): $E[\hat{\nabla}J] = \nabla J$
  4. ์ž๊ธฐ ์กฐ์ •: ์‹ ํ˜ธ ํ†ต๊ณ„์˜ ๋А๋ฆฐ ๋ณ€ํ™”๋ฅผ ์ž๋™์œผ๋กœ ์ถ”์ 

5. LMS Convergence Analysis

5.1 Convergence in the Mean

Taking the expectation of the LMS update (under the independence assumption - $\mathbf{x}(n)$ independent of $\mathbf{w}(n)$):

$$E[\mathbf{w}(n+1)] = E[\mathbf{w}(n)] + \mu E[e(n)\mathbf{x}(n)]$$

After some algebra:

$$E[\boldsymbol{\epsilon}(n+1)] = (\mathbf{I} - \mu\mathbf{R}) E[\boldsymbol{\epsilon}(n)]$$

This is the steepest-descent recursion with $\mu$ in place of $2\mu$, so the condition for convergence in the mean is:

$$0 < \mu < \frac{2}{\lambda_{max}}$$

In practice a more conservative sufficient condition is used:

$$0 < \mu < \frac{2}{\text{tr}(\mathbf{R})} = \frac{2}{M \cdot \sigma_x^2}$$

since $\text{tr}(\mathbf{R}) = \sum_k \lambda_k \geq \lambda_{max}$ and, for a stationary input, $\text{tr}(\mathbf{R}) = M\sigma_x^2$.

5.2 Convergence in the Mean Square

The condition for the MSE itself to converge (mean-square stability) is stricter:

$$0 < \mu < \frac{2}{\lambda_{max} + \text{tr}(\mathbf{R})}$$

A practical safe choice is:

$$\mu < \frac{1}{3 \, \text{tr}(\mathbf{R})} = \frac{1}{3M\sigma_x^2}$$

5.3 Excess MSE and Misadjustment

Even after convergence, the LMS algorithm does not reach $J_{min}$, because the stochastic gradient injects noise into the weight updates. The excess MSE is:

$$J_{excess} = J_{steady-state} - J_{min}$$

The misadjustment is:

$$\mathcal{M} = \frac{J_{excess}}{J_{min}} \approx \frac{\mu \, \text{tr}(\mathbf{R})}{2} = \frac{\mu M \sigma_x^2}{2}$$

This exposes the fundamental trade-off:

  • large $\mu$: fast convergence but large misadjustment (noisy steady state)
  • small $\mu$: slow convergence but small misadjustment (accurate steady state)

5.4 Step-Size Selection Guidelines

| Criterion | Step size |
|---|---|
| Stability (mean) | $\mu < 2/\lambda_{max}$ |
| Stability (mean square) | $\mu < 2/(\lambda_{max} + \text{tr}(\mathbf{R}))$ |
| Practical rule | $\mu \in [0.01, 0.1] / (M \sigma_x^2)$ |
| Misadjustment $\leq$ 10% | $\mu \leq 0.2 / (M \sigma_x^2)$ |

5.5 Convergence Time

The approximate time constant of the slowest mode is:

$$\tau_{mse} \approx \frac{1}{2\mu\lambda_{min}}$$

Combining this with the misadjustment constraint $\mathcal{M} = \mu M \sigma_x^2 / 2$:

$$\tau_{mse} \approx \frac{M \sigma_x^2}{4\mathcal{M}\lambda_{min}} = \frac{\chi(\mathbf{R})}{4\mathcal{M}} \cdot \frac{M\sigma_x^2}{\lambda_{max}}$$

A large eigenvalue spread $\chi(\mathbf{R})$ means many iterations are needed to converge at a given misadjustment.
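The practical rule above can be wrapped in a small helper that estimates the input power from data (the signal and the chosen fraction below are illustrative):

```python
import numpy as np

def lms_step_size(x, M, fraction=0.05):
    """Step size as a fraction of 1/(M * sigma_x^2),
    following the practical rule mu in [0.01, 0.1] / (M sigma_x^2)."""
    power = np.mean(np.asarray(x, dtype=float) ** 2)   # estimate of sigma_x^2
    return fraction / (M * power)

rng = np.random.default_rng(0)
x = 2.0 * rng.standard_normal(10_000)   # input with sigma_x^2 around 4
mu = lms_step_size(x, M=16)
print(mu)   # roughly 0.05 / (16 * 4) = 7.8e-4
```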


6. Normalized LMS (NLMS)

6.1 Motivation

Standard LMS uses a fixed step size $\mu$, so the effective adaptation speed depends on the input power $\|\mathbf{x}(n)\|^2$. When the input power varies, LMS can become unstable or converge too slowly.

6.2 Derivation

The NLMS algorithm is obtained by normalizing the step size by the input power:

$$\boxed{\mathbf{w}(n+1) = \mathbf{w}(n) + \frac{\tilde{\mu}}{\|\mathbf{x}(n)\|^2 + \delta} \, e(n) \, \mathbf{x}(n)}$$

where:

  • $\tilde{\mu} \in (0, 2)$ is the normalized step size
  • $\delta > 0$ is a small regularization constant that prevents division by zero

6.3 Derivation from Constrained Optimization

NLMS can also be derived by solving the constrained optimization problem:

$$\min_{\mathbf{w}(n+1)} \|\mathbf{w}(n+1) - \mathbf{w}(n)\|^2 \quad \text{subject to} \quad \mathbf{w}^T(n+1)\mathbf{x}(n) = d(n)$$

That is, find the weight vector closest to the current one that fits the newest data point exactly. Solving with Lagrange multipliers yields the NLMS update with $\tilde{\mu} = 1$.
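A quick numerical check of this interpretation: with $\tilde{\mu} = 1$ and $\delta = 0$, a single NLMS step drives the a posteriori error on the current sample exactly to zero (the numbers below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4
w = rng.standard_normal(M)        # current weights (arbitrary)
x_vec = rng.standard_normal(M)    # current input vector
d = 1.7                           # arbitrary desired sample

e = d - w @ x_vec                              # a priori error
w_new = w + e * x_vec / (x_vec @ x_vec)        # NLMS step, mu_tilde = 1, delta = 0

e_post = d - w_new @ x_vec                     # a posteriori error
print(e_post)   # 0 up to rounding: the constraint w^T x = d holds exactly
```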

6.4 NLMS์˜ ์žฅ์ 

  1. ๊ฐ•๊ฑดํ•œ ์ˆ˜๋ ด: ์Šคํ… ํฌ๊ธฐ๊ฐ€ ์ž…๋ ฅ ์ „๋ ฅ์— ์ž๋™ ์ ์‘
  2. ๋‹จ์ˆœํ•œ ํŠœ๋‹: ์„ค์ •ํ•  ํŒŒ๋ผ๋ฏธํ„ฐ๊ฐ€ $\tilde{\mu} \in (0, 2)$ ํ•˜๋‚˜๋ฟ
  3. ๋น„์ •์ƒ ์ž…๋ ฅ์— ์ ํ•ฉ: ๋ณ€๋™ํ•˜๋Š” ์‹ ํ˜ธ ๋ ˆ๋ฒจ์—์„œ๋„ ์ž˜ ์ž‘๋™
  4. ์ตœ์†Œํ•œ์˜ ์ถ”๊ฐ€ ๋น„์šฉ: ๋ฐ˜๋ณต๋‹น ํ•œ ๋ฒˆ์˜ ๋‚ด์ ๋งŒ ์ถ”๊ฐ€

6.5 NLMS ์ˆ˜๋ ด

NLMS์˜ ์ˆ˜๋ ด ์กฐ๊ฑด์€ ๋‹จ์ˆœํ•ฉ๋‹ˆ๋‹ค:

$$0 < \tilde{\mu} < 2$$

์˜ค์กฐ์ •์€ ๊ทผ์‚ฌ์ ์œผ๋กœ:

$$\mathcal{M}_{NLMS} \approx \frac{\tilde{\mu}}{2 - \tilde{\mu}} \cdot \frac{1}{M}$$

์ผ๋ฐ˜์ ์ธ ์„ ํƒ์€ $\tilde{\mu} \in [0.1, 1.0]$์ž…๋‹ˆ๋‹ค.


7. The RLS Algorithm

7.1 Motivation

While LMS estimates the gradient stochastically (one sample at a time), the Recursive Least Squares (RLS) algorithm minimizes a deterministic cost function over all past data:

$$J_{RLS}(n) = \sum_{i=0}^{n} \lambda^{n-i} |e(i)|^2$$

where $\lambda \in (0, 1]$ is the forgetting factor (typically $0.95 \leq \lambda \leq 1.0$). Recent samples are weighted more heavily than old ones, which provides tracking capability in nonstationary environments.

7.2 Normal Equations for Weighted LS

The cost function is minimized by:

$$\mathbf{w}(n) = \boldsymbol{\Phi}^{-1}(n) \boldsymbol{\theta}(n)$$

where:

  • $\boldsymbol{\Phi}(n) = \sum_{i=0}^{n} \lambda^{n-i} \mathbf{x}(i)\mathbf{x}^T(i)$ is the weighted sample correlation matrix
  • $\boldsymbol{\theta}(n) = \sum_{i=0}^{n} \lambda^{n-i} d(i)\mathbf{x}(i)$ is the weighted cross-correlation vector

Both admit recursive updates:

$$\boldsymbol{\Phi}(n) = \lambda \boldsymbol{\Phi}(n-1) + \mathbf{x}(n)\mathbf{x}^T(n)$$

$$\boldsymbol{\theta}(n) = \lambda \boldsymbol{\theta}(n-1) + d(n)\mathbf{x}(n)$$

7.3 The Matrix Inversion Lemma

To avoid recomputing $\boldsymbol{\Phi}^{-1}(n)$ at every step (an $O(M^3)$ operation), we use the matrix inversion lemma (Woodbury identity):

$$(\mathbf{A} + \mathbf{u}\mathbf{v}^T)^{-1} = \mathbf{A}^{-1} - \frac{\mathbf{A}^{-1}\mathbf{u}\mathbf{v}^T\mathbf{A}^{-1}}{1 + \mathbf{v}^T\mathbf{A}^{-1}\mathbf{u}}$$

Defining $\mathbf{P}(n) = \boldsymbol{\Phi}^{-1}(n)$:

$$\mathbf{P}(n) = \lambda^{-1}\mathbf{P}(n-1) - \lambda^{-1}\mathbf{k}(n)\mathbf{x}^T(n)\mathbf{P}(n-1)$$

where the gain vector is:

$$\mathbf{k}(n) = \frac{\lambda^{-1}\mathbf{P}(n-1)\mathbf{x}(n)}{1 + \lambda^{-1}\mathbf{x}^T(n)\mathbf{P}(n-1)\mathbf{x}(n)}$$
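The lemma is easy to verify numerically for the rank-1 update $\boldsymbol{\Phi} \to \boldsymbol{\Phi} + \mathbf{x}\mathbf{x}^T$ that RLS performs (the matrix below is a random positive definite example):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 5

# A random symmetric positive definite matrix (stands in for lambda * Phi(n-1))
B = rng.standard_normal((M, M))
A = B @ B.T + np.eye(M)
x_vec = rng.standard_normal(M)

Ainv = np.linalg.inv(A)
lhs = np.linalg.inv(A + np.outer(x_vec, x_vec))
rhs = Ainv - (Ainv @ np.outer(x_vec, x_vec) @ Ainv) / (1.0 + x_vec @ Ainv @ x_vec)

print(np.max(np.abs(lhs - rhs)))   # agrees to machine precision
```

Because $\mathbf{A}$ is positive definite, the denominator $1 + \mathbf{x}^T\mathbf{A}^{-1}\mathbf{x}$ exceeds 1, which is why the gain-vector computation in RLS is always well defined.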

7.4 RLS Algorithm Summary

RLS Algorithm
โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€
Initialize: w(0) = 0, P(0) = ฮด^{-1} I (ฮด small, e.g., 0.01)
Parameters: forgetting factor ฮป (e.g., 0.99), regularization ฮด

For each new sample n = 1, 2, ...
  1. Compute gain vector:
     k(n) = P(n-1) x(n) / [ฮป + x^T(n) P(n-1) x(n)]

  2. Compute a priori error:
     e(n) = d(n) - w^T(n-1) x(n)

  3. Update weights:
     w(n) = w(n-1) + k(n) e(n)

  4. Update inverse correlation matrix:
     P(n) = ฮป^{-1} [P(n-1) - k(n) x^T(n) P(n-1)]

๊ณ„์‚ฐ ๋ณต์žก๋„: ์ƒ˜ํ”Œ๋‹น $O(M^2)$ ($\mathbf{P}$ ํ–‰๋ ฌ ๊ฐฑ์‹  ๋•Œ๋ฌธ).

7.5 ๋ง๊ฐ ์ธ์ž

๋ง๊ฐ ์ธ์ž $\lambda$๋Š” ์•Œ๊ณ ๋ฆฌ์ฆ˜์˜ ์œ ํšจ ๋ฉ”๋ชจ๋ฆฌ(effective memory)๋ฅผ ๊ฒฐ์ •ํ•ฉ๋‹ˆ๋‹ค:

$$N_{eff} = \frac{1}{1 - \lambda}$$

$\lambda$ $N_{eff}$ ๋™์ž‘
1.0 $\infty$ ์„ฑ์žฅํ•˜๋Š” ์œˆ๋„์šฐ (์ •์ƒ ํ™˜๊ฒฝ)
0.99 100 ๋А๋ฆฌ๊ฒŒ ๋ณ€ํ•˜๋Š” ํ†ต๊ณ„์— ์ ํ•ฉ
0.95 20 ๋น ๋ฅด๊ฒŒ ๋ณ€ํ•˜๋Š” ํ†ต๊ณ„์— ์ ํ•ฉ
0.9 10 ๋งค์šฐ ๋น ๋ฅธ ์ถ”์ , ํ•˜์ง€๋งŒ ์žก์Œ์ด ๋งŽ์Œ

7.6 RLS์˜ ํŠน์„ฑ

  1. ๋น ๋ฅธ ์ˆ˜๋ ด: ์•ฝ $2M$ ๋ฐ˜๋ณต์—์„œ ์ˆ˜๋ ด (๊ณ ์œ ๊ฐ’ ๋ถ„์‚ฐ์— ๋…๋ฆฝ์ )
  2. ๊ณ ์œ ๊ฐ’ ๋ถ„์‚ฐ ๋ฌธ์ œ ์—†์Œ: $\mathbf{P}$ ํ–‰๋ ฌ์ด ์ž…๋ ฅ์„ ๋ฐฑ์ƒ‰ํ™”(whitens)
  3. ๋†’์€ ๋ณต์žก๋„: LMS์˜ $O(M)$ ๋Œ€๋น„ $O(M^2)$
  4. ์ˆ˜์น˜์  ๋ฏผ๊ฐ์„ฑ: $\mathbf{P}$ ํ–‰๋ ฌ์ด ์–‘์ •์น˜์„ฑ์„ ์žƒ์„ ์ˆ˜ ์žˆ์Œ; ์•ˆ์ •ํ™”๋œ ๋ฒ„์ „ ์กด์žฌ (QR-RLS, ๊ฒฉ์ž RLS)

8. ๋น„๊ต: LMS vs RLS

ํŠน์„ฑ LMS NLMS RLS
์ƒ˜ํ”Œ๋‹น ๋ณต์žก๋„ $O(M)$ $O(M)$ $O(M^2)$
๋ฉ”๋ชจ๋ฆฌ $O(M)$ $O(M)$ $O(M^2)$
์ˆ˜๋ ด ์†๋„ ๋А๋ฆผ ($\chi$์— ์˜์กด) ๋ณดํ†ต ๋น ๋ฆ„ ($\sim 2M$ ๋ฐ˜๋ณต)
์˜ค์กฐ์ • ๋†’์Œ ๋ณดํ†ต ๋‚ฎ์Œ
์ถ”์  ๋Šฅ๋ ฅ ๋ณดํ†ต ๋ณดํ†ต ์ข‹์Œ
์ˆ˜์น˜์  ์•ˆ์ •์„ฑ ์šฐ์ˆ˜ ์šฐ์ˆ˜ ๋ถˆ์•ˆ์ •ํ•  ์ˆ˜ ์žˆ์Œ
๊ณ ์œ ๊ฐ’ ๋ถ„์‚ฐ ๋ฏผ๊ฐ๋„ ๋†’์Œ ๋ณดํ†ต ์—†์Œ
์Šคํ… ํฌ๊ธฐ ํŒŒ๋ผ๋ฏธํ„ฐ $\mu$ (์„ค์ • ๊นŒ๋‹ค๋กœ์›€) $\tilde{\mu} \in (0,2)$ $\lambda$ (์„ค์ • ๋” ์‰ฌ์›€)

๊ฒฝํ—˜ ๊ทœ์น™: ๊ณ„์‚ฐ ๋น„์šฉ์ด ์ตœ์šฐ์„ ์ด๊ฑฐ๋‚˜ ํ•„ํ„ฐ๊ฐ€ ๊ธธ ๋•Œ๋Š” LMS/NLMS๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋น ๋ฅธ ์ˆ˜๋ ด์ด ํ•„์ˆ˜์ ์ด๊ณ  ํ•„ํ„ฐ ์ฐจ์ˆ˜๊ฐ€ ์ ์ ˆํ•  ๋•Œ๋Š” RLS๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.


9. Application: System Identification

9.1 Problem Setup

                    โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
     x(n) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ–ถโ”‚  Unknown System     โ”‚โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ–ถ d(n) = h*x(n) + v(n)
         โ”‚          โ”‚  h = [h0, h1, ...]  โ”‚
         โ”‚          โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
         โ”‚
         โ”‚          โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
         โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ–ถโ”‚  Adaptive Filter   โ”‚โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ–ถ y(n) = w^T x(n)
                    โ”‚  w(n)              โ”‚
                    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
                                                    e(n) = d(n) - y(n) โ†’ 0

The adaptive filter learns the impulse response of the unknown system. Once the algorithm has converged, $\mathbf{w}_{opt} \approx \mathbf{h}$.

9.2 When to Use It

  • Plant modeling: control systems need a model of the plant
  • Acoustic path identification: determining a room impulse response
  • Adaptive inverse control: identify the forward model, then invert it

10. Application: Noise Cancellation

10.1 The Adaptive Noise Canceller (ANC)

     Signal s(n) + Noise n0(n) = d(n)    (primary input)

     Noise reference n1(n)               (reference input, correlated with n0)
              โ”‚
              โ–ผ
     โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
     โ”‚  Adaptive Filter   โ”‚ โ”€โ”€โ–ถ ลท(n) โ‰ˆ n0(n)
     โ”‚  w(n)              โ”‚
     โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
                                         e(n) = d(n) - ลท(n) โ‰ˆ s(n)

Key insight: the reference input $n_1(n)$ is correlated with the noise $n_0(n)$ but uncorrelated with the signal $s(n)$. The adaptive filter transforms $n_1(n)$ into an estimate of $n_0(n)$, and the error signal then becomes an estimate of the clean signal $s(n)$.

10.2 Mathematical Justification

The MSE is:

$$E[e^2(n)] = E[(s(n) + n_0(n) - \hat{y}(n))^2]$$

Since $s(n)$ is uncorrelated with both $n_0(n)$ and $n_1(n)$:

$$E[e^2(n)] = E[s^2(n)] + E[(n_0(n) - \hat{y}(n))^2]$$

Minimizing $E[e^2(n)]$ over $\mathbf{w}$ therefore minimizes $E[(n_0(n) - \hat{y}(n))^2]$, so $\hat{y}(n) \to n_0(n)$ and $e(n) \to s(n)$.

The signal is extracted as a by-product of estimating the noise.


11. Application: Echo Cancellation

11.1 Acoustic Echo Cancellation (AEC)

In a speakerphone system, far-end speech is played through the loudspeaker, reflects around the room, and is picked up by the microphone. The adaptive filter models the acoustic path from loudspeaker to microphone.

Far-end โ”€โ”€โ–ถ Loudspeaker โ”€โ”€โ–ถ Room โ”€โ”€โ–ถ Microphone โ”€โ”€โ–ถ Near-end + Echo
  x(n)                   h(n)                        d(n) = s(n) + h*x(n)
    โ”‚
    โ”‚         โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ–ถโ”‚  Adaptive Filter โ”‚โ”€โ”€โ–ถ ลท(n) โ‰ˆ h*x(n)
              โ”‚  w(n) โ‰ˆ h        โ”‚
              โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
                                       e(n) = d(n) - ลท(n) โ‰ˆ s(n)

Challenges:

  • Acoustic impulse responses can be very long (100-500 ms at 8 kHz = 800-4000 taps)
  • Double-talk: both speakers active simultaneously
  • Nonstationarity: people move, doors open
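A minimal NLMS echo-canceller sketch in the spirit of the figure above (the decaying echo path, signal levels, and filter length are illustrative assumptions; no double-talk control is included):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 20_000, 32

x = rng.standard_normal(N)                          # far-end signal
h = 0.5 ** np.arange(M) * rng.standard_normal(M)    # assumed decaying echo path
echo = np.convolve(x, h, mode='full')[:N]
s = 0.05 * rng.standard_normal(N)                   # weak near-end signal
d = s + echo                                        # microphone signal

w = np.zeros(M)
e = np.zeros(N)
mu_t, delta = 0.5, 1e-6
for n in range(M, N):
    x_vec = x[n - M + 1:n + 1][::-1]
    e[n] = d[n] - w @ x_vec
    w = w + (mu_t / (x_vec @ x_vec + delta)) * e[n] * x_vec   # NLMS update

# Echo Return Loss Enhancement over the final quarter of the data
tail = slice(-N // 4, None)
erle = 10 * np.log10(np.mean(echo[tail] ** 2)
                     / np.mean((e[tail] - s[tail]) ** 2))
print(f"ERLE: {erle:.1f} dB")
```

After convergence the residual echo $e(n) - s(n)$ is far below the raw echo; the near-end signal acts as adaptation noise, which is one reason real AECs freeze adaptation during double-talk.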

11.2 ๋„คํŠธ์›Œํฌ ์—์ฝ” ์ œ๊ฑฐ

์ „ํ™” ๋„คํŠธ์›Œํฌ์—์„œ ํ•˜์ด๋ธŒ๋ฆฌ๋“œ(2์„ -4์„  ๋ณ€ํ™˜)์˜ ์ž„ํ”ผ๋˜์Šค ๋ถˆ์ผ์น˜๊ฐ€ ์ „๊ธฐ์  ์—์ฝ”๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์—์ฝ” ๊ฒฝ๋กœ๋Š” ์งง์ง€๋งŒ ์š”๊ตฌ ์‚ฌํ•ญ์ด ์—„๊ฒฉํ•ฉ๋‹ˆ๋‹ค (>40 dB ์—์ฝ” ๋ฐ˜ํ™˜ ์†์‹ค ํ–ฅ์ƒ).


12. Application: Channel Equalization

12.1 The Problem

A transmitted signal $a(n)$ passes through a dispersive channel $c(n)$, creating inter-symbol interference (ISI):

$$x(n) = \sum_k c(k) a(n-k) + v(n)$$

The equalizer is an adaptive filter that undoes the channel distortion:

$$\hat{a}(n - \Delta) = \mathbf{w}^T(n) \mathbf{x}(n)$$

where $\Delta$ is a decision delay chosen so that $w(n) * c(n) \approx \delta(n - \Delta)$.

12.2 Training and Decision-Directed Modes

  • Training mode: a known sequence is transmitted; $d(n) = a(n-\Delta)$
  • Decision-directed mode: after initial convergence, the slicer output $\hat{a}(n-\Delta)$ is used as $d(n)$
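A training-mode LMS equalizer can be sketched as follows (the BPSK source, channel taps, decision delay, and step size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, Delta = 20_000, 11, 7        # equalizer length and decision delay (assumed)

a = rng.choice([-1.0, 1.0], size=N)          # BPSK symbols
c = np.array([0.3, 1.0, 0.3])                # assumed dispersive channel
x = np.convolve(a, c, mode='full')[:N] + 0.01 * rng.standard_normal(N)

w = np.zeros(M)
mu = 0.01
for n in range(M, N):
    x_vec = x[n - M + 1:n + 1][::-1]
    e = a[n - Delta] - w @ x_vec             # training mode: d(n) = a(n - Delta)
    w = w + mu * e * x_vec                   # LMS update

# Symbol decisions (slicer) on the last 2000 samples
correct = 0
for n in range(N - 2000, N):
    x_vec = x[n - M + 1:n + 1][::-1]
    correct += np.sign(w @ x_vec) == a[n - Delta]
accuracy = correct / 2000
print(accuracy)   # close to 1.0 once the equalizer has converged
```

Switching to decision-directed mode would simply replace `a[n - Delta]` with `np.sign(w @ x_vec)` in the error computation.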

13. Application: Adaptive Beamforming

13.1 The Problem

An array of $M$ sensors receives signals from several directions. The goal is to steer a beam toward the desired signal while nulling the interferers.

The signal received at the array is:

$$\mathbf{x}(n) = s(n)\mathbf{a}(\theta_s) + \sum_{k=1}^{K} i_k(n)\mathbf{a}(\theta_k) + \mathbf{v}(n)$$

where $\mathbf{a}(\theta)$ is the steering vector for direction $\theta$.

13.2 Minimum Variance Distortionless Response (MVDR)

The Capon beamformer solves:

$$\min_{\mathbf{w}} \mathbf{w}^H \mathbf{R} \mathbf{w} \quad \text{subject to} \quad \mathbf{w}^H \mathbf{a}(\theta_s) = 1$$

The solution is:

$$\mathbf{w}_{MVDR} = \frac{\mathbf{R}^{-1}\mathbf{a}(\theta_s)}{\mathbf{a}^H(\theta_s)\mathbf{R}^{-1}\mathbf{a}(\theta_s)}$$

The adaptive variant estimates $\mathbf{R}$ recursively, with an update similar to RLS.


14. Python Implementation: A Complete Adaptive Filtering Toolkit

14.1 LMS, NLMS, and RLS Implementations

import numpy as np
import matplotlib.pyplot as plt


def lms_filter(x, d, M, mu):
    """
    LMS adaptive filter.

    Parameters
    ----------
    x : ndarray
        Input signal
    d : ndarray
        Desired (reference) signal
    M : int
        Filter order (number of taps)
    mu : float
        Step size

    Returns
    -------
    y : ndarray
        Filter output
    e : ndarray
        Error signal
    w_history : ndarray
        Weight history (N x M)
    """
    N = len(x)
    w = np.zeros(M)
    y = np.zeros(N)
    e = np.zeros(N)
    w_history = np.zeros((N, M))

    for n in range(M, N):
        # Input vector [x(n), x(n-1), ..., x(n-M+1)]
        x_vec = x[n-M+1:n+1][::-1]

        y[n] = np.dot(w, x_vec)
        e[n] = d[n] - y[n]
        w = w + mu * e[n] * x_vec
        w_history[n] = w

    return y, e, w_history


def nlms_filter(x, d, M, mu_tilde, delta=1e-6):
    """
    Normalized LMS adaptive filter.

    Parameters
    ----------
    x : ndarray
        Input signal
    d : ndarray
        Desired (reference) signal
    M : int
        Filter order
    mu_tilde : float
        Normalized step size (0 < mu_tilde < 2)
    delta : float
        Regularization constant

    Returns
    -------
    y, e, w_history : ndarrays
    """
    N = len(x)
    w = np.zeros(M)
    y = np.zeros(N)
    e = np.zeros(N)
    w_history = np.zeros((N, M))

    for n in range(M, N):
        x_vec = x[n-M+1:n+1][::-1]

        y[n] = np.dot(w, x_vec)
        e[n] = d[n] - y[n]

        norm_sq = np.dot(x_vec, x_vec) + delta
        w = w + (mu_tilde / norm_sq) * e[n] * x_vec
        w_history[n] = w

    return y, e, w_history


def rls_filter(x, d, M, lam=0.99, delta=0.01):
    """
    Recursive Least Squares adaptive filter.

    Parameters
    ----------
    x : ndarray
        Input signal
    d : ndarray
        Desired (reference) signal
    M : int
        Filter order
    lam : float
        Forgetting factor (0 < lambda <= 1)
    delta : float
        Regularization for P initialization

    Returns
    -------
    y, e, w_history : ndarrays
    """
    N = len(x)
    w = np.zeros(M)
    P = (1.0 / delta) * np.eye(M)
    y = np.zeros(N)
    e = np.zeros(N)
    w_history = np.zeros((N, M))

    for n in range(M, N):
        x_vec = x[n-M+1:n+1][::-1]

        # Gain vector
        Px = P @ x_vec
        denom = lam + x_vec @ Px
        k = Px / denom

        # A priori error
        y[n] = np.dot(w, x_vec)
        e[n] = d[n] - y[n]

        # Weight update
        w = w + k * e[n]

        # Inverse correlation matrix update
        P = (1.0 / lam) * (P - np.outer(k, x_vec @ P))

        w_history[n] = w

    return y, e, w_history

14.2 System Identification Example

# System Identification Demo
np.random.seed(42)

# Unknown system (FIR)
h_true = np.array([0.5, 1.2, -0.8, 0.3, -0.1])
M = len(h_true)

# Generate input signal (white noise)
N = 2000
x = np.random.randn(N)

# System output + measurement noise
d = np.convolve(x, h_true, mode='full')[:N] + 0.01 * np.random.randn(N)

# Run adaptive filters
mu_lms = 0.01
_, e_lms, w_lms = lms_filter(x, d, M, mu_lms)
_, e_nlms, w_nlms = nlms_filter(x, d, M, mu_tilde=0.5)
_, e_rls, w_rls = rls_filter(x, d, M, lam=0.99)

# Plot learning curves
fig, axes = plt.subplots(2, 1, figsize=(12, 8))

# MSE learning curves (smoothed)
window = 50
mse_lms = np.convolve(e_lms**2, np.ones(window)/window, mode='valid')
mse_nlms = np.convolve(e_nlms**2, np.ones(window)/window, mode='valid')
mse_rls = np.convolve(e_rls**2, np.ones(window)/window, mode='valid')

axes[0].semilogy(mse_lms, label='LMS', alpha=0.8)
axes[0].semilogy(mse_nlms, label='NLMS', alpha=0.8)
axes[0].semilogy(mse_rls, label='RLS', alpha=0.8)
axes[0].set_xlabel('Iteration')
axes[0].set_ylabel('MSE')
axes[0].set_title('Learning Curves: System Identification')
axes[0].legend()
axes[0].grid(True, alpha=0.3)

# Final weight comparison
x_pos = np.arange(M)
width = 0.2
axes[1].bar(x_pos - 1.5*width, h_true, width, label='True', color='black')
axes[1].bar(x_pos - 0.5*width, w_lms[-1], width, label='LMS')
axes[1].bar(x_pos + 0.5*width, w_nlms[-1], width, label='NLMS')
axes[1].bar(x_pos + 1.5*width, w_rls[-1], width, label='RLS')
axes[1].set_xlabel('Tap index')
axes[1].set_ylabel('Weight value')
axes[1].set_title('Identified Impulse Response')
axes[1].legend()
axes[1].grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig('system_identification.png', dpi=150, bbox_inches='tight')
plt.show()

# Print final weights
print("True system:  ", h_true)
print("LMS weights:  ", np.round(w_lms[-1], 4))
print("NLMS weights: ", np.round(w_nlms[-1], 4))
print("RLS weights:  ", np.round(w_rls[-1], 4))

14.3 ์žก์Œ ์ œ๊ฑฐ ๋ฐ๋ชจ

# Adaptive Noise Cancellation Demo
np.random.seed(42)

N = 5000
t = np.arange(N) / 1000.0  # 1 kHz sampling rate

# Clean signal: sum of sinusoids
s = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Noise source
noise_source = np.random.randn(N)

# Noise that corrupts the signal (filtered version of noise source)
noise_path = np.array([1.0, -0.5, 0.3, -0.1])
n0 = np.convolve(noise_source, noise_path, mode='full')[:N]

# Primary input: signal + noise
d = s + n0

# Reference input: correlated with noise but not with signal
# (different path from the noise source)
ref_path = np.array([0.8, -0.4, 0.2])
n1 = np.convolve(noise_source, ref_path, mode='full')[:N]

# Apply adaptive noise canceller
M = 8  # Filter order (longer than the noise path to be safe)
mu = 0.01

y_lms, e_lms, _ = lms_filter(n1, d, M, mu)
y_nlms, e_nlms, _ = nlms_filter(n1, d, M, mu_tilde=0.5)
y_rls, e_rls, _ = rls_filter(n1, d, M, lam=0.995)

# Plot results
fig, axes = plt.subplots(4, 1, figsize=(14, 12))

axes[0].plot(t[:500], s[:500], 'g', linewidth=1.5, label='Clean signal')
axes[0].set_title('Original Clean Signal')
axes[0].legend()
axes[0].grid(True, alpha=0.3)

axes[1].plot(t[:500], d[:500], 'r', alpha=0.7, label='Signal + Noise')
axes[1].set_title('Noisy Signal (Primary Input)')
axes[1].legend()
axes[1].grid(True, alpha=0.3)

axes[2].plot(t[:500], e_nlms[:500], 'b', alpha=0.7, label='NLMS output')
axes[2].plot(t[:500], s[:500], 'g--', alpha=0.5, label='Clean (reference)')
axes[2].set_title('Recovered Signal (NLMS Noise Canceller)')
axes[2].legend()
axes[2].grid(True, alpha=0.3)

# SNR improvement over time
window = 200
snr_input = 10 * np.log10(
    np.convolve(s**2, np.ones(window)/window, mode='same') /
    np.convolve(n0**2, np.ones(window)/window, mode='same') + 1e-10
)
residual_nlms = e_nlms - s
snr_output = 10 * np.log10(
    np.convolve(s**2, np.ones(window)/window, mode='same') /
    np.convolve(residual_nlms**2, np.ones(window)/window, mode='same') + 1e-10
)

axes[3].plot(t, snr_input, 'r', alpha=0.7, label='Input SNR')
axes[3].plot(t, snr_output, 'b', alpha=0.7, label='Output SNR (NLMS)')
axes[3].set_xlabel('Time (s)')
axes[3].set_ylabel('SNR (dB)')
axes[3].set_title('SNR Improvement')
axes[3].legend()
axes[3].grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig('noise_cancellation.png', dpi=150, bbox_inches='tight')
plt.show()

# Compute overall SNR improvement
snr_in = 10 * np.log10(np.mean(s[M:]**2) / np.mean(n0[M:]**2))
snr_out_nlms = 10 * np.log10(
    np.mean(s[1000:]**2) / np.mean((e_nlms[1000:] - s[1000:])**2)
)
print(f"Input SNR:       {snr_in:.1f} dB")
print(f"Output SNR (NLMS): {snr_out_nlms:.1f} dB")
print(f"SNR improvement:   {snr_out_nlms - snr_in:.1f} dB")

14.4 ์‹œ๋ณ€ ์‹œ์Šคํ…œ ์ถ”์ 

# Tracking a time-varying system
np.random.seed(42)

N = 4000
x = np.random.randn(N)

# Time-varying system: coefficients change at n=2000
h1 = np.array([1.0, 0.5, -0.3])
h2 = np.array([0.2, -0.8, 1.0])
M = 3

d = np.zeros(N)
for n in range(M, N):
    x_vec = x[n-M+1:n+1][::-1]
    if n < 2000:
        d[n] = np.dot(h1, x_vec) + 0.01 * np.random.randn()
    else:
        d[n] = np.dot(h2, x_vec) + 0.01 * np.random.randn()

# Compare algorithms
_, e_lms, w_lms = lms_filter(x, d, M, mu=0.05)
_, e_nlms, w_nlms = nlms_filter(x, d, M, mu_tilde=0.8)
_, e_rls, w_rls = rls_filter(x, d, M, lam=0.98)

# Plot weight trajectories
fig, axes = plt.subplots(3, 1, figsize=(12, 10), sharex=True)

titles = ['LMS', 'NLMS', 'RLS']
w_histories = [w_lms, w_nlms, w_rls]
colors = ['tab:blue', 'tab:orange', 'tab:green']

for ax, title, w_hist in zip(axes, titles, w_histories):
    for i in range(M):
        ax.plot(w_hist[:, i], label=f'w[{i}]', alpha=0.8)
    # Plot true coefficient values (h1 before the change at n=2000, h2 after)
    for v in np.concatenate([h1, h2]):
        ax.axhline(y=v, color='gray', linestyle=':', alpha=0.3)
    ax.axvline(x=2000, color='red', linestyle='--', alpha=0.5, label='System change')
    ax.set_title(f'{title} Weight Tracking')
    ax.legend(loc='upper right')
    ax.grid(True, alpha=0.3)

axes[-1].set_xlabel('Iteration')
plt.tight_layout()
plt.savefig('tracking_demo.png', dpi=150, bbox_inches='tight')
plt.show()

15. ์—ฐ์Šต ๋ฌธ์ œ

์—ฐ์Šต ๋ฌธ์ œ 1: ์œ„๋„ˆ ํ•„ํ„ฐ

$x(n)$์ด ๋ถ„์‚ฐ $\sigma_x^2 = 1$์ธ ๋ฐฑ์ƒ‰ ์žก์Œ์ด๊ณ , ์›ํ•˜๋Š” ์‹ ํ˜ธ๊ฐ€ $d(n) = 0.8x(n) + 0.5x(n-1) - 0.3x(n-2) + v(n)$์ธ ์‹œ์Šคํ…œ์„ ๊ณ ๋ คํ•˜์„ธ์š”. ์—ฌ๊ธฐ์„œ $v(n)$์€ ๋ถ„์‚ฐ $\sigma_v^2 = 0.1$์ธ ๋ฐฑ์ƒ‰ ์žก์Œ์œผ๋กœ $x(n)$๊ณผ ๋…๋ฆฝ์ž…๋‹ˆ๋‹ค.

(a) 3ํƒญ ์œ„๋„ˆ ํ•„ํ„ฐ์— ๋Œ€ํ•œ ์ž๊ธฐ์ƒ๊ด€ ํ–‰๋ ฌ $\mathbf{R}$์„ ๊ณ„์‚ฐํ•˜์„ธ์š”.

(b) ์ƒํ˜ธ์ƒ๊ด€ ๋ฒกํ„ฐ $\mathbf{p}$๋ฅผ ๊ณ„์‚ฐํ•˜์„ธ์š”.

(c) $\mathbf{R}\mathbf{w}_{opt} = \mathbf{p}$๋ฅผ ํ’€์–ด ์ตœ์  ์œ„๋„ˆ ํ•„ํ„ฐ $\mathbf{w}_{opt}$๋ฅผ ๊ตฌํ•˜์„ธ์š”.

(d) ์ตœ์†Œ MSE $J_{min}$์„ ๊ณ„์‚ฐํ•˜์„ธ์š”.
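ํ•ด์„์ ์œผ๋กœ ๊ตฌํ•œ ๋‹ต์€ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ชฌํ…Œ์นด๋ฅผ๋กœ ์Šค์ผ€์น˜๋กœ ๊ฒ€์ฆํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ๋กœ๋ถ€ํ„ฐ $\mathbf{R}$๊ณผ $\mathbf{p}$๋ฅผ ์ถ”์ •ํ•œ ๋’ค ์ •๊ทœ๋ฐฉ์ •์‹์„ ํ’‰๋‹ˆ๋‹ค (์ƒ˜ํ”Œ ์ˆ˜์™€ ์‹œ๋“œ๋Š” ์ž„์˜๋กœ ๊ฐ€์ •ํ•œ ๊ฐ’์ž…๋‹ˆ๋‹ค):

```python
import numpy as np

# Monte-Carlo sanity check for Exercise 1 (sketch; N and seed are arbitrary)
rng = np.random.default_rng(0)
N = 200_000
x = rng.standard_normal(N)                 # white input, variance 1
v = np.sqrt(0.1) * rng.standard_normal(N)  # independent noise, variance 0.1
d = 0.8 * x + v
d[1:] += 0.5 * x[:-1]
d[2:] -= 0.3 * x[:-2]

M = 3
# Estimate autocorrelation matrix R and cross-correlation vector p from data
r = np.array([np.mean(x[k:] * x[:N - k]) for k in range(M)])
R = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])
p = np.array([np.mean(d[k:] * x[:N - k]) for k in range(M)])

w_opt = np.linalg.solve(R, p)        # expect ~ [0.8, 0.5, -0.3]
J_min = np.mean(d**2) - p @ w_opt    # expect ~ sigma_v^2 = 0.1
print(w_opt, J_min)
```

์ž…๋ ฅ์ด ๋ฐฑ์ƒ‰์ด๋ฏ€๋กœ $\mathbf{R} \approx \mathbf{I}$์ด๊ณ  $\mathbf{p}$๋Š” ์‹œ์Šคํ…œ ๊ณ„์ˆ˜์™€ ๊ฐ™์•„์ ธ, $\mathbf{w}_{opt}$๊ฐ€ ๊ณ„์ˆ˜ ์ž์ฒด๋กœ ์ˆ˜๋ ดํ•˜๊ณ  $J_{min}$์€ ์ธก์ • ์žก์Œ ๋ถ„์‚ฐ๋งŒ ๋‚จ๋Š”๋‹ค๋Š” ์ ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.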

์—ฐ์Šต ๋ฌธ์ œ 2: LMS ์ˆ˜๋ ด

$M = 10$ ํƒญ์„ ๊ฐ€์ง„ LMS ํ•„ํ„ฐ๊ฐ€ ์ž๊ธฐ์ƒ๊ด€ ํ–‰๋ ฌ์˜ ๊ณ ์œ ๊ฐ’์ด $\lambda_{max} = 5.0$, $\lambda_{min} = 0.1$์ธ ์ž…๋ ฅ ์‹ ํ˜ธ์— ์ ์šฉ๋ฉ๋‹ˆ๋‹ค.

(a) ํ‰๊ท  ์ˆ˜๋ ด์„ ์œ„ํ•œ ์ตœ๋Œ€ ์Šคํ… ํฌ๊ธฐ๋Š” ์–ผ๋งˆ์ž…๋‹ˆ๊นŒ?

(b) ์กฐ๊ฑด์ˆ˜ $\chi(\mathbf{R})$์€ ์–ผ๋งˆ์ž…๋‹ˆ๊นŒ?

(c) $\mu = 0.01$์ด๊ณ  $\text{tr}(\mathbf{R}) = 10$์ผ ๋•Œ, ์˜ค์กฐ์ • $\mathcal{M}$์„ ๊ณ„์‚ฐํ•˜์„ธ์š”.

(d) ๊ฐ€์žฅ ๋А๋ฆฐ ๋ชจ๋“œ์˜ ์ˆ˜๋ ด ์‹œ์ •์ˆ˜ $\tau_{mse}$๋ฅผ ์ถ”์ •ํ•˜์„ธ์š”.

(e) LMS๋ฅผ ์ ์šฉํ•˜๊ธฐ ์ „์— ์ž…๋ ฅ์„ ๋ฐฑ์ƒ‰ํ™”(whitening)ํ•˜๋ฉด ์ˆ˜๋ ด์ด ์–ด๋–ป๊ฒŒ ๋ณ€ํ• ์ง€ ์งˆ์ ์œผ๋กœ ์„ค๋ช…ํ•˜์„ธ์š”.

์—ฐ์Šต ๋ฌธ์ œ 3: NLMS vs LMS

์ „๋ ฅ์ด 500 ์ƒ˜ํ”Œ๋งˆ๋‹ค 0.1๊ณผ 10.0 ์‚ฌ์ด๋ฅผ ๊ต๋Œ€ํ•˜๋Š” ๋น„์ •์ƒ ์ž…๋ ฅ ์‹ ํ˜ธ์— ๋Œ€ํ•ด ์žก์Œ ์ œ๊ฑฐ๋ฅผ ์œ„ํ•œ LMS์™€ NLMS๋ฅผ ๋ชจ๋‘ ๊ตฌํ˜„ํ•˜์„ธ์š”. ํ•„ํ„ฐ ์ฐจ์ˆ˜ $M = 16$์„ ์‚ฌ์šฉํ•˜์„ธ์š”.

(a) ๊ณ ์ • ์Šคํ… ํฌ๊ธฐ๋ฅผ ๊ฐ€์ง„ LMS๊ฐ€ ๊ณ ์ „๋ ฅ ๊ตฌ๊ฐ„์—์„œ ๋ฐœ์‚ฐํ•˜๊ฑฐ๋‚˜ ์ €์ „๋ ฅ ๊ตฌ๊ฐ„์—์„œ ๋„ˆ๋ฌด ๋А๋ฆฌ๊ฒŒ ์ˆ˜๋ ดํ•จ์„ ๋ณด์ด์„ธ์š”.

(b) NLMS๊ฐ€ ์ „๋ ฅ ๋ณ€๋™์„ ์šฐ์•„ํ•˜๊ฒŒ ์ฒ˜๋ฆฌํ•จ์„ ์‹œ์—ฐํ•˜์„ธ์š”.

(c) ๋‘ ์•Œ๊ณ ๋ฆฌ์ฆ˜์˜ MSE ํ•™์Šต ๊ณก์„ ์„ ๊ทธ๋ฆฌ์„ธ์š”.

์—ฐ์Šต ๋ฌธ์ œ 4: RLS ๊ตฌํ˜„

์ž„ํŽ„์Šค ์‘๋‹ต $h = [1, -0.5, 0.25, -0.125]$๋ฅผ ๊ฐ€์ง„ ์‹œ์Šคํ…œ์„ ์‹๋ณ„ํ•˜๊ธฐ ์œ„ํ•ด ๋ง๊ฐ ์ธ์ž $\lambda = 0.99$๋กœ RLS๋ฅผ ๊ตฌํ˜„ํ•˜์„ธ์š”.

(a) ๊ฐ ๊ฐ€์ค‘์น˜๊ฐ€ ์‹ค์ œ ๊ฐ’์œผ๋กœ ์ˆ˜๋ ดํ•˜๋Š” ๊ฒƒ์„ ๊ทธ๋ž˜ํ”„๋กœ ๋‚˜ํƒ€๋‚ด์„ธ์š”. LMS ๋ฐ NLMS์™€ ๋น„๊ตํ•˜์„ธ์š”.

(b) $\lambda$๋ฅผ 0.9์—์„œ 1.0๊นŒ์ง€ ๋ณ€ํ™”์‹œํ‚ค๊ณ  ์ •์ƒ ์ƒํƒœ MSE ๋Œ€ ์ˆ˜๋ ด ์‹œ๊ฐ„ ํŠธ๋ ˆ์ด๋“œ์˜คํ”„๋ฅผ ๊ทธ๋ฆฌ์„ธ์š”.

(c) $n = 1000$์—์„œ ์‹œ์Šคํ…œ ๋ณ€ํ™”๋ฅผ ๋„์ž…ํ•˜์„ธ์š” ($h$๋ฅผ $[0.5, 0.3, -0.2, 0.1]$๋กœ ๋ณ€๊ฒฝ). LMS, NLMS, RLS์˜ ์ถ”์  ์„ฑ๋Šฅ์„ ๋น„๊ตํ•˜์„ธ์š”.

์—ฐ์Šต ๋ฌธ์ œ 5: ์—์ฝ” ์ œ๊ฑฐ ์‹œ๋ฎฌ๋ ˆ์ด์…˜

์Œํ–ฅ ์—์ฝ” ์ œ๊ฑฐ ์‹œ๋‚˜๋ฆฌ์˜ค๋ฅผ ์‹œ๋ฎฌ๋ ˆ์ด์…˜ํ•˜์„ธ์š”:

(a) ๋‹ค์–‘ํ•œ ์ฃผํŒŒ์ˆ˜์˜ ์ •ํ˜„ํŒŒ ํ•ฉ์œผ๋กœ "์›๋‹จ ์Œ์„ฑ" ์‹ ํ˜ธ๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”.

(b) ์‹ค๋‚ด ์ž„ํŽ„์Šค ์‘๋‹ต์„ ์ƒ์„ฑํ•˜์„ธ์š” (๊ธธ์ด 100์˜ ์ง€์ˆ˜์ ์œผ๋กœ ๊ฐ์†Œํ•˜๋Š” ๋žœ๋ค ์‹œํ€€์Šค ์‚ฌ์šฉ).

(c) ๊ทผ๋‹จ(near-end) ์žก์Œ์„ ์ถ”๊ฐ€ํ•˜์„ธ์š”.

(d) ํ•„ํ„ฐ ์ฐจ์ˆ˜ 128์˜ NLMS๋ฅผ ์ ์šฉํ•˜์„ธ์š”. ์‹œ๊ฐ„์— ๋”ฐ๋ฅธ ์—์ฝ” ๋ฐ˜ํ™˜ ์†์‹ค ํ–ฅ์ƒ(ERLE)์„ ๊ทธ๋ฆฌ์„ธ์š”:

$$\text{ERLE}(n) = 10 \log_{10} \frac{E[d^2(n)]}{E[e^2(n)]}$$

(e) ์ด์ค‘ ํ†ตํ™”(๊ทผ๋‹จ ์Œ์„ฑ ์ถ”๊ฐ€)๊ฐ€ ์ ์‘ ํ•„ํ„ฐ์— ๋ฏธ์น˜๋Š” ์˜ํ–ฅ์„ ์กฐ์‚ฌํ•˜์„ธ์š”.

์—ฐ์Šต ๋ฌธ์ œ 6: ์ ์‘ ๋“ฑํ™”

๋””์ง€ํ„ธ ํ†ต์‹  ์ฑ„๋„์ด ์ž„ํŽ„์Šค ์‘๋‹ต $c = [0.5, 1.0, 0.5]$๋ฅผ ๊ฐ€์ง‘๋‹ˆ๋‹ค (ISI ๋ฐœ์ƒ).

(a) ๋žœ๋ค BPSK ์‹ ํ˜ธ($a(n) \in \{-1, +1\}$)๋ฅผ ์ƒ์„ฑํ•˜๊ณ  ์ฑ„๋„์„ ํ†ต๊ณผ์‹œํ‚ค์„ธ์š”. SNR = 20 dB์—์„œ ์žก์Œ์„ ์ถ”๊ฐ€ํ•˜์„ธ์š”.

(b) $M = 11$ ํƒญ๊ณผ ๊ฒฐ์ • ์ง€์—ฐ $\Delta = 5$๋กœ LMS๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ ์‘ ๋“ฑํ™”๊ธฐ๋ฅผ ์„ค๊ณ„ํ•˜์„ธ์š”.

(c) ํ›ˆ๋ จ ๊ธธ์ด์˜ ํ•จ์ˆ˜๋กœ ๋น„ํŠธ ์˜ค๋ฅ˜์œจ(BER)์„ ๊ทธ๋ฆฌ์„ธ์š”.

(d) 500๊ฐœ์˜ ํ›ˆ๋ จ ์‹ฌ๋ณผ ํ›„ ๊ฒฐ์ • ์ฃผ๋„ ๋ชจ๋“œ๋กœ ์ „ํ™˜ํ•˜๊ณ  BER์ด ์•ˆ์ •์ ์œผ๋กœ ์œ ์ง€๋จ์„ ๊ฒ€์ฆํ•˜์„ธ์š”.

(e) ๋“ฑํ™” ์ „ํ›„์˜ ์•„์ด ๋‹ค์ด์–ด๊ทธ๋žจ(eye diagram)์„ ๋น„๊ตํ•˜์„ธ์š”.

์—ฐ์Šต ๋ฌธ์ œ 7: ํ•„ํ„ฐ ์ฐจ์ˆ˜์˜ ์˜ํ–ฅ

์‹ค์ œ ์‹œ์Šคํ…œ $h = [0.5, 1.2, -0.8, 0.3, -0.1]$์— ๋Œ€ํ•œ ์‹œ์Šคํ…œ ์‹๋ณ„ ๋ฌธ์ œ์—์„œ:

(a) ํ•„ํ„ฐ ์ฐจ์ˆ˜ $M = 3, 5, 7, 10, 20$์œผ๋กœ LMS๋ฅผ ์‹คํ–‰ํ•˜๊ณ  ์ •์ƒ ์ƒํƒœ MSE๋ฅผ ๋น„๊ตํ•˜์„ธ์š”.

(b) $M < 5$ (๊ณผ์†Œ ๋ชจ๋ธ๋ง)์™€ $M > 5$ (๊ณผ๋Œ€ ๋ชจ๋ธ๋ง)์ผ ๋•Œ ์–ด๋–ค ์ผ์ด ๋ฐœ์ƒํ•˜๋Š”์ง€ ์„ค๋ช…ํ•˜์„ธ์š”.

(c) ๊ฐ $M$์— ๋Œ€ํ•ด ์‹๋ณ„๋œ ์ž„ํŽ„์Šค ์‘๋‹ต์„ ๊ทธ๋ฆฌ์„ธ์š”.


16. ์š”์•ฝ

๊ฐœ๋… ํ•ต์‹ฌ ๊ณต์‹ / ์•„์ด๋””์–ด
์œ„๋„ˆ ํ•„ํ„ฐ $\mathbf{w}_{opt} = \mathbf{R}^{-1}\mathbf{p}$ (์ตœ์  MMSE)
์ตœ๊ธ‰๊ฐ•ํ•˜๋ฒ• $\mathbf{w}(n+1) = \mathbf{w}(n) + 2\mu(\mathbf{p} - \mathbf{R}\mathbf{w}(n))$
์ˆ˜๋ ด ์กฐ๊ฑด $0 < \mu < 1/\lambda_{max}$
LMS ๊ฐฑ์‹  $\mathbf{w}(n+1) = \mathbf{w}(n) + \mu \, e(n) \, \mathbf{x}(n)$
LMS ์˜ค์กฐ์ • $\mathcal{M} = \mu \, \text{tr}(\mathbf{R})$
NLMS ๊ฐฑ์‹  $\mathbf{w}(n+1) = \mathbf{w}(n) + \frac{\tilde{\mu}}{\|\mathbf{x}\|^2+\delta} e(n)\mathbf{x}(n)$
RLS ์ด๋“ $\mathbf{k}(n) = \frac{\mathbf{P}(n-1)\mathbf{x}(n)}{\lambda + \mathbf{x}^T(n)\mathbf{P}(n-1)\mathbf{x}(n)}$
๋ง๊ฐ ์ธ์ž ๋ฉ”๋ชจ๋ฆฌ $N_{eff} = 1/(1-\lambda)$
์žก์Œ ์ œ๊ฑฐ ์˜ค์ฐจ ์‹ ํ˜ธ $e(n) = d(n) - \hat{y}(n) \approx s(n)$
ํŠธ๋ ˆ์ด๋“œ์˜คํ”„ ๋น ๋ฅธ ์ˆ˜๋ ด vs ๋‚ฎ์€ ์˜ค์กฐ์ •

ํ•ต์‹ฌ ์ •๋ฆฌ:

1. ์œ„๋„ˆ ํ•„ํ„ฐ๋Š” ์ด๋ก ์  ์ตœ์ ์„ ์ œ๊ณตํ•˜์ง€๋งŒ ์‹ ํ˜ธ ํ†ต๊ณ„๋ฅผ ๋ฏธ๋ฆฌ ์•Œ๊ณ  ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.
2. LMS๋Š” ๊ธฐ์šธ๊ธฐ๋ฅผ ์ˆœ๊ฐ„ ์ถ”์ •๊ฐ’์œผ๋กœ ๊ทผ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ๋‹จ์ˆœํ•˜๊ณ  ๊ฐ•๊ฑดํ•˜๋ฉฐ ๋ณต์žก๋„๋Š” $O(M)$์ž…๋‹ˆ๋‹ค.
3. NLMS๋Š” ์Šคํ… ํฌ๊ธฐ๋ฅผ ์ž…๋ ฅ ์ „๋ ฅ์œผ๋กœ ์ •๊ทœํ™”ํ•˜์—ฌ, ์‹ ํ˜ธ ๋ ˆ๋ฒจ์ด ๋ณ€๋™ํ•ด๋„ ๋” ๋‚˜์€ ์•ˆ์ •์„ฑ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค.
4. RLS๋Š” ์ง€์ˆ˜ ๊ฐ€์ค‘์น˜๋ฅผ ์ ์šฉํ•œ ๋ชจ๋“  ๊ณผ๊ฑฐ ๋ฐ์ดํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ $O(M^2)$ ๋น„์šฉ์œผ๋กœ ๋น ๋ฅธ ์ˆ˜๋ ด์„ ๋‹ฌ์„ฑํ•ฉ๋‹ˆ๋‹ค.
5. ์˜ค์กฐ์ •๊ณผ ์ˆ˜๋ ด ์†๋„ ์‚ฌ์ด์˜ ํŠธ๋ ˆ์ด๋“œ์˜คํ”„๋Š” ๋ชจ๋“  ์ ์‘ ์•Œ๊ณ ๋ฆฌ์ฆ˜์— ๊ทผ๋ณธ์ ์ž…๋‹ˆ๋‹ค.
6. ์ ์‘ ํ•„ํ„ฐ๋Š” ์žก์Œ ์ œ๊ฑฐ๋ถ€ํ„ฐ ๋“ฑํ™”๊นŒ์ง€ ์ˆ˜๋งŽ์€ ์‘์šฉ์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค.


17. ์ฐธ๊ณ  ๋ฌธํ—Œ

  1. S. Haykin, Adaptive Filter Theory, 5th ed., Pearson, 2014.
  2. A.H. Sayed, Adaptive Filters, Wiley-IEEE Press, 2008.
  3. P.S.R. Diniz, Adaptive Filtering: Algorithms and Practical Implementation, 4th ed., Springer, 2013.
  4. B. Widrow and S.D. Stearns, Adaptive Signal Processing, Pearson, 1985.
  5. S. Haykin, "Adaptive filter theory," in Proc. IEEE, vol. 90, no. 2, pp. 211-259, 2002.
  6. B. Farhang-Boroujeny, Adaptive Filters: Theory and Applications, 2nd ed., Wiley, 2013.

์ด์ „: 12. ๋‹ค์ค‘ ๋ ˆ์ดํŠธ ์‹ ํ˜ธ ์ฒ˜๋ฆฌ | ๋‹ค์Œ: 14. ์‹œ๊ฐ„-์ฃผํŒŒ์ˆ˜ ๋ถ„์„
