How to Implement LMS Filter in Python: Simple Guide
To implement an LMS filter in Python, initialize the filter weights, then iteratively update them using the error between the desired and actual output, multiplied by the input vector and a learning rate. This adaptive process adjusts the weights to minimize the error over time.
Syntax
The LMS filter updates weights using the formula: w = w + 2 * mu * e * x, where:
- w is the weight vector.
- mu is the learning rate (step size).
- e is the error between the desired and actual output.
- x is the input vector.
This update happens for each new input sample.
```python
import numpy as np

def lms_filter(x, d, mu, n):
    w = np.zeros(n)           # filter weights
    y = np.zeros(len(x))      # filter output
    e = np.zeros(len(x))      # error signal
    for i in range(n - 1, len(x)):
        # Current and past n-1 samples, newest first,
        # so w[0] pairs with the most recent input
        x_vec = x[i - n + 1:i + 1][::-1]
        y[i] = np.dot(w, x_vec)
        e[i] = d[i] - y[i]
        w += 2 * mu * e[i] * x_vec
    return y, e, w
```
Example
This example shows how to use the LMS filter to estimate a system output from noisy input data.
```python
import numpy as np
import matplotlib.pyplot as plt

def lms_filter(x, d, mu, n):
    w = np.zeros(n)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for i in range(n - 1, len(x)):
        # Current and past n-1 samples, newest first
        x_vec = x[i - n + 1:i + 1][::-1]
        y[i] = np.dot(w, x_vec)
        e[i] = d[i] - y[i]
        w += 2 * mu * e[i] * x_vec
    return y, e, w

# Generate input signal
np.random.seed(0)
x = np.random.randn(500)

# Desired signal: system with impulse response [0.1, 0.15, 0.5], plus noise
h = np.array([0.1, 0.15, 0.5])
d = np.convolve(x, h)[:len(x)] + 0.05 * np.random.randn(len(x))

# LMS filter parameters
mu = 0.01
filter_order = 3

# Apply LMS filter
y, e, w = lms_filter(x, d, mu, filter_order)

# Plot results
plt.figure(figsize=(10, 6))
plt.plot(d, label='Desired signal')
plt.plot(y, label='LMS output')
plt.legend()
plt.title('LMS Filter Output vs Desired Signal')
plt.show()
```
Output
A plot showing two lines: the desired signal and the LMS filter output closely following it after adaptation.
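The visual check can also be backed by a numerical one: because the example's impulse response is known, the learned weights can be compared against it directly. This sketch repeats the example's setup (the seed, noise level, and tolerances are arbitrary choices for illustration, not properties of LMS):

```python
import numpy as np

def lms_filter(x, d, mu, n):
    w = np.zeros(n)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for i in range(n - 1, len(x)):
        x_vec = x[i - n + 1:i + 1][::-1]  # newest sample first
        y[i] = np.dot(w, x_vec)
        e[i] = d[i] - y[i]
        w += 2 * mu * e[i] * x_vec
    return y, e, w

# Same setup as the example
np.random.seed(0)
x = np.random.randn(500)
h = np.array([0.1, 0.15, 0.5])
d = np.convolve(x, h)[:len(x)] + 0.05 * np.random.randn(len(x))

y, e, w = lms_filter(x, d, mu=0.01, n=3)
print(w)                      # should be close to [0.1, 0.15, 0.5]
print(np.max(np.abs(w - h)))  # remaining weight error after 500 samples
```

If the filter has adapted, the printed weight error is small and the squared error over the last samples settles near the noise floor.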
Common Pitfalls
- Choosing learning rate (mu): Too large causes instability; too small slows learning.
- Filter order: Too small misses system details; too large increases computation and noise sensitivity.
- Initialization: Starting weights at zero is common, but poor initialization can slow convergence.
- Input vector slicing: the slice must include the current sample and be reversed (newest first); incorrect indexing silently produces wrong updates.
```python
import numpy as np

w = np.zeros(3)
x = np.array([1.0, 2.0, 3.0, 4.0])
d = np.array([0.0, 0.0, 0.0, 0.0])
mu = 0.01

# Wrong: slice is oldest-first, so w[0] pairs with the oldest sample
for i in range(2, len(x)):
    x_vec = x[i-2:i+1]  # missing [::-1]
    y = np.dot(w, x_vec)
    e = d[i] - y
    w += 2 * mu * e * x_vec

# Right: reverse the slice so w[0] multiplies the newest sample
w = np.zeros(3)
for i in range(2, len(x)):
    x_vec = x[i-2:i+1][::-1]
    y = np.dot(w, x_vec)
    e = d[i] - y
    w += 2 * mu * e * x_vec
```
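The learning-rate pitfall can be demonstrated in code as well. The sketch below runs the same filter with a small and a deliberately too-large mu; the bound mu < 1/(n * input power) is a common rule of thumb for this 2 * mu update convention, not an exact guarantee:

```python
import numpy as np

def lms_filter(x, d, mu, n):
    w = np.zeros(n)
    e = np.zeros(len(x))
    for i in range(n - 1, len(x)):
        x_vec = x[i - n + 1:i + 1][::-1]  # newest sample first
        e[i] = d[i] - np.dot(w, x_vec)
        w += 2 * mu * e[i] * x_vec
    return e, w

np.random.seed(1)
x = np.random.randn(2000)
d = np.convolve(x, [0.1, 0.15, 0.5])[:len(x)]  # noiseless for clarity
n = 3

power = np.mean(x ** 2)
print("rule-of-thumb bound:", 1 / (n * power))  # roughly 0.33 for unit-variance input

_, w_small = lms_filter(x, d, mu=0.01, n=n)  # stable: converges to the true taps
with np.errstate(all='ignore'):              # suppress overflow warnings
    _, w_large = lms_filter(x, d, mu=1.0, n=n)  # unstable: weights blow up

print(np.max(np.abs(w_small)))  # modest values near the true taps
print(np.max(np.abs(w_large)))  # huge or NaN after divergence
```

Halving mu when the error grows instead of shrinking is a simple practical check during development.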
Quick Reference
- Weight update: w = w + 2 * mu * e * x
- Error: e = d - y
- Learning rate (mu): small positive value, e.g., 0.01
- Filter order: number of weights; depends on system complexity
- Input vector: the most recent samples, reversed (newest first), for the dot product
Key Takeaways
- The LMS filter updates weights iteratively to minimize error using a simple formula.
- Choose the learning rate carefully to balance speed and stability.
- Reverse the input vector slice before the dot product to align with the filter weights.
- Filter order affects accuracy and computational cost.
- Initialization and input handling are critical for a correct LMS implementation.