SQLite database for sensor data in Raspberry Pi - Time & Space Complexity
When working with sensor data on a Raspberry Pi, we often store readings in an SQLite database. Understanding how the time to insert or query data grows as the amount of stored data increases helps us keep the system responsive.
Analyze the time complexity of the following code snippet.
```python
import sqlite3

conn = sqlite3.connect('sensor_data.db')
cursor = conn.cursor()
cursor.execute('CREATE TABLE IF NOT EXISTS readings (id INTEGER PRIMARY KEY, value REAL)')

# sensor_readings is assumed to be a list of numeric values collected earlier
for reading in sensor_readings:
    cursor.execute('INSERT INTO readings (value) VALUES (?)', (reading,))

conn.commit()  # a single commit after the loop keeps all inserts in one transaction
```
This code creates a table if needed and inserts multiple sensor readings one by one into the database.
Identify the repeated work: loops, recursion, or array traversals.
- Primary operation: The loop that inserts each sensor reading into the database.
- How many times: Once for each reading in the sensor_readings list.
Each new reading triggers exactly one insert operation, so the total work grows in direct proportion to the number of readings.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 insert commands |
| 100 | 100 insert commands |
| 1000 | 1000 insert commands |
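You can observe this pattern yourself by timing the loop at different input sizes. The sketch below is a hypothetical benchmark, not part of the original snippet; it uses an in-memory database (`:memory:`) so repeated runs don't wear the Pi's SD card, and fabricates fake readings in place of real sensor values:

```python
import sqlite3
import time

def insert_readings(n):
    """Insert n fake readings; return (elapsed seconds, row count)."""
    conn = sqlite3.connect(':memory:')  # in-memory DB avoids SD-card I/O noise
    cursor = conn.cursor()
    cursor.execute('CREATE TABLE readings (id INTEGER PRIMARY KEY, value REAL)')
    readings = [float(i) for i in range(n)]  # stand-in for real sensor values

    start = time.perf_counter()
    for reading in readings:
        cursor.execute('INSERT INTO readings (value) VALUES (?)', (reading,))
    conn.commit()
    elapsed = time.perf_counter() - start

    count = cursor.execute('SELECT COUNT(*) FROM readings').fetchone()[0]
    conn.close()
    return elapsed, count

for n in (10, 100, 1000):
    elapsed, count = insert_readings(n)
    print(f'{n} readings -> {count} rows inserted in {elapsed:.6f}s')
```

Exact times will vary with hardware, but going from 100 to 1000 readings should take roughly ten times as long, which is the O(n) pattern in the table above.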
Pattern observation: The total work increases steadily as you add more sensor readings.
Time Complexity: O(n)
This means the time to insert all readings grows in direct proportion to how many readings you have.
[X] Wrong: "Inserting many readings at once will take the same time as inserting just one."
[OK] Correct: Each insert adds work, so more readings mean more time. The total time grows with the number of inserts.
Understanding how database operations scale is a useful skill. It shows you can think about how your code behaves as data grows, which is important for real projects.
"What if we used a single bulk insert statement instead of inserting one reading at a time? How would the time complexity change?"