Consider a table accounts with columns id and balance. Which rows will be locked by this query?
SELECT * FROM accounts WHERE balance > 1000 FOR UPDATE;
CREATE TABLE accounts (id SERIAL PRIMARY KEY, balance INT); INSERT INTO accounts (balance) VALUES (500), (1500), (2000);
FOR UPDATE locks rows returned by the query to prevent concurrent updates.
The FOR UPDATE clause locks only the rows returned by the query. Here, only the rows with balance > 1000 (the rows with balances 1500 and 2000) are locked; the row with balance 500 remains unlocked. The locks are held until the transaction commits or rolls back.
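With the sample data above, the effect can be sketched in two concurrent sessions (the session split and comments are illustrative):

```sql
-- Session 1: lock the qualifying rows (balances 1500 and 2000)
BEGIN;
SELECT * FROM accounts WHERE balance > 1000 FOR UPDATE;

-- Session 2: the unlocked row is still freely updatable...
UPDATE accounts SET balance = balance - 100 WHERE balance = 500;  -- proceeds

-- Session 2: ...but touching a locked row blocks until Session 1 ends
UPDATE accounts SET balance = balance - 100 WHERE balance = 2000; -- blocks

-- Session 1
COMMIT;  -- Session 2's blocked UPDATE now proceeds
```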
Given the table orders with columns order_id and status, what does this query do?
SELECT * FROM orders WHERE status = 'pending' FOR SHARE;
CREATE TABLE orders (order_id SERIAL PRIMARY KEY, status TEXT); INSERT INTO orders (status) VALUES ('pending'), ('shipped'), ('pending');
FOR SHARE locks rows to prevent updates or deletes but allows reads.
The FOR SHARE clause locks the selected rows to prevent other transactions from modifying or deleting them, but it allows concurrent reads. Other transactions may also acquire FOR SHARE locks on the same rows at the same time.
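A sketch of the shared-lock behavior with the sample orders data (session labels are illustrative; each blocking statement is an alternative, not a sequence):

```sql
-- Session 1: take share locks on the pending rows
BEGIN;
SELECT * FROM orders WHERE status = 'pending' FOR SHARE;

-- Session 2: plain reads and other share locks are allowed
SELECT * FROM orders WHERE status = 'pending';            -- proceeds
SELECT * FROM orders WHERE status = 'pending' FOR SHARE;  -- proceeds

-- Session 2: but either of these modifications would block
UPDATE orders SET status = 'shipped' WHERE status = 'pending';  -- blocks
DELETE FROM orders WHERE status = 'pending';                    -- blocks

-- Session 1
COMMIT;  -- releases the share locks; a blocked statement resumes
```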
Which option contains a syntax error when trying to lock rows for update?
SELECT * FROM products WHERE price > 100 FOR UPDATE NOWAIT;
Check the valid locking options and keywords in PostgreSQL.
PostgreSQL's row-locking syntax supports NOWAIT (and SKIP LOCKED) but has no WAIT keyword. Option D uses the invalid keyword WAIT, causing a syntax error.
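For contrast, a sketch of the valid and invalid forms in PostgreSQL:

```sql
-- Valid: fail immediately instead of waiting for a conflicting lock
SELECT * FROM products WHERE price > 100 FOR UPDATE NOWAIT;

-- Valid: silently skip rows that are already locked
SELECT * FROM products WHERE price > 100 FOR UPDATE SKIP LOCKED;

-- Invalid: WAIT is not a PostgreSQL keyword; this raises a syntax error
SELECT * FROM products WHERE price > 100 FOR UPDATE WAIT;
```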
You want to lock only necessary rows for update to reduce lock contention. Which query is best?
CREATE TABLE inventory (item_id SERIAL PRIMARY KEY, quantity INT); INSERT INTO inventory (quantity) VALUES (10), (0), (5);
Lock only rows you intend to update.
Query A locks only rows with quantity > 0 for update, minimizing locked rows and reducing contention.
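Query A is not reproduced above; a hedged reconstruction matching the explanation might look like this (the UPDATE that follows is illustrative):

```sql
-- Lock only the in-stock rows you actually intend to update
BEGIN;
SELECT item_id, quantity
FROM inventory
WHERE quantity > 0
FOR UPDATE;  -- locks the rows with quantity 10 and 5, not the quantity-0 row

UPDATE inventory SET quantity = quantity - 1 WHERE quantity > 0;
COMMIT;
```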
Transaction A and Transaction B both execute SELECT * FROM employees WHERE id = 1 FOR UPDATE; at the same time. What happens?
CREATE TABLE employees (id SERIAL PRIMARY KEY, name TEXT); INSERT INTO employees (name) VALUES ('Alice');
Consider how row-level locks serialize concurrent updates.
When two transactions try to lock the same row with FOR UPDATE, the second blocks until the first finishes (commits or rolls back). This serializes access to the row and maintains data consistency.
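The blocking behavior can be sketched as a timeline across the two transactions (the interleaving comments are illustrative):

```sql
-- Transaction A
BEGIN;
SELECT * FROM employees WHERE id = 1 FOR UPDATE;  -- acquires the row lock

-- Transaction B (concurrently)
BEGIN;
SELECT * FROM employees WHERE id = 1 FOR UPDATE;  -- blocks, waiting on A

-- Transaction A
COMMIT;  -- (or ROLLBACK) releases the lock; B's SELECT now returns

-- Transaction B
COMMIT;
```

If waiting is undesirable, Transaction B could append NOWAIT to its query to fail immediately instead of blocking.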