To match lock messages in the Postgres logs you can use the following regular expressions. The first matches messages where a process acquires a lock on a row (tuple) of a table; the second matches messages where a process acquires a lock on another transaction (typically a ShareLock taken while waiting for that transaction to finish).
.*user=(?P<lock_user_name>\w+),db=(?P<lock_database>\w+)
LOG:\s+process (?P<lock_process_id>\d+)
acquired (?P<lock_type>\w+) on (?P<lock_on>\w+)
(?P<lock_tuple>[\(\)0-9,]+) of (?P<lock_object_type>\w+)
(?P<lock_object_oid>\d+) of database (?P<lock_db_oid>\d+)
after (?P<lock_wait_time>[0-9.]+).*
.*user=(?P<lock_user_name>\w+),db=(?P<lock_database>\w+)
LOG:\s+process (?P<lock_process_id>\d+) acquired
(?P<lock_type>\w+) on transaction (?P<lock_transaction_id>\d+)
after (?P<lock_wait_time>[0-9.]+) .*
The useful thing about this is that it lets you see which queries are waiting on locks, and for how long.
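As a rough sketch of how these patterns can be used, here is a small Python script that applies the first pattern to a sample log line; the second pattern works the same way. The sample line and its field values are illustrative, and the extra .* inserted before LOG: is an assumption to absorb whatever else your log_line_prefix emits.

import re

# The tuple-lock pattern from above, joined into a single expression.
# The ".*" before "LOG:" is an assumption: it absorbs whatever else the
# log_line_prefix emits between db=... and the message itself.
TUPLE_LOCK_RE = re.compile(
    r".*user=(?P<lock_user_name>\w+),db=(?P<lock_database>\w+)"
    r".*LOG:\s+process (?P<lock_process_id>\d+) "
    r"acquired (?P<lock_type>\w+) on (?P<lock_on>\w+) "
    r"(?P<lock_tuple>[\(\)0-9,]+) of (?P<lock_object_type>\w+) "
    r"(?P<lock_object_oid>\d+) of database (?P<lock_db_oid>\d+) "
    r"after (?P<lock_wait_time>[0-9.]+).*"
)

# An illustrative log line, assuming log_line_prefix includes user=%u,db=%d.
line = (
    "2024-01-01 12:00:00 UTC user=app,db=mydb LOG:  process 4242 "
    "acquired ShareLock on tuple (0,7) of relation 16385 "
    "of database 16384 after 1201.455 ms"
)

match = TUPLE_LOCK_RE.match(line)
if match:
    fields = match.groupdict()
    print(fields["lock_object_oid"], fields["lock_tuple"], fields["lock_wait_time"])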
To make these entries show up in the log, you need this in postgresql.conf:
log_lock_waits = on
A wait is only logged once it has lasted longer than deadlock_timeout (one second by default). The user= and db= parts of the patterns above also assume a log_line_prefix that includes %u and %d.
The tuple-lock messages are particularly interesting, because they tell you both the row and the table that were locked. However, they do this using internal identifiers: the table and database appear as OIDs, and the row as a ctid.
You can look up the table name from the relation OID in the log (lock_object_oid above) like so:
select pg_class.oid, nspname, relname
from pg_class, pg_namespace
where pg_class.relnamespace = pg_namespace.oid
and pg_class.oid = 1234
The log output also gives you the row's ctid (lock_tuple above), a (page, item) pair saying where the row is physically stored. You can look the row up like so:
select *
from mytable
where ctid::text = '(0,1)'
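To tie the pieces together, here is a rough sketch that feeds the parsed fields back into these two lookups using the psycopg2 driver. The connection string and the field values are illustrative, and there is no error handling.

import psycopg2
from psycopg2 import sql

# Illustrative values; in practice these come from the groupdict() above.
fields = {"lock_object_oid": "16385", "lock_tuple": "(0,7)"}

conn = psycopg2.connect("dbname=mydb user=app")
with conn, conn.cursor() as cur:
    # Resolve the relation OID from the log into a schema and table name.
    cur.execute(
        "select nspname, relname"
        " from pg_class, pg_namespace"
        " where pg_class.relnamespace = pg_namespace.oid"
        " and pg_class.oid = %s",
        (int(fields["lock_object_oid"]),),
    )
    nspname, relname = cur.fetchone()
    # Fetch the locked row itself, using the ctid from the log.
    cur.execute(
        sql.SQL("select * from {}.{} where ctid::text = %s").format(
            sql.Identifier(nspname), sql.Identifier(relname)
        ),
        (fields["lock_tuple"],),
    )
    print(nspname, relname, cur.fetchone())
conn.close()

Bear in mind that a ctid identifies a particular row version: if the row has been updated since the lock was logged, the ctid may point at a dead row version or at nothing at all.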
If you are experiencing a lot of lock waits and are concerned, it is also worth capturing the client host of the offending connections (for example by adding %h or %r to log_line_prefix), so that you can tell whether the trouble is coming from several servers in a farm or from multiple processes on the same machine.