© Postgres Professional, 2017–2021
Authors: Egor Rogov, Pavel Luzanov
Translated by Liudmila Mantrova
Usage of Course Materials
Non-commercial use of course materials (presentations, demonstrations) is
allowed on an unrestricted basis. Commercial use is only possible with prior
written permission of Postgres Professional company. Modification of course
materials is forbidden.
Contact Us
Please send your feedback to:
In no event shall Postgres Professional company be liable for any damages
or loss, including loss of profits, that arise from direct or indirect, special or
incidental use of course materials. Postgres Professional company
specifically disclaims any warranties on course materials. Course materials
are provided “as is,” and Postgres Professional company has no obligations
to provide maintenance, support, updates, enhancements, or modifications.
Triggers and trigger functions
Triggers’ firing time
Execution context of a trigger function
Return values
Dos and don’ts when using triggers
Event triggers
Triggers and Functions
A trigger
a database object: the list of events to process
once an event occurs, the trigger function is called,
and the call context is passed to this function
Trigger functions
a database object: event-processing code
is executed in the same transaction as the main operation
convention: the function does not take any parameters,
returns a value of the trigger pseudotype (which is virtually a record)
can be used in several triggers
Triggers enable us to perform particular actions in response to particular
events. A trigger consists of two parts: the trigger itself (which defines the
events) and a trigger function (which defines the actions). Both the trigger
and the function are independent database objects.
When an event occurs to which the trigger is subscribed, the trigger function
is called. It receives the context of the call that defines the exact trigger that
has called the function and the exact conditions that have led to this call.
A trigger function is a regular function that follows some conventions:
- It can be written in any language except pure SQL.
- It must have no parameters.
- Its return value is of the trigger type (it is actually a pseudotype;
a record corresponding to a table row is returned instead).
The trigger function is executed in the same transaction as the main
operation. Thus, if a trigger function results in an error, the whole
transaction is aborted.
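For illustration, a minimal trigger and trigger function might look as follows. All names here, including the table t, are invented for the example; note that EXECUTE FUNCTION requires PostgreSQL 11 or higher (older versions use EXECUTE PROCEDURE):

```sql
CREATE TABLE t (id integer, s text);

CREATE FUNCTION describe() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    -- report which trigger fired and why
    RAISE NOTICE 'trigger % fired: % % on %',
        TG_NAME, TG_WHEN, TG_OP, TG_TABLE_NAME;
    RETURN NEW;  -- a record matching the table row
END;
$$;

CREATE TRIGGER t_describe
BEFORE INSERT ON t
FOR EACH ROW EXECUTE FUNCTION describe();
```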
tables: BEFORE/AFTER STATEMENT, BEFORE/AFTER ROW
views: BEFORE/AFTER STATEMENT, INSTEAD OF ROW
tables (TRUNCATE): BEFORE/AFTER STATEMENT
WHEN condition: sets an additional filter
Triggers can fire for INSERT, UPDATE, or DELETE operations performed on
tables or views, as well as for the TRUNCATE operation on tables.
A trigger can fire before the specified action (BEFORE), after the action
(AFTER), or instead of the action (INSTEAD OF).
A trigger can fire once for the whole operation (FOR EACH STATEMENT)
or for each affected row (FOR EACH ROW).
There are some combinations of these conditions that are not supported.
For example, INSTEAD OF triggers can be defined only for views at the
row level, while TRUNCATE triggers can be defined only for tables and only
at the statement level. Possible combinations are listed on this slide.
Besides, you can limit the area controlled by the trigger by specifying the
WHEN condition: if this condition is false, the trigger does
not fire.
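For example, a WHEN condition can restrict a trigger to real changes only. This sketch is self-contained; the table and all names are invented for the example:

```sql
CREATE TABLE goods (id integer, price numeric);

CREATE FUNCTION log_price_change() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    RAISE NOTICE 'price of % changed: % -> %', NEW.id, OLD.price, NEW.price;
    RETURN NULL;  -- ignored for AFTER triggers
END;
$$;

-- the WHEN condition filters out updates that do not touch the price
CREATE TRIGGER goods_price_change
AFTER UPDATE ON goods
FOR EACH ROW
WHEN (OLD.price IS DISTINCT FROM NEW.price)
EXECUTE FUNCTION log_price_change();
```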
Before Statement
Is triggered: before the operation
Return value: is ignored
Let’s take a closer look at different trigger types.
The BEFORE STATEMENT trigger fires only once per operation, regardless
of the number of affected rows (even if there is none). It happens before the
start of the operation.
The return value of the trigger function is ignored; the function can simply return NULL.
If there is an error in the trigger, the operation is canceled.
Since the trigger function has no parameters, the call context in PL/pgSQL
is passed via predefined TG variables (TG_OP, TG_WHEN, TG_TABLE_NAME, and others).
You can also pass a user-defined context (which is analogous to the absent
parameters) via the TG_ARGV variable, although it is usually advisable to
create several specialized functions instead of a single generic one.
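A statement-level trigger of this kind might be sketched as follows (all names are invented for the example):

```sql
CREATE TABLE t (id integer);

CREATE FUNCTION log_stmt() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    RAISE NOTICE '% % on %', TG_WHEN, TG_OP, TG_TABLE_NAME;
    RETURN NULL;  -- the return value is ignored anyway
END;
$$;

-- fires once per statement, before any rows are processed
CREATE TRIGGER t_before_stmt
BEFORE INSERT OR UPDATE OR DELETE ON t
FOR EACH STATEMENT EXECUTE FUNCTION log_stmt();
```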
Before Row
Is triggered: before the action on the row, during the statement execution
Return value: a row (possibly modified); NULL cancels the operation
TG variables: OLD (update, delete), NEW (insert, update)
BEFORE ROW triggers fire each time an operation is about to process a row.
It happens right during the operation execution.
Trigger functions get the context via variables, such as:
- OLD: the old state of the row (undefined for insertion)
- NEW: the updated state of the row (undefined for deletion)
The NULL return value is interpreted as cancellation of the action for the
current row. The execution of the operation itself will continue, but the
current row won’t be processed, and other triggers won’t fire for this row.
To avoid interfering with the operation, the trigger must return the received
row without any modifications, so it must always return NEW for insert and
update operations. For delete operations, it can return any value except
NULL (usually OLD is used).
But the trigger function can also change the NEW value to affect the result of
the operation; this trigger is often defined exactly for this purpose.
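A typical sketch of such a trigger fills in a "technical" column; the table t and its modified_at column are assumptions made for the example:

```sql
CREATE TABLE t (id integer, modified_at timestamptz);

CREATE FUNCTION fill_modified() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    NEW.modified_at := current_timestamp;  -- overwrite whatever was supplied
    RETURN NEW;                            -- the (possibly changed) row
END;
$$;

CREATE TRIGGER t_fill_modified
BEFORE INSERT OR UPDATE ON t
FOR EACH ROW EXECUTE FUNCTION fill_modified();
```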
Instead of Row
Is triggered: instead of the action on the row (for views)
Return value: a row (possibly modified), shown in RETURNING;
NULL cancels the operation
TG variables: OLD (update, delete), NEW (insert, update)
INSTEAD OF triggers are very similar to BEFORE triggers, but they can
be defined only for views and fire instead of the specified operation, not
before it.
Such triggers usually perform operations on the base tables for views. The
trigger can also return a modified NEW value: this value will be available
when performing an operation with the RETURNING clause.
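Such a trigger might be sketched as follows (the view, the base table, and all names are invented for the example):

```sql
CREATE TABLE t (id integer, s text);
CREATE VIEW v AS SELECT id, s FROM t;

CREATE FUNCTION v_insert() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    -- translate the insert on the view into an insert on the base table
    INSERT INTO t(id, s) VALUES (NEW.id, NEW.s);
    RETURN NEW;  -- this value appears in RETURNING, if any
END;
$$;

CREATE TRIGGER v_on_insert
INSTEAD OF INSERT ON v
FOR EACH ROW EXECUTE FUNCTION v_insert();
```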
After Row
Is triggered: after the operation (rows satisfying the WHEN condition are queued)
Return value: is ignored
TG variables: OLD, OLD TABLE (update, delete); NEW, NEW TABLE (insert, update)
Just as BEFORE ROW, AFTER ROW triggers fire for each affected row; but it
happens only after the whole operation is complete, not immediately after
processing the row. For this purpose, all events are placed in a queue and
processed after the operation has finished. The fewer events get queued,
the smaller the overhead incurred; that is why it is recommended to use
the WHEN clause in this case to filter out the rows that are definitely
not needed.
The return value of the AFTER ROW triggers is ignored (because the
operation is already complete).
The trigger function gets its context via the following variables:
- OLD: the old value of the row (undefined for insertion)
- NEW: the updated value of the row (undefined for deletion)
Apart from these variables, in PostgreSQL 10 or higher, the trigger function
can get access to special transition tables. The table listed in the trigger
definition as OLD TABLE contains old values of the rows processed by the
trigger; NEW TABLE provides updated values of these rows.
Regular TG variables are also available.
After Statement
Is triggered: after the operation (even if none of the rows are affected)
Return value: is ignored
TG variables: OLD TABLE (update, delete); NEW TABLE (insert, update)
The AFTER STATEMENT trigger fires after the operation has completed
(including all the AFTER ROW triggers, if any). This trigger fires only once
regardless of the number of affected rows.
The return value of the trigger function is ignored.
The call context is passed using transition tables. The trigger function can
access these tables to analyze all the affected rows. Transition tables are
usually used with AFTER STATEMENT rather than AFTER ROW triggers.
Regular TG variables are defined as well.
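A sketch of such a trigger is shown below; transition tables require PostgreSQL 10 or higher, and all names are invented for the example:

```sql
CREATE TABLE t (id integer, s text);

CREATE FUNCTION count_changed() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    -- new_rows contains the updated state of every affected row
    RAISE NOTICE '% rows updated', (SELECT count(*) FROM new_rows);
    RETURN NULL;
END;
$$;

CREATE TRIGGER t_after_upd
AFTER UPDATE ON t
REFERENCING OLD TABLE AS old_rows NEW TABLE AS new_rows
FOR EACH STATEMENT EXECUTE FUNCTION count_changed();
```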
Possible Use Cases
- changing the base tables for views
- consistency checks, including table-level checks
- logging operations for audit purposes
- cascading table updates (denormalization, asynchronous processing, ...)
- operation applicability checks
- correctness checks
- row modifications
How can we put this theory into practice?
BEFORE triggers can be used to check that the operation is valid and to
raise errors if required.
BEFORE ROW triggers can be used to modify a row (for example, fill an
empty field with the required value). It is convenient to use such triggers to
avoid repeating the logic of filling out “technical” fields in each operation, as
well as tweak the application behavior if its code cannot be modified.
INSTEAD OF ROW triggers are used to translate operations on views into
the corresponding operations on the underlying base tables.
AFTER ROW and AFTER STATEMENT triggers can be useful for getting
the exact state after the operation (BEFORE triggers may affect the result,
so the state is not yet clear at this stage):
- to check consistency of the operation
- to perform audit operations, i.e., logging all changes in a separate storage
- to cascade changes to other tables (for example, to update denormalized
data if the base tables have changed, or queue changes for subsequent
processing outside of the current transaction)
If the operation affects multiple rows, it may be more efficient to use an
AFTER STATEMENT trigger with transition tables instead of AFTER ROW
triggers, as it can process changes in batches.
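As an illustration of the audit use case, a row-level trigger could copy every change into a separate log table (the tables and names are invented for the sketch):

```sql
CREATE TABLE t (id integer, s text);
CREATE TABLE audit_log (op text, ts timestamptz, data jsonb);

CREATE FUNCTION audit() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    -- record the operation type, the time, and the row as JSON
    INSERT INTO audit_log
    VALUES (TG_OP, current_timestamp,
            CASE WHEN TG_OP = 'DELETE' THEN to_jsonb(OLD)
                 ELSE to_jsonb(NEW) END);
    RETURN NULL;  -- ignored for AFTER triggers
END;
$$;

CREATE TRIGGER t_audit
AFTER INSERT OR UPDATE OR DELETE ON t
FOR EACH ROW EXECUTE FUNCTION audit();
```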
The code is called implicitly: the execution logic is hard to track.
Visibility rules for volatile trigger functions: the result of BEFORE ROW
and INSTEAD OF ROW triggers is visible.
The order of calling triggers for one and the same event: triggers fire
in alphabetical order.
Infinite looping can occur: a trigger can activate other triggers.
Integrity constraints can be broken: for example, by excluding the rows
that have to be deleted.
Triggers should not be overused. Since they fire implicitly, the logic of the
application becomes obscure, which makes maintenance much harder.
Attempts to use triggers for implementing complex logic usually end poorly.
In some cases, you can use generated columns instead of triggers
(GENERATED ALWAYS AS ... STORED). If applicable, this solution is sure
to be more transparent and easier to implement.
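For instance, a computed column that would otherwise require a BEFORE ROW trigger can be declared directly (generated columns require PostgreSQL 12 or higher; the table is invented for the example):

```sql
-- the total column is maintained by the server itself, no trigger needed
CREATE TABLE items (
    price numeric NOT NULL,
    qty   integer NOT NULL,
    total numeric GENERATED ALWAYS AS (price * qty) STORED
);
```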
There are a number of subtle points related to using triggers; we deliberately
skip their detailed discussion here:
- visibility rules of volatile functions in BEFORE ROW and INSTEAD OF ROW
triggers (do not rely on the order of triggers when accessing a table)
- the order of calling several triggers on one and the same event (do not
aggravate implicit firing of triggers by relying on their exact processing
order)
- a possibility of infinite looping if cascade firing of triggers leads to another
activation of the first trigger
- a risk of integrity constraint violation (for example, referential integrity can
be compromised by skipping a row deleted by the ON DELETE CASCADE
action)
If you see that these subtleties are important for your application, you
should seriously consider redesign.
Event Triggers
An event trigger
is similar to a regular trigger defined on a table, but is a separate object
Trigger functions
convention: a function does not take any parameters,
returns a value of the event_trigger pseudotype
the call context is retrieved using special functions
ddl_command_start: before command execution
ddl_command_end: after command execution
table_rewrite: before rewriting a table
sql_drop: after dropping objects
Event triggers are virtually the same as regular triggers, but instead of firing
on DML operations, they fire on DDL operations (CREATE, ALTER, DROP,
and so on).
Such triggers are not an application development tool; they are mainly used
for database administration purposes. We only mention them here to give
you a complete picture, so we’ll provide a very basic example.
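A basic event trigger might look like this (the names are invented for the sketch; EXECUTE FUNCTION requires PostgreSQL 11 or higher):

```sql
CREATE FUNCTION on_ddl() RETURNS event_trigger
LANGUAGE plpgsql
AS $$
BEGIN
    -- tg_tag holds the command tag, e.g. CREATE TABLE
    RAISE NOTICE 'DDL command executed: %', tg_tag;
END;
$$;

CREATE EVENT TRIGGER log_ddl
ON ddl_command_end
EXECUTE FUNCTION on_ddl();
```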
A trigger is a way to react to a particular event
Using triggers, you can cancel an operation, modify its outcome,
or perform additional actions
Triggers are executed as part of the main transaction;
an error in a trigger aborts this transaction
Using AFTER ROW triggers and transition tables makes processing
more expensive
Everything is good in moderation: complex logic is hard to
debug because of implicit trigger execution
1. Create a trigger that handles updates of the onhand_qty field
in the catalog_v view.
Check that the “Catalog” now allows ordering books.
2. Make sure that the following consistency requirement is met:
the amount of available books cannot be negative (it is
impossible to buy a book if it is not in stock).
Check your implementation carefully, keeping in mind that the
application can be accessed by several users simultaneously.
Task 2. It may seem that it’s enough to define the AFTER trigger on the
operations table to calculate the qty_change sum. However, at the
READ COMMITTED isolation level used in the “Bookstore” application, we will
have to acquire an exclusive lock on this table: otherwise, the check may
not function properly in some scenarios.
Here is a better approach: extend the books table with the onhand_qty
column and create a trigger that modifies onhand_qty values whenever
the operations table changes (that is, effectively perform data
denormalization). You can then define a CHECK constraint on the
onhand_qty column to ensure data consistency. The onhand_qty() function
created earlier is no longer required.
You should pay special attention to setting the initial value, keeping in mind
that the database system may be serving some users while we perform
these changes.
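The approach described above might be sketched as follows. The exact column names (book_id in particular) are assumptions; the real "Bookstore" schema may differ, and setting the initial value is deliberately left out:

```sql
-- the CHECK constraint is what actually enforces consistency
ALTER TABLE books
    ADD COLUMN onhand_qty integer NOT NULL DEFAULT 0
    CHECK (onhand_qty >= 0);

CREATE FUNCTION update_onhand() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    UPDATE books
    SET onhand_qty = onhand_qty + NEW.qty_change
    WHERE book_id = NEW.book_id;
    RETURN NULL;
END;
$$;

CREATE TRIGGER operations_onhand
AFTER INSERT ON operations
FOR EACH ROW EXECUTE FUNCTION update_onhand();
```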
1. Create a trigger that increments the counter (the version field)
by 1 each time a row is updated. If a new row is inserted, the
corresponding counter must be set to 1.
Check the implementation.
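One possible shape of such a trigger (the table t and its columns are invented for the sketch, not the reference solution):

```sql
CREATE TABLE t (id integer, version integer);

CREATE FUNCTION bump_version() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        NEW.version := 1;               -- new rows start at version 1
    ELSE
        NEW.version := OLD.version + 1; -- each update increments the counter
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER t_version
BEFORE INSERT OR UPDATE ON t
FOR EACH ROW EXECUTE FUNCTION bump_version();
```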
2. There are two tables: orders (to store orders) and lines
(to keep order lines).
It is required to perform denormalization: automatically update
the total cost of the order in the orders table once the
corresponding lines have changed.
Create the required triggers using transition tables to minimize
the number of update operations.
Task 2. Use the following commands to create the tables (only the columns
mentioned in the materials are shown; the id columns are assumed to be
integer primary keys):

CREATE TABLE orders (
    id integer PRIMARY KEY,
    total_amount numeric(20,2) NOT NULL DEFAULT 0
);

CREATE TABLE lines (
    id integer PRIMARY KEY,
    order_id int NOT NULL REFERENCES orders(id),
    amount numeric(20,2) NOT NULL
);
The orders.total_amount column must be calculated automatically as
the sum of lines.amount values for all rows related to the corresponding
order.
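For the INSERT case, a transition-table trigger might look like this (UPDATE and DELETE are handled analogously; this is a sketch under the table definitions assumed above, not the reference solution):

```sql
CREATE FUNCTION lines_after_insert() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    -- one UPDATE per affected order, not per inserted line
    UPDATE orders o
    SET total_amount = o.total_amount + l.amount
    FROM (SELECT order_id, sum(amount) AS amount
          FROM new_lines
          GROUP BY order_id) l
    WHERE o.id = l.order_id;
    RETURN NULL;
END;
$$;

CREATE TRIGGER lines_after_insert
AFTER INSERT ON lines
REFERENCING NEW TABLE AS new_lines
FOR EACH STATEMENT EXECUTE FUNCTION lines_after_insert();
```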