New data types
Relevant: when migrating from Firebird versions 2.5, 3.0.
As mentioned earlier, some expressions may return new data types that cannot be interpreted by your application without modification. Such modifications may take a significant amount of time or be beyond your control. To simplify migration to new versions, you can set the DataTypeCompatibility parameter in firebird.conf or databases.conf to the compatibility mode of the required version:
DataTypeCompatibility = 3.0
or
DataTypeCompatibility = 2.5
This is the fastest way to achieve compatibility with the new data types. However, over time you may start to implement support for the new types in your application. Naturally, this will happen gradually: first one type, then another, and so on. In this case, you need to configure the mapping of the types you have not yet handled to other data types. For this, the SET BIND OF statement is used:
SET BIND OF { <type-from> | TIME ZONE } TO { <type-to> | LEGACY | NATIVE | EXTENDED }
The keyword LEGACY in the TO part is used when a data type that is absent in the previous version of Firebird should be represented in a way that old client software can understand (some data loss is possible). The following conversions to LEGACY types exist:
| DataTypeCompatibility | Native type | Legacy type |
|---|---|---|
| 2.5 | BOOLEAN | CHAR(5) |
| 2.5 or 3.0 | DECFLOAT | DOUBLE PRECISION |
| 2.5 or 3.0 | INT128 | BIGINT |
| 2.5 or 3.0 | TIME WITH TIME ZONE | TIME WITHOUT TIME ZONE |
| 2.5 or 3.0 | TIMESTAMP WITH TIME ZONE | TIMESTAMP WITHOUT TIME ZONE |
When setting the DataTypeCompatibility parameter, new data types are converted to legacy types according to the table described above.
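For example, the effect of DataTypeCompatibility = 2.5 on a single connection can be reproduced with explicit SET BIND statements; the sketch below simply mirrors the table above:

```sql
SET BIND OF BOOLEAN TO LEGACY;                  -- returned as CHAR(5)
SET BIND OF DECFLOAT TO LEGACY;                 -- returned as DOUBLE PRECISION
SET BIND OF INT128 TO LEGACY;                   -- returned as BIGINT
SET BIND OF TIME WITH TIME ZONE TO LEGACY;      -- returned as TIME WITHOUT TIME ZONE
SET BIND OF TIMESTAMP WITH TIME ZONE TO LEGACY; -- returned as TIMESTAMP WITHOUT TIME ZONE
```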
A detailed description of this statement is available in the "Firebird 4.0 Release Notes" and the "Firebird 5.0 SQL Language Reference". With it, you can control the mapping of new types in your application by executing the corresponding statement immediately after connecting, or even write an ON CONNECT trigger that executes several such statements.
For example, suppose you have added support for dates and times with time zones to your application, but you do not yet support the INT128 and DECFLOAT types. In this case, you can write the following trigger:
create or alter trigger tr_ac_set_bind
on connect
as
begin
  set bind of int128 to legacy;
  set bind of decfloat to legacy;
end
Consistent reading in READ COMMITTED transactions
Relevant: when migrating from Firebird versions 2.5, 3.0.
Firebird 4 not only introduces consistent reading (READ CONSISTENCY) for queries in READ COMMITTED transactions, but also makes it the default mode for all READ COMMITTED transactions, regardless of their RECORD VERSION or NO RECORD VERSION properties.
This is done to provide users with better behavior - both conforming to the SQL specification and less prone to conflicts. However, this new behavior may also have unexpected side effects.
Perhaps the most important of them is the so-called restart when processing update conflicts. This can lead to some code that is not subject to transactional control being executed multiple times within PSQL. Examples of such code include:

- using external tables, sequences, or context variables;
- sending emails using a UDF;
- using autonomous transactions or external queries.
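For example, a statement like the following (table and sequence names are hypothetical) can increment a sequence more than once per logical update if the engine restarts the statement after an update conflict, because sequence values are not subject to transactional control:

```sql
UPDATE doc
SET state = 'POSTED',
    audit_no = NEXT VALUE FOR seq_audit  -- not undone if the statement is restarted
WHERE id = :id;
```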
Note: In the READ COMMITTED READ CONSISTENCY isolation mode, an update conflict is handled differently: instead of immediately reporting an error, the engine may undo the affected statement and transparently restart it.
More about consistent reading in READ COMMITTED transactions can be found in the "Firebird 4.0 Release Notes".
Another important effect is that unfetched cursors in READ COMMITTED READ CONSISTENCY transactions in Read Only mode now hold back garbage collection. We recommend that you stop using a single long-lived READ COMMITTED READ ONLY transaction in your application and replace it with several such transactions, each active only for as long as necessary.
If the features of the READ CONSISTENCY mode are undesirable for some reason, you can restore the old behavior by setting the configuration parameter ReadConsistency to 0.
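For example, in firebird.conf (or per-database in databases.conf):

```
ReadConsistency = 0
```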
Changes in the optimizer
The optimizer changes in each version of Firebird. Most of these changes are positive, that is, your queries should run faster, but some queries may slow down. You therefore need to test the performance of your application, and where a slowdown appears, a programmer's intervention is required.
For most optimizer changes, you cannot influence the query plan by changing the server configuration. In this case, you can do the following:

- rewrite the SQL query so that it runs faster on the new version of the server;
- create or delete indexes;
- if none of the above helped, create a regression ticket at https://github.com/FirebirdSQL/firebird/issues.
There are a couple of points in the optimizer’s work that can be influenced by changing the configuration:
Using Refetch for sorting wide data sets
Relevant: when migrating from Firebird versions 2.5, 3.0.
Starting from Firebird 4.0, a new access method called Refetch was introduced, which makes it possible to optimize the sorting of wide data sets. A wide data set is one in which the total length of the record's fields is large.
Historically, when performing an external sort, Firebird writes both key fields (i.e., those specified in the ORDER BY or GROUP BY clause) and non-key fields (all other fields referenced within the query) to sort blocks, which are stored either in memory or in temporary files. After the sort is completed, these fields are read back from the sort blocks. This approach is usually considered faster, since records are read from the temporary files in the order corresponding to the sorted records, rather than fetched randomly from data pages. However, if the non-key fields are large (for example, long VARCHARs are used), this increases the size of the sort blocks and thus leads to more I/O operations on the temporary files. Firebird 4 offers an alternative approach (the Refetch access method), in which only the key fields and record DBKEYs are stored inside the sort blocks, and the non-key fields are fetched from the data pages after sorting. This improves sort performance in the case of long non-key fields.
Thus, the plans of your queries that use sorting may change. To control this access method, a new configuration parameter, InlineSortThreshold, was introduced. The value of InlineSortThreshold determines the maximum size of a sort record (in bytes) that can be stored inline, i.e. inside the sort block. Zero means that records are always refetched. The optimal value of this parameter should be determined experimentally. The default value is 1000 bytes.
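For example, to allow wider sort records to be stored inline, you can raise the threshold in firebird.conf (the value below is purely illustrative; the optimal value should be found by testing):

```
InlineSortThreshold = 8192
```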
Consider the following example:
SELECT
field_1, field_2, field_3, field_4
FROM SomeTable
ORDER BY field_1
Before Firebird 4.0, all four fields were always included in the sort blocks. Starting from Firebird 4.0, if the total length of the fields field_1 .. field_4 exceeds the value of InlineSortThreshold, then only field_1 will be included in the sort blocks, and Refetch will be performed afterwards.
Converting OUTER JOINs to INNER JOINs
There are a number of problems with optimizing OUTER JOINs in Firebird.
First, OUTER JOIN can currently be performed by only one join algorithm, NESTED LOOP JOIN; this may change in future versions.
Second, when joining streams with outer joins, the join order is strictly fixed, i.e., the optimizer cannot change it to keep the result correct.
However, if the WHERE condition contains a predicate on a field of the "right" (joined) table that explicitly does not handle the NULL value, then there is no point in the outer join. In this case, starting from Firebird 5.0, such a join is converted to an inner one, which allows the optimizer to apply the full range of available join algorithms.
Suppose you have the following query:
SELECT
COUNT(*)
FROM
HORSE
LEFT JOIN FARM ON FARM.CODE_FARM = HORSE.CODE_FARM
WHERE FARM.CODE_COUNTRY = 1
In Firebird 5.0, such a query will be implicitly converted to an equivalent form:
SELECT
COUNT(*)
FROM
HORSE
JOIN FARM ON FARM.CODE_FARM = HORSE.CODE_FARM
WHERE FARM.CODE_COUNTRY = 1
If you actively used LEFT JOIN as a hint to fix the join order, rewriting many queries in a new way may be problematic. For such developers, there is a configuration parameter, OuterJoinConversion, in firebird.conf or databases.conf. Setting OuterJoinConversion to false disables the transformation of outer joins into inner joins. Note that this parameter is a temporary solution to facilitate migration and may be removed in future versions of Firebird.
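If you need genuine left-join semantics for a particular query without disabling the conversion globally, you can move the predicate from the WHERE condition into the ON clause, where it no longer rejects the NULLs produced by the outer join. A sketch based on the HORSE/FARM example above (note that the result differs: all rows of HORSE are now counted):

```sql
SELECT
  COUNT(*)
FROM
  HORSE
  LEFT JOIN FARM ON FARM.CODE_FARM = HORSE.CODE_FARM
                AND FARM.CODE_COUNTRY = 1
```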
RETURNING, returning multiple records
Starting from Firebird 5.0, the client-side modifying statements INSERT .. SELECT, UPDATE, DELETE, UPDATE OR INSERT, and MERGE containing the RETURNING clause return a cursor, i.e. they are able to return multiple records instead of raising the error "multiple rows in singleton select", as happened before.
During preparation these queries are now described as isc_info_sql_stmt_select, whereas in previous versions they were described as isc_info_sql_stmt_exec_procedure.
Singleton statements INSERT .. VALUES, as well as positioned UPDATE and DELETE statements (i.e., those containing the WHERE CURRENT OF clause), retain the existing behavior and are described as isc_info_sql_stmt_exec_procedure.
However, all these queries, if they are used in PSQL with the RETURNING clause, are still treated as singletons.
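For example, in PSQL a RETURNING ... INTO clause still requires the statement to affect at most one row; the table and variable names below are hypothetical:

```sql
execute block (id integer = ?)
returns (new_a integer)
as
begin
  -- Still a singleton in PSQL: an error is raised
  -- if the UPDATE affects more than one row
  update dest set a = a + 1
  where id = :id
  returning a into :new_a;
  suspend;
end
```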
If your application uses the modifying statements INSERT .. SELECT, UPDATE, DELETE, UPDATE OR INSERT, or MERGE with the RETURNING clause, this may be a source of errors. Make sure that your driver or access component handles such queries correctly; if not, either modify the code (of the application or the component), or wait until an update of the corresponding driver/component that correctly handles these queries is released.
Examples of modifying statements containing RETURNING that return a data set:
INSERT INTO dest(name, val)
SELECT desc, num + 1 FROM src WHERE id_parent = 5
RETURNING id, name, val;
UPDATE dest
SET a = a + 1
WHERE id = ?
RETURNING id, a;
DELETE FROM dest
WHERE price < 0.52
RETURNING id;
MERGE INTO PRODUCT_INVENTORY AS TARGET
USING (
SELECT
SL.ID_PRODUCT,
SUM(SL.QUANTITY)
FROM
SALES_ORDER_LINE SL
JOIN SALES_ORDER S ON S.ID = SL.ID_SALES_ORDER
WHERE S.BYDATE = CURRENT_DATE
AND SL.ID_PRODUCT = :ID_PRODUCT
GROUP BY 1
) AS SRC(ID_PRODUCT, QUANTITY)
ON TARGET.ID_PRODUCT = SRC.ID_PRODUCT
WHEN MATCHED AND TARGET.QUANTITY - SRC.QUANTITY <= 0 THEN
DELETE
WHEN MATCHED THEN
UPDATE SET
TARGET.QUANTITY = TARGET.QUANTITY - SRC.QUANTITY,
TARGET.BYDATE = CURRENT_DATE
RETURNING OLD.QUANTITY, NEW.QUANTITY, SRC.QUANTITY;
Mass Migration Framework for Firebird
If you need to migrate a large number of servers (a hundred or more) smoothly within a few days, consider the Mass Migration Framework for Firebird. It provides professional tools and support to make the migration straightforward.
Conclusion
In this article, we tried to describe the most common problems and their solutions when migrating to Firebird 5.0 from Firebird 2.5, 3.0, and 4.0. We hope that this article will help you migrate your databases and applications to Firebird 5.0 and take advantage of the new version.