BIGINT
BIGINT is a 64-bit integer data type, available only in Dialect 3.
Numbers of the BIGINT type are within the range from -2^63 to 2^63 - 1, or from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
INT128
INT128 is a 128-bit integer data type. This type is not defined in the SQL standard.
Numbers of the INT128 type are within the range from -2^127 to 2^127 - 1.
Constants of integer types can be specified in a hexadecimal format by means of 1 to 8 digits for INTEGER, 9 to 16 hexadecimal digits for BIGINT, and 10 to 32 hexadecimal digits for INT128. Hex representation for writing to SMALLINT is not explicitly supported, but Firebird will transparently convert a hex number to SMALLINT if necessary, provided it falls within the ranges of negative and positive SMALLINT.
The usage and numerical value ranges of hexadecimal notation are described in more detail in the discussion of number constants in the chapter entitled Common Language Elements.
CREATE TABLE WHOLELOTTARECORDS (
ID BIGINT NOT NULL PRIMARY KEY,
DESCRIPTION VARCHAR(32)
);
INSERT INTO MYBIGINTS VALUES (
-236453287458723,
328832607832,
22,
-56786237632476,
0X6F55A09D42, -- 478177959234
0X7FFFFFFFFFFFFFFF, -- 9223372036854775807
0XFFFFFFFFFFFFFFFF, -- -1
0X80000000, -- -2147483648, an INTEGER
0X080000000, -- 2147483648, a BIGINT
0XFFFFFFFF, -- -1, an INTEGER
0X0FFFFFFFF -- 4294967295, a BIGINT
);
The hexadecimal INTEGERs in the above example are automatically cast to BIGINT before being inserted into the table. However, this happens after the numerical value is determined, so 0x80000000 (8 digits) and 0x080000000 (9 digits) will be stored as different BIGINT values.
When composing an expression or specifying an operation, the aim should be to use compatible data types for the operands. When a need arises to use a mixture of data types, it should prompt you to look for a way to convert incompatible operands before subjecting them to the operation. The ability to convert data may well be an issue if you are working with Dialect 1 data.
The CAST function enables explicit conversion between many pairs of data types.
CAST (<expression> AS <target_type>)

<target_type> ::= <domain_or_non_array_type> | <array_datatype>

<domain_or_non_array_type> ::=
  !! See Scalar Data Types Syntax !!

<array_datatype> ::=
  !! See Array Data Types Syntax !!
See also CAST() in Chapter Built-in Scalar Functions.
When you cast to a domain, any constraints declared for it are taken into account, i.e. NOT NULL or CHECK constraints. If the value does not pass the check, the cast will fail.
If TYPE OF is additionally specified — casting to its base type — any domain constraints are ignored during the cast. If TYPE OF is used with a character type (CHAR/VARCHAR), the character set and collation are retained.
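For illustration, a minimal sketch of the difference; the domain name D_POSITIVE is hypothetical:
CREATE DOMAIN D_POSITIVE AS INTEGER CHECK (VALUE > 0);

-- Fails: the value violates the domain's CHECK constraint
SELECT CAST(-5 AS D_POSITIVE) FROM RDB$DATABASE;

-- Succeeds: TYPE OF casts to the base type and ignores the domain constraints
SELECT CAST(-5 AS TYPE OF D_POSITIVE) FROM RDB$DATABASE;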
TYPE OF COLUMN
When operands are cast to the type of a column, the specified column may be from a table or a view.
Only the type of the column itself is used. For character types, the cast includes the character set, but not the collation. The constraints and default values of the source column are not applied.
CREATE TABLE TTT (
S VARCHAR (40)
CHARACTER SET UTF8 COLLATE UNICODE_CI_AI
);
COMMIT;
SELECT
CAST ('I have many friends' AS TYPE OF COLUMN TTT.S)
FROM RDB$DATABASE;
CAST Function
From Data Type | To Data Type |
---|---|
Numeric types |
Numeric types, |
|
|
|
|
|
|
|
|
|
|
To convert string data types to the BOOLEAN type, the value must be (case-insensitive) 'true' or 'false', or NULL.
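A quick sketch of such conversions:
SELECT
  CAST('true' AS BOOLEAN),   -- TRUE
  CAST('FALSE' AS BOOLEAN),  -- FALSE (case-insensitive)
  CAST(NULL AS BOOLEAN)      -- NULL
FROM RDB$DATABASE;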
Important
|
Keep in mind that partial information loss is possible. For instance, when you cast the |
To cast string data types to the DATE, TIME or TIMESTAMP data types, you need the string argument to be one of the predefined datetime mnemonics (see [fblangref50-dtyp-tbl-datetimemnemonics]) or a representation of the date in one of the allowed datetime formats (see Datetime Format Syntax).
Literal | Description |
---|---|
'NOW' | Current date and time |
'TODAY' | Current date |
'TOMORROW' | Current date + 1 (day) |
'YESTERDAY' | Current date - 1 (day) |
Note
|
Casting the date mnemonics For example |
select
cast('04.12.2014' as date) as d1, -- DD.MM.YYYY
cast('04 12 2014' as date) as d2, -- MM DD YYYY
cast('4-12-2014' as date) as d3, -- MM-DD-YYYY
cast('04/12/2014' as date) as d4, -- MM/DD/YYYY
cast('04.12.14' as date) as d5, -- DD.MM.YY
-- DD.MM with current year
cast('04.12' as date) as d6,
-- MM/DD with current year
cast('04/12' as date) as d7,
cast('2014/12/04' as date) as d8, -- YYYY/MM/DD
cast('2014 12 04' as date) as d9, -- YYYY MM DD
cast('2014.12.04' as date) as d10, -- YYYY.MM.DD
cast('2014-12-04' as date) as d11, -- YYYY-MM-DD
cast('4 Jan 2014' as date) as d12, -- DD MM YYYY
cast('2014 Jan 4' as date) as dt13, -- YYYY MM DD
cast('Jan 4 2014' as date) as dt14, -- MM DD YYYY
cast('11:37' as time) as t1, -- HH:mm
cast('11:37:12' as time) as t2, -- HH:mm:ss
cast('11:31:12.1234' as time) as t3, -- HH:mm:ss.nnnn
-- DD.MM.YYYY HH:mm
cast('04.12.2014 11:37' as timestamp) as dt1,
-- MM/DD/YYYY HH:mm:ss
cast('04/12/2014 11:37:12' as timestamp) as dt2,
-- DD.MM.YYYY HH:mm:ss.nnnn
cast('04.12.2014 11:31:12.1234' as timestamp) as dt3,
cast('now' as timestamp) as m1,
cast('today' as date) as m2,
cast('yesterday' as date) as m3,
cast('tomorrow' as date) as m4
from rdb$database
Firebird allows the use of a shorthand “C-style” type syntax for casts from string to the types DATE, TIME and TIMESTAMP. The SQL standard calls these “datetime literals”.
<data_type> 'date_format_string'
See also Datetime Literals.
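For example:
SELECT
  DATE '2014-12-04',
  TIME '11:37:00',
  TIMESTAMP '2014-12-04 11:37:00.1234'
FROM RDB$DATABASE;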
Note
|
These literal expressions are evaluated directly during parsing, as though the statement were already prepared for execution. As this produced unexpected or confusing results when using datetime mnemonics like 'NOW', the mnemonics are not allowed in these shorthand literals. To use datetime mnemonics, use the full CAST syntax.
|
Implicit data conversion is not possible in Dialect 3 — the CAST function is almost always required to avoid data type clashes.
In Dialect 1, in many expressions, one type is implicitly cast to another without the need to use the CAST function. For instance, the following statement in Dialect 1 is valid:
UPDATE ATABLE
SET ADATE = '25.12.2016' + 1
The string literal will be cast to the DATE type implicitly.
In Dialect 3, this statement will raise error 335544569, “Dynamic SQL Error: expression evaluation not supported, Strings cannot be added or subtracted in dialect 3” — a cast will be needed:
UPDATE ATABLE
SET ADATE = CAST ('25.12.2016' AS DATE) + 1
Or, with a datetime literal:
UPDATE ATABLE
SET ADATE = DATE '25.12.2016' + 1
In Dialect 1, mixing integer data and numeric strings is usually possible because the parser will try to cast the string implicitly. For example, 2 + '1' will be executed correctly.
In Dialect 3, an expression like this will raise an error, so you will need to write it as a CAST expression:
2 + CAST('1' AS SMALLINT)
The exception to the rule is during string concatenation.
When multiple data elements are being concatenated, all non-string data will undergo implicit conversion to string, if possible.
SELECT 30||' days hath September, April, June and November' CONCAT$
FROM RDB$DATABASE;
CONCAT$
------------------------------------------------
30 days hath September, April, June and November
In Firebird, the concept of a “user-defined data type” is implemented in the form of the domain. Creating a domain does not truly create a new data type, of course. A domain provides the means to encapsulate an existing data type with a set of attributes and make this “capsule” available for reuse across the whole database. If several tables need columns defined with identical or nearly identical attributes, a domain makes sense.
Domain usage is not limited to column definitions for tables and views. Domains can be used to declare input and output parameters and variables in PSQL code.
A domain definition has required and optional attributes. The data type is a required attribute. Optional attributes include:
a default value
to allow or forbid NULL
CHECK constraints
character set (for character data types and text BLOB fields)
collation (for character data types)
CREATE DOMAIN BOOL3 AS SMALLINT
CHECK (VALUE IS NULL OR VALUE IN (0, 1));
See Explicit Data Type Conversion for the description of differences in the data conversion mechanism when domains are specified for the TYPE OF and TYPE OF COLUMN modifiers.
While defining a column using a domain, it is possible to override some attributes inherited from the domain. [fblangref50-dtyp-tbl-domoverride] summarises the rules for domain override.
Attribute | Override? | Comments |
---|---|---|
Data type |
No |
|
Default value |
Yes |
|
Text character set |
Yes |
It can also be used to restore the default database values for the column |
Text collation |
Yes |
|
CHECK constraints |
Yes |
To add new conditions to the check, you can use the corresponding CHECK clauses in the CREATE and ALTER statements at the table level |
NOT NULL |
No |
Often it is better to leave the domain nullable in its definition and decide whether to make it NOT NULL when defining the column |
A domain is created with the DDL statement CREATE DOMAIN.
CREATE DOMAIN name [AS] <type>
  [DEFAULT {<literal> | NULL | <context_var>}]
  [NOT NULL] [CHECK (<condition>)]
  [COLLATE <collation>]
See also CREATE DOMAIN in the Data Definition (DDL) Statements chapter.
To change the attributes of a domain, use the DDL statement ALTER DOMAIN. With this statement you can:
rename the domain
change the data type
drop the current default value
set a new default value
drop the NOT NULL constraint
set the NOT NULL constraint
drop an existing CHECK constraint
add a new CHECK constraint
ALTER DOMAIN name
  [{TO new_name}]
  [{SET DEFAULT { <literal> | NULL | <context_var> } | DROP DEFAULT}]
  [{SET | DROP} NOT NULL]
  [{ADD [CONSTRAINT] CHECK (<dom_condition>) | DROP CONSTRAINT}]
  [{TYPE <datatype>}]
ALTER DOMAIN STORE_GRP SET DEFAULT -1;
When changing a domain, its dependencies must be taken into account: whether there are table columns, any variables, input and/or output parameters with the type of this domain declared in the PSQL code. If you change domains in haste, without carefully checking them, your code may stop working!
Important
|
When you convert data types in a domain, you must not perform any conversions that may result in data loss. Also, for example, if you convert |
See also ALTER DOMAIN in the Data Definition (DDL) Statements chapter.
The DDL statement DROP DOMAIN deletes a domain from the database, provided it is not in use by any other database objects.
DROP DOMAIN name
DROP DOMAIN Test_Domain
See also DROP DOMAIN in the Data Definition (DDL) Statements chapter.
This section documents the syntax of declaring data types. Data type declaration commonly occurs in DDL statements, but also in CAST and EXECUTE BLOCK.
The syntax documented below is referenced from other parts of this language reference.
The scalar data types are simple data types that hold a single value. For reasons of organisation, the syntax of BLOB types is defined separately in [fblangref50-datatypes-syntax-blob].
<domain_or_non_array_type> ::=
    <scalar_datatype>
  | <blob_datatype>
  | [TYPE OF] domain
  | TYPE OF COLUMN rel.col

<scalar_datatype> ::=
    SMALLINT | INT[EGER] | BIGINT | INT128
  | REAL | FLOAT [(bin_prec)] | DOUBLE PRECISION
  | DECFLOAT [(dec_prec)]
  | BOOLEAN
  | DATE
  | TIME [{WITHOUT | WITH} TIME ZONE]
  | TIMESTAMP [{WITHOUT | WITH} TIME ZONE]
  | {DECIMAL | DEC | NUMERIC} [(precision [, scale])]
  | {VARCHAR | {CHAR | CHARACTER} VARYING} (length) [CHARACTER SET charset]
  | {CHAR | CHARACTER} [(length)] [CHARACTER SET charset]
  | {NCHAR | NATIONAL {CHARACTER | CHAR}} VARYING (length)
  | {NCHAR | NATIONAL {CHARACTER | CHAR}} [(length)]
  | BINARY [(length)]
  | {VARBINARY | BINARY VARYING} (length)
Argument | Description |
---|---|
domain |
Domain (only non-array domains) |
rel |
Name of a table or view |
col |
Name of a column in a table or view (only columns of a non-array type) |
bin_prec |
Binary precision, default is 24. 1 - 24: 32-bit single precision; 25 - 53: 64-bit double precision |
dec_prec |
Decimal precision of DECFLOAT, either 16 or 34. Default is 34 |
precision |
Numeric precision in decimal digits. From 1 to 38 |
scale |
Scale, or number of decimals. From 0 to 38. It must be less than or equal to precision |
length |
The maximum length of a string, in characters, or — for |
charset |
Character set |
domain_or_non_array_type |
Non-array types that can be used in PSQL code and casts |
A domain name can be specified as the type of a PSQL parameter or local variable. The parameter or variable will inherit all domain attributes. If a default value is specified for the parameter or variable, it overrides the default value specified in the domain definition.
If the TYPE OF clause is added before the domain name, only the data type of the domain is used: any of the other attributes of the domain — NOT NULL constraint, CHECK constraints, default value — are neither checked nor used. However, if the domain is of a text type, its character set and collation are always used.
Input and output parameters or local variables can also be declared using the data type of columns in existing tables and views. The TYPE OF COLUMN clause is used for that, specifying relationname.columnname as its argument.
When TYPE OF COLUMN is used, the parameter or variable inherits only the data type and — for string types — the character set and collation. The constraints and default value of the column are ignored.
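A minimal sketch of both clauses in PSQL; the domain D_CODE and table CUSTOMERS are assumed to exist:
CREATE PROCEDURE GET_CUSTOMER_NAME (
  IN_CODE TYPE OF D_CODE)                   -- only the domain's data type is used
RETURNS (
  OUT_NAME TYPE OF COLUMN CUSTOMERS.NAME)   -- column's type, character set and collation
AS
BEGIN
  SELECT NAME FROM CUSTOMERS
    WHERE CODE = :IN_CODE
    INTO :OUT_NAME;
  SUSPEND;
END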
The BLOB data types hold binary, character or custom format data of unspecified size. For more information, see [fblangref50-datatypes-bnrytypes].
<blob_datatype> ::=
    BLOB [SUB_TYPE {subtype_num | subtype_name}]
      [SEGMENT SIZE seglen] [CHARACTER SET charset]
  | BLOB [(seglen [, subtype_num])]
  | BLOB [(, subtype_num)]
Argument | Description |
---|---|
charset |
Character set (ignored for subtypes other than TEXT) |
subtype_num |
BLOB subtype number |
subtype_name |
BLOB subtype mnemonic name |
seglen |
Segment size, cannot be greater than 65,535, defaults to 80 when not specified. See also [fblangref50-datatypes-seg-size] |
If the SUB_TYPE and CHARACTER SET clauses are absent, then subtype BINARY (or 0) is used. If the SUB_TYPE clause is absent and the CHARACTER SET clause is present, then subtype TEXT (or 1) is used.
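A short sketch of these defaults (the table name is illustrative):
CREATE TABLE NOTES (
  RAW_DATA  BLOB,                     -- SUB_TYPE BINARY (0): no SUB_TYPE, no CHARACTER SET
  BODY      BLOB CHARACTER SET UTF8,  -- SUB_TYPE TEXT (1): CHARACTER SET present
  SUMMARY   BLOB SUB_TYPE TEXT        -- SUB_TYPE TEXT (1), explicitly
);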
The array data types hold multiple scalar values in a single or multi-dimensional array. For more information, see [fblangref50-datatypes-array].
<array_datatype> ::=
    {SMALLINT | INT[EGER] | BIGINT | INT128} <array_dim>
  | {REAL | FLOAT [(bin_prec)] | DOUBLE PRECISION} <array_dim>
  | DECFLOAT [(dec_prec)] <array_dim>
  | BOOLEAN <array_dim>
  | DATE <array_dim>
  | TIME [{WITHOUT | WITH} TIME ZONE] <array_dim>
  | TIMESTAMP [{WITHOUT | WITH} TIME ZONE] <array_dim>
  | {DECIMAL | DEC | NUMERIC} [(precision [, scale])] <array_dim>
  | {VARCHAR | {CHAR | CHARACTER} VARYING} (length) <array_dim> [CHARACTER SET charset]
  | {CHAR | CHARACTER} [(length)] <array_dim> [CHARACTER SET charset]
  | {NCHAR | NATIONAL {CHARACTER | CHAR}} VARYING (length) <array_dim>
  | {NCHAR | NATIONAL {CHARACTER | CHAR}} [(length)] <array_dim>
  | BINARY [(length)] <array_dim>
  | {VARBINARY | BINARY VARYING} (length) <array_dim>

<array_dim> ::= '[' [m:]n [, [m:]n ...] ']'
Argument | Description |
---|---|
array_dim |
Array dimensions |
bin_prec |
Binary precision, default is 24. 1 - 24: 32-bit single precision; 25 - 53: 64-bit double precision |
dec_prec |
Decimal precision of DECFLOAT, either 16 or 34. Default is 34 |
precision |
Numeric precision in decimal digits. From 1 to 38 |
scale |
Scale, or number of decimals. From 0 to 38. It must be less than or equal to precision |
length |
The maximum length of a string, in characters, or — for |
charset |
Character set |
m, n |
Integer numbers defining the index range of an array dimension |
Firebird supports two kinds of floating-point data types: approximate or binary floating-point types (FLOAT and DOUBLE PRECISION), and decimal floating-point types (DECFLOAT).
Approximate floating-point values are stored in an IEEE 754 binary format that comprises sign, exponent and mantissa. Precision is dynamic, corresponding to the physical storage format of the value, which is exactly 4 bytes for the FLOAT type and 8 bytes for DOUBLE PRECISION.
Considering the peculiarities of storing floating-point numbers in a database, these data types are not recommended for storing monetary data.For the same reasons, columns with floating-point data are not recommended for use as keys or to have uniqueness constraints applied to them.
For testing data in columns with floating-point data types, expressions should check using a range, for instance, BETWEEN, rather than searching for exact matches.
When using these data types in expressions, extreme care is advised regarding the rounding of evaluation results.
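A brief sketch of a range test instead of an exact match; the table and column names are illustrative:
-- Avoid: WHERE TEMPERATURE = 36.6 (exact match on an approximate type)
SELECT *
FROM MEASUREMENTS
WHERE TEMPERATURE BETWEEN 36.59 AND 36.61;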
FLOAT
FLOAT [(bin_prec)]
Parameter | Description |
---|---|
bin_prec |
Precision in binary digits, default is 24. 1 - 24: 32-bit single precision; 25 - 53: 64-bit double precision |
The FLOAT data type defaults to a 32-bit single precision floating-point type with an approximate precision of 7 decimal digits after the decimal point (24 binary digits). To ensure the safety of storage, rely on 6 decimal digits of precision.
The syntax FLOAT(bin_prec) behaves as follows:
1 <= bin_prec <= 24: 32-bit single precision (synonym for FLOAT)
25 <= bin_prec <= 53: 64-bit double precision (synonym for DOUBLE PRECISION)
FLOAT (without explicit precision) behaves as the SQL standard type [fblangref50-datatypes-real].
Note
|
Compatibility Notes
|
REAL
REAL
The data type REAL is a synonym for FLOAT, and is provided for syntax compatibility. When used to define a column or parameter, it’s indistinguishable from using FLOAT or FLOAT(1) through FLOAT(24).
Note
|
Compatibility Note
|
DOUBLE PRECISION
DOUBLE PRECISION
The DOUBLE PRECISION data type is stored with an approximate precision of 15 digits.
Note
|
Compatibility Notes
|
Decimal floating-point values are stored in an IEEE 754 decimal format that comprises sign, exponent and coefficient. Contrary to the approximate floating-point data types, precision is either 16 or 34 decimal digits.
DECFLOAT
DECFLOAT [(dec_prec)]
Parameter | Description |
---|---|
dec_prec |
Precision in decimal digits, either 16 or 34. Default is 34. |
DECFLOAT is an SQL standard-compliant numeric type that stores floating-point numbers precisely (decimal floating-point type), unlike FLOAT or DOUBLE PRECISION, which provide a binary approximation of the purported precision.
The type is stored and transmitted as the IEEE 754 standard types Decimal64 (DECFLOAT(16)) or Decimal128 (DECFLOAT(34)).
All intermediate calculations are performed with 34-digit values.
The “16” and “34” refer to the maximum precision in Base-10 digits. See https://en.wikipedia.org/wiki/IEEE_754#Basic_and_interchange_formats for a comprehensive table.
Type | Maximum precision | Minimum Exponent | Maximum Exponent | Smallest value | Largest value |
---|---|---|---|---|---|
DECFLOAT(16) |
16 |
-383 |
+384 |
1E-398 |
9.9..9E+384 |
DECFLOAT(34) |
34 |
-6143 |
+6144 |
1E-6176 |
9.9..9E+6144 |
Observe that although the smallest exponent for DECFLOAT(16) is -383, the smallest value has an exponent of -398, but 15 fewer digits. Similarly for DECFLOAT(34): the smallest exponent is -6143, but the smallest value has an exponent of -6176, with 33 fewer digits. The reason is that precision is “sacrificed” to be able to store a smaller value.
This is a result of how the value is stored: as a decimal value of 16 or 34 digits and an exponent. For example, 1.234567890123456e-383 is stored as coefficient 1234567890123456 and exponent -398, while 1E-398 is stored as coefficient 1, exponent -398.
DECFLOAT Operations
The behaviour of DECFLOAT operations in a session, specifically rounding and error behaviour, can be configured using the SET DECFLOAT management statement, and the isc_dpb_decfloat_round and isc_dpb_decfloat_traps DPB items.
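A hedged sketch of the management statement; the chosen rounding mode and trap conditions here are only examples:
-- Change the rounding mode for subsequent DECFLOAT operations in this session
SET DECFLOAT ROUND HALF_UP;

-- Raise exceptions for these conditions instead of returning special values
SET DECFLOAT TRAPS TO Division_by_zero, Overflow, Invalid_operation;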
DECFLOAT Literals
It is possible to express DECFLOAT(34) values in approximate numeric literals, but only for values with a mantissa of 20 or more digits, or an absolute exponent larger than 308. Scientific notation literals with fewer digits or a smaller absolute exponent are DOUBLE PRECISION literals. Exact numeric literals with 40 or more digits — actually 39 digits, when larger than the maximum INT128 value — are also handled as DECFLOAT(34).
Alternatively, use a string literal and explicitly cast to the desired DECFLOAT type.
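For example:
-- All 34 digits are preserved; a plain numeric literal of this size would
-- otherwise be interpreted according to the rules described above
SELECT CAST('1234567890123456789012345678901234' AS DECFLOAT(34))
FROM RDB$DATABASE;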
The length of DECFLOAT literals cannot exceed 1024 characters. Scientific notation is required for greater values. For example, 0.0<1020 zeroes>11 cannot be used as a literal, but the equivalent in scientific notation, 1.1E-1022, is valid. Similarly, 10<1022 zeroes>0 can be presented as 1.0E1024. Literals with more than 34 significant digits are rounded using the DECFLOAT rounding mode of the session.
DECFLOAT and Functions
A number of standard scalar functions can be used with expressions and values of the DECFLOAT type. They are:
The aggregate functions SUM, AVG, MAX and MIN work with DECFLOAT data, as do all the statistical aggregates (including but not limited to STDDEV or CORR).
Special Functions for DECFLOAT
Firebird supports four functions, designed to support DECFLOAT data specifically:
COMPARE_DECFLOAT compares two DECFLOAT values to be equal, different or unordered
NORMALIZE_DECFLOAT takes a single DECFLOAT argument and returns it in its simplest form
QUANTIZE takes two DECFLOAT arguments and returns the first argument scaled using the second value as a pattern
TOTALORDER performs an exact comparison on two DECFLOAT values
Detailed descriptions are available in the Special Functions for DECFLOAT section of the Built-in Scalar Functions chapter.
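A brief sketch of two of these functions; the results shown in the comments are illustrative, see the chapter referenced above for exact semantics:
SELECT
  NORMALIZE_DECFLOAT(CAST('12.00' AS DECFLOAT(16))),                    -- simplest form, e.g. 1.2E+1
  QUANTIZE(CAST('1234.5678' AS DECFLOAT(16)), CAST('0.01' AS DECFLOAT(16)))  -- scaled to two decimals
FROM RDB$DATABASE;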
Fixed-point data types ensure the predictability of multiplication and division operations, making them the choice for storing monetary values. Firebird implements two fixed-point data types: NUMERIC and DECIMAL.
According to the SQL standard, both types limit the stored number to the declared scale (the number of digits after the decimal point). The standard defines different treatment of the precision for each type: precision for NUMERIC columns is exactly “as declared”, while DECIMAL columns accept numbers whose precision is at least equal to what was declared.
Note
|
The behaviour of both |
For instance, NUMERIC(4, 2) defines a number consisting altogether of four digits, including two digits after the decimal point; that is, it can have up to two digits before the point[1] and no more than two digits after the point. If the number 3.1415 is written to a column with this data type definition, the value of 3.14 will be saved in the NUMERIC(4, 2) column.
The form of declaration for fixed-point data, for instance, NUMERIC(p, s), is common to both types. The s argument in this template is the scale. Understanding the mechanism for storing and retrieving fixed-point data should help to visualise why: for storage, the number is multiplied by 10^s (10 to the power of s), converting it to an integer; when read, the integer is converted back by multiplying by 10^-s (or, dividing by 10^s).
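A small sketch of the 3.1415 example above; the table name is illustrative:
CREATE TABLE PRICE_SAMPLE (VAL NUMERIC(4,2));

INSERT INTO PRICE_SAMPLE VALUES (3.1415);

SELECT VAL FROM PRICE_SAMPLE;  -- returns 3.14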
The method of storing fixed-point data in the database depends on several factors: declared precision, database dialect, declaration type.
Precision | Data type | Dialect 1 | Dialect 3 |
---|---|---|---|
1 - 4 |
|
|
|
1 - 4 |
|
|
|
5 - 9 |
|
|
|
10 - 18 |
|
|
|
19 - 38 |
|
|
|
Note
|
Numerics with precision less than 19 digits use SMALLINT, INTEGER, BIGINT or DOUBLE PRECISION as the base data type, depending on the number of digits and the SQL dialect. For complex calculations, those digits are cast internally to DECFLOAT(34). The result of various mathematical operations, such as |
[1] In practice, a NUMERIC(4, 2) column is stored as a SMALLINT, so it can actually hold values in the range [-327.68, 327.67].
NUMERIC
NUMERIC [(precision [, scale])]
Parameter | Description |
---|---|
precision |
Precision, between 1 and 38. Defaults to 9. |
scale |
Scale, between 0 and precision. Defaults to 0. |
Further to the explanation above, Firebird will store NUMERIC data according to the declared precision and scale. Some more examples are:
NUMERIC(4)     stored as SMALLINT (exact data)
NUMERIC(4,2)   stored as SMALLINT (data * 10^2)
NUMERIC(10,4)  stored as DOUBLE PRECISION (Dialect 1)
                      or BIGINT (data * 10^4) (Dialect 3)
NUMERIC(38,6)  stored as INT128 (data * 10^6)
Caution
|
Always keep in mind that the storage format depends on the precision. For instance, you define the column type as |
DECIMAL
{ DECIMAL | DEC } [(precision [, scale])]
Parameter | Description |
---|---|
precision |
Precision, between 1 and 38. Defaults to 9. |
scale |
Scale, between 0 and precision. Defaults to 0. |
The storage format in the database for DECIMAL is similar to NUMERIC, with some differences that are easier to observe with the help of some more examples:
DECIMAL(4)     stored as INTEGER (exact data)
DECIMAL(4,2)   stored as INTEGER (data * 10^2)
DECIMAL(10,4)  stored as DOUBLE PRECISION (Dialect 1)
                      or BIGINT (data * 10^4) (Dialect 3)
DECIMAL(38,6)  stored as INT128 (data * 10^6)
The DATE, TIME and TIMESTAMP data types are used to work with data containing dates and times.
Time zone support is available using the types TIME WITH TIME ZONE and TIMESTAMP WITH TIME ZONE. In this language reference, we’ll use TIME and TIMESTAMP to refer both to the specific types without time zone — TIME [WITHOUT TIME ZONE] and TIMESTAMP [WITHOUT TIME ZONE] — and to aspects of both the without time zone and with time zone types; which one we mean is usually clear from the context.
Important
|
The data types |
Dialect 3 supports all five types, while Dialect 1 has only DATE. The DATE type in Dialect 3 is “date-only”, whereas the Dialect 1 DATE type stores both date and time-of-day, equivalent to TIMESTAMP in Dialect 3. Dialect 1 has no “date-only” type.
Note
|
Dialect 1 |
If fractions of seconds are stored in date and time data types, Firebird stores them to ten-thousandths of a second (100 microseconds or deci-milliseconds). If a lower granularity is preferred, the fraction can be specified explicitly as thousandths, hundredths or tenths of a second, or second, in Dialect 3 databases of ODS 11 or higher.
Note
|
Some Useful Knowledge about Sub-second Precision
The time-part of a
Deci-milliseconds precision is not supported by all drivers and access components. The best assumption to make from all this is that, although Firebird stores |
The time zone types are stored as values at UTC (offset 0), using the structure of TIME or TIMESTAMP plus two extra bytes for time zone information (either an offset in minutes, or the id of a named time zone). Storing as UTC allows Firebird to index and compare two values in different time zones.
Storing at UTC has some caveats:
When you use named zones, and the time zone rules for that zone change, the UTC time stays the same, but the local time in the named zone may change.
For TIME WITH TIME ZONE, calculating a time zone offset for a named zone to get the local time in the zone applies the rules valid at the 1st of January 2020 to ensure a stable value. This may result in unexpected or confusing results.
When the rules of a named time zone change, a value in the affected date range may no longer match the intended value if the actual offset in that named zone changes.
DATE
DATE
The DATE data type in Dialect 3 stores only the date, without time. The available range for storing data is from January 01, 1 to December 31, 9999.
In Dialect 1, DATE is an alias for [fblangref50-datatypes-timestamp]. Dialect 1 has no “date-only” type.
Tip
|
In Dialect 1, date literals without a time part, as well as casts of date mnemonics If you need to store a Dialect 1 timestamp literal with an explicit zero time-part, the engine will accept a literal like |
Internally, Firebird stores dates in a 32-bit integer as a Modified Julian Date, or the number of days since 1858-11-17.An additional restriction is imposed, limiting valid dates to the range from 0001-01-01 AD to 9999-12-31 AD.
TIME
TIME [{ WITHOUT | WITH } TIME ZONE]
For a bare TIME, WITHOUT TIME ZONE is assumed.
The TIME data type is available in Dialect 3 only. It stores the time of day within the range from 00:00:00.0000 to 23:59:59.9999.
If you need to get the time-part from DATE in Dialect 1, you can use the EXTRACT function.
EXTRACT()
EXTRACT (HOUR FROM DATE_FIELD)
EXTRACT (MINUTE FROM DATE_FIELD)
EXTRACT (SECOND FROM DATE_FIELD)
See also the EXTRACT() function in the chapter entitled Built-in Scalar Functions.
TIME [WITHOUT TIME ZONE]
The TIME data type (or synonym TIME WITHOUT TIME ZONE) represents a time without time zone information.
TIME WITH TIME ZONE
The TIME WITH TIME ZONE data type represents a time with time zone information (either an offset or a named zone).
Firebird uses the ICU implementation of the IANA Time Zone Database for named zones.
EXTRACT()
EXTRACT (TIMEZONE_HOUR FROM TIME_TZ_FIELD)
EXTRACT (TIMEZONE_MINUTE FROM TIME_TZ_FIELD)
TIMESTAMP
TIMESTAMP [{ WITHOUT | WITH } TIME ZONE]
For a bare TIMESTAMP, WITHOUT TIME ZONE is assumed.
The TIMESTAMP data type is available in Dialect 3 and Dialect 1. It comprises two 32-bit integers — a date-part and a time-part — to form a structure that stores both date and time-of-day. In Dialect 1, [fblangref50-datatypes-date] is an alias for TIMESTAMP.
The EXTRACT function works equally well with TIMESTAMP as with the Dialect 1 DATE type.
TIMESTAMP [WITHOUT TIME ZONE]
The TIMESTAMP data type (or synonym TIMESTAMP WITHOUT TIME ZONE) represents a time and date without time zone information.
TIMESTAMP WITH TIME ZONE
The TIMESTAMP WITH TIME ZONE data type represents a date and time with time zone information (either an offset or a named zone).
As the name implies, the session time zone can be different for each database attachment. It can be set explicitly in the DPB with the item isc_dpb_session_time_zone; otherwise, by default, it uses the same time zone as the operating system of the Firebird server process. This default can be overridden in firebird.conf, by setting DefaultTimeZone.
Note
|
Drivers may apply different defaults, for example specifying the client time zone as the default session time zone.Check your driver documentation for details. |
Subsequently, the time zone can be changed to a given time zone using a SET TIME ZONE statement or reset to its original value with SET TIME ZONE LOCAL or ALTER SESSION RESET.
A time zone is specified as a string, either a time zone region (for example, 'America/Sao_Paulo') or a displacement from GMT in hours:minutes (for example, '-03:00').
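For example:
-- Named region
SET TIME ZONE 'America/Sao_Paulo';

-- Fixed displacement from GMT
SET TIME ZONE '-03:00';

-- Back to the original session time zone
SET TIME ZONE LOCAL;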
A time/timestamp with time zone is considered equal to another time/timestamp with time zone if their conversions to UTC are equivalent. For example, time '10:00 -02:00' and time '09:00 -03:00' are equivalent, since both are the same as time '12:00 GMT'.
Important
|
The same equivalence applies in |
The method of storing date and time values makes it possible to involve them as operands in some arithmetic operations. In storage, a date value or date-part of a timestamp is represented as the number of days elapsed since “date zero” — November 17, 1858 — whilst a time value or the time-part of a timestamp is represented as the number of deci-milliseconds (100 microseconds) since midnight.
An example is to subtract an earlier date, time or timestamp from a later one, resulting in an interval of time, in days and fractions of days.
Operand 1 | Operation | Operand 2 | Result |
---|---|---|---|
|
|
|
|
|
|
|
|
|
|
Numeric value |
|
|
|
|
|
|
|
|
|
|
|
Numeric value |
|
|
|
Numeric value |
|
|
|
Numeric value |
|
|
|
Numeric value |
|
|
|
|
Number of days elapsed, within the range |
|
|
Numeric value |
|
|
|
|
Number of seconds elapsed, within the range |
|
|
|
The without time zone value is converted to WITH TIME ZONE in the current session time zone. Number of seconds elapsed between the UTC values, within the range |
|
|
|
Number of seconds elapsed between the UTC values, within the range |
|
|
Numeric value |
|
|
|
|
Number of days and part-day, within the range |
|
|
|
The without time zone value is converted to WITH TIME ZONE in the current session time zone. Number of days and part-day between UTC values, within the range |
|
|
|
Number of days and part-day between UTC values, within the range |
|
|
Numeric value |
|
Note
|
Notes
In Dialect 1, the |
Firebird provides a number of features to discover time zone information.
RDB$TIME_ZONES
A virtual table listing time zones supported in the engine.
See also RDB$TIME_ZONES
in System Tables.
RDB$TIME_ZONE_UTIL
A package of time zone utility functions and procedures.
See also RDB$TIME_ZONE_UTIL
in System Packages.
Time zone rules are changed from time to time: when that happens, it is desirable to update the time zone database as soon as possible.
Firebird stores WITH TIME ZONE values translated to UTC time. Suppose a value is created with one time zone database, and a later update of that database changes the information in the range of our stored value. When that value is read, it will be returned as different to the value that was stored initially.
Firebird uses the IANA time zone database through the ICU library. The ICU library included in the Firebird kit (Windows), or installed in a POSIX operating system, can sometimes have an outdated time zone database.
An updated database can be found on this page on the FirebirdSQL GitHub. Filename le.zip stands for little-endian and is the necessary file for most computer architectures (Intel/AMD compatible x86 or x64), while be.zip stands for big-endian architectures and is necessary mostly for RISC computer architectures. The content of the zip file must be extracted into the /tzdata sub-directory of the Firebird installation, overwriting existing *.res files belonging to the database.
Note
|
|
For working with character data, Firebird has the fixed-length CHAR and the variable-length VARCHAR data types. The maximum size of text data stored in these data types is 32,767 bytes for CHAR and 32,765 bytes for VARCHAR. The maximum number of characters that will fit within these limits depends on the character set used for the data under consideration. The collation does not affect this maximum, although it may affect the maximum size of any index that involves the column.
If no character set is explicitly specified when defining a character object, the default character set of the database — at the time of defining the object — will be used. If the database does not have a default character set defined, the object gets the character set NONE.
Most current development tools support Unicode, implemented in Firebird with the character sets UTF8 and UNICODE_FSS. UTF8 comes with collations for many languages. UNICODE_FSS is more limited and was previously used mainly by Firebird internally for storing metadata. Keep in mind that one UTF8 character occupies up to 4 bytes, thus limiting the size of CHAR fields to 8,191 characters (32,767/4).
Note
|
The actual “bytes per character” value depends on the range the character belongs to. Non-accented Latin letters occupy 1 byte, Cyrillic letters from the |
The UTF8 character set implemented in Firebird supports the latest version of the Unicode standard, thus recommending its use for international databases.
While working with strings, it is essential to keep the character set of the client connection in mind. If there is a mismatch between the character sets of the stored data and that of the client connection, the output results for string columns are automatically re-encoded, both when data are sent from the client to the server and when they are sent back from the server to the client. For example, if the database was created in the WIN1251 encoding but KOI8R or UTF8 is specified in the client’s connection parameters, the mismatch will be transparent.
NONE
The character set NONE is a special character set in Firebird. It can be characterized such that each byte is a part of a string, but the string is stored in the system without any clues about what constitutes any character: character encoding, collation, case, etc. are simply unknown. It is the responsibility of the client application to deal with the data and provide the means to interpret the string of bytes in some way that is meaningful to the application and the human user.
OCTETS
Data in OCTETS encoding are treated as bytes that may not be interpreted as characters. OCTETS provides a way to store binary data. The database engine has no concept of what it is meant to do with a string of bytes in OCTETS, other than store and retrieve it. Again, the client side is responsible for validating the data, presenting them in formats that are meaningful to the application and its users and handling any exceptions arising from decoding and encoding them. CHAR and VARCHAR with character set OCTETS have the synonyms BINARY and VARBINARY.
Each character set has a default collation (COLLATE) that specifies the collation order (or collation sequence, or collating sequence). Usually, it provides nothing more than ordering based on the numeric code of the characters and a basic mapping of upper- and lower-case characters. If some behaviour is needed for strings that is not provided by the default collation and a suitable alternative collation is supported for that character set, a COLLATE collation clause can be specified in the column definition.
A COLLATE collation clause can be applied in other contexts besides the column definition. For comparison operations, it can be added in the WHERE clause of a SELECT statement. If output needs to be sorted in a special alphabetic sequence, or case-insensitively, and the appropriate collation exists, then a COLLATE clause can be included with the ORDER BY clause when rows are being sorted on a character field, and with the GROUP BY clause in case of grouping operations.
For a case-insensitive search, the UPPER function could be used to convert both the search argument and the searched strings to upper-case before attempting a match:
...
where upper(name) = upper(:flt_name)
For strings in a character set that has a case-insensitive collation available, you can apply the collation, to compare the search argument and the searched strings directly. For example, using the WIN1251 character set, the collation PXW_CYRL is case-insensitive for this purpose:
...
WHERE FIRST_NAME COLLATE PXW_CYRL >= :FLT_NAME
...
ORDER BY NAME COLLATE PXW_CYRL
UTF8 Collations
The following table shows the possible collations for the UTF8 character set.
Collation | Characteristics |
---|---|
|
Collation works according to the position of the character in the table (binary). |
|
Collation works according to the UCA algorithm (Unicode Collation Algorithm) (alphabetical). |
|
The default, binary collation, identical to |
|
Case-insensitive collation, works without taking character case into account. |
|
Case-insensitive, accent-insensitive collation, works alphabetically without taking character case or accents into account. |
An example of collation for the UTF8 character set without taking into account the case or accentuation of characters (similar to COLLATE PXW_CYRL in the earlier example).
...
ORDER BY NAME COLLATE UNICODE_CI_AI
The maximum length for an index key equals one quarter of the page size, i.e. from 1,024 — for page size 4,096 — to 8,192 bytes — for page size 32,768. The maximum length of an indexed string is 9 bytes less than that quarter-page limit.
The following formula calculates the maximum length of an indexed string (in characters):
max_char_length = FLOOR((page_size / 4 - 9) / N)
where N is the number of bytes per character in the character set.
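For example, with a page size of 8,192 bytes and a 4-byte-per-character set such as UTF8, the limit is FLOOR((8192 / 4 - 9) / 4) = FLOOR(509.75) = 509 characters, which matches the table below.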
The table below shows the maximum length of an indexed string (in characters), according to page size and character set, calculated using this formula.
Page Size |
Bytes per character |
||||
---|---|---|---|---|---|
1 |
2 |
3 |
4 |
6 |
|
4,096 |
1,015 |
507 |
338 |
253 |
169 |
8,192 |
2,039 |
1,019 |
679 |
509 |
339 |
16,384 |
4,087 |
2,043 |
1,362 |
1,021 |
681 |
32,768 |
8,183 |
4,091 |
2,727 |
2,045 |
1,363 |
Note
|
With case-insensitive collations (“_CI”), one character in the index key will occupy not 4, but 6 (six) bytes, so the maximum key length for a page of — for example — 4,096 bytes, will be 169 characters. |
BINARY
BINARY [(length)]
Parameter | Description |
---|---|
length |
Length in bytes between 1 and 32,767; defaults to 1 |
BINARY is a fixed-length binary data type, and is an SQL standard-compliant alias for CHAR(length) CHARACTER SET OCTETS. Values shorter than the declared length are padded with NUL (0x00) up to the declared length.
Note
|
Some tools may report the type as |
CHAR
{CHAR | CHARACTER} [(length)] [CHARACTER SET <set>] [COLLATE <name>]
Parameter | Description |
---|---|
length |
Length in characters, defaults to 1 |
set |
Character set name |
name |
Collation name |
CHAR is a fixed-length character data type. Values shorter than the declared length are padded with spaces up to the declared length. The pad character does not have to be a space (0x20): it depends on the character set. For example, the pad character for the OCTETS character set is NUL (0x00).
Fixed-length character data can be used to store codes whose length is standard and has a definite “width”.An example of such a code is an EAN13 barcode — 13 characters, all filled.
Note
|
|
VARBINARY
{VARBINARY | BINARY VARYING} (length)
Parameter | Description |
---|---|
length |
Length in bytes between 1 and 32,765 |
VARBINARY is a variable-length binary type, and is an SQL standard-compliant alias for VARCHAR(length) CHARACTER SET OCTETS.
Note
|
Some tools may report the type as |
VARCHAR
{VARCHAR | {CHAR | CHARACTER} VARYING} (length) [CHARACTER SET <set>] [COLLATE <name>]
Parameter | Description |
---|---|
length |
Length in characters. A valid length is from 1 to the maximum number of characters that can be accommodated within 32,765 bytes. |
set |
Character set name |
name |
Collation name |
VARCHAR is a variable-length character data type, up to a maximum of 32,765 bytes. The stored structure is equal to the actual size of the data plus 2 bytes to record the length of the data.
All characters that are sent from the client application to the database are considered meaningful, including leading and trailing spaces.
Note
|
|
NCHAR
{NCHAR | NATIONAL {CHAR | CHARACTER}} [(length)]
NCHAR is a fixed-length character data type with the ISO8859_1 character set. In all other respects it is the same as CHAR.
Note
|
If no length is specified, it is taken to be 1. |
A similar data type is available for the variable-length string type: NATIONAL {CHAR | CHARACTER} VARYING.
BOOLEAN
BOOLEAN
The SQL-compliant BOOLEAN data type (8 bits) comprises the distinct truth values TRUE and FALSE. Unless prohibited by a NOT NULL constraint, the BOOLEAN data type also supports the truth value UNKNOWN as the null value. The specification does not make a distinction between the NULL value of this data type and the truth value UNKNOWN that is the result of an SQL predicate, search condition, or Boolean value expression: they may be used interchangeably to mean the same thing.
As with many programming languages, the SQL BOOLEAN values can be tested with implicit truth values. For example, field1 OR field2 and NOT field1 are valid expressions.
IS Operator
Predicates can use the Boolean operator IS [NOT] for matching. For example, field1 IS FALSE, or field1 IS NOT TRUE.
Note
|
|
BOOLEAN Examples
Inserting and selecting
CREATE TABLE TBOOL (ID INT, BVAL BOOLEAN);
COMMIT;
INSERT INTO TBOOL VALUES (1, TRUE);
INSERT INTO TBOOL VALUES (2, 2 = 4);
INSERT INTO TBOOL VALUES (3, NULL = 1);
COMMIT;
SELECT * FROM TBOOL;
ID BVAL
============ =======
1 <true>
2 <false>
3 <null>
Test for TRUE value
SELECT * FROM TBOOL WHERE BVAL;
ID BVAL
============ =======
1 <true>
Test for FALSE value
SELECT * FROM TBOOL WHERE BVAL IS FALSE;
ID BVAL
============ =======
2 <false>
Test for UNKNOWN value
SELECT * FROM TBOOL WHERE BVAL IS UNKNOWN;
ID BVAL
============ =======
3 <null>
Boolean values in the SELECT list
SELECT ID, BVAL, BVAL AND ID < 2
FROM TBOOL;
ID BVAL
============ ======= =======
1 <true> <true>
2 <false> <false>
3 <null> <false>
PSQL declaration with start value
DECLARE VARIABLE VAR1 BOOLEAN = TRUE;
Valid syntax, but as with a comparison with NULL, will never return any record
SELECT * FROM TBOOL WHERE BVAL = UNKNOWN; SELECT * FROM TBOOL WHERE BVAL <> UNKNOWN;
Although BOOLEAN is not inherently convertible to any other data type, the strings 'true' and 'false' (case-insensitive) will be implicitly cast to BOOLEAN in value expressions. For example:
if (true > 'false') then ...
The value 'false' is converted to BOOLEAN. Any attempt to use the Boolean operators AND, NOT, OR and IS will fail. NOT 'False', for example, is invalid.
A BOOLEAN can be explicitly converted to and from string with CAST. UNKNOWN is not available for any form of casting.
Note
|
Other Notes
|
Note
|
The types [fblangref50-datatypes-chartypes-binary] and [fblangref50-datatypes-chartypes-varbinary] are covered earlier in section [fblangref50-datatypes-chartypes]. |
BLOBs (Binary Large Objects) are complex structures used to store text and binary data of an undefined length, often very large.
BLOB [SUB_TYPE <subtype>] [SEGMENT SIZE <segment size>] [CHARACTER SET <character set>] [COLLATE <collation name>]
If the SUB_TYPE and CHARACTER SET clauses are absent, then subtype BINARY (or 0) is used. If the SUB_TYPE clause is absent and the CHARACTER SET clause is present, then subtype TEXT (or 1) is used.
BLOB (<segment size>) BLOB (<segment size>, <subtype>) BLOB (, <subtype>)
Note
|
Formally, the |
Specifying the BLOB segment size is a throwback to times past, when applications for working with BLOB data were written in C (Embedded SQL) with the help of the gpre pre-compiler. Nowadays, it is effectively irrelevant. The segment size for BLOB data is determined by the client side and is usually larger than the data page size, in any case.
BLOB Subtypes
The optional SUB_TYPE parameter specifies the nature of data written to the column. Firebird provides two pre-defined subtypes for storing user data:
BINARY
If a subtype is not specified, the specification is assumed to be for untyped data and the default SUB_TYPE BINARY (or SUB_TYPE 0) is applied. This is the subtype to specify when the data are any form of binary file or stream: images, audio, word-processor files, PDFs and so on.
TEXT
Subtype 1 has an alias, TEXT, which can be used in declarations and definitions. For instance, BLOB SUB_TYPE TEXT (or BLOB SUB_TYPE 1). It is a specialized subtype used to store plain text data that is too large to fit into a string type. A CHARACTER SET may be specified, if the field is to store text with a different encoding to that specified for the database. A COLLATE clause is also supported.
Specifying CHARACTER SET without specifying a SUB_TYPE implies SUB_TYPE TEXT.
It is also possible to add custom data subtypes, for which the range of enumeration from -1 to -32,768 is reserved. Custom subtypes enumerated with positive numbers are not allowed, as the Firebird engine uses the numbers from 2 upward for some internal subtypes in metadata. Custom subtype aliases can be inserted into the RDB$TYPES table by users with the system privilege CREATE_USER_TYPES.
BLOB Specifics
The maximum size of a BLOB field depends on the page size of the database, whether the blob value is created as a stream blob or a segmented blob, and, if segmented, the actual segment sizes used when populating the blob. For most built-in functions, the maximum size of a BLOB field is 4 GB, or data beyond the 4 GB limit is not addressable. For a page size of 4 KB (4096 bytes) the maximum size is slightly less than 4 GB.
Text BLOBs of any length and any character set — including multi-byte — can be operands for practically any statement or internal functions.The following operators are fully supported:
= |
(assignment) |
=, <>, <, <=, >, >= |
(comparison) |
|
(concatenation) |
|
|
|
As an efficient alternative to concatenation, you can also use BLOB_APPEND() or the functions and procedures of the system package RDB$BLOB_UTIL.
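A hedged sketch of BLOB_APPEND; the table and column names are illustrative:
-- Appends the pieces to the existing blob value without building a new
-- intermediate blob for every concatenation step
UPDATE DOCS
SET BODY = BLOB_APPEND(BODY, ' -- reviewed on ', CAST(CURRENT_DATE AS VARCHAR(10)));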
Partial support:
An error occurs with these if the search argument is larger than or equal to 32 KB:
|
Aggregation clauses work not on the contents of the field itself, but on the BLOB ID. Aside from that, there are some quirks:
|
returns several NULL values by mistake if they are present |
|
— |
|
concatenates the same strings if they are adjacent to each other, but does not do it if they are remote from each other |
BLOB Storage
By default, a regular record is created for each BLOB, and it is stored on a data page that is allocated for it. If the entire BLOB fits onto this page, it is called a level 0 BLOB. The number of this special record is stored in the table record and occupies 8 bytes.
If a BLOB does not fit onto one data page, its contents are put onto separate pages allocated exclusively to it (blob pages), while the numbers of these pages are stored in the BLOB record. This is a level 1 BLOB.
If the array of page numbers containing the BLOB data does not fit onto a data page, the array is put on separate blob pages, while the numbers of these pages are put into the BLOB record. This is a level 2 BLOB.
Levels higher than 2 are not supported.
Note
|
Firebird does not offer much in the way of language or tools for working with the contents of arrays, and there are no plans to improve this. This limits the usefulness and accessibility of array types. Therefore, the general advice is: do not use arrays. |
The support of arrays in the Firebird DBMS is a departure from the traditional relational model. Supporting arrays in the DBMS could make it easier to solve some data-processing tasks involving large sets of similar data.
Arrays in Firebird are stored in a BLOB of a specialized type. Arrays can be one-dimensional or multi-dimensional and of any data type except BLOB and ARRAY.
CREATE TABLE SAMPLE_ARR (
ID INTEGER NOT NULL PRIMARY KEY,
ARR_INT INTEGER [4]
);
This example will create a table with a field of the array type consisting of four integers. The subscripts of this array are from 1 to 4.
By default, dimensions are 1-based — subscripts are numbered from 1. To specify explicit upper and lower bounds of the subscript values, use the following syntax:
'[' <lower>:<upper> ']'
A new dimension is added using a comma in the syntax. In this example we create a table with a two-dimensional array, with the lower bound of subscripts in both dimensions starting from zero:
CREATE TABLE SAMPLE_ARR2 (
ID INTEGER NOT NULL PRIMARY KEY,
ARR_INT INTEGER [0:3, 0:3]
);
The database employee.fdb, found in the ../examples/empbuild directory of any Firebird distribution package, contains a sample stored procedure showing some simple work with arrays:
SHOW_LANGS, a procedure involving an array
CREATE OR ALTER PROCEDURE SHOW_LANGS (
CODE VARCHAR(5),
GRADE SMALLINT,
CTY VARCHAR(15))
RETURNS (LANGUAGES VARCHAR(15))
AS
DECLARE VARIABLE I INTEGER;
BEGIN
I = 1;
WHILE (I <= 5) DO
BEGIN
SELECT LANGUAGE_REQ[:I]
FROM JOB
WHERE (JOB_CODE = :CODE)
AND (JOB_GRADE = :GRADE)
AND (JOB_COUNTRY = :CTY)
AND (LANGUAGE_REQ IS NOT NULL)
INTO :LANGUAGES;
IF (LANGUAGES = '') THEN
/* PRINTS 'NULL' INSTEAD OF BLANKS */
LANGUAGES = 'NULL';
I = I +1;
SUSPEND;
END
END
If the features described are enough for your tasks, you might consider using arrays in your projects. Currently, no improvements are planned to enhance support for arrays in Firebird.
“Special” data types …
SQL_NULL Data Type
The SQL_NULL type holds no data, but only a state: NULL or NOT NULL. It is not available as a data type for declaring table fields, PSQL variables or parameter descriptions. This data type exists to support the use of untyped parameters in expressions involving the IS NULL predicate.
An evaluation problem occurs when optional filters are used to write queries of the following type:
WHERE col1 = :param1 OR :param1 IS NULL
After processing, at the API level, the query will look like this:
WHERE col1 = ? OR ? IS NULL
This is a case where the developer writes an SQL query and considers :param1 as though it were a variable that they can refer to twice. However, at the API level, the query contains two separate and independent parameters. The server cannot determine the type of the second parameter since it comes in association with IS NULL.
The SQL_NULL data type solves this problem. Whenever the engine encounters an “? IS NULL” predicate in a query, it assigns the SQL_NULL type to the parameter, which will indicate that the parameter is only about “nullness” and the data type or the value need not be addressed.
The following example demonstrates its use in practice. It assumes two named parameters — say, :size and :colour — which might, for example, get values from on-screen text fields or drop-down lists. Each named parameter corresponds with two positional parameters in the query.
SELECT
SH.SIZE, SH.COLOUR, SH.PRICE
FROM SHIRTS SH
WHERE (SH.SIZE = ? OR ? IS NULL)
AND (SH.COLOUR = ? OR ? IS NULL)
Explaining what happens here assumes the reader is familiar with the Firebird API and the passing of parameters in XSQLVAR structures — what happens under the surface will not be of interest to those who are not writing drivers or applications that communicate using the “naked” API.
The application passes the parameterized query to the server in the usual positional ?-form. Pairs of “identical” parameters cannot be merged into one, so for the two optional filters in the example, four positional parameters are needed: one for each ? in our example.
After the call to isc_dsql_describe_bind(), the SQLTYPE of the second and fourth parameters will be set to SQL_NULL. Firebird has no knowledge of their special relation with the first and third parameters: that responsibility lies entirely on the application side.
Once the values for size and colour have been set (or left unset) by the user, and the query is about to be executed, each pair of XSQLVARs must be filled as follows:
If the user has supplied a value:
First parameter (value compare): set *sqldata to the supplied value and *sqlind to 0 (for NOT NULL)
Second parameter (NULL test): set sqldata to null (null pointer, not SQL NULL) and *sqlind to 0 (for NOT NULL)
If the user has left the field blank:
Both parameters: set sqldata to null (null pointer, not SQL NULL) and *sqlind to -1 (indicating NULL)
In other words: the value compare parameter is always set as usual. The SQL_NULL parameter is set the same, except that sqldata remains null at all times.