Devrace FIBPlus: Optimization of network traffic in Delphi and C++ Builder applications

In this article we give a number of recommendations and examples that will help developers create more efficient InterBase and Firebird applications. These network traffic optimization techniques apply both to multi-user environments and to systems that access the server over low-speed channels (e.g. ISDN).

Caching metadata

FIBPlus can retrieve system information about table columns automatically and set the corresponding TpFIBDataSet field properties on its own: Required (for NOT NULL fields), ReadOnly (for computed ones) and DefaultExpression (for fields with a DEFAULT clause). This is convenient both for the developer, who does not have to set these properties manually while building the client application, and for the user, who gets more informative messages when working with the program. For example, if a database field is declared NOT NULL and the user tries to leave it empty, he or she will get the message "Field '…' must have a value".

Such a message is far more understandable than a system InterBase/Firebird error about a PRIMARY KEY violation. The same applies to computed fields, which of course cannot be edited: FIBPlus automatically sets their ReadOnly property to True, so users will not see cryptic errors when they try to change such fields in a TDBGrid.

This FIBPlus feature, however, has a drawback that reveals itself over low-speed connections. To obtain information about dataset fields, FIBPlus executes additional "invisible" queries against the system InterBase/Firebird tables. If the application works with many tables, or the tables have many fields, application start-up may slow down and network traffic may grow. This is especially noticeable when queries are opened for the first time, since each of them entails several additional ones. On subsequent openings FIBPlus reuses the information it has already obtained, but at application start-up you may still notice a small delay.

At first sight the developer has only two ways out: either tolerate the delays on the first opening of each query, or give up this convenient FIBPlus feature that sets up all table fields on its own. In fact there is a third option: caching metadata on the client.

Let's demonstrate this in practice. Create a new application and place the following components on it:


pFIBDatabase1: TpFIBDatabase;
pFIBTransaction1: TpFIBTransaction;
pFIBDataSet1: TpFIBDataSet;
DBGrid1: TDBGrid;
DataSource1: TDataSource;
FIBSQLMonitor1: TFIBSQLMonitor;
Label1: TLabel;
Label2: TLabel;
Label3: TLabel;
Memo1: TMemo;
Button1: TButton;
ListView1: TListView;

Then link the components as follows:


pFIBDatabase1.DefaultTransaction := pFIBTransaction1;
pFIBTransaction1.DefaultDatabase := pFIBDatabase1;
pFIBDataSet1.Database := pFIBDatabase1;
pFIBDataSet1.Transaction := pFIBTransaction1;
pFIBDataSet1.AutoCommit := True;
DBGrid1.DataSource := DataSource1;
DataSource1.DataSet := pFIBDataSet1;

After this, set the database connection parameters:

 

Then create the queries pFIBDataSet1 needs. To do this, call the SQL Generator by right-clicking pFIBDataSet1:

 

Then choose EMPLOYEE in the list of tables and place this name into the left editor. After this, press "Save SQL" and go to the "Options" tab:

 

In the "Select main table:" list set the EMPLOYEE table (it is the only choice, since the query joins no tables) and press "Get table fields". SQL Generator will build two lists of fields. Now you only have to press "Generate SQLs" and "Save all", and the component is ready to work. Next, let's write handlers for the OnFormCreate event and the OnSQL event of the TFIBSQLMonitor component:

procedure TForm1.FIBSQLMonitor1SQL(EventText: String;
  EventTime: TDateTime);
begin
  Memo1.Lines.Text := Memo1.Lines.Text + EventText;
end;

procedure TForm1.FormCreate(Sender: TObject);
begin
  pFIBDatabase1.Connected := True;
  pFIBDataset1.Active := True;
end;

Now it is time to set up the ListView1 component. Add four columns to it: Name, Not Null, Computed and Default. They will let us check whether the field properties of pFIBDataSet1 have been set correctly. For this, write the AfterOpen event handler of pFIBDataSet1:

procedure TForm1.pFIBDataSet1AfterOpen(DataSet: TDataSet);
var
  FieldInfo: TListItem;
  Index: Integer;
begin
  with pFIBDataSet1 do
    for Index := 0 to Pred(FieldCount) do
    begin
      FieldInfo := ListView1.Items.Add;
      FieldInfo.Caption := Fields[Index].FieldName;
      if Fields[Index].Required then
        FieldInfo.SubItems.Add('+')
      else
        FieldInfo.SubItems.Add('-');
      if Fields[Index].ReadOnly then
        FieldInfo.SubItems.Add('+')
      else
        FieldInfo.SubItems.Add('-');
      // add the expression even when it is empty,
      // so the Default column stays aligned with the others
      FieldInfo.SubItems.Add(Fields[Index].DefaultExpression);
    end;
end;

As a result, our application will look like this:

A full example text is given on CD.

Now start the application and pay attention to the contents of ListView1:

As the picture shows, pFIBDataSet1 has obtained all the field properties, including default values. This was achieved by means of a number of additional queries, which FIBSQLMonitor intercepted. For example:

[Application: metadata_cache]

: [Execute]  SELECT R.RDB$FIELD_NAME , R.RDB$FIELD_SOURCE , F.RDB$COMPUTED_BLR , R.RDB$DEFAULT_SOURCE DS , F.RDB$DEFAULT_SOURCE DS1 , F.RDB$FIELD_TYPE , F.RDB$DIMENSIONS FROM RDB$RELATION_FIELDS R JOIN RDB$FIELDS F ON (R.RDB$FIELD_SOURCE = F.RDB$FIELD_NAME ) WHERE R.RDB$RELATION_NAME = :TN ORDER BY R.RDB$FIELD_POSITION

Obviously, if our application worked with a hundred tables, we would get a hundred additional queries. On a local network these queries cause no problems, but over a low-speed channel they noticeably slow the application down.

Now we come to the main point: caching metadata on the client. Set the following values in the CacheSchemeOptions property of pFIBDatabase1:

Now pFIBDatabase1 will save all obtained metadata (in our case, the NOT NULL, COMPUTED and similar field properties) to the external file "metadata_cache.fpc" when the application closes, and load them from this file when it starts. So on subsequent runs of the application the additional queries are no longer needed. One more detail about metadata caching: pFIBDatabase1 will try to verify that the cached metadata are still valid. To do this it again queries the system tables and checks whether the system identifier of each table has changed, i.e. whether its structure has been updated. If the structure of a table on the server has changed, its metadata are reread; other tables are not affected.

You can avoid this default check by writing your own implementation in the OnAcceptCacheSchema handler of pFIBDatabase1. In the simplest case we declare all saved metadata to be always valid:

procedure TForm1.pFIBDatabase1AcceptCacheSchema(const ObjName: String;
  var Accept: Boolean);
begin
  Accept := True;
end;

This handler is called for the metadata of each object saved in the file. Note that if the real metadata differ from those our application has cached, hard-to-find errors may appear. Now start the application again. After the first start you will not see any changes, as the application executes all the queries as before. But if you start the program once more, you will see that Memo1 no longer contains queries to RDB$RELATION_FIELDS. In fact the application executes only one query:

[Application: metadata_cache]

pFIBDataSet1: [Execute] SELECT
EMP.EMP_NO,
EMP.FIRST_NAME,
EMP.LAST_NAME,
EMP.PHONE_EXT,
EMP.HIRE_DATE,
EMP.DEPT_NO,
EMP.JOB_CODE,
EMP.JOB_GRADE,
EMP.JOB_COUNTRY,
EMP.SALARY,
EMP.FULL_NAME
FROM
EMPLOYEE EMP

Execute tick count 0

Although there are no system queries, all the field properties in pFIBDataset1 (as shown in ListView1) are still correct. This is exactly the result we wanted: fewer redundant queries while keeping the convenient FIBPlus features.

Dynamic modifying queries

Before discussing modifying queries, let us explain how TpFIBDataSet handles user data modifications. The query used to fetch data is set in the SelectSQL property. On inserting a record (more exactly, when the Post method is called after Append/Insert), the query from InsertSQL is executed, with its parameters replaced by the actual field values entered by the user (or generated by the program). The same happens when a record is changed: after Post is called, TpFIBDataSet executes the query from the UpdateSQL property, substituting the record's column values for its parameters. To illustrate this, let's write a small example. First create the following table and a generator for the primary key:

CREATE TABLE "Simple Table" (
    "Id" INTEGER NOT NULL,
    "First Name" VARCHAR (100),
    "Last Name" VARCHAR (100),
    "Address" VARCHAR (100));

ALTER TABLE "Simple Table" ADD CONSTRAINT "PK_Simple Table" PRIMARY KEY ("Id");

CREATE GENERATOR "Simple Table_Id_GEN";

Now we will create a new application. There we will place the same components as in the previous one except for TListView (we do not need it anymore). Using SQL Generator we will form a query for SelectSQL:

SELECT
    Sim."Id",
    Sim."First Name",
    Sim."Last Name",
    Sim."Address"
FROM
    "Simple Table" Sim

and then generate modifying queries. The whole process was described in the previous example. For instance, for UpdateSQL we will get the following query:

UPDATE "Simple Table" SET
    "Id" = ?"Id",
    "First Name" = ?"First Name",
    "Last Name" = ?"Last Name",
    "Address" = ?"Address"
WHERE
    "Id" = ?"OLD_Id"

Now we have to set autogeneration of primary key values for the “Id” field. We will use the AutoUpdateOptions property:


Three properties are important here: GeneratorName (the name of the generator), KeyFields (the name of the key field) and WhenGetGenID (when to fetch generator values). Set WhenGetGenID to wgOnNewRecord, so that pFIBDataSet1 obtains a new generator value as soon as a record is inserted. Then start the application and add a couple of records to the table:
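If you prefer to configure these properties in code rather than in the Object Inspector, the settings could look roughly like this (a sketch based on the property names listed above):

with pFIBDataSet1.AutoUpdateOptions do
begin
    // name of the generator created earlier
    GeneratorName := 'Simple Table_Id_GEN';
    // primary key field that receives the generator value
    KeyFields := 'Id';
    // fetch a new value as soon as a record is inserted
    WhenGetGenID := wgOnNewRecord;
end;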


Now if we have a look at Memo1, we will see how the records were added:

[Application: update_only_modified]
pFIBDataSet1: [Execute] INSERT INTO "Simple Table"(
    "Id",
    "First_Name",
    "Last_Name",
    "Address"
)
VALUES(
    ?"Id",
    ?"First_Name",
    ?"Last_Name",
    ?"Address"
)

  Id = 1
  First_Name = 'Name 1'
  Last_Name = 'Last Name 1'
  Address = 'Address 1'
Rows Affected:  1

Execute tick count 0

After each insertion pFIBDataSet1 executed the query from InsertSQL, using the entered field values as parameters. Likewise, if we try to change the record with Id = 2, SQL Monitor will intercept the following query:

pFIBDataSet1: [Execute] UPDATE "Simple Table" SET
    "Id" = ?"Id",
    "First_Name" = ?"First_Name",
    "Last_Name" = ?"Last_Name",
    "Address" = ?"Address"
WHERE
    "Id" = ?"OLD_Id"
   
  Id = 2
  First_Name = 'Name 2 - Changed'
  Last_Name = 'Last Name 2'
  Address = 'Address 2'
  OLD_Id = 2
Rows Affected:  1
Execute tick count 0

As you can see, pFIBDataSet1 sent all the record's fields to the server, although in reality only the First_Name field was changed. It is easy to imagine that in a multi-user environment, with many fields in a table (especially string ones), this approach generates a lot of superfluous network traffic.

You can eliminate this drawback by using the query generator built into TpFIBDataSet. To do so, set a few more options in AutoUpdateOptions: set AutoReWriteSQLs and CanChangeSQLs to True, UpdateTableName to "Simple Table", and UpdateOnlyModifiedFields to True. These options let pFIBDataSet1 regenerate the modifying query every time a record is changed, and the generated query will include only the fields whose values actually changed. Run the application and try to change the First_Name field of the record with Id = 3. In Memo1 you will see the query executed after this change:
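In code these options could be set roughly as follows (a sketch using the option names described above):

with pFIBDataSet1.AutoUpdateOptions do
begin
    // let the dataset regenerate modifying queries on the fly
    AutoReWriteSQLs := True;
    CanChangeSQLs := True;
    // the table the generated modifying statements address
    UpdateTableName := 'Simple Table';
    // include only fields whose values actually changed
    UpdateOnlyModifiedFields := True;
end;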

pFIBDataSet1: [Execute] Update "Simple Table" Set
     "First_Name"=?"NEW_First_Name"
where "Simple Table"."Id"=?"OLD_Id"

  NEW_First_Name = 'Name 3 - Changed'
  OLD_Id = 3
Rows Affected:  1
Execute tick count 0

With this approach the savings in network traffic become obvious.

Using poRefreshAfterPost option in TpFIBDataSet Options

The TpFIBDataset component has a special RefreshSQL property intended for re-reading the record that has just been changed. Imagine that an AFTER UPDATE trigger changing the Last_Name field is defined on the table from the example above. When a user edits a record and pFIBDataSet1 executes the corresponding UpdateSQL, the trigger modifies the record as well. After this, TpFIBDataset executes the query from RefreshSQL, which returns only the current record. The RefreshSQL produced by SQL Generator looks like this:

SELECT
    Sim."Id",
    Sim."First_Name",
    Sim."Last_Name",
    Sim."Address"
FROM
    "Simple Table" Sim
WHERE
    (   
            Sim."Id" = ?"OLD_Id"
    )

Obviously, after this query executes we will see the changes made by the trigger. But it is equally obvious that executing it after every record change causes additional network traffic. If you are sure that the table has no triggers that change field values, or that a record will not be edited by several users at once, you can disable this query by removing the poRefreshAfterPost flag from pFIBDataSet1.Options. In this case RefreshSQL will not be executed unless you call pFIBDataSet1.Refresh explicitly.
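Since Options is an ordinary Delphi set, the flag can be removed in one line, for example:

// stop re-reading the record after every Post;
// call pFIBDataSet1.Refresh manually when needed
pFIBDataSet1.Options := pFIBDataSet1.Options - [poRefreshAfterPost];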

So there is no doubt that disabling RefreshSQL for tables with a large number of fields can considerably reduce network traffic and speed up your application on low-speed channels.

Recurrent use of queries

To prepare a query for execution, every client library, including FIBPlus, sends the full query text to the server. But to execute an already prepared query, it is enough to send only its handle and the parameter values. If your program runs similar queries frequently, you can arrange their reuse with the following methods:

function GetQueryForUse (aTransaction: TFIBTransaction; const SQLText: string): TpFIBQuery;
procedure FreeQueryForUse (aFIBQuery: TpFIBQuery);

You do not have to create TpFIBQuery instances yourself: GetQueryForUse creates one on the first call and, when you ask for the same query again, returns a reference to the existing component.

Thus every repeated call uses a query that has already been prepared, so its text travels to the server only once. After you have obtained the query results from the TpFIBQuery component, call the FreeQueryForUse method. This mechanism is already used internally by FIBPlus components, for example to call generators for primary key values. You can also use these methods in your own applications to optimize traffic.
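A usage sketch (it assumes TpFIBQuery's usual Params and ExecQuery interface; the SQL text and the IncreaseSalary routine are illustrative, not part of FIBPlus):

procedure IncreaseSalary(aTransaction: TFIBTransaction;
  EmpNo: Integer; Delta: Double);
var
  Q: TpFIBQuery;
begin
  // returns an already prepared query if this SQL text was used before,
  // so the text is sent to the server only on the first call
  Q := GetQueryForUse(aTransaction,
    'UPDATE EMPLOYEE SET SALARY = SALARY + :DELTA WHERE EMP_NO = :EMP_NO');
  try
    Q.Params[0].AsDouble := Delta;
    Q.Params[1].AsInteger := EmpNo;
    Q.ExecQuery;
  finally
    // return the query for reuse instead of destroying it
    FreeQueryForUse(Q);
  end;
end;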

Client BLOB-filters. «Transparent» packing of BLOB-fields.

Many readers probably know about the BLOB filter technology in InterBase/Firebird. BLOB filters are user-defined functions that process BLOB fields (encode/decode, compress, etc.) on the server, transparently to the client application. This is useful if you need to store BLOB fields compressed in the database, since the client program does not have to change. But this approach will not reduce network traffic, because the server and the application still exchange uncompressed data.

Thanks to Ivan Ravin, FIBPlus has a mechanism of client-side BLOB filters very similar to the server one in InterBase/Firebird. The advantage of a local BLOB filter is that we can considerably reduce the application's network traffic by compressing BLOB fields before sending them to the server and decompressing them after receiving them from it. This is done by registering two procedures, for reading and writing BLOB fields, in TpFIBDatabase. FIBPlus will then automatically apply these procedures to all BLOB fields of the given type in every TpFIBDataSet that uses the same TpFIBDatabase instance. Let's illustrate this mechanism with an example.

First we will create a table with blob-fields and a trigger to generate unique values of the primary key:

CREATE TABLE "BlobTable" (
    "Id" INTEGER NOT NULL,
    "BlobText" BLOB sub_type -15 segment size 1);

ALTER TABLE "BlobTable" ADD CONSTRAINT "PK_BlobTable" PRIMARY KEY ("Id");

Notice that sub_type must have a negative value! Now place the following components on the form:

pFIBDataSet1: TpFIBDataSet;
pFIBTransaction1: TpFIBTransaction;
pFIBDatabase1: TpFIBDatabase;
DataSource1: TDataSource;
DBGrid1: TDBGrid;
DBMemo1: TDBMemo;
Button1: TButton;
OpenDialog1: TOpenDialog;

Link FIBPlus components and generate queries for pFIBDataSet1 (this time only for the “BlobTable” table), as we have already demonstrated in previous examples. We will get the following form:

Now write a handler for the button click:

procedure TForm1.Button1Click(Sender: TObject);
var
  S: TStream;
  FileS: TFileStream;
begin
  if not OpenDialog1.Execute then Exit;
  pFIBDataSet1.Append;
  S := pFIBDataSet1.CreateBlobStream(
    pFIBDataSet1.FieldByName('BlobText'), bmReadWrite);
  FileS := TFileStream.Create(OpenDialog1.FileName, fmOpenRead);
  try
    S.CopyFrom(FileS, FileS.Size);
  finally
    FileS.Free;
    S.Free;
  end;
  pFIBDataSet1.Post;
end;

Then run the application and save a couple of text files into BLOB fields (for instance, the sources of this very application). If we now look at the contents of the table using any visual administration tool with a BLOB viewer, we will see that the BLOB fields contain the text of the files:

Now let's write the procedures for packing/unpacking BLOB fields:

procedure PackBuffer(var Buffer: PChar; var BufSize: LongInt);
var
  srcStream, dstStream: TStream;
begin
  srcStream := TMemoryStream.Create;
  dstStream := TMemoryStream.Create;
  try
    srcStream.WriteBuffer(Buffer^, BufSize);
    srcStream.Position := 0;
    GZipStream(srcStream, dstStream, 6);
    srcStream.Free;
    srcStream := nil;
    BufSize := dstStream.Size;
    dstStream.Position := 0;
    ReallocMem(Buffer, BufSize);
    dstStream.ReadBuffer(Buffer^, BufSize);
  finally
    if Assigned(srcStream) then srcStream.Free;
    dstStream.Free;
  end;
end;

procedure UnpackBuffer(var Buffer: PChar; var BufSize: LongInt);
var
  srcStream, dstStream: TStream;
begin
  srcStream := TMemoryStream.Create;
  dstStream := TMemoryStream.Create;
  try
    srcStream.WriteBuffer(Buffer^, BufSize);
    srcStream.Position := 0;
    GunZipStream(srcStream, dstStream);
    srcStream.Free;
    srcStream := nil;
    BufSize := dstStream.Size;
    dstStream.Position := 0;
    ReallocMem(Buffer, BufSize);
    dstStream.ReadBuffer(Buffer^, BufSize);
  finally
    if Assigned(srcStream) then srcStream.Free;
    dstStream.Free;
  end;
end;

Do not forget to add two units to the uses clause: zStream and IBBlobFilter. The first provides data compression; the second is part of FIBPlus and manages BLOB filters. Now you only have to register the BLOB filters by calling the RegisterBlobFilter function. Its first parameter is the BLOB field type (in our case -15), and the second and third parameters are the procedures for encoding and decoding the BLOB field:

procedure TForm1.FormCreate(Sender: TObject);
begin

    pFIBDatabase1.RegisterBlobFilter(-15, @PackBuffer, @UnpackBuffer);
    pFIBDatabase1.Connected := True;
    pFIBDataset1.Active := True;
end;

Run the application, delete the records it already contains and add new ones. You will not notice any difference, but if you look at what is actually stored in the BLOB fields, you will see that all the data are compressed:

Moreover, the application now sends to (and receives from) the server already compressed BLOBs, which can considerably decrease network traffic! Of course, you could compress BLOB fields without the BLOB filter mechanism described above: for example, compress a field in the Button1Click procedure before saving it and decompress it in the AfterScroll handler. But, firstly, the centralized mechanism greatly simplifies your code (BLOB fields are handled transparently to the rest of the program) and, secondly, it helps avoid trivial errors (such as compressed BLOB fields in one part of the program and uncompressed ones in another).

Summary

We have demonstrated several ways to decrease an application's network traffic: caching metadata on the client, auto-generation of optimal modifying queries, and compression of BLOB fields by means of client BLOB filters. Of course, if you design your application correctly so that it does not fetch excess data, you will reduce network traffic considerably anyway. But these techniques can be useful for those who have already tried everything else to optimize their project.
All the above-mentioned examples are available at
http://www.devrace.com/files/optimization_examples.zip.
To compile them you will need FIBPlus version 4.77 or higher, as previous versions do not support some of the features.
The latest FIBPlus version can be downloaded from our site:
http://www.devrace.com/en/fibplus/download

