Author Topic: Execute command with dynamic "using" block  (Read 27439 times)
Anderson P.
Posts: 82

« on: October 09, 2014, 02:43:36 pm »

Is there some way to dynamically create the "execute" statement of a query? More specifically, the "using" block.

Here is the situation: I have a "Prepare" statement for a query that is very dynamic, having more or fewer tables to insert into, and the blob types vary a lot according to the tables I have to insert into.

So I have to write a "case" block with all the possible combinations of tables and blob types. I would like to know if it is possible to create the "execute" statement dynamically.

Another possibility would be to change a variable's type during program execution: I have "execute ... using var1, var2, var3" and the types of var1, var2 and var3 could change according to the table.

Thanks for your attention
Sebastien F.
Four Js
Posts: 509

« Reply #1 on: October 10, 2014, 11:26:04 am »

Hello Anderson,

Unfortunately, the USING clause of EXECUTE or OPEN/FOREACH must provide the exact number of variables corresponding to the ? parameters used in the SQL statement; otherwise you get the error:

SQL statement error number -254.
Too many or too few host variables given.

Note that you can use more variables in an INTO clause than the number of fetched SELECT items...
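For illustration, the mismatch that raises -254 looks like this (a sketch; the table, column and variable names are made up):

```
PREPARE s FROM "UPDATE mytab SET col1 = ?, col2 = ? WHERE pk = ?"
-- three ? markers in the statement, but only two variables supplied:
EXECUTE s USING v1, v2  -- raises SQL error -254
```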

As general advice, avoid concatenating constant values to build dynamic SQL statements, as in:

  LET d = TODAY
  LET sql = "SELECT ... WHERE order_date > '", d, "'"

because this is not portable across different databases (e.g. date formatting rules are db specific).
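The portable way is to keep a ? placeholder in the statement and pass the value through USING, so the database driver handles the conversion (table and column names are made up for this sketch):

```
DEFINE d DATE
LET d = TODAY
PREPARE s FROM "SELECT order_id FROM orders WHERE order_date > ?"
DECLARE c CURSOR FOR s
OPEN c USING d  -- no db-specific date formatting in the SQL text
```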

In your case, since there can be so many parameter combinations, using SQL constants could be the best solution, assuming that you generate SQL constants according to the target database type.

But unfortunately, you want to pass Large OBject (LOB) values, and these cannot be converted to SQL constants...
The only way is to use TEXT/BYTE variables.

Assuming that you are using Informix and TEXT/BYTE columns, I wanted to suggest using the FILETOCLOB/FILETOBLOB functions, but unfortunately it's not possible to cast CLOB to TEXT or BLOB to BYTE:

  MAIN
    DATABASE test1
    CREATE TEMP TABLE tt1 ( k INT, t TEXT )
    INSERT INTO tt1 VALUES ( 1, FILETOCLOB('data.txt','client') ) -- this cast fails
  END MAIN

SQL statement error number -9634.
No cast from clob to text.

So far, I see no other way than a large CASE switch with all possible USING combinations, but that seems undoable if you want to support all possible data types!

The best thing to do is to contact our support with more details about your case; we could then properly analyze all the options.

- what is the target db?
- what kind of SQL params can be used (max number, possible types)?
- why is it possible to have so many BYTE/TEXT param combinations in your application?

Anderson P.
Posts: 82

« Reply #2 on: October 10, 2014, 01:19:04 pm »

Hello Seb

I understand that this is not a normal situation. Actually, there are about 20 cases that I need to support, so it's possible to do, and in this case it's the best solution.

Another solution would be to have Genero generate its own code and write a .4gl file; then I use "base.Channel.openPipe" to call "gcomp" on the server and compile the self-generated code, and finally call the self-generated program to execute the SQL.  Yeah, now THAT is crazy!
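For what it's worth, the pipe part of that crazy idea could be sketched like this (assuming the generated source file was already written, and that the compiler command is available on the server's PATH):

```
define ch base.Channel,
       line string
let ch = base.Channel.create()
call ch.openPipe("gcomp generated_query.4gl 2>&1", "r")
while true
    let line = ch.readLine()
    if ch.isEof() then
        exit while
    end if
    display line  -- compiler messages / errors
end while
call ch.close()
-- if compilation succeeded, run the generated program:
run "fglrun generated_query"
```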
Sebastien F.
Four Js
Posts: 509

« Reply #3 on: October 10, 2014, 03:37:18 pm »

Generating 4gl code on the fly and running it is maybe not that crazy.
As long as you have a development license on the production server, this can be a good option.
Of course, if you have to deploy your application on many production servers with a runtime license, you cannot compile.

We also have the idea to extend the language with a so-called "dynamic record" concept, where you could define RECORD members at runtime.
But this is not easy to implement - a long-term feature -
However, to me it's a must for writing generic code in Genero.

Back to your need, I am still wondering what kind of query you have to deal with.

Is this for INSERT, UPDATE or SELECT? Any kind? Are you writing a SQL command interpreter? ;-)

Maybe you could split the task into several SQL statements, each taking a single LOB param, if the LOB columns allow NULLs.

I mean:

  PREPARE s FROM "INSERT INTO mytab (pk,b1,b2) VALUES (?,?,?)"
  EXECUTE s USING rec.pk, rec.b1, rec.b2
is like:

  PREPARE s1 FROM "INSERT INTO mytab (pk,b1,b2) VALUES (?,NULL,NULL)"
  EXECUTE s1 USING rec.pk
  PREPARE s2 FROM "UPDATE mytab SET b1 = ? WHERE pk = ?"
  EXECUTE s2 USING rec.b1, rec.pk
  PREPARE s3 FROM "UPDATE mytab SET b2 = ? WHERE pk = ?"
  EXECUTE s3 USING rec.b2, rec.pk

... you can repeat s2...sN

Anderson P.
Posts: 82

« Reply #4 on: October 10, 2014, 03:59:05 pm »

So, actually the program is a "table replicator". Since our company has many servers worldwide, it's necessary to replicate some information between servers.

The only option we had was to "hang" the program on the user's computer until the copy was done, and in case some server was unavailable at the time, the user had to retry the operation later.

What we have developed is a program that runs SQL queries - any query, to anywhere. The user must call a function informing the destination, the desired execution time and the SQL query to execute, like this:

  type t_agendar_query dynamic array of record
           sql     like diacpa.cpasql,
           tabname like diacpa.tabname,
           blob    dynamic array of record
                       arq     like diapab.pabarq,
                       text    like diapab.pabtext,
                       varchar like diapab.pabvarchar
                   end record
       end record

sql is the actual SQL query to run.
tabname is the name of the table the query will be run against; it is not used, it's just kept for the record.
blob is a dynamic array: each "?" in the query equates to a position in the array, and the user must inform only one of the types in the record.
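To make the structure concrete, a call might fill the record like this (a rough sketch: the table, values and local variable names are made up for illustration):

```
define la_cpa t_agendar_query

-- one scheduled query with two blob parameters:
-- position 1 carries a TEXT value, position 2 a VARCHAR
let la_cpa[1].sql = "insert into mytab (pk, doc, tag) values (42, ?, ?)"
let la_cpa[1].tabname = "mytab"
let la_cpa[1].blob[1].pabtext = l_text        -- only one member per position is set
let la_cpa[1].blob[2].pabvarchar = "some tag"
```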

And here comes the case: I have to "execute" the query informed in "sql" and evaluate the "blob" length, testing for each position which is null and which is informed, like this:

  foreach pont_pab into la_cpa[l_indcpa].blob[l_indpab].pabarq,
                        la_cpa[l_indcpa].blob[l_indpab].pabtext,
                        la_cpa[l_indcpa].blob[l_indpab].pabvarchar
      if la_cpa[l_indcpa].blob[l_indpab].pabarq is not null
      then
          let la_cpa[l_indcpa].blobind = la_cpa[l_indcpa].blobind clipped, "A"
      end if
      if la_cpa[l_indcpa].blob[l_indpab].pabtext is not null
      then
          let la_cpa[l_indcpa].blobind = la_cpa[l_indcpa].blobind clipped, "T"
      end if
      if la_cpa[l_indcpa].blob[l_indpab].pabvarchar is not null
      then
          let la_cpa[l_indcpa].blobind = la_cpa[l_indcpa].blobind clipped, "V"
      end if
  end foreach

blobind is a code that shows the array length and the data type of each position.

Our system has a limit of 3 files in the blob array, combined with 3 possible types in each of the 3 positions, so the full case is this:

  case la_cpa[l_i].blobind
      when "0"
          execute com_sql
      when "A"
          execute com_sql using la_cpa[l_i].blob[1].pabarq
      when "T"
          execute com_sql using la_cpa[l_i].blob[1].pabtext
      when "V"
          execute com_sql using la_cpa[l_i].blob[1].pabvarchar
      when "AA"
          execute com_sql using la_cpa[l_i].blob[1].pabarq, la_cpa[l_i].blob[2].pabarq
      when "AT"
          execute com_sql using la_cpa[l_i].blob[1].pabarq, la_cpa[l_i].blob[2].pabtext
      when "AV"
          execute com_sql using la_cpa[l_i].blob[1].pabarq, la_cpa[l_i].blob[2].pabvarchar
      when "TA"
          execute com_sql using la_cpa[l_i].blob[1].pabtext, la_cpa[l_i].blob[2].pabarq
      when "TT"
          execute com_sql using la_cpa[l_i].blob[1].pabtext, la_cpa[l_i].blob[2].pabtext
      when "TV"
          execute com_sql using la_cpa[l_i].blob[1].pabtext, la_cpa[l_i].blob[2].pabvarchar
      when "VA"
          execute com_sql using la_cpa[l_i].blob[1].pabvarchar, la_cpa[l_i].blob[2].pabarq
      when "VT"
          execute com_sql using la_cpa[l_i].blob[1].pabvarchar, la_cpa[l_i].blob[2].pabtext
      when "VV"
          execute com_sql using la_cpa[l_i].blob[1].pabvarchar, la_cpa[l_i].blob[2].pabvarchar
      when "AAA"
          execute com_sql using la_cpa[l_i].blob[1].pabarq, la_cpa[l_i].blob[2].pabarq, la_cpa[l_i].blob[3].pabarq
      when "AAT"
          execute com_sql using la_cpa[l_i].blob[1].pabarq, la_cpa[l_i].blob[2].pabarq, la_cpa[l_i].blob[3].pabtext
      when "AAV"
          execute com_sql using la_cpa[l_i].blob[1].pabarq, la_cpa[l_i].blob[2].pabarq, la_cpa[l_i].blob[3].pabvarchar
      when "ATA"
          execute com_sql using la_cpa[l_i].blob[1].pabarq, la_cpa[l_i].blob[2].pabtext, la_cpa[l_i].blob[3].pabarq
      when "ATT"
          execute com_sql using la_cpa[l_i].blob[1].pabarq, la_cpa[l_i].blob[2].pabtext, la_cpa[l_i].blob[3].pabtext
      when "ATV"
          execute com_sql using la_cpa[l_i].blob[1].pabarq, la_cpa[l_i].blob[2].pabtext, la_cpa[l_i].blob[3].pabvarchar
      when "AVA"
          execute com_sql using la_cpa[l_i].blob[1].pabarq, la_cpa[l_i].blob[2].pabvarchar, la_cpa[l_i].blob[3].pabarq
      when "AVT"
          execute com_sql using la_cpa[l_i].blob[1].pabarq, la_cpa[l_i].blob[2].pabvarchar, la_cpa[l_i].blob[3].pabtext
      when "AVV"
          execute com_sql using la_cpa[l_i].blob[1].pabarq, la_cpa[l_i].blob[2].pabvarchar, la_cpa[l_i].blob[3].pabvarchar
      when "TAA"
          execute com_sql using la_cpa[l_i].blob[1].pabtext, la_cpa[l_i].blob[2].pabarq, la_cpa[l_i].blob[3].pabarq
      when "TAT"
          execute com_sql using la_cpa[l_i].blob[1].pabtext, la_cpa[l_i].blob[2].pabarq, la_cpa[l_i].blob[3].pabtext
      when "TAV"
          execute com_sql using la_cpa[l_i].blob[1].pabtext, la_cpa[l_i].blob[2].pabarq, la_cpa[l_i].blob[3].pabvarchar
      when "TTA"
          execute com_sql using la_cpa[l_i].blob[1].pabtext, la_cpa[l_i].blob[2].pabtext, la_cpa[l_i].blob[3].pabarq
      when "TTT"
          execute com_sql using la_cpa[l_i].blob[1].pabtext, la_cpa[l_i].blob[2].pabtext, la_cpa[l_i].blob[3].pabtext
      when "TTV"
          execute com_sql using la_cpa[l_i].blob[1].pabtext, la_cpa[l_i].blob[2].pabtext, la_cpa[l_i].blob[3].pabvarchar
      when "TVA"
          execute com_sql using la_cpa[l_i].blob[1].pabtext, la_cpa[l_i].blob[2].pabvarchar, la_cpa[l_i].blob[3].pabarq
      when "TVT"
          execute com_sql using la_cpa[l_i].blob[1].pabtext, la_cpa[l_i].blob[2].pabvarchar, la_cpa[l_i].blob[3].pabtext
      when "TVV"
          execute com_sql using la_cpa[l_i].blob[1].pabtext, la_cpa[l_i].blob[2].pabvarchar, la_cpa[l_i].blob[3].pabvarchar
      when "VAA"
          execute com_sql using la_cpa[l_i].blob[1].pabvarchar, la_cpa[l_i].blob[2].pabarq, la_cpa[l_i].blob[3].pabarq
      when "VAT"
          execute com_sql using la_cpa[l_i].blob[1].pabvarchar, la_cpa[l_i].blob[2].pabarq, la_cpa[l_i].blob[3].pabtext
      when "VAV"
          execute com_sql using la_cpa[l_i].blob[1].pabvarchar, la_cpa[l_i].blob[2].pabarq, la_cpa[l_i].blob[3].pabvarchar
      when "VTA"
          execute com_sql using la_cpa[l_i].blob[1].pabvarchar, la_cpa[l_i].blob[2].pabtext, la_cpa[l_i].blob[3].pabarq
      when "VTT"
          execute com_sql using la_cpa[l_i].blob[1].pabvarchar, la_cpa[l_i].blob[2].pabtext, la_cpa[l_i].blob[3].pabtext
      when "VTV"
          execute com_sql using la_cpa[l_i].blob[1].pabvarchar, la_cpa[l_i].blob[2].pabtext, la_cpa[l_i].blob[3].pabvarchar
      when "VVA"
          execute com_sql using la_cpa[l_i].blob[1].pabvarchar, la_cpa[l_i].blob[2].pabvarchar, la_cpa[l_i].blob[3].pabarq
      when "VVT"
          execute com_sql using la_cpa[l_i].blob[1].pabvarchar, la_cpa[l_i].blob[2].pabvarchar, la_cpa[l_i].blob[3].pabtext
      when "VVV"
          execute com_sql using la_cpa[l_i].blob[1].pabvarchar, la_cpa[l_i].blob[2].pabvarchar, la_cpa[l_i].blob[3].pabvarchar
      otherwise
          let gg_erro = true
  end case

Well, finally, the code is working! :D

The developer must call the function instead of executing the SQL query directly. We have made some tests, all good, and we are changing our programs one by one to use this function.

Everything is OK; my only question was whether there is some better way to do this instead of using this case.
Marco P.
Posts: 8

« Reply #5 on: October 13, 2014, 04:20:59 pm »

I have a similar need.
I have to write a single SELECT statement for all the tables of my database. The SELECT is by SERIAL fields.
I am able to generate the SELECT instruction. The problem is the FETCH:
how to get the values of all table fields in one single statement. Normally I write SELECT * INTO var_table_like.* FROM table. What kind of variable do I have to use?

It would be useful to have an element like a recordset in Java, where the value of a field is accessed through a function taking the position or the name of the field.
I would like to avoid using the Java interface, for performance reasons and to avoid managing 2 kinds of database connections for every installation.
Anderson P.
Posts: 82

« Reply #6 on: October 13, 2014, 04:36:53 pm »


If I get you right, maybe you can use a record of strings to solve this. A string will accept both number and text values, so whatever comes from your table a string will accept, except for byte values.
Reuben B.
Four Js
Posts: 1049

« Reply #7 on: October 13, 2014, 10:30:47 pm »


Have a look at my fgl_zoom example here.

It's a generic zoom / query / lookup window function which evolved from query_window, which my previous employers placed in the IIUG software repository many years ago.  It essentially does ...

  DEFINE l_result ARRAY[9] OF VARCHAR(255)
  ...
    FOREACH list_curs INTO l_result[1], l_result[2], l_result[3],
                           l_result[4], l_result[5], l_result[6],
                           l_result[7], l_result[8], l_result[9]

... so determine an arbitrarily large number of columns you want to return, declare an arbitrarily large array of VARCHAR(255), and fetch into them.  (When STRING can be used in SQL statements, I'll rewrite it to use STRING.)

There may be some techniques you can use if you are prepared to go down that path.  It is similar to what Marco suggested.

It surprised me when I moved across to Four Js and started looking at other people's code how many did not have something similar.  I'd guesstimate that zoom / query / lookup code was making up 25% of the code base / number of forms; if they had created a generic function, not only would 25% of the code be eliminated, but the user interface would be consistent, and changes would only need to be made in one place.  One day I'd still like to create a similar generic single-table maintenance function and knock another 25% off some code bases.

Quote from Marco P.: "It could be useful an element like recordset in java where the access to a value of a field is by a function with the position or the name of the field."

You can get close by serializing the record as either XML or, in 2.50, JSON, and using the XML or JSON methods to access fields by position or name.
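For the JSON route in 2.50, a minimal sketch (assuming the util.JSONObject.fromFGL() API; the record fields here are made up):

```
IMPORT util

MAIN
    DEFINE rec RECORD
               pk   INTEGER,
               name VARCHAR(50)
           END RECORD
    DEFINE obj util.JSONObject
    LET rec.pk = 1
    LET rec.name = "test"
    LET obj = util.JSONObject.fromFGL(rec)  -- serialize the record to JSON
    DISPLAY obj.get("name")                 -- access a field value by its name
END MAIN
```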

What I think you would want, as well as that, is the ability to define record variables at run-time, and then access fields by position or name, so that you could write your generic ...

  DEFINE var_table_like DYNAMIC RECORD
  FOREACH cursor USING tablename INTO column_name, column_datatype -- loop through systables/syscolumns for tablename
      CALL var_table_like.addField(column_name, column_datatype)
  END FOREACH
  SELECT * INTO var_table_like.* FROM tablename
  LET value = var_table_like.getValueByFieldname(column_name)
  LET value = var_table_like.getValueByIndex(index)
  etc


Product Consultant (Asia Pacific)
Developer Relations Manager (Worldwide)
Sebastien F.
Four Js
Posts: 509

« Reply #8 on: October 14, 2014, 09:31:33 am »

I agree with Reuben; however, I would go with a new "DESCRIBE" instruction:

  ...
  OPEN c1 USING ...
  DESCRIBE c1 INTO dynrec
  FETCH c1 INTO dynrec.*
  ...

FOREACH could implicitly describe the cursor into the dynamic record, so the code would be:

  ...
  FOREACH c1 INTO dynrec -- (no .* notation)
      FOR i = 1 TO dynrec.getFieldCount()
          DISPLAY dynrec.getFieldName(i), "=", dynrec.getFieldValue(i)
      END FOR
  END FOREACH

Marco P.
Posts: 8

« Reply #9 on: October 14, 2014, 10:09:19 am »

Thanks for your answers. I have a generic function for zoom that is valid for all my tables, but that function has a limited number of exposed fields. What I need now is a generic function for all the record types of my database, and I have to manage 3 different DBMS (IBM Informix, Oracle and MS SQL Server). While waiting for the DYNAMIC RECORD, I will try following the example using generic varchar variables.
My application is a REST service that queries my database and returns the results in JSON format.
Sebastien F.
Four Js
Posts: 509

« Reply #10 on: October 14, 2014, 11:09:48 am »

For such generic code, you may consider using the .sch schema file to know the structure of your tables.
It's easier to query than DB vendor specific system catalog tables.
Of course, the .sch file must be up to date with the DB.
See also FGLDIR/src/fgldbslib.4gl, but this code is not documented and can change in future versions.
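As an illustration of the .sch route, here is a sketch of reading the schema file line by line. It assumes a whitespace-separated field layout in the generated .sch (table name first), which you should verify against your own .sch file:

```
define ch base.Channel,
       tok base.StringTokenizer,
       line string
let ch = base.Channel.create()
call ch.openFile("myschema.sch", "r")
while true
    let line = ch.readLine()
    if ch.isEof() then
        exit while
    end if
    let tok = base.StringTokenizer.create(line, " \t")
    display "table: ", tok.nextToken()  -- first field (assumed: table name)
end while
call ch.close()
```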
David H.
Posts: 158

« Reply #11 on: October 14, 2014, 11:32:05 am »

I for one would certainly love to see DESCRIBE/dynamic variables added to Genero BDL! :-) That's something we have been after for a long time now.  I would also like to see more new language features added, particularly anything that promotes code reuse...