With LOAD we match the Informix-4gl LOAD statement, which I don't believe has the ability to skip the first row. Having said that, I don't see what stops us taking the current syntax ...
LOAD FROM filename [ DELIMITER delimiter ]
{
INSERT INTO table-specification [ ( column [,...]) ]
|
insert-string
}
... and adding a keyword, giving something like ...
LOAD FROM filename [ DELIMITER delimiter ] [ SKIP line ]
{
INSERT INTO table-specification [ ( column [,...]) ]
|
insert-string
}
as that would not break compatibility.
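With such a keyword (to be clear, it does not exist today, this is purely a hypothetical extension), skipping a one-line header would be as simple as ...

LOAD FROM "filename.unl" DELIMITER "|" SKIP 1 INSERT INTO test_table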
However, that does not help you now. Your current options might include ...
using tail to remove the first line before loading, something like
tail -n +2 filename > filename2
(the +2 tells tail to print from the second line onwards)
or, as also suggested here
https://stackoverflow.com/questions/28478657/informix-load-from-file-with-header , load into a temp table whose columns are all CHAR, and then delete or ignore the first row before inserting into the destination table.
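A minimal sketch of that temp-table approach, assuming a destination table test_table and a header line whose first field is the literal column heading (both made up for illustration) ...

CREATE TEMP TABLE tmp_load (col1 CHAR(20), col2 CHAR(20))
LOAD FROM "filename.unl" INSERT INTO tmp_load
-- the header row is the one whose first field holds the heading text rather than data
DELETE FROM tmp_load WHERE col1 = "col1"
INSERT INTO test_table SELECT * FROM tmp_load
DROP TABLE tmp_load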
or you can explore writing your own generic library function using base.Channel, base.SqlHandle, and base.StringTokenizer.
Something like the following could serve as a starting point; you would need to add error handling, cater for the case where the delimiter appears at the end of the line, etc.
CALL generic_load("filename.unl", "INSERT INTO test_table VALUES(?,?)", "|", TRUE)
...
FUNCTION generic_load(filename STRING, sql STRING, delimiter STRING, omit_firstrow BOOLEAN)
    DEFINE ch base.Channel
    DEFINE hdl base.SqlHandle
    DEFINE col_idx INTEGER
    DEFINE line STRING
    DEFINE tok base.StringTokenizer

    LET ch = base.Channel.create()
    CALL ch.openFile(filename, "r")
    LET hdl = base.SqlHandle.create()
    CALL hdl.prepare(sql)

    BEGIN WORK
    CALL hdl.open()
    -- optionally discard the header line
    IF omit_firstrow THEN
        LET line = ch.readLine()
    END IF
    WHILE TRUE
        LET line = ch.readLine()
        IF ch.isEof() THEN
            EXIT WHILE
        END IF
        -- split the line on the delimiter, honouring backslash escapes,
        -- with empty fields returned as null tokens
        LET tok = base.StringTokenizer.createExt(line, delimiter, "\\", TRUE)
        LET col_idx = 0
        WHILE tok.hasMoreTokens()
            LET col_idx = col_idx + 1
            CALL hdl.setParameter(col_idx, tok.nextToken())
        END WHILE
        CALL hdl.put()
    END WHILE
    CALL hdl.close()
    COMMIT WORK
    CALL ch.close()
END FUNCTION
(It would be nice if there was a method that combined ch.read() and hdl.setParameter() so as to avoid the tokenizer altogether; note also that the tokenizer doesn't have CSV handling as a delimiter option ...)
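For completeness, if the record structure is known in advance you can let the channel do the splitting with ch.setDelimiter() and ch.read(), at the cost of the genericity above. A rough sketch (test_table and the record layout are made up for illustration) ...

FUNCTION typed_load(filename STRING)
    DEFINE ch base.Channel
    DEFINE line STRING
    DEFINE rec RECORD
        col1 INTEGER,
        col2 CHAR(20)
    END RECORD
    LET ch = base.Channel.create()
    CALL ch.openFile(filename, "r")
    CALL ch.setDelimiter("|")
    -- discard the header line
    LET line = ch.readLine()
    -- read() fills the record fields from the next delimited line
    -- and returns FALSE at end of file
    WHILE ch.read([rec.*])
        INSERT INTO test_table VALUES (rec.col1, rec.col2)
    END WHILE
    CALL ch.close()
END FUNCTION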
Hope that helps,
Reuben