One of the web development projects I’m working on in Oracle APEX is uploading the contents of a CSV-format text file into the database. By default, any file uploaded through a “File Browse…” item lands in the “wwv_flow_files” table, whose structure is as follows:
[source language="sql"]
SQL> desc wwv_flow_files
Name Type Nullable Default Comments
------------ -------------- -------- ------- --------
ID NUMBER
FLOW_ID NUMBER
NAME VARCHAR2(90)
FILENAME VARCHAR2(400) Y
TITLE VARCHAR2(255) Y
MIME_TYPE VARCHAR2(48) Y
DOC_SIZE NUMBER Y
DAD_CHARSET VARCHAR2(128) Y
CREATED_BY VARCHAR2(255) Y
CREATED_ON DATE Y
UPDATED_BY VARCHAR2(255) Y
UPDATED_ON DATE Y
LAST_UPDATED DATE Y
CONTENT_TYPE VARCHAR2(128) Y
BLOB_CONTENT BLOB Y
LANGUAGE VARCHAR2(30) Y
DESCRIPTION VARCHAR2(4000) Y
FILE_TYPE VARCHAR2(255) Y
FILE_CHARSET VARCHAR2(128) Y
[/source]
Even though the text file contains character data, the uploaded file content is stored in binary form in a BLOB column named BLOB_CONTENT. Because of the nature of a BLOB, reading characters out requires a conversion using the “chr” function. Natalka Roshak wrote a good article about this concept and conversion on orafaq at http://www.orafaq.com/node/895.
I modified the sample script provided in the above article to suit my needs: reading each line of the text file and inserting it into a target table.
[source language="sql"]
DECLARE
  v_blob_data   BLOB;
  v_blob_len    NUMBER;
  v_position    NUMBER;
  v_raw_chunk   RAW(10000);
  v_char        CHAR(1);
  c_chunk_len   NUMBER := 1;
  v_line        VARCHAR2(32767) := NULL;
  v_data_array  wwv_flow_global.vc_arr2;
BEGIN
  -- Read data from wwv_flow_files
  SELECT blob_content INTO v_blob_data
    FROM wwv_flow_files WHERE name = 'F29800/Data_CSV.csv';
  v_blob_len := dbms_lob.getlength(v_blob_data);
  v_position := 1;
  -- Read and convert binary to char
  WHILE ( v_position <= v_blob_len ) LOOP
    v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, v_position);
    v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_line := v_line || v_char;
    v_position := v_position + c_chunk_len;
    -- When a whole line is retrieved
    IF v_char = CHR(10) THEN
      -- Convert comma to : to use wwv_flow_utilities
      v_line := REPLACE (v_line, ',', ':');
      -- Convert each column separated by : into array of data
      v_data_array := wwv_flow_utilities.string_to_table (v_line);
      -- Insert data into target table
      EXECUTE IMMEDIATE 'insert into TABLE_X (v1, v2, v3, v4)
        values (:1,:2,:3,:4)'
        USING
          v_data_array(1),
          v_data_array(2),
          v_data_array(3),
          v_data_array(4);
      -- Clear out
      v_line := NULL;
    END IF;
  END LOOP;
END;
/
[/source]
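Note that the script above calls a hex_to_decimal helper, which is not built into Oracle. The version I used, based on Connor McDonald’s base-conversion function (via the orafaq article linked above), is:

[source language="sql"]
create or replace function hex_to_decimal
-- this function is based on one by Connor McDonald
-- http://www.jlcomp.demon.co.uk/faq/base_convert.html
( p_hex_str in varchar2 ) return number
is
  v_dec number;
  v_hex varchar2(16) := '0123456789ABCDEF';
begin
  v_dec := 0;
  -- walk the hex string one digit at a time, accumulating base-16 value
  for indx in 1 .. length(p_hex_str)
  loop
    v_dec := v_dec * 16 + instr(v_hex, upper(substr(p_hex_str, indx, 1))) - 1;
  end loop;
  return v_dec;
end hex_to_decimal;
/
[/source]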
Hi Ittichai,
Awesome post !! this really helped me for my similar project.
Thanks a lot again !!
Regards,
Advait Deo
Good to hear that. Thanks for stopping by. IC
It’s good, but I am facing one issue. In my CSV files one of the columns contains a , (comma), because of which the above code treats one column’s text as two columns.
For example, one column, Name, has the value “Advait,Deo”, but the code treats it as two columns, “Advait” and “Deo”.
Is there any workaround for this?
Thanks.
The question is: is it a one-time thing?
If it is, after the line below, which converts every “,” into “:” for the wwv utility,
-- Convert comma to : to use wwv_flow_utilities
v_line := REPLACE (v_line, ',', ':');
you could convert Advait:Deo back into Advait,Deo using the same REPLACE command:
v_line := REPLACE (v_line, 'Advait:Deo', 'Advait,Deo');
Otherwise, you might need to use a different delimiter (e.g., ~) instead of “,”.
The replace would then be:
v_line := REPLACE (v_line, '~', ':');
Let me know whether or not it works.
IC
Hi IC,
I used your code for my development but it’s not working for me; I am getting the error “No Data Found”.
I used the same code as below:
v_line := REPLACE (v_line, '~', ':');
How do I solve this issue?
Thanks,
Raj.
Obviously there is no data 🙂 You will have to perform some debugging, e.g., adding dbms_output or htp.prn calls, or use a development tool like SQL Developer to do it.
Hi Ittichai,
Thanks for your reply. I’m new to APEX. While debugging I got a string like the example below.
My data is: 1,:”My,Name”,”Raj,Verma”,1985
I got the string: 1:”My:Name”:”Raj:Verma”:1985
Please help me with this issue.
Thanks,
Raj.
Hi
I am trying to use the code here. I am getting the following error:
ORA-12899: value too large for column "TS_APEX"."TABLE_X"."V1" (actual: 913, maximum: 400)
Error
and the debug looks like >>
Branch point: BEFORE_PROCESSING
0.30: Processing point: AFTER_SUBMIT
0.30: …Process "Data_to_table": PLSQL (AFTER_SUBMIT) DECLARE v_blob_data BLOB; v_blob_len NUMBER; v_position NUMBER; v_raw_chunk RAW(10000); v_char CHAR(1); c_chunk_len number := 1; v_line VARCHAR2 (32767) := NUL
5.16: Encountered unhandled exception in process type PLSQL
5.16: Show ERROR page…
5.16: Performing rollback…
Thanks
Can you please describe the table_x fields? I suspect there might be a problem. Please advise.
Yes, that is likely the issue. Actually it does not matter what I have; the column lengths of the target table should be big enough for your data.
Looking at your posted error, I assume that your V1 column is only 400 characters long, but the data is 913 characters:
ORA-12899: value too large for column "TS_APEX"."TABLE_X"."V1" (actual: 913, maximum: 400)
I guess if you make it bigger, it should be okay. Let me know how it goes.
Hi Ittichai
I figured out the error. The file was not saved as a .csv file; it was saved as a .xls file, which was causing the problem. If my first row of data contains column names, I would like to skip it while inserting into the table. Is there any way to skip the first row?
Thanks for your time.
Sandeep
Glad to hear that you figured the issue out.
Since the script reads line by line, you can simply add logic (e.g., a line counter) to ignore the first line.
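For example, a minimal sketch (the v_line_count variable is one I’m introducing here, not part of the original script): declare a counter, bump it each time a complete line has been assembled, and only parse and insert from the second line onward.

[source language="sql"]
-- In the declaration section:
v_line_count NUMBER := 0;

-- Inside the IF v_char = CHR(10) THEN block, wrap the parse/insert:
v_line_count := v_line_count + 1;
IF v_line_count > 1 THEN  -- skip the header row
  v_line := REPLACE (v_line, ',', ':');
  v_data_array := wwv_flow_utilities.string_to_table (v_line);
  EXECUTE IMMEDIATE 'insert into TABLE_X (v1, v2, v3, v4)
    values (:1,:2,:3,:4)'
    USING v_data_array(1), v_data_array(2), v_data_array(3), v_data_array(4);
END IF;
v_line := NULL;
[/source]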
Hi. I’m trying to use this, but when it reaches the point of saving to the existing table, it tells me ORA-00947: not enough values.
Bonnie,
This has to do with the fact that the number of columns in the INSERT must match the number of values in the USING clause.
The sample below has 4 values:
EXECUTE IMMEDIATE 'insert into TABLE_X (v1, v2, v3, v4)
  values (:1,:2,:3,:4)'
  USING
    v_data_array(1),
    v_data_array(2),
    v_data_array(3),
    v_data_array(4);
Sorry. I’ve been on vacation. This is my code:
EXECUTE IMMEDIATE 'insert into UPLOAD_TABLE (application, extension, phone)
  VALUES (:1,:2,:3)'
  USING v_data_array(1),
        v_data_array(2),
        v_data_array(3);
I am still getting “not enough values”.
Bonnie,
That is interesting. ORA-00947 (not enough values) is an insert issue. Are you able to run the insert in SQL*Plus? What is the table definition?
Great Article! Works like a treat!
One question though: how would you go about reporting the success of the upload back to the user? Specifically, I am looking to report how many records were inserted, how many failed, etc., just like the standard data upload mechanism.
Any ideas?
David,
I would think the easiest way is to catch the exception. You can simply wrap the EXECUTE IMMEDIATE insert in its own BEGIN/END block and add an EXCEPTION handler.
BEGIN
  :
  :
  BEGIN
    EXECUTE IMMEDIATE 'insert into TABLE_X (v1, v2, v3, v4)
      values (:1,:2,:3,:4)'
      USING
        v_data_array(1),
        v_data_array(2),
        v_data_array(3),
        v_data_array(4);
  EXCEPTION
    WHEN OTHERS THEN
      [[Handle a failed record - you could put the failed data into a temp table]]
  END;
  :
  :
END;
IC
To answer “anonymous”, who asked this question:
“I am using oracle apex 3.1 connected to oracle, i have a table named adjbatch, i want this information to be uploaded to a CSV file and saved in the local directly. How do i do this in APEX.”
You can use the standard “export to csv” feature that is part of a report, or read Scott Spendolini’s post on how to do a custom export to CSV at http://spendolini.blogspot.com/2006/04/custom-export-to-csv.html.
Hi,
Super post, Need to mark it on Digg
Thank you
GlenStef
I am not able to query wwv_flow_files using any method other than within APEX. I am using PL/SQL Developer.
Is there an issue with privileges? I typically find it easier to run queries outside APEX.
Any thoughts or experience on this?
David,
Querying these APEX objects outside APEX requires setting the workspace ID first. I ran the steps below from PL/SQL Developer’s command window.
First you will need to identify the workspace ID.
SQL> col WORKSPACE_ID format 999999999999999
SQL> select workspace, schemas, workspace_id from apex_workspaces;
WORKSPACE SCHEMAS WORKSPACE_ID
---------- ---------- -------------------
EDW 1 858728211457679
You won’t be able to see anything yet.
SQL> select id, flow_id, name, filename from wwv_flow_files;
ID FLOW_ID NAME FILENAME
---------- ---------- ------------------------------ --------------------
But after setting the workspace ID, you should be able to see its contents.
SQL> exec wwv_flow_api.set_security_group_id('858728211457679');
PL/SQL procedure successfully completed
SQL> select id, flow_id, name, filename from wwv_flow_files;
ID FLOW_ID NAME FILENAME
---------- ---------- ------------------------------ --------------------
1.64130293 488 F31299/matrix_order_installer_ matrix_order_install
1.0.sql er_1.0.sql
hi ,
I got this error while uploading the Excel sheet: ORA-01403: no data found. Can you help me solve this error?
Ram,
The best approach to find the root cause is to use a PL/SQL tool (like Toad or PL/SQL Developer) to debug the code line by line.
If you give me more information about your issue, I can try to help out. I also recommend taking advantage of Oracle’s forums.
Ittichai
hi,
Thanks for replying. I have solved this issue by debugging line by line. Thanks a lot for posting this.
Hi
This helps a lot with CSV files, but what do you do when the upload file is a fixed-length ASCII file with a carriage return at the end of each line?
You can simply replace the comma-conversion lines and the wwv_flow_utilities.string_to_table call with a loop that parses (substr) through your fixed-length data.
IF v_char = CHR(10) THEN
  -- Convert comma to : to use wwv_flow_utilities
  -- v_line := REPLACE (v_line, ',', ':');
  -- Convert each column separated by : into array of data
  -- v_data_array := wwv_flow_utilities.string_to_table (v_line);
  for i in 1 .. v_num_of_cols loop
    v_data_array(i) := substr(v_line, ((i-1)*v_col_length)+1, v_col_length);
  end loop;
  :
  :
Note that in the declaration section you will have to create two numeric variables, v_num_of_cols and v_col_length, for the number of columns in each row and their fixed length, respectively. I tested with a small set of data and it seems to work fine.
Hope this helps.
Ittichai
Hi
Yes, it helped, thank you very much. It worked perfectly until I got a file where the column lengths are not all the same size. This is the file:
SPPTY001 ASHAI DN SHAI DN 0000000123456789 051001TSALARIS / SALARY 010000028954600000000000999999999990000000000000000000000200905070000000001
SPPTY003 AKENNEDY B KENNEDY B 0000000123456789 632005SSALARIS / SALARY 010000034875700000000000999999999990000000000000000000000200905070000000001
The column lengths are 16,1,30,30,26,6,1,20,2,11,44,8,10.
I am very new to APEX and Oracle; could you please assist me?
You may just need to create two new variables: a numeric starting pointer (starting at 1), and a numeric array containing the column lengths.
On each loop iteration you just keep moving the pointer to the beginning of the next column.
for i in 1 .. v_num_of_cols loop
v_data_array(i) := substr(v_line, v_start, v_col_length(i));
v_start := v_start + v_col_length(i);
end loop;
Thank you very much for all your help – this is brilliant!
Hi Ittichai
I have tried your solution but I have run into a problem. I get an ORA-00928: missing SELECT keyword error when running the script.
Here is a copy of my code.
DECLARE
  v_blob_data   BLOB;
  v_blob_len    NUMBER;
  v_position    NUMBER;
  v_raw_chunk   RAW(10000);
  v_char        CHAR(1);
  c_chunk_len   number := 1;
  v_line        VARCHAR2(32767) := NULL;
  v_data_array  wwv_flow_global.vc_arr2;
BEGIN
  -- Read data from wwv_flow_files
  select blob_content
    into v_blob_data
    from wwv_flow_files
   where name = :P57_UPLOAD;
  v_blob_len := dbms_lob.getlength(v_blob_data);
  v_position := 1;
  -- Read and convert binary to char
  WHILE ( v_position <= v_blob_len ) LOOP
    v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, v_position);
    v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_line := v_line || v_char;
    v_position := v_position + c_chunk_len;
    -- When a whole line is retrieved
    IF v_char = CHR(10) THEN
      -- Convert comma to : to use wwv_flow_utilities
      v_line := REPLACE (v_line, ',', ':');
      -- Convert each column separated by : into array of data
      v_data_array := wwv_flow_utilities.string_to_table (v_line);
      -- Insert data into target table
      EXECUTE IMMEDIATE 'insert into DATA_UPLD (DATE, AMOUNT, BALANCE, DESCRIPTION)
        values (:1,:2,:3,:4)'
        USING
          to_date(v_data_array(1),'yy/mm/dd'),
          to_number(v_data_array(2)),
          to_number(v_data_array(3)),
          v_data_array(4);
      -- Clear out
      v_line := NULL;
    END IF;
  END LOOP;
END;
Any assistance to this newbie will be greatly appreciated.
Regards
Jacques
Jacques,
I tried your code on apex.oracle.com (http://apex.oracle.com/pls/otn/f?p=51832:15) and did not see any issue. The only problem is the table column named DATE, which is not allowed, so I created it as UPLD_DATE instead.
The csv file has data as follows:
09/07/01,2,3,ABC
09/01/08,3,1,DEF
Every time the Upload button is clicked, the above process is executed, so you will see two rows being inserted per click.
Please check your table’s column names and/or let me know which line the error is on.
Hi Ittichai,
I have tried your solution and it works very well.
Great job!
In my teamwork we have built the upload procedure for CSV files on Apex.
During the upload phase we have noticed that some files have problems.
The reason is that some strings are UTF-8 encoded (for example, some Swedish customers with special characters).
We have tried to solve this problem but unfortunately we couldn’t.
This is our code:
declare
  v_blob_data   blob;
  v_blob_len    number;
  v_position    number;
  v_raw_chunk   raw(10000);
  v_char        char(1);
  c_chunk_len   number := 1;
  v_line        varchar2(32767) := null;
  v_data_array  wwv_flow_global.vc_arr2;
  v_yes_no      varchar2(1) := 'Y';
  v_user        varchar2(255) := 'ADMIN';
  v_load_date   date := trunc(sysdate);
  v_rec_no      number := 1;
begin
  XXDESCOR.PR_XXDESCOR_TRUNC_UPL_SALESINC();
  commit;
  XXDESCOR.PR_XXDESCOR_ANALYZE();
  -- read data from wwv_flow_files
  select blob_content into v_blob_data
    from wwv_flow_files where name = :P15_UPLOAD_CSV;
  v_blob_len := dbms_lob.getlength(v_blob_data);
  v_position := 1;
  -- read and convert binary to char
  while ( v_position <= v_blob_len ) loop
    v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, v_position);
    v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_line := v_line || v_char;
    v_position := v_position + c_chunk_len;
    -- when a whole line is retrieved
    if v_char = chr(10) then
      -- convert semicolon to : to use wwv_flow_utilities
      v_line := replace (v_line, ';', ':');
      v_line := replace (v_line, chr(13), '');
      v_line := replace (v_line, chr(10), '');
      -- convert each column separated by : into array of data
      v_data_array := wwv_flow_utilities.string_to_table (v_line);
      -- insert data into target table
      execute immediate 'insert into DE_SCOR_UPL_SALES_INC (SCORECARD_ID, SCENARIO, SCORECARD_VIEW, ENTITY_OF_INVOICE
        , SALES_CHANNEL, CUSTOMER, COUNTRY, APPLICATION, PRODUCT_FAMILY
        , UNITS, NET_TURNOVER, TOTAL_COGS, VCE, G_AND_A, CENTRAL_R_AND_D
        , SALES_AND_MARKETING, CENTRAL_SALES_AND_MARKETING, OTHER_INC_EXP
        , BAD_DEBT_PROVISION, OTHER_VARIANCES, PROVISION, IP_SFC_VARIANCE
        , OTHER_NON_OPERATING, DES_YEAR, DES_MONTH, IS_ACTIVE, CREATED_BY
        , CREATION_DATE, LAST_UPDATE_BY, LAST_UPDATE_DATE)
        values (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15, :16, :17, :18, :19, :20
        , :21, :22, :23, :24, :25, :26, :27, :28, :29, :30)'
        using
          v_rec_no,
          trim(v_data_array(1)),
          trim(v_data_array(2)),
          trim(v_data_array(3)),
          trim(v_data_array(4)),
          trim(v_data_array(5)),
          trim(v_data_array(6)),
          trim(v_data_array(7)),
          trim(v_data_array(8)),
          to_number(trim(v_data_array(9))),
          to_number(trim(v_data_array(10))),
          to_number(trim(v_data_array(11))),
          to_number(trim(v_data_array(12))),
          to_number(trim(v_data_array(13))),
          to_number(trim(v_data_array(14))),
          to_number(trim(v_data_array(15))),
          to_number(trim(v_data_array(16))),
          to_number(trim(v_data_array(17))),
          to_number(trim(v_data_array(18))),
          to_number(trim(v_data_array(19))),
          to_number(trim(v_data_array(20))),
          to_number(trim(v_data_array(21))),
          to_number(trim(v_data_array(22))),
          trim(v_data_array(23)),
          trim(v_data_array(24)),
          v_yes_no,
          v_user,
          v_load_date,
          v_user,
          v_load_date;
      -- clear out
      v_line := null;
      v_rec_no := v_rec_no + 1;
    end if;
  end loop;
end;
Could you help us?
Any assistance will be greatly appreciated.
Giorgio,
What exact error message did you receive? Can you please send me your sample data?
Thank you for your quick answer.
The thing is that we don’t get any errors, because the import procedure completes successfully.
At the end of that procedure we have a check that matches the CSV file against the imported data; it is in this phase that we get an error showing a problem with a Swedish customer name.
One of our customers has this name:
Båt & Maskin
but when we put it in the db and check in APEX on the web application (so it is not a problem with the codification, because on the web we have the same transcoding), we get this record:
B t & Maskin
So the issue is the codification of the field during the upload, not the upload of the file itself; in fact the other rows are perfect.
If you have any further questions I’m glad to explain my problem better.
Regards
Hi Ittichai,
I am posting an example of the code we are using and a sample CSV of our customers.
If you can help us, we will appreciate it very much.
This is a sample of the customers csv:
Ferrari;Italia
Båt & Maskin;Svezia
Jörgen Lushgy;Norvegia
Gonzales;Spagna
The upload function used for import is:
declare
  v_blob_data   blob;
  v_blob_len    number;
  v_position    number;
  v_raw_chunk   raw(10000);
  v_char        char(1);
  c_chunk_len   number := 1;
  v_line        varchar2(32767) := null;
  v_data_array  wwv_flow_global.vc_arr2;
  v_yes_no      varchar2(1) := 'Y';
  v_user        varchar2(255) := 'ADMIN';
  v_load_date   date := trunc(sysdate);
  v_rec_no      number := 1;
begin
  XXDESCOR.PR_XXDESCOR_TRUNC_UPL_SALESSTK();
  commit;
  XXDESCOR.PR_XXDESCOR_ANALYZE();
  -- read data from wwv_flow_files
  select blob_content into v_blob_data
    from wwv_flow_files where name = :P22_UPLOAD_CSV;
  v_blob_len := dbms_lob.getlength(v_blob_data);
  v_position := 1;
  -- read and convert binary to char
  while ( v_position <= v_blob_len ) loop
    v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, v_position);
    v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_line := v_line || v_char;
    v_position := v_position + c_chunk_len;
    -- when a whole line is retrieved
    if v_char = chr(10) then
      -- convert semicolon to : to use wwv_flow_utilities
      v_line := replace (v_line, ';', ':');
      v_line := replace (v_line, chr(13), '');
      v_line := replace (v_line, chr(10), '');
      -- convert each column separated by : into array of data
      v_data_array := wwv_flow_utilities.string_to_table (v_line);
      -- insert data into target table
      execute immediate 'insert into DE_SCOR_UPL_CHAR (RAGIONE_SOCIALE, NAZIONE, CREATED_BY, CREATION_DATE, LAST_UPDATE_BY, LAST_UPDATE_DATE)
        values (:1, :2, :3, :4, :5, :6)'
        using
          trim(v_data_array(1)),
          trim(v_data_array(2)),
          v_user,
          v_load_date,
          v_user,
          v_load_date;
      -- clear out
      v_line := null;
      v_rec_no := v_rec_no + 1;
    end if;
  end loop;
end;
The result on Apex is:
Ferrari Italia ADMIN 20-OTT-09 ADMIN 20-OTT-09
B t & Maskin Svezia ADMIN 20-OTT-09 ADMIN 20-OTT-09
Jrgen Lushgy Norvegia ADMIN 20-OTT-09 ADMIN 20-OTT-09
Gonzales Spagna ADMIN 20-OTT-09 ADMIN 20-OTT-09
I hope this is enough for you to understand our problem.
If you have any further questions, let me know.
Regards.
This works great, but I have one issue. When a value being inserted into the table contains commas, APEX gives me an “invalid number” error. How would I get around that?
Cedrick,
This is a limitation of this code: it needs a delimiter to distinguish each column’s data, and a comma is commonly used. If you have control over how this CSV file is generated, you can try using a different delimiter such as ~ or ^.
hi
When I upload an Excel file, some sort of unreadable characters are stored in the database.
In my case a group of people will upload Excel files; those files contain 10-15 questions each, and I want those questions added to my database.
Jayanth,
These junk characters are formatting from Excel. Adapting from my code, Advait wrote a post about uploading an Excel file into the database. You should check it out at http://advait.wordpress.com/2008/05/21/uploading-excel-sheet-using-oracle-application-express-apex/. This may solve your issue.
Hi,
I tried to use your solution, with the following settings:
– A table with 3 columns
– A CSV which has been uploaded via an APEX file chooser, with values:
1,2,3
1,2,3
1,2,3
When trying to use the script, I get the following error message:
ORA-00928: missing SELECT keyword
Using exception handling, I narrowed it down to the following line:
EXECUTE IMMEDIATE 'insert into UPLOAD_TEST (1, 2, 3) values (:1,:2,:3)' USING v_data_array(1), v_data_array(2), v_data_array(3);
Do you know what the problem could be?
Thank you very much for your help in advance!
Best regards,
Thorben
Thorben,
I think the column names you specified are not correct. The three values after “insert into UPLOAD_TEST” must be column names, not numbers.
EXECUTE IMMEDIATE 'insert into UPLOAD_TEST (col1, col2, col3) values (:1,:2,:3)' USING v_data_array(1), v_data_array(2), v_data_array(3);
Please, kill me now :/
Ouch…
Thank you very much!
Hi,
Can this be used with a dblink?
I assume something like this:
“…. insert into DATA_UPLD@DBLINK_NAME …”
Yes, you can, if the dblink account has the privilege to do so. Please be cautious about using a dblink for DML.
Hi –
I’m getting the ORA-01403: no data found error because some of my columns contain null values.
How do I get around that?
Thanks,
Matt
You can try using the NVL function to replace NULL with another value before the insertion.
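For instance, a minimal sketch, assuming the second column is the one that may be empty (the 'N/A' placeholder is only an example, not part of the original script):

[source language="sql"]
-- Substitute a default for a NULL column value before the insert
v_data_array(2) := nvl(v_data_array(2), 'N/A');
[/source]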
Hi, I am working on APEX now.
I tried to use your code (here is a copy of my code):
DECLARE
v_blob_data BLOB;
v_blob_len NUMBER;
v_position NUMBER;
v_raw_chunk RAW(10000);
v_char CHAR(1);
c_chunk_len number :=1;
v_line VARCHAR2 (32767) :=NULL;
v_data_array wwv_flow_global.vc_arr2;
v_rows number;
v_sr_no number := 1;
BEGIN
— Read data from wwv_flow_files
delete from ZONNEPANEEL_3;
select blob_content into v_blob_data
from wwv_flow_files
where last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = :APP_USER)
and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER);
v_blob_len := dbms_lob.getlength(v_blob_data);
v_position :=1;
— Read and convert binary to char
WHILE ( v_position Column DATUM
38,003 –> EPRODUCTIE
Alex,
It seems like your comment post was cut off. 🙂
Ittichai
If the entire last column in the Excel file is null, it gives the error no_data_found.
How do I get around that?
Even though I use NVL, it doesn’t serve the purpose.
Where should I use the NVL?
Note: the problem occurs only when the entire last column is blank.
Supriya,
I tried with test data where the last column is empty, and it can still load data into the table. Have you tried running debug to see where it raised the no_data_found exception? Please note that if you run the code outside APEX, you will get the same exception if the following is not set:
wwv_flow_api.set_security_group_id(858728211457679);
The long number is the workspace_id, which can be retrieved by querying: select workspace_id from apex_workspaces;
Ittichai
Hi Ittichai,
I’m using the above code to upload a csv into the database and it is showing me this error
ORA-01858: a non-numeric character was found where a numeric was expected.
my table structure is
desc fin_consultant;
Name                   Null     Type
---------------------- -------- --------------
EMP_NUMBER VARCHAR2(20)
NAME VARCHAR2(400)
LEVEL VARCHAR2(10)
TYPE VARCHAR2(200)
LOCATION VARCHAR2(100)
EFFECTIVE_START_DATE DATE
EFFECTIVE_END_DATE DATE
and the code I’m using is :
DECLARE
  v_blob_data   BLOB;
  v_blob_len    NUMBER;
  v_position    NUMBER;
  v_raw_chunk   RAW(10000);
  v_char        CHAR(1);
  c_chunk_len   number := 1;
  v_line        VARCHAR2(32767) := NULL;
  v_data_array  wwv_flow_global.vc_arr2;
BEGIN
  delete from fin_consultant;
  -- Read data from wwv_flow_files
  select blob_content into v_blob_data
    from wwv_flow_files
   where last_updated = (select max(last_updated) from wwv_flow_files where updated_by = :APP_USER)
     and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER);
  v_blob_len := dbms_lob.getlength(v_blob_data);
  v_position := 1;
  -- Read and convert binary to char
  WHILE ( v_position <= v_blob_len ) LOOP
    v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, v_position);
    v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_line := v_line || v_char;
    v_position := v_position + c_chunk_len;
    -- When a whole line is retrieved
    IF v_char = CHR(10) THEN
      -- Strip the trailing CR/LF, then convert comma to : to use wwv_flow_utilities
      v_line := substr(v_line, 1, length(v_line)-2);
      v_line := REPLACE (v_line, ',', ':');
      -- Convert each column separated by : into array of data
      v_data_array := wwv_flow_utilities.string_to_table (v_line);
      -- Insert data into target table
      EXECUTE IMMEDIATE 'insert into FIN_CONSULTANT (EMP_NUMBER, NAME, LEVEL, TYPE, LOCATION, EFFECTIVE_START_DATE, EFFECTIVE_END_DATE)
        values (:1,:2,:3,:4,:5,:6,:7)'
        USING
          v_data_array(1),
          v_data_array(2),
          v_data_array(3),
          v_data_array(4),
          v_data_array(5),
          to_date(v_data_array(6),'dd/mm/yyyy'),
          to_date(v_data_array(7),'dd/mm/yyyy');
      -- Clear out
      v_line := NULL;
    END IF;
  END LOOP;
END;
Please help.
Regards,
Priyanka
Priyanka,
Does it tell you which line? You can use a development tool like PL/SQL Developer or SQL Developer to debug the code. But please note that the data will be visible only when the security group ID is set properly; please see my previous comment on how to do so.
Ittichai
Hi ittichai,
I have a CSV file with the following data.
1,2,”Name1,Surname1″
3,4,”Name2,Surname2″
5,6,”Name3,Surname3″
but the problem is that the surnames will not be uploaded correctly, since there is a ‘,’ inside the third column.
What is the best way to handle this problem?
Is there a particular function that will not replace ‘,’ within the double quotation marks?
Sushi,
You can still let the code separate those two fields (Name and Surname), but when you insert the record, you can then concatenate them, like v_data_array(3) || v_data_array(4), in the insert statement.
Hi,
I am having a small issue. In my CSV file I have a text field which sometimes has a colon inside, eg.:
Project:2000
Since we are converting our chosen delimiter to : (colon) for the wwv_flow_utilities.string_to_table function, after processing a line where there is an extra colon (inside a field) the application fails, since it expects other values for the next columns.
How would you propose fixing this issue, assuming that the text inserted into the table should still have the colon inside it?
Regards,
Pawel.
Pawel,
You can try converting “:” to another delimiter first, then converting it back to the colon before the insert.
-- Additional line to replace : with ~
v_line := REPLACE (v_line, ':', '~');
-- Perform separation into an array
v_data_array := wwv_flow_utilities.string_to_table (v_line);
-- Before insert, convert ~ in the column back to :
v_data_array(3) := REPLACE (v_data_array(3), '~', ':');
Hi,
How can I retrieve data from an Excel sheet into an Oracle database? I also need to perform some DML there so that the changes are reflected in the Excel sheet as well. I need a detailed solution.
The sample here is for CSV files, so you would have to convert from XLS to CSV first. If you still insist on working with Excel directly, you may try a 3rd-party product such as http://www.excel-db.net/Update%20Excel-DB.htm.
Hi,
Uploading a file with English characters works fine, but it does not support characters other than English. Can anyone help me with a solution for this?
Regards
Kumaran V
kumaran.vijayarangan@cognizant.com
Handling international languages is not in the scope of this post. The concept of this post is to read the whole file into a LOB, then read it line by line, then parse each line with a delimiter. I’m not sure whether the junk characters you’re talking about come from being processed by the script, or from your database not being able to handle the Chinese language. Please check with your DBA on whether or not you can read and write Chinese first.
Hi,
To add to the above issue: when I tried uploading a file with Chinese characters, the data loaded into the table seems to be junk.
Please help me if anybody has faced a similar problem.
Regards
Kumaran V
kumaran.vijayarangan@cognizant.com
Pingback: Application Express, CSV Upload into an Oracle Table « tschafer life
Thank you for publishing this solution. It does the task I was striving for, but performance seems to be an issue with files that have a thousand or more records. Have you noticed similar behavior?
It is possible. However, we're using this code in a small application that loads only a couple hundred records a day. If performance is a concern, you should look at alternatives such as SQL*Loader or 3rd-party ETL tools.
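For comparison, the same kind of load in SQL*Loader needs only a small control file. This is a hedged sketch in which the file, table, and column names (borrowed from the CONTRACT example elsewhere in this thread) are all assumptions:

```sql
-- contract.ctl: a minimal SQL*Loader control file (all names hypothetical)
LOAD DATA
INFILE 'Data_CSV.csv'
APPEND
INTO TABLE contract
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
( contract_number
, item_name
, start_date  DATE "YYYY-MM-DD"
, end_date    DATE "YYYY-MM-DD"
, csi_number
, serial_number
)
```

Run it from the command line with something like `sqlldr userid=scott/tiger control=contract.ctl log=contract.log`. SQL*Loader parses and inserts in bulk, so it scales far better than byte-by-byte BLOB reading.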
Keep writing, you're doing well at it.
I am very thankful for this topic because it really gives great information.
Good morning,
I’m new in the process of programming in Oracle.
--> This is the process I'm working on:
DECLARE
v_blob_data BLOB;
v_blob_len NUMBER;
v_position NUMBER;
v_raw_chunk RAW(10000);
v_char CHAR(1);
c_chunk_len number := 1;
v_line VARCHAR2(32767) := NULL;
v_data_array wwv_flow_global.vc_arr2;
function hex_to_decimal
-- this function is based on one by Connor McDonald
-- http://www.jlcomp.demon.co.uk/faq/base_convert.html
( p_hex_str in varchar2 ) return number
is
v_dec number;
v_hex varchar2(16) := '0123456789ABCDEF';
begin
v_dec := 0;
for indx in 1 .. length(p_hex_str)
loop
v_dec := v_dec * 16 + instr(v_hex, upper(substr(p_hex_str, indx, 1))) - 1;
end loop;
return v_dec;
end hex_to_decimal;
BEGIN
-- Read data from wwv_flow_files
select blob_content into v_blob_data
from wwv_flow_files where name = 'F624031668/sui_facturacion_alcantarillado_15085_2010_11_76122_001.csv';
v_blob_len := dbms_lob.getlength(v_blob_data);
v_position := 1;
-- Read and convert binary to char
WHILE ( v_position <= v_blob_len ) LOOP
…
For HEX_TO_DECIMAL I found a function here, http://www.orafaq.com/node/895, that I copied and pasted:
function hex_to_decimal
-- this function is based on one by Connor McDonald
-- http://www.jlcomp.demon.co.uk/faq/base_convert.html
( p_hex_str in varchar2 ) return number
is
v_dec number;
v_hex varchar2(16) := '0123456789ABCDEF';
begin
v_dec := 0;
for indx in 1 .. length(p_hex_str)
loop
v_dec := v_dec * 16 + instr(v_hex, upper(substr(p_hex_str, indx, 1))) - 1;
end loop;
return v_dec;
end hex_to_decimal;
To be honest, I don't quite understand what the function does, but the first error has ceased to exist.
The second error appears in the following statement:
-- Read data from wwv_flow_files
select blob_content into v_blob_data
from wwv_flow_files where name = 'F624031668/sui_facturacion_alcantarillado_15085_2010_11_76122_001.csv';
Error report:
ORA-01403: no data found
ORA-06512: at line 32
01403. 00000 - "no data found"
*Cause:
*Action:
How is the information stored in the wwv_flow_files table? Or do I need to load the flat file into that table myself?
I am grateful for your attention and collaboration…
Good day…
The data stored in the BLOB is binary. dbms_lob.substr returns each chunk as RAW, and rawtohex renders it as hexadecimal text; that's why you need the HEX_TO_DECIMAL function to convert each hex byte to its decimal character code before further character processing with chr.
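As a one-byte worked example of that chain (hex_to_decimal is the helper from the orafaq article): the letter A is stored as the byte 0x41, which rawtohex renders as the string '41', hex_to_decimal turns into 65, and chr(65) turns back into 'A'.

```sql
-- Walk a single byte through the conversion chain used in the post
SELECT rawtohex(utl_raw.cast_to_raw('A')) AS hex_val,  -- '41'
       hex_to_decimal('41')               AS dec_val,  -- 65
       chr(65)                            AS char_val  -- 'A'
FROM dual;
```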
It's possible that you executed this block outside APEX, so your session has no view of the wwv_flow_files content. Find your workspace id with select workspace_id from apex_workspaces, then execute wwv_flow_api.set_security_group_id(123456890123456); (substituting your own workspace id) first within your block.
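A minimal sketch of that setup, run from SQL*Plus or SQL Developer rather than inside APEX (the file name is the one from the post; the workspace id is a placeholder you must replace with the value from apex_workspaces):

```sql
DECLARE
  v_blob_data BLOB;
BEGIN
  -- Set the APEX workspace context so wwv_flow_files becomes visible
  -- (the id below is a placeholder, not a real workspace id).
  wwv_flow_api.set_security_group_id(123456890123456);

  SELECT blob_content
    INTO v_blob_data
    FROM wwv_flow_files
   WHERE name = 'F29800/Data_CSV.csv';

  dbms_output.put_line('File size: ' || dbms_lob.getlength(v_blob_data));
END;
/
```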
Nice post; it helped me upload a CSV file in APEX. But I am getting a "no data found" error when I try to upload a CSV file with null values in some columns. Can anybody suggest a solution?
You can use the NVL function to handle the NULL value.
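For example, NVL can supply a default in the USING clause. In this sketch the table, columns, and default values are assumptions; note also that if a trailing field is missing from the line entirely, the corresponding array element may not exist at all, so checking v_data_array.count before binding helps too:

```sql
-- Guard the bind values with NVL (table, columns, and defaults are
-- hypothetical; adjust to your target table)
EXECUTE IMMEDIATE 'insert into contract (contract_number, item_name)
                   values (:1, :2)'
  USING NVL(v_data_array(1), '0'),
        NVL(v_data_array(2), 'UNKNOWN');
```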
Is this inserted into the Processes on the page, as a PL/SQL anonymous block?
Yes it is in the Process After Submit.
I must be doing something wrong?
DECLARE
v_blob_data BLOB;
v_blob_len NUMBER;
v_position NUMBER;
v_raw_chunk RAW(10000);
v_char CHAR(1);
c_chunk_len number := 1;
v_line VARCHAR2 (32767) := NULL;
v_data_array wwv_flow_global.vc_arr2;
BEGIN
-- Read data from wwv_flow_files
select blob_content into v_blob_data
from wwv_flow_files where name = 'F1923492865/4161340.csv';
v_blob_len := dbms_lob.getlength(v_blob_data);
v_position := 1;
-- Read and convert binary to char
WHILE ( v_position <= v_blob_len ) LOOP
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
v_position := v_position + c_chunk_len;
-- When a whole line is retrieved
IF v_char = CHR(10) THEN
-- Convert comma to : to use wwv_flow_utilities
v_line := REPLACE (v_line, ',', ':');
-- Convert each column separated by : into array of data
v_data_array := wwv_flow_utilities.string_to_table (v_line);
-- Insert data into target table
EXECUTE IMMEDIATE 'insert into CONTRACT (CONTRACT_NUMBER, ITEM_NAME, START_DATE, END_DATE, CSI_NUMBER, SERIAL_NUMBER)
values (:1,:2,:3,:4,:5,:6)'
USING
v_data_array(1),
v_data_array(2),
v_data_array(3),
v_data_array(4),
v_data_array(5),
v_data_array(6);
-- Clear out
v_line := NULL;
END IF;
END LOOP;
END;
/
Yields this error
1 error has occurred
* ORA-06550: line 46, column 1: PLS-00103: Encountered the symbol "/" when expecting one of the following: ( begin case declare end exception exit for goto if loop mod null pragma raise return select update while with << continue close current delete fetch lock insert open rollback savepoint set sql execute commit forall merge pipe purge The symbol "/" was ignored.
Ok, I cleared that error by removing the last /
After process i get this page error…
1 error has occurred
* ORA-06550: line 21, column 17: PLS-00201: identifier 'HEX_TO_DECIMAL' must be declared ORA-06550: line 21, column 2: PL/SQL: Statement ignored
You will have to create that function. Please read http://www.orafaq.com/node/895 (the URL in the post). The sample script at the bottom shows the code for that function.
1 error has occurred
* ORA-01422: exact fetch returns more than requested number of rows
Check that…Got it.
Thank You.
Ok, I finally got it….
Regarding File Upload in APEX, the latest is the new Excel upload released with APEX 4.0…
Read more here:
http://krisrice.blogspot.com/2010/02/yet-another-apex-listener-ea.html
The key is that the listener needs to be updated. It doesn't appear to be on oracle.com yet.
So, here is a sample application that should help people using the code mentioned above.
http://apex.oracle.com/pls/apex/f?p=51467:LOGIN:4111591425073094:::::
User: test Password : test
YEAH! Like we say in France: "c'est d'la bombe, bébé !" ("it's the bomb, baby!") 🙂 Thank you…
Greetings. Nice post. I would like to know whether it's possible to use this code outside APEX. I want to run a scheduled job using a CSV file stored in a specific location. I've already used sqlldr, but I want to know if this can be a plan B for me. Thanks in advance.
Yes, it can be done outside APEX. The only thing you might need to copy out or find a replacement for is wwv_flow_utilities, because it is an APEX built-in.
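A minimal stand-in for wwv_flow_utilities.string_to_table, for use outside APEX, might look like this sketch (the collection type and function names are assumptions, not part of any Oracle API):

```sql
-- A simple replacement for wwv_flow_utilities.string_to_table
-- (t_vc_tab and split_line are hypothetical names).
CREATE OR REPLACE TYPE t_vc_tab AS TABLE OF VARCHAR2(32767);
/
CREATE OR REPLACE FUNCTION split_line (
  p_line  IN VARCHAR2,
  p_delim IN VARCHAR2 DEFAULT ':'
) RETURN t_vc_tab IS
  v_result t_vc_tab := t_vc_tab();
  v_start  PLS_INTEGER := 1;
  v_pos    PLS_INTEGER;
BEGIN
  LOOP
    v_pos := INSTR(p_line, p_delim, v_start);
    v_result.EXTEND;
    IF v_pos = 0 THEN
      -- Last (or only) field: take the rest of the line
      v_result(v_result.COUNT) := SUBSTR(p_line, v_start);
      EXIT;
    END IF;
    v_result(v_result.COUNT) := SUBSTR(p_line, v_start, v_pos - v_start);
    v_start := v_pos + LENGTH(p_delim);
  END LOOP;
  RETURN v_result;
END split_line;
/
```

For the file reading itself outside APEX, you would typically load the file into a BLOB via a DIRECTORY object and dbms_lob instead of querying wwv_flow_files.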
Hi,
Your code is really great and worked very well for me. But I have one question: when I upload a .csv file that includes a header row of column names, this code gives me an error about non-numeric characters. I have been forced to remove the header row from the .csv file before uploading; after that it works fine.
Can you suggest what I need to change in the code so that it handles the header row, or simply ignores it?
Thanks in advance.
Falguni
Falguni,
If you want to skip certain columns, you can skip that index of v_data_array in the insert statement. For example, this will skip column #2:
:
USING
v_data_array(1),
v_data_array(3),
v_data_array(4);
Hi,
Thank you for the reply. I do not want to skip columns; I want to skip the row of column names/headers at the top.
Falguni
You can simply add a new variable, v_line_count number := 0;
Then after each line is processed, increment it by 1.
-- When a whole line is retrieved
IF v_char = CHR(10) THEN
v_line_count := v_line_count + 1;
And add a condition not to insert when v_line_count = 1 (discarding the first line, which is the column header):
IF v_line_count > 1 THEN -- Skip the first line (the header row)
-- Insert data into target table
EXECUTE IMMEDIATE 'insert into TABLE_X (v1, v2, v3, v4)
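Putting that together, the relevant part of the loop might look like this sketch (TABLE_X and its columns are placeholders carried over from the reply above; v_line_count is declared as number := 0 in the DECLARE section):

```sql
-- Inside the byte-reading loop, once a full line has been read
IF v_char = CHR(10) THEN
  v_line_count := v_line_count + 1;
  -- Convert comma to : for string_to_table, then split the line
  v_line := REPLACE (v_line, ',', ':');
  v_data_array := wwv_flow_utilities.string_to_table (v_line);
  -- Skip the first line, which holds the column headers
  IF v_line_count > 1 THEN
    EXECUTE IMMEDIATE 'insert into TABLE_X (v1, v2, v3, v4)
                       values (:1, :2, :3, :4)'
      USING v_data_array(1), v_data_array(2),
            v_data_array(3), v_data_array(4);
  END IF;
  v_line := NULL;
END IF;
```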
Hi
I am getting a "no data found" error while executing the code below, even though the .csv file does contain records. The error occurs when the insert statement runs.
Kindly help with this issue.
=========================
declare
v_blob_data blob;
v_blob_len number;
v_position number;
v_raw_chunk raw(10000);
v_char char(1);
c_chunk_len number := 1;
v_line varchar2(32767) := null;
v_data_array wwv_flow_global.vc_arr2;
v_load_date date := trunc(sysdate);
v_rec_no number := 1;
begin
select blob_content into v_blob_data
from wwv_flow_files where id= 6646505326045323;
v_blob_len := dbms_lob.getlength(v_blob_data);
v_position := 1;
while ( v_position <= v_blob_len ) loop
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
v_position := v_position +c_chunk_len;
if v_char = chr(10) then
v_line := replace (v_line, ';', ':');
v_line := replace (v_line, chr(13), '');
v_line := replace (v_line, chr(10), '');
v_data_array := wwv_flow_utilities.string_to_table (v_line);
execute immediate 'insert into Sal_blob (empno,ename,sal )
values (:1, :2, :3)'
using v_data_array(1),v_data_array(2),v_data_array(3) ;
v_line := null;
v_rec_no := v_rec_no +1;
end if;
end loop;
exception when no_data_found then dbms_output.put_line('No data');
end;
Vivek,
The best way, I think, is to use a debugging tool or technique (i.e., adding dbms_output calls) to find the issue.