How to get COUNT on SQL-PASSTHROUGH for VFP 9.0 SP2

    Question

  • I have been trying to run COUNT commands through SQL pass-through in a VFP 9.0 SP2 program using SELECT COUNT(*), but SQLEXEC() returns an error and the command is not executed.

    How can I find out how many records are in an ODBC data source?

    Thank you 

    Thursday, November 17, 2011 9:08 PM

Answers

  • SQLCmd="SELECT COUNT(*) AS wk_count FROM "+(wk_name)
    wk_cntrl=SQLEXEC(gnConnHandle,SQLCmd,'MyCursor')

    If wk_cntrl<0
       AERROR(laError)
       set step on
       * inspect the laError array in the debugger's Locals window.
    Endif

    Like Pavel says, MyCursor.wk_count should not exist, or if it does, it is left over from a previous result. If SQLEXEC() returns -1, there's something wrong with the connection or the command you execute, e.g. a wrong table name, an invalid connection handle, a connection to the wrong database, insufficient rights to read from the table, and so on.
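
    If you prefer to see the error text instead of stepping into the debugger, a rough sketch could look like this (assuming gnConnHandle and SQLCmd are set as in the snippet above):

    wk_cntrl = SQLEXEC(gnConnHandle, SQLCmd, 'MyCursor')
    IF wk_cntrl < 0
       AERROR(laError)
       * laError[1] = error number, laError[2] = message text; for ODBC
       * errors (1526) the later elements add the driver message and SQLSTATE.
       MESSAGEBOX("SQLEXEC failed: " + TRANSFORM(laError[1]) + CHR(13) + laError[2])
    ELSE
       ? MyCursor.wk_count   && the COUNT(*) result
    ENDIF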

    Bye, Olaf.

    • Marked as answer by JoseGala Monday, November 21, 2011 4:34 PM
    Friday, November 18, 2011 10:07 AM
  • You could try removing spaces from the table name, and you should check that the table name is correct:

    SQLCmd="SELECT COUNT(*) AS wk_count FROM "+ TRIM(wk_name)
    

    The error message will say more for sure.

    Your data should have some column which could be used to split the large table, e.g. a date column or a numeric ID.

    The SQL dialect used on your server may allow another way to split, such as LIMIT in MySQL.

    The import will be a time-consuming task. I would recommend importing the SQL data day by day based on some date column. In that case you can simply continue from the point where it stopped the last time.

    You could also split the table vertically: import the primary key plus several columns in one batch, the PK plus another set of columns in another batch, and so on. Then you can decide what to do with the results in DBF form.

    Also, 3/4 gigs on the server do not necessarily mean 3/4 gigs in FoxPro. Do you plan to import all columns?
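
    Just as a rough sketch of the day-by-day import (the date column name doc_date and the output path are only placeholders, not names from your system):

    ldDay = {^2000-01-01}                 && first day to import
    DO WHILE ldDay <= DATE()
       lcSql = "SELECT * FROM " + TRIM(wk_name) + " WHERE doc_date = ?ldDay"
       IF SQLEXEC(gnConnHandle, lcSql, 'DayBatch') > 0
          IF RECCOUNT('DayBatch') > 0
             SELECT DayBatch
             COPY TO ("c:\export\batch_" + DTOS(ldDay))   && one DBF per day
          ENDIF
       ELSE
          AERROR(laError)
          EXIT                            && stop here, resume later from ldDay
       ENDIF
       ldDay = ldDay + 1
    ENDDO

    Each day then lands in its own DBF, which keeps every file well under the 2 GB limit and lets you continue from the last imported day.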

    • Proposed as answer by Mike Feng (Moderator) Monday, November 21, 2011 4:21 PM
    • Marked as answer by JoseGala Monday, November 21, 2011 4:34 PM
    Friday, November 18, 2011 2:24 PM

All replies

  • What database is your backend? MS SQL and MySQL surely support "SELECT COUNT(*) FROM sometable".

    Bye, Olaf.

    Thursday, November 17, 2011 9:26 PM
  • What does AERROR() say?

    Post both the SQLCONNECT() and SQLEXEC() to allow better analysis.

    Thursday, November 17, 2011 9:33 PM
  • Hi, Olaf

    Basically, I am extracting data from a UniData DB with ODBC. I could get the COUNT information using Microsoft Query (the one that comes with Excel 2007), and I realized that the three files VFP could not read were bigger than 2 GB.

    And I see that the SQL command is tied to the data source's own DB rather than being an ODBC SQL-compliant command. It will not work with VFP directly, so I will use MS Query to extract the data, but I found its limitations with the INTO clause.

    Thursday, November 17, 2011 9:35 PM
  • The error was < 0 in SQLEXEC

    Those are the lines of code:
    STORE SQLCONNECT('UniData','cmicsi','csicmi') TO gnConnHandle
    =SQLSETPROP(gnConnHandle,"Asynchronous",.T.)
    SQLCmd="SELECT COUNT(*) AS wk_count FROM "+(wk_name)
    wk_cntrl=SQLEXEC(gnConnHandle,SQLCmd,'MyCursor')
    DO TBLDATA
    =SQLCANCEL(gnConnHandle)

    where wk_name is the file name and the procedure TBLDATA does the data manipulation.
    wk_cntrl is showing error -1 and wk_count remains 0.

    Thank you  

    Thursday, November 17, 2011 9:42 PM
  • You have to call AERROR() when SQLEXEC() returns an error, and then you can display better error info.

    Also, wk_count cannot "remain at 0" because MyCursor isn't created at all.

    Thursday, November 17, 2011 10:02 PM
  • SQLCmd="SELECT COUNT(*) AS wk_count FROM "+(wk_name)
    wk_cntrl=SQLEXEC(gnConnHandle,SQLCmd,'MyCursor')

    If wk_cntrl<0
       AERROR(laError)
       set step on
       * inspect the laError array in the debugger's Locals window.
    Endif

    Like Pavel says, MyCursor.wk_count should not exist, or if it does, it is left over from a previous result. If SQLEXEC() returns -1, there's something wrong with the connection or the command you execute, e.g. a wrong table name, an invalid connection handle, a connection to the wrong database, insufficient rights to read from the table, and so on.

    Bye, Olaf.

    • Marked as answer by JoseGala Monday, November 21, 2011 4:34 PM
    Friday, November 18, 2011 10:07 AM
  • Thank you for your responses and your help

    I never used AERROR() before, so I will apply it. Basically, wk_count is 0 because it is initialized before entering the routine, and MyCursor is not created, of course.

    As I mentioned, the problem looks to be in the data source rather than in VFP or the SQL command to execute. I had other problems, already solved with your help, with this UniData DB, which is a Unix database with multi-value fields.

    Now, using Microsoft Query, I was able to get the count to identify the files that are bigger than 2 GB. I will find a way to split the big files, or move them directly to SQL and manipulate them there, since I cannot use VFP.

    Thank you again

    Friday, November 18, 2011 1:29 PM
    There should be no difference in accessing files under or over 2 gigs via an ODBC driver. The file size is really not relevant for the client which sends the SQL command. The only way large files could fail in the COUNT(*) calculation is if the query from VFP uses a different ODBC driver than the query executed from Microsoft Query, which sounds like an improbable reason...

    Do you use the same ODBC driver for both clients?

    BTW, when you create the variable named wk_count and initialize its value to 0, even a successful query will leave this variable's value unchanged. The reason why you see a different value is simple: cursor MyCursor is created with a numeric column named wk_count. This column takes precedence over the existing variable, so you may see the right COUNT(*) value.

    To be sure you are using the variable value you have to use the m. prefix (m.wk_count); to be sure you are looking at the table column value you have to use the alias prefix (MyCursor.wk_count).
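
    A tiny illustration of that name resolution, reusing the query from earlier in the thread:

    wk_count = 0
    SQLEXEC(gnConnHandle, "SELECT COUNT(*) AS wk_count FROM " + TRIM(wk_name), 'MyCursor')
    ? wk_count            && unqualified: the cursor column wins while MyCursor is selected
    ? m.wk_count          && always the memory variable (still 0)
    ? MyCursor.wk_count   && always the cursor column, i.e. the real COUNT(*)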

    Friday, November 18, 2011 1:49 PM
  • Thank you Pavel.

    I am using the same ODBC data source for VFP and MS Query. So you are right, it should not affect the results. Something in my command in VFP is not correct, and for that reason it is not working as well as MS Query. I will try AERROR() (as I mentioned, I never used it before) and report back.

    However, if it works, I will have a problem. I have a few 3 GB/4 GB files to import and I do not have any way to split them programmatically, since I cannot access a counter field in SQL that I could use to limit the executions. To make it more difficult, the SQL does execute TOP, but I do not know how to make, for example, an SQL extraction for records 2,000,000 through 4,000,000 after processing the first 2,000,000 into one VFP table.

    Probably I am doing everything wrong and that is why I cannot get what I need directly from VFP programmatically. Do you have a different idea to get the results?

    Thank you again for your help

    Jose

    Friday, November 18, 2011 1:59 PM
  • You could try removing spaces from the table name, and you should check that the table name is correct:

    SQLCmd="SELECT COUNT(*) AS wk_count FROM "+ TRIM(wk_name)
    

    The error message will say more for sure.

    Your data should have some column which could be used to split the large table, e.g. a date column or a numeric ID.

    The SQL dialect used on your server may allow another way to split, such as LIMIT in MySQL.

    The import will be a time-consuming task. I would recommend importing the SQL data day by day based on some date column. In that case you can simply continue from the point where it stopped the last time.

    You could also split the table vertically: import the primary key plus several columns in one batch, the PK plus another set of columns in another batch, and so on. Then you can decide what to do with the results in DBF form.

    Also, 3/4 gigs on the server do not necessarily mean 3/4 gigs in FoxPro. Do you plan to import all columns?
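
    A rough sketch of the vertical split (the key and column names id, col_a, col_b, col_c are only placeholders, not names from your schema):

    * Batch 1: the primary key plus the first group of columns.
    SQLEXEC(gnConnHandle, "SELECT id, col_a, col_b FROM " + TRIM(wk_name), 'Part1')
    SELECT Part1
    COPY TO c:\export\part1

    * Batch 2: the same key plus the next group of columns.
    SQLEXEC(gnConnHandle, "SELECT id, col_c FROM " + TRIM(wk_name), 'Part2')
    SELECT Part2
    COPY TO c:\export\part2

    The two DBFs can later be joined back on id with a local SELECT ... JOIN ... INTO TABLE, or kept separate, whichever suits your needs.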

    • Proposed as answer by Mike Feng (Moderator) Monday, November 21, 2011 4:21 PM
    • Marked as answer by JoseGala Monday, November 21, 2011 4:34 PM
    Friday, November 18, 2011 2:24 PM
  • Thank you, Pavel

    I have been retiring systems, and this is the one I have at this time: a Unix UniData DB with multiple accounts.

    When it is a system retirement you need to extract all the possible documents (usually about a million in a company of this size) and then all the data for auditing, SOX compliance and basically warranty (I work in the automotive industry), kept for 25 years after it is done. The report information is stored so it is accessible in an ECM server, and the data must be SQL-compliant and kept accordingly. But differently from other retired systems (we have a few of them due to previous acquisitions), in this case I am facing a nested RDBMS like UniData with multiple accounts defined in Unix (I do not know if you are familiar with it, but in Unix each account is like a complete separate system, similar to the others but not sharing information, something like a cluster, if you want).

    Well, that multi-value DB (non-first normal form, NF2) is not accessible to our ETL, so I am using VFP to extract the data (all the data), but I ran into the size problem. I have already done about 95% of the data in three weeks (that is machine time only).

    Of course, I want to do it programmatically, because doing it file by file, account by account, would be extremely time consuming. And I am using VFP because I also create the schemas in SQL programmatically and can load them in a program. As you see, it is an ETL created in VFP.

    If you want to know more about this fascinating but complex type of DB, see:

    http://en.wikipedia.org/wiki/Nested_relational_model#Non-first_normal_form_.28NF.C2.B2_or_N1NF.29

    or read about how 45 GB in these databases (created when disk storage space was very expensive) can hold 300 GB when expanded (my case now).

    Finally, I will create a little front end in VFP that shows the fields and separates them (I have views with a few fields and others with a considerable number, basically the master historical file).

    I work in a Microsoft shop, so I do not have open-source options to use.

    Thank you for your time. I will try what you mentioned and I will post it again. 

    Friday, November 18, 2011 2:59 PM