Data Flows in Data Factory: issue in copying data

  • Question

  • Hi,

    I am following this article to copy delta (incremental) data from one SQL table to another in the same database. My objective is to move all data from blob storage into a SQL staging (temporary) table using a Copy activity in ADF, and then, based on the last modified date, copy the data from the staging table into the main SQL table. I have followed the article, created the watermark table, the source table and the destination table, and have tried to use Data Flows to pick up only the latest rows that are not yet in the main table.

    I do not get any error, but my main table is blank. The data is copied from the blob to the temporary table, but nothing is copied from the temporary table to the main table. I am not sure what I have missed here. Can someone please look into this and guide me?
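
    For reference, the intended delta logic can be sketched in plain T-SQL. This is only an illustration, assuming dbo.Fish_Transfer is the staging table queried in the data flow below, dbo.MainTable is a placeholder for the unnamed main table, and LOAD_DATE / WatermarkValue are compared as strings, exactly as the data flow does:

        -- Copy only rows newer than the stored watermark into the main table
        -- (dbo.MainTable is a placeholder name for the sink table)
        INSERT INTO dbo.MainTable
            (transferId, fromPopulationId, toPopulationId, countFactor, biomassFactor,
             transferTime, transferType, LOAD_DATE, From_Unit_ID, To_Unit_ID,
             Client_ID, DateKey, Fact_FishTransfer_ID)
        SELECT s.transferId, s.fromPopulationId, s.toPopulationId, s.countFactor, s.biomassFactor,
               s.transferTime, s.transferType, s.LOAD_DATE, s.From_Unit_ID, s.To_Unit_ID,
               s.Client_ID, s.DateKey, s.Fact_FishTransfer_ID
        FROM dbo.Fish_Transfer AS s
        CROSS JOIN dbo.WatermarkTable AS w
        WHERE w.TableName = 'Fact_Fish'
          AND s.LOAD_DATE > w.WatermarkValue;

        -- Then advance the watermark so the next run only picks up newer rows
        UPDATE dbo.WatermarkTable
        SET WatermarkValue = (SELECT MAX(LOAD_DATE) FROM dbo.Fish_Transfer)
        WHERE TableName = 'Fact_Fish';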



    Here is the JSON for the data flow:
    {
        "name": "FlowToGetTheLatestDate",
        "properties": {
            "type": "MappingDataFlow",
            "typeProperties": {
                "sources": [
                    {
                        "dataset": {
                            "referenceName": "AzureSqlDatabaseDataSource",
                            "type": "DatasetReference"
                        },
                        "name": "SqlStagingData"
                    },
                    {
                        "dataset": {
                            "referenceName": "AzureSqlDatabaseDataSource",
                            "type": "DatasetReference"
                        },
                        "name": "watermarktable"
                    }
                ],
                "sinks": [
                    {
                        "dataset": {
                            "referenceName": "AzureSqlTable1",
                            "type": "DatasetReference"
                        },
                        "name": "Sinkorigdata"
                    }
                ],
                "transformations": [
                    {
                        "name": "DerivedColumn1"
                    },
                    {
                        "name": "JointoWatermark"
                    },
                    {
                        "name": "onlyLatestRecord"
                    },
                    {
                        "name": "SelectColumns"
                    },
                    {
                        "name": "DerivedColumn2"
                    }
                ],
                "script": "\n\nsource(output(\n\t\ttransferId as string,\n\t\tfromPopulationId as string,\n\t\ttoPopulationId as string,\n\t\tcountFactor as string,\n\t\tbiomassFactor as string,\n\t\ttransferTime as string,\n\t\ttransferType as string,\n\t\tLOAD_DATE as string,\n\t\tFrom_Unit_ID as string,\n\t\tTo_Unit_ID as string,\n\t\tClient_ID as string,\n\t\tDateKey as string,\n\t\tFact_FishTransfer_ID as string,\n\t\tTableName as string\n\t),\n\tallowSchemaDrift: true,\n\tvalidateSchema: false,\n\tisolationLevel: 'READ_UNCOMMITTED',\n\tquery: 'SELECT\\n  transferId\\n, fromPopulationId\\n, toPopulationId\\n, countFactor\\n, biomassFactor\\n, transferTime\\n, transferType\\n, LOAD_DATE\\n, From_Unit_ID\\n, To_Unit_ID\\n, Client_ID\\n, DateKey\\n, Fact_FishTransfer_ID\\n, \\'Fact_Fish\\' AS TableName\\nFROM dbo.Fish_Transfer',\n\tformat: 'query') ~> SqlStagingData\nsource(output(\n\t\tTableName as string,\n\t\tWatermark as string,\n\t\tWatermarkValue as string\n\t),\n\tallowSchemaDrift: true,\n\tvalidateSchema: false,\n\tisolationLevel: 'READ_UNCOMMITTED',\n\tquery: 'SELECT\\n TableName\\n, Watermark\\n, WatermarkValue\\nFROM [dbo].[WatermarkTable]',\n\tformat: 'query') ~> watermarktable\nSqlStagingData derive(LOAD_DATE = toString(LOAD_DATE)) ~> DerivedColumn1\nDerivedColumn1, watermarktable join(SqlStagingData@TableName == watermarktable@TableName,\n\tjoinType:'left',\n\tbroadcast: 'none')~> JointoWatermark\nJointoWatermark filter(LOAD_DATE > WatermarkValue) ~> onlyLatestRecord\nonlyLatestRecord select(mapColumn(\n\t\ttransferId,\n\t\tfromPopulationId,\n\t\ttoPopulationId,\n\t\tcountFactor,\n\t\tbiomassFactor,\n\t\ttransferTime,\n\t\ttransferType,\n\t\tLOAD_DATE,\n\t\tFrom_Unit_ID,\n\t\tTo_Unit_ID,\n\t\tClient_ID,\n\t\tDateKey,\n\t\tFact_FishTransfer_ID\n\t),\n\tskipDuplicateMapInputs: true,\n\tskipDuplicateMapOutputs: true) ~> SelectColumns\nSelectColumns derive(LOAD_DATE = toDate(LOAD_DATE)) ~> DerivedColumn2\nDerivedColumn2 sink(input(\n\t\ttransferId as string,\n\t\tfromPopulationId as string,\n\t\ttoPopulationId as string,\n\t\tcountFactor as string,\n\t\tbiomassFactor as string,\n\t\ttransferTime as string,\n\t\ttransferType as string,\n\t\tLOAD_DATE as string,\n\t\tFrom_Unit_ID as string,\n\t\tTo_Unit_ID as string,\n\t\tClient_ID as string,\n\t\tDateKey as string,\n\t\tFact_FishTransfer_ID as string\n\t),\n\tallowSchemaDrift: true,\n\tvalidateSchema: false,\n\tdeletable:false,\n\tinsertable:true,\n\tupdateable:false,\n\tupsertable:false,\n\tformat: 'table') ~> Sinkorigdata"
            }
        }
    }

    and here is the Copy activity JSON:

    {
        "name": "Copy data1",
        "type": "Copy",
        "dependsOn": [],
        "policy": {
            "timeout": "7.00:00:00",
            "retry": 0,
            "retryIntervalInSeconds": 30,
            "secureOutput": false,
            "secureInput": false
        },
        "userProperties": [],
        "typeProperties": {
            "source": {
                "type": "DelimitedTextSource",
                "storeSettings": {
                    "type": "AzureBlobStorageReadSettings",
                    "recursive": true,
                    "wildcardFileName": "*"
                },
                "formatSettings": {
                    "type": "DelimitedTextReadSettings"
                }
            },
            "sink": {
                "type": "AzureSqlSink"
            },
            "enableStaging": false
        },
        "inputs": [
            {
                "referenceName": "SourceDataset_wkn",
                "type": "DatasetReference"
            }
        ],
        "outputs": [
            {
                "referenceName": "AzureSqlDatabaseDataSource",
                "type": "DatasetReference"
            }
        ]
    }

    Thanks in advance

    zzzSharePoint


    Monday, December 23, 2019 3:17 PM

Answers

  • I know this error. Assuming that the derived column is named inputcol, please wrap it as toString(inputcol) in the Join condition and it should work.

    Please do let me know how it goes.




    Thanks Himanshu

    Tuesday, January 7, 2020 10:55 PM

All replies

  • Hello,

    The problem statement, as I understand it at this time, is:

    "I have followed the article, created the watermark table, the source table and the destination table, and have tried to use Data Flows to pick up only the latest rows that are not yet in the main table."

    The thing that connects the Copy activity and the data flow is the watermark table.
    Can you please check that the watermark table has an entry for the table in question?

    In the blog they talk about the order table; in your case it may be different.
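
    For example, a quick check against the names used in the data flow script above (assuming dbo.WatermarkTable and the 'Fact_Fish' key it joins on) would be:

        -- Verify that a watermark row exists for the table the data flow joins on
        SELECT TableName, Watermark, WatermarkValue
        FROM dbo.WatermarkTable
        WHERE TableName = 'Fact_Fish';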


    Thanks Himanshu

    Monday, December 30, 2019 7:08 PM
  • You don't need the Copy activity to do ETL. You can achieve this entire task in a data flow. It looks more like a slowly changing dimension data flow.
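
    In T-SQL terms, such a flow amounts to an upsert from the staging table into the main table. A rough sketch only, assuming Fact_FishTransfer_ID is the business key and dbo.MainTable is a placeholder name:

        -- Hypothetical upsert that a slowly-changing-dimension style flow would perform
        -- (dbo.MainTable is a placeholder; Fact_FishTransfer_ID is assumed to be the key)
        MERGE dbo.MainTable AS t
        USING dbo.Fish_Transfer AS s
            ON t.Fact_FishTransfer_ID = s.Fact_FishTransfer_ID
        WHEN MATCHED THEN
            UPDATE SET t.LOAD_DATE = s.LOAD_DATE   -- update remaining columns similarly
        WHEN NOT MATCHED THEN
            INSERT (Fact_FishTransfer_ID, transferId, LOAD_DATE)
            VALUES (s.Fact_FishTransfer_ID, s.transferId, s.LOAD_DATE);
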
    Monday, December 30, 2019 7:44 PM
  • Yes, the table is there. When I try to Join and Filter it with the watermark columns, the source table columns are not showing up. I tried to define a parameter to reference the column name, but still had no success. It then gives me an error that only conditional expressions are allowed:

    { "message": "{\"StatusCode\":\"DFExecutorUserError\",\"Message\":\"Job '3a561133-0fac-48d7-a120-c784b5a2c608 failed due to reason: DF-JOIN-003 at Join 'Join1'(Line 23/Col 31): Only conditional expressions are allowed\",\"Details\":\"Job '3a561133-0fac-48d7-a120-c784b5a2c608 failed due to reason: DF-JOIN-003 at Join 'Join1'(Line 23/Col 31): Only conditional expressions are allowed\"}", "failureType": "UserError", "target": "dataflow1" }

    Thanks,

    zzzSharePoint

    Tuesday, January 7, 2020 12:13 PM
  • I know this error. Assuming that the derived column is named inputcol, please wrap it as toString(inputcol) in the Join condition and it should work.

    Please do let me know how it goes.




    Thanks Himanshu

    Tuesday, January 7, 2020 10:55 PM
  • That worked and I am not receiving any error, but the data is still not getting copied into the sink. Everything completes with a success message, but my sink is empty. Is it because I am not able to match the column names, since I am getting the schema on the fly?
    Here is the data flow script:


    parameters{
        myTableName as string ('table_name'),
        LOAD_DATE as string,
        NameTable as string
    }
    source(allowSchemaDrift: true,
        validateSchema: false,
        limit: 100,
        inferDriftedColumnTypes: true,
        isolationLevel: 'READ_UNCOMMITTED',
        query: (concat('SELECT * FROM ', $myTableName)),
        format: 'query') ~> SqlStagingData
    source(output(
            TableName as string,
            Watermark as string,
            WatermarkValue as string
        ),
        allowSchemaDrift: true,
        validateSchema: false,
        isolationLevel: 'READ_UNCOMMITTED',
        format: 'table') ~> watermarktable
    DerivedColumn1, watermarktable join(toString(NameTable) == TableName,
        joinType:'left',
        broadcast: 'none') ~> Join1
    Join1 filter(toString(LOAD_DATE) > toString(WatermarkValue)) ~> Filter1
    SqlStagingData derive(LOAD_DATE = toString(byName($LOAD_DATE)),
        NameTable = byName($NameTable)) ~> DerivedColumn1
    Filter1 select(skipDuplicateMapInputs: true,
        skipDuplicateMapOutputs: true) ~> Select1
    Select1 sink(allowSchemaDrift: true,
        validateSchema: false,
        deletable:false,
        insertable:true,
        updateable:false,
        upsertable:false,
        format: 'table',
        skipDuplicateMapInputs: true,
        skipDuplicateMapOutputs: true) ~> Sinkorigdata

    zzzSharePoint


    Wednesday, January 8, 2020 11:53 AM
  • I would like to correct myself on the above. I made the change and now I get a very odd error, shown below. 'table_name' is the column returned by my Lookup activity, which is provided as input to the ForEach activity. The Lookup activity query is "SELECT table_name FROM information_schema.tables WHERE table_type = 'base table' and table_name like '%Staging%'", and the ForEach items property is defined as "@activity('LookupAllStagingTable').output.value".

    I have also defined a data flow parameter 'mytable', which is used in the source query as SELECT * FROM $mytable. Its default value is tablename, where tablename is a parameter defined on the source dataset and set to @item().table_name.
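
    For illustration, the query the source ends up issuing can be reconstructed from the error below (a sketch only; 'SomeStagingTable' is a hypothetical name a Lookup row would supply):

        -- Intended: $mytable resolves to the looked-up staging table name, e.g.
        SELECT * FROM SomeStagingTable;

        -- The error below suggests the parameter's default was used literally instead:
        SELECT * FROM table_name;   -- fails with "Invalid object name 'table_name'"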

    Hope I am clear. Here is the error I am getting now:


    "message": "{\"StatusCode\":\"DFExecutorUserError\",\"Message\":\"Job '2f1ef016-9a2c-414f-b94f-cf7977b94eaa failed due to reason: DF-SYS-01 at Source 'SqlStagingData': com.microsoft.sqlserver.jdbc.SQLServerException: Invalid object name 'table_name'.\\ncom.microsoft.sqlserver.jdbc.SQLServerException: Invalid object name 'table_name'.\\n\\tat com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:258)\\n\\tat com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1535)\\n\\tat com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:467)\\n\\tat com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:409)\\n\\tat com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7151)\\n\\tat com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2478)\\n\\tat com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:219)\\n\\tat com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:199)\\n\\tat com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeQuery(SQLServerPreparedStatement.java:331)\\n\\tat org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:61)\\n\\tat org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:210)\\n\\tat org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:35)\\n\\tat org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:343)\\n\\tat org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:307)\\n\\tat org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:293)\\n\\tat org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:203)\\n\\tat org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:322)\\n\\tat com.microsoft.dataflow.transformers.store.JDBCReader$$anonfun$read$5.apply(JDBCStore.scala:59)\\n\\tat com.microsoft.dataflow.transformers.store.JDBCReader$$anonfun$read$5.apply(JDBCStore.scala:55)\\n\\tat scala.util.Try$.apply(Try.scala:192)\\n\\tat com.microsoft.dataflow.transformers.store.JDBCReader.read(JDBCStore.scala:55)\\n\\tat com.microsoft.dataflow.transformers.TableLikeReader.read(TableStore.scala:82)\\n\\tat com.microsoft.dataflow.transformers.SourcePlanner$.physicalPlanInternal(SourceV2.scala:277)\\n\\tat com.microsoft.dataflow.transformers.SourcePlanner$.physicalPlan(SourceV2.scala:260)\\n\\tat com.microsoft.dataflow.FlowRunner$$anonfun$16.apply(FlowRunner.scala:230)\\n\\tat com.microsoft.dataflow.FlowRunner$$anonfun$16.apply(FlowRunner.scala:216)\\n\\tat scala.collection.Iterator$$anon$11.next(Iterator.scala:410)\\n\\tat scala.collection.TraversableOnce$class.collectFirst(TraversableOnce.scala:145)\\n\\tat scala.collection.SeqViewLike$AbstractTransformed.collectFirst(SeqViewLike.scala:37)\\n\\tat com.microsoft.dataflow.FlowRunner$.com$microsoft$dataflow$FlowRunner$$runner(FlowRunner.scala:309)\\n\\tat com.microsoft.dataflow.FlowRunner$$anonfun$runner$2.apply(FlowRunner.scala:178)\\n\\tat com.microsoft.dataflow.FlowRunner$$anonfun$runner$2.apply(FlowRunner.scala:173)\\n\\tat scala.util.Success.flatMap(Try.scala:231)\\n\\tat com.microsoft.dataflow.FlowRunner$.runner(FlowRunner.scala:173)\\n\\tat 
com.microsoft.dataflow.DataflowExecutor$$anonfun$6$$anonfun$apply$3$$anonfun$apply$4$$anonfun$apply$5$$anonfun$apply$6$$anonfun$apply$9$$anonfun$apply$10$$anonfun$apply$11$$anonfun$7.apply(DataflowExecutor.scala:119)\\n\\tat com.microsoft.dataflow.DataflowExecutor$$anonfun$6$$anonfun$apply$3$$anonfun$apply$4$$anonfun$apply$5$$anonfun$apply$6$$anonfun$apply$9$$anonfun$apply$10$$anonfun$apply$11$$anonfun$7.apply(DataflowExecutor.scala:106)\\n\\tat com.microsoft.dataflow.DataflowJobFuture$$anonfun$flowCode$1.apply(DataflowJobFuture.scala:71)\\n\\tat com.microsoft.dataflow.DataflowJobFuture$$anonfun$flowCode$1.apply(DataflowJobFuture.scala:71)\\n\\tat scala.Option.map(Option.scala:146)\\n\\tat com.microsoft.dataflow.DataflowJobFuture.flowCode$lzycompute(DataflowJobFuture.scala:71)\\n\\tat com.microsoft.dataflow.DataflowJobFuture.flowCode(DataflowJobFuture.scala:71)\\n\\tat com.microsoft.dataflow.DataflowJobFuture.addListener(DataflowJobFuture.scala:420)\\n\\tat com.microsoft.datafactory.dataflow.AppManager$.processActivityRunRequest(AppManager.scala:99)\\n\\tat com.microsoft.datafactory.dataflow.AppManager$.main(AppManager.scala:52)\\n\\tat line6177713df9ab462ab679252689ebba0525.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command--1:1)\\n\\tat line6177713df9ab462ab679252689ebba0525.$read$$iw$$iw$$iw$$iw$$iw.<init>(command--1:44)\\n\\tat line6177713df9ab462ab679252689ebba0525.$read$$iw$$iw$$iw$$iw.<init>(command--1:46)\\n\\tat line6177713df9ab462ab679252689ebba0525.$read$$iw$$iw$$iw.<init>(command--1:48)\\n\\tat line6177713df9ab462ab679252689ebba0525.$read$$iw$$iw.<init>(command--1:50)\\n\\tat line6177713df9ab462ab679252689ebba0525.$read$$iw.<init>(command--1:52)\\n\\tat line6177713df9ab462ab679252689ebba0525.$read.<init>(command--1:54)\\n\\tat line6177713df9ab462ab679252689ebba0525.$read$.<init>(command--1:58)\\n\\tat line6177713df9ab462ab679252689ebba0525.$read$.<clinit>(command--1)\\n\\tat line6177713df9ab462ab679252689ebba0525.$eval$.$print$lzycompute(<notebook>:7)\\n\\tat line6177713df9ab462ab679252689ebba0525.$eval$.$print(<notebook>:6)\\n\\tat line6177713df9ab462ab679252689ebba0525.$eval.$print(<notebook>)\\n\\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\\n\\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\\n\\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\\n\\tat java.lang.reflect.Method.invoke(Method.java:498)\\n\\tat scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:793)\\n\\tat scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1054)\\n\\tat scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:645)\\n\\tat scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:644)\\n\\tat scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)\\n\\tat scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)\\n\\tat scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:644)\\n\\tat scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:576)\\n\\tat scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:572)\\n\\tat com.databricks.backend.daemon.driver.DriverILoop.execute(DriverILoop.scala:215)\\n\\tat com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply$mcV$sp(ScalaDriverLocal.scala:197)\\n\\tat 
com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:197)\\n\\tat com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:197)\\n\\tat com.databricks.backend.daemon.driver.DriverLocal$TrapExitInternal$.trapExit(DriverLocal.scala:679)\\n\\tat com.databricks.backend.daemon.driver.DriverLocal$TrapExit$.apply(DriverLocal.scala:632)\\n\\tat com.databricks.backend.daemon.driver.ScalaDriverLocal.repl(ScalaDriverLocal.scala:197)\\n\\tat com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$8.apply(DriverLocal.scala:368)\\n\\tat com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$8.apply(DriverLocal.scala:345)\\n\\tat com.databricks.logging.UsageLogging$$anonfun$withAttributionContext$1.apply(UsageLogging.scala:238)\\n\\tat scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)\\n\\tat com.databricks.logging.UsageLogging$class.withAttributionContext(UsageLogging.scala:233)\\n\\tat com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:48)\\n\\tat com.databricks.logging.UsageLogging$class.withAttributionTags(UsageLogging.scala:271)\\n\\tat com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:48)\\n\\tat com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:345)\\n\\tat com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:644)\\n\\tat com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:644)\\n\\tat scala.util.Try$.apply(Try.scala:192)\\n\\tat com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:639)\\n\\tat com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:485)\\n\\tat com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:597)\\n\\tat com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:390)\\n\\tat com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:337)\\n\\tat com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:219)\\n\\tat java.lang.Thread.run(Thread.java:748)\\n\",\"Details\":\"Job '2f1ef016-9a2c-414f-b94f-cf7977b94eaa failed due to reason: DF-SYS-01 at Source 'SqlStagingData': com.microsoft.sqlserver.jdbc.SQLServerException: Invalid object name 'table_name'.\\ncom.microsoft.sqlserver.jdbc.SQLServerException: Invalid object name 'table_name'.\\n\\tat com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:258)\\n\\tat com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1535)\\n\\tat com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:467)\\n\\tat com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:409)\\n\\tat com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7151)\\n\\tat com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2478)\\n\\tat com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:219)\\n\\tat com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:199)\\n\\tat 
com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeQuery(SQLServerPreparedStatement.java:331)\\n\\tat org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:61)\\n\\tat org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:210)\\n\\tat org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:35)\\n\\tat org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:343)\\n\\tat org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:307)\\n\\tat org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:293)\\n\\tat org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:203)\\n\\tat org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:322)\\n\\tat com.microsoft.dataflow.transformers.store.JDBCReader$$anonfun$read$5.apply(JDBCStore.scala:59)\\n\\tat com.microsoft.dataflow.transformers.store.JDBCReader$$anonfun$read$5.apply(JDBCStore.scala:55)\\n\\tat scala.util.Try$.apply(Try.scala:192)\\n\\tat com.microsoft.dataflow.transformers.store.JDBCReader.read(JDBCStore.scala:55)\\n\\tat com.microsoft.dataflow.transformers.TableLikeReader.read(TableStore.scala:82)\\n\\tat com.microsoft.dataflow.transformers.SourcePlanner$.physicalPlanInternal(SourceV2.scala:277)\\n\\tat com.microsoft.dataflow.transformers.SourcePlanner$.physicalPlan(SourceV2.scala:260)\\n\\tat com.microsoft.dataflow.FlowRunner$$anonfun$16.apply(FlowRunner.scala:230)\\n\\tat com.microsoft.dataflow.FlowRunner$$anonfun$16.apply(FlowRunner.scala:216)\\n\\tat scala.collection.Iterator$$anon$11.next(Iterator.scala:410)\\n\\tat scala.collection.TraversableOnce$class.collectFirst(TraversableOnce.scala:145)\\n\\tat scala.collection.SeqViewLike$AbstractTransformed.collectFirst(SeqViewLike.scala:37)\\n\\tat com.microsoft.dataflow.FlowRunner$.com$microsoft$dataflow$FlowRunner$$runner(FlowRunner.scala:309)\\n\\tat com.microsoft.dataflow.FlowRunner$$anonfun$runner$2.apply(FlowRunner.scala:178)\\n\\tat com.microsoft.dataflow.FlowRunner$$anonfun$runner$2.apply(FlowRunner.scala:173)\\n\\tat scala.util.Success.flatMap(Try.scala:231)\\n\\tat com.microsoft.dataflow.FlowRunner$.runner(FlowRunner.scala:173)\\n\\tat com.microsoft.dataflow.DataflowExecutor$$anonfun$6$$anonfun$apply$3$$anonfun$apply$4$$anonfun$apply$5$$anonfun$apply$6$$anonfun$apply$9$$anonfun$apply$10$$anonfun$apply$11$$anonfun$7.apply(DataflowExecutor.scala:119)\\n\\tat com.microsoft.dataflow.DataflowExecutor$$anonfun$6$$anonfun$apply$3$$anonfun$apply$4$$anonfun$apply$5$$anonfun$apply$6$$anonfun$apply$9$$anonfun$apply$10$$anonfun$apply$11$$anonfun$7.apply(DataflowExecutor.scala:106)\\n\\tat com.microsoft.dataflow.DataflowJobFuture$$anonfun$flowCode$1.apply(DataflowJobFuture.scala:71)\\n\\tat com.microsoft.dataflow.DataflowJobFuture$$anonfun$flowCode$1.apply(DataflowJobFuture.scala:71)\\n\\tat scala.Option.map(Option.scala:146)\\n\\tat com.microsoft.dataflow.DataflowJobFuture.flowCode$lzycompute(DataflowJobFuture.scala:71)\\n\\tat com.microsoft.dataflow.DataflowJobFuture.flowCode(DataflowJobFuture.scala:71)\\n\\tat com.microsoft.dataflow.DataflowJobFuture.addListener(DataflowJobFuture.scala:420)\\n\\tat com.microsoft.datafactory.dataflow.AppManager$.processActivityRunRequest(AppManager.scala:99)\\n\\tat com.microsoft.datafactory.dataflow.AppManager$.main(AppManager.scala:52)\\n\\tat line6177713df9ab462ab679252689ebba0525.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command--1:1)\\n\\tat 
line6177713df9ab462ab679252689ebba0525.$read$$iw$$iw$$iw$$iw$$iw.<init>(command--1:44)\\n\\tat line6177713df9ab462ab679252689ebba0525.$read$$iw$$iw$$iw$$iw.<init>(command--1:46)\\n\\tat line6177713df9ab462ab679252689ebba0525.$read$$iw$$iw$$iw.<init>(command--1:48)\\n\\tat line6177713df9ab462ab679252689ebba0525.$read$$iw$$iw.<init>(command--1:50)\\n\\tat line6177713df9ab462ab679252689ebba0525.$read$$iw.<init>(command--1:52)\\n\\tat line6177713df9ab462ab679252689ebba0525.$read.<init>(command--1:54)\\n\\tat line6177713df9ab462ab679252689ebba0525.$read$.<init>(command--1:58)\\n\\tat line6177713df9ab462ab679252689ebba0525.$read$.<clinit>(command--1)\\n\\tat line6177713df9ab462ab679252689ebba0525.$eval$.$print$lzycompute(<notebook>:7)\\n\\tat line6177713df9ab462ab679252689ebba0525.$eval$.$print(<notebook>:6)\\n\\tat line6177713df9ab462ab679252689ebba0525.$eval.$print(<notebook>)\\n\\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\\n\\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\\n\\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\\n\\tat java.lang.reflect.Method.invoke(Method.java:498)\\n\\tat scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:793)\\n\\tat scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1054)\\n\\tat scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:645)\\n\\tat scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:644)\\n\\tat scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)\\n\\tat scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)\\n\\tat scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:644)\\n\\tat scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:576)\\n\\tat scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:572)\\n\\tat com.databricks.backend.daemon.driver.DriverILoop.execute(DriverILoop.scala:215)\\n\\tat com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply$mcV$sp(ScalaDriverLocal.scala:197)\\n\\tat com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:197)\\n\\tat com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:197)\\n\\tat com.databricks.backend.daemon.driver.DriverLocal$TrapExitInternal$.trapExit(DriverLocal.scala:679)\\n\\tat com.databricks.backend.daemon.driver.DriverLocal$TrapExit$.apply(DriverLocal.scala:632)\\n\\tat com.databricks.backend.daemon.driver.ScalaDriverLocal.repl(ScalaDriverLocal.scala:197)\\n\\tat com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$8.apply(DriverLocal.scala:368)\\n\\tat com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$8.apply(DriverLocal.scala:345)\\n\\tat com.databricks.logging.UsageLogging$$anonfun$withAttributionContext$1.apply(UsageLogging.scala:238)\\n\\tat scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)\\n\\tat com.databricks.logging.UsageLogging$class.withAttributionContext(UsageLogging.scala:233)\\n\\tat com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:48)\\n\\tat com.databricks.logging.UsageLogging$class.withAttributionTags(UsageLogging.scala:271)\\n\\tat com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:48)\\n\\tat 
com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:345)\\n\\tat com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:644)\\n\\tat com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:644)\\n\\tat scala.util.Try$.apply(Try.scala:192)\\n\\tat com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:639)\\n\\tat com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:485)\\n\\tat com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:597)\\n\\tat com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:390)\\n\\tat com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:337)\\n\\tat com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:219)\\n\\tat java.lang.Thread.run(Thread.java:748)\\n\"}", "failureType": "UserError", "target": "dataflow1" }

    zzzSharePoint

    Wednesday, January 8, 2020 6:48 PM