Blaze's query results don't match the spark query results #789

Open
fierceX opened this issue Jan 23, 2025 · 2 comments

Comments


fierceX commented Jan 23, 2025

We hit a bug in our environment: Blaze's query results are inconsistent with Spark's. In Spark, the label query returns 0.00, while in Blaze it returns null, which skews subsequent model calculations. Querying this table on its own on our data platform also reproduces the problem. Here are our table-creation and query statements:

 CREATE TABLE `label.label_test`(     
   `id` string COMMENT '',               
   `label1` int COMMENT '',       
   `label2` int COMMENT '',  
   `label3` decimal(16,2) COMMENT '',  
   `label4` bigint COMMENT '') 
 COMMENT ''                                 
 PARTITIONED BY (                                   
   `back_date` string COMMENT '',       
   `dt` string COMMENT '')
 ROW FORMAT SERDE                                   
   'org.apache.hadoop.hive.ql.io.orc.OrcSerde'      
 STORED AS INPUTFORMAT                              
   'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'  
 OUTPUTFORMAT                                       
   'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' 
 LOCATION                                           
   'hdfs://fcycdh/user/hive/warehouse/label.db/lable_test' 
 TBLPROPERTIES (                                    
   'transient_lastDdlTime'='1677155949') 

This CREATE TABLE statement is the output of the show create table xxx command run in beeline.

select id,label from label.label_test where id = '1';

As a side note, when this data is written to another ORC-format table, Blaze queries it correctly. The original table was created a long time ago through beeline; the data was presumably written with Spark 2, but we are now using Spark 3.
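A plausible reason the rewritten table works (an assumption, not verified against Blaze's source): Hive-written ORC files often carry placeholder field names (`_col0`, `_col1`, ...) instead of the table's column names, so a reader must map table columns to file columns by ordinal position rather than by name. A minimal sketch of that positional mapping, using this issue's column names:

```python
# Sketch: map table columns to ORC file columns by ordinal position,
# the strategy vanilla Spark is believed to use for Hive-written ORC
# files whose field names are placeholders like `_col0`. The column
# lists below are illustrative, taken from this issue's table.

TABLE_COLS = ["id", "label1", "label2", "label3", "label4"]
FILE_COLS = ["_col0", "_col1", "_col2", "_col3", "_col4"]

def resolve_by_position(table_cols, file_cols):
    """Return a table-column -> file-column mapping by ordinal."""
    if len(table_cols) != len(file_cols):
        raise ValueError("column count mismatch")
    return dict(zip(table_cols, file_cols))

mapping = resolve_by_position(TABLE_COLS, FILE_COLS)
print(mapping["id"])  # -> "_col0": the `id` data lives in file column `_col0`
```

A reader that applies this mapping finds the data; one that looks up `id` by name in the file schema does not.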


fierceX commented Jan 23, 2025

To correct the bug description: the problem is not that the query returns incorrect results, but that it cannot find the data at all. When an id condition is added to the query, whether with = or regexp, no rows are returned; without the id condition, the data can be found.


fierceX commented Jan 23, 2025

orc schema:

25/01/23 15:43:38 INFO orc.ReaderImpl: Reading ORC rows from /user/hive/warehouse/label.db/label_test/000000_0 with {include: null, offset: 0, length: 9223372036854775807}
Rows: 7082010
Compression: SNAPPY
Compression size: 262144
Type: struct<_col0:string,_col1:int,_col2:int,_col3:decimal(16,2),_col4:bigint>

Stripe Statistics:
  Stripe 1:
    Column 0: count: 7082010 hasNull: false
    Column 1: count: 7082010 hasNull: false min: 10000 max: 99999980 sum: 67231871
    Column 2: count: 7082010 hasNull: false min: 1 max: 1 sum: 7082010
    Column 3: count: 7082010 hasNull: false min: 0 max: 13 sum: 2335605
    Column 4: count: 7082010 hasNull: false min: 0 max: 6 sum: 938807
    Column 5: count: 7082010 hasNull: false min: 0 max: 28 sum: 9167523

File Statistics:
  Column 0: count: 7082010 hasNull: false
  Column 1: count: 7082010 hasNull: false min: 10000 max: 99999980 sum: 67231871
  Column 2: count: 7082010 hasNull: false min: 1 max: 1 sum: 7082010
  Column 3: count: 7082010 hasNull: false min: 0 max: 13 sum: 2335605
  Column 4: count: 7082010 hasNull: false min: 0 max: 6 sum: 938807
  Column 5: count: 7082010 hasNull: false min: 0 max: 28 sum: 9167523

Stripes:
  Stripe: offset: 3 data: 67060947 rows: 7082010 tail: 134 index: 63548
    Stream: column 0 section ROW_INDEX start: 3 length 320
    Stream: column 1 section ROW_INDEX start: 323 length 24448
    Stream: column 2 section ROW_INDEX start: 24771 length 5732
    Stream: column 3 section ROW_INDEX start: 30503 length 9656
    Stream: column 4 section ROW_INDEX start: 40159 length 13779
    Stream: column 5 section ROW_INDEX start: 53938 length 9613
    Stream: column 1 section DATA start: 63551 length 54255045
    Stream: column 1 section LENGTH start: 54318596 length 4474003
    Stream: column 2 section DATA start: 58792599 length 84883
    Stream: column 3 section DATA start: 58877482 length 2608026
    Stream: column 4 section DATA start: 61485508 length 1769838
    Stream: column 4 section SECONDARY start: 63255346 length 43099
    Stream: column 5 section DATA start: 63298445 length 3826053
    Encoding column 0: DIRECT
    Encoding column 1: DIRECT_V2
    Encoding column 2: DIRECT_V2
    Encoding column 3: DIRECT_V2
    Encoding column 4: DIRECT_V2
    Encoding column 5: DIRECT_V2

File length: 67124966 bytes
Padding length: 0 bytes
Padding ratio: 0%
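One hypothesis consistent with this dump (not confirmed against Blaze's code): the file schema's field names are `_col0`..`_col4` rather than the table's `id`..`label4`, so a strictly name-based reader finds no field called `id` and fills that column with nulls. A sketch of how such a lookup would behave:

```python
# Sketch: name-based column resolution against a Hive-written ORC
# schema. The file schema below is copied from the dump above; the
# lookup behaviour is an assumption about how a name-matching reader
# would act, not Blaze's actual implementation.

FILE_SCHEMA = {"_col0": "string", "_col1": "int", "_col2": "int",
               "_col3": "decimal(16,2)", "_col4": "bigint"}

def read_column(file_schema, name):
    """Name-based lookup: returns the field type, or None if absent."""
    return file_schema.get(name)  # no field named `id` -> None (i.e. nulls)

print(read_column(FILE_SCHEMA, "id"))     # None: column not found by name
print(read_column(FILE_SCHEMA, "_col0"))  # "string"
```

This would match the first symptom reported (nulls where Spark returns values).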

driver log:

21560 [main] WARN  org.apache.spark.util.Utils  - spark.executor.instances less than spark.dynamicAllocation.minExecutors is invalid, ignoring its setting, please update your configs.
21560 [main] INFO  org.apache.spark.util.Utils  - Using initial executors = 1, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
21561 [main] INFO  org.apache.spark.deploy.yarn.YarnAllocator  - Resource profile 0 doesn't exist, adding it
21578 [dispatcher-event-loop-1] INFO  org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint  - ApplicationMaster registered as NettyRpcEndpointRef(spark://[email protected]:15227)
21583 [main] INFO  org.apache.spark.deploy.yarn.YarnAllocator  - Will request 1 executor container(s) for  ResourceProfile Id: 0, each with 3 core(s) and 15360 MB memory.
21741 [main] INFO  org.apache.spark.deploy.yarn.YarnAllocator  - Submitted 1 unlocalized container requests.
22083 [main] INFO  org.apache.spark.deploy.yarn.ApplicationMaster  - Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
22176 [Reporter] INFO  org.apache.spark.deploy.yarn.YarnAllocator  - Launching container container_e21_1731902288777_13206_01_000002 on host datanode24-fcy.hadoop.test.com for executor with ID 1 for ResourceProfile Id 0
22182 [Reporter] INFO  org.apache.spark.deploy.yarn.YarnAllocator  - Received 1 containers from YARN, launching executors on 1 of them.
31402 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint  - Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.30.10.34:16672) with ID 1,  ResourceProfileId 0
31417 [spark-listener-group-executorManagement] INFO  org.apache.spark.scheduler.dynalloc.ExecutorMonitor  - New executor 1 has registered (new total is 1)
31500 [Driver] INFO  org.apache.spark.scheduler.cluster.YarnClusterSchedulerBackend  - SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
31501 [Driver] INFO  org.apache.spark.scheduler.cluster.YarnClusterScheduler  - YarnClusterScheduler.postStartHook done
31617 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerMasterEndpoint  - Registering block manager datanode24-fcy.hadoop.test.com:32003 with 7.6 GiB RAM, BlockManagerId(1, datanode24-fcy.hadoop.test.com, 32003, None)
31949 [Driver] INFO  org.apache.spark.sql.blaze.BlazeSparkSessionExtension  - org.apache.spark.BlazeSparkSessionExtension enabled
32001 [Driver] WARN  org.apache.spark.sql.internal.SharedState  - URL.setURLStreamHandlerFactory failed to set FsUrlStreamHandlerFactory
32002 [Driver] INFO  org.apache.spark.sql.internal.SharedState  - spark.sql.warehouse.dir is not set, but hive.metastore.warehouse.dir is set. Setting spark.sql.warehouse.dir to the value of hive.metastore.warehouse.dir ('/user/hive/warehouse').
32002 [Driver] INFO  org.apache.spark.sql.internal.SharedState  - Warehouse path is '/user/hive/warehouse'.
32038 [Driver] INFO  org.apache.spark.ui.ServerInfo  - Adding filter to /SQL: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
32040 [Driver] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@703fc211{/SQL,null,AVAILABLE,@Spark}
32041 [Driver] INFO  org.apache.spark.ui.ServerInfo  - Adding filter to /SQL/json: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
32042 [Driver] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@1ea61ff5{/SQL/json,null,AVAILABLE,@Spark}
32042 [Driver] INFO  org.apache.spark.ui.ServerInfo  - Adding filter to /SQL/execution: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
32043 [Driver] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@25de576b{/SQL/execution,null,AVAILABLE,@Spark}
32043 [Driver] INFO  org.apache.spark.ui.ServerInfo  - Adding filter to /SQL/execution/json: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
32044 [Driver] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@6982d56e{/SQL/execution/json,null,AVAILABLE,@Spark}
32045 [Driver] INFO  org.apache.spark.ui.ServerInfo  - Adding filter to /static/sql: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
32046 [Driver] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@7d10c062{/static/sql,null,AVAILABLE,@Spark}
32056 [Driver] WARN  org.apache.spark.sql.internal.SQLConf  - The SQL config 'spark.sql.execution.arrow.enabled' has been deprecated in Spark v3.0 and may be removed in the future. Use 'spark.sql.execution.arrow.pyspark.enabled' instead of it.
32056 [Driver] WARN  org.apache.spark.sql.internal.SQLConf  - The SQL config 'spark.sql.execution.arrow.fallback.enabled' has been deprecated in Spark v3.0 and may be removed in the future. Use 'spark.sql.execution.arrow.pyspark.fallback.enabled' instead of it.
32058 [Driver] WARN  org.apache.spark.sql.internal.SQLConf  - The SQL config 'spark.sql.execution.arrow.enabled' has been deprecated in Spark v3.0 and may be removed in the future. Use 'spark.sql.execution.arrow.pyspark.enabled' instead of it.
32058 [Driver] WARN  org.apache.spark.sql.internal.SQLConf  - The SQL config 'spark.sql.execution.arrow.fallback.enabled' has been deprecated in Spark v3.0 and may be removed in the future. Use 'spark.sql.execution.arrow.pyspark.fallback.enabled' instead of it.
32058 [Driver] WARN  org.apache.spark.sql.internal.SQLConf  - The SQL config 'spark.sql.execution.arrow.enabled' has been deprecated in Spark v3.0 and may be removed in the future. Use 'spark.sql.execution.arrow.pyspark.enabled' instead of it.
32058 [Driver] WARN  org.apache.spark.sql.internal.SQLConf  - The SQL config 'spark.sql.execution.arrow.fallback.enabled' has been deprecated in Spark v3.0 and may be removed in the future. Use 'spark.sql.execution.arrow.pyspark.fallback.enabled' instead of it.
33441 [Driver] INFO  com.dataworker.spark.jobserver.driver.support.SparkEnv  - SparkSession Inited
34268 [http-nio-auto-1-exec-1] INFO  org.springframework.web.servlet.DispatcherServlet  - Initializing Servlet 'dispatcherServlet'
34271 [http-nio-auto-1-exec-1] INFO  org.springframework.web.servlet.DispatcherServlet  - Completed initialization in 3 ms
34829 [http-nio-auto-1-exec-1] INFO  com.dataworker.spark.jobserver.driver.aspectj.AspectHelper  - 清空临时数据 (clear temporary data)
34870 [http-nio-auto-1-exec-1] WARN  org.apache.spark.sql.internal.SQLConf  - The SQL config 'spark.sql.execution.arrow.enabled' has been deprecated in Spark v3.0 and may be removed in the future. Use 'spark.sql.execution.arrow.pyspark.enabled' instead of it.
34871 [http-nio-auto-1-exec-1] WARN  org.apache.spark.sql.internal.SQLConf  - The SQL config 'spark.sql.execution.arrow.fallback.enabled' has been deprecated in Spark v3.0 and may be removed in the future. Use 'spark.sql.execution.arrow.pyspark.fallback.enabled' instead of it.
34873 [http-nio-auto-1-exec-1] WARN  org.apache.spark.sql.internal.SQLConf  - The SQL config 'spark.sql.execution.arrow.enabled' has been deprecated in Spark v3.0 and may be removed in the future. Use 'spark.sql.execution.arrow.pyspark.enabled' instead of it.
34873 [http-nio-auto-1-exec-1] WARN  org.apache.spark.sql.internal.SQLConf  - The SQL config 'spark.sql.execution.arrow.fallback.enabled' has been deprecated in Spark v3.0 and may be removed in the future. Use 'spark.sql.execution.arrow.pyspark.fallback.enabled' instead of it.
34874 [http-nio-auto-1-exec-1] WARN  org.apache.spark.sql.internal.SQLConf  - The SQL config 'spark.sql.execution.arrow.enabled' has been deprecated in Spark v3.0 and may be removed in the future. Use 'spark.sql.execution.arrow.pyspark.enabled' instead of it.
34874 [http-nio-auto-1-exec-1] WARN  org.apache.spark.sql.internal.SQLConf  - The SQL config 'spark.sql.execution.arrow.fallback.enabled' has been deprecated in Spark v3.0 and may be removed in the future. Use 'spark.sql.execution.arrow.pyspark.fallback.enabled' instead of it.
34877 [http-nio-auto-1-exec-1] INFO  com.dataworker.spark.jobserver.driver.DriverRestApi  - Spark task: ukdZjIFZxRPwGofFzWf5mgfBCkZ605Qs began, type:spark_sql, submit from http://10.10.10.11:7002
34878 [http-nio-auto-1-exec-1] INFO  com.dataworker.spark.jobserver.driver.support.JobServerContext  - startQueySparkStageLog
34881 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.task.SparkSqlTask  - Sql Job: ukdZjIFZxRPwGofFzWf5mgfBCkZ605Qs begined, submit from http://10.10.10.11:7002
35165 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.task.SparkTask  - spark param: set spark.datawork.job.code=ukdZjIFZxRPwGofFzWf5mgfBCkZ605Qs 
35167 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.task.SparkTask  - spark param: set spark.datawork.job.userId=qiusuo 
35169 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.task.SparkTask  - spark param: set spark.datawork.column.authorization.enabled=true 
35170 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.task.SparkTask  - spark param: set spark.datawork.sql.allow.fullscan=false 
35178 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.common.service.JobInstanceService  - update task ukdZjIFZxRPwGofFzWf5mgfBCkZ605Qs status running
35181 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.aspectj.AspectHelper  - 清空临时数据 (clear temporary data)
35192 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.task.SparkTask  - Update JobServer: application_1731902288777_13206 Status running:
35292 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.task.SparkTask  - Job ukdZjIFZxRPwGofFzWf5mgfBCkZ605Qs logThread Started
35329 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.support.JobServerContext  - use kerberos auth
35338 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.support.JobServerContext  - user login: [email protected] (auth:KERBEROS)
35393 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.aspectj.SparkSessionAspectj  - spark.datawork.url: http://10.10.10.10:8080
35393 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.aspectj.SparkSessionAspectj  - Prepare to execute jobType=spark_sql, sql=use dev
35827 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.task.SparkTask  - spark param: set spark.datawork.job.statementType=use 
35828 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.task.SparkTask  - spark param: set spark.datawork.job.sql=use dev 
37870 [SparkTaskThread-0] WARN  org.apache.spark.sql.blaze.NativeHelper  - memory total: 6442450944, onheap: 5368709120, offheap: 1073741824
37874 [SparkTaskThread-0] INFO  org.apache.spark.sql.blaze.BlazeColumnarOverrides$$anon$1  - Blaze convert strategy for current stage:
37875 [SparkTaskThread-0] INFO  org.apache.spark.sql.blaze.BlazeSparkSessionExtension  -  + SetCatalogAndNamespace (convertible=false, strategy=NeverConvert)
37878 [SparkTaskThread-0] INFO  org.apache.spark.sql.blaze.BlazeColumnarOverrides$$anon$1  - Blaze convert result for current stage:
37878 [SparkTaskThread-0] INFO  org.apache.spark.sql.blaze.BlazeSparkSessionExtension  -  + SetCatalogAndNamespace (convertible=false, strategy=NeverConvert)
37885 [SparkTaskThread-0] INFO  org.apache.spark.sql.blaze.BlazeColumnarOverrides$$anon$1  - Transformed spark plan after preColumnarTransitions:
SetCatalogAndNamespace org.apache.spark.sql.connector.catalog.CatalogManager@67cfa493, spark_catalog, ArrayBuffer(dev)

37945 [SparkTaskThread-0] WARN  org.apache.spark.sql.internal.SQLConf  - The SQL config 'spark.sql.execution.arrow.enabled' has been deprecated in Spark v3.0 and may be removed in the future. Use 'spark.sql.execution.arrow.pyspark.enabled' instead of it.
37945 [SparkTaskThread-0] WARN  org.apache.spark.sql.internal.SQLConf  - The SQL config 'spark.sql.execution.arrow.fallback.enabled' has been deprecated in Spark v3.0 and may be removed in the future. Use 'spark.sql.execution.arrow.pyspark.fallback.enabled' instead of it.
37962 [SparkTaskThread-0] INFO  org.apache.spark.sql.hive.HiveUtils  - Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
38830 [SparkTaskThread-0] INFO  org.apache.hadoop.conf.Configuration.deprecation  - mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
39168 [SparkTaskThread-0] INFO  hive.metastore  - Trying to connect to metastore with URI thrift://manager01-fcy.hadoop.test.com:9083
39259 [SparkTaskThread-0] INFO  hive.metastore  - Connected to metastore.
40981 [SparkTaskThread-0] INFO  org.apache.hadoop.hive.ql.session.SessionState  - Created local directory: /data11/yarn/nm/usercache/datawork/appcache/application_1731902288777_13206/container_e21_1731902288777_13206_01_000001/tmp/datawork
40987 [SparkTaskThread-0] INFO  org.apache.hadoop.hive.ql.session.SessionState  - Created local directory: /data11/yarn/nm/usercache/datawork/appcache/application_1731902288777_13206/container_e21_1731902288777_13206_01_000001/tmp/d4894ffe-7578-4c12-aae3-6760a7210474_resources
41012 [SparkTaskThread-0] INFO  org.apache.hadoop.hive.ql.session.SessionState  - Created HDFS directory: /tmp/hive/datawork/d4894ffe-7578-4c12-aae3-6760a7210474
41019 [SparkTaskThread-0] INFO  org.apache.hadoop.hive.ql.session.SessionState  - Created local directory: /data11/yarn/nm/usercache/datawork/appcache/application_1731902288777_13206/container_e21_1731902288777_13206_01_000001/tmp/datawork/d4894ffe-7578-4c12-aae3-6760a7210474
41033 [SparkTaskThread-0] INFO  org.apache.hadoop.hive.ql.session.SessionState  - Created HDFS directory: /tmp/hive/datawork/d4894ffe-7578-4c12-aae3-6760a7210474/_tmp_space.db
41036 [SparkTaskThread-0] INFO  org.apache.spark.sql.hive.client.HiveClientImpl  - Warehouse location for Hive client (version 1.2.2) is /user/hive/warehouse
41205 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.aspectj.SparkSessionAspectj  - spark.datawork.url: http://10.10.10.10:8080
41205 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.aspectj.SparkSessionAspectj  - Prepare to execute jobType=spark_sql, sql=select * from label.label_test where back_date regexp '' and id = '31539560' limit 1000
41215 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.task.SparkTask  - spark param: set spark.datawork.job.statementType=select 
41217 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.task.SparkTask  - spark param: set spark.datawork.job.sql=select * from label.label_test where back_date regexp '' and id = '31539560' limit 1000 
41421 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.aspectj.SparkSessionAspectj  - projectCode: dev, appKey: Hn0qPv9yFbOxOgj4, appSecret: ALzHmXYad1sYdSMkzGjXQxMVft7DViJO
41422 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.util.DriverUtils  - prepare to check authority for: select * from label.label_test where back_date regexp '' and id = '31539560' limit 1000
41422 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.util.DriverUtils  - projectCode: dev, appKey: Hn0qPv9yFbOxOgj4, appSecret: xxxxxx
41480 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.util.DriverUtils  - checkAuthority result: {"success":true,"code":200}
41520 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.util.DriverUtils  - check authority success
41778 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.util.mask.MaskingHandler  - have no maskcount,return sql
41778 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.aspectj.SparkSessionAspectj  - sql=select * from label.label_test where back_date regexp '' and id = '31539560' limit 1000
,masksql=select * from label.label_test where back_date regexp '' and id = '31539560' limit 1000

42322 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.aspectj.SparkSessionAspectj  - updateDMLSql success
42927 [SparkTaskThread-0] INFO  org.apache.spark.sql.execution.datasources.InMemoryFileIndex  - It took 162 ms to list leaf files for 10 paths.
42954 [SparkTaskThread-0] INFO  org.apache.spark.sql.execution.datasources.DataSourceStrategy  - Pruning directories with: isnotnull(back_date#5),back_date#5 RLIKE 
42982 [SparkTaskThread-0] INFO  org.apache.spark.sql.execution.datasources.FileSourceStrategy  - Pushed Filters: IsNotNull(id),EqualTo(id,31539560)
42983 [SparkTaskThread-0] INFO  org.apache.spark.sql.execution.datasources.FileSourceStrategy  - Post-Scan Filters: isnotnull(id#0),(id#0 = 31539560)
42986 [SparkTaskThread-0] INFO  org.apache.spark.sql.execution.datasources.FileSourceStrategy  - Output Data Schema: struct<id: string, label1: int, label2: int, label3: decimal(16,2), label4: bigint ... 3 more fields>
42991 [SparkTaskThread-0] INFO  org.apache.spark.sql.execution.datasources.FileSourceStrategy  - check user access table column, user: qiusuo, instance code: ukdZjIFZxRPwGofFzWf5mgfBCkZ605Qs, table: label.label_test, columns: id,label1,label2,label3,label4
43459 [SparkTaskThread-0] INFO  org.apache.spark.storage.memory.MemoryStore  - Block broadcast_0 stored as values in memory (estimated size 416.4 KiB, free 2.8 GiB)
44057 [SparkTaskThread-0] INFO  org.apache.spark.storage.memory.MemoryStore  - Block broadcast_0_piece0 stored as bytes in memory (estimated size 26.4 KiB, free 2.8 GiB)
44061 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Added broadcast_0_piece0 in memory on datanode16-fcy.hadoop.test.com:7529 (size: 26.4 KiB, free: 2.8 GiB)
44065 [SparkTaskThread-0] INFO  org.apache.spark.SparkContext  - Created broadcast 0 from collect at SparkSqlTask.scala:255
44075 [SparkTaskThread-0] INFO  org.apache.spark.sql.execution.datasources.InMemoryFileIndex  - Selected 10 partitions out of 10, pruned 0.0% partitions.
44083 [SparkTaskThread-0] INFO  org.apache.spark.sql.execution.FileSourceScanExec  - Planning scan with bin packing, max size: 4194304 bytes, open cost is considered as scanning 4194304 bytes.
44345 [SparkTaskThread-0] INFO  org.apache.spark.sql.blaze.BlazeColumnarOverrides$$anon$1  - Blaze convert strategy for current stage:
44345 [SparkTaskThread-0] INFO  org.apache.spark.sql.blaze.BlazeSparkSessionExtension  -  + CollectLimit (convertible=false, strategy=NeverConvert)
44345 [SparkTaskThread-0] INFO  org.apache.spark.sql.blaze.BlazeSparkSessionExtension  -  +- Filter (convertible=true, strategy=AlwaysConvert)
44345 [SparkTaskThread-0] INFO  org.apache.spark.sql.blaze.BlazeSparkSessionExtension  -  +-- Scan orc label.label_test (convertible=true, strategy=AlwaysConvert)
44348 [SparkTaskThread-0] INFO  org.apache.spark.sql.blaze.BlazeColumnarOverrides$$anon$1  - Blaze convert result for current stage:
44349 [SparkTaskThread-0] INFO  org.apache.spark.sql.blaze.BlazeSparkSessionExtension  -  + CollectLimit (convertible=false, strategy=NeverConvert)
44349 [SparkTaskThread-0] INFO  org.apache.spark.sql.blaze.BlazeSparkSessionExtension  -  +- NativeFilter (convertible=false, strategy=Default)
44349 [SparkTaskThread-0] INFO  org.apache.spark.sql.blaze.BlazeSparkSessionExtension  -  +-- InputAdapter (convertible=false, strategy=Default)
44349 [SparkTaskThread-0] INFO  org.apache.spark.sql.blaze.BlazeSparkSessionExtension  -  +--- NativeOrcScan label.label_test (convertible=false, strategy=Default)
44354 [SparkTaskThread-0] INFO  org.apache.spark.sql.blaze.BlazeColumnarOverrides$$anon$1  - Transformed spark plan after preColumnarTransitions:
CollectLimit 1000
+- NativeFilter (isnotnull(id#0) AND (id#0 = 31539560))
   +- InputAdapter [#0, #1, #2, #3, #4, #5, #6, #7]
      +- NativeOrcScan label.label_test (FileScan orc label.label_test[id#0,label1#1,label2#2,label3#3,label4#4L,back_date#5,dt#6,id_pt#7] Batched: true, DataFilters: [isnotnull(id#0), (id#0 = 31539560)], Format: ORC, Location: InMemoryFileIndex[hdfs://fcycdh/user/hive/warehouse/label.db/label_test/back_date=2..., PartitionFilters: [isnotnull(back_date#5), back_date#5 RLIKE ], PushedFilters: [IsNotNull(id), EqualTo(id,31539560)], ReadSchema: struct<id:string,label1:int,label2:int,label...)

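If the `_col0` name mismatch also affects the pushed filter above, it would explain the second symptom: `EqualTo(id, 31539560)` evaluated over rows whose `id` column resolved to null never matches, so the scan returns no rows at all. A hedged sketch of that failure mode (illustrative only, not Blaze's actual filter code):

```python
# Sketch: a pushed equality filter evaluated over a column that
# resolved to null (because the file had no field named `id`).
# The rows and predicate are illustrative, not real Blaze internals.

rows = [{"id": None, "label1": 1}] * 3  # `id` came back null for every row

def equal_to(col, value):
    # SQL equality with null never matches
    return lambda row: row[col] is not None and row[col] == value

pred = equal_to("id", "31539560")
print(sum(1 for r in rows if pred(r)))  # 0: every row filtered out
```

That would produce exactly the observed behaviour: the query succeeds but finds nothing whenever an `id` condition is present.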
44504 [SparkTaskThread-0] INFO  org.apache.spark.storage.memory.MemoryStore  - Block broadcast_1 stored as values in memory (estimated size 416.3 KiB, free 2.8 GiB)
44528 [SparkTaskThread-0] INFO  org.apache.spark.storage.memory.MemoryStore  - Block broadcast_1_piece0 stored as bytes in memory (estimated size 26.3 KiB, free 2.8 GiB)
44530 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Added broadcast_1_piece0 in memory on datanode16-fcy.hadoop.test.com:7529 (size: 26.3 KiB, free: 2.8 GiB)
44532 [SparkTaskThread-0] INFO  org.apache.spark.SparkContext  - Created broadcast 1 from collect at SparkSqlTask.scala:255
44816 [SparkTaskThread-0] INFO  org.apache.spark.SparkContext  - Starting job: collect at SparkSqlTask.scala:255
44848 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Got job 0 (collect at SparkSqlTask.scala:255) with 1 output partitions
44849 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Final stage: ResultStage 0 (collect at SparkSqlTask.scala:255)
44850 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Parents of final stage: List()
44852 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Missing parents: List()
44858 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Submitting ResultStage 0 (MapPartitionsRDD[4] at collect at SparkSqlTask.scala:255), which has no missing parents
45251 [dag-scheduler-event-loop] INFO  org.apache.spark.storage.memory.MemoryStore  - Block broadcast_2 stored as values in memory (estimated size 28.8 KiB, free 2.8 GiB)
45255 [dag-scheduler-event-loop] INFO  org.apache.spark.storage.memory.MemoryStore  - Block broadcast_2_piece0 stored as bytes in memory (estimated size 8.6 KiB, free 2.8 GiB)
45257 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Added broadcast_2_piece0 in memory on datanode16-fcy.hadoop.test.com:7529 (size: 8.6 KiB, free: 2.8 GiB)
45257 [dag-scheduler-event-loop] INFO  org.apache.spark.SparkContext  - Created broadcast 2 from broadcast at DAGScheduler.scala:1388
45274 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[4] at collect at SparkSqlTask.scala:255) (first 15 tasks are for partitions Vector(0))
45275 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.cluster.YarnClusterScheduler  - Adding task set 0.0 with 1 tasks resource profile 0
45347 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 0.0 in stage 0.0 (TID 0) (datanode24-fcy.hadoop.test.com, executor 1, partition 0, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
45835 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Added broadcast_2_piece0 in memory on datanode24-fcy.hadoop.test.com:32003 (size: 8.6 KiB, free: 7.6 GiB)
47516 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Added broadcast_1_piece0 in memory on datanode24-fcy.hadoop.test.com:32003 (size: 26.3 KiB, free: 7.6 GiB)
50720 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 0.0 in stage 0.0 (TID 0) in 5385 ms on datanode24-fcy.hadoop.test.com (executor 1) (1/1)
50726 [task-result-getter-0] INFO  org.apache.spark.scheduler.cluster.YarnClusterScheduler  - Removed TaskSet 0.0, whose tasks have all completed, from pool 
50746 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - ResultStage 0 (collect at SparkSqlTask.scala:255) finished in 5.865 s
50754 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
50755 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.cluster.YarnClusterScheduler  - Killing all running tasks in stage 0: Stage finished
50759 [SparkTaskThread-0] INFO  org.apache.spark.scheduler.DAGScheduler  - Job 0 finished: collect at SparkSqlTask.scala:255, took 5.942067 s
50794 [SparkTaskThread-0] INFO  org.apache.spark.SparkContext  - Starting job: collect at SparkSqlTask.scala:255
50796 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Got job 1 (collect at SparkSqlTask.scala:255) with 4 output partitions
50796 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Final stage: ResultStage 1 (collect at SparkSqlTask.scala:255)
50796 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Parents of final stage: List()
50796 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Missing parents: List()
50798 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Submitting ResultStage 1 (MapPartitionsRDD[4] at collect at SparkSqlTask.scala:255), which has no missing parents
50811 [dag-scheduler-event-loop] INFO  org.apache.spark.storage.memory.MemoryStore  - Block broadcast_3 stored as values in memory (estimated size 28.8 KiB, free 2.8 GiB)
50817 [dag-scheduler-event-loop] INFO  org.apache.spark.storage.memory.MemoryStore  - Block broadcast_3_piece0 stored as bytes in memory (estimated size 8.6 KiB, free 2.8 GiB)
50818 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Added broadcast_3_piece0 in memory on datanode16-fcy.hadoop.test.com:7529 (size: 8.6 KiB, free: 2.8 GiB)
50819 [dag-scheduler-event-loop] INFO  org.apache.spark.SparkContext  - Created broadcast 3 from broadcast at DAGScheduler.scala:1388
50820 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Submitting 4 missing tasks from ResultStage 1 (MapPartitionsRDD[4] at collect at SparkSqlTask.scala:255) (first 15 tasks are for partitions Vector(1, 2, 3, 4))
50820 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.cluster.YarnClusterScheduler  - Adding task set 1.0 with 4 tasks resource profile 0
50825 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 0.0 in stage 1.0 (TID 1) (datanode24-fcy.hadoop.test.com, executor 1, partition 1, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
50826 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 1.0 in stage 1.0 (TID 2) (datanode24-fcy.hadoop.test.com, executor 1, partition 2, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
50826 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 2.0 in stage 1.0 (TID 3) (datanode24-fcy.hadoop.test.com, executor 1, partition 3, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
50925 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Added broadcast_3_piece0 in memory on datanode24-fcy.hadoop.test.com:32003 (size: 8.6 KiB, free: 7.6 GiB)
50992 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 3.0 in stage 1.0 (TID 4) (datanode24-fcy.hadoop.test.com, executor 1, partition 4, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
50993 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 2.0 in stage 1.0 (TID 3) in 167 ms on datanode24-fcy.hadoop.test.com (executor 1) (1/4)
51000 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 0.0 in stage 1.0 (TID 1) in 175 ms on datanode24-fcy.hadoop.test.com (executor 1) (2/4)
51002 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 1.0 in stage 1.0 (TID 2) in 176 ms on datanode24-fcy.hadoop.test.com (executor 1) (3/4)
51034 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 3.0 in stage 1.0 (TID 4) in 43 ms on datanode24-fcy.hadoop.test.com (executor 1) (4/4)
51034 [task-result-getter-0] INFO  org.apache.spark.scheduler.cluster.YarnClusterScheduler  - Removed TaskSet 1.0, whose tasks have all completed, from pool 
51035 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - ResultStage 1 (collect at SparkSqlTask.scala:255) finished in 0.235 s
51036 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Job 1 is finished. Cancelling potential speculative or zombie tasks for this job
51036 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.cluster.YarnClusterScheduler  - Killing all running tasks in stage 1: Stage finished
51036 [SparkTaskThread-0] INFO  org.apache.spark.scheduler.DAGScheduler  - Job 1 finished: collect at SparkSqlTask.scala:255, took 0.241590 s
51051 [SparkTaskThread-0] INFO  org.apache.spark.SparkContext  - Starting job: collect at SparkSqlTask.scala:255
51053 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Got job 2 (collect at SparkSqlTask.scala:255) with 20 output partitions
51053 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Final stage: ResultStage 2 (collect at SparkSqlTask.scala:255)
51053 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Parents of final stage: List()
51053 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Missing parents: List()
51055 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Submitting ResultStage 2 (MapPartitionsRDD[4] at collect at SparkSqlTask.scala:255), which has no missing parents
51064 [dag-scheduler-event-loop] INFO  org.apache.spark.storage.memory.MemoryStore  - Block broadcast_4 stored as values in memory (estimated size 28.8 KiB, free 2.8 GiB)
51070 [dag-scheduler-event-loop] INFO  org.apache.spark.storage.memory.MemoryStore  - Block broadcast_4_piece0 stored as bytes in memory (estimated size 8.6 KiB, free 2.8 GiB)
51071 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Added broadcast_4_piece0 in memory on datanode16-fcy.hadoop.test.com:7529 (size: 8.6 KiB, free: 2.8 GiB)
51072 [dag-scheduler-event-loop] INFO  org.apache.spark.SparkContext  - Created broadcast 4 from broadcast at DAGScheduler.scala:1388
51072 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Submitting 20 missing tasks from ResultStage 2 (MapPartitionsRDD[4] at collect at SparkSqlTask.scala:255) (first 15 tasks are for partitions Vector(5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19))
51073 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.cluster.YarnClusterScheduler  - Adding task set 2.0 with 20 tasks resource profile 0
51078 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 0.0 in stage 2.0 (TID 5) (datanode24-fcy.hadoop.test.com, executor 1, partition 5, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51079 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 1.0 in stage 2.0 (TID 6) (datanode24-fcy.hadoop.test.com, executor 1, partition 6, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51080 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 2.0 in stage 2.0 (TID 7) (datanode24-fcy.hadoop.test.com, executor 1, partition 7, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51102 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Added broadcast_4_piece0 in memory on datanode24-fcy.hadoop.test.com:32003 (size: 8.6 KiB, free: 7.6 GiB)
51150 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 3.0 in stage 2.0 (TID 8) (datanode24-fcy.hadoop.test.com, executor 1, partition 8, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51150 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 2.0 in stage 2.0 (TID 7) in 71 ms on datanode24-fcy.hadoop.test.com (executor 1) (1/20)
51154 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 4.0 in stage 2.0 (TID 9) (datanode24-fcy.hadoop.test.com, executor 1, partition 9, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51155 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 0.0 in stage 2.0 (TID 5) in 77 ms on datanode24-fcy.hadoop.test.com (executor 1) (2/20)
51164 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 5.0 in stage 2.0 (TID 10) (datanode24-fcy.hadoop.test.com, executor 1, partition 10, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51165 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 1.0 in stage 2.0 (TID 6) in 87 ms on datanode24-fcy.hadoop.test.com (executor 1) (3/20)
51191 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 6.0 in stage 2.0 (TID 11) (datanode24-fcy.hadoop.test.com, executor 1, partition 11, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51192 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 3.0 in stage 2.0 (TID 8) in 43 ms on datanode24-fcy.hadoop.test.com (executor 1) (4/20)
51195 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 7.0 in stage 2.0 (TID 12) (datanode24-fcy.hadoop.test.com, executor 1, partition 12, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51195 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 5.0 in stage 2.0 (TID 10) in 32 ms on datanode24-fcy.hadoop.test.com (executor 1) (5/20)
51198 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 8.0 in stage 2.0 (TID 13) (datanode24-fcy.hadoop.test.com, executor 1, partition 13, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51198 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 4.0 in stage 2.0 (TID 9) in 45 ms on datanode24-fcy.hadoop.test.com (executor 1) (6/20)
51235 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 9.0 in stage 2.0 (TID 14) (datanode24-fcy.hadoop.test.com, executor 1, partition 14, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51236 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 8.0 in stage 2.0 (TID 13) in 39 ms on datanode24-fcy.hadoop.test.com (executor 1) (7/20)
51237 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 10.0 in stage 2.0 (TID 15) (datanode24-fcy.hadoop.test.com, executor 1, partition 15, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51238 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 7.0 in stage 2.0 (TID 12) in 44 ms on datanode24-fcy.hadoop.test.com (executor 1) (8/20)
51240 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 11.0 in stage 2.0 (TID 16) (datanode24-fcy.hadoop.test.com, executor 1, partition 16, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51240 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 6.0 in stage 2.0 (TID 11) in 49 ms on datanode24-fcy.hadoop.test.com (executor 1) (9/20)
51264 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 12.0 in stage 2.0 (TID 17) (datanode24-fcy.hadoop.test.com, executor 1, partition 17, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51265 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 9.0 in stage 2.0 (TID 14) in 30 ms on datanode24-fcy.hadoop.test.com (executor 1) (10/20)
51276 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 13.0 in stage 2.0 (TID 18) (datanode24-fcy.hadoop.test.com, executor 1, partition 18, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51277 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 10.0 in stage 2.0 (TID 15) in 40 ms on datanode24-fcy.hadoop.test.com (executor 1) (11/20)
51293 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 14.0 in stage 2.0 (TID 19) (datanode24-fcy.hadoop.test.com, executor 1, partition 19, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51294 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 12.0 in stage 2.0 (TID 17) in 30 ms on datanode24-fcy.hadoop.test.com (executor 1) (12/20)
51297 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 15.0 in stage 2.0 (TID 20) (datanode24-fcy.hadoop.test.com, executor 1, partition 20, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51297 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 13.0 in stage 2.0 (TID 18) in 21 ms on datanode24-fcy.hadoop.test.com (executor 1) (13/20)
51316 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 16.0 in stage 2.0 (TID 21) (datanode24-fcy.hadoop.test.com, executor 1, partition 21, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51316 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 14.0 in stage 2.0 (TID 19) in 23 ms on datanode24-fcy.hadoop.test.com (executor 1) (14/20)
51324 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 17.0 in stage 2.0 (TID 22) (datanode24-fcy.hadoop.test.com, executor 1, partition 22, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51324 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 15.0 in stage 2.0 (TID 20) in 28 ms on datanode24-fcy.hadoop.test.com (executor 1) (15/20)
51342 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 18.0 in stage 2.0 (TID 23) (datanode24-fcy.hadoop.test.com, executor 1, partition 23, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51343 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 16.0 in stage 2.0 (TID 21) in 28 ms on datanode24-fcy.hadoop.test.com (executor 1) (16/20)
51347 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 19.0 in stage 2.0 (TID 24) (datanode24-fcy.hadoop.test.com, executor 1, partition 24, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51348 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 17.0 in stage 2.0 (TID 22) in 24 ms on datanode24-fcy.hadoop.test.com (executor 1) (17/20)
51367 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 18.0 in stage 2.0 (TID 23) in 25 ms on datanode24-fcy.hadoop.test.com (executor 1) (18/20)
51368 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 19.0 in stage 2.0 (TID 24) in 21 ms on datanode24-fcy.hadoop.test.com (executor 1) (19/20)
51443 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 11.0 in stage 2.0 (TID 16) in 203 ms on datanode24-fcy.hadoop.test.com (executor 1) (20/20)
51443 [task-result-getter-0] INFO  org.apache.spark.scheduler.cluster.YarnClusterScheduler  - Removed TaskSet 2.0, whose tasks have all completed, from pool 
51445 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - ResultStage 2 (collect at SparkSqlTask.scala:255) finished in 0.387 s
51445 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Job 2 is finished. Cancelling potential speculative or zombie tasks for this job
51445 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.cluster.YarnClusterScheduler  - Killing all running tasks in stage 2: Stage finished
51445 [SparkTaskThread-0] INFO  org.apache.spark.scheduler.DAGScheduler  - Job 2 finished: collect at SparkSqlTask.scala:255, took 0.394196 s
51467 [SparkTaskThread-0] INFO  org.apache.spark.SparkContext  - Starting job: collect at SparkSqlTask.scala:255
51469 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Got job 3 (collect at SparkSqlTask.scala:255) with 100 output partitions
51469 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Final stage: ResultStage 3 (collect at SparkSqlTask.scala:255)
51469 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Parents of final stage: List()
51470 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Missing parents: List()
51471 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Submitting ResultStage 3 (MapPartitionsRDD[4] at collect at SparkSqlTask.scala:255), which has no missing parents
51487 [dag-scheduler-event-loop] INFO  org.apache.spark.storage.memory.MemoryStore  - Block broadcast_5 stored as values in memory (estimated size 28.8 KiB, free 2.8 GiB)
51495 [dag-scheduler-event-loop] INFO  org.apache.spark.storage.memory.MemoryStore  - Block broadcast_5_piece0 stored as bytes in memory (estimated size 8.6 KiB, free 2.8 GiB)
51497 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Added broadcast_5_piece0 in memory on datanode16-fcy.hadoop.test.com:7529 (size: 8.6 KiB, free: 2.8 GiB)
51498 [dag-scheduler-event-loop] INFO  org.apache.spark.SparkContext  - Created broadcast 5 from broadcast at DAGScheduler.scala:1388
51500 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Submitting 100 missing tasks from ResultStage 3 (MapPartitionsRDD[4] at collect at SparkSqlTask.scala:255) (first 15 tasks are for partitions Vector(25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39))
51500 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.cluster.YarnClusterScheduler  - Adding task set 3.0 with 100 tasks resource profile 0
51506 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 0.0 in stage 3.0 (TID 25) (datanode24-fcy.hadoop.test.com, executor 1, partition 25, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51507 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 1.0 in stage 3.0 (TID 26) (datanode24-fcy.hadoop.test.com, executor 1, partition 26, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51507 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 2.0 in stage 3.0 (TID 27) (datanode24-fcy.hadoop.test.com, executor 1, partition 27, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51525 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Added broadcast_5_piece0 in memory on datanode24-fcy.hadoop.test.com:32003 (size: 8.6 KiB, free: 7.6 GiB)
51558 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 3.0 in stage 3.0 (TID 28) (datanode24-fcy.hadoop.test.com, executor 1, partition 28, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51558 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 0.0 in stage 3.0 (TID 25) in 52 ms on datanode24-fcy.hadoop.test.com (executor 1) (1/100)
51560 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 4.0 in stage 3.0 (TID 29) (datanode24-fcy.hadoop.test.com, executor 1, partition 29, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51560 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 1.0 in stage 3.0 (TID 26) in 53 ms on datanode24-fcy.hadoop.test.com (executor 1) (2/100)
51562 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 5.0 in stage 3.0 (TID 30) (datanode24-fcy.hadoop.test.com, executor 1, partition 30, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51563 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 2.0 in stage 3.0 (TID 27) in 56 ms on datanode24-fcy.hadoop.test.com (executor 1) (3/100)
51592 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 6.0 in stage 3.0 (TID 31) (datanode24-fcy.hadoop.test.com, executor 1, partition 31, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51593 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 5.0 in stage 3.0 (TID 30) in 31 ms on datanode24-fcy.hadoop.test.com (executor 1) (4/100)
51595 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 7.0 in stage 3.0 (TID 32) (datanode24-fcy.hadoop.test.com, executor 1, partition 32, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51595 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 4.0 in stage 3.0 (TID 29) in 35 ms on datanode24-fcy.hadoop.test.com (executor 1) (5/100)
51597 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 8.0 in stage 3.0 (TID 33) (datanode24-fcy.hadoop.test.com, executor 1, partition 33, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51597 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 3.0 in stage 3.0 (TID 28) in 39 ms on datanode24-fcy.hadoop.test.com (executor 1) (6/100)
51626 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 9.0 in stage 3.0 (TID 34) (datanode24-fcy.hadoop.test.com, executor 1, partition 34, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51627 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 7.0 in stage 3.0 (TID 32) in 33 ms on datanode24-fcy.hadoop.test.com (executor 1) (7/100)
51629 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 10.0 in stage 3.0 (TID 35) (datanode24-fcy.hadoop.test.com, executor 1, partition 35, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51629 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 8.0 in stage 3.0 (TID 33) in 32 ms on datanode24-fcy.hadoop.test.com (executor 1) (8/100)
51652 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 11.0 in stage 3.0 (TID 36) (datanode24-fcy.hadoop.test.com, executor 1, partition 36, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51652 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 10.0 in stage 3.0 (TID 35) in 24 ms on datanode24-fcy.hadoop.test.com (executor 1) (9/100)
51671 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 12.0 in stage 3.0 (TID 37) (datanode24-fcy.hadoop.test.com, executor 1, partition 37, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51672 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 11.0 in stage 3.0 (TID 36) in 21 ms on datanode24-fcy.hadoop.test.com (executor 1) (10/100)
51678 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 13.0 in stage 3.0 (TID 38) (datanode24-fcy.hadoop.test.com, executor 1, partition 38, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51678 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 9.0 in stage 3.0 (TID 34) in 52 ms on datanode24-fcy.hadoop.test.com (executor 1) (11/100)
51692 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 14.0 in stage 3.0 (TID 39) (datanode24-fcy.hadoop.test.com, executor 1, partition 39, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51692 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 12.0 in stage 3.0 (TID 37) in 21 ms on datanode24-fcy.hadoop.test.com (executor 1) (12/100)
51697 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 15.0 in stage 3.0 (TID 40) (datanode24-fcy.hadoop.test.com, executor 1, partition 40, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51697 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 13.0 in stage 3.0 (TID 38) in 20 ms on datanode24-fcy.hadoop.test.com (executor 1) (13/100)
51710 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 16.0 in stage 3.0 (TID 41) (datanode24-fcy.hadoop.test.com, executor 1, partition 41, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51711 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 14.0 in stage 3.0 (TID 39) in 20 ms on datanode24-fcy.hadoop.test.com (executor 1) (14/100)
51720 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 17.0 in stage 3.0 (TID 42) (datanode24-fcy.hadoop.test.com, executor 1, partition 42, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51720 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 15.0 in stage 3.0 (TID 40) in 23 ms on datanode24-fcy.hadoop.test.com (executor 1) (15/100)
51736 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 18.0 in stage 3.0 (TID 43) (datanode24-fcy.hadoop.test.com, executor 1, partition 43, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51737 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 16.0 in stage 3.0 (TID 41) in 27 ms on datanode24-fcy.hadoop.test.com (executor 1) (16/100)
51738 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 19.0 in stage 3.0 (TID 44) (datanode24-fcy.hadoop.test.com, executor 1, partition 44, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51738 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 17.0 in stage 3.0 (TID 42) in 18 ms on datanode24-fcy.hadoop.test.com (executor 1) (17/100)
51756 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 20.0 in stage 3.0 (TID 45) (datanode24-fcy.hadoop.test.com, executor 1, partition 45, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51757 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 19.0 in stage 3.0 (TID 44) in 19 ms on datanode24-fcy.hadoop.test.com (executor 1) (18/100)
51758 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 21.0 in stage 3.0 (TID 46) (datanode24-fcy.hadoop.test.com, executor 1, partition 46, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51758 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 18.0 in stage 3.0 (TID 43) in 22 ms on datanode24-fcy.hadoop.test.com (executor 1) (19/100)
51759 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 22.0 in stage 3.0 (TID 47) (datanode24-fcy.hadoop.test.com, executor 1, partition 47, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51759 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 6.0 in stage 3.0 (TID 31) in 167 ms on datanode24-fcy.hadoop.test.com (executor 1) (20/100)
51778 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 23.0 in stage 3.0 (TID 48) (datanode24-fcy.hadoop.test.com, executor 1, partition 48, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51778 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 20.0 in stage 3.0 (TID 45) in 22 ms on datanode24-fcy.hadoop.test.com (executor 1) (21/100)
51798 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 24.0 in stage 3.0 (TID 49) (datanode24-fcy.hadoop.test.com, executor 1, partition 49, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51798 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 22.0 in stage 3.0 (TID 47) in 39 ms on datanode24-fcy.hadoop.test.com (executor 1) (22/100)
51807 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 25.0 in stage 3.0 (TID 50) (datanode24-fcy.hadoop.test.com, executor 1, partition 50, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51807 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 23.0 in stage 3.0 (TID 48) in 29 ms on datanode24-fcy.hadoop.test.com (executor 1) (23/100)
51821 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 26.0 in stage 3.0 (TID 51) (datanode24-fcy.hadoop.test.com, executor 1, partition 51, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51822 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 24.0 in stage 3.0 (TID 49) in 25 ms on datanode24-fcy.hadoop.test.com (executor 1) (24/100)
51828 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 27.0 in stage 3.0 (TID 52) (datanode24-fcy.hadoop.test.com, executor 1, partition 52, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51829 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 25.0 in stage 3.0 (TID 50) in 22 ms on datanode24-fcy.hadoop.test.com (executor 1) (25/100)
51840 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 28.0 in stage 3.0 (TID 53) (datanode24-fcy.hadoop.test.com, executor 1, partition 53, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51841 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 26.0 in stage 3.0 (TID 51) in 20 ms on datanode24-fcy.hadoop.test.com (executor 1) (26/100)
51844 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 29.0 in stage 3.0 (TID 54) (datanode24-fcy.hadoop.test.com, executor 1, partition 54, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51845 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 27.0 in stage 3.0 (TID 52) in 17 ms on datanode24-fcy.hadoop.test.com (executor 1) (27/100)
51857 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 30.0 in stage 3.0 (TID 55) (datanode24-fcy.hadoop.test.com, executor 1, partition 55, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51858 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 28.0 in stage 3.0 (TID 53) in 18 ms on datanode24-fcy.hadoop.test.com (executor 1) (28/100)
51860 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 31.0 in stage 3.0 (TID 56) (datanode24-fcy.hadoop.test.com, executor 1, partition 56, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51861 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 29.0 in stage 3.0 (TID 54) in 17 ms on datanode24-fcy.hadoop.test.com (executor 1) (29/100)
51878 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 32.0 in stage 3.0 (TID 57) (datanode24-fcy.hadoop.test.com, executor 1, partition 57, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51879 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 31.0 in stage 3.0 (TID 56) in 19 ms on datanode24-fcy.hadoop.test.com (executor 1) (30/100)
51880 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 33.0 in stage 3.0 (TID 58) (datanode24-fcy.hadoop.test.com, executor 1, partition 58, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51880 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 30.0 in stage 3.0 (TID 55) in 23 ms on datanode24-fcy.hadoop.test.com (executor 1) (31/100)
51896 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 34.0 in stage 3.0 (TID 59) (datanode24-fcy.hadoop.test.com, executor 1, partition 59, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51896 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 32.0 in stage 3.0 (TID 57) in 18 ms on datanode24-fcy.hadoop.test.com (executor 1) (32/100)
51897 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 35.0 in stage 3.0 (TID 60) (datanode24-fcy.hadoop.test.com, executor 1, partition 60, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51898 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 33.0 in stage 3.0 (TID 58) in 19 ms on datanode24-fcy.hadoop.test.com (executor 1) (33/100)
51912 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 36.0 in stage 3.0 (TID 61) (datanode24-fcy.hadoop.test.com, executor 1, partition 61, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51914 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 37.0 in stage 3.0 (TID 62) (datanode24-fcy.hadoop.test.com, executor 1, partition 62, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51914 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 34.0 in stage 3.0 (TID 59) in 19 ms on datanode24-fcy.hadoop.test.com (executor 1) (34/100)
51914 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 35.0 in stage 3.0 (TID 60) in 17 ms on datanode24-fcy.hadoop.test.com (executor 1) (35/100)
51929 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 38.0 in stage 3.0 (TID 63) (datanode24-fcy.hadoop.test.com, executor 1, partition 63, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51929 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 36.0 in stage 3.0 (TID 61) in 17 ms on datanode24-fcy.hadoop.test.com (executor 1) (36/100)
51949 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 39.0 in stage 3.0 (TID 64) (datanode24-fcy.hadoop.test.com, executor 1, partition 64, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51949 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 38.0 in stage 3.0 (TID 63) in 21 ms on datanode24-fcy.hadoop.test.com (executor 1) (37/100)
51954 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 40.0 in stage 3.0 (TID 65) (datanode24-fcy.hadoop.test.com, executor 1, partition 65, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51954 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 21.0 in stage 3.0 (TID 46) in 196 ms on datanode24-fcy.hadoop.test.com (executor 1) (38/100)
51968 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 41.0 in stage 3.0 (TID 66) (datanode24-fcy.hadoop.test.com, executor 1, partition 66, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51968 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 39.0 in stage 3.0 (TID 64) in 19 ms on datanode24-fcy.hadoop.test.com (executor 1) (39/100)
51969 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 42.0 in stage 3.0 (TID 67) (datanode24-fcy.hadoop.test.com, executor 1, partition 67, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
51969 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 40.0 in stage 3.0 (TID 65) in 15 ms on datanode24-fcy.hadoop.test.com (executor 1) (40/100)
52062 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 43.0 in stage 3.0 (TID 68) (datanode24-fcy.hadoop.test.com, executor 1, partition 68, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52063 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 37.0 in stage 3.0 (TID 62) in 150 ms on datanode24-fcy.hadoop.test.com (executor 1) (41/100)
52064 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 44.0 in stage 3.0 (TID 69) (datanode24-fcy.hadoop.test.com, executor 1, partition 69, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52065 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 41.0 in stage 3.0 (TID 66) in 97 ms on datanode24-fcy.hadoop.test.com (executor 1) (42/100)
52066 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 45.0 in stage 3.0 (TID 70) (datanode24-fcy.hadoop.test.com, executor 1, partition 70, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52067 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 42.0 in stage 3.0 (TID 67) in 99 ms on datanode24-fcy.hadoop.test.com (executor 1) (43/100)
52083 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 46.0 in stage 3.0 (TID 71) (datanode24-fcy.hadoop.test.com, executor 1, partition 71, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52084 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 43.0 in stage 3.0 (TID 68) in 21 ms on datanode24-fcy.hadoop.test.com (executor 1) (44/100)
52087 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 47.0 in stage 3.0 (TID 72) (datanode24-fcy.hadoop.test.com, executor 1, partition 72, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52088 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 44.0 in stage 3.0 (TID 69) in 24 ms on datanode24-fcy.hadoop.test.com (executor 1) (45/100)
52088 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 48.0 in stage 3.0 (TID 73) (datanode24-fcy.hadoop.test.com, executor 1, partition 73, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52089 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 45.0 in stage 3.0 (TID 70) in 23 ms on datanode24-fcy.hadoop.test.com (executor 1) (46/100)
52102 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 49.0 in stage 3.0 (TID 74) (datanode24-fcy.hadoop.test.com, executor 1, partition 74, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52102 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 47.0 in stage 3.0 (TID 72) in 15 ms on datanode24-fcy.hadoop.test.com (executor 1) (47/100)
52104 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 50.0 in stage 3.0 (TID 75) (datanode24-fcy.hadoop.test.com, executor 1, partition 75, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52105 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 46.0 in stage 3.0 (TID 71) in 22 ms on datanode24-fcy.hadoop.test.com (executor 1) (48/100)
52105 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 51.0 in stage 3.0 (TID 76) (datanode24-fcy.hadoop.test.com, executor 1, partition 76, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52106 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 48.0 in stage 3.0 (TID 73) in 18 ms on datanode24-fcy.hadoop.test.com (executor 1) (49/100)
52118 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 52.0 in stage 3.0 (TID 77) (datanode24-fcy.hadoop.test.com, executor 1, partition 77, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52119 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 51.0 in stage 3.0 (TID 76) in 13 ms on datanode24-fcy.hadoop.test.com (executor 1) (50/100)
52121 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 53.0 in stage 3.0 (TID 78) (datanode24-fcy.hadoop.test.com, executor 1, partition 78, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52121 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 50.0 in stage 3.0 (TID 75) in 17 ms on datanode24-fcy.hadoop.test.com (executor 1) (51/100)
52122 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 54.0 in stage 3.0 (TID 79) (datanode24-fcy.hadoop.test.com, executor 1, partition 79, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52122 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 49.0 in stage 3.0 (TID 74) in 21 ms on datanode24-fcy.hadoop.test.com (executor 1) (52/100)
52136 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 55.0 in stage 3.0 (TID 80) (datanode24-fcy.hadoop.test.com, executor 1, partition 80, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52136 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 52.0 in stage 3.0 (TID 77) in 18 ms on datanode24-fcy.hadoop.test.com (executor 1) (53/100)
52137 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 56.0 in stage 3.0 (TID 81) (datanode24-fcy.hadoop.test.com, executor 1, partition 81, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52137 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 54.0 in stage 3.0 (TID 79) in 16 ms on datanode24-fcy.hadoop.test.com (executor 1) (54/100)
52152 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 57.0 in stage 3.0 (TID 82) (datanode24-fcy.hadoop.test.com, executor 1, partition 82, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52153 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 55.0 in stage 3.0 (TID 80) in 17 ms on datanode24-fcy.hadoop.test.com (executor 1) (55/100)
52153 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 58.0 in stage 3.0 (TID 83) (datanode24-fcy.hadoop.test.com, executor 1, partition 83, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52154 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 56.0 in stage 3.0 (TID 81) in 18 ms on datanode24-fcy.hadoop.test.com (executor 1) (56/100)
52168 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 59.0 in stage 3.0 (TID 84) (datanode24-fcy.hadoop.test.com, executor 1, partition 84, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52168 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 57.0 in stage 3.0 (TID 82) in 16 ms on datanode24-fcy.hadoop.test.com (executor 1) (57/100)
52169 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 60.0 in stage 3.0 (TID 85) (datanode24-fcy.hadoop.test.com, executor 1, partition 85, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52169 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 58.0 in stage 3.0 (TID 83) in 16 ms on datanode24-fcy.hadoop.test.com (executor 1) (58/100)
52182 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 61.0 in stage 3.0 (TID 86) (datanode24-fcy.hadoop.test.com, executor 1, partition 86, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52183 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 59.0 in stage 3.0 (TID 84) in 16 ms on datanode24-fcy.hadoop.test.com (executor 1) (59/100)
52183 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 62.0 in stage 3.0 (TID 87) (datanode24-fcy.hadoop.test.com, executor 1, partition 87, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52184 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 60.0 in stage 3.0 (TID 85) in 16 ms on datanode24-fcy.hadoop.test.com (executor 1) (60/100)
52198 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 63.0 in stage 3.0 (TID 88) (datanode24-fcy.hadoop.test.com, executor 1, partition 88, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52199 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 64.0 in stage 3.0 (TID 89) (datanode24-fcy.hadoop.test.com, executor 1, partition 89, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52199 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 62.0 in stage 3.0 (TID 87) in 16 ms on datanode24-fcy.hadoop.test.com (executor 1) (61/100)
52199 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 61.0 in stage 3.0 (TID 86) in 17 ms on datanode24-fcy.hadoop.test.com (executor 1) (62/100)
52223 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 65.0 in stage 3.0 (TID 90) (datanode24-fcy.hadoop.test.com, executor 1, partition 90, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52223 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 63.0 in stage 3.0 (TID 88) in 25 ms on datanode24-fcy.hadoop.test.com (executor 1) (63/100)
52224 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 66.0 in stage 3.0 (TID 91) (datanode24-fcy.hadoop.test.com, executor 1, partition 91, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52224 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 64.0 in stage 3.0 (TID 89) in 25 ms on datanode24-fcy.hadoop.test.com (executor 1) (64/100)
52240 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 67.0 in stage 3.0 (TID 92) (datanode24-fcy.hadoop.test.com, executor 1, partition 92, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52241 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 66.0 in stage 3.0 (TID 91) in 17 ms on datanode24-fcy.hadoop.test.com (executor 1) (65/100)
52241 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 68.0 in stage 3.0 (TID 93) (datanode24-fcy.hadoop.test.com, executor 1, partition 93, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52242 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 65.0 in stage 3.0 (TID 90) in 19 ms on datanode24-fcy.hadoop.test.com (executor 1) (66/100)
52261 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 69.0 in stage 3.0 (TID 94) (datanode24-fcy.hadoop.test.com, executor 1, partition 94, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52261 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 67.0 in stage 3.0 (TID 92) in 21 ms on datanode24-fcy.hadoop.test.com (executor 1) (67/100)
52281 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 70.0 in stage 3.0 (TID 95) (datanode24-fcy.hadoop.test.com, executor 1, partition 95, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52281 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 69.0 in stage 3.0 (TID 94) in 21 ms on datanode24-fcy.hadoop.test.com (executor 1) (68/100)
52300 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 71.0 in stage 3.0 (TID 96) (datanode24-fcy.hadoop.test.com, executor 1, partition 96, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52300 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 70.0 in stage 3.0 (TID 95) in 19 ms on datanode24-fcy.hadoop.test.com (executor 1) (69/100)
52321 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 72.0 in stage 3.0 (TID 97) (datanode24-fcy.hadoop.test.com, executor 1, partition 97, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52321 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 53.0 in stage 3.0 (TID 78) in 200 ms on datanode24-fcy.hadoop.test.com (executor 1) (70/100)
52322 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 73.0 in stage 3.0 (TID 98) (datanode24-fcy.hadoop.test.com, executor 1, partition 98, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52323 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 71.0 in stage 3.0 (TID 96) in 24 ms on datanode24-fcy.hadoop.test.com (executor 1) (71/100)
52338 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 74.0 in stage 3.0 (TID 99) (datanode24-fcy.hadoop.test.com, executor 1, partition 99, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52338 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 73.0 in stage 3.0 (TID 98) in 16 ms on datanode24-fcy.hadoop.test.com (executor 1) (72/100)
52339 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 75.0 in stage 3.0 (TID 100) (datanode24-fcy.hadoop.test.com, executor 1, partition 100, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52339 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 72.0 in stage 3.0 (TID 97) in 18 ms on datanode24-fcy.hadoop.test.com (executor 1) (73/100)
52352 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 76.0 in stage 3.0 (TID 101) (datanode24-fcy.hadoop.test.com, executor 1, partition 101, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52352 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 75.0 in stage 3.0 (TID 100) in 13 ms on datanode24-fcy.hadoop.test.com (executor 1) (74/100)
52356 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 77.0 in stage 3.0 (TID 102) (datanode24-fcy.hadoop.test.com, executor 1, partition 102, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52356 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 74.0 in stage 3.0 (TID 99) in 19 ms on datanode24-fcy.hadoop.test.com (executor 1) (75/100)
52367 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 78.0 in stage 3.0 (TID 103) (datanode24-fcy.hadoop.test.com, executor 1, partition 103, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52368 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 76.0 in stage 3.0 (TID 101) in 17 ms on datanode24-fcy.hadoop.test.com (executor 1) (76/100)
52375 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 79.0 in stage 3.0 (TID 104) (datanode24-fcy.hadoop.test.com, executor 1, partition 104, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52375 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 77.0 in stage 3.0 (TID 102) in 19 ms on datanode24-fcy.hadoop.test.com (executor 1) (77/100)
52381 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 80.0 in stage 3.0 (TID 105) (datanode24-fcy.hadoop.test.com, executor 1, partition 105, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52381 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 78.0 in stage 3.0 (TID 103) in 14 ms on datanode24-fcy.hadoop.test.com (executor 1) (78/100)
52393 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 81.0 in stage 3.0 (TID 106) (datanode24-fcy.hadoop.test.com, executor 1, partition 106, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52393 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 80.0 in stage 3.0 (TID 105) in 12 ms on datanode24-fcy.hadoop.test.com (executor 1) (79/100)
52395 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 82.0 in stage 3.0 (TID 107) (datanode24-fcy.hadoop.test.com, executor 1, partition 107, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52395 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 79.0 in stage 3.0 (TID 104) in 20 ms on datanode24-fcy.hadoop.test.com (executor 1) (80/100)
52407 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 83.0 in stage 3.0 (TID 108) (datanode24-fcy.hadoop.test.com, executor 1, partition 108, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52407 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 82.0 in stage 3.0 (TID 107) in 13 ms on datanode24-fcy.hadoop.test.com (executor 1) (81/100)
52409 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 84.0 in stage 3.0 (TID 109) (datanode24-fcy.hadoop.test.com, executor 1, partition 109, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52410 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 81.0 in stage 3.0 (TID 106) in 18 ms on datanode24-fcy.hadoop.test.com (executor 1) (82/100)
52418 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 85.0 in stage 3.0 (TID 110) (datanode24-fcy.hadoop.test.com, executor 1, partition 110, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52418 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 83.0 in stage 3.0 (TID 108) in 12 ms on datanode24-fcy.hadoop.test.com (executor 1) (83/100)
52420 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 86.0 in stage 3.0 (TID 111) (datanode24-fcy.hadoop.test.com, executor 1, partition 111, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52420 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 68.0 in stage 3.0 (TID 93) in 179 ms on datanode24-fcy.hadoop.test.com (executor 1) (84/100)
52432 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 87.0 in stage 3.0 (TID 112) (datanode24-fcy.hadoop.test.com, executor 1, partition 112, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52432 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 85.0 in stage 3.0 (TID 110) in 14 ms on datanode24-fcy.hadoop.test.com (executor 1) (85/100)
52557 [dispatcher-event-loop-1] INFO  org.apache.spark.deploy.yarn.YarnAllocator  - Resource profile 0 doesn't exist, adding it
52558 [dispatcher-event-loop-1] INFO  org.apache.spark.deploy.yarn.YarnAllocator  - Driver requested a total number of 2 executor(s) for resource profile id: 0.
52563 [spark-dynamic-executor-allocation] INFO  org.apache.spark.ExecutorAllocationManager  - Requesting 1 new executor because tasks are backlogged (new desired total will be 2 for resource profile id: 0)
52565 [Reporter] INFO  org.apache.spark.deploy.yarn.YarnAllocator  - Will request 1 executor container(s) for  ResourceProfile Id: 0, each with 3 core(s) and 15360 MB memory.
52565 [Reporter] INFO  org.apache.spark.deploy.yarn.YarnAllocator  - Submitted 1 unlocalized container requests.
52574 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 88.0 in stage 3.0 (TID 113) (datanode24-fcy.hadoop.test.com, executor 1, partition 113, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52574 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 84.0 in stage 3.0 (TID 109) in 165 ms on datanode24-fcy.hadoop.test.com (executor 1) (86/100)
52589 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 89.0 in stage 3.0 (TID 114) (datanode24-fcy.hadoop.test.com, executor 1, partition 114, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52590 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 87.0 in stage 3.0 (TID 112) in 158 ms on datanode24-fcy.hadoop.test.com (executor 1) (87/100)
52591 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 90.0 in stage 3.0 (TID 115) (datanode24-fcy.hadoop.test.com, executor 1, partition 115, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52591 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 86.0 in stage 3.0 (TID 111) in 171 ms on datanode24-fcy.hadoop.test.com (executor 1) (88/100)
52592 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 91.0 in stage 3.0 (TID 116) (datanode24-fcy.hadoop.test.com, executor 1, partition 116, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52593 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 88.0 in stage 3.0 (TID 113) in 19 ms on datanode24-fcy.hadoop.test.com (executor 1) (89/100)
52608 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 92.0 in stage 3.0 (TID 117) (datanode24-fcy.hadoop.test.com, executor 1, partition 117, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52609 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 91.0 in stage 3.0 (TID 116) in 17 ms on datanode24-fcy.hadoop.test.com (executor 1) (90/100)
52610 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 93.0 in stage 3.0 (TID 118) (datanode24-fcy.hadoop.test.com, executor 1, partition 118, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52610 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 89.0 in stage 3.0 (TID 114) in 21 ms on datanode24-fcy.hadoop.test.com (executor 1) (91/100)
52613 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 94.0 in stage 3.0 (TID 119) (datanode24-fcy.hadoop.test.com, executor 1, partition 119, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52613 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 90.0 in stage 3.0 (TID 115) in 22 ms on datanode24-fcy.hadoop.test.com (executor 1) (92/100)
52626 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 95.0 in stage 3.0 (TID 120) (datanode24-fcy.hadoop.test.com, executor 1, partition 120, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52626 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 93.0 in stage 3.0 (TID 118) in 16 ms on datanode24-fcy.hadoop.test.com (executor 1) (93/100)
52629 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 96.0 in stage 3.0 (TID 121) (datanode24-fcy.hadoop.test.com, executor 1, partition 121, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52630 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 92.0 in stage 3.0 (TID 117) in 22 ms on datanode24-fcy.hadoop.test.com (executor 1) (94/100)
52637 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 97.0 in stage 3.0 (TID 122) (datanode24-fcy.hadoop.test.com, executor 1, partition 122, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52638 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 94.0 in stage 3.0 (TID 119) in 26 ms on datanode24-fcy.hadoop.test.com (executor 1) (95/100)
52641 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 98.0 in stage 3.0 (TID 123) (datanode24-fcy.hadoop.test.com, executor 1, partition 123, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52641 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 95.0 in stage 3.0 (TID 120) in 15 ms on datanode24-fcy.hadoop.test.com (executor 1) (96/100)
52645 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 99.0 in stage 3.0 (TID 124) (datanode24-fcy.hadoop.test.com, executor 1, partition 124, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52645 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 96.0 in stage 3.0 (TID 121) in 16 ms on datanode24-fcy.hadoop.test.com (executor 1) (97/100)
52659 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 99.0 in stage 3.0 (TID 124) in 14 ms on datanode24-fcy.hadoop.test.com (executor 1) (98/100)
52659 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 97.0 in stage 3.0 (TID 122) in 22 ms on datanode24-fcy.hadoop.test.com (executor 1) (99/100)
52661 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 98.0 in stage 3.0 (TID 123) in 21 ms on datanode24-fcy.hadoop.test.com (executor 1) (100/100)
52661 [task-result-getter-0] INFO  org.apache.spark.scheduler.cluster.YarnClusterScheduler  - Removed TaskSet 3.0, whose tasks have all completed, from pool 
52662 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - ResultStage 3 (collect at SparkSqlTask.scala:255) finished in 1.183 s
52662 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Job 3 is finished. Cancelling potential speculative or zombie tasks for this job
52662 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.cluster.YarnClusterScheduler  - Killing all running tasks in stage 3: Stage finished
52662 [SparkTaskThread-0] INFO  org.apache.spark.scheduler.DAGScheduler  - Job 3 finished: collect at SparkSqlTask.scala:255, took 1.195241 s
52665 [dispatcher-event-loop-1] INFO  org.apache.spark.deploy.yarn.YarnAllocator  - Resource profile 0 doesn't exist, adding it
52666 [dispatcher-event-loop-1] INFO  org.apache.spark.deploy.yarn.YarnAllocator  - Driver requested a total number of 1 executor(s) for resource profile id: 0.
52667 [Reporter] INFO  org.apache.spark.deploy.yarn.YarnAllocator  - Canceling requests for 1 executor container(s) to have a new desired total 1 executors.
52683 [SparkTaskThread-0] INFO  org.apache.spark.SparkContext  - Starting job: collect at SparkSqlTask.scala:255
52684 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Got job 4 (collect at SparkSqlTask.scala:255) with 40 output partitions
52684 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Final stage: ResultStage 4 (collect at SparkSqlTask.scala:255)
52684 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Parents of final stage: List()
52684 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Missing parents: List()
52685 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Submitting ResultStage 4 (MapPartitionsRDD[4] at collect at SparkSqlTask.scala:255), which has no missing parents
52691 [dag-scheduler-event-loop] INFO  org.apache.spark.storage.memory.MemoryStore  - Block broadcast_6 stored as values in memory (estimated size 28.8 KiB, free 2.8 GiB)
52696 [dag-scheduler-event-loop] INFO  org.apache.spark.storage.memory.MemoryStore  - Block broadcast_6_piece0 stored as bytes in memory (estimated size 8.6 KiB, free 2.8 GiB)
52697 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Added broadcast_6_piece0 in memory on datanode16-fcy.hadoop.test.com:7529 (size: 8.6 KiB, free: 2.8 GiB)
52697 [dag-scheduler-event-loop] INFO  org.apache.spark.SparkContext  - Created broadcast 6 from broadcast at DAGScheduler.scala:1388
52698 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Submitting 40 missing tasks from ResultStage 4 (MapPartitionsRDD[4] at collect at SparkSqlTask.scala:255) (first 15 tasks are for partitions Vector(125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139))
52698 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.cluster.YarnClusterScheduler  - Adding task set 4.0 with 40 tasks resource profile 0
52702 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 0.0 in stage 4.0 (TID 125) (datanode24-fcy.hadoop.test.com, executor 1, partition 125, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52702 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 1.0 in stage 4.0 (TID 126) (datanode24-fcy.hadoop.test.com, executor 1, partition 126, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52703 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 2.0 in stage 4.0 (TID 127) (datanode24-fcy.hadoop.test.com, executor 1, partition 127, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52717 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Added broadcast_6_piece0 in memory on datanode24-fcy.hadoop.test.com:32003 (size: 8.6 KiB, free: 7.6 GiB)
52734 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 3.0 in stage 4.0 (TID 128) (datanode24-fcy.hadoop.test.com, executor 1, partition 128, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52735 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 2.0 in stage 4.0 (TID 127) in 33 ms on datanode24-fcy.hadoop.test.com (executor 1) (1/40)
52749 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 4.0 in stage 4.0 (TID 129) (datanode24-fcy.hadoop.test.com, executor 1, partition 129, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52749 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 1.0 in stage 4.0 (TID 126) in 47 ms on datanode24-fcy.hadoop.test.com (executor 1) (2/40)
52757 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 5.0 in stage 4.0 (TID 130) (datanode24-fcy.hadoop.test.com, executor 1, partition 130, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52757 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 3.0 in stage 4.0 (TID 128) in 23 ms on datanode24-fcy.hadoop.test.com (executor 1) (3/40)
52769 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 6.0 in stage 4.0 (TID 131) (datanode24-fcy.hadoop.test.com, executor 1, partition 131, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52769 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 4.0 in stage 4.0 (TID 129) in 21 ms on datanode24-fcy.hadoop.test.com (executor 1) (4/40)
52770 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 7.0 in stage 4.0 (TID 132) (datanode24-fcy.hadoop.test.com, executor 1, partition 132, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52770 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 5.0 in stage 4.0 (TID 130) in 13 ms on datanode24-fcy.hadoop.test.com (executor 1) (5/40)
52781 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 8.0 in stage 4.0 (TID 133) (datanode24-fcy.hadoop.test.com, executor 1, partition 133, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52781 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 6.0 in stage 4.0 (TID 131) in 12 ms on datanode24-fcy.hadoop.test.com (executor 1) (6/40)
52786 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 9.0 in stage 4.0 (TID 134) (datanode24-fcy.hadoop.test.com, executor 1, partition 134, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52786 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 7.0 in stage 4.0 (TID 132) in 16 ms on datanode24-fcy.hadoop.test.com (executor 1) (7/40)
52794 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 10.0 in stage 4.0 (TID 135) (datanode24-fcy.hadoop.test.com, executor 1, partition 135, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52795 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 8.0 in stage 4.0 (TID 133) in 14 ms on datanode24-fcy.hadoop.test.com (executor 1) (8/40)
52801 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 11.0 in stage 4.0 (TID 136) (datanode24-fcy.hadoop.test.com, executor 1, partition 136, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52801 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 9.0 in stage 4.0 (TID 134) in 15 ms on datanode24-fcy.hadoop.test.com (executor 1) (9/40)
52807 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 12.0 in stage 4.0 (TID 137) (datanode24-fcy.hadoop.test.com, executor 1, partition 137, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52807 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 10.0 in stage 4.0 (TID 135) in 13 ms on datanode24-fcy.hadoop.test.com (executor 1) (10/40)
52815 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 13.0 in stage 4.0 (TID 138) (datanode24-fcy.hadoop.test.com, executor 1, partition 138, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52815 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 11.0 in stage 4.0 (TID 136) in 14 ms on datanode24-fcy.hadoop.test.com (executor 1) (11/40)
52819 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 14.0 in stage 4.0 (TID 139) (datanode24-fcy.hadoop.test.com, executor 1, partition 139, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52819 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 12.0 in stage 4.0 (TID 137) in 12 ms on datanode24-fcy.hadoop.test.com (executor 1) (12/40)
52830 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 15.0 in stage 4.0 (TID 140) (datanode24-fcy.hadoop.test.com, executor 1, partition 140, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52830 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 13.0 in stage 4.0 (TID 138) in 16 ms on datanode24-fcy.hadoop.test.com (executor 1) (13/40)
52832 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 16.0 in stage 4.0 (TID 141) (datanode24-fcy.hadoop.test.com, executor 1, partition 141, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52832 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 14.0 in stage 4.0 (TID 139) in 13 ms on datanode24-fcy.hadoop.test.com (executor 1) (14/40)
52850 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 17.0 in stage 4.0 (TID 142) (datanode24-fcy.hadoop.test.com, executor 1, partition 142, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52851 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 16.0 in stage 4.0 (TID 141) in 19 ms on datanode24-fcy.hadoop.test.com (executor 1) (15/40)
52868 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 18.0 in stage 4.0 (TID 143) (datanode24-fcy.hadoop.test.com, executor 1, partition 143, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52868 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 17.0 in stage 4.0 (TID 142) in 18 ms on datanode24-fcy.hadoop.test.com (executor 1) (16/40)
52884 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 19.0 in stage 4.0 (TID 144) (datanode24-fcy.hadoop.test.com, executor 1, partition 144, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52884 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 18.0 in stage 4.0 (TID 143) in 16 ms on datanode24-fcy.hadoop.test.com (executor 1) (17/40)
52901 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 20.0 in stage 4.0 (TID 145) (datanode24-fcy.hadoop.test.com, executor 1, partition 145, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52901 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 0.0 in stage 4.0 (TID 125) in 199 ms on datanode24-fcy.hadoop.test.com (executor 1) (18/40)
52913 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 21.0 in stage 4.0 (TID 146) (datanode24-fcy.hadoop.test.com, executor 1, partition 146, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52913 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 19.0 in stage 4.0 (TID 144) in 29 ms on datanode24-fcy.hadoop.test.com (executor 1) (19/40)
52916 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 22.0 in stage 4.0 (TID 147) (datanode24-fcy.hadoop.test.com, executor 1, partition 147, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52916 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 20.0 in stage 4.0 (TID 145) in 16 ms on datanode24-fcy.hadoop.test.com (executor 1) (20/40)
52927 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 23.0 in stage 4.0 (TID 148) (datanode24-fcy.hadoop.test.com, executor 1, partition 148, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52928 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 21.0 in stage 4.0 (TID 146) in 14 ms on datanode24-fcy.hadoop.test.com (executor 1) (21/40)
52929 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 24.0 in stage 4.0 (TID 149) (datanode24-fcy.hadoop.test.com, executor 1, partition 149, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52930 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 22.0 in stage 4.0 (TID 147) in 14 ms on datanode24-fcy.hadoop.test.com (executor 1) (22/40)
52945 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 25.0 in stage 4.0 (TID 150) (datanode24-fcy.hadoop.test.com, executor 1, partition 150, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52945 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 24.0 in stage 4.0 (TID 149) in 16 ms on datanode24-fcy.hadoop.test.com (executor 1) (23/40)
52946 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 26.0 in stage 4.0 (TID 151) (datanode24-fcy.hadoop.test.com, executor 1, partition 151, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52946 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 23.0 in stage 4.0 (TID 148) in 19 ms on datanode24-fcy.hadoop.test.com (executor 1) (24/40)
52959 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 27.0 in stage 4.0 (TID 152) (datanode24-fcy.hadoop.test.com, executor 1, partition 152, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52960 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 26.0 in stage 4.0 (TID 151) in 15 ms on datanode24-fcy.hadoop.test.com (executor 1) (25/40)
52962 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 28.0 in stage 4.0 (TID 153) (datanode24-fcy.hadoop.test.com, executor 1, partition 153, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52962 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 25.0 in stage 4.0 (TID 150) in 17 ms on datanode24-fcy.hadoop.test.com (executor 1) (26/40)
52975 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 29.0 in stage 4.0 (TID 154) (datanode24-fcy.hadoop.test.com, executor 1, partition 154, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52975 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 28.0 in stage 4.0 (TID 153) in 14 ms on datanode24-fcy.hadoop.test.com (executor 1) (27/40)
52975 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 30.0 in stage 4.0 (TID 155) (datanode24-fcy.hadoop.test.com, executor 1, partition 155, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52976 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 27.0 in stage 4.0 (TID 152) in 17 ms on datanode24-fcy.hadoop.test.com (executor 1) (28/40)
52990 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 31.0 in stage 4.0 (TID 156) (datanode24-fcy.hadoop.test.com, executor 1, partition 156, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52991 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 29.0 in stage 4.0 (TID 154) in 16 ms on datanode24-fcy.hadoop.test.com (executor 1) (29/40)
52993 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 32.0 in stage 4.0 (TID 157) (datanode24-fcy.hadoop.test.com, executor 1, partition 157, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
52993 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 30.0 in stage 4.0 (TID 155) in 18 ms on datanode24-fcy.hadoop.test.com (executor 1) (30/40)
53007 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 33.0 in stage 4.0 (TID 158) (datanode24-fcy.hadoop.test.com, executor 1, partition 158, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
53008 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 31.0 in stage 4.0 (TID 156) in 18 ms on datanode24-fcy.hadoop.test.com (executor 1) (31/40)
53009 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 34.0 in stage 4.0 (TID 159) (datanode24-fcy.hadoop.test.com, executor 1, partition 159, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
53009 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 32.0 in stage 4.0 (TID 157) in 16 ms on datanode24-fcy.hadoop.test.com (executor 1) (32/40)
53016 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 35.0 in stage 4.0 (TID 160) (datanode24-fcy.hadoop.test.com, executor 1, partition 160, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
53016 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 15.0 in stage 4.0 (TID 140) in 186 ms on datanode24-fcy.hadoop.test.com (executor 1) (33/40)
53024 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 36.0 in stage 4.0 (TID 161) (datanode24-fcy.hadoop.test.com, executor 1, partition 161, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
53024 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 33.0 in stage 4.0 (TID 158) in 17 ms on datanode24-fcy.hadoop.test.com (executor 1) (34/40)
53029 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 37.0 in stage 4.0 (TID 162) (datanode24-fcy.hadoop.test.com, executor 1, partition 162, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
53029 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 34.0 in stage 4.0 (TID 159) in 20 ms on datanode24-fcy.hadoop.test.com (executor 1) (35/40)
53031 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 38.0 in stage 4.0 (TID 163) (datanode24-fcy.hadoop.test.com, executor 1, partition 163, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
53031 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 35.0 in stage 4.0 (TID 160) in 16 ms on datanode24-fcy.hadoop.test.com (executor 1) (36/40)
53037 [dispatcher-CoarseGrainedScheduler] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 39.0 in stage 4.0 (TID 164) (datanode24-fcy.hadoop.test.com, executor 1, partition 164, PROCESS_LOCAL, 5041 bytes) taskResourceAssignments Map()
53038 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 36.0 in stage 4.0 (TID 161) in 15 ms on datanode24-fcy.hadoop.test.com (executor 1) (37/40)
53045 [task-result-getter-2] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 37.0 in stage 4.0 (TID 162) in 17 ms on datanode24-fcy.hadoop.test.com (executor 1) (38/40)
53046 [task-result-getter-3] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 38.0 in stage 4.0 (TID 163) in 16 ms on datanode24-fcy.hadoop.test.com (executor 1) (39/40)
53052 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 39.0 in stage 4.0 (TID 164) in 15 ms on datanode24-fcy.hadoop.test.com (executor 1) (40/40)
53052 [task-result-getter-0] INFO  org.apache.spark.scheduler.cluster.YarnClusterScheduler  - Removed TaskSet 4.0, whose tasks have all completed, from pool 
53052 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - ResultStage 4 (collect at SparkSqlTask.scala:255) finished in 0.365 s
53052 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - Job 4 is finished. Cancelling potential speculative or zombie tasks for this job
53052 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.cluster.YarnClusterScheduler  - Killing all running tasks in stage 4: Stage finished
53053 [SparkTaskThread-0] INFO  org.apache.spark.scheduler.DAGScheduler  - Job 4 finished: collect at SparkSqlTask.scala:255, took 0.369636 s
53101 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.task.SparkSqlTask  - receiveJobResult url: http://10.10.10.10:8080/innerApi/v1/receiveJobResult
53125 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.support.JobServerContext  - stopQueySparkStageLog
53133 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.task.SparkTask  - Ending job: ukdZjIFZxRPwGofFzWf5mgfBCkZ605Qs
53136 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.task.SparkTask  - Job run completed, updating instance status: 9
53144 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.common.service.JobServerService  - update jobserver: application_1731902288777_13206 status finished
53178 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.task.SparkTask  - remove sparkContext map:Map()
53178 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.task.SparkTask  - remove sparkContext map:Map()
53184 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.support.JobServerContext  - jobserver application_1731902288777_13206 run task finished, update status idle
53185 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.util.LogUtils  - waiting for logQueue and consoleLogQueue empty...
53385 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.util.LogUtils  - logQueue is empty!!!
53386 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.task.SparkSqlTask  - Job: ukdZjIFZxRPwGofFzWf5mgfBCkZ605Qs ended
53386 [SparkTaskThread-0] INFO  com.dataworker.spark.jobserver.driver.support.JobServerContext  - stopQueySparkStageLog
322570 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Removed broadcast_3_piece0 on datanode16-fcy.hadoop.test.com:7529 in memory (size: 8.6 KiB, free: 2.8 GiB)
322599 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Removed broadcast_3_piece0 on datanode24-fcy.hadoop.test.com:32003 in memory (size: 8.6 KiB, free: 7.6 GiB)
322674 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Removed broadcast_1_piece0 on datanode16-fcy.hadoop.test.com:7529 in memory (size: 26.3 KiB, free: 2.8 GiB)
322676 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Removed broadcast_1_piece0 on datanode24-fcy.hadoop.test.com:32003 in memory (size: 26.3 KiB, free: 7.6 GiB)
322682 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Removed broadcast_4_piece0 on datanode16-fcy.hadoop.test.com:7529 in memory (size: 8.6 KiB, free: 2.8 GiB)
322684 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Removed broadcast_4_piece0 on datanode24-fcy.hadoop.test.com:32003 in memory (size: 8.6 KiB, free: 7.6 GiB)
322689 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Removed broadcast_5_piece0 on datanode16-fcy.hadoop.test.com:7529 in memory (size: 8.6 KiB, free: 2.8 GiB)
322690 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Removed broadcast_5_piece0 on datanode24-fcy.hadoop.test.com:32003 in memory (size: 8.6 KiB, free: 7.6 GiB)
322695 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Removed broadcast_0_piece0 on datanode16-fcy.hadoop.test.com:7529 in memory (size: 26.4 KiB, free: 2.8 GiB)
322700 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Removed broadcast_6_piece0 on datanode16-fcy.hadoop.test.com:7529 in memory (size: 8.6 KiB, free: 2.8 GiB)
322701 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Removed broadcast_6_piece0 on datanode24-fcy.hadoop.test.com:32003 in memory (size: 8.6 KiB, free: 7.6 GiB)
322706 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Removed broadcast_2_piece0 on datanode16-fcy.hadoop.test.com:7529 in memory (size: 8.6 KiB, free: 2.8 GiB)
322708 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerInfo  - Removed broadcast_2_piece0 on datanode24-fcy.hadoop.test.com:32003 in memory (size: 8.6 KiB, free: 7.6 GiB)

Physical Plan

== Physical Plan ==
CollectLimit (4)
+- NativeFilter (3)
   +- InputAdapter (2)
      +- NativeOrcScan label.label_test (1)


(1) NativeOrcScan label.label_test
Output [8]: [id#0, label1#1, label2#2, label3#3, label4#4L, back_date#5, dt#6, id_pt#7]
Arguments: FileScan orc label.label_test[id#0,label1#1,label2#2,label3#3,label4#4L,back_date#5,dt#6,id_pt#7] Batched: true, DataFilters: [isnotnull(id#0), (id#0 = 31539560)], Format: ORC, Location: InMemoryFileIndex[hdfs://fcycdh/user/hive/warehouse/label.db/label_test/back_date=2..., PartitionFilters: [isnotnull(back_date#5), back_date#5 RLIKE ], PushedFilters: [IsNotNull(id), EqualTo(id,31539560)], ReadSchema: struct<id:string,label1:int,label2:int,cnt_bank_loan_...

(2) InputAdapter
Input [8]: [id#0, label1#1, label2#2, label3#3, label4#4L, back_date#5, dt#6, id_pt#7]
Arguments: [#0, #1, #2, #3, #4, #5, #6, #7]

(3) NativeFilter
Input [8]: [#0#0, #1#1, #2#2, #3#3, #4#4L, #5#5, #6#6, #7#7]
Arguments: (isnotnull(id#0) AND (id#0 = 31539560))

(4) CollectLimit
Input [8]: [#0#0, #1#1, #2#2, #3#3, #4#4L, #5#5, #6#6, #7#7]
Arguments: 1000

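For reference, the plan above appears to correspond to a query of roughly this shape. This is a sketch reconstructed from the plan's `PushedFilters`, `PartitionFilters`, and `CollectLimit` arguments, not the original SQL; the `RLIKE` pattern is blank in the plan output, so `'<pattern>'` below is a placeholder:

```sql
-- Sketch only: reconstructed from the physical plan, not the query actually run.
SELECT *
FROM label.label_test
WHERE id = '31539560'              -- PushedFilters: IsNotNull(id), EqualTo(id, 31539560)
  AND back_date RLIKE '<pattern>'  -- PartitionFilters: pattern elided in the plan output
LIMIT 1000;                        -- CollectLimit (4)
```

Since the filter on `id` is pushed down into `NativeOrcScan`, the null-vs-0.00 discrepancy on the `decimal(16,2)` column `label3` would surface in how Blaze's native ORC reader decodes decimal values, rather than in the filter or limit operators.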