apache spark - two jobs are getting created by only creating a dataframe in databricks notebook - Stack Overflow


I am pretty new to Databricks and PySpark. I am creating a dataframe by reading a CSV file, but I am not calling any action. Still, I see two jobs running. Can someone explain why?

from pyspark.sql import SparkSession
from pyspark.sql.functions import *

spark = SparkSession.builder.appName("uber_data_analysis").getOrCreate()
df = spark.read.csv("/FileStore/tables/uber_data.csv", header=True, inferSchema=True)

asked Feb 10 at 18:21 by Kshitish Das, edited Feb 11 at 19:54 by Ged
  • Did you check the Spark UI to see what these jobs are? Also, depending on your version, the number of tasks may vary. – Steven Commented Feb 11 at 8:51
  • I'm not sure about the 2 jobs, but one of them is definitely scanning the file to infer the schema. – Steven Commented Feb 11 at 8:52
  • Can you restart the cluster and try one more time? It's weird – mjeday Commented Feb 11 at 14:30
  • I got this question re-opened. Revised the answer slightly now that I'm back home. – Ged Commented Feb 18 at 15:09

2 Answers


This question is about the Databricks environment, which I also use. It could well be that this optimization does not happen for HDP on-prem or Cloudera, or that it is a configuration option in those environments; I have not set up (Hive) metastores etc. for plain-vanilla Spark recently, so I cannot say for certain there.

With both parameters set to False (header=False, inferSchema=False) we get one job. That job handles path checking, partition discovery and so on, and an error is raised if the file cannot be found.

As soon as you request inferSchema=True, there will be an extra job that scans the data to infer the schema, run up-front of any action.

So:

  1. Reading the header to get column names and path checking etc.
  2. Reading the file for schema inference.

There will always be at least one job, which verifies some basic things like the existence of the source (file, folder, table, ...).
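
For comparison, here is a minimal sketch that supplies an explicit schema, using the same CSV path as in the question; the column names and types are hypothetical and would need to match the real file. With the schema given up-front, Spark does not need the extra scan to infer column types:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("uber_data_analysis").getOrCreate()

# Hypothetical schema -- adjust the column names and types to match the actual CSV.
schema = StructType([
    StructField("trip_id", StringType(), True),
    StructField("pickup_datetime", TimestampType(), True),
    StructField("fare_amount", DoubleType(), True),
])

# No schema-inference job is triggered here; only the basic source/path
# verification work described above remains.
df = spark.read.csv("/FileStore/tables/uber_data.csv", header=True, schema=schema)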

If you instead read from a catalog, e.g. spark.read.table('hive_metastore.default.table1'), Spark won't need to "read the data" to figure out the schema, as the schema is part of the catalog, so you get only the one basic job.
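
As a short illustration (assuming a table with that name actually exists in the metastore), the schema can be displayed without any scan of the underlying data:

# The schema is resolved from the catalog metadata, not by reading the data.
df = spark.read.table("hive_metastore.default.table1")
df.printSchema()  # displaying the schema does not trigger a data-scanning job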

If you're looking under the hood for a specific reason, then update the OP; otherwise, in general this is not something you should worry about unless it's actually affecting your job.
