
apache spark sql - Compare two PySpark DataFrames and append the results side by side - Stack Overflow


I have two PySpark DataFrames. I need to compare them column-wise and append the comparison result next to each pair of columns.

DF1:

Claim_number  Claim_Status
1001          Closed
1002          In Progress
1003          open


DF2:

Claim_number  Claim_Status
1001          Closed
1002          open
1004          In Progress

Expected result in PySpark:

DF3:

Claim_number_DF1  Claim_number_DF2  Comparison_of_Claim_number  Claim_Status_DF1  Claim_Status_DF2  Comparison_of_Claim_Status
1001              1001              TRUE                        Closed            Closed            TRUE
1002              1002              TRUE                        In Progress       open              FALSE
1003              1004              FALSE                       open              In Progress       FALSE
asked Nov 20, 2024 at 15:13 by Srinivasan
  • What is your actual question? What have you tried? – Andrew Commented Nov 20, 2024 at 15:57
  • I want to compare two dataframes: if the column values match it should populate True, and if they don't match it should populate False next to the column. – Srinivasan Commented Nov 20, 2024 at 15:59
  • That's not a question, that's asking SO to write your code for you. – Andrew Commented Nov 20, 2024 at 16:06
  • Sorry, I don't understand your question... – Srinivasan Commented Nov 20, 2024 at 16:08
  • 1 Unlike Pandas DataFrames, PySpark DataFrames are not ordered. So the task is not doable unless a criterion is provided for which rows of each DataFrame should be compared. Simply saying "take the third row from df1 and compare it with the third row from df2" does not work, unfortunately. There is no "third row", at least not when using large datasets with multiple partitions. – werner Commented Nov 20, 2024 at 18:24

1 Answer


DataFrames are not ordered; their rows are distributed across partitions, so a positional row-by-row comparison is not a well-defined ask.

However, what you can do instead is the following:

  • Treat DF1 as the master DataFrame and join it with DF2 on Claim_number. For claim numbers that DF2 does not have, you can choose, depending on the join type, to drop those rows (inner join) or keep them with nulls in the DF2 columns (left outer join).

If that is what you are asking, here is the solution:

final_df = df1.join(df2, on="Claim_number", how="inner").distinct()