
scala - createOrReplace() of DataFrameWriterV2 not replacing the data and configuration of Iceberg table - Stack Overflow


I initially create an Iceberg table with two columns, of type int and string. Later I try to overwrite the table with an extra column of type date added in the middle. The operation errors out. I have attached the code and error log below:

import org.apache.spark.sql.functions._

// creating the table for the first time
spark.range(10)
  .withColumn("tmp", lit("hi"))
  .writeTo("test.sample")
  .using("iceberg")
  .tableProperty("write.spark.accept-any-schema", "true")
  .createOrReplace()

// replacing the table, with an extra date column in the middle
spark.range(10)
  .withColumn("cur_date", current_date)
  .withColumn("tmp", lit("hi"))
  .writeTo("test.sample")
  .using("iceberg")
  .tableProperty("write.spark.accept-any-schema", "true")
  .option("mergeSchema", "true")
  .createOrReplace()

Caused by: org.apache.hadoop.hive.metastore.api.InvalidOperationException: The following columns have types incompatible with the existing columns in their respective positions : cur_date

Version Details

spark : version 3.4.1.3.3.6.4-7

scala version: 2.12.17

java: 17.0.14

iceberg-spark-runtime: iceberg-spark-3.4_2.12-1.4.3.3.3.6.4-7.jar

As per the documentation, createOrReplace() should replace the existing table's configuration and data with those of the DataFrame. Why is the second call failing with a schema-incompatibility error instead of replacing the table?
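For reference, one way to rule out interference from the old table's schema in the metastore is to drop the table explicitly before rewriting it. This is a minimal sketch, not an answer from the source: it assumes a Spark session with the `test` Iceberg catalog/namespace configured, and that losing the old table's data and history is acceptable.

```scala
import org.apache.spark.sql.functions._

// Hypothetical workaround sketch: remove the existing table so the Hive
// metastore's positional column-compatibility check has nothing to compare
// against, then create the table fresh with the new schema.
spark.sql("DROP TABLE IF EXISTS test.sample")

spark.range(10)
  .withColumn("cur_date", current_date)
  .withColumn("tmp", lit("hi"))
  .writeTo("test.sample")
  .using("iceberg")
  .tableProperty("write.spark.accept-any-schema", "true")
  .createOrReplace()
```

Unlike createOrReplace() on an existing table, this forfeits the atomic-replace semantics (readers may briefly see no table between the DROP and the create), so it only demonstrates the schema issue rather than replacing the documented behavior.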
