
sql - CREATE OR REPLACE TABLE in Databricks returns DELTA_CREATE_TABLE_WITH_NON_EMPTY_LOCATION error - Stack Overflow


When attempting to create a table in Databricks:

CREATE OR REPLACE TABLE foo.bar (
    comment VARCHAR(255),
    row_count INT,
    date TIMESTAMP
)

it returns the following error:

[DELTA_CREATE_TABLE_WITH_NON_EMPTY_LOCATION] Cannot create table ('spark_catalog.foo.bar'). The associated location ('dbfs:/user/hive/warehouse/foo/bar') is not empty and also not a Delta table. SQLSTATE: 42601

I expected it to create the table, since CREATE OR REPLACE should do that based on the advice I've read.


Asked Feb 3 at 10:24 by Joe Zalewski; edited Feb 3 at 11:27.
  • Brutal scoring on this one - feel free to tell me what is incorrect / poorly formatted instead of downvoting please! – Joe Zalewski Commented Feb 4 at 10:13
  • This is expected if the mentioned location previously contained a non-Delta external table. Most likely, the data is in a different format (e.g., Parquet, ORC, or CSV). You can drop the existing data at the location and then recreate the table as a Delta table. – BruceWayne Commented Feb 13 at 17:24
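As the comment suggests, it is worth first checking what is actually sitting at the table's location before deleting anything. A minimal sketch, assuming a Databricks notebook where `dbutils` is available; the helper name `looks_like_delta` is my own, not a Databricks API:

```python
# Sketch: decide whether a DBFS location already holds a Delta table.
# A Delta table directory contains a _delta_log/ subdirectory; anything
# else (loose Parquet/ORC/CSV files) is what triggers the
# DELTA_CREATE_TABLE_WITH_NON_EMPTY_LOCATION error on CREATE OR REPLACE.

def looks_like_delta(entry_names):
    """entry_names: names of the files/dirs listed at the table location."""
    return any(name.rstrip("/") == "_delta_log" for name in entry_names)

# In a notebook you would list the location first, e.g.:
# entries = [f.name for f in dbutils.fs.ls("dbfs:/user/hive/warehouse/foo/bar")]
# looks_like_delta(entries) -> False means non-Delta leftovers are in the way.
```

If `_delta_log` is absent but files are present, the location holds non-Delta data and `CREATE OR REPLACE TABLE` will refuse to overwrite it.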

1 Answer


I had a similar problem in the past, and it turned out that the table had been deleted from my schema but some metadata was left behind in DBFS. The new table tries to use that location in DBFS, but finds it is already occupied.

I recommend having a look at your DBFS, and if this is the case, use the dbutils.fs.rm(...) command to empty the location. After that, try again and hopefully it will work.
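A minimal sketch of that cleanup, assuming the default Hive warehouse layout shown in the error message; `warehouse_path` is a hypothetical helper of mine, and the `dbutils.fs.rm` call itself must be run inside a Databricks notebook:

```python
# Sketch: build the default Hive-warehouse path for a table so it can be
# cleared. The format matches the path in the error message
# ('dbfs:/user/hive/warehouse/foo/bar'); adjust it if your schema was
# created with a custom LOCATION.

def warehouse_path(schema: str, table: str) -> str:
    return f"dbfs:/user/hive/warehouse/{schema}/{table}"

path = warehouse_path("foo", "bar")

# In a Databricks notebook, remove the leftover files recursively:
# dbutils.fs.rm(path, True)
# ...then re-run the CREATE OR REPLACE TABLE statement.
```

Note that `dbutils.fs.rm` with the second argument set to `True` deletes the directory recursively, so double-check the path before running it.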
