Create an Iceberg table in Hive:
create table test.iceberg_v1(
a int,
b string,
c string,
d string
)
partitioned by (par_dt string)
STORED BY 'org.apache.iceberg.mr.hive.HiveIcebergStorageHandler'
TBLPROPERTIES (
'format-version'='1'
);
Then add a property:

alter table test.iceberg_v1 SET TBLPROPERTIES ('history.expire.max-snapshot-age-ms'='7200000');
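To confirm the property actually landed in the Hive Metastore, it can be inspected from Hive first (a sketch; the table and property names are the ones used above):

```sql
-- List all table properties currently stored for the table
SHOW TBLPROPERTIES test.iceberg_v1;

-- Or inspect the full table definition, including TBLPROPERTIES
DESCRIBE FORMATTED test.iceberg_v1;
```

If the property shows up here but not in the metadata.json written by a later Flink commit, the problem is in the propagation from HMS to the Iceberg metadata, not in the ALTER TABLE itself.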
1. Write new data files with Flink: the new metadata.json it commits still does not contain the property 'history.expire.max-snapshot-age-ms'.
2. Run Spark's expireSnapshots action: 'history.expire.max-snapshot-age-ms' does not seem to take effect. I found that the action reads table properties from metadata.json. Is that normal? Why not from HMS?
Are you saying that the table properties are not being set correctly in metadata.json when they are set through Flink?
Or are you saying that the Spark Expire Snapshots action is not respecting history.expire.max-snapshot-age-ms?
What I'm confused about is why the new property was not written to the metadata file after it was added in Hive.
As a result, when the Spark Expire Snapshots action runs, the newly added property cannot be used to control which snapshots are expired.
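Until the property propagates into metadata.json, one workaround is to pass the retention explicitly to the expire action instead of relying on the table property. A sketch using Iceberg's Spark SQL procedure (the catalog name spark_catalog and the timestamp are placeholders to adapt):

```sql
-- Expire snapshots older than an explicit timestamp, bypassing
-- the history.expire.max-snapshot-age-ms table property
CALL spark_catalog.system.expire_snapshots(
  table => 'test.iceberg_v1',
  older_than => TIMESTAMP '2023-06-01 00:00:00'
);
```

The equivalent `expireOlderThan(timestampMillis)` setting on the `SparkActions.get(spark).expireSnapshots(table)` Java API likewise overrides whatever the metadata file says.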
Query engine
Hive, Flink, Spark
Iceberg version: 1.2.1
Flink version: 1.14.5
Spark version: 3.3.2
Hive version: 3.1.3