Iceberg Series (1): A Deep Dive into Storage

Iceberg is one of the most popular data lake table formats, and this series takes a closer look at how it works. We start with Iceberg's underlying storage layout.

1. Launch a local Spark SQL shell

./bin/spark-sql \
  --packages org.apache.iceberg:iceberg-spark3-runtime:0.12.1 \
  --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
  --conf spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkSessionCatalog \
  --conf spark.sql.catalog.spark_catalog.type=hive \
  --conf spark.sql.catalog.local=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.local.type=hadoop \
  --conf spark.sql.catalog.local.warehouse=$PWD/warehouse

Next, create tables in both the v1 and v2 formats.
First, create a table named table with format-version 1:

CREATE TABLE local.db.table (id bigint, data string) USING iceberg;

Open the table directory; its structure looks like this:

(base) ➜ table ll -R
total 0
drwxr-xr-x  6 liliwei  staff   192B Jan  2 21:22 metadata

./metadata:
total 16
-rw-r--r--@ 1 liliwei  staff   1.2K Jan  2 21:22 v1.metadata.json
-rw-r--r--@ 1 liliwei  staff     1B Jan  2 21:22 version-hint.text
(base) ➜ table

The contents of v1.metadata.json are as follows:

{
  "format-version" : 1,
  "table-uuid" : "0dc08d49-ed4d-49bb-8ddf-006e37c65372",
  "location" : "/Users/liliwei/plat/spark-3.1.2-bin-hadoop3.2/warehouse/db/table",
  "last-updated-ms" : 1641129739691,
  "last-column-id" : 2,
  "schema" : {
    "type" : "struct",
    "schema-id" : 0,
    "fields" : [ {
      "id" : 1,
      "name" : "id",
      "required" : false,
      "type" : "long"
    }, {
      "id" : 2,
      "name" : "data",
      "required" : false,
      "type" : "string"
    } ]
  },
  "current-schema-id" : 0,
  "schemas" : [ {
    "type" : "struct",
    "schema-id" : 0,
    "fields" : [ {
      "id" : 1,
      "name" : "id",
      "required" : false,
      "type" : "long"
    }, {
      "id" : 2,
      "name" : "data",
      "required" : false,
      "type" : "string"
    } ]
  } ],
  "partition-spec" : [ ],
  "default-spec-id" : 0,
  "partition-specs" : [ {
    "spec-id" : 0,
    "fields" : [ ]
  } ],
  "last-partition-id" : 999,
  "default-sort-order-id" : 0,
  "sort-orders" : [ {
    "order-id" : 0,
    "fields" : [ ]
  } ],
  "properties" : {
    "owner" : "liliwei"
  },
  "current-snapshot-id" : -1,
  "snapshots" : [ ],
  "snapshot-log" : [ ],
  "metadata-log" : [ ]
}

The contents of version-hint.text are as follows:

1
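With a Hadoop catalog, version-hint.text is how readers find the current table state: it holds a single integer N, and the live metadata file is metadata/vN.metadata.json under the table location. A minimal sketch of that resolution logic (the helper name is ours for illustration, not an Iceberg API):

```python
import os

def current_metadata_path(table_location: str, hint: str) -> str:
    """Resolve the current metadata file the way a Hadoop catalog does:
    version-hint.text stores an integer N, and the current metadata file
    is metadata/vN.metadata.json under the table location."""
    version = int(hint.strip())
    return os.path.join(table_location, "metadata", f"v{version}.metadata.json")

# version-hint.text above contains "1", so the table resolves to v1.metadata.json
print(current_metadata_path("/warehouse/db/table", "1"))
# /warehouse/db/table/metadata/v1.metadata.json
```

Each commit writes a new vN.metadata.json and then bumps the hint, which is what we will see below after inserting data.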

Now create a table named tableV2 with format-version 2:

CREATE TABLE local.db.tableV2 (id bigint, data string) 
USING iceberg
TBLPROPERTIES ('format-version'='2'); 

The directory structure of tableV2 is as follows:

(base) ➜ tableV2 cd metadata
(base) ➜ metadata ll
total 16
-rw-r--r--  1 liliwei  staff   936B Jan  2 21:38 v1.metadata.json
-rw-r--r--  1 liliwei  staff     1B Jan  2 21:38 version-hint.text
(base) ➜ metadata

The contents of its v1.metadata.json are as follows (note the format-version of 2 and the new last-sequence-number field, and that the v1 metadata no longer carries the legacy single schema and partition-spec fields):

{
  "format-version" : 2,
  "table-uuid" : "67b54789-070c-4600-b2ff-3b9a0a774e4a",
  "location" : "/Users/liliwei/plat/spark-3.1.2-bin-hadoop3.2/warehouse/db/tableV2",
  "last-sequence-number" : 0,
  "last-updated-ms" : 1641130714999,
  "last-column-id" : 2,
  "current-schema-id" : 0,
  "schemas" : [ {
    "type" : "struct",
    "schema-id" : 0,
    "fields" : [ {
      "id" : 1,
      "name" : "id",
      "required" : false,
      "type" : "long"
    }, {
      "id" : 2,
      "name" : "data",
      "required" : false,
      "type" : "string"
    } ]
  } ],
  "default-spec-id" : 0,
  "partition-specs" : [ {
    "spec-id" : 0,
    "fields" : [ ]
  } ],
  "last-partition-id" : 999,
  "default-sort-order-id" : 0,
  "sort-orders" : [ {
    "order-id" : 0,
    "fields" : [ ]
  } ],
  "properties" : {
    "owner" : "liliwei"
  },
  "current-snapshot-id" : -1,
  "snapshots" : [ ],
  "snapshot-log" : [ ],
  "metadata-log" : [ ]
}

The contents of version-hint.text are as follows:

1

Now, let's insert a row into the table:

INSERT INTO local.db.table VALUES (1, 'a');

Check the directory structure again:

(base) ➜ table tree -A -C -D
.
├── [Jan  2 21:45]  data
│   └── [Jan  2 21:45]  00000-0-ea35130e-b5ed-4443-889f-2ee5e62e6757-00001.parquet
└── [Jan  2 21:45]  metadata
    ├── [Jan  2 21:45]  1bd1f809-55ea-4ba1-b425-ab4ecc212434-m0.avro
    ├── [Jan  2 21:45]  snap-5028042644139258397-1-1bd1f809-55ea-4ba1-b425-ab4ecc212434.avro
    ├── [Jan  2 21:22]  v1.metadata.json
    ├── [Jan  2 21:45]  v2.metadata.json
    └── [Jan  2 21:45]  version-hint.text

2 directories, 6 files

The contents of v2.metadata.json:

{
  "format-version" : 1,
  "table-uuid" : "0dc08d49-ed4d-49bb-8ddf-006e37c65372",
  "location" : "/Users/liliwei/plat/spark-3.1.2-bin-hadoop3.2/warehouse/db/table",
  "last-updated-ms" : 1641131156558,
  "last-column-id" : 2,
  "schema" : {
    "type" : "struct",
    "schema-id" : 0,
    "fields" : [ {
      "id" : 1,
      "name" : "id",
      "required" : false,
      "type" : "long"
    }, {
      "id" : 2,
      "name" : "data",
      "required" : false,
      "type" : "string"
    } ]
  },
  "current-schema-id" : 0,
  "schemas" : [ {
    "type" : "struct",
    "schema-id" : 0,
    "fields" : [ {
      "id" : 1,
      "name" : "id",
      "required" : false,
      "type" : "long"
    }, {
      "id" : 2,
      "name" : "data",
      "required" : false,
      "type" : "string"
    } ]
  } ],
  "partition-spec" : [ ],
  "default-spec-id" : 0,
  "partition-specs" : [ {
    "spec-id" : 0,
    "fields" : [ ]
  } ],
  "last-partition-id" : 999,
  "default-sort-order-id" : 0,
  "sort-orders" : [ {
    "order-id" : 0,
    "fields" : [ ]
  } ],
  "properties" : {
    "owner" : "liliwei"
  },
  "current-snapshot-id" : 5028042644139258397,
  "snapshots" : [ {
    "snapshot-id" : 5028042644139258397,
    "timestamp-ms" : 1641131156558,
    "summary" : {
      "operation" : "append",
      "spark.app.id" : "local-1641129606166",
      "added-data-files" : "1",
      "added-records" : "1",
      "added-files-size" : "643",
      "changed-partition-count" : "1",
      "total-records" : "1",
      "total-files-size" : "643",
      "total-data-files" : "1",
      "total-delete-files" : "0",
      "total-position-deletes" : "0",
      "total-equality-deletes" : "0"
    },
    "manifest-list" : "/Users/liliwei/plat/spark-3.1.2-bin-hadoop3.2/warehouse/db/table/metadata/snap-5028042644139258397-1-1bd1f809-55ea-4ba1-b425-ab4ecc212434.avro",
    "schema-id" : 0
  } ],
  "snapshot-log" : [ {
    "timestamp-ms" : 1641131156558,
    "snapshot-id" : 5028042644139258397
  } ],
  "metadata-log" : [ {
    "timestamp-ms" : 1641129739691,
    "metadata-file" : "/Users/liliwei/plat/spark-3.1.2-bin-hadoop3.2/warehouse/db/table/metadata/v1.metadata.json"
  } ]
}

The contents of version-hint.text:

2

The contents of snap-5028042644139258397-1-1bd1f809-55ea-4ba1-b425-ab4ecc212434.avro, the manifest list for the new snapshot:
To dump the Avro file as JSON, we use avro-tools-1.10.2.jar (https://repo1.maven.org/maven2/org/apache/avro/avro-tools/1.10.2/).

java -jar ~/plat/tools/avro-tools-1.10.2.jar tojson metadata/snap-5028042644139258397-1-1bd1f809-55ea-4ba1-b425-ab4ecc212434.avro
{
    "manifest_path": "/Users/liliwei/plat/spark-3.1.2-bin-hadoop3.2/warehouse/db/table/metadata/1bd1f809-55ea-4ba1-b425-ab4ecc212434-m0.avro",
    "manifest_length": 5803,
    "partition_spec_id": 0,
    "added_snapshot_id": {
        "long": 5028042644139258397
    },
    "added_data_files_count": {
        "int": 1
    },
    "existing_data_files_count": {
        "int": 0
    },
    "deleted_data_files_count": {
        "int": 0
    },
    "partitions": {
        "array": []
    },
    "added_rows_count": {
        "long": 1
    },
    "existing_rows_count": {
        "long": 0
    },
    "deleted_rows_count": {
        "long": 0
    }
}
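One quirk of the dump above: avro-tools prints nullable (union-typed) fields as single-key wrapper objects such as {"long": 5028042644139258397} or {"int": 1}. A small helper to flatten these wrappers back to plain values when post-processing the JSON (illustrative code, not part of avro-tools):

```python
def unwrap(value):
    """Flatten avro-tools' JSON union encoding: {"long": 1} -> 1, {"int": 0} -> 0,
    {"array": [...]} -> [...]; plain values and null pass through unchanged."""
    union_branches = ("long", "int", "string", "bytes", "array", "boolean")
    if isinstance(value, dict) and len(value) == 1 and next(iter(value)) in union_branches:
        return unwrap(next(iter(value.values())))
    return value

# fields taken from the manifest-list entry above
entry = {"manifest_length": 5803, "added_snapshot_id": {"long": 5028042644139258397}}
print({k: unwrap(v) for k, v in entry.items()})
# {'manifest_length': 5803, 'added_snapshot_id': 5028042644139258397}
```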

Next, inspect the manifest file it points to:

(base) ➜ table java -jar ~/plat/tools/avro-tools-1.10.2.jar tojson metadata/1bd1f809-55ea-4ba1-b425-ab4ecc212434-m0.avro
{
    "status": 1,
    "snapshot_id": {
        "long": 5028042644139258397
    },
    "data_file": {
        "file_path": "/Users/liliwei/plat/spark-3.1.2-bin-hadoop3.2/warehouse/db/table/data/00000-0-ea35130e-b5ed-4443-889f-2ee5e62e6757-00001.parquet",
        "file_format": "PARQUET",
        "partition": {},
        "record_count": 1,
        "file_size_in_bytes": 643,
        "block_size_in_bytes": 67108864,
        "column_sizes": {
            "array": [{
                "key": 1,
                "value": 46
            }, {
                "key": 2,
                "value": 48
            }]
        },
        "value_counts": {
            "array": [{
                "key": 1,
                "value": 1
            }, {
                "key": 2,
                "value": 1
            }]
        },
        "null_value_counts": {
            "array": [{
                "key": 1,
                "value": 0
            }, {
                "key": 2,
                "value": 0
            }]
        },
        "nan_value_counts": {
            "array": []
        },
        "lower_bounds": {
            "array": [{
                "key": 1,
                "value": "\u0001\u0000\u0000\u0000\u0000\u0000\u0000\u0000"
            }, {
                "key": 2,
                "value": "a"
            }]
        },
        "upper_bounds": {
            "array": [{
                "key": 1,
                "value": "\u0001\u0000\u0000\u0000\u0000\u0000\u0000\u0000"
            }, {
                "key": 2,
                "value": "a"
            }]
        },
        "key_metadata": null,
        "split_offsets": {
            "array": [4]
        },
        "sort_order_id": {
            "int": 0
        }
    }
}
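The lower_bounds and upper_bounds values are column min/max statistics stored with Iceberg's single-value binary serialization: a long is 8 bytes little-endian, a string is its raw UTF-8 bytes. So the escaped "\u0001\u0000…" bound for column 1 decodes to the long 1, and "a" for column 2 is already the string itself. A sketch of decoding the long bound (helper name is ours):

```python
import struct

def decode_long_bound(raw: bytes) -> int:
    """Iceberg serializes a long bound as 8 bytes, little-endian."""
    return struct.unpack("<q", raw)[0]

# the "\u0001\u0000\u0000\u0000\u0000\u0000\u0000\u0000" bound from the manifest above
raw = b"\x01\x00\x00\x00\x00\x00\x00\x00"
print(decode_long_bound(raw))  # 1

# string bounds are plain UTF-8 bytes
print(b"a".decode("utf-8"))    # a
```

These bounds are what lets a reader skip a data file entirely when a query predicate falls outside the file's min/max range.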

Clearly, the manifest records the exact location of the data file along with per-column statistics. Let's insert another row and see what changes:

INSERT INTO local.db.table VALUES (2, 'b');

Check the directory structure:

(base) ➜ table tree -C -D
.
├── [Jan  2 22:07]  data
│   ├── [Jan  2 21:45]  00000-0-ea35130e-b5ed-4443-889f-2ee5e62e6757-00001.parquet
│   └── [Jan  2 22:07]  00000-1-631cd5bc-2ad0-4ddd-9530-f055b2888d56-00001.parquet
└── [Jan  2 22:07]  metadata
    ├── [Jan  2 21:45]  1bd1f809-55ea-4ba1-b425-ab4ecc212434-m0.avro
    ├── [Jan  2 22:07]  6881af48-5efa-4660-99ed-be5b9f640e52-m0.avro
    ├── [Jan  2 22:07]  snap-1270004071302473053-1-6881af48-5efa-4660-99ed-be5b9f640e52.avro
    ├── [Jan  2 21:45]  snap-5028042644139258397-1-1bd1f809-55ea-4ba1-b425-ab4ecc212434.avro
    ├── [Jan  2 21:22]  v1.metadata.json
    ├── [Jan  2 21:45]  v2.metadata.json
    ├── [Jan  2 22:07]  v3.metadata.json
    └── [Jan  2 22:07]  version-hint.text

2 directories, 10 files

The contents of v3.metadata.json are as follows:

{
  "format-version" : 1,
  "table-uuid" : "0dc08d49-ed4d-49bb-8ddf-006e37c65372",
  "location" : "/Users/liliwei/plat/spark-3.1.2-bin-hadoop3.2/warehouse/db/table",
  "last-updated-ms" : 1641132476394,
  "last-column-id" : 2,
  "schema" : {
    "type" : "struct",
    "schema-id" : 0,
    "fields" : [ {
      "id" : 1,
      "name" : "id",
      "required" : false,
      "type" : "long"
    }, {
      "id" : 2,
      "name" : "data",
      "required" : false,
      "type" : "string"
    } ]
  },
  "current-schema-id" : 0,
  "schemas" : [ {
    "type" : "struct",
    "schema-id" : 0,
    "fields" : [ {
      "id" : 1,
      "name" : "id",
      "required" : false,
      "type" : "long"
    }, {
      "id" : 2,
      "name" : "data",
      "required" : false,
      "type" : "string"
    } ]
  } ],
  "partition-spec" : [ ],
  "default-spec-id" : 0,
  "partition-specs" : [ {
    "spec-id" : 0,
    "fields" : [ ]
  } ],
  "last-partition-id" : 999,
  "default-sort-order-id" : 0,
  "sort-orders" : [ {
    "order-id" : 0,
    "fields" : [ ]
  } ],
  "properties" : {
    "owner" : "liliwei"
  },
  "current-snapshot-id" : 1270004071302473053,
  "snapshots" : [ {
    "snapshot-id" : 5028042644139258397,
    "timestamp-ms" : 1641131156558,
    "summary" : {
      "operation" : "append",
      "spark.app.id" : "local-1641129606166",
      "added-data-files" : "1",
      "added-records" : "1",
      "added-files-size" : "643",
      "changed-partition-count" : "1",
      "total-records" : "1",
      "total-files-size" : "643",
      "total-data-files" : "1",
      "total-delete-files" : "0",
      "total-position-deletes" : "0",
      "total-equality-deletes" : "0"
    },
    "manifest-list" : "/Users/liliwei/plat/spark-3.1.2-bin-hadoop3.2/warehouse/db/table/metadata/snap-5028042644139258397-1-1bd1f809-55ea-4ba1-b425-ab4ecc212434.avro",
    "schema-id" : 0
  }, {
    "snapshot-id" : 1270004071302473053,
    "parent-snapshot-id" : 5028042644139258397,
    "timestamp-ms" : 1641132476394,
    "summary" : {
      "operation" : "append",
      "spark.app.id" : "local-1641129606166",
      "added-data-files" : "1",
      "added-records" : "1",
      "added-files-size" : "643",
      "changed-partition-count" : "1",
      "total-records" : "2",
      "total-files-size" : "1286",
      "total-data-files" : "2",
      "total-delete-files" : "0",
      "total-position-deletes" : "0",
      "total-equality-deletes" : "0"
    },
    "manifest-list" : "/Users/liliwei/plat/spark-3.1.2-bin-hadoop3.2/warehouse/db/table/metadata/snap-1270004071302473053-1-6881af48-5efa-4660-99ed-be5b9f640e52.avro",
    "schema-id" : 0
  } ],
  "snapshot-log" : [ {
    "timestamp-ms" : 1641131156558,
    "snapshot-id" : 5028042644139258397
  }, {
    "timestamp-ms" : 1641132476394,
    "snapshot-id" : 1270004071302473053
  } ],
  "metadata-log" : [ {
    "timestamp-ms" : 1641129739691,
    "metadata-file" : "/Users/liliwei/plat/spark-3.1.2-bin-hadoop3.2/warehouse/db/table/metadata/v1.metadata.json"
  }, {
    "timestamp-ms" : 1641131156558,
    "metadata-file" : "/Users/liliwei/plat/spark-3.1.2-bin-hadoop3.2/warehouse/db/table/metadata/v2.metadata.json"
  } ]
}
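Note that v3.metadata.json now holds two snapshots, with the second recording the first as its parent-snapshot-id, and both listed in snapshot-log in commit order. This log is what timestamp-based time travel resolves against: the snapshot current at time T is the latest log entry whose timestamp is <= T. A sketch of that lookup (the function is illustrative, not Iceberg's API):

```python
def snapshot_as_of(snapshot_log, ts_ms):
    """Return the snapshot-id that was current at ts_ms: the latest entry in
    the (oldest-to-newest ordered) snapshot-log with timestamp-ms <= ts_ms."""
    current = None
    for entry in snapshot_log:
        if entry["timestamp-ms"] <= ts_ms:
            current = entry["snapshot-id"]
    return current

# the snapshot-log from v3.metadata.json above
log = [
    {"timestamp-ms": 1641131156558, "snapshot-id": 5028042644139258397},
    {"timestamp-ms": 1641132476394, "snapshot-id": 1270004071302473053},
]
print(snapshot_as_of(log, 1641132000000))  # 5028042644139258397 (only the first insert)
print(snapshot_as_of(log, 1641132476394))  # 1270004071302473053 (current snapshot)
```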

The contents of snap-1270004071302473053-1-6881af48-5efa-4660-99ed-be5b9f640e52.avro, the manifest list for the second snapshot:

{
    "manifest_path": "/Users/liliwei/plat/spark-3.1.2-bin-hadoop3.2/warehouse/db/table/metadata/6881af48-5efa-4660-99ed-be5b9f640e52-m0.avro",
    "manifest_length": 5802,
    "partition_spec_id": 0,
    "added_snapshot_id": {
        "long": 1270004071302473053
    },
    "added_data_files_count": {
        "int": 1
    },
    "existing_data_files_count": {
        "int": 0
    },
    "deleted_data_files_count": {
        "int": 0
    },
    "partitions": {
        "array": []
    },
    "added_rows_count": {
        "long": 1
    },
    "existing_rows_count": {
        "long": 0
    },
    "deleted_rows_count": {
        "long": 0
    }
}

{
    "manifest_path": "/Users/liliwei/plat/spark-3.1.2-bin-hadoop3.2/warehouse/db/table/metadata/1bd1f809-55ea-4ba1-b425-ab4ecc212434-m0.avro",
    "manifest_length": 5803,
    "partition_spec_id": 0,
    "added_snapshot_id": {
        "long": 5028042644139258397
    },
    "added_data_files_count": {
        "int": 1
    },
    "existing_data_files_count": {
        "int": 0
    },
    "deleted_data_files_count": {
        "int": 0
    },
    "partitions": {
        "array": []
    },
    "added_rows_count": {
        "long": 1
    },
    "existing_rows_count": {
        "long": 0
    },
    "deleted_rows_count": {
        "long": 0
    }
}

Note that the output above contains two records, each a separate JSON object: the new snapshot's manifest list references both the new manifest and the manifest from the first insert.
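Summing the per-manifest counts across these two entries reproduces the snapshot-level totals we saw in the v3.metadata.json summary (total-data-files = 2, total-records = 2). A sketch of that aggregation (the function name is ours):

```python
def snapshot_totals(manifest_entries):
    """Derive snapshot-level totals from manifest-list entries, matching the
    total-data-files / total-records figures in the metadata summary."""
    return {
        "total-data-files": sum(e["added_data_files_count"] for e in manifest_entries),
        "total-records": sum(e["added_rows_count"] for e in manifest_entries),
    }

# the two manifest-list entries above, with avro-tools' union wrappers flattened
entries = [
    {"added_data_files_count": 1, "added_rows_count": 1},  # 6881af48... (second insert)
    {"added_data_files_count": 1, "added_rows_count": 1},  # 1bd1f809... (first insert)
]
print(snapshot_totals(entries))
# {'total-data-files': 2, 'total-records': 2}
```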
The contents of the new manifest, 6881af48-5efa-4660-99ed-be5b9f640e52-m0.avro:

{
    "status": 1,
    "snapshot_id": {
        "long": 1270004071302473053
    },
    "data_file": {
        "file_path": "/Users/liliwei/plat/spark-3.1.2-bin-hadoop3.2/warehouse/db/table/data/00000-1-631cd5bc-2ad0-4ddd-9530-f055b2888d56-00001.parquet",
        "file_format": "PARQUET",
        "partition": {},
        "record_count": 1,
        "file_size_in_bytes": 643,
        "block_size_in_bytes": 67108864,
        "column_sizes": {
            "array": [{
                "key": 1,
                "value": 46
            }, {
                "key": 2,
                "value": 48
            }]
        },
        "value_counts": {
            "array": [{
                "key": 1,
                "value": 1
            }, {
                "key": 2,
                "value": 1
            }]
        },
        "null_value_counts": {
            "array": [{
                "key": 1,
                "value": 0
            }, {
                "key": 2,
                "value": 0
            }]
        },
        "nan_value_counts": {
            "array": []
        },
        "lower_bounds": {
            "array": [{
                "key": 1,
                "value": "\u0002\u0000\u0000\u0000\u0000\u0000\u0000\u0000"
            }, {
                "key": 2,
                "value": "b"
            }]
        },
        "upper_bounds": {
            "array": [{
                "key": 1,
                "value": "\u0002\u0000\u0000\u0000\u0000\u0000\u0000\u0000"
            }, {
                "key": 2,
                "value": "b"
            }]
        },
        "key_metadata": null,
        "split_offsets": {
            "array": [4]
        },
        "sort_order_id": {
            "int": 0
        }
    }
}