Elasticsearch Internals: The Index Disk Usage API

Elasticsearch 7.15 introduced a very useful API, the index disk usage API. See the official documentation for details: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-disk-usage.html

This API breaks down the storage used by each field of a given index, giving you a quantitative picture of where Elasticsearch's disk space actually goes.

Many users misread Elasticsearch's behavior when it uses more storage, or ingests more slowly, than they expect. In fact this is largely a deliberate trade-off Elasticsearch makes in favor of rich functionality.

By default, Elasticsearch builds fairly complete index structures for every field. Building them requires extra processing at write time and consumes extra storage, but in exchange queries gain more capabilities and better performance. This reflects Elasticsearch's "schema on write" philosophy: do the work up front to make queries simple and fast.

This can, however, turn into excessive schema on write, hurting write performance and inflating storage. In that case, optimizing the index mapping by disabling indexing on fields that do not need it improves write throughput and reduces storage overhead.
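As an illustration (the field name here is hypothetical), a mapping like the following disables both the inverted index and doc_values for a keyword field that is only ever returned in results, never searched or aggregated on:

```
PUT /my-index-000001
{
  "mappings": {
    "properties": {
      "session_id": {
        "type": "keyword",
        "index": false,
        "doc_values": false
      }
    }
  }
}
```

With both disabled, the field costs only its share of `_source` and stored fields at write time.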

The index disk usage API supplies exactly the quantitative data this kind of mapping optimization needs. It lists a detailed storage breakdown for every field of an index, so you can work through the mapping fields from largest to smallest, in order of ROI.
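The API is a POST to the `_disk_usage` endpoint of the target index. The `run_expensive_tasks=true` parameter is mandatory, as an acknowledgment that the analysis is relatively costly:

```
POST /my-index-000001/_disk_usage?run_expensive_tasks=true
```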

Here is the sample response from the documentation:

{
    "_shards": {
        "total": 1,
        "successful": 1,
        "failed": 0
    },
    "my-index-000001": {
        "store_size": "929mb", 
        "store_size_in_bytes": 974192723,
        "all_fields": {
            "total": "928.9mb", 
            "total_in_bytes": 973977084,
            "inverted_index": {
                "total": "107.8mb",
                "total_in_bytes": 113128526
            },
            "stored_fields": "623.5mb",
            "stored_fields_in_bytes": 653819143,
            "doc_values": "125.7mb",
            "doc_values_in_bytes": 131885142,
            "points": "59.9mb",
            "points_in_bytes": 62885773,
            "norms": "2.3kb",
            "norms_in_bytes": 2356,
            "term_vectors": "2.2kb",
            "term_vectors_in_bytes": 2310
        },
        "fields": {
            "_id": {
                "total": "49.3mb",
                "total_in_bytes": 51709993,
                "inverted_index": {
                    "total": "29.7mb",
                    "total_in_bytes": 31172745
                },
                "stored_fields": "19.5mb", 
                "stored_fields_in_bytes": 20537248,
                "doc_values": "0b",
                "doc_values_in_bytes": 0,
                "points": "0b",
                "points_in_bytes": 0,
                "norms": "0b",
                "norms_in_bytes": 0,
                "term_vectors": "0b",
                "term_vectors_in_bytes": 0
            },
            "_primary_term": {...},
            "_seq_no": {...},
            "_version": {...},
            "_source": {
                "total": "603.9mb",
                "total_in_bytes": 633281895,
                "inverted_index": {...},
                "stored_fields": "603.9mb", 
                "stored_fields_in_bytes": 633281895,
                "doc_values": "0b",
                "doc_values_in_bytes": 0,
                "points": "0b",
                "points_in_bytes": 0,
                "norms": "0b",
                "norms_in_bytes": 0,
                "term_vectors": "0b",
                "term_vectors_in_bytes": 0
            },
            "context": {
                "total": "28.6mb",
                "total_in_bytes": 30060405,
                "inverted_index": {
                    "total": "22mb",
                    "total_in_bytes": 23090908
                },
                "stored_fields": "0b",
                "stored_fields_in_bytes": 0,
                "doc_values": "0b",
                "doc_values_in_bytes": 0,
                "points": "0b",
                "points_in_bytes": 0,
                "norms": "2.3kb",
                "norms_in_bytes": 2356,
                "term_vectors": "2.2kb",
                "term_vectors_in_bytes": 2310
            },
            "context.keyword": {...},
            "message": {...},
            "message.keyword": {...}
        }
    }
}

For a single field, the API reports a very detailed storage breakdown:

                "total": "49.3mb",
                "total_in_bytes": 51709993,
                "inverted_index": {
                    "total": "29.7mb",
                    "total_in_bytes": 31172745
                },
                "stored_fields": "19.5mb", 
                "stored_fields_in_bytes": 20537248,
                "doc_values": "0b",
                "doc_values_in_bytes": 0,
                "points": "0b",
                "points_in_bytes": 0,
                "norms": "0b",
                "norms_in_bytes": 0,
                "term_vectors": "0b",
                "term_vectors_in_bytes": 0

total records the field's overall footprint: the sum of the inverted index cost, the stored (row) data cost, the doc_values cost, and the other per-structure costs.
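Using the `_id` field from the sample above, the per-structure numbers do add up to `total_in_bytes` (a minimal sanity check; all values are copied verbatim from the sample response):

```java
public class DiskUsageTotal {
    public static void main(String[] args) {
        // Per-structure sizes of the _id field, in bytes (from the sample response).
        long invertedIndex = 31_172_745L;
        long storedFields  = 20_537_248L;
        long docValues = 0L, points = 0L, norms = 0L, termVectors = 0L;

        long total = invertedIndex + storedFields + docValues + points + norms + termVectors;
        System.out.println(total);  // matches "total_in_bytes": 51709993
    }
}
```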

So how does Elasticsearch obtain these numbers on demand?

The core logic lives in the IndexDiskUsageAnalyzer class. IndexDiskUsageAnalyzer measures the storage used by each shard, and the per-shard results are then aggregated using the BroadcastAction framework.

IndexDiskUsageAnalyzer's doAnalyze method measures the storage of each structure type:

    void doAnalyze(IndexDiskUsageStats stats) throws IOException {
        long startTimeInNanos;
        final ExecutionTime executionTime = new ExecutionTime();
        try (DirectoryReader directoryReader = DirectoryReader.open(commit)) {
            directory.resetBytesRead();
            for (LeafReaderContext leaf : directoryReader.leaves()) {
                cancellationChecker.checkForCancellation();
                final SegmentReader reader = Lucene.segmentReader(leaf.reader());

                startTimeInNanos = System.nanoTime();
                analyzeInvertedIndex(reader, stats);
                executionTime.invertedIndexTimeInNanos += System.nanoTime() - startTimeInNanos;

                startTimeInNanos = System.nanoTime();
                analyzeStoredFields(reader, stats);
                executionTime.storedFieldsTimeInNanos += System.nanoTime() - startTimeInNanos;

                startTimeInNanos = System.nanoTime();
                analyzeDocValues(reader, stats);
                executionTime.docValuesTimeInNanos += System.nanoTime() - startTimeInNanos;

                startTimeInNanos = System.nanoTime();
                analyzePoints(reader, stats);
                executionTime.pointsTimeInNanos += System.nanoTime() - startTimeInNanos;

                startTimeInNanos = System.nanoTime();
                analyzeNorms(reader, stats);
                executionTime.normsTimeInNanos += System.nanoTime() - startTimeInNanos;

                startTimeInNanos = System.nanoTime();
                analyzeTermVectors(reader, stats);
                executionTime.termVectorsTimeInNanos += System.nanoTime() - startTimeInNanos;

                startTimeInNanos = System.nanoTime();
                analyzeKnnVectors(reader, stats);
                executionTime.knnVectorsTimeInNanos += System.nanoTime() - startTimeInNanos;
            }
        }
        logger.debug("analyzing the disk usage took {} stats: {}", executionTime, stats);
    }

Each analyze method calls Lucene APIs to derive a field's storage size from the metadata and the data files of the corresponding structure.

InvertedIndex, Points, and DocValues store data per field (column-oriented), so they can be processed one field at a time.

StoredFields is row-oriented, so the analyzer instead iterates over every document and measures the length of each field inside it.
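This row-oriented accumulation can be sketched in plain Java. The sketch models each document as a map from field name to its stored bytes; it illustrates the traversal pattern only, not Elasticsearch's actual StoredFieldVisitor-based code:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class StoredFieldsSketch {
    // Accumulate per-field stored-bytes totals by walking every document,
    // as the row-oriented layout of stored fields requires.
    static Map<String, Long> analyzeStoredFields(List<Map<String, byte[]>> docs) {
        Map<String, Long> perField = new LinkedHashMap<>();
        for (Map<String, byte[]> doc : docs) {                    // one pass per document
            for (Map.Entry<String, byte[]> field : doc.entrySet()) {
                perField.merge(field.getKey(), (long) field.getValue().length, Long::sum);
            }
        }
        return perField;
    }

    public static void main(String[] args) {
        List<Map<String, byte[]>> docs = List.of(
            Map.of("_source", new byte[120], "_id", new byte[20]),
            Map.of("_source", new byte[80],  "_id", new byte[20])
        );
        Map<String, Long> sizes = analyzeStoredFields(docs);
        System.out.println(sizes.get("_source") + " " + sizes.get("_id"));  // 200 40
    }
}
```

Column-oriented structures skip the outer per-document loop entirely, which is why they can be summed field by field instead.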

All of the related code lives in IndexDiskUsageAnalyzer; readers interested in a particular data structure can dig into the corresponding analyze method.
