MongoDB serverStatus Metrics Explained (Part 2)

Other Parameters

Instance information:

"host" : ,
"advisoryHostFQDNs" : ,
"version" : ,
"process" : ,
"pid" : ,
"uptime" : ,
"uptimeMillis" : ,
"uptimeEstimate" : ,
"localTime" : ISODate(""),

host: The system's hostname. On Unix/Linux systems, this is the same as the output of the hostname command.
advisoryHostFQDNs: New in version 3.2. An array of the host's fully qualified domain names.
version: The MongoDB version of the current MongoDB process.
process: The current MongoDB process; possible values are mongos or mongod.
pid: The process ID.
uptime: The total number of seconds the current MongoDB process has been active, i.e. how long it has been up.
uptimeMillis: The number of milliseconds the current MongoDB process has been active.
uptimeEstimate: The uptime, in seconds, as calculated by MongoDB's internal coarse-grained timekeeping system.
localTime: An ISODate representing the server's current time, expressed in UTC.

Assertions (asserts):

A document that reports the number of assertions raised since the MongoDB process started. While assertion errors are generally uncommon, if there are non-zero values here you should check the log files for more information. In many cases these errors are trivial, but they are worth investigating.

> db.serverStatus().asserts
{ "regular" : 0, "warning" : 0, "msg" : 0, "user" : 5, "rollovers" : 0 }
asserts.regular
The number of regular assertions raised since the MongoDB process started. Check the log files for more information about these messages.

asserts.warning
Changed in version 4.0.
Starting in MongoDB 4.0, this field returns 0.
In earlier versions, the field returned the number of warnings raised since the MongoDB process started.

asserts.msg
The number of message assertions raised since the MongoDB process started. Check the log files for more information about these messages.

asserts.user
The number of "user asserts" that have occurred since the MongoDB process last started. These are errors that users can generate, such as running out of disk space or duplicate keys. You can prevent these assertions by fixing the problem in your application or deployment. See the MongoDB log for more information.

asserts.rollovers
The number of times the assert counters have rolled over since the MongoDB process last started. The counters roll over to zero after 2^30 assertions. Use this value to provide context for the other values in the asserts data structure.
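Since the raw counters only ever grow (until a rollover), the useful signal is how fast they grow between two samples. The sketch below compares two hypothetical asserts snapshots; the field names match the serverStatus output above, but the values are made up for illustration.

```python
# Hypothetical samples of db.serverStatus().asserts taken at two points in time.
prev = {"regular": 0, "warning": 0, "msg": 0, "user": 5, "rollovers": 0}
curr = {"regular": 0, "warning": 0, "msg": 0, "user": 9, "rollovers": 0}

def assert_deltas(prev, curr):
    """Return the per-counter increase between two asserts samples."""
    return {k: curr[k] - prev[k] for k in curr}

deltas = assert_deltas(prev, curr)
# A growing "user" counter suggests application-level errors worth
# investigating in the MongoDB logs.
print(deltas)
```

If "rollovers" changed between the two samples, the simple subtraction above is no longer valid and the counters should be re-baselined.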

extra_info: A document that provides additional information about the underlying system.

"extra_info" : {
   "note" : "fields vary by platform.",
   "heap_usage_bytes" : ,
   "page_faults" : 
},

> db.serverStatus().extra_info
{ "note" : "fields vary by platform", "page_faults" : 21 }

extra_info.note: The string literal "fields vary by platform."
extra_info.heap_usage_bytes: The total size, in bytes, of heap space used by the database process. Only available on Unix/Linux systems.
extra_info.page_faults: The total number of page faults. The extra_info.page_faults counter climbs under performance bottlenecks, memory pressure, or a growing data set. Limited and sporadic page faults do not necessarily indicate a problem.
Windows differentiates "hard" page faults, which involve disk I/O, from "soft" page faults, which only require moving pages in memory. MongoDB counts both hard and soft page faults in this statistic.
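Because a single page_faults total says little on its own, a common approach is to convert two samples into a faults-per-second rate. A minimal sketch, with hypothetical sample values:

```python
def page_fault_rate(faults_prev, faults_curr, seconds_elapsed):
    """Approximate page faults per second between two
    extra_info.page_faults samples (values are hypothetical)."""
    if seconds_elapsed <= 0:
        raise ValueError("seconds_elapsed must be positive")
    return (faults_curr - faults_prev) / seconds_elapsed

# e.g. the counter grew from 21 to 321 over a 60-second window
rate = page_fault_rate(21, 321, 60)
print(rate)  # 5.0 faults/second
```

A sustained high rate, rather than any particular absolute value, is what suggests memory pressure.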

globalLock: A document that reports on the database's lock state.

shard01:SECONDARY> db.serverStatus().globalLock
{
    "totalTime" : NumberLong("4639816000"),
    "currentQueue" : {
        "total" : 0,
        "readers" : 0,
        "writers" : 0
    },
    "activeClients" : {
        "total" : 0,
        "readers" : 0,
        "writers" : 0
    }
}
1. globalLock: A document that reports on the database's lock state. Generally, the locks document provides more detailed data on lock use.
2. globalLock.totalTime: The time, in microseconds, since the database last started and created the global lock. This is roughly equivalent to the total server uptime.
3. globalLock.currentQueue: A document that provides the number of operations queued because of locks.
4. globalLock.currentQueue.total: The total number of operations queued waiting for a lock (i.e. the sum of globalLock.currentQueue.readers and globalLock.currentQueue.writers). A consistently small queue, particularly of shorter operations, should cause no concern. Consider this value together with the reader/writer information in globalLock.activeClients.
5. globalLock.currentQueue.readers: The number of operations queued waiting for a read lock. A consistently small read queue, particularly of shorter operations, should cause no concern.
6. globalLock.currentQueue.writers: The number of operations queued waiting for a write lock. A consistently small write queue, particularly of shorter operations, should cause no concern.
7. globalLock.activeClients: A document with the number of connected clients performing read or write operations; consider it together with globalLock.currentQueue.
8. globalLock.activeClients.total: The total number of internal client connections to the database, including system threads as well as the read and write queues. Because it includes system threads, this value is higher than the sum of activeClients.readers and activeClients.writers.
9. globalLock.activeClients.readers: The number of active client connections performing read operations.
10. globalLock.activeClients.writers: The number of active client connections performing write operations.
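A quick health check on these numbers can be scripted. The sketch below inspects a hypothetical globalLock document and flags a queue skewed toward writers; the threshold logic is an illustrative assumption, not an official rule.

```python
def queue_pressure(global_lock):
    """Summarize a hypothetical db.serverStatus().globalLock document,
    flagging queues that are skewed toward writers."""
    q = global_lock["currentQueue"]
    return {
        "queued": q["total"],
        "read_queued": q["readers"],
        "write_queued": q["writers"],
        # small transient queues are normal; sustained, write-heavy
        # queueing is the signal worth investigating
        "suspicious": q["total"] > 0 and q["writers"] > q["readers"],
    }

sample = {"currentQueue": {"total": 3, "readers": 1, "writers": 2},
          "activeClients": {"total": 10, "readers": 1, "writers": 2}}
print(queue_pressure(sample))
```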

locks

A document that reports, for each lock type, data on lock usage.

> db.serverStatus().locks
{
    "Global" : {
        "acquireCount" : {
            "r" : NumberLong(133211),
            "w" : NumberLong(250),
            "W" : NumberLong(5)
        }
    },
    "Database" : {
        "acquireCount" : {
            "r" : NumberLong(66546),
            "w" : NumberLong(232),
            "R" : NumberLong(4),
            "W" : NumberLong(18)
        },
        "acquireWaitCount" : {
            "r" : NumberLong(2),
            "W" : NumberLong(2)
        },
        "timeAcquiringMicros" : {
            "r" : NumberLong(132),
            "W" : NumberLong(304)
        },
        "deadlockCount" : {
            <mode> : NumberLong(<num>),
            ...
        }
    },
    "Collection" : {
        "acquireCount" : {
            "r" : NumberLong(52895),
            "w" : NumberLong(232)
        }
    },
    "oplog" : {
        "acquireCount" : {
            "r" : NumberLong(13647)
        }
    }
}

locks.<type>.acquireCount: The number of times the lock was acquired in the specified mode.
locks.<type>.acquireWaitCount: The number of times the locks.<type>.acquireCount lock acquisitions encountered waits because the lock was held in a conflicting mode.
locks.<type>.timeAcquiringMicros: The cumulative wait time, in microseconds, for lock acquisitions.
Dividing locks.<type>.timeAcquiringMicros by locks.<type>.acquireWaitCount gives an approximate average wait time for the particular lock mode.
locks.<type>.deadlockCount: The number of times lock acquisitions encountered deadlocks.
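The division described above can be sketched directly against a serverStatus-shaped document. The sample values below are hypothetical (modeled on the Database lock numbers in the example output):

```python
def avg_lock_wait_micros(locks, resource, mode):
    """Approximate average wait per contended acquisition for one lock
    resource and mode: timeAcquiringMicros / acquireWaitCount.
    Returns 0.0 when the mode never waited."""
    waits = locks[resource].get("acquireWaitCount", {}).get(mode, 0)
    micros = locks[resource].get("timeAcquiringMicros", {}).get(mode, 0)
    return micros / waits if waits else 0.0

# Hypothetical slice of db.serverStatus().locks
locks = {"Database": {"acquireWaitCount": {"r": 2, "W": 2},
                      "timeAcquiringMicros": {"r": 132, "W": 304}}}
print(avg_lock_wait_micros(locks, "Database", "W"))  # 152.0
```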

MongoDB network usage (network)

wantRepl:PRIMARY> db.serverStatus().network
{
    "bytesIn" : NumberLong("2573428351867"),
    "bytesOut" : NumberLong("3889407355888"),
    "physicalBytesIn" : NumberLong("2568906769497"),
    "physicalBytesOut" : NumberLong("797923925390"),
    "numRequests" : NumberLong(136468356),
    "compression" : {
        "snappy" : {
            "compressor" : {
                "bytesIn" : NumberLong("3589137805219"),
                "bytesOut" : NumberLong("497232509340")
            },
            "decompressor" : {
                "bytesIn" : NumberLong("15326981527"),
                "bytesOut" : NumberLong("21068338987")
            }
        }
    },
    "serviceExecutorTaskStats" : {
        "executor" : "passthrough",
        "threadsRunning" : 31
    }
}

network.bytesIn: The number of bytes of network traffic the database has received. Use this value to ensure that the network traffic sent to the mongod process is consistent with expectations and overall inter-application traffic.
network.bytesOut: The number of bytes of network traffic the database has sent. Use this value to ensure that the network traffic sent by the mongod process is consistent with expectations and overall inter-application traffic.
network.numRequests: The total number of distinct requests the server has received. Use this value to provide context for the network.bytesIn and network.bytesOut values, to ensure that MongoDB's network utilization is consistent with expectations and application use.
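One simple way to give bytesIn/bytesOut the context the text mentions is to normalize them by numRequests. A minimal sketch with hypothetical values:

```python
def network_summary(net):
    """Derive per-request averages from a hypothetical
    db.serverStatus().network sample."""
    n = net["numRequests"]
    return {
        "avg_bytes_in_per_request": net["bytesIn"] / n,
        "avg_bytes_out_per_request": net["bytesOut"] / n,
    }

sample = {"bytesIn": 1_000_000, "bytesOut": 4_000_000, "numRequests": 2_000}
print(network_summary(sample))
```

A sudden jump in average response size, for example, can point at queries that started returning far more data than expected.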

opcounters

> db.serverStatus().opcounters
{
    "insert" : 0,
    "query" : 49,
    "update" : 6,
    "delete" : 0,
    "getmore" : 0,
    "command" : 174
}
opcounters.insert: The total number of insert operations received since the mongod instance last started.
opcounters.query: The total number of queries received since the mongod instance last started.
opcounters.update: The total number of update operations received since the mongod instance last started.
opcounters.delete: The total number of delete operations since the mongod instance last started.
opcounters.getmore: The total number of "getmore" operations since the mongod instance last started. This counter can be high even when the query count is low. Secondary nodes send getMore operations as part of the replication process.
opcounters.command: The total number of commands issued to the database since the mongod instance last started.
opcounters.command counts all commands except the write commands: insert, update, and delete.
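Since these are cumulative counters since startup, a per-second rate between two samples is usually what monitoring wants. A sketch, using the sample output above as the hypothetical first reading:

```python
def ops_per_second(prev, curr, seconds):
    """Per-second operation rates from two opcounters samples taken
    `seconds` apart (counters only grow between restarts)."""
    return {op: (curr[op] - prev[op]) / seconds for op in curr}

prev = {"insert": 0, "query": 49, "update": 6,
        "delete": 0, "getmore": 0, "command": 174}
curr = {"insert": 10, "query": 149, "update": 26,
        "delete": 0, "getmore": 5, "command": 474}
print(ops_per_second(prev, curr, 10))
```

Note that a mongod restart resets the counters, so a negative delta means the baseline sample must be discarded.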

repl

shard01:PRIMARY> db.serverStatus().repl
{
    "topologyVersion" : {
        "processId" : ObjectId("5fdaff1a68fa4882da69da73"),
        "counter" : NumberLong(6)
    },
    "hosts" : [
        "localhost:29018",
        "localhost:29019",
        "localhost:29020"
    ],
    "setName" : "shard01",
    "setVersion" : 2,
    "ismaster" : true,
    "secondary" : false,
    "primary" : "localhost:29019",
    "me" : "localhost:29019",
    "electionId" : ObjectId("7fffffff0000000000000003"),
    "lastWrite" : {
        "opTime" : {
            "ts" : Timestamp(1608194231, 1),
            "t" : NumberLong(3)
        },
        "lastWriteDate" : ISODate("2020-12-17T08:37:11Z"),
        "majorityOpTime" : {
            "ts" : Timestamp(1608194231, 1),
            "t" : NumberLong(3)
        },
        "majorityWriteDate" : ISODate("2020-12-17T08:37:11Z")
    },
    "rbid" : 1
}

repl: A document that reports on the replica set configuration. repl is only present when the current host is a member of a replica set.
repl.hosts: An array of the current replica set members' hostname and port information ("host:port").
repl.setName: A string with the name of the current replica set. This value reflects the --replSet command-line argument or the replSetName value in the configuration file.
repl.ismaster: A boolean that indicates whether the current node is the primary of the replica set.
repl.secondary: A boolean that indicates whether the current node is a secondary member of the replica set.
repl.primary: New in version 3.0.
The hostname and port information ("host:port") of the replica set's current primary member.
repl.me: New in version 3.0: the hostname and port information ("host:port") of the current member of the replica set.
repl.rbid: New in version 3.0. The rollback identifier. Used to determine whether a rollback has occurred on this mongod instance.
repl.replicationProgress: Changed in version 3.2: previously named serverStatus.repl.slaves.
New in version 3.0.
An array with one document for every member of the replica set that reports replication progress to this member. Typically this member is the primary, or a secondary when using chained replication.
To output repl, you must pass the repl option to serverStatus, as follows:
db.serverStatus({ "repl": 1 })
db.runCommand({ "serverStatus": 1, "repl": 1 })
The content of the repl.replicationProgress section depends on the source from which each member replicates. It supports internal operation and is for internal and diagnostic use only.
repl.replicationProgress[n].rid: An ObjectId used as the ID of the replica set member. For internal use only.
repl.replicationProgress[n].optime: Information, as reported by this member, about the last operation from the oplog that the member has applied.
repl.replicationProgress[n].host: The name of the replica set member, in [hostname]:[port] format.
repl.replicationProgress[n].memberID: The integer identifier of this member of the replica set.

sharding

New in version 3.2: when run on mongos, the command returns sharding information.
Changed in version 3.6: starting in MongoDB 3.6, shard members also return sharding information.

mongos> db.serverStatus().sharding
{
    "configsvrConnectionString" : "configRepl/localhost:29024",
    "lastSeenConfigServerOpTime" : {
        "ts" : Timestamp(1608194582, 2),
        "t" : NumberLong(3)
    },
    "maxChunkSizeInBytes" : NumberLong(67108864)
}

1. sharding: A document with data about the sharded cluster. lastSeenConfigServerOpTime is present only on a mongos or a shard member, not on a config server.
2. sharding.configsvrConnectionString: The connection string for the config servers.
3. sharding.lastSeenConfigServerOpTime: The latest optime of the CSRS primary as seen by the mongos or shard member. The optime document includes:
   - ts, the timestamp of the operation.
   - t, the term, indicating when the operation was originally generated on the primary.
   lastSeenConfigServerOpTime is present only in sharded clusters that use a CSRS (config servers as a replica set).
4. sharding.maxChunkSizeInBytes: New in version 3.6. The maximum size limit for a chunk. If the chunk size was recently updated on the config servers, maxChunkSizeInBytes may not reflect the latest value.

shardingStatistics

shard01:PRIMARY> db.serverStatus().shardingStatistics
{
    "countStaleConfigErrors" : NumberLong(2),
    "countDonorMoveChunkStarted" : NumberLong(0),
    "totalDonorChunkCloneTimeMillis" : NumberLong(0),
    "totalCriticalSectionCommitTimeMillis" : NumberLong(0),
    "totalCriticalSectionTimeMillis" : NumberLong(0),
    "countDocsClonedOnRecipient" : NumberLong(0),
    "countDocsClonedOnDonor" : NumberLong(0),
    "countRecipientMoveChunkStarted" : NumberLong(0),
    "countDocsDeletedOnDonor" : NumberLong(0),
    "countDonorMoveChunkLockTimeout" : NumberLong(0),
    "countDonorMoveChunkAbortConflictingIndexOperation" : NumberLong(0),
    "unfinishedMigrationFromPreviousPrimary" : NumberLong(0),
    "catalogCache" : {
        "numDatabaseEntries" : NumberLong(2),
        "numCollectionEntries" : NumberLong(1),
        "countStaleConfigErrors" : NumberLong(0),
        "totalRefreshWaitTimeMicros" : NumberLong(1041596),
        "numActiveIncrementalRefreshes" : NumberLong(0),
        "countIncrementalRefreshesStarted" : NumberLong(152),
        "numActiveFullRefreshes" : NumberLong(0),
        "countFullRefreshesStarted" : NumberLong(1),
        "countFailedRefreshes" : NumberLong(0)
    },
    "rangeDeleterTasks" : 0
}

shardingStatistics: A document with metrics on metadata refreshes in a sharded cluster.
shardingStatistics.countStaleConfigErrors: The total number of times threads hit a stale config exception. Because a stale config exception triggers a metadata refresh, this number is roughly proportional to the number of metadata refreshes. Only present on a running shard.
shardingStatistics.countDonorMoveChunkStarted: The total number of times the moveChunk command has started on the shard of which this node is a member, as part of the chunk migration process. This number increases whether or not the migration succeeds. Only present on a running shard.
shardingStatistics.totalDonorChunkCloneTimeMillis: The cumulative time, in milliseconds, taken by the clone phase of chunk migrations from the shard of which this node is a member. Specifically, for each migration from this shard, the tracked time starts with the moveChunk command and ends before the destination shard enters the catch-up phase, which applies the changes that occurred during the chunk migration. Only present on a running shard.
shardingStatistics.totalCriticalSectionCommitTimeMillis: The cumulative time, in milliseconds, taken by the update-metadata phase of chunk migrations from this shard. During the update-metadata phase, all operations on the collection are blocked. Only present on a running shard.
shardingStatistics.totalCriticalSectionTimeMillis: The cumulative time, in milliseconds, taken by the catch-up phase and the update-metadata phase of chunk migrations from the shard of which this node is a member. To calculate the duration of the catch-up phase, subtract: totalCriticalSectionTimeMillis - totalCriticalSectionCommitTimeMillis. Only present on a running shard.
shardingStatistics.catalogCache: A document with statistics about the cluster's routing-information cache.
shardingStatistics.catalogCache.numDatabaseEntries: The total number of database entries currently in the catalog cache.
shardingStatistics.catalogCache.numCollectionEntries: The total number of collection entries (across all databases) currently in the catalog cache.
shardingStatistics.catalogCache.countStaleConfigErrors: The total number of times threads hit a stale config exception. Stale config exceptions trigger metadata refreshes.
shardingStatistics.catalogCache.totalRefreshWaitTimeMicros: The cumulative time, in microseconds, that threads had to wait for metadata refreshes.
shardingStatistics.catalogCache.numActiveIncrementalRefreshes: The number of incremental catalog-cache refreshes currently waiting to complete.
shardingStatistics.countIncrementalRefreshesStarted: The cumulative number of incremental refreshes that have started.
shardingStatistics.catalogCache.numActiveFullRefreshes: The number of full catalog-cache refreshes currently waiting to complete.
shardingStatistics.catalogCache.countFullRefreshesStarted: The cumulative number of full refreshes that have started.
shardingStatistics.catalogCache.countFailedRefreshes: The cumulative number of full or incremental refreshes that have failed.
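The catalog-cache counters combine naturally into an average wait per refresh. A sketch against a hypothetical catalogCache sample (values taken from the example output above); dividing total wait by refreshes started is an illustrative approximation, since waits and refresh starts are tracked independently:

```python
def avg_refresh_wait_micros(catalog_cache):
    """Approximate average metadata-refresh wait:
    totalRefreshWaitTimeMicros / (incremental + full refreshes started)."""
    refreshes = (catalog_cache["countIncrementalRefreshesStarted"]
                 + catalog_cache["countFullRefreshesStarted"])
    if refreshes == 0:
        return 0.0
    return catalog_cache["totalRefreshWaitTimeMicros"] / refreshes

sample = {"totalRefreshWaitTimeMicros": 1_041_596,
          "countIncrementalRefreshesStarted": 152,
          "countFullRefreshesStarted": 1}
print(round(avg_refresh_wait_micros(sample), 1))  # ~6.8 ms per refresh
```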

storageEngine

shard01:PRIMARY> db.serverStatus().storageEngine
{
    "name" : "wiredTiger",
    "supportsCommittedReads" : true,
    "oldestRequiredTimestampForCrashRecovery" : Timestamp(1608201974, 1),
    "supportsPendingDrops" : true,
    "dropPendingIdents" : NumberLong(0),
    "supportsTwoPhaseIndexBuild" : true,
    "supportsSnapshotReadConcern" : true,
    "readOnly" : false,
    "persistent" : true,
    "backupCursorOpen" : false
}
1. storageEngine: A document with data about the current storage engine.
2. storageEngine.name: The name of the current storage engine.
3. storageEngine.supportsCommittedReads: New in version 3.2. A boolean that indicates whether the storage engine supports "majority" read concern.
4. storageEngine.persistent: New in version 3.2.6. A boolean that indicates whether the storage engine persists data to disk.

transactions

shard01:PRIMARY> db.serverStatus().transactions
{
    "retriedCommandsCount" : NumberLong(0),
    "retriedStatementsCount" : NumberLong(0),
    "transactionsCollectionWriteCount" : NumberLong(0),
    "currentActive" : NumberLong(0),
    "currentInactive" : NumberLong(0),
    "currentOpen" : NumberLong(0),
    "totalAborted" : NumberLong(0),
    "totalCommitted" : NumberLong(0),
    "totalStarted" : NumberLong(0),
    "totalPrepared" : NumberLong(0),
    "totalPreparedThenCommitted" : NumberLong(0),
    "totalPreparedThenAborted" : NumberLong(0),
    "currentPrepared" : NumberLong(0)
}

1. transactions: A document with data about retryable writes and multi-document transactions.
2. transactions.retriedCommandsCount: The total number of retries received after the corresponding retryable write command had already been committed. That is, a retryable write may still be retried even though the write succeeded and has an associated record of the transaction and session in the config.transactions collection, for example if the initial write response to the client was lost.
Note: MongoDB does not re-execute committed writes.
The total is across all sessions. It does not include retryable writes that occur internally as part of chunk migrations. New in version 3.6.3.
3. transactions.retriedStatementsCount: The total number of write statements associated with the retried commands counted in transactions.retriedCommandsCount.
4. transactions.transactionsCollectionWriteCount: The total number of writes to the config.transactions collection, triggered when a new retryable write statement is committed.
5. For update and delete commands, since only single-document operations are retryable, there is one write per statement.
6. For insert operations, there is one write per batch of documents inserted, unless a failure causes each document to be inserted separately.
7. The total includes writes to a server's config.transactions collection that occur as part of a migration.
New in version 3.6.3.
8. transactions.currentActive: The total number of open transactions currently executing a command. New in version 4.0.2.
9. transactions.currentInactive: The total number of open transactions not currently executing a command. New in version 4.0.2.
10. transactions.currentOpen: The total number of open transactions. A transaction is opened when the first command runs as part of that transaction, and it stays open until the transaction either commits or aborts. New in version 4.0.2.
11. transactions.totalAborted: The total number of transactions aborted on this server since the mongod process last started. New in version 4.0.2.
12. transactions.totalCommitted: The total number of transactions committed on this server since the mongod process last started. New in version 4.0.2.
13. transactions.totalStarted: The total number of transactions started on this server since the mongod process last started. New in version 4.0.2.

mem

shard01:PRIMARY> db.serverStatus().mem
{ "bits" : 64, "resident" : 24, "virtual" : 5496, "supported" : true }

mem: A document that reports on the mongod's system architecture and current memory use.
mem.bits: Either 64 or 32, indicating whether the mongodb instance was compiled for a 64-bit or 32-bit architecture.
mem.resident: The value of mem.resident is roughly equivalent to the amount of RAM, in megabytes (MB), currently used by the database process. During normal use this value tends to grow. On a dedicated database server this number approaches the total amount of system memory.
mem.virtual: mem.virtual displays the total amount of virtual memory, in megabytes (MB), used by the mongod process.
With journaling enabled and the MMAPv1 storage engine, mem.virtual is at least twice mem.mapped. If mem.virtual is significantly larger than mem.mapped (e.g. 3 or more times), this may indicate a memory leak.
mem.supported: A boolean that indicates whether the underlying system supports extended memory information. If false, meaning the system does not support extended memory information, the database server may not be able to access other mem values.
mem.mapped: MMAPv1 storage engine only. The amount of mapped memory, in megabytes (MB), used by the database. Because MongoDB uses memory-mapped files, this value is likely to be roughly equivalent to the total size of your database or databases.
mem.mappedWithJournal: MMAPv1 storage engine only. The amount of mapped memory, in megabytes (MB), including memory used for journaling. This value will always be twice the value of mem.mapped. This field is only included when journaling is enabled.
mem.note: The field mem.note appears if mem.supported is false. It displays the text: "not all mem info support on this platform".

metrics

shard01:PRIMARY> db.serverStatus().metrics
{
    "commands" : {
        "aggregate" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(580982)
        },
        "buildInfo" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(8729344)
        },
        "collStats" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(30)
        },
        "connectionStatus" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(5)
        },
        "count" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(284679)
        },
        "create" : {
            "failed" : NumberLong(1),
            "total" : NumberLong(1)
        },
        "createIndexes" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(462633)
        },
        "createUser" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(1)
        },
        "dbStats" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(8)
        },
        "delete" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(87983)
        },
        "drop" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(7)
        },
        "endSessions" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(4360233)
        },
        "find" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(11631918)
        },
        "getCmdLineOpts" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(1)
        },
        "getFreeMonitoringStatus" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(1)
        },
        "getLastError" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(17677)
        },
        "getLog" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(1)
        },
        "getMore" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(17435278)
        },
        "insert" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(4724448)
        },
        "isMaster" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(16034722)
        },
        "killCursors" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(15)
        },
        "listCollections" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(164)
        },
        "listDatabases" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(22)
        },
        "listIndexes" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(67271)
        },
        "logout" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(20)
        },
        "ping" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(8504)
        },
        "replSetGetRBID" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(1)
        },
        "replSetGetStatus" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(8711552)
        },
        "replSetHeartbeat" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(5042177)
        },
        "replSetUpdatePosition" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(17123049)
        },
        "rolesInfo" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(2)
        },
        "saslContinue" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(9074388)
        },
        "saslStart" : {
            "failed" : NumberLong(8),
            "total" : NumberLong(4537203)
        },
        "serverStatus" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(4355749)
        },
        "update" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(19842123)
        },
        "usersInfo" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(1)
        },
        "whatsmyuri" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(4355789)
        }
    },
    "cursor" : {
        "timedOut" : NumberLong(30),
        "open" : {
            "noTimeout" : NumberLong(0),
            "pinned" : NumberLong(1),
            "total" : NumberLong(1)
        }
    },
    "document" : {
        "deleted" : NumberLong("3425958759"),
        "inserted" : NumberLong("3432065606"),
        "returned" : NumberLong("7275879520"),
        "updated" : NumberLong(53083893)
    },
    "getLastError" : {
        "wtime" : {
            "num" : 94439,
            "totalMillis" : 464983
        },
        "wtimeouts" : NumberLong(0)
    },
    "operation" : {
        "scanAndOrder" : NumberLong(8336779),
        "writeConflicts" : NumberLong(160097)
    },
    "query" : {
        "planCacheTotalSizeEstimateBytes" : NumberLong(3135730),
        "updateOneOpStyleBroadcastWithExactIDCount" : NumberLong(0),
        "upsertReplacementCannotTargetByQueryCount" : NumberLong(0)
    },
    "queryExecutor" : {
        "scanned" : NumberLong("5825296338"),
        "scannedObjects" : NumberLong("12705570863")
    },
    "record" : {
        "moves" : NumberLong(0)
    },
    "repl" : {
        "executor" : {
            "pool" : {
                "inProgressCount" : 0
            },
            "queues" : {
                "networkInProgress" : 0,
                "sleepers" : 2
            },
            "unsignaledEvents" : 0,
            "shuttingDown" : false,
            "networkInterface" : "DEPRECATED: getDiagnosticString is deprecated in NetworkInterfaceTL"
        },
        "apply" : {
            "attemptsToBecomeSecondary" : NumberLong(1),
            "batchSize" : NumberLong(0),
            "batches" : {
                "num" : 0,
                "totalMillis" : 0
            },
            "ops" : NumberLong(0)
        },
        "buffer" : {
            "count" : NumberLong(0),
            "maxSizeBytes" : NumberLong(268435456),
            "sizeBytes" : NumberLong(0)
        },
        "initialSync" : {
            "completed" : NumberLong(0),
            "failedAttempts" : NumberLong(0),
            "failures" : NumberLong(0)
        },
        "network" : {
            "bytes" : NumberLong(0),
            "getmores" : {
                "num" : 0,
                "totalMillis" : 0
            },
            "ops" : NumberLong(0),
            "readersCreated" : NumberLong(0)
        },
        "preload" : {
            "docs" : {
                "num" : 0,
                "totalMillis" : 0
            },
            "indexes" : {
                "num" : 0,
                "totalMillis" : 0
            }
        }
    },
    "storage" : {
        "freelist" : {
            "search" : {
                "bucketExhausted" : NumberLong(0),
                "requests" : NumberLong(0),
                "scanned" : NumberLong(0)
            }
        }
    },
    "ttl" : {
        "deletedDocuments" : NumberLong(11648),
        "passes" : NumberLong(168171)
    }
}

metrics: A document that returns various statistics reflecting the current use and state of a running mongod instance.
metrics.commands: New in version 3.0. A document that reports on the use of database commands. The fields in metrics.commands are the names of database commands, and each value is a document reporting the total number of times the command executed as well as the number of failed executions.
metrics.commands.<command>.failed: The number of times <command> failed on this mongod.
metrics.commands.<command>.total: The number of times <command> executed on this mongod.
metrics.document: A document that reflects document access and modification patterns. Compare these values to the data in the opcounters document, which tracks total numbers of operations.
metrics.document.deleted: The total number of documents deleted.
metrics.document.inserted: The total number of documents inserted.
metrics.document.returned: The total number of documents returned by queries.
metrics.document.updated: The total number of documents updated.
metrics.executor: New in version 3.2. A document that reports various statistics for the replication executor.
metrics.getLastError: A document that reports on getLastError use.
metrics.getLastError.wtime: A document that reports on getLastError operation counts with a w argument greater than 1.
metrics.getLastError.wtime.num: The total number of getLastError operations with a specified write concern (i.e. w) that wait for one or more members of a replica set to acknowledge the write operation (i.e. a w value greater than 1).
metrics.getLastError.wtime.totalMillis: The total amount of time, in milliseconds, that the mongod has spent performing write operations with a specified write concern (i.e. w) that wait for one or more members of a replica set to acknowledge the write operation (i.e. a w value greater than 1).
metrics.getLastError.wtimeouts: The number of times that write concern operations have timed out as a result of the wtimeout threshold to getLastError.
metrics.operation: A document that holds counters for several types of update and query operations that MongoDB handles using special operation types.
metrics.operation.fastmod: Removed in 3.4. With the MMAPv1 storage engine, the number of update operations that neither caused documents to grow nor required updates to an index. For example, this counter would record an update operation that used the $inc operator to increment the value of a field that is not indexed.
metrics.operation.idhack: Removed in 3.4. The number of queries that contain the _id field. For these queries, MongoDB uses the default index on the _id field and skips all query plan analysis.
metrics.operation.scanAndOrder: The total number of queries that return sorted results but cannot perform the sort operation using an index.
metrics.operation.writeConflicts: The total number of queries that encountered write conflicts.
metrics.queryExecutor: A document that reports data from the query execution system.
metrics.queryExecutor.scanned: The total number of index items scanned during queries and query-plan evaluation. This counter is the same as totalKeysExamined in the output of explain().
metrics.queryExecutor.scannedObjects: The total number of documents scanned during queries and query-plan evaluation. This counter is the same as totalDocsExamined in the output of explain().
metrics.record: A document that reports data related to record allocation in the on-disk storage files.
metrics.record.moves: For the MMAPv1 storage engine, metrics.record.moves reports the total number of times documents have moved within the on-disk representation of the MongoDB data set. Documents move because operations increase the size of the document beyond its allocated record size.
metrics.repl: A document that reports metrics related to the replication process. The metrics.repl document appears on all mongod instances, including those that are members of replica sets.
metrics.repl.apply: A document that reports on the application of operations from the replication oplog.
metrics.repl.apply.batchSize: New in version 4.0.6 (also available in 3.6.11+): the total number of oplog operations applied. metrics.repl.apply.batchSize is incremented with the number of operations in a batch at batch boundaries, rather than after each operation. For finer granularity, see metrics.repl.apply.ops.
metrics.repl.apply.batches: metrics.repl.apply.batches reports on the oplog application process on secondary members of replica sets. See Multithreaded Replication for more information on the oplog application process.
metrics.repl.apply.batches.num: The total number of batches applied across all databases.
metrics.repl.apply.batches.totalMillis: The total amount of time, in milliseconds, the mongod has spent applying operations from the oplog.
metrics.repl.apply.ops: The total number of oplog operations applied. metrics.repl.apply.ops is incremented after each operation. See also metrics.repl.apply.batchSize.
metrics.repl.buffer: MongoDB buffers oplog operations from the replication sync source before applying oplog entries in a batch. metrics.repl.buffer provides a way to track the oplog buffer. See Multithreaded Replication for more information on the oplog application process.
metrics.repl.buffer.count: The current number of operations in the oplog buffer.
metrics.repl.buffer.maxSizeBytes: The maximum size of the buffer. This value is a constant setting in the mongod and is not configurable.
metrics.repl.buffer.sizeBytes: The current size of the contents of the oplog buffer.
metrics.repl.network: metrics.repl.network reports network use by the replication process.
metrics.repl.network.bytes: metrics.repl.network.bytes reports the total amount of data read from the replication sync source.
metrics.repl.network.getmores: metrics.repl.network.getmores reports on getmore operations, which are requests for additional results from the oplog cursor as part of the oplog replication process.
metrics.repl.network.getmores.num: metrics.repl.network.getmores.num reports the total number of getmore operations, which request an additional set of operations from the replication sync source.
metrics.repl.network.getmores.totalMillis: Reports the total amount of time required to collect data from getmore operations. This number can be quite large, because MongoDB will wait for more data even if the getmore operation does not initially return data.
metrics.repl.network.ops: metrics.repl.network.ops reports the total number of operations read from the replication source.
metrics.repl.network.readersCreated: metrics.repl.network.readersCreated reports the total number of oplog query processes created. MongoDB creates a new oplog query whenever an error occurs in the connection, including a timeout or a network operation. In addition, metrics.repl.network.readersCreated is incremented every time MongoDB selects a new replication source.
metrics.repl.preload: metrics.repl.preload reports on the "pre-fetch" stage, in which MongoDB loads documents and indexes into RAM to improve replication throughput. See Multithreaded Replication for more details on the pre-fetch stage of the replication process.
metrics.repl.preload.docs: A document that reports on the documents loaded into memory during the pre-fetch stage.
metrics.repl.preload.docs.num: The total number of documents loaded during the pre-fetch stage of replication.
metrics.repl.preload.docs.totalMillis: The total amount of time spent loading documents as part of the pre-fetch stage of replication.
metrics.repl.preload.indexes: A document that reports on the index items loaded into memory during the pre-fetch stage of replication. See Multithreaded Replication for more information about the pre-fetch stage.
metrics.repl.preload.indexes.num: The total number of index entries loaded by members before updating documents as part of the pre-fetch stage of replication.
metrics.repl.preload.indexes.totalMillis: The total amount of time, in milliseconds, spent loading index entries as part of the pre-fetch stage of replication.
metrics.storage.freelist.search.bucketExhausted: The number of times that mongod has examined the free list without finding a suitably large record allocation.
metrics.storage.freelist.search.requests: The number of times mongod has searched for available record allocations.
metrics.storage.freelist.search.scanned: The number of available record allocations mongod has searched.
metrics.ttl: A document that reports on the resource use of the ttl index process.
metrics.ttl.deletedDocuments: The total number of documents deleted from collections with a ttl index.
metrics.ttl.passes: The number of times the background process has removed documents from collections with a ttl index.
metrics.cursor: New in version 2.6. A document that contains data regarding cursor state and use.
metrics.cursor.timedOut: New in version 2.6. The total number of cursors that have timed out since the server process started. If this number is large or growing at a regular rate, it may indicate an application error.
metrics.cursor.open: New in version 2.6. A document that contains data regarding open cursors.
metrics.cursor.open.noTimeout: New in version 2.6. The number of open cursors with the option DBQuery.Option.noTimeout set to prevent timeout after a period of inactivity.
metrics.cursor.open.pinned: New in version 2.6. The number of "pinned" open cursors.
metrics.cursor.open.total: New in version 2.6. The number of cursors that MongoDB is maintaining for clients. Because MongoDB exhausts unused cursors, this value is typically small or zero. However, if there is a queue, stale tailable cursors, or a large number of operations, this value may rise.
metrics.cursor.open.singleTarget: New in version 3.0. The total number of cursors that target only a single shard. Only mongos instances report metrics.cursor.open.singleTarget values.
metrics.cursor.open.multiTarget: New in version 3.0. The total number of cursors that target more than one shard. Only mongos instances report metrics.cursor.open.multiTarget values.
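Several of the counters above combine into a useful derived signal: dividing metrics.queryExecutor.scannedObjects by metrics.document.returned gives a rough documents-examined-per-document-returned ratio, where a high value hints at missing or poorly chosen indexes. A sketch with hypothetical sample values:

```python
def scan_efficiency(metrics):
    """Documents examined per document returned, from a hypothetical
    slice of db.serverStatus().metrics. A ratio near 1 means most
    scanned documents were actually returned; a large ratio suggests
    collection scans or weak indexes."""
    returned = metrics["document"]["returned"]
    examined = metrics["queryExecutor"]["scannedObjects"]
    return examined / returned if returned else float("inf")

sample = {"document": {"returned": 1_000},
          "queryExecutor": {"scanned": 200, "scannedObjects": 50_000}}
print(scan_efficiency(sample))  # 50.0
```

Like the other serverStatus counters, these are cumulative since startup, so trending the ratio over sampled deltas is more informative than a single reading.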

wiredTiger

shard01:PRIMARY> db.serverStatus().wiredTiger
{
    "uri" : "statistics:",
    "async" : {
        "current work queue length" : 0,
        "maximum work queue length" : 0,
        "number of allocation state races" : 0,
        "number of flush calls" : 0,
        "number of operation slots viewed for allocation" : 0,
        "number of times operation allocation failed" : 0,
        "number of times worker found no work" : 0,
        "total allocations" : 0,
        "total compact calls" : 0,
        "total insert calls" : 0,
        "total remove calls" : 0,
        "total search calls" : 0,
        "total update calls" : 0
    },
    "block-manager" : {
        "blocks pre-loaded" : 67,
        "blocks read" : 4815,
        "blocks written" : 22114,
        "bytes read" : 19845120,
        "bytes read via memory map API" : 0,
        "bytes read via system call API" : 0,
        "bytes written" : 166707200,
        "bytes written for checkpoint" : 166703104,
        "bytes written via memory map API" : 0,
        "bytes written via system call API" : 0,
        "mapped blocks read" : 0,
        "mapped bytes read" : 0,
        "number of times the file was remapped because it changed size via fallocate or truncate" : 0,
        "number of times the region was remapped via write" : 0
    },
    "cache" : {
        "application threads page read from disk to cache count" : 30,
        "application threads page read from disk to cache time (usecs)" : 10875,
        "application threads page write from cache to disk count" : 11634,
        "application threads page write from cache to disk time (usecs)" : 2352569,
        "bytes allocated for updates" : 1584785,
        "bytes belonging to page images in the cache" : 663850,
        "bytes belonging to the history store table in the cache" : 2462,
        "bytes currently in the cache" : 2302335,
        "bytes dirty in the cache cumulative" : 1154502803,
        "bytes not belonging to page images in the cache" : 1638485,
        "bytes read into cache" : 615392,
        "bytes written from cache" : 187154920,
        "cache overflow score" : 0,
        "checkpoint blocked page eviction" : 0,
        "eviction calls to get a page" : 6958,
        "eviction calls to get a page found queue empty" : 5861,
        "eviction calls to get a page found queue empty after locking" : 19,
        "eviction currently operating in aggressive mode" : 0,
        "eviction empty score" : 0,
        "eviction passes of a file" : 0,
        "eviction server candidate queue empty when topping up" : 0,
        "eviction server candidate queue not empty when topping up" : 0,
        "eviction server evicting pages" : 0,
        "eviction server slept, because we did not make progress with eviction" : 1772,
        "eviction server unable to reach eviction goal" : 0,
        "eviction server waiting for a leaf page" : 1,
        "eviction state" : 64,
        "eviction walk target pages histogram - 0-9" : 0,
        "eviction walk target pages histogram - 10-31" : 0,
        "eviction walk target pages histogram - 128 and higher" : 0,
        "eviction walk target pages histogram - 32-63" : 0,
        "eviction walk target pages histogram - 64-128" : 0,
        "eviction walk target strategy both clean and dirty pages" : 0,
        "eviction walk target strategy only clean pages" : 0,
        "eviction walk target strategy only dirty pages" : 0,
        "eviction walks abandoned" : 0,
        "eviction walks gave up because they restarted their walk twice" : 0,
        "eviction walks gave up because they saw too many pages and found no candidates" : 0,
        "eviction walks gave up because they saw too many pages and found too few candidates" : 0,
        "eviction walks reached end of tree" : 0,
        "eviction walks started from root of tree" : 0,
        "eviction walks started from saved location in tree" : 0,
        "eviction worker thread active" : 4,
        "eviction worker thread created" : 0,
        "eviction worker thread evicting pages" : 1039,
        "eviction worker thread removed" : 0,
        "eviction worker thread stable number" : 0,
        "files with active eviction walks" : 0,
        "files with new eviction walks started" : 0,
        "force re-tuning of eviction workers once in a while" : 0,
        "forced eviction - history store pages failed to evict while session has history store cursor open" : 0,
        "forced eviction - history store pages selected while session has history store cursor open" : 0,
        "forced eviction - history store pages successfully evicted while session has history store cursor open" : 0,
        "forced eviction - pages evicted that were clean count" : 0,
        "forced eviction - pages evicted that were clean time (usecs)" : 0,
        "forced eviction - pages evicted that were dirty count" : 1,
        "forced eviction - pages evicted that were dirty time (usecs)" : 237,
        "forced eviction - pages selected because of too many deleted items count" : 5,
        "forced eviction - pages selected count" : 1,
        "forced eviction - pages selected unable to be evicted count" : 0,
        "forced eviction - pages selected unable to be evicted time" : 0,
        "forced eviction - session returned rollback error while force evicting due to being oldest" : 0,
        "hazard pointer blocked page eviction" : 2,
        "hazard pointer check calls" : 1040,
        "hazard pointer check entries walked" : 618,
        "hazard pointer maximum array length" : 1,
        "history store key truncation calls that returned restart" : 0,
        "history store key truncation due to mixed timestamps" : 0,
        "history store key truncation due to the key being removed from the data page" : 0,
        "history store score" : 0,
        "history store table insert calls" : 3,
        "history store table insert calls that returned restart" : 0,
        "history store table max on-disk size" : 0,
        "history store table on-disk size" : 36864,
        "history store table out-of-order resolved updates that lose their durable timestamp" : 0,
        "history store table out-of-order updates that were fixed up by moving existing records" : 0,
        "history store table out-of-order updates that were fixed up during insertion" : 0,
        "history store table reads" : 0,
        "history store table reads missed" : 0,
        "history store table reads requiring squashed modifies" : 0,
        "history store table remove calls due to key truncation" : 0,
        "history store table writes requiring squashed modifies" : 0,
        "in-memory page passed criteria to be split" : 0,
        "in-memory page splits" : 0,
        "internal pages evicted" : 0,
        "internal pages queued for eviction" : 0,
        "internal pages seen by eviction walk" : 0,
        "internal pages seen by eviction walk that are already queued" : 0,
        "internal pages split during eviction" : 0,
        "leaf pages split during eviction" : 0,
        "maximum bytes configured" : 1073741824,
        "maximum page size at eviction" : 376,
        "modified pages evicted" : 1038,
        "modified pages evicted by application threads" : 0,
        "operations timed out waiting for space in cache" : 0,
        "overflow pages read into cache" : 0,
        "page split during eviction deepened the tree" : 0,
        "page written requiring history store records" : 1168,
        "pages currently held in the cache" : 78,
        "pages evicted by application threads" : 0,
        "pages queued for eviction" : 0,
        "pages queued for eviction post lru sorting" : 0,
        "pages queued for urgent eviction" : 1040,
        "pages queued for urgent eviction during walk" : 0,
        "pages read into cache" : 74,
        "pages read into cache after truncate" : 1036,
        "pages read into cache after truncate in prepare state" : 0,
        "pages requested from the cache" : 2567797,
        "pages seen by eviction walk" : 0,
        "pages seen by eviction walk that are already queued" : 0,
        "pages selected for eviction unable to be evicted" : 2,
        "pages selected for eviction unable to be evicted as the parent page has overflow items" : 0,
        "pages selected for eviction unable to be evicted because of active children on an internal page" : 0,
        "pages selected for eviction unable to be evicted because of failure in reconciliation" : 0,
        "pages walked for eviction" : 0,
        "pages written from cache" : 11678,
        "pages written requiring in-memory restoration" : 1,
        "percentage overhead" : 8,
        "tracked bytes belonging to internal pages in the cache" : 24830,
        "tracked bytes belonging to leaf pages in the cache" : 2277505,
        "tracked dirty bytes in the cache" : 965,
        "tracked dirty pages in the cache" : 2,
        "unmodified pages evicted" : 0
    },
    "capacity" : {
        "background fsync file handles considered" : 0,
        "background fsync file handles synced" : 0,
        "background fsync time (msecs)" : 0,
        "bytes read" : 425984,
        "bytes written for checkpoint" : 92029428,
        "bytes written for eviction" : 77,
        "bytes written for log" : 825313536,
        "bytes written total" : 917343041,
        "threshold to call fsync" : 0,
        "time waiting due to total capacity (usecs)" : 0,
        "time waiting during checkpoint (usecs)" : 0,
        "time waiting during eviction (usecs)" : 0,
        "time waiting during logging (usecs)" : 0,
        "time waiting during read (usecs)" : 0
    },
    "checkpoint-cleanup" : {
        "pages added for eviction" : 1035,
        "pages removed" : 0,
        "pages skipped during tree walk" : 24192,
        "pages visited" : 34496
    },
    "connection" : {
        "auto adjusting condition resets" : 12656,
        "auto adjusting condition wait calls" : 435335,
        "auto adjusting condition wait raced to update timeout and skipped updating" : 0,
        "detected system time went backwards" : 0,
        "files currently open" : 47,
        "memory allocations" : 8880632,
        "memory frees" : 8864180,
        "memory re-allocations" : 1480705,
        "pthread mutex condition wait calls" : 1101371,
        "pthread mutex shared lock read-lock calls" : 5961333,
        "pthread mutex shared lock write-lock calls" : 271188,
        "total fsync I/Os" : 20536,
        "total read I/Os" : 6840,
        "total write I/Os" : 34031
    },
    "cursor" : {
        "Total number of entries skipped by cursor next calls" : 647,
        "Total number of entries skipped by cursor prev calls" : 97,
        "Total number of entries skipped to position the history store cursor" : 0,
        "cached cursor count" : 70,
        "cursor bulk loaded cursor insert calls" : 0,
        "cursor close calls that result in cache" : 2082208,
        "cursor create calls" : 208084,
        "cursor insert calls" : 15105,
        "cursor insert key and value bytes" : 7689438,
        "cursor modify calls" : 6929,
        "cursor modify key and value bytes affected" : 492073,
        "cursor modify value bytes modified" : 55543,
        "cursor next calls" : 135323,
        "cursor next calls that skip greater than or equal to 100 entries" : 0,
        "cursor next calls that skip less than 100 entries" : 134073,
        "cursor operation restarted" : 0,
        "cursor prev calls" : 653235,
        "cursor prev calls that skip due to a globally visible history store tombstone" : 0,
        "cursor prev calls that skip due to a globally visible history store tombstone in rollback to stable" : 0,
        "cursor prev calls that skip greater than or equal to 100 entries" : 0,
        "cursor prev calls that skip less than 100 entries" : 653235,
        "cursor remove calls" : 90,
        "cursor remove key bytes removed" : 2327,
        "cursor reserve calls" : 0,
        "cursor reset calls" : 5873856,
        "cursor search calls" : 1185916,
        "cursor search history store calls" : 0,
        "cursor search near calls" : 656174,
        "cursor sweep buckets" : 433926,
        "cursor sweep cursors closed" : 0,
        "cursor sweep cursors examined" : 14980,
        "cursor sweeps" : 72321,
        "cursor truncate calls" : 0,
        "cursor update calls" : 0,
        "cursor update key and value bytes" : 0,
        "cursor update value size change" : 0,
        "cursors reused from cache" : 2081876,
        "open cursor count" : 17
    },
    "data-handle" : {
        "connection data handle size" : 456,
        "connection data handles currently active" : 79,
        "connection sweep candidate became referenced" : 0,
        "connection sweep dhandles closed" : 0,
        "connection sweep dhandles removed from hash list" : 5729,
        "connection sweep time-of-death sets" : 23850,
        "connection sweeps" : 6917,
        "session dhandles swept" : 16015,
        "session sweep attempts" : 1246
    },
    "lock" : {
        "checkpoint lock acquisitions" : 1153,
        "checkpoint lock application thread wait time (usecs)" : 34,
        "checkpoint lock internal thread wait time (usecs)" : 0,
        "dhandle lock application thread time waiting (usecs)" : 0,
        "dhandle lock internal thread time waiting (usecs)" : 138,
        "dhandle read lock acquisitions" : 289023,
        "dhandle write lock acquisitions" : 11537,
        "durable timestamp queue lock application thread time waiting (usecs)" : 64,
        "durable timestamp queue lock internal thread time waiting (usecs)" : 0,
        "durable timestamp queue read lock acquisitions" : 2,
        "durable timestamp queue write lock acquisitions" : 6958,
        "metadata lock acquisitions" : 1153,
        "metadata lock application thread wait time (usecs)" : 100,
        "metadata lock internal thread wait time (usecs)" : 1,
        "read timestamp queue lock application thread time waiting (usecs)" : 0,
        "read timestamp queue lock internal thread time waiting (usecs)" : 0,
        "read timestamp queue read lock acquisitions" : 0,
        "read timestamp queue write lock acquisitions" : 1156,
        "schema lock acquisitions" : 1188,
        "schema lock application thread wait time (usecs)" : 10,
        "schema lock internal thread wait time (usecs)" : 7,
        "table lock application thread time waiting for the table lock (usecs)" : 26769,
        "table lock internal thread time waiting for the table lock (usecs)" : 0,
        "table read lock acquisitions" : 0,
        "table write lock acquisitions" : 138467,
        "txn global lock application thread time waiting (usecs)" : 45,
        "txn global lock internal thread time waiting (usecs)" : 106,
        "txn global read lock acquisitions" : 23561,
        "txn global write lock acquisitions" : 33856
    },
    "log" : {
        "busy returns attempting to switch slots" : 18,
        "force archive time sleeping (usecs)" : 0,
        "log bytes of payload data" : 3346940,
        "log bytes written" : 4892032,
        "log files manually zero-filled" : 0,
        "log flush operations" : 651105,
        "log force write operations" : 728222,
        "log force write operations skipped" : 718616,
        "log records compressed" : 1158,
        "log records not compressed" : 6969,
        "log records too small to compress" : 10370,
        "log release advances write LSN" : 1154,
        "log scan operations" : 4,
        "log scan records requiring two reads" : 3,
        "log server thread advances write LSN" : 9606,
        "log server thread write LSN walk skipped" : 163993,
        "log sync operations" : 10184,
        "log sync time duration (usecs)" : 83326246,
        "log sync_dir operations" : 1,
        "log sync_dir time duration (usecs)" : 6611,
        "log write operations" : 18497,
        "logging bytes consolidated" : 4891520,
        "maximum log file size" : 104857600,
        "number of pre-allocated log files to create" : 2,
        "pre-allocated log files not ready and missed" : 1,
        "pre-allocated log files prepared" : 2,
        "pre-allocated log files used" : 0,
        "records processed by log scan" : 14,
        "slot close lost race" : 0,
        "slot close unbuffered waits" : 0,
        "slot closures" : 10760,
        "slot join atomic update races" : 0,
        "slot join calls atomic updates raced" : 0,
        "slot join calls did not yield" : 18497,
        "slot join calls found active slot closed" : 0,
        "slot join calls slept" : 0,
        "slot join calls yielded" : 0,
        "slot join found active slot closed" : 0,
        "slot joins yield time (usecs)" : 0,
        "slot transitions unable to find free slot" : 0,
        "slot unbuffered writes" : 0,
        "total in-memory size of compressed records" : 6789223,
        "total log buffer size" : 33554432,
        "total size of compressed records" : 1751691,
        "written slots coalesced" : 0,
        "yields waiting for previous log file close" : 0
    },
    "perf" : {
        "file system read latency histogram (bucket 1) - 10-49ms" : 0,
        "file system read latency histogram (bucket 2) - 50-99ms" : 0,
        "file system read latency histogram (bucket 3) - 100-249ms" : 0,
        "file system read latency histogram (bucket 4) - 250-499ms" : 0,
        "file system read latency histogram (bucket 5) - 500-999ms" : 0,
        "file system read latency histogram (bucket 6) - 1000ms+" : 0,
        "file system write latency histogram (bucket 1) - 10-49ms" : 9,
        "file system write latency histogram (bucket 2) - 50-99ms" : 0,
        "file system write latency histogram (bucket 3) - 100-249ms" : 0,
        "file system write latency histogram (bucket 4) - 250-499ms" : 0,
        "file system write latency histogram (bucket 5) - 500-999ms" : 0,
        "file system write latency histogram (bucket 6) - 1000ms+" : 0,
        "operation read latency histogram (bucket 1) - 100-249us" : 983,
        "operation read latency histogram (bucket 2) - 250-499us" : 146,
        "operation read latency histogram (bucket 3) - 500-999us" : 40,
        "operation read latency histogram (bucket 4) - 1000-9999us" : 89,
        "operation read latency histogram (bucket 5) - 10000us+" : 0,
        "operation write latency histogram (bucket 1) - 100-249us" : 51,
        "operation write latency histogram (bucket 2) - 250-499us" : 9,
        "operation write latency histogram (bucket 3) - 500-999us" : 3,
        "operation write latency histogram (bucket 4) - 1000-9999us" : 1,
        "operation write latency histogram (bucket 5) - 10000us+" : 0
    },
    "reconciliation" : {
        "approximate byte size of timestamps in pages written" : 5760,
        "approximate byte size of transaction IDs in pages written" : 55856,
        "fast-path pages deleted" : 0,
        "maximum seconds spent in a reconciliation call" : 0,
        "page reconciliation calls" : 14773,
        "page reconciliation calls for eviction" : 1038,
        "page reconciliation calls that resulted in values with prepared transaction metadata" : 0,
        "page reconciliation calls that resulted in values with timestamps" : 214,
        "page reconciliation calls that resulted in values with transaction ids" : 3462,
        "pages deleted" : 3107,
        "pages written including an aggregated newest start durable timestamp " : 1185,
        "pages written including an aggregated newest stop durable timestamp " : 25,
        "pages written including an aggregated newest stop timestamp " : 9,
        "pages written including an aggregated newest stop transaction ID" : 9,
        "pages written including an aggregated oldest start timestamp " : 13,
        "pages written including an aggregated oldest start transaction ID " : 7,
        "pages written including an aggregated prepare" : 0,
        "pages written including at least one prepare state" : 0,
        "pages written including at least one start durable timestamp" : 216,
        "pages written including at least one start timestamp" : 216,
        "pages written including at least one start transaction ID" : 3464,
        "pages written including at least one stop durable timestamp" : 24,
        "pages written including at least one stop timestamp" : 24,
        "pages written including at least one stop transaction ID" : 24,
        "records written including a prepare state" : 0,
        "records written including a start durable timestamp" : 296,
        "records written including a start timestamp" : 296,
        "records written including a start transaction ID" : 6918,
        "records written including a stop durable timestamp" : 64,
        "records written including a stop timestamp" : 64,
        "records written including a stop transaction ID" : 64,
        "split bytes currently awaiting free" : 0,
        "split objects currently awaiting free" : 0
    },
    "session" : {
        "open session count" : 17,
        "session query timestamp calls" : 4,
        "table alter failed calls" : 0,
        "table alter successful calls" : 0,
        "table alter unchanged and skipped" : 0,
        "table compact failed calls" : 0,
        "table compact successful calls" : 0,
        "table create failed calls" : 0,
        "table create successful calls" : 1,
        "table drop failed calls" : 0,
        "table drop successful calls" : 0,
        "table import failed calls" : 0,
        "table import successful calls" : 0,
        "table rebalance failed calls" : 0,
        "table rebalance successful calls" : 0,
        "table rename failed calls" : 0,
        "table rename successful calls" : 0,
        "table salvage failed calls" : 0,
        "table salvage successful calls" : 0,
        "table truncate failed calls" : 0,
        "table truncate successful calls" : 0,
        "table verify failed calls" : 0,
        "table verify successful calls" : 0
    },
    "thread-state" : {
        "active filesystem fsync calls" : 0,
        "active filesystem read calls" : 0,
        "active filesystem write calls" : 0
    },
    "thread-yield" : {
        "application thread time evicting (usecs)" : 0,
        "application thread time waiting for cache (usecs)" : 0,
        "connection close blocked waiting for transaction state stabilization" : 0,
        "connection close yielded for lsm manager shutdown" : 0,
        "data handle lock yielded" : 0,
        "get reference for page index and slot time sleeping (usecs)" : 0,
        "log server sync yielded for log write" : 0,
        "page access yielded due to prepare state change" : 0,
        "page acquire busy blocked" : 0,
        "page acquire eviction blocked" : 0,
        "page acquire locked blocked" : 0,
        "page acquire read blocked" : 0,
        "page acquire time sleeping (usecs)" : 0,
        "page delete rollback time sleeping for state change (usecs)" : 0,
        "page reconciliation yielded due to child modification" : 0
    },
    "transaction" : {
        "Number of prepared updates" : 0,
        "durable timestamp queue entries walked" : 1466,
        "durable timestamp queue insert to empty" : 5492,
        "durable timestamp queue inserts to head" : 1466,
        "durable timestamp queue inserts total" : 6958,
        "durable timestamp queue length" : 1,
        "prepared transactions" : 0,
        "prepared transactions committed" : 0,
        "prepared transactions currently active" : 0,
        "prepared transactions rolled back" : 0,
        "query timestamp calls" : 802904,
        "read timestamp queue entries walked" : 676,
        "read timestamp queue insert to empty" : 480,
        "read timestamp queue inserts to head" : 676,
        "read timestamp queue inserts total" : 1156,
        "read timestamp queue length" : 1,
        "rollback to stable calls" : 0,
        "rollback to stable hs records with stop timestamps older than newer records" : 0,
        "rollback to stable keys removed" : 0,
        "rollback to stable keys restored" : 0,
        "rollback to stable pages visited" : 0,
        "rollback to stable restored tombstones from history store" : 0,
        "rollback to stable sweeping history store keys" : 0,
        "rollback to stable tree walk skipping pages" : 0,
        "rollback to stable updates aborted" : 0,
        "rollback to stable updates removed from history store" : 0,
        "set timestamp calls" : 13866,
        "set timestamp durable calls" : 0,
        "set timestamp durable updates" : 0,
        "set timestamp oldest calls" : 6933,
        "set timestamp oldest updates" : 6933,
        "set timestamp stable calls" : 6933,
        "set timestamp stable updates" : 6932,
        "transaction begins" : 1510630,
        "transaction checkpoint currently running" : 0,
        "transaction checkpoint generation" : 1154,
        "transaction checkpoint history store file duration (usecs)" : 81,
        "transaction checkpoint max time (msecs)" : 291,
        "transaction checkpoint min time (msecs)" : 37,
        "transaction checkpoint most recent time (msecs)" : 75,
        "transaction checkpoint prepare currently running" : 0,
        "transaction checkpoint prepare max time (msecs)" : 11,
        "transaction checkpoint prepare min time (msecs)" : 1,
        "transaction checkpoint prepare most recent time (msecs)" : 3,
        "transaction checkpoint prepare total time (msecs)" : 2887,
        "transaction checkpoint scrub dirty target" : 0,
        "transaction checkpoint scrub time (msecs)" : 0,
        "transaction checkpoint total time (msecs)" : 82776,
        "transaction checkpoints" : 1153,
        "transaction checkpoints skipped because database was clean" : 0,
        "transaction failures due to history store" : 0,
        "transaction fsync calls for checkpoint after allocating the transaction ID" : 1153,
        "transaction fsync duration for checkpoint after allocating the transaction ID (usecs)" : 25206,
        "transaction range of IDs currently pinned" : 0,
        "transaction range of IDs currently pinned by a checkpoint" : 0,
        "transaction range of timestamps currently pinned" : 21474836480,
        "transaction range of timestamps pinned by a checkpoint" : NumberLong("6907410767591505921"),
        "transaction range of timestamps pinned by the oldest active read timestamp" : 0,
        "transaction range of timestamps pinned by the oldest timestamp" : 21474836480,
        "transaction read timestamp of the oldest active reader" : 0,
        "transaction sync calls" : 0,
        "transactions committed" : 15040,
        "transactions rolled back" : 1496352,
        "update conflicts" : 0
    },
    "concurrentTransactions" : {
        "write" : {
            "out" : 0,
            "available" : 128,
            "totalTickets" : 128
        },
        "read" : {
            "out" : 1,
            "available" : 127,
            "totalTickets" : 128
        }
    },
    "snapshot-window-settings" : {
        "cache pressure percentage threshold" : 95,
        "current cache pressure percentage" : NumberLong(0),
        "total number of SnapshotTooOld errors" : NumberLong(0),
        "max target available snapshots window size in seconds" : 5,
        "target available snapshots window size in seconds" : 5,
        "current available snapshots window size in seconds" : 5,
        "latest majority snapshot timestamp available" : "Dec 18 10:01:35:1",
        "oldest majority snapshot timestamp available" : "Dec 18 10:01:30:1"
    },
    "oplog" : {
        "visibility timestamp" : Timestamp(1608256895, 1)
    }
}


1. wiredTiger.uri: New in version 3.0. A string. For internal use by MongoDB.
2. wiredTiger.LSM: New in version 3.0. A document returning statistics on LSM (Log-Structured Merge) trees. The values reflect statistics for all LSM trees used on this server.
3. wiredTiger.async: New in version 3.0. A document returning statistics related to the asynchronous operations API. This is unused by MongoDB.
4. wiredTiger.block-manager: New in version 3.0. A document returning statistics on block-manager operations.
5. wiredTiger.cache: New in version 3.0. A document returning statistics on the cache and on page eviction from the cache.
The following describes some of the key wiredTiger.cache statistics:
6. wiredTiger.cache.maximum bytes configured: the maximum cache size.
7. wiredTiger.cache.bytes currently in the cache: the size in bytes of the data currently held in the cache. This value should not be greater than maximum bytes configured.
8. wiredTiger.cache.unmodified pages evicted: the primary statistic for page eviction.
9. wiredTiger.cache.tracked dirty bytes in the cache: the size in bytes of the dirty data in the cache. This value should be less than bytes currently in the cache.
10. wiredTiger.cache.pages read into cache: the number of pages read into the cache. Together with wiredTiger.cache.pages written from cache, this gives an overview of I/O activity.
11. wiredTiger.cache.pages written from cache: the number of pages written from the cache. Together with wiredTiger.cache.pages read into cache, this gives an overview of I/O activity. To adjust the size of the WiredTiger internal cache, see storage.wiredTiger.engineConfig.cacheSizeGB and --wiredTigerCacheSizeGB. Avoid increasing the WiredTiger internal cache size above its default value.
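As a rough illustration of how these cache counters can be combined into health ratios, here is a minimal sketch in plain JavaScript. The sample object is hand-built from the output above; the "bytes currently in the cache" value is an assumption (that counter is not visible in the truncated dump) taken as the sum of the tracked internal and leaf page bytes (24830 + 2277505), and `cachePressure` is a hypothetical helper, not part of any driver:

```javascript
// Hypothetical helper: turn a few counters from
// db.serverStatus().wiredTiger.cache into ratios against the configured maximum.
function cachePressure(cache) {
  const max = cache["maximum bytes configured"];
  return {
    // Should stay well below 1.0; eviction normally begins before the cache is full.
    fillRatio: cache["bytes currently in the cache"] / max,
    // WiredTiger tries to keep dirty data a small fraction of the cache.
    dirtyRatio: cache["tracked dirty bytes in the cache"] / max,
  };
}

const sample = {
  "maximum bytes configured": 1073741824,
  "bytes currently in the cache": 2302335, // assumed: 24830 + 2277505 from the dump above
  "tracked dirty bytes in the cache": 965,
};
console.log(cachePressure(sample));
```

With a healthy cache both ratios are far below 1; a dirty ratio that keeps creeping upward is an early sign that eviction threads cannot keep up with the write load.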
12. wiredTiger.connection: New in version 3.0. A document returning statistics related to WiredTiger connections.
13. wiredTiger.cursor: New in version 3.0. A document returning statistics on WiredTiger cursors.
14. wiredTiger.data-handle: New in version 3.0. A document returning statistics on data handles and sweeps.
15. wiredTiger.log: New in version 3.0. A document returning statistics on WiredTiger's write-ahead log. See also: Journaling and the WiredTiger storage engine.
16. wiredTiger.reconciliation: New in version 3.0. A document returning statistics on the reconciliation process.
17. wiredTiger.session: New in version 3.0. A document returning the open cursor count and open session count for the session.
18. wiredTiger.thread-yield: New in version 3.0. A document returning statistics on yields during page acquisition.
19. wiredTiger.transaction: New in version 3.0. A document returning statistics on transaction checkpoints and operations.
20. wiredTiger.transaction.transaction checkpoint most recent time: the amount of time, in milliseconds, taken to create the most recent checkpoint. An increase in this value under a steady write load may indicate saturation of the I/O system.
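To watch for the I/O saturation signal just described, the checkpoint counters can be reduced to a few headline numbers. A minimal sketch in plain JavaScript (`checkpointSummary` is a hypothetical helper; the values are hand-copied from the transaction section of the output above):

```javascript
// Summarize checkpoint timing from db.serverStatus().wiredTiger.transaction.
function checkpointSummary(txn) {
  const count = txn["transaction checkpoints"];
  return {
    recentMs: txn["transaction checkpoint most recent time (msecs)"],
    maxMs: txn["transaction checkpoint max time (msecs)"],
    // Average checkpoint duration over the life of the process.
    avgMs: txn["transaction checkpoint total time (msecs)"] / count,
  };
}

const sample = {
  "transaction checkpoint most recent time (msecs)": 75,
  "transaction checkpoint max time (msecs)": 291,
  "transaction checkpoint total time (msecs)": 82776,
  "transaction checkpoints": 1153,
};
console.log(checkpointSummary(sample));
```

A recent value that keeps climbing toward the maximum under a steady write load is the pattern worth investigating.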
21. wiredTiger.concurrentTransactions: New in version 3.0. A document returning information on the number of concurrent read and write transactions allowed into the WiredTiger storage engine. These settings are MongoDB-specific. To change the settings for concurrent read and write transactions, see wiredTigerConcurrentReadTransactions and wiredTigerConcurrentWriteTransactions.
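A quick way to spot ticket exhaustion in this subsection: when "available" reaches 0 for either queue, new operations of that kind must wait for a ticket. A minimal sketch in plain JavaScript (`exhaustedQueues` is a hypothetical helper; the sample values come from the output above):

```javascript
// Return the names of ticket queues ("read"/"write") that are fully exhausted,
// given the concurrentTransactions document from serverStatus.
function exhaustedQueues(concurrent) {
  return Object.keys(concurrent).filter((k) => concurrent[k].available === 0);
}

const sample = {
  write: { out: 0, available: 128, totalTickets: 128 },
  read: { out: 1, available: 127, totalTickets: 128 },
};
console.log(exhaustedQueues(sample)); // → []
```

In the sample above neither queue is exhausted; a persistently exhausted read or write queue usually points to slow operations holding tickets rather than to the ticket limit itself.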
22. writeBacksQueued: a boolean that indicates whether there are operations from a mongos instance queued for retrying. Typically, this value is false. See also writeBacks.