Commonly used commands for Go tooling, GC, and gRPC

Go tooling


  • go tool pprof -web ~/x.profile
  • go tool trace
    The package also exports a handler that serves execution trace data for the "go tool trace" command. To collect a 5-second execution trace:
wget -O trace.out http://localhost:6060/debug/pprof/trace?seconds=5
go tool trace trace.out
  • Memory leaks (the endpoints below require the net/http/pprof handlers; see the sketch after this list)
    • Export the heap profile at time point 1: curl -s http://127.0.0.1:8080/debug/pprof/heap > base.heap
    • Export the heap profile at time point 2: curl -s http://127.0.0.1:8080/debug/pprof/heap > current.heap
    • go tool pprof --base base.heap current.heap
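
All of the /debug/pprof endpoints above assume the program imports net/http/pprof and serves HTTP. A minimal sketch (the listen address is illustrative; adjust it to match your service):

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// Serve the pprof endpoints (heap, trace, block, mutex, ...) on :6060.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... application code ...
	select {}
}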

Setting nodefraction=0 shows the entire map of allocated objects, including the smaller ones.
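
For example, you can pass it as a command-line flag or set it at the interactive prompt (the URL is illustrative):

go tool pprof -nodefraction=0 http://127.0.0.1:8080/debug/pprof/heap
(pprof) nodefraction=0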

The list command locates source code by searching under your GOPATH. If the path root recorded in the profile does not match your machine (it depends on where the binary was built), use the -trim_path option to fix it so you can see the annotated source. Remember to check out the git commit that was running when the heap profile was captured.

To inspect the corresponding assembly code, use the disasm <regex> command. Both commands are powerful, but reading code on the command line is not very convenient, so you can use the weblist <regex> command instead: it takes the same arguments, but opens a page in the browser that shows the source code and assembly side by side.

  • Run the o command to inspect the current option settings.
(pprof) o
  call_tree                 = false
  compact_labels            = true
  cumulative                = flat                 //: [cum | flat]
  divide_by                 = 1
  drop_negative             = false
  edgefraction              = 0.001
  focus                     = ""
  granularity               = filefunctions        //: [addresses | filefunctions | files | functions | lines]
  hide                      = ""
  ignore                    = ""
  mean                      = false
  nodecount                 = -1                   //: default
  nodefraction              = 0.005
  noinlines                 = false
  normalize                 = false
  output                    = ""
  prune_from                = ""
  relative_percentages      = false
  sample_index              = delay                //: [contentions | delay]
  show                      = ""
  show_from                 = ""
  tagfocus                  = ""
  taghide                   = ""
  tagignore                 = ""
  tagshow                   = ""
  trim                      = true
  trim_path                 = ""
  unit                      = minimum
go tool pprof -http=':' http://10.252.3.53:8051/debug/pprof/heap
  • Mutex and block profiles are disabled by default.
  • block: Block profile shows where goroutines block waiting on synchronization primitives (including timer channels). Block profile is not enabled by default; use runtime.SetBlockProfileRate to enable it.
  • mutex: Mutex profile reports the lock contentions. When you think your CPU is not fully utilized due to a mutex contention, use this profile. Mutex profile is not enabled by default, see runtime.SetMutexProfileFraction to enable it.
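
A minimal sketch of enabling both profiles at program start (the rate values are illustrative; see the runtime documentation for their exact meaning):

package main

import (
	"net/http"
	_ "net/http/pprof"
	"runtime"
)

func main() {
	// Record every blocking event in the block profile.
	runtime.SetBlockProfileRate(1)
	// Report every mutex contention event (on average 1/rate events are reported).
	runtime.SetMutexProfileFraction(1)

	// Expose the profiles via /debug/pprof/block and /debug/pprof/mutex.
	go http.ListenAndServe("localhost:6060", nil)

	// ... application code ...
	select {}
}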
(ENV_3.7.6) ➜ /home/admin/go/src ☞ git:(bug/update_mig) ✗  go tool pprof http://localhost:6060/debug/pprof/block
Fetching profile over HTTP from http://localhost:6060/debug/pprof/block
Saved profile in /home/admin/pprof/pprof.test_wait_group2.contentions.delay.007.pb.gz
File: test_wait_group2
Type: delay
Time: Dec 27, 2020 at 9:59am (EST)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 4s, 100% of 4s total
      flat  flat%   sum%        cum   cum%
        2s 50.00% 50.00%         2s 50.00%  sync.(*WaitGroup).Wait
        2s 50.00%   100%         2s 50.00%  runtime.chanrecv1
         0     0%   100%         2s 50.00%  main.main
         0     0%   100%         2s 50.00%  main.testWaitGroup
         0     0%   100%         2s 50.00%  runtime.main
(pprof) list main.main
Total: 4s
ROUTINE ======================== main.main in /home/admin/go/src/github.com/microyahoo/go-exercises/test_wait_group2.go
         0         2s (flat, cum) 50.00% of Total
         .          .     27:
         .          .     28:   go testWaitGroup()
         .          .     29:
         .          .     30:   for {
         .          .     31:           select {
         .         2s     32:           case <-ticker.C:
         .          .     33:                   fmt.Println("hello")
         .          .     34:           }
         .          .     35:   }
         .          .     36:
         .          .     37:   // for range ticker.C {
(pprof) traces
File: test_wait_group2
Type: delay
Time: Dec 27, 2020 at 9:59am (EST)
-----------+-------------------------------------------------------
        2s   sync.(*WaitGroup).Wait
             main.testWaitGroup
-----------+-------------------------------------------------------
        2s   runtime.chanrecv1
             main.main
             runtime.main
-----------+-------------------------------------------------------
(pprof)

GC

GODEBUG=gctrace=1 prints garbage collector events at each collection, summarizing the amount of memory collected and the length of the pause.
GODEBUG=schedtrace=X prints scheduling events every X milliseconds.
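
The gc1 binary used in the run below is not shown in this article; a hypothetical sketch of a program that allocates a large slice and logs the allocated heap bytes, producing similar output, might look like this:

package main

import (
	"log"
	"runtime"
)

// printAlloc logs the number of heap bytes currently allocated.
func printAlloc() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	log.Println(m.Alloc)
}

func main() {
	printAlloc()

	// Allocate a large slice so the collector has work to do.
	data := make([]byte, 400<<20) // ~400 MB
	printAlloc()

	runtime.KeepAlive(data)
	printAlloc()
}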

(ENV_3.7.6) 🍺 /home/admin/go-exercises ☞ git:(master) ✗ GODEBUG=gctrace=1 ./gc1
2020/12/27 09:57:16 215584
2020/12/27 09:57:16 215584
2020/12/27 09:57:16 215584
2020/12/27 09:57:16 66781184
gc 1 @0.003s 17%: 1.4+3.2+0.066 ms clock, 11+0.094/0.11/0+0.53 ms cpu, 95->95->95 MB, 96 MB goal, 8 P
gc 2 @0.009s 22%: 0.71+0.35+0.045 ms clock, 5.6+0.085/0.22/0.011+0.36 ms cpu, 191->191->95 MB, 192 MB goal, 8 P
gc 3 @0.010s 66%: 14+0.16+0.005 ms clock, 116+0.048/0.20/0+0.047 ms cpu, 191->191->95 MB, 192 MB goal, 8 P
gc 4 @0.027s 73%: 13+0.32+0.006 ms clock, 111+0.18/0.20/0.099+0.048 ms cpu, 190->190->95 MB, 191 MB goal, 8 P
gc 5 @0.041s 73%: 6.4+2.0+0.004 ms clock, 51+0.041/0.17/0+0.036 ms cpu, 190->190->95 MB, 191 MB goal, 8 P
gc 6 @0.050s 76%: 6.0+0.23+0.005 ms clock, 48+0.14/0.14/0.037+0.041 ms cpu, 190->190->95 MB, 191 MB goal, 8 P
gc 7 @0.057s 76%: 5.6+1.0+0.005 ms clock, 45+0.11/0.13/0.030+0.044 ms cpu, 191->286->190 MB, 192 MB goal, 8 P
gc 8 @0.071s 68%: 0.006+0.23+0.004 ms clock, 0.050+0/0.19/0.13+0.037 ms cpu, 381->381->0 MB, 382 MB goal, 8 P
2020/12/27 09:57:16 400233568
2020/12/27 09:57:16 1000457904
2020/12/27 09:57:16 400233568
2020/12/27 09:57:16 469303296

The current format (as documented under the Go runtime's environment variables) is:

gc # @#s #%: #+#+# ms clock, #+#/#/#+# ms cpu, #->#-># MB, # MB goal, # P

where the fields are as follows ('#' stands for a number):

  gc #             the GC number, incremented at each GC
  @#s              time in seconds since program start
  #%               percentage of time spent in GC since program start
  #+...+#          wall-clock/CPU times for the phases of the GC
  #->#-># MB       heap size at GC start, at GC end, and live heap
  # MB goal        goal heap size
  # P              number of processors used

The phases are stop-the-world (STW) sweep termination, concurrent mark and scan, and STW mark termination.
The CPU time for mark/scan is broken down into assist time (GC performed in line with allocation), background GC time, and idle GC time.
If the line ends with "(forced)", this GC was forced by a runtime.GC() call.
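
As a worked example, the first gctrace line from the run above decodes as follows:

gc 1 @0.003s 17%: 1.4+3.2+0.066 ms clock, 11+0.094/0.11/0+0.53 ms cpu, 95->95->95 MB, 96 MB goal, 8 P

  gc 1                         the first GC cycle since program start
  @0.003s                      it started 0.003 seconds after the program started
  17%                          17% of the program's time so far was spent in GC
  1.4+3.2+0.066 ms clock       wall-clock time: 1.4 ms STW sweep termination, 3.2 ms concurrent mark/scan, 0.066 ms STW mark termination
  11+0.094/0.11/0+0.53 ms cpu  CPU time: 11 ms sweep termination, 0.094/0.11/0 ms mark/scan split into assist/background/idle, 0.53 ms mark termination
  95->95->95 MB                95 MB heap at GC start, 95 MB at GC end, 95 MB live
  96 MB goal                   the target heap size was 96 MB
  8 P                          8 processors were used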


Debug

We recommend disabling compiler optimizations when building the code being debugged. The following command builds a package with no compiler optimizations:

$ go build -gcflags=all="-N -l"

As part of this improvement effort, Go 1.10 introduced a new compiler flag, -dwarflocationlists. The flag causes the compiler to add location lists that help debuggers work with optimized binaries. The following command builds a package with optimizations but with DWARF location lists:

$ go build -gcflags="-dwarflocationlists=true"
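
For example, assuming the Delve debugger is installed and the package builds a binary named myprog (both are illustrative):

$ go build -gcflags=all="-N -l" -o myprog .
$ dlv exec ./myprog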

gRPC

Turn on logging for the client and server
For the Go client:

GRPC_GO_LOG_VERBOSITY_LEVEL=99 
GRPC_GO_LOG_SEVERITY_LEVEL=info

For the C++ server:

GRPC_VERBOSITY=DEBUG 
GRPC_TRACE=server_channel
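
For example, to run a Go client binary with verbose gRPC logging (the binary name is illustrative):

GRPC_GO_LOG_VERBOSITY_LEVEL=99 GRPC_GO_LOG_SEVERITY_LEVEL=info ./my_grpc_client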
