Akka Documentation Translation (Part 1): Configuration

You can start using Akka without defining any configuration, since sensible defaults are provided out of the box. You may, however, need to amend the settings to change the default behavior or to adapt to specific runtime environments. Typical examples of settings that you might amend:

  • the log level and logging backend
  • enabling remote capabilities
  • message serialization
  • definition of routers
  • tuning of dispatchers

Akka uses the Typesafe Config library, which may also be a good choice for configuring your own application or library, whether or not it is built with Akka. The library is implemented in Java with no external dependencies; you should have a look at its documentation (in particular the part about ConfigFactory), which is only summarized below.

Warning
If you are using Akka from the Scala REPL of the 2.9.x series and you do not provide your own ClassLoader to the ActorSystem, start the REPL with "-Yrepl-sync" to work around a deficiency in the REPL's provided context class loader.

Where Configuration Is Read From

All configuration for Akka is held within instances of ActorSystem; in other words, as viewed from the outside, the ActorSystem is the only consumer of configuration information. While constructing an actor system, you can either pass in a Config object or not, where the second case is equivalent to passing ConfigFactory.load() (with the right class loader). This roughly means that the default is to parse all application.conf, application.json and application.properties files found at the root of the class path; please refer to the aforementioned documentation for details. The actor system then merges in all reference.conf resources found at the root of the class path to form the fallback configuration, i.e. internally it uses:

appConfig.withFallback(ConfigFactory.defaultReference(classLoader))

The philosophy is that code never contains default values; instead it relies on their presence in the reference.conf supplied by the module in question.
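As a minimal sketch of this fallback behaviour (using made-up keys, not taken from any reference.conf), two parsed snippets standing in for a library's reference.conf and a user's application.conf can be stacked the same way:

import com.typesafe.config.ConfigFactory

// stands in for a library's reference.conf (the default value)
val reference = ConfigFactory.parseString("myapp.answer = 42")
// stands in for the user's application.conf (the override)
val application = ConfigFactory.parseString("myapp.answer = 43")

val merged = application.withFallback(reference).resolve()
println(merged.getInt("myapp.answer"))  // prints 43: application wins over the reference default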

Highest precedence is given to overrides passed in as system properties; see the HOCON specification (near the bottom). Also noteworthy is that the application configuration (which defaults to application) may be overridden using the config.resource property (there are more options; please refer to the Config documentation).

Note
If you are writing an Akka application, keep your configuration in application.conf at the root of the class path. If you are writing an Akka-based library, keep its configuration in reference.conf at the root of the JAR file.

When Using JarJar, OneJar, Assembly or Any Jar-Bundler

Warning
Akka's configuration approach relies heavily on the notion that every module/JAR has its own reference.conf file; all of these are discovered by the configuration machinery and loaded. Unfortunately this also means that if you put or merge multiple JARs into the same JAR, you need to merge all of their reference.conf files as well. Otherwise all defaults will be lost and Akka will not function.

If you are using Maven to package your application, you can also make use of the Apache Maven Shade Plugin's support for resource transformers to merge all reference.conf files on the build class path into one.
The plugin configuration might look like this:

<plugin>
 <groupId>org.apache.maven.plugins</groupId>
 <artifactId>maven-shade-plugin</artifactId>
 <version>1.5</version>
 <executions>
  <execution>
   <phase>package</phase>
   <goals>
    <goal>shade</goal>
   </goals>
   <configuration>
    <shadedArtifactAttached>true</shadedArtifactAttached>
    <shadedClassifierName>allinone</shadedClassifierName>
    <artifactSet>
     <includes>
      <include>*:*</include>
     </includes>
    </artifactSet>
    <transformers>
      <transformer
       implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
       <resource>reference.conf</resource>
      </transformer>
      <transformer
       implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
       <manifestEntries>
        <Main-Class>akka.Main</Main-Class>
       </manifestEntries>
      </transformer>
    </transformers>
   </configuration>
  </execution>
 </executions>
</plugin>
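The same concern applies to sbt builds. A sketch of the corresponding merge rule, assuming the sbt-assembly plugin (0.14.x syntax) is enabled in the build, might look like this:

// build.sbt: concatenate every reference.conf found on the class path
// instead of keeping only one of them
assemblyMergeStrategy in assembly := {
  case "reference.conf" => MergeStrategy.concat
  case other =>
    // fall back to the plugin's default strategy for everything else
    val defaultStrategy = (assemblyMergeStrategy in assembly).value
    defaultStrategy(other)
}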

Custom application.conf

A custom application.conf might look like this:

# In this file you can override any option defined in the reference files.
# Copy in parts of the reference files and modify as you please.

akka {

  # Loggers to register at boot time (akka.event.Logging$DefaultLogger logs
  # to STDOUT)
  loggers = ["akka.event.slf4j.Slf4jLogger"]

  # Log level used by the configured loggers (see "loggers") as soon
  # as they have been started; before that, see "stdout-loglevel"
  # Options: OFF, ERROR, WARNING, INFO, DEBUG
  loglevel = "DEBUG"

  # Log level for the very basic logger activated during ActorSystem startup.
  # This logger prints the log messages to stdout (System.out).
  # Options: OFF, ERROR, WARNING, INFO, DEBUG
  stdout-loglevel = "DEBUG"

  # Filter of log events that is used by the LoggingAdapter before
  # publishing log events to the eventStream.
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"

  actor {
    provider = "cluster"

    default-dispatcher {
      # Throughput for default Dispatcher, set to 1 for as fair as possible
      throughput = 10
    }
  }

  remote {
    # The port clients should connect to. Default is 2552.
    netty.tcp.port = 4711
  }
}
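With such an application.conf at the root of the class path (and, for this particular example, akka-cluster and akka-slf4j also on the class path), a plain ActorSystem picks the overrides up automatically. A minimal sketch:

import akka.actor.ActorSystem

val system = ActorSystem("MyApp")  // uses ConfigFactory.load() internally
// the overridden values are visible through the system's settings
println(system.settings.config.getString("akka.loglevel"))           // "DEBUG"
println(system.settings.config.getInt("akka.remote.netty.tcp.port")) // 4711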

Including Files

Sometimes it can be useful to include another configuration file, for example if you have one application.conf with all environment-independent settings and then override some of them for specific environments.
Specifying the system property -Dconfig.resource=/dev.conf will load the dev.conf file, which in turn includes application.conf.
dev.conf:

include "application"

akka {
  loglevel = "DEBUG"
}
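A minimal sketch of the programmatic equivalent (an assumption for illustration, assuming dev.conf is on the class path): the same property can be set before the configuration is first loaded, invalidating the Config cache in case it has already been read:

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

System.setProperty("config.resource", "/dev.conf")
ConfigFactory.invalidateCaches()   // drop any already-cached default config
val system = ActorSystem("MyApp")  // now loads dev.conf, which includes application.conf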

More advanced include and substitution mechanisms are explained in the HOCON specification.

Logging of Configuration

If the system property or config setting akka.log-config-on-start is set to on, the complete configuration is logged at INFO level when the actor system is started. This is useful when you are uncertain which configuration is in use.
If in doubt, you can also easily inspect configuration objects before or after using them to construct an actor system:

Welcome to Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0).
Type in expressions to have them evaluated.
Type :help for more information.

scala> import com.typesafe.config._
import com.typesafe.config._

scala> ConfigFactory.parseString("a.b=12")
res0: com.typesafe.config.Config = Config(SimpleConfigObject({"a" : {"b" : 12}}))

scala> res0.root.render
res1: java.lang.String =
{
    # String: 1
    "a" : {
        # String: 1
        "b" : 12
    }
}

The comments preceding every item give detailed information about the origin of the setting (file and line number) plus any comments that were present, e.g. in the reference configuration. The settings as merged with the reference configuration and parsed by the actor system can be displayed like this:

final ActorSystem system = ActorSystem.create();
System.out.println(system.settings());
// this is a shortcut for system.settings().config().root().render()
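Alternatively, the akka.log-config-on-start switch mentioned above can be flipped programmatically when the system is created. A small sketch, assuming the usual ConfigFactory.load() stack:

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

val debugConf = ConfigFactory.parseString("akka.log-config-on-start = on")
// the full, merged configuration is then logged at INFO level on startup
val system = ActorSystem("Inspect", ConfigFactory.load(debugConf))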

A Word About ClassLoaders

In several places of the configuration file it is possible to specify the fully-qualified class name of something to be instantiated by Akka. This is done using Java reflection, which in turn uses a ClassLoader. Getting the right one in challenging environments like application containers or OSGi bundles is not always trivial. The current approach of Akka is that each ActorSystem implementation stores the current thread's context class loader (if available, otherwise just its own loader, as in this.getClass.getClassLoader) and uses it for all reflective accesses. This implies that putting Akka on the boot class path will yield NullPointerException from strange places: this is simply not supported.

Application Specific Settings

The configuration can also be used for application-specific settings. A good practice is to place those settings in an Extension, as described in the API documentation (a minimal sketch follows below the list):

  • Scala API: Application specific settings
  • Java API: Application specific settings
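A minimal Scala sketch of such a settings Extension, following that pattern; the myapp.* keys and class names here are made up for illustration:

import java.util.concurrent.TimeUnit
import scala.concurrent.duration.{Duration, FiniteDuration}
import akka.actor.{ExtendedActorSystem, Extension, ExtensionId, ExtensionIdProvider}
import com.typesafe.config.Config

class SettingsImpl(config: Config) extends Extension {
  // read the application-specific part of the configuration once, at system start
  val DbUri: String = config.getString("myapp.db.uri")
  val CircuitBreakerTimeout: FiniteDuration =
    Duration(config.getDuration("myapp.circuit-breaker.timeout", TimeUnit.MILLISECONDS),
      TimeUnit.MILLISECONDS)
}

object Settings extends ExtensionId[SettingsImpl] with ExtensionIdProvider {
  override def lookup = Settings
  override def createExtension(system: ExtendedActorSystem): SettingsImpl =
    new SettingsImpl(system.settings.config)
}

// usage from anywhere that has the ActorSystem in scope:
//   val settings = Settings(system)
//   settings.DbUri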

Configuring Multiple ActorSystems

If you have more than one ActorSystem (or you are writing a library and have an ActorSystem that may be separate from the application's), you may want to separate the configuration for each system.

Given that ConfigFactory.load() merges all resources with a matching name from the whole class path, it is easiest to utilize that functionality and differentiate actor systems within the hierarchy of the configuration:

myapp1 {
  akka.loglevel = "WARNING"
  my.own.setting = 43
}
myapp2 {
  akka.loglevel = "ERROR"
  app2.setting = "appname"
}
my.own.setting = 42
my.other.setting = "hello"

val config = ConfigFactory.load()
val app1 = ActorSystem("MyApp1", config.getConfig("myapp1").withFallback(config))
val app2 = ActorSystem("MyApp2",
  config.getConfig("myapp2").withOnlyPath("akka").withFallback(config))

These two samples demonstrate different variations of the "lift-a-subtree" trick: in the first case, the configuration accessible from within the actor system is this:

akka.loglevel = "WARNING"
my.own.setting = 43
my.other.setting = "hello"
// plus myapp1 and myapp2 subtrees

while in the second one, only the "akka" subtree is lifted, with the following result:

akka.loglevel = "ERROR"
my.own.setting = 42
my.other.setting = "hello"
// plus myapp1 and myapp2 subtrees

Note
The configuration library is really powerful, and explaining all of its features exceeds the scope of this chapter. In particular, it is not covered how to include other configuration files within other files (see the small example under Including Files) or how to copy parts of the configuration tree by way of path substitutions.
You may also specify and parse the configuration programmatically in other ways when instantiating the ActorSystem:

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

val customConf = ConfigFactory.parseString("""
  akka.actor.deployment {
    /my-service {
      router = round-robin-pool
      nr-of-instances = 3
    }
  }
  """)
// ConfigFactory.load sandwiches customConfig between default reference
// config and default overrides, and then resolves it.
val system = ActorSystem("MySystem", ConfigFactory.load(customConf))

Reading Configuration from a Custom Location

You can replace or supplement application.conf either in code or by using system properties.

If you are using ConfigFactory.load() (which Akka does by default), you can replace application.conf by defining -Dconfig.resource=whatever, -Dconfig.file=whatever, or -Dconfig.url=whatever.

From inside the replacement file specified with -Dconfig.resource and friends, you can include "application" if you still want to use application.{conf,json,properties} as well. Settings specified before include "application" are overridden by the included file, while those specified after it override the included file.

In code, there are many customization options.

There are several overloads of ConfigFactory.load(); these allow you to specify something to be sandwiched between system properties (which override) and the defaults (from reference.conf), replacing the usual application.{conf,json,properties} and the -Dconfig.file and related options.

The simplest variant of ConfigFactory.load() takes a resource basename (instead of application); myname.conf, myname.json and myname.properties would then be used instead of application.{conf,json,properties}.

The most flexible variant takes a Config object, which you can load using any method in ConfigFactory. For example, you could put a config string in code using ConfigFactory.parseString(), you could build a map and use ConfigFactory.parseMap(), or you could load a file. You can then combine your custom config with the usual config, which might look like this:

// make a Config with just your special setting
Config myConfig =
  ConfigFactory.parseString("something=somethingElse");
// load the normal config stack (system props,
// then application.conf, then reference.conf)
Config regularConfig =
  ConfigFactory.load();
// override regular stack with myConfig
Config combined =
  myConfig.withFallback(regularConfig);
// put the result in between the overrides
// (system props) and defaults again
Config complete =
  ConfigFactory.load(combined);
// create ActorSystem
ActorSystem system =
  ActorSystem.create("myname", complete);

When working with Config objects, keep in mind that there are three "layers" in the cake:

  • ConfigFactory.defaultOverrides() (system properties)
  • the application's settings
  • ConfigFactory.defaultReference() (reference.conf)

The normal goal is to customize the middle layer while leaving the other two alone.

  • ConfigFactory.load() loads the whole stack
  • the overloads of ConfigFactory.load() let you specify a different middle layer
  • the ConfigFactory.parse() variations load single files or resources

To stack two layers, use override.withFallback(fallback); try to keep system properties (defaultOverrides()) on top and reference.conf (defaultReference()) at the bottom.
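A small sketch of that layering, with a hypothetical middle layer:

import com.typesafe.config.ConfigFactory

val appLayer = ConfigFactory.parseString("myapp.mode = custom") // stands in for your settings
val stacked = ConfigFactory.defaultOverrides()     // system properties on top
  .withFallback(appLayer)                          // your settings in the middle
  .withFallback(ConfigFactory.defaultReference())  // reference.conf at the bottom
  .resolve()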

Do remember that you can often just add another include statement in application.conf rather than writing code. Includes at the top of application.conf are overridden by the rest of application.conf, while includes at the bottom override the earlier settings.

Actor Deployment Configuration

Deployment settings for specific actors can be defined in the akka.actor.deployment section of the configuration. In the deployment section it is possible to define things like dispatcher, mailbox, router settings and remote deployment. Configuration of these features is described in the chapters detailing the corresponding topics. An example may look like this:

akka.actor.deployment {

  # '/user/actorA/actorB' is a remote deployed actor
  /actorA/actorB {
    remote = "akka.tcp://sampleActorSystem@127.0.0.1:2553"
  }

  # all direct children of '/user/actorC' have a dedicated dispatcher
  "/actorC/*" {
    dispatcher = my-dispatcher
  }

  # all descendants of '/user/actorC' (direct children, and their children recursively)
  # have a dedicated dispatcher
  "/actorC/**" {
    dispatcher = my-dispatcher
  }

  # '/user/actorD/actorE' has a special priority mailbox
  /actorD/actorE {
    mailbox = prio-mailbox
  }

  # '/user/actorF/actorG/actorH' is a random pool
  /actorF/actorG/actorH {
    router = random-pool
    nr-of-instances = 5
  }
}

my-dispatcher {
  fork-join-executor.parallelism-min = 10
  fork-join-executor.parallelism-max = 10
}
prio-mailbox {
  mailbox-type = "a.b.MyPrioMailbox"
}

Note
The deployment section for a specific actor is identified by the path of the actor relative to /user.

You can use asterisks as wildcard matches for the actor path sections, so you could specify /*/sampleActor and that would match all sampleActor actors on that level of the hierarchy. In addition, please note:

  • you can use wildcards in the last position to match all actors at a certain level: /someParent/*
  • you can use double-wildcards in the last position to match all child actors and their children recursively: /someParent/**
  • non-wildcard matches always have higher priority than wildcards, and single wildcards have higher priority than double-wildcards, so /foo/bar is more specific than /foo/*, which in turn is more specific than /foo/**; only the highest-priority match is used
  • wildcards cannot be used to partially match a path section, as in /foo*/bar, /f*o/bar, etc.

Note
Double-wildcards can only be placed in the last position.

Listing of the Reference Configuration

Each Akka module has a reference configuration file with its default values, listed below.
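Should you want to inspect these defaults programmatically rather than read the listings, a minimal sketch (assuming the respective Akka modules are on the class path):

import com.typesafe.config.ConfigFactory

// only the reference layer, i.e. the merged reference.conf files
// of all Akka modules found on the class path
val reference = ConfigFactory.defaultReference()
println(reference.getConfig("akka.actor.default-dispatcher").root.render())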

akka-actor

1. ####################################
2.# Akka Actor Reference Config File #
3.####################################
4. 
5.# This is the reference config file that contains all the default settings.
6.# Make your edits/overrides in your application.conf.
7. 
8.# Akka version, checked against the runtime version of Akka. Loaded from generated conf file.
9.include "version"
10. 
11.akka {
12.  # Home directory of Akka, modules in the deploy directory will be loaded
13.  home = ""
14. 
15.  # Loggers to register at boot time (akka.event.Logging$DefaultLogger logs
16.  # to STDOUT)
17.  loggers = ["akka.event.Logging$DefaultLogger"]
18.  
19.  # Filter of log events that is used by the LoggingAdapter before 
20.  # publishing log events to the eventStream. It can perform
21.  # fine grained filtering based on the log source. The default
22.  # implementation filters on the `loglevel`.
23.  # FQCN of the LoggingFilter. The Class of the FQCN must implement 
24.  # akka.event.LoggingFilter and have a public constructor with
25.  # (akka.actor.ActorSystem.Settings, akka.event.EventStream) parameters.
26.  logging-filter = "akka.event.DefaultLoggingFilter"
27. 
28.  # Specifies the default loggers dispatcher
29.  loggers-dispatcher = "akka.actor.default-dispatcher"
30. 
31.  # Loggers are created and registered synchronously during ActorSystem
32.  # start-up, and since they are actors, this timeout is used to bound the
33.  # waiting time
34.  logger-startup-timeout = 5s
35. 
36.  # Log level used by the configured loggers (see "loggers") as soon
37.  # as they have been started; before that, see "stdout-loglevel"
38.  # Options: OFF, ERROR, WARNING, INFO, DEBUG
39.  loglevel = "INFO"
40. 
41.  # Log level for the very basic logger activated during ActorSystem startup.
42.  # This logger prints the log messages to stdout (System.out).
43.  # Options: OFF, ERROR, WARNING, INFO, DEBUG
44.  stdout-loglevel = "WARNING"
45. 
46.  # Log the complete configuration at INFO level when the actor system is started.
47.  # This is useful when you are uncertain of what configuration is used.
48.  log-config-on-start = off
49. 
50.  # Log at info level when messages are sent to dead letters.
51.  # Possible values:
52.  # on: all dead letters are logged
53.  # off: no logging of dead letters
54.  # n: positive integer, number of dead letters that will be logged
55.  log-dead-letters = 10
56. 
57.  # Possibility to turn off logging of dead letters while the actor system
58.  # is shutting down. Logging is only done when enabled by 'log-dead-letters'
59.  # setting.
60.  log-dead-letters-during-shutdown = on
61. 
62.  # List FQCN of extensions which shall be loaded at actor system startup.
63.  # Library extensions are regular extensions that are loaded at startup and are
64.  # available for third party library authors to enable auto-loading of extensions when
65.  # present on the classpath. This is done by appending entries:
66.  # 'library-extensions += "Extension"' in the library `reference.conf`.
67.  #
68.  # Should not be set by end user applications in 'application.conf', use the extensions property for that
69.  #
70.  library-extensions = ${?akka.library-extensions} []
71. 
72.  # List FQCN of extensions which shall be loaded at actor system startup.
73.  # Should be on the format: 'extensions = ["foo", "bar"]' etc.
74.  # See the Akka Documentation for more info about Extensions
75.  extensions = []
76. 
77.  # Toggles whether threads created by this ActorSystem should be daemons or not
78.  daemonic = off
79. 
80.  # JVM shutdown, System.exit(-1), in case of a fatal error,
81.  # such as OutOfMemoryError
82.  jvm-exit-on-fatal-error = on
83. 
84.  actor {
85. 
86.    # Either one of "local", "remote" or "cluster" or the
87.    # FQCN of the ActorRefProvider to be used; the below is the built-in default,
88.    # note that "remote" and "cluster" requires the akka-remote and akka-cluster
89.    # artifacts to be on the classpath.
90.    provider = "local"
91. 
92.    # The guardian "/user" will use this class to obtain its supervisorStrategy.
93.    # It needs to be a subclass of akka.actor.SupervisorStrategyConfigurator.
94.    # In addition to the default there is akka.actor.StoppingSupervisorStrategy.
95.    guardian-supervisor-strategy = "akka.actor.DefaultSupervisorStrategy"
96. 
97.    # Timeout for ActorSystem.actorOf
98.    creation-timeout = 20s
99. 
100.    # Serializes and deserializes (non-primitive) messages to ensure immutability,
101.    # this is only intended for testing.
102.    serialize-messages = off
103. 
104.    # Serializes and deserializes creators (in Props) to ensure that they can be
105.    # sent over the network, this is only intended for testing. Purely local deployments
106.    # as marked with deploy.scope == LocalScope are exempt from verification.
107.    serialize-creators = off
108. 
109.    # Timeout for send operations to top-level actors which are in the process
110.    # of being started. This is only relevant if using a bounded mailbox or the
111.    # CallingThreadDispatcher for a top-level actor.
112.    unstarted-push-timeout = 10s
113. 
114.    typed {
115.      # Default timeout for typed actor methods with non-void return type
116.      timeout = 5s
117.    }
118.    
119.    # Mapping between ´deployment.router' short names to fully qualified class names
120.    router.type-mapping {
121.      from-code = "akka.routing.NoRouter"
122.      round-robin-pool = "akka.routing.RoundRobinPool"
123.      round-robin-group = "akka.routing.RoundRobinGroup"
124.      random-pool = "akka.routing.RandomPool"
125.      random-group = "akka.routing.RandomGroup"
126.      balancing-pool = "akka.routing.BalancingPool"
127.      smallest-mailbox-pool = "akka.routing.SmallestMailboxPool"
128.      broadcast-pool = "akka.routing.BroadcastPool"
129.      broadcast-group = "akka.routing.BroadcastGroup"
130.      scatter-gather-pool = "akka.routing.ScatterGatherFirstCompletedPool"
131.      scatter-gather-group = "akka.routing.ScatterGatherFirstCompletedGroup"
132.      tail-chopping-pool = "akka.routing.TailChoppingPool"
133.      tail-chopping-group = "akka.routing.TailChoppingGroup"
134.      consistent-hashing-pool = "akka.routing.ConsistentHashingPool"
135.      consistent-hashing-group = "akka.routing.ConsistentHashingGroup"
136.    }
137. 
138.    deployment {
139. 
140.      # deployment id pattern - on the format: /parent/child etc.
141.      default {
142.      
143.        # The id of the dispatcher to use for this actor.
144.        # If undefined or empty the dispatcher specified in code
145.        # (Props.withDispatcher) is used, or default-dispatcher if not
146.        # specified at all.
147.        dispatcher = ""
148. 
149.        # The id of the mailbox to use for this actor.
150.        # If undefined or empty the default mailbox of the configured dispatcher
151.        # is used or if there is no mailbox configuration the mailbox specified
152.        # in code (Props.withMailbox) is used.
153.        # If there is a mailbox defined in the configured dispatcher then that
154.        # overrides this setting.
155.        mailbox = ""
156. 
157.        # routing (load-balance) scheme to use
158.        # - available: "from-code", "round-robin", "random", "smallest-mailbox",
159.        #              "scatter-gather", "broadcast"
160.        # - or:        Fully qualified class name of the router class.
161.        #              The class must extend akka.routing.CustomRouterConfig and
162.        #              have a public constructor with com.typesafe.config.Config
163.        #              and optional akka.actor.DynamicAccess parameter.
164.        # - default is "from-code";
165.        # Whether or not an actor is transformed to a Router is decided in code
166.        # only (Props.withRouter). The type of router can be overridden in the
167.        # configuration; specifying "from-code" means that the values specified
168.        # in the code shall be used.
169.        # In case of routing, the actors to be routed to can be specified
170.        # in several ways:
171.        # - nr-of-instances: will create that many children
172.        # - routees.paths: will route messages to these paths using ActorSelection,
173.        #   i.e. will not create children
174.        # - resizer: dynamically resizable number of routees as specified in
175.        #   resizer below
176.        router = "from-code"
177. 
178.        # number of children to create in case of a router;
179.        # this setting is ignored if routees.paths is given
180.        nr-of-instances = 1
181. 
182.        # within is the timeout used for routers containing future calls
183.        within = 5 seconds
184. 
185.        # number of virtual nodes per node for consistent-hashing router
186.        virtual-nodes-factor = 10
187. 
188.        tail-chopping-router {
189.          # interval is duration between sending message to next routee
190.          interval = 10 milliseconds
191.        }
192. 
193.        routees {
194.          # Alternatively to giving nr-of-instances you can specify the full
195.          # paths of those actors which should be routed to. This setting takes
196.          # precedence over nr-of-instances
197.          paths = []
198.        }
199.        
200.        # To use a dedicated dispatcher for the routees of the pool you can
201.        # define the dispatcher configuration inline with the property name 
202.        # 'pool-dispatcher' in the deployment section of the router.
203.        # For example:
204.        # pool-dispatcher {
205.        #   fork-join-executor.parallelism-min = 5
206.        #   fork-join-executor.parallelism-max = 5
207.        # }
208. 
209.        # Routers with dynamically resizable number of routees; this feature is
210.        # enabled by including (parts of) this section in the deployment
211.        resizer {
212.        
213.          enabled = off
214. 
215.          # The fewest number of routees the router should ever have.
216.          lower-bound = 1
217. 
218.          # The most number of routees the router should ever have.
219.          # Must be greater than or equal to lower-bound.
220.          upper-bound = 10
221. 
222.          # Threshold used to evaluate if a routee is considered to be busy
223.          # (under pressure). Implementation depends on this value (default is 1).
224.          # 0:   number of routees currently processing a message.
225.          # 1:   number of routees currently processing a message has
226.          #      some messages in mailbox.
227.          # > 1: number of routees with at least the configured pressure-threshold
228.          #      messages in their mailbox. Note that estimating mailbox size of
229.          #      default UnboundedMailbox is O(N) operation.
230.          pressure-threshold = 1
231. 
232.          # Percentage to increase capacity whenever all routees are busy.
233.          # For example, 0.2 would increase 20% (rounded up), i.e. if current
234.          # capacity is 6 it will request an increase of 2 more routees.
235.          rampup-rate = 0.2
236. 
237.          # Minimum fraction of busy routees before backing off.
238.          # For example, if this is 0.3, then we'll remove some routees only when
239.          # less than 30% of routees are busy, i.e. if current capacity is 10 and
240.          # 3 are busy then the capacity is unchanged, but if 2 or less are busy
241.          # the capacity is decreased.
242.          # Use 0.0 or negative to avoid removal of routees.
243.          backoff-threshold = 0.3
244. 
245.          # Fraction of routees to be removed when the resizer reaches the
246.          # backoffThreshold.
247.          # For example, 0.1 would decrease 10% (rounded up), i.e. if current
248.          # capacity is 9 it will request an decrease of 1 routee.
249.          backoff-rate = 0.1
250. 
251.          # Number of messages between resize operation.
252.          # Use 1 to resize before each message.
253.          messages-per-resize = 10
254.        }
255. 
256.        # Routers with dynamically resizable number of routees based on
257.        # performance metrics.
258.        # This feature is enabled by including (parts of) this section in
259.        # the deployment, cannot be enabled together with default resizer.
260.        optimal-size-exploring-resizer {
261. 
262.          enabled = off
263. 
264.          # The fewest number of routees the router should ever have.
265.          lower-bound = 1
266. 
267.          # The most number of routees the router should ever have.
268.          # Must be greater than or equal to lower-bound.
269.          upper-bound = 10
270. 
271.          # probability of doing a ramping down when all routees are busy
272.          # during exploration.
273.          chance-of-ramping-down-when-full = 0.2
274. 
275.          # Interval between each resize attempt
276.          action-interval = 5s
277. 
278.          # If the routees have not been fully utilized (i.e. all routees busy)
279.          # for such length, the resizer will downsize the pool.
280.          downsize-after-underutilized-for = 72h
281. 
282.          # Duration exploration, the ratio between the largest step size and
283.          # current pool size. E.g. if the current pool size is 50, and the
284.          # explore-step-size is 0.1, the maximum pool size change during
285.          # exploration will be +- 5
286.          explore-step-size = 0.1
287. 
288.          # Probabily of doing an exploration v.s. optmization.
289.          chance-of-exploration = 0.4
290. 
291.          # When downsizing after a long streak of underutilization, the resizer
292.          # will downsize the pool to the highest utiliziation multiplied by a
293.          # a downsize rasio. This downsize ratio determines the new pools size
294.          # in comparison to the highest utilization.
295.          # E.g. if the highest utilization is 10, and the down size ratio
296.          # is 0.8, the pool will be downsized to 8
297.          downsize-ratio = 0.8
298. 
299.          # When optimizing, the resizer only considers the sizes adjacent to the
300.          # current size. This number indicates how many adjacent sizes to consider.
301.          optimization-range = 16
302. 
303.          # The weight of the latest metric over old metrics when collecting
304.          # performance metrics.
305.          # E.g. if the last processing speed is 10 millis per message at pool
306.          # size 5, and if the new processing speed collected is 6 millis per
307.          # message at pool size 5. Given a weight of 0.3, the metrics
308.          # representing pool size 5 will be 6 * 0.3 + 10 * 0.7, i.e. 8.8 millis
309.          # Obviously, this number should be between 0 and 1.
310.          weight-of-latest-metric = 0.5
311.        }
312.      }
313. 
314.      /IO-DNS/inet-address {
315.        mailbox = "unbounded"
316.        router = "consistent-hashing-pool"
317.        nr-of-instances = 4
318.      }
319.    }
320. 
321.    default-dispatcher {
322.      # Must be one of the following
323.      # Dispatcher, PinnedDispatcher, or a FQCN to a class inheriting
324.      # MessageDispatcherConfigurator with a public constructor with
325.      # both com.typesafe.config.Config parameter and
326.      # akka.dispatch.DispatcherPrerequisites parameters.
327.      # PinnedDispatcher must be used together with executor=thread-pool-executor.
328.      type = "Dispatcher"
329. 
330.      # Which kind of ExecutorService to use for this dispatcher
331.      # Valid options:
332.      #  - "default-executor" requires a "default-executor" section
333.      #  - "fork-join-executor" requires a "fork-join-executor" section
334.      #  - "thread-pool-executor" requires a "thread-pool-executor" section
335.      #  - A FQCN of a class extending ExecutorServiceConfigurator
336.      executor = "default-executor"
337. 
338.      # This will be used if you have set "executor = "default-executor"".
339.      # If an ActorSystem is created with a given ExecutionContext, this
340.      # ExecutionContext will be used as the default executor for all
341.      # dispatchers in the ActorSystem configured with
342.      # executor = "default-executor". Note that "default-executor"
343.      # is the default value for executor, and therefore used if not
344.      # specified otherwise. If no ExecutionContext is given,
345.      # the executor configured in "fallback" will be used.
346.      default-executor {
347.        fallback = "fork-join-executor"
348.      }
349. 
350.      # This will be used if you have set "executor = "fork-join-executor""
351.      # Underlying thread pool implementation is scala.concurrent.forkjoin.ForkJoinPool
352.      fork-join-executor {
353.        # Min number of threads to cap factor-based parallelism number to
354.        parallelism-min = 8
355. 
356.        # The parallelism factor is used to determine thread pool size using the
357.        # following formula: ceil(available processors * factor). Resulting size
358.        # is then bounded by the parallelism-min and parallelism-max values.
359.        parallelism-factor = 3.0
360. 
361.        # Max number of threads to cap factor-based parallelism number to
362.        parallelism-max = 64
363. 
364.        # Setting to "FIFO" to use queue like peeking mode which "poll" or "LIFO" to use stack
365.        # like peeking mode which "pop".
366.        task-peeking-mode = "FIFO"
367.      }
368. 
369.      # This will be used if you have set "executor = "thread-pool-executor""
370.      # Underlying thread pool implementation is java.util.concurrent.ThreadPoolExecutor
371.      thread-pool-executor {
372.        # Keep alive time for threads
373.        keep-alive-time = 60s
374.        
375.        # Define a fixed thread pool size with this property. The corePoolSize
376.        # and the maximumPoolSize of the ThreadPoolExecutor will be set to this
377.        # value, if it is defined. Then the other pool-size properties will not
378.        # be used. 
379.        # 
380.        # Valid values are: `off` or a positive integer.
381.        fixed-pool-size = off
382. 
383.        # Min number of threads to cap factor-based corePoolSize number to
384.        core-pool-size-min = 8
385. 
386.        # The core-pool-size-factor is used to determine corePoolSize of the 
387.        # ThreadPoolExecutor using the following formula: 
388.        # ceil(available processors * factor).
389.        # Resulting size is then bounded by the core-pool-size-min and
390.        # core-pool-size-max values.
391.        core-pool-size-factor = 3.0
392. 
393.        # Max number of threads to cap factor-based corePoolSize number to
394.        core-pool-size-max = 64
395. 
396.        # Minimum number of threads to cap factor-based maximumPoolSize number to
397.        max-pool-size-min = 8
398. 
399.        # The max-pool-size-factor is used to determine maximumPoolSize of the 
400.        # ThreadPoolExecutor using the following formula:
401.        # ceil(available processors * factor)
402.        # The maximumPoolSize will not be less than corePoolSize.
403.        # It is only used if using a bounded task queue.
404.        max-pool-size-factor  = 3.0
405. 
406.        # Max number of threads to cap factor-based maximumPoolSize number to
407.        max-pool-size-max = 64
408. 
409.        # Specifies the bounded capacity of the task queue (< 1 == unbounded)
410.        task-queue-size = -1
411. 
412.        # Specifies which type of task queue will be used, can be "array" or
413.        # "linked" (default)
414.        task-queue-type = "linked"
415. 
416.        # Allow core threads to time out
417.        allow-core-timeout = on
418.      }
419. 
420.      # How long time the dispatcher will wait for new actors until it shuts down
421.      shutdown-timeout = 1s
422. 
423.      # Throughput defines the number of messages that are processed in a batch
424.      # before the thread is returned to the pool. Set to 1 for as fair as possible.
425.      throughput = 5
426. 
427.      # Throughput deadline for Dispatcher, set to 0 or negative for no deadline
428.      throughput-deadline-time = 0ms
429. 
430.      # For BalancingDispatcher: If the balancing dispatcher should attempt to
431.      # schedule idle actors using the same dispatcher when a message comes in,
432.      # and the dispatchers ExecutorService is not fully busy already.
433.      attempt-teamwork = on
434. 
435.      # If this dispatcher requires a specific type of mailbox, specify the
436.      # fully-qualified class name here; the actually created mailbox will
437.      # be a subtype of this type. The empty string signifies no requirement.
438.      mailbox-requirement = ""
439.    }
440. 
441.    default-mailbox {
442.      # FQCN of the MailboxType. The Class of the FQCN must have a public
443.      # constructor with
444.      # (akka.actor.ActorSystem.Settings, com.typesafe.config.Config) parameters.
445.      mailbox-type = "akka.dispatch.UnboundedMailbox"
446. 
447.      # If the mailbox is bounded then it uses this setting to determine its
448.      # capacity. The provided value must be positive.
449.      # NOTICE:
450.      # Up to version 2.1 the mailbox type was determined based on this setting;
451.      # this is no longer the case, the type must explicitly be a bounded mailbox.
452.      mailbox-capacity = 1000
453. 
454.      # If the mailbox is bounded then this is the timeout for enqueueing
455.      # in case the mailbox is full. Negative values signify infinite
456.      # timeout, which should be avoided as it bears the risk of dead-lock.
457.      mailbox-push-timeout-time = 10s
458. 
459.      # For Actor with Stash: The default capacity of the stash.
460.      # If negative (or zero) then an unbounded stash is used (default)
461.      # If positive then a bounded stash is used and the capacity is set using
462.      # the property
463.      stash-capacity = -1
464.    }
465. 
466.    mailbox {
467.      # Mapping between message queue semantics and mailbox configurations.
468.      # Used by akka.dispatch.RequiresMessageQueue[T] to enforce different
469.      # mailbox types on actors.
470.      # If your Actor implements RequiresMessageQueue[T], then when you create
471.      # an instance of that actor its mailbox type will be decided by looking
472.      # up a mailbox configuration via T in this mapping
473.      requirements {
474.        "akka.dispatch.UnboundedMessageQueueSemantics" =
475.          akka.actor.mailbox.unbounded-queue-based
476.        "akka.dispatch.BoundedMessageQueueSemantics" =
477.          akka.actor.mailbox.bounded-queue-based
478.        "akka.dispatch.DequeBasedMessageQueueSemantics" =
479.          akka.actor.mailbox.unbounded-deque-based
480.        "akka.dispatch.UnboundedDequeBasedMessageQueueSemantics" =
481.          akka.actor.mailbox.unbounded-deque-based
482.        "akka.dispatch.BoundedDequeBasedMessageQueueSemantics" =
483.          akka.actor.mailbox.bounded-deque-based
484.        "akka.dispatch.MultipleConsumerSemantics" =
485.          akka.actor.mailbox.unbounded-queue-based
486.        "akka.dispatch.ControlAwareMessageQueueSemantics" =
487.          akka.actor.mailbox.unbounded-control-aware-queue-based
488.        "akka.dispatch.UnboundedControlAwareMessageQueueSemantics" =
489.          akka.actor.mailbox.unbounded-control-aware-queue-based
490.        "akka.dispatch.BoundedControlAwareMessageQueueSemantics" =
491.          akka.actor.mailbox.bounded-control-aware-queue-based
492.        "akka.event.LoggerMessageQueueSemantics" =
493.          akka.actor.mailbox.logger-queue
494.      }
495. 
496.      unbounded-queue-based {
497.        # FQCN of the MailboxType, The Class of the FQCN must have a public
498.        # constructor with (akka.actor.ActorSystem.Settings,
499.        # com.typesafe.config.Config) parameters.
500.        mailbox-type = "akka.dispatch.UnboundedMailbox"
501.      }
502. 
503.      bounded-queue-based {
504.        # FQCN of the MailboxType, The Class of the FQCN must have a public
505.        # constructor with (akka.actor.ActorSystem.Settings,
506.        # com.typesafe.config.Config) parameters.
507.        mailbox-type = "akka.dispatch.BoundedMailbox"
508.      }
509. 
510.      unbounded-deque-based {
511.        # FQCN of the MailboxType, The Class of the FQCN must have a public
512.        # constructor with (akka.actor.ActorSystem.Settings,
513.        # com.typesafe.config.Config) parameters.
514.        mailbox-type = "akka.dispatch.UnboundedDequeBasedMailbox"
515.      }
516. 
517.      bounded-deque-based {
518.        # FQCN of the MailboxType, The Class of the FQCN must have a public
519.        # constructor with (akka.actor.ActorSystem.Settings,
520.        # com.typesafe.config.Config) parameters.
521.        mailbox-type = "akka.dispatch.BoundedDequeBasedMailbox"
522.      }
523. 
524.      unbounded-control-aware-queue-based {
525.        # FQCN of the MailboxType, The Class of the FQCN must have a public
526.        # constructor with (akka.actor.ActorSystem.Settings,
527.        # com.typesafe.config.Config) parameters.
528.        mailbox-type = "akka.dispatch.UnboundedControlAwareMailbox"
529.      }
530. 
531.      bounded-control-aware-queue-based {
532.        # FQCN of the MailboxType, The Class of the FQCN must have a public
533.        # constructor with (akka.actor.ActorSystem.Settings,
534.        # com.typesafe.config.Config) parameters.
535.        mailbox-type = "akka.dispatch.BoundedControlAwareMailbox"
536.      }
537.      
538.      # The LoggerMailbox will drain all messages in the mailbox
539.      # when the system is shutdown and deliver them to the StandardOutLogger.
540.      # Do not change this unless you know what you are doing.
541.      logger-queue {
542.        mailbox-type = "akka.event.LoggerMailboxType"
543.      }
544.    }
545. 
546.    debug {
547.      # enable function of Actor.loggable(), which is to log any received message
548.      # at DEBUG level, see the “Testing Actor Systems” section of the Akka
549.      # Documentation at http://akka.io/docs
550.      receive = off
551. 
552.      # enable DEBUG logging of all AutoReceiveMessages (Kill, PoisonPill et.c.)
553.      autoreceive = off
554. 
555.      # enable DEBUG logging of actor lifecycle changes
556.      lifecycle = off
557. 
558.      # enable DEBUG logging of all LoggingFSMs for events, transitions and timers
559.      fsm = off
560. 
561.      # enable DEBUG logging of subscription changes on the eventStream
562.      event-stream = off
563. 
564.      # enable DEBUG logging of unhandled messages
565.      unhandled = off
566. 
567.      # enable WARN logging of misconfigured routers
568.      router-misconfiguration = off
569.    }
570. 
571.    # Entries for pluggable serializers and their bindings.
572.    serializers {
573.      java = "akka.serialization.JavaSerializer"
574.      bytes = "akka.serialization.ByteArraySerializer"
575.    }
576. 
577.    # Class to Serializer binding. You only need to specify the name of an
578.    # interface or abstract base class of the messages. In case of ambiguity it
579.    # is using the most specific configured class, or giving a warning and
580.    # choosing the “first” one.
581.    #
582.    # To disable one of the default serializers, assign its class to "none", like
583.    # "java.io.Serializable" = none
584.    serialization-bindings {
585.      "[B" = bytes
586.      "java.io.Serializable" = java
587.    }
588.    
589.    # Set this to on to enable serialization-bindings define in
590.    # additional-serialization-bindings. Those are by default not included
591.    # for backwards compatibility reasons. They are enabled by default if
592.    # akka.remote.artery.enabled=on.
593.    enable-additional-serialization-bindings = off
594.    
595.    # Additional serialization-bindings that are replacing Java serialization are
596.    # defined in this section and not included by default for backwards compatibility 
597.    # reasons. They can be enabled with enable-additional-serialization-bindings=on.
598.    # They are enabled by default if akka.remote.artery.enabled=on. 
599.    additional-serialization-bindings {
600.    }
601. 
602.    # Log warnings when the default Java serialization is used to serialize messages.
603.    # The default serializer uses Java serialization which is not very performant and should not
604.    # be used in production environments unless you don't care about performance. In that case
605.    # you can turn this off.
606.    warn-about-java-serializer-usage = on
607. 
608.    # To be used with the above warn-about-java-serializer-usage
609.    # When warn-about-java-serializer-usage = on, and this warn-on-no-serialization-verification = off,
610.    # warnings are suppressed for classes extending NoSerializationVerificationNeeded
611.    # to reduce noize.
612.    warn-on-no-serialization-verification = on
613. 
614.    # Configuration namespace of serialization identifiers.
615.    # Each serializer implementation must have an entry in the following format:
616.    # `akka.actor.serialization-identifiers."FQCN" = ID`
617.    # where `FQCN` is fully qualified class name of the serializer implementation
618.    # and `ID` is globally unique serializer identifier number.
619.    # Identifier values from 0 to 16 are reserved for Akka internal usage.
620.    serialization-identifiers {
621.      "akka.serialization.JavaSerializer" = 1
622.      "akka.serialization.ByteArraySerializer" = 4
623.    }
624. 
625.    # Configuration items which are used by the akka.actor.ActorDSL._ methods
626.    dsl {
627.      # Maximum queue size of the actor created by newInbox(); this protects
628.      # against faulty programs which use select() and consistently miss messages
629.      inbox-size = 1000
630. 
631.      # Default timeout to assume for operations like Inbox.receive et al
632.      default-timeout = 5s
633.    }
634.  }
635. 
636.  # Used to set the behavior of the scheduler.
637.  # Changing the default values may change the system behavior drastically so make
638.  # sure you know what you're doing! See the Scheduler section of the Akka
639.  # Documentation for more details.
640.  scheduler {
641.    # The LightArrayRevolverScheduler is used as the default scheduler in the
642.    # system. It does not execute the scheduled tasks on exact time, but on every
643.    # tick, it will run everything that is (over)due. You can increase or decrease
644.    # the accuracy of the execution timing by specifying smaller or larger tick
645.    # duration. If you are scheduling a lot of tasks you should consider increasing
646.    # the ticks per wheel.
647.    # Note that it might take up to 1 tick to stop the Timer, so setting the
648.    # tick-duration to a high value will make shutting down the actor system
649.    # take longer.
650.    tick-duration = 10ms
651. 
652.    # The timer uses a circular wheel of buckets to store the timer tasks.
653.    # This should be set such that the majority of scheduled timeouts (for high
654.    # scheduling frequency) will be shorter than one rotation of the wheel
655.    # (ticks-per-wheel * ticks-duration)
656.    # THIS MUST BE A POWER OF TWO!
657.    ticks-per-wheel = 512
658. 
659.    # This setting selects the timer implementation which shall be loaded at
660.    # system start-up.
661.    # The class given here must implement the akka.actor.Scheduler interface
662.    # and offer a public constructor which takes three arguments:
663.    #  1) com.typesafe.config.Config
664.    #  2) akka.event.LoggingAdapter
665.    #  3) java.util.concurrent.ThreadFactory
666.    implementation = akka.actor.LightArrayRevolverScheduler
667. 
668.    # When shutting down the scheduler, there will typically be a thread which
669.    # needs to be stopped, and this timeout determines how long to wait for
670.    # that to happen. In case of timeout the shutdown of the actor system will
671.    # proceed without running possibly still enqueued tasks.
672.    shutdown-timeout = 5s
673.  }
674. 
675.  io {
676. 
677.    # By default the select loops run on dedicated threads, hence using a
678.    # PinnedDispatcher
679.    pinned-dispatcher {
680.      type = "PinnedDispatcher"
681.      executor = "thread-pool-executor"
682.      thread-pool-executor.allow-core-timeout = off
683.    }
684. 
685.    tcp {
686. 
687.      # The number of selectors to stripe the served channels over; each of
688.      # these will use one select loop on the selector-dispatcher.
689.      nr-of-selectors = 1
690. 
691.      # Maximum number of open channels supported by this TCP module; there is
692.      # no intrinsic general limit, this setting is meant to enable DoS
693.      # protection by limiting the number of concurrently connected clients.
694.      # Also note that this is a "soft" limit; in certain cases the implementation
695.      # will accept a few connections more or a few less than the number configured
696.      # here. Must be an integer > 0 or "unlimited".
697.      max-channels = 256000
698. 
699.      # When trying to assign a new connection to a selector and the chosen
700.      # selector is at full capacity, retry selector choosing and assignment
701.      # this many times before giving up
702.      selector-association-retries = 10
703. 
704.      # The maximum number of connection that are accepted in one go,
705.      # higher numbers decrease latency, lower numbers increase fairness on
706.      # the worker-dispatcher
707.      batch-accept-limit = 10
708. 
709.      # The number of bytes per direct buffer in the pool used to read or write
710.      # network data from the kernel.
711.      direct-buffer-size = 128 KiB
712. 
713.      # The maximal number of direct buffers kept in the direct buffer pool for
714.      # reuse.
715.      direct-buffer-pool-limit = 1000
716. 
717.      # The duration a connection actor waits for a `Register` message from
718.      # its commander before aborting the connection.
719.      register-timeout = 5s
720. 
721.      # The maximum number of bytes delivered by a `Received` message. Before
722.      # more data is read from the network the connection actor will try to
723.      # do other work.
724.      # The purpose of this setting is to impose a smaller limit than the 
725.      # configured receive buffer size. When using value 'unlimited' it will
726.      # try to read all from the receive buffer.
727.      max-received-message-size = unlimited
728. 
729.      # Enable fine grained logging of what goes on inside the implementation.
730.      # Be aware that this may log more than once per message sent to the actors
731.      # of the tcp implementation.
732.      trace-logging = off
733. 
734.      # Fully qualified config path which holds the dispatcher configuration
735.      # to be used for running the select() calls in the selectors
736.      selector-dispatcher = "akka.io.pinned-dispatcher"
737. 
738.      # Fully qualified config path which holds the dispatcher configuration
739.      # for the read/write worker actors
740.      worker-dispatcher = "akka.actor.default-dispatcher"
741. 
742.      # Fully qualified config path which holds the dispatcher configuration
743.      # for the selector management actors
744.      management-dispatcher = "akka.actor.default-dispatcher"
745. 
746.      # Fully qualified config path which holds the dispatcher configuration
747.      # on which file IO tasks are scheduled
748.      file-io-dispatcher = "akka.actor.default-dispatcher"
749. 
750.      # The maximum number of bytes (or "unlimited") to transfer in one batch
751.      # when using `WriteFile` command which uses `FileChannel.transferTo` to
752.      # pipe files to a TCP socket. On some OS like Linux `FileChannel.transferTo`
753.      # may block for a long time when network IO is faster than file IO.
754.      # Decreasing the value may improve fairness while increasing may improve
755.      # throughput.
756.      file-io-transferTo-limit = 512 KiB
757. 
758.      # The number of times to retry the `finishConnect` call after being notified about
759.      # OP_CONNECT. Retries are needed if the OP_CONNECT notification doesn't imply that
760.      # `finishConnect` will succeed, which is the case on Android.
761.      finish-connect-retries = 5
762. 
763.      # On Windows connection aborts are not reliably detected unless an OP_READ is
764.      # registered on the selector _after_ the connection has been reset. This
765.      # workaround enables an OP_CONNECT which forces the abort to be visible on Windows.
766.      # Enabling this setting on other platforms than Windows will cause various failures
767.      # and undefined behavior.
768.      # Possible values of this key are on, off and auto where auto will enable the
769.      # workaround if Windows is detected automatically.
770.      windows-connection-abort-workaround-enabled = off
771.    }
772. 
773.    udp {
774. 
775.      # The number of selectors to stripe the served channels over; each of
776.      # these will use one select loop on the selector-dispatcher.
777.      nr-of-selectors = 1
778. 
779.      # Maximum number of open channels supported by this UDP module Generally
780.      # UDP does not require a large number of channels, therefore it is
781.      # recommended to keep this setting low.
782.      max-channels = 4096
783. 
784.      # The select loop can be used in two modes:
785.      # - setting "infinite" will select without a timeout, hogging a thread
786.      # - setting a positive timeout will do a bounded select call,
787.      #   enabling sharing of a single thread between multiple selectors
788.      #   (in this case you will have to use a different configuration for the
789.      #   selector-dispatcher, e.g. using "type=Dispatcher" with size 1)
790.      # - setting it to zero means polling, i.e. calling selectNow()
791.      select-timeout = infinite
792. 
793.      # When trying to assign a new connection to a selector and the chosen
794.      # selector is at full capacity, retry selector choosing and assignment
795.      # this many times before giving up
796.      selector-association-retries = 10
797. 
798.      # The maximum number of datagrams that are read in one go,
799.      # higher numbers decrease latency, lower numbers increase fairness on
800.      # the worker-dispatcher
801.      receive-throughput = 3
802. 
803.      # The number of bytes per direct buffer in the pool used to read or write
804.      # network data from the kernel.
805.      direct-buffer-size = 128 KiB
806. 
807.      # The maximal number of direct buffers kept in the direct buffer pool for
808.      # reuse.
809.      direct-buffer-pool-limit = 1000
810. 
811.      # Enable fine grained logging of what goes on inside the implementation.
812.      # Be aware that this may log more than once per message sent to the actors
813.      # of the tcp implementation.
814.      trace-logging = off
815. 
816.      # Fully qualified config path which holds the dispatcher configuration
817.      # to be used for running the select() calls in the selectors
818.      selector-dispatcher = "akka.io.pinned-dispatcher"
819. 
820.      # Fully qualified config path which holds the dispatcher configuration
821.      # for the read/write worker actors
822.      worker-dispatcher = "akka.actor.default-dispatcher"
823. 
824.      # Fully qualified config path which holds the dispatcher configuration
825.      # for the selector management actors
826.      management-dispatcher = "akka.actor.default-dispatcher"
827.    }
828. 
829.    udp-connected {
830. 
831.      # The number of selectors to stripe the served channels over; each of
832.      # these will use one select loop on the selector-dispatcher.
833.      nr-of-selectors = 1
834. 
835.      # Maximum number of open channels supported by this UDP module Generally
836.      # UDP does not require a large number of channels, therefore it is
837.      # recommended to keep this setting low.
838.      max-channels = 4096
839. 
840.      # The select loop can be used in two modes:
841.      # - setting "infinite" will select without a timeout, hogging a thread
842.      # - setting a positive timeout will do a bounded select call,
843.      #   enabling sharing of a single thread between multiple selectors
844.      #   (in this case you will have to use a different configuration for the
845.      #   selector-dispatcher, e.g. using "type=Dispatcher" with size 1)
846.      # - setting it to zero means polling, i.e. calling selectNow()
847.      select-timeout = infinite
848. 
849.      # When trying to assign a new connection to a selector and the chosen
850.      # selector is at full capacity, retry selector choosing and assignment
851.      # this many times before giving up
852.      selector-association-retries = 10
853. 
854.      # The maximum number of datagrams that are read in one go,
855.      # higher numbers decrease latency, lower numbers increase fairness on
856.      # the worker-dispatcher
857.      receive-throughput = 3
858. 
859.      # The number of bytes per direct buffer in the pool used to read or write
860.      # network data from the kernel.
861.      direct-buffer-size = 128 KiB
862. 
863.      # The maximal number of direct buffers kept in the direct buffer pool for
864.      # reuse.
865.      direct-buffer-pool-limit = 1000
866.      
867.      # Enable fine grained logging of what goes on inside the implementation.
868.      # Be aware that this may log more than once per message sent to the actors
869.      # of the tcp implementation.
870.      trace-logging = off
871. 
872.      # Fully qualified config path which holds the dispatcher configuration
873.      # to be used for running the select() calls in the selectors
874.      selector-dispatcher = "akka.io.pinned-dispatcher"
875. 
876.      # Fully qualified config path which holds the dispatcher configuration
877.      # for the read/write worker actors
878.      worker-dispatcher = "akka.actor.default-dispatcher"
879. 
880.      # Fully qualified config path which holds the dispatcher configuration
881.      # for the selector management actors
882.      management-dispatcher = "akka.actor.default-dispatcher"
883.    }
884. 
885.    dns {
886.      # Fully qualified config path which holds the dispatcher configuration
887.      # for the manager and resolver router actors.
888.      # For actual router configuration see akka.actor.deployment./IO-DNS/*
889.      dispatcher = "akka.actor.default-dispatcher"
890. 
891.      # Name of the subconfig at path akka.io.dns, see inet-address below
892.      resolver = "inet-address"
893. 
894.      inet-address {
895.        # Must implement akka.io.DnsProvider
896.        provider-object = "akka.io.InetAddressDnsProvider"
897. 
898.        # These TTLs are set to default java 6 values
899.        positive-ttl = 30s
900.        negative-ttl = 10s
901. 
902.        # How often to sweep out expired cache entries.
903.        # Note that this interval has nothing to do with TTLs
904.        cache-cleanup-interval = 120s
905.      }
906.    }
907.  }
908. 
909. 
910.}

akka-agent

####################################
# Akka Agent Reference Config File #
####################################

# This is the reference config file that contains all the default settings.
# Make your edits/overrides in your application.conf.

akka {
  agent {

    # The dispatcher used for agent-send-off actor
    send-off-dispatcher {
      executor = thread-pool-executor
      type = PinnedDispatcher
    }

    # The dispatcher used for agent-alter-off actor
    alter-off-dispatcher {
      executor = thread-pool-executor
      type = PinnedDispatcher
    }
  }
}

akka-camel

####################################
# Akka Camel Reference Config File #
####################################

# This is the reference config file that contains all the default settings.
# Make your edits/overrides in your application.conf.

akka {
  camel {
    # FQCN of the ContextProvider to be used to create or locate a CamelContext
    # it must implement akka.camel.ContextProvider and have a no-arg constructor
    # the built-in default create a fresh DefaultCamelContext
    context-provider = akka.camel.DefaultContextProvider

    # Whether JMX should be enabled or disabled for the Camel Context
    jmx = off
    # enable/disable streaming cache on the Camel Context
    streamingCache = on
    consumer {
      # Configured setting which determines whether one-way communications
      # between an endpoint and this consumer actor
      # should be auto-acknowledged or application-acknowledged.
      # This flag has only effect when exchange is in-only.
      auto-ack = on

      # When endpoint is out-capable (can produce responses) reply-timeout is the
      # maximum time the endpoint can take to send the response before the message
      # exchange fails. This setting is used for out-capable, in-only,
      # manually acknowledged communication.
      reply-timeout = 1m

      # The duration of time to await activation of an endpoint.
      activation-timeout = 10s
    }

    #Scheme to FQCN mappings for CamelMessage body conversions
    conversions {
      "file" = "java.io.InputStream"
    }
  }
}

