Principle of Locality I: Hacking the Continuum Mean-Field Technique in Network Modeling

Figure 1. Space-network created by Stephen Wolfram [1]

Outline

  • What does "Continuum Mean-Field" mean?
  • From Dynamics assumption to Stable Distribution
  • The Preferential Attachment Model
  • The Preferential Return Model
  • The Spatial-constrained Attachment Model
  • From Stable Distribution to Dynamics assumption

1. What does "Continuum Mean-Field" mean?

  1. "Mean-Field": many -> one
    The complex interactions among a large number of components can be simplified into a single averaged effect of all the other individuals on any given one.

  2. "Continuum": discrete -> Continuous
    A discrete variables that is "smooth enough", i.e. increase by one unit in each period, such as time steps [1], spatial jumps [2-3], and population, can be viewed as a continuous geometric structure. Any measure on this structure, like degree [1], number of unique locations visited [2], and distance [3], can be described by differential equations.
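As a sketch of this approximation (my own illustration, not part of the original post), consider a quantity that is updated once per unit step by a small state-dependent increment; its trajectory is captured, up to a constant prefactor, by the corresponding differential equation. The growth rule f(x, t) = x/(2t) below is an arbitrary choice.

```python
# A minimal sketch (my illustration, not from the original post) of the
# "continuum" idea: a quantity that grows by a small, state-dependent amount
# once per unit step is well described by the ODE dx/dt = f(x, t).
# The choice f(x, t) = x / (2t) is arbitrary; its continuum solution is
# x(t) = x0 * sqrt(t / t0).

def discrete_growth(x0, t0, t_end):
    """Advance x by exactly x / (2t) once per unit time step."""
    x, t = float(x0), t0
    while t < t_end:
        x += x / (2 * t)
        t += 1
    return x

def continuum_growth(x0, t0, t_end):
    """Closed-form solution of dx/dt = x / (2t) with x(t0) = x0."""
    return x0 * (t_end / t0) ** 0.5

if __name__ == "__main__":
    for t_end in (10, 100, 1000, 10000):
        d = discrete_growth(3.0, 1, t_end)
        c = continuum_growth(3.0, 1, t_end)
        # Both grow as sqrt(t); their ratio settles to a constant of order one.
        print(f"t={t_end:6d}  discrete={d:9.2f}  continuum={c:9.2f}  ratio={d/c:.3f}")
```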

2. From Dynamics assumption to Stable Distribution

2.1 The Preferential Attachment Model
Figure 2. An example network of the preferential attachment model
Figure 3. Time continuum
| Geometric structure | Measure  | Differential form |
|---------------------|----------|-------------------|
| Time t              | Degree k | dk/dt             |

The "preferential attachment" assumption can be translated as

![Eq. 1](http://latex.codecogs.com/svg.latex?\frac{dk_j}{dt}=m\frac{k_j}{\sum^{all\,nodes}k_j}=m\frac{k_j}{\sum_{j=1}^N{k_j}}=m\frac{k_j}{2mt}=\frac{k_j}{2t}.)

In other words, we have

![Eq. 2](http://latex.codecogs.com/svg.latex?\frac{1}{k_j}dk_j=\frac{1}{2t}dt,)

or

![Eq. 3](http://latex.codecogs.com/svg.latex?\int\frac{1}{k_j}dk_j=\int\frac{1}{2t}dt,)

which gives

![Eq. 4](http://latex.codecogs.com/svg.latex?ln(k_j)=\frac{1}{2}ln(t)+c.)

Considering the boundary condition (the purple node in Figure 2) that kj = m at t = j leads to

![Eq. 5](http://latex.codecogs.com/svg.latex?ln(m)=\frac{1}{2}ln(j)+c \rightarrow c=ln(m)-\frac{1}{2}ln(j).)

Therefore, Eq. 4 reads

![Eq. 6](http://latex.codecogs.com/svg.latex?ln(k_j)=\frac{1}{2}ln(t)+ln(m)-\frac{1}{2}ln(j),)

or

![Eq. 7](http://latex.codecogs.com/svg.latex?k_j=m(\frac{t}{j})^{\frac{1}{2}}.)

Now, here comes the most interesting step. Observing Figure 2 again, we find that all nodes to the right of the jth node (including the jth node itself) have a degree <= kj(t), whereas all nodes to the left have a degree > kj(t). This means that we can write the cumulative probability function as

![Eq. 8][eq8]
[eq8]:http://latex.codecogs.com/svg.latex?P(K\leq{k_j})=\frac{t-j}{t}=1-\frac{j}{t}=1-\frac{m^2}{k_j^2}=F(k_j),

and thus naturally we have the probability density function as

![Eq. 9][eq9]
[eq9]:http://latex.codecogs.com/svg.latex?f(k_j)=\frac{dF(k_j)}{dk_j}=\frac{2m^2}{k_j^3}.

To summarize, PA predicts two scaling relationships, Eq. 7 and Eq. 9.
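Before moving on, these two predictions can be checked numerically. Below is a rough single-run simulation sketch (my own illustration; the values m = 4 and t_max = 20000 are arbitrary choices) that grows a preferential-attachment network and compares it with Eq. 7 and Eq. 8. Individual node degrees fluctuate from run to run, so only approximate agreement is expected.

```python
# Single-run check of the two PA predictions: k_j(t) ~ m*sqrt(t/j) (Eq. 7)
# and P(K <= k) ~ 1 - m^2/k^2 (Eq. 8).
import random

def simulate_ba(m, t_max, seed=0):
    """Grow a preferential-attachment network: each new node adds m edges,
    choosing existing targets with probability proportional to their degree."""
    random.seed(seed)
    degree = [m] * (m + 1)        # start from a small fully connected core
    stubs = []                    # node j appears degree[j] times in this list
    for node, k in enumerate(degree):
        stubs += [node] * k
    for new in range(m + 1, t_max):
        targets = set()
        while len(targets) < m:   # sample m distinct degree-weighted targets
            targets.add(random.choice(stubs))
        degree.append(m)
        for tgt in targets:
            degree[tgt] += 1
            stubs.append(tgt)
        stubs += [new] * m
    return degree

if __name__ == "__main__":
    m, t_max = 4, 20000
    deg = simulate_ba(m, t_max)
    for j in (100, 1000, 5000):   # Eq. 7: early nodes grow as m*sqrt(t/j)
        window = deg[int(0.9 * j):int(1.1 * j)]
        avg = sum(window) / len(window)
        print(f"nodes near j={j:5d}: mean degree {avg:6.1f}, "
              f"Eq. 7 predicts ~{m * (t_max / j) ** 0.5:.1f}")
    for k in (8, 16, 32):         # Eq. 8: empirical CDF vs 1 - m^2/k^2
        frac = sum(1 for d in deg if d <= k) / len(deg)
        print(f"P(K<={k:2d}): simulated {frac:.3f}, "
              f"Eq. 8 predicts {1 - m ** 2 / k ** 2:.3f}")
```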

Now, let's summarize the steps:

  1. Define the quantity of interest y (e.g., degree k) and the background variable x (e.g., time t);
  2. State the single-step assumption as a differential equation dy/dx = f(x, y);
  3. Solve this differential equation to obtain y = g(x);
  4. Obtain the distribution P(Y <= y) = h(x) = h(g^{-1}(y)).

This approach is very general and can be applied to any kind of social dynamics satisfying the "Continuum" and "Mean-Field" assumptions.
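As a concrete illustration of these four steps on the PA example, here is a small symbolic sketch using sympy (the tooling is my own choice, not part of the original post):

```python
# A symbolic walk-through of the four steps on the PA example.
import sympy as sp

t, j, m, k_sym = sp.symbols("t j m k", positive=True)
k = sp.Function("k")

# Step 2: the single-step (mean-field) assumption dk/dt = k / (2t)
ode = sp.Eq(k(t).diff(t), k(t) / (2 * t))

# Step 3: solve the ODE and fix the constant with the boundary condition k(j) = m
general = sp.dsolve(ode, k(t))                    # k(t) = C1*sqrt(t)
C1 = sp.Symbol("C1")
c_val = sp.solve(sp.Eq(general.rhs.subs(t, j), m), C1)[0]
k_of_t = general.rhs.subs(C1, c_val)
print(sp.simplify(k_of_t))                        # m*sqrt(t)/sqrt(j)   (Eq. 7)

# Step 4: invert y = g(x) for the birth time j and plug into P(K <= k) = 1 - j/t
j_of_k = sp.solve(sp.Eq(k_sym, k_of_t), j)[0]
print(sp.simplify(1 - j_of_k / t))                # 1 - m**2/k**2       (Eq. 8)
```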

2.2 The Preferential Return Model
Figure 4. An example network of the preferential return model
Figure 5. Interest continuum

The major (and very critical) difference between Preferential Return (PR) and Preferential Attachment (PA) is that the number of visited nodes increases sub-linearly over time in PR. Therefore, in PR we have to choose either time or the interest space as the geometric background. Here we choose the latter and study the dynamics on it.

Besides this, PR uses a linear function of degree, (1 + \lambda k), as the attachment weight, so that new nodes with zero degree can also be visited. We call this quantity the votes (v = 1 + \lambda k), since it is the votes rather than the degree that directly determine the visiting probability.

| Geometric structure | Measure  | Differential form          |
|---------------------|----------|----------------------------|
| Interest set M      | Degree k | dk/dM = (dk/dt) * (dt/dM)  |

As shown by Figure 3, nodes are sorted by their visiting order. The jth node is first visited at time tj. In other words, until time ti there are i visited nodes, and the jth node has been visited kj times. We have

![Eq. 10](http://latex.codecogs.com/svg.latex?\frac{di}{dt}=\frac{\sum^{new\,nodes} v_j}{\sum^{all\,nodes} v_j}=\frac{\sum_{j=i+1}^{M}(1+\lambda k_j)}{\sum_{j=1}^M{(1+\lambda k_j)}}=\frac{M-i}{M+\lambda \sum_{j=1}^Mk_j}=\frac{M-i}{M+\lambda t}.)

Re-arrange it to obtain

![Eq. 11](http://latex.codecogs.com/svg.latex?\frac{1}{M-i}di=\frac{1}{M+\lambda t}dt,)

or

![Eq. 12](http://latex.codecogs.com/svg.latex?\int\frac{1}{M-i}di=\int\frac{1}{M+\lambda t}dt,)

which gives

![Eq. 13](http://latex.codecogs.com/svg.latex?ln(M-i)=-\frac{1}{\lambda}ln(M+\lambda t)+c.)

or

![Eq. 14](http://latex.codecogs.com/svg.latex?i=M-e^{c}(\frac{1}{M+\lambda t})^{\frac{1}{\lambda}})

Considering the boundary condition when t=1, i=1,

![Eq. 15](http://latex.codecogs.com/svg.latex?ln(M-1)=-\frac{1}{\lambda}ln(M+\lambda)+c \rightarrow e^{c}=(M-1)(M+\lambda)^{\frac{1}{\lambda}}.)

Therefore Eq 14 reads

![Eq. 16](http://latex.codecogs.com/svg.latex?i=M-(M-1)(\frac{M+\lambda}{M+\lambda t})^{\frac{1}{\lambda}})

or

![Eq. 17](http://latex.codecogs.com/svg.latex?t_j=\frac{1}{\lambda}[(M+\lambda)(\frac{M-1}{M-j})^{\lambda}-M])

Eq. 17 predicts the waiting time before the first visit of the jth node.
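A quick numerical sanity check of this part of the derivation (my own sketch; the values of M, T, and λ below are arbitrary illustration choices) is to simulate the preferential-return rule directly and compare the number of distinct visited nodes with Eq. 16:

```python
# Simulate preferential return and compare i(t) with the Eq. 16 prediction.
import numpy as np

def simulate_pr(M, T, lam, seed=0):
    """At each step, node j is visited with probability proportional to its
    votes v_j = 1 + lam * k_j."""
    rng = np.random.default_rng(seed)
    k = np.zeros(M)
    distinct = np.zeros(T, dtype=int)   # number of distinct nodes visited so far
    seen = 0
    for t in range(T):
        w = 1.0 + lam * k
        j = rng.choice(M, p=w / w.sum())
        if k[j] == 0:
            seen += 1
        k[j] += 1
        distinct[t] = seen
    return distinct

def eq16(t, M, lam):
    """Eq. 16: mean-field prediction for the number of visited nodes."""
    return M - (M - 1) * ((M + lam) / (M + lam * t)) ** (1.0 / lam)

if __name__ == "__main__":
    M, T, lam = 400, 20000, 0.5          # arbitrary illustration values
    distinct = simulate_pr(M, T, lam)
    for t in (100, 1000, 5000, 20000):
        print(f"t={t:6d}  simulated i={int(distinct[t - 1]):4d}  "
              f"Eq. 16 predicts {eq16(t, M, lam):7.1f}")
```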

Meanwhile, we derive the increase of degree over time as

![Eq. 18](http://latex.codecogs.com/svg.latex?\frac{dk_j}{dt}=1*\frac{v_j}{\sum^{total\,nodes} v_j}=\frac{1+\lambda k_{j}}{\sum_{j=1}^M{(1+\lambda k_{j})}}=\frac{1+\lambda k_{j}}{M+\lambda t}.)

Re-arrange it to obtain

![Eq. 19](http://latex.codecogs.com/svg.latex?\frac{1}{1+\lambda k_{j}}dk_{j}=\frac{1}{M+\lambda t}dt,)

or

![Eq. 20](http://latex.codecogs.com/svg.latex?\int\frac{1}{1+\lambda k_{j}}dk_j=\int\frac{1}{M+\lambda t}dt,)

which gives

![Eq. 21](http://latex.codecogs.com/svg.latex?ln(1+\lambda k_{j})=ln(M+\lambda t)+c.)

Considering the boundary condition when t=tj, kj=1

![Eq. 22](http://latex.codecogs.com/svg.latex?ln(1+ \lambda)=ln(M+\lambda t_j)+c \rightarrow c=ln(1+ \lambda)-ln(M+\lambda t_j).)

Therefore, Eq. 21 reads

![Eq. 23](http://latex.codecogs.com/svg.latex?ln(1+\lambda k_j)=ln(M+\lambda t)+ln(1+ \lambda)-ln(M+\lambda t_j).)

or

![Eq. 24](http://latex.codecogs.com/svg.latex?k_{j}=\frac{\lambda+1}{\lambda}\frac{\lambda t+M}{\lambda t_j+M}-\frac{1}{\lambda})

Considering the condition tj=g(j) given by Eq. 17, we derive

![Eq. 25](http://latex.codecogs.com/svg.latex?k_{j}=\frac{\lambda+1}{\lambda}\frac{\lambda t+M}{\lambda+M}(\frac{M-j}{M-1})^{\lambda}-\frac{1}{\lambda})

Considering the current time step ti = g(i), we also have

![Eq. 26](http://latex.codecogs.com/svg.latex?k_{j}=\frac{\lambda+1}{\lambda}(\frac{M-j}{M-i})^{\lambda}-\frac{1}{\lambda})

When j=i, kj=ki=1.

Again, here comes the most interesting step. In Figure 3 we find that all nodes to the right of the jth node (including the jth node itself) have a degree <= kj, whereas all nodes to the left have a degree > kj. This means that we can write the cumulative probability function as

![Eq. 27][eq27]
[eq27]:http://latex.codecogs.com/svg.latex?P(K\leq{k_j})=\frac{M-j}{M}\approx\frac{M-j}{M-1}=[\frac{\lambda}{\lambda+1}\frac{\lambda+M}{\lambda{t}+M}(k_j+\frac{1}{\lambda})]^{\frac{1}{\lambda}}=F(k_j),

and thus naturally we have the probability density function as

![Eq. 28][eq28]
[eq28]:http://latex.codecogs.com/svg.latex?f(k_j)=\frac{dF(k_j)}{dk_j}=\frac{1}{\lambda}(\frac{\lambda}{\lambda+1}\frac{\lambda+M}{\lambda{t}+M})^{\frac{1}{\lambda}}(k_j+\frac{1}{\lambda})^{\frac{1}{\lambda}-1}.

To summarize, PR predicts three scaling relationships, Eq. 16, Eq. 26, and Eq. 28.
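The chain of closed forms above is easy to get wrong, so here is a small numerical cross-check (my own sketch; the values of M and λ are arbitrary) that Eq. 24 combined with Eq. 17 reproduces Eq. 25, that substituting t = ti yields Eq. 26, and that the boundary value kj = 1 holds at j = i:

```python
# Numerical consistency check of Eqs. 17, 24, 25, and 26.
import random

def t_of(idx, M, lam):
    """Eq. 17: first-visit time of the idx-th distinct node."""
    return ((M + lam) * ((M - 1) / (M - idx)) ** lam - M) / lam

def k_eq24(t, tj, M, lam):
    return (lam + 1) / lam * (lam * t + M) / (lam * tj + M) - 1 / lam

def k_eq25(t, j, M, lam):
    return ((lam + 1) / lam * (lam * t + M) / (lam + M)
            * ((M - j) / (M - 1)) ** lam - 1 / lam)

def k_eq26(i, j, M, lam):
    return (lam + 1) / lam * ((M - j) / (M - i)) ** lam - 1 / lam

if __name__ == "__main__":
    random.seed(1)
    M, lam = 500, 0.6                       # arbitrary illustration values
    for _ in range(3):
        j = random.randint(2, 200)          # an early node j ...
        i = random.randint(j + 1, 400)      # ... and the latest node i > j
        t = t_of(i, M, lam)                 # "current" time: node i just appeared
        print(round(k_eq24(t, t_of(j, M, lam), M, lam), 6),
              round(k_eq25(t, j, M, lam), 6),
              round(k_eq26(i, j, M, lam), 6))   # the three values coincide
    print(k_eq26(7, 7, M, lam))             # equals 1: the boundary value at j = i
```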

2.3 The Spatial-constrained Attachment Model
Figure 6. An example network of the space-constrained attachment model

Figure 7. Similarity continuum

The expected increase of the radius R(t), which is obviously a function of time t, is

![Eq. 29](http://latex.codecogs.com/svg.latex?\frac{dR(t)}{dt}=\Delta R(t)=\int_0^rxf(x)dx=\int_0^rx\frac{1}{L}dx=\frac{r^2}{2L},)

which gives

![Eq. 30](http://latex.codecogs.com/svg.latex?R(t)=\frac{r^2}{2L}t+c,)

where the boundary condition R(t) <= r at t = 2 gives c <= r.
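One way to read Eq. 29 (this interpretation is mine; the post does not spell out the microscopic rule) is that each new node proposes a jump x drawn uniformly from [0, L] beyond the current frontier, and only jumps shorter than the connection range r extend the radius. A quick Monte-Carlo check of the resulting rate:

```python
# Monte-Carlo check of the radius growth rate in Eq. 29, under the reading
# described in the text above (an assumption, not the author's stated model).
import random

def mean_extension(r, L, n=200000, seed=0):
    """Average radius increment per step under the rule above."""
    random.seed(seed)
    total = 0.0
    for _ in range(n):
        x = random.uniform(0, L)
        if x <= r:                # longer jumps cannot attach, radius unchanged
            total += x
    return total / n

if __name__ == "__main__":
    r, L = 1.0, 10.0              # arbitrary illustration values
    print("simulated dR/dt ~", round(mean_extension(r, L), 4))
    print("Eq. 29    dR/dt =", r * r / (2 * L))
```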

If the similarity space is D dimensional, then the volume of the covered space scales as

![Eq. 31](http://latex.codecogs.com/svg.latex?V(t)\sim R(t)^D\sim t^D,)

As the radius increases linearly with time, the number of nodes at distance rho from the origin is approximately R(t) - rho, i.e., the number of time steps that have passed since the radius reached rho:

![Eq. 32](http://latex.codecogs.com/svg.latex?\mu_\rho(t)\sim R(t)-\rho,)

Therefore, the total number of nodes in a one-dimensional space is

![Eq. 33](http://latex.codecogs.com/svg.latex?N(t)\sim\int_0^R(R-\rho)d\rho = \frac{R(t)^2}{2},)

If the similarity space is D dimensional, then

![Eq. 34](http://latex.codecogs.com/svg.latex?N(t)\sim\int_0^R(R-\rho)^Dd\rho = \frac{R(t)^{D+1}}{D+1}\sim t^{D+1},)

Meanwhile, we know that within every "local area" of the space of radius r, the n nodes inside will be fully connected to each other, generating roughly n^2/2 undirected links. Therefore, the total number of edges is

![Eq. 35](http://latex.codecogs.com/svg.latex?E(t)\sim\int_0^RN(t)d\rho \sim R(t)^{D+2} \sim t^{D+2},)

Putting these results together, we have the following scaling relationships:

![Eq. 36](http://latex.codecogs.com/svg.latex?N(t)\sim V(t)^{\frac{D+1}{D}},)

and

![Eq. 37](http://latex.codecogs.com/svg.latex?E(t)\sim N(t)^{\frac{D+2}{D+1}}.)

That is, the number of nodes scales super-linearly with the covered volume, and the number of edges scales super-linearly with the number of nodes.
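The bookkeeping behind Eqs. 34-37 can be reproduced numerically. The sketch below (my own illustration, with R(t) = t and all prefactors set to one) integrates the assumed node density (R - rho)^D, forms E ~ N·R as in Eq. 35, and reads off the two scaling exponents from log-log slopes:

```python
# Numerical sketch of the scaling bookkeeping in Eqs. 34-37.
import math

def integrate(f, a, b, n=4000):
    """Midpoint rule, accurate enough for reading off exponents."""
    h = (b - a) / n
    return sum(f(a + (s + 0.5) * h) for s in range(n)) * h

def check(D, times=(50, 100, 200, 400, 800)):
    R = [float(t) for t in times]                                  # Eq. 30: R ~ t
    V = [r ** D for r in R]                                        # Eq. 31: V ~ R^D
    N = [integrate(lambda rho: (r - rho) ** D, 0, r) for r in R]   # Eq. 34
    E = [n_ * r for n_, r in zip(N, R)]                            # Eq. 35: E ~ N*R
    slope = lambda xs, ys: math.log(ys[-1] / ys[0]) / math.log(xs[-1] / xs[0])
    print(f"D={D}:  N ~ V^{slope(V, N):.3f} (theory {(D + 1) / D:.3f}),  "
          f"E ~ N^{slope(N, E):.3f} (theory {(D + 2) / (D + 1):.3f})")

if __name__ == "__main__":
    for D in (1, 2, 3):
        check(D)
```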

3. From Stable Distribution to Dynamics assumption

References

[1] Wolfram, S. What Is Spacetime, Really? (blog post).
[2] Barabási, A. L., & Albert, R. (1999). Emergence of scaling in random networks. Science, 286(5439), 509-512.
[3] Zhao, Y. M., Zeng, A., Yan, X. Y., Wang, W. X., & Lai, Y. C. (2015). Universal underpinning of human mobility in the real world and cyberspace. arXiv preprint arXiv:1512.04669.
[4] Zhang, J., Li, X., Wang, X., Wang, W. X., & Wu, L. (2015). Scaling behaviours in the growth of networked systems and their geometric origins. Scientific Reports, 5.
