So the final three responses are all correct, and I hope the high-level intuition for why is fairly clear.
T of N is definitely a quadratic function.
We know that the linear term doesn't matter as N grows large.
So since T of N has quadratic growth, the third response should be correct.
It's theta of N squared.
And it is omega of N.
So Omega of N is not a very good lower bound on the asymptotic rate of growth of T of N, but it is legitimate.
Indeed, as a quadratic growing function, it grows at least as fast as a linear function.
So it's Omega of N.
For the same reason, big O of N cubed is not a very good upper bound, but it is a legitimate one; it is correct.
The rate of growth of T of N is at most cubic.
In fact, it's at most quadratic, but it is indeed at most cubic.
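For reference, these three claims unpack into the standard definitions from earlier in the course:

$$T(n) = O(f(n)) \iff \exists\, c, n_0 > 0 \text{ such that } T(n) \le c \cdot f(n) \text{ for all } n \ge n_0,$$
$$T(n) = \Omega(f(n)) \iff \exists\, c, n_0 > 0 \text{ such that } T(n) \ge c \cdot f(n) \text{ for all } n \ge n_0,$$
$$T(n) = \Theta(f(n)) \iff \exists\, c_1, c_2, n_0 > 0 \text{ such that } c_1 \cdot f(n) \le T(n) \le c_2 \cdot f(n) \text{ for all } n \ge n_0.$$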
Now if you wanted to prove these three statements formally, you would just exhibit the appropriate constants.
So for proving that it's big Omega of N, you could take N naught equal to one, and C equal to one-half.
For the final statement, again you could take N naught equal to one.
And C equal to say four.
And to prove that it's theta of N squared, you could do something similar, just combining the two constants.
So N naught would be one.
You could take C1 to be one-half and C2 to be four.
And I'll leave it to you to verify that the formal definitions of big omega, big theta, and big O would be satisfied with these choices of constants.
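As a sketch of what that verification looks like, suppose for concreteness that $T(n) = n^2 + n$; the exact quadratic isn't restated here, but any quadratic with a positive leading coefficient behaves the same way. With $n_0 = 1$:

$$T(n) = n^2 + n \;\ge\; n \;\ge\; \tfrac{1}{2}\, n \quad \text{for all } n \ge 1, \text{ so } T(n) = \Omega(n) \text{ with } c = \tfrac{1}{2};$$
$$T(n) = n^2 + n \;\le\; n^3 + n^3 \;=\; 2n^3 \;\le\; 4n^3 \quad \text{for all } n \ge 1, \text{ so } T(n) = O(n^3) \text{ with } c = 4;$$
$$\tfrac{1}{2}\, n^2 \;\le\; n^2 + n \;\le\; 2n^2 \;\le\; 4n^2 \quad \text{for all } n \ge 1, \text{ so } T(n) = \Theta(n^2) \text{ with } c_1 = \tfrac{1}{2},\; c_2 = 4.$$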
One final piece of asymptotic notation: we're not going to use it much, but you do see it from time to time, so I wanted to mention it briefly.
This is called little O notation, in contrast to big O notation.
So while big O notation informally is a less than or equal to type relation, little O is a strictly less than relation.
So intuitively it means that one function is growing strictly less quickly than another.
So formally, we say that a function T of N is little O of F of N if and only if for all constants C, there is a constant N naught beyond which T of N is upper bounded by C times F of N.
So the difference between this definition and that of big O notation is that, to prove that one function is big O of another, we only have to exhibit one measly constant C such that C times F of N is eventually an upper bound for T of N.
By contrast, to prove that something is little O of another function, we have to prove something quite a bit stronger.
We have to prove that, for every single constant C, no matter how small, there exists some large enough N naught beyond which T of N is bounded above by C times F of N.
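In symbols, the two definitions differ only in the quantifier on the constant $c$:

$$T(n) = O(f(n)) \iff \exists\, c > 0 \;\exists\, n_0 \text{ such that } T(n) \le c \cdot f(n) \text{ for all } n \ge n_0,$$
$$T(n) = o(f(n)) \iff \forall\, c > 0 \;\exists\, n_0 \text{ such that } T(n) \le c \cdot f(n) \text{ for all } n \ge n_0.$$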
So, for those of you looking for a little more facility with little O notation, I'll leave it as an exercise to prove that, as you'd expect, for all polynomial powers K, N to the K minus one is in fact little O of N to the K.
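A minimal proof sketch for that exercise: given any constant $c > 0$, the choice $n_0 = 1/c$ works, since for all $n \ge n_0$,

$$n^{k-1} \;=\; \frac{1}{n} \cdot n^k \;\le\; c \cdot n^k,$$

because $n \ge 1/c$ implies $1/n \le c$.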
There is an analogous notion of little omega notation expressing that one function grows strictly quicker than another.
But you don't see that one very often, and I'm not going to say anything more about it.
So let me conclude this video with a quote from a 1976 article by my colleague Don Knuth, widely regarded as the grandfather of the formal analysis of algorithms.
And it's rare that you can pinpoint why and where some kind of notation became universally adopted in the field.
In the case of asymptotic notation, indeed, it's very clear where it came from.
The notation was not invented by algorithm designers or computer scientists.
It's been in use in number theory since the nineteenth century.
But it was Don Knuth, in '76, who proposed that this become the standard language for discussing rates of growth and, in particular, the running time of algorithms.
So in particular, he says in this article, "On the basis of the issues discussed here, I propose that members of SIGACT" (this is the special interest group of the ACM concerned with theoretical computer science, and in particular the analysis of algorithms).
So, "I propose that the members of SIGACT and editors in computer science and mathematics journals adopt the O, omega, and theta notations as defined above unless a better alternative can be found reasonably soon.
So clearly a better alternative was not found, and ever since that time this has been the standard way of discussing the rate of growth of the running times of algorithms, and that's what we'll be using here.