1.9「Stanford Algorithms」Guiding Principles for Analysis of Algorithms

Having completed our first analysis of an algorithm, namely an upper bound on the running time of the Merge Sort algorithm, what I want to do next is take a step back and be explicit about three assumptions, three biases, that we made when we did this analysis and interpreted the results.

These three assumptions we will adopt as guiding principles for how to reason about algorithms, and how to define a so-called fast algorithm, for the rest of the course.

So, the first guiding principle is that we used what's often called worst-case analysis.

By worst-case analysis, I simply mean that our upper bound of 6n log n + 6n applies to the number of lines of code executed for every single input array of length n.

We made absolutely no assumptions about the input, where it comes from or what it looks like, beyond the input length n.

Put differently, if, hypothetically, we had some adversary whose sole purpose in life was to concoct a malevolent input designed to make our algorithm run as slowly as possible, the worst this adversary could do is still upper-bounded by this same number, 6n log n + 6n.

Now, this sort of worst-case guarantee popped out so naturally from our analysis of Merge Sort that you might well be wondering: what else could you do? Well, two other methods of analysis, which do have their place although we won't really discuss them in this course, are "average-case analysis" and the use of a set of prespecified benchmarks.

By average-case analysis, I mean that you analyze the average running time of an algorithm under some assumption about the relative frequencies of different inputs.

So, for example, in the sorting problem, one thing you could do, although it's not what we do here, is assume that every possible input array is equally likely, and then analyze the average running time of the algorithm.
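
Purely as an illustration of that idea, and not something we'll rely on later, here is a toy sketch that empirically estimates insertion sort's average number of comparisons when every permutation of n distinct elements is equally likely. The function names here are just placeholders.

```python
import random

def insertion_sort_comparisons(a):
    """Sort a copy of `a` with insertion sort and count element comparisons."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1              # compare a[j-1] with a[j]
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break
    return comparisons

def estimate_average_case(n, trials=200):
    """Average comparison count over uniformly random permutations of size n."""
    total = 0
    for _ in range(trials):
        perm = random.sample(range(n), n)  # each permutation equally likely
        total += insertion_sort_comparisons(perm)
    return total / trials

if __name__ == "__main__":
    for n in (10, 100, 1000):
        print(n, estimate_average_case(n))  # on the order of n*n/4 on average
```

Of course, the number you get out is only meaningful under the assumption that uniformly random inputs reflect the inputs you'll actually see, which is exactly the kind of domain knowledge worst-case analysis avoids relying on.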

By benchmarks, I just mean that one agrees up front on some set of, say, ten or twenty benchmark inputs, which are thought to represent practical or typical inputs for the algorithm.

Now, both average-case analysis and benchmarks are useful in certain settings, but for them to make sense, you really have to have domain knowledge about your problem.

You need to have some understanding of what inputs are more common than others, what inputs better represent typical inputs than others.

By contrast, in worst-case analysis, by definition you're making absolutely no assumptions about where the input comes from.

So, as a result, worst-case analysis is particularly appropriate for general-purpose subroutines: subroutines that you design without having any knowledge of how they will be used or what kinds of inputs they will be used on.

And happily, another bonus of doing worst-case analysis, as we will in this course, is that it's usually much more mathematically tractable than trying to analyze the average performance of an algorithm under some distribution over inputs, or trying to understand the detailed behavior of an algorithm on a particular set of benchmark inputs.

This mathematical tractability was reflected in our Merge Sort analysis, where we had no a priori goal of analyzing the worst case per se; it's simply what popped out naturally from our reasoning about the algorithm's running time.

The second and third guiding principles are closely related.

The second one is that, in this course, when we analyze algorithms, we won't worry unduly about small constant factors or lower order terms.

We saw this philosophy at work very early on in our analysis of Merge Sort, when we discussed the number of lines of code that the merge subroutine requires.

We first upper-bounded it by 4m + 2, for an array of length m, and then we said, eh, let's just think of it as 6m instead.

Let's use a simpler, sloppier upper bound and work with that.
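
For the record, that sloppier bound is valid for every array length m we care about, since 2 ≤ 2m whenever m ≥ 1:

$$4m + 2 \;\le\; 4m + 2m \;=\; 6m \qquad \text{for all } m \ge 1.$$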

So, that was already an example of not worrying about small changes in the constant factor.

Now, the question you should be wondering about is, why do we do this, and can we really get away with it? So let me tell you about the justifications for this guiding principle.

So the first motivation is clear, and we used it already in our Merge Sort analysis: it is simply way easier mathematically if we don't have to precisely pin down what the constant factors and lower-order terms are.

The second justification is a little less obvious, but is extremely important.

So, I claim that, given the level at which we're describing and analyzing algorithms in this course, it would be totally inappropriate to obsess unduly about exactly what the constant factors are.

Recall our discussion of the merge subroutine.

So, we wrote that subroutine down in pseudocode, and we gave an analysis of 4m + 2 for the number of lines of code executed, given an input of length m.

We also noted that it was somewhat ambiguous exactly how many lines of code we should count it as, depending on how you count loop increments and so on.

So even there, small constant factors could creep in, given the underspecification of the pseudocode.

Depending on how that pseudocode gets translated into an actual programming language like C or Java, you'll see the number of lines of code deviate even further, not by a lot, but again by small constant factors.

When such a program is then compiled down into machine code, you'll see even greater variance, depending on the exact processor, the compiler, the compiler optimizations, the programmer's implementation, and so on.

So, to summarize, because we're going to describe algorithms at a level that transcends any particular programming language, it would be inappropriate to specify precise constants.

The precise constants are ultimately determined by machine-dependent aspects, like who the programmer is, what the compiler is, what the processor is, and so on.

And now the third justification is, frankly, that we're just going to be able to get away with it.

That is, one might be concerned that ignoring things like small constant factors leads us astray.

That we wind up deriving results which suggest that an algorithm is fast when it's really slow in practice, or vice versa.

But for the problems we discuss in this course, we'll get extremely accurate predictive power, even though we won't be keeping track of lower-order terms and constant factors.

When the mathematical analysis we do suggests that an algorithm is fast, indeed it will be.

When it suggests that it's not fast, indeed that will be the case.

So we lose a little bit of granularity of information.

But we don't lose in what we really care about, which is accurate guidance about what algorithms are gonna be faster than others.

So the first two justifications, I think, are pretty self evident.

This third justification is more of an assertion, but it's one we'll be backing up over and over again as we proceed through this course.

Now, don't get me wrong.

I'm not saying constant factors aren't important in practice.

Obviously, for crucial programs the constant factors are hugely important.

If you're writing the sort of crucial inner loop that, you know, your startup's survival depends on, by all means optimize the constant like crazy.

The point is just that understanding tiny constant factors in the analysis is an inappropriate level of granularity for the kind of algorithm analysis we're going to be doing in this course.

Okay, let's move on to the third and final guiding principle.

So the third principle is that we're going to use what's called asymptotic analysis, by which I mean we will focus on the case of large input sizes, that is, on the performance of an algorithm as the size n of the input grows large, tending to infinity.

Now, this focus on large input sizes was already evident when we interpreted our bound for Merge Sort.

So, how did we describe the bound for Merge Sort? We said, oh, well, it needs a number of operations proportional to, a constant factor times, n log n.

And we very cavalierly declared that this was better than any algorithm whose running time has a quadratic dependence on the input size.

So for example, we argued that merge sort is a better, faster algorithm than something like insertion sort, without actually discussing the constant factors at all.

So, mathematically, we were saying that the running time of Merge Sort, which we can represent as the function 6n log₂ n + 6n, is better than any function which has a quadratic dependence on n, even one with a small constant, like, say, (1/2)n², which might roughly be the running time of insertion sort.

And this is a mathematical statement that is true if and only if n is sufficiently large. Once n grows large, it is certainly true that the expression on the left is smaller than the expression on the right; but for small n, the expression on the right is actually going to be smaller, because of its smaller leading constant. So in saying that Merge Sort is superior to insertion sort, the bias is that we're focusing on problems with large n. So the question you should ask is: is that reasonable? Is it a justified assumption to focus on large input sizes? And the answer is certainly yes.
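
As a quick sanity check, here is a little script, just an illustrative sketch for these two particular functions, that locates that crossover numerically:

```python
import math

def merge_sort_bound(n):
    """The upper bound we proved for Merge Sort: 6n*log2(n) + 6n."""
    return 6 * n * math.log2(n) + 6 * n

def insertion_sort_estimate(n):
    """The rough estimate used for insertion sort: (1/2)*n^2."""
    return 0.5 * n * n

# Find the smallest n at which the Merge Sort bound becomes the smaller one.
n = 2
while merge_sort_bound(n) >= insertion_sort_estimate(n):
    n += 1
print(n)  # prints 90 for these particular constants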

So the reason we focus on large input sizes is because, frankly, those are the only problems which are at all interesting.

If all you need to do is sort 100 numbers, use whatever method you want, and it's gonna happen instantaneously on modern computers.

You don't need to know say, the divide and conquer paradigm, if all you need to do is sort 100 numbers.

So one thing you might be wondering is whether, with computers getting faster all the time according to Moore's Law, it even matters to think about algorithmic analysis, if eventually all problem sizes will just be trivially solvable on super-fast computers.

But, in fact, the opposite is true.

Moore's Law, with computers getting faster, actually says that our computational ambitions will naturally grow.

We naturally focus on ever larger problem sizes.

And the gulf between an n² algorithm and an n log n algorithm will become ever wider.

A different way to think about it is in terms of how much bigger a problem size you can solve as computers get faster.

If you are using an algorithm with a running time proportional to the input size, then when computers get faster by a factor of four, you can solve problems that are a factor of four larger.

Whereas if you are using an algorithm whose running time is proportional to the square of the input size, then when a computer gets faster by a factor of four, you can only solve a problem of double the size. And we'll see even starker examples of this gulf between different algorithmic approaches as time goes on.
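
Spelling out that arithmetic: if a time budget of T lets you handle inputs of size n, a machine that is four times faster effectively gives you a budget of 4T, so the new solvable size n' satisfies

$$c \cdot n' = 4\,c\,n \;\Rightarrow\; n' = 4n \quad \text{(running time } c \cdot n\text{)},$$
$$c \cdot (n')^2 = 4\,c\,n^2 \;\Rightarrow\; n' = 2n \quad \text{(running time } c \cdot n^2\text{)}.$$

In other words, the faster the running time grows, the less you gain from faster hardware.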

So, to drive this point home, let me show you a couple of graphs.

So what we're looking at here is a graph of two functions.

The solid curve is the upper bound that we proved on Merge Sort, namely 6n log₂ n + 6n.

And the dotted line is an estimate, a rather generous estimate, of the running time of insertion sort, namely one-half times n squared.

And we see here in the graph exactly the behavior that we discussed earlier: for small n, down here, because one-half n squared has a smaller leading constant, it's actually the smaller function, and this is true up to a crossing point of maybe 90 or so.

Beyond n = 90, the quadratic growth of the n² term overwhelms the fact that it had a smaller constant, and it starts being bigger than the other function, 6n log₂ n + 6n. So in the regime below 90 the graph is predicting that insertion sort will be better, and in the regime above 90 it's predicting that Merge Sort will be faster.

Now here's what's interesting. Let's rescale the x-axis and look well beyond this crossing point of 90; let's increase it by an order of magnitude, up to arrays of size 1500.

And I want to emphasize these are still very small problem sizes.

If all you need to do is sort arrays of size 1500 you really don't need to know Divide-and-conquer or anything else we'll talk about -- that's a pretty trivial problem on modern computers.


So what we're seeing is that even for very modest problem sizes here, arrays of size, say, 1500, the quadratic dependence in the insertion sort bound more than dwarfs the fact that it had a lower constant factor.

So in this large regime, the gulf between the two algorithms is growing.

And of course, if I increased it another 10X or 100x or 1000x to get to genuinely interesting problem sizes, the gap between these two algorithms would be even bigger, it would be huge.

That said, I'm not saying you should be completely ignorant of constant factors when you implement algorithms.

It's still good to have a general sense of what these constant factors are. So, for example, consider the highly tuned versions of Merge Sort which you'll find in many programming libraries.

In fact, because of the difference in constant factors, the algorithm will actually switch from Merge Sort over to insertion sort once the problem size drops below some particular threshold, say seven elements, or something like that.

So for small problem sizes, you use the algorithm with the smaller constant factors, namely insertion sort; for larger problem sizes, you use the algorithm with the better rate of growth, namely Merge Sort.
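
Purely as a sketch of that idea, and not the code of any particular library (the cutoff of seven here is just the example value mentioned a moment ago), a hybrid Merge Sort might look like this:

```python
CUTOFF = 7  # hypothetical threshold below which insertion sort wins on constants

def insertion_sort(a, lo, hi):
    """Sort a[lo:hi] in place; low overhead, good for tiny subarrays."""
    for i in range(lo + 1, hi):
        x = a[i]
        j = i
        while j > lo and a[j - 1] > x:
            a[j] = a[j - 1]
            j -= 1
        a[j] = x

def merge(left, right):
    """Standard two-finger merge of two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out

def hybrid_merge_sort(a, lo=0, hi=None):
    """Merge Sort that hands small subproblems to insertion sort."""
    if hi is None:
        hi = len(a)
    if hi - lo <= CUTOFF:
        insertion_sort(a, lo, hi)
        return
    mid = (lo + hi) // 2
    hybrid_merge_sort(a, lo, mid)
    hybrid_merge_sort(a, mid, hi)
    a[lo:hi] = merge(a[lo:mid], a[mid:hi])

if __name__ == "__main__":
    import random
    data = [random.randint(0, 999) for _ in range(50)]
    hybrid_merge_sort(data)
    assert data == sorted(data)
```

The cutoff that actually performs best depends on exactly the machine-dependent constant factors our analysis deliberately ignores, so in practice it gets chosen empirically.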

So, to review: our first guiding principle is that we're going to pursue worst-case analysis.

We're going to look at bounds on the performance, on the running time, of an algorithm which make no domain assumptions, no assumptions about which input of a given length the algorithm is provided.

The second guiding principle is that we're not going to focus on constant factors or lower-order terms; that would be inappropriate given the level of granularity at which we're describing algorithms. And the third is that we're going to focus on the rate of growth of algorithms for large problem sizes.

Putting these three principles together, we get a mathematical definition of a fast algorithm.

Namely, we're gonna pursue algorithms whose worst case running time grows slowly as a function of the input size.

So let me tell you how you should interpret what I just wrote down in this box.

So on the left hand side is clearly what we want.

Okay, we want algorithms which run quickly if we implement them.

And on the right hand side is a proposed mathematical surrogate of a fast algorithm.

Right, the left hand side is not a mathematical definition.

The right hand side is, as will become clear in the next set of lectures.

So we're identifying fast algorithms with those that have a good asymptotic running time, one which grows slowly with the input size.

Now, what would we want from a mathematical definition? We'd want a sweet spot.

On one hand we want something we can actually reason about.

This is why we zoom out and squint and ignore things like constant factors and lower terms.

We can't keep track of everything.

Otherwise we'd never be able to analyze stuff.

On the other hand, we don't want to throw out the baby with the bathwater; we want to retain predictive power. And this definition turns out, for the problems we're going to talk about in this course, to be the sweet spot for reasoning about algorithms: worst-case analysis using the asymptotic running time.

We'll be able to prove lots of theorems.

We'll be able to establish a lot of performance guarantees for fundamental algorithms, but at the same time we'll have good predictive power: what the theory advocates will in fact be algorithms that are known to be best in practice.

So, the final explanation I owe you, is, what do I mean by, the running time grows slowly, with respect to the input size? Well, the answer depends a little bit on the context, but, for almost all of the problems we're going to discuss, the holy grail will be to have what's called a linear time algorithm, an algorithm whose number of instructions grows proportional to the input size.
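
For a concrete, if trivial, example of what linear time means, consider computing the maximum of an array: one pass over the input, with a constant amount of work per element. A minimal sketch:

```python
def array_max(a):
    """A canonical linear-time computation (assumes a non-empty list):
    one pass, constant work per element."""
    best = a[0]
    for x in a[1:]:
        if x > best:
            best = x
    return best
```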

So, we won't always be able to achieve linear time, but that's, in some sense, the best case scenario.

Notice, linear time is even better than what we achieved with our Merge Sort algorithm for sorting.

Merge Sort runs in slightly superlinear time; it's n times log n, where n is the input size.

If possible, we'd like to be linear time.

It's not always going to be possible, but that is what we will aspire toward for most of the problems we'll discuss in this course.

Looking ahead, the next series of videos is going to have two goals.

First of all, on the analysis side, I'll describe formally what I mean by asymptotic running time.

I'll introduce "Big Oh" notation and its variants, explain its mathematical definitions, and give a number of examples.

On the design side, we'll get more experience applying the divide and conquer paradigm to further problems.

See you then.

