Homework 5

Homework Submission Workflow

When you submit your work, follow the instructions on the submission workflow page (https://www.cs.duke.edu/courses/fall18/compsci371d/homework/workflow.html) scrupulously for full credit.

Important: Failure to do any of the following will result in lost points:

- Submit one PDF file and one notebook per group
- Enter all the group members in your PDF Gradescope submission, not just in your documents. Look at this Piazza post (https://piazza.com/class/jl2rbsllc6c25v?cid=45) if you don't know how to do this
- Match each answer (not question!) with the appropriate page in Gradescope
- Avoid large blank spaces in your PDF file

Part 1: Numerical Differentiation

Gradient and Jacobian

The gradient of a function $f : \mathbb{R}^n \rightarrow \mathbb{R}$ is typically viewed as a column vector in the literature:

$$\nabla f(\mathbf{z}) = \begin{bmatrix} \frac{\partial f}{\partial z_1} \\ \vdots \\ \frac{\partial f}{\partial z_n} \end{bmatrix}$$

A more natural (and not uncommon) view of the gradient is as a row vector, because then the gradient is merely a special case (for $m = 1$) of the Jacobian matrix of a function $\mathbf{f} : \mathbb{R}^n \rightarrow \mathbb{R}^m$, defined as the following $m \times n$ matrix:

$$J(\mathbf{z}) = \begin{bmatrix} \frac{\partial f_1}{\partial z_1} & \cdots & \frac{\partial f_1}{\partial z_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial z_1} & \cdots & \frac{\partial f_m}{\partial z_n} \end{bmatrix}$$

The Jacobian matrix can therefore be viewed as the column vector of gradients of the $m$ components of the function $\mathbf{f}$, one gradient per row.

The distinction between gradient as a row vector and gradient as a column vector is relevant in mathematics, but is of course moot if vectors are represented as numpy arrays in Python, because these arrays do not distinguish between row vectors and column vectors.

Problem 1.1

The transformation from polar coordinates $(r, \theta)$ to Cartesian coordinates $(x, y)$ on the plane can be written as follows:

$$x = r \cos\theta \;, \qquad y = r \sin\theta$$

Write a formula for the Jacobian of this transformation.

Problem 1.2

Write functions with headers

def P2C(z):

and

def JacobianP2C(z):

that compute the transformation above and its Jacobian matrix. Inputs and outputs should be numpy arrays. Show your code. You will test it in a later problem.
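For reference, here is one possible shape such code could take — a minimal sketch, assuming z holds $(r, \theta)$ in that order; it is not the only acceptable answer:

import numpy as np

def P2C(z):
    # Polar-to-Cartesian map: z = [r, theta] -> [r cos(theta), r sin(theta)]
    r, theta = z
    return np.array([r * np.cos(theta), r * np.sin(theta)])

def JacobianP2C(z):
    # Analytic 2x2 Jacobian of P2C: row i is the gradient of output component i
    r, theta = z
    return np.array([[np.cos(theta), -r * np.sin(theta)],
                     [np.sin(theta),  r * np.cos(theta)]])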
Problem 1.3

Numerical Approximation of the Derivative

In homework 4, we explored the concept of gradient and Hessian in a case in which the function $f : \mathbb{R}^2 \rightarrow \mathbb{R}$ was known analytically. This is the situation students are most familiar with, because that's what calculus courses emphasize.

In practice, however, $f$ is often known either (i) through a piece of code in some programming language, or, even more opaquely, (ii) as a black box: a program that can be called at will with an input $\mathbf{z}$ and produces some output $f(\mathbf{z})$. We will explore options for scenario (i) in a later assignment. In this part of this assignment, we take the black-box route: We know nothing about $f$ except that it is differentiable (or else we cannot differentiate it!). Of course, this situation is general and subsumes scenario (i) by just forgoing any analysis of the code. However, we will see later that having access to the code opens other, interesting avenues.

By definition of (partial) derivative, if $\mathbf{f} : \mathbb{R}^n \rightarrow \mathbb{R}^m$, we can write

$$\frac{\partial \mathbf{f}}{\partial z_i}(\mathbf{z}) = \lim_{\delta \rightarrow 0} \frac{\mathbf{f}(\mathbf{z} + \delta \mathbf{e}_i) - \mathbf{f}(\mathbf{z})}{\delta}$$

where $\mathbf{e}_i$ is the $i$-th column of the $n \times n$ identity matrix. That is, $\mathbf{e}_i$ is a vector of zeros except for a one in position $i$.

We can approximate the limit above by picking a small value of $\delta$ and disposing of the limit:

$$\frac{\partial \mathbf{f}}{\partial z_i}(\mathbf{z}) \approx \frac{\mathbf{f}(\mathbf{z} + \delta \mathbf{e}_i) - \mathbf{f}(\mathbf{z})}{\delta}$$

For reasons that would take too long to get into, a much better numerical approximation is provided by the so-called central difference:

$$\frac{\partial \mathbf{f}}{\partial z_i}(\mathbf{z}) \approx \frac{\mathbf{f}(\mathbf{z} + \delta \mathbf{e}_i) - \mathbf{f}(\mathbf{z} - \delta \mathbf{e}_i)}{2\delta}$$

There is a whole literature on how to choose $\delta$ appropriately: too large, and we are too far from the limit; too small, and numerical errors in the computation prevail. We take a pragmatic approach in this introductory exploration and use $\delta = 10^{-5}$ most of the time. More on this aspect later.

Write a Python function with header

def Jacobian(f, z, delta=1e-5):

that takes a function f from $\mathbb{R}^n$ to $\mathbb{R}^m$, a numpy vector z with $n$ entries, and an optional value for $\delta$, and returns a numpy array with the Jacobian of f at z, using the central difference formula given above.

Show your code. You will test it in the next problem.

Programming Notes

Your function Jacobian must work for any function f from $\mathbb{R}^n$ to $\mathbb{R}^m$ for any $n$ and $m$. The function f takes a numpy vector with $n$ entries as input and produces a numpy vector with $m$ entries. The function Jacobian outputs a numpy array of shape $m \times n$ (not $n \times m$!).

For later use, it is important that Jacobian return a numpy array in all cases (even when the result is a scalar).
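One way the central-difference loop can be organized — a sketch under the assumptions above, in particular that f accepts perturbed copies of z:

import numpy as np

def Jacobian(f, z, delta=1e-5):
    # Numerical Jacobian of f: R^n -> R^m at z, via central differences.
    z = np.array(z, dtype=float)
    n = z.size
    cols = []
    for i in range(n):
        e = np.zeros(n)
        e[i] = delta  # the perturbation delta * e_i, with e_i the i-th identity column
        # i-th column of the Jacobian: (f(z + delta e_i) - f(z - delta e_i)) / (2 delta)
        cols.append((np.array(f(z + e)) - np.array(f(z - e))) / (2 * delta))
    # cols[i] stacks as row i; transpose so rows index outputs (shape m x n)
    return np.array(cols).T

Note that when f is scalar-valued this returns a one-dimensional array of $n$ entries, which is what the gradient comparisons in the tests below expect.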
Problem 1.4

Show the result of running the tests below. This will happen automatically once your functions Jacobian, P2C, and JacobianP2C are defined correctly (and you run the cell below). These tests use the default value for $\delta$.

In [1]:

def compare(a, f, b, delta=1e-5):
    def a2s(a):
        def n2s(x):
            return '{:g}'.format(round(x, 4))
        try:
            return '[' + '; '.join([', '.join([n2s(y) for y in row]) for row in a]) + ']'
        except TypeError:
            try:
                return '[' + ', '.join([n2s(y) for y in a]) + ']'
            except TypeError:
                return '[]' if a.size == 0 else n2s(a)
    aName, fName, bName = a.__name__, f.__name__, b.__name__
    msgBase = '{:s}({:s}, {{:s}}) = {{:s}}\n{:s}({{:s}}) = {{:s}}'
    msg = msgBase.format(aName, fName, bName)
    zs = np.array([[0, 0], [1, 0], [2, 1], [2, 2]])
    for z in zs:
        print(msg.format(a2s(z), a2s(a(f, z, delta)), a2s(z), a2s(b(z))), end='\n\n')

try:
    compare(Jacobian, P2C, JacobianP2C)
except NameError:
    pass

Problem 1.5

The Hessian is the Jacobian of the gradient, in the following sense. If the gradient of $f : \mathbb{R}^n \rightarrow \mathbb{R}$ is a row vector,

$$\nabla f(\mathbf{z}) = \begin{bmatrix} \frac{\partial f}{\partial z_1} & \cdots & \frac{\partial f}{\partial z_n} \end{bmatrix}$$

the Hessian

$$H(\mathbf{z}) = \begin{bmatrix} \frac{\partial^2 f}{\partial z_1^2} & \cdots & \frac{\partial^2 f}{\partial z_1 \partial z_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial z_n \partial z_1} & \cdots & \frac{\partial^2 f}{\partial z_n^2} \end{bmatrix}$$

can be written as follows:

$$H(\mathbf{z}) = J_{\nabla f}(\mathbf{z}) \;,$$

the Jacobian of the function $\mathbf{z} \mapsto \nabla f(\mathbf{z})$.

Use the fact that the Hessian is the Jacobian of the gradient to write a Python function with header

def Hessian(f, x, delta=1e-5):

that uses your gradient function to compute the Hessian of f at x. Show your code.

Programming Notes

Your function must work for a function $f : \mathbb{R}^n \rightarrow \mathbb{R}$ for any $n$, not just for $n = 2$. However, the codomain of $f$ has dimension $1$ now.

You will get full credit if your code is correct and uses Jacobian to fullest advantage.

Make sure that the value of delta is propagated to all calls to Jacobian.
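For concreteness, here is a sketch of how Problem 1.5 can be solved by reusing Jacobian (assuming a Jacobian like the one sketched above, which handles any vector-valued function):

def Hessian(f, x, delta=1e-5):
    # The gradient of f, viewed as a function from R^n to R^n, has the
    # n x n Hessian of f as its Jacobian.
    def gradient(z):
        return Jacobian(f, z, delta)  # propagate delta to every call
    return Jacobian(gradient, x, delta)

Since the gradient here is itself a central-difference approximation, the Hessian is a second-order finite difference, which is part of why the choice of $\delta$ matters in the experiments below.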
Problem 1.6

Show the result of running the tests below. This will happen automatically once your function Hessian is defined correctly (and you run the cell below).

The tests use a function f which is the same as in homework 4 and is given below. They also use a function gradientF, given below, which computes the gradient of f through the analytic formula (as you did in homework 4). These tests use the default value for $\delta$.

In [2]:

shift, scale = [2, 1], 10

def f(z):
    d = z - shift
    return np.array(np.inner(z, z) / scale + np.exp(-np.inner(d, d)))

def gradientF(z):
    d = z - shift
    return 2 * (z / scale - d * np.exp(-np.inner(d, d)))

def HessianF(z):
    I = np.eye(2)
    d = z - shift
    return 2 * (I / scale + (2 * np.outer(d, d) - I) * np.exp(-np.inner(d, d)))

try:
    compare(Jacobian, f, gradientF)
    compare(Hessian, f, HessianF)
except NameError:
    pass

Problem 1.7

The default value for $\delta$ happens to work for the examples above. This is because all functions involved are tame, in the sense that their values have comparable magnitudes. Even then, however, the choice of $\delta$ is not inconsequential.

Write one clear and concise sentence to describe which results are good and which are not in the tests below.

Remark

There are of course better methods for choosing $\delta$ than just trying some value. In particular, the choice needs to adapt to the range of values encountered during the computations. However, the experiment in this problem should at least serve as a warning that numerical differentiation can be tricky to get right. A later assignment will explore a different technique for computing derivatives, called algorithmic differentiation or automatic differentiation. Variants of this technique are used in many packages that implement deep learning methods. Stay tuned.

In [3]:

try:
    delta = 1e-9
    compare(Jacobian, f, gradientF, delta)
    compare(Hessian, f, HessianF, delta)
except NameError:
    pass

Part 2: Steepest Descent

In the steepest descent algorithm for the minimization of a function $f : \mathbb{R}^n \rightarrow \mathbb{R}$, the new value $\mathbf{z}_{k+1}$ of $\mathbf{z}$ is found from the current value $\mathbf{z}_k$ by a technique called line search. Specifically, given the search direction

$$\mathbf{p}_k = -\nabla f(\mathbf{z}_k) \;,$$

line search defines the function $h$ as

$$h(\alpha) = f(\mathbf{z}_k + \alpha \mathbf{p}_k)$$

and finds a local minimum $\alpha_k$ of $h$ for $\alpha > 0$. The line search function returns the corresponding point

$$\mathbf{z}_{k+1} = \mathbf{z}_k + \alpha_k \mathbf{p}_k \;.$$

The class notes show an iterative bracketing technique to perform line search. A first stage finds an initial bracketing triple $(a, b, c)$, and a second stage shrinks the triple.

In this part, you will implement and test the steepest descent algorithm. This implies that you are not allowed to use any existing function that implements steepest descent, either exactly or substantively. However, you are allowed to use the function scipy.optimize.minimize_scalar, which implements the shrinkage part of line search, as explained below.

Problem 2.1

Using the imports and definition in the cell below, write a function with header

def lineSearch(f, g, z0):

that performs line search on the function f, whose gradient is computed by the function g, starting at point z0. If the starting point is in $\mathbb{R}^n$, then the functions $f$ and $g$ have the following signatures:

$$f : \mathbb{R}^n \rightarrow \mathbb{R} \;, \qquad g : \mathbb{R}^n \rightarrow \mathbb{R}^n$$

Show your code, and the result of running the function with the function f and value z0 defined below. Defining the corresponding gradient g is your task.

In [4]:

from scipy import optimize as opt
import numpy as np
import math
import matplotlib.pyplot as plt

small = math.sqrt(np.finfo(float).eps)

f, z0 = lambda z: -np.sin(z), 0

Programming Notes

If the magnitude of the gradient of f at z0 is smaller than small (defined above), then lineSearch should just return z0 without further work. Otherwise, lineSearch should return the new value of $\mathbf{z}$ it found (not the value of $\alpha$).

The class notes state that the third element $c$ of the initial bracketing triple can be found by starting at some small value and then increasing $c$ until $h(c)$ is greater than $h$ evaluated at the previous value of $c$. In doing so, start at small and increase $c$ every time by a multiplicative factor 1.2:

c *= 1.2

A factor of 2 would work as well, but 1.2 is a bit safer. Allow for a maximum value for $c$, and report failure if that value is exceeded. This should not happen in this assignment.

It will be convenient to use a two-item tuple for the initialization of the keyword parameter bracket for the function scipy.optimize.minimize_scalar. If bracket is the pair $(a, c)$, then the function will call another function (scipy.optimize.bracket) that finds a bracketing triple $(a, b, c)$, as defined in the class notes. So your function lineSearch first finds $c$ as described above (hint: $a = 0$), and then calls scipy.optimize.minimize_scalar to compute the result of line search. Read the scipy manual to understand how to use the result from scipy.optimize.minimize_scalar.

Problem 2.2

Write a function with header

def steepest(f, g, z0, maxK=10000, delta=small, remember=False):

that uses your lineSearch function to implement steepest descent with line search.

Show your code and the value of $\hat{\mathbf{z}}$ that results from minimizing the provided function Rosenbrock with steepest. Start the minimization at z0 (defined below), and use the default values for the keyword arguments.

Programming Notes

The gradient of the function Rosenbrock is also provided as function RosenGrad below. The function Rosenbrock has a true minimum at $\mathbf{z}^* = (1, 1)$ (defined as zStar below).

Minimization stops either when the norm of the difference between consecutive values of $\mathbf{z}$ is less than delta or when the number of iterations equals or exceeds maxK.

If the argument remember is False, the function steepest returns the value $\hat{\mathbf{z}}$ of $\mathbf{z}$ it found. If remember is True, the function returns a pair (z, history). The first item of the pair is still $\hat{\mathbf{z}}$, and the second item is a numpy.array that records the value of $\mathbf{z}$ traversed by every step of steepest descent, including $\mathbf{z}_0$. Here, if $K$ is the number of steps taken, history has $K + 1$ rows of $n$ entries each.

One step is defined as one call to lineSearch. This implies that history is not to record the intermediate steps taken within the call to lineSearch.

Be patient, as the search may take a while. (One possible shape for lineSearch and steepest is sketched at the end of this assignment.)

In [5]:

def Rosenbrock(z):
    return 100 * (z[1] - z[0] ** 2) ** 2 + (1 - z[0]) ** 2

def RosenGrad(z):
    return np.array([400 * (z[0] ** 3 - z[0] * z[1]) + 2 * z[0] - 2,
                     200 * (z[1] - z[0] ** 2)])

z0, zStar = np.array([-1.2, 1]), np.array([1, 1])

Problem 2.3

Now run steepest as follows (the try/except is there so that the notebook will still run if steepest is undefined).

In [6]:

try:
    [zHat, history] = steepest(Rosenbrock, RosenGrad, z0, maxK=1000, remember=True)
except NameError:
    pass

Make a contour plot of the Rosenbrock function using 10 levels drawn as thin gray lines (use colors='grey', linewidths=1 in your call to matplotlib.pyplot.contour to this effect) for a rectangle of $(z_1, z_2)$ values that contains both $\mathbf{z}_0$ and $\mathbf{z}^*$. To make the contour plot, sample this rectangle of values with 100 samples in each dimension.

Superimpose a plot of the path recorded in history on the contour plot. Also draw a blue dot at $\mathbf{z}_0$ and a red dot at $\mathbf{z}^*$, the true minimum.

Show code and plot.

Problem 2.4

Convergence slows down as $\mathbf{z}^*$ is approached, and even after 1000 iterations there is still a way to go. To see this slowdown more clearly, plot a vector distance with the Euclidean distances between each of the points in history (obtained in the previous problem) and $\mathbf{z}^*$. Label the plot axes meaningfully.

Show code and plot.
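For reference, here is one way lineSearch and steepest could be put together — a sketch under the stated rules (bracket growth from small by a factor of 1.2, a cap on $c$ chosen here arbitrarily as 1e10, and scipy.optimize.minimize_scalar for the shrinkage stage); it is not the official solution:

import math
import numpy as np
from scipy import optimize as opt

small = math.sqrt(np.finfo(float).eps)

def lineSearch(f, g, z0):
    # One line search from z0 along the negative gradient direction.
    z0 = np.array(z0, dtype=float)
    p = -np.array(g(z0))
    if np.linalg.norm(p) < small:
        return z0  # gradient is (numerically) zero: nothing to do
    h = lambda alpha: f(z0 + alpha * p)
    # First stage: grow c from small until h starts increasing again.
    c, cPrev = small, 0.0
    while h(c) <= h(cPrev):
        cPrev = c
        c *= 1.2
        if c > 1e10:  # arbitrary cap, an assumption of this sketch
            raise RuntimeError('lineSearch: no bracketing triple found')
    # Second stage: let scipy bracket (a, c) with a = 0 and shrink it.
    result = opt.minimize_scalar(h, bracket=(0, c))
    return z0 + result.x * p

def steepest(f, g, z0, maxK=10000, delta=small, remember=False):
    # Steepest descent: repeat lineSearch until z stops moving or maxK steps pass.
    z = np.array(z0, dtype=float)
    history = [z.copy()]
    for _ in range(maxK):
        zNew = lineSearch(f, g, z)
        done = np.linalg.norm(zNew - z) < delta
        z = zNew
        history.append(z.copy())  # one row per call to lineSearch, plus z0
        if done:
            break
    return (z, np.array(history)) if remember else z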
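Similarly, the plots of Problems 2.3 and 2.4 might be produced along these lines, assuming the cells above have run. This is a sketch; the plotting rectangle $[-2, 2] \times [-1, 3]$ is an assumption, chosen only so that both z0 and zStar are visible:

import numpy as np
import matplotlib.pyplot as plt

# Problem 2.3: contour plot of Rosenbrock with the descent path superimposed.
z1 = np.linspace(-2, 2, 100)   # 100 samples per dimension
z2 = np.linspace(-1, 3, 100)
Z1, Z2 = np.meshgrid(z1, z2)
F = 100 * (Z2 - Z1 ** 2) ** 2 + (1 - Z1) ** 2  # Rosenbrock on the grid

plt.figure()
plt.contour(Z1, Z2, F, 10, colors='grey', linewidths=1)  # 10 thin gray levels
plt.plot(history[:, 0], history[:, 1])                   # path recorded by steepest
plt.plot(z0[0], z0[1], 'b.', markersize=12)              # blue dot at the start
plt.plot(zStar[0], zStar[1], 'r.', markersize=12)        # red dot at the true minimum
plt.show()

# Problem 2.4: Euclidean distance to the true minimum at every step.
distance = np.linalg.norm(history - zStar, axis=1)
plt.figure()
plt.plot(distance)
plt.xlabel('step of steepest descent')
plt.ylabel('Euclidean distance to zStar')
plt.show()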