
iDDA · Institute for Data and Decision Analytics
MDS6106 – Introduction to Optimization
Final Project: Image Inpainting and Nonconvex Image Compression

The goal of this project is to investigate different optimization models and to utilize the minimization methodologies introduced in the lecture to reconstruct images from partial data.

Project Description. Typically, a grey-scale image U ∈ R^{m×n} is represented as an m × n matrix where each entry U_ij contains the color information of a specific pixel. If the columns of U = (u_[1], u_[2], ..., u_[n]) are stacked, we obtain the vector form

    u = vec(U) = (u_[1]^T, u_[2]^T, ..., u_[n]^T)^T ∈ R^{mn}

of the image U. In this project, we consider a class of imaging problems known as inpainting problems. In general, inpainting describes the task of recovering an image U from partial data and observations. Specifically, we assume that parts of the image U are missing or damaged (e.g., due to scratches, stains, or compression), and the aim is to reconstruct this missing or damaged information by solving a suitable inpainting or optimization problem.

Figure 1: Examples of different damaged images. We want to recover the missing image information in the red areas.

The images in Figure 1 show several typical situations where inpainting techniques can be applied. Our overall task is to reconstruct the red target areas. In this project, we assume that these target areas are known, i.e., we have access to a binary mask Ind ∈ R^{m×n} with

    Ind_ij = 1 if the pixel (i, j) is not damaged,
    Ind_ij = 0 if the pixel (i, j) is damaged.

This mask contains the relevant information about missing or damaged parts of the image. Let us set ind := vec(Ind) ∈ R^{mn}, s := Σ_{i=1}^{mn} ind_i, and Ω := {i : ind_i = 1}. Moreover, let Ω = {q_1, q_2, ..., q_s} denote the different elements of the index set Ω. Then, we can define the selection matrix

    P := (e_{q_1}, e_{q_2}, ..., e_{q_s})^T ∈ R^{s×mn}.

The vectors e_j are unit vectors in R^{mn} and the matrix P selects all undamaged pixels of a stacked image according to the mask ind.
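For concreteness, the construction of P from the mask can be sketched as follows. This is a Python illustration (the project itself targets MATLAB); the helper name `build_selection_matrix` and the toy data are assumptions, and indices are 0-based in the code while the text uses 1-based indices.

```python
import numpy as np
import scipy.sparse as sp

def build_selection_matrix(ind):
    """Sparse selection matrix P whose rows are the unit vectors e_q
    for the indices q in Omega = {i : ind[i] == 1}."""
    ind = np.asarray(ind).ravel()
    omega = np.flatnonzero(ind)          # Omega = {q_1, ..., q_s} (0-based)
    s, mn = omega.size, ind.size
    # Row j of P is e_{q_j}^T, so P @ u picks out the undamaged pixels.
    return sp.csr_matrix((np.ones(s), (np.arange(s), omega)), shape=(s, mn))

# Tiny example: a stacked 4-pixel image with the second pixel damaged.
ind = np.array([1, 0, 1, 1])
u = np.array([10.0, 99.0, 20.0, 30.0])   # 99.0 is the damaged value
P = build_selection_matrix(ind)
b = P @ u                                # keeps only the undamaged pixels
```

Since P has exactly one unit entry per row, P u simply gathers the pixels indexed by Ω, and P^T scatters a vector in R^s back into a full-length vector.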
In particular, if U ∈ R^{m×n} is the input image and u = vec(U) is its stacked version, then the vector

    b = P u ∈ R^s

contains the color information of all undamaged pixels of the original image U. Hence, we now want to find a reconstruction y ∈ R^{mn} of u such that:

(i) the original information of the undamaged parts of U is maintained, i.e., y satisfies

    P y = b  or  P y ≈ b,   (1)

(ii) the image y recovers the missing parts of U in a "suitable" way.

In this project, we will discuss different models and strategies to achieve these goals.

Project Tasks.

1. Sparse Reconstruction and L-BFGS. The linear system of equations (1) is underdetermined and has infinitely many possible solutions. In order to recover solutions with a "natural image structure", we first consider the so-called ℓ1-regularized image reconstruction problem

    min_x  (1/2) ‖Ax − b‖² + µ‖x‖₁,   (2)

where b is derived as in (1), the operator A is defined below, and µ > 0 is given. The efficiency and success of this model is based on the fact that the ℓ1-regularization promotes sparsity and that there exist canonical sparse representations of the image data. The idea is to use a linear transformation x = Ψy of the image y in (1) and to transfer it to the frequency domain. In this new space, many images have a very sparse representation and many of the components x_i are zero or close to zero. This motivates the choice of the ℓ1-norm, ‖x‖₁ = Σ_{i=1}^{mn} |x_i|, in the model (2). The quadratic term in (2) corresponds to the condition (1) and is small when the pixels of undamaged parts of u and of the corresponding reconstruction have close values.

In this part, we want to use the discrete cosine transformation (DCT) as sparse basis for the images, i.e., we can set x = Ψy = dct(y). Since the DCT is orthogonal, this leads to the following definitions:

    Ax = A(x) = P idct(x),   A^T v = A^T(v) = dct(P^T v),   x ∈ R^{mn}, v ∈ R^s,

where idct denotes the inverse DCT.
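The operators A and A^T above can be sketched in Python as follows (a stand-in for the MATLAB version used in the project; `scipy.fft.dct`/`idct` with `norm='ortho'` give the orthonormal 1-D DCT, matching MATLAB's `dct`/`idct` on vectors, and the toy mask is an assumption):

```python
import numpy as np
from scipy.fft import dct, idct

# Toy mask on the stacked image: ind[i] == True means pixel i is undamaged.
ind = np.array([True, False, True, True])

def A(x):
    """A x = P idct(x): back to the pixel domain, then keep undamaged pixels."""
    return idct(x, norm='ortho')[ind]

def AT(v):
    """A^T v = dct(P^T v): scatter v to full length, then apply the DCT."""
    w = np.zeros(ind.size)
    w[ind] = v
    return dct(w, norm='ortho')

# Orthogonality of the DCT gives A A^T = I, hence lambda_max(A^T A) <= 1.
v = np.array([1.0, -2.0, 0.5])
assert np.allclose(A(AT(v)), v)
```

The final assertion checks A A^T = I, which follows from P P^T = I_s and the orthogonality of the DCT; this is exactly why the step-size bound λmax(A^T A) ≤ 1 stated below holds.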
In MATLAB, these operations can be represented compactly as function handles:

    P = @(x) x(ind);  A = @(x) P(idct(x));  AtA = @(x) dct(ind.*idct(x));

Since the ℓ1-norm is not differentiable, we also want to consider a smoothed and popular variant of the ℓ1-problem:

    min_x  (1/2) ‖Ax − b‖² + µ Σ_{i=1}^{mn} ϕ_hub(x_i),   (3)

where the so-called Huber function is defined as follows:

    ϕ_hub(z) = z²/(2δ)    if |z| ≤ δ,
               |z| − δ/2  if |z| > δ,      z ∈ R, δ > 0.

In contrast to the ℓ1-norm, the Huber function is continuously differentiable and hence gradient-type methods can be applied to solve problem (3).

– Implement the accelerated proximal gradient method (FISTA) for the ℓ1-problem (2). (It holds that λmax(A^T A) ≤ 1.)
– Implement the globalized L-BFGS method (Algorithm 5 with L-BFGS updates) for the smooth Huber-loss formulation (3). You can use backtracking to perform the line search and the two-loop recursion to calculate the L-BFGS update. You can choose H0 as the initial matrix for the update. In order to guarantee positive definiteness of the L-BFGS update, the pair {s^k, y^k} should only be added to the current curvature pairs if the condition (s^k)^T y^k > 10^{-14} is satisfied. Suitable choices for the memory parameter are m ∈ {5, 7, 10, 12}.
– A suitable image quality measure is the so-called PSNR value. Suppose that u* = vec(U*) is the original true (and undamaged) image; then

    PSNR := 10 · log10( mn / ‖y − u*‖² ),   where y = idct(x).

Compare the results of the two methods and models with respect to the PSNR value, the number of iterations, and the required CPU time. Test different parameter settings and variants of the methods to improve the performance of your implementations. Plot the reconstructed images and the convergence behavior of FISTA and L-BFGS. You can use the stopping criterion ‖x^{k+1} − y^{k+1}‖ ≤ tol in FISTA. How does the PSNR value behave for different tolerances?
– The choice of the parameters µ and δ can depend on the tested images; good choices are µ ∈ [0.001, 0.1] and δ ∈ [0.01, 0.5].
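The FISTA iteration for problem (2) can be sketched as follows: a gradient step on the quadratic term with step size 1 (admissible since λmax(A^T A) ≤ 1), followed by soft-thresholding, the proximal operator of µ‖·‖₁. This is a minimal Python sketch on assumed toy data, not the full project implementation:

```python
import numpy as np
from scipy.fft import dct, idct

# Toy data (assumed): 4-pixel image scaled to [0, 1], pixel 1 damaged.
ind = np.array([True, False, True, True])
u_true = np.array([0.4, 0.5, 0.6, 0.7])
b = u_true[ind]

A  = lambda x: idct(x, norm='ortho')[ind]           # A x = P idct(x)
def AT(v):
    w = np.zeros(ind.size); w[ind] = v
    return dct(w, norm='ortho')                      # A^T v = dct(P^T v)

def soft(z, t):
    """Prox of t*||.||_1: componentwise soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fista(b, mu, tol=1e-12, maxit=2000):
    x = np.zeros(ind.size)
    y, t = x.copy(), 1.0
    for k in range(maxit):
        # Forward-backward step at the extrapolated point y.
        x_new = soft(y - AT(A(y) - b), mu)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y_new = x_new + ((t - 1.0) / t_new) * (x_new - x)
        # Stopping rule ||x^{k+1} - y^{k+1}|| <= tol from the text.
        if k > 0 and np.linalg.norm(x_new - y_new) <= tol:
            x = x_new
            break
        x, y, t = x_new, y_new, t_new
    return x

x_hat = fista(b, mu=0.001)
y_img = idct(x_hat, norm='ortho')   # reconstruction in the pixel domain
```

At a solution of (2), optimality forces ‖A^T(Ax − b)‖∞ ≤ µ, so the data residual on the observed pixels stays of order µ rather than vanishing exactly.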
(The Huber function is closer to the absolute value |·| for smaller choices of δ.)

Hints and Guidelines:
– On Blackboard, we have provided two datasets containing test images and test inpainting masks. Compare the performance of your methods on several images and different types of masks.
– Typically, the pixel values U_ij are represented by integer numbers in [0, 255]. Scale the images to [0, 1]. A common initial point is zero: x0 = zeros(m*n,1).
– For debugging, you can test your implementation of the L-BFGS strategy first on a simpler example.

2. Total Variation Minimization. The total variation minimization problem utilizes a different regularization to solve the inpainting task (1). The optimization problem is given by

    min_x  ϕ(Dx)   s.t.  ‖Px − b‖₂ ≤ δ,   (4)

where P ∈ R^{s×mn} and b ∈ R^s are given as in (1) and δ > 0 is a noise parameter. Here, D ∈ R^{2mn×mn} is a discretized version of the image gradient and the mapping ϕ : R^{2mn} → R is given by

    ϕ(z) := Σ_{i=1}^{mn} ‖(z_i, z_{mn+i})^T‖ = Σ_{i=1}^{mn} √(z_i² + z_{mn+i}²).

For an input image x = vec(X), the image gradient Dx computes at the pixel (i, j) the differences

    δ1 = X_{(i+1)j} − X_{ij}  and  δ2 = X_{i(j+1)} − X_{ij}

in the x- and y-directions of the image. The total variation term ϕ(Dx) then penalizes the ℓ2-norm of (δ1, δ2)^T at all pixels. In particular, the objective function in (4) is minimized when neighboring pixels have similar values. In contrast to the ℓ1-problem, this regularization works directly in the pixel domain and we do not need to transform the image x to the frequency space.

– Solve the optimization problem (4) by implementing the primal-dual method. A pseudocode of the primal-dual method can be found below (the algorithm will be discussed in detail in the following lectures). You can use the estimate ‖D‖² ≤ 8.
– Run your code for different test images and masks and report your results. Repeat your experiments with different step sizes σ, τ > 0 – can different choices improve the performance of your implementation?
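A primal-dual iteration for (4) can be sketched as follows, taking f as the indicator of the ball {x : ‖Px − b‖ ≤ δ} and g = ϕ with K = D, so that both proximal steps are projections. This is an assumed Python sketch: `make_D` is a toy stand-in for the provided get_D, the pairing (z_i, z_{mn+i}) follows the definition of ϕ above, and τ = σ = 0.25 satisfies τσ‖D‖² ≤ 0.5 < 1:

```python
import numpy as np
import scipy.sparse as sp

def make_D(m, n):
    """Toy forward-difference image gradient (stand-in for get_D)."""
    def d1(k):
        e = np.ones(k)
        B = sp.diags([-e, e], [0, 1], shape=(k, k)).tolil()
        B[-1, :] = 0                      # zero difference at the boundary
        return B.tocsr()
    Dy = sp.kron(sp.identity(n), d1(m))   # differences within columns
    Dx = sp.kron(d1(n), sp.identity(m))   # differences across columns
    return sp.vstack([Dy, Dx]).tocsr()

def pd_tv_inpaint(b, ind, D, delta=1e-4, tau=0.25, sigma=0.25, iters=1000):
    """Primal-dual sketch for  min phi(Dx)  s.t.  ||Px - b|| <= delta."""
    mn = ind.size
    x = np.zeros(mn)
    x[ind] = b                            # warm start from the observed data
    xbar, y = x.copy(), np.zeros(2 * mn)
    for _ in range(iters):
        # Dual step: prox of sigma*g^* projects each pair (y_i, y_{mn+i})
        # onto the unit disk.
        y = y + sigma * (D @ xbar)
        norms = np.maximum(np.sqrt(y[:mn] ** 2 + y[mn:] ** 2), 1.0)
        y = y / np.concatenate([norms, norms])
        # Primal step: move along -D^T y, then project onto ||Px - b|| <= delta.
        x_new = x - tau * (D.T @ y)
        r = x_new[ind] - b
        rn = np.linalg.norm(r)
        if rn > delta:
            x_new[ind] = b + (delta / rn) * r
        xbar = 2.0 * x_new - x
        x = x_new
    return x

# Toy 3x3 example: a constant image whose center pixel is damaged.
ind = np.ones(9, dtype=bool)
ind[4] = False
b = 0.5 * np.ones(ind.sum())
x = pd_tv_inpaint(b, ind, make_D(3, 3))
```

Since the TV objective vanishes for constant images, the damaged center pixel should be filled with (approximately) the value of its neighbors.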
Compare the PSNR values with the results in part 1 – does the total variation model achieve better results? The parameter δ can be chosen small: δ ∈ {10^{-6}, 10^{-4}, 10^{-2}}.

Hints and Further Guidelines:
– On Blackboard, we have provided a MATLAB function D = get_D(m,n) to compute the operator D for given dimensions m and n. You can use this code (or your own solution) to generate D. Note that the code is tailored to the definition of the function ϕ.
– The primal-dual method will be discussed in detail in Lecture 23, November 27th. It is designed for problems of the form

    min_x  f(x) + g(Kx),

where f : R^n → R and g : R^m → R are convex functions or indicator functions of convex, closed sets, and K ∈ R^{m×n} is a given matrix. The method is defined as follows:

• Initialization: Choose initial points x^0 ∈ R^n, y^0 ∈ R^m and set x̄^0 = x^0. Choose step sizes τ, σ > 0.
• For k = 0, 1, ...: repeat the following steps:

    y^{k+1} = prox_{σg*}(y^k + σK x̄^k),
    x^{k+1} = prox_{τf}(x^k − τK^T y^{k+1}),
    x̄^{k+1} = 2x^{k+1} − x^k.

Convergence of this method can only be guaranteed if the step sizes τ and σ are chosen such that τσ‖K‖² < 1. The method does not have a natural stopping criterion. In practice, the algorithm is terminated after a suitable number of iterations or when the relative change in x^k is small:

    ‖x^{k+1} − x^k‖ / max{1, ‖x^k‖} ≤ tol.

3. Nonconvex Image Compression. In this final part of the project, we investigate a slightly different question related to inpainting. Given an image u ∈ R^{mn}, can we construct a binary mask c = ind ∈ {0, 1}^{mn} such that s = Σ_{i=1}^{mn} c_i is as small as possible, but the image u can still be reconstructed with reasonable quality by only using the pixels j ∈ {1, ..., mn} with c_j = 1? In this part, we want to consider a PDE-based image compression technique that allows us to compute such a mask c and the corresponding reconstruction x of u simultaneously. The model is given by

    min_{x,c}  (1/2) ‖x − u‖² + µ‖c‖₁   s.t.
    diag(c)(x − u) − (I − diag(c))Lx = 0,   c ∈ [0, 1]^{mn},   (5)

where u = vec(U) ∈ R^{mn} is the ground truth image, x ∈ R^{mn} is the reconstructed image, c ∈ R^{mn} denotes the inpainting mask, and L = −D^T D ∈ R^{mn×mn} is the Laplacian operator. As long as one element of c is nonzero, the matrix A(c) = diag(c) + (diag(c) − I)L can be shown to be invertible and hence x can be expressed explicitly via x = A(c)^{-1} diag(c)u. In this case, problem (5) can be reduced to

    min_c  f(c) + µ‖c‖₁   s.t.  c ∈ [0, 1]^{mn},   f(c) := (1/2) ‖A(c)^{-1} diag(c)u − u‖²,   (6)

and this model has a standard composite form. Furthermore, the gradient of f is given by

    ∇f(c) = diag(Lx + u − x)[A(c)^T]^{-1}(x − u),   where x = A(c)^{-1} diag(c)u.

In Figure 2, an exemplary output or solution of this problem is shown. The figure in the middle depicts the mask c and the associated reconstruction x of u is shown on the right. In this example, the mask has a density of s/mn ≈ 6.38%, i.e., only 6.38% of the pixels of the ground truth image (shown on the left) are used.

Figure 2: Image compression via optimal selection masks. Left: ground truth image u. Middle: inpainting mask c. Right: reconstructed image x. The mask c only selects 6.38% of the available pixels. The PSNR value of the recovered image is 35.1.

– Implement the inertial proximal gradient method (iPiano) that was presented in the lecture to solve the nonconvex optimization problem (6). You can use the method get_D to generate the Laplacian operator L = −D^T D. We suggest using c0 = ones(m*n,1) as initial point and βk ≡ β = 0.8 (or a different value close to 0.8). As the Lipschitz constant ℓ of the gradient ∇f is unknown, the simple line-search-type procedure of the inertial proximal gradient method can be used to determine ℓ adaptively.

Hints and Guidelines:
– The parameter µ depends on the specific ground truth image u. Possible choices are µ ∈ [0.001, 0.01]. If µ is too large, the mask c will be set to zero, which causes numerical issues and the reconstruction fails.
– This problem can be computationally demanding. Try to implement iPiano as efficiently as possible!
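An iPiano step for (6) combines an inertial forward step with the proximal operator of αµ‖·‖₁ plus the [0, 1]-box indicator, which reduces to clipping c − αµ to [0, 1] (since c ≥ 0 on the box). The following Python sketch uses the gradient formula stated above; the fixed Lipschitz estimate `ell`, the iteration count, and the 1-D toy data are assumptions, and the lecture's line-search procedure would replace the fixed `ell`:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def ipiano_compression(u, L, mu=0.005, beta=0.8, ell=1.0, iters=30):
    """iPiano sketch for min_c f(c) + mu*||c||_1 over c in [0,1]^mn,
    with x(c) = A(c)^{-1} diag(c) u and A(c) = diag(c) + (diag(c) - I) L."""
    mn = u.size
    I = sp.identity(mn, format='csc')
    c = np.ones(mn)                      # suggested initial point c0 = ones
    c_old = c.copy()
    alpha = 1.99 * (1.0 - beta) / ell    # step size from the hint below
    for _ in range(iters):
        Ac = (sp.diags(c) + (sp.diags(c) - I) @ L).tocsc()
        x = spsolve(Ac, c * u)           # x = A(c)^{-1} diag(c) u (sparse solve)
        # Gradient formula from the text:
        # grad f(c) = diag(Lx + u - x) [A(c)^T]^{-1} (x - u).
        g = (L @ x + u - x) * spsolve(Ac.T.tocsc(), x - u)
        # Inertial forward step, then prox of alpha*mu*||.||_1 + box indicator.
        z = c - alpha * g + beta * (c - c_old)
        c_new = np.clip(z - alpha * mu, 0.0, 1.0)
        c_old, c = c, c_new
    return c, x

# Toy 1-D example (assumed): L = -D^T D for a forward-difference D.
e = np.ones(9)
D1 = sp.diags([-e, e], [0, 1], shape=(9, 9)).tolil()
D1[-1, :] = 0
L = (-(D1.T @ D1)).tocsc()
u = np.linspace(0.0, 1.0, 9)
c, x = ipiano_compression(u, L)
```

Note that at c = c0 = ones we have A(c) = I and x = u, so ∇f(c0) = 0 and the first iterations shrink the mask purely through the ℓ1 prox; the gradient only becomes active once entries of c move away from 1.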
You can test your implementation first on smaller images. For instance, the MATLAB command imresize(u,0.5) can be used to scale the original image u to half of its size.
– Ensure that the matrix A(c) is stored in a sparse format. In this case, the backslash operator or similar methods for sparse linear systems can be utilized to compute the solution x = A(c)^{-1} diag(c)u efficiently.
– In order to prevent the Lipschitz constant ℓ from becoming too large, you can decrease it by a certain factor after a fixed number of iterations, i.e., a possible strategy is

    if mod(iter,5) == 0, ℓ = 0.95·ℓ, α = 1.99(1 − β)/ℓ, end

You can also experiment with continuation strategies for the parameter µ: start with a larger parameter µ0 > µ and reduce it after a fixed number of iterations until it coincides with the original parameter µ.
– As stopping criterion, you can either use a fixed number of iterations (250, 500, 1000, ...) or a suitable condition on the change between consecutive iterates.

Project Report and Presentation. This project is designed for groups of four students. Please send the following information to 217012017@link.cuhk.edu.cn or andremilzarek@cuhk.edu.cn by November 29th, 6:00 pm:

• Names and student ID numbers of the participating students in your group, and a group name.

Please contact the instructor in case your group is smaller to allow adjustments of the project outline and requirements.

A report should be written to summarize the project and to collect and present your different results. The report itself should take no more than 15–20 typed pages plus a possible additional appendix. It should cover the following topics and items:

• What is the project about?
• What have you done in the project? Which algorithmic components have you chosen to implement? What are your main contributions?
• Summarize your main results and observations.
• Describe your conclusions about the different problems and methods that you have studied.

You can organize your report following the outlined structure in this project description.
As the different parts of this project only depend loosely on each other, you can choose to distribute the tasks and parts among the group members. Please clearly indicate the responsibilities and contributions of each student and mention if you have received help from other groups, the teaching assistant, or the instructor.

Try to be brief, precise, and fact-based. Main results should be presented as highly condensed and organized data, not as piles of raw data. To support your summary and conclusions, you can put additional selected and organized data into an appendix, which does not count toward the page limit. Please use a cover page that shows the names and student ID numbers of your group members.

The deadline for the report submission is December 16th, 15:00. Please send your report and supporting materials to 217012017@link.cuhk.edu.cn.

The individual presentations of the project are scheduled for the afternoon of December 17th or for December 18th. More information will follow soon.

