Algebraic polynomial system solving and applications

Research output: Thesis › Doctoral Thesis › Internal


Abstract

The problem of computing the solutions of a system of multivariate polynomial equations can be approached by the Stetter-Möller matrix method, which casts the problem into a large eigenvalue problem. This method forms the starting point for the development of computational procedures for the two main applications addressed in this thesis:

* The global optimization of a multivariate polynomial, described in Part II of this thesis, and

* the H2 model-order reduction problem, described in Part III of this thesis.

Part I of this thesis provides an introduction to the background of algebraic geometry and an overview of various methods for solving systems of polynomial equations.

In Chapter 4 a global optimization method is worked out which computes the global minimum of a Minkowski dominated polynomial. The Stetter-Möller matrix method transforms the problem of finding solutions to the system of first-order conditions of this polynomial into an eigenvalue problem. This method is described in [48], [50] and [81]. A drawback of this approach is that the matrices involved in this eigenvalue problem are usually very large.
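This transformation can be made concrete with a small computer-algebra sketch. The example below is purely illustrative (an assumed toy polynomial, with SymPy and NumPy as stand-ins, and with the multiplication matrix built explicitly, which is exactly what the thesis avoids for large problems): it constructs the matrix of multiplication by p in the quotient ring modulo the gradient ideal and reads the critical values of p off its eigenvalues; the smallest real eigenvalue is the candidate for the global minimum.

```python
# Illustrative sketch of the Stetter-Moeller idea, not the thesis implementation.
import itertools
import numpy as np
import sympy as sp

x, y = sp.symbols('x y')
p = x**4 + y**4 + x*y - x - y          # assumed toy example polynomial

# Groebner basis of the gradient ideal (the system of first-order conditions).
G = sp.groebner([sp.diff(p, x), sp.diff(p, y)], x, y, order='grevlex')

# Standard monomials (a basis of the finite-dimensional quotient ring):
# monomials that are their own normal form modulo G.
candidates = [x**i * y**j for i, j in itertools.product(range(5), repeat=2)]
basis = [m for m in candidates if G.reduce(m)[1] == m]
basis_exps = [sp.Poly(m, x, y).monoms()[0] for m in basis]

# Matrix of multiplication by p in the quotient ring: column j holds the
# coordinates of (p * basis[j] mod G) with respect to the standard monomials.
dim = len(basis)
A = sp.zeros(dim, dim)
for j, m in enumerate(basis):
    rem = sp.Poly(G.reduce(sp.expand(p * m))[1], x, y).as_dict()
    for i, e in enumerate(basis_exps):
        A[i, j] = rem.get(e, 0)

# The eigenvalues of A are the values of p at its critical points; the
# smallest real one is the candidate for the global minimum of p.
eigenvalues = np.linalg.eigvals(np.array(A.tolist(), dtype=float))
real_values = sorted(v.real for v in eigenvalues if abs(v.imag) < 1e-8)
print("candidate global minimum:", real_values[0])
```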
The research question which plays a central role in Part II of this thesis is formulated as follows: How to improve the efficiency of the Stetter-Möller matrix method, applied to the global optimization of a multivariate polynomial, by means of an nD-systems approach? The efficiency of this method is improved in this thesis by using a matrix-free implementation of the matrix-vector products and by using an iterative eigenvalue solver instead of a direct eigenvalue solver.
The matrix-free implementation is achieved by the development and implementation of an nD-system of difference equations as described in Chapter 5. This yields a routine that computes the action of the involved large and sparse matrix on a given vector. This routine is used as input for the iterative eigenproblem solvers to compute the matrix-vector products in a matrix-free fashion.
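The sketch below only illustrates the interface such a routine exposes, using SciPy's LinearOperator with a hypothetical placeholder matvec (the name nd_matvec, the dimension and the dummy recursion are all assumptions); in the thesis the product is obtained by running the nD-system of difference equations derived from the first-order conditions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

N = 1000  # hypothetical dimension of the (never explicitly formed) matrix

def nd_matvec(v):
    """Placeholder for the routine that runs the nD-system of difference
    equations and returns A @ v without ever forming A explicitly."""
    w = np.empty_like(v)
    w[0] = v[0]
    w[1:] = v[1:] + v[:-1]          # dummy, cheap, sparse-like action
    return w

# Iterative eigensolvers only need this operator, not the matrix itself.
A_op = LinearOperator((N, N), matvec=nd_matvec, dtype=float)
print(A_op.matvec(np.ones(N))[:5])
```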
To study the efficiency of such an nD-system we set up a corresponding shortest path problem. This shortest path problem quickly becomes huge and difficult to solve. However, it is possible to set up a relaxation of the shortest path problem that is easier to solve; this is described in Sections 5.3.1 and 5.3.2. The conclusions about these shortest path problems lead to some heuristic methods, discussed in Section 5.3.3, to arrive cheaply at suboptimal paths with acceptable performance.
Another way to improve the efficiency of an nD-system is to apply parallel computing techniques as described in Section 5.3.4. However, it turns out that parallelization is not very useful in this application of the nD-system.
Iterative eigenvalue solvers are described in Chapter 6. Advantages of such solvers are that they are compatible with the nD-systems approach and that they can focus on a subset of the eigenvalues, so that it is no longer necessary to compute all eigenvalues of the matrix. For global polynomial optimization the most interesting eigenvalue is the smallest real eigenvalue. Therefore this feature is developed and implemented in a Jacobi-Davidson eigenproblem solver, as described in Section 6.3. In Section 6.4 some procedures are studied to project an approximate eigenvector onto a nearby vector with Stetter structure, in order to speed up the convergence of the iterative solver.
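A minimal sketch of the combination of a matrix-free operator with an iterative eigenvalue solver is given below, with SciPy's ARPACK wrapper standing in for the Jacobi-Davidson solver developed in the thesis and a small artificial sparse matrix in place of the Stetter-Möller matrix. Note that ARPACK can only target eigenvalues with, for instance, the smallest real part, whereas the solver of Section 6.3 targets the smallest real eigenvalue directly.

```python
import numpy as np
import scipy.sparse as sps
from scipy.sparse.linalg import LinearOperator, eigs

# Artificial lower-triangular test matrix: its eigenvalues are the diagonal
# entries, with a few small, well-separated values at the left end.
n = 500
diag = np.concatenate(([1.0, 2.0, 3.0, 4.0], np.linspace(10.0, 20.0, n - 4)))
A = sps.diags([diag, 0.01 * np.ones(n - 1)], offsets=[0, -1], format='csr')

# The iterative solver only sees the action of the matrix on a vector.
A_op = LinearOperator(A.shape, matvec=lambda v: A @ v, dtype=float)

# ARPACK returns a few eigenvalues with smallest real part ('SR');
# the Jacobi-Davidson variant of the thesis targets the smallest real
# eigenvalue directly instead.
vals, vecs = eigs(A_op, k=4, which='SR')

real_vals = sorted(v.real for v in vals if abs(v.imag) < 1e-8)
print("smallest real eigenvalue found:", real_vals[0])
```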
The development of the JDCOMM method, a Jacobi-Davidson eigenvalue solver for commuting matrices, is described in Section 6.5. The most important new features of this matrix-free solver are that (i) it computes the eigenvalues of the matrix of interest while iterating with another, much sparser (but commuting) matrix, which results in a speed-up in computation time and a decrease in the required number of floating point operations, and (ii) it focuses on the smallest real eigenvalues first.
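The property exploited here can be demonstrated with a small numerical sketch (illustrative matrices, not the thesis's solver): two commuting matrices with distinct eigenvalues share their eigenvectors, so eigenvectors computed from the cheaper matrix immediately yield the eigenvalues of the matrix of interest via Rayleigh quotients. In JDCOMM this idea is embedded in a Jacobi-Davidson iteration rather than a full eigendecomposition.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
S = rng.standard_normal((n, n))            # common eigenvector basis
S_inv = np.linalg.inv(S)

A_x = S @ np.diag(rng.standard_normal(n)) @ S_inv   # "cheap" commuting matrix
A_p = S @ np.diag(rng.standard_normal(n)) @ S_inv   # matrix whose eigenvalues we want

assert np.allclose(A_x @ A_p, A_p @ A_x)   # the two matrices commute

# Eigenvectors computed from the cheap matrix ...
_, V = np.linalg.eig(A_x)

# ... immediately give the eigenvalues of the expensive matrix.
for k in range(n):
    v = V[:, k]
    lam_p = (v.conj() @ (A_p @ v)) / (v.conj() @ v)   # Rayleigh quotient
    print(f"eigenvalue of A_p from eigenvector {k} of A_x: {lam_p.real:+.4f}")
```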
In Chapter 7 we present the results of the numerical experiments in which the global minima of various Minkowski dominated polynomials are computed using the approaches and techniques described in Part II of this thesis. In the majority of the test cases the SOSTOOLS software approach is more efficient than the nD-systems approach in combination with an iterative eigenvalue solver. Some test cases, however, cannot be solved accurately by the SOSTOOLS software, since large errors occur; these errors can limit the practical applicability of this software. In general our approach tends to give more accurate results. In Section 7.5 four experiments are described in which the newly developed JDCOMM method is used to compute the global minima. Here the JDCOMM method outperforms the other methods: it requires fewer matrix-vector operations and therefore also less computation time.

The H2 model-order reduction problem, described in Part III of this thesis, deals with finding an approximating system of reduced order N−k to a given system of order N. This is called the co-order k case. In [50] an approach is introduced where the global H2 model-order reduction problem for the co-order k = 1 case is reformulated as the problem of finding the solutions of a quadratic system of polynomial equations.
The research question answered in Part III is: How to find the global optimum of the H2 model-order reduction problem for a reduction from order N to order N − k, using the techniques of the Stetter-Möller matrix method in combination with an nD-system and an iterative eigenvalue solver? Furthermore, we give answers to the following questions: how to improve the performance for co-order k = 1 reduction, and how to develop new techniques or extensions for co-order k = 2 and k = 3 reduction?
The approach introduced in [50] for the co-order k = 1 case is generalized and extended in Chapter 8, resulting in a joint framework in which the H2 model-order reduction problem can be studied for various co-orders k ≥ 1. In the co-order k ≥ 1 case the problem is reformulated as finding the solutions of a system of quadratic equations which contains k − 1 additional parameters and whose solutions should satisfy k − 1 additional linear constraints. The Stetter-Möller matrix method is used to transform the problem of finding the solutions of this system of quadratic equations into an eigenvalue problem containing one or more parameters. From the solutions of such an eigenvalue problem the corresponding feasible solutions for the real approximation G(s) of order N−k can be selected. A generalized version of the H2-criterion is used to select the globally best approximation.
In Chapter 9 this approach leads, in the co-order k = 1 case, to a large conventional eigenvalue problem, as described in [50]. We improve the efficiency of this method through a matrix-free implementation, using an nD-system in combination with an iterative eigenvalue solver which targets the smallest real eigenvalue. In Section 9.3 the potential of this approach is demonstrated by an example involving the efficient reduction of a system of order 10 to a system of order 9.
In Chapter 10 we describe how the Stetter-Möller matrix method in the co-order k = 2 case, when the additional linear constraint is taken into account, yields a rational matrix in one parameter. This gives rise to a polynomial eigenvalue problem, which is rewritten as a generalized, singular eigenvalue problem in order to solve it. To compute the eigenvalues of this singular problem accurately, the singular parts are split off by computing the Kronecker canonical form. In Section 10.6 three examples are given in which this co-order k = 2 approach is successfully applied to obtain a globally optimal approximation of order N−2.
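The rewriting step can be illustrated with the standard companion linearization of a quadratic polynomial eigenvalue problem. The sketch below uses small random matrices as stand-ins (so the resulting pencil is regular) and therefore omits the Kronecker canonical form treatment that the actual, singular co-order k = 2 problem requires.

```python
# Quadratic polynomial eigenvalue problem (A0 + rho*A1 + rho^2*A2) v = 0,
# rewritten as a generalized eigenvalue problem via companion linearization.
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(2)
n = 5
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))

# Companion linearization: L0 @ z = rho * (L1 @ z) with z = [v; rho*v].
I = np.eye(n)
Z = np.zeros((n, n))
L0 = np.block([[Z, I], [-A0, -A1]])
L1 = np.block([[I, Z], [Z, A2]])

rho, Zvecs = eig(L0, L1)

# Each finite eigenvalue rho[k], together with the top block of its
# eigenvector, solves the original quadratic problem.
residuals = []
for k, r in enumerate(rho):
    if not np.isfinite(r):
        continue
    v = Zvecs[:n, k]
    residuals.append(np.linalg.norm((A0 + r * A1 + r**2 * A2) @ v))
print("largest residual over all eigenpairs:", max(residuals))
```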
Applying the Stetter-Möller matrix method in the co-order k = 3 case in Chapter 11 yields two rational matrices in two parameters. This is cast into a two-parameter polynomial eigenvalue problem involving two matrices and one common eigenvector. Both matrices are joined into one rectangular and singular matrix. The ideas of Chapter 10 regarding the Kronecker canonical form computations are then used to compute the values ρ1 and ρ2 which make these matrices simultaneously singular. An important result is described in this chapter: the solutions to the system of equations are determined by the values which make the transformation matrices, used in the Kronecker canonical form computations, singular. With these values the globally optimal approximation G(s) of order N−3 is computed. An example is worked out at the end of this chapter, on which the co-order k = 3 technique exhibits better performance than the co-order k = 1 and k = 2 techniques.
Chapter 12 provides concluding remarks on the results described in Parts II and III of this thesis, as well as directions for further research.
Original language: English
Qualification: Doctor of Philosophy
Awarding Institution
  • Maastricht University
Supervisors/Advisors
  • Peeters, Ralf, Supervisor
  • Hanzon, B., Supervisor, External person
Award date: 9 Dec 2010
Print ISBNs: 978-90-8590-046-7
Publication status: Published - 1 Jan 2010
