Abstract
The problem of computing the solutions of a system of multivariate polynomial equations can be approached by the Stetter-Möller matrix method, which casts the problem into a large eigenvalue problem. This Stetter-Möller matrix method forms the starting point for the development of computational procedures for the two main applications addressed in this thesis:
* The global optimization of a multivariate polynomial, described in Part II of this thesis, and
* the H2 model-order reduction problem, described in Part III of this thesis.
Part I of this thesis provides an introduction to the background of algebraic geometry and an overview of various methods for solving systems of polynomial equations.
In Chapter 4 a global optimization method is worked out which computes the global minimum of a Minkowski-dominated polynomial. The Stetter-Möller matrix method transforms the problem of finding solutions to the system of first-order conditions of this polynomial into an eigenvalue problem. This method is described in [48], [50] and [81]. A drawback of this approach is that the matrices involved in this eigenvalue problem are usually very large.
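The eigenvalue reformulation can be illustrated in a toy univariate setting (a sketch only; the thesis treats the multivariate case, and the polynomial below is an arbitrary stand-in): the critical points of a polynomial p are the roots of its first-order condition p' = 0, and `numpy.roots` computes these as the eigenvalues of the companion matrix of p'.

```python
import numpy as np

# Toy univariate analogue of the eigenvalue approach (illustrative only):
# the critical points of p are the eigenvalues of the companion matrix of p'.
p = np.array([1.0, 0.0, -3.0, 1.0, 0.0])   # p(x) = x^4 - 3x^2 + x (stand-in)
dp = np.polyder(p)                          # p'(x) = 4x^3 - 6x + 1
crit = np.roots(dp)                         # companion-matrix eigenvalues

# Keep the real critical points and pick the one with the smallest value of p.
real_crit = crit[np.abs(crit.imag) < 1e-9].real
x_star = real_crit[np.argmin(np.polyval(p, real_crit))]
print(x_star, np.polyval(p, x_star))
```

For a dominated polynomial the global minimum is attained at one of these finitely many real critical points, which is why selecting the relevant eigenvalue suffices.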
The research question which plays a central role in Part II of this thesis is formulated as follows: how to improve the efficiency of the Stetter-Möller matrix method, applied to the global optimization of a multivariate polynomial, by means of an nD-systems approach? The efficiency of this method is improved in this thesis by using a matrix-free implementation of the matrix-vector products and by using an iterative eigenvalue solver instead of a direct one.
The matrix-free implementation is achieved by the development and implementation of an nD-system of difference equations, as described in Chapter 5. This yields a routine that computes the action of the large and sparse matrix involved on a given vector. This routine is used by the iterative eigenproblem solvers to compute the matrix-vector products in a matrix-free fashion.
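The matrix-free idea can be sketched with a generic stand-in operator (the tridiagonal action below is purely illustrative; in the thesis this role is played by the nD-system of difference equations): the iterative solver only ever calls a routine v ↦ Av, so the matrix is never formed explicitly.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

# Matrix-free setup (sketch): the solver sees only a routine v -> A @ v.
# Here a stand-in 1D Laplacian action takes the place of the nD-system;
# the matrix A is never stored explicitly.
n = 100

def matvec(v):
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

A = LinearOperator((n, n), matvec=matvec, dtype=float)

# An iterative solver can then target a few smallest (algebraic) eigenvalues
# using matrix-vector products alone.
vals = eigsh(A, k=3, which="SA", return_eigenvectors=False)
print(np.sort(vals))
```

The same pattern carries over to any operator whose action is cheap to evaluate even when the matrix itself is large.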
To study the efficiency of such an nD-system we have set up a corresponding shortest-path problem. It turns out that this problem quickly becomes huge and difficult to solve. However, it is possible to set up a relaxation of the shortest-path problem that is easier to solve. This is described in Sections 5.3.1 and 5.3.2. The conclusions about these shortest-path problems lead to some heuristic methods, discussed in Section 5.3.3, to arrive cheaply at suboptimal paths with acceptable performance.
Another way to improve the efficiency of an nD-system is to apply parallel computing techniques, as described in Section 5.3.4. However, it turns out that parallelization is not very useful in this application of the nD-system.
Iterative eigenvalue solvers are described in Chapter 6. An advantage of such a solver is that it is compatible with an nD-systems approach and that it can focus on a subset of the eigenvalues, so that it is no longer necessary to compute all the eigenvalues of the matrix. For global polynomial optimization the most interesting eigenvalue is the smallest real one. Therefore this feature is developed and implemented in a Jacobi-Davidson eigenproblem solver, as described in Section 6.3. In Section 6.4 some procedures are studied to project an approximate eigenvector onto a nearby vector with Stetter structure, in order to speed up the convergence of the iterative solver.
The development of the JDCOMM method, a Jacobi-Davidson eigenvalue solver for commuting matrices, is described in Section 6.5. The most important newly implemented features of this matrix-free solver are that: (i) it computes the eigenvalues of the matrix of interest while iterating with another, much sparser (but commuting) matrix, which results in a speed-up in computation time and a decrease in the required number of floating-point operations, and (ii) it focuses on the smallest real eigenvalues first.
In Chapter 7 we present the results of the numerical experiments in which the global minima of various Minkowski-dominated polynomials are computed using the approaches and techniques of Part II of this thesis. In the majority of the test cases the SOSTOOLS software approach is more efficient than the nD-systems approach in combination with an iterative eigenvalue solver. Some test cases, however, cannot be solved accurately by the SOSTOOLS software since they produce large errors, which can limit the practical application of this software. In general our approach tends to give more accurate results. In Section 7.5 four experiments are described in which the newly developed JDCOMM method is used to compute the global minima. The result here is that the JDCOMM method has superior performance over the other methods: it requires fewer matrix-vector operations and therefore also less computation time.
The H2 model-order reduction problem, described in Part III of this thesis, deals with finding an approximating system of reduced order N−k to a given system of order N. This is called the co-order k case. In [50] an approach is introduced where the H2 global model-order reduction problem for the co-order k = 1 case is reformulated as the problem of finding solutions of a quadratic system of polynomial equations.
The research question answered in Part III is: how to find the global optimum of the H2 model-order reduction problem, for a reduction of order N to order N − k, using the techniques of the Stetter-Möller matrix method in combination with an nD-system and an iterative eigenvalue solver? Furthermore, we give answers to the following questions: how to improve the performance for co-order k = 1 reduction, and how to develop new techniques or extensions for co-order k = 2 and k = 3 reduction?
The approach introduced in [50] for the co-order k = 1 case is generalized and extended in Chapter 8, which results in a joint framework in which to study the H2 model-order reduction problem for various co-orders k ≥ 1. In the co-order k ≥ 1 case the problem is reformulated as finding the solutions to a system of quadratic equations which contains k − 1 additional parameters and whose solutions should satisfy k − 1 additional linear constraints. The Stetter-Möller method is used to transform the problem of finding solutions of this system of quadratic equations into an eigenvalue problem containing one or more parameters. From the solutions of such an eigenvalue problem the corresponding feasible solutions for the real approximation G(s) of order N−k can be selected. A generalized version of the H2 criterion is used to select the globally best approximation.
In Chapter 9 this approach leads, in the co-order k = 1 case, to a large conventional eigenvalue problem, as described in [50]. We improve the efficiency of this method through a matrix-free implementation, using an nD-system in combination with an iterative eigenvalue solver which targets the smallest real eigenvalue. In Section 9.3 the potential of this approach is demonstrated by an example involving the efficient reduction of a system of order 10 to a system of order 9.
In Chapter 10 we describe how the Stetter-Möller matrix method in the co-order k = 2 case, when the additional linear constraint is taken into account, yields a rational matrix in one parameter. This gives rise to a polynomial eigenvalue problem, which is rewritten as a generalized and singular eigenvalue problem in order to solve it. To accurately compute the eigenvalues of this singular matrix, the singular parts are split off by computing the Kronecker canonical form. In Section 10.6 three examples are given where this co-order k = 2 approach is successfully applied to obtain a globally optimal approximation of order N−2.
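The rewriting step from a polynomial to a generalized eigenvalue problem can be sketched for the quadratic case via the standard companion linearization (the matrices below are random stand-ins, not the thesis's co-order k = 2 data): (l²M + lC + K)x = 0 becomes Az = lBz with z = [x; lx].

```python
import numpy as np
from scipy.linalg import eig

# Sketch of linearization: the quadratic eigenvalue problem
# (l**2 * M + l * C + K) x = 0 is rewritten as the generalized eigenvalue
# problem A z = l * B z with z = [x; l*x]. M, C, K are arbitrary stand-ins.
rng = np.random.default_rng(0)
n = 4
M = np.eye(n)
C = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))

A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-K, -C]])
B = np.block([[np.eye(n), np.zeros((n, n))],
              [np.zeros((n, n)), M]])

lam, Z = eig(A, B)

# The top block of each eigenvector recovers x, and the quadratic
# residual (l**2*M + l*C + K) @ x should vanish for each eigenpair.
l, x = lam[0], Z[:n, 0]
print(np.linalg.norm((l**2 * M + l * C + K) @ x))
```

When the pencil (A, B) is singular, as in the thesis's setting, this direct solve is no longer reliable and the singular parts must first be split off, e.g. via the Kronecker canonical form.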
Applying the Stetter-Möller matrix method to the co-order k = 3 case in Chapter 11 yields two rational matrices in two parameters. This is cast into a two-parameter polynomial eigenvalue problem involving two matrices and one common eigenvector. Both matrices are joined into one rectangular and singular matrix. The ideas of Chapter 10 regarding the Kronecker canonical form computations are then useful to compute the values ρ1 and ρ2 which make these matrices simultaneously singular. An important result is described in this chapter: the solutions to the system of equations are determined by the values which make the transformation matrices, used in the Kronecker canonical form computations, singular. With these values the globally optimal approximation G(s) of order N−3 is computed. An example is worked out at the end of this chapter, on which the co-order k = 3 technique exhibits better performance than the co-order k = 1 and k = 2 techniques.
Chapter 12 provides concluding remarks and directions for further research on the results described in Parts II and III of this thesis.
Original language: English
Qualification: Doctor of Philosophy
Award date: 9 Dec 2010
Print ISBNs: 9789085900467
Publication status: Published, 1 Jan 2010