GRASS GIS 8 Programmer's Manual 8.4.1(2025)-45ca3179ab
ccmath_grass_wrapper.c
/*****************************************************************************
 *
 * MODULE:       Grass numerical math interface
 * AUTHOR(S):    Soeren Gebbert, Berlin (GER) Dec 2006
 *               soerengebbert <at> googlemail <dot> com
 *
 * PURPOSE:      ccmath library function wrapper
 *               part of the gmath library
 *
 * COPYRIGHT:    (C) 2010 by the GRASS Development Team
 *
 *               This program is free software under the GNU General Public
 *               License (>=v2). Read the file COPYING that comes with GRASS
 *               for details.
 *
 *****************************************************************************/

#if defined(HAVE_CCMATH)
#include <ccmath.h>
#else
#include <grass/ccmath_grass.h>
#endif

/**
 * Documentation and ccmath library version 2.2.1 by Daniel A. Atkinson
 *
 Chapter 1

 LINEAR ALGEBRA

 Summary

 The matrix algebra library contains functions that
 perform the standard computations of linear algebra.
 General areas covered are:

 o Solution of Linear Systems
 o Matrix Inversion
 o Eigensystem Analysis
 o Matrix Utility Operations
 o Singular Value Decomposition

 The operations covered here are fundamental to many
 areas of mathematics and statistics. Thus, functions
 in this library segment are called by other library
 functions. Both real and complex valued matrices
 are covered by functions in the first four of these
 categories.


 Notes on Contents

 Functions in this library segment provide the basic operations of
 numerical linear algebra and some useful utility functions for operations on
 vectors and matrices. The following list describes the functions available for
 operations with real-valued matrices.

 o Solving and Inverting Linear Systems:

 solv --------- solve a general system of real linear equations.
 solvps ------- solve a real symmetric linear system.
 solvru ------- solve a real right upper triangular linear system.
 solvtd ------- solve a tridiagonal real linear system.

 minv --------- invert a general real square matrix.
 psinv -------- invert a real symmetric matrix.
 ruinv -------- invert a right upper triangular matrix.


 The solution of a general linear system and efficient algorithms for
 solving special systems with symmetric and tridiagonal matrices are provided
 by these functions. The general solution function employs an LU factorization
 with partial pivoting and is very robust. It will work efficiently on any
 problem that is not ill-conditioned. The symmetric matrix solution is based
 on a modified Cholesky factorization. It is best used on positive definite
 matrices that do not require pivoting for numeric stability. Tridiagonal
 solvers require order-N operations (N = dimension). Thus, they are highly
 recommended for this important class of sparse systems. Two matrix inversion
 routines are provided. The general inversion function is again LU based. It
 is suitable for use on any stable (i.e. well-conditioned) problem. The
 Cholesky-based symmetric matrix inversion is efficient and safe for use on
 matrices known to be positive definite, such as the variance matrices
 encountered in statistical computations. Both the solver and the inverse
 functions are designed to enhance data locality. They are very effective
 on modern microprocessors.


 o Eigensystem Analysis:

 eigen ------ extract all eigenvalues and vectors of a real
 symmetric matrix.
 eigval ----- extract the eigenvalues of a real symmetric matrix.
 evmax ------ compute the eigenvalue of maximum absolute magnitude
 and its corresponding vector for a symmetric matrix.


 Eigensystem functions operate on real symmetric matrices. Two forms of
 the general eigensystem routine are provided because the computation of
 eigenvalues only is much faster when vectors are not required. The basic
 algorithms use a Householder reduction to tridiagonal form followed by QR
 iterations with shifts to enhance convergence. This has become the accepted
 standard for symmetric eigensystem computation. The evmax function uses an
 efficient iterative power method algorithm to extract the eigenvalue of
 maximum absolute magnitude and the corresponding eigenvector.


 o Singular Value Decomposition:

 svdval ----- compute the singular values of an m by n real matrix.
 sv2val ----- compute the singular values of a real matrix
 efficiently for m >> n.
 svduv ------ compute the singular values and the transformation
 matrices u and v for a real m by n matrix.
 sv2uv ------ compute the singular values and transformation
 matrices efficiently for m >> n.
 svdu1v ----- compute the singular values and transformation
 matrices u1 and v, where u1 overloads the input
 with the first n column vectors of u.
 sv2u1v ----- compute the singular values and the transformation
 matrices u1 and v efficiently for m >> n.


 Singular value decomposition is extremely useful when dealing with linear
 systems that may be singular. Singular values near zero flag a potential
 rank deficiency in the system matrix. They can be used to identify the
 presence of an ill-conditioned problem and, in some cases, to deal with the
 potential instability. They are applied to the linear least squares problem
 in this library. Singular values also define some important matrix norm
 parameters, such as the 2-norm and the condition value. A complete
 decomposition provides both singular values and an orthogonal decomposition
 of vector spaces related to the matrix, identifying the range and null-space.
 Fortunately, a highly stable algorithm based on Householder reduction to
 bidiagonal form and QR rotations can be used to implement the decomposition.
 The library provides two forms, the second of which is more efficient when
 the dimensions satisfy m > (3/2)n.

 General Technical Comments

 Efficient computation with matrices on modern processors must be
 adapted to the storage scheme employed for matrix elements. The functions
 of this library segment do not employ the multidimensional array intrinsic
 of the C language. Access to elements employs the simple row-major scheme
 described here.

 Matrices are modeled by the library functions as arrays with elements
 stored in row order. Thus, the element in the jth row and kth column of
 the n by n matrix M, stored in the array mat[], is addressed by

 M[j,k] = mat[n*j+k] , with 0 <= j,k <= n-1 .

 (Remember that C employs zero as the starting index.) The storage order has
 important implications for data locality.

 The algorithms employed here all have excellent numerical stability, and
 the default double precision arithmetic of C enhances this. Thus, any
 problems encountered in using the matrix algebra functions will almost
 certainly be due to an ill-conditioned matrix. (The Hilbert matrices,

 H[i,j] = 1/(1+i+j) for i,j < n

 form a good example of such ill-conditioned systems.) We remind the reader
 that the appropriate response to such ill-conditioning is to seek an
 alternative approach to the problem. The option of increasing precision has
 already been exploited. Modification of the linear algebra algorithm code is
 not normally effective in an ill-conditioned problem.

------------------------------------------------------------------------------
 FUNCTION SYNOPSES
------------------------------------------------------------------------------

 Linear System Solutions:
-----------------------------------------------------------------------------
*/

/**
 \brief Solve a general linear system A*x = b.

 \param a = array containing system matrix A in row order (altered to L-U
            factored form by computation)
 \param b = array containing system vector b at entry and solution vector
            x at exit
 \param n = dimension of system
 \return 0 -> normal exit; -1 -> singular input
 */
int G_math_solv(double **a, double *b, int n)
{
    return solv(a[0], b, n);
}

/**
 \brief Solve a symmetric positive definite linear system S*x = b.

 \param a = array containing system matrix S (altered to Cholesky upper
            right factor by computation)
 \param b = array containing system vector b as input and solution vector
            x as output
 \param n = dimension of system
 \return 0 -> normal exit; -1 -> input matrix not positive definite
 */
int G_math_solvps(double **a, double *b, int n)
{
    return solvps(a[0], b, n);
}

/**
 \brief Solve a tridiagonal linear system M*x = y.

 \param a = array containing the m+1 diagonal elements of M
 \param b = array of m elements below the main diagonal of M
 \param c = array of m elements above the main diagonal
 \param x = array containing the system vector y initially, and the
            solution vector at exit (m+1 elements)
 \param m = dimension parameter ( M is (m+1)x(m+1) )
 */
void G_math_solvtd(double *a, double *b, double *c, double *x, int m)
{
    solvtd(a, b, c, x, m);
    return;
}

/**
 \brief Solve an upper right triangular linear system T*x = b.

 \param a = pointer to array of upper right triangular matrix T
 \param b = pointer to array of system vector; the computation overloads
            this with the solution vector x
 \param n = dimension (dim(a)=n*n, dim(b)=n)
 \return status flag, with 0 -> normal exit, -1 -> system singular
 */
int G_math_solvru(double **a, double *b, int n)
{
    return solvru(a[0], b, n);
}

/**
 \brief Invert (in place) a general real matrix A -> Inv(A).

 \param a = array containing the input matrix A; this is converted to the
            inverse matrix
 \param n = dimension of the system (i.e. A is n x n)
 \return 0 -> normal exit, 1 -> singular input matrix
 */
int G_math_minv(double **a, int n)
{
    return minv(a[0], n);
}

/**
 \brief Invert (in place) a symmetric real matrix, V -> Inv(V).

 The input matrix V is symmetric (V[i,j] = V[j,i]).

 \param a = array containing a symmetric input matrix; this is converted
            to the inverse matrix
 \param n = dimension of the system (dim(v)=n*n)
 \return 0 -> normal exit, 1 -> input matrix not positive definite
 */
int G_math_psinv(double **a, int n)
{
    return psinv(a[0], n);
}

/**
 \brief Invert an upper right triangular matrix T -> Inv(T).

 \param a = pointer to array of the upper right triangular matrix; this is
            replaced by the inverse matrix
 \param n = dimension (dim(a)=n*n)
 \return status flag, with 0 -> matrix inverted, -1 -> matrix singular
 */
int G_math_ruinv(double **a, int n)
{
    return ruinv(a[0], n);
}

/*
 -----------------------------------------------------------------------------

 Symmetric Eigensystem Analysis:
 -----------------------------------------------------------------------------
 */

/**
 \brief Compute the eigenvalues of a real symmetric matrix A.

 \param a = pointer to array of the symmetric n by n input matrix A; the
            computation alters these values
 \param ev = pointer to the array of output eigenvalues
 \param n = dimension parameter (dim(a)= n*n, dim(ev)= n)
 */
void G_math_eigval(double **a, double *ev, int n)
{
    eigval(a[0], ev, n);
    return;
}

/**
 \brief Compute the eigenvalues and eigenvectors of a real symmetric
 matrix A.

 The input and output matrices are related by

 A = E*D*E~ where D is the diagonal matrix of eigenvalues
 D[i,j] = ev[i] if i=j and 0 otherwise.

 The columns of E are the eigenvectors.

 \param a = pointer to store for the symmetric n by n input matrix A; the
            computation overloads this with an orthogonal matrix of
            eigenvectors E
 \param ev = pointer to the array of output eigenvalues
 \param n = dimension parameter (dim(a)= n*n, dim(ev)= n)
 */
void G_math_eigen(double **a, double *ev, int n)
{
    eigen(a[0], ev, n);
    return;
}

/**
 \brief Compute the maximum (absolute) eigenvalue and corresponding
 eigenvector of a real symmetric matrix A.

 \param a = array containing the symmetric input matrix A
 \param u = array containing the n components of the eigenvector at exit
            (vector normalized to 1)
 \param n = dimension of system
 \return eigenvalue of A with maximum absolute value; HUGE -> convergence
         failure
 */
double G_math_evmax(double **a, double *u, int n)
{
    return evmax(a[0], u, n);
}

/*
 ------------------------------------------------------------------------------

 Singular Value Decomposition:
 ------------------------------------------------------------------------------

 A number of versions of the Singular Value Decomposition (SVD)
 are implemented in the library. They support the efficient
 computation of this important factorization for a real m by n
 matrix A. The general form of the SVD is

 A = U*S*V~ with S = | D |
                     | 0 |

 where U is an m by m orthogonal matrix, V is an n by n orthogonal matrix,
 D is the n by n diagonal matrix of singular values, and S is the singular
 m by n matrix produced by the transformation.

 The singular values computed by these functions provide important
 information on the rank of the matrix A, and on several matrix
 norms of A. The number of non-zero singular values d[i] in D
 equals the rank of A. The two norm of A is

 ||A|| = max(d[i]) , and the condition number is

 k(A) = max(d[i])/min(d[i]) .

 The Frobenius norm of the matrix A is

 Fn(A) = (Sum(i=0 to n-1) d[i]^2)^(1/2) .

 Singular values consistent with zero are easily recognized, since
 the decomposition algorithms have excellent numerical stability.
 The value of a 'zero' d[i] is no larger than a few times the
 computational rounding error e.

 The matrix U1 is formed from the first n orthonormal column vectors
 of U. U1[i,j] = U[i,j] for i = 1 to m and j = 1 to n. A singular
 value decomposition of A can also be expressed in terms of the m by
 n matrix U1, with

 A = U1*D*V~ .

 SVD functions with three forms of output are provided. The first
 form computes only the singular values, while the second computes
 the singular values and the U and V orthogonal transformation
 matrices. The third form of output computes singular values, the
 V matrix, and saves space by overloading the input array with
 the U1 matrix.

 Two forms of decomposition algorithm are available for each of the
 three output types. One is computationally efficient when m ~ n.
 The second, distinguished by the prefix 'sv2' in the function name,
 employs a two-stage Householder reduction to accelerate computation
 when m substantially exceeds n. Use of functions of the second form
 is recommended for m > 2n.

 Singular value output from each of the six SVD functions satisfies

 d[i] >= 0 for i = 0 to n-1.
 -------------------------------------------------------------------------------
 */

/**
 \brief Compute the singular values of a real m by n matrix A.

 \param d = pointer to a double array of dimension n (output = singular
            values of A)
 \param a = pointer to store of the m by n input matrix A (A is altered
            by the computation)
 \param m = number of rows in A
 \param n = number of columns in A (m>=n required)
 \return status flag with: 0 -> success, -1 -> input error m < n
 */
int G_math_svdval(double *d, double **a, int m, int n)
{
    return svdval(d, a[0], m, n);
}

/**
 \brief Compute singular values when m >> n.

 \param d = pointer to a double array of dimension n (output = singular
            values of A)
 \param a = pointer to store of the m by n input matrix A (A is altered
            by the computation)
 \param m = number of rows in A
 \param n = number of columns in A (m>=n required)
 \return status flag with: 0 -> success, -1 -> input error m < n
 */
int G_math_sv2val(double *d, double **a, int m, int n)
{
    return sv2val(d, a[0], m, n);
}

/**
 \brief Compute the singular value transformation S = U~*A*V.

 \param d = pointer to a double array of dimension n (output = singular
            values of A)
 \param a = pointer to store of the m by n input matrix A (A is altered
            by the computation)
 \param u = pointer to store for the m by m orthogonal matrix U
 \param m = number of rows in A
 \param v = pointer to store for the n by n orthogonal matrix V
 \param n = number of columns in A (m>=n required)
 \return status flag with: 0 -> success, -1 -> input error m < n
 */
int G_math_svduv(double *d, double **a, double **u, int m, double **v, int n)
{
    return svduv(d, a[0], u[0], m, v[0], n);
}

/**
 \brief Compute the singular value transformation when m >> n.

 \param d = pointer to a double array of dimension n (output = singular
            values of A)
 \param a = pointer to store of the m by n input matrix A (A is altered
            by the computation)
 \param u = pointer to store for the m by m orthogonal matrix U
 \param m = number of rows in A
 \param v = pointer to store for the n by n orthogonal matrix V
 \param n = number of columns in A (m>=n required)
 \return status flag with: 0 -> success, -1 -> input error m < n
 */
int G_math_sv2uv(double *d, double **a, double **u, int m, double **v, int n)
{
    return sv2uv(d, a[0], u[0], m, v[0], n);
}

/**
 \brief Compute the singular value transformation with A overloaded by the
 partial U-matrix.

 \param d = pointer to a double array of dimension n (output = singular
            values of A)
 \param a = pointer to store of the m by n input matrix A (at output, a is
            overloaded by the matrix U1 whose n columns are orthogonal
            vectors equal to the first n columns of U)
 \param m = number of rows in A
 \param v = pointer to store for the n by n orthogonal matrix V
 \param n = number of columns in A (m>=n required)
 \return status flag with: 0 -> success, -1 -> input error m < n
 */
int G_math_svdu1v(double *d, double **a, int m, double **v, int n)
{
    return svdu1v(d, a[0], m, v[0], n);
}