/petsc/src/mat/tests/output/

ex125_mumps_seq.out
    6: 0-the sparse MatMatSolve
    7: 1-the sparse MatMatSolve
    17: 0-the sparse MatMatSolve
    18: 1-the sparse MatMatSolve

ex125_nsize-1_saddle_point_mumps_lu.out
    6: 0-the sparse MatMatSolve
    7: 1-the sparse MatMatSolve
    17: 0-the sparse MatMatSolve
    18: 1-the sparse MatMatSolve

ex125_nsize-1_saddle_point_mumps_cholesky.out
    6: 0-the sparse MatMatSolve
    7: 1-the sparse MatMatSolve
    17: 0-the sparse MatMatSolve
    18: 1-the sparse MatMatSolve

/petsc/share/petsc/matlab/

PetscBinaryWrite.m
    3: % Writes in PETSc binary file sparse matrices and vectors.
    7: % a sparse matrix: for example PetscBinaryWrite('myfile',sparse(A));
    51: % save sparse matrix in special MATLAB format
    53: A = sparse(A);

PetscBinaryRead.m
    6: % emits as MATLAB sparse matrice or vectors.
    128: A = sparse(i,j,complex(s(1:2:2*nz),s(2:2:2*nz)),m,n,nz);
    130: A = sparse(i,j,s,m,n,nz);

UFgetPetscMat.m
    4: % (1) gets the selected index file of the UF sparse matrix collection,

/petsc/doc/overview/

matrix_table.md
    20: - Compressed sparse row
    56: * - Kronecker product of sparse matrix :math:`A`; :math:`I \otimes S + A \otimes T`
    70: - Block compressed sparse row
    75: - Upper triangular compressed sparse row

/petsc/src/ts/tutorials/autodiff/adolc-utils/

drivers.cxx
    41: if (adctx->sparse) {  [in PetscAdolcComputeRHSJacobian()]
    79: if (adctx->sparse) {  [in PetscAdolcComputeRHSJacobianLocal()]
    121: if (adctx->sparse) {  [in PetscAdolcComputeIJacobian()]
    136: if (adctx->sparse) {  [in PetscAdolcComputeIJacobian()]
    181: if (adctx->sparse) {  [in PetscAdolcComputeIJacobianIDMass()]
    225: if (adctx->sparse) {  [in PetscAdolcComputeIJacobianLocal()]
    240: if (adctx->sparse) {  [in PetscAdolcComputeIJacobianLocal()]
    284: if (adctx->sparse) {  [in PetscAdolcComputeIJacobianLocalIDMass()]
    423: if (adctx->sparse) {  [in PetscAdolcComputeIJacobianAndDiagonalLocal()]
    436: if (adctx->sparse) {  [in PetscAdolcComputeIJacobianAndDiagonalLocal()]

contexts.cxx
    17: PetscBool sparse, sparse_view, sparse_view_done;  [member]

/petsc/doc/miscellaneous/

acknowledgements.md
    70: - LUSOL - sparse LU factorization code (part of MINOS) developed by
    77: - MUMPS - MUltifrontal Massively Parallel sparse direct
    82: - PaStiX - Parallel sparse LU and Cholesky solvers;
    86: - SPAI - for parallel sparse approximate inverse preconditioning;
    88: - SuiteSparse - sequential sparse solvers, developed by
    93: - SuperLU and SuperLU_Dist - the efficient sparse LU codes
    97: <https://portal.nersc.gov/project/sparse/strumpack/>

/petsc/doc/install/

external_software.md
    13: - [AMD](http://www.cise.ufl.edu/research/sparse/amd/) Approximate minimum degree orderings.
    17: …knowledgecenter/en/SSFHY8/essl_welcome.html) IBM's math library for fast sparse direct LU factoriz…
    26: - [MUMPS](https://mumps-solver.org/) MUltifrontal Massively Parallel sparse direct Solver.
    31: …pringer.com/referenceworkentry/10.1007%2F978-0-387-09766-4_144) Parallel sparse approximate invers…
    34: …v/~xiaoye/SuperLU/#superlu_dist) Robust and efficient sequential and parallel direct sparse solves.

/petsc/doc/developers/

matrices.md
    41: freedom per cell), blocking is advantageous. The PETSc sparse matrix
    45: - Storing the matrices using a generic sparse matrix format, but
    54: a standard sparse matrix format and brings a large percentage of the
    101: several times that of the basic sparse implementations.
    105: PETSc offers a variety of both sparse and dense matrix types.
    109: The default matrix representation within PETSc is the general sparse AIJ
    110: format (also called the compressed sparse row format, CSR).
    114: The AIJ sparse matrix type, is the default parallel matrix format;

objects.md
    4: collection of data (for instance, a sparse matrix) is stored in a way
    12: `Mat` (matrices, both dense and sparse). Each class is implemented by
    31: compressed sparse row) has its own data fields for storing the actual
    51: for matrices it is shared by dense, sparse, parallel, and sequential
    67: One or more actual implementations of the class (for example, sparse

/petsc/doc/manualpages/MANSECHeaders/

MatFD
    3: The `MatFD` tools handle the approximation of Jacobians via sparse finite differences.

MatGraphOperations
    3: These tools compute reorderings (`MatOrdering`) (for sparse matrix factorizations), colorings (`Mat…

Mat
    3: PETSc matrices (`Mat` objects) are used to store Jacobians and other sparse matrices

/petsc/share/petsc/datafiles/matrices/

amesos2_test_mat0.mtx
    5: % brief: To use for testing Amesos2. This is a 13 by 13 sparse matrix with

LFAT5.mtx
    4: % http://www.cise.ufl.edu/research/sparse/matrices/Oberwolfach/LFAT5

/petsc/src/dm/interface/

dmperiodicity.c
    289: PetscErrorCode DMGetSparseLocalize(DM dm, PetscBool *sparse)  [in DMGetSparseLocalize(), argument]
    293: PetscAssertPointer(sparse, 2);  [in DMGetSparseLocalize()]
    294: *sparse = dm->sparseLocalize;  [in DMGetSparseLocalize()]
    311: PetscErrorCode DMSetSparseLocalize(DM dm, PetscBool sparse)  [in DMSetSparseLocalize(), argument]
    315: PetscValidLogicalCollectiveBool(dm, sparse, 2);  [in DMSetSparseLocalize()]
    316: dm->sparseLocalize = sparse;  [in DMSetSparseLocalize()]

/petsc/src/tao/leastsquares/tutorials/

tomographyGenerateData.m
    11: % L: Forward Model, sparse matrix of NTheta*NTau by Ny*Nx
    77: …A', A, 'precision', 'float64'); % do NOT need to convert A to sparse, always write as sparse matrix

/petsc/lib/petsc/bin/

PetscBinaryIO.py
    347: from scipy.sparse import csr_matrix
    353: from scipy.sparse import csr_matrix
    508: import scipy.sparse
    509: mat = scipy.sparse.load_npz(infile)

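The PetscBinaryIO.py hits above show the module building `scipy.sparse.csr_matrix` objects and loading matrices via `scipy.sparse.load_npz`. A minimal standalone sketch of that scipy round-trip (illustrative only; this is not the PETSc reader itself, and the file name is made up):

```python
# Sketch: the scipy.sparse calls referenced in the PetscBinaryIO.py excerpts,
# shown as a self-contained save/load round-trip of a small CSR matrix.
import os
import tempfile

import numpy as np
import scipy.sparse

# A small 3x3 sparse matrix in CSR form (the layout csr_matrix stores).
A = scipy.sparse.csr_matrix(np.array([[3., 2., 1.],
                                      [1., 3., 2.],
                                      [1., 2., 3.]]))

# Round-trip through the .npz container that scipy.sparse.load_npz reads
# (compare line 509 in the excerpt above).
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "mat.npz")  # hypothetical file name
    scipy.sparse.save_npz(path, A)
    B = scipy.sparse.load_npz(path)

print((A != B).nnz)  # 0: no entries differ, the round-trip is lossless
```

`save_npz`/`load_npz` preserve the sparse format and dtype, which is why the loader can hand the result straight to downstream code expecting CSR.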
/petsc/src/benchmarks/results/

lap2d.m
    5: % * matrix vector product for very sparse matrix, and

/petsc/src/ksp/ksp/tutorials/

ex41.m
    23: A = sparse([3 2 1; 1 3 2; 1 2 3]);

/petsc/src/tao/leastsquares/tutorials/matlab/more_wild_probs/

jacobian.m
    38: J = sparse(J)';

/petsc/doc/manual/

mat.md
    7: dense storage and compressed sparse row storage (both sequential and
    67: the sparse AIJ format, which is discussed in detail in
    102: *Warning*: Several of the sparse implementations do *not* currently
    111: When using the block compressed sparse row matrix format (`MATSEQBAIJ`
    158: In the sparse matrix implementations, once the assembly routines have
    218: The default matrix representation within PETSc is the general sparse AIJ
    219: format (also called the compressed sparse
    237: To create a sequential AIJ sparse matrix, `A`, with `m` rows and
    264: preallocate the memory needed for the sparse matrix. The user has two
    305: Thus, when assembling a sparse matrix with very different numbers of
    [all …]

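The mat.md and matrices.md excerpts repeatedly describe AIJ, PETSc's general sparse format, as compressed sparse row (CSR). A minimal sketch of the three-array CSR layout, using scipy purely for illustration (the array names `indptr`/`indices`/`data` are scipy's, not PETSc's) and the same 3x3 matrix that appears in ex41.m above:

```python
# Sketch: the three-array compressed sparse row (CSR/AIJ) layout,
# demonstrated on the 3x3 matrix from the ex41.m excerpt.
import numpy as np
import scipy.sparse

A = scipy.sparse.csr_matrix(np.array([[3, 2, 1],
                                      [1, 3, 2],
                                      [1, 2, 3]], dtype=float))

# CSR stores row pointers, column indices, and nonzero values:
print(A.indptr)   # [0 3 6 9] -> row i occupies data[indptr[i]:indptr[i+1]]
print(A.indices)  # [0 1 2 0 1 2 0 1 2] -> column index of each stored value
print(A.data)     # [3. 2. 1. 1. 3. 2. 1. 2. 3.] -> the nonzeros, row by row
```

Because `indptr` fixes how many nonzeros each row holds, this layout is also why the manual excerpts stress preallocating per-row nonzero counts before assembly: growing a row after the fact forces the arrays to be rebuilt.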