xref: /petsc/src/ksp/ksp/tutorials/output/ex27_4c.out (revision 52446cc723085e0744f602c8b3df1001d0d0ab4d)
Failed to load initial guess, so use a vector of all zeros.
Linear solve converged due to CONVERGED_RTOL_NORMAL_EQUATIONS iterations 9
KSP Object: 4 MPI processes
  type: lsqr
    standard error not computed
    using inexact matrix norm
  maximum iterations=100, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object: 4 MPI processes
  type: hpddm
  levels: 2
  Neumann matrix attached? TRUE
  coarse correction: BALANCED
  on process #0, value (+ threshold if available) for selecting deflation vectors: 20
  grid and operator complexities: 1.09512 1.01219
  KSP Object: (pc_hpddm_levels_1_) 4 MPI processes
    type: preonly
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
    left preconditioning
    not checking for convergence
  PC Object: (pc_hpddm_levels_1_) 4 MPI processes
    type: shell
      no name
    linear system matrix, followed by the matrix used to construct the preconditioner:
    Mat Object: A 4 MPI processes
      type: mpiaij
      rows=4889, cols=841
      total number of mallocs used during MatSetValues calls=0
        using I-node (on process 0) routines: found 670 nodes, limit used is 5
    Mat Object: 4 MPI processes
      type: normalh
      rows=841, cols=841
  PC Object: (pc_hpddm_levels_1_) 4 MPI processes
    type: asm
      total subdomain blocks = 4, user-defined overlap
      restriction/interpolation type - BASIC
      Local solver information for first block is in the following KSP and PC objects on rank 0:
      Use -pc_hpddm_levels_1_ksp_view ::ascii_info_detail to display information for all blocks
    KSP Object: (pc_hpddm_levels_1_sub_) 1 MPI process
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (pc_hpddm_levels_1_sub_) 1 MPI process
      type: cholesky
        out-of-place factorization
        tolerance for zero pivot 2.22045e-14
        matrix ordering: nd
        factor fill ratio given 5., needed 1.21054
          Factored matrix:
            Mat Object: (pc_hpddm_levels_1_sub_) 1 MPI process
              type: seqsbaij
              rows=718, cols=718
              package used to perform factorization: petsc
      linear system matrix, which is also used to construct the preconditioner:
      Mat Object: (pc_hpddm_levels_1_sub_) 1 MPI process
        type: seqaij
        rows=718, cols=718
        total number of mallocs used during MatSetValues calls=0
          not using I-node routines
    linear system matrix, followed by the matrix used to construct the preconditioner:
    Mat Object: A 4 MPI processes
      type: mpiaij
      rows=4889, cols=841
      total number of mallocs used during MatSetValues calls=0
        using I-node (on process 0) routines: found 670 nodes, limit used is 5
    Mat Object: 4 MPI processes
      type: normalh
      rows=841, cols=841
    KSP Object: (pc_hpddm_coarse_) 1 MPI process
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (pc_hpddm_coarse_) 1 MPI process
      type: cholesky
        out-of-place factorization
        tolerance for zero pivot 2.22045e-14
        matrix ordering: natural
        factor fill ratio given 5., needed 1.1
          Factored matrix:
            Mat Object: (pc_hpddm_coarse_) 1 MPI process
              type: seqsbaij
              rows=80, cols=80, bs=20
              package used to perform factorization: petsc
      linear system matrix, which is also used to construct the preconditioner:
      Mat Object: (pc_hpddm_coarse_) 1 MPI process
        type: seqsbaij
        rows=80, cols=80, bs=20
        total number of mallocs used during MatSetValues calls=0
  linear system matrix, followed by the matrix used to construct the preconditioner:
  Mat Object: A 4 MPI processes
    type: mpiaij
    rows=4889, cols=841
    total number of mallocs used during MatSetValues calls=0
      using I-node (on process 0) routines: found 670 nodes, limit used is 5
  Mat Object: 4 MPI processes
    type: normalh
    rows=841, cols=841
KSP type: lsqr
Number of iterations =   9
Residual norm 1.63035e-05
107Residual norm 1.63035e-05
108