xref: /petsc/src/snes/tutorials/output/ex12_tri_hpddm_reuse.out (revision f13dfd9ea68e0ddeee984e65c377a1819eab8a8a)
  0 SNES Function norm 11.2854
    0 KSP Residual norm 10.7468
    1 KSP Residual norm 0.859968
  Linear solve converged due to CONVERGED_RTOL iterations 1
  1 SNES Function norm 2.37215
    0 KSP Residual norm 0.859968
    1 KSP Residual norm 0.36619
    2 KSP Residual norm 0.156019
    3 KSP Residual norm 0.0404001
  Linear solve converged due to CONVERGED_RTOL iterations 3
  2 SNES Function norm 0.126042
    0 KSP Residual norm 0.0404001
    1 KSP Residual norm 0.0222502
    2 KSP Residual norm 0.00654102
    3 KSP Residual norm 0.00287769
  Linear solve converged due to CONVERGED_RTOL iterations 3
  3 SNES Function norm 0.00959685
    0 KSP Residual norm 0.00287769
    1 KSP Residual norm 0.00144493
    2 KSP Residual norm 0.000645135
    3 KSP Residual norm 0.000207281
  Linear solve converged due to CONVERGED_RTOL iterations 3
  4 SNES Function norm 0.000601202
    0 KSP Residual norm 0.000207281
    1 KSP Residual norm 9.98348e-05
    2 KSP Residual norm 3.38896e-05
    3 KSP Residual norm 1.59084e-05
  Linear solve converged due to CONVERGED_RTOL iterations 3
  5 SNES Function norm 5.11301e-05
    0 KSP Residual norm 1.59084e-05
    1 KSP Residual norm 8.95606e-06
    2 KSP Residual norm 3.85819e-06
    3 KSP Residual norm 1.12629e-06
  Linear solve converged due to CONVERGED_RTOL iterations 3
  6 SNES Function norm 3.41277e-06
    0 KSP Residual norm 1.12629e-06
    1 KSP Residual norm 5.16268e-07
    2 KSP Residual norm 1.69075e-07
    3 KSP Residual norm 8.34073e-08
  Linear solve converged due to CONVERGED_RTOL iterations 3
  7 SNES Function norm 2.68082e-07
    0 KSP Residual norm 8.34073e-08
    1 KSP Residual norm 4.84996e-08
    2 KSP Residual norm 1.99918e-08
    3 KSP Residual norm 5.65355e-09
  Linear solve converged due to CONVERGED_RTOL iterations 3
  8 SNES Function norm 1.79858e-08
L_2 Error: 5.33424e-10
Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 8
SNES Object: 4 MPI processes
  type: newtonls
  maximum iterations=50, maximum function evaluations=10000
  tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
  total number of linear solver iterations=22
  total number of function evaluations=9
  norm schedule ALWAYS
  SNESLineSearch Object: 4 MPI processes
    type: bt
      interpolation: cubic
      alpha=1.000000e-04
    maxstep=1.000000e+08, minlambda=1.000000e-12
    tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08
    maximum iterations=40
  KSP Object: 4 MPI processes
    type: gmres
      restart=100, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
      happy breakdown tolerance 1e-30
    maximum iterations=10000, initial guess is zero
    tolerances: relative=0.1, absolute=1e-50, divergence=10000.
    left preconditioning
    using PRECONDITIONED norm type for convergence test
  PC Object: 4 MPI processes
    type: hpddm
    levels: 2
    Neumann matrix attached? TRUE
    shared subdomain KSP between SLEPc and PETSc? FALSE
    coarse correction: DEFLATED
    on process #0, value (+ threshold if available) for selecting deflation vectors: 4
    grid and operator complexities: 1.07111 1.07178
    KSP Object: (pc_hpddm_levels_1_) 4 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (pc_hpddm_levels_1_) 4 MPI processes
      type: shell
        no name
      linear system matrix = precond matrix:
      Mat Object: 4 MPI processes
        type: mpiaij
        rows=225, cols=225
        total: nonzeros=2229, allocated nonzeros=2229
        total number of mallocs used during MatSetValues calls=0
          not using I-node (on process 0) routines
    PC Object: (pc_hpddm_levels_1_) 4 MPI processes
      type: bjacobi
        number of blocks = 4
        Local solver information for first block is in the following KSP and PC objects on rank 0:
        Use -pc_hpddm_levels_1_ksp_view ::ascii_info_detail to display information for all blocks
        KSP Object: (pc_hpddm_levels_1_sub_) 1 MPI process
          type: preonly
          maximum iterations=10000, initial guess is zero
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
          left preconditioning
          using NONE norm type for convergence test
        PC Object: (pc_hpddm_levels_1_sub_) 1 MPI process
          type: lu
            out-of-place factorization
            tolerance for zero pivot 2.22045e-14
            matrix ordering: nd
            factor fill ratio given 5., needed 1.31206
              Factored matrix follows:
                Mat Object: (pc_hpddm_levels_1_sub_) 1 MPI process
                  type: seqaij
                  rows=42, cols=42
                  package used to perform factorization: petsc
                  total: nonzeros=370, allocated nonzeros=370
                    not using I-node routines
          linear system matrix = precond matrix:
          Mat Object: (pc_hpddm_levels_1_sub_) 1 MPI process
            type: seqaij
            rows=42, cols=42
            total: nonzeros=282, allocated nonzeros=282
            total number of mallocs used during MatSetValues calls=0
              not using I-node routines
      linear system matrix = precond matrix:
      Mat Object: 4 MPI processes
        type: mpiaij
        rows=225, cols=225
        total: nonzeros=2229, allocated nonzeros=2229
        total number of mallocs used during MatSetValues calls=0
          not using I-node (on process 0) routines
      KSP Object: (pc_hpddm_coarse_) 2 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (pc_hpddm_coarse_) 2 MPI processes
        type: redundant
          First (color=0) of 2 PCs follows
          KSP Object: (pc_hpddm_coarse_redundant_) 1 MPI process
            type: preonly
            maximum iterations=10000, initial guess is zero
            tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
            left preconditioning
            using NONE norm type for convergence test
          PC Object: (pc_hpddm_coarse_redundant_) 1 MPI process
            type: cholesky
              out-of-place factorization
              tolerance for zero pivot 2.22045e-14
              matrix ordering: natural
              factor fill ratio given 5., needed 1.1
                Factored matrix follows:
                  Mat Object: (pc_hpddm_coarse_redundant_) 1 MPI process
                    type: seqsbaij
                    rows=16, cols=16, bs=4
                    package used to perform factorization: petsc
                    total: nonzeros=176, allocated nonzeros=176
                        block size is 4
            linear system matrix = precond matrix:
            Mat Object: 1 MPI process
              type: seqsbaij
              rows=16, cols=16, bs=4
              total: nonzeros=160, allocated nonzeros=160
              total number of mallocs used during MatSetValues calls=0
                  block size is 4
        linear system matrix = precond matrix:
        Mat Object: (pc_hpddm_coarse_) 2 MPI processes
          type: mpisbaij
          rows=16, cols=16, bs=4
          total: nonzeros=160, allocated nonzeros=160
          total number of mallocs used during MatSetValues calls=0
              block size is 4
    linear system matrix = precond matrix:
    Mat Object: 4 MPI processes
      type: mpiaij
      rows=225, cols=225
      total: nonzeros=2229, allocated nonzeros=2229
      total number of mallocs used during MatSetValues calls=0
        not using I-node (on process 0) routines