Linear solve converged due to CONVERGED_RTOL iterations 5
SNES Object: 4 MPI processes
  type: ksponly
  maximum iterations=1, maximum function evaluations=10000
  tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
  total number of linear solver iterations=5
  total number of function evaluations=1
  norm schedule ALWAYS
  Jacobian is never rebuilt
  KSP Object: 4 MPI processes
    type: cg
    maximum iterations=100, initial guess is zero
    tolerances: relative=1e-10, absolute=1e-50, divergence=10000.
    left preconditioning
    using UNPRECONDITIONED norm type for convergence test
  PC Object: 4 MPI processes
    type: gamg
      type is MULTIPLICATIVE, levels=2 cycles=v
        Cycles per PCApply=1
        Using externally compute Galerkin coarse grid matrices
        GAMG specific options
          Threshold for dropping small values in graph on each level = 0.001 0.001
          Threshold scaling factor for each level not specified = 1.
          Using aggregates made with 3 applications of heavy edge matching (HEM) to define subdomains for PCASM
          AGG specific options
            Number of levels of aggressive coarsening 1
            Square graph aggressive coarsening
            Number smoothing steps 1
          Complexity: grid = 1.11111 operator = 1.02041
    Coarse grid solver -- level 0 -------------------------------
      KSP Object: (mg_coarse_) 4 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_coarse_) 4 MPI processes
        type: bjacobi
          number of blocks = 4
          Local solver information for first block is in the following KSP and PC objects on rank 0:
          Use -mg_coarse_ksp_view ::ascii_info_detail to display information for all blocks
        KSP Object: (mg_coarse_sub_) 1 MPI process
          type: preonly
          maximum iterations=1, initial guess is zero
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
          left preconditioning
          using NONE norm type for convergence test
        PC Object: (mg_coarse_sub_) 1 MPI process
          type: lu
            out-of-place factorization
            tolerance for zero pivot 2.22045e-14
            using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
            matrix ordering: nd
            factor fill ratio given 1., needed 1.
              Factored matrix follows:
                Mat Object: (mg_coarse_sub_) 1 MPI process
                  type: seqaij
                  rows=1, cols=1
                  package used to perform factorization: petsc
                  total: nonzeros=1, allocated nonzeros=1
                    not using I-node routines
          linear system matrix = precond matrix:
          Mat Object: (mg_coarse_sub_) 1 MPI process
            type: seqaij
            rows=1, cols=1
            total: nonzeros=1, allocated nonzeros=1
            total number of mallocs used during MatSetValues calls=0
              not using I-node routines
        linear system matrix = precond matrix:
        Mat Object: 4 MPI processes
          type: mpiaij
          rows=1, cols=1
          total: nonzeros=1, allocated nonzeros=1
          total number of mallocs used during MatSetValues calls=0
            not using I-node (on process 0) routines
    Down solver (pre-smoother) on level 1 -------------------------------
      KSP Object: (mg_levels_1_) 4 MPI processes
        type: chebyshev
          Chebyshev polynomial of first kind
          eigenvalue targets used: min 0.260428, max 1.43236
          eigenvalues provided (min 0.413297, max 1.30214) with transform: [0. 0.2; 0. 1.1]
        maximum iterations=2, nonzero initial guess
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_levels_1_) 4 MPI processes
        type: jacobi
          type DIAGONAL
        linear system matrix = precond matrix:
        Mat Object: 4 MPI processes
          type: mpiaij
          rows=9, cols=9
          total: nonzeros=49, allocated nonzeros=49
          total number of mallocs used during MatSetValues calls=0
            not using I-node (on process 0) routines
    Up solver (post-smoother) same as down solver (pre-smoother)
    linear system matrix = precond matrix:
    Mat Object: 4 MPI processes
      type: mpiaij
      rows=9, cols=9
      total: nonzeros=49, allocated nonzeros=49
      total number of mallocs used during MatSetValues calls=0
        not using I-node (on process 0) routines
DM Object: box 4 MPI processes
  type: plex
box in 3 dimensions:
  Number of 0-cells per rank: 8 8 8 8
  Number of 1-cells per rank: 12 12 12 12
  Number of 2-cells per rank: 6 6 6 6
  Number of 3-cells per rank: 1 1 1 1
Labels:
  depth: 4 strata with value/size (0 (8), 1 (12), 2 (6), 3 (1))
  marker: 1 strata with value/size (1 (23))
  Face Sets: 4 strata with value/size (1 (1), 2 (1), 3 (1), 6 (1))
  celltype: 4 strata with value/size (0 (8), 1 (12), 4 (6), 7 (1))
  boundary: 1 strata with value/size (1 (23))
Field deformation:
  adjacency FEM
DM Object: Mesh 4 MPI processes
  type: plex
Mesh in 3 dimensions:
  Number of 0-cells per rank: 27 27 27 27
  Number of 1-cells per rank: 54 54 54 54
  Number of 2-cells per rank: 36 36 36 36
  Number of 3-cells per rank: 8 8 8 8
Labels:
  celltype: 4 strata with value/size (0 (27), 1 (54), 4 (36), 7 (8))
  depth: 4 strata with value/size (0 (27), 1 (54), 2 (36), 3 (8))
  marker: 1 strata with value/size (1 (77))
  Face Sets: 4 strata with value/size (1 (9), 2 (9), 3 (9), 6 (9))
  boundary: 1 strata with value/size (1 (77))
Field deformation:
  adjacency FEM
Linear solve converged due to CONVERGED_RTOL iterations 8
SNES Object: 4 MPI processes
  type: ksponly
  maximum iterations=1, maximum function evaluations=10000
  tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
  total number of linear solver iterations=8
  total number of function evaluations=1
  norm schedule ALWAYS
  Jacobian is never rebuilt
  KSP Object: 4 MPI processes
    type: cg
    maximum iterations=100, initial guess is zero
    tolerances: relative=1e-10, absolute=1e-50, divergence=10000.
    left preconditioning
    using UNPRECONDITIONED norm type for convergence test
  PC Object: 4 MPI processes
    type: gamg
      type is MULTIPLICATIVE, levels=2 cycles=v
        Cycles per PCApply=1
        Using externally compute Galerkin coarse grid matrices
        GAMG specific options
          Threshold for dropping small values in graph on each level = 0.001 0.001
          Threshold scaling factor for each level not specified = 1.
          Using aggregates made with 3 applications of heavy edge matching (HEM) to define subdomains for PCASM
          AGG specific options
            Number of levels of aggressive coarsening 1
            Square graph aggressive coarsening
            Number smoothing steps 1
          Complexity: grid = 1.02721 operator = 1.00432
    Coarse grid solver -- level 0 -------------------------------
      KSP Object: (mg_coarse_) 4 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_coarse_) 4 MPI processes
        type: bjacobi
          number of blocks = 4
          Local solver information for first block is in the following KSP and PC objects on rank 0:
          Use -mg_coarse_ksp_view ::ascii_info_detail to display information for all blocks
        KSP Object: (mg_coarse_sub_) 1 MPI process
          type: preonly
          maximum iterations=1, initial guess is zero
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
          left preconditioning
          using NONE norm type for convergence test
        PC Object: (mg_coarse_sub_) 1 MPI process
          type: lu
            out-of-place factorization
            tolerance for zero pivot 2.22045e-14
            using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
            matrix ordering: nd
            factor fill ratio given 5., needed 1.
              Factored matrix follows:
                Mat Object: (mg_coarse_sub_) 1 MPI process
                  type: seqaij
                  rows=4, cols=4
                  package used to perform factorization: petsc
                  total: nonzeros=16, allocated nonzeros=16
                    using I-node routines: found 1 nodes, limit used is 5
          linear system matrix = precond matrix:
          Mat Object: (mg_coarse_sub_) 1 MPI process
            type: seqaij
            rows=4, cols=4
            total: nonzeros=16, allocated nonzeros=16
            total number of mallocs used during MatSetValues calls=0
              using I-node routines: found 1 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object: 4 MPI processes
          type: mpiaij
          rows=4, cols=4
          total: nonzeros=16, allocated nonzeros=16
          total number of mallocs used during MatSetValues calls=0
            using I-node (on process 0) routines: found 1 nodes, limit used is 5
    Down solver (pre-smoother) on level 1 -------------------------------
      KSP Object: (mg_levels_1_) 4 MPI processes
        type: chebyshev
          Chebyshev polynomial of first kind
          eigenvalue targets used: min 0.327489, max 1.80119
          eigenvalues provided (min 0.133814, max 1.63744) with transform: [0. 0.2; 0. 1.1]
        maximum iterations=2, nonzero initial guess
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_levels_1_) 4 MPI processes
        type: jacobi
          type DIAGONAL
        linear system matrix = precond matrix:
        Mat Object: 4 MPI processes
          type: mpiaij
          rows=147, cols=147
          total: nonzeros=3703, allocated nonzeros=3703
          total number of mallocs used during MatSetValues calls=0
            not using I-node (on process 0) routines
    Up solver (post-smoother) same as down solver (pre-smoother)
    linear system matrix = precond matrix:
    Mat Object: 4 MPI processes
      type: mpiaij
      rows=147, cols=147
      total: nonzeros=3703, allocated nonzeros=3703
      total number of mallocs used during MatSetValues calls=0
        not using I-node (on process 0) routines
[0] 0) N= 9, max displ=2.5713786e+01, error=9.564e+00
[0] 1) N= 147, max displ=3.1758769e+01, disp diff= 6.04e+00, error=3.519e+00, rate=1.4
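Note: the exact command line that produced this output is not recorded above. As a minimal sketch, a configuration like the one shown (KSP-only SNES, CG with an unpreconditioned norm and rtol 1e-10, two-level GAMG with Chebyshev/Jacobi smoothing, plus the convergence, solver, and mesh views) can be requested with standard PETSc run-time options such as the following; the options controlling HEM coarsening, aggressive coarsening, and mesh refinement are omitted here because their exact values are not recoverable from the output:

  -snes_type ksponly -ksp_type cg -ksp_rtol 1e-10 -ksp_max_it 100 \
  -ksp_norm_type unpreconditioned -pc_type gamg -pc_gamg_threshold 0.001 \
  -pc_gamg_agg_nsmooths 1 -mg_levels_ksp_type chebyshev -mg_levels_pc_type jacobi \
  -ksp_converged_reason -snes_view -dm_view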