KSP Object: 1 MPI process
  type: gmres
    restart=30, using classical (unmodified) Gram-Schmidt orthogonalization with no iterative refinement
    happy breakdown tolerance=1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using DEFAULT norm type for convergence test
PC Object: 1 MPI process
  type: gamg
  PC has not been set up so information may be incomplete
    type is MULTIPLICATIVE, levels=0 cycles=unknown
      Cycles per PCApply=0
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level =
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          Coarsening algorithm not yet selected
          Number smoothing steps to construct prolongation 1
        Complexity:    grid = 0.    operator = 0.
        Per-level complexity: op = operator, int = interpolation
          #equations  | #active PEs | avg nnz/row op | avg nnz/row int
  linear system matrix, which is also used to construct the preconditioner:
  Mat Object: 1 MPI process
    type: seqaij
    rows=16, cols=16
    total: nonzeros=64, allocated nonzeros=64
    total number of mallocs used during MatSetValues calls=0
      not using I-node routines
KSP Object: 1 MPI process
  type: gmres
    restart=30, using classical (unmodified) Gram-Schmidt orthogonalization with no iterative refinement
    happy breakdown tolerance=1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI process
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level = -1. -1.
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          MatCoarsen Object: (pc_gamg_) 1 MPI process
            type: mis
          Number smoothing steps to construct prolongation 1
        Complexity:    grid = 1.1875    operator = 1.14062
        Per-level complexity: op = operator, int = interpolation
          #equations  | #active PEs | avg nnz/row op | avg nnz/row int
                   3              1               3                0
                  16              1               4                2
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 1 MPI process
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (mg_coarse_) 1 MPI process
      type: bjacobi
        number of blocks = 1
        Local solver information for first block is in the following KSP and PC objects on rank 0:
        Use -mg_coarse_ksp_view ::ascii_info_detail to display information for all blocks
      KSP Object: (mg_coarse_sub_) 1 MPI process
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        not checking for convergence
      PC Object: (mg_coarse_sub_) 1 MPI process
        type: lu
          out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
          matrix ordering: nd
          factor fill ratio given 5., needed 1.
            Factored matrix:
              Mat Object: (mg_coarse_sub_) 1 MPI process
                type: seqaij
                rows=3, cols=3
                package used to perform factorization: petsc
                total: nonzeros=9, allocated nonzeros=9
                  using I-node routines: found 1 nodes, limit used is 5
        linear system matrix, which is also used to construct the preconditioner:
        Mat Object: (mg_coarse_sub_) 1 MPI process
          type: seqaij
          rows=3, cols=3
          total: nonzeros=9, allocated nonzeros=9
          total number of mallocs used during MatSetValues calls=0
            using I-node routines: found 1 nodes, limit used is 5
      linear system matrix, which is also used to construct the preconditioner:
      Mat Object: (mg_coarse_sub_) 1 MPI process
        type: seqaij
        rows=3, cols=3
        total: nonzeros=9, allocated nonzeros=9
        total number of mallocs used during MatSetValues calls=0
          using I-node routines: found 1 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI process
      type: chebyshev
        Chebyshev polynomial of first kind
        eigenvalue targets used: min 0.514268, max 5.65695
        eigenvalues provided (min 0.299461, max 5.14268) with transform: [0. 0.1; 0. 1.1]
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (mg_levels_1_) 1 MPI process
      type: jacobi
        type DIAGONAL
      linear system matrix, which is also used to construct the preconditioner:
      Mat Object: 1 MPI process
        type: seqaij
        rows=16, cols=16
        total: nonzeros=64, allocated nonzeros=64
        total number of mallocs used during MatSetValues calls=0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix, which is also used to construct the preconditioner:
  Mat Object: 1 MPI process
    type: seqaij
    rows=16, cols=16
    total: nonzeros=64, allocated nonzeros=64
    total number of mallocs used during MatSetValues calls=0
      not using I-node routines
KSP Object: 1 MPI process
  type: gmres
    restart=30, using classical (unmodified) Gram-Schmidt orthogonalization with no iterative refinement
    happy breakdown tolerance=1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI process
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level = -1. -1.
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          MatCoarsen Object: (pc_gamg_) 1 MPI process
            type: mis
          Number smoothing steps to construct prolongation 1
        Complexity:    grid = 1.1875    operator = 1.14062
        Per-level complexity: op = operator, int = interpolation
          #equations  | #active PEs | avg nnz/row op | avg nnz/row int
                   3              1               3                0
                  16              1               4                2
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 1 MPI process
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (mg_coarse_) 1 MPI process
      type: bjacobi
        number of blocks = 1
        Local solver information for first block is in the following KSP and PC objects on rank 0:
        Use -mg_coarse_ksp_view ::ascii_info_detail to display information for all blocks
      KSP Object: (mg_coarse_sub_) 1 MPI process
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        not checking for convergence
      PC Object: (mg_coarse_sub_) 1 MPI process
        type: lu
          out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
          matrix ordering: nd
          factor fill ratio given 5., needed 1.
            Factored matrix:
              Mat Object: (mg_coarse_sub_) 1 MPI process
                type: seqaij
                rows=3, cols=3
                package used to perform factorization: petsc
                total: nonzeros=9, allocated nonzeros=9
                  using I-node routines: found 1 nodes, limit used is 5
        linear system matrix, which is also used to construct the preconditioner:
        Mat Object: (mg_coarse_sub_) 1 MPI process
          type: seqaij
          rows=3, cols=3
          total: nonzeros=9, allocated nonzeros=9
          total number of mallocs used during MatSetValues calls=0
            using I-node routines: found 1 nodes, limit used is 5
      linear system matrix, which is also used to construct the preconditioner:
      Mat Object: (mg_coarse_sub_) 1 MPI process
        type: seqaij
        rows=3, cols=3
        total: nonzeros=9, allocated nonzeros=9
        total number of mallocs used during MatSetValues calls=0
          using I-node routines: found 1 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI process
      type: chebyshev
        Chebyshev polynomial of first kind
        eigenvalue targets used: min 0.159372, max 1.75309
        eigenvalues estimated via gmres: min 0.406283, max 1.59372
        eigenvalues estimated using gmres with transform: [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_1_esteig_) 1 MPI process
          type: gmres
            restart=30, using classical (unmodified) Gram-Schmidt orthogonalization with no iterative refinement
            happy breakdown tolerance=1e-30
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using a noisy random number generated right-hand side
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (mg_levels_1_) 1 MPI process
      type: jacobi
        type DIAGONAL
      linear system matrix, which is also used to construct the preconditioner:
      Mat Object: 1 MPI process
        type: seqaij
        rows=16, cols=16
        total: nonzeros=64, allocated nonzeros=64
        total number of mallocs used during MatSetValues calls=0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix, which is also used to construct the preconditioner:
  Mat Object: 1 MPI process
    type: seqaij
    rows=16, cols=16
    total: nonzeros=64, allocated nonzeros=64
    total number of mallocs used during MatSetValues calls=0
      not using I-node routines
KSP Object: 1 MPI process
  type: gmres
    restart=30, using classical (unmodified) Gram-Schmidt orthogonalization with no iterative refinement
    happy breakdown tolerance=1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI process
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level = -1. -1.
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          MatCoarsen Object: (pc_gamg_) 1 MPI process
            type: mis
          Number smoothing steps to construct prolongation 1
        Complexity:    grid = 1.1875    operator = 1.14062
        Per-level complexity: op = operator, int = interpolation
          #equations  | #active PEs | avg nnz/row op | avg nnz/row int
                   3              1               3                0
                  16              1               4                2
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 1 MPI process
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (mg_coarse_) 1 MPI process
      type: bjacobi
        number of blocks = 1
        Local solver information for first block is in the following KSP and PC objects on rank 0:
        Use -mg_coarse_ksp_view ::ascii_info_detail to display information for all blocks
      KSP Object: (mg_coarse_sub_) 1 MPI process
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        not checking for convergence
      PC Object: (mg_coarse_sub_) 1 MPI process
        type: lu
          out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
          matrix ordering: nd
          factor fill ratio given 5., needed 1.
            Factored matrix:
              Mat Object: (mg_coarse_sub_) 1 MPI process
                type: seqaij
                rows=3, cols=3
                package used to perform factorization: petsc
                total: nonzeros=9, allocated nonzeros=9
                  using I-node routines: found 1 nodes, limit used is 5
        linear system matrix, which is also used to construct the preconditioner:
        Mat Object: (mg_coarse_sub_) 1 MPI process
          type: seqaij
          rows=3, cols=3
          total: nonzeros=9, allocated nonzeros=9
          total number of mallocs used during MatSetValues calls=0
            using I-node routines: found 1 nodes, limit used is 5
      linear system matrix, which is also used to construct the preconditioner:
      Mat Object: (mg_coarse_sub_) 1 MPI process
        type: seqaij
        rows=3, cols=3
        total: nonzeros=9, allocated nonzeros=9
        total number of mallocs used during MatSetValues calls=0
          using I-node routines: found 1 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI process
      type: chebyshev
        Chebyshev polynomial of first kind
        eigenvalue targets used: min 0.160581, max 1.76639
        eigenvalues estimated via gmres: min 0.394193, max 1.60581
        eigenvalues estimated using gmres with transform: [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_1_esteig_) 1 MPI process
          type: gmres
            restart=30, using classical (unmodified) Gram-Schmidt orthogonalization with no iterative refinement
            happy breakdown tolerance=1e-30
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using a noisy random number generated right-hand side
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (mg_levels_1_) 1 MPI process
      type: jacobi
        type DIAGONAL
      linear system matrix, which is also used to construct the preconditioner:
      Mat Object: 1 MPI process
        type: seqaij
        rows=16, cols=16
        total: nonzeros=64, allocated nonzeros=64
        total number of mallocs used during MatSetValues calls=0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix, which is also used to construct the preconditioner:
  Mat Object: 1 MPI process
    type: seqaij
    rows=16, cols=16
    total: nonzeros=64, allocated nonzeros=64
    total number of mallocs used during MatSetValues calls=0
      not using I-node routines