KSP Object: 1 MPI process
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI process
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level = -1. -1.
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          MatCoarsen Object: (pc_gamg_) 1 MPI process
            type: mis
            MIS aggregator lists are not available
          Number smoothing steps to construct prolongation 1
        Complexity: grid = 1.1875 operator = 1.14062
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 1 MPI process
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_coarse_) 1 MPI process
      type: bjacobi
        number of blocks = 1
        Local solver information for each block is in the following KSP and PC objects:
        [0] number of local blocks = 1, first local block number = 0
        [0] local block number 0
          KSP Object: (mg_coarse_sub_) 1 MPI process
            type: preonly
            maximum iterations=1, initial guess is zero
            tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
            left preconditioning
            using NONE norm type for convergence test
          PC Object: (mg_coarse_sub_) 1 MPI process
            type: lu
              out-of-place factorization
              tolerance for zero pivot 2.22045e-14
              using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
              matrix ordering: nd
              factor fill ratio given 5., needed 1.
                Factored matrix follows:
                  Mat Object: (mg_coarse_sub_) 1 MPI process
                    type: seqaij
                    rows=3, cols=3
                    package used to perform factorization: petsc
                    total: nonzeros=9, allocated nonzeros=9
                      using I-node routines: found 1 nodes, limit used is 5
            linear system matrix = precond matrix:
            Mat Object: (mg_coarse_sub_) 1 MPI process
              type: seqaij
              rows=3, cols=3
              total: nonzeros=9, allocated nonzeros=9
              total number of mallocs used during MatSetValues calls=0
                using I-node routines: found 1 nodes, limit used is 5
        - - - - - - - - - - - - - - - - - -
      linear system matrix = precond matrix:
      Mat Object: (mg_coarse_sub_) 1 MPI process
        type: seqaij
        rows=3, cols=3
        total: nonzeros=9, allocated nonzeros=9
        total number of mallocs used during MatSetValues calls=0
          using I-node routines: found 1 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI process
      type: chebyshev
        Chebyshev polynomial of first kind
        eigenvalue targets used: min 1.06112, max 11.6723
        eigenvalues provided (min 0.311583, max 10.6112) with transform: [0. 0.1; 0. 1.1]
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 1 MPI process
      type: jacobi
        type DIAGONAL
      Vec Object: 1 MPI process
        type: seq
        length=16
      linear system matrix = precond matrix:
      Mat Object: 1 MPI process
        type: seqaij
        rows=16, cols=16
        total: nonzeros=64, allocated nonzeros=64
        total number of mallocs used during MatSetValues calls=0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 1 MPI process
    type: seqaij
    rows=16, cols=16
    total: nonzeros=64, allocated nonzeros=64
    total number of mallocs used during MatSetValues calls=0
      not using I-node routines
KSP Object: 1 MPI process
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI process
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level = -1. -1.
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          MatCoarsen Object: (pc_gamg_) 1 MPI process
            type: mis
            MIS aggregator lists are not available
          Number smoothing steps to construct prolongation 1
        Complexity: grid = 1.1875 operator = 1.14062
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 1 MPI process
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_coarse_) 1 MPI process
      type: bjacobi
        number of blocks = 1
        Local solver information for each block is in the following KSP and PC objects:
        [0] number of local blocks = 1, first local block number = 0
        [0] local block number 0
          KSP Object: (mg_coarse_sub_) 1 MPI process
            type: preonly
            maximum iterations=1, initial guess is zero
            tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
            left preconditioning
            using NONE norm type for convergence test
          PC Object: (mg_coarse_sub_) 1 MPI process
            type: lu
              out-of-place factorization
              tolerance for zero pivot 2.22045e-14
              using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
              matrix ordering: nd
              factor fill ratio given 5., needed 1.
                Factored matrix follows:
                  Mat Object: (mg_coarse_sub_) 1 MPI process
                    type: seqaij
                    rows=3, cols=3
                    package used to perform factorization: petsc
                    total: nonzeros=9, allocated nonzeros=9
                      using I-node routines: found 1 nodes, limit used is 5
            linear system matrix = precond matrix:
            Mat Object: (mg_coarse_sub_) 1 MPI process
              type: seqaij
              rows=3, cols=3
              total: nonzeros=9, allocated nonzeros=9
              total number of mallocs used during MatSetValues calls=0
                using I-node routines: found 1 nodes, limit used is 5
        - - - - - - - - - - - - - - - - - -
      linear system matrix = precond matrix:
      Mat Object: (mg_coarse_sub_) 1 MPI process
        type: seqaij
        rows=3, cols=3
        total: nonzeros=9, allocated nonzeros=9
        total number of mallocs used during MatSetValues calls=0
          using I-node routines: found 1 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI process
      type: chebyshev
        Chebyshev polynomial of first kind
        eigenvalue targets used: min 0.159372, max 1.75309
        eigenvalues estimated via gmres: min 0.406283, max 1.59372
        eigenvalues estimated using gmres with transform: [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_1_esteig_) 1 MPI process
          type: gmres
            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using a noisy random number generated right-hand side
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 1 MPI process
      type: jacobi
        type DIAGONAL
      Vec Object: 1 MPI process
        type: seq
        length=16
      linear system matrix = precond matrix:
      Mat Object: 1 MPI process
        type: seqaij
        rows=16, cols=16
        total: nonzeros=64, allocated nonzeros=64
        total number of mallocs used during MatSetValues calls=0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 1 MPI process
    type: seqaij
    rows=16, cols=16
    total: nonzeros=64, allocated nonzeros=64
    total number of mallocs used during MatSetValues calls=0
      not using I-node routines
KSP Object: 1 MPI process
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI process
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level = -1. -1.
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          MatCoarsen Object: (pc_gamg_) 1 MPI process
            type: mis
            MIS aggregator lists are not available
          Number smoothing steps to construct prolongation 1
        Complexity: grid = 1.1875 operator = 1.14062
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 1 MPI process
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_coarse_) 1 MPI process
      type: bjacobi
        number of blocks = 1
        Local solver information for each block is in the following KSP and PC objects:
        [0] number of local blocks = 1, first local block number = 0
        [0] local block number 0
          KSP Object: (mg_coarse_sub_) 1 MPI process
            type: preonly
            maximum iterations=1, initial guess is zero
            tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
            left preconditioning
            using NONE norm type for convergence test
          PC Object: (mg_coarse_sub_) 1 MPI process
            type: lu
              out-of-place factorization
              tolerance for zero pivot 2.22045e-14
              using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
              matrix ordering: nd
              factor fill ratio given 5., needed 1.
                Factored matrix follows:
                  Mat Object: (mg_coarse_sub_) 1 MPI process
                    type: seqaij
                    rows=3, cols=3
                    package used to perform factorization: petsc
                    total: nonzeros=9, allocated nonzeros=9
                      using I-node routines: found 1 nodes, limit used is 5
            linear system matrix = precond matrix:
            Mat Object: (mg_coarse_sub_) 1 MPI process
              type: seqaij
              rows=3, cols=3
              total: nonzeros=9, allocated nonzeros=9
              total number of mallocs used during MatSetValues calls=0
                using I-node routines: found 1 nodes, limit used is 5
        - - - - - - - - - - - - - - - - - -
      linear system matrix = precond matrix:
      Mat Object: (mg_coarse_sub_) 1 MPI process
        type: seqaij
        rows=3, cols=3
        total: nonzeros=9, allocated nonzeros=9
        total number of mallocs used during MatSetValues calls=0
          using I-node routines: found 1 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI process
      type: chebyshev
        Chebyshev polynomial of first kind
        eigenvalue targets used: min 0.160581, max 1.76639
        eigenvalues estimated via gmres: min 0.394193, max 1.60581
        eigenvalues estimated using gmres with transform: [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_1_esteig_) 1 MPI process
          type: gmres
            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using a noisy random number generated right-hand side
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 1 MPI process
      type: jacobi
        type DIAGONAL
      Vec Object: 1 MPI process
        type: seq
        length=16
      linear system matrix = precond matrix:
      Mat Object: 1 MPI process
        type: seqaij
        rows=16, cols=16
        total: nonzeros=64, allocated nonzeros=64
        total number of mallocs used during MatSetValues calls=0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 1 MPI process
    type: seqaij
    rows=16, cols=16
    total: nonzeros=64, allocated nonzeros=64
    total number of mallocs used during MatSetValues calls=0
      not using I-node routines
KSP Object: 1 MPI process
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI process
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level = -1. -1.
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          MatCoarsen Object: (pc_gamg_) 1 MPI process
            type: mis
            MIS aggregator lists are not available
          Number smoothing steps to construct prolongation 1
        Complexity: grid = 1.1875 operator = 1.14062
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 1 MPI process
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_coarse_) 1 MPI process
      type: bjacobi
        number of blocks = 1
        Local solver information for each block is in the following KSP and PC objects:
        [0] number of local blocks = 1, first local block number = 0
        [0] local block number 0
          KSP Object: (mg_coarse_sub_) 1 MPI process
            type: preonly
            maximum iterations=1, initial guess is zero
            tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
            left preconditioning
            using NONE norm type for convergence test
          PC Object: (mg_coarse_sub_) 1 MPI process
            type: lu
              out-of-place factorization
              tolerance for zero pivot 2.22045e-14
              using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
              matrix ordering: nd
              factor fill ratio given 5., needed 1.
                Factored matrix follows:
                  Mat Object: (mg_coarse_sub_) 1 MPI process
                    type: seqaij
                    rows=3, cols=3
                    package used to perform factorization: petsc
                    total: nonzeros=9, allocated nonzeros=9
                      using I-node routines: found 1 nodes, limit used is 5
            linear system matrix = precond matrix:
            Mat Object: (mg_coarse_sub_) 1 MPI process
              type: seqaij
              rows=3, cols=3
              total: nonzeros=9, allocated nonzeros=9
              total number of mallocs used during MatSetValues calls=0
                using I-node routines: found 1 nodes, limit used is 5
        - - - - - - - - - - - - - - - - - -
      linear system matrix = precond matrix:
      Mat Object: (mg_coarse_sub_) 1 MPI process
        type: seqaij
        rows=3, cols=3
        total: nonzeros=9, allocated nonzeros=9
        total number of mallocs used during MatSetValues calls=0
          using I-node routines: found 1 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI process
      type: chebyshev
        Chebyshev polynomial of first kind
        eigenvalue targets used: min 0.160614, max 1.76675
        eigenvalues estimated via gmres: min 0.393863, max 1.60614
        eigenvalues estimated using gmres with transform: [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_1_esteig_) 1 MPI process
          type: gmres
            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using a noisy random number generated right-hand side
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 1 MPI process
      type: jacobi
        type DIAGONAL
      Vec Object: 1 MPI process
        type: seq
        length=16
      linear system matrix = precond matrix:
      Mat Object: 1 MPI process
        type: seqaij
        rows=16, cols=16
        total: nonzeros=64, allocated nonzeros=64
        total number of mallocs used during MatSetValues calls=0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 1 MPI process
    type: seqaij
    rows=16, cols=16
    total: nonzeros=64, allocated nonzeros=64
    total number of mallocs used during MatSetValues calls=0
      not using I-node routines
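The four dumps above are the solver configuration PETSc prints once per solve when `-ksp_view` is given: GMRES preconditioned by a two-level GAMG hierarchy, with a block-Jacobi/LU direct solve on the 3x3 coarse grid and a Chebyshev/Jacobi smoother on the fine level. A sketch of options that would produce a view of this shape; the executable name `./app` is a placeholder, and further options (matrix assembly, problem size) are omitted:

```shell
# Hypothetical invocation; only the solver options shown in the log are set.
./app -ksp_type gmres \
      -pc_type gamg \
      -mg_levels_ksp_type chebyshev \
      -mg_levels_pc_type jacobi \
      -ksp_view
```

Most of the remaining settings in the log (restart=30, rtol=1e-05, the esteig GMRES used to bound the Chebyshev eigenvalue interval) are PETSc defaults, so they appear without being requested explicitly.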