  0 KSP Residual norm 775.064
  1 KSP Residual norm 131.137
  2 KSP Residual norm 51.6444
  3 KSP Residual norm 48.3143
  4 KSP Residual norm 22.4594
  5 KSP Residual norm 7.21248
  6 KSP Residual norm 2.00533
  7 KSP Residual norm 1.11834
  8 KSP Residual norm 0.460893
  9 KSP Residual norm 0.212662
 10 KSP Residual norm 0.0845831
 11 KSP Residual norm 0.0401786
 12 KSP Residual norm 0.0181711
 13 KSP Residual norm 0.00565415
Linear solve converged due to CONVERGED_RTOL iterations 13
KSP Object: 8 MPI processes
  type: cg
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 8 MPI processes
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level =
        Threshold scaling factor for each level not specified = 1.
        Using aggregates from coarsening process to define subdomains for PCASM
        Using parallel coarse grid solver (all coarse grid equations not put on one process)
        AGG specific options
          Symmetric graph false
          Number of levels to square graph 1
          Number smoothing steps 1
        Complexity: grid = 1.07125
  Coarse grid solver -- level -------------------------------
    KSP Object: (mg_coarse_) 8 MPI processes
      type: cg
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using PRECONDITIONED norm type for convergence test
    PC Object: (mg_coarse_) 8 MPI processes
      type: jacobi
      linear system matrix = precond matrix:
      Mat Object: 8 MPI processes
        type: mpiaij
        rows=162, cols=162, bs=6
        total: nonzeros=14076, allocated nonzeros=14076
        total number of mallocs used during MatSetValues calls=0
          using I-node (on process 0) routines: found 4 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 8 MPI processes
      type: chebyshev
        eigenvalue estimates used: min = 0.450864, max = 2.36704
        eigenvalues estimate via gmres min 0.0411196, max 2.25432
        eigenvalues estimated using gmres with translations [0. 0.2; 0. 1.05]
        KSP Object: (mg_levels_1_esteig_) 8 MPI processes
          type: gmres
            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=1, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 8 MPI processes
      type: asm
        total subdomain blocks = 27, amount of overlap = 0
        restriction/interpolation type - BASIC
        Local solver is the same for all blocks, as in the following KSP and PC objects on rank 0:
      KSP Object: (mg_levels_1_sub_) 1 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_levels_1_sub_) 1 MPI processes
        type: lu
          out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          matrix ordering: nd
          factor fill ratio given 5., needed 1.43328
            Factored matrix follows:
              Mat Object: 1 MPI processes
                type: seqaij
                rows=135, cols=135
                package used to perform factorization: petsc
                total: nonzeros=8217, allocated nonzeros=8217
                total number of mallocs used during MatSetValues calls=0
                  using I-node routines: found 45 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object: 1 MPI processes
          type: seqaij
          rows=135, cols=135
          total: nonzeros=5733, allocated nonzeros=5733
          total number of mallocs used during MatSetValues calls=0
            using I-node routines: found 45 nodes, limit used is 5
      linear system matrix = precond matrix:
      Mat Object: 8 MPI processes
        type: mpiaij
        rows=3000, cols=3000, bs=3
        total: nonzeros=197568, allocated nonzeros=243000
        total number of mallocs used during MatSetValues calls=0
          has attached near null space
          using I-node (on process 0) routines: found 125 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 8 MPI processes
    type: mpiaij
    rows=3000, cols=3000, bs=3
    total: nonzeros=197568, allocated nonzeros=243000
    total number of mallocs used during MatSetValues calls=0
      has attached near null space
      using I-node (on process 0) routines: found 125 nodes, limit used is 5
  0 KSP Residual norm 0.00775064
  1 KSP Residual norm 0.00131137
  2 KSP Residual norm 0.000516444
  3 KSP Residual norm 0.000483143
  4 KSP Residual norm 0.000224594
  5 KSP Residual norm 7.21248e-05
  6 KSP Residual norm 2.00533e-05
  7 KSP Residual norm 1.11834e-05
  8 KSP Residual norm 4.60893e-06
  9 KSP Residual norm 2.12662e-06
 10 KSP Residual norm 8.45831e-07
 11 KSP Residual norm 4.01786e-07
 12 KSP Residual norm 1.81711e-07
 13 KSP Residual norm 5.65415e-08
Linear solve converged due to CONVERGED_RTOL iterations 13
KSP Object: 8 MPI processes
  type: cg
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 8 MPI processes
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level =
        Threshold scaling factor for each level not specified = 1.
        Using aggregates from coarsening process to define subdomains for PCASM
        Using parallel coarse grid solver (all coarse grid equations not put on one process)
        AGG specific options
          Symmetric graph false
          Number of levels to square graph 1
          Number smoothing steps 1
        Complexity: grid = 1.07125
  Coarse grid solver -- level -------------------------------
    KSP Object: (mg_coarse_) 8 MPI processes
      type: cg
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using PRECONDITIONED norm type for convergence test
    PC Object: (mg_coarse_) 8 MPI processes
      type: jacobi
      linear system matrix = precond matrix:
      Mat Object: 8 MPI processes
        type: mpiaij
        rows=162, cols=162, bs=6
        total: nonzeros=14076, allocated nonzeros=14076
        total number of mallocs used during MatSetValues calls=0
          using nonscalable MatPtAP() implementation
          using I-node (on process 0) routines: found 4 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 8 MPI processes
      type: chebyshev
        eigenvalue estimates used: min = 0.450864, max = 2.36704
        eigenvalues estimate via gmres min 0.0411196, max 2.25432
        eigenvalues estimated using gmres with translations [0. 0.2; 0. 1.05]
        KSP Object: (mg_levels_1_esteig_) 8 MPI processes
          type: gmres
            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=1, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 8 MPI processes
      type: asm
        total subdomain blocks = 27, amount of overlap = 0
        restriction/interpolation type - BASIC
        Local solver is the same for all blocks, as in the following KSP and PC objects on rank 0:
      KSP Object: (mg_levels_1_sub_) 1 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_levels_1_sub_) 1 MPI processes
        type: lu
          out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          matrix ordering: nd
          factor fill ratio given 5., needed 1.43328
            Factored matrix follows:
              Mat Object: 1 MPI processes
                type: seqaij
                rows=135, cols=135
                package used to perform factorization: petsc
                total: nonzeros=8217, allocated nonzeros=8217
                total number of mallocs used during MatSetValues calls=0
                  using I-node routines: found 45 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object: 1 MPI processes
          type: seqaij
          rows=135, cols=135
          total: nonzeros=5733, allocated nonzeros=5733
          total number of mallocs used during MatSetValues calls=0
            using I-node routines: found 45 nodes, limit used is 5
      linear system matrix = precond matrix:
      Mat Object: 8 MPI processes
        type: mpiaij
        rows=3000, cols=3000, bs=3
        total: nonzeros=197568, allocated nonzeros=243000
        total number of mallocs used during MatSetValues calls=0
          has attached near null space
          using I-node (on process 0) routines: found 125 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 8 MPI processes
    type: mpiaij
    rows=3000, cols=3000, bs=3
    total: nonzeros=197568, allocated nonzeros=243000
    total number of mallocs used during MatSetValues calls=0
      has attached near null space
      using I-node (on process 0) routines: found 125 nodes, limit used is 5
  0 KSP Residual norm 0.00775064
  1 KSP Residual norm 0.00131137
  2 KSP Residual norm 0.000516444
  3 KSP Residual norm 0.000483143
  4 KSP Residual norm 0.000224594
  5 KSP Residual norm 7.21248e-05
  6 KSP Residual norm 2.00533e-05
  7 KSP Residual norm 1.11834e-05
  8 KSP Residual norm 4.60893e-06
  9 KSP Residual norm 2.12662e-06
 10 KSP Residual norm 8.45831e-07
 11 KSP Residual norm 4.01786e-07
 12 KSP Residual norm 1.81711e-07
 13 KSP Residual norm 5.65415e-08
Linear solve converged due to CONVERGED_RTOL iterations 13
KSP Object: 8 MPI processes
  type: cg
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 8 MPI processes
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level =
        Threshold scaling factor for each level not specified = 1.
        Using aggregates from coarsening process to define subdomains for PCASM
        Using parallel coarse grid solver (all coarse grid equations not put on one process)
        AGG specific options
          Symmetric graph false
          Number of levels to square graph 1
          Number smoothing steps 1
        Complexity: grid = 1.07125
  Coarse grid solver -- level -------------------------------
    KSP Object: (mg_coarse_) 8 MPI processes
      type: cg
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using PRECONDITIONED norm type for convergence test
    PC Object: (mg_coarse_) 8 MPI processes
      type: jacobi
      linear system matrix = precond matrix:
      Mat Object: 8 MPI processes
        type: mpiaij
        rows=162, cols=162, bs=6
        total: nonzeros=14076, allocated nonzeros=14076
        total number of mallocs used during MatSetValues calls=0
          using nonscalable MatPtAP() implementation
          using I-node (on process 0) routines: found 4 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 8 MPI processes
      type: chebyshev
        eigenvalue estimates used: min = 0.450864, max = 2.36704
        eigenvalues estimate via gmres min 0.0411196, max 2.25432
        eigenvalues estimated using gmres with translations [0. 0.2; 0. 1.05]
        KSP Object: (mg_levels_1_esteig_) 8 MPI processes
          type: gmres
            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=1, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 8 MPI processes
      type: asm
        total subdomain blocks = 27, amount of overlap = 0
        restriction/interpolation type - BASIC
        Local solver is the same for all blocks, as in the following KSP and PC objects on rank 0:
      KSP Object: (mg_levels_1_sub_) 1 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_levels_1_sub_) 1 MPI processes
        type: lu
          out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          matrix ordering: nd
          factor fill ratio given 5., needed 1.43328
            Factored matrix follows:
              Mat Object: 1 MPI processes
                type: seqaij
                rows=135, cols=135
                package used to perform factorization: petsc
                total: nonzeros=8217, allocated nonzeros=8217
                total number of mallocs used during MatSetValues calls=0
                  using I-node routines: found 45 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object: 1 MPI processes
          type: seqaij
          rows=135, cols=135
          total: nonzeros=5733, allocated nonzeros=5733
          total number of mallocs used during MatSetValues calls=0
            using I-node routines: found 45 nodes, limit used is 5
      linear system matrix = precond matrix:
      Mat Object: 8 MPI processes
        type: mpiaij
        rows=3000, cols=3000, bs=3
        total: nonzeros=197568, allocated nonzeros=243000
        total number of mallocs used during MatSetValues calls=0
          has attached near null space
          using I-node (on process 0) routines: found 125 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 8 MPI processes
    type: mpiaij
    rows=3000, cols=3000, bs=3
    total: nonzeros=197568, allocated nonzeros=243000
    total number of mallocs used during MatSetValues calls=0
      has attached near null space
      using I-node (on process 0) routines: found 125 nodes, limit used is 5
[0]main |b-Ax|/|b|=1.585482e-04, |b|=5.391826e+00, emax=9.993392e-01
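For reference, a solver configuration matching the ksp_view output above can typically be assembled from standard PETSc run-time options. The fragment below is a hypothetical reconstruction inferred from the output, in PETSc options-file format; the executable, mesh setup, and any further GAMG/ASM toggles used in the actual run are not recoverable from the log.

```
# Hypothetical options reconstructed from the ksp_view output; not the original command line.
-ksp_type cg                   # outer Krylov solver: CG
-ksp_rtol 1e-5                 # relative tolerance shown in the output
-ksp_monitor                   # prints the "KSP Residual norm" lines
-ksp_converged_reason          # prints "Linear solve converged due to ..."
-ksp_view                      # prints the KSP/PC object descriptions
-pc_type gamg                  # algebraic multigrid preconditioner
-pc_gamg_type agg              # smoothed aggregation ("AGG specific options")
-pc_gamg_agg_nsmooths 1        # "Number smoothing steps 1"
-mg_levels_ksp_type chebyshev  # level-1 smoother Krylov method
-mg_levels_pc_type asm         # level-1 smoother PC (additive Schwarz)
-mg_coarse_ksp_type cg         # parallel coarse grid solver
-mg_coarse_pc_type jacobi      # coarse grid preconditioner
```

With 8 MPI ranks and a 3000-row elasticity-type operator (bs=3 with an attached near null space), options like these would yield the two-level GAMG hierarchy shown, with LU as the local subdomain solver inside the ASM smoother.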