Linear solve converged due to CONVERGED_RTOL iterations 18
KSP Object: 4 MPI processes
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 4 MPI processes
  type: hpddm
    levels: 2
    Neumann matrix attached? FALSE
    coarse correction: DEFLATED
    on process #0, value (+ threshold if available) for selecting deflation vectors: 1
    grid and operator complexities: 1.005 16.
  KSP Object: (pc_hpddm_levels_1_) 4 MPI processes
    type: preonly
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
    left preconditioning
    using NONE norm type for convergence test
  PC Object: (pc_hpddm_levels_1_) 4 MPI processes
    type: shell
      no name
    linear system matrix = precond matrix:
    Mat Object: 4 MPI processes
      type: htool
      rows=800, cols=800
      symmetry: N
      minimum cluster size: 10
      maximum block size: 1000000
      epsilon: 0.01
      eta: 10.
      minimum target depth: 0
      minimum source depth: 0
      compressor: sympartialACA
      clustering: PCARegular
      compression ratio: 1.08851
      space saving: 0.0813156
      (minimum, mean, maximum) dense block sizes: (144, 288.562, 2500)
      (minimum, mean, maximum) low rank block sizes: (144, 164.113, 625)
      (minimum, mean, maximum) ranks: (2, 4.8792, 7)
  PC Object: (pc_hpddm_levels_1_) 4 MPI processes
    type: asm
      total subdomain blocks = 4, user-defined overlap
      restriction/interpolation type - RESTRICT
      Local solver information for first block is in the following KSP and PC objects on rank 0:
      Use -pc_hpddm_levels_1_ksp_view ::ascii_info_detail to display information for all blocks
    KSP Object: (pc_hpddm_levels_1_sub_) 1 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (pc_hpddm_levels_1_sub_) 1 MPI processes
      type: lu
        out-of-place factorization
        tolerance for zero pivot 2.22045e-14
        matrix ordering: external
        factor fill ratio given 0., needed 0.
          Factored matrix follows:
            Mat Object: 1 MPI processes
              type: seqdense
              rows=202, cols=202
              package used to perform factorization: petsc
              total: nonzeros=40804, allocated nonzeros=40804
      linear system matrix = precond matrix:
      Mat Object: (pc_hpddm_levels_1_sub_) 1 MPI processes
        type: seqdense
        rows=202, cols=202
        total: nonzeros=40804, allocated nonzeros=40804
        total number of mallocs used during MatSetValues calls=0
    linear system matrix = precond matrix:
    Mat Object: 4 MPI processes
      type: htool
      rows=800, cols=800
      symmetry: N
      minimum cluster size: 10
      maximum block size: 1000000
      epsilon: 0.01
      eta: 10.
      minimum target depth: 0
      minimum source depth: 0
      compressor: sympartialACA
      clustering: PCARegular
      compression ratio: 1.08851
      space saving: 0.0813156
      (minimum, mean, maximum) dense block sizes: (144, 288.562, 2500)
      (minimum, mean, maximum) low rank block sizes: (144, 164.113, 625)
      (minimum, mean, maximum) ranks: (2, 4.8792, 7)
  KSP Object: (pc_hpddm_coarse_) 1 MPI processes
    type: gmres
      restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
      happy breakdown tolerance 1e-30
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
    left preconditioning
    using PRECONDITIONED norm type for convergence test
  PC Object: (pc_hpddm_coarse_) 1 MPI processes
    type: lu
      out-of-place factorization
      tolerance for zero pivot 2.22045e-14
      matrix ordering: external
      factor fill ratio given 0., needed 0.
        Factored matrix follows:
          Mat Object: 1 MPI processes
            type: seqdense
            rows=4, cols=4
            package used to perform factorization: petsc
            total: nonzeros=16, allocated nonzeros=16
    linear system matrix = precond matrix:
    Mat Object: (pc_hpddm_coarse_) 1 MPI processes
      type: seqdense
      rows=4, cols=4
      total: nonzeros=16, allocated nonzeros=16
      total number of mallocs used during MatSetValues calls=0
  linear system matrix = precond matrix:
  Mat Object: 4 MPI processes
    type: htool
    rows=800, cols=800
    symmetry: N
    minimum cluster size: 10
    maximum block size: 1000000
    epsilon: 0.01
    eta: 10.
    minimum target depth: 0
    minimum source depth: 0
    compressor: sympartialACA
    clustering: PCARegular
    compression ratio: 1.08851
    space saving: 0.0813156
    (minimum, mean, maximum) dense block sizes: (144, 288.562, 2500)
    (minimum, mean, maximum) low rank block sizes: (144, 164.113, 625)
    (minimum, mean, maximum) ranks: (2, 4.8792, 7)