Linear solve converged due to CONVERGED_RTOL iterations 18
KSP Object: 4 MPI processes
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 4 MPI processes
  type: hpddm
    levels: 2
    Neumann matrix attached? FALSE
    shared subdomain KSP between SLEPc and PETSc? FALSE
    coarse correction: DEFLATED
    on process #0, value (+ threshold if available) for selecting deflation vectors: 1
    grid and operator complexities: 1.005 16.
  KSP Object: (pc_hpddm_levels_1_) 4 MPI processes
    type: preonly
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
    left preconditioning
    using NONE norm type for convergence test
  PC Object: (pc_hpddm_levels_1_) 4 MPI processes
    type: shell
      no name
    linear system matrix = precond matrix:
    Mat Object: 4 MPI processes
      type: htool
      rows=800, cols=800
      symmetry: N
      minimum cluster size: 10
      maximum block size: 1000000
      epsilon: 0.01
      eta: 10.
      minimum target depth: 0
      minimum source depth: 0
      compressor: sympartialACA
      clustering: PCARegular
      compression ratio: 1.08851
      space saving: 0.0813156
      (minimum, mean, maximum) dense block sizes: (144, 288.562, 2500)
      (minimum, mean, maximum) low rank block sizes: (144, 164.113, 625)
      (minimum, mean, maximum) ranks: (2, 4.8792, 7)
  PC Object: (pc_hpddm_levels_1_) 4 MPI processes
    type: asm
      total subdomain blocks = 4, user-defined overlap
      restriction/interpolation type - RESTRICT
      Local solver information for first block is in the following KSP and PC objects on rank 0:
      Use -pc_hpddm_levels_1_ksp_view ::ascii_info_detail to display information for all blocks
    KSP Object: (pc_hpddm_levels_1_sub_) 1 MPI process
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (pc_hpddm_levels_1_sub_) 1 MPI process
      type: lu
        out-of-place factorization
        tolerance for zero pivot 2.22045e-14
        matrix ordering: external
        factor fill ratio given 0., needed 0.
          Factored matrix follows:
            Mat Object: (pc_hpddm_levels_1_sub_) 1 MPI process
              type: seqdense
              rows=202, cols=202
              package used to perform factorization: petsc
              total: nonzeros=40804, allocated nonzeros=40804
      linear system matrix = precond matrix:
      Mat Object: (pc_hpddm_levels_1_sub_) 1 MPI process
        type: seqdense
        rows=202, cols=202
        total: nonzeros=40804, allocated nonzeros=40804
        total number of mallocs used during MatSetValues calls=0
    linear system matrix = precond matrix:
    Mat Object: 4 MPI processes
      type: htool
      rows=800, cols=800
      symmetry: N
      minimum cluster size: 10
      maximum block size: 1000000
      epsilon: 0.01
      eta: 10.
      minimum target depth: 0
      minimum source depth: 0
      compressor: sympartialACA
      clustering: PCARegular
      compression ratio: 1.08851
      space saving: 0.0813156
      (minimum, mean, maximum) dense block sizes: (144, 288.562, 2500)
      (minimum, mean, maximum) low rank block sizes: (144, 164.113, 625)
      (minimum, mean, maximum) ranks: (2, 4.8792, 7)
  KSP Object: (pc_hpddm_coarse_) 1 MPI process
    type: gmres
      restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
      happy breakdown tolerance 1e-30
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
    left preconditioning
    using PRECONDITIONED norm type for convergence test
  PC Object: (pc_hpddm_coarse_) 1 MPI process
    type: lu
      out-of-place factorization
      tolerance for zero pivot 2.22045e-14
      matrix ordering: external
      factor fill ratio given 0., needed 0.
        Factored matrix follows:
          Mat Object: (pc_hpddm_coarse_) 1 MPI process
            type: seqdense
            rows=4, cols=4
            package used to perform factorization: petsc
            total: nonzeros=16, allocated nonzeros=16
    linear system matrix = precond matrix:
    Mat Object: (pc_hpddm_coarse_) 1 MPI process
      type: seqdense
      rows=4, cols=4
      total: nonzeros=16, allocated nonzeros=16
      total number of mallocs used during MatSetValues calls=0
  linear system matrix = precond matrix:
  Mat Object: 4 MPI processes
    type: htool
    rows=800, cols=800
    symmetry: N
    minimum cluster size: 10
    maximum block size: 1000000
    epsilon: 0.01
    eta: 10.
    minimum target depth: 0
    minimum source depth: 0
    compressor: sympartialACA
    clustering: PCARegular
    compression ratio: 1.08851
    space saving: 0.0813156
    (minimum, mean, maximum) dense block sizes: (144, 288.562, 2500)
    (minimum, mean, maximum) low rank block sizes: (144, 164.113, 625)
    (minimum, mean, maximum) ranks: (2, 4.8792, 7)
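A view like the one above is produced by any PETSc program whose KSP is configured from the options database: the first line comes from -ksp_converged_reason and the rest from -ksp_view. The following is a minimal driver sketch, not the program that produced this log. The 800x800 Htool operator cannot be reconstructed from the view alone (it would require the user kernel and point coordinates passed to MatCreateHtoolFromKernel()), so a 1D Laplacian stand-in is assembled instead, purely to keep the sketch self-contained and runnable.

#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat      A;
  Vec      x, b;
  KSP      ksp;
  PetscInt i, rstart, rend, N = 800; /* global size matching rows=800, cols=800 in the log */

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  /* Stand-in operator: a 1D Laplacian. The run behind the log used a MATHTOOL
     (hierarchical, low-rank compressed) matrix here instead. */
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N));
  PetscCall(MatSetFromOptions(A));
  PetscCall(MatSetUp(A));
  PetscCall(MatGetOwnershipRange(A, &rstart, &rend));
  for (i = rstart; i < rend; i++) {
    if (i > 0) PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
    if (i < N - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
    PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
  }
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  PetscCall(MatCreateVecs(A, &x, &b));
  PetscCall(VecSet(b, 1.0));

  /* All solver choices (GMRES, PCHPDDM levels, subdomain and coarse solvers)
     are picked up from the command line, which is what makes a view like the
     one above entirely run-time configurable. */
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetFromOptions(ksp));
  PetscCall(KSPSolve(ksp, b, x));

  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&b));
  PetscCall(MatDestroy(&A));
  PetscCall(KSPDestroy(&ksp));
  PetscCall(PetscFinalize());
  return 0;
}

Such a driver would be launched with something like: mpiexec -n 4 ./solver -ksp_converged_reason -ksp_view -pc_type hpddm -pc_hpddm_define_subdomains -pc_hpddm_levels_1_eps_nev 1 -pc_hpddm_coarse_pc_type lu. These options all exist in PETSc (and -pc_hpddm_levels_1_eps_nev 1 is consistent with the "1" deflation vector per subdomain reported above), but the exact option set behind this particular log, including the Htool parameters -mat_htool_epsilon and -mat_htool_eta, is an assumption.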