Linear solve converged due to CONVERGED_RTOL iterations 18
KSP Object: 4 MPI processes
  type: gmres
    restart=30, using classical (unmodified) Gram-Schmidt orthogonalization with no iterative refinement
    happy breakdown tolerance=1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 4 MPI processes
  type: hpddm
    levels: 2
    Neumann matrix attached? FALSE
    shared subdomain KSP between SLEPc and PETSc? FALSE
    coarse correction: DEFLATED
    on process #0, value (+ threshold if available) for selecting deflation vectors: 1
    grid and operator complexities: 1.005 16.
  KSP Object: (pc_hpddm_levels_1_) 4 MPI processes
    type: preonly
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
    left preconditioning
    not checking for convergence
  PC Object: (pc_hpddm_levels_1_) 4 MPI processes
    type: shell
      no name
    linear system matrix, which is also used to construct the preconditioner:
    Mat Object: 4 MPI processes
      type: htool
      rows=800, cols=800
        symmetry: N
        maximal cluster leaf size: 10
        epsilon: 0.01
        eta: 10.
        minimum target depth: 0
        minimum source depth: 0
        compressor: sympartialACA
        clustering: PCARegular
        compression ratio: 1.086263
        space saving: 0.079412
        block tree consistency: TRUE
        recompression: FALSE
        (minimum, mean, maximum) dense block sizes: (36, 39.341459, 169)
        (minimum, mean, maximum) low rank block sizes: (36, 127.619256, 625)
        (minimum, mean, maximum) ranks: (2, 4.356674, 8)
  PC Object: (pc_hpddm_levels_1_) 4 MPI processes
    type: asm
      total subdomain blocks = 4, user-defined overlap
      restriction/interpolation type - RESTRICT
      Local solver information for first block is in the following KSP and PC objects on rank 0:
      Use -pc_hpddm_levels_1_ksp_view ::ascii_info_detail to display information for all blocks
      KSP Object: (pc_hpddm_levels_1_sub_) 1 MPI process
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        not checking for convergence
      PC Object: (pc_hpddm_levels_1_sub_) 1 MPI process
        type: lu
          out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          matrix ordering: external
          factor fill ratio given 0., needed 0.
          Factored matrix:
            Mat Object: (pc_hpddm_levels_1_sub_) 1 MPI process
              type: seqdense
              rows=202, cols=202
              package used to perform factorization: petsc
              total: nonzeros=40804, allocated nonzeros=40804
        linear system matrix, which is also used to construct the preconditioner:
        Mat Object: (pc_hpddm_levels_1_sub_) 1 MPI process
          type: seqdense
          rows=202, cols=202
          total: nonzeros=40804, allocated nonzeros=40804
          total number of mallocs used during MatSetValues calls=0
    linear system matrix, which is also used to construct the preconditioner:
    Mat Object: 4 MPI processes
      type: htool
      rows=800, cols=800
        symmetry: N
        maximal cluster leaf size: 10
        epsilon: 0.01
        eta: 10.
        minimum target depth: 0
        minimum source depth: 0
        compressor: sympartialACA
        clustering: PCARegular
        compression ratio: 1.086263
        space saving: 0.079412
        block tree consistency: TRUE
        recompression: FALSE
        (minimum, mean, maximum) dense block sizes: (36, 39.341459, 169)
        (minimum, mean, maximum) low rank block sizes: (36, 127.619256, 625)
        (minimum, mean, maximum) ranks: (2, 4.356674, 8)
  KSP Object: (pc_hpddm_coarse_) 1 MPI process
    type: gmres
      restart=30, using classical (unmodified) Gram-Schmidt orthogonalization with no iterative refinement
      happy breakdown tolerance=1e-30
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
    left preconditioning
    using PRECONDITIONED norm type for convergence test
  PC Object: (pc_hpddm_coarse_) 1 MPI process
    type: lu
      out-of-place factorization
      tolerance for zero pivot 2.22045e-14
      matrix ordering: external
      factor fill ratio given 0., needed 0.
      Factored matrix:
        Mat Object: (pc_hpddm_coarse_) 1 MPI process
          type: seqdense
          rows=4, cols=4
          package used to perform factorization: petsc
          total: nonzeros=16, allocated nonzeros=16
    linear system matrix, which is also used to construct the preconditioner:
    Mat Object: (pc_hpddm_coarse_) 1 MPI process
      type: seqdense
      rows=4, cols=4
      total: nonzeros=16, allocated nonzeros=16
      total number of mallocs used during MatSetValues calls=0
  linear system matrix, which is also used to construct the preconditioner:
  Mat Object: 4 MPI processes
    type: htool
    rows=800, cols=800
      symmetry: N
      maximal cluster leaf size: 10
      epsilon: 0.01
      eta: 10.
      minimum target depth: 0
      minimum source depth: 0
      compressor: sympartialACA
      clustering: PCARegular
      compression ratio: 1.086263
      space saving: 0.079412
      block tree consistency: TRUE
      recompression: FALSE
      (minimum, mean, maximum) dense block sizes: (36, 39.341459, 169)
      (minimum, mean, maximum) low rank block sizes: (36, 127.619256, 625)
      (minimum, mean, maximum) ranks: (2, 4.356674, 8)