  0 SNES Function norm 33.3967
  1 SNES Function norm 3.95646e-09
L_2 Error: 7.89093
Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 1
SNES Object: 4 MPI processes
  type: newtonls
  maximum iterations=50, maximum function evaluations=10000
  tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
  total number of linear solver iterations=16
  total number of function evaluations=2
  norm schedule ALWAYS
  SNESLineSearch Object: 4 MPI processes
    type: bt
      interpolation: cubic
      alpha=1.000000e-04
    maxlambda=1.000000e+00, minlambda=1.000000e-12
    tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08
    maximum iterations=40
  KSP Object: 4 MPI processes
    type: gmres
      restart=100, using classical (unmodified) Gram-Schmidt orthogonalization with no iterative refinement
      happy breakdown tolerance=1e-30
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-10, absolute=1e-50, divergence=10000.
    left preconditioning
    using PRECONDITIONED norm type for convergence test
  PC Object: 4 MPI processes
    type: hpddm
      levels: 2
      Neumann matrix attached? TRUE
      shared subdomain KSP between SLEPc and PETSc? FALSE
      coarse correction: DEFLATED
      on process #0, value (+ threshold if available) for selecting deflation vectors: 20
      grid and operator complexities: 1.01463 1.10782
    KSP Object: (pc_hpddm_levels_1_) 4 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (pc_hpddm_levels_1_) 4 MPI processes
      type: shell
        no name
      linear system matrix, which is also used to construct the preconditioner:
      Mat Object: 4 MPI processes
        type: mpiaij
        total number of mallocs used during MatSetValues calls=0
        not using I-node (on process 0) routines
    PC Object: (pc_hpddm_levels_1_) 4 MPI processes
      type: bjacobi
        number of blocks = 4
        Local solver information for first block is in the following KSP and PC objects on rank 0:
        Use -pc_hpddm_levels_1_ksp_view ::ascii_info_detail to display information for all blocks
      KSP Object: (pc_hpddm_levels_1_sub_) 1 MPI process
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        not checking for convergence
      PC Object: (pc_hpddm_levels_1_sub_) 1 MPI process
        type: icc
          out-of-place factorization
          3 levels of fill
          tolerance for zero pivot 2.22045e-14
          using Manteuffel shift [POSITIVE_DEFINITE]
          matrix ordering: natural
          factor fill ratio given 1., needed 2.69786
            Factored matrix:
              Mat Object: (pc_hpddm_levels_1_sub_) 1 MPI process
                type: seqsbaij
                package used to perform factorization: petsc
        linear system matrix, which is also used to construct the preconditioner:
        Mat Object: (pc_hpddm_levels_1_sub_) 1 MPI process
          type: seqaij
          total number of mallocs used during MatSetValues calls=0
          not using I-node routines
      linear system matrix, which is also used to construct the preconditioner:
      Mat Object: 4 MPI processes
        type: mpiaij
        total number of mallocs used during MatSetValues calls=0
        not using I-node (on process 0) routines
    KSP Object: (pc_hpddm_coarse_) 2 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (pc_hpddm_coarse_) 2 MPI processes
      type: redundant
        First (color=0) of 2 PCs follows
      KSP Object: (pc_hpddm_coarse_redundant_) 1 MPI process
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        not checking for convergence
      PC Object: (pc_hpddm_coarse_redundant_) 1 MPI process
        type: cholesky
          out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          matrix ordering: natural
          factor fill ratio given 5., needed 1.1
            Factored matrix:
              Mat Object: (pc_hpddm_coarse_redundant_) 1 MPI process
                type: seqsbaij
                package used to perform factorization: petsc
        linear system matrix, which is also used to construct the preconditioner:
        Mat Object: 1 MPI process
          type: seqsbaij
          total number of mallocs used during MatSetValues calls=0
      linear system matrix, which is also used to construct the preconditioner:
      Mat Object: (pc_hpddm_coarse_) 2 MPI processes
        type: mpisbaij
        total number of mallocs used during MatSetValues calls=0
    linear system matrix, which is also used to construct the preconditioner:
    Mat Object: 4 MPI processes
      type: mpiaij
      total number of mallocs used during MatSetValues calls=0
      not using I-node (on process 0) routines
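The view above can be reproduced (up to problem-specific details such as the application binary and mesh options, which are not shown in the log and are assumed here) with a PETSc options set along these lines. Every option below is a standard PETSc/PCHPDDM option inferred from the printed configuration, not copied from the actual run:

    # Sketch of the options behind this solver configuration (assumed, not from the log)
    -snes_monitor                                   # prints the "SNES Function norm" lines
    -snes_view                                      # prints the SNES/KSP/PC hierarchy above
    -ksp_type gmres
    -ksp_gmres_restart 100
    -ksp_rtol 1e-10
    -pc_type hpddm
    -pc_hpddm_has_neumann                           # "Neumann matrix attached? TRUE"
    -pc_hpddm_levels_1_eps_nev 20                   # 20 deflation vectors per subdomain
    -pc_hpddm_levels_1_sub_pc_type icc
    -pc_hpddm_levels_1_sub_pc_factor_levels 3       # "3 levels of fill"
    -pc_hpddm_levels_1_sub_pc_factor_shift_type POSITIVE_DEFINITE
    -pc_hpddm_coarse_pc_type redundant
    -pc_hpddm_coarse_redundant_pc_type cholesky

Note that `newtonls` with a cubic backtracking (`bt`) line search is the default SNES configuration, so no explicit `-snes_type` option is needed to obtain it.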