  0 SNES Function norm 33.3967
  1 SNES Function norm 3.95646e-09
L_2 Error: 7.89093
Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 1
SNES Object: 4 MPI processes
  type: newtonls
  maximum iterations=50, maximum function evaluations=10000
  tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
  total number of linear solver iterations=16
  total number of function evaluations=2
  norm schedule ALWAYS
  SNESLineSearch Object: 4 MPI processes
    type: bt
      interpolation: cubic
      alpha=1.000000e-04
    maxlambda=1.000000e+00, minlambda=1.000000e-12
    tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08
    maximum iterations=40
  KSP Object: 4 MPI processes
    type: gmres
      restart=100, using classical (unmodified) Gram-Schmidt orthogonalization with no iterative refinement
      happy breakdown tolerance=1e-30
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-10, absolute=1e-50, divergence=10000.
    left preconditioning
    using PRECONDITIONED norm type for convergence test
  PC Object: 4 MPI processes
    type: hpddm
      levels: 2
      Neumann matrix attached? TRUE
      shared subdomain KSP between SLEPc and PETSc? FALSE
      coarse correction: DEFLATED
      on process #0, value (+ threshold if available) for selecting deflation vectors: 20
      grid and operator complexities: 1.01463 1.10782
      KSP Object: (pc_hpddm_levels_1_) 4 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        not checking for convergence
      PC Object: (pc_hpddm_levels_1_) 4 MPI processes
        type: shell
          no name
        linear system matrix, which is also used to construct the preconditioner:
        Mat Object: 4 MPI processes
          type: mpiaij
          total number of mallocs used during MatSetValues calls=0
          not using I-node (on process 0) routines
      PC Object: (pc_hpddm_levels_1_) 4 MPI processes
        type: bjacobi
          number of blocks = 4
          Local solver information for first block is in the following KSP and PC objects on rank 0:
          Use -pc_hpddm_levels_1_ksp_view ::ascii_info_detail to display information for all blocks
        KSP Object: (pc_hpddm_levels_1_sub_) 1 MPI process
          type: preonly
          maximum iterations=10000, initial guess is zero
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
          left preconditioning
          not checking for convergence
        PC Object: (pc_hpddm_levels_1_sub_) 1 MPI process
          type: icc
            out-of-place factorization
            3 levels of fill
            tolerance for zero pivot 2.22045e-14
            using Manteuffel shift [POSITIVE_DEFINITE]
            matrix ordering: natural
            factor fill ratio given 1., needed 2.69786
              Factored matrix:
                Mat Object: (pc_hpddm_levels_1_sub_) 1 MPI process
                  type: seqsbaij
                  package used to perform factorization: petsc
          linear system matrix, which is also used to construct the preconditioner:
          Mat Object: (pc_hpddm_levels_1_sub_) 1 MPI process
            type: seqaij
            total number of mallocs used during MatSetValues calls=0
            not using I-node routines
        linear system matrix, which is also used to construct the preconditioner:
        Mat Object: 4 MPI processes
          type: mpiaij
          total number of mallocs used during MatSetValues calls=0
          not using I-node (on process 0) routines
      KSP Object: (pc_hpddm_coarse_) 2 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        not checking for convergence
      PC Object: (pc_hpddm_coarse_) 2 MPI processes
        type: redundant
          First (color=0) of 2 PCs follows
          KSP Object: (pc_hpddm_coarse_redundant_) 1 MPI process
            type: preonly
            maximum iterations=10000, initial guess is zero
            tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
            left preconditioning
            not checking for convergence
          PC Object: (pc_hpddm_coarse_redundant_) 1 MPI process
            type: cholesky
              out-of-place factorization
              tolerance for zero pivot 2.22045e-14
              matrix ordering: natural
              factor fill ratio given 5., needed 1.1
                Factored matrix:
                  Mat Object: (pc_hpddm_coarse_redundant_) 1 MPI process
                    type: seqsbaij
                    package used to perform factorization: petsc
            linear system matrix, which is also used to construct the preconditioner:
            Mat Object: 1 MPI process
              type: seqsbaij
              total number of mallocs used during MatSetValues calls=0
        linear system matrix, which is also used to construct the preconditioner:
        Mat Object: (pc_hpddm_coarse_) 2 MPI processes
          type: mpisbaij
          total number of mallocs used during MatSetValues calls=0
    linear system matrix, which is also used to construct the preconditioner:
    Mat Object: 4 MPI processes
      type: mpiaij
      total number of mallocs used during MatSetValues calls=0
      not using I-node (on process 0) routines
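
A note on reproducing output in this format: the view above is what PETSc prints under -snes_monitor, -snes_converged_reason, and -snes_view, and the solver stack itself (GMRES with restart 100, a two-level PCHPDDM with ICC subdomain solves and a redundant Cholesky coarse solve) is assembled from command-line options such as -ksp_type gmres -ksp_gmres_restart 100 -ksp_rtol 1e-10 -pc_type hpddm -pc_hpddm_has_neumann -pc_hpddm_levels_1_eps_nev 20 -pc_hpddm_levels_1_sub_pc_type icc -pc_hpddm_coarse_pc_type redundant. The program that produced this particular log is not shown here; the following is only a minimal serial sketch of the driver pattern (the 2x2 test system is the one from PETSc's snes/tutorials/ex1.c), illustrating where SNESSetFromOptions() picks those options up. The hpddm options only make sense for a genuinely distributed problem, so this toy would instead be run with the default preconditioner.

/* Minimal sketch, not the program behind the log above. */
#include <petscsnes.h>

/* Residual F(x) = [x0^2 + x0*x1 - 3; x0*x1 + x1^2 - 6], solution (1, 2). */
static PetscErrorCode FormFunction(SNES snes, Vec x, Vec f, void *ctx)
{
  const PetscScalar *xx;
  PetscScalar       *ff;

  PetscFunctionBeginUser;
  PetscCall(VecGetArrayRead(x, &xx));
  PetscCall(VecGetArray(f, &ff));
  ff[0] = xx[0] * xx[0] + xx[0] * xx[1] - 3.0;
  ff[1] = xx[0] * xx[1] + xx[1] * xx[1] - 6.0;
  PetscCall(VecRestoreArrayRead(x, &xx));
  PetscCall(VecRestoreArray(f, &ff));
  PetscFunctionReturn(PETSC_SUCCESS);
}

/* Analytic Jacobian of F, inserted into the preconditioning matrix P. */
static PetscErrorCode FormJacobian(SNES snes, Vec x, Mat J, Mat P, void *ctx)
{
  const PetscScalar *xx;
  PetscScalar        A[4];
  const PetscInt     idx[2] = {0, 1};

  PetscFunctionBeginUser;
  PetscCall(VecGetArrayRead(x, &xx));
  A[0] = 2.0 * xx[0] + xx[1]; A[1] = xx[0];
  A[2] = xx[1];               A[3] = xx[0] + 2.0 * xx[1];
  PetscCall(VecRestoreArrayRead(x, &xx));
  PetscCall(MatSetValues(P, 2, idx, 2, idx, A, INSERT_VALUES));
  PetscCall(MatAssemblyBegin(P, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(P, MAT_FINAL_ASSEMBLY));
  PetscFunctionReturn(PETSC_SUCCESS);
}

int main(int argc, char **argv)
{
  SNES snes;
  Vec  x, r;
  Mat  J;

  PetscFunctionBeginUser;
  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(VecCreate(PETSC_COMM_WORLD, &x));
  PetscCall(VecSetSizes(x, PETSC_DECIDE, 2));
  PetscCall(VecSetFromOptions(x));
  PetscCall(VecDuplicate(x, &r));
  PetscCall(MatCreate(PETSC_COMM_WORLD, &J));
  PetscCall(MatSetSizes(J, PETSC_DECIDE, PETSC_DECIDE, 2, 2));
  PetscCall(MatSetFromOptions(J));
  PetscCall(MatSetUp(J));

  PetscCall(SNESCreate(PETSC_COMM_WORLD, &snes));
  PetscCall(SNESSetFunction(snes, r, FormFunction, NULL));
  PetscCall(SNESSetJacobian(snes, J, J, FormJacobian, NULL));
  PetscCall(SNESSetFromOptions(snes)); /* -snes_*, -ksp_*, -pc_* options are applied here */

  PetscCall(VecSet(x, 0.5)); /* initial guess */
  PetscCall(SNESSolve(snes, NULL, x));

  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&r));
  PetscCall(MatDestroy(&J));
  PetscCall(SNESDestroy(&snes));
  PetscCall(PetscFinalize());
  return 0;
}

Compiled against PETSc and run with, e.g., ./app -snes_monitor -snes_view -snes_converged_reason, this prints a monitor trace and an object view in the same format as the log above (with newtonls, bt line search, and the default linear solver in place of the hpddm stack).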