Lines Matching +full:linux +full:- +full:intel
22 …e: A Guide to Good Style](https://www.cambridge.org/core/books/writing-scientific-software/2320670…
32 - Fast, **low-latency** interconnect; any Ethernet (even 10 GigE) simply cannot provide
34 - High per-core **memory** performance. Each core needs to
72 - [MPICH2 binding with the Hydra process manager](https://github.com/pmodels/mpich/blob/main/doc/wi…
75 $ mpiexec.hydra -n 4 --binding cpu:sockets
78 - [Open MPI binding](https://www.open-mpi.org/faq/?category=tuning#using-paffinity)
81 $ mpiexec -n 4 --map-by socket --bind-to socket --report-bindings
84 - `taskset`, part of the [util-linux](https://github.com/karelzak/util-linux) package
89 - `numactl`
92 policy. On Linux, the default policy is to attempt to find memory on the same memory bus
97 The option `--localalloc` allocates memory on the local NUMA node, similar to the
99 `--cpunodebind=nodes` binds the process to a given NUMA node (note that this can be
102 The option `--physcpubind=cpus` binds the process to a given processor core (numbered
103 according to `/proc/cpuinfo`, therefore including logical cores if Hyper-threading is
136 - We use powerful editors and programming environments.
137 - Our manual pages are generated automatically from formatted comments in the code,
139 - We employ continuous integration testing of the entire PETSc library on many different
140 machine architectures. This process **significantly** protects (no bug-catching
147 - PETSc as a package should be easy to use, write, and maintain. Our mantra is to write
152 - PETSc is regularly checked to make sure that all code conforms to our interface
158 - We retain the useful ones and discard the rest. All of these decisions are based not
163 - Even the rules about capitalization are designed to make it easy to figure out the
172 - <mailto:petsc-maint@mcs.anl.gov> is always checked, and we pride ourselves on responding
174 an archive of all reported problems and fixes, so it is easy to re-find fixes to
177 8. **We contain the complexity of PETSc by using powerful object-oriented programming
180 - Data encapsulation serves to abstract complex data formats or movement to
181 human-readable format. This is why your program cannot, for example, look directly
183 - Polymorphism makes changing program behavior as easy as possible, and further
193 `--with-scalar-type=complex` and either `--with-clanguage=c++` or (the default)
194 `--with-clanguage=c`. In our experience they will deliver very similar performance
233 See GPU install {ref}`documentation <doc_config_accel>` for up-to-date information on
239 - The `VecType` `VECSEQCUDA`, `VECMPICUDA`, or `VECCUDA` may be used with
240 `VecSetType()` or `-vec_type seqcuda`, `mpicuda`, or `cuda` when
242 - The `MatType` `MATSEQAIJCUSPARSE`, `MATMPIAIJCUSPARSE`, or `MATAIJCUSPARSE`
243 may be used with `MatSetType()` or `-mat_type seqaijcusparse`, `mpiaijcusparse`, or
245 - If you are creating the vectors and matrices with a `DM`, you can use `-dm_vec_type
246 cuda` and `-dm_mat_type aijcusparse`.
250 - The `VecType` `VECSEQVIENNACL`, `VECMPIVIENNACL`, or `VECVIENNACL` may be used
251 with `VecSetType()` or `-vec_type seqviennacl`, `mpiviennacl`, or `viennacl`
253 - The `MatType` `MATSEQAIJVIENNACL`, `MATMPIAIJVIENNACL`, or `MATAIJVIENNACL`
254 may be used with `MatSetType()` or `-mat_type seqaijviennacl`, `mpiaijviennacl`, or
256 - If you are creating the vectors and matrices with a `DM`, you can use `-dm_vec_type
257 viennacl` and `-dm_mat_type aijviennacl`.
261 - It is useful to develop your code with the default vectors and then run production runs
263 - All of the Krylov methods except `KSPIBCGS` run on the GPU.
264 - Parts of most preconditioners run directly on the GPU. After setup, `PCGAMG` runs
268 must be built with the `configure` option `--with-precision=single`.
275 options `--with-precision=__float128` and `--download-f2cblaslapack`.
286 try to, implement the built-in data types of `double` are not native types and cannot
311 example `linux-c-debug` for the debug versions compiled by a C compiler or
312 `linux-c-opt` for the optimized version.
315 See the {ref}`quick-start tutorial <tut_install>` for a step-by-step guide on
329 e.g. `find . -name output -type d | xargs du -sh | sort -hr` on a Unix-based system.
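A rough cross-platform equivalent of that pipeline (plain Python sketch; it reports
file-byte totals rather than `du`'s block counts, and the directory name `output` is just
the one used in the command above):

```python
import os

def dir_size(path):
    """Total size in bytes of all regular files under path."""
    total = 0
    for root, _, files in os.walk(path):
        for f in files:
            fp = os.path.join(root, f)
            if os.path.isfile(fp):
                total += os.path.getsize(fp)
    return total

# Find every directory named "output" under the current tree and
# print them largest-first, mimicking `... | sort -hr`.
sizes = {}
for root, dirs, _ in os.walk("."):
    for d in dirs:
        if d == "output":
            p = os.path.join(root, d)
            sizes[p] = dir_size(p)

for p, s in sorted(sizes.items(), key=lambda kv: -kv[1]):
    print(f"{s:12d}  {p}")
```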
336 No, run `configure` with the option `--with-mpi=0`
340 Yes. Run `configure` with the additional flag `--with-x=0`
344 MPI is the message-passing standard. Because it is a standard, it will not frequently change over
345 time; thus, we do not have to change PETSc every time the provider of the message-passing
351 different libraries; no other message-passing system provides this support. All of the
357 one particular group to provide the message-passing libraries. Today, MPI is the only
367 `--with-mpi-dir`. You can rerun the configure with the additional option
368 `--with-mpi-compilers=0`, which will try to auto-detect working compilers; however,
370 work, run with `--with-cc=[your_c_compiler]` where you know `your_c_compiler` works
371 with this particular MPI, and likewise for C++ (`--with-cxx=[your_cxx_compiler]`) and Fortran
372 (`--with-fc=[your_fortran_compiler]`).
376 ### When should/can I use the `configure` option `--with-64-bit-indices`?
379 `PetscInt` defined to be a 32-bit `int`. If your problem:
381 - Involves more than 2^31 - 1 unknowns (around 2 billion).
382 - Has a matrix that might contain more than 2^31 - 1 nonzeros on a single process.
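The 2^31 - 1 threshold above is often crossed through the nonzero count well before the
unknown count. A back-of-envelope check (plain Python sketch; the grid sizes are made-up
illustrations, not PETSc limits):

```python
# Largest value a 32-bit signed PetscInt can hold.
INT32_MAX = 2**31 - 1            # 2147483647, about 2.1 billion

# A 3D grid of 1300^3 unknowns already exceeds it:
unknowns = 1300**3               # 2,197,000,000
assert unknowns > INT32_MAX

# A much smaller problem overflows via its nonzero count instead:
# roughly 27 nonzeros per row for a 3D 27-point stencil.
rows = 90_000_000
nonzeros = rows * 27             # 2,430,000,000 > INT32_MAX
assert nonzeros > INT32_MAX
```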
386 This option can be used whether you are using 32-bit or 64-bit pointers. You do not
387 need to use this option if you are using 64-bit pointers unless the two conditions above
398 $ make -f gmakefile PCC_FLAGS="-O1" $PETSC_ARCH/obj/src/mat/impls/baij/seq/baijsolvtrannat.o
405 2. `configure` PETSc with the `--with-petsc4py=1` option.
414 ### How can I find the URL locations of the packages you install using `--download-PACKAGE`?
424 - You previously ran `configure` with the option `--download-mpich` (or `--download-openmpi`)
429 $ rm -rf $PETSC_DIR/$PETSC_ARCH
430 $ ./configure --your-args
452 Using PETSC_DIR=/Users/barrysmith/Src/petsc and PETSC_ARCH=arch-fix-mpiexec-hang-2-ranks
459 - Verify you are using the correct `mpiexec` for the MPI you have linked PETSc with.
461 - If you have a VPN enabled on your machine, try turning it off and then running `make check` to
464 - If ``ping `hostname` `` (`/sbin/ping` on macOS) fails or hangs, do:
467 echo 127.0.0.1 `hostname` | sudo tee -a /etc/hosts
472 - Try completely disconnecting your machine from the network and see if `make check` then works
474 - Try the PETSc `configure` option `--download-mpich-device=ch3:nemesis` with `--download-mpich`.
583 …nt to use Hypre boomerAMG without GMRES but when I run `-pc_type hypre -pc_hypre_type boomeramg -k…
585 You should run with `-ksp_type richardson` to have PETSc run several V or W
586 cycles. `-ksp_type preonly` causes boomerAMG to use only one V/W cycle. You can control
588 `-pc_hypre_boomeramg_max_iter <it>` (the default is 1). You can also control the
590 `-pc_hypre_boomeramg_tol <tol>` (the default is 1.e-7). Run with `-ksp_view` to see
591 all the hypre options used and `-help | grep boomeramg` to see all the command line
597 of `PCASM`. These may be activated with the runtime option `-pc_type asm`. Various
598 other options may be set, including the degree of overlap (`-pc_asm_overlap <number>`), the
599 type of restriction/extension (`-pc_asm_type [basic,restrict,interpolate,none]`),
600 and several others. You may see the available ASM options by using `-pc_type asm
601 -help`. See the procedural interfaces in the manual pages, for example `PCASMSetType()`
610 PETSc also contains a balancing Neumann-Neumann type preconditioner, see the manual page
662 firstGlobalIndex = r == R-1 ? 0 : (N/R)*(r+1);
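This kind of contiguous block distribution can be sketched as follows (hypothetical helper
names, plain Python rather than the PETSc API; note that PETSc itself computes ownership
via `PetscLayout`/`PetscSplitOwnership`, which spreads any remainder over the first ranks,
whereas this sketch uses the simpler convention of giving the remainder to the last rank):

```python
def ownership_range(r, R, N):
    """First (inclusive) and last (exclusive) global index owned by rank r of R,
    assuming each rank gets N // R entries and the last rank takes the rest."""
    first = (N // R) * r
    last = N if r == R - 1 else (N // R) * (r + 1)
    return first, last

def owner_of(i, R, N):
    """Rank owning global index i under the same layout (hypothetical helper)."""
    return min(i // (N // R), R - 1)

# Every global index is owned by exactly one rank:
N, R = 10, 3
ranges = [ownership_range(r, R, N) for r in range(R)]
# ranges == [(0, 3), (3, 6), (6, 10)]
```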
735 ### How can I read in or write out a sparse matrix in Matrix Market, Harwell-Boeing, Slapc or other…
742 $ python $PETSC_DIR/lib/petsc/bin/PetscBinaryIO convert matrix.mtx
767 If `XXSetFromOptions()` is used (with `-xxx_type aaaa`) to change the type of the
785 - Allow setting the type from the command line, if it is not on the command line then the
795 - Hardwire the type in the code, but allow the user to override it via a subsequent
841 u^{n+1} = u^n - \lambda \left[ J(u^n) \right]^{\dagger} F(u^n)
847 - In order to do the line search $F \left(u^n - \lambda * \text{step} \right)$ may
851 - In the final step if $|| F(u^p)||$ satisfies the convergence criteria then a
865 -s
869 (`SNESSetLagJacobian()`) How often Jacobian is rebuilt (use -1 to
870 never rebuild, use -2 to rebuild the next time requested and then
873 -s
878 through multiple `SNES` solves, same as passing -2 to
879 `-snes_lag_jacobian`. By default, each new `SNES` solve
882 -s
891 -s
900 `-snes_mf_operator` which applies the fresh Jacobian matrix-free for every
901 matrix-vector product. Otherwise the out-of-date matrix-vector product, computed with
919 The simplest way is with the option `-snes_mf`; this will use finite differencing of the
923 Since no matrix representation of the Jacobian is provided, the `-pc_type` used with
924 this option must be `-pc_type none`. You may provide a custom preconditioner with
928 The option `-snes_mf_operator` will use a matrix-free method to apply the Jacobian (in the
935 For purely matrix-free (like `-snes_mf`) pass the matrix object for both matrix
939 `-snes_mf_operator`), pass this other matrix as the second matrix argument to
945 matrix-free Jacobian multiply call `MatMFFDSetFunction()` to set that other function. See
955 -pc_type svd -pc_svd_monitor
961 -pc_type none -ksp_type gmres -ksp_monitor_singular_value -ksp_gmres_restart 1000
967 number of the preconditioned operator, use `-pc_type somepc` in the last command.
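For intuition, the condition number estimated this way is just $\sigma_{\max}/\sigma_{\min}$
of the (preconditioned) operator. A tiny self-contained check on a 2x2 matrix (plain Python,
not PETSc; the singular values come from the eigenvalues of $A^T A$):

```python
import math

# Example 2x2 matrix with exactly computable singular values.
A = [[3.0, 0.0],
     [4.0, 5.0]]

a, b = A[0]
c, d = A[1]
s = a*a + b*b + c*c + d*d        # trace of A^T A = sigma_max^2 + sigma_min^2
det = abs(a*d - b*c)             # |det A| = sigma_max * sigma_min
disc = math.sqrt(max(s*s - 4.0*det*det, 0.0))
sigma_max = math.sqrt((s + disc) / 2.0)
sigma_min = math.sqrt((s - disc) / 2.0)

cond = sigma_max / sigma_min     # here exactly 3.0
```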
1017 S_D := A - BD^{-1}C \\
1018 S_A := D - CA^{-1}B
1022 dense, so assuming you wish to calculate $S_A = D - C \underbrace{
1023 \overbrace{(A^{-1})}^{U} B}_{V}$ begin by:
1033 6. Now call `MatAXPY(S,-1.0,D,SUBSET_NONZERO_PATTERN)`.
1034 7. Followed by `MatScale(S,-1.0)`.
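The sign bookkeeping in steps 6-7 can be sanity-checked with scalars standing in for the
blocks (a plain Python sketch, not PETSc calls; with scalars the "inverse" of A is simply
1/A):

```python
# Scalar stand-ins for the blocks A, B, C, D.
A, B, C, D = 4.0, 2.0, 3.0, 5.0

# Steps 1-5: V = A^{-1} B, then S = C * V, i.e. S = C A^{-1} B.
V = (1.0 / A) * B
S = C * V

# Step 6: S <- S + (-1.0) * D   (the MatAXPY with alpha = -1)
S = S + (-1.0) * D
# Step 7: S <- -S               (the MatScale by -1)
S = -S

# S now equals the Schur complement S_A = D - C A^{-1} B.
assert S == D - C * (1.0 / A) * B
```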
1056 D - C \text{diag}(A)^{-1} B
1066 Moore-Penrose pseudoinverse, and is available in `PCLSC` which operates on matrices of
1118 … use MatZeroRows() to eliminate the Dirichlet rows but this results in a non-symmetric system. How…
1120 - For nonsymmetric systems put the appropriate boundary solutions in the x vector and use
1122 - For symmetric problems use `MatZeroRowsColumns()`.
1123 - If you have many Dirichlet locations you can use `MatZeroRows()` (**not**
1124 `MatZeroRowsColumns()`) and `-ksp_type preonly -pc_type redistribute` (see
1137 require PETSc to be configured with `--with-matlab`.
1144 …[MATLAB Engine](https://www.mathworks.com/help/matlab/calling-matlab-engine-from-c-programs-1.html…
1158 …ple [example](https://stackoverflow.com/questions/3046305/simple-wrapping-of-c-code-with-cython). …
1184 $ ./configure --download-superlu_dist --download-parmetis --download-metis --with-openmp
1191 $ OMP_NUM_THREADS=n ./myprog -pc_type lu -pc_factor_mat_solver_type superlu_dist
1228 - `MatCreateNormalHermitian()`
1229 - `MatCreateHermitianTranspose()`
1233 - `MatIsHermitian()`
1234 - `MatIsHermitianKnown()`
1235 - `MatIsStructurallySymmetric()`
1236 - `MatIsSymmetricKnown()`
1237 - `MatIsSymmetric()`
1238 - `MatSetOption()` (use with `MAT_SYMMETRIC` or `MAT_HERMITIAN` to assert to PETSc
1244 - `MatMultHermitianTranspose()`
1245 - `MatMultHermitianTransposeAdd()` (very limited support)
1302 `--with-shared-libraries` (this is the default). Also, if you have room, compiling and
1306 ### How does PETSc's `-help` option work? Why is it different for different programs?
1310 - `PetscOptionsGetXXX()` where `XXX` is some type or data structure (for example
1314 - `PetscOptionsXXX()` where `XXX` is some type or data structure (for example
1315 `PetscOptionsBool()` or `PetscOptionsScalarArray()`). This is a so-called "provider"
1323 they will encounter different providers, and therefore have different `-help` output.
1327 Running the PETSc program with the option `-help` will print out many of the options. To
1328 print the options that have been specified within a program, employ `-options_left` to
1335 You can use the option `-info` to get more details about the solution process. The
1336 option `-log_view` provides details about the distribution of time spent in the various
1337 phases of the solution process. You can run with `-ts_view` or `-snes_view` or
1338 `-ksp_view` to see what solver options are being used. Run with `-ts_monitor`,
1339 `-snes_monitor`, or `-ksp_monitor` to watch convergence of the
1340 methods. `-snes_converged_reason` and `-ksp_converged_reason` will indicate why and if
1354 -l
1364 Run with `-log_view` and `-pc_mg_log`
1375 `-viewer_binary_skip_info` or `PetscViewerBinarySkipInfo()`.
1383 ### Why is my parallel solver slower than my sequential solver, or I have poor speed-up?
1388 with `-log_view`). Often the slower time is in generating the matrix or some other
1399 try `-pc_type asm` (`PCASM`); its iterations scale a bit better for more
1409 general, this requires an MPI-3 implementation, an implementation that supports multiple
1413 - Cray MPI MPT-5.6 MPI-3, by setting `$MPICH_MAX_THREAD_SAFETY` to "multiple"
1422 - MPICH version 3.0 and later implements the MPI-3 standard and the default
1429 ### When using PETSc in single precision mode (`--with-precision=single` when running `configure`) …
1443 - The Jacobian is wrong (or correct in sequential but not in parallel).
1444 - The linear system is {ref}`not solved <doc_faq_execution_kspconv>` or is not solved
1446 - The Jacobian system has a singularity that the linear solver is not handling.
1447 - There is a bug in the function evaluation routine.
1448 - The function is not continuous or does not have continuous first derivatives (e.g. phase
1450 - The equations may not have a solution (e.g. limit cycle instead of a steady state) or
1452 must ignite and burn before reaching a steady state, but the steady-state residual will
1457 - Run on one processor to see if the problem is only in parallel.
1459 - Run with `-info` to get more detailed information on the solution process.
1461 - Run with the options
1464 -snes_monitor -ksp_monitor_true_residual -snes_converged_reason -ksp_converged_reason
1467 - If the linear solve does not converge, check if the Jacobian is correct, then see
1469 - If the preconditioned residual converges, but the true residual does not, the
1471 - If the linear solve converges well, but the line search fails, the Jacobian may be
1474 - Run with `-pc_type lu` or `-pc_type svd` to see if the problem is a poor linear
1477 - Run with `-mat_view` or `-mat_view draw` to see if the Jacobian looks reasonable.
1479 - Run with `-snes_test_jacobian -snes_test_jacobian_view` to see if the Jacobian you are
1480 using is wrong. Compare the output when you add `-mat_fd_type ds` to see if the result
1483 - Run with `-snes_mf_operator -pc_type lu` to see if the Jacobian you are using is
-snes_mf_operator -pc_type ksp -ksp_ksp_rtol 1e-12
1490 Compare the output when you add `-mat_mffd_type ds` to see if the result is sensitive
1493 - Run with `-snes_linesearch_monitor` to see if the line search is failing (this is
1494 usually a sign of a bad Jacobian). Use `-info` in PETSc 3.1 and older versions,
1495 `-snes_ls_monitor` in PETSc 3.2 and `-snes_linesearch_monitor` in PETSc 3.3 and
1500 - Run with grid sequencing (`-snes_grid_sequence` if working with a `DM` is all you
1503 - Run with quad precision, i.e.
1506 $ ./configure --with-precision=__float128 --download-f2cblaslapack
1513 - Change the units (nondimensionalization), boundary condition scaling, or formulation so
1517 - Mollify features in the function that do not have continuous first derivatives (often
1522 - Try a trust region method (`-snes_type newtontr`, may have to adjust parameters).
1524 - Run with some continuation parameter from a point where you know the solution, see
1525 `TSPSEUDO` for steady-states.
1527 - There are homotopy solver packages like PHCpack that can get you all possible solutions
1536 Always run with `-ksp_converged_reason -ksp_monitor_true_residual` when trying to
1542 - A symmetric method is being used for a non-symmetric problem.
1544 - The equations are singular by accident (e.g. forgot to impose boundary
1545 conditions). Check this for a small problem using `-pc_type svd -pc_svd_monitor`.
1547 - The equations are intentionally singular (e.g. constant null space), but the Krylov
1551 a stern talking-to by the nearest Krylov Subspace Method representative.
1553 - The equations are intentionally singular and `MatSetNullSpace()` was used, but the
1554 right-hand side is not consistent. You may have to call `MatNullSpaceRemove()` on the
1555 right-hand side before calling `KSPSolve()`. See `MatSetTransposeNullSpace()`.
1557 - The equations are indefinite so that standard preconditioners don't work. Usually you
1561 -ksp_compute_eigenvalues -ksp_gmres_restart 1000 -pc_type none
1567 -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_detect_saddle_point
1571 …<mailto:petsc-users@mcs.anl.gov> or <mailto:petsc-maint@mcs.anl.gov> if you want advice about how …
1574 - If the method converges in preconditioned residual, but not in true residual, the
1576 (e.g. incompressible flow) or strongly nonsymmetric operators (e.g. low-Mach hyperbolic
1579 - The preconditioner is too weak or is unstable. See if `-pc_type asm -sub_pc_type lu`
1581 if longer restarts help `-ksp_gmres_restart 300`. If a transpose is available, try
1582 `-ksp_type bcgs` or other methods that do not require a restart.
1588 - The preconditioner is nonlinear (e.g. a nested iterative solve), try `-ksp_type
1589 fgmres` or `-ksp_type gcr`.
1591 - You are using geometric multigrid, but some equations (often boundary conditions) are
1592 not scaled compatibly between levels. Try `-pc_mg_galerkin` both to algebraically
1596 - The matrix is very ill-conditioned. Check the {ref}`condition number <doc_faq_usage_condnum>`.
1598 - Try to improve it by choosing the relative scaling of components/boundary conditions.
1599 - Try `-ksp_diagonal_scale -ksp_diagonal_scale_fix`.
1600 - Perhaps change the formulation of the problem to produce more friendly algebraic
1603 - Change the units (nondimensionalization), boundary condition scaling, or formulation so
1607 - Classical Gram-Schmidt is becoming unstable, try `-ksp_gmres_modifiedgramschmidt` or
1608 use a method that orthogonalizes differently, e.g. `-ksp_type gcr`.
1610 …get the error message: Actual argument at (1) to assumed-type dummy is of derived type with type-b…
1612 Use the following code snippet:
1634 !------------------------------------------------------------------------
1680 the program after `PetscFinalize()`. Use the following code snippet:
1701 ### What does the message `hwloc/linux: Ignoring PCI device with non-16bit domain` mean?
1713 ### How do I turn off PETSc signal handling so I can use the `-C` Option On `xlf`?
1718 (`-C` for IBM's) that causes all array accesses in Fortran to be checked to ensure they are
1719 in bounds. This is a great feature but does require that the array dimensions be set
1722 ### How do I debug if `-start_in_debugger` does not work on my machine?
1727 On newer macOS machines, one has to be in the admin group to be able to use the debugger.
1729 On newer Ubuntu Linux machines, one has to disable `ptrace_scope` with
1737 If `-start_in_debugger` does not work on your OS, for a uniprocessor job, just
1742 You can use the `-start_in_debugger` option to start all processes in the debugger (each
1744 xterm. Once you are sure that the program is hanging, hit control-c in each xterm and then
1754 (gdb) p ((Vec_Seq*) v->data)->array[0]@v->map.n
1767 (gdb) call MatView(m, PETSC_VIEWER_STDOUT_(m->comm))
1777 ### How can I find the cause of floating point exceptions like not-a-number (NaN) or infinity?
1780 architectures (including Linux and glibc-based systems), just run in a debugger and pass
1781 `-fp_trap` to the PETSc application. This will activate signaling exceptions and the
1785 Without a debugger, running with `-fp_trap` in debug mode will only identify the
1787 `-fp_trap` is not supported on your architecture, consult the documentation for your
1794 The Intel compilers use shared libraries (like libimf) that cannot be found, by default, at run
1795 time. When using the Intel compilers (and running the resulting code) you must make sure
1796 that the proper Intel initialization scripts are run. This is usually done by adding some
1804 source /opt/intel/cc/10.1.012/bin/iccvars.csh
1805 source /opt/intel/fc/10.1.012/bin/ifortvars.csh
1806 source /opt/intel/idb/10.1.012/bin/idbvars.csh
1812 source /opt/intel/cc/10.1.012/bin/iccvars.sh
1813 source /opt/intel/fc/10.1.012/bin/ifortvars.sh
1814 source /opt/intel/idb/10.1.012/bin/idbvars.sh
1863 option `-malloc_debug`. Occasionally the code may crash only with the optimized version;
1864 in that case, run the optimized version with `-malloc_debug`. If you determine the
1868 If `-malloc_debug` does not help: on NVIDIA CUDA systems you can use <https://docs.nvidia.com/compu…
1869 for example, `compute-sanitizer --tool memcheck [sanitizer_options] app_name [app_options]`.
1871 If `-malloc_debug` does not help: on GNU/Linux (not macOS) machines, you can
1874 1. `configure` PETSc with `--download-mpich --with-debugging` (you can use other MPI implementation…
1881 $ $PETSC_DIR/lib/petsc/bin/petscmpiexec -valgrind -n NPROC PETSCPROGRAMNAME PROGRAMOPTIONS
1887 $ mpiexec -n NPROC valgrind --tool=memcheck -q --num-callers=20 \
1888 --suppressions=$PETSC_DIR/share/petsc/suppressions/valgrind \
1889 --log-file=valgrind.log.%p PETSCPROGRAMNAME -malloc off PROGRAMOPTIONS
1893 - option `--with-debugging` enables valgrind to give stack trace with additional
1894 source-file:line-number info.
1895 - option `--download-mpich` is valgrind clean, other MPI builds are not valgrind clean.
1896 - when `--download-mpich` is used, `mpiexec` will be in `$PETSC_DIR/$PETSC_ARCH/bin`
1897 - `--log-file=valgrind.log.%p` option tells valgrind to store the output from each
1899 - `memcheck` will not find certain array accesses that violate static array
1900 declarations so if memcheck runs clean you can try the `--tool=exp-ptrcheck`
1904 You might also consider using <http://drmemory.org> which has support for GNU/Linux, Apple
1915 -pc_factor_shift_type nonzero -pc_factor_shift_amount [amount]
1921 -pc_factor_shift_type positive_definite -[level]_pc_factor_shift_type nonzero
1922 -pc_factor_shift_amount [amount]
1928 -[level]_pc_factor_shift_type positive_definite
1940 …e draw windows or `PETSCVIEWERDRAW` windows or use options `-ksp_monitor draw::draw_lg` or `-snes_…
1943 was run with the option `--with-x`.
1949 - You are creating new PETSc objects but never freeing them.
1950 - There is a memory leak in PETSc or your code.
1951 - Something much more subtle (if you are using Fortran): when you declare a large array
1956 - You are running with the `-log`, `-log_mpe`, or `-log_all` option. With these
1959 - You are linking with the MPI profiling libraries; these cause logging of all MPI
1965 - Run with the `-malloc_debug` option and `-malloc_view`. Or use `PetscMallocDump()`
1971 - This is just the way Unix works and is harmless.
1972 - Do not use the `-log`, `-log_mpe`, or `-log_all` option, or use
1975 - Make sure you do not link with the MPI profiling libraries.
1987 26 KSP Residual norm 3.421544615851e-04
1988 27 KSP Residual norm 2.973675659493e-04
1989 28 KSP Residual norm 2.588642948270e-04
1990 29 KSP Residual norm 2.268190747349e-04
1991 30 KSP Residual norm 1.977245964368e-04
1992 30 KSP Residual norm 1.994426291979e-04 <----- At restart the residual norm is printed a second time
1998 $|| b - A x^{n} ||$.
2000 Sometimes, especially with an ill-conditioned matrix, or computation of the matrix-vector
2007 noticeable. Or if you are running matrix-free you may need to tune the matrix-free
2015 1198 KSP Residual norm 1.366052062216e-04
2016 1198 KSP Residual norm 1.931875025549e-04
2017 1199 KSP Residual norm 1.366026406067e-04
2018 1199 KSP Residual norm 1.931819426344e-04
2021 Some Krylov methods, for example `KSPTFQMR`, actually have a "sub-iteration" of size 2
2029 When using dynamic libraries, the libraries cannot be moved after they are
2030 installed. This could also happen on clusters, where the paths are different on the (run)
2031 nodes than on the (compile) front-end. **Do not use dynamic libraries & shared
2033 `--with-shared-libraries=0 --with-dynamic-loading=0`.
2041 If at some point (in PETSc code history) you had a working code, but the latest PETSc
2045 - Using Git to access PETSc sources
2046 - Knowing the Git commit for the known working version of PETSc
2047 - Knowing the Git commit for the known broken version of PETSc
2048 - Using the [bisect](https://mirrors.edge.kernel.org/pub/software/scm/git/docs/git-bisect.html)
2061 history of petsc-release clones. Let's say the known bad commit is 21af4baa815c and
2083 Now, until done, keep bisecting, building PETSc, and testing your code with it to
2084 determine whether the code is working or not. After something like 5-15 iterations, `git
2085 bisect` will pinpoint the exact code change that resulted in the difference in
2089 See [git-bisect(1)](https://mirrors.edge.kernel.org/pub/software/scm/git/docs/git-bisect.html) and …
2090 [debugging section of the Git Book](https://git-scm.com/book/en/Git-Tools-Debugging-with-Git) for m…
2115 $ ./configure --with-shared-libraries
2124 - Saves disk space when more than one executable is created
2125 - Improves the link time immensely, because the linker has to write a much smaller
2135 You must run `configure` with the option `--with-shared-libraries=0` (you can use a