Lines Matching refs:have
35 have its **own** memory bandwidth of at least 2 gigabytes/second. Most modern
45 If you do not know this and can run MPI programs with mpiexec (that is, you don't have
55 If you have a batch system:
66 Even if you have enough memory bandwidth, if the OS switches processes between cores
123 C++ without using such a large and complicated language. It would have been natural and
124 reasonable to have coded PETSc in C++; we opted to use C instead.
170 7. **We have a rich, robust, and fast bug reporting system**
301 Assuming that the PETSc libraries have been successfully built for a particular
316 installing PETSc, in case you have missed a step.
327 `$PETSC_DIR/src/*/tests/output`. Once you have run the test examples, you may remove
345 time; thus, we do not have to change PETSc every time the provider of the message-passing
352 major parallel computer vendors were involved in the design of MPI and have committed to
355 In addition, since MPI is a standard, several different groups have already provided
356 complete free implementations. Thus, one does not have to rely on the technical skills of
459 - Verify you are using the correct `mpiexec` for the MPI you have linked PETSc with.
461 - If you have a VPN enabled on your machine, try turning it off and then running `make check` to
585 You should run with `-ksp_type richardson` to have PETSc run several V or W
614 ### You have `MATAIJ` and `MATBAIJ` matrix formats, and `MATSBAIJ` for symmetric storage, how come …
636 For example, assuming we have distributed a vector `vecGlobal` of size $N$ to
782 the restart will be ignored since the type has not yet been set to `KSPGMRES`. To have
828 Declare the class method static. Static methods do not have a `this` pointer, but the
854 You are free to have your `FormFunction()` compute as much of the Jacobian at that point
1046 Alternatively, if you already have a block matrix `M = [A, B; C, D]` (in some
1069 ### Do you have examples of doing unstructured grid Finite Element Method (FEM) with PETSc?
1123 - If you have many Dirichlet locations you can use `MatZeroRows()` (**not**
1172 ### If I have a sequential program can I use a PETSc parallel solver?
1187 Your compiler must support OpenMP. To have the linear solver run in parallel, run your
1198 If your code is MPI parallel you can also use these same options to have SuperLU_dist
1201 cores available for each MPI process. For example, if your compute nodes have 6 cores
1255 No, once the vector or matrices sizes have been set and the matrices or vectors are fully
1262 Assuming you have an existing matrix $A$ whose nullspace $V$ you want to find:
1302 `--with-shared-libraries` (this is the default). Also, if you have room, compiling and
1323 they will encounter different providers, and therefore have different `-help` output.
1328 print the options that have been specified within a program, employ `-options_left` to
1341 the solvers have converged.
1383 ### Why is my parallel solver slower than my sequential solver, or I have poor speed-up?
1392 more. This is even more true when using multiple GPUs, where you need to have millions
1448 - The function is not continuous or does not have continuous first derivatives (e.g. phase
1450 - The equations may not have a solution (e.g. limit cycle instead of a steady state) or
1517 - Mollify features in the function that do not have continuous first derivatives (often
1522 - Try a trust region method (`-snes_type newtontr`; may have to adjust parameters).
1554 right-hand side is not consistent. You may have to call `MatNullSpaceRemove()` on the
1717 Some Fortran compilers, including the IBM xlf and xlF compilers, have a compile option
1772 could have custom code to print values in the object. We have only done this for the most
1788 debugger since there is likely a way to have it catch exceptions.
1801 For example, on my Mac using `csh` I have the following in my `.cshrc` file:
1809 And in my `.profile` I have
1938 have an error in the code that generates the matrix.
2021 Some Krylov methods, for example `KSPTFQMR`, actually have a "sub-iteration" of size 2
2042 code broke it, it's possible to determine the PETSc code change that might have caused this
2140 You would also need to have access to the shared libraries on this new machine. The other