---
orphan: true
---

(2023_meeting)=

# 2023 Annual PETSc Meeting

```{image} https://petsc.gitlab.io/annual-meetings/2023/GroupPhoto.jpg
:alt: PETSc User Meeting 2023 group photo (Hermann Hall, 06/06/2023)
:width: 800
```

June 5-7, 2023, at the [Hermann Hall Conference Center](https://www.iit.edu/event-services/meeting-spaces/hermann-hall-conference-center)
in the Hermann Ballroom (when you enter the Hermann Hall building through the main entrance, walk straight back to the rear of the building and take a right)
(3241 South Federal Street, Chicago, IL)
on the campus of [The Illinois Institute of Technology (IIT)](https://www.iit.edu) in Chicago.
Easy access from the hotels via the Chicago Elevated [Green](https://www.transitchicago.com/greenline) or [Red](https://www.transitchicago.com/redline) Lines.
[For parking, use lot B5 (32nd & Federal St.)](https://www.iit.edu/cbsc/parking/visitor-and-event-parking).

Please test for Covid before attending the meeting and
mask while traveling to the meeting.

In addition to a newbie user tutorial and a {any}`newbie_developer_workshop`, the meeting will include a "speed dating" session where users can ask questions of developers (and each other) about technical details of their particular simulations. Finally, the meeting will be interspersed with mini-tutorials that will dive into particular aspects of PETSc that users may not be familiar with.

## Meeting times

- Monday, June 5: 1 pm to 5:30 pm
- Tuesday, June 6: 10:15 am to 5:30 pm
- Wednesday, June 7: 9 am to 3 pm

PETSc newbie user lightning tutorial:

- Monday, June 5: 10 am to 12 pm

PETSc {any}`newbie_developer_workshop`:

- Tuesday, June 6: 9 am to 10 am

## Registration

Please register at [EventBrite](https://www.eventbrite.com/e/petsc-2023-user-meeting-tickets-494165441137) to save your seat. The 100-dollar registration fee covers breaks and lunches; it can be waived if you cannot afford it.

## Submit a presentation

[Submit an abstract](https://docs.google.com/forms/d/e/1FAIpQLSesh47RGVb9YD9F1qu4obXSe1X6fn7vVmjewllePBDxBItfOw/viewform) by May 1st (but preferably now) to be included in the schedule. We welcome talks from all perspectives, including those who

- contribute to PETSc,
- use PETSc in their applications or libraries,
- develop the libraries and packages [called from PETSc](https://petsc.org/release/install/external_software/), and even
- are simply curious about using PETSc in their applications.

## Suggested hotels

- [Receive IIT hotel discounts.](https://www.iit.edu/procurement-services/purchasing/preferred-and-contract-vendors/hotels)

- More expensive

  - [Hilton Chicago](https://www.hilton.com/en/hotels/chichhh-hilton-chicago/?SEO_id=GMB-AMER-HI-CHICHHH&y_source=1_NzIxNzU2LTcxNS1sb2NhdGlvbi53ZWJzaXRl) 720 S Michigan Ave, Chicago
  - [Hotel Blake, an Ascend Hotel Collection Member](https://www.choicehotels.com/illinois/chicago/ascend-hotels/il480) 500 S Dearborn St, Chicago, IL 60605
  - [The Blackstone, Autograph Collection](https://www.marriott.com/en-us/hotels/chiab-the-blackstone-autograph-collection/overview/?scid=f2ae0541-1279-4f24-b197-a979c79310b0) 636 South Michigan Avenue (lobby entrance on E Balbo Dr), Chicago

- Inexpensive

  - [Travelodge by Wyndham Downtown Chicago](https://www.wyndhamhotels.com/travelodge/chicago-illinois/travelodge-hotel-downtown-chicago/overview?CID=LC:TL::GGL:RIO:National:10073&iata=00093796) 65 E Harrison St, Chicago
  - [The Congress Plaza Hotel & Convention Center](https://www.congressplazahotel.com/?utm_source=local-directories&utm_medium=organic&utm_campaign=travelclick-localconnect) 520 S Michigan Ave, Chicago
  - [Hilton Garden Inn Chicago Downtown South Loop](https://www.hilton.com/en/hotels/chidlgi-hilton-garden-inn-chicago-downtown-south-loop/?SEO_id=GMB-AMER-GI-CHIDLGI&y_source=1_MTI2NDg5NzktNzE1LWxvY2F0aW9uLndlYnNpdGU%3D) 55 E 11th St, Chicago

70
71### Monday, June 5
72
73| Time     | Title                                                                                                                        | Speaker                 |
74| -------- | ---------------------------------------------------------------------------------------------------------------------------- | ----------------------- |
75| 10:00 am | Newbie tutorial ([Slides][s_00], [Video][v_00])                                                                              |                         |
76| 11:30 am | Follow-up questions and meetings                                                                                             |                         |
77| 12:00 am | **Lunch** for tutorial attendees and early arrivees                                                                          |                         |
78| 1:00 pm  | Some thoughts on the future of PETSc ([Slides][s_01], [Video][v_01])                                                         | [Barry Smith]           |
79| 1:30 pm  | A new nonhydrostatic capability for MPAS-Ocean ([Slides][s_02], [Video][v_02])                                               | [Sara Calandrini]       |
80| 2:00 pm  | MultiFlow: A coupled balanced-force framework to solve multiphase flows in arbitrary domains ([Slides][s_03], [Video][v_03]) | [Berend van Wachem]     |
81| 2:30 pm  | Mini tutorial: PETSc and PyTorch interoperability ([Slides][s_04], [Video][v_04], [IPython code][c_04])                      | [Hong Zhang (Mr.)]      |
82| 2:45 pm  | **Coffee Break**                                                                                                             |                         |
83| 3:00 pm  | Towards enabling digital twins capabilities for a cloud chamber (slides and video unavailable)                               | [Vanessa Lopez-Marrero] |
84| 3:30 pm  | PETSc ROCKS ([Slides][s_06], [Video][v_06])                                                                                  | [David May]             |
85| 4:00 pm  | Software Development and Deployment Including PETSc ([Slides][s_07], [Video][v_07])                                          | [Tim Steinhoff]         |
86| 4:30 pm  | Multiscale, Multiphysics Simulation Through Application Composition Using MOOSE ([Slides][s_08], [Video][v_08])              | [Derek Gaston]          |
87| 5:00 pm  | PETSc Newton Trust-Region for Simulating Large-scale Engineered Subsurface Systems with PFLOTRAN ([Slides][s_09])            | [Heeho Park]            |
88| 5:30 pm  | End of first day                                                                                                             |                         |
89
### Tuesday, June 6

| Time | Title | Speaker |
| ---- | ----- | ------- |
| 9:00 am | Newbie Developer Workshop (optional) | |
| 10:00 am | **Coffee Break** | |
| 10:15 am | Experiences in solving nonlinear eigenvalue problems with SLEPc ([Slides][s_10], [Video][v_10]) | [Jose E. Roman] |
| 10:45 am | MPI Multiply Threads ([Slides][s_11], [Video][v_11]) | [Hui Zhou] |
| 11:15 am | Mini tutorial: PETSc on the GPU ([Slides][s_12], [Video][v_12]) | [Junchao Zhang] |
| 11:30 am | AMD GPU benchmarking, documentation, and roadmap ([Slides][s_13], video unavailable) | [Justin Chang] |
| 12:00 pm | **Lunch** | |
| 1:00 pm | Mini tutorial: petsc4py ([Slides][s_14], [Video][v_14]) | [Stefano Zampini] |
| 1:15 pm | Transparent Asynchronous Compute Made Easy With PETSc ([Slides][s_15], [Video][v_15]) | [Jacob Faibussowitsch] |
| 1:45 pm | Using Kokkos Ecosystem with PETSc on modern architectures ([Slides][s_16]) | [Luc Berger-Vergiat] |
| 2:15 pm | Intel oneAPI Math Kernel Library, what’s new and what’s next? ([Slides][s_17], [Video][v_17]) | [Spencer Patty] |
| 2:45 pm | Mini tutorial: DMPlex ([Video][v_18], slides unavailable) | [Matt Knepley] |
| 3:00 pm | **Coffee Break** | |
| 3:15 pm | Scalable cloud-native thermo-mechanical solvers using PETSc (slides and video unavailable) | [Ashish Patel] |
| 3:45 pm | A mimetic finite difference based quasi-static magnetohydrodynamic solver for force-free plasmas in tokamak disruptions ([Slides][s_20], [Video][v_20]) | [Zakariae Jorti] |
| 4:15 pm | High-order FEM implementation in AMReX using PETSc ([Slides][s_21], [Video][v_21]) | [Alex Grant] |
| 4:45 pm | An Immersed Boundary method for Elastic Bodies Using PETSc ([Slides][s_22], [Video][v_22]) | [Mohamad Ibrahim Cheikh] |
| 5:15 pm | Mini tutorial: DMNetwork ([Slides][s_23], [Video][v_23]) | [Hong Zhang (Ms.)] |
| 5:30 pm | End of second day | |

### Wednesday, June 7

| Time | Title | Speaker |
| ---- | ----- | ------- |
| 9:00 am | XGCm: An Unstructured Mesh Gyrokinetic Particle-in-cell Code for Exascale Fusion Plasma Simulations ([Slides][s_24], [Video][v_24]) | [Chonglin Zhang] |
| 9:30 am | PETSc-PIC: A Structure-Preserving Particle-In-Cell Method for Electrostatic Solves ([Slides][s_25], [Video][v_25]) | [Daniel Finn] |
| 9:57 am | Landau Collisions in the Particle Basis with PETSc-PIC ([Slides][s_26], [Video][v_26]) | [Joseph Pusztay] |
| 10:15 am | **Coffee Break** | |
| 10:30 am | Mini tutorial: DMSwarm ([Slides][s_27], [Video][v_27]) | [Joseph Pusztay\*][joseph pusztay*] |
| 10:45 am | Scalable Riemann Solvers with the Discontinuous Galerkin Method for Hyperbolic Network Simulation ([Slides][s_28], [Video][v_28]) | [Aidan Hamilton] |
| 11:15 am | Numerical upscaling of network models using PETSc ([Slides][s_29], [Video][v_29]) | [Maria Vasilyeva] |
| 11:45 am | Mini tutorial: TaoADMM ([Slides][s_30], [Video][v_30]) | [Hansol Suh] |
| 12:00 pm | **Lunch** | |
| 1:00 pm | PETSc in the Ionosphere ([Slides][s_31], [Video][v_31]) | [Matt Young] |
| 1:30 pm | From the trenches: porting mef90 ([Slides][s_32], [Video][v_32]) | [Blaise Bourdin] |
| 2:00 pm | PERMON library for quadratic programming ([Slides][s_33], [Video][v_33]) | [Jakub Kruzik] |
| 2:22 pm | Distributed Machine Learning for Natural Hazard Applications Using PERMON ([Slides][s_34], [Video][v_34]) | [Marek Pecha] |
| 2:45 pm | Wrap up | |
| 3:00 pm | End of meeting | |

(newbie_developer_workshop)=

## Newbie Developer Workshop

Tuesday, June 6, at 9 am. Some of the topics to be covered:

- {any}`Exploring the developer documentation<ind_developers>`

- {any}`petsc_developers_communication_channels`

- {any}`PETSc Git branch organization<sec_integration_branches>`

- {any}`ch_contributing`

  - {any}`Starting a merge request (MR)<ch_developingmr>`
  - {any}`Submitting and monitoring an MR<ch_submittingmr>`
  - {any}`GitLab CI pipelines<pipelines>`
  - {any}`PETSc style guide<style>`

- Reviewing someone else's MR

- Adding new Fortran and Python function bindings

- PETSc's

  - {any}`configure system<ch_buildsystem>`
  - compiler system, and
  - {any}`testing system including the GitLab CI<test_harness>`

- Any other topics requested by potential contributors

## Abstracts

(luc-berger-vergiat)=

:::{topic} **Using Kokkos Ecosystem with PETSc on modern architectures**
**Luc Berger-Vergiat**

Sandia National Laboratories

Supercomputers increasingly rely on GPUs to achieve high
throughput while maintaining a reasonable power consumption. Consequently,
scientific applications are adapting to this new environment, and new
algorithms are designed to leverage the high concurrency of GPUs. In this
presentation, I will show how the Kokkos Ecosystem can help alleviate some
of the difficulties associated with support for multiple CPU/GPU
architectures. I will also show some results using the Kokkos and Kokkos
Kernels libraries with PETSc on modern architectures.
:::

(blaise-bourdin)=

:::{topic} **From the trenches: porting mef90**
**Blaise Bourdin**

McMaster University

mef90 is a distributed three-dimensional unstructured finite-element
implementation of various phase-field models of fracture. In this talk,
I will share the experience gained while porting mef90 from PETSc 3.3 to 3.18.
:::

(sara-calandrini)=

:::{topic} **A new non-hydrostatic capability for MPAS-Ocean**
**Sara Calandrini**, Darren Engwirda, Luke Van Roekel

Los Alamos National Laboratory

The Model for Prediction Across Scales-Ocean (MPAS-Ocean) is an
open-source, global ocean model and is one component within the Department
of Energy’s E3SM framework, which includes atmosphere, sea ice, and
land-ice models. In this work, a new formulation for the ocean model is
presented that solves the non-hydrostatic, incompressible Boussinesq
equations on unstructured meshes. The introduction of this non-hydrostatic
capability is necessary for the representation of fine-scale dynamical
processes, including resolution of internal wave dynamics and large eddy
simulations. Compared to the standard hydrostatic formulation,
a non-hydrostatic pressure solver and a vertical momentum equation are
added, where the PETSc (Portable Extensible Toolkit for Scientific
Computation) library is used for the inversion of a large sparse system for
the non-hydrostatic pressure. Numerical results comparing the solutions of
the hydrostatic and non-hydrostatic models are presented, and the parallel
efficiency and accuracy of the time-stepper are evaluated.
:::

(justin-chang)=

:::{topic} **AMD GPU benchmarking, documentation, and roadmap**
**Justin Chang**

AMD Inc.

This talk comprises three parts. First, we present an overview of some
relatively new training documentation, like the "AMD lab notes", to help
current and potential users of AMD GPUs get the best experience
out of their applications or algorithms. Second, we briefly discuss
implementation details regarding the PETSc HIP backend introduced into the
PETSc library late last year and present some performance benchmarking data
on some of the AMD hardware. Lastly, we give a preview of the upcoming
MI300 series APU and how software developers can prepare to leverage this
new type of accelerator.
:::

(mohamad-ibrahim-cheikh)=

:::{topic} **An Immersed Boundary method for Elastic Bodies Using PETSc**
**Mohamad Ibrahim Cheikh**, Konstantin Doubrovinski

Doubrovinski Lab, The University of Texas Southwestern Medical Center

This study presents a parallel implementation of an immersed boundary
method code using the PETSc distributed memory module. This work aims to simulate a complex developmental process that occurs in the
early stages of embryonic development, which involves the transformation of
the embryo into a multilayered and multidimensional structure. To
accomplish this, the researchers used the PETSc parallel module to solve
a linear system for the Eulerian fluid dynamics while simultaneously
coupling it with a deforming Lagrangian elastic body to model the
deformable embryonic tissue. This approach allows for a detailed simulation
of the interaction between the fluid and the tissue, which is critical for
accurately modeling the developmental process. Overall, this work
highlights the potential of the immersed boundary method and parallel
computing techniques for simulating complex physical phenomena.
:::

(jacob-faibussowitsch)=

:::{topic} **Transparent Asynchronous Compute Made Easy With PETSc**
**Jacob Faibussowitsch**

Argonne National Laboratory

Asynchronous GPU computing has historically been difficult to integrate scalably at the library level. We provide an update on recent work
implementing a fully asynchronous framework in PETSc. We give detailed
performance comparisons and provide a demo to showcase the proposed model's effectiveness
and ease of use.
:::

(daniel-finn)=

:::{topic} **PETSc-PIC: A Structure-Preserving Particle-In-Cell Method for Electrostatic Solves**
**Daniel Finn**

University at Buffalo

Numerical solutions to the Vlasov-Poisson equations have important
applications in the fields of plasma physics, solar physics, and cosmology.
The goal of this research is to develop a structure-preserving,
electrostatic and gravitational Vlasov-Poisson(-Landau) model using the
Portable, Extensible Toolkit for Scientific Computation (PETSc) and study
the presence of Landau damping in a variety of systems, such as
thermonuclear fusion reactors and galactic dynamics. The PETSc
Particle-In-Cell (PETSc-PIC) model is a highly scalable,
structure-preserving PIC method with multigrid capabilities. In the PIC
method, a hybrid discretization is constructed with a grid of finitely
supported basis functions to represent the electric, magnetic, and/or
gravitational fields, and a distribution of delta functions to represent
the particle field. Collisions are added to the formulation using
a particle-basis Landau collision operator recently added to the PETSc
library.
:::

(derek-gaston)=

:::{topic} **Multiscale, Multiphysics Simulation Through Application Composition Using MOOSE**
**Derek Gaston**

Idaho National Laboratory

Eight years ago, at the PETSc 20 meeting, I introduced the idea of
"Simplifying Multiphysics Through Application Composition" -- the idea
that physics applications can be built in such a way that they can
instantly be combined to tackle complicated multiphysics problems.
This talk will serve as an update on those plans. I will detail the
evolution of that idea, how we’re using it in practice, how well it’s
working, and where we’re going next. Motivating examples will be drawn
from nuclear engineering, and practical aspects, such as testing, will
be explored.
:::

(alex-grant)=

:::{topic} **High-order FEM implementation in AMReX using PETSc**
**Alex Grant**, Karthik Chockalingam, Xiaohu Guo

Science and Technology Facilities Council (STFC), UK

AMReX is a C++ block-structured framework for adaptive mesh refinement,
typically used for finite difference or finite volume codes. We describe
a first attempt at a finite element implementation in AMReX using PETSc.
AMReX splits the domain of uniform elements into rectangular boxes at each
refinement level, with higher levels overlapping rather than replacing
lower levels and with each level solved independently. AMReX boxes can be
cell-centered or nodal; we use cell-centered boxes to represent the geometry
and mesh and nodal boxes to identify nodes to constrain and store results
for visualization. We convert AMReX’s independent spatial indices into
a single global index, then use MATMPIAIJ to assemble the system matrix per
refinement level (see the illustrative sketch below). In an unstructured grid, isoparametric mapping is
required for each element; using a structured grid avoids both this
and indirect addressing, which provides significant potential performance
advantages. We have solved time-dependent parabolic equations and seen
performance gains compared to unstructured finite elements. Further
developments will include arbitrary higher-order schemes and
multi-level hp refinement with arbitrary hanging nodes. PETSc uses AMReX
domain decomposition to partition the matrix and right-hand vectors. For
each higher level, not all of the domain will be refined, but AMReX’s
indices cover the whole space, which poses an indexing challenge and can
lead to over-allocation of memory. It is still to be explored whether DM
data structures would provide a benefit over MATMPIAIJ.
:::

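A hedged petsc4py sketch of the indexing pattern described above: an application-defined local-to-global numbering is attached to a parallel AIJ matrix so that entries can be inserted with local indices. This is an editorial illustration under those assumptions, not code from the talk, and the numbering here is deliberately trivial.

```python
# Hedged sketch (not from the talk): attach an application-defined
# local-to-global numbering to an MPIAIJ matrix and insert entries
# using local indices.
import numpy as np
from petsc4py import PETSc

comm = PETSc.COMM_WORLD
rank, size = comm.getRank(), comm.getSize()
nlocal = 4                          # rows owned by this rank
n = nlocal * size                   # global size
rstart = rank * nlocal

# Trivial "application" numbering: this rank's own rows in global order.
local_to_global = np.arange(rstart, rstart + nlocal, dtype=PETSc.IntType)
lgmap = PETSc.LGMap().create(local_to_global, comm=comm)

A = PETSc.Mat().createAIJ([n, n], comm=comm)
A.setPreallocationNNZ(1)            # only diagonal entries are set below
A.setLGMap(lgmap, lgmap)
for lrow in range(nlocal):          # local indices; the map supplies global ones
    A.setValuesLocal([lrow], [lrow], [float(rank + 1)])
A.assemble()
A.view()                            # or run with -mat_view
```
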
(aidan-hamilton)=

:::{topic} **Scalable Riemann Solvers with the Discontinuous Galerkin Method for Hyperbolic Network Simulation**
**Aidan Hamilton**, Jing-Mei Qiu, Hong Zhang

University of Delaware

We develop highly efficient and effective computational algorithms
and simulation tools for fluid simulations on a network. The mathematical
models are a set of hyperbolic conservation laws on the edges of a network, as
well as coupling conditions on junctions of a network. For example, the
shallow water system, together with flux balance and continuity conditions
at river intersections, models water flows on a river network. The
computationally accurate and robust discontinuous Galerkin methods,
coupled with explicit strong-stability-preserving Runge-Kutta methods, are
implemented for simulations on network edges. Meanwhile, linear and
nonlinear scalable Riemann solvers are being developed and implemented at
network vertices. These network simulations result in tools built using
PETSc and DMNetwork software libraries for the scientific community in
general. Simulations of a shallow water system on a Mississippi
river network with over one billion network variables were performed on an
extreme-scale computer using up to 8,192 processors with optimal
parallel efficiency. Further potential applications include traffic flow
simulations on a highway network and blood flow simulations on an arterial
network, among many others.
:::

(zakariae-jorti)=

:::{topic} **A mimetic finite difference based quasi-static magnetohydrodynamic solver for force-free plasmas in tokamak disruptions**
**Zakariae Jorti**, Qi Tang, Konstantin Lipnikov, Xianzhu Tang

Los Alamos National Laboratory

Force-free plasmas are a good approximation in the low-beta case, where the
plasma pressure is tiny compared with the magnetic pressure. On time scales
long compared with the transit time of Alfvén waves, the evolution of
a force-free plasma is most efficiently described by a quasi-static
magnetohydrodynamic (MHD) model, which ignores the plasma inertia. In this
work, we consider a regularized quasi-static MHD model for force-free
plasmas in tokamak disruptions and propose a mimetic finite difference
(MFD) algorithm, which is targeted at applications such as the cold
vertical displacement event (VDE) of a major disruption in an ITER-like
tokamak reactor. In the case of whole device modeling, we further consider
the two sub-domains of the plasma region and wall region and their coupling
through an interface condition. We develop a parallel, fully implicit, and
scalable MFD solver based on PETSc and its DMStag data structure to discretize the five-field quasi-static perpendicular plasma dynamics
model on a 3D structured mesh. The MFD spatial discretization is coupled
with a fully implicit DIRK scheme. The full algorithm exactly preserves the
divergence-free condition of the magnetic field under a generalized Ohm’s
law. The preconditioner employed is a four-level fieldsplit preconditioner,
created by combining separate preconditioners for individual
fields, that calls multigrid or direct solvers for sub-blocks or exact
factorization on the separate fields. The numerical results confirm the
divergence-free constraint is strongly satisfied and demonstrate the
performance of the fieldsplit preconditioner and overall algorithm. The
simulation of ITER VDE cases over the actual plasma current diffusion time
is also presented.
:::

(jakub-kruzik)=

:::{topic} **PERMON library for quadratic programming**
**Jakub Kruzik**, Marek Pecha, David Horak

VSB - Technical University of Ostrava, Czechia

PERMON (Parallel, Efficient, Robust, Modular, Object-oriented, Numerical)
is a library based on PETSc for solving quadratic programming (QP)
problems. We will present PERMON usage on our implementation of the FETI
(finite element tearing and interconnecting) method. This FETI
implementation involves a chain of QP transformations, such as
dualization, which simplifies a given QP. We will also discuss some useful
options, like viewing Karush-Kuhn-Tucker (optimality) conditions for each
QP in the chain. Finally, we will showcase some QP applications solved by
PERMON, such as the solution of contact problems for hydro-mechanical
problems with discrete fracture networks or the solution of support vector
machines using the PermonSVM module.
:::

(vanessa-lopez-marrero)=

:::{topic} **Towards enabling digital twins capabilities for a cloud chamber**
**Vanessa Lopez-Marrero**, Kwangmin Yu, Tao Zhang, Mohammad Atif, Abdullah Al Muti Sharfuddin, Fan Yang, Yangang Liu, Meifeng Lin, Foluso Ladeinde, Lingda Li

Brookhaven National Laboratory

Particle-resolved direct numerical simulations (PR-DNS), which resolve not
only the smallest turbulent eddies but also track the development and
the motion of individual particles, are an essential tool for studying
aerosol-cloud-turbulence interactions. For instance, PR-DNS may complement
experimental facilities designed to study key physical processes in
a controlled environment and therefore serve as digital twins for such
cloud chambers. In this talk, we will present our ongoing work aimed at
enabling the use of PR-DNS for this purpose. We will describe the physical
model used, which consists of a set of fluid dynamics equations for
air velocity, temperature, and humidity coupled with a set of equations
for particle (i.e., droplet) growth/tracing. The numerical method used to
solve the model, which employs PETSc solvers in its implementation, will be
discussed, as well as our current efforts to assess the performance and
scalability of the numerical solver.
:::

(david-may)=

:::{topic} **PETSc ROCKS**
**David May**

University of California, San Diego

The field of Geodynamics is concerned with understanding
the deformation history of the solid Earth over time scales of millions to
billions of years. The infeasibility of extracting a spatially and
temporally complete geological record based on rocks that are currently
exposed at the surface of the Earth compels many geodynamicists to employ
computational simulations of geological processes.

In this presentation I will discuss several geodynamic software packages
which utilize PETSc. I intend to highlight how PETSc has played an
important role in enabling and advancing the state of the art in geodynamic
software. I will also summarize my own experiences and observations of how
geodynamics-specific functionality has driven the
development of new general-purpose PETSc functionality.
:::

(heeho-park)=

:::{topic} **PETSc Newton Trust-Region for Simulating Large-scale Engineered Subsurface Systems with PFLOTRAN**
**Heeho Park**, Glenn Hammond, Albert Valocchi

Sandia National Laboratories

Modeling large-scale engineered subsurface systems entails significant
additional numerical challenges. For a nuclear waste repository, the
challenges arise from: (a) the need to accurately represent both the waste
form processes and shafts, tunnels, and barriers at the small spatial scale
and the large-scale transport processes throughout geological formations;
(b) the strong contrast in material properties such as porosity and
permeability, and the nonlinear constitutive relations for multiphase flow;
(c) the decay of high-level nuclear waste causing nearby water to boil off
into steam, leading to dry-out. These can lead to an ill-conditioned
Jacobian matrix and non-convergence with Newton’s method due to
discontinuous nonlinearity in constitutive models.

We apply the open-source simulator PFLOTRAN, which employs a finite-volume
discretization and uses the PETSc parallel framework. We implement within
PETSc the general-purpose nonlinear solvers Newton trust-region dogleg
Cauchy (NTRDC) and Newton trust-region (NTR) to demonstrate the
effectiveness of these advanced solvers. The results demonstrate speed-ups
compared to the default solvers of PETSc and show completed simulations
that had never finished with the default solvers.

SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525.
:::

(ashish-patel)=

:::{topic} **Scalable cloud-native thermo-mechanical solvers using PETSc**
**Ashish Patel**, Jeremy Theler, Francesc Levrero-Florencio, Nabil Abboud, Mohammad Sarraf Joshaghani, Scott McClennan

Ansys, Inc.

This talk presents how the Ansys OnScale team uses PETSc to
develop finite element-based thermo-mechanical solvers for scalable
nonlinear simulations on the cloud. We will first provide an overview of
features available in the solver and then discuss how some of the PETSc
objects, like DMPlex and TS, have helped us speed up our development
process. We will also talk about the workarounds we have incorporated to
address the current limitations of some of the functions from DMPlex for
our use cases involving multi-point constraints and curved elements.
Finally, we demonstrate how PETSc’s linear solvers scale on multi-node
cloud instances.
:::

(spencer-patty)=

:::{topic} **Intel oneAPI Math Kernel Library, what’s new and what’s next?**
**Spencer Patty**

Intel Corporation

This talk provides an overview of the Intel® oneAPI Math Kernel Library
(oneMKL) product and software for supporting optimized math routines for both Intel
CPUs and GPUs. Given that PETSc already utilizes several BLAS/LAPACK/Sparse
BLAS routines from oneMKL for Intel CPUs, and as part of the Aurora project
with Argonne, we discuss the use of OpenMP offload APIs for Intel GPUs.
We explore software and hardware improvements for better sparse linear
algebra performance and have an informal discussion of how to further
support the PETSc community.
:::

(marek-pecha)=

:::{topic} **Distributed Machine Learning for Natural Hazard Applications Using PERMON**
**Marek Pecha**, David Horak, Richard Tran Mills, Zachary Langford

VSB – Technical University of Ostrava, Czechia

We will present a software solution for distributed machine learning
supporting computation on multiple GPUs running on top of the PETSc
framework, which we will demonstrate in applications related to natural
hazard localization and detection employing supervised uncertainty
modeling. It is called PERMON and is designed for convex optimization
using quadratic programming, and its extension PermonSVM implements
maximal-margin classifier approaches associated with support vector
machines (SVMs). Although deep learning (DL) has become popular in recent
years, SVMs are still applicable. However, unlike DL, the SVM approach requires
additional feature engineering or feature selection. We will present our
workflow and show how to achieve reasonable models for the application
related to wildfire localization in Alaska.
:::

(joseph-pusztay)=

:::{topic} **Landau Collisions in the Particle Basis with PETSc-PIC**
**Joseph Pusztay**, Matt Knepley, Mark Adams

University at Buffalo

The kinetic description of plasma encompasses the fine-scale interaction of
the various bodies of which it is composed, and applies to a litany of
experiments ranging from laboratory magnetically confined fusion
plasmas to the scale of the solar corona. Of great import to these
descriptions are collisions in the grazing limit, which transfer momentum
between components of the plasma. Until recently, these have best been
described conservatively by finite element discretizations of the Landau
collision integral. In recent years a particle discretization has been
proven to preserve the appropriate eigenfunctions of the system, as well as
physically relevant quantities. I present here the recent work on a purely
particle-discretized Landau collision operator which preserves mass,
momentum, and energy, with associated accuracy benchmarks in PETSc.
:::

(jose-e-roman)=

:::{topic} **Experiences in solving nonlinear eigenvalue problems with SLEPc**
**Jose E. Roman**

Universitat Politècnica de València

One of the unique features of SLEPc is the module for the general nonlinear
eigenvalue problem (NEP), where we want to compute a few eigenvalues and
corresponding eigenvectors of a large-scale parameter-dependent matrix
T(lambda). In this talk, we will illustrate the use of NEP in the context
of two applications, one of them coming from the characterization of
resonances in nanophotonic devices, and the other one from a problem in
aeroacoustics.
:::

(barry-smith)=

:::{topic} **Some thoughts on the future of PETSc**
**Barry Smith**

Flatiron Institute

How will PETSc evolve and grow in the future? How can PETSc algorithms and
simulations be integrated into the emerging world of machine learning and
deep neural networks? I will provide an informal discussion of these topics
and my thoughts.
:::

(tim-steinhoff)=

:::{topic} **Software Development and Deployment Including PETSc**
**Tim Steinhoff**, Volker Jacht

Gesellschaft für Anlagen- und Reaktorsicherheit (GRS), Germany

Once it is decided that PETSc shall handle certain numerical subtasks in
your software, the question may arise of how to smoothly incorporate PETSc
into the overall software development and deployment processes. In this
talk, we present our approach to handling such a situation for the code
family AC2, which is developed and distributed by GRS. AC2 is used to
simulate the behavior of nuclear reactors during operation, transients,
design-basis and beyond-design-basis accidents, up to radioactive releases
to the environment. The talk addresses our experiences, what challenges had
to be overcome, and how we make use of GitLab, CMake, and Docker techniques
to establish a clean incorporation of PETSc into our software development
cycle.
:::

(hansol-suh)=

:::{topic} **TaoADMM**
**Hansol Suh**

Argonne National Laboratory

In this tutorial, we will give an introduction to the ADMM algorithm in
TAO. It will include a walkthrough of the ADMM algorithm with a real-life
example and tips on setting up the framework to solve problems with ADMM in PETSc/TAO.
:::

(maria-vasilyeva)=

:::{topic} **Numerical upscaling of network models using PETSc**
**Maria Vasilyeva**

Texas A&M University-Corpus Christi

Multiphysics models on large networks are used in many applications, for
example, pore network models in reservoir simulation, epidemiological
models of disease spread, ecological models of multispecies interaction,
medical applications such as multiscale multidimensional simulations of
blood flow, etc. This work presents the construction of the numerical
upscaling and multiscale method for network models. An accurate
coarse-scale approximation is generated by solving local problems in
sub-networks. Numerical implementation of the network model is performed
based on the PETSc DMNetwork framework. Results are presented for square
and random heterogeneous networks generated by OpenPNM.
:::

(berend-van-wachem)=

:::{topic} **MultiFlow: A coupled balanced-force framework to solve multiphase flows in arbitrary domains**
**Berend van Wachem**, Fabien Evrard

University of Magdeburg, Germany

Since 2000, we have been working on a finite-volume numerical framework,
“MultiFlow”, to predict multiphase flows in arbitrary domains by solving
various flavors of the incompressible and compressible Navier-Stokes
equations using PETSc. This framework enables the simulation of creeping,
laminar, and turbulent flows with droplets and/or particles at various
scales. It relies on a collocated variable arrangement of the unknown
variables and momentum-weighted interpolation to determine the fluxes at
the cell faces to couple velocity and pressure. To maximize robustness, the
governing flow equations are solved in a coupled fashion, i.e., as part of
a single equation system involving all flow variables. Various modules are
available within the code in addition to its core flow solver, allowing it to
model interfacial and particulate flows at various flow regimes and scales.
The framework heavily relies on the PETSc library not only to solve the
system of governing equations but also for the handling of unknown
variables, parallelization of the computational domain, and exchange of
data over processor boundaries. We are now in the 3rd generation of our
code, currently using a combination of DMDA and DMPlex with the DMForest/p4est
frameworks to allow for adaptive octree refinement of the
computational mesh. In this contribution, we will present the details of
the discretization and the parallel implementation of our framework and
describe its interconnection with the PETSc library. We will then present
some applications of our framework, simulating multiphase flows at various
scales, flow regimes, and resolutions. During this contribution, we will
also discuss our framework's challenges and future objectives.
:::

(matt-young)=

:::{topic} **PETSc in the Ionosphere**
**Matt Young**

University of New Hampshire

A planet's ionosphere is the region of its atmosphere where a fraction
of the constituent atoms or molecules have separated into positive ions and
electrons. Earth's ionosphere extends from roughly 85 km during the day
(higher at night) to the edge of space. This partially ionized regime
exhibits collective behavior and supports electromagnetic phenomena that do
not exist in the neutral (i.e., un-ionized) atmosphere. Furthermore, the
abundance of neutral atoms and molecules leads to phenomena that do not
exist in the fully ionized space environment. In a relatively narrow
altitude range of Earth's ionosphere called the "E region", electrons
behave as typical charged particles -- moving in response to combined
electric and magnetic fields -- while ions collide too frequently with
neutral molecules to respond to the magnetic field. This difference leads
to the Farley-Buneman instability when the local electric field is strong
enough. The Farley-Buneman instability regularly produces irregularities in
the charged-particle densities that are strong enough to reflect radio
signals. Recent research suggests that fully developed turbulent
structures can disrupt GPS communication.

The Electrostatic Parallel Particle-in-Cell (EPPIC) numerical simulation
self-consistently models instability growth and evolution in the E-region
ionosphere. The simulation includes a hybrid mode that treats electrons as
a fluid and treats ions as particles. The particular fluid electron model
requires the solution of an elliptic partial differential equation for the
electrostatic potential at each time step, which we represent as a linear
system that the simulation solves with PETSc. This presentation will
describe the original development of the 2D hybrid simulation, previous
results, recent efforts to extend to 3D, and implications for modeling GPS
scintillation.

:::

(chonglin-zhang)=

:::{topic} **XGCm: An Unstructured Mesh Gyrokinetic Particle-in-cell Code for Exascale Fusion Plasma Simulations**
**Chonglin Zhang**, Cameron W. Smith, Mark S. Shephard

Rensselaer Polytechnic Institute (RPI)

We report the development of XGCm, a new distributed unstructured mesh
gyrokinetic particle-in-cell (PIC) code, short for x-point included
gyrokinetic code mesh-based. The code adopts the physical algorithms of the
well-established XGC code. It is intended as a testbed for experimenting
with new numerical and computational algorithms, which can eventually be
adopted in XGC and other PIC codes. XGCm is developed on top of several
open-source libraries, including Kokkos, PETSc, Omega, and PUMIPic. Omega
and PUMIPic rely on Kokkos to interact with the GPU accelerator, while
PETSc solves the gyrokinetic Poisson equation on either CPU or GPU. We
first discuss the numerical algorithms of our mesh-centric approach for
performing PIC calculations. We then present a code validation study using
the cyclone base case with ion temperature gradient turbulence (case 5 from
Burckel et al., Journal of Physics: Conference Series 260, 2010, 012006).
Finally, we discuss the performance of XGCm and present weak scaling
results using up to the full system (27,648 GPUs) of the Oak Ridge National
Laboratory’s Summit supercomputer. Overall, XGCm executes all PIC
operations on the GPU accelerators and exhibits good performance and
portability.
:::

(hong-zhang-ms)=

:::{topic} **PETSc DMNetwork: A Library for Scalable Network PDE-Based Multiphysics Simulation**
**Hong Zhang (Ms.)**

Argonne National Laboratory, Illinois Institute of Technology

We present DMNetwork, a high-level set of routines included in the PETSc
library for the simulation of multiphysics phenomena over large-scale
networked systems. The library aims at applications with networked
structures like those in electrical, water, and traffic
distribution systems. DMNetwork provides data and topology management,
parallelization for multiphysics systems over a network, and hierarchical
and composable solvers to exploit the problem structure. DMNetwork eases
the simulation development cycle by providing the necessary infrastructure
to define and query the network components through simple abstractions.
:::

(hui-zhou)=

:::{topic} **MPI Multiply Threads**
**Hui Zhou**

Argonne National Laboratory

In the traditional MPI+Thread programming paradigm, MPI and OpenMP each
form their own parallelization. MPI is unaware of the thread
context. The requirement of thread safety and message ordering forces the MPI
library to blindly add critical sections, unnecessarily serializing the
code. On the other hand, OpenMP cannot use MPI for inter-thread
communications. Developers often need to hand-roll algorithms for
collective operations and non-blocking synchronizations.

MPICH recently added a few extensions to address the root issues in
MPI+Thread. The first extension, MPIX stream, allows applications to
explicitly pass the thread context into MPI. The second extension, thread
communicator, allows individual threads in an OpenMP parallel region to use
MPI for inter-thread communications. In particular, this allows an OpenMP
program to use PETSc within a parallel region.

Instead of MPI+Thread, we refer to this new pattern as MPI x Thread.
:::

(junchao-zhang)=

:::{topic} **PETSc on the GPU**
**Junchao Zhang**

Argonne National Laboratory

In this mini-tutorial, we will briefly introduce the GPU backends of PETSc and how to configure, build, run,
and profile PETSc on GPUs. We will also talk about how to port your PETSc code to GPUs (see the sketch below
for a flavor of the runtime options involved).
:::

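As a small, hedged illustration of the runtime-options workflow mentioned above (an editorial sketch, not tutorial material), the following petsc4py script leaves the vector type to the options database, so flags such as `-vec_type cuda`, `-vec_type hip`, or `-vec_type kokkos` move the work to a GPU backend when PETSc was configured with the corresponding support, and `-log_view` reports where the time is spent.

```python
# Minimal sketch (assumes a GPU-enabled PETSc build); run e.g.
#   python vec_gpu.py -vec_type cuda -log_view
import sys
import petsc4py
petsc4py.init(sys.argv)        # forward command-line options to PETSc
from petsc4py import PETSc

n = 1 << 20
x = PETSc.Vec().create()
x.setSizes(n)
x.setFromOptions()             # honors -vec_type cuda/hip/kokkos/standard
y = x.duplicate()

x.set(1.0)
y.set(2.0)
y.axpy(3.0, x)                 # y <- y + 3 x, executed on the selected backend
PETSc.Sys.Print(f"||y|| = {y.norm():.3e}")
```
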
(hong-zhang-mr)=

:::{topic} **PETSc and PyTorch Interoperability**
**Hong Zhang (Mr.)**

Argonne National Laboratory

In this mini-tutorial, we will introduce how to convert between PETSc vectors/matrices and PyTorch tensors;
how to generate the Jacobian, or the action of the Jacobian, with PyTorch and use it in PETSc; and how to use
PETSc and PyTorch for solving ODEs and training neural ODEs. (A small conversion sketch follows below.)
:::

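The following is a minimal, CPU-only sketch of the vector/tensor conversion mentioned above; it is an editorial illustration (not the tutorial's notebook) and assumes a real, double-precision PETSc build so that the NumPy dtypes line up. Both directions go through NumPy, and `torch.from_numpy` shares memory with the PETSc array instead of copying it.

```python
# Illustrative PETSc <-> PyTorch exchange on the CPU (not tutorial code).
import torch
from petsc4py import PETSc

# PETSc Vec -> PyTorch tensor: zero-copy view of the Vec's local array.
x = PETSc.Vec().createSeq(8)
x.set(1.5)
t = torch.from_numpy(x.getArray())
t.mul_(2.0)                       # in-place edits are visible on the PETSc side
print(x.getArray())               # all entries are now 3.0

# PyTorch tensor -> PETSc Vec: wrap a contiguous CPU tensor's buffer.
w = torch.linspace(0.0, 1.0, 8, dtype=torch.float64)
y = PETSc.Vec().createWithArray(w.numpy())
print(y.norm())

# A dense Jacobian from torch autograd, copied into a PETSc matrix.
def f(z):
    return z ** 2 + z.sum()

J = torch.autograd.functional.jacobian(f, w)      # 8 x 8 tensor
A = PETSc.Mat().createDense((8, 8))
A.setValues(range(8), range(8), J.numpy().ravel())
A.assemble()
A.view()
```
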
(stefano-zampini)=

:::{topic} **petsc4py**
**Stefano Zampini**

King Abdullah University of Science and Technology (KAUST)

In this mini-tutorial, we will introduce the Python bindings of PETSc (a minimal example is sketched below).
:::

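To set expectations for how petsc4py code looks, here is a small, self-contained sketch (an editorial illustration, not part of the tutorial): it assembles a 1D Laplacian in parallel and solves it with a Krylov method, with class and method names mirroring the C API (Mat, Vec, KSP).

```python
# Minimal petsc4py sketch (illustrative only). Run e.g.
#   mpiexec -n 2 python poisson1d.py -ksp_type cg -pc_type jacobi -ksp_monitor
import sys
import petsc4py
petsc4py.init(sys.argv)
from petsc4py import PETSc

n = 100
A = PETSc.Mat().createAIJ([n, n])
A.setPreallocationNNZ(3)

rstart, rend = A.getOwnershipRange()
for i in range(rstart, rend):          # each rank fills only its own rows
    if i > 0:
        A.setValue(i, i - 1, -1.0)
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
    A.setValue(i, i, 2.0)
A.assemble()

x, b = A.createVecs()
b.set(1.0)

ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setFromOptions()                   # -ksp_type, -pc_type, ... apply here
ksp.solve(b, x)
PETSc.Sys.Print(f"iterations: {ksp.getIterationNumber()}, "
                f"residual norm: {ksp.getResidualNorm():.2e}")
```
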
(matt-knepley)=

:::{topic} **DMPlex**
**Matt Knepley**

University at Buffalo

In this mini-tutorial, we will introduce the DMPlex class in PETSc (a small mesh-creation sketch follows below).
:::

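As a hedged taste of the class, this petsc4py sketch (an editorial illustration, not tutorial material) builds a small unstructured box mesh and queries its cell and vertex strata; `-dm_view` and `-dm_refine` are standard PETSc options.

```python
# Illustrative DMPlex sketch: create a 2D simplicial box mesh and inspect it.
import sys
import petsc4py
petsc4py.init(sys.argv)
from petsc4py import PETSc

# 4x4 faces per side, triangulated (simplex=True).
dm = PETSc.DMPlex().createBoxMesh([4, 4], simplex=True)
dm.setFromOptions()                     # e.g. -dm_refine 1, -dm_distribute
dm.view()                               # or pass -dm_view on the command line

cStart, cEnd = dm.getHeightStratum(0)   # cells are the height-0 points
vStart, vEnd = dm.getDepthStratum(0)    # vertices are the depth-0 points
PETSc.Sys.Print(f"cells: {cEnd - cStart}, vertices: {vEnd - vStart}")
```
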
(id2)=

:::{topic} **DMSwarm**
**Joseph Pusztay**

University at Buffalo

In this mini-tutorial, we will introduce the DMSwarm class in PETSc.
:::

[c_04]: https://petsc.gitlab.io/annual-meetings/2023/slides/HongZhangMr.ipynb
[s_00]: https://petsc.gitlab.io/annual-meetings/2023/tutorials/petsc_annual_meeting_2023_tutorial.pdf
[s_01]: https://petsc.gitlab.io/annual-meetings/2023/slides/BarrySmith.pdf
[s_02]: https://petsc.gitlab.io/annual-meetings/2023/slides/SaraCalandrini.pdf
[s_03]: https://petsc.gitlab.io/annual-meetings/2023/slides/BerendvanWachem.pdf
[s_04]: https://petsc.gitlab.io/annual-meetings/2023/slides/HongZhangMr.pdf
[s_06]: https://petsc.gitlab.io/annual-meetings/2023/slides/DavidMay.pdf
[s_07]: https://petsc.gitlab.io/annual-meetings/2023/slides/TimSteinhoff.pdf
[s_08]: https://petsc.gitlab.io/annual-meetings/2023/slides/DerekGaston.pdf
[s_09]: https://petsc.gitlab.io/annual-meetings/2023/slides/HeehoPark.pdf
[s_10]: https://petsc.gitlab.io/annual-meetings/2023/slides/JoseERoman.pdf
[s_11]: https://petsc.gitlab.io/annual-meetings/2023/slides/HuiZhou.pdf
[s_12]: https://petsc.gitlab.io/annual-meetings/2023/slides/JunchaoZhang.pdf
[s_13]: https://petsc.gitlab.io/annual-meetings/2023/slides/JustinChang.pdf
[s_14]: https://petsc.gitlab.io/annual-meetings/2023/slides/StefanoZampini.pdf
[s_15]: https://petsc.gitlab.io/annual-meetings/2023/slides/JacobFaibussowitsch.pdf
[s_16]: https://petsc.gitlab.io/annual-meetings/2023/slides/LucBerger-Vergiat.pdf
[s_17]: https://petsc.gitlab.io/annual-meetings/2023/slides/SpencerPatty.pdf
[s_20]: https://petsc.gitlab.io/annual-meetings/2023/slides/ZakariaeJorti.pdf
[s_21]: https://petsc.gitlab.io/annual-meetings/2023/slides/AlexGrant.pdf
[s_22]: https://petsc.gitlab.io/annual-meetings/2023/slides/MohamadIbrahimCheikh.pdf
[s_23]: https://petsc.gitlab.io/annual-meetings/2023/slides/HongZhangMs.pdf
[s_24]: https://petsc.gitlab.io/annual-meetings/2023/slides/ChonglinZhang.pdf
[s_25]: https://petsc.gitlab.io/annual-meetings/2023/slides/DanielFinn.pdf
[s_26]: https://petsc.gitlab.io/annual-meetings/2023/slides/JosephPusztay.pdf
[s_27]: https://petsc.gitlab.io/annual-meetings/2023/slides/JosephPusztayDMSwarm.pdf
[s_28]: https://petsc.gitlab.io/annual-meetings/2023/slides/AidanHamilton.pdf
[s_29]: https://petsc.gitlab.io/annual-meetings/2023/slides/MariaVasilyeva.pdf
[s_30]: https://petsc.gitlab.io/annual-meetings/2023/slides/HansolSuh.pdf
[s_31]: https://petsc.gitlab.io/annual-meetings/2023/slides/MattYoung.pdf
[s_32]: https://petsc.gitlab.io/annual-meetings/2023/slides/BlaiseBourdin.pdf
[s_33]: https://petsc.gitlab.io/annual-meetings/2023/slides/JakubKruzik.pdf
[s_34]: https://petsc.gitlab.io/annual-meetings/2023/slides/MarekPecha.pdf
[v_00]: https://youtu.be/rm34jR-p0xk
[v_01]: https://youtu.be/vqx6b3Hg_6k
[v_02]: https://youtu.be/pca0jT86qxU
[v_03]: https://youtu.be/obdKq9SBpfw
[v_04]: https://youtu.be/r_icrhAbmSQ
[v_06]: https://youtu.be/0BplD93cSe8
[v_07]: https://youtu.be/vENWhqp7XlI
[v_08]: https://youtu.be/aHL4FIu_q6k
[v_10]: https://youtu.be/2qhtMsvYw4o
[v_11]: https://youtu.be/plfB7XVoqSQ
[v_12]: https://youtu.be/8tmswLh3ez0
[v_14]: https://youtu.be/hhe0Se4pkSg
[v_15]: https://youtu.be/IbjboeTYuAE
[v_17]: https://youtu.be/Baz4GVp4gQc
[v_18]: https://youtu.be/jURFyoONRko
[v_20]: https://youtu.be/k8PozEb4q40
[v_21]: https://youtu.be/0L9boKxXPmA
[v_22]: https://youtu.be/e101L03bO8A
[v_23]: https://youtu.be/heWln8ZIrHc
[v_24]: https://youtu.be/sGP_9JStYR8
[v_25]: https://youtu.be/b-V_j4Vs2OA
[v_26]: https://youtu.be/b-V_j4Vs2OA?t=1200
[v_27]: https://youtu.be/FaAVV8-lnZI
[v_28]: https://youtu.be/Ys0CZLha1pA
[v_29]: https://youtu.be/Br-9WgvPG7Q
[v_30]: https://youtu.be/8WvZ9ggB3x0
[v_31]: https://youtu.be/hS3nOmX_g8I
[v_32]: https://youtu.be/mfdmVbHsYK0
[v_33]: https://youtu.be/2dC_NkGBBnE
[v_34]: https://youtu.be/2dC_NkGBBnE?t=1194