Parallel Session T

Chair: Changhui Tan, Room SAS 1220, 10:30-12:00 November 13

Hangjie Ji 10:30-10:55

Title: A mathematical model for wetting and drying in filter membranes
Abstract: A filter membrane may be used frequently during its lifetime, undergoing several cycles of wetting and drying in the porous medium. Over these cycles, the concentration distribution of molecules or contaminants and the medium morphology evolve, and as a consequence the filter performance ultimately deteriorates. In this work, we formulate a coupled mathematical model for consecutive wetting and drying dynamics in a porous medium. Our model accounts for the internal morphology of the porous medium (internal structure, porosity, etc.), the contaminant deposition, and the evolution of dry/wet interfaces due to evaporation. The model provides insight into the overall evolution of the porous medium over cycles of wetting and drying and predicts when the filter should be discarded based on its optimal performance.
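
As a rough illustration of how degradation over repeated cycles can be tracked numerically, the short Python sketch below steps a single porosity variable through alternating wetting and drying phases. It is our own toy construction, not the speaker's model: the deposition rates, the Kozeny-Carman-style permeability proxy, and the replacement threshold are all illustrative assumptions.

    # Toy sketch of filter degradation over wetting/drying cycles.
    # All rates, the permeability proxy, and the 50% replacement
    # criterion are illustrative assumptions, not the speaker's model.
    phi = 0.60             # initial porosity of the membrane (assumed)
    deposit_rate = 0.015   # pore-space fraction lost per wetting phase (assumed)
    k0 = phi**3 / (1.0 - phi) ** 2  # Kozeny-Carman-style permeability proxy

    for cycle in range(1, 51):
        # Wetting phase: contaminants deposit in the pores, reducing porosity.
        phi -= deposit_rate * phi
        # Drying phase: the dry/wet interface recedes under evaporation and is
        # assumed to leave behind deposits costing a further small fraction.
        phi -= 0.2 * deposit_rate * phi
        k = phi**3 / (1.0 - phi) ** 2
        if k < 0.5 * k0:   # discard once the permeability proxy halves
            print(f"Replace filter after cycle {cycle}: k/k0 = {k / k0:.2f}")
            break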

Henry Shugart 11:00-11:15

Title: A Primal-Dual Method for Topological Changes in Adversarial Classification
Abstract: While the robustness of machine learning algorithms to data-perturbing adversaries has been an important topic in the literature, our understanding of how to achieve robustness remains limited. We consider an adversary with the power to perturb the input data within an $\varepsilon$-neighborhood. In this work, we describe an algorithm to construct an optimal classifier for every adversarial power $\varepsilon > 0$. Prior work has shown that a set of uncoupled ODEs governs the evolution of the optimal adversarial classifier in one dimension for small enough $\varepsilon$. We find that the optimal classifier is governed by the same ODEs except at a finite number of instantaneous changes in topology or discontinuous movements of the endpoints of the classification intervals. We rely on a novel primal-dual method to prove the optimality of our algorithm.
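
For context, one standard way to formalize such an $\varepsilon$-bounded adversary (a common formulation from the adversarial-classification literature, stated here as background rather than taken from the talk) is through the robust risk of a classification set $A$, the region assigned label $1$:

$$ R_\varepsilon(A) = w_0\, \rho_0\!\left(A^{\oplus\varepsilon}\right) + w_1\, \rho_1\!\left((A^c)^{\oplus\varepsilon}\right), $$

where $\rho_0, \rho_1$ are the class-conditional distributions with weights $w_0, w_1$, and $A^{\oplus\varepsilon}$ is the $\varepsilon$-dilation of $A$ (all points within distance $\varepsilon$ of $A$). A point is misclassified whenever the adversary can push it within distance $\varepsilon$ of the wrong region, and the optimal classifier minimizes $R_\varepsilon$ over sets $A$.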

Christopher Leonard 11:20-11:35

Title: Training neural networks to learn the dynamics of partial differential equations
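
As a rough illustration of this data-driven time-stepping idea, the sketch below (our own construction, not the speaker's code) trains a small 1D convolutional network to map a solution snapshot $u(t)$ on a periodic grid to $u(t+\Delta t)$; the training PDE (a heat equation stepped by finite differences), the network sizes, and all hyperparameters are assumptions made for illustration. The small convolutional receptive field is what makes the learning local: each updated value depends only on nearby grid values.

    # Sketch: learn a PDE time-stepper u(t) -> u(t + dt) from snapshot pairs.
    # The heat-equation data generator, network, and hyperparameters are
    # illustrative assumptions, not the method presented in the talk.
    import torch
    import torch.nn as nn

    nx, dt, nu = 64, 1e-4, 1.0            # grid size, time step, diffusivity
    x = torch.linspace(0, 1, nx + 1)[:-1]

    def heat_step(u, dx=1.0 / nx):
        # One explicit finite-difference step of u_t = nu * u_xx (training data).
        lap = (u.roll(-1, -1) - 2 * u + u.roll(1, -1)) / dx**2
        return u + dt * nu * lap

    # Build snapshot pairs (u^n, u^{n+1}) from random smooth initial conditions.
    inputs, targets = [], []
    for _ in range(64):
        k = torch.randint(1, 5, (1,)).item()
        u = torch.sin(2 * torch.pi * k * x) + 0.1 * torch.randn(nx)
        for _ in range(20):
            un = heat_step(u)
            inputs.append(u)
            targets.append(un)
            u = un
    U = torch.stack(inputs).unsqueeze(1)   # shape (N, 1, nx)
    V = torch.stack(targets).unsqueeze(1)

    # A small receptive field means each output depends only on local data,
    # so the learned stepper applies to any initial condition on the grid.
    net = nn.Sequential(
        nn.Conv1d(1, 16, 5, padding=2, padding_mode="circular"), nn.Tanh(),
        nn.Conv1d(16, 1, 5, padding=2, padding_mode="circular"),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for epoch in range(200):
        opt.zero_grad()
        loss = ((net(U) - V) ** 2).mean()
        loss.backward()
        opt.step()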
Abstract: Neural networks have recently become a very prominent tool for the study of partial differential equations (PDEs). This is partially due to the success of physics-informed neural networks (PINN), where a neural network is trained to model the solution of a PDE by satisfying all the physical constraints put on the equation. However, what do we do if we are uncertain of the equations of motion? In this talk, we present a data-driven method to learn the dynamics of PDEs with little knowledge of the underlying physics. Given solution data from the PDE, we train a neural network to learn how the PDE evolves from one time step to the next. Since our method learns from local data instead of global data, it can simulate the solution from a large set of initial conditions. We show how our method can outperform classical time-stepping methods, such as the finite volume method, in both accuracy and run time on problems in fluid dynamics.