Numerical methods are preconditions of computer simulations: the latter would be impossible without the former. Numerical methods are used to solve the mathematical equations underlying computer simulations, especially when those equations are not analytically tractable or would take too long to solve by other means. In other words, numerical methods are a necessary medium between the theoretical model and the simulation. Is this medium transparent, or does it add a representational layer that differs from the theoretical model?
If numerical methods are not transparent, does the plurality of methods mean that each of them must be associated with a specific definition of computer simulation? Is it possible to provide a single definition of computer simulation?
Besides, numerical methods must satisfy constraints that are specific to the computational architecture (parallel, sequential, digital, or analog) and to the particular features of the machine (computational power, storage, system resources). To what extent do these constraints threaten the accuracy of the representation of the system under study that simulation models provide?
Another set of questions relates to the plurality of numerical methods, which philosophers usually underestimate. To mention only a few: methods for solving first-order or second-order differential equations, such as Euler's, Runge-Kutta's, Adams-Moulton's, and Numerov's; the finite difference method; the finite element method; the Monte Carlo method; the Metropolis algorithm; particle methods; etc. For a given problem, on what grounds does one choose one numerical method rather than another? Does the choice depend on the nature of the problem? In some scientific disciplines, the Monte Carlo method is preferred for providing reference results (benchmarks), and therefore for validating differential equation-based simulations. How can the special functions attributed to some numerical methods be explained?
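As a minimal illustration of the trade-offs involved in choosing among such methods, one can compare Euler's method with the classical fourth-order Runge-Kutta method on a simple test equation (a sketch for illustration only; the equation, step count, and function names here are chosen for the example, not drawn from the workshop material):

```python
import math

def euler(f, y0, t0, t1, n):
    """Euler's method: one derivative evaluation per step."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def rk4(f, y0, t0, t1, n):
    """Classical fourth-order Runge-Kutta: four evaluations per step."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# Test problem: dy/dt = -y with y(0) = 1, whose exact solution is exp(-t).
f = lambda t, y: -y
exact = math.exp(-1.0)
err_euler = abs(euler(f, 1.0, 0.0, 1.0, 100) - exact)
err_rk4 = abs(rk4(f, 1.0, 0.0, 1.0, 100) - exact)
# With the same step size, RK4 is far more accurate than Euler,
# at the cost of four derivative evaluations per step instead of one.
```

The same problem could equally be handled by other methods from the list above; which one is "right" depends on the accuracy required, the cost per step, and the stability properties of the equation, which is precisely the kind of choice the workshop questions address.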
Robert Batterman (University of Pittsburgh)
François Dubois (CNAM, Université Paris Sud)
Paul Humphreys (University of Virginia)
Mark Wilson (University of Pittsburgh)
Thursday 3 November 2011
09:00-09:35 Johannes Lenhard, A Predictive Turn in Pre-Computer and Computer Numerics
09:35-10:10 Claus Beisbart, No risk no fun virtualized. How Monte Carlo simulations represent
10:10-10:45 Maarten Bullynck, Liesbeth De Mol and Martin Carlé, ENIAC, matrix of numerical simulation(s!)
11:15-12:15 Paul Humphreys, Applying Mathematics and Applied Mathematics
13:45-14:45 Mark Wilson, Title to be announced
15:15-16:50 Thomas Boyer, What numerical methods are not. The case of multilayered simulations, with several computational models
16:50-17:25 Greg Lusk, Faithfulness Restored: Data Analysis and Data Assimilation
Friday 4 November 2011
09:30-10:30 François Dubois, Lattice Boltzmann Equations and Finite-Difference Schemes
11:00-11:35 Sorin Bangu, Analytic vs. Numerical: Scientific Modeling and Computational Methods
11:35-12:10 Vincent Ardourel, Is Discretization a Change in Mathematical Idealization?
13:45-14:45 Robert Batterman, The Tyranny of Scales
15:15-15:50 Nicolas Fillion and Robert Corless, Computation and Explanation
15:50-16:25 Robert Moir and Robert Corless, Computation for Confirmation
Anouk Barberousse, Université de Lille
Cyrille Imbert, Archives Poincaré
Julie Jebeile, IHPST
Margaret Morrison, University of Toronto
Anouk Barberousse and Julie Jebeile
Please direct general conference inquiries to email@example.com
Presented by IHPST, Institut d’Histoire et de Philosophie des Sciences et des Techniques, University of Paris 1.