ParCo'95


Invited Speakers

Peter Dzwig (UK)

High Performance Computing for Finance

Could the Barings debacle have been avoided by the use of HPC? How is bank data (for example, your account) stored, and what impact might HPC have on it? Are neural nets and genetic algorithms applicable to the financial world as a whole? How can HPC benefit the financial community?

This talk will review both retail and non-retail aspects of HPC in the financial arena. We will show that banking and HPC will go hand in hand in the future, and that the application of HPC could have a potentially vast impact on, and benefit for, all involved, which is, directly or indirectly, all of us.

We will consider the various timescales characteristic of the financial markets, their implications for the vast amounts of data that must be handled and analysed to carry out the transactions typical of modern markets, and how HPC can affect both.

Among the areas to be touched upon in this talk will be: the retail banks and datamining; the equities, foreign exchange and derivatives markets; the Capital Adequacy Directive; and the meaning of "risk".

The talk will conclude with a review of the major issues for finance and HPC and some pointers to future developments in the field.



Oliver McBryan (USA, University of Colorado)

HPCC: The Interrelationship of Computing and Communication

High Performance Computing has benefited enormously from the remarkable rate of increase in processor power, sustained at a doubling every 18 months for many years. This in turn has allowed far larger and more complex problems to be solved than was envisioned a decade ago. However, the increasing processor performance has led to a set of new challenges, all relating to the need to move data rapidly to and from processors. As a result, data communication has surfaced as the critical bottleneck in most large-scale computing. This bottleneck can occur at several levels: within the memory of a system, between processors in a system, between processors and disk, or between systems and remote systems or users.
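
As a rough illustration of why data movement, rather than arithmetic, sets the limit, consider a simple machine-balance estimate. The sketch below is ours, not the speaker's, and the rates in it are hypothetical, chosen only to make the ratio visible:

    # Back-of-envelope machine balance (all rates hypothetical).
    # A kernel doing q flops per word of memory traffic is
    # compute-bound only when q exceeds F/B.
    F = 200e6            # hypothetical sustained flop rate (flops/s)
    B = 25e6             # hypothetical sustained memory bandwidth (words/s)
    balance = F / B      # flops the processor can do per word moved
    q = 2.0 / 3.0        # e.g. y = a*x + y: 2 flops per 3 words of traffic
    print(f"machine balance: {balance:.0f} flops/word, kernel: {q:.2f}")
    if q < balance:
        print("memory-bound: data movement, not arithmetic, limits speed")
    else:
        print("compute-bound")

Since typical kernels fall far below the machine balance, every further increase in processor speed relative to bandwidth deepens the bottleneck described above.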

In this paper we will illustrate these points using as examples grand-challenge computations of turbulent flow and structural design that are underway in our group at the University of Colorado. We will then describe solutions we have developed to cope with the severe communication issues these computations generate.



Henk A. van der Vorst (The Netherlands, Utrecht University)

Parallelism in CG-like Methods

The Conjugate Gradient (CG) method is an iterative method for the solution of linear systems Ax=b with symmetric positive definite A. Most of the operations in this method are trivially parallelizable. However, the inner products may spoil the performance on distributed-memory computers when the number of processors is large. We will discuss alternative formulations of CG, as well as possibilities to reschedule the operations, in an attempt to reduce the negative effects of communication. The matrix-vector products may also introduce some communication overhead, but for many relevant problems this involves communication only with a few nearby processors, so it may, but does not necessarily, further degrade the performance of the algorithm. The approaches discussed can also be applied to related methods such as Bi-CG, CGS, QMR, and Bi-CGSTAB.
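
For reference, a minimal sequential sketch of classical CG in Python/NumPy follows (our illustration, not code from the talk). The comments mark which operations parallelize trivially and where the two inner products per iteration would each force a global reduction on a distributed-memory machine:

    import numpy as np

    def cg(A, b, tol=1e-8, max_iter=1000):
        """Classical CG for Ax = b, A symmetric positive definite."""
        x = np.zeros_like(b)
        r = b - A @ x            # matrix-vector product: neighbour communication only
        p = r.copy()
        rr = r @ r               # inner product: global reduction in parallel
        for _ in range(max_iter):
            Ap = A @ p           # matrix-vector product
            alpha = rr / (p @ Ap)     # inner product 1 of this iteration
            x += alpha * p       # vector updates: perfectly parallel, no communication
            r -= alpha * Ap
            rr_new = r @ r       # inner product 2: another global reduction
            if np.sqrt(rr_new) < tol:
                break
            p = r + (rr_new / rr) * p
            rr = rr_new
        return x

    # Example: a small SPD system
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(cg(A, b))              # approx. [0.0909, 0.6364]

The global reductions are exactly the synchronization points that the alternative formulations and rescheduled variants mentioned above attempt to combine or hide.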

These iterative methods are often used in combination with preconditioning in order to obtain much faster convergence. Popular preconditioners based upon incomplete LU decomposition of A are often problematic in a parallel environment. In our presentation we will give an overview of parallelizable preconditioners and of techniques to extract parallelism from the classical incomplete LU decompositions.
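
As one illustration of a trivially parallelizable preconditioner, the sketch below uses a Jacobi (diagonal) preconditioner inside preconditioned CG. This is our simplest-case example, not the ILU-based techniques of the presentation, whose parallel treatment is precisely the subject of the talk:

    import numpy as np

    def pcg(A, b, tol=1e-8, max_iter=1000):
        """CG preconditioned with M = diag(A) (Jacobi)."""
        M_inv = 1.0 / np.diag(A)     # invert the diagonal: fully local, no communication
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv * r                # preconditioner application: elementwise
        p = z.copy()
        rz = r @ z                   # inner product: global reduction in parallel
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv * r            # an ILU preconditioner would instead do triangular
            rz_new = r @ z           # solves here, whose recurrences are inherently sequential
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

The contrast in the comments shows why incomplete LU is problematic in parallel: applying it means forward and backward substitutions, whose recurrences do not parallelize without the restructuring techniques the presentation surveys.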