
ParCo2019


Prague, Czech Republic, 10-13 September 2019

Keynotes

    Jean-Marc Denis


Chair of the Board, European Processor Initiative and Distinguished Expert, Strategy & Plan, BDS Strategy & Innovation, Atos

Title: European Processor Initiative: The European Vision for Exascale Ages and Beyond

Abstract:

The European Processor Initiative (EPI) is a project funded by Horizon 2020 with the aim of bringing a low-power microprocessor to the market and ensuring that the key competences for high-end chip design remain in Europe. It came into existence as Europe recognized the challenge in high-performance computing, whose importance has been rising over the last few years. Annual global IP traffic will soon reach several zettabytes, vast numbers of new devices collect and store data, and scientists are exploring new computing approaches to solving global challenges. At the same time, industry is changing the way products are designed, while we, as individuals, constantly expect more personalized services: better and more efficient drugs, faster diagnostic tools, safer and cheaper autonomous cars, and many others.

The need to collect and efficiently process these vast amounts of data comes at a price. The existing approach to HPC system design is no longer sustainable for the exascale era, in which computers will execute a billion billion calculations per second. Energy efficiency is of enormous importance for the sustainability of future exascale HPC systems. As one of the cornerstones of the European HPC strategic plan, EPI wants to develop a novel HPC-focused low-power processing system, an accelerator to increase energy efficiency for compute-intensive tasks in HPC and AI, and an automotive demonstration platform to test the relevance of these components in that industry sector.

In his keynote, EPI’s Chairman of the Board, Jean-Marc Denis, will touch upon the technological foundations of EPI’s global architecture, its General Purpose Processor, and its RISC-V accelerator.

Note:


The talk by Jean-Marc Denis is sponsored by EPI


    Erik D'Hollander


Professor at the Faculty of Engineering and Architecture, Department of Electronics and Information Systems, Ghent University, Belgium

Title: Empowering Parallel Computing with Field Programmable Gate Arrays

Abstract:

After more than 30 years, reconfigurable computing has grown from a concept to a mature field of science and technology. The cornerstone of this evolution is the field programmable gate array, a building block enabling the configuration of a custom hardware architecture.

The departure from static von Neumann-like architectures opens the way to eliminating instruction overhead and optimizing execution speed and power consumption. FPGAs now live in a growing ecosystem of development tools, enabling software programmers to map algorithms directly into hardware.

Applications abound in many directions, including data centers, IoT, AI, image processing and space exploration. The increasing success of FPGAs is largely due to an improved toolchain with solid high-level synthesis (HLS) support, as well as better integration with processor and memory systems. On the other hand, long compile times and complex design exploration remain areas for improvement. In this talk we will address the evolution of FPGAs into advanced multi-functional accelerators, discuss different programming models and their HLS language implementations, and cover high-performance tuning of FPGAs integrated in a heterogeneous platform. We will pinpoint fallacies and pitfalls of promising projects as well as identify opportunities for language enhancements and architectural improvements.


    Mikhail Dyakonov


Laboratoire Charles Coulomb, Université Montpellier, Montpellier, France

Title: Will we ever have a Quantum Computer?

Abstract:

In hypothetical quantum computing, one replaces the classical two-state BIT by a quantum element (QUBIT) with two BASIC states, ↑ and ↓. Its arbitrary state is described by the wave function ψ = a↑ + b↓, where a and b are complex amplitudes satisfying the normalization condition. Unlike the classical bit, which can be in only ONE of the two states, ↑ or ↓, the qubit can be in a continuum of states defined by the quantum amplitudes a and b. THE QUBIT IS A CONTINUOUS OBJECT.

At a given moment, the state of a quantum computer with N qubits is characterized by 2^N quantum amplitudes, which are continuous variables restricted by the normalization condition only. Thus, the hypothetical quantum computer is an ANALOG MACHINE characterized by a super-astronomical number of continuous variables.
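For concreteness, the normalization condition and the general N-qubit state referred to above can be written out as follows (standard quantum-mechanical notation, added here purely for illustration):

    |a|^2 + |b|^2 = 1, \qquad
    \psi = \sum_{x \in \{0,1\}^N} c_x \,\lvert x \rangle, \qquad
    \sum_{x} |c_x|^2 = 1 .

A register of N qubits is thus described by the 2^N complex amplitudes c_x; already for N = 1000 this exceeds 10^300 continuous parameters.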

Their values cannot be arbitrary; they must be under our control. Thus the answer to the question in the title is: when physicists and engineers learn to keep this number of continuous parameters under control, which, in my opinion, means NEVER.


    Ian Foster


Arthur Holly Compton Distinguished Service Professor, Department of Computer Science, University of Chicago, Distinguished Fellow and Senior Scientist, MCS Division, Argonne National Laboratory

Title: Coding the Continuum

Abstract:

In 2001, as early high-speed networks were deployed, George Gilder observed that “when the network is as fast as the computer's internal links, the machine disintegrates across the net into a set of special purpose appliances.” Two decades later, our networks are 1,000 times faster, our appliances are increasingly specialized, and our computer systems are indeed disintegrating. As hardware acceleration overcomes speed-of-light delays, time and space merge into a computing continuum. Familiar questions like “where should I compute,” “for what workloads should I design computers,” and “where should I place my computers” seem to allow for a myriad of new answers that are exhilarating but also daunting. Are there concepts that can help guide us as we design applications and computer systems in a world that is untethered from familiar landmarks like center, cloud, edge? I propose some ideas and report on experiments in coding the continuum.


    Torsten Hoefler


Associate Professor of Computer Science, ETH Zürich, Switzerland

Title: Performance Portability with Data-Centric Parallel Programming

Abstract:

The ubiquity of accelerators in high-performance computing has driven programming complexity beyond the skill set of the average domain scientist. To maintain performance portability in the future, it is imperative to decouple architecture-specific programming paradigms from the underlying scientific computations. We present the Stateful DataFlow multiGraph (SDFG), a data-centric intermediate representation that enables separating code definition from its optimization. We show how to tune several applications in this model and IR. Furthermore, we use a global, data-centric view of a state-of-the-art quantum transport simulator to optimize its execution on supercomputers. The approach yields coarse- and fine-grained data-movement characteristics, which are used for performance and communication modeling, communication avoidance, and data-layout transformations. The transformations are tuned for the Piz Daint and Summit supercomputers, where each platform requires different caching and fusion strategies to perform optimally. We show that SDFGs deliver competitive performance, allowing domain scientists to develop applications naturally and port them to approach peak hardware performance without modifying the original scientific code.
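To make the decoupling concrete, the minimal sketch below uses the open-source DaCe Python front-end (the SPCL reference implementation of SDFGs; naming it here is an assumption, since the abstract does not mention a specific tool, and API details may differ between versions). The scientific code is written once; the SDFG generated from it is what gets transformed for a given architecture.

    import numpy as np
    import dace

    N = dace.symbol('N')   # symbolic size: the program stays size-generic

    @dace.program
    def axpy(a: dace.float64, x: dace.float64[N], y: dace.float64[N]):
        # The scientific code itself: a plain vector update, no tuning directives.
        y[:] = a * x + y

    # The data-centric IR (an SDFG) is generated from the unmodified program;
    # tiling, fusion, or GPU-mapping transformations are applied to this object,
    # never to the Python source above.
    sdfg = axpy.to_sdfg()

    # Calling the program JIT-compiles an SDFG and runs it on the given arrays.
    x = np.random.rand(1024)
    y = np.random.rand(1024)
    axpy(2.0, x, y)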


    Thomas Lippert


Prof. Dr. Dr. Thomas Lippert, Jülich Supercomputing Centre, Forschungszentrum Jülich, Chair of the PRACE Council, Professor for Theoretical Computational Physics at Wuppertal University, Germany

Title: Scalability, Cost-Effectiveness and Composability of Large-Scale Supercomputers through Modularity

Abstract:

The leading supercomputers in the Top500 list are based on a traditional, monolithic architectural approach. They all use complex nodes consisting of different elements, such as CPUs and GPUs or FPGAs, with common I/O interfaces. Such systems are known to suffer from significant underutilization: the more complex the node, the more vulnerable the overall system becomes to inefficiencies. A second problem is the cost of scalability, because a node must be able to perform very complex calculations for problems that are often not scalable, while the same node must also perform scalable calculations for readily scalable problems that would not require such complex nodes. This makes such monolithic systems extremely costly. A third difficulty is the requirement to integrate new kinds of resources, such as future quantum computers. To address these problems, we propose a disaggregation of resources and their dynamic recomposition through a programming paradigm called modular supercomputing. Modular supercomputing opens up a promising new degree of freedom in supercomputing, namely the dynamic and optimal adaptation of program parts, or of programs in workflows, to different architectures connected by a common fast network. We motivate this approach by a computer-theoretical generalization of Amdahl's law. Future perspectives for modular supercomputing include energy-efficient exascale computing, optimized workflows in data analysis and interactive supercomputing, and the straightforward integration of future quantum and neuromorphic computers. In addition, memory expansions in the system's network are of great interest. Furthermore, we present arguments for the usefulness of modularity for important applications such as Earth system simulations, continuous learning, and data analysis problems. Finally, we present first results on test problems.
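The generalization of Amdahl's law is not spelled out in the abstract; purely as an illustration (and not necessarily the formulation used in the talk), the classical law and a simple two-module (cluster-booster) variant can be written as:

    S(p) = \frac{1}{(1 - f) + f/p}              (classical Amdahl, parallel fraction f)

    S(p_C, p_B) = \frac{1}{(1 - f)/p_C + f/p_B}  (two modules)

Here the weakly scalable fraction 1 - f runs on a cluster module of p_C general-purpose nodes and the highly scalable fraction f on a booster module of p_B lean, energy-efficient nodes, so each part of an application is matched to the module on which it runs best rather than provisioning a single monolithic node type for everything; the classical law is recovered for p_C = 1.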

Note:


The talk by Thomas Lippert is sponsored by Jülich Supercomputing Centre, Forschungszentrum Jülich



    Jean-Pierre Panziera


Jean-Pierre Panziera, Atos, France

Title: Addressing the Exascale Challenges

Abstract:

Several Exascale systems have been announced around the world (USA, Japan, China), which should be in production in 2021, maybe even earlier. Europe, through the EuroHPC programme, will install three pre-Exascale supercomputers in 2020 and is already planning two Exascale systems for 2022-23. These systems aim at delivering 50-100 times more compute performance than the current generation on real-life applications. To achieve this goal we need to overcome some formidable challenges. For Exascale performance, these systems will most probably use HPC accelerators, for which porting and tuning applications is not straightforward. In addition, they will need to scale to tens of thousands of nodes, which could impact resilience and will put a lot of stress on the interconnect. Given their size, the power consumption of Exascale systems will reach 20-30 MW, requiring new types of datacentres to handle this level of power supply and cooling. Finally, these systems must support new application workloads, including data analytics and machine learning. We will present a blueprint for Exascale systems and show how each of these challenges can be addressed to provide useful Exascale supercomputing tools for scientists and engineers.
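To put these figures in perspective, a simple back-of-the-envelope calculation (added here for illustration): sustaining an exaflop, i.e. 10^18 floating-point operations per second, within a 20 MW power envelope requires an energy efficiency of

    \frac{10^{18}\ \mathrm{FLOP/s}}{2 \times 10^{7}\ \mathrm{W}}
    = 5 \times 10^{10}\ \mathrm{FLOP/s\ per\ W}
    = 50\ \mathrm{GFLOP/s\ per\ W},

roughly three times the efficiency of the most energy-efficient systems of 2019.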





Invited Speakers 1983 - 2017


Conference | Speaker | Title

ParCo2017

Maria Chiara Carrozza

NeuroRobotics Area in The Biorobotics Institute at Scuola Superiore Sant’Anna. President of the Italian National Group of Bioengineering. Member of the Board of Directors of the Piaggio Spa group.


The future of Robotics in the Fourth Industrial Revolution



Andris Ambainis

Professor at the Faculty of Physics and Mathematics, University of Latvia, Riga, Latvia


Software for Quantum Computers



Jack Dongarra

Professor at the Department of Electrical Engineering and Computer Science, University of Tennessee, Oak Ridge National Laboratory, and the University of Manchester, U.K.


An Overview of High Performance Computing and Challenges for the Future



Marco Aldinucci

Professor at the Computer Science Department, University of Torino, Torino, Italy.


Partitioned Global Address Space in the mainstream of C++ programming



Thomas Ludwig

Director of the German Climate Computing Center (DKRZ) and Professor of Informatics, University of Hamburg.


Computational Climate Science on the Way to Exaflops and Exabytes



Didier El Baz

Head and Founder of the Distributed Computing and Asynchronism team (CDA), LAAS-CNRS, Toulouse, France.


High Performance issues related to the Internet of Things and Smart Earth.


ParCo2015

Stephen Furber

ICL Professor of Computer Engineering, School of Computer Science, University of Manchester, UK.


Bio-Inspired Massively-Parallel Computation



Simon McIntosh-Smith

Head of the HPC Research Group at the University of Bristol, UK.


Scientific Software Challenges in the Extreme Scaling Era



Keshav Pingali

Professor in the Department of Computer Science and holder of the W.A. "Tex" Moncrief Chair of Computing in the Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, USA


Parallel Program = Operator + Schedule + Parallel data structure



Rick Stevens

Associate Laboratory Director, Argonne National Laboratory and Professor, Department of Computer Science, University of Chicago, USA.


How might future HPC architectures utilize emerging neuromorphic chip technology?


ParCo2013

Pete Beckman

Director, Exascale Technology and Computing Institute, Argonne National Laboratory and the University of Chicago, USA.

The Changing Software Stack of Extreme-Scale Supercomputers


Sudip Dosanjh

Director of the National Energy Research Scientific Computing (NERSC) Center, Lawrence Berkeley National Laboratory, USA

On the Confluence of Exascale and Big Data


Wolfgang Nagel

Director, Center for Information Services and High Performance Computing (ZIH) & Professor for Computer Architecture, Institute for Computer Engineering, Technical University of Dresden, Germany.


Challenges for Exascale: Architectures and Workflows for Big Data in Life Sciences


Martin Schulz

Computer Scientist at the Center for Applied Scientific Computing (CASC) at Lawrence Livermore National Laboratory (LLNL), USA.


Performance Analysis Techniques for the Exascale Co-Design Process

ParCo2011

Andy Adamatzky

Dept of Computer Science, UWE, Bristol, UK

Physarum Machines


Jack B. Dennis

Professor of Computer Science, Massachusetts Institute of Technology, USA

The Fresh Breeze Project


Bernhard Fabianek

European Commission, Brussels, Belgium

The Future of High Performance Computing in Europe


William D. Gropp

Paul and Cynthia Saylor Professor of Computer Science, University of Illinois Urbana-Champaign, USA

Performance Modeling as the Key to Extreme Scale Performance


Thomas Lippert

Forschungszentrum Jülich GmbH, Jülich, Germany

Europe's Supercomputing Research Infrastructure PRACE


Ignacio Martín Llorente

OpenNebula Project Director, DSA-Research.org, Universidad Complutense de Madrid, Spain

Challenges in Hybrid and Federated Cloud Computing

ParCo2009

Alan Gara

Blue Gene Chief Architect, IBM, USA

Exascale computer: What future architectures mean for the user community


Ian Foster

Argonne National Laboratory & Computer Science, University of Chicago, USA

Computing Outside the Box


Chris Jesshope

University of Amsterdam, Netherlands

Making multi-cores mainstream - from security to scalability


François Bodin

CAPS enterprise, Rennes, France

High Level Graphics Processing Unit Programming.

ParCo2007

Maria Ramalho-Natario
European Commission, INFSO

European E-Infrastructure: Promoting Global Virtual Research Communities


Barbara Chapman
University of Houston, Texas

Programming in the Multicore Era



Marek Behr

RWTH Aachen University, Germany

Simulation of Heart-Assist Devices



Satoshi Matsuoka
Tokyo Institute of Technology, Japan

Towards Petascale Grids as a Foundation of E-Science



Thomas Lippert

Forschungszentrum Jülich GmbH, Jülich, Germany

Partnership for Advanced Computing in Europe (PRACE)

ParCo2005

Joel H. Saltz

Ohio State Univ., USA

Computational Phenotyping and High End Computing


Michael Gerndt

TU München, Germany

Advanced Techniques for Performance Analysis


Antonio González

UPC & Intel Labs., Barcelona, Spain

The Right-Hand Turn to Multi-Core Processors

ParCo2003

Friedel Hoßfeld

Jülich, Germany

Parallel Machines and the “Digital Brain” - An Intricate Extrapolation on the Occasion of JvN's 100th Birthday


Manfred Zorn, NSF and Lawrence Berkeley National Laboratory

Computational Challenges in the Genomics Era


Charles D. Hansen, University of Utah, Salt Lake City

High Performance Visualization: So Much Data, So Little Time

ParCo2001

Giovanni Aloisio

Grids: an application perspective


Tony Hey

E-Science, e-Business and the Grid


Vipin Kumar

Graph Partitioning for Dynamic, Adaptive and Multi-Phase Computations


Paul Messina

Technology Issues Relating to the Archiving, Accessing and Analysis of Very Large Distributed Scientific Data Sets


Jack Dongarra

Computational Grids

ParCo99

Paolo Ciancarini (I):

Coordination Languages


Dennis Gannon (USA):

The Information Power Grid and the Problem of Component Systems for High Performance Distributed Computing


Jerzy Leszczynski (USA):

Explosive Advances in Computational Chemistry - Applications of Parallel Computing in Biomedical and Material Science Research


Richard Robb (USA):

A Vision for Image Computing and Visualization in Medicine


David Womble (USA):

Challenges in the Practical Application of Parallel Computing

ParCo97

Geoffrey Fox (USA):

Future of High Performance Computing: Java on PetaFlop Computers


Andreas Reuter (D):

Parallel Database Techniques in Decision Support and Data Mining


Argy Krikelis (UK):

Parallel Multimedia Computing


Klaus Stüben (D):

Europort-D: Commercial Benefit of Using Parallel Technology

ParCo95

Peter Dzwig (UK)

High Performance Computing for Finance


Oliver McBryan (USA, University of Colorado)

HPCC: The interrelationship of Computing and Communication


Henk A. van der Vorst (The Netherlands, Utrecht University)

Parallelism in CG-like Methods

ParCo93

R. Hiromoto, USA

Are We Expecting Too Much from Parallelism?


Ian Foster, USA

Models for Modular Parallel Programming


Hans P. Zima, Austria

Vienna Fortran – A Second Generation System for High-Performance Computation


Vaidy Sunderam, USA

Methodologies and Tools for Heterogeneous Concurrent Programming

ParCo91

J. L. Gustafson (USA)

Compute-Intensive Applications on Advanced Computer Architectures


M. Cosnard (F)

Designing Non-Numerical Algorithms for Distributed Memory Computers


H. Mühlenbein (D)

Neural Networks and Genetic Algorithms as Paradigms for Parallel Problem Solving


D. De Groot (USA)

Parallel Logic Programming and Speculative Computation


D. J. Evans (UK)

Design of Parallel Numerical Algorithms

ParCo89

K. Hwang (USA)

Massively Parallel Computing with Optics and Connectionist Neural Models


R. H. Perrot (UK)

Parallel Languages and Parallel Software


F. A. Lootsma (NL)

Parallel Non-Linear Optimization


J. S. Kowalik (USA)

Parallel Computation in Artificial Intelligence


W. Gentzsch (D)

Performance Evaluation for Shared-Memory Parallel Computers

ParCo85

R. L. Hockney (UK)

Parallel Computers and Algorithms


W. Butscher (D)

Supercomputing in Seismic Exploration Industry


G. H. Rodrigue (USA)

Parallel Algorithms for Partial Differential Equations


A. Sameh (USA)

Parallel Algorithms in Numerical Linear Algebra


W. Schmidt (D)

Advanced Numerical Methods in Aerodynamics Using Vector Computers

ParCo83

F. Hoßfeld (D)

Nonlinear Dynamics – A Challenge on High-Speed Computation


F. Hertweck (D)

Using a Vector Computer in a Research Environment


D. J. Evans (UK)

The Parallel Solution of Partial Differential Equations


W. Händler (D)

Dynamic Computer Structures for Manifold Utilization


D. Parkinson (UK)

The Solution of n Linear Equations with p Processors





© ParCo Conferences
Last updated: 2019-09-02