Contact: Delia Arnold
The main aim of this work package is to perform the dispersion calculations according to the predefined release scenarios and for a large number of meteorological situations. For this purpose, existing codes and scripts have been adapted and optimized, and new ones have been created. The results were evaluated in a way that allows their further use in the subsequent work packages. A major practical problem of this work package was to condense the very large amount of data produced into a reasonable number of products containing the relevant results.
The atmospheric dispersion modelling has been carried out with the latest version 8 of the Lagrangian particle dispersion model FLEXPART (Stohl et al. 2005). Two species of nuclides have been transported, representing, with respect to their deposition behaviour, on the one hand noble gases (no dry or wet deposition) and on the other hand radionuclides bound to aerosols. As an approximation, gaseous iodine (I2) is treated like the aerosol-bound nuclides. The emission of a standard amount of these species yields source-receptor relationships (Seibert and Frank 2004) both for ground deposition and for near-ground air concentration. For computational efficiency, radioactive decay was introduced afterwards as a correction factor.
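Applying decay as a post-hoc correction works because the transported tracers are decay-free unit emissions, so each source-receptor value can simply be scaled by the decay factor for the nuclide and travel time. A minimal sketch of this scaling (the function name and the Cs-137 example are illustrative, not taken from the project code):

```python
import math

def apply_decay(srr_value, half_life_s, travel_time_s):
    """Scale a decay-free unit-emission source-receptor value by
    radioactive decay, i.e. multiply by exp(-lambda * t)."""
    decay_const = math.log(2.0) / half_life_s
    return srr_value * math.exp(-decay_const * travel_time_s)

# Illustrative: Cs-137 (half-life approx. 30.17 a) after 5 days of transport
halflife_cs137_s = 30.17 * 365.25 * 86400.0
corrected = apply_decay(1.0, halflife_cs137_s, 5 * 86400.0)
print(corrected)
```

For long-lived nuclides such as Cs-137 the factor stays close to 1 over a few days; for short-lived ones it dominates the result.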
Data of each run are stored with 3-hourly resolution. In order to accelerate the calculations and to reduce the data output, FLEXPART has been modified so that only the sum of dry and wet deposition is output, deposition is saved incrementally (like concentration) instead of accumulated, and the simulation of a "cloud" is automatically stopped when its mass inside the computational domain has fallen below a given percentage of the initially emitted mass.
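The two output modifications can be illustrated in a few lines: converting an accumulated deposition series into per-interval increments, and the termination criterion on the airborne mass. This is a hedged sketch; the function names and the 1% threshold are assumptions for illustration, not the project's actual settings:

```python
def incremental_deposition(accumulated):
    """Convert an accumulated deposition time series into
    per-interval increments (first interval keeps its value)."""
    return [accumulated[0]] + [b - a for a, b in zip(accumulated, accumulated[1:])]

def should_terminate(mass_in_domain, emitted_mass, threshold_pct=1.0):
    """True once the mass remaining in the computational domain falls
    below threshold_pct of the total emitted mass."""
    return mass_in_domain < emitted_mass * threshold_pct / 100.0

print(incremental_deposition([1.0, 3.0, 6.0]))  # increments per 3-h interval
print(should_terminate(0.5, 100.0))             # 0.5% left -> stop the run
```

Storing increments rather than accumulated sums keeps deposition and concentration fields structurally alike and avoids carrying a growing total through the whole run.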
Data from the European Centre for Medium-Range Weather Forecasts (ECMWF) have been used as meteorological input, more specifically the so-called ERA-Interim data with a horizontal resolution of 0.75 degrees (approx. 70 km) and a temporal resolution of 3 h, including all model levels. These data are the best available for a Europe-wide investigation. Local-scale assessments, such as the risk from the Krsko site for Carinthia and Styria, are, however, only possible by approximation. For more information on the precipitation distribution and the computational domain, see the basic information.
The calculations have been carried out for the 10-year period 2000-2009 in order to attain approximate climatological representativeness. In addition, the 90 cases of the predecessor project RISKMAP have been recomputed. As in the RISKMAP project, the starting times of the calculations (releases) are distributed evenly over all times of the day and over the seasons. All in all, about 2,800 cases resulted. However, this meant a substantially increased computational effort and an amount of several terabytes of data, also because of the requirements for the additional dose calculation. A part of the preparation of the dispersion calculations thus consisted of optimizing the set-ups towards a good compromise between the different demands. The possible number of calculations had to take into account the available resources, both CPU time and amount of output. In the end, the originally envisaged tenfold increase of this number could be exceeded threefold, so that calculations are now available for about 30 times as many dates as in RISKMAP.
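One simple way to obtain start times that are spread evenly over the years while still sampling all times of day is a fixed step that is not a multiple of 24 h. The sketch below assumes such a scheme purely for illustration; the actual RISKMAP scheduling may differ:

```python
from datetime import datetime, timedelta

def release_start_times(start_year=2000, n_years=10, n_cases=2800):
    """Spread n_cases release start dates evenly over an n-year period.
    The resulting step (here approx. 31.3 h) is not a multiple of 24 h,
    so all times of day and all seasons are covered."""
    t0 = datetime(start_year, 1, 1)
    total_hours = n_years * 365.25 * 24.0
    step = total_hours / n_cases
    return [t0 + timedelta(hours=i * step) for i in range(n_cases)]

times = release_start_times()
print(times[0], times[-1], len(times))
```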
The calculations have been carried out on the VSC high-performance computing facility operated jointly by TU Vienna, BOKU Vienna, and the University of Vienna.
Calculations have been carried out for 88 nuclear sites, with predefined release shapes (one or more release phases of different duration, different release heights) that had to be considered at each site. A total of 250,000 dispersion cases has been calculated. The dispersion calculations consumed about 170,000 core-hours on the VSC, corresponding to almost 20 years of calculation time on a single-core computer. However, as 40 compute nodes with 8 processor cores each could be employed (roughly 10% of the total VSC capacity), only about 530 hours (22 days) fall on each processor core; in other words, the pure computational time amounted to three weeks. In this process, about 2.5 terabytes of data were produced.
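The wall-time figures follow directly from the core-hour budget and the number of cores used in parallel; the arithmetic from the paragraph above can be checked in a few lines:

```python
core_hours = 170_000
nodes, cores_per_node = 40, 8
cores = nodes * cores_per_node                 # 320 cores in parallel

years_single_core = core_hours / (24 * 365.25)  # single-core equivalent
wall_hours = core_hours / cores                 # per-core share
wall_days = wall_hours / 24

print(f"{years_single_core:.1f} years single-core; "
      f"{wall_hours:.0f} h = {wall_days:.1f} days wall time")
```

This reproduces the quoted "almost 20 years" single-core equivalent and the roughly 530 hours (22 days, about three weeks) of parallel runtime.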
The VSC high-performance computing facility