Jan 16, 2020 | Conferences & Events
The next generation of physics and astronomy flagship projects requires high-throughput exascale computing solutions. Examples include the Square Kilometre Array (SKA) and the High-Luminosity Large Hadron Collider (HL-LHC).
What these projects have in common is a heavy reliance on robust, scalable solutions to their data challenge. These solutions will make the best use of advances in the performance and sustainability of IT hardware, but above all of intelligent data pipelines. In the future, raw data will be heavily processed to reduce its volume under a controlled loss of information, facilitating long-term archiving and making the data accessible to widely distributed user communities. Among the data-processing tools, machine-learning algorithms could play an important role in reducing data volumes while retaining the encoded information. New technologies and computer architectures, as well as a standardized approach to programming data pipelines, are key to a high-performance research data infrastructure that serves the needs of a diverse set of experiments and observatories.
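As a loose illustration of the kind of controlled-loss data reduction described above (not part of any ESCAPE toolchain), the following Python sketch applies simple zero-suppression to simulated raw samples and reports the resulting volume reduction and how much of the sparse signal survives; the data shape, noise model, and threshold are all hypothetical.

```python
import numpy as np

# Hypothetical "raw data": noisy detector samples with a few sparse signal pulses.
rng = np.random.default_rng(42)
raw = rng.normal(0.0, 1.0, size=1_000_000)            # baseline noise
signal_idx = rng.choice(raw.size, size=500, replace=False)
raw[signal_idx] += rng.uniform(10.0, 50.0, size=500)   # sparse signal pulses

# Controlled-loss reduction: keep only samples above an assumed noise threshold
# (zero-suppression), storing the surviving values together with their indices.
threshold = 5.0
keep = np.abs(raw) > threshold
reduced_values = raw[keep].astype(np.float32)
reduced_indices = np.flatnonzero(keep).astype(np.uint32)

# Report the trade-off: how much smaller the reduced data set is,
# and what fraction of the injected signal samples it still contains.
raw_bytes = raw.nbytes
reduced_bytes = reduced_values.nbytes + reduced_indices.nbytes
retained_signal = np.isin(signal_idx, reduced_indices).mean()

print(f"volume reduction: {raw_bytes / reduced_bytes:,.0f}x")
print(f"signal retained:  {retained_signal:.1%}")
```

In a real pipeline this step would be one stage among many, with thresholds tuned so that the loss of information stays within agreed limits before the reduced data are archived and distributed.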
The seminar encourages discussions between experts from science and industry and young scientists on how to meet these challenges, addressing the long-term vision of community efforts for the advancement of fundamental science. The seminar also carries a message for the scientific community: more effort is needed to support the careers of young researchers who are gearing up to tackle these enormous data challenges.
Michiel van Haarlem from ASTRON, leader of the ESCAPE ESFRI Science Analysis Platform team, and Kay Graf from FAU, leader of the ESCAPE Open-source scientific Software and Service Repository team, presented the work they have done with the SKA and HL-LHC, two of the ESFRI projects taking part in ESCAPE.