Computing for the next generation flavour factories

CORVO, Marco; FELLA, Armando; GIANOLI, Alberto; LUPPI, Eleonora; TOMASSETTI, Luca
2011

Abstract

The next generation of Super Flavor Factories, such as SuperB and SuperKEKB, presents significant computing challenges. Extrapolating the BaBar and Belle experience to the SuperB nominal luminosity of 10³⁶ cm⁻² s⁻¹, we estimate that the data collected after a few years of operation will amount to about 200 PB, with the CPU power required to process them of the order of 2000 kHEP-SPEC06. Already in the current phase of detector design, the number of simulated events needed to estimate the impact on very rare benchmark channels is huge and has required the development of new simulation tools and the deployment of a worldwide distributed production system. Once the collider is in operation, very large data sets will have to be managed, and new technologies with a potentially large impact on the computational models, such as many-core CPUs, will need to be effectively exploited. In addition, SuperB, like the LHC experiments, will have to make use of distributed computing resources accessible via Grid infrastructures, while providing an efficient and reliable data access model to its final users. To explore the key issues, a dedicated R&D program has been launched and is now in progress. A description of the R&D goals and the status of the ongoing activities is presented.
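
For illustration, the short Python sketch below shows how a luminosity extrapolation of this kind can reproduce the quoted 200 PB scale. All input parameters (effective running time per year, hadronic cross section near the Y(4S), event size, and the overhead factor for simulation, reprocessing and replicas) are assumptions chosen for illustration, not values taken from the paper.

    # Back-of-envelope extrapolation from nominal luminosity to data volume.
    # Every parameter below is an illustrative assumption, not a paper value.

    L = 1e36            # nominal instantaneous luminosity [cm^-2 s^-1] (from abstract)
    t_year = 1e7        # effective running time per year [s] (assumption)
    years = 5           # "a few years of operation" (assumption)
    sigma_had = 4e-33   # e+e- hadronic cross section near Y(4S), ~4 nb [cm^2] (assumption)
    event_size = 100e3  # stored size per event [bytes] (assumption)
    overhead = 10       # simulation, reprocessing, replica copies (assumption)

    int_lumi = L * t_year * years       # integrated luminosity [cm^-2]
    n_events = int_lumi * sigma_had     # recorded hadronic events
    total_bytes = n_events * event_size * overhead

    print(f"integrated luminosity: {int_lumi / 1e42:.0f} ab^-1")   # -> 50 ab^-1
    print(f"hadronic events:       {n_events:.1e}")                # -> 2.0e+11
    print(f"total data volume:     {total_bytes / 1e15:.0f} PB")   # -> 200 PB

With these assumptions the estimate lands at 200 PB, the same order of magnitude the abstract quotes; the point of the sketch is the scaling argument, not the specific inputs.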
Files in this record:
There are no files associated with this record.

Documents in SFERA are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11392/1613067

Citations
  • Scopus: 5
  • ISI: 6