The LHCb Distributed computing model and operations during LHC runs 1, 2 and 3
CORVO, Marco; TOMASSETTI, Luca
2015
Abstract
LHCb is one of the four main high-energy physics experiments currently in operation at the Large Hadron Collider at CERN, Switzerland. This contribution reports on the experience of the computing team during LHC Run 1, the ongoing preparation for Run 2, and a brief outlook on plans for data taking in Run 3 and their implications. Furthermore, a brief introduction is given to LHCbDIRAC, the tool that interfaces the experiment's distributed computing resources for its data processing and data management operations. During Run 1, several changes in the online filter farms affected computing operations and the computing model, such as the replication of physics data, the data processing workflows, and the organisation of processing campaigns. The strict MONARC model originally foreseen for LHC distributed computing was changed. Furthermore, several changes and simplifications were made in the tools for distributed computing, e.g. for software distribution, the replica catalog service, and the deployment of conditions data. The reasons for, implementations of, and implications of all these changes will be discussed. For Run 2 the running conditions of the LHC will change, which will also affect distributed computing, as the output rate of the high-level trigger (HLT) will approximately double. This increased load on computing resources, together with changes in the HLT farm that will allow a final calibration of the data, will have a direct impact on the computing model. In addition, further simplifications in the usage of tools are foreseen for Run 2, such as the consolidation of data access protocols, the usage of a new replica catalog, and several adaptations in the core distributed computing framework to serve the additional load. In Run 3 the trigger output rate is foreseen to increase further.
One of the changes in the HLT, to be tested during Run 2 and taken further in Run 3, which allows direct output of physics data without offline reconstruction, will be discussed. LHCb also strives to include cloud and virtualised infrastructures among its distributed computing resources, including running on IaaS infrastructures such as OpenStack or on hypervisor-only systems using Vac, a self-organising cloud infrastructure. The usage of BOINC for volunteer computing is currently being prepared and tested. All these infrastructures, in addition to classical grid computing, can be served by a single service and pilot system. The details of these different approaches will be discussed.
File | Description | Type | License | Size | Format
---|---|---|---|---|---
ISGC2015_005.pdf (open access) | Full editorial text | Full text (publisher's version) | Creative Commons | 661.36 kB | Adobe PDF
Documents in SFERA are protected by copyright and all rights are reserved, unless otherwise indicated.