ALICE Time Projection Chamber: view of the interior of the Field Cage

The ALICE High Level Trigger

The High-Level Trigger combines and processes the full information from all major detectors of ALICE in a large computer cluster. Its task is to select the physics-relevant part of the huge incoming data stream and to reduce the data volume by well over one order of magnitude, so that it fits the available storage bandwidth while the physics information of interest is preserved. This is achieved by a combination of different techniques, all of which require a detailed online event reconstruction:

  • Trigger: selecting interesting events based on a detailed online analysis of their physics observables.
  • Selection: selecting Regions of Interest (the interesting parts of single events).
  • Compression: reducing the event size by advanced data compression without any loss of the contained physics.
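The HLT's actual compression is based on detailed modelling of the detector data; as a generic illustration of the underlying principle that lossless compression shrinks the data volume while allowing bit-exact reconstruction, here is a minimal sketch using a standard entropy coder (zlib) on an invented ADC-like payload:

```python
import zlib

# Synthetic "ADC-like" payload: many small, repetitive values compress
# well. These numbers are invented purely for illustration and have no
# relation to real TPC data.
payload = bytes([i % 7 for i in range(10000)])

compressed = zlib.compress(payload, level=9)
restored = zlib.decompress(compressed)

assert restored == payload  # lossless: bit-exact reconstruction
print(len(payload), "->", len(compressed), "bytes")
```

The key property, shared with the HLT scheme, is that decompression recovers the original data exactly, so no physics information is lost.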

ALICE is dedicated to the study of ultra-relativistic heavy-ion collisions. One of its central detectors is the large TPC, which produces most of the experiment’s data at rates of up to 15 GB/s. Collisions are recorded with the TPC approximately 200 times per second, and each collision creates about 4000-12000 charged particles. The HLT performs a full analysis of these events in real time and reconstructs all tracks in the TPC. The picture below shows the result of this online analysis: the red tracks belong to a so-called jet, which the HLT found among the huge number of background tracks (blue).
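A quick back-of-the-envelope calculation from the figures quoted above shows the scale of the task (the reduction factor of 20 below is an assumed stand-in for "well over one order of magnitude"):

```python
# Event-rate arithmetic using the figures quoted in the text.
events_per_s = 200                     # TPC collision rate
tracks_low, tracks_high = 4000, 12000  # charged particles per collision

print(f"{events_per_s * tracks_low}..{events_per_s * tracks_high} tracks/s")
# The online reconstruction must handle on the order of a million
# tracks per second.

# Data-volume side: up to 15 GB/s of TPC data must be reduced by well
# over one order of magnitude; the exact factor of 20 is assumed here.
input_rate_gb_s = 15.0
reduction = 20.0
print(f"output rate: {input_rate_gb_s / reduction:.2f} GB/s")
```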

Trigger efficiency

The HLT system allows the number of collected events containing the physics signals of interest to be significantly increased.

Data flow

The data from the detectors are read out in parallel according to the natural detector granularity. The detector data are shipped with optical links to the first processing layer where local pattern recognition is done. The results are forwarded to the following processing layers where a global pattern recognition (track finding and sector merging) takes place. The trigger decision is generated in the final layer.
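Schematically, this hierarchy can be sketched as a chain of processing stages. The function names and data structures below are illustrative only, not the actual HLT interfaces:

```python
# Illustrative sketch of the hierarchical HLT data flow described above.
# Stage names mirror the text; all data shapes and thresholds are invented.

def local_pattern_recognition(raw_sector):
    # First layer: per-sector pattern recognition on the raw data.
    return {"sector": raw_sector["sector"],
            "clusters": len(raw_sector["samples"])}

def global_pattern_recognition(sector_results):
    # Following layers: track finding and merging across sector borders
    # (here crudely modelled as "ten clusters per track").
    return {"tracks": sum(r["clusters"] for r in sector_results) // 10}

def trigger_decision(event):
    # Final layer: accept the event if it contains enough tracks,
    # a stand-in for a real physics selection.
    return event["tracks"] >= 100

# Raw data arrive in parallel, one stream per detector sector.
raw = [{"sector": s, "samples": list(range(500))} for s in range(36)]
local = [local_pattern_recognition(r) for r in raw]  # runs in parallel
event = global_pattern_recognition(local)
print("accept" if trigger_decision(event) else "reject")
```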


Computing Cluster

The online event analysis requires a powerful computing cluster with a few hundred computing nodes. This cluster is built from inexpensive standard components, namely ordinary PCs connected by a standard network.

The FPGA Coprocessor

The online data processing is accelerated significantly by an FPGA coprocessor. The picture below shows the block diagram of the Cluster Finder, together with the state-machine diagrams of its Decoder and Merger components.
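In essence, a cluster finder groups neighbouring above-threshold charge samples and computes the charge-weighted centre of gravity of each group. A simplified one-dimensional software sketch of that idea (the threshold and ADC values are invented for illustration):

```python
def find_clusters(adc, threshold=3):
    """Group consecutive above-threshold samples and return the
    charge-weighted centre of gravity of each group (1-D sketch)."""
    clusters, run = [], []
    for pos, charge in enumerate(adc):
        if charge > threshold:
            run.append((pos, charge))
        elif run:
            total = sum(q for _, q in run)
            clusters.append(sum(p * q for p, q in run) / total)
            run = []
    if run:  # flush a cluster that ends at the last sample
        total = sum(q for _, q in run)
        clusters.append(sum(p * q for p, q in run) / total)
    return clusters

# Invented ADC sequence containing two charge deposits.
adc = [0, 1, 5, 9, 5, 1, 0, 0, 2, 7, 8, 2, 0]
print(find_clusters(adc))  # two cluster positions
```

The real Cluster Finder works on two-dimensional pad/time data and is implemented as pipelined hardware logic, but the grouping-and-centroid principle is the same.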

The H-RORC

The data from the different detectors are received by the HLT Read-Out Receiver Card (H-RORC) and fed into the analysis cluster.

Remote Administration

The HLT cluster is administered fully autonomously with the help of the Computer Health And Remote Monitoring (CHARM) card, which is plugged into the PCI bus of the host computer. It is a small single-board computer running Linux as its operating system, and it can monitor and control the host computer. Connections to the CHARM card are possible via its own network, giving full remote control of the host system.

More about the ALICE HLT