Public Member Functions

def __init__
def alcaHarvesting
def alcaReco
def alcaSkim
def dqmHarvesting
def dropOutputModule
def expressProcessing
def merge
def promptReco
def repack
def skimming
def visualizationProcessing
_Scenario_
Definition at line 24 of file Scenario.py.
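The base class mainly defines the interface; a minimal sketch of the pattern, assuming unimplemented operations raise until a concrete scenario overrides them. The class names and sample skim are illustrative, not taken from the CMSSW sources:

    class Scenario(object):
        """Common interface; concrete scenarios override the operations they support."""

        def alcaSkim(self, skims, **options):
            # the base class only declares the operation
            raise NotImplementedError("alcaSkim is not supported by this scenario")


    class HypotheticalAlCaScenario(Scenario):
        def alcaSkim(self, skims, **options):
            # a real scenario would install the skim splitting on a cms.Process
            return {"skims": list(skims), "options": options}


    print(HypotheticalAlCaScenario().alcaSkim(["PromptCalibProd"]))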
def Scenario.Scenario.__init__(self)
Definition at line 29 of file Scenario.py.
def Scenario.Scenario.alcaHarvesting(self, globalTag, datasetName, options)
_alcaHarvesting_ Build an AlCa Harvesting configuration. Arguments: globalTag - the global tag being used; datasetName - the dataset being harvested; options - further settings, including the list of LFNs being harvested.
Definition at line 139 of file Scenario.py.
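A hedged usage sketch of the call shape described above. HypotheticalHarvestScenario is a stand-in for a concrete subclass, and passing the LFN list through a keyword option named inputFiles is an assumption drawn from the argument notes, not a documented contract:

    class HypotheticalHarvestScenario(object):
        def alcaHarvesting(self, globalTag, datasetName, **options):
            # a real implementation would return a harvesting cms.Process
            return {"globalTag": globalTag, "dataset": datasetName, "options": options}


    cfg = HypotheticalHarvestScenario().alcaHarvesting(
        "GR_P_V56::All",                          # globalTag
        "/StreamExpress/Run2012A-v1/ALCARECO",    # datasetName
        inputFiles=["/store/data/example.root"],  # LFNs being harvested (assumed key)
    )
    print(cfg)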
def Scenario.Scenario.alcaReco(self, skims, options)
_alcaReco_ Given a skeleton process, install the skim production for the given skims.
Definition at line 107 of file Scenario.py.
def Scenario.Scenario.alcaSkim(self, skims, options)
_alcaSkim_ Given a skeleton process, install the skim splitting for the given skims.
Definition at line 95 of file Scenario.py.
def Scenario.Scenario.dqmHarvesting(self, datasetName, runNumber, globalTag, options)
_dqmHarvesting_ Build a DQM Harvesting configuration. Arguments: datasetName - aka the workflow name for the DQM server; the name of the dataset containing the harvested run. runNumber - the run being harvested. globalTag - the global tag being used. options - further settings, including the list of LFNs being harvested.
Definition at line 119 of file Scenario.py.
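A hedged sketch of how a harvesting job might obtain and drive a scenario: resolve a concrete implementation by name, then hand it the dataset, run number and global tag. getScenario and HypotheticalCosmicsDQM are stand-ins; CMSSW's own scenario lookup may differ:

    class HypotheticalCosmicsDQM(object):
        def dqmHarvesting(self, datasetName, runNumber, globalTag, **options):
            return {"dataset": datasetName, "run": runNumber,
                    "globalTag": globalTag, "options": options}


    def getScenario(name):
        # stand-in factory; CMSSW resolves scenario names to Impl classes
        registry = {"cosmics": HypotheticalCosmicsDQM}
        return registry[name]()


    cfg = getScenario("cosmics").dqmHarvesting(
        "/Cosmics/Run2012A-v1/DQM", 190456, "GR_R_53_V2::All",
        inputFiles=["/store/data/example.root"])
    print(cfg)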
def Scenario.Scenario.dropOutputModule(self, processRef, moduleName)
_dropOutputModule_ Utility to prune an unwanted output module.
Definition at line 195 of file Scenario.py.
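A hedged sketch of what pruning an output module can amount to: deleting the named attribute from the process object. SimpleNamespace stands in for a cms.Process, and treating a missing module as a no-op is an assumption:

    from types import SimpleNamespace


    def dropOutputModule(processRef, moduleName):
        # pruning removes the named attribute from the process;
        # a missing module is treated as a no-op here (an assumption)
        if hasattr(processRef, moduleName):
            delattr(processRef, moduleName)
        return processRef


    process = SimpleNamespace(RECOoutput="reco module", ALCARECOoutput="alca module")
    dropOutputModule(process, "ALCARECOoutput")
    print(vars(process))  # only RECOoutput remains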
def Scenario.Scenario.expressProcessing(self, globalTag, options)
_expressProcessing_ Build an express processing configuration for this scenario. Express processing runs conversion, reco and alca reco on each streamer file in the express stream, and writes out RAW, RECO and a combined ALCA file that gets mergepacked in a later step. writeTiers is the list of tiers to write out, not including ALCA. datasets is the list of datasets to split into for each tier written out; this should always be one dataset. alcaDataset - if set, the combined ALCA file is written out with no dataset splitting and is assigned straight to the dataset provided.
Definition at line 48 of file Scenario.py.
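A hedged sketch of the option handling the description implies: writeTiers selects the output tiers, datasets lists one target dataset per tier, and alcaDataset, if set, routes the combined ALCA file to a single dataset with no splitting. Option names and defaults are assumptions for illustration:

    def expressProcessing(globalTag, **options):
        writeTiers = options.get("writeTiers", [])   # tiers to write, ALCA excluded
        datasets = options.get("datasets", [])       # one dataset per written tier
        alcaDataset = options.get("alcaDataset")     # combined ALCA target, if any
        config = {"globalTag": globalTag, "tiers": writeTiers, "datasets": datasets}
        if alcaDataset is not None:
            # no dataset splitting: the combined ALCA file goes straight here
            config["alcaDataset"] = alcaDataset
        return config


    print(expressProcessing("GR_E_V42::All", writeTiers=["RAW", "RECO"],
                            datasets=["/ExpressPhysics/Run2012A-Express-v1/FEVT"],
                            alcaDataset="/StreamExpress/Run2012A-Express-v1/ALCARECO"))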
def Scenario.Scenario.merge(self, inputFiles, options)
_merge_ Builds a merge configuration.
Definition at line 169 of file Scenario.py.
References Merge.mergeProcess().
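Given the cross-reference to Merge.mergeProcess(), merge() plausibly just forwards its input files and options to that helper; a standalone sketch of the delegation with a stand-in helper (the real signature lives in Merge.py):

    def mergeProcess(*inputFiles, **options):
        # stand-in for Merge.mergeProcess(); builds the actual merge configuration
        return {"inputFiles": list(inputFiles), "options": options}


    class HypotheticalScenario(object):
        def merge(self, *inputFiles, **options):
            # the scenario simply forwards to the helper, per the reference above
            return mergeProcess(*inputFiles, **options)


    print(HypotheticalScenario().merge("/store/a.root", "/store/b.root",
                                       output_file="Merged.root"))

repack() below, which references Repack.repackProcess(), presumably follows the same forwarding pattern.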
def Scenario.Scenario.promptReco(self, globalTag, options)
_promptReco_ Given a skeleton process object and references to the output modules for the products it produces, install the standard reco sequences and event content for this scenario.
Definition at line 33 of file Scenario.py.
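A hedged sketch of the override pattern this implies: a concrete scenario takes a skeleton process and installs its reco sequences and event content on it. SkeletonProcess mocks a cms.Process, and the tier handling is illustrative only:

    class SkeletonProcess(object):
        # stand-in for a cms.Process skeleton
        pass


    class HypotheticalRecoScenario(object):
        def promptReco(self, globalTag, **options):
            process = SkeletonProcess()
            process.globalTag = globalTag
            # a real scenario would install the standard reco sequences and
            # the event content for each requested tier here
            process.writeTiers = options.get("writeTiers", ["RECO", "AOD"])
            return process


    proc = HypotheticalRecoScenario().promptReco("GR_P_V56::All", writeTiers=["RECO"])
    print(proc.globalTag, proc.writeTiers)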
def Scenario.Scenario.repack(self, options)
_repack_ Builds a repack configuration.
Definition at line 180 of file Scenario.py.
References Repack.repackProcess().
def Scenario.Scenario.skimming(self, skims, globalTag, options)
_skimming_ Given a process, install the sequences for Tier 1 skimming and the appropriate output modules.
Definition at line 156 of file Scenario.py.
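A hedged sketch of the shape this suggests: one skim path plus one output module installed per requested skim. Plain dicts stand in for the process and its output modules; names like SKIMStream and the path naming are illustrative:

    def skimming(process, skims, globalTag, **options):
        # one path and one output module per requested skim (assumed layout)
        process["globalTag"] = globalTag
        for skim in skims:
            process["paths"][skim] = skim + "Path"
            process["outputModules"]["SKIMStream" + skim] = {"fileName": skim + ".root"}
        return process


    process = {"paths": {}, "outputModules": {}, "globalTag": None}
    skimming(process, ["LogError", "HighMET"], "GR_R_53_V2::All")
    print(sorted(process["outputModules"]))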
def Scenario.Scenario.visualizationProcessing(self, globalTag, options)
_visualizationProcessing_ Build a configuration for the visualization processing for this scenario. Visualization processing runs unpacking and reco on streamer files; it is equipped to run on the online cluster and writes RECO or FEVT files. writeTiers is the list of tiers to write out.
Definition at line 74 of file Scenario.py.