Public Member Functions

def __init__
def addExpressOutputModules
def alcaHarvesting
def alcaReco
def alcaSkim
def dqmHarvesting
def dropOutputModule
def expressProcessing
def merge
def promptReco
def skimming
_Scenario_
Definition at line 19 of file Scenario.py.
def Scenario::Scenario::__init__(self)
Definition at line 24 of file Scenario.py.
def Scenario::Scenario::addExpressOutputModules(self, process, tiers, datasets)
_addExpressOutputModules_ Util method to unpack and install the set of data tier output modules corresponding to the list of tiers and datasets provided
Definition at line 168 of file Scenario.py.
def addExpressOutputModules(self, process, tiers, datasets):
    """
    _addExpressOutputModules_

    Util method to unpack and install the set of data tier
    output modules corresponding to the list of tiers and datasets
    provided

    """
    for tier in tiers:
        for dataset in datasets:
            moduleName = "write%s%s" % (tier, dataset)
            contentName = "%sEventContent" % tier
            contentAttr = getattr(process, contentName)
            setattr(process, moduleName,
                    cms.OutputModule(
                        "PoolOutputModule",
                        fileName = cms.untracked.string('%s.root' % moduleName),
                        dataset = cms.untracked.PSet(
                            dataTier = cms.untracked.string(tier),
                        ),
                        eventContent = contentAttr
                    )
                    )
    return
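The loop above derives both the output-module name and the event-content attribute name purely from the tier/dataset pair. A minimal standalone sketch of that naming convention (the `"RAW"`/`"RECO"` tiers and the `"Express"` dataset are hypothetical inputs, and plain tuples stand in for the real `cms.OutputModule` objects):

```python
# Sketch of the naming convention used by addExpressOutputModules:
# one ("write<Tier><Dataset>", "<Tier>EventContent") pair per combination.
def expressOutputNames(tiers, datasets):
    names = []
    for tier in tiers:
        for dataset in datasets:
            moduleName = "write%s%s" % (tier, dataset)   # output module attribute
            contentName = "%sEventContent" % tier        # event content attribute
            names.append((moduleName, contentName))
    return names

pairs = expressOutputNames(["RAW", "RECO"], ["Express"])
# pairs == [("writeRAWExpress", "RAWEventContent"),
#           ("writeRECOExpress", "RECOEventContent")]
```

Because the event-content attribute is looked up with `getattr(process, contentName)`, each requested tier must already have a matching `<Tier>EventContent` defined on the process.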
def Scenario::Scenario::alcaHarvesting(self, globalTag, datasetName, options)
_alcaHarvesting_ build an AlCa Harvesting configuration Arguments: globalTag - The global tag being used inputFiles - The list of LFNs being harvested
Definition at line 112 of file Scenario.py.
def alcaHarvesting(self, globalTag, datasetName, options):
    """
    _alcaHarvesting_

    build an AlCa Harvesting configuration

    Arguments:

    globalTag - The global tag being used
    inputFiles - The list of LFNs being harvested

    """
    msg = "Scenario Implementation %s\n" % self.__class__.__name__
    msg += "Does not contain an implementation for alcaHarvesting"
    raise NotImplementedError(msg)
def Scenario::Scenario::alcaReco(self, skims, options)
_alcaReco_ Given a skeleton process install the skim production for given skims
Definition at line 80 of file Scenario.py.
def alcaReco(self, skims, options):
    """
    _alcaReco_

    Given a skeleton process install the skim production for given skims

    """
    msg = "Scenario Implementation %s\n" % self.__class__.__name__
    msg += "Does not contain an implementation for alcaReco"
    raise NotImplementedError(msg)
def Scenario::Scenario::alcaSkim(self, skims, options)
_alcaSkim_ Given a skeleton process install the skim splitting for given skims
Definition at line 68 of file Scenario.py.
def alcaSkim(self, skims, options):
    """
    _alcaSkim_

    Given a skeleton process install the skim splitting for given skims

    """
    msg = "Scenario Implementation %s\n" % self.__class__.__name__
    msg += "Does not contain an implementation for alcaSkim"
    raise NotImplementedError(msg)
def Scenario::Scenario::dqmHarvesting(self, datasetName, runNumber, globalTag, options)
_dqmHarvesting_ build a DQM Harvesting configuration. Arguments: datasetName - aka workflow name for DQMServer; this is the name of the dataset containing the harvested run. runNumber - The run being harvested. globalTag - The global tag being used. inputFiles - The list of LFNs being harvested.
Definition at line 92 of file Scenario.py.
def dqmHarvesting(self, datasetName, runNumber, globalTag, options):
    """
    _dqmHarvesting_

    build a DQM Harvesting configuration

    Arguments:

    datasetName - aka workflow name for DQMServer, this is the name of the
                  dataset containing the harvested run
    runNumber - The run being harvested
    globalTag - The global tag being used
    inputFiles - The list of LFNs being harvested

    """
    msg = "Scenario Implementation %s\n" % self.__class__.__name__
    msg += "Does not contain an implementation for dqmHarvesting"
    raise NotImplementedError(msg)
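As the listing shows, every hook on the base class raises `NotImplementedError` with a message naming the concrete scenario class, so a misconfigured workflow fails with a clear diagnostic rather than silently producing nothing. A small self-contained reproduction of that pattern (the `Scenario` class here is a stand-in for the real one, and the dataset name, run number, and global tag are made-up values):

```python
# Stand-in reproducing the stub/error-message pattern of the base class.
class Scenario(object):
    def dqmHarvesting(self, datasetName, runNumber, globalTag, **options):
        # Message format copied from the documented source.
        msg = "Scenario Implementation %s\n" % self.__class__.__name__
        msg += "Does not contain an implementation for dqmHarvesting"
        raise NotImplementedError(msg)

message = None
try:
    # Illustrative arguments only; any values trigger the stub.
    Scenario().dqmHarvesting("/A/B/DQM", 123456, "GT::All")
except NotImplementedError as err:
    message = str(err)
# message names both the scenario class and the missing hook
```

Because `self.__class__.__name__` is used, a subclass that forgets to override a hook reports its own name in the error, which is what makes the diagnostic useful in practice.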
def Scenario::Scenario::dropOutputModule(self, processRef, moduleName)
_dropOutputModule_ Util to prune an unwanted output module
Definition at line 157 of file Scenario.py.
def Scenario::Scenario::expressProcessing(self, globalTag, writeTiers = [], options)
_expressProcessing_ Build an express processing configuration for this scenario. Express processing runs conversion, reco and alca reco on each streamer file in the express stream and writes out RAW, RECO and a combined ALCA file that gets mergepacked in a later step. writeTiers is the list of tiers to write out, not including ALCA. datasets is the list of datasets to split into for each tier written out; should always be one dataset. alcaDataset - if set, the combined ALCA file is written out with no dataset splitting; it gets assigned straight to the dataset provided.
Definition at line 43 of file Scenario.py.
def expressProcessing(self, globalTag, writeTiers = [], **options):
    """
    _expressProcessing_

    Build an express processing configuration for this scenario.

    Express processing runs conversion, reco and alca reco on each
    streamer file in the express stream and writes out RAW, RECO and
    a combined ALCA file that gets mergepacked in a later step

    writeTiers is list of tiers to write out, not including ALCA

    datasets is the list of datasets to split into for each tier
    written out. Should always be one dataset

    alcaDataset - if set, this means the combined Alca file is written
    out with no dataset splitting, it gets assigned straight to the dataset
    provided

    """
    msg = "Scenario Implementation %s\n" % self.__class__.__name__
    msg += "Does not contain an implementation for expressProcessing"
    raise NotImplementedError(msg)
def Scenario::Scenario::merge(self, inputFiles, options)
_merge_ builds a merge configuration
Definition at line 142 of file Scenario.py.
def Scenario::Scenario::promptReco(self, globalTag, writeTiers = ['RECO'], options)
_installPromptReco_ Given a skeleton process object and references to the output modules for the products it produces, install the standard reco sequences and event content for this scenario.
Definition at line 28 of file Scenario.py.
def promptReco(self, globalTag, writeTiers = ['RECO'], **options):
    """
    _installPromptReco_

    given a skeleton process object and references
    to the output modules for the products it produces,
    install the standard reco sequences and event content for this
    scenario

    """
    msg = "Scenario Implementation %s\n" % self.__class__.__name__
    msg += "Does not contain an implementation for promptReco"
    raise NotImplementedError(msg)
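Concrete scenarios are expected to subclass `Scenario` and override hooks such as `promptReco`. A hedged sketch of that shape (the `CosmicRun` subclass is invented for illustration, and a plain dict stands in for the `cms.Process` a real override would assemble and return):

```python
# Stand-in base class mirroring the documented stub behaviour.
class Scenario(object):
    def promptReco(self, globalTag, writeTiers=['RECO'], **options):
        msg = "Scenario Implementation %s\n" % self.__class__.__name__
        msg += "Does not contain an implementation for promptReco"
        raise NotImplementedError(msg)

class CosmicRun(Scenario):
    """Hypothetical concrete scenario overriding the promptReco hook."""
    def promptReco(self, globalTag, writeTiers=['RECO'], **options):
        # A real implementation would build and return a configured
        # cms.Process with reco sequences and per-tier output modules.
        return {"globalTag": globalTag, "tiers": list(writeTiers)}

cfg = CosmicRun().promptReco("GT::All", writeTiers=["RECO", "AOD"])
# cfg == {"globalTag": "GT::All", "tiers": ["RECO", "AOD"]}
```

Any hook left unoverridden falls through to the base class and raises `NotImplementedError`, naming the subclass in the message.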
def Scenario::Scenario::skimming(self, skims, options)
_skimming_ Given a process install the sequences for Tier 1 skimming and the appropriate output modules
Definition at line 129 of file Scenario.py.
def skimming(self, skims, options):
    """
    _skimming_

    Given a process install the sequences for Tier 1 skimming
    and the appropriate output modules

    """
    msg = "Scenario Implementation %s\n" % self.__class__.__name__
    msg += "Does not contain an implementation for skimming"
    raise NotImplementedError(msg)