"""
_Scenario_

Standard cmsRun Process building interface used for data processing
for a particular data scenario.

A scenario is a macro-data-taking setting such as cosmic running,
beam halo running, or particular validation tests.

This class defines the interfaces used by the Tier 0 and Tier 1
processing to wrap calls to ConfigBuilder in order to retrieve all the
configurations for the various types of job.

"""

import FWCore.ParameterSet.Config as cms
from Configuration.DataProcessing.Merge import mergeProcess


class Scenario(object):
    """
    _Scenario_

    """
    def promptReco(self, globalTag, writeTiers = ['RECO'], **options):
        """
        _promptReco_

        Given a skeleton process object and references
        to the output modules for the products it produces,
        install the standard reco sequences and event content for this
        scenario.

        """
        msg = "Scenario Implementation %s\n" % self.__class__.__name__
        msg += "Does not contain an implementation for promptReco"
        raise NotImplementedError(msg)
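
    # Example (hypothetical sketch, not a real scenario): concrete scenarios
    # are expected to subclass Scenario and override these hooks, typically
    # wrapping ConfigBuilder to assemble the process, e.g.:
    #
    #     class MyScenario(Scenario):
    #         def promptReco(self, globalTag, writeTiers = ['RECO'], **options):
    #             process = cms.Process("RECO")
    #             # ... install reco sequences and output modules here ...
    #             return process
    #
    # so callers presumably get back a fully configured cms.Process.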
    def expressProcessing(self, globalTag, writeTiers = [],
                          datasets = [], alcaDataset = None, **options):
        """
        _expressProcessing_

        Build an express processing configuration for this scenario.

        Express processing runs conversion, reco and alca reco on each
        streamer file in the express stream and writes out RAW, RECO and
        a combined ALCA file that gets mergepacked in a later step.

        writeTiers is the list of tiers to write out, not including ALCA.

        datasets is the list of datasets to split into for each tier
        written out. Should always be one dataset.

        alcaDataset - if set, the combined ALCA file is written out with
        no dataset splitting and gets assigned straight to the dataset
        given.

        """
        msg = "Scenario Implementation %s\n" % self.__class__.__name__
        msg += "Does not contain an implementation for expressProcessing"
        raise NotImplementedError(msg)
    def alcaSkim(self, skims, **options):
        """
        _alcaSkim_

        Given a skeleton process, install the skim splitting for the
        given skims.

        """
        msg = "Scenario Implementation %s\n" % self.__class__.__name__
        msg += "Does not contain an implementation for alcaSkim"
        raise NotImplementedError(msg)
    def alcaReco(self, *skims, **options):
        """
        _alcaReco_

        Given a skeleton process, install the skim production for the
        given skims.

        """
        msg = "Scenario Implementation %s\n" % self.__class__.__name__
        msg += "Does not contain an implementation for alcaReco"
        raise NotImplementedError(msg)
    def dqmHarvesting(self, datasetName, runNumber, globalTag, **options):
        """
        _dqmHarvesting_

        Build a DQM Harvesting configuration.

        Arguments:

        datasetName - aka workflow name for DQMServer, this is the name
        of the dataset containing the harvested run
        runNumber - The run being harvested
        globalTag - The global tag being used
        inputFiles - The list of LFNs being harvested

        """
        msg = "Scenario Implementation %s\n" % self.__class__.__name__
        msg += "Does not contain an implementation for dqmHarvesting"
        raise NotImplementedError(msg)
    def alcaHarvesting(self, globalTag, **options):
        """
        _alcaHarvesting_

        Build an AlCa Harvesting configuration.

        Arguments:

        globalTag - The global tag being used
        inputFiles - The list of LFNs being harvested

        """
        msg = "Scenario Implementation %s\n" % self.__class__.__name__
        msg += "Does not contain an implementation for alcaHarvesting"
        raise NotImplementedError(msg)
    def skimming(self, *skims, **options):
        """
        _skimming_

        Given a process, install the sequences for Tier 1 skimming
        and the appropriate output modules.

        """
        msg = "Scenario Implementation %s\n" % self.__class__.__name__
        msg += "Does not contain an implementation for skimming"
        raise NotImplementedError(msg)
    def merge(self, *inputFiles, **options):
        """
        _merge_

        Builds a merge configuration.

        """
        return mergeProcess(*inputFiles, **options)
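
    # Note: unlike the abstract hooks above, merge has a usable default: it
    # forwards its arguments straight to mergeProcess from
    # Configuration.DataProcessing.Merge.  A hypothetical call (the LFNs
    # below are made up for illustration) might look like:
    #
    #     process = Scenario().merge(
    #         "/store/express/fileA.root",
    #         "/store/express/fileB.root",
    #     )
    #
    # with any keyword options passed through to mergeProcess unchanged.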
    def dropOutputModule(self, process, moduleName):
        """
        _dropOutputModule_

        Util to prune an unwanted output module.

        """
        del process._Process__outputmodules[moduleName]
        return
def addExpressOutputModules(process, tiers, datasets):
    """
    _addExpressOutputModules_

    Util method to unpack and install the set of data tier
    output modules corresponding to the list of tiers and datasets
    provided.

    """
    for tier in tiers:
        for dataset in datasets:
            moduleName = "write%s%s" % (tier, dataset)
            contentName = "%sEventContent" % tier
            contentAttr = getattr(process, contentName)
            setattr(
                process, moduleName,
                cms.OutputModule(
                    "PoolOutputModule",
                    fileName = cms.untracked.string('%s.root' % moduleName),
                    dataset = cms.untracked.PSet(
                        dataTier = cms.untracked.string(tier),
                    ),
                    eventContent = contentAttr,
                )
            )
    return process
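
The two utils above lean on small pieces of plain Python that can be checked without CMSSW: the dynamic `write<Tier><Dataset>` naming scheme used by `addExpressOutputModules`, and the name-mangled attribute access (`process._Process__outputmodules`) that `dropOutputModule` relies on. A minimal sketch with stand-in objects (the `DemoProcess` class and the tier/dataset lists are illustrative assumptions, not CMSSW types):

```python
# Naming scheme: one output module per (tier, dataset) pair,
# named write<Tier><Dataset>, as in addExpressOutputModules.
names = ["write%s%s" % (tier, dataset)
         for tier in ["RAW", "RECO"]       # illustrative tiers
         for dataset in ["Express"]]       # illustrative dataset
assert names == ["writeRAWExpress", "writeRECOExpress"]


# Name mangling: a double-underscore attribute defined inside class
# DemoProcess is stored on instances as _DemoProcess__outputmodules --
# the same mechanism that makes process._Process__outputmodules
# reachable from dropOutputModule.
class DemoProcess(object):
    def __init__(self):
        self.__outputmodules = {"writeRECOExpress": object()}


demo = DemoProcess()
del demo._DemoProcess__outputmodules["writeRECOExpress"]
assert demo._DemoProcess__outputmodules == {}
```

The deletion works from outside the class only because Python rewrites `__outputmodules` to `_<ClassName>__outputmodules` at compile time; `dropOutputModule` exploits exactly that rewrite for `cms.Process`.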