"""
Standard cmsRun Process building interface used for data processing
for a particular data scenario.

A scenario is a macro-data-taking setting such as cosmic running,
beam halo running, or particular validation tests.

This class defines the interfaces used by the Tier 0 and Tier 1
processing to wrap calls to ConfigBuilder in order to retrieve all the
configurations for the various types of job.
"""
import FWCore.ParameterSet.Config as cms
from Configuration.DataProcessing.Merge import mergeProcess
from Configuration.DataProcessing.Repack import repackProcess
from Configuration.Applications.ConfigBuilder import ConfigBuilder, Options, defaultOptions


class Scenario(object):
    """
    Interface for building cmsRun process configurations for a data
    scenario. Concrete scenarios override the methods they support.
    """
    def promptReco(self, globalTag, **options):
        """
        Given a skeleton process object and references
        to the output modules for the products it produces,
        install the standard reco sequences and event content for this
        scenario.
        """
        msg = "Scenario Implementation %s\n" % self.__class__.__name__
        msg += "Does not contain an implementation for promptReco"
        raise NotImplementedError(msg)
    def expressProcessing(self, globalTag, writeTiers=[], datasets=[], alcaDataset=None, **options):
        """
        Build an express processing configuration for this scenario.

        Express processing runs conversion, reco and alca reco on each
        streamer file in the express stream and writes out RAW, RECO and
        a combined ALCA file that gets mergepacked in a later step.

        writeTiers is the list of tiers to write out, not including ALCA.

        datasets is the list of datasets to split into for each tier
        written out. Should always be one dataset.

        alcaDataset - if set, the combined ALCA file is written out with
        no dataset splitting; it gets assigned straight to that dataset.
        """
        msg = "Scenario Implementation %s\n" % self.__class__.__name__
        msg += "Does not contain an implementation for expressProcessing"
        raise NotImplementedError(msg)
    def visualizationProcessing(self, globalTag, writeTiers=[], **options):
        """
        Build a configuration for the visualization processing for this
        scenario.

        Visualization processing runs unpacking and reco on streamer
        files; it is equipped to run on the online cluster and writes
        RECO or FEVT files.

        writeTiers is the list of tiers to write out.
        """
        msg = "Scenario Implementation %s\n" % self.__class__.__name__
        msg += "Does not contain an implementation for visualizationProcessing"
        raise NotImplementedError(msg)
    def alcaSkim(self, skims, **options):
        """
        Given a skeleton process, install the skim splitting for the
        given skims.
        """
        msg = "Scenario Implementation %s\n" % self.__class__.__name__
        msg += "Does not contain an implementation for alcaSkim"
        raise NotImplementedError(msg)
    def alcaReco(self, *skims, **options):
        """
        Given a skeleton process, install the skim production for the
        given skims.
        """
        msg = "Scenario Implementation %s\n" % self.__class__.__name__
        msg += "Does not contain an implementation for alcaReco"
        raise NotImplementedError(msg)
    def dqmHarvesting(self, datasetName, runNumber, globalTag, **options):
        """
        Build a DQM Harvesting configuration.

        Arguments:

        datasetName - aka workflow name for DQMServer; this is the name
        of the dataset containing the harvested run
        runNumber - the run being harvested
        globalTag - the global tag being used
        inputFiles - the list of LFNs being harvested
        """
        msg = "Scenario Implementation %s\n" % self.__class__.__name__
        msg += "Does not contain an implementation for dqmHarvesting"
        raise NotImplementedError(msg)
    def alcaHarvesting(self, globalTag, **options):
        """
        Build an AlCa Harvesting configuration.

        Arguments:

        globalTag - the global tag being used
        inputFiles - the list of LFNs being harvested
        """
        msg = "Scenario Implementation %s\n" % self.__class__.__name__
        msg += "Does not contain an implementation for alcaHarvesting"
        raise NotImplementedError(msg)
    def skimming(self, *skims, **options):
        """
        Given a process, install the sequences for Tier 1 skimming
        and the appropriate output modules.
        """
        msg = "Scenario Implementation %s\n" % self.__class__.__name__
        msg += "Does not contain an implementation for skimming"
        raise NotImplementedError(msg)
    def merge(self, *inputFiles, **options):
        """
        Build a merge configuration by delegating to mergeProcess.
        """
        return mergeProcess(*inputFiles, **options)
    def repack(self, **options):
        """
        Build a repack configuration by delegating to repackProcess.
        """
        return repackProcess(**options)
    def dropOutputModule(self, process, moduleName):
        """
        Util to prune an unwanted output module from the process.
        """
        # cms.Process stores output modules in a name-mangled private
        # dict; delete the entry to drop the module.
        del process._Process__outputmodules[moduleName]
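
# Example (illustrative sketch only, not part of the production
# interface): a concrete scenario subclasses Scenario and overrides just
# the job types it supports, leaving the rest to raise
# NotImplementedError. The class name and method body below are
# hypothetical.
#
#     class CosmicsScenario(Scenario):
#         """Hypothetical scenario for cosmic running."""
#
#         def promptReco(self, globalTag, **options):
#             # A real implementation would use ConfigBuilder to
#             # assemble a cms.Process with the cosmic reconstruction
#             # sequences and event content, then return that process
#             # to the Tier 0 framework.
#             ...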