parserPerfsuiteMetadata.parserPerfsuiteMetadata Class Reference

Public Member Functions

def __init__
 
def doQuery
 
def findLineAfter
 
def findLineBefore
 
def firstTimeStampAfter
 
def firstTimeStampBefore
 
def get_tarball_fromlog
 
def getIgSummary
 
def getMachineInfo
 
def getSummaryInfo
 
def handleParsingError
 
def parseAll
 
def parseAllOtherTests
 
def parseGeneralInfo
 
def parseTheCompletion
 
def parseTimeSize
 
def readCmsScimark
 
def readCmsScimarkTest
 
def readInput
 
def validateSteps
 

Static Public Member Functions

def findFirstIndex_ofStartsWith
 
def isTimeStamp
 

Public Attributes

 lines_general
 
 lines_other
 
 lines_timesize
 
 missing_fields
 
 reCmsScimarkTest
 

Private Member Functions

def _applyParsingRules
 

Private Attributes

 _DEBUG
 
 _MAX_STEPS
 
 _otherStart
 
 _path
 
 _timeSizeEnd
 
 _timeSizeStart
 

Static Private Attributes

string _LINE_SEPARATOR = "|"
 

Detailed Description

    The parsing works as follows. We split the file into three parts, kept as three lists of lines (self.lines_general, self.lines_timesize, self.lines_other):

        * General info
    Most of this information consists of simple one-line strings, so we define a regular expression for each such line. Each regular expression is associated with the data we want to extract from it; e.g. ^Suite started at (.+) on (.+) by user (.+)$ matches only the line stating when the suite started and on which machine, and it is associated with a tuple of field names for the general info, so we end up with info = {'start_time': start-taken-from-regexp, 'host': host, 'user': user}. This is done by the helper _applyParsingRules, which checks every line against every rule and, on a match, fills the corresponding fields into the result dictionary (a minimal sketch of this rule mechanism is given at the end of this description).
    Additionally we get the CPU and memory info from /proc/cpuinfo and /proc/meminfo.

        * TimeSize test
    We use much the same technique, but first we divide the TimeSize lines by job (an individual run of cmssw, per candle and pileup/no pileup). For each job we then apply our parsing rules and also find the start and end times (e.g. we know that the start timestamp comes shortly after the line containing "Written out cmsRelvalreport.py input file at:").

        * All other tests
    We find the statement that the test is being launched (it contains the test name, the core and the number of events). Above it is the thread identifier, and below it the starting time.
    The ending time can ONLY be connected to the starting time via the thread ID, because the log names the same test instance differently, e.g. <Launching "PILE UP Memcheck"> versus <"Memcheck" stopped>.
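The rule mechanism described under "General info" can be illustrated with a minimal, self-contained sketch. This is not the actual parsingRulesHelper.rulesParser implementation, only an approximation of its observable behaviour; the host name and user below are made-up values.

import re

# Each rule: (field names, regular expression, optional "req" flag).
parsing_rules = (
    (("start_time", "host", "user"),
     r"^Suite started at (.+) on (.+) by user (.+)$", "req"),
)

lines = ["Suite started at Thu Aug 13 14:53:37 2009 on lxbuild106.cern.ch by user joe"]

info, missing_fields = {}, []
for rule in parsing_rules:
    fields, regexp = rule[0], re.compile(rule[1])
    required = len(rule) > 2 and rule[2] == "req"
    matched = False
    for line in lines:
        m = regexp.match(line)
        if m:
            # empty field names in the real rules act as placeholders for ignored groups
            info.update(dict(zip(fields, m.groups())))
            matched = True
            break
    if required and not matched:
        missing_fields.append(fields)

# info == {'start_time': 'Thu Aug 13 14:53:37 2009',
#          'host': 'lxbuild106.cern.ch', 'user': 'joe'}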

Definition at line 8 of file parserPerfsuiteMetadata.py.

Constructor & Destructor Documentation

def parserPerfsuiteMetadata.parserPerfsuiteMetadata.__init__ (   self,
  path 
)

Definition at line 28 of file parserPerfsuiteMetadata.py.

28 
29  def __init__(self, path):
30 
31  self._MAX_STEPS = 5 # MAXIMUM NUMBER OF STEPS PER RUN (taskset relvalreport.py...)
32  self._DEBUG = False
33 
34 
35  self._path = path
36 
37  """ some initialisation to speedup the other functions """
38  #for cmsscimark
39  self.reCmsScimarkTest = re.compile(r"""^Composite Score:(\s*)([^\s]+)$""")
40 
41  #TimeSize
42  """ the separator for beginning of timeSize / end of general statistics """
43  self._timeSizeStart = re.compile(r"""^Launching the TimeSize tests \(TimingReport, TimeReport, SimpleMemoryCheck, EdmSize\) with (\d+) events each$""")
44  """ (the first timestamp is the start of TimeSize) """
45 
46 
47  """ the separator for end of timeSize / beginning of IgProf_Perf, IgProf_Mem, Memcheck, Callgrind tests """
48  self._timeSizeEnd = re.compile(r"""^Stopping all cmsScimark jobs now$""")
49 
50  #Other tests:
51  self._otherStart = re.compile(r"^Preparing")
52 
53  """
54  ----- READ THE DATA -----
55  """
56  lines = self.readInput(path)
57  """ split the whole file into parts """
58  #Let's not assume there are ALWAYS TimeSize tests in the runs of the Performance Suite!:
59  #Check first:
60  #FIXME: Vidmantas did not think to this case... will need to implement protectionb against it for all the IB tests...
61  #To do as soon as possible...
62  #Maybe revisit the strategy if it can be done quickly.
63  timesize_end= [lines.index(line) for line in lines if self._timeSizeEnd.match(line)]
64  if timesize_end:
65  timesize_end_index = timesize_end[0]
66  else:
67  timesize_end_index=0
68  timesize_start=[lines.index(line) for line in lines if self._timeSizeStart.match(line)]
69  general_stop=[lines.index(line) for line in lines if self._otherStart.match(line)]
70  if timesize_start:
71  timesize_start_index = timesize_start[0]
72  general_stop_index=timesize_start_index
73  elif general_stop:
74  timesize_start_index=0
75  general_stop_index=general_stop[0]
76  else:
77  timesize_start_index=0
78  general_stop_index=-1
79 
80  """ we split the structure:
81  * general
82  * timesize
83  * all others [igprof etc]
84  """
85 
86  """ we get the indexes of spliting """
87  #Not OK to use timsize_start_index for the general lines... want to be general, also to cases of no TimeSize tests...
88  #self.lines_general = lines[:timesize_start_index]
89  self.lines_general = lines[:general_stop_index]
90  self.lines_timesize = lines[timesize_start_index:timesize_end_index+1]
91  self.lines_other = lines[timesize_end_index:]
92 
93  """ a list of missing fields """
94  self.missing_fields = []

Member Function Documentation

def parserPerfsuiteMetadata.parserPerfsuiteMetadata._applyParsingRules (   self,
  parsing_rules,
  lines 
)
private
    Applies the provided regular expression rules (rule[1] for each rule in parsing_rules)
    to each line; if a rule matches a line,
    the matched information is put into the returned dictionary under the keys specified by rule[0].
    The optional third element of a rule states whether the field is required; if it is required and not found, it is recorded as a missing field.
    rules = [
      ( (field_name_1_to_match, field_name_2), regular expression, /optionally: is the field required? if so "req"/ )
    ]
    The actual matching is delegated to the shared parsing helper parsingRulesHelper.rulesParser().

Definition at line 235 of file parserPerfsuiteMetadata.py.

References parsingRulesHelper.rulesParser().

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseGeneralInfo(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseTheCompletion(), and parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseTimeSize().

236  def _applyParsingRules(self, parsing_rules, lines):
237  """
238  Applies the (provided) regular expression rules (=rule[1] for rule in parsing_rules)
239  to each line and if it matches the line,
240  puts the mached information to the dictionary as the specified keys (=rule[0]) which is later returned
241  Rule[3] contains whether the field is required to be found. If so and it isn't found the exception would be raised.
242  rules = [
243  ( (field_name_1_to_match, field_name_2), regular expression, /optionaly: is the field required? if so "req"/ )
244  ]
245  """
246  """ we call a shared parsing helper """
247  #parsing_rules = map(parsingRulesHelper.rulesRegexpCompileFunction, parsing_rules)
248  #print parsing_rules
249  (info, missing_fields) = parsingRulesHelper.rulesParser(parsing_rules, lines, compileRules = True)
250 
251  self.missing_fields.extend(missing_fields)
252 
253  return info
254 
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.doQuery (   self,
  query,
  database 
)

Definition at line 631 of file parserPerfsuiteMetadata.py.

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.getSummaryInfo().

632  def doQuery(self, query, database):
633  if os.path.exists("/usr/bin/sqlite3"):
634  sqlite="/usr/bin/sqlite3"
635  else:
636  sqlite="/afs/cern.ch/user/e/eulisse/www/bin/sqlite"
637  return getstatusoutput("echo '%s' | %s -separator @@@ %s" % (query, sqlite, database))
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.findFirstIndex_ofStartsWith (   job_lines,
  start_of_line 
)
static

Definition at line 113 of file parserPerfsuiteMetadata.py.

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseGeneralInfo(), and parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseTimeSize().

114  def findFirstIndex_ofStartsWith(job_lines, start_of_line):
115  return [job_lines.index(line)
116  for line in job_lines
117  if line.startswith(start_of_line)][0]
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.findLineAfter (   self,
  line_index,
  lines,
  test_condition,
  return_index = False 
)
finds a line satisfying the `test_condition` coming after the `line_index` 

Definition at line 129 of file parserPerfsuiteMetadata.py.

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.firstTimeStampAfter(), and parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseTimeSize().

130  def findLineAfter(self, line_index, lines, test_condition, return_index = False):
131  """ finds a line satisfying the `test_condition` comming after the `line_index` """
132  # we're going forward the lines list
133  for line_index in xrange(line_index + 1, len(lines)):
134  line = lines[line_index]
135 
136  if test_condition(line):
137  if return_index:
138  return line_index
139  return line
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.findLineBefore (   self,
  line_index,
  lines,
  test_condition 
)
finds a line satisfying the `test_condition` coming before the `line_index` 

Definition at line 118 of file parserPerfsuiteMetadata.py.

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.firstTimeStampBefore(), and parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseAllOtherTests().

119  def findLineBefore(self, line_index, lines, test_condition):
120  """ finds a line satisfying the `test_condition` comming before the `line_index` """
121  # we're going backwards the lines list
122  for line_index in xrange(line_index -1, -1, -1):
123  line = lines[line_index]
124 
125  if test_condition(line):
126  return line
127  raise ValueError
128 
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.firstTimeStampAfter (   self,
  line_index,
  lines 
)
returns the first timestamp AFTER the line with given index 

Definition at line 145 of file parserPerfsuiteMetadata.py.

References parserPerfsuiteMetadata.parserPerfsuiteMetadata.findLineAfter(), and parserPerfsuiteMetadata.parserPerfsuiteMetadata.isTimeStamp().

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseAllOtherTests(), and parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseTimeSize().

146  def firstTimeStampAfter(self, line_index, lines):
147  """ returns the first timestamp AFTER the line with given index """
148 
149  return self.findLineAfter(line_index, lines, test_condition = self.isTimeStamp)
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.firstTimeStampBefore (   self,
  line_index,
  lines 
)
returns the first timestamp BEFORE the line with given index 

Definition at line 140 of file parserPerfsuiteMetadata.py.

References parserPerfsuiteMetadata.parserPerfsuiteMetadata.findLineBefore(), and parserPerfsuiteMetadata.parserPerfsuiteMetadata.isTimeStamp().

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseAllOtherTests(), and parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseTimeSize().

141  def firstTimeStampBefore(self, line_index, lines):
142  """ returns the first timestamp BEFORE the line with given index """
143 
144  return self.findLineBefore(line_index, lines, test_condition = self.isTimeStamp)
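A standalone sketch of how these helpers combine with isTimeStamp() to bracket a test; the Memcheck messages and timestamps are the ones quoted later in the parseAllOtherTests() docstring, plus one fabricated filler line.

import time

DATETIME_FORMAT = "%a %b %d %H:%M:%S %Y"   # format expected by isTimeStamp()

def is_timestamp(line):
    try:
        time.strptime(line, DATETIME_FORMAT)
        return True
    except ValueError:
        return False

lines = [
    "Launching the Memcheck tests on cpu 3 with 5 events each",   # index 0
    "Fri Aug 14 01:16:03 2009",                                   # index 1
    "some intermediate output",                                   # index 2 (fabricated)
    "Fri Aug 14 02:13:18 2009",                                   # index 3
    "Memcheck test, in thread <simpleGenReportThread(Thread-1, stopped)> is done running on core 3",
]

# firstTimeStampAfter(0, lines): scan forward starting at index 1
start = next(l for l in lines[1:] if is_timestamp(l))            # "Fri Aug 14 01:16:03 2009"
# firstTimeStampBefore(4, lines): scan backwards starting at index 3
end = next(l for l in reversed(lines[:4]) if is_timestamp(l))    # "Fri Aug 14 02:13:18 2009"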
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.get_tarball_fromlog (   self)
Return the tarball castor location by parsing the cmsPerfSuite.log file

Definition at line 690 of file parserPerfsuiteMetadata.py.

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseTheCompletion().

691  def get_tarball_fromlog(self):
692  '''Return the tarball castor location by parsing the cmsPerfSuite.log file'''
693  print "Getting the url from the cmsPerfSuite.log"
694  log=open("cmsPerfSuite.log","r")
695  castor_dir="UNKNOWN_CASTOR_DIR"
696  tarball="UNKNOWN_TARBALL"
697  for line in log.readlines():
698  if 'castordir' in line:
699  castor_dir=line.split()[1]
700  if 'tgz' in line and tarball=="UNKNOWN_TARBALL": #Pick the first line that contains the tar command...
701  if 'tar' in line:
702  tarball=os.path.basename(line.split()[2])
703  castor_tarball=os.path.join(castor_dir,tarball)
704  return castor_tarball
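The two kinds of lines this method looks for can be illustrated on fabricated cmsPerfSuite.log content; the CASTOR directory and tarball name below are hypothetical.

import os

log_lines = [
    "castordir /castor/cern.ch/cms/store/relval/performance/",
    "tar -czf CMSSW_3_2_4_PerfSuite.tgz results/",
]

castor_dir = "UNKNOWN_CASTOR_DIR"
tarball = "UNKNOWN_TARBALL"
for line in log_lines:
    if 'castordir' in line:
        castor_dir = line.split()[1]
    if 'tgz' in line and tarball == "UNKNOWN_TARBALL" and 'tar' in line:
        tarball = os.path.basename(line.split()[2])   # third token of the tar command

castor_tarball = os.path.join(castor_dir, tarball)
# -> "/castor/cern.ch/cms/store/relval/performance/CMSSW_3_2_4_PerfSuite.tgz"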
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.getIgSummary (   self)

Definition at line 602 of file parserPerfsuiteMetadata.py.

References parserPerfsuiteMetadata.parserPerfsuiteMetadata._path and parserPerfsuiteMetadata.parserPerfsuiteMetadata.getSummaryInfo().

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseAll().

603  def getIgSummary(self):
604  igresult = []
605  globbed = glob.glob(os.path.join(self._path, "../*/IgProfData/*/*/*.sql3"))
606 
607  for f in globbed:
608  #print f
609  profileInfo = self.getSummaryInfo(f)
610  if not profileInfo:
611  continue
612  cumCounts, cumCalls = profileInfo
613  dump, architecture, release, rest = f.rsplit("/", 3)
614  candle, sequence, pileup, conditions, process, counterType, events = rest.split("___")
615  events = events.replace(".sql3", "")
616  igresult.append({"counter_type": counterType, "event": events, "cumcounts": cumCounts, "cumcalls": cumCalls})
617 
618  return igresult
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.getMachineInfo (   self)
Returns the CPU and memory info.
CPU info:
we assume that:
 * num_cores = max(processor id + 1) [the 'processor' entries are counted from 0]
 * 'model name' is the processor type [we return only the first one and assume the others are the same]
 * 'cpu MHz' is the CPU speed [note: this is the current, not necessarily the maximum, frequency]
For example, a /proc/cpuinfo entry looks like:
    model name  : Intel(R) Core(TM)2 Duo CPU     L9400  @ 1.86GHz
    cpu MHz     : 800.000
    cache size  : 6144 KB
Definition at line 175 of file parserPerfsuiteMetadata.py.

References parserPerfsuiteMetadata.parserPerfsuiteMetadata._path.

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseAll().

176  def getMachineInfo(self):
177  """ Returns the cpu and memory info """
178 
179  """ cpu info """
180 
181  """
182  we assume that:
183  * num_cores = max(core id+1) [it's counted from 0]
184  * 'model name' is processor type [we will return only the first one - we assume others to be same!!??
185  * cpu MHz - is the speed of CPU
186  """
187  #TODO: BUT cpu MHz show not the maximum speed but current,
188  """
189  for
190  model name : Intel(R) Core(TM)2 Duo CPU L9400 @ 1.86GHz
191  cpu MHz : 800.000
192  cache size : 6144 KB
193  """
194  cpu_result = {}
195  try:
196  f= open(os.path.join(self._path, "cpuinfo"), "r")
197 
198  #we split data into a list of tuples = [(attr_name, attr_value), ...]
199  cpu_attributes = [l.strip().split(":") for l in f.readlines()]
200  #print cpu_attributes
201  f.close()
202  cpu_result = {
203  "num_cores": max ([int(attr[1].strip())+1 for attr in cpu_attributes if attr[0].strip() == "processor"]), #Bug... Vidmantas used "core id"
204  "cpu_speed_MHZ": max ([attr[1].strip() for attr in cpu_attributes if attr[0].strip() == "cpu MHz"]),
205  "cpu_cache_size": [attr[1].strip() for attr in cpu_attributes if attr[0].strip() == "cache size"][0],
206  "cpu_model_name": [attr[1].strip() for attr in cpu_attributes if attr[0].strip() == "model name"][0]
207  }
208  except IOError,e:
209  print e
210 
211 
212 
213 
214 
215  """ memory info """
216  mem_result = {}
217 
218  try:
219  f= open(os.path.join(self._path, "meminfo"), "r")
220 
221  #we split data into a list of tuples = [(attr_name, attr_value), ...]
222  mem_attributes = [l.strip().split(":") for l in f.readlines()]
223 
224  mem_result = {
225  "memory_total_ram": [attr[1].strip() for attr in mem_attributes if attr[0].strip() == "MemTotal"][0]
226  }
227 
228  except IOError,e:
229  print e
230 
231  cpu_result.update(mem_result)
232  return cpu_result
233 
234 
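How the /proc/cpuinfo parsing works can be shown on a small fabricated excerpt with two 'processor' entries of the model quoted above.

# fabricated two-processor /proc/cpuinfo excerpt
sample = """processor   : 0
model name  : Intel(R) Core(TM)2 Duo CPU     L9400  @ 1.86GHz
cpu MHz     : 800.000
cache size  : 6144 KB
processor   : 1
model name  : Intel(R) Core(TM)2 Duo CPU     L9400  @ 1.86GHz
cpu MHz     : 800.000
cache size  : 6144 KB"""

# split every line into an (attribute name, attribute value) pair
cpu_attributes = [l.strip().split(":") for l in sample.splitlines()]

num_cores = max(int(attr[1].strip()) + 1
                for attr in cpu_attributes if attr[0].strip() == "processor")   # 2
cpu_model = [attr[1].strip() for attr in cpu_attributes
             if attr[0].strip() == "model name"][0]   # "Intel(R) Core(TM)2 Duo CPU     L9400  @ 1.86GHz"
cpu_speed = max(attr[1].strip() for attr in cpu_attributes
                if attr[0].strip() == "cpu MHz")      # "800.000" (current, not maximum, frequency)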
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.getSummaryInfo (   self,
  database 
)

Definition at line 619 of file parserPerfsuiteMetadata.py.

References parserPerfsuiteMetadata.parserPerfsuiteMetadata.doQuery().

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.getIgSummary().

620  def getSummaryInfo(self, database):
621  summary_query="""SELECT counter, total_count, total_freq, tick_period
622  FROM summary;"""
623  error, output = self.doQuery(summary_query, database)
624  if error or not output or output.count("\n") > 1:
625  return None
626  counter, total_count, total_freq, tick_period = output.split("@@@")
627  if counter == "PERF_TICKS":
628  return float(tick_period) * float(total_count), int(total_freq)
629  else:
630  return int(total_count), int(total_freq)
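getSummaryInfo() decodes the single "@@@"-separated row that doQuery() returns for the IgProf summary table; the decoding can be sketched on a hypothetical row (the numbers are invented).

# hypothetical row returned by doQuery() for the summary table
output = "PERF_TICKS@@@123456@@@789@@@0.01"

counter, total_count, total_freq, tick_period = output.split("@@@")
if counter == "PERF_TICKS":
    # scale the tick count by the tick period, keep the frequency as an int
    cumcounts, cumcalls = float(tick_period) * float(total_count), int(total_freq)
else:
    # any other counter: keep the raw count and frequency
    cumcounts, cumcalls = int(total_count), int(total_freq)

# cumcounts is about 1234.56, cumcalls == 789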
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.handleParsingError (   self,
  message 
)

Definition at line 150 of file parserPerfsuiteMetadata.py.

References parserPerfsuiteMetadata.parserPerfsuiteMetadata._DEBUG.

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseAll(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseTheCompletion(), and parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseTimeSize().

151  def handleParsingError(self, message):
152  if self._DEBUG:
153  raise ValueError, message
154  print " ======== AND ERROR WHILE PARSING METADATA ===="
155  print message
156  print " =============== end ========================= "
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.isTimeStamp (   line)
static
Returns whether the string is a timestamp (if not returns None)

>>> parserPerfsuiteMetadata.isTimeStamp("Fri Aug 14 01:16:03 2009")
True
>>> parserPerfsuiteMetadata.isTimeStamp("Fri Augx 14 01:16:03 2009")

Definition at line 96 of file parserPerfsuiteMetadata.py.

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.firstTimeStampAfter(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.firstTimeStampBefore(), and parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseTimeSize().

96 
97  def isTimeStamp(line):
98  """
99  Returns whether the string is a timestamp (if not returns None)
100 
101  >>> parserPerfsuiteMetadata.isTimeStamp("Fri Aug 14 01:16:03 2009")
102  True
103  >>> parserPerfsuiteMetadata.isTimeStamp("Fri Augx 14 01:16:03 2009")
104 
105  """
106  datetime_format = "%a %b %d %H:%M:%S %Y" # we use default date format
107  try:
108  time.strptime(line, datetime_format)
109  return True
110  except ValueError:
111  return None
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseAll (   self)

Definition at line 705 of file parserPerfsuiteMetadata.py.

References parserPerfsuiteMetadata.parserPerfsuiteMetadata.getIgSummary(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.getMachineInfo(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.handleParsingError(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.missing_fields, parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseGeneralInfo(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseTheCompletion(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseTimeSize(), and parserPerfsuiteMetadata.parserPerfsuiteMetadata.readCmsScimark().

706  def parseAll(self):
707  result = {"General": {}, "TestResults":{}, "cmsSciMark":{}, "IgSummary":{}, 'unrecognized_jobs': []}
708 
709  """ all the general info - start, arguments, host etc """
710  result["General"].update(self.parseGeneralInfo())
711 
712  """ machine info - cpu, memmory """
713  result["General"].update(self.getMachineInfo())
714 
715  """ we add info about how successfull was the run, when it finished and final castor url to the file! """
716  result["General"].update(self.parseTheCompletion())
717 
718  try:
719  result["TestResults"].update(self.parseTimeSize())
720  except Exception, e:
721  print "BAD BAD BAD UNHANDLED ERROR" + str(e)
722 
723 
724  #TODO:
725  #Check what Vidmantas was doing in the parseAllOtherTests, de facto it is not used now, so commenting it for now (to avoid the "BAD BAD BAD...."
726  #try:
727  # result["TestResults"].update(self.parseAllOtherTests())
728  #except Exception, e:
729  # print "BAD BAD BAD UNHANDLED ERROR" + str(e)
730 
731 
732  main_cores = [result["General"]["run_on_cpus"]]
733  num_cores = result["General"].get("num_cores", 0)
734  #DEBUG
735  #print "Number of cores was: %s"%num_cores
736  #TODO: temporarly - search for cores, use regexp
737  main_cores = [1]
738 
739  # THE MAHCINE SCIMARKS
740  result["cmsSciMark"] = self.readCmsScimark(main_cores = main_cores)
741  result["IgSummary"] = self.getIgSummary()
742 
743 
744 
745  if self.missing_fields:
746  self.handleParsingError("========== SOME REQUIRED FIELDS WERE NOT FOUND DURING PARSING ======= "+ str(self.missing_fields))
747 
748  return result
749 
750 
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseAllOtherTests (   self)

Definition at line 360 of file parserPerfsuiteMetadata.py.

References parserPerfsuiteMetadata.parserPerfsuiteMetadata.findLineBefore(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.firstTimeStampAfter(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.firstTimeStampBefore(), and parserPerfsuiteMetadata.parserPerfsuiteMetadata.lines_other.

361  def parseAllOtherTests(self):
362  threads = {}
363  tests = {
364  #"IgProf_Perf": {}, "IgProf_Mem": {}, "Memcheck": {}, "Callgrind": {},
365  }
366 
367  lines = self.lines_other
368  """
369 
370  for each of IgProf_Perf, IgProf_Mem, Memcheck, Callgrind tests we have such a structure of input file:
371  * beginning ->> and start timestamp- the firstone:
372  Adding thread <simpleGenReportThread(Thread-1, started)> to the list of active threads
373  Launching the Memcheck tests on cpu 3 with 5 events each
374  Fri Aug 14 01:16:03 2009
375 
376  <... whatever might be here, might overlap with other test start/end messages ..>
377 
378  Fri Aug 14 02:13:18 2009
379  Memcheck test, in thread <simpleGenReportThread(Thread-1, stopped)> is done running on core 3
380  * ending - the last timestamp "before is done running ...."
381  """
382  # we take the first TimeStamp after the starting message and the first before the finishing message
383 
384 
385  #TODO: if threads would be changed it would stop working!!!
386 
387  # i.e. Memcheck, cpu, events
388  reStart = re.compile(r"""^Launching the (.*) tests on cpu (\d+) with (\d+) events each$""")
389  # i.e. Memcheck, thread name,core number
390  reEnd = re.compile(r"""^(.*) test, in thread <simpleGenReportThread\((.+), stopped\)> is done running on core (\d+)$""")
391 
392  #i.e. thread = Thread-1
393  reAddThread = re.compile(r"""^Adding thread <simpleGenReportThread\((.+), started\)> to the list of active threads$""")
394 
395  reExitCode = re.compile(r"""Individual cmsRelvalreport.py ExitCode (\d+)""")
396  """ we search for lines being either: (it's a little pascal'ish but we need the index!) """
397  for line_index in xrange(0, len(lines)):
398  line = lines[line_index]
399 
400  # * starting of test
401  if reStart.match(line):
402  #print reStart.match(line).groups()
403  testName, testCore, testEventsNum = reStart.match(line).groups()
404 
405  time = self.firstTimeStampAfter(line_index, lines)
406 
407  #find the name of Thread: it's one of the lines before
408  line_thread = self.findLineBefore(line_index, lines, test_condition=lambda l: reAddThread.match(l))
409  (thread_id, ) = reAddThread.match(line_thread).groups()
410 
411  #we add it to the list of threads as we DO NOT KNOW EXACT NAME OF TEST
412  if not threads.has_key(thread_id):
413  threads[thread_id] = {}
414  # this way we would get an Exception in case of unknown test name!
415  threads[thread_id].update({"name": testName, "events_num": testEventsNum, "core": testCore, "start": time, "thread_id": thread_id})
416 
417  # * or end of test
418  if reEnd.match(line):
419  testName, thread_id, testCore = reEnd.match(line).groups()
420  if not threads.has_key(testName):
421  threads[thread_id] = {}
422  #TODO: we get an exception if we found non existing
423 
424  time = self.firstTimeStampBefore(line_index, lines)
425  try:
426  exit_code = ""
427  #we search for the exit code
428  line_exitcode = self.findLineBefore(line_index, lines, test_condition=lambda l: reExitCode.match(l))
429  exit_code, = reExitCode.match(line_exitcode).groups()
430  except Exception, e:
431  print "Error while getting exit code (Other test): %s" + str(e)
432 
433 
434  # this way we would get an Exception in case of unknown test name! So we would be warned if the format have changed
435  threads[thread_id].update({"end": time, "exit_code":exit_code})
436  for key, thread in threads.items():
437  tests[thread["name"]] = thread
438  return tests
439 
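The start, thread and end messages quoted in the docstring above can be matched standalone with the same regular expressions used by this method; the sample lines are the ones from the docstring.

import re

reStart = re.compile(r"^Launching the (.*) tests on cpu (\d+) with (\d+) events each$")
reEnd = re.compile(r"^(.*) test, in thread <simpleGenReportThread\((.+), stopped\)> is done running on core (\d+)$")
reAddThread = re.compile(r"^Adding thread <simpleGenReportThread\((.+), started\)> to the list of active threads$")

start_line = "Launching the Memcheck tests on cpu 3 with 5 events each"
thread_line = "Adding thread <simpleGenReportThread(Thread-1, started)> to the list of active threads"
end_line = "Memcheck test, in thread <simpleGenReportThread(Thread-1, stopped)> is done running on core 3"

test_name, core, events = reStart.match(start_line).groups()     # ('Memcheck', '3', '5')
(thread_id,) = reAddThread.match(thread_line).groups()           # 'Thread-1'
name, thread_id_end, core_end = reEnd.match(end_line).groups()   # ('Memcheck', 'Thread-1', '3')
# the start and end records are then tied together via the shared thread id 'Thread-1'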
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseGeneralInfo (   self)

Definition at line 255 of file parserPerfsuiteMetadata.py.

References parserPerfsuiteMetadata.parserPerfsuiteMetadata._applyParsingRules(), parserPerfsuiteMetadata.parserPerfsuiteMetadata._DEBUG, parserPerfsuiteMetadata.parserPerfsuiteMetadata.findFirstIndex_ofStartsWith(), and parserPerfsuiteMetadata.parserPerfsuiteMetadata.lines_general.

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseAll().

256  def parseGeneralInfo(self):
257  lines = self.lines_general
258  """ we define a simple list (tuple) of rules for parsing, the first part tuple defines the parameters to be fetched from the
259  regexp while the second one is the regexp itself """
260  #TIP: don't forget that tuple of one ends with ,
261  parsing_rules = (
262  (("", "num_cores", "run_on_cpus"), r"""^This machine \((.+)\) is assumed to have (\d+) cores, and the suite will be run on cpu \[(.+)\]$"""),
263  (("start_time", "host", "local_workdir", "user"), r"""^Performance Suite started running at (.+) on (.+) in directory (.+), run by user (.+)$""", "req"),
264  (("architecture",) ,r"""^Current Architecture is (.+)$"""),
265  (("test_release_based_on",), r"""^Test Release based on: (.+)$""", "req"),
266  (("base_release_path",) , r"""^Base Release in: (.+)$"""),
267  (("test_release_local_path",) , r"""^Your Test release in: (.+)$"""),
268 
269  (("castor_dir",) , r"""^The performance suite results tarball will be stored in CASTOR at (.+)$"""),
270 
271  (("TimeSize_events",) , r"""^(\d+) TimeSize events$"""),
272  (("IgProf_events",) , r"""^(\d+) IgProf events$"""),
273  (("CallGrind_events",) , r"""^(\d+) Callgrind events$"""),
274  (("Memcheck_events",) , r"""^(\d+) Memcheck events$"""),
275 
276  (("candles_TimeSize",) , r"""^TimeSizeCandles \[(.*)\]$"""),
277  (("candles_TimeSizePU",) , r"""^TimeSizePUCandles \[(.*)\]$"""),
278 
279  (("candles_Memcheck",) , r"""^MemcheckCandles \[(.*)\]$"""),
280  (("candles_MemcheckPU",) , r"""^MemcheckPUCandles \[(.*)\]$"""),
281 
282  (("candles_Callgrind",) , r"""^CallgrindCandles \[(.*)\]$"""),
283  (("candles_CallgrindPU",) , r"""^CallgrindPUCandles \[(.*)\]$"""),
284 
285  (("candles_IgProfPU",) , r"""^IgProfPUCandles \[(.*)\]$"""),
286  (("candles_IgProf",) , r"""^IgProfCandles \[(.*)\]$"""),
287 
288 
289  (("cmsScimark_before",) , r"""^(\d+) cmsScimark benchmarks before starting the tests$"""),
290  (("cmsScimark_after",) , r"""^(\d+) cmsScimarkLarge benchmarks before starting the tests$"""),
291  (("cmsDriverOptions",) , r"""^Running cmsDriver.py with user defined options: --cmsdriver="(.+)"$"""),
292 
293  (("HEPSPEC06_SCORE",) ,r"""^This machine's HEPSPEC06 score is: (.+)$"""),
294 
295 
296  )
297  """ we apply the defined parsing rules to extract the required fields of information into the dictionary (as defined in parsing rules) """
298  info = self._applyParsingRules(parsing_rules, lines)
299 
300 
301  """ postprocess the candles list """
302  candles = {}
303  for field, value in info.items():
304  if field.startswith("candles_"):
305  test = field.replace("candles_", "")
306  value = [v.strip(" '") for v in value.split(",")]
307  #if value:
308  candles[test]=value
309  del info[field]
310  #print candles
311  info["candles"] = self._LINE_SEPARATOR.join([k+":"+",".join(v) for (k, v) in candles.items()])
312 
313 
314  """ TAGS """
315  """
316  --- Tag --- --- RelTag --- -------- Package --------
317  HEAD V05-03-06 IgTools/IgProf
318  V01-06-05 V01-06-04 Validation/Performance
319  ---------------------------------------
320  total packages: 2 (2 displayed)
321  """
322  tags_start_index = -1 # set some default
323  try:
324  tags_start_index = [i for i in xrange(0, len(lines)) if lines[i].startswith("--- Tag ---")][0]
325  except:
326  pass
327  if tags_start_index > -1:
328  tags_end_index = [i for i in xrange(tags_start_index + 1, len(lines)) if lines[i].startswith("---------------------------------------")][0]
329  # print "tags start index: %s, end index: %s" % (tags_start_index, tags_end_index)
330  tags = lines[tags_start_index:tags_end_index+2]
331  # print [tag.split(" ") for tag in tags]
332  # print "\n".join(tags)
333  else: # no tags found, make an empty list ...
334  tags = []
335  """ we join the tags with separator to store as simple string """
336  info["tags"] = self._LINE_SEPARATOR.join(tags)
337  #FILES/PATHS
338 
339 
340  """ get the command line """
341  try:
342  cmd_index = self.findFirstIndex_ofStartsWith(lines, "Performance suite invoked with command line:") + 1 #that's the next line
343  info["command_line"] = lines[cmd_index]
344  except IndexError, e:
345  if self._DEBUG:
346  print e
347  info["command_line"] = ""
348 
349  try:
350  cmd_parsed_start = self.findFirstIndex_ofStartsWith(lines, "Initial PerfSuite Arguments:") + 1
351  cmd_parsed_end = self.findFirstIndex_ofStartsWith(lines, "Running cmsDriver.py")
352  info["command_line_parsed"] = self._LINE_SEPARATOR.join(lines[cmd_parsed_start:cmd_parsed_end])
353  except IndexError, e:
354  if self._DEBUG:
355  print e
356  info["command_line"] = ""
357 
358  return info
359 
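The candle post-processing in the listing above merges the individual candles_* fields into a single separator-joined string; a small illustration follows (the candle lists are illustrative).

_LINE_SEPARATOR = "|"

# values as they would be captured by the candles_* regexps above (illustrative)
info = {
    "candles_TimeSize": "'MinBias', 'TTbar'",
    "candles_IgProf": "'MinBias'",
}

candles = {}
for field, value in list(info.items()):
    if field.startswith("candles_"):
        test = field.replace("candles_", "")
        candles[test] = [v.strip(" '") for v in value.split(",")]
        del info[field]

info["candles"] = _LINE_SEPARATOR.join(
    k + ":" + ",".join(v) for k, v in candles.items())
# e.g. "TimeSize:MinBias,TTbar|IgProf:MinBias" (order may vary with the dictionary)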
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseTheCompletion (   self)
 Checks whether the suite finished successfully
    and whether the tarball was successfully archived and uploaded to CASTOR.

Definition at line 638 of file parserPerfsuiteMetadata.py.

References parserPerfsuiteMetadata.parserPerfsuiteMetadata._applyParsingRules(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.get_tarball_fromlog(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.handleParsingError(), and parserPerfsuiteMetadata.parserPerfsuiteMetadata.lines_other.

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseAll().

639  def parseTheCompletion(self):
640  """
641  checks if the suite has successfully finished
642  and if the tarball was successfully archived and uploaded to the castor """
643 
644  parsing_rules = (
645  (("finishing_time", "", ""), r"""^Performance Suite finished running at (.+) on (.+) in directory (.+)$"""),
646  (("castor_md5",) , r"""^The md5 checksum of the tarball: (.+)$"""),
647  (("successfully_archived_tarball", ), r"""^Successfully archived the tarball (.+) in CASTOR!$"""),
648  #TODO: WE MUST HAVE THE CASTOR URL, but for some of files it's not included [probably crashed]
649  (("castor_file_url",), r"""^The tarball can be found: (.+)$"""),
650  (("castor_logfile_url",), r"""^The logfile can be found: (.+)$"""),
651  )
652 
653 
654  """ we apply the defined parsing rules to extract the required fields of information into the dictionary (as defined in parsing rules) """
655  info = self._applyParsingRules(parsing_rules, self.lines_other)
656 
657  """ did we detect any errors in log files ? """
658  info["no_errors_detected"] = [line for line in self.lines_other if line == "There were no errors detected in any of the log files!"] and "1" or "0"
659  if not info["successfully_archived_tarball"]:
660  info["castor_file_url"] = ""
661 
662  if not info["castor_file_url"]:
663  #TODO: get the castor file url or abort
664  self.handleParsingError( "Castor tarball URL not found. Trying to get from environment")
665  lmdb_castor_url_is_valid = lambda url: url.startswith("/castor/")
666 
667  url = ""
668  try:
669  print "HERE!"
670  url=self.get_tarball_fromlog()
671  print "Extracted castor tarball full path by re-parsing cmsPerfSuite.log: %s"%url
672 
673  except:
674  if os.environ.has_key("PERFDB_CASTOR_FILE_URL"):
675  url = os.environ["PERFDB_CASTOR_FILE_URL"]
676 
677  else: #FIXME: add the possibility to get it directly from the cmsPerfSuite.log file (make sure it is dumped there before doing the tarball itself...)
678  print "Failed to get the tarball location from environment variable PERFDB_CASTOR_FILE_URL"
679  self.handleParsingError( "Castor tarball URL not found. Provide interactively")
680 
681  while True:
682 
683  if lmdb_castor_url_is_valid(url):
684  info["castor_file_url"] = url
685  break
686  print "Please enter a valid CASTOR url: has to start with /castor/ and should point to the tarball"
687  url = sys.stdin.readline()
688 
689 
return info
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseTimeSize (   self)
parses the timeSize 

Definition at line 440 of file parserPerfsuiteMetadata.py.

References parserPerfsuiteMetadata.parserPerfsuiteMetadata._applyParsingRules(), parserPerfsuiteMetadata.parserPerfsuiteMetadata._DEBUG, parserPerfsuiteMetadata.parserPerfsuiteMetadata.findFirstIndex_ofStartsWith(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.findLineAfter(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.firstTimeStampAfter(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.firstTimeStampBefore(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.handleParsingError(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.isTimeStamp(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.lines_timesize, and parserPerfsuiteMetadata.parserPerfsuiteMetadata.validateSteps().

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseAll().

441  def parseTimeSize(self):
442  """ parses the timeSize """
443  timesize_result = []
444 
445  # TODO: we will use the first timestamp after the "or these tests will use user input file..."
446  #TODO: do we have to save the name of input file somewhere?
447  """
448  the structure of input file:
449  * beginning ->> and start timestamp- the firstone:
450  >>> [optional:For these tests will use user input file /build/RAWReference/MinBias_RAW_320_IDEAL.root]
451  <...>
452  Using user-specified cmsDriver.py options: --conditions FrontierConditions_GlobalTag,MC_31X_V4::All --eventcontent RECOSIM
453  Candle MinBias will be PROCESSED
454  You defined your own steps to run:
455  RAW2DIGI-RECO
456  *Candle MinBias
457  Written out cmsRelvalreport.py input file at:
458  /build/relval/CMSSW_3_2_4/workStep2/MinBias_TimeSize/SimulationCandles_CMSSW_3_2_4.txt
459  Thu Aug 13 14:53:37 2009 [start]
460  <....>
461  Thu Aug 13 16:04:48 2009 [end]
462  Individual cmsRelvalreport.py ExitCode 0
463  * ending - the last timestamp "... ExitCode ...."
464  """
465  #TODO: do we need the cmsDriver --conditions? I suppose it would the global per work directory = 1 perfsuite run (so samefor all candles in one work dir)
466  # TODO: which candle definition to use?
467  """ divide into separate jobs """
468  lines = self.lines_timesize
469  jobs = []
470  start = False
471  timesize_start_indicator = re.compile(r"""^taskset -c (\d+) cmsRelvalreportInput.py""")
472  for line_index in xrange(0, len(lines)):
473  line = lines[line_index]
474  # search for start of each TimeSize job (with a certain candle and step)
475  if timesize_start_indicator.match(line):
476  if start:
477  jobs.append(lines[start:line_index])
478  start = line_index
479  #add the last one
480  jobs.append(lines[start:len(lines)])
481  #print "\n".join(str(i) for i in jobs)
482 
483  parsing_rules = (
484  (("", "candle", ), r"""^(Candle|ONLY) (.+) will be PROCESSED$""", "req"),
485  #e.g.: --conditions FrontierConditions_GlobalTag,MC_31X_V4::All --eventcontent RECOSIM
486  (("cms_driver_options", ), r"""^Using user-specified cmsDriver.py options: (.+)$"""),
487  (("", "conditions", ""), r"""^Using user-specified cmsDriver.py options: (.*)--conditions ([^\s]+)(.*)$""", "req"),
488  # for this we cannot guarrantee that it has been found, TODO: we might count the number of pileup candles and compare with arguments
489  (("", "pileup_type", ""), r"""^Using user-specified cmsDriver.py options:(.*)--pileup=([^\s]+)(.*)$"""),
490  #not shure if event content is required
491  (("", "event_content", ""), r"""^Using user-specified cmsDriver.py options:(.*)--eventcontent ([^\s]+)(.*)$""", "req"),
492  #TODO: after changeing the splitter to "taskset -c ..." this is no longer included into the part of correct job
493  #(("input_user_root_file", ), r"""^For these tests will use user input file (.+)$"""),
494  )
495 
496  #parse each of the TimeSize jobs: find candles, etc and start-end times
497 
498  reExit_code = re.compile(r"""Individual ([^\s]+) ExitCode (\d+)""")
499 
500  if self._DEBUG:
501  print "TimeSize (%d) jobs: %s" % (len(jobs), str(jobs))
502 
503  for job_lines in jobs:
504  """ we apply the defined parsing rules to extract the required fields of information into the dictionary (as defined in parsing rules) """
505  info = self._applyParsingRules(parsing_rules, job_lines)
506  #Fixing here the compatibility with new cmsdriver.py --conditions option (for which now we have autoconditions and FrontierConditions_GlobalTag is optional):
507  if 'auto:' in info['conditions']:
508  from Configuration.PyReleaseValidation.autoCond import autoCond
509  info['conditions'] = autoCond[ info['conditions'].split(':')[1] ].split("::")[0]
510  else:
511  if 'FrontierConditions_GlobalTag' in info['conditions']:
512  info['conditions']=info['conditions'].split(",")[1]
513 
514  #DEBUG:
515  #print "CONDITIONS are: %s"%info['conditions']
516  #start time - the index after which comes the time stamp
517  """ the following is not available on one of the releases, instead
518  use the first timestamp available on our job - that's the starting time :) """
519 
520  #start_time_after = self.findFirstIndex_ofStartsWith(job_lines, "Written out cmsRelvalreport.py input file at:")
521  #print start_time_after
522  info["start"] = self.firstTimeStampAfter(0, job_lines)
523 
524  #TODO: improve in future (in case of some changes) we could use findBefore instead which uses the regexp as parameter for searching
525  #end time - the index before which comes the time stamp
526 
527  # On older files we have - "Individual Relvalreport.py ExitCode 0" instead of "Individual cmsRelvalreport.py ExitCode"
528  end_time_before = self.findLineAfter(0, job_lines, test_condition = reExit_code.match, return_index = True)
529 
530  # on the same line we have the exit Code - so let's get it
531  nothing, exit_code = reExit_code.match(job_lines[end_time_before]).groups()
532 
533  info["end"] = self.firstTimeStampBefore(end_time_before, job_lines)
534  info["exit_code"] = exit_code
535 
536  steps_start = self.findFirstIndex_ofStartsWith(job_lines, "You defined your own steps to run:")
537  steps_end = self.findFirstIndex_ofStartsWith(job_lines, "*Candle ")
538  #probably it includes steps until we found *Candle... ?
539  steps = job_lines[steps_start + 1:steps_end]
540  if not self.validateSteps(steps):
541  self.handleParsingError( "Steps were not found corrently: %s for current job: %s" % (str(steps), str(job_lines)))
542 
543  """ quite nasty - just a work around """
544  print "Trying to recover from this error in case of old cmssw"
545 
546  """ we assume that steps are between the following sentance and a TimeStamp """
547  steps_start = self.findFirstIndex_ofStartsWith(job_lines, "Steps passed to writeCommands")
548  steps_end = self.findLineAfter(steps_start, job_lines, test_condition = self.isTimeStamp, return_index = True)
549 
550  steps = job_lines[steps_start + 1:steps_end]
551  if not self.validateSteps(steps):
552  self.handleParsingError( "EVEN AFTER RECOVERY Steps were not found corrently! : %s for current job: %s" % (str(steps), str(job_lines)))
553  else:
554  print "RECOVERY SEEMS to be successful: %s" % str(steps)
555 
556  info["steps"] = self._LINE_SEPARATOR.join(steps) #!!!! STEPS MIGHT CONTAIN COMMA: ","
557 
558 
559  timesize_result.append(info)
return {"TimeSize": timesize_result}
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.readCmsScimark (   self,
  main_cores = [1] 
)

Definition at line 576 of file parserPerfsuiteMetadata.py.

References parserPerfsuiteMetadata.parserPerfsuiteMetadata._DEBUG, parserPerfsuiteMetadata.parserPerfsuiteMetadata._path, and parserPerfsuiteMetadata.parserPerfsuiteMetadata.readCmsScimarkTest().

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseAll().

577  def readCmsScimark(self, main_cores = [1]):
578  main_core = main_cores[0]
579  #TODO: WE DO NOT ALWAYS REALLY KNOW THE MAIN CORE NUMBER! but we don't care too much
580  #we parse each of the SciMark files and the Composite scores
581  csimark = []
582  csimark.extend(self.readCmsScimarkTest(testName = "cmsScimark2", testType = "mainCore", core = main_core))
583  csimark.extend(self.readCmsScimarkTest(testName = "cmsScimark2_large", testType = "mainCore_Large", core = main_core))
584 
585 
586  #we not always know the number of cores available so we will just search the directory to find out core numbers
587  reIsCsiMark_notusedcore = re.compile("^cmsScimark_(\d+).log$")
588  scimark_files = [reIsCsiMark_notusedcore.match(f).groups()[0]
589  for f in os.listdir(self._path)
590  if reIsCsiMark_notusedcore.match(f)
591  and os.path.isfile(os.path.join(self._path, f)) ]
592 
593  for core_number in scimark_files:
594  try:
595  csimark.extend(self.readCmsScimarkTest(testName = "cmsScimark_%s" % str(core_number), testType = "NotUsedCore_%s" %str(core_number), core = core_number))
596  except IOError, e:
597  if self._DEBUG:
598  print e
599  return csimark
600  #print csimark
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.readCmsScimarkTest (   self,
  testName,
  testType,
  core 
)

Definition at line 564 of file parserPerfsuiteMetadata.py.

References parserPerfsuiteMetadata.parserPerfsuiteMetadata._path and parserPerfsuiteMetadata.parserPerfsuiteMetadata.readInput().

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.readCmsScimark().

565  def readCmsScimarkTest(self, testName, testType, core):
566  lines = self.readInput(self._path, fileName = testName + ".log")
567  scores = [{"score": self.reCmsScimarkTest.match(line).groups()[1], "type": testType, "core": core}
568  for line in lines
569  if self.reCmsScimarkTest.match(line)]
570  #add the number of messurment
571  i = 0
572  for score in scores:
573  i += 1
574  score.update({"messurement_number": i})
575  return scores
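The reCmsScimarkTest pattern compiled in __init__ drives this method; a standalone illustration on fabricated score lines (the numeric values are invented, the test type and core follow the defaults used by readCmsScimark()).

import re

reCmsScimarkTest = re.compile(r"^Composite Score:(\s*)([^\s]+)$")

lines = [
    "Composite Score: 123.45",
    "FFT (1024): 98.76",          # ignored: not a Composite Score line
    "Composite Score: 130.01",
]

scores = [{"score": reCmsScimarkTest.match(line).groups()[1],
           "type": "mainCore", "core": 1}
          for line in lines if reCmsScimarkTest.match(line)]

for i, score in enumerate(scores, start=1):
    score["messurement_number"] = i   # same (misspelled) key used by the class

# scores[0] == {"score": "123.45", "type": "mainCore", "core": 1, "messurement_number": 1}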
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.readInput (   self,
  path,
  fileName = "cmsPerfSuite.log" 
)

Definition at line 161 of file parserPerfsuiteMetadata.py.

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.readCmsScimarkTest().

162  def readInput(self, path, fileName = "cmsPerfSuite.log"):
163  try:
164  f = open(os.path.join(path, fileName), "r")
165  lines = [s.strip() for s in f.readlines()]
166  f.close()
167  except IOError:
168  lines = []
169 
170  #print self._lines
171  return lines
172 
173 
174 
def parserPerfsuiteMetadata.parserPerfsuiteMetadata.validateSteps (   self,
  steps 
)
Simple function for error detection. TODO: we could use a list of possible steps also 

Definition at line 24 of file parserPerfsuiteMetadata.py.

References parserPerfsuiteMetadata.parserPerfsuiteMetadata._MAX_STEPS.

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseTimeSize().

24 
25  def validateSteps(self, steps):
26  """ Simple function for error detection. TODO: we could use a list of possible steps also """
27  return not (not steps or len(steps) > self._MAX_STEPS)

Member Data Documentation

parserPerfsuiteMetadata.parserPerfsuiteMetadata._DEBUG
private

Definition at line 31 of file parserPerfsuiteMetadata.py.

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.handleParsingError(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseGeneralInfo(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseTimeSize(), and parserPerfsuiteMetadata.parserPerfsuiteMetadata.readCmsScimark().

string parserPerfsuiteMetadata.parserPerfsuiteMetadata._LINE_SEPARATOR = "|"
staticprivate

Definition at line 23 of file parserPerfsuiteMetadata.py.

parserPerfsuiteMetadata.parserPerfsuiteMetadata._MAX_STEPS
private

Definition at line 30 of file parserPerfsuiteMetadata.py.

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.validateSteps().

parserPerfsuiteMetadata.parserPerfsuiteMetadata._otherStart
private

Definition at line 50 of file parserPerfsuiteMetadata.py.

parserPerfsuiteMetadata.parserPerfsuiteMetadata._path
private

Definition at line 34 of file parserPerfsuiteMetadata.py.

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.getIgSummary(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.getMachineInfo(), parserPerfsuiteMetadata.parserPerfsuiteMetadata.readCmsScimark(), and parserPerfsuiteMetadata.parserPerfsuiteMetadata.readCmsScimarkTest().

parserPerfsuiteMetadata.parserPerfsuiteMetadata._timeSizeEnd
private

Definition at line 47 of file parserPerfsuiteMetadata.py.

parserPerfsuiteMetadata.parserPerfsuiteMetadata._timeSizeStart
private

Definition at line 42 of file parserPerfsuiteMetadata.py.

parserPerfsuiteMetadata.parserPerfsuiteMetadata.lines_general

Definition at line 88 of file parserPerfsuiteMetadata.py.

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseGeneralInfo().

parserPerfsuiteMetadata.parserPerfsuiteMetadata.lines_other

Definition at line 90 of file parserPerfsuiteMetadata.py.

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseAllOtherTests(), and parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseTheCompletion().

parserPerfsuiteMetadata.parserPerfsuiteMetadata.lines_timesize

Definition at line 89 of file parserPerfsuiteMetadata.py.

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseTimeSize().

parserPerfsuiteMetadata.parserPerfsuiteMetadata.missing_fields

Definition at line 93 of file parserPerfsuiteMetadata.py.

Referenced by parserPerfsuiteMetadata.parserPerfsuiteMetadata.parseAll().

parserPerfsuiteMetadata.parserPerfsuiteMetadata.reCmsScimarkTest

Definition at line 38 of file parserPerfsuiteMetadata.py.