SPECjvm2008 User's Guide

https://spec.org/jvm2008/docs/UserGuide.html#UsePJA

Version 1.0
Last modified: April 16, 2008


1 Introduction

1.1 General Concepts

1.2 Background

1.3 Workloads

1.4 Base and Peak

1.5 An Operation

1.6 An Iteration

1.7 Warmup

1.8 Parallelism

1.9 Analyzers

1.10 Startup Benchmarks

1.11 SciMark Large and Small Workloads

2 Installing SPECjvm2008

2.1 System requirements

2.2 Installation

2.3 Trial run

3 Operational Configuration

3.1 SPECjvm2008 parameters

4 User-supplied System Information

4.1 Setting System Information Properties

5 Running SPECjvm2008

5.1 Running the benchmark

5.2 Running the Reporter Separately

6 Throughput Measurement

7 Results Reports

7.1 Results Reports Contents

8 Producing and Submitting Results

8.1 Producing Results

8.2 Submitting Results

9 Tuning notes

9.1 SPECjvm2008 properties

9.2 Benchmark specific properties

9.3 Threads specification

9.4 How to disable checksum verification?

9.5 Run time tuning

9.6 Where can SPECjvm2008 results be stored?

9.7 Where can SPECjvm2008 be run from?

9.8 How to disable report generation?

9.9 How to configure and write Analyzers?

9.10 How to get JVM arguments from the runtime?

9.11 Miscellaneous

Appendix A

SPECjvm2008 command line options

SPECjvm2008 workload names


1 Introduction

This document is a practical guide for setting up and running SPECjvm2008. To submit SPECjvm2008 results, the benchmarker must adhere to the rules in the Run and Reporting Rules document contained in the kit.

This document is targeted at people trying to run the SPECjvm2008 benchmark in order to accurately measure their Java system, comprising a JRE and an underlying operating system and hardware.

1.1 General Concepts

  • JVM - Java Virtual Machine, an execution engine for Java.
  • JRE - Java Runtime Environment, which includes a JVM and class libraries.
  • Valid - A valid run is a run that produces a correct result. This can apply to a benchmark, a sub-benchmark or the whole suite.
  • Compliant - A compliant run is a run of the whole suite done according to the run rules. One of the requirements is that the run is valid.
  • Logical CPU - A hardware thread, i.e. what the system reports as an execution unit, for example a core, a hyperthread or a strand, depending on architecture. The harness uses the Java call Runtime.getRuntime().availableProcessors() to determine the number of logical CPUs on the system it runs on.
  • java - The term java is used below in examples and in this context it refers to the launcher used to invoke the Java Virtual Machine.

1.2 Background

The main purpose of SPECjvm2008 is to measure the performance of a JRE (a JVM and associated libraries). It also measures the performance of the operating system and hardware in the context of executing the JRE. It focuses on the performance of the JRE executing a single application; it reflects the performance of the hardware processor and memory subsystem, but has minimal dependence on file I/O and includes no remote network I/O.

The SPECjvm2008 workload mimics a variety of common general purpose application computations. These characteristics reflect the goal that this benchmark be applicable to measuring basic Java performance on a wide variety of both client and server systems running Java.

SPEC also considers the user experience of Java important. The suite therefore includes startup benchmarks and has a required run category called base, which must be run without any tuning of the JVM, reflecting out-of-the-box performance.

Other SPEC benchmarks are available for measuring the performance of Java systems in more specialized enterprise scenarios. SPECjbb2005 is a Java program emulating a 3-tier system with emphasis on the middle tier. For a full-fledged multi-tier Java benchmark, SPEC has developed a comprehensive application server benchmark, SPECjAppServer2004. SPEC also has a Java messaging benchmark called SPECjms2007, which is the first industry-standard benchmark for evaluating the performance of enterprise message-oriented middleware servers based on JMS (Java Message Service).

1.3 Workloads

SPECjvm2008 comprises a collection of workloads intended to represent a diverse set of common types of computation. In general, the algorithms and operations in the workloads are components of real-world applications and include text/character processing, numerical computations, and bitwise computation (e.g., media processing). Each of the workloads has a specific amount of work to do, making each of them a small benchmark in itself; several of the benchmarks also have sub-benchmarks.

The benchmarks are described in detail in the benchmark pages.

1.4 Base and Peak

There are two run categories in SPECjvm2008, called Base and Peak, plus an additional run category called Lagom. In order to create a compliant result, a run in the Base category must be included; including a run in the Peak category is optional.

The Base category shows the performance of the system without any tuning of the JVM: the 'out of the box' performance. It does, however, allow tuning of the OS and hardware (including firmware such as the BIOS). In the Base category you are not allowed to do any hand tuning of the JVM, nor are you allowed to change the run time.

The Peak category shows what can be achieved with the system. Tuning of the JVM is allowed to achieve optimal performance.

The Base run and the Peak run are done with two separate invocations of the SPECjvm2008 benchmark suite, initially creating two different results. These results are later combined into one raw file for submission.

1.4.1 Lagom workload

The Lagom workload is a fixed-size workload, meaning it performs a certain number of operations of each benchmark. This is a complement to the base and peak categories and can be used when it is preferred to run a fixed amount of work. The workload runs a specified number of operations (different for each benchmark) in each benchmark thread. The number of benchmark threads is by default adjusted to the number of hardware threads the machine has, so in order to run an exact amount of work on two systems of different sizes, the number of benchmark threads should be set explicitly, as in the example below.
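For example (a sketch using the options documented in Appendix A; the thread count 4 is arbitrary), a fixed-work Lagom run pinned to four benchmark threads on both systems could be started with:

     java -jar SPECjvm2008.jar --lagom -bt 4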

The Lagom workload is intended for research use, as a tool to measure progress, overhead, or whatever the intent of the research may be. A run of the Lagom category is not a compliant run and SPEC will not review or post any of these results on the spec.org website.

1.5 An Operation

Each invocation of a benchmark workload is one operation. The harness will call a benchmark several times, making it perform multiple operations in one iteration.

1.6 An Iteration

An iteration goes on for a certain duration, by default 240 seconds. During this time the harness kicks off several operations, starting a new one as soon as the previous operation completes. It never aborts an operation; instead it waits for the running operation to complete before stopping. The harness expects to complete at least 5 operations inside an iteration, so the duration of an iteration is never less than the specified time, but will be increased, based on performance in the warmup period, if the operations take too long.
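As a rough sketch (illustrative only, not the actual harness code; runOneOperation() stands in for one workload invocation), the iteration logic behaves like this:

     // IterationSketch.java - illustrative only, not the actual harness code
     public class IterationSketch {
         static void runOneOperation() { /* one workload invocation */ }

         public static void main(String[] args) {
             long iterationMillis = 240_000;   // default iteration time
             long start = System.currentTimeMillis();
             int operations = 0;
             // start a new operation as soon as the previous one completes;
             // an operation, once started, is never aborted
             while (System.currentTimeMillis() - start < iterationMillis
                     || operations < 5) {      // at least 5 operations expected
                 runOneOperation();
                 operations++;
             }
             // the iteration score reflects completed operations per minute
         }
     }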

1.7 Warmup

The first iteration is a warmup iteration, run for 120 seconds by default. The result of the warmup iteration is not included in the benchmark result. To skip warmup, set the warmup time to 0.

1.8 Parallelism

Most of the benchmarks are run in parallel, where several operations are started at the same time in separate threads. From the harness point of view the threads work independently, but the workloads are designed to introduce an interesting mix of problems, both by sharing data and work at the application level and by using resources shared inside the JVM.

1.9 Analyzers

It is possible to use the SPECjvm2008 benchmark harness to analyze what happens during a run, in order to understand and diagnose a product. An example would be to view the heap usage during a benchmark run. To achieve this, the framework can run one or several Analyzers during the benchmark run. These gather information and deliver the results together with the benchmark results. The harness plots the information on the details chart in the report, where each benchmark operation is plotted, and it can report a summary metric for each iteration. The analyzers can either poll for information during the benchmark run or implement a callback method to report results based on events. See section 9.9 for further details on configuring and implementing analyzers.

1.10 Startup Benchmarks

The startup benchmarks measure JVM and application startup performance. The ops/m metric is calculated using the time taken for each workload to be run for one operation by a newly started JVM (via java.lang.Runtime.exec()). This tests both basic JVM startup and the time to start up the benchmark workload, since in order to get the best score in several of the benchmarks it is critical to optimize hot areas of code.

It is possible to configure both the launcher and the arguments for the startup benchmarks; see the Operational Configuration section.
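For example (a sketch; the launcher path and heap option are placeholders), the -jl and -ja options from Appendix A can be combined with a startup benchmark:

     java -jar SPECjvm2008.jar -jl /opt/jdk/bin/java -ja "-Xmx256m" startup.helloworld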

1.11 SciMark Large and Small Workloads

The SciMark workloads are run with both small and large dataset sizes. The small dataset fits within the L2 cache available on most modern CPU architectures and is intended to test JVM code optimization and computation performance while ensuring the dataset is accessible in cache alone. The large dataset is large enough to not fit within a standard L2 cache and is intended to test JVM optimizations targeted at memory and the performance of the memory subsystem itself. The scimark.monte_carlo workload is run only once, as it does not use a dataset for calculation that can be modified.

2 Installing SPECjvm2008

2.1 System requirements

SPECjvm2008 can be run on a system with only one logical CPU and 512 MB of memory.
The recommended minimum amount of disk space needed is 256 MB (including installation).

SPECjvm2008 requires a Java Runtime Environment supporting Java SE 5.0 features.

SPECjvm2008 has been tested on a large set of hardware, OS, and JRE combinations.
See the FAQ for details on what configurations are tested.

Note: SPECjvm2008 is designed to scale up the workload when a larger machine (determined by the number of logical CPUs) is used. In many of the workloads included in the suite, the amount of live data increases when the workload is increased, and the minimum amount of memory mentioned above will not be enough. More disk space will also be required (since the derby benchmark stores data on disk).

2.2 Installation

The benchmark kit is distributed with an installer or in a zip file.

It is recommended to install the benchmark on the system under test, but it can be installed on and run from a file share. The installed benchmark suite will use about 150 MB of file resources.

To install using the installer, invoke the installer and follow the instructions, which will ask where the benchmark should be installed.

To install using the zip file, unzip it in the folder where the benchmark is to be installed. The zip file will create the sub-folder SPECjvm2008, into which everything is installed.

Note: Although the Java source code for the benchmark is provided in the kit, it is NOT necessary to recompile the Java code; the .jar files in the kit have everything required. In fact, rebuilt .jar files are not allowed to be used for compliant runs and they will fail the checksum tests. You may, of course, modify and recompile the benchmark for research purposes.

Note: If you are using Java for Mac OS X, see the Known Issues document for how to prepend javac.jar.

2.3 Trial run

The following command line may be used to quickly check the installation.
It should be executed from a shell or command window, in the directory where the benchmark is installed (where the SPECjvm2008.jar file is located).
It is expected to complete within a few minutes.
It is expected to:

  • Validate the kit with a checksum test of jars and resource files.
  • Run the check test, for a functional verification of the JVM. *
  • Run the compress benchmark, with a warmup and one iteration of 5 seconds each.
  • Create a report of the run in results/...
  • Print a result string for a valid, but noncompliant run of the benchmark suite.

If this runs to completion as described here and without any error messages, then it is highly likely that the various pieces of the benchmark are correctly in place.

     java -jar SPECjvm2008.jar -wt 5s -it 5s -bt 2 compress

Note: If there is no "java" command in your path, the "java" above would have to be replaced by the full path to the java command, e.g. "d:\myjava\jre\bin\java.exe"

* Note: The check benchmark will verify that the javac version used by the JRE is the one from javac.jar included in the benchmark. If this verification fails, see the Known Issues document for a workaround, in case it is an issue that has already been found.

Beginning of the output:

SPECjvm2008 Peak
Properties file: none
Benchmarks: compress

WARNING: Run will not be compliant.
Not a compliant sequence of benchmarks for publication.
Property specjvm.iteration.time must be at least 240 seconds for publication.
...

3 Operational Configuration

There are a number of parameters that control the operation of the SPECjvm2008 benchmark. Each parameter has a default value; the user may override these defaults by specifying parameter values in a properties file and/or on the command line.

3.1 SPECjvm2008 parameters

The SPECjvm2008 harness is very flexible when it comes to tuning the workloads, making it possible to work with the benchmarks as effectively as possible. The complete set of parameters for SPECjvm2008 is documented in Appendix A and also in the files props/specjvm.properties and props/specjvm.reporter.properties.

3.1.1 How to specify SPECjvm2008 parameter values

The user may specify values for SPECjvm2008 parameters on the command line as arguments or as properties in a properties file. In general, the effective value of a parameter is determined as if by the following procedure:

  1. If one or more command line options specify a value for the parameter, then the effective value is the one specified in the last such option; otherwise

  2. if a properties file is loaded and one or more lines in the properties file specify a value for the parameter, then the effective value is the one specified by the last such line; otherwise

  3. the effective value is the default value for the parameter.

As described in the next section, the parameter specjvm.propfile is an exception to this rule.

3.1.2 Properties File

Whether a properties file is loaded is determined by what value, if any, for the parameter specjvm.propfile is specified on the command line. If no value for specjvm.propfile is specified on the command line, then no properties file is loaded. If one or more command line options specify a value for specjvm.propfile, then the value specified in the last such option, PROPFILE, is used as follows.

  1. If the file PROPFILE exists, that file is loaded; otherwise
  2. if the file SPECJVM_HOME/props/PROPFILE exists, that file is loaded; otherwise
  3. a warning is reported, no properties file is loaded, and execution of the benchmark continues.

SPECJVM_HOME denotes the root of the directory subtree containing the SPECjvm2008 kit.

3.1.2.1 Properties File Format

The properties file is a list of attribute-value pairs, specified one per line in the following form:

<parameter name>=<value>

Blank lines and lines beginning with the character '#' are ignored. Conversions are performed according to the type of value expected for each parameter.
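For example, a minimal properties file using parameter names documented in Appendix A (the values here are only illustrative):

     # run with 8 benchmark threads and 240-second iterations
     specjvm.benchmark.threads=8
     specjvm.iteration.time=240s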

3.1.3 Specifying parameter values on the command line

Any SPECjvm2008 parameter may be set by using the -D command line option. In addition, specific command line options exist to make setting many of the more commonly used parameters more convenient. See Appendix A for details on command line options.

4 User-supplied System Information

To assist testers and submitters in assembling all the pertinent information needed for reproducing SPECjvm2008 results, the benchmark defines a number of system attributes which are given values in the same way as operational parameters; the reporting mechanism automatically includes this information in the various reports it generates. The names of these system information properties are easy to distinguish, as they all begin with the prefix spec.jvm2008.reporter. It is important to note two things about these properties.

  1. Although they are given values via the same mechanism as the operational parameters, SPECjvm2008 treats them very differently. They have no effect on the operation of the benchmark; they are simply passed through to the results reports. Except for the labeling supplied for them in the results reports, SPECjvm2008 does not interpret their values.

  2. SPECjvm2008 is capable of automatically detecting and filling in values for a few of these properties, but for the most part it is up to the user to supply these values. In any case, the user should check them all for correctness.

4.1 Setting System Information Properties

Whereas it is common that all the operational parameters a user wants to change are easily supplied via command line options, supplying all the required system information on the command line (while possible) would be very cumbersome. Therefore, the expected usage model is that these are supplied via a properties file.

An example properties file for this purpose is included as props/specjvm.reporter.properties. In addition, SPECjvm2008 has a mechanism that supports keeping separate property files for operational parameters and system information: if (as the result of command line options or the loading of a first property file) the operational parameter specjvm.additional.properties.file has a nonnull value, then SPECjvm2008 uses that value as the name of another property file to be loaded.
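A minimal sketch, using the file names mentioned above, of how the operational properties file can point at the system information file:

     # in props/specjvm.properties
     specjvm.additional.properties.file=props/specjvm.reporter.properties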

5 Running SPECjvm2008

5.1 Running the benchmark

SPECjvm2008 runs as a single Java application on a single system. It understands a number of command line switches, though none of them are required. However, some of the workloads require heap space that is greater than the default maximum heap size for some particular JREs on particular systems. The benchmark will increase the workload (and the size of the live data) based on the number of CPUs (number of hardware threads). For all the JREs we have tested, 400 MB is sufficient for a system with 2 hardware threads. So a simple command line to run the benchmark is

     java -Xmx400m -jar SPECjvm2008.jar

The general form of the command line to run the benchmark is

     java [<jvm options>] -jar SPECjvm2008.jar [<SPECjvm2008 options>] [<benchmark name> ...]

Appendix A lists the valid SPECjvm2008 options and their meanings and the valid benchmark names.

5.1.1 Compliant runs

For a run to be compliant, the SPECjvm2008 configuration, specified on the command line and/or in properties files, must result in parameters having values conforming to the run rules, sections 2.3 and 2.4. The SPECjvm2008 framework checks the conformance of the parameter values at the start of the suite and indicates in the benchmark output any parameter values that do not pass these checks (which makes the run noncompliant).

For a run to be compliant it must also be valid, passing the result validations done by the SPECjvm2008 harness as part of each operation.

5.1.1.1 Compliant runs for the Base metric

For a compliant run to be submitted for the base metric, the run must not use any JVM options or harness run time tuning.

To run the base category, specify --base on the command line. This category is the default unless JVM command line arguments are used or the run time is changed.

5.1.1.2 Compliant runs for the Peak metric

A compliant run submitted for the peak metric may use jvm options. The options used must be disclosed as described in the Run Rules.

This disclosure is done using the property spec.jvm2008.report.jvm.command.line.
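For example (the heap settings shown are illustrative only), the disclosure in the reporter properties file could look like:

     spec.jvm2008.report.jvm.command.line=-Xms3000m -Xmx3000m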

To run the peak category, specify --peak on the command line. This category is also selected automatically if JVM command line arguments are used or the run time is changed.

5.1.2 Output and Results

SPECjvm2008 prints a record of its operation to the standard output stream as it runs. It optionally produces an XML file that records system information supplied by the user, some parameter values and results for the run. This XML file is required for submitting the results to SPEC, and the default behavior is that this XML file is produced.

The location to which the XML file is written is controlled by the value of the parameter specjvm.result.dir; the default is a subdirectory named "results" in the current working directory. In the following description, RESULTS_DIR denotes the value of specjvm.result.dir. If the XML file is to be created, SPECjvm2008 creates a subdirectory of RESULTS_DIR named SPECjvm2008.<num>, where <num> is the smallest positive integer that results in a unique subdirectory name within RESULTS_DIR. The XML file is then written as a file named

RESULTS_DIR/SPECjvm2008.<num>/SPECjvm2008.<num>.raw

Optionally, the SPECjvm2008 reporter will run at the end of the run and create text and/or HTML versions of the information in the XML results file. These files are written to the same directory as the XML results file. A more detailed description of the results files appears in the section on Results Reports.

5.2 Running the Reporter Separately

Normally, the reporter is executed automatically by SPECjvm2008 at the end of a run to process the results file generated by that run. The reporter may also be run alone by using a command line of this form:

     java -jar SPECjvm2008.jar --reporter <file name>

where <file name> is the name of a results file produced by a previous run. As in a regular run, which results file formats are created is controlled by the properties specjvm.create.html.report and specjvm.create.txt.report.

6 Throughput Measurement

In a given run, each SPECjvm2008 sub-benchmark produces a result in ops/min (operations per minute) that reflects the rate at which the system was able to complete invocations of the workload of that sub-benchmark. At the conclusion of a run, SPECjvm2008 computes a single quantity intended to reflect the overall performance of the system on all the sub-benchmarks executed during the run. The basic method used to compute the combined result is to compute a geometric mean. However, because it is desired to reflect performance on various application areas more or less equally, the computation done is a little more complex than a straight geometric mean of the sub-benchmark results.

In order to include multiple sub-benchmarks that represent the same general application area while still treating various application areas equally, an intermediate result is computed for certain groups of the sub-benchmarks before they are combined into the overall throughput result. In particular, for these groups of sub-benchmarks

  • COMPILER: compiler.compiler, compiler.sunflow

  • CRYPTO: crypto.aes, crypto.rsa, crypto.signverify

  • SCIMARK: scimark.fft.large, scimark.lu.large, scimark.sor.large, scimark.sparse.large, scimark.fft.small, scimark.lu.small, scimark.sor.small, scimark.sparse.small, scimark.monte_carlo

  • STARTUP: all sub-benchmarks whose names begin with "startup." (see Appendix A for the complete list)

  • XML: xml.transform, xml.validation

the geometric mean of sub-benchmark results in each group is computed. The overall throughput result is then computed as the geometric mean of these group results and the results from the other sub-benchmarks.
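As a sketch of this computation (the scores below are hypothetical, and this is not the harness code), first compute the geometric mean within each group, then the geometric mean of the group results together with the ungrouped sub-benchmark results:

     import java.util.*;

     public class CompositeScore {
         // geometric mean computed via logs to avoid overflow
         static double geomean(List<Double> xs) {
             double logSum = 0;
             for (double x : xs) logSum += Math.log(x);
             return Math.exp(logSum / xs.size());
         }

         public static void main(String[] args) {
             // hypothetical ops/m results: one group and one ungrouped benchmark
             List<Double> crypto = Arrays.asList(120.0, 150.0, 130.0); // crypto.* group
             double derby = 95.0;                                      // ungrouped
             double composite = geomean(Arrays.asList(geomean(crypto), derby));
             System.out.printf("composite: %.2f ops/m%n", composite);
         }
     }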

While throughput results obtained from runs that execute only selected sub-benchmarks may be interesting and useful for research purposes, only the overall throughput results from runs that execute all the sub-benchmarks (and satisfy several other conditions as well) are deemed to represent the performance of the system on SPECjvm2008 per se. The conditions under which an overall throughput result represents a SPECjvm2008 metric are discussed in detail in the "Run and Reporting Rules".

7 Results Reports

The benchmark results reports for a single run are written to a single results directory as described above. The results files include some or all of the following.

     SPECjvm2008.<num>.raw
     SPECjvm2008.<num>.html 
     SPECjvm2008.<num>.txt 
     SPECjvm2008.<num>.sub 
     SPECjvm2008.<num>.summary 

If the HTML results file exists, there will also be an images subdirectory containing .jpg files that are referenced by the HTML results file.

The XML results file (.raw) is generated from data in the benchmark's internal data structures at the end of a run; it is intended primarily for consumption by other software, and it is the form required for submission to SPEC for review and publication. The HTML and text reports are generated from the XML results file and are intended primarily for viewing by humans. All of them contain substantially the same information.

7.1 Results Reports Contents

The results reports contain the system information supplied by the user, summary information about the run and detailed information about the run.

7.1.1 System Information in the Results Reports

The system information in the results reports is categorized as general information about the run (such as time and date), information about the JRE on which the benchmark was run, information about the OS, and information about the hardware. All of the system information properties that SPECjvm2008 is capable of reporting are documented in the example reporter properties file (props/specjvm.reporter.properties).

7.1.2 Summary Information in the Results Reports

The summary information includes the score on each workload, the composite score, and, if there were any, violations that make the run noncompliant. For a compliant run, the composite score represents a value of the SPECjvm2008 metric; for a noncompliant run the composite score is just a number that may or may not be interesting for research purposes.

7.1.3 Detailed Information in the Results Reports

For each workload executed in the run, the results report includes the values of the operational parameters in effect for that workload, the score for each iteration (or an indication that the iteration did not complete successfully), and the start and end times for each execution of the workload by each thread.

7.1.4 Navigating the HTML Results Report

Most of the information in the HTML report is presented in a single main page. However, below the graph of the scores by iteration for each workload, there is a link named "details". Clicking on this link will display a graph showing the performance of individual threads for that workload.

8 Producing and Submitting Results

8.1 Producing Results

Recommended steps for producing a compliant result:

  • Edit the reporter property file, SPECjvm2008/props/specjvm.reporter.properties, with system and submission information.

    • Make sure the JVM arguments property and heap size properties are not set in the base run.
  • Edit the harness configuration property file, SPECjvm2008/props/specjvm.properties. For many submissions the default values will be fine and no updates are needed.
    • Make sure that this configuration file points to the reporter information file.
    • Make sure that this does not include any changes to the run time.
  • Run the base run, either using run scripts or directly from the command line as follows:
        java -jar SPECjvm2008.jar --base --propfile props/specjvm.properties
  • Update properties files for a peak run, including an update of the JVM arguments property.
  • Run the peak run, either using run scripts or directly from the command line, for example:
        java -Xms3000m -Xmx3000m -jar SPECjvm2008.jar --peak --propfile props/specjvm.properties
  • Review the results in the SPECjvm2008/results/ folder.

8.2 Submitting Results

Here are the steps for submitting results:

  • Prepare the raw file(s) with the command:

        java -jar SPECjvm2008.jar --reporter --prepare <base raw file> <optional peak raw file>

    This will first produce a new raw file, which is a merged version of the previous files.
    Then it will produce a zip file containing the new raw file.

  • Check your raw file by copying your zip file to a new location and running:
        java -jar SPECjvm2008.jar --reporter --specprocess <zip file>

    This should pass without complaints and create a brief summary report, which links to the full reports for each run in subfolders.

  • Mail the zip file to subjvm2008@spec.org.

9 Tuning notes

9.1 SPECjvm2008 properties

To control SPECjvm2008 behavior you can use SPECjvm2008 options. These options can be specified either on the command line or in a property file.

  • To specify options in a property file use:

        java -jar SPECjvm2008.jar -pf <your_file_name> 

    This tells the suite which property file should be used. The desired properties should be set in this property file.

  • To specify an option on the command line use:

        java -jar SPECjvm2008.jar -D<property_name>=<property_value>
  • For most of the frequently used options a simpler form is available; use the '-help' option to see the short names.

Properties specified on the command line override properties specified in a property file, even if the property file specification comes after the command line options. So, if props.file contains the line props_a=value_b and you run:

    java -jar SPECjvm2008.jar -Dprops_a=value_c -pf props.file 

the property props_a will be set to value_c.

9.2 Benchmark specific properties

For benchmark specific properties, you can append the benchmark name to apply a property only to a certain benchmark, for example:

    java -jar SPECjvm2008.jar -Dspecjvm.benchmark.threads.scimark.monte_carlo=3 -Dspecjvm.benchmark.threads=2 all 

This will run 2 benchmark threads for all benchmarks except scimark.monte_carlo, which will have 3, and the startup benchmarks, which always run the workload single-threaded.

9.3 Threads specification

The suite uses the number of available logical CPUs to compute the number of benchmark threads to use for each benchmark. Some benchmarks scale the number of benchmark threads; for example, the sunflow benchmark uses only half as many benchmark threads as the number of available logical CPUs, since each benchmark instance (thread) kicks off four threads internally. It is possible to override the number of available logical CPUs using the property specjvm.hardware.threads.override. Overriding it makes the existing suite computations use the new value and scales the workload accordingly. This therefore differs from using the argument -bt.
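For example (the value 16 is arbitrary), to make the suite compute thread counts and workload size as if the machine had 16 logical CPUs:

     java -jar SPECjvm2008.jar -Dspecjvm.hardware.threads.override=16 sunflow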

9.4 How to disable checksum verification?

To disable checksum verification of the kit, use the '-ikv' option. This can be used for testing purposes, but cannot be used in a compliant run.

9.5 Run time tuning

It's possible to set the number of iterations, the iteration time and the warmup time.

To set the number of iterations, use the '-i <iter_num>' option. With this option the suite will run one warmup iteration and then <iter_num> iterations. If <iter_num> is set to -1, it will continue to run an infinite number of iterations.

To specify the duration of the warmup phase and of an iteration, use the '-it' option to set the iteration time and the '-wt' option to set the warmup time.

So, for example:

    java -jar SPECjvm2008.jar -i 4 -wt 17 -it 4711 

will run 4 iterations with 17 seconds of warmup and each iteration will run for 4711 seconds.

Another example is:

    java -jar SPECjvm2008.jar -i -1 -wt 0 -it 20 

which will skip the warmup phase, then run an infinite number of iterations, 20 seconds each.

9.6 Where can SPECjvm2008 results be stored?

SPECjvm2008 results will by default be stored in a results folder, based on where the execution takes place. It is possible to redirect the results using the specjvm.result.dir property. Example:

    java -Dspecjvm.result.dir=/home/results/jvm08-results/ -jar /home/tests/SPECjvm2008/SPECjvm2008.jar

9.7 Where can SPECjvm2008 be run from?

SPECjvm2008 can be run from any directory; however, specjvm.home.dir must be specified as a system property and point to the SPECjvm2008 location (where SPECjvm2008.jar is located). Example:

    java -Dspecjvm.home.dir=/home/tests/SPECjvm2008 -jar /home/tests/SPECjvm2008/SPECjvm2008.jar

In the above example, the results will be produced where you execute, but can be controlled with property specjvm.result.dir. Example:

    java -Dspecjvm.home.dir=/home/tests/SPECjvm2008 -Dspecjvm.result.dir=/home/results/jvm08-results/ -jar /home/tests/SPECjvm2008/SPECjvm2008.jar

9.8 How to disable report generation?

By default the harness produces a raw file (in xml format) with the results from the benchmark runs. The result is stored after each iteration, so that writing it does not impact the measurement period, but it is also written out continuously during the run, so that no extra data has to be kept between benchmarks or even between iterations, which could affect the benchmark result.

After the full suite is run, the reporter is invoked and produces an html report, a text report and a small summary file. To skip generating these reports, the option '-ctf false' skips the text report and '-chf false' skips the html report.

To skip generating results altogether, the option '-crf false' can be used, so that no results are written to file. This means that there will be no raw file, and it will not be possible to post-generate any html or text report.

9.9 How to configure and write Analyzers?

By default no analyzers are run. To run one or more, set the property specjvm.benchmark.analyzer.names to the analyzer(s) to run. If more than one is specified, separate them with a blank space. Use the property specjvm.benchmark.analyzer.frequency to control how often the analyzers run when polling. The name of an analyzer is the same as the name of the Analyzer class.
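For example, to run a single analyzer while running the compress benchmark (HeapUsageAnalyzer is a hypothetical analyzer name, as sketched at the end of this section):

     java -jar SPECjvm2008.jar -Dspecjvm.benchmark.analyzer.names=HeapUsageAnalyzer compress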

An analyzer is a class that extends the class AnalyzerBase and has the package name spec.harness.analyzers. It must implement the method execute(). If the analyzer should listen to events rather than poll, it is recommended to implement an empty execute() method. To store a result, use the report method. To store a result that should be plotted in the graph over the run, pass a TYInfo (Time-axis, Y-axis) object. To store a summary result for the iteration (recommended to do in the tearDown method), pass an AnalyzerResult object.

To implement your own analyzer, the original guide links to an example of a polling analyzer and an example of an event-based (listening, callback-based) analyzer; a rough sketch of a polling analyzer follows.
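This sketch uses only the class and method names mentioned above; the exact signatures of report(), TYInfo and AnalyzerResult are assumptions, so consult the examples and property files in the kit for the real API:

     package spec.harness.analyzers;

     // Sketch of a polling analyzer. The signatures of TYInfo,
     // AnalyzerResult and report() are assumptions; see the kit's
     // example analyzers for the exact API.
     public class HeapUsageAnalyzer extends AnalyzerBase {

         private long peakUsed = 0;

         // called by the harness at the configured polling frequency
         public void execute() {
             Runtime rt = Runtime.getRuntime();
             long used = rt.totalMemory() - rt.freeMemory();
             peakUsed = Math.max(peakUsed, used);
             // plotted on the details chart (time axis / y axis)
             report(new TYInfo(System.currentTimeMillis(), used));
         }

         // summary result for the iteration
         public void tearDown() {
             report(new AnalyzerResult("Peak heap used", peakUsed, "bytes"));
             peakUsed = 0;
         }
     }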

9.10 How to get JVM arguments from the runtime?

The harness can be configured to use a property file for reporter information about the system. It is recommended to keep this in an additional file, separate from the harness configuration; the additional file is specified using the property specjvm.additional.properties.file. This properties file usually includes the JRE arguments. It is possible to parse the command line arguments from the information available in the Runtime JMXBean, including parsing the initial and maximum heap settings. This is done if the -pja option is specified on the command line for the harness. There is, however, no standard for what is passed in by the launcher and then reported in this field. Some JVMs also include property information beyond what is specified on the command line, added by the launcher. So use this option as a shorthand when possible.

9.11 Miscellaneous

  • The suite verifies benchmark output; to switch this validation off one can use the specjvm.verify property.
  • You can run a group of the benchmarks by specifying a group as a benchmark name. The following groups are available: 'crypto', 'xml', 'compiler', 'scimark', 'scimark.large', 'scimark.small', 'startup', 'throughput' and 'all':
        java -jar SPECjvm2008.jar compiler 

    will run the compiler.compiler and the compiler.sunflow benchmarks.

Appendix A

SPECjvm2008 command line options

These options are also available with the command

     java -jar SPECjvm2008.jar --help
Arg | Long arg | Value | Property name | Description
-h | --help | | | Show this help.
 | --version | | | Print SPECjvm2008 version and exit.
-sv | --showversion | | | Print SPECjvm2008 version and continue.
 | --base | | | Run the base compliant run of SPECjvm2008 (default, unless jvm args are specified).
 | --peak | | | Run the peak compliant run of SPECjvm2008.
 | --lagom | | | Run the Lagom benchmark suite, a version of SPECjvm2008 that uses a fixed workload.
-pf | --propfile | string | specjvm.propfile | Use this properties file.
-i | --iterations | int | specjvm.miniter, specjvm.maxniter | How many iterations to run. 'inf' means an infinite number.
-mi | --miniter | int | specjvm.miniter | Minimum number of iterations.
-ma | --maxiter | int | specjvm.maxniter | Maximum number of iterations.
-it | --iterationtime | time | specjvm.iteration.time | How long one iteration should be. The time is specified as an integer, assumed to be in seconds, or as an integer with a unit, for example 4m (4 minutes). Units available are ms, s, m and h. If the iteration time is too short, based on the warmup result, it will be adjusted so that at least 5 operations are expected to finish.
-fit | --forceIterationTime | time | specjvm.iteration.time, specjvm.iteration.time.forced | As iteration time, but the time will not be adjusted based on the warmup result.
-ja | --jvmArgs | string | specjvm.startup.jvm_options | JVM options used for startup subtests.
-jl | --jvmLauncher | path | specjvm.benchmark.startup.launcher | JVM launcher used for startup subtests.
-wt | --warmuptime | time | specjvm.benchmark.warmup.time | How long the warmup should be. The time format is the same as for iteration time.
-ops | --operations | int | specjvm.fixed.operations, specjvm.run.type | How many operations each iteration will consist of. The run is then a fixed workload and the iteration time is ignored.
-bt | --benchmarkThreads | int | specjvm.benchmark.threads | How many benchmark threads to use.
-r | --reporter | raw file name | | Invokes the reporter with the given file(s). The benchmarks will not be run.
-v | --verbose | | specjvm.print.verbose, specjvm.print.progress | Print verbose info (harness only).
-pja | --parseJvmArgs | | | Parse jvm arguments info from the command line, including heap settings (uses JMXBean info). This is not done by default.
-coe | --continueOnError | | specjvm.continue.on.error | Continue to run the suite even if one test fails.
-ict | --ignoreCheckTest | | specjvm.run.initial.check | Do not run the check benchmark.
-ikv | --ignoreKitValidation | | specjvm.run.checksum.validation | Do not run checksum validation of the benchmark kit.
-crf | --createRawFile | boolean | specjvm.create.xml.report | Whether to generate a raw file.
-ctf | --createTextFile | boolean | specjvm.create.txt.report | Whether to generate a text report. If raw is disabled, so is txt.
-chf | --createHtmlFile | boolean | specjvm.create.html.report | Whether to generate an html report. If raw is disabled, so is html.
-xd | --xmlDir | path | specjvm.benchmark.xml.validation.input.dir | Sets the path to the xml input files.
 | <benchmark(s)> | | specjvm.benchmarks | Name of benchmark(s) to run. By default all submission benchmarks will be selected. 'all' means all submission benchmarks will be run. See SPECjvm2008 workload names for all values.

SPECjvm2008 workload names

Startup benchmarks:

     startup.helloworld, startup.compiler.compiler, startup.compiler.sunflow,
     startup.compress, startup.crypto.aes, startup.crypto.rsa, startup.crypto.signverify,
     startup.mpegaudio, startup.scimark.fft, startup.scimark.lu, startup.scimark.monte_carlo,
     startup.scimark.sor, startup.scimark.sparse, startup.serial, startup.sunflow,
     startup.xml.transform, startup.xml.validation

Other benchmarks:

     compiler.compiler, compiler.sunflow, compress, crypto.aes, crypto.rsa,
     crypto.signverify, derby, mpegaudio, scimark.fft.large, scimark.lu.large,
     scimark.sor.large, scimark.sparse.large, scimark.fft.small, scimark.lu.small,
     scimark.sor.small, scimark.sparse.small, scimark.monte_carlo, serial, sunflow,
     xml.transform, xml.validation
