Batch Jobs

The Batch Jobs page displays the status of executed jobs, along with other job details. For any individual job:

  • The history can be viewed, showing the job progress as it is executed and the exit code on completion.
    Historical job executions that are no longer required can also be deleted from this page.
  • System options and environment variables specified for the job can be viewed.
  • The job log and results can be downloaded from a job's outputs.

Job States

A batch job goes through a number of states as it is processed by Altair SLC Hub.

Creating
    Jobs go through a two-phase creation process: first the job object is created, then any required inputs are uploaded. A job should only exist in the Creating state for a few moments; if it remains there longer, this typically indicates that the client disconnected during the process. Jobs that remain in the Creating state for too long are automatically removed.

Pending
    Once a job is created, it is placed in the Pending state. A job might remain pending because:
      • There are too many other jobs being executed, and therefore insufficient resources to place this job.
      • There is a constraint on where the job can be run, and that constraint cannot be satisfied.

Executing
    Once a job has been committed to a node, it is placed in the Executing state. This means the job is being prepared for execution, is currently running, or its execution results are being recovered. Individual events in the job status indicate exactly what phase of execution the job is in.

Completed successfully
    The job has run to completion with a success exit code (either a zero exit code or one of the additional exit codes that have been explicitly defined as indicating successful completion).

Completed with error
    The job has run to completion but produced an unexpected exit code.

Failed
    Something has gone wrong during execution, for example a failure to recover result files or a failure to communicate with a host. This state is accompanied by a reason text.

Cancelled
    The batch job was cancelled by the user.

It is possible to cancel a job that is in the 'Pending' or 'Executing' state (as may be necessary if a runaway job is consuming too many resources).

Running batch jobs

Altair SLC Hub provides a facility for executing batch jobs from a command line tool, hubcli. The batch jobs can consist of a number of different types of workload:

  • SAS language programs
  • Shell scripts (Linux bash, PowerShell, Windows BAT)
  • Programs from an Altair SLC Hub package stored in an Altair SLC Hub artefact repository

The hubcli command enables you to authenticate with Altair SLC Hub and to submit and monitor batch jobs. The Altair SLC Hub Portal also provides administrative facilities to list and manage the batch job workloads.

Running Altair SLC Hub Package Programs

A program can be run from an Altair SLC Hub package that has been authored in Altair Analytics Workbench and uploaded into Altair SLC Hub. This can be done with the hubcli job runpkg command.

There is no requirement for the package to be deployed to use the hubcli job runpkg command, only that the package has been uploaded to Altair SLC Hub.

If you specify the --program parameter to invoke a program from an Altair SLC Hub package, you should use the API entry point for the program, not the path to the source file within the package.

For example, the following shows the editor for a program in Altair Analytics Workbench.

[Image: the program editor in Altair Analytics Workbench, showing the Program path field]

The name of the source file in the package is Program1.sas. The value required for the --program parameter is the value of the "Program path" field, in this example, the path is examples/example1. The command to invoke this program using the hubcli command is:

hubcli job runpkg --program examples/example1 [other required parameters]

Other required parameters for this program include the repository, group, name, and version arguments.
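Putting these together, a complete invocation takes a form like the following. Note that the flag spellings shown for the repository, group, name, and version arguments, and all of the example values, are assumptions for illustration only; consult the hubcli help output for the exact syntax supported by your installation:

hubcli job runpkg --repository myrepo --group examples --name example-package --version 1.0.0 --program examples/example1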

Configuration

Token lifespan

The lifespan of the refresh and access tokens is governed by standard Altair SLC Hub configuration settings. See the etc\config.d\auth.yaml file for more information about their use and current default values.

The lifespan of the tokens can be set such that they affect all clients (the Altair SLC Hub portal, Altair Analytics Workbench, and the hubcli command), or they can be set individually for each of those client types. More details and examples are given in the etc\config.d\auth.yaml file.
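As a sketch only, a global token lifespan setting in a YAML configuration file might look like the fragment below. The key names used here are hypothetical, chosen for illustration; the actual setting names, durations, and per-client override syntax are documented in the etc\config.d\auth.yaml file itself:

# Hypothetical example only; real key names are documented in auth.yaml.
accessTokenLifespan: 15m     # how long an access token remains valid
refreshTokenLifespan: 8h     # how long a refresh token remains valid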