CLI Reference
This documentation is automatically generated from the CLI help text.
To regenerate, run:
cargo run --bin generate-cli-docs --features "client,tui,plot_resources"
Command-Line Help for torc
This document contains the help content for the torc command-line program.
Command Overview:
- CLI Reference
- Command-Line Help for torc
- torc
- torc run
- torc submit
- torc submit-slurm
- torc watch
- torc recover
- torc workflows (create, create-slurm, new, list, get, update, cancel, delete, archive, submit, run, initialize, reinitialize, status, reset-status, execution-plan, list-actions, is-complete, export, import, sync-status)
- torc compute-nodes (get, list)
- torc files (create, list, get, update, delete, list-required-existing)
- torc jobs (create, create-from-file, list, get, update, delete, delete-all, list-resource-requirements)
- torc job-dependencies (job-job, job-file, job-user-data)
- torc resource-requirements (create, list, get, update, delete)
- torc events (create, list, monitor, get-latest-event, delete)
- torc results (list, get, delete)
- torc user-data (create, list, get, update, delete, delete-all, list-missing)
- torc slurm (create, update, list, get, delete, schedule-nodes, parse-logs, sacct, generate, regenerate)
- torc remote (add-workers, add-workers-from-file, remove-worker, list-workers, run, status, stop, collect-logs, delete-logs)
- torc scheduled-compute-nodes (get, list, list-jobs)
- torc hpc (list, detect, show, partitions, match)
- torc reports (check-resource-utilization, results, summary)
- torc config (show, paths, init, validate)
- torc tui
- torc plot-resources
- torc completions
torc
Torc workflow orchestration system
Usage: torc [OPTIONS] <COMMAND>
Subcommands:
- run — Run a workflow locally (create from spec file or run existing workflow by ID)
- submit — Submit a workflow to scheduler (create from spec file or submit existing workflow by ID)
- submit-slurm — Submit a workflow to Slurm with auto-generated schedulers
- watch — Watch a workflow and automatically recover from failures
- recover — Recover a Slurm workflow from failures (one-shot)
- workflows — Workflow management commands
- compute-nodes — Compute node management commands
- files — File management commands
- jobs — Job management commands
- job-dependencies — Job dependency and relationship queries
- resource-requirements — Resource requirements management commands
- events — Event management commands
- results — Result management commands
- user-data — User data management commands
- slurm — Slurm scheduler commands
- remote — Remote worker execution commands (SSH-based distributed execution)
- scheduled-compute-nodes — Scheduled compute node management commands
- hpc — HPC system profiles and partition information
- reports — Generate reports and analytics
- config — Manage configuration files and settings
- tui — Interactive terminal UI for managing workflows
- plot-resources — Generate interactive HTML plots from resource monitoring data
- completions — Generate shell completions
Options:
- --log-level <LOG_LEVEL> — Log level (error, warn, info, debug, trace)
- -f, --format <FORMAT> — Output format (table or json). Default value: table
- --url <URL> — URL of the torc server
- --username <USERNAME> — Username for basic authentication
- --password <PASSWORD> — Password for basic authentication (will prompt if a username is provided but a password is not)
torc run
Run a workflow locally (create from spec file or run existing workflow by ID)
Usage: torc run [OPTIONS] <WORKFLOW_SPEC_OR_ID>
Arguments:
- <WORKFLOW_SPEC_OR_ID> — Path to workflow spec file (JSON/JSON5/YAML) or workflow ID
Options:
- --max-parallel-jobs <MAX_PARALLEL_JOBS> — Maximum number of parallel jobs to run concurrently
- --num-cpus <NUM_CPUS> — Number of CPUs available
- --memory-gb <MEMORY_GB> — Memory in GB
- --num-gpus <NUM_GPUS> — Number of GPUs available
- -p, --poll-interval <POLL_INTERVAL> — Job completion poll interval in seconds
- -o, --output-dir <OUTPUT_DIR> — Output directory for jobs
- --skip-checks — Skip validation checks (e.g., scheduler node requirements). Use with caution. Default value: false
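For example, assuming a spec file named workflow.yaml and an existing workflow ID 123 (both placeholders):
# Run a spec file locally with explicit resource limits
torc run workflow.yaml --max-parallel-jobs 8 --num-cpus 8 --memory-gb 32
# Re-run an existing workflow by ID into a custom output directory
torc run 123 -o run_output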
torc submit
Submit a workflow to scheduler (create from spec file or submit existing workflow by ID)
Requires the workflow to have an on_workflow_start action with schedule_nodes. For Slurm workflows
without pre-configured schedulers, use submit-slurm instead.
Usage: torc submit [OPTIONS] <WORKFLOW_SPEC_OR_ID>
Arguments:
- <WORKFLOW_SPEC_OR_ID> — Path to workflow spec file (JSON/JSON5/YAML) or workflow ID
Options:
- -i, --ignore-missing-data — Ignore missing data. Default value: false
- --skip-checks — Skip validation checks (e.g., scheduler node requirements). Use with caution. Default value: false
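For example, with a placeholder spec file and workflow ID:
# Create and submit a workflow from a spec file
torc submit workflow.yaml
# Submit an existing workflow by ID, tolerating missing data
torc submit 123 --ignore-missing-data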
torc submit-slurm
Submit a workflow to Slurm with auto-generated schedulers
Automatically generates Slurm schedulers based on job resource requirements and HPC profile.
WARNING: This command uses heuristics to generate schedulers and workflow actions. For complex workflows with unusual dependency patterns, the generated configuration may not be optimal and could waste allocation time.
RECOMMENDED: Preview the generated configuration first with:
torc slurm generate --account
Review the schedulers and actions to ensure they are appropriate for your workflow before submitting. You can save the output and submit manually:
torc slurm generate --account
Usage: torc submit-slurm [OPTIONS] --account <ACCOUNT> <WORKFLOW_SPEC>
Arguments:
- <WORKFLOW_SPEC> — Path to workflow spec file (JSON/JSON5/YAML/KDL)
Options:
- --account <ACCOUNT> — Slurm account to use for allocations
- --hpc-profile <HPC_PROFILE> — HPC profile to use (auto-detected if not specified)
- --single-allocation — Bundle all nodes into a single Slurm allocation per scheduler. By default, creates one Slurm allocation per node (N×1 mode), which allows jobs to start as nodes become available and provides better fault tolerance. With this flag, creates one large allocation with all nodes (1×N mode), which requires all nodes to be available simultaneously but uses a single sbatch.
- -i, --ignore-missing-data — Ignore missing data. Default value: false
- --skip-checks — Skip validation checks (e.g., scheduler node requirements). Use with caution. Default value: false
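For example, where my_account and workflow.yaml are placeholders:
# Submit with one Slurm allocation per node (default N×1 mode)
torc submit-slurm --account my_account workflow.yaml
# Submit with one large allocation covering all nodes (1×N mode)
torc submit-slurm --account my_account --single-allocation workflow.yaml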
torc watch
Watch a workflow and automatically recover from failures.
Monitors a workflow until completion. With --recover, automatically diagnoses failures, adjusts
resource requirements, and resubmits jobs.
Usage: torc watch [OPTIONS] <WORKFLOW_ID>
Usage Modes
- Basic monitoring (no recovery): torc watch 123
  Reports failures and exits. Use for manual intervention or AI-assisted recovery.
- With automatic recovery (--recover): torc watch 123 --recover
  Automatically diagnoses OOM/timeout failures, adjusts resources, and retries. Runs until all jobs complete or max retries exceeded.
- With auto-scheduling (--auto-schedule): torc watch 123 --auto-schedule
  Automatically submits new Slurm allocations when retry jobs are waiting. Essential for workflows using failure handlers that create retry jobs.
Arguments
- <WORKFLOW_ID> — Workflow ID to watch
Options
Polling:
- -p, --poll-interval <POLL_INTERVAL> — Poll interval in seconds (default: 60)
- -o, --output-dir <OUTPUT_DIR> — Output directory for job files (default: output)
- -s, --show-job-counts — Show job counts by status during polling. WARNING: Can cause high server load for large workflows.
Recovery:
- -r, --recover — Enable automatic failure recovery
- -m, --max-retries <MAX_RETRIES> — Maximum number of recovery attempts (default: 3)
- --memory-multiplier <MEMORY_MULTIPLIER> — Memory multiplier for OOM failures (default: 1.5)
- --runtime-multiplier <RUNTIME_MULTIPLIER> — Runtime multiplier for timeout failures (default: 1.5)
- --retry-unknown — Also retry jobs with unknown failure causes (not just OOM or timeout)
- --recovery-hook <RECOVERY_HOOK> — Custom recovery script for unknown failures. The workflow ID is passed as an argument and via the TORC_WORKFLOW_ID environment variable.
Auto-scheduling:
- --auto-schedule — Automatically schedule new compute nodes when needed
- --auto-schedule-threshold <N> — Minimum retry jobs before auto-scheduling when schedulers exist (default: 5)
- --auto-schedule-cooldown <SECONDS> — Cooldown between auto-schedule attempts (default: 1800 / 30 min)
- --auto-schedule-stranded-timeout <SECONDS> — Schedule stranded jobs after this timeout even if below threshold (default: 7200 / 2 hrs). Set to 0 to disable.
Auto-Scheduling Behavior
When --auto-schedule is enabled:
- No schedulers available: Immediately submits new allocations if ready jobs exist.
- Threshold exceeded: If retry jobs (attempt_id > 1) exceed --auto-schedule-threshold while schedulers are running, submits additional allocations after cooldown.
- Stranded jobs: If retry jobs are below threshold but waiting longer than --auto-schedule-stranded-timeout, schedules anyway to prevent indefinite waiting.
Examples
# Basic: watch until completion, report failures
torc watch 123
# Recovery: automatically fix OOM/timeout failures
torc watch 123 --recover
# Recovery with aggressive resource increases
torc watch 123 --recover --memory-multiplier 2.0 --runtime-multiplier 2.0
# Recovery including unknown failures (transient errors)
torc watch 123 --recover --retry-unknown
# Auto-schedule: ensure retry jobs get scheduled
torc watch 123 --auto-schedule
# Full production setup: recovery + auto-scheduling
torc watch 123 --recover --auto-schedule
# Custom auto-schedule settings
torc watch 123 --auto-schedule \
--auto-schedule-threshold 10 \
--auto-schedule-cooldown 3600 \
--auto-schedule-stranded-timeout 14400
See Also
- torc recover — One-shot recovery (no continuous monitoring)
- Automatic Failure Recovery — Detailed guide
torc recover
Recover a Slurm workflow from failures (one-shot).
Diagnoses job failures (OOM, timeout), adjusts resource requirements, and resubmits jobs. Use after
a workflow has completed with failures. For continuous monitoring, use torc watch --recover
instead.
Usage: torc recover [OPTIONS] <WORKFLOW_ID>
Arguments
- <WORKFLOW_ID> — Workflow ID to recover
Options
- -o, --output-dir <OUTPUT_DIR> — Output directory for job files (default: output)
- --memory-multiplier <MEMORY_MULTIPLIER> — Memory multiplier for OOM failures (default: 1.5)
- --runtime-multiplier <RUNTIME_MULTIPLIER> — Runtime multiplier for timeout failures (default: 1.4)
- --retry-unknown — Also retry jobs with unknown failure causes
- --recovery-hook <RECOVERY_HOOK> — Custom recovery script for unknown failures
- --dry-run — Show what would be done without making any changes
When to Use
Use torc recover for:
- One-shot recovery after a workflow has completed with failures
- Manual investigation before retrying (use --dry-run first)
- Workflows where you want to inspect failures before retrying
Use torc watch --recover instead for:
- Continuous monitoring of long-running workflows
- Fully automated recovery without manual intervention
- Production workflows that should self-heal
Examples
# Basic recovery
torc recover 123
# Dry run to preview changes without modifying anything
torc recover 123 --dry-run
# Custom resource multipliers
torc recover 123 --memory-multiplier 2.0 --runtime-multiplier 1.5
# Also retry unknown failures (not just OOM/timeout)
torc recover 123 --retry-unknown
# With custom recovery hook for domain-specific fixes
torc recover 123 --recovery-hook 'bash fix-cluster.sh'
See Also
- torc watch --recover — Continuous monitoring with automatic recovery
- Automatic Failure Recovery — Detailed guide
torc workflows
Workflow management commands
Usage: torc workflows <COMMAND>
Subcommands:
- create — Create a workflow from a specification file (supports JSON, JSON5, YAML, and KDL formats)
- create-slurm — Create a workflow with auto-generated Slurm schedulers
- new — Create a new empty workflow
- list — List workflows
- get — Get a specific workflow by ID
- update — Update an existing workflow
- cancel — Cancel a workflow and all associated Slurm jobs
- delete — Delete one or more workflows
- archive — Archive or unarchive one or more workflows
- submit — Submit a workflow: initialize if needed and schedule nodes for on_workflow_start actions. This command requires the workflow to have an on_workflow_start action with schedule_nodes
- run — Run a workflow locally on the current node
- initialize — Initialize a workflow, including all job statuses
- reinitialize — Reinitialize a workflow. This will reinitialize all jobs with a status of canceled, submitting, pending, or terminated. Jobs with a status of done will also be reinitialized if an input_file or user_data record has changed
- status — Get workflow status
- reset-status — Reset workflow and job status
- execution-plan — Show the execution plan for a workflow specification or existing workflow
- list-actions — List workflow actions and their statuses (useful for debugging action triggers)
- is-complete — Check if a workflow is complete
- export — Export a workflow to a portable JSON file
- import — Import a workflow from an exported JSON file
- sync-status — Synchronize job statuses with Slurm (detect and fail orphaned jobs)
torc workflows create
Create a workflow from a specification file (supports JSON, JSON5, YAML, and KDL formats)
Usage: torc workflows create [OPTIONS] --user <USER> <FILE>
Arguments:
- <FILE> — Path to specification file containing a WorkflowSpec
  Supported formats:
  - JSON (.json): Standard JSON format
  - JSON5 (.json5): JSON with comments and trailing commas
  - YAML (.yaml, .yml): Human-readable YAML format
  - KDL (.kdl): KDL document format
  Format is auto-detected from the file extension, with fallback parsing attempted.
Options:
- -u, --user <USER> — User that owns the workflow (defaults to USER environment variable)
- --no-resource-monitoring — Disable resource monitoring (default: enabled with summary granularity and 5s sample rate). Default value: false
- --skip-checks — Skip validation checks (e.g., scheduler node requirements). Use with caution. Default value: false
- --dry-run — Validate the workflow specification without creating it (dry-run mode). Returns a summary of what would be created, including job count after parameter expansion
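For example, with a placeholder spec file:
# Validate the spec without creating anything, then create it
torc workflows create --user $USER --dry-run workflow.yaml
torc workflows create --user $USER workflow.yaml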
torc workflows create-slurm
Create a workflow with auto-generated Slurm schedulers
Automatically generates Slurm schedulers based on job resource requirements and HPC profile. For Slurm workflows without pre-configured schedulers.
Usage: torc workflows create-slurm [OPTIONS] --account <ACCOUNT> --user <USER> <FILE>
Arguments:
- <FILE> — Path to specification file containing a WorkflowSpec
Options:
- --account <ACCOUNT> — Slurm account to use for allocations
- --hpc-profile <HPC_PROFILE> — HPC profile to use (auto-detected if not specified)
- --single-allocation — Bundle all nodes into a single Slurm allocation per scheduler. By default, creates one Slurm allocation per node (N×1 mode), which allows jobs to start as nodes become available and provides better fault tolerance. With this flag, creates one large allocation with all nodes (1×N mode), which requires all nodes to be available simultaneously but uses a single sbatch.
- -u, --user <USER> — User that owns the workflow (defaults to USER environment variable)
- --no-resource-monitoring — Disable resource monitoring (default: enabled with summary granularity and 5s sample rate). Default value: false
- --skip-checks — Skip validation checks (e.g., scheduler node requirements). Use with caution. Default value: false
- --dry-run — Validate the workflow specification without creating it (dry-run mode)
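For example, with placeholder account and spec file names:
# Create (without submitting) a workflow with auto-generated Slurm schedulers
torc workflows create-slurm --account my_account --user $USER workflow.yaml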
torc workflows new
Create a new empty workflow
Usage: torc workflows new [OPTIONS] --name <NAME> --user <USER>
Options:
- -n, --name <NAME> — Name of the workflow
- -d, --description <DESCRIPTION> — Description of the workflow
- -u, --user <USER> — User that owns the workflow (defaults to USER environment variable)
torc workflows list
List workflows
Usage: torc workflows list [OPTIONS]
Options:
- -u, --user <USER> — User to filter by (defaults to USER environment variable)
- --all-users — List workflows for all users (overrides --user)
- -l, --limit <LIMIT> — Maximum number of workflows to return. Default value: 10000
- --offset <OFFSET> — Offset for pagination (0-based). Default value: 0
- --sort-by <SORT_BY> — Field to sort by
- --reverse-sort — Reverse sort order
- --archived-only — Show only archived workflows. Default value: false
- --include-archived — Include both archived and non-archived workflows. Default value: false
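For example:
# List up to 20 workflows across all users, as JSON
torc -f json workflows list --all-users --limit 20
# Page through archived workflows only
torc workflows list --archived-only --limit 50 --offset 50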
torc workflows get
Get a specific workflow by ID
Usage: torc workflows get [OPTIONS] [ID]
Arguments:
- <ID> — ID of the workflow to get (optional - will prompt if not provided)
Options:
- -u, --user <USER> — User to filter by (defaults to USER environment variable)
torc workflows update
Update an existing workflow
Usage: torc workflows update [OPTIONS] [ID]
Arguments:
- <ID> — ID of the workflow to update (optional - will prompt if not provided)
Options:
- -n, --name <NAME> — Name of the workflow
- -d, --description <DESCRIPTION> — Description of the workflow
- --owner-user <OWNER_USER> — User that owns the workflow
torc workflows cancel
Cancel a workflow and all associated Slurm jobs
Usage: torc workflows cancel [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — ID of the workflow to cancel (optional - will prompt if not provided)
torc workflows delete
Delete one or more workflows
Usage: torc workflows delete [OPTIONS] [IDS]...
Arguments:
- <IDS> — IDs of workflows to remove (optional - will prompt if not provided)
Options:
- --no-prompts — Skip confirmation prompt
- --force — Force deletion even if workflow belongs to a different user
torc workflows archive
Archive or unarchive one or more workflows
Usage: torc workflows archive <IS_ARCHIVED> [WORKFLOW_IDS]...
Arguments:
- <IS_ARCHIVED> — Set to true to archive, false to unarchive
- <WORKFLOW_IDS> — IDs of workflows to archive/unarchive (if empty, will prompt for selection)
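For example, with placeholder workflow IDs:
# Archive two workflows, then unarchive one of them
torc workflows archive true 123 124
torc workflows archive false 123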
torc workflows submit
Submit a workflow: initialize if needed and schedule nodes for on_workflow_start actions. This command requires the workflow to have an on_workflow_start action with schedule_nodes.
Usage: torc workflows submit [OPTIONS] [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — ID of the workflow to submit (optional - will prompt if not provided)
Options:
- --force — If false, fail the operation if missing data is present. Default value: false
torc workflows run
Run a workflow locally on the current node
Usage: torc workflows run [OPTIONS] [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — ID of the workflow to run (optional - will prompt if not provided)
Options:
- -p, --poll-interval <POLL_INTERVAL> — Poll interval in seconds for checking job completion. Default value: 5.0
- --max-parallel-jobs <MAX_PARALLEL_JOBS> — Maximum number of parallel jobs to run (defaults to available CPUs)
- --output-dir <OUTPUT_DIR> — Output directory for job logs and results. Default value: output
torc workflows initialize
Initialize a workflow, including all job statuses
Usage: torc workflows initialize [OPTIONS] [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — ID of the workflow to start (optional - will prompt if not provided)
Options:
- --force — If false, fail the operation if missing data is present. Default value: false
- --no-prompts — Skip confirmation prompt
- --dry-run — Perform a dry run without making changes
torc workflows reinitialize
Reinitialize a workflow. This will reinitialize all jobs with a status of canceled, submitting, pending, or terminated. Jobs with a status of done will also be reinitialized if an input_file or user_data record has changed.
Usage: torc workflows reinitialize [OPTIONS] [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — ID of the workflow to reinitialize (optional - will prompt if not provided)
Options:
- --force — If false, fail the operation if missing data is present. Default value: false
- --dry-run — Perform a dry run without making changes
torc workflows status
Get workflow status
Usage: torc workflows status [OPTIONS] [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — ID of the workflow to get status for (optional - will prompt if not provided)
Options:
- -u, --user <USER> — User to filter by (defaults to USER environment variable)
torc workflows reset-status
Reset workflow and job status
Usage: torc workflows reset-status [OPTIONS] [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — ID of the workflow to reset status for (optional - will prompt if not provided)
Options:
- --failed-only — Only reset failed jobs. Default value: false
- -r, --reinitialize — Reinitialize the workflow after resetting status. Default value: false
- --force — Force reset even if there are active jobs (ignores running/pending jobs check). Default value: false
- --no-prompts — Skip confirmation prompt
torc workflows execution-plan
Show the execution plan for a workflow specification or existing workflow
Usage: torc workflows execution-plan <SPEC_OR_ID>
Arguments:
- <SPEC_OR_ID> — Path to specification file OR workflow ID
torc workflows list-actions
List workflow actions and their statuses (useful for debugging action triggers)
Usage: torc workflows list-actions [OPTIONS] [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — ID of the workflow to show actions for (optional - will prompt if not provided)
Options:
- -u, --user <USER> — User to filter by when selecting workflow interactively (defaults to USER environment variable)
torc workflows is-complete
Check if a workflow is complete
Usage: torc workflows is-complete [ID]
Arguments:
- <ID> — ID of the workflow to check (optional - will prompt if not provided)
torc workflows export
Export a workflow to a portable JSON file
Creates a self-contained export that can be imported into the same or different torc-server instance. All entity IDs are preserved in the export and remapped during import.
Usage: torc workflows export [OPTIONS] [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — ID of the workflow to export (optional - will prompt if not provided)
Options:
- -o, --output <OUTPUT> — Output file path (default: stdout)
- --include-results — Include job results in export
- --include-events — Include events (workflow history) in export
Examples:
# Export workflow to stdout
torc workflows export 123
# Export to a file
torc workflows export 123 -o workflow.json
# Include job results in export
torc workflows export 123 --include-results -o backup.json
# Export with all optional data
torc workflows export 123 --include-results --include-events -o complete.json
torc workflows import
Import a workflow from an exported JSON file
Imports a workflow that was previously exported. All entity IDs are remapped to new IDs assigned by the server. By default, all job statuses are reset to uninitialized for a fresh start.
Usage: torc workflows import [OPTIONS] <FILE>
Arguments:
- <FILE> — Path to the exported workflow JSON file (use '-' for stdin)
Options:
- --name <NAME> — Override the workflow name
- --skip-results — Skip importing results even if present in export
- --skip-events — Skip importing events even if present in export
Examples:
# Import a workflow (resets job statuses by default)
torc workflows import workflow.json
# Import from stdin
cat workflow.json | torc workflows import -
# Import with a different name
torc workflows import workflow.json --name 'my-copy'
# Skip importing results even if present in file
torc workflows import workflow.json --skip-results
torc workflows sync-status
Synchronize job statuses with Slurm (detect and fail orphaned jobs)
This command detects jobs that are stuck in "running" status because their Slurm allocation terminated unexpectedly (e.g., due to timeout, node failure, or admin intervention). It marks these orphaned jobs as failed so the workflow can be recovered or restarted.
Use this when:
- torc recover reports "there are active Slurm allocations" but squeue shows none
- Jobs appear stuck in "running" status after a Slurm allocation ended
- You want to clean up workflow state before running torc recover
Usage: torc workflows sync-status [OPTIONS] [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — ID of the workflow to sync (optional - will prompt if not provided)
Options:
- --dry-run — Preview changes without applying them
Examples:
# Preview what would be cleaned up
torc workflows sync-status 123 --dry-run
# Clean up orphaned jobs
torc workflows sync-status 123
# Get JSON output for scripting
torc -f json workflows sync-status 123
torc compute-nodes
Compute node management commands
Usage: torc compute-nodes <COMMAND>
Subcommands:
- get — Get a specific compute node by ID
- list — List compute nodes for a workflow
torc compute-nodes get
Get a specific compute node by ID
Usage: torc compute-nodes get <ID>
Arguments:
- <ID> — ID of the compute node
torc compute-nodes list
List compute nodes for a workflow
Usage: torc compute-nodes list [OPTIONS] [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — List compute nodes for this workflow (optional - will prompt if not provided)
Options:
- -l, --limit <LIMIT> — Maximum number of compute nodes to return. Default value: 10000
- -o, --offset <OFFSET> — Offset for pagination (0-based). Default value: 0
- -s, --sort-by <SORT_BY> — Field to sort by
- -r, --reverse-sort — Reverse sort order. Default value: false
- --scheduled-compute-node <SCHEDULED_COMPUTE_NODE> — Filter by scheduled compute node ID
torc files
File management commands
Usage: torc files <COMMAND>
Subcommands:
- create — Create a new file
- list — List files
- get — Get a specific file by ID
- update — Update an existing file
- delete — Delete a file
- list-required-existing — List required existing files for a workflow
torc files create
Create a new file
Usage: torc files create --name <NAME> --path <PATH> [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — Create the file in this workflow
Options:
- -n, --name <NAME> — Name of the file
- -p, --path <PATH> — Path of the file
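For example, with a placeholder workflow ID, file name, and path:
# Register an output file in workflow 123
torc files create 123 --name results_csv --path output/results.csv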
torc files list
List files
Usage: torc files list [OPTIONS] [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — List files for this workflow (optional - will prompt if not provided)
Options:
- --produced-by-job-id <PRODUCED_BY_JOB_ID> — Filter by job ID that produced the files
- -l, --limit <LIMIT> — Maximum number of files to return. Default value: 10000
- --offset <OFFSET> — Offset for pagination (0-based). Default value: 0
- --sort-by <SORT_BY> — Field to sort by
- --reverse-sort — Reverse sort order
torc files get
Get a specific file by ID
Usage: torc files get <ID>
Arguments:
- <ID> — ID of the file to get
torc files update
Update an existing file
Usage: torc files update [OPTIONS] <ID>
Arguments:
- <ID> — ID of the file to update
Options:
- -n, --name <NAME> — Name of the file
- -p, --path <PATH> — Path of the file
torc files delete
Delete a file
Usage: torc files delete <ID>
Arguments:
- <ID> — ID of the file to remove
torc files list-required-existing
List required existing files for a workflow
Usage: torc files list-required-existing [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — List required existing files for this workflow (optional - will prompt if not provided)
torc jobs
Job management commands
Usage: torc jobs <COMMAND>
Subcommands:
- create — Create a new job
- create-from-file — Create multiple jobs from a text file containing one command per line
- list — List jobs
- get — Get a specific job by ID
- update — Update an existing job
- delete — Delete one or more jobs
- delete-all — Delete all jobs for a workflow
- list-resource-requirements — List jobs with their resource requirements
torc jobs create
Create a new job
Usage: torc jobs create [OPTIONS] --name <NAME> --command <COMMAND> [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — Create the job in this workflow
Options:
- -n, --name <NAME> — Name of the job
- -c, --command <COMMAND> — Command to execute
- -r, --resource-requirements-id <RESOURCE_REQUIREMENTS_ID> — Resource requirements ID for this job
- -b, --blocking-job-ids <BLOCKING_JOB_IDS> — Job IDs that block this job
- -i, --input-file-ids <INPUT_FILE_IDS> — Input files needed by this job
- -o, --output-file-ids <OUTPUT_FILE_IDS> — Output files produced by this job
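For example, with placeholder IDs and command:
# Create a job in workflow 123 that is blocked by job 45
torc jobs create 123 --name postprocess --command "python postprocess.py" --blocking-job-ids 45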
torc jobs create-from-file
Create multiple jobs from a text file containing one command per line
This command reads a text file where each line contains a job command. Lines starting with '#' are treated as comments and ignored. Empty lines are also ignored.
Jobs will be named sequentially as job1, job2, job3, etc., starting from the current job count + 1 to avoid naming conflicts.
All jobs created will share the same resource requirements, which are automatically created and assigned.
Example: torc jobs create-from-file 123 batch_jobs.txt --cpus-per-job 4 --memory-per-job 8g
Usage: torc jobs create-from-file [OPTIONS] <WORKFLOW_ID> <FILE>
Arguments:
- <WORKFLOW_ID> — Workflow ID to create jobs for
- <FILE> — Path to text file containing job commands (one per line)
  File format:
  - One command per line
  - Lines starting with # are comments (ignored)
  - Empty lines are ignored
  Example file content:
  # Data processing jobs
  python process.py --batch 1
  python process.py --batch 2
  python process.py --batch 3
Options:
- --cpus-per-job <CPUS_PER_JOB> — Number of CPUs per job. Default value: 1
- --memory-per-job <MEMORY_PER_JOB> — Memory per job (e.g., "1m", "2g", "16g"). Default value: 1m
- --runtime-per-job <RUNTIME_PER_JOB> — Runtime per job (ISO 8601 duration format). Default value: P0DT1M
  Examples: P0DT1M = 1 minute, P0DT30M = 30 minutes, P0DT2H = 2 hours, P1DT0H = 1 day
torc jobs list
List jobs
Usage: torc jobs list [OPTIONS] [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — List jobs for this workflow (optional - will prompt if not provided)
Options:
- -s, --status <STATUS> — Filter by job status
- --upstream-job-id <UPSTREAM_JOB_ID> — Filter by upstream job ID (jobs that depend on this job)
- -l, --limit <LIMIT> — Maximum number of jobs to return. Default value: 10000
- --offset <OFFSET> — Offset for pagination (0-based). Default value: 0
- --sort-by <SORT_BY> — Field to sort by
- --reverse-sort — Reverse sort order
- --include-relationships — Include job relationships (depends_on_job_ids, input/output file/user_data IDs) - slower but more complete
torc jobs get
Get a specific job by ID
Usage: torc jobs get <ID>
Arguments:
- <ID> — ID of the job to get
torc jobs update
Update an existing job
Usage: torc jobs update [OPTIONS] <ID>
Arguments:
- <ID> — ID of the job to update
Options:
- -n, --name <NAME> — Name of the job
- -c, --command <COMMAND> — Command to execute
torc jobs delete
Delete one or more jobs
Usage: torc jobs delete [IDS]...
Arguments:
- <IDS> — IDs of the jobs to remove
torc jobs delete-all
Delete all jobs for a workflow
Usage: torc jobs delete-all [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — Workflow ID to delete all jobs from (optional - will prompt if not provided)
torc jobs list-resource-requirements
List jobs with their resource requirements
Usage: torc jobs list-resource-requirements [OPTIONS] [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — Workflow ID to list jobs from (optional - will prompt if not provided)
Options:
- -j, --job-id <JOB_ID> — Filter by specific job ID
torc job-dependencies
Job dependency and relationship queries
Usage: torc job-dependencies <COMMAND>
Subcommands:
- job-job — List job-to-job dependencies for a workflow
- job-file — List job-file relationships for a workflow
- job-user-data — List job-user_data relationships for a workflow
torc job-dependencies job-job
List job-to-job dependencies for a workflow
Usage: torc job-dependencies job-job [OPTIONS] [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — ID of the workflow (optional - will prompt if not provided)
Options:
- -l, --limit <LIMIT> — Maximum number of dependencies to return. Default value: 10000
- --offset <OFFSET> — Offset for pagination (0-based). Default value: 0
torc job-dependencies job-file
List job-file relationships for a workflow
Usage: torc job-dependencies job-file [OPTIONS] [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — ID of the workflow (optional - will prompt if not provided)
Options:
- -l, --limit <LIMIT> — Maximum number of relationships to return. Default value: 10000
- --offset <OFFSET> — Offset for pagination (0-based). Default value: 0
torc job-dependencies job-user-data
List job-user_data relationships for a workflow
Usage: torc job-dependencies job-user-data [OPTIONS] [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — ID of the workflow (optional - will prompt if not provided)
Options:
- -l, --limit <LIMIT> — Maximum number of relationships to return. Default value: 10000
- --offset <OFFSET> — Offset for pagination (0-based). Default value: 0
torc resource-requirements
Resource requirements management commands
Usage: torc resource-requirements <COMMAND>
Subcommands:
- create — Create new resource requirements
- list — List resource requirements
- get — Get a specific resource requirement by ID
- update — Update existing resource requirements
- delete — Delete resource requirements
torc resource-requirements create
Create new resource requirements
Usage: torc resource-requirements create [OPTIONS] --name <NAME> [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — Create resource requirements in this workflow
Options:
- -n, --name <NAME> — Name of the resource requirements
- --num-cpus <NUM_CPUS> — Number of CPUs required. Default value: 1
- --num-gpus <NUM_GPUS> — Number of GPUs required. Default value: 0
- --num-nodes <NUM_NODES> — Number of nodes required. Default value: 1
- -m, --memory <MEMORY> — Amount of memory required (e.g., "20g"). Default value: 1m
- -r, --runtime <RUNTIME> — Maximum runtime in ISO 8601 duration format (e.g., "P0DT1H"). Default value: P0DT1M
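For example, with placeholder values:
# Define a requirement set for large-memory jobs in workflow 123
torc resource-requirements create 123 --name large_mem --num-cpus 8 -m 64g -r P0DT4H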
torc resource-requirements list
List resource requirements
Usage: torc resource-requirements list [OPTIONS] [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — List resource requirements for this workflow (optional - will prompt if not provided)
Options:
- -l, --limit <LIMIT> — Maximum number of resource requirements to return. Default value: 10000
- --offset <OFFSET> — Offset for pagination (0-based). Default value: 0
- --sort-by <SORT_BY> — Field to sort by
- --reverse-sort — Reverse sort order
torc resource-requirements get
Get a specific resource requirement by ID
Usage: torc resource-requirements get <ID>
Arguments:
- <ID> — ID of the resource requirement to get
torc resource-requirements update
Update existing resource requirements
Usage: torc resource-requirements update [OPTIONS] <ID>
Arguments:
- <ID> — ID of the resource requirement to update
Options:
- -n, --name <NAME> — Name of the resource requirements
- --num-cpus <NUM_CPUS> — Number of CPUs required
- --num-gpus <NUM_GPUS> — Number of GPUs required
- --num-nodes <NUM_NODES> — Number of nodes required
- --memory <MEMORY> — Amount of memory required (e.g., "20g")
- --runtime <RUNTIME> — Maximum runtime (e.g., "1h", "30m")
torc resource-requirements delete
Delete resource requirements
Usage: torc resource-requirements delete <ID>
Arguments:
- <ID> — ID of the resource requirement to remove
torc events
Event management commands
Usage: torc events <COMMAND>
Subcommands:
- create — Create a new event
- list — List events for a workflow
- monitor — Monitor events for a workflow in real-time
- get-latest-event — Get the latest event for a workflow
- delete — Delete an event
torc events create
Create a new event
Usage: torc events create --data <DATA> [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — Create the event in this workflow
Options:
- -d, --data <DATA> — JSON data for the event
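For example, with a placeholder workflow ID and JSON payload:
# Record a custom event in workflow 123
torc events create 123 --data '{"message": "preprocessing finished"}'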
torc events list
List events for a workflow
Usage: torc events list [OPTIONS] [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — List events for this workflow (optional - will prompt if not provided)
Options:
- -c, --category <CATEGORY> — Filter events by category
- -l, --limit <LIMIT> — Maximum number of events to return. Default value: 10000
- -o, --offset <OFFSET> — Offset for pagination (0-based). Default value: 0
- -s, --sort-by <SORT_BY> — Field to sort by
- -r, --reverse-sort — Reverse sort order. Default value: false
torc events monitor
Monitor events for a workflow in real-time
Usage: torc events monitor [OPTIONS] [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — Monitor events for this workflow (optional - will prompt if not provided)
Options:
- -d, --duration <DURATION> — Duration to monitor in minutes (default: infinite)
- -p, --poll-interval <POLL_INTERVAL> — Poll interval in seconds. Default value: 60
- -c, --category <CATEGORY> — Filter events by category
torc events get-latest-event
Get the latest event for a workflow
Usage: torc events get-latest-event [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — Get the latest event for this workflow (optional - will prompt if not provided)
torc events delete
Delete an event
Usage: torc events delete <ID>
Arguments:
- <ID> — ID of the event to remove
torc results
Result management commands
Usage: torc results <COMMAND>
Subcommands:
- list — List results
- get — Get a specific result by ID
- delete — Delete a result
torc results list
List results
Usage: torc results list [OPTIONS] [WORKFLOW_ID]
Arguments:
- <WORKFLOW_ID> — List results for this workflow (optional - will prompt if not provided). By default, only lists results for the latest run of the workflow
Options:
- -j, --job-id <JOB_ID> — List results for this job
- -r, --run-id <RUN_ID> — List results for this run_id
- --return-code <RETURN_CODE> — Filter by return code
- --failed — Show only failed jobs (non-zero return code)
- -s, --status <STATUS> — Filter by job status (uninitialized, blocked, canceled, terminated, done, ready, scheduled, running, pending, disabled)
- -l, --limit <LIMIT> — Maximum number of results to return. Default value: 10000
- --offset <OFFSET> — Offset for pagination (0-based). Default value: 0
- --sort-by <SORT_BY> — Field to sort by
- --reverse-sort — Reverse sort order
- --all-runs — Show all historical results (default: false, only shows current results)
- --compute-node <COMPUTE_NODE> — Filter by compute node ID
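Example (workflow ID illustrative): show only failed results from the latest run:
torc results list 42 --failed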
torc results get
Get a specific result by ID
Usage: torc results get <ID>
Arguments:
<ID> — ID of the result to get
torc results delete
Delete a result
Usage: torc results delete <ID>
Arguments:
<ID> — ID of the result to remove
torc user-data
User data management commands
Usage: torc user-data <COMMAND>
Subcommands:
- create — Create a new user data record
- list — List user data records
- get — Get a specific user data record
- update — Update a user data record
- delete — Delete a user data record
- delete-all — Delete all user data records for a workflow
- list-missing — List missing user data for a workflow
torc user-data create
Create a new user data record
Usage: torc user-data create [OPTIONS] --name <NAME> [WORKFLOW_ID]
Arguments:
<WORKFLOW_ID> — Workflow ID
Options:
- -n, --name <NAME> — Name of the data object
- -d, --data <DATA> — JSON data content
- --ephemeral — Whether the data is ephemeral (cleared between runs)
- --consumer-job-id <CONSUMER_JOB_ID> — Consumer job ID (optional)
- --producer-job-id <PRODUCER_JOB_ID> — Producer job ID (optional)
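Example (a sketch; the record name and JSON payload are illustrative):
torc user-data create 42 --name run-params --data '{"iterations": 10}'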
torc user-data list
List user data records
Usage: torc user-data list [OPTIONS] [WORKFLOW_ID]
Arguments:
<WORKFLOW_ID> — Workflow ID (if not provided, will be selected interactively)
Options:
- -l, --limit <LIMIT> — Maximum number of records to return. Default value: 50
- -o, --offset <OFFSET> — Number of records to skip. Default value: 0
- --sort-by <SORT_BY> — Field to sort by
- --reverse-sort — Reverse sort order
- --name <NAME> — Filter by name
- --is-ephemeral <IS_EPHEMERAL> — Filter by ephemeral status. Possible values: true, false
- --consumer-job-id <CONSUMER_JOB_ID> — Filter by consumer job ID
- --producer-job-id <PRODUCER_JOB_ID> — Filter by producer job ID
torc user-data get
Get a specific user data record
Usage: torc user-data get <ID>
Arguments:
<ID> — User data record ID
torc user-data update
Update a user data record
Usage: torc user-data update [OPTIONS] <ID>
Arguments:
<ID> — User data record ID
Options:
- -n, --name <NAME> — New name for the data object
- -d, --data <DATA> — New JSON data content
- --ephemeral <EPHEMERAL> — Update ephemeral status. Possible values: true, false
torc user-data delete
Delete a user data record
Usage: torc user-data delete <ID>
Arguments:
<ID> — User data record ID
torc user-data delete-all
Delete all user data records for a workflow
Usage: torc user-data delete-all <WORKFLOW_ID>
Arguments:
<WORKFLOW_ID> — Workflow ID
torc user-data list-missing
List missing user data for a workflow
Usage: torc user-data list-missing <WORKFLOW_ID>
Arguments:
<WORKFLOW_ID> — Workflow ID
torc slurm
Slurm scheduler commands
Usage: torc slurm <COMMAND>
Subcommands:
- create — Add a Slurm config to the database
- update — Modify a Slurm config in the database
- list — Show the current Slurm configs in the database
- get — Get a specific Slurm config by ID
- delete — Delete a Slurm config by ID
- schedule-nodes — Schedule compute nodes using Slurm
- parse-logs — Parse Slurm log files for known error messages
- sacct — Call sacct for scheduled compute nodes and display summary
- generate — Generate Slurm schedulers for a workflow based on job resource requirements
- regenerate — Regenerate Slurm schedulers for an existing workflow based on pending jobs
torc slurm create
Add a Slurm config to the database
Usage: torc slurm create [OPTIONS] --name <NAME> --account <ACCOUNT> [WORKFLOW_ID]
Arguments:
<WORKFLOW_ID> — Workflow ID
Options:
- -n, --name <NAME> — Name of config
- -a, --account <ACCOUNT> — HPC account
- -g, --gres <GRES> — Request nodes that have at least this number of GPUs. Ex: 'gpu:2'
- -m, --mem <MEM> — Request nodes that have at least this amount of memory. Ex: '180G'
- -N, --nodes <NODES> — Number of nodes to use for each job. Default value: 1
- -p, --partition <PARTITION> — HPC partition. Default is determined by the scheduler
- -q, --qos <QOS> — Controls priority of the jobs. Default value: normal
- -t, --tmp <TMP> — Request nodes that have at least this amount of storage scratch space
- -W, --walltime <WALLTIME> — Slurm job walltime. Default value: 04:00:00
- -e, --extra <EXTRA> — Add extra Slurm parameters, for example --extra='--reservation=my-reservation'
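Example (a sketch; the workflow ID, config name, and account are illustrative):
torc slurm create 42 --name bigmem --account my-account --mem 180G --walltime 08:00:00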
torc slurm update
Modify a Slurm config in the database
Usage: torc slurm update [OPTIONS] <SCHEDULER_ID>
Arguments:
<SCHEDULER_ID>
Options:
- -N, --name <NAME> — Name of config
- -a, --account <ACCOUNT> — HPC account
- -g, --gres <GRES> — Request nodes that have at least this number of GPUs. Ex: 'gpu:2'
- -m, --mem <MEM> — Request nodes that have at least this amount of memory. Ex: '180G'
- -n, --nodes <NODES> — Number of nodes to use for each job
- -p, --partition <PARTITION> — HPC partition
- -q, --qos <QOS> — Controls priority of the jobs
- -t, --tmp <TMP> — Request nodes that have at least this amount of storage scratch space
- --walltime <WALLTIME> — Slurm job walltime
- -e, --extra <EXTRA> — Add extra Slurm parameters
torc slurm list
Show the current Slurm configs in the database
Usage: torc slurm list [OPTIONS] [WORKFLOW_ID]
Arguments:
<WORKFLOW_ID> — Workflow ID
Options:
- -l, --limit <LIMIT> — Maximum number of configs to return. Default value: 10000
- --offset <OFFSET> — Offset for pagination (0-based). Default value: 0
torc slurm get
Get a specific Slurm config by ID
Usage: torc slurm get <ID>
Arguments:
<ID> — ID of the Slurm config to get
torc slurm delete
Delete a Slurm config by ID
Usage: torc slurm delete <ID>
Arguments:
<ID> — ID of the Slurm config to delete
torc slurm schedule-nodes
Schedule compute nodes using Slurm
Usage: torc slurm schedule-nodes [OPTIONS] [WORKFLOW_ID]
Arguments:
<WORKFLOW_ID> — Workflow ID
Options:
- -j, --job-prefix <JOB_PREFIX> — Job prefix for the Slurm job names. Default value: worker
- --keep-submission-scripts — Keep submission scripts after job submission. Default value: false
- -m, --max-parallel-jobs <MAX_PARALLEL_JOBS> — Maximum number of parallel jobs
- -n, --num-hpc-jobs <NUM_HPC_JOBS> — Number of HPC jobs to submit. Default value: 1
- -o, --output <OUTPUT> — Output directory for job output files. Default value: output
- -p, --poll-interval <POLL_INTERVAL> — Poll interval in seconds. Default value: 60
- --scheduler-config-id <SCHEDULER_CONFIG_ID> — Scheduler config ID
- --start-one-worker-per-node — Start one worker per node. Default value: false
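Example (workflow ID illustrative): submit four Slurm jobs and keep the generated submission scripts for inspection:
torc slurm schedule-nodes 42 --num-hpc-jobs 4 --keep-submission-scripts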
torc slurm parse-logs
Parse Slurm log files for known error messages
Usage: torc slurm parse-logs [OPTIONS] [WORKFLOW_ID]
Arguments:
<WORKFLOW_ID> — Workflow ID
Options:
- -o, --output-dir <OUTPUT_DIR> — Output directory containing Slurm log files. Default value: output
- --errors-only — Only show errors (skip warnings). Default value: false
torc slurm sacct
Call sacct for scheduled compute nodes and display summary
Usage: torc slurm sacct [OPTIONS] [WORKFLOW_ID]
Arguments:
<WORKFLOW_ID> — Workflow ID
Options:
- -o, --output-dir <OUTPUT_DIR> — Output directory for sacct JSON files (only used with --save-json). Default value: output
- --save-json — Save full JSON output to files in addition to displaying summary. Default value: false
torc slurm generate
Generate Slurm schedulers for a workflow based on job resource requirements
Usage: torc slurm generate [OPTIONS] --account <ACCOUNT> <WORKFLOW_FILE>
Arguments:
<WORKFLOW_FILE> — Path to workflow specification file (YAML, JSON, JSON5, or KDL)
Options:
- --account <ACCOUNT> — Slurm account to use
- --profile <PROFILE> — HPC profile to use (if not specified, tries to detect current system)
- -o, --output <OUTPUT> — Output file path (if not specified, prints to stdout)
- --single-allocation — Bundle all nodes into a single Slurm allocation per scheduler.
  By default, creates one Slurm allocation per node (N×1 mode), which allows jobs to start as nodes become available and provides better fault tolerance. With this flag, creates one large allocation with all nodes (1×N mode), which requires all nodes to be available simultaneously but uses a single sbatch.
- --group-by <GROUP_BY> — Strategy for grouping jobs into schedulers. Possible values: resource-requirements (default), partition
- --walltime-strategy <STRATEGY> — Strategy for determining Slurm job walltime. Possible values: max-job-runtime (default), max-partition-time.
  max-job-runtime: Uses the maximum job runtime multiplied by --walltime-multiplier. This typically results in shorter walltime requests, improving queue priority.
  max-partition-time: Uses the partition's maximum allowed walltime. More conservative but may negatively impact queue scheduling.
- --walltime-multiplier <MULTIPLIER> — Multiplier for job runtime when using --walltime-strategy=max-job-runtime. Default value: 1.5.
  The maximum job runtime is multiplied by this value to provide a safety margin. For example, 1.5 means requesting 50% more time than the longest job estimate.
- --no-actions — Don't add workflow actions for scheduling nodes
- --overwrite — Overwrite existing schedulers in the workflow
- --dry-run — Show what would be generated without writing to output
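Example (a sketch; the account and file name are illustrative): preview generated schedulers without writing anything:
torc slurm generate --account my-account --dry-run workflow.yaml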
torc slurm regenerate
Regenerate Slurm schedulers for an existing workflow based on pending jobs
Analyzes jobs that are uninitialized, ready, or blocked and generates new Slurm schedulers to run them. Uses existing scheduler configurations as defaults for account, partition, and other settings.
This is useful for recovery after job failures: update job resources, reset failed jobs, then regenerate schedulers to submit new allocations.
Usage: torc slurm regenerate [OPTIONS] <WORKFLOW_ID>
Arguments:
<WORKFLOW_ID> — Workflow ID
Options:
- --account <ACCOUNT> — Slurm account to use (defaults to account from existing schedulers)
- --profile <PROFILE> — HPC profile to use (if not specified, tries to detect current system)
- --single-allocation — Bundle all nodes into a single Slurm allocation per scheduler
- --submit — Submit the generated allocations immediately
- -o, --output-dir <OUTPUT_DIR> — Output directory for job output files (used when submitting). Default value: output
- -p, --poll-interval <POLL_INTERVAL> — Poll interval in seconds (used when submitting). Default value: 60
- --group-by <GROUP_BY> — Strategy for grouping jobs into schedulers. Possible values: resource-requirements (default), partition
- --walltime-strategy <STRATEGY> — Strategy for determining Slurm job walltime. Possible values: max-job-runtime (default), max-partition-time
- --walltime-multiplier <MULTIPLIER> — Multiplier for job runtime when using --walltime-strategy=max-job-runtime. Default value: 1.5
- --dry-run — Show what would be created without making changes
- --include-job-ids <JOB_IDS> — Include specific job IDs in planning regardless of their status (useful for recovery dry-run to include failed jobs)
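Example (workflow ID illustrative): preview the recovery plan, then generate and submit new allocations:
torc slurm regenerate 42 --dry-run
torc slurm regenerate 42 --submit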
torc remote
Remote worker execution commands (SSH-based distributed execution)
Usage: torc remote <COMMAND>
Subcommands:
- add-workers — Add one or more remote workers to a workflow
- add-workers-from-file — Add remote workers to a workflow from a file
- remove-worker — Remove a remote worker from a workflow
- list-workers — List remote workers stored in the database for a workflow
- run — Run workers on remote machines via SSH
- status — Check status of remote workers
- stop — Stop workers on remote machines
- collect-logs — Collect logs from remote workers
- delete-logs — Delete logs from remote workers
torc remote add-workers
Add one or more remote workers to a workflow
Workers are stored in the database and used by subsequent commands. Format: [user@]hostname[:port]
Usage: torc remote add-workers <WORKFLOW_ID> <WORKERS>...
Arguments:
<WORKFLOW_ID> — Workflow ID
<WORKERS> — Worker addresses (format: [user@]hostname[:port])
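Example (the workflow ID, user, and hostnames are illustrative):
torc remote add-workers 42 alice@node1.example.com alice@node2.example.com:2222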
torc remote add-workers-from-file
Add remote workers to a workflow from a file
Each line in the file should be a worker address. Lines starting with # are comments.
Usage: torc remote add-workers-from-file <WORKER_FILE> [WORKFLOW_ID]
Arguments:
<WORKER_FILE> — Path to worker file listing remote machines
<WORKFLOW_ID> — Workflow ID (optional - will prompt if not provided)
torc remote remove-worker
Remove a remote worker from a workflow
Usage: torc remote remove-worker <WORKER> [WORKFLOW_ID]
Arguments:
<WORKER> — Worker address to remove
<WORKFLOW_ID> — Workflow ID (optional - will prompt if not provided)
torc remote list-workers
List remote workers stored in the database for a workflow
Usage: torc remote list-workers [WORKFLOW_ID]
Arguments:
<WORKFLOW_ID> — Workflow ID (optional - will prompt if not provided)
torc remote run
Run workers on remote machines via SSH
SSH into each stored worker and start a torc worker process. Workers run detached (via nohup) and survive SSH disconnection. Use add-workers first, or provide --workers to add and run in one step.
Usage: torc remote run [OPTIONS] [WORKFLOW_ID]
Arguments:
<WORKFLOW_ID> — Workflow ID to run (optional - will prompt if not provided)
Options:
- -w, --workers <WORKERS> — Path to worker file (optional - adds workers before running)
- -o, --output-dir <OUTPUT_DIR> — Output directory on remote machines (relative to home). Default value: torc_output
- --max-parallel-ssh <MAX_PARALLEL_SSH> — Maximum parallel SSH connections. Default value: 10
- -p, --poll-interval <POLL_INTERVAL> — Poll interval in seconds for workers. Default value: 5.0
- --max-parallel-jobs <MAX_PARALLEL_JOBS> — Maximum number of parallel jobs per worker
- --num-cpus <NUM_CPUS> — Number of CPUs per worker (auto-detect if not specified)
- --memory-gb <MEMORY_GB> — Memory in GB per worker (auto-detect if not specified)
- --num-gpus <NUM_GPUS> — Number of GPUs per worker (auto-detect if not specified)
- --skip-version-check — Skip version check (not recommended). Default value: false
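Example (workflow ID illustrative): start workers on all stored machines, capped at eight parallel jobs each:
torc remote run 42 --max-parallel-jobs 8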
torc remote status
Check status of remote workers
Usage: torc remote status [OPTIONS] [WORKFLOW_ID]
Arguments:
<WORKFLOW_ID> — Workflow ID (optional - will prompt if not provided)
Options:
- --output-dir <OUTPUT_DIR> — Remote output directory (must match what was used in run). Default value: torc_output
- --max-parallel-ssh <MAX_PARALLEL_SSH> — Maximum parallel SSH connections. Default value: 10
torc remote stop
Stop workers on remote machines
Usage: torc remote stop [OPTIONS] [WORKFLOW_ID]
Arguments:
<WORKFLOW_ID> — Workflow ID (optional - will prompt if not provided)
Options:
- --output-dir <OUTPUT_DIR> — Remote output directory (must match what was used in run). Default value: torc_output
- --max-parallel-ssh <MAX_PARALLEL_SSH> — Maximum parallel SSH connections. Default value: 10
- --force — Force kill (SIGKILL instead of SIGTERM). Default value: false
torc remote collect-logs
Collect logs from remote workers
Usage: torc remote collect-logs [OPTIONS] [WORKFLOW_ID]
Arguments:
<WORKFLOW_ID> — Workflow ID (optional - will prompt if not provided)
Options:
- -l, --local-output-dir <LOCAL_OUTPUT_DIR> — Local directory to save collected logs. Default value: remote_logs
- --remote-output-dir <REMOTE_OUTPUT_DIR> — Remote output directory (must match what was used in run). Default value: torc_output
- --max-parallel-ssh <MAX_PARALLEL_SSH> — Maximum parallel SSH connections. Default value: 10
- --delete — Delete remote logs after successful collection. Default value: false
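Example (workflow ID illustrative): collect logs into the default local directory and remove them from the workers afterwards:
torc remote collect-logs 42 --delete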
torc remote delete-logs
Delete logs from remote workers
Removes the output directory from all remote workers. Use collect-logs --delete to safely collect before deleting.
Usage: torc remote delete-logs [OPTIONS] [WORKFLOW_ID]
Arguments:
<WORKFLOW_ID> — Workflow ID (optional - will prompt if not provided)
Options:
- --remote-output-dir <REMOTE_OUTPUT_DIR> — Remote output directory to delete (must match what was used in run). Default value: torc_output
- --max-parallel-ssh <MAX_PARALLEL_SSH> — Maximum parallel SSH connections. Default value: 10
torc scheduled-compute-nodes
Scheduled compute node management commands
Usage: torc scheduled-compute-nodes <COMMAND>
Subcommands:
- get — Get a scheduled compute node by ID
- list — List scheduled compute nodes for a workflow
- list-jobs — List jobs that ran under a scheduled compute node
torc scheduled-compute-nodes get
Get a scheduled compute node by ID
Usage: torc scheduled-compute-nodes get <ID>
Arguments:
<ID> — ID of the scheduled compute node
torc scheduled-compute-nodes list
List scheduled compute nodes for a workflow
Usage: torc scheduled-compute-nodes list [OPTIONS] [WORKFLOW_ID]
Arguments:
<WORKFLOW_ID> — List scheduled compute nodes for this workflow (optional - will prompt if not provided)
Options:
- -l, --limit <LIMIT> — Maximum number of scheduled compute nodes to return. Default value: 10000
- -o, --offset <OFFSET> — Offset for pagination (0-based). Default value: 0
- -s, --sort-by <SORT_BY> — Field to sort by
- -r, --reverse-sort — Reverse sort order. Default value: false
- --scheduler-id <SCHEDULER_ID> — Filter by scheduler ID
- --scheduler-config-id <SCHEDULER_CONFIG_ID> — Filter by scheduler config ID
- --status <STATUS> — Filter by status
torc scheduled-compute-nodes list-jobs
List jobs that ran under a scheduled compute node
Usage: torc scheduled-compute-nodes list-jobs <ID>
Arguments:
<ID> — ID of the scheduled compute node
torc hpc
HPC system profiles and partition information
Usage: torc hpc <COMMAND>
Subcommands:
- list — List known HPC system profiles
- detect — Detect the current HPC system
- show — Show details of an HPC profile
- partitions — Show partitions for an HPC profile
- match — Find partitions matching resource requirements
torc hpc list
List known HPC system profiles
Usage: torc hpc list
torc hpc detect
Detect the current HPC system
Usage: torc hpc detect
torc hpc show
Show details of an HPC profile
Usage: torc hpc show <NAME>
Arguments:
<NAME> — Profile name (e.g., "kestrel")
torc hpc partitions
Show partitions for an HPC profile
Usage: torc hpc partitions [OPTIONS] [NAME]
Arguments:
<NAME> — Profile name (e.g., "kestrel"). If not specified, tries to detect current system
Options:
- --gpu — Filter to GPU partitions only
- --cpu — Filter to CPU-only partitions
- --shared — Filter to shared partitions
torc hpc match
Find partitions matching resource requirements
Usage: torc hpc match [OPTIONS]
Options:
- --cpus <CPUS> — Number of CPUs required. Default value: 1
- --memory <MEMORY> — Memory required (e.g., "100g", "512m", or MB as number). Default value: 1g
- --walltime <WALLTIME> — Wall time required (e.g., "4:00:00", "2-00:00:00"). Default value: 1:00:00
- --gpus <GPUS> — Number of GPUs required
- --profile <PROFILE> — Profile name (if not specified, tries to detect current system)
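Example (the resource values are illustrative): find partitions that can fit a 36-CPU, 100 GB, 2-GPU job for four hours:
torc hpc match --cpus 36 --memory 100g --walltime 4:00:00 --gpus 2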
torc reports
Generate reports and analytics
Usage: torc reports <COMMAND>
Subcommands:
- check-resource-utilization — Check resource utilization and report jobs that exceeded their specified requirements
- results — Generate a comprehensive JSON report of job results including all log file paths
- summary — Generate a summary of workflow results (requires workflow to be complete)
torc reports check-resource-utilization
Check resource utilization and report jobs that exceeded their specified requirements
Usage: torc reports check-resource-utilization [OPTIONS] [WORKFLOW_ID]
Arguments:
<WORKFLOW_ID> — Workflow ID to analyze (optional - will prompt if not provided)
Options:
- -r, --run-id <RUN_ID> — Run ID to analyze (optional - analyzes latest run if not provided)
- -a, --all — Show all jobs (default: only show jobs that exceeded requirements)
- --include-failed — Include failed and terminated jobs in the analysis (for recovery diagnostics)
torc reports results
Generate a comprehensive JSON report of job results including all log file paths
Usage: torc reports results [OPTIONS] [WORKFLOW_ID]
Arguments:
<WORKFLOW_ID> — Workflow ID to analyze (optional - will prompt if not provided)
Options:
- -o, --output-dir <OUTPUT_DIR> — Output directory (where job logs are stored, passed in torc run and torc submit). Default value: output
- --all-runs — Include all runs for each job (default: only latest run)
torc reports summary
Generate a summary of workflow results (requires workflow to be complete)
Usage: torc reports summary [WORKFLOW_ID]
Arguments:
<WORKFLOW_ID> — Workflow ID to summarize (optional - will prompt if not provided)
torc config
Manage configuration files and settings
Usage: torc config <COMMAND>
Subcommands:
- show — Show the effective configuration (merged from all sources)
- paths — Show configuration file paths
- init — Initialize a configuration file with defaults
- validate — Validate the current configuration
torc config show
Show the effective configuration (merged from all sources)
Usage: torc config show [OPTIONS]
Options:
- -f, --format <FORMAT> — Output format (toml or json). Default value: toml
torc config paths
Show configuration file paths
Usage: torc config paths
torc config init
Initialize a configuration file with defaults
Usage: torc config init [OPTIONS]
Options:
- --system — Create system-wide config (/etc/torc/config.toml)
- --user — Create user config (~/.config/torc/config.toml)
- --local — Create project-local config (./torc.toml)
- -f, --force — Force overwrite if file exists
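Example: create a project-local config, then confirm it parses:
torc config init --local
torc config validate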
torc config validate
Validate the current configuration
Usage: torc config validate
torc tui
Interactive terminal UI for managing workflows
Usage: torc tui [OPTIONS]
Options:
- --standalone — Start in standalone mode: automatically start a torc-server
- --port <PORT> — Port for the server in standalone mode. Default value: 8080
- --database <DATABASE> — Database path for standalone mode
torc plot-resources
Generate interactive HTML plots from resource monitoring data
Usage: torc plot-resources [OPTIONS] <DB_PATHS>...
Arguments:
<DB_PATHS> — Path to the resource metrics database file(s)
Options:
- -o, --output-dir <OUTPUT_DIR> — Output directory for generated plots. Default value: . (the current directory)
- -j, --job-ids <JOB_IDS> — Only plot specific job IDs (comma-separated)
- -p, --prefix <PREFIX> — Prefix for output filenames. Default value: resource_plot
- -f, --format <FORMAT> — Output format: html or json. Default value: html
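Example (a sketch; the metrics database path is illustrative and depends on where your workflow wrote monitoring data):
torc plot-resources output/resource-metrics.sqlite --output-dir plots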
torc completions
Generate shell completions
Usage: torc completions <SHELL>
Arguments:
<SHELL> — The shell to generate completions for. Possible values: bash, elvish, fish, powershell, zsh
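Example: generate bash completions and write them to a user completion directory (the destination path is a common bash-completion convention, not mandated by torc):
torc completions bash > ~/.local/share/bash-completion/completions/torc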
This document was generated automatically by clap-markdown.