The return codes are usually specified with each API function. In general, a response with Content-type: application/json means that the function was called correctly; this does not necessarily mean that the operation was successful. The usual response dictionary has the following format:

{
  "success": true/false,
  "message": "Usually an error message when there was a failure. The message can generally be ignored if the operation was successful.",
  "data": "The data returned if the operation was successful."
}

Error responses generated outside the API functions themselves (e.g. by httpd) are returned with Content-type: text/plain.
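Every JSON response can be handled the same way. A minimal sketch (the `unwrap` helper is illustrative, not part of any PanDA client):

```python
def unwrap(response):
    """Return the 'data' payload of a PanDA response dict, or raise on failure."""
    if not response.get("success"):
        # 'message' usually carries the error description on failure
        raise RuntimeError(f"operation failed: {response.get('message')}")
    return response.get("data")

# Successful response: 'message' can be ignored, 'data' carries the result
payload = unwrap({"success": True, "message": "", "data": {"value": 42}})
```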
Set a key-value pair to store in PanDA. Requires a secure connection.
key required | string Key to reference the secret |
value required | string Value of the secret |
{- "key": "string",
- "value": "string"
}
{- "success": true,
- "message": "string",
- "data": { }
}
Get the secrets for a user identified by a list of keys. Requires a secure connection.
keys required | Array of strings List of keys to reference the secrets to retrieve |
{- "keys": [
- "string"
]
}
{- "success": true,
- "message": "string",
- "data": { }
}
This function retrieves the distinguished name (DN) from the request and uses it to get a key pair. Requires a secure connection.
public_key_name required | string The name of the public key. |
private_key_name required | string The name of the private key. |
{- "success": true,
- "message": "string",
- "data": { }
}
Get the x509 proxy certificate for a user with a role. Requires a secure connection.
role | string The role of the user. Defaults to None. |
dn | string The distinguished name of the user. Defaults to None. |
{- "success": true,
- "message": "string",
- "data": { }
}
Get the OAuth access token for the specified client. Requires a secure connection.
client_name required | string client_name for the token as defined in token_cache_config |
token_key | string key to get the token from the token cache. Defaults to None. |
{- "success": true,
- "message": "string",
- "data": { }
}
This function retrieves the distinguished name (DN) from the request and uses it to get a token key for the specified client. Requires a secure connection.
client_name required | string The name of the client requesting the token key |
{- "success": true,
- "message": "string",
- "data": { }
}
This function returns the count of available event ranges for a given job_id, jobset_id, and task_id. Requires a secure connection and production role.
job_id required | string PanDA job ID |
jobset_id required | string Jobset ID |
task_id required | string JEDI task ID |
timeout | string The timeout value. Defaults to 60. |
{- "success": true,
- "message": "string",
- "data": { }
}
Gets a dictionary with the status of the event ranges for the given pairs of PanDA job IDs and JEDI task IDs. Requires a secure connection.
job_task_ids required | string json encoded string with JEDI task ID + PanDA job ID pairs, in the format |
{- "success": true,
- "message": "string",
- "data": { }
}
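The exact encoding expected for job_task_ids is elided above; as an assumed sketch, a JSON-encoded list of task-ID/job-ID pairs could be built like this (IDs and key names are illustrative):

```python
import json

# Hypothetical pairs of JEDI task ID and PanDA job ID
pairs = [{"task_id": 40123456, "job_id": 6600123456}]
job_task_ids = json.dumps(pairs)  # the API expects a JSON-encoded string
```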
Acquires a list of event ranges with a given PandaID for execution. Requires a secure connection and production role.
job_id required | string PanDA job ID. |
jobset_id required | string Jobset ID. |
task_id | integer JEDI task ID. Defaults to None. |
n_ranges | integer The number of event ranges to retrieve. Defaults to 10. |
timeout | integer The timeout value. Defaults to 60. |
scattered | boolean Whether the event ranges are scattered. Defaults to None. |
segment_id | integer The segment ID. Defaults to None. |
{- "job_id": "string",
- "jobset_id": "string",
- "task_id": 0,
- "n_ranges": 0,
- "timeout": 0,
- "scattered": true,
- "segment_id": 0
}
{- "success": true,
- "message": "string",
- "data": { }
}
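A request body for acquiring event ranges might look like the following sketch (values are illustrative; optional fields fall back to their defaults when omitted):

```python
import json

request = {
    "job_id": "6600123456",   # PanDA job ID (string), illustrative
    "jobset_id": "1",         # jobset ID (string), illustrative
    "n_ranges": 10,           # optional, defaults to 10
    "timeout": 60,            # optional, defaults to 60
}
body = json.dumps(request)
```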
Updates the status of a specific event range. Requires a secure connection and production role.
event_range_id required | string The ID of the event range to update. |
event_range_status required | string The new status of the event range. |
core_count | integer The number of cores used. Defaults to None. |
cpu_consumption_time | number The CPU consumption time. Defaults to None. |
object_store_id | integer The object store ID. Defaults to None. |
timeout | integer The timeout value. Defaults to 60. |
{- "event_range_id": "string",
- "event_range_status": "string",
- "core_count": 0,
- "cpu_consumption_time": 0,
- "object_store_id": 0,
- "timeout": 0
}
{- "success": true,
- "message": "string",
- "data": { }
}
Updates the event ranges in bulk. Requires a secure connection and production role.
event_ranges required | string JSON-encoded string containing the list of event ranges to update. |
timeout | integer The timeout value. Defaults to 120. |
version | integer The version of the event ranges. Defaults to 0. Version 0: normal event service Version 1: jumbo jobs with zip file support Version 2: fine-grained processing where events can be updated before being dispatched |
{- "event_ranges": "string",
- "timeout": 0,
- "version": 0
}
{- "success": true,
- "message": "string",
- "data": { }
}
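Since event_ranges is itself a JSON-encoded string inside the JSON body, it is effectively double-encoded. An assumed sketch (the field names inside each range dictionary are illustrative, not taken from this document):

```python
import json

# Hypothetical per-range update dictionaries
ranges = [
    {"eventRangeID": "1-2-3", "eventStatus": "finished"},
    {"eventRangeID": "1-2-4", "eventStatus": "failed"},
]
body = {
    "event_ranges": json.dumps(ranges),  # JSON-encoded string, as required
    "timeout": 120,                      # defaults to 120
    "version": 0,                        # 0: normal event service
}
```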
Uploads a JEDI log file and returns the URL to the file. If there is already a log file for the task, it will be overwritten. Requires a secure connection and production role.
file required | string werkzeug.FileStorage object to be uploaded. |
{- "file": "string"
}
{- "success": true,
- "message": "string",
- "data": { }
}
Updates a JEDI log file, appending more content at the end of the file. Requires a secure connection and production role.
file required | string werkzeug.FileStorage object to be updated. |
{- "file": "string"
}
{- "success": true,
- "message": "string",
- "data": { }
}
Downloads the JEDI log file, optionally starting at a particular offset.
log_name required | string log file name |
offset required | integer offset in the file |
{- "log_name": "string",
- "offset": 0
}
{- "success": true,
- "message": "string",
- "data": { }
}
Uploads a file to the cache. When not touched, cache files are expired after some time. User caches will get registered in the PanDA database and will account towards user limits. PanDA log files will be stored in gzip format. Requires a secure connection.
file required | string werkzeug.FileStorage object to be uploaded. |
{- "file": "string"
}
{- "success": true,
- "message": "string",
- "data": { }
}
Touches a file in the cache directory. This prevents the file from expiring and being deleted by the server clean-up processes. Requires a secure connection.
file_name required | string file name to be touched |
{- "file_name": "string"
}
{- "success": true,
- "message": "string",
- "data": { }
}
Deletes a file from the cache directory. Currently a dummy method. Requires a secure connection.
file_name required | string file name to be deleted |
{- "file_name": "string"
}
{- "success": true,
- "message": "string",
- "data": { }
}
Registers a file from the cache directory into the PanDA database, so that PanDA knows the server it's on. Requires a secure connection and production role.
user_name required | string user that uploaded the file |
file_name required | string file name |
file_size required | integer file size |
checksum required | string checksum |
{- "user_name": "string",
- "file_name": "string",
- "file_size": 0,
- "checksum": "string"
}
{- "success": true,
- "message": "string",
- "data": { }
}
Validates a cache file owned by the caller by checking the file metadata that was registered in the database. Requires a secure connection.
file_size required | integer file size |
checksum required | string checksum |
{- "file_size": 0,
- "checksum": "string"
}
{- "success": true,
- "message": "string",
- "data": { }
}
Uploads an HPO checkpoint file to the server. Requires a secure connection.
file required | string werkzeug.FileStorage object to be uploaded. |
{- "file": "string"
}
{- "success": true,
- "message": "string",
- "data": { }
}
Deletes an HPO checkpoint file from the server. Requires a secure connection.
task_id required | string JEDI task ID |
sub_id required | string sub ID. |
{- "task_id": "string",
- "sub_id": "string"
}
{- "success": true,
- "message": "string",
- "data": { }
}
Uploads a request to recover lost files. Requires a secure connection.
task_id required | integer JEDI task ID. |
dry_run required | boolean dry run flag. |
{- "task_id": 0,
- "dry_run": true
}
{- "success": true,
- "message": "string",
- "data": { }
}
Uploads a workflow request to the server. The request can be processed synchronously or asynchronously. Requires a secure connection.
data required | string workflow request data |
dry_run required | boolean requests the workflow to be executed synchronously in dry_run mode |
sync required | boolean requests the workflow to be processed synchronously |
{- "data": "string",
- "dry_run": true,
- "sync": true
}
{- "success": true,
- "message": "string",
- "data": { }
}
Uploads an event picking request to the server. Requires a secure connection.
run_event_list required | string run and event list. |
data_type required | string data type. |
stream_name required | string stream name. |
dataset_name required | string dataset name. |
ami_tag required | string AMI tag. |
user_dataset_name required | string user dataset name. |
locked_by required | string locking agent. |
parameters required | string parameters. |
input_file_list required | string input file list. |
n_sites required | string number of sites. |
user_task_name required | string user task name. |
ei_api required | string event index API. |
include_guids required | boolean flag to indicate if GUIDs are included with the run-event list |
{- "run_event_list": "string",
- "data_type": "string",
- "stream_name": "string",
- "dataset_name": "string",
- "ami_tag": "string",
- "user_dataset_name": "string",
- "locked_by": "string",
- "parameters": "string",
- "input_file_list": "string",
- "n_sites": "string",
- "user_task_name": "string",
- "ei_api": "string",
- "include_guids": true
}
{- "success": true,
- "message": "string",
- "data": { }
}
Update the details for a list of workers. Requires a secure connection.
harvester_id required | string harvester id, e.g. |
workers required | Array of objects list of worker dictionaries that describe the fields of a pandaserver/taskbuffer/WorkerSpec object. |
{- "harvester_id": "string",
- "workers": [
- { }
]
}
{- "success": true,
- "message": "string",
- "data": { }
}
Update the service metrics for a harvester instance. Requires a secure connection.
harvester_id required | string harvester id, e.g. |
metrics required | Array of objects list of triplets |
{- "harvester_id": "string",
- "metrics": [
- { }
]
}
{- "success": true,
- "message": "string",
- "data": { }
}
Add messages for a harvester instance. Requires a secure connection.
harvester_id required | string harvester id, e.g. |
dialogs required | Array of objects list of dialog dictionaries, e.g. |
{- "harvester_id": "string",
- "dialogs": [
- { }
]
}
{- "success": true,
- "message": "string",
- "data": { }
}
Send a heartbeat for harvester and optionally update the instance data. User and host are retrieved from the request object and updated in the database. Requires a secure connection.
harvester_id required | string harvester id, e.g. |
data required | string metadata dictionary to be updated in the PanDA database, e.g. |
{- "harvester_id": "string",
- "data": "string"
}
{- "success": true,
- "message": "string",
- "data": { }
}
Report statistics for the workers managed by a harvester instance at a PanDA queue. Requires a secure connection.
harvester_id required | string harvester id, e.g. |
panda_queue required | string Name of the PanDA queue, e.g. |
statistics required | string JSON string containing a dictionary with the statistics to be reported. It will be stored as JSON in the database. E.g. |
{- "harvester_id": "string",
- "panda_queue": "string",
- "statistics": "string"
}
{- "success": true,
- "message": "string",
- "data": { }
}
Retrieves the commands for a specified harvester instance. Requires a secure connection and production role.
harvester_id required | string harvester id, e.g. |
n_commands required | integer The number of commands to retrieve, e.g. |
timeout | integer The timeout value. Defaults to |
{- "harvester_id": "string",
- "n_commands": 0,
- "timeout": 0
}
{- "success": true,
- "message": "string",
- "data": { }
}
Acknowledges the list of command IDs in the PanDA database. Requires a secure connection and production role.
command_ids required | Array of objects A list of command IDs to acknowledge, e.g. |
timeout | integer The timeout value. Defaults to |
{- "command_ids": [
- { }
], - "timeout": 0
}
{- "success": true,
- "message": "string",
- "data": { }
}
Send a command to harvester to kill the workers in a PanDA queue, with the possibility of specifying filters by status, CE or submission host. Requires a secure connection and production role.
panda_queue required | string Name of the PanDA queue, e.g. |
status_list required | Array of objects list of worker statuses to be considered, e.g. |
ce_list required | Array of objects list of the Computing Elements to be considered, e.g. |
submission_host_list required | Array of objects list of the harvester submission hosts to be considered, e.g. |
{- "panda_queue": "string",
- "status_list": [
- { }
], - "ce_list": [
- { }
], - "submission_host_list": [
- { }
]
}
{- "success": true,
- "message": "string",
- "data": { }
}
Set the target number of slots for a PanDA queue, when you want to build up job pressure. Requires secure connection and production role.
panda_queue required | string Name of the PanDA queue, e.g. |
slots required | integer Number of slots to set, e.g. |
global_share | string Global share the slots apply to. Optional - by default it applies to the whole queue. E.g. |
resource_type | string Resource type the slots apply to. Optional - by default it applies to the whole queue. E.g. |
expiration_date | string The expiration date of the slots. Optional - by default it applies indefinitely. |
{- "panda_queue": "string",
- "slots": 0,
- "global_share": "string",
- "resource_type": "string",
- "expiration_date": "string"
}
{- "success": true,
- "message": "string",
- "data": { }
}
Gets the status for a job and the command to the pilot, if any. Requires a secure connection.
job_ids required | string list of PanDA job IDs. |
timeout | string The timeout value. Defaults to 60. |
{- "success": true,
- "message": "string",
- "data": { }
}
Gets the description of a job from the main/active schema. The description includes job attributes, job parameters and related file attributes. Requires a secure connection.
job_ids required | string List of PanDA job IDs. |
timeout | string The timeout value. Defaults to 60. |
{- "success": true,
- "message": "string",
- "data": { }
}
Gets the description of a job, also looking into the secondary/archive schema. The description includes job attributes, job parameters and related file attributes. Requires a secure connection.
job_ids required | string List of PanDA job IDs. |
timeout | string The timeout value. Defaults to 60. |
{- "success": true,
- "message": "string",
- "data": { }
}
Gets the execution script for a job, including Rucio download of input, ALRB setup, downloading transformation script and running the script. Requires a secure connection.
job_id required | string PanDA job ID |
timeout | string The timeout value. Defaults to 60. |
{- "success": true,
- "message": "string",
- "data": { }
}
Reassigns a list of jobs. Requires a secure connection.
job_ids required | string List of PanDA job IDs |
{- "job_ids": "string"
}
{- "success": true,
- "message": "string",
- "data": { }
}
Sets a command to the pilot for a job. Requires a secure connection and production role.
job_id required | integer PanDA job ID |
command required | string The command for the pilot, e.g. |
{- "job_id": 0,
- "command": "string"
}
{- "success": true,
- "message": "string",
- "data": { }
}
Sets the debug mode for a job. Requires a secure connection and production role.
job_id required | integer PanDA job ID |
mode required | boolean True to set debug mode, False to unset debug mode |
{- "job_id": 0,
- "mode": true
}
{- "success": true,
- "message": "string",
- "data": { }
}
Sets the debug mode for a job. Requires a secure connection.
jobs required | string JSON string with a list of job specs |
{- "jobs": "string"
}
{- "success": true,
- "message": "string",
- "data": { }
}
Gets a dictionary of site specs. By default, analysis sites are returned. Requires a secure connection.
type required | string type of site as defined in CRIC (currently |
{- "success": true,
- "message": "string",
- "data": { }
}
Acquire jobs for the pilot. The jobs are reserved, the job status is updated and the jobs are returned. Requires a secure connection.
site_name required | string The PanDA queue name |
timeout | integer Request timeout in seconds. Optional and defaults to 60. |
memory | integer Memory limit for the job. Optional and defaults to |
disk_space | integer Disk space limit for the job. Optional and defaults to |
prod_source_label | string Prodsourcelabel, e.g. |
node | string Identifier of the worker node/slot. Optional and defaults to |
computing_element | string Computing element. Optional and defaults to |
prod_user_id | string User ID of the job. Optional and defaults to |
get_proxy_key | boolean Flag to request a proxy key. Optional and defaults to |
task_id | integer JEDI task ID of the job. Optional and defaults to |
n_jobs | integer Number of jobs for bulk requests. Optional and defaults to |
background | boolean Background flag. Optional and defaults to |
resource_type | string Resource type of the job, e.g. |
harvester_id | string Harvester ID, used to update the worker entry in the DB. Optional and defaults to |
worker_id | integer Worker ID, used to update the worker entry in the DB. Optional and defaults to |
scheduler_id | string Scheduler, e.g. harvester ID. Optional and defaults to |
job_type | string Job type, e.g. |
via_topic | boolean Topic for message broker. Optional and defaults to |
remaining_time | integer Remaining walltime. Optional and defaults to |
{- "site_name": "string",
- "timeout": 0,
- "memory": 0,
- "disk_space": 0,
- "prod_source_label": "string",
- "node": "string",
- "computing_element": "string",
- "prod_user_id": "string",
- "get_proxy_key": true,
- "task_id": 0,
- "n_jobs": 0,
- "background": true,
- "resource_type": "string",
- "harvester_id": "string",
- "worker_id": 0,
- "scheduler_id": "string",
- "job_type": "string",
- "via_topic": true,
- "remaining_time": 0
}
{- "success": true,
- "message": "string",
- "data": { }
}
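Only site_name is mandatory; a pilot could acquire a single job with a minimal body like the following sketch (the queue name is a hypothetical placeholder):

```python
# Minimal job acquisition request: everything beyond site_name is optional
request = {
    "site_name": "SOME_PANDA_QUEUE",  # hypothetical PanDA queue name
    "n_jobs": 1,                      # bulk requests set this higher
    "timeout": 60,                    # request timeout in seconds
}
```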
Gets the status for a list of jobs. Requires a secure connection.
job_ids required | Array of integers list of job IDs. |
timeout | integer The timeout value. Defaults to 60. |
{- "job_ids": [
- 0
], - "timeout": 0
}
{- "success": true,
- "message": "string",
- "data": { }
}
Updates the details for a job, stores the metadata and excerpts from the pilot log. Requires a secure connection and production role.
job_id required | integer PanDA job ID |
job_status | string Job status |
job_sub_status | string Job sub status. Optional, defaults to |
start_time | string Job start time in format |
end_time | string Job end time in format |
pilot_timing | string String with pilot timings. Optional, defaults to |
site_name | string PanDA queue name. Optional, defaults to |
node | string Identifier for worker node/slot. Optional, defaults to |
scheduler_id | string Scheduler ID, such as harvester instance. Optional, defaults to |
pilot_id | string Pilot ID. Optional, defaults to |
batch_id | string Batch ID. Optional, defaults to |
trans_exit_code | string Transformation exit code. Optional, defaults to |
pilot_error_code | string Pilot error code. Optional, defaults to |
pilot_error_diag | string Pilot error message. Optional, defaults to |
exe_error_code | integer Execution error code. Optional, defaults to |
exe_error_diag | string Execution error message. Optional, defaults to |
n_events | integer Number of events. Optional, defaults to |
n_input_files | integer Number of input files. Optional, defaults to |
attempt_nr | integer Job attempt number. Optional, defaults to |
cpu_consumption_time | integer CPU consumption time. Optional, defaults to |
cpu_consumption_unit | string CPU consumption unit, being used for updating some CPU details. Optional, defaults to |
cpu_conversion_factor | number CPU conversion factor. Optional, defaults to |
core_count | integer Number of cores of the job. Optional, defaults to |
mean_core_count | integer Mean core count. Optional, defaults to |
max_rss | integer Measured max RSS memory. Optional, defaults to |
max_vmem | integer Measured max Virtual memory. Optional, defaults to |
max_swap | integer Measured max swap memory. Optional, defaults to |
max_pss | integer Measured max PSS memory. Optional, defaults to |
avg_rss | integer Measured average RSS. Optional, defaults to |
avg_vmem | integer Measured average Virtual memory. Optional, defaults to |
avg_swap | integer Measured average swap memory. Optional, defaults to |
avg_pss | integer Measured average PSS. Optional, defaults to |
tot_rchar | integer Measured total read characters. Optional, defaults to |
tot_wchar | integer Measured total written characters. Optional, defaults to |
tot_rbytes | integer Measured total read bytes. Optional, defaults to |
tot_wbytes | integer Measured total written bytes. Optional, defaults to |
rate_rchar | integer Measured rate for read characters. Optional, defaults to |
rate_wchar | integer Measured rate for written characters. Optional, defaults to |
rate_rbytes | integer Measured rate for read bytes. Optional, defaults to |
rate_wbytes | integer Measured rate for written bytes. Optional, defaults to |
corrupted_files | string List of corrupted files in comma separated format. Optional, defaults to |
cpu_architecture_level | integer CPU architecture level (e.g. |
job_metrics | string Job metrics. Optional, defaults to |
job_output_report | string Job output report. Optional, defaults to |
pilot_log | string Pilot log excerpt. Optional, defaults to |
meta_data | string Job metadata. Optional, defaults to |
stdout | string Standard output. Optional, defaults to |
timeout | integer Timeout for the operation in seconds. Optional, defaults to 60 |
{- "job_id": 0,
- "job_status": "string",
- "job_sub_status": "string",
- "start_time": "string",
- "end_time": "string",
- "pilot_timing": "string",
- "site_name": "string",
- "node": "string",
- "scheduler_id": "string",
- "pilot_id": "string",
- "batch_id": "string",
- "trans_exit_code": "string",
- "pilot_error_code": "string",
- "pilot_error_diag": "string",
- "exe_error_code": 0,
- "exe_error_diag": "string",
- "n_events": 0,
- "n_input_files": 0,
- "attempt_nr": 0,
- "cpu_consumption_time": 0,
- "cpu_consumption_unit": "string",
- "cpu_conversion_factor": 0,
- "core_count": 0,
- "mean_core_count": 0,
- "max_rss": 0,
- "max_vmem": 0,
- "max_swap": 0,
- "max_pss": 0,
- "avg_rss": 0,
- "avg_vmem": 0,
- "avg_swap": 0,
- "avg_pss": 0,
- "tot_rchar": 0,
- "tot_wchar": 0,
- "tot_rbytes": 0,
- "tot_wbytes": 0,
- "rate_rchar": 0,
- "rate_wchar": 0,
- "rate_rbytes": 0,
- "rate_wbytes": 0,
- "corrupted_files": "string",
- "cpu_architecture_level": 0,
- "job_metrics": "string",
- "job_output_report": "string",
- "pilot_log": "string",
- "meta_data": "string",
- "stdout": "string",
- "timeout": 0
}
{- "success": true,
- "message": "string",
- "data": { }
}
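Most fields above are optional; a heartbeat-style update needs only a few of them. A sketch with illustrative values (the queue name is a hypothetical placeholder):

```python
# Minimal job update: only job_id is mandatory, everything else is optional
update = {
    "job_id": 6600123456,             # illustrative PanDA job ID
    "job_status": "running",
    "site_name": "SOME_PANDA_QUEUE",  # hypothetical queue name
    "attempt_nr": 1,
}
```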
Bulk method to update the details for jobs, store the metadata and excerpts from the pilot log. Internally, this method loops over the jobs and calls update_job for each job. Requires a secure connection and production role.
job_list required | Array of objects list of job dictionaries to update. The mandatory and optional keys for each job dictionary are the same as the arguments for update_job. |
{- "job_list": [
- { }
]
}
{- "success": true,
- "message": "string",
- "data": { }
}
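Since the bulk method just loops over update_job, each entry of job_list takes the same keys as a single-job update. A sketch with illustrative IDs:

```python
# Each dictionary follows the update_job argument names
job_list = [
    {"job_id": 6600123456, "job_status": "running"},   # illustrative IDs
    {"job_id": 6600123457, "job_status": "finished"},
]
body = {"job_list": job_list}
```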
Updates the status of a worker with the information seen by the pilot. Requires a secure connection and production role.
worker_id required | string The worker ID. |
harvester_id required | string The harvester ID. |
status required | string The status of the worker. Must be either 'started' or 'finished'. |
timeout | integer The timeout value. Defaults to 60. |
node_id | string The node ID. Defaults to None. |
{- "worker_id": "string",
- "harvester_id": "string",
- "status": "string",
- "timeout": 0,
- "node_id": "string"
}
{- "success": true,
- "message": "string",
- "data": { }
}
Updates a worker node in the worker node map. When already found, it updates the last_seen time. When not found, it adds the worker node. Requires a secure connection and production role.
site required | string Site name (e.g. ATLAS site name, not PanDA queue). |
host_name required | string Host name. In the case of reporting in format |
cpu_model required | string CPU model, e.g. |
n_logical_cpus | integer Number of logical CPUs: n_sockets * cores_per_socket * threads_per_core. When SMT is enabled, this is the number of threads; otherwise it is the number of cores. Optional, defaults to |
n_sockets | integer Number of sockets. Optional, defaults to |
cores_per_socket | integer Number of cores per socket. Optional, defaults to |
threads_per_core | integer Number of threads per core. When SMT is disabled, this is 1. Otherwise a number > 1. Optional, defaults to |
cpu_architecture | string CPU architecture, e.g. |
cpu_architecture_level | string CPU architecture level, e.g. |
clock_speed | number Clock speed in MHz. Optional, defaults to |
total_memory | integer Total memory in MB. Optional, defaults to |
total_local_disk | integer Total disk space in GB. Optional, defaults to |
timeout | integer The timeout value. Defaults to 60. |
{- "site": "string",
- "host_name": "string",
- "cpu_model": "string",
- "n_logical_cpus": 0,
- "n_sockets": 0,
- "cores_per_socket": 0,
- "threads_per_core": 0,
- "cpu_architecture": "string",
- "cpu_architecture_level": "string",
- "clock_speed": 0,
- "total_memory": 0,
- "total_local_disk": 0,
- "timeout": 0
}
{- "success": true,
- "message": "string",
- "data": { }
}
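The relation between the CPU fields can be checked before reporting. For example, with 2 sockets, 16 cores per socket and SMT enabled (2 threads per core):

```python
n_sockets = 2
cores_per_socket = 16
threads_per_core = 2  # SMT enabled; this is 1 when SMT is disabled

# n_logical_cpus = n_sockets * cores_per_socket * threads_per_core
n_logical_cpus = n_sockets * cores_per_socket * threads_per_core  # 64
```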
Get the job statistics by cloud, which includes the active jobs and jobs in final states modified in the last 12 hours. You have to filter the statistics by type, which can be either "production" or "analysis". Used by panglia monitoring. Requires a secure connection.
type required | string can be "analysis" or "production". Defaults to "production" when not provided. |
{- "success": true,
- "message": "string",
- "data": { }
}
Get the production job statistics by cloud and processing type, which includes the active jobs and jobs in final states modified in the last 12 hours. Used by panglia monitoring. Requires a secure connection.
{- "success": true,
- "message": "string",
- "data": { }
}
Get the job statistics by computing site (PanDA queue) and resource type (SCORE, MCORE, ...). This includes the active jobs and jobs in final states modified in the specified time window (default of 12 hours). Requires a secure connection.
time_window required | string time window in minutes for the statistics (affects only archived jobs) |
{- "success": true,
- "message": "string",
- "data": { }
}
Get the job statistics by computing site (PanDA queue), global share and resource type (SCORE, MCORE, ...). This includes the active jobs and jobs in final states modified in the specified time window (default of 12 hours). Requires a secure connection.
time_window required | string time window in minutes for the statistics (affects only archived jobs) |
{- "success": true,
- "message": "string",
- "data": { }
}
Retry a given task, e.g. one in exhausted state. Requires a secure connection; a production role is not needed to retry your own tasks, but is needed to retry other users' tasks.
task_id required | integer JEDI Task ID |
new_parameters | string a json dictionary with the new parameters for rerunning the task. The new parameters are merged with the existing ones. The parameters are the attributes in the JediTaskSpec object (https://github.com/PanDAWMS/panda-jedi/blob/master/pandajedi/jedicore/JediTaskSpec.py). |
no_child_retry | boolean if True, the child tasks are not retried |
discard_events | boolean if True, events will be discarded |
disable_staging_mode | boolean if True, the task skips staging state and directly goes to subsequent state |
keep_gshare_priority | boolean if True, the task keeps current gshare and priority |
ignore_hard_exhausted | boolean if True, the task ignores the limits for hard exhausted state and can be retried even if it is very faulty |
{- "task_id": 0,
- "new_parameters": "string",
- "no_child_retry": true,
- "discard_events": true,
- "disable_staging_mode": true,
- "keep_gshare_priority": true,
- "ignore_hard_exhausted": true
}
{- "success": true,
- "message": "string",
- "data": { }
}
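new_parameters is a JSON-encoded dictionary of JediTaskSpec attributes that gets merged with the existing task parameters. A sketch (the task ID and attribute value are illustrative; ramCount is one of the attributes mentioned later in this document):

```python
import json

body = {
    "task_id": 40123456,                               # illustrative task ID
    "new_parameters": json.dumps({"ramCount": 4000}),  # merged with existing ones
    "keep_gshare_priority": True,
}
```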
Resume a given task. This transitions a paused or throttled task back to its previous active state. Resume can also be used to kick a task in staging state to the next state. Requires a secure connection and production role.
task_id required | integer JEDI Task ID |
{- "task_id": 0
}
{- "success": true,
- "message": "string",
- "data": { }
}
Release a given task. This triggers the avalanche for tasks in scouting state or dynamically reconfigures the task to skip over the scouting state. Requires a secure connection and production role.
task_id required | integer JEDI Task ID |
{- "task_id": 0
}
{- "success": true,
- "message": "string",
- "data": { }
}
Reassign a given task to a site, nucleus or cloud - depending on the parameters. Requires a secure connection.
task_id required | integer JEDI Task ID |
site | string site name |
cloud | string cloud name |
nucleus | string nucleus name |
soft | boolean soft reassign |
mode | string soft/nokill reassign |
{- "task_id": 0,
- "site": "string",
- "cloud": "string",
- "nucleus": "string",
- "soft": true,
- "mode": "string"
}
{- "success": true,
- "message": "string",
- "data": { }
}
Pause a given task. Requires a secure connection and production role.
task_id required | integer JEDI Task ID |
{- "task_id": 0
}
{- "success": true,
- "message": "string",
- "data": { }
}
Kill a given task. Requires a secure connection.
task_id required | integer JEDI Task ID |
broadcast | boolean broadcast kill command to pilots to kill the jobs |
{- "task_id": 0,
- "broadcast": true
}
{- "success": true,
- "message": "string",
- "data": { }
}
Kills all unfinished jobs in a task. Requires a secure connection.
{ }

{
  "success": true,
  "message": "string",
  "data": { }
}
Finish a given task. Requires a secure connection.
task_id required | integer JEDI Task ID |
soft | boolean soft finish |
broadcast | boolean broadcast finish command to pilots |
{- "task_id": 0,
- "soft": true,
- "broadcast": true
}
{- "success": true,
- "message": "string",
- "data": { }
}
Reactivate a given task, i.e. recycle a finished/done task. A reactivated task will generate new jobs and then go to done/finished. Requires a secure connection and production role.
task_id required | integer JEDI Task ID |
keep_attempt_nr | boolean keep the original attempt number |
trigger_job_generation | boolean trigger the job generation |
{- "task_id": 0,
- "keep_attempt_nr": true,
- "trigger_job_generation": true
}
{- "success": true,
- "message": "string",
- "data": { }
}
Avalanche a given task. Requires a secure connection and production role.
task_id required | integer JEDI Task ID |
{- "task_id": 0
}
{- "success": true,
- "message": "string",
- "data": { }
}
Request to reload the input for a given task. Requires a secure connection and production role.
task_id required | integer JEDI Task ID |
ignore_hard_exhausted | boolean ignore the limits for hard exhausted |
{- "task_id": 0,
- "ignore_hard_exhausted": true
}
{
  "success": true,
  "message": "string",
  "data": { }
}
Get a map of the jumbo-job-enabled tasks to their datasets, filtered by last modification time (between now - from_offset and now - to_offset). Requires a secure connection.
from_offset required | string |
to_offset | string |
{
  "success": true,
  "message": "string",
  "data": { }
}
Enable job cloning for a given task. Requires a secure connection and production role.
jedi_task_id required | integer JEDI Task ID |
mode | string mode of operation, runonce or storeonce |
multiplicity | integer number of clones to be created for each target |
num_sites | integer number of sites to be used for each target |
{
  "jedi_task_id": 0,
  "mode": "string",
  "multiplicity": 0,
  "num_sites": 0
}
{
  "success": true,
  "message": "string",
  "data": { }
}
Disable job cloning for a given task. Requires a secure connection and production role.
jedi_task_id required | integer JEDI Task ID |
{
  "jedi_task_id": 0
}
{
  "success": true,
  "message": "string",
  "data": { }
}
Increase the possible task attempts. Requires a secure connection and production role.
task_id required | integer JEDI Task ID |
increase required | integer number of attempts to increase |
{
  "task_id": 0,
  "increase": 0
}
{
  "success": true,
  "message": "string",
  "data": { }
}
Get the details of a given task. Requires a secure connection.
task_id required | string JEDI Task ID |
include_parameters | string flag to include task parameter information (Previously fullFlag) |
include_status | string flag to include status information (Previously withTaskInfo) |
{
  "success": true,
  "message": "string",
  "data": { }
}
Change a task attribute within the list of valid attributes ("ramCount", "wallTime", "cpuTime", "coreCount"). Requires a secure connection and production role.
task_id required | integer JEDI task ID |
attribute_name required | string attribute to change |
value required | integer value to set to the attribute |
{
  "task_id": 0,
  "attribute_name": "string",
  "value": 0
}
{
  "success": true,
  "message": "string",
  "data": { }
}
Change the modification time for a task to now() + positive_hour_offset. Requires a secure connection and production role.
task_id required | integer JEDI task ID |
positive_hour_offset required | integer number of hours to add to the current time |
{
  "task_id": 0,
  "positive_hour_offset": 0
}
{
  "success": true,
  "message": "string",
  "data": { }
}
Change the priority of a given task. Requires a secure connection and production role.
task_id required | integer JEDI task ID |
priority required | integer new priority for the task |
{
  "task_id": 0,
  "priority": 0
}
{
  "success": true,
  "message": "string",
  "data": { }
}
Change the split rule for a task. Requires a secure connection and production role.
task_id required | integer JEDI task ID |
attribute_name required | string split rule attribute to change |
value required | integer value to set to the attribute |
{
  "task_id": 0,
  "attribute_name": "string",
  "value": 0
}
{
  "success": true,
  "message": "string",
  "data": { }
}
Get the tasks with modificationtime > since. Requires a secure connection.
since required | string time in the format |
dn | string user DN |
full | string flag to include full task information. If |
min_task_id | string minimum task ID |
prod_source_label | string task type (e.g. |
{
  "success": true,
  "message": "string",
  "data": { }
}
Get the files in the datasets associated to a given task. You can filter by passing a list of dataset types. The return format is:
[
{
"dataset": {
"name": dataset_name,
"id": dataset_id
},
"files": [
{
"lfn": lfn,
"scope": file_scope,
"id": file_id,
"status": status
},
...
]
},
...
]
Requires a secure connection.
task_id required | string JEDI task ID |
dataset_types | string list of dataset types, defaults to |
{
  "success": true,
  "message": "string",
  "data": { }
}
Insert the task parameters to register a task. Requires a secure connection.
task_parameters required | string Dictionary with all the required task parameters. The parameters are the attributes in the JediTaskSpec object (https://github.com/PanDAWMS/panda-jedi/blob/master/pandajedi/jedicore/JediTaskSpec.py). |
parent_tid | integer Parent task ID |
{
  "task_parameters": "string",
  "parent_tid": 0
}
{
  "success": true,
  "message": "string",
  "data": { }
}