Utilities
job scripts
- class coupledmodeldriver.script.SlurmEmailType(value)
option for Slurm email notification
- class coupledmodeldriver.script.Script(commands: List[str])
abstraction of an executable script
- write(filename: PathLike, overwrite: bool = False)
write script to file
- Parameters:
filename – path to output file
overwrite – whether to overwrite existing files
- class coupledmodeldriver.script.JobScript(platform: Platform, commands: List[str], slurm_run_name: str | None = None, slurm_tasks: int | None = None, slurm_duration: timedelta | None = None, slurm_account: str | None = None, slurm_email_type: SlurmEmailType | None = None, slurm_email_address: str | None = None, slurm_error_filename: PathLike | None = None, slurm_log_filename: PathLike | None = None, slurm_nodes: int | None = None, slurm_partition: str | None = None, modules: List[PathLike] | None = None, path_prefix: str | None = None, write_slurm_directory: bool = False)
abstraction of a Slurm job script, to run locally or from a job manager
- Parameters:
platform – HPC to run script on
commands – shell commands to run in script
slurm_run_name – Slurm run name
slurm_tasks – number of total tasks for Slurm to run
slurm_duration – duration to run job in job manager
slurm_account – Slurm account name
slurm_email_type – email type
slurm_email_address – email address
slurm_error_filename – file path to error log file
slurm_log_filename – file path to output log file
slurm_nodes – number of physical nodes to run on
slurm_partition – partition to run on (stampede2 only)
modules – file paths to modules to load
path_prefix – file path to prepend to the PATH
write_slurm_directory – explicitly add directory to Slurm header when writing file
- property launcher: str
command to start processes on the target system (srun, ibrun, etc.)
- write(filename: PathLike, overwrite: bool = False)
write script to file
- Parameters:
filename – path to output file
overwrite – whether to overwrite existing files
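The package's actual script-rendering code is not shown here, but a minimal sketch of how a Slurm job header could be assembled from the parameters documented above may help (the function name and exact #SBATCH flags are illustrative, not the package's real implementation):

```python
from datetime import timedelta
from typing import List, Optional


def slurm_header(
    run_name: str,
    tasks: int,
    duration: timedelta,
    account: Optional[str] = None,
    partition: Optional[str] = None,
) -> List[str]:
    # format the duration as HH:MM:SS, the form accepted by `#SBATCH --time`
    total_seconds = int(duration.total_seconds())
    hours, remainder = divmod(total_seconds, 3600)
    minutes, seconds = divmod(remainder, 60)

    lines = [
        '#!/bin/bash --login',
        f'#SBATCH -J {run_name}',
        f'#SBATCH -n {tasks}',
        f'#SBATCH --time={hours:02d}:{minutes:02d}:{seconds:02d}',
    ]
    # optional directives are only emitted when a value is given
    if account is not None:
        lines.append(f'#SBATCH -A {account}')
    if partition is not None:
        lines.append(f'#SBATCH -p {partition}')
    return lines


print('\n'.join(slurm_header('ADCIRC_RUN', 64, timedelta(minutes=90), account='coastal')))
```

The launcher (srun, ibrun, etc.) would then prefix the model command in the script body.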
- class coupledmodeldriver.script.EnsembleGenerationJob(platform: Platform, generate_command: str, slurm_tasks: int | None = None, slurm_duration: timedelta | None = None, slurm_account: str | None = None, slurm_run_name: str = 'GENERATE_CONFIGURATION', commands: List[str] | None = None, parallel: bool = False, **kwargs)
job script to generate the ensemble configuration
- Parameters:
platform – HPC to run script on
commands – shell commands to run in script
slurm_run_name – Slurm run name
slurm_tasks – number of total tasks for Slurm to run
slurm_duration – duration to run job in job manager
slurm_account – Slurm account name
slurm_email_type – email type
slurm_email_address – email address
slurm_error_filename – file path to error log file
slurm_log_filename – file path to output log file
slurm_nodes – number of physical nodes to run on
slurm_partition – partition to run on (stampede2 only)
modules – file paths to modules to load
path_prefix – file path to prepend to the PATH
write_slurm_directory – explicitly add directory to Slurm header when writing file
- property launcher: str
command to start processes on the target system (srun, ibrun, etc.)
- write(filename: PathLike, overwrite: bool = False)
write script to file
- Parameters:
filename – path to output file
overwrite – whether to overwrite existing files
- class coupledmodeldriver.script.EnsembleRunScript(platform: Platform, run_spinup: bool = True, commands: List[str] | None = None)
script to run the ensemble, either by running it directly or by submitting model execution to the job manager; the default filename is run_<platform>.sh
- write(filename: PathLike, overwrite: bool = False)
write script to file
- Parameters:
filename – path to output file
overwrite – whether to overwrite existing files
- class coupledmodeldriver.script.EnsembleCleanupScript(commands: List[str] | None = None, filenames: List[PathLike] | None = None, spinup_filenames: List[PathLike] | None = None, hotstart_filenames: List[PathLike] | None = None)
script for cleaning an ensemble configuration by deleting output and log files
- write(filename: PathLike, overwrite: bool = False)
write script to file
- Parameters:
filename – path to output file
overwrite – whether to overwrite existing files
- coupledmodeldriver.script.bash_if_statement(condition: str, then: List[str], *else_then: List[List[str]], indentation: str = ' ') str
create an if statement in Bash syntax using the given condition, then statement(s), and else condition(s) / statement(s)
- Parameters:
condition – boolean condition to check
then – Bash statement(s) to execute if condition is met
else_then – arbitrary number of Bash statement(s) to execute if the condition is not met, with optional conditions (elif)
indentation – indentation
- Returns:
if statement as a string
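The implementation itself is not reproduced here; a minimal sketch of an equivalent helper (simplified to a single then-branch, with an illustrative name and indentation default) shows the kind of string this function returns:

```python
from typing import List


def bash_if(condition: str, then: List[str], indentation: str = '    ') -> str:
    # build a Bash `if` block: `if [ condition ]; then` ... `fi`
    lines = [f'if [ {condition} ]; then']
    lines.extend(indentation + statement for statement in then)
    lines.append('fi')
    return '\n'.join(lines)


print(bash_if('-d "$directory"', ['echo "directory exists"']))
```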
- coupledmodeldriver.script.bash_for_loop(iteration: str, do: List[str], indentation=' ') str
create a for loop in Bash syntax using the given variable, iterator, and do statement(s)
- Parameters:
iteration – for loop statement, such as for dir in ./*
do – Bash statement(s) to execute on every loop iteration
indentation – indentation
- Returns:
for loop as a string
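A sketch of an equivalent helper (name and indentation default are illustrative), showing how the iteration statement and loop body combine:

```python
from typing import List


def bash_for(iteration: str, do: List[str], indentation: str = '    ') -> str:
    # build a Bash `for` loop: `for ...; do` ... `done`
    lines = [f'{iteration}; do']
    lines.extend(indentation + statement for statement in do)
    lines.append('done')
    return '\n'.join(lines)


print(bash_for('for dir in ./*', ['echo "$dir"']))
```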
- coupledmodeldriver.script.bash_function(name: str, body: List[str], indentation: str = ' ') str
create a function in Bash syntax using the given name and function statement(s)
- Parameters:
name – name of function
body – Bash statement(s) making up function body
indentation – indentation
- Returns:
function as a string
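Again as a sketch rather than the package's actual code, an equivalent helper would wrap the body in Bash function syntax:

```python
from typing import List


def bash_func(name: str, body: List[str], indentation: str = '    ') -> str:
    # build a Bash function definition: `name() {` ... `}`
    lines = [f'{name}() {{']
    lines.extend(indentation + statement for statement in body)
    lines.append('}')
    return '\n'.join(lines)


print(bash_func('clean_directory', ['rm -rf ./output']))
```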
- coupledmodeldriver.script.slurm_dependencies(after_ok: List[str])
create a dependency argument for the sbatch CLI from the given list
- Parameters:
after_ok – list of job dependencies as they should appear in the Bash script's sbatch call
- Returns:
either an empty string or a dependencies argument for sbatch
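The exact format produced is not shown in this reference; assuming it follows sbatch's standard afterok syntax, an equivalent helper might look like this (name illustrative):

```python
from typing import List


def dependency_argument(after_ok: List[str]) -> str:
    # an empty list yields an empty string, so the sbatch call gains no argument;
    # otherwise emit sbatch's `--dependency=afterok:<id>[:<id>...]` syntax
    if not after_ok:
        return ''
    return '--dependency=afterok:' + ':'.join(after_ok)


print(dependency_argument(['$SPINUP_JOB_ID']))
```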
- coupledmodeldriver.script.slurm_submit_get_id(job_file: PathLike, job_id_var: str, dependencies: str = '')
create a script that submits a job via sbatch and stores the job ID in a named Bash variable
- Parameters:
job_file – path to the Slurm script file
job_id_var – Bash variable name in which to store the submitted Slurm job ID
dependencies – dependency argument for the sbatch command
- Returns:
Bash script to call sbatch with optional dependencies and store the job ID in the specified Bash variable
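One common way to capture a submitted job's ID in Bash is sbatch's --parsable flag, which prints only the job ID. A sketch of a helper along those lines (the real implementation may parse sbatch's default output instead):

```python
def submit_and_capture(job_file: str, job_id_var: str, dependencies: str = '') -> str:
    # `sbatch --parsable` prints only the job ID, which `$(...)` captures
    # into the named Bash variable
    sbatch = f'sbatch --parsable {dependencies}'.strip()
    return f'{job_id_var}=$({sbatch} {job_file})'


print(submit_and_capture('slurm.job', 'SPINUP_JOB_ID'))
```

A later submission can then pass `--dependency=afterok:$SPINUP_JOB_ID` as the dependencies argument to chain jobs.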
platforms
- class coupledmodeldriver.platforms.Platform(value)
HPC platform information
directory check
- coupledmodeldriver.generate.adcirc.check.is_adcirc_run_directory(directory: PathLike | None = None) bool
check if the given directory has the baseline ADCIRC configuration files
- Parameters:
directory – path to directory
- Returns:
whether the directory is an ADCIRC configuration
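The reference does not list which files count as the baseline configuration; a sketch assuming the ADCIRC mesh (fort.14) and control (fort.15) files are required (both the function name and the file list are assumptions, not the package's actual check):

```python
import os
from pathlib import Path
from typing import Iterable, Optional


def looks_like_adcirc_run(
    directory: Optional[os.PathLike] = None,
    required: Iterable[str] = ('fort.14', 'fort.15'),
) -> bool:
    # default to the current working directory, as the `directory` parameter is optional
    if directory is None:
        directory = os.getcwd()
    directory = Path(directory)
    # the directory qualifies only if every baseline file is present
    return all((directory / filename).exists() for filename in required)
```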
- coupledmodeldriver.generate.adcirc.check.check_adcirc_completion(directory: PathLike | None = None, verbose: bool = False) Dict[str, Any]
return the status of ADCIRC execution within the given directory
- Parameters:
directory – path to directory
verbose – whether to include errors and detailed status checks in output
- Returns:
status of ADCIRC execution in JSON format
utilities
- coupledmodeldriver.utilities.get_logger(name: str, log_filename: PathLike | None = None, file_level: int | None = None, console_level: int | None = None, log_format: str | None = None) Logger
instantiate a Logger instance
- Parameters:
name – name of logger
log_filename – path to log file
file_level – minimum log level to write to log file
console_level – minimum log level to print to console
log_format – logger message format
- Returns:
instance of a Logger object
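A minimal sketch of how such a factory can be built on the standard logging module, assuming the documented behavior (console output always, file output only when a path is given; defaults are illustrative):

```python
import logging
import sys
from typing import Optional


def build_logger(
    name: str,
    log_filename: Optional[str] = None,
    file_level: Optional[int] = None,
    console_level: Optional[int] = None,
    log_format: Optional[str] = None,
) -> logging.Logger:
    if log_format is None:
        log_format = '[%(asctime)s] %(name)s %(levelname)s: %(message)s'
    formatter = logging.Formatter(log_format)

    # the logger itself passes everything; each handler filters by its own level
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)

    console = logging.StreamHandler(sys.stdout)
    console.setLevel(console_level if console_level is not None else logging.INFO)
    console.setFormatter(formatter)
    logger.addHandler(console)

    if log_filename is not None:
        file_handler = logging.FileHandler(log_filename)
        file_handler.setLevel(file_level if file_level is not None else logging.DEBUG)
        file_handler.setFormatter(formatter)
        logger.addHandler(file_handler)

    return logger
```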
- coupledmodeldriver.utilities.make_executable(path: PathLike)
make a file executable (the equivalent of chmod +x); adapted from https://stackoverflow.com/questions/12791997/how-do-you-do-a-simple-chmod-x-from-within-python
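The linked answer's approach copies each read bit to the corresponding execute bit, so whoever can read the file can also execute it. A sketch (function name illustrative):

```python
import os


def chmod_plus_x(path: os.PathLike) -> None:
    # copy the read bits (0o444) two places right onto the execute bits (0o111),
    # mirroring the behavior of `chmod +x`
    mode = os.stat(path).st_mode
    mode |= (mode & 0o444) >> 2
    os.chmod(path, mode)
```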
- class coupledmodeldriver.utilities.ProcessPoolExecutorStackTraced(max_workers=None, mp_context=None, initializer=None, initargs=())
preserves the traceback of any kind of raised exception
Initializes a new ProcessPoolExecutor instance.
- Parameters:
max_workers – The maximum number of processes that can be used to execute the given calls. If None or not given then as many worker processes will be created as the machine has processors.
mp_context – A multiprocessing context to launch the workers. This object should provide SimpleQueue, Queue and Process.
initializer – A callable used to initialize worker processes.
initargs – A tuple of arguments to pass to the initializer.
- submit(fn, *args, **kwargs)
Submits a callable to be executed with the given arguments.
Schedules the callable to be executed as fn(*args, **kwargs) and returns a Future instance representing the execution of the callable.
- Returns:
A Future representing the given call.
- map(fn, *iterables, timeout=None, chunksize=1)
Returns an iterator equivalent to map(fn, iter).
- Parameters:
fn – A callable that will take as many arguments as there are passed iterables.
timeout – The maximum number of seconds to wait. If None, then there is no limit on the wait time.
chunksize – If greater than one, the iterables will be chopped into chunks of size chunksize and submitted to the process pool. If set to one, the items in the list will be sent one at a time.
- Returns:
An iterator equivalent to map(func, *iterables), but the calls may be evaluated out-of-order.
- Raises:
TimeoutError – If the entire result iterator could not be generated before the given timeout.
Exception – If fn(*args) raises for any values.
- shutdown(wait=True, *, cancel_futures=False)
Clean-up the resources associated with the Executor.
It is safe to call this method several times. Otherwise, no other methods can be called after this one.
- Parameters:
wait – If True then shutdown will not return until all running futures have finished executing and the resources used by the executor have been reclaimed.
cancel_futures – If True then shutdown will cancel all pending futures. Futures that are completed or running will not be cancelled.