From 9620f67173e401256e07789d796ca2127b4934a4 Mon Sep 17 00:00:00 2001 From: Michel Lang Date: Wed, 21 Oct 2020 09:40:30 +0200 Subject: [PATCH] update pkgdown --- docs/articles/batchtools.html | 8 +- docs/index.html | 11 +- docs/pkgdown.yml | 2 +- docs/reference/JoinTables.html | 4 +- docs/reference/btlapply.html | 2 +- docs/reference/doJobCollection.html | 2 +- docs/reference/estimateRuntimes.html | 420 +++++++++--------- docs/reference/getJobTable.html | 18 +- docs/reference/getStatus.html | 2 +- docs/reference/makeClusterFunctionsSGE.html | 4 +- docs/reference/makeClusterFunctionsSlurm.html | 4 +- docs/reference/makeRegistry.html | 2 +- docs/reference/showLog.html | 4 +- docs/reference/submitJobs.html | 22 +- 14 files changed, 252 insertions(+), 253 deletions(-) diff --git a/docs/articles/batchtools.html b/docs/articles/batchtools.html index d81dd8ca..985bf057 100644 --- a/docs/articles/batchtools.html +++ b/docs/articles/batchtools.html @@ -334,7 +334,7 @@

In this example, caps for the execution time (the so-called walltime) and for the maximum memory requirements are set. The progress of the submitted jobs can be checked with getStatus(); a sketch of such a call follows the status output below.

-
## Status for 10 jobs at 2020-10-20 14:35:59:
+
## Status for 10 jobs at 2020-10-21 09:39:36:
 ##   Submitted    : 10 (100.0%)
 ##   -- Queued    :  0 (  0.0%)
 ##   -- Started   : 10 (100.0%)
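A minimal sketch of passing such resource caps on submission; the resource names walltime and memory are interpreted by the scheduler template, so treat the concrete values as illustrative only.

library(batchtools)

# Toy registry and jobs; adapt to your own setup.
reg = makeRegistry(file.dir = NA)
ids = batchMap(function(x) x^2, x = 1:10, reg = reg)

# 'walltime' and 'memory' must match what the cluster template expects.
submitJobs(ids, resources = list(walltime = 3600, memory = 1024), reg = reg)
getStatus(reg = reg)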
@@ -362,7 +362,7 @@ 

Example 2: Machine Learning

We stick to a rather simple, but not unrealistic, example to explain some further functionality: applying two classification learners to the famous iris data set (Anderson 1935), varying a few hyperparameters, and evaluating their effect on classification performance.

-

First, we create a registry, the central meta-data object which records technical details and the setup of the experiments. We use an ExperimentRegistry where the job definition is split into creating problems and algorithms. See the paper on BatchJobs and BatchExperiments for a detailed explanation. Again, we use a temporary registry and make it the default registry.

+

First, we create a registry, the central meta-data object which records technical details and the setup of the experiments. We use an ExperimentRegistry where the job definition is split into creating problems and algorithms. See the paper on BatchJobs and BatchExperiments for a detailed explanation. Again, we use a temporary registry and make it the default registry.

 library(batchtools)
 reg = makeExperimentRegistry(file.dir = NA, seed = 1)
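To make the problem/algorithm split concrete, here is a hedged sketch (not the vignette's exact code): a subsampling problem on iris plus a dependency-free baseline algorithm, crossed into experiments. The names subsample and majority.class and the parameter ratio are illustrative.

# Problem: subsample iris into a train/test split; 'ratio' is a problem parameter.
addProblem(name = "subsample", data = iris, fun = function(data, job, ratio, ...) {
  n = nrow(data)
  train = sample(n, floor(n * ratio))
  list(train = data[train, ], test = data[-train, ])
}, reg = reg)

# Algorithm: error rate of always predicting the majority class of the training set.
addAlgorithm(name = "majority.class", fun = function(data, job, instance, ...) {
  majority = names(which.max(table(instance$train$Species)))
  mean(instance$test$Species != majority)
}, reg = reg)

# Cross the problem design with the (parameter-free) algorithm and replicate.
addExperiments(
  prob.designs = list(subsample = data.table::CJ(ratio = c(0.67, 0.9))),
  repls = 5, reg = reg
)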
@@ -582,7 +582,7 @@

After you have submitted jobs and suspect that something is going wrong, the first thing to do is to run getStatus() to display a summary of the current state of the system; a short debugging sketch follows the status output below.

-
## Status for 10 jobs at 2020-10-20 14:36:12:
+
## Status for 10 jobs at 2020-10-21 09:39:40:
 ##   Submitted    : 10 (100.0%)
 ##   -- Queued    :  0 (  0.0%)
 ##   -- Started   : 10 (100.0%)
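A hedged sketch of the typical follow-up once the status report shows errors, assuming a registry reg that contains failed jobs:

err = findErrors(reg = reg)                # ids of jobs that threw an error
getErrorMessages(err, reg = reg)           # the recorded error messages
grepLogs(pattern = "warning", reg = reg)   # search the log files
testJob(id = err$job.id[1], reg = reg)     # re-run one job locally to get a traceback()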
@@ -628,7 +628,7 @@ 

 • If required, query the database for job ids depending on their status, parameters or tags (see findJobs()). The returned tables can easily be combined in a set-like fashion with database verbs: union (ojoin() for outer join), intersect (ijoin() for inner join), difference (ajoin() for anti join).
 • Submit jobs with submitJobs(). You can specify job resources here. If you have thousands of fast-terminating jobs, you may want to chunk() them first. If some jobs have already terminated, you can estimate the runtimes with estimateRuntimes() and chunk jobs into heterogeneous groups with lpt() and binpack() (see the sketch after this list).
 • Monitor jobs. getStatus() gives a summary overview. Use showLog() and grepLogs() to investigate log files. Run jobs in the current session with testJob() to get a traceback().
  • -
 • Collect (partial) results. loadResult() retrieves a single result from the file system. reduceResults() mimics Reduce() and lets you apply a function to many results iteratively. reduceResultsList() and reduceResultsDataTable() collect results into a list or data.table, respectively.
  • +
 • Collect (partial) results. loadResult() retrieves a single result from the file system. reduceResults() mimics Reduce() and lets you apply a function to many results iteratively. reduceResultsList() and reduceResultsDataTable() collect results into a list or data.table, respectively.
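The bullets above can be strung together; the following hedged sketch assumes an existing registry reg in which some jobs have already finished, and resource names that are template-specific.

ids = findJobs(reg = reg)                        # query the database for job ids
tab = unwrap(getJobTable(reg = reg))             # job parameters plus status

est = estimateRuntimes(tab, reg = reg)           # model runtimes from finished jobs
ids = est$runtimes[type == "estimated"]
ids[, chunk := binpack(runtime, 3600)]           # pack remaining jobs into ~1h chunks
submitJobs(ids, resources = list(walltime = 3700), reg = reg)

getStatus(reg = reg)                             # monitor progress
results = reduceResultsList(findDone(reg = reg), reg = reg)   # collect (partial) results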
  • diff --git a/docs/index.html b/docs/index.html index 95c4aa09..6bf50796 100644 --- a/docs/index.html +++ b/docs/index.html @@ -16,8 +16,8 @@ this package provides a parallel implementation of the Map function for high performance computing systems managed by schedulers IBM Spectrum LSF (<https://www.ibm.com/products/hpc-workload-management>), - OpenLava (<http://www.openlava.org/>), Univa Grid Engine/Oracle Grid - Engine (<http://www.univa.com/>), Slurm (<http://slurm.schedmd.com/>), + OpenLava (<https://www.openlava.org/>), Univa Grid Engine/Oracle Grid + Engine (<https://www.univa.com/>), Slurm (<https://slurm.schedmd.com/>), TORQUE/PBS (<https://adaptivecomputing.com/cherry-services/torque-resource-manager/>), or Docker Swarm (<https://docs.docker.com/engine/swarm/>). @@ -125,7 +125,7 @@

  • JOSS Paper: Short paper on batchtools. Please cite this if you use batchtools.
  • -Paper on BatchJobs/BatchExperiments: The described concept still holds for batchtools and most examples work analogously (see the vignette for differences between the packages).
  • +Paper on BatchJobs/BatchExperiments: The described concept still holds for batchtools and most examples work analogously (see the vignette for differences between the packages).
    @@ -208,11 +208,10 @@

    Developers

    Dev status

      -
    • JOSS Publication
    • -
    • CRAN Status Badge
    • +
    • JOSS Publication
    • +
    • CRAN Status Badge
    • Build Status
    • Build Status
    • -
    • Coverage Status
    diff --git a/docs/pkgdown.yml b/docs/pkgdown.yml index 023059a7..84f8ae19 100644 --- a/docs/pkgdown.yml +++ b/docs/pkgdown.yml @@ -3,5 +3,5 @@ pkgdown: 1.6.1 pkgdown_sha: ~ articles: batchtools: batchtools.html -last_built: 2020-10-20T12:35Z +last_built: 2020-10-21T07:39Z diff --git a/docs/reference/JoinTables.html b/docs/reference/JoinTables.html index 7d6dbeb2..272be9e3 100644 --- a/docs/reference/JoinTables.html +++ b/docs/reference/JoinTables.html @@ -42,7 +42,7 @@ @@ -125,7 +125,7 @@

    Inner, Left, Right, Outer, Semi and Anti Join for Data Tables

    These helper functions perform join operations on data tables. Most of them are basically one-liners. -See http://rpubs.com/ronasta/join_data_tables for an overview of join operations in +See https://rpubs.com/ronasta/join_data_tables for an overview of join operations in data table or alternatively dplyr's vignette on two table verbs.
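A hedged toy example of these helpers on two small keyed tables (the column names here are made up):

library(batchtools)
library(data.table)

x = data.table(job.id = 1:3, algo = c("a", "b", "c"), key = "job.id")
y = data.table(job.id = 2:4, time = c(10, 20, 30), key = "job.id")

ijoin(x, y, by = "job.id")   # inner join: job.ids 2 and 3
ojoin(x, y, by = "job.id")   # outer join: job.ids 1 to 4
ajoin(x, y, by = "job.id")   # anti join: rows of x without a match in y (job.id 1)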

    diff --git a/docs/reference/btlapply.html b/docs/reference/btlapply.html index 6a7f0634..bf8b12a5 100644 --- a/docs/reference/btlapply.html +++ b/docs/reference/btlapply.html @@ -237,7 +237,7 @@

    Examp #> [[3]] #> [1] 9 #>
    btmapply(function(x, y, z) x + y + z, x = 1:3, y = 1:3, more.args = list(z = 1), simplify = TRUE) -
    #> No readable configuration file found
    #> Created registry in '/tmp/RtmpBTNF2j/registry2e1087f4a1a29' using cluster functions 'Interactive'
    #> Adding 3 jobs ...
    #> Submitting 3 jobs in 3 chunks using cluster functions 'Interactive' ...
    #> [1] 3 5 7

    +
    #> No readable configuration file found
    #> Created registry in '/tmp/RtmpsmIC4J/registry9e4369cc3bf0' using cluster functions 'Interactive'
    #> Adding 3 jobs ...
    #> Submitting 3 jobs in 3 chunks using cluster functions 'Interactive' ...
    #> [1] 3 5 7
    #> ### [bt]: This is batchtools v0.9.14 #> ### [bt]: Starting calculation of 2 jobs -#> ### [bt]: Setting working directory to '/home/lang/Projekte/batchtools/docs/reference' +#> ### [bt]: Setting working directory to '/home/michel/Projekte/batchtools/docs/reference' #> ### [bt]: Memory measurement disabled #> ### [bt]: Starting job [batchtools job.id=1] #> ### [bt]: Setting seed to 1166 ... diff --git a/docs/reference/estimateRuntimes.html b/docs/reference/estimateRuntimes.html index e508fb10..687061e6 100644 --- a/docs/reference/estimateRuntimes.html +++ b/docs/reference/estimateRuntimes.html @@ -222,7 +222,7 @@

    Examp #> 496: 496 496 NA NA NA <NA> NA NA #> 497: 497 497 NA NA NA <NA> NA NA #> 498: 498 498 NA NA NA <NA> NA NA -#> 499: 499 499 1603197327 1603197327 1603197827 <NA> NA 1 +#> 499: 499 499 1603265964 1603265964 1603266464 <NA> NA 1 #> 500: 500 500 NA NA NA <NA> NA NA #> batch.id log.file job.hash job.name repl #> 1: <NA> <NA> <NA> <NA> 1 @@ -234,99 +234,99 @@

    Examp #> 496: <NA> <NA> <NA> <NA> 1 #> 497: <NA> <NA> <NA> <NA> 1 #> 498: <NA> <NA> <NA> <NA> 1 -#> 499: cfInteractive <NA> job7c4bbe21691c77d776b738c39a54fd28 <NA> 1 +#> 499: cfInteractive <NA> joba17949c0e00e62405c8465e973297f1c <NA> 1 #> 500: <NA> <NA> <NA> <NA> 1

    rjoin(sjoin(tab, ids), getJobStatus(ids, reg = tmp)[, c("job.id", "time.running")])
    #> job.id problem algorithm x y time.running -#> 1: 32 iris nrow 7 b 1100.0049 secs -#> 2: 42 iris nrow 9 b 1100.0046 secs -#> 3: 47 iris nrow 10 b 1100.0046 secs -#> 4: 66 iris nrow 14 a 1100.0055 secs -#> 5: 73 iris nrow 15 c 100.0065 secs -#> 6: 75 iris nrow 15 e 100.0099 secs -#> 7: 86 iris nrow 18 a 1100.0058 secs -#> 8: 100 iris nrow 20 e 100.0060 secs -#> 9: 101 iris nrow 21 a 1100.0049 secs -#> 10: 103 iris nrow 21 c 100.0046 secs -#> 11: 123 iris nrow 25 c 100.0042 secs -#> 12: 125 iris nrow 25 e 100.0059 secs -#> 13: 161 iris nrow 33 a 1100.0059 secs -#> 14: 165 iris nrow 33 e 100.0110 secs -#> 15: 169 iris nrow 34 d 100.0049 secs -#> 16: 183 iris nrow 37 c 100.0058 secs -#> 17: 184 iris nrow 37 d 100.0058 secs -#> 18: 203 iris nrow 41 c 100.0051 secs -#> 19: 207 iris nrow 42 b 1100.0050 secs -#> 20: 209 iris nrow 42 d 100.0049 secs -#> 21: 220 iris nrow 44 e 100.0052 secs -#> 22: 227 iris nrow 46 b 1100.0054 secs -#> 23: 229 iris nrow 46 d 100.0068 secs -#> 24: 231 iris nrow 47 a 1100.0059 secs -#> 25: 244 iris nrow 49 d 100.0057 secs -#> 26: 260 iris ncol 2 e 500.0053 secs -#> 27: 276 iris ncol 6 a 1500.0050 secs -#> 28: 278 iris ncol 6 c 500.0066 secs -#> 29: 279 iris ncol 6 d 500.0054 secs -#> 30: 296 iris ncol 10 a 1500.0066 secs -#> 31: 320 iris ncol 14 e 500.0052 secs -#> 32: 340 iris ncol 18 e 500.0046 secs -#> 33: 347 iris ncol 20 b 1500.0046 secs -#> 34: 363 iris ncol 23 c 500.0061 secs -#> 35: 369 iris ncol 24 d 500.0062 secs -#> 36: 373 iris ncol 25 c 500.0063 secs -#> 37: 387 iris ncol 28 b 1500.0051 secs -#> 38: 410 iris ncol 32 e 500.0055 secs -#> 39: 421 iris ncol 35 a 1500.0070 secs -#> 40: 436 iris ncol 38 a 1500.0054 secs -#> 41: 444 iris ncol 39 d 500.0056 secs -#> 42: 448 iris ncol 40 c 500.0055 secs -#> 43: 456 iris ncol 42 a 1500.0059 secs -#> 44: 459 iris ncol 42 d 500.0058 secs -#> 45: 467 iris ncol 44 b 1500.0053 secs -#> 46: 468 iris ncol 44 c 500.0057 secs -#> 47: 475 iris ncol 45 e 500.0070 secs -#> 48: 482 iris ncol 47 b 1500.0070 secs -#> 49: 492 iris ncol 49 b 1500.0061 secs -#> 50: 499 iris ncol 50 d 500.0059 secs +#> 1: 32 iris nrow 7 b 1100.0026 secs +#> 2: 42 iris nrow 9 b 1100.0024 secs +#> 3: 47 iris nrow 10 b 1100.0023 secs +#> 4: 66 iris nrow 14 a 1100.0052 secs +#> 5: 73 iris nrow 15 c 100.0023 secs +#> 6: 75 iris nrow 15 e 100.0024 secs +#> 7: 86 iris nrow 18 a 1100.0025 secs +#> 8: 100 iris nrow 20 e 100.0026 secs +#> 9: 101 iris nrow 21 a 1100.0024 secs +#> 10: 103 iris nrow 21 c 100.0024 secs +#> 11: 123 iris nrow 25 c 100.0024 secs +#> 12: 125 iris nrow 25 e 100.0028 secs +#> 13: 161 iris nrow 33 a 1100.0026 secs +#> 14: 165 iris nrow 33 e 100.0026 secs +#> 15: 169 iris nrow 34 d 100.0026 secs +#> 16: 183 iris nrow 37 c 100.0027 secs +#> 17: 184 iris nrow 37 d 100.0027 secs +#> 18: 203 iris nrow 41 c 100.0036 secs +#> 19: 207 iris nrow 42 b 1100.0024 secs +#> 20: 209 iris nrow 42 d 100.0029 secs +#> 21: 220 iris nrow 44 e 100.0023 secs +#> 22: 227 iris nrow 46 b 1100.0024 secs +#> 23: 229 iris nrow 46 d 100.0023 secs +#> 24: 231 iris nrow 47 a 1100.0023 secs +#> 25: 244 iris nrow 49 d 100.0022 secs +#> 26: 260 iris ncol 2 e 500.0024 secs +#> 27: 276 iris ncol 6 a 1500.0025 secs +#> 28: 278 iris ncol 6 c 500.0025 secs +#> 29: 279 iris ncol 6 d 500.0024 secs +#> 30: 296 iris ncol 10 a 1500.0025 secs +#> 31: 320 iris ncol 14 e 500.0023 secs +#> 32: 340 iris ncol 18 e 500.0023 secs +#> 33: 347 iris ncol 20 b 1500.0023 secs +#> 34: 363 iris ncol 23 c 500.0023 secs +#> 35: 369 iris ncol 24 d 500.0023 secs +#> 36: 373 iris ncol 25 c 
500.0025 secs +#> 37: 387 iris ncol 28 b 1500.0023 secs +#> 38: 410 iris ncol 32 e 500.0024 secs +#> 39: 421 iris ncol 35 a 1500.0024 secs +#> 40: 436 iris ncol 38 a 1500.0024 secs +#> 41: 444 iris ncol 39 d 500.0022 secs +#> 42: 448 iris ncol 40 c 500.0022 secs +#> 43: 456 iris ncol 42 a 1500.0023 secs +#> 44: 459 iris ncol 42 d 500.0023 secs +#> 45: 467 iris ncol 44 b 1500.0023 secs +#> 46: 468 iris ncol 44 c 500.0023 secs +#> 47: 475 iris ncol 45 e 500.0024 secs +#> 48: 482 iris ncol 47 b 1500.0023 secs +#> 49: 492 iris ncol 49 b 1500.0023 secs +#> 50: 499 iris ncol 50 d 500.0023 secs #> job.id problem algorithm x y time.running
    # Estimate runtimes: est = estimateRuntimes(tab, reg = tmp) print(est)
    #> Runtime Estimate for 500 jobs with 1 CPUs -#> Done : 0d 09h 43m 20.3s -#> Remaining: 3d 17h 34m 34.9s -#> Total : 4d 03h 17m 55.2s
    rjoin(tab, est$runtimes) +#> Done : 0d 09h 43m 20.1s +#> Remaining: 3d 17h 37m 8.0s +#> Total : 4d 03h 20m 28.1s
    rjoin(tab, est$runtimes)
    #> job.id problem algorithm x y type runtime -#> 1: 1 iris nrow 1 a estimated 1103.4590 -#> 2: 2 iris nrow 1 b estimated 1087.2531 -#> 3: 3 iris nrow 1 c estimated 338.3327 -#> 4: 4 iris nrow 1 d estimated 318.0785 -#> 5: 5 iris nrow 1 e estimated 317.7231 +#> 1: 1 iris nrow 1 a estimated 1107.0568 +#> 2: 2 iris nrow 1 b estimated 1090.8508 +#> 3: 3 iris nrow 1 c estimated 338.2092 +#> 4: 4 iris nrow 1 d estimated 318.6349 +#> 5: 5 iris nrow 1 e estimated 317.3189 #> --- -#> 496: 496 iris ncol 50 a estimated 1387.5999 -#> 497: 497 iris ncol 50 b estimated 1394.6896 -#> 498: 498 iris ncol 50 c estimated 618.0633 -#> 499: 499 iris ncol 50 d observed 500.0059 -#> 500: 500 iris ncol 50 e estimated 578.6555
    print(est, n = 10) +#> 496: 496 iris ncol 50 a estimated 1381.9162 +#> 497: 497 iris ncol 50 b estimated 1389.1659 +#> 498: 498 iris ncol 50 c estimated 614.0596 +#> 499: 499 iris ncol 50 d observed 500.0023 +#> 500: 500 iris ncol 50 e estimated 574.7851
    print(est, n = 10)
    #> Runtime Estimate for 500 jobs with 10 CPUs -#> Done : 0d 09h 43m 20.3s -#> Remaining: 3d 17h 34m 34.9s -#> Parallel : 0d 08h 58m 10.3s -#> Total : 4d 03h 17m 55.2s
    +#> Done : 0d 09h 43m 20.1s +#> Remaining: 3d 17h 37m 8.0s +#> Parallel : 0d 08h 58m 21.4s +#> Total : 4d 03h 20m 28.1s
    # Submit jobs with longest runtime first: ids = est$runtimes[type == "estimated"][order(runtime, decreasing = TRUE)] print(ids)
    #> job.id type runtime -#> 1: 466 estimated 1422.9803 -#> 2: 461 estimated 1419.5870 -#> 3: 472 estimated 1415.9535 -#> 4: 462 estimated 1415.6001 -#> 5: 457 estimated 1414.8001 +#> 1: 466 estimated 1420.0934 +#> 2: 461 estimated 1418.7001 +#> 3: 462 estimated 1415.5134 +#> 4: 457 estimated 1414.7134 +#> 5: 487 estimated 1413.4847 #> --- -#> 446: 204 estimated 132.6668 -#> 447: 185 estimated 132.3737 -#> 448: 189 estimated 131.9505 -#> 449: 174 estimated 131.3331 -#> 450: 179 estimated 130.1864
    if (FALSE) { +#> 446: 194 estimated 133.0456 +#> 447: 185 estimated 133.0030 +#> 448: 204 estimated 131.6954 +#> 449: 174 estimated 131.5901 +#> 450: 179 estimated 130.4434
    if (FALSE) { submitJobs(ids, reg = tmp) } @@ -334,121 +334,121 @@

    Examp ids = est$runtimes[type == "estimated"] ids[, chunk := binpack(runtime, 3600)]

    #> job.id type runtime chunk -#> 1: 1 estimated 1103.4590 48 -#> 2: 2 estimated 1087.2531 52 -#> 3: 3 estimated 338.3327 36 -#> 4: 4 estimated 318.0785 33 -#> 5: 5 estimated 317.7231 70 +#> 1: 1 estimated 1107.0568 47 +#> 2: 2 estimated 1090.8508 51 +#> 3: 3 estimated 338.2092 37 +#> 4: 4 estimated 318.6349 33 +#> 5: 5 estimated 317.3189 70 #> --- -#> 446: 495 estimated 584.3901 15 -#> 447: 496 estimated 1387.5999 18 -#> 448: 497 estimated 1394.6896 13 -#> 449: 498 estimated 618.0633 4 -#> 450: 500 estimated 578.6555 20
    print(ids) +#> 446: 495 estimated 581.7197 17 +#> 447: 496 estimated 1381.9162 20 +#> 448: 497 estimated 1389.1659 15 +#> 449: 498 estimated 614.0596 4 +#> 450: 500 estimated 574.7851 26
    print(ids)
    #> job.id type runtime chunk -#> 1: 1 estimated 1103.4590 48 -#> 2: 2 estimated 1087.2531 52 -#> 3: 3 estimated 338.3327 36 -#> 4: 4 estimated 318.0785 33 -#> 5: 5 estimated 317.7231 70 +#> 1: 1 estimated 1107.0568 47 +#> 2: 2 estimated 1090.8508 51 +#> 3: 3 estimated 338.2092 37 +#> 4: 4 estimated 318.6349 33 +#> 5: 5 estimated 317.3189 70 #> --- -#> 446: 495 estimated 584.3901 15 -#> 447: 496 estimated 1387.5999 18 -#> 448: 497 estimated 1394.6896 13 -#> 449: 498 estimated 618.0633 4 -#> 450: 500 estimated 578.6555 20
    print(ids[, list(runtime = sum(runtime)), by = chunk]) +#> 446: 495 estimated 581.7197 17 +#> 447: 496 estimated 1381.9162 20 +#> 448: 497 estimated 1389.1659 15 +#> 449: 498 estimated 614.0596 4 +#> 450: 500 estimated 574.7851 26
    print(ids[, list(runtime = sum(runtime)), by = chunk])
    #> chunk runtime -#> 1: 48 3485.338 -#> 2: 52 3598.644 -#> 3: 36 3599.459 -#> 4: 33 3596.494 -#> 5: 70 3490.925 -#> 6: 49 3479.219 -#> 7: 37 3595.168 -#> 8: 51 3596.367 -#> 9: 71 3488.611 -#> 10: 53 3599.742 -#> 11: 68 3496.038 -#> 12: 72 3485.351 -#> 13: 54 3589.155 -#> 14: 69 3494.538 -#> 15: 73 3478.069 -#> 16: 46 3516.899 -#> 17: 50 3599.762 -#> 18: 42 3572.213 -#> 19: 64 3516.755 -#> 20: 66 3506.507 -#> 21: 43 3569.239 -#> 22: 65 3512.215 -#> 23: 67 3505.174 -#> 24: 41 3581.446 -#> 25: 59 3537.144 -#> 26: 60 3532.235 -#> 27: 47 3493.115 -#> 28: 38 3599.214 -#> 29: 35 3599.986 -#> 30: 40 3589.694 -#> 31: 57 3561.085 -#> 32: 58 3546.237 -#> 33: 39 3594.115 -#> 34: 55 3581.046 -#> 35: 56 3570.053 -#> 36: 44 3534.662 -#> 37: 61 3522.996 -#> 38: 34 3598.099 -#> 39: 45 3533.418 -#> 40: 63 3518.338 -#> 41: 62 3521.086 -#> 42: 27 3597.000 -#> 43: 26 3586.504 -#> 44: 24 3596.226 -#> 45: 25 3597.570 -#> 46: 23 3597.820 -#> 47: 29 3567.933 -#> 48: 75 3599.097 -#> 49: 20 3522.443 -#> 50: 74 3474.955 -#> 51: 8 3591.306 -#> 52: 21 3515.784 -#> 53: 11 3565.927 -#> 54: 7 3592.927 -#> 55: 12 3563.426 -#> 56: 5 3593.237 -#> 57: 6 3598.126 -#> 58: 32 3599.468 -#> 59: 10 3579.486 -#> 60: 80 3481.959 -#> 61: 82 3598.299 -#> 62: 83 3597.802 -#> 63: 4 3598.686 -#> 64: 79 3493.361 -#> 65: 81 3474.061 -#> 66: 76 3587.729 -#> 67: 87 3564.202 -#> 68: 89 3546.702 -#> 69: 91 2158.021 -#> 70: 78 3506.583 -#> 71: 88 3554.982 -#> 72: 77 3523.895 -#> 73: 85 3580.521 -#> 74: 3 3599.219 -#> 75: 90 3522.918 -#> 76: 84 3586.090 -#> 77: 86 3572.917 -#> 78: 2 3599.764 -#> 79: 1 3599.763 -#> 80: 9 3587.324 -#> 81: 30 3560.138 -#> 82: 13 3599.521 -#> 83: 14 3598.014 -#> 84: 22 3598.040 -#> 85: 19 3574.641 -#> 86: 17 3585.269 -#> 87: 31 3546.246 -#> 88: 18 3581.006 -#> 89: 28 3574.646 -#> 90: 16 3598.238 -#> 91: 15 3599.249 +#> 1: 47 3493.187 +#> 2: 51 3593.783 +#> 3: 37 3598.573 +#> 4: 33 3599.900 +#> 5: 70 3493.489 +#> 6: 53 3598.723 +#> 7: 71 3491.366 +#> 8: 48 3491.841 +#> 9: 52 3597.483 +#> 10: 54 3587.877 +#> 11: 68 3499.779 +#> 12: 72 3489.223 +#> 13: 55 3583.526 +#> 14: 69 3496.272 +#> 15: 73 3483.829 +#> 16: 46 3519.591 +#> 17: 50 3599.943 +#> 18: 38 3597.396 +#> 19: 65 3512.646 +#> 20: 43 3571.763 +#> 21: 62 3522.617 +#> 22: 66 3511.003 +#> 23: 39 3599.908 +#> 24: 35 3599.575 +#> 25: 61 3533.407 +#> 26: 40 3598.645 +#> 27: 56 3571.361 +#> 28: 57 3565.133 +#> 29: 49 3481.931 +#> 30: 42 3583.160 +#> 31: 58 3555.775 +#> 32: 60 3535.954 +#> 33: 41 3588.180 +#> 34: 36 3599.425 +#> 35: 59 3545.174 +#> 36: 44 3541.279 +#> 37: 34 3599.586 +#> 38: 64 3514.492 +#> 39: 45 3540.479 +#> 40: 63 3517.610 +#> 41: 67 3507.819 +#> 42: 27 3598.911 +#> 43: 24 3599.823 +#> 44: 25 3590.607 +#> 45: 26 3598.511 +#> 46: 23 3599.593 +#> 47: 28 3573.496 +#> 48: 75 3599.916 +#> 49: 12 3559.937 +#> 50: 74 3474.824 +#> 51: 8 3593.188 +#> 52: 20 3521.159 +#> 53: 31 3599.784 +#> 54: 7 3595.855 +#> 55: 5 3594.254 +#> 56: 11 3563.352 +#> 57: 10 3575.839 +#> 58: 6 3599.450 +#> 59: 32 3598.576 +#> 60: 80 3492.129 +#> 61: 82 3471.066 +#> 62: 83 3599.780 +#> 63: 79 3501.372 +#> 64: 76 3593.842 +#> 65: 85 3588.259 +#> 66: 89 3553.760 +#> 67: 91 2151.522 +#> 68: 81 3481.753 +#> 69: 78 3513.014 +#> 70: 87 3570.795 +#> 71: 88 3563.106 +#> 72: 77 3529.443 +#> 73: 3 3599.295 +#> 74: 86 3578.904 +#> 75: 90 3529.605 +#> 76: 2 3599.210 +#> 77: 84 3596.381 +#> 78: 1 3599.788 +#> 79: 4 3595.377 +#> 80: 9 3583.777 +#> 81: 29 3558.408 +#> 82: 18 3572.866 +#> 83: 15 3583.955 +#> 84: 21 3599.004 +#> 85: 19 3567.117 +#> 86: 16 3582.283 +#> 87: 30 3550.130 +#> 88: 
17 3578.532 +#> 89: 22 3599.427 +#> 90: 13 3599.265 +#> 91: 14 3595.019 #> chunk runtime
    if (FALSE) { submitJobs(ids, reg = tmp) } @@ -457,28 +457,28 @@

    Examp ids = est$runtimes[type == "estimated"] ids[, chunk := lpt(runtime, 10)]

    #> job.id type runtime chunk -#> 1: 1 estimated 1103.4590 7 -#> 2: 2 estimated 1087.2531 5 -#> 3: 3 estimated 338.3327 9 -#> 4: 4 estimated 318.0785 3 -#> 5: 5 estimated 317.7231 5 +#> 1: 1 estimated 1107.0568 4 +#> 2: 2 estimated 1090.8508 9 +#> 3: 3 estimated 338.2092 4 +#> 4: 4 estimated 318.6349 8 +#> 5: 5 estimated 317.3189 6 #> --- -#> 446: 495 estimated 584.3901 10 -#> 447: 496 estimated 1387.5999 6 -#> 448: 497 estimated 1394.6896 6 -#> 449: 498 estimated 618.0633 8 -#> 450: 500 estimated 578.6555 9
    print(ids[, list(runtime = sum(runtime)), by = chunk]) +#> 446: 495 estimated 581.7197 2 +#> 447: 496 estimated 1381.9162 9 +#> 448: 497 estimated 1389.1659 2 +#> 449: 498 estimated 614.0596 2 +#> 450: 500 estimated 574.7851 1
    print(ids[, list(runtime = sum(runtime)), by = chunk])
    #> chunk runtime -#> 1: 7 32290.29 -#> 2: 5 32288.78 -#> 3: 9 32212.48 -#> 4: 3 32215.25 -#> 5: 8 32271.77 -#> 6: 4 32212.70 -#> 7: 2 32210.72 -#> 8: 1 32289.79 -#> 9: 6 32271.49 -#> 10: 10 32211.62

    +#> 1: 4 32227.40 +#> 2: 9 32226.68 +#> 3: 8 32231.22 +#> 4: 6 32293.22 +#> 5: 1 32226.47 +#> 6: 3 32292.92 +#> 7: 10 32227.16 +#> 8: 5 32301.32 +#> 9: 2 32301.36 +#> 10: 7 32300.22
    #> job.id submitted started done -#> 1: 1 2020-10-20 14:35:31 2020-10-20 14:35:31 2020-10-20 14:35:31 -#> 2: 2 2020-10-20 14:35:31 2020-10-20 14:35:31 2020-10-20 14:35:31 -#> 3: 3 2020-10-20 14:35:31 2020-10-20 14:35:31 2020-10-20 14:35:31 +#> 1: 1 2020-10-21 09:39:25 2020-10-21 09:39:25 2020-10-21 09:39:25 +#> 2: 2 2020-10-21 09:39:25 2020-10-21 09:39:25 2020-10-21 09:39:25 +#> 3: 3 2020-10-21 09:39:25 2020-10-21 09:39:25 2020-10-21 09:39:25 #> error mem.used batch.id log.file #> 1: Error in (function (x) : x must be > 0 NA cfInteractive <NA> #> 2: <NA> NA cfInteractive <NA> #> 3: <NA> NA cfInteractive <NA> #> job.hash job.name time.queued -#> 1: jobe262d06587118a403d251a789f2ddbb5 <NA> 0.007600069 secs -#> 2: job0323395c9f048471d1762d3856c3bca2 <NA> 0.009500027 secs -#> 3: job17d391a42edd7824302048f3a813bcad <NA> 0.009400129 secs +#> 1: job5adb5742954ec70e5dc4621612e5638e <NA> 0.002799988 secs +#> 2: job5c0144440e7359c3570442e591fd68a7 <NA> 0.002799988 secs +#> 3: jobcedfc453688181879b03f66b5e6e5f25 <NA> 0.002799988 secs #> time.running job.pars resources tags -#> 1: 0.006000042 secs <list[1]> <list[0]> tag1 -#> 2: 0.007200003 secs <list[1]> <list[0]> tag1,tag2 -#> 3: 0.004899979 secs <list[1]> <list[0]> <NA>
    +#> 1: 0.002099991 secs <list[1]> <list[0]> tag1 +#> 2: 0.001900196 secs <list[1]> <list[0]> tag1,tag2 +#> 3: 0.001899958 secs <list[1]> <list[0]> <NA>
    # Job parameters: getJobPars(reg = tmp)
    #> job.id job.pars diff --git a/docs/reference/getStatus.html b/docs/reference/getStatus.html index b9e352e9..cdb4e6e6 100644 --- a/docs/reference/getStatus.html +++ b/docs/reference/getStatus.html @@ -192,7 +192,7 @@

    Examp

    #> [1] FALSE
    tab = getStatus(reg = tmp) print(tab) -
    #> Status for 5 jobs at 2020-10-20 14:35:31: +
    #> Status for 5 jobs at 2020-10-21 09:39:26: #> Submitted : 4 ( 80.0%) #> -- Queued : 0 ( 0.0%) #> -- Started : 4 ( 80.0%) diff --git a/docs/reference/makeClusterFunctionsSGE.html b/docs/reference/makeClusterFunctionsSGE.html index 4951baae..d779adb3 100644 --- a/docs/reference/makeClusterFunctionsSGE.html +++ b/docs/reference/makeClusterFunctionsSGE.html @@ -41,7 +41,7 @@

    Cluster functions for Univa Grid Engine / Oracle Grid Engine / -Sun Grid Engine (http://www.univa.com/).

    +Sun Grid Engine (https://www.univa.com/).

    Job files are created based on the brew template template. This file is processed with brew and then submitted to the queue using the qsub command. Jobs are killed using the qdel command and the diff --git a/docs/reference/makeClusterFunctionsSlurm.html b/docs/reference/makeClusterFunctionsSlurm.html index 0871b82e..a4c1f443 100644 --- a/docs/reference/makeClusterFunctionsSlurm.html +++ b/docs/reference/makeClusterFunctionsSlurm.html @@ -40,7 +40,7 @@ - -

    Cluster functions for Slurm (http://slurm.schedmd.com/).

    +

    Cluster functions for Slurm (https://slurm.schedmd.com/).

    Job files are created based on the brew template template.file. This file is processed with brew and then submitted to the queue using the sbatch command. Jobs are killed using the scancel command and diff --git a/docs/reference/makeRegistry.html b/docs/reference/makeRegistry.html index ef2cebf4..0f48c5f4 100644 --- a/docs/reference/makeRegistry.html +++ b/docs/reference/makeRegistry.html @@ -303,7 +303,7 @@
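For reference, a hedged sketch of a batchtools.conf.R entry that wires a registry to Slurm via the cluster functions described above; the template file name is a placeholder, and the default resource names must match whatever the brew template expects.

# batchtools.conf.R (sketch)
cluster.functions = makeClusterFunctionsSlurm(template = "slurm-simple.tmpl")
default.resources = list(walltime = 3600, memory = 2048, ncpus = 1)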

    Examp

    #> Job Registry #> Backend : Interactive #> File dir : /tmp/batchtools-example/reg -#> Work dir : /home/lang/Projekte/batchtools/docs/reference +#> Work dir : /home/michel/Projekte/batchtools/docs/reference #> Jobs : 0 #> Seed : 5075 #> Writeable: TRUE
    diff --git a/docs/reference/showLog.html b/docs/reference/showLog.html index 4d2f159a..dfb9cd8e 100644 --- a/docs/reference/showLog.html +++ b/docs/reference/showLog.html @@ -171,7 +171,7 @@

    Examp

    #> Adding 5 jobs ...
    submitJobs(reg = tmp)
    #> Submitting 5 jobs in 5 chunks using cluster functions 'Interactive' ...
    #> Warning: That's odd.
    #> Error in (function (i) : 3
    #> Warning: That's odd.
    waitForJobs(reg = tmp)
    #> [1] FALSE
    getStatus(reg = tmp) -
    #> Status for 5 jobs at 2020-10-20 14:35:44: +
    #> Status for 5 jobs at 2020-10-21 09:39:31: #> Submitted : 5 (100.0%) #> -- Queued : 0 ( 0.0%) #> -- Started : 5 (100.0%) @@ -182,7 +182,7 @@

    Examp writeLines(getLog(ids[1], reg = tmp))

    #> ### [bt]: This is batchtools v0.9.14 #> ### [bt]: Starting calculation of 1 jobs -#> ### [bt]: Setting working directory to '/home/lang/Projekte/batchtools/docs/reference' +#> ### [bt]: Setting working directory to '/home/michel/Projekte/batchtools/docs/reference' #> ### [bt]: Memory measurement disabled #> ### [bt]: Starting job [batchtools job.id=1] #> ### [bt]: Setting seed to 5192 ... diff --git a/docs/reference/submitJobs.html b/docs/reference/submitJobs.html index 000b58c7..28a7f94b 100644 --- a/docs/reference/submitJobs.html +++ b/docs/reference/submitJobs.html @@ -293,7 +293,7 @@

    Examp #> Error in (function (x, fn) : file not found #> Error in (function (x, fn) : file not found

    waitForJobs(ids, reg = tmp)
    #> [1] FALSE
    getStatus(reg = tmp) -
    #> Status for 20 jobs at 2020-10-20 14:35:46: +
    #> Status for 20 jobs at 2020-10-21 09:39:31: #> Submitted : 10 ( 50.0%) #> -- Queued : 0 ( 0.0%) #> -- Started : 10 ( 50.0%) @@ -305,7 +305,7 @@

    Examp file.create(fn)

    #> [1] TRUE
    submitJobs(findErrors(ids, reg = tmp), reg = tmp)
    #> Submitting 5 jobs in 5 chunks using cluster functions 'Interactive' ...
    getStatus(reg = tmp) -
    #> Status for 20 jobs at 2020-10-20 14:35:46: +
    #> Status for 20 jobs at 2020-10-21 09:39:31: #> Submitted : 10 ( 50.0%) #> -- Queued : 0 ( 0.0%) #> -- Started : 10 ( 50.0%) @@ -317,7 +317,7 @@

    Examp ids = findNotSubmitted(reg = tmp) submitJobs(ids, reg = tmp)

    #> Submitting 10 jobs in 10 chunks using cluster functions 'Interactive' ...
    getStatus(reg = tmp) -
    #> Status for 20 jobs at 2020-10-20 14:35:46: +
    #> Status for 20 jobs at 2020-10-21 09:39:31: #> Submitted : 20 (100.0%) #> -- Queued : 0 ( 0.0%) #> -- Started : 20 (100.0%) @@ -414,17 +414,17 @@

    Examp info = ijoin(getJobStatus(reg = tmp)[, .(job.id, mem.used)], getJobPars(reg = tmp)) print(unwrap(info))

    #> job.id mem.used n p -#> 1: 1 143.8795 1e+04 10 -#> 2: 2 143.8806 1e+04 50 -#> 3: 3 143.8808 1e+05 10 -#> 4: 4 143.8815 1e+05 50
    +#> 1: 1 145.3133 1e+04 10 +#> 2: 2 145.3144 1e+04 50 +#> 3: 3 145.3146 1e+05 10 +#> 4: 4 145.3153 1e+05 50
    # Combine job info with results -> each job is aggregated using mean() unwrap(ijoin(info, reduceResultsDataTable(fun = function(res) list(res = mean(res)), reg = tmp)))
    #> job.id mem.used n p res -#> 1: 1 143.8795 1e+04 10 0.5005778 -#> 2: 2 143.8806 1e+04 50 0.4992527 -#> 3: 3 143.8808 1e+05 10 0.5000026 -#> 4: 4 143.8815 1e+05 50 0.4999301
    +#> 1: 1 145.3133 1e+04 10 0.5005778 +#> 2: 2 145.3144 1e+04 50 0.4992527 +#> 3: 3 145.3146 1e+05 10 0.5000026 +#> 4: 4 145.3153 1e+05 50 0.4999301
    ### Example 3: Multicore execution on the slave tmp = makeRegistry(file.dir = NA, make.default = FALSE)
    #> No readable configuration file found
    #> Created registry in '/tmp/batchtools-example/reg3' using cluster functions 'Interactive'
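As a hedged sketch of what multicore execution on the slave might look like (not necessarily the page's actual example): each job parallelizes internally, here with parallel::mclapply on a Unix-alike, while requesting matching CPUs through a template-specific ncpus resource.

f = function(n) {
  # within-job parallelization over n sub-tasks
  res = parallel::mclapply(seq_len(n), function(i) mean(runif(1e5)), mc.cores = 4L)
  mean(unlist(res))
}
ids = batchMap(f, n = c(10, 20), reg = tmp)
submitJobs(ids, resources = list(ncpus = 4), reg = tmp)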