CRAN Package Check Results for Package mllrnrs

Last updated on 2025-12-27 04:50:54 CET.

Flavor Version Tinstall Tcheck Ttotal Status Flags
r-devel-linux-x86_64-debian-clang 0.0.7 5.96 182.43 188.39 ERROR
r-devel-linux-x86_64-debian-gcc 0.0.7 3.52 140.34 143.86 ERROR
r-devel-linux-x86_64-fedora-clang 0.0.7 10.00 222.74 232.74 ERROR
r-devel-linux-x86_64-fedora-gcc 0.0.7 10.00 309.21 319.21 ERROR
r-devel-windows-x86_64 0.0.7 7.00 277.00 284.00 OK
r-patched-linux-x86_64 0.0.7 5.60 193.11 198.71 OK
r-release-linux-x86_64 0.0.7 6.01 210.09 216.10 OK
r-release-macos-arm64 0.0.7 1.00 56.00 57.00 OK
r-release-macos-x86_64 0.0.7 4.00 259.00 263.00 OK
r-release-windows-x86_64 0.0.7 7.00 277.00 284.00 OK
r-oldrel-macos-arm64 0.0.7 1.00 64.00 65.00 OK
r-oldrel-macos-x86_64 0.0.7 4.00 273.00 277.00 OK
r-oldrel-windows-x86_64 0.0.7 8.00 378.00 386.00 OK

Check Details

Version: 0.0.7
Check: tests
Result: ERROR
  Running ‘testthat.R’ [65s/265s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > # https://github.com/Rdatatable/data.table/issues/5658
    > Sys.setenv("OMP_THREAD_LIMIT" = 2)
    > Sys.setenv("Ncpu" = 2)
    > 
    > library(testthat)
    > library(mllrnrs)
    > 
    > test_check("mllrnrs")
    CV fold: Fold1
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 7.55 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 12.845 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.554 seconds
    OMP: Warning #96: Cannot form a team with 3 threads, using 2 instead.
    OMP: Hint Consider unsetting KMP_DEVICE_THREAD_LIMIT (KMP_ALL_THREADS), KMP_TEAMS_THREAD_LIMIT, and OMP_THREAD_LIMIT (if any are set).
    CV fold: Fold2
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 8.346 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 12.369 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.702 seconds
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 8.193 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 17.069 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.607 seconds
    CV fold: Fold1
    Classification: using 'mean classification error' as optimization metric.
    Saving _problems/test-binary-287.R
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold1
    Saving _problems/test-multiclass-162.R
    CV fold: Fold1
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    CV fold: Fold2
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    CV fold: Fold3
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    CV fold: Fold1
    Saving _problems/test-multiclass-294.R
    CV fold: Fold1
    Registering parallel backend using 2 cores.
    Running initial scoring function 5 times in 2 thread(s)... 6.892 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 1.903 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.922 seconds
    CV fold: Fold2
    Registering parallel backend using 2 cores.
    Running initial scoring function 5 times in 2 thread(s)... 7.493 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 2.249 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.592 seconds
    CV fold: Fold3
    Registering parallel backend using 2 cores.
    Running initial scoring function 5 times in 2 thread(s)... 6.746 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 2.353 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.942 seconds
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold1
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold2
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold3
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 9.821 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 21.466 seconds
    3) Running FUN 2 times in 2 thread(s)... 1.08 seconds
    CV fold: Fold2
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 9.533 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 3.618 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.994 seconds
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 8.939 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 31.815 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.942 seconds
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    [ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
    ══ Skipped tests (3) ═══════════════════════════════════════════════════════════
    • On CRAN (3): 'test-binary.R:57:5', 'test-lints.R:10:5', 'test-multiclass.R:57:5'
    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Error ('test-binary.R:287:5'): test nested cv, grid, binary - ranger ────────
    Error in `xtfrm.data.frame(structure(list(`0` = 0.379858310721837, `1` = 0.620141689278164), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55c38be73fe0>, .data.table.locked = TRUE))`: cannot xtfrm data frames
    Backtrace:
        ▆
     1. ├─ranger_optimizer$execute() at test-binary.R:287:5
     2. │ └─mlexperiments:::.run_cv(self = self, private = private)
     3. │ └─mlexperiments:::.fold_looper(self, private)
     4. │ ├─base::do.call(private$cv_run_model, run_args)
     5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
     6. │ ├─base::do.call(.cv_run_nested_model, args)
     7. │ └─mlexperiments (local) `<fn>`(...)
     8. │ └─hparam_tuner$execute(k = self$k_tuning)
     9. │ └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
    10. │ └─mlexperiments:::.run_optimizer(...)
    11. │ └─optimizer$execute(x = private$x, y = private$y, method_helper = private$method_helper)
    12. │ ├─base::do.call(...)
    13. │ └─mlexperiments (local) `<fn>`(...)
    14. │ └─base::lapply(...)
    15. │ └─mlexperiments (local) FUN(X[[i]], ...)
    16. │ ├─base::do.call(FUN, fun_parameters)
    17. │ └─mlexperiments (local) `<fn>`(...)
    18. │ ├─base::do.call(private$fun_optim_cv, kwargs)
    19. │ └─mllrnrs (local) `<fn>`(...)
    20. │ ├─base::do.call(ranger_predict, pred_args)
    21. │ └─mllrnrs (local) `<fn>`(...)
    22. │ └─kdry::mlh_reshape(preds)
    23. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
    24. │ └─data.table:::`[.data.table`(...)
    25. └─base::which.max(.SD)
    26. ├─base::xtfrm(`<dt[,2]>`)
    27. └─base::xtfrm.data.frame(`<dt[,2]>`)
    ── Error ('test-multiclass.R:162:5'): test nested cv, grid, multiclass - lightgbm ──
    Error in `xtfrm.data.frame(structure(list(`0` = 0.20774260202068, `1` = 0.136781829323219, `2` = 0.655475568656101), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55c38be73fe0>, .data.table.locked = TRUE))`: cannot xtfrm data frames
    Backtrace:
        ▆
     1. ├─lightgbm_optimizer$execute() at test-multiclass.R:162:5
     2. │ └─mlexperiments:::.run_cv(self = self, private = private)
     3. │ └─mlexperiments:::.fold_looper(self, private)
     4. │ ├─base::do.call(private$cv_run_model, run_args)
     5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
     6. │ ├─base::do.call(.cv_run_nested_model, args)
     7. │ └─mlexperiments (local) `<fn>`(...)
     8. │ └─mlexperiments:::.cv_fit_model(...)
     9. │ ├─base::do.call(self$learner$predict, pred_args)
    10. │ └─mlexperiments (local) `<fn>`(...)
    11. │ ├─base::do.call(private$fun_predict, kwargs)
    12. │ └─mllrnrs (local) `<fn>`(...)
    13. │ └─kdry::mlh_reshape(preds)
    14. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
    15. │ └─data.table:::`[.data.table`(...)
    16. └─base::which.max(.SD)
    17. ├─base::xtfrm(`<dt[,3]>`)
    18. └─base::xtfrm.data.frame(`<dt[,3]>`)
    ── Error ('test-multiclass.R:294:5'): test nested cv, grid, multi:softprob - xgboost, with weights ──
    Error in `xtfrm.data.frame(structure(list(`0` = 0.250160574913025, `1` = 0.124035485088825, `2` = 0.62580394744873), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55c38be73fe0>, .data.table.locked = TRUE))`: cannot xtfrm data frames
    Backtrace:
        ▆
     1. ├─xgboost_optimizer$execute() at test-multiclass.R:294:5
     2. │ └─mlexperiments:::.run_cv(self = self, private = private)
     3. │ └─mlexperiments:::.fold_looper(self, private)
     4. │ ├─base::do.call(private$cv_run_model, run_args)
     5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
     6. │ ├─base::do.call(.cv_run_nested_model, args)
     7. │ └─mlexperiments (local) `<fn>`(...)
     8. │ └─mlexperiments:::.cv_fit_model(...)
     9. │ ├─base::do.call(self$learner$predict, pred_args)
    10. │ └─mlexperiments (local) `<fn>`(...)
    11. │ ├─base::do.call(private$fun_predict, kwargs)
    12. │ └─mllrnrs (local) `<fn>`(...)
    13. │ └─kdry::mlh_reshape(preds)
    14. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
    15. │ └─data.table:::`[.data.table`(...)
    16. └─base::which.max(.SD)
    17. ├─base::xtfrm(`<dt[,3]>`)
    18. └─base::xtfrm.data.frame(`<dt[,3]>`)
    [ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
    Error: ! Test failures.
    Execution halted
Flavor: r-devel-linux-x86_64-debian-clang
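All three failures share one root cause visible in the backtraces: `kdry::mlh_reshape()` evaluates `cn[which.max(.SD)]` per row of a data.table of class probabilities, and on r-devel `which.max()` on a data-frame-like object falls through to `xtfrm.data.frame()`, which errors with "cannot xtfrm data frames". The following is a minimal sketch of the failing pattern, not the package's actual code; the `preds` values are copied from the first backtrace, and the `unlist()` workaround is an illustrative assumption, not a confirmed fix:

```r
library(data.table)

# One row of predicted class probabilities, as seen in the first backtrace.
preds <- data.table(`0` = 0.379858310721837, `1` = 0.620141689278164)
cn <- colnames(preds)

# The failing pattern from the backtrace: which.max() receives .SD, a
# one-row data.table, and on r-devel this dispatches through xtfrm(),
# whose data.frame method refuses the input:
# preds[, cn[which.max(.SD)], by = seq_len(nrow(preds))]
# => Error: cannot xtfrm data frames

# A possible workaround (assumption, not the package's actual fix):
# flatten the row to a plain numeric vector before calling which.max().
res <- preds[, cn[which.max(unlist(.SD))], by = seq_len(nrow(preds))]
print(res$V1)  # "1" -- the label of the highest-probability column
```

The same pattern explains the two- and three-column variants (`<dt[,2]>`, `<dt[,3]>`) reported across the ranger, lightgbm, and xgboost tests.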

Version: 0.0.7
Check: tests
Result: ERROR
  Running ‘testthat.R’ [51s/164s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > # https://github.com/Rdatatable/data.table/issues/5658
    > Sys.setenv("OMP_THREAD_LIMIT" = 2)
    > Sys.setenv("Ncpu" = 2)
    > 
    > library(testthat)
    > library(mllrnrs)
    > 
    > test_check("mllrnrs")
    CV fold: Fold1
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 5.368 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 6.117 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.573 seconds
    CV fold: Fold2
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 4.949 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 4.889 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.512 seconds
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 4.947 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 8.056 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.56 seconds
    CV fold: Fold1
    Classification: using 'mean classification error' as optimization metric.
    Saving _problems/test-binary-287.R
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold1
    Saving _problems/test-multiclass-162.R
    CV fold: Fold1
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    CV fold: Fold2
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    CV fold: Fold3
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    CV fold: Fold1
    Saving _problems/test-multiclass-294.R
    CV fold: Fold1
    Registering parallel backend using 2 cores.
    Running initial scoring function 5 times in 2 thread(s)... 3.782 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 0.693 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.463 seconds
    CV fold: Fold2
    Registering parallel backend using 2 cores.
    Running initial scoring function 5 times in 2 thread(s)... 4.265 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 1.099 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.546 seconds
    CV fold: Fold3
    Registering parallel backend using 2 cores.
    Running initial scoring function 5 times in 2 thread(s)... 4.314 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 1.443 seconds
    3) Running FUN 2 times in 2 thread(s)... 1.494 seconds
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold1
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold2
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold3
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 7.904 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 10.589 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.596 seconds
    CV fold: Fold2
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 5.377 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 2.315 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.612 seconds
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 5.5 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 16.422 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.861 seconds
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    [ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
    ══ Skipped tests (3) ═══════════════════════════════════════════════════════════
    • On CRAN (3): 'test-binary.R:57:5', 'test-lints.R:10:5', 'test-multiclass.R:57:5'
    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Error ('test-binary.R:287:5'): test nested cv, grid, binary - ranger ────────
    Error in `xtfrm.data.frame(structure(list(`0` = 0.379858310721837, `1` = 0.620141689278164), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x556d81434070>, .data.table.locked = TRUE))`: cannot xtfrm data frames
    Backtrace:
        ▆
     1. ├─ranger_optimizer$execute() at test-binary.R:287:5
     2. │ └─mlexperiments:::.run_cv(self = self, private = private)
     3. │ └─mlexperiments:::.fold_looper(self, private)
     4. │ ├─base::do.call(private$cv_run_model, run_args)
     5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
     6. │ ├─base::do.call(.cv_run_nested_model, args)
     7. │ └─mlexperiments (local) `<fn>`(...)
     8. │ └─hparam_tuner$execute(k = self$k_tuning)
     9. │ └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
    10. │ └─mlexperiments:::.run_optimizer(...)
    11. │ └─optimizer$execute(x = private$x, y = private$y, method_helper = private$method_helper)
    12. │ ├─base::do.call(...)
    13. │ └─mlexperiments (local) `<fn>`(...)
    14. │ └─base::lapply(...)
    15. │ └─mlexperiments (local) FUN(X[[i]], ...)
    16. │ ├─base::do.call(FUN, fun_parameters)
    17. │ └─mlexperiments (local) `<fn>`(...)
    18. │ ├─base::do.call(private$fun_optim_cv, kwargs)
    19. │ └─mllrnrs (local) `<fn>`(...)
    20. │ ├─base::do.call(ranger_predict, pred_args)
    21. │ └─mllrnrs (local) `<fn>`(...)
    22. │ └─kdry::mlh_reshape(preds)
    23. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
    24. │ └─data.table:::`[.data.table`(...)
    25. └─base::which.max(.SD)
    26. ├─base::xtfrm(`<dt[,2]>`)
    27. └─base::xtfrm.data.frame(`<dt[,2]>`)
    ── Error ('test-multiclass.R:162:5'): test nested cv, grid, multiclass - lightgbm ──
    Error in `xtfrm.data.frame(structure(list(`0` = 0.20774260202068, `1` = 0.136781829323219, `2` = 0.655475568656101), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x556d81434070>, .data.table.locked = TRUE))`: cannot xtfrm data frames
    Backtrace:
        ▆
     1. ├─lightgbm_optimizer$execute() at test-multiclass.R:162:5
     2. │ └─mlexperiments:::.run_cv(self = self, private = private)
     3. │ └─mlexperiments:::.fold_looper(self, private)
     4. │ ├─base::do.call(private$cv_run_model, run_args)
     5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
     6. │ ├─base::do.call(.cv_run_nested_model, args)
     7. │ └─mlexperiments (local) `<fn>`(...)
     8. │ └─mlexperiments:::.cv_fit_model(...)
     9. │ ├─base::do.call(self$learner$predict, pred_args)
    10. │ └─mlexperiments (local) `<fn>`(...)
    11. │ ├─base::do.call(private$fun_predict, kwargs)
    12. │ └─mllrnrs (local) `<fn>`(...)
    13. │ └─kdry::mlh_reshape(preds)
    14. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
    15. │ └─data.table:::`[.data.table`(...)
    16. └─base::which.max(.SD)
    17. ├─base::xtfrm(`<dt[,3]>`)
    18. └─base::xtfrm.data.frame(`<dt[,3]>`)
    ── Error ('test-multiclass.R:294:5'): test nested cv, grid, multi:softprob - xgboost, with weights ──
    Error in `xtfrm.data.frame(structure(list(`0` = 0.250160574913025, `1` = 0.124035485088825, `2` = 0.62580394744873), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x556d81434070>, .data.table.locked = TRUE))`: cannot xtfrm data frames
    Backtrace:
        ▆
     1. ├─xgboost_optimizer$execute() at test-multiclass.R:294:5
     2. │ └─mlexperiments:::.run_cv(self = self, private = private)
     3. │ └─mlexperiments:::.fold_looper(self, private)
     4. │ ├─base::do.call(private$cv_run_model, run_args)
     5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
     6. │ ├─base::do.call(.cv_run_nested_model, args)
     7. │ └─mlexperiments (local) `<fn>`(...)
     8. │ └─mlexperiments:::.cv_fit_model(...)
     9. │ ├─base::do.call(self$learner$predict, pred_args)
    10. │ └─mlexperiments (local) `<fn>`(...)
    11. │ ├─base::do.call(private$fun_predict, kwargs)
    12. │ └─mllrnrs (local) `<fn>`(...)
    13. │ └─kdry::mlh_reshape(preds)
    14. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
    15. │ └─data.table:::`[.data.table`(...)
    16. └─base::which.max(.SD)
    17. ├─base::xtfrm(`<dt[,3]>`)
    18. └─base::xtfrm.data.frame(`<dt[,3]>`)
    [ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
    Error: ! Test failures.
    Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc

Version: 0.0.7
Check: tests
Result: ERROR
  Running ‘testthat.R’ [87s/239s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > # https://github.com/Rdatatable/data.table/issues/5658
    > Sys.setenv("OMP_THREAD_LIMIT" = 2)
    > Sys.setenv("Ncpu" = 2)
    > 
    > library(testthat)
    > library(mllrnrs)
    > 
    > test_check("mllrnrs")
    CV fold: Fold1
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 10.758 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 16.779 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.785 seconds
    OMP: Warning #96: Cannot form a team with 24 threads, using 2 instead.
    OMP: Hint Consider unsetting KMP_DEVICE_THREAD_LIMIT (KMP_ALL_THREADS), KMP_TEAMS_THREAD_LIMIT, and OMP_THREAD_LIMIT (if any are set).
    CV fold: Fold2
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 10.313 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 12.87 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.662 seconds
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 8.89 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 19.163 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.693 seconds
    CV fold: Fold1
    Classification: using 'mean classification error' as optimization metric.
    Saving _problems/test-binary-287.R
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold1
    Saving _problems/test-multiclass-162.R
    CV fold: Fold1
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    CV fold: Fold2
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    CV fold: Fold3
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    Classification: using 'mean classification error' as optimization metric.
    CV fold: Fold1
    Saving _problems/test-multiclass-294.R
    CV fold: Fold1
    Registering parallel backend using 2 cores.
    Running initial scoring function 5 times in 2 thread(s)... 6.836 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 1.372 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.726 seconds
    CV fold: Fold2
    Registering parallel backend using 2 cores.
    Running initial scoring function 5 times in 2 thread(s)... 6.552 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 2.395 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.625 seconds
    CV fold: Fold3
    Registering parallel backend using 2 cores.
    Running initial scoring function 5 times in 2 thread(s)... 6.515 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 2.189 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.693 seconds
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold1
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold2
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold3
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 7.849 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 20.789 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.838 seconds
    CV fold: Fold2
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 7.109 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 8.6 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.661 seconds
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 7.097 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 13.924 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.566 seconds
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    [ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
    ══ Skipped tests (3) ═══════════════════════════════════════════════════════════
    • On CRAN (3): 'test-binary.R:57:5', 'test-lints.R:10:5', 'test-multiclass.R:57:5'
    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Error ('test-binary.R:287:5'): test nested cv, grid, binary - ranger ────────
    Error in `xtfrm.data.frame(structure(list(`0` = 0.403024198843656, `1` = 0.596975801156344), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55c52180ed10>, .data.table.locked = TRUE))`: cannot xtfrm data frames
    Backtrace:
        ▆
     1. ├─ranger_optimizer$execute() at test-binary.R:287:5
     2. │ └─mlexperiments:::.run_cv(self = self, private = private)
     3. │ └─mlexperiments:::.fold_looper(self, private)
     4. │ ├─base::do.call(private$cv_run_model, run_args)
     5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
     6. │ ├─base::do.call(.cv_run_nested_model, args)
     7. │ └─mlexperiments (local) `<fn>`(...)
     8. │ └─hparam_tuner$execute(k = self$k_tuning)
     9. │ └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
    10. │ └─mlexperiments:::.run_optimizer(...)
    11. │ └─optimizer$execute(x = private$x, y = private$y, method_helper = private$method_helper)
    12. │ ├─base::do.call(...)
    13. │ └─mlexperiments (local) `<fn>`(...)
    14. │ └─base::lapply(...)
    15. │ └─mlexperiments (local) FUN(X[[i]], ...)
    16. │ ├─base::do.call(FUN, fun_parameters)
    17. │ └─mlexperiments (local) `<fn>`(...)
    18. │ ├─base::do.call(private$fun_optim_cv, kwargs)
    19. │ └─mllrnrs (local) `<fn>`(...)
    20. │ ├─base::do.call(ranger_predict, pred_args)
    21. │ └─mllrnrs (local) `<fn>`(...)
    22. │ └─kdry::mlh_reshape(preds)
    23. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
    24. │ └─data.table:::`[.data.table`(...)
    25. └─base::which.max(.SD)
    26. ├─base::xtfrm(`<dt[,2]>`)
    27. └─base::xtfrm.data.frame(`<dt[,2]>`)
    ── Error ('test-multiclass.R:162:5'): test nested cv, grid, multiclass - lightgbm ──
    Error in `xtfrm.data.frame(structure(list(`0` = 0.20774260202068, `1` = 0.136781829323219, `2` = 0.655475568656101), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55c52180ed10>, .data.table.locked = TRUE))`: cannot xtfrm data frames
    Backtrace:
        ▆
     1. ├─lightgbm_optimizer$execute() at test-multiclass.R:162:5
     2. │ └─mlexperiments:::.run_cv(self = self, private = private)
     3. │ └─mlexperiments:::.fold_looper(self, private)
     4. │ ├─base::do.call(private$cv_run_model, run_args)
     5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
     6. │ ├─base::do.call(.cv_run_nested_model, args)
     7. │ └─mlexperiments (local) `<fn>`(...)
     8. │ └─mlexperiments:::.cv_fit_model(...)
     9. │ ├─base::do.call(self$learner$predict, pred_args)
    10. │ └─mlexperiments (local) `<fn>`(...)
    11. │ ├─base::do.call(private$fun_predict, kwargs)
    12. │ └─mllrnrs (local) `<fn>`(...)
    13. │ └─kdry::mlh_reshape(preds)
    14. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
    15. │ └─data.table:::`[.data.table`(...)
    16. └─base::which.max(.SD)
    17. ├─base::xtfrm(`<dt[,3]>`)
    18. └─base::xtfrm.data.frame(`<dt[,3]>`)
    ── Error ('test-multiclass.R:294:5'): test nested cv, grid, multi:softprob - xgboost, with weights ──
    Error in `xtfrm.data.frame(structure(list(`0` = 0.274507701396942, `1` = 0.12648206949234, `2` = 0.599010229110718), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55c52180ed10>, .data.table.locked = TRUE))`: cannot xtfrm data frames
    Backtrace:
        ▆
     1. ├─xgboost_optimizer$execute() at test-multiclass.R:294:5
     2. │ └─mlexperiments:::.run_cv(self = self, private = private)
     3. │ └─mlexperiments:::.fold_looper(self, private)
     4. │ ├─base::do.call(private$cv_run_model, run_args)
     5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
     6. │ ├─base::do.call(.cv_run_nested_model, args)
     7. │ └─mlexperiments (local) `<fn>`(...)
     8. │ └─mlexperiments:::.cv_fit_model(...)
     9. │ ├─base::do.call(self$learner$predict, pred_args)
    10. │ └─mlexperiments (local) `<fn>`(...)
    11. │ ├─base::do.call(private$fun_predict, kwargs)
    12. │ └─mllrnrs (local) `<fn>`(...)
    13. │ └─kdry::mlh_reshape(preds)
    14. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
    15. │ └─data.table:::`[.data.table`(...)
    16. └─base::which.max(.SD)
    17. ├─base::xtfrm(`<dt[,3]>`)
    18. └─base::xtfrm.data.frame(`<dt[,3]>`)
    [ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
    Error: ! Test failures.
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang

Version: 0.0.7
Check: tests
Result: ERROR
Running ‘testthat.R’ [81s/272s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/tests.html
> # * https://testthat.r-lib.org/reference/test_package.html#special-files
> # https://github.com/Rdatatable/data.table/issues/5658
> Sys.setenv("OMP_THREAD_LIMIT" = 2)
> Sys.setenv("Ncpu" = 2)
>
> library(testthat)
> library(mllrnrs)
>
> test_check("mllrnrs")
CV fold: Fold1
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)...  13.936 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search...  15.599 seconds
3) Running FUN 2 times in 2 thread(s)...  1.026 seconds
CV fold: Fold2
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)...  9.67 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search...  13.19 seconds
3) Running FUN 2 times in 2 thread(s)...  0.761 seconds
CV fold: Fold3
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)...  8.648 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search...  18.391 seconds
3) Running FUN 2 times in 2 thread(s)...  0.758 seconds
CV fold: Fold1
Classification: using 'mean classification error' as optimization metric.
Saving _problems/test-binary-287.R
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold1
Saving _problems/test-multiclass-162.R
CV fold: Fold1
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold2
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold3
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold1
Saving _problems/test-multiclass-294.R
CV fold: Fold1
Registering parallel backend using 2 cores.
Running initial scoring function 5 times in 2 thread(s)...  7.741 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search...  1.538 seconds
3) Running FUN 2 times in 2 thread(s)...  0.699 seconds
CV fold: Fold2
Registering parallel backend using 2 cores.
Running initial scoring function 5 times in 2 thread(s)...  7.22 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search...  2.107 seconds
3) Running FUN 2 times in 2 thread(s)...  0.672 seconds
CV fold: Fold3
Registering parallel backend using 2 cores.
Running initial scoring function 5 times in 2 thread(s)...  7.472 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search...  2.057 seconds
3) Running FUN 2 times in 2 thread(s)...  0.569 seconds
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold1
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold2
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold3
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)...  8.488 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search...  14.291 seconds
3) Running FUN 2 times in 2 thread(s)...  0.902 seconds
CV fold: Fold2
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)...  8.278 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search...  4.069 seconds
3) Running FUN 2 times in 2 thread(s)...  1.247 seconds
CV fold: Fold3
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)...  10.291 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search...  37.276 seconds
3) Running FUN 2 times in 2 thread(s)...
1.041 seconds
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
══ Skipped tests (3) ═══════════════════════════════════════════════════════════
• On CRAN (3): 'test-binary.R:57:5', 'test-lints.R:10:5', 'test-multiclass.R:57:5'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-binary.R:287:5'): test nested cv, grid, binary - ranger ────────
Error in `xtfrm.data.frame(structure(list(`0` = 0.379858310721837, `1` = 0.620141689278164), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x20a9e550>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
    ▆
 1. ├─ranger_optimizer$execute() at test-binary.R:287:5
 2. │ └─mlexperiments:::.run_cv(self = self, private = private)
 3. │ └─mlexperiments:::.fold_looper(self, private)
 4. │ ├─base::do.call(private$cv_run_model, run_args)
 5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
 6. │ ├─base::do.call(.cv_run_nested_model, args)
 7. │ └─mlexperiments (local) `<fn>`(...)
 8. │ └─hparam_tuner$execute(k = self$k_tuning)
 9. │ └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
10. │ └─mlexperiments:::.run_optimizer(...)
11. │ └─optimizer$execute(x = private$x, y = private$y, method_helper = private$method_helper)
12. │ ├─base::do.call(...)
13. │ └─mlexperiments (local) `<fn>`(...)
14. │ └─base::lapply(...)
15. │ └─mlexperiments (local) FUN(X[[i]], ...)
16. │ ├─base::do.call(FUN, fun_parameters)
17. │ └─mlexperiments (local) `<fn>`(...)
18. │ ├─base::do.call(private$fun_optim_cv, kwargs)
19. │ └─mllrnrs (local) `<fn>`(...)
20. │ ├─base::do.call(ranger_predict, pred_args)
21. │ └─mllrnrs (local) `<fn>`(...)
22. │ └─kdry::mlh_reshape(preds)
23. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
24. │ └─data.table:::`[.data.table`(...)
25. └─base::which.max(.SD)
26. ├─base::xtfrm(`<dt[,2]>`)
27. └─base::xtfrm.data.frame(`<dt[,2]>`)
── Error ('test-multiclass.R:162:5'): test nested cv, grid, multiclass - lightgbm ──
Error in `xtfrm.data.frame(structure(list(`0` = 0.20774260202068, `1` = 0.136781829323219, `2` = 0.655475568656101), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x20a9e550>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
    ▆
 1. ├─lightgbm_optimizer$execute() at test-multiclass.R:162:5
 2. │ └─mlexperiments:::.run_cv(self = self, private = private)
 3. │ └─mlexperiments:::.fold_looper(self, private)
 4. │ ├─base::do.call(private$cv_run_model, run_args)
 5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
 6. │ ├─base::do.call(.cv_run_nested_model, args)
 7. │ └─mlexperiments (local) `<fn>`(...)
 8. │ └─mlexperiments:::.cv_fit_model(...)
 9. │ ├─base::do.call(self$learner$predict, pred_args)
10. │ └─mlexperiments (local) `<fn>`(...)
11. │ ├─base::do.call(private$fun_predict, kwargs)
12. │ └─mllrnrs (local) `<fn>`(...)
13. │ └─kdry::mlh_reshape(preds)
14. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
15. │ └─data.table:::`[.data.table`(...)
16. └─base::which.max(.SD)
17. ├─base::xtfrm(`<dt[,3]>`)
18. └─base::xtfrm.data.frame(`<dt[,3]>`)
── Error ('test-multiclass.R:294:5'): test nested cv, grid, multi:softprob - xgboost, with weights ──
Error in `xtfrm.data.frame(structure(list(`0` = 0.250160574913025, `1` = 0.124035485088825, `2` = 0.62580394744873), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x20a9e550>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
    ▆
 1. ├─xgboost_optimizer$execute() at test-multiclass.R:294:5
 2. │ └─mlexperiments:::.run_cv(self = self, private = private)
 3. │ └─mlexperiments:::.fold_looper(self, private)
 4. │ ├─base::do.call(private$cv_run_model, run_args)
 5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
 6. │ ├─base::do.call(.cv_run_nested_model, args)
 7. │ └─mlexperiments (local) `<fn>`(...)
 8. │ └─mlexperiments:::.cv_fit_model(...)
 9. │ ├─base::do.call(self$learner$predict, pred_args)
10. │ └─mlexperiments (local) `<fn>`(...)
11. │ ├─base::do.call(private$fun_predict, kwargs)
12. │ └─mllrnrs (local) `<fn>`(...)
13. │ └─kdry::mlh_reshape(preds)
14. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
15. │ └─data.table:::`[.data.table`(...)
16. └─base::which.max(.SD)
17. ├─base::xtfrm(`<dt[,3]>`)
18. └─base::xtfrm.data.frame(`<dt[,3]>`)

[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
Error: ! Test failures.
Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc
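All three failures end in the same place: `kdry::mlh_reshape()` runs `which.max(.SD)` inside a `data.table` `by` group, so `which.max()` receives a one-row data frame of class probabilities; on the failing r-devel flavors this dispatches through `xtfrm()`, and `xtfrm.data.frame()` now errors with "cannot xtfrm data frames". A minimal sketch of the pattern, assuming the obvious mitigation of flattening the row to a numeric vector first (this is an illustration derived from the backtraces, not the actual kdry/mllrnrs code or a confirmed patch):

```r
# One-row data frame of class probabilities, as in the backtraces above.
probs <- data.frame(`0` = 0.21, `1` = 0.14, `2` = 0.65, check.names = FALSE)

# which.max(probs) relies on coercing the data frame via xtfrm(), which
# errors on the failing r-devel builds. Flattening to a plain named
# numeric vector avoids the data-frame method entirely:
best <- which.max(unlist(probs))

# Map the column index back to the class label.
names(probs)[best]  # "2"
```

The same `unlist()` (or `as.numeric()`) step applied before `which.max()` in the `cn[which.max(.SD)]` expression from frame 14 would presumably make the reshape independent of how `xtfrm()` treats data frames.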