author Loïc Guégan <loic.guegan@mailbox.org> 2025-09-19 13:19:02 +0200
committer Loïc Guégan <loic.guegan@mailbox.org> 2025-09-19 13:19:02 +0200
commit 4f1b2ea492d3e19c81ab98f050618d437b6e9ec5 (patch)
tree 118edc54e48150e7d9dfe78ef375a694fa4bc85f /README.md
parent 284cee3f032bed1243f0d1256d394e9458132075 (diff)
Clean repo and debug setup.sh
Diffstat (limited to 'README.md')
-rw-r--r-- README.md | 7
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index 52a99f5..7e02742 100644
--- a/README.md
+++ b/README.md
@@ -1,10 +1,13 @@
# loosely-policies-analytics
## Analysis folder
-- learning.R: contains two major functions:
+- offline.R: contains two major functions:
  - build_models: To generate K-fold cross-validation results (note that the hyper-parameters for the decision tree are fixed; there is no validation set)
  - generate_inputs: generate the inputs for the simulation experiments and the decision tree plots
-- days.R: Implement the in-situ learning approach
+- in-situ.R: Implements the in-situ learning approach (Figures 4a, 4b, and 4c)
+  - For Figures 4a and 4b, we train the model with an increasing amount of data from previous results, as if one policy were used per day (see paper section IV.A)
+  - For Figure 4c, delta is generated by comparing the use of each policy in round-robin (one policy per day to perform the training) against the previous paper's single-policy results (see paper section IV.A)
Todo: remove minbucket=1 (does not impact the results)
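The build_models step described in the README (K-fold cross-validation with fixed decision-tree hyper-parameters and no validation set) can be sketched as below. This is a minimal illustrative sketch in Python with scikit-learn, not the repository's actual R code; the dataset and parameter names are placeholders, with `min_samples_leaf=1` standing in for rpart's `minbucket=1`.

```python
# Illustrative sketch (not the repo's offline.R): K-fold cross-validation
# with fixed decision-tree hyper-parameters, so no validation set is needed.
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

# Placeholder data standing in for the policy-performance dataset.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Hyper-parameters are fixed up front (min_samples_leaf=1 roughly mirrors
# rpart's minbucket=1 mentioned in the todo above).
model_params = {"min_samples_leaf": 1, "random_state": 0}

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    tree = DecisionTreeClassifier(**model_params)
    tree.fit(X[train_idx], y[train_idx])        # train on K-1 folds
    scores.append(tree.score(X[test_idx], y[test_idx]))  # test on held-out fold

print(len(scores))  # one accuracy value per fold
```

Because the hyper-parameters are fixed rather than tuned, each fold needs only a train/test split, which matches the README's note that no validation set is used.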
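The in-situ scheme described for Figures 4a-4c (training data that grows day by day, and a delta between round-robin and single-policy runs) can be sketched roughly as follows. This is an assumed Python sketch, not the repository's in-situ.R; the policy names, `run_policy` stub, and its per-policy values are all hypothetical placeholders.

```python
# Illustrative sketch (not the repo's in-situ.R): in-situ learning where the
# training set grows day by day, plus a delta versus a single-policy baseline.
import random

random.seed(0)
policies = ["A", "B", "C"]   # hypothetical policy names
n_days = 9

def run_policy(policy, day):
    """Hypothetical stand-in for one day's measured result under a policy."""
    base = {"A": 1.0, "B": 1.2, "C": 0.8}[policy]
    return base + random.uniform(-0.1, 0.1)

# Figures 4a/4b style: one policy per day in round-robin; each day the model
# would be retrained on all results accumulated so far.
history = []
for day in range(n_days):
    policy = policies[day % len(policies)]   # round-robin, one policy per day
    history.append((policy, run_policy(policy, day)))
    training_set = history[: day + 1]        # grows with each passing day

# Figure 4c style: delta between the round-robin run and a single-policy run.
round_robin_avg = sum(result for _, result in history) / len(history)
single_policy_avg = sum(run_policy("A", d) for d in range(n_days)) / n_days
delta = round_robin_avg - single_policy_avg
print(len(history), delta)
```

The key point the sketch illustrates is that the round-robin loop produces one observation per day, so the training set at day d contains exactly d+1 results, and the Figure-4c delta compares that schedule against running a single policy for the same number of days.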