Counterfactual Learning and Evaluation for Recommender Systems (RecSys'21 Tutorial)

Materials for "Counterfactual Learning and Evaluation for Recommender Systems: Foundations, Implementations, and Recent Advances", a tutorial delivered at the 15th ACM Conference on Recommender Systems (RecSys'21).

Contents

  • examples: brief examples describing how to use Open Bandit Pipeline with synthetic data, classification data, and real-world bandit data (a minimal sketch of this workflow is shown below)
  • simulations: simulation code comparing a wide variety of existing OPE estimators on synthetic data
  • real-world: a brief demo of OPE/OPL on a real bandit dataset (requires the Open Bandit Dataset)

The Google Colab versions of the implementations (examples) are available here.
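
For orientation, the sketch below illustrates the typical Open Bandit Pipeline workflow covered in the examples: generating synthetic logged bandit feedback, learning an evaluation policy, and estimating its value with an OPE estimator. It is a minimal illustration assuming the public API of obp 0.5.1 (SyntheticBanditDataset, IPWLearner, OffPolicyEvaluation); the hyperparameters and the choice of the IPW estimator here are illustrative, not the exact settings used in the tutorial notebooks.

# A minimal sketch of the OPE workflow used in the examples
# (illustrative settings; see the notebooks for the exact configurations).
from sklearn.linear_model import LogisticRegression

from obp.dataset import SyntheticBanditDataset, logistic_reward_function
from obp.ope import OffPolicyEvaluation, InverseProbabilityWeighting
from obp.policy import IPWLearner

# (1) generate synthetic logged bandit feedback
dataset = SyntheticBanditDataset(
    n_actions=10,
    dim_context=5,
    reward_function=logistic_reward_function,
    random_state=12345,
)
bandit_feedback = dataset.obtain_batch_bandit_feedback(n_rounds=10000)

# (2) learn a new (evaluation) policy from the logged data
eval_policy = IPWLearner(
    n_actions=dataset.n_actions,
    base_classifier=LogisticRegression(),
)
eval_policy.fit(
    context=bandit_feedback["context"],
    action=bandit_feedback["action"],
    reward=bandit_feedback["reward"],
    pscore=bandit_feedback["pscore"],
)
action_dist = eval_policy.predict(context=bandit_feedback["context"])

# (3) estimate the policy value of the new policy via OPE (here, IPW)
ope = OffPolicyEvaluation(
    bandit_feedback=bandit_feedback,
    ope_estimators=[InverseProbabilityWeighting()],
)
print(ope.estimate_policy_values(action_dist=action_dist))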

Requirements and Setup

The Python environment is built using poetry. You can build the same environment as in our examples and simulations by cloning the repository and running poetry install directly under the cloned folder (if you have not installed poetry yet, please run pip install poetry first).

# clone the tutorial repository
git clone git@github.com:usaito/recsys2021-tutorial.git
cd recsys2021-tutorial

# build the environment with poetry
poetry install

# activate jupyter-lab environment
poetry run jupyter lab

The versions of Python and the packages used are as follows.

[tool.poetry.dependencies]
python = "^3.9,<3.10"
scikit-learn = "0.24.2"
numpy = "^1.21.2"
pandas = "^1.3.3"
obp = "0.5.1"
matplotlib = "^3.4.3"
jupyterlab = "^3.1.13"

Contact

If you have any questions, please feel free to contact: [email protected]