Add IPython extension (#8)
* First cut of an IPython extension

* Add ipython to requirements

* Specify IPython dep like in line_profiler

* Try to fix travis due to ipython

* Add call to y() in mlrun cell magic demo

* Make IPython an optional dependency

* Add IPython magic instructions to README

* Fix syntax error in setup.py

* Catch NameError instead of Exception when evaling user provided function names

* Make string quotes consistent

* Add --dump-profile and --return args to mlrun magic

* Add --gpu specifier to mlrun magic

* Make target_gpu kwarg compatible with python 2.7

* Improve demo.ipynb and add --quiet flag

* Rerun notebook

* Tidy up IPython extension source

* Add links to relevant sections from overview in README, add details of IPython support

* Add link to demo.ipynb from README.md
willprice committed Mar 24, 2020
1 parent 68b9f8b commit 3e84c38
Showing 9 changed files with 588 additions and 35 deletions.
119 changes: 116 additions & 3 deletions .gitignore
@@ -1,3 +1,116 @@
__pycache__
*.pyc
*.egg-info
#### joe made this: http://goel.io/joe

#####=== IPythonNotebook ===#####
# Temporary data
.ipynb_checkpoints/

#####=== Python ===#####

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover

# Translations
*.mo
*.pot

# Django stuff:
*.log

# Sphinx documentation
docs/_build/

# PyBuilder
target/

#####=== JetBrains ===#####
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio

*.iml

## Directory-based project format:
.idea/
# if you remove the above rule, at least ignore the following:

# User-specific stuff:
# .idea/workspace.xml
# .idea/tasks.xml
# .idea/dictionaries

# Sensitive or high-churn files:
# .idea/dataSources.ids
# .idea/dataSources.xml
# .idea/sqlDataSources.xml
# .idea/dynamic.xml
# .idea/uiDesigner.xml

# Gradle:
# .idea/gradle.xml
# .idea/libraries

# Mongo Explorer plugin:
# .idea/mongoSettings.xml

## File-based project format:
*.ipr
*.iws

## Plugin-specific files:

# IntelliJ
/out/

# mpeltonen/sbt-idea plugin
.idea_modules/

# JIRA plugin
atlassian-ide-plugin.xml

# Crashlytics plugin (for Android Studio and IntelliJ)
com_crashlytics_export_strings.xml
crashlytics.properties
crashlytics-build.properties

.ropeproject
9 changes: 9 additions & 0 deletions .travis.yml
@@ -3,10 +3,19 @@ language: python
matrix:
include:
- python: '2.7'
env:
IPYTHON_VERSION='5.8'
- python: '3.5'
env:
IPYTHON_VERSION='7.9'
- python: '3.6'
env:
IPYTHON_VERSION='7'
- python: '3.7'
env:
IPYTHON_VERSION='7'
install:
- pip install IPython==$IPYTHON_VERSION
- python setup.py install
- pip install -r requirements.txt
script:
54 changes: 50 additions & 4 deletions README.md
@@ -6,10 +6,13 @@ pytorch_memlab

A simple and accurate **CUDA** memory management laboratory for pytorch,
it consists of different parts about the memory:
- A `line_profiler` style CUDA memory profiler with simple API.
- A reporter to inspect tensors occupying the CUDA memory.
- An interesting feature to temporarily move all the CUDA tensors into
CPU memory for courtesy, and of course the backward transferring.

- [A `line_profiler` style CUDA memory profiler with simple API.](#memory-profiler)
- [A reporter to inspect tensors occupying the CUDA memory.](#memory-reporter)
- [An interesting feature to temporarily move all the CUDA tensors into
CPU memory for courtesy, and of course the backward transferring.](#courtesy)
- [IPython support through `%mlrun`/`%%mlrun` line/cell magic
commands.](#ipython-support)

Installation
-----
@@ -130,6 +133,49 @@ func()

More samples can be found in `test/test_line_profiler.py`

### IPython support

Make sure you have `IPython` installed, or have installed `pytorch-memlab` with
`pip install pytorch-memlab[ipython]`.

First, load the extension:

```python
%load_ext pytorch_memlab
```

This makes the `%mlrun` and `%%mlrun` line/cell magics available. For
example, run the following in a new cell to profile the entire cell:

```python
%%mlrun -f func
import torch
from pytorch_memlab import profile, set_target_gpu
def func():
net1 = torch.nn.Linear(1024, 1024).cuda(0)
set_target_gpu(1)
net2 = torch.nn.Linear(1024, 1024).cuda(1)
set_target_gpu(0)
net3 = torch.nn.Linear(1024, 1024).cuda(0)
```

Alternatively, you can invoke the profiler for a single statement via the
`%mlrun` line magic:

```python
import torch
from pytorch_memlab import profile, set_target_gpu
def func(input_size):
net1 = torch.nn.Linear(input_size, 1024).cuda(0)
%mlrun -f func func(2048)
```

See `%mlrun?` for help on what arguments are supported. You can set the GPU
device to profile, dump profiling results to a file, and return the
`LineProfiler` object for post-profile inspection.
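Putting those options together, the sketch below shows a hypothetical IPython session. The flag spellings `--gpu`, `--quiet`, `--dump-profile`, and `--return` are assumptions taken from the commit messages above, and `prof.txt` is an arbitrary output path; run `%mlrun?` for the authoritative argument names.

```python
# Hypothetical session -- flag names assumed from the commit messages
# above (--gpu, --quiet, --dump-profile, --return); confirm via `%mlrun?`.
import torch
from pytorch_memlab import set_target_gpu

def func(input_size):
    net = torch.nn.Linear(input_size, 1024).cuda(0)

# Profile on GPU 1, suppress printed output, dump results to a file,
# and capture the returned LineProfiler object for later inspection:
profiler = %mlrun --gpu 1 --quiet --dump-profile prof.txt --return -f func func(2048)
```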

Find out more by checking out the [demo Jupyter notebook](./demo.ipynb).


### Memory Reporter

