This post describes how to borrow virtualenv's tricks to run python inside a Docker container, extending the concept of a python virtual environment without changing any of your current habits.
If you develop in python you've probably heard about virtualenv and pip. If not... go read about them now. They're the standard way of managing dependencies.
Virtualenv & pip workflow
Typical workflow with virtualenv and pip looks as follows:
$ virtualenv .
$ source bin/activate
In the above steps we create a virtual environment and activate it. The shell prompt changes to show the active environment:

(docker-virtualenv) # att at lapunov in ~/Projects/docker-virtualenv on git:master x [8:05:05]
With virtualenv we can start using pip without worrying that it will pollute our global space.
$ pip install -r requirements.txt
$ pip install my_new_dependency
When we no longer want to use the virtualenv, we can get rid of it by running:

$ deactivate
All dependencies are in docker now...
Pip and virtualenv work well and you get used to them, but then Docker comes along and a lot of things change. Now you install all dependencies inside a container instead of a virtualenv.
Installing dependencies in Docker has its advantages over using just virtualenv: thanks to containers you don't have to worry about system dependencies, and your app is now better isolated.
However, running python becomes a bit cumbersome, because the dependencies aren't installed on your machine, and you have to get into the Docker container to run python there.
For example, you work with Django and want to run ./manage.py shell_plus in the current directory, as you always did? Or ./manage.py collectstatic? Or just play around with the packages you've just installed, using IPython?
You first have to enter the Docker container. And depending on your Dockerfile, a few things can be wrong: the default directory, the user, environment variables...
It isn't the nicest developer experience ever. The best thing would be to not change our habits at all.
What if we had something like virtualenv that would change our python so it runs inside Docker?
All we have to do is change how python is executed. Instead of running the python binary, we can run a python bash script, which will run python inside a Docker container containing all our dependencies.
Do you remember what virtualenv does? It creates a bunch of directories:
bin lib man share
And it stores a new python, pip and so on in the bin directory, as well as the magical activate file, which shouldn't be run but sourced.
When we do:
$ source bin/activate
We change python to the local one.
activate is quite simple: it relies on the fact that the shell looks up the python executable using the PATH environment variable and uses the first python it finds. So activate prepends the virtualenv's bin directory to PATH:
$ echo $PATH
/home/att/Projects/docker-virtualenv/bin:/usr/local/sbin:/usr/local/bin
(docker-virtualenv) $ echo $VIRTUAL_ENV
/home/att/Projects/docker-virtualenv
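The core of this mechanism can be sketched in a couple of lines. This is a simplified sketch, not the real script: the actual activate also saves the old PATH, rewrites the shell prompt, and defines a deactivate function.

```shell
# Simplified sketch of what activate does when sourced:
# record the environment's root and put its bin/ first on PATH,
# so `python` resolves to the local executable.
export VIRTUAL_ENV="/home/att/Projects/docker-virtualenv"
export PATH="$VIRTUAL_ENV/bin:$PATH"
```

Because the shell scans PATH left to right, whatever sits in that bin directory wins over the system python.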
Fake your python
We can do a similar thing to what virtualenv does. Let's create a bin directory:

$ mkdir bin
$ cp ../my_other_project_with_virtualenv/bin/activate bin/activate
In the copied activate script, set the VIRTUAL_ENV environment variable to your project's directory.
Now we only have to create a fake python executable.
$ echo "echo 'hello'" > bin/python  # create a dummy python
$ chmod +x bin/python               # make script executable
So we should now have a structure like this:

$ tree bin
bin
├── activate
└── python
To use our new python ;), we have to source the activate script:

$ source bin/activate
$ python
hello
Hurray! Our dummy python is in place, and it's the default python now. We only have to make it a bit more useful and have it run python inside a Docker container.
This is an example content of bin/python:

$ cat bin/python
#!/bin/bash
docker run -it docker_image_name python "$@"
If you rewrite bin/python to the version above and re-source activate, you can run python in Docker just like always.
But it will be a little slower to start...
$ time python -c "print('hello')"
hello
python -c "print('hello')"  0,02s user 0,01s system 10% cpu 0,318 total
However - it works!
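One thing the basic wrapper doesn't do is let the containerized python see your local files. A variant that also mounts the current directory could look like this; note that the image name, the /app mount point, and the --rm flag are my assumptions here, not part of the original setup:

```shell
# Write a variant of bin/python that mounts the current directory
# into the container, so the containerized python can read local files.
# `docker_image_name` and the /app mount point are assumptions.
mkdir -p bin
cat > bin/python <<'EOF'
#!/bin/bash
docker run -it --rm \
  -v "$(pwd)":/app -w /app \
  docker_image_name python "$@"
EOF
chmod +x bin/python
```

With this, running python some_script.py works on the script sitting in your working directory, just as it would locally.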
If you work with fig - a tool for isolated Docker development environments - you should probably use fig run instead of docker run, because it will also set up the rest of the containers for you, with proper links, volumes and environment variables. Running with root privileges without a good reason is a bad practice, so it should be avoided. I use phusion/baseimage as a base for some of my Docker images, so I have the /sbin/setuser command, which I use to run commands as a different user.
After the changes mentioned above (assuming you use fig), bin/python might look as follows:

#!/bin/bash
cd /home/att/Projects/docker-virtualenv # fig has to find `fig.yml`
fig run docker_image /sbin/setuser virtualenv python "$@"
Now you don't have to remember that your python app is in Docker. Just use python as you always do :).
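The same trick can cover pip as well, so installing a new dependency also happens inside the container. A sketch, with the image name and mount path again being assumptions:

```shell
# Hypothetical bin/pip wrapper, built the same way as bin/python.
mkdir -p bin
cat > bin/pip <<'EOF'
#!/bin/bash
docker run -it --rm -v "$(pwd)":/app -w /app docker_image_name pip "$@"
EOF
chmod +x bin/pip
```

Keep in mind that packages installed this way only persist if the image or a mounted volume keeps them; in an ephemeral container they disappear when it exits.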
And to restore normal python, just run deactivate.