HowTos¶
Annot¶
HowTo install annot?¶
This howto walks you step by step through the process of installing the development and production versions of annot.
Install docker, docker-compose and docker-machine as described in HowTo install the docker platform?
On the host machine, install Git. Follow the instructions for your operating system on the Git website.
Get the Annot source code from the main fork. Run from the command line:
git clone https://gitlab.com/biotransistor/annot.git
(Alternatively, you can clone annot from your own fork. Forking the project is not described here.)
The cloned source code's
annot/pgsql.env
file contains a few PostgreSQL database configurations. Edit the DB_PASS entry:

[...]
DB_PASS=set_some_strong_random_postgresql_root_user_password
[...]
Create a BioPortal (bioontology.org) account. Go to your BioPortal account settings to find your API (application programming interface) key.
The
crowbar.py
file contains Django framework and annot related environment variables. Write a plain text crowbar.py file with the following content:

SECRET_KEY = "about_64_characters_long[->+<]"
PASSWD_DATABASE = "some_random_postgresql_annot_user_password"
APIKEY_BIOONTOLOGY = "your_BioPortal_bioontology.org_API_key"
URL = "http://192.168.99.100/"
CONTACT = "you@emailaddress"

Adapt the SECRET_KEY, PASSWD_DATABASE, APIKEY_BIOONTOLOGY and CONTACT values inside the double quotes. For a local installation, leave URL as it is.
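The SECRET_KEY and PASSWD_DATABASE values should be long and random. A minimal sketch for generating such values with the python3 standard library (any other random string generator works just as well; the chosen alphabet is an assumption, not an annot requirement):

```python
# Sketch: generate random values for crowbar.py with the python3 standard library.
import secrets
import string

# an about 64 character secret key drawn from letters, digits, and some punctuation.
alphabet = string.ascii_letters + string.digits + "!@#$%^&*(-_=+)"
secret_key = "".join(secrets.choice(alphabet) for _ in range(64))

# a random password for the annot postgresql database user.
passwd_database = secrets.token_urlsafe(32)

print('SECRET_KEY = "{}"'.format(secret_key))
print('PASSWD_DATABASE = "{}"'.format(passwd_database))
```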
Place this file under
annot/web/prjannot/crowbar.py

Development version only: the
annot/dcdev.yml
file contains docker-compose related information. Edit the webdev and nginxdev volume paths according to your host machine environment:

webdev:
  [...]
  volumes:
    - /path/to/your/git/cloned/annot/web:/usr/src/app
  [...]
nginxdev:
  [...]
  volumes:
    - /path/to/your/git/cloned/annot/nginxdev/annotnginx.conf:/etc/nginx/nginx.conf
  [...]
Build a docker machine to host the docker containers for the development or production version of annot, build the containers, then fire up annot. You can name the machine however you like; in this example we name the machine an0.
- docker-machine create --driver virtualbox --virtualbox-disk-size "20000" an0
  creates the machine using the VirtualBox driver. The disk size is given in MB; adjust it to your needs.
- docker-machine ls
  lists all machines.
- docker-machine start an0
  fires up machine an0, if it is not yet running.
- docker-machine env an0
  gets an0's environment variables.
- eval "$(docker-machine env an0)"
  sets an0's environment variables.
- docker-machine ls
  the an0 machine should now have an asterisk (*) in the ACTIVE column.
- cd into the cloned annot folder, then execute the next steps.
for the development version:

- docker-compose -f dcdev.yml pull
  pulls the basic containers.
- docker-compose -f dcdev.yml build
  builds all containers.
- docker-compose -f dcdev.yml up
  fires up the docker containers and reports what goes on with the web framework.
- press ctrl + c
  to shut down the docker containers and get the prompt back.
- docker-compose -f dcdev.yml up -d
  fires up the docker containers and gives the prompt back.
for the production version:

- docker-compose -f dcusr.yml pull
  pulls the basic containers.
- docker-compose -f dcusr.yml build
  builds all containers.
- docker-compose -f dcusr.yml up
  fires up the docker containers and reports what goes on with the web framework.
- press ctrl + c
  to shut down the docker containers and get the prompt back.
- docker-compose -f dcusr.yml up -d
  fires up the docker containers and gives the prompt back.
Set up the PostgreSQL database and database user.
- docker exec -ti annot_db_1 /bin/bash
  enters the db docker container.
- su postgres -s /bin/bash
  switches from the unix root user to the unix postgres user.
- createdb annot
  creates a postgresql database named annot.
- createuser -P annot
  creates a database user named annot. When prompted, enter the same database password as specified in annot/web/prjannot/crowbar.py.
- psql -U postgres -d annot -c "GRANT ALL PRIVILEGES ON DATABASE annot TO annot;"
  does what it says.
- exit
  exits the unix postgres user.
- exit
  exits the unix root user, thereby leaving the annot_db_1 docker container.
Generate the database tables and a superuser, and pull all static files together:

for the development version:

- docker exec -ti annot_webdev_1 /bin/bash
  enters the webdev docker container.

for the production version:

- docker exec -ti annot_web_1 /bin/bash
  enters the web docker container.

then continue:

- python demigrations.py
  cleans out the sql migration command folder of every app.
- python manage.py makemigrations
  generates the sql database migration commands.
- python manage.py migrate
  applies the generated sql migration commands.
- python manage.py createsuperuser
  creates a superuser for the annot web application.
- python manage.py collectstatic
  collects all static files needed by annot and puts them into the right place to be served.
- exit
  leaves the container.
Fire up your favorite web browser and surf to the place where annot is running.

- docker-machine ls
  will give you the correct IP address, most probably 192.168.99.100.
- at http://192.168.99.100/admin/ you can enter the annot GUI on the admin side. Use the generated superuser credentials.
production version only:

Annot can be set up so that it automatically checks for new versions of each ontology at midnight container time, installs them, and backs up the whole annot content.

- docker exec -ti annot_web_1 /bin/bash
  enters the annot_web_1 docker container.
- /etc/init.d/cron status
  checks the cron daemon status.
- /etc/init.d/cron start
  starts the cron daemon. This enables the check and backup at midnight container time. Backups are stored at /usr/src/media/.
- date
  checks the docker container's local time.
Assuming you run a unix flavored host machine with cron installed, the host machine can be set up to automatically pull the backups stored inside the docker container to the host machine every night. For this, you have to adjust and install the following cron job.
At your host machine, inside the cloned annot project folder, adjust
annot/web/nix/hostpull.sh

- change every mymachine to the docker machine name you gave, e.g. an0.
- change every /path/on/host/to/store/backup/ to the directory where you would like your backups placed.
At the host machine, inside the cloned annot project folder, adjust
annot/web/nix/hostcronjob.txt

- make sure that PATH knows the location of the docker-machine binary. Run which docker-machine at the command line to find out the correct location.
- change the time 00 00 (which represents mm hh) to be 6 hours later than midnight inside the annot docker containers.
- change /path/to/cloned/project/ to the directory where you cloned annot to.
- change /path/on/host/to/store/backup/ to the directory where you would like your backups placed.
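For orientation only, a crontab entry of the kind hostcronjob.txt contains might look roughly like the sketch below. The 06:00 time, the log redirect, and the placeholder paths are illustrative assumptions; adjust the actual hostcronjob.txt that ships with annot rather than writing your own:

```shell
# Hypothetical sketch of a hostcronjob.txt style entry; the real file ships with annot.
# PATH must contain the directory reported by: which docker-machine
PATH=/usr/local/bin:/usr/bin:/bin
# mm hh dom mon dow  command: pull the container backups to the host at 06:00.
00 06 * * * /path/to/cloned/project/annot/web/nix/hostpull.sh > /path/on/host/to/store/backup/hostpull.log 2>&1
```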
At the host machine, queue the cron job and start cron:

- crontab /path/to/cloned/project/annot/web/nix/hostcronjob.txt
  queues the job.
- /etc/init.d/cron status
  checks the cron daemon status.
- /etc/init.d/cron start
  starts the cron daemon, if needed.
If you run into trouble, the cron documentation might come in handy. But keep in mind that this documentation was written for folks running the Ubuntu OS.
HowTo json files and your web browser?¶

How to make the acjson files uploaded to annot viewable in your browser?

- for MS Internet Explorer, this hack will make json files viewable, but it will not render them nicely.
- the Firefox Developer Edition comes with an integrated json viewer.
- for Chrome, Firefox, Opera and Safari, install the Json Lite browser add-on, which can render large json files quickly.
- for the links text browser, json files are viewable but will not be rendered.
HowTo set up an additional annot user?¶
- enter annot as superuser via the GUI.
- scroll down to the white colored Authentication_and_Authorization link at the bottom of the page.
- click Groups, then Add_Group.
- give add, change and delete Permissions for all app* django applications.
- Save the group.
- go back to Home › Authentication and Authorization.
- click Users, then Add_User.
- set Username and Password.
- give the user Staff_status by clicking the box.
- add the user to the group generated before.
- Save the user.
HowTo fire up annot?¶
Once annot is installed as described in HowTo install annot? it can be fired up like this:
- docker-machine ls
  lists all machines.
- docker-machine start an0
  fires up machine an0, if it is not yet running.
- docker-machine env an0
  gets an0's environment variables.
- eval "$(docker-machine env an0)"
  sets an0's environment variables.
- docker-machine ls
  the an0 machine should now have an asterisk in the ACTIVE column.
for the development version:

- docker-compose -f dcdev.yml up
  fires up the docker containers.

for the production version:

- docker-compose -f dcusr.yml up
  fires up the docker containers.
HowTo enter annot?¶
First annot must be running as described in HowTo fire up annot? Then:
- To enter annot by GUI, point your browser at http://192.168.99.100/admin/ and use your annot user credentials.
- To enter the development version from the command line, run:
  docker exec -ti annot_webdev_1 /bin/bash
- To enter the production version from the command line, run:
  docker exec -ti annot_web_1 /bin/bash
HowTo get files from your host machine into annot?¶
for the development version:

- move the files into the annot/web folder on your host machine.
- run
  docker exec -ti annot_webdev_1 /bin/bash
  to enter the docker container.
- the files will appear inside the /usr/src/app folder.
for the production version:

- rebuild the annot_web_1 container. This works because all relevant data is stored in the annot_fsdata_1 and annot_dbdata_1 containers.
  - move the files into the annot/web folder on your host machine.
  - docker-compose -f dcusr.yml stop
    shuts down the docker containers.
  - docker rm annot_web_1
    removes the annot_web_1 container.
  - docker-compose -f dcusr.yml build
    rebuilds the annot_web_1 container from scratch.
  - docker-compose -f dcusr.yml up
    fires up annot again.
- cat the data into the docker container.
  - tar or zip the data into one big file.
  - docker exec -i annot_web_1 bash -c "cat > bigfile.tar.gz" < /host/path/bigfile.tar.gz
    uploads a big chunk of data into the docker container.
HowTo get files from inside annot to your host machine?¶

for the development version:

- run
  docker exec -ti annot_webdev_1 /bin/bash
  to enter the docker container.
- move the files into the /usr/src/app folder.
- the files will appear inside the annot/web folder on your host machine.

for the production version:

- scp from inside the docker container:
  - run
    docker exec -ti annot_web_1 /bin/bash
    to enter the docker container.
  - run something like
    scp bigfile.tar.gz user@host.edu:
- docker cp from the host machine:
  - run something like
    docker cp annot_web_1:/usr/src/path/to/the/bigfile.tar.gz .
HowTo list all available commands?¶
- In the GUI, available commands can be found in each app in the Action drop down list.
- From the command line, enter annot and run
  python manage.py
  to list all available commands.
HowTo backup annot content?¶
Annot can be backed up using the shell scripts we provide. Specifically:

- enter annot by the command line.
- nix/cron_vocabulary.sh
  is a bash shell script written to back up controlled vocabulary terms. Before annot backs up any vocabulary, it first updates the vocabulary to the latest ontology version. The backups are placed in the folder /usr/src/media/vocabulary/YYYYMMDD_json_backup_latestvocabulary/
- nix/cron_brick.sh
  backs up bricks in json and tsv format. The backups are placed in the folders /usr/src/media/vocabulary/YYYYMMDD_json_latestbrick and /usr/src/media/vocabulary/YYYYMMDD_tsv_latestbrick/.
- nix/cron_experiment.sh
  backs up the acaxis, superset, runset, track, study and investigation tables, and the acjson and superset files.

In the production version, a cron job can be enabled to automatically back up the annot content every night. How to do this is described in the last step of HowTo install annot?
HowTo backup acpipeTemplateCode_*.py code?¶
Warning: it is your responsibility to back up the modified python3 template code that generated the acjson files, as these scripts are not stored in annot.

- run
  mkdir acaxis superset supersetfile runset
  to generate the following folder structure:
  - acaxis
  - superset
  - supersetfile
  - runset
- place all acpipeTemplateCode_*.py and superset files into the corresponding folders.

You only have to back up the py files and the superset files, as the acjson files can be regenerated at any time.
HowTo fix acjson annotation that propagates from the acaxis to the superset to the runset layer?¶
The problem is the following: if you, for example, realize that you mistyped a concentration value in an acjson on the acaxis layer, then you have to fix this bug in the acpipeTemplateCode_*.py, regenerate the acjson file, and regenerate all acjson files that depend on it. Doing such a fix via the GUI is possible, though really tedious. So we will make use of the command line to fix this bug.
- set up a folder structure as described in HowTo backup acpipeTemplateCode_*.py code?
- fix the acpipeTemplateCode_*.py files where necessary.
- cp annot/web/apptool/acjsonUpdater.py /to/the/root/of/the/folderstructure/
- cp annot/web/apptool/acjsonCheck.py /to/the/root/of/the/folderstructure/
- run
  python3 acjsonUpdater.py
  from the root of your folder structure. This should regenerate all acjson files.
- run
  python3 acjsonCheck.py
  from the root of your folder structure. This should check all superset and runset acjson files for inconsistencies with the acaxis and superset acjson files. The result will be written into a file named YYYYmmdd_acjsoncheck.log in the root folder.
- copy the whole folder structure to /usr/src/media/upload/ inside annot, as described in HowTo get files from your host machine into annot? Note: remove the .git folder from the root of the copy, if there is one, because this folder can cause trouble.
Now all your acjson files in annot should be updated to the latest version.
HowTo handle controlled vocabulary?¶
Please check out the about controlled vocabulary section for a detailed discussion of the subject. This section deals only with the available annot commands.
- python manage.py vocabulary_getupdate apponorganism_bioontology apponprotein_uniprot
  searches and downloads the latest ontology versions from ncbi taxon and uniprot. It will not update the database.
- python manage.py vocabulary_getupdate
  searches and downloads the latest ontology version for each vocabulary. It will not update the database.
- python manage.py vocabulary_loadupdate apponorganism_bioontology apponprotein_uniprot
  searches and downloads the latest ontology versions and updates the database, but only for ncbi taxon and uniprot.
- python manage.py vocabulary_loadupdate
  searches and downloads the latest ontology version for each vocabulary, and updates the database.
- python manage.py vocabulary_loadbackup apponorganism_bioontology apponprotein_uniprot
  first populates the ncbi taxonomy and uniprot vocabularies with the latest backup found at /usr/src/media/vocabulary/backup/. Then it downloads the latest ontology versions and updates the database content with them.
- python manage.py vocabulary_loadbackup
  first populates each vocabulary app with the latest backup found at /usr/src/media/vocabulary/backup/. Then it downloads the latest ontology version and updates the database content with it. This command will break if an online ontology fails to download.
- nix/cron_vocabulary.sh
  is a shell script written to back up each and every vocabulary, one by one. Before annot backs up any vocabulary, it first updates the vocabulary to the latest available ontology version. This script will not break if a new online ontology version fails to download, which does happen.
- In the production version, a cron job can be enabled to automatically check all plugged in ontologies for new versions every night, install them when available, and back up the local modifications. How to do this is described in the last step of HowTo install annot?
We have defined ontologies for categories where no established ontology exists. For example: Dye, Health status, Provider, Sample entity, Verification profile and Yield fraction. Terms added to these ontologies can be transformed to “original” ontology terms:
- python manage.py vocabulary_backup2origin apponhealthstatus_own
  will transform all added terms from apponhealthstatus_own into original terms.
- python manage.py vocabulary_backup2origin
  will transform all added terms from every *_own ontology into original terms.
HowTo deal with huge ontologies?¶
Huge ontologies like Cellosaurus (apponsample_cellosaurus), ChEBI (apponcompound_ebi), Gene Ontology (apponcellularcomponent_go, appongeneontology_go, apponmolecularfunction_go), NCBI Taxonomy (apponorganism_bioontology), and UniProt (apponprotein_uniprot) can be filtered down to the set of terms relevant for your experimental paradigm, using filter_identifier.txt or filter_term.txt files inside the particular django app. Check out the filter file in one of those ontology apps for reference.
HowTo get detailed information about the ontologies in use?¶
A complete list of the ontologies plugged into your current annot installation, their current versions, and the sources they are pulled from can be found by clicking the red colored Sys_admin_ctrl_vocabularies link inside the GUI.
HowTo handle bricks?¶
Bricks are the cell lines and reagents used in the wet lab. In annot those bricks can be specified and annotated by searchable drop down list boxes with controlled vocabulary.
There are three major types of bricks:
- sample bricks
- perturbation bricks
- endpoint bricks
There are currently seven minor types of bricks:
- antibody1: primary antibodies
- antibody2: secondary antibodies
- cstain: compound stains
- compound: chemical compounds
- protein: proteins
- proteinset: protein complexes
- human: human cell line samples
Bricks are highlighted orange in the GUI.
There are four commands for dealing with each minor brick type, shown here for the protein brick type:

- python manage.py protein_db2json
  dumps the content of the protein brick table into a json file. This format is easily processed by python and is handy for backups.
- python manage.py protein_db2tsv
  dumps the content of the protein brick table into a tab separated value file. This is a handy format for folks who prefer Excel sheets over the GUI for brick annotation, and a handy backup format.
- python manage.py protein_json2db
  uploads the protein brick json file into the database. The uploaded content will automatically be checked against valid controlled vocabulary.
- python manage.py protein_tsv2db
  uploads the protein brick tab separated value file into the database. Any additional columns will be ignored. The content of the expected columns will automatically be checked against valid controlled vocabulary.
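The tsv dumps can also be processed outside annot with plain python3. The sketch below illustrates how extra columns, as mentioned above, are simply dropped; the column names are invented for illustration, so take the real ones from the header line of an actual protein_db2tsv dump:

```python
# Sketch: read a brick tsv dump and drop any columns that are not expected.
# The column names below are invented for illustration; use the header line
# of a real protein_db2tsv dump instead.
import csv
import io

tsv_text = (
    "protein\tprovider\tcatalog_id\tmy_private_note\n"
    "COL1A1_P02453\tsigma\tC1234\tordered twice\n"
)

expected_columns = {"protein", "provider", "catalog_id"}

rows = []
for row in csv.DictReader(io.StringIO(tsv_text), delimiter="\t"):
    # keep only the expected columns, mimicking how extra columns are ignored on upload.
    rows.append({key: value for key, value in row.items() if key in expected_columns})

print(rows)
```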
HowTo annotate protein complexes?¶
In the GUI:

- scroll to the orange colored Appbrreagentprotein section.
- click Perturbation_Proteinset.
- under Protein_set, choose the gene ontology cellular component identifier for the protein complex you want to annotate, e.g. COL1_go0005584.
- choose the Provider.
- enter the catalog_id.
- enter the batch_id.
- adjust Availability, Final_concentration_unit and Time_unit if necessary.
- click Save.
Now Collagen 1 is a protein complex built out of two COL1A1_P02453 Collagen alpha-1 (I) chain proteins and one COL1A2_P02465 Collagen alpha-2 (I) chain protein. Both of these proteins have to be annotated.
- click Perturbation_Protein and enter both proteins as usual.
- under Protein_set, you must choose the proteinset generated before.
- enter the Proteinset_ratio, 2:1.
- our lab convention is: set Availability to False, because the single protein as such is not available.
- our lab convention is: give the Stock_solution_concentration for the whole protein complex; do not divide by the protein ratio, because there are protein complex reagents where the exact ratio is unknown.

Now you should be able to upload this COL1_go0005584 protein set.
Howto make bricks accessible in the experiment layout layer?¶
Before any brick is accessible in the experiment layout, it must be uploaded into the corresponding Uploaded endpoint reagent bricks, Uploaded perturbation reagent bricks or Uploaded sample bricks table. The very first time you install annot, you have to do this by command line, because the database tables which the GUI relies on have to be initialized. After that, you can populate the brick tables via the command line or the GUI.
from the command line:

- python manage.py brick_load
  will upload all bricks.
from the GUI:

- scroll to the bright orange colored Sys_admin_brick link and click it.
- select the brick types you would like to upload.
- in the Action drop down list, choose Upload brick and click Go.
Where are the uploaded bricks stored?

- enter the GUI.
- go to Home › Appsabrick (bright orange colored).
- the Uploaded endpoint reagent bricks, Uploaded perturbation reagent bricks and Uploaded sample bricks tables contain the uploaded bricks. Those are the bricks accessible for layout.
Note: if a brick (orange colored) gets deleted, the corresponding uploaded brick inside the Uploaded bricks tables (bright orange colored), and any set that uses this uploaded brick, will not be deleted, even though the brick no longer exists! The entry in the ok_brick column of such an uploaded brick will change from a green tick to a red cross the next time this brick type is uploaded.
HowTo layout experiments?¶
In a similar way as the IPO (input processing output) paradigm can describe the structure of an information processing program, a biological experiment can be specified by sample, perturbation and endpoint descriptions. The samples can thereby be regarded as input, the perturbations as processing, and the endpoints as output. In annot's assay coordinate model, sample, perturbation and endpoint are represented as "axes". Below is a short description of how such axes are specified. Check out the Tutorial for an applied example.
About axis sets!¶
- To define an axis set, one first has to gather the samples, the perturbation reagents, and the endpoint reagents used in the experiment.
  - scroll to the cyan colored Appacaxis box.
  - click the cyan Set_of_Endpoints and Add links to group together the endpoint bricks used in an experiment.
  - click the cyan Set_of_Perturbation and Add links to group together the perturbation bricks.
  - click the cyan Set_of_Sample and Add links to group together the sample bricks.
- scroll to the cyan colored
For set_names, only alphanumeric characters, underscores and dashes are allowed [A-Za-z0-9-_]. The dash has a special function: it separates the major from the minor, and possibly subminor, set name. E.g. drug-plate1, drug-plate2 and drug-plate3-well2 are all members of the same major drug set. This becomes especially important later on, when layout files and unstacked dataframes are retrieved from the acjson files, because the layout files will be grouped into folders according to their major set name, and the unstacked dataframe will group the columns according to the major sets. If no dash is given, then the major and the minor set name are the same.
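The major/minor naming convention above can be captured in a few lines of python3 (a hypothetical helper for illustration; it is not part of annot):

```python
def split_set_name(set_name):
    """Split an annot set_name at the first dash into (major, minor).

    Hypothetical helper illustrating the naming convention; not part of annot.
    If no dash is given, the major and the minor set name are the same.
    """
    major, dash, minor = set_name.partition("-")
    return (major, minor if dash else major)

# drug-plate1, drug-plate2 and drug-plate3-well2 all share the major set "drug".
print(split_set_name("drug-plate1"))        # major "drug", minor "plate1"
print(split_set_name("drug-plate3-well2"))  # major "drug", minor and subminor "plate3-well2"
print(split_set_name("drug"))               # no dash: major and minor are both "drug"
```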
Second, the gathered samples and reagents have to be laid out. Python3 and the acpipe_acjson library must be installed on your computer. You can install the acpipe_acjson library with pip like this:
pip3 install acpipe_acjson
should do the trick.
What follows is the description of the layout process on a perturbation set. But layout for sample and endpoint sets is done exactly the same way.
- click the cyan colored Set_of_Perturbation link.
- choose the set you would like to lay out.
- in the Action drop down list, choose Download selected set's python3 acpipe template script and click Go to download the template file.
- open the template file in a text editor. You will find python3 template code, generated based on the set_name and the chosen bricks. Read the template code and replace all the question marks, which are placeholders for the wellplate layout and each reagent's concentration and reaction time, with meaningful values.
- then run
  python3 acpipeTemplateCode_*set-name*.py
  This will result in an acpipe_acjson-*set-name*_ac.json file.
Third, upload the generated acjson file and check it for consistency.

- in the GUI, click the name of the set whose template you downloaded.
- scroll down to Set Acjson file and Browse... for the generated file to upload it.
- click Save.
- in the Set_of_Perturbation table, choose the set again. Then, in the Action drop down list, choose Check selected set's acjson file against brick content and click Go. After a little while, you should see a message *set-name* # successfully checked, or a warning when the acjson content differs from the set_name or brick settings.
About supersets!¶
Supersets - stored in the blue colored App4Superset box - are optional. Imagine, for example, you have a pipette robot which helps you to produce randomized wellplates from reagents provided in eppendorf tubes. You could:

- store the eppendorf layout that you feed to the pipette robot as an ordinary Set_of_Perturbation.
- store the pipette robot's program code as a Superset_File.
- write a python3 library that takes the eppendorf layout acjson file and the robot program code as input to generate the random plate layout acjson file. Store this library as a Python3_acpipe_module.
- connect the eppendorf layout perturbation set, the plate robot program code file, the python3 acjson module and the resulting random plate acjson file as a superset.
For any system in the lab you can imagine, you can write a python3 acpipe library and plug it into annot.
About run sets!¶
One runset represents one assay. An assay combines all 3 acjson axis: Sample, Perturbation, and Endpoint. The information can come from sampleset acjson files, perturbation set acjson files, endpoint acjson files, and superset acjson files.
- scroll down to the dark blue colored Assay_Runs box and click the Add link.
- give a Runset_name. Allowed are alphanumeric characters, dashes and underscores [A-Za-z0-9-_].
- use the drop down list boxes to gather the related endpoint sets, perturbation sets, sample sets, and supersets, then click Save.
- in the Action drop down list, choose Download selected set's python3 acpipe template script and click Go to download the template file.
- modify the template code as appropriate and run it.
- upload the resulting Acjson file to the set.
- in the Action drop down list, choose Check runset against set of set acjson and click Go. After a while, you should see a message *runset_name* # successfully checked, or a warning when the acjson content differs.
About date tracking!¶
The tracking layer enables assay and superset related date, protocol, and staff member metadata to be documented. The tracking site links are located in the purple colored App2Track box.
The tracking app can be customized for different experimental protocols:

- edit the app2track/models.py file to your needs.
- edit the app2track/admin.py file to your needs.
- enter annot by the command line.
- run python manage.py makemigrations
- run python manage.py migrate
- edit the es_table constant and the os.system datadump call in annot/web/appacaxis/management/commands/experiment_pkgjson.py to have the backup packing properly updated.
HowTo disable the date tracking app?¶
- open annot/web/prjannot/settings.py with a text editor.
- inside the INSTALLED_APPS tuple, use a hashtag # to comment out app2track.
- save settings.py and leave the editor.
- run docker-machine restart an0, assuming your docker machine's name is an0.
- reload the http://192.168.99.100/admin/ page in your browser. App2Track should be gone.
HowTo handle study and investigation?¶
- click the black colored Studies and Add links to gather Assay_Runs into a study.
- click the black colored Investigation and Add links to gather Studies into an investigation.

Those pages should be quite self explanatory.
Django¶
Howto enable the django-debug-toolbar?¶
- open annot/web/prjannot/settings.py with a text editor.
- delete the hashtags in front of DEBUG_TOOLBAR_CONFIG = { "SHOW_TOOLBAR_CALLBACK": lambda request: True, }.
- inside the INSTALLED_APPS tuple, delete the hashtag in front of debug_toolbar.
- inside the MIDDLEWARE_CLASSES tuple, delete the hashtag in front of debug_toolbar.middleware.DebugToolbarMiddleware.
- save settings.py and leave the editor.
- enter annot from the command line.
- run python manage.py collectstatic
- exit the container.
- run docker-machine restart an0, assuming your docker machine's name is an0.
- reload the http://192.168.99.100/admin/ page in your browser.
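After these edits, the uncommented parts of settings.py should look roughly like the sketch below. This is an assumption about the file's shape; the exact surrounding entries in annot's settings.py may differ:

```python
# Sketch of the relevant, uncommented settings.py parts; surrounding entries may differ.
DEBUG_TOOLBAR_CONFIG = {
    "SHOW_TOOLBAR_CALLBACK": lambda request: True,
}

INSTALLED_APPS = (
    # [...] the other annot apps [...]
    "debug_toolbar",
)

MIDDLEWARE_CLASSES = (
    # [...] the other middleware classes [...]
    "debug_toolbar.middleware.DebugToolbarMiddleware",
)
```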
Docker¶
HowTo install the docker platform?¶
Docker is able to run on Linux, Mac OSX, MS Windows, and many cloud platform flavors.
Install Docker Engine, Docker Machine and Docker Compose as described here: Install Docker.
HowTo run the docker platform?¶
This howto will get you familiar with docker, as much as is needed to run docker as annot user or developer.
To successfully run docker you have to know a whole set of docker commands, from the docker engine, docker-compose, and docker-machine. The section below introduces a minimal set of commands needed to run annot. It is worthwhile to check out the list of all available docker engine, docker-compose, and docker-machine commands. There are many nice commands that may be very helpful for your specific application.
The docker platform can be booted either by starting the docker engine or by firing up a docker-machine. Annot as such could run solely with the docker engine and docker-compose. However, we have chosen to make use of docker-machine to allow one physical computer to run more than one development version, or a development and a deployed version, simultaneously.
docker-machine commands¶
In this example the docker machine's name is dev:

- docker-machine
  lists all available docker-machine commands.
- docker-machine create --driver virtualbox dev
  makes a docker machine labeled dev. The default disk size is 20GB.
- docker-machine create --driver virtualbox --virtualbox-disk-size "1000000" dev
  makes a docker machine labeled dev with 1TB of space. The disk size is always given in MB.
- docker-machine start dev
  starts docker machine dev.
- docker-machine ls
  lists all docker machines (docker environments).
- docker-machine ip dev
  gets the IP address of the machine, to connect to it, e.g., with a web browser.
- docker-machine env dev
  gets dev's environment variables.
- eval "$(docker-machine env dev)"
  sets dev's environment variables.
- docker-machine regenerate-certs dev
  recreates the IP certificates if needed. Usually IPs are given out in the order in which the machines are started; in case the IP of dev changed, the certificates have to be regenerated.
- docker-machine upgrade dev
  upgrades the dev machine to the latest docker version.
docker engine commands¶
In the docker world, you have to be able to distinguish between a docker image and a docker container. A docker image is a synonym for the container class (or type); a docker container is an actual instance of such a container class.
Basic docker image related commands. In this example the image is labeled annot_web and has the id 0b8a78c6c379:

- docker
  lists all the docker engine commands.
- docker images
  lists all images.
- docker rmi 0b8a78c6c379
  deletes one or more images by id.
- docker rmi annot_web
  deletes one or more images by label.
Basic docker container related commands. In this example the container is labeled annot_web_1 and has the id 290ebef76c11:

- docker
  lists all the docker engine commands.
- docker ps
  lists running containers.
- docker ps -a
  lists all containers.
- docker run annot_web ls
  runs the ls command in a new container instance.
- docker exec annot_web_1 ls
  executes the ls command in the annot_web_1 container instance.
- docker exec -ti annot_web_1 /bin/bash
  opens an interactive terminal running the bash shell inside the annot_web_1 container.
- docker start annot_web_1
  starts a stopped container.
- docker restart annot_web_1
  restarts a running container.
- docker stop annot_web_1
  stops a running container.
- docker rm annot_web_1 annot_nginx_1
  deletes one or more containers.
- docker rm -v annot_fsdata_1
  deletes the annot_fsdata_1 container, including the volume inside the container.
The slight difference between run and exec: the docker run command takes an image and actually builds a new container to run the command; the new container will be automatically labeled. docker exec instead runs the command in an existing container; no new container will be built. In the case of annot, you usually do not want to create a new container.
docker-compose commands¶
Web applications like annot are usually built out of many containers. For example, the development version of annot is built out of five containers: annot_nginxdev_1, annot_webdev_1, annot_fsdata_1, annot_db_1, annot_dbdata_1. To orchestrate the whole container set, you can run docker-compose commands. Nevertheless, it is important to know the low level docker engine commands, to be able to deal with a single container out of the set. To run docker-compose commands, the container set and the connections between the containers have to be specified in a related yml file. In the following examples this is the dcdev.yml file:
- docker-compose
  lists all the docker compose commands.
- docker-compose -f dcdev.yml build
  builds or rebuilds the container set.
- docker-compose -f dcdev.yml up
  starts the container set. This does not give the prompt back, but prints detailed output about the running containers. Press ctrl + c to stop the containers.
- docker-compose -f dcdev.yml up -d
  starts the container set in daemon mode. This gives the prompt back, but no detailed output about the running containers.
- docker-compose -f dcdev.yml ps
  lists all running containers.
- docker-compose -f dcdev.yml ps -a
  lists all containers.
- docker-compose -f dcdev.yml start
  starts the container set.
- docker-compose -f dcdev.yml restart
  restarts the container set.
- docker-compose -f dcdev.yml stop
  stops the container set.
- docker-compose -f dcdev.yml rm
  removes the stopped container set.
PostgreSQL¶
HowTo enter the postgresql database?¶
From the command line execute the following steps:
- docker exec -ti annot_db_1 /bin/bash
  enters the annot_db_1 docker container as the unix root user.
- su postgres -s /bin/bash
  switches to the unix postgres user.
- psql -U annot -d annot
  enters the postgresql shell as the database user named annot, connecting to the database named annot.
- \q
  quits the postgresql shell.
- exit
  exits the unix postgres user.
- exit
  exits the unix root user, thereby leaving the annot_db_1 docker container.