HowTos

Annot


HowTo install annot?

This howto walks you step by step through the process of installing the development and production versions of annot.

  1. Install docker, docker-compose and docker-machine as described in HowTo install the docker platform?

  2. On the host machine install Git. Follow the instructions on the Git website for your operating system.

  3. Get the Annot source code from the main fork. Run from the command line: git clone https://gitlab.com/biotransistor/annot.git

    (Alternatively, you can clone annot from your own fork. Forking the project is not described here.)

  4. The cloned source code’s annot/pgsql.env file contains a few PostgreSQL database configurations. Edit the DB_PASS entry:

    [...]
    DB_PASS=set_some_strong_random_postgresql_root_user_password
    [...]
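
    A strong random password can be generated on most unix systems, for example with openssl (one possibility among many):

    openssl rand -base64 32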
    
  5. Create a BioPortal bioontology.org account. Go to your BioPortal account settings to look up your API key.

  6. The crowbar.py file contains Django framework and annot-related environment variables. Write a plain text crowbar.py file with the following content:

    SECRET_KEY = "about_64_characters_long[->+<]"
    PASSWD_DATABASE = "some_random_postgresql_annot_user_password"
    APIKEY_BIOONTOLOGY = "your_BioPortal_bioontology.org_API_key"
    URL = "http://192.168.99.100/"
    CONTACT = "you@emailaddress"
    

    Adapt the SECRET_KEY, PASSWD_DATABASE, APIKEY_BIOONTOLOGY and CONTACT content inside the double quotes. For a local installation leave URL as it is.

    Place this file under annot/web/prjannot/crowbar.py.
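
    A suitably long random SECRET_KEY can be generated with python's secrets module (a sketch, assuming python 3.6 or later is available on the host):

    python3 -c "import secrets; print(secrets.token_urlsafe(48))"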

  7. development version only: The annot/dcdev.yml file contains docker-compose related information. Edit the webdev and nginxdev volumes paths according to your host machine environment:

    webdev:
      [...]
      volumes:
        - /path/to/your/git/cloned/annot/web:/usr/src/app
      [...]
    
    nginxdev:
      [...]
      volumes:
        - /path/to/your/git/cloned/annot/nginxdev/annotnginx.conf:/etc/nginx/nginx.conf
      [...]
    
  8. Build a docker machine to host the docker containers for the development or production version of annot, build the containers, then fire up annot. You can name the machine however you like; in this example we name it an0.

    1. docker-machine create --driver virtualbox --virtualbox-disk-size "20000" an0 creates the machine using VirtualBox as driver. The disk size is given in MB; please adjust it to your needs.
    2. docker-machine ls lists all machines.
    3. docker-machine start an0 fires up machine an0, if not yet running.
    4. docker-machine env an0 gets an0’s environment variables.
    5. eval "$(docker-machine env an0)" sets an0’s environment variables.
    6. docker-machine ls the an0 machine should now have an asterisk (*) in the ACTIVE column.
    7. cd into the cloned annot folder, then execute the next steps.

    for the development version:

    1. docker-compose -f dcdev.yml pull pulls the basic containers.
    2. docker-compose -f dcdev.yml build builds all containers.
    3. docker-compose -f dcdev.yml up fires up the docker containers and reports what goes on with the web framework.
    4. press ctrl + c to shut down the docker containers and get the prompt back.
    5. docker-compose -f dcdev.yml up -d fires up the docker containers and gives the prompt back.

    for the production version:

    1. docker-compose -f dcusr.yml pull pulls the basic containers.
    2. docker-compose -f dcusr.yml build builds all containers.
    3. docker-compose -f dcusr.yml up fires up the docker containers and reports what goes on with the web framework.
    4. press ctrl + c to shut down the docker containers and get the prompt back.
    5. docker-compose -f dcusr.yml up -d fires up the docker containers and gives the prompt back.
  9. Set up the PostgreSQL database and database user.

    1. docker exec -ti annot_db_1 /bin/bash to enter the db docker container.
    2. su postgres -s /bin/bash to switch from unix root to unix postgres user.
    3. createdb annot creates a postgresql database named annot.
    4. createuser -P annot creates a database user named annot. When prompted, enter the same database password as specified in annot/web/prjannot/crowbar.py.
    5. psql -U postgres -d annot -c"GRANT ALL PRIVILEGES ON DATABASE annot TO annot;" does what it says.
    6. exit to exit as unix postgres user.
    7. exit to exit as unix root user and thereby leave the annot_db_1 docker container.
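
    The same setup can also be scripted non-interactively from the host machine (a sketch; replace the password placeholder with the PASSWD_DATABASE value from crowbar.py):

    # create the database, the annot database user, and grant privileges
    docker exec -i annot_db_1 su postgres -s /bin/bash -c "createdb annot"
    docker exec -i annot_db_1 su postgres -s /bin/bash -c "psql -c \"CREATE USER annot WITH ENCRYPTED PASSWORD 'some_random_postgresql_annot_user_password';\""
    docker exec -i annot_db_1 su postgres -s /bin/bash -c "psql -c \"GRANT ALL PRIVILEGES ON DATABASE annot TO annot;\""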
  10. Generate the database tables, create a superuser, and pull all static files together:

    for the development version:

    1. docker exec -ti annot_webdev_1 /bin/bash to enter the webdev docker container.

    for the production version:

    1. docker exec -ti annot_web_1 /bin/bash to enter the web docker container.

    then continue:

    1. python demigrations.py will clean out the sql migration command folder from every app.
    2. python manage.py makemigrations generates the sql database migration commands.
    3. python manage.py migrate applies the generated sql migration commands.
    4. python manage.py createsuperuser creates a superuser for the annot web application.
    5. python manage.py collectstatic collects all static files needed by annot and puts them into the right place to be served.
    6. exit to leave the container.
  11. Fire up your favorite web browser and surf to the place where annot is running.

    1. docker-machine ls will give you the correct IP. Most probably 192.168.99.100.
    2. at http://192.168.99.100/admin/ you can enter the annot GUI on the admin side. Use the superuser credentials generated before.
  12. production version only:

    Annot can be set up so that it automatically checks for new versions of each ontology at midnight container time, installs them, and backs up the whole annot content.

    1. run docker exec -ti annot_web_1 /bin/bash to enter the annot_web_1 docker container.
    2. /etc/init.d/cron status to check the cron daemon status.
    3. /etc/init.d/cron start to start the cron daemon. This enables the check and backup at midnight container time. Backups are stored at /usr/src/media/.
    4. date to check the docker container’s local time.

    Assuming you run a unix flavored host machine and cron is installed, your host machine can be set up to automatically pull the backups stored inside the docker container to the host machine every night. For this, you have to adjust and install the following cron job.

    At your host machine, inside the cloned annot project folder, adjust annot/web/nix/hostpull.sh.

    1. Change every mymachine to the docker machine name you gave, e.g. an0.
    2. Change every /path/on/host/to/store/backup/ to the directory where you would like your backups placed.

    At the host machine, inside the cloned annot project folder, adjust annot/web/nix/hostcronjob.txt.

    1. Make sure that PATH knows the location of the docker-machine binary. Run which docker-machine at the command line to find out the correct location.
    2. Change the time 00 00 (which represents mm hh) to be 6 hours later than midnight inside the annot docker containers.
    3. Change /path/to/cloned/project/ to the directory where you have annot cloned to.
    4. Change /path/on/host/to/store/backup/ to the directory where you would like your backups placed. (A sketch of an adjusted file follows below.)
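
    After these adjustments the relevant lines in hostcronjob.txt might look like this (a hypothetical example, assuming the job should run at 06:00 host time; the log file name is our own choice and the actual file layout may differ):

    PATH=/usr/local/bin:/usr/bin:/bin
    00 06 * * * /path/to/cloned/project/annot/web/nix/hostpull.sh >> /path/on/host/to/store/backup/hostpull.log 2>&1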

    At the host machine, queue the cron job and start cron:

    1. crontab /path/to/cloned/project/annot/web/nix/hostcronjob.txt to queue the job.
    2. /etc/init.d/cron status to check cron daemon status.
    3. /etc/init.d/cron start to start cron daemon, if needed.

    If you run into trouble, the cron documentation might come in handy. But keep in mind, that documentation was written for folks running the Ubuntu OS.

HowTo view json files in your web browser?

How to make the acjson files uploaded to annot viewable in your browser:

  • for MS Internet Explorer there is a hack that makes json files viewable, but it will not render them nicely.
  • the Firefox Developer Edition comes with an integrated json viewer.
  • for Chrome, Firefox, Opera and Safari install the JSON Lite browser add-on, which can render large json files quickly.
  • for the links text browser, json files are viewable but will not be rendered.

HowTo set up an additional annot user?

  1. enter annot as superuser via GUI.
  2. scroll down to the white colored Authentication_and_Authorization link on the bottom of the page.
  3. click Groups Add_Group.
  4. give add, change and delete Permissions for all app* django applications.
  5. Save group.
  6. go back to Home Authentication and Authorization.
  7. click Users Add_User.
  8. set Username and Password.
  9. give user Staff_status by clicking the box.
  10. add user to the group generated before.
  11. Save user.

HowTo fire up annot?

Once annot is installed as described in HowTo install annot? it can be fired up like this:

  1. docker-machine ls lists all machines.
  2. docker-machine start an0 fires up machine an0, if not yet running.
  3. docker-machine env an0 gets an0’s environment variables.
  4. eval "$(docker-machine env an0)" sets an0’s environment variables.
  5. docker-machine ls the an0 machine should now have an asterisk (*) in the ACTIVE column.

for the development version:

  • docker-compose -f dcdev.yml up fires up docker containers.

for the production version:

  • docker-compose -f dcusr.yml up fires up docker containers.

HowTo enter annot?

First annot must be running as described in HowTo fire up annot? Then:

  • To enter annot by GUI, point your browser at http://192.168.99.100/admin/ and use your annot user credentials.
  • To enter the development version from the command line run: docker exec -ti annot_webdev_1 /bin/bash
  • To enter the production version from the command line run: docker exec -ti annot_web_1 /bin/bash

HowTo get files from your host machine into annot?

for the development version:

  1. move the files into the annot/web folder on your host machine.
  2. run docker exec -ti annot_webdev_1 /bin/bash to enter the docker container.
  3. the files will appear inside the /usr/src/app folder.

for the production version:

  • rebuild the annot_web_1 container: this works because all relevant data is stored in the annot_fsdata_1 and annot_dbdata_1 containers.
    1. move the files into the annot/web folder on your host machine.
    2. docker-compose -f dcusr.yml stop to shut down the docker containers.
    3. docker rm annot_web_1 to remove the annot_web_1 container.
    4. docker-compose -f dcusr.yml build to rebuild the annot_web_1 container from scratch.
    5. docker-compose -f dcusr.yml up to fire up annot again.
  • cat the data into the docker container:
    1. tar or zip the data to one big file.
    2. docker exec -i annot_web_1 bash -c "cat > bigfile.tar.gz" < /host/path/bigfile.tar.gz to upload a big chunk of data into the docker container.
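
    For example (a sketch; paths and file names are placeholders):

    # on the host: pack the data into one big file
    tar -czf bigfile.tar.gz /host/path/to/data/
    # after uploading it as shown above: unpack it inside the container
    docker exec -ti annot_web_1 tar -xzf bigfile.tar.gz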

HowTo get files from inside annot to your host machine?

for the development version:

  1. run docker exec -ti annot_webdev_1 /bin/bash to enter the docker container.
  2. move the files into the /usr/src/app folder.
  3. the files will appear inside the annot/web folder on your host machine.

for the production version:

  • scp from inside the docker container:
    1. run docker exec -ti annot_web_1 /bin/bash to enter the docker container.
    2. run something like scp bigfile.tar.gz user@host.edu:
  • docker cp from the host machine:
    1. run something like docker cp annot_web_1:/usr/src/path/to/the/bigfile.tar.gz /path/on/host/

HowTo list all available commands?

  • In the GUI available commands can be found in each app in the Action drop down list.
  • Enter annot from the command line and run python manage.py to list all available manage.py commands.

HowTo backup annot content?

Annot can be backed up using the shell scripts we provide. Specifically:

  1. enter annot by the command line.
  2. nix/cron_vocabulary.sh is a bash shell script written to back up the controlled vocabulary terms. Before annot backs up any vocabulary, it first updates the vocabulary to the latest ontology version. The backups are placed in the folder /usr/src/media/vocabulary/YYYYMMDD_json_backup_latestvocabulary/.
  3. nix/cron_brick.sh backs up the bricks in json and tsv format. The backups are placed in the folders /usr/src/media/vocabulary/YYYYMMDD_json_latestbrick/ and /usr/src/media/vocabulary/YYYYMMDD_tsv_latestbrick/.
  4. nix/cron_experiment.sh backs up the acaxis, superset, runset, track, study and investigation tables as well as the acjson and superset files.

In the production version a cron job can be enabled to automatically back up the annot content every night. How to do this is described in the last step of HowTo install annot?

HowTo backup acpipeTemplateCode_*.py code?

Warning: it is your responsibility to back up the modified python3 template code that generated the acjson files, as these scripts are not stored in annot.

  1. run mkdir acaxis superset supersetfile runset to generate the following folder structure:
    • acaxis
    • superset
    • supersetfile
    • runset
  2. place all acpipeTemplateCode_*.py and superset files into the corresponding folders.

You only have to back up the py files and the superset files, as the acjson files can be regenerated at any time.

HowTo fix acjson annotation that propagates from the acaxis to the superset to the runset layer?

The problem is the following: if you, for example, realize that you mistyped a concentration value in an acjson on the acaxis layer, then you have to fix this bug in the acpipeTemplateCode_*.py, regenerate the acjson file, and regenerate all acjson files that depend on it. Doing such a fix via the GUI is possible, though really tedious. So we will make use of the command line to fix this bug.

  1. set up a folder structure as described in HowTo backup acpipeTemplateCode_*.py code?
  2. fix the acpipeTemplateCode_*.py where necessary.
  3. cp annot/web/apptool/acjsonUpdater.py /to/the/root/of/the/folderstructure/.
  4. cp annot/web/apptool/acjsonCheck.py /to/the/root/of/the/folderstructure/.
  5. run python3 acjsonUpdater.py from the root of your folder structure. This should regenerate all acjson files.
  6. run python3 acjsonCheck.py from the root of your folder structure. This should check all superset and runset acjson files for inconsistencies with the acaxis and superset acjson files. The result will be written into a file named YYYYmmdd_acjsoncheck.log at the root folder.
  7. copy the whole folder structure to /usr/src/media/upload/ inside annot as described in HowTo get files from your host machine into annot?. Note: remove the .git folder from the root of the copy, if there is one, because this folder can cause troubles.

Now all your acjson files in annot should be updated to the latest version.

HowTo handle controlled vocabulary?

Please check out the about controlled vocabulary for a detailed discussion about the subject. This section just deals with the available annot commands.

  • python manage.py vocabulary_getupdate apponorganism_bioontology apponprotein_uniprot searches and downloads the latest ontology versions for NCBI Taxonomy and UniProt. It will not update the database with them.
  • python manage.py vocabulary_getupdate searches and downloads the latest ontology version for each vocabulary. It will not update the database.
  • python manage.py vocabulary_loadupdate apponorganism_bioontology apponprotein_uniprot searches and downloads the latest ontology versions and updates the database, but only for NCBI Taxonomy and UniProt.
  • python manage.py vocabulary_loadupdate searches and downloads the latest ontology version for each vocabulary and updates the database.
  • python manage.py vocabulary_loadbackup apponorganism_bioontology apponprotein_uniprot first populates the NCBI Taxonomy and UniProt vocabularies with the latest backup found at /usr/src/media/vocabulary/backup/. Then it downloads the latest ontology versions and updates the database content with them.
  • python manage.py vocabulary_loadbackup first populates each vocabulary app with the latest backup found at /usr/src/media/vocabulary/backup/. Then it downloads the latest ontology version and updates the database content with it. This command will break if an online ontology fails to download.
  • nix/cron_vocabulary.sh is a shell script written to back up each and every vocabulary one by one. Before annot backs up any vocabulary, it first updates the vocabulary to the latest available ontology version. This script will not break if a new online ontology version fails to download, which does happen.
  • In the production version a cron job can be enabled to automatically check all plugged-in ontologies for new versions every night, install them when available, and back up the local modifications. How to do this is described in the last step of HowTo install annot?

We have defined ontologies for categories where no established ontology exists. For example: Dye, Health status, Provider, Sample entity, Verification profile and Yield fraction. Terms added to these ontologies can be transformed to “original” ontology terms:

  • python manage.py vocabulary_backup2origin apponhealthstatus_own will transform all added terms from the apponhealthstatus_own app into original terms.
  • python manage.py vocabulary_backup2origin will transform all added terms from every *_own ontology into original terms.

HowTo deal with huge ontologies?

Huge ontologies like Cellosaurus (apponsample_cellosaurus), ChEBI (apponcompound_ebi), Gene Ontology (apponcellularcomponent_go, appongeneontology_go, apponmolecularfunction_go), NCBI Taxonomy (apponorganism_bioontology), and UniProt (apponprotein_uniprot) can be filtered down to the relevant set of terms for your experimental paradigm using filter_identifier.txt or filter_term.txt files inside the particular django app. Check out the filter file in one of those ontology apps for reference.

HowTo get detailed information about the ontologies in use?

A complete list of the ontologies plugged into your current annot installation, their current versions, and the sources they are pulled from can be found by clicking the red colored Sys_admin_ctrl_vocabularies link inside the GUI.

HowTo handle bricks?

Bricks are the cell lines and reagents used in the wet lab. In annot those bricks can be specified and annotated by searchable drop down list boxes with controlled vocabulary.

There are three major types of bricks:

  • sample bricks
  • perturbation bricks
  • endpoint bricks

There are currently seven minor types of bricks:

  • antibody1: primary antibodies
  • antibody2: secondary antibodies
  • cstain: compound stains
  • compound: chemical compounds
  • protein: proteins
  • proteinset: protein complexes
  • human: human cell line samples

Bricks are highlighted orange in the GUI.

There are four commands to deal with each minor brick type, exemplified here for the protein brick type:

  • python manage.py protein_db2json will download the content from the protein brick table into a json file. This format is easily processed by python and is handy for backups.
  • python manage.py protein_db2tsv will download the content from the protein brick table into a tab separated value file. This is a handy format for folks who prefer Excel sheets over the GUI for brick annotation, and a handy backup format too.
  • python manage.py protein_json2db will upload the protein brick json file to the database. The upload content will automatically be checked against valid controlled vocabulary.
  • python manage.py protein_tsv2db will upload the protein brick tab separated value file into the database. Any additional columns will thereby be ignored. The content inside the expected columns will automatically be checked against valid controlled vocabulary.

HowTo annotate protein complexes?

In the GUI:

  1. scroll to the orange colored Appbrreagentprotein section.

  2. click Perturbation_Proteinset.

    1. under Protein_set choose the gene ontology cellular component identifier for the protein complex you want to annotate, e.g. COL1_go0005584.
    2. choose the Provider.
    3. enter catalog_id.
    4. enter batch_id.
    5. adjust Availability, Final_concentration_unit and Time_unit if necessary.
    6. click Save.

    Now Collagen 1 is a protein complex built out of two COL1A1_P02453 Collagen alpha-1 (I) chain proteins and one COL1A2_P02465 Collagen alpha-2 (I) chain protein. Both of these proteins have to be annotated.

  3. click Perturbation_Protein and enter both proteins as usual.

    • Under Protein_set you must choose the proteinset generated before.
    • Enter the Proteinset_ratio 2:1.
    • Our lab convention is: Set Availability to False, because the single protein as such is not available.
    • Our lab convention is: Give the Stock_solution_concentration for the whole protein complex, do not divide by protein ratio, because there are protein complex reagents where the exact ratio is unknown.
  4. now you should be able to upload this COL1_go0005584 protein set.

HowTo make bricks accessible in the experiment layout layer?

Before any brick is accessible in the experiment layout, it must be uploaded into the corresponding Uploaded endpoint reagent bricks, Uploaded perturbation reagent bricks or Uploaded sample bricks table. The very first time you install annot you have to do this by command line, because the database tables which the GUI relies on have to be initialized. After that you can populate the brick tables via command line or GUI.

from the command line:

  1. python manage.py brick_load will upload all bricks.

from the GUI:

  1. scroll to the bright orange colored Sys_admin_brick link and click.
  2. select the brick types you would like to upload.
  3. In the Action drop down list choose Upload brick and click Go.

Where are the uploaded bricks stored?

  1. enter the GUI
  2. go to Home Appsabrick (bright orange colored)
  3. the Uploaded endpoint reagent bricks, Uploaded perturbation reagent bricks and Uploaded sample bricks tables contain the uploaded bricks. Those are the bricks accessible for layout.

Note: If a brick (orange colored) gets deleted, the uploaded brick inside the Uploaded bricks tables (bright orange colored) and any set that uses this uploaded brick will not be deleted! The entry in the ok_brick column of such an uploaded brick will change from a green tick to a red cross the next time this brick type is uploaded.

HowTo layout experiments?

In a similar way as the IPO (input, processing, output) paradigm describes the structure of an information processing program, a biological experiment can be specified by sample, perturbation, and endpoint descriptions. The samples can thereby be regarded as input, perturbations as processing, and endpoints as output. In annot’s assay coordinate model, sample, perturbation, and endpoint are represented as “axes”. Below is a short description of how such axes are specified. Check out the Tutorial for an applied example.

About axis sets!

  1. To define an axis set, one first has to gather the samples, the perturbation reagents, and the endpoint reagents used in the experiment.
    1. scroll to the cyan colored Appacaxis box.
    2. click the cyan Set_of_Endpoints and Add link to group together the endpoint bricks used in an experiment.
    3. click the cyan Set_of_Perturbation and Add link to group together the perturbation bricks.
    4. click the cyan Set_of_Sample and Add link to group together the sample bricks.

For set_names only alphanumeric characters, underscores and dashes are allowed [A-Za-z0-9-_]. The dash has a special function: it separates the major from the minor and possibly subminor set name. E.g. drug-plate1, drug-plate2 and drug-plate3-well2 are all members of the same major drug set. This becomes especially important later on when layout files and unstacked dataframes are retrieved from the acjson files, because the layout files will be grouped into folders according to their major set name, and the unstacked dataframe will group the columns according to the major sets. If no dash is given, then the major and the minor set name are the same.
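
The grouping logic can be illustrated with a few lines of python3 (a minimal sketch of the naming convention only; this helper is hypothetical and not part of annot):

    # hypothetical helper illustrating the major/minor set name convention
    def split_set_name(set_name):
        # split at the first dash; without a dash, major and minor are the same
        major, _, rest = set_name.partition("-")
        return (major, rest if rest else major)

    print(split_set_name("drug-plate1"))        # ('drug', 'plate1')
    print(split_set_name("drug-plate3-well2"))  # ('drug', 'plate3-well2')
    print(split_set_name("drug"))               # ('drug', 'drug')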

  2. Second, the gathered samples and reagents have to be laid out. Python3 and the acpipe_acjson library must be installed on your computer. You can install the acpipe_acjson library with pip like this:

    1. pip3 install acpipe_acjson should do the trick.

    What follows is the description of the layout process for a perturbation set. Layout for sample and endpoint sets is done exactly the same way.

    1. click the cyan colored Set_of_Perturbation link.
    2. choose the set you would like to layout.
    3. in the Action drop down list choose Download selected set's python3 acpipe template script and click Go to download the template file.
    4. open the template file in a text editor. You will find python3 template code, generated based on the set_name and the chosen bricks. Read the template code and replace all the question marks, which are placeholders for well plate layout and each reagent’s concentration and reaction time, with meaningful values.
    5. then run python3 acpipeTemplateCode_*set-name*.py. This will result in an acpipe_acjson-*set-name*_ac.json file.
  3. Third, upload the generated acjson file and check it for consistency.

    1. on the GUI click the name from the set you downloaded the template.
    2. scroll down to Set Acjson file and Browse... for the generated file to upload it.
    3. click Save
    4. in the Set_of_Perturbation table choose the set again. Then, in the Action drop down list, choose Check selected set's acjson file against brick content. and click Go. After a little while, you should see a message *set-name* # successfully checked or a warning when the acjson content differs from the set_name or brick settings.

About supersets!

Supersets - stored in the blue colored App4Superset box - are optional.

Imagine for example you have a pipette robot which helps you produce randomized well plates from reagents provided in eppendorf tubes.

You could:

  1. store the eppendorf layout that you feed to the pipette robot as an ordinary Set_of_Perturbation.
  2. store the pipette robot’s program code as a Superset_File.
  3. write a python3 library that takes the eppendorf layout acjson file and the robot program code as input and generates the randomized plate layout acjson file. Store this library as a Python3_acpipe_module.
  4. connect the eppendorf layout perturbation set, the robot program code file, the python3 acjson module, and the resulting random plate acjson file as a superset.

For any system in the lab you can imagine, you can write a python3 acpipe library and plug it into annot.

About run sets!

One runset represents one assay. An assay combines all 3 acjson axes: Sample, Perturbation, and Endpoint. The information can come from sample set acjson files, perturbation set acjson files, endpoint set acjson files, and superset acjson files.

  1. scroll down to the dark blue colored Assay_Runs Add link.
  2. give a Runset_name. Allowed are alphanumeric characters, dashes and underscores [A-Za-z0-9-_].
  3. use the drop down list boxes to gather the related endpoint sets, perturbation sets, sample sets, and supersets, then click Save.
  4. in the Action drop down list choose Download selected set's python3 acpipe template script and click Go to download the template file.
  5. modify the template code as appropriate and run it.
  6. upload the resulting Acjson file to the set.
  7. in the Action drop down list choose Check runset against set of set acjson and click Go. After a while you should see a message *runset_name* # successfully checked or a warning when the acjson content differs.

About date tracking!

The tracking layer enables assay and superset related date, protocol, and staff member metadata to be documented. The tracking site links are located in the purple colored App2Track box. The tracking app can be customized for different experimental protocols.

  1. edit the app2track/models.py file to your needs (a hypothetical model sketch follows after this list).
  2. edit the app2track/admin.py file to your needs.
  3. enter annot by command line.
  4. run python manage.py makemigrations.
  5. run python manage.py migrate.
  6. edit the es_table constant and the os.system datadump call in annot/web/appacaxis/management/commands/experiment_pkgjson.py to have the backup packing properly updated.
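
As an illustration, a protocol-specific tracking model added to models.py might look like the following minimal Django sketch (model and field names are hypothetical, not code shipped with annot):

    from django.db import models

    # hypothetical tracking model; adapt the fields to your experimental protocol
    class StainingRun(models.Model):
        run_date = models.DateField()                # date the assay was run
        protocol = models.CharField(max_length=256)  # protocol document reference
        staff = models.CharField(max_length=256)     # staff member who ran the assay

        def __str__(self):
            return "{} {}".format(self.run_date, self.protocol)

Remember to register any new model in admin.py (e.g. admin.site.register(StainingRun)) so that it shows up in the GUI.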

HowTo disable the date tracking app?

  1. open annot/web/prjannot/settings.py with a text editor.
  2. inside the INSTALLED_APPS tuple use a hashtag # to comment out app2track.
  3. save the settings.py and leave the editor.
  4. run docker-machine restart an0, assuming your docker machine’s name is an0.
  5. reload the http://192.168.99.100/admin/ page in your browser. App2Track should be gone.
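
The relevant part of settings.py might then look roughly like this (a sketch; the surrounding entries will differ):

    INSTALLED_APPS = (
        # ...
        # 'app2track',  # commented out to disable the date tracking app
        # ...
    )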

HowTo handle study and investigation?

  1. click the black colored Studies and Add link to gather Assay_Runs into a study.
  2. click the black colored Investigation and Add link to gather Studies into an investigation. Those pages should be quite self-explanatory.

Django


HowTo enable the django-debug-toolbar?

  1. open annot/web/prjannot/settings.py with a text editor.
  2. delete the hashtags in front of DEBUG_TOOLBAR_CONFIG = { "SHOW_TOOLBAR_CALLBACK" : lambda request: True, }.
  3. inside the INSTALLED_APPS tuple delete the hashtag in front of debug_toolbar.
  4. inside the MIDDLEWARE_CLASSES tuple delete the hashtag in front of debug_toolbar.middleware.DebugToolbarMiddleware.
  5. save the settings.py and leave the editor.
  6. enter annot from the command line.
  7. run python manage.py collectstatic.
  8. exit the container.
  9. run docker-machine restart an0, assuming your docker machine’s name is an0.
  10. reload the http://192.168.99.100/admin/ page in your browser.
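
After the edits, the previously commented lines in settings.py should be active and look roughly like this (a sketch; surrounding entries omitted):

    DEBUG_TOOLBAR_CONFIG = {
        "SHOW_TOOLBAR_CALLBACK": lambda request: True,
    }

    INSTALLED_APPS = (
        # ...
        'debug_toolbar',
    )

    MIDDLEWARE_CLASSES = (
        # ...
        'debug_toolbar.middleware.DebugToolbarMiddleware',
    )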

Docker


HowTo install the docker platform?

Docker is able to run on Linux, Mac OSX, MS Windows, and many cloud platform flavors.

Install Docker Engine, Docker Machine and Docker Compose as described here: Install Docker.

HowTo run the docker platform?

This howto will get you familiar with docker, as much as is needed to run docker as an annot user or developer.

To successfully run docker you have to know a whole set of docker commands, from the docker engine, docker-compose, and docker-machine. The section below introduces a minimal set of commands needed to run annot. It is worthwhile to check out the list of all available docker engine, docker-compose, and docker-machine commands. There are many nice commands that may be very helpful for your specific application.

The docker platform can be booted either by starting the docker engine or by firing up a docker-machine. Annot as such could run solely with the docker engine and docker-compose. However, we have chosen to make use of docker-machine to allow one physical computer to run more than one development version, or a development and a deployed version, simultaneously.

docker-machine commands

In this example the docker machine’s name is dev:

  • docker-machine lists all available docker-machine commands.
  • docker-machine create --driver virtualbox dev makes a docker machine labeled dev. Default disk size is 20GB.
  • docker-machine create --driver virtualbox --virtualbox-disk-size "1000000" dev makes a docker machine labeled dev with 1TB of disk space. The disk size is always given in MB.
  • docker-machine start dev starts docker machine dev.
  • docker-machine ls lists all docker-machines (docker environments).
  • docker-machine ip dev gets the IP address of the dev machine, to connect to it e.g. by a web browser.
  • docker-machine env dev gets dev’s environment variables.
  • eval "$(docker-machine env dev)" sets dev’s environment variables.
  • docker-machine regenerate-certs dev recreates IP certificates if needed. Usually IPs are given in the order the machines are started; in case the IP of dev changed, the certificates have to be regenerated.
  • docker-machine upgrade dev upgrades the dev machine to the latest version of docker.

docker engine commands

In the docker world, you have to be able to distinguish between a docker image and a docker container. A docker image is a synonym for a container class (or type); a docker container is an actual instance of such a container class.

Basic docker image related commands. In this example the image is labeled annot_web and has the id 0b8a78c6c379:

  • docker lists all the docker engine commands.
  • docker images lists all images.
  • docker rmi 0b8a78c6c379 deletes the image with id 0b8a78c6c379.
  • docker rmi annot_web deletes the image labeled annot_web.

Basic docker container related commands. In this example the container is labeled annot_web_1 and has the id 290ebef76c11:

  • docker lists all the docker engine commands.
  • docker ps lists running containers.
  • docker ps -a lists all containers.
  • docker run annot_web ls runs the ls command in a new container instance built from the annot_web image.
  • docker exec annot_web_1 ls executes the ls command in the annot_web_1 container instance.
  • docker exec -ti annot_web_1 /bin/bash opens an interactive terminal running the bash shell inside the annot_web_1 container.
  • docker start annot_web_1 starts a stopped container.
  • docker restart annot_web_1 restarts a running container.
  • docker stop annot_web_1 stops a running container.
  • docker rm annot_web_1 annot_nginx_1 deletes one or more containers.
  • docker rm -v annot_fsdata_1 deletes the annot_fsdata_1 container including the volume inside the container.

The slight difference between run and exec: the docker run command works on the annot_web image and actually builds a new container to run the command; the new container will be labeled automatically. docker exec instead runs the command in an existing container; no new container will be built. In the case of annot you usually do not want to create a new container.

docker-compose commands

Web applications like annot are usually built out of many containers. For example, the development version of annot is built out of five containers: annot_nginxdev_1, annot_webdev_1, annot_fsdata_1, annot_db_1, annot_dbdata_1. To orchestrate the whole container set you can run docker-compose commands. Nevertheless, it is important to know the low level docker engine commands, to be able to deal with a single container out of the set. To run docker-compose commands, the container set and the connections between the containers have to be specified in a related yml file. In the following example this is the dcdev.yml file:

  • docker-compose lists all the docker-compose commands.
  • docker-compose -f dcdev.yml build builds or rebuilds the container set.
  • docker-compose -f dcdev.yml up starts the container set. This does not give the prompt back, but prints detailed output about the running containers. Press ctrl + c to stop the containers.
  • docker-compose -f dcdev.yml up -d starts the container set in daemon mode. This gives the prompt back, but no detailed output about the running containers.
  • docker-compose -f dcdev.yml ps lists all running containers.
  • docker-compose -f dcdev.yml ps -a lists all containers.
  • docker-compose -f dcdev.yml start starts the container set.
  • docker-compose -f dcdev.yml restart restarts the container set.
  • docker-compose -f dcdev.yml stop stops the container set.
  • docker-compose -f dcdev.yml rm removes the stopped container set.

PostgreSQL


HowTo enter the postgresql database?

From the command line execute the following steps:

  1. docker exec -ti annot_db_1 /bin/bash to enter the annot_db_1 docker container as unix root user.
  2. su postgres -s /bin/bash to switch to unix postgres user.
  3. psql -U annot -d annot to enter the postgresql shell as the database user named annot, connecting to the database named annot.
  4. \q to quit the postgresql shell.
  5. exit to exit as unix postgres user.
  6. exit to exit as unix root user and thereby leave the annot_db_1 docker container.
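
Once inside the postgresql shell, a few standard psql commands come in handy:

  • \l lists all databases.
  • \dt lists all tables in the connected database.
  • \du lists all database users (roles).
  • \d tablename describes the columns of the table named tablename.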