All software on the CoFyBox runs in Docker-style containers.
Please keep this page and the cofybox-balena README up to date with each other.
The software can be run in different ways:
For development: locally on a laptop with docker-compose, or in local mode on a Balena device.
For real usage: deployed to devices in the field via the Balena Cloud fleets.
When setting up a new container, please keep each of these environments in mind.
The main cofybox-balena git repository can be viewed here. The services are managed by the docker-compose.yml file in the Cofybox Balena repository (and by the docker/local.yml file, when running on a computer).
In combination with the docker-compose file, environment variables can toggle services on or off on a system; these can be altered for each Cofybox in the Balena cloud dashboard, according to which integrations the user needs. (For more information, see the Configuration page.)
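The exact toggling mechanism is described on the Configuration page; as a rough illustration only, a service's entrypoint can check an enable flag and exit cleanly when that integration is not wanted on a given box (the variable and file names below are illustrative, not necessarily what the repo uses):

#!/bin/sh
# Illustrative entrypoint: only start the service if it is enabled for this box.
# Exiting with status 0 plays nicely with restart: on-failure (see further down).
if [ "${COFYBOX_ENABLE_SERVICE:-false}" != "true" ]; then
    echo "Service disabled via environment variable, exiting."
    exit 0
fi
exec python /usr/src/app/main.py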
The deployment has been set up in such a way that it can easily be run and tested locally on a laptop. This is facilitated by using the standard overriding behaviour present in docker-compose in conjunction with the balena multi-arch/platform images. The different requirements for running locally are contained in the docker/local.yml override file. If you're running Ubuntu, you will also need the docker/ubuntu/ubuntu.yml override file.
To test service(s) locally run:
docker-compose -f docker-compose.yml -f docker/local.yml up mosquitto <service 1> <service 2> [...]
(most services depend on the MQTT broker, so mosquitto should normally be included).
For example, if you wish to test the glue component, you need to run:
docker-compose -f docker-compose.yml -f docker/local.yml up --build mosquitto glue
Or to run homeassistant and related services:
docker-compose -f docker-compose.yml -f docker/local.yml up homeassistant nginx-reverse-proxy cofybox-config mosquitto
On Ubuntu, AppArmor complains about the containers accessing dbus. To get around this, you can use the profile provided with:
sudo apparmor_parser -r docker/ubuntu/cofybox-balena-apparmor-policy
You can also copy this into /etc/apparmor.d/ if you want it to be persistent across restarts.
You then need to ln or cp ubuntu.yml from docker/ubuntu to the top level and add an additional -f flag to your docker-compose command (similar to what you do for docker/local.yml), as shown below.
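With ubuntu.yml linked to the top level, the full command looks something like this (reusing the glue example from above):

docker-compose -f docker-compose.yml -f docker/local.yml -f ubuntu.yml up --build mosquitto glue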
Sometimes you may wish to reset the state of services locally. The 'state' of a service is stored in the persistent volumes so these must be removed.
To do this, first use docker-compose down to remove the (stopped) containers:
docker-compose -f docker-compose.yml -f docker/local.yml down <service 1>
Then remove the relevant volumes using:
docker volume rm <volume>
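If you are unsure of the volume name, list the existing volumes first:

docker volume ls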
It is possible to develop "locally" on a balena-based device on your home network.
To do this, download a "development" image from the cofybox-staging fleet and follow the instructions detailed here: https://www.balena.io/docs/learn/develop/local-mode/
Push new code to the device with:
balena push <IP of device>
It should keep a livepush service running, which will detect changes to the code and restart the containers. This is not all that reliable, so handle with care!
Sometimes this works better if you're connected to the box's wifi network. You can test for availability using balena scan to scan for your box, and curl http://<IP of device>:48484/ping to check if the API is responsive. Sometimes it works better when run with sudo (e.g. sudo ~/balena-cli/balena push) but this should be avoided if possible.
Local push sometimes seems to get stuck or fail. You can manually rescue this by using balena ssh to get into the device, restarting the supervisor container, and then retrying.
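For example (the supervisor container is normally named balena_supervisor, or resin_supervisor on older balenaOS versions):

balena ssh <IP of device>
balena restart balena_supervisor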
If this doesn't work, it can help to reset the device. There are many ways to do this, but the easiest is pushing a 'blank' docker-compose.yml file using balena push, e.g. a file containing only:
version: '2'
This will kill all running services. You can then balena ssh into the device and use the cleanup command:
balena system prune --volumes
You can get a container to stay alive / idle using the balena-idle command. This can be useful for debugging.
Sometimes it is useful to create a new fleet for testing and push the code there. For example, we have done this to test integration of a new brand of device to which no staff had direct access. In this case:
balena push --debug <fleet name>
Continuous Integration and Deployment is configured using the Gitlab CI/CD service, deploying changes committed to the repository to devices in the field.
Please refer to the balena-cli documentation and installation instructions, which can be found here. The deployment uses the documented balena deploy workflow, i.e. we build the balena images on our own runners before pushing them to balena cloud.
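The gist of that workflow is something like the following (a sketch, not the exact CI script):

balena deploy cofybox-staging --build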
It is possible to test Balena builds locally using the --emulated flag to the balena build command, but the need for this can largely be avoided thanks to the multi-arch images:
balena build --application cofybox-staging --emulated
Two main Balena fleets have been created - cofybox-staging and cofybox-production.
These are both connected to the continuous integration services on the cofybox-balena project.
A push to the main branch will deploy to the staging fleet.
This is done by an automatically created pipeline. From the pipeline, you can push to the production fleet by triggering the manual deployment stage from the Gitlab page for that pipeline.
The CI/CD has three steps:
1. Code linting and tests: this step happens for each push on all branches. The aim of this step is to keep the code style consistent and the quality high.
2. Creation and upload of base images to AWS (shown as Preload_image): this occurs on the main branch only. New images are uploaded to an AWS S3 bucket, and can be downloaded from there when provisioning a new device.
3. Deployment of an up-to-date build to existing Balena devices: this occurs on the main branch only (see above).
Devices in the staging environment should run the most up-to-date code - they should automatically update to a new release when it becomes available. It can be useful to pin a couple of devices to the current release, push to main, and then update the devices to 'track latest' one by one in order to watch them for errors or interesting logs.
We use docker-compose to manage the different services and the relationships/communication between them. In the docker-compose.yml file you will find restart: always on the essential services, to reduce the chance of a box being down. However, this clashes with being able to use the COFYBOX_ENABLE_SERVICE environment variable, so we don't apply it to all services (we use restart: on-failure for those).
Some notes on how the services can be reached:
From the Host OS of a balena device, containers are reached by their docker-network IP address: for example, curl 172.18.4.9:80 will show you what nginx returns at port 80.
Between containers, service names resolve directly: http://homeassistant:8123
should hit homeassistant from any container - but not from the Host OS.
If you want an external service to be able to communicate with a container (for example, an EV charger talking to homeassistant via a websocket, or a TCP connection for eesmart d2l), the best way is to go via nginx. To do this, add a new entry in http.conf or nginx.conf which points a new URL (e.g. /ocpp/) at the target (e.g. http://homeassistant:9000). Things we've learnt:
It works better to set $target_name http://container:port/ and then proxy_pass $target_name than to just proxy_pass http://container:port/.
The target service should listen on 0.0.0.0 (i.e. NOT localhost).
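A minimal sketch of what such an nginx entry could look like, reusing the /ocpp/ and homeassistant:9000 examples above (real entries may also need websocket upgrade or proxy headers, depending on the service):

location /ocpp/ {
    # Resolve the container name at request time via Docker's embedded DNS;
    # this is usually needed when proxy_pass uses a variable.
    resolver 127.0.0.11 valid=30s;
    set $target_name http://homeassistant:9000/;
    proxy_pass $target_name;
}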
Where possible, the folder layout should be kept consistent, as below:
cofybox
+-- services
|   +-- container1
|   |   +-- docker
|   |   |   +-- Dockerfile
|   |   +-- src
|   |       +-- code1.py
|   |       +-- requirements.txt
|   +-- container2
|   |   +-- docker
|   |   |   +-- Dockerfile
|   |   +-- src
|   |       +-- code2.py
|   |       +-- requirements.txt
|   +-- docker-compose.yml
|   +-- local.yml
Each container has a docker folder and a src folder containing the source code. Containers should also have a local dockerfile postfixed with .local.
An advantage of the Balena ecosystem is that they provide base images for use in development, cloud, and device environments, preconfigured for a variety of languages and frameworks (python/node etc.).
These are available on Docker Hub and conform to a standard naming format described here: https://www.balena.io/docs/reference/base-images/base-images/#how-the-image-naming-scheme-works
It is recommended wherever possible to use these base images. It is possible to use build arguments to automatically switch between the relevant images depending on the environment.
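For example, a Dockerfile could take the base image as a build argument, with a different value supplied when building locally (the image names and argument name here are illustrative, not necessarily those used in the repo):

# Illustrative Dockerfile: the base image is switched via a build argument
ARG BASE_IMAGE=balenalib/raspberrypi4-64-python:3.9-run
FROM ${BASE_IMAGE}
WORKDIR /usr/src/app
COPY src/requirements.txt .
RUN pip install -r requirements.txt
COPY src/ .
CMD ["python", "code1.py"]

An override file such as docker/local.yml could then pass something like BASE_IMAGE: python:3.9-slim under the service's build args for local runs.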
To get output as JSON from docker or balena commands that support formatting (such as ps and inspect), add the flag:
--format '{{json .}}'
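For example:

docker ps --format '{{json .}}'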
To see all running containers:
docker ps
These assume that the container name is in the format cofybox-balena_<SERVICE>_1, which it is by default - if a command doesn't work, check the name with the docker ps command above.
To log into a bash shell in a container:
docker exec -it cofybox-balena_<SERVICE>_1 bash
To restart a container:
docker container restart cofybox-balena_<SERVICE>_1
To clean the system:
docker system prune && docker volume prune
When you log into the host OS of a balena device (e.g. balena ssh <uuid> or via the Balena front end), you can run the same commands as above, but with balena replacing docker.
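For example, to list the running containers on the device from the host OS:

balena ps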
To see the lease table (i.e. which devices are connected via wifi to the cofybox network):
cat /var/lib/NetworkManager/dnsmasq-wlan0.leases
It's harder if a device is connected via ethernet - someone locally can connect to the cofybox network and use an app like Fing, or you can guess the IP!
To see which services are connected to the docker network, first list the networks:
balena network ls
Then, using the name of the cofyboxnet network:
balena network inspect <COFYBOX_NET_NAME> | grep Name
To subscribe to all MQTT messages:
mosquitto_sub -t '#' -v
If you only want certain topics, you can use something more specific than the wildcard '#'.
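For example, to follow only messages under a given prefix (the topic prefix here is purely illustrative - check the actual topic names with the wildcard first):

mosquitto_sub -t 'homeassistant/#' -v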