Listing posts

Setup Deluge torrent manager
Last update: 2021-03-25

sudo apt-get install deluged deluge-console

# don't start at boot
sudo systemctl stop    deluged
sudo systemctl disable deluged

# now, as the desired user, create cfg files
deluged && pkill deluged

# add a new full user to deluge (10 = max level)
echo "name:password:10" >> .config/deluge/auth
deluge-console "config -s allow_remote True" # enable thin clients
pkill deluged && deluged                     # restart daemon

# install the web ui
sudo apt-get install deluge-web
deluge-web --fork
firefox http://localhost:8112/ # password: deluge

# on another host
sudo apt-get install deluge-gtk
deluge-gtk # use thin client mode, then configure connection manager

Set up an IP blocklist thanks to johntyree's Quora question, his useful gist and the resulting big list, built by concatenating all the bluetack files from iblocklist.com.
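
A rough sketch of the concatenation step (the URLs and the output file are only illustrative; take the real gzipped list URLs from iblocklist.com):

# hypothetical example: fetch a few bluetack lists and merge them into one file
LIST_URLS="https://example.com/bluetack-level1.gz https://example.com/bluetack-spyware.gz"
for url in $LIST_URLS; do
  curl -sL "$url" | gunzip
done > blocklist.p2p.txt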

You can use the attached deluge_manager.rb script to auto stop seeding and sort torrents by number of seeds/peers.

You can also use the Transdrone (aka Transdroid) Android app to remotely manage deluge.

Note: since libtorrent >= 0.16 you can no longer set a download/upload speed limit, because it is automatically managed by the µTP protocol, which is enabled by default.

You can disable it by installing the ltconfig plugin and setting:

enable_outgoing_utp = false
enable_incoming_utp = false

but it is better to keep it enabled. You can also disable TCP and let µTP do all the work by setting:

enable_outgoing_utp = true
enable_incoming_utp = true
enable_outgoing_tcp = false
enable_incoming_tcp = false

On a low-end system like a Raspberry Pi running off an SD card you can reduce disk I/O for the fastresume and state files:

# change default 200s timers to 4h
sudo sed -ri 's/( +self\.save_.+_timer.start)\([0-9]+/\1(60*60*4/' \
  /usr/lib/python2.7/dist-packages/deluge/core/torrentmanager.py

Source: Deluge HP and HP guide, HowtoGeek, tuttodinternet, kamilslab


~~~ * ~~~

Listen to FM/DAB radio with an RTL2832U
Last update: 2021-03-22

# /etc/udev/rules.d/10-local_rtl-sdr.rules
SUBSYSTEM=="usb", ATTRS{idVendor}=="0ccd", ATTRS{idProduct}=="00d3", GROUP="audio", MODE="0666", SYMLINK+="rtl-sdr"
# reload udev and plug the usb RTL2832U stick
udevadm control --reload-rules && udevadm trigger
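
Once the rtl-sdr package from the FM section below is installed, a quick way to verify that the stick is detected:

# probe the RTL2832U and run a short tuner/sample-loss test
rtl_test -t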

FM

apt install rtl-sdr

# play an FM radio station at 107.3 MHz (200 kHz sample rate, resampled to 48 kHz)
rtl_fm -f 107.30e6 -M wbfm -s 200000 -r 48000 - | aplay -r 48000 -f S16_LE

DAB

Using terminal-DAB-xxx:

apt-get install \
  git cmake build-essential g++ pkg-config libsndfile1-dev libfftw3-dev portaudio19-dev zlib1g-dev  libusb-1.0-0-dev libsamplerate0-dev ncurses-base \
  libfaad-dev librtlsdr-dev

git clone https://github.com/JvanKatwijk/terminal-DAB-xxx.git

cd terminal-DAB-xxx
mkdir build && cd build
cmake .. -DRTLSDR=ON -DFAAD=ON -DPICTURES=OFF
make
mv terminal-DAB-rtlsdr t-dab && strip t-dab

# http://www.air-radio.it/T_DAB.html
# 12A  EuroDAB     http://www.litaliaindigitale.it/radio-dab/mux-eurodab-italia
# 12C  DAB Italia  https://www.dab.it
# 12D  DAB+ RAI    http://www.rai.it/dl/DigitalRadio/dab_raiway.html
./t-dab -Q -C 12C -S R101  # play service "R101" on channel "12C" with autogain

See also: opendigitalradio.org, Qt-DAB app, welle.io DAB app, cubicsdr FM app, dablin + dabtools/eti-cmdline


~~~ * ~~~

Setup a shared scanner
Last update: 2021-03-22

apt-get install sane sane-utils

scanimage -L # check if a local scanner is found

# add hostnames and/or IP addresses/subnets
pico /etc/sane.d/saned.conf # eg: localhost, 127.0.0.1 and 192.168.1.0/24

# enable local users to see the scanner
adduser username scanner

systemctl enable saned.socket
systemctl start  saned.socket

Configure a client host:

# edit /etc/sane.d/net.conf:
#   - uncomment the net backend entry
#   - and add the IP address/hostname of the sane server
scanimage -L # check the remote scanner is found
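
Once the remote scanner is visible, an actual scan from the client might look like this (the device name is illustrative, copy the real one from scanimage -L; options depend on the backend):

# scan a page at 300 dpi from the remote scanner into a TIFF file
scanimage -d 'net:sane-server:some-backend:dev0' --resolution 300 --format=tiff > scan.tiff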

On Windows you can use SaneTwain, on Android the SaneDroid app.

Note: If your sane-utils < 1.0.25 is broken under systemd, fix it with these steps:

apt-get install libsystemd-dev

In /lib/systemd/system/saned@.service set StandardInput/Output/Error=socket and append Alias=saned.service to the [Install] section:

# /lib/systemd/system/saned@.service
[Unit]
Description=Scanner Service
Requires=saned.socket

[Service]
ExecStart=/usr/sbin/saned
User=saned
Group=saned
StandardInput=socket
StandardOutput=socket
StandardError=socket
Environment=SANE_CONFIG_DIR=/etc/sane.d
# Environment=SANE_CONFIG_DIR=/etc/sane.d SANE_DEBUG_DLL=255

[Install]
Also=saned.socket
Alias=saned.service

systemctl daemon-reload
systemctl enable  saned.socket
systemctl restart saned.socket

Fix permissions on the USB device:

# FIX for /lib/udev/rules.d/60-libsane.rules missing setfacl command
#
# https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=918358
# libsane:amd64: Missing permissions for scanner group on usb device
#
# reload rules: udevadm control --reload-rules && udevadm trigger
#
# modified version of 60-libsane.rules from debian 9-10
ACTION!="add"             , GOTO="libsane_rules_end"
ENV{DEVTYPE}=="usb_device", GOTO="libsane_usb_rules_begin"
SUBSYSTEM=="usb_device"   , GOTO="libsane_usb_rules_begin"
SUBSYSTEM!="usb_device"   , GOTO="libsane_usb_rules_end"

LABEL="libsane_usb_rules_begin"

# Canon CanoScan LiDE 220
ATTRS{idVendor}=="04a9", ATTRS{idProduct}=="190f", ENV{libsane_matched}="yes"

# The following rule will disable USB autosuspend for the device
ENV{DEVTYPE}=="usb_device", ENV{libsane_matched}=="yes", TEST=="power/control", ATTR{power/control}="on"

LABEL="libsane_usb_rules_end"

ENV{libsane_matched}=="yes", RUN+="/bin/setfacl -m g:scanner:rw $env{DEVNAME}"

LABEL="libsane_rules_end"

Source: debian wiki, std* fix


~~~ * ~~~

Thunderbird useful add-ons and tricks
Last update: 2021-03-21

Add-ons

Obsolete

Date format

Thunderbird uses the system locale settings to display dates and times; you can change the format by setting the LANG environment variable:

#!/bin/bash
# http://kb.mozillazine.org/Date_display_format
# https://bugzilla.mozilla.org/show_bug.cgi?id=1426907
export LANG=it_IT.UTF-8 # DD/MM/YYYY HH.MM
exec /opt/thunderbird/thunderbird

Dark theme

  1. Install TT DeepDark theme
  2. Preferences > Display > Formatting > Fonts & Colors > Colors button, then swap Text and Background colors
  3. If you write HTML emails: Preferences > Composition > General, then swap Text and Background colors

~~~ * ~~~

Docker howto
Last update: 2021-03-19

Installation on Debian

# check system compatibility
modprobe configs # loads /proc/config.gz
wget -q -O - https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh | \
  bash | tee docker-check.txt

# install docker: key, repo, packages
apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -

# amd64 - x64
echo "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker-ce.list
# armhf - x32 / raspberry pi / raspbian
echo "deb [arch=armhf] https://download.docker.com/linux/raspbian $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker-ce.list

apt-get update && apt-get install docker-ce

# allow user to use docker
usermod -aG docker username

# test installation
docker version
docker info

# run a simple test image
docker run hello-world

See also the post install docs for troubleshooting DNS/network/remote access.

On the Raspberry Pi just use curl -sSL https://get.docker.com | sh (the apt repo does not work there).

Configure daemon

mkdir -p        /path/to/data
chown root.root /path/to/data
chmod 711       /path/to/data
echo '{ "data-root": "/path/to/data" }' > /etc/docker/daemon.json
systemctl restart docker

echo '{ "log-driver": "local" }' > /etc/docker/daemon.json

Creating an image (ref, best practices)

touch Dockerfile # and fill it
docker build -t test-myimg . # create the image with a tag

# test run image
docker run -p 4000:80    test-myimg
docker run -it test-myimg /bin/bash

# run image detached/on background
docker run -p 4000:80 -d --name tmi test-myimg
docker container ls -a
docker container stop <container_id>
docker container start -i tmi # restart container
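
For reference, a minimal Dockerfile sketch to fill in after the touch step above (the contents are purely illustrative, adapt them to your application):

cat > Dockerfile <<'EOF'
# minimal example image: a debian base with the project copied into /app
FROM debian:stable-slim
RUN apt-get update && \
    apt-get install -y --no-install-recommends ca-certificates && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY . /app
EXPOSE 80
CMD ["/bin/bash"]
EOF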

Interact (ref)

# run interactive shell into debian image (temporary)
docker run --name prova --rm -it debian /bin/bash 

# run interactive shell into debian image
docker run -it debian /bin/bash 

apt-get update

apt-get install -y dialog nano ncdu
apt-get install -y locales

localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8
echo "LANG=en_US.utf8" >> /etc/environment

rm -rf /var/lib/apt/lists/*

docker commit e2b7329257ba myimg:v1

docker run --rm -it myimg:v1 /bin/bash

# run a command in a running container
docker exec -ti a123098734e bash -il

docker stop a123098734e
docker kill a123098734e

Save & restore

# dump image
docker save imgname | gzip > imgname.tgz
zcat imgname.tgz | docker load

# dump container
docker create --name=mytemp imgname
docker export mytemp | gzip > imgname-container.tgz

# flatten image layers (losing Dockerfile) from a container
docker export <id> | docker import - imgname:tag

Registry - Image repository

# push image to gitlab registry
docker login registry.gitlab.com
docker tag test-myimg registry.gitlab.com/username/repo:tag # add new tag...
docker rmi test-myimg # ...and remove the old tag
docker push registry.gitlab.com/username/repo:tag

DockerHub official base images links: debian, ruby, rails, redis, nginx.

Available free registry services:

Name         # Priv/Pub   Notes
gitlab       inf/ND       1 prj x registry
treescale    inf/inf      max 500 pulls & 50GB
canister     20/ND        very good service
docker hub   1/inf        perfect

Running an ARM image on x86

# https://ownyourbits.com/2018/06/27/running-and-building-arm-docker-containers-in-x86/
apt-get install qemu-user-static

docker run \
  -v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static \
  -e LANG=en_US.utf8 -ti --name myarmimg arm32v7/debian:wheezy

[...]

docker commit myarmimg myarmimg

docker container prune -f

docker run \
  -v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static \
  -ti --rm --name myarmimg \
  myarmimg /bin/bash -il

Compose (ref, dl) - Services

# docker-compose.yml
version: "3"
services:
  web:
    image: username/repo:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "4000:80"
    networks:
      - webnet
networks:
  webnet:

# install docker-compose
curl -L  -o /usr/local/bin/docker-compose https://github.com/docker/compose/releases/download/1.24.0-rc1/docker-compose-`uname -s`-`uname -m`
chmod 755 /usr/local/bin/docker-compose

docker swarm init

docker stack deploy --with-registry-auth -c docker-compose.yml getstartedlab
docker service ls
docker service ps getstartedlab_web # or docker stack ps getstartedlab

# change the yml file and restart service
docker stack deploy --with-registry-auth -c docker-compose.yml getstartedlab
docker service ps getstartedlab_web
docker container prune -f

# stop & destroy service
docker stack rm getstartedlab
docker container prune -f

# leave the swarm
docker swarm leave --force

Machine (ref, dl) - SWARM/Provisioning

Remember to update the host firewall: open port 2376 and do not apply rate limits on port 22.
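
For example, assuming ufw is the firewall in use (adjust for your setup):

# open the docker-machine/TLS API port and keep ssh reachable
# (plain "allow" rather than "limit", so port 22 is not rate limited)
ufw allow 2376/tcp
ufw allow 22/tcp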

On the fish shell you can install the useful omf plugin-docker-machine to easily select the current machine.

Without an officially supported driver we can use the generic one. Install docker-ce on your worker nodes, then on your swarm manager host:

# install docker-machine
curl -L -o /usr/local/bin/docker-machine https://github.com/docker/machine/releases/download/v0.16.1/docker-machine-`uname -s`-`uname -m`
chmod 755 /usr/local/bin/docker-machine

# set up each VM (this creates and shares the certificates for a secure
# connection between your client and the daemon running on the server)
ssh-copy-id -i ~/.ssh/id_rsa user@ww.xx.yy.zz
docker-machine create --driver generic --generic-ssh-key ~/.ssh/id_rsa \
  --generic-ip-address=ww.xx.yy.zz myvm1

ssh-copy-id -i ~/.ssh/id_rsa user@ww.xx.yy.kk
docker-machine create --driver generic --generic-ssh-key ~/.ssh/id_rsa \
  --generic-ip-address=ww.xx.yy.kk myvm2

docker-machine ls

# run a command via ssh in a VM
docker-machine ssh myvm1 "ls -l"                 # use internal SSH lib
docker-machine --native-ssh ssh myvm1 "bash -il" # use system SSH lib

# set env to run all docker commands remotely on a VM
eval $(docker-machine env myvm1) # on bash
docker-machine use myvm1         # on fish + omf plugin-docker-machine

# set VM1 to be a swarm manager
docker-machine use myvm1
docker swarm init # --advertise-addr ww.xx.yy.zz
docker swarm join-token worker # get token for adding worker nodes

# set VM2 to join the swarm as a worker
docker-machine use myvm2
docker swarm join --token SWMTKN-xxx ww.xx.yy.zz:2377

# check cluster status on your local machine...
docker-machine ls
# ...or on the manager node
docker-machine use myvm1
docker node ls

# locally login on your registry...
docker-machine unset
docker login registry.gitlab.com
# ...then deploy the app on the swarm manager
docker-machine use myvm1
docker stack deploy --with-registry-auth -c docker-compose.yml getstartedlab
docker service ls
docker service ps getstartedlab_web

# access cluster from any VM's IP
curl http://ww.xx.yy.zz:4000
curl http://ww.xx.yy.kk:4000

# eventually re-run "docker stack deploy ..." to apply changes

# undo app deployment
docker-machine use myvm1
docker stack rm getstartedlab

# remove the swarm
docker-machine ssh myvm2 "docker swarm leave"
docker-machine ssh myvm1 "docker swarm leave --force"

Stack / Deploy application

# docker-compose.yml
version: "3"
services:
  web:
    image: username/repo:tag
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
    ports:
      - "80:80"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
  redis:
    image: redis
    ports:
      - "6379:6379"
    volumes:
      - "/home/docker/data:/data"
    deploy:
      placement:
        constraints: [node.role == manager]
    command: redis-server --appendonly yes
    networks:
      - webnet
networks:
  webnet:

docker-machine use myvm1
docker-machine ssh myvm1 "mkdir ./data" # create redis data folder

# run stack / deploy app
docker stack deploy -c docker-compose.yml getstartedlab
docker stack ps getstartedlab

# show deployed services and restart one
docker service ls
docker service update --force getstartedlab_web

firefox http://<myvm1-ip>:8080/ # docker visualizer
redis-cli -h <myvm1-ip>         # interact with redis

docker stack rm getstartedlab

Init process to reap zombies and forward signals

  • single process: tini (use docker run --init or init: true in docker-compose.yml; see the example after this list)
  • multiprocess: s6 and s6-overlay
  • init systems comparison
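
A minimal example of the single-process case (the image and command are only illustrative):

# --init makes docker inject tini as PID 1, reaping zombies and forwarding signals
docker run --init --rm -it debian /bin/bash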

SWARM managers

Container-Host user remapping

You can map container users to the host ones for greater security.

  • put myuser:100000:65536 (start:length) in /etc/subuid and /etc/subgid; this defines the mapping id range 100000-165535 available to the host user myuser
  • configure docker daemon to use the remapping specified for myuser:

    echo '{ "userns-remap": "myuser" }' > daemon.json
    systemctl restart docker
    

    note that all images will reside in a /var/lib/docker subfolder named after myuser ids

  • now all your container user/group ids will be mapped to 100000+id on the host (a quick check is sketched below)
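
A quick way to verify that the remapping is active after restarting the daemon (the container name is illustrative):

# with the remap active, container root shows up on the host as uid 100000
docker run -d --rm --name remaptest debian sleep 60
ps -eo uid,comm | grep sleep
docker stop remaptest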

You can write up to 5 ranges in the sub* files for each user; in this example we keep identical ids for users 0-999 and map ids >= 1000 to id+1:

myuser:0:1000
myuser:1001:65536

UFW Firewall interactions

Docker bypasses UFW rules and published ports can be accessed from outside.

See a solution involving DOCKER-USER and ufw-user-forward/ufw-user-input chains.
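
A heavily simplified illustration of the idea (the linked solution is more complete and also covers the input path):

# make forwarded container traffic traverse UFW's user rules before Docker accepts it
iptables -I DOCKER-USER -j ufw-user-forward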


Terms:

  • service = a set of containers that all run the same image,
  • task = a single container running in a service,
  • swarm = a cluster of machines running Docker,
  • stack = a group of interrelated, orchestrated and scalable services that defines and coordinates the functionality of an entire application.

Source: install, install@raspi, tutorial, overview, manage app data, config. daemon, config. containers

Source for user mapping: docker docs, jujens.eu, ilya-bystrov

Useful tips: cleanup, network host mode for nginx to get the client's real IP, limit RAM/CPU usage, docker system prune -a -f to remove all cache files

See also: thread swarm gui, docker swarm rocks