Listing posts
2025-02-21
- in a small pot, bring water to a boil with a pinch of salt and some vinegar
- create a whirlpool
- pour the egg in
- wait at least 2 minutes
- retrieve the egg with a small strainer
Source: giallozafferano
~~~ * ~~~
2025-02-21
Installation on debian
```sh
# check system compatibility
modprobe configs   # loads /proc/config.gz
wget -q -O - https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh | \
  bash | tee docker-check.txt

# install docker: key, repo, packages
apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
# amd64 - x64
echo "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker-ce.list
# armhf - x32 / raspberry pi / raspbian
echo "deb [arch=armhf] https://download.docker.com/linux/raspbian $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker-ce.list
apt-get update && apt-get install docker-ce

# allow user to use docker
usermod -aG docker username

# test installation
docker version
docker info
# run a simple test image
docker run hello-world
```
See also post install for troubleshooting DNS/network/remote access.
On Raspberry Pi just use curl -sSL https://get.docker.com | sh (the repo does not work there).
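A hedged post-install sketch (assuming a systemd host; see the linked post-install page for the full checklist):

```sh
systemctl enable --now docker  # start the daemon and enable it at boot
newgrp docker                  # pick up the docker group without logging out
docker run --rm hello-world    # verify the non-root setup works
```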
Configure daemon
- change docker data folder location
```sh
mkdir -p /path/to/data
chown root.root /path/to/data
chmod 711 /path/to/data
echo '{ "data-root": "/path/to/data" }' > /etc/docker/daemon.json
systemctl restart docker
```
- change the default logging driver
```sh
echo '{ "log-driver": "local" }' > /etc/docker/daemon.json
```
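Note that each echo above overwrites /etc/docker/daemon.json, so multiple daemon options must live in the same file; a sketch combining the two keys shown above (the data path is a placeholder):

```sh
cat > /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/path/to/data",
  "log-driver": "local"
}
EOF
systemctl restart docker
```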
Creating an image (ref, best practices)
```sh
touch Dockerfile              # and fill it
docker build -t test-myimg .  # create the image with a tag

# test run image
docker run -p 4000:80 test-myimg
docker run -it test-myimg /bin/bash

# run image detached/on background
docker run -p 4000:80 -d --name tmi test-myimg
docker container ls -a
docker container stop <container_id>
docker container start -i tmi  # restart container
```
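The Dockerfile contents are left open above; a hypothetical minimal example that matches the -p 4000:80 test runs (a throwaway nginx image, not an actual recommendation):

```sh
cat > Dockerfile <<'EOF'
FROM debian:stable-slim
RUN apt-get update && \
    apt-get install -y --no-install-recommends nginx && \
    rm -rf /var/lib/apt/lists/*
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
EOF
docker build -t test-myimg .
```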
Interact (ref)
```sh
# run interactive shell into debian image (temporary)
docker run --name prova --rm -it debian /bin/bash

# run interactive shell into debian image
docker run -it debian /bin/bash
apt-get update
apt-get install -y dialog nano ncdu
apt-get install -y locales
localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8
echo "LANG=en_US.utf8" >> /etc/environment
rm -rf /var/lib/apt/lists/*
docker commit e2b7329257ba myimg:v1
docker run --rm -it myimg:v1 /bin/bash

# run a command in a running container
docker exec -ti a123098734e bash -il
docker stop a123098734e
docker kill a123098734e
```
Save & restore
```sh
# dump image
docker save imgname | gzip > imgname.tgz
zcat imgname.tgz | docker load

# dump container
docker create --name=mytemp imgname
docker export mytemp | gzip > imgname-container.tgz

# flatten image layers (losing Dockerfile) from a container
docker export <id> | docker import - imgname:tag
```
Registry - Image repository
```sh
# push image to gitlab registry
docker login registry.gitlab.com
docker tag test-myimg registry.gitlab.com/username/repo:tag  # add new tag...
docker rmi test-myimg                                        # ...and remove the old tag
docker push registry.gitlab.com/username/repo:tag
```
Tips
```sh
# remove untagged image -- https://stackoverflow.com/a/33913711/13231285
docker images --digests
docker image rm image-name@sha256:xxxxxxxxxxxxxxxxxxxxxxxxxx
```
Docker Hub official base image links: debian, ruby, rails, redis, nginx.
Available free registry services:
| Name       | # Priv/Pub | Notes                 |
|------------|------------|-----------------------|
| gitlab     | inf/ND     | 1 prj x registry      |
| treescale  | inf/inf    | max 500 pulls & 50GB  |
| canister   | 20/ND      | very good service     |
| docker hub | 1/inf      | perfect               |
Running ARM images on x86
```sh
# https://ownyourbits.com/2018/06/27/running-and-building-arm-docker-containers-in-x86/
apt-get install qemu-user-static

docker run \
  -v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static \
  -e LANG=en_US.utf8 -ti --name myarmimg arm32v7/debian:wheezy

[...]

docker commit myarmimg myarmimg
docker container prune -f

docker run \
  -v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static \
  -ti --rm --name myarmimg \
  myarmimg /bin/bash -il
```
Compose (ref, dl) - Services
```yaml
# docker-compose.yml
version: "3"
services:
  web:
    image: username/repo:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "4000:80"
    networks:
      - webnet
networks:
  webnet:
```
```sh
# install docker-compose
curl -L -o /usr/local/bin/docker-compose https://github.com/docker/compose/releases/download/1.24.0-rc1/docker-compose-`uname -s`-`uname -m`
chmod 755 /usr/local/bin/docker-compose

docker swarm init
docker stack deploy --with-registry-auth -c docker-compose.yml getstartedlab
docker service ls
docker service ps getstartedlab_web   # or docker stack ps getstartedlab

# change the yml file and restart service
docker stack deploy --with-registry-auth -c docker-compose.yml getstartedlab
docker service ps getstartedlab_web
docker container prune -f

# stop & destroy service
docker stack rm getstartedlab
docker container prune -f

# leave the swarm
docker swarm leave --force
```
Machine (ref, dl) - SWARM/Provisioning
Remember to update the host firewall: open port 2376 and do not apply rate limits on port 22.
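As a hedged ufw translation of those two rules (assuming ufw is the host firewall):

```sh
ufw allow 2376/tcp comment 'docker daemon TLS (docker-machine)'
ufw allow 22/tcp comment 'ssh, no rate limit'   # i.e. "allow", not "ufw limit 22/tcp"
```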
On the fish shell you can install the useful omf plugin-docker-machine to easily select the current machine.
Without an officially supported driver we can use the generic one. Install docker-ce on your worker nodes, then on your swarm manager host:
```sh
# install docker-machine
curl -L -o /usr/local/bin/docker-machine https://github.com/docker/machine/releases/download/v0.16.1/docker-machine-`uname -s`-`uname -m`
chmod 755 /usr/local/bin/docker-machine

# set up each VM (this creates and shares the certificates for a secure
# connection between your client and the daemon running on the server)
ssh-copy-id -i ~/.ssh/id_rsa user@ww.xx.yy.zz
docker-machine create --driver generic --generic-ssh-key ~/.ssh/id_rsa \
  --generic-ip-address=ww.xx.yy.zz myvm1
ssh-copy-id -i ~/.ssh/id_rsa user@ww.xx.yy.kk
docker-machine create --driver generic --generic-ssh-key ~/.ssh/id_rsa \
  --generic-ip-address=ww.xx.yy.kk myvm2
docker-machine ls

# run a command via ssh in a VM
docker-machine ssh myvm1 "ls -l"                  # use internal SSH lib
docker-machine --native-ssh ssh myvm1 "bash -il"  # use system SSH lib

# set env to run all docker commands remotely on a VM
eval $(docker-machine env myvm1)   # on bash
docker-machine use myvm1           # on fish + omf plugin-docker-machine

# set VM1 to be a swarm manager
docker-machine use myvm1
docker swarm init                  # --advertise-addr ww.xx.yy.zz
docker swarm join-token worker     # get token for adding worker nodes

# set VM2 to join the swarm as a worker
docker-machine use myvm2
docker swarm join --token SWMTKN-xxx ww.xx.yy.zz:2377

# check cluster status on your local machine...
docker-machine ls
# ...or on the manager node
docker-machine use myvm1
docker node ls

# locally login on your registry...
docker-machine unset
docker login registry.gitlab.com
# ...then deploy the app on the swarm manager
docker-machine use myvm1
docker stack deploy --with-registry-auth -c docker-compose.yml getstartedlab
docker service ls
docker service ps getstartedlab_web

# access cluster from any VM's IP
curl http://ww.xx.yy.zz:4000
curl http://ww.xx.yy.kk:4000
# re-run "docker stack deploy ..." if needed to apply changes

# undo app deployment
docker-machine use myvm1
docker stack rm getstartedlab

# remove the swarm
docker-machine ssh myvm2 "docker swarm leave"
docker-machine ssh myvm1 "docker swarm leave --force"
```
Stack / Deploy application
```yaml
# docker-compose.yml
version: "3"
services:
  web:
    image: username/repo:tag
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
    ports:
      - "80:80"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
  redis:
    image: redis
    ports:
      - "6379:6379"
    volumes:
      - "/home/docker/data:/data"
    deploy:
      placement:
        constraints: [node.role == manager]
    command: redis-server --appendonly yes
    networks:
      - webnet
networks:
  webnet:
```
```sh
docker-machine use myvm1
docker-machine ssh myvm1 "mkdir ./data"   # create redis data folder

# run stack / deploy app
docker stack deploy -c docker-compose.yml getstartedlab
docker stack ps getstartedlab

# show deployed services and restart one
docker service ls
docker service update --force getstartedlab_web

firefox http://<myvm1-ip>:8080/   # docker visualizer
redis-cli -h <myvm1-ip>           # interact with redis

docker stack rm getstartedlab
```
Init process to reap zombies and forward signals
- single process: tini (use docker run --init or init: true in docker-compose.yml; see the sketch below)
- multi-process: s6 and s6-overlay
- init systems comparison
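A minimal sketch of both forms (image and service names are arbitrary):

```sh
# single-process container with the built-in tini ("docker-init") as PID 1
docker run --init --rm -d --name init-test nginx
docker top init-test    # the first process should be /sbin/docker-init
docker rm -f init-test
# equivalent in docker-compose.yml:
#   services:
#     web:
#       image: nginx
#       init: true
```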
SWARM managers
- traefik: github, hp
- portainer (formerly ui-for-docker)
- swarmpit
- dry (terminal gui, one man prj)
- guides, tips and hints at dockerswarm.rocks (also on github)
Container-Host user remapping
You can map container users to the host ones for greater security.
- put myuser:100000:65536 (start:length) in /etc/subuid and /etc/subgid; this defines the mapping id range 100000-165535 available to the host user myuser
- configure the docker daemon to use the remapping specified for myuser:
  ```sh
  echo '{ "userns-remap": "myuser" }' > daemon.json
  systemctl restart docker
  ```
- note that all images will reside in a /var/lib/docker subfolder named after the myuser ids
- now all your container user/group ids will be mapped to 100000+id on the host
You can write up to 5 ranges in the sub* files for each user; in this example we keep identical ids for users 0-999 and map ids >= 1000 to id+1:
```sh
myuser:0:1000
myuser:1001:65536
```
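A quick way to check that the remapping is active, as a sketch assuming the 100000:65536 range above:

```sh
docker run --rm -d --name remap-test alpine sleep 300
ps -o user:12,pid,cmd -C sleep       # on the host the sleep process should run as uid 100000
ls -d /var/lib/docker/[0-9]*.[0-9]*  # per-remap data dir, e.g. /var/lib/docker/100000.100000
docker stop remap-test               # auto-removed thanks to --rm
```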
UFW Firewall interactions
Docker bypasses UFW rules, so published ports can be accessed from outside.
See a solution involving the DOCKER-USER and ufw-user-forward/ufw-user-input chains.
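A hedged sketch of that approach (simplified from the linked solution; assumes eth0 is the external interface and that services are exposed with "ufw route allow" rules):

```sh
# published-port traffic traverses DOCKER-USER, so let UFW's user chain decide first,
# then drop new inbound connections that nothing accepted; established traffic still passes
iptables -I DOCKER-USER 1 -i eth0 -j ufw-user-forward
iptables -I DOCKER-USER 2 -i eth0 -m conntrack --ctstate NEW -j DROP
# not persistent: the linked solution wires equivalent rules into /etc/ufw/after.rules
```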
Dockerizing Rails
- docker-rails-base -- preinstalled gems, multi stage, multi image, uses onbuild triggers
- dockerfile-rails -- Dockerfile extracted from Rails 7.1 by fly.io
- Kamal -- formerly MRSK, DHH solution, deploy web apps anywhere with zero downtime, guide posts
Share network with multiple stacks in swarm
```yaml
# swarm PROXY/BALANCER
networks:
  nginx: { external: true }
services:
  app: { image: nginx }

# swarm APP_FOO
networks:
  stackA:
  nginx: { external: true }
services:
  app:
    image: app_foo
    networks: { stackA:, nginx: }
  db:
    image: mysql
    networks: { stackA: }

# swarm APP_BAR
networks:
  stackB:
  nginx: { external: true }
services:
  app:
    image: app_bar
    networks: { stackB:, nginx: }
  db:
    image: postgres
    networks: { stackB: }
```
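The shared nginx network is external, so create it before deploying the stacks; a sketch (stack and file names here are illustrative):

```sh
docker network create --driver overlay --attachable nginx
docker stack deploy -c proxy.yml proxy
docker stack deploy -c app_foo.yml app_foo
docker stack deploy -c app_bar.yml app_bar
```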
Terms: service = a set of containers that all run the same image; task = a single container running in a service; swarm = a cluster of machines running Docker; stack = a group of interrelated services, orchestrated and scalable, defining and coordinating the functionality of an entire application.
Source: install, install@raspi, tutorial, overview, manage app data, config. daemon, config. containers.
Source for user mapping: docker docs, jujens.eu, ilya-bystrov
Useful tips: cleanup, network host mode for nginx to get the client's real IP, limit ram/cpu usage (example below), docker system prune -a -f to remove all cache files.
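For the ram/cpu tip, a minimal per-container example (values are arbitrary):

```sh
docker run -d --memory 512m --memory-swap 512m --cpus 0.5 nginx
```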
See also: thread swarm gui, docker swarm rocks
~~~ * ~~~
2025-02-20
Fate Anime Series Watch Order
Fate/stay night Series
- [ ] Fate/stay night (2006)
- [x] Fate/stay night: Unlimited Blade Works (2014-2015) (movie available)
- [ ] Fate/stay night [Heaven's Feel] I. presage flower
- [ ] Fate/stay night [Heaven's Feel] II. lost butterfly
- [ ] Fate/stay night [Heaven's Feel] III. spring song
- [ ] Fate/Zero
Fate/Grand Order
- [ ] Fate/Grand Order: First Order
- [ ] Fate/Grand Order: Camelot - Wandering; Agateram
- [ ] Fate/Grand Order: Camelot - Paladin; Agateram
- [ ] Fate/Grand Order Absolute Demonic Front: Babylonia
- [ ] Fate/Grand Order Final Singularity - Grand Temple of Time: Solomon
Fate Anime Spinoffs (any order)
- [ ] Today's Menu for the Emiya Family
- [ ] Lord El-Melloi II Case Files
- [ ] Fate/Prototype
- [ ] Fate/strange Fake: Whispers of Dawn
- [ ] Fate/Apocrypha
- [ ] Fate/EXTRA Last Encore
- [ ] Fate/kaleid liner PRISMA ILLYA
- [ ] Carnival Phantasm
~~~ * ~~~
2025-02-14
1. SETUP
1.1 Install
```sh
apt install wireguard
cd /etc/wireguard/
wg genkey | tee private.key | wg pubkey > public.key
touch wg-srv.conf
chmod 600 private.key wg-srv.conf

## turn on ip forward systemwide within /etc/sysctl.d
## OR do it later inside wg-srv.conf
#sysctl --write net.ipv4.ip_forward=1
#echo net.ipv4.ip_forward=1 >> /etc/sysctl.d/local.conf
```
1.2 Firewall
```sh
# open port and allow traffic from intranet
ufw allow 1053/udp comment 'VPN server'
ufw allow from 10.1.1.0/24 comment 'intranet VPN'
ufw route allow in on wg-srv comment 'VPN forward'
```
1.3 Network interface up/down
NB: wg-name = interface name = config filename without the .conf extension.
```sh
wg-quick up wg-name    # start
wg                     # info
wg-quick down wg-name  # stop

# enable service
systemctl start wg-quick@wg-name
systemctl enable wg-quick@wg-name
```
2. SERVER
```ini
# /etc/wireguard/wg-srv.conf
# server (with rules to allow routing all traffic)
[Interface]
PrivateKey = KFSjreufI8MJq5DD4c94EIuVOMBRGB0cL00uAmy9+2s=   # server private key
ListenPort = 1053
Address = 10.1.1.1/24
PostUp = sysctl --write net.ipv4.ip_forward=1
PostUp = iptables -A FORWARD -i %i -j ACCEPT
PostUp = iptables -A FORWARD -o %i -j ACCEPT
PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
#PostUp = iptables -t nat -A POSTROUTING -o wg-xxx -j MASQUERADE   # can add other interfaces
PostDown = sysctl --write net.ipv4.ip_forward=0
PostDown = iptables -D FORWARD -i %i -j ACCEPT
PostDown = iptables -D FORWARD -o %i -j ACCEPT
PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
#PostDown = iptables -t nat -D POSTROUTING -o wg-xxx -j MASQUERADE

# client A
[Peer]
PublicKey = SCkqASUWoNXzDW59pZglfbUHMBzBMJmoH5HH7zffY0c=   # client public key
PresharedKey = tbAdUxK2T0uLIBk5IfSXXUYihPJUyGeFI0vP4MUPrUM=   # wg genpsk
AllowedIPs = 10.1.1.2/32
PersistentKeepalive = 23
```
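Each [Peer] above needs the client's public key and (optionally) a preshared key; a sketch of generating them (file names are arbitrary):

```sh
wg genkey | tee client-a.key | wg pubkey > client-a.pub  # client PrivateKey / PublicKey pair
wg genpsk > client-a.psk                                 # PresharedKey, same value on both sides
chmod 600 client-a.key client-a.psk
```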
3. CLIENT
3.1 Standard client
```ini
# /etc/wireguard/wg-cli.conf
# client A
[Interface]
PrivateKey = oP6Zfi5ud9i4OL/COrL4FK0luSYpxvf3H7XRk8xfN0w=   # client private key
ListenPort = 2053
Address = 10.1.1.2/24
DNS = 10.0.0.1,1.1.1.1

# server
[Peer]
PublicKey = ECxm9+6EAt/PPgIiVQEjzl0E8VZ7JBphZjWADUv/mVs=   # server public key
PresharedKey = tbAdUxK2T0uLIBk5IfSXXUYihPJUyGeFI0vP4MUPrUM=   # wg genpsk
AllowedIPs = 0.0.0.0/0       # route all traffic through the server
#AllowedIPs = 10.1.1.0/24    # OR route VPN subnet only
Endpoint = 185.193.254.157:1053
PersistentKeepalive = 5
```
3.2 NordVPN client (gist)
```sh
# create a linux access token
# => https://my.nordaccount.com/dashboard/nordvpn/manual-configuration/

# get my wg private key
curl -s -u token:XXXX https://api.nordvpn.com/v1/users/services/credentials | \
  jq -r .nordlynx_private_key

# get servers params
wget -qO - https://api.nordvpn.com/v1/servers?limit=15000 | gzip -9 > servers.json.gz
# get servers params (recommended)
curl -s "https://api.nordvpn.com/v1/servers/recommendations?&filters\[servers_technologies\]\[identifier\]=wireguard_udp&limit=1" | \
  jq -r '.[]|.hostname, .station, (.locations|.[]|.country|.city.name), (.locations|.[]|.country|.name), (.technologies|.[].metadata|.[].value), .load'

# create conf
[Interface]
PrivateKey = <PRIVATE_KEY>            # my private key
Address = 10.5.0.2/32                 # IP is always the same
DNS = 127.0.0.1, 10.5.0.2, 1.1.1.1
# local ip/subnet/gateway rules to allow access to eth0 from outside
#PostUp = ip rule add from 192.168.1.110 table 1000 ; ip route add to 192.168.1.0/24 table 1000 dev eth0; ip route add default via 192.168.1.1 table 1000 dev eth0
#PreDown = ip route del default via 192.168.1.1 table 1000 dev eth0; ip route del to 192.168.1.0/24 table 1000 dev eth0; ip rule del from 192.168.1.110 table 1000

[Peer]
PublicKey = <SRV_PUB_KEY>
AllowedIPs = 0.0.0.0/0, 192.168.1.110 # route everything, and allow binding to eth0
Endpoint = <SRV_IP>:51820             # port is always the same
```
Source Linux: Debian wiki 1 and 2, davidshomelab, deb10 wg server, dynamic IP reddit & script
Source NordVPN: myshittycode, gist, NordVPN-WireGuard-Config-Generator, NordVPN api
~~~ * ~~~
2025-01-28
- install the C version of smem (no extra Python dependencies!):
```sh
apt install smemstat
```
- a simple wrapper that shows only the top lines and supports regexp filtering:
```ruby
#!/usr/bin/env ruby

if ARGV.include?('-h')
  puts "USAGE: #{File.basename __FILE__} [-a] [regexp]"
  exit
end

h, w = `stty size`.split(' ').map(&:to_i)

lines = `sudo smemstat -d -m`.split("\n")
lines.pop # Note: Memory reported in units of megabytes.
lines.pop # empty lines
totals = lines.pop # Totals
header = lines.shift
cmd_col = header.index('Command')

if ARGV.include?('-a')
  ARGV.delete '-a'
  h = 100_000
end

# replace Command with full cmdline
lines = lines.map{|l|
  pid = l.split(' ', 2).first.to_i
  next if pid == Process.pid
  cmd_src = l[cmd_col..]
  cmd_dst = File.read("/proc/#{pid}/cmdline") rescue cmd_src
  args = cmd_dst.split(/[ \u0000]/)
  cmd_dst = [File.basename(args.shift)].concat(args).join(' ')
  "#{l[0...cmd_col]}#{cmd_dst}"
}.compact

lines = lines.grep Regexp.new(ARGV[0]) if ARGV[0]
lines = lines[0..(h-9)]

max_w = lines.map(&:size).max
sep = '-' * (max_w > w ? (w-2) : max_w)

puts header
puts sep
lines.each{|l| puts l[0..(w-2)] }
puts sep
puts totals
```
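A usage sketch, assuming the wrapper is saved as smemtop somewhere in your PATH and made executable:

```sh
chmod +x ~/bin/smemtop
smemtop               # top memory consumers, fitted to the terminal
smemtop -a            # all processes
smemtop 'chrom|ruby'  # only lines matching the regexp
```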
Source: golinuxcloud, smem (python), smemstat (C)