Posts

Kubeflow in K3D with GPU Support

It's easy to install Kubeflow with K3D. For GPU support, you first need to build a K3S image with CUDA support. This article shows how to set up Kubeflow 1.7.0, K3D v5.4.9 and K3S 1.25.6. But what do those names mean? Kubeflow is a Kubernetes-based MLOps tool: it manages the lifecycle of ML models. K3S is a lightweight Kubernetes distribution. And K3D runs K3S clusters in Docker, which is neat. If GPU support is needed, you have to build a K3S image with CUDA support. There is a K3D manual page to help build this image, but the manual isn't currently up to date, and this GitHub issue nailed the process. In the end you'll have a local K3S image with CUDA support. Once the image is built, you just need to create a K3D cluster and install Kubeflow. First, install the k3d CLI. The command below creates a k8s cluster with 3 worker nodes (-a 3), with the load balancer listening on port 8080 (-p 8080:80@loadbalancer), and one gpu (--gpus=1). The --i
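For reference, putting those flags together probably looks something like the sketch below; the cluster name and the CUDA-enabled image tag are my assumptions, not values from the article.

k3d cluster create kubeflow \
  -a 3 \
  -p 8080:80@loadbalancer \
  --gpus=1 \
  --image=k3s-cuda:v1.25.6-k3s1   # the local K3S image built with CUDA support (hypothetical tag)

Once the cluster is up, kubectl should list the server and the three agent nodes, and Kubeflow can be installed on top of it.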

Authenticating Mediawiki with OAuth2

It's the year 2023 and there must be a way to authenticate Mediawiki (MW) with OAuth2. Currently the LTS version of Mediawiki is 1.39.1. My OAuth2 and OpenID provider is Keycloak. It can be accomplished with the OpenID Connect extension. It's simple, once you have the dependencies in place. I spent more time providing "composer" as a dependency than configuring the SSO part. Here are the relevant parts of the Dockerfile:

FROM registry.procempa.com.br/mediawiki:1.39.1
COPY composer.local.json composer.local.json
RUN wget https://extdist.wmflabs.org/dist/extensions/PluggableAuth-REL1_39-e7de886.tar.gz &&\
    wget https://extdist.wmflabs.org/dist/extensions/OpenIDConnect-REL1_39-0fefe8b.tar.gz &&\
    tar -zxvf PluggableAuth-REL1_39-e7de886.tar.gz -C extensions &&\
    tar -zxvf OpenIDConnect-REL1_39-0fefe8b.tar.gz -C extensions &&\
    chown -R www-data:www-data extensions

#Composer as dependency for OpenIDConnect
#https://tecadmin.net/how-to
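The excerpt stops right where Composer comes in. Below is a hedged sketch of how that part typically continues; the exact commands and the composer.local.json contents are my assumptions, not the post's actual files.

# Hypothetical continuation: install Composer and let MediaWiki's merge plugin
# pull the PHP dependencies declared by the OpenIDConnect extension.
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" &&\
    php composer-setup.php --install-dir=/usr/local/bin --filename=composer &&\
    rm composer-setup.php &&\
    composer update --no-dev

# composer.local.json (MediaWiki's standard hook for extension dependencies):
# {
#   "extra": {
#     "merge-plugin": {
#       "include": [ "extensions/OpenIDConnect/composer.json" ]
#     }
#   }
# }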

Back after a long time

It's been a long time since the last post. Exactly 4 years and 27 days. Right after the last post I started lecturing at UniRitter (again) for 3 years, and worked as a Technical Writer for Plume Design for another year, and those activities kept me busy (enough) along with my day job and my lovely family. Now, apparently, I have time left to write again about the small daily victories of an IT generalist and enthusiast. So, let's go again. At least until the next pause.

Differentiating Environments in Docker Compose - And A Hack For Jenkins Plugin

Docker Compose orchestrates the launch of multiple containers for an application on a single Docker host. A YAML file (usually named docker-compose.yml) describes how each container is going to be deployed (volumes, ports, environment variables) and how they are related to one another (Apache, PHP, and MySQL, for example). One lesser-known feature of Docker Compose is that you can combine multiple YAML files and Compose will merge them. You can, for example, define different database parameters (host, username, password, database/schema) for different environments (production and development) by writing a base file and then more specific files for each. Let's take this base file:

docker-compose.yml

version: '3'
services:
  web:
    image: php:7.3.0-apache
    container_name: site-web
    depends_on:
      - db
    volumes:
      - ./site/:/var/www/html/
    ports:
      - "8100:80"
  db:
    image: mysql:8.0
    container_name: site-db
    r
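The environment-specific files then only need to redefine what changes. A minimal sketch of the idea, assuming a hypothetical docker-compose.prod.yml (the values are placeholders, not the post's):

# Create an override file with just the production database parameters
cat > docker-compose.prod.yml <<'EOF'
version: '3'
services:
  db:
    environment:
      MYSQL_DATABASE: site_prod
      MYSQL_USER: site_prod
      MYSQL_PASSWORD: change-me
EOF

# Compose merges the files left to right; later files override earlier ones
docker-compose -f docker-compose.yml -f docker-compose.prod.yml config   # inspect the merged result
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d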

Jenkins Build Using Docker Compose From Git

The docker-compose Jenkins plugin adds a build step that makes it easy to build from a docker-compose file stored in a Git repository. After creating the desired folder structure, create a new Freestyle Project. You may choose the node where the images are going to be built and the containers will run. You also need to point the job to the Git repository. Then the Docker Compose build step (the one created by the plugin) can be chosen. If the file is named docker-compose.yml, you only need to choose "Start all services". When the build job is executed, the docker-compose.yml is pulled from Git, the Docker images are created, and the containers are started.
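In other words, the job ends up doing roughly the following on the chosen node (a hedged sketch of the equivalent commands, not the plugin's literal implementation; the repository URL is a placeholder):

git clone https://git.example.com/myorg/myapp.git .
docker-compose -f docker-compose.yml up -d --build   # what "Start all services" amounts to
docker-compose ps                                     # check that the containers are running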

Installing the eToken Pro on Ubuntu 18.04 to Access the RFB's eCAC

It's very easy to get the eToken Pro (in my case, provided by Certisign) working on Ubuntu 18.04 (Bionic Beaver) to access the eCAC of the RFB (Receita Federal do Brasil, Brazil's federal revenue service). On Debian it should be similar. I used some old tutorials; they still work and allowed me to configure the eToken in 15 minutes. My eToken is identified like this:

dmesg

[176128.683323] usb 3-8: new full-speed USB device number 4 using xhci_hcd
[176128.832836] usb 3-8: New USB device found, idVendor=0529, idProduct=0620
[176128.832839] usb 3-8: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[176128.832841] usb 3-8: Product: Token JC
[176128.832842] usb 3-8: Manufacturer: SafeNet

lsusb

Bus 003 Device 004: ID 0529:0620 Aladdin Knowledge Systems Token JC

Steps

1. Install pcscd
2. Install SafeNet eToken Pro v9.1
3. Configure the eToken in Firefox
4. Install the ICP-Brasil v2 certificate chain in Firefox
5. Access the RFB's eCAC

Details

1. Install pcscd

sudo apt install pcscd
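A hedged sketch of how the first steps look on the command line; the SafeNet package file name and the PKCS#11 library path are assumptions from memory, not taken from the post:

# 1. Smart card daemon (pcsc-tools is optional but handy to confirm the token is detected)
sudo apt install pcscd pcsc-tools
pcsc_scan

# 2. Install the SafeNet eToken client from the vendor-provided .deb (file name is hypothetical)
sudo dpkg -i SafenetAuthenticationClient-9.1.7-0_amd64.deb || sudo apt -f install

# 3. In Firefox: Preferences -> Privacy & Security -> Security Devices -> Load,
#    then point the new module at the SafeNet PKCS#11 library, typically /usr/lib/libeToken.so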

Managing Docker Restart Policy On a Set of Hosts

So you have been moving containers back and forth between hosts (no Kubernetes yet), and each host has some containers running and some stopped. When you restart a host (or even the docker service), all containers start running, even the ones you already moved to another host. It's common to run a container with the Restart Policy set to Always, so that the container is started automatically once Docker determines it is stopped (possibly after a restart). And it's common to forget to reset this setting once the container is moved to another host. So StackOverflow comes to the rescue (thanks OscarAkaElvis) and provides a nice script to loop through all containers on a host and show the Restart Policy of each:

#!/usr/bin/env bash
#Script to check the restart policy of the containers
readarray -t CONTAINERS < <( docker ps -a | grep -v NAMES | awk '{print $NF}' )
for item in "${CONTAINERS[@]}"; do
    #Hard-Bash way
    #dat
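The excerpt cuts off there, but the idea is simple enough to sketch. Here is a minimal version of the same loop, with an optional line to turn the policy off on the old host (my sketch, not the original script):

#!/usr/bin/env bash
# List every container's restart policy on this host
readarray -t CONTAINERS < <( docker ps -a --format '{{.Names}}' )
for item in "${CONTAINERS[@]}"; do
    policy=$(docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' "$item")
    echo "$item: $policy"
    # Uncomment to stop this container from auto-starting after a host/daemon restart:
    # docker update --restart=no "$item"
done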