
Wednesday, 22 December 2021

KinD K8s cluster

KinD (Kubernetes in Docker) is a handy tool for running Kubernetes clusters inside Docker containers. It is well suited for Kubernetes-related development activities.

Quick start

Sample steps to create multiple clusters.

Create two clusters, each with a single node:

$ kind create cluster --image myimages/external/kindest/node:v1.20.7  --name cluster-alpha

$ kind create cluster --image myimages/external/kindest/node:v1.20.7  --name cluster-beta

$ kubectl get no
NAME                         STATUS   ROLES                  AGE     VERSION
cluster-beta-control-plane   Ready    control-plane,master   5m27s   v1.20.7


$ kind get clusters
cluster-alpha
cluster-beta

List cluster contexts


$ kubectl config get-contexts
CURRENT   NAME                 CLUSTER              AUTHINFO             NAMESPACE
          kind-cluster-alpha   kind-cluster-alpha   kind-cluster-alpha
          kind-cluster-beta    kind-cluster-beta    kind-cluster-beta

Switch to the desired cluster context

$ kubectl config  use-context kind-cluster-alpha
Switched to context "kind-cluster-alpha".
$ kubectl get no
NAME                          STATUS   ROLES                  AGE    VERSION
cluster-alpha-control-plane   Ready    control-plane,master   119m   v1.20.7
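
If you need more than a single node per cluster, kind also accepts a cluster config file. A minimal sketch (the file and cluster names here are just examples):

$ cat kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker

$ kind create cluster --name cluster-multi --config kind-config.yaml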


Monday, 13 December 2021

Wrist pain and some simple solutions I did

For the past couple of years, I have been involved in multiple projects, often working on several in parallel. Most days I would be sitting in front of my laptop, doing my work, and I was not doing enough exercise. Then the COVID pandemic worsened the situation. My body started giving symptoms, and I kept working, neglecting them, until one day I couldn't bear the pain and numbness in my wrist. Along with the wrist pain, I had occasional neck pain, which would lead to a pulsating headache; after that, sleeping in a dark and quiet room was the only remedy.

Carpal tunnel syndrome

A quick Google search showed me that this kind of wrist pain is very common among people who sit and work in front of a computer most of the time. From my experience, correcting the way we use the mouse and keyboard, along with some stretching exercises, helps a lot with pain relief.

Vertical mouse

The first thing I changed was the mouse: I moved to a vertical mouse, and it made a big difference.
With a normal mouse we keep our palm horizontal, whereas with a vertical mouse the wrist is tilted almost vertical, somewhat like a handshake position. I bought a Live Tech Glide Wireless Vertical Mouse from Amazon.

Ergonomic keyboard

The next thing I did was move from the laptop keyboard to a more ergonomic keyboard. I found some good keyboards with interesting layouts online, like the Kinesis Advantage2, but at around INR 50K it is a little too expensive, I feel.


Wireless keyboard with touchpad

I also bought a small wireless keyboard with a built-in touchpad, so that I can lean back on my couch or chair and still type and point.

 

Bigger monitor

After some searching online, I found the Dell P2422HE to be a promising one.


 

Ergonomic chair

I didn't buy this one online; I went to a local store and bought a Nilkamal ergonomic chair.


TP-Link router bridge mode

Bridge mode lets you connect two routers: NAT is disabled on the modem, and the second router can function as the DHCP server without causing an IP address conflict.

My main WiFi router sits in the main hall and doesn't have enough coverage for the backyard and garage. I had a spare TP-Link router at home, which I used to extend the WiFi coverage to the garage.




The TP-Link router now connects to the main router's WiFi and acts as a bridge. I don't see any considerable performance degradation, though it obviously adds an extra hop.

Docker best practices

General guidelines and recommendations

Run as a non-root user

Running containers as a non-root user substantially decreases the risk of container-to-host privilege escalation.

It is recommended to run components as a normal user, preferably with a UID and GID greater than 10000.
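
A minimal Dockerfile sketch of this (the base image and user name are just examples):

FROM debian:bullseye-slim
# Create an unprivileged user with a high, static UID/GID (10000, as suggested above)
RUN groupadd --gid 10000 app \
 && useradd --uid 10000 --gid 10000 --create-home app
# All subsequent instructions and the running container use this user
USER app:app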

Use a static UID, GID

Eventually, at some point, volume sharing between containers will be required, and that leads to manipulating file permissions.

It is best that you use a single static UID/GID for all of your containers that never change. We suggest 10000:10000 such that "$ chown 10000:10000 files" always works for containers following these best practices. 


Use the binary as the ENTRYPOINT

Try to use the main binary as your ENTRYPOINT, and use CMD to pass only arguments. It allows people to ergonomically pass arguments to your binary without having to guess its name.

This also makes it easier to write K8s YAMLs, since you don't need to know the binary's name and args.
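
A sketch of the pattern (the binary path and default flag are just examples):

# The main binary is the ENTRYPOINT; CMD supplies only default, overridable arguments
ENTRYPOINT ["/usr/local/bin/myapp"]
CMD ["--help"]

With this, "docker run myimage --config /etc/myapp.yaml" passes the arguments straight to the binary.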


Create ephemeral containers

The image defined by your Dockerfile should generate containers that are as ephemeral as possible. By "ephemeral", we mean that the container can be stopped and destroyed, then rebuilt and replaced with an absolute minimum of setup and configuration. Adding unwanted dependencies or packages also increases the image size: if an image has a binary as its ENTRYPOINT, try to avoid any dependencies that are not needed to run that binary. Use a distroless base image or "FROM scratch" where possible; the same can also be achieved by making use of multi-stage builds.
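
A minimal multi-stage sketch (the Go toolchain and binary name are assumptions):

# Build stage: full toolchain and sources
FROM golang:1.17 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/myapp .

# Final stage: only the static binary, nothing else
FROM scratch
COPY --from=build /out/myapp /myapp
ENTRYPOINT ["/myapp"]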

Metadata labels

It is a Dockerfile best practice to include metadata labels when building your image. 

Labels will help in image management, like including the application version, a link to the website, how to contact the maintainer, and more.

You can take a look at the predefined annotations from the OCI image spec.
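
For example (the values are placeholders; the keys are predefined OCI annotations):

LABEL org.opencontainers.image.title="myapp" \
      org.opencontainers.image.version="1.0.0" \
      org.opencontainers.image.source="https://github.com/example/myapp" \
      org.opencontainers.image.authors="maintainer@example.com"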


Locally scan images during development

Image scanning is another way of detecting potential problems before running your containers. 

It is a security best practice to apply the "shift left security" paradigm by scanning your images directly in your CI pipelines, as soon as they are built, before pushing them to the registry.

This also includes the developer's computer. 


Sign images and verify signatures

It is a best practice to use Docker Content Trust, Docker Notary, Harbor Notary, or similar tools to digitally sign your images and then verify them at runtime.
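
With Docker Content Trust, for instance, signing and verification are controlled by an environment variable (the registry and image names are placeholders):

$ export DOCKER_CONTENT_TRUST=1
$ docker push myregistry/myapp:1.0   # the image is signed on push
$ docker pull myregistry/myapp:1.0   # fails unless a signed tag is found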

Tag mutability

In the container world, tags are a volatile reference to a concrete image version at a specific point in time.

Tags can change unexpectedly, and at any moment. See this “Attack of the mutant tags”.

It is suggested to pin images to an immutable digest, using a format like this:

docker pull kindest/node:v1.19.1@sha256:98cf5288864662e37115e362b23e4369c8c4a408f99cbc06e58ac30ddc721600

Misconfigurations

  • Exposing insecure ports, like the ssh port 22 that shouldn’t be open on containers.
  • Including private files accidentally via a bad “COPY” or “ADD” command.
  • Including (leaking) secrets or credentials instead of injecting them via environment variables or mounts.
  • Secrets may be stored encrypted in K8s YAMLs, but decrypting them and exporting them as plain text at runtime still exposes them. Decryption should be done within the application code, and decrypted values should not be exposed as environment variables or mounts.

Exclude with .dockerignore
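
Use a .dockerignore file to keep secrets, VCS data, and build artifacts out of the build context. A typical sketch (the entries are examples; adjust to your repo):

.git
.env
*.pem
node_modules/
target/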


Tools and utilities

Below is a list of some tools and utilities that can be used across the lifecycle of a container, along with some findings from trying out a few of the available open-source options.

docker-bench-security

(https://github.com/docker/docker-bench-security)

The Docker Bench for Security is a script that checks for dozens of common best-practices around deploying Docker containers in production. The tests are all automated and are based on the CIS Docker Benchmark v1.3.1.

Quick run:

$ git clone https://github.com/docker/docker-bench-security.git
$ cd docker-bench-security/
$ ./docker-bench-security.sh

This script will scan all the images available on the Docker host.


Pros:

It's easy to set up and use.

Cons:

It would be nicer if it produced a structured report, e.g. output in JSON format.






docker-scan

The docker CLI has a "scan" option (which uses the "snyk" tool as the provider) to scan images for vulnerabilities.

Linux installation steps here.
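
Quick run (the image name is a placeholder):

$ docker scan myapp:latest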

Pros:

Detailed reports

Easy to set up and use


Cons:

Require docker hub/snyk login


hadolint Dockerfile Linter

https://github.com/hadolint/hadolint

This is a Dockerfile linter, written in Haskell, that also validates inline bash.

Quick run: $ docker run --rm -i ghcr.io/hadolint/hadolint < Dockerfile


Pros:

Easy to use.

Docker image available.

Scans for inline bash scripts.


Cons:

--





docker-lock

https://github.com/safe-waters/docker-lock

A tool that manages image digests by tracking them in a lockfile, which complements the tag-mutability recommendation above.


clair

Clair is an application for parsing image contents and reporting vulnerabilities affecting the contents. This is done via static analysis and not at runtime.

https://quay.github.io/clair/howto/getting_started.html

https://github.com/quay/clair


Pros:

Open-source, widely adopted, and active project

Well documented


Cons:

Has not yet fully adopted the CIS benchmark

Yet to try


dockle

https://github.com/goodwithtech/dockle

A container image linter for security, helping to build best-practice Docker images.


Quick run example:

$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock goodwithtech/dockle:v0.4.2 <your-image-url>


Pros:

Single binary, easy to set up and use

docker image available

Open source, active


Cons:

--





OpenSCAP by Red Hat

https://www.open-scap.org/resources/documentation/security-compliance-of-rhel7-docker-containers/

https://www.open-scap.org/


notary

https://github.com/notaryproject/notary

Notary is a project that allows anyone to have trust over arbitrary collections of data.


syft

https://github.com/anchore/syft

An SBOM (software bill of materials) generator for container images.
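
Quick run (the image name is a placeholder; flags as in recent syft versions):

$ syft myapp:latest            # human-readable table of packages
$ syft myapp:latest -o json    # full SBOM as JSON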


threatmapper

https://github.com/deepfence/ThreatMapper

Identify vulnerabilities in running containers, images, hosts, and repositories


Refs

https://techbeacon.com/enterprise-it/container-security-what-you-need-know-about-nist-standards
https://www.nist.gov/publications/application-container-security-guide
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
https://github.com/hexops/dockerfile
https://docs.docker.com/get-started/09_image_best/
https://sysdig.com/blog/dockerfile-best-practices/
https://github.com/docker/docker-bench-security
https://12factor.net/processes
https://geekflare.com/container-security-scanners/

Sunday, 12 December 2021

Fedora SilverBlue

Container-based development

During one of my recent projects, I was sharing a dev machine with other devs. Soon things got messy, because one dev would update some package and that would break another dev's code.

So we started to use a container-based workflow for development, so that things stay isolated and the host stays intact. That is, we were creating any number of containers, often starting them with a 'tail -f /dev/null' command, executing bash in them to get a shell, and then installing the required libs, compilers, and dependencies for our development activities.
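
A sketch of that workflow (the image and container names are just examples):

$ docker run -d --name devbox ubuntu:20.04 tail -f /dev/null   # keep the container alive
$ docker exec -it devbox bash                                  # get a shell inside it
root@devbox:/# apt-get update && apt-get install -y build-essential git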

While this container-based development approach was going smoothly for us, I happened to see people using Fedora Silverblue, an immutable OS, for the same purpose.

Fedora SilverBlue

  • It's an immutable OS
    • In short, the base OS file system cannot be changed directly by a normal user.
    • Hence the host always stays intact, and is thus reliable, robust, and stable.
  • Focuses on a container-based development workflow
  • Behaves like a regular desktop OS
  • OS updates are fast and just require a reboot to start using the new version


Ostree, rpm-ostree & flatpak

Ostree is the technology used to manage the file system and to update Silverblue. For Silverblue installs, ostree is responsible for deploying and updating the OS image (including everything below '/' that is not symlinked into '/var').

On top of ostree there is rpm-ostree, which makes it possible to install RPM packages on Silverblue. To see the newly installed RPMs, the system needs to be rebooted into the new image.
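
For example (the package name is a placeholder):

$ rpm-ostree install htop   # layers the package onto a new OS image
$ systemctl reboot          # boot into the new deployment to start using it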

Flatpak is a package management utility, offering a sandbox environment in which users can run application software in isolation from the rest of the system.

On Silverblue, the root filesystem is immutable. This means that '/', '/usr' and everything below it is read-only.

'/var' is where all of Silverblue’s runtime state is stored. Symlinks are used to make traditional state-carrying directories available in their expected locations. This includes:


  • /home → /var/home

  • /opt → /var/opt

  • /srv → /var/srv

  • /root → /var/roothome

  • /usr/local → /var/usrlocal

  • /mnt → /var/mnt

  • /tmp → /sysroot/tmp

ToolBox

Silverblue also comes with the toolbox utility, which uses containers to provide an environment where development tools and libraries can be installed and used.

Toolbox makes it easy to use a containerized development environment for daily software development and debugging. Inside a Toolbox container you'll find your existing user name and permissions, access to your home directory and other locations, and common utilities like dnf, rpm, etc.
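
Basic usage looks like this (the dnf packages are just examples):

$ toolbox create                  # create a default toolbox container
$ toolbox enter                   # get a shell inside it
$ sudo dnf install -y gcc make    # inside the toolbox: install dev tools as usual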

Some refs:

https://docs.fedoraproject.org/en-US/fedora-silverblue/

https://fedoramagazine.org/what-is-silverblue/

https://www.maketecheasier.com/fedora-silverblue-future-of-linux/



Friday, 10 December 2021

Ansible best practices

Here are a few best practices I have found useful to always follow when writing Ansible playbooks.

Tools

Ansible-lint (with Yamllint)

https://github.com/ansible-community/ansible-lint

A linter for Ansible. Its main goal is to promote proven practices, patterns, and behaviors while avoiding common pitfalls that can easily lead to bugs or make code harder to maintain.

Install:

# Assuming you already installed Ansible and you also want the optional
# yamllint support:
pip3 install "ansible-lint[yamllint]"

# If you want to install and use the latest Ansible (w/o community collections)
pip3 install "ansible-lint[core,yamllint]"

# If you want to install and use the latest Ansible with community collections
pip3 install "ansible-lint[community,yamllint]"

# If you want to install an older version of Ansible 2.9
pip3 install ansible-lint "ansible>=2.9,<2.10"


Example:

$ ansible-lint -p examples/playbooks/example.yml


Best practices on writing playbooks

Using command rather than the module

Executing a raw command when there is an Ansible module for the job is generally a bad idea, e.g. using 'kubectl' to apply K8s descriptors rather than using the Ansible k8s core module, as sketched below.

There can be exceptions, though, where we have to use command or shell: when such a module is not available or is buggy.
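
A sketch of the contrast (the manifest path is a placeholder; the module name is from the kubernetes.core collection):

# Preferred: use the module
- name: Apply a K8s manifest with the k8s module
  kubernetes.core.k8s:
    state: present
    src: /tmp/deployment.yml

# Avoid: shelling out when a module exists
- name: Apply a K8s manifest with kubectl
  ansible.builtin.command: kubectl apply -f /tmp/deployment.yml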

Use shell only when shell functionality is required

The shell module should only be used when you actually need shell functionality such as piping, redirecting, or chaining commands (and even for some of those, an Ansible module would be preferred!).


Saturday, 4 December 2021

Ssh-agent

Recently, I had a situation where I was sharing the same bastion host with other devs for connecting to our development K8s clusters. Due to some constraints, all devs were using the same user on that Linux bastion VM, so each dev was provided with their own id_rsa private key to use with tools like Git.

I used ssh-agent to select the right key.

To start the ssh agent and add a key:

eval `ssh-agent`

ssh-add <path-to-key>

Type in the passphrase for the key. From now on, Git won't ask for this key's passphrase in this session.

ssh-add -l   # list all added keys; we can add multiple keys


usage: ssh-add [options] [file ...]
Options:
  -l          List fingerprints of all identities.
  -E hash     Specify hash algorithm used for fingerprints.
  -L          List public key parameters of all identities.
  -k          Load only keys and not certificates.
  -c          Require confirmation to sign using identities
  -t life     Set lifetime (in seconds) when adding identities.
  -d          Delete identity.
  -D          Delete all identities.
  -x          Lock agent.
  -X          Unlock agent.
  -s pkcs11   Add keys from PKCS#11 provider.
  -e pkcs11   Remove keys provided by PKCS#11 provider.



Dependabot

 About

GitHub has a built-in bot which scans your repository and finds outdated dependencies. It then raises a merge request to bump the dependency to the latest known version; the maintainers can then decide whether to go with that version or not.

https://github.com/dependabot

For Gitlab

I was using GitLab for a while and was searching for a similar feature there, and found this:

https://github.com/dependabot/dependabot-script

The doc says it supports GitLab, Azure DevOps, and Bitbucket as well.


For my personal project on GitLab, I built the Docker image from source and then ran it against my GitLab instance.


  • Build the dependabot-script Docker image
git clone https://github.com/dependabot/dependabot-script.git 
cd dependabot-script
docker build -t "dependabot/dependabot-script" -f Dockerfile .

  • Run the docker container

docker run --rm -e "PROJECT_PATH=my-project-group/my-repo" -e "PACKAGE_MANAGER=maven" -e "BRANCH=dependabot/test" -e "PULL_REQUEST_ASSIGNEE=29944" -e "GITLAB_ACCESS_TOKEN=xxxxxxPjkfiaQd3xcYsi" -e "GITLAB_HOSTNAME=gitlab.mydomain.com" "dependabot/dependabot-script"


With proxy env

docker run --rm -e "PROJECT_PATH=my-project-group/my-repo" -e "PACKAGE_MANAGER=maven" -e "BRANCH=develop" -e "PULL_REQUEST_ASSIGNEE=29944" -e "GITLAB_ACCESS_TOKEN=xxxxxxxjkfiaQd3xcYsi" -e "GITLAB_HOSTNAME=gitlab.mydomain.com" -e "HTTPS_PROXY=http://www-proxy.mydomain.com:80" -e "HTTP_PROXY=http://www-proxy.mydomain.com:80" -e "http_proxy=http://www-proxy.mydomain.com:80" -e "https_proxy=http://www-proxy.mydomain.com:80" -e "NO_PROXY=mydomain2.com,localhost" "dependabot/dependabot-script"


gitlab-ci.yml

Using the above dependabot-script image as the base image, we can also create a scheduled pipeline in GitLab.

An example gitlab-ci.yml file is below.

image: docker.mydomain.com/external/dependabot/dependabot-script:latest

variables:
  GITLAB_HOSTNAME: gitlab.mydomain.com  

stages:
  - run

dependabot-trigger:
  stage: run
  tags:
    - vm
  script:
    - cd /home/dependabot/dependabot-script
    - bundle exec ruby ./generic-update-script.rb

Other variables can be added as pipeline variables while scheduling the job:
  • PROJECT_PATH:
    • e.g. my-project-group/my-repo
  • PACKAGE_MANAGER:
    • e.g. maven
  • BRANCH:
    • Branch to scan
  • PULL_REQUEST_ASSIGNEE:
    • Integer ID of the user to assign. This can be found at a link like "gitlab.mydomain.com/api/v4/users?username="
  • GITLAB_ACCESS_TOKEN:
    • GitLab API access token
  • GITHUB_ACCESS_TOKEN:
    • Private GitHub token




    Ansible render Jinja2 file

    Snippet to render Jinja2 files:

    ansible-playbook render-j2.yml -e @ansible-extra-vars-new.json -e "in=k8s-deploy-nginx.j2 out=nginx.yml"


    (ansible-venv) bash-4.4# cat render-j2.yml
    ---
    - hosts: localhost
      connection: local
      tasks:
        - name: Render config for host
          template:
            src: "{{ in }}"
            dest: "{{ out }}"
    (ansible-venv) bash-4.4#


    Render multiple files in a loop, with the args hardcoded in the playbook:

      tasks:
        - name: Render config for host
          template:
            src: "{{ item.in }}"
            dest: "{{ item.out }}"
          loop:
            - { in: myfile1.j2, out: myfile1.yml }
            - { in: myfile2.j2, out: myfile2.yml }
            - { in: myfileN.j2, out: myfileN.yml }


    Ansible skip lint

    Recently I encountered an error while running my playbooks: ansible-lint was failing with a YAML syntax error on a K8s descriptor file that worked fine with kubectl. So I had to skip YAML lint for that file, as below:


    Skip lint

    diff --git a/playbooks/k8s.yml b/playbooks/k8s.yml
    index c412bcd..ed13f4d 100644
    --- a/playbooks/k8s.yml
    +++ b/playbooks/k8s.yml
    @@ -171,13 +171,15 @@
         loop:
              - api-crd.yml
              - target-type-crd.yml
    -        - target-instance-crd.yml
    +        - target-instance-crd.yml # noqa yaml
              - routing-crd.yml




    K8s metrics server

    https://github.com/kubernetes-sigs/metrics-server


    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

    kubectl top pod POD_NAME

    OCI CLI

    OCI CLI install

    bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"


    Git

     

    git remote -v # get remote details


    Git edit author of last commit

    git commit --amend --author="Kiran Menon <kiran@domain.org>"


    Ref: https://www.git-tower.com/learn/git/faq/change-author-name-email/


    Remove untracked files

     git clean -fd


    Adding change to an old commit 

    git commit --fixup <commit-id>                    # the change to make
    git rebase -i --autosquash <position/commit-id>   # the old commit where to make the change, e.g. HEAD~4

    Maven

     Basic commands

    mvn compile    # compile the project
    mvn deploy     # validate -> compile -> test -> package -> verify -> install -> deploy

    mvn clean package -Dmaven.test.skip=true   # clean build files and package, skipping test compilation and execution


    Run integration test

    mvn failsafe:integration-test

    Skip Nexus deploy plugin

    You can use the Nexus Staging Plugin's property skipNexusStagingDeployMojo to achieve this:

    mvn clean package deploy:deploy -DskipNexusStagingDeployMojo=true 
    -DaltDeploymentRepository=snapshots::default::https://my.nexus.host/content/repositories/snapshots-local

    It is important to explicitly invoke package and then deploy:deploy, as otherwise (when only invoking mvn deploy) the default execution of the maven-deploy-plugin is suppressed by the Nexus Staging Plugin (even with the skip set to true).


    fail at end


    -fae,--fail-at-end Only fail the build afterwards; allow all non-impacted builds to continue
    -fn,--fail-never NEVER fail the build, regardless of project result


    Create src archive explicitly

    org.apache.maven.plugins:maven-source-plugin:3.2.0:jar




    Maven deploy locally

    mvn deploy -DaltDeploymentRepository=local::file:./target/staging-deploy