

Rootconf 2019 was held on June 21-22, 2019 at the NIMHANS Convention Centre in Bengaluru, covering topics ranging from infrastructure security and site reliability engineering to DevOps and distributed systems.

Rootconf 2019 Day 1

Day I

I had proposed a workshop titled “Shooting the trouble down to the Wireshark Lua Plugin” for the event, and it was selected. I have been working on the “Aerospike Wireshark Lua plugin” for dissecting Aerospike protocols, and hence wanted to share insights from that work. The plugin source code is released under the AGPLv3 license.

“Wireshark” is a popular Free/Libre and Open Source Software protocol analyzer for analyzing protocols and troubleshooting networks. The “Lua programming language” is useful for extending C projects with scripting support. Since Wireshark is written in C, it exposes a Lua interface for writing plugins. Aerospike uses the PAXOS family and custom-built protocols for distributed database operations, and the plugin has been quite useful for packet dissection and for solving customer issues.

Rootconf 2019 Day 1

The workshop had both theory and lab exercises. I began with an overview of Lua, Wireshark GUI, and the essential Wireshark Lua interfaces. The Aerospike Info protocol was chosen and exercises were given to dissect the version, type and size fields. I finished the session with real-world examples, future work and references. Around 50 participants attended the workshop, and those who had laptops were able to work on the exercises. The workshop presentation and lab exercises are available in the aerospike-wireshark-plugin/docs/workshop GitHub repository.

I had follow-up discussions with the participants before moving to the main auditorium. “Using pod security policies to harden your Kubernetes cluster” by Suraj Deshmukh was an interesting talk on the level of security that should be employed with containers. After lunch, I started my role as emcee in the main auditorium.

The keynote of the day was by Bernd Erk, the CEO of Netways GmbH, who is also the co-founder of the Icinga project. He gave an excellent talk on “How convenience is killing open standards”, with numerous examples of how people are not aware of open standards and take proprietary systems for granted. This was followed by flash talks from the audience. Jaskaran Narula then spoke on “Securing infrastructure with OpenScap: the automation way”, and also shared a demo of the same.

After the tea break, Shubham Mittal gave a talk on “OSINT for Proactive Defense”, in which he shared Open Source Intelligence (OSINT) tools, techniques and procedures for protecting an organization’s perimeter security. The last talk of the day was by Shadab Siddiqui on “Running a successful bug bounty programme in your organization”.

Day II

Anant Shrivastava started the day’s proceedings with a recap on the talks from day one.

The first talk of the day was by Jiten Vaidya, co-founder and CEO at PlanetScale, who spoke on “OLTP or OLAP: why not both?”. He gave an architectural overview of vitess.io, a Free/Libre and Open Source sharding middleware for running OLTP workloads. The design looked as if Kubernetes-style features were being implemented on top of a MySQL cluster. Ratnadeep Debnath then spoke on “Scaling MySQL beyond limits with ProxySQL”.

After the morning break, Brian McKenna gave an excellent talk on “Functional programming and Nix for reproducible, immutable infrastructure”. I have listened to his talks at the Functional Programming conference in Bengaluru, and they have been using Nix in production. The language constructs and cases were well demonstrated with examples. This was followed by yet another excellent talk by Piyush Verma on “Software/Site Reliability of Distributed Systems”. He took a very simple request-response example, and incorporated site reliability features, and showed how complex things are today. All the major issues, pitfalls, and troubles were clearly explained with beautiful illustrations.

Aaditya Talwai presented his talk on “Virtuous Cycles: Enabling SRE via automated feedback loops” after the lunch break. This was followed by Vivek Sridhar’s talk on “Virtual nodes to auto-scale applications on Kubernetes”. Microsoft has been investing heavily in Free/Libre and Open Source, and has been hiring a lot of Python developers as well. Satya Nadella has been bringing in a lot of changes, and it will be interesting to see their long-term progress. After Vivek’s talk, we had a few slots for flash talks from the audience, and then Deepak Goyal gave his talk on “Kafka streams at scale”.

After the evening beverage break, Øystein Grøvlen gave an excellent talk on PolarDB, a database architecture for the cloud. It is being used at Alibaba in China to handle petabytes of data. The computing layer and the shared storage layer are distinct, and the RDMA protocol is used for cluster communication. They still use a single master and multiple read-only replicas, and are exploring parallel query execution to improve the performance of analytical queries.

Rootconf 2019 Day 2

Overall, the talks and presentations were very good for 2019. Time management is of utmost importance at Rootconf, and we have been very consistent. I was happy to emcee again for Rootconf!

[Published in Open Source For You (OSFY) magazine, September 2017 edition.]

Introduction

Erlang is a programming language designed by Ericsson primarily for soft real-time systems. The Open Telecom Platform (OTP) consists of libraries, applications and tools to be used with Erlang to implement services that require high availability. In this article, we will create a test Virtual Machine (VM) to compile, build, and test Erlang/OTP from its source code. This allows you to create VMs with different Erlang release versions for testing.

The Erlang programming language was developed by Joe Armstrong, Robert Virding and Mike Williams in 1986 and released as free and open source software in 1998. It was initially designed to work with telecom switches, but is widely used today in large scale, distributed systems. Erlang is a concurrent and functional programming language, and is released under the Apache License 2.0.

Setup

A CentOS 6.8 Virtual Machine (VM) running on KVM will be used for the installation. Internet access should be available from the guest machine. The VM should have at least 2 GB of RAM allotted to build the Erlang/OTP documentation. The Ansible version used on the host (Parabola GNU/Linux-libre x86_64) is 2.3.0.0. The ansible/ folder contains the following files:

ansible/inventory/kvm/inventory
ansible/playbooks/configuration/erlang.yml

The IP address of the guest CentOS 6.8 VM is added to the inventory file as shown below:

erlang ansible_host=192.168.122.150 ansible_connection=ssh ansible_user=bravo ansible_password=password

An entry for the erlang host is also added to the /etc/hosts file as indicated below:

192.168.122.150 erlang

A ‘bravo’ user account is created on the test VM, and is added to the ‘wheel’ group. The /etc/sudoers file also has the following line uncommented, so that the ‘bravo’ user will be able to execute sudo commands:

## Allows people in group wheel to run all commands
%wheel	ALL=(ALL)	ALL
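
If you want to automate this user setup as well, a minimal Ansible sketch is given below; the play, the ‘bravo’ user name and the plain-text password are only example values matching this article, and the password_hash filter needs passlib on the control host:

- name: Create the bravo user (hypothetical setup play, run as root)
  hosts: erlang
  remote_user: root
  tasks:
    - name: Add bravo and include it in the wheel group
      user:
        name: bravo
        groups: wheel
        append: yes
        # example password only; use Ansible Vault for real credentials
        password: "{{ 'password' | password_hash('sha512') }}"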

We can obtain the Erlang/OTP sources from a stable tarball, or clone the Git repository. The steps involved in both these cases are discussed below:

Building from the source tarball

The Erlang/OTP stable releases are available at http://www.erlang.org/downloads. The build process is divided into many steps, and we shall go through each one of them. The version of Erlang/OTP can be passed as an argument to the playbook. Its default value is the release 19.0, and is defined in the variable section of the playbook as shown below:

vars:
  ERL_VERSION: "otp_src_{{ version | default('19.0') }}"
  ERL_DIR: "{{ ansible_env.HOME }}/installs/erlang"
  ERL_TOP: "{{ ERL_DIR }}/{{ ERL_VERSION }}"
  TEST_SERVER_DIR: "{{ ERL_TOP }}/release/tests/test_server"

The ERL_DIR variable represents the directory where the tarball will be downloaded, and the ERL_TOP variable refers to the top-level directory location containing the source code. The path to the test directory from where the tests will be invoked is given by the TEST_SERVER_DIR variable.

Erlang/OTP has mandatory and optional package dependencies. Let’s first update the software package repository, and then install the required dependencies as indicated below:

tasks:
  - name: Update the software package repository
    become: true
    yum:
      name: '*'
      update_cache: yes

  - name: Install dependencies
    become: true
    package:
      name: "{{ item }}"
      state: latest
    with_items:
      - wget
      - make
      - gcc
      - perl
      - m4
      - ncurses-devel
      - sed
      - libxslt
      - fop

The Erlang/OTP sources are written in the ‘C’ programming language. The GNU C Compiler (GCC) and GNU Make are used to compile the source code. The ‘libxslt’ and ‘fop’ packages are required to generate the documentation. The build directory is then created, and the source tarball is downloaded and extracted to the directory specified by ERL_DIR.

- name: Create destination directory
  file: path="{{ ERL_DIR }}" state=directory

- name: Download and extract Erlang source tarball
  unarchive:
    src: "http://erlang.org/download/{{ ERL_VERSION }}.tar.gz"
    dest: "{{ ERL_DIR }}"
    remote_src: yes

The ‘configure’ script is available in the sources, and it is used to generate the Makefile based on the installed software. The ‘make’ command will build the binaries from the source code.

- name: Build the project
  command: "{{ item }} chdir={{ ERL_TOP }}"
  with_items:
    - ./configure
    - make
  environment:
    ERL_TOP: "{{ ERL_TOP }}"

After the ‘make’ command finishes, the ‘bin’ folder in the top-level sources directory will contain the Erlang ‘erl’ interpreter. The Makefile also has targets to run tests to verify the built binaries. Since we are invoking the test execution remotely through Ansible, -noshell -noinput are passed as arguments to the Erlang interpreter, as shown in the playbook below.

- name: Prepare tests
  command: "{{ item }} chdir={{ ERL_TOP }}"
  with_items:
    - make release_tests
  environment:
    ERL_TOP: "{{ ERL_TOP }}"

- name: Execute tests
  shell: "cd {{ TEST_SERVER_DIR }} && {{ ERL_TOP }}/bin/erl -noshell -noinput -s ts install -s ts smoke_test batch -s init stop"

You need to verify that the tests have passed successfully by checking the $ERL_TOP/release/tests/test_server/index.html page in a browser. A screenshot of the test results is shown in Figure 1:

Erlang test results
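
If you would rather inspect the summary from the host, a small additional task (an assumption on my part, not part of the playbook above) can copy the generated report back using the Ansible fetch module:

- name: Fetch the test results summary (optional helper task)
  fetch:
    src: "{{ TEST_SERVER_DIR }}/index.html"
    dest: /tmp/erlang-test-results/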

The built executables and libraries can then be installed on the system using the make install command. By default, the install directory is /usr/local.

- name: Install
  command: "{{ item }} chdir={{ ERL_TOP }}"
  with_items:
    - make install
  become: true
  environment:
    ERL_TOP: "{{ ERL_TOP }}"
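
If you prefer a different installation prefix instead of /usr/local, the configure step can be given one explicitly. The task below is a variation on the earlier configure task, not something the article itself uses; the prefix path is just an example:

- name: Configure with a custom installation prefix (optional variation)
  command: "./configure --prefix={{ ansible_env.HOME }}/erlang-install chdir={{ ERL_TOP }}"
  environment:
    ERL_TOP: "{{ ERL_TOP }}"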

The documentation can also be generated and installed as shown below:

- name: Make docs
  shell: "cd {{ ERL_TOP }} && make docs"
  environment:
    ERL_TOP: "{{ ERL_TOP }}"
    FOP_HOME: "{{ ERL_TOP }}/fop"
    FOP_OPTS: "-Xmx2048m"

- name: Install docs
  become: true
  shell: "cd {{ ERL_TOP }} && make install-docs"
  environment:
    ERL_TOP: "{{ ERL_TOP }}"

The total available RAM (2 GB) is specified in the FOP_OPTS environment variable. The complete playbook to download, compile, execute the tests, and also generate the documentation is given below:

---
- name: Setup Erlang build
  hosts: erlang
  gather_facts: true
  tags: [release]

  vars:
    ERL_VERSION: "otp_src_{{ version | default('19.0') }}"
    ERL_DIR: "{{ ansible_env.HOME }}/installs/erlang"
    ERL_TOP: "{{ ERL_DIR }}/{{ ERL_VERSION }}"
    TEST_SERVER_DIR: "{{ ERL_TOP }}/release/tests/test_server"

  tasks:
    - name: Update the software package repository
      become: true
      yum:
        name: '*'
        update_cache: yes

    - name: Install dependencies
      become: true
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - wget
        - make
        - gcc
        - perl
        - m4
        - ncurses-devel
        - sed
        - libxslt
        - fop

    - name: Create destination directory
      file: path="{{ ERL_DIR }}" state=directory

    - name: Download and extract Erlang source tarball
      unarchive:
        src: "http://erlang.org/download/{{ ERL_VERSION }}.tar.gz"
        dest: "{{ ERL_DIR }}"
        remote_src: yes

    - name: Build the project
      command: "{{ item }} chdir={{ ERL_TOP }}"
      with_items:
        - ./configure
        - make
      environment:
        ERL_TOP: "{{ ERL_TOP }}"

    - name: Prepare tests
      command: "{{ item }} chdir={{ ERL_TOP }}"
      with_items:
        - make release_tests
      environment:
        ERL_TOP: "{{ ERL_TOP }}"

    - name: Execute tests
      shell: "cd {{ TEST_SERVER_DIR }} && {{ ERL_TOP }}/bin/erl -noshell -noinput -s ts install -s ts smoke_test batch -s init stop"

    - name: Install
      command: "{{ item }} chdir={{ ERL_TOP }}"
      with_items:
        - make install
      become: true
      environment:
        ERL_TOP: "{{ ERL_TOP }}"

    - name: Make docs
      shell: "cd {{ ERL_TOP }} && make docs"
      environment:
        ERL_TOP: "{{ ERL_TOP }}"
        FOP_HOME: "{{ ERL_TOP }}/fop"
        FOP_OPTS: "-Xmx2048m"

    - name: Install docs
      become: true
      shell: "cd {{ ERL_TOP }} && make install-docs"
      environment:
        ERL_TOP: "{{ ERL_TOP }}"

The playbook can be invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/erlang.yml -e "version=19.0" --tags "release" -K

Building from the Git repository

We can build the Erlang/OTP sources from the Git repository. The complete playbook is given below for reference:

- name: Setup Erlang Git build
  hosts: erlang
  gather_facts: true
  tags: [git]

  vars:
    GIT_VERSION: "otp"
    ERL_DIR: "{{ ansible_env.HOME }}/installs/erlang"
    ERL_TOP: "{{ ERL_DIR }}/{{ GIT_VERSION }}"
    TEST_SERVER_DIR: "{{ ERL_TOP }}/release/tests/test_server"

  tasks:
    - name: Update the software package repository
      become: true
      yum:
        name: '*'
        update_cache: yes

    - name: Install dependencies
      become: true
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - wget
        - make
        - gcc
        - perl
        - m4
        - ncurses-devel
        - sed
        - libxslt
        - fop
        - git
        - autoconf

    - name: Create destination directory
      file: path="{{ ERL_DIR }}" state=directory

    - name: Clone the repository
      git:
        repo: "https://github.com/erlang/otp.git"
        dest: "{{ ERL_DIR }}/otp"

    - name: Build the project
      command: "{{ item }} chdir={{ ERL_TOP }}"
      with_items:
        - ./otp_build autoconf
        - ./configure
        - make
      environment:
        ERL_TOP: "{{ ERL_TOP }}"

The ‘git’ and ‘autoconf’ software packages are required for downloading and building the sources from the Git repository. The Ansible Git module is used to clone the remote repository. The source directory provides an otp_build script to create the configure script. You can invoke the above playbook as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/erlang.yml --tags "git" -K
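
As a quick sanity check after either build, a task such as the following (a sketch; it assumes the build finished and that ERL_TOP points at the source tree) can print the OTP release of the freshly built interpreter:

- name: Print the OTP release of the built interpreter (optional check)
  command: >
    {{ ERL_TOP }}/bin/erl -noshell
    -eval 'io:format("~s~n", [erlang:system_info(otp_release)]), halt().'
  register: otp_release

- name: Show the detected release
  debug:
    msg: "Built Erlang/OTP release: {{ otp_release.stdout }}"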

You are encouraged to read the complete installation documentation at: https://github.com/erlang/otp/blob/master/HOWTO/INSTALL.md.

I had given a talk on “Opportunities in Free (Libre) and Open Source Software” (FLOSS) on Saturday, March 2, 2019 at the Computer Society of India, Madras Chapter, jointly organized by the IEEE Computer Society, Madras Chapter and the ACM India Chennai Professional Chapter. The Computer Society of India, Education Directorate is located opposite the Institute of Mathematical Sciences in Taramani, close to Tidel Park. Students, IT professionals and professors from IIT Madras, Anna University and engineering colleges in and around Chennai attended the event.

Session in progress

At around 6:00 p.m. people had assembled for networking, snacks and beverages. I started the “Building Careers with FLOSS” presentation at 6:30 p.m. Prof. Dr. D. Janakiram, CSE, IIT Madras also shared his insights on the benefits of learning from source code available under a FLOSS license. The session was very interactive and the audience asked a lot of good questions. Dr. B. Govindarajulu, author of the famous “IBM PC and Clones: Hardware, Troubleshooting and Maintenance” book, then presented his views on creating a Computer History Museum in Chennai. Dinner was served around 8:00 p.m. at the venue.

Group photo

A review of my book has been published in Volume 14, No. 1 (January-March 2019) of the IEEE India Council Newsletter. Excerpts from Chapter 4 of my book, on “Project Guidelines”, are also available. I had also written an article on “Seasons of Code”, which was published in the same edition of the IEEE newsletter.

Special thanks to Mr. H. R. Mohan, Editor, IEEE India and Chairman, ACM Professional Chapter, Chennai for organizing the event and for the logistics support.

[Published in Open Source For You (OSFY) magazine, August 2017 edition.]

Introduction

In this sixth article in the DevOps series, we will install Jenkins using Ansible and set up a Continuous Integration (CI) build for a project that uses Git. Jenkins is Free and Open Source automation server software that is used to build, deploy and automate projects. It is written in Java and released under the MIT license. A number of plugins are available to integrate Jenkins with other tools such as version control systems, APIs and databases.

Setting it up

A CentOS 6.8 Virtual Machine (VM) running on KVM will be used for the installation. Internet access should be available from the guest machine. The Ansible version used on the host (Parabola GNU/Linux-libre x86_64) is 2.3.0.0. The ansible/ folder contains the following files:

ansible/inventory/kvm/inventory
ansible/playbooks/configuration/jenkins.yml
ansible/playbooks/admin/uninstall-jenkins.yml

The IP address of the guest CentOS 6.8 VM is added to the inventory file as shown below:

jenkins ansible_host=192.168.122.120 ansible_connection=ssh ansible_user=root ansible_password=password

An entry for the jenkins host is also added to the /etc/hosts file as indicated below:

192.168.122.120 jenkins

Installation

The playbook to install the Jenkins server on the CentOS VM is given below:

---
- name: Install Jenkins software
  hosts: jenkins
  gather_facts: true
  become: yes
  become_method: sudo
  tags: [jenkins]

  tasks:
    - name: Update the software package repository
      yum:
        name: '*'
        update_cache: yes

    - name: Install dependencies
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - java-1.8.0-openjdk
        - git
        - texlive-latex
        - wget

    - name: Download jenkins repo
      command: wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo

    - name: Import Jenkins CI key
      rpm_key:
        key: http://pkg.jenkins-ci.org/redhat/jenkins-ci.org.key
        state: present

    - name: Install Jenkins
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - jenkins

    - name: Allow port 8080
      shell: iptables -I INPUT -p tcp --dport 8080 -m state --state NEW,ESTABLISHED -j ACCEPT

    - name: Start the server
      service:
        name: jenkins
        state: started

    - wait_for:
        port: 8080

The playbook first updates the Yum repository and installs the Java OpenJDK software dependency required by Jenkins. The Git and TeX Live LaTeX packages are required to build our project, github.com/shakthimaan/di-git-ally-managing-love-letters (now at https://gitlab.com/shakthimaan/di-git-ally-managing-love-letters). We then download the Jenkins repository file and import the repository GPG key. The Jenkins server is then installed, port 8080 is allowed through the firewall, and the playbook waits for the server to listen on port 8080. The above playbook can be invoked using the following command:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/jenkins.yml -vv

Configuration

You can now open http://192.168.122.120:8080 in the browser on the host to start configuring Jenkins. The web page will prompt you to enter the initial Administrator password from /var/lib/jenkins/secrets/initialAdminPassword to proceed further. This is shown in Figure 1:

Unlock Jenkins
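
Instead of logging in to the VM to read this file, the password can also be fetched with a small ad hoc play; the snippet below is a convenience sketch and is not part of the installation playbook:

- name: Read the initial Jenkins administrator password (optional helper)
  hosts: jenkins
  gather_facts: false
  tasks:
    - name: Read the secrets file
      command: cat /var/lib/jenkins/secrets/initialAdminPassword
      register: initial_password

    - name: Display the password
      debug:
        msg: "{{ initial_password.stdout }}"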

The second step is to install plugins. For this demonstration, you can select the “Install suggested plugins” option, and later install any of the plugins that you require. Figure 2 displays the selected option:

Customize Jenkins

After you select the “Install suggested plugins” option, the plugins will get installed as shown in Figure 3:

Getting Started

An admin user is required for managing Jenkins. After installing the plugins, a form is shown for you to enter the user name, password, name and e-mail address of the administrator. A screenshot of this is shown in Figure 4:

Create First Admin User

Once the administrator credentials are stored, a “Jenkins is ready!” page will be displayed, as depicted in Figure 5:

Jenkins is ready!

You can now click on the “Start using Jenkins” button to open the default Jenkins dashboard shown in Figure 6:

Jenkins Dashboard

An example of a new project

Let’s now create a new build for the github.com/shakthimaan/di-git-ally-managing-love-letters project. Provide a name in the “Enter an item name” text box and select the “Freestyle project”. Figure 7 shows the screenshot for creating a new project:

Enter an item name

The next step is to add the GitHub repository to the “Repositories” section. The GitHub HTTPS URL is provided, as we are not going to use any credentials in this example. By default, the master branch will be built. The form to enter the GitHub URL is shown in Figure 8:

Add GitHub repo

A Makefile is available in the project source code, and hence we can simply invoke “make” to build the project. The “Execute shell” option is chosen in the “Build” step, and the “make clean; make” command is added to the build step as shown in Figure 9.

Build step

From the left panel, you can click on the “Build Now” link for the project to trigger a build. After a successful build, you should see a screenshot similar to Figure 10.

Build success

Uninstall

An uninstall script to remove the Jenkins server is available in the playbooks/admin folder. It is given below for reference:

---
- name: Uninstall Jenkins
  hosts: jenkins
  gather_facts: true
  become: yes
  become_method: sudo
  tags: [remove]

  tasks:
    - name: Stop Jenkins server
      service:
        name: jenkins
        state: stopped

    - name: Uninstall packages
      package:
        name: "{{ item }}"
        state: absent
      with_items:
        - jenkins

The script can be invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/admin/uninstall-jenkins.yml

I have been trying to hold regular monthly Emacs meetups online, starting in 2018.

The following are the meeting minutes and notes from the Jitsi meetings held online in the months of February and March 2018.

February 2018

The February 2018 meetup was primarily focussed on using Emacs for publishing.

Using Emacs with Hakyll to build websites and resumes was discussed. It is also possible to output multiple formats (PDF, YAML, text) from the same source.

I shared my shakthimaan-blog sources, which use Hakyll to generate the web site. We also discussed the advantages of using static site generators, especially when you have large user traffic to your web site.

I had created the xetex-book-template for creating multilingual book PDFs. Its supported features and usage were discussed in the meeting.

Kushal Das asked about keyboard use in Emacs, in particular for Control and Alt, as he was using a Kinesis keyboard. The list of Emacs keyboard options at http://ergoemacs.org/emacs/emacs_best_keyboard.html was shared. The advantage of using thumb keys for Control and Alt is obvious with the bowl-shaped keyboard layout of the Kinesis.

We also talked about the Emacs Web Browser (eww), and suggested the use of mu4e for checking e-mails with Emacs.

March 2018

At the beginning of the meetup, the participants asked if a live stream was available, but we are not doing that with Jitsi at this point in time.

For drawing inside Emacs, I had suggested ASCII art using Artist Mode.

Emacs can also render PDFs inline, as the following (old) blog post shows: http://www.idryman.org/blog/2013/05/20/emacs-and-pdf/. nnnick then had a question on “Why Emacs?”:

nnnick 19:23:00
Can you tell me briefly why emacs is preferred over other text editors

The discussion then moved to the customization features and extensibility of Emacs, which make it well suited to individual needs.

For people who want to start with a basic configuration for Emacs, the following repository was suggested: https://github.com/technomancy/emacs-starter-kit.

I had also shared links on using Org mode and scrum-mode for project management.

I shared my Cask setup link https://gitlab.com/shakthimaan/cask-dot-emacs and mentioned that with a rolling distribution like Parabola GNU/Linux-libre, it was quite easy to re-run install.sh for newer Emacs versions, and get a consistent setup.

In order to SSH into local or remote systems (VMs), Tramp mode was suggested.

I also shared my presentation on “Literate DevOps” inspired by Howardism https://gitlab.com/shakthimaan/literate-devops-using-gnu-emacs/blob/master/literate-devops.org.

Org entries can also be used to keep track of personal journal entries. Date trees are helpful in this context, as shown in the following web page: http://members.optusnet.com.au/~charles57/GTD/datetree.html.

Tejas asked about using Org files for executing code in different programming languages. This can be done using Org Babel, and the same was discussed.

Tejas 19:38:23
can org mode files be used to keep executable code in other languages apart from elisp?

mbuf 19:38:42
Yes

mbuf 19:39:15
https://orgmode.org/worg/org-contrib/babel/languages.html

Other useful productivity tools were also discussed.

Tejas said that he uses perspective-el, but it does not have a save option; it just provides separate workspaces to switch between, basically one per project.

A screenshot of the session in progress is shown below:

Emacs APAC March 2018 meetup

Arun also suggested using Try for trying out Emacs packages without installation, and the cycle-resize package for managing windows.

Tejas and Arun then shared their Emacs configuration files.

Arun 19:51:37
https://github.com/aruntakkar/emacs.d

Tejas 19:51:56
https://github.com/tejasbubane/dotemacs

We closed the session with a few references on learning Emacs Lisp:

Tejas 20:02:42
before closing off, can you guys quickly point me to some resources for learning elisp?

mbuf 20:03:59
Writing GNU Emacs Extensions.

mbuf 20:04:10
Tejas: Emacs Lisp manual

Tejas 20:04:35
Thanks 

I had organized a hands-on scripting workshop using the Elixir programming language for the Computer Science and Engineering department, MVJ College of Engineering, Whitefield, Bengaluru on May 5, 2018.

Elixir scripting session

The department was interested in organizing a scripting workshop, and I felt that a new programming language like Elixir, with the power of the Erlang Virtual Machine (VM), would be a good choice. The syntax and semantics of the Elixir language were discussed along with the following topics:

  • Basic types
  • Basic operators
  • Pattern matching
  • case, cond and if
  • Binaries, strings and char lists
  • Keywords and maps
  • Modules and functions

Students had set up Erlang and Elixir on their laptops, and tried the code snippets in the Elixir interpreter. The complete set of examples is available in the following repo:

https://gitlab.com/shakthimaan/elixir-scripting-workshop

A group photo was taken at the end of the workshop.

Elixir scripting session

I would like to thank Prof. Karthik Myilvahanan J for working with me in organizing this workshop.

Resort
Infinity pool
Floating restaurant
Rich and poor boats
Udupi temple
Lagoon
St. Mary's island
Sunset

I wanted to start the New Year (2018) by organizing an Emacs meetup session in the APAC time zone. Since there are a number of users in different cities, I thought a virtual session would be ideal. An online Google Hangout session was scheduled for Monday, January 15, 2018 at 1000 IST.

Hangout announcement

Although I announced the session only on Twitter and IRC (#emacs on irc.freenode.net), a number of Emacsers joined in. The chat log, with the useful web links that were shared, is provided below for reference.

We started our discussion on organizing Org files and maintaining TODO lists.

9:45 AM Suraj Ghimire: it is clear and loud :) video quality is good too 
                       yes. is there any session today on hangouts ?
                       wow thats nice thanks you for introducing me to emacs :). I am happy emacs user
                       should we add bhavin and vharsh for more testing
                       oh you already shared :)
	
9:55 AM Suraj Ghimire:  working on some of my todos https://i.imgur.com/GBylmeQ.png

For the few Vim users who wanted to try Emacs Org mode, it was suggested that they get started with Spacemacs. Other project management and IRC tools for Emacs were also shared:

  HARSH VARDHAN can now join this call.
  HARSH VARDHAN joined group chat.
  Google Apps can now join this call.
  Google Apps joined group chat.
	
10:05 AM Shakthi Kannan: http://spacemacs.org/
                         https://github.com/ianxm/emacs-scrum
                         https://github.com/ianxm/emacs-scrum/blob/master/example-report.txt
                         https://www.emacswiki.org/emacs/CategoryWebBrowser
                         ERC for IRC chat
                         https://www.emacswiki.org/emacs/InternetRelayChat

10:13 AM Shakthi Kannan: https://github.com/skeeto/elfeed
                         https://www.emacswiki.org/emacs/EmacsMailingLists
	
10:18 AM Suraj Ghimire: I started using emacs after your session on emacs, before that i used to 
                        get scared due to lot of shortcuts. I will work on improvements you told me.
	
10:19 AM Shakthi Kannan: M - Alt, C - Control
                         http://www.tldp.org/HOWTO/Emacs-Beginner-HOWTO-3.html

  Google Apps left group chat.
  Google Apps joined group chat.
  Sacha Chua can now join this call.
  Sacha Chua joined group chat.

We then discussed key bindings, available modes, and reading material to learn and master Emacs:

10:27 AM Shakthi Kannan: http://shop.oreilly.com/product/9780596006488.do
	
10:31 AM Shakthi Kannan: https://www.masteringemacs.org/
                         http://shakthimaan.com/tags/emacs.html
                         http://shakthimaan.com/posts/2016/04/04/introduction-to-gnu-emacs/news.html

  Dhavan Vaidya can now join this call.
  Dhavan Vaidya joined group chat.
  Sacha Chua left group chat.
	
10:42 AM Shakthi Kannan: https://www.finseth.com/craft/
                         http://shop.oreilly.com/product/9781565922617.do

  Rajesh Deo can now join this call.
  Rajesh Deo joined group chat.

Users also wanted to know about language modes for Erlang:

10:52 AM Shakthi Kannan: http://www.lambdacat.com/post-modern-emacs-setup-for-erlang/

  HARSH VARDHAN left group chat.
  Aaron Hall can now join this call.
	
10:54 AM Shakthi Kannan: https://github.com/elixir-editors/emacs-elixir

Aaron Hall joined the channel and had a few interesting questions. After an hour, we ended the call.

  Aaron Hall joined group chat.
	
10:54 AM Aaron Hall: hi!
10:54 AM Dhavan Vaidya: hi!
	
10:55 AM Aaron Hall: This is really cool!

Maikel Yugcha can now join this call.
Maikel Yugcha joined group chat.
	
10:57 AM Aaron Hall: Anyone here using Emacs as their window manager?
	
10:57 AM Suraj Ghimire: not yet :)
	
10:57 AM Shakthi Kannan: I am "mbuf" on IRC. http://stumpwm.github.io/
	
10:58 AM Aaron Hall: What about on servers? I just played around, but I like tmux for persistence and emacs inside of tmux.
	
10:59 AM Shakthi Kannan: https://github.com/pashinin/workgroups2
	
11:00 AM Aaron Hall: Is anyone compiling emacs from source?

Zsolt Botykai can now join this call.
Zsolt Botykai joined group chat.
	
11:00 AM Aaron Hall: yay, me too!

Zsolt Botykai left group chat.
	
11:00 AM Aaron Hall: it wasn't easy to start, the config options are hard
                     I had trouble especially with my fonts until I got my configure right

Maikel Yugcha left group chat.
	
11:03 AM Shakthi Kannan: https://github.com/shakthimaan/cask-dot-emacs
	
11:04 AM Aaron Hall anyone using Haskell? With orgmode? I've been having a lot of trouble with that...
                    code blocks are hard to get working
                    ghci
                    inferior?
                    not really sure
                    it's been a while since I worked on it
                    I had a polyglot file I was working on, I got a lot of languages working
                    Python, Bash, R, Javascript,
                    I got C working too
	
11:06 AM Shakthi Kannan: Rajesh: http://company-mode.github.io/
	
11:07 AM Aaron Hall: cheers, this was fun!

Aaron Hall left group chat.
Dhavan Vaidya left group chat.
Rajesh Deo left group chat.
Google Apps left group chat.
Suraj Ghimire left group chat.

A screenshot of the Google Hangout session is shown below:

Google Hangout screenshot

We can try a different online platform for the next meetup (Monday, February 19, 2018). I would like to have the meetup on the third Monday of every month. Special thanks to Sacha Chua for her valuable inputs in organizing the online meetup session.

[Published in Open Source For You (OSFY) magazine, July 2017 edition.]

Introduction

In this fifth article in the DevOps series, we will learn to install and set up Graphite using Ansible. Graphite is a monitoring tool that was written by Chris Davis in 2006. It has been released under the Apache 2.0 license and comprises three components:

  1. Graphite-Web
  2. Carbon
  3. Whisper

Graphite-Web is a Django application and provides a dashboard for monitoring. Carbon is a server that listens to time-series data, while Whisper is a database library for storing the data.

Setting it up

A CentOS 6.8 Virtual Machine (VM) running on KVM is used for the installation. Please make sure that the VM has access to the Internet. The Ansible version used on the host (Parabola GNU/Linux-libre x86_64) is 2.2.1.0. The ansible/ folder contains the following files:

ansible/inventory/kvm/inventory
ansible/playbooks/configuration/graphite.yml
ansible/playbooks/admin/uninstall-graphite.yml

The IP address of the guest CentOS 6.8 VM is added to the inventory file as shown below:

graphite ansible_host=192.168.122.120 ansible_connection=ssh ansible_user=root ansible_password=password

Also, add an entry for the graphite host in /etc/hosts file as indicated below:

192.168.122.120 graphite

Graphite

The playbook to install the Graphite server is given below:

---
- name: Install Graphite software
  hosts: graphite
  gather_facts: true
  tags: [graphite]

  tasks:
    - name: Import EPEL GPG key
      rpm_key:
        key: http://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-6
        state: present

    - name: Add YUM repo
      yum_repository:
        name: epel
        description: EPEL YUM repo
        baseurl: https://dl.fedoraproject.org/pub/epel/$releasever/$basearch/
        gpgcheck: yes

    - name: Update the software package repository
      yum:
        name: '*'
        update_cache: yes

    - name: Install Graphite server
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - graphite-web

We first import the keys for the Extra Packages for Enterprise Linux (EPEL) repository and update the software package list. The ‘graphite-web’ package is then installed using Yum. The above playbook can be invoked using the following command:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/graphite.yml --tags "graphite"

MySQL

A backend database is required by Graphite. By default, the SQLite3 database is used, but we will install and use MySQL as shown below:

- name: Install MySQL
  hosts: graphite
  become: yes
  become_method: sudo
  gather_facts: true
  tags: [database]

  tasks:
    - name: Install database
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - mysql
        - mysql-server
        - MySQL-python
        - libselinux-python

    - name: Start mysqld server
      service:
        name: mysqld
        state: started

    - wait_for:
        port: 3306

    - name: Create graphite database user
      mysql_user:
        name: graphite
        password: graphite123
        priv: '*.*:ALL,GRANT'
        state: present

    - name: Create a database
      mysql_db:
        name: graphite
        state: present

    - name: Update database configuration
      blockinfile:
        path: /etc/graphite-web/local_settings.py
        block: |
          DATABASES = {
            'default': {
            'NAME': 'graphite',
            'ENGINE': 'django.db.backends.mysql',
            'USER': 'graphite',
            'PASSWORD': 'graphite123',
           }
          }

    - name: syncdb
      shell: /usr/lib/python2.6/site-packages/graphite/manage.py syncdb --noinput

    - name: Allow port 80
      shell: iptables -I INPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT

    - name: Allow access in the graphite-web configuration
      lineinfile:
        path: /etc/httpd/conf.d/graphite-web.conf
        insertafter: '           # Apache 2.2'
        line: '           Allow from all'

    - name: Start httpd server
      service:
        name: httpd
        state: started

As a first step, let’s install the required MySQL dependency packages and the server itself. We then start the server and wait for it to listen on port 3306. A graphite user and database is created for use with the Graphite Web application. For this example, the password is provided as plain text. In production, use an encrypted Ansible Vault password.

The database configuration file is then updated to use the MySQL credentials. Since Graphite is a Django application, the manage.py script with syncdb needs to be executed to create the necessary tables. We then allow port 80 through the firewall in order to view the Graphite dashboard. The graphite-web.conf file is updated to allow read access, and the Apache web server is started.

The above playbook can be invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/graphite.yml --tags "database"

Carbon and Whisper

The Carbon and Whisper Python bindings need to be installed before starting the carbon-cache script.

- name: Install Carbon and Whisper
  hosts: graphite
  become: yes
  become_method: sudo
  gather_facts: true
  tags: [carbon]

  tasks:
    - name: Install carbon and whisper
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - python-carbon
        - python-whisper

    - name: Start carbon-cache
      shell: /etc/init.d/carbon-cache start

The above playbook is invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/graphite.yml --tags "carbon"
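
To confirm that Carbon is accepting data, you can push a test metric to its plaintext listener, which listens on port 2003 by default. The play below is only a sketch and assumes that nc (netcat) is available on the guest; the metric name and value are arbitrary:

- name: Send a sample metric to Carbon (optional test play)
  hosts: graphite
  gather_facts: false
  tasks:
    - name: Publish one data point to the plaintext listener
      shell: echo "test.devops.metric 42 $(date +%s)" | nc -w 2 127.0.0.1 2003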

Dashboard

You can open http://192.168.122.120 in the browser on the host to view the Graphite dashboard. A screenshot of the Graphite web application is shown below:

Graphite dashboard

Uninstall

An uninstall script to remove the Graphite server and its dependency packages is required for administration. The Ansible playbook for the same is available in the playbooks/admin folder and is given below:

---
- name: Uninstall Graphite and dependencies
  hosts: graphite
  gather_facts: true
  tags: [remove]

  tasks:
    - name: Stop the carbon-cache server
      shell: /etc/init.d/carbon-cache stop

    - name: Uninstall carbon and whisper
      package:
        name: "{{ item }}"
        state: absent
      with_items:
        - python-whisper
        - python-carbon

    - name: Stop httpd server
      service:
        name: httpd
        state: stopped

    - name: Stop mysqld server
      service:
        name: mysqld
        state: stopped

    - name: Uninstall database packages
      package:
        name: "{{ item }}"
        state: absent
      with_items:
        - libselinux-python
        - MySQL-python
        - mysql-server
        - mysql
        - graphite-web

The script can be invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/admin/uninstall-graphite.yml

References

  1. Graphite documentation. https://graphite.readthedocs.io/en/latest/

  2. Carbon. https://github.com/graphite-project/carbon

  3. Whisper database. http://graphite.readthedocs.io/en/latest/whisper.html

[Published in Open Source For You (OSFY) magazine, June 2017 edition.]

Introduction

In this fourth article in the DevOps series, we will learn to install RabbitMQ using Ansible. RabbitMQ is a free and open source message broker system that supports a number of protocols such as the Advanced Message Queuing Protocol (AMQP), Streaming Text Oriented Messaging Protocol (STOMP) and Message Queue Telemetry Transport (MQTT). The software has support for a large number of client libraries for different programming languages. RabbitMQ is written using the Erlang programming language and is released under the Mozilla Public License.

Setting it up

A CentOS 6.8 virtual machine (VM) running on KVM is used for the installation. Do make sure that the VM has access to the Internet. The Ansible version used on the host (Parabola GNU/Linux-libre x86_64) is 2.2.1.0. The ansible/ folder contains the following files:

ansible/inventory/kvm/inventory
ansible/playbooks/configuration/rabbitmq.yml
ansible/playbooks/admin/uninstall-rabbitmq.yml

The IP address of the guest CentOS 6.8 VM is added to the inventory file as shown below:

rabbitmq ansible_host=192.168.122.161 ansible_connection=ssh ansible_user=root ansible_password=password

Also, add an entry for the rabbitmq host in the /etc/hosts file as indicated below:

192.168.122.161 rabbitmq

Installation

RabbitMQ requires the Erlang environment, and uses the Open Telecom Platform (OTP) framework. There are multiple sources for installing Erlang: the EPEL repository, Erlang Solutions, and the zero-dependency Erlang package provided by RabbitMQ. In this article, we will use the EPEL repository for installing Erlang.

---
- name: Install RabbitMQ server
  hosts: rabbitmq
  gather_facts: true
  tags: [server]

  tasks:
    - name: Import EPEL GPG key
      rpm_key:
        key: http://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-6
        state: present

    - name: Add YUM repo
      yum_repository:
        name: epel
        description: EPEL YUM repo
        baseurl: https://dl.fedoraproject.org/pub/epel/$releasever/$basearch/
        gpgcheck: yes

    - name: Update the software package repository
      yum:
        name: '*'
        update_cache: yes

    - name: Install RabbitMQ server
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - rabbitmq-server

    - name: Start the RabbitMQ server
      service:
        name: rabbitmq-server
        state: started

    - wait_for:
        port: 5672

After importing the EPEL GPG key and adding the EPEL repository to the system, the yum update command is executed. The RabbitMQ server and its dependencies are then installed. We wait for the RabbitMQ server to start and to listen on port 5672. The above playbook can be invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/rabbitmq.yml --tags "server"

Dashboard

The RabbitMQ management user interface (UI) is available through plugins.

- name: Start RabbitMQ Management UI
  hosts: rabbitmq
  gather_facts: true
  tags: [ui]

  tasks:
    - name: Start management UI
      command: /usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management

    - name: Restart RabbitMQ server
      service:
        name: rabbitmq-server
        state: restarted

    - wait_for:
        port: 15672

    - name: Allow port 15672
      shell: iptables -I INPUT 5 -p tcp --dport 15672 -m state --state NEW,ESTABLISHED -j ACCEPT

After enabling the management plugin, the server needs to be restarted. Since we are running it inside the VM, we need to allow the management user interface (UI) port 15672 through the firewall. The playbook invocation to set up the management UI is given below:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/rabbitmq.yml --tags "ui"

The default user name and password for the dashboard are ‘guest:guest’. From your host system, you can start a browser and open http://192.168.122.161:15672 to view the login page as shown in Figure 1. The default ‘Overview’ page is shown in Figure 2.

RabbitMQ Login
RabbitMQ Overview
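
The default guest account is fine for trying things out, but you may want to create a separate administrator before going further. A minimal sketch using the Ansible rabbitmq_user module is shown below; the ‘admin’ user name and password are placeholder values:

- name: Create a RabbitMQ administrator (optional hardening sketch)
  hosts: rabbitmq
  gather_facts: false
  tags: [admin]

  tasks:
    - name: Add an admin user with full permissions on the default vhost
      rabbitmq_user:
        user: admin
        password: admin123
        tags: administrator
        vhost: /
        configure_priv: .*
        read_priv: .*
        write_priv: .*
        state: present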

Ruby

We will use a Ruby client example to demonstrate that our installation of RabbitMQ is working fine. The Ruby Version Manager (RVM) will be used to install Ruby as shown below:

- name: Ruby client
  hosts: rabbitmq
  gather_facts: true
  tags: [ruby]

  tasks:
    - name: Import key
      command: gpg2 --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3

    - name: Install RVM
      shell: curl -sSL https://get.rvm.io | bash -s stable

    - name: Install Ruby
      shell: source /etc/profile.d/rvm.sh && rvm install ruby-2.2.6

    - name: Set default Ruby
      command: rvm alias create default ruby-2.2.6

    - name: Install bunny client
      shell: gem install bunny --version ">= 2.6.4"

After importing the required GPG keys, RVM and Ruby 2.2.6 are installed on the CentOS 6.8 VM. The bunny Ruby client for RabbitMQ is then installed. The Ansible playbook to set up Ruby is invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/rabbitmq.yml --tags "ruby"

We shall create a ‘temperature’ queue to send the values in Celsius. The consumer.rb code to receive the values from the queue is given below:

#!/usr/bin/env ruby

require "bunny"

conn = Bunny.new(:automatically_recover => false)
conn.start

chan  = conn.create_channel
queue = chan.queue("temperature")

begin
  puts " ... waiting. CTRL+C to exit"
  queue.subscribe(:block => true) do |info, properties, body|
    puts " Received #{body}"
  end
rescue Interrupt => _
  conn.close

  exit(0)
end

The producer.rb code to send a sample of five values in degrees Celsius is as follows:

#!/usr/bin/env ruby

require "bunny"

conn = Bunny.new(:automatically_recover => false)
conn.start

chan   = conn.create_channel
queue   = chan.queue("temperature")

values = ["33.5", "35.2", "36.7", "37.0", "36.4"]

values.each do |v|
  chan.default_exchange.publish(v, :routing_key => queue.name)
end
puts "Sent five temperature values."

conn.close

As soon as you start the consumer, you will get the following output:

$ ruby consumer.rb 
 ... waiting. CTRL+C to exit

You can then run the producer.rb script that writes the values to the queue:

$ ruby producer.rb

Sent five temperature values.

The received values at the consumer side are printed out as shown below:

$ ruby consumer.rb 
 ... waiting. CTRL+C to exit
 Received 33.5
 Received 35.2
 Received 36.7
 Received 37.0
 Received 36.4

We can observe the available connections and the created queue in the management user interface as shown in Figure 3 and Figure 4, respectively.

RabbitMQ Connections RabbitMQ Queues

Uninstall

It is good to have an uninstall script to remove the RabbitMQ server for administrative purposes. The Ansible playbook for the same is available in the playbooks/admin folder and is shown below:

---
- name: Uninstall RabbitMQ server
  hosts: rabbitmq
  gather_facts: true
  tags: [remove]

  tasks:
    - name: Stop the RabbitMQ server
      service:
        name: rabbitmq-server
        state: stopped

    - name: Uninstall rabbitmq
      package:
        name: "{{ item }}"
        state: absent
      with_items:
        - rabbitmq-server

The script can be invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/admin/uninstall-rabbitmq.yml

You are encouraged to read the detailed documentation at https://www.rabbitmq.com/documentation.html to know more about the usage, configuration, client libraries and plugins available for RabbitMQ.
