Configuration of the ZooKeeper Ensemble

NOTE This post is part of a series on a deployment of Apache Drill on the Azure cloud.

With the VMs and other components of the Drill topology deployed into Azure, I can now start configuring the various services.  I need to start with ZooKeeper as the Drill services depend upon it.

The major steps I need to perform on each Drill VM are:

  1. Mount the data disks that serve as the storage locations for ZooKeeper data & logs
  2. Install the ZooKeeper software
  3. Configure ZooKeeper
  4. Configure the VM to start ZooKeeper on boot
  5. Test that the ZooKeeper service is working

To get started, I SSH into the first of my three ZooKeeper VMs. All of these steps will be performed consistently across all three of my ZooKeeper VMs.  If I were a more experienced Bash script developer, I would put all of this into a single script that I would execute on each VM using the Azure CustomScript Extension for Linux (see the sketch below).
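For illustration only, here is a minimal sketch of what that might look like with the current Azure CLI (az), assuming the steps below were collected into a hypothetical script named configure-zookeeper.sh and uploaded to a blob storage account I control; the resource group, VM name and URL are placeholders:

az vm extension set \
  --resource-group drill-rg \
  --vm-name zk001 \
  --name CustomScript \
  --publisher Microsoft.Azure.Extensions \
  --settings '{"fileUris":["https://mystorage.blob.core.windows.net/scripts/configure-zookeeper.sh"],"commandToExecute":"bash configure-zookeeper.sh"}'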

Mount the Data Disks

My first action is to create the ZooKeeper directory along with sub-directories that will be used as the mount points for the data & log disks.  In my SSH session, I issue these commands:

sudo mkdir /zookeeper
sudo mkdir /zookeeper/data
sudo mkdir /zookeeper/log

Next, I look up the names of the data disks assigned to the VM.  These should come up as /dev/sdc and /dev/sdd, but it doesn't hurt to double-check:

dmesg | grep SCSI

Assuming they do come up as /dev/sdc and /dev/sdd, I now prep the disks and mount them as follows:

sudo fdisk /dev/sdc  # n, p,,, w commands at prompts
sudo mkfs -t ext4 /dev/sdc1
sudo mount /dev/sdc1 /zookeeper/data

sudo fdisk /dev/sdd  # n, p,,, w commands at prompts
sudo mkfs -t ext4 /dev/sdd1
sudo mount /dev/sdd1 /zookeeper/log

Just a quick note on the fdisk commands.  At the first prompt generated by the command, I enter n to create a new partition, then enter p to make that new partition a primary partition.  I then accept all the default settings for that partition until I reach the next generic prompt, where I enter w to write these changes to disk and exit fdisk.
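If I wanted to avoid the interactive prompts entirely, a non-interactive alternative is to partition the disks with parted instead (a sketch on my part, assuming parted is installed on the VM; sudo apt-get install parted if it is not):

sudo parted /dev/sdc --script mklabel msdos mkpart primary ext4 0% 100%
sudo parted /dev/sdd --script mklabel msdos mkpart primary ext4 0% 100%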

At this point, the data disks are mounted, but they would need to be re-mounted manually each time the VM is restarted.  To make the mounts permanent, I now need to edit the fstab file. Before doing this, I need to get the UUID value assigned by the system to these disks:

sudo lsblk --noheadings --output UUID /dev/sdc1  # record the UUID
sudo lsblk --noheadings --output UUID /dev/sdd1  # record the UUID

With the UUIDs recorded, I can now add the needed tab-delimited entries to the fstab file:

sudo vi /etc/fstab

I then create entries such as the following (substituting the UUID values as needed). Again, each column on the line is separated by a tab:

UUID=902628f2-8ef4-4e58-9b66-3849b0fe091f   /zookeeper/data   ext4   defaults,nofail,noatime   0    0
UUID=46168bdc-9e68-43fe-b84b-11f5fc7da7ad   /zookeeper/log    ext4   defaults,nofail,noatime   0    0
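If I wanted to script this step instead of editing fstab by hand, something along these lines should work (a sketch; the lsblk, printf and tee invocations are standard, but I would test it on a disposable VM first):

DATA_UUID=$(sudo lsblk --noheadings --output UUID /dev/sdc1)
LOG_UUID=$(sudo lsblk --noheadings --output UUID /dev/sdd1)
printf 'UUID=%s\t/zookeeper/data\text4\tdefaults,nofail,noatime\t0\t0\n' "$DATA_UUID" | sudo tee -a /etc/fstab
printf 'UUID=%s\t/zookeeper/log\text4\tdefaults,nofail,noatime\t0\t0\n' "$LOG_UUID" | sudo tee -a /etc/fstab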

At this point, it's a good idea to restart the VM and verify the disks are remounted correctly.  For more information on mounting drives to Linux VMs running in Azure, check out this document.
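After the reboot, a quick way to confirm both mount points came back with the expected devices is:

findmnt /zookeeper/data
findmnt /zookeeper/log
df -h /zookeeper/data /zookeeper/log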

Install the ZooKeeper Software

With the data disks attached, I can now proceed with installation of the software.  The first thing I need to do is install the Java JDK prerequisite:

sudo apt-get update
sudo apt-get -y install default-jdk

With Java installed, I need to configure the Java memory heap per the ZooKeeper documentation.  As my ZooKeeper VMs have 7 GB of RAM and are dedicated for use as ZooKeeper servers, I plan for a max heap size of 6 GB and use the following command to verify the JVM will accept these settings:

java -XX:+PrintFlagsFinal -Xms1024m -Xmx6144m -Xss1024k -XX:PermSize=384m -XX:MaxPermSize=384m -version | grep -iE 'HeapSize|PermSize|ThreadStackSize'
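The command above only confirms the JVM accepts these flags; to have the ZooKeeper service actually start with them, one option (an assumption on my part, based on zkEnv.sh sourcing a java.env file from the conf directory if one exists) would be to create /zookeeper/current/conf/java.env once the software is installed in the next step:

# /zookeeper/current/conf/java.env -- picked up by zkEnv.sh if present (assumed mechanism)
export JVMFLAGS="-Xms1024m -Xmx6144m"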

With Java properly configured, I now download the ZooKeeper software, unpack it, set up a symbolic link to make future updates easier, and adjust ownership of the installed software:

cd /zookeeper
sudo wget https://apache.cs.utah.edu/zookeeper/stable/zookeeper-3.4.8.tar.gz
sudo tar -xzvf zookeeper-3.4.8.tar.gz
sudo ln -s zookeeper-3.4.8/ current
sudo chown -R root:root /zookeeper

For a complete list of download sites for the ZooKeeper software, go to this page.

Configure ZooKeeper

With the software installed, I now need to configure it.  Configuration is pretty straightforward.  I simply need to tell ZooKeeper where its data and log directories are located and make the servers aware of each other.  This requires me to first create a configuration file called zoo.cfg in the /zookeeper/current/conf directory and then open it for editing:

cd /zookeeper/current/conf
sudo cp zoo_sample.cfg zoo.cfg
sudo vi zoo.cfg

Within the vi editor, I need to edit the path of the dataDir variable and add a variable called dataLogDir:

dataDir=/zookeeper/data
dataLogDir=/zookeeper/log

At the bottom of the config file, I add a set of entries identifying the servers in the ZooKeeper ensemble.  In my topology, these servers are named zk001, zk002 & zk003.  I will use their short names as communication between these servers takes place within an Azure Virtual Network, which provides basic DNS services for the VMs within it. It is absolutely essential that these three entries are identical on each server in my ensemble:

server.1=zk001:2888:3888
server.2=zk002:2888:3888
server.3=zk003:2888:3888
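For reference, the finished zoo.cfg might look something like this; the tickTime, initLimit, syncLimit and clientPort values are the defaults carried over from zoo_sample.cfg, and only the paths and server entries are changes I made:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/zookeeper/data
dataLogDir=/zookeeper/log
clientPort=2181
server.1=zk001:2888:3888
server.2=zk002:2888:3888
server.3=zk003:2888:3888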

Lastly, I need to create the myid file within the ZooKeeper data directory; this file allows each server to identify itself within the ensemble.  The number I use with the echo command should reflect the number assigned to the server in those last three entries I just put in my zoo.cfg file. Therefore, the following commands reflect the instructions I would execute on zk001, identified in the cfg file as server.1:

cd /zookeeper/data
sudo -s
echo 1 > myid
exit
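An equivalent one-liner that avoids switching to a root shell (just a convenience; tee produces the same file):

echo 1 | sudo tee /zookeeper/data/myid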

Configure the VM to Start ZooKeeper on Boot

Okay, here's where my limited Linux skills really catch up with me. I'm using Ubuntu, which has very specific requirements for starting up a service. If you are using a different flavor of Linux, the actions you perform to tackle this challenge may be very different.

My first action here is to copy the zkServer.sh script in /zookeeper/current/bin to /etc/init.d:

sudo cp /zookeeper/current/bin/zkServer.sh /etc/init.d/zkServer.sh

Then I need to edit the copied script in vi:

sudo vi /etc/init.d/zkServer.sh

In the vi editor, I comment out the lines assigning values to ZOOBIN & ZOOBINDIR and hardcode values for the variables used by this script and the zkEnv.sh script that it calls out to. If you are an experienced Linux developer who has a better way of tackling variable assignment, please let me know; at least in Ubuntu, scripts started on init do not have access to any environment variables other than PATH, hence the hardcoding:

## use POSTIX interface, symlink is followed automatically
#ZOOBIN="${BASH_SOURCE-$0}"
#ZOOBIN="$(dirname "${ZOOBIN}")"
#ZOOBINDIR="$(cd "${ZOOBIN}"; pwd)"

ZOOBIN=/zookeeper/current/bin
ZOOBINDIR=/zookeeper/current/bin
ZOOCFGDIR=/zookeeper/current/conf
ZOOCFG=zoo.cfg
ZOO_LOG_DIR=/zookeeper/log
ZOO_DATA_DIR=/zookeeper/data

With the script edited, I instruct the system to call it on boot:

sudo update-rc.d -f zkServer.sh defaults
sudo update-rc.d -f zkServer.sh enable
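On newer Ubuntu releases that use systemd, a small unit file would be an alternative to the init.d approach above. This is only a sketch I have not tested on these VMs, and the file name and description are my own:

# /etc/systemd/system/zookeeper.service (hypothetical unit file)
[Unit]
Description=Apache ZooKeeper server
After=network.target

[Service]
Type=forking
Environment=ZOO_LOG_DIR=/zookeeper/log
ExecStart=/zookeeper/current/bin/zkServer.sh start
ExecStop=/zookeeper/current/bin/zkServer.sh stop
Restart=on-failure

[Install]
WantedBy=multi-user.target

The unit would then be enabled with sudo systemctl enable zookeeper and started with sudo systemctl start zookeeper.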

At this point, I can tell the system to launch the service with this command:

sudo service zkServer.sh start

But I prefer to reboot the server for a proper test:

sudo shutdown -r now

Test the ZooKeeper Service

With all the ZooKeeper servers properly configured and rebooted, it's now time to connect to the ZooKeeper service on each server to verify it is working properly. To do this, I SSH into any one of the ZooKeeper VMs and issue the following commands:

echo ruok | nc zk001 2181
echo ruok | nc zk002 2181
echo ruok | nc zk003 2181

Each server should respond with imok, telling me that the ZooKeeper service on each system is running and healthy.
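To see which server is currently the leader, I can also send the stat four-letter command and look at the Mode line in each response; one server should report leader and the other two follower:

echo stat | nc zk001 2181 | grep Mode
echo stat | nc zk002 2181 | grep Mode
echo stat | nc zk003 2181 | grep Mode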

Next, I connect to the ensemble using the ZooKeeper client:

cd /zookeeper/current/bin
sudo ./zkCli.sh -server zk001:2181,zk002:2181,zk003:2181

Once connected, I enter ls / to view the nodes ZooKeeper is managing.  If I get a non-error response, I am sufficiently comfortable that ZooKeeper is up and running properly.
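A freshly built ensemble typically reports just the built-in /zookeeper node. As an extra sanity check, I can create and remove a throwaway znode from within the same client session (the node name below is arbitrary):

ls /
create /drill-test "hello"
ls /
delete /drill-test
quit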