
Simple ways to do service discovery in Linux

by Brandon Jones

Service discovery cannot be properly defined without first acknowledging an existing computer network. A computer network sets the communication protocols that network devices need to share available resources through its network nodes. This sharing of resources involves both the devices and the services pre-defined on that network.

The automatic discovery or detection of these network devices and services on a computer network is a workable definition of service discovery. For service discovery on a configured computer network to be complete, it needs the assistance of a network protocol called Service Discovery Protocol (SDP). With such protocols in place, network users and administrators do not have to rely on their network configuration skill sets to get things going.

Since service discovery communicates with software agents on a computer network, its communication protocols need to adhere to a common networking language so that continuous user intervention is not needed every time a critical step must be executed.

Conceptualizing service discovery in a production environment

Traditionally, application development took a monolithic approach. That approach was later refactored so that a single application exists as small, synchronized pieces working towards a common goal. This concept defines the usefulness of microservices, whereby separate components work towards a single application objective. SaaS and enterprise applications are common candidates for this approach to application development.

An app defined by small components makes it easier to eliminate bugs and to identify and replace a component that is not fully functional. Because these components are disposable, deploying them in a production environment links them with a network service that keeps track of the components' locations and the other services attached to them.

This automatic association of service instances with production app components is what service discovery boils down to.

Popular open-source service discovery tools for Linux

The evolution of microservice architecture and its contribution to developing modern apps has made service discovery a must-have. When a new app component is deployed, service discovery eliminates any latency between the app and other service endpoints. If you are considering facilitating service discovery functionality through microservices, you should get acquainted with these open-source tools.

Consul

Besides meeting the service discovery objective, Consul is an effective tool for monitoring and configuring a network’s production settings. It creates a peer-to-peer data store and dynamic clusters through Serf’s library. For this reason, this service discovery tool is highly distributed.

Consul presents itself as a key-value store for configuring and managing a production environment. Serf acts as a gossip protocol that handles concerns such as failure detection in the created clusters. Raft, a consensus protocol, keeps the system consistent in this production environment.

Main Consul features

  • As long as an applicable app interface such as MySQL, DNS, or HTTP exists, services can easily and automatically register themselves. It is also easy to detect and encapsulate other external services needed for the correct functioning of the configured network environment.
  • This tool has extensive support for DNS configuration. It makes the DNS integration process seamless.
  • Consul performs health checks on the configured clusters; when a cluster has health issues, the diagnostic results are registered in a log sent to the relevant network operator.
  • The key/value storage feature of Consul is effective in feature flagging and making dynamic configurations.
  • This tool works with HTTP APIs to store and retrieve key/value data defined and confined within a distributed key/value store, as the sketch below shows.
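To illustrate that last point, here is a minimal sketch of the key/value HTTP API, assuming a Consul agent is already listening on the default port 8500 of localhost; the key name app/config/max_conns is only a hypothetical example.

# Write a value into the distributed key/value store (hypothetical key)
curl --request PUT --data '100' http://127.0.0.1:8500/v1/kv/app/config/max_conns

# Read the value back; the response is JSON with the value Base64-encoded
curl http://127.0.0.1:8500/v1/kv/app/config/max_conns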

Setting up a Consul cluster

This guide gives you a practical idea of how to achieve service discovery through a Consul cluster built from multiple nodes.

Prerequisites
  • This setup will be more productive if you have access to three Linux servers.
  • All three of your servers should have certain ports opened: 8300 (TCP), 8301 (TCP & UDP), 8302 (TCP & UDP), 8400 (TCP), 8500 (TCP), and 8600 (TCP & UDP). Depending on where your servers run, e.g., AWS, GCP, or Azure, your firewall and security group rules should be configured so that the mentioned ports can communicate freely; a sketch of one way to open them follows this list.
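For example, on a server that uses the ufw firewall, the ports above could be opened as follows. This is only a minimal sketch; the equivalent rules for iptables, firewalld, or your cloud provider's security groups will differ.

sudo ufw allow 8300/tcp
sudo ufw allow 8301
sudo ufw allow 8302
sudo ufw allow 8400/tcp
sudo ufw allow 8500/tcp
sudo ufw allow 8600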
Consul cluster setup

Since we are using three servers, we will be implementing a three-node Consul cluster. We can give these nodes the names consul-1, consul-2, and consul-3. The following steps will lead us to a fully functioning Consul cluster.

Installing and configuring Consul on the three defined nodes

Steps one to three apply to all the defined Consul nodes.

Step 1: On each server terminal, add the HashiCorp package repository and install Consul using the commands applicable to your Linux distribution. The commands below target Debian/Ubuntu-based systems; the HashiCorp documentation covers the installation procedure for other Linux package managers.

curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install consul
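Optionally, you can confirm that the installation succeeded by printing the installed version.

consul version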

Step 2: The following directories should be created. Pay attention to the directory paths.

sudo mkdir -p /etc/consul.d/scripts 
sudo mkdir /var/consul

Step 3: Out of the three servers, choose one and run the following command on its terminal to create your Consul secret. Save the generated secret in a text file.

consul keygen

Step 4: All of your three servers should have the following config file. Create it as shown below.

sudo vi /etc/consul.d/config.json

Populate the config.json file created above with the following data. In this file, the "encrypt" value should be replaced with the Consul secret you generated in step 3, and the "start_join" value should contain the respective IP addresses of the three servers you chose to use.

{ 
    "bootstrap_expect": 3, 
    "client_addr": "0.0.0.0", 
    "datacenter": "Us-Central", 
    "data_dir": "/var/consul", 
    "domain": "consul", 
    "enable_script_checks": true, 
    "dns_config": { 
        "enable_truncate": true, 
        "only_passing": true 
    }, 
    "enable_syslog": true, 
    "encrypt": "generated_Consul_key_value", 
    "leave_on_terminate": true, 
    "log_level": "INFO", 
    "rejoin_after_leave": true, 
    "server": true, 
    "start_join": [
        "server-1_IP", 
        "server-2_IP", 
        "server-3_IP" 
    ], 
    "ui": true 
}
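Before wiring Consul into systemd, you can ask it to sanity-check the configuration directory. consul validate is a built-in subcommand and should report that the configuration is valid.

consul validate /etc/consul.d/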
Creating the Consul service

All of our three nodes or servers should pass through the following steps.

Step 1: Creating a Systemd file

sudo vi /etc/systemd/system/consul.service

After the file is created, populate it with the following data. If you installed Consul through a package manager, the binary may live in /usr/bin rather than /usr/local/bin; running "which consul" reveals the correct path to use in ExecStart.

[Unit] 
Description=Consul Startup process 
After=network.target 

[Service] 
Type=simple 
ExecStart=/bin/bash -c '/usr/local/bin/consul agent -config-dir /etc/consul.d/' 
TimeoutStartSec=0 

[Install] 
WantedBy=default.target

Step 2: Perform a reload on the system daemons

sudo systemctl daemon-reload
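Optionally, if you also want Consul to come back up automatically after a reboot, enable the unit on each server; this is not strictly required for the rest of the guide.

sudo systemctl enable consul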
Bootstrapping and starting the cluster

To launch the Consul service on the first server or consul-1, execute the following command on its terminal.

sudo systemctl start consul

To launch the Consul service on the other two servers, consul-2 and consul-3, execute the same command on their respective terminals.

sudo systemctl start consul

On each of the three servers, you can check the cluster status by running the following command in its terminal.

 /usr/local/bin/consul members

To confirm that your Consul cluster setup was a success, the output from the above command should look similar to the following.

[fosslinux@consul-1 ~]$ /usr/local/bin/consul members
Node      Address          Status  Type    Build  Protocol  DC          Segment
consul-1  10.128.0.7:8301  alive   server  1.2.0  2         us-central  <all>
consul-2  10.128.0.8:8301  alive   server  1.2.0  2         us-central  <all>
consul-3  10.128.0.9:8301  alive   server  1.2.0  2         us-central  <all>
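If you are curious which of the three servers was elected leader, Consul's operator subcommand can list the Raft peers; the exact output columns vary between Consul versions.

/usr/local/bin/consul operator raft list-peers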
Accessing the Consul UI

If your installed Consul version is 1.2.0 or later, it ships with a built-in Consul UI component. This Consul UI is web-based, and accessing it in your browser requires that you adhere to the following URL syntax rule.

http://<your-consul-server-IP-address>:8500/ui

An example implementation of the above URL syntax rule will be something similar to the following:

http://46.129.162.98:8500/ui
Consul UI

Practicality of Consul

The downside of using Consul is the inherent complexity of the distributed systems configured with it. This problem is a general one and depends on the architecture of those systems; it has nothing to do with Consul's performance.

An advantage of working with Consul is that it ships with all the needed libraries, making it unnecessary for users to define and use third-party libraries. Its conceptualization can be likened to Netflix's OSS Sidecar, in that non-Zookeeper clients remain discoverable because they can register on the system.

The prominence of the Consul service discovery tool has attracted reputable companies such as SendGrid, Percolate, DigitalOcean, Outbrain, and EverythingMe.

Etcd

The Etcd service discovery tool offers key/value store functionality similar to that found in Consul and Zookeeper. It was a key CoreOS component before that OS was deprecated. The Go programming language was key in its development, and it also uses Raft to handle its consensus protocol.

It is fast and reliable in providing JSON-based, HTTP-based APIs, a provision further complemented by query and push notifications. In a practical setting, the defined or created cluster will host five or seven nodes. On top of service discovery, microservice architectures that implement Etcd in their containers also benefit from the registration of those services.

Under service registration, Etcd handles the writing of the needed key-value pair. Under service discovery, Etcd handles the reading of the created key-value pair.
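To make this concrete, here is a minimal sketch of how a service instance might register itself and be discovered, assuming a running etcd endpoint, the v3 API, and a hypothetical key prefix /services/web; the lease keeps the registration alive only while the service renews it.

# Grant a 60-second lease and capture its ID (field 2 of the command's output)
LEASE_ID=$(ETCDCTL_API=3 etcdctl lease grant 60 | awk '{print $2}')

# Service registration: write the instance under the lease (hypothetical key and value)
ETCDCTL_API=3 etcdctl put --lease=$LEASE_ID /services/web/10.0.0.5 '{"port": 8080}'

# Service discovery: read every instance registered under the prefix
ETCDCTL_API=3 etcdctl get --prefix /services/web/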

For other applications to communicate with Etcd without speaking its API directly, they can rely on the confd project, which creates static configuration files out of Etcd's stored information. In this setting, it is the clients' responsibility to handle any connection failures and to reconnect through other viable service instances.
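As a rough sketch of that workflow, a confd template resource such as the hypothetical /etc/confd/conf.d/myapp.toml below tells confd which Etcd keys to watch and which file to render from them; the file names and keys are only examples.

[template]
src = "myapp.conf.tmpl"
dest = "/etc/myapp.conf"
keys = [
    "/services/web",
]

The matching template, placed under /etc/confd/templates/, can reference the key with confd's getv function, and a one-off render against a local etcd endpoint is typically triggered with confd -onetime -backend etcd -node http://127.0.0.1:2379.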

The high-profile companies that have Etcd on their resume include CloudGear, Headspace, Red Hat, Kubernetes, Apptus, Zenreach, Cloud Foundry, and Google. Etcd's growing community support keeps improving the developer experience on this service discovery tool's platform.

Setting up Etcd

Etcd's ability to store and retrieve configurations is not its only prime feature as an open-source key-value store. The created Etcd clusters have minimal node-failure issues because of their high availability, and clients retrieve the stored values through REST/gRPC.

Prerequisites

The following requirements will make your experience in setting up the Etcd cluster more fruitful.

  • Have access to three functional Linux servers
  • Your three server choices should be configured with valid hostnames.
  • For effective peer-to-peer communication and client requests, ports 2380 and 2379 on your servers should be enabled in the system's firewall rules; one way of doing so is sketched below.
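If your server uses firewalld, for example, the ports could be opened as follows; adjust this to ufw, raw iptables, or your cloud provider's security groups as needed.

sudo firewall-cmd --permanent --add-port=2379/tcp
sudo firewall-cmd --permanent --add-port=2380/tcp
sudo firewall-cmd --reload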
Setting up the Etcd cluster on your Linux machine

The Etcd cluster setup should not give you any headaches, as it is relatively straightforward, especially with the static bootstrap approach. To bootstrap successfully with this approach, you should have your nodes' IP addresses at hand. This setup guide covers all the steps you need to create the Linux server cluster, since we are dealing with a multinode setup.

For etcd to run as a service, we will also need to configure systemd files. The following is just an example of the hostname-to-IP-address mapping we will be using in this setup guide.

etcd-1 : 10.128.0.7

etcd-2 : 10.128.0.8

etcd-3 : 10.128.0.9

If you have the needed administrative privilege, you can change your servers’ hostnames to reflect your customizable preferences.

Time to get on with the etcd cluster setup.

The three nodes

The following successive steps apply to all three server nodes.

Step 1: On each server terminal, navigate to the src directory with the following command:

cd /usr/local/src

Step 2: Check the etcd GitHub Releases page for the latest stable release. This guide uses v3.3.9, which the following command downloads.

sudo wget "https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz"

Step 3: In this step, we will untar the downloaded etcd binary.

sudo tar -xvf etcd-v3.3.9-linux-amd64.tar.gz

Step 4: The untar process should yield the etcd and etcdctl files. These extracted files are the etcd executables. Use the following command to move them to the local bin directory.

sudo mv etcd-v3.3.9-linux-amd64/etcd* /usr/local/bin/

Step 5: Since we want an etcd user to run the etcd service, you will need to create an etcd user, group, and folders.

sudo mkdir -p /etc/etcd /var/lib/etcd
sudo groupadd -f -g 1501 etcd
sudo useradd -c "etcd user" -d /var/lib/etcd -s /bin/false -g etcd -u 1501 etcd
sudo chown -R etcd:etcd /var/lib/etcd

Step 6: Make sure you have root user privileges while performing the following actions.

ETCD_HOST_IP=$(ip addr show eth0 | grep "inet\b" | awk '{print $2}' | cut -d/ -f1)
ETCD_NAME=$(hostname -s)

The command sequence above sets two environment variables. The first fetches the server's IP address from the eth0 interface (adjust the interface name if your server uses a different one), and the second stores the server's short hostname.
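A quick sanity check confirms that both variables were populated before you proceed.

echo $ETCD_HOST_IP $ETCD_NAME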

Etcd now needs a systemd service file.

cat << EOF > /lib/systemd/system/etcd.service

The cat command above opens a heredoc. Paste the following content into the same terminal; the closing EOF line completes the heredoc and writes the service file.

[Unit]
Description=etcd service
Documentation=https://github.com/etcd-io/etcd 
 
[Service]
User=etcd
Type=notify
ExecStart=/usr/local/bin/etcd \\
 --name ${ETCD_NAME} \\
 --data-dir /var/lib/etcd \\
 --initial-advertise-peer-urls http://${ETCD_HOST_IP}:2380 \\
 --listen-peer-urls http://${ETCD_HOST_IP}:2380 \\
 --listen-client-urls http://${ETCD_HOST_IP}:2379,http://127.0.0.1:2379 \\
 --advertise-client-urls http://${ETCD_HOST_IP}:2379 \\
 --initial-cluster-token etcd-cluster-1 \\
 --initial-cluster etcd-1=http://10.128.0.7:2380,etcd-2=http://10.128.0.8:2380,etcd-3=http://10.128.0.9:2380 \\
 --initial-cluster-state new \\
 --heartbeat-interval 1000 \\
 --election-timeout 5000
Restart=on-failure
RestartSec=5
 
[Install]
WantedBy=multi-user.target
EOF

The "--initial-cluster" line of this file should list the IP addresses of the three servers you are using. The "--name", "--listen-peer-urls", "--initial-advertise-peer-urls", and "--listen-client-urls" values will differ from server to server, but because ETCD_HOST_IP and ETCD_NAME are environment variables, the heredoc substitutes their values automatically on each node.

Bootstrapping the etcd cluster

The configurations above, from steps 1 to 6, should be applied to all three of your servers. Afterward, the next step is to start and enable the etcd service we just created on all three nodes. Server 1 assumes the functionality of a bootstrap node. Once the etcd service is up and running, it automatically selects one node as the leader, so you do not have to worry about configuring the leader node yourself.

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd.service
systemctl status -l etcd.service
Etcd cluster status verification

The etcdctl utility we extracted earlier, after downloading the etcd binary, is responsible for interacting with the etcd cluster. All three of your nodes should have this utility in the /usr/local/bin directory.

The following system checks are applicable on all cluster nodes and are not limited to a specific one. The first check is to determine the health status of your cluster.

etcdctl cluster-health

You can also check and verify the membership status of a cluster node to determine if it has the leadership status.

etcdctl  member list

By default, etcdctl talks to the etcd v2 API; that is its default association. If you wish to access etcd v3 and its functionalities, setting the variable "ETCDCTL_API=3" is the way to go. You can either configure it as an environment variable or pass it along each time you use the etcdctl command.
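For example, exporting the variable once in the current shell saves you from prefixing every command; appending the same line to ~/.bashrc would make it persistent across sessions.

export ETCDCTL_API=3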

Try creating and verifying the following key-value pairs.

ETCDCTL_API=3 etcdctl put name5 apple
ETCDCTL_API=3 etcdctl put name6 banana
ETCDCTL_API=3 etcdctl put name7 orange
ETCDCTL_API=3 etcdctl put name8 mango

To access the name7 value, execute the following command.

ETCDCTL_API=3 etcdctl get name7

Through the use of ranges and prefixes, it is possible to list all keys as depicted below:

ETCDCTL_API=3 etcdctl get name5 name8 # lists the range from name5 up to, but not including, name8
ETCDCTL_API=3 etcdctl get --prefix name # lists all keys with name prefix
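The push-notification behavior mentioned earlier can be observed with a watch. Run the following command in one terminal and issue put commands from another; every change to keys with the name prefix is printed as it happens.

ETCDCTL_API=3 etcdctl watch --prefix name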

Apache Zookeeper

This service can be described as centralized, distributed, and consistent. It was created in the Java programming language. Apache Zookeeper can effectively manage cluster changes through the Zab protocol. Its original role was maintaining software cluster components in the Apache Hadoop world.

Here, data is stored in a hierarchical namespace, much like a tree or a file system. As long as a client stays connected to the network, its nodes continue to exist; when the client disconnects or there is a problem with the configured network, those nodes disappear. When a network failure or load-balancing issue occurs, it is up to the clients to resolve it. When Apache Zookeeper registers a new service, clients receive notifications related to that service.
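A short illustration of that behavior uses the zkCli.sh shell that ships with Zookeeper and a hypothetical /services/web path; the -e flag creates an ephemeral node that vanishes automatically when the registering client's session ends.

# Open a shell against a local Zookeeper server
/opt/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181

# Inside the shell: register an instance as an ephemeral node, then discover it
create /services ""
create /services/web ""
create -e /services/web/host1 "10.0.0.5:8080"
ls /services/web
get /services/web/host1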

The consistency of the Zookeeper system does not shield it from potential system failures. Some platforms might have issues registering the needed services or run into errors while implementing the read and write service functions. On the other hand, Apache Zookeeper continues to be a robust and established application with extensive library support that benefits its vibrant user community and growing client base.

High-profile companies that associate with Apache Zookeeper include Apache Software Foundation, Luxoft, Solr, Reddit, Rackspace, Spero Solutions, F5 Networks, Cloudera, eBay, and Yahoo!

Setting up Apache Zookeeper

Apache Zookeeper is perfect for handling various distributed workloads because of its functional adaptation as a distributed coordination tool.

Prerequisites
  • You need three Virtual Machines (VMs). More than three VMs can be used, but the number needs to be odd for a highly available cluster.
  • Ports 2181, 2888, and 3888 need to be enabled through the server system's IPtables so that the VMs' inbound connections can happen through these ports. These ports are responsible for Apache Zookeeper's communication; a matching iptables sketch follows these prerequisites.

Individuals working under cloud providers like AWS should have endpoints or security groups enabled so that Apache Zookeeper can work with these ports.
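Since the prerequisite above mentions IPtables, a minimal sketch of opening those ports with iptables might look as follows; how you persist the rule across reboots (iptables-save, the iptables-services package, and so on) depends on your distribution.

sudo iptables -A INPUT -p tcp -m multiport --dports 2181,2888,3888 -j ACCEPT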

The installation and configuration of Apache Zookeeper

All three of your VMs should go through the following steps:

Step 1: Server update

 sudo yum -y update

Step 2: Java installation. Skip this step if Java is already installed.

 sudo yum  -y install java-1.7.0-openjdk

Step 3: Use the “wget” command to download Zookeeper.

wget http://mirror.fibergrid.in/apache/zookeeper/zookeeper-3.5.2-alpha/zookeeper-3.5.2-alpha.tar.gz

Step 4: Untar the Apache Zookeeper application to /opt directory.

 sudo tar -xf zookeeper-3.5.2-alpha.tar.gz -C /opt/

Step 5: Navigate to the /opt directory and rename the extracted Apache Zookeeper directory to zookeeper.

cd /opt
sudo mv zookeeper-* zookeeper

Step 6: Inside the /opt/zookeeper/conf directory, we will need to work with a file called zoo.cfg. Create this file and populate it with the following configuration data.

tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=<ZooKeeper_IP/hostname>:2888:3888
server.2=<ZooKeeper_IP/hostname>:2888:3888
server.3=<ZooKeeper_IP/hostname>:2888:3888

Your three Zookeeper servers are represented by the server.1, server.2, and server.3 entries. The "ZooKeeper_IP" placeholder should be replaced with your three server IP addresses or with resolvable hostnames for those IP addresses.

Step 7: The zoo.cfg file we created and populated points to a data directory, /var/lib/zookeeper. We need to create this directory, as it does not yet exist.

 sudo mkdir /var/lib/zookeeper

Step 8: Inside the above-created directory, create a myid file.

 sudo touch /var/lib/zookeeper/myid

Step 9: This myid file holds a unique number that identifies each Apache Zookeeper server, and it must match the number used in the corresponding server.N entry of zoo.cfg.

For Zookeeper server 1

 sudo sh -c "echo '1' > /var/lib/zookeeper/myid"

For Zookeeper server 2

 sudo sh -c "echo '2' > /var/lib/zookeeper/myid"

For Zookeeper server 3

 sudo sh -c "echo '3' > /var/lib/zookeeper/myid"
Apache Zookeeper service configurations

To start and stop Zookeeper, we will need to utilize scripts. However, running these scripts as a service helps manage them better. We will need to open the zkServer.sh file.

 sudo vi /opt/zookeeper/bin/zkServer.sh

In the opened file, just below the "#!/usr/bin/env" line at the top, add the following data.

# description: Zookeeper Start Stop Restart
# processname: zookeeper
# chkconfig: 244 30 80

In the same zkServer.sh file, find the line "# use POSIX interface, symlink...". Replace the variables that follow that line with these values.

ZOOSH=`readlink $0`
ZOOBIN=`dirname $ZOOSH`
ZOOBINDIR=`cd $ZOOBIN; pwd`
ZOO_LOG_DIR=`echo $ZOOBIN`

The Zookeeper service now needs a symlink.

sudo ln -s /opt/zookeeper/bin/zkServer.sh /etc/init.d/zookeeper

The boot menu should accommodate Zookeeper.

sudo chkconfig zookeeper on

All three of your servers should be restarted with the following command. Run it on their respective terminals.

 sudo  init 6

Once the servers have restarted, managing them will be effortless through the following command sequences.

sudo service zookeeper status
sudo service zookeeper stop
sudo service zookeeper start
sudo service zookeeper restart

When the command for checking the Zookeeper status runs, the terminal output should be similar to the following.

/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader

One of the three servers is assigned the leader mode, and the other two retain the follower mode.

Final note

Service discovery serves two important goals: high availability and failure detection. With more functionality in the queue, an infrastructure implementation cannot be complete without recognizing and configuring service discovery tools like Consul, Etcd, and Apache Zookeeper. These tools are open source and fundamentally effective in their service delivery. Therefore, you will not run into any walls trying to test or implement a simple service discovery mechanism on your Linux systems.
