Kerberos Setup for Apache Hadoop Multi-Node Cluster

Ravi Chamarthy
Sep 16, 2020 · 11 min read
Kerberos — gatekeeping with 3 locks — Authentication Server, Database, Ticket Granting Server

As explained in Apache Hadoop Multi-Node Kerberized Cluster Setup, in this story we shall perform the initial setup of the Hadoop ecosystem with the required packages and then set up Kerberos on all cluster nodes.

Here are the various nodes on which we will set up the Hadoop ecosystem.

Various nodes that we create in the Hadoop Cluster

We create the following system users on all nodes; to keep things simple, use the same password for every user on every node.

root, hadoop, hdfs, yarn, mapred, HTTP, hive (this user is needed only on the system where Hive/MySQL is installed, but to keep it simple, create it on all nodes as well, no harm!), and the MySQL user: root

Chapter 1. User Creation and Initial Setup

1. Login to all systems

$ ssh root@florence1.wsdm.ami.com
$ ssh root@sicily1.wsdm.ami.com
$ ssh root@turin1.wsdm.ami.com
$ ssh root@tuscany1.wsdm.ami.com
$ ssh root@verona1.wsdm.ami.com

2. Create the hadoop user on all 5 nodes

# useradd hadoop
# passwd hadoop

Likewise, create the following users and add them to the "hadoop" group. Let the password be the same as that of the "hadoop" user.

# useradd hdfs -g hadoop
# useradd yarn -g hadoop
# useradd mapred -g hadoop
# useradd HTTP -g hadoop
# useradd hive -g hadoop
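If you would rather script this on each node, here is a minimal sketch, assuming a RHEL/CentOS system where passwd --stdin is available; "Hadoop@123" is only a placeholder password.

# run as root on every node; replace the placeholder password with your own
for u in hdfs yarn mapred HTTP hive; do
  useradd -g hadoop "$u"
  echo "Hadoop@123" | passwd --stdin "$u"
done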

3. Install JDK

# yum -y install java-1.8.0-openjdk
# yum install java-1.8.0-openjdk-devel
# java -version
openjdk version "1.8.0_262"

4. Passwordless SSH

Configure passwordless SSH to the local system by following the steps below.

[root@florence1 ~]# su - hadoop
[hadoop@florence1 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory ‘/home/hadoop/.ssh’.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.

[hadoop@florence1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@florence1 ~]$ chmod 600 ~/.ssh/authorized_keys
[hadoop@florence1 ~]$ ssh 127.0.0.1

5. Python Installation

Install Python on all nodes.

[root@florence1 ~]# yum module install python36

Configure the python alias for both the "root" and the "hadoop" user.

[root@florence1 ~]# vi ~/.bashrc
# add the below two lines
alias python=python3
export PYSPARK_PYTHON=/usr/bin/python3
[root@florence1 ~]# source ~/.bashrc
[root@florence1 ~]# python --version
Python 3.6.8
[root@florence1 ~]# python -m pip install pandas
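A quick check that the alias and the pandas installation both work (this should print the installed pandas version):

[root@florence1 ~]# python -c "import pandas; print(pandas.__version__)"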

6. Hello to all nodes

Add host entries for the other nodes on each of the cluster nodes. The following is a sample /etc/hosts file from the florence1 node.

[root@florence1 ~]# vi /etc/hosts
[root@florence1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.41.6.178 florence1.wsdm.ami.com florence1
10.41.6.179 sicily1.wsdm.ami.com sicily1
10.41.1.210 turin1.wsdm.ami.com turin1
10.41.6.181 tuscany1.wsdm.ami.com tuscany1
10.41.6.182 verona1.wsdm.ami.com verona1
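A quick sanity check on each node is to confirm that every hostname resolves; a small loop over the hostnames used in this cluster:

for h in florence1 sicily1 turin1 tuscany1 verona1; do
  getent hosts "$h.wsdm.ami.com"
done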

7. Passwordless login from the master node to all other nodes

sicily1 is the master node in our cluster, so to be able to communicate automatically with the worker nodes, we need to copy the id_rsa.pub file from sicily1 to turin1 and tuscany1 (the worker nodes in the cluster). To make things simple, let's do the same for the other two nodes as well, so that it eases the setup whenever we want to copy files from sicily1 to any of the nodes.

So, we shall configure passwordless login from sicily1 to turin1, tuscany1, verona1 and florence1.

[root@sicily1 ~]# su - hadoop
[hadoop@sicily1 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub hadoop@turin1.wsdm.ami.com
[hadoop@sicily1 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub hadoop@tuscany1.wsdm.ami.com
[hadoop@sicily1 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub hadoop@verona1.wsdm.ami.com
[hadoop@sicily1 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub hadoop@florence1.wsdm.ami.com

We need to perform the same from root as well, since with Kerberos authentication in place we will run some of the Hadoop commands as the root user.

[root@sicily1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@turin1.wsdm.ami.com
[root@sicily1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@tuscany1.wsdm.ami.com
[root@sicily1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@verona1.wsdm.ami.com
[root@sicily1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@florence1.wsdm.ami.com
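To verify that passwordless login works from sicily1, running a remote command on each node should return the hostname without prompting for a password:

[root@sicily1 ~]# for h in turin1 tuscany1 verona1 florence1; do ssh root@$h.wsdm.ami.com hostname; done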

Chapter 2. Kerberos Installation and Configuration

A very quick primer on what Kerberos is all about:

  • Say a user is trying to access an application UI. The user is presented with a login screen where they enter a username and password.
  • Let's say an OAuth token is generated, and this token is used to communicate from the UI to all downstream microservices.
  • Now, as part of one of the services, we want to submit a Spark job to the Hadoop ecosystem, and this job submission has to be authenticated.
  • Hadoop does not have an authentication system of its own.
  • Kerberos was chosen as its authentication system.

But why Kerberos and why not SSH or OAuth?

OAuth did not exist when Hadoop was being developed.

SSH is even more complicated in terms of maintaining keys and certificates: chiefly, when a user has to be removed, one has to find the respective keys and revoke them everywhere. Kerberos is much simpler in terms of user (aka principal) management: to remove a user, just delete the user from the Kerberos database.

And hence Hadoop chose Kerberos for authentication.

  • Now, Kerberos is a network authentication protocol that eliminates the need to propagate passwords across the various nodes (actors) in the system. The main advantage: with no passwords or tokens flowing over the wire, there is nothing for a network sniffer to capture.
  • Coming back, the ask here is: I want to POST a job execution request from an OpenScale service (client) to the Hadoop system (server), and this communication should be authenticated.
  • Simply put, the POST call from the client to the server has to be authenticated.
  • Kerberos has three key components, which together make up the Key Distribution Center (KDC):
    - Authentication Server (AS)
    - Kerberos Database
    - Ticket Granting Server (TGS)
  • The POST call to submit the job (the request object) is encrypted using the client's password (password? how would my OpenScale service have a password? we will come to that soon) and sent to the Authentication Server.
    - Note that the password is not sent over the network here; it is used only as an encryption key for the request object.
  • The Authentication Server looks up the incoming user's password in the Kerberos Database, decrypts the request object with it, generates a ticket called the Ticket Granting Ticket (TGT), and encrypts that ticket with another secret key. This TGT is sent back to the client.
  • The client then sends the encrypted TGT to the Ticket Granting Server along with the POST request object.
    - When the TGS gets the TGT, it decrypts the ticket using the secret key shared between the AS and the TGS.
    - Once validated, the TGS issues a token to the client. This token is in turn encrypted using a secret key shared between the TGS and the Hadoop system.
  • When the Hadoop system gets the token, it decrypts it using the key shared with the TGS, validates it, and thereby honours the request. The token is valid for a certain amount of time.
  • Now, in our case our services do not maintain any password to communicate with the Authentication Server. Hence we expect a keytab file, which stores the principal's keys and is used to obtain a valid TGT. Alternatively, the customer implementing the custom wrapper app creates a proxy user and uses that to authenticate between the client and the Authentication Server. (A command-line illustration of this flow is sketched after the analogy below.)

Putting the flow in a diagram here:

Kerberos Flow

As an analogy, it’s a simple thing:

  1. You first buy a ticket to enter a mall.
  2. You use this ticket to buy another ticket, say to watch a movie. That ticket is valid only for the duration of the movie screening.
  3. To access another resource, say a gaming zone, you buy yet another ticket using the first ticket. This ticket is again valid for a limited time.
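Once the setup below is complete, this flow can be observed from the command line with kinit and klist; a minimal illustration, using principal names from this cluster and assuming the Hadoop services are already kerberized (covered in the later chapters):

# AS exchange: kinit sends the request to the Authentication Server and
# caches the returned Ticket Granting Ticket (TGT)
kinit hdfs/sicily1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
klist   # shows only the TGT: krbtgt/HADOOPCLUSTER.LOCAL@HADOOPCLUSTER.LOCAL
# TGS exchange: the first call to a kerberized service transparently presents
# the TGT to the Ticket Granting Server and caches a service ticket
hdfs dfs -ls /
klist   # now also shows a service ticket for the HDFS service principal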

Okay, let's proceed with installing Kerberos, creating principals, and getting tickets for those principals.

1. Install Kerberos client in all nodes

[root@sicily1 ~]# yum install krb5-workstation krb5-libs
[root@turin1 ~]# yum install krb5-workstation krb5-libs
[root@tuscany1 ~]# yum install krb5-workstation krb5-libs
[root@verona1 ~]# yum install krb5-workstation krb5-libs
[root@florence1 ~]# yum install krb5-workstation krb5-libs

2. Install Kerberos server on the master node — sicily1

[root@sicily1 ~]# yum install krb5-server

3. Configure Kerberos on the master node — sicily1

Key Distribution Center — Realm Configuration

Open "/var/kerberos/krb5kdc/kdc.conf" and change the default EXAMPLE.COM realm to a more meaningful name, say HADOOPCLUSTER.LOCAL (or another name, as appropriate). In my environment, here is the content after the modification.

[root@sicily1 ~]# cat /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
  kdc_ports = 88
  kdc_tcp_ports = 88

[realms]
  HADOOPCLUSTER.LOCAL = {
    #master_key_type = aes256-cts
    acl_file = /var/kerberos/krb5kdc/kadm5.acl
    dict_file = /usr/share/dict/words
    admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
    supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal
    default_principal_flags = +renewable, +forwardable
  }

Realm Configuration

Open "/etc/krb5.conf" and change "default_realm" to the realm name defined in "/var/kerberos/krb5kdc/kdc.conf". Specify the KDC and the admin server location to be sicily1, meaning sicily1 will act as both the Kerberos Key Distribution Center and the Admin Server.

After making the changes, here are the contents of /etc/krb5.conf on the sicily1 node.

[root@sicily1 ~]# cat /etc/krb5.conf
# To opt out of the system crypto-policies configuration of krb5, remove the
# symlink at /etc/krb5.conf.d/crypto-policies which will not be recreated.
includedir /etc/krb5.conf.d/

[logging]
  default = FILE:/var/log/krb5libs.log
  kdc = FILE:/var/log/krb5kdc.log
  admin_server = FILE:/var/log/kadmind.log

[libdefaults]
  dns_lookup_realm = false
  dns_lookup_kdc = false
  ticket_lifetime = 4320h
  renew_lifetime = 7d
  forwardable = true
  default_realm = HADOOPCLUSTER.LOCAL

[realms]
  HADOOPCLUSTER.LOCAL = {
    kdc = sicily1.wsdm.ami.com
    admin_server = sicily1.wsdm.ami.com
  }

[domain_realm]
  .wsdm.ami.com = HADOOPCLUSTER.LOCAL
  wsdm.ami.com = HADOOPCLUSTER.LOCAL
  sicily1.wsdm.ami.com = HADOOPCLUSTER.LOCAL
  florence1.wsdm.ami.com = HADOOPCLUSTER.LOCAL
  turin1.wsdm.ami.com = HADOOPCLUSTER.LOCAL
  tuscany1.wsdm.ami.com = HADOOPCLUSTER.LOCAL
  verona1.wsdm.ami.com = HADOOPCLUSTER.LOCAL

Configure the ACLs

Edit "/var/kerberos/krb5kdc/kadm5.acl" so that the known admin principals have full privileges on the KDC (this file controls what kadmin allows each principal to do).

[root@sicily1 ~]# cat /var/kerberos/krb5kdc/kadm5.acl
*/admin@HADOOPCLUSTER.LOCAL *
*/sicily1.wsdm.ami.com@HADOOPCLUSTER.LOCAL *

Kerberos Cache file specification

Specify the Kerberos cache name in the "root" user's .bashrc profile file. After the modification, here is the relevant content of the .bashrc file on sicily1.

[root@sicily1 ~]# cat ~/.bashrc
..
export KRB5CCNAME=/tmp/krb5cc
[root@sicily1 ~]# source ~/.bashrc

And likewise, for the hadoop user, export the environment variable KRB5CCNAME as export KRB5CCNAME=/tmp/krb5cc1 (shown after the node list below). Repeat the same on all nodes:

Edge node: florence1.wsdm.ami.com
Name node: sicily1.wsdm.ami.com
Data Node: turin1.wsdm.ami.com
Data Node: tuscany1.wsdm.ami.com
Hive: verona1.wsdm.ami.com
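For reference, the hadoop user's entry looks like this (using /tmp/krb5cc1, as noted above):

[hadoop@sicily1 ~]$ cat ~/.bashrc
..
export KRB5CCNAME=/tmp/krb5cc1
[hadoop@sicily1 ~]$ source ~/.bashrc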

Configure the Key Distribution Center

Create the KDC database using the kdb5_util tool.

[root@sicily1 ~]# kdb5_util create -r HADOOPCLUSTER.LOCAL -s
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'HADOOPCLUSTER.LOCAL',
master key name 'K/M@HADOOPCLUSTER.LOCAL'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key: MasterPassword!024
Re-enter KDC database master key to verify: MasterPassword!024
[root@sicily1 ~]#

Create the “root/admin” principal

Create the initial "root/admin" principal, the administrative principal that controls all other principals.

[root@sicily1 ~]# kadmin.local
Authenticating as principal root/admin@HADOOPCLUSTER.LOCAL with password.
kadmin.local: addprinc root/admin@HADOOPCLUSTER.LOCAL
WARNING: no policy specified for root/admin@HADOOPCLUSTER.LOCAL; defaulting to no policy
Enter password for principal “root/admin@HADOOPCLUSTER.LOCAL”: Xxxxx
Re-enter password for principal “root/admin@HADOOPCLUSTER.LOCAL”: Xxxxx
Principal “root/admin@HADOOPCLUSTER.LOCAL” created.
kadmin.local: exit

Start the krb5kdc and kadmin services

[root@sicily1 ~]# service krb5kdc start
Redirecting to /bin/systemctl start krb5kdc.service
[root@sicily1 ~]# service kadmin start
Redirecting to /bin/systemctl start kadmin.service
[root@sicily1 ~]# service krb5kdc status

Sep 05 05:03:47 sicily1.wsdm.ami.com systemd[1]: Starting Kerberos 5 KDC…
Sep 05 05:03:47 sicily1.wsdm.ami.com systemd[1]: Started Kerberos 5 KDC.
[root@sicily1 ~]# service kadmin status
Redirecting to /bin/systemctl status kadmin.service

Sep 05 05:03:50 sicily1.wsdm.ami.com systemd[1]: Starting Kerberos 5 Password-changing and Administration…
Sep 05 05:03:50 sicily1.wsdm.ami.com systemd[1]: Started Kerberos 5 Password-changing and Administration.

Create the "hadoop/admin" principal

Now that the krb5kdc and kadmin services are started, we can use the kadmin tool to create the other principals.

[root@sicily1 ~]# kadmin
Authenticating as principal root/admin@HADOOPCLUSTER.LOCAL with password.
Password for root/admin@HADOOPCLUSTER.LOCAL: Xxxxx
kadmin: addprinc hadoop/admin@HADOOPCLUSTER.LOCAL
WARNING: no policy specified for hadoop/admin@HADOOPCLUSTER.LOCAL; defaulting to no policy
Enter password for principal “hadoop/admin@HADOOPCLUSTER.LOCAL”: Xxxxx
Re-enter password for principal “hadoop/admin@HADOOPCLUSTER.LOCAL”: Xxxxx
Principal “hadoop/admin@HADOOPCLUSTER.LOCAL” created.
kadmin: exit

Getting the first ticket for the “root/admin” user

[root@sicily1 ~]# klist
klist: No credentials cache found (filename: /tmp/krb5cc)
[root@sicily1 ~]# klist -A
[root@sicily1 ~]# kinit root/admin
Password for root/admin@HADOOPCLUSTER.LOCAL: Xxxxx
[root@sicily1 ~]# klist
Ticket cache: FILE:/tmp/krb5cc
Default principal: root/admin@HADOOPCLUSTER.LOCAL
Valid starting Expires Service principal
09/05/20 05:45:08 09/06/20 05:45:08 krbtgt/HADOOPCLUSTER.LOCAL@HADOOPCLUSTER.LOCAL renew until 09/05/20 05:45:08

Configure krb5kdc and kadmin to start during system startup

Make sure to configure krb5kdc and kadmin to start automatically during system startup.

[root@sicily1 ~]# chkconfig krb5kdc on
[root@sicily1 ~]# chkconfig kadmin on
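On systemd-based systems (RHEL/CentOS 7 and later), chkconfig simply forwards to systemctl, so the equivalent commands are:

[root@sicily1 ~]# systemctl enable krb5kdc
[root@sicily1 ~]# systemctl enable kadmin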

Creation of principals

Create Kerberos principals for the "hdfs", "mapred", "yarn", "HTTP", and "hive" users on each of the node instances. The key point: we have 5 nodes and 5 users, and we need to create 5 principals per node, so a total of 25 principals.

Create these principals as the "hadoop" user, since "hadoop" is the user we use to install and configure the Hadoop ecosystem.

[root@sicily1 ~]# su - hadoop
[hadoop@sicily1 ~]$ mkdir keytabs
[hadoop@sicily1 ~]$ cd keytabs
[hadoop@sicily1 keytabs]$ kadmin -p hadoop/admin
Couldn’t open log file /var/log/kadmind.log: Permission denied
Authenticating as principal hadoop/admin with password.
Password for hadoop/admin@HADOOPCLUSTER.LOCAL: Xxxxx
kadmin: addprinc hdfs/sicily1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
WARNING: no policy specified for hdfs/sicily1.wsdm.ami.com@HADOOPCLUSTER.LOCAL; defaulting to no policy
Enter password for principal “hdfs/sicily1.wsdm.ami.com@HADOOPCLUSTER.LOCAL”: Xxxxx
Re-enter password for principal “hdfs/sicily1.wsdm.ami.com@HADOOPCLUSTER.LOCAL”: Xxxxx
Principal “hdfs/sicily1.wsdm.ami.com@HADOOPCLUSTER.LOCAL” created.

While creating all the other principals, consider setting the same password so that you do not have to remember many passwords; it is your choice, anyway. (If you would rather script this, see the sketch after the list below.)

kadmin: addprinc yarn/sicily1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc mapred/sicily1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc HTTP/sicily1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc hive/sicily1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc hdfs/turin1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc mapred/turin1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc yarn/turin1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc HTTP/turin1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc hive/turin1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc hdfs/tuscany1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc mapred/tuscany1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc yarn/tuscany1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc HTTP/tuscany1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc hive/tuscany1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc hdfs/verona1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc mapred/verona1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc yarn/verona1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc HTTP/verona1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc hive/verona1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc hdfs/florence1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc mapred/florence1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc yarn/florence1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc HTTP/florence1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: addprinc hive/florence1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
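A minimal sketch for scripting the 25 addprinc commands with kadmin -q, assuming one common placeholder password ("ChangeMe@123") for every principal; note that kadmin prompts for the hadoop/admin password on each call unless you pass -w:

REALM=HADOOPCLUSTER.LOCAL
for host in florence1 sicily1 turin1 tuscany1 verona1; do
  for user in hdfs yarn mapred HTTP hive; do
    kadmin -p hadoop/admin -q "addprinc -pw ChangeMe@123 $user/$host.wsdm.ami.com@$REALM"
  done
done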

Keytab files

Using kadmin's xst command, create the keytab files for each user: hdfs, mapred, yarn, hive, HTTP. Include the hive and hdfs principals in the same keytab, the one belonging to "hdfs".

(each xst command is a single line)

kadmin: xst -k hdfs-unmerged.keytab hdfs/florence1.wsdm.ami.com@HADOOPCLUSTER.LOCAL hdfs/sicily1.wsdm.ami.com@HADOOPCLUSTER.LOCAL hdfs/turin1.wsdm.ami.com@HADOOPCLUSTER.LOCAL hdfs/tuscany1.wsdm.ami.com@HADOOPCLUSTER.LOCAL hdfs/verona1.wsdm.ami.com@HADOOPCLUSTER.LOCAL hive/florence1.wsdm.ami.com@HADOOPCLUSTER.LOCAL hive/sicily1.wsdm.ami.com@HADOOPCLUSTER.LOCAL hive/turin1.wsdm.ami.com@HADOOPCLUSTER.LOCAL hive/tuscany1.wsdm.ami.com@HADOOPCLUSTER.LOCAL hive/verona1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: xst -k yarn-unmerged.keytab yarn/florence1.wsdm.ami.com@HADOOPCLUSTER.LOCAL yarn/sicily1.wsdm.ami.com@HADOOPCLUSTER.LOCAL yarn/turin1.wsdm.ami.com@HADOOPCLUSTER.LOCAL yarn/tuscany1.wsdm.ami.com@HADOOPCLUSTER.LOCAL yarn/verona1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: xst -k mapred-unmerged.keytab mapred/florence1.wsdm.ami.com@HADOOPCLUSTER.LOCAL mapred/sicily1.wsdm.ami.com@HADOOPCLUSTER.LOCAL mapred/turin1.wsdm.ami.com@HADOOPCLUSTER.LOCAL mapred/tuscany1.wsdm.ami.com@HADOOPCLUSTER.LOCAL mapred/verona1.wsdm.ami.com@HADOOPCLUSTER.LOCAL
kadmin: xst -k HTTP.keytab HTTP/florence1.wsdm.ami.com@HADOOPCLUSTER.LOCAL HTTP/sicily1.wsdm.ami.com@HADOOPCLUSTER.LOCAL HTTP/turin1.wsdm.ami.com@HADOOPCLUSTER.LOCAL HTTP/tuscany1.wsdm.ami.com@HADOOPCLUSTER.LOCAL HTTP/verona1.wsdm.ami.com@HADOOPCLUSTER.LOCAL

Merging the keytab files with the HTTP keytab file

Using the ktutil tool, merge each keytab file with the principals from the HTTP keytab file: read (rkt) the contents of the unmerged keytab and the HTTP keytab, then write (wkt) the merged contents to a new keytab file.

$ ktutil
ktutil: rkt hdfs-unmerged.keytab
ktutil: rkt HTTP.keytab
ktutil: wkt hdfs.keytab
ktutil: clear
ktutil: rkt mapred-unmerged.keytab
ktutil: rkt HTTP.keytab
ktutil: wkt mapred.keytab
ktutil: clear
ktutil: rkt yarn-unmerged.keytab
ktutil: rkt HTTP.keytab
ktutil: wkt yarn.keytab
ktutil: clear
ktutil: exit
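Optionally, verify the merged keytab files by listing the principals they contain:

$ klist -kt hdfs.keytab
$ klist -kt mapred.keytab
$ klist -kt yarn.keytab
$ klist -kt HTTP.keytab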

Get the Kerberos tickets for each principal

Obtain and cache Kerberos ticket-granting tickets for the principals, as both the "hadoop" and the "root" user.

kinit -kt /home/hadoop/keytabs/hdfs.keytab hdfs/florence1.wsdm.ami.com@HADOOPCLUSTER.LOCAL

kinit -kt /home/hadoop/keytabs/mapred.keytab HTTP/verona1.wsdm.ami.com@HADOOPCLUSTER.LOCAL

I created an auth file for the same: https://raw.githubusercontent.com/ravichamarthy/kerberos/master/auth.py

4. Configure Kerberos on all other nodes

Client-side krb5.conf and keytab files

Copy /etc/krb5.conf and the keytab files from the admin server node to all other nodes. The steps below are shown for one of the nodes, turin1.

Please repeat the same for all other nodes (Edge Node, Hive Node, Data Nodes); this step is very IMPORTANT. A scripted version of the copy is sketched after the walkthrough below.

[root@sicily1 keytabs]# ssh root@turin1.wsdm.ami.com
[root@turin1 ~]# su - hadoop
[hadoop@turin1 ~]$ mkdir keytabs
[hadoop@turin1 ~]$ exit
logout
[root@turin1 ~]# exit
logout
Connection to turin1.wsdm.ami.com closed.
[root@sicily1 keytabs]# scp /etc/krb5.conf root@turin1.wsdm.ami.com:/home/hadoop/keytabs
[root@sicily1 keytabs]# su - hadoop
[hadoop@sicily1 ~]$ cd keytabs/
[hadoop@sicily1 keytabs]$ scp hdfs.keytab mapred.keytab yarn.keytab HTTP.keytab auth.sh hadoop@turin1.wsdm.ami.com:/home/hadoop/keytabs
[hadoop@sicily1 keytabs]$ exit
logout
[root@sicily1 keytabs]# ssh root@turin1.wsdm.ami.com
[root@turin1 ~]# mv /etc/krb5.conf /etc/krb5.conf.original
[root@turin1 ~]# cd /home/hadoop/keytabs/
[root@turin1 keytabs]# cp krb5.conf /etc
[root@turin1 keytabs]# su - hadoop
[hadoop@turin1 ~]$ cd keytabs/
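If you prefer to script the copy for the remaining nodes, here is a minimal sketch from sicily1, assuming the hadoop user's keytabs directory already exists on each target node and that each node's original /etc/krb5.conf has been backed up as shown above:

# as root on sicily1: push krb5.conf to every node
for h in turin1 tuscany1 verona1 florence1; do
  scp /etc/krb5.conf root@$h.wsdm.ami.com:/etc/krb5.conf
done
# as the hadoop user on sicily1: push the keytab files
for h in turin1 tuscany1 verona1 florence1; do
  scp ~/keytabs/*.keytab hadoop@$h.wsdm.ami.com:/home/hadoop/keytabs/
done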

Obtaining client side tickets

Run kinit as both the "hadoop" and the "root" user. For this, run the commands listed in the step "Get the Kerberos tickets for each principal":

kinit -kt /home/hadoop/keytabs/hdfs.keytab hdfs/florence1.wsdm.ami.com@HADOOPCLUSTER.LOCAL

kinit -kt /home/hadoop/keytabs/mapred.keytab HTTP/verona1.wsdm.ami.com@HADOOPCLUSTER.LOCAL

To summarize, in this story we performed the initial system setup and configured Kerberos.

In the next chapter we shall proceed with Hadoop — HDFS and YARN configuration.


Ravi Chamarthy

Software Architect, watsonx.governance - Monitoring & IBM Master Inventor