Suppose we have three Linux machines that need passwordless SSH login among them: a master node, hadoop-1, and two worker nodes, hadoop-2 and hadoop-3.
Step 1: generate a key pair on each node
hadoop@hadoop-1:~$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:63d3eu6ljNYfWGS/0VOZcWoBaooRjPB4HvnZGq8PW+M hadoop@hadoop-1
The key's randomart image is:
+---[RSA 2048]----+
| .. o. .....|
| o.... . o=|
| . = . o o=.|
| o o = o .o +|
| . = S +o|
| + . o +|
| o = o o.|
| B .. ooo.=|
| o.E. o..+B+|
+----[SHA256]-----+
Run the command above on each of the three machines; when prompted for the file location and the passphrase, just press Enter each time. Once it finishes, running ls -a under /home/hadoop (the user's home directory) reveals a new .ssh directory containing the private key (id_rsa) and the public key (id_rsa.pub).
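The interactive prompts can also be skipped entirely by passing the answers on the command line. A minimal sketch, writing to a throwaway path under /tmp rather than ~/.ssh so it cannot clobber a real key: -N "" sets an empty passphrase and -f names the output file.

```shell
# Remove any leftovers so ssh-keygen does not stop to ask about overwriting.
rm -f /tmp/demo_id_rsa /tmp/demo_id_rsa.pub
# Generate a 2048-bit RSA key pair non-interactively (path is illustrative;
# on a real node you would use ~/.ssh/id_rsa).
ssh-keygen -q -t rsa -b 2048 -N "" -f /tmp/demo_id_rsa
# Both halves of the pair should now exist:
ls -l /tmp/demo_id_rsa /tmp/demo_id_rsa.pub
```

This assumes OpenSSH's ssh-keygen is installed, which is the case on stock Ubuntu and most other distributions.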
Step 2: from hadoop-2 and hadoop-3, send each node's public key to the master node:
hadoop@hadoop-2:~$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@192.168.56.101
Run the command above on both hadoop-2 and hadoop-3; the IP address at the end is hadoop-1's.
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/hadoop/.ssh/id_rsa.pub"
The authenticity of host '192.168.56.101 (192.168.56.101)' can't be established.
ECDSA key fingerprint is SHA256:AtiLYfGNHj/BrXOkTIUNn9iMcwcZE7GFUofU4waBUrg.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hadoop@192.168.56.101's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'hadoop@192.168.56.101'"
and check to make sure that only the key(s) you wanted were added.
At this point, an authorized_keys file appears in hadoop-1's .ssh directory; view its contents with cat:
hadoop@hadoop-1:~/.ssh$ cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDCYcRBOKb+joUvbBaFKhaQUDq1lPT/sf2IWd4JJopqIm+2vtR/cRHrqcgNDy4YUCizx9tWwe54vx1HsYOvAnAOX6aeSWCJjW0BIS8GjKWgUZ5J9alFmM2zXuhwBis/IC18YxgH5K0C9TVvmWeukbODeOAkcASPfD8k0bI7hodRFYtTCeZGCOl3yZAAdxSnZJaFt/RS5JjtLAxWydxc4JL2msX+LoCVg0OJuJWpd7CBZ1Zy80Xh5KaLmSTB9zfC2av8sZBvYVoXHTELiTpFC/EgiSE/BzZqHZ/UNVbCsRBsYL642UWo5SMHuZJmWTxGDLIvZ8Vjvb8osX0MRp5mzOM9 hadoop@hadoop-2
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDctmHMrPk47ZBUhUKuEgHvvu2OXbCoL5zA8rAY6HoNKPOoOZfElBDKWGxXMfI86IlnjalHsxf6BWOcGu8+fLy+VzMtpsG5OgpAaE6oW7p2+or/mKxdLQMwhqv3TDXr9DrwxUteXOsUR2oPF2OT58pq7ChBbH3gsR7GeQ0soQPGw2lq2RDQUCyi7wGZw93VXjKpvSk0UAub8Tg2DYXr3jo4sIXVGodjhMvXtOVJO6/JK1HEcGI1YTs9iQ1B+8Yqab9yZVI868oajLVnHwp/tn62kKKKXszGZ62Rw3FmP4ZHGlcbK6fEwYosQIPg1ma9woBxk5d8ek/8l9+qnSPLb9VP hadoop@hadoop-3
As the two entries show, hadoop-2 and hadoop-3 have both delivered their public keys to the master node.
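Under the hood, ssh-copy-id does little more than append the public key to the remote authorized_keys file and make sure the permissions are sane; sshd will silently ignore an authorized_keys file that is group- or world-writable. A local sketch of that mechanism, using a /tmp directory that stands in for ~/.ssh on hadoop-1 and a fake placeholder key string:

```shell
# Stand-in for ~/.ssh on the master node (illustrative path).
REMOTE_SSH_DIR=/tmp/demo_remote_ssh
mkdir -p "$REMOTE_SSH_DIR"
# Append the worker's public key, as ssh-copy-id would (fake key shown).
echo "ssh-rsa AAAAFAKEKEY hadoop@hadoop-2" >> "$REMOTE_SSH_DIR/authorized_keys"
# Tighten permissions: sshd rejects key files that others can write to.
chmod 700 "$REMOTE_SSH_DIR"
chmod 600 "$REMOTE_SSH_DIR/authorized_keys"
stat -c '%a' "$REMOTE_SSH_DIR/authorized_keys"
```

If passwordless login ever fails even though the keys look right, wrong permissions on ~/.ssh or authorized_keys are the most common cause.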
Step 3: append hadoop-1's own public key, then distribute the merged file to hadoop-2 and hadoop-3:
hadoop@hadoop-1:~/.ssh$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The command above appends hadoop-1's public key to the authorized_keys file.
Then, still on hadoop-1, run the following commands to distribute the merged authorized_keys to hadoop-2 and hadoop-3:
hadoop@hadoop-1:~/.ssh$ scp /home/hadoop/.ssh/authorized_keys hadoop@192.168.56.102:/home/hadoop/.ssh
hadoop@hadoop-1:~/.ssh$ scp /home/hadoop/.ssh/authorized_keys hadoop@192.168.56.103:/home/hadoop/.ssh
The authenticity of host '192.168.56.102 (192.168.56.102)' can't be established.
ECDSA key fingerprint is SHA256:HSFxbxCd3ENh3a5r+oJtCczpBhlvFkbe3ZOXlZ3bwFo.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.56.102' (ECDSA) to the list of known hosts.
hadoop@192.168.56.102's password:
authorized_keys 100% 1191 1.2KB/s 00:00
After that, the same authorized_keys file exists in the .ssh directory on every machine; cat it to confirm:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDCYcRBOKb+joUvbBaFKhaQUDq1lPT/sf2IWd4JJopqIm+2vtR/cRHrqcgNDy4YUCizx9tWwe54vx1HsYOvAnAOX6aeSWCJjW0BIS8GjKWgUZ5J9alFmM2zXuhwBis/IC18YxgH5K0C9TVvmWeukbODeOAkcASPfD8k0bI7hodRFYtTCeZGCOl3yZAAdxSnZJaFt/RS5JjtLAxWydxc4JL2msX+LoCVg0OJuJWpd7CBZ1Zy80Xh5KaLmSTB9zfC2av8sZBvYVoXHTELiTpFC/EgiSE/BzZqHZ/UNVbCsRBsYL642UWo5SMHuZJmWTxGDLIvZ8Vjvb8osX0MRp5mzOM9 hadoop@hadoop-2
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDctmHMrPk47ZBUhUKuEgHvvu2OXbCoL5zA8rAY6HoNKPOoOZfElBDKWGxXMfI86IlnjalHsxf6BWOcGu8+fLy+VzMtpsG5OgpAaE6oW7p2+or/mKxdLQMwhqv3TDXr9DrwxUteXOsUR2oPF2OT58pq7ChBbH3gsR7GeQ0soQPGw2lq2RDQUCyi7wGZw93VXjKpvSk0UAub8Tg2DYXr3jo4sIXVGodjhMvXtOVJO6/JK1HEcGI1YTs9iQ1B+8Yqab9yZVI868oajLVnHwp/tn62kKKKXszGZ62Rw3FmP4ZHGlcbK6fEwYosQIPg1ma9woBxk5d8ek/8l9+qnSPLb9VP hadoop@hadoop-3
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDDbB80g7LbfMw/m7oWIF6z6cD3MRlQ7iIhKGLfhvTJN+U8oexh1rvFYkNhHkDFgod4QmOkwuFBCRY8VuiaGVP5NcgDAwxYikch8cGYZQNthHb/aRQJ0m2ciitTMn0c1y86yaK7eIBTS7b1o2q57UBGNwNqBfs7O3moU+IAgUjtS8ydkIUmw5BTyEJTl5Tg6RsBClfAiqXTb4yCSb7qz1ZFJT+dlJSzIItqBF6izAcdDr8jpwxwvqRoXfB9iiS1yr1Z3Cu4WzNVIqMGT1iBkW6Fn3OzyIz9D6hAPHY7ttw1E0hwazm/muO8T49mIrbapJ83qz+GJBFZ+i7KJ48D8N0T hadoop@hadoop-1
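With three nodes, a quick sanity check is that authorized_keys now holds exactly three ssh-rsa lines, one per node. A local sketch of that check, building a stand-in file under /tmp with placeholder keys rather than touching a real ~/.ssh:

```shell
# Stand-in authorized_keys with one placeholder entry per node.
AUTH=/tmp/demo_authorized_keys
: > "$AUTH"   # start from an empty file
for host in hadoop-1 hadoop-2 hadoop-3; do
  echo "ssh-rsa AAAAFAKE$host hadoop@$host" >> "$AUTH"
done
# Count the key entries; with three nodes this should print 3.
grep -c '^ssh-rsa' "$AUTH"
```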
Step 4: configure the hosts file
Edit /etc/hosts to map each IP address to a hostname:
(first delete any pre-existing "127.0.0.1 hadoop-x" line, or name resolution will point at the loopback interface)
127.0.0.1 localhost
192.168.56.101 hadoop-1
192.168.56.102 hadoop-2
192.168.56.103 hadoop-3
Apply this configuration on all three machines.
After that, simply typing ssh hadoop@hadoop-2 logs you into hadoop-2; use exit to log out.
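To confirm the whole mesh works, you can loop over the other hosts from each node with BatchMode enabled; that option disables password prompts, so a broken key setup fails immediately instead of hanging on "password:". The sketch below only prints the commands to run from hadoop-1 (hostnames assume the /etc/hosts entries above); remove the echo to actually execute them:

```shell
# Dry run: print a passwordless-login check for each worker node.
for host in hadoop-2 hadoop-3; do
  echo "ssh -o BatchMode=yes hadoop@$host hostname"
done
```

Each real command, when run, should print the remote hostname without ever asking for a password.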