Steps to Move an iSCSI Target to Another Linux Server
In a recent blog, Steps to Connect and Mount an iSCSI Target on a Linux Server, we saw how to attach an iSCSI LUN to a server. In this blog, we will see how to shift the same iSCSI target to another server. This kind of task comes up during an application or database migration to a new server, when iSCSI LUNs were used to hold backups or application files outside the server.
In the demo below, we will shift the /scan mount point, a partition created on an iSCSI LUN, from server test-machine01 to a new server, test-machine02.
Follow the steps below:
1. Install the iscsi-initiator-utils rpm package on the target server
2. Stop services dependent on iSCSI and unmount /scan on the source server
3. Log out from and remove the iSCSI target on the source server
4. Discover and log in to the iSCSI target LUN on the target server
5. Mount the partition on the target server
6. Start the dependent services on the target server
Step 1. Install the iscsi-initiator-utils rpm package on the target server: To use a Linux system as an iSCSI initiator (client), we need to install the iscsi-initiator-utils rpm package. Use the OS yum command to install it. Please note we are using Oracle Linux 7.9 for this demo. Once the installation is done, share the initiator name with the storage admin; it is required to map the iSCSI LUN to the new server.
[root@test-machine02 ~]#
[root@test-machine02 ~]# cat /etc/oracle-release
Oracle Linux Server release 7.9
[root@test-machine02 ~]#
[root@test-machine02 ~]# yum install iscsi-initiator-utils
Resolving Dependencies
--> Running transaction check
--> Finished Dependency Resolution
Dependencies Resolved
==============================================================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================================================
Installing:
iscsi-initiator-utils x86_64 6.2.0.874-22.0.1.el7_9 ol7_latest 429 k
Installing for dependencies:
iscsi-initiator-utils-iscsiuio x86_64 6.2.0.874-22.0.1.el7_9 ol7_latest 96 k
Transaction Summary
==============================================================================================================================================================================================
Install 1 Package (+1 Dependent package)
Total download size: 524 k
Installed size: 2.5 M
Is this ok [y/d/N]: y
Downloading packages:
(1/2): iscsi-initiator-utils-iscsiuio-6.2.0.874-22.0.1.el7_9.x86_64.rpm | 96 kB 00:00:01
(2/2): iscsi-initiator-utils-6.2.0.874-22.0.1.el7_9.x86_64.rpm | 429 kB 00:00:02
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 224 kB/s | 524 kB 00:00:02
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : iscsi-initiator-utils-iscsiuio-6.2.0.874-22.0.1.el7_9.x86_64 1/2
Installing : iscsi-initiator-utils-6.2.0.874-22.0.1.el7_9.x86_64 2/2
Verifying : iscsi-initiator-utils-6.2.0.874-22.0.1.el7_9.x86_64 1/2
Verifying : iscsi-initiator-utils-iscsiuio-6.2.0.874-22.0.1.el7_9.x86_64 2/2
Installed:
iscsi-initiator-utils.x86_64 0:6.2.0.874-22.0.1.el7_9
Dependency Installed:
iscsi-initiator-utils-iscsiuio.x86_64 0:6.2.0.874-22.0.1.el7_9
Complete!
[root@test-machine02 ~]#
[root@test-machine02 ~]#
[root@test-machine02 ~]# rpm -qa |grep iscsi
iscsi-initiator-utils-iscsiuio-6.2.0.874-22.0.1.el7_9.x86_64
iscsi-initiator-utils-6.2.0.874-22.0.1.el7_9.x86_64
[root@test-machine02 ~]#
[root@test-machine02 iscsi]#
[root@test-machine02 iscsi]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1988-12.com.oracle:f43ac3fda4a3
[root@test-machine02 iscsi]#
Step 2. Stop services dependent on iSCSI and unmount /scan on the source server: Stop the services that depend on the /scan mount point. We were using /scan as an NFS and Samba share for the application. Once the services are stopped, disable them and comment out the /scan entry in /etc/fstab so the server will not attempt to mount it at boot. Make sure /scan is not in use, then unmount it. Please note we already have 39 GB of data in /scan and two application folders, Image and attach.
[root@test-machine01 ~]#
[root@test-machine01 ~]# service nfs stop
Shutting down NFS daemon: [ OK ]
Shutting down NFS mountd: [ OK ]
Shutting down NFS quotas: [ OK ]
[root@test-machine01 ~]#
[root@test-machine01 ~]# service smb stop
Shutting down SMB services: [ OK ]
[root@test-machine01 ~]#
[root@test-machine01 ~]#
[root@test-machine01 ~]# chkconfig nfs off
[root@test-machine01 ~]#
[root@test-machine01 ~]# chkconfig smb off
[root@test-machine01 ~]#
[root@test-machine01 ~]#
[root@test-machine01 ~]# df -Th /scan
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 1008G 39G 918G 5% /scan
[root@test-machine01 ~]#
[root@test-machine01 ~]#
[root@test-machine01 ~]#
[root@test-machine01 ~]# ls -ltr /scan
total 2260
drwx------. 2 root root 16384 Oct 22 2020 lost+found
drwxrwxrwx. 2 app_user app_user 4096 Dec 9 12:30 Image
drwxrwxrwx. 3 app_user app_user 2293760 Jan 18 12:20 attach
[root@test-machine01 ~]#
[root@test-machine01 ~]#
[root@test-machine01 ~]# vi /etc/fstab
#
# /etc/fstab
# Created by anaconda on Sun Sep 29 12:18:09 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_jedoda2-lv_root / ext4 defaults 1 1
UUID=3e2509f5-45e1-4591-8e72-93ca17cef43e /boot ext4 defaults 1 2
/dev/mapper/vg_jedoda2-lv_swap swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
UUID=59d33c45-551b-4d56-9565-5eedefd56b2d /app ext4 defaults 0 0
#UUID="eab28b6a-910f-4ba2-8e4e-6db9c6af2012" /scan ext4 _netdev 0 0
:wq!
[root@test-machine01 ~]#
[root@test-machine01 ~]#
[root@test-machine01 ~]# umount /scan
[root@test-machine01 ~]#
[root@test-machine01 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_test-machine01-lv_root
26G 4.5G 21G 19% /
tmpfs 15G 296K 15G 1% /dev/shm
/dev/vda1 477M 78M 370M 18% /boot
/dev/mapper/vg_test-machine01-lv_app
40G 7.1G 31G 19% /app
[root@test-machine01 ~]#
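Before the umount above, it is worth confirming nothing still holds the mount open, otherwise umount fails with "target is busy". A minimal check, assuming util-linux's mountpoint and psmisc's fuser are installed (as they are on stock OL/RHEL 7):

```shell
# The mount we are about to release on the source server.
mnt=/scan

if mountpoint -q "$mnt"; then
    # Any PIDs listed here still have files open under the mount
    # and would block the umount; stop them first. Empty output
    # means we are clear to unmount.
    fuser -vm "$mnt"
else
    echo "$mnt is not currently mounted"
fi
```

If fuser reports processes, stop them (or inspect their open files with lsof) before retrying the umount.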
Step 3. Log out from and remove the iSCSI target on the source server: Log out from the iSCSI target, as we will not be using this iSCSI LUN on this server anymore. To be on the safer side, you can also delete the target record from this server. Once the logout is done and the target record is deleted, confirm to the storage admin so the LUN can be presented to the new server.
[root@test-machine01 ~]#
[root@test-machine01 ~]#
[root@test-machine01 ~]# /sbin/iscsiadm -m session -P 0
tcp: [1] 192.168.100.133:3260,1 iqn.1991-05.com.microsoft:storsimple8600-shg0997877l71en-target (non-flash)
[root@test-machine01 ~]#
[root@test-machine01 ~]# iscsiadm --mode node --target iqn.1991-05.com.microsoft:storsimple8600-shg0997877l71en-target --portal 192.168.100.133 --logout
Logging out of session [sid: 1, target: iqn.1991-05.com.microsoft:storsimple8600-shg0997877l71en-target, portal: 192.168.100.133,3260]
Logout of [sid: 1, target: iqn.1991-05.com.microsoft:storsimple8600-shg0997877l71en-target, portal: 192.168.100.133,3260] successful.
[root@test-machine01 ~]#
[root@test-machine01 ~]#
[root@test-machine01 ~]# /sbin/iscsiadm -m session -P 0
iscsiadm: No active sessions.
[root@test-machine01 ~]#
[root@test-machine01 ~]# iscsiadm -m session -R
iscsiadm: No session found.
[root@test-machine01 ~]#
[root@test-machine01 ~]# iscsiadm -m node --targetname iqn.1991-05.com.microsoft:storsimple8600-shg0997877l71en-target -p 192.168.100.133 -o update -n node.startup -v manual
[root@test-machine01 ~]#
[root@test-machine01 ~]# iscsiadm -m node --targetname iqn.1991-05.com.microsoft:storsimple8600-shg0997877l71en-target -p 192.168.100.133 -o delete
[root@test-machine01 ~]#
Step 4. Discover and log in to the iSCSI target LUN on the target server: Once the storage admin confirms the iSCSI LUN is presented to the new server, we can log in to the target and enable automatic login so no manual steps are needed after a reboot. Check the /var/log/messages file to get the device name assigned to the LUN. As the LUN is already partitioned and formatted with an ext4 file system, we can directly mount the device on the new server. Use the blkid command to get the UUID of the device and make an entry in /etc/fstab; the _netdev option ensures the mount is attempted only after the network, and hence the iSCSI session, is up.
[root@test-machine02 ~]#
[root@test-machine02 ~]#
[root@test-machine02 ~]# /sbin/iscsiadm -m discovery -t sendtargets -p 192.168.100.133
192.168.100.133:3260,1 iqn.1991-05.com.microsoft:storsimple8600-shg0997877l71en-target
[root@test-machine02 ~]#
[root@test-machine02 ~]# /sbin/iscsiadm -m node -T iqn.1991-05.com.microsoft:storsimple8600-shg0997877l71en-target -p 192.168.100.133 -l
Logging in to [iface: default, target: iqn.1991-05.com.microsoft:storsimple8600-shg0997877l71en-target, portal: 192.168.100.133,3260] (multiple)
Login to [iface: default, target: iqn.1991-05.com.microsoft:storsimple8600-shg0997877l71en-target, portal: 192.168.100.133,3260] successful.
[root@test-machine02 ~]#
[root@test-machine02 ~]# /sbin/iscsiadm -m node -T iqn.1991-05.com.microsoft:storsimple8600-shg0997877l71en-target -p 192.168.100.133 --op update -n node.startup -v automatic
[root@test-machine02 ~]#
[root@test-machine02 ~]# /sbin/iscsiadm -m session -P 0
tcp: [1] 192.168.100.133:3260,1 iqn.1991-05.com.microsoft:storsimple8600-shg0997877l71en-target (non-flash)
[root@test-machine02 ~]#
[root@test-machine02 ~]#
[root@test-machine02 ~]# cd /var/log
[root@test-machine02 log]#
[root@test-machine02 log]# ls -l mess*
-rw-------. 1 root root 349156 Jan 18 12:10 messages
-rw-------. 1 root root 893907 Jan 2 03:12 messages-20220102
-rw-------. 1 root root 1060113 Jan 9 03:10 messages-20220109
-rw-------. 1 root root 1063258 Jan 16 03:10 messages-20220116
[root@test-machine02 log]#
[root@test-machine02 log]# vi messages
Jan 18 12:10:59 test-machine02 kernel: scsi 3:0:0:3: Direct-Access MSFT STORSIMPLE 8600 221 PQ: 0 ANSI: 6
Jan 18 12:10:59 test-machine02 kernel: sd 3:0:0:3: Attached scsi generic sg3 type 0
Jan 18 12:10:59 test-machine02 kernel: sd 3:0:0:3: [sdc] 2147483648 512-byte logical blocks: (1.10 TB/1.00 TiB)
Jan 18 12:10:59 test-machine02 kernel: sd 3:0:0:3: [sdc] Write Protect is off
Jan 18 12:10:59 test-machine02 kernel: sd 3:0:0:3: [sdc] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 18 12:10:59 test-machine02 kernel: sdc: sdc1
Jan 18 12:10:59 test-machine02 kernel: sd 3:0:0:3: [sdc] Attached SCSI disk
[root@test-machine02 log]#
[root@test-machine02 log]# fdisk -l
Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 1048576 bytes
Disk label type: dos
Disk identifier: 0x000a8920
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 2099199 1048576 83 Linux
/dev/sda2 2099200 104857599 51379200 8e Linux LVM
Disk /dev/sdb: 161.1 GB, 161061273600 bytes, 314572800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 1048576 bytes
Disk label type: dos
Disk identifier: 0x8e917b78
Device Boot Start End Blocks Id System
/dev/sdb1 2048 314572799 157285376 83 Linux
Disk /dev/mapper/ol_test-machine02-root: 47.2 GB, 47240445952 bytes, 92266496 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 1048576 bytes
Disk /dev/mapper/ol_test-machine02-swap: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 1048576 bytes
Disk /dev/sdc: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x1b167028
Device Boot Start End Blocks Id System
/dev/sdc1 63 2147472809 1073736373+ 83 Linux
[root@test-machine02 log]#
[root@test-machine02 log]# blkid
/dev/sda1: UUID="9bd3f9f7-734c-4ef5-8bc2-c86df5a79c98" TYPE="xfs"
/dev/sda2: UUID="bSyqN1-dOx9-Sgob-LzlG-iyot-Ci3L-y3HU5j" TYPE="LVM2_member"
/dev/sdb1: UUID="32ab5545-f26f-4fd1-8dee-2cd873a0ea0f" TYPE="xfs"
/dev/mapper/ol_test-machine02-root: UUID="7f0c04c4-21c4-4b02-a495-3f6f9c96a9ca" TYPE="xfs"
/dev/mapper/ol_test-machine02-swap: UUID="89709dec-a9ab-449d-8d24-575961fff8e4" TYPE="swap"
/dev/sdc1: UUID="eab28b6a-910f-4ba2-8e4e-6db9c6af2012" TYPE="ext4"
[root@test-machine02 log]#
[root@test-machine02 log]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 150G 0 disk
└─sdb1 8:17 0 150G 0 part /app
sr0 11:0 1 1024M 0 rom
sdc 8:32 0 1T 0 disk
└─sdc1 8:33 0 1024G 0 part
sda 8:0 0 50G 0 disk
├─sda2 8:2 0 49G 0 part
│ ├─ol_test-machine02-swap 252:1 0 5G 0 lvm [SWAP]
│ └─ol_test-machine02-root 252:0 0 44G 0 lvm /
└─sda1 8:1 0 1G 0 part /boot
[root@test-machine02 log]#
[root@test-machine02 log]# vi /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed Dec 29 12:37:55 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/ol_test-machine02-root / xfs defaults 0 0
UUID=9bd3f9f7-734c-4ef5-8bc2-c86df5a79c98 /boot xfs defaults 0 0
/dev/mapper/ol_test-machine02-swap swap swap defaults 0 0
UUID=32ab5545-f26f-4fd1-8dee-2cd873a0ea0f /app xfs defaults 0 0
UUID="eab28b6a-910f-4ba2-8e4e-6db9c6af2012" /scan ext4 _netdev 0 0
:wq!
[root@test-machine02 log]#
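Rather than copying the UUID into /etc/fstab by hand, the entry can be generated from blkid output. A small sketch, using /dev/sdc1 as seen in /var/log/messages above (run it as root on the target server and redirect the result into /etc/fstab):

```shell
# Print a ready-made fstab entry for the given device, or nothing
# if the device (or its UUID) cannot be found.
fstab_line() {
    local uuid
    uuid=$(blkid -s UUID -o value "$1" 2>/dev/null) || return 0
    # _netdev defers the mount until the network (and the iSCSI
    # session) is up, so boot does not hang waiting for the device.
    printf 'UUID=%s  /scan  ext4  _netdev  0 0\n' "$uuid"
}

# On the target server:  fstab_line /dev/sdc1 >> /etc/fstab
fstab_line /dev/sdc1
```

Generating the line this way avoids a mistyped UUID, which would leave /scan unmounted after the next reboot.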
Step 5. Mount the partition on the target server: Create a folder scan in the root directory / and use the mount -a command to mount /scan. Also correct the owner and permissions on /scan and its directories.
[root@test-machine02 log]#
[root@test-machine02 log]# cd /
[root@test-machine02 /]#
[root@test-machine02 /]# mkdir scan
[root@test-machine02 /]#
[root@test-machine02 /]# mount -a
[root@test-machine02 /]#
[root@test-machine02 /]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 12G 0 12G 0% /dev
tmpfs 12G 0 12G 0% /dev/shm
tmpfs 12G 34M 12G 1% /run
tmpfs 12G 0 12G 0% /sys/fs/cgroup
/dev/mapper/ol_test-machine02-root 44G 6.5G 38G 15% /
/dev/sdb1 150G 126M 150G 1% /app
/dev/sda1 1014M 306M 709M 31% /boot
tmpfs 2.4G 36K 2.4G 1% /run/user/1000
tmpfs 2.4G 12K 2.4G 1% /run/user/42
tmpfs 2.4G 0 2.4G 0% /run/user/0
/dev/sdc1 1008G 39G 918G 5% /scan
[root@test-machine02 /]#
[root@test-machine02 /]# cd /scan
[root@test-machine02 scan]# ls
attach lost+found SignImage
[root@test-machine02 scan]# ls -ltr
total 2260
drwx------. 2 root root 16384 Oct 22 2020 lost+found
drwxrwxrwx. 2 500 500 4096 Dec 9 12:30 Image
drwxrwxrwx. 3 500 500 2293760 Jan 18 10:21 attach
[root@test-machine02 scan]#
[root@test-machine02 scan]#
[root@test-machine02 ~]#
[root@test-machine02 ~]#
[root@test-machine02 ~]# chown app_user:app_user /scan
[root@test-machine02 ~]#
[root@test-machine02 ~]# ls -ld /scan
drwxrwxr-x. 6 app_user app_user 4096 Jan 18 05:38 /scan
[root@test-machine02 ~]#
[root@test-machine02 ~]#
[root@test-machine02 ~]# cd /scan
[root@test-machine02 scan]# ls -ltr
total 2260
drwx------. 2 root root 16384 Oct 22 2020 lost+found
drwxrwxrwx. 2 500 500 4096 Dec 9 12:30 Image
drwxrwxrwx. 3 500 500 2293760 Jan 18 10:21 attach
[root@test-machine02 scan]#
[root@test-machine02 scan]# chown app_user:app_user Image
[root@test-machine02 scan]# chown app_user:app_user attach
[root@test-machine02 scan]#
[root@test-machine02 scan]# ls -ltr
total 2260
drwx------. 2 root root 16384 Oct 22 2020 lost+found
drwxrwxrwx. 2 app_user app_user 4096 Dec 9 12:30 Image
drwxrwxrwx. 3 app_user app_user 2293760 Jan 18 10:21 attach
[root@test-machine02 scan]#
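Note that chown on a directory alone does not touch its contents, and the listing above shows attach/ contains subdirectories created under the old numeric UID (500). A recursive chown covers nested files as well; this is a sketch using the demo's paths and user, with a guard so it is a no-op where the paths do not exist:

```shell
# -R walks the whole tree, reassigning files created under the old
# UID (500) to app_user on the new server. The [ -d ] guard simply
# skips paths that are absent.
for d in /scan/Image /scan/attach; do
    if [ -d "$d" ]; then
        chown -R app_user:app_user "$d"
    fi
done
```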
Step 6. Start the dependent services on the target server: Start the dependent services on the new server, and enable them so they come up automatically after a reboot.
[root@test-machine02 ~]#
[root@test-machine02 ~]# systemctl start smb.service
[root@test-machine02 ~]# systemctl enable smb.service
[root@test-machine02 ~]# systemctl status smb.service
● smb.service - Samba SMB Daemon
Loaded: loaded (/usr/lib/systemd/system/smb.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2021-12-30 06:52:10 +03; 2 weeks 6 days ago
Docs: man:smbd(8)
man:samba(7)
man:smb.conf(5)
Main PID: 1262 (smbd)
Status: "smbd: ready to serve connections..."
Tasks: 5
CGroup: /system.slice/smb.service
├─ 1262 /usr/sbin/smbd --foreground --no-process-group
├─ 1358 /usr/sbin/smbd --foreground --no-process-group
├─ 1359 /usr/sbin/smbd --foreground --no-process-group
├─ 1376 /usr/sbin/smbd --foreground --no-process-group
└─49991 /usr/sbin/smbd --foreground --no-process-group
Dec 30 06:52:10 test-machine02.saudiacatering.local systemd[1]: Starting Samba SMB Daemon...
Dec 30 06:52:10 test-machine02.saudiacatering.local smbd[1262]: [2021/12/30 06:52:10.896782, 0] ../../lib/util/become_daemon.c:136(daemon_ready)
Dec 30 06:52:10 test-machine02.saudiacatering.local smbd[1262]: daemon_ready: daemon 'smbd' finished starting up and ready to serve connections
Dec 30 06:52:10 test-machine02.saudiacatering.local systemd[1]: Started Samba SMB Daemon.
[root@test-machine02 ~]#
[root@test-machine02 ~]# systemctl start nfs-server.service
[root@test-machine02 ~]# systemctl enable nfs-server.service
[root@test-machine02 ~]# systemctl status nfs-server.service
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Drop-In: /run/systemd/generator/nfs-server.service.d
└─order-with-mounts.conf
Active: active (exited) since Tue 2022-01-18 12:16:07 +03; 23h ago
Main PID: 208143 (code=exited, status=0/SUCCESS)
Tasks: 0
CGroup: /system.slice/nfs-server.service
Jan 18 12:16:07 test-machine02.saudiacatering.local systemd[1]: Starting NFS server and services...
Jan 18 12:16:07 test-machine02.saudiacatering.local systemd[1]: Started NFS server and services.
[root@test-machine02 ~]#
Hope you like this article!
Please share your valuable feedback and comments below.