Note: This is an RHCE 7 exam objective.
Presentation
In the iSCSI world, you’ve got two types of agents:
- an iSCSI target provides some storage (here called server),
- an iSCSI initiator uses this available storage (here called client).
As you have already guessed, we are going to use two virtual machines, respectively called server and client. If necessary, the server and client can be one and the same machine.
iSCSI Target Configuration
Most of the target configuration is done interactively through the targetcli command. This command uses a directory tree to access the different objects.
To create an iSCSI target, you need to follow several steps on the server virtual machine.
Install the following packages:
# yum install -y targetcli
Activate the target service at boot:
# systemctl enable target
Note: This is mandatory, otherwise your configuration won’t be read after a reboot!
Execute the targetcli command:
# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb34
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/>
You’ve got two options:
- You can create a fileio backstore called shareddata of 100MB in the /opt directory (don’t hesitate to use tab completion):
/> backstores/fileio/ create shareddata /opt/shareddata.img 100M
Created fileio shareddata with size 104857600
Note: If you don’t specify write_back=false at the end of the previous command, it is assumed write_back=true. The write_back option set to true enables the local file system cache. This improves performance but increases the risk of data loss. In production environments, it is recommended to use write_back=false.
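Following the note above, the safer variant of the previous command simply appends the option; the confirmation line is assumed to match the earlier output:

```
/> backstores/fileio/ create shareddata /opt/shareddata.img 100M write_back=false
Created fileio shareddata with size 104857600
```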
- You can create a block backstore, which usually provides the best performance. You can use a block device like /dev/sdb or a logical volume previously created (# lvcreate --name lv_iscsi --size 100M vg):
/> backstores/block/ create block1 /dev/vg/lv_iscsi
Created block storage object block1 using /dev/vg/lv_iscsi.
Then, create an IQN (iSCSI Qualified Name) called iqn.2014-08.com.example with a target named t1 and get an associated TPG (Target Portal Group):
/> iscsi/ create iqn.2014-08.com.example:t1
Created target iqn.2014-08.com.example:t1.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
Note: The IQN follows the convention of RFC 3720 (see http://en.wikipedia.org/wiki/ISCSI for more details).
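As a quick sanity check outside of targetcli, you can verify that a name at least matches the iqn.YYYY-MM.reversed-domain[:identifier] shape with a simple grep. This pattern is my own loose approximation of the naming convention, not an official validator:

```shell
# Loose IQN format check (approximation of the RFC 3720 naming convention)
iqn='iqn.2014-08.com.example:t1'
if printf '%s\n' "$iqn" | grep -Eq '^iqn\.[0-9]{4}-(0[1-9]|1[0-2])\.[a-z0-9.-]+(:[^ ]+)?$'; then
    echo "IQN format looks valid"
else
    echo "IQN format looks wrong"
fi
```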
Now, we can go to the newly created directory:
/> cd iscsi/iqn.2014-08.com.example:t1/tpg1
/iscsi/iqn.20...ample:t1/tpg1> ls
o- tpg1 ................................................. [no-gen-acls, no-auth]
  o- acls ............................................................ [ACLs: 0]
  o- luns ............................................................ [LUNs: 0]
  o- portals ...................................................... [Portals: 1]
    o- 0.0.0.0:3260 ....................................................... [OK]
Below tpg1, three objects have been defined:
- acls (access control lists: restrict access to resources),
- luns (logical unit number: define exported resources),
- portals (define ways to reach the exported resources; each consists of a pair of an IP address and a port).
If you use a version prior to RHEL 7.1 (this step is now automatically done by the iscsi/ create command), you need to create a portal (a pair of IP address and port through which the target can be contacted by initiators):
/iscsi/iqn.20...ple:t1/tpg1> portals/ create
Using default IP port 3260
Binding to INADDR_ANY (0.0.0.0)
Created network portal 0.0.0.0:3260.
Whatever the version, create a LUN depending on the kind of backstore you previously chose:
- Fileio backstore:
/iscsi/iqn.20...ample:t1/tpg1> luns/ create /backstores/fileio/shareddata
Created LUN 0.
- Block backstore:
/iscsi/iqn.20...ample:t1/tpg1> luns/ create /backstores/block/block1
Created LUN 0.
Create an acl with the previously created IQN (here iqn.2014-08.com.example) and an identifier of your choice (here client); together they form the future initiator name:
/iscsi/iqn.20...ample:t1/tpg1> acls/ create iqn.2014-08.com.example:client
Created Node ACL for iqn.2014-08.com.example:client
Created mapped LUN 0
Optionally, set a userid and a password:
/iscsi/iqn.20...ample:t1/tpg1> cd acls/iqn.2014-08.com.example:client/
/iscsi/iqn.20...xample:client> set auth userid=usr
Parameter userid is now 'usr'.
/iscsi/iqn.20...xample:client> set auth password=pwd
Parameter password is now 'pwd'.
Now, to check the configuration, type:
/iscsi/iqn.20...xample:client> cd ../..
/iscsi/iqn.20...ample:t1/tpg1> ls
o- tpg1 ................................................. [no-gen-acls, no-auth]
  o- acls ............................................................ [ACLs: 1]
  | o- iqn.2014-08.com.example:client ......................... [Mapped LUNs: 1]
  |   o- mapped_lun0 ............................. [lun0 fileio/shareddata (rw)]
  o- luns ............................................................ [LUNs: 1]
  | o- lun0 .......................... [fileio/shareddata (/opt/shareddata.img)]
  o- portals ...................................................... [Portals: 1]
    o- 0.0.0.0:3260 ....................................................... [OK]
Finally, you can quit the targetcli command:
/iscsi/iqn.20...ample:t1/tpg1> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
Note: The configuration is automatically saved to the /etc/target/saveconfig.json file.
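The same save and restore operations are also available non-interactively, which can help after an accidental configuration loss (restoreconfig asks for confirmation before overwriting the running configuration):

```
# targetcli saveconfig
# targetcli restoreconfig /etc/target/saveconfig.json
```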
Also, it can be useful to check the ports currently in use:
# netstat -ant
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:3260            0.0.0.0:*               LISTEN
tcp        0      0 192.168.1.81:22         192.168.1.81:33584      ESTABLISHED
tcp6       0      0 :::22                   :::*                    LISTEN
tcp6       0      0 ::1:25                  :::*                    LISTEN
Then, open TCP port 3260 in the firewall configuration:
# firewall-cmd --permanent --add-port=3260/tcp
Success
Note1: With RHEL 7.2 (RHBZ#1150656), there is now a firewalld configuration file for the iscsi-target service. So you can type: # firewall-cmd --permanent --add-service=iscsi-target
Note2: In the new /usr/lib/firewalld/services/iscsi-target.xml configuration file, two lines are specified for the ports: TCP 3260 and UDP 3260. As everything has worked fine so far with TCP 3260 alone, I suppose you can run iSCSI on top of UDP, but it's not the default option (I didn't find any details on this point in RFC 7143).
Reload the firewall configuration:
# firewall-cmd --reload
Success
iSCSI Initiator Configuration
To create an iSCSI initiator, you need to follow several steps on the client virtual machine.
Install the following package:
# yum install -y iscsi-initiator-utils
Edit the /etc/iscsi/initiatorname.iscsi file and replace its content with the initiator name that you previously configured as an acl on the target side:
InitiatorName=iqn.2014-08.com.example:client
If you previously set up a userid and a password on the server, edit the /etc/iscsi/iscsid.conf file and paste the following lines:
node.session.auth.authmethod = CHAP
node.session.auth.username = usr
node.session.auth.password = pwd
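Instead of pasting the lines by hand, the three CHAP directives can be set with sed. This is only a sketch working on a temporary copy of the file; on a real initiator you would point it at /etc/iscsi/iscsid.conf:

```shell
# Work on a throwaway copy that mimics the stock commented-out directives
conf=$(mktemp)
cat > "$conf" <<'EOF'
#node.session.auth.authmethod = CHAP
#node.session.auth.username = username
#node.session.auth.password = password
EOF

# Uncomment and set each directive (GNU sed)
sed -i -e 's|^#\?node.session.auth.authmethod *=.*|node.session.auth.authmethod = CHAP|' \
       -e 's|^#\?node.session.auth.username *=.*|node.session.auth.username = usr|' \
       -e 's|^#\?node.session.auth.password *=.*|node.session.auth.password = pwd|' "$conf"

grep '^node.session.auth' "$conf"
```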
Start the iscsi service:
# systemctl start iscsi
Caution: This action is mandatory to be able to unmount the remote resource when rebooting. Don’t confuse iscsid and iscsi services!
Execute the iscsiadm command in discovery mode with the server ip address (here 192.168.1.81):
# iscsiadm --mode discovery --type sendtargets --portal 192.168.1.81
192.168.1.81:3260,1 iqn.2014-08.com.example:t1
Note1: If you don’t specify any port, the default port is 3260.
Note2: Don't use a DNS name as your portal address (here 192.168.1.81); this is a bad idea that will cause you a lot of trouble.
Execute the iscsiadm command in node mode with the server ip address (here 192.168.1.81):
# iscsiadm --mode node --targetname iqn.2014-08.com.example:t1 --portal 192.168.1.81 --login
Logging in to [iface: default, target: iqn.2014-08.com.example:t1, portal: 192.168.1.81,3260] (multiple)
Login to [iface: default, target: iqn.2014-08.com.example:t1, portal: 192.168.1.81,3260] successful.
Note: As before, if you don't specify any port, the default port is 3260. And again, use an IP address, not a DNS name, as the portal address.
To check the configuration, type:
# lsblk --scsi
NAME HCTL       TYPE VENDOR   MODEL            REV TRAN
sda  2:0:0:0    disk LIO-ORG  shareddata      4.0  iscsi
To be sure that your resource is not in read-only mode (RO column: 1 means read-only), type:
# lsblk | egrep "NAME|sda"
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda    8:0    0  100M  0 disk
Now, you can create a file system:
# mkfs.ext4 /dev/sda
mke2fs 1.42.9 (28-Dec-2013)
/dev/sda is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=4096 blocks
25688 inodes, 102400 blocks
5120 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=33685504
13 block groups
8192 blocks per group, 8192 fragments per group
1976 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
Retrieve the UUID of this disk:
# blkid | grep "/dev/sda"
/dev/sda: UUID="4a184c70-20ad-4d91-a0b1-c2cf0eb1986f" TYPE="ext4"
Add the disk UUID to the /etc/fstab file:
# echo "UUID=..." >> /etc/fstab
Note: Be very careful to type >> and not >, otherwise you will wipe out the existing contents of the file!
Make a copy of the /etc/fstab file before doing this operation if you don’t want to take any risk.
Edit the /etc/fstab file and add the mount point (here /mnt), the file system type (here ext4) and the mount options (_netdev):
UUID=... /mnt ext4 _netdev 0 0
Note: The _netdev mount option is mandatory to postpone the mount operation until after network initialization. Without it, the initiator boot process will drop into maintenance mode after a timeout.
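Before rebooting, it is cheap to check that the option actually made it into the entry. A small sketch with a sample line (substitute your real fstab entry):

```shell
# Check the 4th fstab field (mount options) for _netdev; sample line assumed
line='UUID=4a184c70-20ad-4d91-a0b1-c2cf0eb1986f /mnt ext4 _netdev 0 0'
opts=$(printf '%s\n' "$line" | awk '{print $4}')
case ",$opts," in
    *,_netdev,*) echo "_netdev present" ;;
    *)           echo "WARNING: _netdev missing" ;;
esac
```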
To check your configuration, type:
# mount /mnt
# touch /mnt/testFile
Note: A best practice is to execute the mount -a command each time you change something in the /etc/fstab file, to detect any boot problem before it occurs.
Optionally, you can dump the whole initiator configuration (-P 3 for maximum output, 0 for minimum):
# iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-870
version 6.2.0.873-28
Target: iqn.2014-08.com.example:t1 (non-flash)
        Current Portal: 192.168.1.81:3260,1
        Persistent Portal: 192.168.1.81:3260,1
                **********
                Interface:
                **********
                Iface Name: default
                Iface Transport: tcp
                Iface Initiatorname: iqn.2014-08.com.example:client
                Iface IPaddress: 192.168.1.10
                Iface HWaddress:
                Iface Netdev:
                SID: 1
                iSCSI Connection State: LOGGED IN
                iSCSI Session State: LOGGED_IN
                Internal iscsid Session State: NO CHANGE
                *********
                Timeouts:
                *********
                Recovery Timeout: 120
                Target Reset Timeout: 30
                LUN Reset Timeout: 30
                Abort Timeout: 15
                *****
                CHAP:
                *****
                username: usr
                password: ********
                username_in:
                password_in: ********
                ************************
                Negotiated iSCSI params:
                ************************
                HeaderDigest: None
                DataDigest: None
                MaxRecvDataSegmentLength: 262144
                MaxXmitDataSegmentLength: 262144
                FirstBurstLength: 65536
                MaxBurstLength: 262144
                ImmediateData: Yes
                InitialR2T: Yes
                MaxOutstandingR2T: 1
                ************************
                Attached SCSI devices:
                ************************
                Host Number: 2  State: running
                scsi2 Channel 00 Id 0 Lun: 0
                        Attached scsi disk sda  State: running
Source: targetcli man page and Linux-iSCSI wiki.
Useful Tips
Before rebooting, set up a virtual console, this can be helpful!
If you need to shut down target and initiator, shut down the initiator first. If you shut down the target first, the initiator won’t be able to unmount the remote resource and will be stuck in the shutdown process.
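The shutdown order above can be scripted on the initiator as follows; the target name, portal and mount point are the ones used in this tutorial:

```
# umount /mnt
# iscsiadm --mode node --targetname iqn.2014-08.com.example:t1 \
    --portal 192.168.1.81 --logout
# systemctl poweroff
```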
During the exam, as an extra precaution, unmount the remote resource before rebooting the initiator, you will avoid any bad surprise.
Additional Resources
In addition, you can watch CalPOP's video Creating iSCSI SAN Storage on Linux (CentOS 7.0) (10min/2015), Venkat Nagappan's video Setting up iSCSI Target & Initiator (19min/2015) or follow this IBM iSCSI tutorial.
There is also a wiki about Targetcli.
Dell offers some interesting information about iSCSI, MPIO and performance tips in its RHEL Configuration Guide for Dell Storage PS Series Arrays.
Finally, Red Hat provides a tutorial about Setting up iSCSI Export on Red Hat Enterprise Linux 7.
Check Your Knowledge
Test yourself!
I found it was also necessary to issue:
systemctl start target
otherwise the initiator could not create the filesystem with the above mentioned:
mkfs.ext4 /dev/sda
I didn’t get the same behavior but I only tried on virtual machines, not on physical hardware.
Hi,
why don’t you use
# firewall-cmd --permanent --add-port=3260/tcp
Thank you
RODOLFO
Yes, you are absolutely right, it will be quicker. Thanks.
or even better, --add-service=iscsi-target
Yes. Thanks.
Warning: the iscsi-target.xml file for firewalld was added after 7.0.
So if an exam system is only running 7.0, you will have to add the specific port like shown above.
I find myself being tripped up on these differences between 7.0 and 7.2 often. Whenever I install 7.0, if remote repos are enabled it automatically updates to 7.2. Have to remove all of the repos and add just the Everything iso to get packages and stay on 7.0.
Why do you update your repository? Why not stay in RHEL 7.0 or RHEL 7.1 without any updates?
On a CentOS 7.0 ISO the default repos upgrade you to 7.2.
I didn’t notice it for a while. Now I remove all of the default repos and add one just for the 7.0 ISO, so it uses the packages from ISO.
Yes, this is the right way to do it. I should have written it before.
You may need to set ACL configuration against the target or limit the target to a given IP address or IQN. If you are required to do this you will have 2 options
1) Protect it via ACL using IQN of client = “cat /etc/iscsi/initiatorname.iscsi” on client and add on server in targetcli (quite easy really)
“../acls> create iqn.1994-05.com.redhat:a51085a87171”
2) Protect it via firewall: using the standard --add-port will not protect it unless you have a specific source address in your zone. If this is the case you will need to use rich rules. The easiest is to use firewall-config, as remembering something like this, which is not documented in the man pages, may be difficult:
"firewall-cmd --permanent --add-rich-rule='rule family="ipv4" port port="3260" protocol="tcp" source address="192.168.0.174" accept'"
From my testing, if you use ACLs you do not need any of the attribute settings, and writing a filesystem therefore works without demo_mode_write_protect=0, which is advised against strongly in http://linux-iscsi.org/wiki/ISCSI
This is also the way RedHat documents it https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/sect-iSCSI_based_storage_pools.html
or if they request Auth you will need the following (from above document)
# set auth userid=redhat
# set auth password=password123
# set attribute authentication=1
# set attribute generate_node_acls=1
This needs to correspond to the client settings in /etc/iscsi/iscsid.conf for CHAP auth
node.session.auth.authmethod = CHAP
node.session.auth.username = redhat
node.session.auth.password = password123
I think if AUTH was required it may say this as part of objectives.
I have changed my tutorial according to what you wrote. Thanks a lot.
for portals/
i found the following , it’s better to delete the default 0.0.0.0 and add the targetcli server ip.
# portals/ delete 0.0.0.0 3260
and add the targetcli server ip
# portals/ create 192.168.10.20 3260
Interesting. Thanks.
Dear CertDepot,
Thanks for your tutorials, they are so informative. But i’ve been struggling with the iscsi-initiator on the client side for a while now. I’ve followed your tutorial from start to finish but anytime i come to the login part on the client side I keep on getting this error
“iscsiadm: initiator reported error (24 – iSCSI login failed due to authorization failure).”
” iscsiadm: Could not log into all portals”
Do you have any ideas what I might be doing wrong so that i can correct it? it is driving me crazy!
Sorry, I don’t have any idea concerning the problem that you have.
This may be because you added authentication at the target after performing the initial discovery? Discovery "caches" the target configuration and will not be updated if you update iscsid.conf. You should "rediscover" by using iscsiadm -m node -o delete and then redo the discovery.
You can check “cached” configurations under /var/lib/iscsi/nodes.
Also, iscsiadm -m session -P 2 -S should show CHAP user/password to check if current values are what you expect.
I too had this problem, and could not resolve it. I found the solution for me was to restart iscsid on the client (initiator) side and it solved my issue, eg: “systemctl restart iscsid”, then re-run your login command. I know this question is old, but good information regardless.
Interesting. Thanks.
Looking for difference between iscsi and iscsid services.
Can anybody provide some brief detail?
All you need to remember is that both need to be running in order for your initiator to work properly.
Another piece of advice: NEVER, and I mean NEVER, use a DNS entry as your portal address. ALWAYS use the server IP. That is, if you don't want to spend hours troubleshooting why your initiator isn't finding its target. Some of us learned the hard way.
Interesting comment. Thanks.
Another advice that would save your audience $400 is making sure that you do a systemctl restart iscsid.service && systemctl restart iscsi.service on your initiator. Not doing this will cause your initiator to authenticate using the default iqn address instead of the one you created.
I’m ready to believe you. However, until now, I didn’t experience this problem.
that worked for me, before trying to restart both services I had the error iSCSI login failed with error 24 when I tried to run the initiator
Thank you Linuxfan, your tips saved my day. I spent countless hours and couldn’t figure out why I was getting a login error, despite my settings are alright. the systemctl restart iscsid.service && systemctl restart iscsi.service did the trick.
While testing this I fell into a booby trap: had a server exporting LUNs made from a LVM managed disk and also had LVM merging those LUNs into an iSCSI PV.
After a server (target) reboot, the server LVM claimed the LUNS and the target was unable to “export” them. Ouch. Ended up learning about LVM filtering, which can be used to prevent LVM from managing anything it sees that looks like a PV.
Hello everyone,
If you are experiencing an issue during your RHEL training such as:
“iscsiadm: initiator reported error (24 – iSCSI login failed due to authorization failure).”
” iscsiadm: Could not log into all portals”
It appears to be a bug even in RHEL 7 as far as I understand. I am not sure whether an upgraded version like 7.2 or software updates may fix it, and only if you are subscribed to Red Hat.
Now here is the solution:
If you are running tests on VMs and your domain is example.com and you have named your machines after the domain for e.g client.example.com and server.example.com
Then you should edit /etc/hosts file on VMs used for iscsi target and client/initiator with their ip address and domain name in it. for e.g
192.168.122.20 server.example.com server
192.168.122.60 client.example.com client
restart target and iscsi on both machines and it will succeed.
Suggestion/advices are always welcome.
I’m on a Centos 7, I run systemctl restart iscsid.service && systemctl restart iscsi.service on the initiator as said by linuxfan user and it worked without restarting anything on the target
Hello all,
The article here is really informative and helpful for the beginners. Thanks for writing in the complete step by step guide.
I am new to the environment, and have tried creating the iscsi target on centos 7 based on the inputs given.
I am connecting the ISCSI target from ubuntu on client side.
I am able to connect to the target, but the drive connected is in the read only mode.
I am not able to trace the error I did.
Can you please guide me, where I may be going wrong while making the connection / or making the volume group.
Thanks a ton in advance.
Regards.
Abhinav Aggarwal
I may seem rude but this is part of the learning process to try by yourself to find what you did wrong.
In addition, I couldn’t help you with the very limited information that you are providing.
This tutorial has been tried and tested many times by me and many exam candidates, you can rely on it.
Restart from scratch, even if it’s painful. At the end of the day, you won’t regret it!
Good luck.
On CentOS 7.1.1503 it looks like the python-six library is too old and not aligned with targetcli requirements. It doesn't work with targetcli. If you're working on CentOS 1503, update that package (yum update -y python-six).
Interesting. Thank you.
Very useful,
To verify node settings (authentication, automatic startup, discovery address etc.) on the client side:
iscsiadm -m node -T target_name -o show
Interesting. Thanks.
Hi! After writing an entry in /etc/fstab, I type reboot and it gets stuck. It says “connection1:0: ping timeout of 5 seconds expired, last rx 4302580612, last ping 4302585612, now 4302590625”
Any ideas? The only elegant solution I found is to issue the iscsiadm ... --logout command after typing mount -a, and then it reboots instantly; otherwise it takes approx. 5 min.
Any more tips for rebooting after doing persistent mount?
It seems you’ve got a network connection problem. Check your network configuration.
How likely is it that we’ll be given an existing LVM partition that needs to be SHRUNK before we can create the block backstore LVM?
Obviously its way easier to do the file backstore as now partition / FS needs to be touched. I’m just a bit concerned they throw a spanner in the works and want an existing volume shrunk first, before you can deploy the block backstore.
Thanks,
I have problem, target is broken. Config is OK, but from client server:
nmap targetIP
PORT STATE SERVICE
3260/tcp filtered iscsi
In target server run:
firewall-cmd --permanent --add-service=iscsi-target
Client success nmap test and connect to target.
It looks to me like it wasn’t the target that was broken, but rather firewall which didn’t allow incoming connections.
I’m having the following issue with an iscsi initiator in RHEL7.0
When I reboot without a proper umount and logout – the system just hangs.
I guess it is some kind of bug, which I have solved by editing the “/etc/iscsi/iscsid.conf” and setting the logout timer to 1 second (default is 15).
The value is “node.conn[0].timeo_logout_timeout = 1”
Once I have edited and restarted the “iscsi” and “iscsid” services – the machine simply reboots and/or shutdowns as expected.
Can someone confirm this behaviour???
Relevant packages:
initscripts-9.49.17-1.el7_0.1.x86_64
iscsi-initiator-utils-6.2.0.873-21.el7.x86_64
iscsi-initiator-utils-iscsiuio-6.2.0.873-21.el7.x86_64
If you are studying for the RHCE you should not need to edit “/etc/iscsi/iscsid.conf”.
Did you shutdown the target before the initiator, as doing that can cause this issue. I am assuming that the /etc/fstab file is correct. On a side note did you try any other time lengths from 2 to 17?
I meant that rebooting the initiator (the client) without unmounting and logging out of iSCSI will cause the machine to stall (it never shuts down). And in a hurry, this could happen.
I’ve mentioned this as a precaution – as the script that RedHat will use – probably will not check for iscsi mounts – it will just reboot /this is just an assumption/. And if your machine never comes up – and it doesn’t even properly shut down – then you fail 🙂
No, I didn’t try another length. The default one is 15s. I was thinking about 0 – but in real world a second or 2 could really take for a log out, although in corporate environment this means serious network issues.
I suspect that this is to do with the shutdown order in the systemd setting. I think this was resolved in RHEL 7.1. and possibly some update. If you have it try a later version of RHEL or Centos.
Many people are reporting that the hard drive they partition is full (all 4 primary partitions are used) and there is no partition or space left. How would you create a partition for an iSCSI target without a spare partition or free space?
You don’t always need to create a partition for iSCSI, sometimes creating a file is enough.
If one of the used partitions is a swap partition, you can set up a swap file and reuse the partition.
If there is a lack of space, you can reduce the size of the swap partition/file.
You may also have some free space in a volume group. If that’s the case, the way to go is probably to create a new LVM volume.
Thank you certDepot and Lisenet. You guys are very helpful.
Everything worked perfectly. I was able to create a partition from the target drive and mount it via fstab with the _netdev option, but whenever the system restarts it just hangs and I get the following message: "connection1:0 ping timeout of 5 secs expired, recv timeout 5, last rx 4295702572, last ping 4295707573, now 4295712578". God, iSCSI is so buggy.
I am trying to do a login but I get a iscsiadm: No records found.. the discovery is successful as I can see the iqn
iscsiadm --mode node --targetname iqn.2017-01.com.example:disk --portal 172.16.235.222 --login
iscsiadm: No records found
Do you have any idea what might be wrong?
Thanks.
Is there any chance that you had some other target configured previously, and that target is still returning an old target?
I found what it was, the iscsi service was not running on the initiator… I thought it was iscsid.
Both services are needed on the iSCSI initiator. The iscsid service is the main service that accesses all configuration files involved. The iscsi service is the service that establishes the iSCSI connections.
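A quick way to check that both units are in the expected state on the initiator (unit names as shipped with RHEL 7):

```
# systemctl is-active iscsid    # main daemon handling the configuration
# systemctl is-active iscsi     # restores the recorded sessions at boot
# systemctl is-enabled iscsi
```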
Hi,
So in the exam configuring target and initiator or server and client is part of the exam? Or you only need to configure the initiator?
You need to know how to configure both.
Okay, I have configured both, It seems not that hard. But, there’s a but, after issuing the command below
# iscsiadm -m discovery -t st -p MYIP
# vi /etc/iscsi/initiatorname.iscsi
# systemctl restart iscsi
# systemctl restart iscsid
# iscsiadm -m node -T iqn.2018.com:server -p MYIP -l
# lsblk/fdisk -l
I was able to see the disks, the 2 disks I created.
then,
fdisk /dev/sdc
fdisk /dev/sdd
then
pvcreate /dev/sdc1
pvcreate /dev/sdd1
vgcreate vgnew1 /dev/sdc1
vgcreate vgnew2 /dev/sdd1
lvcreate -l 100%FREE -n lvdisk1 /dev/vgnew1
lvcreate -l 100%FREE -n lvdisk2 /dev/vgnew2
mkfs -t xfs /dev/vgnew1/lvdisk1
mkfs -t xfs /dev/vgnew2/lvdisk2
mkdir /disk1
mkdir /disk2
vi /etc/fstab
mount -a
# BAM! All are mounted. I can see my new disk coming from the target.
Then, I tried to unmount. It was successful. Tried to mount -a again, it was not, it says device is busy.
Okay, issue the command reboot.
HANG! Rebooting…………..
Okay, force shutdown the VM, I am now in MAINTENANCE MODE. Hmmm, maybe I shall stop the sharing in TARGET.
– issue systemctl stop iscsi ; systemctl stop iscsid.
Reboot again. And in maintenance mode again, …
Right now. I’m waiting on you guys because I’m stuck and searching for answers in google.
I personally would configure /etc/iscsi/initiatorname.iscsi before attempting to discover targets to make sure that the correct initiator name is used.
PS! I have configured the TARGET nicely and easy without encountering errors. Only the initiator. After the reboot. I will really appreciate, in this blog or page. There’s a do’s and don’ts!
Like BEWARE!!!!!!!
don’t reboot the server IF!
Reboot the server AFTER you do this.
ELSE goodbye $400.
Okay, server is up now. I don’t like what happened…
1st scenario
– after I reboot the server it says Rebooting… in the console. It was hanging, so I went to vcenter and force shutdown the server. This is not good, do we have access to the console server of the VM?
After I force shutdown, I was stuck in maintenance mode. So tried to stop the sharing the iscsi and stopping the service in target. It did not work.
Finally I rebooted it again,
And I’m in maintenance mode. put # on the /etc/fstab.
I am now in.
My questions:
1. What to do next?
2. What to avoid?
3. what will I do to have my iscsi disk up after reboot without going to maintenance mode.
I think tips and advices is really needed in ISCSI.
You should be mounting the logical volume lvdisk2 and not the physical disk sdd1. From one of your replies:
[root@~]# mount -a
mount: /dev/sdd1 is already mounted or /disk1 busy
This suggests that you are mounting the physical LVM volume sdd1. You should mount the logical volume lvdisk2 that you created.
# tail -1 /etc/fstab
UUID=”fGX1XW-z55Q-bAQh-bc1q-iwzR-JNYk-V91aR” /disk1 xfs _netdev 0 0
[root@ ~]#
[root@~]# mount -a
mount: /dev/sdd1 is already mounted or /disk1 busy
[root@ ~]# reboot
questions:
1. Why it keeps saying it is mounted, when it is not? I tried to manually mount it. It keeps saying that. But it was successful mounting after first try, after the umount, it doesn’t work now.
Check the output of /proc/mounts and see if the disk1 is reported there. Also check lsof output and look for any references to the disk1. There may be a process that uses it.
I have mastered it. Hoping for an honest answer, is this enough? Can you say you can perfect the iscsi target initiator chapter if you can mount the disk from the target and it is persistent in the reboot? I think I am good. I need to master on authentication of iscsi.
I cannot claim that, but I’ve been working with NetApp storage (both iSCSI and NFS3/4) for a couple of years so I think I know a bit or two about the subject.
It is interesting that you say in the exam, its wise to unmount the remote resource to avoid surprises. Can I ask to what purpose is this?
Surely the whole point of an initiator is to mount a partition on boot and then when rebooting to do this seamlessly without manual intervention?
I do ask this though because I encountered a very strange thing in my exam related to the initiator where by after doing all the steps correctly and then doing a ‘mount -a’ where all seemed to be good, a reboot basically broke my client where it would not boot back up.
I am not entirely sure if I did something weird with the MBR, my LVM on the target or is it something odd within iscsi itself that prevented the client from rebooting. This was on RH 7.0 by the way which has its issues.
I’ve heard a lot of people having problems when they didn’t unmount resources before rebooting.
It is for this reason that I am pretty careful.
Yes, it was odd alright as I did the setup twice, both with similar outcomes. It meant that I lost a load of time which put me on the backfoot for the rest of the exam. I had to rebuild the client VM a few times as the console was giving me no output which is unexpected.
The exam version was RH 7.0, so I would advise people to practice on this as much as possible as there are a few kinks in this version that seems to have been ironed out in later versions. It is bad form on Red Hat to keep this version in the exam when there are known issues with this build.
Yes, I agree.
How about the target config, it is gone after the reboot, the only thing that is left there is the fileio.
I was able to configure iscsi target/iscsi initiator easily, with CHAP. disk, etc.. I was able to mount..and when I reboot the iscsi iniator server it is not hanging.. BUT
BUT…….
if I reboot the target server, then upon boot up it will erase my configuration. Why is this?
Then I can not do a restoreconfig because it is saying /dev/iscsi/disk1 is in use?
okay, I stop all services from initiatior to target. And yet I can’t do a restoreconfig. okay, I tried to lvremove and the (dmsetup remove) the said disk and it is saying the same. It is busy.
Okay since I have time, let’s try to configure target again, I know this is for my benefit. So I created disk2.
then all is good..able to mount from iscsi initiator etc. Then after the reboot of the target, my config was gone. And I tried to run the restoreconfig and it is not working.
Please help.
Have you tried running lsof to identify what process is using the disk?
Okay my question is, why my config is gone after every reboot. I’m asking why, tried recreating them, I don’t mind. I became a master of iscsi target and initiator. Commands are in my finger now but wondering why?
Then one server is up I can’t run the restoreconfig because it keeps saying disk1 is being used, I lsof etc. Nope it’s not being used. Spent 4 hours in google, searching for answer.
Somebody said, fixed the lvm.conf [global_filter],
I did and I was able to run the restoreconfig saveconfig. My config is back. But, I don’t want this to happen in my exam. I’m just wondering why it is gone after reboot? Any ideas please? In target?
Did you actually save the config by running “saveconfig”? Did that part work?
I’ve had this today, and figured out that I forgot to systemctl enable target. D’oh!
Yes, it’s not there. What is the reason? Why target config is not being saved? It is fixed now. I need to modify the /etc/lvm/lvm.conf
I am trying to define authentication per acl basis.
I have enabled Authentication on per ACL Basis.
i.e.
under tpg
=========
/iscsi/iqn.20…:target8/tpg1> get attribute authentication
authentication=0
under ACL:
==========
/iscsi/iqn.20…al.rhce:test1> get auth
AUTH CONFIG GROUP
=================
mutual_password=
—————-
The mutual_password auth parameter.
mutual_userid=
————–
The mutual_userid auth parameter.
password=username
—————
The password auth parameter.
userid=password
———–
The userid auth parameter.
================
As per the target configuration, it should only allow access to this acl using mentioned username/password
On Client:
/etc/iscsi/iscsid.conf
If I disable ( # ), chap settings . ( i.e.) remove user/pass settings .
#node.session.auth.authmethod = CHAP
It should not be able to access the acl and should go through error of user/password? correct me if I am wrong.
but it gets logs into the target and I am able to mount iscsi drives on the client even without password.
1) What’s the expected behavior in this case?
2) what’s the correct method to enable authentication on per acl basis?
Thanks
What’s the value of generate_node_acls?
I have already sent you the required data.
can you please answer these
1) What’s the expected behavior in this case?
2) What’s the correct method to enable authentication on per acl basis?
I would suggest you read the man file
man targetcli
in addition as Lisenet pointed out could you post the value of generate_node_acls
/iscsi/iqn.2018-01.com.example:t1> get parameter
Hi,
If I would create a backstore block for my iscsi data that I need to create a LV. I know the commands to create LV’s and VG’s but what do I have to do if it has no free space and everything is already assigned to the existing LV’s?
Thank you.
Marek
After I mounted the block device from the target and rebooted my client. It ended up in the maintenance mode.
Which minor version of RHEL 7 are you using?
I came a cross a similar problem, Check the fstab file for typeo’s