Gentoo Forums
Anyone configured and using iSCSI targets on their Gentoo?
eohrnberger
Apprentice

Joined: 09 Dec 2004
Posts: 240

PostPosted: Thu May 05, 2022 2:44 am    Post subject: Anyone configured and using iSCSI targets on their Gentoo?

I’m interested in configuring a number of iSCSI targets (shares) on my Gentoo host, with a zfs volume backstore.

The concept I have in mind: when a VirtualBox VM migrates from one physical host to another, its system hard disk doesn't have to migrate with it. As an iSCSI target, the disk would physically remain on the same storage with no, or only a minor, performance penalty, or at least allow greater flexibility as to which VM host can run the VM (I have a shared VirtualBox VM registry which all the physical hosts use).

Yes, I've tried running a VM's system disk across an NFS mount as a VDI, but this does have a performance penalty. Would this performance penalty be reduced with iSCSI? I'm guessing it might be.

It's an attempt to mimic SAN storage and vSphere-style migration, at least in theory.

I’ve found some references which have helped:
Yet, when I configure the iSCSI target, and try to connect to it from a VM, I get the system’s message log filling up with:
Code:
kernel: Unable to locate Target IQN: iqn.2022-05.<TLD>.<DOM>.192.168.2.2:iSCSI-Test-Volume-A in Storage Node
kernel: iSCSI Login negotiation failed.
(<TLD> and <DOM> substituted for privacy)

Near as I can figure, once VirtualBox tries to start a VM and connect to its iSCSI system disk, it refuses to stop retrying on failure. Further, once the VirtualBox GUI is started on a VM physical host, VirtualBox enumerates all the hard disks configured in its registry, fails on the iSCSI one, refuses to stop trying, and kicks off a series of events which ends up filling the system log.

Sure, I probably have mucked up the iSCSI target configuration, and am still trying to figure out which kernel module needs to be added to the kernel to respond to the iSCSI connection request (I'm figuring it must be ./kernel/drivers/target/iscsi/iscsi_target_mod.ko, but I have yet to figure out how to insmod or modprobe this kernel module into the kernel). Checking with the kernel's 'make menuconfig', it seems that it can only be compiled as a kernel module(?) and can't be compiled into the kernel itself(?).
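For my own notes, this is roughly the module check/load sequence I have in mind once the config is sorted (untested so far; CONFIG_TARGET_CORE / CONFIG_ISCSI_TARGET are the options I believe govern this):
Code:

# Check what the running kernel was built with (needs /proc/config.gz support)
zgrep -i -e CONFIG_TARGET_CORE -e CONFIG_ISCSI_TARGET /proc/config.gz

# See whether the LIO modules are already loaded
lsmod | grep -i -e target_core -e iscsi_target

# Load the iSCSI fabric module; modprobe pulls in target_core_mod as a dependency
modprobe iscsi_target_mod

# Sanity check: configfs should be mounted and the target core registered
ls /sys/kernel/config/target/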

Also wondering if an iSCSI disk can be shared between two VMs to stand in as a cluster shared disk for cluster control. Per Shared disk with VirtualBox - Between 2 VMs on same host, this might also be possible (rough sketch below), but that's getting ahead of myself. I figure I need to make iSCSI work first.
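For reference, the VDI-based shared-disk approach from that topic looks roughly like this, if I have the VBoxManage flags right (the file name and VM names here are just placeholders):
Code:

# Shareable media must be fixed-size, not dynamically allocated
VBoxManage createmedium disk --filename /vms/shared-quorum.vdi --size 1024 --variant Fixed

# Mark the image shareable so more than one VM may attach it
VBoxManage modifymedium disk /vms/shared-quorum.vdi --type shareable

# Attach it to both cluster VMs
VBoxManage storageattach ClusterNode1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium /vms/shared-quorum.vdi
VBoxManage storageattach ClusterNode2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium /vms/shared-quorum.vdi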

PS: Doesn’t seem like the package:
Code:
[I] sys-block/targetcli-fb
     Available versions:  2.1.54 {PYTHON_TARGETS="python3_8 python3_9"}
     Installed versions:  2.1.54(08:55:20 05/01/22)(PYTHON_TARGETS="python3_9 -python3_8")
     Homepage:            https://github.com/open-iscsi/targetcli-fb
     Description:         Command shell for managing Linux LIO kernel target

has an OpenRC init script to support it (or do I have this wrong? If not, is there one around?)
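If it really doesn't ship one, I imagine a minimal OpenRC script that just restores the saved LIO config with targetctl would do the job. An untested sketch (assuming targetctl lives in /usr/bin and the default /etc/target/saveconfig.json path; the ZFS dependencies are only there because the backstore is a zvol):
Code:

#!/sbin/openrc-run
# /etc/init.d/targetcli -- restore/clear the LIO kernel target configuration

description="Restore the LIO kernel target configuration saved by targetcli"

depend() {
        need net
        use zfs-import zfs-mount
}

start() {
        ebegin "Restoring LIO target configuration"
        /usr/bin/targetctl restore /etc/target/saveconfig.json
        eend $?
}

stop() {
        ebegin "Clearing LIO target configuration"
        /usr/bin/targetctl clear
        eend $?
}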

Thanks for listening to me vent my frustrations. As you can tell, I've tried to be thorough in my research before posting and troubling everyone.
alamahant
Advocate

Joined: 23 Mar 2019
Posts: 3879

PostPosted: Thu May 05, 2022 5:18 pm    Post subject:

Do you have in your VMs a file
Code:

/etc/iscsi/initiatorname.iscsi

containing the IQN of the initiator exactly as it was created in targetcli?

like
Code:

InitiatorName=iqn.2022-05.<dom>:<initiator-name>

?
Did you set any username and password in targetcli?
In which targetcli directory exactly?
Are they also included in the VMs

/etc/iscsi/iscsid.conf
?
Is port 3260 maybe blocked by a firewall?
Plz see
Code:

EXAMPLES
       Discover targets at a given IP address:

              sh# iscsiadm --mode discoverydb --type sendtargets --portal 192.168.1.10 --discover

       Login, must use a node record id found by the discovery:

              sh# iscsiadm --mode node --targetname iqn.2001-05.com.doe:test --portal 192.168.1.1:3260 --login

       Logout:

              sh# iscsiadm --mode node --targetname iqn.2001-05.com.doe:test --portal 192.168.1.1:3260 --logout

       List node records:

              sh# iscsiadm --mode node

       Display all data for a given node record:

              sh# iscsiadm --mode node --targetname iqn.2001-05.com.doe:test --portal 192.168.1.1:3260



from
man iscsiadm.


Is
/etc/init.d/iscsid enabled and running in VM?
Are your VMs Gentoo?
Quote:

I’m interested in configuring a number of iSCSI targets (shares) on my Gentoo host, with a zfs volume backstore.

We have not reached this point yet, but how do you do it?
Via
backstores/fileio
?
Or via
backstores/block
?
Ok I found it
Code:

backstores/block create disk01 /dev/zvol/<pool-name>/<vol-name>
iscsi/<iqn>/tpg1/luns create /backstores/block/disk01
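
The zvol itself would need to exist first of course; I assume something like
Code:

zfs create -V 10G <pool-name>/<vol-name>    # add -s for a sparse (thin) volume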

_________________
:)
szatox
Advocate

Joined: 27 Aug 2013
Posts: 3137

PostPosted: Thu May 05, 2022 10:38 pm    Post subject:

I've been experimenting with those things for a pretty long time (I think I did some storage over InfiniBand back then), but I recommend putting the target on a dedicated machine.
The target suite is quite troublesome during updates; the fewer packages you have there, the easier you'll deal with conflicts (or you may opt to never update this single-purpose server, if it's not exposed to the internet).


Quote:
I've tried running a VM's system disk across an NFS mount as a VDI, but this does have a performance penalty. Would this performance penalty be reduced with iSCSI? I'm guessing it might be.
Dunno. NFS with RDMA was quite amazing, fully saturating a 16Gbps IB link with data, but it's not supported over ethernet.
iSCSI has some RDMA capabilities too, you may want to look into that. Unfortunately I can't tell you more about it. Just know it exists.
Also, ethernet adapters: you didn't say what you have there, but 10Gbps adapters offer significantly lower latency than 1Gbps. 1Gbps adapters used to cause performance issues for me, even though the aggregated links were not saturated. Granted, that was with Ceph, which probably suffers from network latency more than centralized solutions do, but it's still something to keep in mind.

And one last thing, slightly off, but related: if you're running multiple compute nodes, you might want to have your storage duplicated too. So far I haven't seen any solution without some serious drawback. I've worked with well polished commercial solutions, but their drawback is way too many digits on the price tag.
Ceph does work well and is really resilient and runs over ethernet, but consumes a lot of RAM, suffers from write amplification (performance), and some parts _must_ be backed by SSD (also performance).
Either way, you may want to keep this issue in mind when designing your environment.
eohrnberger
Apprentice

Joined: 09 Dec 2004
Posts: 240

PostPosted: Thu May 05, 2022 11:34 pm    Post subject:

First, let me at least say that I really appreciate you taking the time to respond to me and this inquiry.
It is really nice to be part of the Gentoo community when we all help each other like this.
I kinda look forward to an opportunity to help others; I haven't yet encountered that situation, but the will is there. I can help some folks getting up to speed with their first steps into ZFS, as I'm quite comfortable with my experience there (right now I have some 7 TB of hardware-mirrored storage which has survived a number of disks dying off without losing a single file!).

Anyway, on to the topic at hand.

alamahant wrote:
Do you have in your VMs a file
Code:

/etc/iscsi/initiatorname.iscsi

containing the IQN of the initiator exactly as it was created in targetcli?

like
Code:

InitiatorName=iqn.2022-05.<dom>:<initiator-name>

?

I was trying to set up a Windows VM to start with, but at some point, yes, I'd also want to create a Gentoo VM, but I figure that VirtualBox would be tasked with making the OS disk iSCSI connection happen and work, rather than the VM OS itself. You have to boot up some VM OS first before it can connect to an iSCSI target, right? ;)

A direct VM iSCSI to storage connection would be a great idea for the VM's data disks, mounted after its OS is up and running, as it would bypass the VirtualBox layer and probably perform better than VirtualBox doing it (well, maybe not?). But for the OS disk, I would think that the VirtualBox iSCSI system disk attachment to the LUN would be handled by VirtualBox, wouldn't it?

Thinking a bit more on this: if a VirtualBox VM makes an OS disk I/O request, VirtualBox on the VM's host would handle it and direct it to the VM's VDI file. If the system disk is an iSCSI target, wouldn't VirtualBox instead redirect that disk I/O request to the iSCSI target's LUN and then to the configured iSCSI backstore? (Mind you, right now they are all on the same physical host - step 1 in the experimentation progression.)

alamahant wrote:

Did you set any username and password in targetcli?
In which targetcli directory exactly?
Are they also included in the VMs

/etc/iscsi/iscsid.conf
?

I decided to start with a Windows VM.
Absolutely not interested in any ACL protection of the iSCSI target at all.
This is all 'lab environment', single user (me), so I skipped that step in the configuration sequence.
Is this not optional?
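In case it is not optional, my (possibly wrong) understanding is that skipping ACLs means switching the TPG into demo mode explicitly, roughly like this in targetcli:
Code:

cd /iscsi/iqn.2022-05.<TLD>.<DOM>.192.168.2.2:iscsi-test-volume-v/tpg1
set attribute authentication=0 demo_mode_write_protect=0 generate_node_acls=1 cache_dynamic_acls=1
cd /
saveconfig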
alamahant wrote:
Is port 3260 maybe blocked by a firewall?

There is no firewall on the VM host, nor on the iSCSI storage host (the same machine right now).
Nor is there likely to ever be a firewall between systems on the same private LAN.
alamahant wrote:
Plz see
Code:

EXAMPLES
       Discover targets at a given IP address:

              sh# iscsiadm --mode discoverydb --type sendtargets --portal 192.168.1.10 --discover

       Login, must use a node record id found by the discovery:

              sh# iscsiadm --mode node --targetname iqn.2001-05.com.doe:test --portal 192.168.1.1:3260 --login

       Logout:

              sh# iscsiadm --mode node --targetname iqn.2001-05.com.doe:test --portal 192.168.1.1:3260 --logout

       List node records:

              sh# iscsiadm --mode node

       Display all data for a given node record:

              sh# iscsiadm --mode node --targetname iqn.2001-05.com.doe:test --portal 192.168.1.1:3260

from
man iscsiadm.

Yeah, I saw that after I had ended my day's experimentation, and it looks like a good way to validate that the iSCSI target is configured properly. Unfortunately, I found this bit of information after the initial iSCSI target system had to be forced into a reboot to clear the continued /var/log/messages fill-up of its system disk. That forced the experimentation to continue on a physical host which doesn't provide the storage for the rest of the household, so it can be rebooted as needed with less impact on the household. Isolate into a safe venue, kinda thing, I guess.
alamahant wrote:

Is
/etc/init.d/iscsid enabled and running in VM?
Are your VMs Gentoo?

Quote:
I’m interested in configuring a number of iSCSI targets (shares) on my Gentoo host, with a zfs volume backstore.

We have not reached at this point yet but how do you do it?
Via
backstores/fileio
?
Or via
backstores/block
?
Ok I found it
Code:

backstores/block create disk01 /dev/zvol/<pool-name>/<vol-name>
iscsi/<iqn>/tpg1/luns create /backstores/block/disk01


To summarize, here's the iSCSI target config that I had set up:
Code:

   cd /
   /> ls
   o- / ......................................................................................................................... [...]
     o- backstores .............................................................................................................. [...]
     | o- block .................................................................................................. [Storage Objects: 1]
     | | o- block .................................................... [/dev/zvol/p00/iSCSI-Test-Volume (10.0GiB) write-thru activated]
     | |   o- alua ................................................................................................... [ALUA Groups: 1]
     | |     o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
     | o- fileio ................................................................................................. [Storage Objects: 0]
     | o- pscsi .................................................................................................. [Storage Objects: 0]
     | o- ramdisk ................................................................................................ [Storage Objects: 0]
     o- iscsi ............................................................................................................ [Targets: 1]
     | o- iqn.2022-05.<TLD>.<DOM>.192.168.2.2:iscsi-test-volume-v ........................................................... [TPGs: 1]
     |   o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
     |     o- acls .......................................................................................................... [ACLs: 1]
     |     | o- iqn.2022-05.<TDL>.<DOM>.192.168.2.2:iscsi-test-volume-a .............................................. [Mapped LUNs: 1]
     |     |   o- mapped_lun0 ................................................................................. [lun0 block/block (rw)]
     |     o- luns .......................................................................................................... [LUNs: 1]
     |     | o- lun0 ............................................... [block/block (/dev/zvol/p00/iSCSI-Test-Volume) (default_tg_pt_gp)]
     |     o- portals .................................................................................................... [Portals: 1]
     |       o- 0.0.0.0:3260 ..................................................................................................... [OK]
     o- loopback ......................................................................................................... [Targets: 0]
     o- vhost ............................................................................................................ [Targets: 0]
     o- xen-pvscsi ....................................................................................................... [Targets: 0]


On the VM host, I had configured the iSCSI connection from the VM to its system disk iSCSI storage with this command line:
Code:

VBoxManage storageattach Win10_21H1-iSCSI-Test --storagectl "SATA" --port 0 --device 0 --type hdd --medium iscsi --server 192.168.2.2 --target "iqn.2022-05.<TLD>.<DOM>.192.168.2.2:iSCSI-Test-Volume-A" --tport 3260

Does the character case matter in the VBoxManage connect string?
I notice that 'iSCSI-Test-Volume-A' is not the same as 'iscsi-test-volume-a' in the mapped LUNs config.

This config is on the same host for both the iSCSI target and the VM.
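Looking at the ls output again: the target itself is stored all lowercase and ends in '-v' (the '-a' entry under acls is the initiator ACL), while my connect string used the mixed-case 'iSCSI-Test-Volume-A'. If the name has to match exactly as targetcli lists it, the attach would presumably need to be:
Code:

VBoxManage storageattach Win10_21H1-iSCSI-Test --storagectl "SATA" --port 0 --device 0 --type hdd --medium iscsi --server 192.168.2.2 --target "iqn.2022-05.<TLD>.<DOM>.192.168.2.2:iscsi-test-volume-v" --tport 3260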
eohrnberger
Apprentice

Joined: 09 Dec 2004
Posts: 240

PostPosted: Fri May 06, 2022 12:30 am    Post subject:

szatox wrote:
I've been experimenting with those things for a pretty long time (I think I did some storage over InfiniBand back then), but I recommend putting the target on a dedicated machine.
The target suite is quite troublesome during updates; the fewer packages you have there, the easier you'll deal with conflicts (or you may opt to never update this single-purpose server, if it's not exposed to the internet).


Quote:
I've tried running a VM's system disk across an NFS mount as a VDI, but this does have a performance penalty. Would this performance penalty be reduced with iSCSI? I'm guessing it might be.
Dunno. NFS with RDMA was quite amazing, fully saturating a 16Gbps IB link with data, but it's not supported over ethernet.
iSCSI has some RDMA capabilities too, you may want to look into that. Unfortunately I can't tell you more about it. Just know it exists.
Also, ethernet adapters: you didn't say what you have there, but 10Gbps adapters offer significantly lower latency than 1Gbps. 1Gbps adapters used to cause performance issues for me, even though the aggregated links were not saturated. Granted, that was with Ceph, which probably suffers from network latency more than centralized solutions do, but it's still something to keep in mind.

And one last thing, slightly off, but related: if you're running multiple compute nodes, you might want to have your storage duplicated too. So far I haven't seen any solution without some serious drawback. I've worked with well polished commercial solutions, but their drawback is way too many digits on the price tag.
Ceph does work well and is really resilient and runs over ethernet, but consumes a lot of RAM, suffers from write amplification (performance), and some parts _must_ be backed by SSD (also performance).
Either way, you may want to keep this issue in mind when designing your environment.


A fair enough observation on storage, but this 'environment' here of mine is little more than a hobbyist environment in my basement.

All I have here are slightly beefed-up consumer PCs built from COTS components, so I rather doubt that the Gig NICs I have, which come on the motherboards, are capable of doing RDMA, since they fail even with jumbo IP packets, even though the main Gig switch supports them - this found by experimentation.
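Roughly the kind of test I mean (the interface name here is just an example):
Code:

# Raise the MTU on the NIC and confirm it took
ip link set dev eth0 mtu 9000
ip link show eth0 | grep mtu

# Don't-fragment ping with a jumbo-sized payload: 8972 = 9000 - 20 (IP) - 8 (ICMP)
ping -M do -s 8972 192.168.2.2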

As to storage solutions, it has been quite an evolution.

From a variety of differently-sized hard disks cobbled together into a SW RAID 5, to an LVM2 solution with ext3, which ended up suffering from bitrot, to an XFS filesystem on LVM2, which still suffered from bitrot, to ZFS with hardware mirroring, which has proven itself (at least to me and my needs). I've replaced a failed hard disk without even bringing the storage system down: removing the failed one, installing a replacement, and having the mirror rebuild itself in the background. Quite a trick when the needed SATA ports are provided by cheap Marvell Technology Group Ltd. 88SE9235 PCIe 2.0 port replicator controllers, rather than more expensive PCIe SATA controllers which would isolate hard disk faults without disrupting the other SATA channels.

The main storage machine is still a general-purpose infrastructure services system running a number of services, including outbound email, NFS and Samba, and Galleon to talk to TiVos and download video content from them. The 'experimental' host not only serves this purpose, but it is also the main compilation host for packages (though it uses distcc to distribute that workload) as well as the binary package host for the other 2 Gentoo systems (the last being email, web, WINS, caching DNS, and running a VM for the Plex media server - content being served via NFS from the storage machine). So a 3-PC infrastructure - small-time hobbyist, as I've posted.

Each of these 3 systems has an SSD for the OS disk, which also holds the ZFS L2ARC cache file for the ZFS pool(s) it hosts.

Yeah, the infrastructure that you are talking about is way beyond what I'm prepared to dedicate to a simple hobbyist environment, which has probably gone overboard already.
;)
alamahant
Advocate

Joined: 23 Mar 2019
Posts: 3879

PostPosted: Fri May 06, 2022 5:58 pm    Post subject:

Quote:

I was trying to set up a Windows VM to start with, but at some point, yes, I'd also want to create a Gentoo VM, but I figure that VirtualBox would be tasked with making the OS disk iSCSI connection happen and work, rather than

If the VMs are on the same machine as targetcli, then use iscsiadm on the host to log in to the target and then pass the whole disk
/dev/sdx
to the VM.
This is the simplest form and I fail to see why you wish to complicate things.
If you do like complicated things then use
dracut
and kernel parameter
Code:

root=iscsi:@10.0.0.1::3260::iqn.2011-01.local:rig1

...maybe in grub.
I suppose you will need separate /boot and/or /boot/efi partitions in case of a UEFI VM.
Plz see
Code:

eqf dracut | grep iscsi
/usr/lib/dracut/modules.d/95iscsi
/usr/lib/dracut/modules.d/95iscsi/cleanup-iscsi.sh
/usr/lib/dracut/modules.d/95iscsi/iscsiroot.sh
/usr/lib/dracut/modules.d/95iscsi/module-setup.sh
/usr/lib/dracut/modules.d/95iscsi/mount-lun.sh
/usr/lib/dracut/modules.d/95iscsi/parse-iscsiroot.sh


and in dracut.conf.d/my.conf
Code:

add_dracutmodules+=" iscsi "
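
and then rebuild the initramfs (untested):
Code:

dracut --force --kver "$(uname -r)"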

Never tried it, and it will only work with Linux. I don't know how to include a userid and password in the kernel cmdline.
I don't have the slightest idea how to achieve this with a Windows VM.
Ah, and plz avoid VirtualBox. It sucks. Much better: qemu + libvirt + virt-manager.
But initially, how to install Linux on an iSCSI-disk VM??
Maybe use a live ISO that has the iscsiadm command.
Arch includes it in the ISO.
See
https://wiki.archlinux.org/title/ISCSI/Boot
_________________
:)
szatox
Advocate

Joined: 27 Aug 2013
Posts: 3137

PostPosted: Fri May 06, 2022 7:01 pm    Post subject:

Quote:
But initially, how to install Linux on an iSCSI-disk VM??

Uhm.... By mapping the disks either at the hypervisor's kernel level or at the virtualization software's level, so the disk is presented just like any other block device inside the VM?
eohrnberger
Apprentice

Joined: 09 Dec 2004
Posts: 240

PostPosted: Fri May 06, 2022 9:59 pm    Post subject:

szatox wrote:
Quote:
But initially, how to install Linux on an iSCSI-disk VM??

Uhm.... By mapping the disks either at the hypervisor's kernel level or at the virtualization software's level, so the disk is presented just like any other block device inside the VM?


That was my thinking as well. Perhaps I should consider posting this in the VirtualBox forums to see what insights that community might have.

Right now, I feel kinda 'stuck'.
alamahant
Advocate

Joined: 23 Mar 2019
Posts: 3879

PostPosted: Fri May 06, 2022 10:16 pm    Post subject:

This is easy
Easier in KVM
https://ckirbach.wordpress.com/2017/07/25/how-to-add-a-physical-device-or-physical-partition-as-virtual-hard-disk-under-virt-manager/
Tricky in Vbox?
https://www.serverwatch.com/guides/using-a-physical-hard-drive-with-a-virtualbox-vm/
Log in via iscsiadm to the target on the host machine (a concrete example for this thread's target is after these steps).
Then run lsblk
and you will see the iSCSI disks as /dev/sdx etc.
Follow the above instructions and
then install your Linux as normal.
Then in the VMs configure iSCSI boot.
And remove the iSCSI-created disks from the VMs.
Then HOPEFULLY you will boot diskless.
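For the login step, with the target from earlier in this thread, something like (assuming the portal answers on 192.168.2.2:3260):
Code:

iscsiadm --mode discoverydb --type sendtargets --portal 192.168.2.2 --discover
iscsiadm --mode node --targetname "iqn.2022-05.<TLD>.<DOM>.192.168.2.2:iscsi-test-volume-v" --portal 192.168.2.2:3260 --login
lsblk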
_________________
:)