Gentoo Forums
What are "indirect-X" after zpool remove?
midnite
Guru

Joined: 09 Apr 2006
Posts: 435
Location: Hong Kong

Posted: Fri Jun 10, 2022 11:04 am    Post subject: What are "indirect-X" after zpool remove?

After zpool remove, the removed disks are shown as indirect-0, indirect-1, indirect-2, and so forth.

- What are they?
- How can I get rid of them?
- Will they pose any problems?

Demonstration:

We start with two disks, remove one of them, then add a new disk back into the pool.

Code:
# truncate -s 1G raid{0,1}_{1..4}.img
# ls /tmp/test_zfs_remove/
raid0_1.img  raid0_2.img  raid0_3.img  raid0_4.img  raid1_1.img  raid1_2.img  raid1_3.img  raid1_4.img

# zpool create jbod /tmp/test_zfs_remove/raid0_{1,2}.img
# zpool status -v ; zpool list -v
  pool: jbod
 state: ONLINE
config:

        NAME                                STATE     READ WRITE CKSUM
        jbod                                ONLINE       0     0     0
          /tmp/test_zfs_remove/raid0_1.img  ONLINE       0     0     0
          /tmp/test_zfs_remove/raid0_2.img  ONLINE       0     0     0

errors: No known data errors
NAME                                 SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
jbod                                1.88G   110K  1.87G        -         -     0%     0%  1.00x    ONLINE  -
  /tmp/test_zfs_remove/raid0_1.img   960M    54K   960M        -         -     0%  0.00%      -    ONLINE
  /tmp/test_zfs_remove/raid0_2.img   960M  55.5K   960M        -         -     0%  0.00%      -    ONLINE


Now we zpool remove disk 2. Note that indirect-1 shows up.

Code:
# zpool remove jbod /tmp/test_zfs_remove/raid0_2.img
# zpool status -v ; zpool list -v
  pool: jbod
 state: ONLINE
remove: Removal of vdev 1 copied 49.5K in 0h0m, completed on Fri Jun 10 02:03:59 2022
        144 memory used for removed device mappings
config:

        NAME                                STATE     READ WRITE CKSUM
        jbod                                ONLINE       0     0     0
          /tmp/test_zfs_remove/raid0_1.img  ONLINE       0     0     0

errors: No known data errors
NAME                                 SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
jbod                                 960M   148K   960M        -         -     0%     0%  1.00x    ONLINE  -
  /tmp/test_zfs_remove/raid0_1.img   960M   148K   960M        -         -     0%  0.01%      -    ONLINE
  indirect-1                            -      -      -        -         -      -      -      -    ONLINE

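To dig into what indirect-1 actually is, the pool configuration can also be dumped with zdb. This is only a sketch of commands I would try (output not shown); my understanding, which I have not double-checked, is that the removed device stays in the vdev tree as an 'indirect' vdev.

Code:
# zpool status -v jbod    # the "remove:" line shows the memory used for removed device mappings
# zdb -C jbod             # dump the cached pool config; the removed device should appear as an 'indirect' vdev
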

Then we zpool add disk 3. indirect-1 does not go away.

Code:
# zpool add jbod /tmp/test_zfs_remove/raid0_3.img
# zpool status -v ; zpool list -v
  pool: jbod
 state: ONLINE
remove: Removal of vdev 1 copied 49.5K in 0h0m, completed on Fri Jun 10 02:03:59 2022
        144 memory used for removed device mappings
config:

        NAME                                STATE     READ WRITE CKSUM
        jbod                                ONLINE       0     0     0
          /tmp/test_zfs_remove/raid0_1.img  ONLINE       0     0     0
          /tmp/test_zfs_remove/raid0_3.img  ONLINE       0     0     0

errors: No known data errors
NAME                                 SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
jbod                                1.88G   222K  1.87G        -         -     0%     0%  1.00x    ONLINE  -
  /tmp/test_zfs_remove/raid0_1.img   960M   222K   960M        -         -     0%  0.02%      -    ONLINE
  indirect-1                            -      -      -        -         -      -      -      -    ONLINE
  /tmp/test_zfs_remove/raid0_3.img   960M      0   960M        -         -     0%  0.00%      -    ONLINE


Even if we zpool remove disk 3 and then immediately zpool add the same disk back, another entry, indirect-2, shows up as well.

Code:
# zpool remove jbod /tmp/test_zfs_remove/raid0_3.img
# zpool add jbod /tmp/test_zfs_remove/raid0_3.img
# zpool status -v ; zpool list -v
  pool: jbod
 state: ONLINE
remove: Removal of vdev 2 copied 11.5K in 0h0m, completed on Fri Jun 10 02:09:35 2022
        240 memory used for removed device mappings
config:

        NAME                                STATE     READ WRITE CKSUM
        jbod                                ONLINE       0     0     0
          /tmp/test_zfs_remove/raid0_1.img  ONLINE       0     0     0
          /tmp/test_zfs_remove/raid0_3.img  ONLINE       0     0     0

errors: No known data errors
NAME                                 SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
jbod                                1.88G   279K  1.87G        -         -     0%     0%  1.00x    ONLINE  -
  /tmp/test_zfs_remove/raid0_1.img   960M   260K   960M        -         -     0%  0.02%      -    ONLINE
  indirect-1                            -      -      -        -         -      -      -      -    ONLINE
  indirect-2                            -      -      -        -         -      -      -      -    ONLINE
  /tmp/test_zfs_remove/raid0_3.img   960M  18.5K   960M        -         -     0%  0.00%      -    ONLINE


What makes things more worrying is that if a disk is degraded (simulated here with zpool offline -f) before it is removed, the DEGRADED status does not go away.

Code:
# zpool add jbod /tmp/test_zfs_remove/raid0_4.img
# zpool offline -f jbod /tmp/test_zfs_remove/raid0_4.img
# zpool remove jbod /tmp/test_zfs_remove/raid0_4.img
# zpool status -v ; zpool list -v
  pool: jbod
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
remove: Removal of vdev 4 copied 19K in 0h0m, completed on Fri Jun 10 02:16:49 2022
        336 memory used for removed device mappings
config:

        NAME                                STATE     READ WRITE CKSUM
        jbod                                DEGRADED     0     0     0
          /tmp/test_zfs_remove/raid0_1.img  ONLINE       0     0     0
          /tmp/test_zfs_remove/raid0_3.img  ONLINE       0     0     0

errors: No known data errors
NAME                                 SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
jbod                                1.88G   308K  1.87G        -         -     0%     0%  1.00x  DEGRADED  -
  /tmp/test_zfs_remove/raid0_1.img   960M   154K   960M        -         -     0%  0.01%      -    ONLINE
  indirect-1                            -      -      -        -         -      -      -      -    ONLINE
  indirect-2                            -      -      -        -         -      -      -      -    ONLINE
  /tmp/test_zfs_remove/raid0_3.img   960M   154K   960M        -         -     0%  0.01%      -    ONLINE
  indirect-4                            -      -      -        -         -      -      -      -  DEGRADED

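Following the action: hint in the status output above, zpool clear is the obvious thing to try against the lingering DEGRADED state. A sketch only; I have not verified whether it actually returns the indirect-4 entry to ONLINE:

Code:
# zpool clear jbod
# zpool status -v ; zpool list -v    # check whether indirect-4 is still DEGRADED
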

Side Note:

  • If a disk is not degraded, zpool remove will not cause data loss. Although this is a RAID0/JBOD setup without redundancy, the data is copied to the remaining disks when the zpool remove command is issued. If the remaining disks do not have enough space to hold all the data, a "cannot remove (disk): out of space" error is printed and the disk cannot be removed.

  • Using zpool replace (rather than zpool remove followed by zpool add) does not create these weird indirect-X entries, but it has a drawback: the new disk must be equal to or larger than the disk it replaces. By contrast, we can zpool remove a larger disk and then zpool add a smaller one. (A rough sketch of both points follows after this list.)

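A rough, untested sketch of both points, reusing the jbod pool from above; raid0_5.img is a hypothetical new file vdev of the same 1G size:

Code:
# zpool list -v jbod                                  # before removing: enough FREE on the remaining disks to hold this disk's ALLOC?
# truncate -s 1G /tmp/test_zfs_remove/raid0_5.img     # hypothetical replacement disk, equal size
# zpool replace jbod /tmp/test_zfs_remove/raid0_3.img /tmp/test_zfs_remove/raid0_5.img
# zpool status -v jbod                                # resilvers in place; no new indirect-X entry expected
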

One more side demonstration: zpool detach followed by zpool attach on a mirror does not produce the weird indirect-X entries either.

Code:
# zpool destroy jbod
# zpool create mirr mirror /tmp/test_zfs_remove/raid1_1.img /tmp/test_zfs_remove/raid1_2.img
# zpool detach mirr /tmp/test_zfs_remove/raid1_2.img
# zpool attach mirr /tmp/test_zfs_remove/raid1_1.img /tmp/test_zfs_remove/raid1_3.img
# zpool detach mirr /tmp/test_zfs_remove/raid1_3.img
# zpool attach mirr /tmp/test_zfs_remove/raid1_1.img /tmp/test_zfs_remove/raid1_3.img
# zpool attach mirr /tmp/test_zfs_remove/raid1_1.img /tmp/test_zfs_remove/raid1_4.img
# zpool offline -f mirr /tmp/test_zfs_remove/raid1_4.img
# zpool status -v ; zpool list -v
  pool: mirr
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: resilvered 291K in 00:00:00 with 0 errors on Fri Jun 10 02:47:06 2022
config:

        NAME                                  STATE     READ WRITE CKSUM
        mirr                                  DEGRADED     0     0     0
          mirror-0                            DEGRADED     0     0     0
            /tmp/test_zfs_remove/raid1_1.img  ONLINE       0     0     0
            /tmp/test_zfs_remove/raid1_3.img  ONLINE       0     0     0
            /tmp/test_zfs_remove/raid1_4.img  FAULTED      0     0     0  external device fault

errors: No known data errors
NAME                                   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
mirr                                   960M   200K   960M        -         -     0%     0%  1.00x  DEGRADED  -
  mirror-0                             960M   200K   960M        -         -     0%  0.02%      -  DEGRADED
    /tmp/test_zfs_remove/raid1_1.img      -      -      -        -         -      -      -      -    ONLINE
    /tmp/test_zfs_remove/raid1_3.img      -      -      -        -         -      -      -      -    ONLINE
    /tmp/test_zfs_remove/raid1_4.img      -      -      -        -         -      -      -      -   FAULTED

# zpool detach mirr /tmp/test_zfs_remove/raid1_4.img
# zpool status -v ; zpool list -v
  pool: mirr
 state: ONLINE
  scan: resilvered 291K in 00:00:00 with 0 errors on Fri Jun 10 02:47:06 2022
config:

        NAME                                  STATE     READ WRITE CKSUM
        mirr                                  ONLINE       0     0     0
          mirror-0                            ONLINE       0     0     0
            /tmp/test_zfs_remove/raid1_1.img  ONLINE       0     0     0
            /tmp/test_zfs_remove/raid1_3.img  ONLINE       0     0     0

errors: No known data errors
NAME                                   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
mirr                                   960M   196K   960M        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                             960M   196K   960M        -         -     0%  0.01%      -    ONLINE
    /tmp/test_zfs_remove/raid1_1.img      -      -      -        -         -      -      -      -    ONLINE
    /tmp/test_zfs_remove/raid1_3.img      -      -      -        -         -      -      -      -    ONLINE

_________________
- midnite.
midnite
Guru

Joined: 09 Apr 2006
Posts: 435
Location: Hong Kong

Posted: Mon Jun 20, 2022 11:13 pm

The zpool features device_removal and obsolete_counts may give some insight into this issue:
https://openzfs.github.io/openzfs-docs/man/7/zpool-features.7.html
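
For reference, a quick way to check the state of these features on a pool (a sketch against the test pool name jbod from the first post):

Code:
# zpool get feature@device_removal,feature@obsolete_counts jbod
# zpool get all jbod | grep feature@    # or list every feature flag at once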
_________________
- midnite.