Hi,
> This is getting expensive!
That's why I let the users do the dual-layer experiments.
> I have not used xorriso yet, because the UDF part is too
> important to me, since the files are large.
You may well combine xorriso with mkisofs
Code:
mkisofs -J -R -udf -V "Stuff" -iso-level 3 "stuff" \
| xorriso -as cdrecord -v dev=/dev/sr0 fs=128m -waiti -eject -
or you may use cdrecord by Joerg Schilling:
Code:
mkisofs -J -R -udf -V "Stuff" -iso-level 3 "stuff" \
| cdrecord -v dev=/dev/sr0 fs=128m -waiti -eject -
The two burn programs use different implementations of the SCSI task
of burning to BD-R.
(cdrskin would support these options too, but I managed to spoil
version 1.4.2 for this use case, and I am not sure whether 1.4.2.pl01
is already in Gentoo. 1.4.0 or older would be ok.)
Nevertheless, there is little reason to use UDF with large files
unless you plan to read the BDs on Solaris or a BSD Unix.
ISO 9660 may well represent files up to the maximum filesystem
size of 8 TiB. Linux shows them properly.
xorriso in -for_backup mode records MD5 checksums which enable you to
test each data file and the whole filesystem for whether they are
still intact.
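Such a session could look like this (a sketch, not a finished recipe:
the device path /dev/sr0 and the directory names are placeholders,
and you should consult the xorriso man page for the exact -check_media
and -check_md5_r options):

Code:
# Burn with MD5 recording (-for_backup enables -md5 on, among others):
xorriso -for_backup -outdev /dev/sr0 -map /home/me/stuff /stuff

# Later: checkread the whole medium against the recorded session MD5:
xorriso -for_backup -indev /dev/sr0 -check_media --

# Or test each data file against its own recorded MD5:
xorriso -for_backup -indev /dev/sr0 -check_md5_r SORRY / --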
If you plan for long term storage, then consider burning multiple
copies and checkreading them once per year. Chances are good that
the other copies are still verifiable when the first one fails.
> I will try the pre-format next..
If you did not get I/O errors when checkreading, then the failure is
not caused by bad spots on the medium. And bad spots are the only
reason why formatting and the resulting Defect Management might be
beneficial.
In practice I observe the contrary, at least with BD-RE. Many of mine
are well writable and readable if I disable Defect Management during
write. If I let the old GGC-H20 drive checkread while writing,
it slows down to 0.1x write speed. Afterwards, when checkreading,
the drive clonks miserably by hopping back and forth between the
normal data area and the Spare Area, where the reserve blocks sit.
For problem diagnosis it would be enlightening if you burn many small
files with xorriso and -for_backup MD5s. Then we could locate the
data files which fail the MD5 check and learn about the storage region
where the reader does not see what the writer wrote.
If you are willing to waste a medium, then we could make a little
program which puts out blocks of 2048 bytes, each with its own
intended block number as repeated content. If we burn this block stream
to BD, we could read it and see which blocks are not where they
should be or do not show the content which they should have.
Tell me if you want to go that way.
(I'd also take media donations and make experiments myself.)
Have a nice day
Thomas