Clarification of "partition block size" #39
Hi @claunia,

Apologies for the delayed reply. There is a little bit of confusion around which layer defines a block, which defines a sector, and how they are inherited between each other. The physical medium can define a sector of 2048 bytes, and the […] You helped me realize the […] As you noted, the […]

This Issue's resolution will go into 1.3.0. Thanks!

--Alex
If it helps, the convention I tend to use is: sector for the minimum readable entity of a device (whether physical or logical), block for a grouping of sectors by the device (an erase block on an SSD, a 1 MiB tape block, or an optical disc ECC block), and cluster for the minimum addressable entity of a volume (or partitioning scheme, so I store both).
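The three-term convention above can be sketched as a pair of small data structures. This is a minimal illustration only; the class and field names are hypothetical and not part of the DFXML schema.

```python
from dataclasses import dataclass

@dataclass
class Device:
    sector_size: int  # minimum readable entity of the device, in bytes
    block_size: int   # device-side grouping of sectors, e.g. an SSD
                      # erase block or an optical-disc ECC block

@dataclass
class Volume:
    cluster_size: int  # minimum addressable entity of the volume or
                       # partitioning scheme, in bytes

# Example: an SSD with 512-byte sectors and 128 KiB erase blocks,
# carrying a volume that allocates in 4 KiB clusters.
ssd = Device(sector_size=512, block_size=128 * 1024)
vol = Volume(cluster_size=4096)

# A volume's cluster is normally a whole number of device sectors.
assert vol.cluster_size % ssd.sector_size == 0
```

Keeping the three sizes in separate, explicitly named fields sidesteps the block/sector/cluster naming collisions discussed in this thread.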
I have a goal of releasing schema version 1.3.0 next Wednesday, so I'll try to settle this discussion soon. Sorry for stepping away; I'd replied earlier in between conference sessions. The needs I see you've presented are a size-in-bytes measurement of some "allocation unit" for these storage layers:
The DFXML schema has not to date dealt with characteristics of the physical device. I'm aware of at least one implementation that embeds physical medium information in output called DFXML, but I think that implementation predates the DFXML schema, and I also think it takes its interpretation of bytes per physical-medium sector as what is presented by a block-device file interpretation. So, ECC bytes might not be recorded. (I have not tested this.) All this is to say, we'll make sure the disk image, partition system, and file system layers record this allocation layer, but we'll wait for a testing system to demonstrate recording sector characteristics of devices. (On a "device" testing system - this might be doable with a Linux or FreeBSD virtual disk device-node.) Currently:
I'm aware of tools included in base operating system distributions that record an allocation unit for partition systems, but if I recall correctly they disagreed on a term. So this may just be a matter of picking a name. I'd prefer to avoid […] More on this after a records review for […]
And of course, under ten seconds after posting, I found my testing notes for virtual devices. I was testing how to simulate damaged disks, and found this StackOverflow thread. Looks like this testing may have to wait for another day.
Not many things have specifications, and many specifications are hard to come by, but I know the following examples.

All pure optical media use 2048 bytes per sector. To complicate things, older CD drives accept a MODE SELECT command that converts them to 512 bytes per sector (all done by firmware; the disc itself is still always 2048 bytes per sector). SGI and Sun depended on this feature to be able to boot from CD. CD Mode 2 also allows reducing the size of the ECC parts, giving 2324 or 2336 bytes per sector.

Now for streaming tapes (LTO, AIT, DDS, et al.). Most of them use 512 bytes per sector. They boot up with a default block size, and you're allowed to change it, for write or for read. A handful of them change the sector size on the media, following an algorithm in their respective specification, while most of them just fill the gaps or use several real sectors. E.g. you can have a 16 MiB sector on an LTO tape: you're expected to send that amount of data in a single WRITE(10) command, and you're given back that amount in a single READ(10) command.

To complicate matters, the Apple Partition Map includes the Driver Descriptor, which indicates the sector size of a drive. It is very common for it to indicate that a CD has a 512-byte sector. It is the responsibility of the driver to manage the conversion between the described size and the media size (as returned by the ATA Manager and SCSI Manager, respectively). To add insult to injury, HFS then has its own block size. Note the terminology used by Apple: "block" for device/driver/map, and "allocation block" for the filesystem.

In the UNIX world, the "allocation block" is called a block because historically, Version 7 and other UNIX filesystems did not support changing the drive block size. The block size indicated by the filesystem was the block size expected to be returned by the drive. SCSI, as well, calls sectors "blocks".
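The Driver Descriptor situation above, where an APM describes a CD in 512-byte sectors while the disc itself uses 2048-byte sectors, reduces to a small address conversion that the driver performs. A minimal sketch, with an illustrative function name (the real Mac OS driver interface is not shown here):

```python
def logical_to_media(logical_sector: int,
                     logical_size: int = 512,
                     media_size: int = 2048):
    """Map a 512-byte logical sector (as the Driver Descriptor
    describes the device) to a (media_sector, byte_offset) pair on
    the 2048-bytes-per-sector disc. Illustrative only."""
    byte_offset = logical_sector * logical_size
    return byte_offset // media_size, byte_offset % media_size

# Logical sector 5 covers bytes 2560..3071, which land in media
# sector 1 at byte offset 512.
assert logical_to_media(5) == (1, 512)
# Four 512-byte logical sectors fit exactly in one media sector.
assert logical_to_media(4) == (1, 0)
```

The same arithmetic applies to tape drives emulating a fixed block size over variable-length records: the firmware, not the medium, defines the unit the host sees.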
Outside of UNIX, DOS used sectors and then clusters, changing the cluster size as appropriate to fit more into its 16-bit table. Atari, using the same table, changed the sector size while keeping the cluster size. Both are hardcoded to 512-bytes-per-sector devices (Atari's GEMDOS does the multi-read/write), and the cluster is measured in "sectors per" instead of "bytes per".

Nowadays, for flash media, the units are called blocks and cells, with "block" and "sector" treated as interchangeable terms, except when they're not (more confusing). This is usually all hidden behind a Flash Translation Layer that lets you read/write in 512-byte or 4096-byte blocks/sectors. And then we come to "AF" (Advanced Format) and other monikers for what has basically gone unnamed every time before. Reading/writing over the bus has nothing to do with reading/writing on the device, bringing us logical and physical sectors/blocks (depending on which specification you read). LVMs also bring their own sizes to the table.

So, in a nutshell, the ideal would be to record the size medium-wise (if we know it), the size returned/expected by the firmware, the size as defined by the partition scheme or volume manager, and the size as defined by the filesystem itself. What to call all these sizes is up to you; UNIX historically calls them all "block". So, in my opinion, don't try to follow what has been done historically, as no consensus was ever reached; just choose the names you find most coherent and least confusing when putting all of them (read, write, erase, firmware, partition, volume manager, filesystem volume) on the same table.
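The "record every layer's size separately" proposal above could be modeled as a single record with one optional field per layer. This is a hypothetical sketch, not the DFXML schema's actual element names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AllocationSizes:
    """One byte-count per storage layer; None means the layer did not
    report (or does not define) an allocation unit."""
    medium_bytes: Optional[int] = None      # physical medium sector
    firmware_bytes: Optional[int] = None    # size returned/expected by firmware
    partition_bytes: Optional[int] = None   # partition scheme / volume manager
    filesystem_bytes: Optional[int] = None  # filesystem allocation unit

# The CD example from this thread: a 2048-bytes-per-sector disc that
# firmware presents as 512-byte sectors after MODE SELECT.
cd = AllocationSizes(medium_bytes=2048, firmware_bytes=512)
```

Because each layer gets its own field, no single term ("block", "sector", "cluster") has to cover all of them, which is exactly the ambiguity this Issue is about.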
Hi,
In https://github.com/dfxml-working-group/dfxml_schema/blob/master/dfxml.xsd#L119
I think you need to be more clear.
For example: an HFS volume can have 1536 bytes per allocation block, in a partition belonging to an APM with 512 bytes per block, on a disc with 2048 bytes per sector.
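The three nested sizes in that example compose as plain arithmetic: an HFS allocation-block number becomes a byte offset via the APM's 512-byte blocks, which then maps onto the disc's 2048-byte sectors. A small sketch with illustrative names and made-up partition placement:

```python
HFS_BLOCK = 1536    # bytes per HFS allocation block (example value)
APM_BLOCK = 512     # bytes per block as the APM describes the device
DISC_SECTOR = 2048  # bytes per sector on the optical disc itself

def hfs_block_to_disc(alloc_block: int, partition_start_apm_block: int):
    """Return (disc_sector, byte_offset) for an HFS allocation block in
    a partition that starts at the given APM block. Illustrative only;
    real HFS also offsets by the volume's allocation-block start."""
    byte_offset = (partition_start_apm_block * APM_BLOCK
                   + alloc_block * HFS_BLOCK)
    return byte_offset // DISC_SECTOR, byte_offset % DISC_SECTOR

# Allocation block 3 of a partition at APM block 64:
# 64*512 + 3*1536 = 37376 bytes -> disc sector 18, offset 512.
assert hfs_block_to_disc(3, 64) == (18, 512)
```

Each layer's unit is valid on its own terms, which is why a schema element named just "block size" is ambiguous here.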