Files
Before discussing file systems, it makes sense to discuss files. What is a file? Think back to the beginning of the semester when we discussed the first of many abstractions -- the process. We said that a process is an abstraction for a unit of work to be managed by the operating system on behalf of some user (a person, an agent, or some aspect of the system). We said that the PCB was an operating system data structure that represented a process within the operating system.

Similarly, a file is an abstraction. It is a collection of data that is organized by its users. The data within the file isn't necessarily meaningful to the OS; the OS may not know how it is organized -- or even why some collection of bytes has been organized together as a file. Nonetheless, it is the job of the operating system to provide a convenient way of locating each such collection of data and manipulating it, while protecting it from unintended or malicious damage by those who should not have access to it, and ensuring its privacy, as appropriate.
Introduction: File Systems
A file system is nothing more than the component of the operating system charged with managing files. It is responsible for interacting with the lower-level I/O subsystem used to access the file data, as well as managing the files themselves and providing the API by which application programmers can manipulate them.
Factors In Filesystem Design
- naming
- operations
- storage layout
- failure resilience
- efficiency (lost space is not recovered when a process ends, as it is with RAM; the penalty for each access is also much higher -- by a factor of roughly 10^6)
- sharing and concurrency
- protection
Naming
The simplest type of naming scheme is a flat space of objects. In this model, there are only two real issues: naming and aliasing.
Naming involves:
- Syntax/format of names
- Legal characters
- Upper/lower case issues
- Length of names
- &c
Aliasing
Aliasing is the ability to have more than one name for the same file. If aliasing is to be permitted, we must determine what types are allowed. It is useful for several reasons:
- Some programs expect certain names. Sharing the name between two such programs is painful without aliasing.
- Manual version management: foo.1, foo.2, foo.3
- Convenience for the user -- a file can appear in several places, near the other things that relate to it.
There are two basic types:
- early binding: the target of the name is determined at the time the link is created. In UNIX aliases that are bound early are called hard links.
- late binding: The target is redetermined on each use, not once. That is to say the target is bound to the name every time it is used. In UNIX aliases that are bound late are called soft links or symbolic links. Symbolic links can dangle, that is to say that they can reference an object that has been destroyed.
In order to implement hard links, we must have low level names.
- invariant across renaming
- no aliasing of low level names
- each file has exactly one low-level name and at least one high-level name (the link count is the number of high-level names associated with a single low-level name)
- the OS must ensure that the link count is 0 before removing a file (and no one can have it open)
UNIX has low-level names; they are called inodes. The pair (device number, inode #) is unique. The inode also serves as the data structure that represents the file within the OS, keeping track of all of its metadata. In contrast, MS-DOS uniquely names files by their location on disk -- this scheme does not allow for hard links.
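As a concrete illustration, here is a hedged sketch using standard POSIX calls (the file names are made up, and "data.txt" is assumed to already exist). It creates one hard link and one symbolic link, then prints the link count that stat() reports:

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main (void) {
    struct stat info;

    link ("data.txt", "data-alias.txt");      /* hard link: early binding, same inode */
    symlink ("data.txt", "data-symlink.txt"); /* symbolic link: late binding, may dangle */

    stat ("data.txt", &info);
    /* st_nlink counts the high-level names bound to this low-level name (inode) */
    printf ("inode %lu has %lu hard links\n",
            (unsigned long) info.st_ino, (unsigned long) info.st_nlink);

    return 0;
}

After this runs, removing "data.txt" leaves the data reachable through "data-alias.txt" (link count 1), while "data-symlink.txt" dangles.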
Hierarchical Naming
Real systems use hierarchical names, not flat names. The reason for this relates to scale. The human mind copes with large scale in a hierarchical fashion. It is essentially a human cognitive limitation: we deal with large numbers of things by categorizing them. Every large human organization is hierarchical: armies, companies, churches, etc.
Furthermore, too many names are hard to remember and it can be hard to generate unique names.
With a hierarchical name space, only a small fraction of the full namespace is visible at any level. Internal nodes are directories and leaf nodes are files. The pathname is a representation of the path from the root of the tree to the leaf node.
The process of translating a pathname is known as name resolution. We must translate the pathname one step at a time to allow for symbolic links.
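A minimal sketch of component-at-a-time resolution follows; the lookup_in_directory() helper is hypothetical, and real resolution must also check permissions, follow symbolic links, and handle mount points at each step:

#include <string.h>

/* hypothetical: returns the low-level name (inode #) of 'name' within
   directory 'dir_ino', or -1 if it does not exist */
extern long lookup_in_directory (long dir_ino, const char *name);

long resolve (long root_ino, const char *pathname) {
    char copy[4096];
    char *component;
    long current = root_ino;

    strncpy (copy, pathname, sizeof (copy) - 1);
    copy[sizeof (copy) - 1] = '\0';

    /* walk the path one component at a time */
    for (component = strtok (copy, "/"); component != NULL;
         component = strtok (NULL, "/")) {
        current = lookup_in_directory (current, component);
        if (current < 0)
            return -1;   /* component not found */
    }
    return current;      /* low-level name of the final component */
}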
Every process is associated with a current directory; its low-level name is resolved and recorded when the process calls chdir(). If we follow a symbolic link to a location and try to "cd ..", we won't follow the symbolic link back to our original location -- the system doesn't remember how we got there, it takes us to the parent directory.
The ".." relationship superimposes a Directed Acyclic Graph (DAG) onto the directory structure, which may contain cycles via links.
Have you ever seen duplicate listings for the same page in Web search engines? This is because it is impossible to impose a DAG onto Web space -- not only is it not a DAG on any level, it is very highly connected.
Each directory is created with two implicit components
- "." and ".."
- path to the root is obtained by travelling up ".."
- getwd() and pwd (shell) report the current directory
- "." allows you to supply current working directory to system calls without calling getwd() first
- relative names remain valid, even if the entire tree is relocated
- "." and ".." are the same only for the root directory
Directory Entries
What exactly is inside of each directory entry aside from the file or directory name?
UNIX directory entries are simple: a name and an inode #. The inode contains all of the metadata about the file -- everything you see when you type "ls -l". It also contains the information about where (which sectors) on disk the file is stored.
MS-DOS directory entries are much more complex. They actually contain the metadata about the file (a struct sketch follows the list):
- name -- 8 bytes
- extension -- 3 bytes
- attributes (file/directory/volume label, read-only/hidden/system) -- 1 byte
- reserved -- 10 bytes (used by OS/2 and Windows 9x)
- time -- 2 bytes
- date -- 2 bytes
- cluster # -- 2 bytes (more soon)
- size -- 4 bytes
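That layout can be sketched as a C struct (the field names are mine, and a real implementation would also have to worry about byte order and structure packing):

#include <stdint.h>

/* one 32-byte MS-DOS directory entry, as described above */
struct dos_dirent {
    char     name[8];        /* name, padded with spaces                          */
    char     extension[3];   /* extension                                         */
    uint8_t  attributes;     /* file/directory/volume label, r/o, hidden, system  */
    uint8_t  reserved[10];   /* used by OS/2 and Windows 9x                       */
    uint16_t time;           /* last-modified time                                */
    uint16_t date;           /* last-modified date                                */
    uint16_t first_cluster;  /* low-level name: the first cluster #               */
    uint32_t size;           /* file size in bytes                                */
};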
Unix keeps similar information in the inode. We'll discuss the inode in detail very soon.
File System Operations
File system operations generally fall into one of three categories:
- Directory operations modify the names space of files. Examples include mkdir(), rename(), creat(), mount(), link() and unlink()
- File operations obtain or modify the characteristics of objects. Examples include stat(), chmod(), and chown().
- I/O operations access the contents of a file. I/O operations, unlike file operations, modify the actual contents of the file, not the metadata associated with the file. Examples include read(), write(), and lseek(). These operations typically take much longer than the other two. That is to say, applications spend much more time per byte of data performing I/O operations than directory operations or file operations.
From open() to the inode
The operating system maintains two data structures representing the state of open files: the per-process file descriptor table and the system-wide open file table.
When a process calls open(), a new entry is created in the open file table. A pointer to this entry is stored in the process's file descriptor table. The file descriptor table is a simple array of pointers into the open file table. We call the index into the file descriptor table a file descriptor. It is this file descriptor that is returned by open(). When a process accesses a file, it uses the file descriptor to index into the file descriptor table and locate the corresponding entry in the open file table.
The open file table contains several pieces of information about each file (sketched in code after the list):
- the current offset (the next position to be accessed in the file)
- a reference count (we'll explain below in the section about fork())
- the file mode (permissions),
- the flags passed into the open() (read-only, write-only, create, &c),
- a pointer to an in-RAM version of the inode (a slightly lighter-weight version of the inode for each open file is kept in RAM -- others stay on disk), and
- A pointer to the structure containing pointers to the functions that implement the behaviors like read(), write(), close(), lseek(), &c on the file system that contains this file. This is the same structure we looked at last week when we discussed the file system interface to I/O devices.
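The relationship can be sketched roughly like this; the names, field choices, and the fixed table size are mine for illustration, not any real kernel's:

struct inode;      /* in-RAM inode, declared elsewhere                       */
struct file_ops;   /* table of function pointers: read(), write(), &c        */

/* one entry in the system-wide open file table */
struct open_file {
    long             offset;     /* current position in the file                         */
    int              ref_count;  /* how many descriptors reference this entry (fork())   */
    int              flags;      /* flags passed to open(): O_RDONLY, O_CREAT, &c        */
    struct inode    *inode;      /* in-RAM copy of the inode                             */
    struct file_ops *ops;        /* read(), write(), lseek(), &c for this file system    */
};

/* per-process table: a file descriptor is just an index into this array */
struct fd_table {
    struct open_file *fds[64];   /* fds[fd] points into the open file table */
};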
Each entry in the open file table maintains its own read/write pointer for three important reasons:
- Reads by one process don't affect the file position in another process
- Writes are visible to all processes, if the file pointer subsequently reaches the location of the write
- The program doesn't have to supply this information each call.
One important note: in modern operating systems, the "open file table" is usually a doubly linked list, not a static table. This keeps it a reasonable size in the common case while still accommodating workloads that use massive numbers of files.
Session Semantics
Consider the cost of performing many reads or writes to one file.
- Each operation could require pathname resolution, protection checking, &c.
- Implicit information, such as the current location (offset) into the file must be maintained,
- Long term state must also be maintained, especially in light of the fact that several processes using the file might require different views.
- Caches or buffers may need to be initialized.

The solution is to amortize the cost of this overhead over many operations by viewing operations on a file as occurring within a session. open() creates a session and returns a handle; close() ends the session and destroys the state. The overhead can be paid once and shared by all operations.
Consequences of Fork()ing
In the absence of fork(), there is a one-to-one mapping from the file descriptor table to the open file table. But fork introduces several complications, since the parent task's file descriptor table is cloned. In other words, the child process inherits all of the parent's file descriptors -- but new entries are not created in the system-wide open file table.
One interesting consequence of this is that reads and writes in one process can affect another process. If the parent reads or writes, it will move the offset pointer in the open file table entry -- this will affect the parent and all children. The same is of course true of operations performed by the children.
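A small demonstration of the shared offset, under the assumption that "data.txt" already exists and holds at least a couple of bytes:

#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main (void) {
    char byte;
    int fd = open ("data.txt", O_RDONLY);

    if (fork () == 0) {         /* child: shares the parent's open file table entry */
        read (fd, &byte, 1);    /* advances the shared offset to 1 */
        return 0;
    }

    wait (NULL);                /* let the child read first */
    read (fd, &byte, 1);        /* parent reads byte 1, not byte 0 */
    printf ("parent sees offset %ld after its read\n",
            (long) lseek (fd, 0, SEEK_CUR));   /* prints 2, not 1 */
    close (fd);
    return 0;
}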
What happens when the parent or child closes a shared file descriptor?
- remember that open file table entries contain a reference count.
- this reference count is decremented by a close
- the file's storage is not reclaimed as long as the reference count is non-zero indicating that an open file entry to it exists
- once the reference count reaches zero, the storage can be reclaimed
- i.e., "rm" may reduce the link count to 0, but the file hangs around until all "opens" are matched by "closes" on that file.
Why clone the file descriptors on fork()?
- it is consistent with the notion of fork creating an exact copy of the parent
- it allows the use of anonymous files by children. They never need to know the names of the files they are using -- in fact, the files may no longer have names.
- The most common use of this involves the shell's implementation of I/O redirection (< and >). Remember doing this?
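A rough sketch of how a shell might implement "prog > out.txt" (error handling omitted; the program name is made up):

#include <fcntl.h>
#include <sys/wait.h>
#include <unistd.h>

int main (void) {
    if (fork () == 0) {
        /* child: replace stdout with the output file before exec()ing */
        int fd = open ("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0666);
        dup2 (fd, STDOUT_FILENO);                /* descriptor 1 now refers to out.txt */
        close (fd);
        execlp ("prog", "prog", (char *) NULL);  /* prog writes to fd 1 as usual       */
        _exit (1);                               /* only reached if exec fails         */
    }
    wait (NULL);
    return 0;
}

The exec()'d program never learns the name "out.txt" -- it simply inherits an open descriptor.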
Memory-Mapped Files
Earlier this semester, we got off on a bit of a tangent and discussed memory-mapped I/O. I promised we'd touch on it again -- and now seems like a good time, since we just talked about how the file system maintains and accesses files. Remember that it is actually possible to hand a file over to the VMM and ask it to manage it, as if it were backing store for virtual memory. If we do this, we only use the file system to set things up -- and then, only to name the file. When a page is accessed, a page fault will occur, and the page will be read into a physical frame. The access to the data in the file is conducted as if it were an access to data in the backing store. The contents of the file are then accessed via an address in virtual memory. The file can be viewed as an array of chars, ints, or any other primitive variable or struct.
Only those pages that are actually used are read into memory. The pages are cached in physical memory, so frequently accessed pages will not need to be read from external storage on each access. It is important to realize that the placement and replacement of the pages of the file in physical memory competes with the pages from other memory-mapped files and those from other virtual memory sources like program code, data, &c, and is subject to the same placement/replacement scheme.
As is the case with virtual memory, changes are written upon page-out and unmodified pages do not require a page-out.
The system call to memory map a file is mmap(). It returns a pointer to the file. The pages of the file are faulted in as is the case with any other pages of memory. This call takes several parameters. See "man mmap" for the full details. But a simplified version is this:
void *mmap (int fd, int flags, int protection)

The file descriptor is associated with an already open file. In this way the file system does the work of locating the file. Protection specifies the usual type of stuff: readable, writable, executable, &c. The flags are something new.
Consider what happens if multiple processes are using a memory-mapped file. Can they both share the same page? What if one of them changes a page? Will each see it?
MAP_PRIVATE ensures that pages are duplicated on write, ensuring that the calling process cannot affect another process's view of the file. MAP_SHARED does not force the duplication of dirty pages -- this implies that changes are visible to all processes.
A memory mapped file is unmapped upon a call to munmap(). This call destroys the memory mapping of a file, but it should still be closed using close() (Remember -- it was opened with open()). A simplified interface follows. See "man munmap" for the full details.
int munmap (void *address) // address was returned by mmap

If we want to ensure that changes to a memory-mapped file have been committed to disk, instead of waiting for a page-out, we can call msync(). Again, this is a bit simplified -- there are a few options. You can see "man msync" for the details.
int msync (void *address)
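For example, here is a hedged sketch that maps a file of integers and treats it as an ordinary array. It uses the full interfaces from the man pages rather than the simplified signatures above, and it assumes "numbers.bin" exists and holds at least two raw ints:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main (void) {
    struct stat info;
    int fd = open ("numbers.bin", O_RDWR);

    fstat (fd, &info);
    int *numbers = mmap (NULL, info.st_size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);

    numbers[0] = 42;                          /* dirties the first page             */
    printf ("%d\n", numbers[1]);              /* pages fault in only as touched     */

    msync (numbers, info.st_size, MS_SYNC);   /* force the change to disk now       */
    munmap (numbers, info.st_size);
    close (fd);
    return 0;
}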
Cost of Memory Mapped Access To Files
Memory mapping files reduces the cost of accessing files imposed by the need for traditional access to copy the data first from the device into system space and then from system space into user space. But it does come at another, somewhat interesting cost. Since the file is being memory mapped into the VM space, it is competing with regular memory pages for frames. That is to say that, under sufficient memory pressure, access to a memory-mapped file can force the VMM to push a page of program text, data, or stack off to disk.
Now, let's consider the cost of a copy. Consider for example, this "quick and dirty" copy program:
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main (int argc, char *argv[]) {
    int fd_source;
    int fd_dest;
    struct stat info;
    unsigned char *data;

    fd_source = open (argv[1], O_RDONLY);
    fd_dest = open (argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0666);

    fstat (fd_source, &info);

    /* map the entire source file, read-only */
    data = mmap (0, info.st_size, PROT_READ, MAP_SHARED, fd_source, 0);

    /* each page of the source is faulted in as write() touches it */
    write (fd_dest, data, info.st_size);

    munmap (data, info.st_size);
    close (fd_source);
    close (fd_dest);

    return 0;
}

Notice that in copying the file, the file is viewed as a collection of pages and each page is mapped into the address space. As the write() writes the file, each page, individually, will be faulted into physical memory. Each page of the source file will only be accessed once. After that, the page won't be used again.
The unfortunate thing is that these pages can force pages that are likely to be used out of memory -- even, for example, the text area of the copy program. The observation is that memory mapping files is best for small files, or those (or parts) that will be frequently accessed.
Storage Management
The key problems of storage management include:
- Media independence (floppy, CD-ROM, disks, &c)
- Efficient space utilization (minimize overhead)
- Growth -- both within a file and the creation of new files
These problems are different in several ways from the problems we encountered in memory management:
- I/O devices are so slow that the overhead of accessing more complex data structures like linked lists can be overlapped with the I/O operation itself. This makes the cost of this type of CPU/main memory operation almost zero.
- Logical contiguity need not imply physical contiguity. The bytes of a file may be stored any number of ways in physical media, yet we view them in order and contiguously.
- The equivalent of the virtual memory page table can effectively be stored entirely on disk and requires no hardware support
Blocks and Fragmentation
During our discussion of memory management, we said that a byte was the smallest addressable unit of memory. But our memory management systems created larger and more convenient memory abstractions -- pages and/or segments. The file system will employ similar medicine.
Although the sector is the smallest addressable unit in hardware, the file system manages storage in units of multiple sectors. Different operating systems give this unit a different name. CP/M called it an extent. MS-DOS called it a cluster. UNIX systems generally call it a block. We'll follow the UNIX nomenclature and call it a block. But regardless of what we call it, in some sense it becomes a logical sector. Except when interacting with the hardware, the operating system will perform all operations on whole blocks.
Internal fragmentation results from allocating storage in whole-block units -- even when less storage is requested. But, much as was the case with RAM, this approach avoids external fragmentation.
Key Differences from RAM Storage Management
- The variance in the size among files is much greater than the variance of the sizes among processes. This places additional demands on the data structures used to manage this space.
- Persistent storage implies persistent mistakes. The occasional memory bug can't be solved by "rebooting the file system."
- Disk access (and access to most other media) is much slower than RAM access -- this gives us more CPU time to make decisions.
Storage Allocation
Now that we've considered the role of the file system and the characteristics of the media that it manages, let's consider storage allocation. During this discussion we will consider several different policies and data structures used to decide which disk blocks are allocated to a particular file.
Contiguous Allocation
Please think back to our discussion of memory management techniques. We began with a simple proposal. We suggested that each unit of data could be stored contiguously in physical memory. We suggested that this approach could be managed using a free list, a placement policy such as first-fit, and storage compaction.
This simple approach is applicable to a file system. But, unfortunately, it suffers from the same fatal shortcomings:
- External fragmentation would result from small, unallocatable "left over" blocks.
- Solving external fragmentation using compaction is possible and wouldn't suffer from the aliasing problems that plague RAM addressing. But, unfortunately, the very slow speed of most non-volatile media, such as disks, makes this approach too expensive to be viable.
- File growth would create a mess, because it might require relocating the entire file.
- (Write-once media, such as CD-ROMs, are somewhat of an exception).
Linked Lists
In order to eliminate the external fragmentation problem, we need to break the association between physical contiguity and logical contiguity -- we have to gain the ability to satisfy a request with non-adjacent blocks, while preserving the illusion of contiguity. To accomplish this we need a data structure that stores the information about the logical relationship among the disk blocks. This data structure must answer the question: which physical blocks are logically adjacent to each other? In many ways, this is the same problem that we had in virtual memory -- we're trying to establish a virtual file address space for each file, much like we did a virtual address space for each process.
One approach might be to call upon our time-honored friend the linked list. The linked lists solves so many problems -- why not this one?
We could consider the entire disk to be a collection of linked lists, where each block is a node. Specifically, each block could contain a pointer to the next block in the file. Well, unfortunately, the linked list isn't the solution this time:
- The blocks no longer hold only file data; they must now be correctly interpreted each time they are accessed
- Non-sequential access is very slow, because we must sequentially follow the pointers
File Allocation Table
Another approach might be to think back to our final solution for RAM -- the page table. A page table-proper won't work for disk, because each process does not have its own mapping from logical addresses to physical addresses. Instead this mapping is universal across the entire file system.
Remember the inverted page table? This was a system-wide mapping. We could apply a similar system-wide mapping in the file system. Actually, it gets easier in the file system. We don't need a complicated hashing system or a forward mapping on disk. Let's consider MS-DOS. We said that the directory entry associated the high-level "8 + 3" file name assigned by the user with a low-level name, the number of the first cluster populated by the file. Now we can explain the reason for this.
MS-DOS uses an approach similar to an inverted page table. It maintains a table with one entry for each cluster on disk. Each entry contains a pointer to the cluster that logically follows it. When a directory entry is opened, it provides the address (cluster number) of the first cluster in the corresponding file. This number is used as an index into the mapping table called the File Allocation Table, a.k.a FAT. This entry provides the number of the next cluster in the file. This process can be repeated until the entry in the table corresponding to the last cluster in the file is inspected -- this entry contains a sentinel value, not a cluster address.
A complicated hash is not needed, because the directory tree structure provides the mapping. We don't need the forward mapping, because all clusters must be present on disk -- (for the most part) there is no backing store for secondary storage. To make use of this system, the only "magic" required is a priori knowledge as to the whereabouts of the FAT on disk (actually MS-DOS uses redundant FATs, with a write-all, read-one policy).
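To make the chain-following idea concrete, here is a minimal in-memory sketch; the table size, sentinel value, and the idea that the table has already been loaded from disk are assumptions for illustration:

#include <stdio.h>

#define NUM_CLUSTERS 16
#define FAT_END      0xFFFF   /* assumed sentinel marking the last cluster of a file */

/* fat[i] holds the number of the cluster that logically follows cluster i;
   in a real system this table is read from a well-known location on disk */
unsigned short fat[NUM_CLUSTERS];

void print_chain (unsigned short first_cluster) {
    unsigned short cluster = first_cluster;   /* taken from the directory entry */

    while (cluster != FAT_END) {
        printf ("cluster %u\n", cluster);
        cluster = fat[cluster];               /* one table lookup per logical cluster */
    }
}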
But this approach also has limitations:
- The entire FAT must be in memory. For a 1 GB disk (2^30 bytes) with a 4 KB cluster size (2^12 bytes), 1 MB (2^20 bytes) of memory is required to hold the FAT, if 4 bytes are required for each entry. This is substantial!
Actually, earlier MS-DOS systems didn't cache the table in memory -- but this proved to be too slow, even by MS-DOS standards.
- Space must be allocated for holes within files. That is to say that unallocated areas within the file space will occupy physical storage. This is wastage. Consider what happens when a process "core dumps" writing an image of its memory to disk. Now, remember that the system might have a 64-bit address space. It would be nice if we could leave the hole unallocated on disk, just as we did in RAM.
inode Based Allocation
UNIX uses a more sophisticated and elegant system than MS-DOS. It is based on a data structure known as the inode.
There are two important characteristics of the i-node approach:
- A per-file forward mapping table is used, instead of the system-wide FAT table.
- An adaptation of the multi-level page table scheme is used to efficiently support a wide range of file sizes and holes.
Each level-0 or outermost inode is divided into several different fields:
- File attribute: most of the good stuff that ls -l reports is stored here
- Direct mappings: Entries containing direct forward mappings. These mappings provide a mapping from the logical block number to the location of the physical block on disk
- Indirect_1 Mappings: Entries containing indirect mapping requiring one level of indirection. These mappings map a logical block number to a table of direct forward mappings as described above. This table is then used to map from the logical block number to the physical block address.
- Indirect_2 Mappings: Entries containing indirect mappings requiring two levels of indirection. These mappings map a logical block number to a table containing Indirect_1 mappings as described above
- Indirect_3 Mappings: Entries containing mappings requiring three levels of indirection. These mappings map a logical block number to a table containing Indirect_2 mappings as described above.
Files up to a certain size are mapped using only the direct mappings. If the file grows past a certain threshold, then Indirect_1 mappings are also used. As it keeps growing, Indirect_2 and Indirect_3 mappings are used. This system allows for a balance between storage compactness in secondary storage and overhead in the allocation system. In some sense, it amounts to a special optimization for small files.
Estimating Maximum File Size
Given K direct entries and I indirect entries per block, the biggest file we can store is (K + I + I^2 + I^3) blocks.
If we would need to allocate files larger than we currently can, we could reduce the number of Direct Block entries and add an Indirect_4 entry. This process could be repeated until the entire table consisted of indirect entries.
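For a feel for the numbers, here is a small sketch with assumed, ext2-like parameters: 12 direct entries, 4 KB blocks, and 4-byte block addresses (so 1024 entries per indirect block):

#include <stdio.h>

int main (void) {
    long long K = 12;               /* direct entries in the inode (assumed)           */
    long long I = 4096 / 4;         /* entries per indirect block: 4 KB / 4 B = 1024   */
    long long block_size = 4096;

    /* K + I + I^2 + I^3 blocks, per the formula above */
    long long blocks = K + I + I * I + I * I * I;
    printf ("max file size = %lld blocks, or about %lld GB\n",
            blocks, (blocks * block_size) >> 30);
    return 0;
}

With these assumed values the triple-indirect level dominates: roughly a billion blocks, or about 4 TB.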
A Quick Look Back At Traditional File Systems
We've looked at "general purpose inode-based file systems" such as UFS and ext2. They are the workhorses of the world. They are reasonably fast, but have some limitations, including:
- Internal fragmentation: waste w/in allocated blocks
- Slow disk performance due to seeking (although some like FFS try to reduce this using extents. They keep track of multisector blocks of various sizes and can allocate that way)
- Large logical blocks decrease transfer cost and metadata overhead (number of indirect blocks needed, &c), but increase fragmentation.
- Small logical blocks do the opposite, reduce fragmentation, but increase inode overhead and transfer time.
- Opportunity for metadata inconsistency due to crashes. Long recovery time due to the intensive process used by tools such as the venerable fsck to discover lost information or otherwise force consistency.
- The block size trade-offs versus metadata overhead, and the need for fsck to scan whole partitions, make these inappropriate for large partitions -- too much wastage and too much time wasted in fsck for high availability.
- Free space tracking, typically a bit for each block, is also a big RAM waste and time-consumer for large file systems.
- Directory lookups are painfully slow - sequentially open and search directory files - caching helps a good bit, but there are plenty of cold misses and capacity misses, &c.
- Static inode allocation limits the number of files and wastes space if not needed.
Hybrid file systems
Today we are going to talk about a newer generation of file systems that keep the best characteristics of traditional file systems, plus some improvements, and also add logging to increase availability in the event of failure. These file systems, in particular, support much larger file systems than could reasonably be managed using the older file systems and do so more robustly -- and often faster.

Much like the traditional file systems that we talked about have common characteristics, such as similar inode structures, buffer cache organizations, &c, these file systems will often share some of the same characteristics:
- They will log only the metadata, not the data. This allows for a fast recovery at boot time, but doesn't provide any consistency or correctness guarantees about the data blocks of the files, themselves.
- Unlike LFS, these file systems will manage the data blocks (or extents) in a random-access way -- writes to data blocks will replace the original data, not land at the end of the log file. (The log itself, however, is an append-only log.)
- Make extensive use of the B+ tree data structure to organize information.
- The free block (or extent) list is one such case -- it is often maintained by B+ trees. A common organization keeps two trees -- one ordered by size and another ordered by location. This makes it fast to find an extent that is large enough, and also to find one that is nearby other data in the file -- this minimizes seek delay.
- Some filesystems further improve this by organizing extents into two different B+ trees, one by physical address and another by size.
- "Give me an extent of at least size X" is fast,
- as is "Give me a block or extent nearby X".
- Directory files are replaced with B+ trees.
- Typically one file-system-wide B+ tree containing all directories and
- One per-directory B+ tree of entries.
- But, some use one B+ tree for whole file system
- Organize inodes by file name using a B+ tree
- Keep small-sized files' data directly in the inodes.
- Keep medium-sized files' data in extents or blocks directly named from the inodes
- Keep large-sized files' blocks or extents organized in B+ tree indexed by offset and named by inode.
- Many allow dynamic inode allocation to avoid wasted space - there's no more need to index into an array by number. The inode is directly named by the B+ tree
- In general, performance comparable to standard file systems, with more space efficiency and higher reliability.
ReiserFS
The ReiserFS isn't the most sophisticated among this class of file systems, but it is a reasonably new file system. Furthermore, despite the availability of journaling file systems for other platforms, Reiser was among the first available for Linux and is the first, and only, hybrid file system currently part of the official Linux kernel distribution. As with the other file systems that we discussed, ReiserFS only journals metadata. And, it is based on a variation of the B+ tree, the B* tree. Unlike the B+ tree, which does 1-2 splits, the B* tree does 2-3 splits. This increases the overall packing density of the tree at the expense of only a small amount of code complexity.
It also offers a unique tail optimization. This feature helps to mitigate internal fragmentation. It allows the tails of files, the end portions of files that occupy less than a whole block, to be stored together to more completely fill a block.
Unlike the other file systems, its space management is still pretty "old-school". It uses a simple block-based allocator and manages free space using a simple bitmap, instead of a more efficient extent-based allocator and/or B-tree based free space management. Currently the block size is 4KB, the maximum file size is 4GB, and the maximum file system size is 16TB. Furthermore, ReiserFS doesn't support sparse files -- all blocks of a file are mapped. Reiser4, scheduled for release this fall, will address some of these limitations by including extents and a variable block size of up to 64KB.
For the moment, free blocks are found using a linear search of the bitmap. The search is in the order of increasing block number to match the disk's rotation. It tries to keep things together by searching the bitmap beginning with the position representing the left neighbor. This was empirically determined to be the better of the following:
- Starting at the beginning (no locality, really)
- Starting at the right neighbor (begins past us, given disk spin)
- Starting at the left neighbor (if space, right in-between; but costly to find left neighbor)
ReiserFS allows for the dynamic allocation of inodes and keeps inodes and the directory structure organized within a single B* tree. This tree organizes four different types of nodes:
- Direct items - tails of files packed together or one small file
- Indirect items - unformatted [data] nodes; hold whole blocks of file data
- Directory items - key for first directory entry, plus number of directory entries
- Stat items - metadata (configuration option to combine these with directory item)
Items are stored in the tree using a key, which is a tuple:
<parent directory ID, offset within object, item type/uniqueness>, where
- parent ID is ID of parent object
- For files, the offset indicates the offset of the first byte stored in this item. For directories, it contains the first 4 bytes of the filename of the first file stored within the node
- The item type/uniqueness field indicates the type of the node:
- 0 - stat
- -1 direct
- -2 - indirect
- 500 - directory (plus a unique number for files matching in the first 4 bytes)
Each key structure also contains a unique item number, basically the inode number. But, this isn't used to determine ordering. Instead, the tree sorts keys using each tuple, in order of position. This orders the items in the tree in a way that keeps files within the same directory together, sorted by file or directory name.
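A rough sketch of that ordering follows; the struct and field names are mine for illustration, not ReiserFS's actual definitions:

/* a simplified tree key: compared field by field, in this order */
struct key {
    unsigned long parent_id;   /* ID of the parent directory             */
    unsigned long offset;      /* byte offset, or packed name prefix     */
    int           type;        /* stat, direct, indirect, or directory   */
};

/* lexicographic comparison keeps a directory's items adjacent in the tree */
int compare_keys (const struct key *a, const struct key *b) {
    if (a->parent_id != b->parent_id)
        return a->parent_id < b->parent_id ? -1 : 1;
    if (a->offset != b->offset)
        return a->offset < b->offset ? -1 : 1;
    if (a->type != b->type)
        return a->type < b->type ? -1 : 1;
    return 0;
}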
The leaf nodes are data nodes. Unformatted nodes contain whole blocks of data. "Formatted" nodes hold the tails of files. They are formatted to allow more than one tail to be stored within the same block. Since the tree is balanced, the path to any of these data nodes is the same length.
A file is composed of a set of indirect items and at most 2 direct items for the tail. (Why not always one? If a tail is smaller than an unformatted node, but larger than what fits in a formatted node, it needs to be broken apart and placed into two direct items.)
SGI's XFS
In many ways SGI's XFS is similar to ReiserFS. But, it is in many ways more sophisticated. It may be the most sophisticated among the systems we'll consider. This being said, unlike ReiserFS, XFS uses B+ trees instead of B* trees.

The extent-based allocator is rather sophisticated. In particular, it has three pretty cool features. First, it allows for delayed allocation. Basically, this allows the system to build a virtual extent in RAM and then allocate it in one piece at the end. This mitigates the "and one more thing" syndrome that can lead to a bunch of small extents instead of one big one. It also allows for the preallocation of an extent. This allows the system to reserve an extent that is big enough in advance so that the right-sized extent can be used -- without consuming memory for delayed allocation or running the risk of running out of space later on. The system also allows for the coalescing of extents as they are freed to reduce fragmentation.
The file system is organized into different partitions called allocation groups (AGs). Each allocation group has its own data structures -- for practical purposes, they are separate instances of the same file system class. This helps to keep data structures to a normal scale. It also allows for parallel activity on multiple AGs, without concurrency control mechanisms creating hot spots.
Inodes are created dynamically in chunks of 64 inodes. Each inode is numbered using a tuple that includes both the chunk number and the inode's index within its chunk. The location of an inode can be discovered by lookup in B+ tree by chunk number. The B+ tree also contains bitmap showing which inodes within each chunk are used.
Free space is managed using two different B+ trees of extents. One B+ tree is organized by size, whereas the other is organized by location. This allows for efficient allocation -- both by size and locality.
Directories are also stored in a B+ tree. Instead of storing the name itself in the tree, a hash of the name is stored. This is done because it is more complicated to organize a B tree to work with names of different sizes. But, regardless of the size of the name, it will hash to the same-sized key.
Each file within this tree contains its own storage map (inode). Initially, each node stores the block offset and extent size measured in blocks. When the file grows and overflows the inode, the storage allocation is stored in a tree rooted at the inode. This tree is indexed by the offset of the extent and stores the size of the extent. In this way, the directory structure is really a tree of inodes, which in turn are trees of the file's actual storage.

Much like ReiserFS, XFS logs only metadata changes, not changes to the file's actual data. In the event of a crash, it replays these logs to obtain consistent metadata. XFS also includes a repair program, similar to fsck, that is capable of fixing other types of corruption. This repair tool was not in the first release of XFS, but was demanded by customers and added later. Logging can be done to a separate device to prevent the log from becoming a hot spot in high-throughput applications. Normally asynchronous logging is used, but synchronous logging is possible (though it is expensive).
XFS offers a variable block size ranging from 512 bytes to 64K and an extent-based allocator. The maximum file size is 9 thousand petabytes. The maximum file system size is 18 thousand petabytes.
IBM's JFS
IBM's JFS isn't one of the best performers among this class of file system. But, that is probably because it was one of the first. What to say? Things get better over time -- and I think everyone benefited from IBM's experience here.

File system partitions correspond to what are known in JFS as aggregates. Within each partition lives an allocation group, similar to that of XFS. Within each allocation group is one or more filesets. A fileset is nothing more than a mountable tree. JFS supports extents within each allocation group.
Much like XFS, JFS uses a B+ tree to store directories. And, again, it also uses a B+ tree to track allocations within a file. Unlike XFS, the B+ tree is used to track even small allocations. The only exception is an optimization that allows symlinks to live directly in the inode.
Free space is represented as array w/1 bit per block. This bit array can be viewed as an array of 32-bit words. These words then form a binary tree sorted by size. This makes it easy to find a contiguous chunk of space of the right size, without a linear search of the available blocks. The same array is also indexed by another tree as a "Binary Buddy". This allows for easy coalescing and easy tracking of the allocated size.
These trees actually have a somewhat complicated structure. We won't spend the time here to cover it in detail. This really was one of the "original attempts" and not very efficient. I can provide you with some references, if you'd like more detail.
As for sideline statistics, the block size can be 512B, 1KB, 2KB, or 4KB. The maximum file size ranges from 512TB with a 512-byte block size to 4 petabytes with a 4KB block size. Similarly, the maximum file system size ranges from 4PB with 512-byte blocks to 32 petabytes with a 4KB block size.
Ext3
Ext3 isn't really a new file system. It is basically a journaling layer on top of Ext2, the "standard" Linux file system. It is both forward and backward compatible with Ext2. One can actually mount any ext2 file system as ext3, or mount any ext3 file system as ext2. This file system is particularly noteworthy because it is backed by Red Hat and is their "official" file system of choice.

Basically Red Hat wanted to have a path into journaling file systems for their customers, but also wanted as little transitional headache and risk as possible. Ext3 offers all of this. There is no need, in any real sense, to convert an existing ext2 file system to it -- really ext3 just needs to be enabled. Furthermore, the unhappy customer can always go back to ext2. And, in a pinch, the file system can always be mounted as ext2 and the old fsck remains perfectly effective.
The journaling layer of ext3 is really separate from the file system layer. There are only two differences between ext2 and ext3. The first, which really isn't a change to ext2-proper, is that ext3 has a "logging layer" to log the file system changes. The second change is the addition in ext3 of a communication interface from the file system to the logging layer. Additionally, one ext2 inode is used for the log file, but this really doesn't matter from a compatibility point of view -- unless the ext2 file system is (or otherwise would be) completely full.
Three types of things are logged by ext3. These must be logged atomically (all or none).
- Metadata - The whole block of updated metadata (even if only small part of block changed). This is basically a shadow copy of the updated block.
- Descriptor Blocks - These tell where each metadata block should be copied on recovery. They are written before the metadata blocks. Remember, the metadata blocks are just unformatted blocks of data -- the descriptor blocks are necessary to tell us which are which.
- Header Blocks - These describe the log file itself. In particular, they record the current head and tail of the journal, as well as the current sequence number, so that updates know where the log currently begins and ends.
Periodically, the in-memory log is checkpointed by writing outstanding entries to an in-memory journal. This journal is committed periodically to disk. The level of journaling is a mount option. Basically, writes to the log file are cached, like any other writes. The classic performance versus recency trade-off involves how often we sync the log to disk.
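Very roughly, committing one transaction might look like the sketch below. The helpers are stubs invented for illustration, and real ext3/JBD is considerably more involved:

/* hypothetical helpers: append one block to the on-disk journal */
extern void journal_append (const void *block);
extern void journal_write_header (unsigned long sequence);

/* commit 'count' updated metadata blocks atomically (all or none) */
void commit_transaction (void **metadata_blocks, void *descriptor_block,
                         unsigned long sequence, int count) {
    int i;

    journal_append (descriptor_block);        /* says where each copy belongs on disk   */
    for (i = 0; i < count; i++)
        journal_append (metadata_blocks[i]);  /* shadow copies of whole metadata blocks */

    /* only after every copy is safely in the log does the header advance;
       a crash before this point simply discards the half-written transaction */
    journal_write_header (sequence);
}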
As for the sideline stats, the block size is variable between 1KB and 4KB. The maximum file size is 2GB and the maximum filesystem size is 4TB.
As you can see, this is nothing more than a version of ext2, which supports a journaling/logging layer that provides for a faster, and optionally more thorough, recovery mode. I think Red Hat made the wrong choice. My bet is that people want more than compatibility - more than Ext3 offers. Instead, I think that the ultimate winner will be the new version of ReiserFS or XFS. Or, perhaps, something new -- but not this.
Handling Multiple File Systems
So far we have discussed the role of file systems and the implementation of UNIX-like file systems. But our model of the world was a little simplified -- it aimed to capture essential properties without the added complexity of optimization or real-world idiosyncrasies. Today, we are going to take a closer look at the mechanisms used within Linux.
In real-world systems, many different file systems may be in use on the same system at the same time. Many different file systems exist -- some are specialized for particular applications, others are just vendor-specific or vestigial general-purpose file systems. The commercial success of a new entry in the OS market often depends on its ability to support a plethora of file systems -- no one wants to convert all of their old data (applications present enough trauma).
The Virtual File System (VFS), originally proposed by Sun and now a part of SYSVR4, is a file system architecture designed to facilitate support for multiple file systems. It uses an object-oriented paradigm to represent file systems. The VFS model can be viewed as consisting of an abstract class that represents a file system with derived classes for each specific type of file system.
The abstract base class defines the minimal interface to the file system. The derived class implements these behaviors in a way that is appropriate to the file system and defines additional behaviors as necessary.
[Figure: the VFS architecture. Source: Rusling, David A., The Linux Kernel, V0.8-3, LDP, 1999, S.9.2.]

Sun also defined a similar abstraction to represent a file, called the vnode. The vnode is basically an abstract base class that, when implemented by a derived class, serves the role of a traditional inode. The vnode defines the universal interface and the derived classes implement these behaviors and others for the specific file system.
Linux is fairly loyal to the VFS architecture and has adopted many of the ideas of the vnode into its inode structure. Its inode structure is not, however, an exact implementation of a vnode. Linux maintains the general architecture of the vnode, without employing as strong an OO model. One note: whereas a vnode # is unique across file systems, an inode # is only unique within the file system. For this reason, it is necessary to use the device # and the inode # together as a unique identifier for a file in Linux.
Major Data Structures
The following are the major data structures in the Linux file system infrastructure. We'll walk our way through them today.
- struct files_struct - per process table
- struct file_system_type - represents an entire file system
- struct super_block - represents super block (metadata) of file system
- struct super_operations - operations to manipulate the super-block (metadata)
- struct inode - represents a file
- struct inode_operations - operations to manipulate contents of an inode
- struct dentry - represents a name to inode mapping
- struct dentry_operations - operations to manipulate a dentry
- struct file - entry in open file table; represents state of an open file
- struct file_operations - collection of operations that can be performed on an open file (remember this from device drivers?)
Per Process File Information
So far, in lecture, we've suggested that the only file system state that is associated with a process is the file descriptor table in the PCB.
This is almost true in the real world, but not quite. There are a few other pieces of information that prove useful and a few optimizations. In Linux, the file system information associated with a process is kept in a struct files_struct within the task_struct. The task_struct is Linux's version of the PCB.
include/linux/sched.h:
struct files_struct { /* kept within task_struct (PCB) */
    atomic_t count;
    rwlock_t file_lock;
    int max_fds;
    int max_fdset;
    int next_fd;
    struct file ** fd;          /* current fd array */
    fd_set *close_on_exec;
    fd_set *open_fds;
    fd_set close_on_exec_init;
    fd_set open_fds_init;
    struct file * fd_array[NR_OPEN_DEFAULT];
};

We find the struct file **fd, the array of file descriptors, just as expected. But, it is dynamic, not static. Initially it references a small, default array, struct file *fd_array[NR_OPEN_DEFAULT], but if necessary, it can grow. If this happens, a new array is allocated for fd and the contents are copied. This can happen repeatedly, if necessary.
The count variable tracks the number of files the process has open and the lock variable is a spin lock that is used to protect list operations.
There are a few bit-masks of type fd_set. These sets contain one bit per file descriptor. In the case of open_fds, this bit indicates whether or not the corresponding file descriptor is in use. In the case of close_on_exec, each bit indicates whether or not the corresponding file should be closed in the event of an exec(). If the new process knows nothing about the open files of its predecessor, it makes sense to close them and free the associated resources. But, in other cases, the open files can provide an anonymous way for the predecessor and successor to cooperate.
The open_fds_init and close_on_exec_init fd_sets are used to initialize the fd_sets of a clone()'d process. The linux clone() call is much like a super-set of Fork() and SharedFork() in Yalnix. It can create traditional processes or thread-like relationships.
next_fd is an index into the array that is used when searching for an available file descriptor. It prevents an increasingly long linear search starting at the beginning of the array, in the event that many files are in use.
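A simplified sketch of that descriptor search follows; this is not kernel code, and the fd_set macros are the standard userspace ones, used here only for illustration:

#include <sys/select.h>   /* fd_set, FD_ISSET, FD_SET */

/* find the lowest free descriptor at or after next_fd, and mark it used */
int allocate_fd (fd_set *open_fds, int *next_fd, int max_fds) {
    int fd;

    for (fd = *next_fd; fd < max_fds; fd++) {
        if (!FD_ISSET (fd, open_fds)) {
            FD_SET (fd, open_fds);
            *next_fd = fd + 1;   /* the next search starts here, not at 0 */
            return fd;
        }
    }
    return -1;                   /* table full */
}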
System-wide File Information
List of files in use: (struct list_head) sb->s_files
- One such list exists per file system. It holds the file structures for open files. This is Linux's implementation of what we called the "open file table" -- except in Linux, it is a doubly linked list.
List of free files: struct list_head free_list
- This is a system-wide list. It holds file descriptors that are no longer used. Think of it as a big recycle bin. We'll see this approach over-and-over again. Most kernel buffers are frequently freed and allocated. There is no reason to pay the cost of freeing a buffer just to pay the price to allocate it again. For this reason, similar "recycle bins" exist for many different types of kernel buffers.
List of newly created files: struct list_head anon_list
- This list holds newly created file structs. They are added to this list in response to an open that couldn't be satisfied from the free_list.
struct file
The elements of the open file list, s_files, are of type struct file. Each node represents the use of a file by a process. The only exception occurs in the case of a clone(). Several clone()'d or fork()'d processes may share the same file node.
struct file {
    struct list_head f_list;        /* Head of the list */
    struct dentry *f_dentry;        /* The name --> inode mapping */
    struct file_operations *f_op;   /* Remember this from I/O? The op pointers */
    atomic_t f_count;               /* Reference count -- needed because of fork(), &c */
    unsigned int f_flags;           /* O_RDONLY, O_WRONLY, &c */
    mode_t f_mode;                  /* just as in chmod() */
    loff_t f_pos;                   /* The current position in the file -- allows for
                                       sequential reads, writes, &c */
    unsigned long f_reada, f_ramax, f_raend, f_ralen, f_rawin;
                                    /* Used for read-ahead magic */
    struct fown_struct f_owner;     /* Module, not used */
    unsigned int f_uid, f_gid;      /* user id and group id */
    int f_error;                    /* needed for NFS return codes */
    unsigned long f_version;        /* Needed for cache validation */
    void *private_data;             /* needed for tty driver, and maybe others */
};
struct file_operations
This structure should seem familiar to everyone -- we discussed it in the context of device drivers. It contains pointers to the functions that implement the standard interface. Most of the operations defined in this structure should probably be familiar to you.
Please remember that although each file has a pointer to this structure, many of these pointers will reference the same structure. Typically there is only one file_operations structure for each type of file supported by the file system.
struct file_operations {
    loff_t (*llseek) (struct file *, loff_t, int);
    ssize_t (*read) (struct file *, char *, size_t, loff_t *);
    ssize_t (*write) (struct file *, const char *, size_t, loff_t *);
    int (*readdir) (struct file *, void *, filldir_t);
    unsigned int (*poll) (struct file *, struct poll_table_struct *);
    int (*ioctl) (struct inode *, struct file *, unsigned int, unsigned long);
    int (*mmap) (struct file *, struct vm_area_struct *);
    int (*open) (struct inode *, struct file *);
    int (*flush) (struct file *);
    int (*release) (struct inode *, struct file *);
    int (*fsync) (struct file *, struct dentry *);
    int (*fasync) (int, struct file *, int);
    int (*check_media_change) (kdev_t dev);
    int (*revalidate) (kdev_t dev);
    int (*lock) (struct file *, int, struct file_lock *);
};
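The point of this structure is dispatch: a system call only needs the pointer, not knowledge of the underlying file system. Roughly, and assuming the struct file and struct file_operations definitions shown in these notes (this is a sketch, not the actual sys_read()):

#include <errno.h>

/* sketch: how a read() request might be routed through f_op */
ssize_t do_read (struct file *file, char *buf, size_t count) {
    if (file->f_op == NULL || file->f_op->read == NULL)
        return -EINVAL;                       /* this file type cannot be read */

    /* the file-system-specific implementation advances f_pos itself */
    return file->f_op->read (file, buf, count, &file->f_pos);
}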
struct inode
Most of the fields in the inode should probably be self-explanatory. Please remember that the inode number is only unique within the file system. It takes the tuple (device #, inode #) to uniquely identify a file in a global context. We'll talk more about the struct super_block shortly. The same is true for the struct vm_area_struct.
Please also notice the union within the inode. This allows the inode structure to be used with the several different types of file systems.
struct inode {
    struct list_head i_hash;
    struct list_head i_list;
    struct list_head i_dentry;
    unsigned long i_ino;
    kdev_t i_dev;

    /* Usual metadata, such as might be seen with "ls -l" */
    /* blah, blah, blah */

    struct inode_operations *i_op;
    struct super_block *i_sb;
    wait_queue_head_t i_wait;
    struct vm_area_struct *i_mmap;
    struct pipe_inode_info *i_pipe;

    union {
        struct minix_inode_info minix_i;
        struct ext2_inode_info ext2_i;
        ...
    } u;
}
Memory Mapping
We won't cover this in too much detail. But this is the structure that defines virtual memory areas. When a file is memory mapped, this defines the relationship between virtual memory and the file's blocks. The struct vm_operations_struct implements the operations on the memory-mapped area. Obviously the implementation of these operations is different for different media types, file system types, &c.
/*
 * This struct defines a memory VMM memory area. There is one of these
 * per VM-area/task. A VM area is any part of the process virtual memory
 * space that has a special rule for the page-fault handlers (ie a shared
 * library, the executable area etc).
 */
struct vm_area_struct {
    struct mm_struct * vm_mm;   /* VM area parameters */
    unsigned long vm_start;
    unsigned long vm_end;

    /* linked list of VM areas per task, sorted by address */
    struct vm_area_struct *vm_next;

    pgprot_t vm_page_prot;
    unsigned short vm_flags;

    /* AVL tree of VM areas per task, sorted by address */
    short vm_avl_height;
    struct vm_area_struct * vm_avl_left;
    struct vm_area_struct * vm_avl_right;

    /* For areas with inode, the list inode->i_mmap, for shm areas,
     * the list of attaches, otherwise unused. */
    struct vm_area_struct *vm_next_share;
    struct vm_area_struct **vm_pprev_share;

    struct vm_operations_struct * vm_ops;
    unsigned long vm_offset;
    struct file * vm_file;
    void * vm_private_data;     /* was vm_pte (shared mem) */
};
Memory Mapping Operations
These operations should seem reasonably meaningful to you. The advise() operation is a BSD-ism that is not implemented in Linux. wppage() is also unimplemented; I believe that it is also a BSD-ism.
struct vm_operations_struct {
    void (*open)(struct vm_area_struct * area);
    void (*close)(struct vm_area_struct * area);
    void (*unmap)(struct vm_area_struct *area, unsigned long, size_t);
    void (*protect)(struct vm_area_struct *area, unsigned long, size_t, unsigned int newprot);
    int (*sync)(struct vm_area_struct *area, unsigned long, size_t, unsigned int flags);
    void (*advise)(struct vm_area_struct *area, unsigned long, size_t, unsigned int advise);
    unsigned long (*nopage)(struct vm_area_struct * area, unsigned long address, int write_access);
    unsigned long (*wppage)(struct vm_area_struct * area, unsigned long address, unsigned long page);
    int (*swapout)(struct vm_area_struct *, struct page *);
};
Inode Cache
The Linux inode cache is organized as an open-chain hash table. The hashing function hashes the inode # and the device #.
All entries in the hash table are also linked into one of three LRU lists:
- Used and dirty
- Used and clean
- Unused
If cache pressure forces an entry out of the cache, a clean one is preferred, since it does not need to be written to disk. The unused list is, of course, the preferred source. The entries from deleted files, &c are placed in the unused list instead of being freed, to reduce the overhead of allocating and freeing buffers within the OS -- as we discussed earlier, this is a common strategy.
struct dentry
In our earlier discussion of UNIX-like file systems, we very much oversimplified the directory entry -- we alleged that it was simply a name to inode # mapping. Here we see that it does exactly that -- but it also has another purpose. It maintains the structure of the directory tree by keeping references to siblings (d_child), the parent (d_parent), and subdirectories/children (d_subdirs).
The structure also contains some metadata used to cache the entries (d_lru, d_hash, d_time), as well as mounting information (d_mounts = a directory mounted on top of this one, d_covers = a directory that this directory is mounted on top of).
The d_operations structure defines operations on directory entries -- mostly cache related. More soon.
struct dentry {
    int d_count;
    unsigned int d_flags;
    struct inode * d_inode;            /* Where the name belongs to */
    struct dentry * d_parent;          /* parent directory */
    struct dentry * d_mounts;          /* mount information */
    struct dentry * d_covers;
    struct list_head d_hash;           /* lookup hash list */
    struct list_head d_lru;            /* d_count = 0 LRU list */
    struct list_head d_child;          /* child of parent list */
    struct list_head d_subdirs;        /* our children */
    struct list_head d_alias;          /* inode alias list */
    struct qstr d_name;
    unsigned long d_time;              /* used by d_revalidate */
    struct dentry_operations *d_op;
    struct super_block * d_sb;         /* The root of the dentry tree */
    unsigned long d_reftime;           /* last time referenced */
    void * d_fsdata;                   /* fs-specific data */
    unsigned char d_iname[DNAME_INLINE_LEN];   /* small names */
};
dentry_operations
These operations should be mostly self-explanatory. revalidate() is needed to revalidate a cached entry, if it is possible that something other than the VFS changed it -- this is typically only the case in shared file systems.
struct dentry_operations {
    int (*d_revalidate)(struct dentry *, int);
    int (*d_hash) (struct dentry *, struct qstr *);
    int (*d_compare) (struct dentry *, struct qstr *, struct qstr *);
    void (*d_delete)(struct dentry *);
    void (*d_release)(struct dentry *);
};
Dcache
The dcache, a.k.a. the name cache or directory cache, provides a fast way of mapping a name to an inode. Without the dcache, every file access by name would require a traversal of the directory structure -- this could get painful.
By now, the structure of the dcache should be of no surprise to you: an open-chained hash table, hashed by (parent dentry, name), with entries also linked into an LRU list for replacement. Free dentry structures are kept in a separate list for reuse. The only surprise is that only names of 15 or fewer characters can be cached -- fortunately, this is most names.
Replacement:
level1_cache/level1_head
- LRU list of recently translated entries. Entries added to the end may displace older entries if cache is full.
level_2_cache/level2_head
- LRU list of recently accessed entries (moved from level 1 on second access).
Level 2 is safer - entries can only be displaced by a repeatedly accessed entry, not random new entries.
struct file_system_type
Since Linux can support multiple file systems, there is a structure that maintains the basic information about each one. These structures are kept in a singly linked list. When you mount a file system, the kernel walks this list until it finds a name that matches the type provided to the mount operation. If it can't find a matching type, the mount will fail. The next pointer is the link to the next node in the list, or NULL.
The most critical field is the read_super function pointer, which reads in the super block for a file system of this type. The super block contains the metadata that describes and organizes the file system.
struct file_system_type {
    const char *name;
    int fs_flags;
    struct super_block * (*read_super) (struct super_block *, void *, int);
    struct file_system_type * next;
};
struct super_block
Most of the fields in the super_block should be self-explanatory. I have no idea what the purpose of the "basket" fields might be. As far as I know they are a recent addition to this structure and aren't used anywhere within the kernel -- perhaps they are a hint of coming attractions? I can only assume that they describe an unordered linked list of inodes.
Please notice the use of the union to permit the super_block structure to support multiple different file systems.
struct super_block {
    struct list_head s_list;    /* Keep this first */
    kdev_t s_dev;
    unsigned long s_blocksize;
    unsigned char s_lock;
    unsigned char s_rd_only;
    unsigned char s_dirt;
    struct inode *s_ibasket;
    short int s_ibasket_count;
    short int s_ibasket_max;
    struct list_head s_dirty;   /* dirty inodes */
    struct list_head s_files;
    ...
    union {
        struct minix_sb_info minix_sb;
        struct ext2_sb_info ext2_sb;
        struct hpfs_sb_info hpfs_sb;
        ....
    } u;
}
struct super_operations
The super block operations manipulate the meta-data associated with the file system. Their purpose is more-or-less self-evident.
struct super_operations {
    void (*read_inode) (struct inode *);
    void (*write_inode) (struct inode *);
    void (*put_inode) (struct inode *);
    void (*delete_inode) (struct inode *);
    int (*notify_change) (struct dentry *, struct iattr *);
    void (*put_super) (struct super_block *);
    void (*write_super) (struct super_block *);
    int (*statfs) (struct super_block *, struct statfs *, int);
    int (*remount_fs) (struct super_block *, int *, char *);
    void (*clear_inode) (struct inode *);
    void (*umount_begin) (struct super_block *);
};
The Buffer Cache
The buffer cache provides a way of caching file system blocks (data and metadata) to avoid repeated accesses to disk. Please remember that there is only one buffer cache per system -- not one cache per file system. The same buffers can be shared by multiple file systems. Heavy use of one file system will reduce the number of buffers available to another, &c.
There are different sized buffers. So the cache is really a collection of caches -- one of each block size.
Some people like to think of each block buffer as the representation of a request. That is to say that the contents of a buffer represent the results of a recent request. I prefer to think of buffers as simple containers -- but the request analogy isn't bad and might be useful to you.
Two main parts:
- Lists of empty buffers of several sizes: 512B, 1K, 2K, 4K, 8K
- Open-chained hash table of block buffers: hash (device #, block #) is the index (see the lookup sketch below)
Properties:
- Block buffers are either in a free list or in the hash table
- All block buffers are also kept in an LRU list for replacement
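A sketch of the lookup path follows; the struct, hash function, and table size are simplified stand-ins (the real kernel's getblk()/bread() do considerably more):

#include <stddef.h>

#define HASH_SIZE 1024

/* simplified buffer header: only the fields needed for the lookup */
struct buf {
    struct buf    *next;      /* hash chain                */
    unsigned int   dev;       /* device #                  */
    unsigned long  blocknr;   /* block # on that device    */
    char          *data;      /* the cached block itself   */
};

static struct buf *hash_table[HASH_SIZE];   /* one open chain per bucket */

/* return the cached buffer for (dev, blocknr), or NULL on a miss */
struct buf *find_buffer (unsigned int dev, unsigned long blocknr) {
    struct buf *b = hash_table[(dev ^ blocknr) % HASH_SIZE];  /* assumed, simplistic hash */

    for (; b != NULL; b = b->next)
        if (b->dev == dev && b->blocknr == blocknr)
            return b;          /* hit: no disk access needed              */
    return NULL;               /* miss: caller must read the block from disk */
}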
Victim Selection
Each block buffer is maintained on one of the following LRU lists:
- BUF_CLEAN
- BUF_UNSHARED
- BUF_SHARED
- BUF_LOCKED - scheduled to be flushed
- BUF_LOCKED1 - super block and inode buffers that can't be flushed
- BUF_DIRTY
The victim is the best clean buffer. If a victim can't be found, the system will try to create more buffers. If that fails, it will try to free block buffers of other sizes and try again.
The bdflush Kernel Daemon
The bdflush daemon flushes dirty blocks creating clean blocks. It normally sleeps, but wakes up:
- If the system runs out of clean buffers
- More than 60% (configurable) of the buffers are dirty
struct buffer_head
The buffer_head structure is the structure that represents an individual block buffer (or, if you prefer, a buffered request). At this point, most of the fields should be reasonably familiar.
struct buffer_head {
    /* First cache line: */
    struct buffer_head *b_next;     /* Hash queue list */
    unsigned long b_blocknr;        /* block number */
    unsigned short b_size;          /* block size */
    unsigned short b_list;          /* List that this buffer appears */
    kdev_t b_dev;                   /* device (B_FREE = free) */

    atomic_t b_count;               /* users using this block */
    kdev_t b_rdev;                  /* Real device */
    unsigned long b_state;          /* buffer state bitmap (see above) */
    unsigned long b_flushtime;      /* Time to write (dirty) buffer */

    struct buffer_head *b_next_free;/* lru/free list linkage */
    struct buffer_head *b_prev_free;/* doubly linked list of buffers */
    struct buffer_head *b_reqnext;  /* request queue */
    struct buffer_head **b_pprev;   /* doubly linked list of hash-queue */
    char *b_data;                   /* pointer to data block (1024 bytes) */
    void (*b_end_io)(struct buffer_head *bh, int uptodate); /* I/O completion */
    void *b_dev_id;

    unsigned long b_rsector;        /* Real buffer location on disk */
    wait_queue_head_t b_wait;
    struct kiobuf * b_kiobuf;       /* kiobuf which owns this IO */
};