HAMMER 27/many: Major surgery - change allocation model
After getting stuck on the recovery code and on highly suboptimal write
performance, remove the super-cluster/cluster and radix-tree bitmap
infrastructure and replace it with a circular FIFO.
* Nothing is localized yet with this major surgery commit, which means
radix nodes, hammer records, file data, and undo fifo elements are
all being written to a single fifo. These elements will soon get their
own abstracted fifos (and in particular, the undo elements will get a
fixed-sized circular fifo and be considered temporary data).
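As a rough illustration of the append model described above, here is a
minimal circular-FIFO allocator sketch. The names (struct fifo, fifo_alloc)
and the padding-on-wrap policy are hypothetical, not HAMMER's actual code;
it only shows the idea of appending at the tail and reclaiming from the head.

```c
#include <stdint.h>

#define FIFO_SIZE	(1ULL << 20)	/* 1 MiB log, for illustration only */

struct fifo {
	uint64_t head;		/* oldest live byte (reclaim point) */
	uint64_t tail;		/* next append offset (monotonic) */
};

/*
 * Append 'bytes' to the fifo.  Returns the media-relative offset of the
 * allocation, or -1 if the live window would overflow the log.
 */
static int64_t
fifo_alloc(struct fifo *fp, uint64_t bytes)
{
	uint64_t off = fp->tail % FIFO_SIZE;

	/* Pad to the start of the log if the request would straddle the end */
	if (off + bytes > FIFO_SIZE) {
		fp->tail += FIFO_SIZE - off;
		off = 0;
	}
	if (fp->tail - fp->head + bytes > FIFO_SIZE)
		return (-1);
	fp->tail += bytes;
	return ((int64_t)off);
}
```

Reclaiming space is then just a matter of advancing head past retired
elements, which is what makes a fixed-size fifo workable for temporary
data such as undo records.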
* No sequence numbers or transaction spaces are generated yet.
* Create a 'hammer_off_t' type (64 bits). This type reserves 4 bits for
a zone. Zones which encode volume numbers reserve another 8 bits,
giving us a 52 bit byte offset able to represent up to 4096 TB per
volume. Zones which do not encode volume numbers have 60 bits available
for an abstracted offset, resulting in a maximum filesystem size of
2^60 bytes (1 MTB, i.e. roughly a million terabytes). Up to 15 zones
can be encoded.
As of this commit only 2 zones are implemented to wrap up existing
functionality.
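A sketch of the bit layout described above, assuming the zone occupies the
top 4 bits and the volume number the next 8. The macro and function names
here are illustrative, not necessarily the ones used in the tree:

```c
#include <stdint.h>

typedef uint64_t hammer_off_t;

#define HAMMER_ZONE_SHIFT	60	/* top 4 bits: zone (1-15 usable) */
#define HAMMER_VOL_SHIFT	52	/* next 8 bits: volume number */
#define HAMMER_OFF_MASK		((1ULL << HAMMER_VOL_SHIFT) - 1)

/* Build a volume-encoding offset: 52-bit byte offset, up to 4096 TB/volume */
static inline hammer_off_t
hammer_encode_off(int zone, int vol_no, uint64_t byte_off)
{
	return (((hammer_off_t)zone << HAMMER_ZONE_SHIFT) |
		((hammer_off_t)vol_no << HAMMER_VOL_SHIFT) |
		(byte_off & HAMMER_OFF_MASK));
}

static inline int
hammer_off_zone(hammer_off_t off)
{
	return ((int)(off >> HAMMER_ZONE_SHIFT));
}

static inline int
hammer_off_volume(hammer_off_t off)
{
	return ((int)((off >> HAMMER_VOL_SHIFT) & 0xff));
}

static inline uint64_t
hammer_off_byte(hammer_off_t off)
{
	return (off & HAMMER_OFF_MASK);
}
```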
* Adjust the B-Tree to use full 64 bit hammer offsets. Have one global B-Tree
for the entire filesystem. The tree is no longer per-cluster.
* Scrap the recovery and spike code. Scrap the cluster and super-cluster
code. Scrap those portions of the B-Tree code that dealt with spikes.
Scrap those portions of the IO subsystem that dealt with marking a
cluster open or closed.
* Expand the hammer_modify_*() functions to include a data range and add
  UNDO record generation. Do not implement buffer ordering dependencies
  yet (ordering issues are going to change radically with the FIFO model).
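The modify-with-undo idea can be sketched as follows: before a buffer range
is overwritten, its pre-image is copied into an undo record so recovery can
roll the change back. All names here (undo_rec, hammer_modify_range,
hammer_undo) and the fixed-size in-memory fifo are hypothetical
simplifications of the scheme, not the actual implementation:

```c
#include <stdint.h>
#include <string.h>

#define UNDO_MAX	16	/* records retained, for the sketch */
#define UNDO_DATA_MAX	64

struct undo_rec {
	uint64_t off;			/* offset of the modified range */
	uint32_t len;
	uint8_t  old_data[UNDO_DATA_MAX]; /* saved pre-image */
};

static struct undo_rec undo_fifo[UNDO_MAX];
static int undo_count;

/*
 * Declare intent to modify [off, off+len) of the buffer at 'base':
 * save the pre-image before the caller scribbles on the range.
 */
static void
hammer_modify_range(uint8_t *base, uint64_t off, uint32_t len)
{
	struct undo_rec *u = &undo_fifo[undo_count++ % UNDO_MAX];

	u->off = off;
	u->len = len;
	memcpy(u->old_data, base + off, len);
}

/* Roll back every recorded modification, newest first */
static void
hammer_undo(uint8_t *base)
{
	while (undo_count > 0) {
		struct undo_rec *u = &undo_fifo[--undo_count];
		memcpy(base + u->off, u->old_data, u->len);
	}
}
```

On real media the undo records would themselves be appended to the fifo
ahead of the data they protect, which is why they can be treated as
temporary data in a fixed-size circular fifo.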
31 files changed: