Fix bug when allocating zero-sized buffers in the shard allocator
Summary:
This was an interesting bug. When running shallow decls, I was getting `AllocError`s. I assumed the `AllocError` was an OOM caused by an excessively large allocation. However, the errors weren't being produced by `filealloc.rs` (where we'd detect such an OOM).
I was able to discover that the allocation call that triggered this error tried to allocate a zero-sized buffer. I spent a lot of time trying to figure out where the `AllocError` was being generated (I still haven't pinned down the exact location).
After writing a test case and stepping through `MapAlloc::allocator` with gdb, I spotted that we can call `NonNull::from` with a null pointer if:
1. We haven't allocated anything yet in the current map allocator. This means that `new_current` will be NULL.
2. We try to allocate a zero-sized buffer (the `new_current > control_data.end` bounds check then passes, since `new_current` is still NULL).
I still don't understand how the `NonNull::from` call ends up being converted into an `AllocError`; there must be some magic going on somewhere.
Reviewed By: mjhostet
Differential Revision: D31157242
fbshipit-source-id: 6c8bdc14c5b244a4ef99f6543317b80345a31430