bodger: me at Carabelle beach, FL (beach monster)
I've been instructed to install our company's product, Velocity, at the customer's site. As the customer doesn't have network connectivity, I decided to create a virtual machine under Xen (which we're using because Oracle recommends it), install the software on that, get it configured and tested, then move the VM to the customer's site.

So far, so good.

It turns out there are two ways to create VMs: install the OS from an ISO image, or use a “VM template”. And the only way to get hardware virtualization is to use a template. The templates come with already-built virtual disks. These virtual disks are small (4GB), with more space intended to be added as additional virtual devices.

This is awkward, as it involves coordinating various hunks of virtual disk and keeping them together and in sync, as well as ferreting out all the necessary mount points so nothing overflows the small root filesystem.

Fortunately, I can use resize2fs to “grow” a filesystem while keeping its contents. Unfortunately, the virtual drive doesn't have room to do so. Fortunately, I can make it bigger by just tacking more space on the end (dd with the "seek" option can't really do it, so I have to make a sparse file with dd then concatenate that onto the virtual disk file).
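The grow step — pad the image with a hole rather than real zeroes — can also be sketched in a few lines of Python. This is illustrative only (the actual work was done with dd and cat); it leans on the POSIX ftruncate behavior that extending a file fills the tail with sparse zeroes:

```python
def grow_sparse(path, new_size):
    """Extend a file in place to new_size bytes with a sparse,
    zero-filled tail -- the same effect as making a sparse file
    with dd and concatenating it onto the end of the image."""
    with open(path, "r+b") as f:
        # truncate() to a LARGER size appends a hole: reads of the
        # new region return null bytes, and on most filesystems no
        # actual disk blocks are allocated for it.
        f.truncate(new_size)
```

On an ext3 host filesystem the appended region costs essentially no disk space until something writes into it, which is why this beats copying real zeroes around.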

The virtual disk, however, is partitioned, and the partition table doesn't describe the additional space. So I attack it with fdisk and then parted. I figure I can just move the swap partition (whose contents are ephemeral and don't matter) to the (new) end, and then expand the root partition. Unfortunately, parted refuses to do this: it performs the filesystem resize automatically along with the partition expansion, and it complains that the filesystem contains options that are beyond its ability to resize (because Oracle runs SELinux).

So then it degenerates into yak shaving...

Fortunately (and perhaps frighteningly), I have experience writing disk formatters and partitioners.

EDIT: I wrote 'em, but can't do real testing with the 32-bit OEL here at home, so real testing will have to wait until tomorrow.


Jun. 24th, 2009 11:31 pm
We're working on a project at work that uses Xen to run virtual machines (VMs). These VMs are running Oracle, which eats up disk space like a maniac. So the users asked us to add some 300GB chunks to the virtual filesystem. This is done by creating 300GB files and attaching them to the VMs.

The guys created the first one by using dd (the Unix "convert and copy" command, so named because "cc" was already taken by the C compiler) to copy 300GB of zeroes from the /dev/zero pseudo-device to a file. Then they made more by copying that file to additional locations.

The /dev/zero trick is at least reasonably efficient, as the kernel just zero-fills chunks of memory as needed. But copying that file is a lose, as the system has to suck all 300GB off the disk drives, and write it back out.

However, there exists a command (on Solaris and BSD) tailor-made for the purpose. It's called mkfile(8) and its sole purpose is to make files. And I remembered that it did so much faster than copying stuff from /dev/zero. But Linux doesn't have that command.

I really thought it would, but a quick scan of the RPMs on the install media didn't reveal anything likely. A little scripting (and rpm2cpio) produced a list of every file in the whole distribution, but no mkfile.

So I tried to dredge up memories of how mkfile worked. I vaguely recalled it hinged on creative use of mmap() or lseek(), so I read those manual pages, and found this:

The lseek() function allows the file offset to be set beyond the end of the file (but this does not change the size of the file). If data is later written at this point, subsequent reads of the data in the gap (a "hole") return null bytes ('\0') until data is actually written into the gap.

Aha! All I have to do is write a short program that parses command line arguments for the file name and size (with optional units), open the desired file, lseek() off to the size (minus one), and write a single null byte, and voila!
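That plan fits in a handful of lines. Here's a Python sketch of the same seek-past-the-end idea (the original was a standalone compiled program; the unit suffixes and function names here are illustrative, not the real thing):

```python
import sys

UNITS = {"k": 1024, "m": 1024**2, "g": 1024**3}  # optional size suffixes

def parse_size(text):
    """Parse a size like '300g', '4m', or a bare byte count."""
    text = text.lower()
    if text and text[-1] in UNITS:
        return int(text[:-1]) * UNITS[text[-1]]
    return int(text)

def mkfile(path, size):
    """Create a sparse file of the given size: seek past the end
    and write a single null byte, exactly as the lseek() manual
    page promises -- the gap reads back as null bytes."""
    with open(path, "wb") as f:
        if size > 0:
            f.seek(size - 1)
            f.write(b"\0")

if __name__ == "__main__":
    mkfile(sys.argv[1], parse_size(sys.argv[2]))
```

The file reports its full size to stat(), but the hole consumes no disk blocks until something writes into it, which is why this is nearly instant even for huge files.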

So I did. Sure enough, my home-rolled mkfile was faster than dd. On local drives, it was nearly instant, even for huge files. On the OCFS2 volumes used by Oracle, it was rather slower (journaling, coordination, and all), but still outran the next-fastest method 2:1. Unfortunately, it wouldn't run on our VM servers, as the Oracle VM Server (OVS) distribution was 32-bit, and I had compiled it on a 64-bit VM that had gcc installed. So I went and rebuilt it for 32 bits and tried again. No joy, the 32-bit OS only supports file sizes up to 2GB. A nice research exercise, but ultimately, it didn't end up helping me.

Note that the version I wrote is on the customer's closed network and I don't have access to it, but in case someone needs it, I found another person's version here. It's a little wonkily-written, but should serve.
