# Full file system and I/O redirection



> This is an English translation of a blog item I wrote for 
> [AT Computing](http://www.atcomputing.nl/blog/)

While I was giving a course, a student showed me the following:

    $ ps -ef > /tmp/file

where `/tmp` is 100% full. This yields no errors and *seems* to have worked!

Let's try to see what is going on here.

First, let's fill up a file system. We are going to use one
mounted under `/media/disk`:

    $ cp /dev/zero /media/disk/HUGE
    cp: writing `/media/disk/HUGE': No space left on device

Next, we need a small program to test a few things:

    #include <unistd.h>

    int
    main(void)
    {
	    write(1, "hello\n", 6);
	    return 0;
    }

This writes the 6 bytes **hello\n** to *standard output*. Notice that
I don't do *any* error checking. A correct implementation would check
how many bytes the `write()` system call actually wrote. See also
`man 2 write`.

After compilation we have our program *output*:

    $ gcc output.c -o output    

The usual I/O redirection works as expected:

    $ ./output > tmp_file
    $ cat tmp_file
    hello

Now try this on our 100% filled file system:

    $ ./output > /tmp/tmp_file

No errors, so it looks like this went OK.

    $ ls -l /tmp/tmp_file
    -rw-rw-r-- 1 miekg miekg 0 Jun 10 12:47 /tmp/tmp_file

We *do* have a file in `/tmp`, but its size is 0 bytes. That leaves
us with two questions. How can you create a file on a full
file system? And why do we not see any data in `/tmp/tmp_file`, given
that we saw *no* errors?

# Directories
If you create a directory on Linux (and Unix), then an `ls -ldh` will
say this (empty) directory has a size of 4.0K:

    $ mkdir /tmp/test
    $ ls -ldh /tmp/test
    drwxrwxr-x 2 miekg miekg 4.0K Jun 10 12:52 /tmp/test

That 4.0K is reserved. When we create files and/or subdirectories
under `/tmp/test`, the size of the directory is only enlarged
when we cross the 4.0K boundary. In a little bit of testing
I saw that Linux then sets the new size to 12K:

    $ cd /tmp/test
    $ for i in $(seq 0 260); do echo $i; touch file.$i; done
    $ ls -ldh /tmp/test
    drwxrwxr-x 2 miekg miekg 12K Jun 10 12:59 /tmp/test

So as long as you don't cross that 4.0K boundary, the system will allow
you to create files. Even on otherwise *full* file systems!

# Writing the data

If we do a `./output > /tmp/tmp_file`, `output` does not know it
is writing to disk. It just writes to its *stdout*, and in this
case it does not check whether the write succeeds. If we add this
check we get a different story.

First amend the source:

    #include <unistd.h>

    int
    main(void)
    {
	    if (write(1, "hello\n", 6) != 6) {
		    write(2, "output: write error\n", 20);
		    return 1;
	    }
	    return 0;
    }

Recompile:

    $ gcc output.c -o output

Retest:

    $ ./output > /tmp/tmp_file
    output: write error

Now we have fixed `output`, but there are still a lot of programs
that have this bug. The following commands all fail without
reporting any errors:

    $ ps -ef > /tmp/tmp_file

    $ free > /tmp/tmp_file

    $ grep 'as' testfile > /tmp/tmp_file

    $ perl -e 'print "hello";' > /tmp/tmp_file

Luckily a lot of other programs do the right thing:

    $ who > /tmp/tmp_file
    who: write error: No space left on device

Also, many (if not all) programs made by [GNU](http://www.gnu.org)
do the correct thing.

