I've run into several problems backing up our central file servers with Bacula, mostly centered around the sheer number of files (~6 million) a single job must process and store into the MySQL catalog. I ran into the following error last night, attempting to back up the entire 6TB array as a single job:
  07-Nov 18:10 backup-dir JobId 3: Fatal error: sql_create.c:732 sql_create.c:732 insert INSERT INTO batch VALUES (1580771,3,'/Volumes/0/export/users/kodama/Desktop/GAP/gap4r4/small/small2/','sml800.z','OAAAD DkeW IGk B ih C+ A KZn BAA BY BHLtzL 1sNQO BFnqZZ A A C','0') failed:
  Incorrect key file for table '/tmp/#sql2459_94_0.MYI'; try to repair it
After doing a bit of research, I've concluded that the /tmp volume, which is only a 256M tmpfs partition, is filling to capacity before the job can complete. Restarting the job this morning confirmed that MySQL is spooling temporary table data into /tmp:
  [root@backup tmp]# ls -l /tmp/
  total 332
  -rw-rw---- 1 mysql mysql 319276 Nov  8 09:48 #sql511e_3_0.MYD
  -rw-rw---- 1 mysql mysql   1024 Nov  8 09:48 #sql511e_3_0.MYI
  -rw-rw---- 1 mysql mysql   8722 Nov  8 09:48 #sql511e_3_0.frm
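Before changing anything, it's worth confirming where MySQL thinks its temporary directory is and how much room that volume actually has. Something along these lines should do it; the mount point is specific to this box:
  mysql -e "SHOW VARIABLES LIKE 'tmpdir';"
  df -h /tmp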
My solution for the time being is to reconfigure MySQL to use /var/tmp for its temporary storage rather than /tmp. This places the data on a much larger file system.
# /etc/my.cnf
[mysqld]
tmpdir=/var/tmp
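tmpdir isn't a dynamic variable, so mysqld has to be restarted before the change takes effect. Assuming a stock Red Hat-style init script, that looks roughly like:
  service mysqld restart
  mysql -e "SHOW VARIABLES LIKE 'tmpdir';"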
I'm also planning to split the backup into several smaller jobs, using regular expressions to include only a piece of the home directory tree at a time. This should keep the number of files each job has to handle under a reasonable threshold.
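As a rough sketch of what I have in mind: Bacula's FileSet Options clauses support RegexDir matching, and the usual pattern is one Options clause that matches the directories you want, followed by a second one that excludes everything else. The FileSet name and the regex below are illustrative, not the configuration I'm actually running:
# bacula-dir.conf (illustrative FileSet, untested)
FileSet {
  Name = "Users-a-through-m"
  Include {
    Options {
      signature = MD5
      # match only home directories starting with a through m
      RegexDir = "^/Volumes/0/export/users/[a-m]"
    }
    Options {
      # exclude every other directory under the tree
      RegexDir = ".*"
      Exclude = yes
    }
    File = /Volumes/0/export/users
  }
}
A handful of FileSets like this, each paired with its own Job, would cap the per-job file count without changing what ultimately gets backed up.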