A while back I had some RAID problems. I had a disk fail, and the new disk would give me lots of errors when I moved large amounts of files around. I'd see a lot of these in the logs:
Jan 26 04:15:02 hostname kernel: hdb: dma_intr: status=0x51 { DriveReady SeekComplete Error }
Jan 26 04:15:02 hostname kernel: hdb: dma_intr: error=0x84 { DriveStatusError BadCRC }
I figured out what was happening. Turns out one drive in the RAID pair (/dev/hdd) had DMA off, while /dev/hdb had it turned on. I don't know why that was the case. Perhaps my late-night fiddling resulted in some sort of fat-fingering (wait... that sounded really bad). Anyway, I decided to do some tests by copying about 150MB of MP3s to my array with DMA set to either on or off.
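For the record, checking and flipping the DMA flag is just a couple of hdparm commands (run as root; the device names are from my box, so adjust to taste):

```shell
# Query the current DMA setting on each half of the array.
# hdparm prints a line like "using_dma = 0 (off)" for each device.
hdparm -d /dev/hdb
hdparm -d /dev/hdd

# Turn DMA on for the drive that had it off
hdparm -d1 /dev/hdd
```

The setting doesn't survive a reboot, so the `-d1` line also needs to go in a boot script (rc.local or your distro's equivalent).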
With DMA on/off (that is, on for one drive and off for the other, in either combination), I get the errors. With it set to off/off, I don't get errors, but the array is slower than a wounded prawn and a huge CPU hog: the copy takes around 50 seconds and the load average (basically, how busy the CPU is) hovers around 4.50. I don't care about slow, since this is an NFS/Samba server and CAT5 is my bottleneck. The CPU load I do care about, since the box does other things besides simply serve files. With DMA set to on for both drives, I also don't get the errors, which is very cool: the copy takes around 10 seconds and the load average is about 0.70. All of that is to be expected, since DMA gives quite a performance boost. But it's good to know I can turn it on.
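My "benchmark" was nothing fancier than timing the copy and eyeballing the load average afterwards. A repeatable stand-in for the MP3 pile (paths here are just an example, using /tmp instead of the real array) looks like:

```shell
# Create ~150MB of throwaway data to stand in for the MP3s
dd if=/dev/zero of=/tmp/dma-test.bin bs=1M count=150 2>/dev/null

# Time the copy; with DMA off, expect the "real" time to balloon
time cp /tmp/dma-test.bin /tmp/dma-test-copy.bin

# The last three numbers are the 1/5/15-minute load averages
uptime
```

Copying zeros off the same disk isn't a perfect proxy for a network-fed RAID write, but it's enough to see the DMA on/off difference in both wall-clock time and load.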
Anyway, the mystery of the BadCRC is over, finally. Now I need to look into the mystery of why the roof above my office leaks.