As someone who shot film pre-digital, I've always understood that a smaller aperture means more depth of field, and also that stopping down gets you more sharpness. That second part doesn't always hold, and newer digital cameras seem to have more of an issue with it. I've seen examples in various places, but one that makes the difference plain to see is at The Digital Picture, where a high quality lens is compared at f/5.6 and f/16 on the Canon 50D.
This has implications for how you shoot: when doing macro you want maximum depth of field, but stopping down to f/32 may be limiting the ultimate sharpness. It is a trade-off, to be sure.
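For what it's worth, here is a rough back-of-the-envelope sketch of that trade-off in Python. The 1:1 magnification, the 0.019 mm circle of confusion and the 550 nm wavelength are values I've assumed for illustration, not anything from the articles, and the formulas are the usual close-up approximations, so treat the output as ballpark only.

```python
# Rough macro trade-off: depth of field grows with the f-number,
# but so does the diffraction blur (Airy disk) on the sensor.
# Assumed values (mine, not from the articles): 1:1 magnification,
# 0.019 mm circle of confusion (APS-C-ish), 550 nm green light.

def macro_tradeoff(f_number, magnification=1.0, coc_mm=0.019, wavelength_mm=0.00055):
    """Return (approximate total depth of field in mm, Airy disk diameter in mm)."""
    # Close-up DoF approximation: 2 * c * N * (m + 1) / m^2
    dof = 2 * coc_mm * f_number * (magnification + 1) / magnification**2
    # Effective aperture grows with magnification: N_eff = N * (1 + m)
    n_eff = f_number * (1 + magnification)
    airy = 2.44 * wavelength_mm * n_eff
    return dof, airy

for n in (8, 16, 32):
    dof, airy = macro_tradeoff(n)
    print(f"f/{n}: DoF ~ {dof:.2f} mm, Airy disk ~ {airy * 1000:.0f} um")
```

If those assumptions are anywhere near right, going from f/8 to f/32 roughly quadruples the depth of field, but the diffraction blur quadruples right along with it, which is exactly the trade-off I mean.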
Personally I don't know that this would affect my prints very much, but for now I'm going to stick to f/8 when I want maximum sharpness and don't need the depth of field. The interesting part of the 50D review on the same site is the table showing the aperture at which diffraction starts to reduce sharpness: f/7.6 for the 50D, f/10.3 on my 20D and f/13.2 on the 5D. Again, I don't know how these numbers are calculated, but if they are accurate it is quite a difference between bodies!
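One rule of thumb I've seen is that diffraction starts to become visible once the Airy disk (diameter roughly 2.44 × wavelength × f-number) spans about two pixel widths. A quick sketch using that criterion, with the published sensor sizes and megapixel counts as I understand them and an assumed green wavelength of about 510 nm, lands close to the site's figures, though I can't say that's actually how they calculate them.

```python
import math

# Rule-of-thumb sketch: diffraction "starts to matter" when the Airy disk
# diameter (2.44 * wavelength * f-number) reaches about two pixel widths.
# Sensor sizes and megapixel counts are the published specs as I recall them;
# the 510 nm wavelength and the two-pixel criterion are my assumptions.

WAVELENGTH_UM = 0.51  # green light, micrometres (assumed)

def pixel_pitch_um(width_mm, height_mm, megapixels):
    """Approximate pixel pitch in micrometres (mm^2 -> um^2 and Mpx -> px factors cancel)."""
    return math.sqrt(width_mm * height_mm / megapixels)

def diffraction_limited_aperture(pitch_um, wavelength_um=WAVELENGTH_UM):
    """f-number at which the Airy disk diameter reaches two pixel widths."""
    return 2 * pitch_um / (2.44 * wavelength_um)

bodies = {
    "50D": (22.3, 14.9, 15.1),
    "20D": (22.5, 15.0, 8.2),
    "5D":  (35.8, 23.9, 12.8),
}

for name, (w, h, mp) in bodies.items():
    pitch = pixel_pitch_um(w, h, mp)
    print(f"{name}: pitch ~ {pitch:.1f} um, DLA ~ f/{diffraction_limited_aperture(pitch):.1f}")
```

Under those assumptions the 50D, 20D and 5D come out around f/7.5, f/10.3 and f/13.1 respectively, which is close enough to the quoted f/7.6, f/10.3 and f/13.2 that the rule of thumb seems plausible.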
If anyone can comment on why a high-resolution sensor is more troubled by diffraction than a low-resolution sensor of the same size, please advise! Another example is in a Canon G10 comparison, which talks about how the aperture in program mode stays pretty wide (f/4) to help keep diffraction to a minimum.
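Plugging the G10's rough specs into the same rule of thumb (14.7 MP on a nominal 1/1.7" sensor of about 7.6 × 5.7 mm, both figures from memory, plus the same assumed 510 nm wavelength) puts its threshold somewhere around f/2.8, which, if the criterion is roughly right, would be consistent with the camera being reluctant to stop down much beyond f/4.

```python
import math

# Same rule of thumb applied to a small-sensor compact. The G10 specs
# (14.7 MP, roughly 7.6 x 5.7 mm sensor) are approximate published figures,
# and the 510 nm wavelength and two-pixel criterion are the same assumptions
# as in the sketch above.

pitch_um = math.sqrt(7.6 * 5.7 / 14.7)   # ~1.7 um pixel pitch
dla = 2 * pitch_um / (2.44 * 0.51)       # ~f/2.8
print(f"G10: pitch ~ {pitch_um:.1f} um, DLA ~ f/{dla:.1f}")
```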