Another spin on this compiling thing...
Source code can be optimized in many different ways as it is turned into binary instructions, one of the most common being optimizing for a particular CPU. When people build software binaries, they tend to pick a compromise between the number of machines the binary will run on and how well optimized it is for any particular system. This is especially true of the supporting libraries that are packaged up into a Linux distribution.
If you want to squeeze every last bit of performance out of your system, you can compile from source to produce binaries that are better optimized for your particular computer, i.e., give up general portability to get better use of your hardware. Sometimes the difference is quite noticeable. It can be tricky, though: some aggressive optimization settings may break less-tested code, making the binaries unreliable, or simply cause the compilation to fail.
Linux distributions like Gentoo or Sorcerer, and the *BSDs with their ports systems, try to make this easier by providing command line tools and configuration files to control how source is assembled, built, and deployed.
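On Gentoo, for example, those compiler settings live in one central file, `/etc/portage/make.conf`, and every package built by Portage picks them up. A minimal fragment might look like this (the flag values are illustrative, not a recommendation):

```shell
# /etc/portage/make.conf -- illustrative values
COMMON_FLAGS="-O2 -march=native -pipe"   # tune all packages for this CPU
CFLAGS="${COMMON_FLAGS}"                 # C compiler flags
CXXFLAGS="${COMMON_FLAGS}"               # C++ compiler flags
MAKEOPTS="-j4"                           # parallel build jobs
```

This is where the "tricky" part shows up too: crank these flags up too far and some packages will miscompile or fail to build.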