The basic distinction is that the latter (the fabric) is a component of the former (the cluster). At a very high level, an HPC cluster is a set of machines connected by a high-speed interconnect. Traditionally those machines were homogeneous, but even that part of the definition is changing: some nodes might have coprocessor cards while others do not, and nodes may run different architectures or operating systems, or have different resources available.
When we say fabric, we usually mean the software-level layer that MPI, for example, uses to communicate between the nodes. Underneath the fabric sit the low-level interconnect (e.g. Ethernet, InfiniBand) and all of its associated drivers.
So, starting at the highest application level: you as a developer make calls to the MPI library; the MPI library interacts with the underlying fabric interface (e.g. with Intel MPI you might select the 'dapl' fabric to run over an InfiniBand network); and that fabric layer in turn communicates with the firmware and drivers sitting on the network cards.
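To make the layering concrete, here is a minimal sketch of how fabric selection typically looks in practice with Intel MPI. This assumes an Intel MPI installation; the application name `./my_app` is hypothetical, and the exact fabric values accepted depend on your MPI version.

```shell
# Select the fabric Intel MPI should use for communication.
# 'shm:dapl' means shared memory within a node and DAPL (e.g. over
# InfiniBand) between nodes. Newer Intel MPI versions use OFI/libfabric
# providers instead (e.g. I_MPI_FABRICS=shm:ofi).
export I_MPI_FABRICS=shm:dapl

# Launch the application. The MPI calls inside ./my_app (hypothetical
# binary) now travel over the selected fabric, which in turn drives the
# interconnect hardware and its drivers.
mpirun -n 64 ./my_app
```

The point is that your application code never changes: swapping the fabric is purely a runtime/configuration choice, which is exactly why the fabric is treated as a distinct layer between MPI and the interconnect.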
I hope this helped a bit :) Let me know if you have further questions.