I actually sit on the other side of the fence. Machine epsilon is always useful, even if you don't know how your data is scaled. Why? Because it defines the resolution of the computing engine's arithmetic at the given precision, so at the very least you have a starting point that is tied to the fundamental computations.
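For what it's worth, here's a minimal C sketch of that point (any language would do): machine epsilon is just the spacing of representable values around 1.0, so it bounds the relative resolution of every individual operation at that precision.

```c
#include <stdio.h>
#include <float.h>

int main(void)
{
    /* Machine epsilon: the gap between 1.0 and the next representable value,
       i.e. the relative resolution of the arithmetic at each precision. */
    printf("float  epsilon: %g\n", FLT_EPSILON);   /* ~1.2e-7  */
    printf("double epsilon: %g\n", DBL_EPSILON);   /* ~2.2e-16 */

    /* Anything at epsilon/2 or below simply vanishes when added to 1.0. */
    printf("1.0 + DBL_EPSILON/2 == 1.0 ? %d\n",
           (1.0 + DBL_EPSILON / 2) == 1.0);
    return 0;
}
```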
Realistically, you've always got a natural cap on your data, especially if you're doing fixed-point or integer (with scaling) operations. Even if it isn't obvious, typing in 1e-6 as an absolute tolerance versus 1e+10 as a scale factor amounts to the same thing. Although I might argue that the latter should raise a red flag - perhaps double precision is overkill! Single precision has a machine epsilon of ~1e-7 and is probably more appropriate if your scaling factors are huge.
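To make the "amounts to the same thing" point concrete, here's a rough C sketch (the helper names `close_abs`/`close_rel` and the particular numbers are made up purely for illustration): an absolute tolerance applied to data you know sits near some scale is essentially a relative tolerance of tolerance/scale, and once you express it relatively you can hold it up against FLT_EPSILON or DBL_EPSILON to judge which precision the problem really needs.

```c
#include <stdio.h>
#include <float.h>
#include <math.h>

/* Absolute-tolerance test: meaningful only when you know the data's scale. */
static int close_abs(double a, double b, double tol)
{
    return fabs(a - b) <= tol;
}

/* Relative-tolerance test: the tolerance scales with the magnitudes. */
static int close_rel(double a, double b, double rel_tol)
{
    return fabs(a - b) <= rel_tol * fmax(fabs(a), fabs(b));
}

int main(void)
{
    const double scale = 1e+10;     /* natural cap on the data (illustrative) */
    double a = 0.97 * scale;
    double b = a + 2000.0;

    /* For data sitting near 1e+10, an absolute tolerance of 1e+4 behaves
       like a relative tolerance of 1e+4 / 1e+10 = 1e-6.  Comparing that
       relative figure against FLT_EPSILON (~1.2e-7) and DBL_EPSILON
       (~2.2e-16) tells you how much precision the comparison actually
       requires. */
    printf("abs tol 1e+4 : %d\n", close_abs(a, b, 1e+4));
    printf("rel tol 1e-6 : %d\n", close_rel(a, b, 1e-6));
    printf("FLT_EPSILON  : %g\n", FLT_EPSILON);
    return 0;
}
```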
If you consider the two examples you mention, I think you'll find they are actually well suited to this scaling approach. A fixed-point value is bounded by definition, and an integer division is typically scaled by the numerator or denominator depending on the usage (expansion or contraction). That doesn't mean examples to the contrary don't exist but, in my experience, they won't be solved robustly through the use of a fixed tolerance.
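As a sketch of why the fixed-point case fits the scaling approach (the Q16.16 format and the helper here are hypothetical, chosen just for illustration): the representation itself fixes both the bound and the resolution, so a sensible comparison tolerance falls straight out of the format rather than being typed in by hand.

```c
#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Hypothetical Q16.16 fixed-point format: 16 integer bits, 16 fraction bits.
   The value is bounded by the format and its resolution is one LSB = 2^-16,
   so a comparison tolerance comes directly from the representation. */
#define Q16_16_FRAC_BITS 16
#define Q16_16_LSB       (1.0 / (1 << Q16_16_FRAC_BITS))

static double q16_16_to_double(int32_t x)
{
    return (double)x * Q16_16_LSB;
}

int main(void)
{
    int32_t a = (int32_t)lround(3.25      / Q16_16_LSB); /* 3.25 in Q16.16 */
    int32_t b = (int32_t)lround(3.2500153 / Q16_16_LSB); /* ~1 LSB away    */

    /* Because the format bounds both the range and the resolution,
       "equal to within one LSB" is a well-defined, scale-aware tolerance. */
    double diff = fabs(q16_16_to_double(a) - q16_16_to_double(b));
    printf("difference    : %g\n", diff);
    printf("within 1 LSB? : %d\n", diff <= Q16_16_LSB);
    return 0;
}
```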