Computational methods for Target Fishing permit the discovery of new targets of a drug, which may result in its repositioning for a new indication or improve our current understanding of its efficacy and side effects. Being a relatively recent class of methods, there is still a need to improve their validation, which is technically difficult, often limited to a small subset of the targets, and not easily interpretable by the user. Here we propose a new validation approach and use it to assess the reliability of ligand-centric techniques, which by construction provide the widest coverage of the proteome. On average over approved drugs, we find that only five predicted targets need to be tested in order to find at least two true targets with submicromolar potency, although strong variability in performance is observed. We also identify an average of eight known targets per approved drug, which suggests that polypharmacology is a common and pronounced phenomenon. In addition, we observe that many known targets of approved drugs are currently missed by these methods. Lastly, using a control group of randomly selected molecules, we discuss how the data-generation process confounds this analysis and its implications for method validation.