A few months ago, the “Reproducibility and Replicability in Science” report from the National Academies of Sciences, Engineering, and Medicine was published. It included a set of criteria to help determine when testing replicability may be warranted:

1) The scientific results are important for individual decision-making or for policy decisions.
2) The results have the potential to make a large contribution to basic scientific knowledge.
3) The original result is particularly surprising, that is, it is unexpected in light of previous evidence and knowledge.
4) There is controversy about the topic.
5) There was potential bias in the original investigation, due, for example, to the source of funding.
6) There was a weakness or flaw in the design, methods, or analysis of the original study.
7) The cost of a replication is offset by the potential value in reaffirming the original results.
8) Future expensive and important studies will build on the original scientific results.

However, points 3-6 in particular encourage the replication of poor studies or of studies with data quality issues.
An interesting alternative view comes from Andrew Gelman, who encourages attempts to replicate the good studies instead. From this perspective, replication studies could indeed provide real incentives for scientists to focus on Good Research Practice and to conduct their studies with as little bias as possible.
As one commenter pointed out: “I put replications of my work in my CV. After all, it shows both that somebody was interested enough to repeat/continue the work and that they could.”