Probabilistic models of delay discounting allow the estimation of discount functions without assuming that these functions describe sharp boundaries in decision making. However, existing probabilistic models allow for two implausible possibilities: first, that a reward of zero might sometimes be preferred over a non-zero reward (e.g., $0 now over $100 in 1 year), and second, that the same reward might sometimes be preferred later rather than sooner (e.g., $100 in a year over $100 now). Here we show that probabilistic models of discounting perform better when they assign these cases a probability of zero. We demonstrate this result across a range of discount functions using nonlinear regression. We also introduce a series of generalized linear models that implicitly parameterize various discount functions, and demonstrate the same result for these models.
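To make the constraint concrete, the following is a minimal sketch, not the models reported in the paper: it assumes a hyperbolic discount function and a logistic choice rule, and the names present_value, p_choose_delayed, k, and beta are illustrative. The constrained version simply assigns probability zero to the two implausible choice patterns described above before applying the usual probabilistic choice rule.

```python
import numpy as np

def present_value(amount, delay, k):
    """Hyperbolic discount function: subjective value declines with delay."""
    return amount / (1.0 + k * delay)

def p_choose_delayed(a_now, a_later, delay, k, beta, constrained=True):
    """Probability of choosing the delayed reward over the immediate one.

    Uses a logistic choice rule over discounted values. With constrained=True,
    the two implausible cases are fixed rather than left to the choice rule:
    - $0 now is never preferred to a positive delayed reward
    - the same (or a smaller) amount delayed is never preferred to having it now
    """
    if constrained:
        if a_now <= 0 and a_later > 0:
            return 1.0  # e.g., $0 now vs $100 in 1 year: delayed reward is always chosen
        if a_later <= a_now and delay > 0:
            return 0.0  # e.g., $100 in a year vs $100 now: delayed reward is never chosen
    v_now = present_value(a_now, 0.0, k)
    v_later = present_value(a_later, delay, k)
    return 1.0 / (1.0 + np.exp(-beta * (v_later - v_now)))

# Example: $100 now vs $100 in 365 days.
print(p_choose_delayed(100, 100, 365, k=0.01, beta=1.0))                      # 0.0
print(p_choose_delayed(100, 100, 365, k=0.01, beta=1.0, constrained=False))   # > 0
```

In an unconstrained model the second call returns a strictly positive probability of preferring the delayed $100, which is the kind of implausible prediction the constrained models rule out.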