Humans are remarkably efficient at decision-making, even in “open-ended” problems where the set of possible actions is too large for exhaustive evaluation. Our success relies, in part, on efficient processes for calling to mind and considering the right candidate actions. When this process fails, however, the result is a kind of cognitive puzzle: the value of a solution or action would be obvious as soon as it was considered, but it never gets considered in the first place. Recently, machine learning (ML) architectures have attained or even exceeded human performance on certain open-ended tasks, such as the games of chess and Go. We ask whether the broad architectural principles that underlie ML success in these domains tend to generate consideration failures similar to those observed in humans. We demonstrate a case in which they do, illuminating how humans make open-ended decisions, how this relates to ML approaches to similar problems, and how both architectures lead to characteristic patterns of success and failure.