Imperfections in study design and data collection often imply that a realistic model for the observed data is not fully identified. A common response is to add enough modelling assumptions to achieve identifiability, at the cost of realism. A less common response is to retain the nonidentified model, together with whatever defensible prior information is available, as a basis for Bayesian inference. Some rudimentary analysis of estimator performance sheds light on the relative merits of the two approaches. In particular, intuition about the 'learnability' of interest parameters in nonidentified models is seldom trustworthy; surprising findings abound. The discussion draws on biostatistical applications, where limitations such as measurement error, unobserved confounding, and selection bias often mean that identifiability can be bought only with assumptions that stretch the bounds of plausibility.
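A minimal numerical sketch of the phenomenon described above (the model, priors, and true values are invented for illustration, not taken from the source): suppose the data inform only the sum of two parameters, so the model is nonidentified. With a proper prior, the posterior for an individual parameter still updates, but its standard deviation does not shrink to zero no matter how much data arrive — it levels off at a floor determined by the prior.

```python
import numpy as np

rng = np.random.default_rng(0)
theta1_true, theta2_true = 0.3, -0.8  # illustrative true values

for n in [10, 1000, 100_000]:
    # Data depend on the parameters only through theta1 + theta2,
    # so (theta1, theta2) is not identified from y alone.
    y = rng.normal(theta1_true + theta2_true, 1.0, size=n)

    # Conjugate Gaussian update: prior (theta1, theta2) ~ N(0, I),
    # likelihood ybar | theta ~ N(a' theta, 1/n) with a = (1, 1).
    a = np.array([1.0, 1.0])
    prec = np.eye(2) + n * np.outer(a, a)     # posterior precision matrix
    cov = np.linalg.inv(prec)                 # posterior covariance
    mean = cov @ (n * a * y.mean())           # posterior mean

    print(f"n={n:>7}  sd(theta1)={np.sqrt(cov[0, 0]):.3f}  "
          f"sd(theta1+theta2)={np.sqrt(cov.sum()):.4f}")
```

The identified quantity theta1 + theta2 is learned (its posterior sd vanishes as n grows), while the posterior sd of theta1 alone drops from the prior value 1 only to a floor of sqrt(1/2) ≈ 0.71 — the prior conditioned on the true sum. Partial learning occurs, but checking such limits by direct calculation is safer than relying on intuition.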
Group for Research in Decision Analysis