Empirical evaluations of uncertainty visualizations often employ complex experimental tasks to ensure ecological validity. However, if naïve participants are not sufficiently trained on such tasks, differences in performance could stem from the visualizations themselves or from differences in task comprehension, making the findings difficult to interpret. Research has begun to assess how training relates to performance on decision-making tasks that use uncertainty visualizations. This study continues that line of research by investigating how training in general, and feedback in particular, affect performance on a simulated resource allocation task. Additionally, we examined how training and feedback alter metacognition and workload to produce differences in cognitive efficiency. Our results suggest that, on a complex decision-making task, training plays a critical role in shaping accuracy, subjective workload, and cognitive efficiency. This study has implications for improving research on complex decision making and for designing more efficacious training interventions for evaluating uncertainty visualizations.