Traditional studies in uncertainty visualization often require naive participants to complete complex, domain-specific tasks in order to examine how effectively a visualization conveys uncertainty to support decision making. However, without assessing whether participants understand such tasks, it is difficult to determine whether differences in performance stem from the visualization itself or from varying degrees of task comprehension. Although training is commonly administered to non-experts, it has not been a focal point of uncertainty visualization research to date. In this paper, we evaluate how variations in training, coupled with assessments of knowledge acquisition and application, can inform uncertainty visualization research. Overall, we found significant performance differences across training conditions, illustrating how training influences task comprehension, which in turn influences decision making. By quantifying these performance differences, this study highlights training as a critical component of uncertainty visualization studies.