Author: Potter, Andrew Harland
Title: Evaluating source-based writing
Type: Thesis
Date: 2022
URI: https://udspace.udel.edu/handle/19716/32197
DOI: https://doi.org/10.58088/sfp0-mg71
Identifier: 1368025751
Language: en
Keywords: Automated writing evaluation; Natural language processing; Source-based writing; Writing instruction

Abstract:
This dissertation contains three papers, each examining an aspect of evaluating the quality of source-based writing. The first paper is a systematic methodological literature review; in the results, I discuss the constructs commonly measured in source-based writing research and evaluate the evidence of validity and reliability for commonly used measures. In the second study, I employ structural equation modeling (SEM) with automated natural language processing (NLP) indices to examine linguistic features of source-based writing that are associated with human ratings of writing quality for college developmental writers. Mediation analyses are also used to identify potential differences in linguistic factors resulting from writing instruction. In the third study, I examine how middle school students evaluate their own source-based writing while revising argumentative essays in an automated writing evaluation (AWE) platform. Mixed methods are used to draw conclusions about how students revise with technology and the extent to which their knowledge of evaluation criteria and feedback affects their revising process and performance. Based on findings synthesized across the three studies, implications are discussed for using automated NLP indices to support the evaluation of source-based writing in research and in potential AWE development. Recommendations are also made for how AWE technology can more effectively support students' writing development.