Doctoral Dissertations (Winter 2014 to Present)
New submissions to the University of Delaware Doctoral Dissertations collection are added as they are released by the Office of Graduate & Professional Education, which deposits all dissertations from a given semester after the official graduation date.
Doctoral dissertations from 1948 to present are also available online through Dissertations & Theses @ University of Delaware. Check DELCAT Discovery to locate print or microform copies of dissertations that are not available online.
More information is available at Dissertations & Theses.
Browsing Doctoral Dissertations (Winter 2014 to Present) by Title
Now showing 1 - 20 of 2432
Item A 3D photonic sensor integrated tissue model for strain sensing (University of Delaware, 2019) Geiger, Sarah J.
The study of wound healing and wound healing therapies is motivated by the need to prevent the formation of thick scar tissue in pathologically healing wounds and tissues that rely on their elasticity and modulus to perform their function, such as cardiac and vocal fold tissues. The development of in vitro platforms that can detect cell-induced strain in mimics of healing wounds has expanded our understanding of the mechanical, chemical, and physical cues that drive wound healing. However, these platforms are limited in their resolution, dimensionality, and ability to gather information about changes in strain throughout thick, opaque tissue models. In this work we describe the development of flexible, deterministically buckled 3D photonic device arrays that are designed and fabricated to meet the specific spatial, temporal, and strain resolution requirements needed for the detection of cell-induced strains in a millimeters-thick tissue model. ☐ A polymer or silicone-clad Ge23Sb7S70 chalcogenide glass resonant cavity array is selected for this application, as high-quality chalcogenide glass devices can be deposited at low temperatures onto flexible and cytocompatible substrates. However, the reliability of these and other highly sensitive chalcogenide glass devices is affected by their aging-induced structural relaxation. The refractive index shifts resulting from this relaxation are on the same order of magnitude as the index shifts used to detect small-scale strain with our device arrays. In order to overcome this limitation, we develop and demonstrate a high-precision refractometry technique that tracks small changes in the refractive index of Ge23Sb7S70 chalcogenide glass, down to 10⁻⁵ RIU. This technique allows us to both identify the aging mechanism in this glass with high accuracy and compare different index stabilization methods to optimize our device processing. ☐ The expected performance of these arrays was tested both through finite element modeling and a proof-of-concept in vitro experiment. In the modeling experiments, PDMS buckled geometries were deformed in cardiac graft tissue-like environments. From these experiments we showed that devices embedded in these materials could easily detect small, localized changes in stiffness theoretically caused by limited perfusion of growth factor throughout this model. In vitro, an SU-8 clad, symmetrically buckled device was exposed to a contracting collagen gel, and the device response as a result of this deformation was analyzed. ☐ These deterministically buckled arrays of polymer or silicone-clad chalcogenide glass resonant cavities demonstrate sensitivity to relevant strains in 3D cell culture platforms, excellent ease of use, and the potential for a wide range of applications. This technique can be used as a standalone, low-cost, plug-and-play local strain gauge for use in soft material systems. Thus, this technique’s flexibility both in terms of its deformability and range of applications easily surpasses other methods of in vitro force or strain detection.
Item 3D reconstruction from coded plenoptic sampling (University of Delaware, 2019) Zhou, Mingyuan
The plenoptic function describes a scene in terms of light rays; it is a 7-dimensional function with spectral, directional, spatial, and temporal variation.
Traditional plenoptic sampling is acquired either by employing a standard plenoptic camera or a camera array, and the spatial-angular sampling can be potentially used to model 3D surface. ☐ In this dissertation, I present three coded plenoptic sampling schemes, i.e., the rotational cross-slit (R-XSlit) plenoptic sampling, the wavelength coded plenoptic sampling, and the polarimetric plenoptic sampling. The additional coded sampling information, such as non-centric sampling, spectral sampling, and polarization sampling, are conducive to 3D reconstruction. Therefore, I also develop the corresponding 3D reconstruction framework for each of them. ☐ First, I introduce the R-XSlit plenoptic sampling scheme by exploiting a special noncentric camera called the crossed-slit or XSlit camera. An XSlit camera acquires rays that simultaneously pass through two oblique slits. I show that instead of translating the camera as in the pinhole case, we can effectively sample the 4D plenoptic sampling by rotating individual or both slits while keeping the camera fixed, which makes the plenoptic sampling coded in the spatial-angular domain. The theoretical analysis shows that it provides denser spatial-angular sampling, which is beneficial for scene reconstruction and rendering. I develop a volumetric reconstruction scheme for scene reconstruction. ☐ Second, I present two wavelength coded plenoptic sampling schemes in the visible and infrared spectrum respectively. I firstly design a compact system with lights and cameras arranged on concentric circles to acquire a concentric wavelength coded plenoptic sampling in the visible spectrum, the cameras on each ring capture images in a unique spectrum. I employ the Phong dichromatic model onto its plenoptic function for 3D reconstruction and spectral reflectance map estimation. Experiments show that our technique can achieve high accuracy and robustness in geometry recovery. Moreover, I present an infrared wavelength coded plenoptic sampling and develop a hybrid sensing framework to efficiently achieve pose estimation and face reconstruction by exploiting the captured reflected infrared rays from human eyes. ☐ Finally, I present a polarimetric plenoptic sampling framework for recovering 3D surfaces, the polarization of light is included in its plenoptic function. I employ a new analysis analogous to the optical flow to correlate the polarization radiance function with both surface normal and depth. The proposed framework effectively resolves the azimuth-zenith ambiguity by forming an over-determined system. Extensive experiments on both synthetic and real data demonstrate that the technique is capable of recovering extremely challenging glossy and textureless objects.Item A BALANCING ACT: EXAMINING THE RELATIONSHIP BETWEEN SCHOOL LEADERSHIP AND POLITICAL TENSION IN EDUCATION DECISION MAKINGCahill, AmandaThe United States has a long history of political tension around education at the federal, state, and local level. District and school leaders must balance students’ learning and social needs while working to address political tension barriers on education decisions. Political tension involves the feeling of strain or anxiety around topics aligned with political ideology. The strain and anxiety generally result in difficulty with efficiently moving forward with clear decision making, specifically in education. Education decisions are choices made by district and school leaders that affect students’ growth academically within the school setting. 
Examples of education decisions could include restructuring the administration team, endorsing a specific curriculum, or implementing an enrichment program. Researchers have written extensively about effective leadership approaches, but there is a lack of literature on strategies for how school leaders, particularly at the district level, can address political tensions that create barriers to decision making. School leaders' abilities to address political tension would support efficacious decision making with a focus on positive outcomes for students. Over time, as politics has become more contentious, districts’ decision making has come under attack. These attacks significantly disrupt the ways in which school districts make and implement reforms, school improvements, policy changes, and the like. District and school leaders lack strategies and resources to proactively address political tension barriers, which results in sporadic and often poorly implemented or communicated decision making. These implementation and communication failures are then poorly received by the community. The constant politicization of school decisions has caused leaders to focus on responding to political tensions, rather than focusing on decision making that can positively impact students’ learning needs. Therefore, it is imperative that educational leaders are equipped and prepared to address political tensions that cause barriers so that they can remain focused on decision-making processes that result in positively supporting students’ learning and social needs. Education leaders need both the resources and the opportunity to apply decision-making strategies that simultaneously address political tensions and remove barriers in order to communicate more effectively and work harmoniously with their school community. In my educational leadership portfolio (ELP), I focus specifically on how political tensions on the national level are influencing the decision making of local district and school leaders. I evaluated the relationship between district and school leaders’ decision-making process and political tension barriers within a rural school district in the Mid-Atlantic region. I used a rural school district as a case study to explore strategies district and school leaders need to manage political tension barriers without compromising decision making aimed at supporting student growth academically, socially, and emotionally. In this ELP, I gathered data through a case study. The ELP proposes a decision-making framework that district and school leaders can proactively utilize when they predict, or encounter, political tensions that result in barriers that may impede their ability to make decisions.
Item A BOUNDING SURFACE PLASTICITY-BASED HYPERELASTIC CONSTITUTIVE MODEL FOR UNSATURATED GRANULAR SOILS
Kadivar, Mehdi
One-third or more of the earth’s surface is situated in arid or semi-arid regions where the potential evaporation exceeds the precipitation and soils exist in their unsaturated state. Unsaturated soils are also abundant in most parts of the world where there is seasonal groundwater table fluctuation. The variation in the degree of saturation gives rise to a gamut of variability in soil’s hydromechanical behavior. The co-existence of pore-air and water in the void spaces and their interaction with each other and the solid particles are the main reasons why such variability exists and why unsaturated soils are more complex than saturated or dry ones.
A robust understanding of the hydromechanical properties of unsaturated soils is crucial for geotechnical engineers worldwide, as well as for those concerned with the interaction of structures with the ground. Some engineering problems associated with unsaturated soils include precipitation-induced shallow-depth landslides, settlement of soil in the vadose zone, drainage of roadway materials, and borehole stability. Proper understanding of unsaturated behavior requires considerations that go beyond those available for saturated soils. In pursuit of addressing this requirement, a plethora of research has been devoted to measuring, modeling, predicting, and interpreting unsaturated soil behavior. Many theories for characterizing the mechanical response, methodologies for laboratory testing, as well as equipment to determine the constitutive parameters have been developed. Instruments to study in-situ behavior of unsaturated soils have also been promoted and used in a few cases. The outcomes of past research include: the development and critical evaluation of various forms of the effective stress principle and its fundamental role in determining strength and deformation properties of unsaturated soils; identification of independent state variables; development of failure envelopes and yield surfaces; formulation of macromechanical and micromechanical constitutive relationships; and formulation of suction-induced stress as a component of the intergranular stress tensor. Despite the volume of work dedicated to the field of unsaturated soil mechanics, compared to two-phase (i.e., saturated) soils, relatively few advancements have been made in the development of characterization frameworks for unsaturated soils. In this doctoral research, a novel, 14-parameter, state-dependent bounding surface plasticity model that simulates the behavior of unsaturated granular soils is developed. In the development of this model, a critical state compatible hyperelastic formulation for saturated granular soils is selected as a base model and is enhanced and extended to predict unsaturated granular soil behavior. Accounting for deformation phenomena in unsaturated soils, the elastoplastic response has no purely elastic component. The hyperelasticity and assumption of no purely elastic deformation sets this model apart from existing ones. To handle the inherent hydro-mechanical coupling in unsaturated soils, a newer generation stress framework, consisting of the Bishop-type effective stress with a second stress variable, is used in conjunction with a soil-water characteristic curve function. Available unsaturated soil data for sands and silty sands were used to calibrate, validate, and assess the performance of the new model. Additional laboratory data, consisting of a suite of consolidated drained triaxial shear tests, were generated. The shear strength and volumetric behavior of a native mid-Atlantic transitional silty sand were investigated under varying values of matric suction, confining pressure, strain rate, and fines content. The experimental results are used to validate the predictive capabilities of the new bounding surface plasticity constitutive model for unsaturated granular soils. It is shown that with a set of parameter values, the model realistically simulates the main features that characterize the shear and volumetric behavior of unsaturated granular soils over a wide range of matric suction, density, and net confining pressure. 
In the literature, it was observed that multiple analytical expressions exist for effective stress, critical void ratio, and soil water characteristic curve. To see the effects that variations in these different analytical forms have on the model simulations of unsaturated soil behavior, a parametric investigation is performed using the constitutive model developed and the aforementioned in-house generated laboratory data. It is observed that, depending on the desired prediction accuracy, a variety of functions (with varying numbers of model parameters) could be implemented as part of the constitutive model.Item A comparative evaluation of the microbiome effects on easy and hard keeper horses(University of Delaware, 2023) Johnson, Alexa C.B.Horses with different metabolic tendencies are anecdotally referred to as “easy” or “hard” keepers. Easy keepers (EK) tend to gain weight easily while hard keepers (HK) require extra feed to maintain body condition. Horses that do not struggle with maintaining a healthy weight are referred to as “medium keepers” (MK). The horse, as an obligate herbivore, relies on the gut microbiome to provide more than half of its energy requirements. Therefore, equine energetics and metabolism is greatly influenced by the gut microbiome. It is not yet known what causes a horse to be an EK, MK, or HK but, I hypothesize that the gut microbial structure and function play a vital role in equine metabolic tendencies. The dynamic interactions between the horse and its gut microbiome likely reflect individual capacities and genetics to harbor specific populations as well as host-specific abilities to utilize available nutrients. To test this central hypothesis regarding the microbial side of the conversation, these research projects focus on the bacterial and protozoal fractions. ☐ The first objective of this work was to develop a reliable and standardized tool for determining equine keeper status. The lack of a standardized method to identify equine keeper status requires reliance on the owner reported keeper statuses which is unreliable and irreproducible. The Equine Keeper Status Scale (EKSS) was developed and validated on data gathered from 240 horses. With EKSS assignments, incorrect keeper status assignments provided by owners was reduced by 60%. The EKSS was used in all further studies to identify EK, MK, and HK study cohorts. ☐ The second objective of the project was to compare bacterial composition (16S rRNA surveys) of EKSS statuses. 16S rRNA surveys of equine feces in an observational study (n=73) found differences in alpha and beta diversities and taxa abundances based on EKSS assignments. However, when a controlled cohort (n=12) was used, significances in alpha and beta diversities were lost, but unique bacterial cores and representative bacteria of each EKSS status were found. Determining the bacterial core of each EKSS status will aid during probiotic choice and probiotic development to target these key groups and improve EK and HK weight management strategies. ☐ The third objective was to compare protozoal composition (18S rRNA surveys) of EKSS statuses. As an extension to this objective, we sought to obtain reference sequences for uncharacterized protozoans to improve molecular methods to identify protozoans. Two previously unsequenced equine protozoan species (Blepharocorys valvata and Blepharoconus benbrooki) and two other equine protozoans (Tripalmaria dogieli, Cochliatoxum periachtum), were successfully single sorted and sequenced. 
After the addition of these sequences to the classifier, the protozoan (18S rRNA) profile of horses (n=39) in the Mid-Atlantic Region and EKSS statuses were evaluated. Thirty-five species-level protozoans were identified in the Mid-Atlantic Region, and protozoal richness was lowest in HK horses compared to EK and MK animals (P = 0.05). Describing the commensal protozoal fraction is the first step towards ultimately understanding this population’s purpose during microbial metabolism and host health. ☐ The fourth objective was to determine if bacterial activities differed between EKSS statuses. Bacterial functionality between EKSS statuses was tested using PICRUSt, to hypothesize fermentative differences between the gut microbiomes, and using 48h in vitro challenge protocols. PICRUSt predictions were performed on the 16S rRNA surveys from the observational study (n=73) and found that, overall, 18 metabolism pathways were differentially abundant in EKSS statuses (P < 0.10), seven of which were significantly enriched in the EK, representing both foregut metabolism (i.e., starch and lipid digestion) and hindgut metabolism (i.e., fiber and amino acid digestion). PICRUSt is inherently limited, but these results indicate that the EK has an enhanced metabolic potential to break down feed and harvest energy compared to the MK and HK. ☐ In vitro experiments (n=12) demonstrated that the HK microbiome had the quickest and most drastic reaction to both carbohydrate and protein challenges. The slower metabolic response demonstrated by EK cultures may be a resiliency mechanism that these communities utilize to slow overall metabolism while increasing ATP production. These different bacterial responses suggest unique microbial strategies between keeper communities to maintain stability that may ultimately change metabolism patterns. ☐ In conclusion, the data collected from these studies support our central hypothesis that the microbiome plays a pivotal role in equine keeper status. Our results suggest that bacterial activity and functionality are more influential towards equine keeper status than the bacterial composition. Results further suggest that the protozoan population is a significant contributor to equine keeper status and this population deserves further investigation.
Item A COMPARISON OF THREE EFFECT SIZE INDICES FOR COUNT-BASED OUTCOMES IN SINGLE-CASE DESIGN STUDIES
Shrestha, Pragya
In Single-Case Designs (SCD), the outcome variable most commonly involves some form of count data. However, statistical analyses and associated effect size (ES) calculations for count outcomes have only recently been proposed. Three recently proposed ES methods for count data are: Nonlinear Bayesian effect size (Rindskopf, 2014), Log Response Ratio effect size (Pustejovsky, 2018), and Bayesian Rate Ratio effect size (Natesan Batley, Shukla Mehta, & Hitchcock, 2021). Although all three methods calculate an ES for count outcome data and can be used with an ABAB design, they differ in their statistical models and estimation frameworks (Bayesian or frequentist) and in whether they assume the presence or absence of autocorrelation, which is frequently present in SCD data. It also has yet to be examined how the ES and standard error estimates from these three indices are affected by overdispersion, a common occurrence in count data. These fundamental differences call for a closer examination and comparison of the methods and estimates obtained.
The proposed dissertation aims to investigate the interpretability and understandability of the estimates produced, as proposed by May (2004); examine if the three ES indices can be converted to a common metric to facilitate comparison of the ES estimates; document the benefits and challenges of implementing each method; and examine the performance of these ES methods under positive autocorrelation and overdispersion using Monte Carlo simulation. Schmidt (2007), a published SCD study that examined the effect of Class-Wide Function-related Intervention Teams (CW-FIT) on reducing the disruptive behavior of three first-grade students using an ABAB design, was used to examine the interpretability and understandability of the estimates produced and whether the indices can be converted to a common metric. It consisted of 3 cases with 4 phases (ABAB) for each case. For the simulation study, 1000 datasets for each case were simulated using pre-specified data parameters (number of cases, number of data points within each phase of a case, and phase means) taken from the Schmidt (2007) study and for various conditions of autocorrelation and overdispersion. A fully crossed factorial design with three autocorrelation values (0.0, 0.2, 0.4) and four overdispersion values (0.0001, 0.05, 0.1, 0.3), resulting in 12 simulation conditions for each case, was used for data generation. All analyses were carried out in R software. Results indicate all three ES estimates are interpretable. LRR meets the understandability criteria; however, both BRR and NLB require advanced statistical knowledge to run the models. The three ES can be converted to a common metric because they are ratios of the mean count of the phases. Based on the simulation, all three methods produce nearly unbiased estimates of the effect size under different data conditions; however, the standard error is affected by autocorrelation and overdispersion. This dissertation can serve as a resource for other SCD researchers and applied practitioners to understand and interpret the different ES values from the LRR, NLB, and BRR methods and help them make better-informed decisions about which of the three ES indices to use in their own research study when autocorrelation and overdispersion are present in their data.
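Since all three indices reduce to functions of the ratio of phase mean counts, a minimal sketch of the Log Response Ratio is given below for illustration. This is a hedged example, not the dissertation's code (the analyses above were carried out in R): the phase data, variable names, and the simple delta-method standard error are illustrative assumptions, and the standard error shown ignores the autocorrelation and overdispersion that the simulation study examines.

```python
import numpy as np

def log_response_ratio(baseline_counts, treatment_counts):
    """Illustrative Log Response Ratio: the natural log of the ratio of
    treatment-phase to baseline-phase mean counts, with a naive delta-method
    standard error that assumes independent, equidispersed observations."""
    a = np.asarray(baseline_counts, dtype=float)
    b = np.asarray(treatment_counts, dtype=float)
    lrr = np.log(b.mean() / a.mean())
    se = np.sqrt(a.var(ddof=1) / (len(a) * a.mean() ** 2)
                 + b.var(ddof=1) / (len(b) * b.mean() ** 2))
    return lrr, se

# Hypothetical ABAB data: counts of disruptive behavior per session,
# pooling the two baseline (A) phases and the two intervention (B) phases.
a_phases = [9, 8, 10, 7, 8, 9, 7]
b_phases = [4, 3, 5, 2, 3, 2, 4]
lrr, se = log_response_ratio(a_phases, b_phases)
print(f"LRR = {lrr:.3f}, SE = {se:.3f}")  # a negative LRR indicates a reduction in counts
```

Because the NLB and BRR indices likewise estimate ratios of phase mean counts, their estimates can be placed on this same log-ratio scale, which is the sense in which the abstract describes converting the three indices to a common metric.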
Item A Dynamic Network Model with an Application to Interbank Market
Zhang, Haici
Among central bankers and the wider academic community, interbank network analysis has received an increasing amount of attention over the past few years. In view of the high degree of interconnectedness of the financial interbank network, network theory provides a natural framework for the study of the robustness of the interbank system as a whole and its resilience to risk contagion. A good understanding of the nature of the interconnections within the interbank network and of how these interconnections respond to changes in market conditions is also vital for policymakers when they are designing regulations and monitoring systemic risk. Aiming at understanding the dynamic interconnectedness in the banking system, this dissertation proposes three dynamic network models for the e-MID electronic interbank network.
In the first part of this dissertation, we propose a Bayesian dynamic interbank network model where three mechanisms control the likelihood of a link between two banks: (i) a time-series interbank activity index that expresses overall interbank confidence across time, (ii) bank-specific latent variables describing banks’ tendency to be borrowers or lenders, and (iii) covariates characterizing pairwise past relationships. A large fraction of previous research on interconnectedness studies static and aggregated interbank networks, which only reveal information about long-term connectedness (such as core-periphery structure) inside a network, and few studies explore the dynamics of interbank networks. To address this research gap, we formulate a flexible dynamic interbank network model and design a novel sampling method for computational efficiency. We then demonstrate the superiority of our proposed method in a variety of areas. First, we use the proposed cross-sectional latent variables instead of general network topology statistics to explore whether these variables are representative of changes in network topology and can be used to predict macroeconomic variables that reflect the health of the banking system. Secondly, we evaluate the goodness-of-fit of the model with interbank link predictions and evaluate the results using AUC and different prediction errors. In addition, we propose two proxies for relationship lending based on information sharing, using the latent variables in our proposed model. Based on the regression results, the two proxies are significant during both the pre-crisis and crisis periods. To make the model more interpretable from the viewpoint of probability theory, rather than linking the binary data of the adjacency matrix to a latent variable with a probit link, we assume that the binary data follow a Bernoulli distribution whose success probability is a logit transformation of the three underlying trading mechanisms discussed in the first study. Though this makes the results more interpretable, more constraints and auxiliary variables are applied to make the estimation process converge. In the second part, we apply a GC-LSTM (GCN units embedded into the LSTM framework) deep learning approach to model dynamic interbank networks. This deep learning model has been used in transportation (Lei et al. (2019)) and social networks (Martinez-Jaramillo et al. (2014)), but it has not been applied to any financial interbank transaction data. To see whether the deep learning model is good at capturing the spatial-temporal information of interbank network topology and making a good prediction, we apply this method to analyze the e-MID dataset. Compared with the statistical dynamic network methods, the deep learning method requires fewer assumptions and easier parameter estimation strategies. The results validate that our model outperforms the benchmarks in terms of AUC and PRAUC. Meanwhile, we also compare the results for crisis and pre-crisis periods and find that the deep learning model is better than the traditional models in both the crisis and pre-crisis periods. In addition, the GC-LSTM model is better at predicting future links in the crisis period than the traditional statistical models, which indicates that the model without underlying statistical assumptions is better at capturing structural change.
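A minimal sketch of the Bernoulli link structure described above, assuming the three mechanisms enter an additive linear predictor that is mapped to a success probability by the logistic (inverse-logit) function; the variable names, dimensions, and additive form are illustrative assumptions rather than the dissertation's actual specification.

```python
import numpy as np

def link_probability(theta_t, s_i, r_j, beta, x_ijt):
    """Probability that bank i trades with bank j at time t: a market-wide
    activity index (theta_t), bank-specific latent lender/borrower tendencies
    (s_i, r_j), and pairwise past-relationship covariates (x_ijt) are combined
    into a linear predictor and passed through the logistic function."""
    eta = theta_t + s_i + r_j + float(np.dot(beta, x_ijt))  # linear predictor
    return 1.0 / (1.0 + np.exp(-eta))                        # inverse logit

# Hypothetical values with a single past-relationship covariate
# (e.g., prior trading frequency between the two banks).
p = link_probability(theta_t=-1.0, s_i=0.4, r_j=0.2,
                     beta=np.array([0.8]), x_ijt=np.array([1.5]))
print(f"P(link) = {p:.3f}")  # a link A_ijt would then be drawn as Bernoulli(p)
```

Replacing the logistic function with the standard normal CDF would recover the probit-link formulation that the abstract mentions as the alternative.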
Item A graph limit approach to seriation (University of Delaware, 2023) Mishura, Teddy
The study of graph limit theory began in earnest in 2006 with the publication of the paper Limits of dense graph sequences by Lovász and Szegedy. Through the lens of homomorphism densities—the probability that a random map from a fixed graph F to a graph G is a homomorphism—this paper introduced new tools to the mathematical world that allowed sufficiently large networks to be viewed as random samples from a suitably chosen symmetric function w : [0, 1]² → [0, 1] called a graphon. Furthermore, Lovász and Szegedy showed that this method of convergence is equivalent to convergence in a specific norm known as the cut norm, where graphs are embedded into [0, 1]² as step functions in the form of their adjacency matrices and the distance is calculated between these step functions. This allows graph-theoretical questions to be translated into questions about step functions, where more standard tools of analysis can be employed. Furthermore, in the reverse direction, one can study how analytical properties of the function can affect the combinatorial properties of its sampled graphs. ☐ We address this question in the context of geometric graphs, whose edge structure is derived from a linear embedding in ℝ. This forces their adjacency matrices to increase toward the main diagonal, a condition known as the Robinson property, and one that easily translates to graphons. Given a convergent sequence of graphs that become increasingly close to geometric graphs—in the sense of the cut norm—can one claim the limiting object must also be Robinson? This question was solved completely in the affirmative for dense graph sequences, which are sequences of graphs Gₙ with positive edge density |E(Gₙ)|/n². We therefore focus on graph sequences that are not dense, i.e., whose edge density tends to 0, which corresponds to studying graphons that are unbounded but have finite p-norm. Such graphs are often referred to as being sparse. ☐ Specifically, we introduce and investigate a graph parameter Λ that measures by how much a graphon “fails” to be Robinson, and which is continuous with respect to the cut norm. We also develop a method that constructs Robinson approximations of Lᵖ graphons such that the difference in cut norm between the original graphon and the approximation is dependent on Λ of the original; thus, the closer to being Robinson the original graphon is, the better the approximation.
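For reference, the two notions this abstract relies on have standard definitions in the graph limit literature (stated here for orientation, not quoted from the dissertation): the cut norm of a graphon W, and the Robinson (increasing-toward-the-diagonal) property.

```latex
\|W\|_{\square} \;=\; \sup_{S,\,T \subseteq [0,1]} \left| \int_{S \times T} W(x,y)\,\mathrm{d}x\,\mathrm{d}y \right| ,
\qquad
W(x,z) \le \min\{\,W(x,y),\, W(y,z)\,\} \ \text{ for all } x \le y \le z \quad (\text{Robinson}).
```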
Item A history of African-American public school education in Louisville, Kentucky, 1840-1956 (University of Delaware, 2017) Jones, S. Seabrook
This dissertation examines African-American public school education in Louisville, Kentucky, beginning with the free black community’s creation of private schools for their children in the 1840s and concluding with the desegregation of the city’s public schools in 1956. Throughout this period, the local black community’s primary obstacle in securing improvements to black education in Louisville was the local white community’s belief in the superiority of their city’s black schools relative to public schools for African-Americans elsewhere in Kentucky and the rest of the South. ☐ The comparative strength of Louisville’s public schools for black students was demonstrable. As discussed in chapters two and three, my research found that the Board of Education of Louisville spent more money on their black schools than almost anywhere in the South. These schools had an entirely African-American faculty led by black administrators. This faculty also enjoyed wages seven times higher than those paid to black teachers elsewhere in Kentucky. Over the years, the Board built a number of fine facilities for African-American schools. However, while Louisville whites prided themselves on these facts, the local black community fought a continuous battle to secure for their children equality with what the city’s white children received. ☐ My historical analysis shows how, throughout the existence of separate schools for African-American children in Louisville, black citizens’ ability to exert influence over local politics directly correlated with their ability to effect change for black public schools. This accounts for the school board providing the same curriculum for white and black elementary schools and allocating white citizens’ taxes to help pay for black public schools from the ratification of the Fifteenth Amendment through a restructuring of school board elections in 1910 that rendered the black vote nearly obsolete. As examined in chapters four and five, once the black community’s political influence over the Board of Education waned, dramatic fissures in spending and curriculum emerged between white and black schools at all levels. Fortunately, national party shifts led to a revival of black political relevance in Louisville as Republicans fought to keep the loyalty of black voters who had begun defecting to the Democratic Party in the 1930s. Chapter six observes how Louisville’s African-American citizens used this opportunity to force the Board of Education to include black school facilities in their bond issues and, most impressively, to equalize black and white teachers’ salaries in the final years of the Great Depression. ☐ The seventh and concluding chapter of this study examines the 1956 desegregation of public schools in Louisville. Although nationally heralded by the press and politicians as a success, my research highlights the limits of this desegregation. The few white students in formerly black institutions and the Board of Education’s resistance to faculty desegregation served as indicators that the Board’s primary concern was not achieving racially equitable schools, but was instead creating a desegregation plan palatable to the city’s white citizens. ☐ This dissertation is the first comprehensive exploration of the history of African-American public school education in Louisville, Kentucky, prior to desegregation.
Item A kinomic analysis of the immunometabolic effects of antibiotic alternatives in necrotic enteritis disease models (University of Delaware, 2021) Johnson, Casey Nicole
Since the restriction and removal of antibiotic growth promotors, resulting from both new regulations and public pressure, the poultry industry has seen a re-emergence of financially devastating disease, most notably necrotic enteritis. Necrotic enteritis has been estimated to cost the global poultry industry $6 billion annually. The economic consequences attributable to necrotic enteritis mostly result from the sub-clinical form of the disease, which can go unnoticed until slaughter due to a lack of overt clinical signs.
Finding alternatives to antibiotic growth promotors has become a major area of research and many facets of this problem need to be considered in order to begin to find solutions. Looking at disease pathogenesis is one necessary component to finding more effective and efficient alternatives and to be able to better target problems such as necrotic enteritis more specifically. Looking at the mode of action of antibiotic alternative feed additives shown to have beneficial disease mitigating and growth promoting effects in addition to beginning to understand the host effects of antibiotic growth promotors will also be necessary to begin to replace the beneficial effects lost with the removal of antibiotic growth promotors. This work has been done to advance all of these frontiers. Necrotic enteritis and its predisposing factors were used both as disease challenge models and as a means to evaluate the disease mitigating effects and mechanism of action of antibiotic alternatives. ☐ Necrotic enteritis highlights how dysregulation of gut immune responses can have devastating consequences. The gut is the site of nutrient absorption and an important immune organ. As such, the gut has been considered the prototypical immunometabolic organ. Immunometabolism, being the study of the cross-talk between immune responses and metabolic pathways represents a rational research approach for studying an enteric disease and the effects of potentially disease mitigating feed additives. ☐ Most immune signaling and metabolic pathways contain key intermediates regulated by protein kinases. Kinomics involves the analysis of phosphorylation events, catalyzed by protein kinases. Phosphorylation is a predominant post-translational modification of proteins that plays a key role in regulating and mediating most cellular signaling. Species-specific kinome peptide arrays allow for a high-throughput method of measuring these phosphorylation events. Differences in phosphorylation target sites between species requires the use of species-specific kinome peptide arrays. The kinome peptide array protocol generates data regarding immunometabolic changes occurring between treatments and controls. This research approach enables us to explore the immunometabolic effects of our disease challenge models compared to healthy controls as well as the effects of antibiotic alternatives in the same context. ☐ The yeast cell wall components of Saccharomyces cerevisiae have been the focus of studies searching for antibiotic alternatives. Feed additives containing these components are currently in use by the broiler industry and have shown beneficial effects on growth and performance. In chapter 2, the effects of a crude yeast cell wall extract (YCW) and purified yeast cell wall components beta-glucan (BG) and mannoproteins (MPT) in the context of an experimental model of necrotic enteritis (NE) are reported on. The NE model used for this study involved the use of an infectious bursal disease virus vaccine and a Clostridium perfringens (C. perfringens) challenge. All groups other than unchallenged control were given the same NE challenge. Groups included unchallenged control, NE challenged, YCW, BG, MPT, and BG+MPT. Weight gain was recovered to control in the BG+MPT group. The weight gain in all other groups was not statistically significantly different from either the control or the NE challenged groups. 
Kinome peptide array and subsequent STRING analyses on jejunal tissue revealed changes in innate immune response and cell death or apoptosis between the BG+MPT and NE challenge groups. Closer analysis of peptide function revealed in the NE challenge group most phosphorylation changes indicated a decrease in functions promoting cell growth and survival and induced cell death. The BG+MPT group showed a reversal of these changes. While we observed clear differences in the phosphorylation status of key peptides between challenge and BG+MPT treated groups, often these peptides phosphorylation status was not significantly altered in the BG alone, MPT alone, and YCW treatment groups. It is possible that the differences in these peptides phosphorylation state are a key aspect of the difference in growth response we observed here, either allowing the disease to take hold and negatively affect growth in the case of the challenge group or reducing disease severity and limiting growth effects in the case of the BG+MPT group. Thus, the lack of significant change in phosphorylation of these specific peptides may be the reason we do not see a significant improvement in growth due to treatment with YWC, BG alone, or MPT alone. In other words, these peptides described previously may be critical determinants of disease severity and the growth effects due to this NE challenge model. The mRNA expression of pro- and anti-inflammatory cytokines were measured using quantitative reverse transcription polymerase chain reaction (qRT-PCR). The results indicated an effect on cytokine gene expression in response to the NE challenge and further changes in response to the various treatment groups. These results were consistent with the kinome peptide array analysis, showing that rather than a return to control, the treatments had distinct impacts further separating the treatment groups from either the control or NE challenged groups. In summary, this paper indicates that a combination compound used as a feed additive, BG+MPT, is able to recover weight gain in NE challenged birds as well as confer unique cellular signal transduction in the guts of challenged broilers. The responses appear centered on inducing cell growth responses and reducing cell death or apoptosis and innate inflammatory responses, but rather than returning the tissue to a control-like state, the treatment appears to generate compensatory signaling to reduce disease severity. ☐ In chapter 3, the mechanism of action of a postbiotic product in the context of a C. perfringens challenge in broiler chickens is reported on. A postbiotic product contains the byproducts of bacteria fermentation including metabolites, short-chain fatty acids, and functional peptides, among others. These products are thought to have the potential for a direct effect on the host rather than relying on the microbiome to produce the same byproducts. C. perfringens is one of, if not the major, contributing factor to the development of necrotic enteritis in broilers. Administration of the postbiotic product improved weight gain, decreased C. perfringens colony count, and reduced lesion scores and mortality between the two replicate trials. Kinome peptide array analysis showed a distinct response to the postbiotic product in the jejunum of both disease challenged and unchallenged broilers. Looking at peptide phosphorylation events unique to the postbiotic, postbiotic plus C. perfringens challenge, and the C. 
perfringens challenge groups, STRING analysis results showed immune-related signaling in the top 20 GO biological processes in the postbiotic and C. perfringens challenge groups; this signaling was absent in the postbiotic plus C. perfringens challenge group. The protein identifiers input into STRING were unique to each group, so despite immune signaling showing up in both the postbiotic and C. perfringens challenge tables, the members of the signaling pathways were unique, suggesting key differences in these signaling pathways. This is further supported by a visualization comparing the phosphorylation events of these two groups that shows that many of the events on the same peptide were differentially phosphorylated in different directions when each group was compared to control, which indicates a differential effect between the two groups. The lack of immune signaling pathways in the postbiotic plus C. perfringens challenge group suggests that the combination treatment may mitigate the disease-inducing effects of the C. perfringens challenge. When the complete lists of significantly differentially phosphorylated peptides for the postbiotic and C. perfringens challenge groups were input into STRING, the results showed signaling changes indicating an impact on innate immune signaling in the C. perfringens challenge group not observed in the postbiotic treatment group. The postbiotic treatment group results reflect an immune-modulatory/anti-inflammatory impact by the metabolite on jejunal tissue. The phosphorylation events indicate inhibition of such signaling pathways as mTOR, NF-κB, and PI3K-Akt. These are pathways commonly seen being activated during an active/proinflammatory immune response. In summary, the metabolite product being evaluated seems to impart an immune-modulatory effect on jejunal tissue in broilers. This could be important in maintaining a healthy gut, especially considering the withdrawal of in-feed antibiotics, which are widely believed to impart an anti-inflammatory effect in the gut. The gut needs to maintain both a state of tolerance (to commensal microbes) as well as a readiness to respond (to pathogenic organisms). This balance must be maintained in order to allow for optimum growth efficiency and health.
Item A life less valuable? Adjudication and sentencing outcomes for perpetrators of child homicide. (University of Delaware, 2013) Poteyeva, Margarita
Killing a child almost universally galvanizes great outrage among the public. Despite this condemnation, little is known about how and if this abhorrence of killing a child translates into criminal justice practices. The purpose of this dissertation is to advance our understanding about the imposition of the law in cases of child homicide using a mixed-method design. The two general research objectives of the current study are: (1) to explore whether there are any differentials in the application of the law (e.g., decisions to prosecute, probability of conviction for offenders, and sentences received) for those who kill children compared to those who kill older victims; (2) to gain insight into the moral gradation of child homicide by exploring whether certain types of child homicide or certain perpetrators of the crime are treated more harshly than others. ☐ Quantitative analysis of nationally representative data collected from prosecutors' offices in 33 large urban counties revealed that while killing a child did not have any effect on the probability of conviction, those who killed children received significantly shorter sentences than those who killed older victims. Being a mother-offender exerted a significant influence over the sentencing decision of the courts. ☐ Thematic and qualitative content analyses of data from the State of Maryland showed that the majority of death-eligible cases with child victims either originated in a romantic conflict or involved a sexual assault on the child. No definitive conclusions could be drawn about the criteria that state attorneys in Maryland considered in determining whether or not to seek capital punishment in a particular child homicide case. In fact, there were several instances where legally similar crimes and offenders received different treatment. No female defendant in the Maryland sample was tried capitally, including the two women who were mothers to their victims. Contrasting these cases with female child homicide perpetrators who were sentenced to the death penalty in other states, however, suggests that prosecutors need to overcome a number of challenges to successfully portray female defendants as death-worthy.
Item A microvessel-on-a-chip for studying how tumor-derived factors prime the endothelium for extravasation (University of Delaware, 2022) Sperduto, John Lewis
Metastasis is a leading cause of disability and death from cancer. Each year, 50% of new cancer patients are diagnosed with metastasis and must undergo immediate and aggressive treatment. Within 5 years, 60% of these patients will die. ☐ These statistics translate to staggering mortality. In 2020 alone, 5.8 million patients died from metastasis. By 2040, this mortality will increase to an estimated 8.4 million deaths. ☐ Yet few effective treatments for metastasis exist. One reason is poor understanding about how circulating tumor cells escape through the endothelium into the surrounding tissue of distant organs. This process, called extravasation, is critical. If extravasation does not occur, distant metastases cannot form and 5-year survival increases from 25% to 56%. Better understanding extravasation is therefore key to stopping distant metastasis and reducing the mortality of metastasis overall. ☐ However, studying extravasation is challenging. One major roadblock is modeling extravasation and its underlying pathophysiology in vitro. Current culture models lack mechanical cues fundamental to producing an endothelial monolayer that replicates the endothelium in vivo. Improved modeling of extravasation in vitro could provide better understanding of its pathophysiology in vivo. This dissertation thus aimed to create a microvessel-on-a-chip model for studying extravasation and its underlying processes in vitro with physiological relevance and high spatiotemporal resolution not possible with traditional culture models. ☐ The first objective was developing the model. This consisted of reviewing existing microvessel-on-a-chip models for their advantages and disadvantages, then using this information to design the model, and finally developing processes to fabricate the model and form endothelial microvessels inside it. ☐ The second objective was developing tools to quantify key metrics of endothelial phenotype and function from the model.
This consisted of developing a toolbox of image processing pipelines and corresponding ImageJ macros for quantifying permeability, patency, protein expression, and morphology. This toolbox provides a semi-automated method to obtain high-resolution data that is laborious, cumbersome, and often inaccurate to obtain via manual means while also offering new ways to quantify the aforementioned properties. ☐ The third objective was validating if the model can produce endothelial microvessels that replicate key properties of the endothelium in vivo. This consisted of quantifying the permeability, patency, protein expression, and morphology of microvessels cultured under static, flow, and inflammatory stimuli. Results show that flow is necessary to replicate the desired properties from in vivo. Furthermore, the microvessels exhibit very low permeabilities to 70kDa (~1E-08 cm/s) and 10kDa (~1E-07 cm/s) dextrans, few-to-no focal leaks (<1 leak/mm), dense and continuous adherens junctions, cell elongation and orientation with flow, cytoskeletal alignment with flow, and an appropriate response to inflammatory stimuli. ☐ The fourth and final objective was to apply the model to study how tumor proteins and tumor extracellular vesicles can prime the endothelium to facilitate extravasation. Results show that tumor proteins have a strong capacity to disrupt and inflame the endothelium: they increased permeability, decreased patency, and shifted endothelial cells to a more mesenchymal phenotype. In contrast, tumor extracellular vesicles showed limited capacity, if any, to disrupt or inflame the endothelium. While preliminary, these results raise important questions the current paradigm that tumor extracellular vesicles prime the endothelium for extravasation and highlight the utility of the microvessel-on-a-chip for studying this process. ☐ In total, this dissertation presents a microvessel-on-a-chip model for studying extravasation and its underlying processes in vitro. This model provides a promising addition to existing culture models and affords newfound physiological relevance and spatiotemporal resolution. By doing so, it can advance understanding of extravasation and thereby help improve treatments for metastasis.Item A Preliminary Evaluation of the Feasibility, Acceptability, and Health Behavior Outcomes of a Community-Based Group Health Coaching for Cancer Survivors Program: A Mixed Methods RE-AIM StudyBerzins, Nicole J.In the United States, there are currently more than 18 million people living with and beyond cancer with that number expected to increase to 22 million by 2030. This growing population often experiences additional long-standing health challenges, including long-lasting side effects from treatment, an increased risk of cancer recurrence, and having or developing a comorbidity such as diabetes or cardiovascular disease. This can have impacts throughout the lifetime in both physical and mental health, as well as quality of life. It has been found that a substantial cancer burden may be prevented through behavioral lifestyle modifications, such as increasing physical activity, improving diet habits, improving duration and quality of sleep, and managing stress. However, adherence to these behaviors remains low. 
Health coaching, a process by which trained coaches help clients establish sustainable behavior changes that align with their strengths and values through client-centered interactions and goal setting, has been shown to be effective at eliciting improvements in the aforementioned behaviors in a variety of populations, including cancer patients and survivors. While typically delivered in a one-on-one format, it can be costly, requires more trained coaches, and is not time efficient to reach a large number of people. As such, health coaching in a group format has started to emerge, however, less is known about this mode as it is still in its early stages of development. Therefore, the purpose of this dissertation was to (1) explore existing group health coaching interventions targeting cancer survivors, specifically examining program composition and measured outcomes, (2) to develop and assess the feasibility and acceptability of a group health coaching program for cancer patients and survivors in a community-based setting using a videoconferencing platform, as well as to assess the preliminary effects of the program on a variety of behavioral lifestyle factors, and (3) to evaluate the program using the RE-AIM (Reach, Effectiveness, Adoption, Implementation, and Maintenance) framework. A scoping review of the literature was first conducted. A systematic search strategy was used between October 2021 and February 2023 to identify intervention studies focused on group health coaching with cancer patients and survivors. Seven studies met the criteria. These studies focused on physical activity, diet, weight loss, or some combination thereof utilizing group health coaching by itself or as one component of an exercise and/or diet intervention. There was a wide range of measured outcomes, which loosely grouped into: feasibility/acceptability; physical activity/exercise; body composition and biomarkers; diet; distress, quality of life, fatigue; and other. Overall, studies were found to be feasible and showed positive results for weight loss, diet, and quality of life. Findings for physical activity, distress, and fatigue were mixed. Additionally, variability was found in many of the group health coaching components. This review suggests group health coaching in the cancer patient and survivor population is still in the nascent stages. However, these studies were deemed highly feasible and satisfactory to the participants, and positive outcomes were associated with these interventions. Future research should consider expanding upon these findings with more robust studies. Guided by the findings of the scoping review, a community-based group health coaching program was developed. Six group health coaching sessions over a three-month period addressing the topics of stress management, physical activity, sleep, and diet were offered to cancer patients and survivors across the state of Delaware using a videoconferencing platform. Sessions were facilitated by health coaches trained through a National Board for Health and Wellness Coaching accredited training program. Data was collected using a convergent mixed methods approach. Surveys were sent pre- and post-intervention on topics including physical activity, eating habits, perceived stress, anxiety, depression, sleep, and quality of life, followed by post-program focus groups and in-depth interviews. Data on recruitment, attrition, attendance, fidelity, retention, safety, and barriers and facilitators to implementation were also assessed. 
Survey results were analyzed using repeated measures multilevel modeling. Inductive thematic analysis was used to analyze qualitative data. Overall, this intervention was considered feasible to implement and found acceptable by both participants and health coaches. A total of 26 participants attended an average of 74% of coaching sessions. Coaching participants also noted a moderate increase in total weekly physical activity minutes (p= 0.032, d=0.50). A small increase was seen in weekly moderate-vigorous physical activity frequency (p=0.045, d=0.39). Additionally, a moderate increase was found in functional wellbeing (p<0.001, d=0.50). This suggests group health coaching may be a feasible and acceptable way to promote behavior change in cancer patients and survivors, particularly for physical activity and functional wellbeing. The group health coaching program was then evaluated using the RE-Aim framework. The main outcomes of this study were mapped onto the five dimensions of RE-AIM. As mentioned, 26 participants attended an average of 74% of coaching sessions with an overall fidelity rate of 89.7%. Participants were more likely to be female, White, older, have a higher educational attainment, and have had breast cancer than the state population. The biggest adaptation included extending the time of sessions. The primary support to implementation included the use of NBHWC trained coaches while the primary barrier was the videoconferencing platform. While some challenges for recruitment and implementation were noted, the program will transition to a completely community-based project going forward. In summary, group health coaching with the cancer population was feasible to implement and found to be acceptable to cancer patients and survivors in the state of Delaware. It was successfully implemented within a community setting and may be particularly effective for changing physical activity levels and physical functioning scores. Provided in this format, the cancer patients and survivors are provided with peer support beyond what the coach alone can provide and allows for ongoing learning and connection with others, despite participants having different cancers and being in different places in their journey. Going forward, the program will be refined based on feedback from participants and coaches before being implemented within Cancer Support Community Delaware locations across the state.Item A recipe for success: designing flexible professional learning for foundational skills(University of Delaware, 2022) Wheedleton, Kimberly S.This Executive Leadership Portfolio (ELP) seeks to address the problem of how to design a flexible baseline professional learning architecture which both fits in the Professional Development Center for Educator’s (PDCE) 1:1 preparation model and allows for specialist customization that maintains quality and meets the unique professional learning needs of PDCE’s partners. To determine a potential solution to this problem, a professional learning design project was developed and built, along with accompanying implementation tools, to support teacher implementation of the differentiated instruction model as presented in How to Plan Differentiated Reading Instruction: Resources for Grades K-3 (Walpole & McKenna, 2017). The professional learning design was informed through an iterative try/fail/redesign process within five successive professional learning partnerships. 
The final professional learning design and implementation tools were then implemented in a sixth partnership to test their feasibility for addressing the problem. ☐ Results from the sixth professional learning partnership suggest that, if implemented as designed, the flexible baseline professional learning architecture can feasibly allow PDCE's Literacy Instructional Specialists to provide synchronous, customized, differentiated professional learning sessions that fit the Center's 1:1 preparation model, maintain quality, and meet the unique professional learning needs of each partnership. An unforeseen outcome of the partnership was that, with additional revisions, the professional learning architecture was also able to provide asynchronous professional learning support. While those initial revisions to the professional learning architecture and accompanying implementation tools extended the preparation time beyond the 1:1 ratio, once completed, feasibility for 1:1 preparation can likely be realized for asynchronous applications as well.Item A simulation framework for exploring the impacts of vehicle platoons on mixed traffic under connected and autonomous environment(University of Delaware, 2023) Yuan, DianVehicle platooning, first studied as an application of Intelligent Transportation Systems (ITS), has gained increasing attention in recent years as autonomous driving and connected vehicle technologies advance. When platooned, vehicles communicate within the platoon and coordinate their operation to maintain a relatively steady state relative to one another and to the surrounding traffic. The major goal of this study is to build a conceptual simulation framework to help explore the impacts of connected and autonomous vehicle platoons on existing traffic. The first part of this work reviews autonomous and connected vehicle technologies in order to depict the functional structure of a platooning-ready connected and autonomous vehicle (CAV) platform. Models and simulation tools are then reviewed to break the simulation framework down into two levels: the vehicle level and the traffic level. The vehicle-level model provides in-depth modeling of CAVs and their platooning modules. The traffic-level simulator simulates the existing traffic together with the modeled CAV platoons. The simulation framework was developed by integrating GIS, MATLAB/Simulink, SUMO, and OMNeT++. GIS tools are used to gather the necessary traffic data. MATLAB/Simulink serves as the platform for vehicle-level modeling and simulation. SUMO and OMNeT++ are used to build the traffic and communication simulations, respectively. The completed model was used to conduct two case studies based on a section of a US Interstate highway in order to explore the impacts of CAV platoons on existing traffic. The results indicate that, with the existing traffic pattern and infrastructure design, traffic conditions can be improved after the introduction of CAV platoons, even after taking the rate of traffic growth into consideration. Moreover, deploying dedicated lanes and separating CAV platoon traffic from non-platooning traffic can further improve traffic performance, as measured by outputs such as travel speed, travel time, and delay.
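To make the traffic-level side of such a framework concrete, the sketch below drives a SUMO simulation through its TraCI Python API and tightens the car-following parameters of vehicles designated as platoon members. This is an illustrative sketch only: the dissertation couples SUMO with OMNeT++ and MATLAB/Simulink rather than using a standalone script, and the configuration file name, vehicle-ID convention, and parameter values here are hypothetical.

```python
# Minimal sketch: stepping a SUMO scenario via TraCI and giving designated
# platoon vehicles a shorter headway and minimum gap than ordinary traffic.
# The .sumocfg file, the "cav_platoon" ID prefix, and the numbers are placeholders.
import traci

traci.start(["sumo", "-c", "highway_scenario.sumocfg"])

for step in range(3600):  # one simulated hour at a 1-second step length
    traci.simulationStep()
    for veh_id in traci.vehicle.getIDList():
        if veh_id.startswith("cav_platoon"):
            traci.vehicle.setTau(veh_id, 0.5)     # desired time headway [s]
            traci.vehicle.setMinGap(veh_id, 2.0)  # minimum standstill gap [m]

traci.close()
```

In a full framework these per-vehicle settings would instead come from the vehicle-level platooning controller and the simulated V2V communication, but a stepping loop of this kind is the usual entry point for coupling external logic to SUMO.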
However, such new traffic patterns and infrastructure designs are not recommended when the percentage of CAV platoon traffic is low.Item A study of information content in linear and metric spaces(University of Delaware, 2023) Li, DongbinThe main theme of this dissertation is to explore four topics at the intersection of information theory, probability, discrete/convex geometry, and geometric functional analysis. ☐ In the first part of the dissertation, we introduce an information-theoretic approach to the Kneser-Poulsen conjecture in discrete geometry, first formulated by Poulsen in 1954 and Kneser in 1955. Our approach revolves around a broad question regarding whether Rényi entropies of independent sums decrease when one of the summands is contracted by a 1-Lipschitz map. We answer this broad question affirmatively in various cases. ☐ In the second part, we characterize the convex potentials, also known as the information content, of log-concave densities in ℝⁿ under various assumptions. Building on this characterization, we show that, for a subclass of log-concave densities, the normalized information content satisfies both a central limit theorem and a large deviation principle, which, on the one hand, generalizes the result for Gaussian random vectors and, on the other hand, sheds some light on a more general conjecture. ☐ In the third part, we investigate a metric generalization of Rényi entropies called diversities. We show that on the Euclidean metric space ℝⁿ, one may recover the Rényi entropies from diversities. Via diversities, we define the diversity dimensions of different orders of a Borel probability measure and prove that they coincide with the information dimensions defined via Rényi entropies. Meanwhile, the fact that maximum diversity can also be used to recover some geometric invariants motivates us to generalize some classic sumset inequalities, with maximum diversity now playing the role of "size". The relationship with the other notion of diversity is also explored in this section.Item A Study of Some Entropy InequalitiesPollard, Emma KIn this document we investigate some entropy inequalities. We divide these into two parts. In the first part we prove a new class of inequalities for submodular set functions, indexed by chordal graphs. Since entropy is a particularly useful example of a submodular function, we deduce some entropy inequalities. As a further corollary, we construct a novel family of determinant inequalities for sums of positive definite Hermitian matrices, and also recover an inequality of Barrett, Johnson, and Lundquist (1989). In the second part we introduce a new inequality called entropic symmetrization resistance. An asymmetric random variable X is said to be variance (respectively, entropic) symmetrization-resistant if every independent random variable Y that produces a symmetric sum X + Y has a greater variance (respectively, entropy) than that of X. Asymmetric Bernoulli random variables were shown to be variance symmetrization-resistant by Kagan, Mallows, Shepp, Vanderbei, and Vardi (1999); Pal (2008) gave a proof using stochastic calculus. We give a third proof. We show for the first time that asymmetric Bernoulli random variables are entropically symmetrization-resistant.
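Restated symbolically, the definition just given reads as follows; the notation is an editorial paraphrase for readability, not notation taken from the dissertation itself.

```latex
% Illustrative restatement (notation is editorial, not the dissertation's):
% X is variance (resp. entropic) symmetrization-resistant if, for every
% independent Y such that X + Y is symmetric,
\operatorname{Var}(Y) > \operatorname{Var}(X)
\qquad \text{(resp. } H(Y) > H(X)\text{)}.
```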
We then extend the entropy and variance inequalities to the hypercube and explore other possible extensions to non-Bernoulli random variables.Item A Tale of Energy-Burdened Cities: Connecting the Low-Income Housing Tax Credit to Energy InsecurityClase, Cara MarieIn the United States, urban areas are among the most energy-insecure spaces, with energy-cost-to-income ratios (i.e., energy burdens) as high as 25%. This is more than twice the 10% standard that energy insecurity studies use to distinguish highly energy-burdened households. Impoverished urban areas tend to have residents who live in older and less energy-efficient housing that requires more energy, and thus money, to operate. Examining financial and infrastructural variables of energy insecurity, this dissertation takes a deeper look at the exogenous variation in infrastructure created by the Low-Income Housing Tax Credit (LIHTC), a credit that incentivizes housing developers to build or renovate housing for low-income renters. Specifically, a two-way fixed effects regression model is used to investigate the impact of LIHTC housing supply on the energy burdens of urban PUMAs and ConsPUMAs. The analysis found that a greater supply of LIHTC units, especially newly built units, had a significantly negative relationship with energy cost and energy burden in multiple model specifications. The analysis also found strong evidence that infrastructure-centered programs like the Weatherization Assistance Program have a significantly negative relationship with energy cost and burden.Item A tendon model using mechanical overload causes degenerative multiscale changes in structure and function(University of Delaware, 2023) Bloom, Ellen T.Tendon injuries are exceedingly common, afflicting 16.4 million people yearly in the United States. As tendons are involved in all movement, these injuries can be debilitating in everyday life. Common among these injuries are tendon rupture and tendinopathy, both arising from excess mechanical loading of the tissue, leading to damage and degeneration. Tendon degeneration and mechanical damage are widely referred to as overuse injuries, with little distinction in the clinical or research communities between the magnitude of load (overload) and the number of cycles (overuse) as the injury mechanism. Because the clinical presentations of tendinopathy and tendon rupture are quite different, it is likely that different mechanisms and pathways lead to the end-stage degenerative state. Overuse and overload injuries develop over months in humans, and early changes are not detectable, necessitating the use of animal models to evaluate the structural and mechanical changes throughout the degenerative process. However, animal models used to study tendon degeneration are mostly focused on overuse. As a result, it remains unknown how overload, in the absence of overuse, leads to tendon degeneration. Without a full understanding of the multiscale structure-function relationship and damage of tendon, treatments and interventions are doomed to remain ad hoc and not founded in rigorous physiology or etiology. ☐ This dissertation addresses this gap in understanding the progression of degeneration by using novel multiscale structural and functional techniques in combination with an in vivo model of tendon overload. To help researchers and clinicians discuss and interpret the complexities of tendon loading and tendon injuries, a new visual framework for use in the field was also designed.
In one of the aims of this dissertation, we assessed tendon structural and functional degeneration in our in vivo surgical model of tendon overload using activity monitoring, high-resolution MRI, histology, confocal microscopy, serial block-face scanning electron microscopy, and multiscale mechanical testing. To ensure that we were accurately describing physiological damage, we first evaluated the effect of bathing solution osmolarity on ex vivo tendon mechanics. To measure collagen fibril geometry from a volumetric sample, we also developed a method using electron microscopy and machine learning to determine the 3D ultrastructure of collagen fibrils in dense tendon samples. From these three aims, we determined that bathing solutions that prevent tissue swelling result in more accurate tendon mechanics; successfully measured collagen fibril geometry in 3D tendon samples; and confirmed that our model of in vivo overload induced degenerative structural changes and impaired the mechanical function of tendon. ☐ The outcomes of this dissertation established an in vivo model for studying overload-induced degeneration, separate from overuse, which will be used to study mechanisms of and treatments for tendon overload injuries. The novel tools developed in this dissertation will be used in future structure-mechanics studies of tendon and other highly aligned, collagenous soft tissues, as well as in improving treatments and prevention strategies for tendon injuries.Item A two-layer non-hydrostatic landslide model for tsunami generation on irregular bathymetry(University of Delaware, 2020) Zhang, ChengHistorically, many significant tsunami events that caused catastrophic damage to coastal communities were triggered by submarine mass failure (SMF). However, landslide tsunamis are less well documented, and the study of their source mechanisms is a fairly new area of tsunami science. ☐ In this thesis, we describe a two-layer, coupled model for water-column and landslide motion, developed for the investigation of submarine landslides and the resulting tsunami generation over irregular bathymetry. The three-dimensional non-hydrostatic wave model NHWAVE (Ma et al., 2012) is applied as the upper-layer model to simulate landslide-generated tsunami waves. Here, we focus on the derivation and numerical implementation of the governing equations for the lower-layer model, in which the landslide is described as either a viscous mud flow or a saturated granular debris flow. The governing equations are depth-integrated in a Cartesian coordinate system referenced to the still water level in order to facilitate coupling between water and ground motions. Vertical acceleration of the slide is taken into account by including non-hydrostatic pressure effects within the slide, allowing the model to simulate motions over arbitrary and locally steep bathymetry. A quadratic pressure profile in the vertical is imposed to improve the model's dispersion properties, and a new granular rheology closure algorithm is introduced in which the Coulomb rule is applied in local coordinates, with the x' direction coincident with the flow direction. The model equations are solved using a combination of a Godunov-type finite volume scheme and a finite difference scheme in space, together with a Runge-Kutta scheme for time integration, and the resulting model is verified by comparison with analytical results and laboratory experiments involving granular slide motion.
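For orientation, the classical hydrostatic, depth-integrated balance from which such slide-layer models start can be written as below; the non-hydrostatic pressure terms, the quadratic vertical pressure profile, and the granular rheology closure that the dissertation adds are omitted, so this is background notation rather than the model's own equations.

```latex
% Depth-integrated mass and momentum balance for a thin layer of thickness d
% with depth-averaged velocity \mathbf{u}, surface elevation \eta, and a
% source term \mathbf{S} collecting basal friction and other forcing.
\frac{\partial d}{\partial t} + \nabla \cdot ( d\,\mathbf{u} ) = 0,
\qquad
\frac{\partial ( d\,\mathbf{u} )}{\partial t}
  + \nabla \cdot \left( d\,\mathbf{u} \otimes \mathbf{u} \right)
  + g\, d\, \nabla \eta = \mathbf{S}.
```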
☐ A recent landslide-induced tsunami event that occurred in the Sunda Strait of Indonesia is investigated in this thesis using the proposed model. Based on the field survey around the Anak Krakatau volcano, where a major lateral collapse generated a tsunami that caused severe damage to the coastlines of Sumatra and Java, numerical simulations are set up with an estimated collapse volume of 0.272 km³ to reproduce the observed tsunami characteristics. The simulations consist of a near-field simulation using the 3D two-layer landslide-wave model for landslide tsunami generation and a far-field simulation using the 2D Boussinesq wave model FUNWAVE-TVD (Shi et al., 2012) for tsunami propagation. The results from this coupled system agree reasonably well with the field observations.