Undergraduate Poster Session
Submissions closed on March 2, 2026, at 11:59 PM Central Time (US & Canada).

Posters presented by undergraduates

Students are encouraged to present research and results of special projects in poster format. The session is scheduled for Saturday morning, concurrent with contributed talks.

Poster presenters will be provided with a tripod, posterboard, and binder clips. Printed posters can be any size, but 48” x 36” is the most common. Appropriate content for a poster includes, but is not limited to, a new result, a new proof of a known result, a new mathematical model, an innovative solution to a Putnam problem, or a method of solution to an applied problem. Contact organizer Emily Thomas (ethomas@csuniv.edu) if you need additional information.

Accepted Submissions:

A Curious Sangaku Wooden Tablet Puzzle — Cassie Lane <cflane@student.king.edu>

We offer a solution and interpretation of a curious old Sangaku wooden tablet puzzle involving three mutually tangent circles C0, C1, and C2, together with an isosceles triangle T. Let C0 be the largest circle, and let d denote a diameter of C0. Along d lie both the center of C1 and the base of T. The endpoints of the base are denoted by A and B, where A ∈ C0 and B ∈ C1. The problem asks: Where is the center Q of the third circle C2, which is tangent to C0, C1, and the sides of T? Surprisingly, the point Q always lies on the line perpendicular to d at B, regardless of the position of B along d.

An Economic Exploration of Post-Pandemic America — Muhkayah Akbar <makbar1@wildcat.fvsu.edu>

In this study, we investigated the primary economic drivers of inflation in the United States from 2019-2025 and how these forces can be modeled to predict future inflation trends. This was important because the years following the Covid-19 pandemic brought a rapid rise in inflation, significantly impacting American households and businesses. This created a major challenge for policymakers, who needed to understand what was pushing prices up. This study aims to uncover these key drivers, providing insight to inform future economic decisions and better prepare for rising prices. To address this, we used a Vector Autoregression (VAR) model, a statistical tool showing how economic factors influence each other over time. Our model included changes in consumer prices (CPI), money supply (M2SL), global supply chain pressures (GSCPI), unemployment changes, and personal savings. We ensured our data was stationary for the model to work correctly and chose the best number of past periods to capture complex interactions. We then performed Granger causality tests, to see if one variable could predict another; impulse response functions, to show how variables react to unexpected shocks; and forecast error variance decomposition (FEVD), to find out which shocks were most responsible for future changes in each variable. Crucially, our model passed important checks for stability and reliability, ensuring our results are trustworthy. Granger causality tests indicated that both M2SL (money supply) and the Global Supply Chain Pressure Index were significant predictors of the overall economy. However, changes in CPI did not significantly predict other variables. When analyzing shocks, a surprising increase in M2SL briefly lowered inflation and unemployment. Crucially, a GSCPI shock significantly pushed inflation higher, highlighting a strong link between supply chain issues and rising prices. The forecast error variance decomposition further showed that, beyond inflation's own past shocks, shocks from M2SL and especially GSCPI became increasingly important for understanding future inflation swings. Other variables, like M2SL and savings, were largely driven by their own trends. These results suggest that global supply chain pressures played a major role in recent inflation, supporting a "supply-side" explanation for rising prices. While money supply does influence the economy, its direct impact on inflation can be complex and short-lived. Our findings imply that strategies to strengthen supply chains could be as vital as central bank actions in controlling inflation. This deeper understanding helps improve forecasts and guides better economic policies moving forward.
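
A VAR of the kind described can be fit by ordinary least squares; the following is a minimal sketch on synthetic data (the two-variable setup, coefficients, and noise level are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a 2-variable VAR(1): y_t = A_true @ y_{t-1} + noise.
A_true = np.array([[0.5, 0.3],
                   [0.1, 0.6]])
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + 0.1 * rng.standard_normal(2)

# Fit by least squares: regress y_t on y_{t-1}.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
```

With enough observations the estimated coefficient matrix recovers the true one; impulse responses and FEVD are then computed from powers of this matrix.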

An Introduction to the Theory of Determinants — Cooper Broughton <cbrough3@cbu.edu>

Before the emergence of the subfield of abstract algebra known as linear algebra, and even before the nowadays very familiar notion of a matrix, there was a robust theory dealing solely in abstraction with the objects known as determinants. Today we tend to think of these objects as a property of a matrix, just one way of describing the matrix and its behavior, but mathematicians used to place a lot of importance on determinants as an object of study in their own right. This poster's aim is to present the determinant and its definition in a way more in line with this older view of determinants, along with some motivation for the definition, and then to develop some properties of determinants - some that are familiar and others that are perhaps less so - using the language of the theory of determinants.

Assembly and Testing of Lab-Scale Heat Exchanger for Thermal Energy Storage — Sidney Owens <sidneyowensjr0@gmail.com>

The increasing demand for efficient energy utilization has intensified research on thermal energy storage (TES) systems, where heat exchangers play a critical role in enhancing heat transfer performance. This study presents the assembly and testing of a lab-scale heat exchanger designed for thermal energy storage applications. The objective was to develop a compact, cost-effective experimental setup capable of evaluating thermal performance under controlled operating conditions. The heat exchanger was assembled using readily available materials and configured to facilitate efficient heat transfer between the heat transfer fluid and the storage medium. Instrumentation, including thermocouples and flow measurement devices, was integrated to monitor inlet and outlet temperatures, flow rates, and overall heat transfer characteristics. Experimental testing was conducted under varying flow rates and temperature conditions to assess thermal efficiency, heat transfer rate, and system stability. Results demonstrate that the assembled system effectively stores and releases thermal energy, with performance strongly influenced by flow rate and temperature gradient. The experimental findings validate the suitability of the lab-scale setup for studying heat exchanger behavior in TES systems and provide a foundation for further optimization and scaling. This work contributes to the development of efficient, small-scale thermal energy storage solutions for renewable energy and waste heat recovery applications.

Behind the numbers: A comparative study of enrollment vs. student success in the USG — Shari Pinckney <spinckn3@wildcat.fvsu.edu>

In this study, we examined whether rising enrollment in the University System of Georgia (USG) masks disparities in retention and graduation rates by ethnicity and Pell Grant recipient status. This topic is important because while Georgia’s public colleges and universities have experienced enrollment increases from 2020 to 2024, economic inequality, rising tuition costs, and shifts in public funding have the potential to obscure underlying gaps in student success. Addressing this concern is crucial to ensuring equitable educational pathways and informing policy decisions that impact Georgia’s students, particularly those from underrepresented and low-income backgrounds. We collected data from official USG enrollment reports spanning 2020–2024 across research, comprehensive, and state universities, as well as state colleges. We analyzed total enrollment, retention rates, and graduation rates in comparison to each ethnicity category and average Pell Grant percentages using Excel regression analysis. The general regression model used was Y = β0 + β1(%Black) + β2(%Hispanic) + β3(%White) + β4(%Asian) + β5(%Pell Grant) + ϵ, with Y representing the dependent variables of enrollment, retention, and graduation rates. Our results showed that although total enrollment increased, disparities persisted in retention and graduation rates for Pell Grant recipients and underrepresented ethnic groups. Specifically, while enrollment appeared inclusive, retention and graduation rates varied significantly, with Hispanic/Latino and Asian students showing relatively higher positive coefficients in retention and graduation regressions, while Pell Grant percentages and other underrepresented groups displayed lower impacts, suggesting challenges in sustained student success despite enrollment gains. These findings suggest that rising enrollment alone does not equate to equitable outcomes within the USG system. 
Economic inequality continues to influence higher education success, with Pell Grant recipients and some minority groups facing barriers to retention and graduation despite increasing enrollment figures. The implications of this study are significant for policymakers and educators seeking to address equity within Georgia's higher education institutions: it provides data-driven insight that can support conversations around improving funding models, support systems, and institutional accountability to close the gaps in student success across diverse populations.

Beyond Pass/Fail: Developing a Mastery Continuum Rubric for College Algebra — Laila McCluskey <lmccluskey@una.edu>

Mastery-based assessment is often implemented using binary pass/fail decisions; however, this approach can be insufficient in gateway mathematics courses such as College Algebra, where student understanding develops along a continuum. This interactive undergraduate research poster presents preliminary results from a faculty-collaborative study to design a mastery-based rubric that captures meaningful levels of algebraic understanding. Faculty participants evaluated authentic student work and identified distinguishing features across emerging mastery levels, informing the development of a multi-category mastery continuum. Poster visitors will engage directly with anonymized student responses by placing them along the proposed mastery continuum and comparing their classifications with faculty-derived levels. This activity highlights challenges in binary mastery judgments and illustrates how continuum-based rubrics can better support feedback, grading consistency, and instructional decision-making in early collegiate mathematics.

Bioinformatic Characterization of Variants of COL11A2 Associated with Adolescent Idiopathic Scoliosis — Tate Purvis <tpurvis@una.edu>

Adolescent idiopathic scoliosis (AIS) is a type of scoliosis characterized not only by the curvature of the spine but also by the age at which it develops. AIS occurs in individuals between the ages of 10 and 18 years old and is defined as a lateral curvature of the spine with an angle exceeding 10 degrees. "Idiopathic" refers to scoliosis without a specific cause; however, the Scoliosis Research Society (2025) found that 30% of AIS patients have family members who have had scoliosis. Having AIS can lead to several physical symptoms, including a back or rib hump, uneven shoulders, and a torso lean due to the curvature of the spine. These symptoms can result in discomfort and even back pain, particularly in the lower back. Rebello et al. (2023) found the gene COL11A2 played a key role in vertebral development. While some COL11A2 variants have been classified, 96% of them have not. Classifying these variants can help to diagnose AIS at earlier stages and prevent further spinal curvature. Clinical submissions reported in the Ensembl database categorized pathogenic, benign and unclassified variants. YASARA modeling of the COL11A2 protein was used to examine the structural characteristics relative to regions with unclassified variants. SIFT, PolyPhen, Revel, CADD and MetaLR scores were normalized and plotted to compare known pathogenic and benign variants with selected R130W and R130Q unclassified variants. Molecular dynamics simulations were performed to model 20 nanoseconds of movement in an aqueous environment for the wild type COL11A2 as well as the two variants, R130W and R130Q. The results of these methods are mixed. The analysis of the in-silico predictor scores, as well as the chemical difference between arginine and tryptophan suggested that R130W may be pathogenic. Swaps between glutamine and arginine appear in 44.8% of pathogenic missense mutations in COL11A2. 
However, gene conservation analysis yielded low scores for position 130 when compared across over 150 homologues, and the molecular dynamics simulations for both R130W and R130Q demonstrated little divergence from the wild-type COL11A2. This research will aid future studies into these and other mutations of this gene, which will help to discover any existing links to adolescent idiopathic scoliosis and ultimately improve the ability of physicians to diagnose this condition before it becomes more severe. References: Rebello, D., Wohler, E., Erfani, V., Li, G., Aguilera, A. N., Santiago-Cornier, A., Zhao, S., Hwang, S. W., Steiner, R. D., Zhang, T. J., Gurnett, C. A., Raggio, C., Wu, N., Sobreira, N., Giampietro, P. F., & Ciruna, B. (2023). COL11A2 as a candidate gene for vertebral malformations and congenital scoliosis. Human Molecular Genetics, 32(19), 2913–2928. https://doi.org/10.1093/hmg/ddad117. Scoliosis Research Society. (2025). Adolescent Idiopathic Scoliosis. https://www.srs.org/Patients/Conditions/Scoliosis/Idiopathic-Scoliosis

Computational Approaches using Machine Learning to Predict Functional Impact of SERPINA1 Missense Mutations — Lucas Hasting <lhasting@una.edu>

Bioinformatics is an important tool for genomics research, lowering the cost of and the wait for results. This research aims to use machine learning to predict the pathogenicity of variants of uncertain significance (VUS) for the Alpha-1 antitrypsin (AAT) protein associated with the *SERPINA1* gene. Data were collected through the Ensembl database, and Python was used to clean the data and to create and train the models used for prediction. The dataset contains 196 more benign observations than pathogenic, so we removed 196 benign observations for training. However, we used the full data set of 484 observations for testing to determine accuracy over all known missense swaps associated with SERPINA1. Next, four different models were used for prediction, with all parameters found using the hold-out method (aside from the neural network) with a generalization gap of 0.015. The first model is a neural network built using PyTorch: a multilayer perceptron (MLP) that uses known cases of missense swaps, their pathogenicity, and their Mutation Assessor score to predict the pathogenicity of all VUS. This is done using affine transformations whose outputs pass through a Rectified Linear Unit activation function in one hidden layer. The output layer applies another affine transformation followed by a sigmoid activation function, mapping into the open interval $(0,1)$, which is interpreted as a Bernoulli probability of benign $(0)$ versus pathogenic $(1)$. The model was optimized using stochastic gradient descent with a learning rate of 0.01, a momentum of 0.9, and a batch size of 32, and was trained by backpropagation for 10000 epochs with a log-loss function. The second model used $K$-Nearest Neighbors (KNN) via scikit-learn with $K = 10$. The third model uses a decision tree whose maximum depth was also chosen by the hold-out method.
In this model, we used Gini impurity as the measure of information. Our last model uses a random forest with a max depth of 4 and $n$ estimators with $n = 4$. We built these random forests using the bootstrap method, which builds each decision tree from a random subset of our training data. These four models determined whether a given VUS was benign or pathogenic with accuracies of $86\%, 85\%, 89\%,$ and $90\%$, respectively. These findings contribute to the understanding of how pathogenicity scores influence the clinical significance of missense swaps related to *SERPINA1* and AAT, which is associated with non-cystic-fibrosis bronchiectasis, and underscore the importance of further investigating these variants for improved clinical diagnostics.
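
Of the four models, $K$-nearest neighbors is the simplest to sketch. Here is a toy version on a single synthetic predictor score (our own stand-in; the study used real variant scores via scikit-learn):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the pathogenicity task: one synthetic score per variant,
# with label 1 (pathogenic) when the score is high. Not the study's data.
scores = rng.uniform(0, 1, 300)
labels = (scores > 0.5).astype(int)

def knn_predict(train_x, train_y, query, k=10):
    """Majority vote among the k training points nearest to the query."""
    idx = np.argsort(np.abs(train_x - query))[:k]
    return int(train_y[idx].sum() * 2 >= k)

preds = [knn_predict(scores, labels, q) for q in (0.1, 0.9)]
```

A low-scoring query is voted benign and a high-scoring one pathogenic; the real models do the same with several predictor scores per variant.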

CycleQuest: The Mayan Challenge — Jasmine Prince <jasmine.prince@lander.edu>

This project stemmed from our research into the Maya people and their mathematics, particularly their methods of calculation and the calendar that allowed them to place events far into the past and future. To teach this impressive feat of ancient mathematics, the work presented in this poster introduces an interactive virtual board game that models the 20- and 13-day cycles the Maya used to keep track of their ritual calendar. The game's core mechanics, including the modular arithmetic engine, were implemented by the student developer in a visual wheel-based UI. The game requires players to calculate corresponding calendar days using Maya modular arithmetic and cyclical counting, solving mathematical puzzles to align the calendars, complete challenges, and maintain their status. The game offers a culturally rich alternative to traditional math instruction, transforming historical number systems into interactive tools that deepen understanding and honor ancient civilizations' mathematical sophistication.
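
The two interlocking cycles the game models reduce to modular arithmetic; a minimal sketch (with the 20 day names abstracted to indices 0-19):

```python
from math import gcd

# The Maya ritual calendar pairs a 13-number cycle with a 20-name cycle;
# the combination repeats every lcm(13, 20) = 260 days.
def tzolkin_day(n):
    """Return (number 1-13, day-name index 0-19) for day n, counting from 0."""
    return n % 13 + 1, n % 20

cycle_length = 13 * 20 // gcd(13, 20)
```

Because 13 and 20 are coprime, every (number, name) pair occurs exactly once per 260-day round, which is what the wheel-alignment puzzles exercise.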

Dürer's Pentagon versus the Golden Triangle — Aleya Ebner <aaebner@student.king.edu>

In 1525, Albrecht Dürer published a treatise on geometry featuring, among many items, an algorithm for purportedly constructing a regular pentagon, starting from a regular hexagon whose consecutive vertices V0, V1, V2, V3, V4, V5 lie on a unit circle C with center O. Let r be the edge length of the hexagon. Here is the algorithm. Let A be the point of C lying on the ray from O through the midpoint of segment V1V2. With V1 and V2 as two vertices of Dürer's pentagon, let a third vertex B be the intersection of ray V0A with the circle of radius r centered at V2. Angle ∠V1V2B should be 108°; but is it? We contrast this algorithm with that of Eudoxus, from the time of Plato's Academy, who used the golden triangle.
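
The question can be probed numerically. The sketch below uses our own placement of the hexagon (inscribed in the unit circle, so r = 1) and suggests the constructed angle slightly overshoots 108°:

```python
import numpy as np

# Numerical check of the construction (a sketch; the vertex placement is
# our assumption): hexagon vertices V_k on the unit circle, side r = 1.
V = [np.array([np.cos(np.radians(60 * k)), np.sin(np.radians(60 * k))])
     for k in range(6)]
mid = (V[1] + V[2]) / 2
A = mid / np.linalg.norm(mid)            # point of C on the ray O -> midpoint

# Intersect ray V0 + t*(A - V0), t > 0, with the unit circle about V2.
d = (A - V[0]) / np.linalg.norm(A - V[0])
f = V[0] - V[2]
b, c = 2 * d @ f, f @ f - 1
t = (-b + np.sqrt(b * b - 4 * c)) / 2    # far intersection
B = V[0] + t * d

u, w = V[1] - V[2], B - V[2]
angle = np.degrees(np.arccos(u @ w / (np.linalg.norm(u) * np.linalg.norm(w))))
# angle comes out near 108.37 degrees, a bit more than a regular
# pentagon's 108.
```

Under this placement the answer to "but is it?" is no, by roughly a third of a degree.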

Enumeration of Transfer Systems on Rank 3 Lattices — Arad Ganir <arad.ganir@emory.edu>

Transfer systems are combinatorial objects that occur naturally in the study of equivariant homotopy theory. Ormsby et al. (2025) established bijections between (co)saturated transfer systems and numerous categorical constructions. This work explores various enumeration results on rank 3 lattices, including explicit counts on lattices of certain structures and a scheme to enumerate (co)saturated transfer systems using matrices.

Estimating the Fractal Dimension of the Arctic — Christian Lee <cartwris+clee@fvsu.edu>

Measuring the geometry of natural landscapes is often straightforward, yet highly irregular terrains challenge traditional approaches to quantification. These irregularities are frequently fractal in nature, exhibiting self-similar patterns that complicate conventional measurement techniques. This study investigates the fractal dimension of Arctic terrain as a metric for describing surface roughness and detecting environmental change. Using the box-counting method applied to Digital Elevation Models (DEMs) sourced from ArcticDEM satellite data (2008–2025), we analyze select Greenland regions to assess how fractal geometry can enhance terrain monitoring. By constraining data to July samples within a 10 km radius, we reduced confounding factors such as seasonal variation. Python-based algorithms generate two-dimensional slope estimates and three-dimensional visualizations, enabling fractal dimension estimation across varying box sizes. Preliminary results confirm that smaller box sizes yield more reliable representations of terrain complexity, though challenges arise from incomplete box coverage. This approach demonstrates potential for identifying subtle terrain shifts linked to climate change, such as glacial retreat and erosion, that may be missed by conventional metrics. Beyond the Arctic, the methodology offers broader applicability for analyzing diverse landscapes affected by environmental pressures. Ultimately, fractal dimension analysis provides a bridge between mathematics and climate science, offering a sensitive, scalable tool for detecting and quantifying terrain change in an era of accelerating global transformation.
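
The box-counting method itself is compact; here is a minimal sketch on a synthetic binary grid rather than DEM data (a diagonal line, whose dimension should come out near 1):

```python
import numpy as np

# Box counting: count occupied boxes N(s) at several box sizes s, then fit
# the slope of log N(s) against log(1/s) to estimate the fractal dimension.
def box_count(mask, size):
    h, w = mask.shape
    count = 0
    for i in range(0, h, size):
        for j in range(0, w, size):
            if mask[i:i + size, j:j + size].any():
                count += 1
    return count

n = 256
mask = np.zeros((n, n), dtype=bool)
mask[np.arange(n), np.arange(n)] = True     # a diagonal line, dimension 1

sizes = np.array([2, 4, 8, 16, 32])
counts = np.array([box_count(mask, s) for s in sizes])
dim = np.polyfit(np.log(1 / sizes), np.log(counts), 1)[0]
```

For real terrain the mask is replaced by elevation data and the fit is noisier, which is where the box-coverage issues mentioned above arise.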

From Solar System to Strata: Modeling the Atmospheric Heating of Cosmic Dust — Richard Britton <rbritto3@utsouthern.edu>

Cosmic dust particles continuously enter Earth's atmosphere, and knowledge of the thermal dynamics of differently sized particles is a crucial first step in understanding the distribution of extraterrestrial matter that ultimately reaches the surface. Some groups seek to link the composition of strata to that of cosmic dust being accrued by the planet during that same time period. To do this, establishing which particles entering the atmosphere will be fully ablated during their journey to the surface is an important distinction, as ablated material will have less predictable trajectories and be more randomly scattered. This study mathematically models the vertical atmospheric descent and resulting heat accumulation of spherical cosmic dust particles of different compositions as well as initial velocities. Three models are developed using Newton's Second Law to simulate particle trajectories from the mesosphere to the surface. An initial analytical model uses separation of variables under the assumption of constant atmospheric density and negligible gravity. Two subsequent models, solved via numerical integration, introduce variable atmospheric density and explicit gravity to improve heat profile accuracy. Comparative analysis across initial velocity regimes reveals that at high velocities, aerodynamic drag heavily dictates the heat profile as kinetic energy is converted to heat, whereas in low-velocity particles, gravitational acceleration dominates heat generation due to the eventual conversion of potential energy to heat. Ultimately, these preliminary models provide a foundational mathematical framework to predict the thermal survivability of cosmic dust and aid in the interpretation of geological strata.
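
The numerical models described can be sketched with a forward-Euler step. This toy version (our own parameter choices: constant g, exponential atmosphere, no ablation or radiation) tracks the drag work that becomes heat:

```python
from math import exp, pi

# Toy vertical descent under gravity and quadratic drag in an exponential
# atmosphere. All parameter values are illustrative, not the study's.
g, Cd, H, rho0 = 9.81, 1.0, 8500.0, 1.225        # SI units
radius, density = 50e-6, 3000.0                  # 50-micron silicate sphere
A = pi * radius ** 2
m = density * (4 / 3) * pi * radius ** 3

h, v, dt = 80_000.0, 0.0, 0.02                   # start at 80 km, at rest
drag_heat = 0.0                                  # cumulative drag work (J)
while h > 0:
    rho = rho0 * exp(-h / H)
    drag = 0.5 * rho * Cd * A * v * v            # drag force (N)
    drag_heat += drag * v * dt                   # power dissipated as heat
    v += (g - drag / m) * dt
    h -= v * dt
```

For a slow entry like this one, nearly all of the released potential energy ends up as drag heating and the particle lands near its (small) terminal velocity, illustrating the low-velocity regime discussed above.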

Fun with Wasserstein Distance for Self-similar Measures — Hunter Johnson <hunmjohn@ut.utm.edu>

We explore different discrete approximations for the 2-Wasserstein distance between self-similar measures defined on the unit interval.
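
One natural discrete approximation comes from the quantile (sorted-sample) formulation on the line; the example data below are ours, not the self-similar measures of the poster:

```python
import numpy as np

# On the real line, the optimal 2-Wasserstein coupling of two equal-size
# empirical measures pairs points by rank, so W2 reduces to a
# root-mean-square difference of sorted samples.
def w2_empirical(xs, ys):
    xs, ys = np.sort(xs), np.sort(ys)
    return float(np.sqrt(np.mean((xs - ys) ** 2)))

grid = np.linspace(0.0, 0.5, 101)
d_shift = w2_empirical(grid, grid + 0.25)   # a pure translation: W2 = 0.25
```

Replacing the grids with level-n approximations of two self-similar measures gives one of the discrete schemes whose convergence can then be compared.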

Helmholtz Equations with Variable Potentials on Fractal Measures — Jerry Liu <zliu19@students.kennesaw.edu>

We study a one-dimensional Helmholtz equation of the form (-\Delta_\mu + k(x)^2)u = \lambda u, where \mu is a finite Borel measure on [0,1], possibly singular or fractal, and k(x) is a continuous potential. This framework extends the theory of the fractal Laplacians studied by Bird, Ngai, and Teplyaev to include spatially varying potentials. We derive a Volterra–Stieltjes integral formulation for the eigenvalue problem and prove existence, uniqueness, and differentiability of solutions under minimal assumptions on k and \mu.

Modeling NBA Championship Probability Using Linear and Logistic Regression — Nadeem Madyun <nadeem.madyun@g.fmarion.edu>

Modern professional basketball relies heavily on advanced statistics to evaluate team performance. This project examines whether regular-season data can be used to estimate a team’s probability of winning the NBA championship. Using team-level data from the 1995-1996 season through the present, collected from Basketball-Reference.com, we apply regression modeling to analyze championship outcomes. In IBM SPSS, multiple linear regression is first used to identify the statistical factors that contribute most strongly to overall team strength. These factors include offensive and defensive efficiency metrics such as shooting percentage, turnover rate, and free-throw rate. Building on this analysis, logistic regression is then used to estimate the probability that a team wins the championship based on its regular-season performance and playoff seeding. This study demonstrates how classical statistical methods can be applied to modern sports analytics, illustrating how mathematical modeling helps explain and predict competitive success in professional basketball.

Multiplying Rabbits: Introduction to Rabbit Laminations — Michaela Carter <michaela.carter@lander.edu>

The presentation will cover the main components of Julia sets and their laminations. Laminations are a topological way of modeling the dynamics of Julia sets. It will begin with defining Julia sets and unicritical laminations. Next, images will be provided in reference to these definitions, as well as some guidance on how to read them. Following these will be a deeper explanation of how to identify the correct Julia set when given a lamination. More diagrams will be provided to assist with visualization and stress the significance of making the correct identification. This is joint work with Dr. Brittany Duncan from University of North Georgia and Dr. Chase Worley from Lander University.

Numbers with Four Close Factorizations — Laura Holmes <lholme10@students.kennesaw.edu>

Consider n = 99,990,000, a number that has two close factorizations: 10,000 · 9,999 and 11,000 · 9,090. Generalizing this to k close factorizations, we have n = AB = (A + a_1)(B − b_1) = (A + a_2)(B − b_2) = · · · = (A + a_{k−1})(B − b_{k−1}), where 1 ≤ B ≤ A, 1 ≤ a_1 < a_2 < · · · < a_{k−1} ≤ C, and 1 ≤ b_1 < b_2 < · · · < b_{k−1} ≤ C. Here, C is our closeness measure. Our faculty mentor, Dr. Tsz Chan, previously studied numbers with three close factorizations and identified an optimal ratio R_3 = A / C^3 ≤ 0.25. The scope of our project this summer was to expand on his work and study numbers with four close factorizations. The optimal closeness ratio here was calculated to be R_4 = A / C^3 ≤ (6 + √6) / (9(2 + √6)^2) = 0.04742.... Arriving at this ratio involved proving identities and inequalities, deducing intermediate lemmas, transforming our original question into a generalized Pell-type equation, determining which numbers should be entered into that equation, and eliminating cases that we knew would yield no solutions.
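
The opening example can be verified by brute force (a sketch, not the paper's method):

```python
# Enumerate all factorizations n = (A + a)(B - b) with 0 <= a, b <= C.
def close_factorizations(n, A, B, C):
    return [(a, b) for a in range(C + 1) for b in range(C + 1)
            if (A + a) * (B - b) == n]

n, A, B = 99_990_000, 10_000, 9_999
found = close_factorizations(n, A, B, 1000)
# Includes (0, 0) for 10,000 * 9,999 and (1000, 909) for 11,000 * 9,090.
```

The theoretical work replaces this exhaustive search with Pell-type equations, which is what makes the optimal ratio computable.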

On the Gonality of Kneser Graphs — Willoughby Caine <cartwris+wcaine@fvsu.edu>

The Kneser graphs KG(n,k) are a classically studied family of graphs, wherein vertices are subsets of size k from a set of n elements, and vertices are connected if the sets they correspond to are disjoint. The chip-firing game is a combinatorial game played with poker chips, relevant to various fields of mathematics and physics (see the dollar game, the probabilistic abacus, and the Abelian sandpile model). Gonality is the graph invariant corresponding to the minimum number of chips needed to win the chip-firing game after one chip has been removed from one of the vertices of the graph. Gonality has applications in various mathematical fields, from combinatorics to algebraic geometry and coding theory, relating most notably to finding solutions to equations defining algebraic curves. Various techniques for upper and lower bounding gonality exist; using one such technique, we give (n−1) choose k as an upper bound for the gonality of KG(n,k) when n > 2k. We also apply the recent result that the invariant scramble number is a lower bound for gonality to show that the gonality of KG(n,k) is exactly (n−1) choose k when n is greater than (3k^2 + k + 2)/2, an improvement on existing results toward the conjectured n > 4k. Finally, we conjecture that an even stricter polynomial bound may hold by considering a specific scramble.
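
The chip-firing move itself is simple to state in code; a minimal sketch on the triangle K3 (whose gonality is 2), not on a Kneser graph:

```python
# Firing a vertex sends one chip along each incident edge.
def fire(chips, adj, v):
    chips = chips.copy()
    chips[v] -= len(adj[v])
    for u in adj[v]:
        chips[u] += 1
    return chips

# Triangle K3, adjacency lists indexed by vertex.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
chips = [2, 0, -1]            # vertex 2 is "in debt"
chips = fire(chips, adj, 0)   # -> [0, 1, 0]: the debt is cleared
```

Winning means finding a sequence of such moves that clears every possible debt; gonality asks for the fewest starting chips for which this is always possible.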

On the equivalence of the weak and integral formulations for a variable coefficient Helmholtz equation associated with fractal measures — Nischal Regmi <nregmi@students.kennesaw.edu>

We study a one–dimensional Helmholtz-type equation with a variable coefficient k(x) . The problem is formulated as a Volterra–Stieltjes integral equation, allowing the inclusion of singular measures beyond the continuous setting. An equivalence is obtained between weak formulations, integral representations, and derivative identities, providing an analogue of a result of Bird–Ngai–Teplyaev in the variable-coefficient setting. These results extend known analysis of fractal and measure-based Laplacians to variable-coefficient Helmholtz operators.

Physics-Based RNN for IMU-Based Sensor-to-Segment Alignment — Zeyad Chatila <zeyad.chatila@g.fmarion.edu>

In this work, a Recurrent Neural Network (RNN) model is used to predict the joint center of a one-degree-of-freedom mechanism in 2D rotation from inertial measurement unit (IMU) data. The model is trained using synthetic data and tested using experimental measurements. In half of the trained models, an additional physics-inspired signal is calculated from the IMU data and used to examine whether it can improve prediction accuracy. The results demonstrate that the RNN with physics-inspired feature engineering can effectively learn the relationship between IMU measurements and joint center location, suggesting that neural networks are a promising tool for improving kinematic estimation from sensor data.

Prüfer Transformation and Spectral Analysis for a Sturm–Liouville-Type Equation — Bailyn Hall <bhall76@ut.utm.edu>

The Prüfer transformation is a classical technique that converts a second-order differential equation into a pair of first-order equations using polar-like coordinates. By separating a solution into amplitude and phase components, the oscillatory behavior decouples from the growth of solutions — the zeros are determined entirely by the phase. This makes it a powerful tool for studying eigenvalue problems arising in vibration analysis, quantum mechanics, and mathematical physics.
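
For the classical Sturm–Liouville equation $-(p u')' + q u = \lambda w u$, the substitution takes the standard form (included here as a sketch):

```latex
u = r\sin\theta, \qquad p\,u' = r\cos\theta,
\qquad\Longrightarrow\qquad
\theta' = \frac{1}{p}\cos^{2}\theta + (\lambda w - q)\sin^{2}\theta,
\qquad
\frac{r'}{r} = \frac{1}{2}\left(\frac{1}{p} - (\lambda w - q)\right)\sin 2\theta .
```

Since $r > 0$, the zeros of $u$ occur exactly where $\theta$ is a multiple of $\pi$, which is why the phase equation alone governs oscillation and eigenvalue counting.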

Reconstructing Weighted Directed Graphs from Dynamical Systems — Donnell Wilkins <jbarnes+donnellwilkins@email.wcu.edu>

Analyzing dynamical systems is challenging due to their high dimensionality and chaotic, nonlinear behavior. Motivated by this challenge, we develop graph representations of these systems that track both the structure of states and their temporal evolution. More precisely, we introduce two reconstruction methods: a direct binning approach that discretizes phase space into grid-based nodes, and a k-nearest neighbors (k-NN) approach in which nodes are defined by local neighborhoods. In both cases, directed edges encode temporal transitions between nodes. These graph constructions capture local geometric organization, recurrent behavior, and transition structure in the discretized state space. Consequently, graph-theoretic techniques such as community detection and loop detection can be used to identify structural signatures of the underlying dynamics.
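
The direct-binning construction can be sketched in one dimension (synthetic period-2 data; our own simplification of the grid-based version):

```python
import numpy as np

# Discretize a 1-D trajectory into equal bins (the nodes) and add a
# directed edge for each observed step between bins.
def binned_transition_graph(traj, n_bins):
    lo, hi = traj.min(), traj.max()
    bins = np.minimum(((traj - lo) / (hi - lo) * n_bins).astype(int),
                      n_bins - 1)
    edges = set(zip(bins[:-1], bins[1:]))
    return bins, edges

# A period-2 orbit yields exactly two nodes cycling into each other.
traj = np.array([0.2, 0.8, 0.2, 0.8, 0.2])
bins, edges = binned_transition_graph(traj, 2)
```

On higher-dimensional data the bins become grid cells of phase space, and counting repeated transitions turns the edge set into the weighted directed graph analyzed above.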

View Submission

Shor’s Algorithm: A Mathematical Approach to Quantum Computing — Porter Allen <portera.1308@gmail.com> Icon: submission_accepted

In 1994, Dr. Peter Shor developed a quantum algorithm for factoring large integers by finding the period of a modular exponential function. This algorithm, now known as Shor’s Algorithm, enables quantum computers to compute this period far faster than any known classical method. A classical computer must fall back on a “brute-force” strategy, systematically testing candidate values; Shor’s algorithm instead uses quantum interference to make the correct period the most probable measurement outcome. An exploration of the mathematics behind this algorithm and a comparison with classical computing will be the focus of this poster.
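For contrast with the quantum approach, the classical brute-force period search and the factor-extraction step that Shor's algorithm shares with it can be sketched as follows (N = 15 and a = 7 are illustrative choices):

```python
from math import gcd

def find_period(a, N):
    """Brute-force period of f(x) = a^x mod N: the smallest r > 0 with
    a^r = 1 (mod N). This search is the step Shor's quantum algorithm
    speeds up exponentially."""
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

def factor_from_period(a, N):
    """Classical post-processing: if the period r is even and a^(r/2) is
    not -1 mod N, then gcd(a^(r/2) - 1, N) and gcd(a^(r/2) + 1, N) are
    nontrivial factors of N."""
    r = find_period(a, N)
    if r % 2 == 1:
        return None          # odd period: retry with a different base a
    half = pow(a, r // 2, N)  # modular exponentiation
    if half == N - 1:
        return None          # a^(r/2) = -1 mod N: also retry
    return gcd(half - 1, N), gcd(half + 1, N)

# factoring 15 with base a = 7: the period is 4, giving factors 3 and 5
```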

View Submission

The Kelly Criterion Used in Betting and the Stock Market — Johnny Drouillard <jdrouil1@cbu.edu> Icon: submission_accepted

This project compares the mathematical structure of the Kelly Criterion, which maximizes expected logarithmic wealth, in two settings: a repeated biased coin toss and the stock market modeled as a stochastic process. It derives the optimal fraction in each case and demonstrates that both solutions follow the same structural principle: the optimal allocation equals the expected excess return divided by the variance. By examining the two examples side by side, the project shows how the same mathematical idea applies to simple games of chance as well as to real-world investing decisions.
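The two optimal fractions can be sketched as follows; the numerical inputs are illustrative, not drawn from the project:

```python
def kelly_coin(p, b):
    """Optimal bet fraction for a biased coin paying b:1 on a win with
    win probability p: f* = p - (1 - p) / b."""
    return p - (1 - p) / b

def kelly_continuous(mu, r, sigma):
    """Continuous-time analogue for a stock with expected return mu,
    risk-free rate r, and volatility sigma: f* = (mu - r) / sigma**2,
    i.e. expected excess return divided by variance."""
    return (mu - r) / sigma ** 2

# even-money bet won 60% of the time: stake 20% of wealth each round
f_coin = kelly_coin(p=0.6, b=1.0)
# stock with 8% drift, 2% risk-free rate, 20% volatility
f_stock = kelly_continuous(mu=0.08, r=0.02, sigma=0.20)
```

Note that the continuous-time formula can exceed 1 (leverage), one reason practitioners often bet a fixed fraction of the Kelly amount.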

View Submission

The Life of Srinivasa Ramanujan: Partitions and Infinite Pi Series — Jamion Carter <jamion.carter@lander.edu> Icon: submission_accepted

This project is a study of the life of Ramanujan. We cover his biography, his work on partitions, and his infinite series for pi. The people closest to Ramanujan were his mother Komalatammal, his wife Janakiammal, and his mentor and collaborator G. H. Hardy. We begin by discussing his relationships with these people and the impact they had on him. Turning to the mathematics, we explain Ramanujan’s extensive work in number theory, the branch of mathematics that studies the patterns and relationships among the integers. A partition of a number is a way of writing it as a sum of positive integers. The poster will include photos, a description of the three partition congruences Ramanujan proved, and the asymptotic partition formula he developed with Hardy. Partition theory is significant because it enumerates combinations that appear throughout statistics and graph theory. Finally, we explain how Ramanujan’s rapidly converging infinite series for pi is used across mathematics and physics to compute many digits of pi.
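As an illustration of the objects discussed above, the partition counts p(n) can be computed with a short dynamic program, and they exhibit the first of Ramanujan's congruences, p(5n + 4) ≡ 0 (mod 5):

```python
def partition_counts(n_max):
    """Number of partitions p(n) for n = 0..n_max, i.e. the ways to
    write n as a sum of positive integers, order ignored."""
    p = [0] * (n_max + 1)
    p[0] = 1  # the empty partition of 0
    for part in range(1, n_max + 1):        # allow parts of size `part`
        for n in range(part, n_max + 1):
            p[n] += p[n - part]             # use one more copy of `part`
    return p

p = partition_counts(24)
# p(4) = 5, p(9) = 30, p(14) = 135: all divisible by 5
```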

View Submission

The Plus Topology on $R^2$ — Griffin Duncan <duncangs@appstate.edu> Icon: submission_accepted

This poster highlights the use of visualization tools for understanding calculus and topological concepts. We take a novice's view of open sets in the plane in order to define continuity of functions, which highlights why continuity on a neighbourhood matters for differentiability. In multivariable calculus, we extend the single-variable notions of derivatives and their defining limits from one dimension, with two directions of approach to a point, to two dimensions, with infinitely many ways to approach a point. This work is set against the backdrop of the usual topology on the real plane. The project then investigates the Plus Topology on the plane as an alternative backdrop for the continuity and differentiability discussed in a multivariable calculus class.

View Submission

Tracking the Trajectory: Time-Series Analysis of Inflation From 2020 to 2025 — Jaela Adams <jadams56@wildcat.fvsu.edu> Icon: submission_accepted

In this study, we investigated how the COVID-19 pandemic influenced inflation trends in the United States from the year 2020 through 2025. This topic is especially important because inflation directly affects the everyday lives of citizens and plays a crucial role in shaping the economy’s overall health. High inflation can reduce purchasing power, increase the cost of goods and services, and place additional strain on households and businesses. During the pandemic, the economy faced unprecedented disruptions, including breakdowns in global supply chains, changes in consumer demand, and fluctuating energy prices. These factors significantly influenced inflation rates, making it essential to track and understand the trends that followed. To explore this, we collected monthly data on the Consumer Price Index (CPI) and Producer Price Index (PPI) from credible sources such as the U.S. Bureau of Labor Statistics. Using Microsoft Excel, we organized the data, created charts, and applied linear regression analysis to observe how CPI and PPI shifted over time. Our results showed a sharp increase in inflation from 2020 to 2022, largely driven by supply chain disruptions, changing consumer behavior, and external shocks to food and energy markets. This supported our first hypothesis that inflation in the early pandemic years was heavily influenced by logistical and production challenges. However, contrary to our second hypothesis, inflation did not significantly decline between 2022 and 2025, despite government interventions and partial stabilization in energy markets. This finding suggests that recovery from global crises like COVID-19 is neither immediate nor linear. Economic variables are interconnected, and a single policy or market correction may not be enough to reverse inflationary trends. These results also emphasize that inflation impacts individuals differently based on factors such as income, geographic location, and spending behavior. 
Therefore, it is critical to continue monitoring inflation indicators like CPI and PPI in real time. This allows policymakers, economists, and the public to respond proactively and avoid long-term economic instability caused by unchecked inflation.
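The trend-fitting step described above amounts to ordinary least squares against time; here is a minimal sketch using made-up monthly index values, not the study's data:

```python
def linear_trend(values):
    """Ordinary least-squares slope and intercept of a series against
    time indices 0, 1, 2, ... -- the same trend line Excel's linear
    regression produces for a monthly CPI or PPI column."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    cov = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values))
    var = sum((x - x_mean) ** 2 for x in xs)
    slope = cov / var
    return slope, y_mean - slope * x_mean

# illustrative synthetic index rising 0.5 points per month from 258.0
cpi = [258.0 + 0.5 * t for t in range(12)]
slope, intercept = linear_trend(cpi)
```

A positive slope corresponds to the sustained price increases the study observed from 2020 to 2022; comparing slopes over subperiods is one way to test whether inflation later flattened.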

View Submission

Using Bioinformatics to Characterize Missense Variants of SERPINA1 Associated with Bronchiectasis — Joseph Pope <jpope7@una.edu> Icon: submission_accepted

Bioinformatics sits at the intersection of statistics, biology, and computer science. The abundance and open-source nature of clinical genome-sequencing data allow one to study genetic mutations without expensive lab equipment or wet-lab time, making bioinformatics an important tool for lowering the cost and turnaround time of genomic results. This study investigates missense swaps in the *SERPINA1* gene, which carries the instructions for producing the Alpha-1 antitrypsin (AAT) protein. AAT is a protease inhibitor created in the liver that protects the airways from pollutants. Mutations in *SERPINA1* can reduce AAT production, halt it altogether, or deform the folding of the protein so that it never reaches its intended destination: the lungs. Such an AAT deficiency can cause non-cystic fibrosis bronchiectasis. This study aims to predict the pathogenicity of the missense swaps S38F and S69F, chosen for their proximity to an alpha-helix, their possible harmful effects on other bonds throughout the protein, and their relationship to the pathogenicity of other serine-to-phenylalanine swaps. Pathogenicity scores were compared to the known pathogenic swap S77F and to the average of all known benign scores using *in-silico* prediction analysis with SIFT, PolyPhen, REVEL, MetaLR, and CADD scores. ConSurf modeling over 150 homologues yielded conservation scores and predicted whether the amino acid positions of interest were exposed, buried, structural, or functional. Molecular dynamics simulations were used to predict the movement of the protein, comparing the wild type with the selected variants of uncertain significance; these simulations indicate a statistically significant increase in movement for the uncertain variants. These findings contribute to the understanding of genetic factors influencing non-cystic fibrosis bronchiectasis and underscore the importance of further investigating these variants to improve clinical diagnostics.

View Submission

« Back to Undergrad