- Title
- Transient Multi-scale Computational Fluid Dynamics (CFD) Model for Thrombus Tracking in an Assist Device Vascular Bed.
- Creator
-
Osorio, Ruben, Kassab, Alain, Divo, Eduardo, Ilie, Marcel, University of Central Florida
- Abstract / Description
-
Heart failure occurs when the heart is unable to pump blood at a sufficient rate to meet the demands of the body. Depending on the health of the heart, doctors may recommend a heart transplant, but finding a suitable donor is often a lengthy process, and the patient may be at an advanced stage of disease or may not be a suitable candidate for transplantation. In such cases, ventricular assist devices (VADs) are used. The purpose of a VAD is to help the heart pump the correct amount of blood; by doing so it relieves the load placed on the heart while giving the patient a chance for recovery. This study focuses on observing the hemodynamic effects of implementing a left ventricular assist device (LVAD) along the aortic arch and main arteries. Thrombus formation and transport is another subject of the study, because thrombi can obstruct blood flow to critical arteries, mainly the carotid and vertebral arteries. Occlusion of these can lead to a stroke, with devastating effects on neurocognitive function and even death. A multi-scale CFD analysis of a patient-specific geometry is used, together with a lumped-parameter system that provides the correct boundary conditions to simulate the whole cardiovascular system. The main goal of the study is to understand the differences in flow behavior created by the unsteady pulsatile boundary conditions. The model described in this work has a total cardiac output of 7.0 liters/minute, corresponding to a healthy heart. Two cardiac output splits are used to simulate heart-failure conditions. The first split consists of 5 liters/minute flowing through the LVAD cannula and 2 liters/minute via the aortic root. The second scenario represents critical heart failure, in which the left ventricle produces no output, giving a split of 7 liters/minute through the LVAD cannula and 0 liters/minute through the aortic root.
A statistical analysis of thrombus motion throughout the patient's aortic arch was performed to quantify the influence of pulsatile flow on the particles being tracked. Spherical particles of 2 mm, 4 mm, and 5 mm were released and included in the statistical analysis for each of the two split configurations. The study focuses on particles that escaped through the outlet boundaries of the upper arteries (right carotid, left carotid, and vertebral). Results present the statistical comparison of means for each particle diameter as well as the overall probability for the steady and unsteady flow conditions.
- Date Issued
- 2013
- Identifier
- CFE0004905, ucf:49633
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004905
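The cardiac-output splits quoted in the abstract above can be sanity-checked with a few lines of arithmetic. This is only an illustrative sketch: the flow values come from the abstract, but the names, structure, and the mL/s conversion are ours, not the dissertation's.

```python
# Illustrative check of the LVAD flow splits quoted in the abstract.
CARDIAC_OUTPUT_L_MIN = 7.0  # healthy total cardiac output (L/min)

splits = {
    "partial_support": {"lvad_cannula": 5.0, "aortic_root": 2.0},
    "critical_failure": {"lvad_cannula": 7.0, "aortic_root": 0.0},
}

def split_fractions(split):
    """Fraction of total cardiac output carried by each inflow."""
    total = sum(split.values())
    return {k: v / total for k, v in split.items()}

def l_min_to_ml_s(q):
    """Convert a flow rate from L/min to mL/s."""
    return q * 1000.0 / 60.0

for name, split in splits.items():
    # Each split must conserve the 7 L/min total cardiac output.
    assert abs(sum(split.values()) - CARDIAC_OUTPUT_L_MIN) < 1e-9
    print(name, split_fractions(split),
          round(l_min_to_ml_s(sum(split.values())), 1), "mL/s")
```

The conservation check mirrors what the lumped-parameter model enforces at the outflow boundaries: the two inflows must always sum to the prescribed cardiac output.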
- Title
- Post Conversion Correction of Non-Linear Mismatches for Time Interleaved Analog-to-Digital Converters.
- Creator
-
Parkey, Charna, Mikhael, Wasfy, Qu, Zhihua, Georgiopoulos, Michael, Myers, Brent, Wei, Lei, Chester, David, University of Central Florida
- Abstract / Description
-
Time Interleaved Analog-to-Digital Converters (TI-ADCs) utilize an architecture that enables conversion rates well beyond the capabilities of a single converter while preserving most or all of the other performance characteristics of the converters on which the architecture is based. Most of the approaches discussed here are independent of architecture; some solutions take advantage of specific architectures. Chapter 1 provides the problem formulation and reviews the errors found in ADCs, along with a brief literature review of available TI-ADC error correction solutions. Chapter 2 presents the methods and materials used in implementation and extends the state of the art for post conversion correction. Chapter 3 presents the simulation results of this work, and Chapter 4 concludes the work. The contribution of this research is threefold. A new behavioral model was developed in Simulink and MATLAB to model and test linear and nonlinear mismatch errors emulating the performance data of actual converters. The details of this model are presented, along with the results of cumulant statistical calculations of the mismatch errors, followed by a detailed explanation and performance evaluation of the extension developed in this research effort. Leading post conversion correction methods are presented, and an extension with derivations is given. It is shown that the data converter subsystem architecture developed is capable of realizing better performance than those currently reported in the literature while having a more efficient implementation.
- Date Issued
- 2015
- Identifier
- CFE0005683, ucf:50171
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005683
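As a rough illustration of the kind of cumulant statistics the abstract above mentions, the sketch below simulates a two-channel interleaved converter with made-up gain and offset mismatches and computes the first three cumulants of each channel's error. The mismatch values and test signal are invented; the dissertation's behavioral model is far more detailed.

```python
import numpy as np

n = 4096
t = np.arange(n)
ideal = np.sin(2 * np.pi * 0.1234 * t)  # test tone, cycles/sample

# Two-channel TI-ADC with illustrative per-channel gain/offset mismatch.
gains = np.array([1.00, 0.97])
offsets = np.array([0.00, 0.02])
out = ideal.copy()
for ch in range(2):
    out[ch::2] = gains[ch] * ideal[ch::2] + offsets[ch]

err = out - ideal  # interleaving error signal

def cumulants(x):
    """First three cumulants: mean, variance, third central moment."""
    m = x.mean()
    c = x - m
    return m, (c ** 2).mean(), (c ** 3).mean()

for ch in range(2):
    print("channel", ch, cumulants(err[ch::2]))
```

The offset mismatch shows up in the first cumulant of the errored channel and the gain mismatch in the second, which is the basic observation that cumulant-based mismatch estimation exploits.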
- Title
- The effect of free primary education programs on marriage for Kenyan women.
- Creator
-
Eisele, Joanna, Wright, James, Corzine, Harold, Rivera, Fernando, Carter, J. Scott, Pals, Heili, University of Central Florida
- Abstract / Description
-
This dissertation investigates the effect of education on the chances and age of marriage during the transition from adolescence into young adulthood among Kenyan women aged 15-22. Women who receive more education are more likely to delay marriage. The literature suggests that occupation and age at sexual debut are also significantly associated with age of marriage. This study considers how these and other factors may affect the life course of women in Kenya over time and increases our understanding of marriage predictors. Data come from the 2003 and 2008 Kenya Demographic and Health Surveys. Binary logistic and OLS regression models are used to analyze and compare the data. The results imply that while education has a statistically significant and strong positive effect on a woman's marital status as well as her age of marriage, the effect of education on age of marriage has not changed since the introduction of Kenya's free primary education program.
- Date Issued
- 2014
- Identifier
- CFE0005486, ucf:50349
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005486
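The binary logistic regression mentioned in the abstract above can be sketched on synthetic data. Everything below is fabricated for illustration (the coefficients, the variable names, the plain gradient-ascent fitter); it is not the DHS data or the dissertation's model, only a minimal demonstration that such a model recovers a negative education effect on marriage odds when one exists.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
education_years = rng.integers(0, 13, n)  # synthetic, illustrative only

# Synthetic truth: each year of education lowers the log-odds of marriage.
logit = 1.5 - 0.3 * education_years
p = 1 / (1 + np.exp(-logit))
married = rng.binomial(1, p)

X = np.column_stack([np.ones(n), education_years])

def fit_logistic(X, y, iters=500, lr=0.05):
    """Plain gradient-ascent fit of a binary logistic regression."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-X @ beta))
        beta += lr * X.T @ (y - mu) / len(y)
    return beta

beta = fit_logistic(X, married)
print("education coefficient:", round(beta[1], 3))  # recovers a negative effect
```

In practice one would use a statistics package with standard errors and survey weights; the point here is only the shape of the model.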
- Title
- The Bridging Technique: Crossing Over the Modality Shifting Effect.
- Creator
-
Alicia, Thomas, Mouloua, Mustapha, Hancock, Peter, Szalma, James, Pharmer, James, University of Central Florida
- Abstract / Description
-
Operator responsiveness to critical alarm/alert display systems must rely on fast, safe behavioral responses in order to ensure mission success in complex environments such as the operator station of an Unmanned Aerial System (UAS). An important design consideration for effective UAS interfaces is how to map these critical alarm/alert display systems to an appropriate sensory modality (e.g., visual or auditory) (Sarter, 2006). For example, if an alarm is presented during a mission in a modality that is already highly taxed or overloaded, response time (RT) can increase, decreasing operator performance (Wickens, 1976). To overcome this problem, system designers may allow the switching of the alarm display from a highly taxed to a less taxed modality (Stanney et al., 2004). However, this modality switch may produce a deleterious effect known as the Modality Shifting Effect (MSE) that erodes the expected performance gain (Spence & Driver, 1997). The goal of this research was to empirically examine a technique called bridging, which allows the transitioning of a cautionary alarm display from one modality to another while simultaneously counteracting the Modality Shifting Effect. Sixty-four participants were required to complete either a challenging visual or auditory task in a computer-based UAS simulation environment while responding to both visual and auditory alarms. An approach was selected that utilized two 1 (task modality) x 2 (switching technique) ANCOVAs and one 2 (modality) x 2 (technique) ANCOVA, using baseline auditory and visual RT as covariates, to examine differences in alarm response times when the alert modality was changed abruptly or with the bridging technique from a highly loaded sensory channel to an underloaded sensory channel. It was hypothesized that the bridging technique condition would show faster response times for a new, unexpected modality than the abrupt switching condition.
The results indicated only a marginal decrease in response times for the auditory alerts and a larger yet not statistically significant effect for the visual alerts; results were also not statistically significant for the analysis collapsed across modality. Findings suggest that there may be some benefit of the bridging technique on performance of alarm responsiveness, but further research is still needed before suggesting generalizable design guidelines for switching modalities which can apply in a variety of complex human-machine systems.
- Date Issued
- 2015
- Identifier
- CFE0005568, ucf:50283
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005568
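An ANCOVA of the kind described in the abstract above (alarm RT by switching technique, with baseline RT as covariate) reduces to a linear model. The sketch below shows that reduction on fabricated data: the 25 ms synthetic bridging benefit, the group sizes, and the noise levels are all invented, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_per = 16
# Synthetic data shaped like the design: 2 switching techniques,
# baseline RT as a covariate (all numbers illustrative, in ms).
technique = np.repeat([0, 1], n_per)         # 0 = abrupt, 1 = bridged
baseline = rng.normal(400, 40, 2 * n_per)    # baseline reaction time
rt = 120 + 0.8 * baseline - 25 * technique + rng.normal(0, 15, 2 * n_per)

# ANCOVA as a linear model: rt ~ intercept + technique + baseline.
# The technique coefficient is the covariate-adjusted group difference.
X = np.column_stack([np.ones_like(baseline), technique, baseline])
beta, *_ = np.linalg.lstsq(X, rt, rcond=None)
print("adjusted technique effect (ms):", round(beta[1], 1))
print("baseline covariate slope:", round(beta[2], 2))
```

Adjusting for baseline RT in this way is what lets the design detect a technique effect despite large individual differences in raw response speed.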
- Title
- Effect of Acute L-Alanyl-L-Glutamine (Sustamine) and Electrolyte Ingestion on Cognitive Function, Multiple Object Tracking and Reaction Time Following Prolonged Exercise.
- Creator
-
Pruna, Gabriel, Hoffman, Jay, Stout, Jeffrey, Fragala, Maren, University of Central Florida
- Abstract / Description
-
Changes in physiological function occurring during a body water deficit may result in significant decrements in performance, cognitive function, and fine motor control during exercise. This may depend on the magnitude of the body water deficit. Rehydration strategies are important to prevent these deleterious effects on performance. The purpose of this study was to examine the effect of an alanine-glutamine dipeptide (AG) on cognitive function and reaction time before and after prolonged exercise. Twelve male endurance-trained runners (age: 23.5 ± 3.7 y; height: 175.5 ± 5.4 cm; weight: 70.7 ± 7.6 kg) participated in this study. Participants were asked to run on a treadmill at 70% of their predetermined VO2max for 1 h and then at 90% of VO2max until volitional exhaustion on four separate days (T1-T4). T1 was a dehydration trial, and T2-T4 involved different hydration modalities (electrolyte drink, electrolyte drink with a low dose of AG, and electrolyte drink with a high dose of AG, respectively) in which the participants drank 250 mL every 15 min. Before and after each hour run, cognitive function and reaction tests were administered. Hopkins magnitude-based inferences were used to analyze the cognitive function and reaction time data. Results showed that physical reaction time was likely faster in the low-dose trial than in the high-dose trial. Dehydration had a possible negative effect on the number of hits in 60 s compared to both the low- and high-dose trials. Comparisons between the electrolyte drink alone and the high-dose ingestion appeared possibly negative. Analysis of lower body quickness indicated that performance in both the low- and high-dose trials likely improved (decreased) in comparison to the dehydration trial.
Multiple object tracking analysis indicated possibly greater performance for dehydration and the low dose compared to the electrolyte drink alone, while there was likely greater performance in multiple object tracking for the high-dose trial compared to consumption of the electrolyte drink only. The serial subtraction test was possibly greater in the electrolyte drink trial compared to dehydration. Rehydration with the alanine-glutamine dipeptide during an hour run at a submaximal intensity appears to maintain or enhance subsequent visual reaction time in both upper and lower body activities compared to a no-hydration trial. The alanine-glutamine dipeptide may have enhanced fluid and electrolyte absorption from the gut, and possibly into skeletal tissue, to maintain neuromuscular performance.
- Date Issued
- 2014
- Identifier
- CFE0005233, ucf:50583
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005233
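The magnitude-based inference analysis named in the abstract above essentially converts an effect estimate and its uncertainty into probabilities that the true effect is beneficial, trivial, or harmful relative to a smallest worthwhile change. The sketch below is our stripped-down, normal-approximation reading of that idea, with invented numbers; it is not the study's analysis or Hopkins' exact spreadsheet.

```python
from math import erf, sqrt

def mbi_probabilities(effect, se, swc):
    """Rough sketch of magnitude-based inference: probabilities that the
    true effect is beneficial / trivial / harmful, assuming a normal
    sampling distribution. swc = smallest worthwhile change."""
    cdf = lambda x: 0.5 * (1 + erf(x / sqrt(2)))  # standard normal CDF
    p_benefit = 1 - cdf((swc - effect) / se)      # P(true effect > +swc)
    p_harm = cdf((-swc - effect) / se)            # P(true effect < -swc)
    return p_benefit, 1 - p_benefit - p_harm, p_harm

# Illustrative numbers, not the study's data:
print(mbi_probabilities(effect=0.04, se=0.03, swc=0.01))
```

Qualitative labels such as "possibly" and "likely" in the abstract correspond to bands of these probabilities (e.g., roughly 25-75% for "possibly" and 75-95% for "likely" in Hopkins' scheme).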
- Title
- ANALYSIS OF TIME SYNCHRONIZATION ERRORS IN HIGH DATA RATE ULTRAWIDEBAND ORTHOGONAL FREQUENCY DIVISION MULTIPLEXING DATA LINKS.
- Creator
-
Bates, Lakesha, Jones, W. Linwood, University of Central Florida
- Abstract / Description
-
Emerging Ultra Wideband (UWB) Orthogonal Frequency Division Multiplexing (OFDM) systems hold the promise of delivering wireless data at high speeds, exceeding hundreds of megabits per second over typical distances of 10 meters or less. The purpose of this thesis is to estimate the timing accuracies required by such systems in order to achieve Bit Error Rates (BER) on the order of 10^-12, limiting the irreducible errors due to misaligned timing to a small absolute number of bits in error in real time at data rates of hundreds of megabits per second, so that the error correction stage is not overloaded. Our research approach involves managing bit error rates by identifying the maximum tolerable timing synchronization errors. Thus, our research goal was to determine the timing accuracies required to keep communication systems out of the asymptotic region of BER flaring at low BERs in the resultant BER curves. We propose pushing physical layer bit error rates to below 10^-12 before applying forward error correction (FEC) codes. This way, the maximum reserve is maintained for the FEC hardware to correct burst as well as recurring bit errors caused by factors other than timing synchronization errors.
- Date Issued
- 2004
- Identifier
- CFE0000197, ucf:46173
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000197
- Title
- EXAMINING ROUTE DIVERSION AND MULTIPLE RAMP METERING STRATEGIES FOR REDUCING REAL-TIME CRASH RISK ON URBAN FREEWAYS.
- Creator
-
Gayah, Vikash, Abdel-Aty, Mohamed, University of Central Florida
- Abstract / Description
-
Recent research at the University of Central Florida addressing crashes on Interstate-4 in Orlando, Florida has led to the creation of new statistical models capable of calculating the crash risk on the freeway (Abdel-Aty et al., 2004; 2005; Pande and Abdel-Aty, 2006). These models yield the rear-end and lane-change crash risk along the freeway in real time by using static information at various locations along the freeway as well as real-time traffic data obtained from the roadway. Because these models use real-time traffic data, they are capable of calculating the respective crash risk values as the traffic flow changes along the freeway. The purpose of this study is to examine the potential of two Intelligent Transportation System (ITS) strategies for reducing the crash risk along the freeway by changing the traffic flow parameters. The two ITS measures examined in this research are route diversion and ramp metering. Route diversion changes the traffic flow by keeping some vehicles from entering the freeway at one location and diverting them to another location where they can be more efficiently inserted into the freeway traffic stream. Ramp metering alters the traffic flow by delaying vehicles at the freeway on-ramps and only allowing a certain number of vehicles to enter at a time. The two strategies were tested by simulating a 36.25-mile section of the Interstate-4 network in the PARAMICS micro-simulation software. Various implementations of route diversion and ramp metering were then tested to determine not only the effects of each strategy but also how best to apply them to an urban freeway. Route diversion was found to decrease the overall rear-end and lane-change crash risk along the network from free-flow conditions to low levels of congestion. On average, the two crash risk measures were reduced between the location where vehicles were diverted and the location where they were reinserted into the network.
However, a crash migration phenomenon was observed at higher levels of congestion, as the crash risk was greatly increased at the location where vehicles were reinserted into the network. Ramp metering in the downtown area was found to be beneficial during heavy congestion. Both coordinated and uncoordinated metering algorithms showed the potential to significantly decrease the crash risk at a network-wide level. When the network is loaded with 100 percent of the vehicles, the uncoordinated strategy performed best at reducing the rear-end and lane-change crash risk values. The coordinated strategy performed best from a safety and operational perspective at moderate levels of congestion. Ramp metering also showed the potential for crash migration, so care must be taken when implementing this strategy to ensure that drivers at certain locations are not put at unnecessary risk. When ramp metering is applied to the entire freeway network, both the rear-end and lane-change crash risks are decreased further. ALINEA is found to be the best network-wide strategy in the 100 percent loading case, while a combination of Zone and ALINEA provides the best safety results in the 90 percent loading case. It should also be noted that both route diversion and ramp metering were found to increase the overall network travel time. However, the best route diversion and ramp metering strategies were selected to ensure that the operational capabilities of the network were not sacrificed in order to increase safety along the freeway. This was done by setting the maximum allowable travel time increase at 5% for any of the ITS strategies considered.
- Date Issued
- 2006
- Identifier
- CFE0001437, ucf:47054
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001437
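ALINEA, the metering algorithm named in the results above, is a simple local feedback law on downstream occupancy. A sketch with illustrative parameters (K_R = 70 veh/h per percent occupancy is the gain commonly quoted in the ramp-metering literature; the study's actual settings may differ):

```python
def alinea_step(prev_rate, occupancy, target_occupancy, k_r=70.0,
                r_min=200.0, r_max=1800.0):
    """One update of the ALINEA local ramp-metering law:
    r(k) = r(k-1) + K_R * (o_target - o_measured), clamped to bounds.
    Rates in veh/h, occupancies in percent; bounds are illustrative."""
    rate = prev_rate + k_r * (target_occupancy - occupancy)
    return min(max(rate, r_min), r_max)

rate = 900.0
for occ in [18.0, 22.0, 25.0, 21.0]:  # measured downstream occupancy (%)
    rate = alinea_step(rate, occ, target_occupancy=20.0)
    print(round(rate))
```

The feedback structure is why ALINEA is "uncoordinated" in the study's terminology: each ramp reacts only to its own downstream detector, with no network-wide coordination.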
- Title
- VOICE ONSET TIME PRODUCTION IN INDIVIDUALS WITH ALZHEIMER'S DISEASE.
- Creator
-
Baker, Julie, Ryalls, Jack, University of Central Florida
- Abstract / Description
-
In the present study, voice onset time (VOT) measurements were compared between a group of individuals with moderate Alzheimer's disease (AD) and a group of healthy age- and gender-matched peers. Participants read a list of consonant-vowel-consonant (CVC) words, which included the six stop consonants. Recordings were gathered and digitized. The VOT measurements were made from oscillographic displays obtained with the Brown Laboratory Interactive Speech System (BLISS) implemented on an IBM-compatible computer. VOT measures for the participants' six stop consonant productions were subjected to statistical analysis. The results indicated that differences in VOT values between the speakers with Alzheimer's disease and the normal control speakers were not statistically significant.
- Date Issued
- 2006
- Identifier
- CFE0001269, ucf:46918
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001269
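VOT is the interval between a stop consonant's release burst and the onset of voicing. The study above measured it by hand from oscillographic displays; the sketch below instead fakes a stop on a synthetic waveform and measures it with a crude amplitude threshold, purely to illustrate what the measurement is. All signal parameters (sample rate, burst and voicing times, thresholds) are invented.

```python
import numpy as np

fs = 10_000  # Hz
t = np.arange(int(0.2 * fs)) / fs

# Synthetic stop: silence, a 5 ms noise burst at 50 ms, voicing at 110 ms.
rng = np.random.default_rng(3)
sig = np.zeros_like(t)
burst_i, voice_i = int(0.050 * fs), int(0.110 * fs)
sig[burst_i:burst_i + int(0.005 * fs)] = \
    0.5 * rng.standard_normal(int(0.005 * fs))
sig[voice_i:] = 0.8 * np.sin(2 * np.pi * 120 * t[voice_i:])  # 120 Hz voicing

# Crude VOT: burst onset = first loud sample; voicing onset = first loud
# sample after the burst dies out (first gap longer than 10 ms).
loud = np.flatnonzero(np.abs(sig) > 0.1)
burst_onset = loud[0] / fs
gap = np.flatnonzero(np.diff(loud) > int(0.010 * fs))[0]
voicing_onset = loud[gap + 1] / fs
vot_ms = (voicing_onset - burst_onset) * 1000
print(round(vot_ms, 1), "ms")
```

A 60 ms VOT like this one would be typical of a voiceless aspirated stop; real measurement, as in the study, is done by visual inspection of the waveform rather than a fixed threshold.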
- Title
- EXAMINING DYNAMIC VARIABLE SPEED LIMIT STRATEGIES FOR THE REDUCTION OF REAL-TIME CRASH RISK ON FREEWAYS.
- Creator
-
Cunningham, Ryan, Abdel-Aty, Mohamed, University of Central Florida
- Abstract / Description
-
Recent research at the University of Central Florida involving crashes on Interstate-4 in Orlando, Florida has led to the creation of new statistical models capable of determining the crash risk on the freeway (Abdel-Aty et al., 2004; 2005; Pande and Abdel-Aty, 2006). These models are able to calculate the rear-end and lane-change crash risks along the freeway in real time through the use of static information at various locations along the freeway as well as real-time traffic data obtained from loop detectors. Since these models use real-time traffic data, they are capable of calculating rear-end and lane-change crash risk values as the traffic flow conditions change on the freeway. The objective of this study is to examine the potential benefits of variable speed limit (VSL) implementation techniques for reducing the crash risk along the freeway. Variable speed limits are an ITS strategy typically used upstream of a queue in order to reduce the effects of congestion. By lowering the speeds of the vehicles approaching a queue, more time is given for the queue to dissipate from the front before it continues to grow from the back. This study uses variable speed limit strategies in a corridor-wide attempt to reduce rear-end and lane-change crash risks where speed differences between upstream and downstream vehicles are high. The idea of homogeneous speed zones was also introduced in this study to determine the distance over which variable speed limits should be implemented from a station of interest. This is unique, since it is the first time a dynamic distance has been considered for variable speed limit implementation. Several VSL strategies were found to successfully reduce the rear-end and lane-change crash risks at low-volume traffic conditions (60% and 80% loading conditions). In every case, the most successful treatments involved lowering upstream speed limits by 5 mph and raising downstream speed limits by 5 mph.
In the free-flow condition (60% loading), the best treatments involved the more liberal threshold for defining homogeneous speed zones (5 mph) and the more liberal implementation distance (the entire speed zone), as well as a minimum time period of 10 minutes. This treatment was shown to significantly reduce the network travel time by 0.8%. It was also shown that this particular implementation strategy (lowering upstream, raising downstream) is wholly resistant to the effects of crash migration in the 60% loading scenario. In the condition approaching congestion (80% loading), the best treatment again involved the more liberal threshold for homogeneous speed zones (5 mph), yet the more conservative implementation distance (half the speed zone), along with a minimum time period of 5 minutes. This treatment emerged as the best due to its unique capability to resist the increasing effects of crash migration in the 80% loading scenario. It was shown that the treatments implemented over half the speed zone were more robust against crash migration than other treatments. The best treatment showed the greatest benefit in reduced sections and the greatest resistance to crash migration in other sections. In the 80% loading scenario, the best treatment increased the network travel time by less than 0.4%, which is deemed acceptable. No treatment was found to successfully reduce the rear-end and lane-change crash risks in the congested traffic condition (90% loading). This is attributed to the fact that, in the congested state, the speed of vehicles is governed by the surrounding traffic conditions rather than the posted speed limit; therefore, changing the posted speed limit does not affect vehicle speeds in the desired manner. These conclusions agree with Dilmore (2005).
- Date Issued
- 2007
- Identifier
- CFE0001723, ucf:47309
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001723
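The abstract above does not spell out how detector stations are grouped into homogeneous speed zones, so the sketch below is a guess at one plausible rule: grow a zone while each next station's speed stays within the threshold (5 mph, the study's liberal setting) of the zone's running mean. The grouping rule and the example speeds are ours, not the thesis's.

```python
def speed_zones(station_speeds, threshold=5.0):
    """Group consecutive detector stations into homogeneous speed zones.
    A station joins the current zone while its speed is within `threshold`
    mph of the zone's running mean (hypothetical rule, for illustration)."""
    zones, current = [], [station_speeds[0]]
    for v in station_speeds[1:]:
        mean = sum(current) / len(current)
        if abs(v - mean) <= threshold:
            current.append(v)
        else:
            zones.append(current)
            current = [v]
    zones.append(current)
    return zones

# Stations slowing from ~60 mph into a ~44 mph pocket and back out:
print(speed_zones([62, 60, 58, 45, 43, 44, 61]))
```

Under a rule like this, the VSL treatment would then lower limits over the upstream (faster) zone and raise them over the downstream (slower) zone, as in the study's best-performing treatments.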
- Title
- A STUDY OF EQUATORIAL IONOSPHERIC VARIABILITY USING SIGNAL PROCESSING TECHNIQUES.
- Creator
-
Wang, Xiaoni, Eastes, Richard, University of Central Florida
- Abstract / Description
-
The dependence of the equatorial ionosphere on solar irradiance and geomagnetic activity is studied in this dissertation using signal processing techniques. Statistical time series, digital signal processing, and wavelet methods are applied to study the ionospheric variations. The ionospheric data used are the Total Electron Content (TEC) and the critical frequency of the F2 layer (foF2). Solar irradiance data come from recent satellites: the Student Nitric Oxide Explorer (SNOE) satellite and the Thermosphere Ionosphere Mesosphere Energetics Dynamics (TIMED) satellite. The Disturbance Storm-Time (Dst) index is used as a proxy for geomagnetic activity in the equatorial region. The results are summarized as follows. (1) In short-term variations (< 27 days), the solar irradiances of the previous three days have a significant correlation with the present day's TEC and may contribute 18% of the total variation in the TEC. The 3-day delay between solar irradiance and TEC suggests the effect of neutral densities on the ionosphere. The correlations between solar irradiances and TEC are significantly higher than those using the F10.7 flux, a conventional proxy for the short-wavelength band of solar irradiance. (2) For variations < 27 days, solar soft X-rays show similar or higher correlations with ionospheric electron densities than the Extreme Ultraviolet (EUV). The correlations between solar irradiances and foF2 decrease from morning (0.5) to afternoon (0.1). (3) Geomagnetic activity plays an important role in short-term ionospheric variations < 10 days. The average correlation between TEC and Dst is 0.4 at the 2-3, 3-5, 5-9, and 9-11 day scales, which is higher than that between foF2 and Dst. The correlations between TEC and Dst increase from morning to afternoon. Moderate/quiet geomagnetic activity plays a distinct role in these short-term variations of the ionosphere (~0.3 correlation).
- Date Issued
- 2007
- Identifier
- CFE0001602, ucf:47188
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001602
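Finding (1) above, the 3-day delay between solar irradiance and TEC, is the kind of result a lagged cross-correlation scan produces. The sketch below fabricates a white-noise "irradiance" series and a TEC series that responds to it three days later, then recovers the lag; it illustrates the method only, not the dissertation's data or its actual time-series models.

```python
import numpy as np

rng = np.random.default_rng(4)
days = 365
irradiance = rng.standard_normal(days)  # fabricated daily series

# Fabricated TEC that responds to irradiance with a 3-day delay plus noise.
tec = 0.6 * np.roll(irradiance, 3) + 0.8 * rng.standard_normal(days)

def lag_correlation(x, y, lag):
    """Pearson correlation between x delayed by `lag` days and y."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

corrs = {k: lag_correlation(irradiance, tec, k) for k in range(7)}
best = max(corrs, key=corrs.get)
print("best lag (days):", best, "r =", round(corrs[best], 2))
```

Scanning lags like this and reading off the peak is the standard way a multi-day driver-response delay, such as the 3-day irradiance-TEC delay, shows up in the data.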
- Title
- STUDIES OF A QUANTUM SCHEDULING ALGORITHM AND ON QUANTUM ERROR CORRECTION.
- Creator
-
Lu, Feng, Marinescu, Dan, University of Central Florida
- Abstract / Description
-
Quantum computation has been a rich field of study for decades because it promises possible spectacular advances, some of which may run counter to our classically rooted intuitions. At the same time, quantum computation is still in its infancy in both theoretical and practical areas. Efficient quantum algorithms are very limited in number and scope, and no real breakthrough has yet been achieved in physical implementations. Grover's search algorithm can be applied to a wide range of problems; even problems not generally regarded as searching problems can be reformulated to take advantage of quantum parallelism and entanglement, leading to algorithms that show a square-root speedup over their classical counterparts. This dissertation discusses a systematic way to formulate such problems and gives as an example a quantum scheduling algorithm for an R||C_max problem. It shows that a quantum solution to such problems is not only feasible but in some cases advantageous. The complexity of the error correction circuitry forces us to design quantum error correction codes capable of correcting only a single error per error correction cycle. Yet time-correlated errors are common in physical implementations of quantum systems; an error corrected during a certain cycle may reoccur in a later cycle due to physical processes specific to each implementation of the qubits. This dissertation discusses quantum error correction for a restricted class of time-correlated errors in a spin-boson model. The proposed algorithm allows the correction of two errors per error correction cycle, provided that one of them is time-correlated. The algorithm can be applied to any stabilizer code, perfect or non-perfect, and simplifies the circuit complexity significantly compared to classic quantum error correction codes.
- Date Issued
- 2007
- Identifier
- CFE0001873, ucf:47391
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001873
- Title
- THE USE OF THE UCF DRIVING SIMULATOR TO TEST THE CONTRIBUTION OF LARGER SIZE VEHICLES (LSVS) IN REAR-END COLLISIONS AND RED LIGHT RUNNING ON INTERSECTIONS.
- Creator
-
Harb, Rami, Radwan, Essam, University of Central Florida
- Abstract / Description
-
Driving safety has been an issue of great concern in the United States throughout the years. According to the National Center for Statistics and Analysis (NCSA), in 2003 alone there were 6,267,000 crashes in the U.S., of which 1,915,000 were injury crashes, including 38,764 fatal crashes and 43,220 human casualties. The U.S. Department of Transportation spends millions of dollars every year on research that aims to improve roadway safety and decrease the number of traffic collisions. In spring 2002, the Center for Advanced Traffic System Simulation (CATSS) at the University of Central Florida acquired a sophisticated reconfigurable driving simulator. This simulator, which consists of a late-model truck cab or passenger vehicle cab mounted on a motion base capable of operating with six degrees of freedom, is a valuable tool for traffic studies. Two applications of the simulator are to study the contribution of Light Truck Vehicles (LTVs) to potential rear-end collisions, the most common type of crash, accounting for about a third of U.S. traffic crashes, and the involvement of Larger Size Vehicles (LSVs) in red light running. LTVs can obstruct horizontal visibility for the following car driver, which has been a major issue, especially at unsignalized intersections. The sudden stop of an LTV, while the following driver's view is blocked, may deprive the following vehicle of sufficient response time, leading to a high probability of a rear-end collision. As for LSVs, they can obstruct the vertical visibility of the traffic light for the succeeding car driver at signalized intersections, producing potential red light running. Two sub-scenarios were developed in the UCF driving simulator for each of the vertical and horizontal visibility blockage scenarios.
The first sub-scenario is the base sub-scenario for both scenarios, in which the simulator car follows a passenger car; the second sub-scenario is the test sub-scenario, in which the simulator car follows an LTV for the horizontal visibility blockage scenario and an LSV for the vertical visibility blockage scenario. A suggested solution for the vertical visibility blockage of the traffic light, consisting of adding a traffic signal pole on the right side of the road, was also designed in the driving simulator. The results showed that LTVs produce more rear-end collisions at unsignalized intersections due to horizontal visibility blockage and following drivers' behavior. The results also showed that LSVs contribute significantly to red light running at signalized intersections and that the addition of a traffic signal pole on the right side of the road reduces the red light running probability.
- Date Issued
- 2005
- Identifier
- CFE0000626, ucf:46513
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000626
- Title
- REAL TIME REVERSE TRANSCRIPTION-POLYMERASE CHAIN REACTION FOR DIRECT DETECTION OF VIABLE MYCOBACTERIUM AVIUM SUBSPECIES PARATUBERCULOSIS IN CROHN'S DISEASE PATIENTS AND ASSOCIATION OF MAP INFECTION WITH DOWNREGULATION IN INTERFERON-GAMMA RECEPTOR (IFNGR1) GENE IN CROHN'S DISEASE PATIENTS.
- Creator
-
Chehtane, Mounir, Naser, Saleh, University of Central Florida
- Abstract / Description
-
Association of Mycobacterium avium subspecies paratuberculosis (MAP) with Crohn's disease (CD) and not with ulcerative colitis (UC), two forms of inflammatory bowel disease (IBD), has been vigorously debated in recent years. This theory has been strengthened by recent culture of MAP from breast milk, intestinal tissue, and blood from patients with active Crohn's disease. Culture of MAP from clinical samples remains challenging due to the fastidious nature of MAP, including its lack of cell wall in infected patients. The advent of real time PCR has proven to be significant in infectious disease diagnostics. In this study, a real time reverse transcriptase PCR (RT-PCR) assay based on targeting mRNA of the IS900 gene unique to MAP was developed. All variables involved in RNA isolation, cDNA synthesis, and real time PCR amplification were optimized. Oligonucleotide primers were designed to amplify a 165 bp fragment specific to MAP, and the assay demonstrated a sensitivity of 4 genomes per sample. In the hope that this real time RT-PCR assay may aid in the detection of viable MAP cells in Crohn's disease patients, a total of 45 clinical samples were analyzed. A portion of each sample was also subjected to 12-week culture followed by standard nested PCR analysis. The samples consisted of 17 cultures (originating from 13 CD, 1 UC, and 3 NIBD subjects), 24 buffy coat blood samples (originating from 7 CD, 2 UC, 11 NIBD, and 4 healthy subjects), and 4 intestinal biopsies from 2 CD patients. Real time RT-PCR detected viable MAP in 11/17 (65%) of suspected cultures compared to 12/17 (70%) by nested PCR, including 77% and 84% from CD samples by the two methods, respectively. Real time RT-PCR detected MAP RNA directly from 3/7 (42%) CD, 2/2 (100%) UC, and 0/4 healthy controls, similar to the results following long term culture incubation and nested PCR analysis. Interestingly, real time RT-PCR detected viable MAP in 2/11 (13%) NIBD patients compared to 4/11 (26%) by culture and nested PCR.
For tissue samples, real time RT-PCR detected viable MAP in one CD patient, while the culture outcome remains pending. This study clearly indicates that a 12-hr real time RT-PCR assay provided data similar to those from 12-week culture and nested PCR analysis; consequently, use of real time RT-PCR may substantially shorten the time needed to detect viable MAP. In our laboratory, we previously demonstrated a possible downregulation of the Interferon-gamma receptor gene (IFNGR1) in patients with active Crohn's disease using microarray chip analysis. In this study, measurement of RNA by real time qRT-PCR indicated a possible downregulation in 5/6 CD patients compared to 0/12 controls. The preliminary data suggest that the downregulation of the IFNGR1 gene and the detection of viable MAP in CD patients provide the strongest evidence yet for the linkage between MAP and CD etiology.
- Date Issued
- 2005
- Identifier
- CFE0000629, ucf:46504
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000629
- Title
- ESTIMATION OF HYBRID MODELS FOR REAL-TIME CRASH RISK ASSESSMENT ON FREEWAYS.
- Creator
-
pande, anurag, Abdel-Aty, Mohamed, University of Central Florida
- Abstract / Description
-
Relevance of reactive traffic management strategies such as freeway incident detection has been diminishing with advancements in mobile phone usage and video surveillance technology. On the other hand, the capacity to collect, store, and analyze traffic data from underground loop detectors has witnessed enormous growth in the recent past. These two facts together provide both the motivation and the means to shift the focus of freeway traffic management toward proactive strategies that would involve anticipating incidents such as crashes. The primary element of a proactive traffic management strategy would be model(s) that can separate 'crash prone' conditions from 'normal' traffic conditions in real-time. The aim of this research is to establish relationship(s) between historical crashes of specific types and corresponding loop detector data, which may be used as the basis for classifying real-time traffic conditions into 'normal' or 'crash prone' in the future. In this regard, traffic data were also collected for cases which did not lead to crashes (non-crash cases) so that the problem may be set up as binary classification. A thorough review of the literature suggested that existing real-time crash 'prediction' models (classification or otherwise) are generic in nature, i.e., a single model has been used to identify all crashes (such as rear-end, sideswipe, or angle), even though traffic conditions preceding crashes are known to differ by type of crash. Moreover, a generic model would yield no information about the type of collision most likely to occur. To be able to analyze different groups of crashes independently, a large database of crashes reported during the 5-year period from 1999 through 2003 on the Interstate-4 corridor in Orlando was collected. The 36.25-mile instrumented corridor is equipped with 69 dual loop detector stations in each direction (eastbound and westbound), located approximately every ½ mile.
These stations report speed, volume, and occupancy data every 30 seconds from the three through lanes of the corridor. Geometric design parameters for the freeway were also collected and collated with the historical crash and corresponding loop detector data. The first group of crashes to be analyzed were the rear-end crashes, which account for about 51% of the total crashes. Based on preliminary explorations of average traffic speeds, rear-end crashes were grouped into two mutually exclusive groups: those occurring under extended congestion (referred to as regime 1 traffic conditions) and those occurring with relatively free-flow conditions (referred to as regime 2 traffic conditions) prevailing 5-10 minutes before the crash. Simple rules to separate these two groups of rear-end crashes were formulated based on the classification tree methodology. It was found that the first group of rear-end crashes can be attributed to parameters measurable through loop detectors, such as the coefficient of variation in speed and average occupancy at stations in the vicinity of the crash location. For the second group of rear-end crashes (regime 2), traffic parameters such as average speed and occupancy at stations downstream of the crash location were significant, along with off-line factors such as the time of day and the presence of an on-ramp in the downstream direction. It was found that regime 1 traffic conditions make up only about 6% of the traffic conditions on the freeway; almost half of the rear-end crashes occurred under regime 1 even with such little exposure. This observation led to the conclusion that freeway locations operating under regime 1 traffic may be flagged for (rear-end) crashes without any further investigation. MLP (multilayer perceptron) and NRBF (normalized radial basis function) neural network architectures were explored to identify regime 2 rear-end crashes.
The performance of the individual neural network models was improved by hybridizing their outputs. Individual and hybrid PNN (probabilistic neural network) models were also explored, along with matched case-control logistic regression. The stepwise selection procedure yielded a matched logistic regression model indicating the difference between average speeds upstream and downstream of the crash location as significant. Even though the model provided good interpretation, its classification accuracy over the validation dataset was far inferior to the hybrid MLP/NRBF and PNN models. Hybrid neural network models, along with the classification tree model (developed to identify the traffic regimes), were able to identify about 60% of the regime 2 rear-end crashes, in addition to all regime 1 rear-end crashes, with a reasonable number of positive decisions (warnings). This translates into identification of more than ¾ (77%) of all rear-end crashes. Classification models were then developed for the next most frequent type, i.e., lane-change related crashes. Based on preliminary analysis, it was concluded that location-specific characteristics, such as the presence of ramps, mile-post location, etc., were not significantly associated with these crashes. The average difference between occupancies of adjacent lanes and the average speeds upstream and downstream of the crash location were found significant. The significant variables were then used as inputs to MLP and NRBF based classifiers. The best models in each category were hybridized by averaging their respective outputs. The hybrid model significantly improved on the crash identification achieved through the individual models, and 57% of the crashes in the validation dataset could be identified with 30% warnings.
Although the hybrid models in this research were developed with corresponding data for rear-end and lane-change related crashes only, it was observed that about 60% of the historical single vehicle crashes (other than rollovers) could also be identified using these models. The majority of the identified single vehicle crashes, according to the crash reports, were caused by evasive actions taken by drivers to avoid another vehicle in front or in the other lane. Vehicle rollover crashes were found to be associated with speeding and the curvature of the freeway section; the established relationship, however, was not sufficient to identify the occurrence of these crashes in real-time. Based on the results of the modeling procedure, a framework for parallel real-time application of these two sets of models (rear-end and lane-change) in the form of a system was proposed. To identify rear-end crashes, the data are first subjected to the classification tree based rules to identify traffic regimes. If traffic patterns belong to regime 1, a rear-end crash warning is issued for the location. If the patterns are identified as regime 2, they are subjected to the hybrid MLP/NRBF model employing traffic data from five surrounding traffic stations. If the model identifies the patterns as crash prone, the location may be flagged for a rear-end crash; otherwise a final check for a regime 2 rear-end crash is applied to the data through the hybrid PNN model. If data from five stations are not available due to intermittent loop failures, the system is provided with the flexibility to switch to models with more tolerant data requirements (i.e., models using traffic data from only one station or three stations).
To assess the risk of a lane-change related crash, if all three lanes at the immediate upstream station are functioning, the hybrid of the two best individual neural network models (NRBF with three hidden neurons and MLP with four hidden neurons) is applied to the input data. A warning for a lane-change related crash may be issued based on its output. The proposed strategy is demonstrated over a complete day of loop data in a virtual real-time application. It was shown that the system of models may be used to continuously assess and update the risk for rear-end and lane-change related crashes. The system developed in this research should be perceived as the primary component of a proactive traffic management strategy. The output of the system, along with the knowledge of variables critically associated with specific types of crashes identified in this research, can be used to formulate ways of avoiding impending crashes. However, specific crash prevention strategies, e.g., variable speed limits and warnings to commuters, demand separate attention and should be addressed through thorough future research.
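The hybridization step described in this abstract, averaging the outputs of two trained classifiers and thresholding the result, can be sketched as follows. The logistic weights, the radial-basis center and width, and the 0.5 threshold are all hypothetical stand-ins for illustration, not the parameters fitted in the dissertation:

```python
import numpy as np

# Toy stand-ins for two trained classifiers (hypothetical weights): each maps
# a feature vector of loop-detector measures to a crash-risk score in (0, 1).
def mlp_score(x, w=np.array([0.9, -0.4, 0.3])):
    # Logistic output of a (degenerate, single-layer) perceptron-style model.
    return 1.0 / (1.0 + np.exp(-x @ w))

def nrbf_score(x, center=np.array([1.0, 0.0, 1.0]), width=1.5):
    # Gaussian radial-basis response centered on a 'crash prone' prototype.
    return np.exp(-np.sum((x - center) ** 2, axis=-1) / (2.0 * width ** 2))

def hybrid_score(x):
    # Hybridization by averaging the two model outputs.
    return 0.5 * (mlp_score(x) + nrbf_score(x))

def classify(x, threshold=0.5):
    # Flag 'crash prone' when the averaged score exceeds the threshold.
    return hybrid_score(x) >= threshold

# Hypothetical features: speed variation, occupancy, upstream-downstream speed diff.
x = np.array([1.2, -0.1, 0.8])
risk = hybrid_score(x)
```

Averaging keeps the combined score between the two individual scores, so neither model can be overruled by an extreme output of the other.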
- Date Issued
- 2005
- Identifier
- CFE0000842, ucf:46659
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000842
- Title
- UTILIZING A REAL LIFE DATA WAREHOUSE TO DEVELOP FREEWAY TRAVEL TIME RELIABILITY STOCHASTIC MODELS.
- Creator
-
Emam, Emam, Al-Deek, Haitham, University of Central Florida
- Abstract / Description
-
During the 20th century, transportation programs were focused on the development of the basic infrastructure for transportation networks. In the 21st century, the focus has shifted to the management and operations of these networks. The transportation network reliability measure plays an important role in judging the performance of the transportation system and in evaluating the impact of new Intelligent Transportation Systems (ITS) deployment. The measurement of transportation network travel time reliability is imperative for providing travelers with accurate route guidance information. It can be applied to generate the shortest path (or alternative paths) connecting origins and destinations, especially under conditions of varying demands and limited capacities. The measurement of transportation network reliability is a complex issue because it involves both the infrastructure and the behavioral responses of the users. The subject is also challenging because there is no single agreed-upon reliability measure. This dissertation developed a new method for estimating the effect of travel demand variation and link capacity degradation on the reliability of a roadway network. The method is applied to a hypothetical roadway network, and the results show that both travel time reliability and capacity reliability are consistent measures for the reliability of the road network, but each may have a different use. The capacity reliability measure is of special interest to transportation network planners and engineers because it addresses the issue of whether the available network capacity relative to the present or forecast demand is sufficient, whereas travel time reliability is especially interesting for network users. The new travel time reliability method is sensitive to the users' perspective since it reflects that an increase in segment travel time should always result in lower travel time reliability.
It is also an indicator of the operational consistency of a facility over an extended period of time. This initial theoretical effort and basic research was followed by applying the new method to the I-4 corridor in Orlando, Florida. This dissertation utilized a real life transportation data warehouse to estimate the travel time reliability of the I-4 corridor. Four stochastic travel time models were tested: Weibull, Exponential, Lognormal, and Normal. Lognormal was the best-fit model. Unlike mechanical equipment, no freeway segment can be traversed in zero seconds, no matter how fast the vehicles travel; an adjustment of the location parameter of the best-fit (Lognormal) model was therefore needed to accurately estimate travel time reliability. The adjusted model can be used to compute and predict the travel time reliability of freeway corridors and report this information in real time to the public through traffic management centers. Compared to the existing Florida Method and the California Buffer Time Method, the new reliability method showed higher sensitivity to geographical locations, which reflects the level of congestion and bottlenecks. The major advantages of this new method to practitioners and researchers over the existing methods are its ability to estimate travel time reliability as a function of departure time, and its treatment of travel time as a continuous variable that captures the variability experienced by individual travelers over an extended period of time. As such, the new method developed in this dissertation could be utilized in transportation planning and freeway operations for estimating travel time reliability as an important measure of performance. Then, the impacts of segment length on travel time reliability calculations were investigated utilizing the wealth of data available in the I-4 data warehouse.
The developed travel time reliability models showed significant evidence of a relationship between segment length and the accuracy of the results: the longer the segment, the less accurate the travel time reliability estimates. Accordingly, long segments (e.g., 25 miles) are more appropriate for planning purposes as a macroscopic performance measure of the freeway corridor, while short segments (e.g., 5 miles) are more appropriate for the evaluation of freeway operations as a microscopic performance measure. Further, this dissertation explored the impact of relaxing an important assumption in reliability analysis: link independence. In real life, assuming that link failures on a road network are statistically independent is dubious. The failure of a link in one particular area does not necessarily result in the complete failure of the neighboring link, but may lead to deterioration of its performance. The "Cause-Based Multimode Model" (CBMM) has been used to address link dependency in communication networks. However, the transferability of this model to transportation networks had not been tested, and this approach had not been considered before in the calculation of transportation network reliability. This dissertation presented the CBMM and applied it to predict the travel time reliability with which an origin demand can reach a specified destination under conditions of dependent, multimode link failures. The new model studied the multi-state system reliability analysis of transportation networks for which one cannot formulate an "all or nothing" type of failure criterion and in which dependent link failures are considered. The results demonstrated that the newly developed method has true potential and can be easily extended to large-scale networks as long as the data are available.
More specifically, the analysis of a hypothetical network showed that the dependency assumption is very important for obtaining more reasonable travel time reliability estimates of links, paths, and the entire network. The results showed a large discrepancy between the dependency and independency analysis scenarios. Realistic scenarios that considered the dependency assumption were on the safe side, which is important for transportation network decision makers; it could also aid travelers in making better choices. In contrast, deceptive information caused by the independency assumption could add to the travelers' anxiety associated with the unknown length of delay, which normally reflects negatively on highway agencies and the management of taxpayers' resources.
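The location-parameter adjustment described in this abstract amounts to using a shifted (three-parameter) Lognormal distribution, where the location parameter encodes the physically required minimum travel time. A minimal sketch under hypothetical segment parameters:

```python
import math

def lognormal_cdf(t, mu, sigma, gamma=0.0):
    """CDF of a three-parameter (shifted) Lognormal distribution.
    The location parameter gamma enforces a minimum travel time > 0,
    since no segment can be traversed in zero seconds."""
    if t <= gamma:
        return 0.0
    z = (math.log(t - gamma) - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))

def travel_time_reliability(t_threshold, mu, sigma, gamma):
    # Reliability here: probability the segment is traversed within t_threshold.
    return lognormal_cdf(t_threshold, mu, sigma, gamma)

# Hypothetical parameters for one segment at a given departure time:
# median shifted travel time 8 min, sigma 0.4, minimum (free-flow floor) 2 min.
r = travel_time_reliability(t_threshold=12.0, mu=math.log(8.0), sigma=0.4, gamma=2.0)
```

Without the shift (gamma = 0), the model would assign nonzero probability to impossibly short travel times; the adjustment moves that probability mass above the physical floor.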
- Date Issued
- 2006
- Identifier
- CFE0000965, ucf:46709
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000965
- Title
- EPIDEMIOLOGICAL MODELS FOR MUTATING PATHOGENS WITH TEMPORARY IMMUNITY.
- Creator
-
Singh, Neeta, Rollins, David, University of Central Florida
- Abstract / Description
-
Significant progress has been made in recent years in understanding different scenarios for disease transmission and the behavior of epidemics. A considerable amount of work has been done in modeling the dynamics of diseases by systems of ordinary differential equations, but there are very few mathematical models that deal with the genetic mutations of a pathogen. In fact, not much has been done to model the dynamics by which a pathogen mutates to escape the host's immune defense system after it has infected the host. In this dissertation we develop an SIR model with variable infection age for the transmission of a pathogen that can mutate in the host to produce a second infectious mutant strain. We assume that there is a period of temporary immunity in the model. A temporary immunity period along with variable infection age leads to an integro-differential-difference model. Previous efforts at incorporating delays in epidemic models have mainly concentrated on the inclusion of latency periods (this assumes that the force of infection at the present time is determined by the number of infectives in the past). We begin by reviewing some basic models, which are the building blocks for the later, more detailed models. Next we consider the model for mutation of the pathogen and discuss its implications. Finally, we improve this model by incorporating the delay induced by temporary immunity. We examine the influence of the delay as we establish the existence, and derive the explicit forms, of the disease-free, boundary, and endemic equilibria. We also investigate the local stability of each of these equilibria. The possibility of Hopf bifurcation, using the delay as the bifurcation parameter, is studied using both analytical and numerical solutions.
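A highly simplified numerical sketch of the two-strain idea follows. It deliberately omits the variable infection age and the temporary-immunity delay that are central to the dissertation's full integro-differential-difference model, and all rate constants are hypothetical; it only illustrates how a mutation term couples the two infective compartments:

```python
def simulate(beta1=0.4, beta2=0.3, gamma1=0.1, gamma2=0.1, m=0.02,
             s=0.99, i1=0.01, i2=0.0, r=0.0, dt=0.1, steps=2000):
    """Forward-Euler integration of a two-strain SIR system in which
    strain-1 infectives mutate into strain 2 at per-capita rate m."""
    for _ in range(steps):
        new1 = beta1 * s * i1      # new infections with the original strain
        new2 = beta2 * s * i2      # new infections with the mutant strain
        mut = m * i1               # mutation flux from strain 1 to strain 2
        ds = -new1 - new2
        di1 = new1 - gamma1 * i1 - mut
        di2 = new2 + mut - gamma2 * i2
        dr = gamma1 * i1 + gamma2 * i2
        s, i1, i2, r = s + dt * ds, i1 + dt * di1, i2 + dt * di2, r + dt * dr
    return s, i1, i2, r

s, i1, i2, r = simulate()
```

Because every outflow term reappears as an inflow elsewhere, the derivatives sum to zero and the total population fraction is conserved at each Euler step; the mutant strain grows from zero purely through the mutation flux.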
- Date Issued
- 2006
- Identifier
- CFE0001043, ucf:46801
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001043
- Title
- AN IMPACT EVALUATION OF U.S. ARMS EXPORT CONTROLS ON THE U.S. DEFENSE INDUSTRIAL BASE: AN INTERRUPTED TIME-SERIES ANALYSIS.
- Creator
-
Condron, Aaron, Sweo, Robert, University of Central Florida
- Abstract / Description
-
The United States Defense Industrial Base (USDIB) is an essential industry for both the economic prosperity of the US and its strategic control over many advanced military systems and technologies. The USDIB, which encompasses the aerospace and defense industries, is a volatile industry, prone to many internal and external factors that cause demand to ebb and flow widely year over year. Among the factors that influence the volume of systems the USDIB delivers to its international customers are the arms export controls of the US. These controls impose a divergence from the historical US foreign policy of furthering an open exchange of ideas and liberalized trade. These controls, imposed by the Departments of Commerce, Defense, and State, rigidly govern all international presence of the industry. The overlapping controls create an inability to conform to rapidly changing realpolitik, leaving these controls in an archaic state. This, in turn, imposes a great deal of anxiety and expense upon managers within and outside of the USDIB. Using autoregressive integrated moving average (ARIMA) time-series analyses, this paper confirms that the implementation of, or amendment to, broad arms export controls correlates with significant and near-immediate declines in USDIB export volumes. In the context of the US's share of world arms exports, these controls impose up to a 20% decline in export volume.
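The interrupted time-series idea can be sketched with a simple segmented regression: regress the series on a trend and a post-intervention level-shift dummy, and read the shift coefficient as the estimated intervention effect. This is a simplification of the ARIMA intervention analysis used in the thesis, and the data below are synthetic with hypothetical numbers:

```python
import numpy as np

def interrupted_ts_fit(y, t0):
    """OLS fit of y on an intercept, a linear trend, and a step dummy
    that switches on at index t0 (the intervention)."""
    n = len(y)
    t = np.arange(n, dtype=float)
    step = (t >= t0).astype(float)              # post-intervention level shift
    X = np.column_stack([np.ones(n), t, step])  # design matrix
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return coef  # (intercept, trend, level-shift estimate)

# Synthetic export-volume series: trend +2/yr, then a drop of 20 units
# after year 10 (the hypothetical control change), plus small noise.
rng = np.random.default_rng(1)
t = np.arange(20)
y = 100 + 2.0 * t - 20.0 * (t >= 10) + rng.normal(0, 0.5, 20)
intercept, trend, shift = interrupted_ts_fit(y, t0=10)
```

On this synthetic series the shift coefficient recovers a value near the built-in drop of 20, illustrating how a "near-immediate decline" shows up as a negative level-shift estimate.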
- Date Issued
- 2011
- Identifier
- CFH0004064, ucf:44785
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004064
- Title
- TOWARDS THE FINITE: A CASE AGAINST INFINITY IN JORGE LUIS BORGES.
- Creator
-
SANTIS, ESTEBAN, Rodríguez Milanés, Cecilia, University of Central Florida
- Abstract / Description
-
The role of infinity as an antagonist in Jorge Luis Borges's oeuvre is undeniable. His stories in El jardín de senderos que se bifurcan (1941), Ficciones (1944), and El Aleph (1949) exhibit Borges's tendency to evoke dreams, labyrinths, mirrors, and libraries as both conduits for infinity and sources of conflict. Oftentimes, Borges's characters experience discomfort upon encountering the limitations of secular temporal succession. This discomfort is rooted in Borges's pessimism about the subject, which is explored in his most comprehensive essay on the issue of time, "A New Refutation of Time." Consequently, this thesis considers Borges's attitude towards the issue of time as postulated in "A New Refutation of Time" and exhibited in his early fiction, acknowledges infinity as a fundamental conflict in Borges's work, and proceeds to search for a solution to this conflict. The analysis in this thesis relies heavily on a comparative study of the themes and symbols in Borges's fiction in order to establish a pattern wherein infinity is portrayed negatively. More importantly, the use of interviews, biographies, and Borges's own fiction facilitates the construction of a cohesive conception of time in his work. Subsequently, this study looks to establish a solution to the problem of infinity and a new pattern wherein there is a positive resolution to the narrative. Ultimately, the goal of this thesis is to acknowledge the problem of infinity in Borges's work and then propose a way to escape it.
- Date Issued
- 2012
- Identifier
- CFH0004237, ucf:44903
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004237
- Title
- INVESTIGATION OF DAMAGE DETECTION METHODOLOGIES FOR STRUCTURAL HEALTH MONITORING.
- Creator
-
Gul, Mustafa, Catbas, F. Necati, University of Central Florida
- Abstract / Description
-
Show moreStructural Health Monitoring (SHM) is employed to track and evaluate damage and deterioration during regular operation as well as after extreme events for aerospace, mechanical and civil structures. A complete SHM system incorporates performance metrics, sensing, signal processing, data analysis, transmission and management for decision-making purposes. Damage detection in the context of SHM can be successful by employing a collection of robust and practical damage detection methodologies that can be used to identify, locate and quantify damage or, in general terms, changes in observable behavior. In this study, different damage detection methods are investigated for global condition assessment of structures. First, different parametric and non-parametric approaches are re-visited and further improved for damage detection using vibration data. Modal flexibility, modal curvature and un-scaled flexibility based on the dynamic properties that are obtained using Complex Mode Indicator Function (CMIF) are used as parametric damage features. Second, statistical pattern recognition approaches using time series modeling in conjunction with outlier detection are investigated as a non-parametric damage detection technique. Third, a novel methodology using ARX models (Auto-Regressive models with eXogenous output) is proposed for damage identification. By using this new methodology, it is shown that damage can be detected, located and quantified without the need of external loading information. Next, laboratory studies are conducted on different test structures with a number of different damage scenarios for the evaluation of the techniques in a comparative fashion. Finally, application of the methodologies to real life data is also presented along with the capabilities and limitations of each approach in light of analysis results of the laboratory and real life data.
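The time-series approach summarized above can be sketched briefly. The following is a minimal illustration, not the thesis's implementation: the ARX model orders, the synthetic single-channel vibration signals, and the residual-based damage feature are all assumptions chosen for demonstration. The idea is to fit an ARX model to baseline (healthy) response data; when later data no longer match the baseline dynamics, the one-step-ahead prediction residual grows, flagging a change in behavior.

```python
import numpy as np

def fit_arx(y, u, na=4, nb=4):
    """Fit an ARX(na, nb) model  y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j]
    by ordinary least squares; returns the coefficient vector theta."""
    n = max(na, nb)
    rows = [np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
            for k in range(n, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[n:], rcond=None)
    return theta

def residual_feature(y, u, theta, na=4, nb=4):
    """Std of the one-step-ahead prediction residual under the baseline
    model: it grows when the measured dynamics drift from the baseline,
    serving as a simple damage-sensitive feature."""
    n = max(na, nb)
    rows = [np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
            for k in range(n, len(y))]
    resid = y[n:] - np.array(rows) @ theta
    return resid.std()

# Synthetic demo: a second-order system whose AR coefficients shift when
# "damage" (e.g., a stiffness change) alters the structural dynamics.
rng = np.random.default_rng(0)
u = rng.standard_normal(2000)            # measured excitation (exogenous input)

def simulate(a1, a2):
    y = np.zeros_like(u)
    for k in range(2, len(u)):
        y[k] = a1 * y[k - 1] + a2 * y[k - 2] + u[k - 1] \
               + 0.01 * rng.standard_normal()
    return y

y_healthy = simulate(1.5, -0.7)          # baseline dynamics
y_damaged = simulate(1.3, -0.6)          # shifted poles: "damaged" state

theta = fit_arx(y_healthy, u)            # model trained on healthy data only
f_h = residual_feature(y_healthy, u, theta)
f_d = residual_feature(y_damaged, u, theta)
print(f_h, f_d)                          # damaged residual is markedly larger
```

In practice the residual statistic would be compared against a threshold derived from the baseline distribution (the outlier-detection step), and localization would use models fit per sensor or sensor pair rather than a single channel.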
- Date Issued
- 2009
- Identifier
- CFE0002830, ucf:48069
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002830
- Title
- REAL-TIME REALISTIC RENDERING OF NATURE SCENES WITH DYNAMIC LIGHTING.
- Creator
-
Boulanger, Kevin, Pattanaik, Sumanta, University of Central Florida
- Abstract / Description
-
Rendering of natural scenes has long interested the scientific community due to its numerous applications. The goal is to create images similar to what a viewer can see in real life with his or her own eyes. The main obstacle is complexity: real-life nature scenes contain a huge number of small details that are hard to model, take a long time to render, and require more memory than current computers provide. This complexity comes mainly from geometry and lighting. The goal of our research is to overcome this complexity and achieve real-time rendering of nature scenes while providing visually convincing dynamic global illumination. Our work focuses on grass and trees, as they are commonly visible in everyday life. We handle both the geometry and lighting complexity of grass to render millions of grass blades interactively with dynamic lighting. For trees, we address lighting complexity by proposing a real-time lighting model that handles indirect lighting. Our work makes extensive use of the current generation of Graphics Processing Units (GPUs) to meet the real-time requirement and to leave the CPU free to carry out other tasks.
- Date Issued
- 2008
- Identifier
- CFE0002262, ucf:47868
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002262