Title
-
EFFICIENT CONE BEAM RECONSTRUCTION FOR THE DISTORTED CIRCLE AND LINE TRAJECTORY.
-
Creator
-
Konate, Souleymane, Katsevich, Alexander, University of Central Florida
-
Abstract / Description
-
We propose an exact filtered backprojection algorithm for inversion of cone beam data in the case when the trajectory is composed of a distorted circle and a line segment. The length of the scan is determined by the region of interest, and it is independent of the size of the object. With few geometric restrictions on the curve, we show that the reconstruction is exact. Numerical experiments demonstrate good image quality.
-
Date Issued
-
2009
-
Identifier
-
CFE0002530, ucf:47669
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002530
-
-
Title
-
SYSTEM DESIGN AND OPTIMIZATION OF OPTICAL COHERENCE TOMOGRAPHY.
-
Creator
-
Akcay, Avni, Rolland, Jannick, University of Central Florida
-
Abstract / Description
-
Optical coherence imaging, including tomography (OCT) and microscopy (OCM), has been a growing research field in biomedical optical imaging in the last decade. In this imaging modality, a broadband light source, and thus one of short temporal coherence length, is used to perform imaging via interferometry. A challenge in optical coherence imaging, as in any imaging system aimed at biomedical diagnosis, is the quantification of image quality and the optimization of system components, both a primary focus of this research. We concentrated our efforts on optimizing the imaging system from two main standpoints: the axial point spread function (PSF) and practical steps towards compact, low-cost solutions. Until recently, the criteria for the quality of a system were based on speed of imaging, sensitivity, and particularly axial resolution, estimated solely from the full-width at half-maximum (FWHM) of the axial PSF under the common assumption of a Gaussian source power spectrum. As part of our work to quantify axial resolution, we first brought forth two metrics beyond the FWHM that account for side lobes in the axial PSF caused by irregularities in the shape of the source power spectrum, such as spectral dips. Subsequently, we presented a method in which the axial PSF was significantly optimized by suppressing the side lobes arising from the irregular shape of the source power spectrum. The optimization was performed by optically reshaping the source power spectrum with a programmable spectral shaper, which consequently suppressed spurious structures in the images of a layered specimen. The strength of the demonstrated approach is that reshaping is performed before imaging, eliminating the need for post-acquisition digital signal processing.
Importantly, towards optimization and objective image quality assessment in optical coherence imaging, the impact of source spectral shaping was further analyzed in a task-based assessment method grounded in statistical decision theory. Two classification tasks, a signal-detection task and a resolution task, were investigated. Results showed that reshaping the source power spectrum benefited essentially the resolution task rather than both tasks, and the importance of the specimen's local variations in index of refraction to the resolution task was demonstrated. Finally, towards optimizing OCT and OCM for use in clinical settings, we analyzed the detection electronics stage, a crucial component designed to capture the extremely weak interferometric signals encountered in biomedical and biological imaging applications. We designed and tested detection electronics to achieve a compact, low-cost solution for portable imaging units and demonstrated that the design provided performance equivalent to a commercial lock-in amplifier in terms of the system sensitivity obtained with both detection schemes.
-
Date Issued
-
2005
-
Identifier
-
CFE0000651, ucf:46527
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000651
-
-
Title
-
TEXT-IMAGE RESTORATION AND TEXT ALIGNMENT FOR MULTI-ENGINE OPTICAL CHARACTER RECOGNITION SYSTEMS.
-
Creator
-
Kozlovski, Nikolai, Weeks, Arthur, University of Central Florida
-
Abstract / Description
-
Previous research showed that combining the results of three different optical character recognition (OCR) engines (ExperVision® OCR, Scansoft OCR, and Abbyy® OCR) using voting algorithms yields a higher accuracy rate than any of the engines individually. While a voting algorithm has been realized, several aspects of automating and improving the accuracy rate needed further research. This thesis focuses on morphological image preprocessing and morphological text restoration applied before the OCR engines. The method is similar to one used in the restoration of partial fingerprints. A series of morphological dilation and erosion filters of various mask shapes and sizes was applied to text of different font sizes and types with various kinds of noise added. These images were then processed by the OCR engines, and based on the results, successful combinations of text, noise, and filters were chosen. The thesis also deals with the problem of text alignment. Each OCR engine has its own way of dealing with noise and corrupted characters; as a result, the output texts of the OCR engines have different lengths and numbers of words. This, in turn, makes it impossible to use spaces as delimiters to separate the words for processing by the voting part of the system. Text alignment determines, using various techniques, what is an extra word, what should be two or more words instead of one, which words are missing in one document compared to another, and so on. The alignment algorithm is made up of a series of shifts in the two texts to determine which parts are similar and which are not. Since errors made by OCR engines are due to visual misrecognition, in addition to simple character comparison (equal or not), a technique was developed that allows characters to be compared based on how they look.
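The dilation-and-erosion restoration described in the abstract above can be illustrated with a minimal sketch. This is not the thesis's actual pipeline (which used commercial OCR engines and varied mask shapes and sizes); it only shows how a morphological closing with an assumed 3x3 square structuring element can bridge a small break in a glyph stroke:

```python
import numpy as np

def dilate(img, size=3):
    """Binary dilation with a size x size square structuring element."""
    pad = size // 2
    padded = np.pad(img, pad, mode="constant")
    out = np.zeros_like(img)
    for dy in range(size):
        for dx in range(size):
            out |= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def erode(img, size=3):
    """Binary erosion, computed as the complement of dilating the complement."""
    return 1 - dilate(1 - img, size)

def close_gaps(img, size=3):
    """Morphological closing (dilate, then erode): fills small breaks in strokes."""
    return erode(dilate(img, size), size)

# A broken vertical stroke with a one-pixel gap, as noisy scans often produce.
glyph = np.zeros((7, 5), dtype=np.uint8)
glyph[1:3, 2] = 1
glyph[4:6, 2] = 1          # gap at row 3
restored = close_gaps(glyph)
```

Real scanned text would be thresholded to a binary image first; the thesis's contribution was identifying which filter, mask, and noise combinations work well together, which this toy does not attempt.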
-
Date Issued
-
2006
-
Identifier
-
CFE0001060, ucf:46799
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001060
-
-
Title
-
IMPROVING FMRI CLASSIFICATION THROUGH NETWORK DECONVOLUTION.
-
Creator
-
Martinek, Jacob, Zhang, Shaojie, University of Central Florida
-
Abstract / Description
-
The structure of regional correlation graphs built from fMRI-derived data is frequently used in algorithms to automatically classify brain data. Transformations are performed on the data during pre-processing to remove irrelevant or inaccurate information and ensure that an accurate representation of the subject's resting-state connectivity is attained. Our research suggests and confirms that such pre-processed data still exhibits inherent transitivity, which is expected to obscure the true relationships between regions. This obfuscation prevents known solutions from developing an accurate understanding of a subject's functional connectivity. By removing correlative transitivity, connectivity between regions is made more specific, and automated classification is expected to improve. The task of using fMRI to automatically diagnose Attention Deficit/Hyperactivity Disorder was posed by the ADHD-200 Consortium in a competition to draw in researchers and new ideas from outside the neuroimaging discipline. Researchers have since worked with the competition dataset to produce ever-increasing detection rates. Our approach was empirically tested with a known solution to this problem to compare processing of treated and untreated data, and detection rates improved in all cases, with a weighted average increase of 5.88%.
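The transitivity-removal step can be sketched with the closed-form network deconvolution of Feizi et al. (2013), a standard formulation; the abstract does not spell out the exact formula used, so treat this as an assumed stand-in. Given an observed correlation matrix G_obs, the direct-dependency matrix is G_dir = G_obs(I + G_obs)^(-1), computed by mapping each eigenvalue lam of G_obs to lam/(1 + lam). (Eigenvalues must stay clear of -1; in practice the matrix is rescaled first.)

```python
import numpy as np

def network_deconvolution(G):
    """Shrink transitive (indirect) edges in a symmetric correlation matrix G.

    Closed form G_dir = G (I + G)^(-1), applied via eigendecomposition:
    each eigenvalue lam of G maps to lam / (1 + lam).
    """
    vals, vecs = np.linalg.eigh(G)            # G is symmetric
    direct_vals = vals / (1.0 + vals)         # assumes no eigenvalue equals -1
    return (vecs * direct_vals) @ vecs.T      # V diag(f(lam)) V^T

# Toy 3-node chain: 1 correlates with 2, and 2 with 3; the observed 1-3
# correlation is purely transitive and should shrink after deconvolution.
a = 0.8
G_obs = np.array([[0.0,   a,   a * a],
                  [a,     0.0, a    ],
                  [a * a, a,   0.0  ]])
G_dir = network_deconvolution(G_obs)
```

After deconvolution, the indirect 1-3 entry collapses toward zero while the direct edges remain, which is exactly the effect the abstract credits with improving classification.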
-
Date Issued
-
2015
-
Identifier
-
CFH0004895, ucf:45410
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004895
-
-
Title
-
A COMPARATIVE ANALYSIS OF COLLEGE STUDENT SPRING BREAK DESTINATIONS: AN EMPIRICAL STUDY OF TOURISM DESTINATION ATTRIBUTES.
-
Creator
-
Tang, Tricia, Choi, Youngsoo, University of Central Florida
-
Abstract / Description
-
The tourism industry has become one of the fastest growing sectors in the world's economy, contributing 9.1% of world GDP and more than 260 million jobs worldwide (World Travel & Tourism Council, 2011). The U.S. college student market has emerged as a major segment within this sector, generating approximately $15 billion in annual domestic and international travel. Among the various travel patterns of college students, spring break travel is the most strongly motivated, with more than two million students traveling per season (Bai et al., 2004; Borgerding, 2001; Reynolds, 2004). This research, through a survey of college students majoring in hospitality and tourism management, analyzed the significance of college student perceptions of key spring break destination attributes. A total of 281 usable responses were subjected to Principal Component Analysis, which generated six dimensions: Breaking Away, Sun and Beach, Safety and Hygiene, Psychological Distance, Price and Value, and Social Exploration, comprising 24 key attributes that influence a college spring breaker's destination selection decision. An Importance-Performance Analysis (Martilla & James, 1977) was conducted based on the respondents' assessment of attributes on five of the six dimensions. The results of the IPA allowed comparison of the four most visited destinations identified by the respondents: Daytona Beach, South Beach Miami, Panama City Beach, and Clearwater Beach/Tampa. The study findings may provide valuable implications for destination service providers seeking to improve their destination's appeal in this highly competitive and lucrative market. Future research on college spring break groups in different geographic locations within the country is highly encouraged to better understand the general characteristics of this market.
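The Principal Component Analysis step mentioned above can be sketched in a few lines. The data here are synthetic stand-ins (the actual 281-response survey is not reproduced in this listing), generated so that six latent dimensions drive 24 attribute ratings, mirroring the study's six-dimensions-from-24-attributes structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the survey matrix: 281 respondents rating
# 24 destination attributes; dimensions and values are invented.
n_respondents, n_attributes = 281, 24
latent = rng.normal(size=(n_respondents, 6))        # six underlying dimensions
loadings = rng.normal(size=(6, n_attributes))
ratings = latent @ loadings + 0.5 * rng.normal(size=(n_respondents, n_attributes))

# PCA via eigendecomposition of the correlation matrix of standardized ratings.
z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)
corr = (z.T @ z) / n_respondents
eigvals, eigvecs = np.linalg.eigh(corr)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # sort descending

explained = eigvals / eigvals.sum()                 # variance explained per component
n_components = int((eigvals > 1.0).sum())           # Kaiser criterion: keep eigvals > 1
```

With six latent dimensions in the synthetic data, the first six components dominate the explained variance, which is the pattern the study's six named dimensions reflect.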
-
Date Issued
-
2012
-
Identifier
-
CFH0004193, ucf:44837
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004193
-
-
Title
-
The Feminine Margin: The Re-Imagining of One Professor's Rhetorical Pedagogy--A Curriculum Project.
-
Creator
-
Alvarez, Camila, Brenckle, Martha, Bowdon, Melody, Mauer, Barry, Weishampel, John, University of Central Florida
-
Abstract / Description
-
Writing pedagogy uses techniques that institutionalize dichotomous thinking rather than work against it. Cartesian duality has helped to create the marginalization of people, environments, and animals inherent in Western thought. Writing pedagogy based in current-traditional rhetoric uses a writing process that reinforces the hierarchical structures of Self/Other, Author/Reader, and Teacher/Student. This structure, in conjunction with capitalism, prioritizes the self and financial gain while diminishing and objectifying the other. The thought process behind the objectification and monetization of the other created the unsustainable business and life practices behind global warming, racism, sexism, and environmental destruction. A reframing of pedagogical writing practices can fight dichotomous thinking by re-imagining student writers as counter-capitalism content creators. Changing student perceptions from isolation to a transmodern, humanitarian, and feminist ethics-of-care model uses a self-reflexive ethnography to form a pedagogy of writing that challenges dichotomous thought by focusing on transparency in my teaching practice, the utilization of liminality through images, the use of technology to publish student work, and both instructor and student self-reflection as part of the writing and communication process. This practice has led me to a theory of resistance and influence that I have titled the Resistance Hurricane, a definition of digital rhetoric that includes humanitarian and feminist objectives that I have titled Electric Rhetoric, and a definition for the digitally mediated product of that rhetoric that I call Electric Blooms, or electracy, after Gregory Ulmer's term for digital media.
-
Date Issued
-
2019
-
Identifier
-
CFE0007777, ucf:52371
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007777
-
-
Title
-
Imaging through Glass-air Anderson Localizing Optical Fiber.
-
Creator
-
Zhao, Jian, Schulzgen, Axel, Amezcua Correa, Rodrigo, Pang, Sean, Delfyett, Peter, Mafi, Arash, University of Central Florida
-
Abstract / Description
-
Fiber-optic imaging systems enable imaging deep into hollow tissue tracts or organs of biological objects in a minimally invasive way, reaching regions inaccessible to conventional microscopy. This is the key technology for visualizing biological objects in biomedical research and clinical applications. A fiber-optic imaging system should deliver images of high enough quality to resolve the details of cell morphology in vivo and in real time with a miniaturized imaging unit. It also has to be insensitive to environmental perturbations, such as mechanical bending or temperature variations. In addition, both coherent and incoherent light sources should be compatible with the imaging system. It is extremely challenging for current technologies to address all these issues simultaneously. The limitation lies mainly in the deficient stability and imaging capability of fiber-optic devices and the limited image reconstruction capability of algorithms. To address these limitations, we first develop a randomly disordered glass-air optical fiber featuring a high air-filling fraction (~28.5%) and low loss (~1 dB per meter) at visible wavelengths. Due to the transverse Anderson localization effect, the randomly disordered structure can support thousands of modes, most of which demonstrate single-mode properties. By making use of these modes, the randomly disordered optical fiber provides a robust, low-loss imaging system that can transport images with higher quality than the best commercially available imaging fiber. We further demonstrate that deep-learning algorithms can be applied to the randomly disordered optical fiber to overcome the physical limitations of the fiber itself. At the initial stage, a laser-illuminated system is built by integrating a deep convolutional neural network with the randomly disordered optical fiber. Binary sparse objects, such as handwritten numbers and English letters, are collected, transported, and reconstructed using this system.
We show that this first deep-learning-based fiber imaging system performs artifact-free, lensless, and bending-independent imaging at variable working distances. In real-world applications, gray-scale biological subjects have much more complicated features. To image biological tissues, we re-design the architecture of the deep convolutional neural network and apply it to a newly designed system using incoherent illumination. The improved fiber imaging system has much higher resolution and faster reconstruction speed. We show that this new system can perform video-rate, artifact-free, lensless cell imaging. The cell imaging process is also remarkably robust with regard to mechanical bending and temperature variations. In addition, this system demonstrates stronger transfer-learning capability than the earlier deep-learning-based fiber imaging system.
-
Date Issued
-
2019
-
Identifier
-
CFE0007746, ucf:52405
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007746
-
-
Title
-
Perceptual Judgment: The Impact of Image Complexity and Training Method on Category Learning.
-
Creator
-
Curtis, Michael, Jentsch, Florian, Salas, Eduardo, Szalma, James, Boloni, Ladislau, University of Central Florida
-
Abstract / Description
-
The goal of this dissertation was to bridge the gap between perceptual learning theory and training application. Visual perceptual skill has been a vexing topic in training science for decades. In complex task domains, from aviation to medicine, visual perception is critical to task success. Despite this, little, if any, emphasis is dedicated to developing perceptual skills through training. Much of this may be attributed to the perceived inefficiency of perceptual training. Recent applied research in perceptual training with discrimination training, however, holds promise for improved perceptual training efficiency. As with all applied research, it is important to root application in solid theoretical bases. In perceptual learning, the challenge is connecting the basic science to more complex task environments. Using a common aviation task as an applied context, participants were assigned to a perceptual training condition based on variations of image complexity and training type. Following the training, participants were tested for transfer of skill. This was intended to help ground a potentially useful method of perceptual training in a model of category learning, while offering qualitative testing of model fit in increasingly complex visual environments. Two hundred and thirty-one participants completed the computer-based training module. Results indicate that the predictions of a model of category learning largely extend to more complex training stimuli, suggesting utility of the model in more applied contexts. Although both training method conditions showed improvement across training blocks, the discrimination training condition did not transfer to the post-training transfer tasks. This outcome was attributed to a lack of adequate contextual information in training related to the transfer task.
Further analysis of the exposure training condition showed that individuals trained with simple stimuli performed as well in a complex transfer task as individuals trained on more complex stimuli. On the other hand, individuals in the more complex training conditions were less accurate when presented with a simpler representation of the task in transfer. This suggests a training benefit from isolating essential task cues from irrelevant information in perceptual judgment tasks. In all, the study provided an informative look at both the theory and application associated with perceptual category learning. Ultimately, this research can help inform future research and training development in domains where perceptual judgment is critical for success.
-
Date Issued
-
2011
-
Identifier
-
CFE0004096, ucf:49139
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004096
-
-
Title
-
TOWARDS CALIBRATION OF OPTICAL FLOW OF CROWD VIDEOS USING OBSERVED TRAJECTORIES.
-
Creator
-
Elbadramany, Iman, Kaup, David, University of Central Florida
-
Abstract / Description
-
The need exists for a quantitative method of validating crowd simulations. One approach is to use optical flow of videos of real crowds to obtain velocities that can be compared to simulations. Optical flow, in turn, needs to be calibrated to be useful. It is essential to show that optical flow velocities obtained from crowd videos can be mapped onto the spatially averaged velocities of the observed trajectories of crowd members, and to quantify the extent of the correlation of the results. This research investigates methods to uncover the best conditions for a good correlation between optical flow and the average motion of individuals in crowd videos, with the aim that this will help in the quantitative validation of simulations. The first approach was to use a simple linear proportionality relation, with a single coefficient, alpha, between the velocity vector of the optical flow and the observed velocity of crowd members in a video or simulation. Since many variables affect alpha, an attempt was made to find the best possible conditions for determining alpha by varying experimental and optical flow settings. The measure of a good alpha was chosen to be that alpha does not vary excessively over a number of video frames. The best conditions, i.e., the lowest coefficient of variation of alpha, using the Lucas-Kanade optical flow algorithm were found when a larger aperture of 15x15 pixels was combined with a smaller threshold. Adequate results were found at a cell size of 40x40 pixels; the improvement in detecting details when smaller cells are used did not reduce the variability of alpha, and required much more computing power. The variability of alpha can be reduced by spreading the tracked location of a crowd member from a pixel into a rectangle.
The Particle Image Velocimetry optical flow algorithm corresponded better with the velocity vectors of manually tracked crowd members than results obtained using the Lucas-Kanade method. Here, also, 40x40-pixel cells were found to be better than 15x15. A second attempt at quantifying the correlation between optical flow and actual crowd member velocities was studied using simulations. Two processes were researched, both utilizing geometrical correction of the perspective distortion of the crowd videos. One process geometrically corrects the video and then obtains optical flow data; the other obtains optical flow data from the video and then geometrically corrects the data. The results indicate that the first process worked better. Correlation was calculated between sets of data obtained from the average of twenty frames; this was found to be higher than calculating correlations between the velocities of cells in each pair of frames. An experiment was carried out to predict crowd tracks using optical flow and a calculated parameter, beta, which seems to give promising results.
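The single-coefficient calibration described above amounts to a scalar least-squares fit between optical-flow velocities and tracked velocities, with the coefficient of variation of alpha across frames as the stability measure. A minimal sketch with synthetic velocities (the real data came from crowd videos; every numeric value below is invented):

```python
import numpy as np

def fit_alpha(v_flow, v_tracked):
    """Least-squares scalar alpha minimizing ||alpha * v_flow - v_tracked||^2
    over all cells and both velocity components."""
    return np.sum(v_flow * v_tracked) / np.sum(v_flow * v_flow)

def coeff_of_variation(alphas):
    """Stability measure used to compare settings: std(alpha) / mean(alpha)."""
    alphas = np.asarray(alphas, dtype=float)
    return alphas.std() / alphas.mean()

# Synthetic check: the flow field underestimates the true motion by a factor
# of two, so the fitted alpha should recover approximately 2.
rng = np.random.default_rng(1)
v_true = rng.normal(size=(50, 2))                      # 50 cells, (vx, vy)
v_flow = 0.5 * v_true + 0.02 * rng.normal(size=(50, 2))
alpha = fit_alpha(v_flow, v_true)

# Simulate 20 "frames" and measure how stable alpha is across them.
alphas = [fit_alpha(0.5 * v_true + 0.02 * rng.normal(size=(50, 2)), v_true)
          for _ in range(20)]
cv = coeff_of_variation(alphas)
```

A low coefficient of variation across frames is what the research treats as evidence of a good calibration setting; with small noise, as here, cv stays small.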
-
Date Issued
-
2011
-
Identifier
-
CFE0004024, ucf:49175
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004024
-
-
Title
-
Phonon Modulation by Polarized Lasers for Material Modification.
-
Creator
-
Chen, Sen-Yong, Kar, Aravinda, Vaidyanathan, Rajan, Harvey, James, Likamwa, Patrick, University of Central Florida
-
Abstract / Description
-
Magnetic resonance imaging (MRI) has become one of the premier non-invasive diagnostic tools, with around 60 million MRI scans performed each year. However, for patients with implanted metallic devices there is a risk of thermal injury due to radiofrequency (RF) induction heating of the tissue and the implant. In particular, MRI of patients with elongated implanted devices such as pacemakers and deep brain stimulation systems is considered contraindicated. Many efforts, such as determining preferred MRI parameters, modifying the magnetic field distribution, designing new structures, and exploring new materials, have been made to reduce the induction heating. Improving the MRI compatibility of implanted metallic devices by modifying the properties of the existing materials would be valuable. To evaluate the temperature rise due to RF induction heating of a metallic implant during an MRI procedure, an electromagnetic model and a thermal model are studied. The models consider the shape of the RF magnetic pulses, the interaction of RF pulses with a metal plate, thermal conduction inside the metal, and convection at the interface between the metal and the surroundings. The transient temperature variation and the effects of the heat transfer coefficient, reflectivity, and MRI settings on the temperature change are studied. Laser diffusion is applied to some titanium sheets for a preliminary study. An electromagnetic and thermal model is developed to choose the proper diffusant; Pt is the diffusant in this study. An electromagnetic model is also developed, based on the principles of inverse problems, to calculate the electromagnetic properties of the metals from the measured magnetic transmittance. This model is used to determine the reflectivity, dielectric constant, and conductivity of treated and as-received Ti sheets.
The treated Ti sheets show higher conductivity than the as-received Ti sheets, resulting in higher reflectivity. A beam-shaping lens system designed on the basis of vector diffraction theory is used in the laser diffusion. Designing a beam-shaping lens based on vector diffraction theory offers an improved irradiance profile and new applications such as polarized beam shaping, because the polarization nature of laser beams is taken into account. Laser Pt diffusion is applied to titanium and tantalum substrates using different laser beam polarizations. The concentrations of Pt and oxygen in those substrates are measured using Energy Dispersive X-Ray Spectroscopy (EDS), and the magnetic transmittance and conductivity of the substrates are measured as well. The effects of laser beam polarization on Pt diffusion and on the magnetic transmittance and conductivity of the substrates are studied. Treated Ti sheets show lower magnetic transmittance due to the increased conductivity from diffused Pt atoms. On the other hand, treated Ta sheets show higher magnetic transmittance due to reduced conductivity from oxidation. Linearly polarized light can enhance the Pt diffusion because of the excitation of local vibration modes of atoms. Laser Pt diffusion and thermo-treatment were applied to Ta and MP35N wires. The Pt concentration in laser-platinized Ta and MP35N wires was determined using EDS. The ultimate tensile strength, fatigue lives, and lead tip heating in a real MRI environment of those wires were measured. The lead tip heating of the platinized Ta wires is 42% less than that of the as-received Ta wire. The diffused Pt increases the conductivity of the Ta wires, resulting in more reflection of the magnetic field. In the case of the platinized MP35N wire, the reduction in lead tip heating was only 1 °C due to the low concentration of Pt. The weaker ultimate tensile strength and shorter fatigue lives of the laser-treated Ta and MP35N wires may be attributed to the oxidation and heat treatment.
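The qualitative mechanism in the abstract above (higher conductivity raises reflectivity, so less RF power is deposited and the lead tip heats less) can be illustrated with a simple lumped-capacitance energy balance. This is a toy model with invented parameter values, not the dissertation's electromagnetic/thermal model:

```python
def lead_tip_temperature(reflectivity, p_incident=1.0, h_area=0.02,
                         heat_capacity=0.05, t_ambient=37.0,
                         dt=0.1, duration=600.0):
    """Lumped-capacitance estimate of implant heating under steady RF exposure.

    dT/dt = (P_absorbed - hA * (T - T_ambient)) / (m * c)
    All parameter values are illustrative placeholders, not measured data.
    """
    p_absorbed = (1.0 - reflectivity) * p_incident  # reflected power is not deposited
    T = t_ambient
    for _ in range(int(duration / dt)):             # forward-Euler integration
        T += dt * (p_absorbed - h_area * (T - t_ambient)) / heat_capacity
    return T

# Hypothetical reflectivities for as-received vs. Pt-diffused wire.
rise_untreated = lead_tip_temperature(reflectivity=0.60) - 37.0
rise_platinized = lead_tip_temperature(reflectivity=0.80) - 37.0
```

At steady state the rise is P_absorbed / hA, so raising reflectivity from 0.60 to 0.80 halves the temperature rise in this toy, echoing (only qualitatively) the 42% reduction reported for the platinized Ta wires.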
-
Date Issued
-
2012
-
Identifier
-
CFE0004500, ucf:49269
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004500
-
-
Title
-
CHARACTERIZATION OF MOTILITY ALTERATIONS CAUSED BY THE IMPAIRMENT OF DYNEIN/DYNACTIN MOTOR PROTEIN COMPLEX.
-
Creator
-
Nandini, Swaran, King, Stephen, Kim, Yoon-Seong, Estevez, Alvaro, University of Central Florida
-
Abstract / Description
-
Transport of intracellular cargo is an important and dynamic process required for cell maintenance and survival. Dynein is the motor protein that carries organelles and vesicles from the cell periphery to the cell center along the microtubule network. Dynactin is a protein that activates dynein for this transport process. Together, dynein and dynactin form a motor protein complex that is essential for transport processes in all vertebrate cells. Using fluorescence-microscopy-based live-cell imaging techniques and kymograph analyses, I studied dynein/dynactin disruptions of intracellular transport in two different cell systems. In one set of experiments, the effects of dynein heavy chain (DHC) mutations on vesicular motility were characterized in the fungal model system Neurospora crassa. I found that many DHC mutations had a severe transport defect, while one mutation linked to neurodegeneration in mice had a subtle effect on the intracellular transport of vesicles. In a different set of experiments, in mammalian tissue culture CAD cells, I studied the effects of dynactin knockdown and dynein inhibition on mitochondrial motility. My results indicated that reductions in dynactin levels decrease the average number of mitochondrial movements and, surprisingly, increase mitochondrial run lengths. I also determined that the dynein inhibitory drug Ciliobrevin causes changes in mitochondrial morphology and decreases the number of mitochondrial movements inside cells. Overall, my research shows that distinct disruptions of the dynein and dynactin motor complex alter intracellular motility, but in different ways. So far, my studies have laid the groundwork for future experiments to analyze the motility mechanisms of motor proteins carrying mutations that lead to neurodegenerative disorders.
Show less
-
Date Issued
-
2013
-
Identifier
-
CFE0004897, ucf:49664
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004897
-
-
Title
-
THE DEVELOPMENT OF THE UNIVERSITY OF CENTRAL FLORIDA HOME MOVIE ARCHIVE AND THE HARRIS ROSEN COLLECTION.
-
Creator
-
Niedermeyer, Michael, Gordon, Fon, University of Central Florida
-
Abstract / Description
-
Since the invention of the cinema, people have been taking home movies. The ever-increasing popularity of this activity has produced a hundred years' worth of amateur film culture that is in desperate need of preservation. As film archiving and public history have coalesced in the past thirty years around the idea that every person's history is important, home movies represent a way for those histories to be preserved and studied by communities and researchers alike. The University of Central Florida is in a perfect position to establish an archive of this nature, one that is specifically dedicated to acquiring, preserving, and presenting the home movies of Central Florida residents. This project has resulted in the establishment of The Central Florida Home Movie Archive, and the accompanying analysis will show that the archive will benefit researchers from all areas of academic study as well as the residents of Central Florida.
-
Date Issued
-
2010
-
Identifier
-
CFE0003432, ucf:48410
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003432
-
-
Title
-
ANALYSIS AND DESIGN OF WIDE-ANGLE FOVEATED OPTICAL SYSTEMS.
-
Creator
-
Curatu, George, Harvey, James, University of Central Florida
-
Abstract / Description
-
The development of compact imaging systems capable of transmitting high-resolution images in real time while covering a wide field of view (FOV) is critical in a variety of military and civilian applications: surveillance, threat detection, target acquisition, tracking, remote operation of unmanned vehicles, etc. Recently, optical foveated imaging using liquid crystal (LC) spatial light modulators (SLMs) has received considerable attention as a potential approach to reducing size and complexity in fast wide-angle lenses. The fundamental concept behind optical foveated imaging is reducing the number of elements in a fast wide-angle lens by placing a phase SLM at the pupil stop to dynamically compensate aberrations left uncorrected by the optical design. In recent years, considerable research and development has been conducted in the field of optical foveated imaging based on LC SLM technology, and several foveated optical system (FOS) prototypes have been built. However, most research has so far focused on experimental demonstration of the basic concept using off-the-shelf components, without much concern for the practicality or the optical performance of the systems. Published results quantify only the aberration-correction capabilities of the FOS, often claiming diffraction-limited performance at the region of interest (ROI). However, these results have continually overlooked diffraction effects on the zero-order efficiency and the image quality. The research presented in this dissertation covers the methods and results of a detailed theoretical study on the diffraction analysis, image quality, design, and optimization of fast wide-angle FOSs based on current transmissive LC SLM technology. The amplitude and phase diffraction effects caused by the pixelated aperture of the SLM are explained and quantified, revealing fundamental limitations imposed by current transmissive LC SLM technology.
As part of this study, five different fast wide-angle lens designs that can be used to build practical FOSs were developed, revealing additional challenges specific to the optical design of fast wide-angle systems, such as controlling the relative illumination, distortion, and distribution of aberrations across a wide FOV. One of the lens design examples was chosen as a case study to demonstrate the design, analysis, and optimization of a practical wide-angle FOS based on current state-of-the-art transmissive LC SLM technology. The effects of fabrication and assembly tolerances on the image quality of fast wide-angle FOSs were also investigated, revealing the sensitivity of these fast, well-corrected optical systems to manufacturing errors. The theoretical study presented in this dissertation sets fundamental analysis, design, and optimization guidelines for future developments in fast wide-angle FOSs based on transmissive SLM devices.
-
Date Issued
-
2009
-
Identifier
-
CFE0002584, ucf:48254
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002584
-
-
Title
-
Compressive Sensing and Recovery of Structured Sparse Signals.
-
Creator
-
Shahrasbi, Behzad, Rahnavard, Nazanin, Vosoughi, Azadeh, Wei, Lei, Atia, George, Pensky, Marianna, University of Central Florida
-
Abstract / Description
-
In recent years, numerous disciplines, including telecommunications, medical imaging, computational biology, and neuroscience, have benefited from the increasing availability of high-dimensional datasets. This calls for efficient ways of capturing and processing data. Compressive sensing (CS), introduced as an efficient sampling (data capturing) method, addresses this need. It is well known that signals belonging to an ambient high-dimensional space often have much smaller dimensionality in an appropriate domain. CS taps into this principle and dramatically reduces the number of samples that must be captured to avoid any distortion of the information content of the data. This reduction in the required number of samples enables many new applications that were previously infeasible using classical sampling techniques. Most CS-based approaches take advantage of the inherent low dimensionality in many datasets. They try to determine a sparse representation of the data in an appropriately chosen basis using only a few significant elements. These approaches make no extra assumptions regarding possible relationships among the significant elements of that basis. In this dissertation, different ways of incorporating knowledge about such relationships are integrated into the data sampling and processing schemes. We first consider the recovery of temporally correlated sparse signals and show that, by using a time-correlation model, the recovery performance can be significantly improved. Next, we modify the sampling process of sparse signals to incorporate the signal structure in a more efficient way. In an image processing application, we show that exploiting structure information in both signal sampling and signal recovery improves the efficiency of the algorithm.
In addition, we show that region-of-interest information can be included in the CS sampling and recovery steps to provide much better quality for the region-of-interest area compared to the rest of the image or video. In spectrum sensing applications, CS can dramatically improve the sensing efficiency by facilitating coordination among spectrum sensors. A cluster-based spectrum sensing scheme with coordination among spectrum sensors is proposed for geographically dispersed cognitive radio networks. Further, CS has been exploited in this problem for simultaneous sensing and localization. Having access to this information dramatically facilitates the implementation of advanced communication technologies, as required by 5G communication networks.
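The CS principle the abstract describes, recovering a sparse signal from far fewer linear measurements than its ambient dimension, can be illustrated with a minimal NumPy sketch. This uses plain iterative soft thresholding (ISTA), a standard l1 solver, not the structured-sparsity methods of the dissertation; all sizes and values are illustrative.

```python
import numpy as np

# Recover a k-sparse length-n signal from m < n random measurements.
rng = np.random.default_rng(0)
n, m, k = 100, 50, 5

x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=k)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
y = A @ x_true                                # compressed measurements

L = np.linalg.norm(A, 2) ** 2                 # gradient Lipschitz constant
lam = 0.05                                    # l1 penalty weight
x_hat = np.zeros(n)
for _ in range(2000):
    x_hat = x_hat + (A.T @ (y - A @ x_hat)) / L        # gradient step
    x_hat = np.sign(x_hat) * np.maximum(np.abs(x_hat) - lam / L, 0.0)  # shrink

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

With only 50 of 100 samples, the 5 nonzero entries are located and estimated closely; structured-sparsity models of the kind studied in the dissertation aim to reduce the required m further.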
-
Date Issued
-
2015
-
Identifier
-
CFE0006392, ucf:51509
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006392
-
-
Title
-
The Mechanical Response and Parametric Optimization of Ankle-Foot Devices.
-
Creator
-
Smith, Kevin, Gordon, Ali, Kassab, Alain, Bai, Yuanli, Pabian, Patrick, University of Central Florida
-
Abstract / Description
-
To improve the mobility of lower-limb amputees, many modern prosthetic ankle-foot devices utilize a so-called energy storing and return (ESAR) design. This allows elastically stored energy to be returned to the gait cycle as forward propulsion. While ESAR-type feet have been well accepted by the prosthetic community, the design and selection of a prosthetic device for a specific individual is often based on clinical feedback rather than engineering design. This is due to an incomplete understanding of the role that prosthetic design characteristics (e.g., stiffness, roll-over shape, etc.) have on the gait pattern of an individual. Therefore, the focus of this work has been to establish a better understanding of the design characteristics of existing prosthetic devices through mechanical testing, and to develop a prototype prosthetic foot that has been numerically optimized for a specific gait pattern. The component stiffness, viscous properties, and energy return of commonly prescribed carbon fiber ESAR-type feet were evaluated through compression testing with digital image correlation at select loading angles, following the idealized gait from the ISO 22675 standard for fatigue testing. A representative model was developed to predict the stress within each of the tested components during loading and to optimize the design for a target loading response through parametric finite element analysis. This design optimization approach, along with rapid prototyping technologies, will allow clinicians to better identify the role that the design characteristics of the foot have on an amputee's biomechanics during future gait analysis.
-
Date Issued
-
2016
-
Identifier
-
CFE0006397, ucf:51502
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006397
-
-
Title
-
Hydrodynamic Measurements of the Flow Structure Emanating From A Multi-Row Film Cooling Configuration.
-
Creator
-
Voet, Michael, Kapat, Jayanta, Vasu Sumathi, Subith, Ahmed, Kareem, University of Central Florida
-
Abstract / Description
-
The demand for more power is rapidly increasing worldwide. Attention has turned to increasing the efficiency of modern methods of power generation. Gas turbines provide 35% of the power demands within the United States. The efficiency of gas turbines is defined, in an ideal sense, by the thermal efficiency of the Brayton cycle. The overall efficiency of a gas turbine can be increased, while simultaneously maximizing specific work output, by increasing the turbine inlet temperature. However, even with the advancements in modern materials in terms of maximum operating temperature, various components are already subjected to temperatures higher than their melting temperatures. An increase in inlet temperature would subject various components to even higher temperatures, such that more effective cooling would be necessary, whilst ideally using the same (or less) amount of cooling air bled from the compressor. Improvements in the performance of these cooling techniques are thus required. The focus of this thesis is on one such advanced cooling technique, namely film cooling.
The objective of this study is to investigate the effects of coolant density on the jet structure for different multi-row film cooling configurations. As research is performed on improving the performance of film cooling, the conditions available during testing may not reflect actual engine-like conditions. Typical operating density ratios at engine conditions are between 1.5 and 2, while a majority of the density ratios tested in the literature are between 1 and 1.5. While these tests may be executed outside of engine-like conditions, it is important to understand how density ratio affects the flow physics and film cooling performance. The density ratio within this study is varied between 1.0 and 1.5 by alternating the injected fluid between air and carbon dioxide, respectively. Both a simple cylindrical and a fan-shaped multi-row film cooling configuration are tested in the present study.
In order to compare the results collected from these geometries, the lateral and spanwise hole-to-hole spacing, metering hole diameter, hole length, and inclination angle are held constant between all testing configurations. The effect of fluid density upon injection is examined by independently holding either the blowing, momentum flux, or velocity ratio constant whilst varying the density ratio. Comparisons between the two film cooling configurations are also made as similar ratios are tested between geometries. This allows the variation in flow structure and performance due to the film cooling hole shape to be observed. Particle image velocimetry (PIV) is implemented to obtain both streamwise and wall-normal velocity measurements for the array centerline plane. These data are used to examine the interaction of the jet as it leaves the film cooling hole and the structure produced when the jet mixes with the boundary layer. Similarities in jet-to-jet interactions and surface attachment between density ratios are seen for the cylindrical configuration when the momentum flux ratio is held constant. When observing constant blowing ratio comparisons of the cylindrical configurations, the lower-density-ratio jet is seen to begin detaching from the wall at M = 0.72, with little evidence of coolant in the near-wall region. However, the higher-density cylindrical injection retains its surface attachment at M = 0.74, with noticeably more coolant near the wall, because of its significantly lower momentum flux ratio and weaker "jetting" effect. The fan-shaped film cooling configuration demonstrates improved performance, in terms of surface attachment, over a larger range of all ratios than the cylindrical cases.
Additionally, the fan-shaped configuration is shown to consistently retain a thicker layer of low-velocity fluid in the near-wall region when injected with the higher-density coolant, suggesting improved performance at the higher density ratio. When tracking the jet trajectory, it is shown that the injection of CO2 through the cylindrical configuration yields a higher centerline wall-normal height per downstream location than the lower-density fluid. Comparing the results of the centerline tracking produced by the third and fifth rows for both the injection of air and CO2, it is confirmed that the fifth row of injection interacts with the boundary layer at a greater wall-normal height than the third row. Additionally, when observing the change in downstream trajectory between the fifth and seventh rows of injection, a significant decrease in wall-normal height is seen for the coolant produced by the seventh row. It is believed that the lack of a ninth row of injection allows the coolant from the seventh row to remain closer to the target surface. This is further supported by observation of the derived pressure-gradient field and the paths streamlines take while interacting with the recirculatory region produced by the injection of coolant into the boundary layer. Further conclusions are drawn by investigating the interaction between momentum thickness and the influence of blowing ratio. A relatively constant downstream momentum thickness is observed for the injection of the lower-density fluid over the blowing ratio range of M = 0.4 to 0.8 for the cylindrical configuration. It is suggested that a correlation exists between momentum thickness and film cooling performance; however, further studies are needed to validate this hypothesis.
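The three injection ratios the study holds constant in turn are linked through the density ratio, which is why matching one ratio across coolants necessarily mismatches the others. A minimal sketch of the standard film-cooling definitions (generic symbols, not the thesis's notation):

```python
# Standard film-cooling injection ratios for coolant (c) and freestream (inf):
#   density ratio        DR = rho_c / rho_inf
#   velocity ratio       VR = U_c / U_inf
#   blowing ratio        M  = (rho_c * U_c) / (rho_inf * U_inf) = DR * VR
#   momentum flux ratio  I  = (rho_c * U_c**2) / (rho_inf * U_inf**2)
#                           = DR * VR**2 = M**2 / DR
def injection_ratios(rho_c, u_c, rho_inf, u_inf):
    dr = rho_c / rho_inf
    vr = u_c / u_inf
    m = dr * vr
    i = dr * vr ** 2
    return dr, vr, m, i
```

Because I = M^2 / DR, a CO2 jet (DR near 1.5) at the same blowing ratio as an air jet (DR near 1.0) carries substantially less momentum flux, consistent with the delayed detachment and weaker "jetting" reported for the higher-density coolant.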
-
Date Issued
-
2017
-
Identifier
-
CFE0006817, ucf:51791
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006817
-
-
Title
-
Simultaneous Imaging of the Diatomic Carbon and Methylidyne Species Radicals for the Quantification of the Fuel to Air Ratio from Low to High Pressure Combustion.
-
Creator
-
Reyes, Jonathan, Ahmed, Kareem, Kassab, Alain, Kapat, Jayanta, University of Central Florida
-
Abstract / Description
-
The radical intensity ratio of diatomic carbon to methylidyne was characterized at initial pressures up to 10 bar using certified 93-octane gasoline. This gasoline was selected due to its availability as a common fuel. The characterization of the radical intensity ratio of gasoline at elevated pressures enabled the creation of a calibration map of the equivalence ratio at engine-relevant conditions. The proposed calibration map acts as a feedback loop for a combustor. It allows for the location of local rich and lean zones. The local information acquired can be used as an optimization parameter for injection and ignition timings and for future combustor designs. The calibration map is applicable at low and high engine loads to characterize a combustor's behavior at all points in its operating map. Very little emphasis has been placed on the radical intensity ratio of unsteady flames, flames at high pressure, and liquid fuels. The current work performed the measurement on an unsteady flame ignited at different initial pressures, employing a constant volume combustion chamber and liquid gasoline as the fuel source. The chamber can sustain a pressure rise of 200 bar and allows for homogeneous fuel-to-air mixtures. The results produced a viable calibration map from 1 to 10 bar. The intensity ratio at initial pressures above 5 bar behaved adversely in comparison to the lower-pressure tests. The ratios acquired at the higher initial pressures are viable as individual calibration curves, but created an unexpected calibration map. The data show promise in creating a calibration map that is useful for practical combustors.
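A calibration map of this kind is used in reverse at run time: measure the C2*/CH* intensity ratio, then invert the calibration curve for the operating pressure to read off the equivalence ratio. A minimal sketch of that inversion, using placeholder (hypothetical) calibration points rather than the thesis's measured data:

```python
import numpy as np

# Hypothetical monotonic calibration points for one initial pressure:
# C2*/CH* intensity ratio recorded at known equivalence ratios (phi).
phi_cal = np.array([0.8, 0.9, 1.0, 1.1, 1.2])        # placeholder values
ratio_cal = np.array([0.45, 0.60, 0.80, 1.05, 1.35])  # placeholder values

def equivalence_ratio(measured_ratio):
    """Invert the calibration curve by linear interpolation."""
    return float(np.interp(measured_ratio, ratio_cal, phi_cal))
```

In practice one such curve would be stored per initial pressure, forming the two-dimensional calibration map described above; a measured ratio of 0.80 here reads back phi = 1.0 by construction.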
-
Date Issued
-
2017
-
Identifier
-
CFE0006910, ucf:51692
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006910
-
-
Title
-
Applied Advanced Error Control Coding for General Purpose Representation and Association Machine Systems.
-
Creator
-
Dai, Bowen, Wei, Lei, Lin, Mingjie, Rahnavard, Nazanin, Turgut, Damla, Sun, Qiyu, University of Central Florida
-
Abstract / Description
-
A General-Purpose Representation and Association Machine (GPRAM) is proposed, focusing on computation in terms of variation and flexibility rather than precision and speed. The GPRAM system has a vague representation and no predefined tasks. With several important lessons learned from error control coding, neuroscience, and the human visual system, we investigate several types of error control codes, including Hamming codes and Low-Density Parity Check (LDPC) codes, and extend them in different directions. In error control codes, solely the XOR logic gate is used to connect different nodes. Inspired by bio-systems and Turbo codes, we suggest and study non-linear codes with expanded operations, such as codes including AND and OR gates, which raises the problem of prior-probability mismatching. Our discussion of the critical challenges in designing codes and iterative decoding for non-equiprobable symbols may pave the way for a more comprehensive understanding of bio-signal processing. The limitation of the XOR operation in iterative decoding with non-equiprobable symbols is described and can potentially be resolved by applying a quasi-XOR operation and an intermediate transformation layer. Constructing codes for non-equiprobable symbols with the former approach does not perform satisfactorily with regard to error-correction capability. Probabilistic messages for the sum-product algorithm using XOR, AND, and OR operations with non-equiprobable symbols are further computed. The primary motivation for constructing the codes is to establish the GPRAM system rather than to conduct error control coding per se. The GPRAM system is fundamentally developed by applying various operations with a substantially over-complete basis. This system is capable of continuously achieving better and simpler approximations for complex tasks. The approaches to decoding LDPC codes with non-equiprobable binary symbols are discussed in light of the aforementioned prior-probability mismatching problem.
The traditional Tanner graph must be modified because message passing to information bits and to parity check bits from check nodes becomes distinct. In other words, message passing along the two directions is identical in a conventional Tanner graph, while the messages along the forward and backward directions differ in our case. A method of optimizing the signal constellation is described, which is able to maximize the channel mutual information. A simple Image Processing Unit (IPU) structure is proposed for the GPRAM system, to which images are input. The IPU consists of a randomly constructed LDPC code, an iterative decoder, a switch, and a scaling and decision device. The quality of the input images has been severely deteriorated for the purpose of mimicking the visual information variability (VIV) experienced in human visual systems. The IPU is capable of (a) reliably recognizing digits from images whose quality is extremely inadequate; (b) achieving hyper-acuity performance similar to that of the human visual system; and (c) significantly improving the recognition rate by applying a randomly constructed LDPC code that is not specifically optimized for the tasks.
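The probabilistic XOR message the abstract refers to has a standard closed form used in sum-product LDPC decoding: for independent bits with P(b_i = 1) = p_i, the probability that their XOR equals 1 follows from the identity 1 - 2*P(xor = 1) = prod(1 - 2*p_i). A minimal sketch (generic, not the dissertation's code):

```python
def xor_prob_one(probs):
    """P(b1 xor b2 xor ... = 1) for independent bits with P(bi = 1) = probs[i].

    Uses the standard check-node identity:
        1 - 2 * P(xor = 1) = prod_i (1 - 2 * p_i)
    """
    prod = 1.0
    for p in probs:
        prod *= 1.0 - 2.0 * p
    return 0.5 * (1.0 - prod)
```

Note that if any input bit is equiprobable (p_i = 0.5), the output collapses to 0.5 regardless of the other inputs; this symmetry is exactly what breaks down for the non-equiprobable symbols and AND/OR operations the dissertation studies.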
-
Date Issued
-
2016
-
Identifier
-
CFE0006449, ucf:51413
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006449
-
-
Title
-
Analytical study of computer vision-based pavement crack quantification using machine learning techniques.
-
Creator
-
Mokhtari, Soroush, Yun, Hae-Bum, Nam, Boo Hyun, Catbas, Necati, Shah, Mubarak, Xanthopoulos, Petros, University of Central Florida
-
Abstract / Description
-
Image-based techniques are a promising non-destructive approach for road pavement condition evaluation. The main objective of this study is to extract, quantify, and evaluate important surface defects, such as cracks, using an automated computer vision-based system to provide a better understanding of the pavement deterioration process. To achieve this objective, automated crack-recognition software was developed, employing a series of image processing algorithms for crack extraction, crack grouping, and crack detection. The bottom-hat morphological technique was used to remove the random background of pavement images and extract cracks selectively, based on their shapes, sizes, and intensities, using a relatively small number of user-defined parameters. A technical challenge with crack extraction algorithms, including the bottom-hat transform, is that extracted crack pixels are usually fragmented along crack paths. For de-fragmenting those crack pixels, a novel crack-grouping algorithm is proposed as an image segmentation method, called MorphLink-C. Statistical validation of this method using flexible pavement images indicated that MorphLink-C not only improves crack-detection accuracy but also reduces crack-detection time. Crack characterization was performed by analysing image features of the extracted crack components. A comprehensive statistical analysis was conducted using filter feature subset selection (FSS) methods, including Fisher score, Gini index, information gain, ReliefF, mRMR, and FCBF, to understand the statistical characteristics of cracks in different deterioration stages. The statistical significance of crack features was ranked based on their relevancy and redundancy. The statistical method used in this study can be employed to avoid subjective crack rating based on human visual inspection.
Moreover, the statistical information can be used as fundamental data to justify rehabilitation policies in pavement maintenance. Finally, the application of four classification algorithms, including Artificial Neural Network (ANN), Decision Tree (DT), k-Nearest Neighbours (kNN), and Adaptive Neuro-Fuzzy Inference System (ANFIS), is investigated for the crack detection framework. The classifiers were evaluated on the following five criteria: 1) prediction performance; 2) computation time; 3) stability of results for highly imbalanced datasets, in which the number of crack objects is significantly smaller than the number of non-crack objects; 4) stability of classifier performance for pavements in different deterioration stages; and 5) interpretability of results and clarity of the procedure. Comparison results indicate the advantages of white-box classification methods for computer vision-based pavement evaluation. Although black-box methods such as ANN provide superior classification performance, white-box methods such as ANFIS provide useful information about the logic of classification and the effect of feature values on detection results. Such information can provide further insight for the image-based pavement crack detection application.
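The bottom-hat (black-hat) transform used for crack extraction is morphological closing minus the original image: closing fills dark structures narrower than the structuring element, so the difference isolates dark cracks against the lighter pavement background. A minimal NumPy/SciPy sketch on a synthetic patch (not the dissertation's software):

```python
import numpy as np
from scipy.ndimage import grey_closing

# Synthetic pavement patch: bright background with one thin, dark "crack".
img = np.full((32, 32), 200.0)
img[16, 4:28] = 50.0  # 1-pixel-wide dark crack along row 16

# Bottom-hat transform: closing(img) - img. The 5x5 structuring element
# is wider than the crack, so the closing fills it completely.
bottom_hat = grey_closing(img, size=(5, 5)) - img

# Threshold the bottom-hat response to get a binary crack mask.
crack_mask = bottom_hat > 100.0
```

On real pavement the response is fragmented along the crack path, which is the problem the MorphLink-C grouping step addresses; the structuring-element size and threshold play the role of the user-defined shape/size/intensity parameters mentioned above.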
-
Date Issued
-
2015
-
Identifier
-
CFE0005671, ucf:50186
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005671
-
-
Title
-
Synthesis of Fluorene-based derivatives, Characterization of Optical properties and their Applications in Two-photon Fluorescence Imaging and Photocatalysis.
-
Creator
-
Githaiga, Grace, Belfield, Kevin, Patino Marin, Pedro, Chumbimuni Torres, Karin, Zou, Shengli, Cheng, Zixi, University of Central Florida
-
Abstract / Description
-
The two-photon absorption (2PA) phenomenon has attracted attention from various fields, ranging from chemistry and biology to optics and engineering. Two of the common nonlinear optical (NLO) applications in which organic materials have been used are three-dimensional (3D) fluorescence imaging and optical power limiting. Two-photon absorbing materials are therefore in great demand to meet the needs of emerging technologies. Organic molecules show great promise in meeting this need, as they can be customized through molecular engineering, and as the development of two-photon materials suited to practical applications intensifies, so does research to understand them. However, there remains some uncertainty in the particulars of the design criteria for molecules with large 2PA cross sections at desired wavelengths; as such, research to understand structure-property relationships is a matter of significant importance. As a result, the full potential of 2PA materials has not been fully exploited. Several strategies to enhance the magnitude and tune the wavelength of 2PA have been reported for π-conjugated organic molecules. On this account, we have designed novel fluorophores using the fluorene moiety and modified it to tune the properties of the compounds.
Chapter 2 of this dissertation reports the successful application of fluorene-based compounds in photocatalysis, a process that involves the decomposition of organic compounds into environmentally friendly carbon dioxide and water, attesting to the photostability of the fluorene moiety. A facile organic nanoparticle preparation method using reprecipitation is reported in Chapter 3; the nanoparticle surface was then modified with a naturally occurring surfactant, lecithin, and the particles were successfully used in fluorescence cell imaging.
Chapter 4 reports the design and synthesis of a fluorene-based compound using an acceptor, s-indacene-1,3,5,7(2H,6H)-tetraone, or Janus dione, a moiety that is relatively new and has not been fully exploited despite its very attractive features. Owing to the hydrophobicity of this compound, and notwithstanding its unprecedented 2PA cross section, it was not applicable to fluorescence cell imaging, but it provided the tenets for the design of a related derivative. This limitation was circumvented in the concluding chapter by tuning the compound's hydrophilicity. The hydrophilic Janus dione probe was then used as envisioned for cell imaging, as the dual prerequisites for fluorescence imaging probes, large 2PA cross sections and high fluorescence quantum yields, were met.
-
Date Issued
-
2015
-
Identifier
-
CFE0005620, ucf:50207
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005620