-
-
Title
-
Inversion of the Broken Ray Transform.
-
Creator
-
Krylov, Roman, Katsevich, Alexander, Tamasan, Alexandru, Nashed, M, Zeldovich, Boris, University of Central Florida
-
Abstract / Description
-
The broken ray transform (BRT) is an integral of a function along a union of two rays with a common vertex. Consider an X-ray beam scanning an object of interest. The ray undergoes attenuation and scatters in all directions inside the object. This phenomenon may repeat until the photons either exit the object or are completely absorbed. In our work we assume the single scattering approximation, in which the intensity of rays scattered more than once is negligibly small. Among all paths that the scattered rays travel inside the object we pick the one that is a union of two segments with one common scattering point. The intensity of the ray which traveled this path and exited the object can be measured by a collimated detector. The collimated detector is able to measure the intensity of X-rays from the selected direction. The logarithm of such a measurement is the broken ray transform of the attenuation coefficient, plus the logarithm of the scattering coefficient at the scattering point (vertex) and a known function of the scattering angle. In this work we consider the reconstruction of the X-ray attenuation coefficient distribution in a plane from measurements on two or three collimated detector arrays. We derive an exact local reconstruction formula for three flat collimated detectors or three curved or pin-hole collimated detectors. We obtain a range condition for the case of three curved or pin-hole detectors and provide a special case of the range condition for three flat detectors. We generalize the reconstruction formula to four and more detectors and find an optimal set of parameters that minimizes noise in the reconstruction. We introduce a more accurate scattering model which takes into account energy shifts due to the Compton effect, derive an exact reconstruction formula, and develop an iterative reconstruction method for the energy-dependent case. To solve the problem we assume that the radiation source is monoenergetic and that the dependence of the attenuation coefficient on energy is linear on an energy interval from the minimal to the maximal scattered energy. We find the parameters of the linear dependence of the attenuation on energy as a function of a point in the reconstruction plane.
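The transform described in this abstract can be illustrated numerically. The sketch below is not the dissertation's code; it is a minimal discrete BRT, assuming a 2-D attenuation map sampled by nearest neighbor, integrating along the two segments source -> vertex -> detector.

```python
import numpy as np

def line_integral(mu, p0, p1, n_samples=200):
    """Approximate the integral of mu along the segment p0 -> p1."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = p0[None, :] + ts[:, None] * (p1 - p0)[None, :]
    # Nearest-neighbor sampling of the attenuation map.
    ij = np.clip(np.round(pts).astype(int), 0, np.array(mu.shape) - 1)
    values = mu[ij[:, 0], ij[:, 1]]
    seg_len = np.linalg.norm(p1 - p0)
    return values.mean() * seg_len

def broken_ray_transform(mu, source, vertex, detector):
    """BRT = integral along source->vertex plus integral along vertex->detector."""
    return (line_integral(mu, source, vertex)
            + line_integral(mu, vertex, detector))

# Uniform attenuation of 0.1 per pixel: the BRT reduces to 0.1 times the
# total length of the broken ray.
mu = np.ones((64, 64)) * 0.1
val = broken_ray_transform(mu, (0, 0), (32, 32), (0, 63))
```

For a constant map the result is exactly the attenuation value times the combined segment length, which is a convenient sanity check for any discrete implementation.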
-
Date Issued
-
2014
-
Identifier
-
CFE0005514, ucf:50324
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005514
-
-
Title
-
Electrical Conductivity Imaging via Boundary Value Problems for the 1-Laplacian.
-
Creator
-
Veras, Johann, Tamasan, Alexandru, Mohapatra, Ram, Nashed, M, Dogariu, Aristide, University of Central Florida
-
Abstract / Description
-
We study an inverse problem which seeks to image the internal conductivity map of a body from one measurement of boundary and interior data. In our study the interior data is the magnitude of the current density induced by electrodes. Access to interior measurements has been possible since the work of M. Joy et al. in the early 1990s, and couples two physical principles: electromagnetics and magnetic resonance. In 2007 Nachman et al. showed that it is possible to recover the conductivity from the magnitude of one current density field inside. The method, now known as Current Density Impedance Imaging, is based on solving boundary value problems for the 1-Laplacian in an appropriate Riemannian metric space. We consider two types of methods, one based on level sets and one variational, each of which aims to solve a specific boundary value problem associated with the 1-Laplacian. We address the Cauchy and Dirichlet problems with full and partial data, and also the Complete Electrode Model (CEM). The latter model is known to describe most accurately the voltage potential distribution in a conductive body, while taking into account the transition of current from the electrode to the body. For the CEM the problem is non-unique. We characterize the non-uniqueness and explain which additional measurements fix the solution. Multiple numerical schemes for each of the methods are implemented to demonstrate computational feasibility.
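A minimal sketch of the pointwise relation underlying Current Density Impedance Imaging: once a voltage potential u is available, the conductivity follows from sigma = |J| / |grad u|. This is an illustrative finite-difference sketch, not the dissertation's solver, and the names are assumptions.

```python
import numpy as np

def recover_conductivity(u, J_mag, h=1.0, eps=1e-12):
    """Pointwise conductivity estimate from potential u and interior data |J|."""
    gy, gx = np.gradient(u, h)                 # finite-difference gradient of u
    grad_norm = np.hypot(gx, gy)
    return J_mag / np.maximum(grad_norm, eps)  # guard against vanishing gradient

# Synthetic check: u = x on a grid with sigma = 2, so |J| = sigma * |grad u| = 2.
ny, nx = 32, 32
u = np.tile(np.arange(nx, dtype=float), (ny, 1))
J_mag = 2.0 * np.ones((ny, nx))
sigma_est = recover_conductivity(u, J_mag)
```

The hard part of the actual method, of course, is obtaining u itself by solving the 1-Laplacian boundary value problem; the division above is only the final step.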
-
Date Issued
-
2014
-
Identifier
-
CFE0005437, ucf:50388
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005437
-
-
Title
-
Taming Wild Faces: Web-Scale, Open-Universe Face Identification in Still and Video Imagery.
-
Creator
-
Ortiz, Enrique, Shah, Mubarak, Sukthankar, Rahul, Da Vitoria Lobo, Niels, Wang, Jun, Li, Xin, University of Central Florida
-
Abstract / Description
-
With the increasing pervasiveness of digital cameras, the Internet, and social networking, there is a growing need to catalog and analyze large collections of photos and videos. In this dissertation, we explore unconstrained still-image and video-based face recognition in real-world scenarios, e.g. social photo sharing and movie trailers, where people of interest are recognized and all others are ignored. In such a scenario, we must obtain high precision in recognizing the known identities, while accurately rejecting those of no interest. Recent advancements in face recognition research have seen Sparse Representation-based Classification (SRC) advance to the forefront of competing methods. However, its drawbacks, slow speed and sensitivity to variations in pose, illumination, and occlusion, have hindered its widespread applicability. The contributions of this dissertation are three-fold: 1. For still-image data, we propose a novel Linearly Approximated Sparse Representation-based Classification (LASRC) algorithm that uses linear regression to perform sample selection for l1-minimization, thus harnessing the speed of least-squares and the robustness of SRC. On our large dataset collected from Facebook, LASRC performs equally to standard SRC with a speedup of 100-250x. 2. For video, applying the popular l1-minimization for face recognition on a frame-by-frame basis is prohibitively expensive computationally, so we propose a new algorithm, Mean Sequence SRC (MSSRC), that performs video face recognition using a joint optimization leveraging all of the available video data and employing the knowledge that the face track frames belong to the same individual. Employing MSSRC results in a speedup of 5x on average over SRC on a frame-by-frame basis. 3. Finally, we make the observation that MSSRC sometimes assigns inconsistent identities to the same individual in a scene that could be corrected based on their visual similarity. Therefore, we construct a probabilistic affinity graph combining appearance and co-occurrence similarities to model the relationship between face tracks in a video. Using this relationship graph, we employ random walk analysis to propagate strong class predictions among similar face tracks, while dampening weak predictions. Our method results in a performance gain of 15.8% in average precision over using MSSRC alone.
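The LASRC idea of using cheap least-squares regression to shortlist dictionary atoms before the expensive l1-minimization can be sketched as follows. The function and variable names are illustrative assumptions, not the authors' code, and the final l1 step is only indicated in a comment.

```python
import numpy as np

def lasrc_shortlist(D, y, k):
    """Return indices of the k dictionary columns with largest LS coefficients.

    D: (d, n) dictionary of training faces (columns); y: (d,) query face.
    """
    coeffs, *_ = np.linalg.lstsq(D, y, rcond=None)  # fast least-squares fit
    return np.argsort(np.abs(coeffs))[::-1][:k]     # top-k by magnitude

rng = np.random.default_rng(0)
D = rng.normal(size=(50, 20))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms, as in SRC
y = 3.0 * D[:, 7] + 0.01 * rng.normal(size=50)  # query close to atom 7
idx = lasrc_shortlist(D, y, k=5)
# The true atom should appear in the shortlist; l1-minimization (e.g. a
# homotopy solver) would then be run only on the reduced dictionary D[:, idx].
```

This captures the trade-off the abstract describes: the least-squares pass is a few matrix operations, so the costly sparse solve touches only a small candidate set.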
-
Date Issued
-
2014
-
Identifier
-
CFE0005536, ucf:50313
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005536
-
-
Title
-
Vision-Based Sensing and Optimal Control for Low-Cost and Small Satellite Platforms.
-
Creator
-
Sease, Bradley, Xu, Yunjun, Lin, Kuo-Chi, Bradley, Eric, University of Central Florida
-
Abstract / Description
-
Current trends in spacecraft are leading to smaller, more inexpensive options whenever possible. This shift has been pursued primarily for the opportunity to open a new frontier for technologies with a small financial obligation. Limited power, processing, pointing, and communication capabilities are all common issues which must be considered when miniaturizing systems and implementing low-cost components. This thesis addresses some of these concerns by applying two methods, one in attitude estimation and one in control. These methods are not restricted to small, inexpensive satellites, but offer a benefit to large-scale spacecraft as well. First, star cameras are examined for the tendency to generate streaked star images during maneuvers. This issue also comes into play when pointing capabilities and camera hardware quality are low, as is often the case in small, budget-constrained spacecraft. When pointing capabilities are low, small residual velocities can cause movement of the stars in the focal plane during an exposure, causing them to streak across the image. Additionally, if the camera quality is low, longer exposures may be required to gather sufficient light from a star, further contributing to streaking. Rather than improving the pointing or hardware directly, an algorithm is presented to retrieve and utilize the endpoints of streaked stars to provide feedback where traditional methods do not. This allows precise attitude and angular rate estimates to be derived from an image which, with traditional methods, would return large attitude and rate errors. Simulation results are presented which demonstrate endpoint error of approximately half a pixel and rate estimates within 2% of the true angular velocity. Three methods are also considered to remove overlapping star streaks and resident space objects from images to improve the performance of both attitude and rate estimates. Results from a large-scale Monte Carlo simulation are presented in order to characterize the performance of the method. Additionally, a rapid optimal attitude guidance method is experimentally validated in a ground-based, pico-scale satellite test bed. Fast slewing performance is demonstrated for an incremental step maneuver with low average power consumption. Though the focus of this thesis is primarily on increasing the capabilities of small, inexpensive spacecraft, the methods discussed have the potential to increase the capabilities of current and future large-scale missions as well.
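The core geometric observation behind rate estimation from streaks is that the angular separation of a streak's two endpoints, divided by the exposure time, bounds the angular rate about the camera boresight axes. The following is a hypothetical pinhole-camera sketch of that idea, not the thesis algorithm; all names and numbers are illustrative.

```python
import numpy as np

def streak_rate(p_start, p_end, focal_len_px, exposure_s):
    """Angular rate magnitude (rad/s) implied by a star streak on the focal plane."""
    # Unit line-of-sight vectors through each endpoint (pinhole camera model,
    # pixel coordinates measured from the principal point).
    v0 = np.array([p_start[0], p_start[1], focal_len_px], float)
    v1 = np.array([p_end[0], p_end[1], focal_len_px], float)
    v0 /= np.linalg.norm(v0)
    v1 /= np.linalg.norm(v1)
    angle = np.arccos(np.clip(v0 @ v1, -1.0, 1.0))  # angular streak length
    return angle / exposure_s

# A 20-pixel streak at ~2000 px focal length over a 0.5 s exposure.
rate = streak_rate((100, 0), (120, 0), focal_len_px=2000.0, exposure_s=0.5)
```

The full method in the thesis additionally resolves the rate direction from multiple streaks and feeds the endpoints into the attitude estimate; this sketch only recovers the scalar rate from one streak.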
-
Date Issued
-
2013
-
Identifier
-
CFE0005249, ucf:50603
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005249
-
-
Title
-
Automatic 3D human modeling: an initial stage towards 2-way inside interaction in mixed reality.
-
Creator
-
Xiong, Yiyan, Hughes, Charles, Pattanaik, Sumanta, Laviola II, Joseph, Moshell, Michael, University of Central Florida
-
Abstract / Description
-
3D human models play an important role in computer graphics applications from a wide range of domains, including education, entertainment, medical care simulation and military training. In many situations, we want the 3D model to have a visual appearance that matches that of a specific living person and to be able to be controlled by that person in a natural manner. Among other uses, this approach supports the notion of human surrogacy, where the virtual counterpart provides a remote presence for the human who controls the virtual character's behavior. In this dissertation, a human modeling pipeline is proposed for the problem of creating a 3D digital model of a real person. Our solution involves reshaping a 3D human template with a 2D contour of the participant and then mapping the captured texture of that person to the generated mesh. Our method produces an initial contour of a participant by extracting the user image from a natural background. One particularly novel contribution in our approach is the manner in which we improve the initial vertex estimate. We do so through a variant of the ShortStraw corner-finding algorithm commonly used in sketch-based systems. Here, we develop improvements to ShortStraw, presenting an algorithm called IStraw, and then introduce adaptations of this improved version to create a corner-based contour segmentation algorithm. This algorithm provides significant improvements on contour matching over previously developed systems, and does so with low computational complexity. The system presented here advances the state of the art in the following aspects. First, the human modeling process is triggered automatically by matching the participant's pose with an initial pose through a tracking device and software. In our case, the pose capture and skeletal model are provided by the Microsoft Kinect and its associated SDK. Second, color image, depth data, and human tracking information from the Kinect and its SDK are used to automatically extract the contour of the participant and then generate a 3D human model with skeleton. Third, using the pose and the skeletal model, we segment the contour into eight parts and then match the contour points on each segment to a corresponding anchor set associated with a 3D human template. Finally, we map the color image of the person to the 3D model as its corresponding texture map. The whole modeling process takes only a few seconds, and the resulting human model resembles the real person. The geometry of the 3D model matches the contour of the real person, and the model has a photorealistic texture. Furthermore, the mesh of the human model is attached to the skeleton provided in the template, so the model can support programmed animations or be controlled by real people. This human control is commonly done through a literal mapping (motion capture) or a gesture-based puppetry system. Our ultimate goal is to create a mixed reality (MR) system, in which the participants can manipulate virtual objects, and in which these virtual objects can affect the participant, e.g., by restricting their mobility. This MR system prototype design motivated the work of this dissertation, since a realistic 3D human model of the participant is an essential part of implementing this vision.
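The ShortStraw family of corner finders referenced in this abstract rests on a simple idea: resample the stroke evenly, measure the "straw" (the chord length between points a fixed window apart), and mark corners where the straw length dips to a local minimum below a threshold. The sketch below is a simplified illustration of that idea (Wolin et al.), not the IStraw algorithm from the dissertation; the window, threshold, and local-minimum test are assumptions.

```python
import numpy as np

def shortstraw_corners(points, window=3, thresh=0.95):
    """Return indices of likely corners in an evenly resampled 2-D point sequence."""
    pts = np.asarray(points, float)
    n = len(pts)
    straws = np.full(n, np.inf)
    for i in range(window, n - window):
        # The "straw": chord between points `window` steps before and after i.
        straws[i] = np.linalg.norm(pts[i + window] - pts[i - window])
    cutoff = thresh * np.median(straws[window:n - window])
    # Corners are local minima of the straw length below the cutoff.
    corners = [i for i in range(window, n - window)
               if straws[i] < cutoff
               and straws[i] == straws[max(0, i - window):i + window + 1].min()]
    return corners

# An L-shaped stroke: the bend sits at index 10, where the straw shortens.
leg1 = [(i, 0.0) for i in range(11)]
leg2 = [(10.0, j) for j in range(1, 11)]
corners = shortstraw_corners(leg1 + leg2)
```

On straight runs the straw equals the full window span, while at a bend the chord cuts the corner and shrinks, which is what the threshold detects.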
-
Date Issued
-
2014
-
Identifier
-
CFE0005277, ucf:50543
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005277
-
-
Title
-
Automatic Detection of Brain Functional Disorder Using Imaging Data.
-
Creator
-
Dey, Soumyabrata, Shah, Mubarak, Jha, Sumit, Hu, Haiyan, Weeks, Arthur, Rao, Ravishankar, University of Central Florida
-
Abstract / Description
-
Recently, Attention Deficit Hyperactive Disorder (ADHD) has been getting a lot of attention, mainly for two reasons. First, it is one of the most commonly found childhood behavioral disorders: around 5-10% of children all over the world are diagnosed with ADHD. Second, the root cause of the problem is still unknown, and therefore no biological measure exists to diagnose ADHD. Instead, doctors need to diagnose it based on clinical symptoms, such as inattention, impulsivity and hyperactivity, which are all subjective.
Functional Magnetic Resonance Imaging (fMRI) data has become a popular tool for understanding the functioning of the brain, such as identifying the brain regions responsible for different cognitive tasks or analyzing the statistical differences in brain functioning between diseased and control subjects. ADHD is also being studied using fMRI data. In this dissertation we aim to solve the problem of automatic diagnosis of ADHD subjects using their resting state fMRI (rs-fMRI) data.
As a core step of our approach, we model the functions of a brain as a connectivity network, which is expected to capture the information about how synchronous different brain regions are in terms of their functional activities. The network is constructed by representing different brain regions as nodes, where any two nodes are connected by an edge if the correlation of the activity patterns of the two nodes is higher than some threshold. The brain regions, represented as the nodes of the network, can be selected at different granularities, e.g. single voxels or clusters of functionally homogeneous voxels. The topological differences between the constructed networks of the ADHD and control groups of subjects are then exploited in the classification approach.
We have developed a simple method employing the Bag-of-Words (BoW) framework for the classification of ADHD subjects. We represent each node in the network by a 4-D feature vector: node degree and 3-D location. The 4-D vectors of all the network nodes of the training data are then grouped into a number of clusters using K-means, where each such cluster is termed a word. Finally, each subject is represented by a histogram (bag) of such words. A Support Vector Machine (SVM) classifier is used for the detection of ADHD subjects from their histogram representation. The method is able to achieve 64% classification accuracy.
The above simple approach has several shortcomings. First, there is a loss of spatial information while constructing the histogram, because it only counts the occurrences of words, ignoring their spatial positions. Second, features from the whole brain are used for classification, but some of the brain regions may not contain any useful information and may only increase the feature dimensions and noise of the system. Third, in our study we used only one network feature, the degree of a node, which measures the connectivity of the node, while other complex network features may be useful for solving the proposed problem.
In order to address the above shortcomings, we hypothesize that only a subset of the nodes of the network possesses important information for the classification of ADHD subjects. To identify the important nodes of the network we have developed a novel algorithm. The algorithm generates different random subsets of nodes, each time extracting the features from a subset to compute the feature vector and perform classification. The subsets are then ranked based on classification accuracy, and the occurrences of each node in the top-ranked subsets are measured. Our algorithm selects the highly occurring nodes for the final classification. Furthermore, along with the node degree, we employ three more node features: network cycles, the varying distance degree, and the edge weight sum. We concatenate the features of the selected nodes in a fixed order to preserve the relative spatial information. Experimental validation suggests that the use of features from the nodes selected using our algorithm indeed helps to improve the classification accuracy. Also, our finding is in concordance with the existing literature, as the brain regions identified by our algorithm are independently found by many other studies on ADHD. We achieved a classification accuracy of 69.59% using this approach. However, this method represents each voxel as a node of the network, which makes the number of nodes several thousand; as a result, the network construction step becomes computationally very expensive. Another limitation of the approach is that the network features, which are computed for each node of the network, capture only the local structures while ignoring the global structure of the network.
Next, in order to capture the global structure of the networks, we use the Multi-Dimensional Scaling (MDS) technique to project all the subjects from an unknown network-space to a low-dimensional space based on their inter-network distance measures. For the purpose of computing the distance between two networks, we represent each node by a set of attributes such as the node degree, the average power, the physical location, the neighbor node degrees, and the average powers of the neighbor nodes. The nodes of the two networks are then mapped in such a way that, for all pairs of nodes, the sum of the attribute distances, which is the inter-network distance, is minimized. To reduce the network computation cost, we enforce that the maximum relevant information is preserved with minimum redundancy. To achieve this, the nodes of the network are constructed from clusters of highly active voxels, while the activity levels of the voxels are measured based on the average power of their corresponding fMRI time series. Our method shows promise, as we achieve impressive classification accuracies (73.55%) on the ADHD-200 data set. Our results also reveal that the detection rates are higher when classification is performed separately on the male and female groups of subjects.
So far, we have only used the fMRI data for solving the ADHD diagnosis problem. Finally, we investigated the answers to the following questions. Do the structural brain images contain useful information related to the ADHD diagnosis problem? Can the classification accuracy of the automatic diagnosis system be improved by combining the information of the structural and functional brain data? Toward that end, we developed a new method to combine the information of structural and functional brain images in a late fusion framework. For structural data we input the gray matter (GM) brain images to a Convolutional Neural Network (CNN). The output of the CNN is a feature vector per subject, which is used to train the SVM classifier. For the functional data we compute the average power of each voxel based on its fMRI time series. The average power of the fMRI time series of a voxel measures the activity level of the voxel. We found significant differences in the voxel power distribution patterns of the ADHD and control groups of subjects. The Local Binary Pattern (LBP) texture feature is used on the voxel power map to capture these differences. We achieved 74.23% accuracy using GM features, 77.30% using LBP features, and 79.14% using the combined information.
In summary, this dissertation demonstrates that structural and functional brain imaging data are useful for the automatic detection of ADHD subjects, as we achieve impressive classification accuracies on the ADHD-200 data set. Our study also helps to identify the brain regions which are useful for ADHD subject classification. These findings can help in understanding the pathophysiology of the problem. Finally, we expect that our approaches will contribute toward the development of a biological measure for the diagnosis of ADHD.
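The network-construction step this abstract describes can be sketched compactly: brain regions become nodes, and two nodes are linked when the correlation of their fMRI time series exceeds a threshold. The code below is an illustrative assumption, not the dissertation's pipeline; for brevity a fixed-bin degree histogram stands in for the learned K-means "words" of the BoW representation.

```python
import numpy as np

def connectivity_network(timeseries, thresh=0.5):
    """timeseries: (n_regions, n_timepoints). Returns adjacency matrix and node degrees."""
    corr = np.corrcoef(timeseries)
    adj = (np.abs(corr) > thresh).astype(int)
    np.fill_diagonal(adj, 0)                  # no self-loops
    return adj, adj.sum(axis=1)               # node degrees

def degree_descriptor(degrees, n_bins=8):
    """BoW-style subject descriptor: normalized histogram of node degrees."""
    hist, _ = np.histogram(degrees, bins=n_bins, range=(0, degrees.max() + 1))
    return hist / hist.sum()

rng = np.random.default_rng(1)
ts = rng.normal(size=(30, 100))               # 30 regions, 100 timepoints
ts[1] = ts[0] + 0.1 * rng.normal(size=100)    # regions 0 and 1 synchronized
adj, deg = connectivity_network(ts)
desc = degree_descriptor(deg)
```

In the actual method the per-node 4-D vectors (degree plus 3-D location) are clustered with K-means over the training set, and an SVM is trained on the resulting word histograms.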
-
Date Issued
-
2014
-
Identifier
-
CFE0005786, ucf:50060
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005786
-
-
Title
-
ELECTRICAL CAPACITANCE VOLUME TOMOGRAPHY OF HIGH CONTRAST DIELECTRICS USING A CUBOID GEOMETRY.
-
Creator
-
Nurge, Mark, Schelling, Patrick, University of Central Florida
-
Abstract / Description
-
An Electrical Capacitance Volume Tomography system has been created for use with a new image reconstruction algorithm capable of imaging high-contrast dielectric distributions. The electrode geometry consists of two 4 x 4 parallel planes of copper conductors connected through custom-built switch electronics to a commercially available capacitance-to-digital converter. Typical electrical capacitance tomography (ECT) systems rely solely on mutual capacitance readings to reconstruct images of dielectric distributions. This dissertation presents a method of reconstructing images of high-contrast dielectric materials using only the self-capacitance measurements. By constraining the unknown dielectric material to one of two values, the inverse problem is no longer ill-determined. Resolution becomes limited only by the accuracy and resolution of the measurement circuitry. Images were reconstructed using this method with both synthetic and real data acquired using an aluminum structure inserted at different positions within the sensing region. Comparisons with standard two-dimensional ECT systems highlight the capabilities and limitations of the electronics and reconstruction algorithm.
-
Date Issued
-
2007
-
Identifier
-
CFE0001591, ucf:47119
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001591
-
-
Title
-
SELF-ASSEMBLED LIPID TUBULES: STRUCTURES, MECHANICAL PROPERTIES, AND APPLICATIONS.
-
Creator
-
Zhao, Yue, Fang, Jiyu, University of Central Florida
-
Abstract / Description
-
Self-assembled lipid tubules are particularly attractive for inorganic synthesis and drug delivery because they have hollow cylindrical shapes and relatively rigid mechanical properties. In this thesis work, we have synthesized lipid tubules of 1,2-bis(tricosa-10,12-diynoyl)-sn-glycero-3-phosphocholine (DC8,9PC) by self-assembly and polymerization in solution. We demonstrate for the first time that both uniform and modulated molecular tilt orderings exist in the tubule walls, as predicted by current theories, and thereby provide valuable supporting evidence for the self-assembly mechanisms of chiral molecules. Two novel methods are developed for studying the axial and radial deformations of DC8,9PC lipid tubules. Mechanical properties of DC8,9PC tubules are systematically studied in terms of persistence length, bending rigidity, strain energy, axial and radial elastic moduli, and critical force for collapse. Mechanisms of recovery and surface stiffening are discussed. Due to the high aspect ratio of lipid tubules, the hierarchical assembly of lipid tubules into ordered arrays and desired architectures is critical to developing their applications. Two efficient methods for fabricating ordered arrays of lipid tubules on solid substrates have been developed. Ordered arrays of hybrid silica-lipid tubes are synthesized by tubule-array-templated sol-gel reactions. Ordered arrays of optically anisotropic fibers with tunable shapes and refractive indexes are fabricated. This thesis work provides a paradigm for molecularly engineered structures.
-
Date Issued
-
2007
-
Identifier
-
CFE0001918, ucf:47486
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001918
-
-
Title
-
TEMPORAL TRENDS IN GRAVE MARKER ATTRIBUTES: AN ANALYSIS OF HEADSTONES IN FLORIDA.
-
Creator
-
Reynolds, Patrisha, Schultz, Ph.D., John J., University of Central Florida
-
Abstract / Description
-
Grave markers reflect a wealth of information and collectively epitomize society's historic, social, and economic patterns over time. Despite an abundance of cemetery research in other parts of the country, little research has been undertaken to evaluate grave marker attributes in Florida. The purpose of this research was to determine how grave marker attributes have changed over time in north-central, central, and southeast Florida. Data were collected from ten cemeteries in five counties in Florida, representing the grave markers of over 1,100 individuals. Data collection involved visiting each cemetery, photographing markers, and cataloging grave marker attributes. Attributes analyzed included marker type, marker material, epitaphs, iconographic images, memorial photographs, footstones, and kerbs. A number of important trends were noted. Marker material exhibited the clearest example of a temporal trend, shifting over time from 73% marble to 73% granite. Marker type varied greatly, from upright and flat ground markers to a variety of customized markers and vaults. Cultural differences were also noted, with in-ground vaults dominating traditionally black cemeteries. There were clear differences in marker style between affluent and less affluent cemeteries, with numerous hand-cast cement markers observed in less prosperous areas. Furthermore, beginning in the early 1980s there is an increase in customized laser-engraved markers. Overall, Florida's cemeteries offer a rich history of the state's mortuary practices, and further research should be conducted to preserve this history.
-
Date Issued
-
2012
-
Identifier
-
CFH0004240, ucf:44918
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004240
-
-
Title
-
A Decision Support Tool for Video Retinal Angiography.
-
Creator
-
Laha, Sumit, Bagci, Ulas, Foroosh, Hassan, Song, Sam, University of Central Florida
-
Abstract / Description
-
Fluorescein angiography (FA) is a medical procedure that helps ophthalmologists monitor the status of the retinal blood vessels and determine proper treatment. This research is motivated by the necessity of blood vessel segmentation of the retina. Retinal vessel segmentation has been a major challenge and has long drawn the attention of researchers due to the presence of complex blood vessels with varying sizes, shapes, angles, and branching patterns, non-uniform illumination, and huge anatomical variability between subjects. In this thesis, we introduce a new computational tool that combines a deep learning algorithm with a signal processing based video magnification method to support physicians in analyzing and diagnosing retinal angiogram videos, for the first time in the literature. The proposed approach has a pipeline-based architecture containing three phases: image registration for large motion removal from the video angiogram, retinal vessel segmentation, and video magnification based on the segmented vessels. In the image registration phase, we align distorted frames in the FA video using rigid registration approaches. In the next phase, we use baseline capsule based neural networks for retinal vessel segmentation in comparison with the state-of-the-art methods. We move away from traditional convolutional network approaches to capsule networks in this work because, despite being widely used in different computer vision applications, convolutional neural networks struggle to learn object-part relationships, have high computational times due to the additive nature of neurons, and lose information in the pooling layer. We nevertheless use deep learning methods such as U-Net and Tiramisu to measure the performance and accuracy of SegCaps. Lastly, we apply Eulerian video magnification to magnify the subtle changes in the retinal video.
In this phase, magnification is applied to segmented videos to visualize the flow of blood in the retinal vessels.
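Eulerian video magnification amplifies small temporal variations at each pixel. As a rough sketch of the idea (not the authors' implementation; the function name, gain value, and the use of the temporal mean in place of a proper band-pass filter are all illustrative simplifications):

```python
def magnify_pixel(series, alpha):
    """Amplify the temporal deviations of one pixel's intensity series.
    A crude stand-in for the band-pass filtering used in Eulerian video
    magnification: here the 'band' is everything except the temporal mean."""
    mean = sum(series) / len(series)
    return [mean + alpha * (v - mean) for v in series]

# A pixel whose brightness pulses slightly; alpha = 3 exaggerates the pulse.
print(magnify_pixel([10.0, 10.2, 10.0, 9.8, 10.0], 3.0))
```

Applied per pixel over the segmented vessel regions, this kind of amplification makes subtle blood-flow fluctuations visible.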
-
Date Issued
-
2018
-
Identifier
-
CFE0007342, ucf:52125
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007342
-
-
Title
-
Video categorization using semantics and semiotics.
-
Creator
-
Rasheed, Zeeshan, Shah, Mubarak, Engineering and Computer Science
-
Abstract / Description
-
University of Central Florida College of Engineering Thesis; There is a great need to automatically segment, categorize, and annotate video data, and to develop efficient tools for browsing and searching. We believe that the categorization of videos can be achieved by exploring the concepts and meanings of the videos. This task requires bridging the gap between low-level content and high-level concepts (or semantics). Once a relationship is established between the low-level computable features of the video and its semantics, the user would be able to navigate through videos through the use of concepts and ideas (for example, a user could extract only those scenes in an action film that actually contain fights) rather than sequentially browsing the whole video. However, this relationship must follow the norms of human perception and abide by the rules that are most often followed by the creators (directors) of these videos. These rules are called film grammar in the video production literature. Like any natural language, this grammar has several dialects, but it has been acknowledged to be universal. Therefore, the knowledge of film grammar can be exploited effectively for the understanding of films. To interpret an idea using the grammar, we need first to understand the symbols, as in natural languages, and second, to understand the rules of combination of these symbols to represent concepts. In order to develop algorithms that exploit this film grammar, it is necessary to relate the symbols of the grammar to computable video features.
-
Date Issued
-
2003
-
Identifier
-
CFR0001717, ucf:52920
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFR0001717
-
-
Title
-
Estimation and clustering in statistical ill-posed linear inverse problems.
-
Creator
-
Rajapakshage, Rasika, Pensky, Marianna, Swanson, Jason, Zhang, Teng, Bagci, Ulas, Foroosh, Hassan, University of Central Florida
-
Abstract / Description
-
The main focus of the dissertation is estimation and clustering in statistical ill-posed linear inverse problems. The dissertation deals with a problem of simultaneously estimating a collection of solutions of ill-posed linear inverse problems from their noisy images under an operator that does not have a bounded inverse, when the solutions are related in a certain way. The dissertation defense consists of three parts. In the first part, the collection consists of measurements of temporal functions at various spatial locations. In particular, we study the problem of estimating a three-dimensional function based on observations of its noisy Laplace convolution. In the second part, we recover classes of similar curves when the class memberships are unknown. Problems of this kind appear in many areas of application where clustering is carried out at the pre-processing step and then the inverse problem is solved for each of the cluster averages separately. As a result, the errors of the procedures are usually examined for the estimation step only. In both parts, we construct the estimators, study their minimax optimality and evaluate their performance via a limited simulation study. In the third part, we propose a new computational platform to better understand the patterns of R-fMRI by taking into account the challenge of inevitable signal fluctuations and interpret the success of dynamic functional connectivity approaches. Towards this, we revisit an auto-regressive and vector auto-regressive signal modeling approach for estimating temporal changes of the signal in brain regions. We then generate inverse covariance matrices from the generated windows and use a non-parametric statistical approach to select significant features. Finally, we use Lasso to perform classification of the data. The effectiveness of the proposed method is evidenced in the classification of R-fMRI scans.
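As a minimal illustration of the windowed auto-regressive modeling step (the AR(1) simplification and function name are mine, not the dissertation's, which also uses vector auto-regressive models), the lag-one coefficient of one signal window can be estimated by least squares:

```python
def ar1_coefficient(window):
    """Least-squares estimate of a in the AR(1) model x[t] ~ a * x[t-1],
    computed over one temporal window of a regional brain signal."""
    num = sum(window[t] * window[t - 1] for t in range(1, len(window)))
    den = sum(window[t - 1] ** 2 for t in range(1, len(window)))
    return num / den

# A noiseless AR(1) signal with a = 0.8 is recovered (up to floating point).
x = [1.0]
for _ in range(20):
    x.append(0.8 * x[-1])
print(ar1_coefficient(x))
```

Coefficients like this, computed window by window, are the kind of temporal-change features that downstream covariance estimation and Lasso classification operate on.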
-
Date Issued
-
2019
-
Identifier
-
CFE0007710, ucf:52450
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007710
-
-
Title
-
EFFICIENT TECHNIQUES FOR RELEVANCE FEEDBACK PROCESSING IN CONTENT-BASED IMAGE RETRIEVAL.
-
Creator
-
Liu, Danzhou, Hua, Kien, University of Central Florida
-
Abstract / Description
-
In content-based image retrieval (CBIR) systems, there are two general types of search: target search and category search. Unlike queries in traditional database systems, users in most cases cannot specify an ideal query to retrieve the desired results for either target search or category search in multimedia database systems, and have to rely on iterative feedback to refine their query. Efficient evaluation of such iterative queries can be a challenge, especially when the multimedia database contains a large number of entries, the search needs many iterations, and the underlying distance measure is computationally expensive. The overall processing costs, including CPU and disk I/O, are further emphasized if there are numerous concurrent accesses. To address these limitations in relevance feedback processing, we propose a generic framework, including a query model, index structures, and query optimization techniques. Specifically, this thesis has five main contributions, as follows. The first contribution is an efficient target search technique. We propose four target search methods: naive random scan (NRS), local neighboring movement (LNM), neighboring divide-and-conquer (NDC), and global divide-and-conquer (GDC) methods. All these methods are built around a common strategy: they do not retrieve checked images (i.e., they shrink the search space). Furthermore, NDC and GDC exploit Voronoi diagrams to aggressively prune the search space and move towards target images. We theoretically and experimentally prove that the convergence speeds of GDC and NDC are much faster than those of NRS and recent methods. The second contribution is a method to reduce the number of expensive distance computations when answering k-NN queries with non-metric distance measures. We propose an efficient distance mapping function that transforms non-metric measures into metric ones while preserving the original distance orderings.
Then existing metric index structures (e.g., the M-tree) can be used to reduce the computational cost by exploiting the triangle inequality. The third contribution is an incremental query processing technique for Support Vector Machines (SVMs). SVMs have been widely used in multimedia retrieval to learn a concept in order to find the best matches. SVMs, however, suffer from a scalability problem associated with larger database sizes. To address this limitation, we propose an efficient query evaluation technique that employs incremental update. The proposed technique also takes advantage of a tuned index structure to efficiently prune irrelevant data. As a result, only a small portion of the data set needs to be accessed for query processing. This index structure also provides an inexpensive means to process the set of candidates to evaluate the final query result. The technique can work with different kernel functions and kernel parameters. The fourth contribution is a method to avoid local optimum traps. Existing CBIR systems, designed around query refinement based on relevance feedback, suffer from local optimum traps that may severely impair the overall retrieval performance. We therefore propose a simulated annealing-based approach to address this important issue. When the search becomes stuck at a local optimum, we employ a neighborhood search technique (i.e., simulated annealing) to continue the search for additional matching images, thus escaping from the local optimum. We also propose an index structure to speed up such neighborhood searches. Finally, the fifth contribution is a generic framework to support concurrent accesses. We develop new storage and query processing techniques to exploit sequential access and leverage inter-query concurrency to share computation.
Our experimental results, based on the Corel dataset, indicate that the proposed optimizations can significantly reduce average response time while achieving better precision and recall, and that the framework is scalable enough to support a large user community. This latter performance characteristic is largely neglected in existing systems, making them less suitable for large-scale deployment. With the growing interest in Internet-scale image search applications, our framework offers an effective solution to the scalability problem.
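The escape mechanism of the fourth contribution can be made concrete with a generic simulated annealing sketch (the scoring function, neighborhood, cooling schedule, and all parameter values below are illustrative choices, not the thesis's actual procedure):

```python
import math
import random

def anneal(score, start, neighbors, t0=2.0, cooling=0.99, steps=500, seed=1):
    """Hill-climb on `score` (higher is better), but occasionally accept a
    worse candidate with probability exp(delta / t), so the search can climb
    out of a local optimum; the temperature t decays each step."""
    rng = random.Random(seed)
    current = best = start
    t = t0
    for _ in range(steps):
        cand = rng.choice(neighbors(current))
        delta = score(cand) - score(current)
        if delta > 0 or rng.random() < math.exp(delta / t):
            current = cand          # accept: improvement, or a gamble
        if score(current) > score(best):
            best = current          # remember the best candidate seen
        t *= cooling
    return best

# Toy search space: 11 "images" whose relevance scores have a local
# optimum at index 0 and the global optimum at index 10.
scores = [5, 4, 3, 2, 1, 0, 1, 2, 3, 4, 10]
result = anneal(lambda i: scores[i],
                start=0,
                neighbors=lambda i: [max(0, i - 1), min(10, i + 1)])
```

A pure greedy search started at index 0 would stop immediately; the annealed search can wander across the low-scoring valley while the temperature is still high.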
-
Date Issued
-
2009
-
Identifier
-
CFE0002728, ucf:48162
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002728
-
-
Title
-
Synthesis of Fluorescent Molecules and their Applications as Viscosity Sensors, Metal Ion Indicators, and Near-Infrared Probes.
-
Creator
-
Wang, Mengyuan, Belfield, Kevin, Campiglia, Andres, Miles, Delbert, Frazer, Andrew, Cheng, Zixi, University of Central Florida
-
Abstract / Description
-
The primary focus of this dissertation is the development of novel fluorescent near-infrared molecules for various applications. In Chapter 1, a compound, dU-BZ, synthesized via Sonogashira coupling reaction methodology is described. A deoxyuridine building block was introduced to enhance hydrophilic properties and reduce toxicity, while an alkynylated benzothiazolium dye was incorporated for near-IR emission and to reduce the photodamage and phototoxicity characteristic of common fluorophores that are excited by UV or visible light. A 30-fold enhancement of the fluorescence intensity of dU-BZ was achieved in a viscous environment. Fluorescence quantum yields in 99% glycerol/1% methanol (v/v) at temperatures varying from 293 K to 343 K, together with the fluorescence quantum yields, radiative and nonradiative rate constants, and fluorescence lifetimes in glycerol/methanol solutions of viscosities varying from 4.8 to 950 cP, were determined. It was found that both fluorescence quantum yields and fluorescence lifetimes increased with increasing viscosity, which is consistent with results predicted by theory. This suggests that the newly designed compound dU-BZ is capable of functioning as a probe of local microviscosity, which was later confirmed by in vitro bioimaging experiments. In Chapter 2, a new BAPTA (O,O'-bis(2-aminophenyl)ethyleneglycol-N,N,N',N'-tetraacetic acid) and BODIPY (4,4-difluoro-4-bora-3a,4a-diaza-s-indacene)-based calcium indicator, BAPBO-3, is reported. A new synthetic route was employed to simplify both synthesis and purification, which tend to be low yielding and cumbersome for BAPTA derivatives. Upon excitation, a 1.5-fold increase in fluorescence intensity in buffer containing 39 µM Ca2+ and a 3-fold increase in fluorescence intensity in buffer containing 1 M Ca2+ were observed: modest but promising fluorescence turn-on enhancements. In Chapter 3, a newly designed unsymmetrical squaraine dye, SQ3, was synthesized.
A one-pot synthesis was employed, resulting in a 10% yield, a result that is generally quite favorable for the creation of unsymmetrical squaraines. Photophysical and photochemical characterization was conducted in various solvents, and a 678 nm absorption maximum and a 692 nm emission maximum were recorded in DMSO solution, with a fluorescence quantum yield of 0.32. In vitro cell studies demonstrated that SQ3 can be used as a near-IR probe for bioimaging.
-
Date Issued
-
2014
-
Identifier
-
CFE0005900, ucf:50863
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005900
-
-
Title
-
Learning to Grasp Unknown Objects using Weighted Random Forest Algorithm from Selective Image and Point Cloud Feature.
-
Creator
-
Iqbal, Md Shahriar, Behal, Aman, Boloni, Ladislau, Haralambous, Michael, University of Central Florida
-
Abstract / Description
-
This thesis demonstrates an approach to determining the best grasping location on an unknown object using a weighted random forest algorithm. It uses the RGB-D values of an object as input and finds a suitable rectangular grasping region as the output. To accomplish this task, it uses a subspace of the most important features drawn from a very high-dimensional feature space that contains both image and point cloud features. Using only the most important features has enabled the system to be computationally very fast while preserving maximum information gain. In this approach, operating the random forest with optimal parameters (e.g., the number of trees, the number of features at each node, and the information gain criterion) ensures optimized learning, with the highest possible accuracy in minimum time in an advanced practical setting. The weighted random forest, chosen over Support Vector Machines (SVM), decision trees, and AdaBoost for the implementation of the grasping system, outperforms these machine learning algorithms in both training and testing accuracy and in other performance estimates. The grasping system, learning from a score function, detects the rectangular grasping region by selecting the top rectangle with the largest score. The system is implemented and tested on a Baxter Research Robot with a parallel plate gripper.
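The feature-subset selection and top-rectangle selection described above can be sketched as follows (the feature vectors, weights, and linear scoring function are illustrative assumptions; the thesis learns its score with a weighted random forest rather than fixed weights):

```python
def select_top_features(importances, k):
    """Indices of the k most important features: a stand-in for reducing
    the huge image + point-cloud feature space to a small subspace."""
    return sorted(range(len(importances)),
                  key=lambda i: importances[i], reverse=True)[:k]

def best_grasp(candidates, weights, keep):
    """Score each candidate grasp rectangle on the selected feature
    subset and return the highest-scoring one."""
    def score(c):
        return sum(weights[i] * c["features"][i] for i in keep)
    return max(candidates, key=score)

# Hypothetical data: two candidate rectangles, four features each.
keep = select_top_features([0.1, 0.7, 0.05, 0.9], k=2)   # -> [3, 1]
rects = [
    {"name": "edge",   "features": [1.0, 0.2, 0.5, 0.1]},
    {"name": "handle", "features": [0.1, 0.9, 0.4, 0.8]},
]
print(best_grasp(rects, weights=[1.0, 1.0, 1.0, 1.0], keep=keep)["name"])
```

Scoring only on the selected subspace is what keeps the per-rectangle evaluation cheap enough for a real-time grasping loop.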
-
Date Issued
-
2014
-
Identifier
-
CFE0005509, ucf:50358
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005509
-
-
Title
-
REAL-TIME REALISTIC RENDERING AND HIGH DYNAMIC RANGE IMAGE DISPLAY AND COMPRESSION.
-
Creator
-
Xu, Ruifeng, Pattanaik, Sumanta, University of Central Florida
-
Abstract / Description
-
This dissertation focuses on the many issues that arise from the visual rendering problem. Of primary consideration is light transport simulation, which is known to be computationally expensive. Monte Carlo methods represent a simple and general class of algorithms often used for light transport computation. Unfortunately, the images resulting from Monte Carlo approaches generally suffer from visually unacceptable noise artifacts. The result of any light transport simulation is, by its very nature, an image of high dynamic range (HDR). This leads to the issues of the display of such images on conventional low dynamic range devices and the development of data compression algorithms to store and recover the corresponding large amounts of detail found in HDR images. This dissertation presents our contributions relevant to these issues. Our contributions to high dynamic range image processing include tone mapping and data compression algorithms. This research proposes and shows the efficacy of a novel level set based tone mapping method that preserves visual details in the display of high dynamic range images on low dynamic range display devices. The level set method is used to extract the high frequency information from HDR images. The details are then added to the range compressed low frequency information to reconstruct a visually accurate low dynamic range version of the image. Additional challenges associated with high dynamic range images include the requirements to reduce excessively large amounts of storage and transmission time. To alleviate these problems, this research presents two methods for efficient high dynamic range image data compression. One is based on classical JPEG compression. It first converts the raw image into the RGBE representation, and then sends the color base and common exponent to classical discrete cosine transform based compression and lossless compression, respectively. The other is based on the wavelet transformation.
It first transforms the raw image data into the logarithmic domain, then quantizes the logarithmic data into the integer domain, and finally applies the wavelet based JPEG2000 encoder for entropy compression and bit stream truncation to meet the desired bit rate requirement. We believe that these and similar contributions will make wide application of high dynamic range images possible. The contributions to light transport simulation include Monte Carlo noise reduction, dynamic object rendering, and complex scene rendering. Monte Carlo noise is an inescapable artifact in synthetic images rendered using stochastic algorithms. This dissertation proposes two noise reduction algorithms to obtain high quality synthetic images. The first one models the distribution of noise in the wavelet domain using a Laplacian function, and then suppresses the noise using a Bayesian method. The other extends the bilateral filtering method to reduce all types of Monte Carlo noise in a unified way. All our methods reduce Monte Carlo noise effectively. Rendering of dynamic objects adds another dimension to the expensive light transport simulation issue. This dissertation presents a pre-computation based method. It pre-computes the surface radiance for each basis lighting and animation key frame, and then renders the objects by synthesizing the pre-computed data in real time. Realistic rendering of complex scenes is computationally expensive. This research proposes a novel 3D space subdivision method, which leads to a new rendering framework. The light is first distributed to each local region to form local light fields, which are then used to illuminate the local scenes. The method allows us to render complex scenes at interactive frame rates. Rendering has important applications in mixed reality. Consistent lighting and shadows between real scenes and virtual scenes are important features of visual integration.
The dissertation proposes to render the virtual objects by irradiance rendering using live captured environmental lighting. This research also introduces a virtual shadow generation method that computes shadows cast by virtual objects to the real background. We finally conclude the dissertation by discussing a number of future directions for rendering research, and presenting our proposed approaches.
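Of the techniques above, the first two steps of the wavelet-based HDR compression pipeline (log transform, then integer quantization) can be sketched briefly; the bit depth and luminance bounds below are illustrative assumptions, and the JPEG2000 stage is omitted:

```python
import math

LO, HI, BITS = 1e-4, 1e4, 12           # assumed luminance range and bit depth
LEVELS = (1 << BITS) - 1

def encode(lum):
    """Map a positive HDR luminance to an integer code in the log domain,
    so the quantization error is relative rather than absolute."""
    x = math.log(min(max(lum, LO), HI))
    return round((x - math.log(LO)) / (math.log(HI) - math.log(LO)) * LEVELS)

def decode(code):
    """Invert the quantization back to linear luminance."""
    x = code / LEVELS * (math.log(HI) - math.log(LO)) + math.log(LO)
    return math.exp(x)

for v in (0.001, 1.0, 5000.0):
    r = decode(encode(v))
    print(round(abs(r - v) / v, 5))    # relative error stays small across 8 decades
```

Quantizing in the log domain is what lets a 12-bit code span eight orders of magnitude of luminance with a uniformly bounded relative error.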
-
Date Issued
-
2005
-
Identifier
-
CFE0000730, ucf:46615
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000730
-
-
Title
-
Ignition Studies of Oxy-Syngas/CO2 Mixtures Using Shock Tube for Cleaner Combustion Engines.
-
Creator
-
Barak, Samuel, Vasu Sumathi, Subith, Kapat, Jayanta, Ahmed, Kareem, University of Central Florida
-
Abstract / Description
-
In this study, syngas combustion was investigated behind reflected shock waves in order to gain insight into the behavior of ignition delay times and the effects of CO2 dilution. Pressure and light emission time-history measurements were taken at an axial location 2 cm away from the end wall. High-speed visualization of the experiments from the end wall was also conducted. The oxy-syngas mixtures tested in the shock tube were diluted with CO2 fractions ranging from 60% to 85% by volume. A 10% fuel concentration was used throughout the experiments. This study examined the effects of changing the equivalence ratio (φ) between 0.33, 0.5, and 1.0, as well as changing the fuel ratio of hydrogen to carbon monoxide from 0.25 to 1.0 and 4.0. The study was performed at 1.61-1.77 atm over a temperature range of 1006-1162 K. The high-speed imaging was performed through a quartz end wall with a Phantom V710 camera operated at 67,065 frames per second. The experiments showed that increasing the equivalence ratio resulted in a longer ignition delay time, while increasing the fuel ratio lowered the ignition delay time. These trends are generally expected for this combustion reaction system. The high-speed imaging showed non-homogeneous combustion in the system; however, most of the light emissions were outside the visible range for which the camera is designed. The results were compared to the predictions of two combustion chemical kinetic mechanisms, GRI v3.0 and AramcoMech v2.0. In general, neither mechanism accurately predicted the experimental data. The results show that current models are inaccurate in predicting CO2-diluted environments for syngas combustion.
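For a syngas/oxygen mixture, the equivalence ratios quoted above follow from the stoichiometry H2 + ½O2 → H2O and CO + ½O2 → CO2. The helper below is my own illustration (not from the study) of computing φ from mole fractions:

```python
def equivalence_ratio(x_h2, x_co, x_o2):
    """phi = (fuel/oxidizer) / (fuel/oxidizer)_stoichiometric.
    Each mole of H2 or CO consumes half a mole of O2 at stoichiometry,
    so phi reduces to the ratio of required O2 to available O2."""
    o2_required = 0.5 * (x_h2 + x_co)
    return o2_required / x_o2

# 10% fuel split evenly between H2 and CO (fuel ratio 1.0):
print(equivalence_ratio(0.05, 0.05, 0.05))  # 1.0 (stoichiometric)
print(equivalence_ratio(0.05, 0.05, 0.10))  # 0.5 (fuel lean)
```

Holding the fuel at 10% and varying the O2 fraction (with CO2 as the balance) is one way the lean conditions φ = 0.33 and 0.5 can be set.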
-
Date Issued
-
2018
-
Identifier
-
CFE0006974, ucf:52909
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006974
-
-
Title
-
Science occupational images and aspirations of African American/ Black elementary students.
-
Creator
-
LaMothe, Saron, Hagedorn, W. Bryce, Hopp, Carolyn, Van Horn, Stacy, Blank, William, University of Central Florida
-
Abstract / Description
-
Within the United States, more than a million jobs in science and engineering (S&E) are projected over the next few years; yet, the Nation lacks the workforce to meet these demands. Despite the need for a more diverse, qualified workforce, African Americans/Blacks remain disproportionately underrepresented in science occupations, science degree attainment, and science postsecondary majors. The lack of science participation reflects how minority secondary students view science and science occupations, as many consider the pursuit of a science career unfavorable. Moreover, minority secondary students who do choose to pursue science occupations seem to possess inaccurate (or a lack of) occupational knowledge necessary to do so successfully. Therefore, an understanding of the antecedents of career choice will assist educational professionals in addressing the underrepresentation of diverse populations, such as African Americans/Blacks, within the science workforce. The purpose of this study is to garner insight into the science occupational images and the occupational and educational aspirations of African American/Black fourth- and fifth-grade students. Gottfredson's Theory of Circumscription and Compromise, in conjunction with extant empirical literature, serves as the foundation for the study's conceptual framework. A qualitative case study design was used. The qualitative data provided a contextual understanding of science occupational images and of occupational and educational aspirations. Participant-produced drawings, questionnaires, and semi-structured interviews served as the sources for data collection. Overall, participants lacked some occupational knowledge. Participants viewed scientists as mostly male and Black. Additionally, the occupation of scientist was perceived as dangerous and of high status. Lastly, half of the participants expressed aspirations to be a scientist, while a majority expressed college educational aspirations.
-
Date Issued
-
2019
-
Identifier
-
CFE0007668, ucf:52493
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007668
-
-
Title
-
MEDIA EFFECTS ON BODY IMAGE IN THE CONTEXT OF ENVIRONMENTAL AND INTERNAL INFLUENCES: WHAT MATTERS MOST?.
-
Creator
-
VanVonderen, Kristen, Kinnally, William, University of Central Florida
-
Abstract / Description
-
Media effects on body dissatisfaction are a long-studied issue; however, aspects of the research, such as those regarding cultivation theory and its effects on body image, are unclear or incomplete. This study attempts to clarify the relationship between cultivation and body dissatisfaction. Besides cultivation, social comparison theory is also examined, because upward comparisons with media images and peers can shape and reinforce body image attitudes as well. Additionally, the study examines the connection between media and body dissatisfaction by looking at a broader social context, one that includes other social/environmental influences, such as peer and parental attitudes, as well as internal influences such as self-esteem. A sample of 285 female undergraduate students completed media exposure, parental influence, peer influence, and self-esteem measures, as well as internalization of the thin ideal and body dissatisfaction measures. Overall, the study found that while peer comparisons and self-esteem are associated with internalization of the thin ideal, they are not as powerful as the most significant indicators: media attitudes regarding weight and body shape, and media comparisons. By contrast, peer comparisons and self-esteem were observed to be the strongest indicators of body dissatisfaction. These findings suggest that cultivation is directly associated with internalization of the thin ideal. However, the cultivation of media messages may not have a direct effect on body dissatisfaction, as social/environmental influences and the internal variable of self-esteem proved to be the most significant indicators.
-
Date Issued
-
2011
-
Identifier
-
CFE0003995, ucf:48676
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003995
-
-
Title
-
Development of molecular and cellular imaging tools to evaluate gene and cell-based therapeutic strategies in vivo.
-
Creator
-
Xia, Jixiang, Ebert, Steven, Khaled, Annette, Cheng, Zixi, Daniell, Henry, University of Central Florida
-
Abstract / Description
-
Molecular imaging modalities are important tools to evaluate the efficacy of gene delivery systems and cell-based therapies. Development and application of these modalities will advance our understanding of the mechanisms of transgene expression and of cell fate and function. Among gene therapeutic strategies, physical gene transfer methods hold many advantages over viral vectors. Here, we evaluated the efficacy of biolistic ("gene gun") gene targeting to tissues using non-invasive bioluminescence imaging (BLI) methods. Plasmids carrying the firefly luciferase reporter gene were transfected into mouse skin and liver using biolistics, and BLI was measured at various time points after transfer. With an optimized DNA loading ratio (DLR), reporter gene expression peaked 1 day after transfer to mouse skin, and the maximum depth of tissue penetration was between 200-300 µm. A similar peak of reporter gene expression was found in mouse liver, but the expression was relatively stable 4-8 days post-biolistic gene transfer and remained for up to two weeks afterward. Our results demonstrated that BLI is an efficient strategy for evaluating reporter gene expression in the same animals over a period of up to two weeks in vivo. Different tissues showed different expression kinetics, suggesting that this is an important parameter to consider when developing gene therapy strategies for different target tissues. We also employed BLI to measure differentiation of mouse embryonic stem (ES) cells into beating cardiomyocytes in vitro and in vivo. A subset of these cardiomyocytes appears to be derived from an adrenergic lineage that ultimately contributes substantial numbers of cardiomyocytes, primarily on the left side of the heart.
At present, it is unclear what the precise role of these cardiac adrenergic cells is with respect to heart development, though it is known that adrenergic hormones (adrenaline and noradrenaline) are essential for embryonic development, since mice lacking them die from apparent heart failure during the prenatal period. To identify and characterize cardiac adrenergic cells, we developed a novel mouse genetic model in which the nuclear-localized enhanced green fluorescent protein (nEGFP) reporter gene was targeted to the first exon of the phenylethanolamine N-methyltransferase (Pnmt) gene, which encodes the enzyme that converts noradrenaline to adrenaline and hence serves as a marker for adrenergic cells. Our results demonstrate that this knock-in strategy effectively marked adrenergic cells in both fetal and adult mice. Expression of nEGFP was found in Pnmt-positive cells of the adult adrenal medulla, as expected. Pnmt-nEGFP expression also recapitulated endogenous Pnmt expression in the embryonic mouse heart. In addition, nEGFP and Pnmt expression were induced in parallel during differentiation of pluripotent mouse ES cells into beating cardiomyocytes. This new mouse genetic model provides a useful tool for studying the properties of adrenergic cells in different tissues. We also identified two limitations of the Pnmt-nEGFP model. First, the amount of nEGFP expressed within individual adrenergic cells was highly variable. Second, expression of nEGFP in the embryonic heart was of low abundance and difficult to distinguish from background autofluorescence. To overcome these limitations, we developed two alternative genetic models to investigate adrenergic cells: (1) Mouse embryonic stem cells, which had been previously targeted with the Pnmt-Cre recombinase gene, were additionally targeted with a dual reporter plasmid carrying both a loxP-flanked cDNA of a red fluorescent protein (HcRed) and EGFP.
In the undifferentiated state, cells emit red fluorescence because transcription terminates before the EGFP coding sequence. After differentiation into beating cardiomyocytes, some cells switch fluorescence from red to green, indicating excision of the loxP-flanked sequence by Cre recombinase following Pnmt activation. (2) A surface marker, the truncated low-affinity nerve growth factor receptor (ΔLNGFR), was used as the reporter gene, as cells expressing this marker can be enriched by magnetic-activated cell sorting (MACS), a potentially efficient way to yield highly purified positive cells from a population with low input abundance. Through a series of subcloning steps, the targeting construct Pnmt-ΔLNGFR-Neo-DTA was created and electroporated into 7AC5EYFP embryonic stem cells. Correctly targeted cells were selected by positive and negative screening. These cells provide a new tool with which to identify, isolate, and characterize the function of adrenergic cells in the developing heart, adrenal gland, and other tissues where adrenergic cells make important contributions.
-
Date Issued
-
2011
-
Identifier
-
CFE0004491, ucf:49287
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004491