Title
-
Multi-Modal Interfaces for Sensemaking of Graph-Connected Datasets.
-
Creator
-
Wehrer, Anthony, Hughes, Charles, Wisniewski, Pamela, Pattanaik, Sumanta, Specht, Chelsea, Lisle, Curtis, University of Central Florida
-
Abstract / Description
-
Hypothesized evolutionary processes are often visualized through phylogenetic trees. Given evolutionary data in one of several widely accepted formats, software exists to render these data into a tree diagram. However, software packages commonly in use by biologists today often do not provide means to dynamically adjust and customize these diagrams, whether for studying new hypothetical relationships or for illustration and publication purposes. Even where these options are available, they can lack intuitiveness and ease of use. The goal of our research is thus to investigate more natural and effective means of sensemaking of the data with different user input modalities. To this end, we experimented with different input modalities, designing and running a series of prototype studies, ultimately focusing our attention on pen-and-touch. Through several iterations of feedback and revision provided with the help of biology experts and students, we developed a pen-and-touch phylogenetic tree browsing and editing application called PhyloPen. This application expands on the capabilities of existing software with visualization techniques such as overview+detail, linked data views, and new interaction and manipulation techniques using pen-and-touch. To determine its impact on phylogenetic tree sensemaking, we conducted a within-subject comparative summative study against the most comparable and commonly used state-of-the-art mouse-based software system, Mesquite. In the study, conducted with biology majors at the University of Central Florida, each participant used both software systems on a set number of exercise tasks of the same type. Measured on several dependent variables, the results show PhyloPen was significantly better in terms of usefulness, satisfaction, ease of learning, ease of use, and cognitive load, and about the same in variation of completion time.
These results support an interaction paradigm that is superior to classic mouse-based interaction, which could have the potential to be applied to other communities that employ graph-based representations of their problem domains.
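As an aside on the data formats mentioned above: phylogenies are commonly stored as Newick strings, which a browsing or editing tool must parse before it can render anything. The sketch below is our own minimal illustration, not PhyloPen code, and handles only bare leaf names (no branch lengths or quoting).

```python
# Hypothetical illustration (not from the dissertation): extract leaf
# names from a simple Newick string such as "((A,B),(C,D));".

def newick_leaves(newick):
    """Collect leaf names from a minimal Newick string.
    Branch lengths, comments, and quoted labels are not handled."""
    leaves, name = [], ""
    for ch in newick:
        if ch in "(),;":
            if name:          # a name ends at any structural character
                leaves.append(name)
            name = ""
        elif not ch.isspace():
            name += ch
    return leaves

print(newick_leaves("((A,B),(C,D));"))  # ['A', 'B', 'C', 'D']
```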
-
Date Issued
-
2019
-
Identifier
-
CFE0007872, ucf:52788
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007872
-
-
Title
-
Numerical Simulation of Non-Premixed and Premixed Axial Stage Combustor at High Pressure.
-
Creator
-
Worbington, Tyler, Ahmed, Kareem, Bhattacharya, Samik, Vasu Sumathi, Subith, University of Central Florida
-
Abstract / Description
-
Axial-staged combustors represent an important concept that can be applied to reduce NOx emissions throughout a gas turbine engine. This study presents four main CFD models describing a highly turbulent jet-in-crossflow (JIC) simulation of partially premixed and non-premixed jets at a constant chamber pressure of 5 atm absolute. The equivalence ratio of the partially premixed jet was held constant at rich conditions with φ_jet = 4, while the main stage varied between φ_1 = 0.575 and φ_2 = 0.73, with average headend temperatures of 1415 K and 1545 K, respectively. Chemistry was reduced by tabulating eight main species using the equilibrium calculation of the Chemkin software. The centerline temperatures entering the JIC stage were measured experimentally and used as the starting point of a radial temperature profile that follows a parabolic trend. Comparison between the uniform and radial temperature profiles showed that the latter had a higher penetration depth into the vitiated crossflow due to the direct relationship between temperature and velocity. To capture the combustion process, the Flamelet Generated Manifold (FGM) model was used. The progress-variable source uses Turbulent Flame Speed Closure (TFC) to calculate flame propagation and position. There are two distinct flame positions of stability, on the windward and leeward sides of the jet. The leeward flame positions for the two equivalence ratios showed that the richer condition sits closer to the jet due to the hotter equilibrium temperature, while the windward flame position is shifted upstream for the leaner case due to greater availability of oxygen. The total temperature rises for φ_1 = 0.575 and φ_2 = 0.73 are ΔT = 239 K and 186 K, respectively.
The non-premixed simulations used a Steady Laminar Flamelet (SLF) approach with a headend equivalence ratio of φ_non = 0.6 and a detailed prediction of CH4 consumption, CO production, and temperature increase throughout the jet-in-crossflow domain. Methane was shown to be consumed at a high rate, with almost 90% conversion and a temperature rise of ΔT = 149 K. The heat release is below the calculated equilibrium ΔT, mainly because a significant amount of CH4 is only partially oxidized to CO due to limited oxygen availability in a fuel-only configuration. Realizable k-epsilon, SST k-omega γ-Reθ, and Reynolds Stress Transport (RST) turbulence models were used and compared. The RST turbulence model was shown to overpredict the penetration depths and dissipation of the jet in the downstream domain when compared to literature and experimental data.
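The equivalence ratio φ that anchors the conditions above is the actual fuel-to-oxidizer ratio normalized by its stoichiometric value; φ > 1 is fuel-rich, φ < 1 is lean. The sketch below is our own illustration of the definition, not code from the thesis; the methane stoichiometry (CH4 + 2 O2 → CO2 + 2 H2O) is assumed.

```python
# Sketch: equivalence ratio phi = (fuel/O2)_actual / (fuel/O2)_stoich.
# For methane the stoichiometric fuel:O2 mole ratio is 1:2, i.e. 0.5.

def equivalence_ratio(n_fuel, n_o2, stoich_fuel_per_o2=0.5):
    """phi for a fuel/O2 mixture given mole amounts of each."""
    return (n_fuel / n_o2) / stoich_fuel_per_o2

# A rich jet at phi = 4 carries 4x the stoichiometric fuel fraction:
print(equivalence_ratio(2.0, 1.0))  # -> 4.0
```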
-
Date Issued
-
2019
-
Identifier
-
CFE0007880, ucf:52772
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007880
-
-
Title
-
A Study of Localization and Latency Reduction for Action Recognition.
-
Creator
-
Masood, Syed, Tappen, Marshall, Foroosh, Hassan, Stanley, Kenneth, Sukthankar, Rahul, University of Central Florida
-
Abstract / Description
-
The success of recognizing periodic actions in single-person-simple-background datasets, such as Weizmann and KTH, has created a need for more complex datasets to push the performance of action recognition systems. In this work, we create a new synthetic action dataset and use it to highlight weaknesses in current recognition systems. Experiments show that introducing background complexity to action video sequences causes a significant degradation in recognition performance. Moreover, this degradation cannot be fixed by fine-tuning system parameters or by selecting better feature points. Instead, we show that the problem lies in the spatio-temporal cuboid volume extracted from the interest point locations. Having identified the problem, we show how improved results can be achieved by simple modifications to the cuboids. For the above method, however, one requires near-perfect localization of the action within a video sequence. To achieve this objective, we present a two-stage weakly supervised probabilistic model for simultaneous localization and recognition of actions in videos. Different from previous approaches, our method is novel in that it (1) eliminates the need for manual annotations for the training procedure and (2) does not require any human detection or tracking in the classification stage. The first stage of our framework is a probabilistic action localization model which extracts the most promising sub-windows in a video sequence where an action can take place. We use a non-linear classifier in the second stage of our framework for the final classification task. We show the effectiveness of our proposed model on two well-known real-world datasets: the UCF Sports and UCF11 datasets. Another application of the weakly supervised probabilistic model proposed above is in the gaming environment. An important aspect in designing interactive, action-based interfaces is reliably recognizing actions with minimal latency.
High latency causes the system's feedback to lag behind and thus significantly degrade the interactivity of the user experience. With slight modification to the weakly supervised probabilistic model we proposed for action localization, we show how it can be used to reduce latency when recognizing actions in Human-Computer Interaction (HCI) environments. This latency-aware learning formulation trains a logistic regression-based classifier that automatically determines distinctive canonical poses from the data and uses these to robustly recognize actions in the presence of ambiguous poses. We introduce a novel (publicly released) dataset for the purpose of our experiments. Comparisons of our method against both a Bag of Words and a Conditional Random Field (CRF) classifier show improved recognition performance for both pre-segmented and online classification tasks.
-
Date Issued
-
2012
-
Identifier
-
CFE0004575, ucf:49210
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004575
-
-
Title
-
RECONFIGURABLE COMPUTING FOR VIDEO CODING.
-
Creator
-
Huang, Jian, Lee, Jooheung, University of Central Florida
-
Abstract / Description
-
Video coding is widely used in our daily life. Due to its high computational complexity, hardware implementation is usually preferred. In this research, we investigate both an ASIC hardware design approach and a reconfigurable hardware design approach for video coding applications. First, we present a unified architecture that can perform the Discrete Cosine Transform (DCT), the Inverse Discrete Cosine Transform (IDCT), and DCT-domain motion estimation and compensation (DCT-ME/MC). Our proposed architecture is a wavefront-array-based processor with a highly modular structure consisting of an 8*8 grid of Processing Elements (PEs). By exploiting statistical properties and arithmetic operations, it can be used as a high-performance hardware accelerator for video transcoding applications. We show how different core algorithms can be mapped onto the same hardware fabric and executed through the pre-defined PEs. In addition to the simplified design process of the proposed architecture and the savings in hardware resources, we also demonstrate that a high throughput rate can be achieved for IDCT and DCT-MC by fully exploiting the sparseness of the DCT coefficient matrix. Compared to a fixed hardware architecture using the ASIC design approach, the reconfigurable hardware design approach has higher flexibility, lower cost, and faster time-to-market. We propose a self-reconfigurable platform which can reconfigure the architecture of the DCT computations at run-time using dynamic partial reconfiguration. The scalable architecture for DCT computations can compute different numbers of DCT coefficients in zig-zag scan order to adapt to different requirements, such as power consumption, hardware resources, and performance. We propose a configuration manager, implemented in the embedded processor, to adaptively control the reconfiguration of the scalable DCT architecture at run-time.
In addition, we use the LZSS algorithm to compress the partial bitstreams and on-chip BlockRAM as a cache to reduce the latency overhead of loading partial bitstreams from off-chip memory for run-time reconfiguration. A hardware module is designed for parallel reconfiguration of the partial bitstreams. The experimental results show that our approach can reduce external memory accesses by 69% and can achieve a 400 MBytes/s reconfiguration rate. A prediction algorithm for zero quantized DCT (ZQDCT) coefficients is used to control the run-time reconfiguration of the proposed scalable architecture, and 12 different modes of DCT computation, including zonal coding, multi-block processing, and parallel-sequential stage modes, are supported to reduce power consumption, required hardware resources, and computation time with only a small quality degradation. Detailed trade-offs among power, throughput, and quality are investigated and used as a criterion for self-reconfiguration to meet the requirements set by the users.
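The "different numbers of DCT coefficients in zig-zag scan order" idea above can be made concrete with a small sketch. This is our own illustration, not the dissertation's hardware: it generates the standard JPEG-style zig-zag ordering of an n*n block and keeps only the first k (lowest-frequency) coefficients, as a scalable DCT datapath might.

```python
def zigzag_indices(n=8):
    """(row, col) pairs of an n x n block in zig-zag scan order:
    traverse anti-diagonals, alternating direction on each one."""
    key = lambda rc: (rc[0] + rc[1],
                      rc[0] if (rc[0] + rc[1]) % 2 else rc[1])
    return sorted(((r, c) for r in range(n) for c in range(n)), key=key)

def first_k_coeffs(block, k):
    """Zero out all but the first k zig-zag coefficients of a square
    block, mimicking a reduced-precision/reduced-power DCT mode."""
    idx = zigzag_indices(len(block))
    out = [[0] * len(block) for _ in block]
    for r, c in idx[:k]:
        out[r][c] = block[r][c]
    return out

print(zigzag_indices(3)[:4])  # [(0, 0), (0, 1), (1, 0), (2, 0)]
```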
-
Date Issued
-
2010
-
Identifier
-
CFE0003262, ucf:48522
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003262
-
-
Title
-
REAL-TIME CINEMATIC DESIGN OF VISUAL ASPECTS IN COMPUTER-GENERATED IMAGES.
-
Creator
-
Obert, Juraj, Pattanaik, Sumanta, University of Central Florida
-
Abstract / Description
-
Creation of visually pleasing images has always been one of the main goals of computer graphics. Two components are necessary to achieve this goal: artists who design the visual aspects of an image (such as materials or lighting) and sophisticated algorithms that render the image. Traditionally, rendering has been of greater interest to researchers, while the design part has been deemed secondary. This has led to many inefficiencies, as artists, in order to create a stunning image, are often forced to resort to traditional, creativity-barring pipelines consisting of repeated rendering and parameter tweaking. Our work shifts the attention away from the rendering problem and focuses on the design. We propose to combine non-physical editing with real-time feedback and provide artists with efficient ways of designing complex visual aspects such as global illumination or all-frequency shadows. We conform to existing pipelines by inserting our editing components into existing stages, thereby making the editing of visual aspects an inherent part of the design process. Many of the examples shown in this work have been, until now, extremely hard to achieve. The non-physical aspect of our work enables artists to express themselves in more creative ways, not limited by the physical parameters of current renderers. Real-time feedback allows artists to immediately see the effects of applied modifications, and compatibility with existing workflows enables easy integration of our algorithms into production pipelines.
-
Date Issued
-
2010
-
Identifier
-
CFE0003250, ucf:48559
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003250
-
-
Title
-
MARKERLESS TRACKING USING POLAR CORRELATION OF CAMERA OPTICAL FLOW.
-
Creator
-
Gupta, Prince, da Vitoria Lobo, Niels, University of Central Florida
-
Abstract / Description
-
We present a novel, real-time, markerless vision-based tracking system employing a rigid orthogonal configuration of two pairs of opposing cameras. Our system uses optical flow over sparse features to overcome the limitation of vision-based systems that require markers or a pre-loaded model of the physical environment. We show how opposing cameras enable cancellation of the common components of optical flow, leading to an efficient tracking algorithm that captures five degrees of freedom, including the direction of translation and angular velocity. Experiments comparing our device with an electromagnetic tracker show that its average tracking accuracy is 80% over 185 frames, and that it is able to track large-range motions even in outdoor settings. We also show how opposing cameras in vision-based inside-looking-out systems can be used for gesture recognition. To demonstrate our approach, we discuss three different algorithms for recovering motion parameters at different levels of complete recovery. We show how optical flow in opposing cameras can be used to recover the motion parameters of the multi-camera rig. Experimental results show gesture recognition accuracies of 88.0%, 90.7%, and 86.7% for our three techniques, respectively, across a set of 15 gestures.
-
Date Issued
-
2010
-
Identifier
-
CFE0003163, ucf:48611
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003163
-
-
Title
-
Evolution Through the Search for Novelty.
-
Creator
-
Lehman, Joel, Stanley, Kenneth, Gonzalez, Avelino, Wiegand, Rudolf, Hoffman, Eric, University of Central Florida
-
Abstract / Description
-
I present a new approach to evolutionary search called novelty search, wherein only behavioral novelty is rewarded, thereby abstracting evolution as a search for novel forms. This new approach contrasts with the traditional approach of rewarding progress towards the objective through an objective function. Although they are designed to light a path to the objective, objective functions can instead deceive search into converging to dead ends called local optima. As a significant problem in evolutionary computation, deception has inspired many techniques designed to mitigate it. However, nearly all such methods are still ultimately susceptible to deceptive local optima because they still measure progress with respect to the objective, which this dissertation will show is often a broken compass. Furthermore, although novelty search completely abandons the objective, it counterintuitively often outperforms methods that search directly for the objective in deceptive tasks and can induce evolutionary dynamics closer in spirit to natural evolution. The main contributions are to (1) introduce novelty search, an example of an effective search method that is not guided by actively measuring or encouraging objective progress; (2) validate novelty search by applying it to biped locomotion; (3) demonstrate novelty search's benefits for evolvability (i.e. the ability of an organism to further evolve) in a variety of domains; (4) introduce an extension of novelty search called minimal criteria novelty search that brings a new abstraction of natural evolution to evolutionary computation (i.e.
evolution as a search for many ways of meeting the minimal criteria of life); (5) present a second extension of novelty search called novelty search with local competition that abstracts evolution instead as a process driven towards diversity with competition playing a subservient role; and (6) evolve a diversity of functional virtual creatures in a single run as a culminating application of novelty search with local competition. Overall these contributions establish novelty search as an important new research direction for the field of evolutionary computation.
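The core scoring rule behind novelty search is commonly described as the mean distance from an individual's behavior to its k nearest neighbors, measured against the current population plus an archive of past behaviors. The sketch below is our own minimal one-dimensional illustration of that rule, not code from the dissertation.

```python
# Minimal novelty-score sketch: behaviors are scalars here for clarity;
# in practice they are vectors and the distance is e.g. Euclidean.

def novelty(behavior, others, k=3):
    """Mean distance to the k nearest neighbors in behavior space."""
    nearest = sorted(abs(behavior - b) for b in others)[:k]
    return sum(nearest) / len(nearest)

population = [0.1, 0.12, 0.5, 0.9]
archive = [0.11, 0.55]
# An individual in a crowded region of behavior space scores low ...
print(novelty(0.1, population[1:] + archive))
# ... while one in an unexplored region scores high.
print(novelty(2.0, population + archive))
```

Rewarding this score instead of objective progress is what drives the search toward unvisited regions of behavior space.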
-
Date Issued
-
2012
-
Identifier
-
CFE0004398, ucf:49390
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004398
-
-
Title
-
Quantum Algorithms for: Quantum Phase Estimation, Approximation of the Tutte Polynomial and Black-box Structures.
-
Creator
-
Ahmadi Abhari, Seyed Hamed, Brennan, Joseph, Mucciolo, Eduardo, Li, Xin, Marinescu, Dan, University of Central Florida
-
Abstract / Description
-
In this dissertation, we investigate three different problems in the field of quantum computation. First, we discuss the quantum complexity of evaluating the Tutte polynomial of a planar graph. Next, we devise a new quantum algorithm for approximating the phase of a unitary matrix. Finally, we provide quantum tools that can be utilized to extract the structure of black-box modules and algebras. While quantum phase estimation (QPE) is at the core of many quantum algorithms known to date, its physical implementation (algorithms based on the quantum Fourier transform (QFT)) is highly constrained by the requirement of high-precision controlled phase shift operators, which remain difficult to realize. In the second part of this dissertation, we introduce an alternative approach to approximately implement QPE with arbitrary constant-precision controlled phase shift operators. The new quantum algorithm bridges the gap between QPE algorithms based on QFT and Kitaev's original approach. For approximating the eigenphase precise to the nth bit, Kitaev's original approach does not require any controlled phase shift operator. In contrast, QPE algorithms based on QFT or approximate QFT require controlled phase shift operators with precision of at least π/2^n. The new approach fills this gap, requiring only arbitrary constant-precision controlled phase shift operators. From a physical implementation viewpoint, the new algorithm outperforms Kitaev's approach. The other problem we investigate relates to approximating the Tutte polynomial. We show that the problem of approximately evaluating the Tutte polynomial of triangular graphs at the points (q, 1/q) of the Tutte plane is BQP-complete for (most) roots of unity q.
We also consider circular graphs and show that the problem of approximately evaluating the Tutte polynomial of these graphs at a point is DQC1-complete, and at some points is in BQP. To show that these problems can be solved by a quantum computer, we rely on the relation of the Tutte polynomial of a planar graph G with the Jones and HOMFLY polynomials of the alternating link D(G) given by the medial graph of G. In the case of our graphs, the corresponding links are equal to the plat and trace closures of braids. It is known how to evaluate the Jones and HOMFLY polynomials for closures of braids. To establish the hardness results, we use the property that the images of the generators of the braid group under the irreducible Jones-Wenzl representations of the Hecke algebra have finite order. We show that for each braid we can efficiently construct another braid such that the evaluation of the Jones and HOMFLY polynomials of their closures at a fixed root of unity leads to the same value, and that the closures of the resulting braid are alternating links. The final part of the dissertation focuses on finding the structure of a black-box module or algebra. Suppose we are given black-box access to a finite module M or algebra over a finite ring R, along with a list of generators for M and R. We show how to find a linear basis and structure constants for M in quantum poly(log |M|) time. This generalizes a recent quantum algorithm of Arvind et al. which finds a basis representation for rings. We then show that our algorithm is a useful primitive allowing a quantum computer to determine the structure of a finite associative algebra as a direct sum of simple algebras. Moreover, it solves a wide variety of problems regarding finite modules and rings. Although our quantum algorithm is based on Abelian Fourier transforms, it solves problems regarding the multiplicative structure of modules and algebras, which need not be commutative.
Examples include finding the intersection and quotient of two modules, finding the additive and multiplicative identities in a module, computing the order of a module, solving linear equations over modules, deciding whether an ideal is maximal, finding annihilators, and testing the injectivity and surjectivity of ring homomorphisms. These problems appear to be exponentially hard classically.
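The Kitaev-style statistics mentioned above can be illustrated with a classical simulation. This sketch is ours, not the dissertation's algorithm: in a Hadamard test on an eigenstate with eigenphase phi, the control qubit reads 0 with probability cos^2(pi*phi), and inverting the sampled statistics recovers |phi| (a second test with an extra phase gate resolves the sign, omitted here).

```python
# Classical simulation sketch of Kitaev-style phase readout.
import math
import random

def hadamard_test_p0(phi, shots=200_000, rng=random.Random(0)):
    """Simulated fraction of 0 outcomes; ideal value is cos^2(pi*phi).
    The seeded rng makes the sketch deterministic."""
    p0 = math.cos(math.pi * phi) ** 2
    return sum(rng.random() < p0 for _ in range(shots)) / shots

def estimate_phase_magnitude(phi):
    """Recover |phi| in [0, 1/2] from the sampled statistics."""
    return math.acos(math.sqrt(hadamard_test_p0(phi))) / math.pi

print(estimate_phase_magnitude(0.125))  # close to 0.125
```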
-
Date Issued
-
2012
-
Identifier
-
CFE0004239, ucf:49526
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004239
-
-
Title
-
Human Action Localization and Recognition in Unconstrained Videos.
-
Creator
-
Boyraz, Hakan, Tappen, Marshall, Foroosh, Hassan, Lin, Mingjie, Zhang, Shaojie, Sukthankar, Rahul, University of Central Florida
-
Abstract / Description
-
As imaging systems become ubiquitous, the ability to recognize human actions is becoming increasingly important. Just as in the object detection and recognition literature, action recognition can be roughly divided into classification tasks, where the goal is to classify a video according to the action depicted in it, and detection tasks, where the goal is to detect and localize a human performing a particular action. A growing literature is demonstrating the benefits of localizing discriminative sub-regions of images and videos when performing recognition tasks. In this thesis, we address the action detection and recognition problems. Action detection in video is a particularly difficult problem because actions must not only be recognized correctly, but must also be localized in the 3D spatio-temporal volume. We introduce a technique that transforms the 3D localization problem into a series of 2D detection tasks. This is accomplished by dividing the video into overlapping segments, then representing each segment with a 2D video projection. The advantage of the 2D projection is that it makes it convenient to apply the best techniques from object detection to the action detection problem. We also introduce a novel, straightforward method for searching the 2D projections to localize actions, termed Two-Point Subwindow Search (TPSS). Finally, we show how to connect the local detections in time using a chaining algorithm to identify the entire extent of the action. Our experiments show that video projection outperforms the latest results on action detection in a direct comparison. Second, we present a probabilistic model that learns to identify discriminative regions in videos from weakly supervised data, where each video clip is only assigned a label describing what action is present in the frame or clip.
While our first system requires every action to be manually outlined in every frame of the video, this second system only requires that the video be given a single high-level tag. From these data, the system is able to identify discriminative regions that correspond well to the regions containing the actual actions. Our experiments on both the MSR Action Dataset II and the UCF Sports Dataset show that the localizations produced by this weakly supervised system are comparable in quality to localizations produced by systems that require each frame to be manually annotated. This system is able to detect actions in both 1) non-temporally segmented action videos and 2) recognition tasks where a single label is assigned to the clip. We also demonstrate the action recognition performance of our method on two complex datasets, i.e. HMDB and UCF101. Third, we extend our weakly supervised framework by replacing the recognition stage with a two-stage neural network and applying dropout to prevent overfitting of the parameters on the training data. The dropout technique was recently introduced to prevent overfitting of the parameters in deep neural networks and has been applied successfully to the object recognition problem. To our knowledge, this is the first system using dropout for the action recognition problem. We demonstrate that using dropout improves action recognition accuracies on the HMDB and UCF101 datasets.
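The subwindow-search objective behind localization methods like the ones above can be shown with a toy. This is our own brute-force illustration, not TPSS itself: given a per-frame score for how "action-like" each frame is, find the contiguous window of frames with the highest total score (TPSS and related branch-and-bound methods search such windows far more efficiently).

```python
# Brute-force 1-D subwindow search: O(n^2) over all (start, end) pairs.

def best_window(frame_scores):
    """Return ((start, end), score) of the best-scoring contiguous
    window of frames, inclusive on both ends."""
    best, best_span = float("-inf"), (0, 0)
    for i in range(len(frame_scores)):
        total = 0.0
        for j in range(i, len(frame_scores)):
            total += frame_scores[j]
            if total > best:
                best, best_span = total, (i, j)
    return best_span, best

scores = [-1.0, 2.0, 3.0, -4.0, 1.0]
print(best_window(scores))  # ((1, 2), 5.0)
```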
-
Date Issued
-
2013
-
Identifier
-
CFE0004977, ucf:49562
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004977
-
-
Title
-
From the top: Impression management strategies and organizational identity in executive-authored weblogs.
-
Creator
-
McLane, Teryl, Hastings, Sally, Weger, Harry, Musambira, George, University of Central Florida
-
Abstract / Description
-
This research examines the impression management strategies high-ranking organizational executives employ to create an identity for themselves and their companies via executive-authored Weblogs (blogs). This study attempts to identify specific patterns of impression management strategies through a deductive content analysis applying Jones' (1990) taxonomy of self-presentation strategies to this particular type of computer-mediated communication. Sampling for this study (n=227) was limited to blogs solely and regularly authored by the highest-ranking leaders of Fortune 500 companies. The study revealed that executive bloggers frequently employed impression management strategies aimed at conveying competency attributes (self-promotion), likeability (ingratiation), and moral worthiness (exemplification) to construct and shape a positive identity for themselves and their organizations for their publics. Supplication strategies were used less frequently, while intimidation strategies were rarely used.
Show less
-
Date Issued
-
2012
-
Identifier
-
CFE0004411, ucf:49373
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004411
-
-
Title
-
NEW COMPUTATIONAL APPROACHES FOR MULTIPLE RNA ALIGNMENT AND RNA SEARCH.
-
Creator
-
DeBlasio, Daniel, Zhang, Shaojie, University of Central Florida
-
Abstract / Description
-
In this thesis we explore the theory and history behind RNA alignment. Normal sequence alignments as studied by computer scientists can be completed in $O(n^2)$ time in the naive case. The process involves taking two input sequences and finding the list of edits that can transform one sequence into the other. This process is applied to biology in many forms, such as the creation of multiple alignments and the search of genomic sequences. When the RNA sequence structure is taken into account, the problem becomes even harder. Multiple RNA structure alignment is particularly challenging because covarying mutations make sequence information alone insufficient. Existing tools for multiple RNA alignment first generate pair-wise RNA structure alignments and then build the multiple alignment using only the sequence information. Here we present PMFastR, an algorithm which iteratively uses a sequence-structure alignment procedure to build a multiple RNA structure alignment. PMFastR also has low memory consumption, allowing for the alignment of large sequences such as 16S and 23S rRNA. Specifically, we reduce the memory consumption to $\sim O(band^2 * m)$, where $band$ is the banding size. Other solutions are $\sim O(n^2 * m)$, where $n$ and $m$ are the lengths of the target and query, respectively. The algorithm also provides a method to utilize a multi-core environment. We present results on benchmark data sets from BRAliBase, which show that PMFastR outperforms other state-of-the-art programs. Furthermore, we regenerate 607 Rfam seed alignments and show that our automated process creates multiple alignments similar to the manually curated Rfam seed alignments. While these methods can also be applied directly to genome sequence search, the abundance of new multiple-species genome alignments presents a new area for exploration. Many multiple alignments of whole genomes are available, and these alignments keep growing in size.
These alignments can provide more information to the searcher than just a single sequence. Using the methodology from sequence-structure alignment, we developed AlnAlign, which searches an entire genome alignment using RNA sequence structure. While programs have been readily available to align alignments, this is, to our knowledge, the first that is specifically designed for RNA sequences. This algorithm is presented only in theory and has yet to be tested.
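The $O(n^2)$ edit-distance dynamic program that the abstract takes as its starting point can be written in a few lines. This is the textbook sequence-only algorithm, shown here for illustration; it does not capture the RNA secondary structure that PMFastR also aligns:

```python
def edit_distance(s, t):
    """Textbook O(n*m) dynamic program: dp[i][j] is the minimum number
    of insertions, deletions, and substitutions turning s[:i] into t[:j]."""
    n, m = len(s), len(t)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i          # delete all of s[:i]
    for j in range(m + 1):
        dp[0][j] = j          # insert all of t[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # delete s[i-1]
                           dp[i][j - 1] + 1,         # insert t[j-1]
                           dp[i - 1][j - 1] + cost)  # match or substitute
    return dp[n][m]

print(edit_distance("kitten", "sitting"))  # 3
```

Banding, as in the $\sim O(band^2 * m)$ bound above, restricts $j$ to a window around $i$ so only a diagonal band of this table is ever stored.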
-
Date Issued
-
2009
-
Identifier
-
CFE0002736, ucf:48166
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002736
-
-
Title
-
OF GODS, BEASTS AND MEN: DIGITAL SCULPTURE.
-
Creator
-
Salisbury, Brian, Kovach, Keith, University of Central Florida
-
Abstract / Description
-
My most recent body of work explores the synthesis of my influences, interests and life experiences into imagery of common themes: the expression of dynamic figures, forms and colors in digital 3D space, cinematic composition, and vibrant color, expressed through a semblance of Aztec culture and wildlife. My sculptures of nature and ancient culture are created using contemporary digital art creation technologies and techniques. I examine the art and religion of the Aztecs and the universal search for understanding and purpose in the world and the forces around and beyond us.
-
Date Issued
-
2009
-
Identifier
-
CFE0002587, ucf:48278
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002587
-
-
Title
-
NEAR-FIELD OPTICAL INTERACTIONS AND APPLICATIONS.
-
Creator
-
Haefner, David, Dogariu, Aristide, University of Central Florida
-
Abstract / Description
-
The propagation symmetry of electromagnetic fields is affected by encounters with material systems. The effects of such interactions, for example, modifications of intensity, phase, polarization, angular spectrum, frequency, etc., can be used to obtain information about the material system. However, the propagation of electromagnetic waves imposes a fundamental limit to the length scales over which the material properties can be observed. In the realm of near-field optics, this limitation is overcome only through a secondary interaction that couples the high-spatial-frequency (but non-propagating) field components to propagating waves that can be detected. The available information depends intrinsically on this secondary interaction, which constitutes the topic of this study. Quantitative measurements of material properties can be performed only by controlling the subtle characteristics of these processes. This dissertation discusses situations where the effects of near-field interactions can be (i) neglected in certain passive testing techniques, (ii) exploited for active probing of static or dynamic systems, or (iii) statistically isolated when considering optically inhomogeneous materials. This dissertation presents novel theoretical developments, experimental measurements, and numerical results that elucidate the vectorial aspects of the interaction between light and nano-structured material for use in sensing applications.
-
Date Issued
-
2010
-
Identifier
-
CFE0003095, ucf:48318
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003095
-
-
Title
-
MULTIAGENT LEARNING THROUGH INDIRECT ENCODING.
-
Creator
-
D'Ambrosio, David, Stanley, Kenneth, University of Central Florida
-
Abstract / Description
-
Designing a system of multiple, heterogeneous agents that cooperate to achieve a common goal is a difficult task, but it is also a common real-world problem. Multiagent learning addresses this problem by training the team to cooperate through a learning algorithm. However, most traditional approaches treat multiagent learning as a combination of multiple single-agent learning problems. This perspective leads to many inefficiencies in learning, such as the problem of reinvention, whereby fundamental skills and policies that all agents should possess must be rediscovered independently for each team member. For example, in soccer, all the players know how to pass and kick the ball, but a traditional algorithm has no way to share such vital information because it has no way to relate the policies of agents to each other. In this dissertation a new approach to multiagent learning that seeks to address these issues is presented. This approach, called multiagent HyperNEAT, represents teams as a pattern of policies rather than as individual agents. The main idea is that an agent's location within a canonical team layout (such as a soccer team at the start of a game) tends to dictate its role within that team, called the policy geometry. For example, as soccer positions move from goal to center they become more offensive and less defensive, a concept that is compactly represented as a pattern. The first major contribution of this dissertation is a new method for evolving neural network controllers called HyperNEAT, which forms the foundation of the second contribution and primary focus of this work, multiagent HyperNEAT. Multiagent learning in this dissertation is investigated in predator-prey, room-clearing, and patrol domains, providing a real-world context for the approach.
Interestingly, because teams in multiagent HyperNEAT are represented as patterns, they can scale up to an infinite number of multiagent policies that can be sampled from the policy geometry as needed. Thus the third contribution is a method for teams trained with multiagent HyperNEAT to dynamically scale their size without further learning. Fourth, the capabilities to both learn and scale in multiagent HyperNEAT are compared to the traditional multiagent SARSA(lambda) approach in a comprehensive study. The fifth contribution is a method for efficiently learning and encoding multiple policies for each agent on a team to facilitate learning in multi-task domains. Finally, because there is significant interest in practical applications of multiagent learning, multiagent HyperNEAT is tested in a real-world military patrolling application with actual Khepera III robots. The ultimate goal is to provide a new perspective on multiagent learning and to demonstrate the practical benefits of training heterogeneous, scalable multiagent teams through generative encoding.
-
Date Issued
-
2011
-
Identifier
-
CFE0003661, ucf:48812
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003661
-
-
Title
-
THE ACQUISITION OF LEXICAL KNOWLEDGE FROM THE WEB FOR ASPECTS OF SEMANTIC INTERPRETATION.
-
Creator
-
Schwartz, Hansen, Gomez, Fernando, University of Central Florida
-
Abstract / Description
-
This work investigates the effective acquisition of lexical knowledge from the Web to perform semantic interpretation. The Web provides an unprecedented amount of natural language from which to gain knowledge useful for semantic interpretation. The knowledge acquired is described as common sense knowledge, information one uses in his or her daily life to understand language and perception. Novel approaches are presented for both the acquisition of this knowledge and the use of the knowledge in semantic interpretation algorithms. The goal is to increase accuracy over other automatic semantic interpretation systems, and in turn enable stronger real-world applications such as machine translation, advanced Web search, sentiment analysis, and question answering. The major contributions of this dissertation consist of two methods of acquiring lexical knowledge from the Web, namely a database of common sense knowledge and Web selectors. The first method is a framework for acquiring a database of concept relationships. To acquire this knowledge, relationships between nouns are found on the Web and analyzed over WordNet using information theory, producing information about concepts rather than ambiguous words. For the second contribution, words called Web selectors are retrieved which take the place of an instance of a target word in its local context. The selectors allow the system to learn the types of concepts to which the sense of a target word should be similar. Web selectors are acquired dynamically as part of a semantic interpretation algorithm, while the relationships in the database are useful to stand-alone programs. A final contribution of this dissertation concerns a novel semantic similarity measure and an evaluation of similarity and relatedness measures on tasks of concept similarity. Such tasks are useful when applying acquired knowledge to semantic interpretation.
Applications to word sense disambiguation, an aspect of semantic interpretation, are used to evaluate the contributions. Disambiguation systems which utilize semantically annotated training data are considered supervised. The algorithms of this dissertation are considered minimally supervised; they do not require training data created by humans, though they may use human-created data sources. In the case of evaluating a database of common sense knowledge, integrating the knowledge into an existing minimally supervised disambiguation system significantly improved results -- a 20.5% error reduction. Similarly, the Web selectors disambiguation system, which acquires knowledge directly as part of the algorithm, achieved results comparable with top minimally supervised systems, an F-score of 80.2% on a standard noun disambiguation task. This work enables the study of many subsequent related tasks for improving semantic interpretation and its application to real-world technologies. Other aspects of semantic interpretation, such as semantic role labeling, could utilize the same methods presented here for word sense disambiguation. As the Web continues to grow, the capabilities of the systems in this dissertation are expected to increase. Although the Web selectors system achieves strong results, a study in this dissertation shows likely improvements from acquiring more data. Furthermore, the methods for acquiring a database of common sense knowledge could be applied in a more exhaustive fashion for other types of common sense knowledge. Finally, perhaps the greatest benefits from this work will come from enabling real-world technologies that utilize semantic interpretation.
-
Date Issued
-
2011
-
Identifier
-
CFE0003688, ucf:48805
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003688
-
-
Title
-
NUMERICAL COMPUTATIONS FOR PDE MODELS OF ROCKET EXHAUST FLOW IN SOIL.
-
Creator
-
Brennan, Brian, Moore, Brian, University of Central Florida
-
Abstract / Description
-
We study numerical methods for solving the nonlinear porous medium and Navier-Lame problems. When coupled together, these equations model the flow of exhaust through a porous medium (soil) and the effects that the pressure has on the soil in terms of spatial displacement. For the porous medium equation we use the Crank-Nicolson time stepping method with a spectral discretization in space. Since the Navier-Lame equation is a boundary value problem, it is solved using a finite element method, where the spatial domain is represented by a triangulation of discrete points. The two problems are coupled by using approximations of solutions to the porous medium equation to define the forcing term in the Navier-Lame equation. The spatial displacement solutions can be used to approximate the strain and stress imposed on the soil. An analysis of these physical properties shows whether or not the material ceases to act as an elastic material and instead behaves like a plastic, which will tell us if the soil has failed and a crater has formed. Analytical as well as experimental tests are used to find a good balance for solving the porous medium and Navier-Lame equations both accurately and efficiently.
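The Crank-Nicolson time stepping mentioned above averages the explicit and implicit stencils at each step, giving second-order accuracy in time. As a hedged illustration (using a simple 1-D finite-difference heat equation with zero boundaries rather than the thesis's spectral discretization of the nonlinear porous medium equation), one step can be sketched as:

```python
def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm for a tridiagonal system; a, b, c are the sub-,
    main, and super-diagonals and d is the right-hand side."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson_step(u, r):
    """One Crank-Nicolson step for u_t = u_xx with zero boundary values;
    r = dt/dx**2. Solves (I - r/2*D2) u_next = (I + r/2*D2) u, where D2
    is the standard second-difference stencil."""
    n = len(u)
    rhs = [u[i] + 0.5 * r * ((u[i - 1] if i > 0 else 0.0)
                             - 2.0 * u[i]
                             + (u[i + 1] if i < n - 1 else 0.0))
           for i in range(n)]
    return solve_tridiagonal([-0.5 * r] * n, [1.0 + r] * n,
                             [-0.5 * r] * n, rhs)

u = [0.0, 1.0, 0.0]                 # a heat spike on the interior grid
print(crank_nicolson_step(u, 0.5))  # the spike decays symmetrically
```

The implicit half of the average is what makes the scheme unconditionally stable for this linear model problem, at the cost of one tridiagonal solve per step.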
-
Date Issued
-
2010
-
Identifier
-
CFE0003217, ucf:48565
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003217
-
-
Title
-
SIMULATION FOR COMMERCIAL DRIVER LICENSE THIRD PARTY TESTER TESTING.
-
Creator
-
Truong, Henry, Lin, Kurt, University of Central Florida
-
Abstract / Description
-
The advance of technology is thought to help ease the myriad tasks that are usually involved in operating equipment. Training and testing in modern times have been replaced with simulation technologies that mimic actual live operations and testing. Many success stories of flight simulation come from military fighter aircraft and commercial pilot programs. The possibilities of safety in saving lives, economic incentive in reducing operational cost, and reducing the carbon footprint make simulation worth looking into. These considerations quickly boosted the transfer from live training operations to virtual and simulated ones, as was readily adopted in the history of flight training and testing. Although there has been a lack of application, the benefits of computer-based simulation as a modeling and simulation (M&S) tool can be applied to the commercial driver license (CDL) program for the trucking industry. Nevertheless, it is an uphill battle to convince CDL administrators to integrate modern technology into the CDL program instead of continuing the traditional daily business of manual testing. This is because the cost of trucking industry live operations is still relatively affordable; individuals and companies are reluctant to adopt a modeling and simulation driving or testing system. Fortunately, cost is not the only variable for training and testing administrators and their management to consider. There is a need to expand the use of technology to support live operations. The safety of the student, trainer, and tester should be taken into account. The availability of training or testing scenarios is also an influencing factor. Ultimately, the most important factor is driving safety on American roads. The relationship between accidents and driver license fraud has led the Federal Department of Transportation to seek to reduce fraud in third-party Commercial Driver License (CDL) administration.
Although it is not a perfect solution that can fix everything, the utilization of simulation technologies for driving assessment could help reduce fraud if applied correctly. The Department of Transportation (DOT) authorized the states' independent authority to administer the local CDL, including the use of the Third-Party Tester (TPT). As a result, some criminal activities prompted a Federal investigation to recommend changes and to fund the states to take action to stay in compliance with Federal regulation. This is the opportunity for state CDL administrators to explore the use of M&S to support their mission. Recall that the arguments for the use of M&S are safety in saving lives, economic incentive in reducing operational cost, and reducing the carbon footprint via simulation. This makes simulation a viable resource. This paper reports a research study of using computer-based testing modeling and simulation tools to replace or augment the current state examiner as a means of assessing CDL TPT proficiency in basic backing skills. This pilot study of the system has several aspects to address. The scenarios must be relevant to test the knowledge of the TPT by using scenarios closely comparable to the current manual testing method. The scenario-based simulation should incorporate randomness to provide a greater sense of reality. In addition, the reconfigurable built-in random-behavior scenarios give the administrator greater control of behaviors and allow the administrator to select among the random scenarios. Finally, the paper presents the data sampling from relevant participants of the CDL TPT and the methodology applied. The analysis of the data presented in this research study will be valuable for State and Federal CDL administrators to consider the pros and cons of applying or adding computer-based simulation to their current testing methodology.
-
Date Issued
-
2010
-
Identifier
-
CFE0003222, ucf:48577
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003222
-
-
Title
-
A SUSTAINABLE AUTONOMIC ARCHITECTURE FOR ORGANICALLY RECONFIGURABLE COMPUTING SYSTEMS.
-
Creator
-
Oreifej, Rashad, DeMara, Ronald, University of Central Florida
-
Abstract / Description
-
A Sustainable Autonomic Architecture for Organically Reconfigurable Computing Systems based on SRAM Field Programmable Gate Arrays (FPGAs) is proposed, modeled analytically, simulated, prototyped, and measured. Low-level organic elements are analyzed and designed to achieve novel self-monitoring, self-diagnosis, and self-repair organic properties. The prototype of a 2-D spatial gradient Sobel video edge-detection organic system use-case developed on an XC4VSX35 Xilinx Virtex-4 Video Starter Kit is presented. Experimental results demonstrate the applicability of the proposed architecture and provide the infrastructure to quantify the performance and overcome fault-handling limitations. Dynamic online autonomous functionality restoration after a malfunction or functionality shift due to changing requirements is achieved at a fine granularity by exploiting dynamic Partial Reconfiguration (PR) techniques. A Genetic Algorithm (GA)-based hardware/software platform for intrinsic evolvable hardware is designed and evaluated for digital circuit repair using a variety of well-accepted benchmarks. Dynamic bitstream compilation for enhanced mutation and crossover operators is achieved by directly manipulating the bitstream using a layered toolset. Experimental results on the edge-detector organic system prototype have shown complete organic online refurbishment after a hard fault. In contrast to previous toolsets requiring many milliseconds or seconds, an average of 0.47 microseconds is required to perform the genetic mutation, 4.2 microseconds to perform the single-point conventional crossover, 3.1 microseconds to perform Partial Match Crossover (PMX) as well as Order Crossover (OX), 2.8 microseconds to perform Cycle Crossover (CX), and 1.1 milliseconds for one input pattern intrinsic evaluation. These represent a performance advantage of three orders of magnitude over the JBITS software framework and more than seven orders of magnitude over the Xilinx design flow.
The Combinatorial Group Testing (CGT) technique was combined with the conventional GA in what is called a CGT-pruned GA to reduce repair time and increase system availability. Results have shown up to a 37.6% convergence advantage using the pruned technique. Lastly, a quantitative stochastic sustainability model for reparable systems is formulated to evaluate the sustainability of FPGA-based reparable systems. This model computes at design time the resources required for refurbishment to meet mission availability and lifetime requirements in a given fault-susceptible mission. By applying this model to MCNC benchmark circuits and the Sobel Edge-Detector in a realistic space mission use-case on a Xilinx Virtex-4 FPGA, we demonstrate a comprehensive model encompassing the inter-relationships between system sustainability and fault rates, utilized and redundant hardware resources, repair policy parameters, and decaying reparability.
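The Partial Match Crossover (PMX) operator timed in the experiments above keeps permutation-encoded individuals valid by remapping conflicting genes through the exchanged segment. A generic sketch follows (operating on Python lists of integers, not on FPGA bitstreams as in the dissertation):

```python
def pmx(parent1, parent2, cut1, cut2):
    """Partially Matched Crossover on permutation-encoded individuals:
    the slice [cut1:cut2] is inherited from parent1, and each remaining
    position takes parent2's gene, remapped through the matched segment
    until it no longer conflicts, so the child stays a permutation."""
    child = [None] * len(parent1)
    child[cut1:cut2] = parent1[cut1:cut2]
    segment = set(parent1[cut1:cut2])
    mapping = {parent1[i]: parent2[i] for i in range(cut1, cut2)}
    for i in list(range(cut1)) + list(range(cut2, len(parent2))):
        gene = parent2[i]
        while gene in segment:         # resolve conflicts via the mapping
            gene = mapping[gene]
        child[i] = gene
    return child

p1 = [1, 2, 3, 4, 5, 6, 7, 8]
p2 = [3, 7, 5, 1, 6, 8, 2, 4]
print(pmx(p1, p2, 3, 6))  # [3, 7, 8, 4, 5, 6, 2, 1]
```

Unlike single-point crossover, PMX never duplicates or drops a gene, which is why it (and OX and CX) is preferred for permutation encodings such as routing or placement problems.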
-
Date Issued
-
2011
-
Identifier
-
CFE0003969, ucf:48661
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003969
-
-
Title
-
An intelligent editor for natural language processing of unrestricted text.
-
Creator
-
Glinos, Demetrios George, Gomez, Fernando, Arts and Sciences
-
Abstract / Description
-
University of Central Florida College of Arts and Sciences Thesis; The understanding of natural language by computational methods has been a continuing and elusive problem in artificial intelligence. In recent years there has been a resurgence in natural language processing research. Much of this work has been on empirical or corpus-based methods, which use a data-driven approach to train systems on large amounts of real language data. Using corpus-based methods, the performance of part-of-speech (POS) taggers, which assign to the individual words of a sentence their appropriate part-of-speech category (e.g., noun, verb, preposition), now rivals human performance levels, achieving accuracies exceeding 95%. Such taggers have proved useful as preprocessors for tasks such as parsing, speech synthesis, and information retrieval. Parsing remains, however, a difficult problem, even with the benefit of POS tagging. Moreover, as sentence length increases, there is a corresponding combinatorial explosion of alternative possible parses. Consider the following sentence from a New York Times online article: After Salinas was arrested for murder in 1995 and lawyers for the bank had begun monitoring his accounts, his personal banker in New York quietly advised Salinas' wife to move the money elsewhere, apparently without the consent of the legal department. To facilitate parsing and other tasks, we would like to decompose this sentence into the following three shorter sentences which, taken together, convey the same meaning as the original: 1. Salinas was arrested for murder in 1995. 2. Lawyers for the bank had begun monitoring his accounts. 3. His personal banker in New York quietly advised Salinas' wife to move the money elsewhere, apparently without the consent of the legal department. This study investigates the development of heuristics for decomposing such long sentences into sets of shorter sentences without affecting the meaning of the original sentences.
Without parsing or semantic analysis, heuristic rules were developed based on: (1) the output of a POS tagger (Brill's tagger); (2) the punctuation contained in the input sentences; and (3) the words themselves. The heuristic algorithms were implemented in an intelligent editor program which first augmented the POS tags and assigned tags to punctuation, and then tested the rules against a corpus of 25 New York Times online articles containing approximately 1,200 sentences and over 32,000 words, with good results. Recommendations are made for improving the algorithms and for continuing this line of research.
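One splitting heuristic of the kind the thesis describes can be sketched from POS tags and punctuation alone. The rule below (break at a comma followed by a coordinating conjunction when both halves contain a verb) is an invented toy example, not one of the thesis's actual rules; the `tagged` input stands in for Brill-tagger output:

```python
def split_compound(tagged):
    """Break a tagged sentence at a comma followed by a coordinating
    conjunction (tag CC) whenever both halves contain a verb (a tag
    beginning with VB); otherwise return the sentence unchanged."""
    def has_verb(segment):
        return any(tag.startswith("VB") for _, tag in segment)
    for i in range(1, len(tagged) - 1):
        if tagged[i - 1][0] == "," and tagged[i][1] == "CC":
            left, right = tagged[:i - 1], tagged[i + 1:]
            if has_verb(left) and has_verb(right):
                return [left, right]
    return [tagged]

sentence = [("Salinas", "NNP"), ("was", "VBD"), ("arrested", "VBN"),
            (",", ","), ("and", "CC"),
            ("lawyers", "NNS"), ("monitored", "VBD"), ("accounts", "NNS")]
for part in split_compound(sentence):
    print(" ".join(word for word, _ in part))
```

Requiring a verb on both sides prevents splitting coordinated noun phrases ("lawyers, and bankers"), which is the kind of precaution such tag-and-punctuation heuristics rely on in place of a full parse.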
-
Date Issued
-
1999
-
Identifier
-
CFR0008181, ucf:53055
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFR0008181
-
-
Title
-
Enterface : a novella.
-
Creator
-
McLeod, Hubert Calip, Rushin, Pat, Arts and Sciences
-
Abstract / Description
-
University of Central Florida College of Arts and Sciences Thesis; A computer screen places each of us in an interface, and virtual reality provides a totally simulated environment, a virtual world that we can enter. Enterface is a novella that examines the question first posed by Michael Heim: How far can we enter cyberspace and still remain human? It also explores the power and the limitation of language and the role of stories in shaping reality in human life. Its themes are death, technology, ethics, and love. It is informed by Wittgensteinian philosophy, Norse mythology, and the "metaphysics of virtual reality." The plot involves Moses Mackinow, a former Air Force officer and entrepreneur, who decides there should be a way to simply live forever. He hits upon the idea that life could be digitized, and a civilization, a world of complete, sentient humans, could be created in cyberspace--a world he could enter upon his death and continue to live in. A variety of technologies are available to digitize the physical human (x-rays, CT scans, magnetic resonance images, graphic images, etc.), but the big problem is how to synthesize his human heart. Moses decides that the stories of his life are the keys to creating the "rag and bone shop" of his eternal heart. Getting the stories "right" is critical to the prospect of digitizing life and is a major focus of the novella's action. The novella traces the reduction of Moses as a human being as he pursues his obsession, compromising one principle after another. Everything in the environment of the novella reflects this reduction. Everything becomes less than it was, a glimpse of humanity reduced to bits and bytes, floating 1's and 0's. Enterface is a work at war with itself.
-
Date Issued
-
1999
-
Identifier
-
CFR0011964, ucf:53091
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFR0011964