Title
-
Computational Methods for Comparative Non-coding RNA Analysis: From Structural Motif Identification to Genome-wide Functional Classification.
-
Creator
-
Zhong, Cuncong, Zhang, Shaojie, Hu, Haiyan, Hua, Kien, Li, Xiaoman, University of Central Florida
-
Abstract / Description
-
Non-coding RNA (ncRNA) plays critical functional roles, such as regulation, catalysis, and modification, in biological systems. Non-coding RNAs exert their functions based on their specific structures, which makes the thorough understanding of their structures a key step towards their complete functional annotation. In this dissertation, we will cover a suite of computational methods for the comparison of ncRNA secondary and 3D structures, and their applications to ncRNA molecular structural annotation and their genome-wide functional survey. Specifically, we have contributed the following five computational methods. First, we have developed an alignment algorithm to compare RNA structural motifs, which are recurrent RNA 3D structural fragments. Second, we have improved upon the previous alignment algorithm by incorporating base-stacking information and devised a new branch-and-bound algorithm. Third, we have developed a clustering pipeline for RNA structural motif classification using the above alignment methods. Fourth, we have generalized the clustering pipeline to a genome-wide analysis of RNA secondary structures. Finally, we have devised an ultra-fast alignment algorithm for RNA secondary structure by using the sparse dynamic programming technique. A large number of novel RNA structural motif instances and ncRNA elements have been discovered throughout these studies. We anticipate that these computational methods will significantly facilitate the analysis of ncRNA structures in the future.
-
Date Issued
-
2013
-
Identifier
-
CFE0004966, ucf:49580
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004966
-
-
Title
-
The Effect of Civics-Based Video Games on Middle School Students' Civic Engagement.
-
Creator
-
Pagnotti, John, Russell, William, Hewitt, Randall, Hopp, Carolyn, Dobson, Leonard, University of Central Florida
-
Abstract / Description
-
Democratic theorists argue that democratic institutions thrive when the citizens of the society robustly participate in governance (Galston, 2004; Barber, 2001). A traditional indicator of democratic participation is voting in elections or referendums. However, democratic apologists posit that humans need to be trained in democratic processes in order to be democratic citizens (Dewey, 1916; Gutmann, 1990; Sehr, 1997; Goodlad, 2001). Citizens need to know not only the protocol of participation, but they also need to be trained in the processes of mind (Dewey, 1916; 1927). Educational systems in this country have been the traditional place where democratic training has been vested (Spring, 2001). It seems, though, that the methods that educators are using to train young people fail to meet this challenge, as voting rates among the youngest citizens (under 30) have never been higher than slightly more than half of eligible voters in the age group. To remedy this situation, Congress and several private civic-education organizations have called for changing curricular approaches to engage more youth. One such method that may hold promise is the use of video game technology. The current generation of youth has grown up in a digital world where they have been labeled "Digital Natives" (Prensky, 2001a). They are "tech savvy" and comfortable with their lives being integrated with various forms of digital technology. Significantly, industry research suggests that over 90% of "Digital Natives" have played a video game in the last 30 days, and business is booming to the level that video games pulled in more money than the movie industry did in 2008 (ESA, 2009). As early as the 1970s, educational researchers have looked at the use of video game technology to engage student learning; however, this research has been limited at best. More recently, educational scholars such as James Gee (2003; 2007) and Kurt Squire (2002; 2003; 2006) have sought to make the academic conversation more mature with regard to using video games as a classroom supplement. This study continues that conversation by using quantitative methods to investigate whether or not different groups of middle school students self-report a greater propensity to be civically engaged as a result of civic-themed video gameplay. The investigator collected data from middle school students who were given access to civic-themed video games to see if there were statistically significant differences in self-reported civic-engagement scores as a result of gameplay. This investigation was conducted at a large, urban middle school in the Southeast region of the United States.
-
Date Issued
-
2012
-
Identifier
-
CFE0004422, ucf:49379
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004422
-
-
Title
-
COMPUTER BASED INTERVENTION AND ITS EFFECT ON BENCHMARK TEST SCORES OF ENGLISH LANGUAGE LEARNERS.
-
Creator
-
Berrio, Gabriel, Martin, Suzanne, University of Central Florida
-
Abstract / Description
-
The Florida Department of Education's (FLDOE) Adequate Yearly Progress (AYP) Report (2007) listed and defined students who are in the process of learning English as a second language as English Language Learners (ELL). The graduation rate of English Language Learners in Florida is consistently smaller than the graduation rate of the total population of students (Echevarria, Short and Powers, 2006), in part due to the requirement for students to pass the FCAT in order to graduate. ELL students face the challenge of having to learn a different language, learn the subject area content in that language, and oftentimes pass a standardized test in order to graduate. In Florida districts, ELL is categorized as a subgroup often not meeting adequate yearly progress in Reading (Florida Department of Education, 2007). This study measured the effectiveness of a district-approved computer-based intervention in increasing student achievement for English Language Learners as identified by the Florida Department of Education (US DOE, 2009).
-
Date Issued
-
2010
-
Identifier
-
CFE0003445, ucf:48403
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003445
-
-
Title
-
EXAMINING ENGINEERING & TECHNOLOGY STUDENTS' ACCEPTANCE OF NETWORK VIRTUALIZATION TECHNOLOGY USING THE TECHNOLOGY ACCEPTANCE MODEL.
-
Creator
-
Yousif, Wael K. Yousif, Boote, David, University of Central Florida
-
Abstract / Description
-
This causal and correlational study was designed to extend the Technology Acceptance Model (TAM) and to test its applicability to Valencia Community College (VCC) Engineering and Technology students as the target user group when investigating the factors influencing their decision to adopt and to utilize VMware as the target technology. In addition to the primary three indigenous factors: perceived ease of use, perceived usefulness, and intention toward utilization, the model was also extended with enjoyment, external control, and computer self-efficacy as antecedents to perceived ease of use. In an attempt to further increase the explanatory power of the model, the Task-Technology Fit (TTF) constructs were included as antecedents to perceived usefulness. The model was also expanded with subjective norms and voluntariness to assess the degree to which social influences affect students' decision for adoption and utilization. This study was conducted during the fall term of 2009, using 11 instruments: (1) VMware Tools Functions Instrument; (2) Computer Networking Tasks Characteristics Instrument; (3) Perceived Usefulness Instrument; (4) Voluntariness Instrument; (5) Subjective Norms Instrument; (6) Perceived Enjoyment Instrument; (7) Computer Self-Efficacy Instrument; (8) Perception of External Control Instrument; (9) Perceived Ease of Use Instrument; (10) Intention Instrument; and (11) a Utilization Instrument. The 11 instruments collectively contained 58 items. Additionally, a demographics instrument of six items was included to investigate the influence of age, prior experience with the technology, prior experience in computer networking, academic enrollment status, and employment status on student intentions and behavior with regard to VMware as a network virtualization technology. Data were analyzed using path analysis, regressions, and univariate analysis of variance in SPSS and AMOS for Windows. The results suggest that perceived ease of use was found to be the strongest determinant of student intention. The analysis also suggested that external control, measuring the facilitating conditions (knowledge, resources, etc.) necessary for adoption, was the highest predictor of perceived ease of use. Consistent with previous studies, perceived ease of use was found to be the strongest predictor of perceived usefulness, followed by subjective norms, as students continued to use the technology. Even though the integration of the task-technology fit construct was not helpful in explaining the variance in student perceived usefulness of the target technology, it was statistically significant in predicting student perception of ease of use. The study concluded with recommendations to investigate other factors (such as service quality and ease of implementation) that might contribute to explaining the variance in perceived ease of use as the primary driving force in influencing student decision for adoption. A recommendation was also made to modify the task-technology fit construct instruments to improve the articulation and the specificity of the task. The need for further examination of the influence of the instructor on student decision for adoption of a target technology was also emphasized.
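The path-analytic portion of such a study can be illustrated with a small sketch. The example below is not the author's SPSS/AMOS model; it uses synthetic, hypothetical survey scores and ordinary least squares on standardized variables to estimate path weights for a minimal chain (perceived ease of use to perceived usefulness to intention).

```python
import numpy as np

# Hypothetical, synthetic survey scores; not the study's data.
rng = np.random.default_rng(0)
n = 200
peou = rng.normal(size=n)                        # perceived ease of use
pu = 0.6 * peou + rng.normal(scale=0.8, size=n)  # perceived usefulness
intent = 0.5 * pu + 0.3 * peou + rng.normal(scale=0.8, size=n)

def standardized_coefs(y, *predictors):
    """OLS on z-scored variables, so the coefficients act as path weights."""
    z = lambda v: (v - v.mean()) / v.std()
    X = np.column_stack([z(p) for p in predictors])
    beta, *_ = np.linalg.lstsq(X, z(y), rcond=None)
    return beta

print("PEOU -> PU:      ", standardized_coefs(pu, peou))
print("PU, PEOU -> INT: ", standardized_coefs(intent, pu, peou))
```

In a full path analysis the same idea is applied to every hypothesized link in the extended TAM, and model fit is assessed jointly rather than one regression at a time.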
-
Date Issued
-
2010
-
Identifier
-
CFE0003071, ucf:48313
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003071
-
-
Title
-
THE EFFECTS OF MULTIMODAL FEEDBACK AND AGE ON A MOUSE POINTING TASK.
-
Creator
-
Oakley, Brian, Smither, Janan, University of Central Florida
-
Abstract / Description
-
As the beneficial aspects of computers become more apparent to the elderly population and the baby boom generation moves into later adulthood, there is opportunity to increase performance for older computer users. Performance decrements that occur naturally to the motor skills of older adults have been shown to have a negative effect on interactions with indirect-manipulation devices, such as computer mice (Murata & Iwase, 2005). Although a mouse will always have the traits of an indirect-manipulation interaction, the inclusion of additional sensory feedback likely increases the saliency of the task to the real world, resulting in increases in performance (Biocca et al., 2002). There is strong evidence for a bimodal advantage that is present in people of all ages; additionally, there is also very strong evidence that older adults are a group that uses extra sensory information to increase their everyday interactions with the environment (Cienkowski & Carney, 2002; Thompson & Malloy, 2004). This study examined the effects of having multimodal feedback (i.e., visual cues, auditory cues, and tactile cues) present during a target acquisition mouse task for young, middle-aged, and older experienced computer users. This research examined the performance and subjective attitudes when performing a mouse-based pointing task when different combinations of the modalities were present. The inclusion of audio or tactile cues during the task had the largest positive effect on performance, resulting in significantly quicker task completion for all of the computer users. The presence of audio or tactile cues increased performance for all of the age groups; however, the performance of the older adults tended to be positively influenced more than the other age groups due to the inclusion of these modalities. Additionally, the presence of visual cues did not have as strong of an effect on overall performance in comparison to the other modalities. Although the presence of audio and tactile feedback both increased performance, there was evidence of a speed-accuracy trade-off. Both the audio and tactile conditions resulted in a significantly higher number of misses in comparison to having no additional cues or visual cues present. So, while the presence of audio and tactile feedback improved the speed at which the task could be completed, this occurred due to a sacrifice in accuracy. Additionally, this study shows strong evidence that audio and tactile cues are undesirable to computer users. The findings of this research are important to consider prior to adding extra sensory modalities to any type of user interface. The idea that additional feedback is always better may not always hold true if the feedback is found to be distracting, annoying, or negatively affects accuracy, as was found in this study with audio and tactile cues.
-
Date Issued
-
2009
-
Identifier
-
CFE0002692, ucf:48188
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002692
-
-
Title
-
MULTI-TOUCH FOR GENERAL-PURPOSE COMPUTING: AN EXAMINATION OF TEXT ENTRY.
-
Creator
-
Varcholik, Paul, Hughes, Charles, University of Central Florida
-
Abstract / Description
-
In recent years, multi-touch has been heralded as a revolution in human-computer interaction. Multi-touch provides features such as gestural interaction, tangible interfaces, pen-based computing, and interface customization, features embraced by an increasingly tech-savvy public. However, multi-touch platforms have not been adopted as "everyday" computer interaction devices; that is, multi-touch has not been applied to general-purpose computing. The questions this thesis seeks to address are: Will the general public adopt these systems as their chief interaction paradigm? Can multi-touch provide such a compelling platform that it displaces the desktop mouse and keyboard? Is multi-touch truly the next revolution in human-computer interaction? As a first step toward answering these questions, we observe that general-purpose computing relies on text input, and ask: "Can multi-touch, without a text entry peripheral, provide a platform for efficient text entry? And, by extension, is such a platform viable for general-purpose computing?" We investigate these questions through four user studies that collected objective and subjective data for text entry and word processing tasks. The first of these studies establishes a benchmark for text entry performance on a multi-touch platform, across a variety of input modes. The second study attempts to improve this performance by examining an alternate input technique. The third and fourth studies include mouse-style interaction for formatting rich text on a multi-touch platform, in the context of a word processing task. These studies establish a foundation for future efforts in general-purpose computing on a multi-touch platform. Furthermore, this work details deficiencies in tactile feedback with modern multi-touch platforms, and describes an exploration of audible feedback. Finally, the thesis conveys a vision for a general-purpose multi-touch platform, its design and rationale.
-
Date Issued
-
2011
-
Identifier
-
CFE0003711, ucf:48798
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003711
-
-
Title
-
Motor imagery classification using sparse representation of EEG signals.
-
Creator
-
Saidi, Pouria, Atia, George, Vosoughi, Azadeh, Berman, Steven, University of Central Florida
-
Abstract / Description
-
The human brain is unquestionably the most complex organ of the body, as it controls and processes its movement and senses. A healthy brain is able to generate responses to the signals it receives, and transmit messages to the body. Some neural disorders can impair the communication between the brain and the body, preventing the transmission of these messages. Brain Computer Interfaces (BCIs) are devices that hold immense potential to assist patients with such disorders by analyzing brain signals, translating and classifying various brain responses, and relaying them to external devices and potentially back to the body. Classifying motor imagery brain signals, where the signals are obtained based on imagined movement of the limbs, is a major, yet very challenging, step in developing Brain Computer Interfaces (BCIs). Of primary importance is to use less data and computationally efficient algorithms to support real-time BCI. To this end, in this thesis we explore and develop algorithms that exploit the sparse characteristics of EEGs to classify these signals. Different feature vectors are extracted from EEG trials recorded by electrodes placed on the scalp. In this thesis, features from a small spatial region are approximated by a sparse linear combination of a few atoms from a multi-class dictionary constructed from the features of the EEG training signals for each class. This is used to classify the signals based on the pattern of their sparse representation using a minimum-residual decision rule. We first attempt to use all the available electrodes to verify the effectiveness of the proposed methods. To support real-time BCI, the electrodes are reduced to those near the sensorimotor cortex, which are believed to be crucial for motor preparation and imagination. In a second approach, we try to incorporate the effect of spatial correlation across the neighboring electrodes near the sensorimotor cortex. To this end, instead of considering one feature vector at a time, we use a collection of feature vectors simultaneously to find the joint sparse representation of these vectors. Although we were not able to see much improvement with respect to the first approach, we envision that such improvements could be achieved using more refined models that can be the subject of future work. The performance of the proposed approaches is evaluated using different features, including wavelet coefficients, energy of the signals in different frequency sub-bands, and also entropy of the signals. The results obtained from real data demonstrate that the combination of energy and entropy features enables efficient classification of motor imagery EEG trials related to hand and foot movements. This underscores the relevance of the energies and their distribution in different frequency sub-bands for classifying movement-specific EEG patterns, in agreement with the existence of different levels within the alpha band. The proposed approach is also shown to outperform the state-of-the-art algorithm that uses feature vectors obtained from energies of multiple spatial projections.
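The dictionary-based, minimum-residual classification described above can be illustrated with a small, hedged sketch. The code below is not the thesis implementation; it assumes hypothetical feature vectors and uses a simple greedy orthogonal matching pursuit to obtain a sparse code over a multi-class dictionary, then assigns the class whose training atoms best reconstruct the test feature.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: k-sparse code of y over dictionary D."""
    residual, support = y.copy(), []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coef[support] = sol
    return coef

def classify(D, labels, y, k=5):
    """Minimum-residual rule: reconstruct y using each class's atoms only."""
    x = omp(D, y, k)
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)
        residuals[c] = np.linalg.norm(y - D @ xc)
    return min(residuals, key=residuals.get)

# Hypothetical training features: columns of D are atoms, one class label per column.
rng = np.random.default_rng(1)
D = rng.normal(size=(30, 40))                 # 40 training feature vectors
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
labels = np.repeat([0, 1], 20)                # two motor-imagery classes
test = D[:, 3] + 0.05 * rng.normal(size=30)   # noisy copy of a class-0 atom
print("predicted class:", classify(D, labels, test))
```

The residual comparison at the end is the minimum-residual decision rule: the class whose atoms reconstruct the test feature most faithfully wins.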
-
Date Issued
-
2015
-
Identifier
-
CFE0005882, ucf:50884
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005882
-
-
Title
-
Research on High-performance and Scalable Data Access in Parallel Big Data Computing.
-
Creator
-
Yin, Jiangling, Wang, Jun, Jin, Yier, Lin, Mingjie, Qi, GuoJun, Wang, Chung-Ching, University of Central Florida
-
Abstract / Description
-
To facilitate big data processing, many dedicated data-intensive storage systems such as Google File System (GFS), Hadoop Distributed File System (HDFS) and Quantcast File System (QFS) have been developed. Currently, the Hadoop Distributed File System (HDFS) [20] is the state-of-the-art and most popular open-source distributed file system for big data processing. It is widely deployed as the bedrock for many big data processing systems/frameworks, such as the script-based Pig system, MPI-based parallel programs, graph processing systems, and Scala/Java-based Spark frameworks. These systems/applications employ parallel processes/executors to speed up data processing within scale-out clusters. Job or task schedulers in parallel big data applications such as mpiBLAST and ParaView can maximize the usage of computing resources such as memory and CPU by tracking resource consumption/availability for task assignment. However, since these schedulers do not take the distributed I/O resources and global data distribution into consideration, the data requests from parallel processes/executors in big data processing will unfortunately be served in an imbalanced fashion on the distributed storage servers. These imbalanced access patterns among storage nodes arise because (a) unlike conventional parallel file systems, which use striping policies to evenly distribute data among storage nodes, data-intensive file systems such as HDFS store each data unit, referred to as a chunk or block file, with several copies based on a relatively random policy, which can result in an uneven data distribution among storage nodes; and (b) based on the data retrieval policy in HDFS, the more data a storage node contains, the higher the probability that the storage node could be selected to serve the data. Therefore, on the nodes serving multiple chunk files, the data requests from different processes/executors will compete for shared resources such as hard disk head and network bandwidth. Because of this, the makespan of the entire program could be significantly prolonged and the overall I/O performance will degrade. The first part of my dissertation seeks to address aspects of these problems by creating an I/O middleware system and designing matching-based algorithms to optimize data access in parallel big data processing. To address the problem of remote data movement, we develop an I/O middleware system, called SLAM, which allows MPI-based analysis and visualization programs to benefit from locality reads, i.e., each MPI process can access its required data from a local or nearby storage node. This can greatly improve execution performance by reducing the amount of data movement over the network. Furthermore, to address the problem of imbalanced data access, we propose a method called Opass, which models the data read requests that are issued by parallel applications to cluster nodes as a graph data structure where edge weights encode the demands of load capacity. We then employ matching-based algorithms to map processes to data to achieve data access in a balanced fashion. The final part of my dissertation focuses on optimizing sub-dataset analyses in parallel big data processing. Our proposed methods can benefit different analysis applications with various computational requirements, and the experiments on different cluster testbeds show their applicability and scalability.
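The process-to-data mapping that Opass formalizes as a matching problem can be illustrated with a small, hedged sketch. The example below is not the dissertation's algorithm; it assumes a hypothetical replica table and uses a simple greedy stand-in for the matching step, assigning each process's read to the replica-hosting node that currently carries the least load.

```python
from collections import defaultdict

# Hypothetical replica placement: block id -> nodes holding a copy (HDFS-style, 3 replicas).
replicas = {
    "b0": ["n1", "n2", "n3"],
    "b1": ["n1", "n2", "n4"],
    "b2": ["n2", "n3", "n4"],
    "b3": ["n1", "n3", "n4"],
}
# Each parallel process wants to read one block.
requests = {"p0": "b0", "p1": "b1", "p2": "b2", "p3": "b3", "p4": "b1", "p5": "b2"}

load = defaultdict(int)
assignment = {}
for proc, block in sorted(requests.items()):
    # Serve the request from the least-loaded node that actually hosts a replica.
    node = min(replicas[block], key=lambda n: load[n])
    assignment[proc] = node
    load[node] += 1

print(assignment)   # every read stays on a node that holds the block
print(dict(load))   # per-node loads stay close to even across n1..n4
```

The dissertation's matching-based algorithms solve this assignment globally rather than one request at a time; the greedy loop here only illustrates the balancing goal.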
-
Date Issued
-
2015
-
Identifier
-
CFE0006021, ucf:51008
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006021
-
-
Title
-
Development of 3D Vision Testbed for Shape Memory Polymer Structure Applications.
-
Creator
-
Thompson, Kenneth, Xu, Yunjun, Gou, Jihua, Lin, Kuo-Chi, University of Central Florida
-
Abstract / Description
-
As applications for shape memory polymers (SMPs) become more advanced, it is necessary to have the ability to monitor both the actuation and thermal properties of structures made of such materials. In this paper, a method of using three stereo pairs of webcams and a single thermal camera is studied for the purposes of both tracking the three-dimensional motion of shape memory polymers, as well as the temperature of points of interest within the SMP structure. The method used includes a stereo camera calibration with integrated local minimum tracking algorithms to locate points of interest on the material and measure their temperature through interpolation techniques. The importance of the proposed method is that it provides a means to cost-effectively monitor the surface temperature of a shape memory polymer structure without having to place intrusive sensors on the samples, which would limit the performance of the shape memory effect. The ability to monitor the surface temperatures of an SMP structure allows for more complex configurations to be created while increasing the performance and durability of the material. Additionally, as compared to the previous version, both the functionalities of the testbed and the user interface have been significantly improved.
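The interpolation step mentioned above, reading a temperature at a tracked, non-integer pixel location in the thermal image, can be sketched as follows. This is a hedged illustration with a hypothetical thermal frame, not the testbed's actual code; it performs ordinary bilinear interpolation between the four surrounding thermal pixels.

```python
import numpy as np

def bilinear_temp(thermal, x, y):
    """Temperature at sub-pixel location (x, y) of a 2-D thermal frame."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    t00 = thermal[y0, x0]
    t10 = thermal[y0, x0 + 1]
    t01 = thermal[y0 + 1, x0]
    t11 = thermal[y0 + 1, x0 + 1]
    return (t00 * (1 - dx) * (1 - dy) + t10 * dx * (1 - dy)
            + t01 * (1 - dx) * dy + t11 * dx * dy)

# Hypothetical 4x4 thermal frame (degrees C) and a point tracked by the stereo cameras.
frame = np.array([[25.0, 26.0, 27.0, 27.5],
                  [26.0, 30.0, 34.0, 35.0],
                  [26.5, 31.0, 36.0, 38.0],
                  [27.0, 32.0, 37.0, 40.0]])
print(round(bilinear_temp(frame, 1.4, 2.6), 2))  # interpolated reading at (1.4, 2.6)
```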
-
Date Issued
-
2015
-
Identifier
-
CFE0005893, ucf:50860
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005893
-
-
Title
-
Towards High-Efficiency Data Management In the Next-Generation Persistent Memory System.
-
Creator
-
Chen, Xunchao, Wang, Jun, Fan, Deliang, Lin, Mingjie, Ewetz, Rickard, Zhang, Shaojie, University of Central Florida
-
Abstract / Description
-
For the sake of higher cell density while achieving near-zero standby power, recent research progress in Magnetic Tunneling Junction (MTJ) devices has leveraged Multi-Level Cell (MLC) configurations of Spin-Transfer Torque Random Access Memory (STT-RAM). However, in order to mitigate the write disturbance in an MLC strategy, data stored in the soft bit must be restored back immediately after the hard bit switching is completed. Furthermore, as a result of MTJ feature size scaling, the soft bit can be expected to become disturbed by the read sensing current, thus requiring an immediate restore operation to ensure data reliability. In this paper, we design and analyze a novel Adaptive Restore Scheme for Write Disturbance (ARS-WD) and Read Disturbance (ARS-RD), respectively. ARS-WD alleviates restoration overhead by intentionally overwriting soft bit lines which are less likely to be read. ARS-RD, on the other hand, aggregates the potential writes and restores the soft bit line at the time of its eviction from the higher level cache. Both of these schemes are based on a lightweight forecasting approach for the future read behavior of the cache block. Our experimental results show a substantial reduction in soft bit line restore operations. Moreover, ARS promotes advantages of MLC to provide a preferable L2 design alternative in terms of energy, area and latency product compared to SLC STT-RAM alternatives. Whereas the popular Cell Split Mapping (CSM) for MLC STT-RAM leverages the inter-block nonuniform access frequency, the intra-block data access features remain untapped in the MLC design. Aiming to minimize the energy-hungry write requests to the Hard-Bit Line (HBL) and maximize the dynamic range in the advantageous Soft-Bit Line (SBL), a hybrid mapping strategy for MLC STT-RAM cache (Double-S) is advocated in the paper. Double-S couples the contemporary Cell-Split-Mapping with the novel Word-Split-Mapping (WSM). A sparse cache block detector and a read-depth-based data allocation/migration policy are proposed to release the full potential of Double-S.
-
Date Issued
-
2017
-
Identifier
-
CFE0006865, ucf:51751
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006865
-
-
Title
-
Modeling of flow generated sound in a constricted duct at low Mach number.
-
Creator
-
Thibbotuwawa Gamage, Peshala, Mansy, Hansen, Kassab, Alain, Bhattacharya, Samik, University of Central Florida
-
Abstract / Description
-
Modelling flow and acoustics in a constricted duct at low Mach numbers is important for investigating many physiological phenomena such as phonation, generation of arterial murmurs, and pulmonary conditions involving airway obstruction. The objective of this study is to validate computational fluid dynamics (CFD) and computational aero-acoustics (CAA) simulations in a constricted tube at low Mach numbers. Different turbulence models were employed to simulate the flow field. Models included Reynolds-Averaged Navier-Stokes (RANS), Detached Eddy Simulation (DES), and Large Eddy Simulation (LES). The models were validated by comparing study results with laser Doppler anemometry (LDA) velocity measurements. The comparison showed that experimental data agreed best with the LES model results. Although the RANS Reynolds stress transport (RST) model showed good agreement with mean velocity measurements, it was unable to capture velocity fluctuations. The RANS shear stress transport (SST) k-ω model and DES models were unable to predict the location of the highly fluctuating flow region accurately. CAA simulation was performed in parallel with LES using an Acoustic Perturbation Equation (APE) based hybrid CAA method. CAA simulation results agreed well with measured wall sound pressure spectra. The APE acoustic sources were found in the jet core breakdown region downstream of the constriction, which was also characterized by high flow fluctuations. Proper Orthogonal Decomposition (POD) is used to study the coherent flow structures at the different frequencies corresponding to the peaks of the measured sound pressure spectra. The study results will help enhance our understanding of sound generation mechanisms in constricted tubes, including biomedical applications.
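The Proper Orthogonal Decomposition used above to extract coherent structures can be sketched with the standard snapshot method. The example below is a hedged illustration on a hypothetical snapshot matrix, not the study's solver output: each column is one flow-field snapshot, and the left singular vectors give the spatial POD modes ranked by energy.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots = 500, 60

# Hypothetical velocity snapshots: two coherent structures plus noise.
x = np.linspace(0.0, 1.0, n_points)
t = np.linspace(0.0, 1.0, n_snapshots)
field = (np.outer(np.sin(2 * np.pi * x), np.cos(10 * np.pi * t))
         + 0.3 * np.outer(np.sin(6 * np.pi * x), np.sin(24 * np.pi * t))
         + 0.05 * rng.normal(size=(n_points, n_snapshots)))

# Snapshot POD: subtract the temporal mean, then take the SVD of the fluctuations.
fluct = field - field.mean(axis=1, keepdims=True)
modes, sing_vals, temporal = np.linalg.svd(fluct, full_matrices=False)

energy = sing_vals**2 / np.sum(sing_vals**2)
print("energy captured by first 3 modes:", np.round(energy[:3], 3))
```

The singular values rank the energy content of each spatial mode, and the rows of the right factor carry the corresponding temporal coefficients whose spectra can be compared against the measured sound pressure peaks.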
-
Date Issued
-
2017
-
Identifier
-
CFE0006920, ucf:51696
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006920
-
-
Title
-
Computerized Evaluation of Microsurgery Skills Training.
-
Creator
-
Jotwani, Payal, Foroosh, Hassan, Hughes, Charles, Hua, Kien, University of Central Florida
-
Abstract / Description
-
The style of imparting medical training has evolved over the years. The traditional methods of teaching and practicing basic surgical skills under the apprenticeship model no longer occupy the first place in modern, technically demanding, advanced surgical disciplines like neurosurgery. Furthermore, the legal and ethical concerns for patient safety, as well as cost-effectiveness, have forced neurosurgeons to master the necessary microsurgical techniques to accomplish desired results. This has led to increased emphasis on assessment of the clinical and surgical techniques of neurosurgeons. However, the subjective assessment of microsurgical techniques like micro-suturing under the apprenticeship model cannot be completely unbiased. A few initiatives using computer-based techniques have been made to introduce objective evaluation of surgical skills. This thesis presents a novel approach involving computerized evaluation of different components of micro-suturing techniques, to eliminate the bias of subjective assessment. The work involved acquisition of cine clips of micro-suturing activity on synthetic material. Image processing and computer vision based techniques were then applied to these videos to assess different characteristics of micro-suturing, viz. speed, dexterity, and effectualness. In parallel, subjective grading was done by a senior neurosurgeon. Further correlation and comparative study of both assessments was done to analyze the efficacy of objective and subjective evaluation.
-
Date Issued
-
2015
-
Identifier
-
CFE0006221, ucf:51056
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006221
-
-
Title
-
Visual Geo-Localization and Location-Aware Image Understanding.
-
Creator
-
Roshan Zamir, Amir, Shah, Mubarak, Jha, Sumit, Sukthankar, Rahul, Lin, Mingjie, Fathpour, Sasan, University of Central Florida
-
Abstract / Description
-
Geo-localization is the problem of discovering the location where an image or video was captured. Recently, large-scale geo-localization methods which are devised for ground-level imagery and employ techniques similar to image matching have attracted much interest. In these methods, given a reference dataset composed of geo-tagged images, the problem is to estimate the geo-location of a query by finding its matching reference images. In this dissertation, we address three questions central to geo-spatial analysis of ground-level imagery: 1) How to geo-localize images and videos captured at unknown locations? 2) How to refine the geo-location of already geo-tagged data? 3) How to utilize the extracted geo-tags? We present a new framework for geo-locating an image utilizing a novel multiple nearest neighbor feature matching method using Generalized Minimum Clique Graphs (GMCP). First, we extract local features (e.g., SIFT) from the query image and retrieve a number of nearest neighbors for each query feature from the reference data set. Next, we apply our GMCP-based feature matching to select a single nearest neighbor for each query feature such that all matches are globally consistent. Our approach to feature matching is based on the proposition that the first nearest neighbors are not necessarily the best choices for finding correspondences in image matching. Therefore, the proposed method considers multiple reference nearest neighbors as potential matches and selects the correct ones by enforcing the consistency among their global features (e.g., GIST) using GMCP. Our evaluations using a new data set of 102k Street View images show the proposed method outperforms the state-of-the-art by 10 percent. Geo-localization of images can be extended to geo-localization of a video. We have developed a novel method for estimating the geo-spatial trajectory of a moving camera with unknown intrinsic parameters at a city scale. The proposed method is based on a three-step process: 1) individual geo-localization of video frames using Street View images to obtain the likelihood of the location (latitude and longitude) given the current observation, 2) Bayesian tracking to estimate the frame location and the video's temporal evolution using previous state probabilities and current likelihood, and 3) applying a novel Minimum Spanning Trees based trajectory reconstruction to eliminate trajectory loops or noisy estimations. Thus far, we have assumed reliable geo-tags for reference imagery are available through crowdsourcing. However, crowdsourced images are well known to suffer from the acute shortcoming of having inaccurate geo-tags. We have developed the first method for refinement of GPS-tags which automatically discovers the subset of corrupted geo-tags and refines them. We employ Random Walks to discover the uncontaminated subset of location estimations and robustify Random Walks with a novel adaptive damping factor that conforms to the level of noise in the input. In location-aware image understanding, we are interested in improving the image analysis by putting it in the right geo-spatial context. This approach is of particular importance as the majority of cameras and mobile devices are now being equipped with GPS chips. Therefore, developing techniques which can leverage the geo-tags of images for improving the performance of traditional computer vision tasks is of particular interest. We have developed a location-aware multimodal approach which incorporates business directories, textual information, and web images to identify businesses in a geo-tagged query image.
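The GMCP-based selection step, choosing one candidate per query feature so that the selected set is globally consistent, can be illustrated with a small, hedged sketch. The code below is not the dissertation's solver; it builds a tiny instance with hypothetical global descriptors (standing in for GIST) and simply enumerates all candidate combinations to minimize the summed pairwise distance between the chosen candidates.

```python
import numpy as np
from itertools import product

def consistent_matches(candidates):
    """Brute-force version of the GMCP objective for a tiny instance: pick one
    candidate per query feature so that the summed pairwise distance between the
    chosen candidates' global descriptors is minimal."""
    n = len(candidates)
    best_cost, best_pick = np.inf, None
    for pick in product(*[range(len(c)) for c in candidates]):
        chosen = [candidates[i][pick[i]] for i in range(n)]
        cost = sum(np.linalg.norm(chosen[i] - chosen[j])
                   for i in range(n) for j in range(i + 1, n))
        if cost < best_cost:
            best_cost, best_pick = cost, pick
    return best_pick

# Hypothetical setup: 5 query features, 4 nearest-neighbor candidates each,
# each candidate carrying an 8-dimensional global descriptor.
rng = np.random.default_rng(2)
true_scene = rng.normal(size=8)
cands = []
for _ in range(5):
    c = rng.normal(size=(4, 8))
    c[2] = true_scene + 0.1 * rng.normal(size=8)  # candidate 2 is the geo-consistent one
    cands.append(c)

print(consistent_matches(cands))  # (2, 2, 2, 2, 2): the mutually consistent candidates
```

Exhaustive enumeration is only feasible for a toy instance like this; the dissertation formulates the selection as a Generalized Minimum Clique Problem and solves it with a dedicated method.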
-
Date Issued
-
2014
-
Identifier
-
CFE0005544, ucf:50282
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005544
-
-
Title
-
The effect of Curriculum Organization on the acquisition of Abstract Declarative Knowledge in Computer Based Instructions.
-
Creator
-
Al-Foraih, Saleh, Williams, Kent, Proctor, Michael, Rabelo, Luis, Ozkaptan, Halim, University of Central Florida
-
Abstract / Description
-
The United States of America has dropped behind many countries in terms of the Science and Engineering university degrees awarded since the beginning of the nineties. Multiple studies have been conducted to determine the cause of this decline in degrees awarded, and try to reverse the trend in US education. The goal of these studies was to determine the proper instructional methods that facilitate the knowledge acquisition process for the student. It has been determined that no one method works for all types of curriculum; for example, methods that have been found to work effectively in curriculum that teaches procedures and physical systems often fail in curriculum that teaches abstract and conceptual content. The purpose of this study is to design an instructional method that facilitates teaching of abstract knowledge, and to demonstrate its effectiveness through empirical research. An experiment including 72 undergraduate students was conducted to determine the best method of acquiring abstract knowledge. All students were presented with the same abstract knowledge, but presented in different types of organization. These organization types consisted of hierarchies referred to as Bottom Up, Top Down, and Unorganized. Another factor that was also introduced is Graphing, which is a method that is believed to improve the learning process. The experiment was completed in 8 weeks and data was gathered and analyzed. The results strongly suggest that abstract knowledge acquisition is greatly improved when the knowledge is presented in a Bottom Up hierarchical fashion. On the other hand, neither Graphing nor the Top Down or Unorganized conditions affect learning in these novice students.
-
Date Issued
-
2013
-
Identifier
-
CFE0004644, ucf:49893
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004644
-
-
Title
-
TRAFFIC CONFLICT ANALYSIS UNDER FOG CONDITIONS USING COMPUTER SIMULATION.
-
Creator
-
Zhang, Binya, Radwan, Essam, Abdel-Aty, Mohamed, Abou-Senna, Hatem, University of Central Florida
-
Abstract / Description
-
The weather condition is a crucial influence factor on road safety issues. Fog is one of the most noticeable weather conditions, which has a significant impact on traffic safety. Such a condition reduces the road's visibility and consequently can affect drivers' vision, perception, and judgments. The statistical data show that many crashes are directly or indirectly caused by low-visibility weather conditions. Hence, it is necessary for road traffic engineers to study the relationship of road traffic accidents and their influence factors. Among these factors, the traffic volume and the speed limits in poor-visibility areas are the primary reasons that can affect the types and occurring locations of road accidents. In this thesis, microscopic traffic simulation, through the use of VISSIM software, was used to study the road safety issue and its influencing factors due to limited visibility. A basic simulation model was built based on previously collected field data to simulate Interstate 4 (I-4)'s environment, geometry characteristics, and basic traffic volume composition conditions. On the foundation of the basic simulation model, an experimental model was built to study the conflicts' types and distribution places under several different scenarios. Taking into consideration the entire 4-mile study area on I-4, this area was divided into 3 segments: section 1 with clear visibility, a fog area of low visibility, and section 2 with clear visibility. Lower speed limits in the fog area, which were less than the limits in the no-fog areas, were set to investigate the different speed limits' influence on the two main types of traffic conflicts: lane-change conflicts and rear-end conflicts. The experimental model generated several groups of traffic trajectory data files. The vehicle conflict data were stored in these trajectory data files, which contain the conflict locations' coordinates, conflict time, time-to-conflict, and post-encroachment time, among other measures. The Surrogate Safety Assessment Model (SSAM), developed by the Federal Highway Administration, was applied to analyze these conflict data. From the analysis results, it is found that the traffic volume is an important factor, which has a large effect on the number of conflicts. The number of lane-change and rear-end conflicts increases along with the traffic volume growth. Another finding is that the difference between the speed limits in the fog area and in the no-fog areas is another significant factor that impacts the conflicts' frequency. A larger difference between the speed limits in two neighboring road sections always leads to more accidents, due to inadequate reaction time for vehicle drivers to brake in time. Compared to the scenarios with reduced speed limits in the low-visibility zone, the condition without a reduced speed limit produced a higher number of conflicts, which indicates that it is necessary to post a lower speed limit in the fog zone, which has lower visibility. The results of this research have a certain reference value for studying the relationship between road traffic conflicts and the impacts of different speed limits under fog conditions. Overall, the findings of this research suggest follow-up studies to further investigate possible relationships between conflicts as observed by simulation models and reported crashes in fog areas.
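Surrogate safety measures such as time-to-conflict can be computed directly from trajectory data of the kind described above. The sketch below is a hedged illustration, not SSAM itself: for a hypothetical leader-follower pair in the same lane, it computes the rear-end time-to-collision at each time step and flags steps where it drops below a commonly used 1.5-second threshold.

```python
# Hypothetical trajectories sampled at 1-second intervals: position (m) and speed (m/s).
leader =   [(100.0, 20.0), (120.0, 20.0), (138.0, 16.0), (152.0, 12.0)]
follower = [( 60.0, 28.0), ( 88.0, 28.0), (116.0, 28.0), (142.0, 26.0)]
VEHICLE_LENGTH = 4.5      # m, assumed
TTC_THRESHOLD = 1.5       # s, a common rear-end conflict threshold

for step, ((x_l, v_l), (x_f, v_f)) in enumerate(zip(leader, follower)):
    gap = x_l - x_f - VEHICLE_LENGTH
    closing_speed = v_f - v_l
    if closing_speed > 0:                 # follower is catching up
        ttc = gap / closing_speed
        flag = "CONFLICT" if ttc < TTC_THRESHOLD else "ok"
        print(f"t={step}s  gap={gap:5.1f} m  TTC={ttc:5.2f} s  {flag}")
    else:
        print(f"t={step}s  gap={gap:5.1f} m  TTC=inf       ok")
```

SSAM applies thresholding of this kind to the time-to-collision and post-encroachment-time values extracted from the full trajectory files exported by the simulation.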
-
Date Issued
-
2015
-
Identifier
-
CFE0005747, ucf:50104
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005747
-
-
Title
-
Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems.
-
Creator
-
Groves, Curtis, Kassab, Alain, Das, Tuhin, Kauffman, Jeffrey, Moore, Brian, University of Central Florida
-
Abstract / Description
-
Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions. The method accounts for all uncertainty terms from both numerical and input variables. Objective three is to compile a table of uncertainty parameters that could be used to estimate the error in a Computational Fluid Dynamics model of the Environmental Control System/spacecraft system. Previous studies have looked at the uncertainty in a Computational Fluid Dynamics model for a single output variable at a single point, for example the re-attachment length of a backward-facing step. For the flow regime being analyzed (turbulent, three-dimensional, incompressible), the error at a single point can propagate into the solution both via flow physics and numerical methods. Calculating the uncertainty in using Computational Fluid Dynamics to accurately predict airflow speeds around encapsulated spacecraft is imperative to the success of future missions.
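The three-grid procedure referenced above can be illustrated with a standard grid-convergence calculation. The sketch below is a hedged example using Roache-style Richardson extrapolation and a grid convergence index (GCI); it is not necessarily the exact formulation of the cited methodology, and the solution values and refinement ratio are hypothetical.

```python
import math

# Hypothetical peak airflow speed (m/s) from fine, medium, and coarse grids,
# with a constant grid refinement ratio r between successive grids.
f_fine, f_med, f_coarse = 3.97, 4.08, 4.35
r = 2.0
safety_factor = 1.25          # common choice when three grids are available

# Observed order of accuracy from the three solutions.
p = math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)

# Richardson-extrapolated (approximately grid-independent) value.
f_exact = f_fine + (f_fine - f_med) / (r**p - 1.0)

# Grid convergence index: an error band on the fine-grid solution.
rel_err = abs((f_med - f_fine) / f_fine)
gci_fine = safety_factor * rel_err / (r**p - 1.0)

print(f"observed order p      = {p:.2f}")
print(f"extrapolated solution = {f_exact:.3f} m/s")
print(f"GCI (fine grid)       = {100 * gci_fine:.2f} %")
```

With the three grid solutions in hand, the extrapolated value and the GCI band together provide the numerical portion of the error bars described above; input-variable uncertainty must still be estimated separately.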
-
Date Issued
-
2014
-
Identifier
-
CFE0005174, ucf:50662
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005174
-
-
Title
-
Improving the performance of data-intensive computing on Cloud platforms.
-
Creator
-
Dai, Wei, Bassiouni, Mostafa, Zou, Changchun, Wang, Jun, Lin, Mingjie, Bai, Yuanli, University of Central Florida
-
Abstract / Description
-
Big Data such as Terabyte and Petabyte datasets are rapidly becoming the new norm for various organizations across a wide range of industries. The widespread data-intensive computing needs have inspired innovations in parallel and distributed computing, which has been the effective way to tackle massive computing workloads for decades. One significant example is MapReduce, which is a programming model for expressing distributed computations on huge datasets, and an execution framework for data-intensive computing on commodity clusters as well. Since it was originally proposed by Google, MapReduce has become the most popular technology for data-intensive computing. While Google owns its proprietary implementation of MapReduce, an open source implementation called Hadoop has gained wide adoption in the rest of the world. The combination of Hadoop and Cloud platforms has made data-intensive computing much more accessible and affordable than ever before. This dissertation addresses the performance issue of data-intensive computing on Cloud platforms from three different aspects: task assignment, replica placement, and straggler identification. Both task assignment and replica placement are subjects closely related to load balancing, which is one of the key issues that can significantly affect the performance of parallel and distributed applications. While task assignment schemes strive to balance the data processing load among cluster nodes to achieve minimum job completion time, replica placement policies aim to assign block replicas to cluster nodes according to their processing capabilities to exploit data locality to the maximum extent. Straggler identification is also one of the crucial issues data-intensive computing has to deal with, as the overall performance of parallel and distributed applications is often determined by the node with the lowest performance. The results of extensive evaluation tests confirm that the schemes/policies proposed in this dissertation can improve the performance of data-intensive applications running on Cloud platforms.
-
Date Issued
-
2017
-
Identifier
-
CFE0006731, ucf:51896
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006731
-
-
Title
-
In-Memory Computing Using Formal Methods and Paths-Based Logic.
-
Creator
-
Velasquez, Alvaro, Jha, Sumit Kumar, Leavens, Gary, Wu, Annie, Subramani, K., University of Central Florida
-
Abstract / Description
-
The continued scaling of the CMOS device has been largely responsible for the increase in computational power and consequent technological progress over the last few decades. However, the end of Dennard scaling has interrupted this era of sustained exponential growth in computing performance. Indeed, we are quickly reaching an impasse in the form of limitations in the lithographic processes used to fabricate CMOS devices and, even more dire, we are beginning to face fundamental physical phenomena, such as quantum tunneling, that are pervasive at the nanometer scale. Such phenomena manifest themselves in prohibitively high leakage currents and process variations, leading to inaccurate computations. As a result, there has been a surge of interest in computing architectures that can replace the traditional CMOS transistor-based methods. This thesis is a thorough investigation of how computations can be performed on one such architecture, called a crossbar. The methods proposed in this document apply to any crossbar consisting of two-terminal connective devices. First, we demonstrate how paths of electric current between two wires can be used as design primitives in a crossbar. We then leverage principles from the field of formal methods, in particular the area of bounded model checking, to automate the synthesis of crossbar designs for computing arithmetic operations. We demonstrate that our approach yields circuits that are state-of-the-art in terms of the number of operations required to perform a computation. Finally, we look at the benefits of using a 3D crossbar for computation; that is, a crossbar consisting of multiple layers of interconnects. A novel 3D crossbar computing paradigm is proposed for solving the Boolean matrix multiplication and transitive closure problems, and we show how this paradigm can be utilized, with small modifications, in the XPoint crossbar memory architecture that was recently announced by Intel.
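The Boolean matrix problems mentioned above have a simple path interpretation that is easy to state in code: C[i][j] is 1 exactly when some intermediate index k connects row i of A to column j of B, and the transitive closure repeats this until no new connections appear. The sketch below is a plain software illustration of those two operations, not a model of the crossbar hardware or of the bounded-model-checking synthesis flow.

```python
def bool_matmul(A, B):
    """C[i][j] = OR over k of (A[i][k] AND B[k][j])."""
    n = len(A)
    return [[int(any(A[i][k] and B[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

def transitive_closure(A):
    """Reachability: keep squaring the (reflexive) relation until it stops growing."""
    n = len(A)
    closure = [[int(A[i][j] or i == j) for j in range(n)] for i in range(n)]
    while True:
        nxt = bool_matmul(closure, closure)
        if nxt == closure:
            return nxt
        closure = nxt

adj = [[0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
print(transitive_closure(adj))   # node 0 reaches every other node
```

In the crossbar paradigm described above, the existence of a conductive path between wires presumably plays the role of this AND/OR computation, rather than the explicit loops used here.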
-
Date Issued
-
2018
-
Identifier
-
CFE0007419, ucf:52720
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007419
-
-
Title
-
Larger-first partial parsing.
-
Creator
-
Van Delden, Sebastian Alexander, Gomez, Fernando, Engineering and Computer Science
-
Abstract / Description
-
University of Central Florida College of Engineering Thesis; Larger-first partial parsing is a primarily top-down approach to partial parsing that is opposite to current easy-first, or primarily bottom-up, strategies. A rich partial tree structure is captured by an algorithm that assigns a hierarchy of structural tags to each of the input tokens in a sentence. Part-of-speech tags are first assigned to the words in a sentence by a part-of-speech tagger. A cascade of Deterministic Finite State Automata then uses this part-of-speech information to identify syntactic relations primarily in a descending order of their size. The cascade is divided into four specialized sections: (1) a Comma Network, which identifies syntactic relations associated with commas; (2) a Conjunction Network, which partially disambiguates phrasal conjunctions and fully disambiguates clausal conjunctions; (3) a Clause Network, which identifies non-comma-delimited clauses; and (4) a Phrase Network, which identifies the remaining base phrases in the sentence. Each automaton is capable of adding one or more levels of structural tags to the tokens in a sentence. The larger-first approach is compared against a well-known easy-first approach. The results indicate that this larger-first approach is capable of (1) producing a more detailed partial parse than an easy-first approach; (2) providing better containment of attachment ambiguity; (3) handling overlapping syntactic relations; and (4) achieving a higher accuracy than the easy-first approach. The automata of each network were developed by an empirical analysis of several sources and are presented here in detail.
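A cascade of finite-state patterns over part-of-speech tags, applied from larger units down to smaller ones, can be sketched compactly with regular expressions (each of which describes a regular language recognizable by a deterministic finite automaton). The example below is only a hedged, toy illustration of the larger-first idea, not the thesis's four networks: it first brackets a comma-delimited clause over a POS-tag string, and only then brackets base noun phrases inside the remaining material.

```python
import re

# A toy sentence represented as (word, POS) pairs.
tagged = [("the", "DT"), ("engineer", "NN"), (",", ","), ("who", "WP"),
          ("wrote", "VBD"), ("the", "DT"), ("parser", "NN"), (",", ","),
          ("tested", "VBD"), ("it", "PRP")]
tags = " ".join(pos for _, pos in tagged)

# Larger-first cascade: the biggest pattern (a comma-delimited relative clause)
# is bracketed before the smaller base noun phrases.
cascade = [
    ("CLAUSE", re.compile(r", WP VBD DT NN ,")),   # comma-network-style rule (toy)
    ("NP",     re.compile(r"DT NN|PRP")),          # phrase-network-style rule (toy)
]

spans = tags
for label, pattern in cascade:
    spans = pattern.sub(lambda m: f"[{label} {m.group(0)}]", spans)

print(spans)
# [NP DT NN] [CLAUSE , WP VBD [NP DT NN] ,] VBD [NP PRP]
```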
-
Date Issued
-
2003
-
Identifier
-
CFR0000760, ucf:52932
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFR0000760
-
-
Title
-
THE INTEGRATED USER EXPERIENCE EVALUATION MODEL: A SYSTEMATIC APPROACH TO INTEGRATING USER EXPERIENCE DATA SOURCES.
-
Creator
-
Champney, Roberto, Malone, Linda, University of Central Florida
-
Abstract / Description
-
Evaluating the user experience (UX) associated with product interaction is a challenge for current human-systems developers. This is largely due to a lack of theoretical guidance for directing how best to assess the UX and a paucity of tools to support such evaluation. This dissertation provided a framework and tools for guiding and supporting evaluation of the user experience. This doctoral research involved reviewing the literature on UX, using this knowledge to first build a theoretical model of the UX construct and later develop a theoretical model for the evaluation of UX in order to aid evaluators, the integrated User eXperience EValuation (iUXEV) model, and empirically validating select components of the model through three case studies. The developed evaluation model was subjected to a three-phase validation process that included the development and application of different components of the model separately. The first case study focused on developing a tool and method for assessing the affective component of UX, which resulted in lessons learned for the integration of the tool and method into the iUXEV model. The second case study focused on integrating several tools that target different components of UX and resulted in a better understanding of how the data could be utilized, as well as identifying the need for an integration method to bring the data together. The third case study focused on the application of the results of a usability evaluation in an organizational setting, which resulted in the identification of challenges and needs faced by practitioners. Taken together, this body of research, from the theoretically-driven iUXEV model to the newly developed emotional assessment tool, extends the user experience / usability body of knowledge and state-of-practice for interaction design practitioners who are challenged with holistic user experience evaluations, thereby advancing the state-of-the-art in UX design and evaluation.
-
Date Issued
-
2009
-
Identifier
-
CFE0002761, ucf:48098
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002761