- Title
- OUTSIDE THE FRAME: TOWARDS A PHENOMENOLOGY OF TEXTS AND TECHNOLOGY.
- Creator
- Crisafi, Anthony, Grajeda, Anthony, University of Central Florida
- Abstract / Description
- The subject of my dissertation is how phenomenology can be used as a tool for understanding the intersection between texts and technology. What I am suggesting here is that, specifically in connection with the focus of our program in Texts and Technology, there are very significant questions concerning how digital communications technology extends our humanity, and more importantly what kind of epistemological and ontological questions are raised because of this. There needs to be a coherent theory for Texts and Technology that will help us to understand this shift, and I feel that this should be the main focus for the program itself. In this dissertation I present an analysis of the different phenomenological aspects of the study of Texts and Technology. For phenomenologists such as Husserl, Heidegger, and Merleau-Ponty, technology, in all of its forms, is the way in which human consciousness is embodied. Through the creation and manipulation of technology, humanity extends itself into the physical world. Therefore, I feel we must try to understand this extension as more than merely a reflection of materialist practices, because first and foremost we are discussing how the human mind uses technology to further its advancement. I will detail some of the theoretical arguments both for and against the study of technology as a function of human consciousness. I will focus on certain issues, such as problems of archiving and copyright, as central to the field. I will further argue that, from a phenomenological standpoint, we are witnessing a shift from the primacy of print towards a more hybrid system of representing human communication.
- Date Issued
- 2008
- Identifier
- CFE0002181, ucf:47885
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002181
- Title
- A SCREEN OF ONE'S OWN: THE TPEC AND FEMINIST TECHNOLOGICAL TEXTUALITY IN THE 21ST CENTURY.
- Creator
- Barnickel, Amy, Bowdon, Melody, University of Central Florida
- Abstract / Description
- In this dissertation, I analyze the 20th century text, A Room of One's Own, by Virginia Woolf (2005), and I engage with Woolf's concept of a woman's need for a room of her own in which she can be free to think for herself, study, write, or pursue other interests away from the oppression of patriarchal societal expectations and demands. Through library-based research, I identify four screens in Woolf's work through which she viewed and critiqued culture, and I use these screens to reconceptualize "a room of one's own" in 21st century terms. I determine that the new "room" is intimately and intricately technological and textual and that it is reformulated in the digital spaces of blogs, social media, and Web sites. Further, I introduce the new concept of the technologized politically embodied cyborg, or TPEC, and examine the ways 21st century TPECs are shaping U.S. culture in progressive ways.
- Date Issued
- 2010
- Identifier
- CFE0003500, ucf:48939
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003500
- Title
- Critical Programming: Toward a Philosophy of Computing.
- Creator
- Bork, John, Janz, Bruce, Grajeda, Anthony, McDaniel, Rudy, Hughes, Charles, University of Central Florida
- Abstract / Description
- Beliefs about the relationship between human beings and computing machines and their destinies have alternated from heroic counterparts to conspirators of automated genocide, from apocalyptic extinction events to evolutionary cyborg convergences. Many fear that people are losing key intellectual and social abilities as tasks are offloaded to the everywhere of the built environment, which is developing a mind of its own. If digital technologies have contributed to forming a dumbest generation and ushering in a robotic moment, we all have a stake in addressing this collective intelligence problem. While the digital humanities continue to flourish and introduce new uses for computer technologies, the basic modes of philosophical inquiry remain in the grip of print media, and default philosophies of computing prevail, or experimental ones propagate false hopes. I cast this as-is situation as the post-postmodern network dividual cyborg, recognizing that the rational enlightenment of modernism and the regressive subjectivity of postmodernism now operate in an empire of extended-mind cybernetics combined with techno-capitalist networks forming societies of control. Recent critical theorists identify a justificatory scheme foregrounding participation in projects, valorizing social network linkages over heroic individualism, and commending flexibility and adaptability through lifelong learning over stable career paths. It seems to reify one possible, contingent configuration of global capitalism as if it were the reflection of a deterministic evolution of commingled technogenesis and synaptogenesis. To counter this trend I offer a theoretical framework focused on the phenomenology of software and code, joining social critiques with textuality and media studies, the former proposing that theory be done through practice, and the latter seeking to understand their schematism of perceptibility by taking into account engineering techniques like time-axis manipulation. The social construction of technology makes additional theoretical contributions, dispelling closed-world, deterministic historical narratives and requiring that voices be given to the engineers and technologists who best know their subject area. This theoretical slate has recently been deployed to produce rich histories of computing, networking, and software; to inform the nascent disciplines of software studies and code studies; and to guide ethnographers of software development communities. I call my syncretism of these approaches the procedural rhetoric of diachrony in synchrony, recognizing that multiple explanatory layers operating in their individual temporal and physical orders of magnitude simultaneously undergird post-postmodern network phenomena. Its touchstone is that the human-machine situation is best contemplated by doing, which as a methodology for digital humanities research I call critical programming. Philosophers of computing explore working code places by designing, coding, and executing complex software projects as an integral part of their intellectual activity, reflecting on how developing theoretical understanding necessitates iterative development of code as it does other texts, and how resolving coding dilemmas may clarify or modify provisional theories as our minds struggle to intuit the alien temporalities of machine processes.
- Date Issued
- 2015
- Identifier
- CFE0005928, ucf:50843
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005928
- Title
- The Relationship Between Comprehension of Descriptive and Sequential Expository Texts and Reader Characteristics in Typically Developing Kindergarten Children.
- Creator
- Zadroga, Cheran, Schwartz, Jamie, Kent-Walsh, Jennifer, Nye, Chad, Lieberman, Rita, Hahs-Vaughn, Debbie, University of Central Florida
- Abstract / Description
- Researchers have found that children need to be proficient in reading and writing expository text to succeed academically as well as in their future careers. More than ever before, children in primary grade classrooms are being exposed to and expected to comprehend a variety of expository text types. However, empirical evidence to support the use of expository texts in kindergarten classrooms, in particular, is sorely lacking. To begin to fill this gap, this study was conducted to investigate kindergarten children's comprehension of two types of expository text structures (i.e., descriptive and sequential) commonly found in kindergarten classrooms. Specifically, the aims of the study were threefold: (1) to investigate whether there is a relationship between prior knowledge and the comprehension of descriptive or sequential expository text; (2) to determine whether the comprehension of descriptive and sequential expository text are important predictors of performance on the Token Test for Children-2 (TTFC-2) and the Assessment of Literacy and Language (ALL); and (3) to determine whether there is a correlation between the descriptive and sequential expository text comprehension measures (i.e., retelling of expository text and answering comprehension questions) on the researcher-created Expository Text Protocol. The sample included 45 typically developing kindergarten children (ages 5 years, 8 months to 6 years, 10 months). All children passed a vision and a hearing screening; were enrolled in kindergarten for the first time (no history of retention); scored within the normal range on a non-verbal intelligence screener; and were not receiving services in the English for Speakers of Other Languages (ESOL) program or the Exceptional Student Education (ESE) program. Each child participated in two one-hour assessment sessions on two separate days. During the sessions, children were administered formal (i.e., TTFC-2 and ALL) and informal (i.e., Expository Text Protocol) assessments, counterbalanced across the sessions. The standardized tests were administered in the prescribed manner. During administration of the researcher-created Expository Text Protocol, children listened first to either an illustrated descriptive expository text or an illustrated sequential expository text read aloud by a researcher. After the reading, the children either first retold the text without the use of the corresponding expository text or answered a set of 12 comprehension questions for each type of expository text (i.e., descriptive and sequential). The order of the retelling and comprehension questions was counterbalanced across children. Simple linear regressions, multiple linear regressions, and partial correlational analyses were used to assess the data obtained in this study. The research findings indicated that a statistically significant relationship exists between the comprehension of expository text and the following reader characteristics: listening comprehension ability, language ability, and literacy ability. However, a statistically significant relationship was not found between the comprehension of the expository text types and prior knowledge. In addition, a statistically significant relationship was found between each of the two types of comprehension measures: retelling of descriptive and sequential expository texts and answering comprehension questions related to each type of text. This investigation revealed that the incorporation of descriptive and sequential expository text structures into the kindergarten curricula is appropriate and that exposure to expository texts may facilitate language and literacy growth and build upon kindergarten children's existing prior knowledge. In turn, exposure to expository texts also may be beneficial in expanding children's use of the expository language found in these types of texts. Future research is needed to examine kindergarten children's comprehension of other types of expository text structures found in kindergarten classrooms.
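The analyses named in this abstract (simple linear regression and partial correlation) can be sketched compactly. The snippet below uses invented stand-in variables, not the study's data, to show the mechanics: an ordinary least-squares fit, and a partial correlation computed by correlating residuals after regressing out a covariate.

```python
# Hypothetical sketch of the analyses named in the abstract. Variable
# names are illustrative placeholders; the study's data are not shown.
import numpy as np

rng = np.random.default_rng(0)
n = 45  # sample size reported in the abstract
prior_knowledge = rng.normal(50, 10, n)
language_ability = rng.normal(100, 15, n)
comprehension = 0.2 * language_ability + rng.normal(0, 5, n)

# Simple linear regression via least squares: comprehension ~ prior_knowledge.
X = np.column_stack([np.ones(n), prior_knowledge])
beta, *_ = np.linalg.lstsq(X, comprehension, rcond=None)
print("intercept, slope:", beta)

def residuals(y, covariate):
    """Residuals of y after regressing out a covariate."""
    Z = np.column_stack([np.ones(len(covariate)), covariate])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return y - Z @ coef

# Partial correlation of comprehension and prior knowledge,
# controlling for language ability.
r = np.corrcoef(residuals(comprehension, language_ability),
                residuals(prior_knowledge, language_ability))[0, 1]
print("partial correlation:", r)
```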
- Date Issued
- 2016
- Identifier
- CFE0006426, ucf:51479
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006426
- Title
- TRANSFORM BASED AND SEARCH AWARE TEXT COMPRESSION SCHEMES AND COMPRESSED DOMAIN TEXT RETRIEVAL.
- Creator
- Zhang, Nan, Mukherjee, Amar, University of Central Florida
- Abstract / Description
- In recent times, we have witnessed an unprecedented growth of textual information via the Internet, digital libraries, and archival text in many applications. While a good fraction of this information is of transient interest, useful information of archival value will continue to accumulate. We need ways to manage, organize, and transport this data from one point to another over data communication links with limited bandwidth. We must also have means to speedily find the information we need from this huge mass of data. Sometimes a single site may contain large collections of data, such as a library database, thereby requiring an efficient search mechanism even to search within the local data. To facilitate information retrieval, an emerging ad hoc standard for uncompressed text is XML, which annotates the text with additional user-defined metadata such as a DTD or hyperlinks to enable searching with better efficiency and effectiveness. This increases the file size considerably, underscoring the importance of applying text compression. On account of efficiency (in terms of both space and time), there is a need to keep the data in compressed form for as long as possible. Text compression is concerned with techniques for representing digital text data in alternate representations that take less space. Not only does it help conserve storage space for archival and online data, it also helps system performance by requiring fewer secondary storage (disk or CD-ROM) accesses and improves network transmission bandwidth utilization by reducing transmission time. Unlike static images or video, there is no international standard for text compression, although compressed formats like .zip, .gz, and .Z files are increasingly being used. In general, data compression methods are classified as lossless or lossy. Lossless compression allows the original data to be recovered exactly. Although used primarily for text data, lossless compression algorithms are useful in special classes of images such as medical imaging, fingerprint data, astronomical images, and databases containing mostly vital numerical data, tables, and text information. Many lossy algorithms use lossless methods at the final stage of encoding, underscoring the importance of lossless methods for both lossy and lossless compression applications. In order to effectively utilize the full potential of compression techniques for future retrieval systems, we need efficient information retrieval in the compressed domain. This means that techniques must be developed to search compressed text without decompression, or with only partial decompression, independent of whether the search is done on the text or on some inversion table corresponding to a set of keywords for the text. In this dissertation, we make the following contributions: (1) Star family compression algorithms: We have proposed an approach to develop a reversible transformation that can be applied to a source text to improve existing algorithms' ability to compress it. We use a static dictionary to convert English words into predefined symbol sequences. These transformed sequences create additional context information that is superior to the original text. Thus we achieve some compression at the preprocessing stage. We have a series of transforms which improve the performance. The star transform requires a static dictionary of a certain size. To avoid the considerable complexity of conversion, we employ a ternary tree data structure that efficiently maps the words in the text to the words in the star dictionary in linear time. (2) Exact and approximate pattern matching in Burrows-Wheeler transformed (BWT) files: We propose a method to extract useful context information in linear time from the BWT-transformed text. The auxiliary arrays obtained from the BWT inverse transform yield logarithmic search time. Meanwhile, approximate pattern matching can be performed based on the results of exact pattern matching to extract candidates for the approximate match; a fast verification algorithm can then be applied to those candidates, which may be just small parts of the original text. We present algorithms for both k-mismatch and k-approximate pattern matching in BWT-compressed text. A typical compression system based on BWT has Move-to-Front and Huffman coding stages after the transformation. We propose a novel approach to replace the Move-to-Front stage in order to extend compressed-domain search capability all the way to the entropy coding stage. A modification to the Move-to-Front stage makes it possible to randomly access any part of the compressed text without referring to the part before the access point. (3) Modified LZW algorithm that allows random access and partial decoding for compressed text retrieval: Although many compression algorithms provide good compression ratios and/or time complexity, LZW was the first studied for compressed pattern matching because of its simplicity and efficiency. Modifications to the LZW algorithm provide the additional advantages of fast random access and partial decoding that are especially useful for text retrieval systems. Based on this algorithm, we can provide a dynamic hierarchical semantic structure for the text, so that text search can be performed at the expected level of granularity. For example, users can choose to retrieve a single line, a paragraph, or a file that contains the keywords. More importantly, we show that parallel encoding and decoding are trivial with the modified LZW: both can be performed easily with multiple processors, and the encoding and decoding processes are independent with respect to the number of processors.
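As a rough, self-contained illustration of the pipeline behind contribution (2), the sketch below implements a naive Burrows-Wheeler transform followed by a textbook Move-to-Front stage. Real systems build the BWT with suffix arrays and follow MTF with entropy coding; the dissertation's searchable replacement for the MTF stage is not reproduced here.

```python
# Minimal sketch of the BWT + Move-to-Front pipeline discussed above.
# This naive version sorts all rotations (O(n^2 log n)); real systems
# use suffix arrays. The dissertation's modified, randomly accessible
# MTF stage is not shown -- this is only the textbook baseline.

def bwt(text: str, sentinel: str = "\0") -> str:
    """Return the Burrows-Wheeler transform of text."""
    s = text + sentinel                      # unique end marker
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def move_to_front(data: str) -> list[int]:
    """Encode data as MTF ranks over a dynamic symbol table."""
    table = sorted(set(data))
    out = []
    for ch in data:
        rank = table.index(ch)
        out.append(rank)
        table.insert(0, table.pop(rank))     # move symbol to front
    return out

transformed = bwt("banana")
print(transformed)                 # clustered characters: 'annb\x00aa'
print(move_to_front(transformed))  # runs of small ranks, easy to entropy-code
```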
- Date Issued
- 2005
- Identifier
- CFE0000438, ucf:46396
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000438
- Title
- TRANSFORM BASED AND SEARCH AWARE TEXT COMPRESSION SCHEMES AND COMPRESSED DOMAIN TEXT RETRIEVAL.
- Creator
- Zhang, Nan, Mukherjee, Amar, University of Central Florida
- Abstract / Description
- In recent times, we have witnessed an unprecedented growth of textual information via the Internet, digital libraries, and archival text in many applications. While a good fraction of this information is of transient interest, useful information of archival value will continue to accumulate. We need ways to manage, organize, and transport this data from one point to another over data communication links with limited bandwidth. We must also have means to speedily find the information we need from this huge mass of data. Sometimes a single site may contain large collections of data, such as a library database, thereby requiring an efficient search mechanism even to search within the local data. To facilitate information retrieval, an emerging ad hoc standard for uncompressed text is XML, which annotates the text with additional user-defined metadata such as a DTD or hyperlinks to enable searching with better efficiency and effectiveness. This increases the file size considerably, underscoring the importance of applying text compression. On account of efficiency (in terms of both space and time), there is a need to keep the data in compressed form for as long as possible. Text compression is concerned with techniques for representing digital text data in alternate representations that take less space. Not only does it help conserve storage space for archival and online data, it also helps system performance by requiring fewer secondary storage (disk or CD-ROM) accesses and improves network transmission bandwidth utilization by reducing transmission time. Unlike static images or video, there is no international standard for text compression, although compressed formats like .zip, .gz, and .Z files are increasingly being used. In general, data compression methods are classified as lossless or lossy. Lossless compression allows the original data to be recovered exactly. Although used primarily for text data, lossless compression algorithms are useful in special classes of images such as medical imaging, fingerprint data, astronomical images, and databases containing mostly vital numerical data, tables, and text information. Many lossy algorithms use lossless methods at the final stage of encoding, underscoring the importance of lossless methods for both lossy and lossless compression applications. In order to effectively utilize the full potential of compression techniques for future retrieval systems, we need efficient information retrieval in the compressed domain. This means that techniques must be developed to search compressed text without decompression, or with only partial decompression, independent of whether the search is done on the text or on some inversion table corresponding to a set of keywords for the text. In this dissertation, we make the following contributions: (1) Star family compression algorithms: We have proposed an approach to develop a reversible transformation that can be applied to a source text to improve existing algorithms' ability to compress it. We use a static dictionary to convert English words into predefined symbol sequences. These transformed sequences create additional context information that is superior to the original text. Thus we achieve some compression at the preprocessing stage. We have a series of transforms which improve the performance. The star transform requires a static dictionary of a certain size. To avoid the considerable complexity of conversion, we employ a ternary tree data structure that efficiently maps the words in the text to the words in the star dictionary in linear time. (2) Exact and approximate pattern matching in Burrows-Wheeler transformed (BWT) files: We propose a method to extract useful context information in linear time from the BWT-transformed text. The auxiliary arrays obtained from the BWT inverse transform yield logarithmic search time. Meanwhile, approximate pattern matching can be performed based on the results of exact pattern matching to extract candidates for the approximate match; a fast verification algorithm can then be applied to those candidates, which may be just small parts of the original text. We present algorithms for both k-mismatch and k-approximate pattern matching in BWT-compressed text. A typical compression system based on BWT has Move-to-Front and Huffman coding stages after the transformation. We propose a novel approach to replace the Move-to-Front stage in order to extend compressed-domain search capability all the way to the entropy coding stage. A modification to the Move-to-Front stage makes it possible to randomly access any part of the compressed text without referring to the part before the access point. (3) Modified LZW algorithm that allows random access and partial decoding for compressed text retrieval: Although many compression algorithms provide good compression ratios and/or time complexity, LZW was the first studied for compressed pattern matching because of its simplicity and efficiency. Modifications to the LZW algorithm provide the additional advantages of fast random access and partial decoding that are especially useful for text retrieval systems. Based on this algorithm, we can provide a dynamic hierarchical semantic structure for the text, so that text search can be performed at the expected level of granularity. For example, users can choose to retrieve a single line, a paragraph, or a file that contains the keywords. More importantly, we show that parallel encoding and decoding are trivial with the modified LZW: both can be performed easily with multiple processors, and the encoding and decoding processes are independent with respect to the number of processors.
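To make the random-access idea in contribution (3) concrete, here is a toy LZW coder with periodic dictionary resets, one simple way to allow decoding to begin mid-stream. The reset-point device is an illustrative assumption, not the dissertation's actual modification.

```python
# Toy LZW with periodic dictionary resets so that decoding can start at
# any reset boundary -- a simple stand-in for the random access / partial
# decoding property described above, not the dissertation's scheme.

RESET_EVERY = 8  # codes emitted between reset points (illustrative)

def lzw_encode(text: str) -> tuple[list[int], list[int]]:
    """Return (codes, reset_offsets); the dictionary restarts at each reset."""
    def fresh():
        return {chr(i): i for i in range(256)}
    dictionary, w, codes, resets = fresh(), "", [], [0]
    for ch in text:
        if w + ch in dictionary:
            w += ch
            continue
        codes.append(dictionary[w])
        dictionary[w + ch] = len(dictionary)
        w = ch
        if len(codes) % RESET_EVERY == 0:   # decoder may start at this boundary
            dictionary = fresh()
            resets.append(len(codes))
    if w:
        codes.append(dictionary[w])
    return codes, resets

def lzw_decode_from(codes: list[int], start: int) -> str:
    """Decode one block beginning at a reset offset, independently of the rest."""
    dictionary = {i: chr(i) for i in range(256)}
    block = codes[start:start + RESET_EVERY]
    w = dictionary[block[0]]
    out = [w]
    for code in block[1:]:
        entry = dictionary.get(code, w + w[0])  # handle the KwKwK case
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[0]
        w = entry
    return "".join(out)

codes, resets = lzw_encode("to be or not to be, that is the question")
print(resets)                             # offsets where decoding may begin
print(lzw_decode_from(codes, resets[1]))  # partial decode of the second block
```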
- Date Issued
- 2005
- Identifier
- CFE0000488, ucf:46358
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000488
- Title
- THE DARK SIDE OF THE TUNE: A STUDY OF VILLAINS.
- Creator
- Biggs, Michael, Weaver, Earl, University of Central Florida
- Abstract / Description
- On "championing" the villain, there is a naïve quality that must be maintained even though the actor has rehearsed his tragic ending several times. There is a subtle difference between "to charm" and "to seduce." The need for fame, glory, power, money, or other objects of affection drives antagonists so blindly that they've no hope of regaining a consciousness about their actions. If and when they do become aware, they infrequently feel remorse. I captured the essence of the villain by exposing these lightless characters to the sun. On Monday, April 9th and Tuesday, April 17th, 2007, on the Gillespie stage in Daytona Beach, Florida, I performed a thirty-minute, one-act cabaret entitled The Dark Side of the Tune. By selecting pieces from the musical theatre genre to define and demonstrate the qualities of the stock character, the villain, I created a one-man show: a musical play, including an inciting incident, rising conflict, climax, and dénouement, with only a few moments of my own dialogue to help handle the unique transitions for my own particular story. By analyzing the arc of major historical villains and comparing them to some of the current dark characters, I discuss the progression of the villain's role within a production and the change from the clearly defined villain to modern misfits who are frequently far less scheming or obvious. My research includes analysis of the dark references within each piece's originating production and how they have been integrated into the script for The Dark Side of the Tune, as well as a breakdown of my cabaret's script (Appendix A). I explore actors' tools, specifically voice, movement, and characterization, and their use in creating villainous characters. I also discuss similarities in story progression for the deviant's beginning, middle, and final positions within the plot structure of a production.
- Date Issued
- 2008
- Identifier
- CFE0002446, ucf:47709
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002446
- Title
- INVESTIGATING THE EFFECTIVENESS OF REDUNDANT TEXT AND ANIMATION IN MULTIMEDIA LEARNING ENVIRONMENTS.
- Creator
- Chu, Shiau-Lung, Hirumi, Atsusi, University of Central Florida
- Abstract / Description
- In multimedia learning environments, research suggests that the simultaneous presentation of redundant text (i.e., identical narration and on-screen text) may inhibit learning when presented with animation at the same time. However, related studies are limited to testing with cause-and-effect content information (e.g., Moreno & Mayer, 1999, 2002). This study examined the effects of redundant text on learners' memory achievement and problem-solving ability. The study replicated and extended prior research by using descriptive, rather than cause-and-effect, content information. The primary research questions were: (a) does redundant text improve learning performance if learners are presented with instructional material that addresses subject matter other than cause-and-effect relationships? and (b) does sequential presentation of animation followed by redundant text help learning? To answer the research questions, five hypotheses were tested with a sample of 224 Taiwanese students enrolled in college-level Management Information Systems (MIS) courses at a management college in southern Taiwan. Statistically significant differences were found in memory achievement and problem-solving test scores between the simultaneous and sequential groups, while no statistically significant differences were found in memory achievement and problem-solving test scores between the verbally redundant and non-redundant groups. These results were supported by interviewees expressing difficulty in connecting animation and verbal explanation in the two sequential presentation groups. The interview responses also helped to explain why nonsignificant results were obtained when redundant and non-redundant verbal explanations with animation were presented simultaneously. In general, the results support previous research on the contiguity principle, suggesting that sequential presentations may lead to lower learning performance when animation and verbal explanation are closely related. The separation of the two types of information may increase cognitive load. In addition, the study found that the impairment caused by redundant text was also affected by various learning characteristics, such as the structure of the instructional content and learners' previous learning experiences. Recommendations for future study include: (a) research on the various conditions, such as characteristics of the content, characteristics of learners, and difficulty of the instructional material, that influence the effects of redundant text, and (b) research on how prior learning experience influences the effects of simultaneous redundant text presentations.
- Date Issued
- 2006
- Identifier
- CFE0000934, ucf:46723
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000934
- Title
- Visual-Textual Video Synopsis Generation.
- Creator
- Sharghi Karganroodi, Aidean, Shah, Mubarak, Da Vitoria Lobo, Niels, Rahnavard, Nazanin, Atia, George, University of Central Florida
- Abstract / Description
- In this dissertation we tackle the problem of automatic video summarization. Automatic summarization techniques enable faster browsing and indexing of large video databases. However, due to the inherent subjectivity of the task, no single video summarizer fits all users unless it adapts to individual users' needs. To address this issue, we introduce a fresh view on the task called "query-focused" extractive video summarization. We develop a supervised model that takes as input a video and a user's preference in the form of a query, and creates a summary video by selecting key shots from the original video. We model the problem as subset selection via a determinantal point process (DPP), a stochastic point process that assigns a probability value to each subset of any given set. Next, we develop a second model that exploits the capabilities of memory networks in the framework and concomitantly reduces the level of supervision required to train the model. To automatically evaluate system summaries, we contend that a good metric for video summarization should focus on the semantic information that humans can perceive rather than on visual features or temporal overlaps. To this end, we collect dense per-video-shot concept annotations, compile a new dataset, and suggest an efficient evaluation method defined upon the concept annotations. To enable better summarization of videos, we improve the sequential DPP in two ways. In terms of learning, we propose a large-margin algorithm to address the exposure bias that is common in many sequence-to-sequence learning methods. In terms of modeling, we integrate a new probabilistic distribution into SeqDPP; the resulting model accepts user input about the expected length of the summary. We conclude this dissertation by developing a framework to generate a textual synopsis for a video, thus enabling users to quickly browse a large video database without watching the videos.
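For readers unfamiliar with DPPs: an L-ensemble assigns each subset S a probability proportional to det(L_S), the principal minor of a kernel L, which suppresses subsets of mutually similar items. The brute-force sketch below, on a synthetic four-shot kernel, shows this diversity effect; the sequential, query-focused models of the dissertation are not reproduced.

```python
# Brute-force illustration of an L-ensemble DPP: P(S) proportional to
# det(L_S). Similar items have high kernel similarity, so subsets that
# contain both get low probability -- the property that makes DPPs
# useful for selecting non-redundant key shots. Synthetic kernel only.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(1)
features = rng.normal(size=(4, 3))            # 4 "shots", 3-d features
features[1] = features[0] + 0.01              # shots 0 and 1 nearly identical
L = features @ features.T + 1e-6 * np.eye(4)  # PSD similarity kernel

subsets = [s for r in range(5) for s in combinations(range(4), r)]
scores = {s: np.linalg.det(L[np.ix_(s, s)]) if s else 1.0 for s in subsets}
Z = sum(scores.values())                      # normalizer equals det(L + I)

for s in sorted(scores, key=scores.get, reverse=True)[:5]:
    print(s, scores[s] / Z)
# The redundant pair (0, 1) scores near zero; diverse subsets score high.
```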
- Date Issued
- 2019
- Identifier
- CFE0007862, ucf:52756
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007862
- Title
- A STUDY OF FACTORS CONTRIBUTING TO SELF-REPORTED ANOMALIES IN CIVIL AVIATION.
- Creator
- Andrzejczak, Chris, Karwowski, Waldemar, University of Central Florida
- Abstract / Description
- A study was conducted to investigate which factors lead pilots to submit voluntary anomaly reports regarding their flight performance. The study employed statistical methods, text mining, clustering, and dimensionality reduction techniques in an effort to determine relationships between factors and anomalies. A review of the literature was conducted to determine what factors contribute to these anomalous incidents, as well as what research exists on human error, its causes, and its management. Data from the NASA Aviation Safety Reporting System (ASRS) were analyzed using traditional statistical methods such as frequencies and multinomial logistic regression. Recently formalized approaches in text mining such as Knowledge Based Discovery (KBD) and Literature Based Discovery (LBD) were employed to create associations between factors and anomalies. These methods were also used to generate predictive models. Finally, advances in dimensionality reduction techniques identified concepts or keywords within records, thus creating a framework for an unsupervised document classification system. Findings from this study reinforced established views on contributing factors to civil aviation anomalies. New associations between previously unrelated factors and conditions were also found. Dimensionality reduction also demonstrated the possibility of identifying salient factors from unstructured text records and was able to classify these records using the identified features.
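One of the steps named above, mining free-text reports and fitting a multinomial logistic regression over anomaly categories, can be sketched as follows. The reports, labels, and category names below are invented placeholders, not ASRS data.

```python
# Hedged sketch of text mining + multinomial logistic regression on
# anomaly narratives. All narratives and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

reports = [
    "altitude deviation during climb due to distraction",
    "altitude bust after autopilot mode confusion",
    "runway incursion while taxiing in low visibility",
    "taxiway confusion led to crossing an active runway",
    "lost radio contact after frequency change",
    "missed handoff call due to congested frequency",
]
labels = ["altitude", "altitude", "ground", "ground", "comms", "comms"]

vec = TfidfVectorizer()
X = vec.fit_transform(reports)               # report-by-term TF-IDF matrix
model = LogisticRegression(max_iter=1000).fit(X, labels)

new = ["altitude deviation after autopilot distraction"]
print(model.predict(vec.transform(new)))     # predicted anomaly category
```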
- Date Issued
- 2010
- Identifier
- CFE0003463, ucf:48382
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003463
- Title
- HUMANIZING TECHNICAL COMMUNICATION WITH METAPHOR.
- Creator
- McClure, Ashley, Jones, Dan, University of Central Florida
- Abstract / Description
- This thesis explores how metaphors can humanize a technical document and more effectively facilitate user comprehension. The frequent use of metaphor in technical communication reminds us that the discipline is highly creative and rhetorical. Theory demonstrates that a technical text involves interpretation and subjectivity during both its creation by the technical communicator and its application by the user. If employed carefully and skillfully, metaphor can be a powerful tool to ensure users' needs are met during this process. The primary goal of technical communication is to convey information to an audience as clearly and efficiently as possible. Because of the often complex nature of technical content, users are likely to feel alienated, overwhelmed, or simply uninterested if the information presented seems exceedingly unfamiliar or complicated. If users experience any of these reactions, they are inclined to abandon the document, automatically rendering it unsuccessful. I identify metaphor as a means to curtail such an occurrence. Using examples from a variety of technical communication genres, I illustrate how metaphors can humanize a technical document by establishing a strong link between the document and its users.
- Date Issued
- 2009
- Identifier
- CFE0002948, ucf:47979
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002948
- Title
- MULTI-TOUCH FOR GENERAL-PURPOSE COMPUTING: AN EXAMINATION OF TEXT ENTRY.
- Creator
- Varcholik, Paul, Hughes, Charles, University of Central Florida
- Abstract / Description
- In recent years, multi-touch has been heralded as a revolution in human-computer interaction. Multi-touch provides features such as gestural interaction, tangible interfaces, pen-based computing, and interface customization features embraced by an increasingly tech-savvy public. However, multi-touch platforms have not been adopted as "everyday" computer interaction devices; that is, multi-touch has not been applied to general-purpose computing. The questions this thesis seeks to address are: Will the general public adopt these systems as their chief interaction paradigm? Can multi-touch provide such a compelling platform that it displaces the desktop mouse and keyboard? Is multi-touch truly the next revolution in human-computer interaction? As a first step toward answering these questions, we observe that general-purpose computing relies on text input, and ask: "Can multi-touch, without a text entry peripheral, provide a platform for efficient text entry? And, by extension, is such a platform viable for general-purpose computing?" We investigate these questions through four user studies that collected objective and subjective data for text entry and word processing tasks. The first of these studies establishes a benchmark for text entry performance on a multi-touch platform, across a variety of input modes. The second study attempts to improve this performance by examining an alternate input technique. The third and fourth studies include mouse-style interaction for formatting rich-text on a multi-touch platform, in the context of a word processing task. These studies establish a foundation for future efforts in general-purpose computing on a multi-touch platform. Furthermore, this work details deficiencies in tactile feedback with modern multi-touch platforms, and describes an exploration of audible feedback. Finally, the thesis conveys a vision for a general-purpose multi-touch platform, its design and rationale.
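The abstract does not define its performance measures, but text-entry studies conventionally report words per minute (treating five characters as one word) and a character-level error rate. A minimal sketch of those standard metrics, assuming that convention applies here:

```python
# Standard text-entry metrics as conventionally defined in the text-entry
# literature (not taken from this dissertation): WPM treats five
# characters as a word; the error rate is an edit-distance ratio
# against the presented phrase.

def wpm(transcribed: str, seconds: float) -> float:
    """Words per minute: (|T| - 1) * 60 / (5 * seconds)."""
    return (len(transcribed) - 1) * 60.0 / (5.0 * seconds)

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

presented = "the quick brown fox"
transcribed = "the quikc brown fox"
print(round(wpm(transcribed, 12.0), 1))                        # 18.0
print(edit_distance(presented, transcribed) / len(presented))  # error rate
```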
- Date Issued
- 2011
- Identifier
- CFE0003711, ucf:48798
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003711
- Title
- A study of holistic strategies for the recognition of characters in natural scene images.
- Creator
- Ali, Muhammad, Foroosh, Hassan, Hughes, Charles, Sukthankar, Gita, Wiegand, Rudolf, Yun, Hae-Bum, University of Central Florida
- Abstract / Description
- Recognition and understanding of text in scene images is an important and challenging task. The importance can be seen in the context of tasks such as assisted navigation for the blind and providing directions to driverless cars (e.g., the Google car). Other applications include automated document archival services, mining text from images, and so on. The challenge comes from a variety of factors, like variable typefaces, uncontrolled imaging conditions, and various sources of noise corrupting the captured images. In this work, we study and address the fundamental problem of recognition of characters extracted from natural scene images, and contribute three holistic strategies to deal with this challenging task. Scene text recognition (STR) has been a known problem in the computer vision and pattern recognition community for over two decades, and is still an active area of research owing to the fact that recognition performance still has a lot of room for improvement. Recognition of characters lies at the heart of STR and is a crucial component of a reliable STR system. Most current methods rely heavily on the discriminative power of local features, such as histograms of oriented gradients (HoG), the scale-invariant feature transform (SIFT), shape contexts (SC), geometric blur (GB), etc. One problem with such methods is that the local features are rasterized in an ad hoc manner to get a single vector for subsequent use in recognition. This rearrangement of features clearly perturbs the spatial correlations that may carry crucial information vis-à-vis recognition. Moreover, such approaches, in general, do not take into account the rotational-invariance property, which often leads to failed recognition in cases where characters in scene images do not occur in an upright position. To eliminate this local-feature dependency and the associated problems, we propose the following three holistic solutions. The first is based on modelling character images of a class as a 3-mode tensor and then factoring it into a set of rank-1 matrices and the associated mixing coefficients. Each set of rank-1 matrices spans the solution subspace of a specific image class and enables us to capture the required holistic signature for each character class along with the mixing coefficients associated with each character image. During recognition, we project each test image onto the candidate subspaces to derive its mixing coefficients, which are eventually used for final classification. The second approach we study in this work lets us form a novel holistic feature for character recognition based on the active contour model, also known as snakes. Our feature vector is based on two variables, direction and distance, cumulatively traversed by each point as the initial circular contour evolves under the force field induced by the character image. The initial contour design, in conjunction with a cross-correlation based similarity metric, enables us to account for rotational variance in the character image. Our third approach is based on modelling a 3-mode tensor via rotation of a single image. This differs from our tensor-based approach described above in that we form the tensor using a single image instead of collecting a specific number of samples of a particular class. In this case, to generate a 3D image cube, we rotate an image through a predefined range of angles. This enables us to explicitly capture rotational variance and leads to better performance than various local approaches. Finally, as an application, we use our holistic model to recognize word images extracted from natural scenes. Here we first use our novel word segmentation method based on image seam analysis to split a scene word into individual character images. We then apply our holistic model to recognize individual letters and use a spell-checker module to get the final word prediction. Throughout our work, we employ popular scene text datasets, like Chars74K-Font, Chars74K-Image, SVT, and ICDAR03, which include synthetic and natural image sets, to test the performance of our strategies. We compare the results of our recognition models with several baseline methods and show comparable or better performance than several local feature-based methods, thus justifying the importance of holistic strategies.
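A simplified stand-in for the subspace-projection idea in the first strategy: the sketch below builds one low-rank basis per character class with an SVD (rather than the dissertation's 3-mode tensor factorization) and classifies a test image by reconstruction error, on random placeholder data.

```python
# Simplified stand-in for the per-class subspace idea described above:
# build a low-rank basis per class from vectorized training images via
# SVD (not the dissertation's 3-mode tensor factorization), then assign
# a test image to the class whose subspace reconstructs it best.
# All data below are random placeholders for real character images.
import numpy as np

rng = np.random.default_rng(2)
classes, samples, pixels, rank = 3, 20, 64, 4
protos = rng.normal(size=(classes, pixels))        # one prototype per class

bases = []
for c in range(classes):
    coeff = rng.uniform(0.5, 1.5, size=samples)    # per-sample scaling
    X = np.outer(protos[c], coeff) + 0.1 * rng.normal(size=(pixels, samples))
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    bases.append(U[:, :rank])                      # class subspace basis

def classify(x: np.ndarray) -> int:
    """Pick the class whose subspace projection loses the least energy."""
    errs = [np.linalg.norm(x - U @ (U.T @ x)) for U in bases]
    return int(np.argmin(errs))

test = protos[1] + 0.1 * rng.normal(size=pixels)   # noisy sample of class 1
print(classify(test))                              # expected: 1
```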
- Date Issued
- 2016
- Identifier
- CFE0006247, ucf:51076
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006247
- Title
- DISCUSSION ON EFFECTIVE RESTORATION OF ORAL SPEECH USING VOICE CONVERSION TECHNIQUES BASED ON GAUSSIAN MIXTURE MODELING.
- Creator
- Alverio, Gustavo, Mikhael, Wasfy, University of Central Florida
- Abstract / Description
- Today's world consists of many ways to communicate information. One of the most effective ways to communicate is through the use of speech. Unfortunately, many lose the ability to converse, which in turn leads to a large negative psychological impact. In addition, skills such as lecturing and singing must now be restored via other methods. The use of text-to-speech synthesis has been a popular means of restoring the capability to use oral speech. Text-to-speech synthesizers convert text into speech. Although text-to-speech systems are useful, they only allow for a few default voice selections that do not represent the voice of the user. In order to achieve total restoration, voice conversion must be introduced. Voice conversion is a method that adjusts a source voice to sound like a target voice. Voice conversion consists of a training process and a converting process. The training process is conducted by composing a speech corpus to be spoken by both the source and target voices. The speech corpus should encompass a variety of speech sounds. Once training is finished, the conversion function is employed to transform the source voice into the target voice. Effectively, voice conversion allows a speaker to sound like any other person; therefore, voice conversion can be applied to alter the voice output of a text-to-speech system to produce the target voice. This thesis investigates how one approach, specifically voice conversion using Gaussian mixture modeling, can be applied to alter the voice output of a text-to-speech synthesis system. Researchers found that acceptable results can be obtained using these methods. Although voice conversion and text-to-speech synthesis are effective in restoring voice, a sample of the speaker's voice from before voice loss must be used during the training process. It is therefore vital that voice samples be recorded to combat voice loss.
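The classic GMM conversion function in the literature this thesis draws on fits a joint Gaussian mixture over time-aligned (source, target) features and maps a new source frame to the conditional expectation E[target | source]. The sketch below shows that mapping on synthetic one-dimensional features; it is a generic illustration, not the thesis's exact system.

```python
# Compact sketch of classic GMM-based voice conversion: fit a GMM on
# joint (source, target) feature vectors, then convert a new source
# frame to E[y | x], a posterior-weighted sum of per-component linear
# regressions. Features are synthetic 1-D stand-ins for real spectral
# parameters (e.g., MFCCs).
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
x = rng.normal(size=(500, 1))                          # source features
y = 0.8 * x + 0.5 + 0.05 * rng.normal(size=(500, 1))   # aligned target features

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(np.hstack([x, y]))                             # joint density p(x, y)

def convert(x_new: np.ndarray) -> np.ndarray:
    """Map a source frame to E[y | x]."""
    d = x.shape[1]
    mu_x, mu_y = gmm.means_[:, :d], gmm.means_[:, d:]
    S = gmm.covariances_
    Sxx, Syx = S[:, :d, :d], S[:, d:, :d]
    like = np.array([multivariate_normal.pdf(x_new, mu_x[k], Sxx[k])
                     for k in range(gmm.n_components)])
    post = gmm.weights_ * like
    post /= post.sum()                                 # p(component | x)
    preds = [mu_y[k] + Syx[k] @ np.linalg.solve(Sxx[k], x_new - mu_x[k])
             for k in range(gmm.n_components)]
    return sum(w * p for w, p in zip(post, preds))

print(convert(np.array([1.0])))                        # close to 0.8 * 1 + 0.5
```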
- Date Issued
- 2007
- Identifier
- CFE0001793, ucf:47286
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001793
- Title
- RECREATIONAL TECHNOLOGY AND ITS IMPACT ON THE LEARNING DEVELOPMENT OF CHILDREN AGES 4-8: A META-ANALYSIS FOR THE 21ST CENTURY CLASSROOM.
- Creator
- Templeton, Joey, Dombrowski, Paul, University of Central Florida
- Abstract / Description
- This research focuses on technology (specifically video games and interactive software games) and its effects on the cognitive development of children ages 4-8. The research will be conducted as a meta-analysis combining research and theory in order to determine whether the educational approach to this age group needs to change or adapt to learners who have been affected by this technology. I will focus on both the physical and mental aspects of their development and present a comprehensive review of current educational theory and practice. By examining current curriculum goals and cross-referencing them with research conducted in fields other than education (e.g., technology, child development, and media literacy), I hope to demonstrate a need for change and, at the end of my research, be able to make recommendations for curriculum adaptations that will work within the current educational structure. These recommendations will be made with respect to budget and time constraints.
- Date Issued
- 2007
- Identifier
- CFE0001970, ucf:47458
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001970
- Title
- FROM TEXTBOOKS TO SAFETY BRIEFINGS: HELPING TECHNICAL WRITERS NEGOTIATE COMPLEX RHETORICAL SITUATIONS.
- Creator
- Blackburne, Brian, Bowdon, Melody, University of Central Florida
- Abstract / Description
- In this dissertation, I analyze the organizational and political constraints that technical writers encounter when dealing with complex rhetorical situations, particularly within risk-management discourse. I ground my research in case studies of safety briefings that airlines provide to their passengers because these important documents have long been regarded as ineffective, yet they've gone largely unchanged in the last 20 years. Airlines are required to produce these safety briefings, which must satisfy multiple audiences, such as corporate executives, federal safety inspectors, flight attendants, and passengers. Because space and time are limited when presenting safety information to passengers, the technical writers must negotiate constraints related to issues such as format, budget, audience education and language, passenger perceptions/fears, reproducibility, and corporate image/branding to name a few. The writers have to negotiate these constraints while presenting important (and potentially alarming) information in a way that's as informative, realistic, and tasteful as possible. But such constraints aren't unique to the airline industry. Once they enter the profession, many writing students will experience complex rhetorical situations that constrain their abilities to produce effective documentation; therefore, I am looking at the theories and skills that we're teaching our future technical communicators for coping with such situations. By applying writing-style and visual-cultural analyses to a set of documents, I demonstrate a methodology for analyzing complex rhetorical situations. I conclude by proposing a pedagogy that teachers of technical communication can employ for helping students assess and work within complex rhetorical situations, and I offer suggestions for implementing such practices in the classroom.
- Date Issued
- 2008
- Identifier
- CFE0002465, ucf:47729
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002465
- Title
- TEXT COMPLEXITY AND CLOSE READING: TEACHERS' PERCEPTIONS OF THE LANGUAGE ARTS FLORIDA STANDARDS AND CURRICULUM IMPLEMENTATION.
- Creator
-
Diaz-Davila, Clare, Wenzel, Taylar, University of Central Florida
- Abstract / Description
-
The Florida Department of Education revised the Common Core State Standards into what are now known as the Florida Standards in February 2014, approving 99 revisions to the original standards that were adopted in 2010 (Dunkelberger, 2014). The purpose of this research was to identify current teachers' attitudes towards the new Language Arts Florida Standards (LAFS), specifically regarding teachers' perceptions of text complexity and close reading as enacted in the reading curriculum. Additionally, this study attempted to identify how teachers' attitudes impact their implementation of the new standards. The research used a self-administered survey to collect teacher perceptions of the LAFS in six categories. The sample comprised 21 practicing teachers from the Central Florida area. The survey revealed that, although teachers do not necessarily dislike the construction of the standards, they feel that they are not knowledgeable in some integral areas of the LAFS, such as text complexity and close reading. The implications of the results are discussed, and some improvements for the future of the LAFS are suggested.
- Date Issued
- 2014
- Identifier
- CFH0004682, ucf:45281
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004682
- Title
- THE EFFECT OF TEXT MESSAGING ALERTS UPON TESTICULAR SELF-EXAMINATION (TSE) ADHERENCE.
- Creator
-
Soler, Lisa, Rovito, Michael, University of Central Florida
- Abstract / Description
-
Based on Kim Witte's well-established Extended Parallel Process Model, a mobile communication system was developed in which men were sent reminders about their health. This study focused on reminding men about testicular self-examination (TSE), a proactive behavior used to detect testicular cancer, through the use of text messaging. A cohort of 75 men was recruited for this study and placed into one of four groups. All participants were provided with information concerning TSE and told to perform the exam monthly; two of the four groups were sent reminders via text message, while the other two groups were told about the behavior only once. An original 30-item survey was used to measure intention. Proper data analysis could not be performed due to an attrition rate of 71%. Nonetheless, a significant relationship was observed between pre- and post-test adherence as reported by the participants. In addition, the measurement tool was assessed and determined to be useful in measuring intention to perform TSE. Internal consistency measures were reported as 0.672 and 0.626, both of which would likely have been higher with a larger sample size. While further research and analysis are recommended, this study has laid a foundation for a way to communicate with young men about their health.
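The internal consistency values reported above (0.672 and 0.626) are the kind conventionally computed as Cronbach's alpha, though the abstract does not name the statistic. As a hedged illustration of how such a coefficient is derived from item-level survey responses (the function, data, and seed below are hypothetical, not drawn from the study):

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Cronbach's alpha for a (respondents x items) score matrix."""
        k = scores.shape[1]                          # number of survey items
        item_var = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
        total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' totals
        return (k / (k - 1)) * (1 - item_var / total_var)

    # Hypothetical example: 75 respondents answering a 30-item intention survey
    # on a 1-5 scale, mirroring the cohort and instrument sizes described above.
    rng = np.random.default_rng(0)
    responses = rng.integers(1, 6, size=(75, 30)).astype(float)
    print(cronbach_alpha(responses))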
- Date Issued
- 2012
- Identifier
- CFH0004320, ucf:45058
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004320
- Title
- The utility of verbal display redundancy in managing pilot's cognitive load during controller-pilot voice communications.
- Creator
-
Kratchounova, Daniela, Jentsch, Florian, Mouloua, Mustapha, Hancock, Peter, Wise, John, University of Central Florida
- Abstract / Description
-
Miscommunication between controllers and pilots, potentially resulting from a high pilot cognitive load, has been a causal or contributing factor in a large number of aviation accidents. In this context, failure to communicate can be attributed, among other factors, to an inadequate human-system interface design, the related high cognitive load imposed on the pilot, and poor performance reflected by a higher error rate. To date, voice radio remains in service without any means for managing pilot cognitive load by design (as opposed to training or procedures). Such an oversight is what prompted this dissertation. The goals of this study were (a) to investigate the utility of a voice-to-text transcription (V-T-T) of ATC clearances in managing pilots' cognitive load during controller-pilot communications within the context of a modern flight deck environment, and (b) to validate whether, and to what extent, a model of variable relationships generated in the domain of learning and instruction would "transfer" to an operational domain. First, within the theoretical framework built for this dissertation, all the pertinent factors were analyzed. Second, using the process of synthesis, and based on guidelines generated from that theoretical framework, a redundant verbal display of ATC clearances (i.e., a V-T-T) was constructed. Third, the synthesized device was empirically examined. Thirty-four pilots participated in the study: seventeen with 100-250 total flight hours and seventeen with more than 500 total flight hours. All participants had flown within sixty days prior to attending the study. The experiment was conducted one pilot at a time in 2.5-hour blocks. A 2 (Verbal Display Redundancy: no-redundancy vs. redundancy) x 2 (Verbal Input Complexity: low vs. high) x 2 (Level of Expertise: novices vs. experts) mixed-model design was used, with 5 IFR clearances in each Redundancy x Complexity condition. The results showed that the reductions in cognitive load and improvements in performance when verbal display redundancy was provided were in the range of about 20%. These results indicated that the V-T-T is a device with tremendous potential to serve as (a) a pilot memory aid, (b) a way to verify that a clearance has been captured correctly without having to make a "Say again" call, and (c) a means to ultimately improve the margin of safety by reducing the propensity for human error for the majority of pilot populations, including those with English as a second language. Fourth, the results from the validation of the theoretical models' "transfer" showed that although cognitive load remained a significant predictor of performance, both complexity and redundancy also had unique significant effects on performance. Furthermore, these results indicated that the relationship between these variables was not as "clear-cut" in the operational domain investigated here as the models from the domain of learning and instruction suggested.
Until further research is conducted (a) to investigate how changes in the operational task settings through additional coding (e.g., a permanent record of clearances, which can serve as both a memory aid and a way to verify that a clearance has been captured correctly) affect performance through mechanisms other than cognitive load, and (b) until the theoretical models are modified to reflect how changes in the input variables impact the outcome in a variety of ways, a degree of prudence should be exercised when the results from the model "transfer" validation are applied to operational environments similar to the one investigated in this dissertation research.
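As a sketch of how a design of this shape (two within-pilot factors, one between-pilot factor, repeated clearances per pilot) might be analyzed, the following fits a mixed linear model with a random intercept per pilot. This is one reasonable analysis for such data, not necessarily the one used in the dissertation, and the file and column names are assumptions:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical layout: one row per pilot x clearance trial, with columns
    # pilot (id), redundancy and complexity (within-pilot factors),
    # expertise (between-pilot factor), and load (cognitive-load outcome).
    df = pd.read_csv("clearance_trials.csv")

    # Fixed effects for the full 2 x 2 x 2 design; the random intercept per
    # pilot accounts for repeated measures on the same participant.
    model = smf.mixedlm("load ~ redundancy * complexity * expertise",
                        data=df, groups=df["pilot"])
    print(model.fit().summary())

A repeated-measures ANOVA is an equally common choice for a fully crossed 2 x 2 x 2 design; the mixed model is shown here because it tolerates missing trials.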
- Date Issued
- 2012
- Identifier
- CFE0004251, ucf:49504
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004251
- Title
- PREDICTING THE PERFORMANCE OF INTERPRETING INSTRUCTION BASED ON DIGITAL PROPENSITY INDEX SCORE IN TEXT AND GRAPHIC FORMATS.
- Creator
-
Norman, David, Hirumi, Atsusi, University of Central Florida
- Abstract / Description
-
Practitioners have proposed that Digital Natives prefer graphics while Digital Immigrants prefer text. While instructional design has been extensively studied, the impact of graphical emphasis in instructional designs as it relates to digital propensity has not been widely explored. Specifically, this study examined the performance of students when presented with text-only and graphic-only instructional formats. The purpose of this study was to test the relationship between individuals' Digital Propensity Index (DPI) scores and their performance when interpreting online instruction. A sample of students from a large metropolitan university received the Digital Propensity Index questionnaire, a measure of an individual's time spent interacting with digital media. Each student was randomly assigned varying formats of a computer-based instructional unit via a public survey. The instructional unit consisted of the DPI questionnaire and six tasks related to the Central Florida commuter rail system. Participants answered the DPI questionnaire on a website by clicking a link in an emailed invitation. Following the DPI questionnaire, participants were randomly assigned to one of two groups. Group One saw three instructional tasks presented in text and shuffled into random order, each task displayed on its own webpage; submitting an answer to a task advanced the participant to the next task. Group Two saw the graphic tasks first, again shuffled into random order. After the first three tasks, the groups swapped instructional formats to view the opposing group's initial questions. Participants were timed on how many seconds they spent reviewing each task, and each task included an assessment question to evaluate the learning outcomes of the instructional unit. Finally, each participant's DPI score was matched with the time spent viewing each presentation format. The findings indicate that DPI score was a statistically significant predictor of time spent navigating each type of instruction. Though the link between DPI score and time spent navigating instruction was statistically significant, the actual measurable time difference between navigating text and graphic formats was only a fraction of a second for each increment in DPI score. Limitations and potential future research related to the study are discussed as well.
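To make the reported prediction concrete, a hedged sketch of regressing time-on-task on DPI score follows, with presentation format as a second predictor; all file and column names are hypothetical rather than taken from the study:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical layout: one row per participant x task, with columns
    # seconds (time spent on the task), dpi_score, and fmt ("text" or "graphic").
    df = pd.read_csv("dpi_tasks.csv")

    fit = smf.ols("seconds ~ dpi_score * fmt", data=df).fit()
    print(fit.summary())
    # The dpi_score coefficient estimates the change in seconds per one-point
    # increase in DPI score; the dpi_score:fmt interaction estimates how that
    # effect differs between text and graphic formats, which the findings
    # describe as only a fraction of a second per increment.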
- Date Issued
- 2008
- Identifier
- CFE0002234, ucf:47896
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002234