Current Search: Architecture
- Title
- PLACE, SPACE, AND FORM CAPTURED THROUGH PHOTOGRAPHIC MEDITATION.
- Creator
-
Stead, Sarah, Robinson, Elizabeth Brady, University of Central Florida
- Abstract / Description
-
Inspired by Buddhist philosophy, the photographic series Architectural Zen attempts to beautify banal and pragmatic architecture through limited, preexisting artificial light conditions. The selective illumination of artificial light eliminates the non-essential details and enhances the pure forms and saturated color presented by the camera lens. This encourages the photographer and the viewer to enter a state of meditation. The resulting process is similar to a Zen approach to image making. The ancient Zen artist's compositions are strengthened by a meditation on form and the subsequent elimination of the non-essential elements of the subject. Through embracing this Zen mentality and mindfulness, aspects of Eastern aesthetics and balance also appear throughout the work. The warm glow of artificial lights, long recessed shadows, and surreal colors contribute to the feeling of rest, contemplation, isolation, and solitude. Although the work in Architectural Zen is not directly about Buddhist doctrines, the process of creating the art parallels the ideas and practices of Zen Buddhism and meditation, finding the Buddha nature of typically unappealing architectural forms during a different time of day.
- Date Issued
- 2010
- Identifier
- CFE0003092, ucf:48292
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003092
- Title
- Limitations of Micro and Macro Solutions to the Simulation Interoperability Challenge: An EASE Case Study.
- Creator
-
Barry, John, Proctor, Michael, Wiegand, Rudolf, Allen, Gary, University of Central Florida
- Abstract / Description
-
This thesis explored the history of military simulations and linked it to the current challenges of interoperability. The research illustrated the challenge of interoperability in integrating different networks, databases, standards, and interfaces, and how it results in U.S. Army organizations constantly spending time and money to create and implement irreproducible Live, Virtual, and Constructive (LVC) integrating architectures to accomplish comparable tasks. Although the U.S. Army has made advancements in interoperability, it has struggled with this challenge since the early 1990s. These improvements have been inadequate due to the evolving and growing needs of the user, coupled with the technical complexities of interoperating legacy systems with emergent systems arising from advances in technology. To better understand the impact of the continued evolution of simulations, this paper mapped Maslow's Hierarchy of Needs to Tolk's Levels of Conceptual Interoperability Model (LCIM). This mapping illustrated a common relationship between the Hierarchy of Needs and the LCIM model: each level increases in complexity, and the preceding lower level must first be achieved before the next can be reached. Understanding this continuum of complexity of interoperability, as requirements or needs, helped to determine why previous funding and technical efforts have been inadequate in mitigating the interoperability challenges within U.S. Army simulations. As the U.S. Army's simulation programs continue to evolve while the military and contractor personnel turnover rate remains near constant, a method of capturing and passing on tacit knowledge from one personnel staffing life cycle to the next must be developed in order to economically and quickly reproduce complex simulation events. This thesis explored a potential solution to this challenge: the Executable Architecture Systems Engineering (EASE) research project managed by the U.S. Army's Simulation and Training Technology Center in the Army Research Laboratory within the Research, Development and Engineering Command. However, there are two main drawbacks to EASE: it is still in the prototype stage, and it has not been fully tested and evaluated as a simulation tool within the community of practice. In order to determine whether EASE has the potential to reduce micro as well as macro interoperability challenges, an EASE experiment was conducted as part of this thesis. The following three alternative hypotheses were developed, tested, and accepted as a result of the research for this thesis: Ha1 = Expert stakeholders believe the EASE prototype does have potential as a U.S. Army technical solution to help mitigate the M&S interoperability challenge. Ha2 = Expert stakeholders believe the EASE prototype does have potential as a U.S. Army managerial solution to help mitigate the M&S interoperability challenge. Ha3 = Expert stakeholders believe the EASE prototype does have potential as a U.S. Army knowledge management solution to help mitigate the M&S interoperability challenge. To conduct this experiment, eleven participants representing ten different organizations across the three M&S domains were selected to test EASE using a modified Technology Acceptance Model (TAM) approach developed by Davis. Indexes were created from the participants' responses, covering both the quality of participants and the research questions. The Cronbach Alpha Test was used to assess the reliability of the adapted TAM. The Wilcoxon Signed Rank test provided the statistical analysis that formed the basis of the research, which determined that the EASE project has the potential to help mitigate the interoperability challenges in the U.S. Army's M&S domains.
- Date Issued
- 2013
- Identifier
- CFE0005084, ucf:50740
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005084
- Title
- ARCHITECTURAL SUPPORT FOR IMPROVING SYSTEM HARDWARE/SOFTWARE RELIABILITY.
- Creator
-
Dimitrov, Martin, Zhou, Huiyang, University of Central Florida
- Abstract / Description
-
It is a great challenge to build reliable computer systems with unreliable hardware and buggy software. On one hand, software bugs account for as much as 40% of system failures and incur a high cost, an estimated $59.5B a year, on the US economy. On the other hand, under the current trends of technology scaling, transient faults (also known as soft errors) in the underlying hardware are predicted to grow at least in proportion to the number of devices being integrated, which further exacerbates the problem of system reliability. We propose several methods to improve system reliability, both in terms of detecting and correcting soft errors and in facilitating software debugging. In our first approach, we detect instruction-level anomalies during program execution. The anomalies can be used to detect and repair soft errors, or can be reported to the programmer to aid software debugging. In our second approach, we improve anomaly detection for software debugging by detecting different types of anomalies as well as by removing false positives. While the anomalies reported by our first two methods are helpful in debugging single-threaded programs, they do not address concurrency bugs in multi-threaded programs. In our third approach, we propose a new debugging primitive which exposes the non-deterministic behavior of parallel programs and facilitates the debugging process. Our idea is to generate a time-ordered trace of events such as function calls/returns and memory accesses in different threads. In our experience, exposing the time-ordered event information to the programmer is highly beneficial for reasoning about the root causes of concurrency bugs.
- Date Issued
- 2010
- Identifier
- CFE0002975, ucf:47941
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002975
- Title
- IMPROVING BRANCH PREDICTION ACCURACY VIA EFFECTIVE SOURCE INFORMATION AND PREDICTION ALGORITHMS.
- Creator
-
GAO, HONGLIANG, ZHOU, HUIYANG, University of Central Florida
- Abstract / Description
-
Modern superscalar processors rely on branch predictors to sustain a high instruction fetch throughput. Given the trend of deep pipelines and large instruction windows, a branch misprediction incurs a large performance penalty and results in a significant amount of energy wasted by the instructions along wrong paths. Because of their critical role in high-performance processors, branch predictors have been the subject of extensive research aimed at improving prediction accuracy. Conceptually, a dynamic branch prediction scheme includes three major components: a source, an information processor, and a predictor. Traditional work mainly focuses on the algorithm for the predictor. In this dissertation, besides novel prediction algorithms, we investigate the other components and develop untraditional ways to improve prediction accuracy. First, we propose an adaptive information processing method to dynamically extract the most effective inputs to maximize the correlation to be exploited by the predictor. Second, we propose a new prediction algorithm, which improves the Prediction by Partial Matching (PPM) algorithm by selectively combining multiple partial matches. The PPM algorithm was previously considered optimal and has been used to derive the upper limit of branch prediction accuracy. Our proposed algorithm achieves higher prediction accuracy than PPM and can be implemented within a realistic hardware budget. Third, we discover a new locality existing between the addresses of producer loads and the outcomes of their consumer branches. We study this address-branch correlation in detail and propose a branch predictor that exploits this correlation for long-latency and hard-to-predict branches, which existing branch predictors fail to predict accurately.
- Date Issued
- 2008
- Identifier
- CFE0002283, ucf:47877
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002283
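The PPM idea named in the abstract above (try the longest matching history context first, fall back to shorter ones on a miss) can be sketched in a few lines. This is a toy table-based illustration of the PPM principle, not the dissertation's hardware design; the string-encoded history and `max_order` parameter are assumptions made for the sketch:

```python
from collections import defaultdict

def ppm_predict(history, max_order=4):
    """Predict the next branch outcome ('T' taken / 'N' not taken)
    from a global history string, PPM-style: consult the longest
    matching context first, falling back to shorter contexts."""
    counts = defaultdict(lambda: {"T": 0, "N": 0})
    # Build context -> outcome counts from the observed history.
    for order in range(max_order, 0, -1):
        for i in range(order, len(history)):
            ctx = history[i - order:i]
            counts[ctx][history[i]] += 1
    # Longest partial match wins.
    for order in range(max_order, 0, -1):
        ctx = history[-order:]
        c = counts.get(ctx)
        if c and (c["T"] or c["N"]):
            return "T" if c["T"] >= c["N"] else "N"
    return "T"  # default prediction with no matching context

# A strictly alternating branch is perfectly predictable from context.
print(ppm_predict("TNTNTNTN"))  # prints T
```

A real predictor would of course use saturating counters and fixed-size tables rather than unbounded dictionaries; the fallback-by-order structure is the part the abstract's "partial matching" refers to.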
- Title
- Evaluation of an Early Classic Round Structure at Santa Rita Corozal, Belize.
- Creator
-
Kangas, Rachael, Chase, Arlen, Chase, Diane, Barber, Sarah, University of Central Florida
- Abstract / Description
-
Round structures in the Maya area are an architectural form that is not well understood, in part due to the relatively few examples recovered through archaeological excavations. The site of Santa Rita Corozal, Belize offers one of the few examples of an Early Classic Period round structure (Structure 135) in the Maya region, one that is distinctive in its timing and architectural form. This thesis compares Structure 135 with the patterns of round structures identified in the Preclassic and Terminal/Early Postclassic Periods, when there are comparatively more examples, and pinpoints the multiple construction periods evidenced in the excavations to define the changes to the structure over time. Based on this research, Structure 135 at Santa Rita Corozal does not clearly conform to earlier or later patterns of round structures in the Maya region, and its use before abandonment and eventual transformation to a rectilinear shape was shorter than previously thought. This research also offers insights into the need for contextual analysis of ceramics and the difficulties of assuming context through the use of construction fill, even with a clear cultural formation process.
- Date Issued
- 2015
- Identifier
- CFE0005962, ucf:50798
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005962
- Title
- Leveraging the Intrinsic Switching Behaviors of Spintronic Devices for Digital and Neuromorphic Circuits.
- Creator
-
Pyle, Steven, DeMara, Ronald, Vosoughi, Azadeh, Chanda, Debashis, University of Central Florida
- Abstract / Description
-
With semiconductor technology scaling approaching atomic limits, novel approaches utilizing new memory and computation elements are sought in order to realize increased density, enhanced functionality, and new computational paradigms. Spintronic devices offer intriguing avenues to improve digital circuits by leveraging non-volatility to reduce static power dissipation and vertical integration to increase density. Novel hybrid spintronic-CMOS digital circuits are developed herein that illustrate enhanced functionality at reduced static power consumption and area cost. The developed spin-CMOS D Flip-Flop offers improved power-gating strategies by achieving instant store/restore capabilities while using 10 fewer transistors than typical CMOS-only implementations. The spin-CMOS Muller C-Element developed herein improves asynchronous pipelines by reducing area overhead while adding enhanced functionality such as instant data store/restore and delay-element-free bundled-data asynchronous pipelines. Spintronic devices also provide improved scaling for neuromorphic circuits by enabling compact, low-power neuron and non-volatile synapse implementations, while enabling new neuromorphic paradigms that leverage the stochastic behavior of spintronic devices to realize stochastic spiking neurons, which are more akin to biological neurons and commensurate with theories from computational neuroscience and probabilistic learning rules. Spintronic-based Probabilistic Activation Function circuits are utilized herein to provide a compact and low-power neuron for Binarized Neural Networks. Two implementations of stochastic spiking neurons with alternative speed, power, and area benefits are realized. Finally, a comprehensive neuromorphic architecture comprising stochastic spiking neurons, low-precision synapses with Probabilistic Hebbian Plasticity, and a novel non-volatile homeostasis mechanism is realized for subthreshold ultra-low-power unsupervised learning with robustness to process variations. Along with several case studies, implications for future spintronic digital and neuromorphic circuits are presented.
- Date Issued
- 2019
- Identifier
- CFE0007514, ucf:52658
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007514
- Title
- USING THE SOFTWARE ADAPTER TO CONNECT LEGACY SIMULATION MODELS TO THE RTI.
- Creator
-
Rachapalli, Deepak Kumar, Rabelo, Luis, University of Central Florida
- Abstract / Description
-
The establishment of a network of persistent shared simulations depends on the presence of a robust standard for communicating state information between those simulations. The High Level Architecture (HLA) can serve as the basis for such a standard. While the HLA is an architecture, not software, use of Run Time Infrastructure (RTI) software is required to support the operations of a federation execution. The integration of the RTI with existing simulation models is complex and requires considerable expertise. This thesis implements a less complex yet effective interaction between a legacy simulation model and the RTI using a middleware tool known as the Distributed Manufacturing Simulation (DMS) adapter. The Shuttle Model, an Arena-based discrete-event simulation model for shuttle operations, is connected to the RTI using the DMS adapter. The adapter provides a set of functions that are incorporated within the Shuttle Model, in a procedural manner, in order to connect to the RTI. This thesis presents the procedure by which the Shuttle Model connects to the RTI to communicate with the Scrub Model for approval of its shuttle's launch.
- Date Issued
- 2006
- Identifier
- CFE0000922, ucf:46764
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000922
- Title
- A COMMON COMPONENT-BASED SOFTWARE ARCHITECTURE FOR MILITARY AND COMMERCIAL PC-BASED VIRTUAL SIMULATION.
- Creator
-
Lewis, Joshua, Proctor, Michael, University of Central Florida
- Abstract / Description
-
Commercially available military-themed virtual simulations have been developed and sold for entertainment since the beginning of the personal computing era. There exists an intense interest by various branches of the military to leverage the technological advances of the personal computing and video game industries to provide low-cost military training. By nature of the content of commercial military-themed virtual simulations, a large overlap has grown between the interests, resources, standards, and technology of the computer entertainment industry and military training branches. This research attempts to identify these commonalities with the purpose of systematically designing and evaluating a common component-based software architecture that could be used to implement a framework for developing content for both commercial and military virtual simulation software applications.
- Date Issued
- 2006
- Identifier
- CFE0001268, ucf:46893
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001268
- Title
- Measuring the evolving Internet ecosystem with exchange points.
- Creator
-
Ahmad, Mohammad Zubair, Guha, Ratan, Bassiouni, Mostafa, Chatterjee, Mainak, Jha, Sumit, Goldiez, Brian, University of Central Florida
- Abstract / Description
-
The Internet ecosystem, comprising thousands of Autonomous Systems (ASes), now includes Internet eXchange Points (IXPs) as another critical component of the infrastructure. Peering plays a significant part in driving the economic growth of ASes and is contributing to a variety of structural changes in the Internet. IXPs are a primary component of this peering ecosystem and are playing an increasing role not only in the topology evolution of the Internet but also in inter-domain path routing. In this dissertation we study and analyze the overall effects of peering and IXP infrastructure on the Internet. We observe that IXP peering is enabling a quicker flattening of the Internet topology and leading to over-utilization of popular inter-AS links. Indiscriminate peering at these locations is leading to higher end-to-end path latencies for ASes peering at an exchange point, an effect magnified at the most popular worldwide IXPs. We first study the effects of recently discovered IXP links on inter-AS routes using graph-based approaches and find that they point towards the changing and flattening landscape of the Internet's topology evolution. We then study further IXP effects by using measurements to investigate the network benefits of peering. We propose and implement a measurement framework which identifies default paths through IXPs and compares them with alternate paths isolating the IXP hop. Our system is running, recording default and alternate path latencies, and publicly available. We model the probability of an alternate path performing better than a default path through an IXP by identifying the underlying factors influencing end-to-end path latency. Our first-of-its-kind modeling study, which uses a combination of statistical and machine learning approaches, shows that path latencies depend on the popularity of the particular IXP, the size of the provider ASes of the networks peering at common locations, and the relative position of the IXP hop along the path. An in-depth comparison of end-to-end path latencies reveals a significant percentage of alternate paths outperforming the default route through an IXP. This characteristic of higher path latencies is magnified at the popular continental exchanges, as measured by us in a case study of the largest regional IXPs. We continue by studying another effect of peering which has numerous applications in overlay routing: Triangle Inequality Violations (TIVs). These TIVs in the Internet delay space are created due to peering, and we compare their essential characteristics with overlay paths such as detour routes. They are identified and analyzed from existing measurement datasets, but on a scale not carried out earlier. This implementation exhibits the effectiveness of GPUs in analyzing big data sets, while the TIVs studied show that a set of common inter-AS links create these TIVs. This result provides new insight into the development of TIVs by analyzing a very large data set using GPGPUs. Overall, our work presents numerous insights into the inner workings of the Internet's peering ecosystem. Our measurements show the effects of exchange points on the evolving Internet and exhibit their importance to Internet routing.
- Date Issued
- 2013
- Identifier
- CFE0004802, ucf:49744
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004802
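A Triangle Inequality Violation of the kind the dissertation above analyzes exists whenever a detour through a third node is faster than the direct path. A brute-force check over a small, hypothetical latency matrix (the actual study runs at GPU scale over measurement datasets) might look like:

```python
from itertools import permutations

def count_tivs(delay):
    """Count Triangle Inequality Violations in a symmetric delay
    matrix: ordered triples (a, b, c) where the direct path a->b is
    slower than the detour a->c->b."""
    n = len(delay)
    tivs = [(a, b, c) for a, b, c in permutations(range(n), 3)
            if delay[a][b] > delay[a][c] + delay[c][b]]
    return tivs

# Hypothetical 3-node latency matrix (ms): the direct 0-1 link (50 ms)
# is slower than detouring through node 2 (10 + 10 = 20 ms),
# so both ordered triples (0,1,2) and (1,0,2) violate the inequality.
delay = [[0, 50, 10],
         [50, 0, 10],
         [10, 10, 0]]
print(len(count_tivs(delay)))  # prints 2
```

The O(n^3) enumeration here is exactly what makes GPU parallelism attractive at Internet scale, since every triple can be tested independently.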
- Title
- Do Changes in Muscle Architecture Effect Post-Activation Potentiation.
- Creator
-
Reardon, Danielle, Hoffman, Jay, Fragala, Maren, Stout, Jeffrey, Fukuda, David, University of Central Florida
- Abstract / Description
-
Purpose: To examine the effect of three muscle potentiation protocols on changes in muscle architecture and the subsequent effect on jump power performance. Methods: Maximal (1RM) squat strength (mean ± SD = 178.3 ± 36.6 kg), vertical jump power, and muscle architecture were obtained in 12 resistance-trained men (25.2 ± 3.6 y; 90.67 ± 12.7 kg). Participants randomly completed three squatting protocols at 75% (3 x 10 reps), 90% (3 x 3 reps), or 100% (1 x 1) of their 1RM, or no workout (CON), with each protocol separated by one week. During each testing session, ultrasound and vertical jump testing were assessed at baseline (BL), 8 min post (8P), and 20 min post (20P) workout. Ultrasound measures of the rectus femoris (RF) and vastus lateralis (VL) muscles included cross-sectional area (CSA) and pennation angle (PNG). Following each ultrasound, peak (PVJP) and mean (MVJP) vertical jump power (using hands for maximum jump height) were measured using an accelerometer. Results: Magnitude-based inference analysis indicated that, in comparison to CON, 75% resulted in a likely greater change in RF-CSA and VL-CSA (BL-8P and BL-20P), 90% resulted in a likely greater RF-CSA and VL-CSA (BL-20P), and 100% resulted in a very likely or likely decrease in VL-PNG at BL-8P and BL-20P, respectively. Meanwhile, changes in PVJP and MVJP for the 75% trial were likely decreased at BL-8P and BL-20P, and for the 90% trial MVJP was likely decreased at BL-8P and BL-20P. Analysis of the magnitude of the relationships indicated a likely negative relationship between VL-PNG and MVJP (r = -0.35; p < 0.018) at BL-8P, while at BL-20P a negative relationship was observed between PVJP and RF-CSA (r = -0.37; p < 0.014). Conclusion: Acute increases in muscle size and acute decreases in pennation angle did not result in any potentiation of vertical jump power measures. Although the inverse relationships observed between muscle architecture variables and power suggest a potential effect, the change in position (i.e., movement from standing to supine for ultrasound measures) may negate, as a result of potential fluid shifts or muscle relaxation, the potentiating effects of the exercise. It is also possible that the fatiguing nature of the squat protocols in trained but not competitive participants may have contributed to the results.
- Date Issued
- 2013
- Identifier
- CFE0005048, ucf:49963
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005048
- Title
- OPTIMIZING DYNAMIC LOGIC REALIZATIONS FOR PARTIAL RECONFIGURATION OF FIELD PROGRAMMABLE GATE ARRAYS.
- Creator
-
Parris, Matthew, DeMara, Ronald, University of Central Florida
- Abstract / Description
-
Many digital logic applications can take advantage of the reconfiguration capability of Field Programmable Gate Arrays (FPGAs) to dynamically patch design flaws, recover from faults, or time-multiplex between functions. Partial reconfiguration is the process by which a user modifies one or more modules residing on the FPGA device independently of the others. Partial reconfiguration reduces the granularity of reconfiguration to a set of columns or a rectangular region of the device. Decreasing the granularity of reconfiguration results in reduced configuration file sizes and, thus, reduced configuration times. Compared to the single bitstream of a non-partial-reconfiguration implementation, smaller modules with smaller bitstream file sizes allow an FPGA to implement many more hardware configurations with greater speed under similar storage requirements. To realize the benefits of partial reconfiguration in a wider range of applications, this thesis begins with a survey of FPGA fault-handling methods, which are compared using performance-based metrics. Performance of the Genetic Algorithm (GA) Offline Recovery method is investigated, and candidate solutions provided by the GA are partitioned by age to improve its efficiency. Parameters of this aging technique are optimized to increase the occurrence rate of complete repairs. Continuing the discussion of partial reconfiguration, the thesis develops a case-study application that implements one partial reconfiguration module to demonstrate the functionality and benefits of time multiplexing and reveal the improved efficiencies of the latest large-capacity FPGA architectures. The number of active partial reconfiguration modules implemented on a single FPGA device is then increased from one to eight to implement a dynamic video-processing architecture for Discrete Cosine Transform and Motion Estimation functions, demonstrating a 55-fold reduction in bitstream storage requirements and thus improving partial reconfiguration capability.
- Date Issued
- 2008
- Identifier
- CFE0002323, ucf:47793
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002323
- Title
- Automated Synthesis of Memristor Crossbar Networks.
- Creator
-
Chakraborty, Dwaipayan, Jha, Sumit Kumar, Leavens, Gary, Ewetz, Rickard, Valliyil Thankachan, Sharma, Xu, Mengyu, University of Central Florida
- Abstract / Description
-
The advancement of semiconductor device technology over the past decades has enabled the design of increasingly complex electrical and computational machines. Electronic design automation (EDA) has played a significant role in the design and implementation of transistor-based machines. However, as transistors move closer to their physical limits, the speed-up provided by Moore's law will grind to a halt. Once again, we find ourselves on the verge of a paradigm shift in the computational sciences as newer devices pave the way for novel approaches to computing. One such device is the memristor, a resistor with non-volatile memory. Memristors can be used as junctional switches in crossbar circuits, which comprise intersecting sets of vertical and horizontal nanowires. The major contribution of this dissertation lies in automating the design of such crossbar circuits, doing a new kind of EDA for a new kind of computational machinery. In general, this dissertation attempts to answer the following questions:
a. How can we synthesize crossbars for computing large Boolean formulas, up to 128-bit?
b. How can we synthesize more compact crossbars for small Boolean formulas, up to 8-bit?
c. For a given loop-free C program doing integer arithmetic, is it possible to synthesize an equivalent crossbar circuit?
We have presented novel solutions to each of the above problems. Our proposed solutions resolve a number of significant bottlenecks in existing research through innovative logic representation and artificial intelligence techniques. For large Boolean formulas (up to 128-bit), we have utilized Reduced Ordered Binary Decision Diagrams (ROBDDs) to automatically synthesize linearly growing crossbar circuits that compute them. This cutting-edge approach to flow-based computing has yielded state-of-the-art results. It is worth noting that this approach is scalable to n-bit Boolean formulas.
We have made significant original contributions by leveraging artificial intelligence for the automatic synthesis of compact crossbar circuits. This method has also been expanded to encompass crossbar networks with 1D1M (1-diode-1-memristor) switches. The resultant circuits satisfy the tight constraints of the Feynman Grand Prize challenge and are able to perform 8-bit binary addition. A leading-edge development for end-to-end computation with flow-based crossbars has been implemented, which involves methodical translation of loop-free C programs into crossbar circuits via automated synthesis. The original contributions described in this dissertation reflect the substantial progress we have made in the area of electronic design automation for the synthesis of memristor crossbar networks.
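The ROBDD representation mentioned in the abstract can be illustrated with a minimal sketch (this is not the dissertation's synthesis tool, and the class and method names are hypothetical): a reduced ordered BDD is built bottom-up by Shannon expansion with a unique table, so isomorphic subgraphs are shared and redundant tests eliminated.

```python
class BDD:
    """Minimal ROBDD with terminals 0 and 1 and a fixed variable order."""

    def __init__(self, nvars):
        self.nvars = nvars
        self.unique = {}            # (var, lo, hi) -> node id (structural sharing)
        self.nodes = {0: None, 1: None}  # terminals have no children
        self.next_id = 2

    def mk(self, var, lo, hi):
        if lo == hi:                # redundant test: skip the node entirely
            return lo
        key = (var, lo, hi)
        if key not in self.unique:  # reuse an identical node if one exists
            self.unique[key] = self.next_id
            self.nodes[self.next_id] = key
            self.next_id += 1
        return self.unique[key]

    def build(self, f, prefix=()):
        # Shannon expansion over variables in order; f maps a full bit
        # tuple to 0/1. Enumeration is exponential -- real BDD packages
        # use apply/ite, but the resulting diagram here is fully reduced.
        if len(prefix) == self.nvars:
            return 1 if f(prefix) else 0
        lo = self.build(f, prefix + (0,))
        hi = self.build(f, prefix + (1,))
        return self.mk(len(prefix), lo, hi)

    def eval(self, node, bits):
        while node not in (0, 1):
            var, lo, hi = self.nodes[node]
            node = hi if bits[var] else lo
        return node


# 3-variable parity: the ROBDD grows linearly (about 2 nodes per variable),
# which is the kind of compactness the synthesis approach above relies on.
bdd = BDD(3)
root = bdd.build(lambda bits: sum(bits) % 2)
print(bdd.next_id - 2)  # number of internal nodes
```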
- Date Issued
- 2019
- Identifier
- CFE0007609, ucf:52528
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007609
- Title
- A FRAMEWORK TO MODEL COMPLEX SYSTEMS VIA DISTRIBUTED SIMULATION A CASE STUDY OF THE VIRTUAL TEST BED SIMULATION SYSTEM USING THE HIGH LEVEL ARCHITECTURE.
- Creator
-
Park, Jaebok, Sepulveda, Jose, University of Central Florida
- Abstract / Description
-
As the size, complexity, and functionality of the systems we need to model and simulate continue to increase, benefits such as interoperability and reusability enabled by distributed discrete-event simulation are becoming extremely important in many disciplines, not only the military but also many engineering disciplines such as distributed manufacturing, supply chain management, and enterprise engineering. In this dissertation we propose a distributed simulation framework for the modeling and simulation of complex systems. The framework is based on the interoperability of a simulation system enabled by distributed simulation and on gateways that enable Commercial Off-the-Shelf (COTS) simulation packages to interconnect to the distributed simulation engine. In the case study of modeling the Virtual Test Bed (VTB), the framework has been designed as a distributed simulation to facilitate the integrated execution of different simulations (shuttle process model, Monte Carlo model, Delay and Scrub Model), each of which addresses different mission components, as well as other non-simulation applications (Weather Expert System and Virtual Range). Although these models were developed independently and at various times, their original purposes have been seamlessly integrated, and they interact with each other through the Run-time Infrastructure (RTI) to simulate shuttle launch related processes. This study found that with the framework the defining properties of complex systems, interaction and emergence, are realized, and that software life cycle models (including the spiral model and prototyping) can be used as metaphors to manage the complexity of modeling and simulation of the system.
The system of systems (a complex system is intrinsically a "system of systems") continuously evolves to accomplish its goals; during the evolution, subsystems coordinate with one another and adapt to environmental factors such as policies, requirements, and objectives. In the case study we first demonstrate how legacy models developed in COTS simulation languages/packages and non-simulation tools can be integrated to address a complicated system of systems. We then describe the techniques that can be used to display the state of remote federates in a local federate in a High Level Architecture (HLA) based distributed simulation using COTS simulation packages.
- Date Issued
- 2005
- Identifier
- CFE0000534, ucf:46416
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000534
- Title
- THE DESIGN PROCESS AS ASSISTANT ART DIRECTOR FOR THE FILM NATIONAL LAMPOON'S ROBODOC.
- Creator
-
Davis, Cecil, Scott, Hubert, University of Central Florida
- Abstract / Description
-
In this thesis, I will detail and analyze the production design processes for National Lampoon's RoboDoc, written by Douglas Gordon, M.D., and filmed and produced in Orlando (Universal Studios) and Ormond Beach, FL, as experienced through the art department. The thesis will examine how a background in architecture and theatre guides the design motivations within a production team for film. My documentation will include a process journal written throughout the production of the film, covering design meeting topics, research and design inspiration, sketches, budget and location concerns, coordination of scenic elements, crew team coordination, paperwork, and thoughts on working within the art department team as well as with other production teams. Photographic records will include pre-production allocation and storage, load-in scenarios, set construction, and the final design in set and set dressing. Final comments will be based on a personal evaluation, evidence of my progression throughout the production, and how an advanced focus in design through education and practice affected the project.
- Date Issued
- 2007
- Identifier
- CFE0001647, ucf:47232
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001647
- Title
- A COMMON COMPONENT-BASED SOFTWARE ARCHITECTURE FOR MILITARY AND COMMERCIAL PC-BASED VIRTUAL SIMULATION.
- Creator
-
Lewis, Joshua, Proctor, Michael, University of Central Florida
- Abstract / Description
-
Commercially available military-themed virtual simulations have been developed and sold for entertainment since the beginning of the personal computing era. There exists an intense interest by various branches of the military to leverage the technological advances of the personal computing and video game industries to provide low-cost military training. By the nature of the content of commercial military-themed virtual simulations, a large overlap has grown between the interests, resources, standards, and technology of the computer entertainment industry and military training branches. This research attempts to identify these commonalities with the purpose of systematically designing and evaluating a common component-based software architecture that could be used to implement a framework for developing content for both commercial and military virtual simulation software applications.
- Date Issued
- 2006
- Identifier
- CFE0001177, ucf:46868
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001177
- Title
- Harmony Oriented Architecture.
- Creator
-
Martin, Kyle, Hua, Kien, Wu, Annie, Heinrich, Mark, University of Central Florida
- Abstract / Description
-
This thesis presents Harmony Oriented Architecture (HOA): a novel architectural paradigm that applies the principles of Harmony Oriented Programming (HOP) to the architecture of scalable and evolvable distributed systems. It is motivated by research on Ultra Large Scale systems that has revealed inherent limitations in the human ability to design large-scale software systems, limitations that can only be overcome through radical alternatives to traditional object-oriented software engineering practice that simplify the construction of highly scalable and evolvable systems.
HOP eschews encapsulation and information hiding, the core principles of object-oriented design, in favor of exposure and information sharing through a spatial abstraction. This helps to avoid the brittle interface dependencies that impede the evolution of object-oriented software. HOA extends these concepts to distributed systems, resulting in an architecture in which application components are represented by objects in a spatial database and executed in strict isolation using an embedded application server. Application components store their state entirely in the database and interact solely by diffusing data into a space for proximate components to observe. This architecture provides a high degree of decoupling, isolation, and state exposure, allowing highly scalable and evolvable applications to be built.
A proof-of-concept prototype of a non-distributed HOA middleware platform supporting JavaScript application components is implemented and evaluated. Results show remarkably good performance considering that little effort was made to optimize the implementation.
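The diffusion-style interaction the abstract describes can be sketched minimally (this is not the thesis's middleware; the `Space` class and its method names are hypothetical): components deposit values at positions in a shared space, and a component observes only what lies within a radius of its own position.

```python
import math


class Space:
    """Toy 2-D shared space: deposit values at positions, observe nearby ones."""

    def __init__(self):
        self.data = []  # list of (x, y, key, value) deposits

    def diffuse(self, x, y, key, value):
        # A component shares state by writing into the space rather than
        # calling another component's interface directly.
        self.data.append((x, y, key, value))

    def observe(self, x, y, radius):
        # A component sees only data deposited by proximate components.
        return {k: v for (px, py, k, v) in self.data
                if math.hypot(px - x, py - y) <= radius}


space = Space()
space.diffuse(0, 0, "temp", 21)
space.diffuse(10, 10, "humidity", 40)
print(space.observe(1, 0, radius=5))
```

Because components never reference each other by name, a component can be replaced or added without breaking interface dependencies, which is the decoupling property the architecture aims for.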
- Date Issued
- 2011
- Identifier
- CFE0004480, ucf:49298
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004480
- Title
- FABRIC ARCHITECTURE: BODY IN MOTION.
- Creator
-
Cosovic, Daniela, Robinson, Elizabeth Brady, University of Central Florida
- Abstract / Description
-
Making a dress, creating an object for someone else is a simple act of giving to another person. I did not want to decide between an object to wear and one to hang on the wall, so I gave you both, and movement in between. Take a dress off of a wall. Wear it. Put it back on the wall. Repeat it, or not. There is balance in movement of an object between a person and the wall. It is this quietness of balance amongst the sound of movement that I am seeking in my work.
- Date Issued
- 2009
- Identifier
- CFE0002606, ucf:48291
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002606
- Title
- HENRY JAMES, VIRGINIA WOOLF, AND FRANK LLOYD WRIGHT: INTERIORITY, CONSCIOUSNESS, TIME, AND SPACE IN THE MODERNIST NOVEL AND THE HOME.
- Creator
-
Michaelsen, Carol, Smith, Ernest, University of Central Florida
- Abstract / Description
-
During the Modernist period, generally defined as the years between 1890 and 1945, artists were attempting to break away from previous forms and styles. For example, writers like Henry James and Virginia Woolf sought to change the novel by exploring the consciousness of characters while playing with the ideas of time and space to create the present moment. This thesis explores the modernist techniques used by James and Woolf, but also connects the work of the writers with the architecture of Frank Lloyd Wright. Using Joseph Frank's theory of spatial form, my work explores the similarities between Wright's designs of private residences and the design of space in the novel. All three artists, I argue, are working with spatial form, blending interior with exterior, to provide the reader and the dweller with the opportunity to experience an organic unity, which ultimately results in a freezing of the moment. In addition to Frank's theory, I also incorporate Stanley Fish and Reader Response theory and William James's Principles of Psychology. The reader and the dweller must actively engage with the structure, whether a text or the home, to develop and realize the possibilities of spatial form. William James's ideas about the mind and consciousness also influenced Henry James and Virginia Woolf, especially in their focus on character rather than description. I have chosen James's The Turn of the Screw and The Wings of the Dove along with Woolf's To the Lighthouse and The Waves to study with Wright's Prairie and Usonian residences. Each chapter looks at one novel and Wright's corresponding work during approximately the same time period. By connecting literature and architecture, the thesis provides new ways of thinking about the two disciplines, especially concerning interiority and consciousness. James, Woolf, and Wright are all experimenting with time and space to create a unified experience, and the striking parallels between their work deserve more attention.
- Date Issued
- 2006
- Identifier
- CFE0001280, ucf:46925
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001280
- Title
- RESOURCE-CONSTRAINT AND SCALABLE DATA DISTRIBUTION MANAGEMENT FOR HIGH LEVEL ARCHITECTURE.
- Creator
-
Gupta, Pankaj, Guha, Ratan, University of Central Florida
- Abstract / Description
-
In this dissertation, we present an efficient algorithm, called the P-Pruning algorithm, for the data distribution management problem in High Level Architecture. High Level Architecture (HLA) presents a framework for modeling and simulation within the Department of Defense (DoD) and forms the basis of the IEEE 1516 standard. The goal of this architecture is to interoperate multiple simulations and facilitate the reuse of simulation components. Data Distribution Management (DDM) is one of the six components in HLA that is responsible for limiting and controlling the data exchanged in a simulation and reducing the processing requirements of federates. DDM is also an important problem in the parallel and distributed computing domain, especially in large-scale distributed modeling and simulation applications, where control over data exchange among the simulated entities is required. We present a performance-evaluation simulation study of the P-Pruning algorithm against three techniques: region-matching, fixed-grid, and dynamic-grid DDM algorithms. The P-Pruning algorithm is faster than the region-matching, fixed-grid, and dynamic-grid DDM algorithms, as it avoids the quadratic computation step involved in the other algorithms. The simulation results show that the P-Pruning DDM algorithm uses memory at run-time more efficiently and requires fewer multicast groups than the three algorithms. To increase the scalability of the P-Pruning algorithm, we develop a resource-efficient enhancement for it. We also present a performance evaluation study of this resource-efficient algorithm in a memory-constrained environment. The Memory-Constraint P-Pruning algorithm deploys I/O-efficient data structures for optimized memory access at run-time. The simulation results show that the Memory-Constraint P-Pruning DDM algorithm is faster than the P-Pruning algorithm and utilizes memory at run-time more efficiently.
It is suitable for high-performance distributed simulation applications, as it improves the scalability of the P-Pruning algorithm by several orders of magnitude in terms of the number of federates. We analyze the computational complexity of the P-Pruning algorithm using average-case analysis. We have also extended the P-Pruning algorithm to a three-dimensional routing space. In addition, we present the P-Pruning algorithm for dynamic conditions where the distribution of federates is changing at run-time. The dynamic P-Pruning algorithm investigates the changes among federate regions and rebuilds all the affected multicast groups. We have also integrated the P-Pruning algorithm with FDK, an implementation of the HLA architecture. The integration involves the design and implementation of the communicator module for mapping federate interest regions. We provide a modular overview of the P-Pruning algorithm components and describe the functional flow for creating multicast groups during simulation. We investigate the deficiencies in the DDM implementation under FDK and suggest an approach to overcome them using the P-Pruning algorithm. We have enhanced FDK from its existing HLA 1.3 specification by using the IEEE 1516 standard for the DDM implementation. We provide the system setup instructions and communication routines for running the integrated system on a network of machines. We also describe implementation details involved in the integration of the P-Pruning algorithm with FDK and provide results of our experiences.
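For context on the DDM baselines named in the abstract, a fixed-grid matching scheme (one of the techniques the P-Pruning algorithm is compared against; this sketch is not the P-Pruning algorithm itself and assumes 1-D integer interval regions, with hypothetical function names) buckets update and subscription regions into grid cells and only tests publisher-subscriber pairs that share a cell, instead of testing every pair as brute-force region matching does.

```python
from collections import defaultdict

CELL = 10  # fixed cell width on the routing space (assumed, for illustration)


def cells(lo, hi):
    """Grid cells overlapped by the half-open integer interval [lo, hi)."""
    return range(lo // CELL, (hi - 1) // CELL + 1)


def grid_match(updates, subscriptions):
    """Match update (publish) regions to subscription regions via grid cells.

    updates / subscriptions: {federate_name: (lo, hi)} interval regions.
    Returns the set of (publisher, subscriber) pairs whose regions overlap.
    """
    buckets = defaultdict(set)
    for fed, (lo, hi) in updates.items():
        for c in cells(lo, hi):
            buckets[c].add(fed)

    matches = set()
    for fed, (lo, hi) in subscriptions.items():
        for c in cells(lo, hi):
            for pub in buckets[c]:
                # Sharing a cell is only a conservative filter: two regions
                # can share a cell without overlapping, so confirm overlap.
                plo, phi = updates[pub]
                if plo < hi and lo < phi:
                    matches.add((pub, fed))
    return matches


updates = {"A": (0, 15), "B": (40, 60)}
subscriptions = {"S1": (10, 30), "S2": (55, 80)}
print(grid_match(updates, subscriptions))
```

Each matched pair would then be mapped to a multicast group; the number of such groups and the cost of computing them are exactly the quantities the comparison in the abstract measures.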
- Date Issued
- 2007
- Identifier
- CFE0001949, ucf:47447
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001949