Current Search: networking
- Title
- Resource Management in Large-scale Systems.
- Creator
- Paya, Ashkan, Marinescu, Dan, Wocjan, Pawel, Bassiouni, Mostafa, Mucciolo, Eduardo, University of Central Florida
- Abstract / Description
- The focus of this thesis is resource management in large-scale systems. Our primary concerns are energy management and practical principles for self-organization and self-management. The main contributions of our work are: 1. Models. We proposed several models for different aspects of resource management, e.g., energy-aware load balancing and application scaling for the cloud ecosystem, a hierarchical architecture model for self-organizing and self-manageable systems, and a new cloud delivery model based on an auction-driven self-organization approach. 2. Algorithms. We also proposed several algorithms for the models described above, such as coalition formation, combinatorial auctions, and a clustering algorithm for scale-free organizations of scale-free networks. 3. Evaluation. Finally, we conducted evaluations of the proposed models and algorithms in order to verify them. All the simulations reported in this thesis were carried out on different instances and services of Amazon Web Services (AWS). All of these modules are discussed in detail in the following chapters.
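The abstract names an auction-driven delivery model and combinatorial auction algorithms without detailing them. As a rough, hypothetical illustration only (the bid structure, scoring rule, and greedy heuristic below are assumptions, not the thesis's actual algorithms), winner determination for a combinatorial auction of cloud resources might be sketched like this:

```python
# Hypothetical sketch of greedy winner determination for a combinatorial
# auction of cloud resources (illustrative only; not the thesis's algorithm).
from typing import NamedTuple

class Bid(NamedTuple):
    bidder: str
    bundle: frozenset   # resource ids requested as an all-or-nothing bundle
    price: float

def greedy_allocate(bids: list[Bid]) -> dict[str, frozenset]:
    """Allocate bundles to bidders, highest price-per-resource first."""
    taken: set = set()
    winners: dict[str, frozenset] = {}
    for bid in sorted(bids, key=lambda b: b.price / len(b.bundle), reverse=True):
        if taken.isdisjoint(bid.bundle):   # bundle still fully available
            winners[bid.bidder] = bid.bundle
            taken |= bid.bundle
    return winners

bids = [
    Bid("A", frozenset({"cpu1", "cpu2"}), 10.0),
    Bid("B", frozenset({"cpu2", "gpu1"}), 9.0),
    Bid("C", frozenset({"gpu1"}), 5.0),
]
print(greedy_allocate(bids))   # A and C win; B's bundle conflicts with both
```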
- Date Issued
- 2015
- Identifier
- CFE0005862, ucf:50913
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005862
- Title
- Automatic Detection of Brain Functional Disorder Using Imaging Data.
- Creator
- Dey, Soumyabrata, Shah, Mubarak, Jha, Sumit, Hu, Haiyan, Weeks, Arthur, Rao, Ravishankar, University of Central Florida
- Abstract / Description
- Recently, Attention Deficit Hyperactivity Disorder (ADHD) has been getting a lot of attention, mainly for two reasons. First, it is one of the most commonly found childhood behavioral disorders: around 5-10% of children all over the world are diagnosed with ADHD. Second, the root cause of the problem is still unknown, and therefore no biological measure exists to diagnose ADHD. Instead, doctors need to diagnose it based on clinical symptoms, such as inattention, impulsivity, and hyperactivity, which are all subjective.

Functional Magnetic Resonance Imaging (fMRI) data has become a popular tool for understanding the functioning of the brain, such as identifying the brain regions responsible for different cognitive tasks or analyzing the statistical differences in brain functioning between diseased and control subjects. ADHD is also being studied using fMRI data. In this dissertation we aim to solve the problem of automatic diagnosis of ADHD subjects using their resting state fMRI (rs-fMRI) data.

As a core step of our approach, we model the functions of a brain as a connectivity network, which is expected to capture information about how synchronous different brain regions are in terms of their functional activities. The network is constructed by representing different brain regions as nodes, where any two nodes are connected by an edge if the correlation of the activity patterns of the two nodes is higher than some threshold. The brain regions represented as nodes can be selected at different granularities, e.g., single voxels or clusters of functionally homogeneous voxels. The topological differences between the constructed networks of the ADHD and control groups of subjects are then exploited in the classification approach.

We have developed a simple method employing the Bag-of-Words (BoW) framework for the classification of ADHD subjects. We represent each node in the network by a 4-D feature vector: node degree and 3-D location. The 4-D vectors of all the network nodes of the training data are then grouped into a number of clusters using K-means, where each such cluster is termed a word. Finally, each subject is represented by a histogram (bag) of such words. A Support Vector Machine (SVM) classifier is used for the detection of ADHD subjects from their histogram representations. The method is able to achieve 64% classification accuracy.

This simple approach has several shortcomings. First, there is a loss of spatial information in constructing the histogram, because it only counts the occurrences of words, ignoring their spatial positions. Second, features from the whole brain are used for classification, but some brain regions may not contain any useful information and may only increase the feature dimensionality and noise of the system. Third, in our study we used only one network feature, the degree of a node, which measures the connectivity of the node, while other, more complex network features may be useful for solving the proposed problem.

In order to address these shortcomings, we hypothesize that only a subset of the nodes of the network possesses important information for the classification of ADHD subjects. To identify the important nodes we have developed a novel algorithm. The algorithm repeatedly generates random subsets of nodes, each time extracting the features from the subset to compute a feature vector and perform classification. The subsets are then ranked based on classification accuracy, and the occurrences of each node in the top-ranked subsets are counted. Our algorithm selects the highly occurring nodes for the final classification. Furthermore, along with the node degree, we employ three more node features: network cycles, the varying distance degree, and the edge weight sum. We concatenate the features of the selected nodes in a fixed order to preserve the relative spatial information. Experimental validation suggests that the use of features from the nodes selected by our algorithm indeed helps to improve the classification accuracy. Our finding is also in concordance with the existing literature, as the brain regions identified by our algorithm have been independently found by many other studies on ADHD. We achieved a classification accuracy of 69.59% using this approach. However, this method represents each voxel as a node of the network, which makes the number of nodes several thousand; as a result, the network construction step becomes computationally very expensive. Another limitation of the approach is that the network features, which are computed for each node, capture only local structure while ignoring the global structure of the network.

Next, in order to capture the global structure of the networks, we use the Multi-Dimensional Scaling (MDS) technique to project all the subjects from an unknown network-space to a low-dimensional space based on their inter-network distance measures. For the purpose of computing the distance between two networks, we represent each node by a set of attributes such as the node degree, the average power, the physical location, the neighbor node degrees, and the average powers of the neighbor nodes. The nodes of the two networks are then mapped in such a way that, over all pairs of nodes, the sum of the attribute distances, which is the inter-network distance, is minimized. To reduce the network computation cost, we enforce that the maximum relevant information is preserved with minimum redundancy. To achieve this, the nodes of the network are constructed from clusters of highly active voxels, where the activity levels of the voxels are measured by the average power of their corresponding fMRI time series. Our method shows promise, as we achieve impressive classification accuracies (73.55%) on the ADHD-200 data set. Our results also reveal that detection rates are higher when classification is performed separately on the male and female groups of subjects.

So far, we have used only the fMRI data for solving the ADHD diagnosis problem. Finally, we investigated the following questions. Do structural brain images contain useful information related to the ADHD diagnosis problem? Can the classification accuracy of the automatic diagnosis system be improved by combining the information of the structural and functional brain data? Toward that end, we developed a new method to combine the information of structural and functional brain images in a late fusion framework. For structural data we input the gray matter (GM) brain images to a Convolutional Neural Network (CNN). The output of the CNN is a feature vector per subject, which is used to train the SVM classifier. For the functional data we compute the average power of each voxel based on its fMRI time series; the average power of a voxel's time series measures its activity level. We found significant differences in the voxel power distribution patterns of the ADHD and control groups of subjects. The local binary pattern (LBP) texture feature is applied to the voxel power map to capture these differences. We achieved 74.23% accuracy using GM features, 77.30% using LBP features, and 79.14% using the combined information.

In summary, this dissertation demonstrates that structural and functional brain imaging data are useful for the automatic detection of ADHD subjects, as we achieve impressive classification accuracies on the ADHD-200 data set. Our study also helps to identify the brain regions which are useful for ADHD subject classification. These findings can help in understanding the pathophysiology of the problem. Finally, we expect that our approaches will contribute toward the development of a biological measure for the diagnosis of ADHD.
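The Bag-of-Words pipeline described above translates almost directly into code. The sketch below is a minimal reading of it: build a thresholded correlation network per subject, describe each node by degree plus 3-D location, quantize those 4-D vectors into "words" with K-means, and classify the resulting histograms with an SVM. The array shapes, threshold value, and word count are illustrative assumptions, not values taken from the dissertation.

```python
# Minimal sketch of the Bag-of-Words classification pipeline described above.
# Shapes, threshold, and number of "words" are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def node_features(ts, coords, thresh=0.5):
    """ts: (n_nodes, n_timepoints) rs-fMRI series; coords: (n_nodes, 3) locations.
    Returns one 4-D feature per node: [degree, x, y, z]."""
    corr = np.corrcoef(ts)                                # node-by-node correlations
    adj = (corr > thresh) & ~np.eye(len(ts), dtype=bool)  # edge if correlation exceeds threshold
    degree = adj.sum(axis=1)
    return np.column_stack([degree, coords])

def bow_histograms(subject_feats, n_words=50):
    """Cluster all nodes' 4-D vectors into 'words', then represent each
    subject as a histogram (bag) of word occurrences."""
    kmeans = KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(subject_feats))
    return np.array([
        np.bincount(kmeans.predict(f), minlength=n_words) for f in subject_feats
    ])

# subjects: list of (time_series, coords) pairs; labels: 1 = ADHD, 0 = control
# X = bow_histograms([node_features(ts, xyz) for ts, xyz in subjects])
# clf = SVC().fit(X, labels)    # the abstract reports ~64% accuracy for this scheme
```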
- Date Issued
- 2014
- Identifier
- CFE0005786, ucf:50060
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005786
- Title
- Chemical Analysis, Databasing, and Statistical Analysis of Smokeless Powders for Forensic Application.
- Creator
- Dennis, Dana-Marie, Sigman, Michael, Campiglia, Andres, Yestrebsky, Cherie, Fookes, Barry, Ni, Liqiang, University of Central Florida
- Abstract / Description
- Smokeless powders are a set of energetic materials, known as low explosives, which are typically utilized for reloading ammunition. There are three types, which differ in their primary energetic materials: single base powders contain nitrocellulose as their primary energetic material, double and triple base powders contain nitroglycerin in addition to nitrocellulose, and triple base powders also contain nitroguanidine. Additional organic compounds, while not proprietary to specific manufacturers, are added to the powders in varied ratios during the manufacturing process to optimize the ballistic performance of the powders. These additional compounds function as stabilizers, plasticizers, flash suppressants, deterrents, and opacifiers. Of the three smokeless powder types, single and double base powders are commercially available and have been heavily utilized in the manufacture of improvised explosive devices.

Forensic smokeless powder samples are currently analyzed using multiple analytical techniques. Combined microscopic, macroscopic, and instrumental techniques are used to evaluate a sample, and the information obtained is used to generate a list of potential distributors. Gas chromatography-mass spectrometry (GC-MS) is arguably the most useful of the instrumental techniques, since it distinguishes single and double base powders and provides additional information about the relative ratios of all the analytes present in a sample. However, forensic smokeless powder samples are still limited to being classified as either single or double base powders, based on the absence or presence of nitroglycerin, respectively. In this work, the goal was to develop statistically valid classes, beyond the single and double base designations, based on multiple organic compounds which are commonly encountered in commercial smokeless powders. Several chemometric techniques were applied to smokeless powder GC-MS data for determination of the classes and for assignment of test samples to these novel classes. The total ion spectrum (TIS), which is calculated from the GC-MS data for each sample, is obtained by summing the intensities for each mass-to-charge (m/z) ratio across the entire chromatographic profile. A TIS matrix comprising data for 726 smokeless powder samples was subjected to agglomerative hierarchical cluster (AHC) analysis, and six distinct classes were identified. Within each class, a single m/z ratio had the highest intensity for the majority of samples, though the m/z ratio was not always unique to the specific class. Based on these observations, a new classification method known as the Intense Ion Rule (IIR) was developed and used for the assignment of test samples to the AHC-designated classes.

Discriminant models were developed for assignment of test samples to the AHC-designated classes using k-Nearest Neighbors (kNN) and linear and quadratic discriminant analyses (LDA and QDA, respectively). Each of the models was optimized using leave-one-out (LOO) and leave-group-out (LGO) cross-validation, and the performance of the models was evaluated by calculating correct classification rates for assignment of the cross-validation (CV) samples to the AHC-designated classes. The optimized models were then utilized to assign test samples to the AHC-designated classes. Overall, the QDA LGO model achieved the highest correct classification rates for assignment of both the CV samples and the test samples.

In forensic application, the goal of an explosives analyst is to ascertain the manufacturer of a smokeless powder sample. Knowledge about the probability of a forensic sample being produced by a specific manufacturer could decrease the time invested by an analyst during an investigation by providing a shorter list of potential manufacturers. In this work, Bayes' Theorem and Bayesian networks were investigated as an additional tool to be utilized in forensic casework. Bayesian networks were generated and used to calculate posterior probabilities of a test sample belonging to specific manufacturers. The networks were designed to include manufacturer-controlled powder characteristics such as shape, color, and dimension, as well as the relative intensities of the class-associated ions determined from cluster analysis. Samples were predicted to belong to the manufacturer with the highest posterior probability. Overall percent correct rates were determined by calculating the percentage of correct predictions, that is, where the known and predicted manufacturers were the same. The initial overall percent correct rate was 66%. The dimensions of the smokeless powders were then added to the network as average diameter and average length nodes, which resulted in an overall prediction rate of 70%.
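The TIS computation and clustering steps above have a direct computational reading, sketched below under assumed array shapes. The six-cluster cut matches the class count reported above; the `intense_ion_rule` helper is a simplified, hypothetical reading of the Intense Ion Rule, not its exact formulation.

```python
# Minimal sketch of the TIS calculation and agglomerative clustering step.
# Array shapes and preprocessing are assumptions for illustration.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def total_ion_spectrum(gcms):
    """gcms: (n_scans, n_mz) intensity matrix across the chromatographic run.
    The TIS sums each m/z channel over all scans, then normalizes."""
    tis = gcms.sum(axis=0)
    return tis / tis.sum()

# tis_matrix: one TIS row per powder sample, e.g. 726 x n_mz
# Z = linkage(tis_matrix, method="ward")
# classes = fcluster(Z, t=6, criterion="maxclust")   # six AHC classes, as above

def intense_ion_rule(tis, class_ions):
    """Assign a sample to the class whose characteristic ion is most intense
    (a simplified reading of the IIR described above).
    class_ions: {class_label: m/z index of that class's characteristic ion}."""
    return max(class_ions, key=lambda c: tis[class_ions[c]])
```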
- Date Issued
- 2015
- Identifier
- CFE0005784, ucf:50059
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005784
- Title
- A Comparative Evaluation of FDSA, GA, and SA Non-Linear Programming Algorithms and Development of System-Optimal Dynamic Congestion Pricing Methodology on I-95 Express.
- Creator
- Graham, Don, Radwan, Ahmed, Abdel-Aty, Mohamed, Al-Deek, Haitham, Uddin, Nizam, University of Central Florida
- Abstract / Description
- As urban population across the globe increases, the demand for adequate transportation grows. Several strategies have been suggested as solutions to the congestion which results from this high demand outpacing the existing supply of transportation facilities.

High-Occupancy Toll (HOT) lanes have become increasingly popular as a feature on today's highway system. The I-95 Express HOT lane in Miami, Florida, which is currently being expanded from a single phase (Phase I) into two phases, is one such HOT facility. With the growing abundance of such facilities comes the need for in-depth study of demand patterns and development of an appropriate pricing scheme which reduces congestion.

This research develops a method for dynamic pricing on the I-95 HOT facility so as to minimize total travel time and reduce congestion. We apply non-linear programming (NLP) techniques and the finite difference stochastic approximation (FDSA), genetic algorithm (GA), and simulated annealing (SA) stochastic algorithms to formulate and solve the problem within a cell transmission framework. The solution produced is the optimal flow and optimal toll required to minimize total travel time, and is thus the system-optimal solution.

We perform a comparative evaluation of the FDSA, GA, and SA non-linear programming algorithms used to solve the NLP, and the ANOVA results show that there are differences in the performance of the algorithms in solving this problem and reducing travel time. We conclude by demonstrating that econometric forecasting methods utilizing vector autoregressive (VAR) techniques can be applied to successfully forecast demand for Phase 2 of the 95 Express, which is planned for 2014.
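Of the three stochastic solvers compared above, simulated annealing is the simplest to sketch. In the sketch below the travel-time evaluator stands in for the cell transmission model, and the move size, cooling schedule, and iteration count are illustrative assumptions rather than the dissertation's calibrated settings.

```python
# Illustrative simulated-annealing loop for a dynamic toll schedule.
# The total_travel_time callable stands in for the cell transmission model.
import math
import random

def anneal(total_travel_time, toll0, step=0.25, t0=1.0, cooling=0.995, iters=5000):
    """toll0: initial list of tolls, one per time interval; total_travel_time:
    function mapping a toll vector to network travel time (assumed given)."""
    toll, cost, temp = list(toll0), total_travel_time(toll0), t0
    best, best_cost = list(toll), cost
    for _ in range(iters):
        cand = list(toll)
        i = random.randrange(len(cand))
        cand[i] = max(0.0, cand[i] + random.uniform(-step, step))  # perturb one interval's toll
        c = total_travel_time(cand)
        if c < cost or random.random() < math.exp((cost - c) / temp):
            toll, cost = cand, c        # accept improving or, occasionally, worsening moves
            if cost < best_cost:
                best, best_cost = list(toll), cost
        temp *= cooling                 # geometric cooling schedule
    return best, best_cost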
- Date Issued
- 2013
- Identifier
- CFE0005000, ucf:50019
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005000
- Title
- A NEW PARADIGM OF MODELING WATERSHED WATER QUALITY.
- Creator
- Zhang, Fan, Yeh, Gour-Tsyh, University of Central Florida
- Abstract / Description
- Accurate models to reliably predict sediment and chemical transport in watershed water systems enhance the ability of environmental scientists, engineers, and decision makers to analyze the impact of contamination problems and to evaluate the efficacy of alternative remediation techniques and management strategies prior to incurring expense in the field. This dissertation presents the conceptual and mathematical development of a general numerical model simulating (1) sediment and reactive chemical transport in river/stream networks of watershed systems; (2) sediment and reactive chemical transport in overland shallow water of watershed systems; and (3) reactive chemical transport in three-dimensional subsurface systems. Through the decomposition of the system of species transport equations via Gauss-Jordan column reduction of the reaction network, fast reactions and slow reactions are decoupled, which enables robust numerical integration. Species reactive transport equations are transformed into two sets: nonlinear algebraic equations representing equilibrium reactions, and transport equations of kinetic-variables in terms of kinetically controlled reaction rates. As a result, the model uses kinetic-variables instead of biogeochemical species as primary dependent variables, which reduces the number of transport equations and simplifies the reaction terms in these equations. For each time step, we first solve the advective-dispersive transport of the kinetic-variables; we then solve the reactive chemical system node by node to yield the concentrations of all species. In order to obtain accurate, efficient, and robust computations, five numerical options are provided to solve the advective-dispersive transport equations, and three coupling strategies are given to deal with the reactive chemistry. Verification examples are compared with analytical solutions to demonstrate the numerical accuracy of the code and to emphasize the need to implement various numerical options and coupling strategies for different types of problems and application circumstances. Validation examples are presented to evaluate the ability of the model to replicate behavior observed in real systems. Hypothetical examples with complex reaction networks are employed to demonstrate the capability of the model to handle field-scale problems involving both kinetic and equilibrium reactions. The deficiencies of current practices in water quality modeling are discussed, and potential improvements over current practices using this model are addressed.
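The two-stage solution scheme described above (advective-dispersive transport of the kinetic-variables first, then node-by-node reaction) is a form of operator splitting. The sketch below shows that structure for a single kinetic-variable on a 1-D grid; the upwind/central discretization, linear decay reaction, and periodic boundaries are illustrative assumptions, not the model's actual numerics.

```python
# Illustrative operator-splitting step for one kinetic-variable on a 1-D grid:
# advect/disperse first, then solve the (here, simple first-order) reaction
# node by node, mirroring the two-stage scheme described above.
import numpy as np

def split_step(u, dt, dx, v=0.1, D=0.01, k=0.05):
    """u: concentrations on the grid; v: velocity; D: dispersion; k: decay rate."""
    # stage 1: explicit upwind advection + central-difference dispersion
    # (np.roll gives periodic boundaries, purely for brevity)
    adv = -v * (u - np.roll(u, 1)) / dx
    disp = D * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * (adv + disp)
    # stage 2: reaction solved node by node (exact for linear decay)
    return u * np.exp(-k * dt)

u = np.zeros(100); u[10:20] = 1.0          # initial slug of solute
for _ in range(200):
    u = split_step(u, dt=0.1, dx=0.1)      # CFL and diffusion numbers ~0.1, stable
```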
- Date Issued
- 2005
- Identifier
- CFE0000448, ucf:46405
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000448
- Title
- IMPROVING AIRLINE SCHEDULE RELIABILITY USING A STRATEGIC MULTI-OBJECTIVE RUNWAY SLOT ASSIGNMENT SEARCH HEURISTIC.
- Creator
- Hafner, Florian, Sepulveda, Alejandro, University of Central Florida
- Abstract / Description
- Improving the predictability of airline schedules in the National Airspace System (NAS) has been a constant endeavor, particularly as system delays grow with ever-increasing demand. Airline schedules need to be resistant to perturbations in the system, including Ground Delay Programs (GDPs) and inclement weather. The strategic search heuristic proposed in this dissertation significantly improves airline schedule reliability by assigning airport departure and arrival slots to each flight in the schedule across a network of airports. This is performed using a multi-objective optimization approach that is primarily based on historical flight and taxi times but also includes certain airline, airport, and FAA priorities. The intent of this algorithm is to produce a more reliable, robust schedule that operates in today's environment as well as in tomorrow's 4-Dimensional Trajectory Controlled system as described in the FAA's Next Generation ATM system (NextGen).

This novel airline schedule optimization approach is implemented using a multi-objective evolutionary algorithm which is capable of incorporating limited airport capacities. The core of the fitness function is an extensive database of historic operating times for flight and ground operations collected over a two-year period from ASDI and BTS data. Empirical distributions based on this data reflect the probability that flights encounter various flight and taxi times. The fitness function also adds the ability to define priorities for certain flights based on aircraft size, flight time, and airline usage. The algorithm is applied to airline schedules for two primary US airports: Chicago O'Hare and Atlanta Hartsfield-Jackson. The effects of this multi-objective schedule optimization are evaluated in a variety of scenarios including periods of high, medium, and low demand.

The schedules generated by the optimization algorithm were evaluated using a simple queuing simulation model implemented in AnyLogic. The scenarios were simulated in AnyLogic using two basic setups: (1) using modes of flight and taxi times that reflect highly predictable 4-Dimensional Trajectory Control operations, and (2) using full distributions of flight and taxi times reflecting current-day operations. The simulation analysis showed significant improvements in reliability as measured by the mean square difference (MSD) of filed versus simulated flight arrival and departure times. Arrivals showed the most consistent improvements, of up to 80% in on-time performance (OTP). Departures showed smaller overall improvements, particularly when the optimization was performed without consideration of airport capacity. The 4-Dimensional Trajectory Control environment more than doubled the on-time performance of departures over the current-day, more chaotic scenarios.

This research shows that airline schedule reliability can be significantly improved over a network of airports using historical flight and taxi time data. It also provides a mechanism to prioritize flights based on various airline, airport, and ATC goals. The algorithm is shown to work in today's environment as well as in tomorrow's NextGen 4-Dimensional Trajectory Control setup.
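The reliability measures quoted above are simple to state precisely. The sketch below computes the MSD of filed versus simulated times and an on-time-performance fraction; the 15-minute on-time window is an assumed convention, as the exact rule is not stated in the abstract.

```python
# Sketch of the schedule-reliability measures described above. The 15-minute
# on-time window is an assumed convention, not taken from the dissertation.
import numpy as np

def mean_square_difference(filed, simulated):
    """MSD of filed vs. simulated flight times (both in minutes)."""
    filed, simulated = np.asarray(filed), np.asarray(simulated)
    return np.mean((simulated - filed) ** 2)

def on_time_performance(filed, simulated, window=15.0):
    """Fraction of flights within `window` minutes of the filed time
    (absolute deviation used here; the exact OTP rule is an assumption)."""
    delays = np.asarray(simulated) - np.asarray(filed)
    return np.mean(np.abs(delays) <= window)
```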
- Date Issued
- 2008
- Identifier
- CFE0002067, ucf:47572
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002067
- Title
- OPTIMAL DETOUR PLANNING AROUND BLOCKED CONSTRUCTION ZONES.
- Creator
- Jardaneh, Mutasem, Khalafallah, Ahmed, University of Central Florida
- Abstract / Description
- Construction zones are traffic way areas where construction, maintenance, or utility work is identified by warning signs, signals, and indicators, including those on transport devices, that mark the beginning and end of construction zones. Construction zones are among the most dangerous work areas, with workers facing workplace safety challenges that often lead to catastrophic injuries or fatalities. In addition, daily commuters are impacted by construction zone detours that affect their safety and daily commute time. These problems represent major challenges to construction planners, as they are required to plan vehicle routes around construction zones in such a way as to maximize the safety of construction workers and reduce the impact on daily commuters. This research aims at developing a framework for optimizing the planning of construction detours. The main objectives of the research are, first, to identify all the decision variables that affect the planning of construction detours and, second, to implement a model based on a shortest path formulation to identify the optimal alternatives for construction detours. The ultimate goal of this research is to offer construction planners essential guidelines to improve construction safety and reduce construction zone hazards, as well as a robust tool for selecting and optimizing construction zone detours.
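The shortest path formulation mentioned above can be sketched directly: score each candidate link with a generalized cost combining travel time and a safety penalty, then search with Dijkstra's algorithm. The cost weighting and hazard scores below are illustrative assumptions, not the model's actual parameters.

```python
# Illustrative shortest-path search for a detour, with edge costs that
# combine travel time and an assumed safety penalty (hazard score).
import heapq

def best_detour(graph, start, goal, safety_weight=2.0):
    """graph: {node: [(neighbor, travel_time, hazard), ...]}.
    Returns (total_cost, path) minimizing time + safety_weight * hazard."""
    pq, seen = [(0.0, start, [start])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, t, hazard in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + t + safety_weight * hazard, nxt, path + [nxt]))
    return float("inf"), []

g = {"A": [("B", 5, 0.1), ("C", 3, 0.9)], "B": [("D", 4, 0.0)], "C": [("D", 2, 2.0)]}
print(best_detour(g, "A", "D"))   # prefers the longer but safer A-B-D route
```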
- Date Issued
- 2011
- Identifier
- CFE0003586, ucf:48900
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003586
- Title
- Development of Traffic Safety Zones and Integrating Macroscopic and Microscopic Safety Data Analytics for Novel Hot Zone Identification.
- Creator
- Lee, JaeYoung, Abdel-Aty, Mohamed, Radwan, Ahmed, Nam, Boo Hyun, Kuo, Pei-Fen, Choi, Keechoo, University of Central Florida
- Abstract / Description
- Traffic safety has been considered one of the most important issues in the transportation field. With the consistent efforts of transportation engineers and Federal, State, and local government officials, both fatalities and fatality rates from road traffic crashes in the United States steadily declined from 2006 to 2011. Nevertheless, fatalities from traffic crashes slightly increased in 2012 (NHTSA, 2013). We lost 33,561 lives to road traffic crashes in 2012, and road traffic crashes remain one of the leading causes of death, according to the Centers for Disease Control and Prevention (CDC). In recent years, efforts have been made to incorporate traffic safety into transportation planning, an approach termed transportation safety planning (TSP). The Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU), codified in the United States Code, compels the United States Department of Transportation to consider traffic safety in the long-term transportation planning process. Although considerable macro-level research has been conducted to facilitate the implementation of TSP, critical limitations in macroscopic safety studies remain to be investigated and remedied. First, the TAZ (Traffic Analysis Zone), which is most widely used in travel demand forecasting, has crucial shortcomings for macro-level safety modeling. Moreover, macro-level safety models have an accuracy problem: their low prediction power may be caused by crashes that occur near the boundaries of zones, by high-level aggregation, and by neglect of spatial autocorrelation.

In this dissertation, several methodologies are proposed to alleviate these limitations of macro-level safety research. The TSAZ (Traffic Safety Analysis Zone) is developed as a new zonal system for macroscopic safety analysis, and a nested structured modeling method is suggested to improve model performance. A multivariate statistical modeling method for multiple crash types is also proposed. In addition, a novel screening methodology integrating the two levels is suggested, since zonal-level screening cannot take specific high-risk sites into consideration. It is expected that the integrated screening approach can provide a comprehensive perspective by balancing the two aspects: macroscopic and microscopic approaches.
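The integrated screening idea above (flag zones at the macroscopic level, then drill into specific sites) can be sketched generically. The excess-crash score used below, observed minus model-predicted crashes, is an assumed, standard screening quantity, not the dissertation's exact formulation.

```python
# Generic sketch of two-level hot-zone screening: rank zones by excess
# crashes, then rank sites within the flagged zones. The excess-crash
# score is an assumed, conventional screening measure, not the exact method.
def screen(zones, top_k=5):
    """zones: [{'id': ..., 'observed': int, 'predicted': float,
                'sites': [{'id': ..., 'risk': float}, ...]}, ...]"""
    ranked = sorted(zones, key=lambda z: z["observed"] - z["predicted"], reverse=True)
    hot = []
    for z in ranked[:top_k]:                              # macroscopic level: hot zones
        worst = max(z["sites"], key=lambda s: s["risk"])  # microscopic level: worst site
        hot.append((z["id"], worst["id"]))
    return hot
```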
- Date Issued
- 2014
- Identifier
- CFE0005195, ucf:50653
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005195
- Title
- PAUL VERHOEVEN, MEDIA MANIPULATION, AND HYPER-REALITY.
- Creator
- Malchiodi, Emmanuel, Janz, Bruce, University of Central Florida
- Abstract / Description
- Dutch director Paul Verhoeven is a polarizing figure. Although many of his American-made films have received considerable praise and financial success, he has been lambasted on countless occasions for his gratuitous use of sex, violence, and contentious symbolism: 1995's Showgirls was overwhelmingly dubbed the worst film of all time, and 1997's Starship Troopers earned him a reputation as a fascist. Regardless of the controversy surrounding him, his science fiction films are a move beyond the conventions of the big blockbuster science fiction films of the 1980s (E.T. and the Star Wars trilogy are prime examples), revealing a deeper exploration of both sociopolitical issues and the human condition. Much like the novels of Philip K. Dick (and Verhoeven's 1990 film Total Recall, an adaptation of a Dick short story), Verhoeven's science fiction work explores worlds where paranoia is a constant and where it is regularly questionable whether an individual maintains any liberty. In this thesis I am exploring issues regarding power. Although I barely bring up the term power in it, I feel it is central. Power is an ambiguous term; are we discussing physical power, state power, objective power, subjective power, or any of the other possible manifestations of the word? The original Anglo-French sense of power means to be able, asking whether it is possible for one to do something. In relation to Verhoeven's science fiction work, each film demonstrates the limitations placed upon an individual's autonomy, asking whether the protagonists are capable of independent agency or are just environmental constructs reflecting the myriad influences surrounding them. Does the individual really matter in the post-modern world, brimming with countless signs and signifiers? My main objective in this writing is to demonstrate how this happens in Verhoeven's films, exploring his central themes and subtext and doing what science fiction does: holding a mirror up to the contemporary world and critiquing it, asking whether our species' current trajectory is beneficial or hazardous.
- Date Issued
- 2011
- Identifier
- CFH0003844, ucf:44697
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0003844
- Title
- Field Theoretic Lagrangian Stencils from Off-Shell Supermultiplet Gauge Quotients.
- Creator
- Katona, Gregory, Klemm, Richard, Hubsch, Tristan, Peale, Robert, Shivamoggi, Bhimsen, University of Central Florida
- Abstract / Description
- Recent efforts to classify off-shell representations of supersymmetry without a central charge have focused upon directed supermultiplet graphs of hypercubic topology known as Adinkras. These encodings of Super Poincaré algebras depict every generator of a chosen supersymmetry as a node-pair transformation between fermionic/bosonic component fields. This research thesis is a culmination of investigating novel diagrammatic sums of gauge quotients by supersymmetric images of other Adinkras, and the correlated building of field theoretic worldline Lagrangians to accommodate both classical and quantum venues. We find (Ref. [40]) that such gauge quotients do not yield other stand-alone or "proper" Adinkras as previously expected, nor can they be decomposed into supermultiplet sums, but are rather a connected "Adinkraic network". Their iteration, analogous to Weyl's construction for producing all finite-dimensional unitary representations in Lie algebras, sets off chains of algebraic paradigms in discrete-graph and continuous-field variables, the links of which feature distinct supersymmetric Lagrangian templates. Collectively, these Adinkraic series air new symbolic genera for equation to phase moments in Feynman path integrals.

Guided in this light, we proceed by constructing Lagrangian actions for the N = 3 supermultiplet Y_I/(iD_I X) for I = 1, 2, 3, where Y_I and X are standard Salam-Strathdee superfields: Y_I fermionic and X bosonic. The system, bilinear in the component fields, exhibits a total of thirteen free parameters, seven of which specify Zeeman-like coupling to external background (magnetic) fluxes. All but special subsets of this parameter space describe aperiodic oscillatory responses, some of which are found to be surprisingly controlled by the golden ratio, φ ≈ 1.61803 (Ref. [52]). It is further determined that these Lagrangians allow an N = 3 → 4 supersymmetric extension to the Chiral-Chiral and Chiral-twisted-Chiral multiplet, while a subset admits two inequivalent such extensions.

In a natural progression, a continuum of observably and usefully inequivalent, finite-dimensional off-shell representations of worldline N = 4 extended supersymmetry is explored, the members of which differ from one another only in the value of a tuning parameter (Ref. [53]). Their dynamics turns out to be nontrivial already when restricting to just bilinear Lagrangians. In particular, we find a 34-parameter family of bilinear Lagrangians that couple two differently tuned supermultiplets to each other and to external magnetic fluxes, where the explicit parameter dependence is unremovable by any field redefinition and is therefore observable. This offers the evaluation of X-phase sensitive, off-shell path integrals, with promising correlations to group product decompositions and to deriving source emergences of higher-order background flux-forms on 2-dimensional manifolds, the stacks of which comprise space-time volumes. Application to nonlinear sigma models would naturally follow, having potential use in M- and F-string theories.
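For reference, the constant reported above is the golden ratio, the positive root of a simple quadratic; the following is standard mathematics, not material from the thesis:

```latex
\varphi = \frac{1+\sqrt{5}}{2} \approx 1.61803\,, \qquad \varphi^{2} = \varphi + 1\,.
```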
- Date Issued
- 2013
- Identifier
- CFE0005011, ucf:50004
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005011