Current Search: Algorithm


Title

Computing a diameter-constrained minimum spanning tree.

Creator

Abdalla, Ayman Mahmoud, Deo, Narsingh, Engineering and Computer Science

Abstract / Description

University of Central Florida College of Engineering Thesis; In numerous practical applications, it is necessary to find the smallest possible tree with a bounded diameter. A diameter-constrained minimum spanning tree (DCMST) of a given undirected, edge-weighted graph, G, is the smallest-weight spanning tree among all spanning trees of G that contain no path with more than k edges, where k is a given positive integer. The problem of finding a DCMST is NP-complete for all values of k, 4 <= k <= (n - 2), except when all edge-weights are identical. A DCMST is essential for the efficiency of various distributed mutual exclusion algorithms, where it can minimize the number of messages communicated among processors per critical section. It is also useful in linear lightwave networks, where it can minimize interference in the network by limiting the traffic in the network lines. Another practical application requiring a DCMST arises in data compression, where some algorithms compress a file utilizing a tree data structure, and decompress a path in the tree to access a record. A DCMST helps such algorithms to be fast without sacrificing a lot of storage space.
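The diameter constraint described above (no path with more than k edges) is easy to check mechanically. The sketch below, in Python with hypothetical helper names not taken from the thesis, computes a tree's diameter by BFS from every vertex and tests it against the bound k:

```python
from collections import defaultdict, deque

def tree_diameter(n, edges):
    """Longest path (counted in edges) between any two vertices of a tree
    with vertices 0..n-1, given as an undirected edge list."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def bfs_depths(src):
        depth = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in depth:
                    depth[w] = depth[u] + 1
                    q.append(w)
        return depth

    return max(max(bfs_depths(s).values()) for s in range(n))

def satisfies_dcmst_bound(n, tree_edges, k):
    """True iff the spanning tree contains no path with more than k edges."""
    return tree_diameter(n, tree_edges) <= k
```

For example, a path on 5 vertices has diameter 4 and violates a bound of k = 3, while a star on the same vertices has diameter 2.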

Date Issued

2001

Identifier

CFR0002215, ucf:52914

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFR0002215


Title

INVERSE BOUNDARY ELEMENT/GENETIC ALGORITHM METHOD FOR RECONSTRUCTION OF MULTIDIMENSIONAL HEAT FLUX DISTRIBUTIONS WITH FILM COOLING APPLICATIONS.

Creator

Silieti, Mahmood, Kassab, Alain, University of Central Florida

Abstract / Description

A methodology is formulated for the solution of the inverse problem concerned with the reconstruction of multidimensional heat fluxes for film cooling applications. The motivation for this study is the characterization of complex thermal conditions in industrial applications such as those encountered in film-cooled turbomachinery components. The heat conduction problem in the metal endwall/shroud is solved using the boundary element method (BEM), and the inverse problem is solved using a genetic algorithm (GA). Thermal conditions are overspecified at exposed surfaces amenable to measurement, while the temperature and surface heat flux distributions are unknown at the film cooling hole/slot walls. The latter are determined in an iterative process by developing two approaches. The first approach, developed for 2-D applications, solves an inverse problem whose objective is to adjust the film cooling hole/slot wall temperatures and heat fluxes until the temperature and heat flux at the measurement surfaces are matched in an overall heat conduction solution. The second approach, developed for 2-D and 3-D applications, is to distribute a set of singularities (sinks) in the vicinity of the cooling slot/hole surface, inside a fictitious extension of the physical domain or along the cooling hole centerline, with a given initial strength distribution. The inverse problem iteratively alters the strength distribution of the singularities (sinks) until the heat fluxes at the measurement surfaces are matched. The heat flux distributions are determined in a post-processing stage after the inverse problem is solved. The second approach provides a tremendous advantage in solving the inverse problem, particularly in 3-D applications, and it is recommended as the method of choice for this class of problems. It can be noted that the GA-reconstructed heat flux distributions are robust, yielding accurate results for both exact and error-laden inputs.
In all cases in this study, results from experiments are simulated using full conjugate heat transfer (CHT) finite volume models which incorporate the interactions of the external convection in the hot turbulent gas, the internal convection within the cooling plena, and the heat conduction in the metal endwall/shroud region. Extensive numerical investigations are undertaken to demonstrate the significant importance of conjugate heat transfer in film cooling applications and to identify the implications of various turbulence models in the prediction of accurate and more realistic surface temperatures and heat fluxes in the CHT simulations. These, in turn, are used to provide numerical inputs to the inverse problem. Single and multiple cooling slots, cylindrical cooling holes, and fan-shaped cooling holes are considered in this study. The turbulence closure is modeled using several two-equation approaches, a four-equation turbulence model, as well as five- and seven-moment Reynolds stress models. The predicted results, by the different turbulence models, for the cases of adiabatic and conjugate models, are compared to experimental data reported in the open literature. Results show the significant effects of conjugate heat transfer on the temperature field in the film cooling hole region, and the additional heating up of the cooling jet itself. Moreover, results from the detailed numerical studies presented in this study validate the inverse problem approaches and reveal good agreement between the BEM/GA-reconstructed heat fluxes and the CHT-simulated heat fluxes along the inaccessible cooling slot/hole walls.
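The GA half of such a methodology can be caricatured in a few lines. The sketch below evolves a vector of sink strengths until an abstract `forward` model's predicted surface fluxes match measured targets; it is a generic genetic-algorithm illustration, far simpler than the BEM/GA machinery of the dissertation, and every name in it is hypothetical:

```python
import random

def ga_fit_sinks(forward, target, n_sinks, pop=40, gens=200, seed=1):
    """Toy GA: evolve sink strengths so that forward(s) matches `target`."""
    rng = random.Random(seed)

    def err(s):  # sum-of-squares mismatch at the measurement surface
        return sum((p - t) ** 2 for p, t in zip(forward(s), target))

    popn = [[rng.uniform(-1.0, 1.0) for _ in range(n_sinks)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=err)
        popn = popn[:pop // 2]                       # selection: keep best half
        while len(popn) < pop:
            a, b = rng.sample(popn[:10], 2)          # parents from the elite
            child = [x if rng.random() < 0.5 else y  # uniform crossover
                     for x, y in zip(a, b)]
            child[rng.randrange(n_sinks)] += rng.gauss(0.0, 0.1)  # mutation
            popn.append(child)
    return min(popn, key=err)
```

Because the best individual is always retained, the mismatch decreases monotonically; a real inverse solver would replace `forward` with a BEM heat conduction solution.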

Date Issued

2004

Identifier

CFE0000166, ucf:52896

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0000166


Title

Reconfigurable Reflectarray Antennas with Bandwidth Enhancement for High Gain, Beam-Steering Applications.

Creator

Trampler, Michael, Gong, Xun, Wahid, Parveen, Jones, W Linwood, Chen, Kenle, Kuebler, Stephen, University of Central Florida

Abstract / Description

Reconfigurable reflectarrays are a class of antennas that combine the advantages of traditional parabolic antennas and phased array antennas. Chapter 1 discusses the basic operational theory of reflectarrays and their design. A review of previous research and the current status is also presented. Furthermore, the inherent advantages and disadvantages of the reflectarray topology are presented. In Chapter 2, a BST-integrated reflectarray operating at Ka band is presented. Due to the monolithic integration of the tuning element, this design is then extended to V band, where a novel interdigital gap configuration is utilized. Finally, to overcome the loss and phase limitations of the single-resonant design, a BST-integrated, dual-resonance unit cell operating at Ka band is designed. While the losses are still high, a 360° phase range is demonstrated.
In Chapter 3, the operational theory of dual-resonant array elements is introduced utilizing Q theory. An equivalent circuit is developed and used to demonstrate design trade-offs. Using this theory, the design procedure of a varactor-tuned dual-resonant unit cell operating at X band is presented. Detailed analysis of the design is performed by full-wave simulations and verified via measurements. In Chapter 4, the array performance of the dual-resonance unit cell is analyzed. The effects of varying angles of incidence on the array element are studied using Floquet simulations. The beam scanning, cross-polarization, and bandwidth performance of a 7×7 element reflectarray is analyzed using full-wave simulations and verified via measurements. In Chapter 5, a loss analysis of the dual-resonant reflectarray element is performed. Major sources of loss are identified utilizing full-wave simulations before an equivalent circuit is utilized to optimize the loss performance while maintaining a full phase range and improved bandwidth performance. Finally, the dual-resonance unit cell is modified to support two linear polarizations.
Overall, the operational and design theory of dual-resonant reflectarray unit cells using Q theory is developed. A valuable equivalent circuit is developed and used to aid in array element design as well as to optimize the loss and bandwidth performance. The proposed theoretical models provide valuable physical insight through the use of Q theory to greatly aid in reflectarray design.

Date Issued

2019

Identifier

CFE0007735, ucf:52457

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0007735


Title

TOWARDS CALIBRATION OF OPTICAL FLOW OF CROWD VIDEOS USING OBSERVED TRAJECTORIES.

Creator

Elbadramany, Iman, Kaup, David, University of Central Florida

Abstract / Description

The need exists for finding a quantitative method for validating crowd simulations. One approach is to use optical flow of videos of real crowds to obtain velocities that can be used for comparison to simulations. Optical flow, in turn, needs to be calibrated to be useful. It is essential to show that optical flow velocities obtained from crowd videos can be mapped into the spatially averaged velocities of the observed trajectories of crowd members, and to quantify the extent of the correlation of the results. This research investigates methods to uncover the best conditions for a good correlation between optical flow and the average motion of individuals in crowd videos, with the aim that this will help in the quantitative validation of simulations. The first approach was to use a simple linear proportionality relation, with a single coefficient, alpha, between the velocity vector of the optical flow and the observed velocity of crowd members in a video or simulation. Since there are many variables that affect alpha, an attempt was made to find the best possible conditions for determining alpha by varying experimental and optical flow settings. The measure of a good alpha was chosen to be that alpha does not vary excessively over a number of video frames. The best conditions for a low coefficient of variation of alpha using the Lucas-Kanade optical flow algorithm were found to be when a larger aperture of 15×15 pixels was used, combined with a smaller threshold. Adequate results were found at a cell size of 40×40 pixels; the improvement in detecting details when smaller cells are used did not reduce the variability of alpha, and required much more computing power. A reduction in the variability of alpha can be obtained by spreading the tracked location of a crowd member from a pixel into a rectangle.
The Particle Image Velocimetry optical flow algorithm had better correspondence with the velocity vectors of manually tracked crowd members than results obtained using the Lucas-Kanade method. Here, also, it was found that 40×40 pixel cells were better than 15×15. A second attempt at quantifying the correlation between optical flow and actual crowd member velocities was studied using simulations. Two processes were researched, which utilized geometrical correction of the perspective distortion of the crowd videos. One process geometrically corrects the video and then obtains optical flow data. The other obtains optical flow data from the video and then geometrically corrects the data. The results indicate that the first process worked better. Correlation was calculated between sets of data obtained from the average of twenty frames. This was found to be higher than calculating correlations between the velocities of cells in each pair of frames. An experiment carried out to predict crowd tracks using optical flow and a calculated parameter, beta, seems to give promising results.
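The calibration quantities described above reduce to two small formulas: a per-frame proportionality coefficient alpha, and the coefficient of variation (std/mean) of alpha across frames as the stability measure. A minimal sketch, with hypothetical names and speeds treated as scalars rather than full velocity vectors:

```python
import math

def calibration_alpha(optflow_speeds, true_speeds):
    """Per-frame coefficient alpha between optical-flow speed and the
    observed (ground-truth) crowd member speed."""
    return [o / t for o, t in zip(optflow_speeds, true_speeds)]

def coefficient_of_variation(xs):
    """std/mean; a low value means alpha is stable across frames."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return math.sqrt(var) / mean
```

A perfectly linear relation gives a constant alpha and a coefficient of variation of zero; larger values signal that a single scalar calibration is a poor fit.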

Date Issued

2011

Identifier

CFE0004024, ucf:49175

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0004024


Title

SPECTRUM SHARING AND SERVICE PRICING IN DYNAMIC SPECTRUM ACCESS NETWORKS.

Creator

Brahma, Swastik, Chatterjee, Mainak, University of Central Florida

Abstract / Description

Traditionally, radio spectrum has been statically allocated to wireless service providers (WSPs). Regulators, like the FCC, give wireless service providers exclusive long-term licenses for using a specific range of frequencies in particular geographic areas. Moreover, restrictions are imposed on the technologies to be used and the services to be provided. The lack of flexibility in static spectrum allocation constrains the ability to make use of new technologies and the ability to redeploy the spectrum to higher-valued uses, thereby resulting in inefficient spectrum utilization [23, 38, 42, 62, 67]. These limitations have motivated a paradigm shift from static spectrum allocation towards a more 'liberalized' notion of spectrum management in which secondary users can borrow idle spectrum from primary spectrum licensees without causing harmful interference to the latter, a notion commonly referred to as dynamic spectrum access (DSA) or open spectrum access. Cognitive radio [30, 47], empowered by Software Defined Radio (SDR), is poised to promote the efficient use of spectrum by adopting this open spectrum approach. In this dissertation, we first address the problem of dynamic channel (spectrum) access by a set of cognitive-radio-enabled nodes, where each node, acting in a selfish manner, tries to access and use as many channels as possible, subject to interference constraints. We model the dynamic channel access problem as a modified Rubinstein-Stahl bargaining game. In our model, each node negotiates with the other nodes to obtain an agreeable sharing rule of the available channels, such that no two interfering nodes use the same channel. We solve the bargaining game by finding the Subgame Perfect Nash Equilibrium (SPNE) strategies of the nodes. First, we consider the finite horizon version of the bargaining game and investigate its SPNE strategies that allow each node to maximize its utility against the other nodes (opponents).
We then extend these results to the infinite horizon bargaining game. Furthermore, we identify Pareto optimal equilibria of the game for improving spectrum utilization. The bargaining solution ensures that no node is starved of channels. The spectrum that a secondary node acquires comes at a cost. Thus it becomes important to study the 'end system' perspective of such a cost by focusing on its implications. Specifically, we consider the problem of incentivizing nodes to provide the service of routing using the acquired spectrum. In this problem, each secondary node, having a certain capacity, incurs a cost for routing traffic through it. Secondary nodes will not have an incentive to relay traffic unless they are compensated for the costs they incur in forwarding traffic. We propose a path auction scheme in which each secondary node announces its cost and capacity to the routing mechanism, both of which are considered private information known only to the node. We design a route selection mechanism and a pricing function that can induce nodes to reveal their cost and capacity honestly (making our auction truthful), while minimizing the payment that needs to be given to the nodes (making our auction optimal). By considering the capacity constraints of the nodes, we explicitly support multiple-path routing. For deploying our path-auction-based routing mechanism in DSA networks, we provide polynomial-time algorithms to find the optimal route over which traffic should be routed and to compute the payment that each node should receive. All our proposed algorithms have been evaluated via extensive simulation experiments. These results help to validate our design philosophy and also illustrate the effectiveness of our solution approach.
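A standard way to make a path auction truthful is a VCG-style payment: each relay on the winning route is paid its declared cost plus the marginal increase in route cost that its removal would cause. The sketch below illustrates that idea on a node-weighted graph; it is a generic textbook illustration under assumed names, not the dissertation's optimal mechanism, and it ignores capacity constraints:

```python
import heapq

def cheapest_route(adj, costs, s, t, excluded=frozenset()):
    """Minimum total relay cost from s to t, where each intermediate
    node v charges costs[v]; Dijkstra over node weights."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                       # stale queue entry
        if u == t:
            return d
        for v in adj[u]:
            if v in excluded:
                continue
            nd = d + (costs.get(v, 0) if v != t else 0)
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float('inf')

def vcg_payment(adj, costs, s, t, node):
    """Truthful (VCG-style) payment to a relay node on the cheapest route:
    its declared cost plus the harm its removal causes."""
    with_node = cheapest_route(adj, costs, s, t)
    without_node = cheapest_route(adj, costs, s, t, excluded={node})
    return costs[node] + (without_node - with_node)
```

With two disjoint relays of cost 1 and 3, the cheap relay is paid 3: under-reporting cannot raise its payment, which is what makes the scheme truthful.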

Date Issued

2011

Identifier

CFE0004049, ucf:49125

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0004049


Title

Design of the layout of a manufacturing facility with a closed loop conveyor with shortcuts using queueing theory and genetic algorithms.

Creator

Lasrado, Vernet, Nazzal, Dima, Mollaghasemi, Mansooreh, Reilly, Charles, Garibay, Ivan, Sivo, Stephen, Armacost, Robert, University of Central Florida

Abstract / Description

With the ongoing technology battles and price wars in today's competitive economy, every company is looking for an advantage over its peers. A particular choice of facility layout can have a significant impact on the ability of a company to maintain lower operational expenses under uncertain economic conditions. It is known that systems with less congestion have lower operational costs. Traditionally, manufacturing facility layout methods aim at minimizing the total distance traveled, the material handling cost, or the time in the system (based on distance traveled at a specific speed). The proposed methodology solves the looped layout design problem for a looped layout manufacturing facility with a looped conveyor material handling system with shortcuts, using a system performance metric, i.e., the work in process (WIP) on the conveyor and at the input stations to the conveyor, as a factor in the minimizing function for the facility layout optimization problem, which is solved heuristically using a permutation genetic algorithm. The proposed methodology also presents the case for determining the shortcut locations across the conveyor simultaneously (while determining the layout of the stations around the loop) versus the traditional method, which determines the shortcuts sequentially (after the layout of the stations has been determined). The proposed methodology also presents an analytical estimate for the work in process at the input stations to the closed-loop conveyor. It is contended that the proposed methodology (using the WIP as a factor in the minimizing function for the facility layout while simultaneously solving for the shortcuts) will yield a facility layout which is less congested than a facility layout generated by the traditional methods (using the total distance traveled as a factor of the minimizing function for the facility layout while sequentially solving for the shortcuts).
The proposed methodology is tested on a virtual 300 mm semiconductor wafer fabrication facility with a looped conveyor material handling system with shortcuts. The results show that the facility layouts generated by the proposed methodology have significantly less congestion than facility layouts generated by traditional methods. The validation of the developed analytical estimate of the work in process at the input stations reveals that the proposed methodology works extremely well for systems with Markovian arrival processes.
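A permutation genetic algorithm for layout needs crossover operators that keep every child a valid permutation of stations. The sketch below shows order crossover (OX), one common choice for this encoding; it is a generic operator, and the WIP-based objective of the methodology is not reproduced here:

```python
import random

def order_crossover(p1, p2, rng):
    """OX: copy a random slice from parent p1, then fill the remaining
    positions with the missing stations in the order they appear in p2.
    The child is always a valid permutation."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]
    fill = [g for g in p2 if g not in child[i:j + 1]]
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child
```

In a full layout GA, each permutation would be scored by the congestion (WIP) objective and the best orderings selected for further breeding.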

Date Issued

2011

Identifier

CFE0004125, ucf:49088

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0004125


Title

Quantum Algorithms for: Quantum Phase Estimation, Approximation of the Tutte Polynomial and Black-box Structures.

Creator

Ahmadi Abhari, Seyed Hamed, Brennan, Joseph, Mucciolo, Eduardo, Li, Xin, Marinescu, Dan, University of Central Florida

Abstract / Description

In this dissertation, we investigate three different problems in the field of quantum computation. First, we discuss the quantum complexity of evaluating the Tutte polynomial of a planar graph. Furthermore, we devise a new quantum algorithm for approximating the phase of a unitary matrix. Finally, we provide quantum tools that can be utilized to extract the structure of black-box modules and algebras. While quantum phase estimation (QPE) is at the core of many quantum algorithms known to date, its physical implementation (algorithms based on the quantum Fourier transform (QFT)) is highly constrained by the requirement of high-precision controlled phase shift operators, which remain difficult to realize. In the second part of this dissertation, we introduce an alternative approach to approximately implement QPE with arbitrary constant-precision controlled phase shift operators. The new quantum algorithm bridges the gap between QPE algorithms based on QFT and Kitaev's original approach. For approximating the eigenphase precise to the n-th bit, Kitaev's original approach does not require any controlled phase shift operator. In contrast, QPE algorithms based on QFT or approximate QFT require controlled phase shift operators with precision of at least π/2^n. The new approach fills the gap and requires only arbitrary constant-precision controlled phase shift operators. From a physical implementation viewpoint, the new algorithm outperforms Kitaev's approach. The other problem we investigate relates to approximating the Tutte polynomial. We show that the problem of approximately evaluating the Tutte polynomial of triangular graphs at the points (q, 1/q) of the Tutte plane is BQP-complete for (most) roots of unity q.
We also consider circular graphs and show that the problem of approximately evaluating the Tutte polynomial of these graphs at a point is DQC1-complete, and at some points is in BQP. To show that these problems can be solved by a quantum computer, we rely on the relation of the Tutte polynomial of a planar graph G with the Jones and HOMFLY polynomials of the alternating link D(G) given by the medial graph of G. In the case of our graphs, the corresponding links are equal to the plat and trace closures of braids. It is known how to evaluate the Jones and HOMFLY polynomials for closures of braids. To establish the hardness results, we use the property that the images of the generators of the braid group under the irreducible Jones-Wenzl representations of the Hecke algebra have finite order. We show that for each braid we can efficiently construct a braid such that the evaluation of the Jones and HOMFLY polynomials of their closures at a fixed root of unity leads to the same value, and that the closures of the resulting braid are alternating links. The final part of the dissertation focuses on finding the structure of a black-box module or algebra. Suppose we are given black-box access to a finite module M or algebra over a finite ring R and a list of generators for M and R. We show how to find a linear basis and structure constants for M in quantum poly(log |M|) time. This generalizes a recent quantum algorithm of Arvind et al. which finds a basis representation for rings. We then show that our algorithm is a useful primitive allowing a quantum computer to determine the structure of a finite associative algebra as a direct sum of simple algebras. Moreover, it solves a wide variety of problems regarding finite modules and rings. Although our quantum algorithm is based on Abelian Fourier transforms, it solves problems regarding the multiplicative structure of modules and algebras, which need not be commutative.
Examples include finding the intersection and quotient of two modules, finding the additive and multiplicative identities in a module, computing the order of a module, solving linear equations over modules, deciding whether an ideal is maximal, finding annihilators, and testing the injectivity and surjectivity of ring homomorphisms. These problems appear to be exponentially hard classically.
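The flavor of Kitaev-style phase estimation, which the new algorithm builds on, can be simulated classically: two Hadamard-test circuits yield outcome frequencies that encode cos(2πφ) and sin(2πφ), from which the eigenphase φ is recovered with atan2. The toy sketch below simulates only the measurement statistics (it is not the constant-precision algorithm of the dissertation, and all names are hypothetical):

```python
import math
import random

def estimate_phase(phi, shots=200000, seed=7):
    """Kitaev-style estimation of an eigenphase phi in [0, 1): sample the
    two Hadamard-test circuits whose outcome probabilities encode
    cos(2*pi*phi) and sin(2*pi*phi), then recover phi via atan2."""
    rng = random.Random(seed)
    p_cos = (1 + math.cos(2 * math.pi * phi)) / 2   # plain Hadamard test
    p_sin = (1 + math.sin(2 * math.pi * phi)) / 2   # with an extra S gate
    c = sum(rng.random() < p_cos for _ in range(shots)) / shots
    s = sum(rng.random() < p_sin for _ in range(shots)) / shots
    est = math.atan2(2 * s - 1, 2 * c - 1) / (2 * math.pi)
    return est % 1.0
```

The statistical error shrinks as 1/sqrt(shots); the dissertation's contribution concerns how few distinct controlled-phase precisions are needed when this idea is sharpened bit by bit.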

Date Issued

2012

Identifier

CFE0004239, ucf:49526

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0004239


Title

A SUSTAINABLE AUTONOMIC ARCHITECTURE FOR ORGANICALLY RECONFIGURABLE COMPUTING SYSTEMS.

Creator

Oreifej, Rashad, DeMara, Ronald, University of Central Florida

Abstract / Description

A Sustainable Autonomic Architecture for Organically Reconfigurable Computing Systems based on SRAM Field Programmable Gate Arrays (FPGAs) is proposed, modeled analytically, simulated, prototyped, and measured. Low-level organic elements are analyzed and designed to achieve novel self-monitoring, self-diagnosis, and self-repair organic properties. The prototype of a 2-D spatial gradient Sobel video edge-detection organic system use case, developed on a XC4VSX35 Xilinx Virtex-4 Video Starter Kit, is presented. Experimental results demonstrate the applicability of the proposed architecture and provide the infrastructure to quantify the performance and overcome fault-handling limitations. Dynamic online autonomous functionality restoration after a malfunction, or a functionality shift due to changing requirements, is achieved at a fine granularity by exploiting dynamic Partial Reconfiguration (PR) techniques. A Genetic Algorithm (GA)-based hardware/software platform for intrinsic evolvable hardware is designed and evaluated for digital circuit repair using a variety of well-accepted benchmarks. Dynamic bitstream compilation for enhanced mutation and crossover operators is achieved by directly manipulating the bitstream using a layered toolset. Experimental results on the edge-detector organic system prototype have shown complete organic online refurbishment after a hard fault. In contrast to previous toolsets requiring many milliseconds or seconds, an average of 0.47 microseconds is required to perform the genetic mutation, 4.2 microseconds to perform the single-point conventional crossover, 3.1 microseconds to perform Partial Match Crossover (PMX) as well as Order Crossover (OX), 2.8 microseconds to perform Cycle Crossover (CX), and 1.1 milliseconds for one input pattern intrinsic evaluation. These represent a performance advantage of three orders of magnitude over the JBITS software framework and more than seven orders of magnitude over the Xilinx design flow.
The Combinatorial Group Testing (CGT) technique was combined with the conventional GA in what is called the CGT-pruned GA to reduce repair time and increase system availability. Results have shown up to a 37.6% convergence advantage using the pruned technique. Lastly, a quantitative stochastic sustainability model for reparable systems is formulated to evaluate the sustainability of FPGA-based reparable systems. This model computes at design time the resources required for refurbishment to meet mission availability and lifetime requirements in a given fault-susceptible mission. By applying this model to MCNC benchmark circuits and the Sobel edge-detector in a realistic space mission use case on a Xilinx Virtex-4 FPGA, we demonstrate a comprehensive model encompassing the interrelationships between system sustainability and fault rates, utilized and redundant hardware resources, repair policy parameters, and decaying reparability.
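Of the crossover operators timed above, Partial Match Crossover (PMX) is the least obvious: it swaps a slice between two permutations and repairs the resulting duplicates through a positional mapping. A generic sketch of the operator (not the bitstream-level implementation described above):

```python
import random

def pmx(p1, p2, rng):
    """Partial Match Crossover: copy a random slice from p1 into a copy of
    p2, then resolve conflicts outside the slice by following the value
    mapping p1[k] -> p2[k], so the child is always a valid permutation."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = p2[:]
    mapping = {}
    for k in range(i, j + 1):
        child[k] = p1[k]
        mapping[p1[k]] = p2[k]
    for k in list(range(i)) + list(range(j + 1, n)):
        g = child[k]
        while g in mapping:          # chase displaced values out of the slice
            g = mapping[g]
        child[k] = g
    return child
```

The repair chains always terminate at a value not present in the copied slice, which is why PMX never produces duplicate genes.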

Date Issued

2011

Identifier

CFE0003969, ucf:48661

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0003969


Title

Model Selection via Racing.

Creator

Zhang, Tiantian, Georgiopoulos, Michael, Anagnostopoulos, Georgios, Wu, Annie, Hu, Haiyan, Nickerson, David, University of Central Florida

Abstract / Description

Model Selection (MS) is an important aspect of machine learning, as necessitated by the No Free Lunch theorem. Briefly speaking, the task of MS is to identify a subset of models that are optimal in terms of preselected optimization criteria. There are many practical applications of MS, such as model parameter tuning, personalized recommendations, A/B testing, etc. Lately, some MS research has focused on trading off exactness of the optimization with somewhat alleviating the computational...
Model Selection (MS) is an important aspect of machine learning, as necessitated by the No Free Lunch theorem. Briefly speaking, the task of MS is to identify a subset of models that are optimal in terms of pre-selected optimization criteria. There are many practical applications of MS, such as model parameter tuning, personalized recommendations, A/B testing, etc. Lately, some MS research has focused on trading off exactness of the optimization against a reduction in the computational burden entailed. Recent attempts along this line include metaheuristic optimization, local-search-based approaches, sequential model-based methods, algorithm portfolio approaches, and multi-armed bandits.

Racing Algorithms (RAs) are an active research area in MS; they trade some computational cost for a reduced, but acceptable, likelihood that the models returned are indeed optimal among the given ensemble of models. All existing RAs in the literature are designed as Single-Objective Racing Algorithms (SORAs) for Single-Objective Model Selection (SOMS), where a single optimization criterion is considered for measuring the goodness of models. Moreover, they are offline algorithms in which MS occurs before model deployment, and the selected models are optimal in terms of their overall average performance on a validation set of problem instances. This work investigates racing approaches along two distinct directions: Extreme Model Selection (EMS) and Multi-Objective Model Selection (MOMS). In EMS, given a problem instance and a limited computational budget shared among all the candidate models, one is interested in maximizing the final solution quality. In such a setting, MS occurs during model comparison in terms of maximum performance and involves no model validation. EMS is a natural framework for many applications, yet EMS problems remain unaddressed by current racing approaches.

In this work, the first RA for EMS, named Max-Race, is developed; it optimizes the extreme solution quality by automatically allocating the computational resources among an ensemble of problem solvers for a given problem instance. In Max-Race, a significant difference between the extreme performances of any pair of models is statistically inferred via a parametric hypothesis test under the Generalized Pareto Distribution (GPD) assumption. Experimental results have confirmed that Max-Race is capable of identifying the best extreme model with high accuracy and low computational cost. Furthermore, in machine learning, as well as in many real-world applications, a variety of MS problems are multi-objective in nature. MS which simultaneously considers multiple optimization criteria is referred to as MOMS. Under this scheme, a set of Pareto-optimal models is sought that reflects a variety of compromises between the optimization objectives. So far, MOMS problems have received little attention in the relevant literature. Therefore, this work also develops the first Multi-Objective Racing Algorithm (MORA) for a fixed-budget setting, namely S-Race. S-Race addresses MOMS in the proper sense of Pareto optimality. Its key decision mechanism is the non-parametric sign test, which is employed for inferring pairwise dominance relationships. Moreover, S-Race is able to strictly control the overall probability of falsely eliminating any non-dominated models at a user-specified significance level. Additionally, SPRINT-Race, the first MORA for a fixed-confidence setting, is also developed. In SPRINT-Race, pairwise dominance and non-dominance relationships are established via the Sequential Probability Ratio Test with an Indifference zone. Moreover, the overall probability of falsely eliminating any non-dominated models or mistakenly retaining any dominated models is controlled at a prescribed significance level. Extensive experimental analysis has demonstrated the efficiency and advantages of both S-Race and SPRINT-Race in MOMS.
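The sign-test decision mechanism described for S-Race can be sketched in a few lines. The following is an illustrative reconstruction, not code from the dissertation: the function names are ours, and the dominance rule (significantly better on at least one objective, significantly worse on none, with lower scores taken as better) is a simplification of the full procedure.

```python
from math import comb

def sign_test(wins: int, losses: int, alpha: float = 0.05) -> bool:
    """One-sided sign test (ties discarded beforehand).

    Rejects H0: P(win) <= 1/2 when the exact binomial tail
    probability P[X >= wins], X ~ Binomial(wins+losses, 1/2),
    falls below the significance level alpha.
    """
    n = wins + losses
    if n == 0:
        return False
    p = sum(comb(n, k) for k in range(wins, n + 1)) / 2 ** n
    return p < alpha

def dominates(a_scores, b_scores, alpha: float = 0.05) -> bool:
    """Statistically infer whether model A Pareto-dominates model B.

    a_scores / b_scores: one list of paired observations per objective
    (lower is better).  A dominates B when A is significantly better on
    at least one objective and B is significantly better on none.
    """
    a_better_somewhere = False
    for a_obs, b_obs in zip(a_scores, b_scores):
        a_wins = sum(a < b for a, b in zip(a_obs, b_obs))
        b_wins = sum(b < a for a, b in zip(a_obs, b_obs))
        if sign_test(b_wins, a_wins, alpha):  # B significantly better
            return False
        if sign_test(a_wins, b_wins, alpha):
            a_better_somewhere = True
    return a_better_somewhere
```

In a race, a model for which `dominates(other, this)` holds against some competitor would be eliminated, and the test's significance level is what lets the overall error probability be controlled.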

Date Issued

2016

Identifier

CFE0006203, ucf:51094

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0006203


Title

Modeling User Transportation Patterns Using Mobile Devices.

Creator

Davami, Erfan, Sukthankar, Gita, Gonzalez, Avelino, Foroosh, Hassan, Sukthankar, Rahul, University of Central Florida

Abstract / Description

Participatory sensing frameworks use humans and their computing devices as a large mobile sensing network. Dramatic gains in accessibility and affordability have turned mobile devices (smartphones and tablet computers) into the most popular computational machines in the world, exceeding laptops. By the end of 2013, more than 1.5 billion people on earth had a smartphone. Increased coverage and higher speeds of cellular networks have given these devices the power to constantly stream large amounts of data. Most mobile devices are equipped with advanced sensors such as GPS, cameras, and microphones. This expansion of smartphone numbers and power has created a sensing system capable of achieving tasks practically impossible for conventional sensing platforms. One of the advantages of participatory sensing platforms is their mobility, since human users are often in motion.

This dissertation presents a set of techniques for modeling and predicting user transportation patterns from cellphone and social media check-ins. To study large-scale transportation patterns, I created a mobile phone app, Kpark, for estimating parking lot occupancy on the UCF campus. Kpark aggregates individual user reports on parking space availability to produce a global picture across all the campus lots using crowdsourcing. An issue with crowdsourcing is the possibility of receiving inaccurate information from users, either through error or malicious motivations. One method of combating this problem is to model the trustworthiness of individual participants and use that information to selectively include or discard data. This dissertation presents a comprehensive study of the performance of different worker-quality and data-fusion models with plausible simulated user populations, as well as an evaluation of their performance on the real data obtained from a full release of the Kpark app on the UCF Orlando campus. To evaluate individual trust prediction methods, an algorithm selection portfolio was introduced to take advantage of the strengths of each method and maximize the overall prediction performance.

Like many other crowdsourced applications, user incentivization is an important aspect of creating a successful crowdsourcing workflow. For this project, a form of non-monetized incentivization called gamification was used to create competition among users, with the aim of increasing the quantity and quality of data submitted to the project. This dissertation reports on the performance of Kpark at predicting parking occupancy, increasing user app usage, and predicting worker quality.

Date Issued

2015

Identifier

CFE0005597, ucf:50258

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0005597


Title

Cascaded Digital Refinement for Intrinsic Evolvable Hardware.

Creator

Thangavel, Vignesh, DeMara, Ronald, Sundaram, Kalpathy, Song, Zixia, University of Central Florida

Abstract / Description

Intrinsic evolution of reconfigurable hardware is sought to solve computational problems using the intrinsic processing behavior of System-on-Chip (SoC) platforms. SoC devices combine capabilities of analog and digital embedded components within a reconfigurable fabric under software control. A new technique is developed for these fabrics that leverages the enhanced accuracy and signal refinement capability of the digital resources to improve the circuit performance of the analog resources, which provide low-power processing and high computation rates. In particular, Differential Digital Correction (DDC) is developed, utilizing an error metric computed from the evolved analog circuit to reconfigure the digital fabric, thereby enhancing the precision of analog computations. The approach developed herein, Cascaded Digital Refinement (CaDR), explores a multilevel strategy of utilizing DDC to refine intrinsic evolution of analog computational circuits and construct building blocks, known as Constituent Functional Blocks (CFBs). The CFBs are developed in a cascaded sequence, followed by digital evolution of higher-level control of these CFBs to build the final solution for the larger circuit at hand. One such platform, the Cypress PSoC 5LP, was utilized to realize solutions to ordinary differential equations by first evolving various powers of the independent variable, followed by their combinations, to emulate mathematical series-based solutions over the desired range of values. This is shown to enhance accuracy and precision while incurring lower computational energy and time overheads. The fitness function for each CFB being evolved is different from the fitness function that is defined for the overall problem.

Date Issued

2015

Identifier

CFE0005723, ucf:50123

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0005723


Title

ELECTRICAL CAPACITANCE VOLUME TOMOGRAPHY OF HIGH CONTRAST DIELECTRICS USING A CUBOID GEOMETRY.

Creator

Nurge, Mark, Schelling, Patrick, University of Central Florida

Abstract / Description

An Electrical Capacitance Volume Tomography system has been created for use with a new image reconstruction algorithm capable of imaging high-contrast dielectric distributions. The electrode geometry consists of two 4 x 4 parallel planes of copper conductors connected through custom-built switch electronics to a commercially available capacitance-to-digital converter. Typical electrical capacitance tomography (ECT) systems rely solely on mutual capacitance readings to reconstruct images of dielectric distributions. This dissertation presents a method of reconstructing images of high-contrast dielectric materials using only the self-capacitance measurements. By constraining the unknown dielectric material to one of two values, the inverse problem is no longer ill-determined. Resolution becomes limited only by the accuracy and resolution of the measurement circuitry. Images were reconstructed using this method with both synthetic and real data acquired using an aluminum structure inserted at different positions within the sensing region. Comparisons with standard two-dimensional ECT systems highlight the capabilities and limitations of the electronics and reconstruction algorithm.

Date Issued

2007

Identifier

CFE0001591, ucf:47119

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0001591


Title

A MECHANICSBASED APPROACH FOR PUTT DISTANCE OPTIMIZATION.

Creator

SantiagoMartinez, Pascual, Gordon, Ali, University of Central Florida

Abstract / Description

Quantifying the core mechanics of putting is imperative to developing a reliable model that predicts post-collision ball behavior. A preliminary model for the stroking motion of putting and the putter-ball collision is developed alongside experiments, establishing an empirical model that supports the theory. The goal of the present study is to develop a correlation between the backstroke of a putt, or the pre-impact translation of the putter, and the post-impact displacement of the golf ball. This correlation is subsequently utilized to generate an algorithm that predicts the two-dimensional ball trajectory based on putt displacement and putting surface texture by means of finite element analysis. In generating a model that accurately describes the putting behavior, the principles of classical mechanics were utilized. As a result, the putt displacement was completely described as a function of backstroke and environmental parameters such as friction, the slope of the green, and the elasticity of the putter-ball collision. In support of the preliminary model, experimental data were gathered from golfers of all levels. The collected data demonstrated a linear correlation between backstroke and putt distance, with the environmental parameters factoring in as a constant value; moreover, the data showed that experienced golfers tend to have a constant acceleration through ball impact. Combining the empirical results with the trajectory prediction algorithm will deliver an accurate predictor of ball behavior that can be easily implemented by golfers under most practical applications. Putt distance to backstroke ratios were developed under a variety of conditions.

Date Issued

2015

Identifier

CFH0004764, ucf:45340

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFH0004764


Title

FREQUENCY DISTRIBUTION OF PYROXENE TYPES AND A METHOD TO SEPARATE THE COMPOSITION OF MULTIPLE PYROXENES IN A SAMPLE.

Creator

Davis, Jimmy, Britt, Daniel, University of Central Florida

Abstract / Description

Determining the mafic mineral composition of asteroid bodies is a topic reviewed by M.J. Gaffey et al. (2002). The iterative procedure discussed there can be implemented as an algorithm, and such efforts revealed weaknesses that are examined in this work. We seek to illustrate the limits of this method and graphically determine its predictions. There are boundaries in the given formulae where the equations break down. In ranges where mafic mixtures are predicted, a method is illustrated that allows a decoupling of these mixtures into their constituents.

Date Issued

2007

Identifier

CFE0001922, ucf:47493

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0001922


Title

FALCONET: FORCEFEEDBACK APPROACH FOR LEARNING FROM COACHING AND OBSERVATION USING NATURAL AND EXPERIENTIAL TRAINING.

Creator

Stein, Gary, Gonzalez, Avelino, University of Central Florida

Abstract / Description

Building an intelligent agent model from scratch is a difficult task. Thus, it would be preferable to have an automated process perform this task. There have been many manual and automatic techniques; however, each of these has various issues with obtaining, organizing, or making use of the data. Additionally, it can be difficult to get perfect data or, once the data is obtained, impractical to get a human subject to explain why some action was performed. Because of these problems, machine learning from observation emerged to produce agent models based on observational data. Learning from observation uses unobtrusive and purely observable information to construct an agent that behaves similarly to the observed human. Typically, an observational system builds an agent based only on pre-recorded observations. This type of system works well with respect to agent creation, but lacks the ability to be trained and updated online. To overcome these deficiencies, the proposed system adds an augmented force-feedback stage of training that senses the agent's intentions haptically. Furthermore, because not all possible situations can be observed or directly trained, a third stage of learning from practice is added for the agent to gain additional knowledge for a particular mission. These stages of learning mimic the natural way a human might learn a task: first watching the task being performed, then being coached to improve, and finally practicing to self-improve. The hypothesis is that a system that is initially trained using human-recorded data (Observational), then tuned and adjusted using force-feedback (Instructional), and then allowed to perform the task in different situations (Experiential) will be better than any individual step or combination of steps.

Date Issued

2009

Identifier

CFE0002746, ucf:48157

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0002746


Title

GLOBAL SECURE SETS OF TREES AND GRIDLIKE GRAPHS.

Creator

Ho, Yiuyu, Dutton, Ronald, University of Central Florida

Abstract / Description

Let G = (V, E) be a graph and let S be a subset of vertices. The set S is a defensive alliance if for all x in S, |N[x] ∩ S| >= |N[x] - S|. The concept of defensive alliances was introduced in [1], primarily for the modeling of nations in times of war, where allied nations are in mutual agreement to join forces if any one of them is attacked. For a vertex x in a defensive alliance, the number of neighbors of x inside the alliance, plus the vertex x itself, is at least the number of neighbors of x outside the alliance. In a graph model, the vertices of a graph represent nations and the edges represent country boundaries. Thus, if the nation corresponding to a vertex x is attacked by its neighbors outside the alliance, the attack can be thwarted by x with the assistance of its neighbors in the alliance. In a different subject matter, [2] applies graph theory to model the world wide web, where vertices represent websites and edges represent links between websites. A web community is a subset of vertices of the web graph such that every vertex in the community has at least as many neighbors in the set as it has outside. So, a web community C satisfies: for all x in C, |N[x] ∩ C| > |N[x] - C|. These sets are very similar to defensive alliances. They are known as strong defensive alliances in the literature of alliances in graphs. Other areas of application for alliances and related topics include classification, data clustering, ecology, business and social networks.

Consider the application of modeling nations in times of war introduced in the first paragraph. In a defensive alliance, any attack on a single member of the alliance can be successfully defended. However, as will be demonstrated in Chapter 1, a defensive alliance may not be able to properly defend itself when multiple members are under attack at the same time. The concept of secure sets is introduced in [3] for exactly this purpose. The set S is a secure set if every subset X of S, with the assistance of vertices in S, can successfully defend against simultaneous attacks coming from vertices outside of S. The exact definition of simultaneous attacks and how such attacks may be defended will be provided in Chapter 1. In [3], the authors presented an interesting characterization of secure sets which resembles the definition of defensive alliances: a nonempty set S is a secure set if and only if for every subset X of S, |N[X] ∩ S| >= |N[X] - S| ([3], Theorem 11). The cardinality of a minimum secure set is the security number of G, denoted s(G). A secure set S is a global secure set if it further satisfies N[S] = V. The cardinality of a minimum global secure set of G is the global security number of G, denoted gs(G). In this work, we present results on secure sets and global secure sets. In particular, we present algorithms and bounds for the global security numbers of trees, and the exact values of the global security numbers of paths, cycles and their Cartesian products.

[1] Petter Kristiansen, Sandra M. Hedetniemi, and Stephen T. Hedetniemi. "Alliances in graphs." J. Combin. Math. Combin. Comput., 48:157-177, 2004.
[2] G. W. Flake, S. Lawrence, and C. L. Giles. "Efficient identification of web communities." ACM SIGKDD, pp. 150-160, 2000.
[3] Robert C. Brigham, Ronald D. Dutton, and Stephen T. Hedetniemi. "Security in graphs." Discrete Appl. Math., 155(13):1708-1714, 2007.
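The alliance and secure-set conditions translate directly into a brute-force checker. The sketch below is our illustration (the function names are ours, and the secure-set test is exponential in |S|, so it is only usable on small examples); a graph is represented as a dict mapping each vertex to its set of neighbors.

```python
from itertools import combinations

def closed_nbhd(G, X):
    """N[X]: the vertices of X together with all of their neighbors."""
    out = set(X)
    for v in X:
        out |= G[v]
    return out

def is_defensive_alliance(G, S):
    """Check: for all x in S, |N[x] & S| >= |N[x] - S|."""
    S = set(S)
    return all(
        len(closed_nbhd(G, {x}) & S) >= len(closed_nbhd(G, {x}) - S)
        for x in S
    )

def is_secure(G, S):
    """Brute-force check of the Brigham-Dutton-Hedetniemi characterization:
    S is secure iff |N[X] & S| >= |N[X] - S| for every nonempty X subset of S."""
    S = set(S)
    return bool(S) and all(
        len(closed_nbhd(G, X) & S) >= len(closed_nbhd(G, X) - S)
        for r in range(1, len(S) + 1)
        for X in combinations(S, r)
    )
```

For example, on the path 1-2-3, the set {1, 2} is secure, while {2} alone is neither secure nor a defensive alliance (its one in-alliance supporter, itself, is outnumbered by its two outside neighbors).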

Date Issued

2011

Identifier

CFE0003888, ucf:48719

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0003888


Title

Quality Diversity: Harnessing Evolution to Generate a Diversity of HighPerforming Solutions.

Creator

Pugh, Justin, Stanley, Kenneth, Wu, Annie, Sukthankar, Gita, Garibay, Ivan, University of Central Florida

Abstract / Description

Evolution in nature has designed countless solutions to innumerable interconnected problems, giving birth to the impressive array of complex modern life observed today. Inspired by this success, the practice of evolutionary computation (EC) abstracts evolution artificially as a search operator to find solutions to problems of interest, primarily through the adaptive mechanism of survival of the fittest, where stronger candidates are pursued at the expense of weaker ones until a solution of satisfying quality emerges. At the same time, research in open-ended evolution (OEE) draws different lessons from nature, seeking to identify and recreate processes that lead to the type of perpetual innovation and indefinitely increasing complexity observed in natural evolution. New algorithms in EC such as MAP-Elites and Novelty Search with Local Competition harness the toolkit of evolution for a related purpose: finding as many types of good solutions as possible (rather than merely the single best solution). With the field in its infancy, no empirical studies previously existed comparing these so-called quality diversity (QD) algorithms. This dissertation (1) contains the first extensive and methodical effort to compare different approaches to QD (including both existing published approaches and some new methods presented for the first time here) and to understand how they operate, to help inform better approaches in the future. It also (2) introduces a new technique for evolving indirectly encoded neural networks that contain multiple sensory or output modalities. Further, it (3) explores the idea that QD can act as an engine of open-ended discovery by introducing an expressive platform called Voxelbuild, where QD algorithms continually evolve robots that stack blocks in new ways. A culminating experiment (4) investigates evolution in Voxelbuild over a very long timescale. This research thus stands to advance the OEE community's desire to create and understand open-ended systems, while also laying the groundwork for QD to realize its potential within EC as a means to automatically generate an endless progression of new content in real-world applications.

Date Issued

2019

Identifier

CFE0007513, ucf:52638

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0007513


Title

InverseConsistent Determination of Young's Modulus of Human Lung.

Creator

Seyfi Noferest, Behnaz, Ilegbusi, Olusegun, Santhanam, Anand, Kassab, Alain, Moslehy, Faissal, University of Central Florida

Abstract / Description

The human lung undergoes respiration-induced deformation due to sequential inhalation and exhalation. Accurate determination of lung deformation is crucial for tumor localization and targeted radiotherapy in patients with lung cancer. Numerical modeling of human lung dynamics based on the underlying physics and physiology enables simulation and virtual visualization of lung deformation. Dynamical modeling is numerically complicated by the lack of information on lung elastic behavior and structural heterogeneity, as well as boundary constraints. This study integrates physics-based modeling and image-based data acquisition to develop a patient-specific biomechanical model and consequently establish the first consistent Young's modulus (YM) of the human lung. This dissertation has four major components: (i) develop a biomechanical model for computation of the flow and deformation characteristics that can utilize subject-specific, spatially dependent lung material properties; (ii) develop a fusion algorithm to integrate deformation results from deformable image registration (DIR) and physics-based modeling using the theory of Tikhonov regularization; (iii) utilize the fusion algorithm to establish a unique and consistent patient-specific Young's modulus; and (iv) validate the biomechanical model, utilizing the established patient-specific elastic property, against imaging data.

The simulation is performed on three-dimensional lung geometry reconstructed from four-dimensional computed tomography (4DCT) datasets of human subjects. The heterogeneous Young's modulus is estimated from a linear elastic deformation model with the same lung geometry and 4D lung DIR. The biomechanical model adequately predicts the spatio-temporal lung deformation, consistent with data obtained from imaging. The accuracy of the numerical solution is enhanced through fusion with the imaging data, beyond the classical comparison of the two sets of data. Finally, the fused displacement results are used to establish a unique and consistent patient-specific elastic property of the lung.
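One common form of Tikhonov-regularized fusion combines a measured (DIR) displacement field with a model-predicted field by penalizing rough deviations between the two. The sketch below is a simplified one-dimensional illustration under that assumption; the dissertation's actual functional, operators, and 3-D discretization may differ.

```python
import numpy as np

def tikhonov_fuse(u_dir, u_model, lam=1.0):
    """Fuse a (noisy) DIR displacement field with a physics-model field.

    Minimizes  ||u - u_dir||^2 + lam * ||D (u - u_model)||^2  over u,
    where D is a first-difference operator: the result tracks the DIR
    measurements while keeping the deviation from the model smooth.
    Setting the gradient to zero gives the normal equations
        (I + lam * D^T D) (u - u_model) = u_dir - u_model.
    """
    n = len(u_dir)
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n first-difference matrix
    A = np.eye(n) + lam * D.T @ D
    return u_model + np.linalg.solve(A, u_dir - u_model)
```

With lam = 0 the output is exactly the DIR field; as lam grows, the correction to the model field is forced toward a constant offset (the mean DIR-model discrepancy), which illustrates how the regularization weight trades data fidelity against smoothness.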

Date Issued

2015

Identifier

CFE0006391, ucf:51512

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0006391


Title

Solving Constraint Satisfaction Problems with Matrix Product States.

Creator

Pelton, Sabine, Mucciolo, Eduardo, Ishigami, Masa, Leuenberger, Michael, University of Central Florida

Abstract / Description

In the past decade, Matrix Product State (MPS) algorithms have emerged as an efficient method of modeling some many-body quantum spin systems. Since spin-system Hamiltonians can be considered constraint satisfaction problems (CSPs), it follows that MPS should provide a versatile framework for studying a variety of general CSPs. In this thesis, we apply MPS to two types of CSP. First, we use MPS to simulate adiabatic quantum computation (AQC), where the target Hamiltonians are instances of a fully connected, random Ising spin glass. Results of the simulations help shed light on why AQC fails for some optimization problems. We then present the novel application of a modified MPS algorithm to classical Boolean satisfiability problems, specifically k-SAT and max k-SAT. By construction, the algorithm also counts solutions to a given Boolean formula (#SAT). For easy satisfiable instances, the method is more expensive than other existing algorithms; however, for hard and unsatisfiable instances, the method succeeds in finding satisfying assignments where other algorithms fail to converge.
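As a point of reference for the solution-counting (#SAT) capability mentioned above, here is the standard exhaustive counter such a method would be compared against (our baseline sketch, not the thesis's MPS algorithm). Clauses use the usual signed-integer literal convention from DIMACS CNF.

```python
from itertools import product

def count_sat(clauses, n_vars):
    """Brute-force #SAT for a CNF formula.

    clauses: list of clauses, each a list of nonzero ints; literal k means
    variable k is true, -k means variable k is false.  Enumerates all 2^n
    assignments, so it is usable only on small instances.
    """
    return sum(
        all(any(bits[abs(l) - 1] == (l > 0) for l in clause) for clause in clauses)
        for bits in product([False, True], repeat=n_vars)
    )
```

For instance, the single clause (x1 OR x2) over two variables has three satisfying assignments out of four.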

Date Issued

2017

Identifier

CFE0006902, ucf:51713

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0006902


Title

Techniques for automated parameter estimation in computational models of probabilistic systems.

Creator

Hussain, Faraz, Jha, Sumit, Leavens, Gary, Turgut, Damla, Uddin, Nizam, University of Central Florida

Abstract / Description

The main contribution of this dissertation is the design of two new algorithms for automatically synthesizing values of numerical parameters of computational models of complex stochastic systems, such that the resultant model meets user-specified behavioral specifications. These algorithms are designed to operate on probabilistic systems: systems that, in general, behave differently under identical conditions. The algorithms work using an approach that combines formal verification and mathematical optimization to explore a model's parameter space. The problem of determining whether a model instantiated with a given set of parameter values satisfies the desired specification is first defined using formal verification terminology, and then reformulated in terms of statistical hypothesis testing. Parameter space exploration involves determining the outcome of the hypothesis testing query for each parameter point and is guided using simulated annealing. The first algorithm uses the sequential probability ratio test (SPRT) to solve the hypothesis testing problems, whereas the second algorithm uses an approach based on Bayesian statistical model checking (BSMC).

The SPRT-based parameter synthesis algorithm was used to validate that a given model of glucose-insulin metabolism has the capability of representing diabetic behavior, by synthesizing values of three parameters that ensure that the glucose-insulin subsystem spends at least 20 minutes in a diabetic scenario. The BSMC-based algorithm was used to discover the values of parameters in a physiological model of the acute inflammatory response that guarantee a set of desired clinical outcomes. These two applications demonstrate how our algorithms use formal verification, statistical hypothesis testing, and mathematical optimization to automatically synthesize parameters of complex probabilistic models in order to meet user-specified behavioral properties.
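The SPRT at the core of the first algorithm can be sketched for the simplest case of Bernoulli observations (e.g., whether each simulated trace satisfies the property). This is a generic textbook formulation with Wald's approximate thresholds, not the dissertation's code; the hypotheses and parameter names are ours.

```python
from math import log

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for Bernoulli outcomes.

    H0: success probability <= p0   vs   H1: success probability >= p1
    (with p0 < p1 defining an indifference region).  Processes one
    observation at a time and stops as soon as the log-likelihood ratio
    leaves the continuation region (log B, log A).
    Returns 'H0', 'H1', or 'undecided' if the samples run out first.
    """
    a = log((1 - beta) / alpha)   # accept-H1 threshold, log A
    b = log(beta / (1 - alpha))   # accept-H0 threshold, log B
    llr = 0.0
    for x in samples:             # x is 1 (property held) or 0 (it did not)
        llr += log(p1 / p0) if x else log((1 - p1) / (1 - p0))
        if llr >= a:
            return "H1"
        if llr <= b:
            return "H0"
    return "undecided"
```

The sequential stopping rule is what makes the parameter-space search affordable: most parameter points are accepted or rejected after only a handful of simulations, and simulated annealing then decides where to sample next.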

Date Issued

2016

Identifier

CFE0006117, ucf:51200

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0006117