Browsing M.Sc. Computer Science by Title
Now showing items 32-51 of 104

Feature Selection and Classification Using Age Layered Population Structure Genetic Programming

The curse of dimensionality is a major problem in the fields of machine learning, data mining and knowledge discovery. Exhaustive search for the optimal subset of relevant features in a high-dimensional dataset is NP-hard. Sub-optimal population-based stochastic algorithms such as GP and GA are good choices for searching through large search spaces, and are usually more feasible than exhaustive and deterministic search algorithms. On the other hand, population-based stochastic algorithms often suffer from premature convergence on mediocre sub-optimal solutions. The Age Layered Population Structure (ALPS) is a novel meta-heuristic for overcoming the problem of premature convergence in evolutionary algorithms, and for improving search in the fitness landscape. The ALPS paradigm uses an age measure to control breeding and competition between individuals in the population. This thesis uses a modification of the ALPS GP strategy called Feature Selection ALPS (FSALPS) for feature subset selection and classification of varied supervised learning tasks. FSALPS uses a novel frequency count system to rank features in the GP population based on evolved feature frequencies. The ranked features are translated into probabilities, which are used to control evolutionary processes such as terminal-symbol selection for the construction of GP trees/subtrees. The FSALPS meta-heuristic continuously refines the feature subset selection process while simultaneously evolving efficient classifiers through a non-converging evolutionary process that favors selection of features with high discrimination of class labels. We investigated and compared the performance of canonical GP, ALPS and FSALPS on high-dimensional benchmark classification datasets, including a hyperspectral image. Using Tukey's HSD ANOVA test at a 95% confidence interval, ALPS and FSALPS dominated canonical GP in evolving smaller but efficient trees with fewer bloated expressions. FSALPS significantly outperformed canonical GP, ALPS, and several feature selection strategies reported in the related dimensionality-reduction literature.
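The frequency-to-probability step described above can be sketched as follows. This is an illustrative reconstruction, not the thesis code: for simplicity a GP tree is represented as a flat list of its symbols, and the smoothing constant is an assumption that keeps never-used features selectable.

```python
import random
from collections import Counter

def terminal_probabilities(trees, all_features, smoothing=1.0):
    """Rank features by how often they appear in the evolved GP
    population, then normalize the counts into selection probabilities."""
    counts = Counter(f for tree in trees for f in tree if f in all_features)
    # smoothing keeps features with zero frequency selectable
    weights = {f: counts[f] + smoothing for f in all_features}
    total = sum(weights.values())
    return {f: w / total for f, w in weights.items()}

def sample_terminal(probs, rng=random):
    """Biased terminal-symbol selection for building GP trees/subtrees."""
    features, weights = zip(*probs.items())
    return rng.choices(features, weights=weights, k=1)[0]
```

Features that dominate well-performing trees thus become more likely to be chosen as terminals in the next generation, which is the refinement loop the abstract describes.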

Formalizing affordances in situation

The representation of a perceptual scene by a computer is usually limited to numbers representing dimensions and colours. The theory of affordances attempted to provide a new way of representing an environment with respect to a particular agent. The view was introduced as part of an entire field of psychology labeled 'ecological', which has since branched into computer science through the fields of robotics and formal methods. This thesis describes the concept of affordances, reviews several existing formalizations, and takes a brief look at applications to robotics. The formalizations put forth in the last 20 years have no agreed-upon structure, only that both the agent and the environment must be taken in relation to one another. Situation theory has also been evolving since its inception in 1983 by Barwise & Perry. The theory provides a formal way to represent any arbitrary piece of information in terms of relations. This thesis takes a toy version of situation theory published in CSLI Lecture Notes No. 22 and extends its ontologies to include specialized affordance types and individual object types. This allows for the definition of semantic objects called environments, which support a situation and a set of affordances, and niches, which refer to a set of actions for an individual. Finally, a possible way for an environment to change into a new environment is suggested via the activation of an affordance.

A Functional Programming Language with Patterns and Copatterns

Since the emergence of coinductive data types in functional programming languages, various languages such as Haskell and Coq have tried different ways of dealing with them. Yet, none of them deals with coinductive data types properly. In lazy languages such as Haskell, inductive and coinductive data types are gathered and mixed in a single list type. Moreover, some languages such as Coq use the same constructors that are used for inductive data types as a tool to tackle coinductive data types, and while other languages such as Haskell do use destructors, they do not use them properly. Coinductive data types behave differently than inductive data types, and it is therefore more appropriate to deal with them differently. In this thesis, we propose a new functional programming language with a dual approach in which coinductive data types are defined by observations and inductive data types are defined by constructors. This approach is more appropriate for dealing with coinductive data types, whose importance comes from their role in creating safer and more sophisticated software.
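The observational (copattern) style of definition can be approximated even in an ordinary language. A minimal Python sketch, assuming a stream is anything that answers the two observations head and tail; the class and function names are illustrative, not the thesis's language:

```python
class Stream:
    """Codata defined by observation: a stream is whatever answers
    'head' and 'tail' -- dual to inductive lists built from constructors."""
    def __init__(self, head_fn, tail_fn):
        self._head_fn = head_fn
        self._tail_fn = tail_fn

    def head(self):
        return self._head_fn()

    def tail(self):
        return self._tail_fn()

def nats_from(n):
    # copattern-style definition:  head (nats_from n) = n
    #                              tail (nats_from n) = nats_from (n + 1)
    return Stream(lambda: n, lambda: nats_from(n + 1))

def take(s, k):
    """Observe the first k elements of an (infinite) stream."""
    out = []
    for _ in range(k):
        out.append(s.head())
        s = s.tail()
    return out
```

Note that nats_from never builds the infinite object; each observation forces only one step, which is exactly what definition-by-observation buys.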

GA approach for finding Rough Set decision rules based on bireducts

Feature selection plays an important role in knowledge discovery and data mining nowadays. In traditional rough set theory, feature selection using a reduct, the minimal discerning set of attributes, is an important area. Nevertheless, the original definition of a reduct is restrictive, so previous research proposed to take into account not only the horizontal reduction of information by feature selection, but also a vertical reduction considering suitable subsets of the original set of objects. Following the work mentioned above, a new approach to generate bireducts using a multi-objective genetic algorithm was proposed. Although genetic algorithms were used to calculate reducts in some previous works, we did not find any work where genetic algorithms were adopted to calculate bireducts. Compared to the works done before in this area, the proposed method has less randomness in generating bireducts. The genetic algorithm system estimated the quality of each bireduct by the values of two objective functions as evolution progressed, so a set of bireducts with optimized values of these objectives was obtained. Different fitness evaluation methods and genetic operators, such as crossover and mutation, were applied and the prediction accuracies were compared. Five datasets were used to test the proposed method and two datasets were used to perform a comparison study. Statistical analysis using the one-way ANOVA test was performed to determine the significance of differences between the results. The experiments showed that the proposed method was able to reduce the number of bireducts necessary to obtain good prediction accuracy. The influence of different genetic operators and fitness evaluation strategies on the prediction accuracy was also analyzed. It was shown that the prediction accuracies of the proposed method are comparable with the best results in the machine learning literature, and that some of them outperform those results.
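The core bireduct condition (the attribute subset must discern every pair of differently labeled objects within the object subset) and the two competing GA objectives can be sketched as follows. The data layout and the objective scaling are illustrative assumptions, not the thesis implementation:

```python
def is_bireduct_pair_consistent(data, decisions, attrs, objects):
    """Core bireduct condition: within the chosen object subset, any two
    objects with different decisions are discerned by some chosen attribute."""
    objs = list(objects)
    for i in range(len(objs)):
        for j in range(i + 1, len(objs)):
            a, b = objs[i], objs[j]
            if decisions[a] != decisions[b]:
                # an indiscernible pair with different decisions breaks the bireduct
                if all(data[a][k] == data[b][k] for k in attrs):
                    return False
    return True

def bireduct_objectives(attrs, objects, n_attrs, n_objects):
    """Two objectives a multi-objective GA could trade off: minimize the
    attribute fraction (horizontal reduction) while maximizing the
    object fraction covered (limiting the vertical reduction)."""
    return (len(attrs) / n_attrs, len(objects) / n_objects)
```

Shrinking the object subset is the extra degree of freedom bireducts add: an attribute set that fails on all objects can still be valid on a suitable subset of them.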

Game Theory-based Allocation Management in VCC Networks

Vehicular Ad-hoc Networks (VANETs) have contributed significantly towards improving road traffic management and safety. VANETs integrated with Vehicular Clouds enable underutilized vehicular resources to be used for efficient resource management and the fulfillment of service requests. However, due to the frequently changing network topology of vehicular cloud networks, vehicles frequently move out of the coverage area of roadside units (RSUs), disconnecting from the RSUs and interrupting the fulfillment of ongoing service requests. In addition, working with heterogeneous vehicles makes it difficult to match service requests with the varying resources of individual vehicles. To address these challenges, this work introduces the concept of clustering resources from nearby vehicles to form Combined Resource Units (CRUs). These units contribute to maximizing the rate of fulfillment of service requests. CRU composition is especially helpful given the heterogeneity of vehicles, since it allows clustering the varying resources of vehicles into a single unit. The vehicle resources are clustered into CRUs based on three different-sized pools, making the service matching process more time-efficient. Previous works have adopted stochastic models for resource clustering configurations, whereas this work adopts distinct search algorithms for CRU composition that are computationally less complex. Results showed that lightweight search algorithms, such as the selective search algorithm (SSA), achieved close to 80% resource availability without over-assembling CRUs in higher-density scenarios. Following CRU composition, a game-theoretic approach is adopted for allocating CRUs to service requests. Under this approach, the CRUs play a non-cooperative game to maximize their utility, contributing to factors such as fairness, efficiency, improved system performance and reduced system overhead. The utility value takes into account the Received Signal Strength (RSS) value of each CRU and the resources required to fulfill a request. Results of the game model showed that the proposed approach to CRU composition obtained a 90% success rate in matching and fulfilling service requests.

Generating Aesthetically Pleasing Images in a Virtual Environment using Particle Swarm Optimization

This research focuses on generating aesthetically pleasing images in virtual environments using the particle swarm optimization (PSO) algorithm. PSO is a stochastic population-based search algorithm inspired by the flocking behavior of birds. In this research, we implement swarms of cameras flying through a virtual world in search of an image that is aesthetically pleasing. Virtual world exploration using particle swarm optimization is considered a new research area and is of interest to both the scientific and artistic communities. Aesthetic rules such as the rule of thirds, subject matter, colour similarity and horizon line are analyzed together as a multi-objective problem over rendered images. A new multi-objective PSO algorithm, the sum-of-ranks PSO, is introduced. It is empirically compared to other single-objective and multi-objective swarm algorithms. An advantage of the sum-of-ranks PSO is that it is useful for solving high-dimensional problems within the context of this research. Through many experiments, we show that our approach is capable of automatically producing images satisfying a variety of supplied aesthetic criteria.
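The sum-of-ranks aggregation can be sketched as follows. This is a generic reconstruction of the idea (assuming minimization and stable tie-breaking by candidate order), not the thesis implementation:

```python
def sum_of_ranks(objective_matrix):
    """Aggregate multiple objectives by ranking each objective column
    independently (lower raw score = better, rank 1 = best) and summing
    the ranks per candidate; a lower rank total is better overall."""
    n = len(objective_matrix)       # candidates
    m = len(objective_matrix[0])    # objectives
    totals = [0] * n
    for j in range(m):
        # order candidates by their raw score on objective j
        order = sorted(range(n), key=lambda i: objective_matrix[i][j])
        for rank, i in enumerate(order, start=1):
            totals[i] += rank
    return totals
```

Because each objective contributes only an ordinal rank, the aggregate stays meaningful as the number of objectives grows, which is why rank-based schemes scale to many objectives better than raw Pareto dominance.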

Generating finite integral relation algebras

Relation algebras, and categories of relations in particular, have proven to be extremely useful as a fundamental tool in mathematics and computer science. Since relation algebras are Boolean algebras with some well-behaved operations, every such algebra provides an atom structure, i.e., a relational structure on its set of atoms. In the case of complete and atomic algebras (e.g., finite algebras), the original algebra can be recovered from its atom structure by using the complex algebra construction. This gives a representation of relation algebras as the complex algebras of certain relational structures. This property is of particular interest because storing the atom structure requires less space than storing the entire algebra. In this thesis I introduce and implement three structures representing atom structures of integral heterogeneous relation algebras, i.e., categorical versions of relation algebras. The first structure simply embeds a homogeneous atom structure of a relation algebra into the heterogeneous context. The second structure is obtained by splitting all symmetric idempotent relations. This new algebra is in almost all cases a heterogeneous structure having more objects than the original one. Finally, I define two different union operations to combine two algebras into a single one.

Generating Relation Algebras for Qualitative Spatial Reasoning

Basic relationships between certain regions of space are formulated in natural language in everyday situations. For example, a customer specifies the outline of his future home to the architect by indicating which rooms should be close to each other. Qualitative spatial reasoning, as an area of artificial intelligence, tries to develop a theory of space based on similar notions. In formal ontology and in ontological computer science, mereotopology is a first-order theory, embodying mereological and topological concepts, of the relations among wholes, parts, parts of parts, and the boundaries between parts. We introduce abstract relation algebras and present their structural properties as well as their connection to algebras of binary relations. This is followed by details of the expressiveness of algebras of relations for region-based models. Mereotopology has been the main basis for most region-based theories of space. Since its earliest inception, many theories have been proposed for mereotopology in artificial intelligence, among which the Region Connection Calculus is the most prominent. The expressiveness of the Region Connection Calculus in relational logic is far greater than its original eight base relations might suggest. In this thesis we formulate ways to automatically generate representable relation algebras using spatial data based on the Region Connection Calculus. The generation of new algebras is a two-pronged approach involving the splitting of existing relations to form new algebras and the refinement of such newly generated algebras. We present an implementation of a system automating the aforementioned steps and provide an effective and convenient interface to define new spatial relations and generate representable relation algebras.

Generator Matrix Based Search for Extremal Self-Dual Binary Error-Correcting Codes

Self-dual doubly even linear binary error-correcting codes, often referred to as Type II codes, are closely related to many combinatorial structures such as 5-designs. Extremal codes are codes that have the largest possible minimum distance for a given length and dimension. The existence of an extremal (72,36,16) Type II code is still open. Previous results show that the automorphism group of a putative code C with the aforementioned properties has order 5 or order dividing 24. In this work, we present a method and the results of an exhaustive search showing that such a code C cannot admit an automorphism group isomorphic to Z6. In addition, we present a so-far unpublished construction of the extended Golay code due to P. Becker. We generalize the notion and provide an example of another Type II code that can be obtained in this fashion. Consequently, we relate Becker's construction to the construction of binary Type II codes from codes over GF(2^r) via the Gray map.
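The defining Type II properties are easy to check computationally for small codes. A sketch over GF(2) using the extended Hamming [8,4,4] code, a standard small Type II example (chosen here for illustration; it is of course not the putative (72,36,16) code):

```python
from itertools import product

# Generator matrix [I | A] of the extended Hamming [8,4,4] code.
G = [
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
]

def codewords(G):
    """All GF(2) linear combinations of the generator rows."""
    n = len(G[0])
    for combo in product([0, 1], repeat=len(G)):
        yield [sum(c * row[i] for c, row in zip(combo, G)) % 2 for i in range(n)]

def is_type_ii(G):
    """Type II = self-dual and doubly even (all weights divisible by 4)."""
    doubly_even = all(sum(w) % 4 == 0 for w in codewords(G))
    # self-dual: dimension n/2 and all generator rows mutually orthogonal
    orthogonal = all(sum(a * b for a, b in zip(r1, r2)) % 2 == 0
                     for r1 in G for r2 in G)
    return doubly_even and orthogonal and 2 * len(G) == len(G[0])

def minimum_distance(G):
    """Minimum weight of a nonzero codeword (exhaustive, small codes only)."""
    return min(sum(w) for w in codewords(G) if any(w))
```

Exhaustive enumeration like this is only feasible for tiny codes; the search described in the abstract instead prunes the space using the assumed automorphism group.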

Generic Matrix Manipulator System

In this thesis we describe in detail a generic matrix manipulator system that performs operations on matrices in a flexible way, using a graphical user interface. A user defines allowable data entries, called a coefficient set, as well as closed n-ary operations based on the coefficient set, called coefficient operators. Together the coefficient set and the coefficient operators form a basis. The defined coefficient operators can then be used to define operations on matrices. A basis and n-ary matrix operations can be entered into the system in various ways, including predefined types, Java data types, JavaScript, and various XML formats defining certain mathematical structures. The described system functions similarly to the RelView system, while offering additional features. These features are designed to increase convenience and usability by providing support for arbitrary coefficient set types, cross-platform capability, and automatic type checking for user-defined expressions.
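The basis idea, a coefficient set with closed operations that induce matrix operations, can be sketched generically. The Basis container and the Boolean example below are illustrative assumptions; with the Boolean basis, matrix multiplication becomes relational composition, the RelView-style case the abstract mentions:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Basis:
    """A coefficient set with two closed binary operators and an
    additive identity -- enough to induce matrix multiplication."""
    add: Callable[[Any, Any], Any]
    mul: Callable[[Any, Any], Any]
    zero: Any

def mat_mult(A, B, basis):
    """Matrix multiplication defined purely in terms of the basis."""
    rows, inner, cols = len(A), len(B), len(B[0])
    C = [[basis.zero] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = basis.zero
            for k in range(inner):
                acc = basis.add(acc, basis.mul(A[i][k], B[k][j]))
            C[i][j] = acc
    return C

# Boolean basis: matrices are relations, mat_mult is relational composition.
bool_basis = Basis(add=lambda a, b: a or b,
                   mul=lambda a, b: a and b,
                   zero=False)
```

Swapping in another basis (e.g. min as add, + as mul, infinity as zero) turns the same mat_mult into shortest-path composition, which is the payoff of keeping the coefficient operations abstract.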

Genetic Programming for Non-Photorealistic Rendering

This thesis focuses on developing an evolutionary art system using genetic programming. The main goal is to produce new forms of evolutionary art that filter existing images into new non-photorealistic (NPR) styles, obtaining images that look like traditional media such as watercolor or pencil, as well as brand new effects. The approach permits GP to generate creative forms of NPR results. The GP language is extended with different techniques and methods inspired by NPR research, such as colour-mixing expressions, image processing filters and painting algorithms. Colour mixing is a major new contribution, as it enables many familiar and innovative NPR effects to arise. Another major innovation is that many GP functions process the canvas (rendered image) while it is dynamically changing. Automatic fitness scoring uses aesthetic evaluation models and statistical analysis, and multi-objective fitness evaluation is used. Results showed a variety of NPR effects, as well as new, creative possibilities.

Genetic programming for the RoboCup Rescue Simulation System

The RoboCup Rescue Simulation System (RCRSS) is a dynamic system of multi-agent interaction, simulating a large-scale urban disaster scenario. Teams of rescue agents are charged with the tasks of minimizing civilian casualties and infrastructure damage while competing against limitations on time, communication, and awareness. This thesis provides the first known attempt at applying Genetic Programming (GP) to the development of behaviours necessary to perform well in the RCRSS. Specifically, this thesis studies the suitability of GP to evolve the operational behaviours required of each type of rescue agent in the RCRSS. The system developed is evaluated in terms of the consistency with which expected solutions are the target of convergence, as well as by comparison to previous competition results. The results indicate that GP is capable of converging to some forms of expected behaviour, but that additional evolution of strategizing behaviours must be performed in order to become competitive. An enhancement to the standard GP algorithm is proposed which is shown to simplify the initial search space, allowing evolution to occur much more quickly. In addition, two forms of population are employed and compared in terms of their apparent effects on the evolution of control structures for intelligent rescue agents. The first is a single population in which each individual comprises three distinct trees for the respective control of three types of agents; the second is a set of three co-evolving subpopulations, one for each type of agent. Multiple populations of cooperating individuals appear to achieve higher proficiencies in training, but testing on unseen instances raises the issue of overfitting.

Heuristics for the Critical Node Detection Problem in Large Complex Networks

Complex networks have recently attracted a significant amount of research attention due to their ability to model real-world phenomena. One important problem often encountered is limiting diffusive processes spreading over the network, for example mitigating the spread of pandemic disease or computer viruses. A number of problem formulations have been proposed that aim to solve such problems based on desired network characteristics, such as maintaining the largest network component after node removal. The recently formulated critical node detection problem aims to remove a small subset of vertices from the network such that the residual network has minimum pairwise connectivity. Unfortunately, the problem is NP-hard, and the number of constraints is cubic in the number of vertices, making very large-scale problems impossible to solve with traditional mathematical programming techniques. Even approximation strategies such as dynamic programming and evolutionary algorithms are unusable for networks that contain thousands to millions of vertices. A computationally efficient and simple approach is required in such circumstances, but none currently exists. In this thesis, such an algorithm is proposed. The methodology is based on a depth-first search traversal of the network and a specially designed ranking function that considers information local to each vertex. Due to the variety of network structures, a number of characteristics must be taken into consideration and combined into a single rank that measures the utility of removing each vertex. Since removing a vertex in sequential fashion impacts the network structure, an efficient post-processing algorithm is also proposed to quickly re-rank vertices. Experiments on a range of common complex network models with varying numbers of vertices are considered, in addition to real-world networks. The proposed algorithm, DFSH, is shown to be highly competitive and often outperforms existing strategies such as Google PageRank for minimizing pairwise connectivity.
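The objective being minimized, pairwise connectivity of the residual network, can be computed with the same kind of depth-first traversal the heuristic is built on. A sketch, with graphs as adjacency dictionaries (an illustrative representation):

```python
def pairwise_connectivity(adj, removed):
    """Objective minimized by critical node detection: the number of
    still-connected vertex pairs after node removal. Each connected
    component of size s contributes s*(s-1)/2 pairs."""
    removed = set(removed)
    seen = set(removed)
    total = 0
    for start in adj:
        if start in seen:
            continue
        # iterative depth-first search over one component
        stack, size = [start], 0
        seen.add(start)
        while stack:
            v = stack.pop()
            size += 1
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    stack.append(u)
        total += size * (size - 1) // 2
    return total
```

The quadratic component contribution is why fragmenting one large component into several small ones reduces the objective so sharply.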

A Hybrid Approach to Network Robustness Optimization using Edge Rewiring and Edge Addition

Networks are ubiquitous in the modern world. From computer and telecommunication networks to road networks and power grids, networks make up many crucial pieces of infrastructure that we interact with on a daily basis. These networks can be subjected to damage from many different sources, both random and targeted. If one of these networks receives too much damage, it may be rendered inoperable, which can have disastrous consequences. For this reason, it is in the best interests of those responsible for these networks to ensure that they are highly robust to failure. Since it is not usually feasible to rebuild most existing networks from scratch to make them more resilient, it is necessary to have an approach that can modify an existing network to make it more robust to failure. Previous work has established several methods of accomplishing this task, including edge rewiring and edge addition. Both of these methods can be very useful for optimizing network robustness, but each comes with its own limitations. This thesis proposes a new hybrid approach to network robustness optimization that combines both approaches. Four edge-rewiring-based metaheuristic approaches were modified to incorporate one of three different edge addition strategies. A comparative study was performed on these new hybrid optimizers, comparing them to each other and to the vanilla edge-rewiring-only approach on both synthetic and real-world networks. Experiments showed that this new hybrid approach to network robustness optimization leads to much more highly robust networks than an edge-rewiring-only approach.
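A sketch of one common way such optimizers are scored: a robustness measure that averages the largest-component fraction over an adaptive highest-degree (targeted) attack. The abstract does not name its metric, so this is a representative choice rather than the thesis's measure:

```python
def _largest_component(adj, removed):
    """Size of the largest connected component among remaining nodes."""
    seen = set(removed)
    best = 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            v = stack.pop()
            size += 1
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    stack.append(u)
        best = max(best, size)
    return best

def robustness_R(adj):
    """Average largest-component fraction under an adaptive targeted
    attack that repeatedly deletes the highest-degree remaining node;
    higher values mean a more attack-robust topology."""
    n = len(adj)
    removed = set()
    total = 0.0
    for _ in range(n):
        deg = {v: sum(1 for u in adj[v] if u not in removed)
               for v in adj if v not in removed}
        removed.add(max(deg, key=deg.get))  # attack the current hub
        total += _largest_component(adj, removed) / n
    return total / n
```

Both rewiring and addition moves can then be accepted or rejected inside a metaheuristic simply by whether they increase this score.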

Hyperspectral Mineral Identification using SVM and SOM

Remote sensing techniques involving hyperspectral imagery have applications in a number of sciences that study aspects of the surface of the planet. The analysis of hyperspectral images is complex because of the large amount of information involved and the noise within that data. Investigating images to identify minerals, rocks, vegetation and other materials is an application of hyperspectral remote sensing in the earth sciences. This thesis evaluates the performance of two classification and clustering techniques on hyperspectral images for mineral identification. Support Vector Machines (SVM) and Self-Organizing Maps (SOM) are applied as the classification and clustering techniques, respectively. Principal Component Analysis (PCA) is used to prepare the data to be analyzed. The purpose of using PCA is to reduce the amount of data that needs to be processed by identifying the most important components within the data. A well-studied dataset from Cuprite, Nevada and a more complex dataset from Baffin Island were used to assess the performance of these techniques. The main goal of this research is to evaluate the advantage of training a classifier on a small amount of data compared to using an unsupervised method. Determining the effect of feature extraction on the accuracy of the clustering and classification methods is another goal of this research. This thesis concludes that using PCA increases the learning accuracy, especially for classification. SVM classifies the Cuprite data with high precision, and SOM challenges SVM on datasets with a high level of noise (like Baffin Island).
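The PCA preparation step can be sketched in pure Python via power iteration on the covariance matrix. This toy version recovers only the dominant component; a real hyperspectral pipeline would use a numerical library and keep several components:

```python
def covariance(data):
    """Sample covariance matrix of row-vector observations."""
    n, d = len(data), len(data[0])
    means = [sum(row[i] for row in data) / n for i in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for row in data:
        for i in range(d):
            for j in range(d):
                cov[i][j] += (row[i] - means[i]) * (row[j] - means[j]) / (n - 1)
    return cov

def principal_component(cov, iters=200):
    """Power iteration: repeatedly multiply and renormalize a vector,
    converging to the dominant eigenvector (first principal component)."""
    d = len(cov)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

Projecting each spectrum onto the leading components is what shrinks the hundreds of hyperspectral bands down to the few features the SVM and SOM actually consume.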

Image Evolution Using 2D Power Spectra

Procedurally generated textures have seen use in many applications, are a high-interest topic when exploring evolutionary algorithms, and hold a central interest for digital art. However, there is an existing difficulty in finding suitable heuristics for measuring perceived qualities of an image. Particular difficulty remains in quantifying aspects of style and shape. In an attempt to bridge the divide between computer vision and cognitive perception, one set of measures proposed in previous studies relates to image spatial frequencies. Based on existing research which uses the power spectral density of spatial frequencies as an effective metric for image classification and retrieval, we believe this measure and others based on Fourier decomposition may be effective for guiding evolutionary texture synthesis. We briefly compare some alternative means of using frequency analysis to guide the evolution of shape and composition, and refine fitness measures based on Fourier analysis and spatial frequency. Our exploration has been conducted with the goals of improving intuition about these measures, evaluating their utility for image composition, and observing possible adaptations of their use in digital evolutionary art. Multiple evolutionary guidance schemes considering the power spectra and phase of spatial frequencies have been evaluated across numerous targets, with mixed results. We present our exploration of power spectral density measures and their effectiveness as fitness targets for evolutionary algorithms, particularly for basic compositional guidance in evolutionary art. We also observe and analyze a previously identified phenomenon of spatial properties which could lead to further consideration of visual comfort and aesthetics.
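The underlying fitness quantity, the 2D power spectrum, can be sketched with a naive discrete Fourier transform (O(N^4), so only for tiny images; real systems use an FFT). The distance function is one simple choice of fitness against a target spectrum, not the thesis's refined measures:

```python
import cmath

def power_spectrum_2d(img):
    """Naive 2D DFT reduced to a power spectrum |F(u, v)|^2 -- the
    quantity compared against a target spectrum in a fitness measure."""
    h, w = len(img), len(img[0])
    spec = [[0.0] * w for _ in range(h)]
    for u in range(h):
        for v in range(w):
            s = 0j
            for y in range(h):
                for x in range(w):
                    s += img[y][x] * cmath.exp(-2j * cmath.pi * (u * y / h + v * x / w))
            spec[u][v] = abs(s) ** 2  # phase is discarded here
    return spec

def spectrum_distance(a, b):
    """A simple candidate fitness: summed squared spectral difference."""
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb))
```

Because the power spectrum discards phase, two images with identical spectra can differ in layout, which is one reason the abstract also evaluates phase-aware guidance schemes.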

An Implementation of Separation Logic in Coq

For certain applications, the correctness of the software involved is crucial, particularly if human life is in danger. In order to achieve correctness, common practice is to gather evidence for program correctness by testing the system. Even though testing may find certain errors in the code, it cannot guarantee that the program is error-free. The process of formal verification is the act of proving or disproving the correctness of a system with respect to a formal specification. One logic for program verification is the so-called Hoare Logic. Hoare Logic can deal with programs that do not utilize pointers, i.e., it allows reasoning about programs that do not use shared mutable data structures. Separation Logic extends Hoare Logic to programming languages with pointers, including pointer arithmetic. It has four pointer-manipulating commands, which perform the heap operations of lookup, allocation, deallocation, and mutation. We introduce an implementation of Separation Logic in the interactive proof system Coq. Besides verifying that Separation Logic is correct, we provide several examples of programs and their correctness proofs.
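The four heap commands have a simple operational reading that the Coq development formalizes. A toy Python model of the heap semantics (an illustration, not the Coq implementation), in which dereferencing an unallocated address faults, which is exactly the behaviour separation logic proofs rule out:

```python
class HeapError(Exception):
    """Raised when a command dereferences an unallocated address."""

def allocate(heap, values):
    """Allocate consecutive fresh addresses holding the given values."""
    addr = max(heap, default=0) + 1
    for i, v in enumerate(values):
        heap[addr + i] = v
    return addr

def lookup(heap, addr):
    """Read the cell at addr; fault if it is not allocated."""
    if addr not in heap:
        raise HeapError(addr)
    return heap[addr]

def mutate(heap, addr, value):
    """Overwrite the cell at addr; fault if it is not allocated."""
    if addr not in heap:
        raise HeapError(addr)
    heap[addr] = value

def deallocate(heap, addr):
    """Dispose of the cell at addr; fault if it is not allocated."""
    if addr not in heap:
        raise HeapError(addr)
    del heap[addr]
```

A triple {P} c {Q} in separation logic then guarantees that running c from a heap satisfying P never raises such a fault and ends in a heap satisfying Q.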

Improving BWA-MEM with GPU Parallel Computing

Due to the many advances made in designing algorithms, especially the ones used in bioinformatics, it is becoming harder and harder to improve their efficiency. Therefore, hardware acceleration using General-Purpose computing on Graphics Processing Units (GPGPU) has become a popular choice. BWA-MEM is an important part of the BWA software package for sequence mapping. Because of its high speed and accuracy, we chose to parallelize this popular short DNA sequence mapper. BWA has been a prevalent single-node tool in genome alignment, and it has been widely studied for acceleration ever since the first version of the BWA package came out. This thesis presents the Big Data GPGPU distributed BWA-MEM, a tool that combines GPGPU acceleration and distributed computing. The four hardware parallelization techniques used are CPU multi-threading, GPU parallelization, CPU distribution, and GPU distribution. The GPGPU distributed software typically outperforms the other parallelized versions. The alignment is performed on a distributed network, and each node in the network executes a separate GPGPU-parallelized version of the software. We parallelize the chain2aln function at three levels. In Level 1, the function ksw_extend2, an algorithm based on Smith-Waterman, is parallelized to handle extension on one side of the seed. In Level 2, the function chain2aln is parallelized to handle chain extension, where all seeds within the same chain are extended. In Level 3, part of the function mem_align1_core is parallelized to extend multiple chains. Due to the program's complexity, the parallelization work was limited to the GPU version of ksw_extend2 at parallelization Level 3. However, we successfully combined Spark with BWA-MEM and ksw_extend2 at parallelization Level 1, which shows that the proposed framework is feasible. The Level 3 GPU-parallelized version of ksw_extend2 demonstrated a noticeable speed improvement on the test data set.
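The recurrence that ksw_extend2 extends is the classic Smith-Waterman local-alignment dynamic program. A plain, score-only Python sketch for reference; BWA-MEM's real kernel is banded and one-sided with affine gaps, and a GPU version would typically evaluate cells along anti-diagonals in parallel since those cells are mutually independent:

```python
def smith_waterman(seq1, seq2, match=1, mismatch=-1, gap=-2):
    """Classic Smith-Waterman local alignment (linear gap penalty),
    returning only the best local score."""
    rows, cols = len(seq1) + 1, len(seq2) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if seq1[i - 1] == seq2[j - 1] else mismatch
            # local alignment: scores are clamped at zero
            H[i][j] = max(0, H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap,
                          H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

Each cell depends only on its top, left, and top-left neighbours, which is the data-dependence structure that makes anti-diagonal GPU parallelization possible.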

Improving Short DNA Sequence Alignment with Parallel Computing

Variations in different types of genomes have been found to be responsible for a large degree of physical diversity, such as appearance and susceptibility to disease. Identification of genomic variations is difficult and can be facilitated through computational analysis of DNA sequences. Newly available technologies are able to sequence billions of DNA base pairs relatively quickly. These sequences can be used to identify variations within their specific genome but must first be mapped to a reference sequence. In order to align these sequences to a reference sequence, we require mapping algorithms that make use of approximate string matching and string indexing methods. To date, few mapping algorithms have been tailored to handle the massive amounts of output generated by newly available sequencing technologies. In order to handle this large amount of data, we modified the popular mapping software BWA to run in parallel using OpenMPI. Parallel BWA matches the efficiency of multi-threaded BWA functions while providing efficient parallelism for BWA functions that do not currently support multi-threading. Parallel BWA shows significant wall-time speedup in comparison to multi-threaded BWA on high-performance computing clusters, and will thus facilitate the analysis of genome sequencing data.

Improving the Scalability of Reduct Determination in Rough Sets

Rough Set Data Analysis (RSDA) is a non-invasive data analysis approach that relies solely on the data to find patterns and decision rules. Despite its non-invasive approach and ability to generate human-readable rules, classical RSDA has not been successfully used in commercial data mining and rule generating engines. The reason is its poor scalability: classical RSDA slows down considerably on larger data sets and takes much longer to generate the rules. This research aims to address the issue of scalability in rough sets by improving the performance of the attribute reduction step of classical RSDA, which is the root cause of its slow performance. We propose to move the entire attribute reduction process into the database. We define a new schema to store the initial data set, and then define SQL queries on this schema to find the attribute reducts correctly and faster than the traditional RSDA approach. We tested our technique on two typical data sets and compared our results with the traditional RSDA approach for attribute reduction. In the end, we also highlight some issues with our proposed approach which could lead to future research.
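The in-database consistency check at the heart of reduct determination can be sketched with SQLite: an attribute subset preserves the decision exactly when no value combination over that subset maps to more than one decision. The table and column names below are illustrative, not the thesis's schema:

```python
import sqlite3

def is_consistent_subset(rows, attrs, all_attrs):
    """Push the reduct check into the database: the subset 'attrs'
    preserves decision column d iff no GROUP BY group over those
    attributes contains more than one distinct decision."""
    con = sqlite3.connect(":memory:")
    cols = ", ".join(f"{a} TEXT" for a in all_attrs + ["d"])
    con.execute(f"CREATE TABLE t ({cols})")
    placeholders = ", ".join("?" for _ in range(len(all_attrs) + 1))
    con.executemany(f"INSERT INTO t VALUES ({placeholders})", rows)
    group = ", ".join(attrs)
    query = (f"SELECT COUNT(*) FROM (SELECT 1 FROM t "
             f"GROUP BY {group} HAVING COUNT(DISTINCT d) > 1)")
    (inconsistent_groups,) = con.execute(query).fetchone()
    con.close()
    return inconsistent_groups == 0
```

A reduct search then amounts to finding minimal attribute subsets for which this query returns no inconsistent groups, letting the database engine do the grouping work that dominates classical RSDA's running time.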