• Image Evolution Using 2D Power Spectra

      Gircys, Michael; Department of Computer Science
      Procedurally generated textures have seen use in many applications, are a high-interest topic when exploring evolutionary algorithms, and hold a central interest for digital art. However, finding suitable heuristics for measuring perceived qualities of an image remains difficult, particularly for quantifying aspects of style and shape. In an attempt to bridge the divide between computer vision and cognitive perception, one set of measures proposed in previous studies relates to image spatial frequencies. Based on existing research which uses the power spectral density of spatial frequencies as an effective metric for image classification and retrieval, we believe this measure and others based on Fourier decomposition may be effective for guiding evolutionary texture synthesis. We briefly compare some alternative means of using frequency analysis to guide the evolution of shape and composition, and refine fitness measures based on Fourier analysis and spatial frequency. Our exploration has been conducted with the goals of improving intuition about these measures, evaluating their utility for image composition, and observing possible adaptations of their use in digital evolutionary art. Multiple evolutionary guidance schemes that consider the power spectra and phase of spatial frequencies have been evaluated across numerous targets, with mixed results. We present our exploration of power spectral density measures and their effectiveness as evolutionary algorithm fitness targets, particularly for basic compositional guidance in evolutionary art. We also observe and analyze a previously identified phenomenon of spatial properties which could lead to further consideration of visual comfort and aesthetics.
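      As a rough illustration of how a power-spectrum-based fitness could be computed (a minimal Python/NumPy sketch; the radial binning and log-scale comparison here are assumptions, not the thesis's actual measures):

          import numpy as np

          def radial_power_spectrum(image, bins=64):
              """Radially averaged 2D power spectral density of a greyscale image."""
              spectrum = np.fft.fftshift(np.fft.fft2(image))
              power = np.abs(spectrum) ** 2
              h, w = image.shape
              y, x = np.indices((h, w))
              radius = np.hypot(x - w / 2.0, y - h / 2.0)
              edges = np.linspace(0.0, radius.max() + 1e-9, bins + 1)
              band = np.digitize(radius.ravel(), edges) - 1
              total = np.bincount(band, weights=power.ravel(), minlength=bins)[:bins]
              count = np.bincount(band, minlength=bins)[:bins]
              return total / np.maximum(count, 1)

          def spectral_fitness(candidate, target, bins=64):
              """Lower is better: distance between log-scaled radial spectra."""
              a = np.log1p(radial_power_spectrum(candidate, bins))
              b = np.log1p(radial_power_spectrum(target, bins))
              return float(np.mean((a - b) ** 2))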
    • Improving Short DNA Sequence Alignment with Parallel Computing

      Peters, Darren; Department of Computer Science (2013-04-11)
      Variations in different types of genomes have been found to be responsible for a large degree of physical diversity such as appearance and susceptibility to disease. Identification of genomic variations is difficult and can be facilitated through computational analysis of DNA sequences. Newly available technologies are able to sequence billions of DNA base pairs relatively quickly. These sequences can be used to identify variations within their specific genome but must be mapped to a reference sequence first. In order to align these sequences to a reference sequence, we require mapping algorithms that make use of approximate string matching and string indexing methods. To date, few mapping algorithms have been tailored to handle the massive amounts of output generated by newly available sequencing technologies. In order to handle this large amount of data, we modified the popular mapping software BWA to run in parallel using OpenMPI. Parallel BWA matches the efficiency of multithreaded BWA functions while providing efficient parallelism for BWA functions that do not currently support multithreading. Parallel BWA shows significant wall time speedup in comparison to multithreaded BWA on high-performance computing clusters, and will thus facilitate the analysis of genome sequencing data.
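      The thesis modifies BWA's C code to use OpenMPI; purely as an illustration of the distribution idea, the following Python sketch (using mpi4py, with hypothetical file names) assigns pre-split read chunks to MPI ranks and runs bwa mem on each:

          # Illustrative sketch only: the actual Parallel BWA parallelizes BWA itself.
          import glob
          import subprocess
          from mpi4py import MPI

          comm = MPI.COMM_WORLD
          rank, size = comm.Get_rank(), comm.Get_size()

          reference = "reference.fa"                      # hypothetical BWA index prefix
          chunks = sorted(glob.glob("reads_chunk_*.fq"))  # hypothetical pre-split reads

          # Each rank aligns every size-th chunk and writes one SAM file per chunk.
          for chunk in chunks[rank::size]:
              out_path = chunk.replace(".fq", ".sam")
              with open(out_path, "w") as sam:
                  subprocess.run(["bwa", "mem", reference, chunk], stdout=sam, check=True)

          comm.Barrier()  # all ranks finish before any downstream merging step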
    • Improving the Scalability of Reduct Determination in Rough Sets

      Mahmood, Shahid; Department of Computer Science (2013-04-09)
      Rough Set Data Analysis (RSDA) is a non-invasive data analysis approach that relies solely on the data to find patterns and decision rules. Despite its non-invasive approach and its ability to generate human-readable rules, classical RSDA has not been successfully used in commercial data mining and rule-generating engines. The reason is its poor scalability: classical RSDA slows down considerably on larger data sets and takes much longer to generate rules. This research aims to address the issue of scalability in rough sets by improving the performance of the attribute reduction step of classical RSDA, which is the root cause of its slow performance. We propose to move the entire attribute reduction process into the database. We define a new schema to store the initial data set and then define SQL queries on this new schema that find the attribute reducts correctly and faster than the traditional RSDA approach. We tested our technique on two typical data sets and compared our results with the traditional RSDA approach for attribute reduction. Finally, we highlight some issues with our proposed approach which could lead to future research.
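      A minimal sketch of the in-database idea (Python with SQLite and a toy decision table; the thesis's actual schema and queries are not reproduced here): an attribute subset is consistent with the decision if no group of condition values is associated with more than one decision value.

          import sqlite3

          # Toy decision table with condition attributes a, b, c and decision d.
          rows = [(1, 0, 1, "yes"), (1, 0, 0, "yes"),
                  (0, 1, 1, "no"),  (0, 1, 0, "no"),
                  (1, 1, 1, "yes")]

          db = sqlite3.connect(":memory:")
          db.execute("CREATE TABLE decision_table (a INT, b INT, c INT, d TEXT)")
          db.executemany("INSERT INTO decision_table VALUES (?, ?, ?, ?)", rows)

          def is_consistent(attrs):
              """True if the given attribute subset functionally determines d."""
              cols = ", ".join(attrs)
              query = ("SELECT COUNT(*) FROM ("
                       f"SELECT {cols}, COUNT(DISTINCT d) AS nd "
                       f"FROM decision_table GROUP BY {cols}) AS g WHERE g.nd > 1")
              return db.execute(query).fetchone()[0] == 0

          print(is_consistent(["a"]))        # True: attribute a alone determines d
          print(is_consistent(["b", "c"]))   # False: {b, c} is not a reduct here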
    • An Interactive Theorem Prover for First-Order Dynamic Logic

      Das, Tuhin Kanti; Department of Computer Science (Brock University, 2012-10-11)
      Dynamic logic is an extension of modal logic originally intended for reasoning about computer programs. Proofs of program correctness in the style of the well-known Hoare logic can be carried out within dynamic logic. For a very broad range of languages and applications in program verification, a theorem prover named KIV (Karlsruhe Interactive Verifier) has already been developed, but its high degree of automation and complexity make it difficult to use for educational purposes. My research work is motivated by the design and implementation of a similar interactive theorem prover with educational use as its main design criterion. As the key purpose of this system is to serve as an educational tool, it is a self-explanatory system that explains every step of creating a derivation, i.e., proving a theorem. The deductive system is implemented in the platform-independent programming language Java. In addition, a popular combination of the lexical analyzer generator JFlex and the parser generator BYacc/J is used for parsing formulas and programs.
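      For context, the box modality of dynamic logic reads "after every terminating execution of program α, the formula holds", and the usual axioms such a prover derives include the following (standard first-order dynamic logic, not specific to this thesis; in LaTeX):

          \begin{align*}
            [\alpha;\beta]\varphi      &\;\leftrightarrow\; [\alpha][\beta]\varphi               && \text{(sequential composition)} \\
            [\alpha \cup \beta]\varphi &\;\leftrightarrow\; [\alpha]\varphi \land [\beta]\varphi && \text{(nondeterministic choice)} \\
            [\psi?]\varphi             &\;\leftrightarrow\; (\psi \rightarrow \varphi)           && \text{(test)} \\
            [x := e]\varphi            &\;\leftrightarrow\; \varphi[e/x]                         && \text{(assignment, $e$ free for $x$ in $\varphi$)}
          \end{align*}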
    • Inverse Illumination Design with Genetic Programming

      Moylan, Kelly; Department of Computer Science
      Interior illumination is a complex problem involving numerous interacting factors. This research applies genetic programming towards problems in illumination design. The Radiance system is used for performing accurate illumination simulations. Radiance accounts for a number of important environmental factors, which we exploit during fitness evaluation. Illumination requirements include local illumination intensity from natural and artificial sources, colour, and uniformity. Evolved solutions incorporate design elements such as artificial lights, room materials, windows, and glass properties. A number of case studies are examined, including many-objective problems involving up to 7 illumination requirements, the design of a decorative wall of lights, and the creation of a stained-glass window for a large public space. Our results show the technical and creative possibilities of applying genetic programming to illumination design.
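      As a hedged illustration of how illumination requirements might be scored during fitness evaluation (a Python sketch with hypothetical requirement names and weights; the thesis uses Radiance simulations and its own aggregation of objectives):

          # Hypothetical illumination requirements: name -> (target value, weight).
          # In the thesis, measured values would come from a Radiance simulation.
          def illumination_fitness(measurements, requirements):
              """Weighted sum of normalized deviations from each illumination target."""
              error = 0.0
              for name, (target, weight) in requirements.items():
                  measured = measurements[name]
                  error += weight * abs(measured - target) / max(target, 1e-9)
              return error  # lower is better

          requirements = {
              "desk_lux":        (500.0, 2.0),   # desired illuminance at a work surface
              "wall_uniformity": (0.8,   1.0),   # min/avg illuminance ratio on a wall
              "colour_temp_k":   (4000.0, 0.5),  # desired correlated colour temperature
          }
          measurements = {"desk_lux": 430.0, "wall_uniformity": 0.72, "colour_temp_k": 4200.0}
          print(illumination_fitness(measurements, requirements))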
    • L-Fuzzy Relations in Coq

      Jackson, Ethan; Department of Computer Science (Brock University, 2014-09-05)
      Heyting categories, a variant of Dedekind categories, and Arrow categories provide a convenient framework for expressing and reasoning about fuzzy relations and programs based on those methods. In this thesis we present an implementation of Heyting and arrow categories suitable for reasoning and program execution using Coq, an interactive theorem prover based on Higher-Order Logic (HOL) with dependent types. This implementation can be used to specify and develop correct software based on L-fuzzy relations such as fuzzy controllers. We give an overview of lattices, L-fuzzy relations, category theory and dependent type theory before describing our implementation. In addition, we provide examples of program executions based on our framework.
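      For readers unfamiliar with L-fuzzy relations, a small Python sketch (not the thesis's Coq development) of relational composition over the four-element "diamond" lattice, where degrees combine by meet along a path and by join over alternative paths:

          # Diamond lattice L = {0, a, b, 1}: 0 is bottom, 1 is top, a and b incomparable.
          BELOW = {"0": {"0"}, "a": {"0", "a"}, "b": {"0", "b"}, "1": {"0", "a", "b", "1"}}

          def meet(x, y):
              """Greatest lower bound in L."""
              common = BELOW[x] & BELOW[y]
              return max(common, key=lambda z: len(BELOW[z]))

          def join(x, y):
              """Least upper bound in L."""
              uppers = [z for z in BELOW if BELOW[x] <= BELOW[z] and BELOW[y] <= BELOW[z]]
              return min(uppers, key=lambda z: len(BELOW[z]))

          def compose(R, S, xs, ys, zs):
              """(R ; S)(x, z) = join over y of meet(R(x, y), S(y, z))."""
              composed = {}
              for x in xs:
                  for z in zs:
                      degree = "0"
                      for y in ys:
                          degree = join(degree, meet(R[(x, y)], S[(y, z)]))
                      composed[(x, z)] = degree
              return composed

          # Toy degrees: R relates people to symptoms, S relates symptoms to conditions.
          people, symptoms, conditions = ["p"], ["fever", "cough"], ["flu"]
          R = {("p", "fever"): "a", ("p", "cough"): "b"}
          S = {("fever", "flu"): "1", ("cough", "flu"): "b"}
          print(compose(R, S, people, symptoms, conditions))  # {('p', 'flu'): '1'}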
    • L-Fuzzy Structured Query Language

      Adjei, Evans; Department of Computer Science
      Lattice-valued fuzziness is more general than crispness or fuzziness based on the unit interval. In this work, we present a query language for a lattice-based fuzzy database. We define a Lattice Fuzzy Structured Query Language (LFSQL) taking its membership values from an arbitrary lattice L. LFSQL can handle, manage, and represent crisp values and linearly ordered membership degrees, and it also allows membership degrees from lattices with non-comparable values. This gives richer membership degrees and hence makes LFSQL more flexible than FSQL or SQL. In order to handle vague or imprecise information, every entry in an L-fuzzy database is an L-fuzzy set instead of a crisp value. All of this makes LFSQL an ideal query language for handling imprecise data where some factors are non-comparable. After defining the syntax of the language formally, we provide its semantics using L-fuzzy sets and relations. The semantics can be used in future work to investigate concepts such as functional dependencies. Last but not least, we present a parser for LFSQL implemented in Haskell.
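      As an informal illustration of lattice-valued query evaluation (a Python sketch, not the thesis's Haskell parser or formal semantics), a conjunctive WHERE condition can be scored by taking the meet of the per-attribute membership degrees, which may be non-comparable lattice elements:

          # Membership degrees from the diamond lattice {0, a, b, 1}, where a and b
          # are non-comparable (e.g., evidence from two unrelated sources).
          MEET = {}
          for x in "0ab1":
              for y in "0ab1":
                  if x == y:            MEET[(x, y)] = x
                  elif "0" in (x, y):   MEET[(x, y)] = "0"
                  elif "1" in (x, y):   MEET[(x, y)] = x if y == "1" else y
                  else:                 MEET[(x, y)] = "0"   # a and b meet at bottom

          # One fuzzy row: each attribute value carries an L-valued membership degree.
          row = {"age": ("young", "a"), "salary": ("high", "b")}

          def where_degree(row, condition):
              """Degree to which the row satisfies a conjunction of attribute tests."""
              degree = "1"
              for attribute, wanted in condition.items():
                  value, membership = row[attribute]
                  degree = MEET[(degree, membership if value == wanted else "0")]
              return degree

          print(where_degree(row, {"age": "young"}))                    # "a"
          print(where_degree(row, {"age": "young", "salary": "high"}))  # meet(a, b) = "0"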
    • Learning Strategies for Evolved Co-operating Multi-Agent Teams in Pursuit Domain

      Grossi, Gina; Department of Computer Science
      This study investigates how genetic programming (GP) can be effectively used in a multi-agent system to allow agents to learn to communicate. Using the predator-prey scenario and a co-operative learning strategy, communication protocols are compared as multiple predator agents learn the meaning of commands in order to achieve their common goal of first finding, and then tracking prey. This work is divided into three parts. The first part uses a simple GP language in the Pursuit Domain Development Kit (PDP) to investigate several communication protocols, and compares the predators' ability to find and track prey when the prey moves both linearly and randomly. The second part, again in the PDP environment, enhances the GP language and fitness measure in search of a better solution for when the prey moves randomly. The third part uses the Ms. Pac-Man Development Toolkit to test how the enhanced GP language performs in a game environment. The outcome of each part of this study reveals emergent behaviours in different forms of message sending patterns. The results from Part 1 reveal a general synchronization behaviour emerging from simple message passing among agents. Additionally, the results show a learned behaviour in the best result which resembles the behaviour of guards and reinforcements found in popular stealth video games. The outcomes from Part 2 reveal an emergent message sending pattern such that one agent is designated as the "sending" agent and the remaining agents are designated as "receiving" agents. Evolved agents in the Ms. Pac-Man simulator show an emergent sending pattern in which there is one agent that sends messages when it is in view of the prey. In addition, it is shown that evolved agents in both Part 2 and Part 3 are able to learn a language. For example, "sending" agents are able to make decisions about when and what type of command to send and "receiving" agents are able to associate the intended meaning to commands.
    • Lossy Compression of Quality Values in Next-Generation Sequencing Data

      Suaste Morales, Veronica; Department of Computer Science
      In this work we address the compression of SAM files, the standard output format for DNA assembly. We specifically study lossy compression techniques applied to the quality values reported in the SAM file, and we analyse the impact of such lossy techniques on the CRAM format. We also study the impact of these lossy techniques on the SNP calling process. Our results show that lossy techniques allow a better compression ratio than the one obtained with the original quality values. We also show that SNP calling performance is not negatively affected. Moreover, we confirm that some of the lossy techniques can even boost SNP calling performance.
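      As a simple illustration of one family of lossy schemes for quality values (uniform binning to a reduced alphabet, sketched in Python; the bin width here is arbitrary and the thesis evaluates its own set of techniques):

          # Phred quality scores in a SAM/FASTQ record are ASCII characters (Phred+33).
          # Mapping them to a handful of representative values shrinks the alphabet,
          # which lets downstream compressors (e.g., within CRAM) encode them tightly.
          def bin_quality_string(qualities, bin_width=8):
              """Lossily quantize a Phred+33 quality string to bin midpoints."""
              binned = []
              for ch in qualities:
                  q = ord(ch) - 33                       # decode Phred+33
                  q = min(q, 41)                         # clamp to a typical maximum
                  mid = (q // bin_width) * bin_width + bin_width // 2
                  binned.append(chr(min(mid, 41) + 33))  # re-encode as Phred+33
              return "".join(binned)

          original = "IIIIHGF@@@#!!"
          print(bin_quality_string(original))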
    • Modal and Relevance Logics for Qualitative Spatial Reasoning

      Ghosh, Pranab kumar; Department of Computer Science
      Qualitative Spatial Reasoning (QSR) is an alternative technique for representing spatial relations without using numbers; regions and their relationships are used as qualitative terms. Most qualitative spatial reasoning has two aspects: (a) the first aspect is based on inclusion and focuses on the "part-of" relationship, which is mathematically covered by mereology; (b) the second aspect focuses on topology, i.e., whether regions are in "contact" without having a common part. Mereotopology is a mathematical theory that covers both aspects. The theoretical contribution of this thesis is to combine classical propositional logic with non-classical relevance logic to obtain a logic capable of reasoning about Boolean algebras, i.e., the mereological aspect of QSR. We then extend the logic further by adding modal operators in order to reason about topological contact, i.e., the topological aspect of QSR. We name the resulting logic Modal Relevance Logic (MRL). We provide a natural deduction system for this logic by defining inference rules for the operators and constants of MRL, and we show that our system is correct. Furthermore, we have used the functional programming language and interactive theorem prover Coq to implement the definitions and natural deduction rules in order to provide an interactive system for reasoning in the logic.
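      For orientation, the contact relation of mereotopology is usually axiomatized over a Boolean algebra roughly as follows (a standard Boolean contact algebra presentation, not the thesis's MRL rules; in LaTeX):

          \begin{align*}
            x \neq 0                &\;\Rightarrow\;     C(x, x)                   && \text{(reflexivity on non-zero regions)} \\
            C(x, y)                 &\;\Rightarrow\;     C(y, x)                   && \text{(symmetry)} \\
            C(x, y) \land y \leq z  &\;\Rightarrow\;     C(x, z)                   && \text{(monotonicity)} \\
            C(x, y \vee z)          &\;\Leftrightarrow\; C(x, y) \lor C(x, z)      && \text{(distribution over join)}
          \end{align*}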
    • Modeling Metal Protein Complexes from Experimental Extended X-ray Absorption Fine Structure using Computational Intelligence

      Price, Collin; Department of Computer Science (Brock University, 2014-10-30)
      Experimental Extended X-ray Absorption Fine Structure (EXAFS) spectra carry information about the chemical structure of metal protein complexes. However, predicting the structure of such complexes from EXAFS spectra is not a simple task. Currently, methods such as Monte Carlo optimization or simulated annealing are used in EXAFS structure refinement. These methods have proven somewhat successful in structure refinement but have not been successful in finding the global minimum. Multiple population-based algorithms, including a genetic algorithm, a restarting genetic algorithm, differential evolution, and particle swarm optimization, are studied for their effectiveness in EXAFS structure refinement. The oxygen-evolving complex in S1 is used as a benchmark for comparing the algorithms. These algorithms were successful in finding new atomic structures that produced improved calculated EXAFS spectra over atomic structures previously found.
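      As a hedged sketch of how one such population-based refinement could be set up (Python with SciPy's differential evolution; the toy_exafs forward model below is a drastically simplified stand-in for a FEFF-style EXAFS calculation, and the parameterization is illustrative only):

          import numpy as np
          from scipy.optimize import differential_evolution

          k = np.linspace(3.0, 12.0, 200)   # photoelectron wavenumber grid (1/Angstrom)

          def toy_exafs(distances, sigma2=0.003):
              """Drastically simplified chi(k): one damped sine per scattering shell."""
              chi = np.zeros_like(k)
              for r in distances:
                  chi += np.sin(2.0 * k * r) * np.exp(-2.0 * k**2 * sigma2) / (k * r**2)
              return chi

          # Pretend the "experimental" spectrum comes from shells at 1.8 A and 2.7 A.
          experimental = toy_exafs([1.8, 2.7])

          def discrepancy(distances):
              """Residual between calculated and target spectra (lower is better)."""
              return float(np.sum((toy_exafs(distances) - experimental) ** 2))

          bounds = [(1.5, 3.5), (1.5, 3.5)]  # search range for the two shell distances
          result = differential_evolution(discrepancy, bounds, seed=1, maxiter=200)
          print(result.x, result.fun)        # should recover the two distances (in some order)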
    • Modelling and Proving Cryptographic Protocols in the Spi Calculus using Coq

      Tonet, Adam; Department of Computer Science
      The spi calculus is a process algebra used to model cryptographic protocols. A process calculus is a means of modelling a system of concurrently interacting agents, and provides tools for the description of communications and synchronizations between those agents. The spi calculus is based on Robin Milner's pi calculus, which was itself based upon his Calculus of Communicating Systems (CCS). It was created by Martin Abadi and Andrew D. Gordon as an expansion of the pi calculus intended to focus on cryptographic protocols, and adds features such as the encryption and decryption of messages using keys. The Coq proof system is an interactive theorem prover that allows for the definition of types and functions, and provides means by which to prove properties about them. The spi calculus has been implemented in Coq and subsequently used to model and show an example proof of a property about a simple cryptographic protocol. This required the implementation of both the syntax and the semantics of the calculus, as well as the rules and axioms used to manipulate the syntax in proofs. We discuss the spi calculus in detail as defined by Abadi and Gordon, then the various challenges faced during the implementation of the calculus and our rationale for the decisions made in the process.
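      To give a flavour of the calculus (the standard one-message example from Abadi and Gordon, written here in LaTeX; it is not one of the thesis's Coq proofs), a sender A transmits M encrypted under a shared key on channel c, and B decrypts it:

          \begin{align*}
            A(M)    &\;\triangleq\; \overline{c}\langle \{M\}_{K_{AB}} \rangle \\
            B       &\;\triangleq\; c(x).\;\mathsf{case}\ x\ \mathsf{of}\ \{y\}_{K_{AB}}\ \mathsf{in}\ F(y) \\
            Inst(M) &\;\triangleq\; (\nu K_{AB})\,(A(M) \mid B)
          \end{align*}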
    • A Multi-Objective Genetic Algorithm with Side Effect Machines for Motif Discovery

      Alizadeh Noori, Farhad; Department of Computer Science (Brock University, 2012-09-18)
      Understanding the machinery of gene regulation to control gene expression has been one of the main focuses of bioinformaticians for years. We use a multi-objective genetic algorithm to evolve a specialized version of side effect machines for degenerate motif discovery. We compare some suggested objectives for the motifs they find, test different multi-objective scoring schemes and probabilistic models for the background sequence models and report our results on a synthetic dataset and some biological benchmarking suites. We conclude with a comparison of our algorithm with some widely used motif discovery algorithms in the literature and suggest future directions for research in this area.
    • Multi-objective Genetic Algorithms for Multi-depot VRP with Time Windows

      Biswas, Sanjib; Department of Computer Science
      Efficient routing and scheduling have significant economic implications for many real-world situations arising in transportation logistics, scheduling, and distribution systems, among others. This work considers both the single depot vehicle routing problem with time windows (VRPTW) and the multi-depot vehicle routing problem with time windows (MDVRPTW). An age-layered population structure genetic algorithm is proposed for both variants of the vehicle routing problem. To the best of the author’s knowledge, this is the first work to provide a multi-objective genetic algorithm approach for the MDVRPTW using well-known benchmark data with up to 288 customers.
    • Multi-Objective Genetic Algorithms for the Single Allocation Hub Location Problem

      Asobiela, Stephen Yamzuuga; Department of Computer Science (Brock University, 2013-09-12)
      Hub Location Problems play vital economic roles in transportation and telecommunication networks where goods or people must be efficiently transferred from an origin to a destination point whilst direct origin-destination links are impractical. This work investigates the single allocation hub location problem, and proposes a genetic algorithm (GA) approach for it. The effectiveness of using a single-objective criterion measure for the problem is first explored. Next, a multi-objective GA employing various fitness evaluation strategies such as Pareto ranking, sum of ranks, and weighted sum strategies is presented. The effectiveness of the multi-objective GA is shown by comparison with an Integer Programming strategy, the only other multi-objective approach found in the literature for this problem. Lastly, two new crossover operators are proposed and an empirical study is done using small to large problem instances of the Civil Aeronautics Board (CAB) and Australian Post (AP) data sets.
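      For readers unfamiliar with the Pareto ranking strategy mentioned above, a minimal Python sketch of rank assignment for a minimization problem (illustrative only; the thesis's GA details differ):

          def dominates(a, b):
              """True if objective vector a Pareto-dominates b (minimization)."""
              return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

          def pareto_ranks(population):
              """Rank 1 = non-dominated; remove that front and repeat for rank 2, etc."""
              ranks, remaining, rank = {}, set(range(len(population))), 1
              while remaining:
                  front = {i for i in remaining
                           if not any(dominates(population[j], population[i])
                                      for j in remaining if j != i)}
                  for i in front:
                      ranks[i] = rank
                  remaining -= front
                  rank += 1
              return ranks

          # Example: (total network cost, worst-case travel time) for four candidate designs.
          objectives = [(10.0, 4.0), (8.0, 5.0), (12.0, 3.0), (11.0, 6.0)]
          print(pareto_ranks(objectives))   # {0: 1, 1: 1, 2: 1, 3: 2}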
    • Network Similarity Measures and Automatic Construction of Graph Models using Genetic Programming

      Harrison, Kyle Robert; Department of Computer Science (Brock University, 2014-09-05)
      A complex network is an abstract representation of an intricate system of interrelated elements where the patterns of connection hold significant meaning. One particular complex network is a social network whereby the vertices represent people and edges denote their daily interactions. Understanding social network dynamics can be vital to the mitigation of disease spread as these networks model the interactions, and thus avenues of spread, between individuals. To better understand complex networks, algorithms which generate graphs exhibiting observed properties of real-world networks, known as graph models, are often constructed. While various efforts to aid with the construction of graph models have been proposed using statistical and probabilistic methods, genetic programming (GP) has only recently been considered. However, determining that a graph model of a complex network accurately describes the target network(s) is not a trivial task as the graph models are often stochastic in nature and the notion of similarity is dependent upon the expected behavior of the network. This thesis examines a number of well-known network properties to determine which measures best allowed networks generated by different graph models, and thus the models themselves, to be distinguished. A proposed meta-analysis procedure was used to demonstrate how these network measures interact when used together as classifiers to determine network, and thus model, (dis)similarity. The analytical results form the basis of the fitness evaluation for a GP system used to automatically construct graph models for complex networks. The GP-based automatic inference system was used to reproduce existing, well-known graph models as well as a real-world network. Results indicated that the automatically inferred models exemplified functional similarity when compared to their respective target networks. This approach also showed promise when used to infer a model for a mammalian brain network.
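      As a small illustration of comparing networks by well-known structural properties (Python with NetworkX; the measures and graph models studied in the thesis are more extensive):

          import networkx as nx

          def profile(g):
              """A few classic summary measures used to compare complex networks."""
              giant = g.subgraph(max(nx.connected_components(g), key=len))
              return {
                  "avg_clustering": nx.average_clustering(g),
                  "avg_path_length": nx.average_shortest_path_length(giant),
                  "assortativity": nx.degree_assortativity_coefficient(g),
              }

          # Two candidate graph models producing networks of the same size and density.
          ba = nx.barabasi_albert_graph(500, 3, seed=1)                 # preferential attachment
          er = nx.gnm_random_graph(500, ba.number_of_edges(), seed=1)   # random baseline

          print(profile(ba))
          print(profile(er))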
    • New Contig Creation Algorithm for the de novo DNA Assembly Problem

      Goodarzi, Mohammad; Department of Computer Science (Brock University, 2014-02-25)
      DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and small genomes; however, assembling large and complex genomes, especially the human genome, using Next-Generation Sequencing (NGS) technologies has been shown to be very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage, and tools that are not specifically built for human genomes. Moreover, many algorithms are not even scalable to human genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems, including DNA data error detection and correction, contig creation, scaffolding, and contig orientation; each can be seen as a distinct research area. This thesis specifically focuses on creating contigs from the short reads and combining them with outputs from other tools in order to obtain better results. Three different assemblers, SOAPdenovo [Li09], Velvet [ZB08] and Meraculous [CHS+11], are selected for comparative purposes in this thesis. The obtained results show that this thesis’ work produces results comparable to other assemblers, and that combining our contigs with outputs from other tools produces the best results, outperforming all other investigated assemblers.
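      As a toy illustration of contig creation (a minimal de Bruijn graph walk in Python over error-free reads; the thesis's algorithm and the cited assemblers are far more involved):

          from collections import defaultdict

          def build_contigs(reads, k=5):
              """Build (k-1)-mer graph edges, then walk maximal non-branching paths."""
              succ, pred = defaultdict(set), defaultdict(set)
              for read in reads:
                  for i in range(len(read) - k + 1):
                      left, right = read[i:i + k - 1], read[i + 1:i + k]
                      succ[left].add(right)
                      pred[right].add(left)

              contigs = []
              for node in list(succ):
                  # Only start contigs at nodes that are not simple 1-in/1-out nodes.
                  if len(pred[node]) == 1 and len(succ[node]) == 1:
                      continue
                  for nxt in succ[node]:
                      contig, current = node + nxt[-1], nxt
                      while len(succ[current]) == 1 and len(pred[current]) == 1:
                          current = next(iter(succ[current]))
                          contig += current[-1]
                      contigs.append(contig)
              return contigs

          # e.g. ['ACGTA', 'ACGGTA', 'GTACG', 'GTAT'] (order may vary)
          print(build_contigs(["ACGTACGGTA", "CGGTAT"], k=4))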
    • Non-photorealistic Rendering with Cartesian Genetic Programming using Graphic Processing Units

      Bakurov, Illya; Department of Computer Science
      Non-photorealistic rendering (NPR) is concerned with the algorithmic generation of images having unrealistic characteristics, for example, oil paintings or watercolours. Using genetic programming to evolve aesthetically pleasing NPR images is a relatively new approach in the art field, and in the majority of cases it takes a long time to generate results. With the use of Cartesian genetic programming (CGP) and graphics processing units (GPUs), we can improve the performance of NPR image evolution. Evolutionary NPR can render images with interesting, and often unexpected, graphic effects. CGP provides a means to eliminate large, inefficient rendering expressions, while GPU acceleration parallelizes the calculations, which minimizes the time needed to obtain results. By using these tools, we can speed up the image generation process. Experiments revealed that CGP expressions are more concise, and search is more exploratory, than in tree-based approaches. Implementation of the system on GPUs showed a significant speed-up.
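      For readers unfamiliar with CGP's fixed-length encoding, a minimal Python sketch of decoding and evaluating a single-row Cartesian genome (the thesis's NPR function set, image inputs, and GPU evaluation are not shown; the function set below is hypothetical):

          import math
          import operator

          # Function set indexed by the genome; the thesis's NPR operators would differ.
          FUNCTIONS = [operator.add, operator.mul, lambda a, b: math.sin(a), max]

          def evaluate_cgp(genome, outputs, inputs):
              """Each node gene is (function_index, input_a, input_b); the input
              indices refer to earlier nodes or to the program inputs (feed-forward)."""
              values = list(inputs)
              for func_idx, a, b in genome:
                  values.append(FUNCTIONS[func_idx](values[a], values[b]))
              return [values[i] for i in outputs]

          # Two inputs (x, y) followed by three nodes; the single output reads node 4.
          genome = [(0, 0, 1),   # node 2: x + y
                    (2, 2, 0),   # node 3: sin(node 2)
                    (1, 3, 1)]   # node 4: node 3 * y
          print(evaluate_cgp(genome, outputs=[4], inputs=[0.5, 2.0]))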
    • Object Classification using L-Fuzzy Concept Analysis

      Addison, George Tsekpetse; Department of Computer Science
      Object classification and processing have become an integral part of modern industrial manufacturing systems, generally utilized in manual or computerized inspection processes. Vagueness is a common issue in object classification and analysis, such as ambiguity in the input data, overlapping boundaries among classes or regions, and indefiniteness in defining or extracting features and relations among them. The main purpose of this thesis is to construct, define, and implement an abstract algebraic framework for L-fuzzy relations to represent the uncertainties involved at every stage of object classification. This is done to handle the vagueness found in the object classification process by retaining as much information as possible from the original data, so that decisions made at the highest level, and hence the ultimate output of the associated system, carry the least uncertainty.
    • Object-Oriented Genetic Programming for the Automatic Inference of Graph Models for Complex Networks

      Medland, Michael; Department of Computer Science
      Complex networks are systems of entities that are interconnected through meaningful relationships. The relations between entities form a structure whose statistical complexity is not formed by random chance. In the study of complex networks, many graph models have been proposed to model the behaviours observed. However, constructing graph models manually is tedious and problematic, and many of the models proposed in the literature have been cited as having inaccuracies with respect to the complex networks they represent. Recently, an approach that automates the inference of graph models was proposed by Bailey [10]. The proposed methodology employs genetic programming (GP) to produce graph models that approximate various properties of an exemplary graph of a targeted complex network. However, a great deal is already known about complex networks in general, and often specific knowledge is held about the network being modelled. This knowledge, albeit incomplete, is important in constructing a graph model, yet it is difficult to incorporate using existing GP techniques. Thus, this thesis proposes a novel GP system which can incorporate incomplete expert knowledge to assist in the evolution of a graph model. Inspired by existing graph models, an abstract graph model was developed to serve as an embryo for inferring graph models of some complex networks. The GP system and abstract model were used to reproduce well-known graph models. The results indicated that the system was able to evolve models that produced networks with structural similarities to the networks generated by the respective target models.