    • Real-Time Automatic Object Classification and Tracking using Genetic Programming and NVIDIA® CUDA™

      Maghoumi, Mehran; Department of Computer Science (Brock University, 2014-08-01)
      Genetic Programming (GP) is a widely used methodology for solving various computational problems. GP's problem-solving ability is usually hindered by its long execution times. In this thesis, GP is applied to real-time computer vision. In particular, object classification and tracking using a parallel GP system is discussed. First, a study of suitable GP languages for object classification is presented. Two main GP approaches for visual pattern classification, namely block-classifiers and pixel-classifiers, were studied. Results showed that the pixel-classifiers generally performed better. Using these results, a suitable language was selected for the real-time implementation. Synthetic video data was used in the experiments. The goal of the experiments was to evolve a unique classifier for each texture pattern that existed in the video. The experiments revealed that the system was capable of correctly tracking the textures in the video. The performance of the system was on par with real-time requirements.
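      The pixel-classifier approach can be illustrated with a minimal sketch (hypothetical Python, not the thesis's GP language or its CUDA implementation): an evolved expression is applied independently to every pixel, using simple neighbourhood statistics as terminals, and the sign of its output decides texture membership.

          import numpy as np

          def classify_pixels(image, evolved_expr, window=3):
              """Apply a pixel-classifier to a grayscale image.

              `evolved_expr` stands in for a GP-evolved expression tree; here it is any
              callable taking (pixel, mean, std) of a local window and returning a number
              whose sign marks texture membership (illustrative only).
              """
              h, w = image.shape
              r = window // 2
              mask = np.zeros((h, w), dtype=bool)
              for y in range(r, h - r):
                  for x in range(r, w - r):
                      patch = image[y - r:y + r + 1, x - r:x + r + 1]
                      out = evolved_expr(image[y, x], patch.mean(), patch.std())
                      mask[y, x] = out > 0.0   # positive output => pixel belongs to the texture
              return mask

          # Example: a hand-written stand-in for an evolved classifier.
          mask = classify_pixels(np.random.rand(32, 32),
                                 lambda px, mu, sd: px - mu + 0.5 * sd)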
    • ReAlM - a system to manipulate relations

      Ahmed, Zafor; Department of Computer Science (Brock University, 2010-10-26)
      Given a heterogeneous relation algebra R, it is well known that the algebra of matrices with coefficients from R is a relation algebra with relational sums that is not necessarily finite. When a relational product exists or the point axiom is given, we can represent the relation algebra by concrete binary relations between sets, which means the algebra may be seen as an algebra of Boolean matrices. However, it is not possible to represent every relation algebra. It is well known that the smallest relation algebra that is not representable has only 16 elements. Such an algebra cannot be put in Boolean matrix form [15]. In [15, 16] it was shown that every relation algebra R with relational sums and sub-objects is equivalent to an algebra of matrices over a suitable basis. This basis is given by the integral objects of R and is, compared to R, much smaller. The aim of my thesis is to develop a system called ReAlM - the Relation Algebra Manipulator - that is capable of visualizing computations in arbitrary relation algebras using the matrix approach.
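      The matrix view of relations mentioned above can be made concrete with a small sketch (assumed notation, not ReAlM's actual interface): heterogeneous relations are Boolean matrices, and relational composition is Boolean matrix multiplication.

          def compose(R, S):
              """Composition R;S of Boolean matrices: (R;S)[i][k] iff some j with R[i][j] and S[j][k]."""
              return [[any(R[i][j] and S[j][k] for j in range(len(S)))
                       for k in range(len(S[0]))]
                      for i in range(len(R))]

          # R relates {a0,a1} to {b0,b1,b2}; S relates {b0,b1,b2} to {c0,c1}.
          R = [[True, False, True],
               [False, True, False]]
          S = [[False, True],
               [True, False],
               [True, True]]
          print(compose(R, S))   # [[True, True], [True, False]]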
    • Region Connection Calculus: Composition Tables and Constraint Satisfaction Problems

      Ghosh, Manas; Department of Computer Science (Brock University, 2013-11-05)
      Qualitative spatial reasoning (QSR) is an important field of AI that deals with qualitative aspects of spatial entities. Regions and their relationships are described in qualitative terms instead of numerical values. This approach models human-based reasoning about such entities more closely than other approaches. Relationships between regions that we encounter in daily-life situations are normally formulated in natural language. For example, one can outline one's room plan to an expert by indicating which rooms should be connected to each other. Mereotopology as an area of QSR combines mereology, topology and algebraic methods. As mereotopology plays an important role in region-based theories of space, our focus is on one of the most widely referenced formalisms for QSR, the region connection calculus (RCC). RCC is a first-order theory based on a primitive connectedness relation, which is a binary symmetric relation satisfying some additional properties. By using this relation we can define a set of basic binary relations which have the property of being jointly exhaustive and pairwise disjoint (JEPD), which means that between any two spatial entities exactly one of the basic relations holds. Basic reasoning can now be done by using the composition operation on relations, whose results are stored in a composition table. Relation algebras (RAs) have become a main tool for spatial reasoning in the area of QSR. These algebras are based on equational reasoning, which can be used to derive further relations between regions in a certain situation. Each of these algebras describes the relations between regions up to a certain degree of detail. In this thesis we will use the method of splitting atoms in an RA in order to reproduce known algebras such as RCC15 and RCC25 systematically and to generate new algebras, and hence a more detailed description of regions, beyond RCC25.
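      Basic composition-table reasoning can be sketched as follows (a tiny, illustrative fragment of an RCC8-style table, not the RCC15/RCC25 algebras produced in the thesis): composing the constraint "x TPP y" with "y NTPP z" restricts the possible basic relations between x and z.

          # A tiny fragment of an RCC8-style composition table (illustrative only).
          # Entries map a pair of basic relations to the set of possible compositions.
          COMP = {
              ("EQ",   "NTPP"): {"NTPP"},   # x = y and y well inside z  =>  x well inside z
              ("TPP",  "NTPP"): {"NTPP"},   # x part of y, y well inside z  =>  x well inside z
              ("NTPP", "NTPP"): {"NTPP"},
          }

          def compose(rels_xy, rels_yz):
              """Possible basic relations between x and z, given candidate sets for (x,y) and (y,z)."""
              out = set()
              for r in rels_xy:
                  for s in rels_yz:
                      out |= COMP.get((r, s), set())   # pairs missing from this fragment are omitted
              return out

          print(compose({"TPP"}, {"NTPP"}))   # {'NTPP'}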
    • A Relation-Algebraic Approach to L-Fuzzy Topology

      Imangazin, Nurbek; Department of Computer Science
      Any science deals with the study of certain models of the real world. However, a model is always an abstraction resulting in some uncertainty, which must be considered. The theory of fuzzy sets is one way of formalizing one of the types of uncertainty that occurs when modeling real objects. Fuzzy sets have been applied in various real-world problems such as control system engineering, image processing, and weather forecasting systems. This research focuses on applying the categorical framework of abstract L-fuzzy relations to L-fuzzy topology with ideas, concepts and methods of the theory of L-fuzzy sets. Since L-fuzzy sets were introduced to deal with the problem of approximate reasoning, t-norm based operations are essential in the definition of L-fuzzy topologies. We use the abstract theory of arrow categories with additional t-norm based connectives to define L-fuzzy topologies abstractly. In particular, this thesis will provide an abstract relational definition of an L-fuzzy topology, consider bases of topological spaces, continuous maps, and the first two separation axioms T0 and T1. The resulting theory of L-fuzzy topological spaces provides the foundation for applications and algorithms in areas such as digital topology, i.e., analyzing images using topological features.
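      For readers unfamiliar with t-norms, the three standard examples on the unit interval are recalled below (standard textbook definitions, not specific to the arrow-category framework of the thesis); each gives a different notion of "fuzzy intersection" when defining L-fuzzy open sets.

          % Standard t-norms on L = [0,1]:
          T_{\min}(a,b) = \min(a,b), \qquad
          T_{\mathrm{prod}}(a,b) = a \cdot b, \qquad
          T_{\mathrm{Luk}}(a,b) = \max(0,\, a + b - 1)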
    • RelMDD-A Library for Manipulating Relations Based on MDDs

      Atampore, Francis; Department of Computer Science (Brock University, 2012-09-18)
      Relation algebra is one of the state-of-the-art means used by mathematicians and computer scientists for solving very complex problems. As a result, a computer algebra system for relation algebras called RelView has been developed at Kiel University. RelView works within the standard model of relation algebras. On the other hand, relation algebras do have other models which may have different properties. For example, in the standard model we always have L;L=L (the composition of two (heterogeneous) universal relations yields a universal relation). This is not true in some non-standard models. Therefore, any example in RelView will always satisfy this property even though it is not true in general. On the other hand, it has been shown that every relation algebra with relational sums and subobjects can be seen as a matrix algebra, similar to the correspondence between binary relations between sets and Boolean matrices. The aim of my research is to develop a new system that works with both standard and non-standard models for arbitrary relations using multiple-valued decision diagrams (MDDs). This system will implement relations as matrix algebras. The proposed structure is a library written in C which can be imported by other languages such as Java or Haskell.
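      The property L;L = L mentioned above can be checked directly in the standard model; the short sketch below (Boolean matrices, not the MDD-based representation of RelMDD) shows that composing two universal relations over a non-empty intermediate set again gives the universal relation.

          def compose(R, S):
              """Composition R;S of Boolean matrices."""
              return [[any(R[i][j] and S[j][k] for j in range(len(S)))
                       for k in range(len(S[0]))] for i in range(len(R))]

          def universal(rows, cols):
              """The universal relation L as an all-True Boolean matrix."""
              return [[True] * cols for _ in range(rows)]

          # L : A -> B and L : B -> C with |A| = 2, |B| = 3, |C| = 2; B is non-empty, so L;L = L.
          print(compose(universal(2, 3), universal(3, 2)) == universal(2, 2))   # True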
    • The Salmon Algorithm - A New Population Based Search Metaheuristic

      Orth, John; Department of Computer Science (Brock University, 2012-03-02)
      This thesis introduces the Salmon Algorithm, a search metaheuristic which can be used for a variety of combinatorial optimization problems. The algorithm is loosely based on the path-finding behaviour of salmon swimming upstream to spawn. There are a number of tunable parameters in the algorithm, so experiments were conducted to find the optimum parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to have superior performance to an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding theory problems - optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm produced improvements on the best known values for five of the six test cases using edit codes. It matched the best known results on four out of seven of the Hamming codes as well as three out of three of the covering codes. The results suggest the Salmon Algorithm is competitive with established guided random search techniques, and may be superior in some search spaces.
    • A Scalability Study and New Algorithms for Large-Scale Many-Objective Optimization

      Maltese, Justin; Department of Computer Science
      Many real-world optimization problems contain multiple (often conflicting) goals to be optimized concurrently, commonly referred to as multi-objective problems (MOPs). Over the past few decades, a plethora of multi-objective algorithms have been proposed, often tested on MOPs possessing two or three objectives. Unfortunately, when tasked with solving MOPs with four or more objectives, referred to as many-objective problems (MaOPs), a large majority of optimizers experience significant performance degradation. The downfall of these optimizers is that simultaneously maintaining a well-spread set of solutions along with appropriate selection pressure to converge becomes difficult as the number of objectives increases. This difficulty is further compounded for large-scale MaOPs, i.e., MaOPs possessing large numbers of decision variables. In this thesis, we explore the challenges of many-objective optimization and propose three new promising algorithms designed to efficiently solve MaOPs. Experimental results demonstrate that the proposed optimizers perform very well, often outperforming state-of-the-art many-objective algorithms.
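      The selection-pressure difficulty described above stems from Pareto dominance becoming less discriminating as objectives are added; a minimal dominance check (the standard definition, assuming minimization) makes the notion precise.

          def dominates(a, b):
              """True if objective vector `a` Pareto-dominates `b` (minimization):
              a is no worse in every objective and strictly better in at least one."""
              return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

          # With many objectives, two solutions are rarely comparable:
          print(dominates((1.0, 2.0, 3.0), (1.5, 2.0, 3.2)))   # True
          print(dominates((1.0, 4.0, 3.0), (1.5, 2.0, 3.2)))   # False: the vectors are incomparable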
    • Shortest Path Routing on the Hypercube with Faulty Nodes

      Arabpour Niasari, Mehrdad; Department of Computer Science
      Interconnection networks are widely used in parallel computers. There are many topologies for interconnection networks, and the hypercube is one of the most popular. There are a variety of different routing paradigms that need to be investigated on the hypercube. In this thesis we investigate shortest path routing between two nodes of the hypercube when some nodes are faulty and cannot be used. Here the length of a shortest path between two nodes is their Hamming distance. Regarding the shortest path problem in a faulty hypercube, some efficient algorithms have been proposed for the case where each processor (node) has only limited information about the status of other processors (whether they are faulty or not). There are also some proposed algorithms for the case where there is no limitation on the data available to each processor, but they are not efficient and are exponential in the number of faulty nodes and the dimension of the hypercube. To check whether there is a shortest path between two given nodes in a faulty hypercube, we propose a polynomial algorithm with time complexity O(n^2 * m^2), where n is the dimension of the hypercube and m is the number of faulty nodes. Our algorithm only requires the source node to know the state of all other nodes. The proposed algorithm first checks whether there is a shortest path from the source node to the target node and then constructs one efficiently. Our idea is based on a so-called ordering and permutation model of paths in the hypercube. We use a constructive approach to find the path, which is a permutation as well. We then use inclusion-exclusion and dynamic programming techniques to make our method efficient. We also propose an algorithm for counting all possible shortest paths in the hypercube.
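      For intuition, a shortest path between nodes u and v in the n-cube only ever flips bits in which u and v differ, so its length equals their Hamming distance; the brute-force sketch below (exponential in the worst case, unlike the O(n^2 * m^2) algorithm of the thesis) checks whether such a path avoiding the faulty nodes exists.

          from functools import lru_cache

          def shortest_path_exists(u, v, n, faulty):
              """Is there a path of length hamming(u, v) from u to v in the n-cube avoiding `faulty`?
              Nodes are integers whose n-bit labels are the hypercube coordinates.
              Brute force over 'monotone' moves (only flip bits where the node still differs from v)."""
              faulty = frozenset(faulty)
              if u in faulty or v in faulty:
                  return False

              @lru_cache(maxsize=None)
              def reach(x):
                  if x == v:
                      return True
                  for i in range(n):
                      if (x ^ v) >> i & 1:              # flipping bit i moves closer to v
                          y = x ^ (1 << i)
                          if y not in faulty and reach(y):
                              return True
                  return False

              return reach(u)

          print(shortest_path_exists(0b000, 0b111, 3, faulty={0b001, 0b010}))   # True, e.g. via 100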
    • Statistical Image Analysis for Image Evolution

      Salimi, Elham; Department of Computer Science
      This thesis focuses on using genetic programming to evolve images based on lightweight features extracted from a given target image. The main motivation of this thesis is research by Lombardi et al., in which an image retrieval system is developed based on lightweight statistical features of images for comparing and classifying them into painting style categories, primarily based on color matching. In this thesis, automatic fitness scoring with variations of up to 17 lightweight image features and many-objective fitness evaluation was used to evolve textures. Evolved results were shown to have similar color characteristics to target images. A human survey conducted to confirm those results was inconclusive.
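      The kind of lightweight statistical feature used for fitness scoring can be illustrated by simple per-channel color statistics (an illustrative feature set, not the exact 17 features used in the thesis).

          import numpy as np

          def color_features(image):
              """Per-channel mean and standard deviation of an RGB image (values in [0, 1])."""
              feats = []
              for c in range(3):
                  channel = image[..., c]
                  feats += [channel.mean(), channel.std()]
              return np.array(feats)

          def color_distance(candidate, target):
              """Fitness component: how far a candidate's color statistics are from the target's."""
              return float(np.linalg.norm(color_features(candidate) - color_features(target)))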
    • A Study of Ordered Gene Problems Featuring DNA Error Correction and DNA Fragment Assembly with a Variety of Heuristics, Genetic Algorithm Variations, and Dynamic Representations

      Hughes, James; Department of Computer Science (Brock University, 2014-08-18)
      Ordered gene problems are a very common class of optimization problems. Because of their popularity, countless algorithms have been developed in an attempt to find high quality solutions to them. It is also common to see many different types of problems reduced to ordered gene style problems, since many popular heuristics and metaheuristics exist for them. Multiple ordered gene problems are studied, namely the travelling salesman problem, the bin packing problem, and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are studied with multiple variations and combinations of heuristics and metaheuristics with two distinct types of representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all problems studied, and particularly on the two bioinformatics problems. For DNA error correction, multiple cases were found with 100% of the codes being corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 out of 16 benchmark problem instances.
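      The ordered-gene (permutation) representation shared by these problems can be illustrated with the classic order crossover (OX) operator, shown below as a generic sketch (a standard operator, not necessarily the exact variation used inside the Recentering-Restarting GA).

          import random

          def order_crossover(p1, p2):
              """Order crossover (OX): keep a slice of p1, then fill the remaining positions,
              scanning from just after the slice, with p2's genes in their order of appearance."""
              n = len(p1)
              i, j = sorted(random.sample(range(n), 2))
              child = [None] * n
              child[i:j + 1] = p1[i:j + 1]                                   # inherited slice
              used = set(child[i:j + 1])
              fill = [g for g in p2[j + 1:] + p2[:j + 1] if g not in used]   # p2 scanned from j+1, wrapping
              for k in range(n):
                  pos = (j + 1 + k) % n
                  if child[pos] is None:
                      child[pos] = fill.pop(0)
              return child

          random.seed(1)
          print(order_crossover([0, 1, 2, 3, 4, 5], [5, 4, 3, 2, 1, 0]))     # a valid permutation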
    • Surface Areas of Some Interconnection Networks

      Salahi, Fatemeh; Department of Computer Science
      An interesting property of an interconnection network G is the number of nodes at distance i from an arbitrary processor u, namely the node-centered surface area. This is an important property of a network due to its applications in various fields of study. In this research, we investigate the surface area of two important interconnection networks, (n, k)-arrangement graphs and (n, k)-star graphs. Considerable work has been done to obtain a formula for the surface area of these two classes of graphs, but in general it is not trivial to find an algorithm that computes the surface area of such graphs in polynomial time, or an explicit formula with polynomially many terms with respect to the graph's parameters. Among these studies, the most efficient formula in terms of computational complexity is the one proposed by Portier and Vaughan, which allows us to compute the surface area of a special case of (n, k)-arrangement and (n, k)-star graphs, when k = n-1, in linear time; a tremendous improvement over the naive solution with complexity of order O(n * n!). The recurrence we propose here has linear computational complexity as well, but for a much wider family of graphs, namely A(n, k) for arbitrary n and k in their defined range. Additionally, for (n, k)-star graphs we prove properties that can be used to obtain a simple recurrence for their surface area.
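      The quantity in question, the number of nodes at each distance from a fixed node, can always be computed by brute force with a breadth-first search; the sketch below (a naive baseline, not the linear-time recurrence developed in the thesis) illustrates the definition on the hypercube, where the surface areas are the binomial coefficients.

          from collections import deque
          from math import comb

          def surface_areas(start, neighbours):
              """Return a list whose i-th entry is the number of nodes at distance i from `start`."""
              dist = {start: 0}
              counts = [1]
              queue = deque([start])
              while queue:
                  x = queue.popleft()
                  for y in neighbours(x):
                      if y not in dist:
                          dist[y] = dist[x] + 1
                          if dist[y] == len(counts):
                              counts.append(0)
                          counts[dist[y]] += 1
                          queue.append(y)
              return counts

          n = 5
          hypercube = lambda x: [x ^ (1 << i) for i in range(n)]
          print(surface_areas(0, hypercube))            # [1, 5, 10, 10, 5, 1]
          print([comb(n, i) for i in range(n + 1)])     # matches the binomial coefficients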
    • Swarm-based Algorithms for Neural Network Training

      McLean, Reginald; Department of Computer Science
      The main focus of this thesis is to compare the ability of various swarm intelligence algorithms when applied to the training of artificial neural networks. In order to compare the performance of the selected swarm intelligence algorithms, both classification and regression datasets were chosen from the UCI Machine Learning repository. Swarm intelligence algorithms are compared in terms of training loss, training accuracy, testing loss, testing accuracy, hidden unit saturation, and overfitting. Our observations showed that Particle Swarm Optimization (PSO) was the best-performing algorithm in terms of training loss and training accuracy. However, it was also found that the performance of PSO dropped considerably when examining the testing loss and testing accuracy results. For the classification problems, it was found that the firefly algorithm, ant colony optimization, and fish school search outperformed PSO for testing loss and testing accuracy. It was also observed that ant colony optimization was the algorithm that performed best in terms of hidden unit saturation.
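      For reference, the canonical PSO update used when treating all network weights as one position vector is recalled below (the textbook formulation, assumed rather than taken from the thesis); w is the inertia weight, c1 and c2 are acceleration coefficients, and r1, r2 are uniform random numbers.

          % Canonical PSO update for particle i (p_i = personal best, g = global best):
          v_i(t+1) = w\, v_i(t) + c_1 r_1 \bigl(p_i - x_i(t)\bigr) + c_2 r_2 \bigl(g - x_i(t)\bigr),
          \qquad x_i(t+1) = x_i(t) + v_i(t+1)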
    • A System for Models of First Order Theories

      AbdalBari, Anwar; Department of Computer Science (2012-10-12)
      If you want to know whether a property is true or not in a specific algebraic structure, you need to test that property on the given structure. This can be done by hand, which can be cumbersome and error-prone. In addition, the time consumed in testing depends on the size of the structure to which the property is applied. We present an implementation of a system for finding counterexamples and testing properties of models of first-order theories. This system is supposed to provide a convenient and paperless environment for researchers and students investigating or studying such models, and algebraic structures in particular. To implement a first-order theory in the system, a suitable first-order language and some axioms are required. The components of a language are given by a collection of variables, a set of predicate symbols, and a set of operation symbols. Variables and operation symbols are used to build terms. Terms, predicate symbols, and the usual logical connectives are used to build formulas. A first-order theory now consists of a language together with a set of closed formulas, i.e. formulas without free occurrences of variables. The set of formulas is also called the axioms of the theory. The system uses several different formats to allow the user to specify languages, to define axioms and theories and to create models. Besides the obvious operations and tests on these structures, we have introduced the notion of a functor between classes of models in order to generate more complex models from given ones automatically. As an example, we will use the system to create several lattice structures starting from a model of the theory of pre-orders.
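      The kind of property testing the system performs can be sketched by brute-force evaluation of a closed formula over a finite model; the example below (illustrative Python, not the system's actual input format) checks whether a binary relation on a small carrier set is transitive and reports a counterexample if it is not.

          from itertools import product

          def check_transitive(carrier, rel):
              """Test  forall x,y,z: R(x,y) and R(y,z) -> R(x,z)  on a finite model;
              return None if it holds, otherwise a counterexample triple."""
              for x, y, z in product(carrier, repeat=3):
                  if (x, y) in rel and (y, z) in rel and (x, z) not in rel:
                      return (x, y, z)
              return None

          carrier = {0, 1, 2}
          rel = {(0, 1), (1, 2)}                    # missing (0, 2), so not transitive
          print(check_transitive(carrier, rel))     # (0, 1, 2)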
    • Towards automated derivation in the theory of allegories

      Glanfield, Joel; Department of Computer Science (Brock University, 2008-02-16)
      We provide an algorithm that automatically derives many provable theorems in the equational theory of allegories. This was accomplished by noticing properties of an existing decision algorithm that could be extended to provide a derivation in addition to a decision certificate. We also suggest improvements and corrections to previous research in order to motivate further work on a complete derivation mechanism. The results presented here are significant for those interested in relational theories, since we essentially have a subtheory where automatic proof-generation is possible. This is also relevant to program verification since relations are well-suited to describe the behaviour of computer programs. It is likely that extensions of the theory of allegories are also decidable and possibly suitable for further expansions of the algorithm presented here.
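      For orientation, the equational theory in question includes, besides the usual laws for composition, converse and meet, the modular law recalled below (a standard allegory axiom, stated here for context rather than taken from the thesis).

          % Modular law of allegories, for R : A -> B, S : B -> C, T : A -> C:
          (R \, ; S) \sqcap T \;\sqsubseteq\; R \, ; \, (S \sqcap R^{\smile} ; T)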
    • Using Age Layered Population Structure for the Multi-Depot Vehicle Routing Problem

      Opoku-Amankwaah, Audrey; Department of Computer Science
      This thesis studies the NP-hard multi-depot vehicle routing problem (MDVRP), which is an extension of the classical VRP in which vehicles based at one of several depots must service every customer assigned to that depot. Finding the optimal solution to the MDVRP is computationally intractable for practically sized problem sets, and various meta-heuristics including genetic algorithms have been proposed in the literature. In this work, an efficient multi-population genetic algorithm based on age-layered population structures for the MDVRP is proposed. Three inter-layer transfer strategies are proposed, and multi-objective fitness evaluation is compared with a weighted-sum approach. An empirical study comparing the proposed approach with existing genetic algorithms and other meta-heuristics is carried out using well-known benchmark data. The performance found in terms of solution quality is very promising.
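      The age-layered idea can be sketched as follows (a generic ALPS-style skeleton under assumed age limits and data structures, not the specific inter-layer transfer strategies proposed in the thesis): each layer only admits individuals below its age limit, older individuals migrate upward, and the bottom layer is periodically reseeded with random individuals.

          def redistribute(layers, age_limits, new_individual, pop_size, reseed_bottom=False):
              """Move individuals whose age exceeds their layer's limit up one layer;
              optionally refill the bottom layer with fresh random individuals.
              An individual here is any dict with an 'age' key (illustrative structure)."""
              for i, limit in enumerate(age_limits[:-1]):       # the top layer has no age limit
                  too_old = [ind for ind in layers[i] if ind["age"] > limit]
                  layers[i] = [ind for ind in layers[i] if ind["age"] <= limit]
                  layers[i + 1].extend(too_old)
              if reseed_bottom:
                  layers[0] = [new_individual() for _ in range(pop_size)]
              return layers

          layers = [[{"age": 3}, {"age": 12}], [{"age": 15}], []]
          print(redistribute(layers, age_limits=[10, 20, None],
                             new_individual=lambda: {"age": 0}, pop_size=2))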
    • Using Deep Learning for Predicting Stock Trends

      Fazeli, Arvand; Department of Computer Science
      Deep learning has shown great promise in solving complicated problems in recent years. One applicable area is finance. In this study, deep learning is used to test the predictability of stock trends. Stock markets are known to be volatile, prices fluctuate, and there are many complicated financial indicators involved. While researchers' opinions differ about the predictability of stocks, previous empirical studies have shown that some aspects of stock markets can be predictable to some extent. Various data, including news and financial indicators, can be used to predict stock prices. In this study, the focus is on using past stock prices, with technical indicators used to increase the performance of the results. The goal of this study is to measure the accuracy of predictions and evaluate the results. Historical data was gathered for Apple, Microsoft, Google and Intel stocks. A prediction model was created using past data, with technical indicators as features. The experiments were performed using long short-term memory networks. Different approaches and techniques were tested to boost the performance of the results. To demonstrate the usability of the final model in the real world and to measure the profitability of its results, backtesting was performed. The final results show that while it is not possible to predict the exact future price of a stock to gain profitable results, deep learning can be used to predict the trend of stock markets and generate buy and sell signals.
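      The data preparation described above can be sketched as follows (an illustrative pipeline under assumed parameters, not the thesis's exact feature set or network): past closing prices are augmented with a simple technical indicator and cut into fixed-length windows, each labelled with the direction of the next move.

          import numpy as np

          def make_windows(closes, window=20, sma_period=10):
              """Build (features, labels) for trend prediction from a 1-D array of closing prices.
              Features per step: close price and its simple moving average (one illustrative indicator);
              label: 1 if the next close is higher than the current one, else 0."""
              closes = np.asarray(closes, dtype=float)
              sma = np.convolve(closes, np.ones(sma_period) / sma_period, mode="valid")
              closes = closes[sma_period - 1:]                  # align prices with the SMA
              X, y = [], []
              for t in range(window, len(closes) - 1):
                  feats = np.stack([closes[t - window:t], sma[t - window:t]], axis=1)
                  X.append(feats)
                  y.append(int(closes[t + 1] > closes[t]))
              return np.array(X), np.array(y)

          X, y = make_windows(np.cumsum(np.random.randn(300)) + 100)
          print(X.shape, y.shape)        # (270, 20, 2) (270,) -- windows ready for an LSTM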
    • Using genetic algorithms for the single allocation hub location problem

      Naeem, Mohammad; Department of Computer Science (Brock University, 2010-10-25)
      The hub location problem is an NP-hard problem that frequently arises in the design of transportation and distribution systems, postal delivery networks, and airline passenger flow. This work focuses on the Single Allocation Hub Location Problem (SAHLP). Genetic Algorithms (GAs) for the capacitated and uncapacitated variants of the SAHLP, based on new chromosome representations and crossover operators, are explored. The GAs are tested on two well-known sets of real-world problems with up to 200 nodes. The obtained results are very promising. For most of the test problems the GAs obtain improved or best-known solutions and the computational time remains low. The proposed GAs can easily be extended to other variants of location problems arising in network design planning in transportation systems.
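      A common baseline encoding for single-allocation problems, given here as an illustrative sketch (not the new representations proposed in the thesis), is an array that assigns each node to exactly one hub; the objective then sums collection, discounted inter-hub transfer, and distribution costs over all flows.

          import numpy as np

          def sahlp_cost(assign, dist, flow, alpha=0.75):
              """Cost of a single-allocation solution: node i is served by hub assign[i]
              (the hubs are exactly the distinct values appearing in assign); flow from i to j
              is routed i -> hub(i) -> hub(j) -> j with the inter-hub leg discounted by alpha.
              Illustrative uncapacitated variant of the objective."""
              cost = 0.0
              for i in range(len(assign)):
                  for j in range(len(assign)):
                      hi, hj = assign[i], assign[j]
                      cost += flow[i][j] * (dist[i][hi] + alpha * dist[hi][hj] + dist[hj][j])
              return cost

          # Tiny example: 4 nodes, hubs {0, 2}, nodes 1 and 3 allocated to their nearest hub.
          dist = np.array([[0, 2, 5, 6], [2, 0, 4, 5], [5, 4, 0, 2], [6, 5, 2, 0]], dtype=float)
          flow = np.ones((4, 4))
          print(sahlp_cost(assign=[0, 0, 2, 2], dist=dist, flow=flow))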
    • Win--*Foy: functional object-oriented programming language

      Foy, Grant; Department of Computer Science (Brock University, 2009-01-28)
      This thesis will introduce a new strongly typed programming language utilizing Self types, named Win--*Foy, along with a suitable user interface designed specifically to highlight language features. The need for such a programming language is based on deficiencies found in programming languages that support both Self types and subtyping. Subtyping is a concept that is taken for granted by most software engineers programming in object-oriented languages. Subtyping supports subsumption but it does not support the inheritance of binary methods. Binary methods contain an argument of type Self, the same type as the object itself, in a contravariant position, i.e. as a parameter. There are several arguments in favour of introducing Self types into a programming language [1]. This rationale led to the development of a relation that has become known as matching [4, 5]. The matching relation does not support subsumption; however, it does support the inheritance of binary methods. Two forms of matching have been proposed [1]. Specifically, these relations are known as higher-order matching and f-bound matching. Previous research on these relations indicates that the higher-order matching relation is both reflexive and transitive whereas f-bound matching is reflexive but not transitive [7]. The higher-order matching relation provides significant flexibility regarding inheritance of methods that utilize or return values of the same type. This flexibility, in certain situations, can restrict the programmer from defining specific classes and methods which are based on constant values [21]. For this reason, the type This is used as a second reference to the type of the object that cannot, contrary to Self, be specialized in subclasses. F-bound matching allows a programmer to define a function that will work for all types A', a subtype of an upper bound function of type A, with the result type being dependent on A'. The use of parametric polymorphism in f-bound matching provides a connection to subtyping in object-oriented languages. This thesis will contain two main sections. Firstly, significant details concerning deficiencies of the subtype relation and the need to introduce higher-order and f-bound matching relations into programming languages will be explored. Secondly, a new programming language named Win--*Foy, a functional object-oriented programming language, has been created, along with a suitable user interface, in order to facilitate experimentation by programmers regarding the matching relation. The construction of the programming language and the user interface will be explained in detail.
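      The notion of a binary method, central to the discussion above, can be illustrated in a mainstream language (a loose Python 3.11+ analogue using typing.Self, offered only as an illustration of "Self in a parameter position", not of Win--*Foy's type system or the matching relations themselves).

          from typing import Self

          class Point:
              def __init__(self, x: float, y: float) -> None:
                  self.x, self.y = x, y

              # A binary method: the parameter has type Self, the type of the receiver itself.
              def equals(self, other: Self) -> bool:
                  return self.x == other.x and self.y == other.y

          class ColorPoint(Point):
              def __init__(self, x: float, y: float, color: str) -> None:
                  super().__init__(x, y)
                  self.color = color

              # In the subclass, Self now means ColorPoint, so the parameter type specializes with
              # the class; since parameters are contravariant positions, this is exactly what breaks
              # ordinary subtyping and motivates the matching relations discussed above.
              def equals(self, other: Self) -> bool:
                  return super().equals(other) and self.color == other.color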