Parameter Estimation Of Linear Hinges Models Using Genetic Algorithms

Pages: 19 (4,674 words) | Published: 7 May 2012

A. Gascón
Abstract — This article describes the application of a Genetic Algorithm (GA) to adjust the parameters of a particular automatic learning model: the Linear Hinges Model. Specifically, the algorithm is fed with a given solution for the parameters of the model and aims to improve this solution with respect to both accuracy and simplicity. The results and conclusions show that a real preliminary contribution has been made to the adjustment of this particular model, and to the adjustment of parametric models in general; however, several open questions remain. A comprehensive discussion of the results and future work is provided.

Index Terms — Automatic learning, genetic algorithms, statistical methods, regression problems, machine learning.

I. INTRODUCTION

The automatic learning problem aims at extracting information about a system (which can be as complex as necessary) and revealing the associations between the inputs and the outputs of this system without human intervention. Among the large body of documentation available on this general problem, [1] and [2] are useful for a comprehensive overview, and [3] goes deeper into the particular problems that will be studied in this article. Specifically, [3] and [4] fully explain the Linear Hinges Model (LHM): a one-dimensional parametric regression model that combines the noise-filtering feature of some nonparametric models with the flexibility and computational efficiency of piecewise polynomial models.

One-dimensional regression problems can be easily understood by thinking of curve fitting from scatterplot data. Given a set of $N$ points $(x_i, y_i)$, $i = 1, \dots, N$, which have been generated according to $y_i = f(x_i) + \varepsilon_i$, where $f$ is the true underlying function and the $\varepsilon_i$ are the 'true' errors, the objective is to find a sufficiently simple model $\hat{f}$ of the true function such that $\hat{f}$ results in a small enough error, measured by the overall Mean Square Error (MSE):

$$\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} \left( y_i - \hat{f}(x_i) \right)^2$$

Aiming at this purpose, the LHM can be expressed as a set of K knots and K+1 straight-line segments that connect the knots, as shown in Figure 1.

Figure 1 - LHM definition
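As a minimal illustration of this setup (not the authors' code), the sketch below evaluates a piecewise-linear model defined by its knots and measures the overall MSE against noisy samples of a hinge-shaped function; `lhm_predict` and `mse` are hypothetical helper names introduced here for clarity.

```python
import numpy as np

def lhm_predict(x, knots_x, knots_y):
    """Evaluate a piecewise-linear model at points x.

    np.interp connects the knots with straight-line segments,
    which is the shape an LHM takes between its hinges."""
    return np.interp(x, knots_x, knots_y)

def mse(y_true, y_pred):
    """Overall Mean Square Error: (1/N) * sum((y_i - y_hat_i)^2)."""
    return float(np.mean((y_true - y_pred) ** 2))

# Toy data: noisy samples of a hinge-shaped true function f(x) = |x - 0.5|
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.abs(x - 0.5) + rng.normal(0.0, 0.01, x.size)

# A 3-knot model matching the underlying hinge
knots_x = np.array([0.0, 0.5, 1.0])
knots_y = np.array([0.5, 0.0, 0.5])
print("MSE:", mse(y, lhm_predict(x, knots_x, knots_y)))
```

Because the model matches the true function, the remaining MSE is essentially the noise variance; adding more knots than necessary would reduce training MSE further only at the cost of generalization, which is exactly the accuracy/simplicity trade-off the GA is asked to manage.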

A. Gascón (alberto.gascon@iit.upcomillas.es) is with the Instituto de Investigación Tecnológica (IIT), Escuela Técnica Superior de Ingenieros Industriales (ICAI), Universidad Pontificia Comillas, Madrid, Spain.

This model shares a common building strategy with decision trees, since it is expanded using a top-down growing procedure and then pruned, together with cross-validation, to select the appropriate number of hinges. This learning method can be described as a particular implementation of the 'backfitting' algorithm [5]. A hinge can only be set at an x position where the scatterplot also has a dot, and it is easily observable that the minimum error generally occurs when there is a hinge at every point of the scatterplot (with the consequent loss of generalization ability).

Due to the way it is implemented, the current algorithm, although proven very convenient for solving a large number of problems, tends to be biased towards particular positions of the hinges for each complexity, leading to a lack of global search and a loss of accuracy on particular input data. The GA proposed in this paper aims at correcting this aspect by introducing an overall search across different model complexities and different positions of the hinges.

Using hybrid AI techniques to solve this kind of complex problem and to adjust other models' parameters is not a new concept. In recent years, there has been an exponential increase in the use of these procedures, especially in the neuro-fuzzy field [6]. GAs themselves have regularly been used to adjust neural network parameters [7]. However, a lack of rigorous approaches has been observed to the dilemma of automatically adjusting both the number and the values of the best parameters of a regression...
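The overall search described above can be sketched with a deliberately simple GA. Everything here is an assumption for illustration, not the paper's actual operators or fitness: a chromosome is a binary mask over the data points (bit i = 1 places a hinge at (x_i, y_i), respecting the rule that hinges sit on scatterplot dots), and fitness trades accuracy (MSE of the resulting piecewise-linear fit) against complexity via an assumed per-hinge penalty `LAM`.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scatterplot: noisy samples of f(x) = |x - 0.5|
x = np.linspace(0.0, 1.0, 40)
y = np.abs(x - 0.5) + rng.normal(0.0, 0.02, x.size)

LAM = 0.001  # assumed complexity penalty per hinge

def fitness(mask):
    """MSE of the piecewise-linear fit through the selected points,
    plus a penalty proportional to the number of hinges (lower is better)."""
    if mask.sum() < 2:                 # need at least two knots for a line
        return np.inf
    kx, ky = x[mask], y[mask]          # hinges sit on scatterplot dots
    err = np.mean((y - np.interp(x, kx, ky)) ** 2)
    return err + LAM * mask.sum()

def tournament(pop, fits, k=3):
    """Pick the best of k randomly chosen individuals."""
    idx = rng.choice(len(pop), size=k)
    return pop[min(idx, key=lambda i: fits[i])].copy()

# Initial population: random masks with ~30% of bits set
pop = [rng.random(x.size) < 0.3 for _ in range(30)]
for gen in range(60):
    fits = [fitness(m) for m in pop]
    nxt = []
    while len(nxt) < len(pop):
        a, b = tournament(pop, fits), tournament(pop, fits)
        cut = rng.integers(1, x.size)          # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child = child ^ (rng.random(x.size) < 0.02)  # bit-flip mutation
        nxt.append(child)
    pop = nxt

best = min(pop, key=fitness)
print("hinges:", int(best.sum()), "fitness:", fitness(best))
```

Because a mask simultaneously encodes how many hinges there are and where they sit, the GA searches over model complexity and hinge positions at once, which is the global search the backfitting-based builder lacks.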