Sunday, March 31, 2019

Genetic Algorithm (GA) as Optimization Technique

Preference learning (or preference elicitation) is a critical task in numerous scientific fields, such as decision theory [1,2], economics [3,4], logistics and databases [5]. When modeling user preferences, researchers often cast the preferences as the solution to an optimization problem that maximizes some utility function. In reality, however, we are not given a utility function a-priori but only have access to a finite set of historical user choice data. Therefore, the passive preference learning problem, that is, how to learn a user's preferences from her historical choice data, has gained a lot of attention in recent years.

When dealing with preference learning, it is often assumed that the user's preference over the values of each attribute is independent of the values of the other attributes. However, this assumption is not sound in many real-world scenarios. For example, as shown in Fig. 1 for the clothes-shopping problem, one might choose the color of her shoes depending on the color of the dress she will buy, i.e. her preference over shoe color is conditioned on the available dresses. More formally, we say the preferences induced by the user's behavior are intrinsically related to conditional preferential independence, a key notion in multi-attribute decision theory [20].

Conditional preference networks (CP-nets) have been proposed for such problems [4] and have received a great deal of attention due to their compact and natural representation of ordinal preferences in multi-attribute domains [8-12, 17-19, 22]. Briefly, a CP-net (Fig. 1) is a digraph whose nodes correspond to the alternatives' attributes and whose edges correspond to the dependencies between nodes; each node is annotated with a conditional preference table (CPT) that describes the preferences over that particular attribute (Chapter 3). It is sometimes claimed that CP-nets are easy to elicit [16].
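To make the CP-net structure concrete, here is a minimal Python sketch of a CP-net as a digraph whose nodes carry conditional preference tables keyed by the parents' values. All class and attribute names are illustrative, not taken from the paper; the example encodes the clothes-shopping scenario, where shoe color depends on dress color.

```python
# Minimal illustrative CP-net: each node records its parent attributes and a
# CPT mapping each assignment of the parents to a preference order over the
# node's own values. Names and API are illustrative only.

class CPNet:
    def __init__(self):
        self.parents = {}  # attribute -> tuple of parent attributes
        self.cpt = {}      # attribute -> {parent-values tuple: value order}

    def add_node(self, attr, parents=()):
        self.parents[attr] = tuple(parents)
        self.cpt[attr] = {}

    def set_preference(self, attr, parent_values, order):
        # order is e.g. ('black', 'white'): black preferred to white
        self.cpt[attr][tuple(parent_values)] = tuple(order)

    def preferred(self, attr, outcome):
        # preference order over attr's values, given the parents' values
        key = tuple(outcome[p] for p in self.parents[attr])
        return self.cpt[attr][key]

# Clothes-shopping example: shoe color is conditioned on dress color.
net = CPNet()
net.add_node('dress')
net.add_node('shoes', parents=('dress',))
net.set_preference('dress', (), ('black', 'white'))
net.set_preference('shoes', ('black',), ('black', 'white'))
net.set_preference('shoes', ('white',), ('white', 'black'))

print(net.preferred('shoes', {'dress': 'black'}))  # ('black', 'white')
```

The CPT lookup mirrors the semantics described above: the preference over an attribute is read off the row matching the current values of its parents.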
That is, we first explain CP-nets to the user and then ask her to write down the CP-net that best describes her decision-making process [18,30]. However, it has been shown that when actually facing the choices, people often act differently from what they previously described as their preferences [39,40,97,103]. As an example, Kamishima and Akaho [53] point out that when customers were asked to rank ten sushi items and then later to assign rating scores to the same items, in 68% of the cases the order implied by the ratings did not agree with the ranking elicited directly only minutes before. Based on these experiments, several CP-net learning algorithms have been developed that rely on the user's choice data. Some algorithms work on historical choice data [23,64], a process known as passive learning. Others actively offer solutions in an attempt to learn the user's preferences as she chooses [23,29,47,58]. The work of this paper falls into the category of passive learning, in which the learner uses the recorded user choices and then fits a CP-net model to the observed data. Formally, we collect the set of samples $S = \{o_i \succ o_i'\}$, where $o_i \succ o_i'$ means that the user strictly prefers outcome $o_i$ over outcome $o_i'$, and then find a model $N$ that best describes $S$. Such a set of samples may be gathered, for instance, by observing online users' choices.

Table 1 shows the number of binary CP-nets with up to 7 nodes, i.e. each outcome consists of up to 7 attributes (A250110). From these values it is evident that, even for a small number of attributes, finding the best CP-net is not a trivial task due to the huge size of the search space.
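The sample set $S$ can be stored simply as ordered pairs of outcomes. A minimal Python sketch (illustrative, not the paper's code) of storing $S$ this way and testing whether its transitive closure is free of cycles, i.e. whether no outcome ends up preferred to itself:

```python
# Sketch: a choice data-set S is a list of (better, worse) outcome pairs.
# S is inconsistent when the transitive closure of the strict-preference
# relation contains a cycle. We detect cycles with a depth-first search.

def is_consistent(samples):
    graph = {}
    for better, worse in samples:
        graph.setdefault(better, set()).add(worse)
        graph.setdefault(worse, set())

    WHITE, GREY, BLACK = 0, 1, 2  # unvisited / on current path / done
    colour = {v: WHITE for v in graph}

    def dfs(v):
        colour[v] = GREY
        for w in graph[v]:
            if colour[w] == GREY or (colour[w] == WHITE and dfs(w)):
                return True  # back edge: cycle found
        colour[v] = BLACK
        return False

    return not any(colour[v] == WHITE and dfs(v) for v in graph)

# Outcomes over two binary attributes, written as strings for brevity.
cyclic = [('ab', 'aB'), ('aB', 'Ab'), ('Ab', 'ab')]  # ab ≻ aB ≻ Ab ≻ ab
print(is_consistent(cyclic))                          # False
print(is_consistent([('ab', 'aB'), ('aB', 'Ab')]))    # True
```

Such a check is useful precisely because, as discussed next, real choice data are rarely clean, so a learner should expect `is_consistent` to fail on raw $S$.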
To the best of our knowledge, no existing approach performs well on problems with more than 7 attributes; hence they are not practical for real-world problems, in which the alternatives usually consist of tens or even hundreds of attributes.

Another problem that arises when learning preferences from human subjects is the possibility of noise, i.e. comparison data that are ultimately inconsistent in the choice data-set $S$. While noise results from errors in observing the user's behavior, inconsistency is the result of randomness in the user's behavior; that is, the transitive closure of the data-set may contain a cycle in which some outcome $o$ is seen to be preferred to itself. The objective of most CP-net learning techniques is to learn (i.e. rebuild) a CP-net that can describe the whole data-set. However, since $S$ is not usually clean, there may be no CP-net that is consistent with every example in $S$. This fact motivated us to model the CP-net learning problem as an optimization problem, that is, to identify a model that maximizes some objective function, $f$, with respect to the choice data-set.

In this work, we utilize the power of the Genetic Algorithm (GA) as an optimization technique. The GA is an optimization algorithm inspired by the mechanisms of natural selection and natural genetics, which can work without any a-priori knowledge about the problem domain and has received growing interest for solving complex combinatorial optimization problems, especially for its scalability compared with deterministic algorithms [1]. In this work, we investigate the feasibility of applying the GA to solve the passive CP-net learning problem.
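The shape of such a GA can be sketched as follows. This is a generic skeleton, not the paper's actual encoding or operators: candidates are bit strings standing in for encoded CP-nets, and the fitness $f(N)$ would, in the real problem, be the agreement between model $N$ and the data-set $S$. Here a toy fitness (agreement with a hidden target string) is used purely so the sketch runs end to end.

```python
import random

# Generic GA skeleton with tournament selection, one-point crossover, and
# bit-flip mutation. In the CP-net setting, the genome would encode a CP-net
# and fitness would score agreement with the choice data-set S; here we use
# a stand-in fitness against a fixed target so the example is self-contained.

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(genome):
    # fraction of positions matching the target (stand-in objective f)
    return sum(g == t for g, t in zip(genome, TARGET)) / len(TARGET)

def tournament(pop, k=3):
    # pick the fittest of k random candidates
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for _ in range(50):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in pop]

best = max(pop, key=fitness)
print(fitness(best))  # best fitness found; typically near 1.0 here
```

The appeal for CP-net learning is that none of these operators need a-priori knowledge of the search space; only the fitness function has to understand CP-nets and the data-set.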
