Posts Tagged ‘Machine-Learning’

Learning By Correcting Mistakes

While learning new things, a learning system may make mistakes. Like human beings, a learning system can correct itself by identifying the reasons for a failure, isolating them, explaining how a particular assumption caused the failure, and modifying its knowledge base accordingly. For example, while playing chess a learning system may make a wrong move and end up with a failure. The system then reasons about the causes of the failure and corrects its knowledge base, so that when it plays again it will not repeat the same mistake.

In his work Active Learning with Multiple Views, Ion Muslea used this technique to label data. He developed a technique known as Co-EMT, which is a combination of two techniques: Co-Testing and Co-EM. The Co-Testing method interacts with the user to label the data. If it makes a mistake in labeling, it learns from that mistake and improves. After learning, the system efficiently labels the unlabeled data extracted from a source. The labeled data constitutes what is called knowledge.
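The core idea behind Co-Testing can be sketched in a few lines: train one simple learner per view on the labeled data, then ask the user to label only the examples on which the views disagree (the contention points). The toy data, threshold learners, and function names below are hypothetical illustrations, not Muslea's actual implementation.

```python
# Minimal co-testing sketch: two "views" of each example each get their own
# trivial classifier; unlabeled examples where the views disagree are the
# contention points to send to the user for labeling.

def train_threshold(labeled, view):
    """Learn a trivial threshold rule on one feature (one 'view')."""
    pos = [x[view] for x, y in labeled if y == 1]
    neg = [x[view] for x, y in labeled if y == 0]
    return (min(pos) + max(neg)) / 2.0  # midpoint between the classes

def predict(threshold, value):
    return 1 if value >= threshold else 0

# Each labeled example is ((view0_feature, view1_feature), label).
labeled = [((1.0, 10.0), 0), ((5.0, 50.0), 1)]
unlabeled = [(2.0, 45.0), (4.0, 15.0), (4.5, 48.0)]

t0 = train_threshold(labeled, 0)
t1 = train_threshold(labeled, 1)

# Contention points: unlabeled examples on which the two views disagree.
contention = [x for x in unlabeled
              if predict(t0, x[0]) != predict(t1, x[1])]
```

The examples in `contention` are exactly the ones the current hypotheses are least sure about, which is why querying the user on them corrects mistakes quickly.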


Discovery-based Learning: Clustering

Discovery is a restricted form of learning in which knowledge is acquired without any assistance from a teacher. Discovery learning is an inquiry-based learning method.

In discovery learning, the learner uses his own experience and prior knowledge to discover the truths that are to be learned. The learner constructs his own knowledge by experimenting with a domain and inferring rules from the results of these experiments. In addition to domain information, the learner needs some support in choosing and interpreting the information to build his knowledge base.

A cluster is a collection of objects which are similar in some way. Clustering groups data items into similarity classes. The properties of these classes can then be used to understand problem characteristics or to find similar groups of data items. Clustering can be defined as the process of reducing a large set of unlabeled data to manageable piles consisting of similar items. The similarity measures depend on the assumptions and desired usage one brings to the data.

Clustering begins with feature extraction on the data items, measuring the values of the chosen feature set. The clustering model then selects two sets of data items, compares them, and outputs a similarity measure between them. Clustering algorithms that use particular similarity measures as subroutines are employed to produce the clusters.

Clustering algorithms are generally classified as exclusive clustering, overlapping clustering, hierarchical clustering, and probabilistic clustering. The selection of a clustering algorithm depends on various criteria, such as time and space complexity. The results are checked to see whether they meet the standard; otherwise some or all of the above steps have to be repeated.
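The steps above (feature values in, similarity comparison, cluster assignment, repeat until acceptable) can be illustrated with k-means, a classic exclusive-clustering algorithm. This is a minimal sketch on one-dimensional points using only the standard library; the data and the choice of k are illustrative.

```python
# Minimal exclusive clustering: k-means on 1-D points.
def kmeans(points, k, iters=20):
    centers = points[:k]  # naive initialization: first k points as centers
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster
        # (absolute distance is the similarity measure here).
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Update step: move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters, centers

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.9]
clusters, centers = kmeans(data, k=2)
# clusters separates the low values from the high values
```

Each point belongs to exactly one cluster, which is what makes this "exclusive" clustering; overlapping and probabilistic variants instead allow shared or weighted membership.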

Some of the applications of clustering are data compression, hypothesis generation, and hypothesis testing. A conceptual clustering system accepts a set of object descriptions in the form of events, observations, and facts, and then produces a classification scheme over the observations.

COBWEB is an incremental conceptual clustering system. It incrementally adds objects into a classification tree. The attractive feature of incremental systems is that the knowledge is updated with each new observation. In the COBWEB system, learning is incremental, and the knowledge it learns, in the form of classification trees, increases its inference abilities.


Chunking

What is chunking?

Chunking is similar to learning with macro-operators. Generally, it is used by problem-solving systems that make use of production systems.

A production system consists of a set of rules in if-then form: given a particular situation, they specify which actions are to be performed. For example: if it is raining, then take an umbrella.

A production system also contains a knowledge base, a control strategy, and a rule applier. To solve a problem, the system compares the present situation with the left-hand sides of the rules. If there is a match, the system performs the actions described in the right-hand side of the corresponding rule.
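The match-and-fire cycle just described is easy to show concretely. Below is a toy production system in which rules are (condition, action) pairs and working memory is a set of facts; the facts and rules are invented for illustration.

```python
# A toy production system: each rule's left-hand side is a set of facts that
# must all hold; working memory is the current situation.
rules = [
    ({"raining"}, "take umbrella"),
    ({"raining", "windy"}, "wear raincoat"),
    ({"sunny"}, "wear sunglasses"),
]

working_memory = {"raining", "windy"}

# Match step: a rule fires when its condition is a subset of working memory.
actions = [action for condition, action in rules
           if condition <= working_memory]
```

Here both rain rules match the current situation, so both fire; a real control strategy would additionally decide the order in which matching rules are applied.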

Problem solvers solve problems by applying the rules. Some of these rules may be more useful than others, and their results are stored as chunks. Chunking can be used to learn general search-control knowledge. Several chunks may encode a single macro-operator, and one chunk may participate in a number of macro sequences. Chunks learned at the beginning of problem solving may be used in later stages. The system keeps each chunk to use in solving other problems.

Soar is a general cognitive architecture for developing intelligent systems. Soar requires knowledge to solve various problems, and it acquires that knowledge using a chunking mechanism. The system learns reflexively when impasses have been resolved. An impasse arises when the system does not have sufficient knowledge to proceed. Consequently, Soar chooses a new problem space (a set of states and the operators that manipulate those states) in a bid to resolve the impasse. While resolving the impasse, the individual steps of the task plan are grouped into larger steps known as chunks. Chunks decrease the problem-space search and so increase the efficiency of performing the task.

In Soar, knowledge is stored in long-term memory. Soar uses the chunking mechanism to create productions, which are stored in long-term memory. A chunk is simply a large production that does the work of an entire sequence of smaller ones. Each production has a set of conditions or patterns, matched against working memory (which holds the current goals, problem spaces, states, and operators), and a set of actions to perform when the production fires. Chunks are generalized before being stored. When the same impasse occurs again, the chunks so collected can be used to resolve it.
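The essence of the mechanism, stripped of Soar's architecture, is that a resolved impasse is cached as a single new production. The sketch below is an invented miniature of that idea, not Soar itself: the "search" is a trivial counting loop, and the chunk store is a plain dictionary.

```python
# Chunking sketch: the step sequence that resolved an impasse is compressed
# into one new "production" (a chunk) so the same situation is handled in a
# single step next time.
chunks = {}  # maps a situation (state, goal) to the steps that resolved it

def solve_by_search(state, goal):
    """Stand-in for slow subgoal search: walk toward the goal one step at a time."""
    steps = []
    while state < goal:
        steps.append("increment")
        state += 1
    return steps

def solve(state, goal):
    key = (state, goal)
    if key in chunks:               # a chunk fires: no search needed
        return chunks[key], True
    steps = solve_by_search(state, goal)
    chunks[key] = steps             # learn a chunk from the resolved impasse
    return steps, False

plan1, from_chunk1 = solve(0, 3)    # first time: impasse, search, learn a chunk
plan2, from_chunk2 = solve(0, 3)    # second time: the stored chunk resolves it
```

The first call pays the full search cost; the second retrieves the whole plan in one lookup, which is exactly the efficiency gain chunking provides (real chunks are generalized before storage, which this sketch omits).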


Learning with macro-operators

What is Learning with macro-operators?

Sequences of actions that can be treated as a whole are called macro-operators. Once a problem is solved, the learning component takes the computed plan and stores it as a macro-operator. Its preconditions are the initial conditions of the problem just solved, and its postconditions correspond to the goal just achieved.

The problem solver efficiently uses the knowledge base gained from its previous experiences. By generalizing macro-operators, the problem solver can even solve different problems. Generalization is done by replacing all the constants in the macro-operators with variables. STRIPS, for example, is a planning algorithm that employed macro-operators in its learning phase. It builds a macro-operator, MACROP, that contains the preconditions, the postconditions, and the sequence of actions. The macro-operator can then be used in future operations.
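The constants-to-variables generalization step can be sketched directly. The blocks-world plan, predicate names, and `MACROP` structure below are a simplified illustration in the spirit of STRIPS, not its actual code.

```python
# Build a generalized macro-operator from a solved plan: every constant in
# the recorded plan is replaced by a variable (?v0, ?v1, ...) so the macro
# applies to different objects.
def generalize(terms, constants):
    """Replace each named constant with a fresh variable."""
    mapping = {c: "?v%d" % i for i, c in enumerate(constants)}
    return [tuple(mapping.get(t, t) for t in term) for term in terms]

# A concrete plan just computed for blocks A and B.
plan = [("unstack", "A", "B"), ("putdown", "A")]
macrop = {
    "preconditions": generalize([("on", "A", "B"), ("clear", "A")], ["A", "B"]),
    "actions": generalize(plan, ["A", "B"]),
    "postconditions": generalize([("ontable", "A"), ("clear", "B")], ["A", "B"]),
}
# macrop["actions"] now works for any pair of blocks, not just A and B
```

Binding `?v0` and `?v1` to new constants later instantiates the whole action sequence at once, which is what lets one solved problem transfer to different ones.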


Learning By Taking Advice

What is learning by taking advice?

This is a simple form of learning. Suppose a programmer writes a set of instructions to tell the computer what to do: the programmer is a teacher and the computer is a student. Once taught (i.e., programmed), the system is in a position to do new things.

The advice may come from many sources: human experts and the internet, to name a few. This type of learning requires more inference than rote learning. The knowledge must be transformed into an operational form before it is stored in the knowledge base. Moreover, the reliability of the source of the knowledge should be considered.

The system should ensure that the new knowledge does not conflict with the existing knowledge. FOO (First Operational Operationaliser), for example, is a learning system used to learn the game of Hearts. It converts advice, given in the form of principles, problems, and methods, into effective executable (LISP) procedures (i.e., knowledge). This knowledge is then ready to use.
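The flavor of operationalization can be shown with a tiny sketch: a declarative principle is turned into a callable procedure over a concrete hand. The rule text, card representation, and helper names below are invented for illustration and are not FOO's actual mechanism (which produced LISP procedures through a series of transformation steps).

```python
# Operationalizing advice: turn the high-level principle "avoid taking
# points" into an executable card-choosing procedure for Hearts.
def operationalize(principle):
    """Map a declarative principle to a callable procedure."""
    if principle == "avoid taking points":
        # In Hearts, hearts cards carry points, so prefer a low non-heart.
        def choose(hand):
            safe = [c for c in hand if c[1] != "hearts"]
            candidates = safe if safe else hand  # no safe card: play lowest
            return min(candidates, key=lambda c: c[0])
        return choose
    raise ValueError("no operationalization known for: " + principle)

play = operationalize("avoid taking points")
card = play([(10, "hearts"), (2, "spades"), (7, "clubs")])
```

The declarative advice says nothing about ranks or suits; the operational form makes those concrete commitments, which is exactly the inference step that distinguishes advice taking from rote learning.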
