Posts Tagged ‘Artificial Intelligence’
This article gives an overview of machine learning. It is an introductory article and a starting point for learning about the various types of machine learning algorithms.
What is Machine Learning?
To solve problems, computers require intelligence, and learning is central to intelligence. Since intelligence requires knowledge, computers must be able to acquire knowledge. Machine learning serves this purpose.
Machine learning refers to a system capable of acquiring and integrating knowledge automatically. The capability of a system to learn from experience, training, analytical observation, and other means results in a system that can continuously self-improve and thereby offer greater efficiency and effectiveness.
A machine learning system usually starts with some knowledge and a corresponding knowledge organization so that it can interpret, analyze, and test the knowledge acquired.
The figure shown above is a typical learning system model. It consists of the following components:
1. Learning element
2. Knowledge base
3. Performance element
4. Feedback element
5. Standard system
1. Learning element
It receives and processes input obtained from a person (i.e. a teacher), from reference material such as magazines and journals, or from the environment at large.
2. Knowledge base
This is somewhat similar to a database. Initially it may contain some basic knowledge. Thereafter it receives new knowledge, which may be added as it is or may replace existing knowledge.
3. Performance element
It uses the updated knowledge base to perform tasks or solve problems and produces the corresponding output.
4. Feedback element
It receives two inputs: one from the learning element and one from the standard (or idealized) system. It identifies the differences between the two inputs, and the resulting feedback is used to determine what should be done in order to produce the correct output.
5. Standard system
It is a trained person or a computer program that is able to produce the correct output. To check whether the machine learning system has learned well, the same input is given to the standard system. The outputs of the standard system and of the performance element are given as inputs to the feedback element for comparison. The standard system is also called the idealized system.
The sequence of operations described above may be repeated until the system reaches the desired level of performance.
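The interaction between these components can be sketched in code. The following is a minimal, illustrative Python sketch (none of the names come from a real library): the standard system knows the correct answer, the performance element uses the current knowledge base, and the feedback element's error signal drives the learning element until the system converges.

```python
# A minimal sketch of the learning system model described above.
# All class and function names are illustrative, not from any library.

def standard_system(x):
    """Standard (idealized) system: always produces the correct output (here, 2*x)."""
    return 2 * x

class LearningSystem:
    def __init__(self):
        # Knowledge base: some initial (here, guessed) knowledge.
        self.weight = 0.0

    def perform(self, x):
        """Performance element: use the current knowledge to produce output."""
        return self.weight * x

    def feedback(self, output, target):
        """Feedback element: compare our output with the standard system's."""
        return target - output

    def learn(self, x, error, rate=0.1):
        """Learning element: update the knowledge base using the feedback."""
        self.weight += rate * error * x

system = LearningSystem()
examples = [1.0, 2.0, 3.0, 1.5]
for _ in range(50):  # repeat until the desired level of performance
    for x in examples:
        out = system.perform(x)
        target = standard_system(x)
        err = system.feedback(out, target)
        system.learn(x, err)

print(round(system.weight, 3))  # → 2.0, matching the standard system
```

The loop mirrors the repeated sequence of operations described above: perform, compare against the standard system, feed the difference back, and update the knowledge base.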
There are several factors affecting the performance. They are:
• Types of training provided
• The form and extent of any initial background knowledge
• The type of feedback provided
• The learning algorithms used.
Training is the process of making the system able to learn. It may consist of randomly selected examples that include a variety of facts and details including irrelevant data. The learning techniques can be characterized as a search through a space of possible hypotheses or solutions. Background knowledge can be used to make learning more efficient by reducing the search space. The feedback may be a simple yes or no type of evaluation or it may contain useful information describing why a particular action was good or bad. If the feedback is always reliable and carries useful information, the learning process will be faster and the resultant knowledge will be correct.
The success of a machine learning system also depends on the algorithms. These algorithms control the search to find and build the knowledge structures, and they should extract useful information from the training examples. There are several machine learning techniques available; I have explored some of the important ones.
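The idea of learning as a search through a space of possible hypotheses, narrowed by background knowledge and guided by yes/no feedback, can be made concrete with a toy example of my own (not from any particular algorithm in the literature): learning a threshold rule from labelled examples.

```python
# A toy illustration of learning as search through a hypothesis space,
# guided by simple yes/no feedback from labelled training examples.

# Training data: (value, label) pairs, e.g. "is this temperature hot?"
examples = [(10, False), (18, False), (25, True), (30, True)]

# Hypothesis space: rules of the form "hot if value >= t" for integer t.
# Background knowledge shrinks the search: plausible thresholds lie in 0..40.
candidates = range(0, 41)

def consistent(t):
    """Yes/no feedback: does hypothesis t agree with every training example?"""
    return all((value >= t) == label for value, label in examples)

# Search the (reduced) space for hypotheses consistent with all the feedback.
solutions = [t for t in candidates if consistent(t)]
print(solutions)  # → [19, 20, 21, 22, 23, 24, 25]
```

Without the background knowledge restricting `candidates`, the search space would be unbounded; with richer feedback (e.g. why an answer was wrong), the search could be pruned further, which is exactly the point made above.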
What is Artificial Intelligence?
Artificial Intelligence (AI) is the study and creation of computer systems that can perceive, reason and act. The primary aim of AI is to produce intelligent machines. The intelligence should be exhibited by thinking, making decisions, solving problems, and, most importantly, by learning. AI is an interdisciplinary field that requires knowledge of computer science, linguistics, psychology, biology, philosophy and so on for serious research.
AI can also be defined as the area of computer science that deals with the ways in which computers can be made to perform cognitive functions ascribed to humans. But this definition does not say what functions are performed, to what degree they are performed, or how these functions are carried out.
AI draws heavily on the following domains of study:
- Computer Science
- Cognitive Science
- Natural Sciences
Strong Artificial Intelligence
It deals with the creation of real intelligence artificially. Strong AI holds that machines can be made sentient or self-aware. There are two types of strong AI: human-like AI, in which the computer program thinks and reasons at the level of a human being, and non-human-like AI, in which the computer program develops a non-human way of thinking and reasoning.
Weak Artificial Intelligence
Weak AI holds that creating human-level intelligence in machines is not possible, but that AI techniques can still be developed to solve many real-life problems. That is, it is the study of mental models implemented on a computer.
AI and Nature
Nowadays, AI techniques developed with inspiration from nature are becoming popular, and a new area of research known as Nature-Inspired Computing is emerging. Biologically inspired AI approaches such as neural networks and genetic algorithms are already in place.
It is true that AI has not yet achieved its ultimate goal. AI systems still cannot match even a three-year-old child on many counts: the ability to recognize and remember different objects, adapt to new situations, understand and generate human languages, and so on. The main problem is that we still do not understand how the human mind works, how we learn new things, and especially how we learn languages and reproduce them properly.
There are many AI applications that we witness: robotics, machine translators, chatbots, and voice recognizers, to name a few. AI techniques are used to solve many real-life problems. Some robots help to find land mines or to search for humans trapped in rubble after natural calamities.
Future of AI
AI is the best field for dreamers to play around in. It evolved from the thought that making a human-like machine is possible. Though many conclude that this is not possible, there is still a lot of research going on in this field to attain that final objective. There are inherent advantages to using computers: they do not get tired or lose their temper, and they are becoming faster and faster. Only time will tell what the future of AI will be: whether or not it will attain human-level, or above-human-level, intelligence.
The history of Artificial Intelligence began when Warren McCulloch and Walter Pitts proposed a model of artificial neurons in 1943. The significance of this work is that each neuron is characterised as being “on” or “off”, with switching to “on” occurring when a sufficient number of neighbouring neurons stimulate it. McCulloch and Pitts showed that any computable function could be computed by a network of connected neurons. In 1949, Donald Hebb modified the connection strength between neurons using a simple updating rule, known even today as Hebbian learning. Marvin Minsky and Dean Edmonds built the first neural network computer, called SNARC, in 1951. This computer used 3000 vacuum tubes and a network of 40 neurons. Alan Turing introduced the famous Turing test and articulated early ideas of machine learning, genetic algorithms, and reinforcement learning.
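Hebb's updating rule is simple enough to sketch directly. The version below is a minimal illustration (the learning rate and toy activity values are my own, not Hebb's): the strength of a connection grows in proportion to how often the two neurons are active together.

```python
# A minimal sketch of Hebbian learning: "neurons that fire together wire
# together". The learning rate and activity values are illustrative only.

def hebbian_update(w, pre, post, rate=0.1):
    """Strengthen a connection in proportion to correlated activity."""
    return w + rate * pre * post

w = 0.0
# Present a sequence of on/off activity pairs, in the spirit of
# McCulloch-Pitts neurons that are either "on" (1) or "off" (0).
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]:
    w = hebbian_update(w, pre, post)

print(round(w, 2))  # → 0.3: only the three co-active pairs strengthened w
```

Note that the pairs where only one neuron fires contribute nothing, which is the essence of the rule.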
Artificial Intelligence was formally born in a workshop conducted at Dartmouth College in 1956, where McCarthy coined the term Artificial Intelligence. It turned out to be the greatest milestone in the history of artificial intelligence. Newell, Shaw and Simon developed a reasoning program called the Logic Theorist. It was meant for automatic theorem proving, and its development led to the Information Processing Language (IPL), the first list-processing language. Chomsky’s theory of generative grammar influenced Natural Language Processing. Rosenblatt invented the perceptron in 1958. John McCarthy developed LISP, an AI programming language.
Newell and Simon wrote the General Problem Solver (GPS) in IPL. It imitated the way humans solve problems. In 1976, they formulated the physical symbol system hypothesis and claimed that a physical symbol system is sufficient for general intelligent action. Herbert Gelernter developed the Geometry Theorem Prover. A. L. Samuel developed a checkers program between 1961 and 1965. J. A. Robinson introduced an inference method, resolution, in 1965. In the same period DENDRAL, the first knowledge-based expert system, was developed at Stanford University by Joshua Lederberg, Edward Feigenbaum and Carl Djerassi. DENDRAL inferred molecular structure from the information provided by a mass spectrometer. Feigenbaum, Buchanan and Edward Shortliffe developed an expert system called MYCIN to diagnose blood infections. MYCIN used 450 rules acquired from information given by experts, and it incorporated certainty factors, a calculus of uncertainty.