308-424 (Topics in Artificial Intelligence as taught by G. Dudek)
Reference: Textbook
Is artificial intelligence
about solving practical problems, or about core scientific questions?
If we are working on artificial intelligence, how concerned
do we have to be about the natural kind?
How can we decide that we have solved the problem?
What is Intelligence?
What is Artificial Intelligence?
When do you expect us to achieve artificial intelligence (already,
soon, in 10 years, never)?
Yes: it
deals with algorithms, efficiency, tractability, etc.
No: it
includes philosophy, cognitive science, and engineering
Maybe: we
don't know yet how to define the area or the techniques. Maybe
it's a field in its own right?
Who cares: the problems are exciting and important, isn't
this classification needless pedantry?
AI is truly interdisciplinary.
It relates to psychology,
neurophysiology, mathematics, control theory (EE), etc.
Why AI (in its broadest sense) is the best part of science
(a personal confession):
Understanding the mind is one of the oldest and most challenging
questions considered by modern science.
It allows you to see ideas come to fruition in a tangible and useful
way.
You have wide latitude to select a preferred mix of theory, construction,
data collection, and data analysis.
It has enormous potential practical impact.
Course contents:
We will survey selected
topics. This is not a comprehensive view of all of AI (it can't
be).
Key topics include:
PART 1
Knowledge representation: predicate calculus
Search: e.g. A*, alpha-beta search (a minimal sketch follows this list)
PART 2
Learning: a couple of flavors
PART 3
Perception: vision
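
To make the search topic above concrete, here is a minimal A* sketch
in Python. The grid world, its size, and the Manhattan heuristic are
illustrative assumptions, not part of the course material:

    import heapq
    from itertools import count

    def a_star(start, goal, neighbors, h):
        """Return a cheapest path from start to goal, or None.

        neighbors(n) yields (successor, step_cost) pairs; h(n) is an
        admissible heuristic estimate of the remaining cost to goal.
        """
        tie = count()  # tiebreaker so the heap never compares states
        frontier = [(h(start), next(tie), 0, start, [start])]
        best_g = {start: 0}
        while frontier:
            _, _, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if g > best_g[node]:
                continue  # stale queue entry; a cheaper route was found
            for nxt, cost in neighbors(node):
                g2 = g + cost
                if g2 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g2
                    heapq.heappush(
                        frontier, (g2 + h(nxt), next(tie), g2, nxt, path + [nxt]))
        return None

    # Example: shortest path on a 5x5 4-connected grid, Manhattan heuristic.
    def grid_neighbors(p):
        x, y = p
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= x + dx < 5 and 0 <= y + dy < 5:
                yield (x + dx, y + dy), 1

    print(a_star((0, 0), (4, 4), grid_neighbors,
                 lambda p: abs(4 - p[0]) + abs(4 - p[1])))

With an admissible heuristic, the first time the goal is popped off
the priority queue its path is optimal; that property is what
separates A* from plain best-first search.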
3 stereotypical components
of actual systems:
AI as a whole has fragmented: there
are many sub-areas with reduced interaction between them.
Perception, and vision in particular,
has become a distinct community.
Robotics (i.e. action) has also become largely separate.
By and large, deliberative reasoning has held on to the title
"(traditional) Artificial Intelligence".
Within each major branch,
sub-areas have developed.
Within reasoning, different approaches have developed their own
styles and even jargon.
E.g. neural networks, learning,
game playing, reasoning with uncertainty, randomized search.
Scheduling
Perception: Trivial (the input is a task description language).
Reasoning: Constraint Satisfaction, Stochastic Optimization, Linear
Programming, Genetic Algorithms (a backtracking sketch follows this example)
Action: Trivial
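
As a taste of the constraint-satisfaction style of reasoning named
above, here is a minimal chronological-backtracking sketch in Python.
The jobs, slots, and machine-sharing pairs are invented toy data:

    def backtrack(assignment, variables, domains, conflicts):
        """Plain chronological backtracking over a CSP."""
        if len(assignment) == len(variables):
            return assignment
        var = variables[len(assignment)]  # next unassigned variable
        for val in domains[var]:
            if not conflicts(var, val, assignment):
                result = backtrack({**assignment, var: val},
                                   variables, domains, conflicts)
                if result is not None:
                    return result
        return None  # dead end: caller tries the next value one level up

    # Toy instance: four jobs, three time slots; jobs that share a
    # machine (hypothetical data) cannot occupy the same slot.
    jobs = ["j1", "j2", "j3", "j4"]
    slots = {j: [1, 2, 3] for j in jobs}
    shared_machine = {("j1", "j2"), ("j2", "j3"), ("j1", "j4")}

    def clash(job, slot, assignment):
        return any(
            s == slot and ((job, other) in shared_machine
                           or (other, job) in shared_machine)
            for other, s in assignment.items())

    print(backtrack({}, jobs, slots, clash))

Real schedulers layer heuristics (variable ordering, constraint
propagation) on this skeleton, or abandon systematic search for the
stochastic methods listed above.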
Medical Diagnosis (e.g.
Pathfinder by Heckerman at Microsoft)
Perception: Symptoms, test results.
Reasoning: Bayes Network inference, Machine Learning, Monte Carlo
simulation (a Bayes-rule sketch follows this example)
Actions: Suggest tests, make diagnoses
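
A single application of Bayes' rule, far simpler than the full Bayes
network inference a system like Pathfinder performs, still shows the
shape of the reasoning step. The diseases, findings, and numbers
below are invented purely for illustration:

    def posterior(prior, likelihood, findings):
        """P(hypothesis | findings) by Bayes' rule over discrete hypotheses.

        Findings are treated as conditionally independent given the
        hypothesis (the naive-Bayes assumption; a real Bayes network
        encodes and exploits the actual dependence structure).
        """
        joint = {}
        for h, p in prior.items():
            for f in findings:
                p *= likelihood[h][f]
            joint[h] = p
        total = sum(joint.values())
        return {h: p / total for h, p in joint.items()}

    # Hypothetical numbers purely for illustration.
    prior = {"flu": 0.10, "cold": 0.30, "healthy": 0.60}
    likelihood = {
        "flu":     {"fever": 0.90, "cough": 0.80},
        "cold":    {"fever": 0.20, "cough": 0.70},
        "healthy": {"fever": 0.01, "cough": 0.05},
    }
    print(posterior(prior, likelihood, ["fever", "cough"]))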
There are big questions.....
Can we make something that is as
intelligent as a human?
Can we make something that is as intelligent as a bee?
Does intelligence depend on a model of the physical world?
Can we get something that is really evolutionary and self-improving
and autonomous and flexible...?
And little questions.....
Can we save this plant $20 million a year by improved pattern
recognition?
Can we save this bank $50 million a year by automatic fraud detection?
Can we start a new industry of handwriting recognition / software
agents?
Reasoning was once seen
as *the* AI problem.
Chess, and related games, were once considered pivotal to understanding
intelligence.
They are now seen as a sub-domain
of limited relevance to the bulk of AI research.
While playing chess is a "solved problem", understanding
how humans play chess (so well) is hardly solved at all.
Vision (almost all of it)
was once given to an MIT graduate student as a "summer project".
More recently, a major figure said
roughly: it is so hard that "if it were not for the human
existence proof, we would have given up a long time ago".