vault backup: 2025-01-30 09:27:31
BIN	AI & Data Mining/.DS_Store (vendored; binary file not shown)
15	AI & Data Mining/Exam Revision/Acronyms.md (new executable file)

# 1R: (AV)CTARs

- (foreach) Attribute
- (foreach) Value
- Count (class)
- Top (most frequent class)
- Assign (rule to class)
- Error Rate (in rule)
- Smallest (error rate)
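The mnemonic above spells out the 1R procedure. A minimal Python sketch (the toy dataset and attribute names are illustrative, not from the notes):

```python
from collections import Counter

def one_r(rows, attributes, target):
    """1R: for each attribute build a one-level rule set, then keep the
    attribute whose rules make the fewest errors on the training data."""
    best = None
    for attr in attributes:                                  # (foreach) Attribute
        rules, errors = {}, 0
        for value in {r[attr] for r in rows}:                # (foreach) Value
            counts = Counter(r[target] for r in rows
                             if r[attr] == value)            # Count (class)
            top_class, top_n = counts.most_common(1)[0]      # Top (most frequent class)
            rules[value] = top_class                         # Assign (rule to class)
            errors += sum(counts.values()) - top_n           # Error Rate (in rule)
        if best is None or errors < best[1]:                 # Smallest (error rate)
            best = (attr, errors, rules)
    return best

rows = [
    {"outlook": "sunny", "windy": "false", "play": "no"},
    {"outlook": "sunny", "windy": "true",  "play": "no"},
    {"outlook": "rainy", "windy": "false", "play": "yes"},
    {"outlook": "rainy", "windy": "true",  "play": "no"},
]
attr, errs, rules = one_r(rows, ["outlook", "windy"], "play")
```

Ties (between classes, or between attributes with equal error) are broken arbitrarily, by first occurrence here.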

# Missing Values: Ma $\Delta$D CPS

- Malfunctioning Equipment
- Change in Design
- Collation of Datasets
- Data not collected for mining (Purpose)
- Missing Value is Significant
23	AI & Data Mining/Exam Revision/Classification.md (new executable file)

# 1R

### (AV)CTARs

Simple classification; a one-level tree. On a tie, make an arbitrary choice.

## Issue with Numerics

- Discretise:
    - List the values of the attribute
    - Sort ascending
    - Write the class under each value
    - Place breakpoints between changes in class
    - Assign each interval to its majority class
    - Enforce a minimum bucket size; if adjacent intervals have the same class, merge them.
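The discretisation steps above can be sketched in Python (a minimal illustration; names are my own, and the minimum-bucket-size enforcement step is omitted for brevity):

```python
from collections import Counter

def discretise(pairs):
    """1R-style discretisation: sort (value, class) pairs ascending,
    start a new interval wherever the class changes, label each interval
    with its majority class, then merge adjacent intervals that share a
    label. (Minimum bucket-size enforcement is omitted for brevity.)"""
    pairs = sorted(pairs)                              # sort ascending by value
    intervals = [[pairs[0]]]
    for prev, cur in zip(pairs, pairs[1:]):
        if cur[1] != prev[1]:
            intervals.append([cur])                    # breakpoint: class changed
        else:
            intervals[-1].append(cur)
    # label each interval as (lo, hi, majority class)
    labelled = [(iv[0][0], iv[-1][0],
                 Counter(c for _, c in iv).most_common(1)[0][0])
                for iv in intervals]
    # merge adjacent intervals assigned the same class
    merged = [labelled[0]]
    for lo, hi, label in labelled[1:]:
        if label == merged[-1][2]:
            merged[-1] = (merged[-1][0], hi, label)
        else:
            merged.append((lo, hi, label))
    return merged

temps = [(64, "yes"), (65, "no"), (68, "yes"), (69, "yes"),
         (70, "yes"), (71, "no"), (72, "no"), (75, "yes")]
ivs = discretise(temps)
```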

## Issue with Missing Values

- Assign "missing" as a value
- Treat it normally

## Issue with Overfitting

- Bucket enforcement
- Ensure the attribute tested is applicable

# PRISM
0	AI & Data Mining/Exam Revision/Clustering.md (executable file)
0	AI & Data Mining/Exam Revision/Decision Trees.md (executable file)
0	AI & Data Mining/Exam Revision/Evaluating Performance.md (executable file)
0	AI & Data Mining/Exam Revision/Instance-Based Learning.md (executable file)
0	AI & Data Mining/Exam Revision/Naive Bayes.md (executable file)
0	AI & Data Mining/Exam Revision/Workshops.md (executable file)
0	AI & Data Mining/Exercise Booklet.pdf (normal file → executable file)
0	AI & Data Mining/Week 1/Lecture 1 - Introduction to Data Mining.md (normal file → executable file)
0	AI & Data Mining/Week 1/Lecture 2 - Input and Output.md (normal file → executable file)
11	AI & Data Mining/Week 18/Week 18 - Tutorial.md (new file)

1. Cognitive biases such as confirmation bias, and a lack of self-awareness of one's own thoughts, can make introspection inaccurate. It is hard to be objective.
2. Since systems should become more knowledgeable over time, an AI could learn to correct past mistakes and no longer act like a human making errors. If a goal cannot be achieved, a system may attempt to improve its intelligence to obtain its goal, optimising its performance measure, in line with evolution.
3. The latter statement is true in most cases; external factors out of our control may violate it, but generally computers only do what programmers tell them to. However, this does not imply a computer cannot be intelligent: a programmer could tell a computer to learn and evolve.
4. Considering the previous point, this is a direct comparison, and the same philosophy should be maintained. An animal's intelligence may not be constrained by its genes, but rather by its experiences and environment.
5. Unless a law of physics is misunderstood, animals, humans and machines must all abide by the laws of physics. However, this does not define intelligence.
6. To what extent:
    1. Bar code scanners utilise perception
    2. Search engines recognise language
    3. Telephone menus recognise language and can "hear"
    4. Dynamic routing algorithms utilise machine learning
7. Following the principles defined by the Turing Test, AI should classify as both science and engineering. The concepts and programming are scientific, while the robotics and hardware are engineering.
161	AI & Data Mining/Week 18/Week 18 - What AI?????.md (new file)

Coursework due provisionally 28th Feb, 4pm.

# What is AI?

- Intelligence is important (Smarts > Sharts)
- AI is relevant to any intellectual task
- AI aims to understand and build intelligent entities

## Approaches to AI

Thinking like humans, acting like humans - meaning human-like, not perfect.
Alternatively, measure success against rationality rather than against the human result.
## Definitions of AI

Thinking like humans:
Machines with minds; the automation of activities we associate with human thinking, such as decision making and problem solving.

Thinking rationally:
The study of mental faculties through computational models, making it possible to perceive, reason, and act.

Acting like humans:
Creating machines that perform functions which require intelligence when performed by people; making computers do things at which, at the moment, people are better.

Acting rationally:
The study of the design of intelligent agents; AI is concerned with intelligent behaviour in artefacts.
### Acting Like a Human: The Turing Test

A test designed to provide a satisfactory operational definition of intelligence: the imitation game.

A computer passes the test if a human interrogator, posing written questions, cannot tell whether the written responses come from a human or a computer.

There is no direct physical interaction between the interrogator and the computer; this is deliberately avoided because physical simulation is unimportant to a test of intelligence.
#### The Total Turing Test

Includes additional apparatus:

- Video signal - allows the interrogator to test perceptual abilities
- Hatch - the interrogator can pass physical objects through
#### Acceptance Criteria

To pass the Turing Test, the computer would need:

- natural language processing to communicate successfully;
- knowledge representation to store what it knows or hears;
- automated reasoning to use the stored information to answer questions and draw new conclusions;
- machine learning to adapt to new circumstances and to detect and extrapolate patterns.

To pass the total Turing Test, the computer would also need:

- computer vision to perceive objects;
- robotics to manipulate objects and move about.

These six capabilities encapsulate most of AI.
#### Appraisal

- Anticipated all the major arguments against AI raised in the following 50 years
- Suggested the major components of AI: knowledge, reasoning, language understanding, learning
- Remains relevant 60 years later

However:

- AI researchers have devoted little effort to passing the Turing Test,
- believing it more important to study the underlying principles of intelligence than to duplicate an exemplar.
### Thinking Like a Human: Cognitive Modelling

- Introspection - catching our own thoughts as they happen
- Psychological experiments - observing a person in action
- Brain imaging - observing the brain in action

A sufficiently precise theory of mind can be expressed as a computer program. If the program's input-output behaviour matches a human's, that is evidence that some of the program's mechanisms are mirrored in humans.
#### Cognitive Modelling

- At what level of abstraction? Knowledge or circuits?
- How to validate?
    - Cognitive science - predicting and testing the behaviour of human subjects (top-down)
    - Cognitive neuroscience - direct identification from neurological data (bottom-up)

Both approaches are now distinct from AI. They share with AI the fact that the available theories do not explain anything resembling human-level general intelligence.
### Thinking Rationally: Laws of Thought

- Syllogisms - patterns for argument structures that always yield correct conclusions when given correct premises.

e.g. Socrates is a man; all men are mortal; therefore Socrates is mortal.

These "laws of thought" were supposed to govern the operation of the mind. Their study gave rise to the field of **logic**, and later to the idea of **mechanisation**.

#### Laws of Thought

There is a direct link between mathematics and philosophy, and AI. Logicians in the 19th century developed a precise notation for statements about objects in the world and their relations. By the 1960s, programs existed that could, in principle, solve any solvable problem described in logical notation. (However, such a program might loop forever if no solution exists, and tractability depends on the problem.)
#### Problems

Not all intelligent behaviour is mediated by logical deliberation.

- It is not easy to take informal knowledge and state it in logical notation, especially when the knowledge is uncertain.
- There can be a large gap between solving a problem in principle and solving it in practice: even a few hundred facts can exhaust the computational resources of any computer unless it is given some guidance about what to try first.

What is the purpose of thinking?

- Which thoughts *should* I have, out of all the thoughts I *could* have?
### Acting Rationally

Rational behaviour is doing the right thing: that which is expected to maximise goal achievement, given the available information. It doesn't necessarily involve thinking, but thinking should be in the service of rational action.

Aristotle:

- "Every art and every enquiry, and similarly every action and pursuit, is thought to aim at some good."
#### Rational Agents

An agent is an entity that can perceive and act.

All computer programs do something, but computer agents are expected to operate autonomously, perceive their environment, persist over time, adapt to change, and create and pursue goals.

Percept - the agent's perceptual inputs at any given instant.

Abstractly, an agent is a function from percept histories to actions. For any given class of environments and tasks, we seek the agent with the best performance.
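A minimal sketch of that abstraction in Python (the two-square vacuum world used here is a standard illustration, not from these notes):

```python
class Agent:
    """Abstractly, an agent maps a history of percepts to an action."""
    def __init__(self):
        self.percepts = []                        # percept history

    def act(self, percept):
        self.percepts.append(percept)
        return self.decide(tuple(self.percepts))  # history -> action

    def decide(self, history):
        raise NotImplementedError

class ReflexVacuum(Agent):
    """Two-square vacuum world: suck if the current square is dirty,
    otherwise move to the other square (only the latest percept is used)."""
    def decide(self, history):
        location, status = history[-1]
        if status == "dirty":
            return "suck"
        return "right" if location == "A" else "left"

agent = ReflexVacuum()
print(agent.act(("A", "dirty")))   # suck
print(agent.act(("A", "clean")))   # right
```

A simple reflex agent like this ignores its history; a learning agent would be a different `decide` over the whole percept sequence.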

#### Perfect <-> Limited Rationality

Perfect rationality always succeeds, but it is infeasible in complicated environments: the computational demands are too high.

Limited rationality means acting appropriately when there is not enough time to do all the computations one might wish: design the best program for the given machine resources.

Perfect rationality nevertheless remains a good starting point for theoretical analysis.
#### The Value Alignment Problem

A problem with perfect rationality is that it assumes a fully specified objective has been given to the machine. Artificially defined problems, such as chess, come with an objective. For real-world problems, such as self-driving cars, it becomes much harder to specify the objective completely and correctly.

The objective cannot simply be "reach the destination safely": given the risks of failure, the optimal strategy would then be to leave the car at home.

The values or objectives given to the machine must align with those of humans; the problem of achieving this is called the value alignment problem.
### Bad Behaviour

If a machine is intelligent enough to reason and act, it may attempt to increase its chances of winning by immoral means:

- Blackmail
- Bribery
- Grabbing additional computing resources for itself

These behaviours are rational, and logical for success, but immoral.
### Beneficial Machines

We do not want machines that are intelligent in the sense of pursuing *their* objectives; we want them to pursue *our* objectives. If we cannot transfer our objectives perfectly to the machine, we need the machine to know that it does not know the complete objective, and to have the incentive to act cautiously: ask for permission, learn about our preferences through observation, and defer to human control. We want agents that are provably beneficial to humans.
125	AI & Data Mining/Week 19/Timeline of History.md (new file)

**1943:**

- **Warren McCulloch and Walter Pitts**: Publish the first work generally recognised as AI, showing that any computable function could be computed by a network of connected neurons, and implementing the logical connectives (and, or, not) with simple net structures.

**1948:**

- **Alan Turing**: Writes the report "Intelligent Machinery", an early discussion of machine intelligence and learning machines.

**1950:**

- **Alan Turing**: Publishes "Computing Machinery and Intelligence", introducing the Turing Test for intelligent machines.

**1952:**

- **Arthur Samuel**: Writes a checkers-playing program, an early example of machine learning.

**1956:**

- **John McCarthy**: Coins the term "Artificial Intelligence" at the Dartmouth conference.

**1957:**

- **Perceptrons**: Simple binary classifiers introduced by Frank Rosenblatt.

**1960:**

- **Adaline**: An improved perceptron with adaptive learning, developed by Bernard Widrow.

**1964:**

- **ELIZA**: A natural language processing program simulating a psychotherapist, developed by Joseph Weizenbaum.
- **STUDENT**: Solves algebra word problems, developed by Daniel Bobrow.
**1965:**

- **Herbert Simon**: Makes overoptimistic predictions about AI's pace of progress.
- First conference on AI planning systems.

**1968:**

- The Stanford Research Institute introduces the first speech understanding system.

**1969:**

- AI planning systems are used to schedule university classes.
- **DENDRAL**: An early expert system using heuristic search to infer molecular structure from mass-spectrometry data, developed by Edward Feigenbaum et al.

**1972:**

- **MYCIN**: An early expert system built around an inference engine, developed by Edward Shortliffe at Stanford University. It diagnosed infectious diseases from symptom information.

**1974:**

- **Meta-DENDRAL**: An early example of using machine learning to generate rules for a knowledge-based system, developed by the DENDRAL group.
**1975:**

- **Xerox PARC**: The Alto, an early personal computer with a graphical user interface (GUI), paves the way for modern computing and AI interaction.
- **Turing Award**: Allen Newell and Herbert Simon receive the Turing Award for their basic contributions to artificial intelligence and cognitive psychology.

**1979:**

- **PROLOG**: The logic programming language developed by Alain Colmerauer and colleagues in 1972 becomes popular for AI applications.

**1980:**

- **XCON**: Configures computer systems to customer needs; developed by John McDermott at Digital Equipment Corporation, it saved millions of dollars in hardware costs.

**1982:**

- **Fifth Generation Computer Systems (FGCS) Project**: Japan starts a government-funded project to develop advanced AI systems based on parallel processing and logic programming.
- **Expert System Showdown**: An event organised by the United States Air Force to compare six expert systems, a significant step in making AI practical for real-world applications.
- **Hopfield Networks**: John Hopfield introduces Hopfield networks, a type of recurrent artificial neural network capable of parallel information processing.

**1986:**

- **Backpropagation**: David Rumelhart, Geoffrey Hinton, and Ronald Williams publish a seminal work on backpropagation, the algorithm now standard for training artificial neural networks.
**1990:**

- **Internet**: The World Wide Web is invented by Tim Berners-Lee, making information more accessible and enabling the growth of AI-driven search engines.

**1991:**

- **High-Performance Computing and Communications (HPCC) Act**: A US government initiative supporting AI research, emphasising advanced networks and high-performance computing.

**1997:**

- **Deep Blue**: IBM's chess-playing computer defeats world champion Garry Kasparov in a match.

**2011:**

- **Watson**: IBM's question-answering system beats human champions on the Jeopardy! quiz show.

**2012:**

- **ImageNet**: The large-scale image recognition competition is won by a deep learning approach from Geoffrey Hinton's group, a breakthrough for AI in computer vision.

**2016:**

- **AlphaGo**: Developed by DeepMind, AlphaGo defeats a world champion at the board game Go.

**2020:**

- **AI Ethics**: Fairness, accountability, and transparency become prominent concerns in AI research and development.
```mermaid
gantt
    dateFormat YYYY
    axisFormat %Y
    title Timeline of AI History

    section Early AI Pioneers & Microworlds
    Alan Turing              :turing, 1944, 1945
    John von Neumann         :vonneumann, 1953, 1954
    Marvin Minsky's Students :students, 1963, 1973
    Saint                    :saint, 1963, 1964
    Student                  :student, 1967, 1968
    Analogy                  :analogy, 1968, 1969

    section Expert Systems & Knowledge Intensive Systems
    DENDRAL :dendral, 1965, 1966
    MYCIN   :mycin, 1972, 1973

    section AI Research & Limitations
    Herbert Simon    :simon, 1965, 1966
    Lighthill Report :lighthill, 1973, 1974

    section Natural Language Understanding
    Eugene Charniak :charniak, 1976, 1977
    Roger Schank    :schank, 1977, 1978

    section AI Milestones & Advancements
    Expert System Showdown :showdown, 1982, 1983
    Hopfield Networks      :hopfield, 1982, 1983
    Backpropagation        :backprop, 1986, 1987

    section AI in Competition & Everyday Life
    Deep Blue :deepblue, 1997, 1998
    Watson    :watson, 2011, 2012
```
0	AI & Data Mining/Week 3/Lecture 5 - Naive Bayes.md (normal file → executable file)
0	AI & Data Mining/Week 3/Tutorial 3.md (normal file → executable file)
0	AI & Data Mining/Week 3/Workshop 3.md (normal file → executable file)
0	AI & Data Mining/Week 4/Lecture 7 - Nearest Neighbor.md (normal file → executable file)
0	AI & Data Mining/Week 4/Tutorial 4 - Nearest Neighbor.md (normal file → executable file)
0	AI & Data Mining/Week 4/Workshop 4 - Nearest Neighbor.md (normal file → executable file)
0	AI & Data Mining/Week 5/Lecture 9 - PRISM.md (normal file → executable file)
0	AI & Data Mining/Week 5/Tutorial 9 - PRISM.md (normal file → executable file)
0	AI & Data Mining/Week 6/Lecture 11 - ID3.md (normal file → executable file)
0	AI & Data Mining/Week 6/Lecture 12 - Decision Trees (ID3).md (normal file → executable file)
0	AI & Data Mining/Week 7/Chapter 13 - ID3.md (normal file → executable file)
0	AI & Data Mining/Week 8/Lecture 16 - Evaluating Concept Descriptions.md (normal file → executable file)
0	AI & Data Mining/Week 9/Chapter 15.md (normal file → executable file)