vault backup: 2025-03-16 18:59:42

This commit is contained in:
boris
2025-03-16 18:59:42 +00:00
parent 6befcc90d4
commit ae837183f1
188 changed files with 17794 additions and 409 deletions

View File

@@ -0,0 +1,6 @@
Lectures 5,6,7,8,9,10,11,12
Chapters 20,21,22,23
texlive-recommended
texlive-luatex
texlive-fonts-recommended

View File

@@ -1,4 +1,5 @@
# 1R: (AV)CTARs
(foreach) Attribute
(foreach) Value
Count (class)
@@ -8,6 +9,7 @@
Smallest (error rate)
# Missing Values: Ma $\Delta$D CPS
Malfunctioning Equipment
Change in Design
Collation of Datasets

View File

@@ -1,11 +1,15 @@
# 1R
### (AV)CTARs
Simple
Classification
One-Level tree
Tie, make arbitrary choice
## Issue with Numerics
- Discretise
- List values of attribute
- Sort asc
@@ -13,11 +17,15 @@ Tie, make arbitrary choice
- Breakpoints between change in class
- Interval assigned to majority class
- Enforce a bucket size; if adjacent intervals have the same class, merge them (see the sketch below)
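A minimal Python sketch of this discretisation step, assuming a single numeric attribute paired with class labels; the `min_bucket` threshold and the example data are illustrative, not from the lecture:

```python
def one_r_discretise(values, classes, min_bucket=3):
    """Sort by attribute value, place breakpoints where the class changes,
    then greedily merge an interval into its left neighbour when that
    neighbour is below min_bucket or both share the same majority class."""
    pairs = sorted(zip(values, classes))          # list values of attribute, sort ascending
    intervals = [[pairs[0][1]]]                   # each interval is a list of class labels
    for (_, prev_c), (_, c) in zip(pairs, pairs[1:]):
        if c != prev_c:                           # breakpoint between change in class
            intervals.append([])
        intervals[-1].append(c)

    def majority(interval):
        return max(set(interval), key=interval.count)

    merged = [intervals[0]]
    for interval in intervals[1:]:
        if len(merged[-1]) < min_bucket or majority(merged[-1]) == majority(interval):
            merged[-1] = merged[-1] + interval    # enforce bucket size / merge same class
        else:
            merged.append(interval)
    return [(len(i), majority(i)) for i in merged]  # (size, assigned majority class)

# illustrative numeric attribute with class labels
print(one_r_discretise([64, 65, 68, 69, 70, 71, 72, 75],
                       ["yes", "no", "yes", "yes", "yes", "no", "no", "yes"]))
```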
## Issue with Missing Values
- Assign "missing" as a value
- Treat normally
## Issue with Overfitting
- Bucket Enforcement
- Ensure attribute tested is applicable
# PRISM

View File

@@ -0,0 +1 @@

View File

@@ -8,4 +8,4 @@
2. Search Engines recognise language
3. Telephone menus recognise language and can "hear"
4. Dynamic routing algorithms utilise machine learning
7. Following the principles defined by the Turing test, AI should be classified as both science and engineering: the concepts and programming are scientific, whereas robotics and hardware are engineering.

View File

@@ -134,7 +134,7 @@ Limited rationality means acting appropriately when there is not enough time to
Perfect rationality remains a good starting point for theoretical analysis.
#### Value Alignment Problem
A problem with perfect rationality is that it would assume a fully specified objective given to the machine.
Artificially defined problems such as chess, come with an objective.
@@ -147,7 +147,8 @@ Problem of achieving this is called the value alignment problem
### Bad Behaviour
If a machine is intelligent enough to reason and act, such a machine may attempt to increase its chances of winning by immoral means:
- Blackmail
- Bribery
- Grabbing additional computing resources for itself
@@ -157,5 +158,5 @@ These behaviours are rational, and are logical for success, however immoral.
We do not want machines that are intelligent in the sense of pursuing their objectives.
We want them to pursue our objectives.
If we cannot transfer our objectives perfectly to the machine, we need the machine to know that it does not know the complete objective and have the incentive to act cautiously, ask for permission, learn about our preferences through observation, and defer to human control.
We want agents that are provably beneficial to humans.

BIN
AI & Data Mining/Week 18/test.pdf Executable file

Binary file not shown.

View File

@@ -27,16 +27,16 @@ Formula: p => q
(b) Increased spending overheats the economy.
Atomic Propositions:
Increased spending
Overheats the economy
Connectives:
None, implied non-linguistic
Formula: p => q
(c) Increased spending coupled with tax cuts overheats the economy.
Atomic Propositions:
Increased spending
There are tax cuts
Overheated economy
Connectives:
Coupled with
@@ -56,7 +56,7 @@ Atomic Propositions:
Inflation does not rise
Connectives:
either / or
Formula: p
1. Remove as many brackets as possible from the following propositions without altering their meaning (i.e. the truth table).

View File

@@ -19,7 +19,9 @@
- Disjunction (∨): p ∨ q is true if and only if at least one of p or q is true
- Implication (⇒): p ⇒ q is false if and only if p is true and q is false
- Equivalence (⇔): p ⇔ q is true if and only if p and q have the same truth value
## Precedence Order of Connectives
1. Negation (¬)
2. Conjunction (∧)
3. Disjunction (∨)
@@ -27,6 +29,7 @@
5. Equivalence (⇔)
This means that in a formula without parentheses, ¬ takes precedence over ∧ and ∨, ∧ and ∨ have the same precedence but associativity to the left, and ⇒ and ⇔ also have the same precedence but associativity to the right. For example, p ∧ q ⇒ r is equivalent to (p ∧ q) ⇒ r, not p ∧ (q ⇒ r).
### Propositions and Connectives (Examples)
#### Atomic Propositions:
@@ -119,6 +122,7 @@ This means that in a formula without parentheses, ¬ takes precedence over ∧ a
- Formalized as: p ∧ q ⇒ ¬r
- Argument: If Bob eats carrots, then he will be able to see in the dark. Therefore, if Bob can't see in the dark, then he hasn't eaten carrots.
- Formalized as: p ⇒ q ≡ ¬q ⇒ ¬p
# Summary
- Logicians focus on argument form

View File

@@ -0,0 +1,44 @@
1. George Boole
2. Truth Tables for a) Negation b) Contraposition
a)
Negation Law
¬¬p ≡ p
| p | ¬p | ¬(¬p) | ¬(¬p) ⇔ p |
| --- | --- | ----- | --------- |
| T | F | T | T |
| F | T | F | T |
b)
Contraposition Law
p ⇒ q ≡ ¬q ⇒ ¬p
| p | ¬p | q | ¬q | p ⇒ q | ¬q ⇒ ¬p | p ⇒ q ⇔ ¬q ⇒ ¬p |
| --- | --- | --- | --- | ----- | ------- | --------------- |
| T | F | T | F | T | T | T |
| T | F | F | T | F | F | T |
| F | T | T | F | T | T | T |
| F | T | F | T | T | T | T |
p ⇒ q ⇔ ¬q ⇒ ¬p must always be true, since the truth table shows that p ⇒ q and ¬q ⇒ ¬p have the same truth value in every row, i.e. they are logically equivalent
1. Provide names of laws
1. Negation Law
2. De Morgan's Law
3. Negation Law
4. De Morgan's Law
5. Negation Law Twice
6. Associative Law
7. De Morgan's Law
8. De Morgan's Law
9. Negation Law Twice
2. Show logical equivalence
p ⇒ q
¬q ⇒ ¬p
p ⇒ q
≡ (¬p) ∨ q
≡ q ∨ (¬p)
≡ ¬(¬q) ∨ (¬p)
≡ (¬q) ⇒ (¬p)

View File

@@ -151,4 +151,4 @@
2. Prove De Morgan's laws using truth tables or transformational proofs.
3. Prove the laws involving true and false using truth tables or transformational proofs.
4. Prove the laws of simplification using truth tables or transformational proofs.
5. Prove the equivalence of two given formulae using transformational proofs (as demonstrated in slides 21 and 22).

View File

@@ -0,0 +1,53 @@
1. A syllogism is an instance of a form of reasoning in which a conclusion is drawn (whether validly or not) from two given or assumed propositions (premises); each premise shares a term with the conclusion, and the premises share a common or middle term that does not appear in the conclusion.
2. Aristotle
**Double Negation (¬Elim): ¬¬p, therefore p**
| Propositions | Premises | Conclusion |
| ------------ | ------------ | ---------- |
| p | $\neg\neg p$ | p |
| T | T | T |
| F | F | F |
**Hypothetical syllogism; this says that if p implies q and q implies r, then it can be logically concluded that p implies r: p ⇒ q; q ⇒ r; therefore p ⇒ r**
| Propositions | | | Premises | | Conclusion |
| ------------ | --- | --- | -------------- | -------------- | -------------- |
| p | q | r | $p \implies q$ | $q \implies r$ | $p \implies r$ |
| T | T | T | T | T | T |
| T | T | F | T | F | |
| T | F | T | F | T | |
| T | F | F | F | T | |
| F | T | T | T | T | T |
| F | T | F | T | F | |
| F | F | T | T | T | T |
| F | F | F | T | T | T |
1. Involves linking implications together in a sequential manner, much like the links in a chain.
**p ∨ q; q; Therefore, p**
| Propositions | | Premises | | Conclusion |
| ------------ | --- | ---------- | --- | ---------- |
| $p$ | $q$ | $p \lor q$ | $q$ | p |
| T | T | T | T | T |
| T | F | T | F | T |
| F | T | T | T | F |
| F | F | F | F | F |
**p ⇒ q; q ⇒ p; Therefore, p ∧ q**
| Propositions | | Premises | | Conclusion |
| ------------ | --- | -------------- | -------------- | ----------- |
| p | q | $p \implies q$ | $q \implies p$ | $p \land q$ |
| T | T | T | T | T |
| T | F | F | T | F |
| F | T | T | F | F |
| F | F | T | T | F |
$p \implies q$
$r \implies s$
$p \lor r$ (p disjunction (or) r)
Conclusion: $q \lor s$
The "Constructive Dilemma": If the disjunction of the antecedent of two implications holds then the disjunction of the conclusions also must hold

View File

@@ -0,0 +1,163 @@
**Detailed Notes on Lectures 9 & 10: Validity and Inference Rules**
**Slide 1: Learning Objectives**
- Define the notion of validity in an argument.
- Establish validity using truth tables.
- Demonstrate invalidity using truth tables.
- Understand inference rules.
**Slide 2: Contents**
- Objectives
- Transformational proofs are not sufficient.
- Comparison of deduction with induction.
- Validity.
- Demonstrating validity/invalidity using truth tables.
- Problem with truth tables.
- Inference rules.
- Summary, reading, and references.
**Slide 3: Transformational Proofs do not Suffice**
- Understanding transformations of formulas is useful but insufficient.
- Logic uses rules of inference to deduce true propositions from other true propositions.
- Invalid premises cannot lead to valid conclusions, preventing proofs of contradictions or useless systems.
**Slide 4: Premises and Conclusions**
- An argument consists of premises (basis for accepting) and a conclusion.
- Example:
- Premises: Every adult is eligible to vote; John is an adult.
- Conclusion: Therefore, John is eligible to vote.
**Slide 5: Deduction vs. Induction**
- Deductive arguments: Conclusion is wholly justified by premises.
- Inductive arguments: More general new knowledge inferred from facts or observations.
**Slide 6: Valid vs. Invalid Arguments**
- Valid arguments: Conclusion always true when premises are true.
- Invalid arguments: At least one assignment where premises are true, but conclusion is false.
**Slide 7: Example of Valid Argument**
- If John is an adult, then he is eligible to vote (premise).
- John is an adult (premise).
- Therefore, John is eligible to vote (conclusion).
**Slide 8: Example of Valid Argument with False Conclusion**
- If I catch the 19:32 train, I'll arrive in Glasgow at 19:53 (premise).
- I catch the 19:32 train (premise).
- Therefore, I arrive in Glasgow at 19:53 (conclusion). Factually false, but a valid argument.
**Slide 9: Example of Invalid Argument**
- If I win the lottery, then I am lucky (premise).
- I do not win the lottery (premise).
- Therefore, I am unlucky (conclusion). An invalid argument, even though the premises and conclusion are factually true.
**Slide 10: Demonstrating Validity Using Truth Tables**
- View argument as implication (p ⇒ q).
- If premises entail conclusion, then argument is valid.
**Slide 12: Demonstrating Validity Using Truth Table (Example)**
- Argument: If John is an adult, then he is eligible to vote; John is an adult; Therefore, John is eligible to vote.
- Atomic Propositions: p (John is an adult), q (John is eligible to vote).
| p | q | p ⇒ q | (p ⇒ q) ∧ p |
|---|---|-------|-------------|
| T | T | T | T |
| T | F | F | F |
| F | T | T | F |
| F | F | T | F |
- Argument is valid because the conclusion (q) is true in every row where both premises (p ⇒ q and p) are true (only the first row).
**Slide 13: Viewing Argument as Implication**
- If premises logically imply conclusion, argument is valid.
- Example: ((p ⇒ q) ∧ p) ⇒ q
**Slide 15: Demonstrating Invalidity Using Truth Tables**
- Argument is invalid if there's at least one assignment where premises are true, but conclusion is false.
**Slide 16: Demonstrating Invalidity Using Truth Table (Example)**
- Argument: p ⇔ q; p ⇒ r; Therefore, p (an invalid argument).
| p | q | r | p ⇔ q | p ⇒ r |
|---|---|---|-------|-------|
| T | T | T | T | T |
| F | F | F | T | T |
- Argument is invalid because there is a row (p = F, q = F, r = F) where both premises are true, but the conclusion (p) is false.
**Slide 17: Exercise**
- Demonstrate the invalidity of the argument: p ∨ q; ¬p; Therefore, ¬q.
**Slide 18: Solution to Exercise**
- Atomic Propositions: p, q.
| p | q | p ∨ q | ¬p |
|---|---|-------|-----|
| F | T | T | T |
- Argument is invalid because in this row both premises are true, but the conclusion (¬q) is false (a short automated check of such arguments is sketched below).
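These row-by-row checks are mechanical, so they can be automated. An illustrative Python sketch (not from the slides) that enumerates every truth assignment and reports a counterexample if one exists:

```python
from itertools import product

def valid(variables, premises, conclusion):
    """Return (True, None) if the argument is valid, else (False, counterexample)."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False, env            # premises true, conclusion false -> invalid
    return True, None

implies = lambda a, b: (not a) or b

# Modus ponens: p ⇒ q, p; therefore q  (valid)
print(valid("pq",
            [lambda e: implies(e["p"], e["q"]), lambda e: e["p"]],
            lambda e: e["q"]))

# p ∨ q, ¬p; therefore ¬q  (invalid)
print(valid("pq",
            [lambda e: e["p"] or e["q"], lambda e: not e["p"]],
            lambda e: not e["q"]))
```

The first call confirms the modus ponens argument is valid; the second returns the counterexample p = F, q = T shown in the table above.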
**Slide 19: A Problem with Truth Tables**
- Using truth tables to establish validity becomes tedious as the number of variables increases.
**Slide 20: Deductive Proofs**
- Approach to establishing validity using a series of simpler arguments known to be valid.
- Uses laws of logic (logical equivalences) and inference rules.
**Slide 21: Inference Rules**
- Primitive valid argument forms eliminating or introducing logical connectives.
- Categories: Intro (introduces connective), Elim (eliminates connective).
**Slide 22: The Layout of an Inference Rule**
- Premises (above the line): List of formulas already in proof.
- Conclusion (below the line): What may be deduced by applying the inference rule.
**Slide 23: Conjunction (∧Intro)**
- Introduces the connective ∧.
- Example: p, q; Therefore, p ∧ q.
**Slide 24: Simplification (∧Elim)**
- Eliminates the connective ∧.
- Example: p ∧ q; Therefore, p.
**Slide 25: Addition (∨Intro)**
- Introduces the connective ∨.
- Example: p; Therefore, p ∨ q.
**Slide 26: Exercise on Disjunctive Syllogism**
- Demonstrate the validity of the inference rule using a truth table.
**Slide 27: Solution to Exercise**
- Atomic Propositions: p, q.
| p | q | p ∨ q | ¬p |
|---|---|-------|-----|
| F | T | T | T |
- Argument is valid because the conclusion (q) is true in the only row where both premises (p ∨ q and ¬p) are true.
**Slide 28: Modus Ponens (⇒Elim)**
- Eliminates the connective ⇒.
- Example: p ⇒ q; p; Therefore, q.
**Slide 29: Modus Tollens (⇒Elim)**
- Eliminates the connective ⇒.
- Example: p ⇒ q; ¬q; Therefore, ¬p.
**Slide 30: Other Inference Rules**
- Double Negation (¬Elim): ¬¬p; Therefore, p.
- Laws of Equivalence (⇔Elim): p ⇔ q; Therefore, p ⇒ q and q ⇒ p.
**Slide 31: Transitive Inference Rules**
- Transitivity of Equivalence: If p ≡ q and q ≡ r, then p ≡ r.
- Hypothetical Syllogism: If p ⇒ q and q ⇒ r, then p ⇒ r.
**Slide 32: Summary**
- Valid arguments: Conclusion always true when premises are true.
- Invalid arguments: At least one assignment where premises are true, but conclusion is false.
- Truth tables demonstrate invalidity.
- Inference rules deduce true propositions from other true propositions.
**Slide 33: Reading and References**
- Russell, Norvig (2022). Artificial Intelligence. 4th Edition.
- Nissanke (1999). Introductory Logic and Sets for Computer Scientists.
- Gray (1984). Logic, Algebra and Databases.

BIN
AI & Data Mining/Week 22/test.pdf Executable file

Binary file not shown.

View File

@@ -0,0 +1,16 @@
![](Pasted%20image%2020250221132524.png)
| Line | Formula | Justification | Rule |
| ---- | --------------------------- | ---------- | --------------------------------- |
| 1 | $A \implies \lnot B$ | Premise | |
| 2 | $(B \lor C) \lor D$ | Premise | |
| 3 | $\lnot C \lor D \implies A$ | Premise | |
| 4 | $\lnot C$ | Premise | |
| 5 | $\lnot C \lor D$ | From 4 | $\lor Intro$ |
| 6 | $A$ | From 3 & 5 | $\implies Elim$ Modus Ponens |
| 7 | $\lnot B$ | From 1 & 6 | $\implies Elim$ Modus Ponens |
| 8 | $\lnot B \land \lnot C$ | From 4 & 7 | $\land Intro$ |
| 9 | $\lnot (B \lor C)$ | From 8 | De Morgan's Law |
| 10 | $D$ | From 2 & 9 | $\lor Elim$ Disjunctive Syllogism |
B or C is not true, therefore D must be true from step 2.

View File

@@ -0,0 +1,204 @@
**Slide 3: Recap on Logical Implication (Entailment) |=**
- Entailment notation: p |= q if and only if the implication p $\implies$ q is a tautology.
- Example:
- p $\land$ q |= q
- Truth table for (p $\land$ q) $\implies$ q:
| p | q | p $\land$ q | (p $\land$ q) $\implies$ q |
|---|---|------|--------|
| T | T | T | T |
| T | F | F | T |
| F | T | F | T |
| F | F | F | T |
**Slide 4: (r $\implies$ s) $\land$ (r $\implies$ $\lnot$s) |= $\lnot$r**
- Intuitively, if r implies both s and $\lnot$s, then r must be false.
- Truth table for (r $\implies$ s) $\land$ (r $\implies$ $\lnot$s):
| r | s | $\lnot$s | (r $\implies$ s) $\land$ (r $\implies$ $\lnot$s) |
|---|---|---|------------------------|
| T | T | F | F |
| T | F | T | F |
| F | T | F | T |
| F | F | T | T |
**Slide 5: p $\vdash$ q**
- Notation: p $\vdash$ q means q is provable from p using inference rules.
- Example:
- A $\implies$ B, $\lnot$B, therefore $\lnot$A
**Slide 6: Differences Between |= and $\vdash$**
- |= indicates semantic entailment (truth conditions).
- $\vdash$ represents syntactic derivation (inference rules).
**Slide 7: Recap on Inference Rules**
- Example inference rules:
- Modus Ponens ($\implies$Elim):
p $\implies$ q, p $\vdash$ q
- Conjunction Introduction ($\land$Intro):
p, q $\vdash$ p $\land$ q
- Conditional Proof ($\implies$Intro):
p $\vdash$ r, p $\vdash$ s $\vdash$ p $\implies$ (r $\land$ s)
**Slide 8: Layout of an Inference Rule**
- Premises above the line, conclusion below the line.
- Example inference rule ($\implies$Intro):
p $\vdash$ r, p $\vdash$ s
p $\implies$ (r $\land$ s)
**Slide 9: Presentation of Proofs**
- Steps:
- Number each step.
- Justify each step with previous line(s) and inference rule used.
**Slide 10: Deriving $\lnot$p $\implies$ r From (p $\land$ q) $\lor$ r**
- Example proof:
(p $\land$ q) $\lor$ r, $\lnot$E
$\lnot$p $\implies$ r
**Slide 11: Two Special Inference Rules**
- Deductive Theorem ($\implies$Intro):
p $\vdash$ r, p $\vdash$ s
p $\implies$ (r $\land$ s)
- Reductio ad absurdum ($\lnot$Intro):
p $\vdash$ s, p $\vdash$ $\lnot$s
$\lnot$p
**Slide 12: Conditional Proofs**
- Strategy: Assume p, deduce q if possible, discharge assumption.
- Example:
(p $\land$ q) $\lor$ r
$\lnot$p $\implies$ r
**Slide 13: Indirect Proofs**
- Strategy: Assume negation of goal, deduce contradiction.
- Example:
(p $\land$ q) $\lor$ r
$\lnot$p $\implies$ r
**Slide 14: Solution to Exercise**
Given argument:
A (You eat carefully) ⇒ B (You have a healthy digestive system)
C (You exercise regularly) ⇒ D (You are very fit)
B ∨ D ⇒ E (You live to a ripe old age)
¬E
Therefore, ¬A ∧ ¬C
**Proof:**
| Line | Formula | Justification |
| ---- | --------- | -------------------- |
| 1 | A ⇒ B | Premise |
| 2 | C ⇒ D | Premise |
| 3 | B ∨ D ⇒ E | Premise |
| 4 | ¬E | Premise |
| 5 | ¬(B ∨ D) | Modus Tollens (3, 4) |
| 6 | ¬B ∧ ¬D | De Morgan's Law (5) |
| 7 | ¬B | ∧Elim (6) |
| 8 | ¬A | Modus Tollens (1, 7) |
| 9 | ¬D | ∧Elim (6) |
| 10 | ¬C | Modus Tollens (2, 9) |
| 11 | ¬A ∧ ¬C | ∧Intro (8, 10) |
**Conclusion:**
We have proven that ¬A ∧ ¬C, i.e., you did not eat carefully and you did not exercise regularly.
**Slide 15: Two Special Inference Rules (continued)**
- Deductive Theorem:
p $\vdash$ r, p $\vdash$ s
p $\implies$ (r $\land$ s)
- Reductio ad absurdum:
p $\vdash$ s, p $\vdash$ $\lnot$s
$\lnot$p
**Slide 16: Soundness and Completeness**
- Sound: derives only sentences that are entailed by the premises (every provable conclusion is valid).
- Complete: Derives any sentence entailed by premises.
**Slide 17: Formal Proofs of Natural Language Arguments**
- Steps:
- Identify atomic propositions.
- Formalize argument in logic.
- Check for invalidity.
- Attempt proof.
**Slide 18: Example - Travel**
- Argument:
Therefore, if my neighbours claim to be impressed then they are just pretending.
**Slide 19: Example - Travel (continued)**
- Formalize argument:
p $\implies$ q, $\lnot$p $\implies$ $\lnot$r, $\lnot$q
$\lnot$r
- Proof:
$\lnot$p $\implies$ r
**Slide 20: Example - Nutrition**
- Argument:
Therefore, you did not eat carefully and you did not exercise regularly.
**Slide 21: Example - Nutrition (continued)**
- Formalize argument:
A $\implies$ B, C $\implies$ D, B $\lor$ D $\implies$ E, $\lnot$E
$\lnot$A $\land$ $\lnot$C
- Proof:
$\lnot$A $\land$ $\lnot$C
**Slide 22: Application to Software Engineering**
- Questions about software specifications and claims are arguments.
**Slide 23: Reading and References**
- Russell and Norvig, Artificial Intelligence (4th Edition)
- Nissanke, Introductory Logic and Sets for Computer Scientists
- Gray, Logic, Algebra and Databases

BIN
AI & Data Mining/Week 23/test.pdf Executable file

Binary file not shown.

View File

@@ -0,0 +1,18 @@
1. Just the following propositions hold true:
1. male(Ahmed), male(Patel), male(Scott), tall(Ahmed), tall(Patel), short(Khan), short(Scott)
Evaluate the truth of the following formula
| x | tall(x) | male(x) | short(x) | $\lnot$short(x) | male(x) $\land \lnot$short(x) | tall(x) $\iff$ male(x) $\land \lnot$short(x) | $\forall x \bullet$tall(x) $\iff$ male(x) $\land \lnot$short(x) |
| --- | ------- | ------- | -------- | --------------- | ----------------------------- | -------------------------------------------- | --------------------------------------------------------------- |
| Ah | T | T | F | T | T | T | |
| Kh | F | F | T | F | F | T | |
| Pa | T | T | F | T | T | T | |
| Sc | F | T | T | F | F | T | |
| | | | | | | | T |
2. Using appropriate binary predicates, express each of the following sentences in predicate logic
a) Salford stores only supply stores outside of Salford.
- $\forall x \bullet \forall y \bullet \textsf{in}(x, \textsf{Salford}) \land \textsf{supplies}(x, y) \implies \lnot \textsf{in}(y, \textsf{Salford})$
b) No store supplies itself
c) There are no stores in Eccles but there are some in Trafford.
d) Stores do not supply stores that are supplied by stores which they supply
e) Stores which supply each other are always in the same place

View File

@@ -0,0 +1,171 @@
**Slide 1: Learning Objectives**
- Understand when the order of quantifiers is important.
- Understand how ∀ and ∃ are connected.
- Use and remember scoping rules.
- Identify bound and free variables in formulae.
- Establish the truth of formulae in predicate logic.
- Understand why predicate logic is described as undecidable.
- Understand the difference between zero-order, first-order, and higher-order predicate logics.
**Slide 2: Objectives & Recap**
- Quantifiers and their alternative view.
- Distributive laws of quantifiers.
- De Morgan's laws with quantifiers.
- Scope of quantifiers.
- Bound and free variables.
**Slide 3: The Universal Quantifier (∀)**
- Pronounced as "for all".
- Example: ∀x (human(x) ⇒ mortal(x)) = All humans are mortal.
- In terms of conjunction: ∀x p(x) ≡ p(x₁) ∧ p(x₂) ∧ ... ∧ p(xₙ)
**Slide 4: The Existential Quantifier (∃)**
- Pronounced as "there exists" or "for some".
- Example: ∃x (human(x) ∧ happy(x)) = Some humans are happy.
- In terms of disjunction: ∃x p(x) ≡ p(x₁) ∨ p(x₂) ∨ ... ∨ p(xₙ)
**Slide 5: Distributive Laws of Quantifiers**
- ∀x (p(x) ∧ q(x)) ≡ (∀x p(x)) ∧ (∀x q(x))
- ∃x (p(x) ∨ q(x)) ≡ (∃x p(x)) ∨ (∃x q(x))
**Slide 6: The Order of Quantification**
- Example 1 (Everyone loves someone): ∀x ∃y loves(x, y)
- Example 2 (There is someone loved by everyone): ∃y ∀x loves(x, y)
**Slide 7: De Morgan's Laws and Quantifiers**
- ¬∃x p(x) ≡ ∀x ¬p(x)
- ¬∀x p(x) ≡ ∃x ¬p(x)
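On a finite domain these laws can be checked directly, since ∀ behaves as a conjunction over the individuals and ∃ as a disjunction (Slides 3–4 above). A small illustrative Python sketch; the domain and the predicate p are made up for the example:

```python
domain = ["ahmed", "khan", "patel", "scott"]        # hypothetical finite domain
p = lambda x: x in {"ahmed", "patel"}               # hypothetical predicate p(x)

forall = lambda pred: all(pred(x) for x in domain)  # ∀x p(x) as a conjunction
exists = lambda pred: any(pred(x) for x in domain)  # ∃x p(x) as a disjunction

# De Morgan's laws with quantifiers: both comparisons print True
print((not exists(p)) == forall(lambda x: not p(x)))   # ¬∃x p(x) ≡ ∀x ¬p(x)
print((not forall(p)) == exists(lambda x: not p(x)))   # ¬∀x p(x) ≡ ∃x ¬p(x)
```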
**Slide 8: Scope of Quantifiers**
- In absence of brackets, scope extends to the end of the formula.
- Brackets can enforce different scoping patterns.
**Slide 9: Bound and Free Variables**
- Bound variable: occurrence introduced by a quantifier within its scope.
- Example:
```
Formula Variable status
child(x) Only one occurrence of x, free.
∀x child(x) ∧ clever(x) Both occurrences of x are bound by the same quantifier.
((∀x child(x)) ∧ clever(x)) The first occurrence of x is bound, the second is free.
```
- Free variable: occurrence not within any quantifier's scope.
**Slide 10: Meaning of Bound Variables**
- The meaning of a bound variable does not depend on its name.
- Example:
```
∀x (child(x) ∧ clever(x)) ⇒ ∃y loves(y, x)
∀B (child(B) ∧ clever(B)) ⇒ ∃C loves(C, B)
```
**Slide 11: Meaning of Free Variables**
- Free variables denote unknowns or unspecified objects.
- Example:
```
∀x (child(x) ∧ clever(x)) ⇒ x is loved.
∀x (child(x) ∧ clever(x)) ⇒ z is loved.
```
**Slide 12: Exercise**
- Identify bound and free variables in the formula:
```
∃x taller(y, x)
∃x ∃y taller(x, y) ∧ taller(x, z)
```
- Solution:
- In the first formula: x is bound, y is free.
- In the second formula: y is bound by ∃y, z is free. Both occurrences of x are bound and refer to the same variable.
**Slide 13: The Equality Symbol (=)**
- Richard has at least two brothers:
```
∃x ∃y (brother(x, richard) ∧ brother(y, richard) ∧ ¬(x = y))
```
- Definition of sibling using parent:
```
∀x ∀y sibling(x, y) ≡ (¬(x = y) ∧ ∃m ∃f ¬(m = f) ∧ parent(m, x) ∧ parent(f, x) ∧ parent(m, y) ∧ parent(f, y))
```
**Slide 14: Establishing the Truth Values of Formulae**
- Example (slide 21 and 22):
- Individuals: Ahmed, Khan, Patel, Scott.
- Properties: male, tall, short.
- Formula: ∀x (male(x) ⇒ tall(x) ∨ short(x))
- Truth table shows the formula is false (Patel is male but not tall or short).
**Slide 15: Exercise**
- Given individuals, properties, and true propositions as in slide 23.
- Evaluate the truth of: ∀x ¬male(x) ⇒ short(x)
- Solution (slide 24):
```
x male(x) ¬male(x) short(x) ¬male(x) ⇒ short(x)
Ahmed T F F F T
Khan F T T T T
Patel T F F F T
Scott T F T F F
```
- The formula is false (Patel is male but not short).
**Slide 16: Example Involving ∃**
- Given individuals, properties, and true propositions as in slide 25.
- Formula: ∃x (male(x) ∧ ¬tall(x) ⇒ short(x))
- Truth table (slide 26):
```
x male(x) ¬tall(x) short(x) ¬tall(x) ⇒ short(x) male(x) ∧ ¬tall(x) ⇒ short(x)
Ahmed T F F F T T
Khan F F T F F F
Patel T F F F F F
Scott T T T F T F
```
- The formula is true (Scott is male, not tall but short).
**Slide 17: Exercise**
- Given individuals, properties, and true propositions as in slide 27.
- Evaluate the truth of: ∃x (male(x) ∧ (tall(x) ∨ ¬short(x)))
- Solution (slide 28):
```
x male(x) tall(x) short(x) ¬short(x) tall(x) ∨ ¬short(x) male(x) ∧ (tall(x) ∨ ¬short(x))
Ahmed T T F T T T T
Khan F F T F F F F
Patel T T F F T T F
Scott T F T F F F F
```
- The formula is true (Ahmed and Patel are males, tall or not short).
**Slide 29: Predicate Logic is Undecidable**
- Universal quantification introduces computational impossibility when testing truth values with an infinite number of possible values.
**Slide 30: First-order Predicate Logic**
- Quantifiers refer only to objects (constants), not predicate or function names.
- Propositional logic is zero-order logic.
**Slide 31: Summary**
- Order of quantifiers matters when both ∀ and ∃ are present.
- Quantifiers are connected through negation, obey De Morgan's laws.
- Scope of quantifiers extends to the end of the formula without brackets.
- Bound variables are introduced by a quantifier within its scope; free variables are not within any quantifier's scope.
**Slide 32: Reading, References and Acknowledgements**
- Reading from Artificial Intelligence textbook by Russell and Norvig.
- References: Introductory Logic and Sets for Computer Scientists by Nissanke.

View File

@@ -0,0 +1,159 @@
### **Slide Notes:**
---
#### **Slide 0: Learning Objectives**
- Identify genuine variables and variables standing for unknowns.
- Describe the role of inference rules for eliminating and introducing quantifiers.
- Justify constraints on handling variables when eliminating and introducing quantifiers.
- Conduct proofs in predicate logic.
- Prove the validity of an argument presented in English.
---
#### **Slide 1: Contents**
- Recap on inference rules
- Deductive proof in predicate logic
- Genuine variables versus unknown variables
- Extra inference rules for predicate logic
- Constraints on variables
- Summary, reading and references
---
#### **Slide 2: Recap on Inference Rules**
| **Connective** | **Introduction Rule** | **Elimination Rule** |
| ------------------ | --------------------------------------------------------------------- | ------------------------------------------------------------ |
| ∧ (and) | ∧ Intro: $p, q \vdash p \land q$ | ∧ Elim: $p \land q \vdash p$ (similarly for $q$) |
| ∨ (or) | ∨ Intro: $p \vdash p \lor q$, $q \vdash p \lor q$ | ∨ Elim (Disjunctive Syllogism): $p \lor q, \lnot p \vdash q$ |
| ¬ (not) | N/A | ¬ Elim (Double Negation): $\lnot\lnot p \vdash p$ |
| ⇒ (implies) | ⇒ Intro (Conditional Proof): from $p \vdash q$ conclude $p \Rightarrow q$ | ⇒ Elim (Modus Ponens): $p \Rightarrow q, p \vdash q$ |
| ⇔ (if and only if) | ⇔ Intro: $p \Rightarrow q, q \Rightarrow p \vdash p \iff q$ | ⇔ Elim: $p \iff q \vdash p \Rightarrow q$ and $q \Rightarrow p$ |
---
#### **Slide 3: Deductive Proof in Predicate Logic**
1. Reduce formulae to propositional form by removing quantifiers.
2. Manipulate formulae using inference rules and logical laws of propositional logic.
3. Reintroduce quantifiers at the end, depending on the goal.
---
#### **Slide 4: Genuine Variables vs. Unknown Variables**
- **Genuine variable**:
- A free variable whose universal quantification yields a true formula.
- **Unknown variable**:
- A free variable whose existential quantification yields a true formula.
---
#### **Slide 5: Exercise**
**Example**:
- $\frac{9}{z} = 3$
- $\text{even}(n) \lor \text{odd}(n)$
| **Variable** | **Status** | **Justification** |
|--------------|-------------------------|-----------------------------------|
|$z$ | Unknown variable |$\exists z \cdot \frac{9}{z} = 3$|
|$n$ | Genuine variable |$\forall n \cdot (\text{even}(n) \lor \text{odd}(n))$|
---
#### **Slide 6: Extra Inference Rules for Predicate Logic**
| **Quantifier** | **Introduction Rule** | **Elimination Rule** |
|-----------------|----------------------------------------|---------------------------------------------------|
| $\forall$ (for all) | $\forall$ Intro: $\varphi(y) \vdash \forall x \cdot \varphi(x)$, where $y$ is a genuine variable | $\forall$ Elim: $\forall x \cdot \varphi(x) \vdash \varphi(y)$ |
| $\exists$ (there exists) | $\exists$ Intro: $\varphi(y) \vdash \exists x \cdot \varphi(x)$ | $\exists$ Elim: $\exists x \cdot \varphi(x) \vdash \varphi(y)$, where $y$ is a new unknown |
---
#### **Slide 7: Freeing Variables from$\forall$**
- Example:
- $\forall x \cdot \text{lecturer}(x), \text{lecturer}(bryant)$
- $\forall x \cdot \text{lecturer}(x), \text{lecturer}(y)$
- $y$ is a genuine variable.
---
#### **Slide 8: Freeing Variables from$\exists$**
- Example:
- $\exists x \cdot \text{student}(x), \text{student}(y)$
- $y$ is an unknown variable.
---
#### **Slide 9: Introduction of $\forall$**
- Constraint:
- Do not introduce $\forall$ on the basis of an individual (a constant).
---
#### **Slide 10: Constraint 1**
- When eliminating $\exists$, do not instantiate its variable with a constant.
---
#### **Slide 11: Constraint 2**
- When eliminating $\exists$, do not instantiate its variable with an existing free variable.
---
#### **Slide 12: Constraint 3**
- Do not introduce $\forall$ on the basis of an individual (a constant).
---
#### **Slide 13: Constraint 4**
- Do not introduce $\forall$ on the basis of an unknown.
---
#### **Slide 14: Constraint 5**
- Do not quantify a variable by introducing $\forall$ in a formula if the formula contains another variable obtained by eliminating $\exists$.
---
#### **Slide 15: Constraint 6**
- Within the scope of an assumption, do not introduce $\forall$ on the basis of a variable appearing in the assumption.
---
#### **Slide 16: Constraint 7**
- Every instantiation, whether following the elimination of a $\forall$ or $\exists$, must always be done with a free variable, rather than a bound variable.
---
#### **Slide 17: Constraint 8**
- Beware of binding any newly quantified variable by an unintended quantifier.
---
#### **Slide 18: Summary**
- Use inference rules to eliminate and introduce logical connectives.
- Handle quantifiers carefully, following constraints when eliminating and introducing them.
---
#### **Slide 19: Summary of Constraints**
1. Do not assume a property holds for a particular individual based on it holding for at least one individual.
2. When eliminating $\exists$, do not instantiate its variable with a constant or an existing free variable.
3. Do not introduce $\forall$ on the basis of a constant or an unknown.
4. Do not quantify a variable by introducing $\forall$ if the formula contains another variable from $\exists$ elimination.
5. Within an assumption, do not introduce $\forall$ based on a variable in the assumption.
6. Instantiate variables only with free variables, not bound variables.
7. Avoid binding newly quantified variables by unintended quantifiers.
---
#### **Slide 20: Deducing a Conclusion**
Example:
1. $\forall x \cdot (\text{master}(x) \lor \text{slave}(x)) \Rightarrow \text{adult}(x) \land \text{man}(x)$
2. $\neg \forall x \cdot (\text{adult}(x) \land \text{man}(x))$
3. $\exists x \cdot (\neg (\text{adult}(x) \land \text{man}(x)))$
4. $\neg (\text{master}(x) \lor \text{slave}(x))$
5. $\neg \text{master}(x) \land \neg \text{slave}(x)$
6. $\neg \text{master}(x)$
7. $\exists y \cdot (\neg \text{master}(y))$
---

View File

@@ -55,13 +55,17 @@
# Modified Probability Estimates
- Consider attribute *outlook* for class *yes*
# $\frac{2+\frac{1}{3}\mu}{9+\mu}$
Sunny
# $\frac{4+\frac{1}{3}\mu}{9+\mu}$
Overcast
# $\frac{3+\frac{1}{3}\mu}{9+\mu}$
Rainy
- Each value treated the same way
@@ -73,13 +77,16 @@ Rainy
## Fully Bayesian Formulation
# $\frac{2+\frac{1}{3}\mu p_1}{9+\mu}$
Sunny
# $\frac{4+\frac{1}{3}\mu p_2}{9+\mu}$
Overcast
# $\frac{3+\frac{1}{3}\mu p_3}{9+\mu}$
Rainy
- Where $p_1 + p_2 + p_3 = 1$
- $p_1, p_2, p_3$ are prior probabilities of outlook being sunny, overcast or rainy before seeing the training set. However, in practice it is not clear how these prior probabilities should be assigned.
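Each fraction above follows the pattern (count + μ × prior) / (class total + μ). A minimal Python sketch of that modified estimate, using the outlook counts for class yes (2, 4, 3 out of 9); the value of μ and the non-uniform priors are made-up illustrations:

```python
def modified_estimate(count, class_total, prior, mu):
    """(count + mu * prior) / (class_total + mu): smoothed probability estimate."""
    return (count + mu * prior) / (class_total + mu)

counts = {"sunny": 2, "overcast": 4, "rainy": 3}        # outlook counts for class yes
mu = 3.0                                                # illustrative weight

# uniform priors of 1/3 reproduce the first set of formulas above
for value, count in counts.items():
    print(value, modified_estimate(count, 9, 1 / 3, mu))

# fully Bayesian form: any priors p1, p2, p3 with p1 + p2 + p3 = 1
priors = {"sunny": 0.5, "overcast": 0.3, "rainy": 0.2}  # hypothetical priors
for value, count in counts.items():
    print(value, modified_estimate(count, 9, priors[value], mu))
```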

View File

@@ -27,20 +27,25 @@
| High | 2/5 | 1/9 | Red | 3/5 | 2/9 | | | | | | | | |
# Problem 1
# $Pr[Diagnosis=N|E] = \frac{2}{5} \times \frac{2}{5} \times \frac{4}{5} \times \frac{3}{5} \times \frac{5}{14} = 0.027428571$
# $Pr[Diagnosis = B|E] = \frac{3}{9} \times \frac{4}{9} \times \frac{3}{9} \times \frac{3}{9} \times \frac{9}{14} = 0.010582011$
# $p(B) = \frac{0.0106}{0.0106+0.0274} = 0.2789$
# $p(N) = \frac{0.0274}{0.0106+0.0274} = 0.7211$
Diagnosis N is much more likely than Diagnosis B
# Problem 2
# $Pr[Diagnosis = N|E] = \frac{2}{5} \times \frac{1}{5} \times \frac{3}{5} \times \frac{5}{14} = 0.0171$
# $Pr[Diagnosis = B|E] = \frac{3}{9} \times \frac{6}{9} \times \frac{3}{9} \times \frac{9}{14} = 0.0476$
# $p(N) = \frac{0.0171}{0.0171+0.0476} = 0.2643$
# $p(B) = \frac{0.0476}{0.0476+0.0171} = 0.7357$
Diagnosis B is much more likely than Diagnosis N
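A short Python sketch that reproduces the Problem 1 working above: multiply the per-attribute fractions and the class prior for each diagnosis, then normalise so the two scores sum to 1 (function and variable names are illustrative):

```python
from math import prod

def posterior(scores):
    """Normalise raw Naive Bayes scores into probabilities."""
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

# Problem 1: per-attribute likelihoods x class prior, taken from the working above
scores = {
    "N": prod([2/5, 2/5, 4/5, 3/5, 5/14]),   # = 0.02743
    "B": prod([3/9, 4/9, 3/9, 3/9, 9/14]),   # = 0.01058
}
# N ≈ 0.722, B ≈ 0.278 (0.7211 / 0.2789 above come from the rounded intermediate values)
print(posterior(scores))
```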
@@ -48,4 +53,5 @@ Diagnosis B is much more likely than Diagnosis N
# Problem 3
# $Pr[Diagnosis = N|E] = \frac{0}{5} \times \frac{2}{5} \times \frac{4}{5} \times \frac{3}{5} \times \frac{5}{14} = 0$
# $Pr[Diagnosis = B|E] = \frac{5}{9} \times \frac{4}{9} \times \frac{3}{9} \times \frac{3}{9} \times \frac{9}{14} = 0.018$

View File

@@ -274,4 +274,4 @@ Root mean squared error 0.3223
Relative absolute error 70.1487 %
Root relative squared error 68.0965 %
Total Number of Instances 3
```

View File

@@ -12,6 +12,7 @@
# $a_i = \frac{v_i - minv_i}{maxv_i - minv_i}$
Where:
- $a_i$ is normalised value for attribute $i$
- $v_i$ is the current value for attribute $i$
- $maxv_i$ is largest value of attribute $i$
@@ -19,8 +20,10 @@ Where:
## Example
# $maxv_{humidity} = 96$
# $minv_{humidity} = 65$
# $v_{humidity} = 80.5$
# $a_i = \frac{80.5-65}{96-65} = \frac{15.5}{31} = 0.5$
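A minimal Python sketch of this min-max normalisation, checked against the humidity example above:

```python
def normalise(v, v_min, v_max):
    """Min-max normalisation: a_i = (v_i - min v_i) / (max v_i - min v_i)."""
    return (v - v_min) / (v_max - v_min)

print(normalise(80.5, 65, 96))   # humidity example: 15.5 / 31 = 0.5
```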
@@ -28,8 +31,11 @@ Where:
## Example (Transport Dataset)
# $maxv_{doors} = 5$
# $minv_{doors} = 2$
# $v_{doors} = 3$
# $a_i = \frac{3-2}{5-2} = \frac{1}{3}$
# Nearest Neighbor Applied (Transport Dataset)
@@ -39,14 +45,18 @@ Where:
- Right most column shows euclidean distances between each vehicle and new vehicle
- New vehicle is closest to the 1st example, a taxi, NN predicts taxi
![](Pasted%20image%2020241010133818.png)
# $vmin_{doors} = 2$
# $vmax_{doors} = 5$
# $vmin_{seats} = 7$
# $vmax_{seats} = 65$
# Missing Values
## Missing Nominal Values
- Assume missing feature is maximally different from any other value
- Distance is:
@@ -72,7 +82,7 @@ Where:
- Number of seats of one example = 16
- Normalised = 9/58
- One missing
- 1 - 9/58 = 49/58
## Normalised Transport Data with Missing Values
@@ -85,13 +95,13 @@ Where:
## Euclidean Distance
# $\sqrt{(a_1-a_1')^2 + (a_2-a_2')^2 + \dots + (a_n-a_n')^2}$
Where $a$ and $a'$ are two examples with $n$ attributes and $a_i$ is the value of attribute $i$ for $a$
## Manhattan Distance
# $|a_1-a_1'|+|a_2-a_2'|+\dots+|a_n-a_n'|$
Vertical bar means absolute value
Negative becomes positive
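Both metrics translate directly into code. An illustrative Python sketch over already-normalised attribute vectors (the example vectors are made up):

```python
from math import sqrt

def euclidean(a, b):
    """Square root of the sum of squared attribute differences."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    """Sum of absolute attribute differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

a, b = [0.5, 1/3, 0.2], [0.0, 1.0, 0.2]   # two normalised examples (made up)
print(euclidean(a, b), manhattan(a, b))
```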
@@ -109,4 +119,3 @@ Euclidean distance is generally a good compromise
- Does not detect noise
- Use k-NN, get k closest examples and take majority vote on solutions
![](Pasted%20image%2020241011131542.png)

View File

@@ -1,18 +1,21 @@
![](Pasted%20image%2020241011131844.png)
## Normalisation Equation
# $a_i = \frac{v_i - minv_i}{maxv_i - minv_i}$
## Euclidean Distance Equation
# $\sqrt{(a_1-a_1')^2 + (a_2-a_2')^2 + \dots + (a_n-a_n')^2}$
# $vmax_{temp} = 85$
# $vmin_{temp} = 64$
# $a_{temp} = \frac{v_{temp} - 64}{21}$
# $vmax_{humidity} = 96$
# $vmin_{humidity} = 65$
# $a_{humidity} = \frac{v_{humidity} - 65}{31}$

View File

@@ -39,4 +39,4 @@ Root mean squared error 0.3409
Relative absolute error 90.9091 %
Root relative squared error 90.9091 %
Total Number of Instances 2
```

View File

@@ -18,6 +18,7 @@
## Selecting a Test
Goal: Maximise the probability of the desired class (see the sketch after the definitions below)
- $t$ = total number of examples covered by rule
- $p$ = number of positive examples of the class covered by rule
- $t - p$ = number of errors made by rule
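A minimal Python sketch of this selection step; the candidate tests and their (p, t) counts are made up, and the tie-break on larger p is the usual PRISM convention rather than something stated above:

```python
def best_test(candidates):
    """candidates: {test: (p, t)} where p = positives covered, t = total covered.
    Pick the test maximising p/t, breaking ties by larger coverage p."""
    return max(candidates,
               key=lambda test: (candidates[test][0] / candidates[test][1],
                                 candidates[test][0]))

candidates = {                      # hypothetical (p, t) counts for one class
    "astigmatism = Yes": (4, 12),
    "tear rate = Normal": (4, 12),
    "age = Young": (2, 8),
}
print(best_test(candidates))
```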
@@ -44,7 +45,7 @@ Stop Condition: $t-p=0$
![](Pasted%20image%2020241017131912.png)
#### Modified Rule and Its Coverage
- Rule with best test added: If astigmatism = Yes and tear rate = Normal then recommendation = Hard
![](Pasted%20image%2020241017132019.png)
@@ -92,4 +93,4 @@ For each class C
- Default Rule
- If no rules cover example, prediction is the majority class (most frequent in training data)
- Conflict Resolution Strategy
- If more than one rule covers an example, select predicted class with highest recurrence in training data

View File

@@ -41,6 +41,7 @@ IF Income = 0-15k THEN risk = high
| debt = high | 4/6 |
| debt = low | 2/4 |
| **collateral = none** | **6/6** |
IF Income = 0-15k AND collateral = none THEN risk = high
| Credit History | Debt | Collateral | Income | Risk |
@@ -59,5 +60,3 @@ IF Income = 0-15k AND collateral = none THEN risk = high
| credit history = unknown | 1/4 |
| debt = high | 2/4 |
| debt = low | 2/4 |

View File

@@ -1,6 +1,7 @@
# Logarithms
$\log_2 X$ used for generating decision trees
- Power to which we have to raise 2 to get X
- When using, X will be probability between 0 and 1
- log of probability is always negative
@@ -48,19 +49,23 @@ $log_2X$ used for generating decision trees
- Given a probability distribution, the info required to predict an event is the distribution's entropy
- Entropy gives the information required in bits
# $I(p_1,p_2,\dots,p_n)=-p_{1}\log_{2}p_1 -p_{2}\log_{2}p_2 - \dots -p_{n}\log_{2}p_n$
Where n = number of classes, and $p_1 + p_2 + \dots + p_{n} = 1$
Minus signs included since output must be positive
### Expected Information for Outlook
- Outlook = Sunny
# $info([2,3]) = I(\frac{2}{5},\frac{3}{5}) = -\frac{2}{5}\log_2(\frac{2}{5}) - \frac{3}{5}\log_2(\frac{3}{5}) = 0.971 bits$
- Outlook = Overcast
# $info([4,0]) = I(\frac{4}{4},\frac{0}{4}) = -1\log_2(1) -0\log_2(0) = 0 bits$
- Outlook = Rainy
# $info([3,2]) = I(\frac{3}{5},\frac{2}{5}) = -\frac{3}{5}\log_2(\frac{3}{5}) - \frac{2}{5}\log_2(\frac{2}{5}) = 0.971 bits$
The expected information for outlook is the weighted average of the three values: $\frac{5}{14} \times 0.971 + \frac{4}{14} \times 0 + \frac{5}{14} \times 0.971 = 0.693$ bits.
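A short Python sketch of the entropy formula, reproducing the three outlook values above and the 0.693-bit weighted average (treating 0 × log₂0 as 0):

```python
from math import log2

def info(counts):
    """Entropy in bits of a class-count distribution, treating 0*log2(0) as 0."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

print(info([2, 3]))   # outlook = sunny    -> 0.971 bits
print(info([4, 0]))   # outlook = overcast -> 0.0 bits
print(info([3, 2]))   # outlook = rainy    -> 0.971 bits

# expected information after splitting on outlook (weights 5/14, 4/14, 5/14)
splits = [[2, 3], [4, 0], [3, 2]]
total = sum(sum(s) for s in splits)
print(sum(sum(s) / total * info(s) for s in splits))   # ≈ 0.693 bits
```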
### Computing Information Gain

View File

@@ -1,2 +1 @@
![](Pasted%20image%2020241025132339.png)

View File

@@ -0,0 +1 @@

View File

@@ -1,21 +1,21 @@
1)
a) Binomial Distribution
b) It measures the dispersion of probabilities about the mean: for each possible value of S from 0 to N, it gives the probability of observing S correct predictions in a sample of N independent examples when the true accuracy is P
2)
a) (150 + 180 + 420) / (150 + 180 + 420 + 30 + 50 + 50 + 40 + 50 + 30) = 0.75
# Variance of S $\sigma^2_S = Np(1-p)$
# Std Dev of S $\sigma_S = \sqrt{Np(1-p)}$
# Std Dev of f $\sigma_f = \frac{\sigma_S}{N} = \sqrt{\frac{Np(1-p)}{N^2}} = \sqrt{\frac{p(1-p)}{N}}$
# Estimate of Predictive Accuracy $\mu_f = \frac{S}{N}$
# Successful Trials $S$
# Number of Trials $N$
750 Successes 1000 Trials
S = 750
N = 1000
$\mu_f$ = 0.75
$\sqrt{(0.75 \times 0.25)/1000} = 0.0137$
@@ -25,9 +25,7 @@ $\mu_f \pm z \times \sigma_f = 0.75 \pm (1.28 \times 0.0137)$
$= 0.75 \pm 0.0175$
p lies between 73.25% and 76.75%, with 80% confidence (a code sketch of this calculation appears after question 3).
3)
a)
Stratified Holdout, data split to guarantee same distribution of class values in training and test set
b)
Repeated Holdout, training and testing done several times with different splits. The overall estimate of predictive accuracy is the average of the predictive accuracies from the different iterations.
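Returning to question 2: a minimal Python sketch of the confidence-interval calculation above (z = 1.28 for 80% confidence):

```python
from math import sqrt

def accuracy_interval(successes, trials, z):
    """Normal-approximation confidence interval for predictive accuracy p."""
    f = successes / trials                 # estimate of predictive accuracy, mu_f
    sigma_f = sqrt(f * (1 - f) / trials)   # standard deviation of f
    return f - z * sigma_f, f + z * sigma_f

print(accuracy_interval(750, 1000, 1.28))  # ≈ (0.7325, 0.7675), i.e. 73.25%..76.75%
```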