vault backup: 2025-03-16 18:59:42

boris
2025-03-16 18:59:42 +00:00
parent 6befcc90d4
commit ae837183f1
188 changed files with 17794 additions and 409 deletions


@@ -8,4 +8,4 @@
2. Search engines recognise language
3. Telephone menus recognise language and can "hear"
4. Dynamic routing algorithms utilise machine learning
7. Following the principles defined by the Turing test, AI can be classified as both science and engineering: the concepts and programming are scientific, while robotics and hardware are engineering.


@@ -134,7 +134,7 @@ Limited rationality means acting appropriately when there is not enough time to
Perfect rationality remains a good starting point for theoretical analysis.
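Limited rationality has a natural computational reading. A minimal sketch, assuming a toy heuristic `evaluate` and illustrative names not taken from these notes: an anytime agent deepens its search only while a time budget lasts, then acts on the best answer found so far rather than computing the perfectly rational action.

```python
import random
import time

def evaluate(state: int, move: int, depth: int) -> float:
    """Stand-in heuristic: deeper search yields a less noisy estimate."""
    return move + random.random() / depth

def choose_move(state: int, moves: list[int], budget_s: float = 0.01) -> int:
    """Anytime decision: improve the answer only while time remains."""
    deadline = time.monotonic() + budget_s
    best, depth = moves[0], 1
    while time.monotonic() < deadline:
        # Re-search at increasing depth; keep the last completed answer.
        best = max(moves, key=lambda m: evaluate(state, m, depth))
        depth += 1
    return best  # acting appropriately under time pressure, not optimally

print(choose_move(state=0, moves=[1, 2, 3]))
```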
#### Value Alignment Problem
A problem with perfect rationality is that it assumes a fully specified objective has been given to the machine.
Artificially defined problems, such as chess, come with a built-in objective.
@@ -147,7 +147,8 @@ Problem of achieving this is called the value alignment problem
### Bad Behaviour
If a machine is intelligent enough to reason and act, it may attempt to increase its chances of winning by immoral means:
- Blackmail
- Bribery
- Grabbing additional computing resources for itself
@@ -157,5 +158,5 @@ These behaviours are rational, and are logical for success, however immoral.
We do not want machines that are intelligent in the sense of pursuing their objectives.
We want them to pursue our objectives.
If we cannot transfer our objectives perfectly to the machine, we need the machine to know that it does not know the complete objective, and to have the incentive to act cautiously, ask for permission, learn about our preferences through observation, and defer to human control.
We want agents that are provably beneficial to humans.
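One way to make the "knows it does not know" idea concrete, as a minimal sketch with illustrative names (nothing here is from the notes or a specific library): the machine holds several candidate objectives, acts autonomously only when they agree on the best action, and otherwise defers and asks a human for permission.

```python
def cautious_act(actions, candidate_rewards, ask_human):
    """Act only when all hypotheses about the objective agree; else defer."""
    # Best action under each candidate reward function.
    choices = {max(actions, key=r) for r in candidate_rewards}
    if len(choices) == 1:
        return choices.pop()           # hypotheses agree: act autonomously
    return ask_human(sorted(choices))  # hypotheses disagree: defer to control

# Usage: the two candidate objectives disagree, so the agent asks.
actions = ["brew_coffee", "unplug_server"]
rewards = [lambda a: 1.0 if a == "brew_coffee" else 0.0,
           lambda a: 1.0 if a == "unplug_server" else 0.0]
print(cautious_act(actions, rewards, ask_human=lambda opts: opts[0]))
```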

BIN
AI & Data Mining/Week 18/test.pdf Executable file
