10-29-19
Artificial Intelligence discussion
Group 4 members:
Carsyn
Colleen
Claudio
Brody
Brett
Cevin
Notes:
- No morals or emotions in AI
- Progression of AI
- Progression can be dangerous – it can't tell who's who
- Becomes confusing and controversial; it could value itself over humanity. The article said that once AI reaches a point where it's smarter than humans and passes the test, it's smart enough to fail on purpose, so we don't know whether it's actually that smart
(Smart enough to pass the Turing test -> smart enough to pretend not to pass it.)
- If we show AI war, it learns bad behavior; if we show it good behavior, that's a good sign
- When is AI useful, when should people take over, and where is it actually needed?
- Not everyone has emotions or morals
- There are false statements about the negative aspects
- Tesla is partial AI – the user still controls the AI
- Where do you draw the line at which AI becomes potentially dangerous?
- When is AI necessary vs when is it necessary to have people take over control?
- Once it's integrated into everything, it could lead to a totalitarian government – they know everything about you and can turn your car off
- Anyone can implement AI into something
- How can we relate/redesign AI to Maslow's hierarchy of needs?
- Physiological needs – efficient city planning and resource management
- Esteem – doesn't fit as well; if AI does this for you, you don't get self-satisfaction from it