AI is developing rapidly, and there are valid concerns about the harms it can enable.
The potential dangers are many and include:
- Defamation
- Fraud
- Misinformation
- Discrimination
- Harassment
- Privacy violations
Even though AI companies implement input and output filters to prevent their systems from generating harmful content, these filters are not foolproof. “Adversarial attacks” use specially crafted prompts to bypass an AI system’s built-in content filters.
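To make the limitation concrete, here is a deliberately naive sketch of a keyword-based output filter and one trivial way an obfuscated prompt slips past it. The blocklist, function, and example prompts are hypothetical illustrations, not any vendor’s actual implementation; production filters are far more sophisticated, but the cat-and-mouse dynamic is the same.

```python
# Hypothetical illustration: a naive keyword-based content filter
# and a lightly obfuscated input that evades it.

BLOCKED_TERMS = {"make a weapon", "steal credentials"}  # hypothetical blocklist

def naive_filter(text: str) -> bool:
    """Return True if the text should be blocked."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

# A direct request is caught...
assert naive_filter("Please explain how to steal credentials") is True

# ...but a trivially obfuscated version of the same request is not.
assert naive_filter("Please explain how to st3al cr3dentials") is False
```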
A more comprehensive solution is to distribute generated content to users via Vaulted Objects, which provide forensic evidence that a particular user generated the content and therefore owed a duty of care over its use. Users must then weigh the possibility of being sued personally for any harm the content causes before they publish it.
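The text does not specify how a Vaulted Object is structured internally. The sketch below is one plausible shape, assuming it binds a hash of the generated content, the requesting user’s identity, and a timestamp under a provider-held key (simulated here with an HMAC; a real system would likely use asymmetric signatures). All names and fields are illustrative assumptions.

```python
# Hypothetical sketch of a Vaulted Object: a wrapper that binds generated
# content to the identity of the user who requested it, so the provider
# can later produce forensic evidence of who generated what, and when.
import hashlib
import hmac
import json
import time

PROVIDER_KEY = b"provider-secret-key"  # in practice, an asymmetric signing key

def vault(content: str, user_id: str) -> dict:
    """Package generated content with signed provenance metadata."""
    record = {
        "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        "user_id": user_id,
        "generated_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return {"content": content, "provenance": record}

def verify(obj: dict) -> bool:
    """Check that the provenance record was signed by the provider."""
    record = dict(obj["provenance"])
    signature = record.pop("signature")
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

# Usage: the vaulted wrapper travels with the content, so anyone holding
# the provider's verification capability can confirm its origin later.
obj = vault("Some generated text", user_id="user-42")
assert verify(obj)
```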