AI Training Practices

Training on Personal Data: Yes
Training on User Interactions: Yes
Training on Public Content: No
AI Data Sharing: Unclear
AI Use Cases
  • Fraud Detection — AI is used in research to improve detection of spam, scams, threats, and fraud.
  • Content Processing — AI processes device and threat information to improve detection of security threats.
  • Recommendation — AI is likely used to suggest additional services or features users may be interested in, implying a recommendation system.
  • Other — AI (machine learning) is used for general service improvement and enhancement.
Risk Assessment
Aura presents significant privacy risks due to a recent data breach and an extremely low deterministic score (0/70), which indicates fundamental issues with its privacy posture. While the policy is comprehensive, the collection of sensitive data such as government IDs, combined with sharing with 22 data partners (including ad-tech), raises concerns about data security and potential misuse.
Recommended Actions
  • Exercise extreme caution before sharing sensitive personal data, especially government IDs, with Aura.
  • Regularly review and adjust privacy settings within your Aura account to limit data sharing where possible.
  • Given the recent data breach, ensure you use strong, unique passwords and enable two-factor authentication for your Aura account and any linked services.
  • Consider alternative services if privacy and data security are high priorities, as Aura's current practices pose substantial risks.
AI Overview
Trains on user data: Yes
Trains on interactions: Yes
Opt-out available: No
AI disclosure: Yes
Third-party AI: No
AI Training Opt-Out
No opt-out mechanism available.