
The policy does not explicitly use terms such as 'AI', 'Machine Learning', or 'Generative AI'. However, it mentions an 'automated program' in the context of CAPTCHA and 'automated processes' in the context of user rights regarding automated decision-making, indicating that automated systems, which may involve AI/ML, are used without explicit disclosure of the underlying technology.
We use CAPTCHA across our applications to mitigate brute force logins and as a means of spam protection.
The CAPTCHA service evaluates various information (e.g., IP address, how long the visitor has been on the app, mouse movements) to try to detect if the activity is from an automated program instead of a human.
You have the right to object to and prevent any decision that could have a legal or similarly significant effect on you from being made solely based on automated processes.
Automated systems (CAPTCHA) are used to mitigate brute-force logins and to protect against spam by detecting activity from automated programs.
The policy acknowledges the user's right to object to decisions made solely based on automated processes that could have a legal or similarly significant effect, implying that such automated decisions might occur.
The policy explicitly mentions a 'Right to not Be Subject to Automated Decision-Making' for decisions that could have a 'legal or similarly significant effect' on the user. This indicates a potential for high impact, even though no specific instances of such decisions are detailed.
The policy explicitly describes a 'CAPTCHA service' that evaluates user information to detect automated programs. The service provides only its results to 37signals, which does not have access to the evaluated information, indicating that a third party performs the automated analysis.
The policy does not mention training any AI or machine learning models with personal data.
The policy does not mention training any AI or machine learning models with user interactions.
The policy does not mention training any AI or machine learning models with public content.
The policy does not mention sharing data specifically for AI training purposes.
Basecamp demonstrates strong privacy practices, notably committing never to sell user data and providing comprehensive support for GDPR and CCPA rights. While it shares data with a limited number of third-party partners essential to service operation, this sharing is clearly outlined. The primary area for improvement is the lack of a self-service data-request form.
Use the provided privacy email (jason@basecamp.com) for any data requests or privacy inquiries.
Regularly review your account settings within Basecamp for available privacy controls and data management options.
Be aware that full data removal after account cancellation or content deletion may take 60-90 days.
Familiarize yourself with their full privacy policy to understand specific data collection and sharing practices.