What are the privacy concerns of using generative AI in Google Workspace and Microsoft 365?
Data Collection
Data Retention
Data Security
Model Leakage
Third-Party Access
Transparency and Consent
The problem is that the AI models embedded in major platforms such as Google Workspace and Microsoft 365, much like ChatGPT or Gemini, gather substantial amounts of personally identifiable information (PII). Users often have no clear view of how their sensitive data is stored, shared, or used across these AI-driven services, and that lack of transparency creates significant privacy risks, especially when confidential information is involved.
The challenge is to design AI solutions with robust privacy protections that offer transparency about data use while safeguarding sensitive information.
Why aren't Google and Microsoft fully adopting privacy-focused AI models?
Tech companies like Google and Microsoft rely heavily on data-driven advertising and personalized services as key components of their business models. These services require access to vast amounts of user data to build highly personalized experiences, and targeted advertising forms a significant portion of their revenue. This reliance on data collection creates a conflict when it comes to adopting privacy-focused AI models: privacy-first solutions limit the amount of user data available for analysis, advertising, and personalization, potentially affecting their core revenue streams.
We built a secure model
Encryption (at rest and in transit)
Data Retention
Federated Learning
Differential Privacy
Access Control and Auditing
Ethical Data Collection
Data Minimization
Zero-Knowledge Proofs (ZKP)
Sensitive data should be encrypted both at rest and in transit, so that even if it is intercepted it remains unreadable. Removing personally identifiable information (PII) before training AI models, and using decentralized techniques such as federated learning that keep raw data on local devices, minimizes privacy risks. Differential privacy adds calibrated noise to query results, protecting individual records while preserving useful aggregate patterns. Finally, access to models and data should be restricted through role-based access controls and robust authentication, with auditing to detect misuse. The short sketches below illustrate each of these techniques in turn.
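For at-rest encryption, here is a minimal sketch in Python, assuming the third-party `cryptography` package is installed; its `Fernet` recipe bundles key handling, AES encryption, and authentication behind one API. (In-transit protection normally comes from TLS at the connection layer rather than from application code.)

```python
# Minimal at-rest encryption sketch using the `cryptography` package
# (pip install cryptography). Fernet = AES-128-CBC + HMAC-SHA256.
from cryptography.fernet import Fernet

# In practice the key lives in a secrets manager or KMS, never in source code.
key = Fernet.generate_key()
fernet = Fernet(key)

plaintext = b"employee SSN: 000-00-0000"  # hypothetical sensitive record
ciphertext = fernet.encrypt(plaintext)    # unreadable if intercepted
assert fernet.decrypt(ciphertext) == plaintext
```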
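The "remove PII before training" step can be as simple as pattern-based redaction. Production pipelines typically pair patterns like these with NER models; the patterns below are illustrative and US-centric.

```python
import re

# Illustrative, US-centric patterns; real systems combine these with NER models.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each PII match with a typed placeholder before training."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or 555-123-4567."))
# -> "Contact Jane at [EMAIL] or [PHONE]."
```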
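Federated learning can be sketched just as compactly: each client computes an update on its own data, and only the model parameters, never the raw records, are sent back for averaging (the FedAvg scheme). This toy version fits a linear model with one gradient step per round; the data and hyperparameters are made up for illustration.

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    """One gradient step of linear least squares on a client's private data.
    Only the updated weights leave the device, never X or y."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_average(w, clients, rounds=50):
    """FedAvg: broadcast w, let each client update locally, then average
    the updates, weighting each client by its number of samples."""
    for _ in range(rounds):
        updates = [local_step(w, X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        w = np.average(updates, axis=0, weights=sizes)
    return w

# Two hypothetical clients whose raw data never leaves their "device".
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (30, 50):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))

print(federated_average(np.zeros(2), clients))  # approaches [2, -1]
```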
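Here is the standard way differential privacy "adds noise", the Laplace mechanism: a count query has sensitivity 1 (one person changes the count by at most 1), so adding noise drawn from Laplace(1/ε) gives ε-differential privacy. The salaries and the `epsilon` value below are illustrative.

```python
import numpy as np

def private_count(values, predicate, epsilon=0.5):
    """Differentially private count: true count + Laplace(sensitivity/epsilon)
    noise, where a count query has sensitivity 1."""
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical query: how many salaries exceed 100k, without exposing any record.
salaries = [90_000, 120_000, 75_000, 130_000, 110_000]
print(private_count(salaries, lambda s: s > 100_000))
```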
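Finally, role-based access control reduces to checking a user's role against the permissions an action requires before any model or data access is served, denying by default; the roles and permissions below are hypothetical.

```python
# Hypothetical role-based access control (RBAC) check for model and data access.
ROLE_PERMISSIONS = {
    "admin":          {"read_data", "train_model", "query_model"},
    "data_scientist": {"train_model", "query_model"},
    "analyst":        {"query_model"},
}

def authorize(role: str, action: str) -> bool:
    """Allow the action only if the role explicitly grants it (deny by default).
    In a real system every decision would also be written to an audit log."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert authorize("analyst", "query_model")
assert not authorize("analyst", "read_data")  # raw data stays restricted
```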




