Trusted Local News

Chris Surdak of CA Discusses the Transparency Toll Booth: Chris Surdak’s Take on AI Regulation and Privacy

  • News from our partners


Chris Surdak of CA, a thought leader in AI, privacy, and security, has long examined the intersection of regulatory oversight and technological advancement. As the use of Artificial Intelligence (AI) accelerates across industries, regulatory bodies are taking note. On January 9, 2024, the U.S. Federal Trade Commission (FTC) issued a pointed directive titled “AI Companies: Uphold Your Privacy and Confidentiality Commitments.” While this statement may initially seem to be a routine regulatory notice, it carries profound implications for AI providers, particularly those offering Model as a Service (MaaS) platforms.


At its core, the FTC’s message is clear: AI providers are not exempt from privacy and security regulations, despite the fervor surrounding Generative AI (GAI). Christopher Surdak of CA emphasizes that this notice should serve as a wake-up call for both AI providers and their customers—those who integrate these platforms into their businesses. The implicit warning is that misleading claims or vague policies regarding data usage, security, and privacy commitments may be classified as regulatory violations.


To navigate this evolving regulatory landscape, businesses must critically assess how their AI providers manage data. Christopher Surdak of CA identifies four key areas that demand immediate attention:


1. Data Collection: What Is Being Captured?


MaaS providers do more than simply collect training data—they also amass vast amounts of metadata on user behavior. This behavioral data, often overlooked by users, is immensely valuable. It informs marketing strategies, customer profiling, AI model optimization, and even competitive intelligence.


Chris Surdak of CA stresses that organizations should demand full transparency from AI providers regarding:

  • The specific types of data being collected.
  • The purpose of collecting such data.
  • Who within the organization has access to it.
  • Whether it is resold or shared with third parties.


Many businesses assume that their interactions with AI platforms are purely transactional, when in reality, their data footprints are deeply embedded within the AI ecosystem. Without explicit disclosure, companies may unknowingly provide proprietary insights to their AI providers, who then use this information to refine their own models—often with little to no benefit for the original data owner.


2. Data Usage: Where Does the Data Go?


A significant regulatory concern is how customer data is utilized beyond its intended purpose. Historically, many AI providers have employed broad, ambiguous language in their terms of service, allowing them to repurpose data in ways that benefit their own enterprises. The FTC’s recent statement signals a shift—regulators now expect AI companies to be explicit about whether user data contributes to ongoing AI training and model refinement.


According to Christopher Surdak of CA, businesses must demand clear, specific language regarding data usage policies, including:

  • Whether user interactions contribute to AI training.
  • If anonymized data is leveraged for commercial gain.
  • How long the data is stored and whether it is ever purged.


This level of scrutiny will likely prompt AI providers to reassess their data policies. Transparency is no longer optional—it is becoming a regulatory necessity.


3. Jurisdiction: The Challenges of Global AI Governance


AI providers operate on a global scale, often managing data across multiple jurisdictions with vastly different privacy laws. The complexity of ensuring compliance across various regulatory frameworks—such as the European Union’s GDPR, California’s CCPA, and China’s PIPL—makes it difficult for AI platforms to maintain consistent policies worldwide.


Chris Surdak of CA warns that businesses relying on AI must closely examine their providers’ jurisdictional policies. Many MaaS providers process data in different locations, and this can introduce significant compliance risks. Key considerations include:


  • Where AI processing occurs and which jurisdictions govern that data.
  • Whether local data protection laws align with corporate compliance standards.
  • How AI platforms ensure regulatory adherence in real time.


Organizations must recognize that jurisdictional ambiguities could leave them exposed to legal challenges, particularly when working with AI tools that cross international boundaries.


4. Managing Consent: The Emerging Challenges of AI Accountability


Consent is the cornerstone of modern data privacy regulation. Over the past decade, regulators have strengthened consumer protections, requiring organizations to obtain explicit, informed consent before using personal data.


Chris Surdak of CA points out that Generative AI presents a particularly thorny issue in consent management. AI models often function as "black boxes," making it difficult to establish clear chains of custody and consent. Because these models transform input data in ways that are not always traceable, users may have little visibility into how their data is ultimately repurposed. To address this effectively, AI providers must develop governance mechanisms that:

  • Maintain a clear chain of custody for all user inputs.
  • Offer explicit opt-in and opt-out controls for AI-driven data processing.
  • Ensure that changes to data policies are communicated transparently.


Failing to manage consent properly could lead to regulatory penalties and reputational damage. Christopher Surdak of CA suggests that organizations should proactively seek out AI providers that prioritize ethical data stewardship over those that merely meet minimum regulatory requirements.


Balancing Innovation with Accountability


The rise of Generative AI has brought about an unprecedented level of innovation, but with that innovation comes heightened responsibility. As regulatory bodies like the FTC step in to enforce stricter compliance measures, AI providers must adapt to this new era of transparency.


Chris Surdak of CA asserts that the businesses best positioned for success will be those that embrace privacy and security as competitive advantages rather than burdens. Companies integrating AI into their operations must hold their providers accountable, ensuring that data governance practices align with both regulatory expectations and ethical business practices.


Ultimately, AI’s future will be shaped not only by technological advancements but also by the frameworks put in place to safeguard user trust. As the regulatory landscape evolves, organizations that prioritize transparency, accountability, and ethical AI deployment will be the ones that thrive.


Chris Surdak remains at the forefront of this discussion, urging businesses to recognize the "Transparency Toll Booth" ahead. Those who ignore it risk costly regulatory roadblocks—while those who plan wisely will find themselves navigating the AI revolution with confidence.

Author: Chris Bates


