Our AI Policy

Updated in 2026 from our original 2024 post

Our 2024 AI statement outlined how we thoughtfully integrate emerging technology into our workflow. Since then, both AI systems and client expectations have evolved. This updated policy clarifies how we use AI responsibly — with strengthened standards around accuracy, privacy, oversight, and accountability.

AI supports execution.

Strategy, judgment, and responsibility remain human.


Our Core Standard

Technology may assist production.
It does not replace thinking.

Every deliverable we provide is:

  • Strategically directed by a human
  • Reviewed for factual accuracy
  • Evaluated for brand alignment
  • Approved before delivery

We assume full responsibility for the final outcome — regardless of which tools were used during development.

AI is part of our internal infrastructure. It is not the author of your strategy.


How AI Supports Our Process

AI may assist in structured or early-stage workflow tasks, including:

  • Draft organization and restructuring
  • Research summaries
  • SEO analysis and keyword clustering
  • Content outlining
  • Brainstorming variations
  • Internal documentation support
  • Workflow automation
  • Early-stage visual concept exploration (never final design creation)

AI is used to reduce repetitive production tasks so we can invest more time in strategic clarity, positioning, refinement, and creative leadership.

AI does not replace discovery interviews, strategic direction, or decision-making.


Human Oversight & Verification

All work — whether AI-assisted or not — is held to the same standards.

We:

  • Verify factual claims before delivery
  • Confirm time-sensitive data
  • Review technical statements for correctness
  • Separate interpretation from confirmed fact
  • Ensure alignment with your brand voice and objectives

Accuracy takes priority over speed. If information cannot be confidently verified, it is clarified or excluded.


Privacy & Sensitive Data Handling

We maintain disciplined data stewardship practices.

  • Protected health information (PHI) is not entered into generative AI tools.
  • Confidential contracts and sensitive legal documentation are not submitted to public AI systems.
  • Sensitive client data is handled manually or within secure, appropriate platforms.
  • Meeting transcription tools may assist with note-taking, but confidential segments are reviewed and handled manually.

Data protection is process-driven, not tool-dependent.


Compliance Awareness

Where applicable, we consider:

  • HIPAA requirements
  • GDPR considerations
  • ADA accessibility standards
  • Copyright and intellectual property protections

We do not assume any tool is inherently compliant. Compliance is achieved through internal controls, review procedures, and disciplined workflow design.


Editorial Control, Bias & Accessibility Review

AI-generated drafts are never delivered without review.

All work is evaluated for:

  • Inclusivity and bias
  • Cultural sensitivity
  • Accessibility alignment
  • Copyright compliance
  • Professional appropriateness

Editorial authority remains human.


Transparency

We are open about our use of AI as part of our internal workflow.

AI improves efficiency and provides structural support.
It does not replace expertise, accountability, or craftsmanship.

We do not use AI to reduce diligence. We use it to remove friction.


Tool Evaluation & Ongoing Governance

AI systems evolve rapidly. Responsible use requires ongoing oversight.

We continuously evaluate tools based on:

  • Security posture
  • Data handling policies
  • Reliability
  • Suitability for client industries

This policy is reviewed regularly, with formal reassessment at least twice per year, and adjusted as standards evolve.


In Summary

AI is a tool within our studio.

Strategy is human.
Judgment is human.
Responsibility is ours.