Key Findings

Finding #1: The Tension Between Visibility and Safety


From Survey A, Survey B, and Interviews:

All segments of the target audience value visibility as a form of transparency necessary to achieving their missions, while recognizing, with varying degrees of awareness, that visibility can create vulnerabilities. This is a strategic trade-off that any organization managing its digital identity (brand) must weigh today, but it is also symptomatic of a “wicked”, systemic problem requiring dynamic, context-sensitive solutions.

Evidence:

  • Survey A: 37% display staff info because “we want to be visible and accessible”; 32% because “community expects to see who we are”
  • Survey B: 14% of individuals say “transparency is critical” (would remove nothing); yet 31.4% have experienced threats
  • Interviews: “You can be as safe as you can imagine, but you will be useless because you’re self-censoring” vs. “How do we protect people by not identifying ourselves too much while maintaining credibility”

The experience for both organizations and staff should be framed as anxiety and displeasure with the binary choices inherent in these trade-offs. The pattern that forms is that neither side wants binary choices: organizations want to practice selective disclosure (showing names and roles but hiding contact info), and staff want graduated responses (partial removal, not total anonymity).
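Selective disclosure can be modeled as per-field visibility rather than an all-or-nothing profile. A minimal sketch, assuming a hypothetical `StaffProfile` structure and `Visibility` levels (none of these names come from the research data):

```python
from dataclasses import dataclass, field
from enum import Enum

class Visibility(Enum):
    PUBLIC = "public"   # shown to everyone
    HIDDEN = "hidden"   # suppressed from public view, data retained

@dataclass
class StaffProfile:
    # each field carries its own visibility, enabling graduated responses
    fields: dict = field(default_factory=dict)      # field name -> value
    visibility: dict = field(default_factory=dict)  # field name -> Visibility

    def public_view(self) -> dict:
        """Return only the fields currently marked PUBLIC."""
        return {k: v for k, v in self.fields.items()
                if self.visibility.get(k) == Visibility.PUBLIC}

# Example: show name and role, hide direct contact info
profile = StaffProfile(
    fields={"name": "J. Doe", "role": "Program Lead", "email": "j@example.org"},
    visibility={"name": Visibility.PUBLIC, "role": Visibility.PUBLIC,
                "email": Visibility.HIDDEN},
)
print(profile.public_view())  # {'name': 'J. Doe', 'role': 'Program Lead'}
```

Because the hidden value is retained rather than deleted, the same structure supports restoring contact info once a threat window closes.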

Finding #2: Lack of Reporting Creates a Dangerous Information Gap


From Survey B and Interviews:

Approximately one-third of individuals who experience threats never tell their organizations. As a result, organizational threat modeling is based on incomplete data and, one can infer, systematically underestimates the threats staff actually face.

Why threats went unreported:

  • “Didn’t seem serious enough”
  • “Not sure what they could do”
  • “Didn’t want to be seen as overreacting”
  • “They were sympathetic but didn’t have tools to help”
  • Previous reports led to “discussed it but no formal action taken”

Organizational responses when threats were reported:

  • Positive (rare): “Security training and updated protocols” / “IT team helped immediately” / “Developed safety plan together”
  • Inadequate (common): “No formal process” / “Told to be more careful online” / “Sympathetic but no tools”

The impact of non-reporting is that organizations remove PII or attributions only when facing direct threats (72.7%), and removal is presumably done in inconsistent ways. This reactive approach means threats go unreported by staff and never surface to organizational awareness. The problem perpetuates a gap between an organization’s intent toward safety and its staff’s individual experience, clearly undermining trust and often leaving staff feeling vulnerable.

Finding #3: Current Processes Cannot Meet Speed Requirements


From Survey A, Survey B, and Interviews:

Organizations’ manual processes (hours to days) fundamentally cannot meet individuals’ protection needs (minutes to hours). This speed gap is a safety failure, not just an inconvenience.

The urgency/speed data:

  • Organizations: Manual editing “across multiple pages,” no clear process, takes “hours or days”
  • Individuals: 33% need ≤30-minute response; 48% need same-day; 88% care about speed
  • Interviews: Threats “propagate in minutes,” experienced as “rapid escalations” during “critical threat windows”

The real-world consequences are that by the time manual processes complete, threats may have escalated, information has been cached/archived, and harm has occurred or is already underway. Current tools force organizations to choose between inadequate slow response or inadequate permanent deletion.

Finding #4: Organizations Default to Permanent Removal, Individuals Want Reversibility


From Survey A and Survey B:

A striking finding reveals fundamental tool inadequacy: 0% of organizations temporarily hide information. When they act, removal is permanent. Yet 72% of individuals prefer partial, nuanced protection rather than total removal of their PII or site attributions.

The permanence trap:

  • Organizations: 25% permanently removed content; 0% temporarily hidden; 38% never considered removal
  • Individuals: 11% want complete anonymity; 72% want partial removal; 14% want nothing removed

Why organizations default to permanent:

  • Lack confidence in securing “hidden” content
  • No tools exist for easy, reversible privacy
  • Threats are perceived as permanent enough to require permanent solutions
  • Fear that restoration “undoes safety”

This is the paradox: organizations resort to the chainsaw (permanent deletion) when what they need is a scalpel (targeted, temporary protection). Individuals want visibility to return when threats subside, but no mechanism exists for that.
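The “scalpel” could be as simple as a reversible hide: the value is moved out of public view but retained, so visibility can return when the threat subsides. A minimal sketch under assumed names (the surveys describe no such tool; `ReversibleDirectory` is illustrative):

```python
class ReversibleDirectory:
    """Hide entries without deleting them, so visibility can be restored."""

    def __init__(self, entries: dict):
        self.entries = dict(entries)  # currently visible
        self._vault = {}              # hidden but retained

    def hide(self, key: str) -> None:
        """Temporarily remove an entry from public view."""
        if key in self.entries:
            self._vault[key] = self.entries.pop(key)

    def restore(self, key: str) -> None:
        """Bring a hidden entry back once the threat subsides."""
        if key in self._vault:
            self.entries[key] = self._vault.pop(key)

directory = ReversibleDirectory({"phone": "555-0100", "bio": "Program Lead"})
directory.hide("phone")      # during a critical threat window
assert "phone" not in directory.entries
directory.restore("phone")   # after the threat subsides
assert directory.entries["phone"] == "555-0100"
```

The design choice worth noting: `hide` never destroys data, which directly addresses the fear that restoration “undoes safety” by making hiding and restoring symmetric, auditable operations.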

Finding #5: Responsibility for Threat Response Is Fragmented

From Survey A and Survey B:

The organizations described in the surveys tend to distribute responsibility across multiple roles. Presumably, this complicates:

  1. The desire of staff to collaborate within their organization to prevent threats to their person and community.
  2. The chain of authority/responsibility in the removal (addition) of PII or attribution.

This pattern is almost certainly indicative of the types and sizes of organizations represented in the sample. Still, it suggests more broadly that effective threat response depends on small organizations and teams providing greater clarity, as the following data shows.

Organizational fragmentation:

  • 25% have a dedicated security officer/team
  • 25% have no formal process at all
  • Responsibility can be scattered across 6+ different roles (IT, ED, Communications, Leadership, Individual staff)

Expectations & Individual preferences:

  • 65% want some form of notification or approval (collaborative approach)
  • 32% “want input but org can act quickly if needed”
  • 9% want to personally approve any changes
  • 0% want organization to handle entirely (no autonomy abdication)

All of this matters because, during an incident requiring rapid removal of PII, critical operational questions emerge:

  • Who has authority to activate removal(s)?
  • Who has technical permissions?
  • Who can approve quickly?
  • Who communicates with staff?
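These questions can be answered in advance by writing role assignments down as data rather than resolving them ad hoc mid-incident. A minimal sketch (the role titles and action names are illustrative assumptions, not drawn from the surveys):

```python
# Incident roles declared up front, so authority is not debated mid-incident.
INCIDENT_ROLES = {
    "activate_removal": "Executive Director",   # authority to activate
    "technical_change": "IT Lead",              # technical permissions
    "rapid_approval":   "Communications Lead",  # quick approvals
    "staff_liaison":    "Program Manager",      # communicates with staff
}

def who_handles(action: str) -> str:
    """Look up the pre-agreed owner of an incident action."""
    try:
        return INCIDENT_ROLES[action]
    except KeyError:
        raise ValueError(f"No owner assigned for action: {action!r}")

print(who_handles("activate_removal"))  # Executive Director
```

Even a table this small forces an organization to notice unassigned actions before an incident, rather than during one.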

There may also be considerable misalignment between organizational and staff goals in the sample: organizations assign responsibility to individual staff, yet, as the preference data above shows, staff want collaboration to resolve their threats and incidents. Neither side wants complete autonomy, but the current ad-hoc approaches satisfy no one at small organizations and teams.

Finding #6: Threats Exist on a Maturity Continuum


From Interviews and Survey A:

Organizations exist along a continuum of threat response maturity, from basic awareness to advanced expertise. A responsive transparency approach should strive to adequately serve this spectrum of experience and not assume uniformity or sophistication. Some teams will need orientation/onboarding to develop an understanding of the fundamentals of digital safety and privacy, while others need detailed documentation of a technological solution to optimize integration into their systems for coordinated responses.

The spectrum:

  • Beginners (25%): no formal process, need education and templates
  • Intermediate (50%): some formal processes but inconsistent, need efficient tools and playbooks
  • Advanced (25%): dedicated security teams, need API access and integration

Implications:

  • Can’t assume technical capacity (62% lack dedicated IT staff)
  • Can’t assume threat literacy (50% assess only during incidents)
  • Can’t assume process maturity (25% have no formal process)

Finding #7: Threats are Contextual, Evolving, and Collective


From Survey A, Survey B, and Interviews:

Threats vary by geography, sector, and time, and extend beyond individual staff members to their families, professional colleagues, partner organizations, and communities. Protection and prevention should therefore be ecosystem-aware, not just individual-focused (where “individual” means both an individual organization and individual staff).

Regional variation:

  • Violence in Kenya demands a different response than online harassment elsewhere.
  • Iran/Russia face unique government surveillance risks.
  • Open-carry states in the US create new physical security considerations.

Periodic/temporal factors:

  • Election seasons spike threat levels (mentioned across sources)
  • Short-term incidents vs. long-term strategic campaigns
  • Unexpected virality of ad-hoc online attacks

Emerging threats:

  • AI-generated deepfakes and clones, transforming media (especially photo) exposure decisions
  • Sophisticated doxxing evolution
  • Archive persistence (archive.org, caches) as ongoing vulnerability

Collective scope:

  • 64% of organizations act when partner attacked
  • Individuals worry about family safety: “threats to self/family”
  • Interview insight: “Scope of protection reaches beyond individual or organization… sharing the load of danger [is important to us]”

Finding #8: Knowledge Gaps Paralyze Decision-Making


From Survey A, Survey B, and Interviews:

Even technically capable organizations and tech-savvy staff struggle to respond to digital threats, most notably because of knowledge gaps in making and implementing threat assessments (technical debt) and a lack of decision-making frameworks (i.e., playbooks).

What small to mid-sized organizations don’t know:

  • What constitutes “high-risk data” (varied, uncertain responses)
  • When to take a proactive stance vs. a reactive one
  • How to verify tool effectiveness (“false sense of security is worse than no tool”)
  • How to manage archived/cached content

What individuals at small to mid-sized organizations don’t know:

  • What information is currently visible about them (10% “not sure”)
  • How much to remove during threats (43% couldn’t answer)
  • When protection should activate (41% couldn’t answer)
  • What their organization can/will do (implicit throughout)

From interviews: “A lot of civil society folks, I don’t think [put] threat assessment at the center of the organization” / “There was probably someone who can help” (but unclear who).
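One lightweight remedy for the “what constitutes high-risk data” gap is to encode the answer as an explicit, reviewable table instead of leaving it to judgment calls under pressure. A minimal sketch of such a playbook fragment (the tiers and field names are illustrative assumptions):

```python
# A reviewable risk classification, standing in for an ad-hoc judgment call.
RISK_TIERS = {
    "home_address":    "high",    # enables physical targeting
    "personal_email":  "high",
    "photo":           "medium",  # deepfake / identification risk
    "full_name":       "medium",
    "role_title":      "low",
    "org_affiliation": "low",
}

def fields_at_or_above(tier: str) -> list[str]:
    """List the fields whose risk meets or exceeds the given tier."""
    order = {"low": 0, "medium": 1, "high": 2}
    return [f for f, t in RISK_TIERS.items() if order[t] >= order[tier]]

print(fields_at_or_above("medium"))
# ['home_address', 'personal_email', 'photo', 'full_name']
```

A table like this doubles as documentation: beginners get a template to adapt, and more mature teams can argue about, and version, the classifications themselves.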