
Survey A: Organizational Assessment

The following synthesizes findings from Survey A: Organizational Assessment, designed to understand how civil society organizations (CSOs) and non-profit organizations (NPOs) manage staff information online and their needs for privacy protection tools.

The survey featured 17 structured questions, using a mix of multiple-choice and single-choice formats, along with 7 open-ended questions for more detailed feedback. It was organized into sections assessing organizational perspectives on safety and privacy: the technical and practical barriers facing the respondent’s organization, collective decision-making, and the operational/systemic details relevant to a potential implementation of a technical solution, like the “privacy-mode” website plugin.

  • Section 1 - Intro: establish frame and context
  • Section 2 - Profile: segmentation and continued context setting
  • Section 3 - Attribution: records current organizational state
  • Section 4 - Threats: threat/trigger identification
  • Section 5 - Technical Environment: feasibility and potential implementation
  • Section 6 - Open Feedback: capture unconstrained insights
  • Section 7 - Final: engagement and network-building

The survey was aimed at small-to-medium NGOs, specifically those with 5 to 50 staff working in areas like human rights, journalism, and advocacy. We consider the results and the effort exploratory. In the end, 11 different organizations responded. The sample is too small to support broad statistical conclusions, but the responses themselves offered detailed feedback. So, even though the survey cannot claim large-scale representativeness, it does offer valuable and nuanced qualitative insights.

Responses were analyzed using:

  • Basic statistics to summarize closed-ended questions
  • Frequency distributions and percentages
  • Cross-tabulation analysis to explore relationships between groups/variables
  • Thematic coding for open-ended questions/responses

Information Visibility

Information type | Count | Percentage | Rank
Full names | 6 | 21% | 1
Photos/headshots | 6 | 21% | 1
Job titles/roles | 6 | 21% | 1
Biographical information | 0 | 0% | 5
Board member names | 0 | 0% | 5
Email addresses | 4 | 14% | 2
Office locations | 0 | 0% | 5
Social media links | 1 | 3% | 4
Phone numbers | 4 | 14% | 2
Volunteer information | 0 | 0% | 5
None - don’t list individuals | 2 | 7% | 3

The survey shows that both CSOs and NPOs tend to display a tiered information profile:

High-visibility items (displayed by 21% of orgs):

  • Full names
  • Photos/headshots
  • Job titles/roles

Medium-visibility items (14%):

  • Email addresses
  • Phone numbers

Almost never displayed (0-3%):

  • Social media links
  • Volunteer information
  • Office locations

A minority of our sample (7%) deliberately chooses not to list any individuals on their websites. This stands out as the strictest privacy practice in the sample: completely avoiding public staff attribution online. The implication for Privacy Mode is that for some organizations, even “privacy modes” or redaction tools are less relevant than a default posture of zero individual exposure. Privacy Mode should accommodate this use case, perhaps with a setting to easily “remove all individual data” or similar support for easy adoption of non-attributive site templates.
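As a sketch of what a “remove all individual data” setting could look like, assuming page content is stored as key/value fields (the field names below are hypothetical, not a confirmed Privacy Mode schema):

```python
# Hypothetical person-level fields to strip when the setting is enabled.
INDIVIDUAL_FIELDS = {"name", "photo", "title", "email", "phone", "bio"}

def strip_individual_data(page: dict) -> dict:
    """Return a copy of the page with every person-level field removed,
    leaving organizational fields (mission, programs, etc.) intact."""
    return {k: v for k, v in page.items() if k not in INDIVIDUAL_FIELDS}
```

For example, `strip_individual_data({"org_name": "Example NGO", "name": "Jane Doe"})` would keep only the `org_name` field, yielding a non-attributive version of the page.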

A key insight here is that Privacy Mode’s target audience may already be practicing selective disclosure. They display identity-establishing information (names, faces, roles) but withhold contact vectors (email, phone), which can suggest an awareness of harassment risks via direct contact. Conversely, it may also indicate less awareness of how identity information alone can enable doxxing, social engineering, and targeted campaigns.

Since most respondents do share identity-establishing information, the pattern of reasons for disclosure or use of attributions appears to be credibility-driven. In fact, our sample most frequently cites visibility expectations, the desire to be “…visible and accessible”, as the primary reason for sharing staff information. An assumption worth validating follows: while it is important for organizations to share information that establishes trust and legitimacy (bios, governance), they also have a duty to secure and maintain their operations and their assets (no contacts, or locations). If this assumption holds, understanding how to make the right kind of (brand) promise to CSOs and NPOs, while alleviating the tension the assumption alludes to, could be fertile ground for future research and for developing a content strategy and/or branding.

Disclosure Distribution

Reason | Count | Percentage | Rank
We want to be visible and accessible | 7 | 37% | 1
Community expects to see who we are | 6 | 32% | 2
*Other | 3 | 16% | 3
Professional standards (e.g., journalism bylines) | 2 | 11% | 4
Legal/regulatory requirements | 1 | 5% | 5
Funder requirements | 0 | 0% | 6
No formal reason… | 0 | 0% | 6
Not sure | 0 | 0% | 6

But the striking finding is the near-total absence of external mandates: zero organizations reference funder requirements, and just one points to legal requirements. The sample begins to challenge the assumption that policy or funder rules drive transparency. It could be worth exploring the degree to which that visibility is self-imposed and a reflection of organizational values (“We want to be visible and accessible”, 37%), or shaped by perceived and/or real community expectations (“Community expects to see who we are”, 32%). Either way, the pressure of community expectations around staff exposure appears consistent regardless of how much information is actually displayed.

Organizations displaying the “basic trio” (names + photos + titles) cite “community expects to see us” at nearly the same rate as those displaying less information. Expectation pressure is uniform regardless of actual exposure level. Therefore, social norms and organizational values, rather than external compliance or rational threat analysis, may drive public disclosure decisions. As such, Privacy Mode strategies should help organizations recalibrate these pressures, showing that visibility can be conditional and responsive rather than static or permanent.

Information Removal History Distribution

Status | Count | Percentage
No, never considered it | 3 | 38%
No, but considered it | 3 | 38%
Yes, temporarily hidden | 0 | 0%
Yes, permanently removed | 2 | 25%
Total | 8 | 100.0%

Is there a need for the capability to easily reverse the removal of information without data loss? No respondent reported temporarily hiding information, while 25% of organizations permanently removed information rather than temporarily hiding it. This may mean organizations:

  • Lack confidence in their ability to secure information even when it is hidden
  • Have faced threats persistent enough to make temporary hiding insufficient
  • Made irreversible decisions because no existing tools made hiding content easy
  • Removed information reactively, post-threat, rather than proactively

What the survey suggests is crucial: CSOs and NPOs that remove information may be reluctant to restore it even when threats subside, because restoration involves friction. Mapping this journey is a potential step toward understanding why (e.g. “it’s time-consuming and difficult”, “it feels like undoing safety”, etc.). Whatever the source of the friction, Privacy Mode’s explicit “toggle” concept could normalize the idea that visibility can be situational rather than permanent.
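The toggle concept above can be sketched minimally as follows, under the assumption that staff records are kept server-side and only a visibility flag changes; the class and field names are illustrative, not a confirmed design:

```python
from dataclasses import dataclass

@dataclass
class StaffProfile:
    name: str
    role: str
    email: str
    hidden: bool = False  # toggled instead of deleting the record

class PrivacyMode:
    def __init__(self, profiles: list):
        self.profiles = profiles

    def hide_all(self) -> None:
        """Crisis response: hide every profile in one step."""
        for p in self.profiles:
            p.hidden = True

    def restore_all(self) -> None:
        """Reversal is a single call; nothing was lost while hidden."""
        for p in self.profiles:
            p.hidden = False

    def visible(self) -> list:
        """Profiles that would actually be rendered on the public site."""
        return [p for p in self.profiles if not p.hidden]
```

Because hiding never deletes the underlying record, restoring visibility after a threat subsides carries no data-loss cost, which is exactly the friction the survey responses point to.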

Removal Trigger Scenarios

Scenario | Count | Percentage | Rank
Direct threats to specific staff | 8 | 21% | 1
Staff member request | 8 | 21% | 1
Partner organization under attack | 7 | 18% | 2
Government attention/investigation | 6 | 16% | 3
General hostile climate | 5 | 13% | 4
Preventive measure during sensitive work | 4 | 11% | 5
Other | 0 | 0% | 6

The survey tells us the most common triggers for removal are direct threats to specific staff (21%) and staff member requests (21%), followed by partner organizations under attack (18%) and government attention/investigation (16%). The implication is that, for the target audience, permanent removal is more common than temporarily hiding sensitive information; these organizations would not necessarily take a “toggle-based” approach to information privacy. Most of Privacy Mode’s target audience probably either do not act at all, or remove information permanently in response to staff threats or requests rather than in reaction to government attention alone.

What this might also reveal is a difference in scope, creating two modes of decision-making within CSOs and NPOs when considering PII removal, namely:

  1. Organization-initiated: response to verified, objective threats
  2. Individual-initiated: response to staff agency circumstances and/or comfort

Organizations will need the ability to activate removals site-wide during a crisis, while individuals need the ability to request that their own information be hidden without triggering an organizational emergency response. Privacy Mode must support both pathways.
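The two pathways might be sketched as follows; all names and data structures here are illustrative assumptions, not a specified design:

```python
# Hypothetical staff registry; a profile is shown only if neither
# pathway has hidden it.
profiles = {
    "alice": {"role": "Director", "hidden": False},
    "bob": {"role": "Researcher", "hidden": False},
}
site_wide_mode = False  # organization-initiated crisis state

def activate_site_wide() -> None:
    """Pathway 1: organization-initiated removal after a verified threat."""
    global site_wide_mode
    site_wide_mode = True

def request_individual_hiding(person: str) -> None:
    """Pathway 2: a staff member hides only their own information,
    without triggering any organization-wide emergency response."""
    profiles[person]["hidden"] = True

def is_visible(person: str) -> bool:
    """Visibility check applied when rendering the public site."""
    return not site_wide_mode and not profiles[person]["hidden"]
```

The design point is that the individual pathway changes one record while leaving the organization’s overall posture untouched, whereas the crisis switch overrides everything at once.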

Organizations selecting “community expects to see who we are” (45.5%) are just as likely to identify government surveillance as a concern as those who don’t. This suggests the pressure of that expectation overrides security concerns in the organization’s decision-making.

64% of respondents would act when a partner organization is under attack, even if they themselves aren’t targeted. This suggests ecosystem thinking: organizations view threats to peer organizations as warnings for themselves. Conversely, only 36% of respondents would remove information as a preventive measure during sensitive work; most organizations wait for threats to materialize. Privacy Mode can enable a shift toward proactive protection by making the temporary hiding of attribution or PII low-cost, easy, and reversible.

Assessment Frequency

Frequency | Count | Percentage
Only when incidents occur | 5 | 50%
Never formally | 0 | 0%
Few times a year | 2 | 20%
Monthly | 2 | 20%
No answer | 1 | 10%
Total | 10 | 100%

Half of our sample is purely reactive, yet none of the respondents report “never formally” assessing threats. This may suggest that for most of Privacy Mode’s target audience, assessment capacity exists (whether internally or via external support) but is triggered only by crisis.

Threat Type Distribution

Threat type | Count | Percentage | Rank
Online harassment campaigns | 3 | 15% | 4
Doxxing of staff/families | 5 | 50% | 1
Hacking/cyberattacks | 5 | 50% | 1
Government surveillance | 3 | 30% | 2
Legal intimidation | 1 | 10% | 5
Physical security threats | 2 | 20% | 3
Other* | 1 | 10% | 5

Hacking and doxxing tie as top concerns at 50%. But surveillance (30%) and physical security (27%) follow closely. What this reveals about organizational threat modeling for the target audience is that both CSOs and NPOs understand that PII on their websites (assets) creates two attack vectors:

  1. Technical exploitation (surveillance, hacking): Public staff information enables social engineering attacks, phishing, credential stuffing, and targeted hacking

  2. Identity-based targeting: Public staff identities enable doxxing, harassment campaigns, and physical threats

Yet there may be some disconnect we can surmise from this sample: despite identifying hacking as a top concern, 0% display email addresses (the primary vector for phishing/social-engineering attacks). This suggests the target audience may have internalized specific threat lessons (i.e. “we don’t publish emails”) but has not generalized the principle (e.g. names + roles + context = enabled social engineering).

Organization Size

Size category | Count | Percentage
Small team (1-5 people) | 4 | 36%
Small organization (5-20 people) | 2 | 18%
Medium organization (25-50 people) | 4 | 36%
Large organization (100+ people) | 1 | 9%
Total | 11 | 100%

Survey results show that organizations of all sizes share similar concerns about risk and exposure—there aren’t clear distinctions in the types of threats reported based on organization size. This means Privacy Mode needs to tackle a mix of risks, not just technical ones like hacking but also personal, targeted harassment. By addressing doxxing and cyberattacks as general priorities, the design will naturally help organizations manage other secondary threats as well, including safety risks and surveillance.

Geographic Distribution

Operating region | Count | Percentage
National (one country) | 2 | 18%
Regional (various neighboring countries) | 1 | 9%
Single city/local | 1 | 9%
Global (many countries) | 7 | 64%
Total | 11 | 100%

Privacy Mode should support distributed organizations in multiregional threat scenarios, not just single-country contexts. More than half of the respondents (64%) operate globally; these organizations face:

  • Multiple regulatory environments
  • Varied threat landscapes across regions
  • Complex partnership networks that may require notification during privacy activation
  • Time zone considerations for a 24/7 threat response

Data Responsibility Distribution

Who Is Responsible…? | Count | Percentage
Dedicated Security Officer/Team | 3 | 25%
No formal process | 3 | 25%
IT | 2 | 17%
Executive Director only | 1 | 8%
Any leadership team member | 1 | 8%
Communications manager | 1 | 8%
Any staff member for their own info | 1 | 8%
Data Protection Officer or similar dedicated role | 0 | 0%
Total | 12 | 100%

Three organizations have dedicated security staff, but three have no formal process at all. Within this small sample, organizations are split between highly professionalized security and completely ad-hoc approaches (some respondents selected multiple roles).

Yet, even among the 8 organizations with formal processes, responsibility is distributed across 6 different roles. We can infer more broadly that there may be a fragmentation problem here. During a crisis requiring rapid information removal:

  • Who has authority to activate?
  • Who has permissions/technical access to make changes?
  • Who can approve the decision quickly?
  • Who communicates with staff?

In developing Privacy Mode, some effort should go to clarifying (i.e. solving) a potential authority problem. Ostensibly this would be done by enabling role-based access: security teams can activate immediately when necessary (with some stipulation in an SLA); executive directors can activate with access and one click; individual staff can request their own info be hidden (likely via some ticketing process); communications managers can manage messaging about visibility status.
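One hedged way to model such role-based access is a simple role-to-action permission map; the role names and actions below are assumptions drawn from the roles the survey surfaced, not a specification:

```python
# Hypothetical mapping of organizational roles to permitted actions.
PERMISSIONS = {
    "security_team": {"activate_site_wide", "hide_individual", "restore"},
    "executive_director": {"activate_site_wide", "restore"},
    "staff_member": {"request_own_hiding"},
    "communications_manager": {"update_visibility_notice"},
}

def can(role: str, action: str) -> bool:
    """Return True if the given role is authorized for the action."""
    return action in PERMISSIONS.get(role, set())
```

For instance, `can("security_team", "activate_site_wide")` returns True, while `can("staff_member", "activate_site_wide")` returns False, so a staff member’s request would route through a ticketing step rather than flipping the site-wide state directly.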

Technical Capacity Distribution

Capacity Level | Count | Percentage
We have dedicated IT staff | 3 | 38%
We have tech-savvy program staff | 3 | 38%
No answer | 2 | 25%
We rely on external support for website updates | 0 | 0%
Total | 8 | 100%

The count is small and self-reported, but it is surprising that no organization reports relying on external support for website updates. The sample is more technically sophisticated than expected.

Platform Distribution

CMS/Platform | Count | Percentage
Traditional CMS (WordPress, Drupal, Joomla, etc.) | 4 | 57%
Headless CMS (Contentful, Strapi, Sanity) | 2 | 29%
WordPress/other open-source CMS | 1 | 14%
Static site generator (Jekyll, Hugo, Astro) | 0 | 0%
Total | 7 | 100%

Given that WordPress-style traditional CMSs cover 57% of respondents, WordPress should be the prioritized platform for the MVP.

Website Features Adoption

Feature | Count | Percentage
Staging/test site | 5 | 29%
Version control | 5 | 29%
Automated backups | 4 | 24%
Not sure | 2 | 12%
Content Delivery Network (CDN), e.g. Cloudflare | 1 | 6%
Total | 17 | 100%

The respondents’ organizations follow modern development practices, which may give them a good foundation for Privacy Mode integration. Most use staging/test sites (29%), version control (29%), or automated backups (24%); this is software-development infrastructure, not just website management.

Barriers to Tool Adoption

Barrier | Count | Percentage
Too complex to set up | 4 | 29%
Concerns about security | 4 | 29%
Cost | 2 | 14%
Maintenance burden | 2 | 14%
Staff resistance | 2 | 14%
Legal/compliance issues | 0 | 0%
Other | 0 | 0%
Total | 14 | 100%

There may be value in framing the aforementioned as a prerequisite (the capability to handle technical implementations). But the survey also shows that complexity can still be a barrier despite technical sophistication, with 36% of respondents citing “too complex to set up” as a reason against tool adoption. What can be inferred more broadly is that “complex” doesn’t necessarily mean “requiring technical skills”. It could mean “requires time, attention, and integration effort” from already-stretched teams. Therefore, Privacy Mode must demonstrate low integration friction even for technically capable organizations.

This section examines relationships between key variables to identify patterns that would inform Privacy Mode’s design.

How many people work for your organization? | We haven’t made changes | We’ve considered removing but haven’t yet | We’ve increased security measures but kept information public | We’ve removed staff information due to threats | Grand Total
Large organization (100+ people) | 1 | 0 | 0 | 0 | 1
Medium organization (25-50 people) | 0 | 1 | 2 | 1 | 4
Small organization (5-20 people) | 0 | 0 | 1 | 1 | 2
Small team (1-5 people) | 2 | 1 | 0 | 1 | 4
Grand Total | 3 | 2 | 3 | 3 | 11

Organizations that “haven’t made changes”:

  • Average 3.7 types of information displayed
  • 67% are small (1-20 people)

Organizations that “increased security measures”:

  • Average 2.3 types of information displayed
  • 67% are medium (25-50 people)

The data shows that medium organizations (25-50 people) are more proactive about security: they increase security measures and display less staff information on their websites. In contrast, smaller organizations (1-20 people) are less likely to take security measures and have higher exposure (more types of staff information displayed) with less protection. We can make a grounded assumption about our target audience from this: CSOs and NPOs with fewer than 20 people are potentially at greater risk, combining higher public exposure of staff information with fewer proactive security measures than organizations with 25 or more people employ.

How often does your organization assess online threats to staff? | We haven’t made changes | We’ve considered removing but haven’t yet | We’ve increased security measures but kept information public | We’ve removed staff information due to threats | Grand Total
Few times a year | 1 | 0 | 1 | 0 | 2
Monthly | 0 | 0 | 1 | 1 | 2
No answer | 0 | 0 | 1 | 1 | 2
Only when incidents occur | 2 | 2 | 0 | 1 | 5
Grand Total | 3 | 2 | 3 | 3 | 11

The survey results show that regular assessment correlates with recognizing more types of threats and taking action on a wider range of potential threats, compared to those who assess only in reaction to incidents. We can hypothesize that if Privacy Mode promotes regular assessments or provides situational prompts/tools for privacy action, it could raise organizational awareness of threats and encourage proactive protection practices.

Organizations with dedicated IT:

  • 100% use version control
  • 67% use staging sites
  • 67% use automated backups

Organizations relying on external support:

  • 33% use version control
  • 0% use staging sites
  • 33% use automated backups

This sample shows technical capacity clearly affects website development practices. Organizations with dedicated IT are much more likely to employ modern features like version control, staging environments, and automated backups, whereas those relying on external support use these features less frequently. Therefore, Privacy Mode should not assume all target organizations use staging or version control—it needs to work even for those with lower technical capacity or no advanced deployment tooling.

The open-ended responses to the question “What data do you consider highest risk?” captured a range of sensitive information types that the respondents encountered in their work. In hindsight, the phrasing of the question is a bit confusing. Regardless, the data shows:

  • Only one respondent explicitly mentioned “home addresses” as location data
  • “PII” was referenced in the general sense, not specifically tied to names or other details
  • Some unique types of sensitive information were identified:
    • Data about audiences reaching out through the helpdesk
    • Confidential client strategy documents
    • Information about clients’ security stance and measures
  • One response indicated uncertainty about the question itself

What we can see, given the sample and its size, is that broader context is missing: how these data types are used or put at risk in practice, contextualized examples from the respondents, and a fuller taxonomy of sensitive data encountered. The responses do not support broad grouping solely into “location data” and “names”.

This section analyzes open-ended responses from Survey A. Responses to these open-ended questions came from a smaller subset of engaged participants but offered deeper insights. Responses were analyzed using affinity mapping in Miro, allowing themes to emerge organically from the data rather than through predetermined categories. Each response was tagged with its Response ID for traceability.

Overall, respondents from diverse organizations self-identify through their mission and operational context, representing a range of sectors with varying web policies and governance structures. The responses show organizations ranging from grassroots activist groups to established nonprofits, each bringing different technical abilities and policy frameworks to their privacy decisions.

Theme: Valuing Visibility & Trust Building

“We think visibility is important”
“Building trust with our community is hard if they don’t know who we are”

All respondents shared how their organizations value visibility. The concept should be understood as fundamental to the target audience achieving their mission and meeting expectations in their community and partner relationships. An organization’s transparency regarding its staff, by making them visible on its site(s), gives people a way to verify what the organization is doing, who is a part of it, and how to contact them (for feedback, services, etc.).

“Not sure!”
“Don’t know”
“Not sure - especially with legacy data (archive.org)”

CSOs and NPOs need education and frameworks before implementing technical solutions, which is not surprising. Even though the sample is small, a consistent pattern of uncertainty emerges across multiple sets of questions to support that assertion:

  • Unclear about what constitutes high-risk data
  • Lack of formal policies for handling staff information
  • Confusion about technical capabilities and limitations
  • Uncertainty about external factors (archives, caches)

Theme: Need to Meet Compliance Requirements

“NGO focused on tech policy with a focus on privacy and social justice in regard to digital transformation”
“We want to display board and ED for compliance, I think, but I never actually asked.”

Responses convey how external requirements can create non-negotiable visibility demands. Yet, these requirements, of course, can vary by jurisdiction and/or funding source, creating complex compliance matrices. Organizations must balance:

  • Transparency demands for regulatory compliance or legal obligations
  • Funder requirements for programmatic and/or financial accountability
  • Professional standards (e.g. journalism bylines)

Theme: Navigating Environment With Escalating Risks

“Doxxing during the election season last year”
“Photos (especially with increasing risk of deepfake/AI image manipulation and misuse)”
“data about our audience that reach out to us via helpdesk”

The survey shows that organizations may understand risk is dynamic and contextual but lack frameworks suited to their specific contexts for systematic assessment, perhaps due to a lack of capacity. Regardless, we can see in the responses that risk is deeply contextual, influenced by: geographic and political environments, temporal factors (elections, campaigns), emergent threats (AI, crypto), and secondary risks (the need to protect not just staff but constituents).

For the target audience there is a need for tools or processes that lower the friction of content updates, especially for those self-identified as smaller teams or organizations. Privacy Mode should consider how and when to be frictionless to activate across decentralized or fragmented sites, and accommodate workflows that are not CMS-based. We can infer that support for CSOs and NPOs involves not just technical setup, but likely capacity-building and education at minimum.

“We need a whole website overhaul but don’t have capacity. If we had to change things at this time it would be more of a PITA than usual/than it should be.”
“…we have a problem with our WP theme that makes editing more time consuming and do not have capacity to overhaul the site as we need to”
“[we are a] Consultancy serving nonprofits; many of whom do not maintain [their own] public websites”