Yes, clearly it's going to be somewhere over 14%. I'm mainly critiquing the lazy reporting here. In reality that "59%" number is a wild guess, and should be reported as such.
> Here's a recent study that uses modern statistical modelling methods to extrapolate actual harassment rates from reporting rates that suffer strongly from self-selection bias in a similar environment
It's really not in a similar environment at all. That study is about estimating the rates of harassment, assault etc. of professional fisheries observers. I.e. independently employed "fisheries narcs" (my paraphrasing) who are dropped onto a fishing vessel for a limited time. "Upon returning from deployment, all observers are debriefed to ensure data quality and integrity".
That's a much more controlled dataset than "I hung out in Antarctica for a year, and I and 1/4 of the people that were there with me responded to this survey long afterwards, for whatever reason".
Even then, they have huge error bars on their 95% confidence intervals, e.g. as noted by this revealing paragraph:
> "Our estimates of observer victimization among years indicate that 0.4 to 1.8% of observers experienced assault, 22 to 34% experienced intimidation, coercion, and hostile work environments, and 9 to 43% experienced sexual harassment".
So, e.g. 9%-43%. That's a pretty huge difference.
In any case, we're way into the weeds here. Nothing I'm saying calls into question the important underlying point: the Antarctic program certainly seems to have some pretty screwed up problems with harassment etc., and with how it's dealing with them.
I was mainly critiquing the lazy reporting of that study, i.e. cherry-picking some overly-specific (but ultimately wildly extrapolated) percentage like 59% and running with it, when (as I quoted above) the very study you're quoting is telling you it can't and shouldn't be used like that.
It's doubly unfortunate because if you page through that NSF study, there really are some pretty revealing quotes in some of the free-form answers submitted as part of the survey, clearly from people who've had a lot of time to think about and reflect on the issues they encountered in Antarctica.
This one in particular stuck with me (I looked up the full quote just now, several days later):
> We can’t really have a zero-tolerance policy on assault and harassment because for 8.5 months there are no planes unless there is a really serious emergency. And I don’t think the program will ever fly planes in to get people out who have assaulted or harassed people. The issues with it are far too many. So, since we can’t have a zero-tolerance policy[...]
Those sorts of constraints seem both to drive a lot of the problems with places like Antarctica and to be a big part of why certain people are drawn to work in those places.
That mandatory debriefing was part of the reason why the data was better. That's what I'm saying.
If the relevant Antarctic authorities were performing an end-of-season debrief with each and every employee, following the paradigm used for marine fisheries observers, they might go from "59%, but some punter can claim it's actually 14%" to "actually it's more like 70%, and here's the mountains of data to prove it".
I specifically referenced the marine-fisheries example because, outside of grievous injury or death, the observers cannot leave until the captain of the boat they're narc'ing (your words) on chooses to return to port, and only if the captain of that boat returns to a port they can easily go home from. Which is a pretty serious problem when the harasser/assaulter is the captain. The analogy to Antarctica is pretty strong in that light.
> That mandatory debriefing was part of the reason why the data was better. That's what I'm saying.
Indeed, and the NSF study notes the limitations of its voluntary survey methodology.
But I don't think it's going to be as easy as your "[...]repeat the original study in light of the suggested methods just published". That study leans heavily into something the NSF can't easily replicate, e.g. all data they got from military personnel is voluntary.
> they might go from "59%, but some punter can claim its actually 14%" to "actually its like 70%, and here's the mountains of data to prove it".
Perhaps I'm reading too much into this, but it's rather telling that you think the number would go up under further scrutiny.
I'm not saying it will or it won't, but neither study we're discussing supports that conclusion, as shown e.g. by the error bars in the one you linked to.
E.g. the "9 to 43% experienced sexual harassment" line. In that case 9-26% is as likely as 26-43%, if we accept the rest of their methodology.
To speculate, I'd think the NSF methodology in particular would lead to a higher extrapolated than actual rate.
Presumably someone who's been the victim of sexual harassment or assault would be more inclined to participate in a time consuming survey to have their voice heard.
Whereas someone who had an entirely uneventful year (at least on that front) in Antarctica would be more inclined to skip it.
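To make that selection effect concrete, here's a toy back-of-the-envelope calculation. All of the numbers are invented for illustration; the point is only the mechanism, not the magnitudes:

```python
# Toy model of self-selection in a voluntary survey (all numbers hypothetical).
# Suppose the true victimization rate is 20%, and victims are more motivated
# to fill out a time-consuming survey than everyone else.
true_rate = 0.20
p_respond_victim = 0.50   # assumed response rate among victims
p_respond_other = 0.20    # assumed response rate among everyone else

# Expected fraction of *respondents* who report being victims:
responding_victims = true_rate * p_respond_victim
responding_others = (1 - true_rate) * p_respond_other
naive_estimate = responding_victims / (responding_victims + responding_others)

print(f"true rate:      {true_rate:.0%}")
print(f"naive estimate: {naive_estimate:.0%}")  # ~38%, nearly double the true rate
```

With those made-up response rates, the raw survey percentage lands near double the true rate, which is exactly why the raw number can't be read at face value without modelling who chose to respond.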
In addition to that, the NSF survey in particular conflates questions like "did X happen to you?" vs. "...or anyone you knew, heard of, etc.".
If you attempted to measure e.g. theft with those sorts of questions you'd end up with inflated numbers.
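The inflation from "you or anyone you know" phrasing is easy to show with a toy calculation. Assume (purely for illustration) a 5% individual rate, independent incidents, and that each respondent knows 30 other people:

```python
# Toy illustration of how "you or anyone you know" questions inflate rates.
# All numbers are hypothetical; incidents are assumed independent.
true_rate = 0.05    # assumed individual rate
people_known = 30   # assumed acquaintance circle per respondent

# P(at least one of {you + 30 acquaintances} was affected):
p_yes = 1 - (1 - true_rate) ** (people_known + 1)

print(f"true individual rate:     {true_rate:.0%}")
print(f"'you or anyone you know': {p_yes:.0%}")  # ~80%
```

Under those assumptions, a 5% individual rate turns into roughly 80% of respondents truthfully answering "yes", which is why mixing the two question forms produces inflated-looking numbers.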
> [...]The analogy to Antarctica is pretty strong in that light.
Somewhat, but it also breaks down in other ways, e.g. McMurdo has around 200-1200 people depending on the season (winter vs. summer).
The tricky bit about the NOAA-affiliated survey was that the debrief was mandatory. The underlying justification for the whole study was that they were finding a higher rate of "I experienced harassment" in the debrief vs. in anonymous surveys and via criminal complaints, and they wanted to understand why, and how to apply a correction to the anonymous surveys to address the manifold negative pressures on reporting.
> Presumably someone who's been the victim of sexual harassment or assault would be more inclined to participate in a time consuming survey to have their voice heard.
If you read the paper, you'd see that for certain kinds of incidents, this is true. If it's especially egregious harassment or physical assault, those tend to get reported more often, even through regular channels. But if it's just the bog-standard hostile-workplace stuff that we all have to attend annual seminars about, the reporting rate is very low, because the victims rarely feel like anything will be done about it. And it's rare that anything's done about it, because the leadership doesn't have firm enough evidence that it's a widespread enough problem to even do something about. What I'm saying is that even in the light of that NSF report, the Antarctic leadership are doing the same calculus you are and saying "well, it seems pretty endemic, but we don't really have any specifics". To which I say "well, then collect some specifics".
Also, re: data from military personnel, I don't have a clear idea how to address that, except to say that if military personnel are harassing civilian staff, or vice versa, that's a pretty serious problem irrespective of location, and the DoD maintains an office specifically to address that for all situations where military personnel regularly work alongside civilian personnel. So they, too, should be doing the same thing.
I know some people who worked on the NOAA Alaska fisheries study, and they were using pretty widely accepted methods for estimating e.g. "bycatch" (the incidental catching of undesired species during commercial fishing operations), because it's impossible to have 100% coverage, in the moment, of all fishers everywhere. In general, compliance with the Endangered Species Act is evaluated via surveys that are processed and extrapolated using these same methods. The reason the paper even got published was that those methods turn out to be pretty novel for, e.g., human bycatch.
If you look at the figures from the paper, it's pretty clear that the stated variance is due to year-over-year variation, rather than absolute uncertainty in the method (which checks out: if Ass-Grab-Greg or whoever finally got fired in 2016, you'd expect the harassment rate to drop in 2017, but you can't just ignore 2016 and prior when reporting for the entire interval).
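That distinction is easy to illustrate with invented per-year rates. Even if each year's estimate were exact, a multi-year summary would still read as a wide range:

```python
# Toy illustration: a wide "among years" range driven purely by
# year-over-year variation, not by per-year uncertainty.
# All rates below are invented for illustration.
yearly_rates = {2014: 0.11, 2015: 0.38, 2016: 0.42, 2017: 0.14, 2018: 0.10}

lo, hi = min(yearly_rates.values()), max(yearly_rates.values())
print(f"reported range 'among years': {lo:.0%} to {hi:.0%}")
# Prints a "10% to 42%"-style interval even though each yearly
# figure here is assumed perfectly known.
```

So a "9 to 43%" statement can reflect genuine variation between years rather than the method being unable to pin the rate down in any given year.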
I'm not sure what if anything we disagree on at this point.
My main point (to the extent I was trying to make one) was that I found the reporting on this issue to be irresponsible, particularly the prominent citing of statistics that the survey authors themselves warned readers from using in that exact way.
Taking several steps back from the issue, I'm not sure the fundamental implicit assumptions that are being made by the NSF here are realistic.
Yes, the issues they're discussing are Bad, and shouldn't happen. But those Bad issues also happen in society at large.
The NSF is effectively creating a small town in Antarctica. It doesn't seem realistic to me to expect what's essentially a small town mostly filled with civilians to conform to HR rules 24/7/365.
At some point it seems like the right structure for running such a thing is the normal structures we've created in society for dealing with these sorts of issues.
Would a small town shut down its bars to some degree because of the "town HR" learning about whatever happened there between various parties "after work", most of which wouldn't rise to the level of criminal or civil offenses?
That, or running it like a military base, which they did away with.