
I've done enough kernel work to feel that it's a safe assumption in general, though there may be some exceptions.

For example, I wouldn't be completely shocked if somebody said, "We really need to support a particular version of an old OS that had unusually high per-process overhead in some corner case."

If anybody knows how much kernel memory a basic process needs in, say, modern-day Linux, please chime in. I tried looking it up, but didn't find it. Probably it's roughly sizeof(task_struct), which I can't be bothered to check right now, plus a few KB for the kernel stack.
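One rough way to check (a sketch, not a full accounting): on Linux, /proc/slabinfo reports the object size of the "task_struct" slab cache, which gives the size of that structure on your particular kernel config. It's only part of the per-process cost; the kernel stack (commonly 8-16 KiB), page tables, mm_struct, file tables, and so on add more. The parser below assumes the standard slabinfo column layout (name, active_objs, num_objs, objsize, ...) and usually needs root to read the file:

  /* Minimal sketch: print the task_struct slab object size from /proc/slabinfo.
   * Assumes a Linux kernel that exposes slabinfo (typically readable only by root)
   * and a cache literally named "task_struct". */
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      FILE *f = fopen("/proc/slabinfo", "r");
      if (!f) {
          perror("fopen /proc/slabinfo (try running as root)");
          return 1;
      }

      char line[512];
      while (fgets(line, sizeof(line), f)) {
          char name[64];
          unsigned long active, total, objsize;
          /* slabinfo columns: name <active_objs> <num_objs> <objsize> ... */
          if (sscanf(line, "%63s %lu %lu %lu", name, &active, &total, &objsize) == 4
              && strcmp(name, "task_struct") == 0) {
              printf("task_struct: %lu bytes per process\n", objsize);
              printf("(plus kernel stack and other per-process structures)\n");
              break;
          }
      }
      fclose(f);
      return 0;
  }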


