My only problem with the terminal and Vagrant built into the IDE is that I have multiple monitors and enjoy the separation. I usually fullscreen my IDE/editor on one monitor, maximizing each file's viewport as best I can, and then have multiple terminals on another monitor.
Can I "pop-out" the terminal/vagrant windows in this IDE and put them on another monitor?
PHPStorm is more focused and coherent and includes some features that IDEA + PHP plugin doesn't, IMO. If I were a full-time PHP developer, I'd use PHPStorm, but for my occasional PHP needs, IDEA + PHP plugin is ideal.
PyCharm is much the same - for full-time Python development, I'd prefer it to IDEA + plugin.
"Post-mortem" is usually appended to titles for solved vulnerabilities, although this was 2 months ago. Maybe just timestamp it? e.g. "Facebook CSRF leading to full account takeover (Post-mortem, August 2013)"
That would imply that the Post-Mortem itself was written in August 2013, which would probably get far fewer clicks as people assume they've read about the vulnerability before.
The idea, as I understood it, was that you have a very limited amount of time on this earth, so you should (in general) avoid spending time or other resources on building deep relationships with people that you can't rely on.
I have amazing friends that I can rely on - friends of over 2 decades that would literally give their lives for mine. But they'd make horrible business partners. They aren't self-motivated in the way they'd need to be. They require the structure of a 9 to 5.
I get the sentiment of the quote - that you need trustworthy cofounders - but it's a really odd and incorrect way to phrase it.
That's absurd. What about the friend who always makes you laugh but happens to be a bit of a slacker? Dispose of him? Let's all be miserable and uptight then.
I'm sure most startups in Miami have A/C, so I'm not sure what your concern is. Most apartments too.
Anyway, I live in an area that reaches 43°C in the summer and -6°C in the winter. When I first moved here it definitely threw me off, but I got acclimated to it quickly.
As someone who grew up in Milwaukee and moved to Charleston, I agree 100%. One of my largest clients is in Milwaukee now and I can say definitively that I have lost all tolerance for that bitter cold. And to all you Canadians living north of the wall... I don't know how you do it.
I actually really prefer the winter. I personally think you start feeling like a Canadian when you enjoy the winter weather.
For whatever reason the sun in Canada really seems to burn (at least in the Toronto area), and it can get really humid in the summers. But in the winter you can always just bundle up and feel comfortable.
I'm a Canadian, and I really appreciate the weather down in the Bay Area after growing up in Seattle weather for my childhood and college years. I don't miss the rain and the cold, or the 'lack of seasons' some friends seem to complain about.
Well, not quite. In the summer Vancouver/Victoria are both way nicer than SF. I personally don't think there is a nicer place to be in August. In the winter, they're both colder.
Neither compares to the peninsula, though. It's still mid-20s in Palo Alto during the day right now, whereas it's mid-teens in Vancouver.
> Also, some of us have (in business environments) $1k+ S-IPS 30"+ monitors — the quality of these monitors is way above that of consumer models like the VG248QE and others. If there is no way to generically mod monitors without onboard DSPs, I could see that hindering adoption.
I think Nvidia is targeting hardcore gamers first and foremost. Most gamers are not gaming at 2560x1600/1440. Some are, but most aren't.
The most popular monitors among pro gamers right now (Twitch/eSports players and enthusiasts) are 120/144Hz, 1ms monitors, such as the ASUS VG248QE. Color reproduction isn't as important to pro gamers as smoothness/framerates.
Also hardcore/pro players are dumping lots of money on the most expensive computer rigs, often upgrading to the latest and greatest every generation. They are a very important marketing group for Nvidia.
It's not really about the latency (5ms vs 1ms is negligible); it's about the pixel response, to reduce/eliminate ghosting and other artifacts of LCDs' persistent pixels. The faster the pixels can update, the less ghosting. Interestingly enough, no amount of pixel speed will eliminate it entirely: the real problem with ghosting turned out to be precisely the pixel persistence. Even more interesting is that someone discovered a hack for modern 3D monitors like the ASUS mentioned that completely eliminates ghosting: the strobing backlight functionality necessary for 3D also completely eliminates ghosting when applied to 2D. I currently have this setup and it's exactly like using a CRT. A flat, light, 1920x1080 CRT. It's beautiful.
He's actually completely wrong. Persistence is about image quality, and can be mitigated by filtering that hardcore gamers always turn off, because it costs them latency.
Reducing latency isn't about how noticeable it is. Latency can be completely impossible to detect for you but still hurt you.
Input lag is the time from providing some input, such as clicking your mouse, to getting feedback of that event on the screen. Since the clicking is prompted by things happening on the screen, input lag acts as a command delay on everything you do. The most interesting feature of latency is that it's all additive: it doesn't matter how fast or slow each part of the system is, none of them can hide latency for another. So even if the game simulation adds 150ms and your stupid wireless mouse adds 15ms, the 2ms added by the screen still matters just as much.
The second mental leap is that the human controlling the system can also be considered to be just another part in the pipeline adding latency. Consider a twitch shooter, where two players suddenly appear on each other's screens. Who wins depends on who first detects the other guy, successfully aims at him, and pulls the trigger. In essence, it's a contest between the total latency (simulation/cpu + gpu + screen + person + mouse + simulation) of one player against the other player. Since all top tier human players have latencies really close to one another, even minute differences, 2 ms here or there, produce real detectable effects.
This is completely wrong. When even the fastest human reaction time is on the order of 200ms, 5ms vs 1ms of monitor input lag has no effect on the outcome. Also consider that 5ms is within the snapshot time that servers run on, so +/- 5ms is effectively simultaneous to the server on average.
Pixel persistence is not about image quality and cannot be mitigated by anything except turning off the backlight at an interval in sync with the refresh rate. That's effectively what CRTs did (the phosphor lit up only briefly each refresh), which is why they had no ghosting. The 3D graphics driver hack I mentioned does exactly that for 3D-enabled LCD monitors.
People can notice input latencies that are many times smaller than their reaction time. 200ms of input latency is going to be noticeable and bothersome to basically everyone for even basic web browsing tasks. Most gamers will notice more than 2-3 frames of latency, and even smaller latencies will be noticed in direct manipulation set-ups like touchscreens and VR goggles where the display has to track 1:1 the user's physical movements.
I think you misunderstood my point. In terms of actual advantage, 1ms vs 5ms is negligible, considering the fact that human reaction time is 200ms. So in the case of shooting someone as they pop out from behind a corner, the 200ms reaction time + human variation + variation in network latency + discrete server tick time will absolutely dominate the effects.
I definitely agree that small latencies can be noticed, even latencies approaching 5ms (though not 5ms itself; I've seen monitor tests that showed this).
> I think you misunderstood my point. In terms of actual advantage, 1ms vs 5ms is negligible, considering the fact that human reaction time is 200ms.
You did not understand the point of my post. The quality that matters is total latency. How long a human takes to react is completely irrelevant to what level of latency has an effect. Whether average human reaction time was 1ms or 1s doesn't matter. All that matters is that your loop is shorter than his, and your reaction time is very near his, so any advantage counts.
> the 200ms reaction time + human variation + variation in network latency + discrete server tick time will absolutely dominate the effects.
Server tick time is the same for everyone. Top-level gaming tourneys are held on LANs, where players typically make sure the network latency from their machine to the server is no greater than anyone else's. However, none of that matters to the question at hand.
Assume that the total latency of the system, including the player, can be approximated as a sum of components, and assume each component is normally distributed around some base value, except display lag, which is fixed. Writing each term as rand(midpoint, standard deviation) in milliseconds, you have:

rand(200,20) + rand(20,5) + rand(16,2) + 15

while I have:

rand(200,20) + rand(20,5) + rand(16,2) + 5
The total latency is utterly dominated by the human processing time. Yet if we model this statistically, and assume that lower latency wins, the one with the faster screen wins 63% of the time. That's enough of an edge that people pay money for it.
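For what it's worth, that 63% figure is easy to check with a quick Monte Carlo simulation. Here's a minimal sketch in Python; the numbers are the ones from the model above, and the component labels (reaction, input chain, frame) are just my guesses at what the rand terms stand for:

```python
import random

def total_latency(display_lag_ms, rng):
    """One sample of the full loop, per the model above (values in ms)."""
    reaction = rng.gauss(200, 20)   # human reaction time
    input_chain = rng.gauss(20, 5)  # mouse + game simulation
    frame = rng.gauss(16, 2)        # one frame of rendering
    return reaction + input_chain + frame + display_lag_ms

def win_rate(trials=200_000, seed=42):
    """Fraction of duels won by the player with the faster (5 ms) display."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        if total_latency(5, rng) < total_latency(15, rng):
            wins += 1
    return wins / trials
```

Running win_rate() lands right around 0.63, which matches: the 10ms mean difference divided by the ~29ms standard deviation of the combined noise puts the faster player about a third of a standard deviation ahead.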
No, I understood your point, I just don't agree that it results in any meaningful advantage. What you didn't model is the fact that the server does not process packets immediately as they are received. They are buffered and processed in batch during a server tick. If the two packets from different players don't straddle a tick boundary, the server will effectively consider them simultaneous.
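To put a rough number on that batching effect: here's a small sketch assuming a 66-tick server (the rate mentioned elsewhere in this thread for Source games) and a uniformly random offset between the first packet's arrival and the tick boundary; both assumptions are mine:

```python
import random

TICK_MS = 1000 / 66  # assumed 66-tick server: one tick is ~15.15 ms

def same_tick_rate(gap_ms, trials=100_000, seed=1):
    """Fraction of the time two packets gap_ms apart are processed
    in the same server tick, given a uniformly random tick offset."""
    rng = random.Random(seed)
    same = 0
    for _ in range(trials):
        t = rng.uniform(0.0, TICK_MS)  # first packet's offset into a tick
        same += t + gap_ms < TICK_MS   # second packet lands in the same tick?
    return same / trials
```

With gap_ms=4 this comes out around 0.74, i.e. roughly three times out of four the server treats the two inputs as simultaneous anyway.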
And remember, we're considering 1ms vs 5ms, so the difference here would be 4ms. I'd like to see what percentage advantage someone has in that setup. Even 63% isn't significant considering skill comes down to game knowledge rather than absolute reaction time. People will pay for smaller/bigger numbers, sure, but that doesn't mean there's anything practically significant about it.
But it doesn't average out. The 5ms player is continuously 4ms behind the other player. As the poster above explained, the times add up, so you have 200ms plus one to five ms. If the server tick were as short as 5ms, the problem would be even worse: player A, with the exact same reactions, would get into an earlier tick 4 out of 5 times. I don't know how often it matters, but I'd expect top players to have pretty similar reaction times. So say two opponents are both between 200 and 220ms reaction time: constantly having an extra 4ms for one player definitely sounds like it will have an effect.
edit: Or in other words - it depends on how often the reaction of the opponents is with the 4ms difference. That certainly depends on the game and the players.
Most server ticks are nowhere near 5ms. Quake 3 ran on 20-30 tick, CS:S/CS:GO runs on 33 by default and up to 66 if you run on high quality servers. 100 tick was somewhat common in CS:S. Some really expensive servers claimed 500 tick but I never bought it. Either way no one's client would update that fast.
Furthermore, if you watch pro matches, you'll quickly realize their skill has nothing to do with having the fastest reaction time. Once you get to a certain skill level, it all comes down to game knowledge. Having a consistent 4ms advantage is absolutely negligible.
If you get a chance, demo a 120Hz monitor setup and spin around quickly in an FPS. It's quite noticeable. It almost feels extra surreal, à la The Hobbit at 48fps, until you get used to it.
What is the real size of the market of gamers who upgrade with every new generation of hardware? I've gotta say, I know many gamers (though no professional ones), and none of them upgrade that often. It's more like once every 2-3 years at the most.
Hardcore gamers are not a great source of income, but they are a great marketing resource for Nvidia. They are very influential on others when choosing products, and they represent a large portion of the online review industry.
Having the crown for the best graphics card even translates to sales of low-end laptops.
Exactly, especially in the Twitch.tv and eSport era. Sponsored players flaunt their hardware, often linking to Amazon product pages or giving hardware away in their Twitch channels. These professional players have tens of thousands of viewers, and thousands of subscribers, on Twitch. There's a lot of marketing to be had.
Seems unlikely. Essentially all animals with a brain sleep, even flies. You'd need to find a pill that could do something that not one animal was able to evolve to do over hundreds of millions of years. Something that was extremely calorie intensive is all I can imagine.
I appreciate your logic, but there is a wide variety in needed levels of sleep among different animals, and even among individual humans, which tells me that some are more efficient at sleeping than others (maybe at clearing toxins?).
As efficient as evolution is, the ability not to sleep as much may not be a huge determinant in reproduction. Just 100 years ago, people worked all day, then relaxed and slept. There was little to no demand for not sleeping at all from a survival standpoint.
The main reason people want to stay awake all night now is to work more and advance their careers. You don't need that for survival.
Further, pharma companies have done some amazing things. While I agree that it's unlikely soon, I'm confident that if humans are around in 200 years, it will become much, much more likely.
I'd be happy with a pill that speeds up the sleep/recovery process - it seems easier to achieve and it wouldn't go against millions of years of evolution...
Not at all. There have been numerous campaigns that released their product to the community upon reaching their goal, like Cards Against Humanity, The Pirate Bay - AFK documentary, the Open Goldberg Variations, and many more projects.
Notice that none of those are video games. It's great that lots of creative work is being licensed under CC BY-NC-SA, or even CC BY in a few cases. But most of those have something else they can sell later, like a physical book or cards, or are otherwise able to completely fund the entire process from the pledges. So they were able to open-source the product, have a free version, and still make enough to survive. That's not quite the same thing as completely releasing an entire game for free, although they may appear similar at first glance.
Unless they're disabled, why do people want an electric bicycle? I figured most people cycle because they love cycling and the exercise of it, including hills. Is this product meant to attract more automobilists?
It's true in the USA that a disproportionate number of people bicycle as a sport, because we've marginalized it as an athletic activity. The market of people who would bicycle if it was less effort (or safer, or more convenient) is huge.
Electric bicycles are pretty big in Japan - a lot of mothers and housewives use them to get around all the time as they don't tire you as much as normal bikes.
Also, there are a few bicycles these days with two child seats, so that's a lot of weight to cart around.
Cars cost a few thousand a year + large initial investment. Lots of people bike in cities because they can't afford to drive/pay insurance/gas/parking.
It's not always convenient (or even possible) for your everyday transportation to also be exercise. Electric assist extends biking range (and makes biking possible) for people who would otherwise need to use another mode of transport -- because you have an important meeting today and don't want to be sweaty, because you are sore from weekend exercise, because the office is a bit far, because you have to pick up groceries, because you're 70 years old, because you're 8 months pregnant, because you have a cargo bike, because you have a serious bike race later in the day....
Electric assist is fairly common in the Netherlands (I have family there), where a pretty big percentage of the population uses bikes as the basic mode of transport. It's really sensible to have the option to ride somewhere without it being a workout.
> The week after Christmas, we drove our entire lives up to our new apartment in NYC, got my wife settled in the city, and then I flew out to San Francisco to live on the other side of the country from her for 3 months.
> And it was, without a doubt, one of the best experiences of my life.