I wonder if it means a transition to Visual Studio Code at some point. I doubt it would happen soon, but I feel like there's a lot more support behind that IDE than MonoDevelop.
> MonoDevelop is a clone of Visual Studio written in C#
They might have had great intentions when MonoDevelop first launched, but it has been lagging behind other IDEs forever. With things like IntelliJ Rider, that position is not going to improve.
From my perspective Microsoft has done more with Visual Studio Code since its release 10 months ago than MonoDevelop has done in a decade. I'd argue it's easier to extend VSC to use it for C# development (which it already supports [1]) than to get MonoDevelop on par with modern IDEs.
And despite being written in JS/TS, the IDE is very, very fast.
The vscode source is very well "layered". It's not totally inconceivable that the vscode folks or some other team take it and allow you to choose at build time whether to target its current DOM-backed UI or one backed by more "native" drawing routines and widgets.
It's weird to bring up slowness when VSCode is much more amenable for use on machines with meager specs than MonoDevelop is. Why do you care what language your editor is written in unless you're submitting patches, anyway?
OmniSharp powered intellisense and other features are easy enough to stick in VS Code. It's easier to package and deploy than MonoDevelop. Much easier (at least for me) to write extensions for VS Code. I see it as a win-win. MonoDevelop has always been a frustrating experience for me. While VS Code isn't perfect, I'm less frustrated using it than MonoDevelop.
A company in the UK is working on a mixed-propulsion rocket/launch vehicle that would be reusable. It pairs a pre-cooled jet engine with a rocket booster, for once you get above the altitude where there's enough air pressure for the jet engine.
Something like this, being reusable, would allow small (but larger-than-cubesat) payloads into orbit for what should be a reasonable price. The plan is a 200-launch lifetime for the vehicle.
Seems like a really interesting idea. For now I've got my money on SpaceX decreasing the cost to orbit first: they already have hardware up and flying, and it's a bit less exotic, so the engineering issues may be better understood.
I agree, I think SpaceX will have reusable rockets with the Falcon 9 long before this is working. But I believe you could extend jet-powered rockets to larger launch vehicles than you could the tech in the Falcon 9. Plus, the tech might be extendable to cheaper supersonic flights. But that's a long way down the road.
I think my problem was using the Custom Service style, rather than inheriting from ServiceControl. There are a number of ways to interact with the HostConfiguration.Service<T>() method and its overloads, but there's only one tiny section in the documentation.
TBH, I think you might want to deprecate that whole way of using TopShelf. Using ServiceControl is much much easier and better documented. I think if one wants to keep TopShelf out of their core code, it's easier to create a separate "ServiceWrapper" project that has a ServiceControl subclass and calls into your other code, rather than using the HostControl.Service() style.
From talking to a lot of people at conferences and similar events, rebasing against the target branch before merging seems pretty uncommon. Few people think a clean history is that important.
Personally, I think rebase plus autosquash/fixup makes the history a lot easier to read when it's time to look back. It just happens so rarely that I wonder if it's really worth the effort I expend on it.
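For reference, the fixup/autosquash flow looks roughly like this -- a self-contained toy sketch; the branch name and commit messages are made up:

```shell
# Throwaway repo so the commands are runnable end to end.
set -e
cd "$(mktemp -d)"
git init -q
git symbolic-ref HEAD refs/heads/master      # pin the initial branch name
git config user.email demo@example.com
git config user.name demo

echo one > f.txt && git add f.txt && git commit -qm "base"
git checkout -qb feature
echo two >> f.txt && git commit -qam "feature work"

# Later you spot a problem in "feature work"; record the fix as a fixup:
echo three >> f.txt
git commit -qam "fixup! feature work"        # or: git commit --fixup <sha>

# --autosquash reorders the todo list so the fixup gets folded into the
# commit it amends; GIT_SEQUENCE_EDITOR=: accepts it non-interactively.
GIT_SEQUENCE_EDITOR=: git rebase -i --autosquash master
git log --oneline    # just "feature work" on top of "base", fix included
```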
I don't know what it is, but when I try to rebase master into the feature branch, my PR diff ends up littered with commits that aren't part of the PR. Then my reviewers have to wade through a bunch of irrelevant crap to see my changes. I thought the whole point of rebasing was so that wouldn't happen.
Anyway, now I just don't do it anymore. Git is a pain in the ass.
If you rebased master into the feature branch, then what you said makes sense. I realize it may look like semantics, but you rebase your feature branch onto the master branch. This means something completely different.
Huh. If I rebase master into the feature branch, isn't that supposed to move the point at which the feature branched off of master from where it was, to the HEAD of master?
About rebasing the feature branch onto master, my team doesn't do that, we squash the feature branch commits into one commit when merging to master.
(I don't use anything besides git, but that doesn't mean I can't hate it)
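The squash-on-merge workflow mentioned above boils down to something like this (a toy sketch with made-up names; GitHub's "Squash and merge" button does roughly the same thing):

```shell
# Throwaway repo so the commands are runnable end to end.
set -e
cd "$(mktemp -d)"
git init -q
git symbolic-ref HEAD refs/heads/master      # pin the initial branch name
git config user.email demo@example.com
git config user.name demo

echo base > a.txt && git add a.txt && git commit -qm "base"
git checkout -qb feature
echo one > b.txt && git add b.txt && git commit -qm "wip 1"
echo two > c.txt && git add c.txt && git commit -qm "wip 2"

# Squash-merge: stage the branch's combined changes on master, then
# record them as ONE commit -- the wip history never reaches master.
git checkout -q master
git merge --squash -q feature
git commit -qm "feature, squashed into one commit"
git log --oneline    # master: "base" plus the single squashed commit
```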
No, if you rebase master into the feature branch, that will move all of the new commits to master into the feature branch.
Now, it could be that I am just being a stickler for phrasing here. So, to clarify: if you are on the feature branch and run 'git rebase master', that is not rebasing master into the feature branch. That is rebasing the feature branch onto master.
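To make that concrete, here's a self-contained toy sketch (branch names and commit messages are made up) of what 'git rebase master' from the feature branch does:

```shell
# Throwaway repo so the commands are runnable end to end.
set -e
cd "$(mktemp -d)"
git init -q
git symbolic-ref HEAD refs/heads/master      # pin the initial branch name
git config user.email demo@example.com
git config user.name demo

echo base > a.txt && git add a.txt && git commit -qm "base"
git checkout -qb feature
echo feature > b.txt && git add b.txt && git commit -qm "feature work"

# Meanwhile, master moves on.
git checkout -q master
echo more > c.txt && git add c.txt && git commit -qm "new work on master"

# Rebasing the FEATURE branch ONTO master: feature's commits are replayed
# on top of master's tip. feature moves; master is untouched.
git checkout -q feature
git rebase -q master
git log --oneline    # "feature work" now sits on top of "new work on master"
```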
My phraseology was wrong; apologies. Yes, that is what I was doing. You can probably tell that my default attitude towards git is one of confusion and frustration, and this is no different.
No worries. I cannot claim that it is a simple problem to just immediately understand. Worse, I am not good enough in my understanding to explain it in a message forum. :(
I can say that it is easy, once you understand it more. It will take some time.
Countering that point, though; if you have a codebase that is rapidly changing at all times... there really isn't anything git can do to help.
Well, the problem for me is what I perceive to be the discrepancy between how rebase is supposed to work, and what it actually does when I use it. I would like to think I understand it but I guess you could argue I don't. At any rate I'm not really learning about it, I just don't do it anymore because I kept getting burnt by it.
My teammates have suggested simply merging master into the branch, so I do that now. It adds a commit to the branch, but GitHub is smart enough not to litter the PR diff with the merge commit.
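For completeness, that merge-instead-of-rebase flow is just the following (toy repo, placeholder names):

```shell
# Throwaway repo so the commands are runnable end to end.
set -e
cd "$(mktemp -d)"
git init -q
git symbolic-ref HEAD refs/heads/master      # pin the initial branch name
git config user.email demo@example.com
git config user.name demo

echo base > a.txt && git add a.txt && git commit -qm "base"
git checkout -qb feature
echo feature > b.txt && git add b.txt && git commit -qm "feature work"
git checkout -q master
echo more > c.txt && git add c.txt && git commit -qm "new work on master"

# Bring master's new commits into the branch with one merge commit,
# instead of rewriting the branch's history with a rebase.
git checkout -q feature
git merge -q -m "merge master into feature" master
git log --oneline --graph
```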
Our codebase doesn't change that much, every commit to master has to go through a PR and get approved. So fortunately we don't have to contend with that.
The only thing I can say is that a rebase is exactly the same as resetting to the merge base, then cherry-picking each individual commit that was lost in the reset. This means that if you made 10 commits locally, it may stop 10 times for you to fix things, versus a merge, which just creates a single commit.
That said, I will also say that any worries about having merge commits in the history should largely be set aside. They actually provide useful information and are easy to ignore if you want.
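That reset-plus-cherry-pick description of rebase can be sketched end to end (toy repo, placeholder names); the last few git commands are the hand-rolled equivalent of 'git rebase master':

```shell
# Throwaway repo so the commands are runnable end to end.
set -e
cd "$(mktemp -d)"
git init -q
git symbolic-ref HEAD refs/heads/master      # pin the initial branch name
git config user.email demo@example.com
git config user.name demo

echo base > a.txt && git add a.txt && git commit -qm "base"
git checkout -qb feature
echo one > b.txt && git add b.txt && git commit -qm "feature: one"
echo two > c.txt && git add c.txt && git commit -qm "feature: two"
git checkout -q master
echo more > d.txt && git add d.txt && git commit -qm "new on master"

# Hand-rolled "rebase": reset to the new base, then cherry-pick each
# commit the reset threw away, oldest first.
git checkout -q feature
base=$(git merge-base master feature)
commits=$(git rev-list --reverse "$base"..feature)
git reset -q --hard master
for c in $commits; do
    git cherry-pick "$c"    # a real rebase can stop here once per conflict
done
git log --oneline    # same shape as after "git rebase master"
```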
It seems like the example of double fixing the calculation is a great reminder of the value of unit tests. Even if both commits added a unit test in slightly different ways so they didn't conflict, you'd just end up with a failing build and know something got screwed up.
> Don't the tests just run on the branch before the merge?
The merge button is only green if GitHub could generate a merge commit, and that merge commit is made available. You can run your tests on either the branch before the merge or the branch after the merge, as you prefer.
You may want to do both and have the former block the latter: the merged head is going to change (and require a re-test) any time the target branch gets a new commit, so there's no point in wasting cycles if the branch's own tests don't pass in the first place.
I think this is exactly why my wife got frustrated with law and changed careers. The judge seems just as frustrated as I would expect any reasonable person to be.
If I had to pick something, I would likely pick NYC as my next stop. There's enough cool people doing cool things and plenty to do in the city. Good luck, I hope it's a blast wherever you go!
As someone who used to write a lot of LaTeX documents, this just isn't up to snuff yet. It's slow (which they can fix), but what would make it really helpful is hints for LaTeX commands and support for packages. I haven't tried any complex documents yet; hopefully a full 200-page doc with bibliography and index would render correctly as well. It's cool, though; I hope it does end up working well.
There is a full TeXLive distribution on the back end, so all of the packages are there (including beamer, tikz, bibliography, etc.). The editor is CodeMirror, so the underlying infrastructure for auto-complete is there; I just haven't had a chance to try it out. I mostly use writeLaTeX for short documents (papers, talks, etc.), so I've never tried it with a 200-page document -- current page limit for the auto-preview is 30, but that's a bit arbitrary.