Hacker News | legomaster's comments

I wonder if it means a transition to Visual Studio Code at some point. I doubt it would happen soon, but I feel like there's a lot more support behind that IDE than MonoDevelop.


Visual Studio Code is an entirely different animal, built with JavaScript on top of node.js (like Atom).

MonoDevelop is a clone of Visual Studio written in C# which isn't even remotely compatible with the mis-named Visual Studio Code.

If they want to port MonoDevelop to Visual Studio Code, they would have to re-write all the features from scratch.

At this point they might as well re-write Visual Studio to node.js and not waste time on porting MonoDevelop.

I hope it doesn't happen. I certainly don't want to develop on an IDE written on top of a God-awful language with a ridiculously slow run-time.


> MonoDevelop is a clone of Visual Studio written in C#

They might have had great intentions for MonoDevelop when it first launched, but it has been lagging behind other IDEs forever. With things like IntelliJ Rider, this position is not going to improve.

From my perspective Microsoft has done more with Visual Studio Code since its release 10 months ago than MonoDevelop has done in a decade. I'd argue it's easier to extend VSC to use it for C# development (which it already supports [1]) than to get MonoDevelop on par with modern IDEs.

And despite being JS/TS, the IDE is very very fast.

[1] https://code.visualstudio.com/Docs/languages/csharp


Despite that, Visual Studio Code is pretty awesome.


I agree, but I wish they had chosen another platform to build it on.

Maybe when we finally have a universal bytecode for the web and ECMAScript 6 then developing on it will be less painful. Let us hope.


The vscode source is very well "layered". It's not totally inconceivable that the vscode folks or some other team take it and allow you to choose at build time whether to target its current DOM-backed UI or one backed by more "native" drawing routines and widgets.


It's weird to bring up slowness when VSCode is much more amenable for use on machines with meager specs than MonoDevelop is. Why do you care what language your editor is written in unless you're submitting patches, anyway?


OmniSharp powered intellisense and other features are easy enough to stick in VS Code. It's easier to package and deploy than MonoDevelop. Much easier (at least for me) to write extensions for VS Code. I see it as a win-win. MonoDevelop has always been a frustrating experience for me. While VS Code isn't perfect, I'm less frustrated using it than MonoDevelop.


> OmniSharp powered intellisense and other features are easy enough to stick in VS Code

Already shipping with it. See:

1. https://github.com/Microsoft/vscode/issues/3029

2. https://github.com/OmniSharp/omnisharp-vscode


A company in the UK is working on a reusable, dual-mode rocket/launch vehicle. It mixes a super-cooled jet engine with a rocket booster for once you climb beyond the altitude where there's enough air pressure for the jet engine.

https://en.wikipedia.org/wiki/SABRE_(rocket_engine)

https://en.wikipedia.org/wiki/Skylon_(spacecraft)

Something like this, being reusable, would allow small (but larger-than-cubesat) payloads into orbit for what should be a reasonable price. The plan is a 200-launch lifetime for each vehicle.


Seems like a really interesting idea. For now I've got my money on SpaceX in terms of decreasing the cost to orbit first: they already have the hardware up and flying, and it's a bit less exotic, so the engineering issues may be better understood.


I agree, I think SpaceX will have re-usable rockets with the Falcon 9 long before this is working. But I believe you could extend jet-powered rockets to larger launch vehicles than you could the tech in the Falcon 9. Plus, the tech might be extendable to cheaper super-sonic flights. But that's a long way down the road.


I agree, it's quite an exciting proposition if we can get a real space plane.


Topshelf's documentation does need some more TLC -- if there's something specific you thought was lacking I can take a look at updating it.

I'm glad to hear you're getting value out of it! Last place I was at, we had about 30 distinct services running. It was a huge bonus to have Topshelf.


I think my problem was using the Custom Service style, rather than inheriting from ServiceControl. There are a number of ways to interact with the HostConfiguration.Service<T>() method and its overloads, but there's only one tiny section in the documentation.

TBH, I think you might want to deprecate that whole way of using Topshelf. Using ServiceControl is much, much easier and better documented. I think if one wants to keep Topshelf out of their core code, it's easier to create a separate "ServiceWrapper" project that has a ServiceControl subclass and calls into your other code, rather than using the HostControl.Service() style.


Talking to a lot of people at conferences and similar events, rebasing against the target branch before merging is pretty uncommon. Few people think a clean history is that important.

Personally I think rebase + autosquash/fixup commits make the history a lot easier to follow when it's time to look back. It just happens so rarely I wonder if it's really worth the effort I expend on it.
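For anyone who hasn't tried that workflow, here's a throwaway-repo sketch (branch and file names are made up) of `git commit --fixup` plus a non-interactive autosquash rebase:

```shell
# Make a tiny repo with a feature branch off master.
dir=$(mktemp -d)
cd "$dir"
git init -q
git checkout -qb master
git config user.email you@example.com
git config user.name you

echo base > f.txt && git add f.txt && git commit -qm "base"
git checkout -qb feature
echo feature > g.txt && git add g.txt && git commit -qm "add feature"

# Review feedback arrives: amend the earlier commit instead of adding noise.
echo fix >> g.txt
git commit -qa --fixup HEAD    # records a "fixup! add feature" commit

# Accept the generated todo list as-is; the fixup commit gets folded in.
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash master

git log --format=%s    # just "add feature" and "base"; the fixup is gone
```

The `GIT_SEQUENCE_EDITOR=true` trick is what makes it cheap enough to do routinely: the reordered/squashed todo list is accepted without opening an editor.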


YMMV. I've seen people all over the map on this.

I have noticed that among people I respect, there is a strong correlation between using git-bisect and wanting a clean history.


I don't know what it is, but when I try to rebase master into the feature branch, my PR diff ends up littered with commits that aren't part of the PR. Then my reviewers have to wade through a bunch of irrelevant crap to see my changes. I thought the whole point of rebasing was so that wouldn't happen.

Anyway, now I just don't do it anymore. Git is a pain in the ass.


If you rebased master into the feature branch, then what you said makes sense. I realize it may look like semantics, but you rebase your feature branch onto the master branch. This means something completely different.

I am curious what you use as an alternative.


Huh. If I rebase master into the feature branch, isn't that supposed to move the point at which the feature branched off of master from where it was, to the HEAD of master?

About rebasing the feature branch onto master, my team doesn't do that, we squash the feature branch commits into one commit when merging to master.

(I don't use anything besides git, but that doesn't mean I can't hate it)
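For reference, a throwaway-repo sketch (branch and file names are made up) of that squash-on-merge workflow:

```shell
# Make a tiny repo with two work-in-progress commits on a feature branch.
dir=$(mktemp -d)
cd "$dir"
git init -q
git checkout -qb master
git config user.email you@example.com
git config user.name you

echo base > f.txt && git add f.txt && git commit -qm "base"
git checkout -qb feature
echo one >> f.txt && git commit -qam "wip 1"
echo two >> f.txt && git commit -qam "wip 2"

# Squash-merge: stage the combined diff, then record exactly one commit.
git checkout -q master
git merge --squash -q feature
git commit -qm "feature, squashed"

git log --format=%s    # "feature, squashed" and "base" -- two commits total
```

Note that `git merge --squash` stages the changes but deliberately does not commit, so the follow-up `git commit` is required.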


No, if you rebase master into the feature branch, that will move all of the new commits to master into the feature branch.

Now, it could be that I am just being a stickler for phrasing here. So, to clarify, if you are on the branch and run 'git rebase master', that is not rebasing master into the feature branch. That is rebasing the feature branch onto master.

So, is that what you were doing?


My phraseology was wrong, apologies. Yes, that is what I was doing. You can probably tell that my default attitude towards git is one of confusion and frustration; this is no different.


No worries. I cannot claim that it is a simple problem to just immediately understand. Worse, my understanding is not good enough to explain it well in a message forum. :(

I can say that it is easy, once you understand it more. It will take some time.

Countering that point, though: if you have a codebase that is rapidly changing at all times... there really isn't anything git can do to help.


Well, the problem for me is what I perceive to be the discrepancy between how rebase is supposed to work, and what it actually does when I use it. I would like to think I understand it but I guess you could argue I don't. At any rate I'm not really learning about it, I just don't do it anymore because I kept getting burnt by it.

My teammates have suggested to simply merge master into the branch, so I do that now. It adds a commit to the branch, but GitHub is smart enough not to litter up the PR diff with the merge commit.

Our codebase doesn't change that much, every commit to master has to go through a PR and get approved. So fortunately we don't have to contend with that.


Only thing I can say is that rebase is exactly the same as resetting to the merge base, then cherry-picking each individual commit that was lost in the reset. This means if you did 10 commits locally, it may stop 10 times to have you fix things. Versus a merge, which just does a single commit.
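To make that concrete, here's a throwaway-repo sketch (names made up) that does the reset + cherry-pick sequence by hand, matching what `git rebase master` would do:

```shell
# Make a tiny repo: master gains a commit after feature branches off.
dir=$(mktemp -d)
cd "$dir"
git init -q
git checkout -qb master
git config user.email you@example.com
git config user.name you

echo base > f.txt && git add f.txt && git commit -qm "base"
git checkout -qb feature
echo one > one.txt && git add one.txt && git commit -qm "one"
echo two > two.txt && git add two.txt && git commit -qm "two"
git checkout -q master
echo main > main.txt && git add main.txt && git commit -qm "main"

# Manual equivalent of `git checkout feature && git rebase master`:
git checkout -q feature
base=$(git merge-base master feature)
commits=$(git rev-list --reverse "$base"..feature)
git reset -q --hard master
for c in $commits; do
    git cherry-pick "$c"    # each pick is where a conflict could stop you
done

git log --format=%s    # two, one, main, base -- feature replayed onto master
```

The loop is why a rebase can interrupt you once per commit, while a merge resolves everything in a single combined commit.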

That said, I will also say that worries about having merge commits in the history are largely overblown. They actually provide useful information and are easy to ignore if you want.


It seems like the example of double fixing the calculation is a great reminder of the value of unit tests. Even if both commits added a unit test in slightly different ways so they didn't conflict, you'd just end up with a failing build and know something got screwed up.


But still the test would only fail after the merge, whereas you'd want to catch this before.


On GitHub you can have tests run before you've merged, so you know whether it's safe or not.


Don't the tests just run on the branch before the merge? If so, you wouldn't actually see the failure until after you merge.


> Don't the tests just run on the branch before the merge?

The merge button is only green if GitHub could generate a merge commit, and that merge commit is made available. You can run your tests on either the branch before the merge or the branch after the merge, as you prefer.

You may want to do both and have the former block the latter: the merged head is going to change (and require a re-test) any time the target branch gets a new commit, so there's no point in wasting cycles if the branch's own tests don't pass in the first place.


You can do either. Or both if you really want.

    $ git ls-remote
    From https://github.com/cyaninc/git-fat.git
    4c86c39bb5fca55a692d11680ed62fd5cb183921	refs/pull/14/head
    cb850f9d0bd09f8bb903e1c5959cef24ceb51695	refs/pull/14/merge
The merge ref is what the code would be if you merged it.


I'm guessing this is because a merge commit is created before the merge is fully executed.


Yeah - we run both the pull request and the master + branch merge every time.


Can't you run unit tests before you merge on an Atlassian service too?


I'm not sure about BitBucket as I don't use it, but I expect so.


You can, though you don't get to do it by using the refs/pull/## trick. That's just a convenience; you can get CI to do about anything.


Fair point!


I think this is exactly why my wife got frustrated with law and changed careers. The judge seems just as frustrated as I would expect any reasonable person to be.


If I had to pick something, I would likely pick NYC as my next stop. There's enough cool people doing cool things and plenty to do in the city. Good luck, I hope it's a blast wherever you go!


As someone who used to write a lot of LaTeX documents, this just isn't up to snuff yet. It's slow (which they can fix), but what would make it really helpful is hints for LaTeX commands and support for packages. I haven't tried any complex documents yet; hopefully a full 200-page doc with bibliography and index would render correctly as well. It's cool though, I hope it does end up working well.


Creator here...

Thanks for your comments!

There is a full TeXLive distribution on the back end, so all of the packages are there (including beamer, tikz, bibliography, etc.). The editor is CodeMirror, so the underlying infrastructure for auto-complete is there; I just haven't had a chance to try it out. I mostly use writeLaTeX for short documents (papers, talks, etc.), so I've never tried it with a 200-page document -- current page limit for the auto-preview is 30, but that's a bit arbitrary.

