pianoben's comments | Hacker News

Android folks have good reason to have anti-Java bias. Their bias, as it happens, is against old Java, which they are constrained to use as fallout from the Oracle lawsuits of yore. Kotlin breathed new life into Android in a meaningful way.

On backend teams, I've not personally encountered much anti-JVM bias - people seem to love the platform, but not necessarily the language.

(yes I know there's desugaring that brings a little bit of contemporary Java to Android by compiling new constructs into older bytecode, but it's piecemeal and not a general solution)


Lies, damn lies.

They cherry pick whatever they feel like from OpenJDK.

And even though Oracle was right (Android is, after all, Google's J++), in this case Google had better luck than Microsoft did.

They don't take more from OpenJDK because then their anti-Java narrative doesn't work out.

But there is some schadenfreude: to keep Kotlin's compatibility story relevant, they are nonetheless obligated to keep up with what is mostly in use on Maven Central, hence the updates up to a Java 17 subset.


Maybe I'm wrong about the state of Java in Android today - it's been a few years since I did that work full-time. But I do remember when Kotlin broke on to the scene in 2015, and most of us were thrilled to finally move beyond Java 7! The embrace of a non-Java language was grassroots and genuine; Google's adoption came several years later.

J++ though, now that is a blast from the past! I think I still have a J# book from my student days, somewhere :)


ART has been updatable via the Play Store since Android 12; however, in 2026 the latest is a Java 17 subset, while the latest Java LTS is 25.

Kotlin only worked properly on Android after some folks pushed it from the inside, and then they used Java 6 vs. Kotlin samples to advocate for it.

In 2015 the latest Java version was 8, which was never properly supported on Android; the community had to come up with RetroLambda before Google created desugaring support (think Babel, but for Java).

Naturally it also meant that the performance of Java 8 features wasn't the same; e.g., lambdas make use of invokedynamic on the JVM, while on Android they used to be rewritten into nested classes.

Even today, although the Android documentation has Java and Kotlin tabs for code snippets, the Java ones hardly take advantage of modern features.

Naturally, whoever learns Java on Android gets an adulterated view of the matter.


  > But I do remember when Kotlin broke on to the scene in 2015, and most of us were thrilled to finally move beyond Java 7! 
n=1 but i was there with android studio v0.01 (or thereabouts) using kotlin for a production app because i was so sick of old-java + eclipse... google was asleep at the wheel imo and android development would be nowhere near where it is today without jetbrains


Compared to Apple and Microsoft, Android development is mostly outsourced.

None of the development environments is from Google, nor are the languages, nor the build tools for app developers (internally they use Bazel and Soong).

Naturally, having gotten into bed with JetBrains for the IDE (after leaving NDK users without IDE tooling for almost two years during the IDE transition), the deal was in place to push Kotlin as well.

I am surprised Google hasn't yet bought JetBrains.


  > I am surprised Google hasn't yet bought JetBrains.
same, but i just guess google doesn't know what they would do with them outside of supporting android

  > Compared to Apple and Microsoft, Android development is mostly outsourced.
it's true, but i wish apple worked harder on their ide because it's so barebones compared to jetbrains it's not even funny


> I am surprised Google hasn't yet bought JetBrains.

Why would they? It’s the best of both worlds. They can pay a fraction of the price while having 100% of the benefits.


Rumour has it Google tried many times but JetBrains isn't for sale.


> what about interop with Java?

From the proposal discussion[0], the runtime representation on the JVM will just be `Object`.

[0]: https://github.com/Kotlin/KEEP/discussions/447#discussioncom...


Oof. That's pretty gross: just throw away all type safety?

A `Result<T, E>` return type is way better.

This feels like it'll be viewed like Java's `Date` class: a mistake to be avoided.


Complexity in software is invisibly prefixed with "unnecessary", and usually indicates software that is difficult to maintain, or whose behavior is difficult even to verify. A really cool software architecture can scratch a similar itch as a good fugue, but that's not its typical function, nor is it the way we usually engage with software professionally.

Bach's complexity, incidentally, is seldom "for its own sake" - the pieces all fit together beautifully and without extraneous movement. Contrast that with some lesser works by later composers like Liszt, where you often get the sense that a given passage could be reduced or removed without harming the work.


Gemini was so much fun during lockdown - I loved the distraction of a new simple protocol, and the challenge of writing a gui client for it.

Can't say I'm surprised that it hasn't taken the world by storm, but it's still a cozy part of the Internet.


I completely missed out on this :'(


No doubt you were doing a myriad of other things that were worthwhile to you at the time.


Lol I "love" that the first benefit this company lists in their jobs page is "In-Office Culture". Do people actually believe that having to commute is a benefit?


You can't reduce the in-office or remote experience purely to commuting. It's just one aspect of how and where you work, and of work-life balance in general.

But since you asked, yes, I actually enjoy commuting when it is less than 30 minutes each way and especially when it involves physical activities. My best commutes have been walking and biking commutes of around 20-25 minutes each way. They give me exercise, a chance to clear my head, and provide "space" between work and home.

During 2020, I worked from home the entire time and eventually I found it just mentally wasn't good for me to work and live in the same space. I couldn't go into the office, so I started taking hour long walks at the end of every day to reset. It helped a lot.

That said, I've also done commutes of up to an hour each way by crowded train and highway driving, and those are...not good.


> provide "space" between work and home

I don't get this. This idea that 'work life balance' should mean that the two should be compartmentalised to specific blocks of time seems counterproductive to me. To me it feels like an unnatural way of living. 8 hours in which I should only focus on work, 8 hours I should focus on everything else followed by 8 hours of sleep. I don't think that is how we are supposed to operate. Even the 8 hours of sleep in one block is not natural and a recent invention. Before industrialisation people used to sleep in multiple blocks (wikipedia: polyphasic sleeping)

The idea that you have to be 'on' for 8 hours at a time seems extremely stressful to me. No wonder you need an hour afterwards just to unwind. Interleaving blocks of work and personal time over the day feels much more natural and less stressful to me. WFH makes this possible. If I'm stuck on something, I can do something else for a while, maybe even take a short nap. The ability to focus and do mentally straining work comes in waves for me. Being able to go with my natural flow makes me both happier, more relaxed and more productive.

The key to work/life balance to me is not stricter separation but instead better integration.


> This idea that 'work life balance' should mean that the two should be compartmentalised to specific blocks of time seems counterproductive to me.

Different people are different and can have different preferences.

For me, having different physical spaces helps me focus on work at work and my family at home. When they are the same physical space, both suffer. I'm not saying everyone should feel this way.


Counterpoint: when I "wfh" I end up just sleeping 90% of my work hours and smashing out actual work for the remainder. When I'm in an office I'm productive 70% of my hours and it has nothing to do with accountability, just a proper office environment (and yes I have a work area at home). Regardless of going to office or wfh, I don't have set hours.

The overarching point is everyone is different, ymmv.


> provide "space" between work and home.

This is part of the company culture. If the company respects the boundary between work and personal life, and it's a cultural value, then it shouldn't be a problem for you establishing a space even without going to the office. You just close down your work laptop, put it aside and open it up next time when it's time to work again. Of course, there's stuff like on-call shifts, and there's a temptation to just stay later and finish this one thing, but if the company culture does not expect you to be tethered to work 24x7 then it's doable. If the culture is right, you don't need a physical barrier for this to be doable.

> so I started taking hour long walks at the end of every day to reset. It helped a lot.

A good habit. I don't see why any remote worker couldn't do that.


> it shouldn't be a problem for you establishing a space even without going to the office.

No, this was nothing to do with company culture. This was just my own mental response to just always being at home. Admittedly, the pandemic accentuated this because we weren't going anywhere even on weekends and evenings. But even as things opened up and we resumed our normal socialization, I returned to the office long before most people because I needed the mental and physical distance.

I know I'm atypical. In those early days, I estimated fewer than 5% of people in my office were voluntarily returning, and even today when we're at RTO 3 days a week, most people do exactly that and no more.


[flagged]


Or maybe they don't live in the US, where everything is by car :)


I live in sweden. I assure you public transport commute is no joke.

edit: and if you live outside of the city you'll need a car anyway.


In-office culture would be dope if there were actual benefits to an office like maybe

Learning from smart people, making friends, free food and drinks, a DDR machine

My last office job had none of that. Instead it was just sort of like a depressing scaled up version of my home office


My office has some nice perks!

1. It's extremely cold and dark! I must wear extra clothes when going inside and I get depressed at wasting a day of nice weather in what looks like a WW1 bunker.

2. Terrible accessibility for disabled people! (such as myself)

3. Filthy toilets!

4. Internet is slower than at home!

5. Half the team lives somewhere else so all meetings are on teams anyway!

6. They couldn't afford a decent headset so I get pain in my head after 5 minutes, but I don't have a laptop so I can't move to a meeting room.

HR really can't understand why, after all these great perks, I insist on wanting to work from home. I am such an illogical person!


A friend works at an office that allows dogs. Her workplace is one big dog toilet! She is expected to clean it (she is not a toilet cleaner). She got sexually assaulted when her boss shoved his dog into her crotch!

There were some hospitalisations from work-related injuries... Regular bullying, threats of violence....

Lovely office culture!


I did quit a job within two weeks because of casual racism and sexual harassment in the office. But I was lucky to find something else that fast, to be able to do it.


Some people like being in office. People are different.


I'd rather commute than WFH. So yeah, people do. Maybe not all people, but certainly some.


> Do people actually believe that having to commute is a benefit?

Everything is subjective here. I don't love commuting, but I'm remote now and there are days I kind of miss it. I got a lot more podcast listening in when I did, which I really do miss, and I enjoyed getting out of the house, on a schedule, and seeing my city and area.

As for BEING in the office, yes I also miss that. I miss the friendships with people from other parts of the org that I made; I miss the getting together at lunch and talking about both work and non-work stuff; I miss the pinball machines that one enthusiast set up.

THAT SAID, I abhor the _requirement_ to be in an office; it's a top down, heavy handed, hamfisted attempt at trying to force something that IMO can only come naturally, under the guise of "CuLtUrE!", and unless forced to I won't consider any job that requires it. (NB: This, too, is a tradeoff - if it's close to my house and I've got some latitude as to what time to make it there so I can have some freedom to avoid the heaviest of traffic, sure.)

This is just another example of the "open office" concept. When that came out everyone hated it except for the C-suite that didn't have to do it, under the mistaken idea that it forces "collaboration, which is good", when the reality was that the "good" part was emergent, holistic, and natural, and any forcing function kills it. But of course we also know that it was nothing but a cost-savings issue, and the "collaboration" argument was a gaslight retcon of the highest order. Open offices actually worked when PART of the office was open, allowing collaboration _as needed_ and driven by the teams/groups that wanted to do it, not by management. RTO is exactly the same.


The trouble I have with this approach (which, conceptually, I agree with) is that it's damned hard to do anything with the parse results. Want to print that email_t? Then you're right back to char*, unless you somehow write your own I/O system that knows about your opaque conventions.

So you say, okay, I'll make an `email_to_string` function. Does it return a copy or a reference? Who frees it? etc, etc, and you're back to square one again. The idea is to keep char* and friends at "the edge", but I've never found a way to really achieve that.

Could just be my limitations as a C programmer, in which case I'd be thrilled to learn better.


Firstly, `parsing` is just a way to say "serialise from a string". The reverse operation can be done for every type you are creating. If the reverse operation (serialise to a string) does not exist in the interface then adding it gives you a single place to catch all the bugs.

I'm thinking of that recent git bug that occurred because the round-trip of `string -> type -> string` had an error (stripping out the CR character). Using a specific type for a value that is being round-tripped means that a bugfix needs to only be made in the parser function. Storing the value as simple strings would result in needing to put your fix everywhere.
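To make the round-trip point concrete, here's a minimal sketch (hypothetical names, not git's actual code): because every `string -> type -> string` trip goes through one parse function and one serialize function, a bug like the CR-stripping one has a single place to be fixed.

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical sketch: all round-trips go through branch_parse and
 * branch_str, so a fix (e.g. for stray CR characters) lands in one
 * place instead of everywhere a raw string is handled. */
typedef struct { char name[256]; } branch_t;

bool branch_parse(const char *s, branch_t *out) {
    size_t n = strlen(s);
    if (n == 0 || n >= sizeof out->name) return false;
    for (size_t i = 0; i < n; i++)
        if (s[i] == '\r' || s[i] == '\n') return false; /* reject; don't silently strip */
    memcpy(out->name, s, n + 1);
    return true;
}

/* The one serialization point: what was parsed is what comes back out. */
const char *branch_str(const branch_t *b) { return b->name; }
```

Rejecting at the parser (rather than quietly normalizing) is what keeps the round-trip lossless.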

> The trouble I have with this approach (which, conceptually, I agree with) is that it's damned hard to do anything with the parse results.

You're right - it is damn hard, but that is on purpose; if you're doing something with the email that boils down to "treat it like a `char *`" then the potential for error is large.

If you're forced to add in a new use-case to the `email_t` interface then you have reduced the space of potential errors.

For example:

> Want to print that email_t? Then you're right back to `char *`, unless you somehow write your own I/O system that knows about your opaque conventions.

is a bug waiting to surface, because it's an email, not a string, and if you decide to print an email that was read as a `char *` you might not get what you expect.

It's all a trade-off - if you want more flexibility with the value stored in a variable, then sure, you can have it but it comes at a cost: some code somewhere almost certainly will eventually use that flexibility to mismatch the type!

If you want to prevent type mismatches, then a lot of flexibility goes out the window.


Linguistic nit: deserialize from a string, serialize to a string

“Serialization” is the act of taking an internal data structure (of whatever shape and depth) and outputting it for transmission or storage. The opposite is “deserialization,” restoring the original shape and depth.


But that’s where TFA breaks down: the whole point of this is to claim “hey, C does too have type safety”. If you use a modern language with actual type safety, you can just make the underlying representation accessible (probably read-only) and you don’t need a reverse conversion step. You can have type safety, efficiency, and ease of use. Just rip the band-aid off and move to Rust or Swift already.


> the whole point of this is to claim “hey, C does too have type safety”. If you use a modern language with actual type safety,

I'm afraid the point was not some childish and immature comparison of C with modern languages.

The point was to demonstrate what type safety there is, and how to use it. The advantages of modern languages are even acknowledged:

> Much to the surprise of, well, everybody, C actually has type safety. Sure, it isn’t as enforceable as (for example) Rust… and, sure, if you are willing to do extra work you can bypass it,

The entire point of TFA is actually in TFA:

> The problem isn’t that C lacks type safety (it clearly enforces most types in most expressions), it’s that raw pointers do not encode semantics (e.g., a char * doesn’t tell you if it’s an email, a name, or a filename).


In the past I've taken inspiration from strncpy: the caller allocates the memory. For the email example, you'd probably also want a function that tells you the length of the email string, but for other types there are clear size limits. This puts the caller in control of memory allocation, so they may be able to allocate statically, allocate in an arena, or use other methods that promote performance. The static approach is really nice when it works, because there's nothing to free.
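A sketch of that caller-allocates style (the names `email_t`, `email_strlen`, and `email_to_string` are made up for illustration): the type reports the size it needs, and the copy goes into memory the caller owns, whether static, stack, or arena.

```c
#include <string.h>

/* Hypothetical sketch of a strncpy-inspired interface: the caller
 * owns the destination buffer, so static, stack, or arena allocation
 * all work and there is never anything to free. */
typedef struct { const char *raw; } email_t;

/* Length the caller must budget for (excluding the NUL terminator). */
size_t email_strlen(const email_t *e) { return strlen(e->raw); }

/* Copies into caller-owned memory. Returns bytes written, or 0 if
 * the buffer is too small; unlike strncpy, it always NUL-terminates. */
size_t email_to_string(const email_t *e, char *buf, size_t bufsz) {
    size_t n = strlen(e->raw);
    if (bufsz == 0 || n >= bufsz) return 0;
    memcpy(buf, e->raw, n + 1);
    return n;
}
```

A caller then writes `char buf[64]; if (email_to_string(&e, buf, sizeof buf)) ...` and the "who frees it?" question never arises.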


email_t doesn't have to be opaque; if it's just a visible wrapper around char* then you can still do everything with it as a char* (that is, everything you do with strings).

The benefit is to avoid treating char*s as email_t, not avoiding treating email_t as char*.


(Using a thin wrapper like this to add safety is called the newtype pattern, if anyone wants to know.)


I was curious how this would look in C, and found an article[1] showing how it could be done, apparently with very little overhead.

And as I just saw, Python 3.10 also introduced a NewType[2] wrapper. I'll have to see how that feels to handle.

1: https://blog.nelhage.com/2010/10/using-haskells-newtype-in-c...

2: https://typing.python.org/en/latest/spec/aliases.html#newtyp...


Python’s NewType is, confusingly, a very different thing: it’s a compile-time-only subtype of the original, rather than a Haskell-style newtype (which is an entirely separate type from its source).


I've re-read the article again since getting a bunch of up and down votes across the comment section, and I think you've chosen a better name for this article than PdV. It really is just about using newtype wrappers.


In the example code they explicitly put the struct in the c file so the char* is not available.

If you're suggesting getting around this by casting an email_t* to char* then I wish you good luck on your adventures. There's some times you gotta do stuff like that but this ain't it.


You could probably get away with the typecast if you satisfy the "common struct prefix" requirement, but that's nowhere near necessary.

While the article does hide the internal char*, that's not strictly necessary to get the benefit of "parse, don't validate". Hide implementation details sure, but not everything is an implementation detail.


The main benefit for me with this approach is that the boundaries are no longer transparent. That content printing is such a boundary: your data is about to exit through there, and you're summoned to handle that. The inconvenience that comes with it is like any other that arises when security enters the play. The same goes for the data-management responsibilities: who handles what, for how long, and with whom. Without data-type distinctions everything is (more or less) common, with vague or broadly defined ownership.


One of the largest codebases I've ever worked on, generating billions in revenue every year to this day, is in Rails. Over a thousand hands have touched it, and none of the original people are still around to hold an empire.

I'm with you on the general lack of discipline enforced by Rails; this codebase isn't fun to maintain, precisely for that reason. All the same, I don't think your critique is fair or even that accurate.

But that's from my POV working at bigger companies. Maybe it looks different as a freelancer for smaller shops.


By the sound of it, your POV working at one big company. The codebases I've worked on that did similar ARR numbers with similar depths of commit history were in Javascript and TypeScript, so "your argument is invalid," right? Why not? It's just what you said and my credentials are better. What's the most in revenue one second of your life was ever worth? For me that's about $35k. But no, you go ahead, try to big league me with numbers some more.

If you think my critique is unjust or inaccurate, then attack it, not my standing to speak on the subject. Especially not when I'm the more forthcoming of the two of us when it comes to professional history, anyway; mine is findable from my HN profile, while you prefer true pseudonymy. To argue from authority as you've tried is quite risible with none of that even in evidence, don't you think?


I don't mean to say one of us has better bona-fides, only that there is an existence proof to the contrary of your post. You claim that rails' lack of discipline promotes unmaintainable code shepherded by empire-builders; I claim that this is not always so. I gave the numbers I did to emphasize that rails (and rails shops) can succeed even at that scale.

Not sure what I said that came off as an attack on you or your standing. Not my intent.


I don't see an attack either, FWIW


You should work on that. Technical discussion would, I think, better resume from https://news.ycombinator.com/item?id=44254539 But I haven't said that no Rails project ever meets its definition of success. I assumed the scare quotes in my earlier comment would suffice to indicate that my argument intended to contradict that definition.


Of course that's only half the story - Microsoft invents amazing things, and promptly fails to capitalize on them.

AJAX, that venerable piece of kit that enabled every dynamic web-app ever, was a Microsoft invention. It didn't really take off, though, until Google made some maps with it.


That would segue into a discussion about NetDocs at Microsoft.

background - https://en.wikipedia.org/wiki/Ajax_(programming) - 1998, XMLHttpRequest appears.

NetDocs vs Office - https://hardcoresoftware.learningbyshipping.com/p/064-the-st...

Resolving NetDocs vs Office [Spoiler Alert - if you don't know about NetDocs, there's a big reason which tells you who "won" this fight] - https://hardcoresoftware.learningbyshipping.com/p/071-resolv...

Aftermath - https://www.eweek.com/development/netdocs-succumbs-to-xp/


I don't know that they ever used it internally, certainly not for anything major. If they had, they probably wouldn't have sold it as it was...

Can't explain TFS though, that was still garbage internally and externally.


Yep! A formative experience of my childhood was working out how to type SMTP commands over telnet and sending mail from billg@microsoft.com to my dad. Such "opportunities" vanished decades ago.

Fun times :)


Worked at an aerospace concern in the early 90s… for the first year or so there was no firewall. Yes, my Mac and PC directly on the internet with routable addresses.

I soon set up a website and webcam as they were shipped. CU-See-Me blew my mind. At some point I stood up a Quake server and invited friends to play. ;-)

