Hacker News | dustin1114's comments

I wrote a relatively simple web application (https://github.com/DSpeckhals/bible.rs) about 9 months ago with version 0.7. The initial code changes to migrate to 1.0 weren't that bad (https://github.com/DSpeckhals/bible.rs/commit/fbd7e8207023a0...). The most substantial changes I made during the migration weren't necessary, but according to the author, are a little more idiomatic actix-web: moving from sync arbiter actors for Diesel connections to just using `web::block`. Both use a threadpool behind the scenes, but `web::block` is less verbose.

I've been extremely satisfied with the performance and the ergonomic abstractions over HTTP and async Rust that actix-web offers. And like others have mentioned, the author and other contributors gave me good, practical answers to a few questions I had.


I think I like most of what this document proposes except for the following:

  In strong code, accessing objects (strong or not) throws on missing properties.
  New object properties have to be defined explicitly and cannot be removed
  from strong objects.
To me, this seems to break a fundamental aspect of the language. I've found it very convenient to be able to define an object property "on the fly." However, with strong mode trying to make the language friendlier to eventual static typing, I see the necessity. It just boggles my dynamically typed mind :-)
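For concreteness, here's a small sketch of the behavior under discussion. Strong mode itself hasn't shipped, so only today's semantics are runnable; the comments note what the proposal would change:

```javascript
const point = { x: 1, y: 2 };

console.log(point.z); // undefined today; in strong code this access would throw
point.z = 3;          // allowed today; strong mode requires properties to be defined explicitly
delete point.x;       // allowed today; strong mode forbids removing properties from strong objects
```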


It does break fundamental uses of the language. Throwing on missing properties is a near-hostile change for most current JS developers.

The given justifications for doing this, though, seem to be all about performance rather than an attempt to evolve the language.

If that's true, I think it's possible the proposal has a naming problem.

Calling it "use strong" implies that this is about evolving the JS language so that devs are spending time writing more strongly-typed code.

Calling "use optimize" would make it a lot clearer that this is not an attempt to Java-ify JS, and this is more something you'd primarily invoke for performance-critical code paths.


I fail to see how it breaks anything; it's opt-in. Aside from that, "use optimize" is indeed a better name.


It is supposed to be backwards compatible.

    "use strong"
    try {
        foo = bar[maybeMissing]
    } catch {
        // only runs in strong mode
    }
The above will take different code paths depending on whether your JS engine supports "strong" mode.


That's a good spot.

I suppose it's a lot easier to declare one's intention to create a backwards-compatible subset than to actually create one.

Hope the V8 people are open to revisions on the details at least.


This is true of strict mode as well, which shipped in ES5.

    "use strict";
    try {
      a = true;
    } catch(e) {
      // only runs in strict mode
    }
It's true of any mode switch that makes the language smaller. You shouldn't write non-strict (or non-strong) code if you opted into that mode, and catching those errors defeats the purpose of using it.

But it's not only true of mode switches...

    try {
      JSON.parse("{}");
    } catch(e) {
      // only runs in browsers that don't support JSON.parse
    }

    try {
      [ 1, 2, 3 ].forEach(function(x) { /* ... */ });
    } catch(e) {
      // only runs in browsers that don't support forEach
    }
Or even just:

    const x = 10;
    // only runs in browsers that support const
Any language change can cause differences between what executes in one browser vs. what executes in another. What strong mode guarantees is that if your code doesn't throw errors in strong mode, it won't throw errors in non-strong-mode (which is more than many changes guarantee!). Any other kinds of compatibility guarantees are impossible to make unless your changes are literally meaningless.


Indeed (regarding your first point), my bad. I hadn't read the complete proposal, just the SaneScript slides, where that point was not fully developed. From TFA, emphasis is mine:

However, a mode directive has the significant advantage that any program *not hitting any of the strong mode restrictions* should run unchanged in a VM not recognising the directive, and no translation step should be required.

Your next two points are different: the standard library additions can be polyfilled and the syntax change is intentionally backwards incompatible.


That's funny; I think that's the only part of what the document proposes that I like. Well, okay, that's not really true; there are a lot of bits that just make sense, either because they hurt performance for little benefit (holes in arrays) or because they're just dumb (arguments.caller).

But I don't like gratuitously locking things down in a way that makes highly dynamically-typed code harder to write, where it doesn't seem to solve a real, unavoidable performance problem: for example, the ban on constructors leaking 'this', and the oddly specific recursion limitations. In general, the document seems to express the sentiment that only statically typed code matters, which I think is short-sighted. I also dislike, among other performance-unrelated changes, the "let's fix C" syntax bits, such as banning fallthrough, which is likely to annoy programmers familiar with C (who are used to occasionally writing code that relies on it).

However, making nonexistent property accesses silently return undefined is just an amazing way to ensure typos in property names never get caught. I don't think `foo.bar || baz` is much to sacrifice - Python, for example, has getattr(foo, 'bar', baz), which works fine, and has the benefit of returning foo.bar if it exists at all, not just if it's a truthy value.


My experience is that the Python way results in long, obfuscating chains of access checks. It's a fine idea not to make it the default behaviour, but there should be a way to do a chained check à la CoffeeScript's '?.' to ease deep accesses, especially since in JS objects are commonly used as data structures.
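For illustration, here's roughly what that looks like; the '?.' form assumes an engine or transpiler that supports existential access (this CoffeeScript syntax later became standard JS optional chaining):

```javascript
const user = { profile: { address: null } };

// Longhand chained existence checks quickly get verbose:
const city1 = user && user.profile && user.profile.address && user.profile.address.city;

// CoffeeScript-style '?.' collapses the chain:
const city2 = user?.profile?.address?.city;

console.log(city1, city2); // null-ish values instead of a TypeError
```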


I've found that even in Python, only statically typed code matters - at least, statically enough that I can be confident that it works. I don't think the ability to have an object also be a map is especially valuable; code where everything is one or the other is clearer. These changes won't affect the ability to monkeypatch methods on individual instances (that's the one "really dynamic" thing that I find actually useful), will they?


It's kind of funny. ES6 seems to be making JS more "Python", while "strong mode" makes it more like Java/C# (at least to me). I'm open to worthwhile changes; I'm just thinking about the tens of thousands of lines of JS I've written with various utilities and libraries, and how they will eventually fit into ES6...I'm not too sure about strong mode, though. I guess we'll see how it all pans out.


Being able to do

  foo = bar.x || 3;
rather than

  foo = bar.hasOwnProperty('x') ? bar.x : 3;
is nice.


Personally, I would say

    foo = ('x' in bar) ? bar.x : 3;
instead. The problem with your code is that if the property bar.x exists, but is one of any number of values, like 0 or false, your code will still set foo to 3. Requiring properties to be explicitly created means that you're separating existence from value, which are two very different things in my book.
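A quick sketch of the difference, using a present-but-falsy value:

```javascript
const bar = { x: 0 };

const viaOr = bar.x || 3;               // 3: the present-but-falsy 0 is silently replaced
const viaIn = ('x' in bar) ? bar.x : 3; // 0: existence is checked separately from value
```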


Which are some of the most insidious bugs possible in JS. I shudder every time I see the syntax in your post's parent. It's a red flag. Your solution is good. I also use typeof bar.x !== 'undefined' a lot, since it's very explicit.


Every time?

I mean, very often (for example) you're pulling in some JSON from a server app, which will have types properly enforced at the database and/or application level. There are still gotchas around defaulting to true, whether or not an empty string is a valid value, and so on, but there are many cases where foo || bar is safe enough.


I have to disagree with you there. It's simply not a good idea to have a works-sometimes syntax for "if property is unset". The brevity doesn't make up for the fact that you sometimes have to fall back on the explicit check anyway. It may look clever, but it's an abuse, and you're eventually going to get a production error unless you're testing for it; and if you've got tests for that, brevity has already lost.


Well, I'm not responsible for your code, so do whatever you want. But this, for me, is like being Van Halen and seeing brown M&Ms in the bowl. When I see that syntax, it's a red flag that I should be much more defensive about what's happening everywhere else in the code. There are no assurances about data, especially data coming across the network.


Agreed. And unless I'm reading the document wrong, this would also be prohibited in strong mode:

  let x = {
    keyA: "valueA"
  };
  x.keyB = "valueB";


I think they're trying to push people toward [Maps].

[Maps]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


Yep, they're explicitly trying to do that.

For what it's worth, you can still use "options hashes" in your argument APIs in strong mode using objects; you just write a library function something like:

    function options(options, defaults) {
      // the final args start off as a copy of the default args
      let args = Object.assign({}, defaults);

      // we then loop through the keys and copy in any overrides
      for(let key of Object.keys(args)) {
        // ignore inherited properties and skip missing ones
        if(args.hasOwnProperty(key) && options.hasOwnProperty(key)) {
          args[key] = options[key];
        }
      }

      // args now has all of the overrides from options
      return args;
    }
And then in all your functions that take options objects:

    function bakeBread(ingredientOverrides) {
      let ingredients = options(ingredientOverrides, {
        flourType: 'whole wheat',
        sugarAmount: '3 tbsp',
        waterAmount: '1 cup',
        milkAmount: '0.3 cups',
        flourAmount: '4 cups'
      });

      let batter = mix(ingredients);
      return bake(batter);
    }
JS is still quite dynamic, even in strong mode — you can define arbitrary objects and types at runtime, and easily inspect/reflect on them — it's just a little harder to silently corrupt data.
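As an aside, much of such a helper can be written with Object.assign; the caveat, and why this shorter sketch is not a drop-in replacement for an options() helper that only copies keys present in the defaults, is that it also keeps override keys that have no default:

```javascript
// A shorter options helper using Object.assign. Note that unknown
// override keys are copied through rather than ignored.
function optionsShort(overrides, defaults) {
  return Object.assign({}, defaults, overrides);
}

const args = optionsShort(
  { sugarAmount: '1 tbsp' },
  { sugarAmount: '3 tbsp', flourAmount: '4 cups' }
);
// args.sugarAmount === '1 tbsp'; args.flourAmount === '4 cups'
```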

The neat thing about using named arguments with objects is that in typed variants of JS — for example, TypeScript, or perhaps someday SoundScript — you can actually typecheck them! Maps can't do that in any language I know of: by design they can contain anything.


The Java (5+) Map interface can be typed using generics (which -- if JS eventually introduces static typing -- I hope is implemented).

  Map<String, Integer> map = new HashMap<String, Integer>();
That declares a Map with a String key and Integer value. Is this what you're thinking of?


Not what I was thinking of, but I should've been more explicit. Using generics you can type the keys and values of Maps in any language with generic support that I'm familiar with: certainly C++ and Java can. But you can't make type assertions about certain values existing under certain keys: that's what structs/classes/etc are for. But, those constructs can't easily be generated at runtime, and are typed nominally: even just as a caller you have to explicitly say they inherit (or in Java, implement) a specific type. However, with TypeScript-like structural subtyping, you can do:

    interface BreadIngredientOptions {
      flourType?: string; // this is the syntax for optional strings
      sugarAmount?: string; // ditto: it's the ? that makes it optional
      // ...
    }

    function bakeBread(ingredientOverrides: BreadIngredientOptions) {
      // ...
    }

    // callers don't need to explicitly inherit or implement to be type checked
    // however, since all properties are optional, this is less interesting
    bakeBread({
      flourType: 'white'
    });
But you can do even better than that example shows. One common problem with maps-as-named-arguments is that you can't easily determine which arguments are required and which are optional. With typed optional properties and structural subtyping you can enforce that at compile time, as follows:

    interface MyArgumentInterface {
      requiredArg: number;
      optionalArg?: number;
    }

    function f(args: MyArgumentInterface) {
      // ...
    }

    // This works:
    f({
      requiredArg: 10,
      optionalArg: 5
    });

    // This also works:
    f({ requiredArg: 50 });

    // This fails to type-check at compile time:
    f({ optionalArg: 10 });
It's a combination of the simple object literal syntax from raw JS that makes it easy to create objects of arbitrary types, with structural subtyping. I'm not aware of any language with the same features (but would love to be corrected!).


I'm torn here. I agree that this is one of the Nice Things about javascript - I can do stuff like

  let foo = opts.foo || 'default'
Which is nice. On the other hand, if you look at how V8 does its JIT compiling, it seems like there's just some things you can't optimize around, and they've gotten as far as they can reasonably be expected to get there. Having object schema that can change on the fly is just really hard to JIT efficiently.
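To illustrate the shape problem: in V8, objects whose properties are added in different orders end up with different hidden classes, even with the same keys. The hidden classes themselves aren't observable from JS; this just shows the pattern engines have trouble with:

```javascript
// Same keys, different insertion order: two distinct shapes in V8.
function makeA() { const o = {}; o.x = 1; o.y = 2; return o; }
function makeB() { const o = {}; o.y = 2; o.x = 1; return o; }

// A call site that sees both shapes becomes polymorphic, which is harder
// for the JIT to optimize than a single stable shape.
function sum(o) { return o.x + o.y; }
console.log(sum(makeA()), sum(makeB())); // 3 3
```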

My worry is that strong mode becomes the de facto standard, and we end up losing the flexibility and expressiveness of well-written JS, and end up with static typing all over. If your JS is compact and otherwise well-formed, you can probably afford the occasional compiler hit, knowing that the code is a lot easier to write.


I agree, and I come from a strongly typed background (.NET). It is extremely liberating and useful to just add properties to an object at any time, even ones I didn't create. It's kind of like slapping a sticker on something for later reference, whereas a strongly typed object just explodes when you apply the sticker.

I understand that there are performance issues with doing things this way, but it seems like you could have the strongly typed base while still allowing expando properties that may be slower. In .NET they added `dynamic`, but I find it of limited use because it must be used explicitly, rather than plain Object allowing it, which is what you're going to get from most libraries.


I don't think performance is the primary reason behind this. The motivation section in the article states in the very first sentence:

Silent property failures and the resulting proliferation of 'undefined' are the most prominent mistake in JavaScript, and can be rather tedious to debug (very much like null pointer exceptions in other languages, but much more common).

Having different objects (that were created in the same way and represent the same thing) with potentially different properties at different points in their lifetime can easily lead to a codebase where you can never be certain of anything about such objects and have to do loads of explicit checking every time you use them. Especially in a larger codebase developed by more than one person.

It's a nice feature for whipping up some quick prototypes though.


It's not just for quick prototypes though; many times for me it's about marking up objects from other libraries with my own data. I'm extending them one-off and not changing the base behavior in any way, so this should cause no issues for anyone else using that object. Yes, that can be done with inheritance in, say, C#, so long as the base class isn't sealed, which many library classes from MS are, ugh.

The undefined issue seems no different from the null reference you mentioned. I run into it constantly in C#; in fact everyone does, which is why they're adding the null-conditional operator (?.). JavaScript could do the same for both undefined and null.

I may be weird, but I actually like the fact that JavaScript has both undefined and null. It allows an extra state over just null in C#, which I constantly wish I had, typically for things like data/domain objects. With undefined and null it's trivial to encode the fact that a property was never loaded from a db or sent from a client, versus it being loaded but null, or sent over the network with a null value.
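A sketch of that encoding, with a hypothetical record object:

```javascript
// email was loaded from the database and its value is NULL;
// phone was never loaded at all.
const record = { name: 'Ada', email: null };

console.log('email' in record); // true:  present, value is null
console.log('phone' in record); // false: never loaded
console.log(record.phone);      // undefined
```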


Disclaimer: I'm a son of the Midwest :-)

This article hit home for me. Though I technically now live in a medium-sized mid-Atlantic city, the culture here is more like the Midwest. I'm not against the Bay Area or NYC, but I couldn't imagine living in those places: the congestion, the cost of living, and the lack of that "down-home" feeling.

I believe it would be more difficult to found a technology-centric company where I live, though. As an example, the company I work for, though very large, is based in a small city. For years, there was not much of an issue attracting new talent. The problem is that our IT organization has grown immensely recently, and attracting new "hacker" talent into the middle of the country is a huge obstacle. One of the solutions was opening another IT office in New Jersey, just a few miles from NYC. Problem solved.

But then there are those of us who grew up in the Midwest. I love being able to buy and own a nice home for under $200K. I enjoy being able to drive to work with minimal traffic. I even like reading about living in the Bay Area on HN and laughing at the things so many have to deal with! But the truth is, tech people thrive in the Bay Area, and I'd say the Bay thrives on them. But there will always be a few of us engineers who live in "flyover" country :-)


I like the idea of a WP API. I guess in the future you could more easily build non-browser apps that utilize WP data. Of course, there's always going to be the people who don't very much like the whole front-end JavaScript MVC idea (Backbone, React, Angular, etc.), but there will still be the regular PHP themes for them to use.

My only wish is that a proliferation of horrible themes based on inefficient JS does not occur. Front-end MVC can be done right, but don't abuse it with bad code that gives the rest a bad name!


I just wish there were a WP management API. Like, let me remotely do updates, backups, user administration etc. against a bunch of WP sites from a single console (or better yet, command line script), rather than having to log in to each one individually to do that stuff, which is a pain.

EDIT: So it turns out there's a third party project that kinda sorta does this: WP-CLI (http://wp-cli.org/). Anyone used it?


I've written batch scripts with WP-CLI and mysql to spin up new sites from scratch: creating the database, installing the latest core and a suite of base plugins, installing themes, creating child themes, and configuring some options. Works a treat.


Pantheon (hosting) provides a lot of tools for managing Drupal sites like this. Now that they also support hosting Wordpress sites, they may offer similar tools. Might be worth a look: getpantheon.com


Have you tried WP Multi-Site?


I use wp-cli pretty much every day at work


wp cli is the bomb, try it, it rocks!


Very much this.


Note that a “regular PHP theme” could be built on the WP-API instead of the current horrible mess of functions and loops. I’m tempted to port a version of html5-blank-wordpress over to it…


It seems to just be more federal regulation from DC. I'm as much of a fan of a free and open internet as anyone, but why risk the FCC getting involved?

Also, can anyone honestly see rates being reduced because of this? Sure, all of us would love to see more competition (I actually only have one choice where I live, sadly), but the truth is, the companies that invest the capital to build the infrastructure deserve to reap the profits. I'm not quite sure what the solution would be to having more competition.

What worries me the most is the bureaucracy of it. Are we the people really getting a say? The FCC is made up of unelected officials (appointed by the Executive branch, Republican or Democrat) plastering on their views. Why not let our elected representatives take care of this? You may say that they would just block it, it would never move, etc. Perhaps it's not as much of an emergency as we think, then? I guess this is just the same old federalism versus statism argument. Good ol' American politics.


> why risk the FCC getting involved?

In national communications policy?

> Also, can anyone honestly see rates being reduced because of this?

Prices are pretty much an orthogonal issue to net neutrality. If you don't have a source/destination neutral network, it's possible you won't be able to buy the services such a network supports at any price.

> The FCC is made up of unelected officials

I think it's reasonably clear that doesn't mean they're unaccountable. Congress or the President can heavily influence policy if they screw it up.

But strangely, at the moment, they seem to be doing policy better than most elected officials. :/


"Last-mile" broadband need not be a 'national' issue except to the extent national politicians want to grandstand about it.

The options at every location are different, from city to city and even block to block. Some local broadband markets are competitive; others aren't. Creating options requires specific, locally adapted work: new wires, new antennas, new hardware. Three regulators signing new regulations into law adds no capacity, only new constraints on the people doing the real work.

One set of national service-shaping rules for all, because some localities have limited choices, is an overreach that doesn't match the problem.


I agree, it is national communications policy. But why not try getting a new statute passed that isn't based on telephones from the switchboard-operator era? It just seems to me that it's being pressed so hard because it's such a "fad" issue right now. Most people honestly don't have a clue what the FCC or "net neutrality" is. Sure, the FCC is accountable to the President (ultimately) and to congressional oversight committees, but commissioners don't have to worry about being thrown out of office for the decisions they make.


"Why not let our elected representatives take care of this?"

Uh, because they can't be trusted farther than they can be thrown?

Seriously, between the gerrymandering, the closed primaries, and the unlimited private election finance, they've done an astonishing amount to insulate themselves from the wrath of the voters they screw on behalf of super-rich special interests. And in cases where they do get their comeuppance, the revolving door means they can count on well-paid sinecures after leaving "public" service.

None of this suggests that leaving broad policy choices to the F.C.C. is optimal. Indeed, having a less-corrupt Congress that could be relied on to represent the people would be vastly preferable. But that's not what we've got. Indeed, under normal circumstances, the only time popular will is taken into consideration is when it happens to coincide with the wishes of the wealthiest.

(Depressing details on that phenomenon here: http://www.washingtonpost.com/blogs/monkey-cage/wp/2014/04/0...)

This development with the F.C.C. represents a remarkable and welcome exception to that norm. Not coincidentally, it's because the Internet represents a means for marshaling and focusing democratic will in a way that hasn't been as undermined as severely as the ballot box.


These circumstances have been happening since the creation of the United States. We act like things move far slower now than they ever have. The government was made to move slow on controversial issues.

Still, the closest thing we have to a democracy is not the FCC, it is our local/state/federal election process. Sure, there's money involved -- too much in fact. Push your representatives, they might listen if enough people let them know. There are turnovers in seats every two years on both sides of the aisle because of their bad decisions.


"These circumstances have been happening since the creation of the United States."

No dude. Just...no.

Gerrymandering has existed for ages, but only in recent years, with the advent of seriously high-powered data mining, has it had anything remotely close to the influence it now possesses. Citizens United (which did a major number on campaign finance) was decided in 2010. Key sections of the Voting Rights Act were overturned last June, less than a year ago. As far as powerfully damaging structural changes go, these are all very recent events. Your position is like saying "computers have always existed" while ignoring the differences between an abacus and a Xeon chip.

And saying "seats turn over in the House every year" is even more meaningless. Intelligent people look at the rate of turnover, which is at record lows and declining relentlessly, and not because people are satisfied. Indeed, approval ratings for Congress are setting record lows as well. The reason these trends don't correct each other is that Congress has, in recent years, secured an unprecedented level of detachment from the will of the public. This, in turn, has become a major factor in driving inequality to unprecedented levels.

On the off-chance that you're genuinely interested in the relations between regulatory capture, extreme concentrations of wealth, and the proliferation of rentier economies, I can strongly recommend "Why Nations Fail" by MIT's Daron Acemoglu. One of the essential points he makes is that inclusive economies (i.e., the good kind) can often give way to extractive economies (the bad kind) following periods of retrograde policy change not unlike the ones we're presently witnessing.

http://www.amazon.com/Why-Nations-Fail-Origins-Prosperity/dp...


In regard to "circumstances", I was specifically referring to the gridlock in Congress (since we're posting links, here's one: http://blog.oup.com/2013/10/federalist-papers-government-gri...), not gerrymandering, which is prevalent in both heavily GOP and heavily Dem states. I think gerrymandering is a little off-topic here, so I'll defer for now :-)

The truth is, I think we might agree more than you think. The problem with the last thirty or so years in politics is that politicians (and by extension, those they appoint) and special interest groups (corporations, labor unions, etc.) have together created what's often referred to as "crony capitalism." Do you really think the FCC and the current administration are doing this "for the people"? No, they're pandering to the tech bloc (Google, Facebook, eBay, etc.). I don't see that as being much different from pandering to the big ISPs.

Capitalism without a sense of morality will itself turn into an oligarchy, as we see now. Thus, people seek more government regulation, which then just breeds more interference in individual freedoms.


This comment reads as if it's straight out of 2009. Have you followed anything related to TWC/Comcast/Verizon over the last 5 years? We TRIED letting Congress handle this and it produced SOPA/PIPA. We TRIED letting the ISPs work on their own with little regulation, and we get 3 Mbps speeds in areas with no competition.


I code for the thrill and pleasure of the creativity involved. As an add-on, it provides for my family.


You could try CoffeeScript. It takes away a lot of what people don't like about JavaScript and replaces it with a decent syntax. I personally love well-written vanilla JavaScript, but CoffeeScript might give you a little better experience if you "hate" JS.

So, is it the DOM you don't like, or the actual JS syntax and structure? For me, once I overcame the weaknesses of the DOM API and learned to use it elegantly, I started to enjoy JS.


It's just JavaScript; it's so error-prone that it can take forever to write simple algorithms because of some undefined error.

Compare that to Go: even though you make errors, the compiler is so good at pointing them out. And you can get so much done.


I get that you like Go, but you can't run Go in a web browser. Ultimately, whatever you write will have to be evaluated in a web browser, which means it will be JavaScript at some point. Just like you've observed with CoffeeScript, writing in a different language is just a layer of abstraction on top of JavaScript.

Web apps have a back-end and a front-end. You can definitely write the back-end without touching JavaScript. You can even write it in Go. Square did a nice writeup comparing several:

http://corner.squareup.com/2014/05/evaluating-go-frameworks....

When it comes to browser side, you're stuck with JavaScript though. Trying to avoid it will only bring more pain and frustration than attacking the problem head-on.

I would point out that programming languages don't make errors, programmers do. I'm no fan of JavaScript. I too avoid it when I can. It's absolutely necessary if you want to write web apps though. As you become more familiar with JavaScript, your error rate will go down.

If JavaScript really puts you off, maybe consider going another direction, like mobile app development. Both iOS and Android applications are developed using compiled languages. You may find them more suitable to your programming style.


It is a poor workman who blames his tools.


Works both ways: a good workman chooses good tools.


After hearing about the idea of a phone from Amazon, I really thought they'd do more. I like buying things from Amazon (as many here probably do), and I thought the phone would be competitively priced. It wasn't, in my mind; and with only one major US carrier (AT&T) selling it, the customer base is even smaller. I love the ability of my current unlocked Nexus 4 to switch to another carrier or MVNO. Call me spoiled on the Nexus devices...

How would I be sold on the Fire Phone, Amazon? Allow the phone to be unlocked and lower off-contract price. Then, I'd consider it. I like the phone's features, but if the value isn't there, then I'll pass.


The lack of unlocking really surprised me. We bought one to test that some of our mobile software would run on it, so we got the no-contract version. Even if you pay full retail price, it's still locked to AT&T. That's just blatant foot-shooting: if someone's going to pay you full price, it shouldn't be locked.

(And yes we tried getting Amazon to give us the unlock code. They said their contract with AT&T didn't allow it.)


Sounds like AT&T was the only carrier willing to give them the time of day. I doubt Amazon really wanted to go single source.


It was refreshing to see something like this on HN. I'll have to look into it. Thanks!


That's pretty neat: I discuss an interesting subject with someone on Hacker News that directly affects my current industry, but I am a few levels down in the supply chain. I deal more with product distribution and logistics, while you deal with the actual manufacturing of the pallets they sit on. Thank you for your expertise and insight!


Believe me, the last thing I ever thought to see on HN was pallets! Glad to have the conversation!

