anifow's comments

This is an excellent step toward the usability revolution. It will be interesting to see how long it takes to become standard for high-school and undergraduate students, letting them get to real work faster and in a more interactive way.

More reactions on X: https://twitter.com/AnimaAnandkumar/status/17340800431967683...


This is not a revolutionary idea. Several professional programs (law, accounting, etc.) have certification exams that do just what you are describing. It seems to require a fairly healthy, non-profit professional body for this to work, and you can only ever assess a fairly slim chunk of the 4-year experience, so they end up pairing the exam with a credit-hour requirement as well.

I work in large-scale assessment, and can attest that we should hold a fairly skeptical/conservative position on the power of tests to get at the "real" skill level of prospective practitioners. All of the measures currently available are fairly weak predictors of outcomes.


As a Mac user, I have to agree whole-heartedly!

I was a pretty heavy OpenFL devotee for a couple of years. It's an amazing project, although PixiJS is a very competent competitor when it comes to building HTML5 games. Pixi has a much smaller baseline footprint (~300kb unminified compared to 1300kb unminified).

Lately we've been working with Pixi in a webpack / TypeScript workflow. It's starting to feel a lot like Haxe / OpenFL again. We are winning big in terms of flexibility and workflow support. We are using VS Code as the IDE, and it integrates very smoothly with an Angular-based asset and config management tool. The big thing we lost, however, is Haxe's ability to compile directly to C++. Even when we compiled to Adobe AIR, we got much better performance than something like Cordova or Ionic.


Also, Haxe is the only viable technology that enables high-quality canvas-based applications which mirror native applications (while also permitting compilation to those native applications).


I suppose you're not counting the likes of NW.js and Electron because the native version isn't really native - though from the user's perspective, it can feel as native as a Haxe application, assuming they don't look at the ridiculous distributable size and RAM usage of the NW version.

Even then, this still leaves C/emscripten which can also fulfill those requirements, and with similar output sizes (I've tested both and compared the sizes of the resulting web versions, seems that both have an overhead of 1-2 MB for a small graphical application). It depends on what you're planning to do, but one problem I found with Haxe is that it's lacking a good general-purpose UI library that also works well on the web target. Only lower-level graphics libraries.


Words like "debt" can be tricky to pin down, which is why we have an accounting profession whose job it is to establish a cohesive and consistent language system with specific meanings.

Kickstarter money represents both revenue and a liability for the company doing the project. There are many types of liabilities which are not exactly debt in the sense that we think about a bank loan (which is also a liability). The thing all liabilities have in common is that they represent an obligation of the company which could require it to spend money or other assets to fulfill. This does not mean they have to satisfy the liability; it just means there is a financial justification for any work they do relating to it.

I am sure the accounting profession is continually being tested with new business models and trying to capture the financial reality for people who are playing very different games in the world of capital.


This was open-sourced a while back. Looks like they hollowed out the repo though. Is that bad etiquette?

https://github.com/hackpad/hackpad


No, it wasn't; it was only announced, and it took until now for it to actually be released (see the issue in the repo you linked).


That repo never had code in it.


That's what it looks like


More appropriate to call it a questionnaire than an interview.


We have a perfect solution for the kind of publishing model Brett is referring to, one which operates through sharing and replication rather than central repositories, which are historically prone to being snuffed out. Just use BitTorrent.

BitTorrent was embraced for pirating music and movies, but it has major benefits if adopted as a back-end protocol for distributing content to mobile devices. Bandwidth isn't an issue as long as everything is done over WiFi while plugged in; then it's just like a laptop. You are obviously looking at poorer seed ratios than in the desktop-centric distribution we are used to, but it will be relatively cheaper for content distributors to support some of the seeding from their data centers than to absorb the entire responsibility.

Granted, this is far less efficient than central distribution from a data center, but you gain resilience, and since you aren't depending on it for navigation and basic UI, the user experience isn't dependent on real-time network transfers, which are still far from dependable. This model might not work for everyone, but I think it is the ideal model for sharing data sets, academic papers, DRM video and music (though I'd prefer non-DRM), public digital arts projects, and rich interactive news stories (with collections of video, models, etc.). The Web was great when bandwidth was expensive, but today it is cheap and unevenly distributed, and it is time for a new model.

Apps can serve as trackers which curate links (bonus points if Android / iOS bake the protocol into the OS, so your device can keep track of torrents without that information being locked into your app; centralizing those settings in the OS could also make sharing more efficient to manage). I also wonder if seed ratios could be shared between different distributors and bought and sold from data centers. It seems only fair. It does go against net neutrality, though, as I can imagine a situation where people are given less bandwidth because they don't have a seed ratio of 100 or something (i.e., 100 GB of upload for every 1 GB of download). Nonetheless, this may be a fairly healthy way of having people with better means subsidize content for those less fortunate. You could even run all monetization of this scheme by having the distribution networks download content to themselves, jacking up the achievable seed ratios.
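As a purely illustrative sketch of the OS-level accounting suggested above (all class and method names here are invented, not any real Android/iOS API), a device could keep one ledger of upload and download totals that any app or bandwidth policy can query:

```python
# Hypothetical sketch of OS-level transfer accounting for torrents.
# Names are invented for illustration only.

class TransferLedger:
    """Accumulates bytes uploaded and downloaded across all torrents on a device."""

    def __init__(self):
        self.uploaded = 0
        self.downloaded = 0

    def record(self, uploaded, downloaded):
        self.uploaded += uploaded
        self.downloaded += downloaded

    def seed_ratio(self):
        # A ratio of 100 means 100 GB uploaded per 1 GB downloaded,
        # matching the example in the comment above.
        if self.downloaded == 0:
            return float("inf")
        return self.uploaded / self.downloaded


ledger = TransferLedger()
ledger.record(uploaded=100 * 10**9, downloaded=1 * 10**9)
print(ledger.seed_ratio())  # → 100.0
```

Centralizing this in the OS is what would let a ratio follow the device rather than any single app.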

Someone else here mentioned the need for an app ecosystem that treats apps as adversarial. I think that is exactly what we have seen with Android and iOS. Want to use my camera? You need permission. Granted, there are some social-engineering issues with people not really considering the rights they give up when they tap the Install button, even when it is spelled out in simple language (I can say this because I know I don't care as much as I should, so it's not coming from a lack of technical knowledge). I think it's only a matter of time before Android becomes a serious competitor to Windows. iOS is trickier, as it would cannibalize the heavily built-up Mac ecosystem. Signals of things moving in that direction would be, for example, Adobe releasing a complete version of Photoshop on one of these mobile-first OSes, or Microsoft releasing their tools on Android (unless they keep those as a competitive advantage of the Windows OS, which is likely).

In either case, I think the death of the Web on mobile is near. The benefits include more safety, more functionality, and a collection of trusted brands (unfortunately, innovation WILL be hampered by this move: people will accept toy apps that don't take permissions, but otherwise newcomers will have a really hard time competing from the Web ghetto).


It seems the problem is that the author is trying to create a representation of knowledge, whereas research papers are more of a log of work done.

His approach may be a perfect fit for an experiment in meta-research. You could run periodic reviews of articles released in specific topical areas and create summaries of the findings. Any time those findings change, it's a git commit, and you can track this change over time. It's something like Wikipedia, but designed for research. I would not be surprised if this already exists.

I see a challenge in figuring out the document structure that would be most conducive to distributed version control (pull requests, etc.) and could provide some insights on a historical basis. For example, you could run these summaries retrospectively, say looking at DNA research in the 1960s, and do a separate commit for every key finding through the years. It seems to me that you would pretty much have to specify a specialized coding language for scientific knowledge for this structure to work.
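As a rough illustration of what such a structured record might look like (a hypothetical schema, not any existing standard), each finding could be a small machine-readable record stored in git, with each revision as a commit:

```python
# Hypothetical schema for a version-controlled "finding" record.
# Field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Finding:
    topic: str
    claim: str
    sources: list          # citations or DOIs backing the claim
    supersedes: str = ""   # id of an earlier finding this record revises

# A retrospective commit for 1960s DNA research might look like:
triplet_code = Finding(
    topic="molecular-biology/genetic-code",
    claim="The genetic code is read in non-overlapping triplets.",
    sources=["Crick et al. 1961"],
)
```

A later review would add a record whose `supersedes` field points back at the old one, so the file's git history reconstructs how the finding evolved.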


Exactly. The problem is that papers aren't worth what the rest of the world thinks they are. Nobody working on the cutting edge of a given field gives a damn about the papers published in the field's major journals. Papers are little more than permanent records of what people talked about at some conference several months before, and through other informal channels even earlier. By the time they're published, they're already old news. They may have some archival value, but that's about it.

Unfortunately, the rest of the world thinks of papers as the primary method by which scientists exchange ideas. This is a myth. Sure, there are some scientists (mostly in developing countries, or those coming from a different field) who rely on papers to figure out what their colleagues are up to, but if so, that's only a reason to improve real-time communication, not a reason to turn papers into real-time communication tools.


This and the previous comment are /exactly/ right, especially: "... that's only a reason to improve real-time communication, not a reason to turn papers into real-time communication tools".


This sounds like an update to the format of textbooks. They sort of work like this already: as new versions come out, they include the latest revisions in the field. On the other hand, they also serve their publishers by putting out new versions with little or no improvement to continue earning money.


I disagree.

An hourly wage is a very particular payment system which requires some kind of foreman to be sustainable. Otherwise, the labour force tends to drift towards inefficiencies that let workers claim money for less work. This is classic agency risk, where incentives are not aligned, and heading down this path often entails major bureaucracies to try to keep everyone honest (time clocks and punch cards, a "boss" looking over you, etc.).

Simply paying more within the existing structure makes perfect sense. It will actually have a disproportionate effect on recruitment once some of the most efficient drivers start bragging about making $160,000 a year driving trucks (this is one of the big drivers of immigration, by the way. Compared to poorer nations, the streets are paved with gold in the USA, and some lucky immigrants find shovels to dig it up. Unfortunately, most who come never get a shovel, so it remains but a dream...).

Paying more by the mile absorbs the costs of refuelling and traffic without giving any driver an incentive to waste time when, for example, refuelling the truck. So rational truckers will go to gas stations that are more efficient and get out of there as soon as they can.

Where the trucker has no agency, however, it could make sense to pay by the hour. For example, since loading is in the hands of another group as soon as the truck docks, and since it can vary quite a bit by warehouse, it is better that the driver be paid for time spent sitting around. You could argue that the driver's incentive is then to rush the loaders, but I doubt they have much leverage in that process. So better that the shipping company have an incentive to make loading as fast as possible, through influence over choice of warehouse, penalties to warehouses for slow loading, etc.
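A back-of-the-envelope sketch of that trade-off, with entirely invented numbers: under per-mile pay, unpaid dock time dilutes the driver's effective hourly earnings, while hourly pay is indifferent to it.

```python
# Per-mile pay vs. effective hourly wage, with made-up figures.

def per_mile_pay(miles, rate_per_mile):
    return miles * rate_per_mile

def effective_hourly(pay, driving_hours, dock_hours):
    # Dock time is unpaid under per-mile pay, but still consumes the day.
    return pay / (driving_hours + dock_hours)

pay = per_mile_pay(miles=500, rate_per_mile=0.50)                  # $250 for the run
fast_dock = effective_hourly(pay, driving_hours=9, dock_hours=1)   # $25.00/h
slow_dock = effective_hourly(pay, driving_hours=9, dock_hours=4)   # ~$19.23/h
```

The same run pays the same total either way; only the driver's time cost changes, which is exactly the risk per-mile pay shifts onto them.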

I've never really thought about this industry before, but thinking about it now makes it sound very interesting!


>This is classic agency risk, where the incentives are not aligned

Paying by the mile is even worse. How much incentive does the company have to make unloading and loading more efficient when they get it for free?


If drivers spend much time loading and unloading, and not getting paid, the amount paid per mile required to attract drivers will rise.


What seems more likely, actually, is that you will get Akerlof's classic lemon market - due to the information asymmetry of paying by the mile.

The employer knows how much value the driver's gonna contribute for the money they pay, but the potential drivers aren't at all sure how much income they'll get paid for their time/effort. They basically only have hearsay from other drivers/ex-drivers to go on. Furthermore, some are getting burned and leaving the industry (I know personally of one; and there's another in this very thread), and telling others about their experiences.

The number of confounding variables and risks (including but not limited to loading/unloading) is simply too high for them to be able to make an accurate judgment about their potential income.
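To make that concrete, here's a tiny simulation with invented numbers: identical weekly work under per-mile pay, with only the unpaid dock time varying, produces a wide spread of effective hourly wages, exactly the kind of variance a prospective driver can't assess up front.

```python
# Monte Carlo sketch of wage variance under per-mile pay (invented numbers).
import random

random.seed(0)

def weekly_effective_wage(rate_per_mile=0.50, miles=2500, driving_hours=45):
    dock_hours = random.uniform(2, 20)  # unpaid, varies by warehouse
    return (rate_per_mile * miles) / (driving_hours + dock_hours)

wages = [weekly_effective_wage() for _ in range(10_000)]
print(round(min(wages), 2), round(max(wages), 2))  # roughly $19-$27/h from dock time alone
```

The advertised rate is identical in every draw; the realized wage isn't, which is the information asymmetry in miniature.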

The natural effect of a lemon market is that the market dries up because the buyers/sellers simply stop transacting when the problems caused by the information asymmetry are too much.

That seems to be exactly the situation we're getting here. The wages don't actually seem that bad, but the risk is all piled onto the driver, so new recruits are very reluctant to enter the industry after hearing a few horror stories.

Lots of startups have gone down for similar reasons - pricing that is too complex makes potential customers go 'fuck it' and go with the devil they know.


The market for lemons doesn't really apply here, because a lemon market requires a supply that is heavily weighted toward worthless goods, dragging the average down to 0 as the "best" opportunities are withdrawn from the market.


Assume labor is the currency and the good is cash. If a majority of trucking employers give employees a mediocre or poor deal, then the supply of trucking jobs in the aggregate delivers poor value for the required labor input, in comparison to other lines of employment. Good firms either go out of business (due to being undercut) or retain their drivers, such that there are few openings at good firms.

In an ideal world good firms would increase their market share, but doing that requires satisfying trucking consumers and they want their goods shipped as cheaply and quickly as possible. So the interests of consumers and suppliers (of trucking services, ie drivers) are not very well aligned. I don't know what can be done about this; as a consumer I have absolutely no clue which hauling companies handle the goods I purchase, nor do I have any clue how much of a good's price consists of shipping costs. So I can't really vote with my dollars to let store owners know that I'm willing to pay a little more to support trucking companies that treat their employees well.


Nope. It just requires information asymmetry, which IS the case here. Trucking companies know more about what income their potential workers will make than the potential workers will themselves.

Experienced truckers will have less information asymmetry, but experienced truckers need to be replenished as they retire. Apparently that's not happening.


In the perfect world of the auto-correcting market.

In real life, in the case of coal miners and their "dead work" (necessary work that wasn't getting paid), it took bloody strikes and fights to win those raises; the amounts paid weren't automatically adjusted.

The "money required to attract X workers" only plays a role when those X workers are not destitute and starving to begin with. If they are, and the company can pay them less and still make a huge profit from their work, it'll do that.


In theory, yes; almost 100 years of history says this is not the case.


> If drivers spend much time loading and unloading, and not getting paid, the amount paid per mile required to attract drivers will rise.

Actually, logic would indicate that this has already happened. Every industry pays people the least amount at which it can reliably obtain labor and/or reliably profit from their employment. Trucking is no different.


Not every industry tries to create such an opaque payment structure.

Industries that do that sort of thing tend to suffer in aggregate because you get a 'bad money drives out the good' effect, similar to used cars.

The same thing happened in finance (nobody touches the CDO market now - it would have disappeared entirely were it not for government life support).


That has not been the case, historically. Time spent "on duty, not driving" has been a major complaint for decades and is one of the reasons duty schedules are ridiculous.


There are a lot of factors here:

1. Most companies use what are called "book miles" for payment, and I believe these books were published in the 1940s or something, because most companies' book miles are vastly different from the actual miles driven.

2. The "foreman" problem can be solved with technology. The DOT is planning to require all-electronic logs in the near future (2015, I believe), and most trucking outfits already have real-time GPS monitoring of their trucks; you can monitor these employees far more accurately than one can monitor factory workers.

3. Most company truckers do not have a choice of where they fuel; this is planned as part of the trip, and they are provided a fleet card that is only accepted at a chain the company has a contract with.
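The "book miles" gap in point 1 is easy to quantify. A sketch with invented figures: when pay is based on book miles that undercount the route actually driven, the effective per-mile rate falls below the advertised one.

```python
# Effective pay rate when "book miles" undercount actual miles (invented figures).

def effective_rate(advertised_rate, book_miles, actual_miles):
    return advertised_rate * book_miles / actual_miles

# Advertised $0.50/mi, but the book route is 450 mi for a 500 mi drive:
print(effective_rate(0.50, book_miles=450, actual_miles=500))  # → 0.45
```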

This is just a start, but I think you believe truckers have more freedom than they do in reality. The technology and logistics that go into trucking do not allow for much independent thought: they are given a route with preplanned fuel and rest stops, and if they deviate from this route they must explain why (e.g., mechanical failure, stopped by police).

While they may not have a human foreman monitoring their time clock, they have electronic foremen, and government regulators acting as foremen.


"Paying more by the mile absorbs the costs of refuelling and traffic, without giving any driver an incentive to waste time..."

It also gives the driver incentives to drive too long, too fast, to skimp on maintenance, and to take other shortcuts that cause longer-term problems and safety risks. (Fortunately, the industry has done pretty well at pushing liability onto the drivers.)

