I have this little bookmarklet in my bookmarks bar that I use constantly. It removes all fixed or sticky elements on the page and re-enables y-overflow if it was disabled:
A simple salary + percent commission is a great model.
That said, this calculator was built to model/simulate the things that are super common in enterprise SaaS:
1. It takes sellers time to ramp up. Experienced sellers might be willing to jump to your company, but not if they are guaranteed to only get their (relatively) low base salary for 1-2 quarters.
2. If you decide to do a ramp, you have to make a choice about the OTE (rough sketch of the math below).
If you can avoid doing these things, that's great. Though whether that will fly largely depends on whether your sales cycle and target talent market supports it!
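To make the ramp/OTE interplay concrete, here's a rough sketch of the arithmetic (all numbers and names are made up; this isn't the calculator's actual model):

    // Hypothetical ramp/OTE math: during ramp, quota is scaled down so a new
    // rep isn't living on base alone while their pipeline builds.
    function quarterlyPay({ baseSalary, variableAtTarget, attainment, rampFactor }) {
      const quarterlyBase = baseSalary / 4;
      const quarterlyVariable = variableAtTarget / 4;
      // e.g. rampFactor 0.5 in Q1, 0.75 in Q2, 1.0 once fully ramped
      const effectiveAttainment = attainment / rampFactor;
      return quarterlyBase + quarterlyVariable * effectiveAttainment;
    }

    // $120k base + $120k variable ($240k OTE): 40% of a full quota in a
    // half-ramped first quarter pays like 80% attainment -> $30k + $24k.
    console.log(quarterlyPay({ baseSalary: 120000, variableAtTarget: 120000, attainment: 0.4, rampFactor: 0.5 }));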
I wonder if it would be reasonable to offer sellers a sliding scale to trade off their salary and commission rate over, say, ten gradations. Then let them choose whatever point on the scale they want, maybe with some rules about how often they can change it to prevent high-frequency min-maxing.
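Something like this (numbers are arbitrary, purely to illustrate the gradation idea):

    // Ten gradations trading base salary for commission, holding OTE constant.
    function buildScale({ ote = 240000, steps = 10, minBaseShare = 0.3, maxBaseShare = 0.7 } = {}) {
      return Array.from({ length: steps }, (_, i) => {
        const baseShare = minBaseShare + (i * (maxBaseShare - minBaseShare)) / (steps - 1);
        return {
          step: i + 1,
          base: Math.round(ote * baseShare),
          variableAtTarget: Math.round(ote * (1 - baseShare)),
        };
      });
    }

    console.table(buildScale());

In practice you'd probably want the commission-heavy end of the scale to carry a somewhat higher OTE to compensate for the extra risk.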
Some companies do things like this, but I'd be cautious about it.
There's a few things I'd consider:
* If you have a bunch of reps, doing the periodic accounting to cut the right checks becomes more of a pain (though it's a pain anyways)
* When you give a choice, the employee might make what, in retrospect, winds up being the wrong choice. This can lead to pissed off sales people and regrettable churn.
OTOH, sales comp plans change every year anyways, so it could just be renegotiated.
Oh yeah, I definitely didn't take it seriously. worldgov.org?
The title "Crypto is Inevitable" was not supported in the article.
I do not believe the statement that "crypto is quietly rebuilding the plumbing of finance itself", and regardless of whether that's true, I don't see any reason why it would pick up any adoption beyond the speculators.
My feedback for you is that this community expects a disclaimer when you're promoting your own product, and that the product is too tangential for this plug not to feel spammy.
I was an early eng and first VP of Product at Flexport. Global logistics is inherently complicated and involves coordinating many disparate parties. To complete any step in the workflow, you're generally taking in input data from a bunch of different companies, each of which has varying formats and quality of data. A very challenging context if your goal is process automation.
The only way to make progress was exactly the way you described. At each step of the workflow, you need to design at least 2 potential resolution pathways:
1. Automated
2. Manual
For the manual case, you have to actually build the interfaces for an operator to do the manual work and encode the results of their work as either:
1. Input into the automated step
2. Or, in the same format as the output of the automated case
In either case, this is precisely aligned with your "reunifying divergent paths" framing.
In the automated case, you may actually wind up with N different automation pathways for each workflow step. For example, at Flexport, if we needed to ingest some information from an ocean carrier, we often had to build custom processors for each of the big carriers. And if the volume with a given trading partner didn't justify that investment, then it went to the manual case.
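A hypothetical sketch of that shape (the field names, carrier feeds, and queue API are invented for illustration, not Flexport's actual code): every path, automated or manual, has to land on the same normalized output.

    // One custom processor per big carrier feed, keyed by carrier code.
    const carrierProcessors = {
      MAEU: (raw) => ({ containerId: raw.cntr_no, eta: raw.est_arrival }),
      CMDU: (raw) => ({ containerId: raw.containerNumber, eta: raw.eta_utc }),
    };

    async function resolveArrival(carrierCode, rawMessage, manualQueue) {
      const processor = carrierProcessors[carrierCode];
      if (processor) {
        return processor(rawMessage); // automated path
      }
      // Low-volume partner: hand it to an operator, whose work is captured in
      // the exact same output shape the automated path produces.
      return manualQueue.enqueue({
        carrierCode,
        attachment: rawMessage,
        outputSchema: { containerId: 'string', eta: 'ISO-8601 datetime' },
      });
    }

Downstream steps never need to know which path produced the record, which is what keeps the divergent paths reunified.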
From the software engineering framing, it's not that different from building a micro-services architecture. You encapsulate complexity and expose standard inputs and outputs. This avoids creating an incomprehensible mess and also allows the work to be subdivided for individual teams to solve.
All that said – doing this in practice at a scaling organization is tough. The micro-services framing is hard to explain to people who haven't internalized the message.
But yeah, 100% automation is a wild-goose chase. Maybe you eventually get there, maybe not. But you have to start with the assumption that you won't, or you never will.
Sounds like a really interesting problem space. I'm curious if you have any comments about how you approached dealing with inconsistencies between information sources? System A says X, system B says Y. I suppose the best approach is again just to bail out to manual resolution?
In the early days, we bailed out to manual resolution. In the later days, we had enough disparate data sources that we built oracles to choose which of the conflicting data was most likely to be correct.
For example, we integrated with a data source that used OCR to scan container numbers as they passed through various waypoints while they were on trains. The tech wasn't perfect. We frequently got reports from the rail data source that a container was, for example, passing through the middle of the country when we knew with 100% certainty that it was currently in the middle of the Pacific Ocean on a boat. That spurious data could be safely thrown out on logical grounds. Other cases were not as straightforward!
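As a toy illustration of that kind of logical check (hypothetical fields, not our actual system):

    // Discard rail OCR sightings that conflict with the authoritative ocean leg.
    function isPlausibleRailSighting(sighting, oceanLeg) {
      const t = new Date(sighting.scannedAt).getTime();
      const onWater =
        t >= new Date(oceanLeg.departedPortAt).getTime() &&
        t <= new Date(oceanLeg.arrivedPortAt).getTime();
      // A rail scan timestamped while the container is known to be mid-ocean
      // can be thrown out outright; anything else needs a smarter oracle.
      return !onWater;
    }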
1) it doesn't appear search links are shareable or have the query terms in them
2) are you embedding the search phrases word by word? And are you using the same model that was used for the documents? Because I searched for "lead generation", which any decent non-unigram embedding should understand, but I got results for lead poisoning.
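I don't know how the site is built, but what I'd expect is something like embedding the whole query phrase with the same model used for the documents, rather than per word (embed() here is a stand-in for whatever model is in use):

    // Cosine similarity between two vectors.
    function cosine(a, b) {
      let dot = 0, na = 0, nb = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
      }
      return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    async function search(query, docs, embed) {
      const q = await embed(query); // embed the whole phrase, not each word averaged
      return docs
        .map((d) => ({ ...d, score: cosine(q, d.embedding) }))
        .sort((a, b) => b.score - a.score)
        .slice(0, 10);
    }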
To be honest, I started this project to prove to myself that I could build something, ship it, and market it (bring traffic to it). This idea crossed my mind, but I didn't pursue it because I didn't expect I'd have enough traffic to make it worth it!
AFAIK, Amazon will ban you because you’re not adding original content. Look at their TOS before bothering. Most likely you will waste your time. (An acquaintance of mine had a similar site.)
I ran an ISA company for a while and can say with pretty high confidence that there are many reasons why fee-for-success is not an effective way to monetize this type of content (which I've written about before [0]).
I agree that a business, like YC, that has been around for a while, monetizes directly with a fee-for-success model, and has good reviews is very likely to be effective. But the inverse does not hold: a program using fee-for-service is not particularly good evidence that the program is ineffective.
You make good points, but I think we agree on the core.
There are challenges with a pay for performance model, but a business that successfully operates on one is sending very strong evidence that they are actually effective.
A fee-for-service program may or may not be good, but the burden of proof is on the company to show that they're worth what you'll spend on them.
    javascript: (function () {
      /* Remove every fixed or sticky element (overlays, cookie bars, sticky headers). */
      document.querySelectorAll("body *").forEach(function (node) {
        if (["fixed", "sticky"].includes(getComputedStyle(node).position)) {
          node.parentNode.removeChild(node);
        }
      });
      /* Re-enable scrolling in case the page disabled it. */
      [document.documentElement, document.body].forEach(function (node) {
        node.style.overflow = "visible";
        node.style.overflowX = "visible";
        node.style.overflowY = "visible";
      });
      /* Clear the class some paywall modals use to lock the page. */
      document.querySelectorAll(".tp-modal-open").forEach(function (node) {
        node.classList.remove("tp-modal-open");
      });
    })();