This is the magic pattern that means a startup is probably going to make it.
Incidentally, while 10k uniques a day is not a huge number for the average site, it is for this site. These visitors are per capita about as valuable as you can get. They're not just checking out the latest pictures on their friend's profile; they're all looking for electronic parts.
We did some sourcing of hardware parts in a previous startup, so I know the field a little. I think octopart will survive simply because they solve a problem that hasn't been solved well before. Sourcing is a pain...
I'm glad to see these stats, and I think they will definitely make it.
Alexa is nuts. It's only useful as a comparison tool for sites which are really popular. Otherwise, their extrapolations just don't make sense statistically.
Though this probably sounds pedantic, it's not intended to be. How would one define an error bar for alexa's report on octopart?
Isn't the whole problem with alexa, in this case, that the cross-section of alexa trackees is not necessarily the cross-section of people who would be interested in octopart?
Yes but surely alexa knows how many alexa trackees it has. It knows how much it's scaling that up to make an estimate of the real population.
For example, with a very big site you have enough trackees to form a reasonably valid statistical sample. But with smaller sites you don't see as many trackees, so you set the error bars higher.
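The argument above can be made concrete with a toy sampling model. This is only a sketch: Alexa's actual panel size, population figure, and methodology are not public, so `panel_size` and `population` below are made-up numbers, and the binomial error model is an assumption, not Alexa's method.

```python
import math

# Hypothetical numbers, purely for illustration.
panel_size = 1_000_000        # assumed number of toolbar users ("trackees")
population = 200_000_000      # assumed total internet population

def estimate_with_error(panel_visitors):
    """Scale a panel count up to a population estimate, with a ~95%
    error bar from a simple binomial sampling model (an assumption)."""
    p = panel_visitors / panel_size
    se = math.sqrt(p * (1 - p) / panel_size)   # standard error of the proportion
    estimate = p * population
    margin = 1.96 * se * population            # 95% confidence half-width
    return estimate, margin

# A big site: plenty of trackees, so the relative error is tiny.
big, big_err = estimate_with_error(50_000)
# A small site: a handful of trackees, so the error bar dwarfs the estimate.
small, small_err = estimate_with_error(5)

print(f"big site:   {big:,.0f} ± {big_err:,.0f}  ({big_err / big:.1%} relative)")
print(f"small site: {small:,.0f} ± {small_err:,.0f}  ({small_err / small:.1%} relative)")
```

Under these made-up numbers the big site's estimate comes out with a relative error under 1%, while the small site's error bar is comparable to the estimate itself — which is exactly why extrapolations for small sites "just don't make sense statistically."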
Sure, I agree that the system is flawed anyway, seeing as I don't know a single person who installs the alexa toolbar, and amongst techie sites its usage is probably 0.
A more useful depiction of the error would be to somehow calculate the bias in the Alexa population (my guess, tech investors are the only segment of the population using the Alexa toolbar, which is incredibly ironic).
Huge error bars on a near zero estimate don't mean a whole lot except that the experiment producing the data is flawed.
Also incredibly ironic. These people are just seeing what they want to see and creating buzz within their own community rather than measuring something useful.
Assuming "calculating the bias" is a well posed question (and I'm not convinced it is) then measuring that bias is a noble goal for alexa engineers. Surely this is a multi-phd-thesis kind of problem though :)
I don't think alexa "scales up." If they do, then, you're right, there is uncertainty. If they don't, I don't think there's a concept of an error bar here. They track how many people go to what sites. Aside from a few dropped packets here or there, there is no uncertainty in that measurement. Therefore, there is no way to measure that sort of error bar. (Right?)
This may be somewhat unrelated... but www.mcmaster.com is the best website I've ever seen or used for sourcing and buying mechanical or industrial products. Maybe it's worth considering that kind of model, i.e. more like craigslist, for octopart's homepage vs the google style.
we really like mcmaster-carr's website as well. eventually octopart's website will be closer to that. first we have to aggregate a lot of technical data though. technical data is hard to come by because it is held tightly by a few companies who license it for $100K's. this is one of the problems we are trying to solve.
Congrats fellas! I'm curious: can you say who your top referring sites are? Does Google dominate 80% of it? Do you spend lots of cycles on SEO? I understand if you aren't comfortable answering these questions in a public forum.