I work for an actual tech company, which actually told me to look into a couple of ML projects to improve our operating efficiency, which actually produced good results and had an obvious and short path to being put into production... and which were promptly shelved.
Can you go into why they were shelved? Were they normal technology and business reasons - or related to ML directly? I ask so that potentially others can learn to either navigate around those in the future.
I wish I could. I have very little insight. I presume it's some sort of "normal business reasons", but from my point of view the decision process went something like this:
Me: "It's finished, here are the areas of strength and weakness, and here's where we can deploy the system for maximum effectiveness."
Business: "We're thinking about the best way to deploy this."
Me: "This is how you deploy it."
Business: "We'll think about it and get back to you. Don't do anything until we tell you."
That's the "weaknesses" part. And I did in fact spend a good bit of time with my manager going over exactly what the model does not mean, and what you should not conclude from it or use it for.
I would also be interested in knowing which projects / applications of ML seemed like easy wins - I generally get stumped past "use CV and facial recognition" for a business that has no need of facial recognition.
1. Market prediction -- given basic demographic data, and publicly-available or easily acquirable information like when the person's house was purchased, what their credit score is, etc., how likely is this person to want / need your product, and is it worth spending a salesman's time on them?
2. Data entry. We took a picture of this customer's utility bill / bank statement / receipt / whatever. Now do we give it to a human to identify relevant fields and manually type them into a spreadsheet, or do we have a computer automatically extract the business-relevant data? Or, heck, maybe that's too complex, but can we at least have a computer help--automatically filter out bad images, do perspective correction, highlight areas of interest, etc.? (This sort of thing is actually used in, e.g., digitizing census records; we don't trust handwriting OCR to be good enough on its own, but we trust it to automatically highlight relevant fields, in order, and provide a first-draft guess at the transcription to assist the human transcribers.)
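To make item 1 concrete: lead scoring like this is just binary classification over whatever demographic / public features you can gather. Here's a minimal sketch using scikit-learn; the features, labels, and data are entirely synthetic stand-ins for illustration, not anything from the project described above.

```python
# Sketch of the lead-scoring idea in item 1: given basic demographic
# and publicly-available data, predict whether a prospect is worth a
# salesperson's time. All features and data below are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: age, credit score, years since house purchase.
X = np.column_stack([
    rng.integers(25, 75, n),        # age
    rng.integers(500, 850, n),      # credit score
    rng.integers(0, 30, n),         # years since house purchase
])
# Synthetic label for the demo: higher credit score -> more likely to buy.
y = (X[:, 1] + rng.normal(0, 50, n) > 700).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Rank prospects by predicted purchase probability; the sales team
# only spends time on the top of the list.
scores = model.predict_proba(X_test)[:, 1]
ranked = np.argsort(scores)[::-1]
```

The useful output isn't the yes/no prediction but the ranking: you call prospects in descending score order until the marginal expected sale stops covering the salesperson's time.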
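The "filter out bad images" step in item 2 doesn't even need ML. A common classical heuristic is the variance of the image's Laplacian: sharp photos have strong edges, so the Laplacian response varies a lot, while blurry photos give a nearly flat response. A pure-NumPy sketch, where the threshold is an illustrative guess rather than a tuned value:

```python
# Blur detection for pre-filtering document photos before a human
# (or OCR) looks at them. Heuristic: variance of the 3x3 Laplacian.
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Focus measure: variance of the 3x3 Laplacian response."""
    g = gray.astype(float)
    lap = (-4 * g
           + np.roll(g, 1, axis=0) + np.roll(g, -1, axis=0)
           + np.roll(g, 1, axis=1) + np.roll(g, -1, axis=1))
    return float(lap[1:-1, 1:-1].var())  # drop the wrap-around border

def looks_blurry(gray: np.ndarray, threshold: float = 100.0) -> bool:
    # Threshold is a made-up default; tune it on real rejected images.
    return laplacian_variance(gray) < threshold

# Synthetic demo: a sharp checkerboard vs. a featureless grey image.
sharp = (np.indices((64, 64)).sum(axis=0) % 2) * 255
blurry = np.full((64, 64), 128)
```

In a real intake pipeline you'd run this on the uploaded photo and immediately ask the user to retake it, instead of discovering the problem hours later at transcription time.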
I could probably come up with a few more if I thought about it for a while, but those are the areas I've actually worked on recently.
Business people make no sense.