
> Does anyone know how well these really quantized image models perform?

IME it's better if they're quantized during training/fine-tuning (quantization-aware training), since quantizing only after the fact can definitely hurt model performance.
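To make the post-training case concrete, here's a toy sketch of symmetric linear quantization of float32 weights to int8. The weights are random stand-ins, not from any real model; real deployments usually quantize per-channel with calibration data.

```python
import numpy as np

# Toy post-training quantization: round float32 weights to int8.
rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=1000).astype(np.float32)  # fake weights

scale = np.abs(w).max() / 127.0                 # symmetric linear quantization
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_dq = w_q.astype(np.float32) * scale           # dequantized weights

err = float(np.abs(w - w_dq).max())
print(err <= scale / 2 + 1e-6)                  # rounding error <= half a step
```

Every weight picks up an error of up to half a quantization step; quantization-aware training lets the network adapt to that error instead of eating it at inference time.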

> I have never tried or needed to reduce a model to <8.4 mb.

Same. The limit has been inference speed rather than model size. Because of this, it's quite unusual to run video inference at full resolution (I've spent too much of my life up-scaling segmentation masks), but maybe their accelerator is actually capable of that speed? They don't seem to say what input resolution their model is optimized for.

EDIT: The Sony specs say input tensor size is 640x480.
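For anyone unfamiliar with the mask up-scaling mentioned above: you run inference at low resolution and nearest-neighbour upscale the class-id mask back to frame size. A minimal sketch (the mask values and scale factor here are made up):

```python
import numpy as np

# Nearest-neighbour upscale of a low-res segmentation mask.
mask = np.array([[0, 1],
                 [2, 0]], dtype=np.uint8)  # 2x2 mask of class ids
factor = 4                                 # hypothetical scale factor

# Repeat each pixel 'factor' times along both axes.
up = np.repeat(np.repeat(mask, factor, axis=0), factor, axis=1)
print(up.shape)  # (8, 8)
```

Nearest-neighbour is the usual choice because interpolating class ids would invent nonexistent classes at boundaries.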



Really terribly. To put it in context: from memory, YOLOv3-tiny was around 6 million parameters. How many parameters can this model be using? Indeed, nobody says what it's quantized to, but you can be pretty sure it's 8 bits.
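The back-of-the-envelope arithmetic behind the 8-bit guess, using the ~6M parameter figure from memory and the <8.4 MB budget quoted upthread:

```python
# Model size = parameter count x bytes per weight (ignoring activations etc.).
params = 6_000_000   # approx. YOLOv3-tiny parameter count (from memory)
mb = 1_000_000       # decimal megabytes, as spec sheets usually use

fp32_mb = params * 4 / mb   # 32-bit floats: 4 bytes per weight
int8_mb = params * 1 / mb   # 8-bit weights: 1 byte per weight
print(fp32_mb)  # 24.0 -- way over an 8.4 MB budget
print(int8_mb)  # 6.0  -- fits, hence the 8-bit guess
```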

I deployed a YOLOv3-tiny model back in 2019 with 32-bit weights. It thought the drain pipe in the driveway was a person.

Whenever people hype these up, they show pre-selected images with no candidates for false positives. In reality, you would not want to be woken in the middle of the night by the output of these things.



