
You should make an app for that (seriously)!


I can't believe he posted it on GitHub before doing this - there's so much potential for it to go viral once it's packaged with a doodling app.

EDIT: Actually, reading more closely, I guess 10 minutes on a machine with a decent GPU is a lot of server load :|


The research is based on work I did writing and improving @DeepForger (http://twitter.com/deepforger), an online service for "basic" style transfer. The GitHub repository is a standalone version for learning and education; it doesn't do HD rendering as well yet and uses a bit more memory. The positive side, however, is that opening up the source code makes these ideas progress faster!

We'll try to integrate the idea of semantic style transfer into @DeepForger in the future, but this will require quite a bit of work to get it to reliably understand portraits or landscapes without anyone's intervention. The fact that it requires semantic maps for all images makes it less straightforward to release as a service.


One question: is the semantic map created on the fly as the final image is composed, or are the maps pre-computed?


The semantic map remains static during the optimization, so it can be provided as a pre-computation (e.g. pixel labeling or semantic segmentation) or done by hand. The ones in the repository are done manually, but I'm now experimenting with other algorithms. Anything that returns a bitfield or masks can be used!
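
For illustration, a minimal sketch of what that can look like: combining a set of binary masks (from hand annotation or any segmentation algorithm) into a flat-colour semantic map image. The palette, helper name, and output filename here are assumptions for the example, not anything from the repository.

    # Sketch: turn boolean masks into a flat-colour semantic map image.
    # Palette and function name are illustrative, not part of the project.
    import numpy as np
    from PIL import Image

    # One distinct flat colour per semantic class (e.g. sky, trees, water, ...).
    PALETTE = [(0, 0, 255), (0, 255, 0), (255, 0, 0), (255, 255, 0)]

    def masks_to_semantic_map(masks):
        """Combine boolean masks of shape (H, W) into an RGB semantic map.

        Later masks overwrite earlier ones where they overlap, so order them
        from background to foreground.
        """
        h, w = masks[0].shape
        out = np.zeros((h, w, 3), dtype=np.uint8)
        for mask, colour in zip(masks, PALETTE):
            out[mask] = colour  # paint every pixel of this class one flat colour
        return Image.fromarray(out)

    # Example: two dummy masks splitting a 256x256 image into top and bottom.
    h, w = 256, 256
    top = np.zeros((h, w), dtype=bool)
    top[: h // 2] = True
    masks_to_semantic_map([top, ~top]).save("sample_sem.png")

The same idea works whether the masks come from manual doodling or from an automatic pixel-labeling model; the optimization only ever sees the resulting static map.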


Or an online generator!



