From that perspective, this research is two steps beyond Neural Style; I wrote about it yesterday here: http://nucl.ai/blog/neural-doodles/
First, the paper I call "Neural Patches" (Li, January 2016) makes it possible to apply context-sensitive style, so you have more control over how things map from one image to another. Second, we added extra annotations (which you can specify by hand or get from a segmentation algorithm) that help you control exactly how you want the styles to map. We call that "semantic style transfer" (Champandard, March 2016).
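Very roughly, the semantic-matching idea boils down to appending the annotation maps as extra feature channels before doing nearest-neighbour patch matching, so patches only match when their labels agree. Here's a minimal NumPy sketch under my own naming, not the actual doodle code; the 3x3 patches, `sem_weight`, and the brute-force search are all illustrative simplifications:

```python
# Rough sketch of semantic-aware patch matching, not the actual
# neural-doodle implementation: all names and sizes are illustrative.
import numpy as np

def best_style_patches(content_feat, style_feat,
                       content_sem, style_sem, sem_weight=10.0):
    """For each 3x3 patch in the content features, return the index of
    the closest 3x3 style patch, where "closest" also rewards agreement
    of the semantic annotation channels (sky vs. grass, etc.)."""
    # Append the (weighted) semantic maps as extra feature channels,
    # so mismatched annotations make a patch pair look far apart.
    c = np.concatenate([content_feat, sem_weight * content_sem], axis=-1)
    s = np.concatenate([style_feat, sem_weight * style_sem], axis=-1)

    def patches(x):  # all 3x3 patches of an (H, W, D) array, flattened
        h, w, _ = x.shape
        return np.stack([x[i:i+3, j:j+3].ravel()
                         for i in range(h - 2) for j in range(w - 2)])

    cp, sp = patches(c), patches(s)
    # Squared-L2 nearest neighbour over all patch-vector pairs.
    d2 = (cp**2).sum(1)[:, None] - 2 * cp @ sp.T + (sp**2).sum(1)[None, :]
    return d2.argmin(axis=1)
```

With a large `sem_weight`, the annotation channels dominate, which is how the hand-drawn doodle regions steer which part of the style image each content patch copies from.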
You're right that it's hard otherwise; it was for many months, and that's what pushed this particular line of research! Try it and see ;-)
This reminds me of "If Edison hadn't invented the light bulb, someone else would have: there were thousands of other engineers experimenting with the exact same thing, a natural next step after electricity came about" (-- paraphrased from Kevin Kelly)
One was called Swan; Edison tried to sue him for patent infringement, but Edison's lawyers warned him about prior art, so instead he negotiated a joint venture.