It easily qualifies as an exploit, given that Apple's App Store model is based on the premise that each app is reviewed beforehand to ensure various properties, including that the app does not contain spyware. If Apple approved a harmless app, and that app later downloaded code that snooped on the user's calls or asked for their credit card number, that's an exploit.
First - I think both general manners and established protocol would have the security researcher let Apple know ahead of time what he would be doing. A simple email sent prior to uploading the code would have been sufficient to cover his bases - I'm surprised he didn't do that.
Second - Unless I'm mistaken, his proof of concept was more a violation of Apple's TOU; it didn't actually attempt to copy credit card numbers or snoop on users' calls - so, in that sense, it wasn't an exploit.
Net-Net - nobody comes out of this looking good, but Apple makes it clear that they are prepared to back up the language of their Developer TOU with actions.
Part of the security of the app store is the review process. "It's possible to download and execute code" is neat, "it's possible to download and execute code and the app store reviewers don't catch that" is much more impressive.
Nothing in the App Store review process will allow them to catch a zero-day exploit. Coming up with a zero-day exploit in iOS is very impressive - but, by definition, once you have it, the App Store review process isn't going to catch it.
Yep. There's no deep check of what your code contains, only a fairly superficial check of what it actually does. You can include nearly anything in your app (perhaps lightly obfuscated) as long as it doesn't show its face during the review.
Depends on your level of paranoia and willingness to rely on the network. The server has the advantage of letting you turn it on and off at will, but a timer will work even if the user has no internet connection or your server gets confiscated by the FBI.
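The trade-off above can be sketched in a few lines. This is a minimal, hypothetical illustration of the two gating strategies (all names are made up, and Python is used just for clarity): a remote flag that fails closed when the server is unreachable, versus a local date check that needs no network at all.

```python
from datetime import date

def remote_flag_enabled(fetch_flag) -> bool:
    """Server-side switch: can be flipped on and off at will,
    but fails closed if the network or server is gone."""
    try:
        return fetch_flag()  # e.g. an HTTPS request to your own server
    except Exception:
        return False  # no connection / server confiscated -> stays off

def timer_enabled(activation: date, today: date) -> bool:
    """Local timer: works offline, but once the date passes
    there is no remote way to turn the behavior off again."""
    return today >= activation
```

The timer keeps working with no internet connection, which is exactly why it survives a seized server; the remote flag's appeal is that `fetch_flag` can be changed server-side after the app ships.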