Servo is trying to answer ambitious questions like "can layout be done as a sequence of top-down and bottom-up parallel tree traversals", "can we GC Rust DOM objects safely", and "how much of a browser engine has to be infected with support for non-UTF-8 strings".
Gecko, on the other hand, will gain Rust in modules that can be changed without replacing the entire browser. I'm happy that one of the first will be the code that supports the URLUtils DOM API. When the C++ code was changed from being just "URL parsing" to supporting segment changes, it came to have way more than its fair share of memory safety bugs. It needs a rewrite and it might as well be in Rust.
Servo is already hooked up to a (very old) version of SpiderMonkey, but it's missing enough DOM features that most pages hit errors. This Steam page hits "ele.canPlayType is not a function", "document.write is not a function", and "link.href is undefined".
Last I heard, Servo does not plan to support plugins such as Flash. I'm not sure if the existence of Shumway changes this.
Fun times ahead if document.write isn't already supported. When I rewrote Gecko's HTML parsing, accommodating document.write was a (or maybe the) dominant design issue.
It's quite different from innerHTML, since document.write inserts source characters into the character stream going into the parser, and there's no guarantee that all elements that get opened get closed. There's not even a guarantee that the inserted characters constitute a complete tag. So document.write potentially affects the parsing of everything that comes after it.
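To make the partial-tag hazard concrete, here's a sketch in plain JavaScript. The `write` stub is my own stand-in for `document.write` (the real thing needs a browser); it just models the parser's input stream as a string that characters get appended to:

```javascript
// Stub standing in for document.write: each call appends raw
// characters to the parser's input stream. No call has to contain
// a complete tag, or close what it opens.
let stream = "";
const write = (chars) => { stream += chars; };

write("<tab");           // a partial tag: not even the tag name is complete
write("le><tr><td>hi");  // the next call completes it and opens more elements
write("</td></tr>");     // note that <table> is never closed

console.log(stream);     // "<table><tr><td>hi</td></tr>"
```

The parser only sees the concatenated stream, which is why it can't treat each call as a self-contained fragment the way innerHTML can.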
For this to work, scripts have to appear to block the parser. However, it's desirable to start fetching external resources (images, scripts, etc.) that occur after the script that's blocking the parser. In Firefox, scripts see the state of the world as if the parser were blocked, but in reality the parser continues in the background, keeps starting fetches for the external resources it finds, and keeps building a queue of operations that need to be performed to build the DOM according to what was parsed. If the script doesn't call document.write, or calls it in a way that closes all the elements it opens, the operation queue that was built in the background is used. If the document.write is of the bad kind, the work that was done in the background is thrown away and the input stream is rewound. See https://developer.mozilla.org/en-US/docs/Mozilla/Gecko/HTML_... for the details.
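The commit-or-discard scheme can be sketched roughly like this. All the names and the structure here are my own toy model, not Gecko's actual code:

```javascript
// Toy model of speculative parsing: while a script "blocks" the
// parser, the tokens after it are parsed into an op queue (a real
// browser would also start fetches for resources they reference).
// If the script's document.write leaves the input stream consistent,
// the queue is committed; otherwise the speculation is discarded and
// the input stream is rewound to just after the script.
function speculate(tokensAfterScript, scriptWasBenign) {
  const queue = tokensAfterScript.map((t) => ({ op: "insert", token: t }));
  if (scriptWasBenign) {
    return { committed: queue, rewoundTo: null }; // use the queued work
  }
  return { committed: [], rewoundTo: 0 };         // reparse from the script
}

console.log(speculate(["<p>", "text", "</p>"], true).committed.length); // 3
console.log(speculate(["<p>", "text"], false).rewoundTo);               // 0
```

The expensive case is the discard: all the speculative work is wasted, which is why the comments below compare it to a pipeline flush in a CPU.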
For added fun, document.write can write a script that calls document.write.
What a mess. To support a stupid HTML feature, the browser's parser has to be set up like a superscalar CPU, retirement unit and all. Hopefully the discard operation doesn't happen very often.
x86 CPUs have to do something like this if you store into code just ahead of execution. That was a marginally useful optimization technique in the 1980s. Today it's a performance hit. The CPU is happily looking ahead and decoding instructions when one of the superscalar pipelines has a store into the instruction stream. The retirement unit catches the conflict between an instruction fetch on one stream and a store into the same location in another. The CPU stalls while instructions up to the changed instruction are committed. Then the CPU is flushed and cleared as for a page fault or context switch, and restarts from the newly stored instruction.
Only x86 machines do this, for backwards compatibility with the DOS era. The same thing seems to have happened in the browser area.
(Prospective fetching should be disabled on devices where you pay for data traffic. Is it?)
Apparently a lot of older webpages do stuff like this. You could have a table emitted by nested for loops writing the `<table>`, `<tr>`, and `<td>` opening/closing tags. It's a small step from there to constructing tags bit by bit.
I don't see people doing it in modern websites (doesn't mean there aren't), but we sort of have to support all of the Internet, so...
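The legacy pattern being described looks roughly like this. Again the `write` stub is a stand-in for `document.write` so the sketch runs outside a browser:

```javascript
// Old-school table generation: nested loops emitting open/close tags
// piecemeal via document.write (modeled here as string appends).
let html = "";
const write = (s) => { html += s; };

write("<table>");
for (let row = 0; row < 2; row++) {
  write("<tr>");
  for (let col = 0; col < 2; col++) {
    write("<td>" + row + "," + col + "</td>");
  }
  write("</tr>");
}
write("</table>");

console.log(html);
// "<table><tr><td>0,0</td><td>0,1</td></tr><tr><td>1,0</td><td>1,1</td></tr></table>"
```

Each individual call is harmless, but nothing stops a page from moving the tag boundaries around, e.g. writing `"<t"` in one call and `"d>"` in the next, which is exactly the case the parser has to cope with.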
The last question was about using Rust for Servo, and in particular whether there had been any major pain points. Patrick Walton helped to answer (45:48):
~ The overall discipline hasn't been too difficult to follow.
~ Most of the issues we hit are issues in the implementation (e.g. the precise way the borrow checker checks invariants, reasons about lifetimes, and so on). These kinds of issues are fixable, and we continue to improve them all the time.
~ I don't speak for the entire Servo team, but I feel like the discipline, the overall type-system strategy that Rust enforces, has been pretty friendly.
~ We still have a lot of unsafe code, but a lot of it is unavoidable, for calling C libraries. And also we're doing things like: we have Rust objects which are managed by the SpiderMonkey [JavaScript] garbage collector. Which is really cool that we can do that, but the interface has to be written in the unsafe dialect [of Rust].
The first set of questions was about how the borrow checker understands vectors.
~ How do you tie the ownership of [the element array] to the vector? How does the compiler know that when you take a reference into the element array, [it should treat the vector itself as borrowed]?
~ What happens if I write my own library class [instead of using one that's part of the standard library like vec]?
It often depends on your skill level, whether you take breaks from the game, and how much of a completionist you are. In some games these factors interact in complicated ways.
From a regulatory perspective, I'm afraid the best we can do is:
* Ban tying game mechanics to time outside the game, whether it's "wait 8 hours unless you pay" or "this reward is only available for 4 hours" or an insidious combination of the two.
* Increase transparency, e.g. by asking app stores to show graphs of (time played) vs (money spent).
Ad-blockers protect against attacks coming from compromised ad servers, but also increase your total attack surface. The message is misleading about the relative risks, and it's dishonest in singling out ad-blockers among thousands of extensions, but I wouldn't say it's blatantly false.
Is it libelous against browser makers, though? They spoof the browser's info bar and I believe they style the landing page to look like an internal browser page.
Waiting by not playing a game is never a "core mechanic" of a game. It's either an excuse for the app to grab your attention (play now to get a bonus) or a way to drive IAP (pay so you don't have to wait).
I'm thinking of something like a Tamagotchi, where you need to take action at specified intervals; push notifications would seem like something I'd want as a reminder.
about:memory is primarily intended as a debugging tool, so we can diagnose and fix memory problems. It has been a fantastic success in this regard.
It also gives add-on authors and web application developers a chance to do the same.
If some (advanced/heavy) users are also able to use about:memory to diagnose and work around problems they encounter, that's a nice side effect. As long as some of those users remember to file bug reports, that is :)