> Exploiting Clickjacking on Google YOLO allows visitors' name, profile picture and email address to be leaked. That's right, I can even know your email address. :). Click here if you want to see behind the scenes (make sure you are logged in to Google with a modern browser, preferably on a PC).
Google's reply to a VRP submission:
> Thanks for your bug report and research to keep our users secure! We've investigated your submission and made the decision not to track it as a security bug.
> The login widget has to be frameable for it to work. I'm not sure how we could fix this to prevent this problem, but thanks for the report!
Yeah, I read through the post and (assuming I'm parsing it right) can't figure out why this hasn't caused a massive shitstorm already. Are they actually arguing that it's not a security bug because it's necessary for them to implement a 'one click sign in through Google' feature?
Likejacking Facebook likes has been around for 8+ years, leaks a similar amount of information, and there’s no big shitstorm. Not sure what the big difference between YOLO and FB’s like button is?
I was wondering whether this is actually the same as likejacking. Is the ‘leak’ in that case the ability for the Facebook page/post owner to then look you up in the list of ‘likes’? If so, I think Facebook privacy settings may allow users not to leak their emails or pictures in this case.
Also, I think it’s more widespread given that ‘Google identity’ covers a large number of Google products, and signing into one signs into all. With Facebook any time I log in nowadays I open incognito, check messages, log out, whereas with Google I generally stay logged in, mostly because I want gmail and my cross device browsing history to work.
Indeed. A Google engineer stated on Twitter [0] that the shutdown of the service happened because apparently YOLO is only supposed to be accessible to whitelisted partners.
They also state in the same Twitter thread that they were aware of the issue before the blog post was written. IANAL but even if the shutdown was intentional (as opposed to being the example of terrible damage control it looks like), willfully leaving a bug in production that allows a set of whitelisted partners to deanonymize their visitors without their consent seems like something that shouldn't fly in countries with data protection laws?
I just received a message back on Twitter saying that the whitelist wasn't the fix and they are still making more changes.
This is seriously denting my continued belief in Google's security chops. I know they have some of the finest security researchers on the planet but this was handled in a ham-fisted and ineffective way so far.
And best of all: without 'partner' status you won't be able to check if it has been fixed.
>This is seriously denting my continued belief in Google's security chops. I know they have some of the finest security researchers on the planet but this was handled in a ham-fisted and ineffective way so far.
This is a great demonstration of how a company can have all of the right talent but still manage to become incompetent through poor organizational policies.
It would be fine if they only gave whitelist access to people who could already simply access your data by request. But GDPR would only require that they know who could access it, and that the access list be less than "the entire world".
Exactly. Where it was supposed to be "just whitelisted partners," he discovered it was actually "everybody." It's no different from discovering that instead of the password, an empty string is enough.
They're burning developers' and potential employees' trust in the first place. This "we don't know how to fix it ==> not a bug" attitude is what's staggering.
This keeps happening over and over again. I remarked the other day that the most feared words when reporting a serious bug are 'won't fix'. It is super annoying. If the feature can't be made to work safely then drop the feature.
Anything that users get conditioned to because of repeated appearance has this potential, and has been warned against.
What should really bother you is that rather than putting up these stupid cookiewalls the intended effect of the legislation was to get websites to stop tracking everything and everybody and this was the result.
Self-regulation didn't work, then there was a soft push, which resulted in a lot of wriggling to get around the law's intent, and now we will see the hard push.
I wonder how many parties will have the guts to try to wiggle out of the hard push, and I'm quietly hoping for one of the larger offenders to be hit so hard they have to shut down, which might send a useful message to the rest.
Analytics is fine but this wholesale profile building is really across the line.
What really bothers me is the law's original design got hamstrung when governments realized it would subvert their own site analytics, and we ended up with the quite-empty-but-mandatory dialog informing users that a site does a thing that is pretty fundamental web technology (not quite as fundamental as "Transmits data using the HTTP protocol", but pretty close), instead of scrubbing the whole initiative or replacing it with a Europe-wide education initiative ("The EU presents: browsing and you").
Maybe regulation would work better if there weren't such a disconnect between what lawmakers think people want and the way the technology works.
I always thought it was a combination of slow legislative process, legislators not understanding tech, and industry pushback. I somehow doubt underfunded government IT departments had that much pull.
Cookie disclaimers at this point need to be taken to their logical conclusion: browser vendors and site operators should add a standard Yes-I-Know-What-Cookies-Are header to the next HTTP update, which can then be vomited at sites by default browser configuration to let them know it's okay to auto-hide the banner.
Hell, let's repurpose Do Not Track for it; it's not like it's being used for anything meaningful otherwise.
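For what it's worth, a minimal sketch of what honoring such a header could look like server-side, assuming a hypothetical `Sec-Cookie-Ack` request header that browsers would send once the user has acknowledged cookies globally (no such header exists in any spec):

```js
// Hypothetical "Sec-Cookie-Ack" header: sent by the browser once the user
// has globally acknowledged what cookies are, so sites can skip the banner.
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  const acknowledged = req.get('Sec-Cookie-Ack') === '1';
  res.send(
    (acknowledged ? '' : '<div class="banner">This site uses cookies.</div>') +
    '<p>Actual content.</p>'
  );
});

app.listen(3000);
```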
I feel like honoring Do Not Track is like honoring deadbolts on wooden doors. Most people honor it, but you're not using it to keep those people out...
I expect that the reason why most people honor the first is the high likelihood of getting caught or seen. This deterrent does not exist for web tracking.
I think it's more likely that most tracking companies ignore do not track.
The GDPR rather makes it obsolete, actually. DNT was meant as a general purpose opt-out, whereas the GDPR requires an explicit opt-in for most things.
And well, DNT could have had legal bearing, since most legislations in the world require you to stop tracking when the user tells you not to.
So, if the user goes and sets up this general purpose opt-out, you'd have to have some sort of argument why you're different than what the user had in mind when they turned DNT on.
Could have had that legal bearing. Microsoft as well as Google and Facebook killed it off pretty well.
Microsoft by turning it on by default in Internet Explorer, meaning that there were now lots of instances where the user had not explicitly gone into the settings to turn it on (nor performed some other action that serves as a reasonable sign that this is what they'd want, like going into InPrivate Browsing or specifically installing a privacy-focused browser / operating system).
Google and Facebook killed it off by saying right away that they would not respect it. And consider how many webpages bundle a Facebook Like button or Google assets: Analytics, ads, GStatic, ajax.googleapis.com, jQuery, fonts, reCAPTCHA, Maps, YouTube, etc.
As such, there were very few webpages left that could have chosen to respect it and no judge would have just ruled that everyone has to respect it. It would have killed the internet for a few months.
Same here. And it's so annoying that I'm scared of resetting my Android phone just so that I don't have to hit cookie consent everywhere...
Anyway, now with GDPR consent buttons on their way (at least in Europe), there's a fresh new opportunity for black hats to click jack their whole population of visitors all over again.
I just immediately hit those 'cookie consent buttons/boxes' with a uBlock Origin 'block element'. Gets rid of them permanently, and doesn't require submitting/clicking anything.
Except that cookies are already transmitted to the client device in far too many cases before the disclaimer is displayed.
Also, I'm not sure blanket agreeing to all (tracking) cookies will be in accordance with the GDPR.
The implication is that, now that you know that cookies are used for tracking, remaining on the site is implied consent. Like the omnipresent "this call may be recorded" statement at call centers.
It is an instance of a much broader issue, where contracts are no longer the result of any negotiation, but are a take it or leave it option.
I understand going after each and every website would be impractical, but imo a disclaimer with a button, most probably shown after the site has already transmitted a handful of cookies, does not comply with the spirit of the regulation.
Most people have at least a passing desire for cleanliness and order (the stuff that becomes OCD when out of balance) which compels them to get rid of the banner.
To generalize, it's not easy to judge what pixels on a browser's rendered webpage are trustworthy and legitimate.
For example, every time I see an "Are you sure you want to leave this page?" dialog[1], I hesitate for a moment and wonder if that dialog box is being spoofed. That dialog shows up on many scammy websites, but on legitimate ones too. Yes, one could try to learn which dialogs can't be spoofed[2], but there's always paranoia because you can't keep up to date with all unknown future exploits.
Chrome makes that dialog box scarier because it is modal: you can't click outside of the box on the browser tab's [x] to close the window. (You can't use Ctrl+F4 on the keyboard to close it either.) In contrast, Firefox lets you avoid the dialog box entirely by letting you click on the tab's [x] or press Ctrl+F4.
It's easy to replicate these differences in behavior on the website regex101.com.[3] Type a few characters there and then try to navigate away from the page. Chrome forces you to interact with the dialog box, but Firefox lets you click [x] on the browser tab.
It's nearly impossible for any combination of CSS and JavaScript to "escape" the browser window and hijack the [x] button on the browser's tab, so it feels "safer" just to click there.
FWIW, every time a browser pops up a modal that I find suspicious, I use a task manager or an OS shell to kill the process. If I have lost faith in anything a program has rendered to the screen, I no longer trust any of the program's own ways -- including the topmost 'x' -- of making the modal cleanly go away without triggering an action I didn't want to approve of.
The essay 'The Line of Death' [1] talks about users' trust placed into UI elements, and the implications thereof.
I think Safari has actually made some good improvements here. It now renders all JS-initiated alerts fully within the page’s frame, with different chrome than what’s used elsewhere in the system.
Perhaps there should be a symbol for "trustworthy", that you can't render on a browser. (The browser would detect it and censor it, e.g. by blackening it out). But the browser itself can use it, e.g. in dialog boxes.
>Perhaps there should be a symbol for "trustworthy", that you can't render on a browser.
To expand on this, the web browsers are missing:
1) trusted pixels: Some bank websites implement this idea when you try to sign in. When you enter your id, you are shown a special secret image that you chose when you created the account. If that image isn't there, you should not trust the password field presented. Therefore, any criminal who wants to present a fake bank login screen also has to know the secret image as well. E.g. Chrome could use this technique to show the secret image with dialog boxes truly triggered by Chrome itself instead of painted by malicious HTML.
2) a trusted keyboard sequence that is well-known and standard: the Windows operating system has this with Ctrl+Alt+Del. Instead of trusting any login screen, you just press Ctrl+Alt+Del, because no user-mode program can hijack that special key sequence; intercepting it requires a kernel patch or a registry hack. A similar idea could be used in browsers to toggle a special keyboard mode that disables all JavaScript keyboard events. This mode might be useful for password fields, or as a special key sequence to "unstack" hidden buttons, etc.
Someone tried defeating the secret-image security... it turns out all it takes is a static image saying "Error with Secret Image Server, call us if the problem lasts more than 24 hours."
>3. Real bank website shows fake bank website your "secret" image.
I had left out some implementation details for brevity. Any first time use of a "new" computer to access the online account requires verification from the bank. (E.g. random code is emailed.) At that point, a bank cookie is set. The bank doesn't show the secret image unless the computer already has a cookie from a previous verification.
A fake webpage that tries to forward credentials to a "robo" browser on a computer in Russia wouldn't have that cookie so they'd never be able to see the secret image.
There are probably other security checks the banks do such as ip blacklists etc.
The secret image isn't foolproof but it's an extra signal to signify trust. Likewise, 2-factor authentication with mobile phones isn't foolproof either and can also be hacked.
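To make that flow concrete, here's a rough sketch of the device-cookie gate described above; all the names (`deviceToken`, the user store) are hypothetical:

```js
// Sketch: only reveal the anti-phishing image to devices that have already
// passed out-of-band verification. Names and storage are illustrative only.
const express = require('express');
const cookieParser = require('cookie-parser');

const app = express();
app.use(express.json());
app.use(cookieParser());

// Hypothetical in-memory user store.
const users = {
  alice: { secretImageUrl: '/img/cat.png', knownDeviceTokens: ['abc123'] },
};

app.post('/login/start', (req, res) => {
  const user = users[req.body.userId];
  const token = req.cookies.deviceToken;

  if (user && token && user.knownDeviceTokens.includes(token)) {
    // Known device: show the secret image before asking for the password.
    res.json({ secretImageUrl: user.secretImageUrl });
  } else {
    // Unknown device (e.g. a phisher's "robo" browser): never reveal the
    // image; require an emailed verification code first.
    res.json({ verificationRequired: true });
  }
});

app.listen(3000);
```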
Banks should notice a new IP/browser and then force 2-factor authentication before showing the image, e.g. by sending a text. That would make users far more suspicious, since rather than a normal login they'd see one of those "we don't recognize your browser" screens. The bank can also track the 3rd-party connection to their servers, making this trickier to get away with. So, while not foolproof, done correctly it is actually very useful.
However, a website would not have access to the browser's image unless the machine was already compromised.
Hm... the way I remember this feature (forgot where it was) is that your custom image is stored in your browser (localstorage?), not on the remote site. So when you see your image, you know it's the same origin. (E.g. not a similar URL with two letters swapped, I guess.)
That's not an issue though if we're talking about the browser UI, as there's no way for a website (malicious or otherwise) to obtain secret image data from the browser.
Sorry for not being clear. For the Chrome implementation of the secret image, I was thinking that the user would store it locally inside of Google Chrome configuration. E.g. in "chrome://settings" or "chrome://flags", the user sets the secret image (e.g. a photo of their cat or whatever.)
Oops, I was the one being unclear. I was just going off on a tangent about the HTML ones that some banks use. A native one indeed wouldn't have the problem I'm mentioning.
True. Like the other commenter noted, perhaps we could use a special key-combination (or perhaps a new key even) to enter a secure mode. Pressing that key-combination could trigger the area above the line-of-death to increase in size. Then it could show more security-related information, and perhaps even password entry fields. Just brainstorming here.
I think that'll end up backfiring by making a single target a lot of people will aim to break, creating an arms race that the browser will lose on occasion, to the great detriment of its users.
I think the issue you're describing has been fixed for years in Chrome. (The SuperUser question is from 2013.) Websites no longer have full control over the content of the dialog box, they do not control the button labels ("leave page"), and they are (I believe) prevented from adding so much text that the button runs off the screen.
The fact that the dialog box is modal proves that it's not spoofed.
Right, it was fixed (past tense), but that doesn't change the cognitive burden for tomorrow's unknown exploits that look very similar (future tense). Every time a popup shows up on screen, I have to ask myself, "am I up to date on the latest browser engine internals to safely click this UI element?"
>The fact that the dialog box is modal proves that it's not spoofed.
Right, but... this creates a very convoluted "decision tree" in the web surfer's brain for deciding whether dialog boxes are real and trustworthy. E.g. if I want to instruct my grandmother to only click trustworthy "Leave this Page" buttons, I have to tell her to click outside the box, and if she hears a beep while nothing happens (the layman's test for what computer geeks call "modal"), she can then safely click that button. Otherwise that "Leave this Page" button could be a fake that downloads malware onto her computer. Those are very nuanced and error-prone step-by-step instructions for safe web surfing.
Instead of that, using the spatial rules of clicking on the tab browser (the "line of death" as others pointed out) is a much easier guideline to follow.
> Shortly after this article was published, Google silently prevented my domain from using the API:
> The client origin is not permitted to use this API.
> Welp.
So some buttons stopped working, and now you have to believe that everything was as the blog said. Well, it was.
And a "mitigation" from Google that just blocks access to the API makes things even more interesting.
A lot of Google employees read HN and actively post here, so no surprise. Did they at least contact you to properly open a ticket, now that they've implicitly recognized the vulnerability? Otherwise it's a very, very dickish move, as it solves nothing and you basically worked for free...
And now, if anybody from the HN team is listening: can you explain why this thread is slipping off the front page so fast?
Currently it’s being outranked by articles that are older, with fewer upvotes and fewer comments. Can you guarantee that nobody is able to manipulate the ranking? It’s only a hunch, but it’s not the first time I’ve noticed that Google-related "bad buzz" moves off the main page slightly faster than other stories...
PS: I’ll gladly accept downvotes, but answers on why I’m wrong or paranoid would be better.
There appear to be quite a few flags on the article pushing it down. The ratio of upvotes to age compared to the rest of the front page is a strong indicator of this.
Also: lots of HN'ers work at google. It would be a nice rule if people were told to abstain from using their flagging privileges when the company they work at is the subject of a thread.
It's probably because a lot of Google folks are on here - protecting their brand. Unfortunately that part isn't transparent, but it's hopefully a minor issue.
I recall a video talking explicitly about this problem - it was something about using the browser paint API in conjunction with iframes for security? The gist was a browser should be able to tell in real time if an iframe is visible and should be able to block user input depending on whether or not the site was hiding the iframe, putting something on top of it, pushing it off screen, moving it around, etc...
But I can't remember the source. If I can find it, I'll add it in an edit. And of course if anyone else knows the talk I'm thinking of, please link.
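It may have been the IntersectionObserver v2 work, which adds visibility tracking (occlusion, opacity, filters) on top of intersection tracking; I'm not sure that's the talk, but a framed widget could use it roughly like this, assuming Chrome's `trackVisibility` option:

```js
// Runs inside the embedded widget. trackVisibility asks the browser to
// report whether the element is actually visible (not covered, no
// opacity/filter distortion); delay (>= 100 ms) rate-limits the checks.
let acceptClicks = false;
const button = document.querySelector('#confirm-button');

const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    acceptClicks = entry.isIntersecting && entry.isVisible;
  }
}, { threshold: [1.0], trackVisibility: true, delay: 100 });

observer.observe(button);

// Capture-phase listener so we can swallow clicks that arrive while the
// widget is obscured or disguised.
button.addEventListener('click', (e) => {
  if (!acceptClicks) e.stopImmediatePropagation();
}, { capture: true });
```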
NoScript includes protection against this! He calls it ClearClick:
" whenever you click or otherwise interact, through your mouse or your keyboard, with an embedded element which is partially obstructed, transparent or otherwise disguised, NoScript prevents the interaction from completing and reveals you the real thing in "clear". At that point you can evaluate if the click target was actually the intended one, and decide if keeping it locked or unlock it for free interaction."
It certainly makes me glad I did _this_ on my FB account:
>>
You previously turned off platform apps, websites and plug-ins. To use this feature, you need to turn them back on, which also resets your Apps others use settings to their default settings.
<<
.. but further to that, I should take my FB login and stick it in a Firefox container where it belongs.
> This report will unfortunately not be accepted for our VRP. Only first reports of technical security vulnerabilities that substantially affect the confidentiality or integrity of our users' data are in scope, and we feel the issue you mentioned does not meet that bar :(
Or maybe that simply means this is not the FIRST report of that technical security vulnerability that substantially affects the confidentiality of their users' data.
To fix this, there could be a new `X-Frame-Options` value: `compose-over`. The browser rendering context would compose the frame separately and always place it on top of the rendering context, above every other element, regardless of the host page element's z-index, opacity, whatever.
It's kind of like how an app cannot draw over system UI; like the permissions dialog.
I'm surprised this is not how X-Frame-Options worked in the first place.
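For illustration, the opt-in would just be a response header on the widget; note the `compose-over` value is this thread's proposal, not something any browser supports (real values are `DENY`, `SAMEORIGIN`, and the deprecated `ALLOW-FROM`):

```js
const http = require('http');

http.createServer((req, res) => {
  // Hypothetical value: the frame may be embedded, but the browser would
  // always composite it on top, fully opaque, ignoring the host page's
  // z-index/opacity tricks.
  res.setHeader('X-Frame-Options', 'compose-over');
  res.end('<button>Sign in with Example</button>');
}).listen(8080);
```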
Or maybe logging in ought to be handled directly by the browser in a way that couldn't be highjacked or phished easily. Do we really need a million different implementations of a login form?
I'm looking forward to Google giving out a $100 reward or even nothing to the researcher.
Like they did to the guy who found the sitemap ranking bug in Google Search where he was able to let others pay for a first page ranking. He only got $1,337 and it took Google 6 months to fix it.
I don't think so. Chrome's site isolation just isolates different origins in different browser processes, whereas Firefox's first party isolation is intended to isolate _cookies_.
Interesting discovery: The facebook-like clickjacking doesn't work on Firefox when I have Facebook in its own tab-container (even though I'm logged in, just in that container, not the one I clicked on).
I'm not sure what the minimal repro is here, but if it's the containerization working as intended, that'd be awesome.
This is the intended effect! And if you use the dedicated Facebook Container it's even stronger. The Like button will be blocked entirely, so even Facebook won't receive the "Like" action. https://addons.mozilla.org/en-US/firefox/addon/facebook-cont...
I won't vote either way, but I will say that regardless of what GDPR may mean, "you need a cookie warning" will be accepted web dev mantra for years, and the things they build in that time will be around longer still.
You mentioning "log in" in this context makes it pretty clear you're also fairly ignorant about the specific topic. Logins, shopping baskets, ... do not require a cookie notice.
Not only is what you're saying irrelevant to my point, you're wrong. "Log ins, shopping baskets" do require a notice if they are persistent, which almost all of them are. Looks like you're the ignorant one here.
> As for the reason this was closed as working as intended, it was just done accidentally, we had already an internal bug tracking clickjacking in YOLO. Sorry for the confusion!
Somehow this was known, the blog (innerht.ml) gained some traction, and then action was taken. It seems that some miscommunication occurred inside Google, and this problem attracted much more attention than was necessary.
According to an update on the OP's post, Google has now silently blocked the OP's webpage, so the exploit doesn't work in this specific case - but it will still work for any other malicious page. Not cool, Google.
For me, Google One Tap stopped working on all my sites where it previously worked. I added an HTTP referrer restriction for the API in console.developers.google.com, but I still get the warning message "The client origin is not permitted to use this API." Any thoughts?
If you go to https://www.wego.com/ you can see that Google One Tap still works...
Exactly the same thing here. I use it to secure my admin account and got "The client origin is not permitted to use this API." And like you, my domain is correctly allowed in console.developers.google.com.
I guess they didn't even patch it; they ninja-blocked everything until they have something better. It's stupid, since they had the information beforehand and could have prepared. It proves again that full disclosure is useful.
If even Google can't get basic clickjacking protection right, I really see no hope for the Web as it is. Is there a FF plugin to block all forms of non-first-party content (including but not limited to iframes) and also to switch off "dubious" use of CSS?
Is there any particular reason why JS couldn't be used to emulate the same effect? I'm thinking of the onclick() method calling several things instead of just whatever the button is intended to do.
Please ignore this if I completely misunderstand the discovery, but I don't really see the need to make an HTML+CSS button to make any of this execute.
The same origin policy prevents JS from triggering clicks on elements in iframes that have different origins! The web would be a very insecure place without that... =)
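A quick way to see this, assuming a page that embeds a cross-origin iframe:

```js
const frame = document.querySelector('iframe'); // points at another origin

try {
  // Any attempt to reach into the cross-origin document throws, so a script
  // cannot synthesize clicks on the embedded widget.
  frame.contentWindow.document.querySelector('button').click();
} catch (e) {
  console.log(e.name); // "SecurityError"
}
```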
Unfortunately that is true, and it's really bad for accessibility too (using links as buttons, but not coding the keyboard events that are used on buttons, for example).
Well, yes; you double-click the play button to play the video/iframe. I'd be more worried about "Oh, the button did nothing, I should try again.". The real fix is to not allow transparency/compositing.
I saw it, immediately thought "this is clearly part of the demo", and clicked it, because I was certain it was going to be fun. Woe betide me and my poor risk valuation skills - but not today.
I use clickjacking as a “feature” on a website I operate, http://vlograd.io
I had no choice, at least on mobile.
On mobile browsers, audio contexts start out as muted. They can only be unmuted by an event originating from user interaction.
I use a web player embedded in an iframe on my site. It has an API to communicate with it to do things like playing and pausing the current track. However, this also means the audio context is in a cross-domain iframe, and my only way to trigger the play() method is via the asynchronous postMessage API it exposes. So, in order to unlock the audio context, I present mobile users with a “tap to start” screen. In reality, I’ve positioned and zoomed in on the iframe such that the play button is covering the entire screen for any reasonable screen size. Thus, when the user taps to start, the audio context is unlocked (since the “tap” event on the play button in the iframe fires), and I immediately send a “pause” command via the player’s API. Now, the audio context is unmuted and I’m free to send the “play” command for any track to start playing music.
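A rough sketch of that flow; the player's postMessage protocol here (the 'playing' event, 'pause'/'play' commands) is hypothetical and depends entirely on the embedded player's actual API:

```js
// Parent page. The "tap to start" overlay is really the iframe's play
// button, zoomed so it covers the whole screen.
const player = document.querySelector('#player-iframe').contentWindow;

window.addEventListener('message', (event) => {
  // The user's tap lands inside the iframe, unmuting its audio context.
  // As soon as the player reports it started, pause it again:
  if (event.data && event.data.event === 'playing') {
    player.postMessage({ command: 'pause' }, '*');
    // The audio context is now unlocked; later commands will produce sound:
    // player.postMessage({ command: 'play', trackId: 123 }, '*');
  }
});
```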
A quite common misunderstanding about clickjacking is the idea that third-party content embedded in an iframe can hijack clicks from the parent (your) website. While embedding an untrusted iframe in your website is not a good idea, the clickjacking attack goes the other way around.
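For anyone who hasn't seen it, a minimal sketch of that direction: the attacker's page embeds the victim widget invisibly on top of a decoy (the widget URL is a placeholder):

```html
<!-- Attacker-controlled page. The third-party widget sits transparently on
     top of a decoy button, so the victim's click lands in the iframe. -->
<style>
  .decoy, .target { position: absolute; top: 0; left: 0; width: 200px; height: 40px; }
  .target { opacity: 0; border: 0; } /* invisible, but still receives clicks */
</style>
<button class="decoy">Click to continue</button>
<iframe class="target" src="https://accounts.example.com/one-tap-widget"
        scrolling="no"></iframe>
```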
Why aren't events masked by the last several frames generated by the rendering system?
If a page is divided into two columns with the left half originating from the source origin and the right half from a delegated origin, why should the source origin observe interaction events from the right half, or vice versa?
We should be able to press a hotkey and immediately see at-a-glance who is operating what.
Yeah, that took me a while to figure out just now. But I still don't see how that's an issue, I'm browsing on ycombinator.com, not ashittyiframesite.com
I'd like to see the browser vendors move to allow the source page to carve out and delegate rendering a single region of pixels per child frame -- preventing other frames, including the source page, rendering into that region or receiving events originating in that region. Finally, child frames should not be allowed to sub-partition their allocation -- there's no defensible need for this except clickjacking.
This would neatly solve this problem with the low cost of making folks who want to implement modal popovers have to do some proper scene management in their pages.
It should always be possible for the end-user to view a colored overlay of their screen and see exactly which origins are operating which regions of the screen.
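Purely as a strawman, a hypothetical API for that proposal might look like this; nothing resembling `delegateRegion` exists in any browser:

```js
// Hypothetical: the parent surrenders an exclusive screen region to the
// child frame. No frame (the parent included) may paint over that region
// or receive input events originating inside it.
const frame = document.querySelector('#login-frame');
frame.delegateRegion({
  x: 120, y: 80, width: 320, height: 64,
  exclusive: true,     // nothing may composite over this region
  subPartition: false, // the child may not delegate further, per the proposal
});
```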
With uMatrix the iframes come out nicely too.
I was never happy about noscript usability, so I didn't use any additional script blocking, until I figured out how easy to use uMatrix is.
You may be protected from the specific examples provided in the blog post, but, on the whole, you will not be protected. Most of the underlying vulnerabilities here can be exploited with simple HTML and CSS.
Content blockers can also prevent embedded iframes from loading. The article looks like this for me using uMatrix in Firefox: https://i.imgur.com/pYFXRR3.png
Clicking the link opens the iframe in a new tab, so it's hard to click it again without noticing what's going on.
You can make it a bit more visible if you use the Stylus extension.
Unfortunately Chrome (and probably Firefox Quantum) doesn't let you apply CSS agent sheets (only user/author ones), so that style="display:none!important" on the iframes can't be overridden.
If you use older Firefox or Palemoon then you can use Stylish v2.0.7 and override it.
Facebook attempts to prevent “likejacking” by sometimes asking the user to verify they intended to really like that page. If they see that most people do not confirm this then they ban your like button/page.
So, taking Facebook’s example, this can be “prevented” through some random verification.
Yes, if you aren't signed in to other accounts most of these click jacking scenarios would need to convince you to sign in which would be pretty obvious.
I'm perhaps not understanding the significance of this. Is the issue that if you go to a shitty scam site and start clicking things, you might have issues? I don't see how that's an issue to be solved by a browser.
Leaking your image and email is a huge issue though.
So now no one can use this service, because I am getting the warning message "The client origin is not permitted to use this API." even though I added API restrictions...
Well, elinks is a cul-de-sac, and AFL would probably obliterate it, so elinks users are most probably putting themselves in more danger than they are avoiding.
Of all the content in this article, "modern browser" is what you latch on to? The author isn't shaming you for your choice of browser, or telling you what you should be using for day-to-day browsing.
"Modern browser" means a browser that keeps up with modern web standards. Yes, w3m (for instance) still receives updates, but (going by the changelog) those updates refine how the browser handles very old web standards rather than extends support to new ones.
> The login widget has to be frameable for it to work. I'm not sure how we could fix this to prevent this problem, but thanks for the report!
That's why we don't trust login widgets, right?