Yes, and I have yet to meet another New Yorker who finds it appealing. From what I've gathered, tourists love it. I guess that's how it works: tourists take pictures of the graffiti, spread it over social media, the artist writes the same corny words on an object you can put in your home, profit.
I’m less happy having been made aware of 7soulsdeep. It’s so fundamentally commercial in nature that it’s quite different from what graffiti used to be. Clearly street artists have been monetizing via social media for a while, but this is both hyper-cringe and more successful than anything I’ve seen before.
I am still not convinced that this is better than our custom-built system, even if it brings some extra functionality (but you know, people want to try new tools all the time, even when they don't add any value).
That's about it. Horrible user experience - oh, you're about to pay us, just click a few sidewalks first - and the condescension of asking people to do a menial task that improves their ML models. But forcing you to use one of their sanctioned browsers and letting them record whatever they want is where the real hostility comes in. It's exercising monopoly power to squeeze more out of people and repress competition; I'd call that hostile.
Do you block Google trackers aggressively? reCAPTCHA uses that very heavily: if you allow all of their stuff and let them track you across the web, you'll basically never have to do more than click the button. On the other hand, if you take your privacy seriously and block trackers aggressively, you'll have a pretty awful time.
I imagine hCaptcha doesn't have enough trackers sprinkled around the web to use those as signals for this.
I do block Google trackers and have network state partitioning enabled, but the reCAPTCHA tests are usually bearable (often just a checkbox, sometimes a page of challenges). It seems like I get at least two pages of tests from hCaptcha every time.
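For anyone wondering, "aggressive tracker blocking" here usually means uBlock Origin network rules along these lines - a sketch of commonly blocked Google domains, not a complete or canonical list:

    ||google-analytics.com^
    ||googletagmanager.com^
    ||doubleclick.net^
    ||googlesyndication.com^

With rules like these in place, reCAPTCHA has little browsing history to score you on, so it falls back to the image challenges.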
I work at OpenBCI. It's been great getting to work with Gabe and the Valve team. Can't overstate how unique they are as partners on a project like this. Also cool to see OpenBCI (sort of) in the top 10 today :)
What is the timeframe you imagine for us to see the first consumer BCI-based products?
Gabe seems to talk a lot about "inserting" data (like feelings) into the brain, instead of reading it. Is the technology really there already? Can we reliably read data from the brain (i.e. using it as input for a digital system)? And regarding inserting, what is the coolest thing you've done that you can share with us?
Right now, OpenBCI makes products that only handle the "read" side of the equation.
As far as "writing" back into the brain goes, the coolest thing I've seen was the "BrainNet" project from the University of Washington, which used transcranial magnetic stimulation (TMS).
The science and tech is advancing very fast, but I think it's not accurate enough to be in everyday use yet as a controller for devices. 90% accuracy sounds great in a paper, but imagine if your mouse clicks or keystrokes didn't register 1 out of 10 times.
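To put numbers on that, here's a quick back-of-the-envelope sketch in Python (the 90% is just the paper-style accuracy from above):

    # Probability that every event in a sequence registers,
    # given per-event accuracy p.
    p = 0.9
    for n in (1, 5, 20):
        print(f"{n} event(s): {p**n:.0%} chance they all register")
    # 1 event: 90%, 5 events: 59%, 20 events: 12%

A 90%-accurate "keyboard" gets a five-keystroke word fully right only about 59% of the time, which is why paper-grade accuracy isn't product-grade accuracy.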
What feels way more likely is that we'll see biometric data being collected by more consumer tech devices (cellphones, laptops, headphones) and used as one of many inputs to improve software applications and operating systems. Could EMG or EEG data be used to improve iOS autocorrect and reduce fat finger mistakes? That's a mundane application for crazy tech, but it's the kind of thing that I think will be a necessary intermediate step in us learning how to use these types of signals in everyday ways.
I don't know much about the tech, but I'm curious whether neural networks and/or machine learning are used to process the data and find connections, correlations, or... cool stuff like that? I just think machine learning is neat, man, and biotech is also very neat. The two together... super neat... haha.
I know about the example they put out, a horror game that responds to your fears by reading the data you produce. Is the tech actually at a point where a dedicated team can accurately and consistently identify biomarkers correlated with a fear response? Or differentiate between fear, anger, happiness, etc.?
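For what it's worth, "identifying biomarkers" in this literature usually means training a classifier on features extracted from the raw signals. Here's a toy sketch of that pipeline in Python; every feature, label, and data point is invented for illustration:

    # Toy sketch: classifying emotional state from biosignal features.
    # The data is random noise; real studies use designed stimuli and
    # validated labels, and that's the hard part.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # One row per time window; columns might be heart rate, skin
    # conductance, EEG band power, etc. (hypothetical features).
    X = rng.normal(size=(200, 4))
    y = rng.integers(0, 3, size=200)  # 0=calm, 1=fear, 2=anger (fake)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())
    # ~0.33 on random data, i.e. chance level for three classes

Distinguishing fear from anger reliably, rather than just arousal from calm, is exactly the part that's still contested.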
Has there been any research into using this tech to effectively treat anxiety/depression? I'm really interested in this focus once "writing" information is possible, especially for more severe mental health issues that aren't related to physical brain deterioration. But it seems (per your comments) that that's still a ways away, haha. It would be really cool to see a psychiatrist and have them treat my anxiety with a few bip bops from an industrial headset, or go home with a prescription program to run once a day; that just sounds like science fiction.
I often wonder about the parallels between reverse engineering games using memory-inspection software like Cheat Engine and trying to reverse engineer the brain using a BCI.
For example, if you want to find the memory address for your gun's ammo, you search memory for a starting value, say 30, and get all addresses that match. Then you fire the gun and find which of those addresses now hold the value 29, and you keep narrowing the search until only one address remains. At that point you can read the address from a third-party program that alerts you when you're low on ammo, or even write to it to give yourself more (a toy version of this search is sketched below).
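A toy version of the narrowing search in Python:

    # Toy simulation of the cheat-engine narrowing search described above.
    import random

    memory = [random.randint(0, 255) for _ in range(10_000)]
    AMMO = 4242
    memory[AMMO] = 30  # the real ammo value hides among decoys

    # First scan: every address currently holding 30.
    candidates = [a for a, v in enumerate(memory) if v == 30]

    # "Fire the gun": the game decrements ammo; rescan for 29.
    memory[AMMO] = 29
    candidates = [a for a in candidates if memory[a] == 29]

    print(candidates)  # [4242] here; in a live game other values
                       # change between scans too, so it takes more rounds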
Obviously the brain isn't as discrete but I feel like if I could play around with a BCI I could find fun signals for when I'm thinking about 'apples' vs 'oranges' and slowly build up an interface.
Have you been able to use a BCI to detect when you're thinking about something specific?
There's a huge chunk of neuroscience devoted to questions like this.
Several groups have shown that they can "decode" a remembered image from brain activity. This is comparatively easy when the images are simple and there are only a few possibilities, but can generalize to larger sets of images and even (sort of) never-before-seen ones. Sensory and motor information is relatively accessible; I don't know that anyone's making great progress decoding thoughts like "I should be home by 8pm".
I thought about buying an OpenBCI kit to try something like this, but I think there's just not enough sensor resolution to get anything meaningful in the way you're suggesting. The strongest signals by far come from muscle movement, too, so that tends to be the basis of interfaces.
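The muscle route is also just much easier to build against: a jaw clench or blink shows up as a loud burst of EMG, so a moving RMS plus a threshold already gets you a usable "click". A minimal sketch with made-up numbers (the window size and threshold would need tuning against real hardware):

    # Minimal EMG-style "click" detector: muscle activity arrives as
    # a high-amplitude burst, so windowed RMS + threshold works.
    import numpy as np

    def detect_clench(samples, window=50, threshold=80.0):
        samples = np.asarray(samples, dtype=float)
        kernel = np.ones(window) / window
        rms = np.sqrt(np.convolve(samples**2, kernel, mode="same"))
        above = rms > threshold
        # Report rising edges only, so one clench = one event.
        return np.flatnonzero(above[1:] & ~above[:-1]) + 1

    # Fake data: quiet baseline plus one burst of "muscle" activity.
    rng = np.random.default_rng(1)
    signal = rng.normal(0, 5, 1000)
    signal[400:450] += rng.normal(0, 200, 50)
    print(detect_clench(signal))  # one event shortly before index 400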
This sounds like the spaceship problem. Whenever you build a spaceship to take people to another planet, it will be passed by a later ship which was built to go faster.
Reminds me of my games of Civilization: should I spend a few more turns adding extra engines to my spaceship, or should I launch it now and risk being overtaken by a faster ship?
In real life it's so much more complicated than that, because choosing to send the ship now produces huge amounts of progress from overcoming obstacles you could not have known about until you actually began the project.
It seems like the answer is always to send out a test ship as fast as possible, knowing that it will absolutely be overtaken by a ship built with more engines AND the knowledge gained from the first ship.
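This trade-off even has a name in the interstellar-travel literature: the "wait calculation". A toy version in Python, with all the numbers invented:

    # Toy "wait calculation": if ship speed doubles every 20 years,
    # what launch year minimizes the arrival year?
    D = 4.25      # distance in light-years (Proxima-ish)
    v0 = 0.0001   # initial speed as a fraction of c
    T = 20.0      # years per speed doubling

    arrival, launch = min(
        (t + D / (v0 * 2 ** (t / T)), t) for t in range(400)
    )
    print(f"launch at year {launch}, arrive at year {arrival:.0f}")
    # With these numbers, waiting ~2 centuries beats launching now.

The catch is the one made above: in real life the speed curve isn't a given, because launching is part of what bends it.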
I don't think that's quite right - it's more that they're a lab and not a product manufacturer, so they can't decide to go mass-market.
Even if they decided that they want to go public with a virtual keyboard/mouse/controller and virtual heads-up display that you use by wearing an electrode net, the current team would not be able to make that pivot. End users won't be debugging LabVIEW sketches, spending weeks training a neural network to recognize virtual keystrokes, or shaving spots on their scalps.
Personally, I expect that the V1 product here is approximately just that: a game controller. A few buttons, a pointer, maybe a little haptic feedback. It would be great for me if it supported text input and output faster than a keyboard and terminal, but I don't think that's likely to happen in an early version.
Even less likely is that we're going to jump straight to The Matrix. Valve needs to accept that and be happy with a limited version, instead of letting it fizzle out in the lab.
I cycle daily in NYC and have used both Citymapper and Google Maps.
Google Maps' cycling directions route me onto streets with bicycle lanes. I haven't had any problems with it and I think it's a fine option, maybe because NYC is the kind of place Google has excellent data on.
Citymapper is also good. The fastest-vs-quietest distinction doesn't often yield much of a change in route. What I like about Citymapper is that you can distinguish between riding a personal bike and riding a rental. Sometimes when I bring a friend along, they rent a Citi Bike, and it's nice to have the option of directions that take you to a docking station close to the destination.
I've used Google Maps in London, Berlin, Washington DC, Chicago, New York, SF, Austin, and a few other places. Berlin and DC were probably the worst, but it still worked. In New York (where I live now), it sometimes chooses the "wrong" bike path. For instance, it'll choose a non-protected lane and insist on it even though there's a protected, two-way bike lane a block over that's much, much faster and safer.
Then again, this is a really, really hard problem to solve, and Google probably does it best no matter where I am, so it's my default. I'm excited to see how Apple's version will work in iOS 14, though.