nope_42's comments

The burden of proof is on the person claiming something is true.


So if I claim it is true that neural nets are not conscious, the burden of proof is now on that claim?

The burden of proof is on the person making an assertion. The original claim was not an assertion; it was merely that they "may be slightly conscious". The article linked here is the only one that made an actual assertion, which is that the original claim was categorically false.

In short, I agree with you. The burden of proof is on this article to demonstrate that neural nets are not conscious.


The null hypothesis is that things aren't conscious until there is evidence that they are.


The null hypothesis is not your prior!

https://en.wikipedia.org/wiki/Null_hypothesis


And what evidence is that, specifically?


Can one argue that humans are even conscious? Doesn't consciousness require some form of free will?


See, now we're running into all sorts of problems.

First: What is consciousness? That's the first thing we need to settle: come up with a definition we can both agree on, absent any particular example. So we can't hold up a human and just shrug and gesture in its direction.

Now. We need to prove that humans fit that bill. And in a way that precludes other philosophically possible scenarios. Or at least agree on some fundamental axioms. Like assuming that no one is a brain in a vat. But if we're trying to prove humans are conscious and not simply automatons, can we even dismiss solipsism? Because if humans are not conscious, then I could be a brain in a vat and they're just automatons running for a separate reason.

And the truth is, we can't get past even those two hurdles. We assume humans are conscious because we kind of define consciousness as just shrugging and gesturing in its general direction.

Even if we call consciousness "awareness of self", how do we prove something is aware of itself and not just making the claim without understanding it?



You can, but it is both costly and a hack (the resulting embedding will not be as good as the one you would have gotten by restarting from scratch). So I would not recommend using it in an inference pipeline.


If this is a thing you want to be able to do efficiently then ParametricUMAP (see [docs](https://umap-learn.readthedocs.io/en/latest/parametric_umap....) and [the paper](https://arxiv.org/abs/2009.12981)) will be very effective. It uses a neural network to learn a mapping directly from data to embedding space using a UMAP loss. Pushing new data through is only slightly more expensive than PCA, so being part of an inference pipeline is fine.
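To give a feel for the inference step, here's a minimal sketch of the "neural network maps data to embedding space" idea. It is not ParametricUMAP itself (which trains the network against the UMAP loss); it just regresses a small network onto a precomputed embedding with scikit-learn, using synthetic stand-in data, but the pipeline shape is the same: train once offline, then push new points through cheaply.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # original high-dimensional data
emb = X @ rng.normal(size=(10, 2))      # stand-in for a precomputed 2-D embedding

# Train a small network to map data -> embedding coordinates.
# ParametricUMAP trains such a network against the UMAP loss itself;
# here we just regress onto existing coordinates to show the shape
# of the inference step.
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X, emb)

new_points = rng.normal(size=(5, 10))
new_emb = net.predict(new_points)       # cheap forward pass, no re-fit
```

Once trained, embedding new data is a single forward pass, which is why it slots into an inference pipeline the way PCA does.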


But then you have to train a neural network and lose the speed advantage of UMAP (training is offline, yes, but it's still much slower and finicky).


It is really not that much slower for training (see the paper), and if you are interested in pipelines the difference matters little, since you are looking at a one-off training cost vs. lots of inference.


> Much as I hate it, docker solves this. Failing that, poetry, or if you must, venv. (If you're being "clever", statically compile everything and ship the whole environment, including the interpreter.) Its packaging is a joy compared to node. Even better, enforce standard environments, which stops all of this. One version of everything. You want to change it? Best upgrade it for everyone else.

No, docker doesn't solve the fact that some packages just won't play nicely together. NPM actually handles this better than the Python ecosystem, since it can install different versions of the same dependency side by side. You get larger bundle sizes, but that's better than the alternative of things flat-out not working.
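Concretely, the difference shows up in the installed layout: npm can nest a second copy of a conflicting dependency under the package that needs it, while pip installs everything flat into one site-packages, so only one version can exist (package names here are hypothetical):

```
node_modules/
├── left-pad/              # v2.0.0, used by the app directly
└── some-widget/
    └── node_modules/
        └── left-pad/      # v1.3.0, nested copy just for some-widget

site-packages/
└── left_pad/              # Python: exactly one version for everyone
```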


Link to study?




Can this session replay deal with secured fields like credit cards? e.g. during the replay don't record the credit card field

I'm just wondering how any technology like this would work in a PCI compliant environment.


We take security and privacy seriously. We don't track any sensitive data, including credit card numbers, CVVs, passwords, etc. Zarget detects these fields and masks their values while a session is recorded, so users can be assured that their payment details are secure.

https://docs.zarget.com/v1.0/docs/handling-sensitive-data
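For a sense of what that masking amounts to, here is a generic sketch (not Zarget's actual implementation) that scrubs anything card-shaped from an event payload before it would be written to the replay log:

```python
import re

# Rough pattern for a 13-16 digit card number, with optional spaces/dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def mask_sensitive(event):
    """Replace anything that looks like a card number before the event
    is stored, so raw PANs never reach the session-replay log."""
    return {k: CARD_RE.sub("****", v) if isinstance(v, str) else v
            for k, v in event.items()}
```

Real tools typically mask at capture time in the browser (e.g. by field type or a CSS class), which is stronger than post-hoc scrubbing since the sensitive value never leaves the page.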


Having read some of the comments in here, I'm surprised to see that no one has mentioned that the co-workers reporting their time are lying and this developer isn't. If you worked on a feature for an hour and stared off into space for another hour, you report that the feature took two hours. If you took 30 minutes to write an email you certainly don't report that; you move time around to make it look better.

This behavior is totally normal in this industry and if you get put in a situation like that you have to do the same. No one can concentrate for 40 hours every week on development without getting burned out.

The reality is most people are probably lucky to get 20 hours of productive work done a week.


I do some of my best programming while I'm staring out the window or getting coffee or sitting on the toilet.


Me too, but it still feels like cheating when I bill for that time.


I'm looking for something like Kafka with an at-least-once guarantee. I believe this can be achieved with the Kafka Java client (not sure on that), but librdkafka (the C++ client) doesn't seem to support this guarantee. Performance is secondary to messages not getting dropped in my use cases.

What kind of guarantees does Tank make?


The Tank client will immediately publish to Tank (it doesn't buffer requests). You get at-least-once semantics with Tank (exactly-once pretty much means at-least-once plus dupe detection).


So if I have a subscriber that simply publishes a transformed message onto another topic, can I have a guarantee that if the publish fails it won't move on to the next message in the subscription?


The consumer applications (which interface with Tank brokers via a Tank client library) are responsible for maintaining the sequence number they are consuming from.

Suppose one such application is consuming every new event that's published ("tailing the log"). As soon as another application successfully publishes one or more new messages, the consumer will get them immediately. If the application that attempted to publish failed to do so, or didn't get an ACK for success, then you are guaranteed that no new message(s) were published (i.e. no partial message content).

I am not sure if that answers your question, if not, can you please elaborate?


I believe so. I suppose I'm asking for an abstraction that makes maintaining the sequence number simple and fails safely in the presence of errors.

I'd basically like to be able to map messages from one topic to another with a guarantee that none of those messages will be lost, even when some error occurs (a programming error, system downtime, or a network partition). I'd prefer the application to stop producing messages than lose any of them.

It sounds like that is possible with Tank so I may end up giving it a try.
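The topic-to-topic mapping loop described above can be sketched generically. This uses a hypothetical in-memory broker, not the actual Tank or Kafka client API; the essential move is that the stored offset only advances after the destination publish is acknowledged, which guarantees no loss at the price of possible duplicates (why at-least-once implies dupe detection for exactly-once):

```python
class Broker:
    """Hypothetical in-memory broker standing in for Tank/Kafka."""
    def __init__(self):
        self.topics = {}
        self.drop_next_ack = False

    def publish(self, topic, msg):
        self.topics.setdefault(topic, []).append(msg)  # broker stores the message...
        if self.drop_next_ack:                         # ...but the ACK is lost in transit
            self.drop_next_ack = False
            raise ConnectionError("no ACK received")

def map_topic(broker, src, dst, offsets, transform):
    """At-least-once mapping: only advance the stored offset after the
    destination publish is acknowledged. On failure, stop; the same
    message is retried on the next run (hence possible duplicates)."""
    messages = broker.topics.get(src, [])
    while offsets[src] < len(messages):
        try:
            broker.publish(dst, transform(messages[offsets[src]]))
        except ConnectionError:
            return  # offset not advanced: nothing is ever skipped
        offsets[src] += 1

broker = Broker()
broker.topics["src"] = ["a", "b", "c"]
offsets = {"src": 0}
broker.drop_next_ack = True                          # first publish lands, ACK is lost
map_topic(broker, "src", "dst", offsets, str.upper)  # stops after the lost ACK
map_topic(broker, "src", "dst", offsets, str.upper)  # retry redelivers "a", then the rest
```

After the retry, "dst" holds ["A", "A", "B", "C"]: nothing was dropped, but the message whose ACK was lost got delivered twice.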


FWIW, exactly-once is being worked on now: https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+E...


Should have just used dotPeek instead of ILSpy and hand-writing IL code. Recompiling would have certainly been easier. http://www.jetbrains.com/decompiler/


---Author here---

AWESOME TIP, stoked to try it out, thanks!


This kinda goes against a popular theory that Microsoft mines JetBrains for Visual Studio functionality ideas (or, more likely, the blog author just doesn't work on VS or with the VS dev team all that much).

Awesome read btw!


I had a random idea the other day: a program that monitors/logs system-level calls and lets you replay them. This would effectively let you use your program as normal, log anything that touches external resources, and then automatically use the saved results in your test framework by overriding those system calls at the appropriate time.

Is anyone aware of a solution like this? I certainly don't have time to create it but I would definitely find it useful.
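The core of the idea fits in a few lines. Here's a minimal record/replay sketch (all names hypothetical): in record mode the wrapper calls the real function and logs the result keyed by its arguments; in replay mode it serves the logged result without touching the external resource, which is essentially what the tools mentioned below (VCR etc.) do at the network level:

```python
import functools

class Recorder:
    """Minimal record/replay wrapper: 'record' mode calls the real
    function and logs its result; 'replay' mode returns the logged
    result instead of touching the external resource."""
    def __init__(self, mode="record"):
        self.mode = mode
        self.log = {}

    def wrap(self, fn):
        @functools.wraps(fn)
        def wrapper(*args):
            key = (fn.__name__, args)
            if self.mode == "replay":
                return self.log[key]     # served from the log
            result = fn(*args)           # real call
            self.log[key] = result
            return result
        return wrapper

rec = Recorder("record")
real_calls = []

@rec.wrap
def fetch(url):                          # stands in for a real external call
    real_calls.append(url)
    return "payload for " + url

first = fetch("http://example.com")      # hits the "real" resource, gets logged
rec.mode = "replay"
second = fetch("http://example.com")     # same result, no real call made
```

A real implementation would persist the log to disk and intercept at the syscall or library boundary (e.g. via LD_PRELOAD or monkey-patching) rather than via a decorator, but the record/replay mechanics are the same.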


We have something like this at the company I work for. We have a "simulation layer" which wraps ALL third party code. This lets us simulate the failure of file system calls, network drivers... And we use it to write our tests.


I did this for a gdb wrapper I wrote. It logs all the stdin/stdout between itself and gdb and between itself and the user. You can then take that log file and replay it to simulate both the user input and the gdb output.


Although not quite the same, VCR for Ruby does this for network requests.


IIRC, SoapUI does something similar.

