Yeah, this particular instance seems ok to me. This one makes the example feel weirder:
package main

import (
	"fmt"
	"runtime"
	"sync/atomic"
	"time"
)

var a uint64

func main() {
	runtime.GOMAXPROCS(runtime.NumCPU())
	fmt.Println(runtime.NumCPU(), runtime.GOMAXPROCS(0))
	started := make(chan bool)
	go func() {
		started <- true
		for {
			atomic.AddUint64(&a, 1)
		}
	}()
	<-started
	for {
		fmt.Println(atomic.LoadUint64(&a))
		time.Sleep(time.Second)
	}
}
Here we explicitly wait until the goroutine is started, so we know it's been scheduled by the time our other loop runs. On my computer with go1.8 linux/amd64 it still optimizes out the counting loop, which makes sense: nothing has changed that would convince the optimizer the loop should remain, given the compiler's current logic.
If you add time.Sleep(time.Millisecond) to the goroutine loop, or any other synchronization, it works fine. I'm having trouble thinking of a real world example where you'd want an atomic operation going ham in a loop without any sort of timing or synchronization. At the very least, a channel indicating when the loop is done would keep the loop from being compiled away.
Just because started ensures the goroutine has been scheduled once, that makes no guarantee it will ever be scheduled again. It really does nothing to extend the original example code.
> I'm having trouble thinking of a real world example where you'd want an atomic operation going ham in a loop without any sort of time or synchronization.
By using another atomic op as synchronization? After all that's their stated purpose.
But I don't think that in Go atomic ops are considered synchronization in the sense that they force two goroutines to synchronize at a particular point the way a chan does. I.e. a chan send in one goroutine must be matched with a chan receive in another (unless the chan is buffered). If you have an atomic operation between two synchronization points, I'd expect the only guarantee is that it occurs between the two points, and that when it does, it happens atomically.
>> Strictly speaking this is conforming behavior, as we don't give any guarantees about the scheduler (just like any other language, e.g. C++). This can be explained as "the goroutine is just not scheduled".
Not a Go developer. On a multi-processor machine, how is this conforming behavior? Is "the scheduler cannot give any guarantees" acceptable?
'Is "scheduler cannot give any guarantees" acceptable?'
Most schedulers give far fewer guarantees than you might think. For a guarantee to actually be a guarantee, it must hold no matter what you do within the boundaries of the language. If you create a goroutine fork bomb:
func fork_bomb() {
	for {
		go fork_bomb()
	}
}
Go doesn't, to the best of my knowledge, guarantee that any other goroutine will get any execution time, or guarantee much of anything will happen. Your OS is likely to have similarly weak guarantees for the equivalent process bomb, unless you do something to turn on more guarantees/protection.
You have to go into some relatively special-purpose stuff before you can get schedulers that will guarantee that some process will get scheduled for at least 10ms out of every 100ms or something. And then, once you get that guarantee, you'll pay some other way.
Given that most of our machines are incredibly powerful, and that a lot of them still get upgraded on a fairly routine schedule in many dimensions even if single-core clock speed has stalled out, most of us prefer to work with things that just promise to do their best as long as you don't overload them; the other prices you'd have to pay for hard guarantees turn out not to be worth it on our monster machines. One should always keep an eye out for cases where that stops being true, but in general we're headed away from rigorous guarantees and toward shared resources that are cheaper and more scalable, making up the difference in volume rather than trying to get better guarantees.
An unbuffered channel receive is always matched with a corresponding send. Call the point at which `point1 <- true` and `<-point1` meet T1, and the point at which `point2 <- true` and `<-point2` meet T2. fmt.Println("hello") and time.Sleep(3*time.Second) are both guaranteed to occur between T1 and T2. If we didn't have T2, there would be no guarantee that fmt.Println("hello") runs before the program exits.
Maybe I'm wrong, but this is my understanding of Hoare's and Go's concurrency model.
How about a long-running calculation that uses atomic variables to report progress or poll for cancellation? Might it move all of the atomic ops outside of the loop?
That's very interesting! The alternatives to TCP that can be written in UDP which are optimized for a certain application are especially interesting to me. Working at the UDP level can present a lot of interesting performance options, kinda like working in C instead of Python.
Something I've been curious about is whether there are proxies that can convert protocols. For instance, on your local machine you could have a proxy that turns TCP connections on port 8000 into FASP connections to another machine. This would let you use an ordinary web browser over FASP.
You could even pipe the proxies, e.g., TCP -> FASP -> MinimaLT [1]. That way any program could have really fast data transfer over an encrypted tunnel.
That's the reason I wrote this for myself, and why I made https://filegrave.com, as I thought it could be useful to others as well.
Git is already a great tool with a workflow for handling incremental updates that I happen to be used to working with, so there was no need to reinvent the wheel.
[1] is a similar project someone else did, where he used a space-filling curve (Hilbert's curve, specifically) to plot all the RGB colors in one image, and have nearby colors be similar.
For this case, the "fast" int types are provided, e.g. uint_fast8_t is the fastest unsigned integer type with a width of at least 8 bits; uint8_t is required to have exactly 8 bits.