
There's a lack of nosological classification of meaningful sub-types of depression and their specific dysfunctions. It's currently scatter-gun, medieval treatments without bothering to measure or understand the systems affected.


100% agreed. The current system of "Sorry you feel suicidal / horribly depressed. Here are some meds that are statistically unlikely to help you but you won't know whether they help or not until ~60 days after you start taking them. What if these don't work? We'll shovel you a slightly different permutation of the same meds in hopes that what didn't work the last time will work this time. By the way, it will be another 60 days before you know whether the new meds are working or not" is terribly broken.

Is it any wonder that depressed and schizophrenic people smoke?


From the comments as of writing, few read between the lines of this sloppy reporting: it's to standardize cloud-desktop apps for Amazon's internal use.

Meta, for example, consolidated to Workplace + Google, away from Dropbox, Microsoft, Apple, and others. They use AWS and Azure for some 50k on-demand desktops and a few servers, but mostly deploy to their own metal because it's way, way cheaper than AWS in most use-cases.


I don't see anyone confused about this in the comments and am struggling to see how else it could be interpreted.


The city of Austin is floating legislation to allow up to 3 housing units per property including campers, RVs, and tiny homes.


And here I thought goatse was just for trolling and arp- and IP-spoofing http:// on unprotected Wi-Fi.

PS: NSFW in case the casual observer never encountered the horror that was goatse.cx:

NSFW https://web.archive.org/web/20010518002205/http://www.goatse... NSFW


This statement would be technically legal on its own in x86 real mode if the compiler didn't do null pointer checks. However, it would point the divide-by-zero interrupt vector at 0000:0000 to itself, and when the next division by zero happened, the machine would run into UB (likely a reset or halt): it would jump there, execute 4x ADD byte ptr [BX + SI], AL (or ADD byte ptr [EAX], AL), then run the remaining interrupt vectors as instructions.


Not quite. (char *) 0 is the null pointer. The null pointer is not necessarily a binary all-zero. On some compilers in x86, the null pointer intentionally points to something which will cause a crash when written to.


Find me one contemporary example (ANSI C) with a disassembled screenshot.

This is writing sizeof(char) (== 1 almost everywhere) zero to address zero. It is not using a NULL macro or other predefined symbol.

In the real world, this would generally write a byte to address 0000:0000, leading to UB because it would fuck up the divide-by-zero IV.

PS: I used Borland C++ 3.1, Microsoft C++ 3.x and 4.5x, Watcom, and early GNU.


(void *) 0 is a null pointer constant and is not necessarily an all zeros representation. This has been defined virtually forever.

https://c-faq.com/null/null2.html

https://c-faq.com/null/machexamp.html

Actual ways to do what you want to do are described in

https://c-faq.com/null/accessloc0.html

but technically speaking the pointer with a constant zero assigned to it _is_ a null pointer (which can be implemented as whatever bit pattern), independent of the preprocessor macro.


> Find me one contemporary example (ANSI C) with a disassembled screenshot.

Here in godbolt, clang compiling C simply deletes the code in the function past and including the null pointer dereference.

https://godbolt.org/z/9aqWPazsP

> This is writing sizeof(char) (== 1 almost everywhere)

1 everywhere. sizeof's unit is "how many chars". For instance, there was a Cray machine that could only access 64-bit words; sizeof(char) is still 1, with 64-bit chars.

> zero to address zero. It is not using a NULL macro or other predefined symbol.

NULL is defined as literal 0.


make it a '*(volatile char*)0 = 0' to force the store.



> sizeof(char) (== 1 almost everywhere)

sizeof char is 1 by definition everywhere.

/pedantic


> sizeof char is 1 by definition everywhere.

Parentheses are required around char because it's a type.

/pedantic


That is incorrect :-).

sizeof is an operator in C, and does not need parentheses any more than the pointer operator * does. It is true that programmers frequently think of it as a function and use parentheses.


It's not that simple!

To begin with, sizeof has two syntaxes: the first, which is the one you seem to refer to, is simply

  sizeof expression
where expression involves variables and constants, not types. The second is

  sizeof (type)
where the parentheses are mandatory.

Then, even in the first syntax, even if sizeof is listed among the operators, even if it doesn't look any different from the pointer operator *, nonetheless it has strange priority rules. For example

  sizeof (T) *x
If it were a regular prefix operator obeying priority and right-to-left evaluation, this would mean: dereference x, cast it to T, and return its size. Instead the C standard forces the compiler to interpret it as: take the size of type T and multiply it by x.


Hilariously, I've been downvoted, even though you absolutely need sizeof (char) because it's a type. Given char x; sizeof x is fine. I know sizeof is an operator. I've been using C for 40 years.


That's incorrect, from GCC: error: expected parentheses around type name in sizeof expression.


In the mathematical sense of almost, a property that holds everywhere does qualify as holding almost everywhere.


Guess can be taken as a shortened explanation of what a programming language committee is tasked with making happen. Likely why Lisp is so successful/useful.

-----

unless the initial property is the start of a dynamic operation, in which case holding almost anywhere begins at the first operation after the start of the dynamic operation. Process / lambda / epsilon calculi are just symbolic math. Address 0 is static, everything else dynamic.

Per math, dimension N is static; to be able to "change things up" in dimension N, you need to be almost everywhere higher than dimension N. Edge cases are weird in any dimension. Guess why logicians just do the equivalent of C's !0

(cast classic logic) A=1 (cast boolean logic) B=0

The C statement !(!B == A) holds everywhere; "almost everywhere" depends on how you read the C spec to interpret A & B.


Regarding your PS, I used Borland's Turbo C++ 1.0, and I think you've forgotten that memory models existed. Honestly, that's a good nightmare to forget.


I hated that about DOS, real-mode and BC++. After about 6-8 months of that misery, installing linux and learning to write C code with GCC was the best thing that ever happened to me. I felt like an animal being released from a cage and into the wild.


In those MS-DOS days, Linux was barely usable; by the time Linux became usable, Windows 95 was already around, without those limitations.

My first kernel was 1.0.9 released alongside Slackware 2.0, offering initial support for IDE CD-ROM drives and experimental support for ELF files, by the way.


It doesn't have to be the NULL macro, which is correctly defined as plain 0.

The literal 0 is treated specially, so this could indeed be one of those 'turns into a weird bit pattern' null pointers, if such a thing existed in the wild anymore.

But you're correct in that there probably haven't been any since the turn of the century or whenever the last Univac mainframes got turned off.


Apparently according to the c-faqs link elsethread

    execl takes a variable-length, null-pointer-terminated list of character pointer arguments, and is correctly called like this:
    execl("/bin/sh", "sh", "-c", "date", (char *)0);
Due to execl being a variadic function, it cannot take advantage of a prototype to instruct the compiler that one of its arguments needs to be treated as a pointer context.


Good thing this was covered in the talk


Having it in text is much nicer than having it in video.


Having it in .rodata is much nicer than having it in .text


Having it in machine-readable form that runs really keeps things running.


Not quite, I think. Since this is a char pointer being used, only the first byte of the interrupt entry would be zeroed. Since in real mode those are far pointers stored offset-first, the low byte of the offset would be zeroed. So xxxx:xx00.

But yes, the interrupt table was my first thought when reading the headline.


Char can be the same size as short or int. You can't assume it is one byte.


You can't assume char is one octet. It is one byte by definition.

A byte is CHAR_BIT bits, where CHAR_BIT >= 8. (It's exactly 8 on most implementations; DSPs are the most common exception).

short and int are both required to be at least 16 bits wide. It's possible for int to be 1 byte (sizeof (int) == 1), but only if CHAR_BIT >= 16.


A clarification: You can certainly assume that char is 8 bits if you don't mind losing portability to a small minority of systems.

If I'm being pedantic, I might add something like

    #if CHAR_BIT != 8
    #error "This code assumes 8-bit char"
    #endif
But realistically, if I'm using headers defined by either POSIX or Windows, that's probably enough of a guarantee. (Though I'd still use CHAR_BIT rather than 8 to refer to the number of bits in a byte.)


POSIX indeed guarantees CHAR_BIT == 8.


Yeah, if you're going to be pedantic, check your facts, see the sibling. Since I'm assuming an 8086 interrupt table, I'm also going to assume 8-bit chars, as that's the x86 addressing model. And dereferencing a null pointer is UB, so you can't count on anything anyway without making further assumptions.


If I'm understanding what you're saying correctly, the memory location with address 0 is actually a writable address, but with the value being used semantically to handle division by zero? It's kind of wild to me that this would even be something that's allowed to be done manually, let alone required by a certain mode. Is this something provided for compatibility reasons that you'd have to opt into, or is it just something enabled by default?


Which part is wild? "Magic" memory addresses are a fairly normal way to communicate with hardware; nowadays there are more layers to how you set up mappings in the MMU etc., but in the old days it was normal for everything to just have a fixed address (e.g. I remember back on the Apple ][ the screen's framebuffer was in a particular memory range, or rather two - to avoid tearing you'd draw on one and then flip which one it was using). And particularly for the CPU, it's hard to see how else it could do customizable interrupt handling - I guess you could have some kind of special API with dedicated CPU instructions or something for "programming" in an interrupt table, but that would be more complex and have no particular benefit. "It reads your table of pointers from this address in memory, in this format" is pretty straightforward and easy to use.

As for why it's address 0, well, it has to go somewhere, every machine has a CPU so everyone needs an interrupt table even if they don't have much memory. And when memory was precious there was no sense wasting even one byte of it; 0 was a real address on your physical memory chip, so why not use it just like any other?

(The fact that it's "address 0" for "division by 0" is just coincidence as far as I can see; division by 0 just happens to be the first kind of possible CPU interrupt. Perhaps it was the most common one?)


The part that surprised me is that this would be the way things worked on a modern C++ compiler without any special flags. The article is about C++, and using "magic" memory addresses doesn't seem at all what I'd expect to be the default way to handle division by zero.

From the numerous responses here, it's clear that people interpret my question as about how the hardware itself works, which isn't at all what I was asking about; I'm aware of how stuff like this works at the assembly level, but my understanding was that in C and C++, trying to write arbitrarily to "special" addresses like that would be considered undefined behavior (often resulting in segfaults). When I read the comment I responded to above, it surprised me, so I wanted to check whether I understood what was said correctly. It's honestly kind of confusing to me that so many people seem very upset by the idea that a stranger on the internet might have a misconception about how hardware abstractions are exposed via compiled code to the point that they feel the need to explain in detail how hardware works but not actually answer the question I asked.


The difference between modern days and days of DOS isn't in C/C++ compiler, it's in virtual memory and address space isolation and privilege isolation. So it's not a job of a C/C++ compiler to enforce protection from writing to "special" addresses, because interrupt table updates (and memory-mapped hardware I/O in general) still must happen somewhere (i.e. in kernel, hypervisor, drivers etc) and that code is still written in C/C++, same as in the DOS era.


Mmmm... the job of a modern OS is to use/manage the MMU. Prior to DEC, an OS was just an automated version of a human feeding punch cards/spooling up tape.

DEC provided the necessary hardware MMU to do actual real-time multi-processing/multi-user access in a feasible/practical manner.


> The article is about C++, and using "magic" memory addresses doesn't seem at all what I'd expect to be the default way to handle division by zero.

They're not saying this is, like, a portable standard way to handle division by zero in C++. You're right that it would be undefined behaviour under the standard (but a C++ compiler for real-mode x86 would be expected to support it, at least implicitly; obviously this specific case is not particularly useful, but C++ is used in embedded settings, and setting a custom interrupt handler is something its users want and expect).

A decent, well-behaved language would do some kind of structured error handling on divide by zero, like throwing an exception. IMO that includes any C++ compiler worth bothering with (though again the standard makes it undefined behaviour so it's possible that some compilers don't). But, the way the runtime of such a decent C++ compiler would actually implement that would be by setting up an interrupt handler for the divide by zero interrupt (that would contain code to construct the exception etc.), and by performing this write to address 0 you're overwriting (the pointer to) that interrupt handler. So, this line of code would cause your program to behave (almost certainly) badly on the next division by zero, even if you were using a well-behaved C++ compiler that normally handled division by zero gracefully.

(OTOH with a maliciously pedantic C++ compiler that division by zero would already be undefined behaviour, so in practice, since most C++ compilers tend to be maliciously pedantic, you might be no worse off than you were before that line).

The original post you replied to was just talking about the somewhat interesting details of what would actually happen because of the quirks of what these addresses are used for on that hardware (e.g. the fact that address 0 is supposed to contain a pointer to the handler, so by setting it to 0 you cause the CPU to start executing the interrupt handler table as code, is kind of interesting - not as a point about C++, but as a point about funny emergent behaviour of hardware), not about what this is specified as doing or the normal way of doing things in C++. I don't know why you were downvoted.


> ".. the fact that address 0 is supposed to contain a pointer to the handler .."

What got missed, though, is that there has to be "unused"/"reserved" bit space in order for things to run without requiring additional specific hardware operations.


Back in the day there were no protections. You could write to any address whether it was used by the CPU for interrupt vectors, part of the OS, hardware addresses, anything.


Sure, but I'm asking if that's something that's enabled by default today or not. I don't see why it's unreasonable for things that were useful "back in the day" to not be available with the default arguments on current versions of compilers but available with certain flags. I'm not sure why my question touched a nerve, because I'm genuinely asking both if I understand correctly and if it's something that needs a flag to enable.


In older CPUs there was no memory management hardware and no virtual memory. You could just read/write in code from any address anywhere and you'd be writing to that actual physical memory in the computer. This wasn't a feature so much as it is a lack of a feature.

Modern CPUs with virtual memory make the question a lot more complicated. Every process in a modern OS gets its own address space, so you can write to 0 but it could go anywhere (even virtualized to disk), and all the actual hardware is not directly accessible (you must go through the OS).

I'm not sure I'd call this ability "useful" except if you're writing an operating system. This is a vast simplification, but when your computer boots, it's effectively in a mode that allows reading/writing anywhere. The OS kernel has direct access to all the hardware and then limits access when running user processes.


pre-MMU, unless using DEC box.


It is still true even today for microcontrollers – many of them come with a miniscule amount of RAM, no MMU and generally unpredictable memory maps.


Think of it as part of the “API” of the CPU that a program can make use of however it likes. In the early days (for DOS and the like), the distinction between operating system and application was more one of convention and not enforced by hardware mechanisms. The program was supposed to control the hardware, and not the other way around.


The interrupt vector table on x86 sits by default at 0000:0000 and the CPU uses it to handle interrupts and other exceptions by jumping to the address entry corresponding to the event. Entry 0 is division by 0, but there are also entries for illegal instruction, hardware interrupts and so on.

The address can be changed with the LIDT instruction, and operating systems nowadays will just put it wherever, but for backward compatibility it is expected to still be at 0000:0000 (not sure how this is handled nowadays in UEFI, but it should still be possible to set it up that way).


The kernel can write almost anywhere. (Well, actually, nothing can write to most addresses on a 64-bit machine, but if an address is usable for something, the kernel can use it directly.)

And yes, some addresses are special. (AFAIK, on all current mainstream architectures.) This is the expected way to set those signal handlers, output (and input) data, configure devices, etc.

That said, there are some gotchas on using specific addresses in C. AFAIK none apply to x86, but it's something you usually do in assembly.


Post-ACID for stateful services -> DHC: Durability*, hygiene**, and confidentiality***.

* Superset of bare metal recovery readiness, proven backups, monitoring, availability, and warm storage integrity.

** Superset of consistency, isolation, referential integrity, and data hygiene.

*** Superset of authenticated, G4+ FHE, and/or zero-knowledge encryption at rest and in-flight, elimination of side-channels, least privilege access control of metadata and data, removing plaintext paths, reducing privacy-liability metadata, and eliminating bleed-through of internal metadata externally.


If you're not building a PaaS while running stuff, you should either rent or build a platform because having a reusable platform enables scalability and manageability.

And if you have 750+ employees, you'd better have an (endpoint) client platform too, to minimize development, non-development, and support variables.


The Target here in ATX next to downtown hired two ex-PMCs in full plate carriers. (APD's call backlog for nonviolent reports runs 36 to 48 hours, so it might as well be never. A private person must then either be equipped to defend themselves, cease venturing in public, or move, because the police may be too busy or too unconcerned elsewhere.)


ATX? PMC? APD?

I think I can maybe reverse engineer the meaning of two of those, but really you should use acronyms in full on first use.

xPD is often how Americans refer to police departments, so possibly that, with "A" being a city or state or similar. ATX I can now assume is referring to the same "A" in APD, leaving "TX", so I assume Austin, Texas and Austin Police Department? It shouldn't take this much effort and international geographic knowledge to read your comment! (And I still don't really grok it; wtf is a PMC?)

I didn’t know what a full plate carrier was either, but I could at least Google that one ;)


PMC is a private military contractor, also known as a mercenary. Think Blackwater, Wagner, that sort of company.

You're right that ATX == Austin, Texas and APD == Austin police department.


And of course, the police have no duty to protect individuals.


95% correct. "Planting a lot of trees" has been proven dead wrong because of simple math, plant matter decay, and increasing fucking forest fires. Instead, #teamtrees couldn't face reality while doing the disservice of misinforming and lying to millions of people for egotistical and aspirational post-factual rhetoric. https://youtu.be/gqht2bIQXIY


> "Planting a lot of trees" has been proven dead wrong because of simple math

Planting a lot of trees and letting them grow still looks like a much better plan than most of the other options and magical solutions. We just need to find the courage to try it seriously.

Forest fires are a crime 90% of the time. The idea that we can't do what we need to do because criminals exist is not acceptable. We just need to make this crime much more expensive, painful, and unprofitable for those people.


Forest fires are a natural state of forests, and required for health in many types of forest.


Then we will need to unblock and support the growth of the other types of forest.

Forests shouldn't be composed 99% of <10-year-old trees.

Do people know that there is a humid Mediterranean forest ecosystem?

Do people know that some Eucalyptus species evolved as part of humid ecosystems?


Yes of course people know this. But policymakers "flatten" knowledge when making policy. See James C. Scott's "Seeing Like A State" for discussion of how this plays out in development projects.

What we have got is exactly what we can expect.

1. https://en.wikipedia.org/wiki/Seeing_Like_a_State

Required reading for anyone who seeks to understand what happens at the scale of a city, or larger.


Oh man, delusions abound.

Trees grow - then plateau/stop growing. They don’t (and can’t!) meaningfully keep sequestering carbon.

And even if you covered the remaining non-tree covered portion of the earth (somehow) with trees, it still won’t solve the problem.


You could sequester it by storing plant matter in a biologically inert state underground: deep lime mine shafts, landfills of compressed biomatter.


Considering how energy intensive it is to gather, shred/pulp, transport, and pump said material - wouldn’t it be a lot better to do that with algae or the like though?

Trees are tough, and grow well in inhospitable and rough terrain. That's basically their value prop? Growing them on good crop land for something like this would be a waste, for instance.

They are good for building materials, and that is one of the longer forms of sequestration. Transport and processing is a big cost there already, and makes it dubious carbon neutrality wise currently.


This is logging business, not conservation. Making furniture when we need to make climate. What was the title of this thread?

> Growing them in good crop land for something like this would be a waste, for instance.

Obviously not, because environmental services are also needed


Not logging those trees is neither conservation or sequestration. But you seem to be stuck on that.

And those crops not only would sequester carbon faster most likely (if you buried them anyway), but be far more scalable and efficient in doing so.

I love trees, but square peg round hole.


> Not logging those trees is neither conservation...

not chopping down forests is, by definition, basic advice in every serious treatise known to man on conservation of biodiversity in forest ecosystems [1]

[1] The only exceptions are when it is a monoculture, or when we are dealing with alien species

Yes, I know that biodiversity can increase in ecotones. But to have a forest boundary, you need to have a forest first.

> ... or sequestration

trees take CO2 from the air and store it in wood, a structure that can potentially last for several hundreds or even thousands of years. Half of this structure is buried in the soil. Tons of carbon. If this is not the very definition of sequestration of CO2, I don't know what to call it.

A sofa can't grow.

> those crops not only would sequester carbon faster most likely (if you buried them)

What crops? Corn? Wrong. The softwood Paulownia? Wrong again. The Paulownia superfast thing is based on a lie.

> but be far more scalable and efficient in doing so.

Again wrong

> Everybody is delusory

Okay


> Trees grow - then plateau/stop growing

I can't understand this phrase, would you explain it?


If you, say, clear cut some land, or turn previously unproductive land into fertile land, you start from roughly zero carbon ‘at rest’.

The trees, when they grow, pull the vast majority of the carbon in their structure from the atmosphere. 99% or close to it.

At some point, the tree growth plateaus: it may have been slow the first year or two, then accelerated the next 10 years, then gradually tapers until it hits its maximum growth potential at that site (give or take). That may be in 20 years, or 100, but it will happen sooner or later.

During this time, trees die - and their carbon gets broken down and re-released in the form of methane or co2.

Roots, debris, etc. roughly equal the amount of plant matter above ground.

All of this rots when the trees die or burn (which happens on an ongoing basis), which releases the carbon back to the atmosphere in the form of CO, CO2, methane, etc., except in rare cases.

You can tell this is the end result because if you dig into these forests, they aren't sitting on thick beds of carbon. Typical soil depth is 3-6 ft before you hit bare mineral soil. So they're in equilibrium over time.

According to this random link I found [https://www2.nau.edu/~gaud/bio326/class/ecosyst/USFScarb.htm] in the US it averages out to about 17.7kg/m^2 across all forests.

So if you start with bare/no carbon and add trees, you’ll on average be able to temporarily store 71,614 kg of carbon per acre. Until the trees die, or burn.

If you harvest the trees and either bury them (where they can’t rot) or use them for construction, you can reset it and sequester the carbon. That has costs however.

If that isn’t done, then the carbon store is temporary - the carbon will eventually get back into the atmosphere, and the forest will have net zero carbon uptake at some point in the meantime.

The EPA recently estimated that the US releases 6.1 billion metric tons of CO2 and equivalents per year (as of 2021). If we estimate an average acre of forest plateaus at ~72 metric tons of CO2 stored, that means you'd need to plant ~85 million acres of forest to EVENTUALLY store that CO2 - per year. Just for the US alone. And hope none of those forests completely die off, fail to grow, or eventually burn.

There are approximately 800 million acres of existing forest in the US (a heavily forested nation). Accounting for 7.5% of all forested land in the world.

It's not clear there even IS that much land which could support a forest to last even a decade, let alone land that would support it for long enough for these forests to mature - or not just burn/die soon.

But if we look at farmland in the US (currently ~ 900 million acres) - https://www.nass.usda.gov/Publications/Highlights/2019/2017C...

We'd have converted 100% of our existing cropland to forests in about a decade, just trying to counteract each year's CO2-equivalent emissions.

And that is 40% of all land in the US!

Add together 800 million acres of existing forest, and 900 million acres of farmland and we're nearing almost all fertile land in the US total. Even if we could squeeze out another 10% AND keep all these forests going (while we starve as we have no farms/crops anymore?) we'd still be completely out of room in less than 2 decades, with all potentially tree bearing land covered. While we still continue to produce carbon.

And if these trees die or burn, any carbon they stored is back into the atmosphere.

Carbon-wise, trees are springs/capacitors, not 'sinks'. They help buffer spikes, but they don't solve the real problem: too much carbon coming from out-of-the-carbon-cycle geological stores and being injected directly into the atmosphere at massive scale.

To solve that problem, the carbon needs to get taken out of the carbon cycle again. And as long as those trees are above ground, that can't happen.


Trees literally won’t and can’t solve the problem. They’re a temporary buffer at best. That’s the problem.


I will choose the planet with the temporary buffer over the planets without any buffer; thanks. All the time


If you spend all your time building a limited, temporary buffer - that catches on fire in high heat no less! - instead of a more scalable solution, then you’ll end up burning while people who actually did the math go ‘wtf’.


In this time period, there are many people who are eager to cause mass suffering, murder, sabotage, and dissolution of civilization. With advances in technological organization, there is an increasing potential for coordinated autonomous attacks against infrastructure, essential services, groups, and individual persons in multiple theaters concurrently.

It would be nigh impossible to defend against 100k flying drones wielding cow-knockers programmed to swarm, loiter, break windows, and kill humans. Such could be dispensed from purpose-built intermodal containers, moved by ordinary freight logistics to attack multiple population centers simultaneously.

Code and binaries are shipped to tens of millions of servers by internal configuration management tools. Take Meta, Google, or Microsoft and turn them into AI exploit and worm fabrication at scale. While it may happen for only an hour or two, there's quite a bit that could happen, and the possibility of advanced persistent threats is real.

Target a demographic to alter their filter bubble to persuade them to engage in mass-casualty terrorism.

A hate group applies AI to create a designer virus lethal to a particular demographic.

Subtly disable water treatment facilities with an autonomous, Stuxnet-like attack.

