
Just an idea: maybe on the first fetch from S3, R2 should be allowed to delete the original object from S3 too, so that eventually we're left with two mutually exclusive sets of files (and no double storage).


I thought about that too. It could be a good solution, because the challenge otherwise is going to be listing all objects in the S3 bucket and comparing them to what's in the R2 bucket, right?
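For what it's worth, that comparison is doable because R2 exposes an S3-compatible API, so boto3 works against it with a custom endpoint. A minimal sketch (bucket names, the account ID, and credentials are placeholders, not real values):

```python
def list_keys(client, bucket):
    """Collect every object key in a bucket via paginated ListObjectsV2."""
    keys = set()
    paginator = client.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            keys.add(obj["Key"])
    return keys

if __name__ == "__main__":
    import boto3  # imported here so the helper above stays dependency-free

    s3 = boto3.client("s3")
    r2 = boto3.client(
        "s3",
        # Placeholder R2 endpoint; fill in your own account ID and keys.
        endpoint_url="https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
        aws_access_key_id="<R2_KEY>",
        aws_secret_access_key="<R2_SECRET>",
    )
    missing = list_keys(s3, "source-bucket") - list_keys(r2, "mirror-bucket")
    print(f"{len(missing)} objects not yet in R2")
```

The set difference tells you what still needs to be pulled over, but it does mean paging through every key in both buckets, which is exactly the cost the delete-on-first-fetch idea avoids.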


Based on what I’ve seen in this convo:

- Set up the S3 mirror into R2.

- Migrate your code to read from R2.

- Set up SQS to populate with S3 Create events. SQS listeners just make a GET request to R2 for that file.

- Generate S3 events and populate SQS by running List operations or by abusing S3 Lifecycle Management.

- Let it process.

- Switch writes to R2.

This all assumes you can't delete from S3 until R2 is fully populated. Depending on the application, you could switch writes over to R2 at a different step, and possibly also delete the S3 file in the SQS processor.
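The SQS-processor steps above can be sketched roughly like this. Each S3 ObjectCreated event triggers a read through R2 (which, with the mirror set up, makes R2 pull the object from S3 on a miss), and the optional delete-at-the-end variant is a flag. The queue URL, bucket names, and R2 endpoint are placeholders:

```python
import json


def keys_from_event(body: str) -> list[str]:
    """Extract object keys from an S3 event notification message body."""
    event = json.loads(body)
    return [rec["s3"]["object"]["key"] for rec in event.get("Records", [])]


def process_batch(sqs, r2, s3, queue_url, r2_bucket, s3_bucket,
                  delete_source=False):
    """Drain one batch of S3 events, warming each object into R2."""
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        for key in keys_from_event(msg["Body"]):
            # Reading through R2 is what pulls the object across.
            r2.get_object(Bucket=r2_bucket, Key=key)
            if delete_source:
                # Only safe once you no longer need the S3 copy.
                s3.delete_object(Bucket=s3_bucket, Key=key)
        sqs.delete_message(QueueUrl=queue_url,
                          ReceiptHandle=msg["ReceiptHandle"])


if __name__ == "__main__":
    import boto3  # clients built here; the functions above take them as args

    sqs = boto3.client("sqs")
    s3 = boto3.client("s3")
    r2 = boto3.client(
        "s3",
        endpoint_url="https://<ACCOUNT_ID>.r2.cloudflarestorage.com")
    process_batch(sqs, r2, s3,
                  "https://sqs.us-east-1.amazonaws.com/<ACCT>/migrate-queue",
                  "mirror-bucket", "source-bucket")
```

Run in a loop, this converges on its own: backfill events (from List operations or lifecycle tricks) and live write events both land on the same queue and get the same treatment.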



