I'd love to fill that in! If anyone would like a comparison, please add links in this thread and I'll reply. Later, I can collect it into a published page.
> Dex is NOT a user-management system, but acts as a portal to other identity providers through "connectors."
> ORY Hydra is not an identity provider (user sign up, user log in, password reset flow), but connects to your existing identity provider through a consent app.
AuthN IS all the things that Dex and Hydra say they are not. I'll bet it could integrate with both given a bit of investment, e.g. by satisfying the "consent app" expectations.
AuthN does use as much of the OpenID Connect protocol as I could manage, though. I started there and streamlined down to optimize for API-driven interactions rather than the redirect-driven interactions that are common with OAuth and OIDC.
I've just started dabbling on a small project and would be interested to understand how features overlap and differences in license/distribution model.
Auth0 is top-notch SaaS. I have only good things to say about their product.
Aside from being OSS, one major difference is that Keratin AuthN is purely an API. It's optimized for customization so that it will fit with any bespoke (secure) UX you want to provide. I found Auth0's API to be something of an after-thought, second to their hosted/branded/templatable pages.
> Dex is NOT a user-management system, but acts as a portal to other identity providers through "connectors." This lets dex defer authentication to LDAP servers, SAML providers, or established identity providers like GitHub, Google, and Active Directory.
It seems like AuthN IS a user management system. So that's a big difference right there.
Hydra and Dex both support OAuth and OpenID Connect. This apparently supports neither, but comes with its own JWT structure.
With inbound federation that shouldn't be much of a problem, but with outbound federation you'll have some very difficult questions to answer (especially because all major identity solutions are pretty much OIDC-centric these days).
Yeah, I don't expect this JWT scheme to become an adopted standard. It's been streamlined from OIDC for the narrow use case of working tightly with a trusted app.
Adding support for inbound federation is on the roadmap. Support for outbound federation using OIDC isn't out of the question either, but I don't yet see the motivation.
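For anyone unfamiliar with the structure being discussed: any JWT, whether standard OIDC or a custom scheme like AuthN's, is three base64url segments (header, claims, signature). A minimal Python sketch of reading the claims, purely illustrative and deliberately skipping signature verification:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the payload (claims) segment of a JWT WITHOUT verifying it."""
    header_b64, payload_b64, _signature = token.split(".")
    # base64url strips padding; add it back before decoding
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

In real use the signature must be verified against the issuer's public key; this only shows why a trusted app can treat the token as a compact, self-describing credential.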
I've taken a real liking to Fedora. It actually made me switch from my custom Arch Linux setup with i3 to an almost-default Fedora setup with GNOME. Worked flawlessly on my old Dell XPS 13 Developer Edition and my new one.
Sadly, after the switch to Wayland my pen tablet doesn't get picked up by most applications I use daily, including Chrome, Firefox, VS Code, Krita, Blender, etc. It's a real shame, as GNOME/Wayland handles switching the tablet between monitors and multi-monitor setups really well! Running GNOME on Xorg also isn't an option; it tends to crash a lot now...
It's been crashing for me since the switch to Wayland as the default. I don't really mind; this stuff needs to get adopted. I tend to use my keyboard way more than my tablet these days.
Basically, it all works fine (I've verified it myself). I've been using a YubiKey for both SSH and challenge-response for quite a while now. A few days ago I started messing with U2F as well. The worst that happens is gpg-agent "gets confused" after U2F auth, and you have to remove/re-insert your key and/or re-enter your GPG PIN on next use (cf. linked thread).
Next up for me is figuring out how to disable U2F on my Nanos and use separate U2F-only keys for that (without any conflicts or issues, hopefully).
N.B.: I don't use the OTP functionality at all currently. I'll probably try out the PIV stuff soon as well, and I expect no conflicts or issues with the existing stuff (GPG, C/R, etc.) I have set up.
I've never really been able to figure out what a good strategy is for object storage organization. Do you create a bucket per application instance? user? organization?
Right now I'm playing with a new service and came up with this, which is probably over-engineered:
Here is an actual object key including the bucket:
7dcdb229600e4467a2714866e0d406f6/85/26c/c0271374067b5db832adb7909a7/bbda55db15266f7ce2284d8f5f66fc85e495e2b12265ef87537237ad5e2658b24c081970332417f60e5fc352ae9b8c1031398c02ecde03eb29af2d3c8eda8a4b/y18.gif
Given the file's uuid is aabbbcccccccccccccccccccccccccccccc
for original images:
{{organizations_uuid as bucket}}/aa/bbb/cccccccccccccccccccccccccccccc/{{sha512sum}}/{{originalfilename}}
And for all derivatives of it:
{{organizations_uuid as bucket}}/aa/bbb/cccccccccccccccccccccccccccccc/derived/{{this file's uuid}}_{(unknown)}
My thinking was that:
- Using the organization's UUID (an organization can have multiple users) as the bucket makes backing up per organization, and having on-prem deployments, easier.
- Encoding the file's UUID in the object name makes it easy to identify, and splitting that UUID into 2/3/rest helps spread objects across key prefixes.
- Encoding the file's sha512sum in the key name enables checking the file's integrity even without a database.
- Putting all derived files under derived/, but with the original file's UUID prefix, makes the link between them clear.
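To make the scheme above concrete, here's a small Python sketch of the key layout (function names and the 2/3/rest split sizes are taken from the example, nothing here is an official convention):

```python
import hashlib
import uuid

def original_key(file_uuid: str, data: bytes, original_name: str) -> str:
    """Build the object key for an original upload: aa/bbb/rest/<sha512>/<name>."""
    hexid = uuid.UUID(file_uuid).hex            # 32 hex chars, dashes stripped
    digest = hashlib.sha512(data).hexdigest()   # integrity check without a DB
    return f"{hexid[:2]}/{hexid[2:5]}/{hexid[5:]}/{digest}/{original_name}"

def derived_key(file_uuid: str, derivative_uuid: str) -> str:
    """Derivatives live under the original file's prefix, in a derived/ folder."""
    hexid = uuid.UUID(file_uuid).hex
    return f"{hexid[:2]}/{hexid[2:5]}/{hexid[5:]}/derived/{derivative_uuid}"
```

The bucket (the organization's UUID) would be passed separately to the storage client; only the key inside the bucket is built here.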
I know this will result in long object names, as shown in the actual example above, but they do encode a considerable amount of information.
What parts of this are considered bad practice? Do you have any real-world examples of other strategies? They seem hard to come by.
Perhaps I'm missing something about your use case, but I only create buckets per application, or sometimes file category (videos, profile images, whatever).
I don't have any real use case for a bucket per org other than easy bucket mirroring, backups, and maybe migration from shared hosting to on-premises.
I hadn't thought of using different prefixes for different media usages. We, for example, would then use thumbnail/originating_file_uuid.png and poster/originating_file_uuid.png.
Correct, I have no need for the original filename in most cases. If I did want this info, for example if I was building a file-browser type thing (à la Dropbox), then sure, I'd keep that in the DB.
Personally, I'm uploading directly from the browser to S3 using presigned URLs. All files get uploaded to a tmp/ prefix in my bucket, which is configured so that all files under tmp/ are deleted after 1 day (to remove any unsaved uploads). When a form is submitted, I pass the key of the temporary file in the form (via e.g. `<input type="hidden" name="s3_key">`) and create the associated database record. I then move the file from its temporary location to its permanent one upon saving said record.
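A minimal Python sketch of the promote-from-tmp step described above (the prefix names and the key shape are made up for illustration; real code would also handle errors and missing objects):

```python
def permanent_key(tmp_key: str, record_id: int) -> str:
    """Map a temporary upload key like 'tmp/<token>/<name>' to its final home."""
    if not tmp_key.startswith("tmp/"):
        raise ValueError("expected a key under the tmp/ prefix")
    return f"uploads/{record_id}/{tmp_key[len('tmp/'):]}"

# S3 has no rename, so with boto3 the "move" is a copy followed by a delete:
#   s3.copy_object(Bucket=bucket, Key=permanent_key(tmp_key, record.id),
#                  CopySource={"Bucket": bucket, "Key": tmp_key})
#   s3.delete_object(Bucket=bucket, Key=tmp_key)
```

The 1-day cleanup itself is just an S3 lifecycle expiration rule scoped to the tmp/ prefix, so nothing server-side has to garbage-collect abandoned uploads.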
Feel free to email me to continue this discussion - email address is on my profile.
> The problem with that is that the originally uploaded filename is lost. At least without storing it in a separate database.
Sure, but that's a tradeoff nearly every website accepts because they just need the image itself. If you do want to preserve the original filename, is there a reason for not just keeping it in a database?
I'd like to keep these systems as decoupled as possible, or at least have some meaningful information without a dependency on an external datastore. This might just be me being paranoid and overthinking it, but after dealing with a nasty monolith of an application for the last couple of years, and finally convincing the rest of the team that we need to change if we want to be able to expand, I want to do it right.
Thank you, I was wondering about the reason Istio was created. It also sheds some light on why the CNCF brought on both Istio and Envoy. Still wondering why both Envoy and Linkerd are on their page as service mesh and not just one; they have one project for every other category.
> why both Envoy and Linkerd are on their page as service mesh
Linkerd is written in Scala. There is a class of people who avoid the JVM, sometimes for good reasons. For one, Envoy is more resource-efficient. See https://github.com/envoyproxy/envoy/issues/99.
Same here. I was sceptical about code generation and the whole protobuf-as-a-base thing, but I'm really liking it. Especially when you throw OpenTracing into the mix.
I've been jumping between Hydra and Dex for the last couple of weeks. On the one hand, I like the tight focus Hydra has, with the exception of the Warden API. On the other hand, it is really involved to simply set up a working environment that includes Hydra ready to go. It would be nice to do all the token, client, and policy setup with a simple `docker-compose up`.
Dex, for example, has a dev mode that does that for you. The downside of Dex is that you cannot use your own backend without forking the project, writing your own login page, and creating a custom connector for your existing login system.
Thank you for the valuable feedback! The dev mode is indeed a very good idea - I'll probably spin up another docker-compose example with all the default things set up. Would that make it easier?
Yes, definitely. Although you could build a new image based off the original one, add a bash script that sets it all up for you, and overwrite the entrypoint, I never like that solution. It should be something the software supports out of the box, as it plays into one of the strengths of Docker: easily spinning up and taking down instances.
I've deployed almost exactly this on a new Kubernetes cluster running CoreOS exclusively, with 4 nodes, 3 masters, and 3 etcd instances divided over 4 physical machines.
Things I haven't implemented yet are deploying the node exporters via DaemonSets and the Prometheus config through a ConfigMap. Those are currently done through a cloud-config systemd override and a Gluster mount.
A couple of things I ran into: the kubelets running the Kubernetes master components need access to the API server's SSL certificates, otherwise Prometheus cannot scrape them over HTTPS. And I'm still very confused about a seemingly simple thing: getting a per-request response time query. This is what I'm using now:
Two things: I'm confused about what I should set the range vector duration to (currently `[5m]`). And how can I get the response time of an individual request? We've had a request take 30 seconds, but that spike only showed up when viewing the graph over a 3-hour time period. When viewing it over 6 hours or 1 hour it simply will not show up, even though it happened in the last hour.
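For comparison, a common PromQL pattern for request latency looks like this (the metric name `http_request_duration_seconds_bucket` and the `handler` label are just the usual histogram convention and may not match your instrumentation):

```
histogram_quantile(0.99,
  sum(rate(http_request_duration_seconds_bucket[5m])) by (le, handler))
```

Note that `rate()`-style queries average over the range vector, and the graphing step further downsamples, so a single 30-second outlier tends to get smoothed away; a high quantile (or a dedicated max gauge) is usually a better way to surface individual spikes than an average.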
Something I really do love is the ability to configure services for scraping by setting annotations on them. Works great when slowly transitioning your services to Prometheus-style metrics!
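For anyone looking for that annotation-based setup: it is typically done like this on a Service (the `prometheus.io/*` annotation names are the convention used by the common example Prometheus scrape configs, not a built-in Kubernetes feature; the port and path here are placeholders):

```
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9102"
    prometheus.io/path: "/metrics"
```

The Prometheus config then uses Kubernetes service discovery plus relabeling rules keyed off these annotations to decide what to scrape.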
If anybody has some pointers about querying and visualizing with Grafana and Prometheus, that would be great!
From the list given by @imaginenore the major one for me is CineformHD support. We work on a lot of VR stuff and there are quite some GoPro users out there that generate material in this codec. Not having to transcode to an intermediate is nice. Also hardware acceleration is always good to have.
FYI, the phrase "quite some users" is not uncommon among (continental European?) non-native speakers of English, but it's not correct.
> In the British National Corpus, for example, most examples of "quite some" are "quite some time"; others are "quite some distance". If you replace "quite some" with "a considerable", the meaning should be clear.
> If the sentence does not make sense when you do that, it's likely that "quite some" is not being used properly.
I was wondering how you would test web apps written in Rust, in particular the API you're building. Right now, as my first Rust project, I'm trying to implement an HTTP protocol with hyper, but I'm not really sure how to test it properly beyond just creating a client connection and throwing requests at it. BTW, my final idea is to have it as a library that gives you a handler to use with whatever Rust framework you choose. I'm not really sure how to do that either, though; I'm using a lot of hyper stuff.