Show HN: S3mini – Tiny and fast S3-compatible client, no-deps, edge-ready

arianvanp

libcurl also has AWS auth with --aws-sigv4, which gives you a fully compatible S3 client without installing anything! (You probably already have curl installed.)
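
For reference, a plain curl call looks roughly like this (bucket, region, and credentials are placeholders; the flag needs curl 7.75.0 or newer):

  curl --aws-sigv4 "aws:amz:us-east-1:s3" \
       --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
       "https://example-bucket.s3.us-east-1.amazonaws.com/some-object.txt"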

impulser_

Yeah, but that won't work on Cloudflare, Vercel, or any other serverless environment, because at most you only have access to Node APIs (there's no shell to run curl in).

leerob

Should work on Vercel; you have access to the full Node.js API in functions.

akouri

This is awesome! Been waiting for something like this to replace the bloated SDK Amazon provides. Important question: is there a pathway to getting signed URLs?

nikeee

I've built an S3 client with goals similar to TFA's, but it supports pre-signing:

https://github.com/nikeee/lean-s3

Pre-signing is about 30 times faster than the AWS SDK and is not async.

You can read about why it looks like it does here: https://github.com/nikeee/lean-s3/blob/main/DESIGN_DECISIONS...

e1g

FYI, you can add browser support by using noble-hashes[1] for SHA256/HMAC - it's a well-done library, and gives you performance that is indistinguishable from native crypto on any scale relevant to S3 operations. We use it for our in-house S3 client.

[1] https://github.com/paulmillr/noble-hashes
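
A rough sketch of what that swap could look like, assuming the @noble/hashes v1 entry points (the secret and payload below are made-up placeholders):

  import { sha256 } from '@noble/hashes/sha256';
  import { hmac } from '@noble/hashes/hmac';
  import { bytesToHex } from '@noble/hashes/utils';

  const utf8 = (s: string) => new TextEncoder().encode(s);

  // hex payload hash, as sent in the x-amz-content-sha256 header
  const payloadHash = bytesToHex(sha256(utf8('hello world')));

  // one step of the SigV4 signing-key derivation: HMAC-SHA256(key, msg)
  const secret = 'EXAMPLE_SECRET_KEY'; // placeholder credential
  const dateKey = hmac(sha256, utf8('AWS4' + secret), utf8('20250101'));

  console.log(payloadHash, bytesToHex(dateKey));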

continuational

SHA256 and HMAC are widely available in the browser APIs: https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypt...
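
Right, a minimal sketch with SubtleCrypto (available in browsers, Workers, and modern Node; the secret and date here are placeholders):

  const enc = new TextEncoder();
  const toHex = (buf: ArrayBuffer) =>
    [...new Uint8Array(buf)].map(b => b.toString(16).padStart(2, '0')).join('');

  // SHA-256 digest, e.g. for the x-amz-content-sha256 header
  const digest = await crypto.subtle.digest('SHA-256', enc.encode('hello world'));

  // HMAC-SHA256, e.g. one step of the SigV4 signing-key derivation
  const key = await crypto.subtle.importKey(
    'raw', enc.encode('AWS4' + 'EXAMPLE_SECRET_KEY'),
    { name: 'HMAC', hash: 'SHA-256' }, false, ['sign']);
  const mac = await crypto.subtle.sign('HMAC', key, enc.encode('20250101'));

  console.log(toHex(digest), toHex(mac));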

neon_me

For now, unfortunately, no signed URLs are supported. It wasn't my focus (use case), but if you find a simple/minimalistic way to implement it, I can help you integrate it.

From my helicopter perspective, it adds extra complexity and size, which might make it a better fit for a separate fork/project?

mannyv

Signed URLs are great because they let you give third parties access to a file without them having to authenticate against AWS.

Our primary use case is browser-based uploads. You don't want people uploading anything and everything, like a WordPress uploads folder. And it's timed, so you don't have to worry about someone recycling the URL.
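
For illustration, the usual shape of this with the official SDK (bucket, key, and file contents are placeholders; per the author above, s3mini itself doesn't do presigning yet):

  import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
  import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

  const s3 = new S3Client({ region: 'us-east-1' });

  // server side: a URL that can only PUT this one key, and only for 5 minutes
  const url = await getSignedUrl(
    s3,
    new PutObjectCommand({ Bucket: 'example-bucket', Key: 'uploads/avatar.png' }),
    { expiresIn: 300 },
  );

  // browser side: plain fetch, no AWS credentials involved
  await fetch(url, { method: 'PUT', body: new Blob(['...file contents...']) });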

jmogly

I use presigned URLs as part of a federation layer on top of an S3 bucket. Users make authenticated requests to my API, which checks their permissions (whether they have access to read/write the specified slice of the S3 bucket); my API then sends back a presigned URL allowing read/write/delete on that specific portion of the bucket.
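
Roughly like this (the prefix rule and bucket name are hypothetical, and the presigning uses the official SDK):

  import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
  import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

  const s3 = new S3Client({ region: 'us-east-1' });

  // hypothetical rule: each user may only touch their own prefix
  const mayAccess = (userId: string, key: string) =>
    key.startsWith(`tenants/${userId}/`);

  async function presignRead(userId: string, key: string): Promise<string> {
    if (!mayAccess(userId, key)) throw new Error('forbidden');
    // short expiry: the caller is expected to use the URL immediately
    return getSignedUrl(
      s3,
      new GetObjectCommand({ Bucket: 'example-bucket', Key: key }),
      { expiresIn: 60 },
    );
  }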

ecshafer

You can just use S3 via REST calls if you don't like their SDK.
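
A minimal sketch of what that looks like over plain fetch, assuming a public example bucket; private buckets additionally need SigV4-signed headers, which is exactly the part clients like s3mini implement:

  // ListObjectsV2 via the bare S3 REST API; the response is XML
  const res = await fetch(
    'https://example-public-bucket.s3.us-east-1.amazonaws.com/?list-type=2&prefix=images/',
  );
  console.log(res.status, await res.text());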

linotype

This looks slick.

What I would also love to see is a simple, single-binary S3 server alternative to Minio. Maybe with a small built-in UI similar to the DuckDB UI.

koito17

> What I would also love to see is a simple, single binary S3 server alternative to Minio

Garage[1] lacks a web UI but I believe it meets your requirements. It's an S3 implementation that compiles to a single static binary, and it's specifically designed for use cases where nodes do not necessarily have identical hardware (i.e. different CPUs, different RAM, different storage sizes, etc.). Overall, Garage is my go-to solution for object storage at "home server scale" and for quickly setting up a real S3 server.

There seems to be an unofficial Web UI[2] for Garage, but you're no longer running a single binary if you use this. Not as convenient as a built-in web UI.

[1] https://garagehq.deuxfleurs.fr/

[2] https://github.com/khairul169/garage-webui

everfrustrated

Presumably smaller and quicker because it's not doing any checksumming

neon_me

Does it make sense, or should that be optional?

tom1337

Checksumming does make sense because it ensures that the file you've transferred is complete and what was expected. If the checksum of the file you've downloaded differs from the one the server gave you, you should not process the file further and should throw an error (the worst case would probably be a man-in-the-middle attack; less severe cases being packet loss, I guess).

supriyo-biswas

> Checksumming does make sense because it ensures that the file you've transferred is complete and what was expected.

TCP already has a checksum that catches corruption in transit, and TLS protects against MITM.

I've always found this aspect of S3's design questionable. Sending both a Content-MD5 AND an x-amz-content-sha256 header and taking up gobs of compute in the process, sheesh...

It's also part of the reason why running minio in its single node single drive mode is a resource hog.
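
For concreteness, the two headers in question, computed over the same payload (Node sketch; the body is a placeholder):

  import { createHash } from 'node:crypto';

  const body = Buffer.from('hello world');

  // Content-MD5: base64 of the raw MD5 digest of the payload
  const contentMd5 = createHash('md5').update(body).digest('base64');

  // x-amz-content-sha256: hex SHA-256 of the payload, which SigV4 then
  // hashes again as part of the canonical request
  const contentSha256 = createHash('sha256').update(body).digest('hex');

  console.log({ 'Content-MD5': contentMd5, 'x-amz-content-sha256': contentSha256 });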

vbezhenar

TLS ensures that stream was not altered. Any further checksums are redundant.

neon_me

Yes, you are right!

On the other hand, S3 uses checksums only to verify the expected upload (on the write from client -> server) ... and surprisingly you can do that in parallel after the upload, by checking the MD5 hash of the blob against the ETag (*with some caveats).
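
Roughly like this (Node sketch; the caveats are real: multipart uploads and KMS-encrypted objects don't expose an MD5 ETag):

  import { createHash } from 'node:crypto';

  // after a single-part, non-KMS PutObject, the ETag is the quoted hex MD5 of the body
  const body = Buffer.from('hello world');
  const expectedEtag = `"${createHash('md5').update(body).digest('hex')}"`;

  // compare against the ETag returned by PutObject / HeadObject:
  // if (etagFromResponse !== expectedEtag) throw new Error('upload corrupted');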

0x1ceb00da

You need the checksum only if the file is big and you're downloading it to disk, or if you're paranoid that some malware with root access might be altering the contents of your memory.

dev_l1x_be

For Node.

These are nice projects. I had a few rounds with Rust S3 libraries, and a simple low- or no-dependency client is much needed. The problem is that you start to support certain features (async, HTTP/2, etc.) and your nice no-dependency project starts to grow.

pier25

For JS.

> It runs on Node, Bun, Cloudflare Workers, and other edge platforms

spott

But not in the browser… because it depends on Node.js APIs.

pier25

Cloudflare Workers don't use any Node APIs, AFAIK.

cosmotic

> https://raw.githubusercontent.com/good-lly/s3mini/dev/perfor...

It gets slower as the instance gets faster? I'm looking at ops/sec and time/op. How am I misreading this?

xrendan

I read that as the size of the file it's transferring, so each operation would be bigger and therefore slower.

math-ias

It measures PutObject[0] performance across different object sizes (1, 8, 100MiB)[1]. Seems to be an odd screenshot of text in the terminal.

[0] https://github.com/good-lly/s3mini/blob/30a751cc866855f783a1... [1] https://github.com/good-lly/s3mini/blob/30a751cc866855f783a1...

cosmotic

Oh, I see my mistake. Those are payload sizes, not instance sizes, in the heading for each table.

tommoor

Interesting project, though it's a little amusing that you announced this before actually confirming it works with AWS?

neon_me

Personally, I don't like AWS that much. I tried to set it up, but found it "terribly tedious", dropped the idea, and instead focused on other platforms.

Right now, I am testing/configuring Ceph ... but it's open source! Every talented weirdo with free time is welcome to contribute!

leansensei

Also try out Garage.

zikani_03

Good to see this mentioned. We are considering running it for some things internally, along with Harbor. The advertised small resource footprint is compelling.

What's your experience running it?

nodesocket

Somewhat related, I just came across s5cmd[1] which is mainly focused on performance and fast upload/download and sync of s3 buckets.

> 32x faster than s3cmd and 12x faster than aws-cli. For downloads, s5cmd can saturate a 40Gbps link (~4.3 GB/s), whereas s3cmd and aws-cli can only reach 85 MB/s and 375 MB/s respectively.

[1] https://github.com/peak/s5cmd

uncircle

I prefer s5cmd as well because it has a better CLI interface than s3cmd, especially if you need to talk with non-AWS S3-compatible servers. It does few things and does them well, whereas s3cmd is a tool with a billion options, configuration files, badly documented env variables, and its default mode of operation assumes you are talking with AWS.
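
Typical invocations against a non-AWS endpoint look like this (the endpoint and bucket are placeholders; credentials come from the usual AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables):

  s5cmd --endpoint-url https://minio.example.internal ls s3://backups/
  s5cmd --endpoint-url https://minio.example.internal cp ./dump.sql s3://backups/2025/dump.sql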

rsync

s5cmd is built into the rsync.net platform. See:

https://news.ycombinator.com/item?id=44248372

brendanashworth

How does this compare to obstore? [1]

[1] https://developmentseed.org/obstore/latest/

_1

carlio

Minio is an S3-compatible object store; the linked s3mini is just a client for S3-compatible stores.

arbll

No, this is an S3-compatible client; Minio is an S3-compatible backend.

prmoustache

The minio project provides both.

shortformblog

This is good to have. A few months ago I was testing an S3 alternative but ran into issues getting it to work. It turned out that AWS had made changes to the tool that had the effect of blocking non-first-party clients. Just sheer chance on my end, but I imagine that was infuriating for folks who have to rely on that client. There is an obvious need for a compatible client like this that AWS doesn't manage.

busymom0

Does this allow generating signed URLs for uploads, with a size limit and a name check?
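
Not in s3mini as of this thread, but for context, S3 itself covers this with presigned POST policies; a sketch with the official SDK, where the bucket, key prefix, and 5 MB cap are made-up examples:

  import { S3Client } from '@aws-sdk/client-s3';
  import { createPresignedPost } from '@aws-sdk/s3-presigned-post';

  const s3 = new S3Client({ region: 'us-east-1' });

  const { url, fields } = await createPresignedPost(s3, {
    Bucket: 'example-bucket',
    Key: 'uploads/report.pdf',                      // name check: exact key...
    Conditions: [
      ['starts-with', '$key', 'uploads/'],          // ...or constrain the prefix
      ['content-length-range', 0, 5 * 1024 * 1024], // size limit: 0-5 MB
    ],
    Expires: 300, // seconds
  });

  // the browser then submits a multipart/form-data POST of `fields` + the file to `url`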