I'm considering writing a small Node server to proxy S3 downloads through, for the sake of tracking bandwidth usage and restricting access to the files. The files can be anything from images embedded on a website to a 1 GB CSV file, and there could potentially be many concurrent requests.

How would this scale? Would I run into memory issues even when I pipe the S3 stream to the response? I want to make sure I don't degrade the user experience too much. Would something like Go be better suited for this?
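Here's roughly what I have in mind, as a minimal sketch using the aws-sdk v2 `createReadStream` API. The bucket name, port, and URL-to-key mapping are placeholders, and the access check is omitted:

```js
// Minimal sketch: proxy S3 objects through Node while counting bytes served.
const http = require('http');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

http.createServer((req, res) => {
  // Hypothetical mapping of URL path to S3 key; auth/access checks would go here.
  const key = req.url.slice(1);
  const stream = s3.getObject({ Bucket: 'my-bucket', Key: key }).createReadStream();

  let bytes = 0;
  stream.on('data', (chunk) => { bytes += chunk.length; }); // bandwidth tracking
  stream.on('error', (err) => {
    res.statusCode = err.statusCode || 500; // e.g. 404 for a missing key
    res.end();
  });
  stream.on('end', () => console.log(`${key}: ${bytes} bytes served`));

  // pipe() applies backpressure, so each request only buffers small chunks in memory.
  stream.pipe(res);
}).listen(3000);
```

My understanding is that because `pipe()` handles backpressure, memory per request should stay bounded even for the 1 GB files, but I'd like to confirm that holds up under many concurrent downloads.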
Submitted October 03, 2017 at 05:48PM by jeffijoe