
Upload to S3 bucket

edited July 2019 in SecuritySpy
Hi Ben - have you ever considered extending the upload function to include support for Amazon S3 buckets?
I have written a script that periodically takes images from a folder and uploads them to S3, but it would be nice to have it baked into the app.
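
For reference, my script boils down to something like the following (the bucket name, folder and interval here are placeholders):

    #!/bin/bash
    # Periodically push new/changed images from a local folder up to S3.
    # Bucket name, folder and interval are placeholders - adjust to taste.
    WATCH_DIR="$HOME/CameraImages"
    BUCKET="s3://my-camera-bucket"

    while true; do
        aws s3 sync "$WATCH_DIR" "$BUCKET"   # only uploads what's new or changed
        sleep 300                            # every 5 minutes
    done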

(I use Amazon SES for my emails, as they give me 200 free emails a day - plenty for my installation and my parents'.)

Comments

  • Thanks for the suggestion. I think this would be a useful addition, I'll see if we can add it in the future.
  • New here. Really like SecuritySpy in the 48 hours I've played with it. Just adding my voice to the desire for S3 integration. There's another thread on it here: https://www.bensoftware.com/forum/discussion/766/s3-server-upload/p1

    Thanks
  • I definitely need this feature too!
  • I'd say if you opted to support S3, you'd most likely get requests to support others like Google Drive, Azure, Dropbox, OneDrive, Backblaze, etc.

    You could use something like s3sync?

    I've got a Google Drive setup that syncs any motion-detection recordings straight away.
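
    For instance, something along these lines would do it (a sketch using rclone with a pre-configured "gdrive" remote - the remote name and paths are placeholders - run periodically or from a watcher):

        # Copy recent recordings up to Google Drive; only new files are transferred.
        rclone copy "/Volumes/SecSpyVideos" "gdrive:SecuritySpy" --max-age 24h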
  • OK, by popular demand, we have now added S3 support in the latest beta version of SecuritySpy (currently 5.1.1b13).

    Could you all please test this and report back? Thanks.
  • Thanks Ben!
  • Thanks for doing this.

    1. I first created a bucket in us-east-1 (since the UI in SecuritySpy prompts for a bucket name).
    2. I created an IAM user with PutObject permissions for that bucket (see the policy sketch at the end of this comment).
    3. I entered the bucket name and access key/secret key and pressed Test.

    Observations:
    1. It began to install the Mac development tools (though not in the foreground). This is fine.
    2. It displays an error saying "Error 1580,88799 make_bucket failed: s3://bucketname An error occurred (AccessDenied) when calling the CreateBucket operation: Access Denied"

    I checked ~/.aws/config and saw the default region was us-east-2. I changed that to us-east-1 (where my bucket exists) and Test still shows the same error.

    I granted my user CreateBucket and appended a "1" to my bucket name. SecuritySpy created the bucket successfully, the test passed, and there is a dummy file in my bucket. I then dropped the "1" (so the name matched the existing bucket I had created in the console) and the test also passed. So it looks like the test function needs CreateBucket even if it doesn't actually create a bucket. I assume it tries to create it and passes if the bucket already exists, but it doesn't check for the bucket's existence before trying to create it (a check that would also have failed, since I did not grant the IAM user ListAllMyBuckets). This is more permissions than I'd normally offer.

    After setting this up, my motion files uploaded to S3 as expected.
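
    For anyone following along, the PutObject-only policy from step 2 can be attached with a single CLI call, roughly like this (user name, policy name and bucket name are placeholders):

        aws iam put-user-policy \
          --user-name securityspy-uploader \
          --policy-name s3-putobject-only \
          --policy-document '{
            "Version": "2012-10-17",
            "Statement": [{
              "Effect": "Allow",
              "Action": "s3:PutObject",
              "Resource": "arn:aws:s3:::my-camera-bucket/*"
            }]
          }'

    To satisfy the Test button in this beta, an s3:CreateBucket statement on "arn:aws:s3:::my-camera-bucket" had to be added as well.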
  • Thanks for the feedback. Based on this, I've made the following tweaks:

    SecuritySpy will now attempt the upload first, before attempting to create the bucket. So if the bucket already exists, this should now succeed without requiring the CreateBucket permission for the user.

    Only if the upload fails with the "NoSuchBucket" message will SecuritySpy then attempt to create the bucket.
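
    In shell terms, the new order of operations is roughly this (a sketch, not the actual code; the names are placeholders):

        # Try the upload first; only create the bucket if S3 reports it missing.
        BUCKET="my-camera-bucket"        # placeholder
        FILE="/path/to/capture.m4v"      # placeholder
        if ! out=$(aws s3 cp "$FILE" "s3://$BUCKET/" 2>&1); then
            if echo "$out" | grep -q "NoSuchBucket"; then
                aws s3 mb "s3://$BUCKET" && aws s3 cp "$FILE" "s3://$BUCKET/"
            fi
        fi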

    I've also made the default region selection much more accurate, as it's now based on the Mac's latitude and longitude (previously it was based on the Mac's time zone, which isn't very accurate). SecuritySpy creates the config file when the AWS CLI tool is installed initially, or if the config file does not exist when an upload is initiated. So, if you delete the config file and attempt an upload, SecuritySpy will recreate the config file with the closest region - it would be interesting to see what it now chooses.
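
    For reference, the file in question (~/.aws/config) is typically just a couple of lines, so changing the region by hand is a one-line edit, e.g.:

        [default]
        region = us-east-1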

    However, it does seem that the region in the config file does not need to match the region of the bucket for uploads to work, so I don't think this is too critical.

    This is all in a new beta (5.1.1b14), so if you can re-test and report back, that would be great.
  • Here's what I did:

    1. With the Nov 1 beta, I deleted my existing S3 connection.
    2. Deleted ~/.aws/config and credentials
    3. In AWS IAM, rolled back the permissions so my IAM user only has PutObject to that existing bucket.
    4. Deleted all contents from that bucket.
    5. Installed Nov 2nd beta
    6. Reconfigured S3, and the Test passed with the existing bucket (in us-east-1) and just the PutObject permission.

    It re-created my config file and used us-east-2, even though my Mac is 30 miles from us-east-1. I have Location Services enabled, but SecuritySpy does not show as an allowed app for Location Services under Security & Privacy.

    7. I granted the AWS IAM user the CreateBucket permission and it successfully created a new bucket (by pressing the Test button) in us-east-2.

    Tested saving 10-second snapshots with and without folder name and date prefixes, and both work great. I don't think the region issue is a deal-breaker, since it can now work with existing buckets and low permissions.

    Great work with the quick turnaround.
  • Ben
    edited November 2019
    Great to hear it's working well!

    There are ways to get a pretty good location without using Location Services - specifically the "closest city" location as set in the Date & Time system preference. This gives a pretty accurate location as long as you are in the vicinity of a major city (which most people are).

    As for the locations of the endpoints, currently I'm just taking the latitude/longitude of the centre of the state containing the endpoint, as specified on the AWS Service Endpoints page, so this could be where the inaccuracy lies (Virginia for us-east-1 and Ohio for us-east-2). As far as I can tell, Amazon don't publish exactly where their datacentres are - if I had this information I could make the automatic location more accurate. Do you know if this information has been made public?
  • Ah. Makes sense. I just checked and my city was New York. I've since changed that. I think the method you are using is fine -- and end-users can easily change it as well.

    In the US, the regions just go by state names in the AWS console. That is probably fine for the logic you are using.
  • Nice addition! My only concern is feature bloat with lots of cloud providers, but if it's done in a nice way (e.g. modular code) I guess it's not a big issue.

    Even though I won't be using S3 with SS, I'd recommend allowing users to pick the region mainly because of varying region costs.

    One alternative would be to allow a shell script to be run after a file is written (e.g. after motion detection); it would then be easy to pass the file location to any cloud script that uploads it.

    I think these file names are already available via the event stream, but a shell command would make it a bit easier - something like the sketch below.
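
    For illustration, a hypothetical two-line hook (the bucket name and date prefix are placeholders; the script would receive the captured file's path as its first argument):

        #!/bin/bash
        # Hypothetical post-capture hook: upload the file whose path is passed as $1.
        # Bucket name and date-based prefix are placeholders.
        aws s3 cp "$1" "s3://my-offsite-bucket/$(date +%Y-%m-%d)/"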
  • Thanks for the feedback. I'm also concerned with feature bloat/creep, but S3 is a big player and this has been requested many times, so we considered it useful enough to add to SecuritySpy.

    I'm also concerned with UI clutter, so I'm reluctant to add an option to select the region just for S3. I think that the closest region (which will automatically be selected) will work best for most users, and if not then it's easy to edit the config file to change this. Also, it seems to be the case that once you create the bucket in a particular region, it exists (and is charged) in that region forevermore, and the upload works even if the region in the local config file doesn't match.

    Yes we do already have the option to run a script for every file that is captured - see the ProcessCapturedFile script at the bottom of the SecuritySpy AppleScript Examples page.
  • I'm trying to come up to speed on S3... I'd like to ask the group a couple of broad questions, if I may. My usage would be around 1 TB max.

    Is the performance, upload and download rates, faster than Dropbox? Are there other considerations that would favor using S3 over Dropbox?

    Many thanks in advance for any offered insight!
  • Not a Dropbox SME, but it would be hard to beat S3's speeds. This of course depends on where you are located and your latency to S3 for your region.

    S3 has a cost. If you want to keep your files for more than 30 days, and you are mostly concerned with archival rather than viewing/distributing from S3 (except if your on-prem server goes down), you can set a lifecycle policy to move content to Infrequent Access and pay half the storage cost. There's a minimum 30-day charge, though.
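
    Setting that up is a single CLI call; a sketch with a placeholder bucket name, transitioning objects to Infrequent Access at the 30-day minimum:

        aws s3api put-bucket-lifecycle-configuration \
          --bucket my-camera-bucket \
          --lifecycle-configuration '{
            "Rules": [{
              "ID": "move-to-infrequent-access",
              "Status": "Enabled",
              "Filter": {},
              "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}]
            }]
          }'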
  • Thanx for your insight, Ramias - appreciate it!
  • Is the use case here to move existing video off of your primary storage and onto S3 on a recurring basis, or is the idea to save your video to S3 in real-time to remove the need for local storage?
  • I have plenty of local storage. My use case is secure offsite access for motion recordings, if local storage were corrupted, stolen, destroyed etc.

    I go to S3 standard and lifecycle off after a week.
  • This description by @Ramias is exactly what this feature is designed for: offsite backup of the footage, in case the local storage is compromised (damaged/stolen/corrupted etc.). S3 is perfect for this because it's fast, inexpensive, and has built-in options to automatically remove old files.
  • Hi @Ben, did anything change with S3 upload in the most recent upgrade? My uploads stopped after upgrading. I checked and found that my bucket name in the software had changed to an old bucket name. I changed it back to my current bucket name, and the Test function fails with a regex name-match error ("Bucket already exists"). It fails with and without dashes in the bucket name, for existing buckets (or new buckets, in case SecuritySpy wants to create the bucket).
  • Hi @Ramias, sorry about this, could you please email us at support@bensoftware.com and include the log file (File menu -> Open Log) and a screenshot of your upload settings? Thanks.
  • A bunch of these errors starting June 10th.

    06/16/2020 08:18:55: Error uploading the file "/Volumes/SecSpyVideos/Yard/2020-06-16/06-16-2020 08-18-24 M Yard.m4v". 5.2.3,1590,88794 FTP general error. Invalid bucket name "foo-videofoo": Bucket name must match the regex "^[a-zA-Z0-9.\-_]{1,255}$"

    Also note, the first several errors were to a different bucket name (NOT the bucket I'd been using for the past 6 months or so; probably a bucket I used very early on; not sure how that showed up).
  • Could you please check your upload settings, as follows:

    - If you are using the Amazon S3 service, the "Endpoint" field should be empty.

    - Enter the bucket name (3-63 alphanumeric characters) in the "S3 bucket" field.

    Does that do it?
  • Yes, the endpoint is empty.

    Just created a new bucket with all lowercase characters (no dashes) and I get the same regex error. Is the regex logic failing?
  • Same 1590,88794 error for me.

    I think there must be something wrong with the Regex in the latest release and beta. My bucket name is, for example, 's3-securityspy-motion'. If I remove the dashes, then it works fine (and a new bucket is created). Just a slight pain, as renaming a bucket is not possible in the AWS console, so all the settings will need to be re-created to keep our offsite backups.
  • Maybe not that simple. I created a new bucket without numbers and dashes and it seems to now be OK. Might well be something to do with a regex on the bucket name?
  • Ben
    edited June 2020
    I have tested this and can't find a problem.

    @Ramias - the error message you pasted above shows that there is an erroneous space character before your bucket name. This would indeed prompt the error message you are seeing. Make sure that there are no whitespace characters anywhere in the bucket name.

    @samsykes - could you please do the same - make sure there are no whitespace or other erroneous characters anywhere in the bucket name.

    We'll get the next update to automatically remove any whitespace from the bucket name to avoid problems like this in the future.
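
    If you want to check a name against that pattern yourself, here is a quick bash test (the character class is reordered slightly so the dash is literal):

        re='^[a-zA-Z0-9._-]{1,255}$'
        [[ "s3-securityspy-motion" =~ $re ]]  && echo "matches"   # dashes are fine
        [[ " s3-securityspy-motion" =~ $re ]] || echo "no match"  # a leading space fails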
  • No leading spaces, no trailing spaces. It was definitely something with this version upgrade. My existing entry for S3 won't work. I did create a new S3 entry (same credentials) and it worked. Strange thing - I went back and updated my previous entry and it now works too. So it could be some setting triggered when an S3 destination is added. And my original destination was not added under this version.
  • I also had this regex issue. No leading spaces, no trailing spaces. It started after I updated SecuritySpy. I had to add a new server to the Uploads section of SecuritySpy using exactly the same settings as the server that worked prior to the update and then update all of my cameras to use the new server.
  • It turns out there was a bug when updating to SecuritySpy 5.2.3 from a previous version, that corrupted the bucket name. Clearing and re-adding the bucket name (or deleting and recreating the server instance) is the solution here. Apologies.