Documentation
This section defines some terms in the context of the library and Storj DCS, so we all know what we are talking about.
An access defines where and what can be seen and done in the Storj network. An access can be provided either as a combination of satellite address, API key and secret, or as a serialized access grant containing that information. An access grant also encodes which paths (prefixes) and which permissions (read, write, list, delete) apply.
A bucket is the "root", kind of like a hard drive. You can have multiple buckets in a project, where each bucket is completely separate from the others. All object-based operations except MoveObject operate within one specific bucket. A bucket needs a name that is S3-compliant. The following rules apply:
- Bucket names must be between 3 and 63 characters long.
- Bucket names can consist only of lowercase letters, numbers, dots (.), and hyphens (-).
- Bucket names must begin and end with a letter or number.
An object is any kind of binary data of arbitrary size that you can upload and download using a unique key. As the Storj DCS network uses segment sizes of 64 MB, it is best to have objects of that size (or multiples of it), but you can always upload any amount of data. Be aware, though, that there is object-based accounting in place, so reducing the overall number of objects in your application design is recommended.
An object is uniquely defined by a key. A key can be seen as a filepath with a folder-like structure, e.g. "World/Europe/France/Paris". Every object needs a key. You can query a bucket to list all keys with a specific prefix - e.g. "World/Europe/France/" would list all cities in France.
The following namespaces contain relevant classes:
- uplink.NET.Models: contains all models
- uplink.NET.Services: contains all service classes
- uplink.NET.Interfaces: contains all interfaces (helpful for testing and mocking interactions with uplink.NET)
- uplink.SWIG: contains auto-generated objects that you should not need, so simply ignore that one :)
For all operations on the Storj DCS network you need an Access. This can either be a serialized access grant that contains all the necessary info, or a combination of satellite address, API key and secret.
In most cases you will have an access grant that you generated yourself. In this case, create the Access like this:
var access = new Access("YOUR_ACCESS_GRANT");
If you are using an API key you can create the Access like this:
var access = new Access("SATELLITE_ADDRESS:PORT", "APIKEY", "SECRET");
Be careful: creating an access this way takes a bit longer, as the system has to recalculate some internal keys. A serialized access grant already contains that information.
In both cases you can provide an optional Config object. A config object contains properties to set the temp directory, a timeout value and a UserAgent. The latter can be used to give attribution to open-source software. More info can be found here.
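For illustration, a minimal sketch of passing a Config (the constructor overload and the UserAgent property name are assumptions; check the Config class for the exact members):
var config = new Config { UserAgent = "my-app" }; //assumed property name; gives attribution
var access = new Access("YOUR_ACCESS_GRANT", config); //assumed overload accepting a Config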
If you are developing an Uno-Platform-application you may want to take a look at the uplink.NET.UnoHelpers-repository. It provides some UI-building-blocks for managing login and logout.
To make use of your Access you need instances of a BucketService and an ObjectService. Both can be found in the uplink.NET.Services namespace. Simply create an instance by providing your previously generated Access:
var bucketService = new BucketService(access);
var objectService = new ObjectService(access);
To upload larger files in the background you can use the MultipartUploadService:
var multipartUploadService = new MultipartUploadService(access);
For details see multipart-uploading, and see upload-queue for a more convenient way.
You can create a bucket simply by calling CreateBucket() on the BucketService:
await bucketService.CreateBucketAsync("mybucket");
If the bucket might or might not exist, you could either check its existence by calling GetBucket() first, or make use of EnsureBucket():
await bucketService.EnsureBucketAsync("mybucket");
You can list all buckets that your access grants you permission to by calling:
ListBucketsOptions listOptions = new ListBucketsOptions();
var buckets = await bucketService.ListBucketsAsync(listOptions);
foreach (var bucket in buckets.Items)
{
//work with the bucket
}
By utilising the ListBucketsOptions you can start listing at a specific Cursor.
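For example, a sketch of paging with the cursor (assuming the Cursor takes the name of the last bucket seen and that a bucket exposes a Name property):
listOptions.Cursor = buckets.Items.Last().Name; //continue after the last listed bucket (assumed semantics; Last() needs System.Linq)
var nextPage = await bucketService.ListBucketsAsync(listOptions);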
To delete a bucket simply call DeleteBucket():
await bucketService.DeleteBucketAsync("mybucket");
That only works for empty buckets! Either remove all existing objects before removing the bucket, or call DeleteBucketWithObjectsAsync():
await bucketService.DeleteBucketWithObjectsAsync("mybucket");
Uploading an object can be done in multiple ways. You can upload the data from a byte[] or from a stream, and the upload can happen in the foreground or in the background. You can even upload object data in chunks using multipart upload, or let the UploadQueueService take care of your uploads completely.
Simply call UploadObjectAsync() on the ObjectService. It will return an UploadOperation giving you information about the status. Be careful with the immediateStart parameter - you need to set it to false to be able to wait for upload completion easily:
byte[] bytesToUpload = Encoding.UTF8.GetBytes("Storj is awesome!");
var uploadOperation = await objectService.UploadObjectAsync(bucket, "awesome.txt", new UploadOptions(), bytesToUpload, false);
await uploadOperation.StartUploadAsync(); //Wait until the upload has finished
The provided byte[] is internally wrapped in a MemoryStream, so pay attention to large byte arrays and consider providing the data as a stream (e.g. if you read a file, provide the file stream instead of reading the file into a byte[]).
This works exactly like uploading a byte[] - simply provide the stream to UploadObjectAsync():
var stream = ...; //Get your stream
var uploadOperation = await objectService.UploadObjectAsync(bucket, "awesome.txt", new UploadOptions(), stream, false);
await uploadOperation.StartUploadAsync(); //Wait until the upload has finished
Be careful not to dispose the stream before the upload has finished!
The simplest way would be to set immediateStart on UploadObjectAsync() to true:
var stream = ...; //Get your stream
var uploadOperation = await objectService.UploadObjectAsync(bucket, "awesome.txt", new UploadOptions(), stream, true);
Now the upload has started immediately and might or might not have finished already. You can use the returned UploadOperation to gather information about the upload:
- BytesSent: the bytes already uploaded
- TotalBytes: the total amount of bytes to be uploaded
- Completed: true if the upload completed
- Failed: true if the upload failed
- Cancelled: true if the upload got cancelled via Cancel()
- Running: true if the upload has not completed, failed or been cancelled and is therefore still ongoing
- ErrorMessage: contains information about the error that occurred if Failed is true
- PercentageCompleted: gives a percentage value based on BytesSent and TotalBytes
- UploadOperationProgressChanged: this event gets fired whenever a state change happens during the upload
- UploadOperationEnded: this event gets fired when the UploadOperation completed, failed or got cancelled
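For illustration, a minimal sketch of monitoring an upload via these members (the handler signatures are assumptions; check the delegate types in uplink.NET.Models):
var uploadOperation = await objectService.UploadObjectAsync(bucket, "awesome.txt", new UploadOptions(), bytesToUpload, false);
uploadOperation.UploadOperationProgressChanged += (operation) => //assumed: the event passes the operation
    Console.WriteLine($"{operation.PercentageCompleted}% uploaded");
uploadOperation.UploadOperationEnded += (operation) =>
    Console.WriteLine(operation.Failed ? operation.ErrorMessage : "Upload completed");
await uploadOperation.StartUploadAsync();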
If you want to upload larger files reliably, completely in the background and with automatic resume, take a look at "Multipart uploading".
If the data you want to upload arrives step by step, you can upload it whenever new data is available, in so-called "chunks" (parts). You start a chunked upload by calling UploadObjectChunkedAsync():
var chunkedUploadOperation = await objectService.UploadObjectChunkedAsync(bucket, "awesome.txt", new UploadOptions(), new CustomMetadata());
The ChunkedUploadOperation now provides you with the methods WriteBytes() and Commit(). Simply add bytes to the upload using the first one and finish the upload using the latter.
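A minimal sketch (assuming WriteBytes() takes a byte[]; check the actual signatures; firstChunk and secondChunk are hypothetical byte arrays):
chunkedUploadOperation.WriteBytes(firstChunk); //add data whenever it becomes available
chunkedUploadOperation.WriteBytes(secondChunk);
chunkedUploadOperation.Commit(); //finish the upload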
A multipart upload is an upload where a larger file is split into smaller pieces, and every piece can be finished on its own. If a piece fails, it can be retried without uploading all of the other parts again.
For a multipart upload you need the MultipartUploadService: begin an upload (BeginUploadAsync()), add parts (UploadPartAsync()) and commit (CommitUploadAsync()) or abort (AbortUploadAsync()) the upload.
var multipart = await multipartUploadService.BeginUploadAsync(bucketname, objectKey, new UploadOptions());
var partResult1 = await multipartUploadService.UploadPartAsync(bucketname, objectKey, multipart.UploadId, 1, bytesToUpload);
var partResult2 = await multipartUploadService.UploadPartAsync(bucketname, objectKey, multipart.UploadId, 2, bytesToUpload);
... //all other parts
var uploadResult = await multipartUploadService.CommitUploadAsync(bucketname, objectKey, multipart.UploadId, new CommitUploadOptions());
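If something goes wrong you can abort the upload instead. A sketch, assuming AbortUploadAsync() takes the same identifying parameters as CommitUploadAsync():
await multipartUploadService.AbortUploadAsync(bucketname, objectKey, multipart.UploadId); //parameters are an assumption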
If you want to "fire and forget" uploads you may take advantage of the UploadQueueService
. With that service you can upload files in a robust way and be asured that they will be uploaded even if the app (like on mobile devices) has bad connectivity or gets opened and closed on the users behalf.
Simply initialise the UploadQueueService as a singleton and use the same instance throughout the whole application.
Then simply call AddObjectToUploadQueueAsync():
var uploadQueueService = new UploadQueueService();
await uploadQueueService.AddObjectToUploadQueueAsync("mybucket", "myqueuefile.txt", access.Serialize(), bytesToUpload, "File description");
You need to provide the bucket name, the object key, the serialized access, the bytes to upload (either as a stream or as a byte[]) and an identifier. Your object data is then stored in an SQLite database in the application-data folder of your application. As your access object might not be available anymore when the upload is actually performed, you have to provide your access as a serialized access grant. The identifier is solely for your own purpose, to identify the object you've uploaded in case of an error or when you list the queued uploads.
After adding an object to the queue, the queue immediately starts processing and tries to upload all pending tasks. To always start the upload whenever your app is running, you may simply call ProcessQueueInBackground():
await uploadQueueService.ProcessQueueInBackground();
If - for whatever reason - you do not want to process the queue for the moment, you can stop it by calling StopQueueInBackground():
await uploadQueueService.StopQueueInBackground();
You can call ProcessQueueInBackground() multiple times - it only starts a new background thread if none is currently processing the uploads.
If you want to cancel a single queued upload, simply call CancelUploadAsync() providing the key of the object. If an upload failed, you can retry it by calling RetryAsync(). You can get a list of pending uploads by calling GetAwaitingUploadsAsync() and get the number of open uploads by calling GetOpenUploadCountAsync().
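For illustration, a sketch of these calls (the exact signatures are assumptions based on the description above; check uplink.NET.Interfaces):
var openCount = await uploadQueueService.GetOpenUploadCountAsync();
var pendingUploads = await uploadQueueService.GetAwaitingUploadsAsync();
await uploadQueueService.CancelUploadAsync("myqueuefile.txt"); //cancel by object key (assumed parameter)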
If you want to monitor what happens in the background, you can add an event handler to UploadQueueChangedEvent.
If you are developing an Uno-Platform-application you may want to take a look at the uplink.NET.UnoHelpers-repository. It provides some UI-building-blocks for managing uploads in the background.
Bear in mind that your serialized access grant is saved on disk in the SQLite database file. If you need to keep it absolutely secure, consider not using the upload queue.
All objects on Storj DCS have so-called system metadata, which contains the creation date, the expiry date and the content length. On all normal upload operations you can additionally add custom metadata to your object.
Both system and custom metadata get downloaded when you get an object. For listing, you can decide whether you need one, the other, or both; this affects the performance of your listings. The default listing operation skips both metadata types.
To add metadata to an object, simply create a CustomMetadata object and add a CustomMetadataEntry to its Entries collection:
CustomMetadata customMetadata = new CustomMetadata();
customMetadata.Entries.Add(new CustomMetadataEntry { Key = "my-key 1", Value = "my-value 1" });
customMetadata.Entries.Add(new CustomMetadataEntry { Key = "my-key 2", Value = "my-value 2" });
Then provide it when you start an upload:
var uploadOperation = await objectService.UploadObjectAsync(bucket, "myfile1.txt", new UploadOptions(), bytesToUpload, customMetadata, false);
The keys and values for custom metadata are expected to be valid UTF-8.
You may also update the metadata after an object is uploaded by doing the following:
await objectService.UpdateObjectMetadataAsync(bucket, "myfile1.txt", customMetadata);
The existing CustomMetadata gets completely replaced by the provided value.
Listing objects is as simple as this:
var objectList = await objectService.ListObjectsAsync(bucket, new ListObjectsOptions());
But in most cases you want to restrict the listing to a specific prefix like this:
var listObjectsOptions = new ListObjectsOptions();
listObjectsOptions.Prefix = "World/Europe/France/";
var objectList = await objectService.ListObjectsAsync(bucket, listObjectsOptions);
This would list all objects that share the same prefix without descending deeper. Of the following list, only the entries marked with (*) would be selected:
- World/Europe/France/Paris (*)
- World/Europe/France/Paris/DistrictA
- World/Europe/Germany
If you want to select all sub-elements, too, then set Recursive to true:
var listObjectsOptions = new ListObjectsOptions();
listObjectsOptions.Prefix = "World/Europe/France/";
listObjectsOptions.Recursive = true;
var objectList = await objectService.ListObjectsAsync(bucket, listObjectsOptions);
This would list the following objects (marked with a (*)):
- World/Europe/France/Paris (*)
- World/Europe/France/Paris/DistrictA (*)
- World/Europe/Germany
The resulting objectList only provides system and/or custom metadata if you've set the corresponding options flags:
var listObjectsOptions = new ListObjectsOptions();
listObjectsOptions.Prefix = "World/Europe/France/";
listObjectsOptions.System = true; //Load system metadata
listObjectsOptions.Custom = true; //Load custom metadata
var objectList = await objectService.ListObjectsAsync(bucket, listObjectsOptions);
If you only want to get info about an object, you may call GetObjectAsync() on the ObjectService:
var myObject = await objectService.GetObjectAsync(bucket, "myfile.txt");
This will give you all the custom and system metadata, provided the object exists and can be accessed.
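For illustration, a sketch of reading the returned metadata (SystemMetadata and ContentLength are assumed property names; Entries, Key and Value match the custom-metadata example above):
Console.WriteLine(myObject.SystemMetadata.ContentLength); //assumed property names
foreach (var entry in myObject.CustomMetadata.Entries)
    Console.WriteLine($"{entry.Key}: {entry.Value}");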
To download an object you start by creating a DownloadOperation:
var downloadOperation = await objectService.DownloadObjectAsync(bucket, "myfile.txt", new DownloadOptions(), false);
downloadOperation.DownloadOperationProgressChanged += ..;
await downloadOperation.StartDownloadAsync();
This will block execution until the object is downloaded or the operation fails. You can get info about the DownloadOperation via its DownloadOperationProgressChanged event or via the DownloadOperation object returned from StartDownloadAsync(). The info you can retrieve is similar to that of the UploadOperation (see here).
If you want to process an object as a stream, you may take advantage of a DownloadStream object:
var stream = await objectService.DownloadObjectAsStreamAsync(bucket, "myfile.txt");
Now you can use the stream like you would use any other stream. You can even seek to any location within the stream. Already downloaded byte ranges get buffered within the stream and therefore don't get downloaded again.
Dispose the stream after use to free up unmanaged resources!
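For instance, a minimal sketch that reads the beginning of an object (standard Stream API; the buffer size is arbitrary):
using (var stream = await objectService.DownloadObjectAsStreamAsync(bucket, "myfile.txt"))
{
    var buffer = new byte[1024];
    int read = await stream.ReadAsync(buffer, 0, buffer.Length); //only the requested range gets downloaded and buffered
}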
Deleting an object is as simple as calling DeleteObjectAsync():
var myObject = await objectService.DeleteObjectAsync(bucket, "myfile.txt");
You can move/rename an object within one bucket or from one bucket to another. Simply call MoveObjectAsync():
await objectService.MoveObjectAsync(oldBucket, "oldfile.txt", newBucket, "newfile.txt");
To share an access without changing it, you can always simply serialize it:
string serializedAccess = access.Serialize();
However, if you want to share an object or prefix without handing out the same full access you have, you can call the Share() method to restrict the shared access:
var permission = new Permission();
permission.AllowDownload = true;
var prefixes = new List<SharePrefix>();
prefixes.Add(new SharePrefix{ Bucket = "mybucket", Prefix = "World/Europe/France/" });
var restrictedAccess = access.Share(permission, prefixes);
string serializedAccess = restrictedAccess.Serialize();
The available permissions are:
- AllowDownload
- AllowUpload
- AllowList
- AllowDelete
- NotBefore
- NotAfter
NotBefore and NotAfter restrict the access to a certain time frame - if left untouched, the access is valid immediately and "forever".
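For example, a sketch that limits a shared access to one week (assuming the properties accept DateTime values):
permission.NotBefore = DateTime.UtcNow;           //valid from now on (assumed property type)
permission.NotAfter = DateTime.UtcNow.AddDays(7); //expires after one week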
If you want to share a URL to an object, you can call CreateShareURL on an Access:
string url = access.CreateShareURL("mybucket", "myimage.png", true, true);
This will register your access on the satellite and provide you with a URL to the object. It has two special parameters:
- raw: shows no landing page but leads directly to the object. Depending on the MIME type, the browser handles it accordingly.
- isPublic: Defines whether the objects can be read using only the AccessKeyId.
This functionality is still work-in-progress and might change slightly in the future.
To revoke an access call:
await parentAccess.RevokeAsync(childAccess);
Remember that you can only revoke child accesses, i.e. an Access that was derived from a parent access.
The following (incomplete) list might help you when developing applications using uplink.NET. If you want to see your application listed, too, please get in contact.
- Duplicati: a widely used backup utility for Windows, Linux and macOS. You can create backup tasks with quite complex filters that run automatically however you like. The code is open source and the uplink.NET integration can be found here. Documentation on how to use Duplicati together with Storj DCS can be found here.
- Storj Photo Gallery: an open-source Uno-Platform-based application that will be available on Android and iOS. Users of the application can create albums and upload pictures to them. Each album then gets rendered as a nice web gallery, fully served directly from the Storj DCS network. The app is still work in progress, [but the beta version can already be found in the Google Play Store](https://play.google.com/store/apps/details?id=io.storj.photogalleryuploader&hl=de&gl=US). It makes heavy use of the uplink.NET.UnoHelpers library.
- The uplink.NET repository contains a sample application. It is slightly outdated, but you may still find it useful.
If the documentation is wrong or you still could not find what you need, please let me know! Either start a discussion or create an issue.