StorageManager is a helper interface for downloading and uploading files to supported remote storage. Supported remote servers: http(s)/S3/GS/Azure/File-System-Folder. Caching is enabled by default for all downloaded remote URLs/files.

classmethod download_folder(remote_url, local_folder=None, match_wildcard=None, overwrite=False)

Download a remote folder recursively to the local machine, maintaining the sub-folder structure from the remote storage. For example:

If there is a remote file s3://bucket/sub/file.ext, then StorageManager.download_folder('s3://bucket/', '~/folder/') will create: ~/folder/sub/file.ext


  • remote_url – Source remote storage location; the tree structure of remote_url will be created under the target local_folder. Supports S3/GS/Azure and shared filesystem. Example: ‘s3://bucket/data/’

  • local_folder – Local target folder in which to create the full tree from remote_url. If None, use the cache folder. (Default: use the cache folder)

  • match_wildcard – If specified, only download files matching the match_wildcard. Example: *.json

  • overwrite – If False, do not download files whose local targets already exist. If True, always download the remote files. Default False.


Target local folder.
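A minimal usage sketch, assuming a hypothetical bucket and target folder; the mirrored_local_path helper is not part of the API and only spells out how the remote tree is mirrored locally:

```python
import os
import posixpath


def mirrored_local_path(remote_url, remote_file, local_folder):
    # Illustrates the documented behaviour: the tree structure under
    # remote_url is recreated under local_folder.
    rel = posixpath.relpath(remote_file, remote_url)
    return os.path.join(os.path.expanduser(local_folder), rel)


if __name__ == "__main__":
    # Hypothetical bucket and target folder, for illustration only.
    from clearml import StorageManager

    local_root = StorageManager.download_folder(
        remote_url="s3://bucket/data/",
        local_folder="~/datasets/",
        match_wildcard="*.json",  # download only files matching the pattern
        overwrite=False,          # skip files that already exist locally
    )
    print(local_root)
```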

classmethod get_local_copy(remote_url, cache_context=None, extract_archive=True, name=None, force_download=False)

Get a local copy of the remote file. If the remote URL is a direct file access, the returned link is the same; otherwise a link to a local copy of the URL’s file is returned. Caching is enabled by default, and the cache is limited by the number of stored files per cache context. The oldest accessed files are deleted when the cache is full.

  • remote_url (str) – remote url link (string)

  • cache_context (str) – Optional caching context identifier (string), default context ‘global’

  • extract_archive (bool) – if True, the returned path will be a cached folder containing the archive’s content; currently only zip files are supported.

  • name – name of artifact.

  • force_download – download file from remote even if exists in local cache


Full path to a local copy of the requested URL. Returns None on error.
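A usage sketch, assuming a placeholder URL and a custom cache context; the cached_copy wrapper is illustrative, not part of the API:

```python
def cached_copy(remote_url, context="global"):
    # Deferred import so this sketch parses without clearml installed.
    from clearml import StorageManager

    return StorageManager.get_local_copy(
        remote_url=remote_url,
        cache_context=context,   # files are cached per context
        extract_archive=True,    # unzip .zip archives into a cached folder
        force_download=False,    # reuse an existing cached copy when present
    )


if __name__ == "__main__":
    # Hypothetical URL, for illustration only.
    local_path = cached_copy("https://example.com/data/model.zip")
    print(local_path)  # local cached path, or None on error
```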

classmethod set_cache_file_limit(cache_file_limit, cache_context=None)

Set the cache context file limit. The file limit is the maximum number of files a specific cache context holds. Note that there is no limit on the size of these files, only on the total number of cached files.

  • cache_file_limit (int) – New maximum number of cached files

  • cache_context (str) – Optional cache context identifier, default global context


The new cache context file limit.
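A usage sketch; the cap_cache wrapper and the limit of 10 are illustrative choices, not part of the API:

```python
def cap_cache(limit, context=None):
    # Deferred import so this sketch parses without clearml installed.
    from clearml import StorageManager

    # Only the *number* of cached files is capped, not their total size.
    return StorageManager.set_cache_file_limit(
        cache_file_limit=limit,
        cache_context=context,  # None targets the default "global" context
    )


if __name__ == "__main__":
    new_limit = cap_cache(10)
    print(new_limit)  # the new file limit for the global context
```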

classmethod upload_file(local_file, remote_url, wait_for_upload=True, retries=1)

Upload a local file to a remote location; remote_url is the final destination of the uploaded file. Examples:


upload_file('/tmp/artifact.yaml', 'http://localhost:8081/manual_artifacts/my_artifact.yaml')
upload_file('/tmp/artifact.yaml', 's3://a_bucket/artifacts/my_artifact.yaml')
upload_file('/tmp/artifact.yaml', '/mnt/share/folder/artifacts/my_artifact.yaml')
  • local_file (str) – Full path of a local file to be uploaded

  • remote_url (str) – Full path or remote url to upload to (including file name)

  • wait_for_upload (bool) – If False, return immediately and upload in the background. Default True.

  • retries (int) – Number of retries before failing to upload file, default 1.


Newly uploaded remote URL.

classmethod upload_folder(local_folder, remote_url, match_wildcard=None)

Upload a local folder recursively to remote storage, maintaining the sub-folder structure in the remote storage. For example:

If there is a local file ~/folder/sub/file.ext, then StorageManager.upload_folder('~/folder/', 's3://bucket/') will create: s3://bucket/sub/file.ext

  • local_folder – Local folder to recursively upload

  • remote_url – Target remote storage location; the tree structure of local_folder will be created under the target remote_url. Supports Http/S3/GS/Azure and shared filesystem. Example: ‘s3://bucket/data/’

  • match_wildcard – If specified, only upload files matching the match_wildcard. Example: *.json (Notice: target file size/date are not checked.) Default: None, upload all files. Notice: when uploading to http, the target is always overwritten.
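A usage sketch, assuming hypothetical local and bucket paths; the publish_json_results wrapper is illustrative, not part of the API:

```python
def publish_json_results(local_folder="~/results/", bucket="s3://bucket/experiments/"):
    # Deferred import so this sketch parses without clearml installed.
    from clearml import StorageManager

    # The sub-folder tree under local_folder is recreated under bucket,
    # e.g. ~/results/run1/metrics.json -> s3://bucket/experiments/run1/metrics.json
    StorageManager.upload_folder(
        local_folder=local_folder,
        remote_url=bucket,
        match_wildcard="*.json",  # upload only JSON files
    )


if __name__ == "__main__":
    publish_json_results()
```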