File storage provider

By default, Logto Console uses a text input for static file URLs such as avatars. To enable a more intuitive drag-and-drop file upload experience, you need to configure a storage provider.

Logto supports multiple storage providers, including AWS S3, Azure Storage, and Google Cloud Storage. This recipe will show you how to configure a storage provider for Logto.

The configuration is stored in the database's "systems" table, but it is recommended to use the CLI to configure the storage provider. For more information, try the "help" command:

pnpm logto db system --help
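
If you want to inspect what is currently stored, you can also query the database directly. This is a minimal sketch that assumes the systems table keeps entries as key / value pairs; check your actual schema before relying on it:

SELECT value
FROM systems
WHERE key = 'storageProvider';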

Azure Storage

Azure Storage is a powerful and scalable cloud storage solution that allows you to store and manage your data in the cloud. The following recipe will show you how to configure Azure Storage as a storage provider for Logto.

Prerequisites

Config using CLI

Example usage:

pnpm logto db system set storageProvider '{"provider":"AzureStorage","connectionString":"DefaultEndpointsProtocol=https;AccountName=logto;AccountKey=oRhfTBHOHiBxxxxxxxxxxxxxxxxZ0se6XROftl/Xrow==;EndpointSuffix=core.windows.net","container":"logto"}'

connectionString

To access Azure Storage, you need to use a connection string, which is a string of characters that contains the necessary information for establishing a connection to your storage account.

To get the connection string, follow the official Azure Storage connection string documentation.

container

The container is a storage resource that stores blobs. You can use the container to organize your blobs and to control access to your data.

To create a container, follow the official Azure Storage container documentation.

publicUrl

Optional.

The public URL is the URL that can be used to access the storage resource publicly. If you are not using a CDN, you can leave it blank to use Azure Storage's default "primary endpoint" as the public URL. Logto will derive this value from "connectionString" with the help of the Azure SDK. To learn more about your storage account's primary endpoint, follow the official Azure Storage primary endpoint documentation.
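
For example, if you serve the container through a CDN, you could include publicUrl like this; the CDN domain and connection string below are hypothetical placeholders:

pnpm logto db system set storageProvider '{"provider":"AzureStorage","connectionString":"<your-connection-string>","container":"logto","publicUrl":"https://cdn.example.com"}'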

S3 Storage

S3 Storage is a cloud storage service that offers object storage through a web service interface. The following recipe will show you how to configure S3 Storage as a storage provider for Logto.

Prerequisites

Config using CLI

Example usage:

pnpm logto db system set storageProvider '{"provider":"S3Storage","accessKeyId":"my-access-key-id","accessSecretKey": "my-secret-access-key","bucket":"logto","endpoint":"https://s3.us-east-2.amazonaws.com"}'

accessKeyId

The access key ID is an identifier for your AWS account. To find the access key ID for your AWS account, follow the official AWS access key ID documentation.

accessSecretKey

The secret access key is used in conjunction with the access key ID to sign programmatic requests. To find the secret access key for your AWS account, follow the official AWS secret access key documentation.

bucket

The bucket is a container for objects stored in Amazon S3. To create a bucket, follow the official AWS S3 bucket documentation.

region

Optional.

The region is the geographical region where your AWS S3 bucket is located. If "endpoint" is a standard AWS S3 endpoint, the region can be parsed from it. To find your AWS S3 region, follow the official AWS S3 region documentation.

If you are using an S3-compatible storage service, you may leave this field blank.

endpoint

Optional.

The endpoint is the URL used to access the AWS S3 service. To find your AWS S3 endpoint, follow the official AWS S3 endpoint documentation.

You can leave this field blank to use the default endpoint for the region. If you are using an S3-compatible storage service, use that service's endpoint.
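
For instance, pointing Logto at an S3-compatible service could look like this; the endpoint and credentials are hypothetical placeholders, and region is left out because it is not needed:

pnpm logto db system set storageProvider '{"provider":"S3Storage","accessKeyId":"my-access-key-id","accessSecretKey":"my-secret-access-key","bucket":"logto","endpoint":"https://minio.example.com"}'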

publicUrl

Optional.

The public URL is the URL that can be used to access the storage resource publicly. If you are not using a CDN, you can leave it blank to use S3's default URL.

Google Cloud Storage

Google Cloud Storage is a cloud storage service that provides object storage through a web service interface. The following guide will show you how to configure Google Cloud Storage as a storage provider for Logto.

Prerequisites

Obtain the key file

Google Cloud SDKs commonly use a "key file." If you're unfamiliar with Google Cloud, this might be the most challenging part. Here's how to get it:

  1. Go to the service account page: https://console.cloud.google.com/iam-admin/serviceaccounts
  2. Create a service account, enter a name, and then continue.
  3. In the next step, select the role of “Storage Object User.” You can find it using the filter.
  4. Once you've finished creating the account, go to the account detail page and select the “keys” tab.
  5. Click “add key,” select “create a new key,” choose “JSON” in the dialog, and then download your JSON file.
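
Alternatively, if you prefer the gcloud CLI, the console steps above translate roughly into the commands below; the project ID and service account name are placeholders you should replace:

gcloud iam service-accounts create logto-storage --display-name="Logto storage"
gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:logto-storage@my-project-id.iam.gserviceaccount.com" \
  --role="roles/storage.objectUser"
gcloud iam service-accounts keys create google-storage-key.json \
  --iam-account="logto-storage@my-project-id.iam.gserviceaccount.com"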

Add the key file to Logto

Logto should have access to the key file.

Running in Node.js

Copy the file to /path/to/logto/core and rename it to google-storage-key.json.

Running in a Docker Container

If you're running Logto in a Docker container, you'll need to mount the file to the container. Assuming you're using Docker Compose, add this to your configuration:

volumes:
  - ./path/to/google-storage-key.json:/etc/logto/core/google-storage-key.json

Remember to replace /path/to with the actual path.
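
For context, here is roughly where that volumes entry sits in a docker-compose.yml; the service name and image are taken from a typical Logto Compose setup and may differ in yours:

services:
  logto:
    image: svhd/logto:latest
    volumes:
      - ./path/to/google-storage-key.json:/etc/logto/core/google-storage-key.json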

Config using CLI

Example usage:

pnpm logto db system set storageProvider '{"provider":"GoogleStorage","projectId":"psychic-trainer-403801","keyFilename":"google-storage-key.json","bucketName":"logto-test2"}'

projectId

Your Google Cloud project ID.

keyFilename

The name of the key file. If you followed the steps above, it is google-storage-key.json.

bucketName

The bucket name.

publicUrl

Optional.

The public URL is the URL that can be used to access the storage resource publicly. If you are not using a CDN, you can leave it blank to use Google Cloud Storage's default URL.