A deep dive into Glance multistore support using Cinder

Setting up Glance multistore support can be somewhat confusing. This is largely due to 1) the number of options available, 2) existing documentation and guides that can be somewhat lacking, and 3) it not always being clear why you’d want multistore support in the first place.

Introduction

Glance is the Image service for OpenStack. It provides a catalog of images that are used to create VMs. 

You have a number of options for where to store these images, either locally or remotely. The image backends Glance can use include (but are not limited to):

  • file (local storage)
  • http
  • rbd
  • swift
  • cinder

In this post, I’ll cover using Cinder as a backend for Glance. So what are the advantages of using Cinder as an image backend?

  • Logical separation. In my examples, my Cinder backend will be ONTAP. ONTAP makes use of SVMs (Storage Virtual Machines), which are designed with logical separation in mind: each SVM is effectively isolated from the others.
  • Instant clones! If you’re provisioning 25 new instances, would you rather create them in seconds, or wait 15 minutes or more while multiple ongoing copies take up bandwidth on the network?

Without instant clones, you’re typically relying on Linux dd to copy your image over to your new Cinder volume. This can be quite time consuming and significantly increases provisioning times.

What we’ll be covering

In this post, I’ll be configuring Glance in two different ways:

  • Using a single project to own Glance images
  • Having each individual project own its own Glance images

How you configure Glance is entirely dependent upon your use case. For example, your OpenStack environment might be one you use within your organization. From a management perspective, it might make sense to have all of your images owned by a single project.

Alternatively, your OpenStack environment might be part of a managed services offering. You have several customers (tenants) and you want to make sure you’re billing them for the storage their images are consuming. Hence, you want customer images owned by that customer’s project.

Caveats

Some important caveats to get out of the way…

  • I’m using DevStack. It’s great for spinning up a quick lab to check out workflows; however, I would not consider it a platform suitable for production.
  • Glance is highly customizable. Personally, I just want to kick some things around in a lab and explore workflows. You’ll obviously want to explore your own options and thoroughly test before rolling anything into production.
  • I do add some options that help show what is happening on the back end. These options could be a security concern.

Option #1: Configuring a single project to own Glance images

In this example, we will use a simple Cinder volume type with a single extra spec (volume_backend_name) that identifies the Cinder volume backend we’ll be using:

openstack volume type create --property volume_backend_name=ontap-iscsi-916 ontap-iscsi-916
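To confirm the extra spec took effect, you can inspect the new type; the volume_backend_name we just set should appear in its properties:

openstack volume type show ontap-iscsi-916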

Changes to cinder.conf

To enable Cinder to create new volumes based on an Image-Volume, add the following lines to cinder.conf. Roughly speaking, allowed_direct_url_schemes = cinder lets Cinder consume an image’s cinder:// location directly when creating a volume from it (the clone path, rather than a download), while image_upload_use_cinder_backend = True makes upload-to-image of a raw volume register a cloned volume with Glance instead of copying the data out:

[DEFAULT]
allowed_direct_url_schemes = cinder
image_upload_use_cinder_backend = True

Changes to glance-api.conf

[DEFAULT]
enabled_backends=ontap-iscsi-916:cinder
show_multiple_locations = true
show_image_direct_url = true

  • enabled_backends – This defines the Glance store (backed by Cinder) that we will be using. The Glance storage backend (in this example, labeled ontap-iscsi-916) will be defined later in glance-api.conf
  • show_multiple_locations and show_image_direct_url are enabled for illustrative purposes, to show how an image is located within the Glance store. In a production environment, having these options enabled may represent a security concern

[glance_store]
filesystem_store_datadir = /opt/stack/data/glance/images/
stores = file,cinder
default_store = cinder
default_backend = ontap-iscsi-916

[os_glance_tasks_store]
filesystem_store_datadir = /opt/stack/data/glance/tasks_work_dir

[os_glance_staging_store]
filesystem_store_datadir = /opt/stack/data/glance/staging

  • filesystem_store_datadir is used for a file (local) store backend. Even though we will be using a cinder storage backend, this option still must be defined in glance-api.conf.
  • [os_glance_tasks_store] and [os_glance_staging_store] are internal “reserved” Glance stores used for staging / temporary locations for processing image data. The directories specified in filesystem_store_datadir should actually exist.
  • stores lists the enabled Glance stores. Note that file is included because the staging store is a file store.
  • default_store defines that a Cinder store will be used as the default backend
  • default_backend defines the default Cinder backend to be used

[ontap-iscsi-916]
store_description = "NetApp iSCSI"
cinder_store_auth_address = http:///identity
cinder_store_user_name = glance
cinder_store_password = Netapp1!
cinder_store_project_name = service
cinder_volume_type = ontap-iscsi-916

  • [ontap-iscsi-916] defines a Cinder backend to use as a Glance store
  • cinder_store_auth_address defines the Keystone endpoint for Glance to use to authenticate. This can vary per OpenStack distribution (see How do I find my Keystone Identity URL? below).
  • cinder_store_user_name and cinder_store_password are the credentials that Glance uses to authenticate
  • cinder_store_project_name defines the project that will own the Image-Volumes backing our images
  • cinder_volume_type defines the Cinder volume type to use. This parameter is important, as this is what controls which Glance storage backend is selected during volume provisioning
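One assumption baked into this configuration is that the glance user actually has a role on the service project, since that’s the project the Image-Volumes will be created in. DevStack normally sets this up for you; if your deployment doesn’t, granting the role would look something like the following (the role name may differ in your environment):

openstack role add --user glance --project service member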

Testing by uploading an image

After adding the above configuration (and restarting Glance and Cinder services), we can test by creating a new image.
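On DevStack, restarting those services typically looks like this (systemd unit names can vary by release; these are the DevStack defaults for the Glance API and Cinder services):

sudo systemctl restart devstack@g-api.service
sudo systemctl restart devstack@c-api.service devstack@c-vol.service devstack@c-sch.service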

First, let’s look at the new Glance store and confirm that multistore support is working:

$ glance stores-info
+----------+-------------------------------------------------------------------------------+
| Property | Value                                                                         |
+----------+-------------------------------------------------------------------------------+
| stores   | [{"id": "ontap-iscsi-916", "description": "NetApp iSCSI", "default": "true"}] |
+----------+-------------------------------------------------------------------------------+

Creating a new image:

$ openstack image create --public --disk-format raw --file /opt/stack/images/cirros-0.6.2-x86_64-disk.img cirros
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | c8fc807773e5354afe61636071771906                     |
| container_format | bare                                                 |
| created_at       | 2025-08-21T15:05:48Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/8fb1e333-f9c2-4703-8dc1-7fe7a511614d/file |
| id               | 8fb1e333-f9c2-4703-8dc1-7fe7a511614d                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros                                               |
| owner            | 79a5395d14cc44a793cc43ae34712dd3                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 21430272                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2025-08-21T15:06:03Z                                 |
| virtual_size     | 117440512                                            |
| visibility       | public                                               |
+------------------+------------------------------------------------------+

In the above output, the image record is owned by 79a5395d14cc44a793cc43ae34712dd3, which corresponds to the admin project I uploaded from. The Image-Volume backing it, however, is owned by the service project (per cinder_store_project_name), as we’ll see shortly:

$ openstack project list
+----------------------------------+--------------------+
| ID                               | Name               |
+----------------------------------+--------------------+
| 64b6cddc71f84fc3bc89107b4c809ef0 | demo               |
| 76e9c16d594146fd96857d3e013b11b6 | alt_demo           |
| 7774cb1a7a844c3ab74251d2af4b43a7 | service            |
| 79a5395d14cc44a793cc43ae34712dd3 | admin              |
| c4b549998de74f4487c3b704d9cbf464 | invisible_to_admin |
+----------------------------------+--------------------+

To see more details about this image and its associated volume type:

$ cinder list --all-tenants
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-----------------+----------+-------------+
| ID                                   | Tenant ID                        | Status    | Name                                       | Size | Volume Type     | Bootable | Attached to |
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-----------------+----------+-------------+
| 05731acd-67d2-4ec0-9595-8aa7e1afc39f | 7774cb1a7a844c3ab74251d2af4b43a7 | available | image-8fb1e333-f9c2-4703-8dc1-7fe7a511614d | 1    | ontap-iscsi-916 | false    |             |
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-----------------+----------+-------------+

The key takeaways from the above output are:

  • The Image-Volume is owned by the service project (Tenant ID 7774cb1a7a844c3ab74251d2af4b43a7 above)
    • If you don’t want to use the service project, you can always create a new project to own images.
    • The key here is that the volumes backing our Glance images are owned by a single project
  • Let’s talk about the importance of volume_type. In the above example, the image is associated with volume type ontap-iscsi-916:
    • This is because in the [glance_store] section, we set default_backend = ontap-iscsi-916
    • Managing which volume_type is associated with the Image-Volume is important, as this is how the image is associated with the correct Cinder backend
      • In other words, let’s say you provision a new cinder volume using volume_type ontap-iscsi-916 (which specifies the Cinder backend).
      • You also use the cirros image we uploaded above (which is also associated with the ontap-iscsi-916 volume type)
      • If the above criteria are met, Cinder will create an instant clone, as both the image and the new cinder volume share the same backend (see the example after this list).
    • You may also have a configuration where you have multiple enabled Glance backends
      • In each Glance backend configuration, each backend would have its own cinder_volume_type value.
        • For example:

          [ontap-iscsi]
          store_description = "NetApp iSCSI"
          cinder_store_auth_address = http:///identity
          cinder_store_user_name = glance
          cinder_store_password = Netapp1!
          cinder_store_project_name = service
          cinder_volume_type = ontap-iscsi


          [ontap-iscsi-new]
          store_description = "NetApp iSCSI"
          cinder_store_auth_address = http:///identity
          cinder_store_user_name = glance
          cinder_store_password = Netapp1!
          cinder_store_project_name = service
          cinder_volume_type = ontap-iscsi-new
        • And let’s say I’ve uploaded images to both Glance backends …
        • When creating a new cinder volume, if I use a volume type of ontap-iscsi-new, and I select an image whose Image-Volume has that same volume type, then I should expect that cinder volume to be a clone, because both my Cinder volume and the Image-Volume share the same backend.
        • Conversely, if I provision a new Cinder volume with a volume_type of ontap-iscsi-new, but the image I use is associated with an Image-Volume with a different volume_type (and hence a different Cinder backend), then my cinder volume would not be a clone. In that case, we’re copying image blocks from one Cinder backend to another.
        • If you have multiple Glance backends and want to push your images to all of them, the following command will do so: glance image-import <image_id> --all-stores true --import-method copy-image
    • How to manage multiple Glance backends with differing volume types in a production environment is likely beyond the scope of this article. My goal was to understand how Glance enabled backends work. I can imagine it would take some planning to ensure that most of your Cinder volumes are clones, and to avoid falling back to dd Linux copies.
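To exercise the clone path described above, you can create a volume from the image using the matching volume type (the volume name and size here are just illustrative):

$ openstack volume create --image cirros --type ontap-iscsi-916 --size 1 clone-test

Because the new volume and the cirros Image-Volume land on the same backend, the volume should go available in seconds; the How do I know instant clones are being used? section below shows how to verify the clone on the ONTAP side.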

Option #2: Glance configuration where images are owned by each project / tenant

In this configuration, we’re going to set up Glance so that images are owned by each individual project. The configuration is similar to the above, except that Glance will not use defined Cinder backends. As a result, images uploaded to Glance by users will be associated with that user’s project.

Changes to cinder.conf

As in option #1, to enable Cinder to create new volumes based on an Image-Volume, add the following lines to cinder.conf:

[DEFAULT]
allowed_direct_url_schemes = cinder
image_upload_use_cinder_backend = True

Changes to glance-api.conf

[DEFAULT]
show_multiple_locations = true
show_image_direct_url = true
bind_host=127.0.0.1

  • show_multiple_locations and show_image_direct_url are enabled for illustrative purposes, to show how an image is located within the Glance store. In a production environment, having these options enabled may represent a security concern

[glance_store]
filesystem_store_datadir = /opt/stack/data/glance/images/
stores = file,cinder
default_store = cinder
cinder_store_auth_address = http://10.216.27.36/identity
cinder_catalog_info = volumev3::publicURL

[os_glance_tasks_store]
filesystem_store_datadir = /opt/stack/data/glance/tasks_work_dir

[os_glance_staging_store]
filesystem_store_datadir = /opt/stack/data/glance/staging

  • filesystem_store_datadir is used for a file (local) store backend. Even though we will be using a cinder storage backend, this option still must be defined in glance-api.conf.
  • [os_glance_tasks_store] and [os_glance_staging_store] are internal “reserved” Glance stores used for staging / temporary locations for processing image data. The directories specified in filesystem_store_datadir should actually exist.
  • stores lists the enabled Glance stores. Note that file is included because the staging store is a file store.
  • default_store defines that a Cinder store will be used as the default backend
  • cinder_store_auth_address defines the Keystone endpoint for Glance to use to authenticate. This can vary per OpenStack distribution. For details on correctly identifying the URL to use, see How do I find my Keystone Identity URL? below
  • cinder_catalog_info tells Glance how to find the Cinder endpoint in the Keystone service catalog. Its format is <service_type>:<service_name>:<endpoint_type>, so volumev3::publicURL means “the public endpoint of the volumev3 service, regardless of its name”
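If you want to sanity-check what that catalog lookup will match, you can filter the endpoint list by service and interface (both are standard openstack CLI flags):

$ openstack endpoint list --service volumev3 --interface public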

Testing by uploading an image

After adding the above configuration and restarting the Glance and Cinder services, we can test by creating a new image:

$ openstack image create --disk-format raw --file /opt/stack/images/cirros-0.6.2-x86_64-disk.img cirros2
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | c8fc807773e5354afe61636071771906                     |
| container_format | bare                                                 |
| created_at       | 2025-08-22T16:28:30Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/1887e9a1-3975-4b58-a240-5add4d1d8919/file |
| id               | 1887e9a1-3975-4b58-a240-5add4d1d8919                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros2                                              |
| owner            | 79a5395d14cc44a793cc43ae34712dd3                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 21430272                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2025-08-22T16:28:54Z                                 |
| virtual_size     | 117440512                                            |
| visibility       | shared                                               |
+------------------+------------------------------------------------------+

In the above output, the image is owned by 79a5395d14cc44a793cc43ae34712dd3.  That id corresponds to the admin project, as I used the admin user in the admin project to create the image:

$ openstack project list
+----------------------------------+--------------------+
| ID                               | Name               |
+----------------------------------+--------------------+
| 64b6cddc71f84fc3bc89107b4c809ef0 | demo               |
| 76e9c16d594146fd96857d3e013b11b6 | alt_demo           |
| 7774cb1a7a844c3ab74251d2af4b43a7 | service            |
| 79a5395d14cc44a793cc43ae34712dd3 | admin              |
| c4b549998de74f4487c3b704d9cbf464 | invisible_to_admin |
+----------------------------------+--------------------+

$ cinder list --all-tenants
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-----------------+----------+-------------+
| ID                                   | Tenant ID                        | Status    | Name                                       | Size | Volume Type     | Bootable | Attached to |
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-----------------+----------+-------------+
| ac4761e9-bd5e-4e49-b001-1b7eaeaf417a | 79a5395d14cc44a793cc43ae34712dd3 | available | image-1887e9a1-3975-4b58-a240-5add4d1d8919 | 1    | __DEFAULT__     | false    |             |
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-----------------+----------+-------------+

The key takeaways from the above output are:

  • The image is owned by the project / tenant I was using at the time of image creation
  • The backing Image-Volume is associated with volume type __DEFAULT__.
    • This is because 1) I have no Cinder backend defined in glance-api.conf and 2) I have not defined a default volume type (via default_volume_type) in cinder.conf
    • In other words, Glance is not sending a volume type to Cinder, so Cinder simply selects the __DEFAULT__ volume type
    • You can define a different default_volume_type in cinder.conf if you wish (see the snippet after this list)
  • Similar to option #1, managing which volume_type gets used would require additional planning / testing.
    • With this configuration, the image-volume would be created on the project user’s Cinder backend
    • When the user goes to create a new cinder volume, as long as the volume type selected uses the same backend as the image-volume, a clone will occur
    • volume_type of the cinder volume and image-volume don’t necessarily have to match. As long as both the new Cinder volume and image-volume use the same backend, a clone should result (because both source and destination are using the same Cinder backend).
  • Now, a Cinder backend may have multiple storage pools, which adds to the planning you’d have to do:
    • In ONTAP, instant clones (or to be more accurate sis-clones), occur as long as the source and destination are within the same flexvol
    • If you’re copying from one flexvol to another flexvol (or storage pool to storage pool), you lose the ability to sis-clone
    • Part of your planning might include “seeding” each flexvol with your images. For example, using glance image-upload with the --store <store> option
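For example, pointing Cinder’s default at the ONTAP volume type from option #1 would be a one-line cinder.conf change (the type name here is just the one from my lab):

[DEFAULT]
default_volume_type = ontap-iscsi-916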

Some possible gotchas…

How do I know instant clones are being used?

In order for instant clones to occur, the following criteria must be met:

  • The Image-Volume is present in the same flexvol as the cinder volume being created
  • The image was uploaded to Glance using the raw format
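You can quickly confirm how an existing image was uploaded by checking its disk_format (using the cirros image from earlier as an example):

$ openstack image show cirros -c disk_format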

If the image was uploaded to Glance in a format such as qcow2, Cinder must first download the image and convert it to raw via a temporary image. For example:

Aug 25 11:29:21 openstack-epoxy-devstack cinder-volume[326427]: DEBUG cinder.image.image_utils [None req-e418c80c-8975-4b03-900f-0cb444265678 demo None] Image conversion details: src /opt/stack/data/cinder/conversion/image_fetch_3e83cca4-356a-40e8-9561-d98282218c61_6rsfc3o_openstack-epoxy-devstack@ontap-iscsi-916, size 112.00 MB, duration 1.65 sec, destination /dev/sdb {{(pid=326427) _convert_image /opt/stack/cinder/cinder/image/image_utils.py:501}}

Aug 25 11:29:21 openstack-epoxy-devstack cinder-volume[326427]: INFO cinder.image.image_utils [None req-e418c80c-8975-4b03-900f-0cb444265678 demo None] Converted 112.00 MB image at 68.08 MB/s

Aug 25 11:29:22 openstack-epoxy-devstack cinder-volume[326427]: DEBUG cinder.volume.volume_utils [None req-e418c80c-8975-4b03-900f-0cb444265678 demo None] Downloaded image 3e83cca4-356a-40e8-9561-d98282218c61 (('cinder://5e1ba2dc-0af7-45fb-984b-0629f07eb74d', [{'url': 'cinder://5e1ba2dc-0af7-45fb-984b-0629f07eb74d', 'metadata': {}}])) to volume 95a9ebbc-fe81-46b3-90f9-975dad0d431b successfully. {{(pid=326427) copy_image_to_volume /opt/stack/cinder/cinder/volume/volume_utils.py:1234}}

Aug 25 11:29:22 openstack-epoxy-devstack cinder-volume[326427]: DEBUG cinder.image.image_utils [None req-e418c80c-8975-4b03-900f-0cb444265678 demo None] Temporary image 3e83cca4-356a-40e8-9561-d98282218c61 for user 9e376fcc519949b984862e08f07e60b2 is deleted. {{(pid=326427) fetch /opt/stack/cinder/cinder/image/image_utils.py:1403}}

To avoid this, use the raw image format.
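If your source image is qcow2, one option is to convert it to raw once, up front, before uploading it to Glance (file names here are illustrative):

$ qemu-img convert -f qcow2 -O raw cirros-0.6.2-x86_64-disk.img cirros-0.6.2-x86_64-disk.raw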

When creating a new cinder volume, you can confirm that the volume is created as a clone by running a lun show command in ONTAP:

ontap916::*> lun show -vserver ontap_916_svm -path /vol/openstack_iscsi_vol_01/volume-41c20942-3492-484e-9fb7-b3502c3837a9 -fields is-clone
vserver       path                                                                    is-clone
------------- ----------------------------------------------------------------------- --------
ontap_916_svm /vol/openstack_iscsi_vol_01/volume-41c20942-3492-484e-9fb7-b3502c3837a9 true

How do I find my Keystone Identity URL?

If you bounce between different distributions of OpenStack like I do, it may not be immediately obvious what URL to use for the Keystone Identity service. You need this when configuring the cinder_store_auth_address parameter.

  • To find the Keystone Identity URL to use, you can look for a www_authenticate_uri option in glance-api.conf
  • Alternatively, you can find the Keystone endpoint via openstack endpoint list
    • Using the Keystone URL returned in the openstack endpoint list output, you can use the curl command to query the endpoint for available resources, such as the Identity service versions. For example:

openstack@jcontroller01:~$ curl -kv http://192.168.0.36:5000
*   Trying 192.168.0.36:5000...
* TCP_NODELAY set
* Connected to 192.168.0.36 (192.168.0.36) port 5000 (#0)
> GET / HTTP/1.1
> Host: 192.168.0.36:5000
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 300 MULTIPLE CHOICES
< date: Fri, 22 Aug 2025 13:22:39 GMT
< server: Apache
< content-length: 268
< location: http://192.168.0.36:5000/v3/
< vary: X-Auth-Token
< x-openstack-request-id: req-5905169b-3d22-411a-86bd-3c3e14c6626d
< content-type: application/json
<
* Connection #0 to host 192.168.0.36 left intact
{"versions": {"values": [{"id": "v3.14", "status": "stable", "updated": "2020-04-07T00:00:00Z", "links": [{"rel": "self", "href": "http://192.168.0.36:5000/v3/"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}]}]}}