
[CLOUDSTACK-10240] ACS cannot migrate a local volume to shared storage #2425

Merged
rafaelweingartner merged 3 commits into apache:master from rafaelweingartner:CLOUDSTACK-10240 on Mar 7, 2018
Conversation

@rafaelweingartner (Member) commented Jan 23, 2018

CloudStack logically restricts migrating volumes from local storage to shared storage and vice versa. This restriction is purely logical and can be lifted for XenServer deployments. Therefore, we enable migration of volumes between local and shared storage on XenServer, independently of their service offering. This works as an override mechanism for the disk offering used by volumes. If administrators want to migrate local volumes to shared storage, they should be able to do so (the hypervisor already allows it), and the same applies the other way around.

Extra information for reviewers:
This PR introduces an overriding mechanism for volume placement. We provide a way for root administrators to override service/disk offering definitions by allocating volumes in storage pools where they could not have been allocated before if we followed their (the volumes') service/disk offerings.
Therefore, it does not change the type of the disk/service offering when migrating volumes between local and shared pools. We simply migrate them as requested by the root admin. Furthermore, volumes can only migrate to "suitable" storage pools, meaning storage pools that have the same tags as the disk/service offering.

In summary, we override placement with respect to the location (shared/local) of storage pools. However, we still consider storage tags to decide which storage pools are suitable.
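For reviewers skimming the logic, here is a minimal sketch of the suitability rule described above. The class and method names are hypothetical, not the actual CloudStack code: the point is that a candidate pool's scope (local vs. shared) no longer disqualifies it, while its storage tags must still satisfy the volume's disk/service offering tags.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class StoragePoolSuitability {

    /** Returns true if the candidate pool carries every tag required by the offering. */
    static boolean tagsMatch(String offeringTags, Set<String> poolTags) {
        if (offeringTags == null || offeringTags.isEmpty()) {
            return true; // an untagged offering accepts any pool
        }
        Set<String> required = new HashSet<>(Arrays.asList(offeringTags.split(",")));
        return poolTags.containsAll(required);
    }

    static boolean isSuitableForMigration(String offeringTags, Set<String> poolTags) {
        // Before this change, the check would also have required the pool's scope
        // (local vs. shared) to match the offering; with the override, scope alone
        // no longer blocks the migration and only the tag comparison decides.
        return tagsMatch(offeringTags, poolTags);
    }

    public static void main(String[] args) {
        Set<String> sharedPoolTags = new HashSet<>(Arrays.asList("ssd", "nfs"));
        // A volume whose disk offering is tagged "ssd" may now migrate to this
        // shared pool even if the offering specifies local storage.
        System.out.println(isSuitableForMigration("ssd", sharedPoolTags));           // true
        System.out.println(isSuitableForMigration("ssd,encrypted", sharedPoolTags)); // false
    }
}
```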
