feature/CSTACKEX-46: Create, Delete iSCSI type CloudStack volumes; Enter, Cancel Maintenance mode #27
Conversation
…base in agent code
…ragePool create and delete
if (volumeVO != null) {
    volumeVO.setPath(null);
    if (cloudStackVolume.getLun().getUuid() != null) {
        volumeVO.setFolder(cloudStackVolume.getLun().getUuid());
Add a comment on the need for this
if (mapResp != null && mapResp.containsKey(Constants.LOGICAL_UNIT_NUMBER)) {
    String lunNumber = mapResp.get(Constants.LOGICAL_UNIT_NUMBER);
    s_logger.info("ensureLunMapped: Existing LunMap found for LUN [{}] in igroup [{}] with LUN number [{}]", lunName, accessGroupName, lunNumber);
    return lunNumber;
Add a TODO comment to keep an eye on the possibility of duplicate LUN numbers for LUNs.
VolumeVO volumeVO = volumeDao.findById(volInfo.getId());
if (volumeVO != null) {
    String iscsiPath = Constants.SLASH + storagePool.getPath() + Constants.SLASH + lunNumber;
    volumeVO.set_iScsiName(iscsiPath);
Remove any redundant code
    return false;
}
// Set the aggregates which are according to the storage requirements
for (Aggregate aggr : aggrs) {
Check for rebase issue here
Map<String, Object> queryParams = Map.of("allow_delete_while_mapped", "true");
try {
    sanFeignClient.deleteLun(authHeader, cloudstackVolume.getLun().getUuid(), queryParams);
} catch (Exception ex) {
Check whether FeignException should be caught here as well.
String authHeader = Utility.generateAuthHeader(storage.getUsername(), storage.getPassword());
sanFeignClient.deleteLunMap(authHeader, lunUUID, igroupUUID);
s_logger.info("disableLogicalAccess: LunMap deleted successfully.");
} catch (Exception e) {
Use FeignException.
    return null;
}

/**
I think it would be better to skip these comments on the function, as some of the information relates to our plugin's internal design.
VolumeInfo volInfo = (VolumeInfo) dataObject;

// Create the backend storage object (LUN for iSCSI, no-op for NFS)
CloudStackVolume created = createCloudStackVolume(dataStore, volInfo, details);
A more descriptive name would be meaningful here.
}

// Determine scope ID based on storage pool scope (cluster or zone level igroup)
long scopeId = (storagePool.getScope() == ScopeType.CLUSTER)
@rajiv-jain-netapp Can we discuss including both the scope ID and the scope in the igroup name? It would be better to have only the storage pool name referenced in the igroup name. Including the scope or any other entity makes the igroup depend on that entity's lifecycle.
@piyush5netapp let's discuss in tomorrow's scrum.
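To illustrate the naming suggestion above, here is a minimal sketch of an igroup name derived only from the storage pool name. The helper class, method name, and prefix are assumptions for illustration, not the plugin's actual code:

```java
// Hypothetical sketch: derive the igroup name from the storage pool name only,
// so the igroup's lifecycle does not depend on scope or any other entity.
final class IgroupNaming {
    private static final String IGROUP_PREFIX = "cs-igroup-"; // assumed prefix

    static String buildIgroupName(String storagePoolName) {
        if (storagePoolName == null || storagePoolName.isBlank()) {
            throw new IllegalArgumentException("storage pool name must not be empty");
        }
        // Normalize to a conservative character set for backend object names
        return IGROUP_PREFIX + storagePoolName.toLowerCase().replaceAll("[^a-z0-9_-]", "-");
    }
}
```

With this shape, renaming or re-scoping a cluster/zone would not invalidate existing igroup names, since only the pool name feeds into them.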
        return 0L;
    }
} catch (Exception ignore) {
    // If FS check fails for any reason, fall back to blockdev call
}
volumeVO.setPath(iscsiPath);
s_logger.info("createAsync: Volume [{}] iSCSI path set to {}", volumeVO.getId(), iscsiPath);

} else if (ProtocolType.NFS3.name().equalsIgnoreCase(details.get(Constants.PROTOCOL))) {
I am not seeing anything significant in the NFS block. I would recommend not having an if-else on protocol, at least for this situation; we should handle it via a concrete strategy implementation.
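As a rough sketch of the strategy suggestion above (the interface, class names, and path logic are hypothetical, not the plugin's actual types), the protocol-specific behaviour could live in concrete implementations selected once, instead of an if-else at every call site:

```java
import java.util.Map;

// Hypothetical sketch: one strategy per protocol instead of protocol if-else.
interface VolumePathStrategy {
    String buildVolumePath(String poolPath, String lunNumber);
}

class IscsiPathStrategy implements VolumePathStrategy {
    public String buildVolumePath(String poolPath, String lunNumber) {
        return "/" + poolPath + "/" + lunNumber; // iSCSI path includes the LUN number
    }
}

class NfsPathStrategy implements VolumePathStrategy {
    public String buildVolumePath(String poolPath, String lunNumber) {
        return poolPath; // NFS needs no LUN-specific path; the hypervisor manages files
    }
}

// Strategies resolved by protocol name, chosen once per storage pool
class StrategyRegistry {
    private static final Map<String, VolumePathStrategy> STRATEGIES = Map.of(
            "ISCSI", new IscsiPathStrategy(),
            "NFS3", new NfsPathStrategy());

    static VolumePathStrategy forProtocol(String protocol) {
        VolumePathStrategy s = STRATEGIES.get(protocol.toUpperCase());
        if (s == null) {
            throw new IllegalArgumentException("Unsupported protocol: " + protocol);
        }
        return s;
    }
}
```

Adding a new protocol then means adding one class and one registry entry, with no changes to the callers.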
...s/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/IscsiAdmStorageAdaptor.java
// ManagedNFS qcow2 backing file deletion handled by KVM host/libvirt; nothing to do via ONTAP REST.
s_logger.info("deleteAsync: ManagedNFS volume {} no-op ONTAP deletion", data.getId());
// NFS file deletion is handled by the hypervisor; no ONTAP REST call needed
s_logger.info("deleteAsync: ManagedNFS volume {} - file deletion handled by hypervisor", data.getId());
    VirtualMachine.State.Destroyed,
    VirtualMachine.State.Expunging,
    VirtualMachine.State.Error).contains(vm.getState())) {
    s_logger.debug("revokeAccess: Volume [{}] is still attached to VM [{}] in state [{}], skipping revokeAccess",
I guess this logger could be made error instead of debug, to highlight it in the logs.
This can be a warning at most, as it is CloudStack behaviour to make the revokeAccess call; it is us who don't want to perform it when the VM's state is one of these.
@JsonProperty("protocol")
private ProtocolEnum protocol = ProtocolEnum.mixed;
private ProtocolEnum protocol = null;
This is a good catch. Since we have a 1-1 mapping between igroup and storage pool, we should not keep the protocol type as mixed; let's keep it as per the storage pool protocol.
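A hedged sketch of what "per the storage pool protocol" could look like. The enum values mirror the snippet above; the mapper class and its single iSCSI mapping are assumptions for illustration:

```java
// Hypothetical sketch: set the igroup protocol from the storage pool's protocol
// instead of defaulting to ProtocolEnum.mixed.
enum ProtocolEnum { iscsi, fcp, mixed }

final class IgroupProtocolMapper {
    static ProtocolEnum fromPoolProtocol(String poolProtocol) {
        if ("ISCSI".equalsIgnoreCase(poolProtocol)) {
            return ProtocolEnum.iscsi;
        }
        // Fail loudly rather than silently falling back to mixed
        throw new IllegalArgumentException("No igroup protocol mapping for: " + poolProtocol);
    }
}
```

Failing fast on an unmapped protocol keeps the 1-1 igroup-to-pool invariant visible instead of hiding it behind a mixed default.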
@JsonInclude(JsonInclude.Include.NON_NULL)
public static class Links { }
There is no code change in this file. Can you exclude this file?
CloudStackVolume getCloudStackVolume(CloudStackVolume cloudstackVolume) {
    //TODO
    return null;
public void copyCloudStackVolume(CloudStackVolume cloudstackVolume) {
Which workflow is this code added for?
This has a LUN clone; it could be used for snapshots.
if (callback == null) {
    throw new InvalidParameterValueException("createAsync: callback should not be null");
}
Add a null-condition check for dataObject, and also add an error log.
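The requested guard could be factored as a small helper, sketched below. In the plugin this would use s_logger.error and InvalidParameterValueException as in the surrounding snippets; plain JDK types are used here only to keep the sketch self-contained, and the helper name is an assumption:

```java
// Hypothetical sketch: log an error, then throw, for each required createAsync input.
final class CreateAsyncGuard {
    static <T> T requireNonNull(T value, String name) {
        if (value == null) {
            // Stand-ins for s_logger.error(...) and InvalidParameterValueException
            System.err.println("createAsync: " + name + " should not be null");
            throw new IllegalArgumentException("createAsync: " + name + " should not be null");
        }
        return value;
    }
}

// Usage at the top of createAsync (names from the surrounding diff):
//   CreateAsyncGuard.requireNonNull(callback, "callback");
//   CreateAsyncGuard.requireNonNull(dataObject, "dataObject");
```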
VolumeInfo volInfo = (VolumeInfo) dataObject;

// Create the backend storage object (LUN for iSCSI, no-op for NFS)
CloudStackVolume created = createCloudStackVolume(dataStore, volInfo, details);
public String getName() {
    s_logger.trace("OntapPrimaryDatastoreProvider: getName: Called");
    return "ONTAP Primary Datastore Provider";
    return "ONTAP";
Please move this string into a constant and use it here.
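A minimal sketch of the suggested constant. The constants class shown here is a stand-in (the plugin already has its own Constants class, and the field name is an assumption):

```java
// Hypothetical sketch: hoist the provider name into a shared constant
// so getName() and any registration code reference one definition.
final class Constants {
    static final String PROVIDER_NAME = "ONTAP";

    private Constants() { } // no instances
}

// Usage at the call site:
//   public String getName() { return Constants.PROVIDER_NAME; }
```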
Co-authored-by: Srivastava, Piyush <Piyush.Srivastava@netapp.com>
…ce recreate option (apache#12004)
Co-authored-by: Vishesh <8760112+vishesh92@users.noreply.github.com>
* api,server: allow configuring repetitive alerts
  Fixes apache#6613. Introduces support for configuring additional alert types that can be published repeatedly, beyond the default set. Operators can now use the dynamic configuration `alert.allowed.repetitive.types` to specify a comma-separated list of alert type names that should be allowed for repetitive publication.
* add tests
* fix
* test fix
* allow repetition for custom alerts
* remove refactoring
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
…hey already existed. (apache#12059)
…pache#12497) Bumps [org.apache.maven.plugins:maven-war-plugin](https://github.com/apache/maven-war-plugin) from 3.4.0 to 3.5.1.
- [Release notes](https://github.com/apache/maven-war-plugin/releases)
- [Commits](apache/maven-war-plugin@maven-war-plugin-3.4.0...maven-war-plugin-3.5.1)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* Replace maven-jgit-buildnumber-plugin with thread safe buildnumber-maven-plugin
* Fix mysql-connector-java warning
* Fix thread safe warning for properties-maven-plugin
* Fix mvn build - marvin warnings
* Update tools/marvin/README.md
Co-authored-by: dahn <daan.hoogland@gmail.com>
Signed-off-by: Viddya K <viddya.k@ibm.com>
Signed-off-by: Niyam Siwach <niyam@ibm.com>
Co-authored-by: root <root@c63716v1.fyre.ibm.com>
… registration (apache#12165)
* engine/schema: prepend algorithm to checksum during systemvm template registration
* Update utils/src/main/java/org/apache/cloudstack/utils/security/DigestHelper.java
* extension/proxmox: improve host vm power reporting
  Add `statuses` action in extensions to report VM power states. This PR introduces support for retrieving the power state of all VMs on a host directly from an extension using the new `statuses` action. When available, this provides a single aggregated response, reducing the need for multiple calls. If the extension does not implement `statuses`, the server will gracefully fall back to querying individual VMs using the existing `status` action. This helps with updating the host in CloudStack after out-of-band migrations for the VM.
* address review
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
…job polling method
This pull request has merge conflicts. Dear author, please fix the conflicts and sync your branch with the base branch.
Description
This PR has the following changes:
Types of changes
Feature/Enhancement Scale or Bug Severity
Feature/Enhancement Scale
Bug Severity
Screenshots (if appropriate):
How Has This Been Tested?
Passcode: n=AE=.C9
Passcode: $%j3+K4B
Observations:
Though we have increased retries to wait for LUN discovery, the additional volume attachment still times out, so it needs a manual retry.
When placed into Maintenance mode, the Instance gets Stopped in the first case while it remains Running in the second. We currently don't have any code in our plugin attributing to this behaviour; we need to understand why CloudStack behaves this way.
I've also tested force-deleting a storage pool while CS volumes were present, but haven't covered it in these recordings.