Problem Statement:
Maximo Application Suite installation is limited to a single deployment tool: Ansible. While Ansible is a popular choice for automating installation, many organizations have their own powerful cloud-native deployment tools, and they should have the ability to break free from the limitations of a single deployment tool such as Ansible.
Idea/Requirement:
1. Major OEMs will have their own cloud solutions/OpenShift platforms. The OpenShift cluster will be managed by their infrastructure/engineering team. The application team may not (or will not) have cluster admin access to the OpenShift cluster, since multiple applications/instances might be running in the same cluster. Providing cluster admin access to the OpenShift cluster creates the risk of tampering with other applications running in the same cluster. Hence, the prerequisite of having cluster admin access for the Ansible Playbook installation may not be a viable option. The installation procedure should therefore be split into tasks that must be executed by a cluster admin and tasks that can be executed by namespace admins. Segregation of duties and access restrictions are key internal controls in every organization.
2. Though Ansible is a robust automation and deployment engine, not every organization uses Ansible. In our case, we use Kustomize manifests with an ArgoCD-based deployment. The installation procedure should therefore not be restricted to a single deployment engine. Organizations should have the ability to extract the Kubernetes YAMLs and install MAS using their own deployment methodology.
3. The ability to install operators individually should be provided. As mentioned earlier, cluster admin access will not be available to the application team; it might have only namespace admin access. So, executing the installation of all the operators (ibm-common-services, ibm-sls, mas-dro, mas-core, mas-manage, cert-manager) in one stretch may not be possible. Hence, there should be an ability to break down the installation of the operators one by one.
4. Similarly, due to segregation of duties, even namespace admins might not have access to all the APIs provided by the operators or be able to create instances/workspaces. RBAC policies might need to be defined in order to gain access to the operator APIs and to create instances/services/workspaces. So, the installation procedure should also clearly define the RBAC policies, cluster role bindings, and certificate issuer policies (with issuer details) that might be needed for the namespace admins/service accounts that call, create, or update the operator APIs/instances/workspaces or generate certificates.
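As one possible shape for requirement 4, a namespace-scoped Role and RoleBinding could grant a deployment service account access to MAS operator APIs without wildcards. This is only a sketch: the namespace, service account name, and the API group/resource names (e.g. core.mas.ibm.com, suites, workspaces) are assumptions and must be verified against the CRDs actually installed by the operators.

```yaml
# Sketch only: API group and resource names are assumptions --
# verify against the CRDs installed in the target cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mas-workspace-editor
  namespace: mas-inst1-core            # hypothetical MAS namespace
rules:
  - apiGroups: ["core.mas.ibm.com"]    # assumed MAS API group
    resources: ["suites", "workspaces"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mas-workspace-editor-binding
  namespace: mas-inst1-core
subjects:
  - kind: ServiceAccount
    name: mas-deployer                 # hypothetical deployer service account
    namespace: mas-inst1-core
roleRef:
  kind: Role
  name: mas-workspace-editor
  apiGroup: rbac.authorization.k8s.io
```

Note that each rule enumerates resources and verbs explicitly; no wildcards are used, in line with the RBAC restrictions called out in this document.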
5. Not every organization uses automatic approval plans for operators. So, steps to manually approve/upgrade the operators (with other deployment engines/tools) should be available. When a primary operator is approved manually, its secondary (dependent) operators should also get approved automatically. In addition, the following requirements should be considered:
a. All Operator/CRD MUST be installed explicitly.
b. NO Operator/CRD MUST automatically install another Operator/CRD.
c. All Operators/CRDs MUST be pinned to a version, i.e., no automatic approval mode.
d. NO wildcards in Kubernetes RBAC are allowed.
e. NO custom OpenShift would be allowed.
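Requirements 5(a)-(c) map onto fields of the OLM Subscription resource: `installPlanApproval: Manual` disables automatic installs/upgrades, and `startingCSV` pins the operator to an exact version. A hedged sketch follows; the channel, catalog source, and CSV version shown are illustrative assumptions, not verified values.

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ibm-mas
  namespace: mas-inst1-core            # hypothetical namespace
spec:
  name: ibm-mas
  channel: "8.x"                       # illustrative channel name
  source: ibm-operator-catalog         # illustrative CatalogSource
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual          # no automatic approval mode
  startingCSV: ibm-mas.v8.10.0         # pin to an exact version (illustrative)
```

With manual approval, the resulting pending InstallPlan must then be approved explicitly, for example with `oc patch installplan <name> -n mas-inst1-core --type merge -p '{"spec":{"approved":true}}'`.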
6. The following restrictions should also be taken into consideration, as they are standard best practices in Kubernetes:
a. Ability to install MAS on clusters where basic security policies are turned on.
b. Singletons are typically not allowed in most OpenShift clusters.
c. Running as root is generally not allowed.
d. Running with anything other than the restricted SCC is not allowed.
e. Sometimes OpenShift clusters do not have a default storage class, so the storage class may have to be explicitly defined if storage is to be used.
f. YAML and Kustomize via ArgoCD is one GitOps model that can support deployment as an alternative to Ansible.
g. Nested operators need to be explicitly declared (in most OpenShift environments, operators do not have permission to install nested operators, per the least-privilege model).
h. Sometimes there are restrictions on using the built-in openshift-image-registry, so the ability to store container images in other registries/artifact repositories should be considered.
i. All containers must run as a non-root user.
j. Containers cannot run privileged.
k. Containers cannot mount host filesystems (no hostPath mounts).
l. Containers cannot attach to the host network.
m. All Kubernetes objects must define resource requests/limits.
n. All Kubernetes objects must define liveness and readiness probes.
o. All application logs must be written to STDOUT so they can be picked up by the platform logging stack.
p. Cross-mounting persistent volumes will not be permitted.
q. Container (pod) IPs are ephemeral and should not be considered static or a source of identity (static IPs are not supported).
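Several of the restrictions above (non-root, no privilege escalation, resource requests/limits, liveness/readiness probes, external image registry, no hostPath or host network) can be expressed together in a single pod spec. The sketch below is illustrative only; the workload name, image, registry, and probe endpoints are placeholder assumptions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                    # hypothetical workload
spec:
  replicas: 2                          # avoid singleton deployments
  selector:
    matchLabels: { app: example-app }
  template:
    metadata:
      labels: { app: example-app }
    spec:
      containers:
        - name: app
          # external registry, version-pinned tag (placeholder)
          image: registry.example.com/app:1.0.0
          securityContext:
            runAsNonRoot: true               # containers run as non-root
            allowPrivilegeEscalation: false  # no privileged containers
            capabilities: { drop: ["ALL"] }
          resources:                         # explicit requests/limits
            requests: { cpu: 100m, memory: 128Mi }
            limits: { cpu: 500m, memory: 256Mi }
          livenessProbe:
            httpGet: { path: /healthz, port: 8080 }  # placeholder endpoint
          readinessProbe:
            httpGet: { path: /readyz, port: 8080 }   # placeholder endpoint
      # application logs go to STDOUT for the platform logging stack;
      # no hostPath volumes and no hostNetwork are declared.
```

A spec in this shape should also be admissible under the restricted SCC, since it requests no host access and no elevated privileges.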