Currently, CloudAPI lacks the ability to reprovision a machine to the current or a newer version of its image, so the Triton CLI is missing this ability as well.
Is there a technical reason behind this, or has it just not been implemented yet?
Reprovisioning is useful when doing rolling upgrades without spinning up a new instance for the upgrade (a bit of a container anti-pattern, but useful in limited use cases).
This would be a "nice to have" feature in the long run, not a must - the more modern approach would be to spin up a new instance and fail over to it when ready.
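For illustration only, here is a minimal sketch of what the requested operation could look like if it were modeled on CloudAPI's existing machine-action endpoints (e.g. `?action=resize`). The `reprovision` action and its `image` parameter are hypothetical - this is exactly what CloudAPI does not expose today - and HTTP Signature auth is stubbed out:

```ts
// Hypothetical sketch: CloudAPI has no "reprovision" machine action today, and the
// "image" parameter is assumed purely for illustration. A real CloudAPI request also
// needs HTTP Signature auth, which is stubbed out here.

const CLOUDAPI_URL = "https://cloudapi.example.com"; // assumed endpoint
const LOGIN = "my-account";
const MACHINE_ID = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee";
const NEW_IMAGE_UUID = "11111111-2222-3333-4444-555555555555";

async function reprovisionMachine(): Promise<void> {
  // Modeled on existing machine actions such as ?action=resize or ?action=rename.
  const url = `${CLOUDAPI_URL}/${LOGIN}/machines/${MACHINE_ID}?action=reprovision`;

  const res = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // A real request would carry an HTTP Signature "Authorization" header here.
      Authorization: "Signature <stubbed>",
    },
    body: JSON.stringify({ image: NEW_IMAGE_UUID }),
  });

  if (!res.ok) {
    throw new Error(`reprovision request failed: ${res.status} ${res.statusText}`);
  }
}

reprovisionMachine().catch((err) => console.error(err));
```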
This could be very useful in mixed environments where Triton is not the only platform, or in high-security settings where firewall appliances are strictly managed by separate teams (this kind of "Enterprise" organisational IT setup is abundant).
For example, in these "Enterprise" environments, when a new container version is rolled out under the current provisioning model (spin up a new container), the network team has to step in and update IP addresses in firewall configs, load balancers, and so on, because integration hooks are often non-existent or too fragile. Lots of anti-patterns, but unfortunately this is the sad reality at many organisations stuck in the past.
The path to a "new world/container revolution" is long and hard, paved with many legacy issues - supporting a reprovision feature could actually solve many of these dilemmas.