- Start v4.0 cycle (GH-444):
  - Deprecated API functions are now removed
  - The former, deprecated way of handling Kubernetes deployments is no longer supported
- Can be used within Alien4Cloud 2.2.0
- Support the creation of OpenStack Compute instances using a bootable volume (GH-139)
- Update sample scripts using the REST API to run with a secured Alien4Cloud (GH-136)
- Add an example of a basic job implementation for infrastructures without a default job scheduler (GH-125)
- Running a custom workflow consumes 100% of one CPU thread (GH-140)
- Update TOSCA samples to use Alien4Cloud 2.2 types (GH-134)
- The mem_per_node Slurm option is limited to an integer number of GB (GH-446)
- Workflow ends with timeout after 4 hours and application is undeployed (GH-131)
- Emit a persistent event on deployment purge (GH-402)
- Implement an anti-affinity placement policy for OpenStack (GH-84)
- Monitor deployed services liveness (GH-104)
- Scale-down operation never ends, with compute instance final status 'Initial' (GH-117)
- Deployment update: support the ability to add/remove workflows with Yorc Premium version (GH-112)
- Yorc support of Kubernetes PersistentVolumeClaim (GH-209)
- Application undeployment seen as in progress until a 30-minute timeout occurs (GH-110)
- Upgrade to Alien4Cloud 2.1.1
- Add SSL configuration parameters to connect to a secure Yorc Server (GH-82)
- Publish value change event for instance attributes (GH-222)
- Slurm user credentials can be defined as Slurm deployment topology properties, as an alternative to Yorc configuration properties (GH-281)
- Deploying applications simultaneously can fail with an invalid zip error (GH-45)
- Uninstall workflow is not correct for a Topology involving a BlockStorage node (GH-90)
- A Yorc failure at undeployment leaves an application unpurged on the Yorc server while it appears undeployed in Alien4Cloud (GH-95)
- Can't connect to Yorc in secure mode (GH-81)
- Deployment status inconsistency when restarting Alien4Cloud while an application finishes deploying (GH-77)
- Vision sample topology upload fails on component version issue (GH-78)
- Technical update to use Alien4Cloud 2.1.0 final version
- Updated Slurm and Kubernetes types to their final versions (1.1.0 and 2.0.0 respectively)
- Support Jobs lifecycle enhancements (new operations: submit, run, cancel) (GH-196)
- Generate Alien 2.1-compatible events (GH-148)
- Even with a wrong Yorc URL in the orchestrator configuration, it displays "connected" when enabled (GH-72)
- Take advantage of Alien4Cloud meta-properties to specify a namespace in which to deploy Kubernetes resources (GH-76)
- Enable scaling of Kubernetes deployments (GH-77)
- Node Instance attributes are only resolved when Node state is "started" (GH-59)
- Support GCE block storages (GH-82)
- Upgrade to Alien4Cloud 2.1 (GH-50)
- Support GCE public IPs (GH-82)
- Make the run step of a Job execution asynchronous so as not to block a worker for the duration of the job (GH-85)
- When an artifact references a folder, its content is not included in the resulting CSAR sent to Yorc (GH-43)
- When an orchestrator has been disabled, the Yorc A4C plugin still tries to listen to log events (GH-34)
- On TOSCA types generation, do not generate an artifact if its mandatory file parameter is empty (GH-15)
- Support of application secrets in the Yorc engine makes it usable within Alien4Cloud (ystia/yorc#134)
Yorc 3.0.0 is the first major version since we open-sourced the project formerly known as Janus. Previous versions have been made available on GitHub.
We are still moving some of our tooling, such as road maps and backlogs, to publicly available tools. The idea is to make project management transparent and to open Yorc to external contributions.
Alien4Cloud recently published a fantastic major release with new features that Yorc leverages to deliver a great orchestration solution.
Among many features, the ones we will focus on below are:
- UI redesign: Alien4Cloud 2.0.0 includes various UI changes to make it more consistent and easier to use.
- Topology modifiers: Alien4Cloud 2.0.0 allows defining modifiers that can be executed at various phases prior to deployment. These modifiers transform a given TOSCA topology.
We are really excited to announce our first support of Google Cloud Platform.
Yorc now natively supports Google Compute Engine to create compute instances on demand.
Yorc 3.0.0 supports a new infrastructure that we called "Hosts Pool". It allows registering generic hosts in Yorc and letting Yorc allocate them for deployments. These hosts can be anything (VMs, physical machines, containers...) as long as we can SSH into them for provisioning. Yorc exposes a REST API and a CLI to manage the hosts pool, making it easy to integrate with other tools.
For more information about the Hosts Pool infrastructure, check out our dedicated documentation.
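As a rough illustration, here is a minimal Go sketch of registering a host in the pool through the REST API. The endpoint path, port, host name, and payload layout are assumptions made for this example; refer to the Yorc REST API documentation for the exact schema.

```go
// Minimal sketch: registering a host in the Yorc hosts pool over REST.
// Endpoint and payload are illustrative assumptions, not a verified API contract.
package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Hypothetical host definition: SSH connection details plus labels
	// that allocation filters could later match against.
	payload := []byte(`{
	  "connection": {"host": "10.0.0.42", "user": "ubuntu", "private_key": "~/.ssh/yorc.pem"},
	  "labels": [{"name": "os.type", "value": "linux"}]
	}`)

	// PUT the host definition to a Yorc server (8800 is Yorc's default port).
	req, err := http.NewRequest(http.MethodPut,
		"http://localhost:8800/hosts_pool/host42", bytes.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```

The CLI covers the same operations, so the pool can be managed either interactively or from automation tools.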
We made some improvements with our Slurm integration:
- We now support Slurm "features" (which are basically tags on nodes) and "constraints" syntax to allocate nodes. Examples here.
- Support of srun and sbatch commands (see Jobs scheduling below)
In Yorc 2 we made a first experimental integration with Kubernetes. That support and its associated TOSCA types are deprecated in Yorc 3.0. Instead, we switched to new TOSCA types defined jointly with Alien4Cloud.
This new integration allows building complex Kubernetes topologies.
Alien4Cloud has a great feature called "Services". It allows either defining part of an application to be exposed as a service so that it can be consumed by other applications, or registering an external service in Alien4Cloud to be exposed and consumed by applications.
This feature enables new use cases such as cross-infrastructure deployments and shared services, among many others.
We are very excited to support it!
Yet another super interesting feature! Until now, TOSCA components handled by Yorc were designed to be hosted on a compute (whatever it was), meaning that a component's life-cycle scripts were executed on the provisioned compute. This feature allows designing components that are not necessarily hosted on a compute; in that case, their life-cycle scripts are executed on Yorc's host.
This opens a wide range of new use cases. You can, for instance, implement new compute types in pure TOSCA by calling cloud providers' CLI tools, or interact with external services.
Icing on the cake: for security reasons, these executions are by default sandboxed in containers to protect the host from mistakes and malicious usage.
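To illustrate the sandboxing idea, here is a toy Go sketch that runs a component's life-cycle script inside a throwaway container rather than directly on the orchestrator's host. The image, mount path, and script name are placeholders, not Yorc's actual implementation.

```go
// Illustrative sketch only: executing an operation script in a disposable
// container so mistakes and malicious code cannot touch the host directly.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run a hypothetical "create" life-cycle script inside a container
	// that is removed as soon as the operation finishes (--rm).
	cmd := exec.Command("docker", "run", "--rm",
		"-v", "/tmp/scripts:/scripts:ro", // mount operation scripts read-only
		"alpine:3", "sh", "/scripts/create.sh")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("operation failed:", err)
	}
}
```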
This release brings a tech preview of jobs scheduling support. It allows designing workloads made of Jobs that can interact with each other and with other "standard" TOSCA components within an application. We worked hard together with the Alien4Cloud team to extend TOSCA to support Jobs scheduling.
In this release we mainly focused on the integration with Slurm to support this feature (but we are also working on Kubernetes for the next release 😄). Below are the newly supported TOSCA types and implementations (a minimal sketch of the underlying srun invocation follows the list):
- SlurmJobs: will lead to issuing an srun command with a given executable file.
- SlurmBatch: will lead to issuing an sbatch command with a given batch file and associated executables.
- Singularity integration: allows executing a Singularity container instead of an executable file.
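To make the mapping concrete, here is an illustrative Go sketch of how a SlurmJobs-style operation could translate into an srun invocation. Yorc actually drives Slurm remotely over SSH on the target cluster; this simplified version just shells out locally, and the job parameters are made-up placeholders.

```go
// Illustrative sketch: a SlurmJobs-like operation reduced to its essence,
// an srun command built from job parameters. Not Yorc's actual code path.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical job parameters that would come from TOSCA properties.
	nodes := "2"
	execFile := "./my_job.sh"

	// The operation boils down to something like: srun --nodes=2 ./my_job.sh
	cmd := exec.Command("srun", "--nodes="+nodes, execFile)
	out, err := cmd.CombinedOutput()
	fmt.Printf("output: %s\n", out)
	if err != nil {
		fmt.Println("srun failed:", err)
	}
}
```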
Alien4Cloud and Yorc can now mutually authenticate themselves with TLS certificates.
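For readers unfamiliar with mutual TLS, the sketch below shows the general client-side pattern in Go: the client presents its own certificate and verifies the server against a trusted CA. The file names and URL are placeholders, not actual Yorc or Alien4Cloud configuration values.

```go
// Minimal mutual-TLS client sketch: both sides authenticate each other.
// Certificate paths and the server URL are illustrative placeholders.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// Trust the CA that signed the server's certificate.
	caCert, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caCert)

	// Client certificate/key pair presented to authenticate to the server.
	clientCert, err := tls.LoadX509KeyPair("client.pem", "client-key.pem")
	if err != nil {
		log.Fatal(err)
	}

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{
			RootCAs:      caPool,
			Certificates: []tls.Certificate{clientCert},
		},
	}}
	resp, err := client.Get("https://yorc.example.com:8800/deployments")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("status:", resp.Status)
}
```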
We constantly try to improve the feedback returned to our users about runtime execution. In this release we are publishing logs with more context about the node/instance/operation/interface to which the log relates.
Yorc 3.0 lays the foundations for applicative monitoring; it allows monitoring compute liveness at an interval defined by the user. When a compute goes down or comes back up, we use our events API to notify the user and Alien4Cloud, so the application can be monitored visually within the runtime view.
Our monitoring implementation was designed to be a fault-tolerant service.
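To give an idea of the mechanism, here is a toy Go sketch of a liveness-polling loop: dial a compute's SSH port at a user-defined interval and report up/down transitions. Yorc's real implementation is distributed and fault-tolerant; this only illustrates the basic polling idea, and the address and interval are made up.

```go
// Toy liveness-check loop: periodically probe a compute and report
// state transitions. Purely illustrative of the monitoring concept.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	target := "10.0.0.42:22"     // hypothetical compute's SSH endpoint
	interval := 30 * time.Second // user-defined check interval
	wasUp := true

	for {
		conn, err := net.DialTimeout("tcp", target, 5*time.Second)
		up := err == nil
		if up {
			conn.Close()
		}
		if up != wasUp {
			// Yorc would publish this transition on its events API,
			// letting Alien4Cloud update the runtime view.
			fmt.Printf("%s is now up=%v\n", target, up)
			wasUp = up
		}
		time.Sleep(interval)
	}
}
```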