diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index 4cd8b914..b7a2a784 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2023-12-07T11:31:38","documenter_version":"1.2.1"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2023-12-08T10:57:42","documenter_version":"1.2.1"}}
\ No newline at end of file
diff --git a/dev/artifact/index.html b/dev/artifact/index.html
index f7414c42..c61d91a2 100644
--- a/dev/artifact/index.html
+++ b/dev/artifact/index.html
@@ -1,2 +1,2 @@
The ExaData artifact contains test cases relevant to the Exascale Computing Project. It is built from the git repository available at ExaData. Apart from the standard MATPOWER files, it additionally contains demand scenarios and contingencies used in multiperiod security-constrained optimal power flow settings.
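For instance, here is a minimal sketch of pointing ExaPF at one of the artifact's MATPOWER files. It assumes the active environment declares the ExaData artifact (as ExaPF's own Artifacts.toml does); the directory layout and case name are illustrative:

using ExaPF, LazyArtifacts
# Hypothetical path: artifact name, subdirectory, and case file are assumptions.
datafile = joinpath(artifact"ExaData", "ExaData", "case9.m")
network = ExaPF.PowerSystem.PowerNetwork(datafile)   # parse the MATPOWER file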
ExaPF.jl is a package to solve the power flow problem on upcoming exascale architectures. The code has been designed to be:
Portable: Targeting exascale architectures implies a focus on graphics processing units (GPUs), as these systems draw most of their computational performance from GPUs rather than from classical CPUs.
Differentiable: All the expressions implemented in ExaPF are fully compatible with ForwardDiff.jl, and routines are provided to extract first- and second-order derivatives, so that power flow and optimal power flow problems can be solved efficiently.
ExaPF implements a vectorized modeler for power systems, which makes it possible to manipulate basic expressions. All expressions are fully differentiable: their first- and second-order derivatives can be extracted efficiently using automatic differentiation. In addition, we provide extensions that leverage the packages CUDA.jl, AMDGPU.jl (https://github.com/JuliaGPU/AMDGPU.jl), and KernelAbstractions.jl to make ExaPF portable across GPU architectures.
This research was supported by the Exascale Computing Project (17-SC-20-SC), a joint project of the U.S. Department of Energy’s Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications, and hardware technology, to support the nation’s exascale computing imperative.
+AutoDiff · ExaPF.jl
Abstract type for differentiable function $f(x)$. Any AbstractExpression implements two functions: a forward mode to evaluate $f(x)$, and an adjoint to evaluate $∂f(x)$.
Forward mode
The direct evaluation of the function $f$ is implemented as
(expr::AbstractExpression)(output::VT, stack::AbstractStack{VT}) where VT<:AbstractArray
with the input specified in stack and the results stored in the array output.
Reverse mode
The adjoint of the function is specified by the function adjoint!, with the signature:
adjoint!(expr::AbstractExpression, ∂stack::AbstractStack{VT}, stack::AbstractStack{VT}, v̄::VT) where VT<:AbstractArray
The variable stack stores the result of the direct evaluation, and is not modified in adjoint!. The results are stored inside the adjoint stack ∂stack.
adjoint!(expr::AbstractExpression, ∂stack::AbstractStack{VT}, stack::AbstractStack{VT}, v̄::VT) where VT<:AbstractArray
Compute the adjoint of the AbstractExpression expr with respect to the adjoint vector v̄. The results are stored in the adjoint stack ∂stack. The variable stack stores the result of a previous direct evaluation, and is not modified by adjoint!.
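A minimal sketch of this forward/adjoint pattern, assuming the polar formulation and its standard expressions; the case file and variable names are illustrative:

using ExaPF
polar = ExaPF.load_polar("case9.m")                       # polar formulation on the CPU
stack = ExaPF.NetworkStack(polar)                         # stores the input x
expr  = ExaPF.PowerFlowBalance(polar) ∘ ExaPF.PolarBasis(polar)
output = zeros(length(expr))
expr(output, stack)                                       # forward mode: output .= f(x)
∂stack = ExaPF.NetworkStack(polar)                        # adjoint stack
v̄ = ones(length(expr))
ExaPF.adjoint!(expr, ∂stack, stack, v̄)                    # reverse mode: accumulate ∂f(x)ᵀ v̄ in ∂stack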
Load a PolarForm instance from the specified benchmark library dir on the target device (default is CPU). ExaPF uses two different benchmark libraries: MATPOWER (dir=EXADATA) and PGLIB-OPF (dir=PGLIB).
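For example (hedged: the case name is illustrative, and the GPU variant assumes the corresponding GPU package is loaded):

using ExaPF
polar = ExaPF.load_polar("case9.m")                       # CPU, dir=EXADATA by default
# On an NVIDIA GPU, with CUDA.jl loaded (the device type may differ across ExaPF versions):
# polar_gpu = ExaPF.load_polar("case9.m", CUDABackend())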
Solve the power flow equations $g(x, u) = 0$ w.r.t. the state $x$, using the NewtonRaphson algorithm. The initial state $x$ is specified implicitly inside stack, with the mapping mapping associated with the polar formulation. The object stack is modified in place by the function; a usage sketch is given after the argument list.
The algorithm stops when a tolerance rtol or a maximum number of iterations maxiter is reached.
Arguments
polar::AbstractFormulation: formulation of the power flow equation
stack::NetworkStack: initial values in the network
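A hedged sketch of a full power flow solve; the Jacobian and mapping construction follow the usual ExaPF pattern, and the field and keyword names are assumptions:

using ExaPF
polar = ExaPF.load_polar("case9.m")
stack = ExaPF.NetworkStack(polar)
pflow = ExaPF.PowerFlowBalance(polar) ∘ ExaPF.PolarBasis(polar)
mapx  = ExaPF.mapping(polar, ExaPF.State())                 # indices of the state x inside stack
jac   = ExaPF.Jacobian(polar, pflow, mapx)                   # AD Jacobian of g w.r.t. x
conv  = ExaPF.nlsolve!(ExaPF.NewtonRaphson(tol=1e-8), jac, stack)
conv.has_converged || error("power flow did not converge")   # field name assumed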
Implement a subset of the power injection corresponding to $(p_{inj}^{pv}, p_{inj}^{pq}, q_{inj}^{pq})$. The function encodes the active balance equations at PV and PQ nodes, and the reactive balance equations at PQ nodes:
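In their standard form (a hedged reconstruction; the exact sign and load conventions used by ExaPF are assumptions), these residuals read
\[\begin{aligned}
p_i^{inj} + p_i^{load} - p_i^{gen} &= 0, \quad i \in \{PV, PQ\}, \\
q_i^{inj} + q_i^{load} - q_i^{gen} &= 0, \quad i \in \{PQ\},
\end{aligned}\]
with $p_i^{inj} = v_i \sum_j v_j (g_{ij} \cos(\theta_i - \theta_j) + b_{ij} \sin(\theta_i - \theta_j))$ and $q_i^{inj} = v_i \sum_j v_j (g_{ij} \sin(\theta_i - \theta_j) - b_{ij} \cos(\theta_i - \theta_j))$.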
Implement the bounds on voltage magnitudes not taken into account in the bound constraints. In the reduced space, this is associated with the voltage magnitudes at PQ nodes:
\[v_{pq}^♭ ≤ v_{pq} ≤ v_{pq}^♯ .\]
Dimension: n_pq
Complexity
1 copyto
Note
In the reduced space, the constraints on the voltage magnitudes at PV nodes $v_{pv}$ are taken into account when bounding the control $u$.
Constraints on the active power productions and on the reactive power productions that are not already taken into account in the bound constraints. In the reduced space, that amounts to
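the following box constraints (a hedged reconstruction: in the reduced space, the active power at the reference bus and the reactive powers at the generators depend on the state, but the exact subsets are assumptions):
\[p_{g,ref}^♭ ≤ p_{g,ref} ≤ p_{g,ref}^♯ , \qquad q_{g}^♭ ≤ q_{g} ≤ q_{g}^♯ .\]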
Implement expression concatenation. Takes as input a vector of expressions [expr1,...,exprN] and concatenates them into a single expression mexpr, such that mexpr(x) = [expr1(x); ...; exprN(x)].
Implement expression composition. Takes as input two expressions expr1 and expr2 and returns a composed expression cexpr such that cexpr(x) = expr2 ∘ expr1(x).
Notes
Currently, only PolarBasis is supported for expr1.
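A hedged illustration of both operations with standard ExaPF expressions; the MultiExpressions constructor and the composability of the concatenated expression are assumptions:

using ExaPF
polar = ExaPF.load_polar("case9.m")
stack = ExaPF.NetworkStack(polar)
basis = ExaPF.PolarBasis(polar)
costs = ExaPF.CostFunction(polar)
pflow = ExaPF.PowerFlowBalance(polar)
mexpr = ExaPF.MultiExpressions([costs, pflow]) ∘ basis    # mexpr(x) = [costs(x); pflow(x)]
output = zeros(length(mexpr))
mexpr(output, stack)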
ExaPF makes it possible to solve linear systems with either direct or iterative (indirect) linear algebra, both on the CPU and on the GPU. To solve a linear system $Ax = b$, ExaPF uses the function ldiv!.
solver::AbstractLinearSolver: linear solver to solve the system
x::AbstractVector: Solution
A::AbstractMatrix: Input matrix
y::AbstractVector: RHS
Solve the linear system $A x = y$ using the algorithm specified in solver. If A is not specified, the function directly uses the factorization stored inside solver.
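A hedged sketch of a direct solve through this interface; the DirectSolver constructor is assumed to factorize the matrix it is given, and the problem data is illustrative:

using ExaPF, SparseArrays, LinearAlgebra
const LS = ExaPF.LinearSolvers
A = sprand(100, 100, 0.05) + 30.0I          # illustrative well-conditioned sparse system
b = rand(100); x = zeros(100)
solver = LS.DirectSolver(A)                 # assumption: builds and stores a factorization of A
LS.ldiv!(solver, x, A, b)                   # x ≈ A \ b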
GMRES can be warm-started from an initial guess X0, where kwargs are the same keyword arguments as listed below.
Solve the linear system AX = B of size n with p right-hand sides using block-GMRES.
Input arguments
A: a linear operator that models a matrix of dimension n;
B: a matrix of size n × p.
Optional argument
X0: a matrix of size n × p that represents an initial guess of the solution X.
Keyword arguments
memory: if restart = true, the restarted version block-GMRES(k) is used with k = memory. If restart = false, the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. Additional storage will be allocated if the number of iterations exceeds memory;
M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
ldiv: define whether the preconditioners use ldiv! or mul!;
restart: restart the method after memory iterations;
reorthogonalization: reorthogonalize the new matrices of the block-Krylov basis against all previous matrices;
atol: absolute stopping tolerance based on the residual norm;
rtol: relative stopping tolerance based on the residual norm;
itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2 * div(n,p);
timemax: the time limit in seconds;
verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
history: collect additional statistics on the run such as residual norms.
Output arguments
X: a dense matrix of size n × p;
stats: statistics collected on the run in a BlockGmresStats.
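A hedged usage example of this block-GMRES interface (from Krylov.jl); the problem data and tolerances are illustrative:

using Krylov, SparseArrays, LinearAlgebra
n, p = 100, 4
A = sprand(n, n, 0.05) + 10.0I              # shift the diagonal to keep the system well conditioned
B = rand(n, p)
X, stats = Krylov.block_gmres(A, B; rtol=1e-8, memory=20, restart=true)
stats.solved || error("block-GMRES did not converge")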
+PowerSystem · ExaPF.jl
This structure contains constant parameters that define the topology and physics of the power network.
The object PowerNetwork uses its own contiguous indexing for the buses. The indexing is independent from the one specified in the MATPOWER or PSSE input file. However, a correspondence between the two indexings (input indexing to PowerNetwork indexing) is stored inside the attribute bus_to_indexes.
Note
The object PowerNetwork is created in the host memory. Use an AbstractFormulation object to move data to the target device.
Convenience function to load a PowerNetwork instance from one of the benchmark libraries (dir=EXADATA for MATPOWER, dir=PGLIB for PGLIB-OPF). The default library is EXADATA.
Examples
julia> net = PS.load_case("case118") # default is MATPOWER
diff --git a/dev/man/autodiff/index.html b/dev/man/autodiff/index.html
index 6e9dc21d..bd07d294 100644
--- a/dev/man/autodiff/index.html
+++ b/dev/man/autodiff/index.html
@@ -35,4 +35,4 @@
F[npq + i] += coef_cos * sin_val - coef_sin * cos_val
end
end
-end
These two abstractions are a powerful tool that allows us to implement the forward mode in vectorized form, where the number of directions (tangent components) of a tangent variable equals the number of Jacobian colors. We illustrate this with a point-wise vector product x .* y.
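A self-contained, hedged illustration of this idea with plain ForwardDiff dual numbers carrying two tangent directions, which stand in for two Jacobian colors:

using ForwardDiff
const D2 = ForwardDiff.Dual{Nothing}                      # duals with an anonymous tag
x = [D2(1.0, 1.0, 0.0), D2(2.0, 1.0, 0.0)]                # x seeded on direction 1
y = [D2(3.0, 0.0, 1.0), D2(4.0, 0.0, 1.0)]                # y seeded on direction 2
z = x .* y                                                # one sweep propagates both directions
Jc = [ForwardDiff.partials(zi, k) for zi in z, k in 1:2]  # 2×2 compressed Jacobian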
This natural way of computing the compressed Jacobian yields highly performant code that is portable to any vector architecture, provided a package similar to CUDA.jl exists for it. We note that similar packages for the Intel Compute Engine and AMD ROCm, called oneAPI.jl and AMDGPU.jl respectively, are under development. We expect our package to be portable to AMD and Intel GPUs in the future.