This page introduces the first steps to set up ExaPF.jl. We show how to load a power network instance and how to solve the power flow equations both on the CPU and on the GPU. The full script is implemented in test/quickstart.jl.
We start by importing CUDA and KernelAbstractions:
using CUDA
using KernelAbstractions
Then, we load ExaPF and its submodules with
using ExaPF
import ExaPF: AutoDiff
const PS = ExaPF.PowerSystem
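We also define an alias for ExaPF's linear-solver submodule (the LS shorthand is assumed here; it matches the LS.* calls used further below):
const LS = ExaPF.LinearSolvers
As a first contact with the library, the power flow can be solved in a compact way. The following sketch assumes the case1354pegase.m instance bundled with ExaPF and the run_pf entry point:
datafile = joinpath(dirname(pathof(ExaPF)), "..", "data", "case1354pegase.m")
pf = PS.PowerNetwork(datafile)       # parse the MATPOWER file
polar = ExaPF.PolarForm(pf, CPU())   # polar formulation on the CPU
stack = ExaPF.NetworkStack(polar)    # initial variables
convergence = ExaPF.run_pf(polar, stack; verbose=1)
#it 0: 1.15103e+02
#it 1: 1.50328e+01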
#it 2: 5.88242e-01
#it 3: 4.88493e-03
#it 4: 1.39924e-06
#it 5: 1.52178e-11
Power flow has converged: true
* #iterations: 5
* Time Jacobian (s) ........: 0.4023
* Time linear solver (s) ...: 0.0168
* Time total (s) ...........: 0.7970
Implicitly, ExaPF has just performed the following operations:
- automatically instantiate a starting point $x_0$ from the variables stored in stack;
- instantiate the Jacobian of the powerflow equations using AutoDiff;
- solve the powerflow equations iteratively, using a Newton-Raphson algorithm.
This compact syntax makes it possible to solve any powerflow equations in a few lines of code. However, in most cases, the user may want finer-grained control over the different objects being manipulated.
In what follows, we detail the procedure to solve the powerflow equations step by step.
We start by importing a MATPOWER instance into an ExaPF.PowerSystem.PowerNetwork object:
julia> pf = PS.PowerNetwork(datafile)
PowerNetwork object with:
Buses: 1354 (Slack: 1. PV: 259. PQ: 1094)
Generators: 260.
The different fields of the object pf
specify the characteristics of the network. For instance, we can retrieve the number of buses or get the indexing of the PV buses with
julia> nbus = PS.get(pf, PS.NumberOfBuses())
1354
julia> pv_indexes = pf.pv;
However, an ExaPF.PowerSystem.PowerNetwork
object stores only the physical attributes of the network. To choose a given mathematical formulation, we need to pass the object pf
to an ExaPF.AbstractFormulation
layer. Currently, only the polar formulation is provided with the ExaPF.PolarForm
structure. In the future, other formulations (e.g. RectangularForm
) may be implemented as well.
To solve the powerflow equations, we need to choose a mathematical formulation for the equations of the network. To each formulation corresponds a given state $x$ and control $u$. Using the polar representation of the voltage vector, $\bm{v} = |v|e^{j \theta}$, each bus $i=1, \cdots, N_B$ must satisfy the power balance equations:
\[\begin{aligned}
p_i &= v_i \sum_{j}^{n} v_j (g_{ij}\cos{(\theta_i - \theta_j)} + b_{ij}\sin{(\theta_i - \theta_j)}) \,, \\
q_i &= v_i \sum_{j}^{n} v_j (g_{ij}\sin{(\theta_i - \theta_j)} - b_{ij}\cos{(\theta_i - \theta_j)}) \,,
\end{aligned}\]
where $g_{ij}$ and $b_{ij}$ denote the real and imaginary parts of the admittance matrix entries.
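The power flow then amounts to solving the nonlinear system $g(x, u) = 0$. In ExaPF, the polar formulation is instantiated from the PowerNetwork object; a sketch of the call is shown below (the full REPL display, similar to the GPU one shown later on this page, is abbreviated here):
julia> polar = ExaPF.PolarForm(pf, CPU())
Among other network characteristics, the display reports the dimensions of the mathematical formulation: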
#controls: 519
#states : 2447
Note that the constructor ExaPF.PolarForm takes as input an ExaPF.PowerSystem.PowerNetwork object and a KernelAbstractions.jl device (here set to CPU() by default). We will explain in the next section how to load an ExaPF.PolarForm object on the GPU with the help of a CUDABackend().
The Newton-Raphson algorithm solves the equation $g(x, u) = 0$ in an iterative fashion. At each step, it updates the current iterate according to
\[ x_{k+1} = x_k - (\nabla_x g_k)^{-1} g(x_k, u),\]
which amounts to solving a linear system with the Jacobian $\nabla_x g_k$.
Hence, the algorithm requires the following elements:
- an initial variable $x_0$
- a function to solve efficiently the linear system $(\nabla_x g_k) \Delta x_k = -g(x_k, u)$
- a function to evaluate the Jacobian $\nabla_x g_k$
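To make the loop concrete, here is a minimal, schematic Newton-Raphson iteration written in plain Julia. It is only an illustration of how these three elements fit together, not ExaPF's actual implementation; g, jacobian and linsolve are generic placeholder callbacks.
using LinearAlgebra: norm

# g:        residual function x -> g(x, u)
# jacobian: function x -> ∇ₓg(x, u)
# linsolve: function (J, b) -> solution y of J y = b
function newton_raphson_sketch!(x, g, jacobian, linsolve; tol=1e-10, maxiter=20)
    for _ in 1:maxiter
        r = g(x)
        norm(r) <= tol && break       # converged
        J = jacobian(x)
        dx = linsolve(J, -r)          # solve (∇ₓg) Δx = -g(x, u)
        x .+= dx                      # x_{k+1} = x_k + Δx
    end
    return x
end
ExaPF provides dedicated objects for each of these three ingredients, as detailed next.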
The variable $x$ is instantiated as:
julia> stack = ExaPF.NetworkStack(polar)
2968-elements NetworkStack{Vector{Float64}}
The function $g$ is implemented using ExaPF's custom modeler:
julia> basis = ExaPF.PolarBasis(polar)
PolarBasis (AbstractExpression)
julia> powerflow = ExaPF.PowerFlowBalance(polar) ∘ basis
ExaPF.ComposedExpressions{ExaPF.PolarBasis{Vector{Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}, ExaPF.PowerFlowBalance{Vector{Float64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}}(PolarBasis (AbstractExpression), PowerFlowBalance (AbstractExpression))
The Jacobian $\nabla_x g$ is evaluated automatically using forward-mode AutoDiff:
julia> mapx = ExaPF.mapping(polar, State());
julia> jx = ExaPF.Jacobian(polar, powerflow, mapx)
A AutoDiff Jacobian for ExaPF.ComposedExpressions{ExaPF.PolarBasis{Vector{Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}, ExaPF.PowerFlowBalance{Vector{Float64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}}(PolarBasis (AbstractExpression), PowerFlowBalance (AbstractExpression))
Number of Jacobian colors: 25
The (direct) linear solver can be instantiated directly as
julia> linear_solver = LS.DirectSolver(jx.J);
Let's explain these three objects further.
- stack is an AbstractStack storing all the variables attached to the formulation polar::PolarForm.
- jx is a Jacobian structure which allows the solver to compute efficiently the Jacobian of the powerflow equations $\nabla_x g$ using AutoDiff.
- linear_solver specifies the linear algorithm used to solve the linear system $(\nabla_x g_k) \Delta x_k = -g(x_k, u)$. By default, we use direct linear algebra.
In the AutoDiff Jacobian jx
, the evaluation of the Jacobian $J$ is stored in jx.J
:
julia> jac = jx.J
2447×2447 SparseArrays.SparseMatrixCSC{Float64, Int64} with 15803 stored entries:
⎡⢿⣷⣲⣷⣯⣷⣣⢯⣮⣌⢫⡿⣿⣲⢡⣾⡾⣶⣰⣷⡧⣾⣵⣿⢺⣵⣧⣙⣿⢿⣟⡮⢴⣗⣷⣶⣾⣿⡱⡧⎤
⎢⢼⣾⣿⣿⣿⡺⣶⣿⣟⢷⣲⣿⣷⣿⣳⡟⢹⡿⣿⢿⢓⣻⡗⣾⢼⡿⡿⣆⣾⣾⣯⣗⢿⠃⡻⣿⡿⣟⢚⣧⎥
⎢⢯⣿⣻⡻⢟⣵⢿⣟⣇⣾⣵⣿⡿⡷⣿⣿⢿⣿⢿⢟⣿⡽⢿⣾⢟⣾⣷⣫⣾⣿⣿⡾⣯⣾⣟⡿⡷⣿⣿⠿⎥
⎢⡭⣞⣼⣿⣿⢷⡿⣯⣧⣿⢿⣾⣷⣷⡫⣧⣻⣻⣵⢾⡹⣻⡿⣿⢿⣿⣹⣾⣿⣿⣶⡟⣻⣌⣿⣮⣷⡏⣿⡇⎥
⎢⡊⢿⢿⣝⣩⣽⣭⣿⣿⢟⣻⣿⡷⢿⢽⣯⣿⢾⣼⡻⢽⣼⢯⡻⣽⢿⡿⣟⣿⣿⠿⣿⣿⣽⡷⣿⣽⡯⣧⣿⎥
⎢⣯⡶⣼⣾⣵⣿⣻⣷⣿⣾⡛⣬⣛⣿⢻⣽⠽⣯⣿⣳⣙⣿⣾⣝⣿⣴⡿⣿⢳⣽⣻⡯⣯⡿⢧⣻⣾⣏⣻⢗⎥
⎢⢻⣻⣽⣿⢿⡯⢽⣿⣽⣏⣿⣼⡿⣯⣞⣷⡿⣹⡽⣗⣾⣟⢵⡭⣿⡯⣷⣹⣧⣻⢿⣷⣳⣳⠯⣿⣯⣷⣾⡝⎥
⎢⣡⣶⣽⠾⣿⣿⠯⣮⡷⣷⣟⣶⢾⣽⣻⣾⣷⣷⢿⣿⣾⣳⣿⣿⢵⣾⢿⣾⣻⣦⢯⣞⣿⣿⣿⢿⣟⣿⣟⣜⎥
⎢⢺⣯⣷⡶⣿⣷⣿⣺⣻⣟⡷⣧⣟⣫⢽⣿⡻⣮⣿⣳⢻⣿⣷⢿⢟⣞⣯⣷⣻⣾⣓⣩⣿⣟⢿⣾⣟⡮⣺⣿⎥
⎢⢴⣾⣿⣟⣿⢗⣱⣟⣶⡻⢿⣻⢷⢯⣿⣷⢿⣻⢿⢗⣚⣞⣾⣒⣿⣷⣖⡷⣿⡿⡼⣿⣿⣾⣿⣯⣿⣖⢺⡹⎥
⎢⣩⣯⣽⣰⣟⡿⣷⣪⣓⣷⣷⣼⣾⢿⢾⣻⣿⣶⣺⢼⡿⣯⣿⢿⣗⣝⣹⣿⢷⣷⡾⡷⢟⢿⣷⣲⡯⣞⣿⣿⎥
⎢⣵⣿⣹⣭⣻⣷⣿⣯⣯⡳⣞⢿⡕⡷⣿⣿⣽⣟⢺⢻⣿⣟⣿⣿⢯⣟⣟⢶⣾⣻⣻⣿⣷⢯⣿⣟⡟⣿⡽⡻⎥
⎢⢞⣶⣶⡷⣻⣵⣿⣷⣷⣟⢛⣿⡿⡿⣱⣷⣻⢵⢿⣿⣝⢽⣯⢷⣿⣿⣽⣟⣻⣿⢿⠏⣹⣦⡿⢶⣿⣏⡿⢇⎥
⎢⣍⢻⠻⢯⡽⣻⣳⣾⣿⢯⣿⣯⣝⣻⣻⣷⢯⣿⢼⡽⣷⣾⢻⣝⣷⢿⣟⣽⡻⣭⣛⣽⢿⡿⡿⣳⣾⣗⣷⡻⎥
⎢⣿⣟⣺⣿⣾⣿⣿⣿⣿⣿⣝⣶⣭⣻⠻⣾⣻⣾⣿⡿⢽⣷⣾⣻⣿⣾⡟⣮⣻⣾⣝⡷⢷⡟⣧⣾⣿⡯⢿⣲⎥
⎢⡻⡽⢯⢿⣻⡿⣼⠿⣿⣧⡿⡾⢿⣷⣫⢷⡝⣸⣶⣯⢾⡯⣿⣾⡿⠗⣟⣼⢷⡽⣿⣿⣹⣿⣗⡿⣦⣷⣿⣎⎥
⎢⢴⢷⠿⠓⣫⣿⡛⢾⣟⣿⣯⡿⢽⣺⣿⣿⣿⢿⣻⣿⣿⣕⡽⣟⠳⣾⣿⡷⣽⠷⣷⣾⣿⣿⣿⢿⣿⣿⣯⣶⎥
⎢⢹⣿⣿⣮⣿⡽⡻⣿⣽⣯⣭⣳⣯⣧⣿⣟⣻⣷⡿⣿⢹⣻⣿⢿⢻⣏⢿⣫⣩⣿⣽⡽⣿⣟⣵⣿⣿⡷⣽⣷⎥
⎢⣾⣿⣿⢯⣽⣯⡽⠿⡷⡿⡾⢿⢯⣿⣿⣽⡻⡽⢻⢿⣫⢯⣿⣭⡿⢿⢾⢿⡿⡿⢬⣿⣿⣿⢿⡿⣵⣿⣹⣝⎥
⎣⠵⡮⠾⣴⣿⡟⠿⠿⣭⣿⢿⢞⣞⠿⣛⢽⣾⣾⣞⡲⣿⣿⣷⡫⠿⢏⣽⡻⢻⣳⡻⢿⢫⣿⢷⣿⣗⢾⣿⣿⎦
This matrix lies at the core of the powerflow algorithm. At each iteration, the AutoDiff backend updates the nonzero values in the sparse Jacobian jx and solves the associated linear system to compute the next descent direction.
The procedure is implemented in the nlsolve! function, which uses a Newton-Raphson algorithm to solve the powerflow equations. The Newton-Raphson algorithm is specified as:
julia> pf_algo = NewtonRaphson(; verbose=1, tol=1e-10)
NewtonRaphson(20, 1.0e-10, 1)
Then, we can solve the powerflow equations simply with
julia> convergence = ExaPF.nlsolve!(pf_algo, jx, stack; linear_solver=linear_solver)
#it 0: 1.15103e+02
#it 1: 1.50328e+01
#it 2: 5.88242e-01
#it 3: 4.88493e-03
#it 4: 1.39924e-06
#it 5: 1.52178e-11
Power flow has converged: true
* #iterations: 5
* Time Jacobian (s) ........: 0.0045
* Time linear solver (s) ...: 0.0168
* Time total (s) ...........: 0.0216
Here, the algorithm solves the powerflow equations in 5 iterations. The algorithm modifies the values of stack in place, to avoid any unnecessary memory allocations.
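For instance, the voltage magnitudes and angles computed by the solver can presumably be read back from the corresponding fields of the stack (the field names vmag and vang are assumed from NetworkStack's layout):
julia> stack.vmag;
julia> stack.vang;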
Now, how can we offload the resolution to the GPU? The procedure looks exactly the same. It suffices to instantiate a new ExaPF.PolarForm object, but on the GPU:
julia> polar_gpu = ExaPF.PolarForm(pf, CUDABackend())
Polar formulation (instantiated on device CUDA.CUDAKernels.CUDABackend(false, false))
Network characteristics:
#buses: 1354 (#slack: 1 #PV: 259 #PQ: 1094)
#generators: 260
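The remaining GPU objects mirror the CPU construction above. A sketch of these steps follows (the names stack_gpu and jx_gpu match their use below; the exact sequence in test/quickstart.jl may differ slightly):
julia> stack_gpu = ExaPF.NetworkStack(polar_gpu);
julia> basis_gpu = ExaPF.PolarBasis(polar_gpu);
julia> pflow_gpu = ExaPF.PowerFlowBalance(polar_gpu) ∘ basis_gpu;
julia> mapx_gpu = ExaPF.mapping(polar_gpu, State());
julia> jx_gpu = ExaPF.Jacobian(polar_gpu, pflow_gpu, mapx_gpu);
julia> linear_solver = LS.DirectSolver(jx_gpu.J);
julia> convergence = ExaPF.nlsolve!(pf_algo, jx_gpu, stack_gpu; linear_solver=linear_solver)
#it 0: 1.15103e+02
#it 1: 1.50328e+01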
#it 2: 5.88242e-01
#it 3: 4.88493e-03
#it 4: 1.39924e-06
#it 5: 9.96622e-12
Power flow has converged: true
* #iterations: 5
* Time Jacobian (s) ........: 2.6187
* Time linear solver (s) ...: 0.3118
* Time total (s) ...........: 3.7690
Note that we get exactly the same iterations as when we solve the power flow equations on the CPU.
By default, the algorithm runs with a direct solver, which might be inefficient for large problems. To overcome this issue, ExaPF implements a wrapper for different iterative algorithms (GMRES, BICGSTAB).
The performance of iterative solvers is usually improved if we use a preconditioner. ExaPF.jl
implements an overlapping Schwarz preconditioner, tailored for GPU usage. To build an instance with 8 blocks, just write
julia> npartitions = 8;
julia> jac_gpu = jx_gpu.J;
julia> precond = LS.BlockJacobiPreconditioner(jac_gpu, npartitions, CUDABackend());
You can attach the preconditioner to a BICGSTAB algorithm simply as
julia> linear_solver = ExaPF.KrylovBICGSTAB(jac_gpu; P=precond);
(this will use the BICGSTAB algorithm implemented in Krylov.jl).
We need to update the tolerance of the Newton-Raphson algorithm accordingly (the iterative solver is less accurate than the direct solver):
julia> pf_algo = NewtonRaphson(; verbose=1, tol=1e-7)
NewtonRaphson(20, 1.0e-7, 1)
We reset the variables to their initial values:
julia> ExaPF.init!(polar_gpu, stack_gpu)
Then, solving the power flow with the iterative solvers directly translates to one call to nlsolve!
:
julia> convergence = ExaPF.nlsolve!(pf_algo, jx_gpu, stack_gpu; linear_solver=linear_solver)
#it 0: 1.15103e+02
#it 1: 1.50328e+01
#it 2: 5.88242e-01
#it 3: 4.88493e-03
#it 4: 1.39924e-06
#it 5: 6.63821e-11
Power flow has converged: true
* #iterations: 5
* Time Jacobian (s) ........: 0.0035
* Time linear solver (s) ...: 6.8257
* Time total (s) ...........: 6.8306