Dates are in YYYY-MM-DD format.
- Fixed a bug where `Graph.toposort()` would not consider implicit inputs of nodes with subgraphs. For example, a graph including an `If` node whose subgraphs used tensors from the outer graph may previously have been sorted such that it occurred before the nodes producing those tensors.
- Fixed a bug where `numpy.dtype` would not be exported correctly when specified as a node attribute.
- `Graph.tensors()` will now display a warning when duplicate tensors are detected in the graph, even if `check_duplicates=False`. As before, when `check_duplicates=True`, it will throw an exception in such cases.
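  For example, a minimal sketch of both modes (assuming a model file named `model.onnx`):

  ```python
  import onnx
  import onnx_graphsurgeon as gs

  graph = gs.import_onnx(onnx.load("model.onnx"))

  # Default (check_duplicates=False): duplicates only produce a warning.
  tensor_map = graph.tensors()

  # check_duplicates=True: duplicates raise an exception instead.
  try:
      tensor_map = graph.tensors(check_duplicates=True)
  except Exception as err:
      print(f"Duplicate tensors found: {err}")
  ```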
- Added support for `Cast` elision in `fold_constants()`.
- Updated `fold_constants()` so that it no longer fails if a shape folding pass fails when `error_ok` is `True`.
- Fixed a bug where `fold_constants()` would fail if a model contained a `Slice` node without a `starts` or `ends` input.
- Added support for folding `Shape -> Slice` patterns even when the entire shape may not be known.
- `fold_constants()` will no longer store values for foldable tensors whose outputs are all foldable. For example, while folding a constant subgraph like `A (constant) -> B -> C`, previously, `B` values would be computed in addition to `C`. With these changes, only `C` values are computed and stored. This can reduce memory usage significantly.
- Fixed a bug where `copy()` would not work with subgraphs that included tensors with the same names as outer graph tensors unless a `tensor_map` was provided.
- `fold_constants()` can now fold `Shape -> Gather` patterns even when the entire shape may not be known.
- Added an `error_ok` parameter in `fold_constants()` which can be set to `False` to re-raise errors encountered during inference.
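  A sketch of the difference (`graph` is assumed to be an already-imported `gs.Graph`):

  ```python
  # Default: errors encountered during inference are logged and folding continues.
  graph.fold_constants()

  # With error_ok=False, the first inference error is re-raised instead.
  graph.fold_constants(error_ok=False)
  ```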
- Fixed a bug where `copy()` would not correctly copy tensors in nested graphs.
- Fixed a bug where `fold_constants()` would attempt to fold nodes including graph attributes even if nodes within the nested graph could not be folded.
- `fold_constants()` no longer loads constant values into numpy arrays. This can save a significant amount of memory.
- `cleanup()` will no longer remove unused graph inputs by default, as this was causing invalid ONNX models to be generated in cases with `Loop` nodes. Set `remove_unused_graph_inputs` to `True` to revert to the old behavior.
- `cleanup()` will no longer reorder node inputs in cases where they are also graph outputs.
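  A sketch of the new default and the opt-out:

  ```python
  # New default: unused graph inputs are kept, which avoids producing
  # invalid ONNX models when Loop nodes are present.
  graph.cleanup()

  # Revert to the old behavior and strip unused graph inputs:
  graph.cleanup(remove_unused_graph_inputs=True)
  ```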
- Added support for models with externally stored data. See the README for details on how to import and export such models.
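  The README is the authoritative reference; as a rough sketch, loading and saving such a model with the standard `onnx` external-data options might look like this:

  ```python
  import onnx
  import onnx_graphsurgeon as gs

  # onnx.load() resolves externally stored tensors relative to the model's directory.
  graph = gs.import_onnx(onnx.load("model.onnx"))

  # ... modify the graph ...

  # onnx.save() can write large tensors back out to an external file on export.
  onnx.save(
      gs.export_onnx(graph),
      "modified.onnx",
      save_as_external_data=True,
      location="modified.weights",  # hypothetical file name
  )
  ```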
- Operator domains are now preserved when exporting graphs to ONNX.
- `fold_constants` will no longer attempt to run inference if there are no constants to compute.
- Fixed a bug in `fold_constants` where it would fail if ONNX-Runtime could not run a node with constant inputs. In such cases, the graph is now partitioned to exclude the node before running another pass of constant folding.
- Fixed a bug where graph output tensors would still point to consumer nodes that had been removed from the graph.
- Constant folding is now significantly faster in models with large weights.
- Added support for folding `Shape` nodes in `fold_constants`. This requires that shape inference has been run on the graph, and that the input to the `Shape` node has a static shape. This behavior can be disabled by setting `fold_shapes=False`.
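  Since `Shape` folding depends on shape inference having populated static shapes, a typical invocation might look like this sketch:

  ```python
  import onnx
  import onnx_graphsurgeon as gs

  # Run shape inference first so that Shape node inputs have static shapes.
  model = onnx.shape_inference.infer_shapes(onnx.load("model.onnx"))
  graph = gs.import_onnx(model)

  graph.fold_constants()  # Shape nodes are folded where possible
  # To opt out of shape folding:
  # graph.fold_constants(fold_shapes=False)
  ```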
- `cleanup`, `toposort`, and `fold_constants` are now recursively applied to subgraphs by default. This behavior can be disabled by setting `recurse_subgraphs=False`.
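  For example, a sketch of toggling the recursion:

  ```python
  # Default: each pass also descends into subgraphs (e.g. If/Loop bodies).
  graph.cleanup()
  graph.toposort()
  graph.fold_constants()

  # Restrict each pass to the top-level graph only:
  graph.cleanup(recurse_subgraphs=False)
  graph.toposort(recurse_subgraphs=False)
  graph.fold_constants(recurse_subgraphs=False)
  ```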
- Fixed a bug where `do_type_check` would not propagate to subgraphs.
- Fixed a bug where `cleanup()` would incorrectly remove outer-level nodes if they were used only by inner nodes of subgraphs.
- Removed `__deepcopy__` from `Graph` as it wasn't deep-copying weights or attributes. The method is now called `copy` and makes a shallow copy of everything except `Node` and `Tensor` instances.
- Fixed a bug where shapes including empty strings for `dim_param` would be treated as empty tensors. They are now correctly imported as tensors with dynamic shapes.
- Fixed a bug where variable tensors with unknown shapes would be imported as scalars.
- The `values` property of `Constant` tensors is now lazily loaded. This can greatly improve model loading times.
- Fixed a bug where graph inputs and outputs could be assigned `SynchronizedList` instances, and would therefore be modified if nodes in the graph were.
- Changed the default value of `remove_unused_node_outputs` in `cleanup()` to `False`, as a value of `True` can lead to unintuitive behavior, especially with looping constructs like `Scan` and `Loop`.
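  A brief sketch of the changed default:

  ```python
  # New default (False): unused node outputs are left intact, which is safer
  # for looping constructs such as Scan and Loop.
  graph.cleanup()

  # Previous behavior, now opt-in:
  graph.cleanup(remove_unused_node_outputs=True)
  ```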
- Fixed a bug where calling `graph.tensors()` would cause the inputs or outputs of some tensors to be modified.
- `SynchronizedList.__add__()` no longer modifies the left operand.
- Fixed a bug where nodes including subgraphs whose inputs/outputs had the same names as the node's inputs/outputs would not be imported correctly.
- `fold_constants()` will no longer fail if there is nothing to fold in the graph.
- `cleanup()` will now properly remove the producer nodes of graph inputs.
- Fixed a bug where graph input/output tensors not attached to nodes would not be correctly exported.
- `Graph.register()` now accepts an `opsets` argument so that functions can be registered for specific opsets.
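  For example, a sketch of registering a graph-building helper for specific opsets (`relu` is a hypothetical helper name):

  ```python
  import onnx_graphsurgeon as gs

  # This function is only available on graphs whose opset is in the list below.
  @gs.Graph.register(opsets=[11])
  def relu(self, a):
      return self.layer(op="Relu", inputs=[a], outputs=["relu_out"])
  ```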
- `has_metadata` has been removed from `Tensor`, since the function is no longer used.
- ONNX GraphSurgeon now enforces the constraint that graph inputs/outputs must include type information.
- Fixed a bug where `opset` was not being considered when running inference for constant folding.
- Added a `layer()` function to `Graph` to make it easier to generate models from scratch.
- Added `i()` and `o()` convenience functions to `Tensor`, which are similar to the functions for `Node`, but return `Tensor`s instead of `Node`s.
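  A minimal sketch of building a graph with `layer()` and navigating it with the tensor-level `i()`/`o()` helpers:

  ```python
  import numpy as np
  import onnx_graphsurgeon as gs

  x = gs.Variable(name="x", dtype=np.float32, shape=(1, 3))
  graph = gs.Graph(inputs=[x])

  # layer() creates the Node and its output tensors in one call.
  (y,) = graph.layer(op="Identity", inputs=[x], outputs=["y"])
  graph.outputs = [y]

  # Tensor-level i()/o() hop across the producer/consumer nodes and
  # return Tensors rather than Nodes:
  assert y.i() is x  # the input tensor of y's producer
  assert x.o() is y  # the output tensor of x's first consumer
  ```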
- Added an `examples` directory.
- Added `has_metadata()` to `Tensor` classes to determine if dtype/shape are known.
- Added a `check_duplicates` parameter to `Graph.tensors()` to make it easy to check for duplicate tensors in the graph.
- Various improvements to the logger
- Updated `OnnxImporter` so that it can correctly import shapes and types from an ONNX graph after shape inference.
- Made `Tensor` an abstract class: all tensors in a graph are now either `Variable` or `Constant`.
- Renamed `generate_tensor_map()` to `tensors()` in `Graph`.
- Removed the `Tensor` suffix from Tensor classes.
- The `import_onnx` and `export_onnx` functions will now preserve opset information and `dim_param` values in shapes.
- Added `i()` and `o()` convenience functions to `Node` for retrieving input/output nodes.
- Added `fold_constants()` to `Graph` to allow for folding constants in the graph.
- Added `__deepcopy__()` to `Graph`.
- Added `to_constant()` and `to_variable()` functions to `Variable` and `Constant` respectively to transmute them in-place.
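  A sketch of the node-level helpers and the in-place conversions (`graph` and the tensor names are hypothetical):

  ```python
  import numpy as np

  node = graph.nodes[0]
  producer = node.i()  # node that produces this node's first input
  consumer = node.o()  # node that consumes this node's first output

  tensor_map = graph.tensors()
  # Variable -> Constant, attaching concrete values:
  tensor_map["weight"].to_constant(values=np.ones((3, 3), dtype=np.float32))
  # Constant -> Variable, keeping only metadata:
  tensor_map["bias"].to_variable(dtype=np.float32, shape=(3,))
  ```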
- Removed some type annotations to allow compatibility with Python 3.5.
- Added `Node`, `Tensor`, and `Graph` classes.
- Added `BaseImporter` and `OnnxImporter` classes.
- Added support for importing initializers in the `OnnxImporter`.
- Added `Variable` and `Constant` tensor classes.
- Consolidated inputs/outputs of Nodes/Tensors. Now, inputs/outputs should generally only be added to `Node`s.
- Added `OnnxExporter` to export `Graph` to `onnx.GraphProto`.
- Added `OnnxExporter` and `OnnxImporter` to public imports.
- Added a `toposort` function to `Graph`, which will topologically sort it.
- Added a `cleanup` function to `Graph`, which will remove unused nodes and tensors.
- Added a high-level API for importing/exporting `Graph`s from/to ONNX models.
- `Graph`s are now generated with a default name of `onnx_graphsurgeon`.
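  Taken together, the high-level API supports a simple round trip; a minimal sketch:

  ```python
  import onnx
  import onnx_graphsurgeon as gs

  graph = gs.import_onnx(onnx.load("model.onnx"))
  graph.cleanup()   # remove unused nodes and tensors
  graph.toposort()  # topologically sort the nodes
  onnx.save(gs.export_onnx(graph), "cleaned.onnx")
  ```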