Hello,
In the section "Averaging Over Multiple Baselines", expected gradients (EG) is first defined as a double integral: an outer expectation over the baseline distribution D and an inner integral over the path between x' and x. Then, because EG is said to belong to "the path attribution methods", it becomes a one-dimensional Riemann integral (I write out both forms below as I understand them).
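For reference, here are the two forms as I read them from the article; this is my own transcription, so the notation may differ slightly from the text:

$$
\phi_i(x) = \int_{x'} \left( (x_i - x_i') \int_{\alpha=0}^{1} \frac{\partial f\big(x' + \alpha (x - x')\big)}{\partial x_i} \, d\alpha \right) p_D(x') \, dx'
$$

$$
\phi_i(x) = \mathbb{E}_{x' \sim D,\ \alpha \sim U(0,1)} \left[ (x_i - x_i') \, \frac{\partial f\big(x' + \alpha (x - x')\big)}{\partial x_i} \right]
$$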
In the original paper, path methods are defined as integrating the gradients along a smooth path from x' to x. Expected gradients is the average of multiple path methods, but those path methods have different origin points x'. Why, then, can they be described as "integrate gradients over one or more paths between two valid inputs", when there are more than two inputs involved? And why can the result be computed as a one-dimensional Riemann integral over a single path? (See the sampling sketch after this paragraph for how I currently picture that computation.)
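To make the question concrete, here is a minimal Monte Carlo sketch of the single-sum computation as I currently picture it; the function name and structure are my own and are only meant to illustrate the estimator, not to reproduce the article's code:

```python
import numpy as np

def expected_gradients(f_grad, x, baselines, n_samples=200, rng=None):
    """Monte Carlo estimate of expected gradients (illustrative sketch).

    f_grad:    callable returning df/dx at a point (same shape as x).
    x:         the input being explained.
    baselines: array of background samples drawn from D (e.g. training data).
    """
    rng = np.random.default_rng(rng)
    total = np.zeros_like(x)
    for _ in range(n_samples):
        x_prime = baselines[rng.integers(len(baselines))]  # x' ~ D
        alpha = rng.uniform()                              # alpha ~ U(0, 1)
        point = x_prime + alpha * (x - x_prime)            # point on the path from x' to x
        total += (x - x_prime) * f_grad(point)             # one sample of the integrand
    return total / n_samples
```

Sampling x' and alpha jointly is what collapses the double integral into a single sample average, which is exactly the step I would like to understand better.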
My second confusion stems from figure 4 in "Using the Training Distribution". I want to know what the blue line's value means in this figure. In "A Better Understanding of Integrated Gradients", the blue line in figure 4 represents |f(x) - f(x')|, but in the training-dataset expected gradients formula, x' is not a constant, so why does the blue line represent a constant value? If it is the average of f(x'), it would be reasonable for the blue line's value to change as k increases (I spell out the two candidate readings below).
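To state the ambiguity precisely, the two readings give different quantities (the notation here is my own):

$$
\Big| f(x) - \mathbb{E}_{x' \sim D}\big[f(x')\big] \Big|
\qquad \text{vs.} \qquad
\Big| f(x) - \frac{1}{k} \sum_{j=1}^{k} f(x'_j) \Big|
$$

The left-hand value is fixed once x and D are fixed, which would explain a flat blue line; the right-hand value is an empirical average over the k baselines drawn so far and should fluctuate with k.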
Moreover, the gradient function is obviously not linear. In the Riemann integral, the difference term converges to x - E(x'), while the gradient term never converges to gradient(x - E(x')), so I'm not sure the expected gradients estimate really converges to a constant value or to a single curve (a numerical sanity check is sketched below).
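As a sanity check, one could run the sketch above with increasing sample counts and watch whether the estimate stabilizes; the toy model and all names here are hypothetical:

```python
import numpy as np

# Toy nonlinear model: f(z) = sum(z**3), with analytic gradient 3 * z**2.
def f_grad(z):
    return 3 * z ** 2

x = np.array([1.0, -0.5])
baselines = np.random.default_rng(0).normal(size=(1000, 2))

# Reuses expected_gradients() from the sketch above. By the law of large
# numbers the sample average should approach the double integral itself,
# not the gradient evaluated at x - E[x'].
for k in (10, 100, 1000, 10000):
    print(k, expected_gradients(f_grad, x, baselines, n_samples=k, rng=0))
```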