Conversion rules No. 114-120 #122
Conversation
Thanks for your contribution!
paconvert/api_matcher.py
Outdated
device = kwargs["device"]
if (
    "replace('cuda', 'gpu')," in device
    or 'replace("cuda", "gpu"),' in device
I think we don't need to check whether device is a number; instead, just convert it statically when handling torch.device: int -> str().replace('cuda', ''). Wouldn't that make the code simpler?
If we don't check for numbers, the converted code ends up with extra logic.
paconvert/api_matcher.py
Outdated
else:
    return None
elif (
    "replace('cuda', 'gpu')" in device or 'replace("cuda", "gpu")' in device
- This check has quite a few branches and is a bit complex; is there a way to unify them into one?
- The return None below means unsupported; in what cases would it be unsupported?
If it does not contain the string replace('cuda', and is not a number, then it is some other unrecognized scenario, and None is returned to mark it as unsupported.
There are only two checks in total, but the current code readability isn't great:
if nums:
    pass
elif 'replace' in device:
    pass
else:
    return None
There are the following cases (a small sketch follows the list):
1. Numeric argument: torch.cuda.device(device=0). Paddle does not use id=, so the kwargs argument is converted to an args argument.
2. Not a number, containing an index argument: torch.cuda.device(torch.device("cuda", 0)) is converted to str('cuda').replace('cuda', 'gpu'), index=1.
3. Not a number, without an index argument: torch.cuda.device(torch.device("cuda")) is converted to str('cuda').replace('cuda', 'gpu').
4. Other input that is neither a number nor contains the pattern: torch.cuda.device(torch.device("cpu")).
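To make the four cases concrete, here is a minimal sketch of the dispatch. It assumes, as in the quoted diff, that `device` is the source-code string of the argument after earlier rewriting; the helper name `classify_device_arg` is hypothetical and this is not PaConvert's actual matcher code.

```python
def classify_device_arg(device: str):
    """Hypothetical helper mirroring the four cases above; not the real matcher."""
    if device.isdigit():
        # Case 1: plain integer such as 0 -> emitted positionally
        return "numeric"
    if "replace('cuda', 'gpu')," in device or 'replace("cuda", "gpu"),' in device:
        # Case 2: already-rewritten device string followed by an index argument
        return "device with index"
    if "replace('cuda', 'gpu')" in device or 'replace("cuda", "gpu")' in device:
        # Case 3: rewritten device string without an index
        return "device without index"
    # Case 4: anything else (e.g. a converted torch.device("cpu")) is unsupported
    return None
```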
paconvert/api_matcher.py
Outdated
args = [] | ||
new_kwargs = {} | ||
for ele in kwargs: |
Is there a way to avoid modifying the shared GenericMatcher function here?
It's because Paddle uses id and has no .index call; for example, torch.device("cuda").index cannot be converted to Paddle. And when the argument is an int it has no keyword name: with id=0, Paddle has no paddle.CUDAPlace(id=0), so paddle.CUDAPlace(0) must be used, which means converting kwargs to args.
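As a rough illustration of that kwargs-to-args idea (the helper name and parameter order below are assumptions for illustration, not PaConvert's actual GenericMatcher code):

```python
def kwargs_to_args(kwargs, order):
    """Emit keyword entries positionally, for target APIs that reject keywords.
    kwargs maps parameter names to source-code strings; order is the positional order."""
    return ", ".join(kwargs[name] for name in order if name in kwargs)

# torch.cuda.device(device=0) would then be emitted as
#   "paddle.CUDAPlace(" + kwargs_to_args({"device": "0"}, ["device"]) + ")"
# i.e. paddle.CUDAPlace(0), since paddle.CUDAPlace(id=0) is not accepted.
```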
I think what you wrote here is too complicated. This API has only one fixed parameter, so there should be no need to assemble things with loops, dicts, and lists.
The same four cases apply as listed above (numeric argument, device with index, device without index, and other unrecognized input).
torch.cuda.device has been removed.
Please resolve the merge conflicts.
Updated.
@co63oc The unit tests failed; please take another look.
@co63oc Please rework what was submitted earlier, otherwise the converted code will fail at runtime.
The unit test failure is because tests/test_Tensor_multinomial.py imports torch.mps, and the CI environment has no torch.mps module.
Paddle has no hist attribute; torch returns two Tensors while Paddle returns one. I don't see how this unit test should be modified.
The problem is not the unit test; it is how to guarantee that the computation results after conversion match those before conversion, for example by composing APIs. The goal of this work is to guarantee that any usage case in a user's model code runs correctly after conversion, not just that the unit tests pass. Is there a way to obtain the hist Tensor by composing other APIs? If not, this is a Matcher that cannot be implemented, and you can simply register it as a missing feature under the issue.
- The current cases only test functionality that runs; please also add cases for unsupport_args, run by setting unsupport=True in the unit test (see the sketch after this list).
- Quite a few settings and modifications are made before the results are compared; comparing the raw results directly would be better and cover problems more thoroughly.
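As a rough sketch of what such an unsupport case might look like (the APIBase helper, the obj.run signature, and the reason parameter are assumptions based on the usual PaConvert test layout and may not match the repository exactly):

```python
import textwrap

from apibase import APIBase  # assumed test helper; the actual import may differ

obj = APIBase("torch.cuda.device")


def test_unsupported_cpu_device():
    pytorch_code = textwrap.dedent(
        """
        import torch
        result = torch.cuda.device(torch.device("cpu"))
        """
    )
    # unsupport=True marks the case as expected to be unconvertible,
    # as suggested in the review comment above.
    obj.run(pytorch_code, ["result"], unsupport=True, reason="unrecognized device input")
```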
h = torch.autograd.functional.vjp(func, x, v)
result = h[:]
for item in result:
    item.requires_grad = False
At the end, can the comparison still work without setting .requires_grad to False?
h = torch.autograd.functional.hessian(func, x)
result = h[:]
result.requires_grad = False
result = torch.flatten(result)
Without this post-processing, does comparing h directly pass, or does comparing result directly pass?
z2 = torch.matmul(x, y)

torch.autograd.backward([z1, z2], [grad_tensor1, grad_tensor2], True)
x.grad.requires_grad = False
At the end, can the comparison work without setting .requires_grad?
Added under the issue.
@co63oc Can the bins returned by torch be implemented via paddle.linspace?
The stop_gradient issue is because all of Paddle's gradients also have stop_gradient set to False, while torch's gradients have requires_grad set to True; it doesn't have much impact, it's just that this attribute differs, so I'll merge this first. A couple of points can be looked at later.
If min and max are both 0, calling paddle.linspace additionally requires computing the data's minimum and maximum with paddle.min and paddle.max, which means using quite a few APIs.
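For reference, a minimal sketch of that composition, assuming paddle.histogram for the counts and paddle.linspace for the bin edges; the helper name is hypothetical and this only illustrates the idea discussed above, not an agreed implementation:

```python
import paddle

def histogram_with_edges(x, bins=100, min=0, max=0):
    # Counts come from paddle.histogram, which returns a single Tensor.
    hist = paddle.histogram(x, bins=bins, min=min, max=max)
    # When min == max == 0 the data range is used, so the edges need
    # paddle.min / paddle.max, as noted above.
    if min == 0 and max == 0:
        lo, hi = paddle.min(x), paddle.max(x)
    else:
        lo, hi = min, max
    # Emulate the second Tensor (bin_edges) that torch.histogram returns.
    bin_edges = paddle.linspace(lo, hi, bins + 1)
    return hist, bin_edges
```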
Added unsupport test cases.
LGTM
PR Docs
#112
Conversion rules added for the existing mapping docs:
torch.autograd.backward
torch.autograd.functional.jacobian
torch.autograd.functional.hessian
torch.autograd.functional.vjp
torch.autograd.functional.jvp
115 torch.autograd.grad # already exists
torch.cuda.device docs: PaddlePaddle/docs#5966
PR APIs
[used AI Studio]