【Hackathon 5th No.47】API conversion 103-124 #346
Conversation
Thanks for your contribution!
pdist has also been merged.
All unit tests must meet the following requirement:
Unit test coverage: for APIs with multiple parameters, tests should cover the different argument usages (all keyword, all positional, reordered keywords, and all defaults omitted must each be considered), not only the simplest common usage. At least 5 distinct use cases are required (the more the better).
Because unit test standardization work is currently underway, newly merged tests must follow the standard to avoid being refactored later.
new_kwargs = self.parse_kwargs(kwargs)
if new_kwargs is None:
    new_kwargs = {}
if "copy" in new_kwargs:
Could this be handled in the JSON config instead, by using kwargs_change to map it to an empty string?
kwargs_change goes through GenericMatcher, but GenericMatcher handles dtype/device conversion differently, so it isn't suitable here; and since only the copy parameter is involved, kwargs_change isn't needed.
if "memory_format" in new_kwargs:
    new_kwargs.pop("memory_format")
if "non_blocking" in new_kwargs:
    new_kwargs["blocking"] = "not " + new_kwargs.pop("non_blocking").strip("()")
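Note that the transformation above operates on source-code strings, not runtime values: torch's non_blocking flag is the logical inverse of paddle's blocking flag, so the matcher emits the text "not <expr>". A minimal standalone sketch (the helper name is hypothetical, extracted from the matcher logic above):

```python
def to_paddle_kwargs(new_kwargs):
    # Hypothetical helper mirroring the matcher snippet above.
    # paddle's API has no memory_format argument, so it is dropped;
    # torch's non_blocking is inverted into paddle's blocking as a
    # source-level "not <expr>" string.
    if "memory_format" in new_kwargs:
        new_kwargs.pop("memory_format")
    if "non_blocking" in new_kwargs:
        expr = new_kwargs.pop("non_blocking").strip("()")
        new_kwargs["blocking"] = "not " + expr
    return new_kwargs

print(to_paddle_kwargs({"non_blocking": "(True)",
                        "memory_format": "torch.preserve_format"}))
# → {'blocking': 'not True'}
```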
There is a Reverse* Matcher you can refer to; consider whether unifying these into a single, more capable Matcher would be better.
pytorch_code = textwrap.dedent(
    """
    import torch
    result = torch.atleast_1d(torch.tensor(123, dtype=torch.int32))
Does this API have a keyword-argument usage?
The parameter is *tensors, so there is no keyword usage.
Source code: https://pytorch.org/docs/stable/_modules/torch/functional.html#atleast_1d
Passing tensors as a keyword at runtime raises an error saying there is no such keyword.
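This follows from Python's calling convention: a bare star-args parameter can only be filled positionally. A minimal sketch illustrating why a tensors= keyword call is rejected (plain Python, no torch required; the function name is hypothetical):

```python
def atleast_1d_like(*tensors):
    # Mimics torch.atleast_1d's signature: a single *tensors parameter.
    return tensors

print(atleast_1d_like(1, 2))  # → (1, 2): positional args are collected

try:
    atleast_1d_like(tensors=(1,))
except TypeError:
    # *tensors cannot be bound by name, so the call fails
    print("keyword call rejected")
```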
def _test_case_2():
    pytorch_code = textwrap.dedent(
        """
        import torch
        x = torch.tensor([1., 2., 3.])
        module1 = torch.nn.Module()
        module1.register_buffer('buffer', x)
        module1.type_unsupport(dst_type=torch.float32)
        module1.type(dst_type=torch.float32)
Because this call is intercepted earlier by torch.Tensor.type, I suggest merging this functionality into TensorTypeMatcher and adding handling for the dst_type parameter there. Both torch.nn.Module.type and torch.Tensor.type convert to *.astype, and the generated code is identical, so even if the call is misidentified as torch.Tensor.type, the conversion still works.
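A minimal sketch of the suggested dst_type handling (the helper name and its shape are hypothetical; the real TensorTypeMatcher in paconvert operates on AST nodes, not plain strings):

```python
def build_astype_call(obj_expr, kwargs):
    # Hypothetical sketch: both torch.Tensor.type(dst_type=...) and
    # torch.nn.Module.type(dst_type=...) can be lowered to the same
    # paddle-style "obj.astype(dtype)" source string, which is why a
    # misidentified match still produces correct code.
    dtype = kwargs.get("dst_type") or kwargs.get("dtype")
    if dtype is None:
        return obj_expr  # no target dtype given: nothing to convert
    return "{}.astype({})".format(obj_expr, dtype)

print(build_astype_call("module1", {"dst_type": "paddle.float32"}))
# → module1.astype(paddle.float32)
```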
Fixed.
def test_case_1():
    pytorch_code = textwrap.dedent(
        generate_torch_code(
This looks like a merge conflict.
Fixed.
opt = torch.optim.Adam(model.parameters())
result = torch.optim.lr_scheduler.LinearLR(opt, start_factor=0.5, total_iters=4)
"""
generate_torch_code("torch.optim.lr_scheduler.LinearLR(sgd, verbose=True)")
Conflict here as well.
Fixed.
@co63oc the name in this lr_scheduler section should have changed.
Fixed.
Force-pushed from 5e03cc4 to e42a5bf.
paconvert/api_mapping.json (Outdated)
"paddle_api": "paddle.atleast_2d",
"min_input_args": 1,
"args_list": [
    "*"
Same as above.
Fixed, changed to *tensors.
paconvert/api_mapping.json (Outdated)
"paddle_api": "paddle.atleast_3d",
"min_input_args": 1,
"args_list": [
    "*"
Same as above.
Fixed.
paconvert/api_mapping.json (Outdated)
"paddle_api": "paddle.atleast_1d",
"min_input_args": 1,
"args_list": [
    "*"
Fixed.
def get_paddle_nodes(self, args, kwargs):
    new_args = self.parse_args(args)
    new_kwargs = self.parse_kwargs(kwargs)
    if new_kwargs is None:
When would this return None? A None return generally means the usage is unsupported, so it should continue to return None here.
It returns None when the API usage has a bug; changed it to return None.
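A minimal sketch of the agreed-upon pattern (the parse_kwargs stub is hypothetical; in paconvert, a None result signals an unsupported or unparseable usage):

```python
def parse_kwargs_stub(kwargs):
    # Hypothetical stand-in for self.parse_kwargs:
    # None signals an unsupported/unparseable usage.
    if kwargs is None:
        return None
    return dict(kwargs)

def get_paddle_nodes_sketch(kwargs):
    new_kwargs = parse_kwargs_stub(kwargs)
    if new_kwargs is None:
        # Propagate None instead of silently substituting {},
        # so the caller reports the API as unsupported.
        return None
    return new_kwargs

print(get_paddle_nodes_sketch(None))      # → None
print(get_paddle_nodes_sketch({"a": 1}))  # → {'a': 1}
```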
LGTM
PR Docs
PaddlePaddle/Paddle#57262
https://github.com/PaddlePaddle/community/blob/master/hackathon/hackathon_5th/%E3%80%90PaddlePaddle%20Hackathon%205th%E3%80%91%E5%BC%80%E6%BA%90%E8%B4%A1%E7%8C%AE%E4%B8%AA%E4%BA%BA%E6%8C%91%E6%88%98%E8%B5%9B%E6%A1%86%E6%9E%B6%E5%BC%80%E5%8F%91%E4%BB%BB%E5%8A%A1%E5%90%88%E9%9B%86.md#no47%E4%B8%BApaddle%E4%BB%A3%E7%A0%81%E8%BD%AC%E6%8D%A2%E5%B7%A5%E5%85%B7%E6%96%B0%E5%A2%9Eapi%E8%BD%AC%E6%8D%A2%E8%A7%84%E5%88%99
API conversion 103-124
Docs PR: PaddlePaddle/docs#6380
torch.histogramdd
torch.nn.functional.pdist PaddlePaddle/docs#6400
torch.nn.Module.type is matched as torch.nn.Module; if the dst_type argument is passed by name, parsing fails, and I'm not sure how to fix this in paconvert.
torch.Tensor.index_fill_ bug PR: PaddlePaddle/Paddle#59863, fixed
torch.Tensor.signbit bug PR: PaddlePaddle/Paddle#60150
PR APIs