core: new inference system for constraints #3456
Conversation
Codecov Report: all modified and coverable lines are covered by tests ✅

```
@@ Coverage Diff @@
##             main    #3456      +/-   ##
==========================================
+ Coverage   90.29%   90.31%   +0.02%
==========================================
  Files         462      462
  Lines       57937    58039     +102
  Branches     5566     5562       -4
==========================================
+ Hits        52313    52417     +104
+ Misses       4189     4188       -1
+ Partials     1435     1434       -1
```
Overall looks good, although I'm still not sure what the motivation for this is. Is it better error messages? Or is there more downstream? Also, my first instinct would be to somehow be a bit more lazy about the resolver situation, would it make sense to make them into lazy model objects, that let the user iterate over the keys, and define each function only once, instead of creating wrappers of wrappers. Not 100% sure if it would work, but might be worth a try in this PR?
(force-pushed 1fab125 to d1001b6)
I tried to put some motivation in the PR description, and there is a bit more discussion in #3318 about needing to verify properties during parsing with the old system (I have rebased that PR onto this PR).
Thank you, happy to approve once the CI passes.
I'm also not sure how happy I am about stacks of wrappers. Another option which could be worth trying is to have a resolver be a list of "accessors", where each accessor is a piece of data (such as the attribute name, or operand index). I haven't tried putting this into action, so I'm not sure how messy it would get. Could you expand on what you mean by "lazy model objects"?
Something like:

```python
import abc
from abc import abstractmethod
from collections.abc import Iterator, Sequence
from dataclasses import dataclass

from xdsl.ir import Attribute


class AbstractResolver(abc.ABC):
    @abstractmethod
    def iter_variables(self) -> Iterator[str]: ...

    @abstractmethod
    def resolve(self, attr: Attribute) -> Attribute | Sequence[Attribute]: ...


@dataclass
class WhateverResolver(AbstractResolver):
    a: AbstractResolver
    b: AbstractResolver

    def iter_variables(self) -> Iterator[str]:
        yield from self.a.iter_variables()
        yield from self.b.iter_variables()

    def resolve(self, attribute: Attribute) -> Attribute | Sequence[Attribute]:
        # creative_condition is a placeholder for whatever dispatch logic fits
        return self.a.resolve(attribute) if creative_condition else self.b.resolve(attribute)
```
I like that idea, I'll give it a go next week.
This implements something halfway between what I originally had in #3456 and what @superlopuh suggested in #3456 (comment). I think this is a strict improvement over the original, which uses callables and lambda functions, and I will merge it into that PR if people agree.
Have merged #3464 here, so this is ready for review again (I think there are no outstanding open comments).
```python
resolvers = dict[str, Resolver[ParsingState]]()
for i, (_, operand_def) in enumerate(self.op_def.operands):
    if self.seen_operand_types[i]:
        seen_variables |= operand_def.constr.get_resolved_variables()
        resolvers.update(
            {
                v: self._OperandResultResolver(i, True, r)
                for v, r in operand_def.constr.get_resolvers().items()
            }
        )
for i, (_, result_def) in enumerate(self.op_def.results):
    if self.seen_result_types[i]:
        seen_variables |= result_def.constr.get_resolved_variables()
        return seen_variables
        resolvers.update(
            {
                v: self._OperandResultResolver(i, False, r)
                for v, r in result_def.constr.get_resolvers().items()
            }
        )
return resolvers
```
What happens if there's an overlap in `v`s? Worth a comment if that's not possible? My instinct here would be to create two dictionaries, probably with two dictionary comprehensions, and then take their union (maybe asserting that the intersection of their keys is empty).
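The two-dictionaries-then-union pattern suggested above might look something like this (a sketch, not the PR's code; the function name is made up):

```python
def merge_disjoint(
    operand_resolvers: dict, result_resolvers: dict
) -> dict:
    # Assert that the two resolver maps never resolve the same variable,
    # then merge them with `|`, the dict-union operator (Python 3.9+).
    overlap = operand_resolvers.keys() & result_resolvers.keys()
    assert not overlap, f"variable(s) resolved twice: {overlap}"
    return operand_resolvers | result_resolvers
```

Unlike chained `dict.update` calls, this makes any double resolution fail loudly instead of silently keeping the last writer.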
If there's an overlap in `v`s, then the second one encountered will be used. This would happen if a variable can be inferred from two different places in the assembly format. Which resolver is used shouldn't make any difference for any operation that successfully verifies.
How do you feel about a resolver that asserts at runtime that the two resolved results are the same?
I imagine this would be easy to do with the new class-based system. I'm not sure how I feel about putting what is effectively verification as a check in parsing.
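For illustration, such an equality-asserting resolver could be a small wrapper in a class-based system like the one discussed above (a hypothetical sketch; none of these names are from the PR):

```python
from dataclasses import dataclass

# Hypothetical sketch: resolve a variable through two independent
# resolvers and assert at runtime that they agree.

@dataclass(frozen=True)
class ConstResolver:
    """Trivial resolver returning a fixed value, for demonstration only."""
    value: object

    def resolve(self, attr):
        return self.value

@dataclass(frozen=True)
class CheckEqualResolver:
    """Wrap two resolvers; assert both resolutions agree, return one."""
    first: object
    second: object

    def resolve(self, attr):
        a = self.first.resolve(attr)
        b = self.second.resolve(attr)
        assert a == b, f"conflicting resolutions: {a} vs {b}"
        return a
```

This would surface a mismatch at parse time, which is exactly the verification-during-parsing concern raised in the reply below.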
xdsl/dialects/bufferization.py (outdated)

```python
if isa(attr, TensorType[Attribute]):
    memref_type = MemRefType(attr.element_type, attr.shape)
    return self.memref_var_constr.verify(memref_type, constraint_context)
if isa(attr, UnrankedTensorType[Attribute]):
    memref_type = UnrankedMemrefType.from_type(attr.element_type)
    ...
raise VerifyException(f"Expected TensorType or UnrankedTensorType, got {attr}")
```
The logic here seems to have changed.
It is, though it appears I forgot to amend the doc comment. In my opinion, the previous constraint was an abuse of the particular workings of the previous system. The one I have written here is a bit better behaved with respect to inference, and can be used to achieve the same effect as the old one. I'll update the doc comment.
Would it make sense to merge this change as a separate PR, while we discuss the details of this one?
Sure, I'll get that ready first thing tomorrow
(force-pushed a002ad5 to aeadd84)
I believe all comments have been resolved.
Forgot to change the …
@superlopuh do you feel the extractor merging is worth the added complexity? My examples in the tests feel a bit contrived.
I feel like it's OK; I'm just imagining the headache that would emerge from trying to debug this, and it all seems worth it.
Co-authored-by: Sasha Lopoukhine <[email protected]>
@math-fehr @superlopuh are you both happy for this to be merged? Do you want to merge other conflicting PRs first?
I just merged the small conflicting PR, hopefully it will be easy to resolve. I think the best order now is for you to merge this and for me to resolve my draft PRs.
I've gone through a few different refactors this week and have eventually settled on this slightly smaller refactor for the constraint system. The key addition is that each constraint now returns a dictionary of "resolvers" (the name can maybe be improved) which resolve a variable from the given attribute. This allows inference to be decoupled entirely from verification; inference should no longer be quite as fragile, and may be slightly quicker.
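As a rough illustration of the idea (a simplified sketch with made-up names, not the actual xDSL API): a constraint can report, per variable name, a data object that knows how to extract that variable's value from a matched attribute, so inference never has to re-run verification.

```python
from dataclasses import dataclass

# Simplified sketch of "each constraint returns a dictionary of resolvers".
# All class names here are illustrative, not the real xDSL classes.

@dataclass(frozen=True)
class FakeAttr:
    """Stand-in for a parametrized attribute."""
    parameters: tuple

@dataclass(frozen=True)
class ParamResolver:
    """Resolve a variable as the `index`-th parameter of the attribute."""
    index: int

    def resolve(self, attr: FakeAttr):
        return attr.parameters[self.index]

@dataclass(frozen=True)
class ParamConstraint:
    """Constrain a parametrized attribute whose parameters are variables."""
    variables: tuple[str, ...]

    def get_resolvers(self) -> dict[str, ParamResolver]:
        # One resolver per variable: inference is fully decoupled from
        # the constraint's verification logic.
        return {v: ParamResolver(i) for i, v in enumerate(self.variables)}
```

With this shape, the parser can look up exactly the variables it still needs and run only those resolvers.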
In this version, the resolvers are not used for verification, as I had originally planned, and instead the previous verification framework is left as is. I believe this change can enable the following changes in the future:
PS: as of the time of writing, it appears this saves 4 lines of code.