
Merge OpenAI Triton commit 76ed94d #2543

Merged
merged 10 commits into main from whitneywhtsang/merge2 on Oct 23, 2024
Conversation

whitneywhtsang
Contributor

This PR changes the Triton base from 185299e to 76ed94d (Oct 18).
Pass rate: 98.99%

Please do not squash and merge this PR.

anmyachev and others added 7 commits October 17, 2024 10:29
This makes the dependence more obvious since `const std::string &target`
is used in this file.
Minor changes that reuse Python's capabilities for writing
platform-independent code (see the illustrative sketch below).

Signed-off-by: Anatoly Myachev <[email protected]>
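
As a purely illustrative, hedged sketch (not the actual diff), "reusing Python's capabilities for platform-independent code" usually means leaning on the standard library instead of hard-coding Unix conventions:

```python
import os
import shutil
import sysconfig
from pathlib import Path

# Build paths with pathlib instead of hard-coding "/" or "\\" separators.
cache_dir = Path.home() / ".triton" / "cache"

# Locate an executable portably instead of assuming a fixed Unix path.
compiler = shutil.which("cc") or shutil.which("cl")

# Query the extension-module suffix instead of hard-coding ".so".
ext_suffix = sysconfig.get_config_var("EXT_SUFFIX")

print(cache_dir, compiler, ext_suffix, os.pathsep)
```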
This PR adds an attribute in the HIP backend to test
whether a tensor's storage fits within 2 GB. This will enable
support for buffer operations (a sketch of the check appears below).
This will make it easier to support other platforms downstream. I hope
that such code will not complicate the maintenance of Triton itself.

---------

Signed-off-by: Anatoly Myachev <[email protected]>
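
A minimal sketch of the "within 2 GB" condition described above, assuming a strict upper bound of 2^31 bytes on the tensor's storage size (hypothetical helper; the backend's actual attribute is implemented in the compiler, not in Python):

```python
TWO_GB = 1 << 31  # 2**31 bytes


def storage_within_2gb(numel: int, element_bytes: int) -> bool:
    """Hypothetical check: does a tensor of `numel` elements, each
    `element_bytes` wide, fit within the 2 GB range assumed here for
    HIP buffer operations?"""
    return numel * element_bytes < TWO_GB


assert storage_within_2gb(numel=256 * 1024 * 1024, element_bytes=4)       # 1 GiB fits
assert not storage_within_2gb(numel=1024 * 1024 * 1024, element_bytes=4)  # 4 GiB does not
```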
Make `allow_reorder` and `efficient_layout` `UnitAttr` for a cleaner
interface.

This way, the operation exposes a `bool getEfficientLayout()` member to
check for that attribute and a constructor receiving `bool` arguments
for both of these attributes (defaulted to `false`).

The core Triton is a small number of people, and we receive many PRs (thank
you!). To help us review your code more quickly, **if you are a new
contributor (less than 3 PRs merged) we ask that you complete the following
tasks and include the filled-out checklist in your PR description.**

Complete the following tasks before sending your PR, and replace `[ ]` with
`[x]` to indicate you have done them.

- [X] I am not making a trivial change, such as fixing a typo in a comment.

- [X] I have written a PR description following these
  [rules](https://cbea.ms/git-commit/#why-not-how).

- [X] I have run `pre-commit run --from-ref origin/main --to-ref HEAD`.

- Select one of the following.
  - [X] I have added tests.
    - `/test` for `lit` tests
    - `/unittest` for C++ tests
    - `/python/test` for end-to-end tests
  - [ ] This PR does not need a test because `FILL THIS IN`.

- Select one of the following.
  - [ ] I have not added any `lit` tests.
  - [X] The `lit` tests I have added follow these [best
    practices](https://mlir.llvm.org/getting_started/TestingGuide/#filecheck-best-practices),
    including the "tests should be minimal" section. (Usually running Python
    code and using the instructions it generates is not minimal.)
Signed-off-by: victor-eds <[email protected]>
We have had the stream pipeliner v2 flipped on as the default
for quite some time, and all known issues have been fixed,
so now remove the old v1 pipeliner.

Note that this changes how `num_stages` is handled:
previously we enabled pipelining when `num_stages`
was `0`, which really was not good behavior. We now follow
the common practice where `0`/`1` no longer triggers
pipelining; `2` or more is required.

Given that downstream users might still be passing `0` in their
codebases, right now we `assert` to give developers
a clear indication of the behavior switch instead of
silently dropping performance (see the sketch after this commit
message). The `assert` is expected to be removed sometime down the line.

---------

Co-authored-by: Lei Zhang <[email protected]>
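
A hedged illustration of the `num_stages` behavior change described above (not code from this PR): downstream launches that passed `num_stages=0` to request pipelining should now pass `2` or more, since `0`/`1` disables pipelining and `0` may trip the temporary assert.

```python
import triton
import triton.language as tl


@triton.jit
def copy_kernel(src_ptr, dst_ptr, n_elements, BLOCK: tl.constexpr):
    # Trivial element-wise copy; only here to show the launch option.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offsets < n_elements
    tl.store(dst_ptr + offsets, tl.load(src_ptr + offsets, mask=mask), mask=mask)


# Hypothetical launch (requires a GPU and device tensors src/dst):
#   grid = (triton.cdiv(n, 1024),)
#   copy_kernel[grid](src, dst, n, BLOCK=1024, num_stages=2)  # was num_stages=0
```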
@whitneywhtsang whitneywhtsang self-assigned this Oct 23, 2024
@etiotto etiotto marked this pull request as ready for review October 23, 2024 14:10
@alexbaden
Contributor

alexbaden commented Oct 23, 2024

Trying a PyTorch Inductor run as this contains the commit that changes AttrsDescriptor again: https://github.com/intel/intel-xpu-backend-for-triton/actions/runs/11481885630
Edit: Oh I see you reverted it. And Inductor has been failing for a different reason anyway. So, I guess this is fine.

@whitneywhtsang
Contributor Author

Trying a PyTorch Inductor run as this contains the commit that changes AttrsDescriptor again: https://github.com/intel/intel-xpu-backend-for-triton/actions/runs/11481885630

I reverted that commit, as it requires special handling, so this PR doesn't contain changes to AttrsDescriptor.

@alexbaden
Contributor

I have started working on adapting to those changes in PyTorch as part of my effort to land the __repr__ string representation of AttrsDescriptor in upstream. Is there an issue to re-land this commit?

@whitneywhtsang
Contributor Author

whitneywhtsang commented Oct 23, 2024

I have started working on adapting to those changes in PyTorch as part of my effort to land the __repr__ string representation of AttrsDescriptor in upstream. Is there an issue to re-land this commit?

There are some unit test failures that require investigation; I created #2544 to track them.

@whitneywhtsang whitneywhtsang merged commit 24bd5a6 into main Oct 23, 2024
25 of 52 checks passed
@whitneywhtsang whitneywhtsang deleted the whitneywhtsang/merge2 branch October 23, 2024 14:53