Update graph_optimization.ipynb for Intel Xeon 6 Processors #2331

Merged
merged 1 commit into tensorflow:master on Oct 21, 2024

Conversation

louie-tsai (Contributor)

Intel Xeon 6 processors also support FP16 with the auto mixed precision (AMP) optimizer.
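For context, here is a minimal sketch of how auto mixed precision is enabled through the grappler options covered in graph_optimization.ipynb. Whether FP16 kernels are actually selected on CPU depends on the TensorFlow version and on hardware support such as AMX-FP16; the small model and the `forward` function below are illustrative only, not part of the notebook.

```python
import tensorflow as tf

# Enable the auto mixed precision graph optimizer (grappler).
# On supported hardware this rewrites parts of the graph to run in lower
# precision (e.g. fp16) while keeping numerically sensitive ops in fp32.
tf.config.optimizer.set_experimental_options({'auto_mixed_precision': True})

# A small illustrative model; any tf.function-compiled graph would do.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10),
])

@tf.function
def forward(x):
    return model(x)

x = tf.random.normal((8, 32))
print(forward(x).dtype)
```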

louie-tsai requested a review from a team as a code owner on October 11, 2024, 00:46

Preview

Preview and run these notebook edits with Google Colab: Rendered notebook diffs available on ReviewNB.com.

Format and style

Use the TensorFlow docs notebook tools to format for consistent source diffs and lint for style:
$ python3 -m pip install -U --user git+https://github.com/tensorflow/docs

$ python3 -m tensorflow_docs.tools.nbfmt notebook.ipynb
$ python3 -m tensorflow_docs.tools.nblint --arg=repo:tensorflow/docs notebook.ipynb
If commits are added to the pull request, synchronize your local branch: git pull origin graph_optimze

8bitmp3 (Contributor) commented Oct 11, 2024

@MarkDaoust @markmcd

MarkDaoust (Member)

Hi @louie-tsai, I don't know anything about this. Do you have any references showing that this is true?

louie-tsai (Contributor, Author) commented Oct 15, 2024

MarkDaoust (Member)

Cool, thanks. I see the processors have FP16. Does anyone know for sure that this actually works with the Auto mixed precision optimizer?

louie-tsai (Contributor, Author)

> Cool, thanks. I see the processors have FP16. Does anyone know for sure that this actually works with the Auto mixed precision optimizer?

It works with the AMP optimizer.
Looping in the Intel architect @agramesh1 for further comments, if any.

agramesh1 (Contributor)

> Cool, thanks. I see the processors have FP16. Does anyone know for sure that this actually works with the Auto mixed precision optimizer?
>
> It works with the AMP optimizer. Looping in the Intel architect @agramesh1 for further comments, if any.

Hi @MarkDaoust, yes, it is now supported in TF and XLA. We have been working with @penpornk to add support for fp16 AMP using the AMX-FP16 instructions introduced in the latest Intel Xeon CPUs.
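One rough way to check whether the oneDNN path (and, on new enough CPUs, the AMX instruction set) is actually being used is to run a small op with oneDNN's verbose logging enabled. The exact log contents vary by TensorFlow and oneDNN version, so this is only a diagnostic sketch, not something from the PR itself.

```python
import os

# oneDNN reads these at startup, so set them before importing TensorFlow.
os.environ['TF_ENABLE_ONEDNN_OPTS'] = '1'   # enable oneDNN optimizations in TF
os.environ['ONEDNN_VERBOSE'] = '1'          # log which ISA/kernels oneDNN dispatches to

import tensorflow as tf

# Run a small matmul so oneDNN emits verbose lines; on Xeon 6 the "isa"
# field in the output should mention AMX when those kernels are selected.
a = tf.random.normal((256, 256))
b = tf.random.normal((256, 256))
print(tf.matmul(a, b).shape)
```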

MarkDaoust added the "ready to pull" (start merge process) label on Oct 18, 2024
MarkDaoust (Member)

Cool thanks!

louie-tsai (Contributor, Author)

@markmcd, could you help review and approve the PR? Thanks!

markmcd (Member) commented Oct 21, 2024

> @markmcd, could you help review and approve the PR? Thanks!

Done - copybara will merge shortly.

copybara-service bot merged commit 344f0e9 into tensorflow:master on Oct 21, 2024
5 checks passed