
Check sum with 2% error instead of abs error. #45

Closed
tomdeakin wants to merge 1 commit

Conversation

tomdeakin
Contributor

This seems to help validate single-precision runs, where the sum is as close as we can reasonably expect for these large arrays.

This fixes #20.
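For reference, a relative-error check along these lines might look like the sketch below; the variable names and exact form are illustrative, not the actual diff:

```cpp
#include <cmath>
#include <iostream>

// Sketch of a 2% relative-error check on the dot-product sum.
// `sum` is the computed result, `goldSum` the expected reference value;
// both names are illustrative.
bool checkDotSum(double sum, double goldSum)
{
  const double relErr = std::fabs(sum - goldSum) / std::fabs(goldSum);
  if (relErr > 0.02)
  {
    std::cerr << "Validation failed on dot sum, relative error " << relErr << std::endl;
    return false;
  }
  return true;
}
```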

tomdeakin requested a review from jrprice on February 15, 2018 at 04:08
@jrprice
Contributor

jrprice commented Feb 20, 2018

This is extremely sensitive not just to the array size, but also to the number of threads (for OpenMP).

For example, running with the default array size on just 2 threads using the Intel compiler produces an error a little over 2%.

For a single thread running with 1GB arrays (2^28 elements), I'm getting an error of 44%, which is worryingly large.

That said, I'm not sure there's a better solution. We might get lower errors by tweaking the initial/scalar constants to keep the dot result from getting too large (which is where the coarseness of the floating-point representation becomes an issue).
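To illustrate the scaling problem, a standalone sketch like the following (with assumed 0.1/0.2 values, not necessarily the benchmark's constants) compares a naive single-precision dot product against a double-precision reference as the array size grows:

```cpp
#include <cmath>
#include <cstddef>
#include <cstdio>

// Naive single-precision accumulation of a dot product of constant values,
// compared against a double-precision reference. Once the running sum grows
// large, each small product loses low-order bits, so the relative error
// climbs with the array size.
int main()
{
  const float a = 0.1f, b = 0.2f;    // assumed initial values, for illustration
  for (int shift = 20; shift <= 28; shift += 2)
  {
    const std::size_t N = std::size_t(1) << shift;

    float sum = 0.0f;
    double gold = 0.0;
    for (std::size_t i = 0; i < N; i++)
    {
      sum  += a * b;                 // float accumulation
      gold += double(a) * double(b); // double-precision reference
    }

    std::printf("N = 2^%d  relative error = %g\n",
                shift, std::fabs(double(sum) - gold) / gold);
  }
  return 0;
}
```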

@tomdeakin
Contributor Author

Oh wow, those are big errors. I wonder if we should normalise the dot product by the array size just in the checking script. Maybe that would net out a factor of the array size?
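A rough sketch of that normalisation idea, with illustrative variable names rather than the real checking code:

```cpp
#include <cmath>
#include <cstddef>

// Sketch of the normalisation idea (not a merged fix): divide both sums by
// the array size so the tolerance applies to a per-element average rather
// than a value that grows with N. `sum`, `goldSum` and `epsi` are
// illustrative names.
bool checkDotNormalised(double sum, double goldSum, std::size_t N, double epsi)
{
  const double err = std::fabs(sum / double(N) - goldSum / double(N));
  return err < epsi;
}
```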

@tomdeakin
Contributor Author

Closing, as this doesn't sufficiently solve issue #20.

tomdeakin closed this on Jun 22, 2021