
Updated wheel release mechanism #511

Open: PhilipDeegan wants to merge 5 commits into master from releases

Conversation

@PhilipDeegan (Member) commented Feb 12, 2023

The biggest change now is:

  • There is only one C++ library - much simpler for build times

Fixes some things from #504:

  • no segfaults seen in a while
  • CSR deprecations assessed

OSX still has some odd issues with random number generation, and I'm guessing this is why some real tests are failing now.

For instance:

Traceback (most recent call last):
  File "/Users/runner/work/tick/tick/tick/linear_model/tests/logistic_regression_test.py", line 93, in test_LogisticRegression_fit
    self.assertGreater(
AssertionError: 0.6981863527347516 not greater than 0.699 : solver sgd with penalty binarsity and intercept False reached too low AUC

I changed the threshold from 0.7 to 0.69, but I'm not sure it really should be changed; maybe it's OK as is?
@Mbompr thoughts?

With this pull request, we should be able to publish new wheels to the test PyPI repo on every merge to master, and push to regular PyPI on a release/tag.
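
A rough sketch of that publishing flow in Python, rather than the actual CI config in this PR (the use of GITHUB_REF, twine, and the upload URLs are assumptions here, not something taken from the PR):

    # Hedged sketch of the intended flow; not the actual CI config in this PR.
    # GITHUB_REF is set by GitHub Actions; tag refs go to PyPI, everything
    # else (e.g. merges to master) goes to TestPyPI.
    import glob
    import os
    import subprocess

    ref = os.environ.get("GITHUB_REF", "")
    if ref.startswith("refs/tags/"):
        repo_url = "https://upload.pypi.org/legacy/"   # real PyPI on release/tag
    else:
        repo_url = "https://test.pypi.org/legacy/"     # TestPyPI on master merge

    wheels = glob.glob("dist/*.whl")
    subprocess.run(
        ["twine", "upload", "--repository-url", repo_url, *wheels],
        check=True)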

@PhilipDeegan PhilipDeegan marked this pull request as draft February 12, 2023 15:53
@PhilipDeegan PhilipDeegan force-pushed the releases branch 3 times, most recently from 186b969 to 673fe59 Compare February 12, 2023 19:29
@PhilipDeegan PhilipDeegan marked this pull request as ready for review October 26, 2023 10:25
@PhilipDeegan PhilipDeegan force-pushed the releases branch 23 times, most recently from 5b8c821 to 0f89463 Compare October 28, 2023 09:46
@PhilipDeegan PhilipDeegan force-pushed the releases branch 7 times, most recently from 53db44c to 6c4dc65 Compare October 29, 2023 16:11
@PhilipDeegan PhilipDeegan requested a review from Mbompr November 5, 2023 13:19
@Mbompr (Contributor) commented Nov 26, 2023

Sorry, I'm just discovering this thread :o. Wow, the amount of work here is huge!

Honestly, these tests involving randomness are very badly designed, but I didn't know that back then :).

About changing the AUC threshold from 0.7 to 0.69: this is not an issue at all. The only thing we are checking is that the model learns something significantly better than random guessing (which would give an AUC of 0.5), so 0.65 or 0.6 would also be fine.
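
A quick illustration of that baseline (not part of the test suite; scikit-learn's roc_auc_score is used here only for the demonstration): a predictor that scores at random lands around an AUC of 0.5, so any threshold comfortably above that still catches a model that isn't learning.

    # Illustration only: a random scorer gives an AUC close to 0.5.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.RandomState(42)
    y_true = rng.randint(0, 2, size=10000)         # arbitrary binary labels
    random_scores = rng.uniform(size=10000)        # scores carrying no signal
    print(roc_auc_score(y_true, random_scores))    # prints a value near 0.5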

About the Darwin test filtering, this is due to:

        # This is temporary
        # At the moment, seeds are not cross platform as distributions in
        # C++ depends on standard library shipped with compiler
        import os
        if os.name == 'posix':
            import platform
            if platform.system() == 'Darwin':
                # We check that we get the same as what was recorded
                np.testing.assert_almost_equal(seeded_sample_1, seeded_sample)

(note the "temporary"...). That is also badly designed. The purpose of the test was to check that, with the same seed, we obtain the same random numbers. But nothing really guarantees this, and I guess an update to Darwin's random number generation can easily break it.

I would thus simply remove this part of the test, as well as all the seeded_sample arguments. If you need help, tell me; otherwise I'm quite happy not to have to set up tick, cmake and swig on my machine :)
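
If some form of seed check were still wanted, a platform-independent alternative (not what is proposed above, which is plain removal) would be to compare two draws made with the same seed within the same run, instead of comparing against a recorded sample. A minimal sketch, where simulate is a hypothetical placeholder for whichever tick simulation helper the test actually uses:

    # Hypothetical sketch; `simulate` is a placeholder, not a real tick API.
    # Two draws with the same seed on the same platform should match, even
    # though the stream is not guaranteed to match across platforms.
    import numpy as np

    sample_a = simulate(seed=12345)
    sample_b = simulate(seed=12345)
    np.testing.assert_almost_equal(sample_a, sample_b)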
