
gh-115999: Add free-threaded specialization for STORE_SUBSCR #127169

Open · wants to merge 1 commit into main

Conversation

@colesbury (Contributor) commented Nov 22, 2024

The specialization only depends on the type, so there are no special thread-safety considerations there.

`STORE_SUBSCR_LIST_INT` needs to lock the list before modifying it.

`_PyDict_SetItem_Take2` already locks the dictionary internally using a critical section.

// avoid any potentially escaping calls (like PyStackRef_CLOSE) while the
// object is locked.
#ifdef Py_GIL_DISABLED
# define LOCK_OBJECT(op) PyMutex_LockFast(&(_PyObject_CAST(op))->ob_mutex)
@colesbury (Contributor, Author):

@mpage, I think this locking pattern will be slightly faster than using critical sections inline in bytecodes.c. The downside is that we will deopt if there's any lock contention, but I think that's an okay tradeoff.

If we decide to go with this pattern, we can update UNPACK_SEQUENCE_LIST to use it.

@mpage (Contributor):

That makes sense to me.

@colesbury (Contributor, Author):

@corona10 - I put my name down next to STORE_ATTR, but then got confused and worked on STORE_SUBSCR instead. Sorry!

@colesbury (Contributor, Author):

(STORE_ATTR is up for grabs)

@mpage (Contributor) left a review:

Nice! I left one comment/question inline and there's a compiler warning that we need to fix up.


Comment on lines 1822 to 1823
if (as_mapping && (as_mapping->mp_ass_subscript
== PyDict_Type.tp_as_mapping->mp_ass_subscript)) {
@mpage (Contributor):
I think it's possible for a data race to occur between the load of `as_mapping->mp_ass_subscript` and an assignment to the same field in another thread (potentially as a result of Python code assigning to the type's `__setitem__` attr). I'm not totally sure what we want to do here, but a couple of thoughts:

  1. We could use an atomic load of `as_mapping->mp_ass_subscript` and change the code in typeobject.c that updates the slots to use atomic stores.
  2. Or, since this only happens in stats builds and is benign (to the extent that a data race is benign), we could just ignore it.

@corona10 (Member):

> I put my name down next to STORE_ATTR, but then got confused and worked on STORE_SUBSCR instead. Sorry!

A race condition with the wrong lock :) That's fine; thank you for the work. I've updated the list of issues.

@corona10 (Member) left a review:

Please add test code in test_opcache.py:

class TestSpecializer(TestBase):
