RecursionError: maximum recursion depth exceeded while calling a Python object when compiling a transformer model with JIT #604

Sardhendu opened this issue Nov 15, 2023 · 1 comment

Sardhendu commented Nov 15, 2023

Environment info

  • adapter-transformers version: 3.2.1
  • Platform: Linux
  • Python version: 3.10
  • PyTorch version (GPU): 2.0.0 and 2.1.1
  • Transformers version: 4.35.2 and 4.28.1
  • Tensorflow version (GPU?): not required
  • Using GPU in script: yes
  • Using distributed or parallel set-up in script: no

Information

I am using the transformers CLIP model and compiling it with torch.jit. The tracing code works fine when adapter-transformers is not installed; the error appears only after I install the adapter-transformers package. I can't avoid adapter-transformers because other modules in my project depend on it.

To reproduce

Steps to reproduce the behavior:

  1. Create a conda environment: conda create -n exp python==3.10
  2. Activate the env: conda activate exp
  3. Install the packages: pip install torch transformers adapter-transformers ipython
  4. Enter the IPython shell: ipython
  5. Run the code below.
from transformers import CLIPModel
import torch
import torch.nn as nn


class Clip(nn.Module):
    """Thin wrapper exposing CLIP image features through a plain forward()."""

    def __init__(self):
        super().__init__()
        self.model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

    def forward(self, X: torch.Tensor):
        return self.model.get_image_features(X).view(-1, 512)


mymodel = Clip()
mymodel.eval()
mymodel.to(torch.device("cuda"))

# build a single random example image on the GPU
shape = [1, 3, 224, 224]
_random_example = torch.rand(shape, requires_grad=False, dtype=torch.float32).to(device=torch.device("cuda"))
x = torch.index_select(
    _random_example,
    dim=0,
    index=torch.Tensor([0]).to(dtype=torch.int32).to(device="cuda"),
)
print(x)
_inputs = {"X": x}

# tracing fails here with the RecursionError shown below
traced_model = torch.jit.trace(
    mymodel,
    example_inputs=(_inputs["X"],),
)
torch.jit.save(traced_model, "jit_model.pth")

Error:

File ~/miniconda3/envs/exp2/lib/python3.10/site-packages/torch/_jit_internal.py:758, in module_has_exports(mod)
    756 def module_has_exports(mod):
    757     for name in dir(mod):
--> 758         if hasattr(mod, name):
    759             item = getattr(mod, name)
    760             if callable(item):

File ~/miniconda3/envs/exp2/lib/python3.10/site-packages/transformers/adapters/model_mixin.py:316, in EmbeddingAdaptersWrapperMixin.active_embeddings(self)
    314 @property
    315 def active_embeddings(self):
--> 316     return self.base_model.active_embeddings

File ~/miniconda3/envs/exp2/lib/python3.10/site-packages/transformers/adapters/model_mixin.py:316, in EmbeddingAdaptersWrapperMixin.active_embeddings(self)
    314 @property
    315 def active_embeddings(self):
--> 316     return self.base_model.active_embeddings

    [... skipping similar frames: EmbeddingAdaptersWrapperMixin.active_embeddings at line 316 (2971 times)]

File ~/miniconda3/envs/exp2/lib/python3.10/site-packages/transformers/adapters/model_mixin.py:316, in EmbeddingAdaptersWrapperMixin.active_embeddings(self)
    314 @property
    315 def active_embeddings(self):
--> 316     return self.base_model.active_embeddings

File ~/miniconda3/envs/exp2/lib/python3.10/site-packages/transformers/modeling_utils.py:1117, in PreTrainedModel.base_model(self)
   1112 @property
   1113 def base_model(self) -> nn.Module:
   1114     """
   1115     `torch.nn.Module`: The main body of the model.
   1116     """
-> 1117     return getattr(self, self.base_model_prefix, self)

File ~/miniconda3/envs/exp2/lib/python3.10/site-packages/torch/nn/modules/module.py:1695, in Module.__getattr__(self, name)
   1693     if name in modules:
   1694         return modules[name]
-> 1695 raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")

RecursionError: maximum recursion depth exceeded while calling a Python object

Expected behavior

The expected outcome is a properly saved jit_model.pth. I have tried increasing the recursion limit to 5000 and it still doesn't work. Ideally, adapters/model_mixin.py should not be involved at all, since my Clip model depends only on transformers; it looks like the adapter-transformers CLIP wrapper is being invoked instead.
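
For reference, raising the limit looks like this (a sketch using the standard sys.setrecursionlimit call):

import sys

# raising the interpreter limit only delays the failure; the property
# recursion shown in the traceback above is unbounded
sys.setrecursionlimit(5000)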

Any help here would be appreciated. Thank you so much!!


TimoImhof commented Dec 6, 2023

Hi @Sardhendu, thanks for opening this issue.

The adapter-transformers package is deprecated; the new, actively maintained package we provide is adapters (see #584 for more information).
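
For reference, migrating is just pip uninstall adapter-transformers followed by pip install adapters, plus the init() call shown in my snippet below.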

However, I tried to reproduce the error with the new package, and it still occurs; here is my slimmed-down version that reproduces it with adapters:

from transformers import CLIPModel
import torch
from adapters import init

testmodel = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
testmodel.eval()
# enable adapter support on the stock transformers model
init(testmodel)

shape = [1, 3, 224, 224]
_random_example = torch.rand(shape, requires_grad=False, dtype=torch.float32)
x = torch.index_select(
    _random_example,
    dim=0,
    index=torch.Tensor([0]).to(dtype=torch.int32),
)
print(x)
_inputs = {"X": x}

# tracing fails with the same RecursionError
traced_model = torch.jit.trace(
    testmodel,
    example_inputs=(_inputs["X"],),
)
torch.jit.save(traced_model, "jit_model.pth")

Since this problem still exists in the new package, I will look into it and get back to you.
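
From the traceback, my first guess (not yet verified) is a self-referential property: torch.jit's module_has_exports probes every attribute with hasattr, which evaluates the active_embeddings property; that property delegates to self.base_model, and PreTrainedModel.base_model falls back to returning self when the submodule named by base_model_prefix is missing, so the property ends up calling itself until the interpreter gives up. A minimal sketch of the pattern, independent of transformers (all names here are illustrative):

class Wrapper:
    # hypothetical stand-in for a transformers model class
    base_model_prefix = "clip"  # no submodule with this name is ever set

    @property
    def base_model(self):
        # getattr's default returns the wrapper itself, because
        # self.clip does not exist
        return getattr(self, self.base_model_prefix, self)

    @property
    def active_embeddings(self):
        # base_model resolves to self, so this calls itself unboundedly
        return self.base_model.active_embeddings

Wrapper().active_embeddings  # raises RecursionError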
If you have gained any new insights since you posted this issue, please let me know.

Cheers,
Timo
