
Added modules for embedding head #378

Merged 5 commits into master on Nov 11, 2024
Conversation

floccinauc (Collaborator)

No description provided.

@floccinauc self-assigned this on Nov 10, 2024
@mosheraboh (Collaborator) left a comment:

Looks good. Minor comments inline.

@@ -150,3 +151,149 @@ def __init__(
    def forward(self, x: Tensor) -> Tensor:
        x = self.classifier(x)
        return x


class EncoderEmbeddingOutputHead(nn.Module):
mosheraboh (Collaborator):

Let's rename this to something more general. Maybe SequenceEmbeddingHead?

floccinauc (Collaborator, author):

EncoderEmbeddingOutputHead is mainly placeholder code for when I start using the actual head for tasks. It works, but I still need to plan how to use it, given all our use cases. I suggest we leave this for now, and I'll incorporate your comments into the next PRs for this feature. Right now, I'm only using ModularPooling1D.
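For context, here is a minimal sketch of a pooling-based embedding head along the lines described above. Apart from the mention of ModularPooling1D, the names and wiring below are assumptions for illustration, not the PR's actual implementation; Pooling1DSketch is only a stand-in that mimics what a ModularPooling1D-style component might do.

# Illustrative sketch only: pool encoder outputs [batch, seq, dim] into one
# vector per sample, then project to a fixed embedding size.
# Class names and wiring are assumptions, not the PR's code.
import torch
import torch.nn as nn
from torch import Tensor


class Pooling1DSketch(nn.Module):
    """Stand-in for a ModularPooling1D-style module: reduces the sequence dim."""

    def __init__(self, pooling: str = "mean") -> None:
        super().__init__()
        self.pooling = pooling

    def forward(self, x: Tensor) -> Tensor:  # x: [batch, seq, dim]
        if self.pooling == "max":
            return x.max(dim=1).values
        return x.mean(dim=1)


class EmbeddingHeadSketch(nn.Module):
    """Pooling followed by a linear projection to the embedding size."""

    def __init__(self, in_dim: int, embedding_size: int, pooling: str = "mean") -> None:
        super().__init__()
        self.pool = Pooling1DSketch(pooling)
        self.proj = nn.Linear(in_dim, embedding_size)

    def forward(self, x: Tensor) -> Tensor:
        return self.proj(self.pool(x))


# Usage: a batch of 4 sequences, 16 tokens each, 128-dim encoder outputs.
x = torch.randn(4, 16, 128)
head = EmbeddingHeadSketch(in_dim=128, embedding_size=64)
print(head(x).shape)  # torch.Size([4, 64])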

    num_classes=num_classes,
).classifier

if pooling is not None:
mosheraboh (Collaborator):

Does it make sense for pooling to be None? Isn't pooling the whole point?
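To make the question concrete, a hedged sketch of the optional-pooling pattern under discussion (illustrative names only, not the PR's code): when pooling is None the head would pass the sequence through unchanged, which is exactly the case being questioned.

# Illustrative sketch of the "pooling may be None" pattern (not the PR's code).
import torch
import torch.nn as nn


class MeanPool1D(nn.Module):
    # Average over the sequence dimension: [batch, seq, dim] -> [batch, dim].
    def forward(self, x):
        return x.mean(dim=1)


def build_pooling(pooling):
    # With pooling=None the features are passed through unchanged, so the head
    # no longer reduces the sequence, which is the point in question.
    return MeanPool1D() if pooling is not None else nn.Identity()


print(build_pooling("mean")(torch.randn(2, 8, 16)).shape)  # torch.Size([2, 16])
print(build_pooling(None)(torch.randn(2, 8, 16)).shape)    # torch.Size([2, 8, 16])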

embedding_size: int,
layers: List[int],
dropout: float,
num_classes: int,
mosheraboh (Collaborator):

Why do we need num_classes here? Maybe ClassifierMLP could allow skipping this layer?
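One way to read this suggestion, as a hedged sketch: if the head (or ClassifierMLP) accepted num_classes=None, the final classification layer could simply be skipped and the module would return the last hidden representation. The names and interface below are assumptions, not ClassifierMLP's actual API.

# Illustrative sketch of "skip the classification layer when num_classes is None".
# ClassifierMLP's real interface may differ; this is an assumption.
from typing import List, Optional

import torch.nn as nn
from torch import Tensor


class MLPHeadSketch(nn.Module):
    def __init__(
        self,
        embedding_size: int,
        layers: List[int],
        dropout: float,
        num_classes: Optional[int] = None,
    ) -> None:
        super().__init__()
        blocks: List[nn.Module] = []
        in_dim = embedding_size
        for hidden in layers:
            blocks += [nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(dropout)]
            in_dim = hidden
        # Add the final classification layer only when num_classes is given;
        # otherwise the head outputs the last hidden representation (an embedding).
        if num_classes is not None:
            blocks.append(nn.Linear(in_dim, num_classes))
        self.net = nn.Sequential(*blocks)

    def forward(self, x: Tensor) -> Tensor:
        return self.net(x)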

mosheraboh previously approved these changes on Nov 11, 2024.
floccinauc merged commit 5f95441 into master on Nov 11, 2024. 5 checks passed.