Added modules for embedding head #378
Conversation
Looks good.
Minor comments inline.
@@ -150,3 +151,149 @@ def __init__(
    def forward(self, x: Tensor) -> Tensor:
        x = self.classifier(x)
        return x


class EncoderEmbeddingOutputHead(nn.Module):
Let's rename it to something more general.
Maybe SequenceEmbeddingHead?
EncoderEmbeddingOutputHead is mainly placeholder code for when I start using the actual head for tasks. It works, but I still need to plan how to use it given all our use cases. I suggest we leave this for now, and I'll incorporate your comments into the next PRs for this.
Right now, I'm only using ModularPooling1D.
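For context, here is a minimal sketch of what a pooling-only embedding head could look like. MeanPooling1D and SequenceEmbeddingHead are hypothetical stand-ins for illustration; the actual ModularPooling1D signature is not shown in this diff.

```python
import torch
from torch import nn, Tensor


class MeanPooling1D(nn.Module):
    """Stand-in for a pooling module: mean-pools token embeddings over the sequence axis."""

    def forward(self, x: Tensor) -> Tensor:
        # x: (batch, seq_len, embedding_size) -> (batch, embedding_size)
        return x.mean(dim=1)


class SequenceEmbeddingHead(nn.Module):
    """Pools per-token embeddings into a single sequence-level embedding."""

    def __init__(self, pooling: nn.Module) -> None:
        super().__init__()
        self.pooling = pooling

    def forward(self, x: Tensor) -> Tensor:
        return self.pooling(x)


head = SequenceEmbeddingHead(pooling=MeanPooling1D())
tokens = torch.randn(4, 128, 256)  # (batch, seq_len, embedding_size)
print(head(tokens).shape)          # torch.Size([4, 256])
```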
            num_classes=num_classes,
        ).classifier

        if pooling is not None:
Does it make sense for pooling to be None? Isn't pooling the whole point?
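A short sketch of the two readings behind this question, assuming a simplified constructor. EmbeddingHeadSketch is hypothetical and not code from the PR.

```python
from typing import Optional

from torch import nn, Tensor


class EmbeddingHeadSketch(nn.Module):
    """Illustrates the two readings of the `pooling` argument raised above."""

    def __init__(self, pooling: Optional[nn.Module] = None) -> None:
        super().__init__()
        # Reading A: pooling is optional; None means "pass per-token embeddings through".
        self.pooling = pooling if pooling is not None else nn.Identity()
        # Reading B: pooling is the whole point of the head, so fail fast instead:
        # if pooling is None:
        #     raise ValueError("an embedding head requires a pooling module")

    def forward(self, x: Tensor) -> Tensor:
        return self.pooling(x)
```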
        embedding_size: int,
        layers: List[int],
        dropout: float,
        num_classes: int,
Why do we need num_classes here? Maybe ClassifierMLP could allow skipping this layer?
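A hedged sketch of what "skipping this layer" could mean in practice, assuming a simplified ClassifierMLP-like module. ClassifierMLPSketch and its num_classes=None behavior are assumptions, not the repo's actual API.

```python
from typing import List, Optional

from torch import nn, Tensor


class ClassifierMLPSketch(nn.Module):
    """Hypothetical variant in which num_classes=None skips the final classification
    layer, so an embedding head would not need to pass num_classes at all."""

    def __init__(
        self,
        embedding_size: int,
        layers: List[int],
        dropout: float,
        num_classes: Optional[int] = None,
    ) -> None:
        super().__init__()
        blocks: List[nn.Module] = []
        in_features = embedding_size
        for hidden in layers:
            blocks += [nn.Linear(in_features, hidden), nn.ReLU(), nn.Dropout(dropout)]
            in_features = hidden
        if num_classes is not None:
            # Only append the classification layer when classes are requested.
            blocks.append(nn.Linear(in_features, num_classes))
        self.classifier = nn.Sequential(*blocks)

    def forward(self, x: Tensor) -> Tensor:
        return self.classifier(x)
```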