
Question about the layer representation from the first token of the text x #2

@KnightZhang625

Description

Hello,

Thanks for the easy-to-read code. I have a question about it; could you please help me?

In your paper, you said "we use the hidden state of the first token in text x" (between equations (6) and (7)). From my understanding, you are using the first token of the text x, which should be the `[CLS]` token. However, in your code, `representation, _ = pooling_layer(hidden_states)` (line 730 in model.py), it looks like you are using the output of the layer attention network, which has shape `[batch_size, seq_length, hidden_size]`, and passing it through the pooling layer to get a layer representation of shape `[batch_size, hidden_size]`. If so, could you please tell me why you chose this approach instead of the method described in the paper?
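For concreteness, here is a minimal sketch of the two variants as I understand them. The shapes mirror the ones above; `MeanPooling` is only a hypothetical stand-in, since I have not checked what the repo's actual `pooling_layer` computes:

```python
import torch
import torch.nn as nn

# Assumed shapes from the issue: hidden_states comes out of the
# layer-attention network with shape [batch_size, seq_length, hidden_size].
batch_size, seq_length, hidden_size = 8, 128, 768
hidden_states = torch.randn(batch_size, seq_length, hidden_size)

# Variant 1 (paper): take the hidden state of the first token ([CLS])
# as the text representation -> [batch_size, hidden_size].
cls_representation = hidden_states[:, 0, :]

# Variant 2 (code, as I read line 730 of model.py): pool over the whole
# sequence to get [batch_size, hidden_size]. MeanPooling below is just a
# placeholder; the repo's real pooling_layer may do something different.
class MeanPooling(nn.Module):
    def forward(self, hidden_states):
        # Average over the sequence dimension; also return the raw states,
        # matching the `representation, _ = ...` unpacking in the code.
        return hidden_states.mean(dim=1), hidden_states

pooling_layer = MeanPooling()
representation, _ = pooling_layer(hidden_states)

print(cls_representation.shape)  # torch.Size([8, 768])
print(representation.shape)      # torch.Size([8, 768])
```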

Please forgive me if I have misunderstood this! And thanks for the great work!

Jiaxin
