
Some questions about evaluation metrics #5

Open
tom68-ll opened this issue Dec 18, 2023 · 0 comments

Comments

@tom68-ll
Dear Author,
We noticed that the SFNET code evaluates on semSQL's intermediate representation rather than on the final SQL, even though producing SQL is the ultimate goal of the semantic parsing task. We therefore followed the method provided in IRNET to convert semSQL into formal SQL, and obtained the following results on the SQL Exact Match metric:
ACC_a = 54.67; ACC_w = 55.78; BWT = -0.46; FWT = 41.09
While some variation in the results is expected, we observed significant changes in the model's performance on the BWT and FWT metrics (originally BWT = -1.0; FWT = 45.9). Notably, the catastrophic forgetting almost disappeared once we measured at the SQL level. Could you explain why this is the case?
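For context on what we are comparing: below is a minimal sketch of the standard continual-learning definitions of BWT and FWT that we assumed when recomputing the metrics (the accuracy matrix `R` and baseline `b` are hypothetical illustrations, not values from the SFNET code; the repo's exact variants may differ).

```python
import numpy as np

# R[i, j] = accuracy on task j after finishing training on task i.
# b[j]    = accuracy of a randomly initialized model on task j (FWT baseline).
# These are the usual definitions from the continual-learning literature;
# they are an assumption about how SFNET computes its reported numbers.

def backward_transfer(R):
    """BWT: average change on earlier tasks after training on the last task.
    Negative values indicate catastrophic forgetting."""
    T = R.shape[0]
    return float(np.mean([R[T - 1, i] - R[i, i] for i in range(T - 1)]))

def forward_transfer(R, b):
    """FWT: average zero-shot accuracy on each task before training on it,
    relative to the random-initialization baseline."""
    T = R.shape[0]
    return float(np.mean([R[i - 1, i] - b[i] for i in range(1, T)]))

# Toy example with two tasks (illustrative numbers only):
R = np.array([[0.90, 0.10],
              [0.80, 0.70]])
b = np.array([0.00, 0.00])
print(backward_transfer(R))   # accuracy on task 0 dropped after learning task 1
print(forward_transfer(R, b)) # zero-shot accuracy on task 1 above the baseline
```

A drop in |BWT| after converting semSQL to SQL would mean the per-task accuracies `R[i, i]` and `R[T-1, i]` moved closer together under SQL Exact Match, which is why the forgetting appears to shrink.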
