
Expand lm eval testing to many models #340

Closed

Conversation

@robertgshaw2-neuralmagic (Collaborator) commented on Jun 27, 2024

SUMMARY:

  • updated the lm-eval configs for all models to use 1000 samples
  • run the lm-eval smoke test for the large model on an H100 for remote-push and nightly (a sketch of this kind of run follows below)
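
The smoke run described in the summary maps naturally onto the lm-evaluation-harness Python API. A minimal sketch, assuming the `lm_eval` package with the vLLM backend; the model id and task are illustrative assumptions, not this repository's actual config:

```python
# Minimal sketch of an lm-eval smoke run with a 1000-sample limit,
# assuming the lm-evaluation-harness (lm_eval) package and its vLLM backend.
# The model id and task below are illustrative, not this repo's config.
import lm_eval

results = lm_eval.simple_evaluate(
    model="vllm",
    model_args="pretrained=mistralai/Mistral-7B-Instruct-v0.2",
    tasks=["gsm8k"],
    limit=1000,  # this PR bumps each model's config to 1000 samples
)
print(results["results"])  # per-task metrics, e.g. exact_match
```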

@andy-neuma (Member) commented:

Wondering if we should have "lm eval" configs, much like how we transitioned to "test configs".
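
If the project did adopt dedicated lm-eval configs, one way to picture them is a small schema living alongside the existing test configs. A rough sketch; every field name here is hypothetical, not the repository's actual layout:

```python
# Hypothetical schema for a standalone "lm eval" config, mirroring the
# test-config pattern. None of these field names are the repo's actual schema.
from dataclasses import dataclass, field

@dataclass
class LMEvalConfig:
    model: str                      # HF model id under test
    tasks: list[str]                # lm-eval-harness task names
    limit: int = 1000               # samples per task, per this PR
    rtol: float = 0.05              # tolerance when comparing metrics
    expected_metrics: dict[str, float] = field(default_factory=dict)
```

Keeping the expected metrics in the config would let one runner load either kind of file and assert results within tolerance.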
