Demonstrate script testing/debugging in the script mode walkthroughs #33

Open

athewsey opened this issue May 10, 2023 · 0 comments

Labels: enhancement (New feature or request)
Feature request

We already provide example shell commands to invoke and test training scripts for all variants of the migration challenge. For example, from the SKLearn variant:

```
!python3 src/main.py \
    --train ./data/train \
    --test ./data/test \
    --model_dir ./data/model \
    --class_names {class_names_str} \
    --n_estimators=100 \
    --min_samples_leaf=3
```
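A command like this assumes the training script exposes those parameters through a CLI entrypoint. A minimal sketch of what `src/main.py`'s argument parsing might look like (the exact structure of the script is an assumption here; the `SM_CHANNEL_*` and `SM_MODEL_DIR` environment-variable defaults follow SageMaker script-mode conventions, which is what lets the same script run both locally and inside a training job):

```python
# Hypothetical sketch of the argparse entrypoint the debug command assumes.
# In a SageMaker training job, the container sets SM_CHANNEL_TRAIN,
# SM_CHANNEL_TEST and SM_MODEL_DIR, so the CLI flags become optional there.
import argparse
import os

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Example training entrypoint")
    parser.add_argument("--train", default=os.environ.get("SM_CHANNEL_TRAIN"))
    parser.add_argument("--test", default=os.environ.get("SM_CHANNEL_TEST"))
    parser.add_argument("--model_dir", default=os.environ.get("SM_MODEL_DIR"))
    parser.add_argument("--class_names", type=str)
    parser.add_argument("--n_estimators", type=int, default=100)
    parser.add_argument("--min_samples_leaf", type=int, default=3)
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    print(args)
```

Because every argument falls back to an environment variable or default, running the script locally with explicit flags (as in the `!python3` command above) exercises exactly the same code path a training job would.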

Since this is the recommended debugging workflow, we should also demonstrate it in the script mode walkthroughs by adding equivalent commands to the 'SageMaker' variants of these notebooks, before the Estimator gets created.
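One reason the local debug command slots naturally in before the Estimator is that SageMaker framework containers pass the Estimator's hyperparameters to the script as command-line arguments, so the two stay in sync. A sketch of that correspondence (the `to_cli_args` helper is hypothetical, purely for illustration):

```python
# Illustrative only: build the local debug command from the same
# hyperparameters dict that would later be passed to the Estimator.
import shlex

def to_cli_args(hyperparameters):
    """Render a hyperparameters dict as script-mode style --name value pairs."""
    args = []
    for name, value in hyperparameters.items():
        args += [f"--{name}", str(value)]
    return args

# The same dict would later go to the Estimator's `hyperparameters` parameter.
hyperparameters = {"n_estimators": 100, "min_samples_leaf": 3}

cmd = ["python3", "src/main.py", "--train", "./data/train"] + to_cli_args(hyperparameters)
print(shlex.join(cmd))
```

Keeping a single dict as the source of truth means the in-notebook test and the eventual training job run with identical parameters.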

This will help these notebooks illustrate the workflow of translating from in-notebook code to a notebook-plus-training-job pattern, rather than just showing the final result.

Background

Today, we use in-notebook shell commands as the recommended script debugging workflow for the migration challenge, because our options are somewhat constrained in a workshop setting:

We talk about these other options in the post-challenge wrap-up, but don't want to confuse the issue by introducing them up-front in the code.
