
training setdata #15

Open
zi1zi opened this issue Apr 24, 2023 · 1 comment

Comments


zi1zi commented Apr 24, 2023

I use LF-Font for training and have the following questions:

  1. Are all the Unicode codepoints in the provided example data/chn/train_chars.json used for training? The number of characters seems very large. Shouldn't fewer samples be enough?
  2. At this link, ref_imgs = batch["ref_imgs"].cuda(): why is there a ref_imgs field in the batch?
Collaborator

8uos commented Jun 16, 2023

Hi,

  1. Yes, we use all the characters in data/chn/train_chars.json for training. As mentioned in our paper, our method uses a large number of characters during the training phase.
  2. We use a custom collate_fn for the DataLoader so that each batch is a dictionary. See here.
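The idea in point 2 can be sketched as follows. This is a minimal illustration, not the repository's actual code: the dataset class, field names other than `ref_imgs`, and the toy tensor shapes are assumptions made up for the example.

```python
import torch
from torch.utils.data import DataLoader, Dataset

class ToyFontDataset(Dataset):
    """Hypothetical dataset: each sample is a dict of tensors."""
    def __init__(self, n=8):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, idx):
        return {
            "trg_imgs": torch.zeros(1, 32, 32),        # one target glyph image
            "ref_imgs": torch.zeros(3, 1, 32, 32),     # three reference glyph images
        }

def dict_collate_fn(samples):
    # Stack each field across the batch, preserving the dict structure,
    # so the training loop can do batch["ref_imgs"].
    return {key: torch.stack([s[key] for s in samples]) for key in samples[0]}

loader = DataLoader(ToyFontDataset(), batch_size=4, collate_fn=dict_collate_fn)
batch = next(iter(loader))
print(batch["ref_imgs"].shape)  # torch.Size([4, 3, 1, 32, 32])
```

Because the collate_fn returns a dict rather than the default tuple of tensors, the batch exposes named fields such as `ref_imgs`, which is why `batch["ref_imgs"].cuda()` works in the training loop.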

Sorry for the late reply.
