
Using Own Dataset #4

Open
sheshap opened this issue Sep 14, 2022 · 6 comments

Comments

@sheshap

sheshap commented Sep 14, 2022

Hi

Very interesting work.

I have a dataset of points and colors stored as .txt files in the format below:
X,Y,Z,R,G,B

Could you please suggest how to use your method?

Thanks in advance

@Ideefixze
Collaborator

Hi,

do you want to use it for training? If so, you will also need images and camera pose matrices. To prepare a training dataset, see and use:
https://github.com/gmum/points2nerf/blob/main/dataset_generation_scripts/generate.py

An additional description is in the main README.md.

If it is only for inference, you will probably need to load it directly in code and use
https://github.com/gmum/points2nerf/blob/main/utils.py#L49

It takes entry["data"], which is a point cloud of 2048 points (X,Y,Z,R,G,B), and returns a code from which the hypernetwork generates a NeRF. Your data would need to be very similar (in scale and shape) to the existing ShapeNet objects from the existing pre-trained models (car, plane, or chair).
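For example, turning one of the X,Y,Z,R,G,B .txt files from the question into a 2048-point cloud could look roughly like this. This is only a NumPy sketch: the file name, the unit-sphere normalization, and the 0-255 RGB assumption are illustrative choices, not the repo's exact preprocessing.

```python
import numpy as np

# Toy stand-in for one of the X,Y,Z,R,G,B .txt files (the name is hypothetical).
with open("scan.txt", "w") as f:
    for _ in range(5000):
        x, y, z = np.random.uniform(-3, 3, size=3)
        r, g, b = np.random.randint(0, 256, size=3)
        f.write(f"{x},{y},{z},{r},{g},{b}\n")

points = np.loadtxt("scan.txt", delimiter=",")   # shape (N, 6)

# Subsample (or pad by repetition if N < 2048) to exactly 2048 points.
idx = np.random.choice(len(points), size=2048, replace=len(points) < 2048)
cloud = points[idx].astype(np.float32)

# Roughly match ShapeNet scale: center the XYZ and fit it into a unit sphere.
cloud[:, :3] -= cloud[:, :3].mean(axis=0)
cloud[:, :3] /= np.linalg.norm(cloud[:, :3], axis=1).max()

# Bring 0-255 RGB values into [0, 1].
cloud[:, 3:] /= 255.0
```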

@sheshap
Author

sheshap commented Sep 14, 2022

Hi,

Yes, I would like to use it for training (my dataset has different classes).

I am not sure if I have camera pose matrices, but I do have intrinsic matrices.

Thanks

@sheshap
Author

sheshap commented Sep 20, 2022

My mobile camera's extrinsic matrix is the same for all images.
[image: screenshot of the extrinsic matrix]

Could you please clarify whether the extrinsic matrix needs to be different?

@Ideefixze
Collaborator

Yes, it should be different if the images are taken from different angles.
Otherwise the NeRF would have no idea from which position it is looking, so you will need to change your dataset.
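A common way to get a distinct camera-to-world extrinsic per image is to place cameras around the object and build a "look-at" pose for each. The sketch below is a generic NumPy construction, not the repo's own convention; the axis convention (right, up, -forward columns, as in OpenGL-style NeRF datasets) and the circular camera path are assumptions.

```python
import numpy as np

def look_at_pose(eye, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    """4x4 camera-to-world matrix for a camera at `eye` looking at `target`."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    pose = np.eye(4)
    # Rotation columns: right, up, -forward (an OpenGL-style convention).
    pose[:3, 0] = right
    pose[:3, 1] = true_up
    pose[:3, 2] = -forward
    pose[:3, 3] = eye  # camera position in world coordinates
    return pose

# One distinct pose per image: 8 cameras on a circle around the object.
poses = [
    look_at_pose(np.array([2 * np.cos(a), 2 * np.sin(a), 1.0]))
    for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)
]
```

Each resulting matrix has an orthonormal rotation block and a unique translation, so every image carries its own viewpoint.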

@shuyueW1991
Copy link

Hi,

do you want to use it for training? If so, you will also need images and camera matrix poses. To prepare training dataset see and use: https://github.com/gmum/points2nerf/blob/main/dataset_generation_scripts/generate.py

Additional description is in main README.md

If only for inference, you will need to probably load it directly in code and use https://github.com/gmum/points2nerf/blob/main/utils.py#L49

It takes entry["data"] which is a point of cloud of 2048 points (X,Y,Z, R,G,B) and returns code from which Hypernetwork generates NeRF. Data would need to be very similar (scale, shape) to existing ShapeNet objects from existing pre-trained models (car, plane or chair).

Hi, I am also interested in the point clouds. I am wondering if there are corresponding car/chair/plane point clouds. Right now the ds.zip package contains only the sampled .npz files, and the 'shapenet' directory is empty.

@Ideefixze
Collaborator

@shuyueW1991

In the .npz files you should have, for each object: a point cloud, images, and poses. The ShapeNet directory, which is needed for generating data or calculating metrics, needs to be downloaded from the official ShapeNet source.
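Inspecting one of those per-object archives might look roughly like this. Note this is a sketch: the thread only confirms the "data" key, so the "images" and "poses" key names and all array shapes below are assumptions, and the toy file is synthetic.

```python
import numpy as np

# Toy .npz with the kind of contents the thread describes: per object a
# 2048x6 point cloud, some images, and camera poses. Key names other than
# "data" (and all shapes) are assumptions, not the repo's exact layout.
np.savez(
    "object.npz",
    data=np.random.rand(2048, 6).astype(np.float32),
    images=np.random.rand(4, 128, 128, 3).astype(np.float32),
    poses=np.random.rand(4, 4, 4).astype(np.float32),
)

entry = np.load("object.npz")
for key in entry.files:                # list what the object archive holds
    print(key, entry[key].shape)
cloud = entry["data"]                  # (2048, 6): X,Y,Z,R,G,B per point
```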
