Hi,

I am trying to fine-tune SAM on custom images and masks, but I am struggling and am hoping someone can point me in the right direction.

I have been referencing this code:
https://github.com/bnsreenu/python_for_microscopists/blob/master/331_fine_tune_SAM_mito.ipynb
which is based on this:
https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SAM/Fine_tune_SAM_(segment_anything)_on_a_custom_dataset.ipynb

I cannot get the training to work, as I get this message at the forward pass step:

'The input_points must be a 3D tensor. Of shape batch_size, nb_boxes, 4.', ' got torch.Size([2, 4]).'

I think the input_boxes is wrong somehow?

The images I am using are colour PNG images rather than the tiff images in the reference code, and they show with 3 channels here.
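For reference, here is a minimal sketch of the nesting the processor expects for input_boxes: one list per image, containing a list of boxes, each box being [x_min, y_min, x_max, y_max]. The checkpoint is the one used in the referenced notebooks; the dummy image and box values are made up for illustration:

```python
import numpy as np
from transformers import SamProcessor

processor = SamProcessor.from_pretrained("facebook/sam-vit-base")

# dummy 3-channel image and a single made-up box for one image
image = np.zeros((256, 256, 3), dtype=np.uint8)
box = [60.0, 80.0, 180.0, 200.0]  # [x_min, y_min, x_max, y_max]

inputs = processor(image, input_boxes=[[box]], return_tensors="pt")
print(inputs["input_boxes"].shape)  # torch.Size([1, 1, 4]) -> (batch_size, nb_boxes, 4)
```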
My SAMDataset code is:
```python
import numpy as np
from torch.utils.data import Dataset

class SAMDataset(Dataset):
    """
    This class is used to create a dataset that serves input images and masks.
    It takes a dataset and a processor as input and overrides the __len__ and
    __getitem__ methods of the Dataset class.
    """
    def __init__(self, dataset, processor):
        self.dataset = dataset
        self.processor = processor

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        item = self.dataset[idx]
        image = item["image"]
        ground_truth_mask = np.array(item["label"])

        # get bounding box prompt
        # prompt = get_bounding_box(ground_truth_mask)
        prompt = item["bounding_box"]

        # prepare image and prompt for the model
        inputs = self.processor(image, input_boxes=[[prompt]], return_tensors="pt")

        # remove batch dimension which the processor adds by default
        inputs = {k: v.squeeze(0) for k, v in inputs.items()}

        # add ground truth segmentation
        inputs["ground_truth_mask"] = ground_truth_mask

        return inputs
```
and this is where I run into trouble...
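A quick shape check (assuming the SAMDataset above, with train_dataset and processor already defined as in the notebooks) shows where the torch.Size([2, 4]) in the error comes from: each item should contribute an input_boxes tensor of shape [1, 4], so a DataLoader with batch_size=2 stacks them to [2, 1, 4]; if each item instead yields a flat [4] box, the batch comes out as the offending [2, 4]:

```python
from torch.utils.data import DataLoader

# train_dataset and processor are assumed to exist, as in the notebooks
sam_dataset = SAMDataset(dataset=train_dataset, processor=processor)

item = sam_dataset[0]
print(item["input_boxes"].shape)  # expect torch.Size([1, 4]) per item

loader = DataLoader(sam_dataset, batch_size=2, shuffle=True)
batch = next(iter(loader))
print(batch["input_boxes"].shape)  # expect torch.Size([2, 1, 4]); [2, 4] reproduces the error
```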
Edit: Actually, I was able to resolve my issue, and I suspect the OP is having the same problem. Mine was caused by a custom bounding_box function I wrote: it was returning multiple boxes, but the individual boxes were not in the required structure [[x_min, y_min, x_max, y_max]].
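For anyone hitting the same thing, here is a sketch of the get_bounding_box helper from the referenced tutorial (it is commented out in the dataset above). It returns a single flat [x_min, y_min, x_max, y_max] box, which the dataset then wraps as [[prompt]] so the processor receives one list of boxes per image:

```python
import numpy as np

def get_bounding_box(ground_truth_map):
    # find the foreground pixels of the mask
    y_indices, x_indices = np.where(ground_truth_map > 0)
    x_min, x_max = np.min(x_indices), np.max(x_indices)
    y_min, y_max = np.min(y_indices), np.max(y_indices)
    # add a small random perturbation to the box, as in the tutorial
    H, W = ground_truth_map.shape
    x_min = max(0, x_min - np.random.randint(0, 20))
    x_max = min(W, x_max + np.random.randint(0, 20))
    y_min = max(0, y_min - np.random.randint(0, 20))
    y_max = min(H, y_max + np.random.randint(0, 20))
    # a single flat box; the caller nests it as [[...]] before the processor
    return [x_min, y_min, x_max, y_max]
```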