Results are bad when training cityscapes on my own #100
Hi, I have met the same problem as you. Have you solved it? Waiting for your reply, thank you!
Hi. I couldn't solve the problem. I moved on to this implementation - https://github.com/oandrienko/fast-semantic-segmentation - and it works well.
Would you share your Cityscapes dataset path, or the types of self.image_list and self.label_list?
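Regarding the question above: in list-based loaders like this one, self.image_list and self.label_list are usually plain Python lists of file-path strings, read from a text file with one "image_path label_path" pair per line. The function name below is illustrative, not the repo's actual API - a minimal sketch, assuming that two-column list format:

```python
def read_labeled_image_list(list_path):
    """Return parallel lists of image and label paths (both list[str]).

    Assumes each non-empty line of the list file holds
    "image_path label_path" separated by whitespace.
    """
    image_list, label_list = [], []
    with open(list_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            image_path, label_path = line.split()[:2]
            image_list.append(image_path)
            label_list.append(label_path)
    return image_list, label_list
```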
Hi
1.)
I'm using the model for Cityscapes. With the pre-trained model I get the expected mIoU. But when I train with my own custom list (75% of the data), I only get an mIoU of 2-3%. My loss decreases to 0.6-0.7, yet the mIoU is still really bad. Can somebody help with this?
Train command I used:
python train.py --update-mean-var --train-beta-gamma
--random-scale --random-mirror --dataset cityscapes --filter-scale 1
All other parameters are the same: LR 5e-4, batch size 8, etc.
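One common cause of a 2-3% mIoU despite a decreasing loss (I can't confirm it is the cause here) is feeding raw Cityscapes label IDs (0-33, the *_labelIds.png files) to a model that expects the 19 train IDs with 255 as the ignore label. A hedged sketch of the standard remapping, following the official cityscapesScripts labels table:

```python
import numpy as np

# Mapping from raw Cityscapes label IDs to the 19 train IDs;
# every ID not listed here (void classes) becomes 255 (ignore).
ID_TO_TRAINID = {
    7: 0, 8: 1, 11: 2, 12: 3, 13: 4, 17: 5, 19: 6, 20: 7, 21: 8,
    22: 9, 23: 10, 24: 11, 25: 12, 26: 13, 27: 14, 28: 15,
    31: 16, 32: 17, 33: 18,
}

def to_train_ids(label):
    """Map a raw Cityscapes label array to 19 train IDs (255 = ignore)."""
    out = np.full_like(label, 255)
    for raw_id, train_id in ID_TO_TRAINID.items():
        out[label == raw_id] = train_id
    return out
```

If the custom list points at raw label images without this remapping (or the pre-made lists used *_labelTrainIds.png while the custom one does not), the loss can still fall while nearly every class is scored wrong at evaluation time.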
2.) Using the provided trained weights I get the reported mIoU (67), but when I use them to fine-tune my model the initial loss is ~10-11. Shouldn't it be around 0.5-1, considering the model was already trained on the same data?
I also notice that as the loss goes down, the test accuracy decreases.
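For what it's worth, loss and mIoU can legitimately diverge: cross-entropy is averaged over pixels, while mIoU is averaged over classes, so a model that predicts only the few dominant classes (road, building, sky) can have a low loss and a near-zero mIoU. A minimal sketch of the standard mIoU computation from a confusion matrix (not this repo's evaluation code):

```python
import numpy as np

def mean_iou(conf):
    """Per-class IoU averaged over classes present in the ground truth.

    conf[i, j] counts pixels with ground-truth class i predicted as j.
    """
    conf = conf.astype(np.float64)
    tp = np.diag(conf)                                 # true positives
    denom = conf.sum(axis=1) + conf.sum(axis=0) - tp   # gt + pred - tp
    valid = conf.sum(axis=1) > 0   # skip classes absent from ground truth
    iou = tp[valid] / np.maximum(denom[valid], 1)
    return iou.mean()
```

Because each class contributes equally to the mean, 17 of 19 classes at IoU 0 drag the score down to a few percent even when the pixel accuracy (and hence the loss) looks reasonable.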