precision #1
Comments
I have checked the mIoUs, but I don't know what the measure of mIoU is. |
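For context, mIoU (mean Intersection over Union) averages the per-class IoU over the classes present; a minimal NumPy sketch of the metric (not this repo's actual evaluation code) is:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union for integer label maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # ignore classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```

For example, with `pred = [[0, 1], [1, 1]]` and `target = [[0, 1], [0, 1]]`, class 0 scores 1/2 and class 1 scores 2/3, so the mIoU is 7/12.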
Can the mIoU reach 80%?

------------------ Original message ------------------
From: "Serge-weihao" <[email protected]>
Sent: Friday, January 10, 2020, 6:06 PM
To: "Serge-weihao/CCNet-Pure-Pytorch" <[email protected]>
Cc: "swjtulinxi" <[email protected]>; "Author" <[email protected]>
Subject: Re: [Serge-weihao/CCNet-Pure-Pytorch] precision (#1)

Have you compared the mIoU between your cc.py and the original code on Cityscapes? Because I have tested your cc.py, and .....
I have checked the mIoUs, but I don't know what the measure of mIoU is.
|
I loaded the same checkpoint with our pure PyTorch implementation and with the official cc_attention, and the mIoU results are almost the same, because the calculation is the same. If you have a checkpoint whose testing code reaches 80% with the original cc_attention, my implementation will get the same results. |
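The check described above, one checkpoint loaded into two implementations and compared on the same input, can be sketched as follows (`same_results` is a hypothetical helper, not code from this repo):

```python
import torch

def same_results(impl_a, impl_b, state_dict, x, atol=1e-4):
    """Load one checkpoint into two implementations of the same
    module and compare their outputs on the same input."""
    impl_a.load_state_dict(state_dict)
    impl_b.load_state_dict(state_dict)
    impl_a.eval()
    impl_b.eval()
    with torch.no_grad():
        return torch.allclose(impl_a(x), impl_b(x), atol=atol)
```

If the two modules implement the same computation, identical weights must give (numerically) identical outputs, which is why the mIoU matches.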
I see, thank you for your code. I cannot use the original code, but your code works on my computer; I don't know why. |
The original code is not like yours, and I don't understand it. For example, what is _ext for? I also can't understand def _check_contiguous(*args): or class CA_Weight(autograd.Function):
|
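Regarding `autograd.Function`: it is PyTorch's mechanism for defining an operation's forward pass and its gradient by hand, which is why the official code wraps CA_Weight in one. A toy pure-Python example of the same pattern:

```python
import torch
from torch import autograd

class Square(autograd.Function):
    """Toy autograd.Function: forward computes x * x,
    backward supplies the gradient 2 * x by hand."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)  # stash inputs needed for backward
        return x * x

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * 2 * x  # chain rule: d(x^2)/dx = 2x
```

Calling `Square.apply(x)` and backpropagating gives `x.grad == 2 * x`, just as if PyTorch had derived the gradient itself. The official CA_Weight follows this pattern but dispatches forward/backward to compiled CUDA kernels in `_ext`, which is the part that needs compilation.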
They use a C++ extension with CUDA, so you may have compatibility problems. The official inplace-abn is also a CUDA implementation. |
Which operating system are you using, Linux or Windows? |
Windows |
This repo uses Synchronized-BatchNorm-PyTorch for cross-GPU BatchNorm, which costs more GPU memory. I have tested it with a batch size of 4 and the result is about 67 mIoU, so I think either Synchronized-BatchNorm-PyTorch has some problems or the training hyperparameters are not good for a batch size of 4. I use inplace-abn under Linux, and I will implement inplace-abn in pure PyTorch in the future. |
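For single-GPU training, the synchronized BatchNorm layers can in principle be swapped back to plain nn.BatchNorm2d. A sketch under that assumption (`replace_bn` is a hypothetical helper, not part of this repo):

```python
import torch
import torch.nn as nn

def replace_bn(module, bn_cls=nn.BatchNorm2d):
    """Recursively swap any BatchNorm variant (e.g. a synchronized
    one) for bn_cls, keeping the learned parameters and running
    statistics. Hypothetical helper, not part of this repo."""
    for name, child in module.named_children():
        if isinstance(child, nn.modules.batchnorm._BatchNorm):
            new_bn = bn_cls(
                child.num_features,
                eps=child.eps,
                momentum=child.momentum,
                affine=child.affine,
                track_running_stats=child.track_running_stats,
            )
            # copy weight, bias, running_mean, running_var
            new_bn.load_state_dict(child.state_dict())
            setattr(module, name, new_bn)
        else:
            replace_bn(child, bn_cls)
    return module
```

Note this only changes the layer type; the small-batch statistics problem the paper warns about remains, since plain nn.BatchNorm2d still normalizes over whatever batch fits on one GPU.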
I have just one GPU, so I use nn.BatchNorm2d. |
The paper says the batch size should be 12 or higher during training; otherwise it will affect the result. |
inplace-abn can save some GPU memory during training, but the official CUDA inplace-abn may not support Windows well. If this repo is helpful for you, you can star it. |
Have you compared the mIoU between your cc.py and the original code on Cityscapes? Because I have tested your cc.py, and .....