
'fc6' is transformed into a convolution operation using tricks proposed in the FCN paper (a sketch of the idea follows below). You can test the model by feeding it the images under `test_data`, e.g. `python test.py -alpha=./test_data/alpha/1.png -rgb=./test_data/RGB/1.png`. However, because I accidentally deleted the pretrained model on Google Drive, and that was the only copy, there is no pretrained model any more.

1. My Chinese blog about the implementation of this paper.
2. The pretrained model was trained on a private dataset that differs a lot from the authors' data, so it struggles on the authors' data.

I have no plan to keep modifying this repo, but I will probably start a new repo for image matting with a brand-new algorithm in the near future.
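For reference, here is a minimal sketch of that FCN trick, assuming TF 1.x and the key names in Frossard's `vgg16_weights.npz` (`fc6_W`, `fc6_b`); it is an illustration, not the exact code in this repo:

```python
import numpy as np
import tensorflow as tf

# A fully connected layer acting on a 7x7x512 feature map is equivalent
# to a 7x7 convolution whose kernel is the reshaped FC weight matrix.
weights = np.load("vgg16_weights.npz")
fc6_W = weights["fc6_W"]   # assumed shape (25088, 4096), 25088 = 7*7*512
fc6_b = weights["fc6_b"]   # assumed shape (4096,)

# The reshape order must match how pool5 was flattened (NHWC here).
conv6_kernel = fc6_W.reshape(7, 7, 512, 4096)

# With an arbitrary spatial size, the layer now runs fully convolutionally.
pool5 = tf.placeholder(tf.float32, [None, None, None, 512])
conv6 = tf.nn.conv2d(pool5, tf.constant(conv6_kernel),
                     strides=[1, 1, 1, 1], padding="SAME") + fc6_b
```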
I was working on other projects recently, so this repo went unmaintained for a long time. In the issues, I noticed some great comments that may hint at why previous work couldn't reach the authors' performance. Here are some ideas you can apply to improve this work (see the data-preparation sketch after this list):

1. Prepare the training set using the authors' code. I used to work with scipy.misc, which has too many weird automatic settings, and that hurts performance. If you want to use scipy.misc, make sure you understand the library very well; otherwise, try PIL or OpenCV, which cause far less trouble.
2. Generate trimaps using both random dilation and random erosion. Previous code used random dilation only, which was a fatal mistake.
3. Rearrange the preprocessing order so that there is no ground-truth shift: either composite bg, fg, and alpha first and then resize, or resize bg, fg, and alpha first and then composite. The resulting RGB images of the two orders differ only slightly, and it is hard to tell the difference by eye, but my suggestion is that composition should always happen after resizing.
4. At test time, use the original image size (or resize to the closest size divisible by 32).
5. Add a hard mode to allow training on tough samples.

I don't have a free GPU to keep working on this, so the suggestions above are not verified to be useful. If they help, let me know : )
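As referenced above, here is a minimal data-preparation sketch covering suggestions 2 and 3, assuming OpenCV and uint8 inputs; the function names and the kernel-size range are hypothetical, not the authors' code:

```python
import cv2
import numpy as np

def composite(fg, bg, alpha, size=(320, 320)):
    # Resize fg/bg/alpha FIRST, then composite, so the ground-truth alpha
    # stays aligned with the composited RGB (no ground-truth shift).
    fg = cv2.resize(fg, size, interpolation=cv2.INTER_LINEAR)
    bg = cv2.resize(bg, size, interpolation=cv2.INTER_LINEAR)
    alpha = cv2.resize(alpha, size, interpolation=cv2.INTER_LINEAR)
    a = alpha.astype(np.float32)[..., None] / 255.0
    rgb = (a * fg + (1.0 - a) * bg).astype(np.uint8)
    return rgb, alpha

def random_trimap(alpha, max_kernel=25):
    # Build the unknown band with BOTH random dilation and random erosion,
    # so it can extend to either side of the true boundary.
    kd = np.random.randint(1, max_kernel)
    ke = np.random.randint(1, max_kernel)
    dilated = cv2.dilate((alpha > 0).astype(np.uint8), np.ones((kd, kd), np.uint8))
    eroded = cv2.erode((alpha == 255).astype(np.uint8), np.ones((ke, ke), np.uint8))
    trimap = np.full(alpha.shape, 128, dtype=np.uint8)  # unknown by default
    trimap[eroded == 1] = 255   # confident foreground
    trimap[dilated == 0] = 0    # confident background
    return trimap
```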
The latest version of the code has the following changes:

1. The decoder uses unpooling again, so its structure is now exactly the same as in the paper. I had earlier replaced unpooling with deconvolution layers (which makes the network more complex), but in my experiments deconvolution was always hard to learn detailed information (like hair), and it could not converge on the whole dataset (maybe I just didn't train long enough: lr = 1e-5 with 5 days of training couldn't converge). Interestingly, even with deconvolution (not unpooling), the network can overfit when trained on a single complex sample like the bike image. Discussion about whether deconvolution can replace unpooling is welcome! A sketch of argmax-based unpooling follows this list.
2. Because of the unpooling, batch_size is changed from 5 to 1 (the code is not decent right now, it just works), and because I changed the implementation of 'unpool', the test code cannot run at the moment.
3. Currently, general boundaries are easy to predict, but some details and complex foregrounds, like the bike, are still bad. The weight w_i between the two losses is still vague; I'm trying to find the best weight structure.
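For the unpooling discussion in item 1, here is one common argmax-based formulation in TF 1.x; it is not this repo's exact 'unpool' (which, as noted above, has changed), just a sketch of the idea:

```python
import tensorflow as tf

def unpool_2x2(pool, argmax, output_shape):
    # Scatter each pooled value back to the location recorded by
    # tf.nn.max_pool_with_argmax; every other position stays zero.
    # Unlike deconvolution, this has no learnable weights.
    flat_size = output_shape[0] * output_shape[1] * output_shape[2] * output_shape[3]
    pool_flat = tf.reshape(pool, [-1])
    ind_flat = tf.reshape(argmax, [-1, 1])
    unpooled = tf.scatter_nd(ind_flat, pool_flat, [flat_size])
    return tf.reshape(unpooled, output_shape)

# Usage (TF 1.x graph mode; with batch_size 1 the argmax batch offset is moot):
x = tf.placeholder(tf.float32, [1, 64, 64, 512])
pooled, argmax = tf.nn.max_pool_with_argmax(
    x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
up = unpool_2x2(pooled, argmax, [1, 64, 64, 512])
```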
Some bugs in compositional_loss and in the validation code are fixed (a sketch of the paper's weighted loss follows below). Validation code and a TensorBoard view on the 'alphamatting' dataset are added. Besides, the code can now save the model and restore a pre-trained model, and it can test on the alphamatting set at run time. The code can now be used to train, but the data is owned by a company; I'll try my best to provide code and a model that can do inference. A memory leak during training is fixed, and one of the random crop sizes is changed from 640 to 620 for a boundary-safety issue (this could also be avoided by preparing the training data more carefully).
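For context on the two losses: the paper's overall loss is L = w_l * L_alpha + (1 - w_l) * L_c with w_l = 0.5 (the weight referred to as w_i above). The sketch below is one plausible TF 1.x rendering; the tensor shapes and masking scheme are assumptions, not taken from this repo:

```python
import tensorflow as tf

EPS2 = 1e-12  # epsilon^2 from the paper, keeps the sqrt differentiable at 0

def matting_loss(alpha_pred, alpha_gt, fg, bg, rgb_gt, unknown_mask, w_l=0.5):
    # alpha-prediction loss: Charbonnier-style distance on alpha values
    # (alpha_* assumed [B, H, W, 1] in [0, 1]; fg/bg/rgb_gt [B, H, W, 3])
    alpha_loss = tf.sqrt(tf.square(alpha_pred - alpha_gt) + EPS2)
    # compositional loss: distance between the image composited with the
    # predicted alpha and the ground-truth composite
    comp_pred = alpha_pred * fg + (1.0 - alpha_pred) * bg
    comp_loss = tf.sqrt(tf.reduce_sum(tf.square(comp_pred - rgb_gt),
                                      axis=-1, keepdims=True) + EPS2)
    loss = w_l * alpha_loss + (1.0 - w_l) * comp_loss
    # average over the trimap's unknown region only
    return tf.reduce_sum(loss * unknown_mask) / (tf.reduce_sum(unknown_mask) + 1e-6)
```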
This is a tensorflow implementation of the paper "Deep Image Matting". Thanks to Davi Frossard; "vgg16_weights.npz" can be found in his blog (a quick way to inspect the file is sketched below).
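A quick sanity check of the downloaded weight file; the key names are assumed to follow Frossard's post (e.g. 'conv1_1_W'):

```python
import numpy as np

weights = np.load("vgg16_weights.npz")
for name in sorted(weights.files):
    print(name, weights[name].shape)   # e.g. conv1_1_W (3, 3, 3, 64)
```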