Run demo_single_image.py to process a single image. Example of applying AWB + different WB settings: python demo_single_image.py -input_image. The available tasks are AWB, all, and editing; you can also specify the task in the demo_single_image.py demo.

Run demo_images.py to process an image directory. Example: python demo_images.py -input_dir. This example should save the output images in the result_images directory and output the following figure.

You should adjust the training image directories before running the training code. Example: CUDA_VISIBLE_DEVICES=0 python train.py -training_dir dataset/ -fold 0 -epochs 500 -learning-rate-drop-period 50 -num_training_images 0.
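As a rough illustration of how the -task flag relates to what the demos write out, here is a hedged Python sketch. The task names (AWB, editing, all) come from the text above; the output-filename suffixes and the helper itself are illustrative assumptions, not the repository's actual code.

```python
# Hypothetical helper: which output files a demo run might produce per task.
# Suffixes (_AWB, _T, _S) are assumptions for illustration only.

def outputs_for_task(task, stem):
    """Return assumed output filenames for a given task and input-file stem."""
    awb = [f"{stem}_AWB.png"]                    # auto white balance result
    editing = [f"{stem}_T.png", f"{stem}_S.png"]  # e.g. different WB settings
    if task == "AWB":
        return awb
    if task == "editing":
        return editing
    if task == "all":
        return awb + editing
    raise ValueError(f"unknown task: {task}")
```

For example, `outputs_for_task("all", "img001")` would list the AWB result plus the edited WB variants for `img001`.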
Torchvision (tested with 0.4.0 and 0.6.0). The code may work with library versions other than those specified.

For a non-graphical interface, you can edit your custom code here to save example patches periodically. Hint: you may need to use a persistent variable to control the process. Alternative solutions include using a custom training loop. The figure will show the produced patches (first row) and the corresponding ground-truth patches (second row).
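The "persistent variable" hint above can be sketched in Python terms as a counter that survives across calls and triggers a periodic save. This is a minimal illustration of the idea, not code from the repository; the interval and the save callback are assumptions.

```python
import itertools

def make_periodic_saver(interval, save_fn):
    """Build a callback that saves example patches every `interval` steps.

    `save_fn(step, batch)` is a user-supplied function (an assumption here)
    that writes the patches to disk.
    """
    counter = itertools.count(1)  # persists across calls, like a persistent variable

    def maybe_save(batch):
        step = next(counter)
        if step % interval == 0:
            save_fn(step, batch)
        return step

    return maybe_save
```

In a custom training loop you would call the returned function once per iteration; only every `interval`-th call actually writes patches.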
If you run Matlab with a graphical interface and you want to visualize some of the input/output patches during training, set a breakpoint here and write the following code in the command window:

close all
i = 1;
figure
subplot(2,3,1); imshow(extractdata(Y(:,:,1:3,i)))
subplot(2,3,2); imshow(extractdata(Y(:,:,4:6,i)))
subplot(2,3,3); imshow(extractdata(Y(:,:,7:9,i)))
subplot(2,3,4); imshow(gather(T(:,:,1:3,i)))
subplot(2,3,5); imshow(gather(T(:,:,4:6,i)))
subplot(2,3,6); imshow(gather(T(:,:,7:9,i)))

You can change the value of i in the above code to see different images in the current training batch. A .csv file will be created in the reports_and_checkpoints directory; you can use this file to visualize training progress.

You can use the loadpath variable to continue training from a training checkpoint. To start training from scratch, use loadpath=. To control the learning-rate drop rate and factor, please check the get_training_options.m function located in the utilities directory. Other useful options include patchsPerImg, to select the number of random patches per image, and patchSize, to set the size of training patches. If you would like to do 3-fold cross-validation, use fold = testing_fold; the code will then train on the remaining folds and leave the selected fold for testing. If you would like to limit the number of training images to n images, set trainingImgsNum to n.
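The Matlab snippet above splits the 9-channel output Y and target T into three 3-channel images per batch item. A rough NumPy equivalent, assuming arrays shaped H x W x 9 x N to match the Matlab indexing (the array layout is an assumption for illustration):

```python
import numpy as np

def split_wb_channels(arr, i=0):
    """Split an H x W x 9 x N array into three H x W x 3 patches for item i,
    mirroring arr(:,:,1:3,i), arr(:,:,4:6,i), arr(:,:,7:9,i) in Matlab."""
    return [arr[:, :, 3 * k:3 * k + 3, i] for k in range(3)]
```

Each of the three returned patches corresponds to one subplot column in the Matlab figure (one image per WB setting).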
We provide source code for Matlab and PyTorch platforms. There is no guarantee that the trained models produce exactly the same results.

Each pair of input/ground-truth images should be in the following format: input image: name_WB_picStyle.png, and the corresponding ground-truth image: name_G_AS.png. This is the same filename style used in the Rendered WB dataset. As an example, please refer to the dataset directory.

Run demo_single_image.m or demo_images.m to process a single image or an image directory, respectively. If you run demo_single_image.m, it should save the result in.

You should adjust the training image directories from the datasetDir variable before running the code. You can change the training settings in training.m before training. For example, you can use the epochs and miniBatch variables to change the number of training epochs and the mini-batch size, respectively. If you set fold = 0 and trainingImgsNum = 0, the training will use all training data without fold cross-validation.
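The fold logic described above (hold out the selected fold for testing, train on the rest, and use everything when fold = 0) can be sketched as follows. How images are assigned to folds here (by index) is an assumption for illustration, not the repository's actual partitioning.

```python
# Hedged sketch of the 3-fold cross-validation setting described above.

def split_folds(images, fold, num_folds=3):
    """Return (training, testing) image lists for the given fold.

    fold = 0 means no cross-validation: train on all images.
    fold = k (1..num_folds) holds fold k out for testing.
    """
    if fold == 0:
        return list(images), []
    train, test = [], []
    for idx, name in enumerate(images):
        # assumed assignment: image idx belongs to fold (idx % num_folds) + 1
        if idx % num_folds + 1 == fold:
            test.append(name)
        else:
            train.append(name)
    return train, test
```

With fold = 1 and six images, two images land in the held-out testing fold and four remain for training.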
Copy both input images and ground-truth images into a single directory.
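Since inputs and ground truth share one directory, pairing them comes down to the filename convention (input name_WB_picStyle.png, ground truth name_G_AS.png). A small illustrative helper, assuming exactly that pattern:

```python
# Illustrative helper (not repository code): derive the ground-truth filename
# for an input image named name_WB_picStyle.png per the convention above.

def ground_truth_name(input_name):
    """Map e.g. 'img001_T_CS.png' to 'img001_G_AS.png'."""
    stem = input_name[:-len(".png")]
    base = stem.rsplit("_", 2)[0]  # strip the _WB_picStyle suffix tokens
    return f"{base}_G_AS.png"
```

Note that rsplit keeps any underscores inside the base name intact, so names like scene_a_D_AS.png still map to scene_a_G_AS.png.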