
How to do semantic segmentation like the example from the main page? #30

Closed
willsbit opened this issue Oct 5, 2020 · 8 comments
Labels: good first issue (Good for newcomers), question (Further information is requested)

@willsbit commented Oct 5, 2020

Hi, I know this is a very general question, but how would I go on doing something like this?
(screenshot: the semantic segmentation example from the project's main page)

@remicres (Owner)

Hi,
To do semantic segmentation, you can build a network that takes patches as input and outputs patches (for instance, a U-Net-like model works well).

Patches extraction
To extract patches from your images, you can call the PatchesExtraction application (set the environment variable OTB_TF_NSOURCES to 2: you will need one source for the input image, and another for the ground-truth labels).
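For instance, a minimal sketch using the single-source form, where the labels are produced from the "class" field of the vector data via -outlabels (file names, patch size, and the field name are placeholders; check the application help for the exact mandatory parameters of your version):

otbcli_PatchesExtraction \
	-source1.il input_image.tif \
	-source1.patchsizex 16 \
	-source1.patchsizey 16 \
	-source1.out patches.tif \
	-vec samples.gpkg \
	-field "class" \
	-outlabels labels.tif uint8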

Training
You can then train your model either with OTBTF's TensorflowModelTrain application, or from your own Python code. Either way, convert your model into a SavedModel.
Just keep in mind that the output tensor that estimates the labels, i.e. the one you will use to generate your output image with TensorflowModelServe, must have a size that is consistent with output.efieldx/output.efieldy, and also with the input (i.e. source1.rfieldx/source1.rfieldy).
To avoid blocking artefacts, or at least reduce them, you should keep only the central part of the output tensor. You want a fully convolutional model, but convolutions reduce the theoretical exact output size of the tensor. In U-Net-like models, convolutions with padding are often used because they simplify the architecture, but they "pollute" the borders of the feature maps after each convolution.
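To make this size bookkeeping concrete, here is a toy Python sketch (not the model from the book; the names "x" and "prediction", the 4-band input, and the directory name are illustrative assumptions):

import tensorflow as tf

# Toy fully convolutional model: each 3x3 "valid" convolution trims
# 2 pixels per axis, so a 16x16 input patch yields a 12x12 output.
inp = tf.keras.Input(shape=(None, None, 4), name="x")
net = tf.keras.layers.Conv2D(16, 3, padding="valid", activation="relu")(inp)
net = tf.keras.layers.Conv2D(16, 3, padding="valid", activation="relu")(net)
out = tf.keras.layers.Conv2D(2, 1, name="prediction")(net)
tf.keras.Model(inp, out).save("savedmodel_dir")  # exports a SavedModel

# With this model, source1.rfieldx = source1.rfieldy = 16 and
# output.efieldx = output.efieldy = 12 would be consistent (16 - 2 - 2 = 12).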

Generate the map
Once you have your trained SavedModel, just use TensorflowModelServe to generate the output map from your remote sensing images.
You just have to set the efield and rfield values properly, to tell the application what input volume the model "sees" and what output volume it "creates". Use model.fullyconv on if your model is fully convolutional (FCN).
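Continuing the toy 16 -> 12 sketch above (file and directory names are placeholders):

otbcli_TensorflowModelServe \
	-source1.il image.tif \
	-source1.rfieldx 16 \
	-source1.rfieldy 16 \
	-source1.placeholder "x" \
	-output.efieldx 12 \
	-output.efieldy 12 \
	-output.names "prediction" \
	-model.dir savedmodel_dir \
	-model.fullyconv on \
	-out map.tif uint8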

There is a full course and a step-by-step practice exercise on this topic, with OSM data and a Spot-7 image, in this book: Part III, semantic segmentation (34 pages).

@remicres remicres added good first issue Good for newcomers help wanted Extra attention is needed question Further information is requested labels Oct 15, 2020
@willsbit (Author)

Thank you! I'll check out your book.

@remicres remicres pinned this issue Oct 17, 2020
@willsbit (Author)

I'm trying to use PatchesExtraction, but I'm getting this error:

sudo docker run mdl4eo/otbtf2.0:cpu otbcli_PatchesExtraction \
	-source1.il rectify09rorm_normalize_bandmathX.tif \
	-source1.patchsizex 16 \
	-source1.patchsizey 16 \
	-source1.out rorm_normalize_patches.tif \
	-vec t_sampleSelection.gpkg \
	-field "class" \
	-outlabels rorm_normalize_labels.tif uint8

(FATAL) PatchesExtraction: Cannot open image rectify09rorm_normalize_bandmathX.tif. The file does not exist.

However, when I run ls, it shows that the file is in the right directory.

Any suggestions?

@remicres (Owner)

Hi,
This is related to the use of Docker: you must mount a volume into your container, so that your files are visible inside it. Some users had the same issue: see here.
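A minimal sketch, assuming your data sits in the current working directory (otbcli_ReadImageInfo is just a quick way to check that the file is visible inside the container):

# -v <host_dir>:<container_dir> makes host files visible in the container
sudo docker run -v $(pwd):/home/otbuser mdl4eo/otbtf2.0:cpu \
	otbcli_ReadImageInfo -in /home/otbuser/rectify09rorm_normalize_bandmathX.tif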

@willsbit (Author)

Thank you!
I'll let you know if I run into more issues.

@willsbit (Author)

Hello, Mr. Cresson.
When running

sudo docker run -u otbuser -v $(pwd):/home/otbuser mdl4eo/otbtf2.0:cpu otbcli_TensorflowModelServe \
	-source1.il myraster.tif \
	-source1.rfieldx 16 \
	-source1.rfieldy 16 \
	-source1.placeholder "x" \
	-model.dir mydir \
	-model.fullyconv on \
	-output.names "prediction" \
	-optim.tilesizex 999999 \
	-optim.tilesizey 128 \
	-out myraster_classified.tif uint8

I get the following error:

Error while reading resource variable conv2d/kernel from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist.
(Could not find resource: localhost/conv2d/kernel)
         [[{{node conv2d/Conv2D/ReadVariableOp}}]]
OTB Filter debug message:
Output image buffered region: ImageRegion (0x7ffd4d88bd30)
  Dimension: 2
  Index: [0, 0]
  Size: [14280, 128]

Input #0:
Requested region: ImageRegion (0x7ffd4d88bd60)
  Dimension: 2
  Index: [0, 0]
  Size: [14295, 143]

Tensor shape ("y": {1, 143, 14295, 4}
User placeholders:

I'd appreciate your help very much.

@remicres (Owner)

Hi,
You must train your model before applying it (it seems you are trying to generate an image using a model that hasn't been trained).
If you train your model using the OTBTF applications, you must set a directory for the output SavedModel and check that the variables have been updated (in the proper directory).
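As a quick sanity check (the directory name is a placeholder), a trained SavedModel directory should contain the graph definition and freshly written variables:

ls -lR savedmodel_dir
# expected layout:
#   saved_model.pb
#   variables/variables.data-00000-of-00001
#   variables/variables.index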

@willsbit (Author) commented Nov 4, 2020

Yeah, I had messed up the directories in the training step. Thank you again.

@remicres remicres removed the help wanted Extra attention is needed label Apr 12, 2021