Geo-coordinates of Sentinel-2 results
Posted: Tue Aug 04, 2020 10:35 am
Dear Francois,
in order to compare the results of different processors, it is often easiest to merge their outputs into one product. We do the merging with SNAP MergeOp, and we currently work with Sentinel-2 at 60m resolution. Unfortunately, the Polymer outputs refuse to be merged because their pixel positions differ slightly from those SNAP provides for e.g. the C2RCC outputs written to NetCDF. The lat and lon rasters of Polymer appear to be stretched by half a pixel in each direction, making the image slightly larger than it should be.
Of course, when comparing two products either one may be wrong, and I do not exclude that SNAP reads the geo-information of the Sentinel-2 L1C product incorrectly. Two reasons support the SNAP way of reading Sentinel-2. First, SNAP expects the lat and lon coordinates to be pixel-centre coordinates. I think Polymer makes the same assumption, because otherwise I would expect the Polymer image to be only shifted relative to the SNAP output image, not stretched. Second, I assume that in the L1C product the corner coordinates given in UTM define the north-east corner (!) of the north-east pixel, because the coordinates are identical for the 10m, 20m and 60m resolutions (and I assume the first 6*6 block of 10m pixels covers the same area as the first 60m pixel). This matches how SNAP reads the product and writes 60m processing results.
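To illustrate the corner-coordinate convention described above, here is a minimal sketch in Python (the UTM easting value is purely illustrative; in a real L1C product it would come from the tile metadata):

```python
# Illustrative sketch of the corner-coordinate convention: the UTM corner
# coordinate marks the outer edge of the corner pixel, so all three
# resolutions span exactly the same ground area.
ulx = 300000.0  # hypothetical UTM easting of the tile's outer corner (metres)
n10, n20, n60 = 10980, 10980 // 2, 10980 // 6  # pixels per row at 10/20/60 m

# Same ground extent at every resolution (109800 m in each case):
assert n10 * 10 == n20 * 20 == n60 * 60

# Under this convention, the centre of the first 60 m pixel lies half a
# pixel inside the corner, and the first 6*6 block of 10 m pixels covers
# the same area as the first 60 m pixel:
centre_60 = ulx + 60 / 2       # centre of first 60 m pixel
centre_10 = ulx + 10 / 2       # centre of first 10 m pixel
assert centre_60 == ulx + 30.0
assert centre_10 == ulx + 5.0
```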
It seems that Polymer takes the north-east corner coordinate and the calculated south-west corner coordinate of the complete L1C image (obtained by adding 1830*60m) and divides the extent by 1829 to get the pixel spacing for the "60m" result (which then is no longer 60m). This assumes that the image corner coordinates are the pixel centres of the outermost pixels. I do not think this fits with the image extent being identical at 10m, 20m and 60m, nor with how I assume the 10m pixel blocks fit into the 60m pixels. Would it be possible to divide by 1830 instead, to really get 60m pixels? And would it be possible to shift by half a pixel spacing to get pixel-centre coordinates in the lat and lon variables? This would make merging with SNAP products possible, and it would make matchups more accurate (if my observations are correct).
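The difference between the two conventions can be sketched numerically (a hypothetical example; the names and the easting value are illustrative, not Polymer's actual code):

```python
# Two ways of deriving the pixel spacing for a 1830x1830 "60 m" grid
# spanning 109800 m of ground (values illustrative).
width_m = 1830 * 60.0  # ground extent of the tile row: 109800 m
n = 1830               # number of pixels per row

spacing_corner = width_m / n        # 60.0 m    -- corners are pixel edges
spacing_centre = width_m / (n - 1)  # ~60.033 m -- corners are outermost centres

# Over the whole row the small per-pixel difference accumulates to exactly
# one full pixel, which is the apparent "stretch" described above:
accumulated = (spacing_centre - spacing_corner) * (n - 1)  # ~60 m

# Pixel-centre coordinate of pixel i under the corner convention
# (the half-pixel shift requested above):
def centre(i, ulx=300000.0, spacing=60.0):
    return ulx + (i + 0.5) * spacing
```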
Best regards,
Martin