Description
I'm attempting to co-register longwave infrared hyperspectral images. They were captured by a push-broom sensor on a plane flying a circular path at various sensor inclination angles. The images overlap significantly (around 80-90%). I'm using band 111 of both images for matching because it has relatively low noise and striping.
However, the algorithm seems unable to identify enough control points across the entire image and instead only uses points in certain areas. As a result, the output images are noticeably distorted.
What I Did
I’m running the image registration in the following way:
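(A minimal sketch of the call; the exact paths and the window size are placeholders, while band 111, grid_res=1, the output path and the 8 CPU cores are taken from the description and the log below:)

from arosics import COREG_LOCAL

CRL = COREG_LOCAL(
    'D:/rosic/reference.img',   # placeholder path to the reference image
    'D:/rosic/target.img',      # placeholder path to the image to be shifted
    grid_res=1,                 # tie point spacing in pixels: one tie point per pixel
    window_size=(64, 64),       # matching window size (placeholder value)
    r_b4match=111,              # band of the reference image used for matching
    s_b4match=111,              # band of the target image used for matching
    path_out='D:/rosic/registered.img',
    CPUs=8,
)
CRL.correct_shifts()

I'm getting the following output: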
Polygonize progress |==================================================| 100.0% Complete => 0:00:00
Polygonize progress |==================================================| 100.0% Complete => 0:00:00
Calculating footprint polygon and actual data corner coordinates for reference image...
Bounding box of calculated footprint for reference image:
(258028.40410699998, 4519791.669148499, 258199.601042, 4519890.994421)
Calculating footprint polygon and actual data corner coordinates for image to be shifted...
Bounding box of calculated footprint for image to be shifted:
(258032.49094080002, 4519800.9808134, 258202.92704160002, 4519890.2045003995)
Matching window position (X,Y): 258118.03887428824/4519844.002253382
Initializing tie points grid...
Warping progress |==================================================| 100.0% Complete => 0:00:00
Warping progress |==================================================| 100.0% Complete => 0:00:00
Equalizing pixel grids and projections of reference and target image...
Calculating tie point grid (88860 points) using 8 CPU cores...
progress: |==================================================| 100.0% Complete => 0:24:58
Found 60005 matches.
Performing validity checks...
36973 tie points flagged by level 1 filtering (reliability).
18160 tie points flagged by level 2 filtering (SSIM).
680 tie points flagged by level 3 filtering (RANSAC)
18487 valid tie points remain after filtering.
C:\Users\gol\miniconda3\envs\arosics\Lib\site-packages\arosics\Tie_Point_Grid.py:809: UserWarning: By far not more than 7000 tie points can be used for warping within a limited computation time (due to a GDAL bottleneck). Thus these 7000 points are randomly chosen out of the 18487 available tie points.
warnings.warn('By far not more than 7000 tie points can be used for warping within a limited '
C:\Users\gol\miniconda3\envs\arosics\Lib\site-packages\arosics\DeShifter.py:291: UserWarning:
The coordinate grid of warp cannot be aligned to the desired grid because their pixel sizes are not exact multiples of each other (input [X/Y]: 0.4239704/0.2832498; desired [X/Y]: 0.41755350001039915/0.279789499938488). Therefore the original grid is chosen for the resampled output image. If you don't like that you can use the 'out_gsd' or 'match_gsd' parameters to set an appropriate output pixel size or to allow changing the pixel size.
warnings.warn("\nThe coordinate grid of %s cannot be aligned to the desired grid because their pixel "
Correcting geometric shifts...
Translating progress |==================================================| 100.0% Complete => 0:00:00
Warping progress |==================================================| 100.0% Complete => 0:03:22
Writing GeoArray of size (317, 404, 256) to D:\rosic\registered.img.
[31]:
OrderedDict([('band', None),
('is shifted', True),
('is resampled', True),
('updated map info',
['UTM',
1.0,
1.0,
258031.643,
4519890.771,
0.4239704000065103,
0.283249800093472,
18,
'North',
'WGS-84']),
('updated geotransform',
(258031.643,
0.4239704000065103,
0.0,
4519890.771,
0.0,
-0.283249800093472)),
('updated projection',
'PROJCS["WGS 84 / UTM zone 18N",GEOGCS["WGS 84",DATUM["WGS_1984",SPHEROID["WGS 84",6378137,298.257223563,AUTHORITY["EPSG","7030"]],AUTHORITY["EPSG","6326"]],PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]],AUTHORITY["EPSG","4326"]],PROJECTION["Transverse_Mercator"],PARAMETER["latitude_of_origin",0],PARAMETER["central_meridian",-75],PARAMETER["scale_factor",0.9996],PARAMETER["false_easting",500000],PARAMETER["false_northing",0],UNIT["Meter",1],AXIS["Easting",EAST],AXIS["Northing",NORTH],AUTHORITY["EPSG","32618"]]'),
('arr_shifted',
array([[[ <removed for brevity>
('GeoArray_shifted',
<geoarray.baseclasses.GeoArray at 0x1a48d0e1e50>)])
Tie points with grid_res = 8:
[image: tie point grid computed with grid_res = 8]

Tie points with grid_res = 1:
[image: tie point grid computed with grid_res = 1]

Here's the reference image:
[image: reference image]

Here's the target image:
[image: target image]

Here's the registered output image:
[image: registered output image]

Here are the images to reproduce the issue:
[attachment: reference image]
[attachment: target image]

That is a bit of a difficult case, mainly because you have quite large shifts in your images (up to ~25 pixels) and the dimensions of your input images are rather small. There is also a slight tilt between the images.
However, it seems like the best solution is to first run a global co-registration that corrects for the coarse shift at the center of the image overlap:
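(A rough sketch; the paths are placeholders and ws/max_shift need tuning for your data:)

from arosics import COREG

# Global co-registration: estimates a single X/Y shift at one matching
# window placed at the center of the image overlap.
CR = COREG(
    'D:/rosic/reference.img',   # placeholder paths, as above
    'D:/rosic/target.img',
    ws=(256, 256),              # size of the matching window in pixels
    max_shift=30,               # allow shifts larger than the 5 px default (~25 px here)
    r_b4match=111,              # bands used for matching
    s_b4match=111,
    path_out='D:/rosic/target_global.img',
)
CR.correct_shifts()

The globally corrected result can then be used as the target image of the local co-registration.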
It might be necessary to experiment a bit with the thresholds used for filtering false positives here, especially the min_reliability value. Visualizing the detected shifts as vectors might also be helpful:
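(Again a sketch; the values below are just starting points to experiment with:)

from arosics import COREG_LOCAL

CRL = COREG_LOCAL(
    'D:/rosic/reference.img',   # placeholder paths, as above
    'D:/rosic/target_global.img',
    grid_res=8,                 # a coarser grid instead of one tie point per pixel
    window_size=(64, 64),       # matching window size (placeholder)
    r_b4match=111,
    s_b4match=111,
    min_reliability=70,         # discard tie points below 70 % reliability (default: 60)
)

# Plot the detected shifts as vectors on top of the target image;
# tie points flagged by the validity checks are hidden by default.
CRL.view_CoRegPoints(shapes2plot='vectors', hide_filtered=True)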
In general, it does not make sense to compute tie points for every pixel (grid_res=1). It is more important to have a few reliable tie points and to get rid of all false positives, which would otherwise introduce distortions into the warped output.
If you are able to pass larger input images to AROSICS, the algorithm will be able to compute more reliable shifts, especially around the former image edges (there, the algorithm automatically reduces the matching window size, which leads to less reliable tie points).