Part 1 Estimating transformations from the image points
From each pair of corresponding points p = (x, y, 1) and p' = (x', y', 1), the relation p' ~ Hp is satisfied, where H is the 3x3 homography and ~ denotes equality up to scale. Using these correspondences, I rewrite this equation in the form Ah = 0 by constructing the matrix A and flattening H into a 9-dimensional vector h.
After obtaining the matrix A, I applied SVD and took the right singular vector corresponding to the smallest singular value (equivalently, the eigenvector of AᵀA with the smallest eigenvalue), which minimizes ||Ah|| subject to ||h|| = 1, and assigned that vector as h.
Then, I reshape the 9-dimensional vector h into a 3x3 matrix to get the final homography H we want to compute.
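The steps above can be sketched as follows in numpy (function and variable names are my own; the report does not include its code):

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate H such that dst ~ H @ src (homogeneous), via the DLT.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    """
    A = []
    for (x, y), (xp, yp) in zip(src, dst):
        # Each correspondence contributes two rows of A (from x' and y').
        A.append([-x, -y, -1, 0, 0, 0, xp * x, xp * y, xp])
        A.append([0, 0, 0, -x, -y, -1, yp * x, yp * y, yp])
    A = np.asarray(A, dtype=float)
    # The right singular vector with the smallest singular value
    # minimizes ||A h|| subject to ||h|| = 1.
    _, _, Vt = np.linalg.svd(A)
    h = Vt[-1]
    return h.reshape(3, 3)
```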
The points p and p' are generally obtained from different images. Thus, for each image, I find the maximum x and y pixel coordinates. Then I divide each x coordinate by the maximum x value and each y coordinate by the maximum y value to normalize the coordinates into [0, 1].
Then, I applied the estimation function above to the normalized versions of p and p'. This stably outputs the homography between the normalized points.
To make the output homography match the original one, I use diagonal matrices whose diagonal terms are the maximum x value, the maximum y value, and 1.
Call these scaling matrices T (for p) and T' (for p'), so the normalized points are p_n = T⁻¹p and p'_n = T'⁻¹p'. The normalized estimate H_n satisfies p'_n ~ H_n p_n, which gives the relationship T'⁻¹p' ~ H_n T⁻¹p, i.e., p' ~ T' H_n T⁻¹ p. Then, I can finally derive that H can be computed by left-multiplying T' and right-multiplying T⁻¹ to H_n. Thus, I compute H = T' H_n T⁻¹ using the H_n computed from the normalized vectors.
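The whole normalize-estimate-denormalize pipeline can be sketched as a standalone snippet (it carries its own minimal DLT so it runs on its own; names are assumptions):

```python
import numpy as np

def dlt(src, dst):
    # Minimal DLT from Part 1: solve A h = 0 via SVD.
    A = []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, xp * x, xp * y, xp])
        A.append([0, 0, 0, -x, -y, -1, yp * x, yp * y, yp])
    return np.linalg.svd(np.asarray(A, float))[2][-1].reshape(3, 3)

def estimate_homography_normalized(src, dst):
    """Scale each axis into [0, 1], run the DLT, then undo the
    scaling: H = T' @ Hn @ inv(T)."""
    sx, sy = src[:, 0].max(), src[:, 1].max()
    dx, dy = dst[:, 0].max(), dst[:, 1].max()
    T = np.diag([sx, sy, 1.0])   # de-normalizes source points
    Tp = np.diag([dx, dy, 1.0])  # de-normalizes destination points
    Hn = dlt(src / [sx, sy], dst / [dx, dy])
    return Tp @ Hn @ np.linalg.inv(T)
```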
Part 2 Mosaicing
I manually found corresponding points by visualizing the images with matplotlib and reading off pixel coordinates by hovering the mouse over the points of interest in the visualization.
To avoid holes in the output image, I used inverse warping. First I compute the inverse homography H⁻¹. Then I define a warped image with the same size as the reference image, iterate over its pixels, and apply the inverse homography to each pixel. This maps each warped-image pixel back to a location in the original image, which I then look up.
In general, the mapped location in the original image has non-integer coordinates, so I use bilinear interpolation over the nearest 4 pixels. This gives the corresponding original pixel value, which I paint into the warped image. If the mapped location falls outside the original image, I paint the pixel 0 (black).
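The inverse-warping loop with bilinear interpolation can be sketched like this, under the assumption that H maps source (x, y) pixel coordinates into the output frame (function name and signature are my own):

```python
import numpy as np

def inverse_warp(src_img, H, out_shape):
    """Warp src_img into an output canvas of out_shape = (h, w) by
    inverse warping: each output pixel is mapped through inv(H) back
    into src_img and sampled with bilinear interpolation; pixels that
    map outside the source stay 0 (black)."""
    Hinv = np.linalg.inv(H)
    h_out, w_out = out_shape
    out = np.zeros((h_out, w_out) + src_img.shape[2:], dtype=float)
    h_src, w_src = src_img.shape[:2]
    for yo in range(h_out):
        for xo in range(w_out):
            x, y, w = Hinv @ np.array([xo, yo, 1.0])
            x, y = x / w, y / w
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            if x0 < 0 or y0 < 0 or x0 + 1 >= w_src or y0 + 1 >= h_src:
                continue  # outside the source image: leave black
            dx, dy = x - x0, y - y0
            # Bilinear blend of the 4 nearest source pixels.
            out[yo, xo] = ((1 - dx) * (1 - dy) * src_img[y0, x0]
                           + dx * (1 - dy) * src_img[y0, x0 + 1]
                           + (1 - dx) * dy * src_img[y0 + 1, x0]
                           + dx * dy * src_img[y0 + 1, x0 + 1])
    return out
```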
After implementing this, I found that a warped image of the same size as the reference image does not show all of the warped region. Thus, I add padding on the left, right, top, and bottom of the warped image so it safely contains mapped coordinates that are negative or much larger than the reference image size.
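One standard way to size this padding is to map the four source corners through H and take their bounding box together with the reference image, folding the resulting offset into a translation; a sketch under that assumption (the report does not show how it computed the padding amounts, so the exact method may differ):

```python
import numpy as np

def padded_canvas(H, src_shape, ref_shape):
    """Compute a canvas shape and an adjusted homography so that every
    warped source pixel and the whole reference image fit inside it."""
    h, w = src_shape[:2]
    corners = np.array([[0, 0, 1], [w - 1, 0, 1],
                        [w - 1, h - 1, 1], [0, h - 1, 1]], float).T
    warped = H @ corners
    warped = warped[:2] / warped[2]
    # Bounding box of the warped corners plus the reference image extent.
    xs = np.concatenate([warped[0], [0, ref_shape[1] - 1]])
    ys = np.concatenate([warped[1], [0, ref_shape[0] - 1]])
    x_min, y_min = np.floor([xs.min(), ys.min()]).astype(int)
    x_max, y_max = np.ceil([xs.max(), ys.max()]).astype(int)
    # Translation that shifts negative coordinates into the canvas.
    T = np.array([[1, 0, -x_min], [0, 1, -y_min], [0, 0, 1]], float)
    canvas_shape = (y_max - y_min + 1, x_max - x_min + 1)
    return T @ H, canvas_shape
```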
Part 3 Rectification
For setting the source points, I manually found the 4 corner pixels of the iPhone screen by visualizing the image with matplotlib and reading off the pixel coordinates by hovering the mouse over the corners in the visualization.
For setting the destination points, I found a reference giving the width-to-height ratio of recent iPhone screens. Thus, I set 4 manual pixel points forming a rectangle on the image whose sides are parallel to the image width and height and whose width and height follow that ratio.
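Building the destination rectangle from a top-left corner, a width, and the screen aspect ratio can be sketched as below; the actual ratio value from the report's reference is not reproduced here, so `aspect` is a parameter, and the function name is my own:

```python
import numpy as np

def rect_dst_points(top_left, width, aspect):
    """Return the 4 axis-aligned destination corners for rectification,
    in (x, y) order: top-left, top-right, bottom-right, bottom-left.
    `aspect` is the assumed width:height ratio of the screen."""
    x0, y0 = top_left
    height = width / aspect
    return np.array([[x0, y0],
                     [x0 + width, y0],
                     [x0 + width, y0 + height],
                     [x0, y0 + height]], dtype=float)
```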
This implementation is very similar to the warped-image part of Part 2. The initial size of the warped image is changed in this function, and I have to call the homography estimation manually here because no homography argument is passed to this function.
The remaining implementation is the same as in Part 2: I compute the inverse homography and implement inverse warping by iterating over the warped-image pixels and applying the inverse homography to find the mapped original pixel. Then I obtain each mapped original pixel value using bilinear interpolation, as before.