I'm going to slice it into smaller strips and flatten it, meaning make the normal of every polygon (0, 0, -1), assuming the camera is facing (0, 0, 1). Then I translate each polygon to the same z coordinate and stitch the pieces together. If I texture-mapped this flattened polygon with the original image, I would get a de-warped image. Of course, in my program I'm not going to texture-map a polygon; I'll create a new image out of this information.
Essentially, the flattened polygon tells me how much each strip of the image needs to be skewed, horizontally and vertically. In the end it's exactly the same thing as what you and Steve are doing; I'm just driving the amount of skewing from the 3D coordinates.
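The per-strip resampling can be sketched as mapping each strip's warped quad in the source image onto its flattened rectangle. This is a minimal illustration, not my actual implementation; `dewarp_strip` and `sample` are hypothetical names, and the interpolation is plain bilinear:

```python
# Sketch: de-warp one strip by resampling its warped quad in the
# original image onto a flattened rectangle.  Names are illustrative.

def lerp(a, b, t):
    return a + (b - a) * t

def dewarp_strip(src_quad, out_w, out_h, sample):
    """src_quad: four (x, y) image corners of the warped strip, in
    top-left, top-right, bottom-right, bottom-left order.
    sample(x, y) reads the original image at a (possibly fractional)
    location; returns out_h rows of out_w de-warped pixels."""
    tl, tr, br, bl = src_quad
    rows = []
    for j in range(out_h):
        v = j / max(out_h - 1, 1)
        # Interpolate along the left and right edges of the quad.
        lx, ly = lerp(tl[0], bl[0], v), lerp(tl[1], bl[1], v)
        rx, ry = lerp(tr[0], br[0], v), lerp(tr[1], br[1], v)
        row = []
        for i in range(out_w):
            u = i / max(out_w - 1, 1)
            # Walk across the strip between the two edge points.
            row.append(sample(lerp(lx, rx, u), lerp(ly, ry, u)))
        rows.append(row)
    return rows
```

With an already-rectangular quad this degenerates to a straight copy, which is an easy sanity check; a skewed quad produces exactly the horizontal and vertical shifts described above.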
For example, say I have this strip (unrealistic, because you would never actually be able to see this strip edge-on, but it makes a simple example):
P = [(0, 0, 0), (0, -1, 0), (0, -1, -1), (0, 0, -1)]; since P lies in the x = 0 plane, its normal is (-1, 0, 0)
If I rotate this so the normal becomes (0, 0, -1), I get:
Q = [(0,0,0), (0, -1, 0), (1, -1, 0), (1,0,0)]
So Q faces straight at the camera. If I plot Q, instead of a narrow strip I should get a perfect square on the screen.
This tells me that the image area representing P needs to be stretched to fit Q in order to de-skew it.
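The rotation itself can be sketched in a few lines. This is a minimal pure-Python illustration (the function names are mine, not from any library), using (-1, 0, 0) for P's normal since P lies in the x = 0 plane, and the Rodrigues rotation formula to bring that normal onto (0, 0, -1):

```python
# Sketch: rotate a strip so its normal points at the camera,
# via the Rodrigues rotation formula.  Pure Python, names are mine.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def rotate_to_face(v, n, target):
    """Rotate point v by the rotation that takes unit normal n
    onto the unit vector target, e.g. (0, 0, -1) to face the camera."""
    axis = cross(n, target)
    s = dot(axis, axis) ** 0.5   # sin of the rotation angle
    c = dot(n, target)           # cos of the rotation angle
    if s == 0:                   # already facing target (antiparallel case omitted)
        return v
    k = tuple(x / s for x in axis)   # unit rotation axis
    # Rodrigues: v' = v*cos + (k x v)*sin + k*(k . v)*(1 - cos)
    kv = cross(k, v)
    d = dot(k, v)
    return tuple(v[i] * c + kv[i] * s + k[i] * d * (1 - c) for i in range(3))

P = [(0, 0, 0), (0, -1, 0), (0, -1, -1), (0, 0, -1)]
Q = [rotate_to_face(v, (-1, 0, 0), (0, 0, -1)) for v in P]
# Q matches [(0, 0, 0), (0, -1, 0), (1, -1, 0), (1, 0, 0)] -- the unit square
```

Running this reproduces the Q above: the edge-on strip becomes a unit square facing the camera, and the ratio between P's projected footprint and Q is exactly the stretch to apply to that strip of the image.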
Does that make sense?