I have been exploring the production of map projections of the globe in Python.
This is purely a programming exercise, as existing sites already catalogue the large variety of map projections.
High resolution coastline coordinates can be downloaded as land polygons or coastline linestrings from Data Derived from OpenStreetMap in Shapefile format and extracted using the Python PyShp Library.
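For example, the coastline linestrings can be read with PyShp along these lines (a sketch; the shapefile path is an assumption, to be pointed at the extracted download):-

import shapefile  # the PyShp library

# Hypothetical path: the extracted coastline shapefile in WGS84 coordinates
sf = shapefile.Reader("coastlines-split-4326/lines")
coastlines = [s.points for s in sf.shapes()]  # each a list of (lon, lat) pairs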
Some projection formulae can be obtained from John P. Snyder, Map Projections - A Working Manual (transcribed by Mircea Neacsu), the Mercator projection, for example:- \[x = R\left(\lambda - \lambda_0\right)\] \[y = R\ln \tan \left(\pi / 4 + \phi / 2\right)\] where R is the radius of the sphere at the scale of the map as drawn, and latitude
\(\phi\) and longitude \(\lambda\) are given in radians as is the central meridian \(\lambda_0\).
The coastline can be drawn by projecting each point in each coastline linestring according to the formulae and drawing the \(x\) and \(y\) values in an image with the Pillow ImageDraw library.
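Here is a minimal sketch of that, using the coastlines list read above; the radius R and the image size are assumptions:-

import math
from PIL import Image, ImageDraw

R = 500                      # sphere radius in pixels (an assumption)
lam0 = 0.0                   # central meridian

def mercator(lon, lat):
    # Forward spherical Mercator (Snyder), lon/lat in degrees
    lam, phi = math.radians(lon), math.radians(lat)
    return R * (lam - lam0), R * math.log(math.tan(math.pi / 4 + phi / 2))

size = 2 * int(math.pi * R) + 1          # covers x, y in [-pi R, pi R]
img = Image.new("RGB", (size, size), "white")
draw = ImageDraw.Draw(img)
for line in coastlines:
    pts = [mercator(lon, lat) for lon, lat in line if abs(lat) < 85]
    # Shift the origin to the image centre; image y runs downwards
    pts = [(size / 2 + x, size / 2 - y) for x, y in pts]
    if len(pts) > 1:
        draw.line(pts, fill="black")
img.save("mercator.png")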
More complicated projections can be performed with pyproj, the Peirce Quincuncial projection, for example:-
from pyproj import Proj
p = Proj('+proj=peirce_q +lon_0=25 +shape=square')
This approach does have some drawbacks:-
The complicated specification for Grieger's amazing triptychial projection fails for some reason in pyproj:-
+proj=pipeline +step +proj=ob_tran +o_proj=peirce_q +o_lat_p=-45 +o_lon_p=45 +o_type=horizontal +o_scrollx=-0.25 +step +proj=affine +s11=-1 +s12=0 +s21=0 +s22=-1
Fortunately, Björn Grieger helpfully provides tables for performing his projection. For each pixel in the projected image, the tables specify the latitude and longitude corresponding to that pixel. The pixel value at that latitude and longitude can be obtained from an equirectangular reference image. The tables effectively specify an inverse projection:-
A 5400×2700 equirectangular reference image is first prepared from the shapefile land polygons using the identity projection \(f\left(\lambda,\phi\right) = \left(\lambda,\phi\right)\) (suitably scaled).
A 5400×2700 reference image from NASA's Blue Marble collection, as suggested by Grieger, can be used instead. Hovering the mouse over this and subsequent images switches to a reference image (world.topo.bathy.200407.3x5400x2700.png) downloaded from that site.
The inverse projection proceeds by calling an inverse function to obtain the latitude and longitude for each pixel in the projected image. The corresponding pixel value is obtained from the equirectangular reference image as in the Grieger triptychial projection.
For the Mercator projection, the inverse formulae are:-
\[\lambda = x/ R + \lambda_0\]
\[\phi = \arctan \left(\sinh \left(y/ R\right)\right)\]
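These translate directly into a small helper function (a sketch mirroring the formulae above):-

import math

def mercator_inverse(x, y, R, lam0=0.0):
    # Inverse spherical Mercator: map coordinates back to lon/lat in radians
    lam = x / R + lam0
    phi = math.atan(math.sinh(y / R))
    return lam, phi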
This is the resulting projected image:-
Using the above reference image, the resulting Grieger triptychial projection is:-
Note that the graticule which was drawn on the reference image has thickened in places, due to the local distortion imposed by the projection. This is not altogether undesirable, as it performs a role similar to Tissot's indicatrix in showing local distortions due to map projection.
For the Transverse Mercator projection (taking \(R = 1\)), the inverse formulae are
\[D = y + \phi_0\]
\[\lambda = \lambda_0 + \arctan \left(\sinh \left(x\right)/ \cos \left(D\right)\right)\]
\[\phi = \arcsin \left(\sin \left(D\right)/ \cosh \left(x\right)\right)\]
In Python, it is important to implement the \(\lambda\) formula as
λ = λ0 + atan2(sinh(x), cos(D))
so that the correct quadrant is obtained when \(\cos D\) is zero or negative.
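A corresponding inverse function might look like this (a sketch, with \(R = 1\) as above):-

import math

def tmerc_inverse(x, y, lam0=0.0, phi0=0.0):
    # Inverse spherical Transverse Mercator (R = 1)
    D = y + phi0
    lam = lam0 + math.atan2(math.sinh(x), math.cos(D))  # atan2, not atan
    phi = math.asin(math.sin(D) / math.cosh(x))
    return lam, phi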
This mapping from the reference image to the projected image is similar to "nearest" resampling when resizing an image, and may not be smooth. A smoother projected image can be obtained by mapping to a larger projected image and resizing it using a high-quality Lanczos filter.
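For example, with Pillow, assuming the projected image imp was rendered at twice the intended size:-

from PIL import Image

big = Image.fromarray(imp)   # imp rendered at 2x the intended size
smooth = big.resize((big.width // 2, big.height // 2), Image.LANCZOS)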
This programming exercise has effectively just become a plotting front-end for pyproj. A few tricks are needed:-
Here are some more projections created using pyproj:-
Creation of a projected image as a NumPy array of shape (hp, wp, 3) from a reference image consisting of an array of shape (hr, wr, 3) proceeds as follows, for 0 <= ip < hp, 0 <= jp < wp:-
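In outline it might look like this sketch, where p is the pyproj projection, imr is the reference image array, and xmin, xmax, ymin, ymax (the projection's extent in map coordinates) are assumptions to be filled in for each projection:-

import numpy as np

hp, wp = 3054, 4320                    # projected image size (for example)
hr, wr = imr.shape[:2]                 # imr: equirectangular reference image
imp = np.zeros((hp, wp, 3), dtype=imr.dtype)
m = np.zeros((hp, wp, 2), dtype=np.uint16)
for ip in range(hp):
    for jp in range(wp):
        # Pixel centre in map coordinates (assumes p inverts everywhere here)
        x = xmin + (jp + 0.5) / wp * (xmax - xmin)
        y = ymax - (ip + 0.5) / hp * (ymax - ymin)
        lon, lat = p(x, y, inverse=True)          # degrees
        ir = min(int((90 - lat) / 180 * hr), hr - 1)
        jr = min(int((lon + 180) / 360 * wr), wr - 1)
        m[ip, jp] = ir, jr
        imp[ip, jp] = imr[ir, jr]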
The projected image imp can then be recreated more rapidly from the reference image imr by using the map to set imp[ip, jp] = imr[m[ip, jp, 0], m[ip, jp, 1]].
Wondering how best to implement this efficiently, I put this question to an AI chatbot:-

imr1 is a numpy array of shape (2700, 5400, 3). m is a numpy array of shape (3054, 4320, 2). What is the most efficient python code to create a numpy array imp1 of shape (3054, 4320, 3), where imp1[ip, jp] = imr1[m[ip, jp][0], m[ip, jp][1]]?

It came back with the amazingly good answer:-

To create a numpy array imp1 of shape (3054, 4320, 3) where each element imp1[ip, jp] is set to imr1[m[ip, jp][0], m[ip, jp][1]], you can use numpy's advanced indexing capabilities. This will allow you to efficiently perform the required mapping without the need for explicit loops. Here is how you can do it:

imp1 = imr1[m[..., 0], m[..., 1]]

That one-liner executes in a fraction of a second! The advanced indexing capabilities of NumPy were news to me.
This provided the means to rapidly create a sequence of projections of the rotating globe. A copy of the reference image imr1 can be rolled a few pixels to the right using imr1 = numpy.roll(imr1, step, axis = 1) after projecting it as above for each image in the sequence. This can then be converted to a GIF using the imageio library: imageio.mimsave(giffile, sequence, fps=25) and optimised using the pygifsicle library: optimize(giffile).
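Putting that together, a sketch of the sequence loop (the step size and filename are assumptions):-

import numpy as np
import imageio
from pygifsicle import optimize

giffile = "rotating_globe.gif"             # output filename (an assumption)
step = 15                                  # pixels to roll per frame
globe = imr1.copy()
sequence = []
for _ in range(globe.shape[1] // step):    # one full rotation
    sequence.append(globe[m[..., 0], m[..., 1]])
    globe = np.roll(globe, step, axis=1)
imageio.mimsave(giffile, sequence, fps=25)
optimize(giffile)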
Here is the rather unsurprising result for the Fahey projection:-
The result for the Transverse Mercator projection is more interesting as it shows the Northern hemisphere rotating anti-clockwise around the North pole and the Southern hemisphere rotating clockwise around the South pole.
A map produced as above is a very useful encapsulation of a projection and can be used to rapidly create projected images from reference images. Grieger saved his map as a text file, but this needs to be read line by line to create the requisite mapping array. A map can be saved in a more usable format as a 16-bit grayscale PNG image. Firstly, we revise the above description of map creation to create a map array of shape (2, h, w) for a w × h projected image. This allows an even more succinct projection statement: imp1 = imr1[m[0], m[1]]. My Python code now creates the map without explicit looping by exclusively using array operations. The pyproj call p(x, y, inverse = True) accepts x and y as arrays.
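A vectorised sketch of this map creation, with the projection p and its extent passed in as parameters (the extent values are assumptions for each projection):-

import numpy as np

def make_map(p, w, h, extent, hr=2700, wr=5400):
    # Build a (2, h, w) inverse-projection map for pyproj projection p.
    # extent = (xmin, xmax, ymin, ymax) in map coordinates;
    # hr, wr are the reference image dimensions.
    xmin, xmax, ymin, ymax = extent
    j, i = np.meshgrid(np.arange(w), np.arange(h))
    x = xmin + (j + 0.5) / w * (xmax - xmin)
    y = ymax - (i + 0.5) / h * (ymax - ymin)
    lon, lat = p(x, y, inverse=True)       # pyproj accepts arrays
    lon = np.nan_to_num(lon)               # points outside the projection
    lat = np.nan_to_num(lat)
    m = np.empty((2, h, w), dtype=np.uint16)
    m[0] = np.clip((90 - lat) / 180 * hr, 0, hr - 1).astype(np.uint16)
    m[1] = np.clip((lon + 180) / 360 * wr, 0, wr - 1).astype(np.uint16)
    return m

With a map m from this function, the projection is simply imp1 = imr1[m[0], m[1]].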
OpenCV cannot save a two-channel grayscale image, so the map needs to be reshaped by m.reshape(2 * h, w) to effectively double the height before saving. Note that numpy.reshape() gives a new shape to an array without changing its data. After reading a saved map, it is restored to its original shape by m.reshape(2, h // 2, w). PNG images are saved with lossless data compression. A typical map of shape (2, 4320, 4320) is 74,649,600 bytes in memory but 10,873,214 bytes on disk. There is not usually much to see when viewing these map images as the pixel values are less than the maximum dimension of the reference image, for example 5400 compared with the maximum value 65535 of a 16-bit grayscale pixel. Out of interest, for viewing purposes only, the map can be scaled up, in this example by a factor of 10:-
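In code, the save and load might look like this sketch using OpenCV:-

import cv2
import numpy as np

def save_map(filename, m):
    # Save a (2, h, w) uint16 map as a 16-bit grayscale PNG of height 2h
    _, h, w = m.shape
    cv2.imwrite(filename, m.astype(np.uint16).reshape(2 * h, w))

def load_map(filename):
    # Restore a map saved by save_map to its original (2, h, w) shape
    g = cv2.imread(filename, cv2.IMREAD_UNCHANGED)   # preserve 16-bit depth
    h2, w = g.shape
    return g.reshape(2, h2 // 2, w)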
Map projections have been considered as representations of the Earth's globe, but they are equally applicable in another familiar context: the sphere of vision. The external world as perceived with one eye can be regarded as being painted on the inside of a sphere and seen from its centre. With our eyes we just look and see, but as a record we can take photographs. A photograph effectively uses a type of projection of the sphere of vision which depends on the type of camera and its lens:-
You don't actually need to use any of the above-mentioned apps to view 360° equirectangular images: there is excellent support for them in Google Photos, which also has good editing tools to enhance your images. On a PC, you access it in your browser; on a smartphone or tablet there is a Google Photos app.
Google Photos recognises a 360° image by its XMP metadata:-

UsePanoramaViewer True
If this metadata is missing for some reason (the image was not created in a 360° camera, for example) it can be added using the ExifFixer utility.
You can use the mouse to rotate and zoom the view as in the above-mentioned apps. Only a straight view is supported. Toggle to full-screen and back using F11.
In the Google Photos app on an iPhone, the same images can be viewed and rotated and zoomed using finger movement. Tap the image to show it with or without icons. There is also a compass icon which moves the image using the iPhone's gyroscope. These images are ideal for mirroring to a big screen TV using Apple TV.
Google Photos limits the zooming out of these images as they would suffer extreme wide-angle distortion at the edges. If you want to see wide views it is better to create a central cylindrical panorama from a 360° equirectangular image. This can be done in Hugin, or with an ffmpeg command:-
ffmpeg -i Parliament.jpg -vf v360=e:cylindrical:h_fov=360:v_fov=115,scale=w=iw/2:h=ih -y -update true output.jpg
A 360° camera with back-to-back fisheye lenses may also provide a RAW image showing the original dual fisheye images before they are stitched by firmware in the camera, like this image taken with the RICOH THETA Z1:-
When a fisheye lens is used on a normal camera, however, an original fisheye image is obtained, like this one from the excellent review of the Laowa 4mm f/2.8 Circular Fisheye lens by Klaus Schroiff at OpticalLimits.
Such an image needs to be converted to equirectangular format before it can be viewed in one of the 360° apps.
This could be done using a panorama stitching app like PTGui or Hugin, but it is easier to use the ffmpeg v360 filter. In a Windows cmd window, the command
ffmpeg -i P6300019.jpg -vf v360=fisheye:e:ih_fov=190:iv_fov=190 -y -update true output.jpg
gave this result:-
With a 180° field of view, the hemisphere would have projected to a square, but the 190° field of view has extended the sides of the square.
This can now be viewed in the Online 360° Panorama Viewer.
For a fisheye photograph taken with the camera pointing up or down (this one was pointing slightly down), the pitch parameter can be used to get a straight horizon with perpendicular verticals:-
ffmpeg -i P6300019.jpg -vf v360=fisheye:e:ih_fov=190:iv_fov=190:pitch=-5 -y -update true output.jpg
It is not documented exactly which fisheye projection ffmpeg uses, but it is almost certainly the Equidistant projection. It also supports the Stereographic and Equisolid formats which are mentioned in the Fisheye lens article. The effective projection of this particular lens is also unknown and would take analysis of an image of concentric circles to establish. The ffmpeg fisheye projection, however, does seem to give an acceptable result for this lens. As mentioned above, it may be preferable to create a central cylindrical panorama instead of an equirectangular projection, which can be done by this ffmpeg command:-
ffmpeg -i P6300019.jpg -vf v360=fisheye:cylindrical:ih_fov=190:iv_fov=190:h_fov=190:v_fov=90,scale=w=iw/2:h=ih -y -update true output.jpg
giving
Here is another fisheye image taken by Klaus Schroiff with the lens pointing upwards:-
It is possible to create a 360° cylindrical panorama from the periphery of such an image using the ffmpeg v360 filter:-
ffmpeg -i P6080006.jpg -vf v360=fisheye:cylindrical:ih_fov=190:iv_fov=190:h_fov=360:v_fov=100:pitch=-89,scale=w=iw/2:h=ih -y -update true output.jpg
The pitch parameter was needed because the lens axis was not exactly vertical. Some yaw may be needed as well. The best values for these parameters can be determined by trial and error. This may be the most natural way to use a fisheye lens, as our sphere of vision is usually an extended hemisphere above us, with the part underneath hidden by the surface of the earth.
It may be thought desirable to convert a panorama (taken with a panoramic camera or stitched from separate images) to equirectangular format for viewing. This can be done, but it is unnecessary, as such panoramas can already be viewed as they are in Google Photos. What would be the point of replacing the original with a converted version?
Cameras like the Insta360 EVO take a side-by-side pair of 180° fisheye images which the camera converts to left and right hemispheres in an equirectangular projection, like this example from Carlton Wright:-

On a smartphone, you would use the Insta360 EVO app connected to the camera by WiFi to view the image with VR glasses. On a PC, the easiest way to view these images interactively in various directions without distortion is to use StereoPhoto Maker. In SPM, use Menu -> Edit -> 360/180 image -> 360/180 Image Preview (Ctrl+Alt P). You can zoom in and out using the mouse wheel or drag the image in various directions with the mouse. Press the space bar to capture a view for saving.

Stereo pairs taken with two front-facing fisheye lenses have an inherent problem in that horizontal disparity decreases away from the centre whereas vertical disparity increases. This is only corrected in rectified views looking straight ahead. In practice, however, this effect may not be noticeable in sideways views if there are no nearby objects close to the left or right edges. If you want to see the entire width you can convert the stereo pair from equirectangular to a central cylindrical panorama using the above-mentioned ffmpeg command for a 360° image, though this might exaggerate vertical disparity effects.

You can convert these side-by-side stereo pairs to colour anaglyphs in StereoPhoto Maker, copy them to Google Photos, and then mirror them to your big screen TV for viewing with red/cyan glasses.
A photograph taken with a normal rectilinear wide-angle lens can also be viewed looking in different directions. This photograph of Graz in Austria was taken with a 22mm lens on a Canon M50 camera:-

It can be converted to equirectangular format with Hugin. Select it with Edit->Add Image... and Hugin gets the Lens type, Focal length, and Focal length multiplier from the EXIF data. On the Stitcher tab, click on the Calculate field of view and Calculate optimal size buttons, then Stitch!, giving this:-

Hugin adds the XMP metadata needed to indicate to compatible panorama viewers that this is part of a full 360° equirectangular projection:-
You can also view these original photographs in different directions in my pan+tilt+crop app from 2007 which I have recovered from backup.
Christopher B. Jones, 2024-07-12