Forward Map Projections

I have been exploring the production of map projections of the globe in Python. This is purely a programming exercise, as existing sites already catalogue the large variety of map projections. High-resolution coastline coordinates can be downloaded as land polygons or coastline linestrings from Data Derived from OpenStreetMap in Shapefile format and extracted using the Python PyShp library. Some projection formulae can be obtained from John P. Snyder, Map Projections - A Working Manual (transcribed by Mircea Neacsu); for the Mercator projection, for example:- \[x = R\left(\lambda - \lambda_0\right)\] \[y = R\ln \tan \left(\pi / 4 + \phi / 2\right)\] where R is the radius of the sphere at the scale of the map as drawn, and the latitude \(\phi\), the longitude \(\lambda\) and the central meridian \(\lambda_0\) are given in radians. The coastline can be drawn by projecting each point of each coastline linestring according to these formulae and plotting the resulting \(x\) and \(y\) values in an image with the Pillow ImageDraw library.
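As a minimal sketch of this pipeline, with a hypothetical three-point coastline standing in for the shapefile data and an assumed radius R in pixels:-

```python
from math import log, pi, radians, tan

from PIL import Image, ImageDraw

R = 500         # sphere radius in pixels at the map scale (assumed)
LAM0 = 0.0      # central meridian, radians
MAX_LAT = 85.0  # clip near the poles, where y diverges

def mercator(lat_deg, lon_deg):
    """Forward Mercator: degrees in, map coordinates (x, y) out."""
    lam, phi = radians(lon_deg), radians(lat_deg)
    return R * (lam - LAM0), R * log(tan(pi / 4 + phi / 2))

# Hypothetical coastline linestring as (latitude, longitude) pairs.
coastline = [(50.0, -5.0), (51.0, -4.0), (52.0, -4.5)]

w = h = 2 * round(pi * R)  # generous square canvas
img = Image.new("RGB", (w, h), "white")
draw = ImageDraw.Draw(img)

# Project each vertex and shift to image coordinates (y axis flipped).
points = [(w / 2 + x, h / 2 - y)
          for x, y in (mercator(lat, lon)
                       for lat, lon in coastline if abs(lat) <= MAX_LAT)]
draw.line(points, fill="black")
```

In a real run, `coastline` would be each linestring read from the shapefile with PyShp.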

More complicated projections can be performed with pyproj, the Peirce Quincuncial projection, for example:-

p = Proj('+proj=peirce_q +lon_0=25 +shape=square')

This approach does have some drawbacks:-

The Grieger Triptychial Projection

The complicated specification for this amazing projection fails for some reason in pyproj:-

+proj=pipeline +step +proj=ob_tran +o_proj=peirce_q +o_lat_p=-45 +o_lon_p=45 +o_type=horizontal +o_scrollx=-0.25 +step +proj=affine +s11=-1 +s12=0 +s21=0 +s22=-1

Fortunately, Björn Grieger helpfully provides tables for performing his projection. For each pixel in the projected image, the tables specify the latitude and longitude corresponding to that pixel. The pixel value at that latitude and longitude can be obtained from an equirectangular reference image. The tables effectively specify an inverse projection:-

\(\lambda ,\phi = f^{-1}\left(x,y\right)\) instead of \(x,y = f\left(\lambda ,\phi \right)\)

Using this approach (without the tables) solves many of the above drawbacks.

Inverse Map Projections

A 5400×2700 equirectangular reference image is first prepared from the shapefile land polygons using the identity mapping \(f\left(x,y\right) = x,y\) (suitably scaled).

A 5400×2700 reference image from NASA's Blue Marble collection, as suggested by Grieger, can be used instead. Hovering the mouse over this and subsequent images switches to a reference image (world.topo.bathy.200407.3x5400x2700.png) downloaded from that site.

The inverse projection proceeds by calling an inverse function to obtain the latitude and longitude for each pixel in the projected image. The corresponding pixel value is obtained from the equirectangular reference image as in the Grieger triptychial projection. For the Mercator projection, the inverse formulae are:- \[\lambda = x/ R + \lambda_0\] \[\phi = \arctan \left(\sinh \left(y/ R\right)\right)\] This is the resulting projected image:-
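These inverse formulae can be checked against the forward ones by round-tripping a point; a minimal sketch, assuming a unit sphere and \(\lambda_0 = 0\):-

```python
from math import atan, log, pi, sinh, tan

R, LAM0 = 1.0, 0.0  # unit sphere, central meridian at Greenwich (assumed)

def forward(lam, phi):
    # Forward Mercator, as above.
    return R * (lam - LAM0), R * log(tan(pi / 4 + phi / 2))

def inverse(x, y):
    # Inverse Mercator: arctan(sinh(y/R)) is the Gudermannian function.
    return x / R + LAM0, atan(sinh(y / R))

# Round-tripping a point should recover it.
lam, phi = 0.5, 0.8
lam2, phi2 = inverse(*forward(lam, phi))
assert abs(lam2 - lam) < 1e-12 and abs(phi2 - phi) < 1e-12
```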

Using the above reference image, the resulting Grieger triptychial projection is:-

Note that the graticule which was drawn on the reference image has thickened in places, due to the local distortion imposed by the projection. This is not altogether undesirable, as it performs a role similar to Tissot's indicatrix in showing local distortions due to map projection. For the Transverse Mercator projection, the inverse formulae (with \(x\) and \(y\) in units of \(R\)) are \[D = y + \phi_0\] \[\lambda = \lambda_0 + \arctan \left(\sinh \left(x\right)/ \cos \left(D\right)\right)\] \[\phi = \arcsin \left(\sin \left(D\right)/ \cosh \left(x\right)\right)\] In Python, it is important to implement the \(\lambda\) formula as

λ = λ0 + atan2(sinh(x), cos(D))
to ensure that the resulting angle is in the correct quadrant.
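A round-trip check of these formulae, assuming the spherical case with \(R = 1\) and \(\lambda_0 = \phi_0 = 0\) (the forward formulae are Snyder's):-

```python
from math import asin, atan2, atanh, cos, cosh, sin, sinh, tan

LAM0 = PHI0 = 0.0  # central meridian and latitude of origin (assumed zero)

def forward(lam, phi):
    # Spherical Transverse Mercator (Snyder), R = 1.
    b = cos(phi) * sin(lam - LAM0)
    x = atanh(b)
    y = atan2(tan(phi), cos(lam - LAM0)) - PHI0
    return x, y

def inverse(x, y):
    d = y + PHI0
    lam = LAM0 + atan2(sinh(x), cos(d))  # atan2 keeps the correct quadrant
    phi = asin(sin(d) / cosh(x))
    return lam, phi

# Round-tripping a point should recover it.
lam, phi = 0.7, 0.5
lam2, phi2 = inverse(*forward(lam, phi))
assert abs(lam2 - lam) < 1e-12 and abs(phi2 - phi) < 1e-12
```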

This mapping from the reference image to the projected image is similar to "nearest" resampling when resizing an image, and may not be smooth. A smoother projected image can be obtained by mapping to a larger projected image and resizing it using a high-quality Lanczos filter.

This programming exercise has effectively just become a plotting front-end for pyproj. A few tricks are needed:-

Similar projections can be produced by PyGMT, although it cannot currently use the Blue Marble collection. The List of map projections at Wikipedia uses images created with the Geocart map projection software, which costs US$240.00 for a Personal License.

Here are some more projections created using pyproj:-

Eckert I +proj=eck1
Gall +proj=gall
Interrupted Goode Homolosine +proj=igh
Natural Earth +proj=natearth
Quartic Authalic +proj=qua_aut
Stereographic +proj=stere +lat_0=90
This is conventionally drawn inside a circular boundary such as the equator, but is continued here to fill a rectangle.
Times +proj=times
Fahey +proj=fahey
Foucaut +proj=fouc
Tobler-Mercator +proj=tobmerc
Azimuthal Equidistant +proj=aeqd
Lambert Azimuthal Equal Area +proj=laea
Oblique Mercator +proj=omerc +lon_1=0 +lat_1=51 +lon_2=151 +lat_2=-34 +no_rot
This projection is useful for showing long narrow strips on the globe. It is conformal, unlike most other cylindrical projections. Here it shows (when cropped) what would be seen under a great circle flight from London to Sydney.
The uncropped projection using this ETOPO Global Relief Model is:-

Rotating the Globe

Creation of a projected image as a NumPy array of shape (hp, wp, 3) from a reference image consisting of an array of shape (hr, wr, 3) proceeds as follows, for 0 <= ip < hp, 0 <= jp < wp:-

  1. Coordinates x and y are derived from jp and ip.
  2. Longitude and latitude λ and φ are obtained from p(x, y, inverse = True).
  3. ir and jr are derived from φ and λ.
  4. Pixel [ip, jp] in the projected image is copied from pixel [ir, jr] in the reference image.
The mapping from the reference image to the projected image in step 4 can be stored in an array m of shape (hp, wp, 2) by setting m[ip, jp] = [ir, jr]. This creates a map similar to the tables provided by Grieger.
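The four steps can be sketched as follows; to keep the sketch dependency-free, the inverse Mercator formulae above stand in for the pyproj call p(x, y, inverse = True), and the image sizes are arbitrary:-

```python
import numpy as np

hr, wr = 2700, 5400  # reference image size (hypothetical)
hp, wp = 200, 200    # projected image size (hypothetical)
R = wp / (2 * np.pi) # radius chosen so the map spans the full width

imr = np.zeros((hr, wr, 3), dtype=np.uint8)  # stand-in reference image
imp = np.zeros((hp, wp, 3), dtype=np.uint8)
m = np.zeros((hp, wp, 2), dtype=np.int32)

for ip in range(hp):
    for jp in range(wp):
        # 1. Pixel indices -> map coordinates (origin at the image centre).
        x, y = jp - wp / 2, hp / 2 - ip
        # 2. Map coordinates -> longitude and latitude (inverse Mercator).
        lam = x / R
        phi = np.arctan(np.sinh(y / R))
        # 3. Longitude and latitude -> reference pixel indices.
        ir = min(int((np.pi / 2 - phi) / np.pi * hr), hr - 1)
        jr = min(int((lam + np.pi) / (2 * np.pi) * wr), wr - 1)
        # 4. Copy the pixel and record the mapping.
        imp[ip, jp] = imr[ir, jr]
        m[ip, jp] = ir, jr
```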

The projected image imp1 can then be recreated more rapidly from the reference image imr1 by using the map to set

imp1[ip, jp] = imr1[m[ip, jp][0], m[ip, jp][1]] for 0 <= ip < hp, 0 <= jp < wp.

On my PC this still takes about 16 seconds, compared to 100 seconds for the original projection. It is slowed down by the explicit looping over ip and jp in the Python code. Trying to find a way of letting NumPy perform the looping, I asked ChatGPT:-
imr1 is a numpy array of shape (2700, 5400, 3).
m is a numpy array of shape (3054, 4320, 2).
What is the most efficient python code to create a numpy array imp1 of shape (3054, 4320, 3),
where imp1[ip, jp] = imr1[m[ip, jp][0], m[ip, jp][1]]
It came back with the amazingly good answer:-
To create a numpy array imp1 of shape (3054, 4320, 3) where each element imp1[ip, jp] is set to imr1[m[ip, jp][0], m[ip, jp][1]], you can use numpy's advanced indexing capabilities. This will allow you to efficiently perform the required mapping without the need for explicit loops. Here is how you can do it: imp1 = imr1[m[..., 0], m[..., 1]]
That one-liner executes in a fraction of a second! The advanced indexing capabilities of NumPy were news to me.

This provided the means to rapidly create a sequence of projections of the rotating globe. A copy of the reference image imr1 can be rolled a few pixels to the right using imr1 = numpy.roll(imr1, step, axis = 1) after projecting it as above for each image in the sequence. This can then be converted to a GIF using the imageio library: imageio.mimsave(giffile, sequence, fps=25) and optimised using the pygifsicle library: optimize(giffile).
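A sketch of the frame loop, with a random stand-in reference image and a trivial stand-in map (the imageio call is indicated as a comment, since the frames themselves are all that the loop produces):-

```python
import numpy as np

# Small stand-in sizes; see the map-creation step for building a real map m.
hr, wr = 270, 540
hp, wp = 128, 128
imr1 = np.random.randint(0, 256, (hr, wr, 3), dtype=np.uint8)
m = np.stack(np.meshgrid(np.linspace(0, hr - 1, hp).astype(int),
                         np.linspace(0, wr - 1, wp).astype(int),
                         indexing='ij'), axis=-1)  # stand-in map, shape (hp, wp, 2)

step = 15  # pixels to roll per frame (assumed)
sequence = []
for _ in range(wr // step):
    sequence.append(imr1[m[..., 0], m[..., 1]])  # project via the map
    imr1 = np.roll(imr1, step, axis=1)           # rotate the globe

# The frames can then be written out, e.g. with
# imageio.mimsave(giffile, sequence, fps=25)
```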

Here is the rather unsurprising result for the Fahey projection:-

The result for the Transverse Mercator projection is more interesting as it shows the Northern hemisphere rotating anti-clockwise around the North pole and the Southern hemisphere rotating clockwise around the South pole.

Saving the Map

A map produced as above is a very useful encapsulation of a projection and can be used to rapidly create projected images from reference images. Grieger saved his map as a text file, but this needs to be read line by line to create the requisite mapping array. A map can be saved in a more usable format as a 16-bit grayscale PNG image. Firstly, we revise the above description of map creation to create a map array of shape (2, h, w) for a w × h projected image. This allows an even more succinct projection statement: imp1 = imr1[m[0], m[1]]. My Python code now creates the map without explicit looping by exclusively using array operations. The pyproj call p(x, y, inverse = True) accepts x and y as arrays.
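The loop-free map creation can be sketched with NumPy array operations alone, again substituting the inverse Mercator for the array form of p(x, y, inverse = True), with hypothetical sizes:-

```python
import numpy as np

hr, wr = 2700, 5400  # reference image size (hypothetical)
hp, wp = 200, 200    # projected image size (hypothetical)
R = wp / (2 * np.pi)

# All pixel indices at once, instead of looping over ip and jp.
jp, ip = np.meshgrid(np.arange(wp), np.arange(hp))
x, y = jp - wp / 2, hp / 2 - ip

# Inverse Mercator applied to whole arrays.
lam = x / R
phi = np.arctan(np.sinh(y / R))

# Longitude and latitude -> reference pixel indices, clipped to bounds.
ir = np.minimum(((np.pi / 2 - phi) / np.pi * hr).astype(int), hr - 1)
jr = np.minimum(((lam + np.pi) / (2 * np.pi) * wr).astype(int), wr - 1)

m = np.stack((ir, jr))  # shape (2, hp, wp); project with imr1[m[0], m[1]]
```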

OpenCV cannot save a two-channel grayscale image, so the map needs to be reshaped by m.reshape(2 * h, w) to effectively double the height before saving. Note that numpy.reshape() gives a new shape to an array without changing its data. After reading a saved map, it is restored to its original shape by m.reshape(2, h // 2, w). PNG images are saved with lossless data compression. A typical map of shape (2, 4320, 4320) is 74,649,600 bytes in memory but 10,873,214 bytes on disk. There is not usually much to see when viewing these map images as the pixel values are less than the maximum dimension of the reference image, for example 5400 compared with the maximum value 65535 of a 16-bit grayscale pixel. Out of interest, for viewing purposes only, the map can be scaled up, in this example by a factor of 10:-
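The reshape round trip can be sketched as follows, using a smaller random map as a stand-in; the OpenCV calls are indicated in comments:-

```python
import numpy as np

h = w = 432  # a smaller stand-in for the 4320 × 4320 map in the text
m = np.random.randint(0, 5400, (2, h, w)).astype(np.uint16)

# OpenCV cannot write a two-channel image, so stack the channels vertically.
flat = m.reshape(2 * h, w)     # a new view of the same data; nothing is copied
# cv2.imwrite('map.png', flat) # would save it as a 16-bit grayscale PNG

# flat = cv2.imread('map.png', cv2.IMREAD_UNCHANGED)  # read it back
restored = flat.reshape(2, flat.shape[0] // 2, w)     # restore the shape
assert np.array_equal(restored, m)
```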

The Sphere of Vision

Map projections have been considered as representations of the Earth's globe, but they are equally applicable in another familiar context: the sphere of vision. The external world as perceived with one eye can be regarded as being painted on the inside of a sphere and seen from its centre. With our eyes we just look and see, but as a record we can take photographs. A photograph effectively uses a type of projection of the sphere of vision which depends on the type of camera and its lens:-

Only the last of these needs any special arrangement for viewing. An equirectangular image can be viewed with a Virtual Reality headset, but may also be viewed with a PC or Mac app such as RICOH THETA or Insta360 STUDIO. With these interactive apps, you can view part of the equirectangular image in a chosen direction to see a gnomonic or mirror ball projection. RICOH THETA calls these Straight and Mirror Ball views; Insta360 STUDIO calls them DeWarp and Crystal Ball. Here, for example, is an equirectangular image taken by Carlton Wright, with two straight views in RICOH THETA of parts of the image:-

For a very wide view, equivalent to a photograph with a very wide-angle lens, the stretching at the sides can be extreme:-

It is then better to use the mirror ball view:-

Here is a panorama created by my Python program using a Central Cylindrical projection and a Fahey projection:-

Exploiting the equivalence of the globe and the sphere of vision, these apps can also be used to view reference images like the ETOPO Global Relief Model:-

Christopher B. Jones, 2024-06-11