I have been exploring the production of map projections of the globe in Python.
This is purely a programming exercise, as existing sites already catalogue the large variety of map projections.
High-resolution coastline coordinates can be downloaded as land polygons or coastline linestrings, in Shapefile format, from Data Derived from OpenStreetMap and extracted using the Python PyShp library.
Some projection formulae can be obtained from John P. Snyder, Map Projections - A Working Manual (transcribed by Mircea Neacsu), the Mercator projection, for example:- \[x = R\left(\lambda - \lambda_0\right)\] \[y = R\ln \tan \left(\pi / 4 + \phi / 2\right)\] where \(R\) is the radius of the sphere at the scale of the map as drawn, and latitude
\(\phi\) and longitude \(\lambda\) are given in radians as is the central meridian \(\lambda_0\).
The coastline can be drawn by projecting each point in each coastline linestring according to the formulae and drawing the \(x\) and \(y\) values in an image with the Pillow ImageDraw library.
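As a minimal sketch of the forward step (the sample coordinates and the scaling to pixel coordinates are illustrative assumptions, not taken from the original code):

```python
from math import radians, log, tan, pi

def mercator(lon_deg, lat_deg, lon0_deg=0.0, R=1.0):
    """Forward spherical Mercator projection (Snyder's formulae)."""
    x = R * (radians(lon_deg) - radians(lon0_deg))
    y = R * log(tan(pi / 4 + radians(lat_deg) / 2))
    return x, y

# A projected linestring can then be scaled to pixel coordinates and
# drawn with PIL.ImageDraw.Draw(image).line(points).
points = [mercator(lon, lat) for lon, lat in [(0, 0), (10, 20), (20, 40)]]
```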

More complicated projections can be performed with pyproj, the Peirce Quincuncial projection, for example:-

`p = Proj('+proj=peirce_q +lon_0=25 +shape=square')`

This approach does have some drawbacks:-

- It is slow and inefficient: the coastal linestrings contain 73,361,080 points, most of which land on pixels that have already been drawn in the projected image.
- When the central meridian differs from 0°, lines crossing the edges of the projected image may need to be split.
- The minimum line thickness of one pixel obscures the shape of intricate coastlines. It would be better to draw filled polygons with no outline, but the polygon for Antarctica contains the South pole, which projects to infinity in the Mercator projection.
- If parallels or meridians are not straight lines, it can be hard to draw the graticule.

The complicated specification for Grieger's amazing triptychial projection fails for some reason in pyproj:-

`+proj=pipeline +step +proj=ob_tran +o_proj=peirce_q +o_lat_p=-45 +o_lon_p=45 +o_type=horizontal +o_scrollx=-0.25 +step +proj=affine +s11=-1 +s12=0 +s21=0 +s22=-1`

Fortunately, Björn Grieger helpfully provides tables for performing his projection. For each pixel in the projected image, the tables specify the latitude and longitude corresponding to that pixel. The pixel value at that latitude and longitude can be obtained from an equirectangular reference image. The tables effectively specify an inverse projection:-

A 5400×2700 equirectangular reference image is first prepared from the shapefile land polygons using the identity projection \(f\left(x,y\right) = \left(x,y\right)\) (suitably scaled).

A 5400×2700 reference image from NASA's Blue Marble collection, as suggested by Grieger, can be used instead. Hovering the mouse over this and subsequent images switches to a reference image (world.topo.bathy.200407.3x5400x2700.png) downloaded from that site.

The inverse projection proceeds by calling an inverse function to obtain the latitude and longitude for each pixel in the projected image. The corresponding pixel value is obtained from the equirectangular reference image as in the Grieger triptychial projection.
For the Mercator projection, the inverse formulae are:-
\[\lambda = x/ R + \lambda_0\]
\[\phi = \arctan \left(\sinh \left(y/ R\right)\right)\]
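A minimal sketch of this inverse step for the Mercator case (the pixel-to-coordinate scaling that precedes it is assumed):

```python
from math import atan, sinh, radians, log, tan, pi

def mercator_inverse(x, y, lam0=0.0, R=1.0):
    """Inverse spherical Mercator: (x, y) back to (λ, φ) in radians."""
    lam = x / R + lam0
    phi = atan(sinh(y / R))
    return lam, phi

# Round trip: forward-project 40°N, then invert it.
y40 = log(tan(pi / 4 + radians(40) / 2))
lam, phi = mercator_inverse(0.0, y40)
# phi is 40° in radians, ready for indexing into the reference image
```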
This is the resulting projected image:-

Using the above reference image, the resulting Grieger triptychial projection is:-

Note that the graticule which was drawn on the reference image has thickened in places, due to the local distortion imposed by the projection. This is not altogether undesirable, as it performs a role similar to Tissot's indicatrix in showing local distortions due to the map projection.
For the Transverse Mercator projection, the inverse formulae are
\[D = y + \phi_0\]
\[\lambda = \lambda_0 + \arctan \left(\sinh \left(x\right)/ \cos \left(D\right)\right)\]
\[\phi = \arcsin \left(\sin \left(D\right)/ \cosh \left(x\right)\right)\]
In Python, it is important to implement the \(\lambda\) formula as

`λ = λ0 + atan2(sinh(x), cos(D))`
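The reason is that \(\cos D\) can be negative (or zero) when the point lies more than 90° from the central meridian; atan2 preserves the quadrant that plain atan discards. A small illustration, with values chosen so that \(\cos D < 0\):

```python
from math import atan, atan2, sinh, cos, pi

D, x = 2.0, 0.5                   # cos(2.0) < 0
naive = atan(sinh(x) / cos(D))    # wrong quadrant
robust = atan2(sinh(x), cos(D))   # correct quadrant
# the two results differ by exactly π for this sign combination
```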

This mapping from the reference image to the projected image is similar to "nearest" resampling when resizing an image, and may not be smooth. A smoother projected image can be obtained by mapping to a larger projected image and resizing it using a high-quality Lanczos filter.

This programming exercise has effectively just become a plotting front-end for pyproj. A few tricks are needed:-

- The dimensions of a projection cannot be obtained directly from pyproj. They can often be obtained by projecting points known to be on the boundary, such as the central meridian ± 180° and the central parallel ± 90°.
- Some projections are not rectangular and should be surrounded by a black background. The inverse projection of a background point in pyproj sometimes gives (inf, inf), which indicates that the corresponding pixel should be black. Sometimes, however, the inverse projection gives a latitude and longitude which project elsewhere on the map. These cases can be detected by only plotting points \((x, y)\) for which the round trip `p(*p(x, y, inverse=True))` returns \((x, y)\).
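The round-trip check can be vectorised with NumPy. The projection below is a toy north-polar azimuthal stand-in for a pyproj Proj object, used only to show the masking idea; with pyproj the same mask would be built from p(lon, lat) and p(x, y, inverse=True):

```python
import numpy as np

def inv(x, y):
    """Toy inverse azimuthal projection; latitude is clamped, so
    background points (radius > 2) cannot round-trip."""
    r = np.hypot(x, y)
    lat = np.clip(90 - 90 * r, -90, 90)
    lon = np.degrees(np.arctan2(y, x))
    return lon, lat

def fwd(lon, lat):
    """Toy forward azimuthal projection matching inv() above."""
    r = (90 - lat) / 90
    return r * np.cos(np.radians(lon)), r * np.sin(np.radians(lon))

x = np.array([0.5, 3.0])   # second point lies outside the map disc
y = np.array([0.0, 0.0])
x2, y2 = fwd(*inv(x, y))
on_map = np.isclose(x, x2) & np.isclose(y, y2)
# on_map is [True, False]: only the first pixel gets a colour
```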

Here are some more projections created using pyproj:-

The uncropped projection using this ETOPO Global Relief Model is:-

Creation of a projected image as a NumPy array of shape (hp, wp, 3) from a reference image consisting of an array of shape (hr, wr, 3) proceeds as follows, for 0 <= ip < hp, 0 <= jp < wp:-

- coordinates x and y are derived from jp and ip
- longitude and latitude λ and φ are obtained from p(x, y, inverse = True)
- ir and jr are derived from φ and λ
- Pixel [ip, jp] in the projected image is copied from pixel [ir, jr] in the reference image
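The steps above can be sketched as an explicit double loop. The inverse function here is a hypothetical stand-in that simply flips the image, to keep the example self-contained:

```python
import numpy as np

def project_with_loop(imr, inverse, hp, wp):
    """Slow reference implementation: loop over every projected pixel
    and copy the value that `inverse` points at in the reference image."""
    imp = np.zeros((hp, wp, 3), dtype=imr.dtype)
    for ip in range(hp):
        for jp in range(wp):
            ir, jr = inverse(ip, jp)
            imp[ip, jp] = imr[ir, jr]
    return imp

imr = np.arange(2 * 3 * 3, dtype=np.uint8).reshape(2, 3, 3)
flip = lambda ip, jp: (imr.shape[0] - 1 - ip, jp)  # toy "inverse projection"
imp = project_with_loop(imr, flip, 2, 3)
# imp is imr flipped top-to-bottom
```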

The projected image imp can then be recreated more rapidly from the reference image imr by using the map to set `imp[ip, jp] = imr[m[ip, jp, 0], m[ip, jp, 1]]` for each pixel.

On my PC this still takes about 16 seconds compared to 100 seconds for the original projection. It is slowed down by the explicit looping over ip and jp in the Python code. Trying to find a way of letting NumPy perform the looping, I asked ChatGPT:-

imr1 is a numpy array of shape (2700, 5400, 3).
m is a numpy array of shape (3054, 4320, 2).
What is the most efficient python code to create a numpy array imp1 of shape (3054, 4320, 3), where imp1[ip, jp] = imr1[m[ip, jp][0], m[ip, jp][1]]?

It came back with the amazingly good answer:-

To create a numpy array imp1 of shape (3054, 4320, 3) where each element imp1[ip, jp] is set to imr1[m[ip, jp][0], m[ip, jp][1]], you can use numpy's advanced indexing capabilities. This will allow you to efficiently perform the required mapping without the need for explicit loops. Here is how you can do it: `imp1 = imr1[m[..., 0], m[..., 1]]`

That one-liner executes in a fraction of a second! The advanced indexing capabilities of NumPy were news to me.

This provided the means to rapidly create a sequence of projections of the rotating globe. A copy of the reference image imr1 can be rolled a few pixels to the right using imr1 = numpy.roll(imr1, step, axis = 1) after projecting it as above for each image in the sequence. This can then be converted to a GIF using the imageio library: imageio.mimsave(giffile, sequence, fps=25) and optimised using the pygifsicle library: optimize(giffile).
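A sketch of the frame loop follows; the imageio and pygifsicle calls are left as comments, matching the text above, so the sketch stays self-contained. The tiny demonstration image and identity map are assumptions for the example:

```python
import numpy as np

def rotating_frames(imr, m, n_frames, step):
    """Project the reference image, roll it a few pixels eastward,
    and repeat. m is an index map of shape (h, w, 2)."""
    frames = []
    ref = imr.copy()
    for _ in range(n_frames):
        frames.append(ref[m[..., 0], m[..., 1]])
        ref = np.roll(ref, step, axis=1)
    return frames

imr = np.arange(24).reshape(2, 4, 3)
m = np.stack(np.indices((2, 4)), axis=-1)   # identity map for the demo
frames = rotating_frames(imr, m, 3, step=1)
# imageio.mimsave(giffile, frames, fps=25)
# pygifsicle.optimize(giffile)
```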

Here is the rather unsurprising result for the Fahey projection:-

The result for the Transverse Mercator projection is more interesting as it shows the Northern hemisphere rotating anti-clockwise around the North pole and the Southern hemisphere rotating clockwise around the South pole.

A map produced as above is a very useful encapsulation of a projection and can be used to rapidly create projected images from reference images. Grieger saved his map as a text file, but this needs to be read line by line to create the requisite mapping array. A map can be saved in a more usable format as a 16-bit grayscale PNG image. Firstly, we revise the above description of map creation to create a map array of shape (2, h, w) for a w × h projected image. This allows an even more succinct projection statement: imp1 = imr1[m[0], m[1]]. My Python code now creates the map without explicit looping by exclusively using array operations. The pyproj call p(x, y, inverse = True) accepts x and y as arrays.
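A loop-free map builder might look like the sketch below. The equirectangular stand-in for p(x, y, inverse = True) and the linear pixel-to-coordinate scaling are assumptions made so the example runs on its own:

```python
import numpy as np

def build_map(inverse, hp, wp, hr, wr, xmin, xmax, ymin, ymax):
    """Create a (2, hp, wp) index map using array operations only.
    `inverse` must accept arrays, as pyproj's p(x, y, inverse=True)
    does, and return longitude/latitude in degrees."""
    jp, ip = np.meshgrid(np.arange(wp), np.arange(hp))
    x = xmin + (jp + 0.5) / wp * (xmax - xmin)   # pixel centre -> x
    y = ymax - (ip + 0.5) / hp * (ymax - ymin)   # pixel centre -> y
    lon, lat = inverse(x, y)
    ir = ((90.0 - lat) / 180.0 * hr).astype(np.intp).clip(0, hr - 1)
    jr = ((lon + 180.0) / 360.0 * wr).astype(np.intp).clip(0, wr - 1)
    return np.stack([ir, jr])

# Equirectangular identity stand-in: lon = x, lat = y.
m = build_map(lambda x, y: (x, y), 4, 8, 4, 8, -180, 180, -90, 90)
# The projection statement is then simply imp1 = imr1[m[0], m[1]]
```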

OpenCV cannot save a two-channel grayscale image, so the map needs to be reshaped by m.reshape(2 * h, w) to effectively double the height before saving. Note that numpy.reshape() gives a new shape to an array without changing its data. After reading a saved map, whose height is now 2h, it is restored to its original shape by m.reshape(2, h, w). PNG images are saved with lossless data compression. A typical map of shape (2, 4320, 4320) is 74,649,600 bytes in memory but 10,873,214 bytes on disk. There is not usually much to see when viewing these map images, as the pixel values are less than the maximum dimension of the reference image: for example 5400, compared with the maximum value 65535 of a 16-bit grayscale pixel. Out of interest, for viewing purposes only, the map can be scaled up, in this example by a factor of 10:-
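The reshape round trip can be checked in a few lines; the cv2.imwrite / cv2.imread calls are shown as comments so the sketch stays self-contained, and the map contents are random placeholder values:

```python
import numpy as np

h, w = 6, 4
m = np.random.randint(0, 5400, size=(2, h, w)).astype(np.uint16)

flat = m.reshape(2 * h, w)            # stack the two planes vertically
# cv2.imwrite("map.png", flat)        # save as 16-bit grayscale PNG
# flat = cv2.imread("map.png", cv2.IMREAD_UNCHANGED)
restored = flat.reshape(2, flat.shape[0] // 2, w)

# reshape never copies or alters pixel values, so the map survives intact
assert (restored == m).all()
```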

Map projections have been considered as representations of the Earth's globe, but they are equally applicable in another familiar context: the sphere of vision. The external world as perceived with one eye can be regarded as being painted on the inside of a sphere and seen from its centre. With our eyes we just look and see, but as a record we can take photographs. A photograph effectively uses a type of projection of the sphere of vision which depends on the type of camera and its lens:-

- A normal camera with a rectilinear lens (at any zoom level) takes a Gnomonic projection.
- A normal camera with a fisheye lens takes a projection depending on the lens.
- A panoramic camera takes a Central Cylindrical projection (which is also used when stitching separate images).
- A 360 degree camera may have back-to-back fisheye lenses, each taking a Stereographic projection. By default, these are converted by the camera to an Equirectangular projection.

Christopher B. Jones, 2024-06-11