## Forward Map Projections

I have been exploring the production of map projections of the globe in Python. This is purely a programming exercise, as existing sites already catalogue the large variety of map projections. High-resolution coastline coordinates can be downloaded as land polygons or coastline linestrings from Data Derived from OpenStreetMap in Shapefile format and extracted using the Python PyShp library. Some projection formulae can be obtained from John P. Snyder, Map Projections - A Working Manual (transcribed by Mircea Neacsu); the Mercator projection, for example:- $x = R\left(\lambda - \lambda_0\right)$ $y = R\ln \left(\tan \left(\pi / 4 + \phi / 2\right)\right)$ where R is the radius of the sphere at the scale of the map as drawn, and latitude $$\phi$$ and longitude $$\lambda$$ are given in radians, as is the central meridian $$\lambda_0$$. The coastline can be drawn by projecting each point of each coastline linestring according to the formulae and plotting the $$x$$ and $$y$$ values in an image with the Pillow ImageDraw library.
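As a sketch of this forward step, the Mercator formulae can be implemented directly (a minimal version, assuming a unit sphere; the PyShp extraction and Pillow drawing calls are omitted):

```python
import math

def mercator_xy(lon_deg, lat_deg, R=1.0, lon0_deg=0.0):
    """Forward Mercator: longitude/latitude in degrees -> map coordinates."""
    lam = math.radians(lon_deg)
    lam0 = math.radians(lon0_deg)
    phi = math.radians(lat_deg)
    x = R * (lam - lam0)
    y = R * math.log(math.tan(math.pi / 4 + phi / 2))
    return x, y
```

Each linestring point from the shapefile would be passed through this function and then scaled to pixel coordinates for ImageDraw.line.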

More complicated projections can be performed with pyproj; the Peirce Quincuncial projection, for example:-

p = Proj('+proj=peirce_q +lon_0=25 +shape=square')

This approach does have some drawbacks:-

• It is slow and inefficient: the coastline linestrings contain 73,361,080 points, most of which project onto pixels that have already been drawn.
• When the central meridian differs from 0°, lines crossing the edges of the projected image may need to be split.
• The minimum line thickness of one pixel obscures the shape of intricate coastlines. It would be better to draw filled polygons with no outline, but the polygon for Antarctica contains the South pole, which projects to infinity in the Mercator projection.
• If parallels or meridians are not straight lines, it can be hard to draw the graticule.

## The Grieger Triptychial Projection

The complicated specification for this amazing projection fails for some reason in pyproj:-

+proj=pipeline +step +proj=ob_tran +o_proj=peirce_q +o_lat_p=-45 +o_lon_p=45 +o_type=horizontal +o_scrollx=-0.25 +step +proj=affine +s11=-1 +s12=0 +s21=0 +s22=-1

Fortunately, Björn Grieger helpfully provides tables for performing his projection. For each pixel in the projected image, the tables specify the latitude and longitude corresponding to that pixel. The pixel value at that latitude and longitude can be obtained from an equirectangular reference image. The tables effectively specify an inverse projection:-

$$\left(\lambda ,\phi\right) = f^{-1}\left(x,y\right)$$ instead of $$\left(x,y\right) = f\left(\lambda ,\phi \right)$$

Using this approach (without the tables) solves many of the above drawbacks.

## Inverse Map Projections

A 5400×2700 equirectangular reference image is first prepared from the shapefile land polygons using the identity mapping $$f^{-1}\left(x,y\right) = \left(x,y\right)$$ (suitably scaled).

A 5400×2700 reference image from NASA's Blue Marble collection, as suggested by Grieger, can be used instead. Hovering the mouse over this and subsequent images switches to a reference image (world.topo.bathy.200407.3x5400x2700.png) downloaded from that site.

The inverse projection proceeds by calling an inverse function to obtain the latitude and longitude for each pixel in the projected image. The corresponding pixel value is obtained from the equirectangular reference image as in the Grieger triptychial projection. For the Mercator projection, the inverse formulae are:- $\lambda = x/ R + \lambda_0$ $\phi = \arctan \left(\sinh \left(y/ R\right)\right)$ This is the resulting projected image:-
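A minimal sketch of the inverse Mercator step (unit sphere assumed; the scaling from pixel indices to $$x$$, $$y$$ and back to reference-image indices is omitted here):

```python
import math

def mercator_inverse(x, y, R=1.0, lon0=0.0):
    """Inverse Mercator: map coordinates -> longitude/latitude in radians."""
    lam = x / R + lon0
    phi = math.atan(math.sinh(y / R))
    return lam, phi
```

This exactly undoes the forward formula, so the pixel at $$(x, y)$$ is filled from the reference image at the returned $$(\lambda, \phi)$$.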

Using the above reference image, the resulting Grieger triptychial projection is:-

Note that the graticule which was drawn on the reference image has thickened in places, due to the local distortion imposed by the projection. This is not altogether undesirable, as it performs a similar role to Tissot's indicatrix in showing local distortions due to map projection. For the Transverse Mercator projection (on the unit sphere), the inverse formulae are $D = y + \phi_0$ $\lambda = \lambda_0 + \arctan \left(\sinh \left(x\right)/ \cos \left(D\right)\right)$ $\phi = \arcsin \left(\sin \left(D\right)/ \cosh \left(x\right)\right)$ In Python, it is important to implement the $$\lambda$$ formula as

λ = λ0 + atan2(sinh(x), cos(D))
to ensure that the resulting angle is in the correct quadrant.
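A sketch of these inverse formulae with the atan2 form (unit sphere; the default values for $$\lambda_0$$ and $$\phi_0$$ are illustrative):

```python
import math

def tmerc_inverse(x, y, lam0=0.0, phi0=0.0):
    """Inverse spherical Transverse Mercator."""
    D = y + phi0
    # atan2 keeps lambda in the correct quadrant when cos(D) < 0
    lam = lam0 + math.atan2(math.sinh(x), math.cos(D))
    phi = math.asin(math.sin(D) / math.cosh(x))
    return lam, phi
```

For example, with x = 0 and D beyond π/2 the atan2 form returns a λ on the far side of the globe, where atan alone would place it in the wrong quadrant.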

This mapping from the reference image to the projected image is similar to "nearest" resampling when resizing an image, and may not be smooth. A smoother projected image can be obtained by mapping to a larger projected image and resizing it using a high-quality Lanczos filter.

This programming exercise has effectively just become a plotting front-end for pyproj. A few tricks are needed:-
• The dimensions of a projection cannot be obtained directly from pyproj. They can often be obtained by projecting points known to be on the boundary such as the central meridian ± 180° and the central parallel ± 90°.
• Some projections are not rectangular and should be surrounded by a black background. The inverse projection of a background point in pyproj sometimes gives (inf, inf), which indicates that the corresponding pixel should be black. Sometimes, however, it gives a latitude and longitude which project elsewhere on the map. These cases can be detected by only plotting points $$(x, y)$$ where $$p(p(x, y, inverse=True)) = (x, y)$$.
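That round-trip test can be vectorised over a whole pixel grid. A sketch with NumPy, written against a generic forward/inverse pair (with pyproj these would be fwd = p and inv = lambda x, y: p(x, y, inverse=True); here a NumPy Mercator pair stands in so the example is self-contained):

```python
import numpy as np

def valid_mask(fwd, inv, X, Y, tol=1e-6):
    """True where inverse-then-forward returns to (X, Y), i.e. the
    pixel genuinely lies inside the projection's image of the globe."""
    lon, lat = inv(X, Y)
    Xr, Yr = fwd(lon, lat)
    return (np.isfinite(Xr) & np.isfinite(Yr)
            & (np.abs(Xr - X) < tol) & (np.abs(Yr - Y) < tol))

# Stand-in Mercator pair (unit sphere) for illustration
fwd = lambda lon, lat: (lon, np.log(np.tan(np.pi / 4 + lat / 2)))
inv = lambda x, y: (x, np.arctan(np.sinh(y)))

X, Y = np.meshgrid(np.linspace(-3, 3, 7), np.linspace(-2, 2, 5))
mask = valid_mask(fwd, inv, X, Y)
```

Every Mercator pixel is valid, so here the mask is all True; with a non-rectangular projection the False entries mark the black background.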

Similar projections can be produced by PyGMT, although it cannot currently use the Blue Marble collection. The List of map projections at Wikipedia uses images created with the Geocart map projection software, which costs US\$240.00 for a Personal License.

Here are some more projections created using pyproj:-

Eckert I +proj=eck1
Gall +proj=gall
Interrupted Goode Homolosine +proj=igh
Natural Earth +proj=natearth
Quartic Authalic +proj=qua_aut
Stereographic +proj=stere +lat_0=90
This is conventionally drawn inside a circular boundary such as the equator, but is continued here to fill a rectangle.
Times +proj=times
Fahey +proj=fahey
Foucaut +proj=fouc
Tobler-Mercator +proj=tobmerc
Azimuthal Equidistant +proj=aeqd
Lambert Azimuthal Equal Area +proj=laea
Oblique Mercator +proj=omerc +lon_1=0 +lat_1=51 +lon_2=151 +lat_2=-34 +no_rot
This projection is useful for showing long narrow strips on the globe. It is conformal, unlike most other cylindrical projections. Here it shows (when cropped) what would be seen under a great circle flight from London to Sydney.
The uncropped projection using this ETOPO Global Relief Model is:-

## Rotating the Globe

Creation of a projected image as a NumPy array of shape (hp, wp, 3) from a reference image consisting of an array of shape (hr, wr, 3) proceeds as follows, for 0 <= ip < hp, 0 <= jp < wp:-

1. coordinates x and y are derived from jp and ip
2. longitude and latitude λ and φ are obtained from p(x, y, inverse = True)
3. ir and jr are derived from φ and λ
4. Pixel [ip, jp] in the projected image is copied from pixel [ir, jr] in the reference image

The mapping to the projected image from the reference image in step 4 can be stored in an array m of shape (hp, wp, 2) by setting m[ip, jp] = [ir, jr]. This creates a map similar to the tables provided by Grieger.
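The four steps and the map array can be sketched as follows (a minimal version that assumes an equirectangular-style scaling of the projected image's pixel grid to $$x \in [-\pi, \pi]$$, $$y \in [-\pi/2, \pi/2]$$; a real projection would supply its own inverse function and extents):

```python
import math
import numpy as np

def build_map(inverse, hp, wp, hr, wr):
    """Steps 1-3 for every pixel, stored as m[ip, jp] = [ir, jr]."""
    m = np.zeros((hp, wp, 2), dtype=np.intp)
    for ip in range(hp):
        for jp in range(wp):
            # 1. x, y from jp, ip
            x = (jp + 0.5) / wp * 2 * math.pi - math.pi
            y = math.pi / 2 - (ip + 0.5) / hp * math.pi
            # 2. longitude, latitude from the inverse projection
            lam, phi = inverse(x, y)
            # 3. reference-image indices from phi and lam
            jr = int((lam + math.pi) / (2 * math.pi) * wr) % wr
            ir = min(max(int((math.pi / 2 - phi) / math.pi * hr), 0), hr - 1)
            m[ip, jp] = [ir, jr]
    return m

# Step 4: copy pixels through the map (identity inverse = equirectangular)
identity = lambda x, y: (x, y)
m = build_map(identity, 4, 8, 4, 8)
imr = np.arange(4 * 8 * 3).reshape(4, 8, 3)
imp = np.zeros_like(imr)
for ip in range(4):
    for jp in range(8):
        imp[ip, jp] = imr[m[ip, jp][0], m[ip, jp][1]]
```

With the identity inverse and equal image sizes the map is the identity, so the projected image equals the reference image.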

The projected image imp can then be recreated more rapidly from the reference image imr by using the map to set

imp1[ip, jp] = imr1[m[ip, jp][0], m[ip, jp][1]] for 0 <= ip < hp, 0 <= jp < wp.

On my PC this still takes about 16 seconds, compared to 100 seconds for the original projection. It is slowed down by the explicit looping over ip and jp in the Python code. Trying to find a way of letting NumPy perform the looping, I asked ChatGPT:-
imr1 is a numpy array of shape (2700, 5400, 3).
m is a numpy array of shape (3054, 4320, 2).
What is the most efficient python code to create a numpy array imp1 of shape (3054, 4320, 3),
where imp1[ip, jp] = imr1[m[ip, jp][0], m[ip, jp][1]]
It came back with the amazingly good answer:-
To create a numpy array imp1 of shape (3054, 4320, 3) where each element imp1[ip, jp] is set to imr1[m[ip, jp][0], m[ip, jp][1]], you can use numpy's advanced indexing capabilities. This will allow you to efficiently perform the required mapping without the need for explicit loops. Here is how you can do it: imp1 = imr1[m[..., 0], m[..., 1]]
That one-liner executes in a fraction of a second! The advanced indexing capabilities of NumPy were news to me.
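A small self-contained check of that advanced-indexing one-liner against the explicit loop (tiny, illustrative array sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
imr1 = rng.integers(0, 256, size=(27, 54, 3), dtype=np.uint8)  # tiny "reference"
m = np.empty((30, 43, 2), dtype=np.intp)                       # tiny map
m[..., 0] = rng.integers(0, 27, size=(30, 43))                 # row indices
m[..., 1] = rng.integers(0, 54, size=(30, 43))                 # column indices

# Explicit loop (slow)
looped = np.empty((30, 43, 3), dtype=np.uint8)
for ip in range(30):
    for jp in range(43):
        looped[ip, jp] = imr1[m[ip, jp][0], m[ip, jp][1]]

# Advanced indexing (fast): one line, no Python loop
imp1 = imr1[m[..., 0], m[..., 1]]
```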

This provided the means to rapidly create a sequence of projections of the rotating globe. A copy of the reference image imr1 can be rolled a few pixels to the right using imr1 = numpy.roll(imr1, step, axis = 1) after projecting it as above for each image in the sequence. This can then be converted to a GIF using the imageio library: imageio.mimsave(giffile, sequence, fps=25) and optimised using the pygifsicle library: optimize(giffile).
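The frame loop can be sketched as follows (only the NumPy part is shown, with toy sizes; the imageio.mimsave and pygifsicle optimize calls are as described above):

```python
import numpy as np

def rolling_frames(imr1, m, n_frames, step):
    """Project, roll the reference image, repeat: one frame per rotation step."""
    frames = []
    for _ in range(n_frames):
        frames.append(imr1[m[..., 0], m[..., 1]])  # project via the map
        imr1 = np.roll(imr1, step, axis=1)         # rotate the globe a few pixels
    return frames
    # then: imageio.mimsave(giffile, frames, fps=25) and optimize(giffile)

# Toy demo: identity map on a tiny 2x4 "reference image"
ii, jj = np.meshgrid(np.arange(2), np.arange(4), indexing="ij")
m = np.stack([ii, jj], axis=-1)
imr1 = np.arange(2 * 4 * 3).reshape(2, 4, 3)
frames = rolling_frames(imr1, m, 3, 1)
```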

Here is the rather unsurprising result for the Fahey projection:-

The result for the Transverse Mercator projection is more interesting as it shows the Northern hemisphere rotating anti-clockwise around the North pole and the Southern hemisphere rotating clockwise around the South pole.

## Saving the Map

A map produced as above is a very useful encapsulation of a projection and can be used to rapidly create projected images from reference images. Grieger saved his map as a text file, but this needs to be read line by line to create the requisite mapping array. A map can be saved in a more usable format as a 16-bit grayscale PNG image. Firstly, we revise the above description of map creation to create a map array of shape (2, h, w) for a w × h projected image. This allows an even more succinct projection statement: imp1 = imr1[m[0], m[1]]. My Python code now creates the map without explicit looping by exclusively using array operations. The pyproj call p(x, y, inverse = True) accepts x and y as arrays.

OpenCV cannot save a two-channel grayscale image, so the map needs to be reshaped by m.reshape(2 * h, w) to effectively double the height before saving. Note that numpy.reshape() gives a new shape to an array without changing its data. After reading a saved map, it is restored to its original shape by m.reshape(2, h // 2, w). PNG images are saved with lossless data compression. A typical map of shape (2, 4320, 4320) is 74,649,600 bytes in memory but 10,873,214 bytes on disk. There is not usually much to see when viewing these map images as the pixel values are less than the maximum dimension of the reference image, for example 5400 compared with the maximum value 65535 of a 16-bit grayscale pixel. Out of interest, for viewing purposes only, the map can be scaled up, in this example by a factor of 10:-
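The reshape round trip can be sketched with NumPy alone (the OpenCV calls are shown as comments, with a hypothetical filename):

```python
import numpy as np

h, w = 6, 5
m = np.arange(2 * h * w, dtype=np.uint16).reshape(2, h, w)  # toy (2, h, w) map

# Double the height so the two channels stack into one grayscale image
flat = m.reshape(2 * h, w)
# cv2.imwrite("map.png", flat)   # saved as a lossless 16-bit grayscale PNG

# After reading back (flat = cv2.imread("map.png", cv2.IMREAD_UNCHANGED)):
restored = flat.reshape(2, flat.shape[0] // 2, w)
```

Because numpy.reshape only gives a new shape without changing the data, the restored map is identical to the original.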

## The Sphere of Vision

Map projections have been considered as representations of the Earth's globe, but they are equally applicable in another familiar context: the sphere of vision. The external world as perceived with one eye can be regarded as being painted on the inside of a sphere and seen from its centre. With our eyes we just look and see, but as a record we can take photographs. A photograph effectively uses a type of projection of the sphere of vision which depends on the type of camera and its lens:-

• A normal camera with a rectilinear lens (at any zoom level) takes a Gnomonic projection.
• A normal camera with a fisheye lens takes a projection depending on the lens.
• A panoramic camera takes a Central Cylindrical projection (which is also used when stitching separate images).
• A 360 degree camera may have back-to-back fisheye lenses, each taking a Stereographic projection. By default, these are converted by the camera to an Equirectangular projection.
Only the last of these needs any special arrangement for viewing. An equirectangular image can be viewed with a Virtual Reality headset, but may also be viewed with a PC or Mac app such as RICOH THETA or Insta360 STUDIO. With these interactive apps, you can view part of the equirectangular image in a chosen direction to see a gnomonic or mirror ball projection. RICOH THETA calls these Straight and Mirror Ball views, Insta360 STUDIO calls them DeWarp and Crystal Ball. You can also use the handy Online 360° Panorama Viewer hosted by renderstuff which shows similar straight views. Here, for example, is an equirectangular image taken by Carlton Wright, with two straight views from RICOH THETA of parts of the image:-
For a very wide view, equivalent to a photograph with a very wide-angle lens, the stretching at the sides can be extreme:-

It is then better to use the mirror ball view:-

Here is a panorama created by my Python program using a Central Cylindrical projection and a Fahey projection:-

This image can be automatically uploaded to the renderstuff Online 360° Panorama Viewer.

Exploiting the equivalence of the globe and the sphere of vision, these apps can also be used to view reference images like the ETOPO Global Relief Model:-

You don't actually need to use any of the above-mentioned apps to view 360° equirectangular images: there is excellent support for them in Google Photos, which also has good editing tools to enhance your images. On a PC, you access it in your browser, on a smartphone or tablet there is a Google Photos app.

1. Create an album with title "360"
2. Drag and drop 360° equirectangular images into the album.

The images appear in the album with an icon indicating that they are to be viewed in Google Photos with the Panorama Viewer when clicked on. This icon only appears when the image XMP metadata includes some special parameters, including UsePanoramaViewer True. If this metadata is missing for some reason (the image was not created in a 360° camera, for example), it can be added using the ExifFixer utility. You can use the mouse to rotate and zoom the view as in the above-mentioned apps. Only a straight view is supported. Toggle to full-screen and back using F11.

In the Google Photos app on an iPhone, the same images can be viewed, rotated, and zoomed using finger movements. Press the image to show it with or without icons. There is also a compass icon which moves the image using the iPhone's gyroscope. These images are ideal for mirroring to a big-screen TV using Apple TV.

Google Photos limits the zooming out of these images as they would suffer extreme wide-angle distortion at the edges. If you want to see wide views it is better to create a central cylindrical panorama from a 360° equirectangular image. This can be done in Hugin, or with an ffmpeg command
ffmpeg -i Parliament.jpg -vf v360=e:cylindrical:h_fov=360:v_fov=115,scale=w=iw/2:h=ih -y -update true output.jpg
This produces the same output as the above panorama created by my Python program. It can be copied into Google Photos when it will not have the icon but can still be explored with the same mouse movements. This approach is recommended for 360° equirectangular images where there is not much to see overhead or underneath.

A 360 degree camera with back-to-back fisheye lenses may also provide a RAW image showing the original dual fisheye images which were stitched by firmware in the camera, like this image taken with the RICOH THETA Z1:- When a fisheye lens is used on a normal camera, however, an original fisheye image is obtained, like this one from the excellent review of the Laowa 4mm f/2.8 Circular Fisheye lens by Klaus Schroiff at OpticalLimits. Such an image needs to be converted to equirectangular format before it can be viewed in one of the 360 degree apps. This could be done using a panorama stitching app like PTGui or Hugin, but it is easier to use the ffmpeg v360 filter. In a Windows cmd window, the command
ffmpeg -i P6300019.jpg -vf v360=fisheye:e:ih_fov=190:iv_fov=190 -y -update true output.jpg
gave this result:-

With a 180° field of view, the hemisphere would have projected to a square, but the 190° field of view has extended the sides of the square. This can now be viewed in the Online 360° Panorama Viewer. For a fisheye photograph taken with the camera pointing up or down (this one was pointing slightly down), the pitch parameter can be used to get a straight horizon with perpendicular verticals:-

ffmpeg -i P6300019.jpg -vf v360=fisheye:e:ih_fov=190:iv_fov=190:pitch=-5 -y -update true output.jpg

It is not documented exactly which fisheye projection ffmpeg uses, but it is almost certainly the Equidistant projection. It also supports the Stereographic and Equisolid formats which are mentioned in the Fisheye lens article. The effective projection of this particular lens is also unknown; establishing it would take analysis of an image of concentric circles. The ffmpeg fisheye projection, however, does seem to give an acceptable result for this lens. As mentioned above, it may be preferable to create a central cylindrical panorama instead of an equirectangular projection, which can be done by this ffmpeg command
ffmpeg -i P6300019.jpg -vf v360=fisheye:cylindrical:ih_fov=190:iv_fov=190:h_fov=190:v_fov=90,scale=w=iw/2:h=ih -y -update true output.jpg
giving this result:-

Here is another fisheye image taken by Klaus Schroiff with the lens pointing upwards:-

It is possible to create a 360° cylindrical panorama from the periphery of such an image using the ffmpeg v360 filter

ffmpeg -i P6080006.jpg -vf v360=fisheye:cylindrical:ih_fov=190:iv_fov=190:h_fov=360:v_fov=100:pitch=-89,scale=w=iw/2:h=ih -y -update true output.jpg

The pitch parameter was needed because the lens axis was not exactly vertical. Some yaw may be needed as well. The best values for these parameters can be determined by trial and error. This may be the most natural way to use a fisheye lens, as our sphere of vision is usually an extended hemisphere above us, with the underneath hidden below the surface of the earth.

It may be thought desirable to convert a panorama (taken with a panoramic camera or stitched from separate images) to equirectangular format for viewing. This can be done, but it is unnecessary, as such panoramas can already be viewed as they are in Google Photos.

Cameras like the Insta360 EVO take a side-by-side pair of 180° fisheye images which the camera converts to left and right hemispheres in an equirectangular projection, like this example from Carlton Wright:-

On a smartphone, you would use the Insta360 EVO app connected to the camera by WiFi to view the image with VR glasses. On a PC, the easiest way to view these images interactively in various directions without distortion is to use StereoPhoto Maker. In SPM, use Menu -> Edit -> 360/180 image -> 360/180 Image Preview (Ctrl+Alt P). You can zoom in and out using the mouse wheel or drag the image in various directions with the mouse. Press the space bar to capture a view for saving.

Stereo pairs taken with two front-facing fisheye lenses have an inherent problem in that horizontal disparity decreases away from the centre whereas vertical disparity increases. This is only corrected in rectified views looking straight ahead. In practice, however, this effect may not be noticeable in sideways views if there are no nearby objects close to the left or right edges. If you want to see the entire width, you can convert the stereo pair from equirectangular to a central cylindrical panorama using the above-mentioned ffmpeg command for a 360° image. This might exaggerate vertical disparity effects. You can convert these side-by-side stereo pairs to colour anaglyphs in StereoPhoto Maker, copy them to Google Photos, and then mirror them to your big-screen TV for viewing with red/cyan glasses.

A photograph taken with a normal rectilinear wide-angle lens can also be viewed looking in different directions. This photograph of Graz in Austria was taken with a 22mm lens on a Canon M50 camera:- It can be converted to equirectangular format with Hugin. Select it with Edit->Add Image... and Hugin gets the Lens type, Focal length, and Focal length multiplier from the EXIF data. On the Stitcher tab, click on the Calculate field of view and Calculate optimal size buttons, then Stitch!, giving this:- Hugin adds the XMP metadata needed to indicate to compatible panorama viewers that this is part of a full 360° equirectangular projection:-

It will then be viewed correctly in viewers such as Google Photos.

You can also view these original photographs in different directions in my pan+tilt+crop app from 2007 which I have recovered from backup.

Christopher B. Jones, 2024-07-12