While writing the blog post on AI-realistic photos, I wanted to include one of my 360-degree photos. In the past, I have done this by embedding code snippets from commercial services. However, those tend to disappear or move, so I wanted to check (again) if I can do it natively on my own server instead. And, lo and behold, now, in 2025, it is finally possible to do this easily with regular web tools!


Testing different solutions

Both ChatGPT and CoPilot helped me along the way, suggesting various complicated approaches. Both pointed me to Hugin, an open-source panorama-stitching tool for manually aligning and blending multiple images. However, I wanted a terminal-based method for batch-processing multiple files.

CoPilot then set up an approach using various Python libraries, including OpenCV, which seemed like overkill for this job. In the end, I realized that I had already been converting video files using FFmpeg’s v360 filter. Even though FFmpeg is primarily a video tool, it can also convert images (after all, video processing is essentially image processing, frame by frame).

Tweaking FFmpeg

The command is as simple as this:

ffmpeg -i input.insp \
  -vf "v360=input=dfisheye:output=e:ih_fov=204:iv_fov=204:w=8000:h=4000" \
  -frames:v 1 output.jpg -y

Where:

  • input=dfisheye = dual-fisheye side-by-side format
  • output=e = equirectangular projection
  • ih_fov=204:iv_fov=204 = horizontal and vertical field of view (in degrees)
  • w=8000:h=4000 = 2:1 aspect ratio (standard equirectangular)

It converts an Insta360 .insp file in dual-fisheye format to an equirectangular .jpg file. Hooray!
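
Since the whole point was batch processing, the same filter can be wrapped in a simple loop over a folder of files. This is just a sketch under my own naming conventions (the equirect output folder is my choice, nothing Insta360-specific):

```shell
# Convert every .insp file in the current directory to an
# equirectangular JPEG, keeping the original base name.
mkdir -p equirect
for f in *.insp; do
  out="equirect/${f%.insp}.jpg"   # photo.insp -> equirect/photo.jpg
  ffmpeg -i "$f" \
    -vf "v360=input=dfisheye:output=e:ih_fov=204:iv_fov=204:w=8000:h=4000" \
    -frames:v 1 "$out" -y
done
```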

Insta360 files are basically JPEG

The first lesson learned is that those cryptic .insp files are just JPEGs. I guess Insta360 has made some tweaks inside the file so that it doesn’t adhere to the JPEG specification and therefore cannot formally be called JPEG. For any practical purpose, though, the files can be renamed to .jpg and processed as such.
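
An easy way to convince yourself of this is to look at the first two bytes: every JPEG starts with the SOI marker, 0xFF 0xD8. A quick check (assuming a file named input.insp):

```shell
# Print the first two bytes of the file in hex; JPEGs start with ffd8.
od -An -tx1 -N2 input.insp | tr -d ' \n'
# If it prints ffd8, a plain copy/rename is enough for most tools:
cp input.insp input.jpg
```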

Finding the right field of view

What took the most time after figuring out the approach was finding the right field of view (FoV). The default FoV is 180°, but that is clearly wrong:

[Image: the panorama rendered with the default 180° FoV]

CoPilot argued that 190° is the optimal FoV setting for Insta360 cameras, but that is certainly not the case for my Insta360 X2:

[Image: the panorama rendered with 190° FoV]

It turns out that the best setting (at least for my Insta360 X2) is 204° horizontally and vertically.

[Image: the panorama rendered with 204° FoV]

I found this by trial and error, running a script that tests various options:

for fov in 200 202 204 206 208; do
  ffmpeg -i input.jpg \
    -vf "v360=input=dfisheye:output=e:ih_fov=$fov:iv_fov=$fov:w=8000:h=4000" \
    -frames:v 1 "test_fov${fov}.jpg" -y 2>/dev/null
done

It is not perfect; the stitching is suboptimal at the bottom of the image, so I will continue to fine-tune. CoPilot suggests testing different horizontal and vertical FoV combinations:

  • ih_fov=202:iv_fov=202 (less expansion, tighter seam)
  • ih_fov=206:iv_fov=206 (more expansion)
  • ih_fov=204:iv_fov=206 (different H/V FoV for uneven capture)

That will have to wait for another day, though, since I was also curious about how to make the embedding work.

Embedding with Pannellum

The embedding is done with a Hugo shortcode that displays an equirectangular panoramic image with interactive viewing, powered by the Pannellum library. The content of the shortcode file, saved as panorama.html in /layouts/shortcodes/, is:

{{ $id := default (printf "pano-%s" (sha1 (.Get "src"))) (.Get "id") }}
{{ $height := default "520px" (.Get "height") }}
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/pannellum@2.5.6/build/pannellum.css">
<script src="https://cdn.jsdelivr.net/npm/pannellum@2.5.6/build/pannellum.js"></script>

<div id="{{ $id }}" style="width:100%;height:{{ $height }}"></div>
<script>
  pannellum.viewer("{{ $id }}", {
    type: "equirectangular",
    panorama: "{{ .Get "src" }}",
    autoLoad: true,
    autoRotate: {{ default 0 (.Get "autorotate") }},
    pitch: {{ default 0 (.Get "pitch") }},
    yaw: {{ default 0 (.Get "yaw") }},
    hfov: {{ default 100 (.Get "hfov") }},
    compass: {{ default false (.Get "compass") }}
  });
</script>

Embedding is then as simple as placing this one-liner in your Hugo blog post:

{{< panorama src="input.jpg" height="500px" yaw=180 hfov=90 autorotate=2 compass=true >}}

The parameters are:

| Parameter | Type | Description |
|---|---|---|
| src | string | Path to the panoramic image file (relative to the static directory) |
| height | string | Display height of the panorama viewer (CSS value, e.g. “500px”) |
| yaw | number | Initial horizontal rotation angle in degrees (0 = default orientation) |
| hfov | number | Horizontal field of view in degrees (100 = default zoom level; lower = more zoomed in) |
| pitch | number | Initial vertical angle in degrees (0 = level) |
| autorotate | number | Auto-rotation speed in degrees per second (0 = off) |
| compass | boolean | Show a compass indicator (default false) |

This renders as an interactive panorama viewer embedded directly in the post.

The best thing is that it’s based solely on web standards. Nothing special!

Conclusion

I have been taking 360° images for more than a decade now, but have struggled to find good ways to show them online. In the past, I relied on Ricoh’s embeds, but that service has since gone away. Now, I work with both GoPro and Insta360 cameras, but neither offers (good) solutions for showing and embedding images. Finally, I have found a way to easily convert native Insta360 images to equirectangular format and display them online.

I am particularly satisfied with having FFmpeg as the backend here. It is a super-powerful, versatile tool that I use daily for many other things. Adding 360° images to the mix works great.

I haven’t solved the minor stitching issues, though. CoPilot keeps arguing that I should try Hugin to fix rotation and misalignment. My current solution, however, works well enough and can be batch-processed, and that is what matters most for now.