Extracting ramblings
· 4 min read · January 15, 2026 · #tech #linux

Over a number of years (in fact, my word, decades) I have recorded my GPS tracks and taken photos when I go walking.1 In that time I've used a range of apps with associated cloud services to capture and render this data, predominantly EveryTrail and, until recently, Ramblr (who I think may have bought EveryTrail, or been bought by whoever bought EveryTrail).2
I moved off EveryTrail a decade ago because, if I recall correctly, it went a bit rubbish: it was slow, unreliable, the website sucked and so on. I extracted my trails and did nothing with them—they just sat there on my laptop of the moment.
I then switched to Ramblr but have found over the last year or two that it too is becoming enshittified :( Specifically, the app has become less reliable on iOS at obtaining and maintaining a GPS fix, and the website is now frankly appalling in both design and performance. At the same time they've reduced the features available to free-tier users and started trying to push me toward a pay-for tier.
So the time had come to move off Ramblr too. I found a basic “record my GPS track” app that did little more than that, and certainly did not support publishing tracks to a cloud service or similar. Which then left two other problems: how to extract ~185 tracks from Ramblr, and how to usefully display them with any associated photos.
I'll deal with the latter point in a separate post, but first: how to extract my data from Ramblr. What follows is what I did for Ramblr specifically, but the general approach is probably more widely applicable. I did this in Firefox; similar capabilities exist in other major browsers. Begin by logging into the website https://ramblr.com/, then turn on Settings > Web Developer Tools and select the Network tab.
Obtain a list of trip_ids
We need to capture a list of the URLs corresponding to each track, or “trip” as Ramblr calls them. I did so by visiting My Archive, selecting the second page of trips, and then examining the requests and responses in the Network tab. The request that returns the trip list turned out to be ‘https://www.ramblr.com/trip/search/mysearchTrip_1767801082017’. Right-click on that URL and select Copy Value > Copy as cURL. Paste the result into a terminal and execute it to verify: you should see some JSON content being fetched. This is the request the base My Archive page makes to fetch each page listing your trips.
curl -v 'https://www.ramblr.com/trip/search/mysearchTrip_1767790586587' \
-X POST \
-H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:146.0) Gecko/20100101 Firefox/146.0' \
-H 'Accept: application/json, text/javascript, */*; q=0.01' \
-H 'Accept-Language: en-GB,en;q=0.5' \
-H 'Accept-Encoding: gzip, deflate, br, zstd' \
-H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' \
-H 'X-Requested-With: XMLHttpRequest' \
-H 'Origin: https://www.ramblr.com' \
-H 'Connection: keep-alive' \
-H 'Referer: https://www.ramblr.com/web/mymap/trip/38704/1125687/' \
-H 'Cookie: cq_language=english; cq_session=a93967til7hr...; cq_latlng=...; _ga_...; _ga=...; cq_RamblrCustomer=jEOh...; cq_RainierUser=aPWx...; cq_RainierPresence=aPWx...; cq_presence=eSa8...; mainParam=' \
-H 'Sec-Fetch-Dest: empty' \
-H 'Sec-Fetch-Mode: cors' \
-H 'Sec-Fetch-Site: same-origin' \
-H 'Pragma: no-cache' \
-H 'Cache-Control: no-cache' \
--data-raw 'text_srch=&activity=&difficulty=&distance_st=&distance_ed=&sort=10&bounds=&page=2&mode=1&user_id=&user_uid=38704&unit=1'

Getting the full list then requires a bit of command-line scripting and a little experimentation to confirm which parameter needs to be replaced to fetch each page of the My Archive tab; here it is the page field in the POST data, e.g.,
for p in $(seq 1 14) ; do   # 14 pages of trips in my archive; adjust to suit
PAGE=$p
curl -v --no-clobber -o trips.json 'https://www.ramblr.com/trip/search/mysearchTrip_1767790586587' \
-X POST \
-H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:146.0) Gecko/20100101 Firefox/146.0' \
-H 'Accept: application/json, text/javascript, */*; q=0.01' \
-H 'Accept-Language: en-GB,en;q=0.5' \
-H 'Accept-Encoding: gzip, deflate, br, zstd' \
-H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' \
-H 'X-Requested-With: XMLHttpRequest' \
-H 'Origin: https://www.ramblr.com' \
-H 'Connection: keep-alive' \
-H 'Referer: https://www.ramblr.com/web/mymap/trip/38704/1125687/' \
-H 'Cookie: cq_language=english; cq_session=a93967til7hr...; cq_latlng=...; _ga_...; _ga=...; cq_RamblrCustomer=jEOh...; cq_RainierUser=aPWx...; cq_RainierPresence=aPWx...; cq_presence=eSa8...; mainParam=' \
-H 'Sec-Fetch-Dest: empty' \
-H 'Sec-Fetch-Mode: cors' \
-H 'Sec-Fetch-Site: same-origin' \
-H 'Pragma: no-cache' \
-H 'Cache-Control: no-cache' \
--data-raw "text_srch=&activity=&difficulty=&distance_st=&distance_ed=&sort=10&bounds=&page=$PAGE&mode=1&user_id=&user_uid=38704&unit=1"
done

That results in a set of files named trips.json, trips.json.1, trips.json.2 and so on. Finally, process those and extract the trip IDs so we can subsequently fetch each one:
cat trips.json* | jq '.result.trip_list[].trip_id' | tr -d '"' | sort -n | uniq > trip_ids
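The jq filter above implies that each page of JSON carries a result.trip_list array whose entries each hold a trip_id (field names inferred from the filter itself; the real payload contains plenty of other fields too). Before fetching anything it is worth a quick sanity check that the extracted list looks right:

# expect roughly one line per trip (around 185 in my case), each a bare numeric ID
wc -l trip_ids
head -n 3 trip_ids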
Fetching each trip

Next we need the URL that will allow us to fetch a single trip. Find it by clicking on a track and then on the Download GPX button, and spot the resulting URL in the web developer tools Network tab: in this case it was https://www.ramblr.com/gpx/downloadGPX/TRIP_ID?v=1767801576157.
Finally, we iterate over the list of trip IDs to download each GPX track:
cat trip_ids | while read i; do
curl -o $i.gpx "https://www.ramblr.com/gpx/downloadGPX/$i?v=1767801576157" \
-H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:146.0) Gecko/20100101 Firefox/146.0' \
-H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
-H 'Accept-Language: en-GB,en;q=0.5' \
-H 'Accept-Encoding: gzip, deflate, br, zstd' \
-H 'Connection: keep-alive' \
-H 'Referer: https://www.ramblr.com/web/mymap/trip/38704' \
-H 'Cookie: cq_language=english; cq_latlng=...; _ga_...=GS2.1....; _ga=GA1.1....; cq_RamblrCustomer=jEOh...; cq_RainierUser=aPWx...; cq_RainierPresence=aPWx...; cq_presence=eSa8...; mainParam=; cq_session=e4po...' \
-H 'Upgrade-Insecure-Requests: 1' \
-H 'Sec-Fetch-Dest: iframe' \
-H 'Sec-Fetch-Mode: navigate' \
-H 'Sec-Fetch-Site: same-origin' \
-H 'Priority: u=4' \
-H 'Pragma: no-cache' \
-H 'Cache-Control: no-cache' \
-H 'TE: trailers'
done

And, if you want the KML, which includes more metadata and photo thumbnails:
cat trip_ids | while read i; do
curl -o "$i.kml" "https://www.ramblr.com/gpx/downloadKML/$i?v=1767803311189" \
-H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:146.0) Gecko/20100101 Firefox/146.0' \
-H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
-H 'Accept-Language: en-GB,en;q=0.5' \
-H 'Accept-Encoding: gzip, deflate, br, zstd' \
-H 'Connection: keep-alive' \
-H 'Referer: https://www.ramblr.com/web/mymap/trip/38704' \
-H 'Cookie: cq_language=english; cq_latlng=...; _ga_...=GS2.1....; _ga=GA1.1....; cq_RamblrCustomer=jEOh...; cq_RainierUser=aPWx...; cq_RainierPresence=aPWx...; cq_presence=eSa8...; mainParam=; cq_session=e4po...' \
-H 'Upgrade-Insecure-Requests: 1' \
-H 'Sec-Fetch-Dest: iframe' \
-H 'Sec-Fetch-Mode: navigate' \
-H 'Sec-Fetch-Site: same-origin' \
-H 'Priority: u=4' \
-H 'Pragma: no-cache' \
-H 'Cache-Control: no-cache' \
-H 'TE: trailers'
done

And voila! You should now have the GPX and/or KML tracks corresponding to each of your trips in Ramblr. A very similar process often works for extracting data from other hosted platforms: examine the page in the web developer tools, find the relevant URL to fetch, work out the relevant parameters, and do some command-line scripting to iterate.
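If you want to reuse the pattern elsewhere, a rough skeleton looks something like the below. Note that example.com, /api/items, the cookie value and the jq paths are all placeholders rather than any real service; the dev tools tell you what actually goes there for the site you are extracting from.

# generic skeleton: list paginated items, extract their IDs, then fetch each one
# example.com, /api/items and the field names are placeholders for whatever the
# target site's web developer tools actually reveal
COOKIE='session=PASTE_FROM_YOUR_BROWSER'

for p in $(seq 1 10); do
    curl -s "https://example.com/api/items?page=$p" -H "Cookie: $COOKIE"
done | jq -r '.items[].id' | sort -n | uniq > item_ids

while read -r i; do
    curl -s -o "$i.json" "https://example.com/api/items/$i" -H "Cookie: $COOKIE"
done < item_ids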
1. Something that I enjoy doing and wish I did more.
2. Changes may also have occurred because I switch platforms more often than some, for both devices and laptops, mixing between iOS, Android, Windows and Linux. Currently Linux and iOS. Maybe a Linux phone is in my future, who knows.