I'm trying to download a large scientific dataset (specifically: [link removed]) which is too large for Dropbox to build a zip file from. At some depth in the directory hierarchy the subdirectories become small enough to zip individually, but I really don't want to manually find those points, download the zips one by one, and reconstruct the original hierarchy by hand.
It seems like this should be supported by the Python API, and I'm happy (enough) to write code for it rather than risk making mistakes doing it manually. However, although you can get metadata for the shared link, I can't see how to move from there to any sort of folder object that I can recurse into. Similarly, calling sharing_mount_folder with the id returned by get_shared_link_metadata doesn't seem to work.
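For reference, this is roughly the sort of code I was hoping to be able to write, based on my reading of the SDK docs: list the shared link's contents with files_list_folder plus a SharedLink argument, recurse into subfolder paths relative to the link root, and fetch each file with sharing_get_shared_link_file_to_file. I may well be misusing these calls (the recursion step in particular is what I can't get working), and the access token, link URL, and destination path below are placeholders:

```python
import os


def local_path_for(dest_root, link_relative_path):
    """Map a path like '/sub/dir/file.txt' (relative to the shared-link
    root) onto dest_root, preserving the directory hierarchy."""
    return os.path.join(dest_root, *link_relative_path.strip("/").split("/"))


def download_shared_link(token, shared_url, dest_root):
    # pip install dropbox; imported lazily so the pure helper above
    # can be used/tested without the SDK installed.
    import dropbox

    dbx = dropbox.Dropbox(token)
    link = dropbox.files.SharedLink(url=shared_url)

    def walk(path):
        # For shared links, `path` is (as I understand it) interpreted
        # relative to the link root, with "" meaning the root itself.
        result = dbx.files_list_folder(path=path, shared_link=link)
        while True:
            for entry in result.entries:
                entry_path = path + "/" + entry.name
                if isinstance(entry, dropbox.files.FolderMetadata):
                    # The recursion step I can't get working:
                    walk(entry_path)
                elif isinstance(entry, dropbox.files.FileMetadata):
                    target = local_path_for(dest_root, entry_path)
                    os.makedirs(os.path.dirname(target), exist_ok=True)
                    dbx.sharing_get_shared_link_file_to_file(
                        target, shared_url, path=entry_path)
            if not result.has_more:
                break
            result = dbx.files_list_folder_continue(result.cursor)

    walk("")


# Example invocation (placeholders, not real credentials):
# download_shared_link("MY_ACCESS_TOKEN",
#                      "https://www.dropbox.com/sh/...",
#                      "./dataset")
```

If something like this is supposed to work, I'd be glad to know which of the calls or arguments I've got wrong.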
(Aside: it would be nice if the ids, or the API documentation, contained some namespace information, so that you could tell directly what operations are valid for a given id.)
Clicking Copy to Dropbox seems to time out, and I presume it wouldn't work anyway, given that I don't have enough quota to hold a copy of the data. (Aside: I can't even tell how much quota I would need...)
Is there anything that I'm missing, or is this just an impossible task?