Currently, 'proc_offset' in the swarm save logic is stored as an HDF5 attribute. Because attributes live in the object header, which has a hard size limit, this triggers an 'object header message is too large' error for highly decomposed models. Instead of storing this data in an h5py attribute, refactor the code to store 'proc_offset' as a small HDF5 dataset in the output file (using collective operations to mimic attribute behavior).
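The write-side change might look like the sketch below. All names here (`save_proc_offsets`, `FakeGroup`) are illustrative, not the actual Underworld API; a minimal in-memory stand-in replaces a real `h5py.Group` so the sketch runs without h5py or MPI installed.

```python
class FakeGroup:
    """Mimics the small subset of the h5py.Group API used here."""
    def __init__(self):
        self.attrs = {}     # legacy storage location (attribute)
        self.datasets = {}  # new storage location (dataset)

    def create_dataset(self, name, data=None, dtype=None):
        # A real h5py.Group.create_dataset returns an h5py.Dataset;
        # a plain list is enough for this sketch.
        self.datasets[name] = list(data)
        return self.datasets[name]

def save_proc_offsets(group, offsets):
    # Old approach (fails for many ranks, since the attribute is stored
    # in the object header, which has a fixed size limit):
    #     group.attrs["proc_offset"] = offsets
    # New approach: a small dataset has no such limit. With parallel
    # h5py, dataset creation is a collective operation, so every rank
    # must call it with identical arguments, which mimics the old
    # "every rank sees the same attribute" semantics.
    group.create_dataset("proc_offset", data=offsets, dtype="int64")

grp = FakeGroup()
save_proc_offsets(grp, [0, 100, 250, 400])
print(grp.datasets["proc_offset"])  # [0, 100, 250, 400]
```

In real parallel h5py code the only structural change is swapping the `attrs` assignment for a `create_dataset` call executed by all ranks; the data itself is unchanged.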
Tasks:
- Replace the h5py attribute storage of 'proc_offset' with a dataset in both _swarmvariable.py and _swarm.py.
- Ensure the dataset is created and written collectively (all ranks participate with identical arguments) at the appropriate stage of the save routine.
- Add compatibility handling for reading older files (where 'proc_offset' is stored as an attribute).
- Update documentation if relevant.
- Add tests verifying the change resolves the large attribute error.
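The backward-compatibility task above could be handled by a reader that prefers the new dataset layout and falls back to the legacy attribute. `load_proc_offsets` and `MockGroup` are hypothetical names; the mock stands in for an `h5py.Group` so the sketch is runnable as-is.

```python
class MockGroup:
    """Stand-in for h5py.Group supporting 'in' and item access."""
    def __init__(self, attrs=None, datasets=None):
        self.attrs = attrs or {}
        self._datasets = datasets or {}

    def __contains__(self, name):   # mirrors `name in h5py_group`
        return name in self._datasets

    def __getitem__(self, name):    # mirrors `h5py_group[name]`
        return self._datasets[name]

def load_proc_offsets(group):
    # New-format files: 'proc_offset' is a dataset in the group.
    if "proc_offset" in group:
        return list(group["proc_offset"])
    # Legacy files: 'proc_offset' was written as an attribute.
    if "proc_offset" in group.attrs:
        return list(group.attrs["proc_offset"])
    raise KeyError("file contains no 'proc_offset' record")

new_file = MockGroup(datasets={"proc_offset": [0, 10, 20]})
old_file = MockGroup(attrs={"proc_offset": [0, 5, 15]})
print(load_proc_offsets(new_file))  # [0, 10, 20]
print(load_proc_offsets(old_file))  # [0, 5, 15]
```

A test along these lines (writing an offsets array for many simulated ranks and reading it back from both layouts) would also serve as the regression check for the large-attribute error.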
Background: See parent issue where large mesh decompositions lead to an HDF5 error due to excessive attribute size.
Impact: Prevents crashes on large-scale saves and improves compatibility with HDF5 limits.