diff --git a/docs/getting_started/index.md b/docs/getting_started/index.md
index f643d5bfc..3f3955d67 100644
--- a/docs/getting_started/index.md
+++ b/docs/getting_started/index.md
@@ -9,7 +9,3 @@ Getting started with parcels is easy; here you will find:
 🎓 Output tutorial
 📖 Conceptual workflow
 ```
-
-```{note}
-TODO: Add links to Reference API in quickstart tutorial and concepts explanation
-```
diff --git a/docs/getting_started/tutorial_quickstart.md b/docs/getting_started/tutorial_quickstart.md
index 038302c22..30cbe082b 100644
--- a/docs/getting_started/tutorial_quickstart.md
+++ b/docs/getting_started/tutorial_quickstart.md
@@ -41,8 +41,8 @@ ds_fields
 As we can see, the reanalysis dataset contains eastward velocity `uo`, northward velocity `vo`, potential temperature
 (`thetao`) and salinity (`so`) fields.
 
-These hydrodynamic fields need to be stored in a `parcels.FieldSet` object. Parcels provides tooling to parse many types
-of models or observations into such a `parcels.FieldSet` object. Here, we use `FieldSet.from_copernicusmarine()`, which
+These hydrodynamic fields need to be stored in a {py:obj}`parcels.FieldSet` object. Parcels provides tooling to parse many types
+of models or observations into such a `parcels.FieldSet` object. Here, we use {py:obj}`FieldSet.from_copernicusmarine()`, which
 recognizes the standard names of a velocity field:
 
 ```{code-cell}
@@ -61,10 +61,10 @@ velocity = ds_fields.isel(time=0, depth=0).plot.quiver(x="longitude", y="latitud
 Now that we have created a `parcels.FieldSet` object from the hydrodynamic data, we need to provide our second input:
 the virtual particles for which we will calculate the trajectories.
 
-We need to create a `parcels.ParticleSet` object with the particles' initial time and position. The `parcels.ParticleSet`
+We need to create a {py:obj}`parcels.ParticleSet` object with the particles' initial time and position. The `parcels.ParticleSet`
 object also needs to know about the `FieldSet` in which the particles "live". Finally, we need to specify the type of
-`parcels.Particle` we want to use. The default particles have `time`, `z`, `lat`, and `lon`, but you can easily add
-other `Variables` such as size, temperature, or age to create your own particles to mimic plastic or an [ARGO float](../user_guide/examples/tutorial_Argofloats.ipynb).
+{py:obj}`parcels.ParticleClass` we want to use. The default particles have `time`, `z`, `lat`, and `lon`, but you can easily add
+other {py:obj}`parcels.Variable`s such as size, temperature, or age to create your own particles to mimic plastic or an [ARGO float](../user_guide/examples/tutorial_Argofloats.ipynb).
 
 ```{code-cell}
 # Particle locations and initial time
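The custom-variable claim in the hunk above ("you can easily add other `Variable`s") is easiest to grasp with a concrete snippet. The sketch below is illustrative only: it uses the v3-style `add_variable`/`Variable` pattern, and the exact v4 spelling behind {py:obj}`parcels.ParticleClass` is an assumption, not something shown in this diff.

```python
import numpy as np

import parcels

# Illustrative sketch (API spelling assumed, v3-style): extend the default
# particle (which carries `time`, `z`, `lat`, and `lon`) with an extra
# `age` variable, initialised to zero and stored as a 32-bit float.
AgeParticle = parcels.Particle.add_variable(
    parcels.Variable("age", dtype=np.float32, initial=0.0)
)
```

A class built this way would be passed to `parcels.ParticleSet` in place of the default particle type, at the point where the tutorial's next code-cell constructs its `ParticleSet`.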
@@ -90,13 +90,9 @@ ax.scatter(lon,lat,s=40,c='w',edgecolors='r');
 ## Compute: `Kernel`
 
 After setting up the input data and particle start locations and times, we need to specify what calculations to do with
-the particles. These calculations, or numerical integrations, will be performed by what we call a `Kernel`, operating on
+the particles. These calculations, or numerical integrations, will be performed by what we call a {py:obj}`parcels.Kernel`, operating on
 all particles in the `ParticleSet`. The most common calculation is the advection of particles through the velocity field.
 
-Parcels comes with a number of standard kernels, from which we will use the Runge-Kutta advection kernel `AdvectionRK2`:
-
-```{note}
-TODO: link to a list of included kernels
-```
+Parcels comes with a number of common kernels in {py:obj}`parcels.kernels`, from which we will use the Runge-Kutta advection kernel {py:obj}`parcels.kernels.AdvectionRK2`:
 
 ```{code-cell}
 kernels = [parcels.kernels.AdvectionRK2]
@@ -105,7 +101,7 @@ kernels = [parcels.kernels.AdvectionRK2]
 ## Prepare output: `ParticleFile`
 
 Before starting the simulation, we must define where and how frequent we want to write the output of our simulation.
-We can define this in a `ParticleFile` object:
+We can define this in a {py:obj}`parcels.ParticleFile` object:
 
 ```{code-cell}
 output_file = parcels.ParticleFile("output-quickstart.zarr", outputdt=np.timedelta64(1, "h"))
@@ -117,7 +113,7 @@ the `outputdt` argument so that it captures the smallest timescales of our inter
 ## Run Simulation: `ParticleSet.execute()`
 
-Finally, we can run the simulation by _executing_ the `ParticleSet` using the specified list of `kernels`.
+Finally, we can run the simulation by _executing_ the `ParticleSet` using the specified list of `kernels`. This is done using the {py:meth}`parcels.ParticleSet.execute()` method.
 
 Additionally, we need to specify:
 
 - the `runtime`: for how long we want to simulate particles.
@@ -125,10 +121,6 @@ Additionally, we need to specify:
   integration scheme, the accuracy of our simulation will depend on `dt`. Read [this notebook](https://github.com/Parcels-code/10year-anniversary-session2/blob/8931ef69577dbf00273a5ab4b7cf522667e146c5/advection_and_windage.ipynb)
   to learn more about numerical accuracy.
 
-```{note}
-TODO: add Michaels 10-years Parcels notebook to the user guide
-```
-
 ```{code-cell}
 :tags: [hide-output]
 pset.execute(
diff --git a/docs/user_guide/index.md b/docs/user_guide/index.md
index 8bbb41651..a8975a06a 100644
--- a/docs/user_guide/index.md
+++ b/docs/user_guide/index.md
@@ -17,10 +17,6 @@ The tutorials written for Parcels v3 are currently being updated for Parcels v4.
 
 ## How to
 
-```{note}
-TODO: Add links to Reference API throughout
-```
-
 ```{note}
 **Migrate from v3 to v4** using [this migration guide](v4-migration.md)
 ```
diff --git a/src/parcels/_core/basegrid.py b/src/parcels/_core/basegrid.py
index af64e7109..05649436d 100644
--- a/src/parcels/_core/basegrid.py
+++ b/src/parcels/_core/basegrid.py
@@ -60,6 +60,7 @@ def search(self, z: float, y: float, x: float, ei=None) -> dict[str, tuple[int,
         - Unstructured grid: {"Z": (zi, zeta), "FACE": (fi, bcoords)}
 
         Where:
+        - index (int): The cell position of the particles along the given axis
         - barycentric_coordinates (float or np.ndarray): The coordinates
           defining the particles positions within the grid cell. For structured grids, this
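Since `search` returns a mapping of axis names to `(index, barycentric_coordinates)` tuples, a short consumer sketch may help. Only the dictionary shape comes from the docstring in the hunk above; the function name is hypothetical, and `grid` is assumed to be an instance of any `BaseGrid` subclass.

```python
def unpack_unstructured_search(grid, z: float, y: float, x: float):
    """Illustrative only: unpack the mapping returned by BaseGrid.search()
    for an unstructured grid, following the documented return shape."""
    result = grid.search(z, y, x)  # {"Z": (zi, zeta), "FACE": (fi, bcoords)}
    zi, zeta = result["Z"]         # vertical cell index and its barycentric coordinate
    fi, bcoords = result["FACE"]   # horizontal face index and barycentric coordinates
    return zi, zeta, fi, bcoords
```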
diff --git a/src/parcels/_datasets/__init__.py b/src/parcels/_datasets/__init__.py
index 6e57e6950..8a82843b1 100644
--- a/src/parcels/_datasets/__init__.py
+++ b/src/parcels/_datasets/__init__.py
@@ -11,8 +11,7 @@
 Developers, note that you should only add functions that create idealised datasets to this subpackage if they are
 (a) quick to generate, and (b) only use dependencies already shipped with Parcels. No data files should be added to
 this subpackage. Real world data files should be added to the `Parcels-code/parcels-data` repository on GitHub.
 
-Parcels Dataset Philosophy
--------------------------
+**Parcels Dataset Philosophy**
 
 When adding datasets, there may be a tension between wanting to add a specific dataset or wanting to add machinery
 to generate completely parameterised datasets (e.g., with different grid resolutions, with different ranges, with
 different datetimes etc.).
 
 There are trade-offs to both approaches:
@@ -31,8 +30,7 @@
 Sometimes we may want to test Parcels against a whole range of datasets varying
 in a certain way - to ensure Parcels works as expected. For these, we should add
 machinery to create generated datasets.
 
-Structure
---------
+**Structure**
 
 This subpackage is broken down into structured and unstructured parts. Each of these have common submodules:
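To make the "machinery to create generated datasets" option concrete: a parameterised builder can be as small as the sketch below. All function, variable, and coordinate names are illustrative rather than taken from `parcels._datasets`; the only constraints echoed from the docstring are that the dataset is quick to generate and uses nothing beyond dependencies already shipped with Parcels (numpy and xarray).

```python
import numpy as np
import xarray as xr


def idealised_channel(nx: int = 20, ny: int = 10) -> xr.Dataset:
    """Illustrative generated dataset: uniform eastward channel flow on an
    nx-by-ny lat/lon grid, built on the fly with no stored data files."""
    lon = np.linspace(0.0, 10.0, nx)
    lat = np.linspace(0.0, 5.0, ny)
    u = np.ones((ny, nx))   # uniform eastward velocity
    v = np.zeros((ny, nx))  # no northward velocity
    return xr.Dataset(
        {"U": (("lat", "lon"), u), "V": (("lat", "lon"), v)},
        coords={"lon": lon, "lat": lat},
    )
```

Sweeping `nx` and `ny` (or adding range and datetime parameters) then yields the "whole range of datasets varying in a certain way" that the docstring describes, without committing any data files.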