Parcels documentation

Welcome to the documentation of Parcels. This page provides detailed documentation for each method, class and function. The documentation corresponds to the latest conda release; for newer documentation, see the docstrings in the code.

See http://www.oceanparcels.org for general information on the Parcels project, including how to install and use it.

parcels.particleset module

parcels.particleset.ParticleSet

alias of parcels.particlesets.particlesetsoa.ParticleSetSOA

class parcels.particlesets.particlesetsoa.ParticleSetSOA(fieldset=None, pclass=<class 'parcels.particle.JITParticle'>, lon=None, lat=None, depth=None, time=None, repeatdt=None, lonlatdepth_dtype=None, pid_orig=None, **kwargs)[source]

Bases: parcels.particlesets.baseparticleset.BaseParticleSet

Container class for storing particles and executing kernels over them.

Please note that this currently only supports fixed-size particle sets.

Parameters
  • fieldset – parcels.fieldset.FieldSet object from which to sample velocity. While fieldset=None is supported, this will throw a warning as it breaks most Parcels functionality

  • pclass – Optional parcels.particle.JITParticle or parcels.particle.ScipyParticle object that defines custom particle

  • lon – List of initial longitude values for particles

  • lat – List of initial latitude values for particles

  • depth – Optional list of initial depth values for particles. Default is 0m

  • time – Optional list of initial time values for particles. Default is fieldset.U.grid.time[0]

  • repeatdt – Optional interval (in seconds) on which to repeat the release of the ParticleSet

  • lonlatdepth_dtype – Floating precision for lon, lat, depth particle coordinates. It is either np.float32 or np.float64. Default is np.float32 if fieldset.U.interp_method is ‘linear’ and np.float64 if the interpolation method is ‘cgrid_velocity’

  • pid_orig – Optional list of (offsets for) the particle IDs

  • partitions – List of cores on which to distribute the particles for MPI runs. Default: None, in which case particles are distributed automatically on the processors

Other Variables can be initialised using further arguments (e.g. v=… for a Variable named ‘v’)

Kernel(pyfunc, c_include='', delete_cfiles=True)[source]

Wrapper method to convert a pyfunc into a parcels.kernel.Kernel object based on fieldset and ptype of the ParticleSet

Parameters

delete_cfiles – Boolean whether to delete the C-files after compilation in JIT mode (default is True)

ParticleFile(*args, **kwargs)[source]

Wrapper method to initialise a parcels.particlefile.ParticleFile object from the ParticleSet

add(particles)[source]

Add particles to the ParticleSet. Note that this is an incremental add: the particles are added to the ParticleSet on which this method is called.

Parameters

particles – Another ParticleSet containing particles to add to this one.

Returns

The current ParticleSet

cstruct()[source]

Returns the ctypes mapping of the combined collection cstruct and the fieldset cstruct. This depends on the specific structure in question.

data_indices(variable_name, compare_values, invert=False)[source]

Get the indices of all particles where the value of variable_name equals (one of) compare_values.

Parameters
  • variable_name – Name of the variable to check.

  • compare_values – Value or list of values to compare to.

  • invert – Whether to invert the selection. I.e., when True, return all indices that do not equal (one of) compare_values.

Returns

Numpy array of indices that satisfy the test.

density(field_name=None, particle_val=None, relative=False, area_scale=False)[source]

Method to calculate the density of particles in a ParticleSet from their locations, through a 2D histogram.

Parameters
  • field_name – Optional parcels.field.Field object to calculate the histogram on. Default is fieldset.U

  • particle_val – Optional numpy-array of values to weigh each particle with, or string name of particle variable to use weigh particles with. Default is None, resulting in a value of 1 for each particle

  • relative – Boolean to control whether the density is scaled by the total weight of all particles. Default is False

  • area_scale – Boolean to control whether the density is scaled by the area (in m^2) of each grid cell. Default is False

property error_particles

Get an iterator over all particles that are in an error state.

Returns

Collection iterator over error particles.

classmethod from_field(fieldset, pclass, start_field, size, mode='monte_carlo', depth=None, time=None, repeatdt=None, lonlatdepth_dtype=None)[source]

Initialise a ParticleSet with particle locations randomly drawn according to the distribution of a field

Parameters
  • fieldset – parcels.fieldset.FieldSet object from which to sample velocity

  • pclass – parcels.particle.JITParticle or parcels.particle.ScipyParticle object that defines custom particle

  • start_field – Field for initialising particles stochastically (horizontally) according to the presented density field.

  • size – Initial size of particle set

  • mode – Type of random sampling. Currently only ‘monte_carlo’ is implemented

  • depth – Optional list of initial depth values for particles. Default is 0m

  • time – Optional start time value for particles. Default is fieldset.U.time[0]

  • repeatdt – Optional interval (in seconds) on which to repeat the release of the ParticleSet

  • lonlatdepth_dtype – Floating precision for lon, lat, depth particle coordinates. It is either np.float32 or np.float64. Default is np.float32 if fieldset.U.interp_method is ‘linear’ and np.float64 if the interpolation method is ‘cgrid_velocity’

classmethod from_particlefile(fieldset, pclass, filename, restart=True, restarttime=None, repeatdt=None, lonlatdepth_dtype=None, **kwargs)[source]

Initialise the ParticleSet from a netcdf ParticleFile. This creates a new ParticleSet based on locations of all particles written in a netcdf ParticleFile at a certain time. Particle IDs are preserved if restart=True

Parameters
  • fieldset – parcels.fieldset.FieldSet object from which to sample velocity

  • pclass – parcels.particle.JITParticle or parcels.particle.ScipyParticle object that defines custom particle

  • filename – Name of the particlefile from which to read initial conditions

  • restart – Boolean to signal if pset is used for a restart (default is True). In that case, Particle IDs are preserved.

  • restarttime – time at which the Particles will be restarted. Default is the last time written. Alternatively, restarttime could be a time value (including np.datetime64) or a callable function such as np.nanmin. The last is useful when running with dt < 0.

  • repeatdt – Optional interval (in seconds) on which to repeat the release of the ParticleSet

  • lonlatdepth_dtype – Floating precision for lon, lat, depth particle coordinates. It is either np.float32 or np.float64. Default is np.float32 if fieldset.U.interp_method is ‘linear’ and np.float64 if the interpolation method is ‘cgrid_velocity’

classmethod monte_carlo_sample(start_field, size, mode='monte_carlo')[source]

Converts a starting field into a Monte Carlo sample of lons and lats.

Parameters

start_field – parcels.field.Field object for initialising particles stochastically (horizontally) according to the presented density field.

Returns

list(lon), list(lat)

property num_error_particles

Get the number of particles that are in an error state.

Returns

The number of error particles.

populate_indices()[source]

Pre-populate guesses of particle xi/yi indices using a kdtree.

This is only intended for curvilinear grids, where the initial index search may be quite expensive.

remove_booleanvector(indices)[source]

Method to remove particles from the ParticleSet, based on an array of booleans

remove_indices(indices)[source]

Method to remove particles from the ParticleSet, based on their indices

set_variable_write_status(var, write_status)[source]

Method to set the write status of a Variable

Parameters
  • var – Name of the variable (string)

  • write_status – Write status of the variable (True, False or ‘once’)

show(with_particles=True, show_time=None, field=None, domain=None, projection=None, land=True, vmin=None, vmax=None, savefile=None, animation=False, **kwargs)[source]

Method to ‘show’ a Parcels ParticleSet

Parameters
  • with_particles – Boolean whether to show particles

  • show_time – Time at which to show the ParticleSet

  • field – Field to plot under particles (either None, a Field object, or ‘vector’)

  • domain – dictionary (with keys ‘N’, ‘S’, ‘E’, ‘W’) defining domain to show

  • projection – type of cartopy projection to use (default PlateCarree)

  • land – Boolean whether to show land. This is ignored for flat meshes

  • vmin – minimum colour scale (only in single-plot mode)

  • vmax – maximum colour scale (only in single-plot mode)

  • savefile – Name of a file to save the plot to

  • animation – Boolean whether result is a single plot, or an animation

to_dict(pfile, time, deleted_only=False)[source]

Convert all Particle data from one time step to a python dictionary.

Parameters
  • pfile – ParticleFile object requesting the conversion

  • time – Time at which to write ParticleSet

  • deleted_only – Flag to write only the deleted Particles

Returns

Two dictionaries: one for all variables to be written each outputdt, and one for all variables to be written once.

parcels.fieldset module

class parcels.fieldset.FieldSet(U, V, fields=None)[source]

Bases: object

FieldSet class that holds hydrodynamic data needed to execute particles

Parameters
  • U – parcels.field.Field object for the zonal velocity

  • V – parcels.field.Field object for the meridional velocity

  • fields – Optional dictionary of additional parcels.field.Field objects to add to the FieldSet

add_constant(name, value)[source]

Add a constant to the FieldSet. Note that all constants are stored as 32-bit floats. While constants can be updated during execution in SciPy mode, they cannot be updated in JIT mode.

Tutorials using fieldset.add_constant: Analytical advection, Diffusion, Periodic boundaries

Parameters
  • name – Name of the constant

  • value – Value of the constant (stored as 32-bit float)

add_constant_field(name, value, mesh='flat')[source]
Wrapper function to add a Field that is constant in space, useful e.g. when using a constant horizontal diffusivity

Parameters
  • name – Name of the parcels.field.Field object to be added

  • value – Value of the constant field (stored as 32-bit float)

  • mesh – String indicating the type of mesh coordinates: ‘flat’ (default) or ‘spherical’

add_field(field, name=None)[source]

Add a parcels.field.Field object to the FieldSet

Parameters
  • field – parcels.field.Field object to be added

  • name – Name under which the Field is stored in the FieldSet; defaults to the name of the Field object

add_periodic_halo(zonal=False, meridional=False, halosize=5)[source]

Add a ‘halo’ to all parcels.field.Field objects in a FieldSet, through extending the Field (and lon/lat) by copying a small portion of the field on one side of the domain to the other.

Parameters
  • zonal – Create a halo in zonal direction (boolean)

  • meridional – Create a halo in meridional direction (boolean)

  • halosize – size of the halo (in grid points). Default is 5 grid points

add_vector_field(vfield)[source]

Add a parcels.field.VectorField object to the FieldSet

Parameters

vfield – parcels.field.VectorField object to be added

advancetime(fieldset_new)[source]

Replace oldest time on FieldSet with new FieldSet

Parameters

fieldset_new – FieldSet snapshot with which the oldest time has to be replaced

computeTimeChunk(time, dt)[source]

Load a chunk of three data time steps into the FieldSet. This is used when the FieldSet uses data imported from netcdf with the default option deferred_load. The loaded time steps are at or immediately before time, and the two time steps immediately following time if dt is positive (and conversely for negative dt)

Parameters
  • time – Time around which the FieldSet chunks are to be loaded. Time is provided as a double, relative to Fieldset.time_origin

  • dt – time step of the integration scheme

classmethod from_b_grid_dataset(filenames, variables, dimensions, indices=None, mesh='spherical', allow_time_extrapolation=None, time_periodic=False, tracer_interp_method='bgrid_tracer', chunksize=None, **kwargs)[source]

Initialises FieldSet object from NetCDF files of B-grid fields.

Parameters
  • filenames – Dictionary mapping variables to file(s). The filepath may contain wildcards to indicate multiple files, or be a list of files. filenames can be a list [files], a dictionary {var:[files]}, a dictionary {dim:[files]} (if lon, lat, depth and/or data are not stored in the same files as the data), or a dictionary of dictionaries {var:{dim:[files]}}. Time values are in filenames[data]

  • variables – Dictionary mapping variables to variable names in the netCDF file(s).

  • dimensions

    Dictionary mapping data dimensions (lon, lat, depth, time, data) to dimensions in the netCDF file(s). Note that dimensions can also be a dictionary of dictionaries if dimension names are different for each variable. U and V velocity nodes are not located at the same position as W velocity and T tracer nodes (see http://www.cesm.ucar.edu/models/cesm1.0/pop2/doc/sci/POPRefManual.pdf ).

    [B-grid cell layout: U,V nodes sit on the four cell corners, U[k,j+1,i],V[k,j+1,i] (north-west), U[k,j+1,i+1],V[k,j+1,i+1] (north-east), U[k,j,i],V[k,j,i] (south-west) and U[k,j,i+1],V[k,j,i+1] (south-east); W[k:k+2,j+1,i+1] and T[k,j+1,i+1] sit at the cell centre.]

    In 2D: U and V nodes are on the cell vertices and interpolated bilinearly, as on an A-grid.

    The T node is at the cell centre and interpolated as constant per cell, as on a C-grid.

    In 3D: U and V nodes are at the middle of the cell's vertical edges; they are interpolated bilinearly (independently of z) within the cell. W nodes are at the centre of the horizontal interfaces; they are interpolated linearly (as a function of z) within the cell. The T node is at the cell centre, and constant per cell.

  • indices – Optional dictionary of indices for each dimension to read from file(s), to allow for reading of subset of data. Default is to read the full extent of each dimension. Note that negative indices are not allowed.

  • fieldtype – Optional dictionary mapping fields to fieldtypes to be used for UnitConverter. (either ‘U’, ‘V’, ‘Kh_zonal’, ‘Kh_meridional’ or None)

  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.

    2. flat: No conversion, lat/lon are assumed to be in m.

  • allow_time_extrapolation – boolean whether to allow for extrapolation (i.e. beyond the last available time snapshot) Default is False if dimensions includes time, else True

  • time_periodic – To loop periodically over the time component of the Field. It is set to either False or the length of the period (either float in seconds or datetime.timedelta object). (Default: False) This flag overrides allow_time_extrapolation and sets it to False

  • tracer_interp_method – Method for interpolation of tracer fields. It is recommended to use ‘bgrid_tracer’ (default). Note that in the case of from_pop() and from_bgrid(), the velocity fields default to ‘bgrid_velocity’

  • chunksize – size of the chunks in dask loading

classmethod from_c_grid_dataset(filenames, variables, dimensions, indices=None, mesh='spherical', allow_time_extrapolation=None, time_periodic=False, tracer_interp_method='cgrid_tracer', gridindexingtype='nemo', chunksize=None, **kwargs)[source]

Initialises FieldSet object from NetCDF files of Curvilinear NEMO fields.

See the documentation on oceanparcels.org for a more detailed explanation of the different methods that can be used for c-grid datasets.

Parameters
  • filenames – Dictionary mapping variables to file(s). The filepath may contain wildcards to indicate multiple files, or be a list of files. filenames can be a list [files], a dictionary {var:[files]}, a dictionary {dim:[files]} (if lon, lat, depth and/or data are not stored in the same files as the data), or a dictionary of dictionaries {var:{dim:[files]}}. Time values are in filenames[data]

  • variables – Dictionary mapping variables to variable names in the netCDF file(s).

  • dimensions

    Dictionary mapping data dimensions (lon, lat, depth, time, data) to dimensions in the netCDF file(s). Note that dimensions can also be a dictionary of dictionaries if dimension names are different for each variable. Watch out: NEMO is discretised on a C-grid: U and V velocities are not located on the same nodes (see https://www.nemo-ocean.eu/doc/node19.html ).

    [C-grid cell layout: V[k,j+1,i+1] on the northern edge, U[k,j+1,i] on the western edge, W[k:k+2,j+1,i+1] and T[k,j+1,i+1] at the cell centre, U[k,j+1,i+1] on the eastern edge, V[k,j,i+1] on the southern edge.]

    To interpolate U, V velocities on the C-grid, Parcels needs to read the f-nodes, which are located on the corners of the cells (for indexing details: https://www.nemo-ocean.eu/doc/img360.png ). In 3D, the depth is the one corresponding to the W nodes.

  • indices – Optional dictionary of indices for each dimension to read from file(s), to allow for reading of subset of data. Default is to read the full extent of each dimension. Note that negative indices are not allowed.

  • fieldtype – Optional dictionary mapping fields to fieldtypes to be used for UnitConverter. (either ‘U’, ‘V’, ‘Kh_zonal’, ‘Kh_meridional’ or None)

  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.

    2. flat: No conversion, lat/lon are assumed to be in m.

  • allow_time_extrapolation – boolean whether to allow for extrapolation (i.e. beyond the last available time snapshot) Default is False if dimensions includes time, else True

  • time_periodic – To loop periodically over the time component of the Field. It is set to either False or the length of the period (either float in seconds or datetime.timedelta object). (Default: False) This flag overrides allow_time_extrapolation and sets it to False

  • tracer_interp_method – Method for interpolation of tracer fields. It is recommended to use ‘cgrid_tracer’ (default). Note that in the case of from_nemo() and from_cgrid(), the velocity fields default to ‘cgrid_velocity’

  • gridindexingtype – The type of gridindexing. Set to ‘nemo’ in FieldSet.from_nemo() See also the Grid indexing documentation on oceanparcels.org

  • chunksize – size of the chunks in dask loading

classmethod from_data(data, dimensions, transpose=False, mesh='spherical', allow_time_extrapolation=None, time_periodic=False, **kwargs)[source]

Initialise FieldSet object from raw data

Parameters
  • data

    Dictionary mapping field names to numpy arrays. Note that at least a ‘U’ and ‘V’ numpy array need to be given, and that the built-in Advection kernels assume that U and V are in m/s

    1. If data shape is [xdim, ydim], [xdim, ydim, zdim], [xdim, ydim, tdim] or [xdim, ydim, zdim, tdim], whichever is relevant for the dataset, use the flag transpose=True

    2. If data shape is [ydim, xdim], [zdim, ydim, xdim], [tdim, ydim, xdim] or [tdim, zdim, ydim, xdim], use the flag transpose=False (default value)

    3. If data has any other shape, you first need to reorder it

  • dimensions – Dictionary mapping field dimensions (lon, lat, depth, time) to numpy arrays. Note that dimensions can also be a dictionary of dictionaries if dimension names are different for each variable (e.g. dimensions[‘U’], dimensions[‘V’], etc).

  • transpose – Boolean whether to transpose data on read-in

  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation, see also this tutorial:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.

    2. flat: No conversion, lat/lon are assumed to be in m.

  • allow_time_extrapolation – boolean whether to allow for extrapolation (i.e. beyond the last available time snapshot) Default is False if dimensions includes time, else True

  • time_periodic – To loop periodically over the time component of the Field. It is set to either False or the length of the period (either float in seconds or datetime.timedelta object). (Default: False) This flag overrides allow_time_extrapolation and sets it to False

classmethod from_mitgcm(filenames, variables, dimensions, indices=None, mesh='spherical', allow_time_extrapolation=None, time_periodic=False, tracer_interp_method='cgrid_tracer', chunksize=None, **kwargs)[source]

Initialises FieldSet object from NetCDF files of MITgcm fields. All parameters and keywords are exactly the same as for FieldSet.from_nemo(), except that gridindexingtype is set to ‘mitgcm’ for grids that have the shape:

[C-grid cell layout: V[k,j+1,i] on the northern edge, U[k,j,i] on the western edge, W[k-1:k,j,i] and T[k,j,i] at the cell centre, U[k,j,i+1] on the eastern edge, V[k,j,i] on the southern edge.]

For indexing details: https://mitgcm.readthedocs.io/en/latest/algorithm/algorithm.html#spatial-discretization-of-the-dynamical-equations Note that vertical velocity (W) is assumed positive in the positive z direction (which is upward in MITgcm)

classmethod from_mom5(filenames, variables, dimensions, indices=None, mesh='spherical', allow_time_extrapolation=None, time_periodic=False, tracer_interp_method='bgrid_tracer', chunksize=None, **kwargs)[source]

Initialises FieldSet object from NetCDF files of MOM5 fields.

Parameters
  • filenames – Dictionary mapping variables to file(s). The filepath may contain wildcards to indicate multiple files, or be a list of files. filenames can be a list [files], a dictionary {var:[files]}, a dictionary {dim:[files]} (if lon, lat, depth and/or data are not stored in the same files as the data), or a dictionary of dictionaries {var:{dim:[files]}}. Time values are in filenames[data]

  • variables – Dictionary mapping variables to variable names in the netCDF file(s). Note that the built-in Advection kernels assume that U and V are in m/s

  • dimensions

    Dictionary mapping data dimensions (lon, lat, depth, time, data) to dimensions in the netCDF file(s). Note that dimensions can also be a dictionary of dictionaries if dimension names are different for each variable.

    [B-grid cell layout: U,V nodes sit on the four cell corners, U[k,j+1,i],V[k,j+1,i] (north-west), U[k,j+1,i+1],V[k,j+1,i+1] (north-east), U[k,j,i],V[k,j,i] (south-west) and U[k,j,i+1],V[k,j,i+1] (south-east); W[k-1:k+1,j+1,i+1] and T[k,j+1,i+1] sit at the cell centre.]

    In 2D: U and V nodes are on the cell vertices and interpolated bilinearly, as on an A-grid.

    The T node is at the cell centre and interpolated as constant per cell, as on a C-grid.

    In 3D: U and V nodes are at the middle of the cell's vertical edges; they are interpolated bilinearly (independently of z) within the cell. W nodes are at the centre of the horizontal interfaces, but below the U and V nodes; they are interpolated linearly (as a function of z) within the cell. Note that W is normally directed upward in MOM5, but Parcels requires W in the positive z-direction (downward), so W is multiplied by -1. The T node is at the cell centre, and constant per cell.

  • indices – Optional dictionary of indices for each dimension to read from file(s), to allow for reading of subset of data. Default is to read the full extent of each dimension. Note that negative indices are not allowed.

  • fieldtype – Optional dictionary mapping fields to fieldtypes to be used for UnitConverter. (either ‘U’, ‘V’, ‘Kh_zonal’, ‘Kh_meridional’ or None)

  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation, see also https://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/parcels/examples/tutorial_unitconverters.ipynb:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.

    2. flat: No conversion, lat/lon are assumed to be in m.

  • allow_time_extrapolation – boolean whether to allow for extrapolation (i.e. beyond the last available time snapshot) Default is False if dimensions includes time, else True

  • time_periodic – To loop periodically over the time component of the Field. It is set to either False or the length of the period (either float in seconds or datetime.timedelta object). (Default: False) This flag overrides allow_time_extrapolation and sets it to False

  • tracer_interp_method – Method for interpolation of tracer fields. It is recommended to use ‘bgrid_tracer’ (default). Note that in the case of from_mom5() and from_bgrid(), the velocity fields default to ‘bgrid_velocity’

  • chunksize – size of the chunks in dask loading

classmethod from_nemo(filenames, variables, dimensions, indices=None, mesh='spherical', allow_time_extrapolation=None, time_periodic=False, tracer_interp_method='cgrid_tracer', chunksize=None, **kwargs)[source]

Initialises FieldSet object from NetCDF files of Curvilinear NEMO fields.

See the tutorials on oceanparcels.org for a detailed explanation of the setup for 2D NEMO fields and for 3D NEMO fields.

See the documentation on oceanparcels.org for a more detailed explanation of the different methods that can be used for c-grid datasets.

Parameters
  • filenames – Dictionary mapping variables to file(s). The filepath may contain wildcards to indicate multiple files, or be a list of files. filenames can be a list [files], a dictionary {var:[files]}, a dictionary {dim:[files]} (if lon, lat, depth and/or data are not stored in the same files as the data), or a dictionary of dictionaries {var:{dim:[files]}}. Time values are in filenames[data]

  • variables – Dictionary mapping variables to variable names in the netCDF file(s). Note that the built-in Advection kernels assume that U and V are in m/s

  • dimensions

    Dictionary mapping data dimensions (lon, lat, depth, time, data) to dimensions in the netCDF file(s). Note that dimensions can also be a dictionary of dictionaries if dimension names are different for each variable. Watch out: NEMO is discretised on a C-grid: U and V velocities are not located on the same nodes (see https://www.nemo-ocean.eu/doc/node19.html ).

    [C-grid cell layout: V[k,j+1,i+1] on the northern edge, U[k,j+1,i] on the western edge, W[k:k+2,j+1,i+1] and T[k,j+1,i+1] at the cell centre, U[k,j+1,i+1] on the eastern edge, V[k,j,i+1] on the southern edge.]

    To interpolate U, V velocities on the C-grid, Parcels needs to read the f-nodes, which are located on the corners of the cells (for indexing details: https://www.nemo-ocean.eu/doc/img360.png ). In 3D, the depth is the one corresponding to the W nodes. The gridindexingtype is set to ‘nemo’. See also the Grid indexing documentation on oceanparcels.org

  • indices – Optional dictionary of indices for each dimension to read from file(s), to allow for reading of subset of data. Default is to read the full extent of each dimension. Note that negative indices are not allowed.

  • fieldtype – Optional dictionary mapping fields to fieldtypes to be used for UnitConverter. (either ‘U’, ‘V’, ‘Kh_zonal’, ‘Kh_meridional’ or None)

  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation, see also this tutorial:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.

    2. flat: No conversion, lat/lon are assumed to be in m.

  • allow_time_extrapolation – boolean whether to allow for extrapolation (i.e. beyond the last available time snapshot) Default is False if dimensions includes time, else True

  • time_periodic – To loop periodically over the time component of the Field. It is set to either False or the length of the period (either float in seconds or datetime.timedelta object). (Default: False) This flag overrides allow_time_extrapolation and sets it to False

  • tracer_interp_method – Method for interpolation of tracer fields. It is recommended to use ‘cgrid_tracer’ (default). Note that in the case of from_nemo() and from_cgrid(), the velocity fields default to ‘cgrid_velocity’

  • chunksize – size of the chunks in dask loading. Default is None (no chunking)

classmethod from_netcdf(filenames, variables, dimensions, indices=None, fieldtype=None, mesh='spherical', timestamps=None, allow_time_extrapolation=None, time_periodic=False, deferred_load=True, chunksize=None, **kwargs)[source]

Initialises FieldSet object from NetCDF files

Parameters
  • filenames – Dictionary mapping variables to file(s). The filepath may contain wildcards to indicate multiple files, or be a list of files. filenames can be a list [files], a dictionary {var:[files]}, a dictionary {dim:[files]} (if lon, lat, depth and/or data are not stored in the same files as the data), or a dictionary of dictionaries {var:{dim:[files]}}. Time values are in filenames[data]

  • variables – Dictionary mapping variables to variable names in the netCDF file(s). Note that the built-in Advection kernels assume that U and V are in m/s

  • dimensions – Dictionary mapping data dimensions (lon, lat, depth, time, data) to dimensions in the netCDF file(s). Note that dimensions can also be a dictionary of dictionaries if dimension names are different for each variable (e.g. dimensions[‘U’], dimensions[‘V’], etc).

  • indices – Optional dictionary of indices for each dimension to read from file(s), to allow for reading of subset of data. Default is to read the full extent of each dimension. Note that negative indices are not allowed.

  • fieldtype – Optional dictionary mapping fields to fieldtypes to be used for UnitConverter. (either ‘U’, ‘V’, ‘Kh_zonal’, ‘Kh_meridional’ or None)

  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation, see also this tutorial:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.

    2. flat: No conversion, lat/lon are assumed to be in m.

  • timestamps – list of lists or array of arrays containing the timestamps for each of the files in filenames. Outer list/array corresponds to files, inner array corresponds to indices within files. Default is None if dimensions includes time.

  • allow_time_extrapolation – boolean whether to allow for extrapolation (i.e. beyond the last available time snapshot) Default is False if dimensions includes time, else True

  • time_periodic – To loop periodically over the time component of the Field. It is set to either False or the length of the period (either float in seconds or datetime.timedelta object). (Default: False) This flag overrides allow_time_extrapolation and sets it to False

  • deferred_load – boolean whether to only pre-load data (in deferred mode) or fully load them (default: True). It is advised to deferred-load the data, since Parcels then manages memory better during particle set execution. deferred_load=False is however sometimes necessary for plotting the fields.

  • interp_method – Method for interpolation. Options are ‘linear’ (default), ‘nearest’, ‘linear_invdist_land_tracer’, ‘cgrid_velocity’, ‘cgrid_tracer’ and ‘bgrid_velocity’

  • gridindexingtype – The type of gridindexing. Either ‘nemo’ (default) or ‘mitgcm’ are supported. See also the Grid indexing documentation on oceanparcels.org

  • chunksize – size of the chunks in dask loading. Default is None (no chunking). Can be None or False (no chunking), ‘auto’ (chunking is done in the background, but results in one grid per field individually), or a dict in the format ‘{parcels_varname: {netcdf_dimname : (parcels_dimname, chunksize_as_int)}, …}’, where ‘parcels_dimname’ is one of (‘time’, ‘depth’, ‘lat’, ‘lon’)

  • netcdf_engine – engine to use for netcdf reading in xarray. Default is ‘netcdf’, but in cases where this doesn’t work, setting netcdf_engine=’scipy’ could help
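Since no data files ship with this page, the following sketch only illustrates the shape of the filenames, variables and dimensions arguments (all paths and netCDF variable names are hypothetical); the commented-out call shows how they would be passed:

```python
# Hypothetical paths and netCDF variable names, for illustration only
filenames = {'U': 'ocean_data/U_*.nc',
             'V': 'ocean_data/V_*.nc'}
variables = {'U': 'uo',            # Parcels name -> variable name in the netCDF file
             'V': 'vo'}
dimensions = {'lon': 'longitude',  # Parcels dimension -> name in the netCDF file
              'lat': 'latitude',
              'time': 'time'}

# from parcels import FieldSet
# fieldset = FieldSet.from_netcdf(filenames, variables, dimensions)
```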


classmethod from_parcels(basename, uvar='vozocrtx', vvar='vomecrty', indices=None, extra_fields=None, allow_time_extrapolation=None, time_periodic=False, deferred_load=True, chunksize=None, **kwargs)[source]

Initialises FieldSet data from NetCDF files using the Parcels FieldSet.write() conventions.

Parameters
  • basename – Base name of the file(s); may contain wildcards to indicate multiple files.

  • indices – Optional dictionary of indices for each dimension to read from file(s), to allow for reading of subset of data. Default is to read the full extent of each dimension. Note that negative indices are not allowed.

  • fieldtype – Optional dictionary mapping fields to fieldtypes to be used for UnitConverter. (either ‘U’, ‘V’, ‘Kh_zonal’, ‘Kh_meridional’ or None)

  • extra_fields – Extra fields to read beyond U and V

  • allow_time_extrapolation – boolean whether to allow for extrapolation (i.e. beyond the last available time snapshot) Default is False if dimensions includes time, else True

  • time_periodic – To loop periodically over the time component of the Field. Set to either False or the length of the period (a float in seconds or a datetime.timedelta object). (Default: False) This flag overrides allow_time_extrapolation and sets it to False

  • deferred_load – boolean whether to only pre-load data (in deferred mode) or to fully load them (default: True). Deferred loading is advised, since Parcels then manages memory better during particle set execution. deferred_load=False is, however, sometimes necessary for plotting the fields.

  • chunksize – size of the chunks in dask loading
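The indices parameter described above expects a dictionary per dimension. A minimal sketch, with placeholder dimension names and ranges:

```python
# Hypothetical 'indices' dictionary for reading only a subregion of the
# file(s): each dimension name maps to the list of indices to read.
indices = {
    "lon": list(range(0, 100)),   # first 100 longitude points
    "lat": list(range(50, 150)),  # latitude points 50..149
}

# Negative indices are not allowed, per the documentation above.
assert all(i >= 0 for idx in indices.values() for i in idx)
```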

classmethod from_pop(filenames, variables, dimensions, indices=None, mesh='spherical', allow_time_extrapolation=None, time_periodic=False, tracer_interp_method='bgrid_tracer', chunksize=None, depth_units='m', **kwargs)[source]
Initialises FieldSet object from NetCDF files of POP fields.

It is assumed that the velocities in the POP fields are in cm/s.

Parameters
  • filenames – Dictionary mapping variables to file(s). The filepath may contain wildcards to indicate multiple files, or be a list of files. filenames can be a list [files], a dictionary {var:[files]}, a dictionary {dim:[files]} (if lon, lat, depth and/or data are not stored in the same files as the data), or a dictionary of dictionaries {var:{dim:[files]}}. Time values are in filenames[data]

  • variables – Dictionary mapping variables to variable names in the netCDF file(s). Note that the built-in Advection kernels assume that U and V are in m/s

  • dimensions

    Dictionary mapping data dimensions (lon, lat, depth, time, data) to dimensions in the netCDF file(s). Note that dimensions can also be a dictionary of dictionaries if dimension names are different for each variable. Watch out: POP is discretised on a B-grid: U and V velocity nodes are not co-located with the W velocity and T tracer nodes (see http://www.cesm.ucar.edu/models/cesm1.0/pop2/doc/sci/POPRefManual.pdf ).

    The B-grid cell is laid out as follows: U[k,j+1,i],V[k,j+1,i] and U[k,j+1,i+1],V[k,j+1,i+1] at the upper corners; W[k:k+2,j+1,i+1],T[k,j+1,i+1] at the cell centre; U[k,j,i],V[k,j,i] and U[k,j,i+1],V[k,j,i+1] at the lower corners.

    In 2D: U and V nodes are on the cell vertices and are interpolated bilinearly, as on an A-grid. The T node is at the cell centre and is interpolated constant per cell, as on a C-grid.

    In 3D: U and V nodes are at the middle of the cell vertical edges and are interpolated bilinearly (independently of z) in the cell. W nodes are at the centre of the horizontal interfaces and are interpolated linearly (as a function of z) in the cell. The T node is at the cell centre and is constant per cell. Note that Parcels assumes that the length of the depth dimension (at the W-points) is one larger than the size of the velocity and tracer fields in the depth dimension.

  • indices – Optional dictionary of indices for each dimension to read from file(s), to allow for reading of subset of data. Default is to read the full extent of each dimension. Note that negative indices are not allowed.

  • fieldtype – Optional dictionary mapping fields to fieldtypes to be used for UnitConverter. (either ‘U’, ‘V’, ‘Kh_zonal’, ‘Kh_meridional’ or None)

  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation, see also this tutorial:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.

    2. flat: No conversion, lat/lon are assumed to be in m.

  • allow_time_extrapolation – boolean whether to allow for extrapolation (i.e. beyond the last available time snapshot) Default is False if dimensions includes time, else True

  • time_periodic – To loop periodically over the time component of the Field. Set to either False or the length of the period (a float in seconds or a datetime.timedelta object). (Default: False) This flag overrides allow_time_extrapolation and sets it to False

  • tracer_interp_method – Method for interpolation of tracer fields. It is recommended to use ‘bgrid_tracer’ (default). Note that in the case of from_pop() and from_bgrid(), the velocity fields default to ‘bgrid_velocity’

  • chunksize – size of the chunks in dask loading

  • depth_units – The units of the vertical dimension. Default in Parcels is ‘m’, but many POP outputs are in ‘cm’
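The argument structures for from_pop() can be sketched as plain Python dictionaries; the file paths and netCDF variable/dimension names below are placeholders for whatever your POP output actually contains:

```python
# Hypothetical argument structures for FieldSet.from_pop(); all names
# below are placeholders, not prescribed by Parcels.
filenames = {
    "U": ["pop_output_*.nc"],  # wildcards may match multiple files
    "V": ["pop_output_*.nc"],
    "W": ["pop_output_*.nc"],
}
variables = {
    "U": "UVEL",  # POP velocities are assumed to be in cm/s (see note above)
    "V": "VVEL",
    "W": "WVEL",
}
dimensions = {
    "lon": "ULONG",
    "lat": "ULAT",
    "depth": "w_dep",  # W-point depths: one longer than the velocity depth dim
    "time": "time",
}

# The fieldset would then be built with, e.g.:
# fieldset = FieldSet.from_pop(filenames, variables, dimensions, depth_units="cm")
assert set(variables) == {"U", "V", "W"}
```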

classmethod from_xarray_dataset(ds, variables, dimensions, mesh='spherical', allow_time_extrapolation=None, time_periodic=False, **kwargs)[source]

Initialises FieldSet data from xarray Datasets.

Parameters
  • ds – xarray Dataset. Note that the built-in Advection kernels assume that U and V are in m/s

  • variables – Dictionary mapping parcels variable names to data variables in the xarray Dataset.

  • dimensions – Dictionary mapping data dimensions (lon, lat, depth, time, data) to dimensions in the xarray Dataset. Note that dimensions can also be a dictionary of dictionaries if dimension names are different for each variable (e.g. dimensions[‘U’], dimensions[‘V’], etc).

  • fieldtype – Optional dictionary mapping fields to fieldtypes to be used for UnitConverter. (either ‘U’, ‘V’, ‘Kh_zonal’, ‘Kh_meridional’ or None)

  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation, see also this tutorial:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.

    2. flat: No conversion, lat/lon are assumed to be in m.

  • allow_time_extrapolation – boolean whether to allow for extrapolation (i.e. beyond the last available time snapshot) Default is False if dimensions includes time, else True

  • time_periodic – To loop periodically over the time component of the Field. Set to either False or the length of the period (a float in seconds or a datetime.timedelta object). (Default: False) This flag overrides allow_time_extrapolation and sets it to False
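The variables and per-variable dimensions mappings described above can be sketched as follows; the names on the right-hand side are placeholders for the actual Dataset contents:

```python
# Hypothetical mappings for FieldSet.from_xarray_dataset(); all names on
# the right-hand side are placeholders.
variables = {"U": "eastward_velocity", "V": "northward_velocity"}

# Per-variable dimension dictionaries, as described above
# (e.g. dimensions['U'], dimensions['V']):
dimensions = {
    "U": {"lon": "longitude", "lat": "latitude", "time": "time"},
    "V": {"lon": "longitude", "lat": "latitude", "time": "time"},
}

# The call would then look like (ds being an xarray.Dataset):
# fieldset = FieldSet.from_xarray_dataset(ds, variables, dimensions, mesh="spherical")
assert dimensions.keys() == variables.keys()
```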

get_fields()[source]

Returns a list of all the parcels.field.Field and parcels.field.VectorField objects associated with this FieldSet

write(filename)[source]

Write FieldSet to NetCDF file using NEMO convention

Parameters

filename – Basename of the output fileset

parcels.field module

class parcels.field.Field(name, data, lon=None, lat=None, depth=None, time=None, grid=None, mesh='flat', timestamps=None, fieldtype=None, transpose=False, vmin=None, vmax=None, time_origin=None, interp_method='linear', allow_time_extrapolation=None, time_periodic=False, gridindexingtype='nemo', **kwargs)[source]

Bases: object

Class that encapsulates access to field data.

Parameters
  • name – Name of the field

  • data

    2D, 3D or 4D numpy array of field data.

    1. If data shape is [xdim, ydim], [xdim, ydim, zdim], [xdim, ydim, tdim] or [xdim, ydim, zdim, tdim], whichever is relevant for the dataset, use the flag transpose=True

    2. If data shape is [ydim, xdim], [zdim, ydim, xdim], [tdim, ydim, xdim] or [tdim, zdim, ydim, xdim], use the flag transpose=False

    3. If data has any other shape, you first need to reorder it

  • lon – Longitude coordinates (numpy vector or array) of the field (only if grid is None)

  • lat – Latitude coordinates (numpy vector or array) of the field (only if grid is None)

  • depth – Depth coordinates (numpy vector or array) of the field (only if grid is None)

  • time – Time coordinates (numpy vector) of the field (only if grid is None)

  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation: (only if grid is None)

    1. spherical: Lat and lon in degree, with a correction for zonal velocity U near the poles.

    2. flat (default): No conversion, lat/lon are assumed to be in m.

  • timestamps – A numpy array containing the timestamps for each of the files in filenames, for loading from netCDF files only. Default is None if the netCDF dimensions dictionary includes time.

  • grid – parcels.grid.Grid object containing all the lon, lat, depth, time, mesh and time_origin information. Can be constructed from any of the Grid objects

  • fieldtype – Type of Field to be used for UnitConverter when using SummedFields (either ‘U’, ‘V’, ‘Kh_zonal’, ‘Kh_meridional’ or None)

  • transpose – Transpose data to required (lon, lat) layout

  • vmin – Minimum allowed value on the field. Data below this value are set to zero

  • vmax – Maximum allowed value on the field. Data above this value are set to zero

  • time_origin – Time origin (TimeConverter object) of the time axis (only if grid is None)

  • interp_method – Method for interpolation. Options are ‘linear’ (default), ‘nearest’, ‘linear_invdist_land_tracer’, ‘cgrid_velocity’, ‘cgrid_tracer’ and ‘bgrid_velocity’

  • allow_time_extrapolation – boolean whether to allow for extrapolation in time (i.e. beyond the last available time snapshot)

  • time_periodic – To loop periodically over the time component of the Field. Set to either False or the length of the period (a float in seconds or a datetime.timedelta object). The last value of the time series can be provided (which is the same as the initial one) or not. (Default: False) This flag overrides allow_time_extrapolation and sets it to False

  • chunkdims_name_map (opt.) – gives a name map to the FieldFileBuffer that declares a mapping between chunksize name, NetCDF dimension and Parcels dimension; required only if a currently incompatible OGCM field is loaded and chunking is used via ‘chunksize’ (which is the default)

For usage examples, see the tutorials on oceanparcels.org.
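The data-layout rules in items 1–2 above can be illustrated with numpy; the array sizes are arbitrary:

```python
import numpy as np

tdim, zdim, ydim, xdim = 2, 3, 4, 5

# Case 2 above: data already in [tdim, zdim, ydim, xdim] order,
# so transpose=False would be appropriate.
data_c = np.zeros((tdim, zdim, ydim, xdim))
assert data_c.shape == (2, 3, 4, 5)

# Case 1 above: data in [xdim, ydim, zdim, tdim] order, which would need
# the transpose=True flag; equivalently, one can reorder it up front:
data_f = np.zeros((xdim, ydim, zdim, tdim))
reordered = np.transpose(data_f, (3, 2, 1, 0))
assert reordered.shape == data_c.shape
```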

add_periodic_halo(zonal, meridional, halosize=5, data=None)[source]

Add a ‘halo’ to all Fields in a FieldSet, through extending the Field (and lon/lat) by copying a small portion of the field on one side of the domain to the other. Before adding a periodic halo to the Field, it has to be added to the Grid on which the Field depends

See this tutorial for a detailed explanation on how to set up periodic boundaries

Parameters
  • zonal – Create a halo in zonal direction (boolean)

  • meridional – Create a halo in meridional direction (boolean)

  • halosize – size of the halo (in grid points). Default is 5 grid points

  • data – if data is not None, the periodic halo will be achieved on data instead of self.data and data will be returned
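Conceptually, a zonal halo copies a strip from each side of the domain to the opposite side. A minimal numpy sketch of that idea (not Parcels’ actual implementation):

```python
import numpy as np

halosize = 2
field = np.arange(20.0).reshape(4, 5)  # toy field in (lat, lon) layout

# Append the western strip after the eastern edge and prepend the
# eastern strip before the western edge.
west = field[:, :halosize]
east = field[:, -halosize:]
with_halo = np.concatenate([east, field, west], axis=1)

assert with_halo.shape == (4, 5 + 2 * halosize)
# The halo columns repeat the opposite side of the domain:
assert np.array_equal(with_halo[:, :halosize], field[:, -halosize:])
```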

calc_cell_edge_sizes()[source]

Method to calculate cell sizes based on the numpy.gradient method. Currently only works for Rectilinear Grids.

cell_areas()[source]

Method to calculate cell areas based on cell_edge_sizes. Currently only works for Rectilinear Grids.

property ctypes_struct

Returns a ctypes struct object containing all relevant pointers and sizes for this field.

eval(time, z, y, x, particle=None, applyConversion=True)[source]

Interpolate field values in space and time.

We interpolate linearly in time and apply implicit unit conversion to the result. Note that we defer to scipy.interpolate to perform spatial interpolation.

classmethod from_netcdf(filenames, variable, dimensions, indices=None, grid=None, mesh='spherical', timestamps=None, allow_time_extrapolation=None, time_periodic=False, deferred_load=True, **kwargs)[source]

Create field from netCDF file

Parameters
  • filenames – list of filenames to read for the field. filenames can be a list [files] or a dictionary {dim:[files]} (if lon, lat, depth and/or data are not stored in the same files as the data). In the latter case, time values are in filenames[data]

  • variable – Tuple mapping field name to variable name in the NetCDF file.

  • dimensions – Dictionary mapping variable names for the relevant dimensions in the NetCDF file

  • indices – dictionary mapping indices for each dimension to read from file. This can be used for reading in only a subregion of the NetCDF file. Note that negative indices are not allowed.

  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.

    2. flat: No conversion, lat/lon are assumed to be in m.

  • timestamps – A numpy array of datetime64 objects containing the timestamps for each of the files in filenames. Default is None if dimensions includes time.

  • allow_time_extrapolation – boolean whether to allow for extrapolation in time (i.e. beyond the last available time snapshot) Default is False if dimensions includes time, else True

  • time_periodic – boolean whether to loop periodically over the time component of the FieldSet. This flag overrides allow_time_extrapolation and sets it to False

  • deferred_load – boolean whether to only pre-load data (in deferred mode) or to fully load them (default: True). Deferred loading is advised, since Parcels then manages memory better during particle set execution. deferred_load=False is, however, sometimes necessary for plotting the fields.

  • gridindexingtype – The type of gridindexing. Either ‘nemo’ (default) or ‘mitgcm’ are supported. See also the Grid indexing documentation on oceanparcels.org

  • chunksize – size of the chunks in dask loading

For usage examples, see the tutorials on oceanparcels.org.
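The dictionary form of filenames described above can be sketched as follows, for the case where grid coordinates live in a separate mesh file; all paths and netCDF names are placeholders:

```python
# Hypothetical 'filenames', 'variable' and 'dimensions' arguments for
# Field.from_netcdf(); every name below is a placeholder.
filenames = {
    "lon": ["mesh_mask.nc"],
    "lat": ["mesh_mask.nc"],
    "data": ["U_2000-01.nc", "U_2000-02.nc"],  # time values come from 'data'
}
variable = ("U", "uo")  # (Parcels field name, netCDF variable name)
dimensions = {"lon": "glamf", "lat": "gphif", "time": "time_counter"}

assert isinstance(variable, tuple) and len(variable) == 2
```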

classmethod from_xarray(da, name, dimensions, mesh='spherical', allow_time_extrapolation=None, time_periodic=False, **kwargs)[source]

Create field from xarray Variable

Parameters
  • da – Xarray DataArray

  • name – Name of the Field

  • dimensions – Dictionary mapping variable names for the relevant dimensions in the DataArray

  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.

    2. flat: No conversion, lat/lon are assumed to be in m.

  • allow_time_extrapolation – boolean whether to allow for extrapolation in time (i.e. beyond the last available time snapshot) Default is False if dimensions includes time, else True

  • time_periodic – boolean whether to loop periodically over the time component of the FieldSet. This flag overrides allow_time_extrapolation and sets it to False

set_depth_from_field(field)[source]

Define the depth dimensions from another (time-varying) field

See this tutorial for a detailed explanation on how to set up time-evolving depth dimensions

set_scaling_factor(factor)[source]

Scales the field data by some constant factor.

Parameters

factor – scaling factor

For usage examples, see the tutorials on oceanparcels.org.

show(animation=False, show_time=None, domain=None, depth_level=0, projection=None, land=True, vmin=None, vmax=None, savefile=None, **kwargs)[source]

Method to ‘show’ a Parcels Field

Parameters
  • animation – Boolean whether result is a single plot, or an animation

  • show_time – Time at which to show the Field (only in single-plot mode)

  • domain – dictionary (with keys ‘N’, ‘S’, ‘E’, ‘W’) defining domain to show

  • depth_level – depth level to be plotted (default 0)

  • projection – type of cartopy projection to use (default PlateCarree)

  • land – Boolean whether to show land. This is ignored for flat meshes

  • vmin – minimum colour scale (only in single-plot mode)

  • vmax – maximum colour scale (only in single-plot mode)

  • savefile – Name of a file to save the plot to

spatial_interpolation(ti, z, y, x, time, particle=None)[source]

Interpolate horizontal field values using a SciPy interpolator

temporal_interpolate_fullfield(ti, time)[source]

Calculate the data of a field between two snapshots, using linear interpolation

Parameters
  • ti – Index in time array associated with time (via time_index())

  • time – Time to interpolate to

Return type

Linearly interpolated field

time_index(time)[source]

Find the index in the time array associated with a given time

Note that we normalize to either the first or the last index if the sampled value is outside the time value range.

write(filename, varname=None)[source]

Write a Field to a netcdf file

Parameters
  • filename – Basename of the file

  • varname – Name of the field, to be appended to the filename

class parcels.field.NestedField(name, F, V=None, W=None)[source]

Bases: list

Class NestedField is a list of Fields, from which the first one that is not flagged out-of-bounds at the particle position is interpolated. This means that the order of the fields in the list matters. Each field is tried in turn: if the interpolation succeeds, or if an error other than ErrorOutOfBounds is thrown, the function stops. Otherwise, the next field is interpolated. NestedField returns an ErrorOutOfBounds only if the last field is out of boundaries as well. A NestedField is composed of either Fields or VectorFields.

See here for a detailed tutorial

Parameters
  • name – Name of the NestedField

  • F – List of fields (order matters). F can be a scalar Field, a VectorField, or the zonal component (U) of the VectorField

  • V – List of fields defining the meridional component of a VectorField, if F is the zonal component. (default: None)

  • W – List of fields defining the vertical component of a VectorField, if F and V are the zonal and meridional components (default: None)
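The order-dependent fallback described above can be sketched in plain Python; OutOfBounds and the sampler functions here are stand-ins for illustration, not the Parcels API:

```python
class OutOfBounds(Exception):
    """Stand-in for Parcels' ErrorOutOfBounds."""

def make_field(lo, hi, value):
    """Return a toy 1-D 'field' that is only valid on [lo, hi]."""
    def sample(x):
        if not (lo <= x <= hi):
            raise OutOfBounds(x)
        return value
    return sample

def nested_sample(fields, x):
    # Try each field in order; the first in-bounds one wins.
    for field in fields[:-1]:
        try:
            return field(x)
        except OutOfBounds:
            continue
    return fields[-1](x)  # the last field may itself raise OutOfBounds

fine = make_field(0.0, 1.0, "fine")         # high-resolution nest
coarse = make_field(-10.0, 10.0, "coarse")  # larger-domain fallback

assert nested_sample([fine, coarse], 0.5) == "fine"
assert nested_sample([fine, coarse], 5.0) == "coarse"
```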

class parcels.field.SummedField(name, F, V=None, W=None)[source]

Bases: list

Class SummedField is a list of Fields over which Field interpolation is summed. This can e.g. be used when combining multiple flow fields, where the total flow is the sum of all the individual flows. Note that the individual Fields can be on different Grids. Also note that, since SummedFields are lists, the individual Fields can still be queried through their list index (e.g. SummedField[1]). SummedField is composed of either Fields or VectorFields.

See here for a detailed tutorial

Parameters
  • name – Name of the SummedField

  • F – List of fields. F can be a scalar Field, a VectorField, or the zonal component (U) of the VectorField

  • V – List of fields defining the meridional component of a VectorField, if F is the zonal component. (default: None)

  • W – List of fields defining the vertical component of a VectorField, if F and V are the zonal and meridional components (default: None)
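The summation semantics described above reduce to adding the interpolated values of each field. A toy sketch with plain functions standing in for Field objects:

```python
# Toy 'fields' as plain functions; in Parcels these would be Field objects.
mean_flow = lambda x: 1.0        # background flow
eddy_flow = lambda x: 0.25 * x   # perturbation on top of it

def summed_sample(fields, x):
    # SummedField semantics: interpolate each field and add the results.
    return sum(f(x) for f in fields)

assert summed_sample([mean_flow, eddy_flow], 2.0) == 1.5
```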

class parcels.field.VectorField(name, U, V, W=None)[source]

Bases: object

Class VectorField stores 2 or 3 fields which together define a vector field. This enables interpolating them as one single vector field in the kernels.

Parameters
  • name – Name of the vector field

  • U – field defining the zonal component

  • V – field defining the meridional component

  • W – field defining the vertical component (default: None)

spatial_c_grid_interpolation3D(ti, z, y, x, time, particle=None)[source]

(Schematic: a C-grid cell with V1 on the northern face, V0 on the southern face, U0 on the western face and U1 on the eastern face.)

The interpolation is done by interpolating U linearly as a function of the longitude coordinate and V linearly as a function of the latitude coordinate. Curvilinear grids are treated properly, since the element is projected onto a rectilinear parent element.

parcels.gridset module

class parcels.gridset.GridSet[source]

Bases: object

GridSet class that holds the Grids on which the Fields are defined

dimrange(dim)[source]

Returns the maximum value of a dimension (lon, lat, depth or time) on the ‘left’ side and the minimum value on the ‘right’ side over all grids in a gridset, i.e. the extent over which all grids overlap. Useful for finding e.g. the longitude range that overlaps on all grids in a gridset

parcels.grid module

class parcels.grid.CGrid[source]

Bases: _ctypes.Structure

class parcels.grid.CurvilinearSGrid(lon, lat, depth, time=None, time_origin=None, mesh='flat')[source]

Bases: parcels.grid.CurvilinearGrid

Curvilinear S Grid.

Parameters
  • lon – 2D array containing the longitude coordinates of the grid

  • lat – 2D array containing the latitude coordinates of the grid

  • depth – 4D (time-evolving) or 3D (time-independent) array containing the vertical coordinates of the grid, which are s-coordinates. s-coordinates can be terrain-following (sigma) or iso-density (rho) layers, or any generalised vertical discretisation. The depth of each node then depends on the horizontal position (lon, lat), the layer number and, if depth is a 4D array, the time. The depth array is either a 4D array[xdim][ydim][zdim][tdim] or a 3D array[xdim][ydim][zdim].

  • time – Vector containing the time coordinates of the grid

  • time_origin – Time origin (TimeConverter object) of the time axis

  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.

    2. flat: No conversion, lat/lon are assumed to be in m.

class parcels.grid.CurvilinearZGrid(lon, lat, depth=None, time=None, time_origin=None, mesh='flat')[source]

Bases: parcels.grid.CurvilinearGrid

Curvilinear Z Grid.

Parameters
  • lon – 2D array containing the longitude coordinates of the grid

  • lat – 2D array containing the latitude coordinates of the grid

  • depth – Vector containing the vertical coordinates of the grid, which are z-coordinates. The depth of the different layers is thus constant.

  • time – Vector containing the time coordinates of the grid

  • time_origin – Time origin (TimeConverter object) of the time axis

  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.

    2. flat: No conversion, lat/lon are assumed to be in m.

class parcels.grid.Grid(lon, lat, time, time_origin, mesh)[source]

Bases: object

Grid class that defines a (spatial and temporal) grid on which Fields are defined

property child_ctypes_struct

Returns a ctypes struct object containing all relevant pointers and sizes for this grid.

class parcels.grid.GridCode(value)[source]

Bases: enum.IntEnum

An enumeration.

class parcels.grid.RectilinearSGrid(lon, lat, depth, time=None, time_origin=None, mesh='flat')[source]

Bases: parcels.grid.RectilinearGrid

Rectilinear S Grid. Same horizontal discretisation as a rectilinear z grid,

but with s vertical coordinates

Parameters
  • lon – Vector containing the longitude coordinates of the grid

  • lat – Vector containing the latitude coordinates of the grid

  • depth – 4D (time-evolving) or 3D (time-independent) array containing the vertical coordinates of the grid, which are s-coordinates. s-coordinates can be terrain-following (sigma) or iso-density (rho) layers, or any generalised vertical discretisation. The depth of each node then depends on the horizontal position (lon, lat), the layer number and, if depth is a 4D array, the time. The depth array is either a 4D array[xdim][ydim][zdim][tdim] or a 3D array[xdim][ydim][zdim].

  • time – Vector containing the time coordinates of the grid

  • time_origin – Time origin (TimeConverter object) of the time axis

  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.

    2. flat: No conversion, lat/lon are assumed to be in m.

class parcels.grid.RectilinearZGrid(lon, lat, depth=None, time=None, time_origin=None, mesh='flat')[source]

Bases: parcels.grid.RectilinearGrid

Rectilinear Z Grid

Parameters
  • lon – Vector containing the longitude coordinates of the grid

  • lat – Vector containing the latitude coordinates of the grid

  • depth – Vector containing the vertical coordinates of the grid, which are z-coordinates. The depth of the different layers is thus constant.

  • time – Vector containing the time coordinates of the grid

  • time_origin – Time origin (TimeConverter object) of the time axis

  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.

    2. flat: No conversion, lat/lon are assumed to be in m.

parcels.particle module

class parcels.particle.JITParticle(*args, **kwargs)[source]

Bases: parcels.particle.ScipyParticle

Particle class for JIT-based (Just-In-Time) Particle objects

Parameters
  • lon – Initial longitude of particle

  • lat – Initial latitude of particle

  • fieldset – parcels.fieldset.FieldSet object to track this particle on

  • dt – Execution timestep for this particle

  • time – Current time of the particle

Additional Variables can be added via Variable objects

Users should use JITParticles for faster advection computation.

class parcels.particle.ScipyParticle(lon, lat, pid, fieldset, depth=0.0, time=0.0, cptr=None)[source]

Bases: parcels.particle._Particle

Class encapsulating the basic attributes of a particle, to be executed in SciPy mode

Parameters
  • lon – Initial longitude of particle

  • lat – Initial latitude of particle

  • depth – Initial depth of particle

  • fieldset – parcels.fieldset.FieldSet object to track this particle on

  • time – Current time of the particle

Additional Variables can be added via Variable objects

class parcels.particle.Variable(name, dtype=<class 'numpy.float32'>, initial=0, to_write=True)[source]

Bases: object

Descriptor class that delegates data access to particle data

Parameters
  • name – Variable name as used within kernels

  • dtype – Data type (numpy.dtype) of the variable

  • initial – Initial value of the variable. Note that this can also be a Field object, which will then be sampled at the location of the particle

  • to_write (bool or ‘once’, optional) – Controls whether the Variable is written to a NetCDF file. If to_write = ‘once’, the variable will be written as a time-independent 1D array

is64bit()[source]

Check whether variable is 64-bit
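The descriptor pattern that Variable uses can be sketched in plain Python; this mirrors the idea of delegating attribute access to a backing store, not Parcels’ actual internals:

```python
class ToyVariable:
    """Descriptor that stores per-instance data in a dict, mimicking how a
    Variable delegates attribute access to the underlying particle data."""
    def __init__(self, name, initial=0):
        self.name = name
        self.initial = initial
    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance.data.get(self.name, self.initial)
    def __set__(self, instance, value):
        instance.data[self.name] = value

class ToyParticle:
    age = ToyVariable("age", initial=0.0)
    def __init__(self):
        self.data = {}  # backing store for the descriptor

p = ToyParticle()
assert p.age == 0.0   # falls back to the initial value
p.age = 3600.0
assert p.age == 3600.0
```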

parcels.kernels.advection module

Collection of pre-built advection kernels

parcels.kernels.advection.AdvectionAnalytical(particle, fieldset, time)[source]

Advection of particles using ‘analytical advection’ integration

Based on the Ariane/TRACMASS algorithm, as detailed in e.g. Döös et al. (2017, https://doi.org/10.5194/gmd-10-1733-2017). Note that the time-dependent scheme is currently implemented with ‘intermediate timesteps’ (default 10 per model timestep) and not yet with the full analytical time integration

parcels.kernels.advection.AdvectionEE(particle, fieldset, time)[source]

Advection of particles using Explicit Euler (aka Euler Forward) integration.

Function needs to be converted to Kernel object before execution

parcels.kernels.advection.AdvectionRK4(particle, fieldset, time)[source]

Advection of particles using fourth-order Runge-Kutta integration.

Function needs to be converted to Kernel object before execution

parcels.kernels.advection.AdvectionRK45(particle, fieldset, time)[source]

Advection of particles using adaptive Runge-Kutta 4/5 integration.

Time-step dt is halved if the error is larger than the tolerance, and doubled if the error is smaller than 1/10th of the tolerance, with the tolerance set to 1e-5 * dt by default.

parcels.kernels.advection.AdvectionRK4_3D(particle, fieldset, time)[source]

Advection of particles using fourth-order Runge-Kutta integration including vertical velocity.

Function needs to be converted to Kernel object before execution
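A classical RK4 step, as used by the kernels above, can be sketched for a 1-D, time-independent velocity field; the velocity function is a toy stand-in, not a Parcels FieldSet:

```python
import math

def rk4_step(u, x, dt):
    """One fourth-order Runge-Kutta step for dx/dt = u(x)."""
    k1 = u(x)
    k2 = u(x + 0.5 * dt * k1)
    k3 = u(x + 0.5 * dt * k2)
    k4 = u(x + dt * k3)
    return x + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# For u(x) = x, the exact solution over one step dt is x * exp(dt);
# RK4 reproduces it to fourth order in dt.
x1 = rk4_step(lambda x: x, 1.0, 0.1)
assert abs(x1 - math.exp(0.1)) < 1e-6
```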

parcels.kernels.advectiondiffusion module

Collection of pre-built advection-diffusion kernels

See this tutorial for a detailed explanation

parcels.kernels.advectiondiffusion.AdvectionDiffusionEM(particle, fieldset, time)[source]

Kernel for 2D advection-diffusion, solved using the Euler-Maruyama scheme (EM).

Assumes that fieldset has fields Kh_zonal and Kh_meridional and variable fieldset.dres, setting the resolution for the central difference gradient approximation. This should be (of the order of) the local gridsize.

The Euler-Maruyama scheme is of strong order 0.5 and weak order 1.

The Wiener increment dW is normally distributed with zero mean and a standard deviation of sqrt(dt).

parcels.kernels.advectiondiffusion.AdvectionDiffusionM1(particle, fieldset, time)[source]

Kernel for 2D advection-diffusion, solved using the Milstein scheme at first order (M1).

Assumes that fieldset has fields Kh_zonal and Kh_meridional and variable fieldset.dres, setting the resolution for the central difference gradient approximation. This should be (of the order of) the local gridsize.

This Milstein scheme is of strong and weak order 1, which is higher than the Euler-Maruyama scheme. It experiences less spurious diffusivity by including extra correction terms that are computationally cheap.

The Wiener increment dW is normally distributed with zero mean and a standard deviation of sqrt(dt).

parcels.kernels.advectiondiffusion.DiffusionUniformKh(particle, fieldset, time)[source]

Kernel for simple 2D diffusion where diffusivity (Kh) is assumed uniform.

Assumes that fieldset has constant fields Kh_zonal and Kh_meridional. These can be added via e.g.

fieldset.add_constant_field("Kh_zonal", kh_zonal, mesh=mesh)
fieldset.add_constant_field("Kh_meridional", kh_meridional, mesh=mesh)

where mesh is either ‘flat’ or ‘spherical’

This kernel assumes the diffusivity gradients are zero and is therefore more efficient. Since the perturbation due to diffusion is in this case isotropic and independent, this kernel contains no advection and can be used in combination with a separate advection kernel.

The Wiener increment dW is normally distributed with zero mean and a standard deviation of sqrt(dt).
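The Wiener increment described above can be sketched with the stdlib random module; the Kh and dt values are arbitrary, and this is a conceptual sketch of the random-walk step, not the Parcels kernel:

```python
import math
import random

random.seed(42)  # reproducible sketch

kh = 10.0   # uniform diffusivity (arbitrary value)
dt = 60.0   # timestep in seconds (arbitrary value)

x = 0.0
for _ in range(100):
    # Wiener increment: zero mean, standard deviation sqrt(dt)
    dW = random.gauss(0.0, math.sqrt(dt))
    # Uniform-Kh diffusion step: no advection, no diffusivity gradient
    x += math.sqrt(2.0 * kh) * dW

assert math.isfinite(x)
```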

parcels.kernels.EOSseawaterproperties module

Collection of pre-built eos sea water property kernels

parcels.kernels.EOSseawaterproperties.AdiabticTemperatureGradient(particle, fieldset, time)[source]

Calculates adiabatic temperature gradient as per UNESCO 1983 routines.

Parameters
  • particle.S (array_like) – salinity [psu (PSS-78)]

  • particle.T (array_like) – temperature [℃ (ITS-90)]

  • particle.pressure (array_like) – pressure [db]

Returns

adiabatic temperature gradient [℃ db⁻¹]

Type

array_like

1

Fofonoff, P. and Millard, R.C. Jr UNESCO 1983. Algorithms for computation of fundamental properties of seawater. UNESCO Tech. Pap. in Mar. Sci., No. 44, 53 pp. http://unesdoc.unesco.org/images/0005/000598/059832eb.pdf

2

Bryden, H. 1973. New polynomials for thermal expansion, adiabatic temperature gradient and potential temperature of sea water. Deep-Sea Res., Vol. 20, 401-408. doi:10.1016/0011-7471(73)90063-6

parcels.kernels.EOSseawaterproperties.PressureFromLatDepth(particle, fieldset, time)[source]

Calculates pressure in dbars from depth in meters and latitude.

Returns

pressure [db]

Return type

array_like

1

Saunders, Peter M., 1981: Practical Conversion of Pressure to Depth. J. Phys. Oceanogr., 11, 573-574. doi: 10.1175/1520-0485(1981)011<0573:PCOPTD>2.0.CO;2

parcels.kernels.EOSseawaterproperties.PtempFromTemp(particle, fieldset, time)[source]

Calculates potential temperature as per UNESCO 1983 report.

Parameters
  • particle.S (array_like) – salinity [psu (PSS-78)]

  • particle.T (array_like) – temperature [℃ (ITS-90)]

  • particle.pressure (array_like) – pressure [db]

  • fieldset.refpressure (array_like) – reference pressure [db], default = 0

Returns

potential temperature relative to PR [℃ (ITS-90)]

Type

array_like

1

Fofonoff, P. and Millard, R.C. Jr UNESCO 1983. Algorithms for computation of fundamental properties of seawater. UNESCO Tech. Pap. in Mar. Sci., No. 44, 53 pp. Eqn.(31) p.39. http://unesdoc.unesco.org/images/0005/000598/059832eb.pdf

2

Bryden, H. 1973. New polynomials for thermal expansion, adiabatic temperature gradient and potential temperature of sea water. Deep-Sea Res., Vol. 20, 401-408. doi:10.1016/0011-7471(73)90063-6

parcels.kernels.EOSseawaterproperties.TempFromPtemp(particle, fieldset, time)[source]

Calculates temperature from potential temperature at the reference pressure PR and in situ pressure P.

Parameters
  • particle.S (array_like) – salinity [psu (PSS-78)]

  • particle.T (array_like) – potential temperature [℃ (ITS-90)]

  • particle.pressure (array_like) – pressure [db]

  • fieldset.refpressure (array_like) – reference pressure [db], default = 0

Returns

temperature [℃ (ITS-90)]

Type

array_like

References

1. Fofonoff, P. and Millard, R.C. Jr, UNESCO 1983. Algorithms for computation of fundamental properties of seawater. UNESCO Tech. Pap. in Mar. Sci., No. 44, 53 pp. Eqn.(31) p.39. http://unesdoc.unesco.org/images/0005/000598/059832eb.pdf
2. Bryden, H., 1973. New polynomials for thermal expansion, adiabatic temperature gradient and potential temperature of sea water. Deep-Sea Res., Vol. 20, 401-408. doi:10.1016/0011-7471(73)90063-6

parcels.kernels.TEOSseawaterdensity module

Collection of pre-built sea water density kernels

parcels.kernels.TEOSseawaterdensity.PolyTEOS10_bsq(particle, fieldset, time)[source]

Calculates density based on the polyTEOS10-bsq algorithm from Appendix A.2 of https://www.sciencedirect.com/science/article/pii/S1463500315000566. Requires fieldset.abs_salinity and fieldset.cons_temperature Fields in the fieldset, and a particle.density Variable in the ParticleSet.

References

1. Roquet, F., Madec, G., McDougall, T. J., Barker, P. M., 2014: Accurate polynomial expressions for the density and specific volume of seawater using the TEOS-10 standard. Ocean Modelling.
2. McDougall, T. J., D. R. Jackett, D. G. Wright and R. Feistel, 2003: Accurate and computationally efficient algorithms for potential temperature and density of seawater. Journal of Atmospheric and Oceanic Technology, 20, 730-741.

parcels.compilation.codegenerator module

parcels.compilation.compiler module

parcels.kernel module

class parcels.kernel.Kernel(fieldset, ptype, pyfunc=None, funcname=None, funccode=None, py_ast=None, funcvars=None, c_include='', delete_cfiles=True)[source]

Bases: object

Kernel object that encapsulates auto-generated code.

Parameters
  • fieldset – FieldSet object providing the field information

  • ptype – PType object for the kernel particle

  • delete_cfiles – Boolean whether to delete the C-files after compilation in JIT mode (default is True)

Note: A Kernel is either created from a compiled <function …> object or the necessary information (funcname, funccode, funcvars) is provided. The py_ast argument may be derived from the code string, but for concatenation, the merged AST plus the new header definition is required.

compile(compiler)[source]

Writes kernel code to file and compiles it.

execute(pset, endtime, dt, recovery=None, output_file=None, execute_once=False)[source]

Execute this Kernel over a ParticleSet for several timesteps

execute_jit(pset, endtime, dt)[source]

Invokes JIT engine to perform the core update loop

execute_python(pset, endtime, dt)[source]

Performs the core update loop via Python

remove_deleted(pset, output_file, endtime)[source]

Utility to remove all particles that signalled deletion.

This version is generally applicable to all structures and collections

remove_deleted_soa(pset, output_file, endtime)[source]

Utility to remove all particles that signalled deletion

This deletion function is targeted to index-addressable, random-access array-collections.

parcels.particlefile module

Module controlling the writing of ParticleSets to NetCDF file

class parcels.particlefile.ParticleFile(name, particleset, outputdt=inf, write_ondelete=False, convert_at_end=True, tempwritedir=None, pset_info=None)[source]

Initialise trajectory output.

Parameters
  • name – Basename of the output file

  • particleset – ParticleSet to output

  • outputdt – Interval which dictates the update frequency of file output when the ParticleFile is given as an argument to ParticleSet.execute(). It is either a timedelta object or a positive double.

  • write_ondelete – Boolean to write particle data only when they are deleted. Default is False

  • convert_at_end – Boolean to convert npy files to netcdf at end of run. Default is True

  • tempwritedir – Directory to write temporary files to during execution. Default is out-XXXXXX where the Xs are random capitals. Files for individual processors are written to subdirectories 0, 1, 2, etc. under tempwritedir

  • pset_info – dictionary of info on the ParticleSet, stored in tempwritedir/XX/pset_info.npy, used to create NetCDF file from npy-files.

add_metadata(name, message)[source]

Add metadata to parcels.particleset.ParticleSet

Parameters
  • name – Name of the metadata variable

  • message – message to be written

close(delete_tempfiles=True)[source]

Close the ParticleFile object by exporting and then deleting the temporary npy files

delete_tempwritedir(tempwritedir=None)[source]

Deletes all temporary npy files

Parameters

tempwritedir – Optional path of the directory to delete

dump_dict_to_npy(data_dict, data_dict_once)[source]

Buffer data to set of temporary numpy files, using np.save

dump_psetinfo_to_npy()[source]
export()[source]

Exports outputs in temporary NPY-files to NetCDF file

open_netcdf_file(data_shape)[source]

Initialise NetCDF4.Dataset for trajectory output. The output follows the format outlined in the Discrete Sampling Geometries section of the CF-conventions: http://cfconventions.org/cf-conventions/v1.6.0/cf-conventions.html#discrete-sampling-geometries The current implementation is based on the NCEI template: http://www.nodc.noaa.gov/data/formats/netcdf/v2.0/trajectoryIncomplete.cdl

Parameters

data_shape – shape of the variables in the NetCDF4 file

read_from_npy(file_list, time_steps, var)[source]

Read NPY-files for one variable using a loop over all files.

Parameters
  • file_list – List that contains all file names in the output directory

  • time_steps – Number of time steps that were written in out directory

  • var – name of the variable to read

write(pset, time, deleted_only=False)[source]

Write all data from one time step to a temporary npy-file using a python dictionary. The data is saved in the folder ‘out’.

Parameters
  • pset – ParticleSet object to write

  • time – Time at which to write ParticleSet

  • deleted_only – Flag to write only the deleted Particles

parcels.rng module

parcels.rng.expovariate(lamb)[source]

Returns a random float drawn from an exponential distribution with rate parameter lamb

parcels.rng.normalvariate(loc, scale)[source]

Returns a random float on normal distribution with mean loc and width scale

parcels.rng.randint(low, high)[source]

Returns a random int between low and high

parcels.rng.random()[source]

Returns a random float between 0. and 1.

parcels.rng.seed(seed)[source]

Sets the seed for parcels internal RNG

parcels.rng.uniform(low, high)[source]

Returns a random float between low and high

parcels.rng.vonmisesvariate(mu, kappa)[source]

Returns a random float drawn from a von Mises distribution with mean angle mu and concentration parameter kappa
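The functions above map almost one-to-one onto Python's built-in random module. As a rough stdlib sketch of the same draws (this is not the Parcels C-level RNG, which is seeded separately for JIT kernels):

```python
import random

random.seed(1234)                      # parcels.rng.seed(seed)
u = random.random()                    # parcels.rng.random(): float in [0., 1.)
v = random.uniform(-5.0, 5.0)          # parcels.rng.uniform(low, high)
n = random.randint(0, 10)              # parcels.rng.randint(low, high)
g = random.normalvariate(0.0, 1.0)     # parcels.rng.normalvariate(loc, scale)
e = random.expovariate(2.0)            # parcels.rng.expovariate(lamb)
a = random.vonmisesvariate(0.0, 4.0)   # parcels.rng.vonmisesvariate(mu, kappa)

# Re-seeding reproduces the same sequence, just as parcels.rng.seed does
random.seed(1234)
assert random.random() == u
```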

parcels.particlesets.baseparticleset module

class parcels.particlesets.baseparticleset.BaseParticleSet(fieldset=None, pclass=None, lon=None, lat=None, depth=None, time=None, repeatdt=None, lonlatdepth_dtype=None, pid_orig=None, **kwargs)[source]

Bases: parcels.particlesets.baseparticleset.NDCluster

Base ParticleSet.

abstract Kernel(pyfunc, c_include='', delete_cfiles=True)[source]

Wrapper method to convert a pyfunc into a parcels.kernel.Kernel object based on the fieldset and ptype of the ParticleSet

Parameters

delete_cfiles – Boolean whether to delete the C-files after compilation in JIT mode (default is True)

abstract ParticleFile(*args, **kwargs)[source]

Wrapper method to initialise a parcels.particlefile.ParticleFile object from the ParticleSet

abstract cstruct()[source]

‘cstruct’ returns the ctypes mapping of the combined collections cstruct and the fieldset cstruct. This depends on the specific structure in question.

density(field_name=None, particle_val=None, relative=False, area_scale=False)[source]

Method to calculate the density of particles in a ParticleSet from their locations, through a 2D histogram.

Parameters
  • field_name – Optional parcels.field.Field object (or its name) to calculate the histogram on. Default is fieldset.U

  • particle_val – Optional numpy array of values to weigh each particle with, or the string name of a particle variable to use as weights. Default is None, giving each particle a weight of 1

  • relative – Boolean to control whether the density is scaled by the total weight of all particles. Default is False

  • area_scale – Boolean to control whether the density is scaled by the area (in m^2) of each grid cell. Default is False
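The 2D-histogram density computation can be sketched without Parcels. The helper below is a plain-Python stand-in for the histogram that density builds on the fieldset grid; the bin edges and particle positions are invented for illustration:

```python
def density_2d(lons, lats, lon_edges, lat_edges, weights=None, relative=False):
    """Count (weighted) particles per lon/lat cell, returned as a nested
    list [lat_bin][lon_bin]; a stand-in for ParticleSet.density."""
    if weights is None:
        weights = [1.0] * len(lons)          # each particle counts as 1
    ni, nj = len(lat_edges) - 1, len(lon_edges) - 1
    hist = [[0.0] * nj for _ in range(ni)]
    for lon, lat, w in zip(lons, lats, weights):
        for i in range(ni):
            if lat_edges[i] <= lat < lat_edges[i + 1]:
                for j in range(nj):
                    if lon_edges[j] <= lon < lon_edges[j + 1]:
                        hist[i][j] += w
    if relative:                             # scale by total weight, as with relative=True
        total = sum(weights)
        hist = [[h / total for h in row] for row in hist]
    return hist

# Three particles, two of them in the same cell
h = density_2d([0.5, 0.6, 1.5], [0.5, 0.5, 0.5], [0, 1, 2], [0, 1])
# h == [[2.0, 1.0]]
```

The real method additionally supports area_scale, dividing each cell count by the grid-cell area in m².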

abstract property error_particles

Get an iterator over all particles that are in an error state.

This is a fallback implementation, it might be slow.

Returns

Collection iterator over error particles.

execute(pyfunc=<function AdvectionRK4>, endtime=None, runtime=None, dt=1.0, moviedt=None, recovery=None, output_file=None, movie_background_field=None, verbose_progress=None, postIterationCallbacks=None, callbackdt=None)[source]

Execute a given kernel function over the particle set for multiple timesteps. Optionally also provide sub-timestepping for particle output.

Parameters
  • pyfunc – Kernel function to execute. This can be the name of a defined Python function or a parcels.kernel.Kernel object. Kernels can be concatenated using the + operator

  • endtime – End time for the timestepping loop. It is either a datetime object or a positive double.

  • runtime – Length of the timestepping loop. Use instead of endtime. It is either a timedelta object or a positive double.

  • dt – Timestep interval to be passed to the kernel. It is either a timedelta object or a double. Use a negative value for a backward-in-time simulation.

  • moviedt – Interval for inner sub-timestepping (leap), which dictates the update frequency of animation. It is either a timedelta object or a positive double. None value means no animation.

  • output_file – parcels.particlefile.ParticleFile object for particle output

  • recovery – Dictionary with additional parcels.tools.error recovery kernels to allow custom recovery behaviour in case of kernel errors.

  • movie_background_field – field plotted as background in the movie if moviedt is set. ‘vector’ shows the velocity as a vector field.

  • verbose_progress – Boolean for providing a progress bar for the kernel execution loop.

  • postIterationCallbacks – Optional list of functions to be called after each iteration (post-processing, non-Kernel)

  • callbackdt – Optional timestep interval, used in conjunction with postIterationCallbacks, at which (at the latest) to interrupt the running kernel and invoke the post-iteration callbacks

classmethod from_field(fieldset, pclass, start_field, size, mode='monte_carlo', depth=None, time=None, repeatdt=None, lonlatdepth_dtype=None)[source]

Initialise the ParticleSet randomly drawn according to distribution from a field

Parameters
  • fieldset – parcels.fieldset.FieldSet object from which to sample velocity

  • pclass – parcels.particle.JITParticle or parcels.particle.ScipyParticle object that defines custom particle

  • start_field – Field for initialising particles stochastically (horizontally) according to the presented density field.

  • size – Initial size of particle set

  • mode – Type of random sampling. Currently only ‘monte_carlo’ is implemented

  • depth – Optional list of initial depth values for particles. Default is 0m

  • time – Optional start time value for particles. Default is fieldset.U.time[0]

  • repeatdt – Optional interval (in seconds) on which to repeat the release of the ParticleSet

  • lonlatdepth_dtype – Floating precision for lon, lat, depth particle coordinates. It is either np.float32 or np.float64. Default is np.float32 if fieldset.U.interp_method is ‘linear’ and np.float64 if the interpolation method is ‘cgrid_velocity’

classmethod from_line(fieldset, pclass, start, finish, size, depth=None, time=None, repeatdt=None, lonlatdepth_dtype=None)[source]

Initialise the ParticleSet from start/finish coordinates with equidistant spacing. Note that this method uses simple numpy.linspace calls and does not take great circles into account, so it may not be exact on a globe

Parameters
  • fieldset – parcels.fieldset.FieldSet object from which to sample velocity

  • pclass – parcels.particle.JITParticle or parcels.particle.ScipyParticle object that defines custom particle

  • start – Starting point for initialisation of particles on a straight line.

  • finish – End point for initialisation of particles on a straight line.

  • size – Initial size of particle set

  • depth – Optional list of initial depth values for particles. Default is 0m

  • time – Optional start time value for particles. Default is fieldset.U.time[0]

  • repeatdt – Optional interval (in seconds) on which to repeat the release of the ParticleSet

  • lonlatdepth_dtype – Floating precision for lon, lat, depth particle coordinates. It is either np.float32 or np.float64. Default is np.float32 if fieldset.U.interp_method is ‘linear’ and np.float64 if the interpolation method is ‘cgrid_velocity’
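Since from_line is a simple linspace along each coordinate, the released positions can be reproduced with plain Python. The helper below is illustrative, not the Parcels API:

```python
def line_release(start, finish, size):
    """Equidistant (lon, lat) positions between start and finish,
    mirroring the numpy.linspace calls used by ParticleSet.from_line."""
    if size == 1:
        return [start]
    step = [(f - s) / (size - 1) for s, f in zip(start, finish)]
    return [(start[0] + i * step[0], start[1] + i * step[1])
            for i in range(size)]

pts = line_release((0.0, 0.0), (10.0, 20.0), 5)
# pts[0] == (0.0, 0.0), pts[-1] == (10.0, 20.0), spacing (2.5, 5.0)
```

Because this interpolates lon and lat independently, the points follow a straight line in coordinate space, which is exactly why the docstring warns it is not a great circle on a globe.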

classmethod from_list(fieldset, pclass, lon, lat, depth=None, time=None, repeatdt=None, lonlatdepth_dtype=None, **kwargs)[source]

Initialise the ParticleSet from lists of lon and lat

Parameters
  • fieldset – parcels.fieldset.FieldSet object from which to sample velocity

  • pclass – parcels.particle.JITParticle or parcels.particle.ScipyParticle object that defines custom particle

  • lon – List of initial longitude values for particles

  • lat – List of initial latitude values for particles

  • depth – Optional list of initial depth values for particles. Default is 0m

  • time – Optional list of start time values for particles. Default is fieldset.U.time[0]

  • repeatdt – Optional interval (in seconds) on which to repeat the release of the ParticleSet

  • lonlatdepth_dtype – Floating precision for lon, lat, depth particle coordinates. It is either np.float32 or np.float64. Default is np.float32 if fieldset.U.interp_method is ‘linear’ and np.float64 if the interpolation method is ‘cgrid_velocity’

Other Variables can be initialised using further arguments (e.g. v=… for a Variable named ‘v’)

abstract classmethod from_particlefile(fieldset, pclass, filename, restart=True, restarttime=None, repeatdt=None, lonlatdepth_dtype=None, **kwargs)[source]

Initialise the ParticleSet from a netcdf ParticleFile. This creates a new ParticleSet based on locations of all particles written in a netcdf ParticleFile at a certain time. Particle IDs are preserved if restart=True

Parameters
  • fieldset – parcels.fieldset.FieldSet object from which to sample velocity

  • pclass – parcels.particle.JITParticle or parcels.particle.ScipyParticle object that defines custom particle

  • filename – Name of the particlefile from which to read initial conditions

  • restart – Boolean to signal if pset is used for a restart (default is True). In that case, Particle IDs are preserved.

  • restarttime – time at which the Particles will be restarted. Default is the last time written. Alternatively, restarttime could be a time value (including np.datetime64) or a callable function such as np.nanmin. The last is useful when running with dt < 0.

  • repeatdt – Optional interval (in seconds) on which to repeat the release of the ParticleSet

  • lonlatdepth_dtype – Floating precision for lon, lat, depth particle coordinates. It is either np.float32 or np.float64. Default is np.float32 if fieldset.U.interp_method is ‘linear’ and np.float64 if the interpolation method is ‘cgrid_velocity’

abstract classmethod monte_carlo_sample(start_field, size, mode='monte_carlo')[source]

Converts a starting field into a monte-carlo sample of lons and lats.

Parameters

start_field – parcels.field.Field object for initialising particles stochastically (horizontally) according to the presented density field.

Returns

list(lon), list(lat)
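A stdlib sketch of such a monte-carlo draw of positions weighted by a density field; the flat lists of cell centres and densities below stand in for the Field object, and the function name is illustrative:

```python
import random

def monte_carlo_sample(cell_lons, cell_lats, densities, size, seed=None):
    """Draw `size` (lon, lat) positions with probability proportional to
    each cell's density; a stand-in for sampling from a start_field."""
    rng = random.Random(seed)
    # Pick cell indices weighted by density, with replacement
    cells = rng.choices(range(len(densities)), weights=densities, k=size)
    return [cell_lons[i] for i in cells], [cell_lats[i] for i in cells]

# Two cells; all the density sits in the second, so every draw lands there
lons, lats = monte_carlo_sample([0.0, 1.0], [0.0, 1.0], [0.0, 1.0],
                                size=4, seed=0)
# lons == [1.0, 1.0, 1.0, 1.0]
```

A fuller sketch would also jitter positions uniformly within each selected cell rather than returning the cell centre.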

property num_error_particles

Get the number of particles that are in an error state.

show(with_particles=True, show_time=None, field=None, domain=None, projection=None, land=True, vmin=None, vmax=None, savefile=None, animation=False, **kwargs)[source]

Method to ‘show’ a Parcels ParticleSet

Parameters
  • with_particles – Boolean whether to show particles

  • show_time – Time at which to show the ParticleSet

  • field – Field to plot under particles (either None, a Field object, or ‘vector’)

  • domain – dictionary (with keys ‘N’, ‘S’, ‘E’, ‘W’) defining domain to show

  • projection – type of cartopy projection to use (default PlateCarree)

  • land – Boolean whether to show land. This is ignored for flat meshes

  • vmin – minimum colour scale (only in single-plot mode)

  • vmax – maximum colour scale (only in single-plot mode)

  • savefile – Name of a file to save the plot to

  • animation – Boolean whether result is a single plot, or an animation

class parcels.particlesets.baseparticleset.NDCluster[source]

Bases: abc.ABC

Interface.

parcels.particlesets.collections module

class parcels.particlesets.collections.Collection[source]

Bases: abc.ABC

abstract add_collection(pcollection)[source]

Adds another, differently structured ParticleCollection to this collection. This is done by, for example, appending/adding the items of the other collection to this collection.

abstract add_same(same_class)[source]

Adds another, equi-structured ParticleCollection to this collection. This is done by concatenating both collections. The fact that both are derivatives of the same ParticleCollection class simplifies parsing and concatenation.

abstract add_single(particle_obj)[source]

Adds a single Particle to the collection - either as a ‘Particle’ object in parcels itself, or via its ParticleAccessor.

abstract append(particle_obj)[source]

This function appends a Particle (as object or via its accessor) to the end of a collection (‘end’ definition depends on the specific collection itself). For collections with an inherent indexing order (e.g. ordered lists, sets, trees), the function just includes the object at its pre-defined position (i.e. not necessarily at the end). For such collections, the function mapping equates to:

append(particle_obj) -> add_single(particle_obj)

The function - in contrast to ‘push’ - does not return the index of the inserted object.

abstract clear()[source]

This function physically removes all elements of the collection, yielding an empty collection as result of the operation.

delete(key)[source]

This is the generic super-method to indicate deletion of a specific object from this collection.

Comment/Annotation: Functions for deleting multiple objects are more specialised than just a for-each loop of single-item deletion, because certain data structures can delete multiple objects in-bulk faster with specialised function than making a roundtrip per-item delete operation. Because of the sheer size of those containers and the resulting performance demands, we need to make use of those specialised ‘del’ functions, where available.

abstract delete_by_ID(id)[source]

This method deletes a particle from the collection based on its ID. It does not return the deleted item. Semantically, the function appears similar to the ‘remove’ operation. That said, the function in OceanParcels - instead of directly deleting the particle - just raises the ‘deleted’ status flag for the indexed particle. As a result, the particle still remains in the collection. The functional interpretation of the ‘deleted’ status is handled by the ‘recovery’ dictionary during simulation execution.

abstract delete_by_index(index)[source]

This method deletes a particle from the collection based on its index. It does not return the deleted item. Semantically, the function appears similar to the ‘remove’ operation. That said, the function in OceanParcels - instead of directly deleting the particle - just raises the ‘deleted’ status flag for the indexed particle. As a result, the particle still remains in the collection. The functional interpretation of the ‘deleted’ status is handled by the ‘recovery’ dictionary during simulation execution.

empty()[source]

This function returns a boolean value, expressing whether the collection is empty (i.e. no longer contains any elements).

get(other)[source]

This is a generic super-method to get one- or multiple Particles (via their object, their ParticleAccessor, their ID or their index) from the collection. Ideally, it just discerns between the types of the ‘other’ parameter, and then forwards the call to the related specific function.

Comment/Annotation: Not all arguments have a sensible use-case in every data structure, so some concrete classes may not implement all of them.

abstract get_collection(pcollection)[source]

This function gets particles from this collection that are themselves stored in a ParticleCollection, which is differently structured than this one. That means the other-collection has to be re-formatted first in an intermediary format.

abstract get_multi_by_IDs(ids)[source]

This function gets particles from this collection based on their IDs. For collections where this removal strategy would require a collection transformation or by-ID parsing, it is advisable to rather apply a get-by-objects or get-by-indices scheme.

abstract get_multi_by_PyCollection_Particles(pycollectionp)[source]

This function gets particles from this collection, which are themselves in common Python collections, such as lists, dicts and numpy structures. We can either directly get the referred Particle instances (for internally-ordered collections, e.g. ordered lists, sets, trees) or we may need to parse each instance for its index (for random-access structures), which results in a considerable performance malus.

For collections where get-by-object incurs a performance malus, it is advisable to multi-get particles by indices or IDs.

abstract get_multi_by_indices(indices)[source]

This function gets particles from this collection based on their indices. This works best for random-access collections (e.g. numpy’s ndarrays, dense matrices and dense arrays), whereas internally ordered collections shall rather use a get-via-object-reference strategy.

abstract get_same(same_class)[source]

This function gets particles from this collection that are themselves stored in another object of an equi-structured ParticleCollection.

abstract get_single_by_ID(id)[source]

This function gets a (particle) object from the collection based on the object’s ID. For some collections, this operation may involve a parsing of the whole list and translation of the object’s ID into an index or an object reference in the collection - which results in a significant performance malus. In cases where a get-by-ID would result in a performance malus, it is highly-advisable to use a different get function, e.g. get-by-index.

abstract get_single_by_index(index)[source]

This function gets a (particle) object from the collection based on its index within the collection. For collections that are not based on random access (e.g. ordered lists, sets, trees), this function involves a translation of the index into the specific object reference in the collection - or (if unavoidable) the translation of the collection from a non-indexable, non-random-access structure into an indexable structure. In cases where a get-by-index would result in a performance malus, it is highly advisable to use a different get function, e.g. get-by-ID.

abstract get_single_by_object(particle_obj)[source]

This function gets a (particle) object from the collection based on its actual object. For collections that are random-access and based on indices (e.g. unordered list, vectors, arrays and dense matrices), this function would involve a parsing of the whole list and translation of the object into an index in the collection - which results in a significant performance malus. In cases where a get-by-object would result in a performance malus, it is highly-advisable to use a different get function, e.g. get-by-index or get-by-ID.

abstract insert(obj, index=None)[source]

This function allows to ‘insert’ a Particle (as object or via its accessor) into this collection. This method needs to be specified for each collection individually. Some collections (e.g. unordered lists) allow to define the index where the object is to be inserted. Some collections can optionally insert an object at a specific position - at a significant speed and memory cost (e.g. vectors, arrays, dense matrices). Some collections manage a specified indexing order internally (e.g. ordered lists, sets, trees) and thus have no use for an ‘index’ parameter. For those collections with an internally-enforced order, the function mapping equates to:

insert(obj) -> add_single(obj)

iterator()[source]

This function is an explicit object-return of a forward-iterator over this collection. If this iterator is persistent or re-created upon call depends on the specific implementation of the ‘__iter__’ function.

This function is an explicit forward to the Collection::__iter__() member function.

abstract merge(same_class=None)[source]

This function merges two strictly equally-structured ParticleCollections into one. This can be quite handy, for example, to merge two particle subsets that - due to continuous removal - have become too small to be effective.

On the other hand, this function can also internally merge individual particles that are tagged by status as being ‘merged’ (see the particle status for information on that).

In order to distinguish both use cases, we can evaluate the ‘same_class’ parameter. In cases where this is ‘None’, the merge operation semantically refers to an internal merge of individual particles - otherwise, it performs a 2-collection merge.

Comment: the function can be simplified later by pre-evaluating the function parameter and then reference the individual, specific functions for internal- or external merge.

The function shall return the merged ParticleCollection.

pop(other)[source]

This function pops a Particle (as object or via its accessor) from a collection.

This function removes the particle and then returns it.

Comment/Annotation: Functions for popping multiple objects are more specialised than just a for-each loop of single-item pop, because certain data structures can pop multiple objects faster with specialised function than making a roundtrip per-item check-and-pop operation. Because of the sheer size of those containers and the resulting performance demands, we need to make use of those specialised ‘pop’ functions, where available.

abstract pop_multi_by_IDs(ids)[source]

Searches for Particles with the IDs registered in ‘ids’, removes the Particles from the Collection and returns the Particles (or: their ParticleAccessors). If Particles cannot be retrieved (e.g. because the IDs are not available), returns None.

abstract pop_multi_by_indices(indices)[source]

Searches for Particles with the indices registered in ‘indices’, removes the Particles from the Collection and returns the Particles (or: their ParticleAccessors). If ‘indices’ is None, the Particles cannot be retrieved: an assertion error is raised and None is returned. If an index is None, the last item (-1) is returned; if an index < 0, it counts from the ‘end’ of the collection. If an index in ‘indices’ is out of bounds, an OutOfRangeException is thrown. If the Particles cannot be retrieved, returns None.

abstract pop_single_by_ID(id)[source]

Searches for Particle with ID ‘id’, removes that Particle from the Collection and returns that Particle (or: ParticleAccessor). If Particle cannot be retrieved (e.g. because the ID is not available), returns None.

abstract pop_single_by_index(index)[source]

Searches for the Particle at index ‘index’, removes that Particle from the Collection and returns that Particle (or: ParticleAccessor). If index is None, the last item (-1) is returned; if index < 0, it counts from the ‘end’ of the collection. If index is out of bounds, an OutOfRangeException is thrown. If the Particle cannot be retrieved, returns None.

abstract push(particle_obj)[source]

This function pushes a Particle (as object or via its accessor) to the end of a collection (‘end’ definition depends on the specific collection itself). For collections with an inherent indexing order (e.g. ordered lists, sets, trees), the function just includes the object at its pre-defined position (i.e. not necessarily at the end). For such collections, the function mapping equates to:

int32 push(particle_obj) -> add_single(particle_obj); return -1;

This function further returns the index at which the Particle has been inserted. By definition, the index is positive, thus a return of ‘-1’ indicates push failure, NOT the last position in the collection. Furthermore, collections that do not work in an index-preserving manner also return ‘-1’.
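For a random-access, list-backed collection, the append/push contract above might look like the following sketch (a hypothetical minimal class, not a Parcels implementation):

```python
class ListCollection:
    """Minimal index-addressable collection illustrating the contract:
    append adds without returning an index, push returns the insert index
    (with -1 reserved for push failure or non-index-preserving collections)."""

    def __init__(self):
        self._data = []

    def add_single(self, particle_obj):
        # For a list-backed collection, 'end' is simply the list tail
        self._data.append(particle_obj)
        return len(self._data) - 1

    def append(self, particle_obj):
        self.add_single(particle_obj)          # no index returned

    def push(self, particle_obj):
        return self.add_single(particle_obj)   # index of the inserted object

c = ListCollection()
c.append("p0")
idx = c.push("p1")
# idx == 1: "p1" was inserted at position 1
```

An ordered collection (e.g. a tree keyed on particle ID) would instead place the object at its sort position and return -1 from push, since no stable index exists.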

remove(other)[source]

This is a generic super-method to remove one- or multiple Particles (via their object, their ParticleAccessor, their ID or their index) from the collection. Ideally, it just discerns between the types of the ‘other’ parameter, and then forwards the call to the related specific function.

Comment/Annotation: Functions for removing multiple objects are more specialised than just a for-each loop of single-item removal, because certain data structures can remove multiple objects faster with specialised function than making a roundtrip per-item check-and-remove operation. Because of the sheer size of those containers and the resulting performance demands, we need to make use of those specialised ‘remove’ functions, where available.

abstract remove_collection(pcollection)[source]

This function removes particles from this collection that are themselves stored in a ParticleCollection, which is differently structured than this one. That means the removal first requires the removal-collection to be re-formatted into an intermediary format before executing the removal. That said, this method should still be at least as efficient as a removal via common Python collections (i.e. lists, dicts, numpy’s nD arrays & dense arrays). Despite this, due to the reformatting, in some cases it may be more efficient to remove items by IDs or indices instead.

abstract remove_multi_by_IDs(ids)[source]

This function removes particles from this collection based on their IDs. For collections where this removal strategy would require a collection transformation or by-ID parsing, it is advisable to rather apply a removal-by-objects or removal-by-indices scheme.

abstract remove_multi_by_PyCollection_Particles(pycollectionp)[source]

This function removes particles from this collection, which are themselves in common Python collections, such as lists, dicts and numpy structures. In order to perform the removal, we can either directly remove the referred Particle instances (for internally-ordered collections, e.g. ordered lists, sets, trees) or we may need to parse each instance for its index (for random-access structures), which results in a considerable performance malus.

For collections where removal-by-object incurs a performance malus, it is advisable to multi-remove particles by indices or IDs.

abstract remove_multi_by_indices(indices)[source]

This function removes particles from this collection based on their indices. This works best for random-access collections (e.g. numpy’s ndarrays, dense matrices and dense arrays), whereas internally ordered collections shall rather use a removal-via-object-reference strategy.

abstract remove_same(same_class)[source]

This function removes particles from this collection that are themselves stored in another object of an equi-structured ParticleCollection. As the structures of both collections are the same, a more efficient M-in-N removal can be applied without an in-between reformatting.

abstract remove_single_by_ID(id)[source]

This function removes a (particle) object from the collection based on the object’s ID. For some collections, this operation may involve a parsing of the whole list and translation of the object’s ID into an index or an object reference in the collection in order to perform the removal - which results in a significant performance malus. In cases where a removal-by-ID would result in a performance malus, it is highly advisable to use a different removal function, e.g. remove-by-object or remove-by-index.

abstract remove_single_by_index(index)[source]

This function removes a (particle) object from the collection based on its index within the collection. For collections that are not based on random access (e.g. ordered lists, sets, trees), this function involves a translation of the index into the specific object reference in the collection - or (if unavoidable) the translation of the collection from a non-indexable, non-random-access structure into an indexable structure, and then performing the removal. In cases where a removal-by-index would result in a performance malus, it is highly advisable to use a different removal function, e.g. remove-by-object or remove-by-ID.

abstract remove_single_by_object(particle_obj)[source]

This function removes a (particle) object from the collection based on its actual object. For collections that are random-access and based on indices (e.g. unordered lists, vectors, arrays and dense matrices), this function would involve a parsing of the whole list and translation of the object into an index in the collection to perform the removal - which results in a significant performance malus. In cases where a removal-by-object would result in a performance malus, it is highly advisable to use a different removal function, e.g. remove-by-index or remove-by-ID.

reverse_iterator()[source]

This function is an explicit object-return of a backward-iterator over this collection. Whether this iterator is persistent or is re-created upon each call depends on the specific implementation of the ‘__reversed__’ function.

This function is an explicit forward to the Collection::__reversed__() member function.

abstract split(indices=None)[source]

This function splits this collection into two distinct equi-structured collections. The reason for this can, for example, be that the set exceeds a pre-defined maximum number of elements, which for performance reasons mandates a split.

On the other hand, this function can also internally split individual particles that are tagged by status to be ‘split’ (see the particle status for information on that).

In order to distinguish both use cases, we can evaluate the ‘indices’ parameter. In cases where this is ‘None’, the split operation semantically refers to an internal split of individual particles - otherwise, it performs a collection-split.

Comment: the function can be simplified later by pre-evaluating the function parameter and then referencing the individual, specific functions for element- or collection-split.

The function shall return the newly created or extended Particle collection, i.e. either the collection that results from a collection split or this very collection, containing the newly-split particles.

abstract toArray()[source]

This function converts (or: transforms; reformats; translates) this collection into an array-like structure (e.g. Python list or numpy nD array) that can be addressed by index. In the common case of ‘no ID recovery’, the global ID and the index match exactly.

While this function may be very convenient for many users, it is STRONGLY DISADVISED to use it too often, as the performance and memory overhead may exceed any speed-up one could get from optimised data structures - in fact, for large collections with an implicit-order structure (i.e. ordered lists, sets, trees, etc.), this may be ‘the most costly’ function in any kind of simulation.

It can be - though - useful at the final stage of a simulation to dump the results to disk.

class parcels.particlesets.collections.ParticleCollection[source]

Bases: parcels.particlesets.collections.Collection

abstract cstruct()[source]

‘cstruct’ returns the ctypes mapping of the particle data. This depends on the specific structure in question.

property data

‘data’ is a reference to the actual barebone-storage of the particle data, and thus depends directly on the specific collection in question.

property lonlatdepth_dtype

‘lonlatdepth_dtype’ stores the numeric data type that is used to represent the lon, lat and depth of a particle. This can be either ‘float32’ (default) or ‘float64’

property particle_data

‘particle_data’ is a reference to the actual barebone-storage of the particle data, and thus depends directly on the specific collection in question. This property is just available for convenience and backward-compatibility, and returns the same as ‘data’.

property pclass

‘pclass’ stores the actual class type of the particles allocated and managed in this collection

property ptype

‘ptype’ returns an instance of the particular type of class ‘ParticleType’ of the particle class of the particles in this collection.

basically: ptype -> pclass().getPType()

property pu_centers

The ‘pu_centers’ property is an array of 2D/3D vectors storing the center of each cluster-of-particles partition that is handled by the respective PU. Storing the centers allows us to run the initial kMeans segmentation only once and then, on later particle additions, just (i) make a closest-distance calculation, (ii) attach the new particle to the closest cluster and (iii) update that cluster’s center. The last part may at some point require merging overlapping clusters and then splitting them again into equi-sized partitions.
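The incremental assignment described above can be sketched as follows; this is an illustrative stand-in (the function and variable names are hypothetical, not the Parcels internals), showing steps (i)-(iii) with a running-mean center update:

```python
import numpy as np

def assign_to_cluster(pu_centers, pu_counts, new_lonlat):
    """Attach a new particle to the nearest cluster center and update
    that center as a running mean (illustrative sketch, not Parcels API)."""
    # (i) closest-distance calculation against all PU cluster centers
    dists = np.linalg.norm(pu_centers - new_lonlat, axis=1)
    pu = int(np.argmin(dists))
    # (ii) attach the new particle to that cluster,
    # (iii) update the cluster center as an incremental mean
    pu_counts[pu] += 1
    pu_centers[pu] += (new_lonlat - pu_centers[pu]) / pu_counts[pu]
    return pu

centers = np.array([[0.0, 0.0], [10.0, 10.0]])
counts = np.array([4, 4])
pu = assign_to_cluster(centers, counts, np.array([9.0, 9.0]))
```

This avoids re-running the full kMeans segmentation on every particle addition, at the cost of centers slowly drifting from the true cluster means.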

property pu_indicators

The ‘pu_indicators’ property is an [array or dictionary] of indicators, where each entry tells, per item (i.e. particle) in the collection, the processing unit (PU) to which it belongs in a parallelised setup.

abstract set_variable_write_status(var, write_status)[source]

Method to set the write status of a Variable

Parameters
  • var – Name of the variable (string)

  • write_status – Write status of the variable (True, False or ‘once’)

This function depends on the specific collection in question and thus needs to be specified in specific derivative classes.

abstract toDictionary()[source]

Convert all Particle data from one time step to a python dictionary.

Parameters
  • pfile – ParticleFile object requesting the conversion

  • time – Time at which to write ParticleSet

  • deleted_only – Flag to write only the deleted Particles

Returns two dictionaries: one for all variables to be written each outputdt, and one for all variables to be written once.

This function depends on the specific collection in question and thus needs to be specified in specific derivative classes.

parcels.particlesets.collectionsoa module

class parcels.particlesets.collectionsoa.ParticleAccessorSOA(pcoll, index)[source]

Bases: parcels.particlesets.iterators.BaseParticleAccessor

Wrapper that provides access to particle data in the collection, as if interacting with the particle itself.

Parameters
  • pcoll – ParticleCollection that the represented particle belongs to.

  • index – The index at which the data for the represented particle is stored in the corresponding data arrays of the ParticleCollecion.

class parcels.particlesets.collectionsoa.ParticleCollectionIteratorSOA(pcoll, reverse=False, subset=None)[source]

Bases: parcels.particlesets.iterators.BaseParticleCollectionIterator

Iterator for looping over the particles in the ParticleCollection.

Parameters
  • pcoll – ParticleCollection that stores the particles.

  • reverse – Flag to indicate reverse iteration (i.e. starting at the largest index, instead of the smallest).

  • subset – Subset of indices to iterate over, this allows the creation of an iterator that represents part of the collection.

property current

Returns a ParticleAccessor for the particle that the iteration is currently at.
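The iteration semantics described above (forward/reverse order and iteration over an index subset) can be sketched over a plain list; this is a minimal stand-in, not the actual Parcels iterator class:

```python
class TinyCollectionIterator:
    """Illustrative sketch of reverse and subset iteration over a
    collection's indices (not the Parcels ParticleCollectionIteratorSOA)."""

    def __init__(self, data, reverse=False, subset=None):
        self._data = data
        # iterate over an explicit subset of indices, or the whole range
        indices = subset if subset is not None else range(len(data))
        self._indices = list(reversed(indices)) if reverse else list(indices)
        self._pos = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self._pos >= len(self._indices):
            raise StopIteration
        value = self._data[self._indices[self._pos]]  # the 'current' item
        self._pos += 1
        return value

# reverse iteration over the index subset [0, 2] visits index 2, then 0
vals = list(TinyCollectionIterator(['a', 'b', 'c', 'd'], reverse=True, subset=[0, 2]))
```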

class parcels.particlesets.collectionsoa.ParticleCollectionSOA(pclass, lon, lat, depth, time, lonlatdepth_dtype, pid_orig, partitions=None, ngrid=1, **kwargs)[source]

Bases: parcels.particlesets.collections.ParticleCollection

add_collection(pcollection)[source]

Adds another, differently structured ParticleCollection to this collection. This is done by, for example, appending/adding the items of the other collection to this collection.

add_same(same_class)[source]

Adds another, equi-structured ParticleCollection to this collection. This is done by concatenating both collections. The fact that they are of the same ParticleCollection’s derivative simplifies parsing and concatenation.

add_single(particle_obj)[source]

Adding a single Particle to the collection - either as a ‘Particle’ object in parcels itself, or via its ParticleAccessor.

append(particle_obj)[source]

This function appends a Particle (as object or via its accessor) to the end of a collection (‘end’ definition depends on the specific collection itself). For collections with an inherent indexing order (e.g. ordered lists, sets, trees), the function just includes the object at its pre-defined position (i.e. not necessarily at the end). For those collections, the function mapping equates to:

append(particle_obj) -> add_single(particle_obj)

The function - in contrast to ‘push’ - does not return the index of the inserted object.

clear()[source]

This function physically removes all elements of the collection, yielding an empty collection as result of the operation.

cstruct()[source]

‘cstruct’ returns the ctypes mapping of the particle data. This depends on the specific structure in question.

delete_by_ID(id)[source]

This method deletes a particle from the collection based on its ID. It does not return the deleted item. Semantically, the function appears similar to the ‘remove’ operation. That said, the function in OceanParcels - instead of directly deleting the particle - just raises the ‘deleted’ status flag for the indexed particle. As a result, the particle still remains in the collection. The functional interpretation of the ‘deleted’ status is handled by the ‘recovery’ dictionary during simulation execution.

delete_by_index(index)[source]

This method deletes a particle from the collection based on its index. It does not return the deleted item. Semantically, the function appears similar to the ‘remove’ operation. That said, the function in OceanParcels - instead of directly deleting the particle - just raises the ‘deleted’ status flag for the indexed particle. As a result, the particle still remains in the collection. The functional interpretation of the ‘deleted’ status is handled by the ‘recovery’ dictionary during simulation execution.
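The flag-based deletion described above can be sketched with a per-particle status array; the status code and names below are hypothetical stand-ins, not the actual Parcels internals:

```python
import numpy as np

OP_DELETE = 4  # hypothetical status code for 'deleted'

# per-particle status array for a collection of 5 particles, all active
state = np.zeros(5, dtype=np.int32)

def delete_by_index(state, index):
    """Raise the 'deleted' flag; the particle is NOT physically removed."""
    state[index] = OP_DELETE

delete_by_index(state, 2)
still_present = len(state) == 5  # collection size is unchanged
```

The actual removal then happens later, when the simulation loop interprets the ‘deleted’ status via the recovery dictionary.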

get_collection(pcollection)[source]

This function gets particles from this collection that are themselves stored in a ParticleCollection which is structured differently from this one. That means the other collection has to be re-formatted first into an intermediary format.

get_multi_by_IDs(ids)[source]

This function gets particles from this collection based on their IDs. For collections where this retrieval strategy would require a collection transformation or by-ID parsing, it is advisable to rather apply a get-by-objects or get-by-indices scheme.

Note that this implementation assumes that IDs of particles are strictly increasing with increasing index. So a particle with a larger index will always have a larger ID as well. The assumption often holds for this data structure, as new particles always get a larger ID than any existing particle (IDs are not recycled) and their data are appended at the end of the list (largest index). This allows for the use of binary search in the look-up. The collection maintains a sorted flag to indicate whether this assumption holds.
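The look-up described above can be sketched with numpy: binary search (via `np.searchsorted`) when the ID array is known to be sorted, linear search otherwise. This is an illustrative sketch, not the Parcels implementation:

```python
import numpy as np

def indices_for_ids(id_array, ids, sorted_flag=True):
    """Map particle IDs to indices: binary search when 'id_array' is
    strictly increasing, linear scan otherwise (illustrative sketch)."""
    if sorted_flag:
        # np.searchsorted performs a vectorised binary search
        return np.searchsorted(id_array, ids)
    # fallback: linear search per requested ID
    return np.array([int(np.where(id_array == i)[0][0]) for i in ids])

id_array = np.array([3, 7, 9, 12, 20])  # strictly increasing particle IDs
idx = indices_for_ids(id_array, [7, 20])
```

Binary search makes the multi-ID look-up O(m log n) instead of O(m·n) for m requested IDs over n particles.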

get_multi_by_PyCollection_Particles(pycollectionp)[source]

This function gets particles from this collection, which are themselves in common Python collections, such as lists, dicts and numpy structures. We can either directly get the referred Particle instances (for internally-ordered collections, e.g. ordered lists, sets, trees) or we may need to parse each instance for its index (for random-access structures), which results in a considerable performance malus.

For collections where get-by-object incurs a performance malus, it is advisable to multi-get particles by indices or IDs.

get_multi_by_indices(indices)[source]

This function gets particles from this collection based on their indices. This works best for random-access collections (e.g. numpy’s ndarrays, dense matrices and dense arrays), whereas internally ordered collections shall rather use a get-via-object-reference strategy.

get_same(same_class)[source]

This function gets particles from this collection that are themselves stored in another object of an equi-structured ParticleCollection.

get_single_by_ID(id)[source]

This function gets a (particle) object from the collection based on the object’s ID. For some collections, this operation may involve a parsing of the whole list and translation of the object’s ID into an index or an object reference in the collection - which results in a significant performance malus. In cases where a get-by-ID would result in a performance malus, it is highly-advisable to use a different get function, e.g. get-by-index.

This function uses binary search if we know the ID list to be sorted, and linear search otherwise. We assume IDs are unique.

get_single_by_index(index)[source]

This function gets a (particle) object from the collection based on its index within the collection. For collections that are not based on random access (e.g. ordered lists, sets, trees), this function involves a translation of the index into the specific object reference in the collection - or (if unavoidable) the translation of the collection from a non-indexable, non-random-access structure into an indexable structure. In cases where a get-by-index would result in a performance malus, it is highly advisable to use a different get function, e.g. get-by-ID.

get_single_by_object(particle_obj)[source]

This function gets a (particle) object from the collection based on its actual object. For collections that are random-access and based on indices (e.g. unordered list, vectors, arrays and dense matrices), this function would involve a parsing of the whole list and translation of the object into an index in the collection - which results in a significant performance malus. In cases where a get-by-object would result in a performance malus, it is highly-advisable to use a different get function, e.g. get-by-index or get-by-ID.

In this specific implementation, we cannot look for the object directly, so we will look for one of its properties (the ID) that has the nice property of being stored in an ordered list (if the collection is sorted)

insert(obj, index=None)[source]

This function allows one to ‘insert’ a Particle (as object or via its accessor) into this collection. This method needs to be specified for each collection individually. Some collections (e.g. unordered lists) allow defining the index where the object is to be inserted. Some collections can optionally insert an object at a specific position - at a significant speed and memory cost (e.g. vectors, arrays, dense matrices). Some collections manage a specified indexing order internally (e.g. ordered lists, sets, trees) and thus have no use for an ‘index’ parameter. For those collections with an internally-enforced order, the function mapping equates to:

insert(obj) -> add_single(obj)

merge(same_class=None)[source]

This function merges two strictly equi-structured ParticleCollections into one. This can, for example, be quite handy for merging two particle subsets that - due to continuous removal - have become too small to be effective.

On the other hand, this function can also internally merge individual particles that are tagged by status as being ‘merged’ (see the particle status for information on that).

In order to distinguish both use cases, we can evaluate the ‘same_class’ parameter. In cases where this is ‘None’, the merge operation semantically refers to an internal merge of individual particles - otherwise, it performs a 2-collection merge.

Comment: the function can be simplified later by pre-evaluating the function parameter and then reference the individual, specific functions for internal- or external merge.

The function shall return the merged ParticleCollection.

pop_multi_by_IDs(ids)[source]

Searches for Particles with the IDs registered in ‘ids’, removes the Particles from the Collection and returns the Particles (or: their ParticleAccessors). If Particles cannot be retrieved (e.g. because the IDs are not available), returns None.

pop_multi_by_indices(indices)[source]

Searches for Particles with the indices registered in ‘indices’, removes those Particles from the Collection and returns them (or: their ParticleAccessors). If ‘indices’ is None, the Particles cannot be retrieved: an assertion error is raised and None is returned. If an individual index is None, the last item (index -1) is returned; if an index is < 0, it counts from the ‘end’ of the collection. If an index in ‘indices’ is out of bounds, an OutOfRangeException is thrown. If Particles cannot be retrieved, returns None.

pop_single_by_ID(id)[source]

Searches for Particle with ID ‘id’, removes that Particle from the Collection and returns that Particle (or: ParticleAccessor). If Particle cannot be retrieved (e.g. because the ID is not available), returns None.

pop_single_by_index(index)[source]

Searches for the Particle at index ‘index’, removes that Particle from the Collection and returns it (or: its ParticleAccessor). If index is None, the last item (index -1) is returned; if index < 0, it counts from the ‘end’ of the collection. If index is out of bounds, an OutOfRangeException is thrown. If the Particle cannot be retrieved, returns None.
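The pop contract described above maps naturally onto Python list semantics; a minimal sketch over a plain list (using IndexError as a stand-in for the OutOfRangeException mentioned above):

```python
def pop_single_by_index(collection, index):
    """Sketch of the pop semantics: index=None pops the last item,
    negative indices count from the end, out-of-bounds raises IndexError
    (illustrative stand-in, not the Parcels method itself)."""
    if index is None:
        index = -1                    # default: the last item
    return collection.pop(index)      # removes and returns the item

items = ['p0', 'p1', 'p2']
last = pop_single_by_index(items, None)   # pops 'p2'; items shrinks to 2
```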

push(particle_obj)[source]

This function pushes a Particle (as object or via its accessor) to the end of a collection (‘end’ definition depends on the specific collection itself). For collections with an inherent indexing order (e.g. ordered lists, sets, trees), the function just includes the object at its pre-defined position (i.e. not necessarily at the end). For those collections, the function mapping equates to:

int32 push(particle_obj) -> add_single(particle_obj); return -1;

This function further returns the index at which the Particle has been inserted. By definition, the index is positive; thus a return of ‘-1’ indicates push failure, NOT the last position in the collection. Furthermore, collections that do not work in an index-preserving manner also return ‘-1’.
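For an index-preserving, array-backed collection, the push contract above reduces to an append plus index return; a minimal sketch over a plain list:

```python
def push(collection, particle_obj):
    """Sketch of the push contract described above: append the object and
    return its index; a non-index-preserving structure would return -1
    instead (illustrative stand-in, not the Parcels method itself)."""
    collection.append(particle_obj)
    return len(collection) - 1       # index of the newly inserted object

coll = ['p0', 'p1']
idx = push(coll, 'p2')               # 'p2' lands at index 2
```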

remove_collection(pcollection)[source]

This function removes particles from this collection that are themselves stored in a ParticleCollection which is structured differently from this one. That means the removal first requires the removal-collection to be re-formatted into an intermediary format before executing the removal. That said, this method should still be at least as efficient as a removal via common Python collections (i.e. lists, dicts, numpy’s nD arrays & dense arrays). Despite this, due to the reformatting, in some cases it may be more efficient to remove items by IDs or indices instead.

remove_multi_by_IDs(ids)[source]

This function removes particles from this collection based on their IDs. For collections where this removal strategy would require a collection transformation or by-ID parsing, it is advisable to rather apply a removal-by-objects or removal-by-indices scheme.

remove_multi_by_PyCollection_Particles(pycollectionp)[source]

This function removes particles from this collection, which are themselves in common Python collections, such as lists, dicts and numpy structures. In order to perform the removal, we can either directly remove the referred Particle instances (for internally-ordered collections, e.g. ordered lists, sets, trees) or we may need to parse each instance for its index (for random-access structures), which results in a considerable performance malus.

For collections where removal-by-object incurs a performance malus, it is advisable to multi-remove particles by indices or IDs.

remove_multi_by_indices(indices)[source]

This function removes particles from this collection based on their indices. This works best for random-access collections (e.g. numpy’s ndarrays, dense matrices and dense arrays), whereas internally ordered collections shall rather use a removal-via-object-reference strategy.

remove_same(same_class)[source]

This function removes particles from this collection that are themselves stored in another object of an equi-structured ParticleCollection. As the structures of both collections are the same, a more efficient M-in-N removal can be applied without an in-between reformatting.

remove_single_by_ID(id)[source]

This function removes a (particle) object from the collection based on the object’s ID. For some collections, this operation may involve a parsing of the whole list and translation of the object’s ID into an index or an object reference in the collection in order to perform the removal - which results in a significant performance malus. In cases where a removal-by-ID would result in a performance malus, it is highly advisable to use a different removal function, e.g. remove-by-object or remove-by-index.

remove_single_by_index(index)[source]

This function removes a (particle) object from the collection based on its index within the collection. For collections that are not based on random access (e.g. ordered lists, sets, trees), this function involves a translation of the index into the specific object reference in the collection - or (if unavoidable) the translation of the collection from a non-indexable, non-random-access structure into an indexable structure, and then performing the removal. In cases where a removal-by-index would result in a performance malus, it is highly advisable to use a different removal function, e.g. remove-by-object or remove-by-ID.

remove_single_by_object(particle_obj)[source]

This function removes a (particle) object from the collection based on its actual object. For collections that are random-access and based on indices (e.g. unordered lists, vectors, arrays and dense matrices), this function would involve a parsing of the whole list and translation of the object into an index in the collection to perform the removal - which results in a significant performance malus. In cases where a removal-by-object would result in a performance malus, it is highly advisable to use a different removal function, e.g. remove-by-index or remove-by-ID.

set_variable_write_status(var, write_status)[source]

Method to set the write status of a Variable

Parameters
  • var – Name of the variable (string)

  • write_status – Write status of the variable (True, False or ‘once’)

split(indices=None)[source]

This function splits this collection into two distinct equi-structured collections. The reason for this can, for example, be that the set exceeds a pre-defined maximum number of elements, which for performance reasons mandates a split.

On the other hand, this function can also internally split individual particles that are tagged by status to be ‘split’ (see the particle status for information on that).

In order to distinguish both use cases, we can evaluate the ‘indices’ parameter. In cases where this is ‘None’, the split operation semantically refers to an internal split of individual particles - otherwise, it performs a collection-split.

Comment: the function can be simplified later by pre-evaluating the function parameter and then referencing the individual, specific functions for element- or collection-split.

The function shall return the newly created or extended Particle collection, i.e. either the collection that results from a collection split or this very collection, containing the newly-split particles.
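The dual-use dispatch on the ‘indices’ parameter described above can be sketched over plain lists; the per-particle split branch is left as a placeholder, and all names are illustrative stand-ins:

```python
def split(collection, indices=None):
    """Sketch of the dual-use split: 'indices=None' would trigger the
    internal, per-particle split; otherwise the given indices are moved
    into a new equi-structured collection (lists as stand-ins)."""
    if indices is None:
        # placeholder for the internal split of 'split'-flagged particles
        return collection
    new_coll = [collection[i] for i in indices]      # collection-split
    for i in sorted(indices, reverse=True):          # delete back-to-front
        del collection[i]
    return new_coll

coll = ['p0', 'p1', 'p2', 'p3']
new = split(coll, indices=[1, 3])   # moves 'p1' and 'p3' to a new collection
```

Deleting the donated indices back-to-front keeps the remaining indices valid during removal.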

toArray()[source]

This function converts (or: transforms; reformats; translates) this collection into an array-like structure (e.g. Python list or numpy nD array) that can be addressed by index. In the common case of ‘no ID recovery’, the global ID and the index match exactly.

While this function may be very convenient for many users, it is STRONGLY DISADVISED to use it too often, as the performance and memory overhead may exceed any speed-up one could get from optimised data structures - in fact, for large collections with an implicit-order structure (i.e. ordered lists, sets, trees, etc.), this may be ‘the most costly’ function in any kind of simulation.

It can be - though - useful at the final stage of a simulation to dump the results to disk.
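For a structure-of-arrays collection, this conversion amounts to flattening parallel per-variable arrays into one index-addressable record per particle; a minimal sketch (not the Parcels internals), where ID and index coincide as in the ‘no ID recovery’ case above:

```python
import numpy as np

# structure-of-arrays storage: one array per particle variable
data = {'id': np.array([0, 1, 2]),
        'lon': np.array([0.0, 1.5, 3.0]),
        'lat': np.array([10.0, 10.5, 11.0])}

def to_array(data):
    """Convert struct-of-arrays storage into a list of per-particle
    records, addressable by index (illustrative sketch)."""
    n = len(data['id'])
    return [{k: v[i] for k, v in data.items()} for i in range(n)]

particles = to_array(data)
```

Note how this materialises one record per particle, which is exactly the memory overhead the warning above refers to, and why it is best reserved for a final dump to disk.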

toDictionary(pfile, time, deleted_only=False)[source]

Convert all Particle data from one time step to a python dictionary.

Parameters
  • pfile – ParticleFile object requesting the conversion

  • time – Time at which to write ParticleSet

  • deleted_only – Flag to write only the deleted Particles

Returns two dictionaries: one for all variables to be written each outputdt, and one for all variables to be written once.

This function depends on the specific collection in question and thus needs to be specified in specific derivative classes.

parcels.particlesets.iterators module

class parcels.particlesets.iterators.BaseParticleAccessor(pcoll)[source]

Bases: abc.ABC

Interface for the ParticleAccessor. Implements a wrapper around particles to provide easy access.

delete()[source]

Signal the underlying particle for deletion.

set_state(state)[source]

Syntactic sugar for changing the state of the underlying particle.

class parcels.particlesets.iterators.BaseParticleCollectionIterator[source]

Bases: abc.ABC

Interface for the ParticleCollection iterator. Provides the ability to iterate over the particles in the ParticleCollection.

property current

Returns a ParticleAccessor for the particle that the iteration is currently at.

property head

Returns a ParticleAccessor for the first particle in the ParticleSet.

property tail

Returns a ParticleAccessor for the last particle in the ParticleSet.

parcels.tools.statuscodes module

Collection of pre-built recovery kernels

exception parcels.tools.statuscodes.FieldOutOfBoundError(x, y, z, field=None)[source]

Bases: RuntimeError

Utility error class to propagate out-of-bound field sampling in Scipy mode

exception parcels.tools.statuscodes.FieldSamplingError(x, y, z, field=None)[source]

Bases: RuntimeError

Utility error class to propagate erroneous field sampling in Scipy mode

exception parcels.tools.statuscodes.KernelError(particle, fieldset=None, msg=None)[source]

Bases: RuntimeError

General particle kernel error with optional custom message

exception parcels.tools.statuscodes.OutOfBoundsError(particle, fieldset=None, lon=None, lat=None, depth=None)[source]

Bases: parcels.tools.statuscodes.KernelError

Particle kernel error for out-of-bounds field sampling

exception parcels.tools.statuscodes.OutOfTimeError(particle, fieldset)[source]

Bases: parcels.tools.statuscodes.KernelError

Particle kernel error for time extrapolation field sampling

exception parcels.tools.statuscodes.ThroughSurfaceError(particle, fieldset=None, lon=None, lat=None, depth=None)[source]

Bases: parcels.tools.statuscodes.KernelError

Particle kernel error for field sampling at surface

exception parcels.tools.statuscodes.TimeExtrapolationError(time, field=None, msg='allow_time_extrapoltion')[source]

Bases: RuntimeError

Utility error class to propagate erroneous time extrapolation sampling in Scipy mode

parcels.tools.converters module

class parcels.tools.converters.Geographic[source]

Bases: parcels.tools.converters.UnitConverter

Unit converter from geometric to geographic coordinates (m to degree)

class parcels.tools.converters.GeographicPolar[source]

Bases: parcels.tools.converters.UnitConverter

Unit converter from geometric to geographic coordinates (m to degree) with a correction to account for narrower grid cells closer to the poles.

class parcels.tools.converters.GeographicPolarSquare[source]

Bases: parcels.tools.converters.UnitConverter

Square distance converter from geometric to geographic coordinates (m2 to degree2) with a correction to account for narrower grid cells closer to the poles.

class parcels.tools.converters.GeographicSquare[source]

Bases: parcels.tools.converters.UnitConverter

Square distance converter from geometric to geographic coordinates (m2 to degree2)

class parcels.tools.converters.TimeConverter(time_origin=0)[source]

Bases: object

Converter class for dates with different calendars in FieldSets

Param

time_origin: time origin of the class. Currently supported formats are float, integer, numpy.datetime64, and netcdftime.DatetimeNoLeap

fulltime(time)[source]

Method to convert a time difference in seconds to a date, based on the time_origin

Param

time: input time

Returns

self.time_origin + time

reltime(time)[source]

Method to compute the difference, in seconds, between a time and the time_origin of the TimeConverter

Param

time: input time

Returns

time - self.time_origin
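With a numpy.datetime64 origin, the reltime/fulltime contract above reduces to datetime arithmetic; a minimal sketch (illustrative, not the Parcels implementation itself):

```python
import numpy as np

# time origin, as supported by TimeConverter (here: numpy.datetime64)
origin = np.datetime64('2000-01-01T00:00:00')

def reltime(time):
    """time - time_origin, expressed in seconds."""
    return (time - origin) / np.timedelta64(1, 's')

def fulltime(seconds):
    """time_origin + time, back to an absolute date."""
    return origin + np.timedelta64(int(seconds), 's')

secs = reltime(np.datetime64('2000-01-01T01:00:00'))  # one hour after origin
```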

class parcels.tools.converters.UnitConverter[source]

Bases: object

Interface class for spatial unit conversion during field sampling that performs no conversion.
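The converter family above can be sketched as follows. The conversion factor assumes 1 degree ≈ 60 nautical miles of 1852 m each, and GeographicPolar additionally corrects for grid cells narrowing towards the poles by dividing by the cosine of latitude; these factors and the `to_target` signature are assumptions for illustration (see the Parcels source for the authoritative implementation):

```python
import math

class UnitConverter:
    """Interface: performs no conversion."""
    def to_target(self, value, x, y, z):
        return value

class Geographic(UnitConverter):
    """Geometric to geographic: metres -> degrees."""
    def to_target(self, value, x, y, z):
        return value / 1852.0 / 60.0

class GeographicPolar(Geographic):
    """As Geographic, with a polar correction; 'y' is latitude in degrees."""
    def to_target(self, value, x, y, z):
        deg = super().to_target(value, x, y, z)
        return deg / math.cos(math.radians(y))

deg = Geographic().to_target(111120.0, 0.0, 0.0, 0.0)  # ~1 degree
```

The squared variants (GeographicSquare, GeographicPolarSquare) apply the squares of these factors, for quantities in m2.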

parcels.tools.converters.convert_xarray_time_units(ds, time)[source]

Fixes DataArrays that have time.Unit instead of expected time.units

parcels.tools.loggers module

Script to create a logger for Parcels

parcels.plotting module

parcels.plotting.cartopy_colorbar(cs, plt, fig, ax)[source]
parcels.plotting.create_parcelsfig_axis(spherical, land=True, projection=None, central_longitude=0, cartopy_features=[])[source]
parcels.plotting.parsedomain(domain, field)[source]
parcels.plotting.parsetimestr(time_origin, show_time)[source]
parcels.plotting.plotfield(field, show_time=None, domain=None, depth_level=0, projection=None, land=True, vmin=None, vmax=None, savefile=None, **kwargs)[source]

Function to plot a Parcels Field

Parameters
  • show_time – Time at which to show the Field

  • domain – dictionary (with keys ‘N’, ‘S’, ‘E’, ‘W’) defining domain to show

  • depth_level – depth level to be plotted (default 0)

  • projection – type of cartopy projection to use (default PlateCarree)

  • land – Boolean whether to show land. This is ignored for flat meshes

  • vmin – minimum colour scale (only in single-plot mode)

  • vmax – maximum colour scale (only in single-plot mode)

  • savefile – Name of a file to save the plot to

  • animation – Boolean whether result is a single plot, or an animation

parcels.plotting.plotparticles(particles, with_particles=True, show_time=None, field=None, domain=None, projection=None, land=True, vmin=None, vmax=None, savefile=None, animation=False, **kwargs)[source]

Function to plot a Parcels ParticleSet

Parameters
  • show_time – Time at which to show the ParticleSet

  • with_particles – Boolean whether particles are also plotted on Field

  • field – Field to plot under particles (either None, a Field object, or ‘vector’)

  • domain – dictionary (with keys ‘N’, ‘S’, ‘E’, ‘W’) defining domain to show

  • projection – type of cartopy projection to use (default PlateCarree)

  • land – Boolean whether to show land. This is ignored for flat meshes

  • vmin – minimum colour scale (only in single-plot mode)

  • vmax – maximum colour scale (only in single-plot mode)

  • savefile – Name of a file to save the plot to

  • animation – Boolean whether result is a single plot, or an animation

scripts.plottrajectoriesfile module

scripts.plottrajectoriesfile.plotTrajectoriesFile(filename, mode='2d', tracerfile=None, tracerfield='P', tracerlon='x', tracerlat='y', recordedvar=None, movie_forward=True, bins=20, show_plt=True, central_longitude=0)[source]

Quick and simple plotting of Parcels trajectories

Parameters
  • filename – Name of Parcels-generated NetCDF file with particle positions

  • mode – Type of plot to show. Supported are ‘2d’, ‘3d’, ‘hist2d’, ‘movie2d’ and ‘movie2d_notebook’. The latter two give animations, with ‘movie2d_notebook’ specifically designed for jupyter notebooks

  • tracerfile – Name of NetCDF file to show as background

  • tracerfield – Name of variable to show as background

  • tracerlon – Name of longitude dimension of variable to show as background

  • tracerlat – Name of latitude dimension of variable to show as background

  • recordedvar – Name of variable used to color particles in scatter-plot. Only works in ‘movie2d’ or ‘movie2d_notebook’ mode.

  • movie_forward – Boolean whether to show movie in forward or backward mode (default True)

  • bins – Number of bins to use in hist2d mode. See also https://matplotlib.org/api/_as_gen/matplotlib.pyplot.hist2d.html

  • show_plt – Boolean whether plot should directly be shown (for py.test)

  • central_longitude – Degrees East at which to center the plot

scripts.get_examples module

Get example scripts, notebooks, and data files.

scripts.get_examples.copy_data_and_examples_from_package_to(target_path)[source]

Copy example data from Parcels directory.

Return those paths of the list file_names that were not found in the package.

scripts.get_examples.download_files(source_url, file_names, target_path)[source]

Mirror file_names from source_url to target_path.

scripts.get_examples.main(target_path=None)[source]

Get example scripts, example notebooks, and example data.

Copy the examples from the package directory and get the example data either from the package directory or from the Parcels website.
