Parcels documentation

Welcome to the documentation of Parcels. This page provides detailed documentation for each method, class and function.

See http://www.oceanparcels.org for general information on the Parcels project, including how to install and use it.

parcels.particleset module

class parcels.particleset.ParticleSet(fieldset, pclass=<class 'parcels.particle.JITParticle'>, lon=None, lat=None, depth=None, time=None, repeatdt=None, **kwargs)[source]

Bases: object

Container class for storing particles and executing kernels over them.

Please note that this currently only supports fixed size particle sets.

Parameters:
  • fieldset – parcels.fieldset.FieldSet object from which to sample velocity
  • pclass – Optional parcels.particle.JITParticle or parcels.particle.ScipyParticle object that defines custom particle
  • lon – List of initial longitude values for particles
  • lat – List of initial latitude values for particles
  • depth – Optional list of initial depth values for particles. Default is 0m
  • time – Optional list of initial time values for particles. Default is fieldset.U.grid.time[0]
  • repeatdt – Optional interval (in seconds) on which to repeat the release of the ParticleSet

Other Variables can be initialised using further arguments (e.g. v=… for a Variable named ‘v’)
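
A minimal usage sketch (not part of the API listing above): build a small idealised FieldSet with FieldSet.from_data and release five particles on it. All grid values and particle positions are purely illustrative.

    import numpy as np
    from parcels import FieldSet, ParticleSet, JITParticle

    # idealised, time-independent flow: 1 m/s eastward everywhere on a 5x5 grid
    data = {'U': np.ones((5, 5), dtype=np.float32),
            'V': np.zeros((5, 5), dtype=np.float32)}
    dimensions = {'lon': np.linspace(-2., 2., 5), 'lat': np.linspace(48., 52., 5)}
    fieldset = FieldSet.from_data(data, dimensions, mesh='spherical')

    # five particles released along 50N; depth defaults to 0 m
    pset = ParticleSet(fieldset=fieldset, pclass=JITParticle,
                       lon=np.linspace(-1., 1., 5), lat=np.full(5, 50.))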

Kernel(pyfunc, c_include='')[source]

Wrapper method to convert a pyfunc into a parcels.kernel.Kernel object based on fieldset and ptype of the ParticleSet

ParticleFile(*args, **kwargs)[source]

Wrapper method to initialise a parcels.particlefile.ParticleFile object from the ParticleSet

add(particles)[source]

Method to add particles to the ParticleSet

density(field=None, particle_val=None, relative=False, area_scale=False)[source]

Method to calculate the density of particles in a ParticleSet from their locations, through a 2D histogram.

Parameters:
  • field – Optional parcels.field.Field object to calculate the histogram on. Default is fieldset.U
  • particle_val – Optional numpy array of values to weigh each particle with, or the string name of a particle variable to use as weights. Default is None, resulting in a weight of 1 for each particle
  • relative – Boolean to control whether the density is scaled by the total weight of all particles. Default is False
  • area_scale – Boolean to control whether the density is scaled by the area (in m^2) of each grid cell. Default is False
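
A short sketch of how density() might be called, assuming a ParticleSet pset like the one constructed in the example above:

    # 2D histogram of particle counts on the grid of fieldset.U, expressed as a
    # fraction of the total number of particles and normalised by cell area (m^2)
    dens = pset.density(relative=True, area_scale=True)
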
execute(pyfunc=<function AdvectionRK4>, endtime=None, runtime=None, dt=1.0, moviedt=None, recovery=None, output_file=None, movie_background_field=None, verbose_progress=None)[source]

Execute a given kernel function over the particle set for multiple timesteps. Optionally also provide sub-timestepping for particle output.

Parameters:
  • pyfunc – Kernel function to execute. This can be the name of a defined Python function or a parcels.kernel.Kernel object. Kernels can be concatenated using the + operator
  • endtime – End time for the timestepping loop. It is either a datetime object or a positive double.
  • runtime – Length of the timestepping loop. Use instead of endtime. It is either a timedelta object or a positive double.
  • dt – Timestep interval to be passed to the kernel. It is either a timedelta object or a double. Use a negative value for a backward-in-time simulation.
  • moviedt – Interval for inner sub-timestepping (leap), which dictates the update frequency of animation. It is either a timedelta object or a positive double. None value means no animation.
  • output_file – parcels.particlefile.ParticleFile object for particle output
  • recovery – Dictionary with additional parcels.tools.error recovery kernels to allow custom recovery behaviour in case of kernel errors.
  • movie_background_field – field plotted as background in the movie if moviedt is set. ‘vector’ shows the velocity as a vector field.
  • verbose_progress – Boolean for providing a progress bar for the kernel execution loop.
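
A usage sketch, assuming the fieldset and pset from the ParticleSet example above; the output filename and the DeleteParticle recovery kernel are illustrative:

    from datetime import timedelta
    from parcels import AdvectionRK4, ErrorCode

    def DeleteParticle(particle, fieldset, time, dt):
        # recovery kernel: simply remove particles that sample outside the domain
        particle.delete()

    output_file = pset.ParticleFile(name="advection_example.nc",
                                    outputdt=timedelta(hours=6))
    pset.execute(AdvectionRK4,                  # kernel (or sum of kernels) to run
                 runtime=timedelta(days=5),     # total length of the run
                 dt=timedelta(hours=1),         # integration timestep
                 output_file=output_file,
                 recovery={ErrorCode.ErrorOutOfBounds: DeleteParticle})
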
classmethod from_field(fieldset, pclass, start_field, size, mode='monte_carlo', depth=None, time=None, repeatdt=None)[source]

Initialise a ParticleSet with particle locations randomly drawn from the distribution given by a field

Parameters:
  • fieldset – parcels.fieldset.FieldSet object from which to sample velocity
  • pclass – parcels.particle.JITParticle or parcels.particle.ScipyParticle object that defines custom particle
  • start_field – Field for initialising particles stochastically according to the presented density field.
  • size – Initial size of particle set
  • mode – Type of random sampling. Currently only ‘monte_carlo’ is implemented
  • depth – Optional list of initial depth values for particles. Default is 0m
  • time – Optional start time value for particles. Default is fieldset.U.time[0]
  • repeatdt – Optional interval (in seconds) on which to repeat the release of the ParticleSet
classmethod from_line(fieldset, pclass, start, finish, size, depth=None, time=None, repeatdt=None)[source]

Initialise the ParticleSet from start/finish coordinates with equidistant spacing. Note that this method uses simple numpy.linspace calls and does not take into account great circles, so it may not be exact on a globe

Parameters:
  • fieldset – parcels.fieldset.FieldSet object from which to sample velocity
  • pclass – parcels.particle.JITParticle or parcels.particle.ScipyParticle object that defines custom particle
  • start – Starting point for initialisation of particles on a straight line.
  • finish – End point for initialisation of particles on a straight line.
  • size – Initial size of particle set
  • depth – Optional list of initial depth values for particles. Default is 0m
  • time – Optional start time value for particles. Default is fieldset.U.time[0]
  • repeatdt – Optional interval (in seconds) on which to repeat the release of the ParticleSet
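
A sketch, assuming a FieldSet fieldset as in the ParticleSet example above; the start and finish coordinates are (lon, lat) tuples and purely illustrative:

    from parcels import ParticleSet, JITParticle

    pset = ParticleSet.from_line(fieldset=fieldset, pclass=JITParticle,
                                 start=(-1.0, 49.0),   # (lon, lat) of the first particle
                                 finish=(1.0, 51.0),   # (lon, lat) of the last particle
                                 size=10)              # number of particles on the line
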
classmethod from_list(fieldset, pclass, lon, lat, depth=None, time=None, repeatdt=None, **kwargs)[source]

Initialise the ParticleSet from lists of lon and lat

Parameters:
  • fieldset – parcels.fieldset.FieldSet object from which to sample velocity
  • pclass – parcels.particle.JITParticle or parcels.particle.ScipyParticle object that defines custom particle
  • lon – List of initial longitude values for particles
  • lat – List of initial latitude values for particles
  • depth – Optional list of initial depth values for particles. Default is 0m
  • time – Optional list of start time values for particles. Default is fieldset.U.time[0]
  • repeatdt – Optional interval (in seconds) on which to repeat the release of the ParticleSet

Other Variables can be initialised using further arguments (e.g. v=… for a Variable named ‘v’)

remove(indices)[source]

Method to remove particles from the ParticleSet, based on their indices

show(with_particles=True, show_time=None, field=None, domain=None, projection=None, land=True, vmin=None, vmax=None, savefile=None, animation=False)[source]

Method to ‘show’ a Parcels ParticleSet

Parameters:
  • with_particles – Boolean whether to show particles
  • show_time – Time at which to show the ParticleSet
  • field – Field to plot under particles (either None, a Field object, or ‘vector’)
  • domain – Four-vector (latN, latS, lonE, lonW) defining domain to show
  • projection – type of cartopy projection to use (default PlateCarree)
  • land – Boolean whether to show land. This is ignored for flat meshes
  • vmin – minimum colour scale (only in single-plot mode)
  • vmax – maximum colour scale (only in single-plot mode)
  • savefile – Name of a file to save the plot to
  • animation – Boolean whether result is a single plot, or an animation

parcels.fieldset module

class parcels.fieldset.FieldSet(U, V, fields=None)[source]

Bases: object

FieldSet class that holds hydrodynamic data needed to execute particles

Parameters:
  • U – parcels.field.Field object for the zonal velocity component
  • V – parcels.field.Field object for the meridional velocity component
  • fields – Optional dictionary of additional parcels.field.Field objects
add_constant(name, value)[source]

Add a constant to the FieldSet. Note that all constants are stored as 32-bit floats. While constants can be updated during execution in SciPy mode, they can not be updated in JIT mode.

Parameters:
  • name – Name of the constant
  • value – Value of the constant (stored as 32-bit float)
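
A sketch of the pattern this enables: a constant added here can be read inside kernels as fieldset.<name>. The constant name max_age and the Ageing kernel are illustrative, and the kernel assumes the particle class defines an age Variable.

    fieldset.add_constant('max_age', 30 * 86400.)   # 30 days, in seconds

    def Ageing(particle, fieldset, time, dt):
        # hypothetical kernel: accumulate particle age and delete old particles
        particle.age += dt
        if particle.age > fieldset.max_age:
            particle.delete()
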
add_field(field, name=None)[source]

Add a parcels.field.Field object to the FieldSet

Parameters:
  • field – parcels.field.Field object to be added
  • name – Optional name of the field; defaults to the name of the Field object
add_periodic_halo(zonal=False, meridional=False, halosize=5)[source]

Add a ‘halo’ to all parcels.field.Field objects in a FieldSet, through extending the Field (and lon/lat) by copying a small portion of the field on one side of the domain to the other.

Parameters:
  • zonal – Create a halo in zonal direction (boolean)
  • meridional – Create a halo in meridional direction (boolean)
  • halosize – size of the halo (in grid points). Default is 5 grid points
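
A sketch of a common zonally-periodic set-up, assuming a FieldSet fieldset: register the domain edges as constants, add the halo, and wrap particle longitudes in a small kernel. The constant names halo_west/halo_east follow the usual periodic-boundary recipe but are not required by the API.

    fieldset.add_constant('halo_west', fieldset.U.grid.lon[0])
    fieldset.add_constant('halo_east', fieldset.U.grid.lon[-1])
    fieldset.add_periodic_halo(zonal=True)          # extend all Fields in the zonal direction

    def periodicBC(particle, fieldset, time, dt):
        # wrap particle longitude back into the original domain
        if particle.lon < fieldset.halo_west:
            particle.lon += fieldset.halo_east - fieldset.halo_west
        elif particle.lon > fieldset.halo_east:
            particle.lon -= fieldset.halo_east - fieldset.halo_west
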
add_vector_field(vfield)[source]

Add a parcels.field.VectorField object to the FieldSet

Parameters:vfield – parcels.field.VectorField object to be added
advancetime(fieldset_new)[source]

Replace the oldest time on the FieldSet with a new FieldSet snapshot

Parameters:fieldset_new – FieldSet snapshot with which the oldest time has to be replaced

eval(x, y)[source]

Evaluate the zonal and meridional velocities (u,v) at a point (x,y)

Parameters:
  • x – zonal point to evaluate
  • y – meridional point to evaluate
Return u, v:

zonal and meridional velocities at point

fields

Returns a list of all the parcels.field.Field objects associated with this FieldSet

classmethod from_data(data, dimensions, transpose=False, mesh='spherical', allow_time_extrapolation=None, time_periodic=False, **kwargs)[source]

Initialise FieldSet object from raw data

Parameters:
  • data

    Dictionary mapping field names to numpy arrays. Note that at least 'U' and 'V' numpy arrays need to be given

    1. If data shape is [xdim, ydim], [xdim, ydim, zdim], [xdim, ydim, tdim] or [xdim, ydim, zdim, tdim], whichever is relevant for the dataset, use the flag transpose=True
    2. If data shape is [ydim, xdim], [zdim, ydim, xdim], [tdim, ydim, xdim] or [tdim, zdim, ydim, xdim], use the flag transpose=False (default value)
    3. If data has any other shape, you first need to reorder it
  • dimensions – Dictionary mapping field dimensions (lon, lat, depth, time) to numpy arrays. Note that dimensions can also be a dictionary of dictionaries if dimension names are different for each variable (e.g. dimensions[‘U’], dimensions[‘V’], etc).
  • transpose – Boolean whether to transpose data on read-in
  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.
    2. flat: No conversion, lat/lon are assumed to be in m.
  • allow_time_extrapolation – boolean whether to allow for extrapolation (i.e. beyond the last available time snapshot). Default is False if dimensions includes time, else True
  • time_periodic – boolean whether to loop periodically over the time component of the FieldSet. This flag overrides allow_time_extrapolation and sets it to False
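
A sketch with a time dimension (all values illustrative): three hourly snapshots of a uniform flow, with array shape [tdim, ydim, xdim] so that transpose can be left at its default of False.

    import numpy as np
    from parcels import FieldSet

    times = np.arange(3) * 3600.                        # three hourly snapshots, in seconds
    data = {'U': np.ones((3, 5, 5), dtype=np.float32),  # shape [tdim, ydim, xdim]
            'V': np.zeros((3, 5, 5), dtype=np.float32)}
    dimensions = {'lon': np.linspace(-2., 2., 5),
                  'lat': np.linspace(48., 52., 5),
                  'time': times}
    fieldset = FieldSet.from_data(data, dimensions, mesh='spherical')
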
classmethod from_nemo(filenames, variables, dimensions, indices=None, mesh='spherical', allow_time_extrapolation=None, time_periodic=False, tracer_interp_method='linear', **kwargs)[source]

Initialises FieldSet object from NetCDF files of Curvilinear NEMO fields.

Parameters:
  • filenames – Dictionary mapping variables to file(s). The filepath may contain wildcards to indicate multiple files, or be a list of files. filenames can be a list [files], a dictionary {var:[files]}, a dictionary {dim:[files]} (if lon, lat, depth and/or data are not stored in the same files as the data itself), or a dictionary of dictionaries {var:{dim:[files]}}. Time values are taken from filenames[data]
  • variables – Dictionary mapping variables to variable names in the netCDF file(s).
  • dimensions

    Dictionary mapping data dimensions (lon, lat, depth, time, data) to dimensions in the netCDF file(s). Note that dimensions can also be a dictionary of dictionaries if dimension names are different for each variable. Watch out: NEMO is discretised on a C-grid: U and V velocities are not located on the same nodes (see https://www.nemo-ocean.eu/doc/node19.html ).

    [C-grid cell diagram: V velocities sit on the northern and southern cell edges, U velocities on the western and eastern edges.] To interpolate U, V velocities on the C-grid, Parcels needs to read the f-nodes, which are located on the corners of the cells. (for indexing details: https://www.nemo-ocean.eu/doc/img360.png )

  • indices – Optional dictionary of indices for each dimension to read from file(s), to allow for reading of subset of data. Default is to read the full extent of each dimension.
  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.
    2. flat: No conversion, lat/lon are assumed to be in m.
  • allow_time_extrapolation – boolean whether to allow for extrapolation (i.e. beyond the last available time snapshot). Default is False if dimensions includes time, else True
  • time_periodic – boolean whether to loop periodically over the time component of the FieldSet. This flag overrides allow_time_extrapolation and sets it to False
  • tracer_interp_method – Method for interpolation of tracer fields. Either ‘linear’ or ‘nearest’. Note that in the case of from_nemo(), the velocity fields default to ‘cgrid_linear’
classmethod from_netcdf(filenames, variables, dimensions, indices=None, mesh='spherical', allow_time_extrapolation=None, time_periodic=False, full_load=False, **kwargs)[source]

Initialises FieldSet object from NetCDF files

Parameters:
  • filenames – Dictionary mapping variables to file(s). The filepath may contain wildcards to indicate multiple files, or be a list of files. filenames can be a list [files], a dictionary {var:[files]}, a dictionary {dim:[files]} (if lon, lat, depth and/or data are not stored in the same files as the data itself), or a dictionary of dictionaries {var:{dim:[files]}}. Time values are taken from filenames[data]
  • variables – Dictionary mapping variables to variable names in the netCDF file(s).
  • dimensions – Dictionary mapping data dimensions (lon, lat, depth, time, data) to dimensions in the netCDF file(s). Note that dimensions can also be a dictionary of dictionaries if dimension names are different for each variable (e.g. dimensions[‘U’], dimensions[‘V’], etc).
  • indices – Optional dictionary of indices for each dimension to read from file(s), to allow for reading of subset of data. Default is to read the full extent of each dimension.
  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.
    2. flat: No conversion, lat/lon are assumed to be in m.
  • allow_time_extrapolation – boolean whether to allow for extrapolation (i.e. beyond the last available time snapshot). Default is False if dimensions includes time, else True
  • time_periodic – boolean whether to loop periodically over the time component of the FieldSet. This flag overrides allow_time_extrapolation and sets it to False
  • full_load – boolean whether to fully load the data or only pre-load them (default: False). It is advised not to fully load the data, since Parcels then manages memory more efficiently during particle set execution. full_load is however sometimes necessary for plotting the fields.
  • netcdf_engine – engine to use for netcdf reading in xarray. Default is ‘netcdf’, but in cases where this doesn’t work, setting netcdf_engine=’scipy’ could help
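
A sketch only; the file pattern and the variable/dimension names inside the NetCDF files are placeholders for whatever your data actually use:

    from parcels import FieldSet

    filenames = {'U': 'flow_data_*.nc', 'V': 'flow_data_*.nc'}   # hypothetical files
    variables = {'U': 'uo', 'V': 'vo'}                           # names inside the NetCDF files
    dimensions = {'lon': 'longitude', 'lat': 'latitude', 'time': 'time'}
    fieldset = FieldSet.from_netcdf(filenames, variables, dimensions)
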
classmethod from_parcels(basename, uvar='vozocrtx', vvar='vomecrty', indices=None, extra_fields=None, allow_time_extrapolation=None, time_periodic=False, full_load=False, **kwargs)[source]

Initialises FieldSet data from NetCDF files using the Parcels FieldSet.write() conventions.

Parameters:
  • basename – Base name of the file(s); may contain wildcards to indicate multiple files.
  • indices – Optional dictionary of indices for each dimension to read from file(s), to allow for reading of subset of data. Default is to read the full extent of each dimension.
  • extra_fields – Extra fields to read beyond U and V
  • allow_time_extrapolation – boolean whether to allow for extrapolation (i.e. beyond the last available time snapshot). Default is False if dimensions includes time, else True
  • time_periodic – boolean whether to loop periodically over the time component of the FieldSet. This flag overrides allow_time_extrapolation and sets it to False
  • full_load – boolean whether to fully load the data or only pre-load them (default: False). It is advised not to fully load the data, since Parcels then manages memory more efficiently during particle set execution. full_load is however sometimes necessary for plotting the fields.
write(filename)[source]

Write FieldSet to NetCDF file using NEMO convention

Parameters:filename – Basename of the output fileset

parcels.field module

class parcels.field.Field(name, data, lon=None, lat=None, depth=None, time=None, grid=None, mesh='flat', fieldtype=None, transpose=False, vmin=None, vmax=None, time_origin=None, interp_method='linear', allow_time_extrapolation=None, time_periodic=False, **kwargs)[source]

Bases: object

Class that encapsulates access to field data.

Parameters:
  • name – Name of the field
  • data

    2D, 3D or 4D numpy array of field data.

    1. If data shape is [xdim, ydim], [xdim, ydim, zdim], [xdim, ydim, tdim] or [xdim, ydim, zdim, tdim], whichever is relevant for the dataset, use the flag transpose=True
    2. If data shape is [ydim, xdim], [zdim, ydim, xdim], [tdim, ydim, xdim] or [tdim, zdim, ydim, xdim], use the flag transpose=False
    3. If data has any other shape, you first need to reorder it
  • lon – Longitude coordinates (numpy vector or array) of the field (only if grid is None)
  • lat – Latitude coordinates (numpy vector or array) of the field (only if grid is None)
  • depth – Depth coordinates (numpy vector or array) of the field (only if grid is None)
  • time – Time coordinates (numpy vector) of the field (only if grid is None)
  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation: (only if grid is None)

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.
    2. flat: No conversion, lat/lon are assumed to be in m.
  • grid – parcels.grid.Grid object containing all the lon, lat, depth, time, mesh and time_origin information. Can be constructed from any of the Grid objects
  • fieldtype – Type of Field to be used for UnitConverter when using SummedFields (either ‘U’, ‘V’, ‘Kh_zonal’, ‘Kh_Meridional’ or None)
  • transpose – Transpose data to required (lon, lat) layout
  • vmin – Minimum allowed value on the field. Data below this value are set to zero
  • vmax – Maximum allowed value on the field. Data above this value are set to zero
  • time_origin – Time origin (TimeConverter object) of the time axis (only if grid is None)
  • interp_method – Method for interpolation. Either ‘linear’ or ‘nearest’
  • allow_time_extrapolation – boolean whether to allow for extrapolation in time (i.e. beyond the last available time snapshot)
  • time_periodic – boolean whether to loop periodically over the time component of the Field. This flag overrides allow_time_extrapolation and sets it to False
add_periodic_halo(zonal, meridional, halosize=5, data=None)[source]

Add a ‘halo’ to the Field, through extending the Field (and lon/lat) by copying a small portion of the field on one side of the domain to the other. Before adding a periodic halo to the Field, it has to be added to the Grid on which the Field depends

Parameters:
  • zonal – Create a halo in zonal direction (boolean)
  • meridional – Create a halo in meridional direction (boolean)
  • halosize – size of the halo (in grid points). Default is 5 grid points
  • data – if data is not None, the periodic halo is applied to the given data array instead of self.data, and the result is returned
calc_cell_edge_sizes()[source]

Method to calculate the cell edge sizes based on the numpy.gradient method. Currently only works for Rectilinear Grids

cell_areas()[source]

Method to calculate cell areas based on cell_edge_sizes. Currently only works for Rectilinear Grids

ctypes_struct

Returns a ctypes struct object containing all relevant pointers and sizes for this field.

depth_index(depth, lat, lon)[source]

Find the index in the depth array associated with a given depth

eval(time, x, y, z, applyConversion=True)[source]

Interpolate field values in space and time.

We interpolate linearly in time and apply implicit unit conversion to the result. Note that we defer to scipy.interpolate to perform spatial interpolation.

classmethod from_netcdf(filenames, variable, dimensions, indices=None, grid=None, mesh='spherical', allow_time_extrapolation=None, time_periodic=False, full_load=False, **kwargs)[source]

Create field from netCDF file

Parameters:
  • filenames – list of filenames to read for the field. Note that wildcards (‘*’) are also allowed. filenames can be a list [files] or a dictionary {dim:[files]} (if lon, lat, depth and/or data are not stored in the same files as the data itself). Time values are taken from filenames[data]
  • variable – Name of the field to create. Note that this has to be a string
  • dimensions – Dictionary mapping variable names for the relevant dimensions in the NetCDF file
  • indices – dictionary mapping indices for each dimension to read from file. This can be used for reading in only a subregion of the NetCDF file
  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.
    2. flat: No conversion, lat/lon are assumed to be in m.
  • allow_time_extrapolation – boolean whether to allow for extrapolation in time (i.e. beyond the last available time snapshot). Default is False if dimensions includes time, else True
  • time_periodic – boolean whether to loop periodically over the time component of the FieldSet. This flag overrides allow_time_extrapolation and sets it to False
  • full_load – boolean whether to fully load the data or only pre-load them (default: False). It is advised not to fully load the data, since Parcels then manages memory more efficiently during particle set execution. full_load is however sometimes necessary for plotting the fields.
  • netcdf_engine – engine to use for netcdf reading in xarray. Default is ‘netcdf’, but in cases where this doesn’t work, setting netcdf_engine=’scipy’ could help
gradient(update=False, tindex=None)[source]

Method to calculate horizontal gradients of the Field, using the numpy.gradient() method. Returns two Fields, the zonal and meridional gradients, on the same Grid as the original Field. The names of these Fields are dNAME_dx and dNAME_dy, where NAME is the name of the original Field

set_scaling_factor(factor)[source]

Scales the field data by some constant factor.

Parameters:factor – scaling factor
show(animation=False, show_time=None, domain=None, projection=None, land=True, vmin=None, vmax=None, savefile=None, **kwargs)[source]

Method to ‘show’ a Parcels Field

Parameters:
  • animation – Boolean whether result is a single plot, or an animation
  • show_time – Time at which to show the Field (only in single-plot mode)
  • domain – Four-vector (latN, latS, lonE, lonW) defining domain to show
  • projection – type of cartopy projection to use (default PlateCarree)
  • land – Boolean whether to show land. This is ignored for flat meshes
  • vmin – minimum colour scale (only in single-plot mode)
  • vmax – maximum colour scale (only in single-plot mode)
  • savefile – Name of a file to save the plot to
spatial_interpolation(ti, z, y, x, time)[source]

Interpolate horizontal field values using a SciPy interpolator

temporal_interpolate_fullfield(ti, time)[source]

Calculate the data of a field between two snapshots, using linear interpolation

Parameters:
  • ti – Index in time array associated with time (via time_index())
  • time – Time to interpolate to
Return type:

Linearly interpolated field

time_index(time)[source]

Find the index in the time array associated with a given time

Note that we normalize to either the first or the last index if the sampled value is outside the time value range.

write(filename, varname=None)[source]

Write a Field to a netcdf file

Parameters:
  • filename – Basename of the file
  • varname – Name of the field, to be appended to the filename
class parcels.field.VectorField(name, U, V, W=None)[source]

Bases: object

Class VectorField stores 2 or 3 fields which together define a vector field, so that they can be interpolated as a single vector field in the kernels.

Parameters:
  • name – Name of the vector field
  • U – field defining the zonal component
  • V – field defining the meridional component
  • W – field defining the vertical component (default: None)
spatial_c_grid_interpolation3D(ti, z, y, x, time)[source]

The interpolation is done by interpolating U linearly along the longitude coordinate and V linearly along the latitude coordinate. Curvilinear grids are treated properly, since the element is projected onto a rectilinear parent element.

class parcels.field.SummedField[source]

Bases: list

Class SummedField is a list of Fields over which Field interpolation is summed. This can e.g. be used when combining multiple flow fields, where the total flow is the sum of all the individual flows. Note that the individual Fields can be on different Grids. Also note that, since SummedFields are lists, the individual Fields can still be queried through their list index (e.g. SummedField[1]).

class parcels.field.SummedVectorField(name, U, V, W=None)[source]

Bases: list

Class SummedVectorField stores 2 or 3 SummedFields which together define a vector field, so that they can be interpolated as a single vector SummedField in the kernels.

Parameters:
  • name – Name of the vector field
  • U – SummedField defining the zonal component
  • V – SummedField defining the meridional component
  • W – SummedField defining the vertical component (default: None)

parcels.gridset module

class parcels.gridset.GridSet[source]

Bases: object

GridSet class that holds the Grids on which the Fields are defined

dimrange(dim)[source]

Returns, for a dimension dim (lon, lat, depth or time), the maximum of the grids’ lower bounds (‘left’ side) and the minimum of their upper bounds (‘right’ side) over all grids in the GridSet. Useful for finding e.g. the longitude range that is covered by all grids in a GridSet

parcels.grid module

class parcels.grid.GridCode[source]

Bases: enum.IntEnum

class parcels.grid.RectilinearZGrid(lon, lat, depth=None, time=None, time_origin=None, mesh='flat')[source]

Bases: parcels.grid.RectilinearGrid

Rectilinear Z Grid

Parameters:
  • lon – Vector containing the longitude coordinates of the grid
  • lat – Vector containing the latitude coordinates of the grid
  • depth – Vector containing the vertical coordinates of the grid, which are z-coordinates. The depth of the different layers is thus constant.
  • time – Vector containing the time coordinates of the grid
  • time_origin – Time origin (TimeConverter object) of the time axis
  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.
    2. flat: No conversion, lat/lon are assumed to be in m.
class parcels.grid.RectilinearSGrid(lon, lat, depth, time=None, time_origin=None, mesh='flat')[source]

Bases: parcels.grid.RectilinearGrid

Rectilinear S Grid. Same horizontal discretisation as a rectilinear z grid, but with s vertical coordinates

Parameters:
  • lon – Vector containing the longitude coordinates of the grid
  • lat – Vector containing the latitude coordinates of the grid
  • depth – 4D (time-evolving) or 3D (time-independent) array containing the vertical coordinates of the grid, which are s-coordinates. s-coordinates can be terrain-following (sigma) or iso-density (rho) layers, or any generalised vertical discretisation. The depth of each node then depends on the horizontal position (lon, lat), the number of the layer and, if depth is a 4D array, on time. The depth array is either a 4D array[xdim][ydim][zdim][tdim] or a 3D array[xdim][ydim][zdim].
  • time – Vector containing the time coordinates of the grid
  • time_origin – Time origin (TimeConverter object) of the time axis
  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.
    2. flat: No conversion, lat/lon are assumed to be in m.
class parcels.grid.CurvilinearZGrid(lon, lat, depth=None, time=None, time_origin=None, mesh='flat')[source]

Bases: parcels.grid.CurvilinearGrid

Curvilinear Z Grid.

Parameters:
  • lon – 2D array containing the longitude coordinates of the grid
  • lat – 2D array containing the latitude coordinates of the grid
  • depth – Vector containing the vertical coordinates of the grid, which are z-coordinates. The depth of the different layers is thus constant.
  • time – Vector containing the time coordinates of the grid
  • time_origin – Time origin (TimeConverter object) of the time axis
  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.
    2. flat: No conversion, lat/lon are assumed to be in m.
class parcels.grid.CurvilinearSGrid(lon, lat, depth, time=None, time_origin=None, mesh='flat')[source]

Bases: parcels.grid.CurvilinearGrid

Curvilinear S Grid.

Parameters:
  • lon – 2D array containing the longitude coordinates of the grid
  • lat – 2D array containing the latitude coordinates of the grid
  • depth – 4D (time-evolving) or 3D (time-independent) array containing the vertical coordinates of the grid, which are s-coordinates. s-coordinates can be terrain-following (sigma) or iso-density (rho) layers, or any generalised vertical discretisation. The depth of each node then depends on the horizontal position (lon, lat), the number of the layer and, if depth is a 4D array, on time. The depth array is either a 4D array[xdim][ydim][zdim][tdim] or a 3D array[xdim][ydim][zdim].
  • time – Vector containing the time coordinates of the grid
  • time_origin – Time origin (TimeConverter object) of the time axis
  • mesh

    String indicating the type of mesh coordinates and units used during velocity interpolation:

    1. spherical (default): Lat and lon in degree, with a correction for zonal velocity U near the poles.
    2. flat: No conversion, lat/lon are assumed to be in m.
class parcels.grid.CGrid[source]

Bases: _ctypes.Structure

parcels.particle module

class parcels.particle.ScipyParticle(lon, lat, fieldset, depth=0.0, time=0.0, cptr=None)[source]

Bases: parcels.particle._Particle

Class encapsulating the basic attributes of a particle, to be executed in SciPy mode

Parameters:
  • lon – Initial longitude of particle
  • lat – Initial latitude of particle
  • depth – Initial depth of particle
  • fieldset – parcels.fieldset.FieldSet object to track this particle on
  • time – Current time of the particle

Additional Variables can be added via parcels.particle.Variable objects

class parcels.particle.JITParticle(*args, **kwargs)[source]

Bases: parcels.particle.ScipyParticle

Particle class for JIT-based (Just-In-Time) Particle objects

Parameters:
  • lon – Initial longitude of particle
  • lat – Initial latitude of particle
  • fieldset – parcels.fieldset.FieldSet object to track this particle on
  • dt – Execution timestep for this particle
  • time – Current time of the particle

Additional Variables can be added via parcels.particle.Variable objects

Users should use JITParticles for faster advection computation.

class parcels.particle.Variable(name, dtype=<type 'numpy.float32'>, initial=0, to_write=True)[source]

Bases: object

Descriptor class that delegates data access to particle data

Parameters:
  • name – Variable name as used within kernels
  • dtype – Data type (numpy.dtype) of the variable
  • initial – Initial value of the variable. Note that this can also be a Field object, which will then be sampled at the location of the particle
  • to_write – Boolean to control whether Variable is written to NetCDF file
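
A sketch of a custom particle class, assuming a FieldSet fieldset that contains a temperature Field T (the field name is illustrative): age starts at zero, while temp is initialised by sampling fieldset.T at the particle release position.

    import numpy as np
    from parcels import JITParticle, Variable

    class SampleParticle(JITParticle):
        age = Variable('age', dtype=np.float32, initial=0.)
        temp = Variable('temp', dtype=np.float32, initial=fieldset.T)  # sampled at release
        uid = Variable('uid', dtype=np.int32, to_write=False)          # not written to output
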
is64bit()[source]

Check whether variable is 64-bit

parcels.kernels.advection module

Collection of pre-built advection kernels

parcels.kernels.advection.AdvectionRK4(particle, fieldset, time, dt)[source]

Advection of particles using fourth-order Runge-Kutta integration.

Function needs to be converted to Kernel object before execution
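
A sketch of that conversion, assuming a ParticleSet pset and the hypothetical Ageing kernel from the add_constant example above: custom kernels and the built-in advection kernels are combined with + before execution.

    from datetime import timedelta
    from parcels import AdvectionRK4

    kernel = pset.Kernel(AdvectionRK4) + pset.Kernel(Ageing)   # RK4 advection, then ageing
    pset.execute(kernel, runtime=timedelta(days=1), dt=timedelta(minutes=30))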

parcels.kernels.advection.AdvectionEE(particle, fieldset, time, dt)[source]

Advection of particles using Explicit Euler (aka Euler Forward) integration.

Function needs to be converted to Kernel object before execution

parcels.kernels.advection.AdvectionRK45(particle, fieldset, time, dt)[source]

Advection of particles using adaptive Runge-Kutta 4/5 integration.

Time-step dt is halved if the error is larger than the tolerance, and doubled if the error is smaller than 1/10th of the tolerance, with the tolerance set to 1e-9 * dt by default.

parcels.kernels.advection.AdvectionRK4_3D(particle, fieldset, time, dt)[source]

Advection of particles using fourth-order Runge-Kutta integration including vertical velocity.

Function needs to be converted to Kernel object before execution

parcels.kernels.diffusion module

Collection of pre-built diffusion kernels

parcels.kernels.diffusion.BrownianMotion2D(particle, fieldset, time, dt)[source]

Kernel for simple Brownian particle diffusion in zonal and meridional direction. Assumes that fieldset has fields Kh_zonal and Kh_meridional

parcels.kernels.diffusion.SpatiallyVaryingBrownianMotion2D(particle, fieldset, time, dt)[source]

Diffusion equations for particles in non-uniform diffusivity fields from Ross & Sharples (2004, doi:10.4319/lom.2004.2.289) and Spagnol et al. (2002, doi:10.3354/meps235299)

parcels.codegenerator module

class parcels.codegenerator.IntrinsicTransformer(fieldset, ptype)[source]

Bases: ast.NodeTransformer

AST transformer that catches any mention of intrinsic variable names, such as ‘particle’ or ‘fieldset’, inserts placeholder objects and propagates attribute access.

get_tmp()[source]

Create a new temporary variable name

visit_Name(node)[source]

Inject IntrinsicNode objects into the tree according to keyword

class parcels.codegenerator.KernelGenerator(fieldset, ptype)[source]

Bases: ast.NodeVisitor

Code generator class that translates simple Python kernel functions into C functions by populating and accessing the ccode attribute on nodes in the Python AST.

visit_Call(node)[source]

Generate C code for simple C-style function calls. Please note that starred and keyword arguments are currently not supported.

visit_FieldNode(node)[source]

Record intrinsic fields used in kernel

visit_Name(node)[source]

Catches any mention of intrinsic variable names, such as ‘particle’ or ‘fieldset’ and inserts our placeholder objects

visit_SummedFieldNode(node)[source]

Record intrinsic fields used in kernel

visit_SummedVectorFieldNode(node)[source]

Record intrinsic fields used in kernel

visit_VectorFieldNode(node)[source]

Record intrinsic fields used in kernel

class parcels.codegenerator.LoopGenerator(fieldset, ptype=None)[source]

Bases: object

Code generator class that adds type definitions and the outer loop around kernel functions to generate compilable C code.

class parcels.codegenerator.TupleSplitter[source]

Bases: ast.NodeTransformer

AST transformer that detects and splits Pythonic tuple assignments into multiple statements for conversion to C.

parcels.compiler module

class parcels.compiler.Compiler(cc=None, cppargs=None, ldargs=None)[source]

Bases: object

A compiler object for creating and loading shared libraries.

Parameters:
  • cc – C compiler executable (uses environment variable CC if not provided).
  • cppargs – A list of arguments to the C compiler (optional).
  • ldargs – A list of arguments to the linker (optional).
class parcels.compiler.GNUCompiler(cppargs=None, ldargs=None)[source]

Bases: parcels.compiler.Compiler

A compiler object for the GNU Linux toolchain.

Parameters:
  • cppargs – A list of arguments to pass to the C compiler (optional).
  • ldargs – A list of arguments to pass to the linker (optional).

parcels.kernel module

class parcels.kernel.Kernel(fieldset, ptype, pyfunc=None, funcname=None, funccode=None, py_ast=None, funcvars=None, c_include='')[source]

Bases: object

Kernel object that encapsulates auto-generated code.

Parameters:
  • fieldset – FieldSet object providing the field information
  • ptype – PType object for the kernel particle

Note: A Kernel is either created from a compiled <function …> object or the necessary information (funcname, funccode, funcvars) is provided. The py_ast argument may be derived from the code string, but for concatenation, the merged AST plus the new header definition is required.

compile(compiler)[source]

Writes kernel code to file and compiles it.

execute(pset, endtime, dt, recovery=None, output_file=None)[source]

Execute this Kernel over a ParticleSet for several timesteps

execute_jit(pset, endtime, dt)[source]

Invokes JIT engine to perform the core update loop

execute_python(pset, endtime, dt)[source]

Performs the core update loop via Python

parcels.particlefile module

Module controlling the writing of ParticleSets to NetCDF file

class parcels.particlefile.ParticleFile(name, particleset, outputdt=inf, write_ondelete=False, chunksizes=None)[source]

Initialise netCDF4.Dataset for trajectory output.

The output follows the format outlined in the Discrete Sampling Geometries section of the CF-conventions: http://cfconventions.org/cf-conventions/v1.6.0/cf-conventions.html#discrete-sampling-geometries

The current implementation is based on the NCEI template: http://www.nodc.noaa.gov/data/formats/netcdf/v2.0/trajectoryIncomplete.cdl

Developer note: We cannot use xray.Dataset here, since it does not yet allow incremental writes to disk: https://github.com/pydata/xarray/issues/199

Parameters:
  • name – Basename of the output file
  • particleset – ParticleSet to output
  • outputdt – Interval which dictates the update frequency of file output when the ParticleFile is given as an argument to ParticleSet.execute(). It is either a timedelta object or a positive double.
  • write_ondelete – Boolean to write particle data only when they are deleted. Default is False
  • chunksizes – 2d vector of sizes for NetCDF chunking. Lower values mean smaller files, but also slower IO. See e.g. https://www.unidata.ucar.edu/blogs/developer/entry/chunking_data_choosing_shapes
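
A sketch, assuming a ParticleSet pset; the filename is illustrative. The ParticleFile is usually created through ParticleSet.ParticleFile and then handed to ParticleSet.execute:

    from datetime import timedelta
    from parcels import AdvectionRK4

    output_file = pset.ParticleFile(name="output.nc", outputdt=timedelta(hours=6))
    pset.execute(AdvectionRK4, runtime=timedelta(days=2), dt=timedelta(minutes=30),
                 output_file=output_file)
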
add_metadata(name, message)[source]

Add metadata to the parcels.particleset.ParticleSet output

Parameters:
  • name – Name of the metadata variable
  • message – Message to be written

open_dataset()[source]
sync()[source]

Write all buffered data to disk

write(pset, time, sync=True, deleted_only=False)[source]

Write parcels.particleset.ParticleSet data to file

Parameters:
  • pset – ParticleSet object to write
  • time – Time at which to write ParticleSet
  • sync – Optional argument whether to write data to disk immediately. Default is True

parcels.rng module

parcels.rng.seed(seed)[source]

Sets the seed for the Parcels internal RNG

parcels.rng.random()[source]

Returns a random float between 0. and 1.

parcels.rng.uniform(low, high)[source]

Returns a random float between low and high

parcels.rng.randint(low, high)[source]

Returns a random int between low and high

parcels.rng.normalvariate(loc, scale)[source]

Returns a random float on normal distribution with mean loc and width scale

parcels.rng.expovariate(lamb)[source]

Returns a random float from an exponential distribution with parameter lamb
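
A sketch of how these functions are typically used: imported as random and called inside a kernel, which works in both Scipy and JIT mode. The RandomWalk kernel below is purely illustrative.

    from parcels import rng as random

    random.seed(1234)                      # make runs reproducible

    def RandomWalk(particle, fieldset, time, dt):
        # illustrative kernel: add a small random displacement each timestep
        particle.lon += random.uniform(-0.1, 0.1)
        particle.lat += random.uniform(-0.1, 0.1)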

parcels.tools.error module

Collection of pre-built recovery kernels

class parcels.tools.error.ErrorCode[source]

Bases: enum.IntEnum

exception parcels.tools.error.FieldSamplingError(x, y, z, field=None)[source]

Bases: exceptions.RuntimeError

Utility error class to propagate erroneous field sampling in Scipy mode

exception parcels.tools.error.TimeExtrapolationError(time, field=None, msg='allow_time_extrapolation')[source]

Bases: exceptions.RuntimeError

Utility error class to propagate erroneous time extrapolation sampling in Scipy mode

exception parcels.tools.error.KernelError(particle, fieldset=None, msg=None)[source]

Bases: exceptions.RuntimeError

General particle kernel error with optional custom message

exception parcels.tools.error.OutOfBoundsError(particle, fieldset=None, lon=None, lat=None, depth=None)[source]

Bases: parcels.tools.error.KernelError

Particle kernel error for out-of-bounds field sampling

parcels.tools.converters module

class parcels.tools.converters.UnitConverter[source]

Bases: object

Interface class for spatial unit conversion during field sampling that performs no conversion.

class parcels.tools.converters.Geographic[source]

Bases: parcels.tools.converters.UnitConverter

Unit converter from geometric to geographic coordinates (m to degree)

class parcels.tools.converters.GeographicPolar[source]

Bases: parcels.tools.converters.UnitConverter

Unit converter from geometric to geographic coordinates (m to degree) with a correction to account for narrower grid cells closer to the poles.

class parcels.tools.converters.GeographicSquare[source]

Bases: parcels.tools.converters.UnitConverter

Square distance converter from geometric to geographic coordinates (m2 to degree2)

class parcels.tools.converters.GeographicPolarSquare[source]

Bases: parcels.tools.converters.UnitConverter

Square distance converter from geometric to geographic coordinates (m2 to degree2) with a correction to account for narrower grid cells closer to the poles.

class parcels.tools.converters.TimeConverter(time_origin=0)[source]

Bases: object

Converter class for dates with different calendars in FieldSets

Parameters:time_origin – time origin of the class. Currently supported formats are float, integer, numpy.datetime64, and netcdftime.DatetimeNoLeap
fulltime(time)[source]

Method to convert a time difference in seconds to a date, based on the time_origin

Parameters:time – input time
Returns:self.time_origin + time
reltime(time)[source]

Method to compute the difference, in seconds, between a time and the time_origin of the TimeConverter

Parameters:time – input time
Returns:time - self.time_origin

parcels.tools.loggers module

Script to create a logger for Parcels

parcels.plotting module

parcels.plotting.cartopy_colorbar(cs, plt, fig, ax)[source]
parcels.plotting.create_parcelsfig_axis(spherical, land=True, projection=None, central_longitude=0)[source]
parcels.plotting.parsedomain(domain, field)[source]
parcels.plotting.parsetimestr(time_origin, show_time)[source]
parcels.plotting.plotfield(field, show_time=None, domain=None, projection=None, land=True, vmin=None, vmax=None, savefile=None, **kwargs)[source]

Function to plot a Parcels Field

Parameters:
  • show_time – Time at which to show the Field
  • domain – Four-vector (latN, latS, lonE, lonW) defining domain to show
  • projection – type of cartopy projection to use (default PlateCarree)
  • land – Boolean whether to show land. This is ignored for flat meshes
  • vmin – minimum colour scale (only in single-plot mode)
  • vmax – maximum colour scale (only in single-plot mode)
  • savefile – Name of a file to save the plot to
  • animation – Boolean whether result is a single plot, or an animation
parcels.plotting.plotparticles(particles, with_particles=True, show_time=None, field=None, domain=None, projection=None, land=True, vmin=None, vmax=None, savefile=None, animation=False)[source]

Function to plot a Parcels ParticleSet

Parameters:
  • show_time – Time at which to show the ParticleSet
  • with_particles – Boolean whether particles are also plotted on Field
  • field – Field to plot under particles (either None, a Field object, or ‘vector’)
  • domain – Four-vector (latN, latS, lonE, lonW) defining domain to show
  • projection – type of cartopy projection to use (default PlateCarree)
  • land – Boolean whether to show land. This is ignored for flat meshes
  • vmin – minimum colour scale (only in single-plot mode)
  • vmax – maximum colour scale (only in single-plot mode)
  • savefile – Name of a file to save the plot to
  • animation – Boolean whether result is a single plot, or an animation

scripts.plottrajectoriesfile module

scripts.plottrajectoriesfile.plotTrajectoriesFile(filename, mode='2d', tracerfile=None, tracerfield='P', tracerlon='x', tracerlat='y', recordedvar=None, movie_forward=True, bins=20, show_plt=True)[source]

Quick and simple plotting of Parcels trajectories

Parameters:
  • filename – Name of Parcels-generated NetCDF file with particle positions
  • mode – Type of plot to show. Supported are ‘2d’, ‘3d’, ‘hist2d’, ‘movie2d’ and ‘movie2d_notebook’. The latter two give animations, with ‘movie2d_notebook’ specifically designed for jupyter notebooks
  • tracerfile – Name of NetCDF file to show as background
  • tracerfield – Name of variable to show as background
  • tracerlon – Name of longitude dimension of variable to show as background
  • tracerlat – Name of latitude dimension of variable to show as background
  • recordedvar – Name of variable used to color particles in scatter-plot. Only works in ‘movie2d’ or ‘movie2d_notebook’ mode.
  • movie_forward – Boolean whether to show movie in forward or backward mode (default True)
  • bins – Number of bins to use in hist2d mode. See also https://matplotlib.org/api/_as_gen/matplotlib.pyplot.hist2d.html
  • show_plt – Boolean whether the plot should be shown directly (for py.test)
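
A sketch; the output filename is illustrative and the import path follows the module name used here (in many installations the function can also be imported directly from parcels):

    from scripts.plottrajectoriesfile import plotTrajectoriesFile

    # quick 2D look at the trajectories written by a ParticleFile
    plotTrajectoriesFile('output.nc', mode='2d')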

scripts.get_examples module

Get example scripts, notebooks, and data files.

scripts.get_examples.copy_data_and_examples_from_package_to(target_path)[source]

Copy example data from Parcels directory.

Return those paths of the list file_names that were not found in the package.

scripts.get_examples.download_files(source_url, file_names, target_path)[source]

Mirror file_names from source_url to target_path.

scripts.get_examples.main(target_path=None)[source]

Get example scripts, example notebooks, and example data.

Copy the examples from the package directory and get the example data either from the package directory or from the Parcels website.

scripts.get_examples.set_jupyter_kernel_to_python_version(path, python_version=2)[source]

Set notebook kernelspec to desired python version.

This also drops all other meta data from the notebook.
