PyClaw supports the following input and output formats:
Each module contains two main routines, read_<format> and write_<format>, which Solution can call with the appropriate <format>. To create a new file I/O extension, the calling signatures must match
read_<format>(solution,frame,path,file_prefix,read_aux,options)
and
write_<format>(solution,frame,path,file_prefix,write_aux,options)
Note that both routines accept an options dictionary that is format specific and should be documented thoroughly. For examples of this usage, see the HDF5 and NetCDF modules.
HDF5 and NetCDF support require the corresponding libraries to be installed; please see the respective modules for details on how to obtain and install them.
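As a sketch of what such an extension might look like: the module body below is purely illustrative (the format name my_format and the delimiter option are hypothetical); only the calling signatures come from the text above.

```python
# Hypothetical I/O extension module. Only the calling signatures
# (read_<format>/write_<format>) are prescribed; the bodies here are
# an illustrative sketch, not PyClaw code.

def read_my_format(solution, frame, path='./', file_prefix='fort',
                   read_aux=False, options=None):
    """Read frame number `frame` from `path` into `solution`."""
    options = options or {}
    delimiter = options.get('delimiter', ' ')  # format-specific option
    filename = "%s/%s.my%04d" % (path, file_prefix, frame)
    # ... open `filename`, populate solution.states, solution.t, etc.

def write_my_format(solution, frame, path='./', file_prefix='fort',
                    write_aux=False, options=None):
    """Write `solution` at frame number `frame` to `path`."""
    options = options or {}
    delimiter = options.get('delimiter', ' ')
    filename = "%s/%s.my%04d" % (path, file_prefix, frame)
    # ... serialize solution.states (and aux arrays if write_aux) here
```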
Note
PyClaw automatically detects the availability of HDF5 and NetCDF file support and will warn you if you try to use them without the proper libraries.
Routines for reading and writing an ASCII output file
Authors: Kyle T. Mandli (2008-02-07), initial version
Read in a set of ASCII-formatted files
This routine reads the ASCII-formatted files corresponding to the classic Clawpack format ‘fort.txxxx’, ‘fort.qxxxx’, and ‘fort.axxxx’ or ‘fort.aux’. Note that the fort prefix can be changed.
Read only the fort.t file and return the data
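A minimal sketch of such a fort.t reader, assuming the classic layout in which each non-blank line holds a value followed by its name. The helper name read_t_file and the sample field names are illustrative; check the real module for the authoritative format.

```python
import os
import tempfile

def read_t_file(path):
    """Parse a classic Clawpack fort.t-style file into a dict.

    Assumes each non-blank line is '<value> <name>'; integers and
    floats are converted, everything else is kept as a string.
    """
    data = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 2:
                continue
            value, name = parts[0], parts[1]
            try:
                data[name] = int(value)
            except ValueError:
                try:
                    data[name] = float(value)
                except ValueError:
                    data[name] = value
    return data

# Example with a hand-written file mimicking the fort.t layout:
sample = "0.5 t\n2 meqn\n4 ngrids\n0 naux\n2 ndim\n"
tmp = os.path.join(tempfile.mkdtemp(), "fort.t0000")
with open(tmp, "w") as f:
    f.write(sample)
info = read_t_file(tmp)
```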
Write out ASCII data files
Write out an ASCII file formatted identically to the Fortran Clawpack files, including writing out fort.t, fort.q, and fort.aux if necessary. Note that some parameters are assumed to be the same for every grid in this format, which is not necessarily true for the actual data objects. If you use this output format, make sure that all of your grids share the appropriate values of ndim, meqn, maux, and t. Only up to 3 dimensions are supported.
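As a rough illustration, a single 1D grid could be written in a fort.q-like layout as below. The header layout (value followed by its name, one per line) is a sketch of the classic Clawpack convention, and the helper name write_q_1d is hypothetical; the real pyclaw.io.ascii module is the authoritative reference.

```python
import os
import tempfile

def write_q_1d(fname, grid_number, level, mx, xlow, dx, q):
    """Write one 1D grid in a fort.q-like ASCII layout (sketch only)."""
    with open(fname, "w") as f:
        # Header: one value per line, followed by its name.
        f.write("%5i                  grid_number\n" % grid_number)
        f.write("%5i                  AMR_level\n" % level)
        f.write("%5i                  mx\n" % mx)
        f.write("%18.8e    xlow\n" % xlow)
        f.write("%18.8e    dx\n" % dx)
        f.write("\n")
        for row in q:            # q is a list of per-cell value tuples
            f.write(" ".join("%18.8e" % v for v in row) + "\n")

path = os.path.join(tempfile.mkdtemp(), "fort.q0000")
write_q_1d(path, 1, 1, 3, 0.0, 0.1, [(1.0,), (2.0,), (3.0,)])
```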
Routines for reading and writing an HDF5 output file
It will first try h5py and then PyTables, using the correct calls according to whichever is present on the system. We recommend h5py, as it is a minimal wrapper to the HDF5 library.
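The fallback import logic can be sketched as follows. The flag names are illustrative, and this sketch degrades to a warning rather than an error when neither package is installed:

```python
import warnings

use_h5py = False
use_pytables = False
try:
    import h5py              # preferred: thin wrapper around libhdf5
    use_h5py = True
except ImportError:
    try:
        import tables        # fall back to PyTables if available
        use_pytables = True
    except ImportError:
        warnings.warn("Neither h5py nor PyTables found; "
                      "HDF5 file support is disabled.")
```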
Authors: Kyle T. Mandli (2009-02-13), initial version
Read an HDF5 file into a Solution
Write out a Solution to an HDF5 file.
Key | Value |
---|---|
compression | (None, string [“gzip” | “lzf” | “szip”] or int 0-9) Enable dataset compression. DEFLATE, LZF and (where available) SZIP are supported. An integer is interpreted as a GZIP level for backwards compatibility. |
compression_opts | (None, or special value) Setting for the compression filter; legal values depend on the filter type. See the filters module for a detailed description of each filter. |
chunks | (None, True or shape tuple) Store the dataset in chunked format. Automatically selected if any of the other keyword options are given. If you don’t provide a shape tuple, the library will guess one for you. |
shuffle | (True/False) Enable/disable data shuffling, which can improve compression performance. Automatically enabled when compression is used. |
fletcher32 | (True/False) Enable Fletcher32 error detection; may be used with or without compression. |
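For example, a write call might receive an options dictionary like the one below; these keywords map directly onto h5py's create_dataset. The h5py calls only run if h5py is installed, and the file and dataset names are illustrative:

```python
options = {"compression": "gzip",
           "compression_opts": 9,   # gzip level 0-9
           "shuffle": True,
           "fletcher32": True}

wrote_file = False
try:
    import h5py
    import numpy as np
    with h5py.File("claw0000.hdf5", "w") as f:
        # Each keyword in `options` maps onto a create_dataset keyword.
        f.create_dataset("q", data=np.zeros((8, 8)), **options)
    wrote_file = True
except ImportError:
    pass  # h5py not available; the options dict is still illustrative
```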
Routines for reading and writing a NetCDF output file
These interfaces are very similar, so if a different module needs to be used, it can more than likely be inserted with minimal effort.
This module will first try to import the netcdf4-python module, which is based on the compiled libraries; failing that, it will attempt to import the pure-Python interface pupynere, which requires no libraries.
Authors: Kyle T. Mandli (2009-02-17), initial version; Josh Jacobs (2011-04-22), NetCDF 3 support
Read NetCDF data files into a Solution
Write out a NetCDF data file representation of a solution
Key | Value |
---|---|
description | Dictionary of key/value pairs that will be attached to the root group as attributes, i.e. {‘time’:3} |
format | Can be one of the following netCDF flavors: NETCDF3_CLASSIC, NETCDF3_64BIT, NETCDF4_CLASSIC, and NETCDF4 default = NETCDF4 |
clobber | if True (default), the file will be overwritten; if False, an exception will be raised |
zlib | if True, data assigned to the Variable instance is compressed on disk. default = False |
complevel | the level of zlib compression to use (1 is the fastest, but poorest compression, 9 is the slowest but best compression). Ignored if zlib=False. default = 6 |
shuffle | if True, the HDF5 shuffle filter is applied to improve compression. Ignored if zlib=False. default = True |
fletcher32 | if True (default False), the Fletcher32 checksum algorithm is used for error detection. |
contiguous | if True (default False), the variable data is stored contiguously on disk. Setting to True for a variable with an unlimited dimension will trigger an error. |
chunksizes | Can be used to specify the HDF5 chunk sizes for each dimension of the variable. A detailed discussion of HDF chunking and I/O performance is available in the HDF5 documentation. Basically, you want the chunk size for each dimension to match as closely as possible the size of the data block that users will read from the file. chunksizes cannot be set if contiguous=True. |
least_significant_digit | If specified, variable data will be truncated (quantized). In conjunction with zlib=True this produces ‘lossy’, but significantly more efficient compression. For example, if least_significant_digit=1, data will be quantized using numpy.around(scale*data)/scale, where scale = 2**bits, and bits is determined so that a precision of 0.1 is retained (in this case bits=4). default = None, or no quantization. |
endian | Can be used to control whether the data is stored in little or big endian format on disk. Possible values are little, big or native (default). The library will automatically handle endian conversions when the data is read, but if the data is always going to be read on a computer with the opposite format as the one used to create the file, there may be some performance advantage to be gained by setting the endian-ness. |
fill_value | If specified, the default netCDF _FillValue (the value that the variable gets filled with before any data is written to it) is replaced with this value. If fill_value is set to False, then the variable is not pre-filled. |
Note
The zlib, complevel, shuffle, fletcher32, contiguous, chunksizes and endian keywords are silently ignored for netCDF 3 files that do not use HDF5.
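Putting a few of these keywords together, a compressed variable might be created as below. The block only runs if netcdf4-python is installed, and the file, dimension, and variable names are illustrative:

```python
nc_options = {"zlib": True, "complevel": 6,
              "shuffle": True, "fletcher32": True}

created = False
try:
    from netCDF4 import Dataset
    import numpy as np
    with Dataset("claw0000.nc", "w", format="NETCDF4", clobber=True) as root:
        root.setncatts({"time": 3.0})      # `description`-style attributes
        root.createDimension("x", 8)
        # The table's keywords map onto createVariable keyword arguments.
        q = root.createVariable("q", "f8", ("x",), **nc_options)
        q[:] = np.zeros(8)
    created = True
except ImportError:
    pass  # netCDF4 not installed; keywords shown for reference only
```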