The pyfits.new_table function is now fully deprecated (though it will not be removed for a long time, considering how widely it is used).
Instead please use the more explicit pyfits.BinTableHDU.from_columns to create a new binary table HDU, and the similar pyfits.TableHDU.from_columns to create a new ASCII table. These otherwise accept the same arguments as pyfits.new_table, which is now just a wrapper for these.
Fixed header wildcard matching (for example header['DATE*']) so that it can match any characters that might appear in a keyword. Previously it only matched keywords containing characters in the set [0-9A-Za-z_]. Now it can also match a hyphen (-) and any other characters, as some conventions like HIERARCH and record-valued keyword cards allow a wider range of valid characters than standard FITS keywords.
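A sketch of how such wildcard patterns can be translated to regular expressions (illustrative only, not PyFITS internals; wildcard_to_regex is a hypothetical helper):

```python
import re

# Illustrative sketch: translate a header wildcard pattern such as
# 'DATE*' into a regular expression.  After the fix described above,
# '*' matches any characters, including '-', not just [0-9A-Za-z_].
def wildcard_to_regex(pattern):
    # '*' matches any run of characters; '?' matches a single character.
    escaped = re.escape(pattern).replace(r'\*', '.*').replace(r'\?', '.')
    return re.compile('^' + escaped + '$')

print(bool(wildcard_to_regex('DATE*').match('DATE-OBS')))  # True
```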
Assigning to values in ColDefs.names, ColDefs.formats, ColDefs.nulls and other attributes of ColDefs instances that return lists of column properties is no longer supported. Assigning to those lists will no longer update the corresponding columns. Instead, please just modify the Column instances directly (Column.name, Column.null, etc.)
The pyfits.new_table function is marked “pending deprecation”. This does not mean it will be removed outright or that its functionality has changed. It will likely be replaced in the future by a function with similar, though subtly different, functionality. A better, if slightly more verbose, approach is to use pyfits.FITS_rec.from_columns to create a new FITS_rec table–this has the same interface as pyfits.new_table. The difference is that it returns a plain FITS_rec array, and not an HDU instance. This FITS_rec object can then be used as the data argument in the constructors for BinTableHDU (for binary tables) or TableHDU (for ASCII tables). This is analogous to creating an ImageHDU by passing in an image array. pyfits.FITS_rec.from_columns is just a simpler way of creating a FITS-compatible recarray from a FITS column specification.
The updateHeader, updateHeaderData, and updateCompressedData methods of the CompImageHDU class are pending deprecation and have been moved to internal methods. The operation of these methods depended too much on internal state to be used safely by users; instead they are invoked automatically in the appropriate places when reading/writing compressed image HDUs.
The CompImageHDU.compData attribute is pending deprecation in favor of the clearer and more PEP-8 compatible CompImageHDU.compressed_data.
The constructor for CompImageHDU has been changed to accept new keyword arguments. The new keyword arguments are essentially the same, but are in underscore_separated format rather than camelCase format. The old arguments are still supported, but are pending deprecation.
The internal attributes of HDU classes _hdrLoc, _datLoc, and _datSpan have been replaced with _header_offset, _data_offset, and _data_size respectively. The old attribute names are still supported, but are pending deprecation. This should only be of interest to advanced users who have created their own HDU subclasses.
The following previously deprecated functions and methods have been removed entirely: createCard, createCardFromString, upperKey, ColDefs.data, setExtensionNameCaseSensitive, _File.getfile, _TableBaseHDU.get_coldefs, Header.has_key, Header.ascardlist.
If you run your code with a previous version of PyFITS (>= 3.0, < 3.2) with the python -Wd argument, warnings for all deprecated interfaces still in use will be displayed.
Interfaces that were pending deprecation are now fully deprecated. These include: create_card, create_card_from_string, upper_key, Header.get_history, and Header.get_comment.
The .name attribute on HDUs is now directly tied to the HDU’s header, so that if .header['EXTNAME'] changes so does .name and vice-versa.
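The two-way tie between .name and the header can be illustrated with a simple Python property (a sketch only, not the actual PyFITS implementation):

```python
# Illustrative sketch: keep an attribute and a header entry in sync
# via a property, as the .name <-> header['EXTNAME'] tie does.
class HDU:
    def __init__(self):
        self.header = {'EXTNAME': 'SCI'}

    @property
    def name(self):
        # Reading .name always reads through to the header.
        return self.header.get('EXTNAME', '')

    @name.setter
    def name(self, value):
        # Assigning .name updates the header, and vice-versa.
        self.header['EXTNAME'] = value

hdu = HDU()
hdu.name = 'ERR'
print(hdu.header['EXTNAME'])  # ERR
```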
The pyfits.file.PYTHON_MODES constant dict was renamed to pyfits.file.PYFITS_MODES which better reflects its purpose. This is rarely used by client code, however. Support for the old name will be removed by PyFITS 3.4.
This is a bug fix release for the 3.1.x series.
The Header class has been rewritten, and the CardList class is deprecated. Most of the basic details of working with FITS headers are unchanged, and will not be noticed by most users. But there are differences in some areas that will be of interest to advanced users, and to application developers. For full details of the changes, see the “Header Interface Transition Guide” section in the PyFITS documentation. See ticket #64 on the PyFITS Trac for further details and background. Some highlights are listed below:
The Header class now fully implements the Python dict interface, and can be used interchangeably with a dict, where the keys are header keywords.
New keywords can be added to the header using normal keyword assignment (previously it was necessary to use Header.update to add new keywords). For example:
>>> header['NAXIS'] = 2
will update the existing ‘NAXIS’ keyword if it already exists, or add a new one if it doesn’t exist, just like a dict.
It is possible to assign both a value and a comment at the same time using a tuple:
>>> header['NAXIS'] = (2, 'Number of axes')
To add or update a card and ensure it’s placed in a specific location, use Header.set():
>>> header.set('NAXIS', 2, 'Number of axes', after='BITPIX')
This works the same as the old Header.update(). Header.update() still works in the old way too, but is deprecated.
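The value-and-comment assignment semantics above can be sketched with a plain dict; set_card here is a hypothetical helper for illustration, not part of the PyFITS API:

```python
# Illustrative sketch of the (value, comment) assignment semantics,
# using a plain dict of value/comment pairs rather than PyFITS itself.
def set_card(header, keyword, value):
    if isinstance(value, tuple):
        value, comment = value   # a 2-tuple carries value and comment
    else:
        comment = ''             # a bare value gets an empty comment
    header[keyword] = (value, comment)

header = {}
set_card(header, 'NAXIS', 2)                      # value only
set_card(header, 'NAXIS', (2, 'Number of axes'))  # value and comment
print(header['NAXIS'])  # (2, 'Number of axes')
```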
Although Card objects still exist, it generally is not necessary to work with them directly. Header.ascardlist()/Header.ascard are deprecated and should not be used. To directly access the Card objects in a header, use Header.cards.
To access card comments, it is still possible to either go through the card itself, or through Header.comments. For example:
>>> header.cards['NAXIS'].comment
Number of axes
>>> header.comments['NAXIS']
Number of axes
Card objects can now be used interchangeably with (keyword, value, comment) 3-tuples. They still have .value and .comment attributes as well. The .key attribute has been renamed to .keyword for consistency, though .key is still supported (but deprecated).
Memory mapping is now used by default to access HDU data. That is, pyfits.open() uses memmap=True as the default. This provides better performance in the majority of use cases–there are only some I/O intensive applications where it might not be desirable. Enabling mmap by default also led to finding and fixing a large number of bugs in PyFITS’ handling of memory-mapped data (most of these bug fixes were backported to PyFITS 3.0.5). (#85)
The size() method on HDU objects is now a .size property–this returns the size in bytes of the data portion of the HDU, and in most cases is equivalent to hdu.data.nbytes. (#83)
BinTableHDU.tdump and BinTableHDU.tcreate are deprecated–use BinTableHDU.dump and BinTableHDU.load instead. The new methods output the table data in a slightly different format from previous versions, which places quotes around each value. This format is compatible with data dumps from previous versions of PyFITS, but not vice-versa due to a parsing bug in older versions.
Likewise the pyfits.tdump and pyfits.tcreate convenience function versions of these methods have been renamed pyfits.tabledump and pyfits.tableload. The old names are deprecated, but currently retained for backwards compatibility. (r1125)
A new global variable pyfits.EXTENSION_NAME_CASE_SENSITIVE was added. This serves as a replacement for pyfits.setExtensionNameCaseSensitive, which is now deprecated and may be removed in a future version. To enable case-sensitivity of extension names (i.e. treat ‘sci’ as distinct from ‘SCI’) set pyfits.EXTENSION_NAME_CASE_SENSITIVE = True. The default is False. (r1139)
A new global configuration variable pyfits.STRIP_HEADER_WHITESPACE was added. By default, if a string value in a header contains trailing whitespace, that whitespace is automatically removed when the value is read. Now if you set pyfits.STRIP_HEADER_WHITESPACE = False all whitespace is preserved. (#146)
The old classExtensions extension mechanism (which was deprecated in PyFITS 3.0) is removed outright. To our knowledge it was no longer used anywhere. (r1309)
Warning messages from PyFITS issued through the Python warnings API are now output to stderr instead of stdout, which is the warnings module’s default behavior. PyFITS no longer modifies the default behavior of the warnings module with respect to which stream it outputs to. (r1319)
The checksum argument to pyfits.open() now accepts a value of ‘remove’, which causes any existing CHECKSUM/DATASUM keywords to be ignored, and removed when the file is saved.
This is a bug fix release for the 3.0.x series.
Fixed Header.values()/Header.itervalues() and Header.items()/Header.iteritems() to correctly return the different values for duplicate keywords (particularly commentary keywords like HISTORY and COMMENT). This makes the old Header implementation slightly more compatible with the new implementation in PyFITS 3.1. (#127)
Note
This fix did not change the existing behavior from earlier PyFITS versions where Header.keys() returns all keywords in the header with duplicates removed. PyFITS 3.1 changes that behavior, so that Header.keys() includes duplicates.
Fixed a bug where ImageHDU.scale(option='old') wasn’t working at all–it was not restoring the image to its original BSCALE and BZERO values. (#162)
Fixed a bug where opening a file containing compressed image HDUs in ‘update’ mode and then immediately closing it without making any changes caused the file to be rewritten unnecessarily. (#167)
Fixed two memory leaks that could occur when writing compressed image data, or in some cases when opening files containing compressed image HDUs in ‘update’ mode. (#168)
The main reason for this release is to fix an issue that was introduced in PyFITS 3.0.5 where merely opening a file containing scaled data (that is, with non-trivial BSCALE and BZERO keywords) in ‘update’ mode would cause the data to be automatically rescaled–possibly converting the data from ints to floats–as soon as the file is closed, even if the application did not touch the data. Now PyFITS will only rescale the data in an extension when the data is actually accessed by the application. So opening a file in ‘update’ mode in order to modify the header or append new extensions will not cause any change to the data in existing extensions.
This release also fixes a few Windows-specific bugs found through more extensive Windows testing, and other miscellaneous bugs.
The following enhancements were added:
The following bugs were fixed:
The following bugs were fixed:
The following enhancements were made:
Completely eliminate support for numarray.
Rework pyfits documentation to use Sphinx.
Support python 2.6 and future division.
Added a new method to get the file name associated with an HDUList object. The method HDUList.filename() returns the name of an associated file. It returns None if no file is associated with the HDUList.
Support the python 2.5 ‘with’ statement when opening fits files. (CNSHD766308) It is now possible to use the following construct:
>>> from __future__ import with_statement
>>> import pyfits
>>> with pyfits.open("input.fits") as hdul:
...     # process hdul
>>>
Extended the support for reading unsigned integer 16 values from an ImageHDU to include unsigned integer 32 and unsigned integer 64 values. ImageHDU data is considered to be unsigned integer 16 when the data type is signed integer 16 and BZERO is equal to 2**15 (32768) and BSCALE is equal to 1. ImageHDU data is considered to be unsigned integer 32 when the data type is signed integer 32 and BZERO is equal to 2**31 and BSCALE is equal to 1. ImageHDU data is considered to be unsigned integer 64 when the data type is signed integer 64 and BZERO is equal to 2**63 and BSCALE is equal to 1. An optional keyword argument (uint) was added to the open convenience function for this purpose. Supplying a value of True for this argument will cause data of any of these types to be read in and scaled into the appropriate unsigned integer array (uint16, uint32, or uint64) instead of into the normal float 32 or float 64 array. If an HDU associated with a file that was opened with the ‘uint’ option and containing unsigned integer 16, 32, or 64 data is written to a file, the data will be reverse scaled into a signed integer 16, 32, or 64 array and written out to the file along with the appropriate BSCALE/BZERO header cards. Note that for backward compatibility, the ‘uint16’ keyword argument will still be accepted in the open function when handling unsigned integer 16 conversion.
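The convention above amounts to simple linear scaling; a sketch in plain Python arithmetic (an illustration of the convention, not PyFITS code):

```python
# A stored (signed) value s represents the physical value
# s * BSCALE + BZERO.  For the unsigned-integer-16 convention,
# BSCALE = 1 and BZERO = 2**15 = 32768.
bscale, bzero = 1, 2**15
stored = [-32768, 0, 32767]          # signed values as written on disk
physical = [s * bscale + bzero for s in stored]
print(physical)  # [0, 32768, 65535] -- the full uint16 range
```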
Provided the capability to access the data for a column of a fits table by indexing the table using the column name. This is consistent with Record Arrays in numpy (array with fields). (CNSHD763378) The following example will illustrate this:
>>> import pyfits
>>> hdul = pyfits.open('input.fits')
>>> table = hdul[1].data
>>> table.names
['c1','c2','c3','c4']
>>> print table.field('c2') # this is the data for column 2
['abc' 'xy']
>>> print table['c2'] # this is also the data for column 2
array(['abc', 'xy '], dtype='|S3')
>>> print table[1] # this is the data for row 1
(2, 'xy', 6.6999997138977054, True)
Provided capabilities to create a BinTableHDU directly from a numpy Record Array (array with fields). The new capabilities include table creation, writing a numpy Record Array directly to a fits file using the pyfits.writeto and pyfits.append convenience functions, and reading the data for a BinTableHDU from a fits file directly into a numpy Record Array using the pyfits.getdata convenience function. (CNSHD749034) Thanks to Erin Sheldon at Brookhaven National Laboratory for help with this.
The following should illustrate these new capabilities:
>>> import pyfits
>>> import numpy
>>> t=numpy.zeros(5,dtype=[('x','f4'),('y','2i4')]) \
... # Create a numpy Record Array with fields
>>> hdu = pyfits.BinTableHDU(t) \
... # Create a Binary Table HDU directly from the Record Array
>>> print hdu.data
[(0.0, array([0, 0], dtype=int32))
(0.0, array([0, 0], dtype=int32))
(0.0, array([0, 0], dtype=int32))
(0.0, array([0, 0], dtype=int32))
(0.0, array([0, 0], dtype=int32))]
>>> hdu.writeto('test1.fits',clobber=True) \
... # Write the HDU to a file
>>> pyfits.info('test1.fits')
Filename: test1.fits
No. Name Type Cards Dimensions Format
0 PRIMARY PrimaryHDU 4 () uint8
1 BinTableHDU 12 5R x 2C [E, 2J]
>>> pyfits.writeto('test.fits', t, clobber=True) \
... # Write the Record Array directly to a file
>>> pyfits.append('test.fits', t) \
... # Append another Record Array to the file
>>> pyfits.info('test.fits')
Filename: test.fits
No. Name Type Cards Dimensions Format
0 PRIMARY PrimaryHDU 4 () uint8
1 BinTableHDU 12 5R x 2C [E, 2J]
2 BinTableHDU 12 5R x 2C [E, 2J]
>>> d=pyfits.getdata('test.fits',ext=1) \
... # Get the first extension from the file as a FITS_rec
>>> print type(d)
<class 'pyfits.core.FITS_rec'>
>>> print d
[(0.0, array([0, 0], dtype=int32))
(0.0, array([0, 0], dtype=int32))
(0.0, array([0, 0], dtype=int32))
(0.0, array([0, 0], dtype=int32))
(0.0, array([0, 0], dtype=int32))]
>>> d=pyfits.getdata('test.fits',ext=1,view=numpy.ndarray) \
... # Get the first extension from the file as a numpy
... # Record Array
>>> print type(d)
<type 'numpy.ndarray'>
>>> print d
[(0.0, [0, 0]) (0.0, [0, 0]) (0.0, [0, 0]) (0.0, [0, 0])
(0.0, [0, 0])]
>>> print d.dtype
[('x', '>f4'), ('y', '>i4', 2)]
>>> d=pyfits.getdata('test.fits',ext=1,upper=True,
... view=pyfits.FITS_rec) \
... # Force the Record Array field names to be in upper case
... # regardless of how they are stored in the file
>>> print d.dtype
[('X', '>f4'), ('Y', '>i4', 2)]
Provided support for writing fits data to file-like objects that do not support the random access methods seek() and tell(). Most pyfits functions or methods will treat these file-like objects as an empty file that cannot be read, only written. It is also expected that the file-like object is in a writable condition (i.e. opened) when passed into a pyfits function or method. The following methods and functions will allow writing to a non-random access file-like object: HDUList.writeto(), HDUList.flush(), pyfits.writeto(), and pyfits.append(). The pyfits.open() convenience function may be used to create an HDUList object that is associated with the provided file-like object. (CNSHD770036)
An illustration of the new capabilities follows. In this example fits data is written to standard output which is associated with a file opened in write-only mode:
>>> import pyfits
>>> import numpy as np
>>> import sys
>>>
>>> hdu = pyfits.PrimaryHDU(np.arange(100,dtype=np.int32))
>>> hdul = pyfits.HDUList()
>>> hdul.append(hdu)
>>> tmpfile = open('tmpfile.py','w')
>>> sys.stdout = tmpfile
>>> hdul.writeto(sys.stdout, clobber=True)
>>> sys.stdout = sys.__stdout__
>>> tmpfile.close()
>>> pyfits.info('tmpfile.py')
Filename: tmpfile.py
No. Name Type Cards Dimensions Format
0 PRIMARY PrimaryHDU 5 (100,) int32
>>>
Provided support for slicing a FITS_record object. The FITS_record object represents the data from a row of a table. Pyfits now supports the slice syntax to retrieve values from the row. The following illustrates this new syntax:
>>> hdul = pyfits.open('table.fits')
>>> row = hdul[1].data[0]
>>> row
('clear', 'nicmos', 1, 30, 'clear', 'idno= 100')
>>> a, b, c, d, e = row[0:5]
>>> a
'clear'
>>> b
'nicmos'
>>> c
1
>>> d
30
>>> e
'clear'
>>>
Allow the assignment of a row value for a pyfits table using a tuple or a list as input. The following example illustrates this new feature:
>>> c1=pyfits.Column(name='target',format='10A')
>>> c2=pyfits.Column(name='counts',format='J',unit='DN')
>>> c3=pyfits.Column(name='notes',format='A10')
>>> c4=pyfits.Column(name='spectrum',format='5E')
>>> c5=pyfits.Column(name='flag',format='L')
>>> coldefs=pyfits.ColDefs([c1,c2,c3,c4,c5])
>>>
>>> tbhdu=pyfits.new_table(coldefs, nrows = 5)
>>>
>>> # Assigning data to a table's row using a tuple
>>> tbhdu.data[2] = ('NGC1',312,'A Note',
... num.array([1.1,2.2,3.3,4.4,5.5],dtype=num.float32),
... True)
>>>
>>> # Assigning data to a tables row using a list
>>> tbhdu.data[3] = ['JIM1','33','A Note',
... num.array([1.,2.,3.,4.,5.],dtype=num.float32),True]
Allow the creation of a Variable Length Format (P format) column from a list of data. The following example illustrates this new feature:
>>> a = [num.array([7.2e-20,7.3e-20]),num.array([0.0]),
... num.array([0.0])]
>>> acol = pyfits.Column(name='testa',format='PD()',array=a)
>>> acol.array
_VLF([[ 7.20000000e-20 7.30000000e-20], [ 0.], [ 0.]],
dtype=object)
>>>
Allow the assignment of multiple rows in a table using the slice syntax. The following example illustrates this new feature:
>>> counts = num.array([312,334,308,317])
>>> names = num.array(['NGC1','NGC2','NGC3','NCG4'])
>>> c1=pyfits.Column(name='target',format='10A',array=names)
>>> c2=pyfits.Column(name='counts',format='J',unit='DN',
... array=counts)
>>> c3=pyfits.Column(name='notes',format='A10')
>>> c4=pyfits.Column(name='spectrum',format='5E')
>>> c5=pyfits.Column(name='flag',format='L',array=[1,0,1,1])
>>> coldefs=pyfits.ColDefs([c1,c2,c3,c4,c5])
>>>
>>> tbhdu1=pyfits.new_table(coldefs)
>>>
>>> counts = num.array([112,134,108,117])
>>> names = num.array(['NGC5','NGC6','NGC7','NCG8'])
>>> c1=pyfits.Column(name='target',format='10A',array=names)
>>> c2=pyfits.Column(name='counts',format='J',unit='DN',
... array=counts)
>>> c3=pyfits.Column(name='notes',format='A10')
>>> c4=pyfits.Column(name='spectrum',format='5E')
>>> c5=pyfits.Column(name='flag',format='L',array=[0,1,0,0])
>>> coldefs=pyfits.ColDefs([c1,c2,c3,c4,c5])
>>>
>>> tbhdu=pyfits.new_table(coldefs)
>>> tbhdu.data[0][3] = num.array([1.,2.,3.,4.,5.],
... dtype=num.float32)
>>>
>>> tbhdu2=pyfits.new_table(tbhdu1.data, nrows=9)
>>>
>>> # Assign the 4 rows from the second table to rows 5 thru
... # 8 of the new table. Note that the last row of the new
... # table will still be initialized to the default values.
>>> tbhdu2.data[4:] = tbhdu.data
>>>
>>> print tbhdu2.data
[ ('NGC1', 312, '0.0', array([ 0., 0., 0., 0., 0.],
dtype=float32), True)
('NGC2', 334, '0.0', array([ 0., 0., 0., 0., 0.],
dtype=float32), False)
('NGC3', 308, '0.0', array([ 0., 0., 0., 0., 0.],
dtype=float32), True)
('NCG4', 317, '0.0', array([ 0., 0., 0., 0., 0.],
dtype=float32), True)
('NGC5', 112, '0.0', array([ 1., 2., 3., 4., 5.],
dtype=float32), False)
('NGC6', 134, '0.0', array([ 0., 0., 0., 0., 0.],
dtype=float32), True)
('NGC7', 108, '0.0', array([ 0., 0., 0., 0., 0.],
dtype=float32), False)
('NCG8', 117, '0.0', array([ 0., 0., 0., 0., 0.],
dtype=float32), False)
('0.0', 0, '0.0', array([ 0., 0., 0., 0., 0.],
dtype=float32), False)]
>>>
The following bugs were fixed:
Corrected bugs in HDUList.append and HDUList.insert to correctly handle two situations: inserting or appending a Primary HDU somewhere other than the first position in an HDUList, and inserting or appending an Extension HDU as the first HDU in an HDUList.
Corrected a bug involving scaled images (both compressed and not compressed) that include a BLANK, or ZBLANK card in the header. When the image values match the BLANK or ZBLANK value, the value should be replaced with NaN after scaling. Instead, pyfits was scaling the BLANK or ZBLANK value and returning it. (CNSHD766129)
Corrected a byteswapping bug that occurs when writing certain column data. (CNSHD763307)
Corrected a bug that occurs when creating a column from a chararray when one or more elements are shorter than the specified format length. The bug wrote nulls instead of spaces to the file. (CNSHD695419)
Corrected a bug in the HDU verification software to ensure that the header contains no NAXISn cards where n > NAXIS.
Corrected a bug involving reading and writing compressed image data. When written, the header keyword card ZTENSION will always have the value ‘IMAGE’ and when read, if the ZTENSION value is not ‘IMAGE’ the user will receive a warning, but the data will still be treated as image data.
Corrected a bug that restricted the ability to create a custom HDU class and use it with pyfits. The bug fix will allow something like this:
>>> import pyfits
>>> class MyPrimaryHDU(pyfits.PrimaryHDU):
... def __init__(self, data=None, header=None):
... pyfits.PrimaryHDU.__init__(self, data, header)
... def _summary(self):
... """
... Reimplement a method of the class.
... """
... s = pyfits.PrimaryHDU._summary(self)
... # change the behavior to suit me.
... s1 = 'MyPRIMARY ' + s[11:]
... return s1
...
>>> hdul=pyfits.open("pix.fits",
... classExtensions={pyfits.PrimaryHDU: MyPrimaryHDU})
>>> hdul.info()
Filename: pix.fits
No. Name Type Cards Dimensions Format
0 MyPRIMARY MyPrimaryHDU 59 (512, 512) int16
>>>
Modified ColDefs.add_col so that instead of returning a new ColDefs object with the column added to the end, it simply appends the new column to the current ColDefs object in place. (CNSHD768778)
Corrected a bug in ColDefs.del_col which raised a KeyError exception when deleting a column from a ColDefs object.
Modified the open convenience function so that when a file is opened in readonly mode and the file contains no HDU’s an IOError is raised.
Modified _TableBaseHDU to ensure that all locations where data is referenced in the object actually reference the same ndarray, instead of copies of the array.
Corrected a bug in the Column class that failed to initialize data when the data is a boolean array. (CNSHD779136)
Corrected a bug that caused an exception to be raised when creating a variable length format column from character data (PA format).
Modified installation code so that when installing on Windows and a C++ compiler compatible with the Python binary is not found, the installation completes with a warning that all optional extension modules failed to build. Previously, an Error was issued and the installation stopped.
Updates described in this release are only supported in the NUMPY version of pyfits.
The following bugs were fixed:
Updates described in this release are only supported in the NUMPY version of pyfits.
The following bugs were fixed:
Updates described in this release are only supported in the NUMPY version of pyfits.
The following enhancements were made:
Provide support for the FITS Checksum Keyword Convention. (CNSHD754301)
Adding the checksum=True keyword argument to the open convenience function will cause checksums to be verified on file open:
>>> hdul=pyfits.open('in.fits', checksum=True)
On output, CHECKSUM and DATASUM cards may be output to all HDU’s in a fits file by using the keyword argument checksum=True in calls to the writeto convenience function, the HDUList.writeto method, the writeto methods of all of the HDU classes, and the append convenience function:
>>> hdul.writeto('out.fits', checksum=True)
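Underlying the convention is a running 32-bit 1’s-complement sum of the HDU bytes; a minimal sketch of that accumulation rule follows (the convention’s ASCII encoding of the result into the CHECKSUM card, and its 2880-byte blocking, are omitted):

```python
# Illustrative sketch of the 32-bit 1's-complement sum behind the
# FITS Checksum Keyword Convention (not the PyFITS implementation).
def ones_complement_sum32(data):
    total = 0
    for i in range(0, len(data), 4):
        # Accumulate 4-byte big-endian words, zero-padding the tail.
        total += int.from_bytes(data[i:i + 4].ljust(4, b'\x00'), 'big')
        # Fold any carry out of bit 31 back into the low bits.
        total = (total & 0xFFFFFFFF) + (total >> 32)
    return total

print(ones_complement_sum32(b'\xff\xff\xff\xff\x00\x00\x00\x01'))  # 1
```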
Implemented a new insert method to the HDUList class that allows for the insertion of a HDU into a HDUList at a given index:
>>> hdul.insert(2,hdu)
Provided the capability to handle unicode input for file names.
Provided support for integer division required by Python 3.0.
The following bugs were fixed:
Corrected a bug that caused an index out of bounds exception to be raised when iterating over the rows of a binary table HDU using the syntax “for row in tbhdu.data: ”. (CNSHD748609)
Corrected a bug that prevented the use of the writeto convenience function for writing table data to a file. (CNSHD749024)
Modified the code to raise an IOError exception with the comment “Header missing END card.” when pyfits can’t find a valid END card for a header when opening a file.
This change addressed a problem with a non-standard fits file that contained several new-line characters at the end of each header and at the end of the file. However, since some people want to be able to open these non-standard files anyway, an option was added to the open convenience function to allow these files to be opened without exception:
>>> pyfits.open('infile.fits',ignore_missing_end=True)
Corrected a bug that prevented the use of StringIO objects as fits files when reading and writing table data. Previously, only image data was supported. (CNSHD753698)
Corrected a bug that caused a bus error to be generated when compressing image data using GZIP_1 under the Solaris operating system.
Corrected bugs that prevented pyfits from properly reading Random Groups HDU’s using numpy. (CNSHD756570)
Corrected a bug that can occur when writing a fits file. (CNSHD757508)
Corrected a bug in CompImageHDU that prevented rescaling the image data using hdu.scale(option='old').
Updates described in this release are only supported in the NUMPY version of pyfits.
The following bugs were fixed:
Updates described in this release are only supported in the NUMPY version of pyfits.
The following enhancements were made:
Added new tdump and tcreate capabilities to pyfits.
Added support for case sensitive values of the EXTNAME card in an extension header. (CNSHD745784)
By default, pyfits converts the value of EXTNAME cards to upper case when reading from a file. A new convenience function (setExtensionNameCaseSensitive) was implemented to allow a user to circumvent this behavior so that the EXTNAME value remains in the same case as it is in the file.
With the following function call, pyfits will maintain the case of all characters in the EXTNAME card values of all extension HDU’s during the entire python session, or until another call to the function is made:
>>> import pyfits
>>> pyfits.setExtensionNameCaseSensitive()
The following function call will return pyfits to its default (all upper case) behavior:
>>> pyfits.setExtensionNameCaseSensitive(False)
Added support for reading and writing FITS files in which the value of the first card in the header is ‘SIMPLE=F’. In this case, the pyfits open function returns an HDUList object that contains a single HDU of the new type _NonstandardHDU. The header for this HDU is like a normal header (with the exception that the first card contains SIMPLE=F instead of SIMPLE=T). Like normal HDU’s the reading of the data is delayed until actually requested. The data is read from the file into a string starting from the first byte after the header END card and continuing till the end of the file. When written, the header is written, followed by the data string. No attempt is made to pad the data string so that it fills into a standard 2880 byte FITS block. (CNSHD744730)
Added support for FITS files containing extensions with unknown XTENSION card values. (CNSHD744730) Standard FITS files support extension HDU’s of types TABLE, IMAGE, BINTABLE, and A3DTABLE. Accessing a nonstandard extension from a FITS file will now create a _NonstandardExtHDU object. Accessing the data of this object will cause the data to be read from the file into a string. If the HDU is written back to a file the string data is written after the Header and padded to fill a standard 2880 byte FITS block.
The following bugs were fixed:
Updates described in this release are only supported in the NUMPY version of pyfits.
The following bugs were fixed:
Updates described in this release are only supported in the NUMPY version of pyfits.
The following enhancements were made:
Provide initial support for an image compression convention known as the “Tiled Image Compression Convention” [1].
The principle used in this convention is to first divide the n-dimensional image into a rectangular grid of subimages or “tiles”. Each tile is then compressed as a continuous block of data, and the resulting compressed byte stream is stored in a row of a variable length column in a FITS binary table. Several commonly used algorithms for compressing image tiles are supported. These include GZIP, RICE, H-Compress and IRAF pixel list (PLIO).
Support for compressed image data is provided using the optional “pyfitsComp” module contained in a C shared library (pyfitsCompmodule.so).
The header of a compressed image HDU appears to the user like any image header. The actual header stored in the FITS file is that of a binary table HDU with a set of special keywords, defined by the convention, to describe the structure of the compressed image. The conversion between binary table HDU header and image HDU header is all performed behind the scenes. Since the HDU is actually a binary table, it may not appear as a primary HDU in a FITS file.
The data of a compressed image HDU appears to the user as standard uncompressed image data. The actual data is stored in the fits file as Binary Table data containing at least one column (COMPRESSED_DATA). Each row of this variable-length column contains the byte stream that was generated as a result of compressing the corresponding image tile. Several optional columns may also appear. These include UNCOMPRESSED_DATA to hold the uncompressed pixel values for tiles that cannot be compressed, ZSCALE and ZZERO to hold the linear scale factor and zero point offset which may be needed to transform the raw uncompressed values back to the original image pixel values, and ZBLANK to hold the integer value used to represent undefined pixels (if any) in the image.
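The tiling principle can be sketched as follows, with zlib standing in for the convention’s GZIP codec and 16-bit big-endian pixels assumed for illustration (this is not the pyfitsComp implementation):

```python
import zlib

# A tiny 4x8 "image"; the default tiling treats each row as one tile.
image = [[row * 10 + col for col in range(8)] for row in range(4)]

def tile_bytes(tile):
    # Serialize one tile as a continuous block of 16-bit big-endian pixels.
    return b''.join(v.to_bytes(2, 'big') for v in tile)

# Compress each tile independently; each compressed byte stream would
# occupy one row of the variable-length COMPRESSED_DATA column.
compressed_rows = [zlib.compress(tile_bytes(tile)) for tile in image]

# Any single tile can be recovered without touching the others.
print(zlib.decompress(compressed_rows[2]) == tile_bytes(image[2]))  # True
```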
To create a compressed image HDU from scratch, simply construct a CompImageHDU object from an uncompressed image data array and its associated image header. From there, the HDU can be treated just like any image HDU:
>>> hdu=pyfits.CompImageHDU(imageData,imageHeader)
>>> hdu.writeto('compressed_image.fits')
The signature for the CompImageHDU initializer method describes the possible options for constructing a CompImageHDU object:
def __init__(self, data=None, header=None, name=None,
compressionType='RICE_1',
tileSize=None,
hcompScale=0.,
hcompSmooth=0,
quantizeLevel=16.):
"""
data: data of the image
header: header to be associated with the
image
name: the EXTNAME value; if this value
is None, then the name from the
input image header will be used;
if there is no name in the input
image header then the default name
'COMPRESSED_IMAGE' is used
compressionType: compression algorithm 'RICE_1',
'PLIO_1', 'GZIP_1', 'HCOMPRESS_1'
tileSize: compression tile sizes default
treats each row of image as a tile
hcompScale: HCOMPRESS scale parameter
hcompSmooth: HCOMPRESS smooth parameter
quantizeLevel: floating point quantization level;
"""
Added two new convenience functions. The setval function allows setting the value of a single header card in a FITS file. The delval function allows deleting a single header card from a FITS file.
A modification was made to allow reading data from a FITS file containing a table HDU that has duplicate field names. It is normally a requirement that the field names in a table HDU be unique. Prior to this change, a ValueError was raised when the data was accessed, to indicate that the HDU contained duplicate field names. Now a warning is issued and the field names are made unique in the internal record array. This will not change the TTYPEn header card values. You will be able to get the data from all fields using the field name, including the first field carrying the duplicated name. To access the data of the other fields with duplicated names you will need to use the field number instead of the field name. (CNSHD737193)
An enhancement was made to allow reading unsigned 16-bit integer values from an ImageHDU when the data is stored as signed 16-bit integers with BZERO equal to 32768 and BSCALE equal to 1 (the standard way of scaling unsigned 16-bit data). A new optional keyword argument (uint16) was added to the open convenience function. Supplying a value of True for this argument will cause data of this type to be read in and scaled into an unsigned 16-bit integer array, instead of a float32 array. If an HDU associated with a file that was opened with the uint16 option, and containing unsigned 16-bit integer data, is written to a file, the data will be reverse-scaled into a signed 16-bit integer array and written out to the file, and the BSCALE/BZERO header cards will be written with the values 1 and 32768 respectively. (CHSHD736064) Reference the following example:
>>> import pyfits
>>> hdul=pyfits.open('o4sp040b0_raw.fits',uint16=1)
>>> hdul[1].data
array([[1507, 1509, 1505, ..., 1498, 1500, 1487],
[1508, 1507, 1509, ..., 1498, 1505, 1490],
[1505, 1507, 1505, ..., 1499, 1504, 1491],
...,
[1505, 1506, 1507, ..., 1497, 1502, 1487],
[1507, 1507, 1504, ..., 1495, 1499, 1486],
[1515, 1507, 1504, ..., 1492, 1498, 1487]], dtype=uint16)
>>> hdul.writeto('tmp.fits')
>>> hdul1=pyfits.open('tmp.fits',uint16=1)
>>> hdul1[1].data
array([[1507, 1509, 1505, ..., 1498, 1500, 1487],
[1508, 1507, 1509, ..., 1498, 1505, 1490],
[1505, 1507, 1505, ..., 1499, 1504, 1491],
...,
[1505, 1506, 1507, ..., 1497, 1502, 1487],
[1507, 1507, 1504, ..., 1495, 1499, 1486],
[1515, 1507, 1504, ..., 1492, 1498, 1487]], dtype=uint16)
>>> hdul1=pyfits.open('tmp.fits')
>>> hdul1[1].data
array([[ 1507., 1509., 1505., ..., 1498., 1500., 1487.],
[ 1508., 1507., 1509., ..., 1498., 1505., 1490.],
[ 1505., 1507., 1505., ..., 1499., 1504., 1491.],
...,
[ 1505., 1506., 1507., ..., 1497., 1502., 1487.],
[ 1507., 1507., 1504., ..., 1495., 1499., 1486.],
[ 1515., 1507., 1504., ..., 1492., 1498., 1487.]], dtype=float32)
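The scaling performed above can be sketched in pure Python. This is an illustrative sketch, not the pyfits implementation; the helper names to_uint16 and to_int16 are hypothetical, while the BSCALE/BZERO values are the standard ones from the text:

```python
# Sketch of the unsigned 16-bit scaling convention: on disk the data are
# signed 16-bit integers with BZERO=32768 and BSCALE=1; applying
# physical = BSCALE * raw + BZERO yields the unsigned 16-bit values.

BZERO, BSCALE = 32768, 1

def to_uint16(raw_int16):
    """Scale raw signed 16-bit disk values to unsigned 16-bit physical values."""
    return [BSCALE * r + BZERO for r in raw_int16]

def to_int16(uint16_values):
    """Reverse-scale physical values back to signed 16-bit for writing."""
    return [(u - BZERO) // BSCALE for u in uint16_values]

raw = [-32768, -31261, 0, 32767]           # stored int16 values
physical = to_uint16(raw)
print(physical)                             # [0, 1507, 32768, 65535]
assert to_int16(physical) == raw            # round-trips exactly
```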
Enhanced the message generated when a ValueError exception is raised when attempting to access a header card with an unparsable value. The message now includes the Card name.
The following bugs were fixed:
Updates described in this release are only supported in the NUMPY version of pyfits.
The following enhancements were made:
The following bugs were fixed:
Updates described in this release are only supported in the NUMPY version of pyfits.
The following enhancements were made:
Added support for file objects and file-like objects.
Added support for record-valued keyword cards as introduced in the “FITS WCS Paper IV proposal for representing a more general distortion model”.
Record-valued keyword cards are string-valued cards where the string is interpreted as a definition giving a record field name, and its floating point value. In a FITS header they have the following syntax:
keyword = 'field-specifier: float'
where keyword is a standard eight-character FITS keyword name, float is the standard FITS ASCII representation of a floating point number, and these are separated by a colon followed by a single blank.
The grammar for field-specifier is:
field-specifier:
field
field-specifier.field
field:
identifier
identifier.index
where identifier is a sequence of letters (upper or lower case), underscores, and digits of which the first character must not be a digit, and index is a sequence of digits. No blank characters may occur in the field-specifier. The index is provided primarily for defining array elements though it need not be used for that purpose.
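The grammar above can be expressed compactly as a regular expression. This is a hypothetical validator for illustration; the real pyfits parser may be implemented differently, and the helper name is_field_specifier is an assumption:

```python
import re

# identifier: a letter or underscore followed by letters, digits, underscores
# field:      identifier, optionally followed by '.' and a digit index
# field-specifier: one or more fields joined by '.'
_IDENT = r'[A-Za-z_][A-Za-z0-9_]*'
_FIELD = r'{i}(\.\d+)?'.format(i=_IDENT)
_FIELD_SPEC = re.compile(r'^{f}(\.{f})*$'.format(f=_FIELD))

def is_field_specifier(s):
    """Return True if s is a valid field-specifier under the grammar above."""
    return bool(_FIELD_SPEC.match(s))

print(is_field_specifier('NAXIS'))          # True
print(is_field_specifier('AUX.1.COEFF.0'))  # True
print(is_field_specifier('1AXIS'))          # False: starts with a digit
print(is_field_specifier('AXIS. 1'))        # False: no blanks allowed
```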
Multiple record-valued keywords of the same name but differing values may be present in a FITS header. The field-specifier may be viewed as part of the keyword name.
Some examples follow:
DP1 = 'NAXIS: 2'
DP1 = 'AXIS.1: 1'
DP1 = 'AXIS.2: 2'
DP1 = 'NAUX: 2'
DP1 = 'AUX.1.COEFF.0: 0'
DP1 = 'AUX.1.POWER.0: 1'
DP1 = 'AUX.1.COEFF.1: 0.00048828125'
DP1 = 'AUX.1.POWER.1: 1'
As with standard header cards, the value of a record-valued keyword card can be accessed using either the index of the card in an HDU's header or via the keyword name. When accessing by keyword name, the user may specify just the card keyword, or the card keyword followed by a period followed by the field-specifier. Note that while the card keyword is case insensitive, the field-specifier is not. Thus, hdu['abc.def'], hdu['ABC.def'], and hdu['aBc.def'] are all equivalent but hdu['ABC.DEF'] is not.
When accessed using the card index of the HDU’s header the value returned will be the entire string value of the card. For example:
>>> print hdr[10]
NAXIS: 2
>>> print hdr[11]
AXIS.1: 1
When accessed using the keyword name exclusive of the field-specifier, the entire string value of the header card with the lowest index having that keyword name will be returned. For example:
>>> print hdr['DP1']
NAXIS: 2
When accessing using the keyword name and the field-specifier, the value returned will be the floating point value associated with the record-valued keyword card. For example:
>>> print hdr['DP1.NAXIS']
2.0
Any attempt to access a non-existent record-valued keyword card value will cause an exception to be raised (IndexError exception for index access or KeyError for keyword name access).
Updating the value of a record-valued keyword card can also be accomplished using either index or keyword name. For example:
>>> print hdr['DP1.NAXIS']
2.0
>>> hdr['DP1.NAXIS'] = 3.0
>>> print hdr['DP1.NAXIS']
3.0
Adding a new record-valued keyword card to an existing header is accomplished using the Header.update() method just like any other card. For example:
>>> hdr.update('DP1', 'AXIS.3: 1', 'a comment', after='DP1.AXIS.2')
Deleting a record-valued keyword card from an existing header is accomplished using the standard list deletion syntax just like any other card. For example:
>>> del hdr['DP1.AXIS.1']
In addition to accessing record-valued keyword cards individually using a card index or keyword name, cards can be accessed in groups using a set of special pattern matching keys. This access is made available via the standard list indexing operator, by providing a keyword name string that contains one or more of the special pattern matching keys. Instead of returning a value, a CardList object will be returned containing shared instances of the Cards in the header that match the given keyword specification.
There are three special pattern matching keys. The first key, '*', will match any string of zero or more characters within the current level of the field-specifier. The second key, '?', will match a single character. The third key, '...', must appear at the end of the keyword name string and will match all keywords that match the preceding pattern down all levels of the field-specifier. All combinations of ?, *, and ... are permitted (though ... is only permitted at the end). Some examples follow:
>>> cl=hdr['DP1.AXIS.*']
>>> print cl
DP1 = 'AXIS.1: 1'
DP1 = 'AXIS.2: 2'
>>> cl=hdr['DP1.*']
>>> print cl
DP1 = 'NAXIS: 2'
DP1 = 'NAUX: 2'
>>> cl=hdr['DP1.AUX...']
>>> print cl
DP1 = 'AUX.1.COEFF.0: 0'
DP1 = 'AUX.1.POWER.0: 1'
DP1 = 'AUX.1.COEFF.1: 0.00048828125'
DP1 = 'AUX.1.POWER.1: 1'
>>> cl=hdr['DP?.NAXIS']
>>> print cl
DP1 = 'NAXIS: 2'
DP2 = 'NAXIS: 2'
DP3 = 'NAXIS: 2'
>>> cl=hdr['DP1.A*S.*']
>>> print cl
DP1 = 'AXIS.1: 1'
DP1 = 'AXIS.2: 2'
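The matching behavior shown above can be approximated by translating the three special keys into a regular expression. This is a hypothetical sketch for illustration; pyfits' own matcher may be implemented differently, and pattern_to_regex is an assumed helper name:

```python
import re

# '*' matches within one field-specifier level (so it cannot cross a '.'),
# '?' matches exactly one character, and a trailing '...' matches the
# preceding pattern down all remaining levels of the field-specifier.

def pattern_to_regex(pattern):
    deep = pattern.endswith('...')
    if deep:
        pattern = pattern[:-3]
    body = re.escape(pattern).replace(r'\*', r'[^.]*').replace(r'\?', '.')
    if deep:
        body += r'(\.[^ ]*)?'   # descend any remaining levels
    return re.compile('^' + body + '$')

keys = ['DP1.NAXIS', 'DP1.AXIS.1', 'DP1.AXIS.2', 'DP1.NAUX',
        'DP1.AUX.1.COEFF.0', 'DP2.NAXIS']

rx = pattern_to_regex('DP1.AXIS.*')
print([k for k in keys if rx.match(k)])   # ['DP1.AXIS.1', 'DP1.AXIS.2']

rx = pattern_to_regex('DP1.AUX...')
print([k for k in keys if rx.match(k)])   # ['DP1.AUX.1.COEFF.0']
```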
The use of the special pattern matching keys for adding or updating header cards in an existing header is not allowed. However, the deletion of cards from the header using the special keys is allowed. For example:
>>> del hdr['DP3.A*...']
As noted above, accessing a pyfits Header object using the special pattern matching keys will return a CardList object. This CardList object can itself be searched in order to further refine the list of Cards. For example:
>>> cl=hdr['DP1...']
>>> print cl
DP1 = 'NAXIS: 2'
DP1 = 'AXIS.1: 1'
DP1 = 'AXIS.2: 2'
DP1 = 'NAUX: 2'
DP1 = 'AUX.1.COEFF.1: 0.000488'
DP1 = 'AUX.2.COEFF.2: 0.00097656'
>>> cl1=cl['*.*AUX...']
>>> print cl1
DP1 = 'NAUX: 2'
DP1 = 'AUX.1.COEFF.1: 0.000488'
DP1 = 'AUX.2.COEFF.2: 0.00097656'
The CardList keys() method allows retrieval of all of the keys in the CardList. For example:
>>> cl=hdr['DP1.AXIS.*']
>>> print cl
DP1 = 'AXIS.1: 1'
DP1 = 'AXIS.2: 2'
>>> cl.keys()
['DP1.AXIS.1', 'DP1.AXIS.2']
The CardList values() method allows retrieval of all of the values in the CardList. For example:
>>> cl=hdr['DP1.AXIS.*']
>>> print cl
DP1 = 'AXIS.1: 1'
DP1 = 'AXIS.2: 2'
>>> cl.values()
[1.0, 2.0]
Individual cards can be retrieved from the list using standard list indexing. For example:
>>> cl=hdr['DP1.AXIS.*']
>>> c=cl[0]
>>> print c
DP1 = 'AXIS.1: 1'
>>> c=cl['DP1.AXIS.2']
>>> print c
DP1 = 'AXIS.2: 2'
Individual card values can be retrieved from the list using the value attribute of the card. For example:
>>> cl=hdr['DP1.AXIS.*']
>>> cl[0].value
1.0
The cards in the CardList are shared instances of the cards in the source header. Therefore, modifying a card in the CardList also modifies it in the source header. However, making an addition or a deletion to the CardList will not affect the source header. For example:
>>> hdr['DP1.AXIS.1']
1.0
>>> cl=hdr['DP1.AXIS.*']
>>> cl[0].value = 4.0
>>> hdr['DP1.AXIS.1']
4.0
>>> del cl[0]
>>> print cl['DP1.AXIS.1']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "NP_pyfits.py", line 977, in __getitem__
return self.ascard[key].value
File "NP_pyfits.py", line 1258, in __getitem__
_key = self.index_of(key)
File "NP_pyfits.py", line 1403, in index_of
raise KeyError, 'Keyword %s not found.' % `key`
KeyError: "Keyword 'DP1.AXIS.1' not found."
>>> hdr['DP1.AXIS.1']
4.0
A FITS header consists of card images. In pyfits each card image is manifested by a Card object. A pyfits Header object contains a list of Card objects in the form of a CardList object. A record-valued keyword card image is represented in pyfits by a RecordValuedKeywordCard object. This object inherits from a Card object and has all of the methods and attributes of a Card object.
A new RecordValuedKeywordCard object is created with the RecordValuedKeywordCard constructor: RecordValuedKeywordCard(key, value, comment). The key and value arguments may be specified in two ways. The key value may be given as the 8 character keyword only, in which case the value must be a character string containing the field-specifier, a colon followed by a space, followed by the actual value. The second option is to provide the key as a string containing the keyword and field-specifier, in which case the value must be the actual floating point value. For example:
>>> c1 = pyfits.RecordValuedKeywordCard('DP1', 'NAXIS: 2', 'Number of variables')
>>> c2 = pyfits.RecordValuedKeywordCard('DP1.AXIS.1', 1.0, 'Axis number')
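The two calling conventions shown above can be normalized to the same triple of keyword, field-specifier, and floating point value. The following is an illustrative sketch, not the actual pyfits constructor logic, and normalize is a hypothetical helper:

```python
# Sketch: reduce both RecordValuedKeywordCard calling conventions to
# (keyword, field_specifier, float value).

def normalize(key, value):
    if '.' in key:
        # Convention 2: key carries 'KEYWORD.field-specifier', value is a float.
        keyword, field_specifier = key.split('.', 1)
        return keyword, field_specifier, float(value)
    # Convention 1: key is the bare keyword; the field-specifier and value
    # are packed into the string as 'field-specifier: value'.
    field_specifier, _, raw = value.partition(': ')
    return key, field_specifier, float(raw)

print(normalize('DP1', 'NAXIS: 2'))     # ('DP1', 'NAXIS', 2.0)
print(normalize('DP1.AXIS.1', 1.0))     # ('DP1', 'AXIS.1', 1.0)
```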
RecordValuedKeywordCards have attributes .key, .field_specifier, .value, and .comment. Both .value and .comment can be changed but not .key or .field_specifier. The constructor will extract the field-specifier from the input key or value, whichever is appropriate. The .key attribute is the 8 character keyword.
Just like standard Cards, a RecordValuedKeywordCard may be constructed from a string using the fromstring() method or verified using the verify() method. For example:
>>> c1 = pyfits.RecordValuedKeywordCard().fromstring(
"DP1 = 'NAXIS: 2' / Number of independent variables")
>>> c2 = pyfits.RecordValuedKeywordCard().fromstring(
"DP1 = 'AXIS.1: X' / Axis number")
>>> print c1; print c2
DP1 = 'NAXIS: 2' / Number of independent variables
DP1 = 'AXIS.1: X' / Axis number
>>> c2.verify()
Output verification result:
Card image is not FITS standard (unparsable value string).
A standard card that meets the criteria of a RecordValuedKeywordCard may be turned into a RecordValuedKeywordCard using the class method coerce. If the card object does not meet the required criteria then the original card object is just returned.
>>> c1 = pyfits.Card('DP1','AUX: 1','comment')
>>> c2 = pyfits.RecordValuedKeywordCard.coerce(c1)
>>> print type(c2)
<'pyfits.NP_pyfits.RecordValuedKeywordCard'>
Two other card creation methods are also available as RecordValuedKeywordCard class methods. These are createCard(), which will create the appropriate card object (Card or RecordValuedKeywordCard) given input key, value, and comment, and createCardFromString(), which will create the appropriate card object given an input string. These two methods are also available as convenience functions:
>>> c1 = pyfits.RecordValuedKeywordCard.createCard('DP1','AUX: 1','comment')
or
>>> c1 = pyfits.createCard('DP1','AUX: 1','comment')
>>> print type(c1)
<'pyfits.NP_pyfits.RecordValuedKeywordCard'>
>>> c1 = pyfits.RecordValuedKeywordCard.createCard('DP1','AUX 1','comment')
or
>>> c1 = pyfits.createCard('DP1','AUX 1','comment')
>>> print type(c1)
<'pyfits.NP_pyfits.Card'>
>>> c1 = pyfits.RecordValuedKeywordCard.createCardFromString \
("DP1 = 'AUX: 1.0' / comment")
or
>>> c1 = pyfits.createCardFromString("DP1 = 'AUX: 1.0' / comment")
>>> print type(c1)
<'pyfits.NP_pyfits.RecordValuedKeywordCard'>
The following bugs were fixed:
Updates described in this release are only supported in the NUMPY version of pyfits.
The following enhancements were made:
Provided support for a new extension to pyfits called stpyfits.
Added a new feature to allow trailing HDUs to be deleted from a FITS file without actually reading the data from the file.
Updated pyfits to use the warnings module to issue warnings. All warnings will still be issued to stdout, exactly as they were before; however, you may now suppress warnings with the -Wignore command line option. For example, to run a script and ignore warnings, use the following command line syntax:
python -Wignore yourscript.py
Updated the open convenience function to allow the input of an already opened file object in place of a file name when opening a FITS file.
Updated the writeto convenience function to allow it to accept the output_verify option.
Updated the verification code to provide additional detail with a VerifyError exception.
Added the capability to create a binary table HDU directly from a numpy.ndarray. This may be done using either the new_table convenience function or the BinTableHDU constructor.
The following performance improvements were made:
The following bugs were fixed:
The changes to PyFITS were primarily to improve the docstrings and to reclassify some public functions and variables as private. Readgeis and fitsdiff which were distributed with PyFITS in previous releases were moved to pytools. This release of PyFITS is v1.0.1. The next release of PyFITS will support both numarray and numpy (and will be available separately from stsci_python, as are all the python packages contained within stsci_python). An alpha release for PyFITS numpy support will be made around the time of this stsci_python release.
Major Changes since v0.9.6:
Minor changes since v0.9.6:
PyFITS Version 1.0 REQUIRES Python 2.3 or later.
Major changes since v0.9.3:
Some minor changes:
Changes since v0.9.0:
Changes since v0.8.0:
NOTE: This version will only work with numarray Version 0.6. In addition, earlier versions of PyFITS will not work with numarray 0.6. Therefore, both must be updated simultaneously.
Changes since 0.7.6:
0.7.6 (2002-11-22)
NOTE: This version will only work with numarray Version 0.4.
Changes since 0.7.5:
Change x *= n to numarray.multiply(x, n, x), where n is a floating point number, in order to make pyfits work under Python 2.2. (2 occurrences)
Modify the “update” method in the Header class to use the “fixed-format” card even if the card already exists. This is to avoid the mis-alignment as shown below:
After running drizzle on ACS images it creates a CD matrix whose elements have very many digits, e.g.:
CD1_1 = 1.1187596304411E-05 / partial of first axis coordinate w.r.t. x
CD1_2 = -8.502767249350019E-06 / partial of first axis coordinate w.r.t. y
With pyfits, an “update” of these header items writes in new values that have fewer digits, e.g.:
CD1_1 = 1.0963011E-05 / partial of first axis coordinate w.r.t. x
CD1_2 = -8.527229E-06 / partial of first axis coordinate w.r.t. y
Change some internal variables to make their appearance more consistent:
old name          new name
__octalRegex      _octalRegex
__readblock()     _readblock()
__formatter()     _formatter()
__value_RE        _value_RE
__numr            _numr
__comment_RE      _comment_RE
__keywd_RE        _keywd_RE
__number_RE       _number_RE
tmpName()         _tmpName()
dimShape          _dimShape
ErrList           _ErrList
Move up the module description. Move the copyright statement to the bottom and assign it to the variable __credits__.
change the following line:
self.__dict__ = input.__dict__
to
self.__setstate__(input.__getstate__())
in order for pyfits to run under numarray 0.4.
Edit _readblock to add the (optional) firstblock argument and raise IOError if the first 8 characters in the first block are not 'SIMPLE ' or 'XTENSION'. Edit the function open to check for IOError to skip the last null-filled block(s). Edit readHDU to add the firstblock argument.
Changes since v0.7.3:
Memory mapping now works for readonly mode, both for images and binary tables.
Usage: pyfits.open('filename', memmap=1)
Edit the field method in the FITS_rec class to make the column scaling for numbers use less temporary memory. (Does not work under Python 2.2, due to a Python “bug” involving array *=.)
Delete bscale/bzero in the ImageBaseHDU constructor.
Update bitpix in BaseImageHDU.__getattr__ after deleting bscale/bzero. (bug fix)
In BaseImageHDU.__getattr__ point self.data to raw_data if float and if not memmap. (bug fix).
Change the function get_tbdata() to private: _get_tbdata().
Changes since v0.7.2:
It will scale all integer image data to Float32 if BSCALE/BZERO != 1/0. It will also expunge the BSCALE/BZERO keywords.
Add the scale() method for ImageBaseHDU, so data can be scaled just before being written to the file. It has the following arguments:
type: destination data type (string), e.g. Int32, Float32, UInt8, etc.
option: scaling scheme. if ‘old’, use the old BSCALE/BZERO values. if ‘minmax’, use the data range to fit into the full range of specified integer type. Float destination data type will not be scaled for this option.
bscale/bzero: user specifiable BSCALE/BZERO values. They overwrite the “option”.
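The 'minmax' option described above can be sketched as follows. This is an assumed sketch of the scheme, not the actual scale() implementation; minmax_scale_params is a hypothetical helper, and real code may round or handle edge cases differently:

```python
# Assumed sketch of the 'minmax' scheme: choose BSCALE/BZERO so the data
# range [dmin, dmax] fills the full range of the destination integer type;
# values would then be stored as (value - BZERO) / BSCALE.

def minmax_scale_params(dmin, dmax, bits=16, signed=True):
    lo = -(2 ** (bits - 1)) if signed else 0
    hi = 2 ** (bits - 1) - 1 if signed else 2 ** bits - 1
    bscale = (dmax - dmin) / (hi - lo)
    bzero = dmin - bscale * lo      # maps dmin onto the lowest integer
    return bscale, bzero

bscale, bzero = minmax_scale_params(0.0, 655.35, bits=16, signed=False)
print(round(bscale, 4), round(bzero, 4))   # 0.01 0.0
```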
Deal with data area resizing in ‘update’ mode.
Make the data scaling (both input and output) faster and use less memory.
Bug fix to make a column name change take effect for field.
Bug fix to avoid exception if the key is not present in the header already. This affects (fixes) add_history(), add_comment(), and add_blank().
Bug fix in __getattr__() in the Card class. The change made in 0.7.2 to rstrip the comment requires the comment to be of string type, to avoid an exception.
A couple of bugs were addressed in this version.
The two major improvements from Version 0.6.2 are:
This version of PyFITS requires numarray version 0.3.4.
Other changes include:
There are also many other minor internal bug fixes and technical changes.
This version requires numarray version 0.2.
Things not yet supported but are part of future development: