Abstract
This paper presents a lossy, wavelet-based approach to the compression of digital angiogram video. An analysis of the high-frequency sub-bands of a wavelet decomposition of an angiogram image reveals sizeable regions that contain no diagnostically important information. Encoding the high-frequency sub-band wavelet coefficients in such regions is costly in bit-rate terms, yet if they are simply discarded their absence is perceptually noticeable. This paper models these wavelet coefficients using a texture modeling approach. Modeling is applied only in regions considered diagnostically unimportant; diagnostically important regions are encoded as normal. This procedure significantly reduces the bit-rate of diagnostically unimportant areas of the image without a perceptible loss of image quality. The effectiveness of the algorithm at different bit-rates is assessed by a consultant cardiologist, with the key aim of identifying any degradation in the diagnostic content of the images.
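As a rough illustration of the general idea only, the sketch below decomposes a frame with PyWavelets, replaces the high-frequency detail coefficients inside a region mask with variance-matched noise as a crude stand-in for the paper's texture model, and reconstructs the frame. The `frame` array, the `unimportant` mask, the wavelet choice, and the decomposition level are all assumptions, not details taken from the paper.

```python
# A minimal sketch, assuming NumPy and PyWavelets. This is NOT the authors'
# texture model: detail coefficients in "unimportant" regions are simply
# replaced with variance-matched Gaussian noise instead of being coded.
import numpy as np
import pywt


def _resize_mask(mask, shape):
    """Nearest-neighbour resize of a boolean mask to a sub-band's shape."""
    rows = np.minimum(np.arange(shape[0]) * mask.shape[0] // shape[0], mask.shape[0] - 1)
    cols = np.minimum(np.arange(shape[1]) * mask.shape[1] // shape[1], mask.shape[1] - 1)
    return mask[np.ix_(rows, cols)]


def synthesize_unimportant_details(frame, unimportant, wavelet="bior4.4", level=3):
    """Replace high-frequency coefficients in masked regions with noise.

    frame       : 2-D array, one angiogram frame (hypothetical input).
    unimportant : 2-D boolean array, True where no diagnostic content exists.
    """
    coeffs = pywt.wavedec2(frame.astype(float), wavelet, level=level)
    rng = np.random.default_rng(0)
    out = [coeffs[0]]                        # keep the approximation band intact
    for bands in coeffs[1:]:                 # (cH, cV, cD) at each level
        new_bands = []
        for band in bands:
            band = band.copy()
            mask = _resize_mask(unimportant, band.shape)
            if mask.any():
                sigma = band[mask].std()     # match the local coefficient spread
                band[mask] = rng.normal(0.0, sigma, size=int(mask.sum()))
            new_bands.append(band)
        out.append(tuple(new_bands))
    return pywt.waverec2(out, wavelet)       # reconstructed frame
```

In the paper, only the diagnostically unimportant regions would bypass coefficient coding in favour of the texture model; diagnostically important regions are encoded as normal.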
| Original language | English |
|---|---|
| Pages (from-to) | 126-134 |
| Number of pages | 9 |
| Journal | Proceedings of SPIE - The International Society for Optical Engineering |
| Volume | 4299 |
| DOIs | |
| Publication status | Published - 1 Jan 2001 |
| Event | Human Vision and Electronic Imaging VI - San Jose, United States. Duration: 20 Jan 2001 → … |
Keywords
- angiogram
- texture
- compression
- wavelet
- video