If you are unable to access the links to the indicated papers, see my publications
page for details.
I have a variety of projects on the go. Here's a synopsis:
In computer vision, texture analysis is the science of interpreting textural
patterns in digital imagery. This might involve classification of
textures, segmentation of textured images, or depth interpretation
from textural cues. For examples of texture segmentations
generated separately from a preferred Gabor
filter bank and from co-occurrence
probabilities, each using a novel
clustering scheme, look here. A
method to improve Markov random field texture features with respect to
rotational invariance is presented here.
An investigation of the role of grey level quantization in the
application of co-occurrence probability texture features is presented here. The
role of speckle reduction in the application of co-occurrence probability
texture features to SAR (synthetic aperture radar) images of forested regions
is presented in this paper.
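To give a concrete sense of the Gabor filter banks mentioned above, here is a minimal sketch. It is not the filter design used in the papers; the kernel size, wavelength, and orientations are illustrative assumptions. Each kernel is a sinusoid at a given orientation under a Gaussian envelope, and the magnitude of the filter response serves as a texture feature.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor kernel: a sinusoid at angle `theta`
    modulated by an isotropic Gaussian envelope. Parameters here are
    illustrative, not those used in the cited papers."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the sinusoid runs along `theta`.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength)
    return envelope * carrier

# A small bank: four orientations at a single spatial frequency.
bank = [gabor_kernel(15, wavelength=6.0, theta=t, sigma=3.0)
        for t in np.linspace(0, np.pi, 4, endpoint=False)]

# Feature extraction at a single pixel for brevity: the magnitude of
# each filter's response to an image patch is one texture feature.
image = np.random.default_rng(0).random((64, 64))
patch = image[24:39, 24:39]  # same size as the kernels
features = [float(np.abs((patch * k).sum())) for k in bank]
```

In practice the responses are computed at every pixel by convolution, and a full bank spans several spatial frequencies as well as orientations.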
There are numerous examples in the research literature of theoretical developments on individual texture analysis methods. This leads to two shortcomings: (1) very few papers compare different texture methods, especially for image segmentation, and (2) there are few solid examples of texture analysis used in pragmatic, operational settings. Addressing (1), we have investigated the roles of different texture operators (Gabor filters, Markov random fields, co-occurrence probabilities) in the classification of SAR sea ice imagery, and compared co-occurrence probabilities against Markov random fields for its segmentation. Addressing (2), one of our goals is to learn how to apply texture theory in operational environments, namely the operational interpretation of remotely sensed imagery. More specifically, we are working on techniques to generate segmentations of large scale radar-based (SAR, or synthetic aperture radar) satellite images of sea ice (see below).
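As a concrete illustration of the co-occurrence probability features referred to throughout this page, here is a minimal sketch. The displacement, number of grey levels, and chosen statistics are illustrative assumptions, not the configurations studied in the papers:

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Grey level co-occurrence matrix for displacement (dx, dy),
    normalised to a joint probability distribution."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def glcm_features(P):
    """Three common co-occurrence statistics."""
    i, j = np.indices(P.shape)
    contrast = float((P * (i - j) ** 2).sum())
    energy = float((P ** 2).sum())
    homogeneity = float((P / (1.0 + np.abs(i - j))).sum())
    return contrast, energy, homogeneity

# Quantize a test image to a small number of grey levels first -- the
# role of this quantization is one of the questions investigated above.
rng = np.random.default_rng(1)
img = (rng.random((32, 32)) * 8).astype(int)   # 8 grey levels
P = glcm(img, dx=1, dy=0, levels=8)
contrast, energy, homogeneity = glcm_features(P)
```

Classification or segmentation then operates on such feature vectors computed over local windows, typically for several displacements and orientations.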
SAR sea ice image products are important for two communities. From a commercial perspective, knowledge of sea ice types and thicknesses assists ship navigation and ice breakers. Scientifically, such information is necessary to monitor ice volumes, which assists in determining parameters used in modelling environmental warming.
We are trying to build computer vision algorithms to assist the creation of products that can be used to support navigation and interpretation of ice-infested waters. This is a very challenging problem given limitations of the sensor as well as the natural complications of the imaged environment. One needs a strong understanding of computer vision theory coupled with a strong understanding of remote sensing systems to properly tackle this problem.
This project contains a number of aspects: (1) development of dedicated texture operators for distinguishing various ice types in SAR sea ice imagery (see above), (2) development of methods to extract sea ice floe boundaries from the SAR data, and (3) development of a segmentation scheme to assist the operator in classifying a SAR sea ice scene. We are trying to build operational algorithms for use at the Canadian Ice Services (CIS) that can improve the speed of processing and reduce operator bias.
This project is being completed in collaboration with the Canadian Ice Services (CIS).
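To give a flavour of unsupervised segmentation in this setting, here is a deliberately simple sketch: k-means clustering of per-pixel values separating two synthetic "ice types". This is a hypothetical stand-in, not the CIS segmentation scheme; an operational system would cluster multi-dimensional texture feature vectors rather than raw grey levels.

```python
import numpy as np

def kmeans_segment(features, k, iters=20, seed=0):
    """Minimal k-means clustering of per-pixel feature vectors.
    A stand-in illustration only, not the project's method."""
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    centres = features[rng.choice(n, k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest cluster centre.
        d = np.linalg.norm(features[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centres[j] = features[labels == j].mean(axis=0)
    return labels

# Two synthetic "ice types": a dark half and a bright half, with noise.
rng = np.random.default_rng(3)
img = rng.normal(0.2, 0.05, (32, 32))
img[:, 16:] = rng.normal(0.8, 0.05, (32, 16))
labels = kmeans_segment(img.reshape(-1, 1), k=2).reshape(32, 32)
```

With well-separated classes the two halves of the image receive distinct labels; real SAR scenes are far harder because ice classes overlap in tone and are distinguishable mainly by texture.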
Orthopaedic surgeons require methods to
accurately landmark spinal vertebrae for screw insertions. Computed tomography
(CT) provides such a means; however, it uses X-rays, which can have
long-term effects on patients and practitioners. Magnetic resonance imaging
(MRI) provides an alternative non-invasive means of landmarking
bone without using X-rays. However, the contrast between bone and the
surrounding tissue is poorly represented in MRI. This project is investigating the
segmentation of MRI spine images for the purpose of generating a 3-D model of
the vertebrae. Accuracy checks will be performed using CT images of the same
region. See the following paper
for a starting point on this work.
This work involves collaborative research into the biomechanics of epithelial cell sheets. Using time lapse sequences of cell sheets undergoing known transverse forces, we are able to see individual cells change shape. We are interested in producing an automated means of estimating the bulk geometrical cell shape changes in such a situation. This is a difficult problem due to the noisy images, pigmentation variations, illumination variations, mitosis, etc. The use of the spatial-frequency domain seems to facilitate determination of the bulk geometrical parameters of interest (aspect ratio, major and minor lengths, and orientation) better than the spatial methods that were attempted. With an understanding of the cell shape changes, our colleagues expect that they will be able to assess the mechanical properties of the cell sheet. This is a difficult task due to the sub-millimetre size of the cell sheets, but the mechanical properties are critical information if one wants to simulate the dynamic shape changes that we observe in a developing embryo. This research is in support of work into determining the biomechanical causes of birth defects, such as spina bifida.
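A sketch of the spatial-frequency idea: an image elongated along some orientation has a power spectrum elongated along the perpendicular orientation, so second moments of the centred spectrum recover the bulk orientation (aspect ratio follows similarly from the moment ratios). This is a minimal illustration on a synthetic blob, not the method used in the project:

```python
import numpy as np

def bulk_orientation(img):
    """Estimate the dominant orientation of elongated structure from
    second moments of the centred power spectrum. An image elongated
    along angle theta has a spectrum elongated along theta + 90 deg,
    so we take the spectral major axis and rotate back 90 degrees.
    The result is ambiguous modulo pi, as any orientation is."""
    F = np.fft.fftshift(np.abs(np.fft.fft2(img - img.mean())) ** 2)
    h, w = F.shape
    y, x = np.indices(F.shape)
    y = y - h // 2
    x = x - w // 2
    m = F.sum()
    mxx = (F * x * x).sum() / m
    myy = (F * y * y).sum() / m
    mxy = (F * x * y).sum() / m
    # Major-axis angle of the spectral ellipse from second moments.
    theta_spec = 0.5 * np.arctan2(2.0 * mxy, mxx - myy)
    return theta_spec + np.pi / 2  # back to image-space orientation

# Demo: a Gaussian blob elongated along the x axis (orientation 0).
yy, xx = np.indices((64, 64))
blob = np.exp(-((xx - 32) ** 2 / 200.0 + (yy - 32) ** 2 / 8.0))
theta = bulk_orientation(blob) % np.pi
```

Working in the frequency domain aggregates over the whole field, which is part of why it tolerates the pigmentation and illumination variations better than per-cell spatial fitting.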
Plasma display panels (PDPs) are quickly replacing cathode
ray tubes (CRTs) as the primary display devices in industry. Large PDPs
are very expensive to produce and, given the high resolution requirements of
assessing defects in the PDPs, require machine-based interpretation methods.
In conjunction with Japanese companies, a number of algorithms have been
designed and implemented to support their automatic assembly line inspection
and recognition of defects in PDPs. A review of the industry and the
techniques and methodologies employed is found in Renyan Ge's thesis.
Image registration is important in remote sensing to overlay
images of the same scene, but perhaps taken with different types of sensors.
We are mostly interested in trying to develop computer algorithms that
automatically determine tie points (ground control points or GCPs) from SAR
(synthetic aperture radar) and visible band scenes. Radar-based systems
capture different relative radiometric information compared to visible band
systems; hence, the use of absolute grey levels for automatic registration is
not warranted. Methods that only consider shapes in the images under
consideration have been developed (described here).
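The shape-only idea can be sketched as follows: match on edge strength (gradient magnitude) rather than raw grey levels, so that nonlinear radiometric differences between the SAR and visible images drop out. The synthetic scene, template size, and brute-force matcher here are illustrative assumptions, not the published method:

```python
import numpy as np

def gradient_magnitude(img):
    """Simple finite-difference edge strength -- shape information that
    survives radiometric differences between SAR and visible bands."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def find_tie_point(ref, tpl):
    """Locate template `tpl` in `ref` by normalised cross-correlation
    of edge maps rather than of raw grey levels. Returns the (row, col)
    of the best-matching window origin."""
    R, T = gradient_magnitude(ref), gradient_magnitude(tpl)
    T = (T - T.mean()) / (T.std() + 1e-12)
    th, tw = T.shape
    best, best_score = (0, 0), -np.inf
    for y in range(R.shape[0] - th + 1):
        for x in range(R.shape[1] - tw + 1):
            W = R[y:y + th, x:x + tw]
            W = (W - W.mean()) / (W.std() + 1e-12)
            score = (W * T).mean()
            if score > best_score:
                best, best_score = (y, x), score
    return best

# Synthetic check: the "visible" scene is a nonlinear, inverted
# radiometric remapping of the "SAR" scene, so raw grey levels
# disagree everywhere, but the edge structure matches.
rng = np.random.default_rng(2)
sar = rng.random((40, 40)) * 0.1
sar[8:24, 12:30] += 0.8          # one bright region (an "ice floe")
visible = 1.0 - np.sqrt(sar)     # different radiometry, same shapes
tpl = visible[4:20, 8:24]        # chip whose true origin is (4, 8)
tie = find_tie_point(sar, tpl)
```

Direct grey-level correlation would fail here because the two images are negatively and nonlinearly related; the edge maps, by contrast, peak at the correct offset.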
The above projects are dedicated initiatives. In general, I am interested in computer vision methods (in the fields of image processing and pattern recognition), with special emphasis on texture analysis, image understanding, wavelet applications, and clustering algorithms.
If you are interested in pursuing graduate studies or a post-doc in any of the above fields at the University of Waterloo, just drop me a line: dclausi 'at' engmail.uwaterloo.ca
Back to David Clausi's home page.