An image may be interpreted as a graph at different levels, where the graph nodes are pixels, pixel regions, or image objects. Even an image database may be interpreted as a graph whose nodes are the images. In any case, each node is represented by a set of image attributes and the arcs describe binary relations between nodes. In this scenario, image operators can take advantage of a rich literature of graph algorithms with proofs of correctness. Our aim in this project has been to reduce image processing and analysis problems to graph problems with known algorithms. We are also interested in developing new graph algorithms and in extending known ones to solve real problems from several applications. The Image Foresting Transform (IFT) has been the standard methodology to reduce image partition problems, such as image segmentation, pixel clustering, and pixel classification, to an optimum-path forest problem in a graph, by the choice of a suitable connectivity function (path-value function in the graph). The IFT has also been used in the design of connected filters, distance transforms, multiscale shape representations, shape descriptors, geodesic transformations, boundary-tracking operators, and fuzzy object models. Its extension from the image domain to the feature space allows the design of optimum-path forest classifiers and of several machine learning algorithms for them.
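The core of the IFT can be sketched as a Dijkstra-like propagation from seed nodes, where the connectivity function replaces the usual sum of arc weights. The following is a minimal, simplified sketch (not the full published algorithm), assuming a 4-neighbor pixel graph and the f_max connectivity function, which yields a watershed-like partition; the function name and the representation of arc weights as a 2D list of pixel values are illustrative choices:

```python
import heapq

def ift_fmax(weights, seeds):
    """Minimal IFT sketch on a 4-neighbor pixel graph.

    weights: 2D list of arc weights (here, gradient-like pixel values);
    seeds:   dict mapping (row, col) seed pixels to their labels.
    Uses the f_max connectivity function: the value of a path is the
    maximum arc weight along it; each pixel receives the label of the
    seed that reaches it through the optimum (minimum f_max) path.
    """
    rows, cols = len(weights), len(weights[0])
    cost = {(r, c): float('inf') for r in range(rows) for c in range(cols)}
    label = {}
    heap = []
    for s, lab in seeds.items():
        cost[s] = 0
        label[s] = lab
        heapq.heappush(heap, (0, s))
    while heap:
        cst, (r, c) = heapq.heappop(heap)
        if cst > cost[(r, c)]:
            continue  # stale heap entry, already improved
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # f_max: path value is the maximum arc weight so far
                new_cost = max(cst, weights[nr][nc])
                if new_cost < cost[(nr, nc)]:
                    cost[(nr, nc)] = new_cost
                    label[(nr, nc)] = label[(r, c)]
                    heapq.heappush(heap, (new_cost, (nr, nc)))
    return label, cost
```

Swapping f_max for another path-value function (e.g., additive weights or f_min) changes the operator the same code computes, which is the sense in which a single transform covers segmentation, clustering, and distance transforms.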
This project continues to exploit the IFT for education in image processing and analysis, to improve IFT-based image operators, to develop parallel algorithms for graph-based image processing and analysis, to develop machine learning techniques for optimum-path forest classifiers, to exploit other graph-based image representations, and to apply the IFT together with other image processing techniques to solve problems from different areas.
The project currently counts on the collaboration of Roberto Lotufo (FEEC-UNICAMP), Jorge Stolfi (IC-UNICAMP), Jayaram Udupa (Dept. of Radiology, University of Pennsylvania, USA), Krzysztof Chris Ciesielski (Dept. of Mathematics, West Virginia University, USA), Paulo Miranda (IME-USP-SP), Fabio Cappabianco (UNIFESP), Marcelo Henriques Carvalho (UFMS), Willian Paraguassu (PhD student, UFMS), Thiago Spina (PhD student, IC-UNICAMP), Priscila Saito (PhD student, IC-UNICAMP), Pedro Rezende (IC-UNICAMP), Celso Suzuki (Immunocamp), Andre Souza (Carestream, USA), João Paulo Papa (UNESP-Bauru), and Filip Malmberg (Uppsala University, Sweden), and on the financial support of FAEPEX, CNPq, CAPES, and FAPESP.
Current image acquisition and storage
technologies allow the use of large image databases to support
entertainment, research, and development of new
technologies. However, image annotation has become
a crucial problem as image databases grow large. Our ultimate goals in this project are to
minimize user involvement for image annotation and to organize the image database
to support different types of queries. We have investigated active learning techniques that select the most representative images for user annotation within a few iterations of relevance feedback, resulting in a more effective and efficient classifier for the automatic annotation of the remaining database images. We have also investigated image
segmentation methods, image descriptors, optimization techniques for feature selection and/or
combination of image descriptors, and unsupervised, supervised, and
semi-supervised
techniques for image classification. Our methods
have been validated on some real applications, such as the
identification of coffee crop regions in satellite images,
characterization of graphite particles in metallographic images of
industrial materials, and the
diagnosis of parasites in optical microscopy images.
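A common way to choose which images to show the user in each relevance-feedback iteration is to rank the unlabeled samples by classifier uncertainty. The sketch below is a generic margin-based illustration of that idea, not the project's specific OPF-based method; the function name, the nearest-neighbor margin criterion, and the feature representation are assumptions for the example:

```python
import math

def select_for_annotation(labeled, unlabeled):
    """Rank unlabeled samples so the most ambiguous come first.

    labeled:   list of (feature_vector, class_label) pairs
    unlabeled: list of feature vectors
    A sample whose two closest classes (by nearest labeled neighbor)
    are almost equidistant is ambiguous, so annotating it is expected
    to improve the classifier the most.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def margin(x):
        # distance from x to the nearest labeled sample of each class
        best = {}
        for feat, lab in labeled:
            d = dist(x, feat)
            if lab not in best or d < best[lab]:
                best[lab] = d
        ds = sorted(best.values())
        # small gap between the two closest classes = high ambiguity
        return ds[1] - ds[0] if len(ds) > 1 else float('inf')

    return sorted(range(len(unlabeled)), key=lambda i: margin(unlabeled[i]))
```

In a relevance-feedback loop, the top-ranked indices would be displayed for annotation, the newly labeled samples appended to `labeled`, and the classifier retrained before the next iteration.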
The project currently counts on the
collaboration of Leo Pini Magalhães
(FEEC-UNICAMP), André Tavares da Silva (UDESC), Priscila
Saito (PhD student, IC-UNICAMP), Pedro Rezende
(IC-UNICAMP), João Paulo Papa (UNESP-Bauru), Jefersson
Santos (PhD student, IC-UNICAMP),
Ricardo Torres (IC-UNICAMP), Sylvie Philipp-Foliguet (ETIS, CNRS,
ENSEA, University of Cergy-Pontoise, France) and Philippe-Henri
Gosselin (ETIS, CNRS, ENSEA, University of Cergy-Pontoise, France),
Victor Hugo Costa de Albuquerque (UNIFOR), João Manuel R. S. Tavares
(Universidade do Porto, Portugal), and
the financial support from FAPESP and
CNPq.
Image segmentation is a tool to extract relevant information for image analysis. In most applications, it requires the approximate location of an object of interest and its precise delineation in the image. Both tasks can be greatly facilitated when the possible appearance variations of the object or "object assembly" (e.g., a person in a digital video, a set of organs in a region of the human body) are encoded in a fuzzy object model. Such models can be created by supervised
learning from multiple segmentations of a given object/"object
assembly" and/or by affine
transformations on a segmentation result. A fuzzy object model can also encode the relative positions among objects in the case of multiple-object segmentation. For image segmentation, the matching between the
fuzzy object model and a new image provides at the same time object
location and delineation (segmentation), because the criterion function
used for matching is computed on the delineation result for any
given model location inside the image. The figures below illustrate 2D
and 3D fuzzy models used to segment different parts of a person in
video frames (left), of a brain in MR-images (center), and of a
thorax in CT-images (right).
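The simplest way to build such a model from multiple segmentations is voxel-wise averaging of the aligned binary masks, which turns a stack of training segmentations into a membership map in [0, 1]. The sketch below shows only this averaging step, under the assumption that the masks are already registered; real fuzzy object models also encode appearance and inter-object position statistics, which are omitted here:

```python
def build_fuzzy_model(segmentations):
    """Average aligned binary segmentations into a fuzzy object model.

    segmentations: list of 2D binary masks (0/1), all the same size and
    already registered to a common reference frame.
    Each output value is the fraction of training masks in which that
    pixel belongs to the object, i.e., its fuzzy membership in [0, 1].
    """
    n = len(segmentations)
    rows, cols = len(segmentations[0]), len(segmentations[0][0])
    return [[sum(seg[r][c] for seg in segmentations) / n
             for c in range(cols)] for r in range(rows)]
```

During segmentation, the model is translated over candidate locations in a new image and, for each location, the criterion function is evaluated on the delineation produced there, so the best-scoring location yields both the object's position and its delineation.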
We are also interested in interactive image segmentation techniques which are needed in many applications, to provide training images for model construction, and to correct the segmentation results. In all cases, we are interested in minimizing user involvement and, at the same time, maintaining user control over the segmentation process. Current applications of interest involve MR-images of the brain, CT-images of the thorax, and digital video with people (articulated objects).
The project currently counts on the collaboration of Jayaram Udupa (Dept. of Radiology, University of Pennsylvania, USA), Krzysztof Chris Ciesielski (Dept. of Mathematics, West Virginia University, USA), Paulo Miranda (IME-USP-SP), Thiago Spina (PhD student, IC-UNICAMP), Guillermo Sapiro (Dept. of Electrical and Computer Engineering, Duke University, USA), and Filip Malmberg (Uppsala University, Sweden), and on the financial support of FAPESP and CNPq.
Face recognition has been addressed from many different perspectives. However, most techniques rely, at some point, on representing all faces in a common way in order to compare them. Doing so requires a consensus about the most discriminant representation for all faces, one that emphasizes the face aspects that best distinguish different persons. By ignoring face aspects that are not good at distinguishing among all the considered persons, however, we may also be ignoring aspects that are actually good at discerning a particular person from the others; i.e., we may be disregarding important person-specific face aspects only because their discriminability does not generalize to all persons. Therefore, in this project, we are interested in representing a person's face in a feature space conceived to underline its discriminating aspects, in order to improve face recognition. The project investigates image registration and segmentation techniques, face characterization, and machine learning and pattern recognition methods.
The project counts on the collaboration of Giovani Chiachia (PhD student, IC-UNICAMP), Anderson Rocha (IC-UNICAMP), Nicolas Pinto (Rowland Institute, Harvard University, USA), David Cox (Center for Brain Science, Harvard University, USA), and William Schwartz (IC-UNICAMP), and on the financial support of FAPESP and CNPq.
Intestinal parasites can cause disease and, under certain circumstances, death. However, the physical and mental damage caused by enteroparasitosis in humans is the most common problem, reducing the quality of children's education and adults' concentration at work. These problems are very difficult to assess, which might explain why the effective diagnosis and treatment of enteroparasitosis have been neglected.
In this project, we aim at automating the diagnosis of parasites in
humans and animals. The work includes the design of suitable image
acquisition systems; parasitological techniques to produce microscopy
slide images; and image analysis methods for image segmentation, object
description, feature selection, machine learning, and pattern
classification. The figure below illustrates the current system for automatic diagnosis of enteroparasitosis.
The project counts on the collaboration of Jancarlo Ferreira Gomes (IB and IC, UNICAMP), Celso Suzuki (Immunocamp), Priscila Saito (PhD student, IC-UNICAMP), Pedro Rezende (IC-UNICAMP), Sumie Shimizo (FCF-USP-SP), Juliana Carvalho (MSc student, IB-UNICAMP), Kátia Bresciani (UNESP-Araçatuba-SP), and Willian Coelho (UNESP-Araçatuba-SP), and on the financial support of Immunocamp, FAEPEX, FAPESP, CAPES, and CNPq.
MR images of the human brain provide visual information about tissues and structures whose anatomy and function may be affected by neurological disorders and cerebral diseases. The visualization and quantification of such information are
important to understand the natural course of the disease, plan a
treatment, and study the effects of the treatment. This project
investigates methods for filtering, segmentation, registration,
visualization and morphometric analysis of cerebral structures in
patients with epilepsy and other degenerative diseases. We are
particularly interested in the associations between asymmetry analysis
(shape and texture) and the cerebral diseases (focal cortical
dysplasia, tumors). The figure below shows a screen of the software
under development for brain asymmetry analysis. Fuzzy models, together with delineation and clustering methods based on the IFT algorithm, are used to separate the left hemisphere, right hemisphere, cerebellum, CSF, gray matter, and white matter in MR-images.
The project currently counts on the collaboration of Fernando Cendes (FCM-UNICAMP), Clarissa Yasuda (FCM-UNICAMP), Iscia Cendes (FCM-UNICAMP), Guilherme Ruppert (CenPra), Fabio Cappabianco (UNIFESP), and Paulo Miranda (IME-USP-SP), and on the financial support of FAEPEX, CAPES, CNPq, and FAPESP (CInAPCe).
Cone beam computed tomography (CBCT)
is an X-ray imaging modality capable of acquiring three-dimensional
information of the human anatomy with a substantially lower radiation
dose to the patient as compared to conventional medical computed
tomography (CT) systems. The use of CBCT as a diagnostic tool for endodontics has increased the need for better image quality and for the ability to display very low-contrast tissue regions, usually associated with fractures within the tooth structure. Vertical root fracture (VRF), for instance, is a severe type of tooth fracture that affects the root, causing pain due to infection and inflammation and leading to tooth extraction. VRF diagnosis remains a challenge, but supporting evidence indicates that CBCT has a superior ability to detect VRFs compared to periapical radiography. In this project, we are interested in tooth
segmentation and visualization techniques from CBCT images for tooth
fracture inspection. Image segmentation and visualization must be
interactive, but we aim at minimizing user involvement by using
effective algorithms for tooth delineation, reslicing, and 3D rendition
combined with an intuitive user interface. The IFT algorithm has been
used for segmentation of the teeth from CBCT images (left) and internal
structures, such as fracture and canal (right).
The project currently counts on the collaboration of Andre Souza (Carestream, USA) and Lawrence Ray (Carestream, USA).
The morphology of plant organs, such as roots, leaves, and seeds, is closely related to relevant properties of the plant. For example, there is evidence that the architecture of a plant's root system can indicate its tolerance to acid soils and its capacity to absorb nutrients, such as water and phosphorus. These properties are related to certain genes, and the identification of those genes can support the genetic improvement of the plants. Genome-wide association studies (GWAS) look for the genes
associated with a given trait of the plant. In this project, we are
interested in phenotypic traits, which can be computed by 2D and
3D image analysis techniques. We are also interested in designing
imaging systems suitable for phenotyping. We have investigated
3D image analysis of plant root systems (rice and sorghum) and 2D image
analysis of rice panicles. The methods require image reconstruction,
filtering, segmentation, and representation (e.g., skeletonization);
phenotypic trait extraction; machine learning and pattern
recognition techniques; and statistical data analysis. The figure below shows different root architectures of rice at days 3, 6, and 9 of growth.
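Once a root system has been segmented into a binary mask, phenotypic traits reduce to simple geometric measurements on that mask. The sketch below computes three illustrative 2D traits (the trait set and the function name are assumptions for the example; real phenotyping pipelines extract many more descriptors, in 2D and 3D):

```python
def root_traits(mask):
    """Compute simple phenotypic traits from a 2D binary root mask.

    mask: 2D list of 0/1 values, rows ordered from the soil surface
    (top) downward.
    Returns maximum rooting depth, maximum lateral width, and total
    root area, all in pixel units.
    """
    rows_with_root = [r for r, row in enumerate(mask) if any(row)]
    cols_with_root = [c for row in mask for c, v in enumerate(row) if v]
    depth = rows_with_root[-1] - rows_with_root[0] + 1 if rows_with_root else 0
    width = max(cols_with_root) - min(cols_with_root) + 1 if cols_with_root else 0
    area = sum(sum(row) for row in mask)
    return {'depth': depth, 'width': width, 'area': area}
```

Measured over a time series of images (e.g., days 3, 6, and 9), such traits become growth curves that can then be fed to the GWAS stage as quantitative phenotypes.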
The project counts on the collaboration of Leon Vicent Kochian (USDA, Cornell University, USA), Susan McCouch (Plant Breeding and Genetics, Cornell University, USA), Randy Clark (PhD student, Cornell University, USA), Samuel Crowell (PhD student, Cornell University, USA), Jon Shaff (USDA, Cornell University, USA), and Alexandru Telea (University of Groningen, The Netherlands).
Last time this page was updated and we remembered to update this line: July 8th, 2012