Author Title Year Journal/Proceedings Reftype DOI/URL
Calumby, R.T., Gonçalves, M.A. and da Silva Torres, R. Diversity-based Interactive Learning meets Multimodality 2017 Neurocomputing, pp. -  article  
BibTeX:
@article{Calumby2017Neurocomputing,
  author = {Rodrigo Tripodi Calumby and Marcos André Gonçalves  and Ricardo da Silva Torres},
  title = {Diversity-based Interactive Learning meets Multimodality},
  journal = {Neurocomputing},
  year = {2017},
  pages = {-}
}
      


Lucena, T.F.R., Rosa, S.R.F., Miosso, C.J., da Silva Torres, R., Krueger, T. and Domingues, D.M.G. Walking and health: an enactive affective system 2017 Digital Creativity
Vol. 0(0), pp. 1-20 
article DOI URL 
Abstract: This essay presents a transdisciplinary project whose objective is to create a sensorised insole built with latex (Hevea brasiliensis) to measure physiological variations of the human body, as well as to graphically represent affective reciprocal exchanges of the human behaviour with the environment. The purpose is to improve our enactive system through the use of various physiological sensors (e.g. pressure, galvanic skin response and temperature sensors), which can be processed and visualised by revealing the affective exchanges of the body while walking in the city, pinned on a map as data visualisation. Inspired by the enaction theories, the output of sensors and devices application that models the perception as a laboratory phenomenon, as proposed by Dr Ted Krueger, the investigation directed by Dr Domingues is conceived for the expanded sensorium and for the reengineering of life. The artwork is inserted in the domain of aesthetic investigation of art as experiences related to topics on art and technoscience for innovation in health. The created prototype was tested in the city at different locations and the results demonstrated its use as a kind of personal assistant for vital signals by configuring an enactive system added of affective qualities, which makes it possible to reveal the 'pathos', or the affective narratives experienced in the emergent reciprocity with the city. The insole biomaterial, created by Dr Suelia Rodrigues (BioEngeLab), has been successfully used to measure the foot pressure and to serve as a health assistant, especially for people with diabetes. The signal-processing approaches, explored in Biomedical engineering by Dr Cristiano Miosso, were empowered by data visualisation approaches developed by Dr Ricardo Torres (University of Campinas). Finally, we consider this prototype a creative technology for mhealth, which is a disruptive innovation, by providing other forms of existence.

BibTeX:
@article{Lucena2017DigitalCreativity,
  author = {Tiago Franklin Rodrigues Lucena and Suélia Rodrigues Fleury Rosa and Cristiano Jacques Miosso and Ricardo da Silva Torres and Ted Krueger and Diana Maria Gallicchio Domingues},
  title = {Walking and health: an enactive affective system},
  journal = {Digital Creativity},
  year = {2017},
  volume = {0},
  number = {0},
  pages = {1-20},
  url = {http://dx.doi.org/10.1080/14626268.2016.1262430},
  doi = {http://doi.org/10.1080/14626268.2016.1262430}
}


Júnior, P.R.M., de Souza, R.M., de O. Werneck, R., Stein, B.V., Pazinato, D.V., de Almeida, W.R., Penatti, O.A.B., da S. Torres, R. and Rocha, A. Nearest neighbors distance ratio open-set classifier 2017 Machine Learning, pp. 1-28  article DOI URL 
Abstract: In this paper, we propose a novel multiclass classifier for the open-set recognition scenario. This scenario is the one in which there are no a priori training samples for some classes that might appear during testing. Usually, many applications are inherently open set. Consequently, successful closed-set solutions in the literature are not always suitable for real-world recognition problems. The proposed open-set classifier extends upon the Nearest-Neighbor (NN) classifier. Nearest neighbors are simple, parameter independent, multiclass, and widely used for closed-set problems. The proposed Open-Set NN (OSNN) method incorporates the ability of recognizing samples belonging to classes that are unknown at training time, being suitable for open-set recognition. In addition, we explore evaluation measures for open-set problems, properly measuring the resilience of methods to unknown classes during testing. For validation, we consider large freely-available benchmarks with different open-set recognition regimes and demonstrate that the proposed OSNN significantly outperforms their counterparts in the literature.

BibTeX:
@article{Mendes2017ML,
  author = {Pedro R. Mendes Júnior and Roberto M. de Souza and Rafael de O. Werneck and Bernardo V. Stein and Daniel V. Pazinato and Waldir R. de Almeida and Otávio A. B. Penatti and Ricardo da S. Torres and Anderson Rocha},
  title = {Nearest neighbors distance ratio open-set classifier},
  journal = {Machine Learning},
  year = {2017},
  pages = {1--28},
  url = {http://dx.doi.org/10.1007/s10994-016-5610-8},
  doi = {http://doi.org/10.1007/s10994-016-5610-8}
}
      


Pisani, F., Pedronette, D.C.G., da S. Torres, R. and Borin, E. Contextual Spaces Re-Ranking: accelerating the Re-sort Ranked Lists step on heterogeneous systems 2017 Concurrency and Computation: Practice and Experience  article DOI  
Abstract: Re-ranking algorithms have been proposed to improve the effectiveness of content-based image retrieval systems by exploiting contextual information encoded in distance measures and ranked lists. In this paper, we show how we improved the efficiency of one of these algorithms, called Contextual Spaces Re-Ranking (CSRR). One of our approaches consists in parallelizing the algorithm with OpenCL to use the central and graphics processing units of an accelerated processing unit. The other is to modify the algorithm to a version that, when compared with the original CSRR, not only reduces the total running time of our implementations by a median of 1.6x but also increases the accuracy score in most of our test cases. Combining both parallelization and algorithm modification results in a median speedup of 5.4x from the original serial CSRR to the parallelized modified version. Different implementations for CSRR's Re-sort Ranked Lists step were explored as well, providing insights into graphics processing unit sorting, the performance impact of image descriptors, and the trade-offs between effectiveness and efficiency

BibTeX:
@article{Pisani2017CCPE,
  author = {Flávia Pisani and Daniel C. G. Pedronette and Ricardo da S. Torres and Edson Borin},
  title = {Contextual Spaces Re-Ranking: accelerating the Re-sort Ranked Lists step on heterogeneous systems},
  journal = {Concurrency and Computation: Practice and Experience},
  year = {2017},
  doi = {http://doi.org/10.1002/cpe.3962}
}
      


dos Santos, J.M., de Moura, E.S., da Silva, A.S. and da S. Torres, R. Color and texture applied to a signature-based bag of visual words method for image retrieval 2017 Multimedia Tools and Applications, pp. 1-18  article DOI URL 
Abstract: This article addresses the problem of representation, indexing and retrieval of images through the signature-based bag of visual words (S-BoVW) paradigm, which maps features extracted from image blocks into a set of words without the need of clustering processes. Here, we propose the first ever method based on the S-BoVW paradigm that considers information of texture to generate textual signatures of image blocks. We also propose a strategy that represents image blocks with words which are generated based on both color as well as texture information. The textual representation generated by this strategy allows the application of traditional text retrieval and ranking techniques to compute the similarity between images. We have performed experiments with distinct similarity functions and weighting schemes, comparing the proposed strategy to the well-known cluster-based bag of visual words (C-BoVW) and S-BoVW methods proposed previously. Our results show that the proposed strategy for representing images is a competitive alternative for image retrieval, and overcomes the baselines in many scenarios.

BibTeX:
@article{Santos2017MTAP,
  author = {Joyce Miranda dos Santos and Edleno Silva de Moura and Altigran Soares da Silva and Ricardo da S. Torres},
  title = {Color and texture applied to a signature-based bag of visual words method for image retrieval},
  journal = {Multimedia Tools and Applications},
  year = {2017},
  pages = {1--18},
  url = {http://dx.doi.org/10.1007/s11042-016-3955-4},
  doi = {http://doi.org/10.1007/s11042-016-3955-4}
}
      


Almeida, J., Pedronette, D.C.G., Alberton, B.C., Morellato, L.P.C. and da S. Torres, R. Unsupervised Distance Learning for Plant Species Identification 2016 IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Vol. 9(12), pp. 5325-5338 
article DOI URL 
Abstract: Phenology is among the most trustworthy indicators of climate change effects on plants and animals. The recent application of repeated digital photographs to monitor vegetation phenology has provided accurate measures of plant life cycle changes over time. A fundamental requirement for phenology studies refers to the correct recognition of phenological patterns from plants by taking into account time series associated with their crowns. This paper presents a new similarity measure for identifying plants based on the use of an unsupervised distance learning scheme, instead of using traditional approaches based on pairwise similarities. We experimentally show that its use yields considerable improvements in time-series search tasks. In addition, we also demonstrate how the late fusion of different time series can improve the results on plant species identification. In some cases, significant gains were observed (up to +8.21% and +19.39% for mean average precision and precision at 10 scores, respectively) when compared with the use of time series in isolation.

BibTeX:
@article{Almeida2016JSTARS,
  author = {Jurandy Almeida and Daniel C. G. Pedronette and Bruna C. Alberton and Leonor P. C. Morellato and Ricardo da S. Torres},
  title = {Unsupervised Distance Learning for Plant Species Identification},
  journal = {IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
  year = {2016},
  volume = {9},
  number = {12},
  pages = {5325-5338},
  url = {http://ieeexplore.ieee.org/document/7590004/},
  doi = {http://doi.org/10.1109/JSTARS.2016.2608358}
}
      


Almeida, J., dos Santos, J.A., Alberton, B., Morellato, L.P.C. and da Silva Torres, R. Phenological Visual Rhythms: Compact Representations for Fine-Grained Plant Species Identification 2016 Pattern Recognition Letters
Vol. 81, pp. 90-100 
article DOI URL 
Abstract: Plant phenology, the study of recurrent life cycles events and its relationship to climate, is a key discipline in climate change research. In this context, digital cameras have been effectively used to monitor leaf flushing and senescence on vegetations across the world. A primary condition for the phenological observation refers to the correct identification of plants by taking into account time series associated with their crowns in the digital images. In this paper, we present a novel approach for representing phenological patterns of plant species. The proposed method is based on encoding time series as a visual rhythm. Here, we focus on applications of our approach for plant species identification. In this scenario, visual rhythms are characterized by image description algorithms. A comparative analysis of different descriptors is conducted and discussed. Experimental results show that our approach presents high accuracy on identifying individual plant species from its specific visual rhythm. Additionally, our representation is compact, making it suitable for long-term data series.

BibTeX:
@article{Almeida2016PRL,
  author = {Jurandy Almeida and Jefersson A. dos Santos and Bruna Alberton and Leonor Patricia C. Morellato and Ricardo da Silva Torres},
  title = {Phenological Visual Rhythms: Compact Representations for Fine-Grained Plant Species Identification},
  journal = {Pattern Recognition Letters},
  year = {2016},
  volume = {81},
  pages = {90--100},
  url = {http://www.sciencedirect.com/science/article/pii/S0167865515004274},
  doi = {http://doi.org/10.1016/j.patrec.2015.11.028}
}
      


Calumby, R.T., Gonçalves, M.A. and da Silva Torres, R. On interactive learning-to-rank for IR: Overview, recent advances, challenges, and directions 2016 Neurocomputing
Vol. 208, pp. 3-24 
article DOI URL 
Abstract: With the amount and variety of information available on digital repositories, answering complex user needs and personalizing information access became a hard task. Putting the user in the retrieval loop has emerged as a reasonable alternative to enhance search effectiveness and consequently the user experience. Due to the great advances on machine learning techniques, optimizing search engines according to user preferences has attracted great attention from the research and industry communities. Interactively learning-to-rank has greatly evolved over the last decade but it still faces great theoretical and practical obstacles. This paper describes basic concepts and reviews state-of-the-art methods on the several research fields that complementarily support the creation of interactive information retrieval (IIR) systems. By revisiting ground concepts and gathering recent advances, this article also intends to foster new research activities on IIR by highlighting great challenges and promising directions. The aggregated knowledge provided here is intended to work as a comprehensive introduction to those interested in IIR development, while also providing important insights on the vast opportunities of novel research.

BibTeX:
@article{Calumby2016NeurocomputingSurvey,
  author = {Rodrigo Tripodi Calumby and Marcos André Gonçalves  and Ricardo da Silva Torres},
  title = {On interactive learning-to-rank for IR: Overview, recent advances, challenges, and directions},
  journal = {Neurocomputing},
  year = {2016},
  volume = {208},
  pages = {3--24},
  url = {http://www.sciencedirect.com/science/article/pii/S0925231216304842},
  doi = {http://doi.org/10.1016/j.neucom.2016.03.084}
}
      


Carvalho, T., Faria, F.A., Pedrini, H., Torres, R.S. and Rocha, A. Illuminant-based Transformed Spaces for Image Forensics 2016 IEEE Transactions on Information Forensics and Security
Vol. 11(4), pp. 720 - 733 
article DOI  
Abstract: In this paper, we explore transformed spaces, represented by image illuminant maps, to propose a methodology for selecting complementary forms of characterizing visual properties for an effective and automated detection of image forgeries. We combine statistical telltales provided by different image descriptors that explore color, shape and texture features. We focus on detecting image forgeries containing people and present a method for locating the forgery, specifically the face of a person in an image. Experiments performed on three different open-access datasets show the potential of the proposed method for pinpointing image forgeries containing people. In the first two datasets (DSO-1 and DSI-1), the proposed method achieved a classification accuracy of 94% and 84%, respectively, a remarkable improvement when compared with the state-of-the-art methods. Finally, when evaluating the third dataset comprising questioned images downloaded from the Internet, we also present a detailed analysis of target images.

BibTeX:
@article{Carvalho2016TIFS,
  author = {Tiago Carvalho and Fabio A. Faria and Helio Pedrini and Ricardo Silva Torres and Anderson Rocha},
  title = {Illuminant-based Transformed Spaces for Image Forensics},
  journal = {IEEE Transactions on Information Forensics and Security},
  year = {2016},
  volume = {11},
  number = {4},
  pages = {720 -- 733},
  doi = {http://doi.org/10.1109/TIFS.2015.2506548}
}
      


Faria, F.A., Almeida, J., Alberton, B., Morellato, L.P.C. and da S. Torres, R. Fusion of time series representations for plant recognition in phenology studies 2016 Pattern Recognition Letters
Vol. 83(Part 2), pp. 205 - 214 
article DOI URL 
Abstract: Nowadays, global warming and its resulting environmental changes is a hot topic in different biology research area. Phenology is one effective way of tracking such environmental changes through the study of plant's periodic events and their relationship to climate. One promising research direction in this area relies on the use of vegetation images to track phenology changes over time. In this scenario, the creation of effective image-based plant identification systems is of paramount importance. In this paper, we propose the use of a new representation of time series to improve plants recognition rates. This representation, called recurrence plot (RP), is a technique for nonlinear data analysis, which represents repeated events on time series into a two-dimensional representation (an image). Therefore, image descriptors can be used to characterize visual properties from these RP images so that these features can be used as input of a classifier. To the best of our knowledge, this is the first work that uses recurrence plot for plant recognition task. Performed experiments show that RP can be a good solution to describe time series. In addition, in a comparison with visual rhythms (VR), another technique used for time series representation, RP shows a better performance to describe texture properties than VR. On the other hand, a correlation analysis and the adoption of a well successful classifier fusion framework show that both representations provide complementary information that is useful for improving classification accuracies.

BibTeX:
@article{Faria2016PRLA,
  author = {Fabio A. Faria and Jurandy Almeida and Bruna Alberton and Leonor Patricia C. Morellato and Ricardo da S. Torres},
  title = {Fusion of time series representations for plant recognition in phenology studies},
  journal = {Pattern Recognition Letters},
  year = {2016},
  volume = {83},
  number = {Part 2},
  pages = {205 -- 214},
  url = {http://www.sciencedirect.com/science/article/pii/S016786551600074X},
  doi = {http://doi.org/10.1016/j.patrec.2016.03.005}
}
      


Faria, F.A., Almeida, J., Alberton, B., Morellato, L.P.C., Rocha, A. and da Silva Torres, R. Time series-based classifier fusion for fine-grained plant species recognition 2016 Pattern Recognition Letters
Vol. 81, pp. 101-109 
article DOI URL 
Abstract: Global warming and its resulting environmental changes surely are ubiquitous subjects nowadays and undisputedly important research topics. One way of tracking such environmental changes is by means of phenology, which studies natural periodic events and their relationship to climate. Phenology is seen as the simplest and most reliable indicator of the effects of climate change on plants and animals. The search for phenological information and monitoring systems has stimulated many research centers worldwide to pursue the development of effective and innovative solutions in this direction. One fundamental requirement for phenological systems is concerned with achieving fine-grained recognition of plants. In this sense, the present work seeks to understand specific properties of each target plant species and to provide the solutions for gathering specific knowledge of such plants for further levels of recognition and exploration in related tasks. In this work, we address some important questions such as: (i) how species from the same leaf functional group differ from each other; (ii) how different pattern classifiers might be combined to improve the effectiveness results in target species identification; and (iii) whether it is possible to achieve good classification results with fewer classifiers for fine-grained plant species identification. In this sense, we perform different analysis considering RGB color information channels from a digital hemispherical lens camera in different hours of day and plant species. A study about the correlation of classifiers associated with time series extracted from digital images is also performed. We adopt a successful selection and fusion framework to combine the most suitable classifiers and features improving the plant identification decision-making task as it is nearly impossible to develop just a single ``silver bullet" image descriptor that would capture all subtle discriminatory features of plants within the same functional group. This adopted framework turns out to be an effective solution in the target task, achieving better results than well-known approaches in the literature.

BibTeX:
@article{Faria2016PRLB,
  author = {Fabio A. Faria and Jurandy Almeida and Bruna Alberton and Leonor Patricia C. Morellato and Anderson Rocha and Ricardo da Silva Torres},
  title = {Time series-based classifier fusion for fine-grained plant species recognition},
  journal = {Pattern Recognition Letters},
  year = {2016},
  volume = {81},
  pages = {101--109},
  url = {http://www.sciencedirect.com/science/article/pii/S0167865515003670},
  doi = {http://doi.org/10.1016/j.patrec.2015.10.016}
}
      


Freitas, A.M., da S. Torres, R. and Miranda, P.A. TSS & TSB: Tensor scale descriptors within circular sectors for fast shape retrieval 2016 Pattern Recognition Letters
Vol. 83(Part 3), pp. 303-311 
article DOI URL 
Abstract: We propose two novel region-based descriptors for shape-based image retrieval and analysis, which are built upon an extended tensor scale based on the Euclidean Distance Transform (EDT). First the tensor scale algorithm is applied to extract local structure thickness, orientation, and anisotropy as represented by the largest ellipse within a homogeneous region centered at each image pixel. In this work, we extend the local orientation to 360°. Then, for the first proposed descriptor, named Tensor Scale Sector descriptor (TSS), the local distributions of relative orientations within circular sectors are used to compose a fixed-length feature vector for a region-based representation. For the second method, named Tensor Scale Band descriptor (TSB), we consider histograms of relative orientations for each circular concentric band to compose a fixed-length feature vector with linear time matching. Experimental results with MPEG-7 and MNIST datasets are presented to illustrate and validate the methods. TSS can achieve high retrieval values comparable to state-of-the-art methods, which usually rely on time-consuming correspondence optimization algorithms, but uses a simpler and faster distance function, while the even faster linear complexity of TSB leads to a suitable and better solution for very large shape collections.

BibTeX:
@article{Freitas2016PRL,
  author = {Anderson M. Freitas and Ricardo da S. Torres and Paulo A.V. Miranda},
  title = {TSS \& TSB: Tensor scale descriptors within circular sectors for fast shape retrieval},
  journal = {Pattern Recognition Letters},
  year = {2016},
  volume = {83},
  number = {Part 3},
  pages = {303--311},
  url = {http://www.sciencedirect.com/science/article/pii/S0167865516301234},
  doi = {http://doi.org/10.1016/j.patrec.2016.06.005}
}
      


Leite, R.A., Schnorr, L.M., Almeida, J., Alberton, B., Morellato, L.P.C., da S. Torres, R. and Comba, J.L. PhenoVis - A tool for visual phenological analysis of digital camera images using chronological percentage maps 2016 Information Sciences
Vol. 372, pp. 181 - 195 
article DOI URL 
Abstract: ND

BibTeX:
@article{Leite2016InfoScie,
  author = {Roger A. Leite and Lucas Mello Schnorr and Jurandy Almeida and Bruna Alberton and Leonor Patricia C. Morellato and Ricardo da S. Torres and João L.D. Comba},
  title = {PhenoVis - A tool for visual phenological analysis of digital camera images using chronological percentage maps},
  journal = {Information Sciences},
  year = {2016},
  volume = {372},
  pages = {181 - 195},
  url = {http://www.sciencedirect.com/science/article/pii/S0020025516306235},
  doi = {http://doi.org/10.1016/j.ins.2016.08.052}
}
      


Mariano, G.C., Morellato, L.P.C., Almeida, J., Alberton, B., de Camargo, M.G.G. and da S. Torres, R. Modeling plant phenology database: Blending near-surface remote phenology with on-the-ground observations 2016 Ecological Engineering
Vol. 91, pp. 396 - 408 
article DOI URL 
Abstract: Phenology research handles multifaceted information that needs to be organized and made promptly accessible to the scientific community. We propose the conceptual design and implementation of a database to store, manage, and manipulate phenological time series and associated ecological information and environmental data. The database was developed in the context of the e-phenology project and integrates ground-based conventional plant phenology direct observations with near-surface remote phenology using repeated images from digital cameras. It also includes site-based information, sensor derived data from the study site weather station and plant ecological traits (e.g., pollination and dispersal syndrome, flower and fruit color, and leaf exchange strategy) at individual and species level. We validated the database design through the implementation of a Web application that generates the time series based on queries, exemplified in two case studies investigating: the relationship between flowering phenology and local weather; and the consistency between leafing patterns derived from ground-based phenology on leaf flush and from vegetation image indices. The database will store all the information produced in the e-phenology project, monitoring of 12 sites from cerrado savanna to rainforest, and will aggregate the legacy information of other studies developed in the Phenology Laboratory (UNESP, Rio Claro, Brazil) over the last 20 years. We demonstrate that our database is a powerful tool that can be widely used to manage complex temporal datasets, integrating legacy and live phenological information from diverse sources (e.g., conventional, digital cameras, seed traps) and temporal scales, improving our capability of producing scientific and applied information on tropical phenology.

BibTeX:
@article{Mariano2016EcoEng,
  author = {Greice C. Mariano and Leonor Patricia C. Morellato and Jurandy Almeida and Bruna Alberton and Maria Gabriela G. de Camargo and Ricardo da S. Torres},
  title = {Modeling plant phenology database: Blending near-surface remote phenology with on-the-ground observations},
  journal = {Ecological Engineering},
  year = {2016},
  volume = {91},
  pages = {396 - 408},
  url = {http://www.sciencedirect.com/science/article/pii/S0925857416301501},
  doi = {http://doi.org/10.1016/j.ecoleng.2016.03.001}
}
      


Pazinato, D.V., Stein, B.V., de Almeida, W.R., de O. Werneck, R., Junior, P.R.M., Penatti, O.A.B., da S. Torres, R., Menezes, F.A. and Rocha, A. Pixel-Level Tissue Classification for Ultrasound Images 2016 IEEE Journal of Biomedical and Health Informatics
Vol. 20(1), pp. 256-267 
article DOI  
Abstract: Background: Pixel-level tissue classification for ultrasound images, commonly applied to carotid images, is usually based on defining thresholds for the isolated pixel values. Ranges of pixel values are defined for the classification of each tissue. The classification of pixels is then used to determine the carotid plaque composition and, consequently, to determine the risk of diseases (e.g., strokes) and whether or not a surgery is necessary. The use of threshold-based methods dates from the early 2000s but it is still widely used for virtual histology. Methodology/Principal Findings: We propose the use of descriptors that take into account information about a neighborhood of a pixel when classifying it. We evaluated experimentally different descriptors (statistical moments, texture-based, gradient-based, local binary patterns, etc.) on a dataset of five types of tissues: blood, lipids, muscle, fibrous, and calcium. The pipeline of the proposed classification method is based on image normalization, multiscale feature extraction, including the proposal of a new descriptor, and machine learning classification. We have also analyzed the correlation between the proposed pixel classification method in the ultrasound images and the real histology with the aid of medical specialists. Conclusions/Significance: The classification accuracy obtained by the proposed method with the novel descriptor in the ultrasound tissue images (around 73%) is significantly above the accuracy of the state-of-the-art threshold-based methods (around 54%). The results are validated by statistical tests. The correlation between the virtual and real histology confirms the quality of the proposed approach showing it is a robust ally for the virtual histology in ultrasound images.

BibTeX:
@article{Pazinato2016JBHI,
  author = {Daniel V. Pazinato and Bernardo V. Stein and Waldir R. de Almeida and Rafael de O. Werneck and Pedro R. Mendes Junior and Otávio A. B. Penatti and Ricardo da S. Torres and Fábio A. Menezes and Anderson Rocha},
  title = {Pixel-Level Tissue Classification for Ultrasound Images},
  journal = {IEEE Journal of Biomedical and Health Informatics},
  year = {2016},
  volume = {20},
  number = {1},
  pages = {256--267},
  doi = {http://doi.org/10.1109/JBHI.2014.2386796}
}
      


Pedronette, D.C.G. and da Silva Torres, R. Combining re-ranking and rank aggregation methods for image retrieval 2016 Multimedia Tools and Applications
Vol. 75(15), pp. 1-24 
article DOI URL 
Abstract: This paper presents novel approaches for combining re-ranking and rank aggregation methods aiming at improving the effectiveness of Content-Based Image Retrieval (CBIR) systems. Given a query image as input, CBIR systems retrieve the most similar images in a collection by taking into account image visual properties. In this scenario, accurately ranking collection images is of great relevance. Aiming at improving the effectiveness of CBIR systems, re-ranking and rank aggregation algorithms have been proposed. However, different re-ranking and rank aggregation approaches, applied to different image descriptors, may produce different and complementary image rankings. In this paper, we present four novel approaches for combining these rankings aiming at obtaining more effective results. Several experiments were conducted involving shape, color, and texture descriptors. The proposed approaches are also evaluated on multimodal retrieval tasks, considering visual and textual descriptors. Experimental results demonstrate that our approaches can improve significantly the effectiveness of image retrieval systems.

BibTeX:
@article{Pedronette2016MTAP,
  author = {Daniel C. G. Pedronette and Ricardo da Silva Torres},
  title = {Combining re-ranking and rank aggregation methods for image retrieval},
  journal = {Multimedia Tools and Applications},
  publisher = {Springer US},
  year = {2016},
  volume = {75},
  number = {15},
  pages = {1-24},
  url = {http://dx.doi.org/10.1007/s11042-015-3044-0},
  doi = {http://doi.org/10.1007/s11042-015-3044-0}
}
      


Pedronette, D.C.G. and da S. Torres, R. A correlation graph approach for unsupervised manifold learning in image retrieval tasks 2016 Neurocomputing
Vol. 208, pp. 66-79 
article DOI URL 
Abstract: Effectively measuring the similarity among images is a challenging problem in image retrieval tasks due to the difficulty of considering the dataset manifold. This paper presents an unsupervised manifold learning algorithm that takes into account the intrinsic dataset geometry for defining a more effective distance among images. The dataset structure is modeled in terms of a Correlation Graph (CG) and analyzed using Strongly Connected Components (SCCs). While the Correlation Graph adjacency provides a precise but strict similarity relationship, the Strongly Connected Components analysis expands these relationships considering the dataset geometry. A large and rigorous experimental evaluation protocol was conducted for different image retrieval tasks. The experiments were conducted in different datasets involving various image descriptors. Results demonstrate that the manifold learning algorithm can significantly improve the effectiveness of image retrieval systems. The presented approach yields better results in terms of effectiveness than various methods recently proposed in the literature.

BibTeX:
@article{Pedronette2016Neurocomputing,
  author = {Daniel Carlos Guimarães Pedronette and Ricardo da S. Torres},
  title = {A correlation graph approach for unsupervised manifold learning in image retrieval tasks},
  journal = {Neurocomputing},
  year = {2016},
  volume = {208},
  pages = {66--79},
  url = {http://www.sciencedirect.com/science/article/pii/S0925231216304726},
  doi = {http://doi.org/10.1016/j.neucom.2016.03.081}
}
      


Pedronette, D.C.G., Almeida, J. and da S. Torres, R. A graph-based ranked-list model for unsupervised distance learning on shape retrieval 2016 Pattern Recognition Letters
Vol. 83(Part 3), pp. 357-367 
article DOI URL 
Abstract: Several re-ranking algorithms have been proposed recently. Some effective approaches are based on complex graph-based diffusion processes, which usually are time consuming and therefore inappropriate for real-world large scale shape collections. In this paper, we introduce a novel graph-based approach for iterative distance learning in shape retrieval tasks. The proposed method is based on the combination of graphs defined in terms of multiple ranked lists. The efficiency of the method is guaranteed by the use of only top positions of ranked lists in the definition of graphs that encode reciprocal references. Effectiveness analysis performed in three widely used shape datasets demonstrate that the proposed graph-based ranked-list model yields significant gains (up to +55.52%) when compared with the use of shape descriptors in isolation. Furthermore, the proposed method also yields comparable or superior effectiveness scores when compared with several state-of-the-art approaches.

BibTeX:
@article{Pedronette2016PRL,
  author = {Daniel Carlos Guimarães Pedronette and Jurandy Almeida and Ricardo da S. Torres},
  title = {A graph-based ranked-list model for unsupervised distance learning on shape retrieval},
  journal = {Pattern Recognition Letters},
  year = {2016},
  volume = {83},
  number = {Part 3},
  pages = {357--367},
  url = {http://www.sciencedirect.com/science/article/pii/S0167865516301052},
  doi = {http://doi.org/10.1016/j.patrec.2016.05.021}
}
      


Perre, P., Faria, F.A., Jorge, L.R., Rocha, A., Torres, R.S., Souza-Filho, M.F., Lewinsohn, T.M. and Zucchi, R.A. Toward an Automated Identification of Anastrepha Fruit Flies in the fraterculus group (Diptera, Tephritidae) 2016 Neotropical Entomology
Vol. 45(5), pp. 554-558 
article DOI URL 
Abstract: In this study, we assess image analysis techniques as automatic identifiers of three Anastrepha species of quarantine importance, Anastrepha fraterculus (Wiedemann), Anastrepha obliqua (Macquart), and Anastrepha sororcula Zucchi, based on wing and aculeus images. The right wing and aculeus of 100 individuals of each species were mounted on microscope slides, and images were captured with a stereomicroscope and light microscope. For wing image analysis, we used the color descriptor Local Color Histogram; for aculei, we used the contour descriptor Edge Orientation Autocorrelogram. A Support Vector Machine classifier was used in the final stage of wing and aculeus classification. Very accurate species identifications were obtained based on wing and aculeus images, with average accuracies of 94 and 95%, respectively. These results are comparable to previous identification results based on morphometric techniques and to the results achieved by experienced entomologists. Wing and aculeus images produced equally accurate classifications, greatly facilitating the identification of these species. The proposed technique is therefore a promising option for separating these three closely related species in the fraterculus group.

BibTeX:
@article{Perre2016Neotropical,
  author = {Perre, P. and Faria, F. A. and Jorge, L. R. and Rocha, A. and Torres, R. S. and Souza-Filho, M. F. and Lewinsohn, T. M. and Zucchi, R. A.},
  title = {Toward an Automated Identification of Anastrepha Fruit Flies in the fraterculus group (Diptera, Tephritidae)},
  journal = {Neotropical Entomology},
  year = {2016},
  volume = {45},
  number = {5},
  pages = {554--558},
  url = {http://dx.doi.org/10.1007/s13744-016-0403-0},
  doi = {http://doi.org/10.1007/s13744-016-0403-0}
}
      


Saraiva, P.C., Cavalcanti, J.M.B., de Moura, E.S., Gonçalves, M.A. and da S. Torres, R. A multimodal query expansion based on genetic programming for visually-oriented e-commerce applications 2016 Information Processing & Management
Vol. 52(5), pp. 783 - 800 
article DOI URL 
Abstract: We present a novel multimodal query expansion strategy, based on genetic programming (GP), for image search in visually-oriented e-commerce applications. Our GP-based approach aims at both: learning to expand queries with multimodal information and learning to compute the "best" ranking for the expanded queries. However, different from previous work, the query is only expressed in terms of the visual content, which brings several challenges for this type of application. In order to evaluate the effectiveness of our method, we have collected two datasets containing images of clothing products taken from different online shops. Experimental results indicate that our method is an effective alternative for improving the quality of image search results when compared to a genetic programming system based only on visual information. Our method can achieve gains varying from 10.8% against the strongest learning-to-rank baseline to 54% against an adhoc specialized solution for the particular domain at hand.

BibTeX:
@article{Saraiva2016IPM,
  author = {Patrícia C. Saraiva and João M.B. Cavalcanti and Edleno S. de Moura and Marcos André Gonçalves and Ricardo da S. Torres},
  title = {A multimodal query expansion based on genetic programming for visually-oriented e-commerce applications},
  journal = {Information Processing & Management},
  year = {2016},
  volume = {52},
  number = {5},
  pages = {783 - 800},
  url = {http://www.sciencedirect.com/science/article/pii/S0306457316300206},
  doi = {http://doi.org/10.1016/j.ipm.2016.03.001}
}
      


Almeida, J., dos Santos, J.A., Miranda, W.O., Alberton, B., Morellato, L.P.C. and da S. Torres, R. Deriving vegetation indices for phenology analysis using genetic programming 2015 Ecological Informatics
Vol. 26, Part 3, pp. 61 - 69 
article DOI URL 
Abstract: Plant phenology studies recurrent plant life cycle events and is a key component for understanding the impact of climate change. To increase accuracy of observations, new technologies have been applied for phenological observation, and one of the most successful strategies relies on the use of digital cameras, which are used as multi-channel imaging sensors to estimate color changes that are related to phenological events. We monitor leaf-changing patterns of a cerrado-savanna vegetation by taking daily digital images. We extract individual plant color information and correlate with leaf phenological changes. For that, several vegetation indices associated with plant species are exploited for both pattern analysis and knowledge extraction. In this paper, we present a novel approach for deriving appropriate vegetation indices from vegetation digital images. The proposed method is based on learning phenological patterns from plant species through a genetic programming framework. A comparative analysis of different vegetation indices is conducted and discussed. Experimental results show that our approach presents higher accuracy on characterizing plant species phenology.

BibTeX:
@article{Almeida2015EcoInfo,
  author = {Jurandy Almeida and Jefersson A. dos Santos and Waner O. Miranda and Bruna Alberton and Leonor Patricia C. Morellato and Ricardo da S. Torres},
  title = {Deriving vegetation indices for phenology analysis using genetic programming},
  journal = {Ecological Informatics},
  year = {2015},
  volume = {26, Part 3},
  pages = {61 - 69},
  url = {http://www.sciencedirect.com/science/article/pii/S1574954115000114},
  doi = {http://doi.org/10.1016/j.ecoinf.2015.01.003}
}
      


Pedronette, D.C.G., Calumby, R.T. and da Silva Torres, R. A semi-supervised learning algorithm for relevance feedback and collaborative image retrieval 2015 EURASIP Journal on Image and Video Processing
Vol. 2015(1), pp. 27 
article DOI URL 
Abstract: The interaction of users with search services has been recognized as an important mechanism for expressing and handling user information needs. One traditional approach for supporting such interactive search relies on exploiting relevance feedbacks (RF) in the searching process. For large-scale multimedia collections, however, the user efforts required in RF search sessions is considerable. In this paper, we address this issue by proposing a novel semi-supervised approach for implementing RF-based search services. In our approach, supervised learning is performed taking advantage of relevance labels provided by users. Later, an unsupervised learning step is performed with the objective of extracting useful information from the intrinsic dataset structure. Furthermore, our hybrid learning approach considers feedbacks of different users, in collaborative image retrieval (CIR) scenarios. In these scenarios, the relationships among the feedbacks provided by different users are exploited, further reducing the collective efforts. Conducted experiments involving shape, color, and texture datasets demonstrate the effectiveness of the proposed approach. Similar results are also observed in experiments considering multimodal image retrieval tasks.

BibTeX:
@article{Pedronette2015EurasipJIVP,
  author = {Daniel C. G. Pedronette and Rodrigo Tripodi Calumby and Ricardo da Silva Torres},
  title = {A semi-supervised learning algorithm for relevance feedback and collaborative image retrieval},
  journal = {EURASIP Journal on Image and Video Processing},
  year = {2015},
  volume = {2015},
  number = {1},
  pages = {27},
  url = {http://jivp.eurasipjournals.com/content/2015/1/27},
  doi = {http://doi.org/10.1186/s13640-015-0081-6}
}
      


Penatti, O.A., de O. Werneck, R., de Almeida, W.R., Stein, B.V., Pazinato, D.V., Júnior, P.R.M., da S. Torres, R. and Rocha, A. Mid-level image representations for real-time heart view plane classification of echocardiograms 2015 Computers in Biology and Medicine
Vol. 66, pp. 66 - 81 
article DOI URL 
Abstract: In this paper, we explore mid-level image representations for real-time heart view plane classification of 2D echocardiogram ultrasound images. The proposed representations rely on bags of visual words, successfully used by the computer vision community in visual recognition problems. An important element of the proposed representations is the image sampling with large regions, drastically reducing the execution time of the image characterization procedure. Throughout an extensive set of experiments, we evaluate the proposed approach against different image descriptors for classifying four heart view planes. The results show that our approach is effective and efficient for the target problem, making it suitable for use in real-time setups. The proposed representations are also robust to different image transformations, e.g., downsampling, noise filtering, and different machine learning classifiers, keeping classification accuracy above 90%. Feature extraction can be performed in 30 fps or 60 fps in some cases. This paper also includes an in-depth review of the literature in the area of automatic echocardiogram view classification giving the reader a thorough comprehension of this field of study.

BibTeX:
@article{Penatti2015CBM,
  author = {Otávio A.B. Penatti and Rafael de O. Werneck and Waldir R. de Almeida and Bernardo V. Stein and Daniel V. Pazinato and Pedro R. Mendes Júnior and Ricardo da S. Torres and Anderson Rocha},
  title = {Mid-level image representations for real-time heart view plane classification of echocardiograms },
  journal = {Computers in Biology and Medicine },
  year = {2015},
  volume = {66},
  pages = {66 - 81},
  url = {http://www.sciencedirect.com/science/article/pii/S0010482515002814},
  doi = {http://doi.org/10.1016/j.compbiomed.2015.08.004}
}
      


dos Santos, J.M., de Moura, E.S., da Silva, A.S., Cavalcanti, J.M.B., da Silva Torres, R. and Vidal, M.L.A. A signature-based bag of visual words method for image indexing and search 2015 Pattern Recognition Letters
Vol. 65, pp. 1 - 7 
article DOI URL 
Abstract: In this paper, we revisit SDLC, an image retrieval method that adopts a signature-based approach to identify visual words, instead of the more conventional approach that identifies them by using clustering techniques. We start by providing a formal and generalized definition of the approach adopted in SDLC, which we call Signature-Based Bag of Visual Words. After that, we present a detailed study of SDLC parameters and experiments with distinct weighting schemes used to compute the ranking of results, comparing the method to well-known cluster-based bag of visual words approaches. When compared to the initial proposal of SDLC, the choice of different parameters and a new weighting scheme allowed us to considerably reduce the size of the textual representation generated by the method, reducing also the indexing times and the query processing times in all collections adopted in the experiments. Further, the SDLC outperforms the baselines in most of these collections.

BibTeX:
@article{Santos2015PRL,
  author = {Joyce Miranda dos Santos and Edleno Silva de Moura and Altigran Soares da Silva and João Marcos B. Cavalcanti and Ricardo da Silva Torres and Márcio Luiz A. Vidal},
  title = {A signature-based bag of visual words method for image indexing and search },
  journal = {Pattern Recognition Letters },
  year = {2015},
  volume = {65},
  pages = {1 - 7},
  url = {http://www.sciencedirect.com/science/article/pii/S0167865515001956},
  doi = {http://doi.org/10.1016/j.patrec.2015.06.023}
}
      


Alberton, B., Almeida, J., Helm, R., da S. Torres, R., Menzel, A. and Morellato, L.P.C. Using phenological cameras to track the green up in a cerrado savanna and its on-the-ground validation 2014 Ecological Informatics
Vol. 19(0), pp. 62 - 70 
article DOI URL 
Abstract: Plant phenology has gained new importance in the context of global change research, stimulating the development of novel technologies for phenological observations. Regular digital cameras have been effectively used as three-channel imaging sensors, providing measures of leaf color change or phenological shifts in plants. We monitored a species rich Brazilian cerrado savanna to assess the reliability of digital images to detect leaf-changing patterns. Analysis was conducted by extracting color information from selected parts of the image named regions of interest (ROIs). We aimed to answer the following questions: (i) Do digital cameras capture leaf changes in cerrado savanna vegetation? (ii) Can we detect differences in phenological changes among species crowns and the cerrado community? (iii) Is the greening pattern detected for each species by digital camera validated by our on-the-ground leafing phenology (direct observation of tree leaf changes)? We analyzed daily sequences of five images per hour, taken from 6:00 to 18:00 h, recorded during the cerrado main leaf flushing season. We defined 24 ROIs in the original digital image, including total or partial regions and crowns of six plant species. Our results indicated that: (i) for the studied period, single plant species ROIs were more sensitive to changes in relative green values than the community ROIs, (ii) three leaf strategies could be depicted from the species' ROI patterns of green color change, and (iii) the greening patterns and leaf functional groups were validated by our on-the-ground phenology. We concluded that digital cameras are reliable tools to monitor high diverse tropical seasonal vegetation and it is sensitive to inter-species differences of leafing patterns.

BibTeX:
@article{Alberton2014EcoInfo,
  author = {Bruna Alberton and Jurandy Almeida and Raimund Helm and Ricardo da S. Torres and Annette Menzel and Leonor Patricia Cerdeira Morellato},
  title = {Using phenological cameras to track the green up in a cerrado savanna and its on-the-ground validation},
  journal = {Ecological Informatics },
  year = {2014},
  volume = {19},
  number = {0},
  pages = {62 - 70},
  url = {http://www.sciencedirect.com/science/article/pii/S1574954113001325},
  doi = {http://doi.org/10.1016/j.ecoinf.2013.12.011}
}
      


Almeida, J., dos Santos, J.A., Alberton, B., da S. Torres, R. and Morellato, L.P.C. Applying machine learning based on multiscale classifiers to detect remote phenology patterns in Cerrado savanna trees 2014 Ecological Informatics
Vol. 23(0), pp. 49 - 61 
article DOI URL 
Abstract: Plant phenology is one of the most reliable indicators of species responses to global climate change, motivating the development of new technologies for phenological monitoring. Digital cameras or near remote systems have been efficiently applied as multi-channel imaging sensors, where leaf color information is extracted from the RGB (Red, Green, and Blue) color channels, and the changes in green levels are used to infer leafing patterns of plant species. In this scenario, texture information is a great ally for image analysis that has been little used in phenology studies. We monitored leaf-changing patterns of Cerrado savanna vegetation by taking daily digital images. We extract RGB channels from the digital images and correlate them with phenological changes. Additionally, we benefit from the inclusion of textural metrics for quantifying spatial heterogeneity. Our first goals are: (1) to test if color change information is able to characterize the phenological pattern of a group of species; (2) to test if the temporal variation in image texture is useful to distinguish plant species; and (3) to test if individuals from the same species may be automatically identified using digital images. In this paper, we present a machine learning approach based on multiscale classifiers to detect phenological patterns in the digital images. Our results indicate that: (1) extreme hours (morning and afternoon) are the best for identifying plant species; (2) different plant species present a different behavior with respect to the color change information; and (3) texture variation along temporal images is promising information for capturing phenological patterns. Based on those results, we suggest that individuals from the same species and functional group might be identified using digital images, and introduce a new tool to help phenology experts in the identification of new individuals from the same species in the image and their location on the ground.

BibTeX:
@article{Almeida2014EcoInfo,
  author = {Jurandy Almeida and Jefersson A. dos Santos and Bruna Alberton and Ricardo da S. Torres and Leonor Patricia C. Morellato},
  title = {Applying machine learning based on multiscale classifiers to detect remote phenology patterns in Cerrado savanna trees},
  journal = {Ecological Informatics },
  year = {2014},
  volume = {23},
  number = {0},
  pages = {49 - 61},
  note = {Special Issue on Multimedia in Ecology and Environment },
  url = {http://www.sciencedirect.com/science/article/pii/S1574954113000654},
  doi = {http://doi.org/10.1016/j.ecoinf.2013.06.011}
}
      


Calumby, R.T., da Silva Torres, R. and Gonçalves, M.A. Multimodal retrieval with relevance feedback based on genetic programming 2014 Multimedia Tools and Applications
Vol. 69(3), pp. 991-1019 
article DOI URL 
Abstract: This paper presents a framework for multimodal retrieval with relevance feedback based on genetic programming. In this supervised learning-to-rank framework, genetic programming is used for the discovery of effective combination functions of (multimodal) similarity measures using the information obtained throughout the user relevance feedback iterations. With these new functions, several similarity measures, including those extracted from different modalities (e.g., text, and content), are combined into one single measure that properly encodes the user preferences. This framework was instantiated for multimodal image retrieval using visual and textual features and was validated using two image collections, one from the Washington University and another from the ImageCLEF Photographic Retrieval Task. For this image retrieval instance several multimodal relevance feedback techniques were implemented and evaluated. The proposed approach has produced statistically significant better results for multimodal retrieval over single modality approaches and superior effectiveness when compared to the best submissions of the ImageCLEF Photographic Retrieval Task 2008.

BibTeX:
@article{Calumby2014MTAP,
  author = {Rodrigo Tripodi Calumby and Ricardo da Silva Torres and Marcos André Gonçalves},
  title = {Multimodal retrieval with relevance feedback based on genetic programming},
  journal = {Multimedia Tools and Applications},
  publisher = {Springer US},
  year = {2014},
  volume = {69},
  number = {3},
  pages = {991-1019},
  url = {http://dx.doi.org/10.1007/s11042-012-1152-7},
  doi = {http://doi.org/10.1007/s11042-012-1152-7}
}
      


Faria, F.A., Pedronette, D.C.G., Santos, J.A., Rocha, A. and da S. Torres, R. Rank Aggregation for Pattern Classifier Selection in Remote Sensing Images 2014 IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Vol. 7(4), pp. 1103-1115 
article DOI  
Abstract: In the past few years, segmentation and classification techniques have become a cornerstone of many successful remote sensing algorithms aiming at delineating geographic target objects. One common strategy relies on using multiple complex features to guide the delineation process with the objective of gathering complementary information for improving classification results. However, a persistent problem in this approach is how to combine different and noncorrelated feature descriptors automatically. In this regard, one solution is to combine them through multiple classifier systems (MCSs) in which the diversity of simple/noncomplex classifiers is an essential issue in the definition of appropriate strategies for classifier fusion. In this paper, we propose a novel strategy for selecting classifiers (whereby a classifier is taken as a pair of learning method plus image descriptor) to be combined in MCS. In the proposed solution, diversity measures are used to assess the degree of agreement/disagreement between pairs of classifiers and ranked lists are created to sort them according to their diversity score. Thereafter, the classifiers are also sorted according to their performance through different evaluation measures (e.g., kappa and tau indices). In the end, a rank aggregation method is proposed to select the most suitable classifiers based on both the diversity and the effectiveness performance of classifiers. The proposed fusion framework has targeted at coffee crop classification and urban recognition but it is general enough to be used in a variety of other pattern recognition problems. Experimental results demonstrate that the novel strategy yields good results when compared to several baselines while using fewer classifiers and being much more efficient.

BibTeX:
@article{Faria2014JSTARS,
  author = {Fabio A. Faria and Daniel C. G. Pedronette and Jefersson A. Santos and Anderson Rocha and Ricardo da S. Torres},
  title = {Rank Aggregation for Pattern Classifier Selection in Remote Sensing Images},
  journal = {IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
  year = {2014},
  volume = {7},
  number = {4},
  pages = {1103-1115},
  doi = {http://doi.org/10.1109/JSTARS.2014.2303813}
}
      


Faria, F., Perre, P., Zucchi, R., Jorge, L., Lewinsohn, T., Rocha, A. and da S. Torres, R. Automatic identification of fruit flies (Diptera: Tephritidae) 2014 Journal of Visual Communication and Image Representation
Vol. 25(7), pp. 1516 - 1527 
article DOI URL 
Abstract: Fruit flies are pests of major economic importance in agriculture. Among these pests it is possible to highlight some species of genus Anastrepha, which attack a wide range of fruits, and are widely distributed in the American tropics and subtropics. Researchers seek to identify fruit flies in order to implement management and control programs as well as quarantine restrictions. However, fruit fly identification is manually performed by scarce specialists through analysis of morphological features of the mesonotum, wing, and aculeus. Our objective is to find solid knowledge that can serve as a basis for the development of a sounding automatic identification system of the Anastrepha fraterculus group, which is of high economic importance in Brazil. Wing and aculeus images datasets from three specimens have been used in this work. The experiments using a classifier multimodal fusion approach shows promising effectiveness results for identification of these fruit flies, with more than 98% classification accuracy, a remarkable result for this difficult problem.

BibTeX:
@article{Faria2014JVCIR,
  author = {F.A. Faria and P. Perre and R.A. Zucchi and L.R. Jorge and T.M. Lewinsohn and A. Rocha and R. da S. Torres},
  title = {Automatic identification of fruit flies (Diptera: Tephritidae)},
  journal = {Journal of Visual Communication and Image Representation },
  year = {2014},
  volume = {25},
  number = {7},
  pages = {1516 - 1527},
  url = {http://www.sciencedirect.com/science/article/pii/S1047320314001138},
  doi = {http://doi.org/10.1016/j.jvcir.2014.06.014}
}
      


Faria, F.A., dos Santos, J.A., Rocha, A. and da S. Torres, R. A framework for selection and fusion of pattern classifiers in multimedia recognition 2014 Pattern Recognition Letters
Vol. 39(0), pp. 52 - 64 
article DOI URL 
Abstract: The frequent growth of visual data, whether from the countless monitoring video cameras we pass wherever we go or from the popularization of mobile devices that allow each person to create and edit their own images and videos, has contributed enormously to the so-called "big-data revolution". This sheer amount of visual data gives rise to a Pandora's box of new visual classification problems never imagined before. Image and video classification tasks have been incorporated into different and complex applications, and the use of machine learning-based solutions has become the most popular approach for several of them. Notwithstanding, there is no silver bullet that solves all the problems, i.e., it is not possible to characterize all images of different domains with the same description method, nor is it possible to use the same learning method to achieve good results in any kind of application. In this work, we aim at proposing a framework for classifier selection and fusion. Our method seeks to combine image characterization and learning methods by means of a meta-learning approach responsible for assessing which methods contribute more towards the solution of a given problem. The framework uses a classifier selection strategy that pinpoints the less correlated, yet effective, classifiers through a series of diversity-measure analyses. The experiments show that the proposed approach achieves results comparable to well-known algorithms from the literature on four different applications, while using fewer learning and description methods and without incurring the curse of dimensionality and the normalization problems common to some fusion techniques. Furthermore, our approach is able to achieve effective classification results using very reduced training sets. The proposed method is also amenable to continuous learning and flexible enough for implementation in highly-parallel architectures.

BibTeX:
@article{Faria2014PRL,
  author = {Fabio A. Faria and Jefersson A. dos Santos and Anderson Rocha and Ricardo da S. Torres},
  title = {A framework for selection and fusion of pattern classifiers in multimedia recognition},
  journal = {Pattern Recognition Letters },
  year = {2014},
  volume = {39},
  number = {0},
  pages = {52 - 64},
  note = {Advances in Pattern Recognition and Computer Vision },
  url = {http://www.sciencedirect.com/science/article/pii/S0167865513002870},
  doi = {http://doi.org/10.1016/j.patrec.2013.07.014}
}
      


Li, L.T., Pedronette, D.C.G., Almeida, J., Penatti, O.A., Calumby, R.T. and da S. Torres, R. A rank aggregation framework for video multimodal geocoding 2014 Multimedia Tools and Applications  article DOI URL 
Abstract: This paper proposes a rank aggregation framework for video multimodal geocoding. Textual and visual descriptions associated with videos are used to define ranked lists. These ranked lists are later combined, and the resulting ranked list is used to define appropriate locations for videos. An architecture that implements the proposed framework is designed. In this architecture, there are specific modules for each modality (e.g., textual and visual) that can be developed and evolved independently. Another component is a data fusion module responsible for seamlessly combining the ranked lists defined for each modality. We have validated the proposed framework in the context of the MediaEval 2012 Placing Task, whose objective is to automatically assign geographical coordinates to videos. Obtained results show how our multimodal approach improves the geocoding results when compared to methods that rely on a single modality (either textual or visual descriptors). We also show that the proposed multimodal approach yields results comparable to the best submissions to the Placing Task in 2012, using no extra information besides the available development/training data. Another contribution of this work is related to the proposal of a new effectiveness evaluation measure. The proposed measure is based on distance scores that summarize how effective a designed/tested approach is, considering its overall result for a test dataset.

BibTeX:
@article{Li2014MTAP,
  author = {Lin Tzy Li and Daniel C. G. Pedronette and Jurandy Almeida and Otávio A.B. Penatti and Rodrigo T. Calumby and Ricardo da S. Torres},
  title = {A rank aggregation framework for video multimodal geocoding},
  journal = {Multimedia Tools and Applications},
  year = {2014},
  url = {http://dx.doi.org/10.1007/s11042-013-1588-4},
  doi = {http://doi.org/10.1007/s11042-013-1588-4}
}
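
Illustrative sketch (Python): a rough, hypothetical picture of the fusion step described in the abstract above, combining per-modality ranked lists of candidate locations into a single list with reciprocal-rank fusion. The fusion rule and the example data are assumptions; the aggregation functions actually evaluated in the paper may differ.

def fuse_ranked_lists(ranked_lists, k=60):
    # ranked_lists: one ranked list of candidate location ids per modality, best first.
    scores = {}
    for ranked in ranked_lists:
        for pos, item in enumerate(ranked):
            scores[item] = scores.get(item, 0.0) + 1.0 / (k + pos + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Example: fuse a textual and a visual ranked list of locations for one video.
fused = fuse_ranked_lists([["paris", "lyon", "nice"], ["lyon", "paris", "rome"]])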
      


Mansano, A.F., Matsuoka, J.A., Abiuzzi, N.M., Afonso, L.C.S., Papa, J.P., Faria, F.A., da Silva Torres, R. and Falcão, A.X. Swarm-based Descriptor Combination and its Application for Image Classification 2014 Electronics Letters on Computer Vision and Image Analysis
Vol. 13(3), pp. 13-27 
article  
Abstract: In this paper, we deal with the descriptor combination problem in image classification tasks. This problem refers to the definition of an appropriate combination of image content descriptors that characterize different visual properties, such as color, shape and texture. In this paper, we propose to model the descriptor combination as a swarm-based optimization problem, which finds out the set of parameters that maximizes the classification accuracy of the Optimum-Path Forest (OPF) classifier. In our model, a descriptor is seen as a pair composed of a feature extraction algorithm and a suitable distance function. Our strategy here is to combine distance scores defined by different descriptors, as well as to employ them to weight OPF edges, which connect samples in the feature space. An extensive evaluation of several swarm-based optimization techniques was performed. Experimental results have demonstrated the robustness of the proposed combination approach.

BibTeX:
@article{Mansano2014ELCVIA,
  author = {Alex Fernandes Mansano and Jessica Akemi Matsuoka and Nikolas Mota Abiuzzi and Luis Claudio Sugi Afonso and João Paulo Papa and Fábio A Faria and Ricardo da Silva Torres and Alexandre Xavier Falcão},
  title = {Swarm-based Descriptor Combination and its Application for Image Classification},
  journal = {Electronics Letters on Computer Vision and Image Analysis},
  year = {2014},
  volume = {13},
  number = {3},
  pages = {13-27}
}
      


Nakamura, R., Garcia Fonseca, L., dos Santos, J., da S. Torres, R., Yang, X.-S. and Papa, J.P. Nature-Inspired Framework for Hyperspectral Band Selection 2014 IEEE Transactions on Geoscience and Remote Sensing
Vol. 52(4), pp. 2126-2137 
article DOI  
Abstract: Although hyperspectral images acquired by on-board satellites provide information from a wide range of wavelengths in the spectrum, the obtained information is usually highly correlated. This paper proposes a novel framework to reduce the computation cost for large amounts of data based on the efficiency of the optimum-path forest (OPF) classifier and the power of metaheuristic algorithms to solve combinatorial optimizations. Simulations on two public data sets have shown that the proposed framework can indeed improve the effectiveness of the OPF and considerably reduce data storage costs.

BibTeX:
@article{Nakamura2014TGRS,
  author = {Nakamura, R.Y.M. and Garcia Fonseca, L.M. and dos Santos, J.A. and da S. Torres, R. and Xin-She Yang and Papa, J.P.},
  title = {Nature-Inspired Framework for Hyperspectral Band Selection},
  journal = {IEEE Transactions on Geoscience and Remote Sensing},
  year = {2014},
  volume = {52},
  number = {4},
  pages = {2126-2137},
  doi = {http://doi.org/10.1109/TGRS.2013.2258351}
}
      


Pedronette, D.C.G., Almeida, J. and da S. Torres, R. A scalable re-ranking method for content-based image retrieval 2014 Information Sciences
Vol. 265(0), pp. 91 - 104 
article DOI URL 
Abstract: Content-based Image Retrieval (CBIR) systems consider only a pairwise analysis, i.e., they measure the similarity between pairs of images, ignoring the rich information encoded in the relations among several images. However, the user perception usually considers the query specification and responses in a given context. In this scenario, re-ranking methods have been proposed to exploit the contextual information and, hence, improve the effectiveness of CBIR systems. Besides effectiveness, the usefulness of those systems in real-world applications also depends on the efficiency and scalability of the retrieval process, imposing a great challenge on re-ranking approaches, since they usually require the computation of distances among all the images of a given collection. In this paper, we present a novel approach for the re-ranking problem. It relies on the similarity of top-k lists produced by efficient indexing structures, instead of using distance information from the entire collection. Extensive experiments were conducted on a large image collection, using several indexing structures. Results from a rigorous experimental protocol show that the proposed method can obtain significant effectiveness gains (up to 12.19% better) and, at the same time, considerably improve efficiency (up to 73.11% faster). In addition, our technique scales up very well, which makes it suitable for large collections.

BibTeX:
@article{Pedronette2014InfoScie,
  author = {Daniel C. G. Pedronette and Jurandy Almeida and Ricardo da S. Torres},
  title = {A scalable re-ranking method for content-based image retrieval},
  journal = {Information Sciences },
  year = {2014},
  volume = {265},
  number = {0},
  pages = {91 - 104},
  url = {http://www.sciencedirect.com/science/article/pii/S0020025513008864},
  doi = {http://doi.org/10.1016/j.ins.2013.12.030}
}
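
Illustrative sketch (Python): a simplified rendering of the central idea stated in the abstract above, assuming the distance between two images is redefined by the overlap of their top-k retrieval lists (a Jaccard distance here, purely for illustration) rather than by distances computed over the whole collection. Names and the exact list-similarity measure are assumptions.

def topk_distance(topk_a, topk_b):
    # topk_a, topk_b: lists with the k most similar image ids for two images.
    a, b = set(topk_a), set(topk_b)
    return 1.0 - len(a & b) / len(a | b)      # Jaccard distance between top-k lists

def rerank(query_id, topk):
    # topk: dict image id -> its top-k list, as produced by an indexing structure.
    candidates = topk[query_id]
    return sorted(candidates, key=lambda img: topk_distance(topk[query_id], topk[img]))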
      


Pedronette, D.C.G., Penatti, O.A. and da S. Torres, R. Unsupervised manifold learning using Reciprocal kNN Graphs in image re-ranking and rank aggregation tasks 2014 Image and Vision Computing
Vol. 32(2), pp. 120 - 130 
article DOI URL 
Abstract: Abstract In this paper, we present an unsupervised distance learning approach for improving the effectiveness of image retrieval tasks. We propose a Reciprocal kNN Graph algorithm that considers the relationships among ranked lists in the context of a k-reciprocal neighborhood. The similarity is propagated among neighbors considering the geometry of the dataset manifold. The proposed method can be used both for re-ranking and rank aggregation tasks. Unlike traditional diffusion process methods, which require matrix multiplication operations, our algorithm takes only a subset of ranked lists as input, presenting linear complexity in terms of computational and storage requirements. We conducted a large evaluation protocol involving shape, color, and texture descriptors, various datasets, and comparisons with other post-processing approaches. The re-ranking and rank aggregation algorithms yield better results in terms of effectiveness performance than various state-of-the-art algorithms recently proposed in the literature, achieving bull's eye and MAP\ scores of 100% on the well-known MPEG-7 shape dataset.

BibTeX:
@article{Pedronette2014IVC,
  author = {Daniel Carlos Guimarães Pedronette and Otávio A.B. Penatti and Ricardo da S. Torres},
  title = {Unsupervised manifold learning using Reciprocal kNN Graphs in image re-ranking and rank aggregation tasks},
  journal = {Image and Vision Computing},
  year = {2014},
  volume = {32},
  number = {2},
  pages = {120 - 130},
  url = {http://www.sciencedirect.com/science/article/pii/S0262885613001819},
  doi = {http://doi.org/10.1016/j.imavis.2013.12.009}
}
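
Illustrative sketch (Python): for readers unfamiliar with the reciprocal-neighborhood construction mentioned in the abstract above, the toy code below builds a k-reciprocal edge set and performs one naive propagation step. It only conveys the flavor of the approach; the paper's propagation rule, weighting, and complexity guarantees are not reproduced here, and alpha and the averaging rule are invented for illustration.

def reciprocal_knn_edges(topk):
    # topk: dict image id -> list of its k nearest neighbors (a ranked-list prefix).
    edges = set()
    for a, neighbors in topk.items():
        for b in neighbors:
            if a in topk.get(b, []):          # a and b are k-reciprocal neighbors
                edges.add((a, b))
    return edges

def propagate_once(sim, edges, alpha=0.5):
    # One illustrative propagation step: each edge mixes its similarity with the
    # average similarity of the edges leaving its endpoint.
    out = {}
    for (a, b) in edges:
        neigh = [sim.get((b, c), 0.0) for (x, c) in edges if x == b]
        mixed = sum(neigh) / len(neigh) if neigh else 0.0
        out[(a, b)] = (1.0 - alpha) * sim.get((a, b), 0.0) + alpha * mixed
    return out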
      


Pedronette, D.C.G., da S. Torres, R. and Calumby, R.T. Using contextual spaces for image re-ranking and rank aggregation 2014 Multimedia Tools and Applications
Vol. 69(3), pp. 689-716 
article DOI URL 
Abstract: This article presents two novel re-ranking approaches that take into account contextual information defined by the K-Nearest Neighbours (KNN) of a query image for improving the effectiveness of CBIR systems. The main contributions of this article are the definition of the concept of contextual spaces for encoding contextual information of images; the definition of two new re-ranking algorithms that exploit contextual information encoded in contextual spaces; and the evaluation of the proposed algorithms in several CBIR tasks related to the combination of image descriptors; combination of visual and textual descriptors; and combination of post-processing (re-ranking) methods. We conducted a large evaluation protocol involving visual descriptors (considering shape, color, and texture) and textual descriptors, various datasets, and comparisons with other post-processing methods. Experimental results demonstrate the effectiveness of our approaches.

BibTeX:
@article{Pedronette2014MTAP,
  author = {Daniel C. G. Pedronette and Ricardo da S. Torres and Rodrigo Tripodi Calumby},
  title = {Using contextual spaces for image re-ranking and rank aggregation},
  journal = {Multimedia Tools and Applications},
  publisher = {Springer US},
  year = {2014},
  volume = {69},
  number = {3},
  pages = {689-716},
  url = {http://dx.doi.org/10.1007/s11042-012-1115-z},
  doi = {http://doi.org/10.1007/s11042-012-1115-z}
}
      


Penatti, O.A., Silva, F.B., Valle, E., Gouet-Brunet, V. and da S. Torres, R. Visual word spatial arrangement for image retrieval and classification 2014 Pattern Recognition
Vol. 47(2), pp. 705-720 
article DOI URL 
Abstract: We present word spatial arrangement (WSA), an approach to represent the spatial arrangement of visual words under the bag-of-visual-words model. It relies on a simple idea: encoding the relative position of visual words by splitting the image space into quadrants using each detected point as the origin. WSA generates compact feature vectors and is flexible enough to be used for image retrieval and classification and to work with hard or soft assignment, requiring no pre- or post-processing for spatial verification. Experiments in the retrieval scenario show the superiority of WSA over Spatial Pyramids. Experiments in the classification scenario show a reasonable compromise between the two methods, with Spatial Pyramids generating larger feature vectors and WSA providing adequate performance with much more compact features. As WSA encodes only the spatial information of visual words and not their frequency of occurrence, the results indicate the importance of such information for visual categorization.

BibTeX:
@article{Penatti2014PR,
  author = {Otávio A.B. Penatti and Fernanda B. Silva and Eduardo Valle and Valerie Gouet-Brunet and Ricardo da S. Torres},
  title = {Visual word spatial arrangement for image retrieval and classification},
  journal = {Pattern Recognition},
  year = {2014},
  volume = {47},
  number = {2},
  pages = {705--720},
  url = {http://dx.doi.org/10.1016/j.patcog.2013.08.012},
  doi = {http://doi.org/10.1016/j.patcog.2013.08.012}
}
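
Illustrative sketch (Python): the quadrant-based encoding described in the abstract above can be pictured as follows: each detected point is taken as the origin and the remaining points of the image are counted per quadrant. The feature layout, tie-breaking at the axes, and any normalization are assumptions, not the paper's exact formulation.

def quadrant_counts(points):
    # points: list of (x, y) coordinates of detected visual-word occurrences in one image.
    features = []
    for (ox, oy) in points:
        counts = [0, 0, 0, 0]                 # NE, NW, SW, SE relative to (ox, oy)
        for (x, y) in points:
            if (x, y) == (ox, oy):
                continue
            if x >= ox and y >= oy:
                counts[0] += 1
            elif x < ox and y >= oy:
                counts[1] += 1
            elif x < ox and y < oy:
                counts[2] += 1
            else:
                counts[3] += 1
        features.append(counts)
    return features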
      


dos Santos, J.A., Penatti, O.A.B., Gosselin, P.-H., Falcão, A.X., Philipp-Foliguet, S. and da S. Torres, R. Efficient and Effective Hierarchical Feature Propagation 2014 IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Vol. 7(12), pp. 4632-4643 
article DOI  
Abstract: Many methods have been recently proposed to deal with the large amount of data provided by the new remote sensing technologies. Several of those methods rely on the use of segmented regions. However, a common issue in region-based applications is the definition of the appropriate representation scale of the data, a problem usually addressed by exploiting multiple scales of segmentation. The use of multiple scales, however, raises new challenges related to the definition of effective and efficient mechanisms for extracting features. In this paper, we address the problem of extracting features from a hierarchy by proposing two approaches that exploit the existing relationships among regions at different scales. The H-Propagation propagates any histogram-based low-level descriptors. The bag-of-visual-word (BoW)-Propagation approach uses the BoWs model to propagate features along multiple scales. The proposed methods are very efficient, as features need to be extracted only at the base of the hierarchy and yield comparable results to low-level extraction approaches.

BibTeX:
@article{Santos2014JSTARS,
  author = {Jefersson A. dos Santos and Otávio A. B. Penatti and Philippe-Henri Gosselin and Alexandre X. Falcão and Sylvie Philipp-Foliguet and Ricardo da S. Torres},
  title = {Efficient and Effective Hierarchical Feature Propagation},
  journal = {IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
  year = {2014},
  volume = {7},
  number = {12},
  pages = {4632-4643},
  doi = {http://doi.org/10.1109/JSTARS.2014.2341175}
}
      


Teodoro, G., Valle, E., Mariano, N., Torres, R., Meira Jr., W. and Saltz, J. Approximate similarity search for online multimedia services on distributed CPU-GPU platforms 2014 The VLDB Journal
Vol. 23(3), pp. 427-448 
article DOI URL 
Abstract: Similarity search in high-dimensional spaces is a pivotal operation for several database applications, including online content-based multimedia services. With the increasing popularity of multimedia applications, these services are facing new challenges regarding (1) the very large and growing volumes of data to be indexed/searched and (2) the necessity of reducing the response times as observed by end-users. In addition, the nature of the interactions between users and online services creates fluctuating query request rates throughout execution, which requires a similarity search engine to adapt to better use the computation platform and minimize response times. In this work, we address these challenges with Hypercurves, a flexible framework for answering approximate k-nearest neighbor (kNN) queries for very large multimedia databases. Hypercurves executes in hybrid CPU-GPU environments and is able to attain massive query-processing rates through the cooperative use of these devices. Hypercurves also changes its CPU-GPU task partitioning dynamically according to the observed load, aiming for optimal response times. In our empirical evaluation, dynamic task partitioning reduced query response times by approximately 50% compared to the best static task partition. Due to a probabilistic proof of equivalence to the sequential kNN algorithm, the CPU--GPU execution of Hypercurves in distributed (multi-node) environments can be aggressively optimized, attaining superlinear scalability while still guaranteeing, with high probability, results at least as good as those from the sequential algorithm.

BibTeX:
@article{Teodoro2014VLDB,
  author = {Teodoro, George and Valle, Eduardo and Mariano, Nathan and Torres, Ricardo and Meira, Jr, Wagner and Saltz, Joel H.},
  title = {Approximate similarity search for online multimedia services on distributed CPU-GPU platforms},
  journal = {The VLDB Journal},
  publisher = {Springer Berlin Heidelberg},
  year = {2014},
  volume = {23},
  number = {3},
  pages = {427-448},
  url = {http://dx.doi.org/10.1007/s00778-013-0329-7},
  doi = {http://doi.org/10.1007/s00778-013-0329-7}
}
      


Almeida, J., Leite, N.J. and da S. Torres, R. Online video summarization on compressed domain 2013 Journal of Visual Communication and Image Representation
Vol. 24(6), pp. 729 - 738 
article DOI URL 
Abstract: Recent advances in technology have increased the availability of video data, creating a strong requirement for efficient systems to manage those materials. Making efficient use of video information requires that data be accessed in a user-friendly way. Ideally, one would like to understand the content of a video without having to watch it entirely. This has been the goal of a quickly evolving research area known as video summarization. In this paper, we present a novel approach for video summarization that works in the compressed domain and allows the progressive generation of a video summary. The proposed method relies on exploiting visual features extracted from the video stream and on using a simple and fast algorithm to summarize the video content. Experiments on a TRECVID 2007 dataset show that our approach produces summaries of high quality relative to state-of-the-art solutions, in a computational time that makes it suitable for online usage.

BibTeX:
@article{Almeida2013JVCIR,
  author = {Jurandy Almeida and Neucimar J. Leite and Ricardo da S. Torres},
  title = {Online video summarization on compressed domain},
  journal = {Journal of Visual Communication and Image Representation },
  year = {2013},
  volume = {24},
  number = {6},
  pages = {729 - 738},
  note = {Recent advances on analysis and processing for distributed video systems},
  url = {http://www.sciencedirect.com/science/article/pii/S1047320312000223},
  doi = {http://doi.org/10.1016/j.jvcir.2012.01.009}
}
      


Murthy, U., Fox, E.A., Chen, Y., Hallerman, E., Orth, D., da S. Torres, R., Falcão, T.R.C., Ramos, E.J., Kozievitch, N.P. and Li, L.T. SuperIDR: a tool for fish identification and information retrieval 2013 Fisheries
Vol. 38(2), pp. 65-75 
article DOI  
Abstract: Students, fisheries professionals, and the general public may value computer-facilitated assistance for fish identification and access to ecological and life history information. We developed SuperIDR, a software package supporting such applications, by utilizing the search and data retrieval capabilities of digital libraries, as well as key features of tablet PCs. We demonstrated SuperIDR utilizing a database with information on 207 freshwater fishes of Virginia. A user may annotate fish images and identify fishes by using a dichotomous key; searching for key words, similar images, subimages, or annotations on images; or combinations of these approaches. Students using the software demonstrated enhanced ability to correctly identify specimens. Their comments led to improvements, including the addition of new features. The PC-based system for identifying freshwater fishes of Virginia may be downloaded and modified. SuperIDR is a prototype for PC-based species identification applications -- the system architecture and the open-source software that we developed are applicable to other fish faunas and to a broader range of species identification tasks.

BibTeX:
@article{Murthy2013Fisheries,
  author = {Uma Murthy and Edward A. Fox and Yinlin Chen and Eric Hallerman and Donald Orth and Ricardo da S. Torres and Tiago R. C. Falcão and Evandro J. Ramos and Nadia P. Kozievitch and Lin Tzy Li},
  title = {SuperIDR: a tool for fish identification and information retrieval},
  journal = {Fisheries},
  year = {2013},
  volume = {38},
  number = {2},
  pages = {65-75},
  doi = {http://doi.org/10.1080/03632415.2013.757982}
}
      


Pedronette, D.C.G. and da S. Torres, R. Image Re-Ranking and Rank Aggregation based on Similarity of Ranked Lists 2013 Pattern Recognition
Vol. 46(8), pp. 2350-2360 
article DOI URL 
Abstract: In Content-based Image Retrieval (CBIR) systems, accurately ranking collection images is of great relevance. Users are interested in the returned images placed at the first positions, which usually are the most relevant ones. Collection images are ranked in increasing order of their distance to the query pattern (e.g., query image) defined by users. Therefore, the effectiveness of these systems is very dependent on the accuracy of the distance function adopted. In this paper, we present a novel context-based approach for redefining distances and later re-ranking images, aiming to improve the effectiveness of CBIR systems. In our approach, distances among images are redefined based on the similarity of their ranked lists. Conducted experiments involving shape, color, and texture descriptors demonstrate the effectiveness of our method.

BibTeX:
@article{Pedronette2013PR,
  author = {Daniel C. G. Pedronette and Ricardo da S. Torres},
  title = {Image Re-Ranking and Rank Aggregation based on Similarity of Ranked Lists},
  journal = {Pattern Recognition},
  year = {2013},
  volume = {46},
  number = {8},
  pages = {2350--2360},
  url = {http://dx.doi.org/10.1016/j.patcog.2013.01.004},
  doi = {http://doi.org/10.1016/j.patcog.2013.01.004}
}
      


dos Santos, J.A., Gosselin, P.-H., Philipp-Foliguet, S., da S. Torres, R. and Falcão, A.X. Interactive Multiscale Classification of High-Resolution Remote Sensing Images 2013 IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (JSTARS)
Vol. 6(4), pp. 2020-2034 
article DOI URL 
Abstract: The use of remote sensing images (RSIs) as a source of information in agribusiness applications is very common. In those applications, it is fundamental to identify and understand trends and patterns in space occupation. However, the identification and recognition of crop regions in remote sensing images are not trivial tasks yet. In high-resolution image analysis and recognition, many of the problems are related to the representation scale of the data, and to both the size and the representativeness of the training set. In this paper, we propose a method for interactive classification of remote sensing images considering multiscale segmentation. Our aim is to improve the selection of training samples using the features from the most appropriate scales of representation. We use a boosting-based active learning strategy to select regions at various scales for user's relevance feedback. The idea is to select the regions that are closer to the border that separates both target classes: relevant and non-relevant regions. Experimental results showed that the combination of scales produces better results than isolated scales in a relevance feedback process. Furthermore, the interactive method achieved good results with few user interactions. The proposed method needs only a small portion of the training set to build classifiers that are as strong as the ones generated by a supervised method that uses the whole training set.

BibTeX:
@article{Santos2013JSTARS,
  author = {Jefersson A. dos Santos and Philippe-Henri Gosselin and Sylvie Philipp-Foliguet and Ricardo da S. Torres and Alexandre X. Falcão},
  title = {Interactive Multiscale Classification of High-Resolution Remote Sensing Images},
  journal = {IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (JSTARS)},
  year = {2013},
  volume = {6},
  number = {4},
  pages = {2020--2034},
  url = {http://dx.doi.org/10.1109/JSTARS.2012.2237013},
  doi = {http://doi.org/10.1109/JSTARS.2012.2237013}
}
      


Saraiva, P.C., Cavalcanti, J.M.B., Gonçalves, M.A., Santos, K.C.L., Moura, E.S. and da S. Torres, R. Evaluation of parameters for combining multiple textual sources of evidence for Web image retrieval using genetic programming 2013 Journal of the Brazilian Computer Society
Vol. 19(2), pp. 147-160 
article DOI URL 
Abstract: Web image retrieval is a research area that has received a lot of attention in the last few years due to the growing availability of images on the Web. Since content-based image retrieval is still considered very difficult and expensive in the Web context, most current large-scale Web image search engines use textual descriptions to represent the content of Web images. In this paper we present a study on the usage of genetic programming (GP) to address the problem of image retrieval on the World Wide Web by using textual sources of evidence and textual queries. We investigate several parameter choices related to the usage of a framework previously proposed by us. The proposed framework uses GP to provide a good solution to combine multiple textual sources of evidence associated with Web images. Experiments performed using a collection with more than 195,000 images extracted from the Web showed that our evolutionary approach outperforms the best baseline we used, with gains of 22.36% in terms of mean average precision.

BibTeX:
@article{Saraiva2013JBCS,
  author = {Patrícia Correia Saraiva and João M. B. Cavalcanti and Marcos André Gonçalves  and Katia C. Lage Santos and Edleno S. Moura and Ricardo da S. Torres},
  title = {Evaluation of parameters for combining multiple textual sources of evidence for Web image retrieval using genetic programming},
  journal = {Journal of the Brazilian Computer Society},
  publisher = {Springer-Verlag},
  year = {2013},
  volume = {19},
  number = {2},
  pages = {147--160},
  url = {http://dx.doi.org/10.1007/s13173-012-0087-1},
  doi = {http://doi.org/10.1007/s13173-012-0087-1}
}
      


da Silva Torres, R. and Lewiner, T. Guest Editorial: Image and Video Processing and Analysis 2013 Journal of Mathematical Imaging and Vision
Vol. 45(3), pp. 199 
article  
Abstract: This is an excerpt from the content: This special issue of Journal of Mathematical Imaging and Vision contains expanded versions of papers presented at Sibgrapi 2011, the 24th Conference on Graphics, Patterns, and Images. Sibgrapi is the most traditional meeting in Latin America on Computer Graphics, Image Processing, Pattern Recognition and Computer Vision. This special issue contains eight articles on different aspects of image and video processing and analysis, covering topics from image enhancement and segmentation to shape representation and target tracking in videos. The first article, entitled "Spatio-Temporal Resolution Enhancement of Vocal Tract MRI Sequences-A Comparison Among Wiener Filter Based Methods", exploits Wiener filter-based approaches for resolution enhancement of vocal tract images. The article "Analysis of Scalar Maps for the Segmentation of the Corpus Callosum in Diffusion Tensor Fields" also focuses on medical images. In this article, the authors propose the use of new scalar maps based on mathematica...

BibTeX:
@article{Torres2013JMIV,
  author = {Ricardo da Silva Torres and Thomas Lewiner},
  title = {Guest Editorial: Image and Video Processing and Analysis},
  journal = {Journal of Mathematical Imaging and Vision},
  year = {2013},
  volume = {45},
  number = {3},
  pages = {199}
}
      


Almeida, J., Leite, N.J. and da S. Torres, R. VISON: VIdeo Summarization for ONline applications 2012 Pattern Recognition Letters
Vol. 33(4), pp. 397-409 
article DOI URL 
Abstract: Recent advances in technology have increased the availability of video data, creating a strong requirement for efficient systems to manage those materials. Making efficient use of video information requires that data be accessed in a user-friendly way. This has been the goal of a quickly evolving research area known as video summarization. Most existing techniques to address the problem of summarizing a video sequence have focused on the uncompressed domain. However, decoding and analyzing a video sequence are two extremely time-consuming tasks. Thus, video summaries are usually produced off-line, penalizing any user interaction. The lack of customization is very critical, as users often have different demands and resources. Since video data are usually available in compressed form, it is desirable to directly process video material without decoding. In this paper, we present VISON, a novel approach for video summarization that works in the compressed domain and allows user interaction. The proposed method is based both on exploiting visual features extracted from the video stream and on using a simple and fast algorithm to summarize the video content. Results from a rigorous empirical comparison with a subjective evaluation show that our technique produces video summaries of high quality relative to state-of-the-art solutions, in a computational time that makes it suitable for online usage.

BibTeX:
@article{Almeida2012PRL,
  author = {Jurandy Almeida and Neucimar J. Leite and Ricardo da S. Torres},
  title = {VISON: VIdeo Summarization for ONline applications},
  journal = {Pattern Recognition Letters},
  year = {2012},
  volume = {33},
  number = {4},
  pages = {397--409},
  url = {http://dx.doi.org/10.1016/j.patrec.2011.08.007},
  doi = {http://doi.org/10.1016/j.patrec.2011.08.007}
}
      


Kozievitch, N.P., Almeida, J., da S. Torres, R., Santanchè, A., Leite, N.J., Murthy, U. and Fox, E.A. Reusing a Compound-Based Infrastructure for Searching and Annotating Video Stories 2012 International Journal of Multimedia Technology
Vol. 2(3), pp. 89-97 
article URL 
Abstract: The fast evolution of technology has led to a growing demand for multimedia data, increasing the amount of research into efficient systems to manage those materials. Significant in those efforts is the work being done by the Content-Based Image Retrieval (CBIR) community in processing and retrieving images, along with their further combination with annotations. Nowadays, images play a key role in digital applications. Contextual integration of images with different sources is vital - it involves reusing and aggregating a large amount of information with other media types. In particular, if we consider video data, annotations can be used to summarize textual descriptions and metadata, while images can be used to summarize videos into storyboards, providing an easy way to navigate and to browse large video collections. This has been the goal of a rapidly evolving research area known as video summarization. In this paper, we present a novel approach to reuse a CBIR infrastructure for searching, annotating and publishing video stories, taking advantage of the complex object (CO) concept to integrate resources. Our approach relies on a specific component technology (Digital Content Component - DCC) to encapsulate the CBIR-annotation related tasks and integrate them with video summarization techniques. Such a strategy provides an effective way to reuse, compose, and aggregate both content and processing software.

BibTeX:
@article{Kozievitch2012IJMT,
  author = {Nádia P. Kozievitch and Jurandy Almeida and Ricardo da S. Torres and André Santanchè and Neucimar J. Leite and Uma Murthy and Edward A. Fox},
  title = {Reusing a Compound-Based Infrastructure for Searching and Annotating Video Stories},
  journal = {International Journal of Multimedia Technology},
  year = {2012},
  volume = {2},
  number = {3},
  pages = {89--97},
  url = {http://www.ijmt.org/paperInfo.aspx?ID=769}
}
      


Lewiner, T. and da Silva Torres, R. Preface 2012 The Visual Computer
Vol. 28(10), pp. 957 
article  
Abstract: This is an excerpt from the content: This special issue of The Visual Computer contains expanded versions of eight papers presented at Sibgrapi 2011, the 24th Conference on Graphics, Patterns, and Images. Sibgrapi is the most traditional meeting in Latin America on Computer Graphics, Image Processing, Pattern Recognition and Computer Vision. In 2011, Sibgrapi was hosted in the beautiful city of Maceió, Alagoas, Brazil, being organized by the Instituto de Matemática of the Universidade Federal de Alagoas, and supported by Petrobras, CNPq, CAPES, FAPEAL, UFAL, SBM, and SBC. Thanks to The Visual Computer's editor in chief Nadia Magnenat-Thalmann, this issue presents eight papers selected from a pool of 46 papers presented during the technical sessions of the conference. They represent the main topics inside visual computing, from computational topology and modeling to rendering, visualization, and simulation. From Computational Topology, the first paper, entitled "Efficient computation of 3d Morse-Smale complexes and persistent...

BibTeX:
@article{Lewiner2012VisualComputer,
  author = {Thomas Lewiner and Ricardo da Silva Torres},
  title = {Preface},
  journal = {The Visual Computer},
  year = {2012},
  volume = {28},
  number = {10},
  pages = {957}
}
      


Pedronette, D.C.G. and da S. Torres, R. Exploiting pairwise recommendation and clustering strategies for image re-ranking 2012 Information Sciences
Vol. 207(10), pp. 19-34 
article DOI URL 
Abstract: In Content-based Image Retrieval (CBIR) systems, accurately ranking collection images is of great relevance. Users are interested in the returned images placed at the first positions, which usually are the most relevant ones. Commonly, image content descriptors are used to compute ranked lists in CBIR systems. In general, these systems perform only pairwise image analysis, that is, compute similarity measures considering only pairs of images, ignoring the rich information encoded in the relations among several images. This paper presents a novel re-ranking approach used to improve the effectiveness of CBIR tasks by exploring relations among images. In our approach, a recommendation-based strategy is combined with a clustering method. Both exploit contextual information encoded in ranked lists computed by CBIR systems. We conduct several experiments to evaluate the proposed method. Our experiments consider shape, color, and texture descriptors and comparisons with other post-processing methods. Experimental results demonstrate the effectiveness of our method.

BibTeX:
@article{Pedronette2012InfScie,
  author = {Daniel Carlos Guimarães Pedronette and Ricardo da S. Torres},
  title = {Exploiting pairwise recommendation and clustering strategies for image re-ranking},
  journal = {Information Sciences},
  year = {2012},
  volume = {207},
  number = {10},
  pages = {19--34},
  url = {http://dx.doi.org/10.1016/j.ins.2012.04.032},
  doi = {http://doi.org/10.1016/j.ins.2012.04.032}
}
      


Pedronette, D.C.G. and da S. Torres, R. Exploiting contextual information for image re-ranking and rank aggregation 2012 International Journal of Multimedia Information Retrieval
Vol. 1(2), pp. 115-128 
article DOI URL 
Abstract: Content-based image retrieval (CBIR) systems aim to retrieve the most similar images in a collection, given a query image. Since users are interested in the returned images placed at the first positions of ranked lists (which usually are the most relevant ones), the effectiveness of these systems is very dependent on the accuracy of ranking approaches. This paper presents a novel re-ranking algorithm aiming to exploit contextual information for improving the effectiveness of rankings computed by CBIR systems. In our approach, ranked lists and distance scores are used to create context images, later used for retrieving contextual information. We also show that our re-ranking method can be applied to other tasks, such as (a) combining ranked lists obtained using different image descriptors (rank aggregation) and (b) combining post-processing methods. Conducted experiments involving shape, color, and texture descriptors and comparisons with other post-processing methods demonstrate the effectiveness of our method.

BibTeX:
@article{Pedronette2012JMIR,
  author = {Daniel Carlos Guimarães Pedronette and Ricardo da S. Torres},
  title = {Exploiting contextual information for image re-ranking and rank aggregation},
  journal = {International Journal of Multimedia Information Retrieval},
  year = {2012},
  volume = {1},
  number = {2},
  pages = {115--128},
  url = {http://dx.doi.org/10.1007/s13735-012-0002-8},
  doi = {http://doi.org/10.1007/s13735-012-0002-8}
}
      


Penatti, O.A., Valle, E. and da S. Torres, R. Comparative study of global color and texture descriptors for web image retrieval 2012 Journal of Visual Communication and Image Representation
Vol. 23(2), pp. 359-380 
article DOI URL 
Abstract: This paper presents a comparative study of color and texture descriptors considering the Web as the environment of use. We take into account the diversity and large-scale aspects of the Web considering a large number of descriptors (24 color and 28 texture descriptors, including both traditional and recently proposed ones). The evaluation is made on two levels: a theoretical analysis in terms of algorithms complexities and an experimental comparison considering efficiency and effectiveness aspects. The experimental comparison contrasts the performances of the descriptors in small-scale datasets and in a large heterogeneous database containing more than 230 thousand images. Although there is a significant correlation between descriptors performances in the two settings, there are notable deviations, which must be taken into account when selecting the descriptors for large-scale tasks. An analysis of the correlation is provided for the best descriptors, which hints at the best opportunities of their use in combination.

BibTeX:
@article{Penatti2012JVCIR,
  author = {Otávio A.B. Penatti and Eduardo Valle and Ricardo da S. Torres},
  title = {Comparative study of global color and texture descriptors for web image retrieval},
  journal = {Journal of Visual Communication and Image Representation},
  year = {2012},
  volume = {23},
  number = {2},
  pages = {359--380},
  url = {http://www.sciencedirect.com/science/article/pii/S1047320311001465},
  doi = {http://doi.org/10.1016/j.jvcir.2011.11.002}
}
      


dos Santos, J.A., Gosselin, P.-H., Philipp-Foliguet, S., da S. Torres, R. and Falcão, A.X. Multiscale Classification of Remote Sensing Images 2012 IEEE Transactions on Geoscience and Remote Sensing
Vol. 50(10), pp. 3764-3775 
article DOI URL 
Abstract: A huge effort has been applied in image classification to create high-quality thematic maps and to establish precise inventories about land cover use. The peculiarities of remote sensing images (RSIs) combined with the traditional image classification challenges made RSI classification a hard task. Our aim is to propose a kind of boost-classifier adapted to multiscale segmentation. We use the paradigm of boosting, whose principle is to combine weak classifiers to build an efficient global one. Each weak classifier is trained for one level of the segmentation and one region descriptor. We have proposed and tested weak classifiers based on linear support vector machines (SVM) and region distances provided by descriptors. The experiments were performed on a large image of coffee plantations. We have shown in this paper that our approach based on boosting can detect the scale and set of features best suited to a particular training set. We have also shown that hierarchical multiscale analysis is able to reduce training time and to produce a stronger classifier. We compare the proposed methods with a baseline based on SVM with radial basis function kernel. The results show that the proposed methods outperform the baseline.

BibTeX:
@article{Santos2012TGRS,
  author = {Jefersson A. dos Santos and Philippe-Henri Gosselin and Sylvie Philipp-Foliguet and Ricardo da S. Torres and Alexandre X. Falcão},
  title = {Multiscale Classification of Remote Sensing Images},
  journal = {IEEE Transactions on Geoscience and Remote Sensing},
  year = {2012},
  volume = {50},
  number = {10},
  pages = {3764--3775},
  url = {http://dx.doi.org/10.1109/TGRS.2012.2186582},
  doi = {http://doi.org/10.1109/TGRS.2012.2186582}
}
      


da Silva, A.T., dos Santos, J.A., Falcão, A.X., da S. Torres, R. and Magalhães, L.P. Incorporating multiple distance spaces in optimum-path forest classification to improve feedback-based learning 2012 Computer Vision and Image Understanding
Vol. 116(4), pp. 510-523 
article DOI URL 
Abstract: In content-based image retrieval (CBIR) using feedback-based learning, the user marks the relevance of returned images and the system learns how to return more relevant images in a next iteration. In this learning process, image comparison may be based on distinct distance spaces due to multiple visual content representations. This work improves the retrieval process by incorporating multiple distance spaces in a recent method based on optimum-path forest (OPF) classification. For a given training set with relevant and irrelevant images, an optimization algorithm finds the best distance function to compare images as a combination of their distances according to different representations. Two optimization techniques are evaluated: a multi-scale parameter search (MSPS), never used before for CBIR, and a genetic programming (GP) algorithm. The combined distance function is used to project an OPF classifier and to rank images classified as relevant for the next iteration. The ranking process takes into account relevant and irrelevant representatives, previously found by the OPF classifier. Experiments show the advantages in effectiveness of the proposed approach with both optimization techniques over the same approach with single distance space and over another state-of-the-art method based on multiple distance spaces.

BibTeX:
@article{Silva2012CVIU,
  author = {André Tavares da Silva and Jefersson Alex dos Santos and Alexandre Xavier Falcão and Ricardo da S. Torres and Léo Pini Magalhães},
  title = {Incorporating multiple distance spaces in optimum-path forest classification to improve feedback-based learning},
  journal = {Computer Vision and Image Understanding},
  year = {2012},
  volume = {116},
  number = {4},
  pages = {510--523},
  url = {http://www.sciencedirect.com/science/article/pii/S107731421100261X},
  doi = {http://doi.org/10.1016/j.cviu.2011.12.001}
}
      


Ferreira, C.D., Santos, J.A., da S. Torres, R., Gonçalves, M.A., Rezende, R.C. and Fan, W. Relevance feedback based on genetic programming for image retrieval 2011 Pattern Recognition Letters
Vol. 32(1), pp. 27-37 
article DOI URL 
Abstract: This paper presents two content-based image retrieval frameworks with relevance feedback based on genetic programming. The first framework exploits only the user indication of relevant images. The second one considers not only the relevant but also the images indicated as non-relevant. Several experiments were conducted to validate the proposed frameworks. These experiments employed three different image databases and color, shape, and texture descriptors to represent the content of database images. The proposed frameworks were compared, and outperformed six other relevance feedback methods regarding their effectiveness and efficiency in image retrieval tasks.

BibTeX:
@article{Ferreira2011PRL,
  author = {Cristiano D. Ferreira and Jefersson A. Santos and Ricardo da S. Torres and Marcos André Gonçalves and Rodrigo Carvalho Rezende and Weiguo Fan},
  title = {Relevance feedback based on genetic programming for image retrieval},
  journal = {Pattern Recognition Letters},
  year = {2011},
  volume = {32},
  number = {1},
  pages = {27--37},
  url = {http://dx.doi.org/10.1016/j.patrec.2010.05.015},
  doi = {http://doi.org/10.1016/j.patrec.2010.05.015}
}
      


Gil, F.B., Kozievitch, N.P. and da S. Torres, R. GeoNote: A Web Service for Geographic Data Annotation in Biodiversity Information Systems 2011 Journal of Information and Data Management (JIDM)
Vol. 2(2), pp. 195-210 
article URL 
Abstract: Biodiversity studies are often based on the use of data associated with field observations. These data are usually associated with a geographic location. Most existing biodiversity information systems provide support for storing and querying geographic data. Annotation services, in general, are not supported. This paper presents an annotation web service to correlate biodiversity data with geographic information. We use superimposed information concepts to construct a Web service for annotating vector geographic data. The Web service specification includes the definition of a generic API for handling annotations and the definition of a data model for storing them. The solution was validated through the implementation of a prototype for the biodiversity domain considering a potential usage scenario.

BibTeX:
@article{Gil2011JIDM,
  author = {Fabiana B. Gil and Nádia P. Kozievitch and Ricardo da S. Torres},
  title = {GeoNote: A Web Service for Geographic Data Annotation in Biodiversity Information Systems},
  journal = {Journal of Information and Data Management (JIDM)},
  year = {2011},
  volume = {2},
  number = {2},
  pages = {195--210},
  url = {http://seer.lcc.ufmg.br/index.php/jidm/article/view/133}
}
      


Kimura, P.A.S., Cavalcanti, J.M.B., Saraiva, P.C., da S. Torres, R. and Gonçalves, M.A. Evaluating Retrieval Effectiveness of Descriptors for Searching in Large Image Databases 2011 Journal of Information and Data Management (JIDM)
Vol. 2(3), pp. 305-320 
article URL 
Abstract: This article presents an evaluation of image descriptors for searching in large image databases. Several image descriptors proposed in the literature achieve high precision levels when evaluated on small (less than 20,000 images) and well-behaved image databases. Our assumption is that retrieval effectiveness may be strongly affected by variations in the size, quality, and diversity of the images in the database. In order to verify whether an image descriptor maintains its retrieval effectiveness in large databases, experiments were carried out using several image descriptors and three image collections, including one with over 100,000 images collected from the Web. The results obtained show that, in general, the retrieval effectiveness of the different descriptors varies little in small image collections, whereas in large image collections they differ significantly. Among the descriptors used in the experiments are two that we proposed for use in large and heterogeneous image databases. The proposed descriptors significantly outperform the other descriptors used as baselines in the Web collection. These results give us a better understanding of the features and the strategies that should be followed to construct descriptors for practical search tasks in large image databases.

BibTeX:
@article{Kimura2011JIDM,
  author = {Petrina A. S. Kimura and João M. B. Cavalcanti and Patricia C. Saraiva and Ricardo da S. Torres and Marcos André Gonçalves},
  title = {Evaluating Retrieval Effectiveness of Descriptors for Searching in Large Image Databases},
  journal = {Journal of Information and Data Management (JIDM)},
  year = {2011},
  volume = {2},
  number = {3},
  pages = {305--320},
  url = {http://seer.lcc.ufmg.br/index.php/jidm/article/view/161}
}
      


Kozievitch, N.P., da S. Torres, R., Santanchè, A., Pedronette, D.C.G., Calumby, R.T. and Fox, E.A. An Infrastructure for Searching and Harvesting Complex Image Objects 2011 The Information - Interaction - Intelligence (I3) Journal
Vol. 11(2), pp. 39-68 
article URL 
Abstract: In order to reuse, integrate, and unify different resources from a common perspective, complex objects (COs) have emerged to facilitate aggregation abstraction and to help developers manage heterogeneous resources and their components. In particular, complex image objects (ICOs) play a key role in different domains, due to their large availability and integration with datasets, metadata, and image manipulation software. Applications that manage ICOs still lack support from mechanisms for processing and managing data, creating references, making annotations, searching by content, harvesting, and organizing their components. Examples of common services that need support in these applications include (i) Content-Based Image Retrieval (CBIR); (ii) the combination of visual with textual retrieval; and (iii) the harvesting of COs. This paper presents the design and implementation of a CO-based infrastructure comprising these three services, focusing on the developer view of service integration as components. Our infrastructure relies on a specific component technology - Digital Content Component (DCC) - to wrap the complex image object and to encapsulate CBIR-related tasks. Other contributions rely on the use of re-ranking and rank-aggregation approaches for combining visual and textual retrieval, and on the integration of our DCC-based framework with Open Archives Initiative mechanisms to support metadata harvesting and the "discovery" of digital objects by other applications.

BibTeX:
@article{Kozievitch2011i3,
  author = {Nádia P. Kozievitch and Ricardo da S. Torres and André Santanchè and Daniel C. G. Pedronette and Rodrigo T. Calumby and Edward A. Fox},
  title = {An Infrastructure for Searching and Harvesting Complex Image Objects},
  journal = {The Information - Interaction - Intelligence (I3) Journal},
  year = {2011},
  volume = {11},
  number = {2},
  pages = {39--68},
  url = {http://www.irit.fr/journal-i3/volume11/numero02/article_11_02_03.pdf}
}
      


Kozievitch, N.P., Almeida, J., da S. Torres, R., Leite, N.A., Gonçalves, M.A., Murthy, U. and Fox, E.A. Towards a Formal Theory for Complex Objects and Content-Based Image Retrieval 2011 Journal of Information and Data Management (JIDM)
Vol. 2(3), pp. 321-336 
article URL 
Abstract: Advanced services in digital libraries (DLs) have been developed and are widely used to address the required capabilities of an assortment of systems as DLs expand into diverse application domains. In order to reuse, integrate, unify, manage, and support these heterogeneous resources, the notion of complex objects (COs) has emerged as a means to facilitate aggregation of content and to help developers to manage heterogeneous information resources, and their internal components. In particular, complex image objects (along with the most used service - Content-Based Image Retrieval) have the potential to play a key role in information systems, due to the large availability of images and the need to integrate them with other datasets (and metadata), and image manipulation software. However, the lack of consensus on precise theoretical definitions for these concepts usually leads to ad hoc implementation, duplication of efforts, and interoperability problems. In this article we exploit the 5S Framework to propose a formal description for Complex Objects and Content-Based Image Retrieval, defining their fundamental concepts and relationships from a digital library (DL) perspective. These formalized concepts can be used to classify, compare, and highlight the differences among components, technologies, and applications, impacting digital library researchers, designers, and developers. The theoretical extensions of digital library functionality presented here cover complex image objects, within a practical case study, to exemplify the integrative use of services, thus balancing theory and practice.

BibTeX:
@article{Kozievitch2011JIDM,
  author = {Nádia P. Kozievitch and Jurandy Almeida and Ricardo da S. Torres and Neucimar A. Leite and Marcos André Gonçalves  and Uma Murthy and Edward A. Fox},
  title = {Towards a Formal Theory for Complex Objects and Content-Based Image Retrieval},
  journal = {Journal of Information and Data Management (JIDM)},
  year = {2011},
  volume = {2},
  number = {3},
  pages = {321--336},
  url = {http://seer.lcc.ufmg.br/index.php/jidm/article/view/142}
}
      


Mariote, L.E., Medeiros, C.B., da S. Torres, R. and Bueno, L.M. TIDES - a new descriptor for time series oscillation behavior 2011 GeoInformatica
Vol. 15(1), pp. 75-109 
article DOI URL 
Abstract: Sensor networks have increased the amount and variety of temporal data available, requiring the definition of new techniques for data mining. Related research typically addresses the problems of indexing, clustering, classification, summarization, and anomaly detection. There is a wide range of techniques to describe and compare time series, but they focus on series' values. This paper concentrates on a new aspect: that of describing oscillation patterns. It presents a technique for time series similarity search at multiple temporal scales, defining a descriptor that uses the angular coefficients from a linear segmentation of the curve that represents the evolution of the analyzed series. This technique is generalized to handle co-evolution, in which several phenomena vary at the same time. Preliminary experiments with real datasets showed that our approach correctly characterizes the oscillation of single time series, for multiple time scales, and is able to compute the similarity among sets of co-evolving series.

BibTeX:
@article{Mariote2011Geoinformatica,
  author = {Leonardo E. Mariote and Claudia Bauzer Medeiros and Ricardo da S. Torres and Lucas M. Bueno},
  title = {TIDES - a new descriptor for time series oscillation behavior},
  journal = {GeoInformatica},
  year = {2011},
  volume = {15},
  number = {1},
  pages = {75--109},
  url = {http://dx.doi.org/10.1007/s10707-010-0112-5},
  doi = {http://doi.org/10.1007/s10707-010-0112-5}
}
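
A minimal sketch (not the authors' TIDES implementation) of the idea summarized above: describe a series by the angular coefficients (slopes) of a piecewise-linear segmentation, computed at several window sizes standing in for temporal scales. The window sizes and function names are illustrative choices.

import numpy as np

def slope_descriptor(series, window):
    """Slope of a least-squares line fitted to each non-overlapping window."""
    slopes = []
    for start in range(0, len(series) - window + 1, window):
        segment = series[start:start + window]
        x = np.arange(window)
        slope, _ = np.polyfit(x, segment, 1)   # degree-1 fit returns (slope, intercept)
        slopes.append(slope)
    return np.array(slopes)

def multiscale_distance(a, b, windows=(4, 8, 16)):
    """Average Euclidean distance between slope descriptors over several scales."""
    dists = []
    for w in windows:
        da, db = slope_descriptor(a, w), slope_descriptor(b, w)
        n = min(len(da), len(db))
        dists.append(np.linalg.norm(da[:n] - db[:n]))
    return float(np.mean(dists))

if __name__ == "__main__":
    t = np.linspace(0, 4 * np.pi, 256)
    print(multiscale_distance(np.sin(t), np.sin(t + 0.3)))          # similar oscillation
    print(multiscale_distance(np.sin(t), np.linspace(0, 1, 256)))   # different behavior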
      


Medeiros, C.B., Santanchè, A., Madeira, E.R.M., Martins, E., Magalhães, G.C., Baranauskas, M.C.C., Leite, N.J. and da S. Torres, R. Data Driven Research at LIS: the Laboratory of Information Systems at UNICAMP 2011 Journal of Information and Data Management (JIDM)
Vol. 2(2), pp. 93-108 
article URL 
Abstract: This article presents an overview of the research conducted at the Laboratory of Information Systems (LIS) at the Institute of Computing, UNICAMP. Its creation, in 1994, was motivated by the need to support data-driven research within multidisciplinary projects involving computer scientists and scientists from other fields. Throughout the years, it has housed projects in many domains (agriculture, biodiversity, medicine, health, bioinformatics, urban planning, telecommunications, and sports), with scientific results in these fields and in Computer Science, with emphasis on data management, integrating research on databases, image processing, human-computer interfaces, software engineering, and computer networks. The research produced 14 PhD theses, 70 MSc dissertations, 40+ journal papers, and 200+ conference papers, and was supported by over 80 undergraduate student scholarships. Several of these results were obtained through cooperation with many Brazilian universities and research centers, as well as with groups in Canada, the USA, France, Germany, the Netherlands, and Portugal. The authors of this article are faculty at the Institute whose students developed their MSc or PhD research in the lab. For additional details, online systems, papers, and reports, see http://www.lis.ic.unicamp.br and http://www.lis.ic.unicamp.br/publications.

BibTeX:
@article{Medeiros2011JIDM,
  author = {Claudia Bauzer Medeiros and André Santanchè and Edmundo R. M. Madeira and Eliane Martins and Geovane C. Magalhães and Maria Cecilia C. Baranauskas and Neucimar J. Leite and Ricardo da S. Torres},
  title = {Data Driven Research at LIS: the Laboratory of Information Systems at UNICAMP},
  journal = {Journal of Information and Data Management (JIDM)},
  year = {2011},
  volume = {2},
  number = {2},
  pages = {93--108},
  url = {http://seer.lcc.ufmg.br/index.php/jidm/article/view/116}
}
      


Pedronette, D.C.G. and da S. Torres, R. Exploiting clustering approaches for image re-ranking 2011 Journal of Visual Languages & Computing
Vol. 22(6), pp. 453-466 
article DOI URL 
Abstract: This paper presents the Distance Optimization Algorithm (DOA), a re-ranking method that aims to improve the effectiveness of Content-Based Image Retrieval (CBIR) systems. DOA relies on an iterative clustering approach based on distance correlation and on the similarity of ranked lists. The algorithm explores the fact that if two images are similar, their distances to other images, and therefore their ranked lists, should be similar as well. We also describe how DOA can be used to combine different descriptors and thereby improve the quality of the results of CBIR systems. Experiments involving shape, color, and texture descriptors demonstrate the effectiveness of our method when compared with state-of-the-art approaches.

BibTeX:
@article{Pedronette2011JVLC,
  author = {Daniel Carlos Guimarães Pedronette and Ricardo da S. Torres},
  title = {Exploiting clustering approaches for image re-ranking},
  journal = {Journal of Visual Languages & Computing},
  year = {2011},
  volume = {22},
  number = {6},
  pages = {453--466},
  url = {http://www.sciencedirect.com/science/article/pii/S1045926X11000632},
  doi = {http://doi.org/10.1016/j.jvlc.2011.08.001}
}
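
A minimal sketch, not the DOA algorithm itself, of the intuition that images with similar ranked lists should be treated as similar: pairwise distances are iteratively shrunk according to the overlap of top-k neighbour lists. The parameters k, alpha, and the number of iterations are illustrative.

import numpy as np

def rerank_by_list_overlap(dist, k=5, alpha=0.5, iterations=2):
    """dist: (n, n) symmetric distance matrix; returns an updated matrix."""
    d = dist.astype(float).copy()
    n = d.shape[0]
    for _ in range(iterations):
        # top-k neighbour sets taken from the current ranked lists
        topk = [set(np.argsort(d[i])[:k]) for i in range(n)]
        overlap = np.array([[len(topk[i] & topk[j]) / k for j in range(n)]
                            for i in range(n)])
        # shrink distances between images whose ranked lists agree
        d = d * (1.0 - alpha * overlap)
        np.fill_diagonal(d, 0.0)
    return d

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(20, 8))
    dist = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
    new_dist = rerank_by_list_overlap(dist)
    print(np.argsort(new_dist[0])[:5])   # re-ranked neighbours of image 0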
      


Pinto-Cáceres, S., Almeida, J., Neris, V.P.A., Baranauskas, M.C.C., Leite, N.J. and da S. Torres, R. Navigating through Video Stories using Clustering Sets 2011 International Journal of Multimedia Data Engineering and Management
Vol. 2(3), pp. 1-20 
article DOI URL 
Abstract: The fast evolution of technology has led to a growing demand for video data, increasing the amount of research into efficient systems to manage those materials. Making efficient use of video information requires that data be accessed in a user-friendly way. Ideally, one would like to perform video search using an intuitive tool. Most existing browsers for the interactive search of video sequences, however, employ a layout that is too rigid to arrange the results, restricting users to exploring the results through list- or grid-based layouts. This paper presents a novel approach for interactive search that displays the result set in a flexible manner. The proposed method is based on a simple and fast algorithm to build video stories and on an effective visual structure to arrange the storyboards, called Clustering Set. It is able to group together videos with similar content and to organize the result set in a well-defined tree. Results from a rigorous empirical comparison with a subjective evaluation show that such a strategy makes the navigation more coherent and engaging to users.

BibTeX:
@article{Pinto-Caceres2011IJMDEM,
  author = {Sheila Pinto-Cáceres and Jurandy Almeida and Vânia P. A. Neris and Maria C. C. Baranauskas and Neucimar Jerônimo Leite and Ricardo da S. Torres},
  title = {Navigating through Video Stories using Clustering Sets},
  journal = {International Journal of Multimedia Data Engineering and Management},
  year = {2011},
  volume = {2},
  number = {3},
  pages = {1--20},
  url = {http://dx.doi.org/10.4018/jmdem.2011070101},
  doi = {http://doi.org/10.4018/jmdem.2011070101}
}
      


dos Santos, J.A., Ferreira, C.D., da S. Torres, R., Gonçalves, M.A. and Lamparelli, R.A.C. A relevance feedback method based on genetic programming for classification of remote sensing images 2011 Information Sciences
Vol. 181(13), pp. 2671-2684 
article DOI URL 
Abstract: This paper presents an interactive technique for remote sensing image classification. In our proposal, users interact with the classification system by indicating regions of interest (and regions that are not of interest). This feedback is employed by a genetic programming approach to learn user preferences and combine image region descriptors that encode spectral and texture properties. Experiments demonstrate that the proposed method is effective for image classification tasks and outperforms the traditional MaxVer method.

BibTeX:
@article{Santos2011InfSci,
  author = {Jefersson A. dos Santos and Cristiano D. Ferreira and Ricardo da S. Torres and Marcos André Gonçalves and Rubens A. C. Lamparelli},
  title = {A relevance feedback method based on genetic programming for classification of remote sensing images},
  journal = {Information Sciences},
  year = {2011},
  volume = {181},
  number = {13},
  pages = {2671--2684},
  url = {http://www.sciencedirect.com/science/article/pii/S0020025510000575},
  doi = {http://doi.org/10.1016/j.ins.2010.02.003}
}
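
A simplified illustration of the relevance-feedback loop described above, with the genetic-programming combination of descriptors replaced by a plain weighted sum whose weights are re-estimated from the user's feedback; the descriptor names ("spectral", "texture") and the random data are assumptions made only for this example.

import numpy as np

def rank_regions(regions, weights, relevant_ids):
    """Rank regions by weighted distance to the mean descriptor of the
    regions marked as relevant (lower score = more likely relevant)."""
    proto = {name: np.mean([regions[r][name] for r in relevant_ids], axis=0)
             for name in weights}
    return sorted(regions,
                  key=lambda rid: sum(w * np.linalg.norm(regions[rid][n] - proto[n])
                                      for n, w in weights.items()))

def update_weights(regions, weights, relevant_ids, irrelevant_ids):
    """Re-weight descriptors by how well they separate the two feedback sets
    (a crude stand-in for the genetic-programming learning step)."""
    proto = {name: np.mean([regions[r][name] for r in relevant_ids], axis=0)
             for name in weights}
    new = {}
    for name in weights:
        d_rel = np.mean([np.linalg.norm(regions[r][name] - proto[name]) for r in relevant_ids])
        d_irr = np.mean([np.linalg.norm(regions[r][name] - proto[name]) for r in irrelevant_ids])
        new[name] = max(d_irr - d_rel, 1e-6)    # larger separation -> more useful descriptor
    total = sum(new.values())
    return {name: v / total for name, v in new.items()}

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    regions = {i: {"spectral": rng.normal(size=4), "texture": rng.normal(size=8)}
               for i in range(30)}
    weights = {"spectral": 0.5, "texture": 0.5}
    weights = update_weights(regions, weights, relevant_ids=[0, 1, 2], irrelevant_ids=[3, 4])
    print(rank_regions(regions, weights, relevant_ids=[0, 1, 2])[:5])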
      


Almeida, J., Valle, E., da S. Torres, R. and Leite, N.J. DAHC-tree: An Effective Index for Approximate Search in High-Dimensional Metric Spaces 2010 Journal of Information and Data Management (JIDM)
Vol. 1(3), pp. 375-390 
article URL 
Abstract: Similarity search in high-dimensional metric spaces is a key operation in many applications, such as multimedia databases, image retrieval, object recognition, and others. The high dimensionality of the data requires special index structures to facilitate the search. A problem regarding the creation of suitable index structures for high-dimensional data is the relationship between the geometry of the data and the organization of an index structure. In this paper, we study the performance of a new index structure, called the Divisive-Agglomerative Hierarchical Clustering tree (DAHC-tree), which reduces the effects imposed by this liability. DAHC-tree is constructed by dividing and grouping the data set into compact clusters. We follow a rigorous experimental design and analyze the trade-offs involved in building such an index structure. Additionally, we present extensive experiments comparing our method against state-of-the-art exact and approximate solutions. The conducted analysis and the reported comparative test results demonstrate that our technique significantly improves the performance of similarity queries.

BibTeX:
@article{Almeida2010JIDM,
  author = {Jurandy Almeida and Eduardo Valle and Ricardo da S. Torres and Neucimar Jerônimo Leite},
  title = {DAHC-tree: An Effective Index for Approximate Search in High-Dimensional Metric Spaces},
  journal = {Journal of Information and Data Management (JIDM)},
  year = {2010},
  volume = {1},
  number = {3},
  pages = {375--390},
  url = {http://seer.lcc.ufmg.br/index.php/jidm/article/view/82}
}
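
The sketch below is not the DAHC-tree itself; it is a bare-bones cluster-pruning index that conveys the underlying idea of grouping the data set into clusters and visiting only the clusters closest to the query during approximate search. The cluster count, the random choice of representatives, and the number of probed clusters are illustrative simplifications.

import numpy as np

class ClusterPruningIndex:
    def __init__(self, data, n_clusters=16, seed=0):
        rng = np.random.default_rng(seed)
        self.data = data
        # random points serve as cluster representatives (k-means would tighten them)
        self.centroids = data[rng.choice(len(data), n_clusters, replace=False)]
        assign = np.argmin(np.linalg.norm(data[:, None] - self.centroids[None, :], axis=-1), axis=1)
        self.clusters = [np.where(assign == c)[0] for c in range(n_clusters)]

    def query(self, q, k=5, probes=2):
        """Approximate k-NN: scan only the `probes` clusters nearest to q."""
        order = np.argsort(np.linalg.norm(self.centroids - q, axis=1))[:probes]
        candidates = np.concatenate([self.clusters[c] for c in order])
        dists = np.linalg.norm(self.data[candidates] - q, axis=1)
        return candidates[np.argsort(dists)[:k]]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    points = rng.normal(size=(1000, 32))
    index = ClusterPruningIndex(points)
    print(index.query(points[0]))   # indices of approximate nearest neighbours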
      


Andaló, F.A., Miranda, P.A.V., da S. Torres, R. and Falcão, A.X. Shape feature extraction and description based on tensor scale 2010 Pattern Recognition
Vol. 43(1), pp. 26-36 
article DOI URL 
Abstract: Tensor scale is a morphometric parameter that unifies the representation of local structure thickness, orientation, and anisotropy, and it can be used in several computer vision and image processing tasks. In this article, we exploit this concept for binary images and propose a shape salience detector and a shape descriptor, the Tensor Scale Descriptor with Influence Zones. The article also introduces a robust method to compute tensor scale using a graph-based approach, the Image Foresting Transform. Experimental results show the effectiveness of the proposed methods, when compared to other relevant methods, such as Beam Angle Statistics and the Contour Salience Descriptor, with regard to their use in content-based image retrieval tasks.

BibTeX:
@article{Andalo2010PR,
  author = {Fernanda A. Andaló and Paulo A. V. Miranda and Ricardo da S. Torres and Alexandre X. Falcão},
  title = {Shape feature extraction and description based on tensor scale},
  journal = {Pattern Recognition},
  year = {2010},
  volume = {43},
  number = {1},
  pages = {26--36},
  url = {http://dx.doi.org/10.1016/j.patcog.2009.06.012},
  doi = {http://doi.org/10.1016/j.patcog.2009.06.012}
}
      


Li, L.T. and da S. Torres, R. Revisitando os Desafios da Recuperação de Informação Geográfica na Web 2010 Cadernos CPQD Tecnologia
Vol. 6(1), pp. 7-20 
article URL 
Abstract: Geographic information is part of people's daily lives. There is a huge amount of information on the Web about, or related to, geographic entities, and people are interested in locating these entities on maps. Nevertheless, conventional Web search engines, which are keyword-driven mechanisms, do not support queries involving spatial relationships between geographic entities. This paper revisits the Geographic Information Retrieval (GIR) area and restates its research challenges and opportunities, based on a proposed architecture for carrying out Web queries involving spatial relationships and on an initial implementation of that architecture.

BibTeX:
@article{Li2010CPQD,
  author = {Lin Tzy Li and Ricardo da S. Torres},
  title = {Revisitando os Desafios da Recuperação de Informação Geográfica na Web},
  journal = {Cadernos CPQD Tecnologia},
  year = {2010},
  volume = {6},
  number = {1},
  pages = {7--20},
  url = {http://www.cpqd.com.br/cadernosdetecnologia/Vol6_N1_jan_jun_2010/artigo1.html}
}
      


Almeida, J., da S. Torres, R. and Goldenstein, S. SIFT Applied to CBIR 2009 Revista de Sistemas de Informação da Faculdade Salesiana Maria Auxiliadora (4), pp. 41-48  article URL 
Abstract: Content-Based Image Retrieval (CBIR) is a challenging task. Common approaches use only low-level features. Nevertheless, such CBIR solutions fail to capture some of the local features that represent the details and nuances of scenes. Many techniques in image processing and computer vision can capture these scene semantics. Among them, the Scale Invariant Feature Transform (SIFT) has been widely used in many applications. This approach relies on the choice of several parameters that directly impact its effectiveness when applied to image retrieval. In this paper, we discuss the results of several experiments designed to evaluate the application of SIFT to CBIR tasks.

BibTeX:
@article{Almeida2009FSMA,
  author = {Jurandy Almeida and Ricardo da S. Torres and Siome Goldenstein},
  title = {SIFT Applied to CBIR},
  journal = {Revista de Sistemas de Informação da Faculdade Salesiana Maria Auxiliadora},
  year = {2009},
  number = {4},
  pages = {41--48},
  url = {http://www.fsma.edu.br/si/edicao4/FSMA_SI_2009_2_Principal_2_en.html}
}
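
A minimal sketch of applying SIFT to CBIR with OpenCV (opencv-python), in the spirit of the experiments discussed above: database images are ranked by the number of SIFT matches that survive Lowe's ratio test against the query. The file names are placeholders and the ratio threshold is an illustrative choice.

import cv2

def good_match_count(query_des, candidate_des, ratio=0.75):
    """Count SIFT matches that pass Lowe's ratio test."""
    if query_des is None or candidate_des is None or len(candidate_des) < 2:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(query_des, candidate_des, k=2)
    return sum(1 for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance)

def sift_descriptors(path, sift):
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if image is None:
        return None
    _, descriptors = sift.detectAndCompute(image, None)
    return descriptors

def rank_database(query_path, database_paths):
    sift = cv2.SIFT_create()              # detector parameters affect retrieval quality
    q_des = sift_descriptors(query_path, sift)
    scores = [(good_match_count(q_des, sift_descriptors(p, sift)), p) for p in database_paths]
    return sorted(scores, reverse=True)   # images with the most surviving matches first

# Example usage with hypothetical file names:
# print(rank_database("query.jpg", ["img1.jpg", "img2.jpg"]))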
      


Montoya-Zegarra, J.A., Leite, N.J. and da S. Torres, R. Wavelet-based feature extraction for fingerprint image retrieval 2009 Journal of Computational and Applied Mathematics
Vol. 227(2), pp. 294-307 
article DOI URL 
Abstract: This paper presents a novel approach for personal identification based on a wavelet-based fingerprint retrieval system which encompasses three image retrieval tasks, namely, feature extraction, similarity measurement, and feature indexing. We propose the use of different types of wavelets for representing and describing, in a compact way, the textural information present in fingerprint images. For that purpose, the feature vectors used to characterize the fingerprints are obtained by computing the mean and the standard deviation of the decomposed images in the wavelet domain. These feature vectors are used to retrieve the most similar fingerprints given a query image, and their indexing is used to reduce the search space of candidate images. The wavelet types used in our study include Gabor wavelets, tree-structured wavelet decomposition using both orthogonal and bi-orthogonal filter banks, and steerable wavelets. To evaluate the retrieval accuracy of the proposed approach, a total of eight different data sets were considered. We also took into account different combinations of the above wavelets with six similarity measures. The results show that Gabor wavelets combined with the Square Chord similarity measure achieve the best retrieval effectiveness.

BibTeX:
@article{Montoya-Zegarra2009CAM,
  author = {Javier A. Montoya-Zegarra and Neucimar Jerônimo Leite and Ricardo da S. Torres},
  title = {Wavelet-based feature extraction for fingerprint image retrieval},
  journal = {Journal of Computational and Applied Mathematics},
  year = {2009},
  volume = {227},
  number = {2},
  pages = {294--307},
  url = {http://dx.doi.org/10.1016/j.cam.2008.03.017},
  doi = {http://doi.org/10.1016/j.cam.2008.03.017}
}
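
A sketch, using PyWavelets rather than the paper's exact pipeline, of building a texture descriptor from the mean and standard deviation of each wavelet subband and comparing descriptors with the Square Chord distance mentioned in the abstract. The wavelet family, decomposition level, and random test data are assumptions.

import numpy as np
import pywt

def wavelet_descriptor(image, wavelet="db2", level=3):
    """Mean and std of the absolute coefficients of every subband."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=level)
    feats = []
    for band in coeffs:
        for sub in (band if isinstance(band, tuple) else (band,)):
            sub = np.abs(sub)
            feats.extend([sub.mean(), sub.std()])
    return np.array(feats)

def square_chord(a, b):
    """Square Chord distance; assumes non-negative feature values."""
    return float(np.sum((np.sqrt(a) - np.sqrt(b)) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    query = rng.random((128, 128))
    database = [rng.random((128, 128)) for _ in range(5)]
    q = wavelet_descriptor(query)
    ranking = sorted(range(len(database)),
                     key=lambda i: square_chord(q, wavelet_descriptor(database[i])))
    print(ranking)   # indices of database images, most similar first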
      


da S. Torres, R., Falcão, A.X., Gonçalves, M.A., Papa, J.P., Zhang, B., Fan, W. and Fox, E.A. A genetic programming framework for content-based image retrieval 2009 Pattern Recognition
Vol. 42(2), pp. 283-292 
article DOI URL 

BibTeX:
@article{Torres2009PR,
  author = {Ricardo da S. Torres and Alexandre X. Falcão and Marcos André Gonçalves  and João P. Papa and Baoping Zhang and Weiguo Fan and Edward A. Fox},
  title = {A genetic programming framework for content-based image retrieval},
  journal = {Pattern Recognition},
  year = {2009},
  volume = {42},
  number = {2},
  pages = {283--292},
  url = {http://dx.doi.org/10.1016/j.patcog.2008.04.010},
  doi = {http://doi.org/10.1016/j.patcog.2008.04.010}
}
      


Montoya-Zegarra, J.A., Papa, J.P., Leite, N.J., da S. Torres, R. and Falcão, A.X. Learning How to Extract Rotation-Invariant and Scale-Invariant Features from Texture Images 2008 Eurasip Journal on Advances in Signal Processing
Vol. 2008:691924 
article DOI URL 
Abstract: Learning how to extract texture features from uncontrolled environments characterized by distorted images remains an open task. By using a new rotation-invariant and scale-invariant image descriptor based on steerable pyramid decomposition, and a novel multiclass recognition method based on the optimum-path forest, a new texture recognition system is proposed. By combining the discriminating power of our image descriptor and classifier, our system uses small feature vectors to characterize texture images without compromising overall classification rates. State-of-the-art recognition results are presented on the Brodatz data set. High classification rates demonstrate the superiority of the proposed system.

BibTeX:
@article{Montoya-Zegarra2008Eurasip,
  author = {Javier A. Montoya-Zegarra and João Paulo Papa and Neucimar Jerônimo Leite and Ricardo da S. Torres and Alexandre X. Falcão},
  title = {Learning How to Extract Rotation-Invariant and Scale-Invariant Features from Texture Images},
  journal = {Eurasip Journal on Advances in Signal Processing},
  year = {2008},
  volume = {2008:691924},
  url = {http://dx.doi.org/10.1155/2008/691924},
  doi = {http://doi.org/10.1155/2008/691924}
}
      


Penatti, O.B. and da S. Torres, R. Descritor de Relacionamento Espacial baseado em Partições 2007 Revista Eletrônica de Iniciação Científica
Vol. VII(3) 
article URL 
Abstract: Spatial relationships can be fundamental for image recognition and retrieval, being useful for geographic and medical applications, for instance. This paper presents a new spatial relationship descriptor for content-based image retrieval. The descriptor is based on partitioning the space of analysis into quadrants and on counting the number of object points in each partition. Experiments demonstrated that the proposed descriptor is more effective than the other descriptors studied in this work.

BibTeX:
@article{Penatti2007REIC,
  author = {Otavio Bizetto Penatti and Ricardo da S. Torres},
  title = {Descritor de Relacionamento Espacial baseado em Partições},
  journal = {Revista Eletrônica de Iniciação Científica},
  year = {2007},
  volume = {VII},
  number = {3},
  url = {http://portal.sbc.org.br/index.php?language=1&subject=101&content=article&option=pdf&aid=583}
}
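
A minimal sketch of the quadrant-counting idea described above: the plane around a reference point is split into four quadrants and the descriptor is the normalized count of object points falling in each one. The reference point and the normalization are illustrative choices, not necessarily those of the paper.

import numpy as np

def quadrant_descriptor(points, reference):
    """points: (n, 2) array of object pixel coordinates; reference: (2,) point."""
    dx = points[:, 0] - reference[0]
    dy = points[:, 1] - reference[1]
    counts = np.array([
        np.sum((dx >= 0) & (dy >= 0)),   # quadrant I
        np.sum((dx <  0) & (dy >= 0)),   # quadrant II
        np.sum((dx <  0) & (dy <  0)),   # quadrant III
        np.sum((dx >= 0) & (dy <  0)),   # quadrant IV
    ], dtype=float)
    return counts / counts.sum()         # normalize so objects of different sizes compare

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    obj = rng.integers(0, 100, size=(500, 2))
    print(quadrant_descriptor(obj, reference=np.array([50, 50])))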
      


da S. Torres, R. and Falcão, A.X. Contour salience descriptors for effective image retrieval and analysis 2007 Image and Vision Computing
Vol. 25(1), pp. 3-13 
article DOI URL 
Abstract: This work exploits the resemblance between content-based image retrieval and image analysis with respect to the design of image descriptors and their effectiveness. In this context, two shape descriptors are proposed: contour saliences and segment saliences. Contour saliences revisits its original definition, where the location of concave points was a problem, and provides a robust approach to incorporate concave saliences. Segment saliences introduces salience values for contour segments, making it possible to use an optimal matching algorithm as distance function. The proposed descriptors are compared with convex contour saliences, curvature scale space, and beam angle statistics using a fish database with 11,000 images organized in 1100 distinct classes. The results indicate segment saliences as the most effective descriptor for this particular application and confirm the improvement of the contour salience descriptor in comparison with convex contour saliences.

BibTeX:
@article{Torres2007IVC,
  author = {Ricardo da S. Torres and Alexandre X. Falcão},
  title = {Contour salience descriptors for effective image retrieval and analysis},
  journal = {Image and Vision Computing},
  year = {2007},
  volume = {25},
  number = {1},
  pages = {3--13},
  url = {http://dx.doi.org/10.1016/j.imavis.2005.12.010},
  doi = {http://doi.org/10.1016/j.imavis.2005.12.010}
}
      


da S. Torres, R., Medeiros, C.B., Gonçalves, M.A. and Fox, E.A. A digital library framework for biodiversity information systems 2006 International Journal on Digital Libraries
Vol. 6(1), pp. 3-17 
article DOI URL 
Abstract: Biodiversity Information Systems (BISs) involve all kinds of heterogeneous data, which include ecological and geographical features. However, available information systems offer very limited support for managing these kinds of data in an integrated fashion. Furthermore, such systems do not fully support the management of image content (e.g., photos of landscapes or living organisms), a requirement of many BIS end-users. In order to meet their needs, these users (e.g., biologists, environmental experts) often have to alternate between separate biodiversity and image information systems to combine information extracted from them. This hampers the addition of new data sources, as well as cooperation among scientists. The approach proposed in this paper to address these issues takes advantage of digital library innovations to integrate networked collections of heterogeneous data. It focuses on creating the basis for a next-generation BIS, combining new techniques of content-based image retrieval with database query processing mechanisms. The paper shows the use of this component-based architecture to support the creation of two tailored BISs dealing with fish specimen identification using search techniques. Experimental results suggest that this new approach improves the effectiveness of the fish identification process, when compared to the traditional key-based method.

BibTeX:
@article{Torres2006IJDL,
  author = {Ricardo da S. Torres and Claudia Bauzer Medeiros and Marcos André Gonçalves  and Edward A. Fox},
  title = {A digital library framework for biodiversity information systems},
  journal = {International Journal on Digital Libraries},
  year = {2006},
  volume = {6},
  number = {1},
  pages = {3--17},
  url = {http://dx.doi.org/10.1007/s00799-005-0124-1},
  doi = {http://doi.org/10.1007/s00799-005-0124-1}
}
      


da S. Torres, R. and Falcão, A.X. Content-Based Image Retrieval: Theory and Applications 2006 Revista de Informática Teórica e Aplicada (RITA)
Vol. 13(2), pp. 161-185 
article  
Abstract: Advances in data storage and image acquisition technologies have enabled the creation of large image datasets. In this scenario, it is necessary to develop appropriate information systems to efficiently manage these collections. The most common approaches use so-called Content-Based Image Retrieval (CBIR) systems. Basically, these systems try to retrieve images similar to a user-defined specification or pattern (e.g., shape sketch, example image). Their goal is to support image retrieval based on content properties (e.g., shape, color, texture), usually encoded into feature vectors. One of the main advantages of the CBIR approach is the possibility of an automatic retrieval process, instead of the traditional keyword-based approach, which usually requires very laborious and time-consuming prior annotation of database images. CBIR technology has been used in several applications, such as fingerprint identification, biodiversity information systems, digital libraries, crime prevention, medicine, and historical research. This paper aims to introduce the problems and challenges involved in the creation of CBIR systems, to describe the existing solutions and applications, and to present the state of the art of research in this area.

BibTeX:
@article{Torres2006RITA,
  author = {Ricardo da S. Torres and Alexandre X. Falcão},
  title = {Content-Based Image Retrieval: Theory and Applications},
  journal = {Revista de Informática Teórica e Aplicada (RITA)},
  year = {2006},
  volume = {13},
  number = {2},
  pages = {161--185}
}
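
A toy end-to-end CBIR pipeline in the spirit of the survey above: a color-histogram feature vector, a Euclidean distance, and a ranking of the collection by similarity to the query. The descriptor and distance are deliberately the simplest possible choices, not those recommended by the paper.

import numpy as np

def color_histogram(image, bins=8):
    """Per-channel intensity histogram, concatenated and normalized."""
    feats = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(image.shape[-1])]
    hist = np.concatenate(feats).astype(float)
    return hist / hist.sum()

def search(query, collection, k=3):
    q = color_histogram(query)
    dists = [np.linalg.norm(q - color_histogram(img)) for img in collection]
    return np.argsort(dists)[:k]          # indices of the k most similar images

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    images = [rng.integers(0, 256, size=(64, 64, 3)) for _ in range(10)]
    print(search(images[0], images))      # the query itself should rank first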
      


Medeiros, C.B., de Jesús Pérez Alcázar, J., Digiampietri, L.A., Pastorello Jr., G.Z., Santanchè, A., da S. Torres, R., Madeira, E.R.M. and Bacarin, E. WOODSS and the Web: annotating and reusing scientific workflows 2005 SIGMOD Record
Vol. 34(3), pp. 18-23 
article DOI URL 
Abstract: This paper discusses ongoing research on scientific workflows at the Institute of Computing, University of Campinas (IC - UNICAMP), Brazil. Our projects with bio-scientists have led us to develop a scientific workflow infrastructure named WOODSS. This framework has two main objectives: to help scientists specify and annotate their models and experiments, and to document collaborative efforts in scientific activities. In both contexts, workflows are annotated and stored in a database. This "annotated scientific workflow" database is treated as a repository of (sometimes incomplete) approaches to solving scientific problems. Thus, it serves two purposes: it allows comparison of distinct solutions to a problem, and their designs; and it provides reusable and executable building blocks for constructing new scientific workflows to meet specific needs. Annotations, moreover, allow further insight into methodology, success rates, underlying hypotheses, and other issues in experimental activities. The many research challenges we currently face include: extending this framework to the Web, following Semantic Web standards; providing means of discovering workflow components on the Web for reuse; and taking advantage of planning in Artificial Intelligence to support composition mechanisms. This paper describes our efforts in these directions, tested in two domains: agro-environmental planning and bioinformatics.

BibTeX:
@article{Medeiros2005Sigmod,
  author = {Claudia Bauzer Medeiros and José de Jesús Pérez Alcázar and Luciano A. Digiampietri and Gilberto Zonta Pastorello Jr. and André Santanchè and Ricardo da S. Torres and Edmundo Roberto Mauro Madeira and Evandro Bacarin},
  title = {WOODSS and the Web: annotating and reusing scientific workflows},
  journal = {SIGMOD Record},
  year = {2005},
  volume = {34},
  number = {3},
  pages = {18--23},
  url = {http://dx.doi.org/10.1145/1084805.1084810},
  doi = {http://doi.org/10.1145/1084805.1084810}
}
      


da S. Torres, R., Falcão, A.X. and da Fontoura Costa, L. A graph-based approach for multiscale shape analysis 2004 Pattern Recognition
Vol. 37(6), pp. 1163-1174 
article DOI URL 
Abstract: This paper presents two shape descriptors, multiscale fractal dimension and contour saliences, using a graph-based approach, the image foresting transform. It introduces a robust approach to locate contour saliences from the relation between contour and skeleton. The contour salience descriptor consists of a vector, with salience locations and values along the contour, and a matching algorithm. We compare both descriptors with fractal dimension, Fourier descriptors, moment invariants, Curvature Scale Space, and Beam Angle Statistics with regard to their invariance to characteristics of objects that belong to the same class (compactability) and to their ability to separate objects of distinct classes (separability).

BibTeX:
@article{Torres2004PR,
  author = {Ricardo da S. Torres and Alexandre X. Falcão and Luciano da Fontoura Costa},
  title = {A graph-based approach for multiscale shape analysis},
  journal = {Pattern Recognition},
  year = {2004},
  volume = {37},
  number = {6},
  pages = {1163--1174},
  url = {http://dx.doi.org/10.1016/j.patcog.2003.10.007},
  doi = {http://doi.org/10.1016/j.patcog.2003.10.007}
}
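
The sketch below is not the paper's image-foresting-transform formulation; it is a plain box-counting estimate of the fractal dimension of a binary contour image, included only to make the shape-complexity idea concrete. The box sizes and the synthetic test shape are illustrative.

import numpy as np

def box_count(binary, box):
    """Number of box x box cells that contain at least one contour pixel."""
    h, w = binary.shape
    hb, wb = h // box * box, w // box * box          # crop to a multiple of the box size
    blocks = binary[:hb, :wb].reshape(hb // box, box, wb // box, box)
    return np.count_nonzero(blocks.any(axis=(1, 3)))

def fractal_dimension(binary, boxes=(2, 4, 8, 16, 32)):
    counts = [box_count(binary, b) for b in boxes]
    # slope of log(count) vs log(1/box) gives the box-counting dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(boxes)), np.log(counts), 1)
    return slope

if __name__ == "__main__":
    size = 256
    y, x = np.ogrid[:size, :size]
    circle = np.abs(np.hypot(x - size / 2, y - size / 2) - 80) < 1.5   # thin ring contour
    print(fractal_dimension(circle))   # close to 1 for a smooth contour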