Technical Reports Published in 2006

  • IC-06-24 pdf bib
    Proceedings of the 2nd ongoing PhD thesis workshop at IC-UNICAMP.
    Claudia M. Bauzer Medeiros, Amanda Meincke Melo, Carla Geovana do Nascimento Macario, Fernando Castor Filho, Neumar Costa Malheiros, and Vânia Paula de Almeida Neris (Eds.).
    December 2006. Partly in English, partly in Portuguese, 97 pages.

    Summary: This technical report contains summaries of 32 papers presented at the II Doctoral Thesis Workshop of the Institute of Computing at UNICAMP. The workshop, held from November 29 to December 1, 2006, allowed the Institute's doctoral students to present the main aspects of their research. Each chapter corresponds to a presentation, the text being limited to 3 pages.

    Participation was voluntary and the academic profile of the participants varied, ranging from students recently admitted to the program to those who already had their defense scheduled in December 2006.

    The publication of these abstracts as a technical report aims to promote the dissemination of the doctoral work in progress at the IC. It also provides a succinct record of the current state of several of these research projects.

  • IC-06-23 pdf bib
    A shortest-cycle based heuristic for the multiple genome rearrangement problem.
    Cleber Mira, Qian Peng, João Meidanis, and Pavel Pevzner.
    November 2006. In English, 14 pages.

    Abstract: The multiple genome rearrangement problem consists of finding a phylogenetic tree that is the most "similar" to the evolutionary scenario based on rearrangement events involving several species. Bourque and Pevzner devised an algorithm for the multiple genome rearrangement problem that is based on rearrangement events (signed reversals, translocations, fusions, and fissions). However, the heuristics behind their algorithm relies on the time-consuming procedure of searching for good events. We propose a simplified version of Bourque and Pevzner's heuristics, based on experimental results showing that proper signed reversals acting on short cycles are frequently good events. We present results for biological and randomly generated input data sets and argue that the new heuristic is faster than the good-event search, without compromising the quality of the solutions for input data sets involving few genomes.
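
    For illustration, a signed reversal inverts a segment of a signed permutation and flips the sign of every element in it. A minimal Python sketch (the list-of-signed-integers representation is chosen here for illustration, not taken from the report):

      def signed_reversal(genome, i, j):
          # Return a copy of `genome` (a list of signed integers) with the
          # segment genome[i:j+1] reversed and every sign in it flipped.
          segment = [-g for g in reversed(genome[i:j + 1])]
          return genome[:i] + segment + genome[j + 1:]

      # Reversing the segment at positions 1..2 of (+1, -3, +2, +4):
      print(signed_reversal([1, -3, 2, 4], 1, 2))   # -> [1, -2, 3, 4]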

  • IC-06-22 pdf bib
    Implementing modular error handling with aspects: Best and worst practices.
    Fernando Castor Filho, Alessandro Garcia, and Cecília Mary F. Rubira.
    November 2006. In English, 18 pages.

    Abstract: One of the fundamental motivations for employing exception handling in the development of robust applications is to lexically separate error handling code so that it and the normal code can be independently modified. This scheme also makes it possible for the normal code to be reused across various applications with different error handling strategies. Since error handling is an inherently crosscutting concern, it is usually assumed that it can be better modularized by the use of aspect-oriented programming (AOP) techniques. However, recent studies argue that the ad hoc use of AOP can be detrimental to the quality of a system. When applying AOP to modularize exception handling, developers need to follow clear and simple practices in order to obtain a well-structured system design. If developers do not receive proper guidance, software systems whose exceptional behavior is modularized through aspects can exhibit problems, such as unexpected flows of control and swallowed exceptions, that stem from poorly designed/implemented error handling code. In this paper, we address this problem by providing some design/implementation guidance regarding the use of AOP to modularize exception handling. We propose a classification for error handling code based on the factors that we found to have the most influence on its aspectization. Moreover, we present a scenario catalog comprising combinations of these factors and analyze how these scenarios positively or negatively affect the task of aspectizing exception handling.
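
    The report itself targets AspectJ; as a language-neutral illustration of the underlying idea (error handling lexically separated from the normal code and swappable per application), here is a hypothetical Python sketch using a decorator, loosely analogous to an around advice. All names below are invented for the example:

      import functools
      import logging

      def with_error_handling(handler):
          # Attach an error-handling strategy to a function without
          # touching its normal code.
          def decorator(func):
              @functools.wraps(func)
              def wrapper(*args, **kwargs):
                  try:
                      return func(*args, **kwargs)
                  except Exception as exc:
                      return handler(exc)
              return wrapper
          return decorator

      def log_and_return_none(exc):
          # One interchangeable strategy; note that a blanket handler like
          # this is exactly the "swallowed exception" risk the abstract
          # warns about.
          logging.warning("handled: %s", exc)
          return None

      @with_error_handling(log_and_return_none)
      def parse_config(text):
          # Normal code only; the handling strategy lives elsewhere.
          return dict(line.split("=", 1) for line in text.splitlines())

      print(parse_config("host=localhost\nport=8080"))  # normal path
      print(parse_config("malformed"))                  # handled; returns None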

  • IC-06-21 pdf bib
    A process for testing systems with component reuse.
    Ivan Rodolfo Duran Cruz Perez, Camila Rocha, Regina Moraes, Josiane Aparecida Cardoso, Ruth Fabiana Soliani, and Eliane Martins.
    October 2006. In Portuguese, 27 pages.

    Summary: Due to market pressures, with more and more software being developed on ever shorter schedules, component-based development (CBD) has been increasingly adopted by the software industry. However, to maintain the quality of the software under development, a testing process for CBD is necessary, both to guarantee the quality of the components being developed and to validate the components to be reused, in the latter case checking whether the component satisfies the specification required by the system architecture. This report proposes a generic testing process adapted to CBD processes.

  • IC-06-20 pdf bib
    Technologies for sharing the authorial work of the public school teacher.
    Mariângela Pisoni Zanaga, Osmar Mantovani, Maria Helena Pereira Dias, and Hans Liesenberg.
    October 2006. In Portuguese, 16 pages.

    Summary: A study with teachers working in the public school system showed that the production and, to a lesser extent, the sharing of fine-grained materials of educational interest are part of the work practice of such professionals. The benefits of sharing are recognized as a way of socializing knowledge useful for generating new knowledge. The lack of adequate sharing mechanisms is felt as a limiting factor on expanding the possibilities of reuse, by a wider audience, of materials produced by teachers. A system designed to promote such reuse and sharing is presented here.

    Abstract: A survey of teachers working at public schools showed that the production and, to a lesser degree, the sharing of small-grained materials of educational interest are part of the working practice of those professionals. The benefits of sharing are recognized as a way of socializing useful knowledge for the generation of new knowledge. The lack of proper sharing mechanisms is perceived as a limiting factor on increasing the possibilities of reuse, by a larger audience, of materials produced by teachers. A system designed to promote this kind of reuse and sharing is presented here.

  • IC-06-19 pdf bib
    A UDDI extension for Web service-based business process management systems.
    Diego Zuquim Guimarães Garcia and Maria Beatriz Felgar de Toledo.
    October 2006. In English, 18 pages.

    Abstract: Universal Description, Discovery and Integration (UDDI) is a specification for XML registries that enables functionality-based publication and discovery of Web services. This paper presents a UDDI extension to support Quality of Service (QoS) management in the context of Business Process Management Systems (BPMSs). To allow service selection according to both functional and non-functional requirements, a BPMS architecture should include brokers to match consumer policies with policies stored in UDDI repositories, and monitors to update QoS attributes. The main contributions of this approach are the use of the Web Services Policy Framework specification to describe policies for Web services and a UDDI extension to include QoS attributes.
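
    As a hypothetical sketch of the broker's matching step (the policy format and attribute names below are invented for illustration and are not part of the WS-Policy or UDDI specifications):

      def matches(consumer_policy, provider_policy):
          # True if every QoS attribute required by the consumer is met by
          # the provider (all attributes here are "higher is better").
          return all(provider_policy.get(attr, 0) >= required
                     for attr, required in consumer_policy.items())

      # QoS attributes that monitors would keep updated in the registry:
      providers = {
          "serviceA": {"availability": 0.999, "throughput": 120},
          "serviceB": {"availability": 0.990, "throughput": 300},
      }
      consumer = {"availability": 0.995, "throughput": 100}
      print([n for n, p in providers.items() if matches(consumer, p)])  # ['serviceA']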

  • IC-06-18 pdf bib
    A fault-tolerant Web service architecture for business process management systems.
    Diego Zuquim Guimarães Garcia and Maria Beatriz Felgar de Toledo.
    October 2006. In Portuguese, 16 pages.

    Summary: Business Process Management Systems (BPMSs) support the execution of business processes. Web services have been indicated as the most suitable technology for BPMSs. Consequently, incorporating fault tolerance into Web service architectures is essential for the continuity of processes. The purpose of this report is to propose a fault-tolerant Web service architecture for use with BPMSs. It employs mediation and monitoring layers, providing transparency, interoperability and privacy. The main contributions of this report are: an approach to providing fault tolerance for Web services in the context of BPMSs, an architecture based on the proposed approach, and its partial implementation.

  • IC-06-17 pdf bib
    Verification of three authentication protocols using BAN, SVO and Strand Spaces.
    Fabio Rogério Piva, Augusto Jun Devegili, and Ricardo Dahab.
    September 2006. In English, 41 pages.

    Abstract: We compare three formal verification techniques for cryptographic protocols: the BAN and SVO logics, and strand spaces, identifying some weaknesses of these logics and advantages of strand spaces. We also present new proofs of the modified Yahalom, modified Kerberos, and Woo-Lam Pi protocols.
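
    For readers unfamiliar with these logics: BAN proofs chain inference rules such as the shared-key message-meaning rule, which states that if $P$ believes $K$ is a key shared with $Q$ and $P$ sees the message $\{X\}_K$, then $P$ believes $Q$ once said $X$ - in BAN notation, from $P \mid\equiv Q \stackrel{K}{\leftrightarrow} P$ and $P \triangleleft \{X\}_K$ conclude $P \mid\equiv Q \mid\sim X$. (Shown for orientation only; the report's proofs are not reproduced here.)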

  • IC-06-16 pdf bib
    Gems: A general data structure for $d$-dimensional triangulations.
    Arnaldo J. Montagner and Jorge Stolfi.
    September 2006. In English, 11 pages.

    Abstract: We describe in detail a novel data structure for $d$-dimensional triangulations. In an arbitrary $d$-dimensional triangulation, there are $d!$ ways in which a specific facet of a simplex can be glued to a specific facet of another simplex. Therefore, in data structures for general $d$-dimensional triangulations, this information must be encoded using $\lceil \log_2(d!) \rceil$ bits for each adjacent pair of simplices. We study a special class of triangulations, called colored triangulations, in which there is only one way two simplices can share a specific facet. The gem data structure, described here, makes use of this fact to greatly simplify the repertoire of elementary topological operators.
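
    A worked instance of the bit count: for $d = 3$, a facet can be glued in $3! = 6$ ways, so $\lceil \log_2 6 \rceil = 3$ bits per adjacent pair are needed in the general case, whereas in a colored triangulation this information disappears entirely. A minimal Python sketch of the resulting adjacency record (field names are hypothetical, not from the report):

      class GemNode:
          # One d-simplex in a colored d-dimensional triangulation.
          # adj[c] is the unique neighbor across the facet opposite the
          # vertex of color c; no gluing permutation is stored.
          def __init__(self, dim):
              self.adj = [None] * (dim + 1)   # one colored adjacency per facet

      def glue(a, b, color):
          # In a colored triangulation there is exactly one way to identify
          # two facets of the same color, so a single pointer pair suffices.
          a.adj[color] = b
          b.adj[color] = a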

  • IC-06-15 pdf bib
    Comparison of methods for determining SNPs with reliability measures.
    Christian Baudet, Miguel Galves, and Zanoni Dias.
    September 2006. In Portuguese, 13 pages.

    Summary: In this work we study methods for the identification of single-base polymorphisms, commonly known by the acronym SNP (Single Nucleotide Polymorphism). Identifying this type of polymorphism is an important task because SNPs can influence the stability or even the structure of the proteins encoded by genes, and can therefore be related to several diseases. A good identification method must be able to point out the polymorphic positions and provide a reliability measure for each detection. With this in mind, we studied two methods: polybayes and MSASNP. The first is a program that performs Bayesian analysis of the sequences to identify SNPs. The second adopts a simple strategy based on basic concepts of probability. Our tests showed that polybayes is the more suitable of the two, being able to identify SNPs and to provide more reliable values for the probability that a detected SNP is correct.
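
    As a toy illustration of the "basic concepts of probability" approach (not the actual MSASNP or polybayes method), one can ask how likely it is that the disagreeing bases in an alignment column are all sequencing errors; a small value suggests a real SNP:

      from math import comb

      def p_all_errors(depth, minor_count, error_rate):
          # Probability of observing at least `minor_count` disagreeing
          # bases among `depth` aligned reads if the site is NOT
          # polymorphic, under an i.i.d. per-base error model.
          return sum(comb(depth, k) * error_rate**k * (1 - error_rate)**(depth - k)
                     for k in range(minor_count, depth + 1))

      print(p_all_errors(depth=20, minor_count=5, error_rate=0.01))  # about 1e-6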

  • IC-06-14 pdf bib
    An algorithm for identifying multiple correlations of polymorphisms.
    André A. M. Almeida, Miguel Galves, and Zanoni Dias.
    September 2006. In Portuguese, 17 pages.

    Summary: A single-base polymorphism (SNP) is a mutation that affects only one position of an organism's genome. Polymorphism correlation, or linkage disequilibrium (LD), is a non-random association of SNPs, which can be used as a marker. A multiple polymorphism correlation groups three or more polymorphisms, allowing any two of them to be used as markers. In this work, we study two definitions of LD (complete LD and useful LD) and present a definition of multiple LD based on graph theory, as well as a greedy heuristic, based on vertex degrees, for its identification. Promising results obtained in tests with data from the sugarcane genome and the human genome are presented.
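
    Under this graph view, a group of multiply correlated polymorphisms is a set of vertices that are pairwise adjacent in the LD graph. A hypothetical sketch of a degree-guided greedy heuristic in Python (an illustration of the general idea, not the report's exact algorithm):

      def greedy_ld_group(graph):
          # graph: dict mapping each SNP to the set of SNPs it is in LD
          # with (symmetric). Greedily grows a pairwise-correlated group,
          # trying vertices in non-increasing order of degree.
          group = []
          for v in sorted(graph, key=lambda u: len(graph[u]), reverse=True):
              if all(v in graph[u] for u in group):
                  group.append(v)
          return group

      g = {"s1": {"s2", "s3", "s4"}, "s2": {"s1", "s3"},
           "s3": {"s1", "s2"}, "s4": {"s1"}}
      print(greedy_ld_group(g))   # -> ['s1', 's2', 's3']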

  • IC-06-13 pdf bib
    A Markov model for providing quality of service for failure detectors under message loss bursts.
    Irineu Sotoma and Edmundo Roberto Mauro Madeira.
    September 2006. In English, 32 pages.

    Abstract: The quality of service (QoS) of failure detectors determines how fast a failure detector q detects the crash of a process p, and how accurately q reports that crash. In wide area networks and wireless networks, process crashes, high delay variations and message packet loss bursts are common. This paper therefore proposes a failure detector configurator that takes into account the probability distribution of the loss burst lengths of message packets, by using a Markov model. The simulation results show that the parameters provided by the proposed configurator lead the failure detector to satisfy the QoS requirements in networks subject to message loss bursts, whereas previous works fail to satisfy them in some cases.
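
    Message loss bursts of this kind are commonly modeled with a two-state Markov (Gilbert) chain; a minimal simulation sketch (a generic model for orientation, with parameters invented here - the report's actual model is not reproduced):

      import random

      def simulate_losses(n, p_gb, p_bg, seed=0):
          # Two-state Markov loss model: packets arrive in the Good state
          # and are lost in the Bad state; p_gb and p_bg are the
          # Good->Bad and Bad->Good transition probabilities, so burst
          # lengths are geometric with mean 1 / p_bg.
          rng = random.Random(seed)
          bad, losses = False, []
          for _ in range(n):
              bad = (rng.random() < p_gb) if not bad else (rng.random() >= p_bg)
              losses.append(bad)
          return losses

      trace = simulate_losses(100000, p_gb=0.01, p_bg=0.30)
      print(sum(trace) / len(trace))  # long-run loss rate ~ p_gb/(p_gb+p_bg) = 0.032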

  • IC-06-12 pdf bib
    Wavelet-based feature extraction for fingerprint image retrieval.
    Javier A. Montoya Zegarra, Neucimar J. Leite, and Ricardo da S. Torres.
    September 2006. In English, 23 pages.

    Abstract: This paper presents a novel approach to fingerprint retrieval for personal identification by joining three image retrieval tasks, namely, feature extraction, similarity measurement, and feature indexing, into a wavelet-based fingerprint retrieval system.

    We propose the use of different types of Wavelets for representing and describing the textural information present in fingerprint images. For that purpose, the feature vectors used to characterize the fingerprints are obtained by computing the mean and the standard deviation of the decomposed images in the Wavelet domain. These feature vectors are used to retrieve the most similar fingerprints given a query image, while their indexing is used to reduce the search space of image candidates. The different types of Wavelets used in our study include: Gabor Wavelets (GWs), Tree-Structured Wavelet Decomposition using both Orthogonal Filter Banks (TOWT) and Bi-orthogonal Filter Banks (TBOWT), as well as Steerable Wavelets.

    To evaluate the retrieval accuracy of the proposed approach, eight different data sets were used. The experiments also evaluated different combinations of Wavelets with six similarity measures. The results show that the combination of Gabor Wavelets with the Square Chord similarity measure achieves the best retrieval effectiveness.
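
    A hedged sketch of the feature-extraction step using a generic discrete wavelet decomposition (via the PyWavelets package; the Gabor, tree-structured and steerable filter banks evaluated in the report are not shown):

      import numpy as np
      import pywt

      def wavelet_features(image, wavelet="db2", levels=3):
          # Decompose a grayscale fingerprint image and describe each
          # subband by its mean (of magnitudes) and standard deviation.
          coeffs = pywt.wavedec2(image, wavelet, level=levels)
          subbands = [coeffs[0]] + [b for triple in coeffs[1:] for b in triple]
          return np.array([stat for s in subbands
                           for stat in (np.abs(s).mean(), s.std())])

      img = np.random.rand(128, 128)       # stand-in for a fingerprint image
      print(wavelet_features(img).shape)   # 2 * (3*levels + 1) features = (20,)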

  • IC-06-11 pdf bib
    General convex hull using the gem data structure.
    Arnaldo J. Montagner and Jorge Stolfi.
    May 2006. In English, 14 pages.

    Abstract: We describe in detail a general algorithm for constructing the convex hull of a finite set of points in Euclidean space of arbitrary dimension $n$. The algorithm handles degenerate situations, such as non-simplicial faces and point sets contained in a lower-dimensional subspace. The topology of the hull is kept in a graph-encoded map (gem) data structure, a novel representation for $n$-dimensional triangulations. The gem representation, which was introduced as a mathematical device by S. Lins in 1982, extends the cell-tuple (or generalized map) representation proposed by Brisson and Lienhardt to maps that are not barycentric subdivisions, to manifolds with borders, and to non-manifold (but triangulable) topological spaces.
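
    The report's gem-based, $n$-dimensional, degeneracy-handling algorithm is not reproduced here; for orientation, a minimal 2D gift-wrapping sketch shows what a convex hull computation produces:

      def cross(o, a, b):
          # z-component of (a - o) x (b - o); > 0 means a left turn.
          return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

      def gift_wrap(points):
          # 2D convex hull by gift wrapping (no degeneracy handling).
          hull = []
          p = min(points)   # lexicographically smallest point is on the hull
          while True:
              hull.append(p)
              q = next(r for r in points if r != p)
              for r in points:
                  if cross(p, q, r) < 0:   # r lies clockwise of p->q: wrap tighter
                      q = r
              p = q
              if p == hull[0]:
                  return hull

      print(gift_wrap([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))
      # -> [(0, 0), (2, 0), (2, 2), (0, 2)]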

  • IC-06-10 pdf bib
    Reasoning about exception flow at the architectural level.
    Fernando Castor Filho, Patrick Henrique da S. Brito, and Cecília Mary F. Rubira.
    May 2006. In English, 21 pages.

    Abstract: An important challenge faced by the developers of fault-tolerant systems is to build fault tolerance mechanisms that are reliable. To achieve the desired levels of reliability, mechanisms for detecting and handling errors should be designed from the early phases of software development, preferably using a rigorous or formal methodology. In recent years, many authors have advocated the idea that exception handling-related issues should be addressed at the architectural level, as a complement to implementation-level exception handling. However, few works in the literature have addressed the problem of describing how exceptions flow amongst architectural elements. A solution to this problem would enable the early detection of mismatches between architectural elements due to exceptions. Moreover, it would make it possible to validate whether the architecture satisfies some properties of interest regarding exception flow before the system is actually built. We believe that a model for describing the flow of exceptions between architectural elements should be: (i) precise; and (ii) analyzable, preferably automatically. In this paper, we present a rigorous model for reasoning about exception flow in software architectures that satisfies these requirements.

  • IC-06-09 pdf bib
    Specification of exception flow in software architectures.
    Fernando Castor Filho, Patrick Henrique da S. Brito, and Cecília Mary F. Rubira.
    May 2006. In English, 45 pages.

    Abstract: In recent years, various approaches combining software architectures and exception handling have been proposed to increase the dependability of software systems. This conforms with the idea, supported by some authors, that addressing exception handling-related issues from the early phases of software development may improve the overall dependability of a system. By systematically designing the mechanism responsible for rendering a system reliable, developers increase the probability of the system being able to avoid catastrophic failures at runtime. This paper addresses the problem of describing how exceptions flow amongst architectural elements. In critical systems where rollback-based mechanisms might not be available, such as systems that interact with mechanical devices, exception handling is an important means of recovering from errors in a forward-based manner. We present a framework, named Aereal, that supports the description and analysis of exceptions that flow between architectural components. Since different architectural styles have different policies for exception flow, Aereal makes it possible to specify rules on how exceptions flow in a given style and to check for violations of these rules. We use a financial application and a control system as case studies to validate the proposed approach.

  • IC-06-08 pdf bib
    Exceptions and aspects: The devil is in the details.
    Fernando Castor Filho, Nelio Cacho, Eduardo Figueiredo, Raquel Maranhão, Alessandro Garcia, and Cecília Mary F. Rubira.
    May 2006. In English, 21 pages.

    Abstract: One of the fundamental motivations for employing exception handling in the development of robust applications is to lexically separate error handling code so that it can be independently reused and modified. It is usually assumed that the implementation of exception handling can be better modularized by the use of aspect-oriented programming (AOP). However, the trade-offs involved in using AOP with this goal are not yet well understood. This paper presents an in-depth investigation of the adequacy of the AspectJ language for modularizing exception handling code. The study consisted of refactoring existing applications so that the code responsible for implementing heterogeneous error handling strategies was moved to separate aspects. We have performed quantitative assessments of four systems - three object-oriented and one aspect-oriented - based on four fundamental quality attributes, namely separation of concerns, coupling, cohesion, and conciseness. Our investigation also included a multi-perspective analysis of the refactored systems, including (i) the reusability of the aspectized error handling code, (ii) the beneficial and harmful aspectization scenarios, and (iii) the scalability of AOP to aspectize exception handling in the presence of other crosscutting concerns.

  • IC-06-07 pdf bib
    Progressive randomization for steganalysis.
    Anderson Rocha and Siome Goldenstein.
    April 2006. In English, 16 pages.

    Abstract: In this paper, we describe a new methodology to detect the presence of hidden digital content in the Least Significant Bits (LSBs) of images. We introduce the Progressive Randomization (PR) technique, which captures statistical artifacts inserted during the hiding process. Our technique is a progressive application of LSB-modifying transformations that receives an image as input and returns $n$ images differing from the initial image only in their LSBs. Each step of the progressive randomization approach represents a possible content-hiding scenario of increasing size and increasing LSB entropy. We validate our method with 20,000 real, non-synthetic images. Using only statistical descriptors of LSB occurrences, our method already performs better than comparable techniques in the literature.
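
    A hedged sketch of a single LSB-randomization step (a toy version of the kind of transformation applied with increasing intensity; function and parameter names are invented):

      import numpy as np

      def randomize_lsb(pixels, fraction, seed=0):
          # Replace the least significant bit of a random `fraction` of
          # the pixels with random bits, simulating an embedding of that
          # relative size.
          rng = np.random.default_rng(seed)
          out = pixels.copy()
          mask = rng.random(out.shape) < fraction
          bits = rng.integers(0, 2, size=out.shape, dtype=out.dtype)
          out[mask] = (out[mask] & ~np.uint8(1)) | bits[mask]
          return out

      img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
      # Progressive application: each stage mimics a larger hidden message.
      stages = [randomize_lsb(img, f) for f in (0.05, 0.25, 0.50, 0.75, 1.0)]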

  • IC-06-06 pdf bib
    The 2D-VLIW architecture.
    Ricardo Santos, Rodolfo Azevedo, and Guido Araujo.
    March 2006. In English, 13 pages.

    Abstract: This technical report presents the main aspects of the 2D-VLIW architecture. This architecture is being developed in the Computer Systems Laboratory at the Institute of Computing of the State University of Campinas. The main purpose of the 2D-VLIW architecture is to provide resources to execute a broad class of applications through a combination of architectural arrangement and compiler assistance. Some preliminary experiments have shown that this combination can achieve appealing performance gains over mainstream architectures like VLIW and EPIC.

  • IC-06-05 pdf bib
    Scalable model-based policy refinement and validation for network security systems.
    João Porto de Albuquerque.
    March 2006. In English, 58 pages.

    Abstract: This report builds upon previous work on Model-based Management, and particularly on the Diagram of Abstract Subsystems (DAS) approach, further elaborating on the correctness and performance of the automated policy refinement process. The modeling technique and the automated policy refinement process are first presented to illustrate the practical use of the DAS approach. Subsequently, the graphical model is formalized using an algebraic notation, which is then used to define validation conditions for a layered model, i.e., conditions with which a resulting model must comply if the lower-level policy sets have been correctly generated. The scalability of the refinement algorithms and the additional effort needed to validate model instances are also analyzed and discussed.

  • IC-06-04 pdf bib
    A framework based on semantic Web services and AI planning for the management of bioinformatics scientific workflows.
    Luciano Antonio Digiampietri, José de Jésus Pérez Alcázar, Claudia Bauzer Medeiros, and João Carlos Setubal.
    February 2006. In English, 22 pages.

    Abstract: Bioinformatics activities are growing all over the world, with a proliferation of data and tools. This brings new challenges, such as how to understand and organize these resources, how to exchange and reuse successful experimental procedures, tools and data, and how to provide interoperability among data and tools across different sites and for distinct user profiles. This paper describes an effort in these directions. It is based on combining research on databases, AI, scientific workflows and the Semantic Web to design, reuse, annotate and document bioinformatics experiments or parts thereof. The resulting framework allows the integration of heterogeneous data and tools, and the design of experiments as scientific workflows, which are stored in databases. Moreover, it takes advantage of the notion of planning in AI to support automatic or interactive composition of tasks. These ideas are being implemented in a prototype and validated on real bioinformatics data.

  • IC-06-03 pdf bib
    Fractional traffic modeling and policing using envelope process.
    Flávio M. Pereira and Nelson L. S. da Fonseca.
    February 2006. In English, 78 pages.

    Abstract: In this paper, an envelope process called the Fractional Bounded Arrival Process (FBAP) is proposed for self-similar traffic representation. A queueing analysis for FBAP traffic is developed, and upper bounds for the backlog and for the delay are obtained. The policing of FBAP traffic is also investigated. The results are then extended to the multifractal traffic case, for which an envelope process called the Multifractal Bounded Arrival Process (MFBAP) is proposed. Comments on the queueing analysis and on the policing of MFBAP traffic are also given.
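
    For orientation: envelope processes of this kind upper-bound the cumulative arrivals $A(t)$. For self-similar traffic with mean rate $\bar{a}$, deviation coefficient $\sigma$ and Hurst parameter $H$, a typical fBm-style fractal envelope has the form $\hat{A}(t) = \bar{a}\,t + k\,\sigma\,t^{H}$, where the constant $k$ is tied to the tolerated violation probability. (Generic notation for illustration; not necessarily the exact definitions used in the report.)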

  • IC-06-02 pdf bib
    On the representation of a PI-Graph.
    Sheila Morais de Almeida, Célia Picinin de Mello, and Anamaria Gomide.
    February 2006. In English, 9 pages.

    Abstract: Consider two parallel lines (denoted by $r_1$ and $r_2$). A graph is a PI graph (Point-Interval graph) if it is an intersection graph of a family $F$ of triangles between $r_1$ and $r_2$ such that each triangle has an interval with two endpoints on $r_1$ and a vertex (a point) on $r_2$. The family $F$ is the PI representation of $G$. PI graphs are an extension of interval and permutation graphs, and they form a subclass of trapezoid graphs. In this paper, we characterize PI graphs in terms of their trapezoid representation. We also show how to construct a family of graphs that are trapezoid but not PI, starting from the trapezoid representation of a known graph in this class.

  • IC-06-01 pdf bib
    Sorting by block-interchanges and signed reversals.
    Cleber V. G. Mira and João Meidanis.
    January 2006. In English, 11 pages.

    Summary: A block-interchange is a rearrangement event that exchanges two contiguous, not necessarily consecutive, regions of a genome, maintaining the original orientation of each block. Signed reversals are events that invert a region of the genome and change the orientation of its blocks. Both events are important for comparative genome analysis. For this reason, we propose a new measure that consists of finding a minimum sequence of block-interchanges and signed reversals transforming one genome into another. For each event we assign a weight related to its norm, and we argue that this parameter is adequate to indicate the power of each event. We present a formula for the rearrangement measure and a polynomial-time algorithm to find a sequence of block-interchanges and signed reversals that transforms one unichromosomal genome into another.

    Abstract: A block-interchange is a rearrangement event that exchanges two, not necessarily consecutive, contiguous regions in a genome, maintaining the original orientation. Signed reversals are events that invert and change the orientation of a region in a genome. Both events are important for the comparative analysis of genomes. For this reason, we propose a new measure that consists in finding a minimum sequence of block-interchanges and signed reversals that transforms a genome into another. For each event, we assign a weight related to its norm and we argue the adequacy of this parameter to indicate the power of each event.

    We present a formula for the rearrangement measure and a polynomial time sorting algorithm for finding a sequence of block-interchanges and signed reversals that transforms a unichromosomal genome into another.
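
    For illustration, a block-interchange swaps two disjoint blocks of a genome while leaving everything else (including orientations) unchanged; a minimal Python sketch (representation chosen here for illustration):

      def block_interchange(genome, i, j, k, l):
          # Swap the blocks genome[i:j] and genome[k:l], with i < j <= k < l,
          # keeping every element's orientation unchanged.
          assert i < j <= k < l
          return (genome[:i] + genome[k:l] + genome[j:k]
                  + genome[i:j] + genome[l:])

      # Swapping blocks (2) and (4, 5) of (1, 2, 3, 4, 5):
      print(block_interchange([1, 2, 3, 4, 5], 1, 2, 3, 5))  # -> [1, 4, 5, 3, 2]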

