Technical Reports Published in 2006

  • IC-06-24 pdf bib
    Anais do 2o workshop de teses de doutorado em andamento do IC-UNICAMP.
    Claudia M. Bauzer Medeiros, Amanda Meincke Melo, Carla Geovana do Nascimento Macario, Fernando Castor Filho, Neumar Costa Malheiros, and Vânia Paula de Almeida Neris (Eds.).
    December 2006. Partly in English, partly in Portuguese, 97 pages.

    Abstract: This technical report contains abstracts of 32 works presented at the II PhD Thesis Workshop of the Institute of Computing, UNICAMP. The workshop, held from November 29 to December 1, 2006, allowed doctoral students of the Institute to present the main aspects of their research. Each chapter corresponds to one presentation, with the text limited to 3 pages.

    Participation was voluntary, and the academic profile of the participants varied widely, ranging from students recently admitted to the program to those whose defenses were already scheduled for December 2006.

    Publishing the abstracts as a technical report aims at a wider dissemination of the doctoral work in progress at IC. It also provides a concise record of the current state of several of these research projects.

  • IC-06-23 pdf bib
    A shortest-cycle based heuristics for the multiple genome rearrangement problem.
    Cleber Mira, Qian Peng, João Meidanis, and Pavel Pevzner.
    November 2006. In English, 14 pages.

    Abstract: The multiple genome rearrangement problem consists in finding a phylogenetic tree that is most "similar" to the evolutionary scenario based on rearrangement events involving several species. Bourque and Pevzner devised an algorithm for the multiple genome rearrangement problem that is based on rearrangement events (signed reversals, translocations, fusions, and fissions). However, the heuristic behind their algorithm relies on the time-consuming procedure of searching for good events. We propose a simplified version of Bourque and Pevzner's heuristic, based on experimental results showing that proper signed reversals acting on short cycles are frequently good events. We present results for biological and randomly generated data sets and argue that the new heuristic is faster than the good-event search, without compromising the quality of the solutions for input data sets involving few genomes.
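
    To make the basic event concrete, the following sketch (hypothetical Python, not the authors' implementation) shows how a signed reversal acts on a signed permutation; the heuristic summarized above repeatedly chooses such events, preferring proper reversals that act on short cycles.

        # Illustrative sketch: a signed reversal on a signed permutation, the basic
        # rearrangement event manipulated by this kind of heuristic.
        def signed_reversal(perm, i, j):
            """Reverse the segment perm[i..j] (inclusive) and flip the sign of each element."""
            return perm[:i] + [-x for x in reversed(perm[i:j + 1])] + perm[j + 1:]

        genome = [-2, -1, 3]
        genome = signed_reversal(genome, 0, 1)
        print(genome)  # [1, 2, 3] -- a single reversal sorts this toy genome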

  • IC-06-22 pdf bib
    Implementing modular error handling with aspects: Best and worst practices.
    Fernando Castor Filho, Alessandro Garcia, and Cecília Mary F. Rubira.
    November 2006. In English, 18 pages.

    Abstract: One of the fundamental motivations for employing exception handling in the development of robust applications is to lexically separate error handling code so that the latter and the normal code can be independently modified. Also, this scheme makes it possible for the normal code to be reused across various applications, using different error handling strategies. Since error handling is an inherently crosscutting concern, it is usually assumed that it can be better modularized by the use of aspect-oriented programming (AOP) techniques. However, recent studies argue that the ad hoc use of AOP can be detrimental to the quality of a system. When applying AOP to modularize exception handling, developers need to follow clear and simple practices in order to obtain a well-structured system design. If developers do not receive proper guidance, software systems whose exceptional behavior is modularized through aspects can exhibit problems, such as unexpected flows of control and swallowed exceptions, that stem from poorly designed/implemented error handling code. In this paper, we address this problem by providing design and implementation guidance on the use of AOP to modularize exception handling. We propose a classification for error handling code based on the factors that we found to have the greatest influence on its aspectization. Moreover, we present a scenario catalog comprising combinations of these factors and analyze how these scenarios positively or negatively affect the task of aspectizing exception handling.
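
    As a language-neutral illustration of what lexically separating error handling means, the sketch below uses a Python decorator as a rough analogue of around advice wrapping normal code with a handler; the report itself targets AspectJ, and all names here are hypothetical.

        import functools
        import logging

        def handle_errors(handler):
            """Attach an error-handling strategy to a function without touching its body."""
            def decorator(func):
                @functools.wraps(func)
                def wrapper(*args, **kwargs):
                    try:
                        return func(*args, **kwargs)
                    except Exception as exc:   # over-broad handlers are how exceptions get "swallowed"
                        return handler(exc)
                return wrapper
            return decorator

        def log_and_default(exc):
            logging.warning("operation failed: %s", exc)
            return None

        @handle_errors(log_and_default)
        def parse_price(text):
            # "Normal" code only; no error-handling logic here.
            return float(text)

        print(parse_price("19.90"))  # 19.9
        print(parse_price("oops"))   # logs a warning and returns None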

  • IC-06-21 pdf bib
    Um processo para testes de sistemas com reuso de componentes.
    Ivan Rodolfo Duran Cruz Perez, Camila Rocha, Regina Moraes, Josiane Aparecida Cardoso, Ruth Fabiana Soliani, and Eliane Martins.
    October 2006. In Portuguese, 27 pages.

    Abstract: Driven by market demands to develop software within ever shorter deadlines, component-based development (CBD) has been increasingly adopted by the software industry. However, to maintain a good level of quality in the software under development, a testing process for CBD is necessary, both to guarantee the quality of the components being developed and to validate the components that will be reused, in the latter case verifying whether the component satisfies the specification required by the system architecture. With this in mind, we propose a generic testing process adapted to CBD processes.

  • IC-06-20 pdf bib
    Tecnologias para o compartilhamento do trabalho autoral do professor da rede pública.
    Mariângela Pisoni Zanaga, Osmar Mantovani, Maria Helena Pereira Dias, and Hans Liesenberg.
    October 2006. In Portuguese, 16 pages.

    Abstract: A survey of teachers working in public schools showed that the production and, to a lesser degree, the sharing of small educational materials are part of the working practice of these professionals. The benefits of sharing are recognized as a way of socializing knowledge that is useful for generating new knowledge. The lack of proper sharing mechanisms is perceived as a factor that limits the reuse, by a wider audience, of materials produced by teachers. A system designed to promote this kind of reuse and sharing is presented here.

  • IC-06-19 pdf bib
    A UDDI extension for Web service-based business process management systems.
    Diego Zuquim Guimarães Garcia and Maria Beatriz Felgar de Toledo.
    October 2006. In English, 18 pages.

    Abstract: Universal Description, Discovery and Integration (UDDI) is a specification for XML-based registries that enable functionality-based publication and discovery of Web services. This paper presents a UDDI extension to support Quality of Service (QoS) management in the context of Business Process Management Systems (BPMSs). To allow service selection according to functional and non-functional requirements, a BPMS architecture should include brokers that match consumer policies with policies stored in UDDI repositories and monitors that update QoS attributes. The main contributions of this approach are the use of the Web Services Policy Framework specification to describe policies for Web services and a UDDI extension to include QoS attributes.
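
    A hypothetical sketch of the matching step such a broker performs is given below (Python; the attribute names and policy format are assumptions for illustration, not part of the UDDI or WS-Policy specifications).

        # Hypothetical sketch of QoS policy matching in a broker: keep the services
        # whose published QoS attributes satisfy every constraint of the consumer policy.
        published = [
            {"service": "PaymentA", "availability": 0.999, "response_time_ms": 120},
            {"service": "PaymentB", "availability": 0.990, "response_time_ms": 60},
        ]
        consumer_policy = {"availability": (">=", 0.995), "response_time_ms": ("<=", 200)}

        def satisfies(attributes, policy):
            ops = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b}
            return all(ops[op](attributes[name], bound) for name, (op, bound) in policy.items())

        matches = [p["service"] for p in published if satisfies(p, consumer_policy)]
        print(matches)  # ['PaymentA']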

  • IC-06-18 pdf bib
    Uma arquitetura de serviços Web tolerante a falhas para sistemas de gerência de processos de negócio.
    Diego Zuquim Guimarães Garcia and Maria Beatriz Felgar de Toledo.
    October 2006. In Portuguese, 16 pages.

    Abstract: Business Process Management Systems (BPMSs) support the execution of business processes. Web services have been indicated as the most suitable technology for BPMSs. Consequently, incorporating fault tolerance into the Web service architecture is essential for process continuity. The goal of this paper is to propose a fault-tolerant Web service architecture for use with BPMSs. It employs mediation and monitoring layers, providing transparency, interoperability, and privacy. The main contributions of this paper are: an approach to providing fault tolerance for Web services in the context of BPMSs, an architecture based on the proposed approach, and its partial implementation.

  • IC-06-17 pdf bib
    Verification of three authentication protocols using BAN, SVO and Strand Spaces.
    Fabio Rogério Piva, Augusto Jun Devegili, and Ricardo Dahab.
    September 2006. In English, 41 pages.

    Abstract: We compare three formal verification techniques for cryptographic protocols: BAN and SVO logics, and strand spaces, identifying some weaknesses of these logics and advantages of strand spaces. We also present new proofs of modified Yahalom, modified Kerberos and Woo-Lam Pi.
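
    As a reminder of the style of reasoning these logics employ, the classic BAN message-meaning rule for shared keys is reproduced below in its standard textbook form (the report should be consulted for the exact rule sets of BAN and SVO that it uses).

        % Notation: "P |= X" means P believes X, "P <| X" means P sees X,
        % "Q |~ X" means Q once said X, and "P <-K-> Q" means K is a key shared by P and Q.
        \[
          \frac{P \mid\!\equiv\; Q \stackrel{K}{\leftrightarrow} P
                \qquad
                P \,\triangleleft\, \{X\}_{K}}
               {P \mid\!\equiv\; Q \mid\!\sim X}
        \]
        % If P believes K is shared with Q and P sees X encrypted under K,
        % then P believes that Q once said X.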

  • IC-06-16 pdf bib
    Gems: A general data structure for $d$-dimensional triangulations.
    Arnaldo J. Montagner and Jorge Stolfi.
    September 2006. In English, 11 pages.

    Abstract: We describe in detail a novel data structure for $d$-dimensional triangulations. In an arbitrary $d$-dimensional triangulation, there are $d!$ ways in which a specific facet of a simplex can be glued to a specific facet of another simplex. Therefore, in data structures for general $d$-dimensional triangulations, this information must be encoded using $\lceil \log_2(d!) \rceil$ bits for each adjacent pair of simplices. We study a special class of triangulations, called colored triangulations, in which there is only one way two simplices can share a specific facet. The gem data structure, described here, exploits this fact to greatly simplify the repertoire of elementary topological operators.
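
    The saving can be illustrated with a small sketch: in a colored triangulation every facet of a $d$-simplex carries a distinct color and gluings are color-preserving, so one neighbor pointer per color suffices and no extra $\lceil \log_2(d!) \rceil$ bits are needed. The code below is a hypothetical Python rendering of this idea, not the data structure as specified in the report.

        # Hypothetical sketch of a gem-style node: a d-simplex with one neighbor per
        # facet color.  Because gluings are color-preserving, no extra log2(d!) bits
        # are needed to record how two glued facets are identified.
        class GemNode:
            def __init__(self, dim):
                self.dim = dim
                self.adj = [None] * (dim + 1)  # adj[c] = simplex glued along the facet of color c

        def glue(a, b, color):
            """Glue simplices a and b along their facets of the given color."""
            a.adj[color] = b
            b.adj[color] = a

        t1, t2 = GemNode(3), GemNode(3)        # two tetrahedra (d = 3)
        glue(t1, t2, 2)                        # glued along their color-2 facets
        assert t1.adj[2] is t2 and t2.adj[2] is t1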

  • IC-06-15 pdf bib
    Comparação de métodos para determinação de SNPs com medidas de confiabilidade.
    Christian Baudet, Miguel Galves, and Zanoni Dias.
    September 2006. In Portuguese, 13 pages.

    Abstract: In this work we study methods for the identification of single nucleotide polymorphisms (SNPs). Identifying this type of polymorphism is an important task because SNPs can affect the stability, or even the structure, of the proteins encoded by genes and may therefore be related to several diseases. A good identification method must be able to point out the polymorphic positions and provide a confidence measure for each detection. With this in mind, we studied two methods: polybayes and MSASNP. The first is a program that performs a Bayesian analysis of the sequences to identify SNPs. The second adopts a simple strategy based on basic probability concepts. Our tests showed that polybayes is the more suitable method, since it is able to identify the SNPs and provide more reliable values for the probability that a detected SNP is correct.
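
    For readers unfamiliar with the task, the toy sketch below (Python; it is neither polybayes nor MSASNP) flags candidate SNP columns in a multiple alignment when at least two different bases are sufficiently frequent; real methods, as discussed above, additionally attach a confidence measure to each call.

        # Toy column-based candidate-SNP caller (illustration only): a column of a
        # multiple alignment is flagged when at least two different bases each reach
        # a minimum frequency.
        from collections import Counter

        def candidate_snps(columns, min_freq=0.2, min_depth=4):
            snps = []
            for pos, column in enumerate(columns):
                bases = [b for b in column if b in "ACGT"]
                if len(bases) < min_depth:
                    continue
                counts = Counter(bases)
                frequent = [b for b, c in counts.items() if c / len(bases) >= min_freq]
                if len(frequent) >= 2:
                    snps.append((pos, dict(counts)))
            return snps

        columns = ["AAAA", "ACAC", "GGGG", "TTTC"]
        print(candidate_snps(columns))
        # [(1, {'A': 2, 'C': 2}), (3, {'T': 3, 'C': 1})]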

  • IC-06-14 pdf bib
    Um algoritmo para identificação de correlações múltiplas de polimorfismos.
    André A. M. Almeida, Miguel Galves, and Zanoni Dias.
    September 2006. In Portuguese, 17 pages.

    Abstract: A Single Nucleotide Polymorphism (SNP) is a mutation that affects a single position in the genome of an organism. A correlation of polymorphisms, or linkage disequilibrium (LD), is a non-random association of SNPs that can be used as a marker. A multiple correlation of polymorphisms groups three or more polymorphisms, so that any two of them can be used as markers. In this work, we study two definitions of LD (complete LD and useful LD) and present a graph-theoretic definition of multiple LDs, together with a greedy heuristic, based on vertex degrees, for their identification. We report promising results obtained in tests with data from the sugarcane and human genomes.
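
    The graph formulation lends itself to a short sketch. Below is a hypothetical greedy routine (Python; the exact criterion used in the report may differ) that treats SNPs as vertices and pairwise LDs as edges and grows a group of mutually correlated SNPs starting from the highest-degree vertex.

        # Hypothetical greedy sketch (not necessarily the exact heuristic of the report):
        # SNPs are vertices, pairwise LDs are edges, and a group of mutually correlated
        # SNPs (a clique) is grown starting from the highest-degree vertex.
        def greedy_multiple_ld(edges):
            adj = {}
            for u, v in edges:
                adj.setdefault(u, set()).add(v)
                adj.setdefault(v, set()).add(u)
            by_degree = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
            group = {by_degree[0]}
            for v in by_degree[1:]:
                if group <= adj[v]:              # v is in LD with every current member
                    group.add(v)
            return group if len(group) >= 3 else set()   # a multiple LD needs 3+ SNPs

        pairwise_lds = [("s1", "s2"), ("s1", "s3"), ("s2", "s3"), ("s3", "s4")]
        print(greedy_multiple_ld(pairwise_lds))  # {'s1', 's2', 's3'} (set order may vary)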

  • IC-06-13 pdf bib
    A Markov model for providing quality of service for failure detectors under message loss bursts.
    Irineu Sotoma and Edmundo Roberto Mauro Madeira.
    September 2006. In English, 32 pages.

    Abstract: The quality of service (QoS) of failure detectors determines how fast a failure detector q detects the crash of a process p, and how accurately q reports that crash. In wide area networks and wireless networks, process crashes, high delay variations, and bursts of message losses are common. This paper therefore proposes a failure detector configurator that takes into account the probability distribution of the loss burst lengths of message packets, by using a Markov model. The simulation results show that the parameters provided by the proposed configurator allow the failure detector to satisfy its QoS requirements in networks subject to message loss bursts, whereas previous approaches fail to satisfy them in some cases.
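
    Bursty packet loss of this kind is commonly described by a two-state Markov (Gilbert) model; the sketch below (a generic illustration, not the report's configurator) simulates such a channel and extracts the resulting loss-burst lengths.

        # Generic two-state Markov (Gilbert) loss model, commonly used to describe
        # bursty packet loss; this only illustrates the kind of model the configurator
        # is built around.
        import random

        def simulate_losses(n, p_good_to_bad, p_bad_to_good, seed=42):
            """Return a list of booleans, one per packet: True means the packet was lost."""
            random.seed(seed)
            lost, state = [], "good"
            for _ in range(n):
                if state == "good":
                    state = "bad" if random.random() < p_good_to_bad else "good"
                else:
                    state = "good" if random.random() < p_bad_to_good else "bad"
                lost.append(state == "bad")
            return lost

        def burst_lengths(lost):
            """Lengths of the maximal runs of consecutive losses."""
            bursts, run = [], 0
            for packet_lost in lost:
                if packet_lost:
                    run += 1
                elif run:
                    bursts.append(run)
                    run = 0
            if run:
                bursts.append(run)
            return bursts

        losses = simulate_losses(10_000, p_good_to_bad=0.05, p_bad_to_good=0.5)
        print(sum(losses) / len(losses))   # long-run loss rate, ~0.05 / (0.05 + 0.5) ~= 0.09
        print(burst_lengths(losses)[:10])  # lengths of the first loss bursts (mean ~= 1 / 0.5 = 2)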

  • IC-06-12 pdf bib
    Wavelet-based feature extraction for fingerprint image retrieval.
    Javier A. Montoya Zegarra, Neucimar J. Leite, and Ricardo da S. Torres.
    September 2006. In English, 23 pages.

    Abstract: This paper presents a novel approach to fingerprint retrieval for personal identification by joining three image retrieval tasks, namely, feature extraction, similarity measurement, and feature indexing, into a wavelet-based fingerprint retrieval system.

    We propose the use of different types of Wavelets for representing and describing the textural information present in fingerprint images. For that purpose, the feature vectors used to characterize the fingerprints are obtained by computing the mean and the standard deviation of the decomposed images in the Wavelet domain. These feature vectors are used to retrieve the most similar fingerprints given a query image, while their indexing is used to reduce the search space of candidate images. The different types of Wavelets used in our study include: Gabor Wavelets (GWs), Tree-Structured Wavelet Decomposition using both Orthogonal Filter Banks (TOWT) and Bi-orthogonal Filter Banks (TBOWT), as well as Steerable Wavelets.

    To evaluate the retrieval accuracy of the proposed approach, a total of eight different data sets were used. The experiments also evaluated different combinations of Wavelets with six similarity measures. The results show that Gabor Wavelets combined with the Square Chord similarity measure achieve the best retrieval effectiveness.
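
    The feature-extraction step described above essentially reduces to computing the mean and standard deviation of each wavelet subband. The sketch below is a hypothetical illustration using a plain orthogonal decomposition from PyWavelets, not the Gabor, tree-structured, or steerable wavelets evaluated in the report.

        # Hypothetical feature extractor: mean and standard deviation of every subband
        # of a 3-level orthogonal wavelet decomposition (PyWavelets), giving a fixed-size
        # texture descriptor for a fingerprint image.
        import numpy as np
        import pywt

        def wavelet_features(image, wavelet="db4", level=3):
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            subbands = [coeffs[0]]                 # approximation subband
            for detail in coeffs[1:]:              # (horizontal, vertical, diagonal) per level
                subbands.extend(detail)
            return np.array([stat for s in subbands for stat in (np.mean(s), np.std(s))])

        image = np.random.rand(256, 256)           # stand-in for a fingerprint image
        print(wavelet_features(image).shape)       # (20,) -> 10 subbands x 2 statistics

    Retrieval would then rank the database images by a similarity measure computed between such vectors, with indexing used to prune the candidates, as described above.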

  • IC-06-11 pdf bib
    General convex hull using the gem data structure.
    Arnaldo J. Montagner and Jorge Stolfi.
    May 2006. In English, 14 pages.

    Abstract: We describe in detail a general algorithm for constructing the convex hull of a finite set of points in Euclidean space of arbitrary dimension $n$. The algorithm handles degenerate situations, such as non-simplicial faces and point sets contained in a lower-dimensional subspace. The topology of the hull is kept in a graph encoded map (gem) data structure, a novel representation for $n$-dimensional triangulations. The gem representation, which was introduced as a mathematical device by S. Lins in 1982, extends the cell-tuple (or generalized map) representation proposed by Brisson and Lienhardt to maps that are not barycentric subdivisions, to manifolds with borders, and to non-manifold (but triangulable) topological spaces.

  • IC-06-10 pdf bib
    Reasoning about exception flow at the architectural level.
    Fernando Castor Filho, Patrick Henrique da S. Brito, and Cecília Mary F. Rubira.
    May 2006. In English, 21 pages.

    Abstract: An important challenge faced by the developers of fault-tolerant systems is to build fault tolerance mechanisms that are reliable. To achieve the desired levels of reliability, mechanisms for detecting and handling errors should be designed from the early phases of software development, preferably using a rigorous or formal methodology. In recent years, many authors have been advocating the idea that exception handling-related issues should be addressed at the architectural level, as a complement to implementation-level exception handling. However, few works in the literature have addressed the problem of describing how exceptions flow amongst architectural elements. A solution to this problem would enable the early detection of mismatches between architectural elements due to exceptions. Moreover, it would make it possible to validate whether the architecture satisfies some properties of interest regarding exception flow before the system is actually built. We believe that a model for describing the flow of exceptions between architectural elements should be: (i) precise; and (ii) analyzable, preferably automatically. In this paper, we present a rigorous model for reasoning about exception flow in software architectures that satisfies these requirements.

  • IC-06-09 pdf bib
    Specification of exception flow in software architectures.
    Fernando Castor Filho, Patrick Henrique da S. Brito, and Cecília Mary F. Rubira.
    May 2006. In English, 45 pages.

    Abstract: In recent years, various approaches combining software architectures and exception handling have been proposed for increasing the dependability of software systems. This conforms to the idea, supported by some authors, that addressing exception handling-related issues from the early phases of software development may improve the overall dependability of a system. By systematically designing the mechanism responsible for rendering a system reliable, developers increase the probability of the system being able to avoid catastrophic failures at runtime. This paper addresses the problem of describing how exceptions flow amongst architectural elements. In critical systems where rollback-based mechanisms might not be available, such as systems that interact with mechanical devices, exception handling is an important means for recovering from errors in a forward-based manner. We present a framework, named Aereal, that supports the description and analysis of exceptions that flow between architectural components. Since different architectural styles have different policies for exception flow, Aereal makes it possible to specify rules on how exceptions flow in a given style and to check for violations of these rules. We use a financial application and a control system as case studies to validate the proposed approach.

  • IC-06-08 pdf bib
    Exceptions and aspects: The devil is in the details.
    Fernando Castor Filho, Nelio Cacho, Eduardo Figueiredo, Raquel Maranhão, Alessandro Garcia, and Cecília Mary F. Rubira.
    May 2006. In English, 21 pages.

    Abstract: One of the fundamental motivations for employing exception handling in the development of robust applications is to lexically separate error handling code so that it can be independently reused and modified. It is usually assumed that the implementation of exception handling can be better modularized by the use of aspect-oriented programming (AOP). However, the trade-offs involved in using AOP with this goal are not yet well-understood. This paper presents an in-depth investigation of the adequacy of the AspectJ language for modularizing exception handling code. The study consisted of refactoring existing applications so that the code responsible for implementing heterogeneous error handling strategies was moved to separate aspects. We have performed quantitative assessments of four systems - three object-oriented and one aspect-oriented - based on four fundamental quality attributes, namely separation of concerns, coupling, cohesion, and conciseness. Our investigation also included a multi-perspective analysis of the refactored systems, including (i) the reusability of the aspectized error handling code, (ii) the beneficial and harmful aspectization scenarios, and (iii) the scalability of AOP to aspectize exception handling in the presence of other crosscutting concerns.

  • IC-06-07 pdf bib
    Progressive randomization for steganalysis.
    Anderson Rocha and Siome Goldenstein.
    April 2006. In English, 16 pages.

    Abstract: In this paper, we describe a new methodology to detect the presence of hidden digital content in the Least Significant Bits (LSB) of images. We introduce the Progressive Randomization (PR) technique, which captures statistical artifacts inserted during the hiding process. Our technique is a progressive application of LSB-modifying transformations that receives an image as input and returns $n$ images that differ from the initial image only in their LSBs. Each step of the progressive randomization approach represents a possible content-hiding scenario with increasing size and increasing LSB entropy. We validate our method with 20,000 real, non-synthetic images. Using only statistical descriptors of LSB occurrences, our method already performs better than comparable techniques in the literature.
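
    The core transformation, re-randomizing the LSBs of a growing fraction of pixels, can be sketched as follows (a hypothetical numpy illustration of the idea only; the authors' implementation and statistical descriptors are not reproduced here).

        # Hypothetical sketch of the progressive-randomization idea: produce variants of
        # an image whose LSBs are re-randomized for increasing fractions of the pixels,
        # mimicking LSB hiding with increasing payload sizes.
        import numpy as np

        def randomize_lsb(image, fraction, rng):
            """Replace the LSB of a random `fraction` of the pixels with random bits."""
            out = image.copy()
            idx = rng.choice(out.size, size=int(fraction * out.size), replace=False)
            flat = out.reshape(-1)
            flat[idx] = (flat[idx] & 0xFE) | rng.integers(0, 2, size=idx.size, dtype=flat.dtype)
            return out

        rng = np.random.default_rng(0)
        image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in grayscale image
        variants = [randomize_lsb(image, f, rng) for f in (0.05, 0.25, 0.50, 0.75, 1.00)]
        print([int((v != image).sum()) for v in variants])            # changed pixels grow with the fraction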

  • IC-06-06 pdf bib
    The 2D-VLIW architecture.
    Ricardo Santos, Rodolfo Azevedo, and Guido Araujo.
    March 2006. In English, 13 pages.

    Abstract: This technical report presents the main aspects of the 2D-VLIW architecture. This architecture is being developed in the Computer Systems Laboratory at the Institute of Computing of the State University of Campinas. The main purpose of the 2D-VLIW architecture is to provide resources to carry out a broad class of applications through a compromise between architectural arrangement and compiler assistance. Some preliminary experiments have shown that this combination can achieve appealing performance compared with mainstream architectures such as VLIW and EPIC.

  • IC-06-05 pdf bib
    Scalable model-based policy refinement and validation for network security systems.
    João Porto de Albuquerque.
    March 2006. In English, 58 pages.

    Abstract: This report builds upon previous work on Model-based Management, and particularly on the Diagram of Abstract Subsystems (DAS) approach, further elaborating on the correctness and performance of the automated policy refinement process. The modelling technique and the automated policy refinement process are first presented to illustrate the practical use of the DAS approach. Subsequently, the graphical model is formalised using an algebraic notation, which is then used to define validation conditions for a layered model, i.e., conditions with which a resulting model must comply if the lower-level policy sets have been correctly generated. The scalability of the refinement algorithms and the additional effort needed to validate model instances are also analysed and discussed.

  • IC-06-04 pdf bib
    A framework based on semantic Web services and AI planning for the management of bioinformatics scientific workflows.
    Luciano Antonio Digiampietri, José de Jésus Pérez Alcázar, Claudia Bauzer Medeiros, and João Carlos Setubal.
    February 2006. In English, 22 pages.

    Abstract: Bioinformatics activities are growing all over the world, with proliferation of data and tools. This brings new challenges, such as how to understand and organize these resources, how to exchange and reuse successful experimental procedures, tools and data, and how to provide interoperability among data and tools across different sites and for distinct user profiles. This paper describes an effort in these directions. It is based on combining research on databases, AI, scientific workflows, and the Semantic Web to design, reuse, annotate, and document bioinformatics experiments or parts thereof. The resulting framework allows the integration of heterogeneous data and tools, and the design of experiments as scientific workflows, which are stored in databases. Moreover, it takes advantage of the notion of planning in AI to support automatic or interactive composition of tasks. These ideas are being implemented in a prototype and validated on real bioinformatics data.
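
    The role of planning in task composition can be illustrated with a toy sketch (Python; the tool names and the greedy strategy are hypothetical and much simpler than the AI planning used by the framework): tools are chained by matching the data types they produce to the data types other tools require.

        # Toy illustration of how planning can chain bioinformatics tools by matching
        # produced data types to required input types.
        def compose(tools, available, goal):
            """Greedy forward chaining: return an ordered list of tool names that yields `goal`."""
            plan, facts = [], set(available)
            while goal not in facts:
                runnable = [t for t in tools if t not in plan and set(t["inputs"]) <= facts]
                if not runnable:
                    return None                  # no composition found
                step = runnable[0]
                plan.append(step)
                facts.add(step["output"])
            return [t["name"] for t in plan]

        tools = [
            {"name": "assembler", "inputs": ["reads"], "output": "contigs"},
            {"name": "gene_finder", "inputs": ["contigs"], "output": "genes"},
            {"name": "annotator", "inputs": ["genes"], "output": "annotation"},
        ]
        print(compose(tools, available=["reads"], goal="annotation"))
        # ['assembler', 'gene_finder', 'annotator']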

  • IC-06-03 pdf bib
    Fractional traffic modeling and policing using envelope process.
    Flávio M. Pereira and Nelson L. S. da Fonseca.
    February 2006. In English, 78 pages.

    Abstract: In this paper, an envelope process called Fractional Bounded Arrival Process is proposed for self-similar traffic representation. A queueing analysis for FBAP traffic is developed, and upper bounds for the backlog and for the delay are obtained. The policing of FBAP traffic is also investigated. Results are then extended to the multifractal traffic case, for which an envelope process called Multifractal Bounded Arrival Process (MFBAP) is proposed. Comments on the queueing analysis and on the policing for MFBAP traffic are also outlined.
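
    For readers unfamiliar with envelope processes, the idea is that a deterministic curve bounds the cumulative arrivals of a flow; the expressions below show the general property and a commonly used fractional-Brownian-motion-style envelope for self-similar traffic, given purely as an illustration of the form such bounds take (the report defines FBAP and MFBAP precisely).

        % An envelope process deterministically bounds the cumulative arrivals A:
        \[
          A(s, s+t) \;\le\; \hat{A}(t) \qquad \forall\, s, t \ge 0 .
        \]
        % For self-similar traffic with mean rate \bar{a}, deviation parameter \sigma and
        % Hurst parameter H, envelopes of the following form are commonly used:
        \[
          \hat{A}(t) \;=\; \bar{a}\, t \;+\; \kappa\, \sigma\, t^{H},
          \qquad \tfrac{1}{2} \le H < 1 ,
        \]
        % and a server of constant capacity C then has the backlog bound
        \[
          q \;\le\; \max_{t \ge 0} \bigl\{ \hat{A}(t) - C\, t \bigr\} .
        \]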

  • IC-06-02 pdf bib
    On the representation of a PI-Graph.
    Sheila Morais de Almeida, Célia Picinin de Mello, and Anamaria Gomide.
    February 2006. In English, 9 pages.

    Abstract: Consider two parallel lines (denoted by $r_1$ and $r_2$). A graph $G$ is a PI graph (Point-Interval graph) if it is the intersection graph of a family $F$ of triangles between $r_1$ and $r_2$ such that each triangle has an interval with two endpoints on $r_1$ and a vertex (a point) on $r_2$. The family $F$ is the PI representation of $G$. PI graphs extend interval and permutation graphs and form a subclass of trapezoid graphs. In this paper, we characterize PI graphs in terms of their trapezoid representations. We also show how to construct, from a trapezoid representation of a known graph in this class, a family of graphs that are trapezoid graphs but not PI graphs.

  • IC-06-01 pdf bib
    Sorting by block-interchanges and signed reversals.
    Cleber V. G. Mira and João Meidanis.
    January 2006. In English, 11 pages.

    Abstract: A block-interchange is a rearrangement event that exchanges two, not necessarily consecutive, contiguous regions in a genome, maintaining the original orientation of each block. Signed reversals are events that invert a region of a genome, changing the orientation of its blocks. Both events are important for the comparative analysis of genomes. For this reason, we propose a new measure that consists in finding a minimum sequence of block-interchanges and signed reversals that transforms one genome into another. For each event, we assign a weight related to its norm and argue that this parameter is adequate to indicate the power of each event.

    We present a formula for the rearrangement measure and a polynomial time sorting algorithm for finding a sequence of block-interchanges and signed reversals that transforms a unichromosomal genome into another.
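
    A block-interchange is easy to sketch; the snippet below (hypothetical Python, not the report's algorithm) swaps two non-overlapping blocks of a permutation while preserving the orientation of every element, in contrast with the signed reversal sketched earlier under report IC-06-23.

        # Hypothetical sketch of a block-interchange: exchange two non-overlapping,
        # not necessarily consecutive blocks, keeping the orientation of each element.
        def block_interchange(perm, i, j, k, l):
            """Swap the blocks perm[i..j] and perm[k..l], assuming i <= j < k <= l."""
            return (perm[:i] + perm[k:l + 1] + perm[j + 1:k]
                    + perm[i:j + 1] + perm[l + 1:])

        genome = [1, 5, 3, 4, 2, 6]
        print(block_interchange(genome, 1, 1, 4, 4))  # [1, 2, 3, 4, 5, 6]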

