Abstract: This technical report contains summaries of 32 works presented at the II PhD Thesis Workshop of the Institute of Computing, UNICAMP. The workshop, held from November 29 to December 1, 2006, allowed doctoral students of the Institute to present the main aspects of their research. Each chapter corresponds to one presentation, with its text limited to 3 pages.
Participation was voluntary, and the academic profile of the participants was varied, ranging from students recently admitted to the program to those whose defense was already scheduled for December 2006.
Publishing the summaries as a technical report aims at a wider dissemination of the doctoral work in progress at the IC. It also provides a succinct record of the current state of several of these research projects.
Abstract: The multiple genome rearrangement problem consists in finding a phylogenetic tree that is the most ``similar'' to the evolutionary scenario based on rearrangement events involving several species. Bourque and Pevzner devised an algorithm for the multiple genome rearrangement problem that is based on rearrangement events (signed reversals, translocations, fusions, and fissions). However, the heuristic behind their algorithm relies on the time-consuming procedure of searching for good events. We propose a simplified version of Bourque and Pevzner's heuristic, based on experimental results showing that proper signed reversals acting on short cycles are frequently good events. We present results for biological and randomly generated input data sets and argue that the new heuristic is faster than the good-event search, without compromising the quality of the solutions for input data sets involving few genomes.
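Signed reversals, the central event type in both heuristics, are easy to state concretely. The sketch below is only an illustration on a signed permutation, not code from the paper:

```python
def signed_reversal(perm, i, j):
    """Reverse the segment perm[i..j] (inclusive) and flip the sign of
    each element, modeling a signed reversal rearrangement event."""
    return perm[:i] + [-g for g in reversed(perm[i:j + 1])] + perm[j + 1:]

# Example: a signed reversal acting on positions 1..3 of a toy genome.
genome = [1, -3, 2, 4]
print(signed_reversal(genome, 1, 3))  # [1, -4, -2, 3]
```

Applying the same reversal twice restores the original genome, which is why reversals can be used both to sort a permutation and to reconstruct ancestral scenarios.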
Abstract: One of the fundamental motivations for employing exception handling in the development of robust applications is to lexically separate error handling code so that the latter and the normal code can be independently modified. Also, this scheme makes it possible for the normal code to be reused across various applications, using different error handling strategies. Since error handling is an inherently crosscutting concern, it is usually assumed that it can be better modularized by the use of aspect-oriented programming (AOP) techniques. However, recent studies argue that the ad hoc use of AOP can be detrimental to the quality of a system. When applying AOP to modularize exception handling, developers need to follow clear and simple practices in order to obtain a well-structured system design. If developers do not receive proper guidance, software systems whose exceptional behavior is modularized through aspects can exhibit problems, such as unexpected flows of control and swallowed exceptions, that stem from poorly designed/implemented error handling code. In this paper, we address this problem by providing some design/implementation guidance regarding the use of AOP to modularize exception handling. We propose a classification for error handling code based on the factors that we found to have the most influence on its aspectization. Moreover, we present a scenario catalog comprising combinations of these factors and analyze how these scenarios positively or negatively affect the task of aspectizing exception handling.
Abstract: Driven by market pressure to develop software within ever shorter deadlines, component-based development (CBD) has been increasingly adopted by the software industry. However, to maintain a consistently good level of quality in the software under development, a testing process for CBD is necessary, both to ensure the quality of the components being developed and to validate the components to be reused, in the latter case verifying whether the component satisfies the specification required by the system architecture. We therefore propose a generic testing process adapted to CBD processes.
Abstract: A survey of teachers working at public schools showed that the production and, to a lesser degree, the sharing of small-grained materials of educational interest are part of the working practice of those professionals. The benefits of sharing are recognized as a way of socializing useful knowledge for the generation of new knowledge. The lack of proper sharing mechanisms is perceived as a factor limiting the possibilities of reuse, by a larger audience, of materials produced by teachers. A system designed to promote this kind of reuse and sharing is presented here.
Abstract: Universal Description, Discovery and Integration (UDDI) is a specification for XML registries that enables functionality-based publication and discovery of Web services. This paper presents a UDDI extension to support Quality of Service (QoS) management in the context of Business Process Management Systems (BPMSs). To allow service selection according to functional and non-functional requirements, a BPMS architecture should include brokers to match consumer policies with policies stored in UDDI repositories and monitors to update QoS attributes. The main contributions of this approach are the use of the Web Services Policy Framework specification to describe policies for Web services and a UDDI extension to include QoS attributes.
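As an illustration of the broker's role, the hedged sketch below matches a consumer policy against a provider's QoS attributes; the attribute names and the dictionary-based representation are hypothetical stand-ins for WS-Policy documents:

```python
def matches(consumer_policy, provider_qos):
    """Broker sketch (hypothetical attribute names): a provider matches
    when every QoS attribute required by the consumer is satisfied.
    Larger-is-better attributes use >=, smaller-is-better use <=."""
    LARGER_IS_BETTER = {'availability', 'throughput'}
    for attr, required in consumer_policy.items():
        offered = provider_qos.get(attr)
        if offered is None:
            return False          # provider does not advertise this attribute
        if attr in LARGER_IS_BETTER:
            if offered < required:
                return False
        elif offered > required:
            return False
    return True

consumer = {'availability': 0.99, 'latency_ms': 200}
provider = {'availability': 0.995, 'latency_ms': 120, 'throughput': 50}
print(matches(consumer, provider))  # True
```

A monitor component would periodically rewrite the `provider` dictionary with measured values, so that stale QoS claims stop matching.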
Abstract: Business Process Management Systems (BPMSs) support the enactment of business processes. Web services are being indicated as the most suitable technology for BPMSs. Consequently, incorporating fault tolerance into the Web services architecture is essential for process continuity. The goal of this paper is to propose a fault-tolerant Web services architecture for use with BPMSs. It employs mediation and monitoring layers, providing transparency, interoperability, and privacy. The main contributions of this paper are: an approach for providing fault tolerance to Web services in the context of BPMSs, an architecture based on the proposed approach, and its partial implementation.
Abstract: We compare three formal verification techniques for cryptographic protocols: BAN and SVO logics, and strand spaces, identifying some weaknesses of these logics and advantages of strand spaces. We also present new proofs of the modified Yahalom, modified Kerberos, and Woo-Lam Pi protocols.
Abstract: We describe in detail a novel data structure for n-dimensional triangulations. In an arbitrary n-dimensional triangulation, there are n! ways in which a specific facet of a simplex can be glued to a specific facet of another simplex, one for each bijection between the two facets' vertex sets. Therefore, in data structures for general n-dimensional triangulations, this information must be encoded using log2(n!) bits for each adjacent pair of simplices. We study a special class of triangulations, called colored triangulations, in which there is only one way two simplices can share a specific facet. The gem data structure, described here, exploits this fact to greatly simplify the repertoire of elementary topological operators.
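The space argument can be checked directly: a facet of an n-simplex has n vertices, so the gluings correspond to vertex bijections. A small sketch (assuming this counting argument) tabulates the cost:

```python
import math

# A facet of an n-simplex has n vertices, so gluing two facets means
# choosing a bijection between their vertex sets: n! possibilities,
# requiring ceil(log2(n!)) bits per adjacent pair of simplices.
for n in range(2, 6):
    gluings = math.factorial(n)
    bits = math.ceil(math.log2(gluings))
    print(f"n={n}: {gluings} gluings, {bits} bits per adjacency")
```

In a colored triangulation there is a single admissible gluing, so this per-adjacency cost vanishes, which is what the gem structure exploits.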
Abstract: In this work we study methods for the identification of single-base polymorphisms, commonly known as SNPs (Single Nucleotide Polymorphisms). Identifying this kind of polymorphism is important because, since SNPs can affect the stability or even the structure of the proteins encoded by genes, they may be related to several diseases. A good identification method must be able to point out the polymorphic positions and to provide a confidence measure for each detection. With this in mind, we studied two methods: polybayes and MSASNP. The first is a program that performs Bayesian analysis of the sequences to identify SNPs. The second adopts a simple strategy based on elementary probability concepts. Our tests showed that polybayes is the more suitable of the two, as it is able to identify the SNPs and to provide more reliable values for the probability of a correct SNP detection.
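As a toy illustration of the simpler, frequency-based style of analysis, the sketch below flags alignment columns with a recurring minor allele; it is a crude stand-in for the probabilistic scoring performed by tools such as polybayes:

```python
from collections import Counter

def snp_candidates(alignment, min_minor=2):
    """Scan the columns of a multiple alignment and flag positions where
    a second (minor) allele appears at least `min_minor` times.  Real
    SNP callers weigh base qualities and priors instead of raw counts."""
    candidates = []
    for pos in range(len(alignment[0])):
        counts = Counter(seq[pos] for seq in alignment)
        counts.pop('-', None)  # ignore alignment gaps
        common = counts.most_common()
        if len(common) > 1 and common[1][1] >= min_minor:
            candidates.append(pos)
    return candidates

reads = ["ACGTA", "ACGTA", "ACTTA", "ACTTA"]
print(snp_candidates(reads))  # [2]: position 2 has alleles G/G/T/T
```

Requiring the minor allele at least twice is a standard crude filter against isolated sequencing errors.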
Abstract: A Single Nucleotide Polymorphism (SNP) is a mutation that affects a single position of an organism's genome. Linkage disequilibrium (LD) is a non-random association of SNPs, which can be used as a marker. Multiple linkage disequilibrium groups three or more polymorphisms, allowing any two of them to be used as markers. In this work, we study two definitions of LD (complete LD and useful LD) and present a graph-theoretic definition of multiple LD, as well as a greedy heuristic, based on vertex degrees, for its identification. We report promising results obtained in tests with data from the sugarcane and human genomes.
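The paper's degree-based greedy heuristic is not specified here, but its general flavor can be sketched: treat SNPs as vertices, LD pairs as edges, and greedily grow a pairwise-connected group by descending vertex degree. The code below only illustrates that idea and may differ from the actual heuristic:

```python
def greedy_ld_group(graph):
    """Greedy sketch: visit vertices by decreasing degree, adding each
    one that is adjacent to every vertex already chosen, so the result
    is a clique-like group of pairwise-correlated SNPs."""
    order = sorted(graph, key=lambda v: len(graph[v]), reverse=True)
    group = []
    for v in order:
        if all(v in graph[u] for u in group):
            group.append(v)
    return group

# Toy LD graph: an edge means two SNPs are in linkage disequilibrium.
ld = {
    'A': {'B', 'C', 'D'},
    'B': {'A', 'C'},
    'C': {'A', 'B', 'D'},
    'D': {'A', 'C'},
}
print(greedy_ld_group(ld))  # ['A', 'C', 'B']
```

Every pair inside the returned group is an LD pair, which is exactly the property that lets any two of its polymorphisms serve as markers.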
Abstract: The quality of service (QoS) of a failure detector determines how fast a failure detector q detects the crash of a process p, and how accurately q reports that crash. In wide area networks and wireless networks, process crashes, high delay variations, and loss bursts of message packets are common. This paper therefore proposes a failure detector configurator that takes into account the probability distribution of the loss burst lengths of message packets, using a Markov model. Simulation results show that the parameters provided by the proposed configurator lead the failure detector to satisfy the QoS requirements in networks subject to message loss bursts, whereas parameters from previous works fail to do so in some cases.
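A two-state (Gilbert) chain is the simplest Markov model that produces geometrically distributed loss bursts of the kind the configurator accounts for. A minimal simulation sketch, with illustrative parameters:

```python
import random

def gilbert_losses(n, p_gb, p_bg, seed=1):
    """Two-state (Gilbert) Markov loss model: packets arrive in the Good
    state and are lost in the Bad state; p_gb and p_bg are the Good->Bad
    and Bad->Good transition probabilities.  Burst lengths are geometric
    with mean 1/p_bg."""
    rng = random.Random(seed)
    state, losses = 'G', []
    for _ in range(n):
        losses.append(state == 'B')
        if state == 'G' and rng.random() < p_gb:
            state = 'B'
        elif state == 'B' and rng.random() < p_bg:
            state = 'G'
    return losses

trace = gilbert_losses(10_000, p_gb=0.01, p_bg=0.5)
print(f"loss rate: {sum(trace) / len(trace):.3f}")  # around p_gb/(p_gb+p_bg)
```

Fitting p_gb and p_bg to a measured trace gives the burst-length distribution a configurator like the one proposed here could consume.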
Abstract: This paper presents a novel approach to fingerprint retrieval for personal identification by joining three image retrieval tasks, namely, feature extraction, similarity measurement, and feature indexing, into a wavelet-based fingerprint retrieval system.
We propose the use of different types of Wavelets for representing and describing the textural information present in fingerprint images. For that purpose, the feature vectors used to characterize the fingerprints are obtained by computing the mean and the standard deviation of the decomposed images in the Wavelet domain. These feature vectors are used to retrieve the most similar fingerprints given a query image, while their indexing is used to reduce the search space of candidate images. The different types of Wavelets used in our study include: Gabor Wavelets (GWs), the Tree-Structured Wavelet Decomposition using both Orthogonal Filter Banks (TOWT) and Bi-orthogonal Filter Banks (TBOWT), as well as Steerable Wavelets.
To evaluate the retrieval accuracy of the proposed approach, eight different data sets were used. The experiments also evaluated different combinations of Wavelets with six similarity measures. The results show that the combination of Gabor Wavelets with the Square Chord similarity measure achieves the best retrieval effectiveness.
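To make the feature pipeline concrete, the sketch below substitutes a one-level Haar decomposition for the wavelets studied in the paper and takes absolute coefficient values so the mean/standard-deviation features stay non-negative for the Square Chord distance; everything except the Square Chord formula itself is a simplifying assumption:

```python
import math

def haar_level1(img):
    """One-level 2-D Haar transform of an even-sided square image,
    returning the four subbands LL, LH, HL, HH as flat lists."""
    n, h = len(img), len(img) // 2
    # Transform rows: averages first, then differences.
    rows = [[(r[2 * i] + r[2 * i + 1]) / 2 for i in range(h)] +
            [(r[2 * i] - r[2 * i + 1]) / 2 for i in range(h)] for r in img]
    # Transform columns the same way.
    out = [[(rows[2 * i][c] + rows[2 * i + 1][c]) / 2 for c in range(n)] for i in range(h)] + \
          [[(rows[2 * i][c] - rows[2 * i + 1][c]) / 2 for c in range(n)] for i in range(h)]
    return {'LL': [out[r][c] for r in range(h) for c in range(h)],
            'LH': [out[r][c] for r in range(h) for c in range(h, n)],
            'HL': [out[r][c] for r in range(h, n) for c in range(h)],
            'HH': [out[r][c] for r in range(h, n) for c in range(h, n)]}

def feature_vector(img):
    """Mean and standard deviation of |coefficients| per subband."""
    vec = []
    for band in haar_level1(img).values():
        vals = [abs(x) for x in band]
        mu = sum(vals) / len(vals)
        vec += [mu, math.sqrt(sum((x - mu) ** 2 for x in vals) / len(vals))]
    return vec

def square_chord(u, v):
    """Square Chord distance: sum_i (sqrt(u_i) - sqrt(v_i))**2."""
    return sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(u, v))

a = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
b = [[16 - v for v in row] for row in a]
print(square_chord(feature_vector(a), feature_vector(a)))  # 0.0
```

An identical query image yields distance zero, and each image is summarized by only 8 numbers (2 statistics × 4 subbands), which is what makes indexing the vectors cheap.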
Abstract: We describe in detail a general algorithm for constructing the convex hull of a finite set of points in Euclidean space of arbitrary dimension n. The algorithm handles degenerate situations, such as non-simplicial faces and point sets contained in a lower-dimensional subspace. The topology of the hull is kept in a graph encoded map (gem) data structure, a novel representation for n-dimensional triangulations. The gem representation, which was introduced as a mathematical device by S. Lins in 1982, extends the cell-tuple (or generalized map) representation proposed by Brisson and Lienhardt to maps that are not barycentric subdivisions, to manifolds with borders, and to non-manifold (but triangulable) topological spaces.
Abstract: An important challenge faced by the developers of fault-tolerant systems is to build fault tolerance mechanisms that are reliable. To achieve the desired levels of reliability, mechanisms for detecting and handling errors should be designed from the early phases of software development, preferably using a rigorous or formal methodology. In recent years, many authors have been advocating the idea that exception handling-related issues should be addressed at the architectural level, as a complement to implementation-level exception handling. However, few works in the literature have addressed the problem of describing how exceptions flow amongst architectural elements. A solution to this problem would enable the early detection of mismatches between architectural elements due to exceptions. Moreover, it would make it possible to validate whether the architecture satisfies some properties of interest regarding exception flow before the system is actually built. We believe that a model for describing the flow of exceptions between architectural elements should be: (i) precise; and (ii) analyzable, preferably automatically. In this paper, we present a rigorous model for reasoning about exception flow in software architectures that satisfies these requirements.
Abstract: In recent years, various approaches combining software architectures and exception handling have been proposed for increasing the dependability of software systems. This conforms with the idea supported by some authors that addressing exception handling-related issues from the early phases of software development may improve the overall dependability of a system. By systematically designing the mechanism responsible for rendering a system reliable, developers increase the probability of the system being able to avoid catastrophic failures at runtime. This paper addresses the problem of describing how exceptions flow amongst architectural elements. In critical systems where rollback-based mechanisms might not be available, such as systems that interact with mechanical devices, exception handling is an important means for recovering from errors in a forward-based manner. We present a framework, named Aereal, that supports the description and analysis of exceptions that flow between architectural components. Since different architectural styles have different policies for exception flow, Aereal makes it possible to specify rules on how exceptions flow in a given style and to check for violations of these rules. We use a financial application and a control system as case studies to validate the proposed approach.
Abstract: One of the fundamental motivations for employing exception handling in the development of robust applications is to lexically separate error handling code so that it can be independently reused and modified. It is usually assumed that the implementation of exception handling can be better modularized by the use of aspect-oriented programming (AOP). However, the trade-offs involved in using AOP with this goal are not yet well-understood. This paper presents an in-depth investigation of the adequacy of the AspectJ language for modularizing exception handling code. The study consisted of refactoring existing applications so that the code responsible for implementing heterogeneous error handling strategies was moved to separate aspects. We have performed quantitative assessments of four systems - three object-oriented and one aspect-oriented - based on four fundamental quality attributes, namely separation of concerns, coupling, cohesion, and conciseness. Our investigation also included a multi-perspective analysis of the refactored systems, including (i) the reusability of the aspectized error handling code, (ii) the beneficial and harmful aspectization scenarios, and (iii) the scalability of AOP to aspectize exception handling in the presence of other crosscutting concerns.
Abstract: In this paper, we describe a new methodology to detect the presence of hidden digital content in the Least Significant Bits (LSB) of images. We introduce the Progressive Randomization (PR) technique that captures statistical artifacts inserted during the hiding process. Our technique is a progressive application of LSB modifying transformations that receives an image as input, and returns images that only differ in the LSB from the initial image. Each step of the progressive randomization approach represents a possible content-hiding scenario with increasing size, and increasing LSB entropy. We validate our method with 20,000 real, non-synthetic images. Using only statistical descriptors of LSB occurrences, our method already performs better than comparable techniques in the literature.
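A minimal sketch of one progressive-randomization step, assuming grayscale pixels stored as integers: randomize the LSBs of a chosen fraction of the pixels, as if a message of that relative size had been embedded, then observe a trivial LSB statistic (the descriptors in the paper are richer):

```python
import random

def lsb_randomize(pixels, fraction, seed=0):
    """One progressive-randomization step (sketch): set the LSB of a
    given fraction of the pixels to a random bit, mimicking the effect
    of LSB embedding of a message of that relative size."""
    rng = random.Random(seed)
    out = list(pixels)
    k = int(len(out) * fraction)
    for idx in rng.sample(range(len(out)), k):
        out[idx] = (out[idx] & ~1) | rng.randint(0, 1)
    return out

def lsb_rate(pixels):
    """Fraction of pixels whose LSB is 1 -- a trivial LSB statistic."""
    return sum(p & 1 for p in pixels) / len(pixels)

img = [2 * i % 256 for i in range(1000)]  # toy image: all LSBs are 0
print(lsb_rate(img))                      # 0.0
print(lsb_rate(lsb_randomize(img, 0.5)))  # moves toward 0.25 here
```

Running the step with increasing fractions produces the family of images, differing only in their LSBs and with increasing LSB entropy, over which the detector's statistics are computed.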
Abstract: This technical report presents the main aspects of the 2D-VLIW architecture. This architecture is being developed in the Computer Systems Laboratory at the Institute of Computing of the State University of Campinas. The main purpose of the 2D-VLIW architecture is to provide resources to carry out a broad class of applications through a compromise between architectural arrangement and compiler assistance. Some preliminary experiments have shown that this combination can achieve appealing performance compared with mainstream architectures like VLIW and EPIC.
Abstract: This report builds upon previous work on Model-based Management, and particularly on the Diagram of Abstract Subsystems (DAS) approach, further elaborating on the correctness and performance of the automated policy refinement process. The modelling technique and the automated policy refinement process are first presented to illustrate the practical use of the DAS approach. Subsequently, the graphical model is formalised using an algebraic notation, which is then used to define validation conditions for a layered model, i.e. conditions with which a resulting model must comply if the lower-level policy sets have been correctly generated. The scalability of the refinement algorithms and the additional effort needed to validate model instances are also analysed and discussed.
Abstract: Bioinformatics activities are growing all over the world, with a proliferation of data and tools. This brings new challenges, such as how to understand and organize these resources, how to exchange and reuse successful experimental procedures, tools and data, and how to provide interoperability among data and tools across different sites and for distinct user profiles. This paper describes an effort in these directions. It combines research on databases, AI, scientific workflows, and the Semantic Web to design, reuse, annotate and document bioinformatics experiments or parts thereof. The resulting framework allows the integration of heterogeneous data and tools, and the design of experiments as scientific workflows, which are stored in databases. Moreover, it takes advantage of the notion of planning in AI to support automatic or interactive composition of tasks. These ideas are being implemented in a prototype and validated on real bioinformatics data.
Abstract: In this paper, an envelope process called the Fractional Bounded Arrival Process (FBAP) is proposed for representing self-similar traffic. A queueing analysis for FBAP traffic is developed, and upper bounds for the backlog and for the delay are obtained. The policing of FBAP traffic is also investigated. The results are then extended to the multifractal traffic case, for which an envelope process called the Multifractal Bounded Arrival Process (MFBAP) is proposed. Comments on the queueing analysis and on the policing of MFBAP traffic are also provided.
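Envelope processes of this family are typically derived from fractional Brownian motion traffic models; a commonly used general form (the exact constants in FBAP may differ) is

```latex
\hat{A}(t) \;=\; \bar{a}\,t + k\,\sigma\,t^{H}, \qquad 0 < H < 1,
```

where \bar{a} is the mean arrival rate, \sigma the standard deviation of the traffic, H the Hurst parameter capturing self-similarity, and k a constant fixing the envelope's violation probability. A backlog bound at a server of rate c > \bar{a} then follows from \max_{t \ge 0} \big(\hat{A}(t) - c\,t\big).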
Abstract: Consider two parallel lines (denoted by L1 and L2). A graph G is a PI graph (Point-Interval graph) if it is the intersection graph of a family F of triangles between L1 and L2 such that each triangle has an interval with two endpoints on L1 and a vertex (a point) on L2. The family F is the PI representation of G. PI graphs extend interval and permutation graphs and form a subclass of trapezoid graphs. In this paper, we characterize PI graphs in terms of their trapezoid representations. We also show how to construct, from the trapezoid representation of a known graph in this class, a family of graphs that are trapezoid but not PI.
Abstract: A block-interchange is a rearrangement event that exchanges two, not necessarily consecutive, contiguous regions in a genome, maintaining the original orientation of each block. Signed reversals are events that invert a region of a genome, changing the orientation of its blocks. Both events are important for the comparative analysis of genomes. For this reason, we propose a new measure that consists in finding a minimum sequence of block-interchanges and signed reversals that transforms one genome into another. We assign to each event a weight related to its norm and argue that this parameter is adequate to indicate the power of each event.
We present a formula for the rearrangement measure and a polynomial-time algorithm for finding a sequence of block-interchanges and signed reversals that transforms one unichromosomal genome into another.
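A block-interchange itself is straightforward to state on a permutation; the sketch below is only an illustration, with the two blocks given as half-open index ranges:

```python
def block_interchange(perm, i, j, k, l):
    """Exchange blocks perm[i:j] and perm[k:l] (with i < j <= k < l),
    keeping the orientation of every element, as in the
    block-interchange rearrangement event."""
    assert 0 <= i < j <= k < l <= len(perm)
    return perm[:i] + perm[k:l] + perm[j:k] + perm[i:j] + perm[l:]

# Example: swapping blocks [4, 5] and [2, 3] sorts the toy genome.
genome = [1, 4, 5, 2, 3, 6]
print(block_interchange(genome, 1, 3, 3, 5))  # [1, 2, 3, 4, 5, 6]
```

Note that, unlike a signed reversal, no element changes sign; the two events complement each other, which motivates weighting them by their norms.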
Instituto de Computação :: Universidade Estadual de Campinas
Caixa Postal 6176 • Av. Albert Einstein, 1251 - Cidade Universitária • CEP 13083-970 • Campinas/SP - Brasil • Fone:  3521-5838