Abstract: ChkSim is a portable tool to simulate the execution of checkpointing algorithms in distributed applications. It provides quantitative data to system and algorithm designers, enabling the comparative assessment of these algorithms. This report describes the ChkSim simulation model, its software architecture and user manual.
Abstract: The IEEE 802.11e standard is intended to support applications with QoS requirements. However, such provisioning cannot be achieved when the network load is high. In this paper, the effectiveness of two mechanisms for measurement-based admission control in 802.11e networks is evaluated. In addition, a new control mechanism which dynamically tunes parameters of the 802.11e contention-based access method is introduced. The proposed mechanism aims at providing QoS as well as ameliorating the problem of delay asymmetry.
Abstract: In this paper we analyze a known relaxation for the Ratio Cut Problem based on positive semidefinite constraints and present a branch-and-bound algorithm and heuristics based on this relaxation. The relaxation and the algorithms were tested on small and moderate-sized instances. The relaxation leads to values very close to the optimum solution values. The exact algorithm obtained solutions for small and moderate-sized instances, and the best heuristics obtained optimum or near-optimum solutions for all tested instances. We prove interesting properties of each of these heuristics. We also compare the semidefinite-based branch-and-bound algorithm with a commercial integer programming solver.
Abstract: Image segmentation using graph cuts has become very popular in recent years. These methods are usually computationally expensive, even with hard constraints (seed pixels). We present a solution that requires only internal seeds and runs in time proportional to the number of pixels. Our method computes an ordered region growing in which the propagation order of each pixel is proportional to the cost of an optimum path from the seed set to that pixel. Each pixel defines a region which includes it and all pixels with lower propagation order. The boundary of each region is a possible cut boundary, whose cut measure is also computed and assigned to the corresponding pixel on-the-fly. The object is obtained by selecting the pixel with minimum-cut measure and all pixels within its respective cut boundary. The method works with various cut measures and is evaluated in several experiments.
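The ordered region growing described in this abstract can be sketched as a Dijkstra-like propagation on the image graph. The sketch below is illustrative only, not the authors' implementation: the 4-neighborhood, the max-arc path cost, and all names are assumptions for the example.

```python
import heapq

def ordered_region_growing(image, seeds):
    """Dijkstra-like propagation: pixels are reached in order of their
    optimum-path cost from the seed set (illustrative cost: the maximum
    intensity difference along the path, a common choice in IFT work)."""
    h, w = len(image), len(image[0])
    cost = {(r, c): float("inf") for r in range(h) for c in range(w)}
    heap = []
    for s in seeds:
        cost[s] = 0
        heapq.heappush(heap, (0, s))
    order = []          # pixels in propagation order
    done = set()
    while heap:
        c, (r, col) = heapq.heappop(heap)
        if (r, col) in done:
            continue
        done.add((r, col))
        order.append((r, col))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, col + dc
            if 0 <= nr < h and 0 <= nc < w:
                # path cost = maximum arc weight seen along the path
                new_cost = max(c, abs(image[nr][nc] - image[r][col]))
                if new_cost < cost[(nr, nc)]:
                    cost[(nr, nc)] = new_cost
                    heapq.heappush(heap, (new_cost, (nr, nc)))
    return order, cost
```

The `order` list gives the propagation order; in the method described above, a cut measure would additionally be computed for each pixel as it is reached.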
Abstract: We propose a new approach for object detection using the image foresting transform (IFT). For a given image with seed pixels inside the object, the IFT computes an optimum-path forest in a graph derived from the image, such that object and background are connected by a few optimum paths (leaking paths). The leaking paths cross the object's boundary through its "most weakly connected" parts (called leaking pixels). The topology of the forest is exploited to identify the leaking pixels and eliminate their subtrees, such that the remaining forest defines the object. The method is called tree pruning. We present tree pruning with an automatic approach to detect leaking pixels, which does not require parameters; show several examples; discuss its limitations; and evaluate the method for automatic detection of license plates using a database with 990 images.
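The subtree-elimination step of tree pruning can be illustrated on a forest given by parent pointers. This is a sketch under assumed data structures (a parent map with `None` marking roots), not the paper's code:

```python
def prune_subtrees(parent, leaking):
    """Given an optimum-path forest as a parent map (root -> None) and a
    set of leaking nodes, keep only the nodes whose path to the root avoids
    every leaking node; the removed nodes form the background."""
    def kept(v):
        while v is not None:
            if v in leaking:
                return False        # v lies in a pruned subtree
            v = parent[v]
        return True
    return {v for v in parent if kept(v)}
```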
Abstract: Slippage is an important sequencing problem that can occur in EST projects. However, there are very few studies about it. In this work we propose three new methods to detect slippage artifacts: the Arithmetic Mean Method, the Geometric Mean Method, and the Echo Coverage Method. Each method is simple and has two different strategies for processing sequences: suffix and subsequence. Using the 291,689 EST sequences produced in the SUCEST project, we performed comparative tests between the proposed methods and the Telles and Silva Method. The subsequence strategy is better than the suffix strategy because it is not anchored at the end of the sequence, so it is more flexible in finding slippage at the beginning of the EST. Compared with the Telles and Silva Method, the advantage of our methods is that they do not discard the majority of the sequences marked as slippage, but instead only remove the slippage artifact from the sequence. The tests indicate that the Echo Coverage Method with the subsequence strategy offers the best compromise between slippage detection and ease of calibration.
Abstract: Before promising locations in petroliferous basins become productive oil wells, it is often necessary to complete drilling activities at these locations. The scheduling of such activities must satisfy several conflicting constraints and attain a number of goals. Moreover, resource displacements between wells are also important. We describe a Greedy Randomized Adaptive Search Procedure (GRASP) for the scheduling of oil well development activities with resource displacement. The results are compared with schedules produced by a well-accepted constraint programming implementation. Computational experience on real instances indicates that the GRASP implementation is competitive, outperforming the constraint programming implementation.
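A generic GRASP iteration (greedy randomized construction followed by local search) can be sketched on a toy one-machine scheduling objective. This is not the report's implementation; the RCL parameter `alpha`, the objective, and the local search are assumptions made for illustration:

```python
import random

def grasp_schedule(durations, iterations=30, alpha=0.5, seed=0):
    """Generic GRASP skeleton on a toy objective: minimize the sum of job
    completion times on one machine.  Construction: pick jobs at random
    from a restricted candidate list (RCL) of the shortest remaining jobs.
    Local search: first-improvement adjacent swaps."""
    rng = random.Random(seed)

    def total_cost(order):
        t = total = 0
        for j in order:
            t += durations[j]
            total += t
        return total

    best, best_cost = None, float("inf")
    for _ in range(iterations):
        # greedy randomized construction
        remaining = sorted(range(len(durations)), key=lambda j: durations[j])
        order = []
        while remaining:
            limit = max(1, int(alpha * len(remaining)))
            j = rng.choice(remaining[:limit])   # random pick inside the RCL
            order.append(j)
            remaining.remove(j)
        # local search: adjacent swaps until no improvement
        cur = total_cost(order)
        improved = True
        while improved:
            improved = False
            for a in range(len(order) - 1):
                order[a], order[a + 1] = order[a + 1], order[a]
                c = total_cost(order)
                if c < cur:
                    cur, improved = c, True
                else:
                    order[a], order[a + 1] = order[a + 1], order[a]
        if cur < best_cost:
            best, best_cost = list(order), cur
    return best, best_cost
```

For this toy objective, adjacent-swap local search always converges to the shortest-processing-time order, so `grasp_schedule([3, 1, 2])` returns the order `[1, 2, 0]` with total completion time 10.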
Resumo: Atualmente, existe um número considerável de métodos para reconstrução de árvores filogenéticas. Muitos deles são capazes de criar árvores com qualidade considerável, mas nenhum é capaz de, garantidamente, recuperar a árvore filogenética que representa a história da evolução dos seres vivos. Por outro lado, há um grande número de métodos de consenso que geralmente agrupam as partes mais comuns das árvores de uma coleção em uma única árvore. Infelizmente, o uso de métodos de consenso para a reconstrução de árvores filogenéticas é uma espécie de tabu entre os biólogos, que normalmente vêem esse tipo de consenso apenas como um comparador e não como um construtor de árvores.
Neste trabalho, nós desafiamos esse tabu definindo um método de consenso que constrói árvores filogenéticas completamente resolvidas baseadas nas partes mais comuns das árvores completamente resolvidas de uma coleção dada. Também apresentamos resultados de testes simples que mostram que as árvores produzidas por este método são geralmente melhores que as árvores da coleção usada para criá-las.
Abstract: Nowadays, there is a considerable number of phylogeny reconstruction methods and many are able to produce reasonable trees, but none of the methods guarantees to reconstruct the true tree. On the other hand, there is also a large number of phylogenetic consensus methods, which usually put together the most common parts of a collection of phylogenetic trees. Unfortunately, there is also a taboo concerning the use of consensus methods to reconstruct phylogenetic trees, because most biologists see phylogenetic consensus methods mainly as comparators and not as phylogenetic tree constructors.
In this work, we challenge this taboo by defining a consensus method which builds a fully resolved phylogenetic tree based on the most common parts of fully resolved trees in a given collection. We also present results of simple tests which show that this consensus is usually better than the trees used to build it.
Abstract: In this paper we consider the SONET ring assignment problem (SRAP) presented by Goldschmidt et al. in (SONET/SDH ring assignment with capacity constraints, Discrete Applied Mathematics, 129:99-128, 2003). The authors pointed out the inadequacy of solving SRAP instances using their integer programming formulation and commercial linear programming solvers. Similar experiences with IP models for SRAP are reported by Aringhieri and Dell'Amico (Comparing metaheuristic algorithms for sonet network design problems, Journal of Heuristics, 11(1):35-57, 2005). In this paper we present some variants of IP formulations for SRAP and test the performance of standard branch-and-bound algorithms implemented in a commercial code when computing those models. The results are significantly better than those reported earlier in the literature. Moreover, we also reformulate the problem as a set partitioning model amended with an additional knapsack constraint. This new formulation has an exponential number of columns and, to solve it, we implemented a branch-and-price/column generation algorithm. Extensive computational experiments with our algorithm showed that it is orders of magnitude faster than its competitors. Hundreds of instances taken from both Aringhieri's and Goldschmidt's papers, which were not solved by these authors in hours of computation, are solved here to optimality in just a few seconds.
Abstract: Transactions have become a widely used technique to ensure data consistency, reliability, and recoverability in distributed applications. Besides the models aimed at traditional distributed systems, the literature has presented several transaction models for the mobile computing environment. However, most of the proposed models were suggested and developed for a specific application domain. This makes these models inflexible and inadequate for other application domains or for applications that require greater flexibility. This report presents a survey of transaction models for mobile computing, highlighting some of their contributions and limitations. The report also presents a set of requirements that may be useful in the development of a platform to support transactions in the mobile computing environment.
Abstract: In this paper, we introduce a new method for object tracking which takes into account multiple target features and uses fuzzy logic for knowledge representation and information fusion. The knowledge about the target location is represented by fuzzy matrices, one matrix for each considered feature. Since all included information belongs to the same domain, the final location is given by a fusion of these matrices, through fuzzy operators, which expresses the combination of all defined target features.
Abstract: Building software through the planned integration of reusable components, known as component-based development (CBD), has gained wide acceptance for the development of large, complex information systems. The software architecture approach is complementary to the CBD paradigm, being responsible for integrating the components so as to attain the desired qualities of the final system. For this reason, the main CBD processes are also centered on the software architecture. Existing development environments generally support UML modeling, the implementation of object-oriented systems, and the implementation of components. However, they do not integrate the modeling of software architectures and CBD. This technical report describes the requirements for the Bellatrix environment, an extension of Eclipse for CBD that is centered on the UML Components process and based on the software architecture. These requirements are described in terms of use cases and user-interface prototypes.
Abstract: Today's software market is increasingly subject to tight schedule and cost constraints together with high quality requirements. Because it speeds up development and makes it less costly in the long run, component-based development (CBD) has been widely adopted. The goal of this work is to propose a high-quality CBD process that maximizes the reuse of existing components during development. An important characteristic of the process is that it is generic, in the sense of consisting of general activities common to most current CBD methodologies. Thus it is intended to be easily adaptable to the various processes in use, which facilitates its practical application. In addition, the proposed method was adapted to the UML Components process.
Abstract: Methods that combine image database descriptors have a strong influence on the effectiveness of content-based image retrieval (CBIR) systems. Although there are many combination functions described in the image processing literature, empirical evaluation studies have shown that those functions do not perform consistently well across different contexts (queries, image collections, users). Moreover, it is often very difficult for human beings to identify optimal combination functions for a particular application. In this paper, we propose a novel framework using Genetic Programming to combine image database descriptors for CBIR. Our framework is validated through several experiments involving two image databases and a specific domain, where the images are retrieved based on the shape of their objects.
Abstract: Curvilinear reformatting of MR images has been of great assistance to the diagnosis of dysplastic lesions in epilepsy patients. A recent improvement is its complete automation using Euclidean isosurfaces obtained from an ``envelope'' of the brain. Each isosurface consists of points with the same Euclidean distance from the envelope. The 3D visualization of these isosurfaces is usually computed using voxel projection and the voxel intensities as texture map. In this paper, we introduce a deformable model which combines a polygonal mesh and a graph, both obtained from the envelope. Our data representation can quickly create meshes for isosurfaces at arbitrary depths from the envelope. This representation maps well to modern GPUs, leading to high interactivity with currently available hardware. We evaluate the visualization performance of our new deformable model side by side with an efficient implementation of voxel projection.
Abstract: The Image Foresting Transform (IFT) has been presented for the design of image processing operators based on connectivity. The IFT reduces image processing problems to a minimum-cost path forest problem in a graph derived from the image. In this paper we propose a new image operator, which solves segmentation by pruning trees of the forest. An IFT is applied to create a minimum-cost path forest whose roots are seed pixels, selected inside a desired object. In this forest, the background consists of a few subtrees rooted at pixels (leaking pixels) on the object's boundary. The leaking pixels are identified and their subtrees are eliminated, such that the remaining forest defines the object. Tree pruning reduces image segmentation to the choice of a few pixels in the image, favoring solutions for automatic object detection. We present a user-friendly way of identifying leaking pixels and give solutions for their automatic detection. Since automatic seed selection may be different for each application, we evaluate automatic segmentation with tree pruning in three situations: labeling of multiple objects with similar textures, 3D object definition, and shape-based object detection. The results indicate that tree pruning is a promising approach to automatic image segmentation.
Abstract: The notion of ``strength of connectedness'' between pixels has been successfully used in image segmentation. We present extensions to these works which can considerably improve the efficiency of object definition tasks. A set of pixels is said to be a kappa-connected component with respect to a seed pixel when the strength of connectedness of any pixel in that set with respect to the seed is higher than or equal to a threshold. We discuss two approaches that define objects based on kappa-connected components with respect to a given seed set: with and without competition among seeds. While previous approaches either assume no competition with a single threshold for all seeds or eliminate the threshold for seed competition, we show that seeds with different thresholds can improve segmentation in both paradigms. We also propose automatic and user-friendly interactive methods for determining the thresholds. The proposed methods are presented in the framework of the image foresting transform, which naturally leads to efficient and correct graph algorithms. The improvements are demonstrated through several segmentation experiments involving medical images.
Abstract: This paper presents an alternative methodology to a traditional high-level power estimation flow that enables the fast gathering of switching activity from SystemC RTL descriptions. The proposed methodology requires minimum effort from the designer, reduces the steps necessary to obtain switching activity information, and requires only a C++ compiler. The resulting activity is very close to the results obtained using a commercial tool, whose flow is longer and involves several steps. We also show that our approach overcomes some drawbacks of the commercial tool.
Abstract: In this paper we present approximation results for the on-line class constrained bin packing problem. In this problem we are given bins of capacity with compartments, and items of different classes, each item with class and size . The problem consists of packing the items into bins in an on-line fashion, where each bin contains at most different classes and has total item size at most . We show that the bounded space version of this problem does not have an algorithm with constant competitive ratio. If each item has size at least , we show that the problem does not admit an algorithm with competitive ratio better than . In the unbounded case we show that the First-Fit algorithm has competitive ratio in and we present an algorithm with competitive ratio in .
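For illustration, the First-Fit rule under class constraints can be sketched as follows. The abstract's mathematical symbols were lost in extraction, so the bin capacity and the class limit appear here as explicit parameters, and all names are assumptions of this sketch:

```python
def first_fit_class_constrained(items, capacity, max_classes):
    """Online First-Fit with class constraints: each item (size, cls) goes
    into the first open bin that has room for its size and would not exceed
    max_classes distinct classes; otherwise a new bin is opened."""
    bins = []                       # each bin: remaining capacity + classes
    for size, cls in items:
        for b in bins:
            fits_class = cls in b["classes"] or len(b["classes"]) < max_classes
            if size <= b["free"] and fits_class:
                b["free"] -= size
                b["classes"].add(cls)
                break
        else:                       # no open bin accepted the item
            bins.append({"free": capacity - size, "classes": {cls}})
    return bins
```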
Abstract: We consider the Variable-Sized Bin Packing Problem With Color Constraints (VBPCC). In this problem we have an unbounded number of bins, each one with size in , and a list of items , each item with size and color . The objective is to pack the items of into bins such that no bin has items of more than colors, the total size of the items packed in a bin is no more than the bin size, and the total size of the used bins is minimized. We present an asymptotic polynomial time approximation scheme for the case where the number of different item colors is bounded by a constant .
Abstract: We prove in this paper that every brace may be obtained, by means of edge additions and suitable vertex-splittings, from the two basic braces and . McCuaig showed that this may be achieved, staying within the realm of simple graphs, starting with three infinite families of braces. While our theorem may be deduced from McCuaig's, our proof is simpler.
Abstract: The total chromatic number is the least number of colours needed to colour the vertices and edges of a graph such that no incident or adjacent elements (vertices or edges) receive the same colour. This work determines the total chromatic number of grids, particular cases of partial grids, near-ladders, and of -dimensional cubes.
Abstract: Electronic contracts play a fundamental role in the life cycle of interorganizational business processes. Several works have been carried out with the goal of defining systematic techniques for the elaboration and maintenance of electronic contracts in the context of business process management systems. This technical report presents an overview of the main concepts and of some of the work carried out in this area.
Resumo: Este relatório técnico apresenta um modelo de suporte à avaliação formativa para Ambientes de Educação a Distância (ou Learning Management Systems). Este modelo está fundamentado nas recomendações para uma avaliação mais formativa que são apresentadas nos estudos de Hadji. O modelo proposto procura apoiar o mapeamento destas recomendações para o escopo da educação a distância, considerando-se a avaliação formativa de atividades de aprendizagem colaborativas e construcionistas. O modelo explora tecnologias computacionais para prover um suporte mais efetivo à avaliação formativa em duas pontas: por um lado, espera-se facilitar o planejamento de atividades de aprendizagem a serem avaliadas, bem como o registro de regulações providas pelos formadores para as participações nestas atividades. Por outro lado, espera-se reduzir a quantidade de informações a ser analisada, auxiliando o formador na recuperação e na análise de informações quantitativas e qualitativas relevantes às regulações das participações, de acordo com os critérios definidos no planejamento da avaliação de cada atividade de aprendizagem.
Abstract: This technical report presents a Formative Assessment Support Model for Learning Management Systems. This model is based on Hadji's recommendations for a more formative assessment. The proposed model aims to support the mapping of these recommendations to the distance education domain, considering the formative assessment of collaborative and constructionist learning activities. The model explores computational technologies to provide more effective support to formative assessment in two complementary ways: (1) by supporting the planning of the learning activities to be assessed, as well as the recording of the regulations provided by educators for the participations in these activities; (2) by reducing the amount of information to be analyzed, helping the educator in the retrieval and analysis of quantitative and qualitative information relevant to the regulation of participations, according to the criteria defined in the assessment planning of each learning activity.
Resumo: Uma das técnicas mais importantes de análise de genomas distintos é a comparação da ordem de seus blocos de genes. Rearranjos de Genomas são eventos mutacionais que movem grandes blocos de genes no interior do genoma. Nós estamos interessados em encontrar uma sequência mínima de eventos de rearranjo, tais como reversões e transposições, que transforme um genoma em outro.
Esse artigo é composto de duas partes. Na primeira parte, apresentamos as contribuições clássicas ao problema de rearranjo de genomas e discutimos algumas de suas características. Na segunda parte, nós reescrevemos alguns problemas e resultados importantes envolvendo eventos de rearranjo e apresentamos um novo formalismo algébrico baseado no trabalho de Meidanis e Dias. Nós provamos a equivalência entre problemas e alguns conceitos da teoria clássica e os da nova teoria algébrica. O objetivo desse texto é convencer o leitor de que o modelo algébrico para problemas de rearranjo de genomas é mais adequado e completo, uma vez que ele estende os resultados clássicos e nos permite modelar não apenas eventos de rearranjo como também eventos de recombinação.
Abstract: One of the most important techniques of analysis of distinct genome sequences is the comparison of the order of their blocks of genes. Genome Rearrangements are mutational events that move large blocks of genes in a genome. We are interested in finding a minimum sequence of rearrangement events such as reversals and transpositions that transform one genome into another.
This work is composed of two parts. In the first part, we present the classical contributions to the genome rearrangement problem and discuss some of their characteristics. In the second part, we restate some important problems and results involving rearrangement events and present a new algebraic formalism based on the work of Meidanis and Dias. We prove the equivalence between the problems and some concepts of the classical theory and those of the new algebraic theory. The objective of this work is to convince the reader that the algebraic model for genome rearrangement problems is more appropriate and complete, in the sense that it extends the classical results and allows us to model not only rearrangement events but also recombination events.
Abstract: Trimming procedures are an important part of the sequence analysis pipeline in an EST sequencing project. In general, trimming is done in several phases, each one detecting and removing some kind of undesirable artifact, such as low-quality sequence, vector or adapter sequences, and contamination. However, this strategy often results in a phase being unable to recognize its target because part of it was removed during a previous phase. To remedy this drawback, we propose a new strategy, in which each phase detects but does not remove its target, leaving this decision to a post-processing step that runs after all phases. Our tests show that this strategy can significantly improve the detection of artifacts.
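The detect-then-decide strategy can be sketched as below: every detector sees the full, untrimmed sequence, and removal happens only in a single post-processing pass. A minimal sketch assuming detectors that return half-open index intervals; not the paper's pipeline:

```python
def trim(sequence, detectors):
    """Detect-then-decide trimming: each detector inspects the FULL
    sequence and returns (start, end) intervals of its artifact; only
    afterwards are all marked intervals removed in one pass."""
    marked = []
    for detect in detectors:
        marked.extend(detect(sequence))     # detection never modifies input
    keep = [True] * len(sequence)
    for start, end in marked:               # half-open intervals [start, end)
        for i in range(start, end):
            keep[i] = False
    return "".join(ch for ch, k in zip(sequence, keep) if k)
```

For example, trimming "AAACGTTT" with one detector marking the leading run (0, 3) and another marking the trailing run (5, 8) keeps only "CG"; because neither detector alters the sequence, the second one still sees its target intact.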
Abstract: Curvilinear reformatting of MR images has been very useful in epilepsy research for detecting dysplastic lesions in the human brain. However, the method requires careful manual delineation and introduces artifacts. We present a fully automated approach that extracts an ``envelope'' of the brain and obtains surfaces of equal Euclidean distance to the envelope. The visualization of the voxel intensities on these isosurfaces replaces the previous procedure, eliminating its artifacts. The new approach was successfully evaluated on 40 controls and 10 patients with dysplastic lesions.
Abstract: In this paper we introduce two new WAB-based consensus algorithms for the crash-recovery model. The first one, B*-Consensus, is resilient to up to permanent faults, and can solve consensus in three communication steps. R*-Consensus, our second algorithm, is resilient, and can solve consensus in two communication steps. These algorithms are optimal with respect to the time complexity versus resilience tradeoff. We provide analytical and experimental evaluation of our algorithms, and compare them with Paxos, an optimal leader-election based consensus algorithm for the crash-recovery model.
Abstract: Business Process Management Systems are systems that support the execution of processes involving cooperation among organizations seeking to achieve a common business goal. These systems must provide a broad set of functionalities for the creation, execution, and monitoring of business processes. To ease the integration of the applications distributed among the organizations, the computational operations to be executed by the process activities are encapsulated within electronic services, among them Web services. This report presents an overview of some architectures of Business Process Management Systems based on electronic services, including some systems based specifically on Web services.
Resumo: Este relatório descreve os detalhes técnicos de um experimento via rede feito na Universidade Estadual de Campinas, SP, para estudar a sumarização de um conjunto de diálogos. Nossa intenção é apresentar a estratégia seguida quando da construção do website do experimento, bem como as questões estatísticas e técnicas envolvidas. Neste relatório não apresentamos os resultados do experimento.
Abstract: This document describes technical details of a web experiment carried out at the State University of Campinas, Brazil, on summarisation of a set of dialogues. Our intention here is to present the strategy we followed to build the experiment's website, as well as the statistical issues and the technicalities involved. In this report, we do not present the experiment's results.
Abstract: A hybrid system is given by a set of real-time reactive components whose dynamic behavior results from the interplay of continuous and discrete events. Hybrid systems have been modeled using formal methods, and results have been obtained using the accompanying computational tools. Some such formal methods and tools are briefly presented, emphasizing their different characteristics. The Hybrid Automata formalism is one such method. A hybrid automaton is a finite state automaton where each state is extended to contain a description of a system dynamic profile. Transitions between states model a change in the system dynamics, triggered by discrete events. In this work, hybrid automata are used to model realistic hybrid systems stemming from segments of a subway mesh and parts of an air traffic control system. The constructed models are verified with the support of computational tools. Moreover, some important model parameters are synthesized using the same tools. The verification certified that the systems operate safely. The synthesis sessions arrived at tighter values for some operational system parameters, while still guaranteeing safe operation.
Resumo: Clustering é uma forma de organizar dados por meio do agrupamento destes em conjuntos, de modo que os dados de um mesmo conjunto sejam mais similares entre si do que aos de outro conjunto, com base em algum critério pré-determinado. Neste relatório são apresentados os aspectos gerais que devem ser observados quando se pretende aplicar alguma técnica de clustering na resolução de um problema. Os aspectos gerais apresentados são: definição da forma de representação do conjunto de dados a serem agrupados, definição de uma medida adequada de semelhança entre os dados e definição de qual técnica de clustering utilizar para a construção dos clusters. Além disso, uma farta quantidade de referências bibliográficas é disponibilizada, permitindo ao leitor o aprofundamento em todos os tópicos presentes no texto.
Abstract: Clustering is a way of organizing data by grouping it into sets such that the elements of a set are more similar to each other than to the elements of other sets, according to some predetermined criterion. This report presents the general aspects that should be observed when applying a clustering technique to solve a problem: the choice of representation for the data to be grouped, the definition of a suitable similarity measure between data items, and the choice of the clustering technique used to build the clusters. In addition, an extensive set of bibliographic references is provided, allowing the reader to study in depth all topics presented in the text.
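As a toy illustration of the three aspects just listed (data representation, similarity measure, clustering technique), a single-linkage agglomerative procedure with a pluggable distance function can be sketched as follows; this example is not taken from the report:

```python
def agglomerative(points, distance, k):
    """Single-linkage agglomerative clustering: start with one cluster per
    point and repeatedly merge the two closest clusters (closest pair of
    members across clusters) until k clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(distance(a, b)
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters.pop(j))   # merge the closest pair
    return clusters

# Representation: 1-D numeric data; similarity: absolute difference.
data = [1.0, 1.1, 5.0, 5.2]
groups = agglomerative(data, lambda a, b: abs(a - b), k=2)
```

Swapping the `distance` argument changes the similarity measure without touching the technique, which is exactly the separation of concerns the report discusses.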
Abstract: A possible solution to the job shop scheduling problem is the use of genetic algorithms. Although research on the topic can be considered mature, it concentrates mainly on deterministic static problems. This work reports on the use of genetic algorithms for the dynamic job shop problem with processing time uncertainties, based on two approaches: guess and solve, and the decomposition of dynamic job shops into static instances. Guess and solve consists of making a controlled guess on the processing time of the jobs and solving the resulting deterministic scheduling problem with a suitable technique (in this report, genetic algorithms), while decomposition treats the dynamic job shop as a series of static problems. Processing time uncertainties prevent the direct use of the decomposition approach, so an adjustment to this technique is also proposed. Simulation results show that guess and solve with genetic algorithms, together with the proposed adjustment, is an adequate way to handle the dynamic job shop with processing time uncertainties.
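The guess-and-solve idea can be sketched independently of the genetic algorithm: replace each uncertain processing time by a controlled guess, then hand the resulting deterministic instance to any solver. The interval representation of uncertainty and the `slack` parameter are assumptions of this sketch, not the report's formulation:

```python
def guess_and_solve(uncertain_durations, solver, slack=0.2):
    """Guess-and-solve: each uncertain duration is given here as a
    (low, high) interval (an assumption for this sketch); the controlled
    guess interpolates inside the interval, and the deterministic
    instance is passed to the supplied solver."""
    guessed = [lo + slack * (hi - lo) for lo, hi in uncertain_durations]
    return solver(guessed)

# Deterministic solver for the sketch: shortest-processing-time order.
schedule = guess_and_solve(
    [(2, 4), (1, 5), (3, 3)],
    lambda d: sorted(range(len(d)), key=lambda j: d[j]))
```

With `slack=0.2` the guesses are 2.4, 1.8, and 3.0, so the resulting schedule is the job order `[1, 0, 2]`.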
Abstract: In the process of drilling oil wells, it is necessary to schedule resources, such as oil derricks and boats, in order to execute development activities that prepare promising wells for oil extraction. The scheduling of such activities must satisfy a variety of constraints and must attain some objective criteria, such as maximizing oil production. In this paper, we discuss a greedy randomized adaptive search procedure (GRASP) algorithm for the scheduling of oil well drilling activities. We describe our GRASP implementation in detail and compare the results obtained with results derived from a well-accepted constraint programming implementation. Computational experience on real instances of the problem indicates that the GRASP implementation is competitive, outperforming the constraint programming implementation.
Instituto de Computação :: Universidade Estadual de Campinas
Caixa Postal 6176 • Av. Albert Einstein, 1251 - Cidade Universitária • CEP 13083-970 • Campinas/SP - Brasil • Fone:  3521-5838