Abstract: We present the first automatic verification methods that generate invariants expressed as multivariate formal power series, together with a convergence analysis, for hybrid systems that exhibit highly non-linear models. As far as we know, this is the first approach that can deal with this type of system or automatically generate this type of invariant. We show that the preconditions for discrete transitions, the Lie derivatives for continuous evolution, and the newly introduced relaxed consecution requirements can all be viewed as morphisms and thus suitably represented by matrices. By doing so, we reduce the invariant generation problem to linear algebraic matrix systems, from which one can derive very effective methods for solving the original problem. Such methods have much lower time complexity than approaches based on Gröbner basis computations, quantifier elimination, cylindrical algebraic decomposition, directly solving non-linear systems, abstract operators, or even the more recent constraint-based approaches. More specifically, we give necessary and sufficient conditions that yield existence and completeness proofs for our formal power series invariant generation methods. We illustrate the efficiency of our computational methods on highly non-linear and complex hybrid systems.
Abstract: Classifying Remote Sensing Images (RSI) is a hard task. There are automatic approaches, but their results normally need to be revised. The identification and polygon extraction tasks usually rely on classification strategies that exploit visual aspects related to spectral and texture patterns identified in RSI regions. Many image descriptors have been proposed in the literature for content-based image retrieval purposes that can be useful for RSI classification. This paper presents a comparative study that evaluates the potential of successful color and texture image descriptors for remote sensing retrieval and classification. Seven descriptors that encode texture information and twelve color descriptors that can be used to encode spectral information were selected. We highlight their main characteristics and perform experiments to evaluate their effectiveness. To evaluate the descriptors in classification tasks, we also propose a methodology based on the KNN classifier. Experiments demonstrate that Joint Auto-Correlogram (JAC), Color Bitmap, Invariant Steerable Pyramid Decomposition (SID), and Quantized Compound Change Histogram (QCCH) yield the best results.
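As a hedged illustration of the KNN-based methodology mentioned above, a minimal classifier over descriptor feature vectors might look like the sketch below; the feature vectors, labels, and choice of k are illustrative, not the paper's setup.

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Majority vote among the k nearest training examples, using squared
    Euclidean distance on descriptor feature vectors. `train` is a list of
    (feature_vector, label) pairs; all values here are illustrative."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # sort training pairs by distance to the query and keep the k closest
    nearest = sorted(train, key=lambda pair: sqdist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# toy 2-D "descriptors" for two land-cover classes
train = [([0, 0], 'grass'), ([0, 1], 'grass'),
         ([5, 5], 'water'), ([5, 6], 'water'), ([6, 5], 'water')]
print(knn_classify(train, [0, 0.5]))  # prints grass
```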
Abstract: We consider the Art Gallery Problem (AGP), which aims to minimize the number of vertex guards required to monitor an art gallery whose boundary is a simple polygon. In this paper, we compile and extend our research on exact approaches for solving the AGP. In prior work, we proposed and tested an exact algorithm for the case of orthogonal polygons. In that algorithm, a discretization that approximates the polygon is used to formulate an instance of the Set Cover Problem, which is subsequently solved to optimality. Either the set of guards that characterizes this solution solves the original AGP instance, and the algorithm halts, or the discretization is refined and a new iteration begins. This procedure always converges to an optimal solution of the AGP and, moreover, the number of iterations executed depends heavily on how the polygon is discretized. Notwithstanding the best known theoretical bound on the number of iterations needed for convergence, our experiments show that an optimal solution is always found within a small number of them, even for random polygons of many hundreds of vertices. Herein, we broaden the family of polygon classes to which the algorithm applies by including non-orthogonal polygons. Furthermore, we propose new discretization strategies, leading to additional trade-off analyses of preprocessing vs. processing times and achieving, in the case of the novel Convex Vertices strategy, the most efficient overall performance so far. We report on experiments with both simple and orthogonal polygons of up to 2500 vertices showing that, in all cases, no more than 15 minutes are needed to reach an exact solution on a standard desktop computer. Ultimately, we more than doubled the size of the largest instances solved to optimality compared to our previous experiments, which were already five times larger than those previously reported in the literature.
Abstract: The number of articles in Wikipedia is quite accurately modeled as a function of time by two exponential regimes or phases, with a relatively sharp transition over a one-year period centered on January 2006. The first regime has a positive rate constant per day, corresponding to a doubling time of about 10.5 months. The second regime has a negative rate constant per day, corresponding to a halving time of about 4.5 years. The model predicts that the number of articles will tend to a finite limit of about 5.9 million. We advance some possible explanations and implications of the negative rate.
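As a hedged illustration of how a decaying growth rate yields a finite limit, the sketch below integrates a daily growth rate that halves on a fixed timescale (a Gompertz-style model). All numeric values are illustrative stand-ins chosen to reproduce the abstract's ~5.9 million limit, not the paper's fitted parameters.

```python
import math

def article_count(t_days, n0, r0, half_life_days):
    """Articles at time t when the daily growth rate itself decays
    exponentially: r(t) = r0 * 2**(-t / half_life_days). Integrating
    dN/dt = r(t) * N gives a Gompertz-style curve with the finite
    limit n0 * exp(r0 * half_life_days / ln 2)."""
    k = math.log(2) / half_life_days  # decay constant of the growth rate
    return n0 * math.exp((r0 / k) * (1.0 - math.exp(-k * t_days)))

def limiting_count(n0, r0, half_life_days):
    k = math.log(2) / half_life_days
    return n0 * math.exp(r0 / k)

# Illustrative numbers only: 1.5 M articles at the regime transition,
# growth rate decaying with a 4.5-year half-life; r0 is chosen so the
# predicted limit matches the abstract's ~5.9 million.
n0, half = 1.5e6, 4.5 * 365.25
r0 = math.log(5.9e6 / n0) * math.log(2) / half
print(round(limiting_count(n0, r0, half)))  # ~5.9 million
```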
Abstract: A supply chain comprises several different kinds of actors that interact either in an ad hoc fashion (e.g., a one-off deal) or in a previously well-planned way. In the latter case, how the interactions develop is described in contracts that are agreed upon before the interactions start. This agreement may involve several partners, so a multi-party contract is better suited than a set of bilateral contracts. If one is willing to negotiate such contracts automatically, an appropriate negotiation protocol must be at hand. However, protocols for bilateral contracts are not suitable for multi-party contracts; e.g., the way of achieving consensus when only two negotiators are haggling over some issue is quite different when several negotiators are involved. In the first case, a simple bargain suffices, but in the latter a ballot process is needed. This paper presents a negotiation protocol for electronic multi-party contracts that seamlessly combines several negotiation styles. It also elaborates on the main negotiation patterns the protocol allows for: bargain (for peer-to-peer negotiation), auction (when there is competition among the negotiators), and ballot (when the negotiation aims at consensus). Finally, it describes an implementation of this protocol based on Web services and built on the YAWL Workflow Management System.
Resumo: Different instructions are implemented in Application-Specific Instruction-Set Processors (ASIPs), motivated by the need to specialize the processor for a given software load on an embedded platform. For the design to be completed in feasible time, it is important to reduce the effort required to build software development tools and simulators for the new platform. To that end, simulators and other tools can be automatically synthesized from a description of the processor architecture. In this technical report we discuss the design of a complete, target-architecture-independent dynamic linking system, comprising the automatic generation of linkers: a compile-time linker and a loader for run-time linking. The system is, however, object-file-format dependent and adopts the ELF format. The loader was built specifically for this project, so we do not depend on the glibc loader. The main goal is easy retargeting to a target processor and its respective Application Binary Interface (ABI) rules. Such target-dependent information is extracted automatically from a model written in an Architecture Description Language (ADL). The study focuses on the ArchC ADL, and validation tests for the ARM architecture are presented, along with a brief analysis of experimental results for the simulator.
Abstract: Different instructions are implemented by Application-Specific Instruction-Set Processors (ASIPs), motivated by the need to specialize the processor for a known software load in an embedded platform. In order to meet time-to-market demands, it is critical to reduce the effort needed to build software development tools and simulators for the platform under development. To address this problem, simulators and other tools can be automatically generated from a processor architecture description. In this technical report we discuss the design of a complete, architecture-independent dynamic linking system: a linker for compile time and a loader for run time. The system is object-file-format dependent and relies on the flexible ELF format. Our loader is specifically designed for this project, so we do not depend on glibc's loader. The main purpose is to be easily retargetable to a target processor and its respective Application Binary Interface (ABI) rules. This target-specific information is extracted from an Architecture Description Language (ADL) model. Our study focuses on the ArchC ADL, and validation tests for the ARM architecture are presented.
Resumo: Consider two paths and the grid graph given by their Cartesian product. In this work, we deal with dominating sets in grids. A dominating set of a graph is a subset of its vertex set such that every vertex either belongs to the subset or is adjacent to an element of it. Determining the domination number, that is, the cardinality of a smallest possible dominating set, is known to be NP-hard for arbitrary graphs. However, this number is known for a subclass of grids. In this text, we provide an upper bound on the domination number of grids in further cases, as well as a polynomial algorithm that obtains dominating sets whose sizes attain this bound. Chang, Clark, and Hare obtained similar bounds, but the results presented here were developed independently of their work.
Abstract: The main purpose of this document is to define the mechanism's functionalities and detailed elements. Starting from a more general description of the mechanism to be implemented, this technical report presents the meta-communication instrument instantiated in Vila na Rede - an inclusive social network system developed in the context of the e-Cidadania Project.
Abstract: The conception and development of Vila na Rede are part of the e-Cidadania Project, which aims to study and propose solutions to challenges of interaction and user interface design in systems related to the practice of citizenship, contributing to the promotion of a digital culture in Brazilian society. This Technical Report presents a preliminary evaluation of the Vila na Rede system along three distinct dimensions: a) an evaluation of the interaction based on task analysis; b) an inspection of the usability and accessibility of its user interface; and c) an analysis of the first nine months of usage of the system's beta version. Some changes and evolutions motivated by this analysis are also presented and discussed.
Abstract: We describe a general method for query-by-example retrieval in image collections, using interval arithmetic to perform multi-scale distance estimation. The interval estimates are used to quickly eliminate candidate images at small scales, in a fashion similar to the branch-and-bound optimization paradigm. Experiments indicate that the method can provide significant speedup relative to exhaustive search; nevertheless, the method always returns the exact best match (and not merely an approximation thereof). The technique allows queries with a wide variety of image similarity functions, without the need to precompute or store specific descriptors for each function.
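A minimal sketch of the interval-based pruning idea described above, using 1-D "images" and per-block min/max summaries as the coarse scale; the SSD similarity, the block size, and the assumption that image length is a multiple of the block size are simplifications of this sketch, not the paper's descriptors. The interval bounds are guaranteed to contain the exact distance, so pruning never discards the true best match.

```python
def block_stats(img, block):
    """Per-block (min, max) summaries: the coarse scale.
    Assumes len(img) is a multiple of `block`."""
    return [(min(img[i:i + block]), max(img[i:i + block]))
            for i in range(0, len(img), block)]

def ssd(a, b):
    """Exact sum-of-squared-differences distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def ssd_interval(stats_a, stats_b, block):
    """Interval [lo, hi] guaranteed to contain the exact SSD, computed
    from the coarse per-block ranges alone (interval arithmetic)."""
    lo = hi = 0.0
    for (alo, ahi), (blo, bhi) in zip(stats_a, stats_b):
        gap = max(0.0, alo - bhi, blo - ahi)     # min possible |a - b|
        spread = max(ahi - blo, bhi - alo, 0.0)  # max possible |a - b|
        lo += block * gap ** 2
        hi += block * spread ** 2
    return lo, hi

def exact_best_match(query, images, block=4):
    """Branch-and-bound style search: prune candidates whose lower bound
    exceeds the best upper bound, then score survivors exactly."""
    qs = block_stats(query, block)
    bounds = [ssd_interval(qs, block_stats(im, block), block)
              for im in images]
    best_hi = min(hi for _, hi in bounds)
    survivors = [i for i, (lo, _) in enumerate(bounds) if lo <= best_hi]
    return min(survivors, key=lambda i: ssd(query, images[i]))
```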
Abstract: Model-based testing has been widely used for testing critical and reactive systems. Some aspects of reactive systems are treated by extended models that capture the notion of data flow in the system as well as its interactions with the environment. Other aspects are captured by conventional timed models that allow for the continuous evolution of time variables. Devising formal methods to automatically generate test suites for systems that comprise both aspects has remained a challenge. In this work we use a new formalism, Timed Input/Output Context Automata (TIOCA), as a formal model for timed systems with context variables. We propose and prove the correctness of a new strategy to discretize TIOCA models, thus obtaining related grid automata. Grid automata allow us to automatically generate test cases based on test purposes, the latter being special TIOCA that specifically model the system properties to be tested. We also discuss how to extract test suites from the synchronous product of a TIOCA and an associated test purpose. Further, we show how to use test suites to automatically verify whether given implementations conform to the specification and also reflect the desired properties.
Abstract: In Content-based Image Retrieval (CBIR), accurately ranking the returned images is of paramount importance, since it is common sense that users consider mostly the topmost results. The typical ranking strategy used by many CBIR systems is to employ image content descriptors, so that returned images that are most similar to the query image are placed higher in the rank. While this strategy is well accepted and widely used, improved results may be obtained by combining multiple image descriptors. In this paper we explore this idea and introduce algorithms that learn to combine information coming from different descriptors. The proposed learning-to-rank algorithms are based on three diverse learning techniques: Support Vector Machines (CBIR-SVM), Genetic Programming (CBIR-GP), and Association Rules (CBIR-AR). Eighteen image content descriptors (color, texture, and shape information) are used as input and provided as training to the learning algorithms. We performed a systematic evaluation involving two complex and heterogeneous image databases (Corel and Caltech) and two evaluation measures (Precision and MAP). The empirical results show that all learning algorithms provide significant gains when compared to the typical ranking strategy in which descriptors are used in isolation. We conclude that, in general, CBIR-AR and CBIR-GP outperform CBIR-SVM. A fine-grained analysis revealed a lack of correlation between the results provided by CBIR-AR and the results provided by the other two algorithms, which suggests the opportunity for an advantageous hybrid approach.
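The learned combiners above (SVM, GP, association rules) are beyond a short sketch, but the underlying idea of fusing rankings produced by multiple descriptors can be illustrated with a simple weighted Borda count; the function name and weights below are hypothetical, not part of the paper.

```python
def borda_fuse(rankings, weights=None):
    """Combine several descriptor rankings (lists of image ids, best
    first) into one fused ranking via a weighted Borda count: an image
    at position p in a ranking of n items earns weight * (n - p)."""
    weights = weights or [1.0] * len(rankings)
    scores = {}
    for rank, w in zip(rankings, weights):
        n = len(rank)
        for pos, img in enumerate(rank):
            scores[img] = scores.get(img, 0.0) + w * (n - pos)
    # highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# two descriptors disagree; the first (e.g. a color descriptor) is
# trusted twice as much as the second (e.g. a texture descriptor)
print(borda_fuse([['a', 'b', 'c'], ['b', 'a', 'c']], weights=[2.0, 1.0]))
```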
Resumo: This technical report contains the abstracts of 76 works presented at the 5th Workshop of Theses, Dissertations, and Ongoing Undergraduate Research of the Institute of Computing at UNICAMP (WTD-IC-UNICAMP 2009).
In 2009, the Workshop was part of the celebrations of the "40 years of Computing at UNICAMP", marking the creation by Unicamp, on March 18, 1969, of the first Bachelor's program in Computer Science in Brazil. In this special edition we had, in addition to the traditional oral presentations, a poster fair, with the aim of allowing a larger number of students to participate.
The Workshop, held from September 29 to October 1, 2009, allowed doctoral, master's, and undergraduate research students of the Institute to present the main aspects of their research.
Each abstract of a doctoral, master's, or undergraduate research work corresponds to an oral or a poster presentation, with the text limited to 4 pages. The publication of the abstracts in the form of a technical report aims to (i) disseminate the work in progress at IC and (ii) briefly record the state of the art of IC research in 2009.
Abstract: The Classification Problem is the problem of deciding whether a simple graph has chromatic index equal to Δ or Δ+1, where Δ is the maximum degree of the graph. It is known that deciding whether a graph has chromatic index equal to Δ is NP-complete. A split graph is a graph whose vertex set admits a partition into a stable set and a clique. The chromatic indices of some subsets of split graphs, such as split graphs with odd maximum degree and split-indifference graphs, are known. For the general class, however, the problem remains unsolved. In this paper we exhibit a new subset of split graphs with even maximum degree whose chromatic index equals Δ. Moreover, we present polynomial-time algorithms to perform an edge-coloring and to recognize these graphs.
Abstract: In this paper we put forward an abstract definition of computational model and provide a negative answer to a conjecture involving the converse of Rice's theorem. In particular, any reasonable formal system in which the notion of computation is involved should satisfy our general definition of computational model. Furthermore, in order to give a natural counter-example to our conjecture, we present a new interpretation of a result of Bernardi involving undecidability in sufficiently strong formal theories. Finally, we raise the question of whether an abstract theory of computational models can lead to new problems and interesting results in theoretical computer science.
Abstract: In this paper we introduce a new class of complexity measures and prove a general hierarchy theorem in computational complexity. In addition, we derive the traditional time and space hierarchy theorems as particular cases of our result.
Resumo: Transactional memory (TM) remains the most promising approach to replacing locks in concurrent programming, but software TM (STM) systems still lack satisfactory performance when compared to fine-grained lock implementations. The critical operation in TM systems is known to be ensuring the atomicity and isolation of concurrently executing threads, a task known as read/write-set validation. In an attempt to make this process as fast as possible, STM systems usually employ an ownership table for conflict detection, but this approach produces false positives, which result in false aborts. This work shows the real impact of false aborts and how their relevance grows as the number of concurrent threads increases, showing that this factor is essential for STM systems. We propose two different techniques to avoid false aborts and consequently improve STM performance. The first is a collision list attached to the existing hash table. The second is an entirely new and efficient technique for detecting conflicts between transactions. This new technique uses SSE technology to detect conflicts and replaces the commonly used hash table. Due to its memory mapping scheme, it avoids false positives. With fast, false-conflict-free collision detection, we achieved significant performance improvements in some programs of the STAMP benchmark, with speedups of up to 1.63x. We also show that the speedups grow as the number of parallel threads increases. Finally, based on the results obtained, we characterize the programs that benefited most from the two techniques.
Abstract: Transactional memory (TM) continues to be the most promising approach to replacing locks in concurrent programming, but TM systems based on software (STM) still lack the desired performance when compared to fine-grained lock implementations. It is known that the critical operation in TM systems is to ensure the atomicity and isolation of concurrently executing threads. This task is known as read/write-set validation. In an attempt to make this process as fast as possible, STM systems usually use ownership tables to perform conflict detection, but this approach generates false positive occurrences, which result in false aborts. This paper shows the real impact of false aborts and how their relevance increases with the number of concurrent threads, showing that this is an essential factor for TM systems. We propose two different techniques to avoid false aborts, consequently improving TM performance. The first is a collision list attached to the existing hash table. The second is a novel and efficient technique to detect conflicts between transactions. This technique makes use of SSE technology to detect conflicts and replaces the commonly used hash table. Due to its memory mapping scheme, the technique avoids false positive occurrences. With fast collision detection, free of false conflicts, we achieved significant performance improvements in some STAMP benchmark programs. The speedup obtained was up to 1.63x. We also show that speedups become higher as the number of parallel threads increases. Finally, based on the results obtained, we characterize which programs benefit most from the two techniques.
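A toy sketch of why an exact per-word bitmap avoids the false positives of a hashed ownership table: two transactions conflict only if the bitwise AND of their read/write signatures is nonzero. Plain Python integers stand in for the paper's SSE registers, and the bounded-address-region assumption is ours, not the paper's.

```python
class AddressSignature:
    """Records the memory words a transaction touches as bits in a
    bitmap indexed by the exact word index, so signature intersection
    is precise (no aliasing, hence no false aborts). The real technique
    compares such bitmaps in wide chunks with SSE instructions."""

    def __init__(self, region_words=1024):
        self.bits = 0
        self.region_words = region_words

    def add(self, word_index):
        # one bit per word; assumes accesses fall in a bounded region
        assert word_index < self.region_words, "bounded-region assumption"
        self.bits |= 1 << word_index

    def conflicts_with(self, other):
        # nonzero intersection means a genuine shared word
        return (self.bits & other.bits) != 0
```

With a hash table of, say, 64 buckets, words 3 and 67 would collide and trigger a false abort; here they set distinct bits and never conflict.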
Abstract: Devising formal techniques and methods that can automatically generate test case suites for timed systems has remained a challenge. In this work we use Timed Input/Output Automata (TIOA) as a formal specification model for timed systems. We propose and prove the correctness of a new and more general discretization method that can be used to obtain grid automata corresponding to specification TIOA, using almost any granularity of interest. We also show how test purposes, which model specific system properties, can be used together with the specification TIOA in order to generate grid automata that capture the behavior of both the specification and the test purpose. From such grid automata one can then algorithmically extract test suites that can be used to verify whether given implementations conform to the specification and reflect the desired properties.
Abstract: The problem of parsimonious phylogeny reconstruction using genome rearrangements is called the Multiple Genome Rearrangement Problem. There are two usual approaches: the small phylogeny problem, where a tree is given and we want to find the ancestral nodes that minimize the total branch length of the tree; and the big phylogeny problem, where both the tree and the ancestral nodes with minimum total branch length must be found. In this paper we show that, under the Single-Cut-or-Join (SCJ) metric, the small phylogeny problem is solvable in polynomial time while the big phylogeny problem is NP-hard.
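Why the small phylogeny problem becomes polynomial under SCJ can be illustrated with Fitch's classic algorithm applied to one gene adjacency at a time, since SCJ scores adjacencies independently; the tree encoding and the fixed root name below are assumptions of this sketch, not the paper's formulation.

```python
def fitch_small_phylogeny(tree, leaf_states, root='root'):
    """Minimum number of state changes for one binary character (e.g.
    presence/absence of a single gene adjacency under SCJ) on a rooted
    binary tree given as {internal_node: (left_child, right_child)}.
    Running this per adjacency and summing gives the SCJ small
    phylogeny cost in polynomial time."""
    def fitch(node):
        if node in leaf_states:               # leaf: fixed state
            return {leaf_states[node]}, 0
        left, right = tree[node]
        sl, cl = fitch(left)
        sr, cr = fitch(right)
        inter = sl & sr
        if inter:                             # children agree: no change
            return inter, cl + cr
        return sl | sr, cl + cr + 1           # disagreement: one change
    _, changes = fitch(root)
    return changes
```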
Abstract: Finite element bases defined by sampling points were used by J. Lehtinen in 2008 for the efficient computation of global illumination in virtual scenes. The bases provide smooth approximations for the radiosity and spontaneous emission functions, leading to a discrete version of Kajiya's rendering equation. Unlike methods that are based on surface subdivision, Lehtinen's method can cope with arbitrarily complex geometries. In this paper we present an experimental validation of Lehtinen's meshless method by comparing its results with an independent numerical solution of the rendering equation on a simple three-dimensional scene. We also compare Lehtinen's special finite-element basis with two other similar bases that are often used for meshless data interpolation, namely a radial basis with a Gaussian mother function, and Shepard's inverse-square-distance weighted interpolation. The results confirm the superiority of Lehtinen's basis and clarify why the other two bases provide inferior-looking results.
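Of the compared bases, Shepard's inverse-square-distance weighting is simple enough to sketch; the 1-D form and the small eps regularizer below are simplifications for illustration, not the paper's three-dimensional setup.

```python
def shepard_interpolate(points, values, x, eps=1e-12):
    """Shepard inverse-square-distance weighted interpolation of
    scattered 1-D samples: w_i = 1 / (|x - p_i|**2 + eps). The result
    is a weighted average, so it interpolates the samples (up to eps)
    and stays within their range."""
    weights = [1.0 / ((x - p) ** 2 + eps) for p in points]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

# two samples; halfway between them the estimate is their average
print(shepard_interpolate([0.0, 1.0], [0.0, 1.0], 0.5))  # prints 0.5
```

The flat-spot behavior near sample points (zero derivative at each sample) is one reason such bases can look inferior to a purpose-built finite-element basis.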
Resumo: Designing systems "for all" is not a trivial task for user interface (UI) designers, since numerous aspects must be considered for ICTs (Information and Communication Technologies) to make sense and be usable by citizens. Extending the concept of user-centered design so that a significant portion of potential users could weigh in on interface design would imply the participation of hundreds or thousands of people in the process. This work proposes the use of human competence to help develop interfaces accessible to all by means of Games With A Purpose (GWAP). We propose to involve users in the construction of knowledge, making them co-participants in the design process. Initial analyses suggest the potential of games with a purpose to help UI designers make informed choices about design elements of systems that must be for all.
Abstract: The design of systems for all is not a trivial task for user interface (UI) designers. Many aspects must be considered to enable ICTs (Information and Communication Technologies) to make sense and be used by citizens. Extending the concept of user-centered design to a meaningful portion of potential users would involve the participation of hundreds or thousands of people in this process. This paper proposes the use of human expertise to assist in the development of accessible interfaces for all, by using the Games With A Purpose (GWAP) approach. We propose to bring users into the design process, contributing to their construction of knowledge and co-participation. Initial analyses suggest the potential of using games with a purpose to help designers make informed choices of UI elements in the design of systems for all.
Abstract: A graph is called well covered if every two of its maximal independent sets have the same number of vertices. In this paper we use modular and primeval decomposition techniques to decide well-coveredness of graphs such that either all their p-connected components are separable, or they belong to well-known classes of graphs that, in some local sense, contain few P4's. In particular, we consider the classes of cographs, P4-reducible, P4-sparse, extended P4-reducible, extended P4-sparse, P4-extendible, P4-lite, and P4-tidy graphs.
Abstract: Reuse of program parts through cloning and adaptation is a common practice in software development. However, it creates problems later, during system maintenance. This report describes the main ideas behind a tool developed to assist in the identification of software clones, based on techniques used to detect program plagiarism.
Abstract: A Product Line Architecture (PLA) should remain stable while accommodating evolutionary changes of stakeholders' requirements. Otherwise, architectural modifications may have to be propagated to products of a product line, thereby increasing maintenance costs. Hence, it is important to understand which techniques better cope with PLA stability through evolution. This paper presents a comparative study to evaluate the positive and negative change impact on PLA designs based on components and aspects. The objective of the evaluation is to assess when aspects and components promote PLA stability in the presence of various types of change. To support a broader analysis, we compare the stability of the joint application of components and aspects in a PLA design against the isolated use of aspect-oriented, object-oriented, and component-based design techniques. The results show that the combination of aspects and components tends to promote superior PLA resilience compared with the other designs in most circumstances.
Abstract: The estimation of camera motion is one of the most important aspects of video processing, analysis, indexing, and retrieval. Most existing techniques to estimate camera motion are based on optical flow methods in the uncompressed domain. However, decoding and analyzing a video sequence is extremely time-consuming. Since video data are usually available in MPEG-compressed form, it is desirable to process the video material directly, without decoding. In this paper, we present a novel approach for estimating camera motion in MPEG video sequences. Our technique relies on linear combinations of optical flow models. The proposed method first creates prototypes of optical flow, and then performs a linear decomposition of the MPEG motion vectors, which is used to estimate the camera parameters. Experiments on synthesized and real-world video clips show that our technique is more effective than state-of-the-art approaches for estimating camera motion in MPEG video sequences.
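The linear decomposition step can be sketched as an ordinary least-squares fit of coefficients onto flow prototypes via the normal equations; the toy pan-field prototypes below are illustrative stand-ins, not the paper's optical-flow models.

```python
def fit_flow_coefficients(prototypes, observed):
    """Least-squares coefficients c minimizing
    ||observed - sum_k c_k * prototype_k||^2, where each prototype and
    the observed field are flat lists of motion-vector components.
    Solves the k-by-k normal equations by Gaussian elimination."""
    k = len(prototypes)
    A = [[sum(p * q for p, q in zip(prototypes[i], prototypes[j]))
          for j in range(k)] for i in range(k)]
    b = [sum(p * o for p, o in zip(prototypes[i], observed))
         for i in range(k)]
    # forward elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # back substitution
    x = [0.0] * k
    for r in range(k - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c]
                           for c in range(r + 1, k))) / A[r][r]
    return x

# toy prototypes: horizontal pan and vertical pan over two blocks,
# components interleaved as (dx, dy, dx, dy)
pan_x, pan_y = [1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]
coeffs = fit_flow_coefficients([pan_x, pan_y], [2.0, 3.0, 2.0, 3.0])
print(coeffs)  # recovers the mixture [2.0, 3.0]
```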
Abstract: The project Deployment and Assessment of an Image Annotation and Retrieval Tool has the objective of specifying and implementing an application to support image annotation and search (based on textual and visual descriptions) in the biodiversity domain. This technical report presents the activities related to the use of the tablet PC tool in the parasitology domain at Unicamp. The objective of this tool is to support the comparison of morphological characteristics among different species. The report covers the activities accomplished and the application setup and specific features, followed by experimental results and conclusions. Preliminary results showed that students regarded the tool as very useful, contributing as an alternative learning approach.
Abstract: This technical report presents the results of an experiment whose objective was to observe users’ interactions with television using the remote control. We believe that a culture mediated by the new media of Interactive Digital Television (iDTV) will depend directly on the artifact for physical interaction available to the user. In the work reported here we made an in loco observation of barriers related to the use of remote control in the television context and identified project requirements that should be considered in the design of new physical artifacts of interaction with iDTV.
Abstract: In this work we describe a general algorithm to find a finite-element basis with minimum total support for an arbitrary spline space, given any basis for that same space. The running time is exponential in the worst case, but much lower for many cases of practical interest, as a function of the number of mesh cells and the dimension of the spline space.
Abstract: Science is a competitive business, and researchers from different fields are constantly compared in terms of funding and for promotion evaluations. This article analyzes the quantity of computer scientists' work which is not duly recognized when using Web of Science and/or Scopus bibliometrics. We randomly selected CS faculty from highly-ranked US Computer Science departments and determined that, on average, 67% of their published work is not accounted for within Web of Science searches and 44% is absent when searching with Scopus. We defined this parameter as the invisible work rate. We compared these figures to similar samples from Mathematics, Physics, and Electrical Engineering faculty. While CS and EE have significantly distinct publishing patterns as compared to Mathematics and Physics, the variance of the invisible work rate for CS is high, which suggests that different subareas of CS also have different publication practices - a possible indication that it will be difficult for computer scientists to agree on one bibliometric evaluation criterion.
Abstract: The K_r-packing problem asks for the maximum number of pairwise vertex-disjoint cliques of size r in a graph, for a fixed integer r. This problem is NP-hard for general graphs when r >= 3, and even when restricted to chordal graphs. However, Guruswami et al. proposed a polynomial-time algorithm for cographs (for fixed r). In this work we extend this algorithm to P4-tidy graphs, keeping the same time complexity.
Resumo: There is a large amount of information on the Web about geographic entities and great interest in locating it on maps. However, Web search engines still do not support, in a single tool, searches involving spatial relationships, since queries are generally processed taking into account only the keywords they contain. This article briefly reviews the Geographic Information Retrieval (GIR) area and revisits its research challenges and opportunities, based on a proposed architecture for Web searches involving spatial relationships between geographic entities and an initial implementation of it.
Abstract: Geographic information is part of people's daily life. There is a huge amount of information on the Web about or related to geographic entities, and people are interested in locating them on maps. Nevertheless, conventional Web search engines, which are keyword-driven mechanisms, do not support queries involving spatial relationships between geographic entities. This paper reviews the Geographic Information Retrieval (GIR) area and restates its research challenges and opportunities, based on a proposed architecture for executing Web queries involving spatial relationships and an initial implementation of it.
Resumo: Software development has been facing growing challenges in terms of reducing costs, effort, and time to market, together with increases in the complexity and size of software products. To meet these demands, new approaches that favor the reuse of software artifacts have been proposed, such as Product Line Engineering. Product line engineering consists of designing, implementing, and evolving a set of products with a high degree of similarity among them, in a prescriptive way, from a set of core assets. Companies that develop software products in a given application domain can obtain significant gains in terms of effort and cost reduction by using the product line approach, developing several similar products at the same time instead of focusing on the development of a single product at a time.
Domain Engineering is one of the basic foundations of product line engineering. Domain Analysis, which is one of the activities of domain engineering, consists of identifying the objects and operations of a set of similar systems in a specific domain. Several domain engineering and product line methods can be found in the literature; however, most of them address establishing a product line from scratch, rather than from existing components and component-based systems. Today many companies already use component-based development, which facilitates establishing a product line from existing artifacts.
This report presents a study on the adoption of a product line in a software development company that develops products with a high degree of similarity among them and that already uses the component-based development paradigm. As an initial step toward creating a product line, a domain analysis was carried out, aiming to identify and model the similarities among the products developed by the company. The method chosen for the domain analysis was FODA, and the tool used to support domain modeling was Odyssey. The goal of this study is to gain familiarity with domain analysis methods and tools, seeking to identify possible problems and point out solutions for building a software product line based on pre-existing components.
The case study presented was carried out at a company in the financial applications area, in January 2006.
Abstract: The Inclusive Social Network (ISN) Vila na Rede is being developed in a joint effort with community leaders, end users, and researchers from diverse fields of knowledge. The features of this ISN are being designed to support the needs of these social groups in a way that makes sense to them and is part of their reality. In order to better understand this reality, the 6th Semio-Participatory Workshop of the e-Cidadania Project, "System and Methods for the Constitution of a Culture mediated by Information and Communication Technology", was held, and one of its activities investigated communication mechanisms for social network systems. This technical report presents how the workshop was conducted, the data collected from the activities, and the analysis of that data. As a result, initial requirements for online conversation on inclusive social networks are presented.
Abstract: e-Cidadania is a research Project inspired by one of the current grand challenges of Computer Science research in Brazil for the coming years: the Participative and Universal Access to Knowledge for the Brazilian Citizen. This Project investigates the potential of inclusive social network systems as mediators in the constitution of a digital culture among the digitally illiterate. This work builds on Participatory Design (PD) techniques adapted to an inclusive setting to conduct the 3rd Semio-Participatory Workshop of the e-Cidadania Project and to analyze its achievements. The Workshop aimed at the design of user interface elements for an inclusive social network system. This research report illustrates the participatory practice adapted to an inclusive scenario, presents the main achievements of the Workshop, and discusses results that inform the UI design.
Abstract: Vila na Rede is an Inclusive Social Network built for and with Brazilian citizens. It was conceived after the research and analysis carried out throughout the first three Semio-Participatory Workshops conducted at Vila União, a neighborhood of low-income families located in Campinas, SP. These workshops, as well as Vila na Rede itself, are part of the e-Cidadania Project, which aims to study and propose solutions to the challenges of interaction and user interface design in systems related to the exercise of citizenship, contributing to the promotion of a digital culture. This Technical Report describes the activities of e-Cidadania's 4th Workshop, from its preparation to the examination of the collected results. The 4th Workshop's main objective was the launch of the Vila na Rede system.
Abstract: The well-known drawbacks imposed by lock-based synchronization have forced researchers to devise new alternatives for concurrent execution, of which transactional memory is a promising one. Extensive research has been carried out on Software Transactional Memory (STM), mostly concentrated on program performance, leaving unattended other metrics of great importance, such as energy consumption. This paper presents a systematic methodology to characterize a class of STMs that implement a common set of API calls. We thoroughly evaluate the impact of STM on energy consumption by quantifying the energy costs of the primitives involved in an optimistic time-based STM implementation. This work is a first study towards a better understanding of the energy consumption behavior of STM systems, and could prompt STM designers to research new optimizations in this area, paving the way for energy-aware transactional memory.
Abstract: Most existing techniques for estimating camera motion are based on analysis of the optical flow. However, optical flow estimation tolerates only a limited amount of scene motion. In this report, we present a novel approach to estimating camera motion based on analysis of local invariant features. Such features are robust across a substantial range of affine distortion. Experiments on synthesized video clips in a fully controlled environment show that our technique is more effective than optical flow-based approaches for estimating camera motion in the presence of a large amount of scene motion.
Abstract: Separation of concerns is one of the overarching goals of exception handling, keeping the normal and exceptional behaviour of a software system separate. In the context of software product lines, this separation of concerns is important for designing software variability related to different exception handling strategies. This technical report presents a tool-supported solution for designing, verifying, and implementing exceptional behaviour variability in product line architectures. Our solution is based on: (i) the adoption of an exception handling model that supports explicit exception control flows and pluggable handlers; (ii) a strategy for designing and automatically verifying the selection of variation points related to exception control flows and handlers; and (iii) an aspect-oriented implementation of exceptional behaviour variability. We evaluate our solution qualitatively and quantitatively through a case study targeting a real mobile application.
Abstract: This technical report considers the representation of different software fault tolerance techniques as a product line architecture (PLA) for promoting the reuse of software artefacts, such as formal specifications and verifications. The proposed PLA makes it possible to specify a series of closely related applications in terms of a single architecture, which is obtained by identifying variation points associated with design decisions regarding software fault tolerance. These decisions are used to choose the appropriate technique depending on the features selected for the instance, e.g., the number of redundant resources or the type of adjudicator. The proposed approach also comprises the formalisation of the PLA, using the B-Method and CSP, to systematise the verification of fault-tolerant software systems at the architectural level. The properties verified cover two complementary contexts: the selection of the correct architectural variabilities for instantiating the PLA, and the properties of the chosen fault tolerance techniques.
Abstract: In this paper we propose a Lagrangian relaxation framework to solve the vertex separator problem (VSP). This framework is based on the development of relax-and-cut algorithms which embed the separation of valid inequalities for the VSP (discussed in ) in the subgradient method. These relax-and-cut algorithms are then used as a preprocessing phase in a hybrid algorithm which combines them with the branch-and-cut algorithms proposed in . This is done basically by feeding the branch-and-cut algorithms not only with the primal bound but also with the cuts separated during the preprocessing phase. Computational results obtained with benchmarks from the literature show that the hybrid algorithm developed here outperforms the best exact algorithm available for the VSP to date.
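The subgradient machinery at the core of such relax-and-cut schemes can be sketched on a toy problem. The example below applies it to the Lagrangian dual of a tiny 0/1 knapsack rather than the VSP (all names are illustrative; the report's algorithms additionally separate cuts inside this loop):

```python
def knapsack_lagrangian(c, a, b, iters=200):
    """Subgradient method on the Lagrangian dual of a 0/1 knapsack:
    maximize c.x subject to a.x <= b, x in {0, 1}^n.
    Relaxing the capacity constraint with a multiplier lam >= 0 gives
    L(lam) = lam*b + sum_i max(0, c_i - lam*a_i), a valid upper bound;
    the dual minimizes L(lam) over lam >= 0."""
    lam, best = 0.0, float("inf")
    for k in range(1, iters + 1):
        # Inner maximization: pick item i iff its reduced profit is positive.
        x = [1 if ci - lam * ai > 0 else 0 for ci, ai in zip(c, a)]
        bound = lam * b + sum(max(0.0, ci - lam * ai) for ci, ai in zip(c, a))
        best = min(best, bound)
        g = b - sum(ai * xi for ai, xi in zip(a, x))  # subgradient of L at lam
        lam = max(0.0, lam - (1.0 / k) * g)           # diminishing step size
    return best

# Toy instance: items (40,4), (30,3), (20,5), capacity 7; optimum value 70.
bound = knapsack_lagrangian([40, 30, 20], [4, 3, 5], 7)
```

On this instance the dual bound coincides with the optimum (70), which is not true in general; the gap between the two is exactly what the cuts separated in a relax-and-cut algorithm try to close.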
Resumo: Plagiarism in student assignments is a problem that has been growing over time, and educational institutions face considerable work to confront it. In this work, we examine the problem from the standpoint of plagiarism in programming assignments. We implemented some of the algorithms described in order to perform a direct and qualitative comparison, with emphasis on the program pre-processing phase. Our main conclusion is that pre-processing may be even more important than the comparison algorithm itself, and we point to new directions for future work.
Abstract: Plagiarism in student coursework has become increasingly common, and significant effort has been undertaken to address this problem. In this work we focus on plagiarism in computer programs. We implemented some of the algorithms we discuss so that we could perform a direct and qualitative comparison, with emphasis on the program pre-processing phase. Our main conclusion is that pre-processing may be more important than the comparison algorithm itself, and we point to new directions for future work.
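The kind of pre-processing the conclusion refers to can be sketched as follows. This toy pass (illustrative only, not any specific algorithm from the report) strips comments and maps every identifier to a canonical token, so that renaming variables no longer disguises a copied program:

```python
import re

KEYWORDS = {"def", "return", "if", "else", "for", "while", "in"}

def normalize(source):
    """Toy pre-processing for program plagiarism detection: drop
    comments, tokenize, and rename identifiers canonically (ID0, ID1,
    ...) in order of first appearance."""
    source = re.sub(r"#.*", "", source)  # strip line comments
    names, out = {}, []
    for tok in re.findall(r"[A-Za-z_]\w*|\S", source):
        if re.fullmatch(r"[A-Za-z_]\w*", tok) and tok not in KEYWORDS:
            tok = names.setdefault(tok, f"ID{len(names)}")
        out.append(tok)
    return " ".join(out)

# Two programs that differ only in identifier names and comments
# normalize to the same token stream.
a = "def soma(x, y):\n    return x + y  # adds\n"
b = "def add(a, b):\n    return a + b\n"
```

After this pass, any comparison algorithm (string matching, fingerprinting, etc.) operates on the normalized streams, which is why the pre-processing choice can dominate the overall result.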
Abstract: We present a generalized test case generation method, or G-method, extending some previous work. Although inspired by the W-method, the G-method, in contrast, allows for test suite generation even in the absence of characterization sets for the specification models. Instead, the G-method relies on knowledge about the index of certain equivalences induced in the implementation models. We show that the W-method can be derived from the G-method as a particular case. Moreover, we discuss some naturally occurring infinite classes of FSM models over which the G-method generates test suites that are exponentially more compact than those produced by the W-method. Proofs of the important results are presented in detail.
Abstract: We prove that every 4-Pfaffian graph that is not Pfaffian essentially has a unique signature matrix. We also give a simple composition theorem for r-Pfaffian graphs from Pfaffian spanning subgraphs. We apply these results and exhibit a graph that is 6-Pfaffian but not 4-Pfaffian. This is a counter-example to a conjecture of Norine, which states that if a graph is k-Pfaffian but not (k-1)-Pfaffian then k is a power of four.
Abstract: The objective of this paper is to discuss the new demands imposed on museums, the possibilities of meeting them using new Information and Communication Technologies (ICTs), and to propose a platform that fulfills these requirements. The platform will provide services to the external public, the museum staff, and researchers. The artifacts to be collected and interpreted will belong to Brazilian indigenous culture.
Abstract: The existence of digital artefacts commonly used to interact with today's television systems does not guarantee that those devices are adequate to mediate the use of Interactive Digital Television (iDTV) and its new types of applications. This is especially true in the context of developing countries, where iDTV is a promising way to cope with the digital divide. This technical report reviews the state of the art of physical artefacts of interaction with iDTV. In addition, it presents a gesture-based interaction model for iDTV that could inform the design of a new interaction language based on gestures and physical artefacts of interaction with iDTV.
Abstract: The effective use of a specification language depends on the availability of predefined specifications. Although the CASL specification language has such a library, that is not the case for HasCASL, one of CASL's extensions. Here we begin to specify such a library for the HasCASL language, based on the Prelude library of the Haskell programming language. When completed, this approach will create a library that, after refinements, should lead to reusable specifications for real Haskell programs. This technical report discusses the specification and verification of a kernel library for the HasCASL language.
Abstract: This paper repeats part of the analysis performed in the 1995 paper ``Experimental evaluation in computer science: A quantitative study'' by Tichy and collaborators, for 147 papers randomly selected from the ACM, published in the year 2005. The papers published in 2005 are classified in the following way: 4% theory, 17% empirical, 4.7% hypothesis testing, 3.4% other, and 70% design and modeling (using the categories of the 1995 paper). Within the design and modeling class, 33% of the papers have no evaluation. The figures for the 2005 sample are very similar to the original figures for the 1995 sample, which shows that Computer Science research has not significantly increased its empirical or experimental component.
Abstract: An edge-coloring of a graph is an assignment of colors to its edges such that no two adjacent edges have the same color. A split graph is a graph whose vertex set admits a partition into a stable set and a clique. Split graphs were introduced by Földes and Hammer and form a well-studied class of graphs. However, the problem of determining the chromatic index of an arbitrary split graph remains open. Chen, Fu, and Ko used a latin square to color any split graph with odd maximum degree. In this work, we also use a latin square to color some split graphs with even maximum degree, and we show that these graphs are Class 1.
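The connection between latin squares and edge-colorings can be illustrated with the classical construction for complete graphs (an illustrative sketch only, not the authors' split-graph construction): the commutative latin square L[i][j] = (i + j) mod n properly colors the edges of K_n.

```python
from itertools import combinations

def latin_square_coloring(n):
    """Color edge {i, j} of the complete graph K_n with entry
    (i + j) mod n of the commutative latin square L[i][j] = (i + j) mod n.
    Two edges at a common vertex i get colors i+j and i+k, which differ
    whenever j != k, so the coloring is proper."""
    return {frozenset((i, j)): (i + j) % n
            for i, j in combinations(range(n), 2)}

def is_proper(coloring):
    # No two edges sharing a vertex may receive the same color.
    for e, f in combinations(coloring, 2):
        if e & f and coloring[e] == coloring[f]:
            return False
    return True
```

Latin-square colorings of split graphs refine this idea: the clique part behaves like a complete graph colored by the square, and the remaining edges must be fitted into the unused colors.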
Instituto de Computação :: Universidade Estadual de Campinas
Caixa Postal 6176 • Av. Albert Einstein, 1251 - Cidade Universitária • CEP 13083-970 • Campinas/SP - Brasil • Fone:  3521-5838