Abstract: Transaction scheduling techniques have been used to reduce aborts by preventing conflicting transactions from running in parallel, while improving core usage. Traditional scheduling techniques are based on the operating system's preemptive scheduler, and thus have little or no control over choosing the best transactions to execute. In this paper, we propose LUTS, a Lightweight User-Level Transaction Scheduler. Unlike other techniques, which serialize conflicting transactions, LUTS provides the means to select another transaction to run in parallel, thus improving system throughput. Moreover, it avoids most of the issues caused by pseudo-parallelism, as it launches only as many system-level threads as there are available processor cores. This is achieved by maintaining a set of execution context records (ECRs), which encapsulate the state of the threads. ECRs are inserted into the scheduler queue and selected for execution according to their conflict history with the running transactions. We discuss the design of LUTS and present a prototype implementation integrated with a state-of-the-art STM, tinySTM. Experimental results, conducted with the STAMP benchmark suite, show LUTS's efficiency when running high-contention applications and its stability as the number of threads increases. LUTS achieved better performance than existing techniques in every STAMP benchmark program, executing programs up to 1.69 times faster.
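The core dispatching idea described above (pick a queued transaction whose conflict history does not intersect the running set, rather than serializing) can be sketched as follows. This is a minimal illustration, not LUTS itself: the class and method names are hypothetical, and the real scheduler manages user-level execution contexts rather than plain transaction ids.

```python
from collections import deque

class ConflictAwareScheduler:
    """Sketch of conflict-aware dispatch: pending transactions wait in a
    queue, and the dispatcher picks the first one whose recorded conflict
    history does not intersect the currently running set."""

    def __init__(self):
        self.queue = deque()      # pending transaction ids (stand-ins for ECRs)
        self.running = set()      # transactions currently executing
        self.conflicts = {}       # tx id -> set of tx ids it has aborted with

    def record_conflict(self, a, b):
        self.conflicts.setdefault(a, set()).add(b)
        self.conflicts.setdefault(b, set()).add(a)

    def submit(self, tx):
        self.queue.append(tx)

    def pick_next(self):
        """Return the first queued transaction with no known conflict
        against a running one; fall back to plain FIFO if all conflict."""
        for tx in list(self.queue):
            if not (self.conflicts.get(tx, set()) & self.running):
                self.queue.remove(tx)
                self.running.add(tx)
                return tx
        if self.queue:            # everything conflicts: FIFO fallback
            tx = self.queue.popleft()
            self.running.add(tx)
            return tx
        return None
```

With "t1" running and a recorded t1/t2 conflict, the dispatcher skips "t2" and runs "t3" first, which is exactly the throughput argument the abstract makes against plain serialization.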
Abstract: The Crisis, Tragedy and Recovery Network (CTRnet) project is an effort to build an integrated distributed digital library providing a rich suite of CTR-related services. This report describes an independent study conducted at Virginia Polytechnic Institute and State University that collected and archived information related to the Haiti earthquake, later used to explore content-based image retrieval (CBIR) concepts. The objective was to collect and categorize relevant pictures of the earthquake and then explore practical CBIR concepts, such as descriptors, feature vectors, and experiment design.
Abstract: PQ trees were introduced by Booth and Lueker in 1976. A PQ tree is a data structure that represents the permutations of a set of elements in which certain subsets occur consecutively. Applications include the recognition of interval graphs and planar graphs, and problems involving DNA molecules.
Despite their great popularity in theoretical work, PQ trees are hard to implement. In this work, we continue the development of a more general structure, the PQR tree, introduced by Meidanis, Porto, and Telles, which has good prospects of becoming an easier-to-implement alternative, besides providing more information about the problem.
Here we detail an incremental (online) algorithm for building PQR trees and prove the correctness of each operation.
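The defining property above — the set of permutations in which given subsets occur consecutively — can be checked by brute force on tiny inputs, which is useful for testing a PQ/PQR tree implementation against ground truth. The function name is illustrative; a real tree represents this family implicitly instead of enumerating it.

```python
from itertools import permutations

def consecutive_permutations(universe, subsets):
    """All permutations of `universe` in which every set in `subsets`
    occupies consecutive positions -- exactly the permutation family a
    PQ/PQR tree represents (brute force, only viable for tiny inputs)."""
    def ok(perm):
        pos = {x: i for i, x in enumerate(perm)}
        for s in subsets:
            idx = sorted(pos[x] for x in s)
            if idx[-1] - idx[0] != len(s) - 1:   # not a contiguous block
                return False
        return True
    return [p for p in permutations(universe) if ok(p)]
```

For example, over {a, b, c, d} with {a, b} constrained, 12 of the 24 permutations survive; adding the constraint {a, b, c} cuts this to 8.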
Abstract: Previous studies report interaction problems with the use of the OLPC educational laptop. Evaluation activities were carried out and showed that not only children but also adults find it difficult to use the laptops' hardware and software. However, the traditional methods employed report problems at a low level of detail and do not investigate possible solutions. This paper presents an evaluation activity of the XO laptop using an approach that adopts Maeda's Laws of Simplicity. The results reveal design problems in the conception of both the hardware and the software of the operating system offered by OLPC, and suggestions to solve them are given.
Abstract: Previous studies have reported interaction problems with OLPC educational laptops. Evaluation activities were conducted and showed that not only children but also adults find it difficult to use the hardware and software of those laptops. However, the traditional methods used report problems at a low level of detail and do not investigate possible solutions. This paper presents an evaluation activity of the XO laptop using an approach based on the Laws of Simplicity proposed by Maeda. The results reveal design problems in both the hardware and the software of the operating system offered by OLPC, and we offer suggestions to solve them.
Abstract: The total chromatic number is the least number of colours needed to colour the vertices and edges of a graph G such that no incident or adjacent elements (vertices or edges) receive the same colour. It is known that the problem of determining the total chromatic number is NP-hard, and it remains NP-hard even for cubic bipartite graphs. Snarks are simple connected bridgeless cubic graphs that are not 3-edge-colourable. In this paper, we show that the total chromatic number is 4 for three infinite families of snarks, namely the Flower Snarks, the Goldberg Snarks, and the Twisted Goldberg Snarks. This result reinforces the conjecture that all snarks are Type 1. Moreover, we give recursive procedures to construct 4-total-colourings in each case.
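The total-colouring conditions stated above (distinct colours on adjacent vertices, on incident vertex/edge pairs, and on edges sharing an endpoint) are easy to verify mechanically. The checker below is a sketch of that definition only; the paper's recursive constructions for the snark families are not reproduced here. The example validates a 3-total-colouring of the triangle K3 by hand.

```python
def is_total_colouring(edges, vcol, ecol):
    """Check the total-colouring conditions: adjacent vertices differ,
    incident vertex/edge pairs differ, and edges sharing an endpoint
    differ.  `edges` are frozensets {u, v}; vcol/ecol map vertices and
    edges to colours."""
    for e in edges:
        u, v = tuple(e)
        if vcol[u] == vcol[v]:
            return False                      # adjacent vertices clash
        if ecol[e] in (vcol[u], vcol[v]):
            return False                      # edge clashes with an endpoint
        for f in edges:
            if f != e and (f & e) and ecol[f] == ecol[e]:
                return False                  # adjacent edges clash
    return True
```

For K3 with vertices coloured 1, 2, 3 and each edge given the colour its endpoints do not use, all conditions hold, so K3 is 3-total-colourable.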
Abstract: Access to knowledge is a basic condition for life in the digital age, and Social Network Services (SNS) are a reality nowadays. Search mechanisms are increasingly essential for interaction and information retrieval in such systems. Adequate representations of the meanings that people use in SNS can be a determining factor in the development of more suitable search mechanisms. The identification of concepts and semantic relations that emerge from SNS data is even more relevant for Inclusive Social Network Services (ISN), which presuppose respect for the diversity of users, including those in the process of digital literacy. In this context, this work studies tools and techniques for identifying terms and semantic relations in ISN, aiming at the conception of a strategy to assist in the construction of ontologies that model the semantics shared in the social network, toward a search mechanism better suited to ISN. The extraction process yields results that demonstrate the importance of applying methods appropriate to the considered context.
Abstract: Access to knowledge is a basic condition for living in the digital age, and Social Network Services (SNS) are a reality nowadays. Search mechanisms are increasingly essential for interaction and information retrieval in such systems. An appropriate representation of the meanings that people use in SNS can be a determining factor in the development of more adequate search engines. The identification of concepts and semantic relationships that emerge from network data is even more relevant for Inclusive Social Network Services (ISN), which presuppose respect for the diversity of users, including those in the process of digital literacy. This work studies tools and techniques for the identification of concepts and semantic relationships in ISN, aiming at designing a strategy to assist the building of ontologies that model the shared semantics in the social network, toward more adequate search mechanisms for ISN. The extraction process yields results that demonstrate the importance of applying appropriate methods to the considered context.
Abstract: Similarity search in high-dimensional metric spaces is a key operation in many applications, such as multimedia databases, image retrieval, and object recognition. The high dimensionality of the data requires special index structures to facilitate the search. Most existing indexes are constructed by partitioning the data set using distance-based criteria. However, those methods either produce disjoint partitions but ignore the distribution properties of the data, or produce non-disjoint groups that greatly degrade search performance. In this paper, we study the performance of a new index structure, called the Ball-and-Plane tree (BP-tree), which overcomes these disadvantages. The BP-tree is constructed by recursively dividing the data set into compact clusters. Unlike other techniques, it integrates the advantages of both the disjoint and non-disjoint paradigms to achieve a structure of tight clusters with low overlap, yielding significantly improved performance. Results obtained from an extensive experimental evaluation with real-world data sets show that the BP-tree consistently outperforms state-of-the-art solutions.
Abstract: Most of the software tools created so far to aid in the construction of distributed applications address how to replicate data consistently in the presence of failures, but without offering much relief for the problem of building dependable, long-running applications. This paper describes our experience building Treplica, a replication toolkit that offers application developers a very simple programming model based on an object-oriented specification for the replication of durable applications. Treplica simplifies the development of highly available applications by making transparent the complexities of replicating data that must survive process crashes and recoveries. These complexities are not negligible, and we believe we have found a compelling way to address this problem behind a simple-to-understand object-oriented interface. We have used Treplica successfully to add fault tolerance to an implementation of the TPC-W benchmark and obtained very good performance, even in the presence of failures.
Abstract: The Web is not accessible when the different abilities and competencies of the whole population are considered. In contexts of population diversity such as the Brazilian one, government services on the Web are distant from the majority of citizens. This work contributes a proposed website evaluation method composed of steps that range from website code validation to tests with users. The method requires a team of 3 to 5 experts and the participation of users who complement the experts' knowledge regarding the context of profile diversity. We expect website development and evaluation teams to perceive the positive cost-benefit ratio of each step and to be inspired to replicate instances of the method.
Abstract: The Web is not accessible when the different abilities and competencies of people are considered. In contexts of a diverse population, such as the Brazilian one, electronic government services are distant from the majority of citizens. This work contributes the proposal of an evaluation method for websites, ranging from source-code evaluation to tests with users. The method requires the participation of 3 to 5 experts and of users who complement the experts' knowledge with regard to user profile diversity. We expect Web development and website evaluation teams to perceive the positive cost-benefit ratio of each step and to be inspired to replicate other instances of the method.
Abstract: Hadoop is becoming the standard framework for processing large amounts of data. It takes advantage of distributed computing to do so quickly and efficiently. However, this is only possible if data is supplied with high availability and consistency. Those goals are fulfilled by a core piece of Hadoop called the Hadoop Distributed Filesystem (HDFS). HDFS is a highly scalable distributed filesystem capable of storing petabytes and providing high-throughput data access. It makes intensive use of replication and checksums to protect the system from data loss and corruption. Despite all those qualities, HDFS has a central component whose maintenance requires the entire system to be shut down. Furthermore, that component is also a single point of failure. These limitations make HDFS unsuitable for 24x7 applications. In this technical report we give a high-level introduction to HDFS and discuss attempts to solve the aforementioned problems.
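The two protection mechanisms mentioned above, replication and checksums, can be illustrated with a toy in-memory store. This is a didactic sketch only — the class and its "datanodes" are hypothetical and bear no relation to the real HDFS code or wire protocol: blocks are written to several nodes, and a stored checksum lets a read skip a silently corrupted replica.

```python
import hashlib
import random

class ToyDFS:
    """Toy illustration of replication plus checksumming: each block is
    stored on `replication` nodes, and reads verify a SHA-256 checksum
    so that a corrupt replica is skipped in favour of a healthy one."""

    def __init__(self, nodes=3, replication=2):
        self.nodes = [dict() for _ in range(nodes)]   # node -> {block id: bytes}
        self.replication = replication
        self.checksums = {}                           # block id -> sha256 hex

    def put(self, block_id, data):
        self.checksums[block_id] = hashlib.sha256(data).hexdigest()
        for node in random.sample(self.nodes, self.replication):
            node[block_id] = data

    def get(self, block_id):
        for node in self.nodes:                       # try each replica in turn
            data = node.get(block_id)
            if data is not None and \
               hashlib.sha256(data).hexdigest() == self.checksums[block_id]:
                return data
        raise IOError("all replicas missing or corrupt")
```

Note that the checksum map itself is centralized here, which mirrors, in miniature, the single-point-of-failure concern the report raises about HDFS's central component.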
Abstract: A clique-colouring of a graph G is a colouring of the vertices of G such that no maximal clique of size at least two is monochromatic. The clique-hypergraph H(G) of a graph G has V(G) as its set of vertices and the maximal cliques of G as its hyperedges. A vertex-colouring of H(G) is a clique-colouring of G. Determining the clique-chromatic number, the least number of colours for which a graph admits a clique-colouring, is known to be NP-hard. By determining some structural properties of powers of cycles, we establish that the clique-chromatic number of these graphs is equal to two, except for odd cycles of size at least five, which need three colours. For odd-seq circulant graphs, we show that their clique-chromatic number is at most four, and we determine the cases in which it is equal to two. Similar bounds for the chromatic number of these graphs are also obtained.
Abstract: Service-Oriented Architecture (SOA) is a promising paradigm for developing distributed systems. It offers flexibility, dynamism, and interoperability. Robustness is the dependability attribute that measures the degree to which a system operates correctly even in the presence of stressful environmental conditions or exceptional inputs, such as faults. Web Services are the main technology for implementing SOAs. Since Web Services are usually deployed on the Internet, an unstable environment, robustness problems might arise if the system is not properly tested. These problems might cause unexpected system behavior and must be avoided. In this context, we present WSInject, a fault injection tool for testing single and composed Web Services, which enables the user to evaluate the behavior of Web Services in the presence of faults. It is able to inject both communication and interface faults and was designed to be minimally intrusive. This report describes WSInject's design and architecture, explains how to use it, compares it with similar tools, and presents our preliminary experiments on a simple system.
Abstract: A literature review has indicated that the remote control, the main physical artifact for interacting with the television system, in its current form is not adequate to mediate users' interaction with Interactive Digital Television (iDTV) applications, especially in a scenario of diverse user profiles such as the one found in our country. This work describes the participatory practices carried out with the goal of, together with users, seeking the definition of a new physical interaction artifact for iDTV. Based on the results of these practices and on previous results already achieved within this research, we arrived at the definition of an artifact to be adapted to this context of use.
Abstract: Despite the number of recommendations for authoring accessible web content, there is a lack of clear orientation for non-specialist developers, as some recommendations are too extensive, too abstract, or exclusively technical. These deficiencies contribute to the low adherence to recommendations, since designers are not necessarily experts in accessibility. In this work we synthesize recommendations for the design of accessible-for-all products, aiming to provide clear orientation to designers. The work resulted in a mapping of the W3C Web Content Accessibility Guidelines 2.0 and ISO 9241 recommendations into Universal Design Guidelines. The mapping provides an understanding of the relation between accessibility needs and the technical apparatus that can be used to fulfill them, so that both accessibility experts and web designers benefit from it. Additionally, we point out some aspects that are still not covered or need improvement and that can potentially help the design of accessible web content.
Abstract: Vila na Rede is an Inclusive Social Network (ISN) system developed as a joint effort with Brazilian communities. As a product of the e-Cidadania project, it has inherited the objective of being accessible to the widest variety of users, including those less familiar with technology or with low literacy levels. In this direction, different features were incorporated into Vila na Rede to provide its users with scaffolding resources that help them profit more from the system. One of these features is the Virtual Presenter, a talking head that allows users to have textual information converted into speech, presented by a face that moves its lips accordingly. This technical report provides details on the integration of the Virtual Presenter into Vila na Rede, as well as on the activities conducted to evaluate this new feature in the ISN.
Abstract: We study the following problem, introduced by Chung et al. in 2006. We are given, online or offline, a set of coloured items of different sizes, and wish to pack them into bins of equal size so that we use few bins in total (at most α times optimal) and the items of each colour span few bins (at most β times optimal). We call such allocations (α, β)-approximate. We prove lower bounds showing a trade-off between the two factors: no scheme that keeps α small can avoid a corresponding penalty in β, and, symmetrically, no scheme that keeps β small can avoid a penalty in α. We give offline schemes that come very close to achieving these lower bounds. For the online case, we prove that no scheme can achieve comparably good allocations. However, a small restriction on item sizes permits a simple online scheme that computes (α, β)-approximate allocations with small factors.
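A naive baseline for the coloured packing problem described above is a colour-aware first fit: place each item in a bin that already holds its colour when possible, so each colour spans few bins. This is only a sketch to make the two competing objectives concrete; it is not one of the report's near-optimal schemes, and its function name and bin representation are illustrative.

```python
def first_fit_by_colour(items, capacity):
    """On-line heuristic for packing (colour, size) items: prefer a bin
    that already contains the item's colour, else any bin with room,
    else open a new bin.  Trades total bin count against colour spread."""
    bins = []                                 # each bin: {"free": .., "colours": set}
    for colour, size in items:
        same = [b for b in bins if colour in b["colours"] and b["free"] >= size]
        any_fit = [b for b in bins if b["free"] >= size]
        target = (same or any_fit or [None])[0]
        if target is None:
            target = {"free": capacity, "colours": set()}
            bins.append(target)
        target["free"] -= size
        target["colours"].add(colour)
    return bins
```

On the sequence (r, 5), (r, 4), (b, 6) with capacity 10, both red items land in one bin and the blue item opens a second, so red spans a single bin.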
Abstract: The e-Cidadania research project investigates solutions for the interaction design of systems that make sense to Brazilian citizens. By studying the relationships established around people in their informal networks and the way they interact with each other and with technology, we have been developing an Inclusive Social Network system named Vila na Rede. To cope with the different interaction needs present in the population, Vila na Rede has been planned as a tailorable system. This report describes and discusses the results of a participatory practice in which participants with different interaction needs built design representations for the system. From the final designs, several norms representing the system's tailorable behavior were defined.
Abstract: The inclusive social network Vila na Rede has been constructed in the context of the e-Cidadania project, with the direct and effective involvement of the target users (Brazilian communities) since the beginning of the project. Now, for the evaluation of this social application, we have created and applied an evaluation artifact based on the Semiotic Ladder, from Organizational Semiotics. Starting from the description of the original artifact - the Semiotic Ladder - we then report the process of constructing this artifact, developed specifically for Vila na Rede. The overall positive results discussed here compose a favorable indication of the system's pertinence to the community it has been designed for, by and with(in).
Abstract: The Capacitated m-Ring-Star Problem is a variant of the classical one-depot capacitated vehicle routing problem in which a customer is either on a route or connected to another customer or to some Steiner point present on a route. We develop a new exact algorithm for this problem using a branch-and-cut-and-price approach and compare its performance with that of a branch-and-cut algorithm proposed earlier in the literature. Computational results show that the new algorithm outperforms the branch-and-cut one in many instance classes.
Abstract: A set of oil derivative distribution depots, including refineries and terminals, have local demands and productions for different products over a given time horizon. However, in a certain period there may not be enough local stock of some product to satisfy the corresponding demand. This brings the need to transport oil derivatives through a network of pipelines. To accomplish that, a tactical pumping plan is composed monthly, and a more detailed operational schedule, spanning a few days, must be updated daily. In real-world applications, both the planning and the scheduling must satisfy a large set of operational constraints. This work defines the tactical planning problem and proposes a novel network flow model to solve it. Also, a procedure is given to decompose the solution into a specific input format, as needed by another solver that computes the final, detailed, daily scheduling solution. Our model treats the oil pipeline network operated by the Brazilian oil company Petrobrás, one of the largest and most complex topologies compared to other networks treated in the open literature. The model was tested with real-world instances and showed significant improvements over human planning.
Abstract: In Paxos, failures can cause the replacement of the coordinator agent, a key process of this consensus algorithm. The replacement of the coordinator, in turn, leads to temporary unavailability of the application implemented atop Paxos. Solutions to the unavailability problem have been sought because of the widely recognized utility of Paxos as a building block of fault-tolerant distributed applications. So far, the problem has been lessened by reducing the coordinator replacement rate through stable leader election. We have observed that the recovery of the newly elected coordinator's state is at the core of the unavailability problem. Thus, in this paper we present a new solution to the problem that allows state recovery to occur concurrently with new consensus rounds. The complexity and performance of the solution are assessed both theoretically and experimentally. Experimental results show that the unavailability period is eliminated during the replacement of the coordinator, making it seamless to the application.
Abstract: With the evolution of Model-Driven Development, it becomes necessary to evolve techniques for extracting the information implicit in a model that can assist in the development, validation, or testing process. One challenge for the modeler is building complex behavioral models that are consistent and reliable. Another challenge is modeling and validating the exception flow of these models. The goal of this report is to present an approach for obtaining the intraprocedural exception flow of a behavioral model. The proposed approach transforms a UML Activity Diagram into a data-flow graph. The semantics of UML actions is used to determine the definitions and uses of variables and also to identify the points where exceptions are raised. A data-flow analysis is executed on the resulting graph to determine which types of exceptions can be raised. The exception flow is represented by edges connecting the nodes that raise exceptions to the nodes that catch and handle them.
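The final step described above — drawing edges from raising nodes to the handlers that catch them — can be sketched with a simple scope walk. This is an assumed simplification, not the report's analysis: node names, the `parent` scope map, and exception types are all hypothetical, and the real approach works over the full data-flow graph of the activity diagram.

```python
def exception_edges(raises, handlers, parent):
    """Connect each node that raises an exception to the innermost
    enclosing handler that catches that type.  `raises` maps node ->
    exception type, `handlers` maps node -> set of types it catches,
    `parent` maps a node to its enclosing scope node (or None)."""
    edges = []
    for node, exc in raises.items():
        scope = parent.get(node)
        while scope is not None:
            if exc in handlers.get(scope, set()):
                edges.append((node, scope))   # raise site -> handler
                break
            scope = parent.get(scope)         # keep climbing outward
    return edges
```

An exception whose type no enclosing handler catches simply produces no edge, modelling an exception that escapes the procedure.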
Abstract: Due to the constant evolution of computer systems toward greater task automation in organizations and faster information exchange, the expansion of computer networks is unstoppable. However, the task of administering these large networks is a permanent challenge for computing professionals. To assist these professionals, there are countless computer administration tools. This technical report is part of a Master's dissertation and presents a study of the main and most popular tools, classifying and comparing them, in order to serve as reference material for choosing the tool best suited to each administrator's needs.
Abstract: Due to the constant evolution of computer systems toward greater task automation in organizations and faster information exchange, the expansion of computer networks is unstoppable. However, the task of managing these large networks and systems is an ongoing challenge for computing professionals. To help these professionals, there are many computational administration tools. This technical report is part of a Master's thesis and presents a study of the major and most popular tools, classifying and comparing them, to serve as reference material for choosing the ideal tool for each administrator's needs.
Abstract: We describe a robust method for recovering the depth coordinate from a normal or slope map of a scene, such as those obtained through photometric stereo or interferometry. The key features of the method are multi-scale integration (with proper averaging and interpolation formulas) and the use of a reliability weight mask to handle regions with uncertain or unavailable slope data. We tested our algorithm with several slope maps computed from known height fields, and observed height errors of one pixel spacing or less. We have also tested it successfully on slope maps of real objects obtained with photometric stereo methods.
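The basic integration step above — recovering heights (up to a constant) from x/y slopes — can be sketched single-scale by averaging two integration paths. This is only an illustration of the idea under the assumption of a consistent slope field; the report's method adds the multi-scale averaging and the reliability mask, which are omitted here.

```python
def integrate_slopes(gx, gy):
    """Recover heights (up to a constant) from forward-difference slopes
    gx (along columns) and gy (along rows) by averaging two integration
    paths: top row then down each column, and left column then along
    each row.  Single-scale sketch; no multi-scale step, no weight mask."""
    n, m = len(gx), len(gx[0])
    h1 = [[0.0] * m for _ in range(n)]        # path 1: top row, then columns
    h2 = [[0.0] * m for _ in range(n)]        # path 2: left column, then rows
    for j in range(1, m):
        h1[0][j] = h1[0][j - 1] + gx[0][j - 1]
    for i in range(1, n):
        for j in range(m):
            h1[i][j] = h1[i - 1][j] + gy[i - 1][j]
    for i in range(1, n):
        h2[i][0] = h2[i - 1][0] + gy[i - 1][0]
    for i in range(n):
        for j in range(1, m):
            h2[i][j] = h2[i][j - 1] + gx[i][j - 1]
    return [[(a + b) / 2 for a, b in zip(r1, r2)]
            for r1, r2 in zip(h1, h2)]
```

For the planar height field h(i, j) = 2i + 3j the slopes are constant (gy = 2, gx = 3) and both paths reproduce the field exactly, which is a convenient sanity check before moving to noisy slope maps.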
Abstract: A cloud is a triple consisting of a fuzzy object model, a delineation algorithm, and a criterion function for evaluating delineations. It employs recognition and delineation in a tightly coupled manner to accomplish image segmentation. It captures shape variations of a given object/object assembly to form an uncertainty region for its boundary. For any image position, delineation is executed in the uncertainty region to obtain a candidate object/object assembly, and the criterion function assigns a score to it. Image segmentation is defined by the candidate with the highest score. This work presents and compares three cloud models in automatic MR-T1 image segmentation of the cerebrum, the cerebellum, and cerebral hemispheres. These structures are connected in several parts, imposing serious segmentation challenges. The results show that the methods are fast, accurate, and can eliminate user intervention or, at least, reduce it to simple corrections. Their applications go beyond medical imaging to new vistas in various areas served by image segmentation.
Abstract: Magnetic resonance (MR) image segmentation of brain tissues has become crucial to advancing research, diagnosis, and treatment procedures in Neurology. Despite the large number of published papers, there is no standard method suitable for all cases. We present here an approach that is fast, accurate, and robust to anatomical variations, based on inhomogeneity/bias correction and clustering by optimum-path forest (OPF). The method assumes skull stripping and corrects voxel values in the brain based on local estimations of the white-matter intensities. We use two segmentation steps with different parameters (e.g., graph topologies and image features). The first separates the cerebrospinal fluid (CSF) from the white-matter (WM) and gray-matter (GM) tissues, and the second separates WM from GM voxels. In each step, random samples are uniformly drawn to form a small unlabeled training set. Groups of training voxels with similar features are obtained by OPF clustering, a class label is assigned to each group, and the labels are propagated to the remaining voxels. Each step is also repeated a few times (e.g., 3) to take the majority vote as the final label. The method was evaluated using several data sets from three protocols (control subjects, phantoms, and patients), quantitative and qualitative evaluation methodologies, and two state-of-the-art approaches. The results show that it can solve brain tissue segmentation in less than 5 minutes on modern PCs, with higher accuracy and robustness than the baseline approaches.
Abstract: This technical report describes the design rationale for the meta-communication mechanism being developed as part of a Master's dissertation on communication and expression tools in Inclusive Social Networks (ISN). This mechanism will be implemented in Vila na Rede, an ISN of the e-Cidadania project, with the goal of supporting users in their learning processes while using the system.
Abstract: This technical report describes the design rationale behind the meta-communication mechanism being developed as part of a Master's thesis on Communication and Expression Tools in Inclusive Social Networks (ISN). This mechanism is going to be implemented in Vila na Rede, an ISN from the e-Cidadania project; its objective is to support users in their learning process while using the system.
Abstract: We show that extended P4-laden graphs have a linear number of minimal separators and present an algorithm to list them in linear time, extending an algorithm for P4-sparse graphs given by Nikolopoulos and Palios. We also give bounds on the number and total size of all minimal separators of extended P4-laden graphs and some of their subclasses, such as P4-tidy and P4-lite graphs. Moreover, we show that these bounds are tight for all the subclasses considered.
Abstract: This work describes ongoing research that aims to evaluate the limitations of existing tools to support geographic information retrieval on the Web. More specifically, we are interested in assessing how well queries involving geographic relationships can be specified and executed using existing Web tools. Our evaluation considered three different aspects: the relevance of the results, and the time and complexity of the steps necessary to obtain them. More than thirty Web users performed five tasks over five scenarios. The experimental results demonstrate that existing Web tools are not sufficiently integrated to meet user needs regarding the specification and execution of queries involving spatial relationships.
Abstract: We present a quantum algorithm for the two-colour bichromatic closest pair problem whose query complexity improves on the classical probabilistic time complexity. We also show how to solve the closest pair problem, a special case of the bichromatic closest pair problem, with a reduced number of queries, prove a quantum lower bound on the number of queries for this problem, and discuss some open issues.
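For reference, the problem itself is easy to state classically: given a red set R and a blue set B of points, find the red/blue pair at minimum distance. The brute-force O(|R|·|B|) baseline below fixes the definition the quantum bounds are measured against; it is a classical sketch only, with no relation to the quantum algorithm's structure.

```python
def bichromatic_closest_pair(reds, blues):
    """Brute-force bichromatic closest pair: return (distance, red, blue)
    for the closest red/blue pair under Euclidean distance."""
    best = None
    for r in reds:
        for b in blues:
            d = sum((x - y) ** 2 for x, y in zip(r, b)) ** 0.5
            if best is None or d < best[0]:
                best = (d, r, b)
    return best
```

The monochromatic closest pair problem mentioned in the abstract is the special case obtained by taking both colour classes from the same point set (excluding identical indices).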
Abstract: Social Network Services are a reality nowadays. These systems represent a propitious virtual environment for user interaction and communication, and an opportunity for people to share information and knowledge. Considering socio-economic aspects, these systems should provide inclusive access for all. The purpose of Inclusive Social Network Services (ISN) is to create situations where users' diversity is respected and access difficulties are minimized. The use of search engines is the principal means of finding and accessing information generated in digital media. Therefore, it is important that all people be able to retrieve information in a natural way, with results that make sense to them. This research report investigates the search process in an ISN based on the 8th Semio-Participatory Workshop of the e-Cidadania project. The activity involved the observation of prospective users of an ISN in a set of search scenarios; the aim was to observe how users behave and what difficulties they have with the search mechanism. We point out preliminary results that can lead to better search engines from an inclusive perspective.
Abstract: The automatic generation of test cases is an important problem in conformance testing of several critical systems. In many situations, the system specification is modeled as a Finite State Machine (FSM). In practice, a system is a combination of several subsystems, designed, developed, and tested independently. Since the overall number of states in the FSM specification is usually large, the known methods to generate test suites may become impractical. Thus, to properly describe modular systems, in this paper we define the concept of combined FSMs. A combined FSM is obtained by conjoining previously tested submachines with newly added states. We adapt the well-known W-method and the G-method, and introduce a new method to test combined FSMs. The new method is scalable in the number of states, and can also be used for incremental testing of new systems or for retesting modified implementations.
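The classical W-method the paper adapts builds its test suite by concatenating a transition cover P with a characterization set W. A minimal sketch of that construction, under the usual assumptions (completely specified, deterministic FSM given as a `(state, input) -> (next_state, output)` map), is below; the paper's combined-FSM variants are not reproduced here.

```python
from itertools import product

def w_method_suite(fsm, init, W, extra=0):
    """Sketch of the classical W-method: test suite P . X^<=extra . W,
    where P is a breadth-first transition cover of `fsm` from `init`,
    W is a characterization set (a list of input tuples), and `extra`
    accounts for implementations with up to `extra` additional states."""
    inputs = sorted({i for (_, i) in fsm})
    # breadth-first transition cover: reach every state, cross every edge
    P, seen, frontier = [()], {init}, [((), init)]
    while frontier:
        seq, state = frontier.pop(0)
        for i in inputs:
            nxt, _ = fsm[(state, i)]
            P.append(seq + (i,))
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((seq + (i,), nxt))
    middles = [()]
    for k in range(1, extra + 1):
        middles += list(product(inputs, repeat=k))
    return sorted({p + m + w for p in P for m in middles for w in W})
```

For a two-state machine over inputs {a, b} with W = {(a,)}, the cover P has five sequences and the resulting suite has five distinct tests, which illustrates why the suite size grows quickly with the specification and motivates the paper's scalable alternative.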
Instituto de Computação :: Universidade Estadual de Campinas
Av. Albert Einstein, 1251 - Cidade Universitária Zeferino Vaz • 13083-852 Campinas, SP - Brasil • Fone: 3521-5838