Abstract: This paper describes Treplica, a tool designed for ubiquitous replication. Most software tools created so far to aid the construction of distributed applications have addressed solely how to keep volatile data consistent in the presence of failures, without offering any relief for the problem of managing persistence. Treplica simplifies the development of highly available applications by making the complexities of dealing with replication and persistence transparent. These complexities are not negligible, and we believe we have found a compelling way of factoring out these concerns into a simple-to-understand programming interface.
Abstract: This technical report contains summaries of 27 works presented at the III PhD Thesis Workshop of the Institute of Computing, UNICAMP. The workshop, held from November 21 to 23, 2007, allowed doctoral students of the Institute to present the main aspects of their research. Each chapter corresponds to one presentation, with the text limited to 4 pages.
Participation was voluntary, and the academic profile of the participants varied, ranging from students recently admitted to the program to those whose defenses were already scheduled for December 2007.
Publishing the summaries as a technical report aims at a wider dissemination of the doctoral work in progress at IC. It also provides a succinct record of the state of several of these research projects.
Abstract: Fast Paxos is a consensus algorithm that works in a succession of rounds, where each round tries to decide a value v that is consistent with all past rounds. Rounds are started by a coordinator process, and consistency is guaranteed by the rule this process uses to select v and by the properties of process sets called quorums. We show a simplified version of this rule for the specific case where quorums are defined by the cardinality of these process sets. This rule is of special interest to implementers of the algorithm.
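The cardinality-based rule admits a very small implementation. The sketch below is illustrative only (the function name and report format are assumptions, and the majority threshold reflects the simplifying case where any quorum contains more than half of the reporting processes): the coordinator looks at the (round, value) votes reported by a quorum and keeps a value only when it may already have been chosen.

```python
def pick_value(reports, proposal):
    """Coordinator's value-selection rule (illustrative sketch).

    reports: one (voted_round, voted_value) pair per quorum member,
    where voted_round == 0 means the member has not voted yet.
    proposal: the value the coordinator is free to pick when unconstrained.
    """
    k = max(r for r, _ in reports)
    if k == 0:
        return proposal                   # no past votes constrain the choice
    votes_k = [v for r, v in reports if r == k]
    # cardinality rule: only a value voted by more than half of the quorum
    # in the highest reported round k could already have been chosen
    for v in set(votes_k):
        if votes_k.count(v) > len(reports) / 2:
            return v
    return proposal                       # no value can have been chosen: any proposal is safe
```

For example, if two of three quorum members report a round-2 vote for the same value, that value must be picked; if no value reaches a majority of the quorum, the coordinator is free to propose its own.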
Abstract: This work treats harvest and transportation planning problems stemming from the daily operation of some large pulp and paper companies. The problem consists in scheduling harvest and transportation tasks over a planning horizon of around one year. It comprises a large and complex set of constraints, including constraints related to the structure of the harvest areas, the structure and capacity of the harvest teams, the weather, and some properties of the wood logs. We developed a hybrid algorithm that has three phases, namely construction, local search, and post-optimization. The first two are based on a GRASP metaheuristic and are used to generate a pool of elite solutions. In the third phase, a relaxed linear model is generated for each solution in the pool, and the solutions of these models are converted into valid solutions. The algorithm was tested with real data and proved to be an adequate approach to this problem. Although some restrictions are specific to this problem, the same ideas can inspire similar approaches for solving harvest and transportation planning problems in other situations.
Abstract: Software product line is a systematic software reuse approach that promotes the generation of specific products from a set of core assets for a given domain, exploiting the commonalities and variabilities among these products. Feature modeling is one of the most accepted ways to represent commonalities and variabilities at the requirements phase. At the architecture design phase, the product line architecture is an important core asset that should represent the common and variable parts of a product line. This technical report proposes an approach that supports the architectural design phase of component-based product line engineering. Our proposal for the conception of a product line architecture is based on the definition of variable architectural elements and on how they are derived from a given feature model and from the interactions among features, providing a prescribed way to map features to the variable architectural elements that compose the product line architecture. The main results of this work are a metamodel that defines product line concepts extended with software architecture concepts, and a method for the architectural design phase of product line engineering that generates a product line architecture and instantiates it for a specific product. In order to evaluate the method, a case study was carried out, applying the approach to a product line in the mobile domain.
Abstract: This paper analyses the distribution of some characteristics of computer scientists in Brazil according to region and gender. A computer scientist is defined here as a faculty member of a graduate-level computer science department. Under this definition, there were 886 computer scientists in Brazil in November 2006. We study the self-declared research areas and the production of journal and conference papers of these scientists, and whether they hold a recognition fellowship from one of the Brazilian grant agencies, and analyse the differences according to the geo-political region of Brazil and the gender of the scientists.
Abstract: The identification of natural groups in a dataset is reduced to an optimum-path forest problem in a graph. We define a graph whose nodes are data samples and whose arcs connect adjacent samples in a feature space. The nodes are weighted by their probability density values and different choices of a path-value function lead to effective solutions for data clustering. The method identifies a root node for each cluster and finds the samples which are ``more strongly connected'' to each root than to any other. The output is an optimum-path forest whose trees (clusters) are the influence zones of their roots. This framework extends the image foresting transform from the image domain to the feature space, revealing important theoretical relations among relative fuzzy-connected segmentation, morphological reconstructions, watershed transforms, and clustering by influence zones. It also provides a more general and robust implementation for the popular mean-shift algorithm. The results are illustrated in image segmentation.
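The propagation at the heart of the method can be sketched compactly. The Python below is an illustrative simplification, not the authors' exact algorithm: samples are conquered in decreasing order of path value, a path's value is the minimum density along it, and density maxima survive as roots that label their trees (clusters). The data layout and the handicap `delta` are assumptions made here for brevity.

```python
import heapq

def opf_cluster(density, adj):
    """Clustering by optimum-path forest (illustrative sketch): each sample
    tries to reach a density maximum through a path whose minimum density
    is as high as possible; maxima become roots and label their clusters."""
    delta = 1e-9                             # handicap so maxima win and become roots
    n = len(density)
    cost = [d - delta for d in density]
    label = list(range(n))                   # initially every sample is its own root
    heap = [(-c, i) for i, c in enumerate(cost)]
    heapq.heapify(heap)
    done = [False] * n
    while heap:
        negc, s = heapq.heappop(heap)
        if done[s]:
            continue
        done[s] = True
        for t in adj[s]:
            new_cost = min(cost[s], density[t])   # path value: minimum density on the way
            if new_cost > cost[t]:
                cost[t] = new_cost
                label[t] = label[s]               # t joins the cluster of s's root
                heapq.heappush(heap, (-new_cost, t))
    return label
```

On a 1-D chain of densities with two peaks, the two peaks become roots and every sample receives the label of the peak it is most strongly connected to.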
Abstract: Many social inequalities exist within the Brazilian population. Research has shown how community informatics is capable of contributing to the reduction of those inequalities. Aiming at the development of communities, we here propose a broader use of low-cost laptops than the one proposed by the UCA (one computer per student) project sponsored by the Federal Government. Some recommendations derived from best practices in the field are presented that proved to increase the chances of success of community development initiatives.
Abstract: In this paper, we present a new Hybrid Hierarchical Clustering approach for Image Retrieval. Our method combines features from both divisive and agglomerative clustering paradigms in order to yield good-quality clustering solutions with reduced computational cost. We provide several experiments showing that our technique reduces the number of required comparisons to perform a retrieval without significant loss in effectiveness when compared to flat-based solutions.
Abstract: In this paper we propose a new technique to reduce the program footprint and the instruction fetch latency in high-performance architectures that adopt long instructions in memory. Our technique is based on an algorithm that factors long instructions into encoded instructions and instruction patterns. An encoded instruction contains no redundant data and is stored in the I-cache. The instruction patterns, on the other hand, serve as a map for the decode logic to prepare the instruction for execution in the execution stages. These patterns are stored in a new cache, named the Pattern cache (P-cache). The technique has shown itself to be a suitable alternative to well-known architectural styles such as the VLIW and EPIC architectures. We have carried out a case study of this technique in a high-performance architecture called 2D-VLIW. We have evaluated the performance of our encoding technique through trace-driven experiments with MediaBench, SPECint00, and SPECfp programs. Moreover, we have compared the execution time of the 2D-VLIW architecture with encoded instructions to that of the same architecture with non-encoded instructions. Further experiments compare our approach to VLIW and EPIC instruction encoding techniques. Experimental results reveal that our encoding strategy provides a program execution time that is up to 23x (average of 5x) faster than a 2D-VLIW non-encoded program. The results also show that the program code size with encoded instructions is up to 78% (average of 69%) smaller than with non-encoded instructions.
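The factoring idea can be illustrated in a few lines. The hypothetical Python sketch below works on tuples rather than bit fields (real hardware would, of course, use packed bits): a long instruction with mostly empty slots is split into its non-empty operations, which would go to the I-cache, and a boolean occupancy pattern, which would go to the P-cache; the decoder reassembles the original instruction from the two.

```python
def encode(long_instr, nop=None):
    """Factor a long instruction into an encoded instruction and a pattern
    (illustrative sketch). The non-empty operations would be stored in the
    I-cache, the occupancy pattern in the P-cache."""
    pattern = tuple(slot != nop for slot in long_instr)      # which slots are occupied
    ops = tuple(slot for slot in long_instr if slot != nop)  # redundant no-ops removed
    return ops, pattern

def decode(ops, pattern, nop=None):
    """Reassemble the original long instruction from the two caches."""
    it = iter(ops)
    return tuple(next(it) if filled else nop for filled in pattern)
```

For a 6-slot instruction with three no-ops, the encoded form halves the number of stored slots, while the pattern costs only one bit per slot.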
Abstract: Extraction of the mid-sagittal plane (MSP) is an important step for brain image registration and asymmetry analysis. We present a fast MSP extraction method for 3D MR images, based on automatic segmentation of the brain and on heuristic maximization of the cerebrospinal fluid within the MSP. The method is shown to be robust to severe anatomical asymmetries between the hemispheres caused by surgical procedures and lesions. The experiments used 64 MR images (36 pathological, 20 healthy, 8 synthetic), and the method found an acceptable approximation of the MSP in all images, with a mean time of 60.0 seconds per image.
Abstract: In this work, we propose a traffic anonymization network built upon an efficient anonymized routing policy that provides good anonymity for sender, receiver, and the communicating pair. The organization, architecture, and topology used in the network are extensively analyzed, and their efficiency and anonymity properties are verified empirically through simulations. Further improvements to the adopted topology are also proposed.
Abstract: Certificateless Public Key Cryptography (CL-PKC) is a paradigm for building public-key cryptosystems designed to overcome the limitations of Identity-Based Cryptography (ID-PKC). In this work, the properties, advantages, and implications of this new model are presented and discussed, with emphasis on its instantiation from bilinear pairings over elliptic curves. Security considerations for the paradigm are formulated based on the degree of trust placed in the central authority, on the adversary model adopted, and on the set of intractable problems on which the security of the protocols most commonly relies. Additionally, encryption and signature schemes designed under these premises are presented with implementation details.
Abstract: The Image Foresting Transform (IFT) is a tool for the design of image processing operators based on connectivity, which reduces image processing problems to a minimum-cost path forest problem in a graph derived from the image. We propose a new image operator, which solves segmentation by pruning trees of the forest. An IFT is applied to create an optimum-path forest whose roots are seed pixels, selected inside a desired object. In this forest, object and background are connected by optimum paths (leaking paths), which cross the object's boundary through its "most weakly connected" parts (leaking pixels). These leaking pixels are automatically identified and their subtrees are eliminated, such that the remaining forest defines the object. Tree pruning runs in linear time, extends to multidimensional images, is free of ad hoc parameters, requires only internal seeds, and works with minimal interference from the heterogeneity of the background. These aspects favor solutions that exploit image features and object information for automatic segmentation. We give a formal definition of the obtained objects and conditions to achieve robust segmentation using tree pruning, describe the algorithms, and evaluate automatic segmentation by tree pruning in two applications: 3D MR-image segmentation of the human brain and segmentation of license plates. Given that its most competitive alternative is the watershed transform by markers, we include a comparative analysis between the two.
Abstract: We propose a dynamic algorithm to maintain the tree rooted at a special source vertex that contains a shortest path to every vertex of the graph. The graph has edge costs, with no additional restriction on the cost domain other than non-negativity. We address the problem under multiple simultaneous operations on the graph: edge cost increases and decreases. The worst-case complexity is O(m + n log n) for a graph with n vertices and m edges. The running time is at most that of computing a new shortest-path tree from scratch. All operations on the graph are handled by a single model, similar to Dijkstra's algorithm.
Abstract: In this paper we introduce the concept of k-flow-critical graphs. These are graphs that do not admit a k-flow but such that any smaller graph obtained from them by contraction of edges or of sets of vertices is k-flowable. Throughout this paper we will refer to k-flow-critical graphs simply as k-critical.
Any minimum counterexample to Tutte's 3-Flow and 5-Flow Conjectures must be 3-critical and 5-critical, respectively. Thus, any progress towards establishing good characterizations of k-critical graphs can represent progress in the study of these conjectures. We present some interesting recently discovered properties of k-critical graphs.
Abstract: Groupware scheduling is an essential and continuous activity. The complexity of conciliating the schedules of several people makes proper time management a challenge. It involves sophisticated communication mechanisms to support information sharing and scheduling negotiation among users of diverse systems. Moreover, group scheduling demands application support to deal with over-constrained schedules and with rescheduling, but such needs have not yet been fulfilled. Based on a review of past approaches to this problem, their successes and failures, new approaches for groupware scheduling are presented. The development of calendaring and scheduling protocols is reviewed, as well as the application of promising techniques from the fields of optimization, artificial intelligence, and mixed-initiative interfaces. Towards a new generation of more intelligent and powerful groupware scheduling tools that will increase productivity by minimizing coordination effort, a multidisciplinary application of knowledge from these areas is proposed.
Abstract: In 2005, the MIT Media Lab started a project to develop a low-cost laptop, initially called the ``100-dollar laptop'' or ``The Children's Machine''. As a consequence of this project, with implications for the education of children all over the world, other initiatives to develop low-cost laptops for educational purposes appeared. After meetings with leaders and representatives of these initiatives, the Brazilian government showed interest in the idea and is studying ways to apply it to the country's public education system. In this context, considering that the discussion is still at an early stage, this technical report presents the results of a preliminary analysis carried out on three educational laptop models under consideration by the Government. The analysis was based on an inspection of the equipment using a semiotic framework.
Abstract: In this paper we present a study of scientific production in Computer Science in Brazil and several other countries, as measured by the number of articles in journals indexed by ISI. We compare the Brazilian production from 2001 to 2005 with that of some Latin American, Latin European, BRIC (Brazil, Russia, India, China), and other relevant countries. We also classify and compare these countries' production according to the impact factor of the journals in which the articles were published, and according to each country's known research and development investment.
The results show that Brazil has by far the largest production among Latin American countries, with a production about one third of Spain's, one fourth of Italy's, and slightly larger than Portugal's. The growth in Brazilian publications during the period places the country in the mid-range growth group, but regarding dollar productivity Brazil joins the other BRIC countries as one of those with the lowest productivity. The distribution of Brazilian production according to impact factor is similar to that of most countries.
Abstract: Mesh segmentation offers a desirable divide-and-conquer strategy for many graphics applications. In this paper, we present a novel, efficient, and intrinsic method to segment meshes following the minima rule. The eigenfunctions of the Laplace-Beltrami operator define locality- and volume-shape-preserving functions over a surface. Inspired by manifold learning theory, we use these functions as the basis of an embedding space for mesh vertices and group them using k-means clustering. We also present a new kind of segmentation hierarchy built from the analysis of the Laplace-Beltrami operator spectrum.
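A minimal version of this pipeline can be sketched as follows. The sketch is illustrative only: a plain combinatorial graph Laplacian on the mesh connectivity stands in for the surface Laplace-Beltrami operator, and farthest-point seeding for k-means is a choice made here for determinism; neither detail is taken from the paper.

```python
import numpy as np

def spectral_segment(adj, k, dims=1, iters=50):
    """Spectral segmentation sketch: embed vertices with the low-frequency
    eigenvectors of the graph Laplacian, then group them with k-means.

    adj: symmetric (n, n) adjacency matrix; k: number of segments;
    dims: how many non-constant eigenvectors to use as coordinates.
    """
    L = np.diag(adj.sum(axis=1)) - adj              # combinatorial Laplacian
    _, vecs = np.linalg.eigh(L)                     # eigenvalues in ascending order
    emb = vecs[:, 1:1 + dims]                       # skip the constant eigenvector
    idx = [int(np.argmin(emb[:, 0]))]               # farthest-point seeding
    for _ in range(1, k):
        d = ((emb[:, None] - emb[idx]) ** 2).sum(-1).min(axis=1)
        idx.append(int(np.argmax(d)))
    centers = emb[idx].copy()
    for _ in range(iters):                          # plain k-means on the embedding
        labels = np.argmin(((emb[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = emb[labels == j].mean(axis=0)
    return labels
```

On a graph made of two triangles joined by a single edge, the first non-constant eigenvector already separates the two natural segments.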
Abstract: The digitalization of terrestrial TV transmission and the potential demand for interactivity on TV establish a new interaction paradigm of extreme social importance, especially for the Brazilian population. The present work approaches this topic from the perspective of interaction design, a subject little explored in the literature on Interactive Digital Television (iDTV), especially in Brazil. The technical report presents a taxonomy for physical interaction devices - composed of hardware and software artifacts - and a proposal for ways of using traditional physical artifacts to interact with iDTV. The proposal is based on the nature of the interactive applications and on the profile of the users. The results of this work can guide the choice and the formulation of proposals for new digital artifacts for interaction with iDTV.
Abstract: A matching covered graph is a nontrivial connected graph in which every edge is in some perfect matching. A matching covered graph G is near-bipartite if it is non-bipartite and there is a removable double ear R such that G-R is matching covered and bipartite. In 2000, C. Little and I. Fischer characterized Pfaffian near-bipartite graphs in terms of forbidden subgraphs. However, their characterization does not imply a polynomial-time algorithm to determine whether a near-bipartite graph is Pfaffian. In this report, we give such an algorithm. Our algorithm is based on an inclusion-exclusion principle and uses as a subroutine an algorithm, due to McCuaig and to Robertson, Seymour and Thomas, that determines whether a bipartite graph is Pfaffian.
Abstract: In this paper we investigate the use of a specialized data warehousing database management system as a data back-end for web applications and assess the performance of this solution. We have used the Monet database as a drop-in replacement for traditional databases and performed benchmarks comparing its performance to that of two of the most commonly used databases for web applications: MySQL and PostgreSQL. Our main contribution is to show, for the first time, how a read-optimized database performs in comparison to established general-purpose database management systems in the domain of web applications. Monet's performance relative to MySQL and PostgreSQL allows us to affirm that the widely accepted assumption that relational database management systems are fit for all applications deserves to be reconsidered in the case of dynamic web applications.
Abstract: We introduce a supervised pattern classifier based on an optimum-path forest. The samples in a training set are nodes of a complete graph whose arcs are weighted by the distances between sample feature vectors. The training builds a classifier from key samples (prototypes) of all classes, where each prototype defines an optimum-path tree whose nodes are the samples most strongly connected to it. The optimum paths are also used to label unseen test samples with the classes of their most strongly connected prototypes. We show how to find prototypes with no classification errors in the training set and propose a learning algorithm that improves accuracy over an evaluation set without overfitting the test set. The method is robust to outliers, handles non-separable classes, and can outperform artificial neural networks and support vector machines.
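The training and classification steps can be sketched as follows. This is an illustrative simplification, not the authors' exact algorithm: a path's cost here is its largest arc, prototypes compete for the remaining samples on the complete graph, and the prototypes are passed in by hand (the paper derives them from the minimum spanning tree).

```python
import heapq

def train(X, y, prototypes, dist):
    """Optimum-path forest training sketch: each sample keeps the label of
    the prototype that reaches it through the cheapest max-arc path."""
    n = len(X)
    cost = [float('inf')] * n
    label = list(y)
    heap = []
    for p in prototypes:                      # prototypes start with zero cost
        cost[p] = 0.0
        heapq.heappush(heap, (0.0, p))
    done = [False] * n
    while heap:
        c, s = heapq.heappop(heap)
        if done[s]:
            continue
        done[s] = True
        for t in range(n):                    # complete graph: every pair is an arc
            if t != s and not done[t]:
                new_cost = max(c, dist(X[s], X[t]))   # path cost: largest arc
                if new_cost < cost[t]:
                    cost[t] = new_cost
                    label[t] = label[s]
                    heapq.heappush(heap, (new_cost, t))
    return cost, label

def classify(x, X, cost, label, dist):
    """A test sample takes the label of the training sample that offers the
    cheapest extension of an optimum path."""
    best = min(range(len(X)), key=lambda s: max(cost[s], dist(x, X[s])))
    return label[best]
```

On two well-separated 1-D clusters, any test point is labeled by the cluster it extends most cheaply.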
Abstract: We are facing the proliferation of several kinds of sensing devices, from satellites to tiny sensors. This has opened up new possibilities for us to understand, manage and monitor a given environment, from the small - e.g., a room - to the large - e.g., the planet. It has, however, added a new dimension to the classic problem of heterogeneous data management: how to handle increasing volumes of sensing data from a wide range of sensors. This report is concerned with the problem of sensor data publication. Our solution involves the design and implementation of a framework for sensor data management, which applies technologies based on Semantic Web standards, components, and scientific workflows. Individual sensors or networks are encapsulated into a specific kind of component - a DCC - which supports homogeneous access to data and software. DCCs are themselves handled by scientific workflows that provide facilities for controlling data production, integration, and publication. As a result, applications that require sensor data interact with workflows instead, and are freed from concerns such as sensor particularities or the need to provide separate handlers for real-time streams. The report also presents initial implementation results.
Abstract: In the development of modern systems, the architectural design phase has been receiving increasing attention. Since the architecture is considered the ideal place to address a system's non-functional requirements, late architectural changes tend to be very costly. Moreover, a well-designed architecture not only improves system quality but also eases its comprehension and maintenance. However, designing a software architecture is a complex task that depends on the experience of the development team. This work proposes a method to systematize the specification of the logical view of a system's architecture, refined with the independent-components architectural style. In this way, the method intends to reduce the penalty incurred by an inexperienced development team. The proposed method was validated through a simple case study, presented at the end of the document.
Abstract: With the growth of computational power, new requirements have been demanded of modern computing systems. Furthermore, given the flexibility and ease of evolution of these systems, software plays an increasingly important role in the context of automated systems. One consequence of demanding more functionality from software is the growth of its complexity, which can compromise maintenance activities and may even reduce system reliability. One way to control this complexity is through the software architecture, which represents the system at a high level of abstraction, in terms of its high-level elements and the rules governing their interaction. When a family of systems shares a similar software architecture, we say they follow the same architectural style. This report presents a set of architectural styles widely used in practice, illustrating the main characteristics of each one, along with guidance on when to use them.
Abstract: Designing, implementing, and testing the exceptional behavior of a software system are complex tasks that usually do not receive the necessary attention from existing development methodologies. This work presents a systematic way of dealing with exception handling throughout requirements specification, design, implementation, and testing in the development of component-based systems. This approach is presented through a case study of a financial system. Testing activities are carried out from the early stages of development, improving the quality of the resulting system. Our solution refines the Methodology for the Definition of Exceptional Behavior (MDCE) in the architectural design, implementation, and testing phases. The method resulting from this refinement was named MDCE+. Besides refining MDCE, the MDCE+ method was adapted to the UML Components process. From the standpoint of the CompGov project, this case study was used to partially evaluate the component-based development process with maximized reuse. The evaluated part of the process consisted of reusing components based on the specifications of their provided and required interfaces. In future work, the remaining reuse activities should be evaluated: (i) requirements negotiation based on already existing components; and (ii) reuse of frameworks and of architectural components.
Abstract: Concurrency control mechanisms have long been important for mediating concurrent access to shared computational resources. These mechanisms can be categorized as optimistic or pessimistic. Each approach has advantages and disadvantages that can be better exploited in mobile computing environments, which are subject to disconnections, variable bandwidth, and high communication costs. This report presents a hybrid concurrency control that seeks to combine the advantages of the optimistic and pessimistic approaches so as to better cope with the characteristics of mobile environments. In particular, the report presents correctness proofs for the proposed concurrency control.
Abstract: This paper presents an approach to support choreography in agricultural supply chains. It depicts a model for this kind of chain that considers both static and dynamic aspects, and their mapping to an underlying architecture. In particular, the model emphasizes mutual agreements, coordination of activities, quality enforcement and activity documentation. The architecture is centered on mapping chain elements to Web Services and their dynamics to the choreography of services. A case study, for soy supply chains, is used to motivate the approach.
Abstract: This work treats a typical Brazilian school timetabling problem, that consists of scheduling a set of lectures and teachers in a prefixed period of time, satisfying a set of operational requirements. We applied a basic GRASP heuristic, followed by a path-relinking improvement. The algorithms use a local search procedure that interleaves two types of movements and a path-relinking strategy that is enhanced with a local search procedure. They were tested with real instances and proved to be good approaches to treat this problem. Although some restrictions are specific to the Brazilian educational institutions, the same ideas can inspire similar approaches for solving the school timetabling problem in other situations.
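The GRASP-with-path-relinking scheme described above can be sketched generically. The toy below is illustrative only (0/1 solution vectors with a user-supplied cost function; the paper's construction moves, interleaved neighborhoods, and path-relinking strategy are timetable-specific), but it shows the three ingredients: randomized greedy construction, local search, and path-relinking toward an elite solution.

```python
import random

def grasp_pr(n_items, cost, n_iters=30, alpha=0.3, seed=1):
    """GRASP with path-relinking skeleton (illustrative toy): solutions are
    0/1 vectors and `cost` maps a solution to the value being minimized."""
    rng = random.Random(seed)

    def construct():                    # randomized greedy construction
        sol = [0] * n_items
        for i in rng.sample(range(n_items), n_items):
            trial = sol[:]
            trial[i] = 1
            if cost(trial) < cost(sol) or rng.random() < alpha:
                sol = trial             # accept improving moves, sometimes others
        return sol

    def local_search(sol):              # first-improvement flip neighborhood
        improved = True
        while improved:
            improved = False
            for i in range(n_items):
                trial = sol[:]
                trial[i] ^= 1
                if cost(trial) < cost(sol):
                    sol, improved = trial, True
        return sol

    def path_relink(start, guide):      # walk from start toward the guiding solution
        best, cur = start[:], start[:]
        for i in [j for j in range(n_items) if start[j] != guide[j]]:
            cur[i] = guide[i]
            if cost(cur) < cost(best):
                best = cur[:]
        return best

    elite = None
    for _ in range(n_iters):
        s = local_search(construct())
        if elite is not None:
            s = min(s, path_relink(s, elite), key=cost)
        if elite is None or cost(s) < cost(elite):
            elite = s
    return elite
```

With a cost such as the number of violated requirements, each iteration builds a solution, polishes it locally, and then recombines it with the best solution found so far.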
Abstract: In recent years, large digital image collections have been generated, manipulated, and stored in databases. In this scenario, it is very important to develop mechanisms that provide automatic means to retrieve images in an efficient and effective way. However, the subjectivity of the user's perception of an image usually hampers a fully automatic search and retrieval. Relevance feedback is one of the most common approaches to overcome this difficulty.
In this paper, a new content-based image retrieval framework with relevance feedback is proposed. This framework uses Genetic Programming (GP) to learn the user's needs. The objective of this learning method is to find a function that combines different similarity values, from distinct descriptors, and best encodes the user's perception of image similarity. Several experiments were performed to validate the proposed method and to compare our work with other relevance feedback techniques. The experimental results show that the proposed method outperforms all of them.
Abstract: This technical report considers the problem of developing dependable component-based software systems through an architectural approach, which combines fault prevention, fault removal, and fault tolerance techniques. The architecture-centred solution comprises a rigorous approach that systematises the verification and validation of fault-tolerant systems. Using the B-Method and CSP, we analyse the exception flow at the architectural level and verify important properties regarding system dependability. Besides that, an architectural solution based on exception handling is adopted for transforming untrusted software components into idealised fault-tolerant architectural components, which can be used as building blocks for creating fault-tolerant software architectures. The feasibility of the proposed architectural solution was evaluated in a business-critical case study.
Abstract: This report presents an experimental investigation of the application of the column generation method to the vertex separator problem. Computational results are reported to discuss different implementation aspects and techniques to speed up the column generation process. Convergence problems, oscillation of the optimal dual values, and degeneracy are addressed. Two decomposition methods are used: Dantzig-Wolfe decomposition and the explicit master.
Abstract: Although SystemC is considered the most promising language for system-on-chip functional modeling, it does not come with power modeling capabilities. This work presents PowerSC, a novel power estimation framework that instruments SystemC for power characterization, modeling, and estimation. Since it is entirely based on SystemC, PowerSC allows consistent power modeling from the highest to the lowest abstraction level. Besides, the framework's API provides facilities to integrate alternative modeling techniques, either at the same or at different abstraction levels. As a result, the required power evaluation infrastructure is reduced to a minimum: the standard SystemC library, the PowerSC library itself, and a C++ compiler. Although RTL power macromodeling is a mature research topic, it is not yet broadly accepted in industry. One of the main reasons impairing its widespread use as a power estimation paradigm is that each macromodeling technique makes assumptions that lead to some sort of intrinsic limitation, thereby affecting its accuracy. Therefore, alternative macromodeling methods can be envisaged as part of a power modeling toolkit from which the most suitable method for a given component is automatically selected. This paper describes a new multi-model power estimation engine that selects the macromodeling technique leading to the smallest estimation error for a given system component, depending on the properties of its input-vector stream. A proper selection function is built after component characterization and used during estimation. Experimental results show that our multi-model engine improves the robustness of power analysis, significantly reducing the average and maximum estimation errors when compared to conventional single-model estimation.
Abstract: This work presents the Conceptual Multi-Device Design (CMDD) approach together with a deeper discussion of its theoretical assumptions. The proposal suggests designing for multiple devices by maintaining the application's conceptual model (in a wide sense, including the navigational and presentation models) on every interface, to avoid ambiguities in the user's mental model. This consistency supports decision making, allowing users to behave according to their previous experience while executing the same task on different interfaces of a given application. The CMDD framework that provides mobile access (with pocket PCs or smartphones) to desktop web interfaces is improved, and first impressions with beta prototypes are presented. We expect to conduct complete user evaluations soon, for a better identification of the advantages of this proposal.
Instituto de Computação :: Universidade Estadual de Campinas
Caixa Postal 6176 • Av. Albert Einstein, 1251 - Cidade Universitária • CEP 13083-970 • Campinas/SP - Brasil • Fone:  3521-5838