Technical Reports Published in 2007

  • IC-07-37 pdf bib
    Treplica: Ubiquitous replication.
    Gustavo M. D. Vieira and Luiz E. Buzato.
    December 2007. In English, 15 pages.

    Abstract: This paper describes Treplica, a tool designed for ubiquitous replication. Most of the software tools created so far to aid the construction of distributed applications address only how to keep volatile data consistent in the presence of failures, without offering any relief for the problem of managing persistence. Treplica simplifies the development of highly available applications by making transparent the complexities of dealing with replication and persistence. These complexities are not negligible, and we believe we have found a compelling way of factoring out these concerns in a simple-to-understand programming interface.

  • IC-07-36 pdf bib
    Proceedings of the 3rd ongoing PhD thesis workshop at IC-UNICAMP.
    Claudia M. Bauzer Medeiros, Alan Massaru Nakai, Carla Geovana do Nascimento Macario, Diego Zuquim Guimaraes Garcia, Gilberto Zonta Pastorello Jr, Jorge Lima de Oliveira Filho, Neumar Costa Malheiros, Nielsen Cassiano Simões, and Vânia Paula de Almeida Neris (Eds.).
    December 2007. Partly in English, partly in Portuguese, 92 pages.

    Summary: This technical report contains summaries of 27 papers presented at the III Doctoral Thesis Workshop at the Institute of Computing at UNICAMP. The workshop, held from November 21 to 23, 2007, allowed the Institute's doctoral students to present the main aspects of their research. Each chapter corresponds to a presentation, the text being limited to 4 pages.

    Participation was voluntary and the academic profile of the participants varied, ranging from students recently admitted to the program to those who already had their defense scheduled in December 2007.

    The publication of the abstracts in the form of a technical report aims to promote the dissemination of the doctoral work in progress at the IC. It is also a succinct record of the current state of several of these research projects.

  • IC-07-35 pdf bib
    On the coordinator's rule for Fast Paxos.
    G. M. D. Vieira and L. E. Buzato.
    November 2007. In English, 7 pages.

    Abstract: Fast Paxos is an algorithm for consensus that works by a succession of rounds, where each round tries to decide a value v that is consistent with all past rounds. Rounds are started by a coordinator process and consistency is guaranteed by the rule used by this process for the selection of v and by the properties of process sets called quorums. We show a simplified version of this rule for the specific case where the quorums are defined by the cardinality of these process sets. This rule is of special interest for implementors of the algorithm.
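
    The cardinality-based form of the coordinator's rule can be sketched as follows. This is a hypothetical Python rendering under common Fast Paxos parameter choices, not the report's exact formulation: `n` is the number of acceptors, `q` the classic quorum size, `r` the fast quorum size, and the threshold `r + q - n` is the cardinality version of rule O4.

```python
def pick_value(votes, n, q, r, any_proposed):
    """Simplified coordinator value-selection rule for cardinality quorums.
    votes: list of (round, value) pairs reported by a classic quorum of
    q acceptors; (None, None) means the acceptor never voted."""
    voted = [(k, v) for k, v in votes if k is not None]
    if not voted:
        return any_proposed              # no past votes: free choice
    k = max(rnd for rnd, _ in voted)     # highest round with reported votes
    in_k = [v for rnd, v in voted if rnd == k]
    if len(set(in_k)) == 1:
        return in_k[0]                   # single candidate: must pick it
    # Collision (only possible after a fast round): v is safe iff at least
    # r + q - n of the q reporters voted v in round k. The fast-quorum
    # requirement 2r + q > 2n guarantees at most one value passes.
    threshold = r + q - n
    for v in set(in_k):
        if in_k.count(v) >= threshold:
            return v
    return any_proposed                  # no value could have been chosen
```

With the usual choices q = floor(n/2) + 1 and r = ceil(3n/4), the threshold reduces to a majority of the quorum's reports.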

  • IC-07-34 pdf bib
    A hybrid heuristic for forest harvest and transportation problems.
    Arnaldo Vieira Moura and Rafael Augusto Scaraficci.
    November 2007. In English, 29 pages.

    Abstract: This work treats harvest and transportation planning problems stemming from the daily operation of some large pulp and paper companies. The problem consists of scheduling harvest and transportation tasks for a planning horizon of around one year. It comprises a large and complex set of constraints, including constraints related to the structure of the harvest areas, the structure and capacity of the harvest teams, the weather, and some properties of the wood logs. We developed a hybrid algorithm that has three phases, namely, construction, local search, and post-optimization. The first two are based on a GRASP metaheuristic and are used to generate a pool of elite solutions. In the third phase, a relaxed linear model is generated for each solution in the pool, and the solutions of these models are converted into valid solutions. The algorithm was tested with real data and proved to be an adequate approach to treat this problem. Although some restrictions are specific to this problem, the same ideas can inspire similar approaches for solving harvest and transportation planning problems in other situations.
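
    The construction and local search phases described above follow the standard GRASP template. The sketch below illustrates that template on a deliberately toy problem (choosing k items of maximum total value); the alpha parameter, the restricted candidate list handling, and the swap neighbourhood are illustrative assumptions, and the report's LP-based post-optimization phase over the elite pool is omitted.

```python
import random

def grasp(candidates, k, iters=30, alpha=0.3, seed=1):
    """GRASP skeleton: greedy randomized construction + local search,
    illustrated on a toy problem (pick k items maximizing total value)."""
    rng = random.Random(seed)
    best = None
    for _ in range(iters):
        # --- greedy randomized construction: pick from a restricted
        # candidate list (RCL) of near-greedy choices ---
        pool, sol = sorted(candidates, reverse=True), []
        while len(sol) < k:
            limit = pool[0] - alpha * (pool[0] - pool[-1])
            rcl = [c for c in pool if c >= limit]
            pick = rng.choice(rcl)
            sol.append(pick)
            pool.remove(pick)
        # --- local search: swap a chosen item for a better unused one ---
        improved = True
        while improved:
            improved = False
            for i, s in enumerate(sol):
                better = [c for c in pool if c > s]
                if better:
                    c = max(better)
                    pool.remove(c)
                    pool.append(s)
                    sol[i] = c
                    improved = True
        if best is None or sum(sol) > sum(best):
            best = sol[:]
    return sorted(best, reverse=True)
```

Each local-search swap strictly improves the objective, so the search terminates at a local optimum; repeated GRASP iterations diversify the starting points.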

  • IC-07-33 pdf bib
    A systematic approach for architectural design of component-based product lines.
    Ana Elisa de Campos Lobo, Patrick Henrique da Silva Brito, and Cecília Mary Fischer Rubira.
    November 2007. In English, 23 pages.

    Abstract: Software product line is a systematic software reuse approach that promotes the generation of specific products from a set of core assets for a given domain, exploiting the commonalities and variabilities among these products. Feature modeling is one of the most accepted ways to represent commonalities and variabilities at the requirements phase. At the architecture design phase, the product line architecture is an important core asset that should represent the common and variable parts of a product line. This technical report proposes an approach that supports the architectural design phase of component-based product line engineering. Our proposal for the conception of a product line architecture is based on the definition of variable architectural elements and on how they are derived from a given feature model and the interactions among features, providing a prescribed way to map features to the variable architectural elements composing the product line architecture. The main results of this work are a metamodel that defines product line concepts extended with software architecture concepts, and a method for architectural design in product line engineering, used to generate a product line architecture and instantiate it for a specific product. In order to evaluate the method, a case study was carried out, applying the approach to a product line in the mobile domain.

  • IC-07-32 pdf bib
    Brazilian Computer Science research: gender and regional distributions.
    Denis Arruda, Fabio Bezerra, Vania Almeida Neris, Patricia Toro, and Jacques Wainer.
    October 2007. In English, 17 pages.

    Abstract: This paper analyzes the distribution of some characteristics of computer scientists in Brazil according to region and gender. A computer scientist is defined as a faculty member of a graduate-level computer science department. Under this definition, there were 886 computer scientists in Brazil in November 2006. We studied the self-declared research areas of these scientists, their production of journal and conference papers, and whether they receive a recognition fellowship from one of the Brazilian grant agencies, and analyzed the differences with respect to the geo-political region of Brazil and the gender of the scientists.

  • IC-07-31 pdf bib
    Data clustering based on optimum-path forest and probability density function.
    Leonardo M. Rocha, Alexandre X. Falcão, and Luis G. P. Meloni.
    October 2007. In English, 20 pages.

    Abstract: The identification of natural groups in a dataset is reduced to an optimum-path forest problem in a graph. We define a graph whose nodes are data samples and whose arcs connect adjacent samples in a feature space. The nodes are weighted by their probability density values and different choices of a path-value function lead to effective solutions for data clustering. The method identifies a root node for each cluster and finds the samples which are "more strongly connected" to each root than to any other. The output is an optimum-path forest whose trees (clusters) are the influence zones of their roots. This framework extends the image foresting transform from the image domain to the feature space, revealing important theoretical relations among relative fuzzy-connected segmentation, morphological reconstructions, watershed transforms, and clustering by influence zones. It also provides a more general and robust implementation for the popular mean-shift algorithm. The results are illustrated in image segmentation.
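
    A much-simplified relative of this idea can be sketched in a few lines: estimate a density per sample, link each sample to its densest neighbour, and let the trees of the resulting forest be the clusters. This is only an illustrative toy; the actual method optimizes a path-value function over the graph, and the Gaussian kernel, bandwidth, and k-NN adjacency below are assumptions of the sketch.

```python
import math

def cluster_by_density(points, k=3):
    """Toy density-based clustering in the spirit of the optimum-path
    forest method: roots are local density maxima; each sample climbs
    to its densest neighbour; trees of the forest are the clusters."""
    n = len(points)
    d = lambda a, b: math.dist(points[a], points[b])
    # k-nearest-neighbour adjacency (excluding the sample itself)
    nbrs = [sorted(range(n), key=lambda j: d(i, j))[1:k + 1] for i in range(n)]
    # Gaussian-kernel density estimate over the k-NN arcs
    sigma = max(d(i, j) for i in range(n) for j in nbrs[i]) / 3 or 1.0
    rho = [sum(math.exp(-d(i, j)**2 / (2 * sigma**2)) for j in nbrs[i])
           for i in range(n)]
    # Predecessor map: climb towards the densest neighbour;
    # samples denser than all their neighbours become roots
    pred = [max(nbrs[i], key=lambda j: rho[j]) for i in range(n)]
    pred = [p if rho[p] > rho[i] else i for i, p in enumerate(pred)]
    def root(i):
        while pred[i] != i:
            i = pred[i]
        return i
    return [root(i) for i in range(n)]   # cluster label = index of the root
```

On two well-separated blobs, each blob collapses onto its own density maximum.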

  • IC-07-30 pdf bib
    Low-cost educational laptops: Proposed guidelines for community development.
    Leonardo Cunha de Miranda, Heiko Horst Hornung, Roberto Romani, Maria Cecília Calani Baranauskas, and Hans Kurt Edmund Liesenberg.
    September 2007. In Portuguese, 12 pages.

    Summary: Many social differences exist in the Brazilian population. Research has shown how community computing can help to reduce these differences. Aiming at the development of communities, we propose here a wider use of low-cost laptops than that proposed by the UCA project (One Computer per Student) promoted by the Federal Government. Some recommendations derived from good practices in the area are presented, which have been shown to increase the chances of success of community development initiatives.

    Abstract: Many social inequalities exist within the Brazilian population. Research has shown how community informatics is capable of contributing to the reduction of those inequalities. Aiming at the development of communities, we here propose a broader use of low-cost laptops than the one proposed by the UCA (one computer per student) project sponsored by the Federal Government. Some recommendations derived from best practices in the field are presented that proved to increase the chances of success of community development initiatives.

  • IC-07-29 pdf bib
    A new hybrid clustering approach to image retrieval.
    Anderson Rocha, Jurandy Almeida, Ricardo Torres, and Siome Goldenstein.
    September 2007. In English, 15 pages.

    Abstract: In this paper, we present a new Hybrid Hierarchical Clustering approach for Image Retrieval. Our method combines features from both divisive and agglomerative clustering paradigms in order to yield good-quality clustering solutions with reduced computational cost. We provide several experiments showing that our technique reduces the number of required comparisons to perform a retrieval without significant loss in effectiveness when compared to flat-based solutions.

  • IC-07-28 pdf bib
    Image retrieval based on color and scale representative image regions (CSIR).
    Jurandy Almeida, Anderson Rocha, Ricardo Torres, and Siome Goldenstein.
    September 2007. In English, 16 pages.

    Abstract: Content-based image retrieval (CBIR) is a challenging task. Common techniques use only low-level features. However, these solutions can lead to the so-called 'semantic gap' problem: images with high feature similarities may be different in terms of user perception. In this paper, our objective is to retrieve images based on color cues which may present some affine transformations. For that, we present CSIR: a new method for comparing images based on discrete distributions of distinctive color and scale image regions. We validate the technique using images with a large range of viewpoints, partial occlusion, changes in illumination, and various domains.

  • IC-07-27 pdf bib
    A new technique for instruction encoding in high performance architectures.
    Rafael Fernandes Batistella, Ricardo Ribeiro dos Santos, and Rodolfo Jardim de Azevedo.
    September 2007. In English, 18 pages.

    Abstract: In this paper we propose a new technique to reduce the program footprint and the instruction fetch latency in high performance architectures adopting long instructions in memory. Our technique is based on an algorithm that factors long instructions into encoded instructions and instruction patterns. An encoded instruction contains no redundant data and is stored in an I-cache. The instruction patterns, on the other hand, act as a map for the decode logic to prepare the instruction to be executed in the execution stages. These patterns are stored in a new cache, named Pattern cache (P-cache). The technique has shown to be a suitable alternative to well-known architectural styles such as VLIW and EPIC architectures. We have carried out a case study of this technique in a high performance architecture called 2D-VLIW. We have evaluated the performance of our encoding technique through trace-driven experiments with MediaBench, SPECint00, and SPECfp programs. Moreover, we have compared the execution time performance of the 2D-VLIW architecture with encoded instructions to the same architecture with non-encoded instructions. Further experiments compare our approach to VLIW and EPIC instruction encoding techniques. Experimental results reveal that our encoding strategy provides a program execution time that is up to 23x (average of 5x) faster than a 2D-VLIW non-encoded program. The results also show that the program code region using encoded instructions is up to 78% (average of 69%) of that using non-encoded instructions.

  • IC-07-26 pdf bib
    Fast and robust mid-sagittal plane location in 3D MR images of the brain.
    FPG Bergo, GCS Ruppert, LF Pinto, and AX Falcão.
    August 2007. In English, 12 pages.

    Abstract: Extraction of the mid-sagittal plane (MSP) is an important step for brain image registration and asymmetry analysis. We present a fast MSP extraction method for 3D MR images, which is based on automatic segmentation of the brain and on heuristic maximization of cerebrospinal fluid within the MSP. The method is shown to be robust to severe anatomical asymmetries between the hemispheres, caused by surgical procedures and lesions. The experiments used 64 MR images (36 pathological, 20 healthy, 8 synthetic) and the method found an acceptable approximation of the MSP in all images with a mean time of 60.0 seconds per image.

  • IC-07-25 pdf bib
    Design of a traffic anonymization network.
    Diego F. Aranha and Julio López.
    August 2007. In Portuguese, 41 pages.

    Summary: In this work, a traffic anonymization network is proposed, based on the creation of an anonymous and efficient routing policy with good anonymity for the sender, the recipient, and the communicating pair. The organization, architecture and topology used in the network are extensively analyzed, and their efficiency and quality of anonymity are empirically verified through simulations. Additional improvements to the topology used are also proposed.

  • IC-07-24 pdf bib
    Public key encryption without certificates.
    Diego F. Aranha and Julio López.
    August 2007. In Portuguese, 33 pages.

    Summary: Certificateless Public Key Cryptography (CL-PKC) is a paradigm for building public key cryptosystems designed to overcome the limitations of Identity-Based Cryptography (ID-PKC). In this work, the properties, advantages and implications of this new model are presented and discussed, with emphasis on its instantiation from bilinear pairings on elliptic curves. Considerations regarding the security of the paradigm are formulated based on the degree of trust placed in the central authority, the adversary model adopted, and the set of most common intractable problems on which the security of the protocols depends. Additionally, encryption and signature schemes developed under these premises are presented with implementation details.

  • IC-07-23 pdf bib
    Automatic image segmentation by tree pruning.
    F. P. G. Bergo, A. X. Falcão, P. A. V. Miranda, and L. M. Rocha.
    July 2007. In English, 33 pages.

    Abstract: The Image Foresting Transform (IFT) is a tool for the design of image processing operators based on connectivity, which reduces image processing problems to a minimum-cost path forest problem in a graph derived from the image. We propose a new image operator, which solves segmentation by pruning trees of the forest. An IFT is applied to create an optimum path forest whose roots are seed pixels, selected inside a desired object. In this forest, object and background are connected by optimum paths (leaking paths), which cross the object's boundary through its "most weakly connected" parts (leaking pixels). These leaking pixels are automatically identified and their subtrees are eliminated, such that the remaining forest defines the object. Tree pruning runs in linear time, extends to multidimensional images, is free of ad-hoc parameters, requires only internal seeds, and works with minimal interference from the heterogeneity of the background. These aspects favor solutions that exploit image features and object information for automatic segmentation. We give a formal definition of the obtained objects and conditions to achieve robust segmentation using tree pruning, describe the algorithms, and evaluate automatic segmentation by tree pruning in two applications: 3D MR-image segmentation of the human brain and segmentation of license plates. Given that its most competitive approach is the watershed transform by markers, we include a comparative analysis between them.

  • IC-07-22 pdf bib
    A dynamic algorithm for shortest path trees.
    Daniel Felix Feber and Arnaldo Vieira Moura.
    July 2007. In English, 42 pages.

    Summary: A dynamic algorithm is proposed to maintain the tree rooted at a special origin vertex and formed by shortest paths to each vertex of the graph. The graph has costs on its edges, with no additional restrictions on the cost domain as long as the values are non-negative. The problem is addressed with multiple simultaneous operations on the graph: increases and decreases of edge costs. The worst-case complexity is O(m + n log n) for a graph with n vertices and m edges. The execution time is less than or equal to that of computing a new shortest path tree from scratch. Graph operations are handled by a single procedure, similar to Dijkstra's algorithm.
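
    For reference, the static structure being maintained is the classic shortest path tree, which a Dijkstra pass computes from scratch. The sketch below shows that baseline only; the report's contribution, updating the tree incrementally under batches of edge-cost increases and decreases, is not reproduced here. The undirected-edge representation is an assumption of the sketch.

```python
import heapq

def shortest_path_tree(n, edges, source):
    """Dijkstra baseline: builds the shortest path tree rooted at `source`.
    edges: dict (u, v) -> non-negative cost, treated as undirected."""
    adj = [[] for _ in range(n)]
    for (u, v), c in edges.items():
        adj[u].append((v, c))
        adj[v].append((u, c))
    dist = [float('inf')] * n
    parent = [None] * n        # parent[v] is v's predecessor in the tree
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue           # stale heap entry, already improved
        for v, c in adj[u]:
            if d + c < dist[v]:
                dist[v] = d + c
                parent[v] = u  # tree edge of the shortest path tree
                heapq.heappush(heap, (dist[v], v))
    return dist, parent
```

A dynamic algorithm aims to repair `dist` and `parent` after cost changes faster than rerunning this whole procedure.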

  • IC-07-21 pdf bib
    Flow-critical graphs.
    Cândida Nunes da Silva and Cláudio L. Lucchesi.
    June 2007. In English, 16 pages.

    Abstract: In this paper we introduce the concept of k-flow-critical graphs. These are graphs that do not admit a k-flow but such that any smaller graph obtained from them by contraction of edges or of sets of vertices is k-flowable. Throughout this paper we will refer to k-flow-critical graphs simply as k-critical.

    Any minimum counterexample for Tutte's 3-Flow and 5-Flow Conjectures must be 3-critical and 5-critical, respectively. Thus, any progress towards establishing good characterizations of k-critical graphs can represent progress in the study of these conjectures. We present some interesting properties satisfied by k-critical graphs discovered recently.

  • IC-07-20 pdf bib
    New approaches to groupware scheduling.
    Fábio Silva and Rogério Drummond.
    June 2007. In English, 27 pages.

    Abstract: Groupware scheduling is an essential and continuous activity. The complexity of reconciling the schedules of several people makes proper time management a challenge. It involves sophisticated communication mechanisms to support information sharing and scheduling negotiation among users of diverse systems. Group scheduling also demands application support for over-constrained schedules and rescheduling, but such needs have not yet been fulfilled. Based on a review of past approaches to this problem, their successes and failures, new approaches for groupware scheduling are presented. The development of calendaring and scheduling protocols is reviewed, as well as the application of promising techniques from the fields of optimization, artificial intelligence and mixed-initiative interfaces. Towards a new generation of more intelligent and powerful groupware scheduling tools that will increase productivity by minimizing coordination effort, a multidisciplinary application of knowledge from these areas is proposed.

  • IC-07-19 pdf bib
    Low cost educational laptops: Preliminary analysis based on the semiotic ladder.
    Leonardo Cunha de Miranda, Heiko Horst Hornung, Diego Samir Melo Solarte, Roberto Romani, Maristela Regina Weinfurter, Vânia Paula de Almeida Neris, and Maria Cecília Calani Baranauskas.
    June 2007. In English, 24 pages.

    Summary: In 2005, the MIT Media Lab started a project to develop a low-cost laptop, initially called the '$100 laptop' or 'The Children's Machine'. As a result of this project, with implications for the education of children worldwide, other initiatives to develop low-cost laptops for educational use have emerged. After meetings with leaders and representatives of these initiatives, the Brazilian government showed interest in the idea and is studying ways to apply it to the public education system in the country. In this context, considering the still early stage of discussion on the issue, this technical report presents the results of a preliminary analysis carried out on three models of educational laptops under consideration by the Government. The analysis was based on an inspection of the equipment using a semiotic framework.

    Abstract: In 2005, the MIT Media Lab initiated a project to develop a low-cost laptop, initially called the '100-dollar laptop' or 'The Children's Machine'. As a consequence of this project, with implications for the education of children throughout the world, other initiatives to develop low-cost laptops for educational purposes appeared. After meetings with leaders and representatives of these initiatives, the Brazilian government showed interest in the idea and is studying ways to apply it to the public education system in the country. Within this context, considering the early stage of the discussion on the question, this technical report presents the results of a preliminary analysis carried out on three models of educational laptops under consideration by the Government. The analysis was based on an inspection of the equipment using a semiotic-based framework.

  • IC-07-18 pdf bib
    Scientific production in computer science: Brazil and other countries.
    E. C. Xavier, Fabio Bezerra, and Jacques Wainer.
    May 2007. In English, 13 pages.

    Abstract: In this paper we present a study about scientific production in Computer Science in Brazil and several other countries, as measured by the number of articles in journals indexed by ISI. We compare the Brazilian production from 2001 to 2005 with some Latin American, Latin European, BRIC (Brazil, Russia, India, China), and other relevant countries. We also classify and compare these countries' production according to the impact factor of the journals in which it was published, and according to each country's known research and development investment.

    The results show that Brazil has by far the largest production among Latin American countries, with a production about one third of Spain's, one fourth of Italy's, and a little larger than Portugal's. The growth in Brazilian publications during the period places the country in the group with mid-range growth, but regarding dollar productivity, Brazil joins the other BRIC countries as the ones with the lowest productivity. The distribution of Brazilian production according to impact factor is similar to most countries.

  • IC-07-17 pdf bib
    Intrinsic mesh segmentation.
    Fernando de Goes, Siome Goldenstein, and Luiz Velho.
    May 2007. In English, 13 pages.

    Abstract: Mesh segmentation offers a desirable divide-and-conquer strategy for many graphics applications. In this paper, we present a novel, efficient, and intrinsic method to segment meshes following the minimal rule. The eigenfunctions of the Laplace-Beltrami operator define locality and volume-shape preserving functions over a surface. Inspired by manifold learning theory, we use these functions as the basis of an embedding space for mesh vertices and group them using $k$-means clustering. We also present a new kind of segmentation hierarchy built from the analysis of the Laplace-Beltrami operator spectrum.
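
    The embed-then-cluster pipeline can be sketched on a plain graph. This is a simplifying stand-in, not the paper's method: it uses the combinatorial graph Laplacian instead of the Laplace-Beltrami operator of the surface, takes the k-1 lowest nontrivial eigenvectors as embedding coordinates (one common choice), and runs plain Lloyd-style k-means with a deterministic farthest-first initialization.

```python
import numpy as np

def spectral_segments(adj, k):
    """Embed graph vertices with low-frequency Laplacian eigenvectors,
    then cluster the embedding with k-means. adj: symmetric weight matrix."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj      # combinatorial Laplacian L = D - W
    _, vecs = np.linalg.eigh(lap)             # eigenvectors, ascending eigenvalues
    emb = vecs[:, 1:k]                        # skip the constant eigenvector
    # deterministic farthest-first initialization for k-means
    idx = [int(np.argmax(np.linalg.norm(emb - emb.mean(0), axis=1)))]
    while len(idx) < k:
        dmin = np.linalg.norm(emb[:, None] - emb[idx][None], axis=-1).min(axis=1)
        idx.append(int(np.argmax(dmin)))
    centers = emb[idx].copy()
    for _ in range(50):                       # Lloyd iterations
        labels = ((emb[:, None] - centers[None])**2).sum(-1).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = emb[labels == j].mean(axis=0)
    return labels
```

On a graph made of two tightly connected groups joined by one weak edge, the Fiedler coordinate separates the groups cleanly.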

  • IC-07-16 pdf bib
    Taxonomy and recommendation for the use of physical artifacts for interaction with TVDI.
    Leonardo Cunha de Miranda, Lara Schibelsky Godoy Piccolo, and Maria Cecília Calani Baranauskas.
    May 2007. In Portuguese, 17 pages.

    Summary: The digitalization of terrestrial TV transmission and the potential demand for interactivity on TV establish a new paradigm of interaction, of extreme social importance, especially for the population of Brazil. The present work approaches this theme from the perspective of interaction design, a theme that is little explored in the literature on Interactive Digital Television (TVDI), mainly in Brazil. The technical report presents a taxonomy for physical interaction devices - composed of hardware and software artifacts - in addition to a proposal for ways of using traditional physical artifacts to interact with TVDI. The proposal is based on the nature of the interactive applications and the profile of users. The results of this work may guide the choice and formulation of proposals for new digital artifacts for interaction with TVDI.

    Abstract: The digitalization of terrestrial TV transmission and the potential demand for interactivity on TV establish a new interaction paradigm, of extreme social importance, especially for the population of Brazil. The present work approaches this topic from the perspective of interaction design, a subject hardly explored in the literature on Interactive Digital Television (iDTV), mainly in Brazil. The technical report presents a taxonomy for physical interaction devices - composed of hardware and software components - and a proposal of ways of using traditional devices for interaction with iDTV. The proposal is based on the nature of the interactive applications and on a profile of users. The results of this work could guide the choice and the formulation of proposals for new digital devices for interaction with iDTV.

  • IC-07-15 pdf bib
    A polynomial time algorithm for recognizing near-bipartite Pfaffian graphs.
    Alberto Alexandre Assis Miranda and Cláudio Leonardo Lucchesi.
    May 2007. In English, 6 pages.

    Abstract: A matching covered graph is a nontrivial connected graph in which every edge is in some perfect matching. A matching covered graph G is near-bipartite if it is non-bipartite and there is a removable double ear R such that G - R is matching covered and bipartite. In 2000, C. Little and I. Fischer characterized Pfaffian near-bipartite graphs in terms of forbidden subgraphs. However, their characterization does not imply a polynomial time algorithm to determine whether a near-bipartite graph is Pfaffian. In this report, we give such an algorithm. Our algorithm is based on an Inclusion-Exclusion principle and uses as a subroutine an algorithm by McCuaig and by Robertson, Seymour and Thomas that determines whether a bipartite graph is Pfaffian.

  • IC-07-14 pdf bib
    Evaluation of a read-optimized database for dynamic web applications.
    A. Supriano, GMD Vieira, and LE Buzato.
    May 2007. In English, 17 pages.

    Abstract: In this paper we investigate the use of a specialized data warehousing database management system as a data back-end for web applications and assess the performance of this solution. We have used the Monet database as a drop-in replacement for traditional databases, and performed benchmarks comparing its performance to the performance of two of the most commonly used databases for web applications: MySQL and PostgreSQL. Our main contribution is to show for the first time how a read-optimized database performs in comparison to established general purpose database management systems for the domain of web applications. Monet's performance in relation to MySQL and PostgreSQL allows us to affirm that the widely accepted assumption that relational database management systems are fit for all applications can be reconsidered in the case of dynamic web applications.

  • IC-07-13 pdf bib
    A new pattern classifier based on optimum path forest.
    J. P. Papa, A. X. Falcão, P. A. V. Miranda, C. T. N. Suzuki, and N. D. A. Mascarenhas.
    May 2007. In English, 12 pages.

    Abstract: We introduce a supervised pattern classifier based on optimum path forest. The samples in a training set are nodes of a complete graph, whose arcs are weighted by the distances between sample feature vectors. The training builds a classifier from key samples (prototypes) of all classes, where each prototype defines an optimum path tree whose nodes are its most strongly connected samples. The optimum paths are also used to label unseen test samples with the classes of their most strongly connected prototypes. We show how to find prototypes with no classification errors in the training set and propose a learning algorithm to improve accuracy over an evaluation set without overfitting the test set. The method is robust to outliers, handles non-separable classes, and can outperform artificial neural networks and support vector machines.
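
    A simplified sketch of the idea (not the paper's full algorithm): prototypes are taken as the endpoints of inter-class edges of the training set's minimum spanning tree, each training sample receives the cost of its best path from a prototype under the maximum-arc path-value function, and a test sample takes the label of the training sample offering the cheapest extension of such a path. Euclidean distance and the Prim/Dijkstra details are assumptions of the sketch.

```python
import heapq
import math

def train_opf(X, y):
    """Returns the optimum-path cost of every training sample from the
    prototype set, under the max-arc path-value function."""
    n = len(X)
    d = lambda i, j: math.dist(X[i], X[j])
    # Prim's MST over the complete training graph
    in_tree, parent, key = [False] * n, [None] * n, [math.inf] * n
    key[0] = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: key[i])
        in_tree[u] = True
        for v in range(n):
            if not in_tree[v] and d(u, v) < key[v]:
                key[v], parent[v] = d(u, v), u
    # prototypes: endpoints of MST edges joining different classes
    protos = set()
    for i in range(n):
        if parent[i] is not None and y[i] != y[parent[i]]:
            protos.update((i, parent[i]))
    # cost map: Dijkstra-like propagation of the max-arc path value
    cost = [0.0 if i in protos else math.inf for i in range(n)]
    heap = [(0.0, i) for i in protos]
    heapq.heapify(heap)
    while heap:
        c, u = heapq.heappop(heap)
        if c > cost[u]:
            continue
        for v in range(n):
            nc = max(c, d(u, v))
            if nc < cost[v]:
                cost[v] = nc
                heapq.heappush(heap, (nc, v))
    return cost

def classify_opf(X, y, cost, t):
    # label of the training sample offering the cheapest max-arc path to t
    s = min(range(len(X)), key=lambda i: max(cost[i], math.dist(X[i], t)))
    return y[s]
```

The complete-graph scans make this O(n^2) per phase, which is fine for an illustration but not for large training sets.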

  • IC-07-12 pdf bib
    Providing homogeneous access for sensor data management.
    Gilberto Zonta Pastorello Jr, Claudia Bauzer Medeiros, and André Santanchè.
    May 2007. In English, 44 pages.

    Abstract: We are facing the proliferation of several kinds of sensing devices, from satellites to tiny sensors. This has opened up new possibilities for us to understand, manage and monitor a given environment, from the small (e.g., a room) to the large (e.g., the planet). This, however, has added a new dimension to the classic problem of heterogeneous data management - how to handle increasing volumes of sensing data from a wide range of sensors. This report is concerned with the problem of sensor data publication. Our solution involves the design and implementation of a framework for sensor data management, which applies technologies based on Semantic Web standards, components and scientific workflows. Individual sensors or networks are encapsulated into a specific kind of component - DCC - which supports homogeneous access to data and software. DCCs are themselves handled by scientific workflows that provide facilities for controlling data production, integration and publication. As a result, applications that require sensor data will instead interact with workflows, being liberated from concerns such as sensor particularities, or from providing separate handlers for real-time streams. The report also presents initial implementation results.

  • IC-07-11 pdf bib
    A method for modeling architecture based on system requirements.
    Patrick Henrique da Silva Brito, Maria Antonia Martins Barbosa, Paulo Asterio de Castro Guerra, and Cecília Mary Fischer Rubira.
    March 2007. In Portuguese, 39 pages.

    Summary: In the development of modern systems, the architectural design phase has received increasing attention. Because the architecture is considered the ideal place to realize the non-functional requirements of a system, late changes to it tend to have a very high cost. Moreover, a well-designed architecture not only increases the quality of the system but also facilitates its understanding and maintenance. However, designing a software architecture is a complex task that depends on the experience of the development team. This work proposes a method to systematize the specification of the logical view of the system architecture, refined with the architectural style of independent components. The method thus aims to reduce the penalty incurred by an inexperienced development team. The proposed method was validated through a simple case study, presented at the end of the document.

  • IC-07-10 pdf bib
    Study on architectural styles for component-based software systems.
    Patrick Henrique da Silva Brito, Paulo Asterio de Castro Guerra, and Cecília Mary Fischer Rubira.
    March 2007. In Portuguese, 16 pages.

    Summary: With the increase in computing power, new requirements are being demanded of modern computer systems. Moreover, given its flexibility and ease of evolution, software plays an increasingly important role in the context of automated systems. A consequence of the greater demand for software functionality is an increase in its complexity, which can compromise maintenance activities and even reduce the reliability of the systems. One way to control this complexity is through the software architecture, which represents the system at a high level of abstraction, in terms of its high-level elements and the rules of interaction between them. When a family of systems has similar software architectures, we say that they follow the same architectural style. This report presents a set of architectural styles widely used in practice, exemplifying the main characteristics of each one, as well as indications for their use.

  • IC-07-09 pdf bib
    A method of development and testing for reliable component-based systems: A case study.
    Camila Ribeiro Rocha, Patrick Henrique da Silva Brito, Eliane Martins, and Cecília Mary Fischer Rubira.
    March 2007. In Portuguese, 153 pages.

    Summary: The design, implementation, and testing of exceptional software behavior are complex tasks that usually do not receive the necessary attention from existing development methodologies. This work presents a systematic way of dealing with exception handling throughout the development of component-based systems, from requirements specification through design, implementation, and testing. The approach is presented through a case study of a financial system. Testing activities are carried out from the early stages of development, increasing the quality of the system produced. Our solution refines the Methodology for Defining Exceptional Behavior (MDCE) in the architectural design, implementation, and testing phases; the method resulting from this refinement is called MDCE+. In addition to refining MDCE, the MDCE+ method was adapted to the UML Components process. From the point of view of the CompGov project, this case study was used to partially evaluate the component-based development process with maximized reuse. The evaluated part of the process consisted of reusing components based on the specifications of their provided and required interfaces. In future work, the remaining reuse activities must be evaluated: (i) negotiation of requirements based on existing components; and (ii) reuse of frameworks and architectural components.
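
    The separation of normal and exceptional activity that this line of work builds on can be sketched as follows. This is a minimal, hypothetical illustration of the classical "idealized fault-tolerant component" structure, not the MDCE+ method itself; all class and function names are invented for the example.

```python
# Sketch of an idealized fault-tolerant component: the normal activity
# is separated from the exception-handling part, and failures that the
# handlers cannot mask are propagated to the enclosing context.

class InternalException(Exception):
    """Raised by the normal activity; a candidate for local handling."""

class ExternalException(Exception):
    """Signalled to the enclosing component when masking fails."""

class IdealizedComponent:
    def __init__(self, normal_activity, handlers):
        self.normal_activity = normal_activity
        self.handlers = handlers  # maps exception type -> recovery function

    def service(self, request):
        try:
            return self.normal_activity(request)          # normal part
        except InternalException as exc:
            handler = self.handlers.get(type(exc))
            if handler is not None:
                return handler(request)                   # exception masked
            raise ExternalException(str(exc)) from exc    # propagate outward
```

    Wrapping an untrusted component this way makes its exception flow explicit at the architectural level: callers only ever see the component's normal responses or its declared external exceptions.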

  • IC-07-08 pdf bib
    A hybrid concurrency control to adapt transactions in mobile environments: proof of correctness.
    Tarcisio Rocha and Maria Beatriz F. de Toledo.
    March 2007. In Portuguese, 14 pages.

    Summary: Concurrency control mechanisms have been of great importance in mediating concurrent access to shared computing resources. These mechanisms can be categorized as optimistic or pessimistic. Each approach has advantages and disadvantages that can be better exploited in mobile computing environments, which are subject to disconnections, variable bandwidth, and high communication costs. This report presents a hybrid concurrency control that seeks to combine the advantages of the optimistic and pessimistic approaches in order to better deal with the characteristics of mobile environments. In particular, this report presents a proof of correctness of the proposed concurrency control.
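
    The two approaches being combined can be sketched in a few lines. This is a hypothetical toy scheme, not the authors' protocol: connected clients lock items pessimistically on read, while disconnected clients read optimistically and are validated against item versions at commit time.

```python
class Item:
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.locked_by = None

class Txn:
    def __init__(self):
        self.reads = {}        # name -> version observed at read time

class HybridManager:
    def __init__(self, store):
        self.store = store     # name -> Item

    def read(self, txn, name, connected):
        item = self.store[name]
        if connected:                           # pessimistic: lock on read
            if item.locked_by not in (None, txn):
                raise RuntimeError("lock conflict")
            item.locked_by = txn
        txn.reads.setdefault(name, item.version)
        return item.value

    def commit(self, txn, writes):
        # validate items read without a lock against current versions
        for name, seen in txn.reads.items():
            item = self.store[name]
            if item.locked_by is not txn and item.version != seen:
                return False                    # stale optimistic read: abort
        for name, value in writes.items():
            item = self.store[name]
            item.value = value
            item.version += 1
        for name in txn.reads:
            item = self.store[name]
            if item.locked_by is txn:
                item.locked_by = None           # release locks
        return True
```

    A disconnected transaction pays nothing while away but may abort at commit; a connected one never aborts for staleness but blocks competitors, which is exactly the trade-off the report's hybrid control navigates.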

  • IC-07-07 pdf bib
    Using choreography to support collaboration in agricultural supply chains.
    Evandro Bacarin, Claudia M. B. Medeiros, and Edmundo R. M. Madeira.
    March 2007. In English, 18 pages.

    Abstract: This paper presents an approach to support choreography in agricultural supply chains. It depicts a model for this kind of chain that considers both static and dynamic aspects, and their mapping to an underlying architecture. In particular, the model emphasizes mutual agreements, coordination of activities, quality enforcement and activity documentation. The architecture is centered on mapping chain elements to Web Services and their dynamics to the choreography of services. A case study, for soy supply chains, is used to motivate the approach.

  • IC-07-06 pdf bib
    A GRASP strategy for a more constrained school timetabling problem.
    Arnaldo Vieira Moura and Rafael Augusto Scaraficci.
    February 2007. In English, 18 pages.

    Abstract: This work treats a typical Brazilian school timetabling problem, which consists of scheduling a set of lectures and teachers within a fixed period of time while satisfying a set of operational requirements. We applied a basic GRASP heuristic, followed by a path-relinking improvement phase. The algorithms use a local search procedure that interleaves two types of movements, and a path-relinking strategy that is enhanced with a local search procedure. They were tested on real instances and proved to be good approaches to this problem. Although some restrictions are specific to Brazilian educational institutions, the same ideas can inspire similar approaches to the school timetabling problem in other settings.
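
    The GRASP skeleton can be illustrated on a toy instance. The sketch below is not the report's algorithm (it omits path-relinking and the real operational constraints); it shows only the two core phases on a tiny conflict-minimization problem where lectures sharing a teacher must not share a timeslot.

```python
import random

def conflicts(assign, clashes):
    # count clashing lecture pairs placed in the same slot (partial-safe)
    return sum(1 for a, b in clashes
               if a in assign and b in assign and assign[a] == assign[b])

def grasp(lectures, slots, clashes, iters=30, alpha=0.3, seed=1):
    rng = random.Random(seed)
    best = None
    for _ in range(iters):
        # phase 1: greedy randomized construction with an RCL
        assign = {}
        for lec in lectures:
            costs = []
            for s in range(slots):
                assign[lec] = s
                costs.append((conflicts(assign, clashes), s))
            del assign[lec]
            lo = min(c for c, _ in costs)
            hi = max(c for c, _ in costs)
            rcl = [s for c, s in costs if c <= lo + alpha * (hi - lo)]
            assign[lec] = rng.choice(rcl)
        # phase 2: first-improvement local search over single-lecture moves
        improved = True
        while improved:
            improved = False
            for lec in lectures:
                cur = conflicts(assign, clashes)
                old = assign[lec]
                for s in range(slots):
                    assign[lec] = s
                    if conflicts(assign, clashes) < cur:
                        improved = True
                        break
                    assign[lec] = old
        cost = conflicts(assign, clashes)
        if best is None or cost < best[0]:
            best = (cost, dict(assign))
    return best
```

    The `alpha` parameter controls how greedy the restricted candidate list is (0 = purely greedy, 1 = purely random), which is the standard GRASP knob.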

  • IC-07-05 pdf bib
    Image retrieval with relevance feedback based on genetic programming.
    Cristiano D. Ferreira and Ricardo da S. Torres.
    February 2007. In English, 22 pages.

    Abstract: In recent years, large digital image collections have been generated, manipulated, and stored in databases. In this scenario, it is very important to develop mechanisms that provide automatic means to retrieve images in an efficient and effective way. However, the subjectivity of the user's perception of an image usually hampers a fully automatic search and retrieval. Relevance feedback is one of the most common approaches to overcome this difficulty.

    In this paper, a new content-based image retrieval framework with relevance feedback is proposed. This framework uses Genetic Programming (GP) to learn the user's needs. The objective of this learning method is to find a function that combines similarity values from distinct descriptors and best encodes the user's perception of image similarity. Several experiments were performed to validate the proposed method and to compare our work with other relevance feedback techniques. The experimental results show that the proposed method outperforms all of them.
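
    The core idea, searching the space of combination functions over per-descriptor similarities, can be sketched very roughly. The code below is a drastically simplified, hypothetical stand-in (random expression trees with truncation selection rather than full GP with crossover and mutation); all names are invented for the example.

```python
import operator
import random

# candidate combination functions are expression trees over the
# per-descriptor similarities; fitness rewards ranking the images the
# user marked as relevant near the top
OPS = {"+": operator.add, "*": operator.mul, "max": max, "min": min}

def random_tree(depth, n_desc, rng):
    if depth == 0 or rng.random() < 0.3:
        return ("desc", rng.randrange(n_desc))     # leaf: one descriptor
    op = rng.choice(list(OPS))
    return (op, random_tree(depth - 1, n_desc, rng),
                random_tree(depth - 1, n_desc, rng))

def evaluate(tree, sims):
    if tree[0] == "desc":
        return sims[tree[1]]
    return OPS[tree[0]](evaluate(tree[1], sims), evaluate(tree[2], sims))

def fitness(tree, collection, relevant):
    # negative sum of ranks of the relevant images (higher is better)
    ranked = sorted(collection, key=lambda img: -evaluate(tree, collection[img]))
    return -sum(ranked.index(img) for img in relevant)

def learn(collection, relevant, n_desc, gens=20, pop=20, seed=0):
    rng = random.Random(seed)
    trees = [random_tree(3, n_desc, rng) for _ in range(pop)]
    for _ in range(gens):
        trees.sort(key=lambda t: -fitness(t, collection, relevant))
        trees = trees[:pop // 2] + [random_tree(3, n_desc, rng)
                                    for _ in range(pop - pop // 2)]
    trees.sort(key=lambda t: -fitness(t, collection, relevant))
    return trees[0]
```

    In the real framework, the user's relevance marks are collected interactively each round and the evolved function re-ranks the collection; here `collection` is simply a map from image id to its similarity value under each descriptor.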

  • IC-07-04 pdf bib
    Architecture-centric fault tolerance with exception handling.
    Patrick Henrique da Silva Brito, Rogério de Lemos, Fernando Castor Filho, and Cecília Mary Fischer Rubira.
    February 2007. In English, 154 pages.

    Abstract: This technical report considers the problem of developing dependable component-based software systems through an architectural approach, which combines fault prevention, fault removal, and fault tolerance techniques. The architecture-centered solution comprises a rigorous approach, which systematises the verification and validation of fault-tolerant systems. Using B-Method and CSP, we analyze the exception flow at the architectural level and verify important properties regarding the system's dependability. Besides that, an architectural solution based on exception handling is adopted for transforming untrusted software components into idealized fault-tolerant architectural components, which can be used as building blocks for creating fault-tolerant software architectures. The feasibility of the proposed architectural solution was evaluated on a business-critical case study.

  • IC-07-03 pdf bib
    A study of the column generation method in integer programming: an application to the vertex separator problem.
    Edna A. Hoshino and Cid C. de Souza.
    February 2007. In Portuguese, 33 pages.

    Summary: In this report, an experimental investigation is carried out on the application of the column generation method to the vertex separator problem. Computational results are reported to discuss different implementation aspects and techniques to accelerate the column generation process. Convergence problems, oscillation of the optimal dual values, and degeneracy are addressed. Two decomposition methods are used: Dantzig-Wolfe decomposition and the explicit master.
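
    The master/pricing loop at the heart of column generation can be shown on a classic small example. The sketch below uses the cutting-stock problem rather than the report's vertex separator formulation (a deliberate simplification, since its pricing subproblem is a plain knapsack); it assumes SciPy's `linprog` for the restricted master LP.

```python
import numpy as np
from scipy.optimize import linprog

def solve_master(columns, demand):
    # restricted master LP: min sum x_p  s.t.  A x >= demand,  x >= 0
    A = np.array(columns, dtype=float).T
    res = linprog(c=np.ones(len(columns)),
                  A_ub=-A, b_ub=-np.array(demand, dtype=float),
                  method="highs")
    duals = -res.ineqlin.marginals        # dual values of the >= constraints
    return res.fun, duals

def price(duals, sizes, roll):
    # pricing subproblem: unbounded knapsack maximizing total dual profit
    best = [0.0] * (roll + 1)
    take = [-1] * (roll + 1)              # -1 means "waste one unit"
    for cap in range(1, roll + 1):
        best[cap], take[cap] = best[cap - 1], -1
        for i, w in enumerate(sizes):
            if w <= cap and best[cap - w] + duals[i] > best[cap]:
                best[cap], take[cap] = best[cap - w] + duals[i], i
    col, cap = [0] * len(sizes), roll     # backtrack to recover the pattern
    while cap > 0:
        if take[cap] == -1:
            cap -= 1
        else:
            col[take[cap]] += 1
            cap -= sizes[take[cap]]
    return best[roll], col

def column_generation(sizes, demand, roll):
    columns = []                          # start with one pattern per item
    for i, w in enumerate(sizes):
        col = [0] * len(sizes)
        col[i] = roll // w
        columns.append(col)
    while True:
        obj, duals = solve_master(columns, demand)
        profit, col = price(duals, sizes, roll)
        if profit <= 1 + 1e-9:            # no column has negative reduced cost
            return obj, columns
        columns.append(col)
```

    The convergence and dual-oscillation issues the report discusses show up precisely in this loop: the duals returned by successive restricted masters can swing widely, which is what stabilization techniques try to dampen.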

  • IC-07-02 pdf bib
    PowerSC: A SystemC-based framework for power estimation.
    Felipe Klein, Roberto Leao, Guido Araujo, Luiz Santos, and Rodolfo Azevedo.
    February 2007. In English, 19 pages.

    Abstract: Although SystemC is considered the most promising language for system-on-chip functional modeling, it does not come with power modeling capabilities. This work presents PowerSC, a novel power estimation framework which instruments SystemC for power characterization, modeling, and estimation. Since it is entirely based on SystemC, PowerSC allows consistent power modeling from the highest to the lowest abstraction level. Besides, the framework's API provides facilities to integrate alternative modeling techniques, either at the same or at different abstraction levels. As a result, the required power evaluation infrastructure is reduced to a minimum: the standard SystemC library, the PowerSC library itself, and a C++ compiler. Although RTL power macromodeling is a mature research topic, it is not yet broadly accepted in the industrial environment. One of the main reasons hampering its widespread use as a power estimation paradigm is that each macromodeling technique makes some assumptions that lead to some sort of intrinsic limitation, thus affecting its accuracy. Therefore, alternative macromodeling methods can be envisaged as part of a power modeling toolkit from which the most suitable method for a given component should be automatically selected. This paper describes a new multi-model power estimation engine that selects the macromodeling technique leading to the least estimation error for a given system component, depending on its input-vector stream properties. A proper selection function is built after component characterization and used during estimation. Experimental results show that our multi-model engine improves the robustness of power analysis, significantly reducing the average and maximum estimation errors compared to conventional single-model estimation.
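
    The selection-function idea can be sketched schematically. The code below is a hypothetical illustration, not PowerSC's C++ API: each macromodel's error is recorded during characterization per input-stream property bucket (here, average input transition density), and estimation picks the model with the smallest characterized error for the observed density.

```python
def transition_density(stream):
    # fraction of adjacent input vectors that differ
    flips = sum(a != b for a, b in zip(stream, stream[1:]))
    return flips / max(len(stream) - 1, 1)

class MultiModelEstimator:
    def __init__(self, models, n_buckets=4):
        self.models = models            # name -> power estimation function
        self.n_buckets = n_buckets
        self.error = {}                 # (name, bucket) -> best recorded error

    def _bucket(self, density):
        return min(int(density * self.n_buckets), self.n_buckets - 1)

    def characterize(self, name, stream, reference_power):
        # record each macromodel's error against a reference (e.g. gate-level)
        b = self._bucket(transition_density(stream))
        err = abs(self.models[name](stream) - reference_power)
        key = (name, b)
        self.error[key] = min(err, self.error.get(key, float("inf")))

    def estimate(self, stream):
        # select the macromodel with the least characterized error
        b = self._bucket(transition_density(stream))
        name = min(self.models,
                   key=lambda n: self.error.get((n, b), float("inf")))
        return name, self.models[name](stream)
```

    The design choice mirrors the abstract: no single macromodel wins everywhere, so the engine keeps several and routes each component's input stream to whichever model characterization showed to be most accurate for streams with similar properties.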

  • IC-07-01 pdf bib
    Conceptual multi-device design: Improving theoretical foundations.
    Rodrigo de Oliveira and Heloísa Vieira da Rocha.
    February 2007. In English, 14 pages.

    Abstract: This work presents the Conceptual Multi-Device Design (CMDD) approach, with a deeper discussion of its theoretical assumptions. The proposal suggests multi-device design that maintains the application's conceptual model (in a wide perspective, including the navigational and presentation models) on every interface, to avoid ambiguities in the user's mental model. This consistency supports decision making, allowing users to draw on their previous experience when executing a task on different interfaces of the same application. The CMDD framework that provides mobile access (with pocket PCs or smartphones) to desktop web interfaces is improved, and first impressions with beta prototypes are presented. We expect to conduct complete user evaluations soon for a better identification of this proposal's advantages.


  • Instituto de Computação :: State University of Campinas
    PO Box 6176 • Av. Albert Einstein, 1251 - Cidade Universitária • CEP 13083-970 • Campinas / SP - Brazil • Phone: [19] 3521-5838