Resumo: We describe an algorithm to subdivide an arbitrary triangulation of the sphere so as to produce a triangulation that is vertex-colorable with three colors. (Three-colorable triangulations can be efficiently represented and manipulated by the ``gem'' data structure of Montagner and Stolfi.) The usual solution to this problem is the barycentric subdivision, which produces triangles when applied to a triangulation with faces. Our algorithm eliminates all vertices of odd degree (which are the only obstacles to 3-colorability) in pairs, by bisecting only a few triangles at a time. Although our algorithm has only an exponential worst-case upper bound, it is expected to produce between and triangles in typical cases.
Abstract: We describe an algorithm to subdivide an arbitrary triangulation of the sphere to produce a triangulation that is vertex-colorable with three colors. (Three-colorable triangulations can be efficiently represented and manipulated by the ``gem'' data structure of Montagner and Stolfi.) The standard solution to this problem is the barycentric subdivision, which produces triangles when applied to a triangulation with faces. Our algorithm eliminates all vertices of odd degree (which are the only obstacles to 3-colorability) in pairs, by bisecting only a few triangles at a time. While our algorithm has an exponential worst-case upper bound, it is expected to produce between and triangles in typical cases.
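To make the degree-parity obstruction concrete, here is a small Python sketch (illustrative only, not the report's subdivision algorithm; all function names are ours) that checks vertex degrees and attempts a 3-coloring by propagating colors across shared edges: the vertex opposite a shared edge must receive the unique color not used by the edge's endpoints.

```python
from collections import defaultdict, deque
from itertools import combinations

def vertex_degrees(triangles):
    # On a closed triangulated surface, a vertex's degree equals the
    # number of triangles (and edges) incident to it.
    deg = defaultdict(int)
    for t in triangles:
        for v in t:
            deg[v] += 1
    return deg

def three_color(triangles):
    # Propagate colors across shared edges; a conflict arises exactly
    # when the triangulation is not 3-colorable (some odd-degree vertex).
    edge_tris = defaultdict(list)
    for i, t in enumerate(triangles):
        for e in combinations(sorted(t), 2):
            edge_tris[e].append(i)
    color = {}
    a, b, c = triangles[0]
    color[a], color[b], color[c] = 0, 1, 2
    seen, queue = {0}, deque([0])
    while queue:
        i = queue.popleft()
        for e in combinations(sorted(triangles[i]), 2):
            for j in edge_tris[e]:
                if j in seen:
                    continue
                (w,) = set(triangles[j]) - set(e)
                want = 3 - color[e[0]] - color[e[1]]  # the third color
                if color.get(w, want) != want:
                    return None  # conflict: not 3-colorable
                color[w] = want
                seen.add(j)
                queue.append(j)
    return color
```

On the octahedron (all degrees even) this returns a proper 3-coloring; on the tetrahedron (all degrees 3, odd) it detects a conflict and returns `None`.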
Abstract: In 1976, F. Loupekine created a method for constructing new snarks from already known ones. In the present work, we consider an infinite family of snarks constructed from the Petersen Graph using Loupekine's method, and show that this family verifies Fulkerson's Conjecture. In addition, we show that it is possible to extend this result to families constructed from snarks other than the Petersen Graph.
Abstract: The Genome Median Problem is an important problem in phylogenetic reconstruction under rearrangement models. It can be stated as follows: given three genomes, find a fourth that minimizes the sum of the pairwise rearrangement distances between it and the three input genomes. Recently, Feijão and Meidanis extended the algebraic theory for genome rearrangement to allow for linear chromosomes, thus yielding a new rearrangement model (the algebraic model), very close to the celebrated DCJ model.
In this paper, we study the genome median problem under the algebraic model, whose complexity is currently open, and propose a more general form of the problem, the matrix median problem, which can be approximated in linear time to a factor of the optimum. The study of the matrix median problem might help in the solution of the algebraic median problem.
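To illustrate the median objective, here is a brute-force sketch in Python using the Cayley distance on permutations as a toy stand-in (not the algebraic distance of Feijão and Meidanis, and feasible only for very small n):

```python
from itertools import permutations

def cayley(p, q):
    # Cayley distance: minimum number of transpositions turning p into q,
    # equal to n minus the number of cycles of the composition q . p^{-1}.
    n = len(p)
    inv_p = [0] * n
    for i, v in enumerate(p):
        inv_p[v] = i
    r = [q[inv_p[i]] for i in range(n)]
    seen, cycles = set(), 0
    for i in range(n):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = r[j]
    return n - cycles

def median(genomes):
    # Exhaustive search over all permutations: this only illustrates the
    # objective (minimize the sum of pairwise distances); the actual
    # median problem requires far more sophisticated techniques.
    n = len(genomes[0])
    return min(permutations(range(n)),
               key=lambda c: sum(cayley(c, g) for g in genomes))
```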
Abstract: The more robust and dynamic aspects of Web 2.0 applications (also called Rich Internet Applications, RIAs) stimulate participation and collaboration among people interacting in such shared interaction spaces. An evident consequence (e.g., Facebook, Instagram, and Twitter) is the increasing influence of RIAs on other media channels, such as TV and newspapers. However, the current state of the art of Web 2.0 does not provide equitable opportunities of interaction for all people. Accessibility in RIAs is still a challenging objective. Moreover, for aspects such as awareness of others in RIAs that provide collaboration features, the development of accessible mechanisms is not restricted to semantic markup; it also involves data structures, politeness, data load, and other characteristics. This technical report presents a Systematic Literature Review process designed to investigate awareness of others in accessible collaborative RIAs; it also reports the included and excluded studies and the data collected from the reviewed studies.
Abstract: For decades, photographs were taken as a unique form of documenting space-time events. As such, they have often served as evidence in courts. Although, in principle, photographers are able to create analog composites, e.g., for comic or dramatic effect, this process is very time-consuming and requires expert knowledge. Nowadays, however, powerful digital image editing software packages have made image modifications easier than ever. This undermines our trust in images and, in particular, questions pictures as evidence for real-world events. In this context, we analyze one of the most common forms of photograph manipulation today, known as image composition or splicing, in which multiple images are combined to create a new, fake photograph. We propose a forgery detection method that exploits subtle inconsistencies in the color of the illumination of images. Our approach is machine-learning-based and requires minimal user interaction. In contrast to prior work, the method is applicable to a broad range of images and requires no expert interaction for the tampering decision. To achieve this, we incorporate cues from physics-based and statistics-based illuminant estimators on image regions of similar material. From these illuminant estimates we extract texture- and edge-based features that feed a machine learning approach for automatic decision-making. The classification performance using an SVM meta-fusion classifier is promising, yielding a tampering detection rate of 85% on a new image forensics benchmark of 100 skillfully created forgeries and 100 pristine photographs.
Resumo: Automatic resolution of pronominal anaphora is a rather arduous task, presenting numerous difficulties, especially when there is more than one candidate referent. Over the years, several strategies have been proposed to deal with this challenge, many of them based exclusively on syntactic information present in the text. In contrast with these strategies, Veins Theory provides a basis for restricting the domain of reference of a pronoun by identifying ``veins'' over the discourse structure tree. However, this theory remains little explored for automatic pronoun resolution.
In this work we adapt three algorithms, based on Centering Theory, to use the concepts of Veins Theory. We then evaluate each of these algorithms, comparing them with their original forms (based on Centering Theory) and verifying the contributions of each theory to the task of automatic pronoun resolution in Portuguese.
Abstract: Automatic anaphora resolution is a difficult task, presenting a number of issues, especially when there is more than one possible referent for a given pronoun. Over the years, several approaches have been proposed to deal with this challenge, usually taking only syntactic information into account. In contrast, Veins Theory provides a basis for restricting a pronoun's domain of accessibility by identifying ``veins'' on the discourse structure tree. However, this theory is still little used for automatic pronoun resolution.
In this work we adapted three Centering Theory-based algorithms to use Veins Theory concepts. We then evaluated each of these algorithms, comparing it to its original form, in order to verify the contribution of each theory to automatic anaphora resolution in Portuguese.
Resumo: This report shows that Brazil is losing positions in the ranking of countries ordered by citations received per article. The report presents evidence that this decline has placed Brazil in a worse position than Argentina, Mexico, South Africa, Taiwan, and South Korea, countries that in 1996 were below Brazil in the ranking. The report discusses reasons for this decline and proposes a possible solution.
Resumo: Richard Feynman launched the field of Quantum Computing when he conjectured that classical computers could not simulate quantum systems efficiently. Since then, several quantum computational models have been proposed to study the influence of Quantum Mechanics on computational power. In this work, we study the model of two-way finite automata with quantum and classical states (2QCFA). We systematize previous results on the topic and propose 2QCFAs that recognize, in polynomial time, ambiguous context-free languages and languages that are not context-free, extending previous results.
Abstract: Richard Feynman triggered the study of Quantum Computing when he conjectured that classical computers could not simulate quantum systems efficiently. Since then, quantum computational models have been proposed to study how quantum mechanics influences the power of computing devices. In this work, we study two-way finite automata with quantum and classical states (2QCFA). We systematize previous results on 2QCFAs and show that 2QCFAs can recognize ambiguous context-free languages and also non-context-free languages in polynomial time, extending previous results.
Resumo: The project portfolio selection (PPS) problem consists of constructing a project portfolio, that is, a selection of projects scheduled over a period of time using various, potentially conflicting, criteria and resource constraints. PPS is a well-known problem, occurring recurrently in several industries, with a rich history of modeling approaches and a large number of solution techniques.
In this work, we present a model for the PPS problem based on a real-world project selection and scheduling problem from the electricity generation industry. This model includes constraints that better reflect a particular kind of interdependence among projects. We also propose a heuristic, based on the GRASP metaheuristic, to solve the problem, and we assess its quality and performance through computational experiments. We describe the implementation of a prototype decision support system for the PPS problem that incorporates the proposed heuristic and includes several features that help decision makers throughout the project portfolio selection process.
Abstract: The project portfolio selection (PPS) problem consists of constructing a project portfolio, that is, a selection of projects scheduled over a period of time using various, potentially conflicting, criteria and resource constraints. PPS is a well-known problem, recurrently occurring in several industries, with a rich history of approaches for modeling it and a large number of techniques for solving it.
In this work we present a model for the PPS problem based on a real-world problem of selecting and scheduling projects stemming from the electricity generation industry. This model includes constraints that better reflect a particular kind of interdependence among projects. We also propose a heuristic based on the GRASP metaheuristic to solve the problem, and we assess its quality and performance through computational experiments. We describe the implementation of a decision support system prototype for the PPS problem that implements the proposed heuristic and includes several features that can help decision makers throughout the project portfolio selection process.
Abstract: Removed by the authors.
Resumo: This technical report contains the abstracts of 28 works presented at the VII Workshop of Theses, Dissertations, and Undergraduate Research Projects of the Institute of Computing of the University of Campinas (WTD-IC-UNICAMP 2012). The workshop took place on May 22 and 23, 2012, and had approximately 150 participants, including attendees, presenters, and organizers. On the occasion there were 28 oral presentations and 10 posters. Students were given the choice of presentation format (oral, poster, or both). The publication of these abstracts as a technical report aims to disseminate the works in progress and to record, succinctly, the state of the art of research at IC in 2012.
Abstract: A number of approaches leverage software fault tolerance techniques based on design diversity to tolerate software faults in service-oriented applications. These solutions mediate the communication between clients and functionally equivalent services, i.e., variant services. The use of design diversity depends on the assumption that variants rarely fail on the same input case. Nevertheless, it is unclear whether variant services are actually diverse and fail on disjoint subsets of the input space. In a previous work, we proposed an experimental setup to assess the design diversity of variant services that realize a requirements specification. In this work, we use the proposed experimental setup to assess the design diversity of a number of third-party Web services adhering to seven different requirements specifications. In this paper, we describe in detail the main findings and lessons learned from this empirical study. First, we investigate whether variant services present differences in their outputs and failure behaviors, looking for evidence that variants are provided by different designs and implementations. Second, we investigate the effectiveness of service diversity for tolerating faults. The results suggest that there is diversity in the implementation of variant services. However, in some cases, this diversity might not be sufficient to improve system reliability. Our findings provide an important knowledge base for engineering effective fault-tolerant service applications.
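As a minimal illustration of design-diversity fault tolerance (a generic N-version-style voter, not the experimental setup of this study; the function name is ours), a mediator can accept a response only when a strict majority of variant services agree:

```python
from collections import Counter

def majority_vote(responses):
    # Accept a response only if a strict majority of the variant services
    # agree on it; otherwise signal that no consensus was reached.
    # Note: this only masks faults when variants fail on disjoint inputs,
    # which is precisely the assumption the study investigates.
    if not responses:
        return None
    winner, count = Counter(responses).most_common(1)[0]
    return winner if count > len(responses) / 2 else None
```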
Abstract: IPTV promises to be within the spectrum of services offered in the future Internet. In cooperative IPTV, clients' resources are typically used to build a scalable system to distribute TV content. One of the major challenges of this approach is to reach the same quality of service of traditional television and commercial IPTV by employing only best effort network layer services. This paper proposes a novel architecture based on P2P networking for cooperative IPTV. Challenges are discussed and some solutions are proposed as part of the architecture. The aim is to make this type of system more competitive with traditional television and commercial IPTV.
Abstract: Although peer-to-peer networks are more scalable than client-server ones, they face efficiency challenges. One of them is the selfish behavior of non-cooperative peers. Another challenge is the short time peers stay connected to the system, which causes disruptions of the delivery of time-constrained content. This paper introduces an incentive mechanism to address both problems in peer-to-peer networks with live streaming.
Abstract: An increasing number of web repositories rely on tag-based metadata to organize and classify their content. The users of these systems freely associate tags with the systems' resources, e.g., URLs, images, and bookmarks. The term folksonomy refers to this collective classification, which emerges from the tagging carried out by users interacting in web social environments.
We have proposed a hybrid entity to fuse ontologies and folksonomies, which we call a ``folksonomized ontology'', combining complementary aspects of both: the formal, engineered knowledge of ontologies is fused with the latent semantics of social data.
In this technical report we describe the implementation of a crawler to collect data from folksonomic systems on the web. We discuss the APIs of these systems and the way the data was stored.
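A folksonomy is commonly modeled as a set of (user, resource, tag) assignments. As a hedged sketch of what a crawler's storage layer might look like (class and method names are ours, not the report's implementation), indexing all three roles supports the typical queries:

```python
from collections import defaultdict

class FolksonomyStore:
    # Tripartite model: each assignment links a user, a resource
    # (e.g., a URL, image, or bookmark), and a tag.
    def __init__(self):
        self.assignments = set()
        self.by_resource = defaultdict(set)  # resource -> tags
        self.by_tag = defaultdict(set)       # tag -> resources

    def add(self, user, resource, tag):
        self.assignments.add((user, resource, tag))
        self.by_resource[resource].add(tag)
        self.by_tag[tag].add(resource)

    def tags_of(self, resource):
        return self.by_resource[resource]

    def resources_with(self, tag):
        return self.by_tag[tag]
```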
Abstract: We present the first verification methods that automatically generate bases of inequality and equality invariants expressed by multivariate formal power series and transcendental functions. We also discuss their convergence over hybrid systems that exhibit nonlinear models and parameters. We reduce the invariant generation problem to linear algebraic matrix systems, from which one can derive effective methods for solving the original problem. We show that the preconditions for discrete transitions, the Lie derivatives for continuous evolution, and the newly introduced relaxed consecution requirements can be viewed as morphisms and can thus be represented by matrices. More specifically, we obtain very general sufficient conditions for the existence and the computation of formal power series invariants over multivariate polynomial continuous differential systems. The formal power series invariants generated are often composed of expansions of well-known transcendental functions (e.g., …) and have an analyzable closed form, facilitating the use of the invariants to verify safety properties. Our examples, with nonlinear continuous evolution similar to that present today in many critical hybrid embedded systems, show the strength of our results and prove that some of them are beyond the limits of other recent approaches.
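To see how a Lie derivative can be represented by a matrix, consider a textbook example (ours, not one of the report's systems): the dynamics x' = y, y' = -x, with the Lie derivative acting on the degree-2 monomial basis [x², xy, y²].

```python
# Dynamics (illustrative): x' = y, y' = -x.
# Lie derivative: L(p) = (dp/dx) * x' + (dp/dy) * y'.
# On the basis [x^2, xy, y^2]:
#   L(x^2) = 2x*y        -> coefficients (0, 2, 0)
#   L(xy)  = y^2 - x^2   -> coefficients (-1, 0, 1)
#   L(y^2) = -2x*y       -> coefficients (0, -2, 0)
L = [[0, -1,  0],
     [2,  0, -2],
     [0,  1,  0]]  # column j holds the image of the j-th basis monomial

def lie(v):
    # Apply the Lie-derivative matrix to a coefficient vector.
    return [sum(L[i][j] * v[j] for j in range(3)) for i in range(3)]
```

The candidate invariant p = x² + y², i.e., coefficients (1, 0, 1), lies in the kernel: `lie([1, 0, 1])` is the zero vector, so p is conserved along the flow and x² + y² = c is an invariant of the system. Invariant generation then amounts to solving such linear algebraic matrix systems.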
Abstract: The integration of WiMAX networks with EPON networks combines the large bandwidth available in optical access networks with the mobility provided by wireless technologies. In this integration, a WiMAX bandwidth scheduler that takes into account the variability of the channel capacity provided by the EPON scheduler is quite important, since the granted bandwidth must be sufficient to support the QoS of WiMAX connections. This paper evaluates the performance of a standard-compliant WiMAX uplink scheduler designed for the ONU-BS. The evaluation is conducted using integrated simulators for the WiMAX and EPON components. Simulation results show that the proposed scheduler is able to provide QoS to the subscriber stations.
Abstract: This paper proposes three new hybrid mechanisms for the scheduling of grid tasks, which integrate reactive and proactive approaches. They differ in the scheduler used to define the initial schedule of an application and in the scheduler used to reschedule the application. The mechanisms are compared to purely reactive and proactive mechanisms. Results show that the hybrid approach produces performance close to that of the reactive mechanism while demanding fewer migrations.
Abstract: Similar to ballistic tests, in which we match a gun to its bullets, we can identify the digital camera that acquired an image under investigation. In this paper, we introduce a method for identifying whether or not an image was captured by a specific digital camera. The method relies on well-known and established noise residual features of the images under investigation. The novelty of our approach lies in extending such features to an ``open set'' recognition scenario, in which we cannot rely on the assumption of full access to all potential source cameras. In this case, we model the decision space to take advantage of a few known cameras and carve the decision boundaries to decrease false matches, increasing the reliability of image source attribution as an aid to digital forensics in courts of law.
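A common baseline for residual-based attribution correlates an image's noise residual against each known camera's reference pattern. The sketch below (ours, not the paper's method; the threshold value is hypothetical) shows the open-set twist: even the best-matching known camera is rejected when its correlation is too low.

```python
def pearson(a, b):
    # Pearson correlation between two flattened noise residuals.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) *
           sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def attribute(residual, fingerprints, threshold=0.5):
    # Open-set decision: pick the best-matching known camera, but return
    # None ("unknown camera") when even the best correlation falls below
    # the threshold, instead of forcing a match to a known class.
    best = max(fingerprints,
               key=lambda c: pearson(residual, fingerprints[c]))
    return best if pearson(residual, fingerprints[best]) > threshold else None
```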
Resumo: Fruit flies are pests of quarantine importance worldwide, notably some species of Anastrepha, which attack a large number of fruit crops and are widely distributed throughout Brazil. The identification of fruit fly species is based on morphological characters of the mesonotum, wing, and aculeus. In recent years, tools have been developed that complement traditional taxonomy in the identification of some insect species. Besides reducing the time spent by specialists, these new tools may allow a larger number of researchers to study and/or identify these insects. The present study thus aims to test the efficacy of new techniques for identifying three species of fruit flies. In this work, image analysis and machine learning techniques were applied and compared in the task of classifying fruit flies, in order to obtain data that may later serve as a basis for developing systems for the automatic identification of these species using images of wings and aculei.
Abstract: Complex systems have been widely investigated using formal techniques for testing and verifying critical aspects of their reactive behavior. Such behavior is usually captured by the notion of context transformations that interact with the environment. Other aspects require timed models to describe the systems' continuous evolution in time. In this work, we propose a timed context formalism able to model the combination of time evolution and context transformations. Further, we derive a more flexible and manageable strategy to discretize such models and prove correctness of the discretized model against the original one. The flexibility to find suitable granularities both to time evolution as well as to values of context variables opens the possibility for constructing more compact grid automata. Based on this discretizing approach we present a testing framework to automatically generate test suites from the resulting automata. The test suites can then be used to verify properties of candidate implementations with the aid of test purposes.
Abstract: Can we quantitatively compare different computer scientists? There is a widespread belief among computer science researchers that different subareas within Computer Science (CS) have different publishing practices (production throughput per year, preference for either journals or conferences, number of citations, etc.), making the use of a unified evaluation criterion unfair. In this paper, we present productivity measures (both journal and conference productivity) and impact measures (citations per paper and H-index) for a random set of researchers in 17 different CS subareas. This research quantifies the mentioned intuitions and pre-empirical impressions and shows that there are, indeed, different publication practices and impact measures across CS subareas. However, we show that few of the differences are significant.
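For reference, the two impact measures used above are straightforward to compute; `citations` is a list of per-paper citation counts (function names are ours):

```python
def h_index(citations):
    # H-index: the largest h such that the researcher has at least
    # h papers with at least h citations each.
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

def citations_per_paper(citations):
    # Mean citations per paper, the other impact measure considered.
    return sum(citations) / len(citations) if citations else 0.0
```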
Resumo: We present graphical (or visual) criteria that give necessary and sufficient conditions for deciding whether an algebraic permutation divides another permutation. These criteria can give rise to efficient algorithms for deciding divisibility between permutations.
Abstract: Grid scheduling is essential to Quality of Service provisioning as well as to the efficient management of grid resources. Grid scheduling usually considers the state of the grid resources as well as application demands. However, such demands are generally unknown for highly demanding applications, since these often generate data that will be transferred during their execution. Without an appropriate assessment of these demands, scheduling decisions can lead to poor performance. Thus, it is of paramount importance to consider uncertainties in the formulation of a grid scheduling problem. This paper introduces IPDT-FUZZY, a scheduler that considers the demands of grid applications under such uncertainties. The scheduler uses fuzzy optimization, and both computational and communication demands are expressed as fuzzy numbers. Its performance was evaluated, and it was shown to be attractive when communication requirements are uncertain. Its efficacy is compared, via simulation, to that of a deterministic counterpart, and the results reinforce its adequacy for dealing with the lack of accuracy in the estimation of communication demands.
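To make the notion concrete, uncertain demands can be modeled, for instance, as triangular fuzzy numbers (a common representation; the abstract does not specify the membership shape used by IPDT-FUZZY, so this is purely illustrative). These add component-wise and can be defuzzified for comparison:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TriFuzzy:
    # Triangular fuzzy number: support [low, high], membership peak at `peak`.
    low: float
    peak: float
    high: float

    def __add__(self, other):
        # Addition of triangular fuzzy numbers is component-wise.
        return TriFuzzy(self.low + other.low,
                        self.peak + other.peak,
                        self.high + other.high)

    def defuzzify(self):
        # Centroid of a triangular membership function.
        return (self.low + self.peak + self.high) / 3

# An uncertain communication demand plus an uncertain computation demand:
total = TriFuzzy(1.0, 2.0, 3.0) + TriFuzzy(0.0, 1.0, 2.0)
```

A scheduler can then rank candidate resources by the defuzzified value of each aggregate demand while retaining the spread [low, high] as a measure of estimation uncertainty.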
Abstract: Imprecise input data imposes additional challenges to grid scheduling. This paper introduces a novel scheduler based on fuzzy optimization called IP-FULL-FUZZY which considers uncertainties of both application demands and of resource availability. The effectiveness of the proposed scheduler is compared to those of a non-fuzzy scheduler as well as to those of a fuzzy scheduler which considers only uncertainties of application demands. Results evince the advantages of adopting the proposed scheduler.
Abstract: The uncertainty of the demands of grid applications can cause unpredicted performance and, consequently, can render ineffective the schedules derived for target demand values. To produce effective results, schedulers need to take into account the difficulty of estimating application demands. In this paper, a scheduler based on fuzzy optimization is proposed to deal with such uncertainties. It is shown, via numerical results, that the proposed scheduler presents advantages when compared to classical schedulers.
Abstract: Central to grid processing is the scheduling of application tasks to resources. Schedulers need to consider heterogeneous computational and communication resources, producing the shortest possible schedule under time constraints dictated by both the application needs and the frequency of fluctuation of resource availability. This paper introduces a set of schedulers with such characteristics.
Abstract: This paper introduces a procedure called Traffic Engineering for grids for enabling grid networks to self-adjust to resource availability. The proposal is based on monitoring the state of resources and on task migration. It involves several layers of the Internet architecture. Experiments executed in NS-2 are used to illustrate the efficacy of the procedure proposed.
Abstract: Grid systems allow the execution of a class of highly demanding services and applications. These grids involve communication networks, and their links are essential resources for massive data transfers. However, the management of current grid systems requires intervention for efficient service provisioning. Moreover, this need increases with the increase in demand for grid services. Therefore, grid systems will become effective only when they are capable of self-managing resource allocation to cope with fluctuations in resource availability. At present, however, very few integrated self-adaptive mechanisms have been implemented in existing grid systems. The aim of this paper is to provide a survey of existing mechanisms and suggest directions for enabling autonomic operation of grid systems.
Instituto de Computação :: Universidade Estadual de Campinas
Av. Albert Einstein, 1251 - Cidade Universitária Zeferino Vaz • 13083-852 Campinas, SP - Brasil • Fone:  3521-5838