Abstract: This study aims to evaluate the name-encoding method known as Soundex, with a slight modification relative to the version presented by [], to compare it with an adaptation for the Portuguese language, and to discuss ways of retrieving names given a code produced by these encoding mechanisms.
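For reference, here is a minimal Python sketch of the classic (English) Soundex encoding; the modified variant and the Portuguese adaptation evaluated in the report follow different rules that are not reproduced here.

```python
# Minimal sketch of the classic (English) Soundex algorithm, for reference.
# The report evaluates a modified variant and a Portuguese adaptation whose
# exact rules are not reproduced here.

SOUNDEX_DIGITS = {
    **dict.fromkeys("BFPV", "1"),
    **dict.fromkeys("CGJKQSXZ", "2"),
    **dict.fromkeys("DT", "3"),
    "L": "4",
    **dict.fromkeys("MN", "5"),
    "R": "6",
}

def soundex(name: str) -> str:
    name = "".join(c for c in name.upper() if c.isalpha())
    if not name:
        return ""
    code = name[0]
    prev = SOUNDEX_DIGITS.get(name[0], "")
    for c in name[1:]:
        digit = SOUNDEX_DIGITS.get(c, "")
        if digit and digit != prev:
            code += digit
        if c not in "HW":          # H and W do not break runs of equal digits
            prev = digit
    return (code + "000")[:4]      # pad/truncate to letter + 3 digits

assert soundex("Robert") == soundex("Rupert") == "R163"
```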
Abstract: The Network Simulator (ns-2) is a popular tool for the simulation of computer networks which provides substantial support for simulating the Internet protocols over wired and wireless networks. Although some modules for the simulation of WiMAX networks have been proposed for ns-2, none of them implements all the MAC features specified by the IEEE 802.16 standard for bandwidth management and QoS support. This paper presents the design and validation of a WiMAX module based on the IEEE 802.16 standard. The implemented module includes mechanisms for bandwidth request and allocation, as well as for QoS provision. Moreover, the implementation is standard-compliant.
Abstract: The IEEE 802.16 standard for broadband wireless access is a low-cost solution for Internet access in metropolitan and rural areas. Although it defines five service levels to support real-time and bandwidth-demanding applications, scheduling mechanisms are not specified in the standard. Due to the variability of the wireless channel, scheduling mechanisms widely studied for wired networks are not suitable for IEEE 802.16 networks. This paper proposes an uplink cross-layer scheduler which makes bandwidth allocation decisions based on information about the channel quality and on the Quality of Service requirements of each connection. Simulation results show that the proposed scheduler improves the network performance when compared with a scheduler which does not take the channel quality into account.
Abstract: The totally ordered broadcast (DTO) mechanism for messages in asynchronous distributed systems is fundamental to the construction of fault-tolerant distributed applications. Essentially, the mechanism guarantees that messages sent to a set of processes are delivered by all processes in the same total order. The totally ordered broadcast problem can be reduced to the distributed consensus problem, another fundamental problem in distributed algorithms. Mechanisms that implement solutions to both problems exhibit different levels of performance and fault tolerance, depending on the computation model assumed in their design. In this project we present a synchronous totally ordered broadcast protocol (DSTO), developed modularly and with an emphasis on performance. The protocol targets the computing environment of clusters dedicated to distributed processing, whose timing behavior is described by the timed asynchronous model of computation. Building on these premises, we developed a communication layer that organizes the distributed execution as a sequence of synchronous computation steps. The protocol operates on top of this communication layer and ties its progress to the synchronous behavior of the system, but it guarantees that the message-ordering property is preserved even when processes and communication channels operate asynchronously.
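As a toy illustration of the round-based ordering idea (an assumed simplification, not the DSTO protocol itself): once every process's messages for a round have been collected, delivering them sorted by (round, sender id) yields the same total order at every process.

```python
# Toy illustration (assumed, simplified): round-based totally ordered
# delivery. Messages tagged (round, sender_id) are delivered by every process
# in the same deterministic order once a round is complete. The real DSTO
# protocol adds a communication layer, failure handling, and tolerance to
# asynchronous behavior, none of which is reproduced here.

def deliver_round(pending, completed_round):
    """Return the messages of `completed_round` in the agreed total order."""
    batch = [m for m in pending if m[0] == completed_round]
    return sorted(batch)  # (round, sender_id, payload): same order everywhere

pending = [(1, 2, "b"), (1, 0, "a"), (1, 1, "c")]
# Every correct process computes exactly this sequence for round 1:
print(deliver_round(pending, 1))  # [(1, 0, 'a'), (1, 1, 'c'), (1, 2, 'b')]
```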
Abstract: This paper introduces admission control policies for the IEEE 802.16 standard which aim to reach three main goals: restrict the number of simultaneous connections in the system so that the resources available to the scheduler are sufficient to guarantee the QoS requirements of each connection, meet the service provider's expectations by maximizing revenue, and maximize user satisfaction by granting users additional resources. The proposed policies are evaluated through simulation experiments.
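A minimal sketch of the basic admission test underlying such policies, with illustrative names and numbers (the report's actual revenue- and satisfaction-aware policies are more elaborate):

```python
# Illustrative sketch (not the report's actual policies): a connection is
# admitted only if the bandwidth still available to the scheduler can cover
# its minimum reserved rate, so already-admitted QoS guarantees are preserved.

def admit(new_min_rate_kbps, admitted_min_rates, capacity_kbps):
    reserved = sum(admitted_min_rates)
    return reserved + new_min_rate_kbps <= capacity_kbps

admitted = [500, 1200, 800]            # minimum reserved rates (kbps)
print(admit(400, admitted, 3000))      # True: 2900 kbps fits in 3000
print(admit(600, admitted, 3000))      # False: 3100 kbps would overcommit
```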
Abstract: Scheduling is an essential mechanism in IEEE 802.16 networks for distributing the available bandwidth among the active connections so that their quality of service requirements can be met. Scheduling mechanisms adopted in wired networks, when used in wireless networks, lead to inefficient use of the bandwidth, since they usually ignore the location-dependent and time-varying characteristics of the wireless link. This paper introduces a standard-compliant cross-layer scheduling mechanism which considers the modulation and coding scheme of each mobile station to increase the efficiency of channel utilization while meeting the QoS requirements of the connections.
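A hedged sketch of the cross-layer idea, with made-up MCS values rather than the standard's exact tables: the number of slots needed to serve a byte demand depends on the station's modulation and coding scheme, so the scheduler converts byte requirements into per-connection slot requirements before allocating a frame.

```python
# Illustrative sketch of the cross-layer idea (MCS capacities are example
# values, not the standard's exact tables): the slots needed to serve a byte
# demand depend on the mobile station's modulation and coding scheme, so the
# scheduler converts QoS byte requirements into slot requirements.

BYTES_PER_SLOT = {"QPSK_1/2": 6, "16QAM_1/2": 12, "64QAM_3/4": 27}

def slots_needed(demand_bytes, mcs):
    return -(-demand_bytes // BYTES_PER_SLOT[mcs])   # ceiling division

def schedule(connections, frame_slots):
    """connections: list of (demand_bytes, mcs) in priority order."""
    grants, free = [], frame_slots
    for demand, mcs in connections:
        need = min(slots_needed(demand, mcs), free)
        grants.append(need)
        free -= need
    return grants

# A station with an efficient MCS consumes far fewer slots for the same bytes:
print(schedule([(300, "64QAM_3/4"), (300, "QPSK_1/2")], 40))  # [12, 28]
```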
Abstract: In order to support real-time and high-bandwidth applications, the IEEE 802.16 standard is expected to provide Quality of Service (QoS). Although the standard defines a QoS signaling framework and five service levels, scheduling disciplines are left unspecified. In this technical report, we introduce a scheduling scheme for the uplink traffic. The proposed solution is fully standard-compliant and can be easily implemented in the base station. Simulation results show that this solution is able to meet the QoS requirements of multimedia applications.
Abstract: Although most of the traffic carried over the Internet uses the Transmission Control Protocol (TCP) as the transport-layer protocol, it is of paramount importance to develop models for streams that use the User Datagram Protocol (UDP), since these streams are inelastic and, consequently, can jeopardize the acquisition of bandwidth by TCP streams. This paper introduces a traffic model for UDP streams and compares its performance with that of other traffic models. The proposed model can be used to generate streams of aggregated UDP sources in simulation experiments.
Abstract: A huge effort has been devoted to image classification in order to create high-quality thematic maps and to establish precise inventories of land-cover use. The peculiarities of Remote Sensing Images (RSIs), combined with the traditional challenges of image classification, make RSI classification a hard task. Our aim is to propose a boosting-based classifier adapted to multi-scale segmentation. We use the boosting paradigm, whose principle is to combine weak classifiers to build an efficient global one. Each weak classifier is trained for one level of the segmentation and one region descriptor. We have proposed and tested weak classifiers based on linear SVMs and on region distances provided by descriptors. The experiments were performed on a large image of coffee plantations. We show in this paper that our boosting-based approach can detect the scale and set of features best suited to a particular training set. We also show that hierarchical multi-scale analysis is able to reduce training time and to produce a stronger classifier. We compare the proposed methods with a baseline based on an SVM with an RBF kernel; the results show that the proposed methods outperform the baseline.
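A generic AdaBoost-style sketch of the combination scheme, assuming one feature view per (segmentation level, descriptor) pair; the report's actual weak learners and weighting rules may differ:

```python
# Generic sketch (assumed simplification): each weak learner sees the features
# of one (segmentation level, descriptor) pair, and AdaBoost-style weighting
# picks out the pairs that best explain the training set.
import numpy as np
from sklearn.svm import LinearSVC

def boost(views, y, rounds=10):
    """views: dict (level, descriptor) -> feature matrix; y: array in {-1,+1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        best = None
        for key, X in views.items():              # one weak learner per view
            clf = LinearSVC().fit(X, y, sample_weight=w)
            err = w[clf.predict(X) != y].sum()
            if best is None or err < best[0]:
                best = (err, key, clf)
        err, key, clf = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)     # weight of this weak learner
        w *= np.exp(-alpha * y * clf.predict(views[key]))
        w /= w.sum()
        ensemble.append((alpha, key, clf))
    return ensemble

def predict(ensemble, views):
    score = sum(a * clf.predict(views[key]) for a, key, clf in ensemble)
    return np.sign(score)
```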
Abstract: In this work we present a new method for nonlinear optimization based on quadratic interpolation, on the search for stationary points, and on an edge-divided simplex. We show results on the convergence of our method applied to various functions and to one or more initial guesses.
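As an illustration of the quadratic-interpolation building block (the stationary-point search and the edge-divided simplex are not reproduced here), one step fits a parabola through three points and jumps to its minimizer:

```python
# Minimal sketch of one quadratic-interpolation step, a generic building
# block; the report's full method combines it with stationary-point searches
# and an edge-divided simplex, which this sketch omits.

def quad_min(f, a, b, c):
    """Minimizer of the parabola through (a, f(a)), (b, f(b)), (c, f(c))."""
    fa, fb, fc = f(a), f(b), f(c)
    num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
    den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
    return b - 0.5 * num / den

f = lambda x: (x - 2.0) ** 2 + 1.0
print(quad_min(f, 0.0, 1.0, 3.0))  # 2.0: exact when f itself is quadratic
```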
Abstract: In recent years, Service-Oriented Computing has emerged as a new paradigm for the development of distributed applications. In this paradigm, service providers develop their web services and publish them in service repositories. Service consumers can find the services they need in the repositories and create new services by composing other services. Even when there is a prior agreement between the parties, many factors, such as an unreliable communication infrastructure or changes made to the services by the providers, can prevent the pre-agreed conditions from being maintained. Given this problem, it is important that services be monitored and inconsistent behaviors be detected. This study aims to survey the state of the art in web service monitoring and to compare these proposals with respect to the types and forms of monitoring found in the literature.
Abstract: The "Todos Nós em Rede" (TNR) project aims at the continuing distance education of Special Education teachers in the Brazilian public school systems, through the establishment of Inclusive Social Networks (RSIs) of these professionals. This technical report presents an evaluation of technologies and their technical feasibility for developing the TNR computing system. This system will be characterized as an RSI in which values such as accessibility, autonomy, and authority prove to be important. Based on an exploratory study, this work surveys, selects, presents, analyzes, and discusses possibilities for the system's development platform. A comparative analysis is also carried out, articulating the characteristics of the different platforms and illustrating their positive and negative aspects in light of the essential requirements for the TNR system.
Abstract: Most face recognition methods rely on a common feature space to represent the faces, one that emphasizes the facial aspects that best distinguish among all the persons. This strategy may be inadequate to represent the most relevant aspects of a specific person's face, since some aspects are good at distinguishing only a given person from the others. Based on this idea, and supported by findings in the human perception of faces, we propose a face recognition framework that associates a feature space with each person we intend to recognize. Such feature spaces are conceived to underline the discriminating facial aspects of the persons they represent. In order to recognize a probe, we match it against the gallery in all the feature spaces and fuse the results to establish the identity. With the help of an algorithm that we devised, Discriminant Patch Selection, we were able to carry out experiments that intuitively compare the traditional approaches with the person-specific representation. In the performed experiments, the person-specific face representation always resulted in better identification of the faces.
Abstract: Optimization problems from which no structural information can be obtained arise every day in many fields of knowledge, increasing the importance of well-performing black-box optimizers. In this paper, we introduce a new approach for the global optimization of black-box problems based on the synergistic combination of scaled local searches. By looking farther into the behavior of the problem, the idea is to speed up the search while preventing it from becoming trapped in local optima. The method is compared fairly with other well-performing techniques, showing promising results. A testbed of benchmark problems and image registration problems is considered, and the proposed approach outperforms the other algorithms in most cases.
Abstract: Shape deformation methods are important in fields such as geometric modeling and computer animation. In biology, modeling the shape, growth, movement, and pathologies of living microscopic organisms or cells requires smooth deformations, which are essentially 2D with little change in depth. In this paper, we present a 2.5D space deformation method. The 3D model is modified by deforming an enclosing control grid of prisms. Spline interpolation is used to satisfy the smoothness requirement. We implemented this method in an editor that makes it possible to define and modify the deformation with the mouse in a user-friendly way. The experimental results show that the method is simple and effective.
Abstract: This technical report contains the abstracts of 26 works presented at the VI Workshop de Teses, Dissertações e Trabalhos de Iniciação Científica of the Institute of Computing of UNICAMP (WTD-IC-UNICAMP 2011). The Workshop took place on May 11, 2011, and featured oral presentations by master's and doctoral students as well as a poster session.
Graduate students were given the choice of presentation format (oral, poster, or both), while undergraduate research students presented in poster form. The publication of the abstracts in the form of a technical report aims to disseminate the works in progress and to record, concisely, the state of the art of IC research in the year 2011.
Abstract: Traditional algorithms that solve the problem of sorting by signed reversals output just one optimal solution, while the space of optimal solutions can be huge. Algorithms for enumerating the complete set of traces of solutions were developed to support biologists in their studies of alternative evolutionary scenarios. Due to the exponential complexity of these algorithms, their practical use is limited to small permutations. In this work, we propose and evaluate three different approaches for producing a partial enumeration of the complete set of traces of a permutation.
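For concreteness, a minimal sketch of the basic operation being enumerated, using an assumed 0-based notation: a signed reversal reverses a segment of the permutation and flips the sign of each element in it.

```python
# Minimal sketch (assumed notation): a signed reversal rho(i, j) reverses the
# segment pi[i..j] of a signed permutation and flips each element's sign.
# Enumerating traces explores the sequences of such reversals that sort pi.

def reversal(pi, i, j):
    """Apply a signed reversal to positions i..j (0-based, inclusive)."""
    return pi[:i] + [-x for x in reversed(pi[i:j + 1])] + pi[j + 1:]

pi = [3, -1, 2]
print(reversal(pi, 0, 1))  # [1, -3, 2]
```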
Abstract: We describe a fast and robust method for computing scene depths (or heights) from surface gradient (or surface normal) data, such as would be obtained by photometric stereo or interferometry. Our method allows for uncertain or missing samples, which are often present in experimentally measured gradient maps; for sharp discontinuities in the scene's depth, e.g., along object silhouette edges; and for irregularly spaced sampling points. To accommodate these features of the problem, we introduce an original and flexible representation of slope data, the weight-delta mesh. Like other state-of-the-art solutions, our algorithm reduces the problem to a system of linear equations that is solved by Gauss-Seidel iteration with multi-scale acceleration. Tests with various synthetic and measured gradient data show that our algorithm is as accurate and efficient as the best available integrators for uniformly sampled data. Moreover, thanks to the weight-delta mesh representation, our algorithm remains accurate and efficient even for large sets of weakly connected data, which cannot be handled efficiently by any existing algorithm.
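A heavily simplified sketch of the Gauss-Seidel integration step on a uniform grid (no weights, discontinuities, or multi-scale acceleration, all of which the weight-delta mesh provides): each interior depth is updated to the average of the predictions made by its four neighbors through the measured slopes.

```python
# Simplified sketch of gradient-field integration by Gauss-Seidel on a
# uniform grid; the paper's weight-delta mesh additionally handles missing
# samples, silhouettes, and irregular sampling, which this toy omits.
import numpy as np

def integrate(p, q, iters=2000):
    """Recover z (up to a constant) from gradients p = dz/dx, q = dz/dy,
    where p[y, x] ~ z[y, x+1] - z[y, x] and q[y, x] ~ z[y+1, x] - z[y, x]."""
    h, w = p.shape
    z = np.zeros((h, w))               # boundary values stay 0 in this toy
    for _ in range(iters):
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                # each neighbor predicts z via the measured slope between them
                z[y, x] = 0.25 * (
                    z[y, x - 1] + p[y, x - 1] + z[y, x + 1] - p[y, x]
                    + z[y - 1, x] + q[y - 1, x] + z[y + 1, x] - q[y, x]
                )
    return z - z.mean()
```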
Abstract: In this work we discuss the use of histogram of oriented gradients (HOG) descriptors as an effective tool for text description and recognition. Specifically, we propose a fuzzy HOG-based texture descriptor (F-HOG) that partitions the image into three horizontal cells with fuzzy adaptive boundaries to characterize single-line texts in outdoor scenes and video frames. The input of our algorithm is a rectangular image presumed to contain a single line of text in Latin-like characters. The output is a relatively short (54-feature) descriptor that provides an effective input to an SVM classifier. Tests show that F-HOG is more accurate than Dalal and Triggs' original HOG-based classifier using a 54-feature descriptor, and comparable to their best classifier (which uses a 108-feature descriptor) while being half as long.
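A crisp simplification of the descriptor idea, with assumed parameters (the actual F-HOG uses fuzzy adaptive cell boundaries and its own bin layout totaling 54 features):

```python
# Crisp simplification (assumed parameters): the image is split into three
# horizontal cells and each cell contributes a gradient-orientation
# histogram. The paper's F-HOG instead uses fuzzy adaptive cell boundaries
# and its own bin layout (54 features total).
import numpy as np

def hog3(img, bins=9):
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0          # unsigned orientation
    feats = []
    for rows in np.array_split(np.arange(img.shape[0]), 3):  # 3 horizontal cells
        h, _ = np.histogram(ang[rows], bins=bins, range=(0, 180),
                            weights=mag[rows])
        feats.append(h / (np.linalg.norm(h) + 1e-9))      # per-cell normalization
    return np.concatenate(feats)
```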
Abstract: The demand for electrical energy has increased, making it more expensive. In computing, an environment highly dependent on energy, it is important to develop techniques that allow power savings. The evaluation of power-aware algorithms requires the measurement of actual computer power. This report presents a real power measurement framework. The framework is composed of a custom-made board, which captures the power consumption and is installed in a commodity computer; a data acquisition device that samples the measured values; and a piece of software that manages the framework. This work shows the steps taken to develop the framework and also presents two examples of its use. The first example profiles the power of a small matrix multiplication program and discusses performance and energy trade-offs. The second example uses the framework to characterize and model the power consumption of a web server delivering static web content.
Abstract: Similarity search in high-dimensional metric spaces is a key operation in many applications, such as multimedia databases, image retrieval, and object recognition. The high dimensionality of the data requires special index structures to facilitate the search. A problem regarding the creation of suitable index structures for high-dimensional data is the relationship between the geometry of the data and the organization of the index structure. Most existing indexes are constructed by partitioning the dataset using distance-based criteria. However, those methods either produce disjoint partitions but ignore the distribution properties of the data, or produce non-disjoint groups, which greatly affects search performance. In this paper, we study the performance of a new index structure, called the Ball-and-Plane tree (BP-tree), which overcomes the above disadvantages. The BP-tree is constructed by recursively dividing the dataset into compact clusters. Unlike other techniques, it integrates the advantages of both the disjoint and non-disjoint paradigms in order to achieve a structure of tight, low-overlapping clusters, yielding significantly improved performance. Results obtained from an extensive experimental evaluation with real-world datasets show that the BP-tree consistently outperforms state-of-the-art solutions. In addition, the BP-tree scales up well, exhibiting sublinear performance with a growing number of objects in the database.
Abstract: Diabetic retinopathy (DR) is a complication of diabetes that affects the blood flow of the retina. The effect of DR is the weakening of the retina's vessels, resulting in anything from small hemorrhages to the growth of new blood vessels. If left untreated, DR eventually leads to blindness; in fact, it is the leading cause of blindness in persons aged 20 to 74 years in developed countries. One of the most successful means of fighting DR is early diagnosis through the analysis of ocular-fundus images of the human retina. In this paper, we present a new approach to detect retina-related pathologies from ocular-fundus images. Our work is intended for an automatic triage scenario, where patients whose retina is considered not normal by the system will see a specialist. This implies that automatic screening needs evaluation criteria that reward low false negative rates, i.e., we should avoid as much as possible images incorrectly classified as normal. Our solution constructs a visual dictionary of the important features of the desired pathology and classifies whether an ocular-fundus image is normal or a DR candidate. We evaluate the methodology on hard exudates, deep hemorrhages, and microaneurysms, test different parameter configurations, and demonstrate the robustness and reliability of the approach by performing cross-dataset validation (using both our own and other publicly available datasets).
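A generic bag-of-visual-words sketch of the pipeline, with assumed details (the paper's feature extraction, dictionary construction, and false-negative-averse tuning are not reproduced):

```python
# Generic bag-of-visual-words sketch (assumed pipeline details): local
# features from training images are clustered into a visual dictionary, each
# image becomes a histogram of word assignments, and a classifier separates
# normal images from DR candidates.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_dictionary(local_feats, k=100):
    """local_feats: list of per-image arrays of local feature vectors."""
    return KMeans(n_clusters=k, n_init=10).fit(np.vstack(local_feats))

def bovw_histogram(feats, dictionary):
    words = dictionary.predict(feats)
    h = np.bincount(words, minlength=dictionary.n_clusters).astype(float)
    return h / (h.sum() + 1e-9)

# Sketched usage: one histogram per image, labels 0 = normal, 1 = DR candidate.
# dictionary = build_dictionary(train_local_feats)
# X = np.array([bovw_histogram(f, dictionary) for f in train_local_feats])
# clf = SVC(class_weight={0: 1, 1: 5}).fit(X, y)  # penalize false negatives
```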
Abstract: Recent advances in technology have increased the availability of video data, creating a strong demand for efficient systems to manage those materials. Making efficient use of video information requires that data be accessed in a user-friendly way. Ideally, one would like to perform video search using an intuitive tool. Most existing browsers for the interactive search of video sequences, however, employ a layout that is too rigid to arrange the results, restricting users to exploring the results through list- or grid-based layouts. In this paper, we present a novel approach for interactive search that displays the result set in a flexible manner. The proposed method is based on a hierarchical structure called Divisive-Agglomerative Hierarchical Clustering (DAHC). It is able to group together videos with similar content and to organize the result set in a well-defined tree. This strategy makes the navigation more coherent and engaging for users.
Abstract: Fair exchange protocols have been widely studied since they were first proposed, but they are still not implemented in most e-commerce transactions available. For several types of items, the current e-commerce business models fail to provide fairness to customers. The item validation problem is a critical step in fair exchange, yet it has not received the proper attention from researchers. We believe these issues should be addressed in a comprehensive and integrated fashion before fair exchange protocols can be effectively deployed in the marketplace. This is the aim of our research, and drawing attention to these problems and possible solutions is the goal of this technical report.
Abstract: This work presents an optimum user-steered boundary tracking approach for image segmentation, which simulates the behavior of water flowing through a riverbed. The riverbed approach was devised using the Image Foresting Transform with a connectivity function never exploited before. We analyze its properties in the derived image graphs and discuss its theoretical relation to other popular methods, such as live wire and graph cuts. Several experiments show that riverbed can significantly reduce the number of user interactions (anchor points) compared with live wire for objects with complex shapes. This work also includes a discussion of how to combine different methods in order to take advantage of their complementary strengths.
Abstract: An Inclusive Social Network Service (ISN) can be defined as a Social Network Service (SNS) with resources that promote access for all, including those at the margins of digital culture. An ISN must include adequate means to retrieve information that makes sense to all. A search mechanism capable of understanding the shared meanings used by ISN users is still needed. In this sense, methods and approaches should support capturing the social and cultural aspects of the ISN, including its colloquial language and shared meanings. In order to achieve a better understanding and representation of the semantics used by ISN members, this technical report presents the application and analysis of a semantic modeling method proposed to represent the meanings of terms adopted by ISN users. The outcome of the method is intended to be used by an inclusive search mechanism. This approach can enable novel ontology-based search strategies that potentially provide more adequate semantic search results.
Abstract: The evolution of the Web depends on novel techniques and methodologies that can handle and better represent the meanings of the huge amount of information available nowadays. Recent proposals in the literature have explored new approaches based on Semiotics. The Semiotic Web Ontology (SWO) is an attempt to model information in a computer-tractable and more adequate way and, at the same time, to be compatible with Semantic Web (SW) standards. This work presents an assisted process for building SWOs. The process includes heuristics and transformation rules for deriving an initial Web ontology, described in the Web Ontology Language (OWL), from Ontology Charts produced by the Semantic Analysis Method. Moreover, the whole process is discussed; results of applying the process to a real context show the potential of the approach and the value of the proposed heuristics and implemented rules for creating more representative Web ontologies.
Abstract: There are many applications that need support for compound information. Thus, we need new mechanisms for managing data integration; aids for creating references, links, and annotations; and services for clustering, organizing, and reusing compound objects (COs) and their components. Few attempts have been made to formally characterize compound information and its related services and technologies. We propose a description of the interplay of technologies for handling compound information, taking advantage of the formalization proposed by the 5S Framework. This paper: (1) analyzes technologies that manage compound information (DCC, Buckets, OAI-ORE); (2) uses 5S formal definitions to describe them; (3) presents a case study illustrating how CO technologies and the 5S Framework can fit together to support the exploration of compound information.