Summary: This study aims to evaluate the Soundex name-encoding method, with a slight modification relative to the one presented by [], to compare it with an adaptation for the Portuguese language, and to discuss ways of recovering names from a code produced by these encoding mechanisms.
Abstract: The Network Simulator (ns-2) is a popular tool for the simulation of computer networks which provides substantial support for simulating the Internet protocols over wired and wireless networks. Although some modules for WiMAX network simulation have been proposed for the ns-2, none of them implements all the MAC features specified by the IEEE 802.16 standard for bandwidth management and QoS support. This paper presents the design and validation of a WiMAX module based on the IEEE 802.16 standard. The implemented module includes mechanisms for bandwidth request and allocation, as well as for QoS provision. Moreover, the implementation is standard-compliant.
Abstract: The IEEE 802.16 standard for broadband wireless access is a low-cost solution for Internet access in metropolitan and rural areas. Although it defines five service levels to support real-time and bandwidth-demanding applications, scheduling mechanisms are not specified in the standard. Due to the variability of the wireless channel, scheduling mechanisms widely studied for wired networks are not suitable for IEEE 802.16 networks. This paper proposes an uplink cross-layer scheduler which makes bandwidth allocation decisions based on information about the channel quality and on the Quality of Service requirements of each connection. Simulation results show that the proposed scheduler improves the network performance when compared with a scheduler which does not take the channel quality into account.
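For illustration only, a minimal sketch of how a cross-layer uplink priority might blend channel quality with QoS urgency; every name, field, and weight below is hypothetical, and the paper's actual allocation rule is not reproduced here.

```python
# Hypothetical sketch of a cross-layer uplink priority rule: connections
# with better channel quality and tighter QoS deadlines are served first.
# Names and weights are illustrative, not the paper's actual formula.
from dataclasses import dataclass

@dataclass
class Connection:
    cid: int
    channel_quality: float   # e.g. normalized SNR in [0, 1]
    deadline_ms: float       # time left to meet the QoS delay bound
    min_rate_kbps: float     # minimum reserved traffic rate

def priority(c: Connection, alpha: float = 0.5) -> float:
    """Blend channel quality with deadline urgency (higher = served first)."""
    urgency = 1.0 / max(c.deadline_ms, 1e-3)
    return alpha * c.channel_quality + (1 - alpha) * urgency

def schedule(connections, slots_available):
    """Grant uplink slots to the highest-priority connections first."""
    ordered = sorted(connections, key=priority, reverse=True)
    return ordered[:slots_available]

if __name__ == "__main__":
    conns = [Connection(1, 0.9, 40.0, 64), Connection(2, 0.4, 5.0, 128)]
    print([c.cid for c in schedule(conns, 1)])  # [1]
```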
Summary: Totally ordered broadcast (DTO) of messages in asynchronous distributed systems is a fundamental mechanism for building fault-tolerant distributed applications. Essentially, it ensures that messages sent to a set of processes are delivered by all of them in the same total order. The totally ordered broadcast problem can be reduced to distributed consensus, another fundamental problem in distributed algorithms. Mechanisms that implement solutions to both problems offer different performance and fault-tolerance levels, depending on the computing model assumed in their development. In this project we present a synchronous totally ordered message broadcast protocol (DSTO), developed modularly and with emphasis on performance. The protocol targets clusters dedicated to distributed processing, whose temporal behavior is described by the timed asynchronous computing model. Based on these premises, we developed a communication layer that allows the distributed execution to be organized as a sequence of synchronous computing steps. The protocol operates on top of this layer and ties its progress to the synchronous behavior of the system, but it guarantees that the message-ordering property is preserved even when processes and communication channels behave asynchronously.
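As a toy illustration of the ordering idea only (not the DSTO protocol itself, which also handles failures and the timed asynchronous layer): if every process gathers the same message set in a synchronous step, a deterministic delivery rule yields the same total order everywhere.

```python
# Toy illustration of round-based total ordering: if every process receives
# the same set of messages in a synchronous step, delivering them in a
# deterministic order (here, sorted by sender id) yields the same total
# order everywhere. Real DSTO handles failures and asynchrony; this does not.
def deliver_round(messages):
    """messages: list of (sender_id, payload) gathered in one step."""
    return [payload for _, payload in sorted(messages)]

round_msgs = [(3, "c"), (1, "a"), (2, "b")]
# Every correct process computes the identical delivery order:
assert deliver_round(round_msgs) == ["a", "b", "c"]
```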
Abstract: This paper introduces admission control policies for the IEEE 802.16 standard which aim at three main goals: restricting the number of simultaneous connections in the system so that the resources available to the scheduler are sufficient to guarantee the QoS requirements of each connection, supporting the service provider's expectations by maximizing revenue, and maximizing users' satisfaction by granting them additional resources. The proposed policies are evaluated through simulation experiments.
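A minimal sketch of the first goal only (capacity-bounded admission), with hypothetical names and units; the revenue- and satisfaction-oriented policies of the paper are not modeled here.

```python
# Hypothetical capacity-check admission control: a new connection is
# admitted only if its minimum reserved rate still fits in the leftover
# capacity, so the scheduler can keep every admitted connection's QoS.
class AdmissionController:
    def __init__(self, capacity_kbps: float):
        self.capacity = capacity_kbps
        self.reserved = 0.0

    def admit(self, min_rate_kbps: float) -> bool:
        if self.reserved + min_rate_kbps <= self.capacity:
            self.reserved += min_rate_kbps
            return True
        return False

ac = AdmissionController(1000.0)
print(ac.admit(600.0), ac.admit(500.0))  # True False
```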
Abstract: Scheduling is an essential mechanism in IEEE 802.16 networks for distributing the available bandwidth among the active connections so that their quality of service requirements can be met. Scheduling mechanisms designed for wired networks, when used in wireless networks, lead to inefficient use of the bandwidth, since they ignore the location-dependent and time-varying characteristics of the wireless link. This paper introduces a standard-compliant cross-layer scheduling mechanism which considers the modulation and coding scheme of each mobile station to increase the efficiency of channel utilization while meeting the QoS requirements of the connections.
Abstract: In order to support real-time and high-bandwidth applications, the IEEE 802.16 standard is expected to provide Quality of Service (QoS). Although the standard defines a QoS signaling framework and five service levels, scheduling disciplines are left unspecified. In this technical report, we introduce a scheduling scheme for the uplink traffic. The proposed solution is fully standard-compliant and can be easily implemented in the base station. Simulation results show that this solution is able to meet the QoS requirements of multimedia applications.
Abstract: Although most of the traffic carried over the Internet uses the Transmission Control Protocol (TCP) as the transport layer protocol, it is of paramount importance to develop models for streams that use the User Datagram Protocol (UDP), since these streams are inelastic and, consequently, can jeopardize the acquisition of bandwidth by TCP streams. This paper introduces a traffic model for UDP streams and compares its performance to that of other traffic models. The proposed model can be used to generate streams of aggregated UDP sources in simulation experiments.
Abstract: A huge effort has been devoted to image classification in order to create high-quality thematic maps and to establish precise inventories of land cover use. The peculiarities of Remote Sensing Images (RSIs), combined with the traditional challenges of image classification, make RSI classification a hard task. Our aim is to propose a kind of boost classifier adapted to multi-scale segmentation. We use the paradigm of boosting, whose principle is to combine weak classifiers to build an efficient global one. Each weak classifier is trained for one level of the segmentation and one region descriptor. We propose and test weak classifiers based on linear SVMs and on region distances provided by descriptors. The experiments were performed on a large image of coffee plantations. We show in this paper that our boosting-based approach can detect the scale and set of features best suited to a particular training set. We also show that hierarchical multi-scale analysis is able to reduce training time and to produce a stronger classifier. We compare the proposed methods with a baseline based on SVM with an RBF kernel. The results show that the proposed methods outperform the baseline.
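For readers unfamiliar with the paradigm, a minimal AdaBoost-style sketch of combining weak classifiers into a weighted vote; the vectors below are toy data, and the paper's per-scale, per-descriptor weak learners built on linear SVMs are not reproduced.

```python
import math

# Toy AdaBoost-style combination: each round picks the weak classifier
# with lowest weighted error, gives it a vote weight derived from that
# error, and reweights the samples so mistakes matter more next round.
def boost(weak_preds, labels, rounds):
    """weak_preds: list of prediction vectors in {-1,+1}; labels in {-1,+1}."""
    n = len(labels)
    w = [1.0 / n] * n
    alphas, chosen = [], []
    for _ in range(rounds):
        errs = [sum(wi for wi, p, y in zip(w, preds, labels) if p != y)
                for preds in weak_preds]
        k = min(range(len(errs)), key=errs.__getitem__)
        eps = min(max(errs[k], 1e-10), 1.0 - 1e-10)
        alpha = 0.5 * math.log((1.0 - eps) / eps)
        # misclassified samples gain weight, correct ones lose weight
        w = [wi * math.exp(-alpha * p * y)
             for wi, p, y in zip(w, weak_preds[k], labels)]
        z = sum(w)
        w = [wi / z for wi in w]
        alphas.append(alpha)
        chosen.append(k)
    def strong(i):
        s = sum(a * weak_preds[k][i] for a, k in zip(alphas, chosen))
        return 1 if s >= 0 else -1
    return strong

labels = [1, 1, -1, -1]
weak_preds = [[1, 1, -1, 1], [1, 1, 1, -1], [-1, 1, -1, -1]]
strong = boost(weak_preds, labels, rounds=3)
print([strong(i) for i in range(4)])  # [1, 1, -1, -1]
```

Note that no single weak classifier above labels all four samples correctly, but the weighted vote of three of them does.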
Abstract: In this work we present a new method for nonlinear optimization based on quadratic interpolation, on the search for stationary points, and on an edge-divided simplex. We show convergence results for our method applied to various functions with one or more initial guesses.
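A minimal sketch of the textbook building block assumed here: one quadratic-interpolation step that fits a parabola through three points and jumps to its vertex. The stationary-point search and edge-divided simplex parts of the method are not shown.

```python
# One quadratic-interpolation step: fit a parabola through three points
# bracketing a minimum and return the abscissa of its vertex. This is
# the classic successive-parabolic-interpolation update.
def quad_step(f, a, b, c):
    fa, fb, fc = f(a), f(b), f(c)
    num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
    den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
    return b - 0.5 * num / den  # vertex of the interpolating parabola

f = lambda x: (x - 2.0) ** 2 + 1.0
print(quad_step(f, 0.0, 1.0, 3.0))  # 2.0
```

For an exactly quadratic function the interpolant is the function itself, so the step above lands on the minimizer in one move.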
Summary: In recent years, Service Oriented Computing has emerged as a new paradigm for the development of distributed applications. In this paradigm, service providers develop their web services and publish them in service repositories. Service consumers can find the services they need in the repositories and create new services by composing other services. Even when there is a prior agreement between the parties, many factors, such as an unreliable communication infrastructure or the alteration of services by providers, may prevent the agreed-upon conditions from being maintained. In view of this problem, it is important that services be monitored and inconsistent behavior detected. This study surveys the state of the art in web service monitoring and compares the existing proposals with respect to the types and forms of monitoring found in the literature.
Summary: The "Todos Nós em Rede" (TNR) project aims to provide continuing education at a distance for Special Education teachers in Brazilian public education systems, through the establishment of Inclusive Social Networks (RSIs) for these professionals. This technical report presents an assessment of technologies and their technical feasibility for the development of the TNR computer system. This system will be characterized as an IHR in which values such as accessibility, autonomy and authority are important. This work raises, selects, presents, analyzes and discusses, from an exploratory research, possibilities for the system development platform. A comparative analysis is also carried out that articulates the characteristics of the different platforms and illustrates the positive and negative aspects in the light of the essential requirements for the TNR system.
Abstract: Most face recognition methods rely on a common feature space to represent the faces, in which the face aspects that best distinguish among all the persons are emphasized. This strategy may be inadequate to represent the most relevant aspects of a specific person's face, since some aspects are good at distinguishing only a given person from the others. Based on this idea, and supported by findings in the human perception of faces, we propose a face recognition framework that associates a feature space with each person we intend to recognize. Such feature spaces are conceived to underline the discriminating face aspects of the persons they represent. In order to recognize a probe, we match it to the gallery in all the feature spaces and fuse the results to establish the identity. With the help of an algorithm that we devise, the Discriminant Patch Selection, we were able to carry out experiments that intuitively compare the traditional approaches with the person-specific representation. In the performed experiments, the person-specific face representation always resulted in better identification of the faces.
Abstract: Optimization problems from which no structural information can be obtained arise every day in many fields of knowledge, increasing the importance of well-performing black-box optimizers. In this paper, we introduce a new approach for the global optimization of black-box problems based on the synergistic combination of scaled local searches. By looking further at the behavior of the problem, the idea is to speed up the search while preventing it from becoming trapped in local optima. The method is fairly compared to other well-performing techniques, showing promising results. A testbed of benchmark problems and image registration problems is considered, and the proposed approach outperforms the other algorithms in most cases.
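A cartoon-level sketch of the scaled local search idea (hypothetical parameters, one dimension, random hill climbing); the actual method and its comparison protocol are considerably more elaborate.

```python
import random

# Toy multi-scale local search: repeat hill climbing with perturbation
# radii at several scales, keeping the best point found. Large scales
# explore globally; small scales refine locally.
def multiscale_search(f, x0, scales=(1.0, 0.1, 0.01), steps=200, seed=0):
    rng = random.Random(seed)
    best_x, best_f = x0, f(x0)
    for s in scales:
        x, fx = best_x, best_f
        for _ in range(steps):
            cand = x + rng.uniform(-s, s)
            fc = f(cand)
            if fc < fx:
                x, fx = cand, fc
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

print(multiscale_search(lambda x: (x - 3.0) ** 2, x0=0.0))
```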
Abstract: Shape deformation methods are important in such fields as geometric modeling and computer animation. In biology, modeling the shape, growth, movement and pathologies of living microscopic organisms or cells requires smooth deformations, which are essentially 2D with little change in depth. In this paper, we present a 2.5D space deformation method. The 3D model is modified by deforming an enclosing control grid of prisms. Spline interpolation is used to satisfy the smoothness requirement. We implemented this method in an editor which makes it possible to define and modify the deformation with the mouse in a user-friendly way. The experimental results show that the method is simple and effective.
Summary: This technical report contains the abstracts of 26 papers presented at the VI Workshop of Theses, Dissertations and Scientific Initiation Papers of the Institute of Computing of UNICAMP (WTD-IC-UNICAMP 2011). The Workshop took place on May 11, 2011, with oral presentations by master's and doctoral students and a poster session.
Graduate students could choose the form of presentation (oral, poster, or both), while undergraduate students presented in poster form. The publication of the abstracts as a technical report aims to disseminate the work in progress and to record, succinctly, the state of the art of the research carried out at the IC in 2011.
Abstract: Traditional algorithms to solve the problem of sorting by signed reversals output just one optimal solution, while the space of optimal solutions can be huge. Algorithms for enumerating the complete set of traces of solutions were developed with the objective of supporting biologists in their studies of alternative evolutionary scenarios. Due to the exponential complexity of these algorithms, their practical use is limited to small permutations. In this work, we propose and evaluate three different approaches for producing a partial enumeration of the complete set of traces of a permutation.
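A minimal sketch of the basic operation whose optimal sequences are being enumerated, the signed reversal, which flips a segment of a signed permutation and negates its signs; the trace-enumeration algorithms themselves are not reproduced.

```python
# A signed reversal rho(i, j) reverses the segment pi[i..j] and flips
# every sign in it; sorting by signed reversals seeks the shortest
# sequence of such operations turning pi into the identity permutation.
def reversal(pi, i, j):
    return pi[:i] + [-x for x in reversed(pi[i:j + 1])] + pi[j + 1:]

pi = [3, -1, 2]
print(reversal(pi, 0, 1))  # [1, -3, 2]
```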
Abstract: We describe a fast and robust method for computing scene depths (or heights) from surface gradient (or surface normal) data, such as would be obtained by photometric stereo or interferometry. Our method allows for uncertain or missing samples, which are often present in experimentally measured gradient maps; for sharp discontinuities in scene depth, e.g. along object silhouette edges; and for irregularly spaced sampling points. To accommodate these features of the problem, we introduce an original and flexible representation of slope data, the weight-delta mesh. Like other state-of-the-art solutions, our algorithm reduces the problem to a system of linear equations that is solved by Gauss-Seidel iteration with multi-scale acceleration. Tests with various synthetic and measured gradient data show that our algorithm is as accurate and efficient as the best available integrators for uniformly sampled data. Moreover, thanks to the use of the weight-delta mesh representation, our algorithm remains accurate and efficient even for large sets of weakly-connected data, which cannot be handled efficiently by any existing algorithm.
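A minimal sketch of the linear-system core on a uniform grid, assuming the convention gx[y, x] ≈ z[y, x+1] - z[y, x] (and analogously for gy); the weight-delta mesh, uncertain-sample weights, and multi-scale acceleration are not shown.

```python
import numpy as np

# Gauss-Seidel integration of a gradient field (gx, gy) into heights z.
# Convention (an assumption of this sketch): gx[y, x] ~ z[y, x+1] - z[y, x]
# and gy[y, x] ~ z[y+1, x] - z[y, x]. Each interior sample is moved toward
# the average of what its four neighbors predict for it via the slopes.
def integrate(gx, gy, iters=500):
    h, w = gx.shape
    z = np.zeros((h, w))
    for _ in range(iters):
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                z[y, x] = 0.25 * (
                    z[y, x - 1] + gx[y, x - 1]    # prediction from the left
                    + z[y, x + 1] - gx[y, x]      # from the right
                    + z[y - 1, x] + gy[y - 1, x]  # from above
                    + z[y + 1, x] - gy[y, x]      # from below
                )
    return z

print(integrate(np.zeros((5, 5)), np.zeros((5, 5)))[2, 2])  # flat field -> 0.0
```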
Abstract: In this work we discuss the use of histogram of oriented gradients (HOG) descriptors as an effective tool for text description and recognition. Specifically, we propose a Fuzzy HOG-based texture descriptor (F-HOG) that partitions the image into three horizontal cells with fuzzy adaptive boundaries, to characterize single-line texts in outdoor scenes and video frames. The input of our algorithm is a rectangular image presumed to contain a single line of text in Latin-like characters. The output is a relatively short (54-feature) descriptor that provides an effective input to an SVM classifier. Tests show that F-HOG is more accurate than Dalal and Triggs' original HOG-based classifier using a 54-feature descriptor, and comparable to their best classifier (which uses a 108-feature descriptor) while being half as long.
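A minimal sketch of the fuzzy partition idea with fixed (non-adaptive) cell centers: each image row contributes to the three horizontal cells with soft overlapping memberships instead of a hard split. F-HOG adapts the boundaries per image, which this toy version does not.

```python
# Soft membership of each row in three horizontal cells (top/middle/
# bottom) using overlapping triangular functions; HOG votes from a row
# would then be weighted by these memberships instead of a hard split.
def memberships(row, height):
    t = row / max(height - 1, 1)          # normalized position in [0, 1]
    centers = (0.0, 0.5, 1.0)
    raw = [max(0.0, 1.0 - abs(t - c) / 0.5) for c in centers]
    s = sum(raw)
    return [m / s for m in raw]

for r in range(5):
    print(r, [round(m, 2) for m in memberships(r, 5)])
```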
Abstract: The demand for electrical energy has increased, making it more expensive. In computing, an environment highly dependent on energy, it is important to develop techniques that allow power savings. The evaluation of power-aware algorithms requires the measurement of actual computer power. This report presents a real power measurement framework. The framework is composed of a custom-made board, which captures the power consumption and is installed in a commodity computer; a data acquisition device that samples the measured values; and a piece of software that manages the framework. This work shows the steps taken to develop the framework and also presents two examples of its use. The first example profiles the power of a small matrix multiplication program and discusses performance and energy trade-offs. The second example uses the framework to characterize and model the power consumption of a web server delivering static web content.
Abstract: Similarity search in high-dimensional metric spaces is a key operation in many applications, such as multimedia databases, image retrieval, and object recognition, among others. The high dimensionality of the data requires special index structures to facilitate the search. A problem regarding the creation of suitable index structures for high-dimensional data is the relationship between the geometry of the data and the organization of an index structure. Most existing indexes are constructed by partitioning the dataset using distance-based criteria. However, those methods either produce disjoint partitions but ignore the distribution properties of the data, or produce non-disjoint groups, which greatly affects search performance. In this paper, we study the performance of a new index structure, called the Ball-and-Plane tree (BP-tree), which overcomes the above disadvantages. The BP-tree is constructed by recursively dividing the dataset into compact clusters. Unlike other techniques, it integrates the advantages of both the disjoint and non-disjoint paradigms in order to achieve a structure of tight clusters with low overlap, yielding significantly improved performance. Results obtained from an extensive experimental evaluation with real-world datasets show that the BP-tree consistently outperforms state-of-the-art solutions. In addition, the BP-tree scales up well, exhibiting sublinear performance with a growing number of objects in the database.
Abstract: Diabetic retinopathy (DR) is a complication of diabetes that affects the retina's blood flow. The effect of DR is the weakening of the retina's vessels, resulting in anything from small hemorrhages to the growth of new blood vessels. If left untreated, DR eventually leads to blindness; in fact, it is the leading cause of blindness in persons aged 20 to 74 years in developed countries. One of the most successful means of fighting DR is early diagnosis through the analysis of ocular-fundus images of the human retina. In this paper, we present a new approach to detect retina-related pathologies from ocular-fundus images. Our work is intended for an automatic triage scenario, where patients whose retina is considered not normal by the system will see a specialist. This implies that automatic screening needs an evaluation criterion that rewards low false negative rates, i.e., we should avoid, as much as possible, images incorrectly classified as normal. Our solution constructs a visual dictionary of the desired pathology's important features and classifies whether an ocular-fundus image is normal or a DR candidate. We evaluate the methodology on hard exudates, deep hemorrhages, and microaneurysms, test different parameter configurations, and demonstrate the robustness and reliability of the approach by performing cross-dataset validation (using both our own and other publicly available datasets).
Abstract: Recent advances in technology have increased the availability of video data, creating a strong requirement for efficient systems to manage those materials. Making efficient use of video information requires that the data be accessed in a user-friendly way. Ideally, one would like to perform video search using an intuitive tool. Most existing browsers for the interactive search of video sequences, however, employ an overly rigid layout to arrange the results, restricting users to exploring the results through list- or grid-based layouts. In this paper, we present a novel approach for interactive search that displays the result set in a flexible manner. The proposed method is based on a hierarchical structure called Divisive-Agglomerative Hierarchical Clustering (DAHC). It is able to group together videos with similar content and to organize the result set in a well-defined tree. This strategy makes navigation more coherent and engaging for users.
Abstract: Fair exchange protocols have been widely studied since their proposal, but are still not implemented in most e-commerce transactions available. For several types of items, the current e-commerce business models fail to provide fairness to customers. The item validation problem is a critical step in fair exchange, yet it has not received proper attention from researchers. We believe these issues should be addressed in a comprehensive and integrated fashion before fair exchange protocols can be effectively deployed in the marketplace. This is the aim of our research, and drawing attention to these problems and their possible solutions is the goal of this technical report.
Abstract: This work presents an optimum user-steered boundary tracking approach for image segmentation, which simulates the behavior of water flowing through a riverbed. The riverbed approach was devised using the Image Foresting Transform with a previously unexploited connectivity function. We analyze its properties in the derived image graphs and discuss its theoretical relation to other popular methods, such as live wire and graph cuts. Several experiments show that riverbed can significantly reduce the number of user interactions (anchor points) compared to live wire for objects with complex shapes. This work also includes a discussion of how to combine different methods in order to take advantage of their complementary strengths.
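A minimal sketch of the Image Foresting Transform machinery the method builds on: Dijkstra-like propagation from seeds under a pluggable path-cost (connectivity) function. The max-arc function below is a common textbook choice, not the riverbed function from the paper.

```python
import heapq

# IFT skeleton: from seed pixels, propagate optimum paths over the
# 4-connected image graph under a pluggable path-cost (connectivity)
# function. The default max-arc cost is the classic fmax choice; the
# riverbed connectivity function would plug in here instead.
def ift(weights, seeds, extend=lambda cost, arc: max(cost, arc)):
    h, w = len(weights), len(weights[0])
    cost = {(y, x): float("inf") for y in range(h) for x in range(w)}
    heap = []
    for s in seeds:
        cost[s] = 0
        heapq.heappush(heap, (0, s))
    while heap:
        c, (y, x) = heapq.heappop(heap)
        if c > cost[(y, x)]:
            continue  # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nc = extend(c, weights[ny][nx])
                if nc < cost[(ny, nx)]:
                    cost[(ny, nx)] = nc
                    heapq.heappush(heap, (nc, (ny, nx)))
    return cost

img = [[1, 9, 1], [1, 9, 1], [1, 1, 1]]
print(ift(img, seeds=[(0, 0)])[(0, 2)])  # 1: optimum path avoids the 9s
```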
Abstract: An Inclusive Social Network Service (ISN) can be defined as a Social Network Service (SNS) with resources that promote access for all, including those at the margins of digital culture. An ISN must include adequate means to recover information that makes sense for all. A search mechanism capable of understanding the shared meanings used by ISN users is still needed. In this sense, methods and approaches should support capturing the social and cultural aspects of the ISN, including its colloquial language and shared meanings. In order to achieve a better understanding and representation of the semantics used by ISN members, this technical report presents the application and analysis of a semantic modeling method proposed to represent the meanings of terms adopted by ISN users. The outcome of the method is intended to be used by an inclusive search mechanism. This approach can enable novel ontology-based search strategies that potentially provide more adequate semantic search results.
Abstract: The evolution of the Web depends on novel techniques and methodologies that can handle and better represent the meanings of the huge amount of information available nowadays. Recent proposals in the literature have explored new approaches based on Semiotics. The Semiotic Web Ontology (SWO) is an attempt to model information in a computer-tractable and more adequate way and, at the same time, to be compatible with the Semantic Web (SW) standards. This work presents an assisted process for building SWOs. The process includes heuristics and transformation rules for deriving an initial Web ontology, described in the Web Ontology Language, from Ontology Charts produced by the Semantic Analysis Method. Moreover, the whole process is discussed; results of applying the process to a real context show the potential of the approach and the value of the proposed heuristics and implemented rules for creating more representative Web ontologies.
Abstract: There are many applications which need support for compound information. Thus, we need new mechanisms for managing data integration; aids for creating references, links and annotations; and services for clustering, organizing and reusing compound objects (COs) and their components. Few attempts have been made to formally characterize compound information and its related services and technologies. We propose a description of the interplay of technologies for handling compound information, taking advantage of the formalization proposed by the 5S Framework. This paper: (1) analyzes technologies which manage compound information (DCC, Buckets, OAI-ORE); (2) uses 5S formal definitions to describe them; (3) presents a case study illustrating how CO technologies and the 5S Framework can fit together to support the exploration of compound information.