Summary: Ramsey theory is an area of mathematics unified by the theme that complete disorder is impossible. More specifically, if a structure is large enough, then it must contain a very special and ordered substructure. This phenomenon occurs in several fields of mathematics, such as combinatorics, geometry and number theory. This project addresses basic concepts and some of the classic results of Ramsey theory applied to graphs.
Ramsey theory is named after the British mathematician and philosopher Frank P. Ramsey, for his work in logic published in 1930, but it only acquired a cohesive theoretical body in the 1970s. The area has received great attention in the last twenty years for its connections with diverse fields of mathematics, and many of its fundamental problems remain unsolved. In addition, little of the theory has reached undergraduate-level textbooks. Considering this gap in the literature, this text presents basic concepts of Ramsey theory in graphs at an introductory level and in Portuguese. The probabilistic method is also presented as an example of a more advanced topic.
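For concreteness, the archetypal statement in graph Ramsey theory reads as follows (a standard result, stated here only for illustration):

```latex
\forall\, s,t \;\; \exists\, R(s,t) \text{ such that } n \ge R(s,t)
\implies \text{every red/blue coloring of } E(K_n)
\text{ contains a red } K_s \text{ or a blue } K_t.
```

The classic small case is R(3,3) = 6: any two-coloring of the edges of K_6 contains a monochromatic triangle, while K_5 admits a coloring with none.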
Summary: Popular education projects are initiatives that offer complementary training courses, free or at low cost, to the communities in which they operate. Due to their non-profit character, they commonly lack teaching material and depend on donations to function. In view of this need and the potential of technology to benefit education, this work aims to produce digital educational resources to support the mathematics classes of the Exact Course, a community extension project by PREAC (Dean of Extension and Community Affairs at Unicamp). The Exact Course is a popular education project that seeks the academic development of public high school students in the Campinas region. The material developed is meant to form the basis of the "Month of Mathematics", a class period dedicated entirely to teaching elementary mathematics, and consists of: a set of slides, to be used in theoretical classes; a set of exercise lists, to be used in practical classes; suggestions of tools and digital resources that complement the subject of each class; and guidance for teachers on how to use them.
Abstract: In the past few years, mobile devices and apps have become more and more popular. This has increased the demand for mobile development on multiple platforms beyond the number of developers available.
A common development pattern in such projects is to have a central server and multiple clients on different platforms, such as Android, iOS and the web. The server exposes an API (Application Programming Interface), and the clients send HTTP requests to the API to interact with the business logic of the app. For multi-platform apps, having to implement the API client for every platform can be very time consuming, and the work grows with the size of the API and the number of supported platforms. Also, every refactor of the API requires reviewing the clients of all the other platforms.
The first step to improve client development is to have good documentation and communication between the server developers and the client developers. There are several ways of documenting an API, but the most common frameworks are RAML, Swagger and API Blueprint. These frameworks provide a structured way of documenting an API, but the documentation can also be rendered in a more human-readable form, and even be used to generate client code for the applications, saving developers a great deal of time and avoiding mistakes.
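As a taste of what such structured documentation looks like, here is a minimal RAML 1.0 document (the endpoint and descriptions are hypothetical):

```raml
#%RAML 1.0
title: Example API
baseUri: https://api.example.com/v1
/users:
  get:
    description: List all users.
    responses:
      200:
        body:
          application/json:
  post:
    description: Create a new user.
```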
In this project, I built an Android client generator based on the RAML framework. It uses other patterns common in Android, like dependency injection with Dagger 2, promises with Bolts, and the Spring RestTemplate for the HTTP requests themselves. It reads the RAML documentation and generates the Android code to communicate with the API.
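Conceptually, such a generator walks the RAML resource tree and emits one client stub per endpoint. A minimal sketch of the idea (Python for brevity; the project itself generates Java for Android, only top-level resources are handled here, and PyYAML is assumed):

```python
# Minimal sketch of a RAML-driven stub generator (illustrative only).
import yaml  # assumes PyYAML is installed

def generate_stubs(raml_path):
    with open(raml_path) as f:
        spec = yaml.safe_load(f)  # the "#%RAML 1.0" header parses as a YAML comment
    base = spec.get("baseUri", "")
    stubs = []
    for key, node in spec.items():
        if not key.startswith("/") or not isinstance(node, dict):
            continue  # only resource nodes define endpoints
        for method in ("get", "post", "put", "delete"):
            if method in node:
                name = method + key.replace("/", "_")
                stubs.append(f"def {name}():  # {method.upper()} {base}{key}")
    return stubs
```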
Summary: The technological barriers to building single-core processors that keep up the pace of performance gains of recent decades have given rise to multicore architectures, which today's developers must learn to use. The natural difficulty of developing systems for such architectures has led to the emergence of several parallel programming frameworks that seek, each in its own way, to make this type of development more efficient and less error-prone. In this context arises the concept of task parallelism, which encapsulates function calls and small pieces of code in entities called tasks. Being a macroscopic analog of processor instructions, tasks can be scheduled in parallel on the available processing units as long as their dependency relationships, inferred from the way each task accesses its data, are respected, in a logic very close to the one that governs the scheduling of instructions to the ALUs of a superscalar processor. Currently, the OpenMP specification provides extensive support for task parallelism, which motivated the present work: the development of a runtime with OpenMP 4.0 support capable of accelerating task scheduling by offloading the computation of data-dependency inference to an FPGA accelerator.
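The core idea of inferring dependencies from each task's data accesses can be sketched in a few lines. Below is a simplified software model in Python, mirroring the semantics of OpenMP's depend(in:) and depend(out:) clauses; it is illustrative only and not the project's FPGA runtime:

```python
# Simplified model of task-dependency inference from declared accesses.
def build_dependencies(tasks):
    """tasks: list of (name, reads, writes), in submission order.
    Returns edges (a, b) meaning task b must wait for task a."""
    edges = []
    last_writer = {}  # datum -> last task that wrote it
    readers = {}      # datum -> tasks reading it since the last write
    for name, reads, writes in tasks:
        for d in reads:   # read-after-write: wait for the last writer
            if d in last_writer:
                edges.append((last_writer[d], name))
            readers.setdefault(d, []).append(name)
        for d in writes:  # write-after-read and write-after-write hazards
            for r in readers.pop(d, []):
                if r != name:
                    edges.append((r, name))
            if d in last_writer and last_writer[d] != name:
                edges.append((last_writer[d], name))
            last_writer[d] = name
    return edges

# t2 reads x, which t1 writes, so t2 must wait for t1:
print(build_dependencies([("t1", [], ["x"]), ("t2", ["x"], ["y"])]))
```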
Summary: The World Health Organization estimates that, to meet the demand for blood transfusions, 3% to 5% of the population aged between 18 and 65 years must be regular donors. In Brazil, the total percentage of donors corresponds to only 1.8%, of which 40% are sporadic donors. Attracting new donors, especially young people, is crucial to meeting this demand. The aim of this work is to propose and develop a mobile application that promotes the exercise of citizenship by supporting and encouraging blood donation within networks of friends.
The following techniques were used to ensure the quality of the result: the concept of the system was generated based on the methods and artifacts of Organizational Semiotics; volunteer users participated in an adapted BrainDraw session; volunteer users participated in the evaluation of an existing blood donation application; several versions of the interface were produced, always seeking better conformance to the principles of usability (learnability, efficiency of use, memorability, low error frequency, subjective satisfaction), of affective quality in design (based on Norman), and of Maeda's Laws of Simplicity; the project was presented, discussed and evaluated with the IHC research group InterHAD (evaluating the prototype according to the set of mobile guidelines by Nicastro F., proposed in his Master's dissertation at IC-Unicamp in 2015); finally, several of the improvements identified in this last evaluation were implemented. The practical result of this work was the specification of the “Friends of Blood” application to support and encourage blood donation, as well as the implementation of its interface for Android.
Abstract: Manually annotating large datasets is unfeasible, and doing it automatically with a pattern classifier depends on the quality of a much smaller training set. Active learning techniques have been proposed to select the relevant samples from large datasets by prompting a user with label suggestions to be confirmed or corrected.
In this work we explore variations of an active learning methodology that, given an organization of the data computed beforehand and only once, allows interactive response times during the active learning process involving an expert.
We use the optimum-path forest (OPF) data clustering algorithm for the a priori organization and some combinations of active learning algorithms and classifiers to test the methodology. The active learning algorithms considered are the root distance-based sampling (RDS), a new variation of it that we call root path-weight-based sampling (RWS) and two additional random selection baselines. The included classifiers are the OPF-based supervised and semi-supervised learning methods, and an ensemble of logistic regression classifiers.
We tested each combination of active learning and classification algorithm on a dataset extracted from images of intestinal parasites, in which a large and diverse class, namely impurities, mixed with the actual parasites, poses a challenge for learning methods.
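For readers unfamiliar with the setting, pool-based active learning follows a simple loop; the Python sketch below shows its shape (function names are placeholders, and criteria such as RDS or RWS would plug in as the select strategy):

```python
# Generic pool-based active learning loop (illustrative only; select,
# train and oracle are placeholder callables supplied by the user).
def active_learning(pool, select, train, oracle, budget, batch=5):
    labeled = []
    model = None
    for _ in range(budget):
        queries = select(model, pool, batch)  # e.g., by distance to OPF roots
        for x in queries:
            labeled.append((x, oracle(x)))    # expert confirms/corrects label
            pool.remove(x)
        model = train(labeled)                # retrain the classifier
    return model
```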
Summary: The introduction of technological devices in the educational environment has been a great revolution: it has enabled the dissemination of content and increased the reach of information, both broadening penetration into the various spheres of users and facilitating the task of those who set out to generate this content. Computers and their main input devices, mouse and keyboard, were designed for individual, static, productivity-oriented interaction, which limits their use in teaching, in addition to their diminishing presence in users' lives as mobile devices take their place. The result of this work was the creation of an application for the Android mobile platform with several resources for recording educational content; the application generates a package that can be shared and sent to students. Another result of this project was the creation of a player, embedded in the package generated by the application, which interprets the exported content and allows the student to simply watch the class; the package also carries information for cataloging and classifying the class. Another outcome of this application is the intelligent use of storage, which significantly reduces the size of the generated content.
Summary: This report covers the final graduation work of student Victor Roth Cardoso, whose focus is a study on the creation of a server for multiplayer games, as well as its implementation. The study was carried out in several stages: a study of design patterns for games, a study of architectures for multiplayer games, and the implementation of a client-server game. The implementation considers the demands of an online game, including aspects of performance, security and software engineering.
Summary: Noise can be defined as sound waves with no periodicity, in which the vibrations of the medium are not harmonic. High levels of noise can cause stress or even damage people's health, and are thus classified as noise pollution. Workplaces such as power plants, airports, industrial sites, construction sites and call centers, as well as everyday places like shopping malls, classrooms, avenues and streets, are some examples of where society is subject to the effects of noise pollution, such as stress, lack of sleep and, in more extreme cases, hearing loss.
In this context, a low-cost device was developed, called IoT-Noise: a simple microphone coupled to an Arduino board and a WiFi module. It collects data and transmits it over the internet to a more complex system for analysis, so that this system can make appropriate control decisions aiming at better management of the environment and greater safety, since the human ear can tolerate noise only up to a certain level without hearing loss. In addition, especially in urban areas, there are places such as hospitals and schools where the level of noise pollution is limited by law and horns are prohibited.
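To make the measurement concrete: a device of this kind typically turns a window of microphone ADC samples into a decibel figure before transmitting it. A minimal sketch follows (Python for illustration only; an Arduino would do this in C, and the reference value is a placeholder that requires calibration):

```python
# Illustrative conversion of raw microphone samples to a relative dB level.
import math

def sound_level_db(samples, reference_rms=1.0):
    """Relative sound level in dB from raw ADC samples (uncalibrated)."""
    mean = sum(samples) / len(samples)          # remove the DC offset
    rms = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
    return 20 * math.log10(rms / reference_rms) if rms > 0 else float("-inf")
```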
Summary: A game engine is a computational tool consisting of general-purpose code for game development, which provides resources for rendering and audio, artificial intelligence and physics techniques, among many other features. In general, scenario editors and scripting languages are also available. Just as the auto industry can take advantage of the design of an engine, different games can reuse the same game engine, making the development process simpler and faster. The focus of this project is the development of a rendering engine, since this is one of the most important components of a game engine. The C++ language and the OpenGL graphics package were used to build a Voxel Game Engine (or Block Engine). Height maps were generated procedurally using Perlin noise, and two types of camera were created: a first-person camera (FPS camera) using perspective projection and an isometric camera using isometric projection. In addition, the project allows interaction with the map to generate new blocks over those already generated.
Abstract: A game engine is a computational tool that consists of general-purpose code for game development, which provides features for rendering and audio, techniques for artificial intelligence and physics, among many other functionalities. In general, scenario editors and script languages are also available. Just as the automotive industry can take advantage of the design of an engine, different games can reuse the same game engine, making the development process simpler and faster. The focus of this project is the development of a rendering engine, since this is one of the most important components of a game engine. The C++ programming language and the OpenGL graphics package were used to build a Voxel Game Engine (or Block Engine). Height maps were generated procedurally using Perlin noise, and two types of camera were created: (i) a first-person camera (FPS camera) through perspective projection and (ii) an isometric camera by means of isometric projection. In addition, the project allows interaction with the map to generate new blocks on top of those already generated.
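To illustrate procedural heightmap generation, the sketch below uses 2D value noise, a simpler relative of Perlin noise (Python for brevity; the engine itself is written in C++, and the grid size and constants are arbitrary choices):

```python
# Minimal heightmap sketch using 2D value noise (illustrative only,
# not the engine's C++ implementation).
import random

random.seed(42)
GRID = [[random.random() for _ in range(257)] for _ in range(257)]

def fade(t):  # Perlin's smoothstep-like fade curve
    return t * t * t * (t * (t * 6 - 15) + 10)

def value_noise(x, y):
    xi, yi = int(x), int(y)
    tx, ty = fade(x - xi), fade(y - yi)
    a = GRID[yi][xi] + tx * (GRID[yi][xi + 1] - GRID[yi][xi])
    b = GRID[yi + 1][xi] + tx * (GRID[yi + 1][xi + 1] - GRID[yi + 1][xi])
    return a + ty * (b - a)  # bilinear blend of smoothed grid values

def heightmap(w, h, scale=16.0, max_height=32):
    return [[int(max_height * value_noise(i / scale, j / scale))
             for i in range(w)] for j in range(h)]
```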
Summary: This work presents an application for the remote control of the NAO robot, a programmable humanoid robot developed by Aldebaran Robotics. The remote control allows the user to send basic commands to the robot, and it keeps the user informed about the robot's position relative to the environment by displaying on the application's control screen, in real time, both a 3D model of the robot that reproduces its current pose and the images obtained from its vision sensors. For this purpose, we developed an application structure that encapsulates atomic actions, which control each of the robot's 25 degrees of freedom individually, in addition to predefined actions (walking forward, rotating, crouching, etc.) to be performed by the robot. To allow the application to be extended to a real robot without modifications, we implemented a high-level protocol for communication between the controller and the robot. Using this protocol, we can easily control any real biped robot that supports the protocol and extend these actions to different contexts, such as robot soccer. For validation, we built all the server-side interfaces that allow us to control a simulated version of the robot. The application and the server were implemented in Java using the libGDX library. All tests were performed in the V-REP simulated robotic environment.
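The abstract does not specify the protocol's wire format, but a high-level command protocol of this kind can be pictured as structured messages like the following (an entirely hypothetical JSON encoding, sketched in Python; LShoulderPitch is one of NAO's actual joint names):

```python
# Hypothetical message shapes for a controller-to-robot command protocol.
import json

predefined = {"type": "action", "name": "walk_forward",  # predefined action
              "params": {"distance_m": 0.5}}
atomic = {"type": "joint", "joint": "LShoulderPitch",    # one degree of freedom
          "angle_rad": 0.7}
print(json.dumps(predefined))
print(json.dumps(atomic))
```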
Summary: Sharing intercity rides is a widespread practice, especially among students at public universities. The popularization of this practice has led to strong growth of digital communities for this purpose on social networks such as Facebook. The growth in ride-sharing offers flooded the social network with information, a fact that, ironically, hindered the practice from the point of view of those looking for rides. Drawing on the concept of participatory sensing networks and seeking to improve the experience of users who share rides through Facebook, we propose an application capable of collecting the large volume of information generated by these social sensors, interpreting it, and then serving it to users in an organized way through an intelligent messenger bot.
Summary: The volume of data used on the internet today is extremely high, and much of that data is sensitive. To keep this data highly available, load balancing and high availability solutions are available on the market. Using PostgreSQL with a replication method called Streaming Replication, which creates a master-slave architecture, together with the pgpool tool, whose role is to distribute read operations among the replicated databases, it was possible to measure the time between the failure of a master instance and the recovery achieved by promoting the slave instance to master.
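For context, a minimal sketch of the configuration such a setup involves (parameter names from classic PostgreSQL 9.x streaming replication and pgpool-II; hosts and values are placeholders, and details vary by version):

```
# postgresql.conf on the master
wal_level = hot_standby
max_wal_senders = 3

# recovery.conf on the slave
standby_mode = 'on'
primary_conninfo = 'host=master.example port=5432 user=replicator'

# pgpool.conf: stream-aware load balancing of reads
master_slave_mode = on
master_slave_sub_mode = 'stream'
load_balance_mode = on
```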
Summary: Pricing problems aim to determine the prices of items to be sold in order to maximize the seller's profit. In this work we propose a new heuristic for the envy-free pricing problem with unit demand. In this problem the input is the value each consumer assigns to each item, and the output is a pricing, that is, a price for each item, together with an allocation of items to consumers. The metaheuristic used was the Biased Random-Key Genetic Algorithm (BRKGA), a genetic algorithm. In addition, two heuristics previously defined in the literature, Equal Price and Maximum Reservation Price, were used to generate solutions to be injected into the initial populations of the algorithm. The decoder was enhanced with a routine called Improvement, which improves the fitness of each chromosome by finding shortest paths in an auxiliary graph; this graph is constructed from the allocation induced by a pricing. Overall, BRKGA achieved good results in terms of solution quality, often beating the algorithms from the literature and otherwise staying very close to them: it was better in approximately 65% of the tested instances and reached at least 97% of the best value in the remaining ones. Its weakness is execution time; however, with some adjustments and small optimizations it was possible to improve these times.
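The essence of a BRKGA is that each chromosome is a vector of random keys in [0, 1] that a problem-specific decoder turns into a solution. A simplified decoder for this pricing problem might look as follows (Python, illustrative; the greedy allocation assumes unlimited supply of each item and is not the project's exact decoder):

```python
# Simplified BRKGA-style decoder: random keys -> item prices -> profit.
def decode(keys, max_prices, valuations):
    """keys: random keys in [0, 1], one per item.
    valuations: one list of item values per consumer."""
    prices = [k * p for k, p in zip(keys, max_prices)]  # scale keys to prices
    profit = 0.0
    for vals in valuations:
        utils = [v - p for v, p in zip(vals, prices)]
        best = max(range(len(prices)), key=lambda i: utils[i])
        if utils[best] >= 0:  # each consumer buys a utility-maximizing item
            profit += prices[best]
    return profit
```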
Summary: The objective of this work is to study the implementation of algorithms that solve the Graph Isomorphism problem (GI). No polynomial-time solution is known for the general problem, that is, one that does not assume any special property of the graphs; at the same time, the problem has not been proven to be NP-hard, so its complexity remains open.
Different methods for solving the problem were studied, and we implemented a method proposed by Weisfeiler and Lehman, known as k-dim WL, which is often able to solve the problem in time linear in the number of vertices but which, in the worst case, is still exponential in both space and time.
The implemented method achieved a large speedup over the trivial method, namely trying all possible vertex permutations, when executed on random graphs.
Despite its exponential worst-case complexity, the implementation was able to solve many test cases quickly, even for graphs with many vertices. In the tests, the method's worst case was observed on regular graphs, as predicted by the theoretical analysis of the algorithm.
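For reference, the 1-dimensional member of the Weisfeiler-Lehman family, color refinement, can be sketched in a few lines (Python, illustrative; k-dim WL generalizes this from vertices to k-tuples of vertices):

```python
# 1-dimensional Weisfeiler-Lehman color refinement (illustrative sketch).
def wl_colors(adj):
    """adj: dict mapping each vertex to a list of its neighbors."""
    colors = {v: 0 for v in adj}  # start with a uniform coloring
    for _ in range(len(adj)):     # at most |V| refinement rounds
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        palette = {s: i for i, s in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: palette[signatures[v]] for v in adj}
        if new_colors == colors:  # stable partition reached
            break
        colors = new_colors
    return colors

# Two graphs whose stable color multisets differ cannot be isomorphic.
```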
Summary: Auction Theory is an area of Game Theory that studies the properties of auctions and the actions of their participants. This work addresses the main concepts of Auction Theory for the sale of a single item and discusses them for the main types of auctions. We analyze the possible and optimal strategies for participants of first-price and second-price sealed-bid auctions and present the expected revenue for the organizer in each type of auction. We also study the Myerson auction as an alternative to maximize the auctioneer's revenue and its viability in relation to other types of auction.
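Two textbook facts of the kind analyzed, stated for concreteness (standard results for n bidders with values drawn i.i.d. uniform on [0, 1]): in a second-price sealed-bid auction, bidding one's true value is a dominant strategy, while the first-price symmetric equilibrium bid and the common expected revenue are

```latex
b(v) = \frac{n-1}{n}\, v,
\qquad
\mathbb{E}[\text{revenue}] = \frac{n-1}{n+1},
```

the latter holding for both formats by the revenue equivalence theorem.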
Summary: Estimating people's height is an important activity in the area of surveillance and security, facilitating the identification of individuals in applications such as biometrics and forensic science. The images generated by surveillance systems usually have quality too low for facial recognition, so other characteristics can be used to recognize individuals. This work investigates the problem of estimating the height of people in videos captured by security cameras. Several methods from the literature are analyzed, and an approach based on nonlinear regression for camera calibration is selected for implementation due to the satisfactory results it reports.
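As a highly simplified picture of regression-based calibration, one can fit a mapping from image-space height to real-world height using reference measurements. In the Python sketch below, the data, the polynomial model and the use of NumPy are assumptions standing in for the actual nonlinear camera model:

```python
# Toy regression-based calibration: pixels -> centimeters (hypothetical data).
import numpy as np

pixels = np.array([120, 160, 200, 240])   # observed heights in the image
real_cm = np.array([150, 163, 171, 182])  # known subject heights

coeffs = np.polyfit(pixels, real_cm, deg=2)  # least-squares quadratic fit
estimate = np.polyval(coeffs, 185)           # estimate for a new observation
print(f"estimated height: {estimate:.1f} cm")
```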
Summary: More and more devices are connected to the Internet, increasing the complexity of managing computer networks. The Internet of Things (IoT), as it is called, requires networks, unlike traditional models and topologies, to be dynamic and flexible. Software-Defined Networks (SDN) emerged in this scenario and represent a new model for the configuration, control and operation of computer networks. In addition to facilitating server virtualization and the orchestration of computing clouds, software-defined networks provide great flexibility and scalability, as they allow administration to be done programmatically and quality-of-service policies to be implemented. This project studies the management of software-defined networks within the environment of an OpenStack computing cloud, focusing on the development of a tool that facilitates the configuration of software-defined networks. Quality-of-service policies in these virtual networks were also analyzed; they allowed a reduction in the level of network congestion and made it possible to prioritize data flows according to demand.
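For a sense of what such quality-of-service policies look like in OpenStack, Neutron's QoS extension can cap bandwidth on a port roughly as follows (standard OpenStack CLI commands; names and values are placeholders, and exact flags vary between releases):

```
openstack network qos policy create bw-limit
openstack network qos rule create --type bandwidth-limit \
    --max-kbps 3000 --max-burst-kbits 300 bw-limit
openstack port set --qos-policy bw-limit <port-id>
```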
Summary: The recent popularization of the Internet of Things has raised great concern about vulnerabilities in cryptographic mechanisms and side-channel attacks on their implementations. This work employed simulation, using the MARSSx86 system simulator, and prototyping, using FPGA technology, on a platform similar to such devices. On these platforms, instructions were implemented that enable the secure, that is, side-channel-resistant, implementation of some of the main cryptographic algorithms in use. These tools were also used to validate, in performance and security, the different implementations of those algorithms. Experiments were performed with the elliptic-curve protocol X25519 and the AES block cipher. From the statistics obtained in the simulator and on the FPGA, it was possible to detect the vulnerabilities present in the evaluated implementations, and also to assess the impact, on security and performance, of introducing the proposed new instructions.
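A representative example of the side-channel-sensitive code such instructions target is the conditional swap at the heart of X25519's Montgomery ladder, which must not branch on secret data. The standard constant-time idiom can be sketched as follows (Python for illustration; Python integers are not truly constant-time, unlike a dedicated instruction):

```python
# Constant-time conditional swap: no branch depends on the secret bit.
def cswap(bit, a, b, width=255):
    """Swap a and b iff bit == 1, using only masking and XOR."""
    mask = (1 << width) - 1
    diff = (a ^ b) & (-bit & mask)  # all-ones mask when bit == 1, else 0
    return a ^ diff, b ^ diff
```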
Summary: A preliminary security analysis was carried out on the main email system of the Brazilian government, Expresso.Br, with an emphasis on web application security. The first part of the security analysis was carried out dynamically, through application scanners, and the second part statically, by scanning the installed code base. In addition, Expresso's SSL server configuration was verified. The application scanners AppScan, Nessus and Arachni were used; the code base scanner was implemented by the student himself, based on lists of dangerous functions. As a result, the analysis of the SSL certificate configuration revealed some vulnerabilities, and the code base scan found several uses of dangerous functions.
Abstract: A preliminary security analysis of the main Brazilian government email system was performed, focusing on web application security. The first part of the security analysis was executed dynamically through the application scanners AppScan, Nessus and Arachni, followed by static analysis through a code base scanner implemented by the student. The SSL server configuration of the main Expresso.BR installation was also assessed. We report that some vulnerabilities were found in the assessment of the SSL configuration and that several cases of dangerous functions were found in the code base scan.
Summary: Melanoma is one of the most aggressive types of skin cancer, and treatment success depends on early diagnosis. This project seeks to study and develop an automatic tool to assist the diagnosis of skin lesions. Initially, the skin lesion images are segmented with algorithms used in image analysis, such as Otsu's thresholding, Chan-Vese and Statistical Region Merging. Afterwards, characteristics are extracted using the ABCD rule and second-order image characteristics. Based on these characteristics, a decision is made by combining, through voting, the Extra Tree, Decision Tree, AdaBoost, Linear Discriminant Analysis and Random Forest classifiers. The classification reached a 77% accuracy rate on the ISIC database, which is comparable to the success of specialist dermatologists who only performed the dermatological examination and inspected the patient's history.
Abstract: Melanoma is one of the most aggressive types of skin cancer. The success of the treatment relies heavily on early diagnosis. This project aims to study and develop an automatic tool to help physicians diagnose skin lesions. Initially, skin lesion images are segmented through well-known image analysis algorithms, such as Otsu, Chan-Vese and Statistical Region Merging. Then, features are extracted by following the ABCD rule and second-order image characteristics. From these features, a decision is made by means of a voting strategy combining Extra Tree, Decision Tree, AdaBoost, Linear Discriminant Analysis and Random Forest classifiers. Classification results achieved a success rate of 77% on the ISIC dataset, which is comparable to expert dermatologists who only had access to dermatology exams and patient history.
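The voting ensemble described maps directly onto standard scikit-learn components; a minimal sketch follows (hyperparameters, feature extraction and data loading are omitted, so this is the shape of the ensemble, not the project's exact pipeline):

```python
# Voting ensemble over the five classifiers named in the abstract.
from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

clf = VotingClassifier(estimators=[
    ("extra", ExtraTreesClassifier()),
    ("tree", DecisionTreeClassifier()),
    ("ada", AdaBoostClassifier()),
    ("lda", LinearDiscriminantAnalysis()),
    ("rf", RandomForestClassifier()),
])
# clf.fit(features, labels)          # features: ABCD + second-order descriptors
# predictions = clf.predict(test_features)
```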
Summary: Genome rearrangement is a research area that studies the distance between genomes by accounting for mutation events that affect large portions of the genome. In general, the distance between two genomes is calculated considering that the cost of each operation is unitary. This work addresses the sorting of unsigned permutations and considers that the cost of each operation is the number of elements in the affected region. We compared the average costs obtained by some of the known heuristics in the area and present the results obtained by modifying some of them to consider the length-weighted problem. From this, we developed a heuristic based on the combination of those that obtained the best results; it produced lower average costs than the others analyzed in this work. In addition, we analyzed the average costs obtained by the Cycle Graph heuristic, the heuristic that obtains the best results when operation costs are unitary.
Abstract: Genome rearrangement is a research area that studies the distance between genomes through mutation events that affect large fragments of the genome. Generally, the distance between two genomes is calculated assuming that each operation has unit cost. This work addresses the unsigned sorting problem where the operation cost is given by the number of elements in the affected region. We compared the average costs of some of the known heuristics in this field, and we show the results obtained when some of them are modified to consider the length-weighted problem. From this, we developed a heuristic that combines the ones presenting the best results, and it provided lower average costs than the others analyzed in this work. Furthermore, we analyzed the average costs obtained by the Cycle Graph heuristic, which achieves the best results when operation costs are unitary.
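To make the cost model concrete: under length-weighted costs, a single reversal is charged by the size of the segment it touches, as in the sketch below (Python, illustrative; this is the cost model, not one of the compared heuristics):

```python
# Length-weighted reversal on an unsigned permutation.
def reverse(perm, i, j):
    """Reverse perm[i..j] inclusive; cost = number of affected elements."""
    perm = perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]
    return perm, j - i + 1

perm, cost = reverse([4, 3, 2, 1, 5], 0, 3)
print(perm, cost)  # [1, 2, 3, 4, 5] with cost 4
```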