Summary: This work presents the implementation of a server and a client using the HTTP/QUIC protocol for the transmission of 360º videos. We develop network rate adaptation algorithms and evaluate their performance with the aim of optimizing both transmission quality and the efficient use of network resources. The proposal contributes to the improvement of immersive video transmission techniques, exploring solutions that dynamically adjust to variations in network conditions.
In addition, we divide videos into tiles, allowing for the selective transmission and rendering of the areas of greatest interest to the user, which reduces bandwidth consumption and improves the viewing experience, even in unstable connection scenarios. This results in a streaming system that is less susceptible to fluctuations in the quality of the internet connection.
Abstract: This work presents the implementation of a server and client using the HTTP/QUIC protocol for transmitting 360º videos. We developed network rate adaptation algorithms and evaluated their performance with the aim of optimizing both transmission quality and network resource efficiency. This proposal contributes to enhancing immersive video streaming techniques by exploring solutions that dynamically adjust to variations in network conditions.
Additionally, we employed video tiling, which enables selective transmission and rendering of areas of greatest interest to the user, thereby reducing bandwidth consumption and improving viewing experience even in unstable connection scenarios. This results in a streaming system that is less susceptible to fluctuations in internet connection quality.
Summary: The evolution of technologies, especially of devices in what we call the Internet of Things (IoT), has enabled the emergence of increasingly sophisticated systems, often with critical operating requirements. The study of fault tolerance (FT) has gained considerable importance in the IoT scenario, since existing techniques for making a system fault tolerant are often not efficient or viable for a system of IoT devices. This study aims to analyze some FT techniques designed for the IoT. In addition, we present a case study of a solution that manages a parking lot using cameras that periodically take pictures to obtain the number of available spaces, and we propose a solution to make the system fault tolerant.
Summary: In this work, a machine learning solution is developed to identify the etiology of thrombi removed from patients who suffered a stroke. For this, it uses standardized methods organized in a pipeline capable of solving the problem end to end, distributed through a Python library, Slideflow. The problem and the solution are set in Kaggle's cloud data science environment, which imposes hardware constraints on development that made it impossible to submit and evaluate the model on the platform. Even so, we demonstrate how the library can build solutions in that environment through a methodology that divides the work into three notebooks. Experiments are reported along with the final solution, which was not able to distinguish the classes; comparisons are made with other solutions available on the platform, and finally we show how more sophisticated techniques and more relevant data are needed for a reliable solution.
Summary: With the popularity of agile methodologies in software engineering, behavior-driven development (BDD) has become a recurring practice in development teams. However, examining its process, we identified manual activities that hinder its day-to-day use. Model-based testing (MBT) therefore proposes to automate activities such as the generation of test scenarios and the implementation of acceptance tests in order to facilitate the use of BDD.
Summary: Verifying system behavior using software tests is an indispensable process for quality control. Behavior-Driven Development (BDD) is a software development practice that generates tests as a way of documenting the behavior of the system, but the tests are written manually. Model-Based Testing (MBT), in turn, is a method that automatically generates tests from a model of the system's behavior. This project proposes a method that combines MBT and BDD for automatic test generation and that works in incremental development projects. As the system under test, Firebase's email-and-password login functionality was used. The method applies MBT to model the system's behavior and uses the Skyfire tool to generate tests from the behavior model, following the notation used in BDD. The system's behavior was represented by a state model. Two iterations of the proposed method were performed, adding new requirements in each iteration, in order to evaluate the associated effort. At the end of the execution on the tested functionality, the proposed method was able to generate tests from a state model, documenting the functioning of the system automatically and incrementally.
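The transition-coverage idea behind this combination of MBT and BDD can be sketched in a few lines: walk every transition of a state model and emit one Given/When/Then scenario per transition. This is only an illustrative sketch; the project itself uses the Skyfire tool, and the login state model below is hypothetical.

```python
# Sketch: deriving BDD-style scenarios from a state model (illustrative only;
# the actual project uses the Skyfire tool -- the model below is hypothetical).

# Transitions of a toy login state model: (state, event) -> next state
TRANSITIONS = {
    ("LoggedOut", "submit valid credentials"): "LoggedIn",
    ("LoggedOut", "submit invalid credentials"): "Error",
    ("Error", "submit valid credentials"): "LoggedIn",
}

def scenarios(transitions):
    """Generate one Given/When/Then scenario per transition (transition coverage)."""
    result = []
    for (state, event), target in transitions.items():
        result.append(
            f"Given the system is in state {state}\n"
            f"When I {event}\n"
            f"Then the system is in state {target}"
        )
    return result

for s in scenarios(TRANSITIONS):
    print(s, end="\n\n")
```

Adding a requirement in a new iteration then amounts to adding transitions to the model and regenerating the scenarios, which is what makes the approach incremental.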
Abstract: Here we report the development of an algorithm that explores properties of U-shaped curves (“U-curves”) in cost functions of multi-task transfer learning (MTL) models. The proposed algorithm works even with the insertion of different tasks that may or may not be related to the original learning task. To find a global minimum of the described curve, a Boolean lattice is organized based on the enumeration of the search space with the weights of the group of different tasks used in training. The traversal of this lattice is carried out through a branch-and-bound procedure, in which the pruning criterion is the increase of the cost in a chain of the lattice. To benchmark the proposed algorithm against established MTL methods, we carried out computational experiments with both synthetic and real datasets. We expect the proposed algorithm to represent a relevant alternative for multi-task learning models.
Summary: This work involved the implementation and publication of an open library that provides discourse analysis techniques with temporal topic modeling. We demonstrated and validated how the library works using use cases from real discourse analysis research. Finally, we published the library under a free license and made it available in the pip package manager repository.
Summary: Civil engineering companies often need to produce reports and technical assessments for their customers. For this, it is often necessary for an engineer to carry out a technical inspection of the buildings, taking pictures and describing details about pathologies found, recommended improvements, the integrity of the structure, etc. These reports can easily exceed a hundred pages, occupying several hours of a highly qualified professional's time and making their production more expensive. This work describes the implementation of a web application that aims to automate the assembly of these documents for a company in the field (Servare Engenharia). The application was developed with the Angular 13 framework on the client side, NodeJS on the server side and MySQL as the database management system. It was hosted in the cloud using AWS and can be accessed by anyone with a registered user account. The output of the application is a PDF document that seeks to be as similar as possible to a technical report produced with conventional software, such as text or slide editors.
Summary: With the evolution of the financial market and the increase in the number of financial transactions in circulation, the need arose for tools that automate operations and resolve bottlenecks related to the extraction and manipulation of brokerage notes. In addition, the evolution of cloud computing has opened up greater possibilities for resource management through the creation of services, making it possible to scale process automation for multiple clients in the most varied market scenarios.
In this context, an algorithm was created that parses the content of the brokerage note, using different AWS services in order to arrive at an economical, fast and reliable system. In this work, the performance impact of this automation is detailed for different remote environments, seeking a critical analysis of different market scenarios and of the use of various cloud services, with a focus on performance, cost and scalability.
Summary: Self-distributed systems are capable of changing their composition at runtime to adapt to the environment they are in and better meet their requirements. For that, metrics collected during the execution of the components, such as execution time, are used. This work aims to enable the use of energy consumption metrics in self-distributed systems, so that component placement can be performed to achieve an energy-efficient configuration, and then to assess what impact this has on the system.
Summary: This work is a report of the project carried out in partnership with an extension activity of the Institute of Geosciences (coordinated by Prof. Roberto Greco), whose objective is to design and implement a system using the Internet of Things to monitor native bee hives. The hives will be installed in the Milton Santos Settlement (Americana/SP) with the main objective of pollinating the present vegetation.
After visiting the site and gathering the project requirements, a system was developed to collect data from the hive (temperature, humidity and sound) and a cell phone application to read the data. As the settlement does not have internet, the systems communicate via Bluetooth.
Abstract: This work aims to study some of the most prominent publicly available models for the text-to-image generation task. In addition, we investigated whether an ensemble of these models can achieve better results using a CLIP model as a ranker. To perform these experiments, we selected two available models that performed well on the public MS-COCO benchmark. We also experimented with Stable Diffusion, a diffusion model that became popular due to the quality of the images it generates. We evaluated each model and the ensembles on subsets of the MS-COCO and FLICKR datasets.
Summary: This is a study on creating realistic, physically based materials for 3D graphics applications such as games and animated movies. Traditional methods for recreating textures require expensive tools and expertise, and are time-consuming. A recent method, developed by Henzler et al., proposes to create realistic Bidirectional Reflectance Distribution Function (BRDF) parameters - diffuse, normal, roughness and specular maps - from a single photograph taken by a mobile phone with the device's flashlight turned on. The method learns to encode the image into a latent space and then decode it into texture maps with an unsupervised approach. The results are realistic, seamless and can be interpolated with other materials, resulting in a virtually infinite texture generator. We review this method and describe possible modifications to the approach. Using raw images, rather than processed final photographs, to benefit from their linear character can improve the realism of the materials. Furthermore, we include Blender as the rendering engine so that more complex materials can be created.
Summary: The optical effects caused by massive objects in the path of light rays have been widely studied for over 200 years. These effects, formally referred to as ``Gravitational Lenses'', are extremely important for modern cosmology and astronomy, as only through them is it possible to critically analyze the data and images collected by astronomers when observing the most diverse cosmological events in the universe around us. This work aims to describe the physical nature of this effect through a conceptual analysis of the Schwarzschild metric and to present a computational model for the synthesis of graphic images that illustrate the deflection of light rays caused by the curvature of ``space-time'' around massive objects.
Summary: Since the 1950s, methods and solutions have been proposed to enable programs to play chess. Search algorithms such as minimax are widely used, together with heuristic-based evaluation functions. However, recent studies in the area using machine learning have shown good results. The main objective of this project is to implement machine learning methods capable of learning to play chess, contributing to studies in the area through an understanding of the advantages and disadvantages of such methods. To this end, the project presents a historical context for the studies developed on the proposed problem and, to aid understanding, some concepts related to the theme. The work implements a classic algorithm widely used in the development of modern programs in the area and then implements a method that replaces part of that algorithm with a machine learning model combining a Deep Belief Network and a Siamese Neural Network. Initially, the networks are developed, with data collected and processed from two main sources: the Lichess online game platform and the computer-versus-computer games of the CCRL group. The neural network architectures are designed, their training is carried out and, finally, the results obtained by the three approaches are compared. Lastly, the benefits obtained by the machine learning models are presented, even though they do not surpass the classical method, and new proposals are suggested to advance studies in the area.
Summary: This project aims to investigate and apply computer vision and machine learning techniques for detecting and tracking eyes using a video camera, in addition to offering a prototype for gaze-based mouse control. The prototype can help users with motor disabilities interact with the computer, eliminating the need for specialized equipment. Different computer vision and machine learning approaches are evaluated and refined, seeking to improve the accuracy of eye tracking without the need for manual adjustments. Several challenges are associated with the problem, such as large head movements and environmental conditions.
Summary: This work reports on the application of the vehicle routing problem to the context of purchase deliveries with two types of vehicles.
From the work developed, it was possible to infer that the use of exact algorithms, although viable for small and medium-sized instances, becomes unfeasible as the number of delivery points grows.
Summary: Situations that involve the need to save resources or compute efficiently over paths in a given network occur frequently, and the way this type of problem is approached can be crucial to obtaining satisfactory results. It is from the study of these scenarios that minimization problems on spanners are born. This work describes the implementation of an exact algorithm for a particular problem about spanners: the minimum weight tree spanner problem. The challenge at hand is to find, given a graph G and an integer t, a spanning tree of G in which the distance between each pair of vertices is at most t times the distance between them in G. The implementation results and improvements through heuristics are then discussed and analyzed.
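The tree spanner condition described above can be stated formally. The notation below follows the usual convention for this problem (G for the input graph, T for the tree, d for shortest-path distance):

```latex
% T is a spanning tree of G = (V, E); d_X(u,v) denotes the
% shortest-path distance between u and v in graph X.
\forall\, u, v \in V:\quad d_T(u, v) \;\le\; t \cdot d_G(u, v)
% The minimum weight variant asks, among all spanning trees T
% satisfying this condition, for one of minimum total edge weight.
```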
Summary: The application of exact resolution techniques to current games is little explored. In this work, the Integer Linear Programming approach and its derived analyses are applied to deal with a very common problem in the game Path of Exile. The performance of two formulations, one based on flows and the other on cuts, is evaluated through the resolution of the Prize-Collecting Steiner Tree Problem. The LEMON graph library was used to represent the game's connection network and for exploration algorithms, and the Gurobi solver was used to solve the integer linear programming models. The results obtained for different instances of the problem were evaluated, and next steps were suggested to continue this analysis.
Summary: On the one hand, we have NVIDIA's NCCL library, in C, which provides a fast solution for communication between GPUs; among the functions it provides is the AllReduce operation, often used in neural network training. On the other, we have Google's JAX library, in Python, which provides high-performance solutions for machine learning and numerical computation using the XLA compiler. The objective of this project is to unite the two libraries by implementing a new JAX primitive for the AllReduce operation present in the NCCL library. After the implementation, a comparative analysis of the execution time of the new primitive was carried out.
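For readers unfamiliar with the operation being bound: AllReduce performs an element-wise reduction across all replicas and delivers the full result to every replica. The project binds NCCL's GPU implementation into JAX; the numpy sketch below only illustrates what the operation computes, not how NCCL or JAX implement it.

```python
# Illustration of AllReduce semantics only -- the actual work binds NCCL's
# GPU implementation into a JAX primitive; this sketch shows the math.
import numpy as np

def all_reduce_sum(buffers):
    """Element-wise sum across replicas; every replica receives the result."""
    total = np.sum(buffers, axis=0)          # reduction step
    return [total.copy() for _ in buffers]   # broadcast step

# Three "GPUs", each holding a gradient buffer of the same shape:
grads = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
out = all_reduce_sum(grads)
# Every replica now holds the sum [9., 12.]
```

In data-parallel training this is exactly the step that sums per-GPU gradients so all replicas apply the same update.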
Summary: This paper discusses the evaluation of transmission rate adaptation policies for 360º video sessions. It is intended to understand the main transmission rate adaptation algorithms for regular videos and how to explore current algorithms for 360º videos.
The technologies explored in this work are the QUIC network protocol, which allows multi-stream connections and has become popular over the last few years; 360º videos; and widespread transmission rate adaptation algorithms.
A qualitative analysis was made of how these traditional video technologies behave in 360° video scenarios. It can be concluded that with some adaptations, it is possible to use the presented algorithms.
Summary: TKGEvolViewer is a tool for the interactive visualization of the evolution of knowledge graphs. This software allows the graphical exploration of Temporal Knowledge Graphs (TKGs) from metric values encoded in TKG structures. The tool provides a way to explore these graphs in order to understand them and the metrics about the encoded concepts. Using the tool, people can conduct visual analysis of TKGs by choosing a metric.
In this work, we refined the tool through the design and implementation of an interactive Wizard module, which supports users in conducting guided analyses. The solution also allows advanced, free exploration of the structure of knowledge graphs. The Wizard allows analyses to be built from answers to different, previously created questions that revolve around the chosen metric.
Summary: Emotions are physiological responses that can be captured by sensors. In this work we used data collected from an electroencephalogram (EEG), which evaluates the electrical activity of the brain. This activity is separated into frequency bands (Alpha, Beta, Theta, Delta and Gamma) and can be analyzed to understand the functioning of the examined individual's brain. We conducted a data collection workshop at the State University of Campinas with 21 volunteers who were exposed to different sensory experiences through vision, hearing and touch. Our aim was to induce four affective states (happiness, sadness, fear and disgust) while using equipment that collects EEG data. The data were analyzed and pre-processed to allow the training of a machine learning model for classification. The investigated algorithm was the Support Vector Machine (SVM), which has proved effective in other EEG data classification implementations and on smaller datasets. The best results were obtained for the identification of `happiness' and `fear' based on data from all volunteers summarized by the average of ten measurements, reaching 59% accuracy and 59% F1-score.
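The band separation mentioned above can be sketched with a simple spectral computation: split the signal's power spectrum into the classical EEG bands and average the power in each. The band limits below are the commonly used ones and the feature is a simplification; the thesis's exact pre-processing is not reproduced here.

```python
# Sketch of EEG band-power features: average spectral power per band.
# Band limits in Hz are the commonly cited ones; the actual study may differ.
import numpy as np

BANDS = {"Delta": (0.5, 4), "Theta": (4, 8), "Alpha": (8, 13),
         "Beta": (13, 30), "Gamma": (30, 45)}

def band_powers(signal, fs):
    """Mean spectral power per EEG band for a 1-D signal sampled at fs Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Synthetic 10 Hz sine (inside the Alpha band) sampled at 128 Hz for 4 s:
fs = 128
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)
p = band_powers(x, fs)
print(max(p, key=p.get))  # → Alpha
```

Feature vectors of this kind (one value per band per electrode) are what a classifier such as an SVM is then trained on.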
Summary: Behind emotional expressions, a complex phenomenon is triggered in the mind and body of individuals, involving neurological and physiological activities. The latest technological advances demonstrate different ways of impacting and affecting human emotions, such as via visual, auditory or tactile experiences. Our research involves the study of a workshop to capture data on affective aspects. We explored the participants' multi-sensory experience in capturing data via an electroencephalogram device. Our research was evaluated through the perceptions of the participants who used and evaluated the experience in the proposed environment. In our analysis, we explored the use of recorded video of participant interactions in the analysis of results.
Summary: Scene text detection has attracted great interest in recent years. The challenge of this task is to design detectors capable of handling a wide range of variability, such as font size, font style, color and complex backgrounds, among other factors. When dealing with multilingual texts, current detection proposals have difficulty detecting different languages with the same performance across them. This work presents a technique to optimize individual language detections. First, we compare two model building strategies, using convolutional neural networks, to detect multilingual textual elements in images: (i) a detection model built in a multilingual training scenario and (ii) a detection model built in a language-specific training scenario. From this comparison, we propose a fusion algorithm over the models trained on specific languages, so that our hypothesis can be evaluated in a test context with all languages. The experiments designed in this work indicate that the language-specific model outperforms the detection model trained in a multilingual scenario. With the model fusion algorithm, we obtained a final improvement of 28.21% and 11.80% in terms of precision and F1-measure, respectively.
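A common way to fuse detections from several per-language models is to pool all predicted boxes and greedily suppress overlaps by intersection-over-union, keeping the highest-scoring box in each cluster. The thesis's exact fusion algorithm is not reproduced here; this is a generic NMS-style sketch of the idea.

```python
# Generic score-based fusion of boxes from multiple detectors (NMS-style).
# This is an illustrative sketch, not the thesis's specific algorithm.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fuse(detections, iou_thr=0.5):
    """detections: (box, score) pairs pooled from all models; greedy fusion."""
    kept = []
    for box, score in sorted(detections, key=lambda d: -d[1]):
        if all(iou(box, k[0]) < iou_thr for k in kept):
            kept.append((box, score))
    return kept

# Two language-specific models detect the same word with shifted boxes,
# plus one detection elsewhere in the image:
dets = [((0, 0, 10, 10), 0.9), ((1, 0, 11, 10), 0.8), ((50, 50, 60, 60), 0.7)]
print(len(fuse(dets)))  # → 2
```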
Summary: The main proposal of this work is the creation of an iOS application to recognize physical falls using a cell phone (smartphone), which detects acceleration and rotation signals from the accelerometer and gyroscope sensors. Initially, the report describes the main activities for creating a machine learning model. These steps involve data collection, organization of the collected data and, finally, the training of machine learning models. From the trained model, it was possible to predict human activities from sensor data in real time. The report then compares a model using the Decision Tree technique with a recurrent neural network variant known as Long Short-Term Memory (LSTM). With this, the report emphasizes that, in cases where the data are sequential and therefore related to each other (both in order and in clustering), recurrent neural networks perform better, since they learn the existing relationships between the data.
Summary: This final graduation project aims to study and apply techniques from the area of image processing and computer vision to address the problem of classifying bird species. For this, the database selected for the case study was the BIRDS 400 - Species Image Classification, available in the Kaggle repository and used to train models built during the project. To carry out the work, image processing and analysis techniques were investigated and applied to the classification problem. In addition, transfer learning and data augmentation techniques were explored. Experimental results were collected to demonstrate the effectiveness of the built models.
Summary: Crowd counting through images is a field of research of great interest due to its various applications, such as monitoring security camera footage and urban planning, in addition to the possibility of using these models to count other objects in other problem domains. In this work, a model (MCNN-U) is proposed, based on Generative Adversarial Networks (GANs) with the Wasserstein cost and on multi-column neural networks, to obtain better estimates of the number of people. The model was evaluated using two crowd-counting databases, UCF-CC-50 and ShanghaiTech. On the first database, the reduction in mean absolute error was greater than 30%, while on the second the gains in effectiveness were smaller.
Summary: Video Frame Interpolation (VFI) is a Visual Computing task that seeks to increase the frame rate of a video by producing new images intermediate to those already in the sequence, creating content that is more visually pleasing to viewers. Although little known by the general public, this is an extremely important field of study, with numerous practical applications, such as content-based video retrieval, slow-motion content generation, facilitation of the video editing process and even the intermediation of video streaming services. The objective of this work is to study and evaluate the quantitative and qualitative performance of three state-of-the-art Deep Learning methods in the area: RIFE (Real-time Intermediate Flow Estimation) [], XVFI (eXtreme Video Frame Interpolation) [] and CDFI (Compression-Driven Frame Interpolation) []. Although the RIFE and XVFI models obtained the best performance in the metrics used during the tests, and the RIFE model presents the best nominal results in its article, all architectures showed good results when generating intermediate frames for different video sequences, with any missing details often imperceptible to the human eye during display of the complete content. They can be used as applications for efficient execution on general-purpose computers, although the training process demands high computational power from GPUs.
Summary: With the expansion and popularization of the Internet, the fog computing architecture has become increasingly used. It employs mini-servers (cloudlets) responsible for allocating resources for storing and processing user data, in the form of virtual machines or containers, at the edge of the network.
Even so, users may experience latencies and connection drops. To improve this, it is necessary to adapt the allocation of these network resources according to the scenario: more or fewer users, with different network usage and movement patterns. The purpose of this work is to investigate whether changes in resource migration policies between cloudlets can decrease the latencies and connection losses experienced by users. The validation of this proposal was made using the MobFogSim simulator.
Abstract: With the expansion and popularization of the Internet, the fog computing architecture has become increasingly used. It employs mini-servers (cloudlets) responsible for allocating resources for storing and processing user data, in the form of virtual machines or containers, at the edge of the network.
Even so, users may experience latencies and connection drops. To improve this, it is necessary to adapt the allocation of these network resources according to the scenario: more or less users, with different network uses and movement. The purpose of this work is to investigate whether changes in resource migration policies between cloudlets can reduce latencies and connection losses experienced by users. The validation of this proposal was done using the MobFogSim simulator.
Summary: University students are always engaged in projects, whether they come from academic courses or not. Most are complex projects that involve large numbers of people and call for specific skills and interests. Thus, a question arises: how to find projects and people? This work consists of the implementation of Project Match, a platform on which profiles interested in projects (students, professors, entities outside the university) can find them through a social network dedicated to this purpose, just as creators and project managers can find people to add to their workforce. At the end of development, it was possible to deliver a platform that implements most of the initially defined requirements and thus has applicability in real contexts. In addition, the platform, available as a React web application, consumes a REST API implemented in a microservices architecture, presenting potential gains in scalability and availability.
Summary: This technical report presents a tool to track, monitor and improve the use of natural resources on the UNICAMP campus in Barão Geraldo. The differential of the mechanism is its ability to evolve, to scale and to present better solutions to sustainability problems. Data and information available on the dojot platform, which is fed by IoT devices spread across the Unicamp campus in Campinas, are analyzed. Measurements were examined for different locations, times, days of the week and months in order to verify the results and thus infer, with greater precision, areas of high consumption of natural resources, and then apply the tool.
Summary: This work seeks to evaluate the execution performance of the Node.js implementation of some native JavaScript functions belonging to the String and Array objects. The String functions tested were match, substring and toUpperCase; the Array functions were concat, find, reverse, shift, unshift, sort and push.
To perform the tests, we used the benchmark.js library, which executes the code blocks under test over the provided datasets a number of times determined via parameter (in this work, around 90 executions per dataset). At the end of the test, the library provides a set of relevant statistics on the program's execution time, such as the average execution time, the margin of error of the average time and the relative error percentage. For this work, we used only the average execution time.
We used datasets of varying sizes, with both random values and values specially designed to test edge cases of the algorithms. In addition to the native algorithms, we tested manual implementations of the same algorithms in order to compare the performance of the native versions with the custom ones.
With the data in hand, performance graphs were produced for the algorithms, using as data points the execution time needed to complete each run of the program on each dataset.
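The report's measurements use benchmark.js under Node.js; as a language-neutral illustration of the same methodology (run each code block many times per dataset and keep the mean execution time), here is an equivalent sketch with Python's timeit module. The function names and run count below are illustrative only.

```python
# Methodology sketch only: repeated timed executions per dataset, averaged.
# The actual report uses benchmark.js under Node.js, not Python.
import timeit

def mean_time(fn, runs=90):
    """Average wall-clock time of fn over `runs` executions, in seconds."""
    return timeit.timeit(fn, number=runs) / runs

# Datasets of varying sizes, as in the study:
datasets = ["a" * n for n in (10, 1_000, 100_000)]
for data in datasets:
    t = mean_time(lambda: data.upper())  # the operation under test
    print(f"len={len(data)}: {t:.2e} s")
```

Plotting the mean time against dataset size then gives the performance graphs described above.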
Summary: Contemporary systems face ever-increasing demand and complexity. The main factors behind this increase are the heterogeneity and volatility present in these systems; an example of this volatility is a sudden increase in the number of users or of services sharing the same infrastructure. Thus, some alternatives have been analyzed and applied to meet this high demand. One of the most widespread is cloud computing, since it has resources to deal with scalability and adaptability, such as containers and container orchestrators, which can scale both vertically and horizontally. But due to the low tolerance of some applications to high latency and communication delays, the performance of this solution can be compromised, making room for new solutions such as edge computing. Both technologies are best exploited by stateless systems, whose implementation is less complex. Stateful systems, although more complex, also offer higher performance.
With this scenario in mind, this project studies the performance of a hybrid system (partly in the cloud, partly at the edge) with transparent state management, under different consistency models, in the system distribution process. Through an empirical methodology, several scenarios and compositions are explored in order to evaluate the best composition for each scenario.
Summary: With the growing popularity of competitive games, ways to improve the user experience are crucial to attracting players and creating products. One of the main forms of improvement is delay compensation. In this work, the implementation of a delay compensation algorithm in a third-person shooter game, using the Unity platform, is described. Subsequently, the impact of this algorithm on the hit rate of shots is analyzed, both in local and remote environments (Cloud), and the influence of these algorithms on the final experience is demonstrated.
Abstract: With the increasing popularity of competitive games, ways of improving the user experience are crucial to attracting players and creating products. One of the main ways of improving this experience is delay compensation, also known as lag compensation. In this work, I describe the implementation of a delay compensation algorithm in a third-person shooter game, using the Unity platform. Subsequently, the impact of this algorithm on the hit rate of shots is analyzed, both in local and remote (cloud) environments, and the influence of these algorithms on the final experience is demonstrated.
Summary With the growing popularity of competitive games, ways to improve the user experience are crucial to attracting players and creating products. One of the main forms of improvement is lag compensation. This work describes the implementation of a lag compensation algorithm in a third-person shooter game, using the Unity platform. Subsequently, the impact of this algorithm on the hit rate of shots is analyzed, both on local and remote (cloud) servers, and the influence of these algorithms on the final experience is demonstrated.
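The lag compensation technique described in this abstract is commonly realized as server-side state rewind: the server validates each shot against where the target was when the shooter fired, not where it is now. The following is a minimal illustrative sketch with hypothetical names, not the author's Unity implementation:

```javascript
// Minimal sketch of server-side lag compensation by state rewind.
// All names are illustrative; the actual work was implemented in Unity.

// The server keeps a short history of each player's position over time.
class PositionHistory {
  constructor(maxEntries = 64) {
    this.entries = [];          // { time, x, y } snapshots, oldest first
    this.maxEntries = maxEntries;
  }
  record(time, x, y) {
    this.entries.push({ time, x, y });
    if (this.entries.length > this.maxEntries) this.entries.shift();
  }
  // Rewind: return the snapshot closest to the requested time.
  at(time) {
    let best = this.entries[0];
    for (const e of this.entries) {
      if (Math.abs(e.time - time) < Math.abs(best.time - time)) best = e;
    }
    return best;
  }
}

// When a shot arrives, the server checks it against where the target was
// at (shot.time - shooterLatency), not where the target is now.
function validateHit(targetHistory, shot, hitRadius = 1.0) {
  const rewindTime = shot.time - shot.shooterLatency;
  const past = targetHistory.at(rewindTime);
  return Math.hypot(past.x - shot.x, past.y - shot.y) <= hitRadius;
}

// Example: a target moving right; the shooter aimed at an old position.
const history = new PositionHistory();
for (let t = 0; t <= 100; t += 10) history.record(t, t / 10, 0);
// Shot received at server time 100, shooter latency 50 ms, aimed at x=5.
console.log(validateHit(history, { time: 100, shooterLatency: 50, x: 5, y: 0 })); // → true
```

Without the rewind, the same shot would be compared against the target's current position (x=10) and miss, which is exactly the effect on hit rate that the work measures.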
Summary: The ability to adapt to different situations is a key capability of modern distributed systems. The advent of the internet in our daily lives has brought a considerable increase in the heterogeneity of distributed systems. One widely used way to address this problem is cloud computing, which in recent years has become a key segment for technology giants, providing on-demand scalability efficiently and quickly. Other technologies bring even more flexibility to systems, such as the ability to swap components at runtime and application containerization. Aiming, through an empirical methodology, to explore the challenges and impacts of using these latest technologies in the context of transparent state management, this project presents a study of their use under different consistency models within the system distribution process.
Abstract: Machine learning is of increasing importance to society. However, for it to continue to evolve, more data from IoT devices and more computing resources are needed, which has become a bottleneck for the scalability of this technology.
In search of alternatives to address this scarcity of resources, researchers have studied migrating away from the original machine learning configuration, in which a central server receives and stores all data from edge devices and builds a model. A federated architecture, a form of distributed machine learning, distributes decision making in a way that does not require sharing user data: instead, the server shares its learning model, receives the trained local models, and aggregates them.
However, this proposal still has some relevant limitations. Thus, this project presents a case study of a way to optimize the selection of the users who will participate in model training, in order to avoid unnecessary processing costs, and verifies whether this approach can bring significant differences for applications of federated learning on IoT devices.
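The aggregation step of the federated architecture described above can be sketched as a sample-weighted average of the clients' locally trained model weights, in the style of the well-known FedAvg algorithm. All names below are illustrative assumptions, not the project's actual code:

```javascript
// Minimal sketch of FedAvg-style aggregation: the server averages the
// locally trained model weights, weighted by each client's sample count,
// so no raw user data ever leaves the devices.
// All names are illustrative assumptions, not the project's actual code.

function federatedAverage(clientUpdates) {
  // clientUpdates: [{ weights: number[], numSamples: number }, ...]
  const totalSamples = clientUpdates.reduce((s, c) => s + c.numSamples, 0);
  const dim = clientUpdates[0].weights.length;
  const global = new Array(dim).fill(0);
  for (const { weights, numSamples } of clientUpdates) {
    const factor = numSamples / totalSamples;
    for (let i = 0; i < dim; i++) global[i] += factor * weights[i];
  }
  return global;
}

// Example: two clients; the one with more data pulls the average toward it.
const updates = [
  { weights: [0.0, 2.0], numSamples: 100 },
  { weights: [1.0, 0.0], numSamples: 300 },
];
console.log(federatedAverage(updates)); // → [ 0.75, 0.5 ]
```

Client selection, the optimization the project studies, amounts to choosing which subset of clients contributes updates to this average in each round, trading processing cost against model quality.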
Summary: Electronic devices have become increasingly connected and intelligent, generating large amounts of data. Federated Learning makes it possible to explore this large distributed data set intelligently while maintaining data privacy, since the data does not need to leave the device that generated it, contrary to what usually occurs in traditional machine learning techniques. In this work, we propose using a Hierarchical Federated Learning model in order to reduce the communication problems present in traditional Federated Learning. First, we describe the implementation of a framework that makes it possible to create this hierarchical network, and then we present initial results of performance tests comparing the proposed model with the traditional one.
Summary: With the growth and popularization of the internet in recent decades, there arose a need for technologies that allow content to be obtained quickly and with good quality, regardless of where it is accessed from or where it is hosted. To achieve these goals, networks specializing in the delivery of online content, known as content delivery networks (CDNs), began to be deployed. This work aims to analyze the behavior and performance of a CDN and its impact on cloud systems that need to be accessed from several different regions. For this analysis, we performed tests with files hosted in multiple regions, monitored the network time required to obtain these files directly, and then compared these results with those obtained when accessing them through the CDN. Based on these results, we determined that the CDN may add a small overhead when the access and the file are in the same region; on the other hand, when the files are hosted in a region different from the access region, the CDN provides a significant gain in access time.
Instituto de Computação :: State University of Campinas
Av. Albert Einstein, 1251 - Cidade Universitária Zeferino Vaz • 13083-852 Campinas, SP - Brazil • Phone: (19) 3521-5838