---
title: Benchmarking
redirect_from: /benchmarking/
permalink: /docs/concepts/benchmarking/benchmarking/
---

## Table of contents

* [Introduction to Benchmarking](#introduction-to-benchmarking)
* [Our benchmarking tool framework](#our-benchmarking-tool-framework)
* [Trace Framework Abstraction](#trace-framework-abstraction)
* [Shadow Builder](#shadow-builder)
* [Binary generation for instrumented code](#binary-generation-for-instrumented-code)
  * [Receiving inputs](#receiving-inputs)
  * [Parse and Check](#parse-and-check)
  * [TFA Execution](#tfa-execution)
  * [Compilation](#compilation)
* [Steps to start benchmarking](#steps-to-start-benchmarking)


## Introduction to Benchmarking

Developing a working and stable application, from the first scribbles to the
final executable binary, is a long and hard task. During this process,
developers may come across stability and performance issues. In addition, some
specified QoS requirements might be difficult to quantify. Solving those
problems without the proper tools can be a frustrating, tedious task that
reduces developer efficiency. An adapted benchmarking tool can overcome these
development obstacles and reduce development time. There are different KPIs
(Key Performance Indicators) that one might be interested in. In the framework
of micro-ROS, the KPIs can be freely chosen by the developer. In this way, the
benchmarking tool remains flexible and allows the community to constantly add
support for many different KPIs.

The problems we want to tackle are:

 * Many benchmarking tools already exist, each of them targeting different KPIs.
 * Different platforms must be supported (Linux/NuttX/bare metal, etc.).
 * There is too little time and too few resources to code a benchmarking tool for each.
 * Avoid code overhead: keep code clarity.
 * Avoid execution overhead: do not make execution slower when benchmarking.

## Our Benchmarking tool framework

The benchmarking tool under development provides a framework that allows
developers to create their own benchmarking tools. Each part a developer wants
to benchmark can be added as a plugin using the provided framework. In this
way, plugins can be shared, which improves re-usability as much as possible.

| 51 | + |
## Trace Framework Abstraction

The Shadow Builder alone only parses tagged comments from the application and
passes them along to the Trace Framework Abstraction (TFA) Core. The TFA core
is aware of the available plugins, all the plugins' capabilities and the
platform target. The process goes as explained below:

 * The line containing the functionality Benchmarking::XX::YY will be checked
 against all the available plugins.
 * Plugins that are capable of handling the functionality will respond with a
 piece of code that will replace the tagged line.
 * Then the output file will be added to a folder corresponding to the platform
 type and benchmarking type.

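The last step above can be sketched as placing each generated file under a
directory keyed by platform and benchmarking type. The layout below is a
hypothetical illustration, not the tool's actual output structure:

```python
from pathlib import Path

def output_path(root, platform, benchmark_type, filename):
    """Destination of an instrumented source file, grouped by platform
    and benchmarking type (directory layout is illustrative only)."""
    return Path(root) / platform / benchmark_type / filename
```

For example, `output_path("out", "Linux", "execution_time", "main.c")` yields
`out/Linux/execution_time/main.c`.
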
Being generic is the key for this benchmarking tool. The plugins, on the
contrary, bring the specific implementation needed to benchmark a specific
platform. Every plugin will provide information as requested by the parser:

 * Provide a list of supported platforms.
 * Provide a list of functions that are handled.
 * Provide the code snippets that will be added for benchmarking.
 * Provide a list of patches and/or patch code.
 * Optionally, provide an end script to run and execute the benchmarks.

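As an illustration, a plugin fulfilling this contract might look like the
following minimal Python sketch. All names here (class, methods, snippet
functions) are hypothetical and not the actual micro-ROS plugin API:

```python
class TimerBenchmarkPlugin:
    """Hypothetical TFA plugin measuring execution time (illustration only)."""

    # Platforms this plugin can instrument.
    supported_platforms = ["Linux", "NuttX"]

    # Benchmarking::XX::YY functionalities this plugin handles.
    handled_functions = ["Benchmarking::Timer::Start", "Benchmarking::Timer::Stop"]

    def snippet_for(self, function):
        """Return the code snippet that replaces a handled tag, or None."""
        snippets = {
            "Benchmarking::Timer::Start": "bench_timer_start();",
            "Benchmarking::Timer::Stop": "bench_timer_stop();",
        }
        return snippets.get(function)

    def patches(self):
        """Patches to apply to the build (none for this simple plugin)."""
        return []

    def end_script(self):
        """Optional script that runs the benchmarks afterwards."""
        return "#!/bin/sh\n./run_timer_benchmark.sh"
```
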
## Shadow Builder

This section introduces some concepts related to the Shadow Builder (SB).

The Shadow Builder is a tool that transparently instruments the code to
benchmark. The tool outputs "instrumented code" that is later compiled like
normal code. The following steps describe the Shadow Builder's process flow:

 * Get the configuration file from the user (Benchmarking Configuration File).
 * Get the appropriate sources.
 * Execute the Trace Framework Abstraction configuration file.
 * Parse the source files, injecting code where needed.
 * Compile the targeted binary for the different platforms.
 * If needed, depending on what type of benchmark is undertaken, compile
 another target binary for benchmarking.

The SB (Shadow Builder) is meant to be as transparent as possible for the user,
and if benchmarking is not activated, it should be bypassed.

The SB is in charge of getting the path/git repository of the source code that
needs to be benchmarked. The sources are specified by the user in the
benchmarking configuration file.

In order to inject code, there are tools that allow this; for instance, the
Clang AST tooling allows injecting code.


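The tag-replacement idea can be illustrated with a simplified, regex-based
sketch. The real tool would rely on Clang AST tooling rather than regular
expressions; everything below, including the snippet names, is illustrative:

```python
import re

# Matches tags of the form /*Benchmarking::XX::YY*/ in a source line.
TAG_RE = re.compile(r"/\*(Benchmarking::\w+::\w+)\*/")

def instrument(source, snippets):
    """Replace every Benchmarking tag with the snippet a plugin provided.

    `snippets` maps a tag such as "Benchmarking::Timer::Start" to the code
    that should be injected in its place. Lines without a tag, or with a tag
    no plugin handles, pass through unchanged.
    """
    out = []
    for line in source.splitlines():
        m = TAG_RE.search(line)
        if m and m.group(1) in snippets:
            line = TAG_RE.sub(snippets[m.group(1)], line)
        out.append(line)
    return "\n".join(out)
```

For example, instrumenting a source containing `/*Benchmarking::Timer::Start*/`
with the snippet map `{"Benchmarking::Timer::Start": "bench_timer_start();"}`
replaces the comment with the call.
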
## Binary generation for instrumented code

Binary generation is the process of compiling the source code. In order to
benchmark, it is necessary to instrument the code before compiling it. The
code will be instrumented in a way that is transparent for the
programmer/user: a configuration file provided by the programmer will be
parsed, and code will be injected as described in that configuration file.

### Receiving inputs

The binary generation pipeline receives two inputs to work with:

 * The benchmarking configuration file.
 * The source code to benchmark.

In short, the configuration describes:

 * What is benchmarked (sources).
 * Where to benchmark.
 * What type of benchmark to run.
 * Optionally, against what baseline to compare (baseline sources).

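A configuration carrying this information could look like the following parsed
representation. The keys and values are hypothetical, chosen only to mirror
the four items above, and are not the actual file format:

```python
# Hypothetical parsed benchmarking configuration (illustrative keys only).
benchmark_config = {
    # What is benchmarked: path or git repository of the sources.
    "sources": "https://github.com/example/app_under_test.git",
    # Where to benchmark: the targeted platforms.
    "platforms": ["Linux", "NuttX"],
    # What type of benchmark to run.
    "benchmarks": ["execution_time", "memory_usage"],
    # Optional baseline sources to compare against.
    "baseline": None,
}

def validate(config):
    """Return the list of mandatory fields that are missing or empty."""
    required = ["sources", "platforms", "benchmarks"]
    return [k for k in required if not config.get(k)]
```
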
### Parse and Check

Once the inputs are received, the **Shadow Builder** parses the configuration
file. From the configuration file, the Shadow Builder gets:

 * The different benchmarks to be performed.
 * The targeted platforms.

In addition to parsing, the Shadow Builder is in charge of checking
capabilities and consistency between the configuration file and the different
TFA plugins registered in the TFA module.

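That consistency check amounts to matching the configuration's requests
against the registered plugins' capabilities. A minimal sketch, with
hypothetical attribute names:

```python
class ExamplePlugin:
    """Hypothetical plugin registered in the TFA module (illustration only)."""
    supported_platforms = ["Linux"]
    benchmark_types = ["execution_time"]

def check_capabilities(config, plugins):
    """Return the (benchmark, platform) pairs no registered plugin can serve.

    `plugins` is a list of objects exposing `supported_platforms` and
    `benchmark_types`, as registered in the TFA module.
    """
    unserved = []
    for bench in config["benchmarks"]:
        for platform in config["platforms"]:
            if not any(bench in p.benchmark_types and
                       platform in p.supported_platforms
                       for p in plugins):
                unserved.append((bench, platform))
    return unserved
```

An empty result means every requested benchmark/platform combination is
covered by at least one plugin; otherwise the check fails before any code is
generated.
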
### TFA Execution

Once parsed and checked against the TFA module's capabilities, the Shadow
Builder is in charge of translating the configuration into source code. The
translated sources are produced in cooperation with the TFA module. The
detailed steps of the TFA can be found here. At the end of this step, the TFA
will have generated the new forged source code, ready for compilation. In
addition to the patched source code, the TFA generates the scripts that will
run the benchmarks.

### Compilation

Compilation happens for every kind of benchmark and every targeted platform.
Depending on the kind of benchmark being executed, there will be one or more
binaries per benchmarking session. The number of binaries generated also
depends on which plugins the user provides to the Shadow Builder. The Shadow
Builder retrieves the capabilities of the plugins and the requests from the
developer, matches them, and generates software according to the matches.


## Steps to start benchmarking

The Shadow Builder is executed as follows:

 * Software sources are passed to the Shadow Builder.
 * The sources are parsed, and upon comments containing /*Benchmarking::XX::YY*/
 (a tag), the code line is passed to the Trace Framework Abstraction module.
 Using comments is preferable → no includes needed.
 * All plugins that registered the Benchmarking::XX::YY functionality with the
 TFA will return a piece of code that will be added to the source.
 * Once everything is parsed, the Shadow Builder compiles for all the different
 platforms requested either by the plugins or by the user configuration.