# mORMot version of The One Billion Row Challenge

## mORMot 2 is Required

This entry requires the **mORMot 2** package to compile.

Download it from https://github.com/synopse/mORMot2

It is best to clone the current state of the mORMot 2 repository, or to get the latest release.

## Licence Terms

This code is licenced by its sole author (A. Bouchez) under MIT terms, for pedagogical purposes.

I am very happy to share decades of server-side performance coding techniques using FPC on x86_64. ;)

## Presentation

Here are the main ideas behind this implementation proposal:

- **mORMot** makes cross-platform and cross-compiler support simple (e.g. `TMemMap`, `TDynArray.Sort`, `TTextWriter`, `SetThreadCpuAffinity`, `crc32c`, `ConsoleWrite` or command-line parsing);
- Memory-map the entire 16GB file at once (so it won't work on a 32-bit OS, but it reduces syscalls);
- Process the file in parallel using several threads (configurable, with `-t=16` by default);
- Each thread is fed 64MB chunks of input (because thread scheduling is unfair, it is inefficient to pre-divide the whole input file size by the number of threads);
- Each thread manages its own data, so there is no lock until the thread is finished and its data is consolidated;
- Each station's information (name and values) is packed into a record of exactly 64 bytes, with no external pointer/string, to match the CPU L1 cache line size for efficiency;
- Use a dedicated hash table for the name lookup, with a direct crc32c SSE4.2 hash - when `TDynArrayHashed` is involved, it requires a transient name copy on the stack, which is noticeably slower (see the last paragraph of this document);
- Store values as 16-bit or 32-bit integers (temperature multiplied by 10);
- Parse temperatures with dedicated code (which expects single-decimal input values);
- No memory allocation (e.g. no transient `string` or `TBytes`) nor any syscall is made during the parsing process, to reduce contention and ensure the process is only CPU-bound and RAM-bound (we checked this with `strace` on Linux);
- The Pascal code was tuned to generate the best possible asm output on FPC x86_64 (which is our target), with no SIMD involved;
- Some dedicated x86_64 asm has been written to replace the mORMot `crc32c` and `MemCmp` general-purpose functions and gain a final few percent;
- Can optionally output timing statistics and the hash value on the console, to debug and refine settings (with the `-v` command-line switch);
- Can optionally set each thread's affinity to a single core (with the `-a` command-line switch).
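
The "temperature multiplied by 10" idea above can be sketched as follows. This is a Python analogue of what the Pascal parser does (the real entry is Object Pascal; `parse_tenths` is an illustrative name, not from the mORMot code), relying on the 1brc guarantee of an optional sign, one or two integer digits, and exactly one decimal digit:

```python
def parse_tenths(s: bytes) -> int:
    """Parse a temperature like b'-12.3' into tenths (-123).

    Assumes the 1brc format: optional '-', one or two integer
    digits, a '.', and exactly one decimal digit.
    """
    neg = s[0] == ord('-')
    if neg:
        s = s[1:]
    # s is now b'D.D' or b'DD.D'; 48 is ord('0')
    if len(s) == 3:                                  # 'D.D'
        v = (s[0] - 48) * 10 + (s[2] - 48)
    else:                                            # 'DD.D'
        v = (s[0] - 48) * 100 + (s[1] - 48) * 10 + (s[3] - 48)
    return -v if neg else v

assert parse_tenths(b"-12.3") == -123
assert parse_tenths(b"0.0") == 0
```

Storing tenths as integers avoids any floating-point work in the hot loop, and keeps min/max small enough for a 16-bit field.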

The "64 bytes cache line" trick is quite unique among all implementations of the "1brc" I have seen in any language - and it does make a noticeable difference in performance. The L1 cache is well known to be the main bottleneck for any efficient in-memory process. We are very lucky the station names are just small enough to fit in no more than 64 bytes, with min/max values reduced to a 16-bit smallint - resulting in a temperature range of -3276.8..+3276.7, which seems fair on our planet according to the IPCC. ;)
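
To make the cache-line packing concrete, here is a hypothetical 64-byte layout checked with Python's `ctypes` (field names and widths are illustrative, not the actual mORMot record):

```python
import ctypes

class Station(ctypes.Structure):
    """Hypothetical 64-byte station record: name and stats packed
    into one L1 cache line, with no external pointer or string."""
    _fields_ = [
        ("name",  ctypes.c_char * 52),  # station name, inlined
        ("count", ctypes.c_uint32),     # number of measurements
        ("total", ctypes.c_int32),      # sum of tenths (for the mean)
        ("min",   ctypes.c_int16),      # minimum, in tenths of a degree
        ("max",   ctypes.c_int16),      # maximum, in tenths of a degree
    ]

# 52 + 4 + 4 + 2 + 2 = 64 bytes: exactly one cache line
assert ctypes.sizeof(Station) == 64
```

One lookup therefore touches a single cache line, instead of chasing a pointer to a heap-allocated name.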

## Usage

If you execute the `mormot` executable without any parameter, it will give you some hints about its usage (using mORMot `TCommandLine` abilities):

```
ab@dev:~/dev/github/1brc-ObjectPascal/bin$ ./mormot
The mORMot One Billion Row Challenge

Usage: mormot <filename> [options] [params]

 <filename> the data source filename

Options:
  -v, --verbose       generate verbose output with timing
  -a, --affinity      force thread affinity to a single CPU core
  -h, --help          display this help

Params:
  -t, --threads <number> (default 16)
                      number of threads to run
```
We will use these command-line switches for local (dev PC) and benchmark (challenge HW) analysis.

## Local Analysis

On my PC, it takes less than 5 seconds to process the 16GB file with 8 threads.

If we use the `time` command on Linux, we can see that very little time is spent in kernel (sys) land.

If we compare our `mormot` entry with a solid multi-threaded entry using buffered file reads and no memory map (like `sbalazs`):

```
ab@dev:~/dev/github/1brc-ObjectPascal/bin$ time ./mormot measurements.txt -t=10 >resmrel5.txt

real  0m4,216s
user  0m38,789s
sys   0m0,632s

ab@dev:~/dev/github/1brc-ObjectPascal/bin$ time ./sbalazs measurements.txt 20 >ressb6.txt

real  0m25,330s
user  6m44,853s
sys   0m31,167s
```
We used 20 threads for `sbalazs` and 10 threads for `mormot`, because these settings gave the best results for each entry on this particular PC.

Apart from the obvious reduction in global "wall" time (the `real` numbers), the raw parsing and data gathering in the threads match the number of threads and the running time (the `user` numbers), and `mormot` makes no syscalls thanks to the memory mapping of the whole file (the `sys` numbers, which contain only memory page faults).
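
The memory-mapping and 64MB-chunk feeding described above can be sketched in Python (a minimal analogue, with a tiny chunk size for the demo; `chunk_bounds` is an illustrative helper, not the mORMot API). The key point is that each chunk boundary is extended to the next newline, so every worker thread always sees whole lines:

```python
import mmap, os, tempfile

def chunk_bounds(size: int, chunk: int, buf) -> list:
    """Return (start, stop) chunk boundaries aligned to '\n',
    so each worker always gets whole lines."""
    bounds, start = [], 0
    while start < size:
        stop = min(start + chunk, size)
        while stop < size and buf[stop - 1] != 0x0A:  # extend to next '\n'
            stop += 1
        bounds.append((start, stop))
        start = stop
    return bounds

# demo: memory-map a small temporary file, using a tiny "chunk" size
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"Paris;10.2\nOslo;-3.4\nTokyo;21.0\n")
    path = f.name
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    parts = chunk_bounds(len(mm), 8, mm)
    # every chunk ends on a line boundary
    assert all(mm[b - 1] == 0x0A for _, b in parts)
    mm.close()
os.unlink(path)
```

Once mapped, scanning the buffer needs no `read()` syscalls at all; the kernel only charges page faults, which is what the `sys` numbers above show.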

The `memmap` feature makes the initial `mormot` run slower, because it needs to cache all the measurements data from file into RAM (I have 32GB of RAM, so the whole data file will remain in memory, as on the benchmark hardware):
```
ab@dev:~/dev/github/1brc-ObjectPascal/bin$ time ./mormot measurements.txt -t=10 >resmrel4.txt

real  0m6,042s
user  0m53,699s
sys   0m2,941s
```
This is the expected behavior, and will be fine with the benchmark challenge, which discards the minimum and maximum timings of its 10 runs. So the first run will just warm up the file cache in memory.

On my Intel 13th-gen processor with E-cores and P-cores, forcing thread-to-core affinity does not help:
```
ab@dev:~/dev/github/1brc-ObjectPascal/bin$ ./mormot measurements.txt -t=10 -v
Processing measurements.txt with 10 threads and affinity=false
result hash=8A6B746A, result length=1139418, stations count=41343, valid utf8=1
done in 4.25s 3.6 GB/s
ab@dev:~/dev/github/1brc-ObjectPascal/bin$ ./mormot measurements.txt -t=10 -v -a
Processing measurements.txt with 10 threads and affinity=true
result hash=8A6B746A, result length=1139418, stations count=41343, valid utf8=1
done in 4.42s 3.5 GB/s
```
Affinity may help on the Ryzen 9, because its Zen 3 architecture is made of 16 identical cores with 32 threads, not this Intel E/P-cores mess. But we will validate that on the real hardware - no premature guess!

The `-v` verbose mode makes such testing easy. The `hash` value quickly checks that the generated output is correct, and that it is valid `utf8` content (as expected).
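
The idea behind that `hash` field can be sketched like this: reduce the whole generated output to one 32-bit value, so two runs (or two implementations) can be compared at a glance. A hedged Python analogue, using zlib's `crc32` (a different polynomial than the SSE4.2 crc32c the entry actually uses, but the same principle):

```python
import zlib

def output_hash(data: bytes) -> str:
    """One 32-bit fingerprint of the generated output, as 8 hex
    digits - any change in the result changes the hash."""
    return format(zlib.crc32(data) & 0xFFFFFFFF, "08X")

out = b"Aba;-3.4/12.0/22.1\n"
assert output_hash(out) == output_hash(bytes(out))   # deterministic
assert output_hash(out) != output_hash(out + b" ")   # any change shows up
```

This is how a single console line can confirm that a tuning change did not alter the produced results.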

## Benchmark Integration

Every system is quite unique, especially regarding its CPU multi-thread abilities. For instance, my Intel Core i5 has both P-cores and E-cores, so its threading model is pretty unfair. The Zen architecture should be more balanced.

So we first need to find out which options best leverage the hardware it runs on.

On the https://github.com/gcarreno/1brc-ObjectPascal challenge hardware - a Ryzen 9 5950x with 16 cores / 32 threads and 64MB of L3 cache, with each thread using around 2.5MB of its own data - we should try several thread counts, for instance:

```
./mormot measurements.txt -v -t=8
./mormot measurements.txt -v -t=16
./mormot measurements.txt -v -t=24
./mormot measurements.txt -v -t=32
./mormot measurements.txt -v -t=16 -a
./mormot measurements.txt -v -t=24 -a
./mormot measurements.txt -v -t=32 -a
```
Please run those command lines to find which parameters give the best results on the actual benchmark PC with its Ryzen 9 CPU. We will see if core affinity makes a difference there.

## Feedback Needed

Here we will put some additional information, once our proposal has been run on the benchmark hardware.

Stay tuned!

## Ending Note

There is a "pure mORMot" name lookup version available if you undefine the `CUSTOMHASH` conditional. It is around 40% slower, because it needs to copy each name onto the stack before using `TDynArrayHashed`, which adds a little overhead.
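
The difference between the two lookups can be sketched in Python: hash the name directly where it lies in the mapped buffer (via a zero-copy `memoryview`), instead of copying it out first. A minimal open-addressing table with linear probing, assuming illustrative names and sizes (this mirrors the `CUSTOMHASH` idea, not the actual mORMot code):

```python
import zlib

class NameTable:
    """Minimal open-addressing lookup keyed directly on bytes in the
    mapped buffer - the name is only copied when first inserted."""
    def __init__(self, slots: int = 1 << 10):        # power of two
        self.mask = slots - 1
        self.keys = [None] * slots                   # stored names

    def index(self, buf, start: int, stop: int) -> int:
        name = memoryview(buf)[start:stop]           # zero-copy view
        i = zlib.crc32(name) & self.mask
        while True:
            k = self.keys[i]
            if k is None:                            # free slot: insert
                self.keys[i] = bytes(name)           # copy only on insert
                return i
            if k == name:                            # probe hit
                return i
            i = (i + 1) & self.mask                  # linear probing

line = b"Paris;10.2\n"
t = NameTable()
assert t.index(line, 0, 5) == t.index(line, 0, 5)    # same name, same slot
```

The `TDynArrayHashed` path has to materialize each name as a proper value on the stack before hashing it, which is the copy the paragraph above refers to.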

Arnaud :D