From 9ea2d05cbc5e781caf3e67dcaf34fa7b1710f1ca Mon Sep 17 00:00:00 2001
From: javier
Date: Wed, 10 Dec 2025 17:13:03 +0100
Subject: [PATCH 01/21] added query tracing and links

---
 documentation/concept/query-tracing.md        | 13 +++++++------
 documentation/guides/architecture/security.md | 14 +++++++++-----
 2 files changed, 16 insertions(+), 11 deletions(-)

diff --git a/documentation/concept/query-tracing.md b/documentation/concept/query-tracing.md
index b8155b0f5..6c08a91cc 100644
--- a/documentation/concept/query-tracing.md
+++ b/documentation/concept/query-tracing.md
@@ -7,7 +7,8 @@ description:
---

Query tracing is a feature that helps you diagnose performance issues with
-queries by recording each query's execution time in a system table called
+queries by recording each query's execution time, and the user who launched it,
+in a system table called
`_query_trace`. You can then analyze the data in this table using the full
power of QuestDB's SQL statements.

@@ -31,11 +32,11 @@ This is an example of what the `_query_trace` table may contain:

```
SELECT * from _query_trace;
```

-| ts                          | query_text                | execution_micros |
-| --------------------------- | ------------------------- | ---------------- |
-| 2025-01-15T08:52:56.600757Z | telemetry_config LIMIT -1 | 1206             |
-| 2025-01-15T08:53:03.815732Z | tables()                  | 1523             |
-| 2025-01-15T08:53:22.971239Z | 'sys.query_trace'         | 5384             |
+| ts                          | query_text                | execution_micros | principal |
+| --------------------------- | ------------------------- | ---------------- | --------- |
+| 2025-01-15T08:52:56.600757Z | telemetry_config LIMIT -1 | 1206             | admin     |
+| 2025-01-15T08:53:03.815732Z | tables()                  | 1523             | admin     |
+| 2025-01-15T08:53:22.971239Z | 'sys.query_trace'         | 5384             | admin     |

As a simple performance debugging example, to get the text of all queries that
took more than 100 ms, run:

diff --git a/documentation/guides/architecture/security.md b/documentation/guides/architecture/security.md
index 49ff1c04a..69547007c 100644
---
a/documentation/guides/architecture/security.md
+++ b/documentation/guides/architecture/security.md
@@ -11,27 +11,31 @@ description: QuestDB implements enterprise-grade security with TLS, single-sign-
fine-grained granularity.

- **Built-in admin and read-only users:**
-  QuestDB includes built-in admin and read-only users for the PGWire protocol and HTTP endpoints using HTTP Basic Auth.
+  QuestDB OSS includes built-in admin and read-only users for the PGWire protocol and HTTP endpoints using HTTP Basic Auth.

- **HTTP basic authentication:**
  You can enable HTTP Basic Authentication for the HTTP API, web console, and PGWire protocol. Health-check and metrics endpoints can be configured independently.

- **Token-based authentication:**
-  QuestDB Enterprise offers HTTP and JWT token authentication. QuestDB Open Source
-  supports token authentication for ILP over TCP.
+  QuestDB Enterprise offers [HTTP and JWT token authentication](/docs/operations/rbac/#user-management). QuestDB Open Source supports [token authentication](/docs/reference/api/ilp/overview/#tcp-token-authentication-setup) for ILP over TCP.

- **TLS on all protocols:**
-  QuestDB Enterprise supports TLS on all protocols and endpoints.
+  QuestDB Enterprise supports [TLS on all protocols](/docs/operations/tls/) and endpoints.

- **Single sign-on:**
-  QuestDB Enterprise supports SSO via OIDC with Active Directory, EntraID, or OAuth2.
+  QuestDB Enterprise supports SSO via [OIDC](/docs/operations/openid-connect-oidc-integration/) with Active Directory, EntraID, or OAuth2.

- **Role-based access control:**
  Enterprise users can create user groups and assign service accounts and users. Grants [can be configured](/docs/operations/rbac/) individually or at the group level with fine granularity, including column-level access.

+- **Auditing:**
+  QuestDB supports [query tracing](/docs/concept/query-tracing) to monitor executed
+  queries, how long they took, and, for Enterprise users, which user executed them.
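Since `_query_trace` is an ordinary table, the audit data described above can also be pulled over QuestDB's HTTP REST API. A minimal Python sketch, assuming an instance listening on `localhost:9000` and its standard `/exec` endpoint; the `AUDIT_SQL` text and the `fetch_rows` helper are illustrative, not part of any client library:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen


def exec_url(host: str, sql: str) -> str:
    """Build a QuestDB REST /exec URL for a SQL statement."""
    return f"http://{host}/exec?{urlencode({'query': sql})}"


# Which queries took longer than 100 ms, and who ran them?
# `principal` is the user column recorded by query tracing.
AUDIT_SQL = (
    "SELECT ts, principal, query_text, execution_micros "
    "FROM _query_trace WHERE execution_micros > 100000"
)

url = exec_url("localhost:9000", AUDIT_SQL)


def fetch_rows(url: str) -> list:
    """Run the query against a live instance (requires a running QuestDB)."""
    with urlopen(url) as resp:
        return json.load(resp).get("dataset", [])
```

On a live instance, `fetch_rows(url)` would return the matching rows from the response's `dataset` field; the endpoint and host here are assumptions for the sketch.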
+
+
## Next Steps

- Back to the [QuestDB Architecture](/docs/guides/architecture/questdb-architecture) overview

From 559d68623f50f25389a7b7599d62689a057b0be2 Mon Sep 17 00:00:00 2001
From: javier
Date: Wed, 10 Dec 2025 18:49:37 +0100
Subject: [PATCH 02/21] improved architecture guide. Adding multi URL to ILP docs

---
 .../guides/architecture/replication-layer.md  | 37 ++-
 .../architecture/time-series-optimizations.md |  9 +
 documentation/reference/api/ilp/overview.md   | 42 ++-
 .../architecture/bring-your-own-cloud.png     | Bin 0 -> 370220 bytes
 .../guides/architecture/replication.svg       | 284 ++++++++++++++++++
 5 files changed, 370 insertions(+), 2 deletions(-)
 create mode 100644 static/images/guides/architecture/bring-your-own-cloud.png
 create mode 100644 static/images/guides/architecture/replication.svg

diff --git a/documentation/guides/architecture/replication-layer.md b/documentation/guides/architecture/replication-layer.md
index 1fac51557..9b567ecf0 100644
--- a/documentation/guides/architecture/replication-layer.md
+++ b/documentation/guides/architecture/replication-layer.md
@@ -14,9 +14,17 @@ can be downloaded and applied by any number of "replica" instances, either conti
Full details of replication can be found at the [replication concepts](/docs/concept/replication/) page.

+
+
### Type of instances on a replicated QuestDB cluster

-#### Primary instances
+#### Primary Instances

Primary instances offer the same features as stand-alone instances, supporting both reads and writes. To keep the overhead of replication to a minimum, primary instances only ship WAL segments to the designated object store. There

@@ -42,6 +50,10 @@ the object store, they will catch up with the data ingested by the primaries.
At the moment of writing this guide, read replicas will replicate from all tables and partitions from the primaries.

+In the event of a Primary instance failing, any Read Replica can be
+promoted to become the new Primary.
See our [Disaster Recovery](/docs/operations/replication/#disaster-recovery) documentation for details of the
+different failover modes.
+
#### Distributed Sequencer

When multi-primary ingestion is configured, it is necessary to coordinate writes, to ensure transactions are consistent

@@ -51,6 +63,29 @@ The sequencer coordinates transactions by assigning monotonic transaction number
uses [FoundationDB](https://www.foundationdb.org/) as the backend for storing transaction metadata and enabling synchronization across primaries.

+#### Highly Available Writes Using a Single Primary
+
+You can get highly available writes without the need for a multi-primary cluster.
+
+QuestDB ILP clients allow you to configure multiple URLs in the connection string. When you have a cluster composed of a single primary and one or more replicas, the clients connect to the primary instance at startup. If the primary becomes unavailable and a read replica is promoted to become the new primary, the clients automatically resume sending data to the new primary.
+
+During the failover, which can take about 30 seconds, writes are paused on the client side, and reads remain available from the other read replicas in the cluster.
+
+Refer to the [ILP overview](/docs/reference/api/ilp/overview/#multiple-urls-for-high-availability) for more details.
+
+#### Bring Your Own Cloud (BYOC)
+
+QuestDB Enterprise can be fully managed by the end user, or it can be managed in collaboration with QuestDB's team under the [BYOC model](/byoc).
+
+
+
+With BYOC, the QuestDB team handles operations of all primary and replica instances directly on the user’s infrastructure. QuestDB manages infrastructure in a standard way depending on the cloud provider chosen by the customer. For example, when deploying BYOC on AWS, the QuestDB team uses CloudFormation, and when using Azure, it uses Lighthouse. BYOC-managed infrastructure is fully owned and auditable by the customer.
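The single-primary failover behaviour described above can be sketched as a small client-side loop. This is an illustrative simulation, not actual QuestDB client code: `probe` stands in for whatever connect-and-write check the real ILP clients perform, and the endpoint names are hypothetical:

```python
import time
from typing import Callable, Iterable, Optional


def pick_writable(endpoints: Iterable[str],
                  probe: Callable[[str], bool],
                  retry_timeout_s: float = 30.0) -> Optional[str]:
    """Return the first endpoint that accepts writes, rotating through
    the configured addresses until the retry timeout is exhausted."""
    candidates = list(endpoints)
    deadline = time.monotonic() + retry_timeout_s
    while True:
        for endpoint in candidates:
            if probe(endpoint):  # connects and is not read-only
                return endpoint
        if time.monotonic() >= deadline:
            return None  # no writable instance before the timeout


# Simulated cluster state: the old primary is down and a replica
# has been promoted, so writes should land on the replica address.
writable = {"primary:9000": False, "replica:9000": True}
chosen = pick_writable(writable, lambda ep: writable[ep], retry_timeout_s=1.0)
```

With these hypothetical endpoint names, `chosen` ends up pointing at the promoted replica, mirroring how clients resume writes once a new primary becomes writable.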
## Next Steps

diff --git a/documentation/guides/architecture/time-series-optimizations.md b/documentation/guides/architecture/time-series-optimizations.md
index 419b1b694..64db695dc 100644
--- a/documentation/guides/architecture/time-series-optimizations.md
+++ b/documentation/guides/architecture/time-series-optimizations.md
@@ -72,6 +72,15 @@ sequential reads, materialized views, and in-memory processing.
- Materialized views can be chained, with the output of one serving as the input to another, and support TTLs for lifecycle management.

+### Time-To-Live and data lifecycle
+
+QuestDB supports [Time To Live (TTL)](/docs/concept/ttl/) configuration for both regular tables and
+materialized views. With TTL enabled, partitions older than the configured horizon will automatically
+be removed.
+
+An alternative is to use QuestDB Enterprise to automatically move older partitions to [cold storage](/docs/guides/architecture/storage-engine/#tier-three-parquet-locally-or-in-an-object-store):
+partitions are converted to Parquet and stored in object storage, while remaining available for
+querying by the query engine.

### In-memory processing

diff --git a/documentation/reference/api/ilp/overview.md b/documentation/reference/api/ilp/overview.md
index f559e056b..12073a705 100644
--- a/documentation/reference/api/ilp/overview.md
+++ b/documentation/reference/api/ilp/overview.md
@@ -38,7 +38,8 @@ and initial configuration:
8. [Timestamp column name](/docs/reference/api/ilp/overview/#timestamp-column-name)
9. [HTTP Transaction semantics](/docs/reference/api/ilp/overview/#http-transaction-semantics)
10. [Exactly-once delivery](/docs/reference/api/ilp/overview/#exactly-once-delivery-vs-at-least-once-delivery)
-11. [Health Check](/docs/reference/api/ilp/overview/#health-check)
+11. [Multiple URLs for High Availability](/docs/reference/api/ilp/overview/#multiple-urls-for-high-availability)
+12.
[Health Check](/docs/reference/api/ilp/overview/#health-check)

## Client libraries

@@ -500,6 +501,45 @@ The are two ways to mitigate this issue:
will receive an error. This effectively turns the client into an at-most-once delivery.

+## Multiple URLs for High Availability
+
+The ILP client can be configured with multiple endpoints to send your data to.
+Data is sent to only one endpoint at a time.
+
+:::note
+
+This feature requires QuestDB OSS 9.1.0+ or Enterprise 3.0.4+. OSS users are discouraged from using
+this feature: once the client fails over to another instance, there is no way to reconcile the
+diverging instances. QuestDB Enterprise users can leverage this feature to transparently
+handle replication failover.
+
+:::
+
+To configure this feature, provide multiple `addr` entries. For example, when using Java:
+
+```java
+try (Sender sender = Sender.fromConfig("http::addr=localhost:9000;addr=localhost:9999;")) {
+    // ...
+}
+```
+
+:::tip
+
+At the moment of writing this guide, only some of the QuestDB clients support multi-URL configuration. Please
+refer to your client's documentation to make sure it is available.
+
+:::
+
+On initialisation, if `protocol_version=auto`, the sender will identify the first instance that is writable. Then it
+will stick to this instance and write any subsequent data to it.
+
+In the event that the instance becomes unavailable for writes, the client will retry the other configured endpoints. As long
+as one instance becomes writable before the maximum retry timeout is reached, the client will stick to it instead. Unavailability here means a failure to connect to or locate the instance, or the instance returning an error code because it is read-only.
+
+By configuring multiple addresses, you can continue capturing data if your primary instance fails, without having to reconfigure the clients, as they will automatically fail over to the new primary once available.
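For illustration, a configuration string with repeated `addr` entries, as in the Java example above, can be parsed into an ordered failover list. `parse_conf` is a hypothetical helper written for this sketch, not part of any QuestDB client API:

```python
def parse_conf(conf: str) -> tuple:
    """Split an ILP configuration string into (schema, addresses).

    Repeated `addr` keys become the ordered failover list, e.g.
    "http::addr=localhost:9000;addr=localhost:9999;".
    """
    schema, _, params = conf.partition("::")
    addrs = []
    for pair in filter(None, params.split(";")):
        key, _, value = pair.partition("=")
        if key == "addr":
            addrs.append(value)
    return schema, addrs


schema, addrs = parse_conf("http::addr=localhost:9000;addr=localhost:9999;")
```

Here `schema` is `"http"` and `addrs` preserves the order in which the client would try the endpoints.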
+
+Enterprise users can use multiple URLs to handle replication failover, without the need to introduce a load balancer, reconfigure clients, or move to a [multi-primary](/docs/operations/multi-primary-ingestion/) deployment model.
+
## Health Check

To monitor your active connection, there is a `ping` endpoint:

diff --git a/static/images/guides/architecture/bring-your-own-cloud.png b/static/images/guides/architecture/bring-your-own-cloud.png
new file mode 100644
index 0000000000000000000000000000000000000000..a7202a1cf80abe9b932b33ebf55c9cec79d1b407
GIT binary patch
literal 370220
zZLrjJ>f$cU17M|1K_$M;3uOU@Cjcl9>v#qsr@ws%^-AJhmwz%o%Lpz>{~kePKeZ2< zc?y`Yb9^Ui0NnyhWlKHiV5%^S$LW&IO2Gx7Qlw$bd2x7nczL;bIHXl#fKNmO5LoZ7 z`Kj}N4k9q;-8TYD(%*L2^kmPS-tS89U_3Q#{X1WSk?8R{^Nca*#aaP=&tgRO&7+y- z1*_IwWQ@+iRW4u7eU!%_VC9Q!#F$NwN=hDv?wPu2;k?i+g*Pw4%f-qu zIZ`<4X|2crmLf}7qJx`fdEq3 zXYj2~emG!ORvchBd`E#$VGK|luMl4^g#nnzm$MZwgw1*)NkBFu@YVE`NOcnCl1GQR z%4NoRy+=8^@#W?DMCf=$soatrd|#Z0);_ucnCa4bKtnZDtlq5#J{=tg?A-MpgucS#W-HH0QL^B4q$hM?Rmgvv7>A4CfPuK<4bL zImYOes7cRC%-R)0`}*Eii`jL1`ZzmBIxLJIq1gjiE>5YmKV|>hOAI~0G2^;fcNjoC z`15Lh1+czmxy||gm~%}#&>Nj;=lOE0A{JPyv^Syb@hxFQdi&tsYp(P4Q}z4rV5J-{ zdL75Ll3OL1DIegNADv{%Qtsu?^A9o5C;dSpTt;Il@Y%nE%U~zV+@P)QDS*_Rqq7Bc zPpa8UZ~&yu)AQD8+2ttz_EzDjE2}(_pjCC2lOH?_%kd9cZ=`kmKvw2Y^>d68g1hA4 zsMHL?vO$M8JkD0OTQ@l-Gg@=1EXMSN4oMs}!A1Z9Sf|M}Ab!I6eK#x+u?W+qhVMAg za54-%I_U7p5YhWKxlyr7LNe`rJtJ^8Oli}PKk_k?^q_HZU55H{uz*ypf?wstUkXqEQyP8jKotH~_|BNcCni+udOMDuJK`pFSNtxg;;j0i$8%;NW0z@FOD(RyOj%%JOoZN!Nz=1~Dwkm6MqpM_ZdzR=dhT zvUUocZ#7+&8}X5@pr%mg@d7nE^RN>;xTKn0FDk$5cb`b7(e?diXJ{Y-Ec53bPBZR zbCCgB;&L^!xMQ@Ys@JL;P1B{vBe`aR`YkZt)X11vYG&+_;3+-&$4GVDZ9(S50Zm@1 zegjFXc6Avu5twbbNtemymMgDYBFQTAjRvp8&Qqq|%PBQq0kfFhXNoShk`a;*TCV($( z*i@84{YQt)WbcW10&UkktTkuL+T zc1K4>My7$vZ(=5EJni6dhpYGEfDm6PM?|EH087&Re!e#r7e6B6;WIWRE9d%sVpRzA ztr1R%c@tZA*ywl?MzcSa_tD1$BZu>D`1p-E3Z3@d0AfuvoVJ4#2V^E>Vfl@;g_kg3 zr%!PMaNb&zBS)nR4K_@y3)T_uhP^YR-TSv6Pi?g=U9MW@5pj zvx@y|{po75v@+7(yMlFh*`5aqe$PcQ*{{8CszSMDmll3Blt}rA4kI>%=H6I?o>>G` zbHy{Y^3;!E1wnXARz#0hr@=URjz>M;q$pBK{0x06>^0tQ#2+jikMt!WV_Z5oU=E=acZU|Hi}2^py;5 zJ~o;`YFsm1D1lN`r zC<(XuN}^dSTP=B-??#4RfsSK8cb@3di%L9ZU2IPGK8R>}P}Z|L<$W%*0C9bwX&kAe zzsI20o+)9JC%SVheh~X%3GmE-0d(6fv zLPU}XfJG6JBLWs{Tm4cQX%or|ocal=nex+gb#ex1xnlsOtIxJdYl5UWu}y-TvTAwT z*P&Ni6^?Yy?j#}JqS=euRLBd{?BbnaaC{B)jdHRy zx2PBTkhaoKwtBtgW{=I{c86T)YHJf&Ql5Orsgl^yu9mmoe|dNEXr9E@j(fJ+c6Gk- zER#S|#R;wTPPph-A|q{QR*Zv}NwC)QNUfD}5cndpx%mO~(lar^_sQ|n<6;vUc4Jr_ zb9#E3$8ovC*?9L|Ac^Fjht(jwnlZkdXM>Tx8I_#YcUHoSStaUmp5V1UU~9X?19f21 z@rA{pC3lxtkC=;vS;oVb9LrUhSgW|L#=t0_g;D=qf#nXl%vEc4D;cFy|6O_!i%Vb# 
zc%NA9ynt6Ecb1m=V#lCM#A8yQOL23JY>3%+NsYp#q~+v#kNc`*DMyGbKoT(~>q zj~XaYFF#)*&;p$k_G1KJ2Mq19JwtgqGs7!2))-C^h!H2*s4n06ki{=1ro@PNnmFka zXq1|FDjdNgT|bM7XD^SA{Vd6l?)n27=Azn`Fomq&H1LdE6PwslH{?+K;zOVIec$Ei0i;S)U z#ONdeB7nv^Xn_tY{GuAd$HSD;9vik)KEp75L-?Kz&1{PdhGBmNJ>jAJx=}q@GCn&3 zc5?;bHq%2MPw1Yc)t;OVypc5kVy@og>N&y%;0_z|CKtVWNB; zeVyEUj2*ISJ&(6nyJq#ZH*05d)Fq}rzC8C?oG`I1D$tf#{Nzw9nUbhtz7AgIq7eTU)_pPiQZ5TU;I?(^yLXhSfNRDAQ?~h96?!cfE)#RnnGvsu z=jpPwFU|B^R(^etROYNLGcKq7LrPPtz!C}93hKqfCAGHsAW18c=h6ZBr8~PkV=@D=xEwK{j5i@5$ng-5jupIY%TY z{^T#~Iy&~l=OO6ms~F52gkH}coa}Mz1H!>w_TUY^@t;_|wuO=5MDH)CmDQ#;(oCaB z-70lmGKeRZ)JyMX?b=dK{7FN8H_;;pS?X3qz#4s-ILmTJ*C}R*^P;-7*($ndDqxF} z%>;G=d%(oMZ{?Yyw!iaC+Y3%aE!+5B_C<~)?1!H=TgUp37wTj3e5BMu17?^!#XTZA zbB2$}tUBx|V47J@*Xmb`!U2NZfluBcoaB#ZDd~Y|9PX)@VW4HO($wJxTlN0J=_YjG z@j+cP3od~#B*Bx^b!hf`l5Fi>haYIOriQC3*2tvP=qXtq%MmqHkRIcq?UiHWad`$t z%0?`uE5G-b4=Vv0vHi(l%8zmt91z?NpD9`guwuF7g4LQEsxty_Y?*bUO}(AXy+0?2 zgvMFT$tRDkf3qp?ry<zRee2aXDr=FWa?1!+Pr|b$SoeNrJW9#}l@HOf)@E(rs>M4r-balSu zh|G$DI61VbLV~+IsEE5CX0~_Ve1aAgnrLrE7j=q3V>!S6g-yvu`AAhyhWUYKJ1Ow0 z4Mk&-)1_8B4FYqp9T&Im%E<4DY+y+{@b@m#C~47wUX&7!gH(g`H4D~YsJunxnBx&f zGaqo_5aFsvp+&E71pJKXR1!NC_go#+Tr3}wC%SYUKnz^g;=mZPu`@`XuOB%(B6Gri zoV)zC=EIkwxJI?aEg$$8n29q->7ago)|C*o7NldsOaON${3L#bS*p1Yhn_Z`6M%o7 z1w1Js-3H@?Q{cK1TaIicDGq^+UCjy|9*; zotC<~x?*uzKu@QwG|IW=ZWf2gzY5AY;t@Vu&^oT=f`anU`x7cYwb|l8yw%sgb@O}^ z>qn}st>xh0pzL~Qu9a1GCkcxLM>t1vNSs^T^Pu(vcD0R zp*VOVVThq8=|ttp_Mn5Hdwf1OZTMMIB9huAQx|}LF0owWaZ2XGfUGUJYrlR%6-Pzd zn65}NpSKF5lheklnYK(U>^&6>aX6#S^;SY?nVBj*%(pzb!YJxmY6V@Hf2(4_31lkD zD3oT;(bd(enNG-`3d$o3vT&>Z4Sp z%D>(lpnIRdSNjbuj9SLc6CpnPG4OkOPg<6EDN!_ET8QDK<1+C@zOl^JdOtq-l|JI` z-HxmQ7I15Ttg{0<7PW>w5_8DSANr87S(kb5pm%-rI!Dtwah>fAf4Sv=#sH=M!O_s= zkS6uLM)RB^y7a2hc8RaCcKikmNxZI@{LNAq3Csb4V`o^|JyecKv*w`#ANX<2qtnqu zT(6!mfLr}H*ym>6JF?dW?&fik@qTNum&vHOR!r{k1H`&}29l$vIYr6kb1lDqjnFC> z(BtbHommaex;9U?Vn%!4D&6!7rdY}O^j6xq+wTwU=r=klz8-3~KOWbP<3UR2!BVlY z5Pi9Q2=6VF5_B6k`epFxlNCIPKF*hfgq1e8=cz)3GKexsF-D@R_>(VIK3Jc0aZ)-& 
zmSAVZ8oTuFaNRu+1T9yZl@o{>ikWoy-fjGSwHuq3MoQXG8_{OHicDs{{U-H(?SWw$l%C2WJ28Lr8f`4OVIqN8* zl$D|f;l&K{A}^qD=>stzqtmNQitBs$GJmrO)|aO1eXTGg7j%O5!BF3t9}sKJUx8+t ziQmBXD(_B=8dZNemG;6#5^8l(Bq`=bnE(J?T6naaaiEXY9Ap*7*hbRnI8Sph6UHk* zQXdAyqlc8a;<`a};CbCzLV4{lldpl^H)J`_*>s%AD0SIuK5x#O3khymWP<290+NXJ z>_bnY`gYU0@Ew0FAzp6xH-9{pxQ8qXBYV)Q`|d?=pR+2O94BMZmDaa-2{-%$?eX8(A3G`bn{)1pSr>+UFF5O|9e^mxd?fpYb7 zfn&P#yqo3pvKREScXQO3rO!x26o*wX6W8m@qLlr5TuYchD$)}0#LYtGL?5s7Yv_qz0;Le4I^te*S!=hEjCxIuM=h8yCf01AoWuSQiNY`j9$4!+UJ%1 zJkM9Im<-e||1{l;@?ho*e(60_K(-hK4weib69yhkQNFnCg;23nGa7&H_!`{pydDoQ zP6%_aLQ-droX(hzd#Scy8*pdC;41Zu7{kla){NwyY|7Ku6IA^xsY zHlRHd1`5Q@PlA~A*TSP-&J44}+Z#H45 ztbXk?Ffh!gG&VFWe0B^Be0kHYo%>r4e3T^+{CouMhhs+!0h?JGuK!r#GLrdeKR=fA zv>T6iT;CFSuKVZmqcM$OAX%!GSM!yp5V+G$ktz`nWa@bHTC8J8g_j#CdqmRCM@%0W zRO?uz(s2POT2Kw8|B!zrZit9uW7WDu3r1a=w%U!vLZ+=yC^k8x zFQ#7kRed#|Iffxg7F{gIJsMz@*o0L4_mFz+BSsUaI8j}&M)%X%(MP)zIr$#P#1$O_ zkx;f$qB~;3;hh=F1UgHPfTmWu7SHuVm6!z0(ued`$AierHH0u)RgJVik96 zIFaI)Di-@Wg%O^KktWY=T>;cUq&J-`;>{r*Igy5mRQ=JPPaG8N&+(8QVWa`7?<{ij z#{hh>3~T0!jgkuSUu7GeG_9ddZVzcM_O2zkcYOD(pvAs*%{k zRBdQ@@ZHm?$HOFLVjni&{tmp;pokK=s5NuQ3D(%W*oq8#xyoa`nIi7=>$8(%VH21P zc)pnv0iSm-gW0oSE=V}ob2@wTf?gh01ibbmg@Xd)5aHY+4@SxKe?{qpxiZleUA7|{j>gXef^#O{N%%^2KH@5 zf$~M)7ea+lQ*$<<4Gr~)Ivk3-PC6M%EPi)2_W1iZr|DW(UfZ z<+bl%Wv^<@3eGZT=?`*bWB5E2?KwL#hO)84&Y+A0onnf zKk#Q%s??`1#7A3YB9X#Lf2~i;`q3QAbSEGQrot&M+ zEWz{p>(|#-Mgcc4+@_yo{9sC$S${|Al+ktC_4D3lENI*3?X$#7$Pokes>)=r~1>SYr8kK_Mk}n+hXKE-Uc2wNoQ{f% zgFYf2ZqN6_Y;B+GezHQSGqXM>I|(bYN_()+CJS|>Y0uJs`xL>FOaKEdV;6Ly2f_q1f4TOyd@|tA*UWRt$ z=PC^QqDBT-R=G<(CrU?(O>h!*^z?K;=^E+j85rm+FOF=xNu^G{aQbM{|ATzMJW;`1 z1xi?(L`Me;ORF_@B5zB}h?rR2U_N%zaQ-h`rp<S^Iypu^q#_Z%WL$w@ zp+&!r5rJ3ToU;zva^i+4BeW)CDjU8Tr={kI_=oWyR`G!L&sa{?vKh<6vfH7%YQF!h zGg3L&CB_-nLmdluIsP6w$2N0?68$(L_epk#WOeCcz|Tiwu!X2pnW{G(&}{|qR!k@ z$+s&&oyLl`x7y~%z)*@j;d8O+)10!hs>fhrQvs{rMbxk8pJ|u7Sr$a_jx-0cpmAtt zb6xauXonD$WV!!TUKz4&)SX4$%~&j&H0Uvd=>P|J_sPuV6NNPLTC3mD)p$wdvRE?D 
zcq)7K*77p=`1p7k+;w>w@kIm3d^zc(@n<&DD&ZYhhBO?PymZPCb_568r;o<03It9_>Nf|uuY3zVxPkggThkm{?DCA`TS3j#sd}r}G zQC4dzKJ>^T`jY+<=Q18b5BFvk;Smu&r@|dW@>hXPhccu|Rhi&FY(lXkQ^V4ypuA@X zU$eHU&Y-+z*EM|SejnZ3lbXV5U$XAoKE8A4Af6ow1GZ|-M8o%3m%zZpjGShFHtQeZ zvFxNbIsVr-bstw`K5XXU5{sogQ(eD_Dt2a<03^`tEI$8AR+0^bE+n<@9mmEJ({1m`qj*I*b?&2ST)SnOE z+rzcAv|Nz>M#sqcIZ|e<{p`_nMN;pM{_m}FeuS@LmsD;W33Thst>AGqX!pFi+@WLa z!pf|>E$>vG9$5tIM6#ilztvyqM0n4#5L3xcAsoLlJWc8s2fR zWeal-RsW4GqJ)sa&phAE7j{;!_GIu89``3Hw4!>|;xf`vg{i`CsA27pcCI)}=0%Rr zf?m%Wo%Tw(e36xRY{he)_?ASIXvIGhqb+QE`abTyRibRdrb|VavYvNVfB9TpFVrq% z;Cs^VpB=S(TO4X9kk$yY+0IB=XHRp94qrDa$slRF=&|Byy< zXHm%X2+U~Xc^2Z89rhz~N8DJV+CgKZ)=#sdzr0;S%2ntBiYHXw=>mDan1q>F-HTEu zuP&Q+9aGoZqI#>A{ApP47^D9?Wif0p2s4Y+gsH1fg7={w85DPl`kw-8lI2D-uH4bNxqN*w?rG-6oC;I!2ZJG7#5B<+e56#HElx^B6*Y`Aqi zqlz$N?@gWyK3bI1JAR?Y?%LS+pb#P;utcQ1B97_cC#JljF3^BHJr8@vjPuAQ#?IICr;@lhf7_$H640yn$ zNxrC>L`4m(ZZYh!fkW4xjy@K@jq}|iiubktz+JfLH;Xji(gF&zH#4wDX=4h^4OVLHwxr|hZ!baUkWr#0Ey-Sbas$J7)@=btF~|C)>sW+pSf zdWtPP%xXk(#~&MQ*VWZ6f{+RpFO6{xE6E_DGtiMr#fGugbjck3p{>oMn?8Wj%7wgfId?NAP4Td!FhT^~f3&65iV+eb(gOPtE zk$gl;uSWu+2R$+aPX?O?bHA!sYsX#Fnx9=I{~tWLLo2R@SSgI%!)FA)3D60mZnx)0 z59upb^z;wiSUx(D`0OL(7Lo(?GeZRvj*)X#b8_B48$R52ulA;jDPXByYTCYQ;?Yf>#WHm=6;Fg`g!~FQaf8zy2>g_SvNAn z%h~IT6HdPOg=vx6X*3P2x1TU8W(@V83<*cp;k+RZr~_4V-`y(mw~hYL<8P!(FCJAf z>O_kxU^O+xl9z*iee|7g$icaUm`aWLI}mSP2o@ygq^_MexNE*U^6u}8ed^WT`ev;7 zA+kwDFsjW1L<{0p!d5{6u{P;c5U&>rLBc+lo4=15hvn@S6mp68bXs5QICqs1y>=x2 z&L*GVNkLIWWCYc;)i<}aC-tZEf;E!^oF zPk1w@10N6e-h{+8e_XsGtLsslTR!}s^KmW=O&cL+-<~|^PZgA0RG zWb)YJ;_l1Qad3VRj#GNJJ5o^_kig)lv>;JNdd?7hUZEy4hNeBrDEDHl} zQQLhjlCyQ zhd<+E%AX-aR6BiN;S<4YTF}FaQ_z$Cal3|qmbP~L)2RSXe8-FY4%E>(>Fa&} zn;Hu#N49ZY>M%L(TJyhp9v2ZZ)cXXqQRn14Rv>EwdCgv(Bak_^uJ-cse))2>H`OI6?dhnNw2O?T zS9T8r)cydPJoL}s{6fhaU7f? 
z_+h!`K=huoXLmg7Z&9cQEu5WQ=sJLq zRzIixx2#sLGjnzUp_|<1OEnPMLh0T>O&QXkR4r-+dE@=d$=dm?H?TB;WD?S^1B?gj zrdl`ofgshj^H!{3cI|Dx?5|q_25+Z{gO*7hF_flsA_a^@0e_C>I%fV#Shbf2irre@ z>@0P+9j#XtI>&m%sk9tX3ci8;a|$xmd&;)hBRhSXW$7~Xo)>lDuR->4y;K!p$*Cmg zuoJkWi|FYQB@R_@z2q2gn$&@~!R(NVvt8Fcy}DcE<$n0Mw0U&YL7@+>B`)Qk93Z)l z20uUYEgz|MwYvhr8zY+6a5+U+C(@;;Ly+!}zgNx0%#0~{P|Y&~jOO}-@?UGc4PoLZ zJr3Thev_}oVQD1@d{Y_p{Fbn$aS9ZbDrw?fQHWt3Cw2U8j~CuJ=&1YnT;<->Qifhz znnT#?Q&P;oQ>2QJSZv2SGG;$D=q+pUrBP}<^O$yI(4@_YZcSm=CT4~#Y6-KB9t&* zd@i?vd4WEU_7S~AsQ&15|J>oI4ggwyF##Ulwo&1f0LN8kx%bvZ{w=$Z&?scjj6I%B z_}DiFZQ9-ai%;o!X64cTAia{@YhBgIF8Sy`C?E@YOqMrp1q+wl=(qwNHhK}KldhJc z$R1(WirtK>`O-(g0P)rabD8A?>Jpp~W zJca;~zvmS}z|3E1;+{3J`h9H^G?LzAHpugYAYd*%p(ICz#bgCxw(*UhS2Ph4>YvJ4 z^S2BBIcm;5ZzwoYttzm@2n}OU*fqc5e6x|imu`y^uyXCLcAhbqzC%4FaNsmp zT8|?|TQdm|o}pr6<;Qc|kmrb3Z?mQ;Jtj)!l>f!5Dlt5 z5hPBBY{|*VuaA|lG1+{)y{jGA)BWCwUHsq=FF3OQOg(2#q=7}F&)l(ghkBCe2RO&R2R_ypusE=~j zcQ^0fzmH2uFa!zqc3Upsg_uTNpI658R&%uFw?Adiq1WjQT1OfXg{_2_J;H_ zuJwO5#VNV3<9isGsq5(xgQ)Qt-x*GOE4xTYh3$>1Tfrb`rFMvp2paf8y?q$DkSaziko^w^p+-MHCw%2WCvt*lk&X!LT{I*iI3! 
zU5Am>h{VIz_Baxk*+8_ZrKP5c zRhwBmDO$sc_1%LcFMib8GoCeJn%u+2MsR-usS0{?;4?olBwp0s-kz>dm>j|pI!A4^ zPgB|a{o{>YD&@;tU@a|;ENUJk$$KZZyo@9W{za#3=)-#7HXV9c58RkG+iznIh;ND5 zJjT&MN!Qc#Z4%4TF&CSr__cU2AUi>R?l1NbTtWGn>Aov`^6Bnuiq`CN{HPb8_`8N_ zq71dy8!g>lIs*k=xO!UA^h=A;Sm|hK3p20O{l$9Ri6LNN(uNqW=J_<4W*vbb}wdTVxC;yJ&R8 z-k_{|Mq9^mGmSYs4Q`O~e3E)&>P7;cmO@R`^!e%sW1j z09Z;9-r)H*m(zFuTJbLXM7NiL>@hVv!mkzB()1581H2UyTbu*07wb#L9Sw3aGk<*D zOyaCcRJW`?WliSAFY5mJII?3Quc;e}`kc7`hx_{>sL+>}JA9L?FDWVYI&+vvvBl__#nBO4j>8u6bO{0o{L#n-<2>LyX-uRTX^0!b2wDh6t-`YmiADNYLC0Lgx zs(*3b8hLv6Z-{l-PQO02zdo&aiJ;-d0{=gvzA~(e?){pQ?rsq2Zt0Ye4(V>ByHlh~ zx=Xsd8xGyw-Q5UCyrce}|I1g-b=;P--?^>@FQ!0 zJ9B@y6wvCdYJw_#Z-2wy$Ia@M9E3}h4xm{ZtCF>-d1bviSEh%Qf6m|LE?PEN&wiTJ zbc7UTPYx%0nYkIu=q$eWV#rtB>eWx2aXK0reO+!M1*XG7&*zPySKaEtBz&;PWM!j+ z;{1+|jzBZVJKEB#9$cJpxS$t zDmbPSvlwCQr9vXPz@P+TSJuF|nFVjdO_~q?wr;RUwl<@wO9M1ChMw2e!>`V__VNb_ zqqIcMGi!Ky&LH=v+s6q(!L&3zM$8c?A}N9Rzp@A57YQSTa(=8WYZrRPz58m9DMK%! zuXtr72QqD{jay$-M@UM8RizeC?+Gr-r=VO%ay52)7(ZK454N_4v+4#IO+G%J3V`VZ zh1e56E-&NL>ApPQU_i(Hy6@lCba`0Q&VT<8h}(vp^v;k$p0$_@4D7qTr);HO?exw1 z;9_tsl|`-9pEOdxmw@!tBCma(@cv(jWc!@0+q=ul!|HT5jFm`S@Wt@g7tQ6m3hrrC z&#Ub|0GnN2rfi+o@OP$UIV|`WcH|KJ>+A2z$2f+IkB+`BD{vCUjT%XVghnq7u%%iY z_O5To2>L_}3=HO6-U+U^#`(MHy(W8)9rMkm+9xC~|bmjMf@Ahu+K}l1 zXrg4wA2VzkE}zL)uh?s3WTYN;vk8ia8n;l4T;|Agmhqy|rm+{OD-g>Z-9loJR1ky<)I`8%S9^yD>@U zC{w^QQ4&qiE_;ZC5YzB~B{l(TbEvM&jR2i>&);Ff#1NvC0);Tr_$cn!5m-CXcM zf81~~hp1J-LPWm&j6T0vgi76n;qGIMfk83oZSSoceQ^niesP$ua5tle*`|5SvL>1N zkw1GDqc4FH>JQ%G`?#k;naeHvyXVW_)<2}{D|yxDG*3ro(gb6QZIK)yil7dq22+!z zd~L?8u>=!RfyihT_rKXlS=f)e>Cvk|mktFHWV`-+dVjXk+uNJyh`$mqr5@b*a=((F zpYX}BNRV@O!#DlxSlRze3NYLc&^y3<|E+_K4Yu;o*xXzs(1og!6+HK(_~V|C^^0SF zFa?Ki;ai%fk7FP*xhGqQ_2+^x{uM95!OF+V?LLJFvhwreUc~QD4mxXQ&Bleiilgcr z1z4sMM3Q;fZ5_Y=mEh?v5cv)iX-laHL|8ad;^L^alTbwYyu2wXOO2!HQ|K6lFEn&- ze~yhzY4>KD}Mn-41st=hj6 zIUqsG3~pP$*o)gUvQ zg~sR9g8>?G168`2X0i8FA7SLh=zA*Prjk zy9^4RfWlM?k$}F6*OC^GF3GXa(1es@cA*mevW{P8Pwh`dD#zo$J_KlRly@SCO#QIN 
zSVou_SOJ1`aaE?$_H@@FP@Udq+E(oGpVQ}|DnwmQA|O8F_w1jpN373bF?omQ5XMP< z+7o7M?H$0WR{RAI7uPiYYHB7Hh+X?R{%-L)@9>0Wq;dHXLF!@da%BSk&Th`m30Rej zTkQ(e0~nCMDf9jv2w;sfpr~KBuc|%T-vGSq)^SFl)8meogJAXX<{&5t`pM%Y!67G+ zaJFuJ=J9)i<=<`53}x=lV)psHveiLvLq{K2bq=MDQME@(duI6lA_=_(Iz#FQWAKI0IMz0!5p!-2a5#w{bm8 z`;t4F(;Xd|jK?YS`|L=Ecl``gPEh|p@|PH*E`@H-u$`EMfS`{@W`i8sFx3e2BRgpE z(%arXMpfNI!n5mis{rF)sKAiSR;{!ybY1kdvA0j@k3YGnQ=vRW>f0Th#;{Ux-RY0) z=m=cz09CIBB7%M!7Fv2ZSw&E>v#=OI8xj+bkAiMBYw3D^ZTNO}RO^&}-JI>f#rg*| z*e%1UA0^2hG&H&_>VUU0BRNZnnl4V77ox7p9ndBmf9;gr`zO%{WP?Sjv;c_T+Y8w% z8d2IZ{L?r7lX|4}nqUJ`;rl;IlQ_AoABc1W|4TnrB;biy6S3S^9$r@vbCP!{-ZHv z$7e&WMV)yFwBHxYNYI+~{#R!a3Ib10;G>HT(8p$YJx6+^`zP1G0u)?(Z5;P&8!gO# zU&RbTAH#m*5cHW@J&b0Gdb3Qs7dEs#X3v@hwU*ePHHAmrBTHQxdYe;7V}&*)42kWV zoB5t2dlWSzgK7{1-l4JRF2g!xmko&2*{!BO^tSn z<$+{DU}1^k!d7fwX_7_@rdZ^i-mI3_gBpy}~ zdxGlP-|m^C$Kpp@qWzQiuU1!AG9p^aRwm;lfjZX-Et`G#z@=u%sW?w{{>Z~gB1226 zH*)Mwt@hQHbY$eLC6;cEl$cfn@tx?@-JddSQwBYx?+JK17>Lk@mVz@5 z1^pu7srEdUmM8fR3}_^hplGv*xfJ7yH1vO&#|k1PqmCg4)WCS_y{xZn<^aMBMPtP+ z!arfauHCyWm7u!tQoS^1O*Trm)AvHU$E z+xuv7A4(F|Gr!|&79MXAS(`4An0xYS(eEzmEQq77{QQ9%q9G72g?NI=M)P3KmS6g> zlt~wcxN`gE{{{H3;kSD>YZiRNGGPEh>sfayEhTvI9-68~&^)|A6!2X83tOk_KrC6= zc=Jc1CBKd<6@y|MR!gxQ32BbhLDBg(aYL`hKRdm4J^e<6R`=(6g+{+?u%WKxvO=;* zMDzi+G4}TE*4DM^JtaT;D3-zE|2*+8s6__&6$=s)!Tkf3h{(v8;R)Im=_5E*FIVNf zSdG~-#1!BKGnh1wwE1IW35uAP$5 z(9qPcxZiqJo##>!xZ#2ENo(pElQSS<6KmbV^Ye)XNt)cO9MQ|}yuRXwqc?^!#r%x{ z&=8Ma4V9HXX2(x&FP(3}b5YnhNv5W{?LKUR9~=EbAJ|v@uia*$l!~{c^1oD6LE5I? 
zg&fJ}pTye(_6y4Dkuygk#bqB4ib%pL{uaz;(L!XIT)yXojyx}DpjPPZ`ZC) zSMqoFnD-)$sX42<9f%9jEtmpPe-=yl#i2w{o79=T^i(mVojAy z-JOwND3EXyJwVWg)=CTe`Qd$Wxw#+ss7hn9yCHK& zQiV79dJ+=ml!_NAQ~!mS5?E%KZ?gKMEOC3o33LEl+s@}q5yO?Kqb!fWw_H?6Q5qaS zq0QJ>RdsoPro|&`4aJgx3oqN!igFYtrTTXtWlO`xhXtOUMS7I;@$j57u?RaN0-HY3 znP6r`OB10~dL3r1;P1-wL!6$r1FU~tow3&36mcUoB@&$tHY7{mMqhYRP*70x!@3En z4P;wqr>&j2o0C&KjL}3We&`C8fxAhu!>*-R<52$J%f7USq7K7-{meuWK6D*wG%`Dq z#H5UG!13DjeZtL(u6g1?)}^gomhnJm3Qjo1m{+rvc zn?j6MU1Zr0>i&qsB*)zPE99nvg7O}$kf}wRcG6^w7UPQqb4FD)Y4orawM=0~M#dgZ z;z&OKloJ_*S*B3?{EXH>CKneoZuo$@4Z!qFU-jYhczX}%%NT>1+3IFpy%W-usZ~Dri#eR-zlCsk+QL!w0=w*5u6|4?e)mKNq)A$Nv>E4ma-@fv} z&Z5S47|q`dB_YpXoHamXQI=Ws<#Y;|vxuajJUKaOb6u-aBRbIE>M;&)aS*ohv5Qm89D@w(AFp$O~6@-A(ZzH?Wj@~^}n*r z3M0FhQsGcimPzWq6#i#^YhV|1YRjt8mEM(_N{suQ|}_iwqp0&t1zw{q{( zOpLWsIGfhW5%sq64gR9h@{Xxd=%$q03OTU9S-+7qUP_=p+wm9s$OGO zeW3jMQ{fwJiDp%+#ptn;&L)iSmf*JAdCyV@) zJEx6;>;E7iZ79R5Pqd#M>680Eje#s^zRVsbnm-<$PMj`#enOqUJ=3SnI8@jc zZWbgX(>$jg^BrY|ug`TQLBRpVC8OsIn9>sStV#ZE6El%BCTk={4P|qZu(?Vikx(l$ zPfy~UYDIH~sWWvjDROe}!PhNlS1%Im>8pNS{BbCop93^#RD;oE>^%jpF5eP154j9v z0B4%v<7F2CHAy5yia5xNgxBA+zwY{~K3^%DJiR9-M1r#kW1>Mg(rt5#L93$Y^dIBd zhA^lP;^0afJ2PI~F-{{`T3pok_Omigo1+d28o#>n-FQw{@?M7;?dBTkY{yDx4Sq*Z z9klP{R^~*XPQdH>@`QB43cfr$J9~S}p12hlsN#Xe3*v_qzI9Ne<>-K%UmKI}_X#@J z;eq^Br1h`++PMW>xHNC6u=E#uJ%#qbbx{{sZ#793WDxYyc}d?NVFB9q2ux zF`0^Q!Ia!PwpHz635g_u(=mB=&jPhbFYq`s$?6(9 za)v1S%!E1s>8$*0beR@`Dcv5Dg-B-puE*7n37$!&mSuw>gKq7v<;3Q1q=SW1fXVsM zr`WV;LSt6krns@PxH@`(Fsr-nnBI%FubTvvpNF~-W}R1y?GyKQ=jqc(;qY3%K&jnC zu&+<7$F5U5+HC)eFB_mI2)&*w?BU5e1xY` zZp}Kvc&-y9w?xJx^z~oE%qHYVmH^R*7g8Q(_#vdY919RtA@UimeS=MB zfMCy%%IBlQ?tcRE7QfcV!d7l=plpCP+Ia}PS)?d)Kd+GZkTH!qB2E@#mSTt(LX?tw zqkzK#S-n*2Cc+g&eLafJjA}M5k%Xo58A{+Am|mQ^ueCAP-6mjf^SsC8r2$aa16e+= zk9Q}{d~|u7IKqMtSh)39w@1qN2^}Dz?V-rw-}`7Yxx#5_YHO1F@<+`+U}p?C$eoUDwDe&G)1W4FU@d_>IWnos^?7x)rCmiW2J zsHEw|yJ2ulPPGl1$Hjl1RBp7nKWbTOKF z2|0aQix2jW%i{5V_(I$F+Ts7&pF3w`(g0iM>@I%Ev*B$fc`s^H}sD&KFco2{|XV(DAyec92 
zK_X7E0Hh{Z7QxOz)rEsZRi$EoH?r|O(F)Itb$$oQ$I-A0u8MGqaMxI`Z+%f-)?Ym* zA+~) zkC4ISwdgM2^znR+@D49Xt@b-dCAQe1MyvJOKY3Nx7DjfBI1-Y?kRb(qJ*Jl7wYy!(hRJ$ zoFcl96kU|b@-n1O=Q2sBAAy+|W9WO@@anoQDr42GxPA7$KmW}eML2}R-~KC)o+cF2 zM6+P*_I7mqockj4X8q0k0;SdLBI@iY)1@Yl+QYW=u=tnoHz}-eu@9o5WEf+NCMMlb z5a6VjVx-I*9Dv^RBCGzZo*qdsdd0zZmT=OPUUnFo`t1>tS(#UDY#7M4_ZhUi8{XZW zlau6lbF!kvtH1nf*ALy@>VV13-Nf76+^Ni?1>HtX7aGUf8t#{OPl&LZ)f;fO1(=z; zelDq9yxV;=7C;;ff~^e=qBQ!ln_Nje>HGHjlI8RC2nf`CtgIZ&3RfNW_`kmT4RH_o zIWBy0EC5HVSY1?4YCf)OZ1iu(b5WAo-(TCSad49`%EMq8bE1unCU?U^{;UFX%7h?B z*hPm9Du?I3!tj^>aA+>_bcZPTMWuSaa?^TMHPWI#j9}q0V{u(`MIDIQR#P7)m|<#4 zOb7?Le%O=lbivvCIiQ1Y7o~$j=Nv}Aa#Hd>s#u)SC=JPlz-ct#!LTBGVG~xtudJ${ znRi|TV#`863BkxziHL~UK}hV*&E8Oz3=YZ?Bk++TD{obEperqvW@B3eGpx=0!AQkl z#n1Y_Mp#NOlf8NAxh1&wG!q?i+v75hggv=ykw9n^d1}flc{CI1;5Wiid1?9isa2=_ z&NARw;TMF4ZL9(xpVxbZ0;maF>)-wGs_!tE$>llE+d6*SUkzOlh@k9FWO2G2bc`b9 z3ch?C4+t06($&~gcQfAI11g$ueVtyR{^}Nk5%mHtA}RqtJ#bJ*%{m2KZf%CnaM0=Z8y|o{S@UZ*S68fq|&f z)W?wS=8k_>ew{QYLE0DRo>dB8nOt6XuWw4`P2XKqFG-hrSMW8JT=b#_VL<9~LRHvE zRAtzU5JL*YJ5wo%T&QkhqEbR^DX-mDQsI)UZ^rAle-K4w*3w-&{BI#QckKP{)uB)& zRDxX&sL$s)%?xkjT&IWyb%|$&b(7@`nV! 
zWee}9D7gwV-W_{ zP{Ya!E${0~S{lBE&$iGOZr$yjWf1z|FYxYJ&qKsDKa=& z6?5y^OBRZ9G}c~_lE_TGX=n6tOH_3x0`ku}GbViSBg{@y=sjdBz*-V_?vqNHJ_K>D z?ZufJl2m=uQ2q0vbUb9{WHu$dN+<;RNtiFjK0a4haosgFLu|@pD7rC@9YM6aR6h*8 zBi)gd+tR%i+*0*+q=pNZE;*VMlMkV?*!3Z%v%dui#u zWc%mPB0o}u_V#uJgm4Uuoi>3(-I0--Y|TmZ<$KKQ3O&NkqSEpY4*RCY)cD}T*0v|c zbXF3`6sA{aXEk#wI=qclU0p7zlMX!E8wxW^Qx0#h{PuYq?9vrHa9a#p`|ZU;z+}ko zlbIl7iCbJz>Df{9?bt^Y0L=#|`*$vrlo7C_eGjnW?p_s@wtyzBA2l=N*Fj=O4Ajlc zcZXgiWhE779c5)pd=m934F!9+||m;(7?WL&fV*=aH-=) zbUM)W{hX>`07|KT|KRYAVl76@QZz2yg=Bf*N_YguPZ}0 z7aNMylMc2R+ceZp;Y%Hy<;pv4XO|;D=Qf(26PH{(Am*fmfk2J3TX~YR`bhcf%-+-1 zEZZ(rMbd9n*^g`APjOUu1Y@8kkfrt@9y0lJ$r+(J1Z$A(RpDS)p4hq@g`9H65X?)+ zpqQWfoqYfo9PxKHjlD9Ze@AYR%Wj&GIT|eF9Y)1aA4uxXp%re!fJ(|R%;v?z){gP zd)1M0zKBJkF-(vTXC6stX_Hf(sICQ8)BcB4M~d;c#!No<>xIr2XBX49veoxBQC5?D zU;U@CmnrIy-?HDiTC|%rvs&5M7_Hmq(4EFFQ%$-iqo5!n0%PPTWj11y=MAWv<%)c5 z{$-EkaGP>2du@;?NJ!E0qr^5R9)wCNGlVjuw;R;oiDAuD@(~U#=27ak4xo(y)X*Cu z_d_nALi*k4`@7M$^94I&$W37)G|bDw;_VR-`Ggmd1Sg}Q!wjnGsjWHZoVjjzy{x%* zUYwtcqb%y|w_qif6ih6Rvr1{5|>;p&t8V~%}C=-&bf(8Cl_ zgi>O7#Sf)8^e|3nB%l;^`>21bUHzN)4hSXdHx|NK;cLQ|7FLaSG;bZp(?v@*R#%;B zFEr&RRBUM7HQkl-a*OLr$PgnsTIml3kY6X2)RZJj7%3SGg7?80 z*joE18;fm!LpW7OLtzc}>#q#!UtPm9U2N+Trh+TW>I zH+(ECtu3k^;nL+FO8H4A@afe9Y*tR)wG1Z|y$DN79npWHHC{2C83xrr(IRP?90U{5 zO2fXh3a!5vmt-vVJYn?mc}V>K>yEq)wl;=c_Bj#TOeC~*A%6PwX(oUxv3&xlYd4kc zyEI9IB!ynQYcw*U|g8nSK$|#b*Foc^}9M_leVQ(KF zMn$Q65MP$wC3%(>Bi+Vvhevp?2IFE5xjq^^VN4*LfcMiZgK@YtH}6ZOu5}NYzaJ^* zem;a7!g-K0Y6UQUhn(g6c-MF^0kih9CtfzX0D{4!6`JEo7pOj!?n?dfF|akKqar{5 zeU{#}Ra1^nuLaTox!^mMzS#o4(xy+0SraNO*A;LQM<`@WJ;p5GYIY$t&khgOwY7CM zHPg~H7gxPlS-5Amv77EybKd~`1`cvgZ^~Fys)M6*X!k!n_OzuYRwk94M)F|5rV63BVjO?`ym-`PSOHkn-;8iaWV)^v`Da;E=6= z5!E;CkEA!hptS;1ku6B(U)Wgm6eM1%cA_mF9ORLe>DW9MGeyHlEzh(@C6w|m-kBf4 z5Ul!ihjqB{j&kuk-id^kdIo*JJwzGr5kgoW0Y(X(^doxGi@L5>pGKHSHwJ3i!s)TsE*BK^)@EkLuFI9*0z>YE}Iu_!K!#Zc#z$DVrjX9F|&iA+|WJPo55@3k?4XWR1xys z2qNB^xGGUcZ(>YFesqG#+SdB=+|I3-!Z%PEjg1|o$B*N~kdk*vQ8mKg)s+84M;D%v 
zL8RyV;_NpOm$O7^=1ug5wMPQZUr%pgQP%F${(FRpWI19=rb{9%tgQCY^%aN_8?vx3PZa&=#a<=i*R1I=V{P|D z>qYbBLI&NCZt*nl*=m{}`_F}cpyMQP1JG%pB3Sc2**cJ?ku5EFSd-G zSir;KUbw4Cp;2K-KxrfsCx^t}CYr~02PjN+8RTz}i%3zt>sbs|GSUHU`oQ;>B@ z02bg>RFsKH3+Y`xcO`~j-JPwxO3KP;v$*gk&(6>&6!dG!xe$X0h=|HdJeu1@-|=zO zzm&lmR1Y}BAh~K;60EDL1RGu4-PNC2mjf}Bc@y2VJYeP4>iMmg{3uArg@x!7J&F=bDYYM~e3;aBd!>){W07|_qPbXD^}yV}T7{jK7aq?k zBiA?bP@(U6hEja|V~|6JvJ6^2Mzi)YlpaR(YlZOr2YI>@t;y6@jvVSbd(1j5kc{e(axr367waf~j60Pd48Qz#?F783T)GQn@+~o8@ByCZd3PwzzdPs3p10Ip zpEtVOLqo;c+p_fjV5*hAf1&1KV|{&oR{g>Ur-&lEwJJ46jAr1Iszzi-eZpBV*45Vu zHNxJZMayXtI}}!U&;dlA(CN&^#C|}gA#OK(0Q4415Zk2#M3?Zlh47|YjjOW1ZPA~) z+RlM6B<&^i75o187}J#>a^$aF_%R28dcthQOJbmtx6l10F71f05)wLemEV_;uu8kc zRc~zeJp%(-{evoblFz{20eA78rz)Gq$MU44_L4HUzUSfTxo!TFVxe3ZH5^u$NuE~`V@Qiw`*sGWo5<@@j)7S0^I`@lf% z&h=K;i9QnrH*@R}iNrx|+h`t%DzA=)P=R=;ExuNMSXnhSE#Eg;QQ8E}87HSYNL@V3 z09*9a{e*r9)PM-#NU4w8u!?W@n+v;Q?oinI_;`3*%*F-=ORs}hTfe_&+KsLTAkJ9K z&(5a_q;$RB-sIwle!yX@TZ7%N)*}P^%!xQ9u0J_Z^DqIp6Rkdn7#(Z#V(!M6x@?M; ziD*_x6KD?`UOr9@N~EP`&SD_sl&u3e`}+Et!@9e$U`dWBiiP4iP7WUQ z>SmJJ{w!Rwfx*vJZ=>`f(H2r7f$i!^QuuJ)Eg+uEcURokDS;^Q+_8yHi8UE&rcf~U zJ^W!B|G=%kLE@qd+|e?|A@;P>6dQEWH8LR(ssff|>0F-x41K@8y>7e(FF|2o7ar=i z;jZIm41#S>bodLt!Q)nzAwcCOE@M^@#_e4m==H`G`$u-BZf>Cqp>2MvI+iP2qNmeG zqdFV8GCsc5ZDu{t`s{_OB^Ud%sXt=cB*UZ|YokpKZ$LU79OU`%2^%gy8;M;;ptGgw zkh1%(yHHBF2%zt++5fn;EwZDO$p9(v~FOjbtvsUwEf zFx4QHrBW!Pp;7zEmGsj{%qKI6n|x`VM%#_^latyr%J+DtC-BaF#-g_0EwPS~wATql zBMMX;5T)SWQJ8Mx&-Ok)^F1=neB0B+*sLef@lu9s?nBiJOhg8&wzXC2OT5DJTkBUX@9MOo~ha?|zr2ijI^*r%(Cb zk+uOxDH`_=%!bA%Io0PpsR#Z2LFH%9{r$w7tXnF~|S<@>Gq#DN@T4ZPWZ0BG<&7=u{im&&5_$%L;FBrQLbV@Ooe zUro@A-*2+1rb|_v&nl+!#}$$%l$=C!yjjwc0iN zY?P|o{RdTLMSr8n{FWD#=#1EWikXaIridtMsj9378t~bV^X3SB)V*I7@6#vo0dpX0sg^${Q`>9%gWlBq{6`(I=7y({#l<3>V4&~xg|x$9cp^bczUr_UBmw3wlv`EP=!WpiU!Qr*-7-5JL{l z{GUHSgYXcjM&G{wz7gEBHN+Bth=C_lt2U!y^tsF4`JpAvb^V4m$b}?U#LL>V84D}( z?12wEV*Y<|Xd)HzX!Iv`9n%x-3;8%lmUH_(28zVb(8}_?RD6Q7qmSR?=C`akoWCi} 
z&n=^DgSFAeg8Zquf>b(x7fPr`uHGnFtlubiouSmX^K7b75V-ARn51CLHLRDdk@P`kE{>7LxN(%illPQwF#P`=Z!+Huvq7z>nU5h~k)jUdJEI3jH# zY8`qdLw5gAu4m+WgW5@%we^9s`~48g6U`WBs2zwgaIl)xIn*2b*R0shXJR}D1!3FY zy02)YaZ->TFLKL}30&aE_Vc87@!`%5x&A^az^Od_Io1y2F z3ZD36>!6 z=M<>&aBU^1)xe4r(J#JBWfD?6skW7?^7e^2Dq8#{6gy8PHV#}NNn2L7DeUZ1IB!OZ zn%dlI)0B5Xfy zYBrv0fXPUs-`eKmyJ3Z|$toM!xXpVqnr$^lU-UMtJ_&rc{j8>WaPa&Z(3(yJX%nQ2 zY;ObhWE^5A*>g*kB=>0PlC{4Fgh3q_hYk83mM8}dkNH&&IJJzFlL2F`LM`UXcaT2pJAA%K^d+vA9lD z4V+Hs_^4%1H|ok7UtSLW-_gEamDZc>ohRi#1R4h=yoHRgd1(h zC$Hw`(RQF*!4r8RvpGse_}1mf)|9qSM(TUaX}yQG*pVSfA$q#&T*t35RYj5_%RGUf z7n8z;%0n6Q3-Tyjs(Z{O1WqVrysXfE^k0Wfx-|TVeVeDFDxdzmr&Q!17->7 zD(&j3X!1#lQ{8-xZJ7((Gt7PfFL3x+UXHo~Oz5I?5;nv_#FF&1#5NP%DhY&K;iX{b z8W8b20Dm?%1#Lf7gSn_5m)uDD>YIyAsSpQqCNd&DOzQtHsLPLo7fJo>y{P!XKchd! zyYB0-ydLM*&NE7<>8oWq(iffqv~7lz5b09sw$M$V5>(QqMkD$ighcQ)_qrwz4{z^X zzGmF>f8`w;QjQ#f?86dAWG#rwFimA!mLq^UEo)){wXAoH%$=7K5m}GPB5QEQP&?$e z9+~rG+nD18x>vCzF>0#X3?v4j3LsH7tXqmli}rITr}my#S@cTA})Pqi1HEj zyj?i+IR@{mU1=0=oqdtbE+Rdn@OwT#Syo&vDY+w*4^I0KAxsN)n<7n_UibHuNNCo;2 z&SWbzJD;xCeV+$>!Rp@tfU*!Ps$`51gOt&;UPna(y65bo(;l#cD13GA2iu{=iD_ZX zy8{N~%XsxV2;YU3m+eX`>ONPr6x8H&JWc11uQYn*{5VElPbj})nWt6>Xk!tx-v+32dRq3_~H61K{onr5M^-Ql-q4iO%NquFbQL27pMoU>NM+- zSURmcV+P!Cf$yASrw*OGjlantk!VJw9YvWkZEa2$7ic~|wzeah!J|j0J3yipY;p5X z`}Fs0J2*O0*}bkgylN|8oPGOWE3c*939O2ws{*wM}do zvUe=x*W||)#hPxW$H^+YdEA?fbtU)(PYilqQ*U<*dRi9Ml+_OJ)u8ATiE$*>QJJM_ zIc3h}X|*(gd;0tdn-2<4*5N~yjlGS{IT!hs`zu^!Sq;WYtEs5=E&&+-hS*uU|4fA<9oBV|S0*(m{OX)^DnS69Fh@6H9(ZFb}vT?EbYHFH8 z-I)E}#+IFZ_Ri^-i;Ih-hy*aco1CdJxTor$1%sxbLWGEdCoqDyZ;lN63O@faQBBo- zm4P<@u(UdZAfiGpu{WcuAOxA9FTw;(8cQ!Ip1Y}eauy_BT1dljRLuwq4MZ{`x4_Lpt1^N{k6&E>3v$zvgYd*tz_}4F-4qt@VPPqS$fXXuvF9% z4?*@PPGkah^pbBU3HJ>){G&mrE022#d?Xykp1wGebopXB^ouJimO1%aIT8aO&FSeY z2x-DvyqMQ2%MJSVP8v>)JEb+mMXaWBoM@1{8yim%_u5p#sV`~6X@ASN)fN#&5o@SQ zU7Ee3LQ{Jwgp1~mQHf_%bIX9&5^ zS$L*CPn3^d)J5gRHkKz%9(N+XzMXNYz${1I{`ApyivS|~=gFg6uJ;`W5H-ccK31b9 z*TPp!8afuf59M8UAbpD9n-Kd%V};_RWK$KJX6x!8XEzcf_pTPM+XN}s`;GF7a@#Un 
z%}y;H!zlOxXY(4qH%Zn6a8U3N-V&D6<4(7Bu*u!h%YJ6vzG;&+RhQI7AYZH;b{OXj zX#}5iM1}F_)nd+$;c-i|eN*Ptl>j&vBf$ zmE=9VzFhdKDGzZFZu4(~{X&jQvCCO z+WhJ1gy}N}$I0Ujc$(5gW+yIz)(VLr#kBg^S{TxM0?lJag;#wLR>F)`X)(dq(U<&E zMI&W~H{w`V7nc|)d)V0NP=Dli5V~VbQDbrq4fu#_oWu}@buq+dLZg)x3-e0sy_qVC z!e$qx2AX#qS$6jpNpsZyNB_QuIYgn<>!y#0D3Y6$FLKNJQG%lGWM;N9)^P$LT@ECy z<^PYWcM7jG+M;b^I~CiuZQHhORBR^|+qP}nwo$Q-I%}W%?S1ckUr$T_oMW`!`VdSg zIePLMh#1+0$pyZFGQ#zDTT6>JQ*|x<0*Caj;k zUCHfDWSTG4e!$IC898C7*smL-_I-~$Ov>RpaH`oUl2*sh2?wJ2u;fbiKrmfESqZiR zKM=2LSe8sUVh^9|o3Y^u+#0uftf+qHy4$xG;wR7CTYlZ3SHRHhW*;s#1oOq5!xSVg zxcIxS{sgeDVYMKxVQNC=2&JXHf7qyvr;N9EJ;g~8yM4hW^?&r4xL*Rp+qnZ&hN`U( zYej_o^M$Y&4D8DU5X|IERa;xw00s&|8}LghP(?n=%yd5V-+Z>qhMJ1)V{`;`BY~+% zGG@TZ`($9zqgi`4h>s4u#Ib+5u7OQQmjJmb(mlCn4CU~V9vNXGks&JPvkq7cBDgv@ zJ4fAvdDPO6Q5z$N4-^@)v~QvFGQFYi#~HfYyE_@_gAQDAKHd>ZTKX_;-*LcwkEIq? z@4jE!EBvmhx~{Z$08LVQneN(#PfOvWKuI=C=b=Ev1phm1Ezi#07B>%$x(Y6*eOJv* zNu=rn^SLCB{2ct(idFzzlqE2X#Kl-R&r@=l35FAP84SM92T)r|;;+hX{xf?;oe{rc z4VaoY$}NVlD+8yIVlK&^V4zRYSOaZ;V*4w|o0fKV4*rb%9e;aSEmb}C_w-EvTE}Qe z87%cv2nx7YLk7)Oq7ta!A$rpKHKU?4Ab2VyDSl%$c_rVq9f7?e4E7c__WE4p^JK67YHwqKd>LFtf4od`#ZV1_JNB zC9aY!9BJ{3miP1w(F439i+%x^p-c=g(h}aNDt@1Qkqb^`C@aEl^HL~-R819BC^h#< zogoG#&lL*qE8{@J3^aAT{oVV5nF31$o|y&c~&LNjsH8Ou{s7Y8!vKv zj#-788`Hm(7W+A4L-F&&TMwV#wdCF~^tQVI_-1xWRkkQF9i+^%@ba(>{R*{ab!_aD zNeyf0ozuZkdJ4pm?G38czi|pfXJ|?0_megEQj%0-NZAEyC}=ihW$S-U7N*6s%L_^p zB4eLZ)e^I&h8*fNXskl64rE>SQ%(Y^q;xBv?h4dKKxJeWP01vC){YKAa3Nh;nT`hI zU4M&A?<4y9_6!(W`j}f)H0K}%5~~tRENtS{&CZ9dLPv%p&qgs9%32|}TO4Buukoh^ zI-Y8*!95fsUY?su91PEH&HFhCySK@LtU&(y^E#Cc|AK&&1n7 zm9hx=fm_TWZswuVS!*EwL|1f28ptGCFef4qosw9jeWq-&hI%)$1a~dWc{=9G&?vuyL5kVHEtOY(9xuK zJW!N*VWpWsZAW_&w#rrQ8#ONQQ8jH6p@fE!HR1Z(^)LwY1~EioaLDMs-{Y})G^o!8 zN*60HT~D8XaH^@&D1v*EEinW$qd0FD72o0Of`wny>G`}QeONgfu-XG;CbRiC9VlHM z9eDGG$H7(o(frJ6l{EbvUag!Y50hzS zb$7D|!pfR*5XuS-{3mRoj!H>DH%F~O#|f<|M-qnIFLp^=H$wf0>Eskb$eMgQ;u~ly zY~pmxty?;&6)oRLfp&^=Li^#sl6t&3>yaj}|H0Cc&izl;_GpLMhcp42Z`);Z&;Wse;r8fGa7 
zFviE{724txkXg3{!lmD3Uj#8ff2Cm&DXe<>)a@q2{NjZ* z4ZXn)N;o(fals07XY(aNe*H}>{Il`PtIG4o40o6?D8arIu1n%q9`A2q2GjG2%1y`k zsS_H|+L7DoTI>)%hy>QWLEEDyC$RnA6rH(~Q7)l4Ht?ByzDw$EsDUcy>Sy|@cMZB7 z*Uu*ZOAeU9>|!AqMQ=}cFd&oD)Aei!8#~&5n*DtMMYFcbuD+IDgdaGNA#WT3PWFtT z7Rvyw@UaU@g>snnhSkLjOmgueEG|Kz?!3IX2%?^T_nmiLghI3lU6GmW}!MsKW|2X`@g4zOAV4M_zLp20MK&X9zt=>w1&|R;*$swm~W$1zK*# z$Pce-J1uD~mj}ls9XWxJY-o%F7S-~Dw;VBipl;KZ|H%*h-xX_K2q?L+OYJ@|F#oxO z`Bu=zXVUz0ZzwzK>Z_|kuS+=Wa@P}J$l@j&BU4%f;wKOCy@V6ZK$B1&FN#Ul81!*) zj6Rj_^K<3`f~3MyeWIySCQYdBIf2<;AKL8w~z}f2=K$Kn` z7$O8|ds$hD7uqWE`tx`1fzY0S1;!ASGV4?Qbs?x*+{~E|UP4y?;AU8*BKE^R3nWcd z#g{MNR>`d)itbKOE|dg;H`5f#PQ&v&pqThX=I%)kU%=W%iDH&lYu@Xu}lg9jy$D_ftw%4O?(d?!-r0!1m3Z(I9N>2$d|1Go~w z#FQL55@fyWTBi%AC!apz%mAZ9Dm&OZTg$R5|%qK*n#(Sjwvb8L?uz~YK z*-)hH%@Ao@dzX+H^ZS0jXXyK3XXEOa_8yUTl^$VQdr0g{?%_99}2R z$OGv*QlN~PRFR_niO>G?1s!feg`bS_-1PP^-YkYpc-(F3>fUPOVO~aVwbjAP3)hQD zAEouAhBuJ_%-ZVqg$AEVUKo;V5+M-D;2DQ_uLWiY_FdfNasfo5uWQtfXk_kzRbE5mbNfdve}7;)#LsEHOd zrevjw4ublUC5kfl5S|eRB(nt;X8ghs$;O*|G#t+7G&IUKu3$?2m9La z`RRB_84CzqAo|AUqT{E#W7?S@3b+*BIXlt;KuN^s`!P*z zUu`TJhnrunsPTsMd2(hz8*FWjIgp)G<AssiKV!GXGlf@jKb#{)PuD;gMD4_hewbQ)4w;)nP`p^_h z*bgejVsP|j4~8Ud4z-z>`&rJJY?w(JS@f5>Vsj^zwA*aoA==C6F7`V*ma&wX9#uN< zEM2rD0u{=l6r=AOC+`H%;n3Sx=TI7>4Ik}B#v5IW1z2dh3a}v3A|Pxv>&TLipMEB`LOTMhsg zO&@#qk$(G%teh~Tbl#^m4rHK=%hmb=I3Eq zmXj!g$5nX5SlL(DQo+&Dkl)n}77R0`�{on)uT7R66uid=1-KYS<>_N?bYPK#=a# zO~=M@vbv>Wy4X{Xfvibnd0AK>S$Ub;IEzb%0c^8s;JK6}%RQa1yoq%5+g6(cC@X`$ z&vb6^3nNF{pe)Z!8RNe}+}*PbZk#osb_xP#;lgaDhK>jR4g6enP$~_Nxa#$}9@dVq zH-mqd{!Tq07b&9aDeIGy%ln&~;tIg1pHge3+5-txPk?DqNGneqVa@CA{607W4S==a zXXFHlk%N5*pdLhZV-7jLnzzVlpwu=(u?g(l^tN6NLMvWiW%>-bRui{o zWm!ACJolehR#sVWQ!cG8ZVXsbaT*m>lI<_Vk>x&XSB4pZ%p#>=`k-0bBwNFfQuqNZ zbvRk{sfH4bW0VJ?n>t@zWw_q=~j zR`KBBI{KNJo?g%1@595kzTZ=`aR^;-?#BCrt30B!t0F2UfIP|dsykP)YTXby?4+SxQeuSnrX&GFhr3Ks&U-kcUype-BuJ{?Hw14`;A_YCP>EzrYnZb)KQt=J4u87H^iy*$44?Hwr&}_B3ry~WlsX0xtsYJ zuuFW=P-pLty?t@FGcniR2U<&(hU$n)}T!W$-js5^;KnMJ^ 
znaWY18izyUz)ZTEc>+(~PDTl6t?yLdY8q*MEGhxiVs9yBD&At@O&_g0+Hfq$1gUS5`oWI zQ5t1%sLRYVTU7}PtNHy1Oc=)%%VL2Dws;5k&a*ujpzNVZ%utB8g4CZ<}AEGG#D5dSPoUdo1y=$ zGmy1+brn?QYeoKqDn%&<@*O3eO*j&V?~pS{X1(YmFD_z>iW8E*4#DU0*`k(hRvLBV zKz^UCVdG$9>@#$nG>Umol2W4lq3W9*3W!r+k_IwPPTtL7zgWF?<(1Yvl(^oR`Bdz&I0zng2(~0rI<3>rK z1(P%hET;b4m5xrX%0@lO)Nu}9WSU5C0GYQ+%sD;I(Be-IL9uvPXnT5;9;csi4qI1z z3<^PU_8bFhb;Nc=@mO7`gM6nnS$|6s{ln8)oxm9T9oAUrlEPznVLV^o(6;=mtwW7a z?Baku2FIrNewArKm{>3B=GOFP>@p!`y=;JVb0L^6%_qP<-P({F{wAbX&=wX3#-Ce2 zF_86wSgP1H{MQYG5CRXo>p+gRtnFcy(l{8h{lx6BXjMkQ?= za&Z;()jYBz?1=a*v()rk^tXcpL(hz&82w-FMd3Tno_Y_v2S&Y!m|}~$+;2cihap_z z)j@9Q+VAH#9#~oLLyU8YG7`r-2SxwWQklY+5RnH+rym%Kwy6I^9f6z|8x~X?iuvsF zhc(z-$;v?tR~xBacwaIR9{sh~@z^l#!R8R^2il|D#P}U8WM1Z$KGW4M#gyX15&S8X zGr1qsD!S!r{xF*|{}1cW{u!b3gt;L&gbYqt(qi#X;a$$i5bAyD=|dn|kk zG;}ITX6Y}<;JHMrC4&X3$hzU5ynkO57Bc|Wd`jj$B?VrCDoBA$VO^)Ev-8|qNJl_K z#3V=W8h~Bx?@rA|>0334;))MUv`*q8anYeGVe1xIf`7+ZTD5}_&Ju{??t~08{04@V%t)T4=bz7 z`q#arI`^Hq0t#%F&mXKa zJC?7Idj^Bvhy5Qs>i88U-AK((PLF3m(bvojy*(3U9c6FZ5In8Xf|CA8r71S8oMr?J z&Q3~@X-i{=8i@IK$V!aZl7&%|j{zs;>hk(81o|+Y)~O8eQy0J*Ni7VFtOa~3Z>!CI zx6eP#UR(+6qU2f4EO0XOBJg;iZWn5@q<-PGdMzNp69+F;3~v!+NpG(Sz!>*Z2EFie z)$tpE+!5Scxn#FSHG8Zvs-nL%?w_5al@P_46#i+rah1y`nZX2;br)&!08oo3Nl;+_mmOiNL0fx!y}xRu>`i77 zn$(F#N5VSu+l3+&?Asrq3_D|4P%&Akmk|Nw^3Me+D1q8=u?d#JMp;Xv% zbo;ffJ}_5-F4{5blyI2gb{dh}uwa;{IuLJ!P$JQq7%3gyV}2*lRgwT9mAzzUEuowu zqd~%jIXJ3xp}++D8-jYKa$@za(m(GUA4D#t$|-4+fP&%X(ura82T7VW#>^bu~`V;|h=TSi;z{88fKaAZ{~=~jh_r5qU{cKE}RQ`IVwI6V+yYZYu5dNJo_?zYF=}7!3 z!&e&IwRHfx&`<{&``sHRAz?#K!M#HNpdwN9ZP2y4wBtLn5wxKpGk>f}Z*1HZPXMuBmOp=P~4Sr|+aXl2O z(QVBC2s2#J9i}g)UhWn49g=Y_8BVmCTbHr@wb(!`J&RRkqMq~L$jzH7E@KTPx8Lab zXv||p#P2!hB(*uQzl1wneBEE~$zX>3aP=$YM7bZg5 zi__}(eO@<|csZl_L@ti6Y~xlXt55NT6ma<^is!6lHu#iix-tn8VDOG3Lz)sB>P3P` z_egUFtUDw!o#QV)7f2vZCTZaLuyyY2m_ku7W|a zqz%}HM|j|GuvMm-o@w$iCH9P`dj`qxZpC*EDonqxI=CDhf%Wa-i^@;_Ar^;2BV8Tt5L0e`(aU(0)YbE;aclY=1@!S_$xn zl|h_D-@5ysUHJeBax-SGuL}3f1-X?ewgisk+mn5ya4p;*Jr>E7pjm 
z>1LAIh)Ba{W2X6SRTj1pS;A>xu?lHFa8?zlE_zo-*eNb(^a=Q?6QArJQM##MDCj^y zc1ry&zl1T#D<}OIn2Z|EXuGmjhgt{3z3$w}nuhbG%VRA}hS!5@eN9bEQ4UUsqXOPK z@;csQf{&NEm#r)`S+-eP^}@Y@a@sjDskB)MF^=h^*Em-lE5;{UIv}A7OSb8r>?HjXmnmdDn5(G zbAe>Dd4IjV_wUDFA}x)d2N#}ur}FLqUEkOj1u7$v2mRRi)I&MlxX^Ke`Kak=_>*!h zW(zEZuZtT|{^sizRrfDlnHZn_@9k}A2-r*eAREO3H*fHq=z%j?F^7Fs<8mu_B}66o z7h*J(G!^_LaJ9rA{H~wP#;Vp+LGV8aaQCh#3&DCWG{~}dg0F^x^}T-kY-myXIx{@# zXT;?|tOO+EO+`Z|yLLYH%!;|$nLZD1&;o{PYV51g45v}=z@TgD8uNn&ccHKtD@`M@ z&I6X|3_{ZC?KA69g!K8O1$>*~9K^k1P_Aeg}a|myBe7| z@pu-r)%DB2S!94Nhj`7nZ*7)wgFi5Fr|q;^dIt*POvAo1cwDF>PSEqglqXb$xG7+) zZ7y`p#PfKW^Hgwvad2=h-)?1A+tE8_;%3txTDmC|$hgWhwV}BO)D~Q7rmuUUv^TU* z@;!PH$L6p3h|P-4daQIlcsu>(<*iq&MFW}$u;W~o|8+va7mx=dg~2GOjPdSd7)Z~} z+}s_QYpZ>}hvuAN5_zOJP(l7J|L*7Get9wW#|OeVQQSJjrW5zF&BjtP7MHP0V%yQn z{C?c}Vz<<|QWlOmL6pC_@aM-5JEYDL*WnPzS|1W=DfJ3%rL(|}0M}q$DyEgWjD+x( zP`XyP9>>zqkUV&tn#76@=Gye_`LtW;(Fc&Q1Adw)cBAe1HI#D{2Y;zIPlYqiNp2{<6eZu~VeVF|{8zHWkumrgoZ58q#O8-eGWWuoVjn ztcl5yo?ecVX0t3p$z&h9 zI$7?_tNHI1_=Ka`l)Rb;T#bVklg(E||DS_73nWY#8fq*Sg*?Y?_5p>IWf)j4JNt5T zTTABu$nMH$s7(o1L0Z0JI)i2|SD=|H1Uu;nQ*Xa_wh> z#od%CZmC%{G7VZ0Nuoi`J@O}%90w<^8!01I{|!lvw$mVUTcliE*(oy@%GG?8SKUb6 z?yadOcpQQT=sjAJKuRoPWL&3UXwn!-Nk?U3C!dDiRPvwfow*;VXQQSCnR3rb;WMbv z=wN5|tVN2ogTwiG?_mA+>5Lz>lkV{$mz2rix#ew>g@Rs4`W%B;Yg#oF;q%Wo!_8h7 z31n5sZQsAKse&^b8=A;5aQy(AEFJ5vI{jXkvspkb1Z@@7zlVY8jEr-@NSW6gqSFm^ zVlzB29JR?~%0jSqCFz+busOBa9GLm*X-NSWqPFgql^tE@5<`6^)5*?PtgHzl#u*d5 zf>C~fg^MDoevR@@*{F!nNNGB_*WS)vuvKpy;5s3k%OS6R+BWaYrD^(dV)kyjx&}5j zBcm6|xu9<%BM~uI-H!;ir1U|Qd_N-aKRd-7WaQ?4d1Uxw_v=8`DErlCG<%*dDS*A> zAnP7D*HCG63i9@k+w&Pl`Eb&B>g(}i7nwkxk3hCTgUnbfQ73_=&gC7m!b06L)M*)` z;!#i_n%SLhJroU>e)j6B^U8`*Rs-qCqI2wsp}Q4Vx}7w>#55<^_!t~%!EriV4wH4( z+Sw`kn=b=RRV6e6{EI2B4w2u7P$gzm;Rvf^@k^_^h7qt^2W-S}^sNRLWim&k@YQIx z1t#G2HjO2A^gQxzXJWcc9=e&hhwfT!?36{`oNrEJSH(}KM>-X9935Tm%9z*GiTWp4K8s_ zseKJ`Q0hjGBb2>qP&j!zQ{jeXCTCgRmLV|PdIl|4y9#X;5{hm+Sr2b-Z&V7L1)GpV z#=CxDvst6dC)f2MjZ`jWs)%Br^5mL%k$Tl6;8%!W5n-d^jY7YBT`ORL>y@A?>~u25 
z^bD*g2futGE77|PkurOx>CdZm$E!l1nO?FwMadP(z5~+V9+@%LI?S}#SXr%*Gek8k zk$lWgn-$Or~$ijwN!U1I~qQEbv+4|37o=c__h7!Recg@sL9=SIf0JJAnmkwhbh zXM>?}y}dY@VQbjNk)F_IWaFZW(WbC9Vq*`DlzC0BY!zPd5rvcZyP6h%5Vd0eAM*|% z!&1x@Kh2p(CT^=IICOq~UMjA9!8o}|y}(WR)*Pp%a|QWVw@cP+j+3QnV{;R(z6epJ zDM@S+M2u~tjJ}{=$vcRq#yo6lAbIU!l6#PCik7UHuec%Gs@vzBI5es?soz>=UjO3z zMLtrQ5@=;5h4;Osh1(9R7Q{r7sCVR-TLga?oY?|Qp>aMkK-dK}m%>Fd_0}7YKiUyf z0twKaA1_mLDPn-bbVNxE1x1*ASVvxfodFgx

G9v4IJ{q?JUCa>xADN!E2uNJ^SZ z8h~MgOQMC(dn8k80x)GXel?_N>g+Y66fc2@vpwCzdgY}X4Ln?^nW?om%opfGT4x** zD`ZK0f8|d*!NEWaxsmbUty@=hok<(q!qfD67Rn3b_Pb0eAmf(8x$n2MDC%*z&CM}% z+-f0_gM~UM)Ix-RJsztd@L5o$cHRu`yf>opu{uQ$>UYn4{_5FUn6xi~- zP_weA0}iQnmj7BQ7=zWQBUs``9?2&fR2~qi=jX1tx$~O*V+Z@vO@CpJvf0ZUIFibX z&j0R6qQrC+O+g*x)&OlxkupCdWR~DK=B^jCb5-dDyPF%mxuhx7z0^wrNi5M{O^Z1Z z&&^I34>lgzLVsiqnU6TR=N+J={x-uz}FR2L@(q9_XV{wsW=U9~?()i9Bzd<$W}a^V4{ET}c7|1XXwPO$UJ z6cPuA06}dDbh>+_QCWHW)g`pkKH~s)0kJ@lExb|jYFWJl>s!plJ$-Sl0~iJ)9Htyx zx}oEJ#U*o%o|e9&i|`S@gH{8kiI9p*)&Mxrg+9YjNaR?2csO?-Co~VJ3A|cVK4na- zTy>4ZfM%l88I-M<@Ip-FrvxR}+KT$aCG3n79$03qYml%n@$}sc91Fr;g0@K#&2xSu z+ktY|zBVOF1(Z~3YPwXMynN;DlAsP&&!py`kjxy7>Aa1^%=B32X|(GId?Bu&%S#<< zn#rpz*6ne| zT9ja5Wss&v&1}}&0|X#`mnOSeWTrqul+Gp`pezWh9sg0d#wV}*@Oq?Wt7V)Jnk#+Y z=r|d`7#dH?vMfiDOke-|IHw@rRGtir$TV7Q?cU_P03#5OZ;n*Y0&D#S!NiFEx zmKQhgjaKG0;}zU|=;aM@K`;y^Ap+{-ZXr!ww})J!Aa|f=dBPOIT~li#<4Z%I@Hdz zts0X(|G&c#4rG+Mi;CSc-5y<=Lds;}(d5Y#K%urz{WQdS-aM;BM_;pUR$FT{;RFVC z7{MtCr3_uwD46e{(oacuCBDkx`AB0_J;Yp8)l^bf_X?=RGIs!W`A;QVbU|8h@DMC) zIPmav2!nnn)Ug^;oYF|fCQ!r~Mu7APG~JV61b~eOm>y(cvFZuIBYQ7YN>~Y8X|16e z;`mKSlvn!gG=UuoJ<-|WQ)~gk4=X|4LWiX#BnQ4hpwJ7k5og}yMu|EZ)p<7oqPdL` zQ4WMz4j32jEr9YsdM)Mk<$1T)jb}i1py)IY~Gc?d560SmknsG!LPtkQEpBw-3`Ls8Y#b~Xb@*6-BBF4D@!aZhqHbYLP1nh$_N9shuaQcs zgm}YHN69G=B=yuYoh@y8S3S@=z$9tN(V zTkYM-)6>()Z$a&(rbhlZI)lAwAONYnJ_P&7DG?wOtA9~;Zac1(OXugQ=p2(4fc%) z*_>(E%9sXdTHy{oCe*k5C(?sFf%^m7(tIFgn_N!0T1jA5L_uRb_~~Clxm9IIBz*fC$%QG2P(#B=s7qy+bO zjD~tOJLH+1V}Hy%K`5D7eF6*OK)~0nHcH@bAh-`QL%!P%Yr#oOG8Ib z`?%gi!|;`y9Df_d7u@2VHoh{|$M!|SLe_stv$8nQb5~q~LWo0_m3IZa5lAFe6}^>> z6Y=dTMsjLMxuMCZ*6!x^X`NgS$YSruOb|d3mRUppX=!Qd(z;XSFR^3`&C3Mg9dfxN zxWc2z|KtO$n=|NhTi;JrL*K(!jTc` zh*HJ5dC|6#Y--DDIo`-x^xR{6;L%bbw|ap228Zo zQk_k{>flAr#s|1S3(VOw2B0+jt?ljpWh;I|Yl(D5mT^}|PJ$&sW4F)Sey*8@uH!*L z3yzJJP_&X+jd%@TO_48*_Kfn5UDR~h(nT?10oR}xnWoGz;E$p?I2Ygy>BIN}_Y*3N z2+8MGy_t_q!hvNXDvj!vd-TbJ#P+kvD(#MdnCw6&;zEDymymIwz^t;#HsO!ah)^P; 
zD2V(|fM--4uIl9O;iUd(_BeAviIJTO%JrZ>NL0IfRaWT@cB__<`)IwfbMxY(a~2RG z2E;FneHZuk>QJ>*_G6G(gdz!Pszm}IoUOQvcNSs$2a}kk>F!}&=sgkqQW^CBc6;3Z z1Vg^Yf~L|99v_F0OWwdyCn!2;qu336rG=_CsyD_%9(&TZXIAECg9KvaK{~HG)Dn2) z650z`8vD%aq*NVY#;qT9Z0V!R#dd09f~sk;(Y>!A-wh^1vAiZH_`bK}j<@?`_9ew( zi8HZC0=qkDCbB$kN!y>C+{x621{f>VEDX(%{YP_yK-GlI$HYGpv4~_sPNQX$zrvdm_#8Cg5C<1b9x=7{l*Q!og0Y=f8$s(aubnneo_1fNUeW(I!W zO-lE^yHIVc@H%qAA#ba8>y0Tre?NfsOxM`T_GW_i#r;uOV)+_T(S~BL5^8tcEw}C; z`TaO{cQ<}F{`Yv8m7txqOcFB_F*8tqHjmO<6dP2A!t+ntL@5u!q!|DNZ#Ju0ktBp5&h?bwIRq@z`*86y zG7rlvm8rg88wVQwBFbo(b6?uCh49%SkU)<qsKyK(o$bulKqg)(bhStZx!N8L%jF^5zOfGM!=^WJ zunIy4NS!lTA3;S^6+w7=mLPnnX~kRK@o+w^UI*~0P+)-xf>M>IqpR_-P6;^$k%HuU zMv84NdqHOM+PbmxGTXbc1A_lO+@K*19i=*5m-JjymOby{OQ-$~MI?jP0~YiG=uJ(J zxR`)3FKbC&u*5loZr{_6$Da0?at!j22=-5OWC;RiX~L_3X7uw5H^ruf5CD{=8@8e4 zBpHJZI$Y;)tjSNw5A?X*1{fe=TTo*;g+3&cUmDZB;oojkk<=`;n^T{uC*dF#vjHuo zI|xf#>Uq{_{CmZW4F+2fnPpcY)Y?3v=X|d z$7th5f76?zJSNoO9*Z1zzhZ3(U+M(kIle!fNIIhzw3ApNkV{lMxn|_%+8-f49Gfdu z(0*b?63Iwi1j(r#bV`X0CvD5{5u>D*#^ezDk@W*oqpwf8J$^M?)PH;`m6*Lk@Qcwk zE4iwd=Mmv4P*xS4WFwIG*PP}#BiyVyEPjAZOx{%4zV8^v$|@(MlB?{V!15>(-l^u6 za%mYCcjKrA_Dkxp=VKqf5Iv8y0|@-TkR8m;b3Qf@*+wUwn5H%UY|B1!*QPcsng_*V^r6v(**Gr9MLqhS@7%gw)-K)(1*6CY>Y4%eJ(v z508W(wn9fb)oCaVAfP8&9m+nbV}!elxU-4ny>HlY7Xc{T8CEy>Uq{|YU4pSLZzH)) z(9^1AzdkIpQAAm~#y4yaG*puFlF1Mfi{&0UTqFyHSoh`x{(YtOeT@X>>-tqSUO@G* zp8_E+8%#~P!($buk2t?7*jg1p@_}jKg6|lh6;5hO-U(Ulz4Y)7u`ShAii?X|6-*8i zUnzrgQkl&b%cK1Ky!NrsmY458)4GPY(UmarPas2e?+Xbv#9B6dvY-d5#*^75M=H#m zT`BqEJ?;&uE;bL>&c5vK+O#>`i@L`G3>(mxYW?g1@MR-GIqBDSkO@J5YM_vgxSBOE z%)@8Z@yH(FT8A_`*K;nJ0aAFz^m(?5+`qrZWL6>{01ozhX)xo{rB6!y7h2wRl=u#U zl@xZj_XjbUS|acV8T*z5RjNWDbg|@T`iH9*&#*2pNS)M$*(c z8X1qCKWTt1;7tCl{nr0OT$uePzEJoJ{F7R-uBCqP%)Z z-KW&5lE_L4pS0@n`C|9{eUfHSS(Vq`3yA!$0qbG2m@DgmDz^SO@8$6LRoAa}q{hmI z{b=>S>~;T;{egSEU8`D|>2qT1p6e5SQ*uL5f_pjbwf3^T z@9=+DINg~!#!iMryt&tv@RLMNNQ&?7vA<+IED0^`uymTKvDr%MpIBY>@6R03)t$uH zKxUz~U|(Cd{EZ?zLtAu*J0zAGiKc7xVnbGB>AhwbvWA9j^NZ!xCI%~<@Q@-XA+UV-h8#Q2iWanJC+Iqd#oiQk!}-m^ 
zwkTNp;;#AjpN*B|&G+Lsu8X8;+V(B7FT4mHQCpr#>~6TZ2ez!-96MVqSAtRhVjvSS z0!?Qd(NNaS4Zb|)cFu^+KXqfKvob>iopZ3ZB2+edNr1G8E%LDD*zb<8IAmhXkx&sY zQHbr^>{ZBg5yB>yE8nCYu0?!*9fbk@8bh~v#lItIkuA8eGFjNI=)yZb~ftT5*QAyCE|+rsiK zS=$f_f>PMAsC=V}@Db%tC9YZiPoPca+gw!PLjMSe&g2|8`>D!cYP8a-H6)Mrks(^P-8h?R|4(PDb zC;gIoghwj>W~meI5Me98iFRWBjntmh%ajNQG^DMecQhuD{!&1Lh$rHN*-{!M2zQED zwUW^P=}ncm2QFJJNTNi)0t9EVwN;$I016aPVNV6@0%O*$dg0+OL1Y#UNG0xWe`yw2 z17p18Ki|vsN>lW-ex#)if&YE5s-cfs!0Lps6FiuGfneYO3D%Nl7&r@uC@`NwBX zZ@+jcBL1dbr>N$<6ciMc*LuAqjhi0-SGc5(aFmJ-S9x!Z`8C$|?@(BM^&P6~CmnzP zy`8Q$t-5$Ju=)T(mP~Q#;|dIHUd^8NAbRT_W(=X41Zqmj6gQ@>4cv;zx#yHsHcC}> zH;PMwv!j~@w+N<%6n*@Y7qhJjtVQqhzveZ|QG0vQ zD#(tGrhZ;o1v4WB5ua$xFt3bbMKpNJMFleEa>ArnKGU0yg^j3TN@M60C^Z`N(IcZ2 z*$@ro+~IOs0ZV(68pEl5S2th+fAF$*AI%Bu`Q_D$HEiEN6VfQ1~#h zP9K1#w3J(;EA;sC?ek4`pL#w#y(dx3(JG0wro_^#2A3;ao_}G>&szSy@B(OKIapXi zi3o>f(inCxZoS{`h%Ujr9F%oK)#E{39t`|_hlep`x;0t`7i-crCG`GU-}X55)YUQ6 zPFXeqggYPiWB!1w4Q{#Iw||TE5^?eM#g(OPerI?Wx1;8fxS0Zf=*Fl;^L>a=uGnLY zGg^V34ha&X52zWl&w)lr$DhwJTAi=TU+~g%e<127>4FSiFkGiOBv&YvnE8q(wI^Vp z0vnM3Bq2T&B8|?GBvD4J52GVzB#$blj3AcSzTg>6quQ$n{Z zy~EzPu=NPZL^}JBsw9NJ zVa&~U$}mJv-D!S%`WTA+$O{+23OMF(91h6yqW#IyywhucLdJhHDH(5^P~@3vQW<-V z3TB|fG5Mxam*tN){YZW--c?+D(2HD@6+W9JcUg$;GzC5~NejrCVOCx(b2l(^osUn) z?AnAd%VVkVW=YIQL1a2Mse_zUSJNh$JAnZa`j+orCAK5azn_$zMdwF_9z_obe=&0Q zx|4P1LBcR~ZvnB+V+x7J|5?;}c{gp0f@|8D$?~1Y0!|oJS475z6tvC1XDd~1Hh}kG z&7y9d%Q9lLv$wL)%9`VvWig2nflHk39X|!UM^7at*rONUA!z@Hr*jIBELxLwxy!a~ z+qP|Y**3b^W!tvZW!tuGn>Wvynfsn^I~QX8Uqo~>(VsYPVgQ9@CU(?bb_Z^Iz%p)Q zDToXqA=I*n^#0xA?og6P7?{u-(9FXp3;2G1JewUIC3PNtZZQL-y4e=ewPYek6*c{G zX6GAdeSgH-nAo#;(3F(jW0}SY3*jV?u@1Dfv@QSqS?fS};fNp^zjss&6L{1;&+BA3q#-2h5NY4vX3CwtM)!=+mcMbU` zpL>pNud|g-wJ!U4xgcZU^G&5A#&p7nHJ@MU{?VnQGflu*c|Lgs_+6KFcjy}|r&4Bo z>i$P~Q1RN--9$`pMPZE{2NxkKF0}+ZNkFFb$ZzD(1%O>sevY!se}}?@Qm75*I49>k ziTB9D8F*!7g~x*XUsppnV{P_pgj^&wPqB80@~F%^wQAqkC1@=<=ij%n$c+jU}Acg@tkuN7Eu`6JW}-1J@xdD##Eg zpBkzv+D@S?4zc%pQz_}foGcee9>S2TJnUSxI$5qXi7LF2^@=m zAvx?|}+xAKuRWigz#EyOJ 
zA{kUH@K}So-R&bWUO8Kt3X(uNjRG`CK53tskBZM77a|CB>f-U#oI}y3flX}yn)^-& z{9fuip2o^4%(_6z2z}!jZ#zBq;EC7-*SNrIryuzqdUb>L3d$F2-@Mf)Y2}II zAW1>Od+P}eHK8b8FddCsxzLUDxm5~HCaZLRSD0B@1x7%4p0Q>k2`zqNeOr;!4m4P? z%rJG7R9H&;uX^=GA^hXwoIUV>#BaotS4QTj42M$7hAED^uQz&XGfD?m-?4!jvu`#9 z`7!u;GFTN%?9JK!uDSjWnC3^3=3J!e4E{?9y)+NB$#la^#QP|+OyZ23tvvVh<^N+~ zmSB3WbLV3A=Q`5-UGAzkX{LX0Qz5y{vPLp`7db z^W>keo}Yh)MN27R)OFe^gkJD7;;9;Rvq4s4>k*ranPfo|sRC%Ks4%}GX~FKIaX5}K zQs^RA)Kkq*N~!|U&NFLk-}z^RHA_g9#Mgv+*|5b~eBpvgN9E)VRgAr6*Mk}mA$Vvp z-e9+=e+@!uX&5(QsfYbG-I0Az`&{Ce(E3#lUmXK`Ta{!PQ7tBrOX1kSr%=ZoB`l>o z5$;CcUSZi6k&>L^`W>|JbKw8^^7AI}Lywn) zSh=1G&XGWfD})rSol;!K@x#N9&4Cazgdrp0 z>+0)k2ow`an+%d1cC%Ti3@7f_I#}pM^)&bQ{F_+85RGdu#mHL?cDEho+dj9HzF+=Y z{<-p?9QbT=4w+>t1M@zWW9K|eezd;EVToeZ_e1tNRC`%WUI1?`zgf7$;_`6enENX( zkq0_|q?D7LuB9z7IQ{-UjE({|H<4#Aszk+2*hT8;lJu_Vn;yf=Eq5Ok6{WhlAP0-x zjiF|ty3jF}BuTomv_F_Lxhk)5@C>yvJVIf=)n5IZnfSaBk|C8wqdP0-yL#)szMkqm zwX%{%?Qf5Wn|qyjlp}4f?kcO@$LT<$mrY>2G<|RV?=gtfXsxq}tpx>vx6u?9BUC^F z`BT8ioYh61ALr#a!8;UF?wH#ap6-naS;o~F9VLmn)eh|Lf_?!=5 zY*!tQkixJ(NhnGC(r$a*gh``pZhe z%#23=pKAdL18Ile6#|2=I%ije2w1szS=;*Y4M82zn)Nv;%st|yTK0PQDz!-6USrXp zY=kvPQIzzp{g~+jKb>ZU@p}yUTD3NDoF>rEoo@ z!J}6pRO)z0DxUk#&xaX-@7IN|UjtUvWAHF_?G4=TuXAfGViF|x`;NKz5%Xih*Yy;n zQb-aZL;~L#03~gLDML7nO#A9yK9mu<-xRQswyCS zU|kHChvQiUflrT*>8|v2Pe=2d$x|y_@WNu?M4Q>fY@QNPoXpxL+=Mfp>jKZR>o^pU zy=}*Ol}=GYBuOyj0C$3b<3GlEO#6d*57#o=NpXkEe8(DqnQ3}Nur-8tm!Asu0_n8L zaMEzrNY)^3=udCdB-8|wZ7P4A0lhjT6)BfoeXCV;C(4qX6+O9_8NY6@26rqPJGn4C zi5t7rlU7;;5916*yVf)zRk?<|EOPs}Dll3~S2u=q4g6=H=ZWmB;cI`CJk!=caYnAe zy!>;&&$E+E4m<5MS$WyvCWA9sX)1{izZpmK6uev?W?OJ8ajSL}- zJejA;X1p0t-Gr%oZhIh0N-)L7F9==M6|{JXCLn^G*(J`1Xe3gHOwjNHHx8LHHW5KGY7{IDv7rF1~1LXxv$v~S8n*QZV1d!qLUa>vz7 zb3V&GpUm@sFLS4;Y$o?^nHgV1X{=e4=9-wrNdBWtjyX5=ESKQl?RRiIr89tlf50FU ztga!m@xEK^kCGY5Ax}(tSTNP;k|7}hW@$36apB5q-o$>6>~xG0erq)PG7LV%*mIC0*mW#120OUfzg47JH@rt|~nk$Mz4lG~HOJw3C_L z{d;!@V%_=8MNk68>$iPcsGPE-V6?NnyVQ^5cp8#H2bG;1!MAigx zhFp5Iz}+$(ykyz9I$@rI>R@tI8GUo7B~vNfnI+W;g4bX-ngJs}>!!XR 
zW;jA6@aHg>j9OHgqHT}q@%b}!H9t?Tco4F#pUz5$&JigX0s}03e2gtDEYN>+yWcIZ zE=YtxTaYqX+=(*xL=mndDevE3VX#XT;J7nm9yRH_ff!jmBva5CnispjSsn$+Onu&` z>@R)&eV@+N`jU_?EB-vN^6ooHQ)tP9?z)%4gQ7GeHo{#IW$i&@!!pfGDT|wbWMuwM zZ14GjMbtKAqI*ne`Dd&5`q-cQ{j~#lQeS~4lP!ueE}5g^-(S>?9g!4)l^-C9q`Re9 zD7+;$_3u$vv5*+=A5|giPbT>Ql#YcQd>kgm(i_kxlG^HXPJ==Ivax|NO&>lHs-oJ@Va4P8yz54F1lY9!#!xssKVuaZ_90~zR|VMM5^Hv_WaI{q~A~i{=2hogM%5LVR_c#x06*Rw` z!(JSjdF==NgOY3yzDJ-HtETL`ETm1{I}C1yCE1M2Ux7%UvPk=3p6Pg1&~ ztkNPQvoOe$mz$ZId1IeLG6JPkl}eAcvvbwvt=YjzPh-vOI+>UNgpOSXv7e@&W|Tv;4DV|*T^I&44QRwBe4%;0o~uEAP(`6a+H_;% zFKEOnPAZr=#PCE~a0^DtJlJmFm{04zUD@}Lm{cYG9*(qDSNVCi|0%4{xy*MR8b6=c zHapxa1T@aD$;C`vtAt?99#)01wM9>j#0YkBxx73~f_BeyzA@x`?zG>=N9XsPbI8{kj%&{ z`+CjKE|+wZbPJkOO)hTK+0V6bBE1n~&=bNlPG{R+NY*|dQaG7no{-Ye2;}Utiy>nF zz{u#`HIS`5mVFNn6u7x9mkHLj*oORKzvfb6D*pgB*289vQ15lG{CWCH z%f|E+;<&yl$?JBrLZ0DtmidcFseU72*ARg(pGg1g;$_#Jp_en~pRb)CJatI-z2wF1 zd@4wA#opz6%6OqE1yFkQ)$OOp#a}WqFo#S;1Ce)<=qyK*v-eA@83uBb&!1_uk2Ix( z5Z>^14vGb7Zi9G|)bYx{-*=-xyZrq%8;(|+#@A*+vD4vz+U)}}x4eR>lJ%Cdm-ovv z?^t+0uNP3+L#eunDx3~#XwSJ{fY>n3TIa>s8>{`7^+N3g$Iv{WY1nA91`A- zyiujp;k13dzaOx%3&&)zCL{UWfG$yrFJRVwk+ZpVC%*$fYQ^>XK^BJOh+j)31R(Khxntemo!2^2Eo!3 ze9ih2W(mcAFPDIw(1Qysii&ucEi}_>(6bFJmJI!EK-jt}T-I!|K{hJ|eMLikA~n^% zKb;|c-hrGKj4_&tiaAGnat%eJyDMu?UmnpL$o`Abi zo$Po9*)chcs%cC|OUl(H>REPjird(9q)w3Z!z;4N*5P9zNrRV3t$bYhxFBK)*2W4O z{IU0Q>T;TUe>$7T|Lelw470hJrRLP+UjA?9RlPBKb>*w#h|}(Y_)IolRvKnd;~42F zEpf%Zzu(EUZC&QCWf>f;QF(eN7qZZyJgh<~w!l>5GF)l|%l5WI6xjs&?H2U5)fF+aLwrZ7DakQBy3ta+5rfT| z6ILom9qu>L(F(86N%TH+MpsC!%yt~S+x7#0B73lMrfD)?&3mNJ?DtA zQEUb(>Io6t{q>zWiv6b|uixpfCM0R5XF0sTC~;8A@BvjEu&Kz=db71QUgTs&ypxOq zgR{SP{aSd@SoK&lAGP_{oSRch#J;G%@f|VVj;Z!p-LGl#;$)gHmzNMC9HXM*qcUG= zW>}l&XD!H^Vgdy~{UBAo%H@(Ot;ry<$sk;PF`@uT39|dk6Su!)rUZPi)_a6AVDxVg zw%zLJeYm%(;(7TUH%*bNON=>4Iw~cwUWc*wm_CwmXt#>;W&dT|?TZTUv9!~TIFj?* z(=xkHtqt8Mrru7Ql&dB8N-v6+*N^vZ=ElFl^J$EN!0-KxLf?=iWo~FyeKhkL?_|_=9+A?m*7A<~1xh41nC)+`?O7+`=g>*6K^JpiuIqV*(qV6cw5^B7 
zVp&D%LSvfGpdoFYZj@0&9v?UJ{b^rN{I7vnxbC3R=~G_U!E5~#kn_{4;l|pr|&AsRLQ|q`0nKU?@C5OXApa5~2tKF1tKl$FLwH zf~o|6#~ru9uw(E?G2-^rMT~JC6dJP5C=`UHqsV|^IB@0aow`K!iV+&Lxbb9sMWoTbCG!p6zlaD*krTD;aV#(9~`A*S~9Tcjno0J9*$#XLtDq=B=j_9&(MrU=#$t8`) z)hdk{l7}UBRp}?zb|Bd$r+5dhdE|qq0xb6Bt__xu1vqQl3(uXyr;jJQDUW+{Sn``@u`vK>rvHfq?&>gPs4#x*72xkH)5> zj2aehOWnPs&c*3!kh1nXrE2k!I!!B4fxrX~L4Y{T6UGzcIQ4G&;6UZ!VQ4$#_7*8V z>nN#zydb4*?(-9x@#pzp50b*&!O;>>Q%aof1N+>Zw(L+RYErfnf<1lp&9&%gJ9hteH{idBeejk5Cegb#=mP4@(OIj3EUeN?f3Zz4*UPmLDH<&N+91 zb^6A0d(BIjs=3lH(-m@4A|c$%roR)Xv+%tp>{L>l@k368SP;d@|u z2Gz|-F{>g26kJ0G*ZN&eJ*>5q)RU{U(R92mI=2X#WPITt;bq;^Cz`Mu(kKQ*6_J#jf z?GM^PagGe}yTQ;R)Od{@X~y4D=$fAhc=>p^h zdA?bjH^H1(UO_Wwmi=pRubz@rUqTu{?iFZx%T2^XR~o_^9<3!d7AH=PTfnpE9aKo8 z$)@Q!Usi|XUAG-;Ql=>@NGc7L|B;YX>QA?6Ls^2us#7F1Jp>d=gc~(eFK@KQ^{I+8 z`8Yltjs;E`HFAE|rCZ33bcv3mU9p_2ZYW}qi)1i4B~EvobzD5M%6G2<*B*s}TI0(a zdh%VFOG>Y?Arvp?cuyQZZNErdD;}Cte7-+rl8_9n(?vrV1EF_Fho&h=;`ZY*u?7)_ z9Myz)Q<)69>45JlZM-^FsIL6;&1C1hYEC0&Un8xV+USW#?cSy-vG2?I*q_-JStPtd zJPx6W@d%9IlTDJ;t!)fFncxXOUG}+syxm(pKK}U&V)k@&QQPAt&^uQKRF*&f0to9o zHrj>>b~0u1LEu|kBWi%FZ)g`I)22PMw%o6%Nb{F@@O6zqP-ANb?HF#l*j^zvZ$tiH z%?l66s+4-uh+*1qRd;UxN78>WHdK1(>}<+!Ha(;Dk!M&1E-rDV!SBWAiJK%HvIMVB z?pnfuyIjvzi!#G$$LBIWe<}P%IIk;Tm-ddIWYwl|92mquw9&M7gTgVLH8M#-uHBFn zh?Z!XY+2Dp>Gt}3rL$7gY-_idnWn}^yGL?^p?9ndy)eO@)oC(**F;*Gjf$6f)!Y}r zk}B{~7TGfUKL5B#O`Vn?)m%q2CMjp6H$~?;nNB5;!EA5mn$4vGo58oB1v$;Dg^P#t zH?+Lfg%hp%Qo(Fe>Gw~nI zev2&X#r{69*JDJq!jJBFk_RJ2#I@iZT+V(`SINOiEFoTgCNYl3x>o!B?qNHK>*z|= z(4>{xKnBKz%eALJM@Jh~ z3?r(q%(*mo+QkHXm97?ipjV5{=t|->5BbvjSWxv+v>~{>+&hI-{k%Wq(>rfwf{CX} zb9EQ7=`M_+Hjzt;IdkB1oT!!8)aKib=gdF$jFh+t{1|!L1v)7qVK=Z+>HQpz#xd4& z!jcV5ZNcHL6k&?oo0=_&*mdN+xZA)Sfe!^?b@rm7m4HKMVevB;vg~yL!7F zh>bGIU$$~+2&%X!O62bOqnc68lI+@v&t$R=Vt$QFWrUF-1udaDwzx?n&MYxs4`r%a zU2yq0(LS`c243Rl|NhpFYg>Sg5n`?-EwNQaQGd!=TceceT7`wuI{|@qz+}F;6d7K# z5CtsQ;KhVjU>JdwR5mZvo9;AEfZbkx*{Y)D!u@Vjxe^Z1y+8%(%BCs$5h){hbPA{W`o0nJbhig!dD#{zLI* zD&a&Gw>emb3qP`YUs#XsrWuH7xzn=Q!SVU^BCv{<8mLZKpdHiDXqs=PSIHOXl-%qe 
zVZS!_@5lu7gj7=cwIrC~+#OSFX;_-%AiyZfWKQ^e59?1>rY_-ed|U(pSNm9qLE3E? zh?<(FyhKT3i_2^w@qYuF~kiC7uyc;CUnkQ1cbeqWyo zwz&J+1Ldzx%fyEV=TWUo3N)aK9gE#1HM$B@d7yM->frd;ixqq0PzHTm;1FB4Dw?L2 z4eei$3@f;aW-t%Ds3uyD^;u=A>2%%^ZRX%Gt0GY$A)XZjDk_Pz#fo?(!VR4bX7H*s zk*V`nfjtOGZe2qW*99l+&1SX8Y`$7)Z|MXAn0|uO60svTi3qU#t_k_9Xm9HXEB%ro zCS9@mwhOVL>+T&jW*0~}f5@$Eo5Wnm-C7;>)AHFzHj~;we0<-- z?U8S?dEDg^v$C>0UT^DF@)3Akoeh}$%Q=$xbWwM2y*`W48IR-v^AUiA<)jBl*rz(< zDke{7K6=2!*??dz{U=Rkm)}9n?E4i%<$9*)H3|aid@A^-7)5}1fBIIJ2whU$umwc2 zd71u4qo+O4hM1^=siZHUQ*<(`?JFvQf)#=?T#?bRHRe7_3|Jjr##FneNSQ^bx+JXo znqh}Q`wt+q9BBA`LUcN7nbg;R{1;jTvXuQv`z;SOq!O`QpiN1LJrzI}H6mI_Is7O%f|4~#Ity46ktt`QVJgj9Y0tJVTR?0RtwVY1lQxw8E3t^Jsi_@HqOJ?*izPk zM3C)}=O0INIX_2{l}g{KE)SwO8XG5Ml+GxWe=4unMKK0oXvBbM9&;j!i#!@@sW%!^i3bf=vON_=0t z_jEa*(8F0r^5QxsT6wr){TUGB&QNK6kNH@7=<=IXFp|`?oR_7it9^OA#B}A=#*`7h zu)p9T4B}Q+zgZ7|sriQ>qbUC`^21d*nY3Z^hu%6)Clkjyh`?ZRwd{M>qyUXAgu<#* zRBm=w;Li~&euC%+E)HJfwe>I@7OVG!wYN#Gj}MjVF@RX;AgZ8k^5-j3r<0480IfL^ z2*BTU7*V@je)@47yQ4q!)|A5qz zTkLI6`9!x|#8Bro=$6ZiOi2y&wQ=yWtN2xSTrvW_B@F2sQkT-eoQARKt@tWTlL%IN!!5x8H0m8INr^r8L& zMn}?rs85to>+?^}3NAm5CpGvCAMCernZ*{KG9x9&c_G$`+?0UwYFNdWP!M z+9FNf^mHW#L*THej4M(H>+N_9A{tNW77zy_!7B`XOf-xo?2HOl1KTU*9=4ar+Ywha zS?VEGEdXC!9{uvjTk&Vh8xE=jm?qGx+{`Ab%Y*K;$eN(~fu)$e`tSDq{4j#Rwo4wQ z09YB?$7g+4*NAmjPm#-&X<$XS_lkshqU4el$M*2p_P!i3Zk-712IzX`kchd4fH&sx zgKI->24*v`-0Ky;=%jo;2^^CR_;({NyPMBi1{;4`wJCV+wX{r+4{v=!;@ zW^%|>yv^ACPv^uGeIf#!E+=G&z;2T=e58>44?viNG(X35*FYeDn2;-PJSK+xyx01l zC)Le2vN*R$qxvi4y-b(hor1M`go7mdj&`~aO@omAM`^}EcA|m7QtlZiIl{fDGyB`7 z=M~fqbn#_dV8BG=?a}MK->&TVi0vJV-&}dWWYIjCnZt-%HA+lBmY7bPF26i!?0nwE z{#c=fqD8PtqT;_2XJ9+>ejqUgRD~7gNW@ZLZWIGMpUrz#;YzfrxveUk>2T|Hd8yse z`IMAKwuNu}^9}HG_5o2i@i;^u3i*0V;*k_b>A%J&AVfdHY-lUzLlhJrUtvCQ8VR7% zsPXyUICo$t3eZLUEip22Y2)riZiT7{WSP;h%+394!f{luab#M!>uNHrXn5udfgWQ` zCnR7rP}RHcy)a2Ic{~Di!#X__23?jIo%59$6Iz6^1>c9u{c1fH(C_m2bu<(?B%b>L zG691+ZX8HwdJU7C8RoB@xyWG9U5(x0@CCkCUi4#xBk15PG$D!|iHTBQ8 zOBb$hi-#xOI2Y=RE=I?HE(%7BJM4+sq(G(nQ 
zv|FS-T1(x0{b;QmIHEn}H9t9y-OC?)U$Q;Q=Yzm4VB+qNkK1e{uAQV#)iq7f`YZ$v49gUvpQrZM z_kgl+f0l$N1oEw*-rkBn=e(Ll9kZ{>iTffITAdnZ3fp5WnF{KBo|^Iml9NsN3pf z_JnSHx#T%kdNeC*oYyBK?O`G?_fA4@rGJ8-CI@MkMU5%l$ z!R7wvYT|JKnU=uzz_>XWw$S+&dHca`ZRLc_5`@<>T))Z`e*bj@=rWrniLjj3Izz&c z7RyI_Seh`MGi0BTk`6Q*^}wBg8|RyHjonPA>~MO5?o&)u$3iC;;zVzySr*ql%#ix@ zD@QGAY)p%KkkffOBlR}jBA+2QghAHUZL(|FWSmLj;SckI{xXW7GU>}!cCCsTqueou zKs`of-pY;VF(p&Mlno^mW-6xL*@X4_$L#-(I!2F2F`(P!_4FUv*=!=xqiz3QusGtu zCPWKv>QK#sR$u4xOGfE0bxwQud?KYpU=|fYWk@S&vL}PyGx zIe-vdjEpMt+#@zDFoo#rzloo(pWXjM7Nw%HjB=!5okc8x)_K%GA@rF7JaY*R@?JGo zu2teTozqd-GuxOZj;RHA*bZSv)6(*7iJ;gsdpXoeQG1_oS_?QWd@816;PhO$D)!>C zxqT@UQlXFU$4eI8%2-%t1USPcpW;>1DdL0Mt%^~xBm(Frr>9;mLell_3Dif%jbk0m z%=Xrsa}{TXsf&*Z4GRn>W0Z7bb-nJW@kj#T7d^~ck7U7RAt#Yx?E$XLe|9_R>3W0k zf5EQ??p=2@9*$9JY(O?UJr&R^{m5ukk|gkSOT^Vy*ed4a=WB2x>!v|DX)<>sQD%4D7$=U{7Q3B;kqzi}+ z9?-ZSe;Pxo>TH2-+j}crTmX_#(7ueNOioFV7%j`UOS{r*rr*@wPe`06{{HubB?X~? z2{=wOzBV6p{2CH6%m^njD`{bVauaA0lG9=$j5rXPR4J9;>2e3e&UK7dEj82ShWB`3 z$$3;jT}e~Y>ga1(Fk-B|uK!;a0CH=?=b*F&$5*wpxq|=s?w!IoMFN-zW4LKPu%|T& zYxG$YDs|v1mP^2T{`HCT@Lc`b)Kvao1PO)Qx4rox7*&5-H^qT43S7m>88XT*`|;FB zR-57L?G72WA@@?yAoHnJ_aUBLgFvbbw)i!ko?d`lDUXR+Mj!wma5C;f(*;rr-pMs? 
zqUqN`^^opt0HcX$`USR4u$8@?Q=`D|>hnS|ULM8C!&uODI|nK4fsZ*cHJb6aY}t6x z1#ytHzDczvZdt)^0)?g-@OXIM+4@)A`8v`6rmis%f028cBT@#srD+n^h-v$rciAq(xBjf}q+zD6!7m>Ecs(NxvS;9Rz#JlGa2;%12 zN}NnReU(*?z8@ieeD*^dsUJ_rtIEIhL*L;QT30fEmxw#gC}qZo^Ldw8O!w<5m^dt% zo?4`#Jxh75Hk8#z$X9&772vn6E}_uy)7@`h!JB`mSlwK>i`skA`~hH>rh2 zluUBjknD7Mr_u{HgL~CoJE*9hO|szf*oSf}r5bBo&OfznhI!1o8lgmWe3m}|alBp`w1(au~dI)?RO1h%wAr@*UI8zD* zw7qhW-w$_!Q&vB0Cqw-HnSsbsLZAr^vsyu4G`T&BDNk|X?_U)t3 zjPkMi^rg0`DK;(wCBa=z^^h`6ioQRV>ocydZ%vM|iT9(V4i1Pd1ie}!{RIiYu92!w zjnjdu^@w2*C5wh!uJ0~0kGdvGe|cv~Qey?$UYBaSKhG;G(;OVbMR^I_x4V5ih{F(O z{lNkk!;sogCbxlv*BapM?bML_MKO&PPl}LPSy^3v`XDdDmP_!Zg6`PWsuaVxXe7wQ z`c$xL#FB@G#kGzp3p*uoyf#zmbIcVC9gX~c2VIwL@aaXPya{)5$D4Z{DtAcNtxn`~ zcl>=|LGg?R&Kn^N2V*mbUg3q5kY5snLb9CcJ>L-Wx83-$vCnO|w{8kcMFj(d=^$m1 zb_y*SH)&p#snC`-*rStk2t!y1d+{yF=qs> zRy^N^B#T&47f|%0N0Uit_K9~&(tg($H7waEJ2A(hqf=w+0eiku2Tv}S9Krf^l5|S0Y?Cgl#83Vjrlqekx=}6t93@x@SrBQ_bg#|~(ZA3fD#_{dB-`jQn`W@MYLZA)AH?076u>$ol@C0i2Ly!8r^Y21!o zhe`ky9CATBBr)j$)66`s-FXokEG#5Qg4KpH$pNH;suz*qUz0+HH$(80=zjMP^){A}7Hjb6m=$g{p}`GWcY{y;c11jiI}uZk?pS?9 zhin!;-G5nOtv}~DGQkYYTH-lY_{pS? 
z&i|#(|JG|?dk}m~J+d(^tSzjA6tvD|`dn~~Gp4KOSM@6*&h?}oXkM}j%nOVzoW;!O zM?P2*{}85bqdLHGWEvb8gNsskZhOd zIgx;QM^A$wPQbM9LMStV=?X#^@~9UKat?KJzdG#l74N9(UZ(+I?cmN_eJKv+`QQT? zpgQ^g^mx%4cCA;;!n0xFkzwM*zV!)Q?_WMs7J&8u8bO!Iy@JG`gEJ+^_sHTqCu$U# zo{*$=n6^1_$?4|c&ZQ5SKPxuR|8{s)_OoXikWbJ}zFrxXJq@%y{yU*rk?cS|@)+cp z>(BqPU3k;q6J`c05=+bidVKcyPZFo;H~bTGsGQw$U~z=TCxocVylJFED9p25yA+J8 zK^8f=b}R??D@a#+k-L8rIyv8jPNHC&|K7Tro$Ro{5Qw~}=HCSDiWw*wh}IO)Flyu} zkJ?2faPel@ZNJhYy+o;-;ORub*gSJZ4-BB6Z>xS!X;sy&Yq`sruT}awE9&(gQ@K@3 zyGD#Fur9Q8U|l|V(W!Syxn)pzEb5Q8>pfUO6DmHH3*t1(P7T|qmh8dP!)ES|ZAC`m zoOL-qK)RTcLW|0!G@b(7zZdsU)ddA;oaPj3D3a-NV;-sh==i*1c@7!gXb7PE8;$frtf?xI`=%v12!yJlLiHh-v>c^MC9_q zX=*sDx?|+Qa^ax$Y-rDJc5H|M{0Y}J&<1^+G)PHBRlqalIzt$t8gWnRR2|$G0IV?? zySAD6w(O+TP}D8za}Gj)8Jq^WjyapA!>&nFY!1hsv2d()LWv>9`}ycZmm{JnNW z*BgA~2zF#KEY;_?xiJ@{fTUnN@!QDTD+w1xWg8EHX|I1DrM5pk)PvNPn-jc5_$pAR zs}^EV!_AXna0agDaOJ*`k!FEGoiZ+~VO-*@ykvQG&5hn^pk+aFgRs^u>Sxpb55K)( zb&$BZjdY;+omm=8cUTN&8y`%2P-YL0Q^GuhJkd#P83` zTvR*@k$_(%&r}UXz&}{&=BTCtDuu-VdLlx1uh{<9QAKtn<5xa73O4SHQur{g0=*yt z$>I}=hA%xzQ1rBr!q4@Z$}u+!y>ziLrFInI;*x0OLO&WbFB0dnimIMnGbp%8fuS@& zSs3JB#D3?;8}XlB95#m6UcD&?yE_1#TLx(up0EVH%!M=%8!4BU0oAJj5TcUb$ti zx*UEQMNgS_zq7QrDJVeYq7vqrAs0nik)tbG)!l>37&qYAHtCuZ&gf~x%2mY&d@<&x zsLd@^9C_#IneZKm627NYz?^z;N%xQ)3>b%Fnx$IH?*^czbVwDK$`DFfRA*GVp_;xn z=0r+tZpoA$qTt}*Uc|rwo{Fjm$MfQU`J0CzU|@y~mhXByP3qVrbof~QSF!gQsrvZ1 zX>KXS+ws@h#()DHfi0eScp4dJYum1#+(-BV-maSDBn%3VY#g=5HwcIAH&2N8i`bh} z$g;wa&X+Ae3U=P-eW2qc0|Q++vXvsEJF7wBSFRlfCsa3k1Xdt+go_nJHAp-jzJ>fKw{VQ^Ef{dgxS=&GoEp4(1Ve)g| z3{u9qQv(}iGB;D8*_(xU5FEO;JR;|YNhio3ytEzKgBGgJQ|Fc#kM_kovvDUb@IWub($sz`I%kVp(QI|KLm z<$(!I2%5hDg6c^jv519%CxLJ}bHXP%iT9z|&b~LsTQ>!@9)#nE84S>oi%Q zwnN&*n%9Ztalq4b-%JoR_N$fO?j-~B8$S6$GtACUT`$;3fzH|>VFyZs-kx5$>;>l* z3{jPgLAQR{1&NCiAPB*Ks_g4&5!37<6hUKOFXH`f^SEwP5zv+;I$)a(R`P$`u>wfK z%~!!O+D<(%(R*X9Y4jb0tFBEzkyI%8{w8M^P{FY9+la(54yUy26iV&zw-n)KncBcp zz+YIcEV6?(n%zVaA}A$Bca*w=*5^H9L0jubu=DIx45jN+YpZmOUw?O7(VZ#eQY*sF 
zBJACz)D-DuK1b%I%td^Jp7&j@@$)UGlYiu@va)(@xhkX9l_s0p+V52&>QX(4i_cBskasA&m>Fay{1ylyv}}xr2b;66eF7 zcz@Y&I%U#wV|zdFmaHWQ%AcKxjyZ0?2B z>ZT-3sXyxDPIZ7-l>r)EEg-)s%!Jg!VOFvS@>4V^D%R0wwFGigH%p zXQgI_{ZSI3IQw?7fLA~@vriN}Kp55k)pO@m2*kDCEzN{8(JheFVSX{+`j(ZI)s?#~ z#LCigePaU_BhU8m$lGmWJb3h7tJVQi)0Ev|T%7=uf=PcHm}XvdRp!$WG^vZ_jf<75 zJ#$PlRj+L}Uq}ZS3UEwH5dXsvX9(P<&m1-3#e3nBk<%m;w-_B%eV3A1^B`;Cgr#VE9a+7xWRlT>p}h`k+}F zY>XXTSXe0G%QS1v`zeHGN2ylM_2HXJ_)XWuN)g9?EK?KuSt(N+9^bbSR>lxr8R zj6(^+5CW1zBN9U+-5?+>BF)g!-7PXmiAaf*NQsn4OAjU8APoXahjiT!@W1yQId?6V zj%&?~FYmYW*?T{GYTnsoi`HK~L3WZy5)Y3A1}tB@iB_=u|IE+-@)`HR|If&2hJ|mGCJ!ZFAY?)7>`;hO^Fb&d% zuY$*N_~T9#m7RC&o)(4c;hI(vw_A>2F>UiL=Vtr{EA-EW;LQ;kd1)63P09u>p<(@ahQG>2Lt#Yf@%`gE z8}c*iK+w?B`AOacR+94XLHF2fLjrX+vTV_pKQGHc;X0yg?#6ek)s;Fe=~YUl13h#n zq*;{hJE+|TN#3JYBbt@RU*D!(x99KfY2|uoOF<@oEKO;EC91?Xi93OBKjA&$;5R+j z>Tx(vwJQ%@-Z6bz6CNxP>JGYxYimQ$yok4KpP^Ye@-~HcFkEKoXpqf$*(?J;^f`@@ zE{gkN0Y*N9g=bY36-4WB#Y8h6v278u%2d6s0^@SPd1~(HKn{v+Pb{vOmy{6`U>f;9rGV?7O^QE@5-&V&Eq5! zJ>U=xUBcGPz#a2q2iamr9o{MTYuo60^^jxctYQD|1RLJ!8|Cp*$u~3Rn`NVYPG~(L z7I}9fx=A25g#hA(9NFv7oBvxNVO3xy^{`iZGUa_wZ12jmZPS;P;hhw~Nmo&w9}fsc zCHKD%d{5^QsYup=C~h~w=Ta_XDiP!IXhTBBk6J=8$t#s8g6#vp^HMsOg|za9W_{|r zDcW!OZro?^$z!~kND1|y3BZOHE{WemMs`-8p*jj2rz0zh#$iVvndUG%8 z!Lr2Vn#C^n8W0MM@Jxq6w&+Re1{+unE#=pRK!=W&y^~NzGm&HGNNu$7D*2ws-h~n_ zXR6zAR6@zW33%ZBj|#C<W@1c+x z5h@QTKe`c20#cV1D;0+>P%Q?&G04@PVejW~I0F`J*qhE^R@aKG#EwkyJl?Od9*^`$ z^m&Sff`Mh)gDwHHZ(Gxz$OO}7H$UKWC)!#U^LzMZ%~F;%DEd&Pe0!atU$3IKKroNC zUsbME)m6_+S1BI9a}~d*n@EVB#4Q`0K$Nt+s@sq4K_&Vptq~>SSqs>!?seJm-a2fm z{K?W22^)KJ)Q)l~wF%X4*k%|lYp*i3e2-0w7trlQ%?-SE-)OqpD9|O0Zsm!3jl>!T z&v<%%^IG{)BQje<5QREua!Yn&x7=FD=x}vXJ71rL=w&m0_JE9h*}4;vS=S@w-;0=o z;VrKR27Y1TX)ZQE0KZSt{SkoG{^u0`yqIYOE3*CKk3bR(2!4(@D~Rb(%#liOAO%Pr zfl;Zi4)ZO`%gf)secRsNZftA>Nd5(qWf&MfsHNzK`z}K(2K~FZ^lEYiag4=)`Y>8${Jo&; z)iO!Ju6VEUb(U4AxoOE5P#TcWl;8!!KQ1m4EVWA(5v$9}s_K)E^+jff@Z7mAG-`I< zO}^+AY-r3N$QI!Kxv@}$=PL>FOtiHY6Vh>kSu_Ii?F2&8 
zZ-rq5)C)*f8~ZMMRtLO7o;5V*RzVyW^Pf!Ffr9RZ9$|for#OLgR0rqX$o$$WeVc6e5I}H@YeQ}eKBKO#EzpOi(FIlb(>K8)H%~< zXciB=EjO+|da8Kk<*h9Ac};B?JxTe%!{V8bBFc1a9|h0HWR*LVxhANI-SBXHxX=2q zCxh)@3Wkj*wSwLjnv1TI-AlsbP{s|@g734L-?^m)Ar2j_)BuC$Ov>uhGiX1>(jw|> zkBv_($&;948bmB|G+t#}3FR`Iin!$rTDeRZ@cT?OsD4Z(XmqR|uiRFuyaz zW5K8lfGFM_;HOcrSZ2b*%`adP@jcn>P8KA=GQRh<2o{wIJ1S%xJ$RWPTJ*DoXjbZr z-Vd@Beh;Pa19PvCKqP;>-Zzf&W-czSxJF(rgy23BgC4p7J*xP)Hlbi$sNZNAyO&{d zKW05BObq#~TnKsl+ znAZEX&o0sHovwqN_JgMO_tqy2mD~oC@rC5AY(^rr6WQoV9EK_#b&;l6i^U_<TY}(lEb1>R+{#{5Sw;&4q_Z05etpx2pB0 z44}$5^ZbRXoyC*mqKu{B#Du1`B^`%Qr}dX%?>qxAV?DI9cb#jfPSYqQu0T! z^-uU(8~PmQVbCxAb$|U;?ZIfuPJ-}qs|{|aG9k_Qjt!auqWFfRj)Jg?ip+}qj$S;> z0_kA~CM^xO2jpvQGfHm1e2EHeTyswM52O^n2(1&wm(T|5`J(ROyr{S-eS#Ewa5gE? zvfm9+`V>8&R~1&lsVDD-+^+a-_+Tjv(O(6l1k4ChGiq`b^x- zlNPqn9Ww!a^y$9Tn!(=4o>O-aarHs_yk9pEK2=-#%T=|%_&_Jl*7V6q&Gx_vG zg;0Z3JQoRdQ>aIk0*0BBY<|v6y5THRSx^eI&5K<6TUfIiR|(b!CPaNO#v6s;XR)cz zZpIG8dqlbD-$yD7#Q{V?l6&@oA~WZ+IURTBiKukQ0LxA2^p0!ov85D_90b&$Y5Z&fi@H@D)bCdCA?m*x2yE2RwM zYQ@}7!tB%LavX6#?o;gAZa|cgRm$b;)7uf|v z;u@EojJO`VTI(TE7@MQKu)jMng8dAMtz}tD?q~aaL-+MM&Dc9FH zXhl8pm=Ldi_~|jS(niQi{RceEJyZRR&`}?&4zwlJulD6m02UsiIi`Y_|KpRVj<+Y{ zr*GE1er0E<_&}CbnVq4LVQ?Gg{r=H{mz~dvo?Q>Oir;a7+%nE#G_Hy12KNSX|6^X= zctJ-Y_bZ99)(5P8g<(0w2sazOl4+{#9df=>+PbwP)9nKK?hr^t&qE-L9q`7-3zO!! z9XYGmirV=+)XU1&)7DkUd?gFVtFEX1dz5CHO>SU)#>;mGIfU$`Y5UcKPZIFsylKj7f&hd0W9TPpw<2YVOys58onKBraJDh0f* zkjM7suYn#aVNdttJ)7Tn)2>5SB6I8?(FHK>X&uAK_m6g0;@J&s?d&oLjhW0DnlNg& zkSv}sA`j6~;@z0ny}^CH(T+m-Fa3GBg@xzyR8uB`+F9=KkV**?R}FUTEjN%l z{c`mFRd#G`HGWLgKf1oBpZV-2=8N^JSQ-!&ef|Bo#Kg*!6GS)LbFv}dZYth(FI0Fs zt8n=~`|kcKulE-@!O1eAZ*8fHEluf~oOh-Bo87=VSXv|m9D>H&@Dxj-iUVr&Cd;A2 zZuiLPpIW|o(dfUrySu-CKdN@SSe98{8_=3yo$2`X^^#@gy_z5ZS-^pQzYf)raH3?L z^6pSM3bN_=@N@Gd%d@ag);k{zD2tx{h*4gjXb3;xz29e45H@HP$*-#RvYBSDac`2l zYI8VlAJ<`2=i@!M1YsehKYyZPNFzTXPU5zsSw>{BPfn(%2%9dy^edOv*87eg9wV~O zhXbuxG1}wkAYm^VRKJvXizSP*_Z9hGsXwU`(i6{<%? 
zO*JA6{g^`G~Udr$N7 zyNC9Cm{(B1a@&|HePXc}f$H%p2vcZ(f=G`i7Qt@e{TEdpd!+e%NX+3(U0I!}!U;DlT}u*H&w&i zvZ!yuxvoH2;$7+u0ro_W^938?^Q^3A6iYN`EpT;UW%%*&Cz00APE#QSz!ZE=%{!}& zA@@=lLw@Ke?VF`+*0%OvA2!>X$#C;*U#GIJvoHnlk3kTssqsWhTaLwLWv;Kfh;QAg zZJ+sVmT`Cc>oIw?@4Ch4?!=IslW<|RC+pwXp7FyQ(l_)Bo!{Di4lYH{h5L&5oKLNZ zp8iTItUcNWrd;I6FdBX~nn;$UZ6O)~LZL4*PKNj5#yxm2-(Ih15@p46*|zT2D>_c3A&r_dDDH zh%r^UVUL?ovP|nM!SJo&;bAJjlZy=sv}#3k6#X2UO(dVCGujgq`_%61))4~TA&>xH zNc}yST`y-M`MEtzH~-FDF)Wf{lTaMCu{H_z-_a3vLYu}? z!o%2%Xa;*hr>k7NoUD##Zln@rMjz3cB1WEowji^4&#vbpSP+s&E(y4q@$p2I-i}L2 zg>^?~#~V*=Fcr$>GXwF(r$_s(oe>w$L-cH3l$Xz~C?RjiK@cw`Sr6w^iyI2gw@Zkc zK*s+GFfNy>zS)aSWuj1dQq5<_H@{^eL^+3Y3{X}3?oYZoCo(dFIP4}j_s$Nsj6a2M ziJvSF-LLI*-aW)qxfDgo|;eXRI_taDm2OCnqkqjmfj+4)#~DS9Av$ zZ+j9s%(?nd26!h7-W#0!9_Z;wt?M*%+CpM4qe+C+2^WXSUu3@lE7JbvwZix{E^A-K z_Va=pRr9sK0TGsm2zlh^$jRkzoUX{Mz3)V1MwDgzn?_re@}=#GAy2<5uP!}(Q=lax z)M?K7^qtZ~Me$2|&S9kYY(PaPf8*1Vk{!Wnp>*7^X77A$+<`z0c=5jNMinYx)Yo3- z)m<~{{oq%a80Os3rpus6zn-?yeMxR7fx9|S6VK;%82tb)@IGh}g-o`XXBrmM?NQ8) z)zyElpY88PtZAPz5h*8P^zXOif-asui4$llh?}&KFKc+j?PYV_>DjT9W`4ET zKJQAmu;*_KHv%DJk7YPC{JYo6a58({$y`^`$oWC*c|)pluCC9%t&`pU=HX5`R@hFt zZNBgJ%sMfDsNWIcqr<$y+BwsyrjR5ns}ZZlRF@lfy#Uoqc!so z&{Xt1J2{*_+bd7?J<1%rMUVf>LSvdUYEU?{1u*8}+yNHcGRND&_~%nKI!Vb%Q=?k* z4;A2*x`~Mo=lKJrmy57!0hf>=n>tnL6w}H=v(eoxp033jUy>NKs?&3Ld~!C0LyK{^ zFC3lWJ$p5!CRoH?PMs+Y;-x0b-zc4QT zrQyj&l!`-1;r&%VkAh;zZc_Vby;{@5p9ZX3hdS@8B>iB9qP;a4)u2r;A-d0_&Bof=+Qw!}1h?AwtKv*l z!sAuMeBXiJ`9|G&YTY?9onSfTHytK2#y5CK^zY|D;s*i}%qk!L&TU&stO`+E?zc?! 
z3LRMc8O+m`RzD6nPtC-?0gcrs*JW{Te zABpZF(nCpvV+^wj(yt=gXN#RIAQekM@+*|}^mGa=7zYh-y z;9#gnZ&Xm&;SWY0gn5W)>X~&)S1o;+U06x=J1kWp`Oa9MfJVHX*sHOV$1ZwKbw2M0 zg%p1JG@}bjNBI>n+}~}@d)$NzESD(s^EuppoU76FBN5x!`{AJTr3H8hA|c#qjKrIr z9Onbb>_2JdwbkY{ea)LsrXel1gT8CAJRMZc>s0tEA1gVE&bQ1$RaH}3+TN6iy6)Z{ zBG#F1Mo?Ng{EWAle^GCyUZ5+f8R%;t8lqp?1=V;tIXU^ALzB90XVH^pqPbQ+j{dv% z3{l^TNlu34mwTTn_s!m>D|iV<=nqd z8_t2EOh)-p09%kq-1GP5dJJQfY6ypYi$oXpyGIP&OFKDD6_it>qumP~rP^FXF+xHu zfu=|5}`7X`8g|)R)fKkkEdUCRvUC63;%N6I8cwryNOBV2iyNV}3S{uY}G zeQ?5if=@kCilmfP0*YP-hKC)6wP$D5b}(**OtoP-QWfVuPJP_=)5fry z;ae*%V}m-EVu6N2S#~11boy(_fHM)h$HdDrWD&40>^t^nO)E7T-<%HAnc_GTS)y}u zf37Pn)^&((ZlM#kYC&S|MkF}eZ>X>kxw_&w9`?foxStEqLvT^5K3)fr`6F(W=_ygT zdb}vWn>Qsf$HUs;0`#(((#FH1@^s;Iwt-h|C{}PpSN`<5``PwT(yCE+sqPhs8h`>p zNExT+Vkpt2i2B`$MD7I7&BO5;;#+xKk&6i=ezpA2#d}?!UAQsHfa&FLd=M*Ko67(7Sa_+;pB&kF@5gH=LUoyqH^ z;_rIN&egllZ^<#=G0;z`_Hc@oj<9D7*ig_gek~wJAKevG z<5KY7iQmYW85^UYf)&`UE@BM@1z8urT3Vy1W#~l|XyW2>N0KYN4}xg7iH?Yf5IsAXKMafa^154IfCoZC?yRYMy}xT_+Gyi=*%-uHh4N=;%1qdQumC-WkW}?o6;&MkZXb}1 zqAXs>U@P9?_q?K+pri2>69qg#5Ouv{*7g z<|R)xiE8}LYY*Qt#dmn`sIn}KOxeIiI%z4~nxZXEX7_A)z2qpRez*%oYISmnMvrzc z-QSg&pL;}-%|!=^8W>X3`8%AS`HnAUR|>-h$zoyc0I>zcCm=R*HOcgCz$br-t+TT; zV5^Yt$yTd(?K^pP=v~~5;E(E;D9wy8N=gqpdZlCN`K?SfKpvw+H+cerphSgU+doR4 zl{yXzeAIZB=@{(hqjk&9#>URpwzIpt$c#dN(cxi0YpCXZ-GZ;NVkaq|;Q#SlVT7Qj z*Dq@9VgNJUJ~lR(ufw;+9`Nl3%7m<;Naucgb#*mBw>CXCmNyhd(^1=#4?jY^Iurv9Xh%pPc|tQ(u198ekP-LxhMf z+FQxQts|T&8+Vp^ZUr})(AY-v=XP>+#Hxl9r(HL zgaxY)Yy}^VEiBNRph|&1y|AQ42{-^ihKr<8oR>H%+vitB^{o6wA_qjIrWBT!Nhg;fn8|zCHgMWI(`YJ0ZqyCS7fhJoYwG3t~Yq?tc3Z9pYBIL=q}PA;Vc;NdHIzn)MI^3FtnciMHI5#aho;9P;`u&2b^HO|0Tg+rpWR@hOEZ6ArEd&@61t+d5BSKp?ai z9~UQb`dgiDF~}Ol_{T$O`T%#e!cRx^e}fS7^7Y{eL2Xg@9iwOA&F?EHeNPTJrb#mx zpb#Cd1IoM8Z_v@v51`$Yi|_j^&ytyJ+|j$?-{ZhY6?n&CPpJ zBWZw&-wA6r?>Pe)>YwD7Nu(2nalbGw~*em+(?7HUWWAtbA}>f>Wq9Z^t)jVBJl`;$Ua^e(C4H zOMR+G5VS4f?7Q+kD_uT4-oGegh$#jY)5?Had{p>PR!Wu%0aiXBQbo?IelP+}y=-sd 
z@-s85>SN+dijbl?OpiIi+xT7imNDQ!H{s27CSpS^a9|elrq#_B?$T|negCkn+Yf2OLcX0?#D8?cD{?It#c|{^)!qd~ z*2nQ~-sR*qJw8s1h{zA>Y|vnOVc|U`5qkyF8|Hvjygc4*ted$f0LaTR0LiQR_W2d# z)jBP8*Lb;wH9O{0exIBipWct7mWH+jCg26M_4hm0a6D$?<9EwR{^_$oU_m(PCC7Z7 z71dA$-0590jvxso@^MwkPiV{sJKjLitA8v{2ACdzb&@TfJr?LVeO)Y=b2UOzPXd{# zjOb2c*VyA4>&kzmVW^p7+}rRbVQ3l1RIF=5f#t_jwLaA)CF*%Y@M4lP`d6<9ekDfE zT4D=$M>5DmMbPO?uT1klE7v5@fc0PHiMUC3Hw;Mw$?LOQN8daMNYLE6&8cu_ZFSWQ z7i;JlZHnBBCQd5KPE=La3!Bi(1rnNF*(FkVSn8m34JdV&VH9@9?;2KBx^$(hlB9;%ReB&E|Q`#-K!-ahbXN^Ei+IYKY;w z4YgJ|CbHp;Z|l=+${nNQ-=oWOO(t>K^PNdeQMg%AT3{Di(5l);`Cth9pjlC7{A@ zM1%hNtblH~|HCG^;>qkp86j9aP8$jaQXC7e9lQ_FuzGFHp{CN|T9HmalDT(f9pj<) z_OCpKnKB3NO~_9GXW6;INx7WbQ$u^*k-@*Kl(4b;rNSf48J* zqostt$+m_{1qO#I=4ZJozKF(#F?E;$R(Q=~`DZxq%B%m`T``=<=J^FR8!NA*%w-_= z(T0JxBt<`{-8)hEu>%C*cQHnjQ&SZfvdzungS_{fn+lrPZ=piLI|T-+%@9#fZ)i=e4={tEYcfdfSIcF z_iPXl=CPuHpAMK*hq>TTRpIW!=qaNz2P`%G)5NT&LU5fzO0{c3TymK%uPeZa0E0&- zUAMqHp^JjtBs$0gV0s4dcD}B(1i}ddV_8MP=+L{+{$Gi5&hNR0LYF&>HW-JlJB(g3 z5P$&n!){U+I{l9eKvPea1-te96nn6P4+%7kVsQ?$zh}0>1aTOUFX1@tYL}+&=2>4I z1BC!%>kB)PLZYIpmzS5HUn)8f4=pwUq+;_y_#ccY&*}PE`aOKwNbqo!-#mE%oWtLa z>hZPKHyYpp{%j;dQ=xd#BzNPJh)KV_3Up6Tu`SU32C?|#%b6xI%*`LuJtq2J-6K!J zz@1D*0GAjSe9#E{W%5%F$(tc0G@D97y?eWroKGS9cZ**BJ*5ID7az)?7FGk7BIzD0 z;@C>WIuP>ahH%?JjU0Uv*+i&OHr;5ob`1<))7ER2)K=Z{KSa)KXjMnOQ^z? 
zfo3gR3?`jt;P4X@(4m3nv}MOHNCbd419U(nrk}^GnFj0g!MNVuw~fWanG~4_Z;{{; zq(^-f{Ed#txeb}0vKsNlMB)U~{PRD!dfmk6PijRyO2|c126Ww0IV_&b-NZn02a%4c zg<&E|@J4js~|t(JZwYCqj;8WRQRy>ypy9uX12CMMF=|tTBMCTkAHlJzqoE zLlD+*en9~z+ombznvqn!cLIt88v~lR%YQA9h}uJdk2ns{Gred%saL^tw?oHtC+jxh1mr!ZUJF7CDbc2Ign>9{@$h!UCxn%5 zmu@}D*Cj$eq0ia*J3$BL{iN%CzL351;$qA`nJIfRl&qyhhtGSH!|JA^KNV)VkQ=fO z;Ix{1dRf0g!ArOSihbr`?nYeK^vn4>7+z~&1U1yHuc}~$wBEhTkXBNFMXi1JTyShLmna zeRoEAsKB=@4DHFu6#dHIM&KH-23)+<_iMtPgL`6_r-lQy?0cRR( zKpra+Ug6GdRir)3{P)o>WJU`gL%yj2XY@oUp+Aa$!1p+96uUK2Pmvpa7NwGEQZ?~P zfr+Uhw%pmwLo%6a^J{6A$o1c4VwY!*eEA>hPSdLnN?9nP(#d?@=z+?x(WLpq2jr2_k z-Hzuts<@8HLIM;ZhQ9072^Gj(Vv$U}_NkP@}Y8xvFp75M^Jv|eerVA2S) zVl9A+2PR9fH}}dDk%kx2)M_=no4z0LhXZy(IZQcgL>HD?*8YztBM3QD4&Rp;pHMc z@W~5a*wf=t`fwE;uf8`H?6*|!Ui}*X`V&YiCYOuu?tET*N|8-WNCulLCa23wu-1c^Om(OYgl(yaa_q`#U@as{-7ZL zJgFoD5Q+_uW_2mHkKrFp^0T}EQnjk0p)jq^f{mbxX`Vdq%i9WwhSnpDMG#$qqY?7q z;^8>Fl)Gi=H%TVDEsDT%Tr)qfV%2}CwH(B7aUu%S!)-bTgSA3CAI_r~VIqdLxeHfL zS??~q3Je3iUB4gRkbl<_8KQu=&uf+gTA}AzYo?hfW zhw7;QT~0+9O@v7j`xq0D1A(&#t_R$nN@=V(Ayo>L{sPvq!z-oonR6@%JP;y=DWKx` zJ=0}(0K;oBescNjCH&B*6MgfhH1Qp?R)`cDi1H5vl$e0PgDg+|pU>>#HLL|Jz=HS{ zn%fiO zEu8UvLxs??Ik6~{4iS=S2H?5=mqorXYxbDVc}-u?^M`d6e^=XR3|hE@I6(ft`ZRZ+ z`#hSk3%|-(8G^(zwT1cHD9OoTY%n1X{8`(^+mM((Sx@Q!cQ{+7z<z+DhLpC+ z#Td-_!6?6bbrl7&o7Ubkew+Mp1NQL}RJ-ttj)ybQAXJVj*rbvFCN^S)hlZe6a0CFh z{xd$J6|c{2D&Wca(%IsZKb*OHA0H=#HX1|63T^kTmg3s!N415k=F)Jw0GSN^y_+{t zZky;9!CptycxM33rzy$Z*K0u}*a2mC?WV1a7pcOefniezzEBzKM(!&Ahc|r+_3vEW zSvWq*6yDtH6tU+hfg>SX?m|RiTw>vBT!r7`fOz<^(a$Rlx^ysq>{viLK0j~Qv*Y-p zvM)nM8yh#YiNl&9h?w>dm`J%MhTGU#dnnE-mvJ%B{1tPf38%jZRVh>^qgIT3K|RJP z4n{Nv)CihQ)oA|?L1sMh#O~p^k~qX29^RnSUXExb$kn(NK%&6$?00@h5q%C$Q?n5?IV-gD>0F_aZ{q?-ogOG zN3xi=^#k6gD|ow6DVJz8pvp7AhI5&4vCbz9eHMyr3=A90d9r_tE&K~~t4CHF zuXh0fuLz>ahy{^RCa2en?obkn{zHFH)%40>gj1ljpbqC6k8{XH=*bF@lCO=9{1+P> zcCR{rLM^K#NIA10q2?CsKNuG~xwO?h2JtEX#$#0-PT|v(W{d z^BM4W_p{2bTc_{`C`m8fNX$!tq-K2yWgIwQOglKesadM7^cP2+?X$}T5{e7~wgDs& 
z2`KQ_o`R~$i65{sYOm=u4;@5zuaz4<{#I<_iERBS#uMhuUJY780r=tuAqorPwAan% zh*aO!<#G`~OQ64(_^AE|+kRa|z#E4ht3>>*A^|HB^>3aI1nZ?HzSFV4CmfX~!neGj zlJkJ4S+u9}xlN68Pcv8qvGBZNY4CB{It#Hw1T&Nad0Z8-@=)C6k8raafYXe0Mu*Ucnjj0R-shnR#+{Gu^=Q_ltWNC?hm4q0W1mZYO(}B zOdItemk`p7HUt(hYKcU_nlo;9=VX$Dri8Ej2Sp1K-@fIm&l>f8T>6-wO#=t4r-{YnDB|OpGZbCDg2uVUMzJK=|KSIX%_46AeJb$n=5&1MP zjkzJC5>-}09OhVj`7K!mer2?050uM97NW%_7>fYbhJT6YMPkhfeis{;Z0%(B!H7;f zCX`+M=Y#GoR>br%l$YaeJ|@5<`Q-S)OI4z)(5#($<^>%MFWR9oN|SgAz`1M{n-Y%A z+dt+C%>78_K90RURe`9t1Y1i`3Xvwnp~raL-S>fbfgD0mi=8<7I=*SdT}A@{HPXy3 zHT9^L>y{LGgiMyiY+hc~6L#v+vW{kYh(Tmh_up%GE=#3gcnNgfAXf3<=?5Gtq~dX6 z!7(pf-$n{X)8bv}_c+hw>d<|3Gu6;NFg(V=#sX@aH^ieuh0z+n5=dst=H{u~{aS2- zAeZfL;LT-NY=swMF}wL*?!mj+V) zwrff=yZ`DKGO0HtZ?-goy__tY2treY$ip~9!hMOm2O23!BvP|U7p@m2fQlf*5Z`uZ z({K^R0tvWpWrmo}LreeM8(-3A(c^MfhSYW&eD7V#HVy0>m;^^9k{w5e_sqh+=6E~ zXG1*-Ui?867+MpdZpk@|iPVCWFi$$nGTn^U;(i;hh=|p)OU&iI_;7+f2>Uh$5=>K& zw+hz`mmv@3CS99tq@ZX;=qiTKKL(Zz{q;-@;TNH%^BtE8KO!VQNx|d$>ar19EN|uL z5_WVVQs^cd>*&@Q7~;zQtkJPY3m`y+;|rG~ZCa7$??l)_fLN#_R!#8lFP^D_{Om<^ zxsK}GD+sOh4wDy;01v3d3}m@^!; zxCFxq#NnbtRxrVJ=jszMGqC$vlGrL5NCthb7Nxk{eKKKp>CNId^sRe_Z!Uv@e}yK1 zKT-V33ck-*8D|44ezK*7oj03nf-l>&A$(h%I0b>Jg}48aCiaSS9;Oc_i6S9|1cp|` z5wv=bmP370e?~?cOM$ld^?6FK?e$@Z5wl05TumoH98=>N^LI7^U6XP*G^Em=mB}?> zAj`UOA%@xVZe z*o2Yz1QEhid;)iw`LYqE#s3C7PBJP-#;elN+?H)|A+&gvM}Nl@fE7^(Q0uK?A@4!r zd|uLsG#udCOe|>1q>|OaEpI-&y_!p~w*+C!WZ{kHn9v3iAVMK*dIvw>F%c7FII#}m zUlU`LD6v-WaVec2-+e7g2ZO^xfkl=Ch)L8mHcVvX3!00ow-?5Q$ry@T%OGX^`x@{W z+tIh4yML=l+0VH?2?1P-CGr%4K>1mn;LZ-I);`ky#Zzey?EU~`D4Yl7og5TQ3mH#R zAcKNHThE#D2>K(6Ylj8_D&c=70vE4k>ltQkM`$|Rf!sQ`oZ@c5=Z=YK7KAE_?x7Th z52sG8yp_vUpjls!kwCNxhUXDy6ND3k522K!!79@KlA23d8wtBX;D}W)Zj(Pu|C+Lg$dQ=P; zBO`P~f99Q&xeF0e@nG_yCQw+wi~nCqx0J?EyNCdhzrn^qp7Q+8^3ONmIV+bZc_n|J z4%QFuv`?@picR5N4Kak{&T9qGpZI5z>}s`3^=4qQWrpy4{8IxdhSs|yd&Mwt6p_*4 zm>;5~*+ss$0cVsw&c*)wYoQ=x@p`lQhTICC@p!~Byt4yabP+viIf;fGf@lRL{Yhzh zy+9P&MG;bQZ5_?)=^yu5N#sKD5$J(WWB=viFJX*#QT|=!lb^@8TaU@b&I 
z(Qi^?y6ZetD`L?U&YhTZxic3EI%TMjrw9KqB!gO$IA)-Q98LN+yb-0IB*A7Gx|T-M z09{uYr6d5tB)g`#>w}u+6ll$}_b_%{R#xLu2oYj-nEb4xVEFwB{L>AkDR!Ih6h__qtI))Q5tP@`pGN`RradJQaw2>xY>Q>}TSozj~`8 z3}qnjF2koNUK*|UhYTs{A!0ATn3?*WPxs2Y4-hkJl)4q|#!59J4Mpq`K*J0MS(kCUvksQ?A(- zU!A#sDJUZ^Z*0(7l|HlhIApD%MG;$zZa~Bd+=>FURC|A`G=OHS}{7Phde4+lbz3<<0d9Jb@B$7rE?T7g)l%L&!Y70S3aa-0!JX{ zBigbp1mOoGG#uWVkG2&HK-$UXH*^&p{qww=C@2h5&v9Q*d3pVmTko+7WU@? z{&gx~>Im{|i>oh6pffOGtd;t*}%)HVM zgRz0#jYoyqfYKIOwm@X0LY0?dFPyZKkWMTl%i<1e< zL0pImxNE*+a9FTUGu)dOylcF7y82%a`QP6%9z}P5&Xfx*P^*v}X>T+hXWn3i+%+-Wf{i+G(^w6l zW@kqLX{P_fS_T^FsvK>vv^vTdWaJ9$(8QTj!fjhivS*ar zk7cv~>jqDK)B$^gTO_t?V!5B%h1>&(8cDCb%Ce zul_JoW@1u=t>w(-3wR)W58mV`Y04S#iO4PnJZiS91F~ZRB=@?N zeyhlbo2&Ix_b=KZ@NqPde#%GGCWBB1W+7nbRB3cmvvZ#gHlooLjrji*5rJfBOlGeW zbel&S1uuT(>=x*-$3XOi4PbH5*ImFQ=t)<>u9K4rw7eZ$bb(K==&mGM^q5R;16nJS z{V<37u6X}U?8!~(zS3up+fuK}P4=f}06~Xei&!yTK0qouOPG9x-!$#T0}0|R`eF1E zRl;6;-wpNt%KYn}vKLBLAcU6u+);0m%gmmH3jF%sPgpUr5J`jXWgJtl7VNXVC4YQc zR5GQ(kf$W%|G0TGDGX?;SJ&45ke-@f2ADB@r&*q}h!u(BBaR(VwmYz)pJ;9m0VN)ULA@$HX6={(Gogafy=yqQEVb6JQ8yd6K8Hn<)L3lkX>CF8 zV1Vf|dvx38b#eV9)~eQ3j%$O5Ejol3WiBx>8*G;kXFeiV7KO8Sx(a|xPG5Vs0npj1 zy{{)Xm$0xj1Z{$DYs1BQm7pKk*TLZ=8%zCxE&?Kv1quz(PwNuG1-Jh;S$zG5nQ)K( z2y4m?7+*PK%zgOJi!_rM3jFgGnn450_GRm=$MVs>B2|x%Trb{mgrv3lh}qe84l`Sp zuj6(zrEOGRet#X%Ky@spU>xixDItdU0@jiH`}@JcLBL=Dbc^;)!I2?4t0{@Xe^B3O z^76lRr=_Lk@HnZF%eip-wY0W+WikZbx@YvPiD}63E6%$}<9v=6snV*@imA|Wb!2|p z?8yUU*a6Q+-+7$YMMwrc@xKP|9fXS9pDx|V-My!Y_!G*OB~n2OAiMe9EY?Bx>Y}@? 
zqzqH=^CcLb0D1J=j|C1 zAP|D5&@}mU7BUAhV5qh#=H}AM%ASA=M9}{coyu54dp?K*ro zs@qOaBMdT1JSWX@qh)!=sJ5P*A91Ux0colm5UcWc3ve~TbdcKMf4%%UXWK17o0F9{ zknK&$w&j`?xTO=ykDfE_+YzBEux@?l(Ia7WZt}flgel2d80GxwK2|*4_EzkqJer93 z71P{)%<=N}Dn_21wC3c$kdt?@E)Uj*I2^VYFDX%m%#SYPtg09L!0Ii!k$F1bu%MI# zd>yce?tCk9^$jv#_5n)6JDA^H99#TUcDxkTG?{mq(v`v+!wmy#$?QG0{qb(WxTdD& z9Wb1CfuD(t?8LYFm+(^Moqi)P^p3Qp`rqrWWei61OYq{!rZjn2sUs%oIx=Gvbk_Y2 z#`WJBjGxo&f1^3=vGO;jln{xWM7Qs|;eDaU-W;I`e8F8Kn+u$Q;!&tq&B?YMXB-y_ zc#b;1IJK{-al(#GvmKvBq^s&XRt^p(B0qJ(@W)*`40c>C*Vg5(SF89wdn2F=Wo67O z_Y{UyFkgaI{r8r%QFP#+4Odim1&j)5Uot&xe_i5YPQG4mG~DZm@bUZ=UO$PCeKj*= zIW$rTd`tjeT8rDkJkN&z8_WAOi**!fzYL7o<~D=Hf zjIUY6?$Ueu=-uE@yO-YQR0VP+$<&A{BkjHkxlIY!?--ceD1MK1MWA(Grs^G&=( zINYAC@BnrXgyySx+;t5~BBB+)msUMRsH;rlO^%#NBnd}|kUd19*jKo(Os0R_5+jXl z__$U^n=`Dqf0DrgdauyRKmV+I4pAU;a*Y*G1-!!R=AeJ zLl2Uh^m^<1tqKCJfOkMJ;2C=y&(9RzIZH)>m2qH!Rf#$YbvkO=q9=Cm<9;7LVG6;E z8O$C4{!lIj{3nw-m`;xN%HhRziyzk!YRT`@@Y`_z_Krf?{@5wrrUcooSep>V>IbJA zy3jovlx8A~AuURk#OPc4d;P@JN_Sc}_!oC5V{FCHaIaD{?#GCnJK!11Fm0%-D^QJRFow{BZqL4f_PkhH+}L5Vcs zBEHs*X{OoQ}QX5~U2J12mwPL0wwHdCu@hPaRgu=a4@W zKVm7Ki>-MvQE!iYHa};&6$^OR+1COXZ+VJ<=?Xe!3pm5DoEdx4_9U#`zOZ3gmWh_W zA50d!14aQmJEB&B8btKS6N?=$38n7)`-7)COSD7Y4j|IZ_Z6Q#(Q%_Qo-;0w%S%Yd zsKdXmzTWNcw&B3X8XnlcQkt{UP~R}%7&f)MdK?3r?kVJ0R~CWgPU0HPf5{>0MOgf~ z!Z)Hpyj7t*sHe6P!Ns4VREDeiN5%U}Aowp54|>)L2zV;M`Tu%7 zS_617*Ou3raaB@s=;QWW9X?^(kc3}9ZI=w8U95Gq^%i@Kj;4m%#zno}uldrZh?_@C zZH#t40+F3Rl4aPjv)Sq}!SsE(-tS<*zLhLD>FlIHhc%sf@O2R@AfO8?O^(aUJG11M zS)D=k?;rc07w(v%&WhV^?T8==t1TEerR1k8MGRF!`nB@>2!NAQ&)K4$O7(o zvJKSV={GOsb3dZq>b$@D^_3YPL{MnN zeWNqrB{ArA2LKwZEsg9V?WxeS<;C=(5uMGeTG%n9jHWd=-Ln!@r;d_y#Xkbv8l=%B zuJ?ua=Ehdt8Xl_yfg1%XgDSHUJsH~GD{pMg1osKbkf>;Z=7t7^GR|PhzMnMx2)m*~ zuRDptMjgIy&*OS!YW1T->EB$b>N+O0Sr;F47y)?Z&E@RN`>k@&6Y$|)ejCQ6<7(cXdP@B)!^7a@8J6ZK^;xQrb1Aw6lE z2S*j5{@S+?RKV00USIQ{jj?~6h@V`beceD!nh#5VRkx#Czm{%QwZp@S{ z%gjjgqWCUP1F;7!jSgIP(}$cPuf4s*%U`T*3DnIEIw=RUzMF@A##In6>N8~qMl=?R zO_})mSvu~RYbz_d`cy2 
z?URL{xdfag#_is9ZEeO?+-!^vJJ*4nXOCg;dyXG*zbl6yt2Qi+CrBPRvUdau=TNtSfFoT`<#uqBNp~U z0ZhtmavBqo);1U5O=ka-pSf@GuDYZLGm1ah`X8sGgvl_B{T~N?9x5{b9CUX9#RFe? zY7=P>rVPM{T>s<8C_qa3K(;`IzC#EVi3wPol0B`gf~>5Z?EEJ6`Iy}+CxS~~p2zvp zd+T(d^6&ys{4{r){#7N#vtMt2;m+BCegN*0lkdvH1v!d45Rk$HNAB57`7|nfEyUJ6 z<*ubutZm}fvg;hqcfbR#ms*mjGBP%%V)LGKrV0IaNJV+lk|VJA7ir+*mTPnQ(W%TC zzKc4QHv)u2jWk{$_!WnC5wOOIQA~b8kzV(EZ&A6CV!Bys0m`Glh+8t|0RbmL;K}#9 z(PjWM(n%U60_m zH`V`wY8uQ49&d1vXNf3AUC}703ok2 zU5(R*12Hl7^ync*de2sUX9KEf+|%}64#2F|Z+0}n_P^Ozyfwz=NtYJS77!SdqLStn zd<+6#yvK^uXGRT+N)C=_%4C)9&^0>@-c>p~IVuEgPuB@)JPV$)w^)D_TX)8eeU zcVJ2o|G9KOL;)o{abusvDeE=x=d$^t|f{g?oILloaFW z6=h-$bfE|FwYTNiG`7cjdhU#vR3d)Auy-=w%~AT!3NuKIAB{MT@&)msr&I;W!<1bC z6qRW}9wrZCjU2Xwc?qfoB5@O6Q02*^}%!=S@1U z<0Q_s_?CznYiT=~q(yH>yvDCe?bz_5^s%~mS@}6RkvS3lti-$R5}li!m7PD=H_$6A z*tLijY>@WlFGq&wI@sJrVQ+pP7NWY~eH@ z_eXUONZwCgv-2=Ok=R1koqvm!>PUlNF~oZ{ucMD8cGb;D&rq+hQLwI<=22qVnx?Gp znw0#haqOAW64MgqjlQKNnzCn5{rE$hrB%~8xO-sW8OLi;$U%9Px@_9|S7B~JQB3&h zpFg&YgbtiJx7Jox1uyey{!-Lg3R8b|1Y-PL=eZbNL-%fY4;;7?5bv%h9KRVo3B7L*&(GA$<(3)IVSHTv{cuZcj#;1xJ94Hnz) zzMxkrwF=H}w+%&FKX4J-bw3O64m*3L$w`jgJS|-L?Jm=#u)PBTH9+g?61^n_QBl#S zlk7J1H6BZ}qn4FSId{QhkbrvV4V@GIwztVZ1!-r!25W@{^eDmj|4bPFtH&gS4AMGR z9LhJ#nFsvv z@6T65WJuxc;DkDa`-OrV+`=2mc&!p2Y(&4NY&x@|T&&reuR1?6a^h`meXq9q{FDm> z1||r#c&7?QWm1NJ{>xW?HPl5$dY&<=Y0&8j-yhYr5%_SYZ7d|&5b(A?ezI_(V}u66 zSM_kQx!?MZrXz8_KdxWyUD&Ne2zfi6;{_15t^w?sc2SbB$lH*A4lGrzSnz1i`Smp- z*z;oDCpwwev{`Eqf#vwB3`3G~YblF*zY<{C&RG=*en7<|yu#c0BC&G7lK%!?xv)FQ zd3lMPtF($a52?|myI3`P8#M;Mj`M8TJ36MtL=v4iUCjlf?YZ0DjaZ=}SZ9~tSnTKb z3=IP`pkT=-yn>+nv*ry-W6HZQY zvO+1vTKRwQ_e!EEa<#~k3jTh$TL{|L4y_#>@1BI*BTVs`TpFzxMy>bTQ|rvSG-z?h z4%xZ{XkWmT2mE?)K_(oC%y9d~Nppy|dH`d-k2Ba#G;DztfTd%_^2@cRw+S%w<>TM; znw@yN1A3gnkCY(q-0Jk@e?#d0=L{T6nVgr%OZ{rozC99Vf_a1F%rt&S2fGrNJ5uMa zc&nJaEAR5038)@@Ss8RO9RRa0Rj?W&2NcmYVD8E zVT}zpE9FsN1^Mgbj6=UVBzMpg+@&bbPM|ZZ_A5Bi`1ilQzP3Ho^SUM^C^_%QYOGWh z^R%wH{x&}~hn*;4EE&XH<;{~T$}UR8VB=2ovuce?JhbVXB%x?ghI3Ski%;aA`2* 
zJZ5z%5F8+2v<_+cg9!r_d<%c;A8vSC3z~E*X>V!*-=FF4>b1*Fqg;*bVg03#^xr;? z?UpNDQzO{PaaU|q`zBw$gME6|Xil_OsI=bZ^)g)&8Pyub#mbf;PeoN>!JdE}twfMm z;yp3jDu8y4Xk|c+<+7U4`tWpPg)GU<3PTGE1+%#k@KDvmHjUMLIMm+bUaqxK%6{?Qz1p58( z9&6Nv*$m>m6~GDg*h_st8EVX>Bh!c%jC+S!_Qy_3C)rAmzB_H9BqJJ$BZF6nOH@)y z5b_pr!$;uomYNO3x~&Kgf&iIT=yVsgENy$aTj%i$p_NBP^-UHC{;MCYg#j;icU#tO z`69Cgez6|6*Bw%3%Nb#29J@IG!m#&;3#|rP%Brvhp%6Bu3!9ku+T(r5<9(q=FkM8T zC4lZvsCdf=jt-o4m}5QUCdyp2YpV)RVSAZlHdLAPYE!IvOcTR z?C9y1`< z;c^Wxs%81o>HY~I>|KB_BZIihFeQLt0Xh2<(H7=e+Ad$;H=t2^KfZfeTT}MrF2M;i z5V>>XnoX`#_18(+n(Da-dVhNdZ@kOBqVQA(W7R`JS+{ieFk}T|m6_3ED%Qqt41D&k zN|4woa*k?24SE`#w6n9rSME*AJZ#X2O#C!=6%F6kq3F<-qrW1*siXJjTsu47gxZ%v zGuse0+WTeush%wGkwhH+1*7FESSourBf(P65aIwr#o!a}y?oiHD^W&@L`vcuQrjSc z?x=lvRbFfkYEnVpuoqS+o_Tf^0&QfXsH?OCf?CQjC^{P%0bD7}AspVc-=Y+xa2F6O z$9Jxu2EnmE#kw#ya3f z(6JPFkCjaRRxPv(tG8zOj4dETTZ~2;`1U>M$@mp!SQ$1D;})(Y7Rm>;rv?+Ad|MUq zYQwvGI40+Yd%hB7KSkLH;~aaKbL_ zqd38fM<`w3QTV!D1LqXO#^j?(ANPPHFyQA&@|AD;Pl-sk2l=M4O+C5v>1I6|se`bZW0OyqyIeG^k{WZMR-j@=!He1i@IS~##t`j2bvQtKgh^vg z9Un%bqrX8vqs_&>Epvi?_ipkwH$Bizf!Gf~f@cBbk&RHy?)LV~^);r_RLjq89QkI~ zXuX!?ZYmwMnr8adKSdV5*i6bJXm4=J(k}75d%;(ULH$alrh1@ogEGEzvGXuYy4Kh* zT6yu7SdmHVa|tMlq=*ffi_9_Jm4x{CP5wB|^0DD>aSDzPgfexz2Mn4fm?Ld#{M~H$ z9^RJwkX7Kfi=fEHZzGg#dV$TVrl~G4#;{}0&&%uUO7+PO+z1z&I9T0SylG17*_}`0 zu?PKsATSi6$tOuIt%O;rz2ii-9zU!^eF@_g)Un$`QnJUzOhi#Bm7W;^_(j_R3bD0e z_A+D0PDW7PcgJZZk5=_@^ss851RYTtR_WC z?arxaWzd*?=O38Z89!tO3~qeyPe*t*+8=@GT=LeXrKF~X2|4cUE`Z!E=;&Bz;U3x^ z;eKLFQvlWyw@){8GP&?=Ef$@C58cr*(A+#0o$yKQyl}2t6W0{=Bi0h0@O*T1js%W% zXoMiWmvwkvOF*kLVYRlDKP7t$n5PvUCC*E~2aErNeQgvCJCudBl{}ZY-8C|> z!$|0vsHLC-H{GhXifN>zBo|6x8Ep4J|2|6%39Wb_m-WD>H#Zob6i=M$B)CrO0BnUFPbQMv+WEL&fSg)JGf11+^ zAEHJ(pM1R$=2s21f%?sQRcPGlKXPbBB;G3~K_&Vc1d7PuQLtvoojvj(*ZKTfa9%cN z(0G6)0MSj_|HZM?CN+9nVeDLoGfSok%?YuH{v=8{$0wjmG}NLV7A^u_&S57A9a5H@ zwBIOY{C^gC{}zojz~q*3=VwVnvB|r5ajk=|J*1Zt@itYT=wuR_z!x%-(1!2go>{8*g3j5AWm$Ti@ekEySzKIu0HE zCH?*Hj=z+t#Yvc#18Ya=p+OblpvNu08ep!x+$SqSy^Xid1Yk-z{jKc 
z9|lfRd2JvaAf{K^GFJ5mXCF)DQMzY@Zht=TbYof6i6b48m?P^AE@ zVl|8oFcv72S^hhvmt#HN1`qK5VQsVAa|F`w0%Ju}Z}jO*0hyKlSm)o9YZYin$)4&b zk$?~QukBnk)Z6aJSQq%D>zeaJq@CVV!TVD|(A$ae=-%>hyewFXv;m&XK+*sfwM>=W zVYk~36N>ijgpKpTIS`5b14i_gMCyeE=GM!8bV<~hk9XU+>`dJt z=ArDkPL>;Vo{@l-Y~7(grjR2$w))|-IGTSR;X5#Tu`ej6M9s=6ff+Ma#uwsuWUWl@ z$>ic$X8yooF}3k_?gztvN`dB)MaQB?)Jv>adb0m{yva52D8839bm1Mq6-O!qlc4*X zXgs=RVwg*9#YVl^WM@0eeT?-MnGWn&w-a#S{GW#vr@P_}Ra%AeY~`N=<0;lS00TBZ zFHhe{cl^LrCqrRP#P%D@u=7VSnDA{#3!Xd4IgM$TLniV(0tC4oNUk_TDWPCZev%V` z%7=;mXrRC13`O+!(_zYOth!8|;uiehO$lL<$#8^&Ru_MJMz8u|ZWovpn!wvd`spk> zNNc4|&VuBm(X$))UXH{K!Im+P?0YllHPBB4<#v}P!HcyQG=O+#Y|xwGgA`=pSn38+ zXG}Mw@FV!@Eto?%dRG2}*Vd0Mp^DPp(t-~Z`kjs~6I=|#=kDTi1z^*N&Mo86ySuVX zP~CY*!ql&~w?737sDJLRj6~}K$a+T7(u0_}e^j#&1QgNC9BAt!6Xe~|%}O`r9hIpL z_Y}C*J?0!3P&QtZiPX`^RtNASXqdY>33gOFF!G|iK&1-(aXOq~emuf7#~)CYbuYg& z5SN(a zu&D~1Wsl4n(EqC)0J;McYxu>GTxB~9c^7JAOt>{`s(}^T_lDlJABHYQLREu80%$=n z)R=D}zp+t2oHVM@)@P9t@Ept{Eyuw|K1R)am7}q=B;A6EIgF?b$BaabmWp#??9J-p zq!cAT^KoT`!{1!&Uf0i1K&F)7T<#5+r~K2?ce?>T+n9=)Ub7|{O!}GS?ZroF4TC{| zaU)b0bn_%vwO(r*O()>v7I(Iglxgz;k)(+NiQObBQ00?$2uOu*T>l*_%(fQ2&O5RA z{#s2qA;+PK`b-ik#)}Hw`;IW!+{GoUYB!{Zm%=RNqx*3ph0DsHr|X)j zZ~8W)fz{W6o~D7OY{WvD*|`$RrCmwWz$73J6&0(&9+fxI=@fv!bLFhb(>kG#K?E6x z{fS2CWfU0f$VR@TrqfZOBdeL6CODPYkoX3J3Nfu#fsttF?}0F&QZaLcV230)AP&`F zZeg}K^F;%NxYS~BY*gFHs4K9mp~0x4L9L=rqW7(W(z^{5%aIk3TXVJfjq(`-loo1M zmpz&`9s2h3vac%jri5oP|9pRJj0)SKvX3KU!j-t$ZJ@61Uh0u7y<^LUr2}$ zm&v~qBTC1ukM1In_A0}!6_9^I_HgXci0N%5nzYO9IoQW#*X>m<2{U@mkd*Ap#{X?{ zFiSWLFR-rG_UAhZ@gys~6@YP)Qyx+JPr*xz6ioX7x1H;40+#l^al%eRZ$+MbdS`pO zC<`R`agK7vCD;Hbap*Df&vN-r?U4jsi?x^`0x5du=0x7gVMMte`XCxM4JuoM&L=v| z1j196_{F;NfOUfBaN@;a+nv1jVaQ(8!IY%2a%>l@I!vOZh^}|ApL(Qf`Seke%SU21 z7vJS+mp7gl42*i>G! 
z&@{tRNMf!7H$X2~bO|m~8xt=KLzvn6u~75i`Cxv9p-umaFIZR=rW7EFUT=c&m~sth zpweS>Iki&%AR`tt{1t%RwFF zCk`8X_qe)fpUy4~refPE3qn-fS2qMf1%+d;3(M#WgNk{A*4KJpoBiz>xiYnFGNVEN z0bwC*&k$dAO94`!$EoC41VbPerBC|m=;wjQ|!~UJBmfi zn3l#4xPvY#n3}MBXSlMVyCY5KCGosMM%tq4!^TO9TW( z5C>V1<~eJ-xo@vc@MK%|zaM4OK;7WLgtlSbPjf1fc^z=j{j=qo3}pavcd0X;cuaHH zL>NVy+)||2m}ATUnisD(h0t#L*G9Wf>~>tH!XqkaNyD5bvnDsU7kqV3D58WLrS_EQ zE&PXUHDx+VbbxV;6&C_A%g^67=R2-BxFYx4)A-02bKJ7!t9|FD$)gD1Jm?hCjQz(2 zC<9!8?ec`ocK-7(6U7^fL&+}Bk0W;8uqdhJ@ZRjjvqfq}K}A;-?_1GJxTAOWyydC@ z{P=Z0=ipG?+g**2ePi`Cm&lDg2H9)-EKqW=pK#XhHH&3ky&)W`E-d3D+0kc|^a!0P zQ3eVCjDq(T78n@p>`wrFjHz2Ftwl=KeL}p$b#<5a(c~TXpgfzXBlq_)FqJOA=U(d`823Ctq2HvIBixP7N2aCKm zcfG}XQqY%(>5akMDiX$`Cio%&)2J7eMm(<9&EUoH0%U04CM~yN@-#gbb!>n9&p^Sn zw6;R`3&!@$h0&t#bMtihRO@O24SfIb5|5*On`3)IeN%=vqR46ZMdu6Mt~MWhO6>u# zoM?$pT*Ys;AGgdC8O4@_Wi<6}nK<|VL4l6t72Zu34U-q%O+ra@cYc2Uh5gC5!^eQ+ zn*y;P0`3;>Ij@L5Q@j)k-DxQniVvc>_kD0a5)CZVeTi{jE{442YPqW4RlBA2*7x@I z_VjVwW4ekyi$Qa~42ELbnyRXa@c@%c4waAbj=hPQ)kW1qiP5*eM&@K-yk4Hcfyn_# zE_vL2)d$}nz7L`vddW|1c4s<|DAFPK_Jq)aylbE3|0O*BU-GnWRxFdUwQn+oaY^lS z-d04)by{GX0bkc{8-nUSpLGT47_mz+6S0C`lp8=e^yF`P-?zFy=*gKa@ry4W+ApK1 zvP}IbCk3mWD-rf#dw?TMEX=ZjiE}EWt3x7P49s5#;~R;$E=!yQ+UnuIrnbr@mhU$o zxXDPuPIE2O5t|-7dNOY?*6v5p^P+MzgT^Fs5FPEa!+cn8b`l1bGgL2y$K1zzFAwL% zCTe)S#33wvdep}<^wqs9QBvZ(CX~F(=*ugk%*@r7l)|W8Cnx%=P$^O+2_KdeJfT_C zV%9&K`tjMpgilGGiq7{dDfEVV2Lzy;=6;@^ViDNNjDu#BJ3ow94W6ohg377F*mz>S zzKF&Fz_?`BPp8MngTRTG^bur^NJYqIFV$0B=dON(Mvr|xr zEyA2#*{X65@u0c68F&9GZJi-)sU*>vxIjaxO!2;*pAs5{FT8N>@&4BuF(`jhg;mvs zfcI2OnkiKTZRW`CWCgStq7AoxRbjDxe|AKtT(Ai@?sHHyuqu$uFr~jnj}+NUCW_ZC zL`gJmaXB?@Z*-W7i!2hF@+txl>Z9|*4oqSt501YhFgC>A9@!k|$F048wQ2B>i>QU9 zZjtsBj*L-|t}`d?qP@3mnBJp5WndT`W#ILy_gQlx)XLE8SLrRbEWwE%FKTwOw^vr; zWBa8z&f{c8sXWCj;^43)l}qzHdOvSjVX&!1}PnS-8iVVW3AIf{$O}CMlt6T{-PN?kt)Pkwn-ID9=So! 
(binary image data omitted)
literal 0
HcmV?d00001

diff --git a/static/images/guides/architecture/replication.svg b/static/images/guides/architecture/replication.svg
new file mode 100644
index 000000000..ff7e8e6f3
--- /dev/null
+++ b/static/images/guides/architecture/replication.svg
@@ -0,0 +1,284 @@
(SVG markup omitted: 284 lines of diagram markup)

From 484a248f20268df625e1d6c41edaea4bf4fbfe8c Mon Sep 17 00:00:00 2001
From: javier
Date: Wed, 10 Dec 2025 19:00:01 +0100
Subject: [PATCH 03/21] arrays and nanos

---
 .../guides/schema-design-essentials.md | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git
a/documentation/guides/schema-design-essentials.md b/documentation/guides/schema-design-essentials.md
index a13964892..e194a11e0 100644
--- a/documentation/guides/schema-design-essentials.md
+++ b/documentation/guides/schema-design-essentials.md
@@ -230,10 +230,9 @@ Symbols are **dictionary-encoded** and optimized for filtering and grouping:

 ### Timestamps

-- **All timestamps in QuestDB are stored in UTC** at **Microsecond resolution**:
-  Even if you can ingest data sending timestamps in nanoseconds, nanosecond
-  precision is not retained.
-- The **`TIMESTAMP`** type is recommended over **`DATETIME`**, unless you have
+- **All timestamps in QuestDB are stored in UTC**, and they will use either
+  **microsecond** or **nanosecond** resolution, depending on the chosen data type.
+- The **`TIMESTAMP`** or **`TIMESTAMP_NS`** types are recommended over **`DATETIME`**, unless you have
   checked the data types reference and you know what you are doing.
 - **At query time, you can apply a time zone conversion for display purposes**.
@@ -244,6 +243,14 @@ Symbols are **dictionary-encoded** and optimized for filtering and grouping:
   It is a legacy data type.
 - Use **`VARCHAR`** instead for general string storage.

+### Arrays
+
+QuestDB supports [N-dimensional arrays](/docs/concept/array/). Arrays are a particularly good fit when you have closely related data.
+For example, if you are storing order book data with the price and the volume for each level in the book, it can be
+a good idea to store it as a two-dimensional array, with prices in the first position and volumes in the second. You
+could also store them as two independent arrays, one with the prices and one with the volumes, and access them both using
+the same index.
+
 ### UUIDs

 - QuestDB has a dedicated
@@ -290,7 +297,7 @@ Some table properties **cannot be modified after creation**, including:

 - **The designated timestamp** (cannot be altered once set).
 - **Partitioning strategy** (cannot be changed later).
-- **Symbol capacity** (must be defined upfront, otherwise defaults apply). +- **Symbol capacity** (can be defined upfront, but will auto-increase as needed). For changes, the typical workaround is: From 25e2a6e2d32b28b3fa02063f7d1190780b34bbde Mon Sep 17 00:00:00 2001 From: javier Date: Wed, 10 Dec 2025 19:05:50 +0100 Subject: [PATCH 04/21] byoc link --- documentation/guides/architecture/replication-layer.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/documentation/guides/architecture/replication-layer.md b/documentation/guides/architecture/replication-layer.md index 9b567ecf0..ba43658d5 100644 --- a/documentation/guides/architecture/replication-layer.md +++ b/documentation/guides/architecture/replication-layer.md @@ -75,7 +75,7 @@ Refer to the [ILP overview](/docs/reference/api/ilp/overview/#multiple-urls-for- #### Bring Your Own Cloud (BYOC) -QuestDB Enterprise can be fully managed by the end user, or it can be managed in collaboration with QuestDB's team under the [BYOC model](/byoc). +QuestDB Enterprise can be fully managed by the end user, or it can be managed in collaboration with QuestDB's team under the [BYOC model](https://questdb.com/byoc/). Date: Wed, 10 Dec 2025 19:27:57 +0100 Subject: [PATCH 05/21] added reference to flyway --- documentation/guides/schema-design-essentials.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/documentation/guides/schema-design-essentials.md b/documentation/guides/schema-design-essentials.md index e194a11e0..a08f6202a 100644 --- a/documentation/guides/schema-design-essentials.md +++ b/documentation/guides/schema-design-essentials.md @@ -412,3 +412,11 @@ CREATE TABLE metrics ( PARTITION BY DAY WAL DEDUP UPSERT KEYS(timestamp, name); ``` + +## Schema management tools + +Although QuestDB supports automatic schema creation, some users prefer to use a schema management tool to implement +schema migrations. 
+ +The QuestDB team has contributed a [Flyway driver](https://documentation.red-gate.com/fd/questdb-305791448.html) that +can be used for this purpose. From 5668985faa7b85515277c22fb5984619bcbf2c0d Mon Sep 17 00:00:00 2001 From: javier Date: Wed, 10 Dec 2025 19:31:07 +0100 Subject: [PATCH 06/21] smaller images --- documentation/guides/architecture/replication-layer.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/documentation/guides/architecture/replication-layer.md b/documentation/guides/architecture/replication-layer.md index ba43658d5..c44d7dc26 100644 --- a/documentation/guides/architecture/replication-layer.md +++ b/documentation/guides/architecture/replication-layer.md @@ -18,7 +18,7 @@ page. alt="Architecture of a Multi-Primary Cluster with Multiple Read-Replicas" title="Architecture of a Multi-Primary Cluster with Multiple Read-Replicas" src="/images/guides/architecture/replication.svg" - width={1000} + width={700} forceTheme="dark" /> @@ -81,7 +81,7 @@ QuestDB Enterprise can be fully managed by the end user, or it can be managed in alt="Diagram of the Bring Your Own Cloud Architecture" title="Diagram of the Bring Your Own Cloud Architecture" src="/images/guides/architecture/bring-your-own-cloud.png" - width={1000} + width={700} forceTheme="dark" /> From ffcee5a531289722284e6bf18c9f086ec2e6dc6b Mon Sep 17 00:00:00 2001 From: javier Date: Thu, 18 Dec 2025 18:32:09 +0100 Subject: [PATCH 07/21] playbook initial draft --- documentation/playbook/demo-data-schema.md | 178 +++++++++ .../operations/docker-compose-config.md | 137 +++++++ documentation/playbook/overview.md | 44 +++ .../programmatic/php/inserting-ilp.md | 357 ++++++++++++++++++ .../sql/calculate-compound-interest.md | 112 ++++++ .../sql/force-designated-timestamp.md | 79 ++++ documentation/playbook/sql/pivoting.md | 100 +++++ .../sql/rows-before-after-value-match.md | 137 +++++++ documentation/sidebars.js | 42 +++ 9 files changed, 1186 insertions(+) create mode 100644 
documentation/playbook/demo-data-schema.md create mode 100644 documentation/playbook/operations/docker-compose-config.md create mode 100644 documentation/playbook/overview.md create mode 100644 documentation/playbook/programmatic/php/inserting-ilp.md create mode 100644 documentation/playbook/sql/calculate-compound-interest.md create mode 100644 documentation/playbook/sql/force-designated-timestamp.md create mode 100644 documentation/playbook/sql/pivoting.md create mode 100644 documentation/playbook/sql/rows-before-after-value-match.md diff --git a/documentation/playbook/demo-data-schema.md b/documentation/playbook/demo-data-schema.md new file mode 100644 index 000000000..87828440d --- /dev/null +++ b/documentation/playbook/demo-data-schema.md @@ -0,0 +1,178 @@ +--- +title: Demo Data Schema +sidebar_label: Demo data schema +description: Schema and structure of the FX market data available on demo.questdb.io +--- + +The [QuestDB demo instance at demo.questdb.io](https://demo.questdb.io) contains simulated FX market data that you can query directly. This page describes the available tables and their structure. + +## Overview + +The demo instance provides two main tables representing different types of foreign exchange market data: + +- **`core_price`** - Individual price updates from multiple ECNs (Electronic Communication Networks) +- **`market_data`** - Order book snapshots with bid/ask prices and volumes stored as 2D arrays + +Additionally, several materialized views provide pre-aggregated data at different time intervals. + +:::info Simulated Data +The FX data on the demo instance is **simulated**, not real market data. We fetch real reference prices from Yahoo Finance every few seconds for 30 currency pairs, but all order book levels and core price updates are generated algorithmically based on these reference prices. This provides realistic patterns and data volumes for testing queries without actual market data costs. 
+::: + +## core_price Table + +The `core_price` table contains individual FX price updates from various liquidity providers. Each row represents a bid/ask quote update for a specific currency pair from a specific ECN. + +### Schema + +```sql title="core_price table structure" +CREATE TABLE 'core_price' ( + timestamp TIMESTAMP, + symbol SYMBOL CAPACITY 16384 CACHE, + ecn SYMBOL CAPACITY 256 CACHE, + bid_price DOUBLE, + bid_volume LONG, + ask_price DOUBLE, + ask_volume LONG, + reason SYMBOL CAPACITY 256 CACHE, + indicator1 DOUBLE, + indicator2 DOUBLE +) timestamp(timestamp) PARTITION BY HOUR TTL 3 DAYS WAL; +``` + +### Columns + +- **`timestamp`** - Time of the price update (designated timestamp) +- **`symbol`** - Currency pair from the 30 tracked symbols (see list below) +- **`ecn`** - Electronic Communication Network providing the quote: **LMAX**, **EBS**, **Currenex**, or **Hotspot** +- **`bid_price`** - Bid price (price at which market makers are willing to buy) +- **`bid_volume`** - Volume available at the bid price +- **`ask_price`** - Ask price (price at which market makers are willing to sell) +- **`ask_volume`** - Volume available at the ask price +- **`reason`** - Reason for the price update: "normal", "liquidity_event", or "news_event" +- **`indicator1`**, **`indicator2`** - Additional market indicators + +The table tracks **30 currency pairs**: EURUSD, GBPUSD, USDJPY, USDCHF, AUDUSD, USDCAD, NZDUSD, EURJPY, GBPJPY, EURGBP, AUDJPY, CADJPY, NZDJPY, EURAUD, EURNZD, AUDNZD, GBPAUD, GBPNZD, AUDCAD, NZDCAD, EURCAD, EURCHF, GBPCHF, USDNOK, USDSEK, USDZAR, USDMXN, USDSGD, USDHKD, USDTRY. 
+ +### Sample Data + +```questdb-sql demo title="Recent core_price updates" +SELECT * FROM core_price +WHERE timestamp IN today() +LIMIT -10; +``` + +**Results:** + +| timestamp | symbol | ecn | bid_price | bid_volume | ask_price | ask_volume | reason | indicator1 | indicator2 | +| --------------------------- | ------ | -------- | --------- | ---------- | --------- | ---------- | --------------- | ---------- | ---------- | +| 2025-12-18T11:46:13.059566Z | USDCHF | LMAX | 0.7959 | 219884 | 0.7971 | 223174 | liquidity_event | 0.641 | | +| 2025-12-18T11:46:13.060542Z | USDSGD | Currenex | 1.291 | 295757049 | 1.2982 | 301215620 | normal | 0.034 | | +| 2025-12-18T11:46:13.061853Z | EURAUD | LMAX | 1.7651 | 6207630 | 1.7691 | 5631029 | liquidity_event | 0.027 | | +| 2025-12-18T11:46:13.064138Z | AUDNZD | LMAX | 1.1344 | 227668 | 1.1356 | 212604 | liquidity_event | 0.881 | | +| 2025-12-18T11:46:13.065041Z | GBPNZD | LMAX | 2.3307 | 2021166 | 2.3337 | 1712096 | normal | 0.308 | | +| 2025-12-18T11:46:13.065187Z | USDCAD | EBS | 1.3837 | 2394978 | 1.3869 | 2300556 | normal | 0.084 | | +| 2025-12-18T11:46:13.065722Z | USDZAR | EBS | 16.7211 | 28107021 | 16.7263 | 23536519 | liquidity_event | 0.151 | | +| 2025-12-18T11:46:13.066128Z | EURAUD | EBS | 1.763 | 810471822 | 1.7712 | 883424752 | news_event | 0.027 | | +| 2025-12-18T11:46:13.066700Z | CADJPY | Currenex | 113.63 | 20300827 | 114.11 | 19720915 | normal | 0.55 | | +| 2025-12-18T11:46:13.071607Z | NZDJPY | Currenex | 89.95 | 35284228 | 90.46 | 30552528 | liquidity_event | 0.69 | | + +## market_data Table + +The `market_data` table contains order book snapshots for currency pairs. Each row represents a complete view of the order book at a specific timestamp, with bid and ask prices and volumes stored as 2D arrays. 
+ +### Schema + +```sql title="market_data table structure" +CREATE TABLE 'market_data' ( + timestamp TIMESTAMP, + symbol SYMBOL CAPACITY 16384 CACHE, + bids DOUBLE[][], + asks DOUBLE[][] +) timestamp(timestamp) PARTITION BY HOUR TTL 3 DAYS; +``` + +### Columns + +- **`timestamp`** - Time of the order book snapshot (designated timestamp) +- **`symbol`** - Currency pair (e.g., EURUSD, GBPJPY) +- **`bids`** - 2D array containing bid prices and volumes: `[[price1, price2, ...], [volume1, volume2, ...]]` +- **`asks`** - 2D array containing ask prices and volumes: `[[price1, price2, ...], [volume1, volume2, ...]]` + +The arrays are structured so that: +- `bids[1]` contains bid prices (descending order - highest first) +- `bids[2]` contains corresponding bid volumes +- `asks[1]` contains ask prices (ascending order - lowest first) +- `asks[2]` contains corresponding ask volumes + +### Sample Query + +```questdb-sql demo title="Recent order book snapshots" +SELECT timestamp, symbol, + array_count(bids[1]) as bid_levels, + array_count(asks[1]) as ask_levels +FROM market_data +WHERE timestamp IN today() +LIMIT -5; +``` + +**Results:** + +| timestamp | symbol | bid_levels | ask_levels | +| --------------------------- | ------ | ---------- | ---------- | +| 2025-12-18T12:04:07.071512Z | EURAUD | 40 | 40 | +| 2025-12-18T12:04:07.072060Z | USDJPY | 40 | 40 | +| 2025-12-18T12:04:07.072554Z | USDMXN | 40 | 40 | +| 2025-12-18T12:04:07.072949Z | USDCAD | 40 | 40 | +| 2025-12-18T12:04:07.073002Z | USDSEK | 40 | 40 | + +Each order book snapshot contains 40 bid levels and 40 ask levels. 
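+
+To make the array layout concrete, the following sketch reads top-of-book values. It assumes chained 1-based element access, i.e. `bids[1][1]` returns the first (best) price of the price sub-array — adjust the indexing syntax if your QuestDB version expects a different form:
+
+```questdb-sql demo title="Top-of-book bid/ask from the order book arrays"
+SELECT timestamp, symbol,
+       bids[1][1] AS best_bid,
+       asks[1][1] AS best_ask,
+       asks[1][1] - bids[1][1] AS spread
+FROM market_data
+WHERE symbol = 'EURUSD' AND timestamp IN today()
+LIMIT -5;
+```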
+ +## Materialized Views + +Several materialized views provide pre-aggregated data at different time intervals, optimized for dashboard and analytics queries: + +### Best Bid/Offer (BBO) Views + +- **`bbo_1s`** - Best bid and offer aggregated every 1 second +- **`bbo_1m`** - Best bid and offer aggregated every 1 minute +- **`bbo_1h`** - Best bid and offer aggregated every 1 hour +- **`bbo_1d`** - Best bid and offer aggregated every 1 day + +### Core Price Aggregations + +- **`core_price_1s`** - Core prices aggregated every 1 second +- **`core_price_1d`** - Core prices aggregated every 1 day + +### Market Data OHLC + +- **`market_data_ohlc_1m`** - Open, High, Low, Close candlesticks at 1-minute intervals +- **`market_data_ohlc_15m`** - OHLC candlesticks at 15-minute intervals +- **`market_data_ohlc_1d`** - OHLC candlesticks at 1-day intervals + +These materialized views are continuously updated and provide faster query performance for common time-series aggregations. + +## Data Retention and Volume + +Both tables use a **3-day TTL (Time To Live)**, meaning data older than 3 days is automatically removed. This keeps the demo instance responsive while providing sufficient data for testing and examples. + +**Data volume per day:** +- **`market_data`**: Approximately **160 million rows** per day (order book snapshots) +- **`core_price`**: Approximately **73 million rows** per day (price updates across all ECNs and symbols) + +These volumes provide realistic scale for testing time-series queries and aggregations. + +## Using the Demo Data + +You can run queries against this data directly on [demo.questdb.io](https://demo.questdb.io). Throughout the Playbook, recipes using demo data will include a direct link to execute the query. + +:::tip +The demo instance is read-only. For testing write operations (INSERT, UPDATE, DELETE), you'll need to run QuestDB locally. See the [Quick Start guide](/docs/quick-start/) for installation instructions. 
+::: + +:::info Related Documentation +- [SYMBOL type](/docs/reference/sql/data-types/#symbol) +- [Arrays in QuestDB](/docs/reference/sql/data-types/#arrays) +- [Designated timestamp](/docs/concept/designated-timestamp/) +- [Time-series aggregations](/docs/reference/function/aggregation/) +::: diff --git a/documentation/playbook/operations/docker-compose-config.md b/documentation/playbook/operations/docker-compose-config.md new file mode 100644 index 000000000..02546e39e --- /dev/null +++ b/documentation/playbook/operations/docker-compose-config.md @@ -0,0 +1,137 @@ +--- +title: Configure QuestDB with Docker Compose +sidebar_label: Docker Compose config +description: Override QuestDB configuration parameters using environment variables in Docker Compose +--- + +You can override any QuestDB configuration parameter using environment variables in Docker Compose. This is useful for setting custom ports, authentication credentials, memory limits, and other operational settings without modifying configuration files. + +## Environment Variable Format + +To override configuration parameters via environment variables: + +1. **Prefix with `QDB_`**: Add `QDB_` before the parameter name +2. **Capitalize**: Convert to uppercase +3. 
**Replace dots with underscores**: Change `.` to `_` + +For example: +- `pg.user` becomes `QDB_PG_USER` +- `pg.password` becomes `QDB_PG_PASSWORD` +- `cairo.sql.copy.buffer.size` becomes `QDB_CAIRO_SQL_COPY_BUFFER_SIZE` + +## Example: Custom PostgreSQL Credentials + +This Docker Compose file overrides the default PostgreSQL wire protocol credentials: + +```yaml title="docker-compose.yml - Override pg.user and pg.password" +version: "3.9" + +services: + questdb: + image: questdb/questdb + container_name: custom_questdb + restart: always + ports: + - "8812:8812" + - "9000:9000" + - "9009:9009" + - "9003:9003" + extra_hosts: + - "host.docker.internal:host-gateway" + environment: + - QDB_PG_USER=borat + - QDB_PG_PASSWORD=clever_password + volumes: + - ./questdb/questdb_root:/var/lib/questdb/ +``` + +This configuration: +- Sets PostgreSQL wire protocol username to `borat` +- Sets password to `clever_password` +- Persists data to `./questdb/questdb_root` on the host machine +- Exposes all QuestDB ports (web console, HTTP, ILP, PostgreSQL wire) + +## Common Configuration Examples + +### Increase Write Buffer Size + +```yaml title="Increase buffer sizes for high-throughput writes" +environment: + - QDB_CAIRO_SQL_COPY_BUFFER_SIZE=4194304 + - QDB_LINE_TCP_MAINTENANCE_JOB_INTERVAL=500 +``` + +### Configure Memory Limits + +```yaml title="Set memory allocation for query execution" +environment: + - QDB_CAIRO_SQL_PAGE_FRAME_MAX_ROWS=1000000 + - QDB_CAIRO_SQL_PAGE_FRAME_MIN_ROWS=100000 +``` + +### Enable Debug Logging + +```yaml title="Enable verbose logging for troubleshooting" +environment: + - QDB_LOG_LEVEL=DEBUG + - QDB_LOG_BUFFER_SIZE=1048576 +``` + +### Custom Data Directory Permissions + +```yaml title="Run with specific user/group for volume permissions" +services: + questdb: + image: questdb/questdb + user: "1000:1000" + environment: + - QDB_CAIRO_ROOT=/var/lib/questdb + volumes: + - ./questdb_data:/var/lib/questdb +``` + +## Complete Configuration Reference + +For a 
full list of available configuration parameters, see: +- [Server Configuration Reference](/docs/reference/configuration/) - All configurable parameters with descriptions +- [Docker Deployment Guide](/docs/deployment/docker/) - Docker-specific setup instructions + +## Verifying Configuration + +After starting QuestDB with custom configuration, verify the settings: + +1. **Check logs**: View container logs to confirm configuration was applied: + ```bash + docker compose logs questdb + ``` + +2. **Query system tables**: Connect via PostgreSQL wire protocol and query configuration: + ```sql + SELECT * FROM sys.config; + ``` + +3. **Web console**: Access the web console at `http://localhost:9000` and check the "Configuration" tab + +:::tip +Keep sensitive configuration like passwords in a `.env` file and reference them in `docker-compose.yml`: + +```yaml +environment: + - QDB_PG_PASSWORD=${QUESTDB_PASSWORD} +``` + +Then create a `.env` file: +``` +QUESTDB_PASSWORD=your_secure_password +``` +::: + +:::warning Volume Permissions +If you encounter permission errors with mounted volumes, ensure the QuestDB container user has write access to the host directory. You may need to set ownership with `chown -R 1000:1000 ./questdb_root` or run the container with a specific user ID. +::: + +:::info Related Documentation +- [Server Configuration Reference](/docs/reference/configuration/) +- [Docker Deployment Guide](/docs/deployment/docker/) +- [PostgreSQL Wire Protocol](/docs/reference/api/postgres/) +::: diff --git a/documentation/playbook/overview.md b/documentation/playbook/overview.md new file mode 100644 index 000000000..bc899321f --- /dev/null +++ b/documentation/playbook/overview.md @@ -0,0 +1,44 @@ +--- +title: Playbook Overview +sidebar_label: Overview +description: Quick recipes and practical examples for common QuestDB tasks and queries +--- + +The Playbook is a collection of **short, actionable recipes** that demonstrate how to accomplish specific tasks with QuestDB. 
Each recipe follows a problem-solution-result format, making it easy to find and apply solutions quickly. + +## What is the Playbook? + +Unlike comprehensive reference documentation, the Playbook focuses on practical examples for: + +- **Common SQL patterns** - Window functions, pivoting, time-series aggregations +- **Programmatic integration** - Language-specific client examples +- **Operations** - Deployment and configuration tasks + +Each recipe provides a focused solution to a specific problem, with working code examples and expected results. + +## Structure + +The Playbook is organized into three main sections: + +- **SQL Recipes** - Common SQL patterns, window functions, and time-series queries +- **Programmatic** - Language-specific client examples and integration patterns +- **Operations** - Deployment, configuration, and operational tasks + +## Running the Examples + +**Most recipes run directly on our [live demo instance at demo.questdb.io](https://demo.questdb.io)** without any local setup. Queries that can be executed on the demo site are marked with a direct link to run them. + +For recipes that require write operations or specific configuration, the recipe will indicate what setup is needed. + +The demo instance contains live FX market data with tables for core prices and order book snapshots. See the [Demo Data Schema](/docs/playbook/demo-data-schema/) page for details about available tables and their structure. + +## Using the Playbook + +Each recipe follows a consistent format: + +1. **Problem statement** - What you're trying to accomplish +2. **Solution** - Code example with explanation +3. **Results** - Expected output or verification +4. **Additional context** - Tips, variations, or related documentation links + +Start by browsing the SQL Recipes section for common patterns, or jump directly to the recipe that matches your needs. 
diff --git a/documentation/playbook/programmatic/php/inserting-ilp.md b/documentation/playbook/programmatic/php/inserting-ilp.md new file mode 100644 index 000000000..00d5c28a6 --- /dev/null +++ b/documentation/playbook/programmatic/php/inserting-ilp.md @@ -0,0 +1,357 @@ +--- +title: Insert Data from PHP Using ILP +sidebar_label: Inserting via ILP +description: Send time-series data from PHP to QuestDB using the InfluxDB Line Protocol +--- + +QuestDB doesn't maintain an official PHP library, but since the ILP (InfluxDB Line Protocol) is text-based, you can easily send your data using PHP's built-in HTTP or socket functions, or use the official InfluxDB PHP client library. + +## Available Approaches + +This guide covers three methods for sending ILP data to QuestDB from PHP: + +1. **HTTP with cURL** (recommended for most use cases) + - Full control over ILP formatting and timestamps + - No external dependencies beyond PHP's built-in cURL + - Requires manual ILP string construction + +2. **InfluxDB v2 PHP Client** (easiest to use) + - Clean Point builder API + - Automatic batching and error handling + - **Limitation:** Cannot use custom timestamps with QuestDB (must use server timestamps) + - Requires Composer packages: `influxdata/influxdb-client-php` and `guzzlehttp/guzzle` + +3. **TCP Socket** (highest throughput) + - Best performance for high-volume scenarios + - No acknowledgments - data loss possible + - Manual implementation required + +## ILP Protocol Overview + +The ILP protocol allows you to send data to QuestDB using a simple line-based text format: + +``` +table_name,comma_separated_symbols comma_separated_non_symbols optional_timestamp\n +``` + +Each line represents one row of data. 
For example, these two lines are well-formed ILP messages: + +``` +readings,city=London,make=Omron temperature=23.5,humidity=0.343 1465839830100400000\n +readings,city=Bristol,make=Honeywell temperature=23.2,humidity=0.443\n +``` + +The format consists of: +- **Table name**: The target table for the data +- **Symbols** (tags): Comma-separated key-value pairs for indexed categorical data +- **Columns** (fields): Space-separated, then comma-separated key-value pairs for numerical or string data +- **Timestamp** (optional): Nanosecond-precision timestamp; if omitted, QuestDB uses server time + +For complete ILP specification, see the [ILP reference documentation](/docs/reference/api/ilp/overview/). + +## ILP Over HTTP + +QuestDB supports ILP data via HTTP or TCP. **HTTP is the recommended approach** for most use cases as it provides better reliability and easier debugging. + +To send data via HTTP: +1. Send a POST request to `http://localhost:9000/write` (or your QuestDB instance endpoint) +2. Set `Content-Type: text/plain` header +3. Include ILP-formatted rows in the request body +4. 
For higher throughput, batch multiple rows in a single request
+
+### HTTP Buffering Example
+
+The following PHP class provides buffered insertion with automatic flushing based on either row count or elapsed time:
+
+```php title="Buffered ILP insertion via HTTP"
+<?php
+class DataInserter {
+    private $buffer = [];
+    private $bufferSize;
+    private $flushInterval;
+    private $lastFlushTime;
+    private $endpoint = 'http://localhost:9000/write';
+
+    public function __construct($bufferSize = 10, $flushInterval = 30) {
+        $this->bufferSize = $bufferSize;
+        $this->flushInterval = $flushInterval;
+        $this->lastFlushTime = time();
+    }
+
+    public function __destruct() {
+        // Attempt to flush any remaining data when script is terminating
+        $this->flush();
+    }
+
+    public function insertRow($tableName, $symbols, $columns, $timestamp = null) {
+        $row = $this->formatRow($tableName, $symbols, $columns, $timestamp);
+        $this->buffer[] = $row;
+        $this->checkFlushConditions();
+    }
+
+    private function formatRow($tableName, $symbols, $columns, $timestamp) {
+        $escape = function($value) {
+            return str_replace([' ', ',', "\n"], ['\ ', '\,', '\n'], $value);
+        };
+
+        $symbolString = implode(',', array_map(
+            function($k, $v) use ($escape) { return "$k={$escape($v)}"; },
+            array_keys($symbols), $symbols
+        ));
+
+        $columnString = implode(',', array_map(
+            function($k, $v) use ($escape) { return "$k={$escape($v)}"; },
+            array_keys($columns), $columns
+        ));
+
+        // Check if timestamp is provided
+        $timestampPart = is_null($timestamp) ?
'' : " $timestamp"; + + return "$tableName,$symbolString $columnString$timestampPart"; + } + + private function checkFlushConditions() { + if (count($this->buffer) >= $this->bufferSize || (time() - $this->lastFlushTime) >= $this->flushInterval) { + $this->flush(); + } + } + + private function flush() { + if (empty($this->buffer)) { + return; // Nothing to flush + } + $data = implode("\n", $this->buffer); + $this->buffer = []; + $this->lastFlushTime = time(); + + $ch = curl_init(); + curl_setopt($ch, CURLOPT_URL, $this->endpoint); + curl_setopt($ch, CURLOPT_POST, true); + curl_setopt($ch, CURLOPT_POSTFIELDS, $data); + curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); + curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: text/plain']); + curl_exec($ch); + curl_close($ch); + } +} + +// Usage example: +$inserter = new DataInserter(10, 30); + +// Inserting rows for London +$inserter->insertRow("test_readings", ["city" => "London", "make" => "Omron"], ["temperature" => 23.5, "humidity" => 0.343], "1650573480100400000"); +$inserter->insertRow("test_readings", ["city" => "London", "make" => "Sony"], ["temperature" => 21.0, "humidity" => 0.310]); +$inserter->insertRow("test_readings", ["city" => "London", "make" => "Philips"], ["temperature" => 22.5, "humidity" => 0.333], "1650573480100500000"); +$inserter->insertRow("test_readings", ["city" => "London", "make" => "Samsung"], ["temperature" => 24.0, "humidity" => 0.350]); + +// Inserting rows for Madrid +$inserter->insertRow("test_readings", ["city" => "Madrid", "make" => "Omron"], ["temperature" => 25.5, "humidity" => 0.360], "1650573480100600000"); +$inserter->insertRow("test_readings", ["city" => "Madrid", "make" => "Sony"], ["temperature" => 23.0, "humidity" => 0.340]); +$inserter->insertRow("test_readings", ["city" => "Madrid", "make" => "Philips"], ["temperature" => 26.0, "humidity" => 0.370], "1650573480100700000"); +$inserter->insertRow("test_readings", ["city" => "Madrid", "make" => "Samsung"], ["temperature" => 
22.0, "humidity" => 0.355]); + +// Inserting rows for New York +$inserter->insertRow("test_readings", ["city" => "New York", "make" => "Omron"], ["temperature" => 20.5, "humidity" => 0.330], "1650573480100800000"); +$inserter->insertRow("test_readings", ["city" => "New York", "make" => "Sony"], ["temperature" => 19.0, "humidity" => 0.320]); +$inserter->insertRow("test_readings", ["city" => "New York", "make" => "Philips"], ["temperature" => 21.0, "humidity" => 0.340], "1650573480100900000"); +$inserter->insertRow("test_readings", ["city" => "New York", "make" => "Samsung"], ["temperature" => 18.5, "humidity" => 0.335]); +?> +``` + +This class: +- Buffers rows until either 10 rows are accumulated or 30 seconds have elapsed +- Properly escapes special characters (spaces, commas, newlines) in values +- Automatically flushes remaining data when the script terminates +- Uses cURL for HTTP communication + +:::tip +For production use, consider adding error handling to check the HTTP response status and implement retry logic for failed requests. +::: + +## Using the InfluxDB v2 PHP Client + +Another approach is to use the official [InfluxDB PHP client library](https://github.com/influxdata/influxdb-client-php), which supports the InfluxDB v2 write API. QuestDB is compatible with this API, making the client library a convenient option. + +### Installation + +Install the required packages via Composer: + +```bash +composer require influxdata/influxdb-client-php guzzlehttp/guzzle +``` + +**Required dependencies:** +- `influxdata/influxdb-client-php` - The InfluxDB v2 PHP client library +- `guzzlehttp/guzzle` - A PSR-18 compatible HTTP client (required by the InfluxDB client) + +:::info Alternative HTTP Clients +The InfluxDB client requires a PSR-18 compatible HTTP client. While we recommend Guzzle, you can use alternatives like `php-http/guzzle7-adapter` or `symfony/http-client` if preferred. 
+:::
+
+### Configuration
+
+When using the InfluxDB client with QuestDB:
+
+- **URL**: Use your QuestDB HTTP endpoint (default: `http://localhost:9000`)
+- **Token**: Not required - can be left empty or use any string
+- **Bucket**: Not required - can be any string (ignored by QuestDB)
+- **Organization**: Not required - can be any string (ignored by QuestDB)
+
+:::warning Write API Only
+QuestDB only supports the **InfluxDB v2 write API** when using this client. Query operations are not supported through the InfluxDB client - use QuestDB's PostgreSQL wire protocol or REST API for queries instead.
+:::
+
+### Example Code
+
+```php title="Using InfluxDB v2 PHP client with QuestDB"
+<?php
+require 'vendor/autoload.php';
+
+use InfluxDB2\Client;
+use InfluxDB2\Model\WritePrecision;
+use InfluxDB2\Point;
+
+$client = new Client([
+    "url" => "http://localhost:9000",
+    "token" => "", // Not required for QuestDB
+    "bucket" => "default", // Not used by QuestDB
+    "org" => "default", // Not used by QuestDB
+    "precision" => WritePrecision::NS
+]);
+
+$writeApi = $client->createWriteApi();
+
+// Write points using the Point builder
+// Note: Omit ->time() to let QuestDB assign server timestamps
+$point = Point::measurement("readings")
+    ->addTag("city", "London")
+    ->addTag("make", "Omron")
+    ->addField("temperature", 23.5)
+    ->addField("humidity", 0.343);
+
+$writeApi->write($point);
+
+// Write multiple points
+$points = [
+    Point::measurement("readings")
+        ->addTag("city", "Madrid")
+        ->addTag("make", "Sony")
+        ->addField("temperature", 25.5)
+        ->addField("humidity", 0.360),
+
+    Point::measurement("readings")
+        ->addTag("city", "New York")
+        ->addTag("make", "Philips")
+        ->addField("temperature", 20.5)
+        ->addField("humidity", 0.330)
+];
+
+$writeApi->write($points);
+
+// Always close the client
+$client->close();
+?>
+```
+
+### Benefits and Limitations
+
+The Point builder provides several advantages:
+- **Automatic ILP formatting and escaping** - No need to manually construct ILP strings
+- **Built-in error handling** - The client handles HTTP errors and retries
+- **Batching support** - Automatically batches writes for
better performance
+- **Clean API** - Fluent Point builder interface is easy to use
+
+:::warning Timestamp Limitation
+The InfluxDB PHP client **cannot be used with custom timestamps** when writing to QuestDB. When you call `->time()` with a nanosecond timestamp, the client serializes it in scientific notation (e.g., `1.76607297E+18`), which QuestDB's ILP parser rejects.
+
+**Solution:** Always omit the `->time()` call and let QuestDB assign server-side timestamps automatically. This is the only reliable way to use the InfluxDB PHP client with QuestDB.
+
+**If you need client-side timestamps:** Use the raw HTTP cURL approach (documented above) where you manually format the ILP string with full control over timestamp formatting.
+:::
+
+## ILP Over TCP Socket
+
+ILP over a raw TCP socket provides higher throughput but is less reliable than HTTP. The message format is identical - only the transport changes.
+
+Use TCP when:
+- You need maximum ingestion throughput
+- Your application can handle potential data loss on connection failures
+- You're willing to implement your own connection management and error handling
+
+### TCP Socket Example
+
+Here's a basic example using PHP's socket functions:
+
+```php title="Send ILP data via TCP socket"
+<?php
+// Connect to QuestDB's ILP TCP port (default 9009)
+$socket = fsockopen('localhost', 9009, $errno, $errstr, 5);
+if ($socket === false) {
+    die("Could not connect to QuestDB: $errstr ($errno)\n");
+}
+
+// A single ILP row; the trailing newline terminates the message
+$row = "test_readings,city=London,make=Omron temperature=23.5,humidity=0.343\n";
+fwrite($socket, $row);
+
+// Close the connection
+fclose($socket);
+?>
+```
+
+This basic example:
+- Connects to QuestDB's ILP port (default 9009)
+- Sends a single row of data
+- Closes the connection
+
+For production use with TCP, you should:
+- Keep connections open and reuse them for multiple rows
+- Implement batching to reduce network overhead
+- Add proper error handling and reconnection logic
+- Consider using a connection pool for concurrent writes
+
+:::warning TCP Considerations
+TCP ILP does not provide acknowledgments for successful writes. If the connection drops, you may lose data without notification. For critical data, use HTTP ILP instead.
+::: + +## Choosing the Right Approach + +| Feature | HTTP (cURL) | HTTP (InfluxDB Client) | TCP Socket | +|---------|-------------|------------------------|------------| +| **Reliability** | High - responses indicate success/failure | High - responses indicate success/failure | Low - no acknowledgment | +| **Throughput** | Good | Good | Excellent | +| **Error handling** | Manual via cURL | Built-in via client library | Manual implementation required | +| **Ease of use** | Medium - manual ILP formatting | High - Point builder API | Low - manual everything | +| **Custom timestamps** | ✅ Full control | ❌ Must use server timestamps | ✅ Full control | +| **Dependencies** | None (cURL built-in) | `influxdb-client-php`
    `guzzlehttp/guzzle` | None (sockets built-in) | +| **Authentication** | Standard HTTP auth | Standard HTTP auth | Limited options | +| **Recommended for** | Custom timestamps required | Ease of development, server timestamps acceptable | High-volume, loss-tolerant scenarios | + +:::info Related Documentation +- [ILP reference documentation](/docs/reference/api/ilp/overview/) +- [HTTP REST API](/docs/reference/api/rest/) +- [Authentication and security](/docs/operations/rbac/) +::: diff --git a/documentation/playbook/sql/calculate-compound-interest.md b/documentation/playbook/sql/calculate-compound-interest.md new file mode 100644 index 000000000..cca0c1dd1 --- /dev/null +++ b/documentation/playbook/sql/calculate-compound-interest.md @@ -0,0 +1,112 @@ +--- +title: Calculate Compound Interest +sidebar_label: Compound interest +description: Calculate compound interest over time using POWER and window functions +--- + +Calculate compound interest over multiple periods using SQL, where each period's interest is calculated on the previous period's ending balance. This is useful for financial modeling, investment projections, and interest calculations. + +:::info Generated Data +This query uses generated data from `long_sequence()` to create a time series of years, so it can run directly on the demo instance without requiring any existing tables. +::: + +## Problem: Need Year-by-Year Growth + +You want to calculate compound interest over 5 years, starting with an initial principal of 1000, with an annual interest rate of 0.1 (10%). Each year's interest should be calculated on the previous year's ending balance. 
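+
+Since compound interest has a closed form, you can sanity-check the expected final balance with a one-liner before building the full year-by-year query — `POWER(1 + rate, periods)` gives the growth factor:
+
+```questdb-sql demo title="Closed-form check: 1000 at 10% for 5 years"
+SELECT 1000.0 * POWER(1 + 0.1, 5) AS final_balance; -- ≈ 1610.51
+```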
+ +## Solution: Use POWER Function with Window Functions + +Combine the `POWER()` function with `FIRST_VALUE()` window function to calculate compound interest: + +```questdb-sql demo title="Calculate compound interest over 5 years" +WITH +year_series AS ( + SELECT 2000 as start_year, 2000 + (x - 1) AS timestamp, + 0.1 AS interest_rate, 1000.0 as initial_principal + FROM long_sequence(5) +), +compounded_values AS ( + SELECT + timestamp, + initial_principal, + interest_rate, + initial_principal * + POWER( + 1 + interest_rate, + timestamp - start_year + 1 + ) AS compounding + FROM + year_series +), compounding_year_before AS ( +SELECT + timestamp, + initial_principal, + interest_rate, + FIRST_VALUE(cv.compounding) + OVER ( + ORDER BY timestamp + ROWS between 1 preceding and 1 preceding + ) AS year_principal, + cv.compounding as compounding_amount +FROM + compounded_values cv +ORDER BY + timestamp + ) +select timestamp, initial_principal, interest_rate, +coalesce(year_principal, initial_principal) as year_principal, +compounding_amount +from compounding_year_before +``` + +**Results:** + +| timestamp | initial_principal | interest_rate | year_principal | compounding_amount | +|-----------|-------------------|---------------|----------------|-------------------| +| 2000 | 1000.0 | 0.1 | 1000.0 | 1100.0 | +| 2001 | 1000.0 | 0.1 | 1100.0 | 1210.0 | +| 2002 | 1000.0 | 0.1 | 1210.0 | 1331.0 | +| 2003 | 1000.0 | 0.1 | 1331.0 | 1464.1 | +| 2004 | 1000.0 | 0.1 | 1464.1 | 1610.51 | + +Each row shows how the principal grows year over year, with interest compounding on the previous year's ending balance. + +## How It Works + +The query uses a multi-step CTE approach: + +1. **Generate year series**: Use `long_sequence(5)` to create 5 rows representing years 2000-2004 +2. **Calculate compound amount**: Use `POWER(1 + interest_rate, years)` to compute the ending balance for each year +3. 
**Get previous year's balance**: Use `FIRST_VALUE()` with window frame `ROWS between 1 preceding and 1 preceding` to access the previous row's compounding amount +4. **Handle first year**: Use `COALESCE()` to show the initial principal for the first year + +The `POWER()` function calculates the compound interest formula: `principal * (1 + rate)^periods` + +## Customizing the Calculation + +You can modify the parameters: +- **Start year**: Change `2000` to your desired start year (appears twice in the query) +- **Initial principal**: Change `1000.0` to your starting amount +- **Interest rate**: Change `0.1` to your rate (0.1 = 10%) +- **Number of periods**: Change `long_sequence(5)` to your desired number of years + +```questdb-sql demo title="Example with different parameters" +WITH +year_series AS ( + SELECT 2025 as start_year, 2025 + (x - 1) AS timestamp, + 0.05 AS interest_rate, 5000.0 as initial_principal + FROM long_sequence(10) +), +-- ... rest of query remains the same +``` + +:::tip +For more complex scenarios like monthly or quarterly compounding, adjust the time period generation and the exponent in the POWER function accordingly. 
+::: + +:::info Related Documentation +- [POWER function](/docs/reference/function/numeric/#power) +- [Window functions](/docs/reference/sql/select/#window-functions) +- [FIRST_VALUE window function](/docs/reference/function/window/#first_value) +- [long_sequence](/docs/reference/function/row-generator/#long_sequence) +::: diff --git a/documentation/playbook/sql/force-designated-timestamp.md b/documentation/playbook/sql/force-designated-timestamp.md new file mode 100644 index 000000000..f8a2d5c0a --- /dev/null +++ b/documentation/playbook/sql/force-designated-timestamp.md @@ -0,0 +1,79 @@ +--- +title: Force a Designated Timestamp +sidebar_label: Force designated timestamp +description: Learn how to explicitly set a designated timestamp column in QuestDB queries using the TIMESTAMP keyword +--- + +Sometimes you need to force a designated timestamp in your query. This happens when you want to run operations like `SAMPLE BY` with a non-designated timestamp column, or when QuestDB applies certain functions or joins and loses track of the designated timestamp. + +## Problem: Lost Designated Timestamp + +When you run this query on the demo instance, you'll notice the `time` column is not recognized as a designated timestamp because we cast it to a string and back: + +```questdb-sql demo title="Query without designated timestamp" +SELECT + TO_TIMESTAMP(timestamp::STRING, 'yyyy-MM-ddTHH:mm:ss.SSSUUUZ') time, + symbol, + ecn, + bid_price +FROM + core_price +WHERE timestamp IN today() +LIMIT 10; +``` + +Without a designated timestamp, you cannot use time-series operations like `SAMPLE BY`. 
+ +## Solution: Use the TIMESTAMP Keyword + +You can force the designated timestamp using the `TIMESTAMP()` keyword, which allows you to run time-series operations: + +```questdb-sql demo title="Force designated timestamp with TIMESTAMP keyword" +WITH t AS ( + ( + SELECT + TO_TIMESTAMP(timestamp::STRING, 'yyyy-MM-ddTHH:mm:ss.SSSUUUZ') time, + symbol, + ecn, + bid_price + FROM + core_price + WHERE timestamp >= dateadd('h', -1, now()) + ORDER BY time + ) TIMESTAMP (time) +) +SELECT * FROM t LATEST BY symbol; +``` + +The `TIMESTAMP(time)` clause explicitly tells QuestDB which column to use as the designated timestamp, enabling `LATEST BY` and other time-series operations. This query gets the most recent price for each symbol in the last hour. + +## Common Case: UNION Queries + +The designated timestamp is often lost when using `UNION` or `UNION ALL`. This is intentional - QuestDB cannot guarantee that the combined results are in order, and designated timestamps must always be in ascending order. + +You can restore the designated timestamp by applying `ORDER BY` and then using `TIMESTAMP()`: + +```questdb-sql demo title="Restore designated timestamp after UNION ALL" +( + SELECT * FROM + ( + SELECT timestamp, symbol FROM core_price WHERE timestamp >= dateadd('m', -1, now()) + UNION ALL + SELECT timestamp, symbol FROM core_price WHERE timestamp >= dateadd('m', -1, now()) + ) ORDER BY timestamp +) +TIMESTAMP(timestamp) +LIMIT 10; +``` + +This query combines the last minute of data twice using `UNION ALL`, then restores the designated timestamp. + +:::warning Order is Required +The `TIMESTAMP()` keyword requires that the data is already sorted by the timestamp column. If the data is not in order, the query will fail. Always include `ORDER BY` before applying `TIMESTAMP()`. 
+::: + +:::info Related Documentation +- [Designated Timestamp concept](/docs/concept/designated-timestamp/) +- [TIMESTAMP keyword reference](/docs/reference/sql/select/#timestamp) +- [SAMPLE BY aggregation](/docs/reference/function/aggregation/#sample-by) +::: diff --git a/documentation/playbook/sql/pivoting.md b/documentation/playbook/sql/pivoting.md new file mode 100644 index 000000000..07ad07ba5 --- /dev/null +++ b/documentation/playbook/sql/pivoting.md @@ -0,0 +1,100 @@ +--- +title: Pivoting Query Results +sidebar_label: Pivoting results +description: Transform rows into columns using CASE statements to pivot time-series data +--- + +Pivoting transforms row-based data into column-based data, where values from one column become column headers. This is useful for creating wide-format reports or comparison tables. + +## Problem: Long-format Results + +When you aggregate data with `SAMPLE BY`, you get one row per time interval and grouping value: + +```questdb-sql demo title="Query returning rows per symbol and timestamp" +SELECT timestamp, symbol, SUM(bid_price) AS total_bid +FROM core_price +WHERE timestamp IN today() +SAMPLE BY 1m +LIMIT 20; +``` + +**Results:** + +| timestamp | symbol | total_bid | +| --------------------------- | ------ | ------------------ | +| 2025-12-18T00:00:00.000000Z | AUDUSD | 1146.7547999999995 | +| 2025-12-18T00:00:00.000000Z | USDTRY | 77545.1637 | +| 2025-12-18T00:00:00.000000Z | USDSEK | 15655.122000000012 | +| 2025-12-18T00:00:00.000000Z | USDCHF | 1308.9189999999994 | +| 2025-12-18T00:00:00.000000Z | AUDCAD | 1533.120900000004 | +| 2025-12-18T00:00:00.000000Z | EURNZD | 3502.5426999999922 | +| 2025-12-18T00:00:00.000000Z | AUDNZD | 2014.2881000000089 | +| 2025-12-18T00:00:00.000000Z | USDMXN | 31111.124799999983 | +| 2025-12-18T00:00:00.000000Z | EURGBP | 1501.919500000002 | +| 2025-12-18T00:00:00.000000Z | EURJPY | 305747.47 | +| 2025-12-18T00:00:00.000000Z | USDZAR | 28375.69069999998 | +| 2025-12-18T00:00:00.000000Z | EURUSD 
| 2034.6741000000018 | +| 2025-12-18T00:00:00.000000Z | NZDCAD | 1365.2795000000028 | +| 2025-12-18T00:00:00.000000Z | USDCAD | 2318.794500000005 | +| 2025-12-18T00:00:00.000000Z | GBPNZD | 4033.9539000000054 | +| 2025-12-18T00:00:00.000000Z | NZDUSD | 977.1505000000012 | +| 2025-12-18T00:00:00.000000Z | USDHKD | 13200.823400000027 | +| 2025-12-18T00:00:00.000000Z | GBPCHF | 1856.3431999999962 | +| 2025-12-18T00:00:00.000000Z | NZDJPY | 152123.41999999998 | +| 2025-12-18T00:00:00.000000Z | GBPJPY | 348693.1200000006 | + +This format has multiple rows per timestamp, one for each symbol. + +## Solution: Pivot Using CASE Statements + +To get one row per timestamp with a column for each symbol, use conditional aggregation with `CASE` statements: + +```questdb-sql demo title="Pivot symbols into columns" +SELECT timestamp, + SUM(CASE WHEN symbol='EURUSD' THEN bid_price END) AS EURUSD, + SUM(CASE WHEN symbol='GBPUSD' THEN bid_price END) AS GBPUSD, + SUM(CASE WHEN symbol='USDJPY' THEN bid_price END) AS USDJPY, + SUM(CASE WHEN symbol='USDCHF' THEN bid_price END) AS USDCHF, + SUM(CASE WHEN symbol='AUDUSD' THEN bid_price END) AS AUDUSD, + SUM(CASE WHEN symbol='USDCAD' THEN bid_price END) AS USDCAD, + SUM(CASE WHEN symbol='NZDUSD' THEN bid_price END) AS NZDUSD +FROM core_price +WHERE timestamp IN today() +SAMPLE BY 1m +LIMIT 5; +``` + +Now each timestamp has a single row with all symbols as columns, making cross-symbol comparison much easier. + +## How It Works + +The `CASE` statement conditionally includes values: + +```sql +SUM(CASE WHEN symbol='EURUSD' THEN bid_price END) AS EURUSD +``` + +This means: +1. For each row, if `symbol='EURUSD'`, include the `bid_price` value +2. Otherwise, include `NULL` (implicit) +3. `SUM()` aggregates only the non-NULL values for each timestamp + +The same pattern applies to each symbol, creating one column per unique value. 
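
Because the per-symbol expressions are mechanical, they are easy to generate in application code when the list of pivot values is long or changes often. A minimal Python sketch (the helper name `build_pivot_sql` and its formatting are illustrative, not a QuestDB API):

```python
def build_pivot_sql(symbols, table="core_price", value_col="bid_price"):
    # One conditional aggregate per pivot value, aliased to the value itself
    cases = ",\n       ".join(
        f"SUM(CASE WHEN symbol='{s}' THEN {value_col} END) AS {s}"
        for s in symbols
    )
    return (
        f"SELECT timestamp,\n       {cases}\n"
        f"FROM {table}\n"
        "WHERE timestamp IN today()\n"
        "SAMPLE BY 1m;"
    )

print(build_pivot_sql(["EURUSD", "GBPUSD"]))
```

The generated string can then be sent over PGWire or the REST API like any other query.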
+ +## Use Cases + +Pivoting is useful for: +- **Comparison tables**: Side-by-side comparison of metrics across categories +- **Dashboard exports**: Wide-format data for spreadsheets or BI tools +- **Correlation analysis**: Computing correlations between time-series in different columns +- **Report generation**: Creating fixed-width reports with known categories + +:::tip +For unknown or dynamic column lists, you'll need to generate the CASE statements programmatically in your application code. SQL doesn't support dynamic column generation. +::: + +:::info Related Documentation +- [CASE expressions](/docs/reference/sql/case/) +- [SAMPLE BY aggregation](/docs/reference/function/aggregation/#sample-by) +- [Aggregation functions](/docs/reference/function/aggregation/) +::: diff --git a/documentation/playbook/sql/rows-before-after-value-match.md b/documentation/playbook/sql/rows-before-after-value-match.md new file mode 100644 index 000000000..4cb23ef68 --- /dev/null +++ b/documentation/playbook/sql/rows-before-after-value-match.md @@ -0,0 +1,137 @@ +--- +title: Find Rows Before and After Value Match +sidebar_label: Rows before/after match +description: Use LAG and LEAD window functions to access values from surrounding rows +--- + +Access values from rows before and after the current row to find patterns, detect changes, or provide context around events. This is useful for comparing values across adjacent rows or detecting local minimums and maximums. + +## Problem: Need Surrounding Context + +You want to find all rows in the `core_price` table where the bid price is lower than the prices in the surrounding rows (5 rows before and 5 rows after). This helps identify local price drops or troughs in the EURUSD time series. 
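
As a reference point, the same ±5-row local-minimum test can be sketched in plain Python (illustrative only; the function name and sample data are made up for this sketch):

```python
def local_minima(values, window=5):
    """Indices whose value is strictly below every value within
    `window` positions before and after (a local trough)."""
    hits = []
    for i in range(window, len(values) - window):
        # Neighbours on both sides, excluding the current position
        around = values[i - window:i] + values[i + 1:i + window + 1]
        if all(values[i] < v for v in around):
            hits.append(i)
    return hits

prices = [5, 4, 3, 2, 3, 1, 3, 2, 3, 4, 5, 6]
print(local_minima(prices))  # [5]: the value 1 is below its 5 neighbours on each side
```

The SQL solution performs this same comparison set-wise inside the database, without pulling rows into the client.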
+ +## Solution: Use LAG and LEAD Functions + +Use `LAG()` to access rows before the current row and `LEAD()` to access rows after: + +```questdb-sql demo title="Find rows where bid price is lower than surrounding rows" +WITH framed AS ( + SELECT timestamp, bid_price, + LAG(bid_price, 1) OVER () AS bidprice_1up, + LAG(bid_price, 2) OVER () AS bidprice_2up, + LAG(bid_price, 3) OVER () AS bidprice_3up, + LAG(bid_price, 4) OVER () AS bidprice_4up, + LAG(bid_price, 5) OVER () AS bidprice_5up, + LEAD(bid_price, 1) OVER () AS bidprice_1down, + LEAD(bid_price, 2) OVER () AS bidprice_2down, + LEAD(bid_price, 3) OVER () AS bidprice_3down, + LEAD(bid_price, 4) OVER () AS bidprice_4down, + LEAD(bid_price, 5) OVER () AS bidprice_5down + FROM core_price + WHERE timestamp >= dateadd('m', -1, now()) AND symbol = 'EURUSD' +) +SELECT timestamp, bid_price +FROM framed +WHERE bid_price < bidprice_1up AND bid_price < bidprice_2up AND bid_price < bidprice_3up AND bid_price < bidprice_4up AND bid_price < bidprice_5up + AND bid_price < bidprice_1down AND bid_price < bidprice_2down AND bid_price < bidprice_3down AND bid_price < bidprice_4down AND bid_price < bidprice_5down +LIMIT 20; +``` + +This returns all rows where the current bid price is lower than ALL of the surrounding 10 rows (5 before and 5 after), identifying local minimums for EURUSD in the last minute. + +## How It Works + +The query uses a two-step approach: + +1. **Access surrounding rows**: The CTE `framed` uses `LAG()` and `LEAD()` to access values from surrounding rows: + - `LAG(bid_price, N)`: Gets the bid price from N rows **before** the current row + - `LEAD(bid_price, N)`: Gets the bid price from N rows **after** the current row + +2. 
**Filter for local minimums**: The outer query uses `AND` conditions to find rows where the current price is lower than ALL surrounding prices, identifying true local minimums + +### LAG vs LEAD + +- **`LAG(column, offset)`** - Accesses the value from `offset` rows **before** (earlier in time) +- **`LEAD(column, offset)`** - Accesses the value from `offset` rows **after** (later in time) + +Both functions return `NULL` for rows where the offset goes beyond the dataset boundaries (e.g., `LAG(5)` returns `NULL` for the first 5 rows). + +:::warning Symbol Filter Required +When using window functions without `PARTITION BY`, you must filter by a specific symbol. This ensures the window frame operates on a single symbol's time series, preventing incorrect comparisons across different symbols. +::: + +## Viewing Surrounding Values + +To see all surrounding values for debugging or analysis, select all the LAG/LEAD columns: + +```questdb-sql demo title="Show all surrounding values for inspection" +WITH framed AS ( + SELECT row_number() OVER () as rownum, timestamp, bid_price, + LAG(bid_price, 1) OVER () AS bidprice_1up, + LAG(bid_price, 2) OVER () AS bidprice_2up, + LAG(bid_price, 3) OVER () AS bidprice_3up, + LAG(bid_price, 4) OVER () AS bidprice_4up, + LAG(bid_price, 5) OVER () AS bidprice_5up, + LEAD(bid_price, 1) OVER () AS bidprice_1down, + LEAD(bid_price, 2) OVER () AS bidprice_2down, + LEAD(bid_price, 3) OVER () AS bidprice_3down, + LEAD(bid_price, 4) OVER () AS bidprice_4down, + LEAD(bid_price, 5) OVER () AS bidprice_5down + FROM core_price + WHERE timestamp >= dateadd('m', -1, now()) AND symbol = 'EURUSD' +) +SELECT rownum, timestamp, bid_price, + bidprice_1up, bidprice_2up, bidprice_3up, bidprice_4up, bidprice_5up, + bidprice_1down, bidprice_2down, bidprice_3down, bidprice_4down, bidprice_5down +FROM framed +WHERE bid_price < bidprice_1up AND bid_price < bidprice_2up AND bid_price < bidprice_3up AND bid_price < bidprice_4up AND bid_price < bidprice_5up + AND 
bid_price < bidprice_1down AND bid_price < bidprice_2down AND bid_price < bidprice_3down AND bid_price < bidprice_4down AND bid_price < bidprice_5down
+LIMIT 20;
+```
+
+This shows each matching row with all its surrounding bid prices as separate columns, making it easy to verify the local minimum detection.
+
+## Advanced: Checking Against an Aggregate Over Large Ranges
+
+For more complex scenarios where you need to compare against the **maximum or minimum** value across a large range (e.g., 100 rows before and after), you can use window aggregates such as `MAX()` together with a reversed-ordering trick:
+
+```questdb-sql demo title="Find rows where price is below the max of surrounding 100 rows"
+WITH framed AS (
+  SELECT timestamp, bid_price,
+    -- Max of 100 rows before
+    MAX(bid_price) OVER (ROWS BETWEEN 100 PRECEDING AND 1 PRECEDING) AS max_100_before,
+    -- Max of 100 rows after (using DESC ordering trick)
+    MAX(bid_price) OVER (ORDER BY timestamp DESC ROWS BETWEEN 100 PRECEDING AND 1 PRECEDING) AS max_100_after
+  FROM core_price
+  WHERE timestamp >= dateadd('h', -1, now()) AND symbol = 'EURUSD'
+)
+SELECT timestamp, bid_price, max_100_before, max_100_after
+FROM framed
+WHERE bid_price < max_100_before AND bid_price < max_100_after
+LIMIT 20;
+```
+
+This pattern is useful when you need to:
+- Check against **aggregates** (MAX, MIN, AVG) over a range rather than individual values
+- Work with **large ranges** (50-100+ rows) where listing individual LAG/LEAD calls would be impractical
+- Find rows where the current value is below the maximum or above the minimum in a large window
+
+### The Reversed Ordering Trick
+
+To access rows **after** the current row using aggregate functions, use `ORDER BY timestamp DESC`:
+- Normal order: `ROWS BETWEEN 100 PRECEDING AND 1 PRECEDING` gives you the 100 rows **before**
+- Reversed order: `ORDER BY timestamp DESC ROWS BETWEEN 100 PRECEDING AND 1 PRECEDING` gives you the 100 rows **after** (because descending order reverses what "preceding" means)
+
+This is a workaround since QuestDB doesn't have `ROWS FOLLOWING` syntax yet. + +:::tip When to Use Each Approach +- **Use LAG/LEAD**: When you need to compare against **specific individual rows** (e.g., the previous 5 rows, the next 3 rows) +- **Use aggregate with window frames**: When you need to compare against an **aggregate value** (MAX, MIN, AVG) over a **large range** of rows (e.g., highest price in the last 100 rows) +::: + +:::info Related Documentation +- [LAG window function](/docs/reference/function/window/#lag) +- [LEAD window function](/docs/reference/function/window/#lead) +- [Window functions overview](/docs/reference/sql/select/#window-functions) +- [Window frame clauses](/docs/reference/sql/select/#frame-clause) +::: diff --git a/documentation/sidebars.js b/documentation/sidebars.js index 1225250be..9527b6b10 100644 --- a/documentation/sidebars.js +++ b/documentation/sidebars.js @@ -653,6 +653,48 @@ module.exports = { "troubleshooting/error-codes", ], }, + { + label: "Playbook", + type: "category", + collapsed: false, + items: [ + "playbook/overview", + "playbook/demo-data-schema", + { + type: "category", + label: "SQL Recipes", + collapsed: true, + items: [ + "playbook/sql/force-designated-timestamp", + "playbook/sql/pivoting", + "playbook/sql/calculate-compound-interest", + "playbook/sql/rows-before-after-value-match", + ], + }, + { + type: "category", + label: "Programmatic", + collapsed: true, + items: [ + { + type: "category", + label: "PHP", + items: [ + "playbook/programmatic/php/inserting-ilp", + ], + }, + ], + }, + { + type: "category", + label: "Operations", + collapsed: true, + items: [ + "playbook/operations/docker-compose-config", + ], + }, + ], + }, { label: "Release Notes", type: "link", From db4ace85ade0267d399f7b1cba3ba23103f7759e Mon Sep 17 00:00:00 2001 From: javier Date: Thu, 18 Dec 2025 19:30:24 +0100 Subject: [PATCH 08/21] more drafts. 
Needs review --- .../grafana/dynamic-table-queries.md | 225 ++++++++++ .../integrations/grafana/read-only-user.md | 203 +++++++++ .../integrations/grafana/variable-dropdown.md | 270 ++++++++++++ .../telegraf/opcua-dense-format.md | 305 ++++++++++++++ .../operations/monitor-with-telegraf.md | 382 +++++++++++++++++ .../programmatic/cpp/missing-columns.md | 395 ++++++++++++++++++ .../programmatic/ruby/inserting-ilp.md | 355 ++++++++++++++++ .../programmatic/rust/tls-configuration.md | 326 +++++++++++++++ .../sql/advanced/top-n-plus-others.md | 366 ++++++++++++++++ .../playbook/sql/finance/bollinger-bands.md | 175 ++++++++ .../compound-interest.md} | 0 .../sql/finance/cumulative-product.md | 123 ++++++ .../playbook/sql/finance/rolling-stddev.md | 321 ++++++++++++++ .../playbook/sql/finance/tick-trin.md | 191 +++++++++ .../playbook/sql/finance/volume-profile.md | 183 ++++++++ .../playbook/sql/finance/volume-spike.md | 276 ++++++++++++ documentation/playbook/sql/finance/vwap.md | 130 ++++++ .../sql/time-series/latest-n-per-partition.md | 267 ++++++++++++ documentation/sidebars.js | 79 +++- 19 files changed, 4571 insertions(+), 1 deletion(-) create mode 100644 documentation/playbook/integrations/grafana/dynamic-table-queries.md create mode 100644 documentation/playbook/integrations/grafana/read-only-user.md create mode 100644 documentation/playbook/integrations/grafana/variable-dropdown.md create mode 100644 documentation/playbook/integrations/telegraf/opcua-dense-format.md create mode 100644 documentation/playbook/operations/monitor-with-telegraf.md create mode 100644 documentation/playbook/programmatic/cpp/missing-columns.md create mode 100644 documentation/playbook/programmatic/ruby/inserting-ilp.md create mode 100644 documentation/playbook/programmatic/rust/tls-configuration.md create mode 100644 documentation/playbook/sql/advanced/top-n-plus-others.md create mode 100644 documentation/playbook/sql/finance/bollinger-bands.md rename 
documentation/playbook/sql/{calculate-compound-interest.md => finance/compound-interest.md} (100%) create mode 100644 documentation/playbook/sql/finance/cumulative-product.md create mode 100644 documentation/playbook/sql/finance/rolling-stddev.md create mode 100644 documentation/playbook/sql/finance/tick-trin.md create mode 100644 documentation/playbook/sql/finance/volume-profile.md create mode 100644 documentation/playbook/sql/finance/volume-spike.md create mode 100644 documentation/playbook/sql/finance/vwap.md create mode 100644 documentation/playbook/sql/time-series/latest-n-per-partition.md diff --git a/documentation/playbook/integrations/grafana/dynamic-table-queries.md b/documentation/playbook/integrations/grafana/dynamic-table-queries.md new file mode 100644 index 000000000..a048f32d6 --- /dev/null +++ b/documentation/playbook/integrations/grafana/dynamic-table-queries.md @@ -0,0 +1,225 @@ +--- +title: Query Multiple Tables Dynamically in Grafana +sidebar_label: Dynamic table queries +description: Use Grafana variables to dynamically query multiple tables with the same schema for time-series visualization +--- + +Query multiple QuestDB tables dynamically in Grafana using dashboard variables. This is useful when you have many tables with identical schemas (e.g., sensor data, metrics from different sources) and want to visualize them together without hardcoding table names in your queries. + +## Problem: Visualize Many Similar Tables + +You have 100+ tables with the same structure (e.g., `sensor_1`, `sensor_2`, ..., `sensor_n`) and want to: +1. Display data from all tables on a single Grafana chart +2. Avoid manually updating queries when tables are added or removed +3. Allow users to select which tables to visualize via dashboard controls + +## Solution: Use Grafana Variables with Dynamic SQL + +Create Grafana dashboard variables that query QuestDB for table names, then use string aggregation functions to build the SQL query dynamically. 
+ +### Step 1: Get Table Names + +First, query QuestDB to get all relevant table names: + +```sql +SELECT table_name FROM tables() +WHERE table_name LIKE 'sensor_%'; +``` + +This returns a list of all tables matching the pattern. + +### Step 2: Create Grafana Variables + +Create two dashboard variables to construct the dynamic query: + +**Variable 1: `$table_list`** - Build the JOIN clause + +```sql +WITH tbs AS ( + SELECT string_agg(table_name, ',') as names + FROM tables() + WHERE table_name LIKE 'sensor_%' +) +SELECT replace(names, ',', ' ASOF JOIN ') FROM tbs; +``` + +**Output:** `sensor_1 ASOF JOIN sensor_2 ASOF JOIN sensor_3 ASOF JOIN sensor_4` + +This creates the table list with ASOF JOIN operators between them. + +**Variable 2: `$column_avgs`** - Build the column list + +```sql +SELECT string_agg(concat('avg(', table_name, '.value)'), ',') as columns +FROM tables() +WHERE table_name LIKE 'sensor_%'; +``` + +**Output:** `avg(sensor_1.value),avg(sensor_2.value),avg(sensor_3.value),avg(sensor_4.value)` + +This creates the column selection list with aggregation functions. + +### Step 3: Use Variables in Dashboard Query + +Now reference these variables in your Grafana chart query: + +```sql +SELECT sensor_1.timestamp, $column_avgs +FROM $table_list +SAMPLE BY 1s FROM $__fromTime TO $__toTime FILL(PREV); +``` + +When Grafana executes this query, it interpolates the variables: + +```sql +SELECT sensor_1.timestamp, avg(sensor_1.value),avg(sensor_2.value),avg(sensor_3.value),avg(sensor_4.value) +FROM sensor_1 ASOF JOIN sensor_2 ASOF JOIN sensor_3 ASOF JOIN sensor_4 +SAMPLE BY 1s FROM cast(1571176800000000 as timestamp) TO cast(1571349600000000 as timestamp) FILL(PREV); +``` + +## How It Works + +The solution uses three key QuestDB features: + +1. **`tables()` function**: Returns metadata about all tables in the database +2. **`string_agg()`**: Concatenates multiple rows into a single comma-separated string +3. 
**`replace()`**: Swaps commas for JOIN operators to build the FROM clause + +Combined with Grafana's variable interpolation: +- `$column_avgs`: Replaced with the aggregated column list +- `$table_list`: Replaced with the joined table expression +- `$__fromTime` / `$__toTime`: Grafana macros for the dashboard's time range + +### Understanding ASOF JOIN + +`ASOF JOIN` is ideal for time-series data with different update frequencies: +- Joins tables on timestamp +- For each row in the first table, finds the closest past timestamp in other tables +- Works like a LEFT JOIN but with time-based matching + +This ensures that even if tables update at different rates, you get a complete dataset with the most recent known value from each table. + +## Adapting the Pattern + +**Filter by different patterns:** +```sql +-- Tables starting with "metrics_" +WHERE table_name LIKE 'metrics_%' + +-- Tables matching a regex pattern +WHERE table_name ~ 'sensor_[0-9]+' + +-- Exclude certain tables +WHERE table_name LIKE 'sensor_%' + AND table_name NOT IN ('sensor_test', 'sensor_backup') +``` + +**Different aggregation functions:** +```sql +-- Maximum values +SELECT string_agg(concat('max(', table_name, '.value)'), ',') + +-- Sum values +SELECT string_agg(concat('sum(', table_name, '.value)'), ',') + +-- Last values (no aggregation needed) +SELECT string_agg(concat(table_name, '.value'), ',') +``` + +**Different join strategies:** +```sql +-- INNER JOIN (only rows with data in all tables) +SELECT replace(names, ',', ' INNER JOIN ') + +-- LEFT JOIN (all rows from first table) +SELECT replace(names, ',', ' LEFT JOIN ') + +-- Add ON clause for explicit joins +SELECT replace(names, ',', ' LEFT JOIN ') || ' ON timestamp' +``` + +**Custom column names:** +```sql +-- Cleaner column names in the chart +SELECT string_agg( + concat('avg(', table_name, '.value) AS ', replace(table_name, 'sensor_', '')), + ',' +) +``` + +Output: `avg(sensor_1.value) AS 1,avg(sensor_2.value) AS 2,...` + +## 
Programmatic Alternative + +If you're not using Grafana, you can achieve the same result programmatically: + +1. **Query for table names:** + ```sql + SELECT table_name FROM tables() WHERE table_name LIKE 'sensor_%'; + ``` + +2. **Build the query on the client side:** + ```python + # Python example + tables = ['sensor_1', 'sensor_2', 'sensor_3'] + + # Build JOIN clause + join_clause = ' ASOF JOIN '.join(tables) + + # Build column list + columns = ','.join([f'avg({t}.value)' for t in tables]) + + # Final query + query = f""" + SELECT {tables[0]}.timestamp, {columns} + FROM {join_clause} + SAMPLE BY 1s FILL(PREV) + """ + ``` + +## Handling Different Sampling Intervals + +When tables have different update frequencies, use FILL to handle gaps: + +```sql +-- Fill with previous value (holds last known value) +SAMPLE BY 1s FILL(PREV) + +-- Fill with linear interpolation +SAMPLE BY 1s FILL(LINEAR) + +-- Fill with NULL (show actual gaps) +SAMPLE BY 1s FILL(NULL) + +-- Fill with zero +SAMPLE BY 1s FILL(0) +``` + +**Choose based on your data:** +- **PREV**: Best for metrics that persist (temperatures, prices, statuses) +- **LINEAR**: Best for continuous values that change smoothly +- **NULL**: Best when you want to see actual data gaps +- **0 or constant**: Best for counting or rate metrics + +:::tip Performance Optimization +Joining many tables can be expensive. To improve performance: +- Use `SAMPLE BY` to reduce the number of rows +- Add timestamp filters early in the query +- Consider pre-aggregating data into a single table for frequently-accessed views +- Limit the number of tables joined (split into multiple charts if needed) +::: + +:::warning Table Schema Consistency +This pattern assumes all tables have identical schemas. 
If schemas differ: +- The query will fail at runtime +- You'll need to handle missing columns explicitly +- Consider using separate queries for tables with different structures +::: + +:::info Related Documentation +- [ASOF JOIN](/docs/reference/sql/join/#asof-join) +- [tables() function](/docs/reference/function/meta/#tables) +- [string_agg()](/docs/reference/function/aggregation/#string_agg) +- [SAMPLE BY](/docs/reference/sql/select/#sample-by) +- [Grafana QuestDB data source](https://grafana.com/grafana/plugins/questdb-questdb-datasource/) +::: diff --git a/documentation/playbook/integrations/grafana/read-only-user.md b/documentation/playbook/integrations/grafana/read-only-user.md new file mode 100644 index 000000000..d27e4569e --- /dev/null +++ b/documentation/playbook/integrations/grafana/read-only-user.md @@ -0,0 +1,203 @@ +--- +title: Configure Read-Only User for Grafana +sidebar_label: Read-only user +description: Set up a read-only PostgreSQL user for Grafana dashboards while maintaining admin access for DDL operations +--- + +Configure a dedicated read-only user for Grafana to improve security by preventing accidental data modifications through dashboards. This allows you to maintain separate credentials for visualization (read-only) and administration (full access), following the principle of least privilege. + +:::note QuestDB Enterprise +For QuestDB Enterprise, use the comprehensive [Role-Based Access Control (RBAC)](/docs/operations/rbac/) system to create granular user permissions and roles. The configuration below applies to QuestDB Open Source. +::: + +## Problem: Separate Read and Write Access + +You want to: +1. Connect Grafana with read-only credentials +2. Prevent accidental `INSERT`, `UPDATE`, `DELETE`, or `DROP` operations from dashboards +3. Still be able to execute DDL statements (`CREATE TABLE`, etc.) 
from the QuestDB web console + +However, QuestDB's PostgreSQL wire protocol doesn't support standard PostgreSQL user management commands like `CREATE USER` or `GRANT`. + +## Solution: Enable the Read-Only User + +QuestDB Open Source supports a built-in read-only user that can be enabled via configuration. This gives you two users: +- **Admin user** (default: `admin`): Full access for DDL and DML operations +- **Read-only user** (default: `user`): Query-only access for dashboards + +### Configuration + +Add these settings to your `server.conf` file or set them as environment variables: + +**Via server.conf:** +```ini +# Enable the read-only user +pg.readonly.user.enabled=true + +# Optional: Customize username (default is "user") +pg.readonly.user=grafana_reader + +# Optional: Customize password (default is "quest") +pg.readonly.password=secure_password_here +``` + +**Via environment variables:** +```bash +export QDB_PG_READONLY_USER_ENABLED=true +export QDB_PG_READONLY_USER=grafana_reader +export QDB_PG_READONLY_PASSWORD=secure_password_here +``` + +**Via Docker:** +```bash +docker run \ + -p 9000:9000 -p 8812:8812 \ + -e QDB_PG_READONLY_USER_ENABLED=true \ + -e QDB_PG_READONLY_USER=grafana_reader \ + -e QDB_PG_READONLY_PASSWORD=secure_password_here \ + questdb/questdb:latest +``` + +### Using the Read-Only User + +After enabling, you have two separate users: + +**Admin user (web console):** +- Username: `admin` (default) +- Password: `quest` (default) +- Permissions: Full access - `SELECT`, `INSERT`, `UPDATE`, `DELETE`, `CREATE`, `DROP`, `ALTER` +- Use for: QuestDB web console, administrative tasks, schema changes + +**Read-only user (Grafana):** +- Username: `grafana_reader` (or whatever you configured) +- Password: `secure_password_here` (or whatever you configured) +- Permissions: `SELECT` queries only +- Use for: Grafana dashboards, monitoring tools, analytics applications + +## Grafana Configuration + +Configure Grafana to use the read-only user: + +### 
PostgreSQL Data Source + +When adding a QuestDB data source using the PostgreSQL plugin: + +1. **Host:** `your-questdb-host:8812` +2. **Database:** `qdb` +3. **User:** `grafana_reader` (your read-only username) +4. **Password:** `secure_password_here` (your read-only password) +5. **SSL Mode:** Depends on your setup (typically `disable` for local, `require` for remote) + +### QuestDB Data Source Plugin + +When using the [native QuestDB Grafana plugin](https://grafana.com/grafana/plugins/questdb-questdb-datasource/): + +1. **URL:** `http://your-questdb-host:9000` +2. **Authentication:** Select PostgreSQL Wire +3. **User:** `grafana_reader` +4. **Password:** `secure_password_here` + +## Verification + +Test that permissions are working correctly: + +**Read-only user should succeed:** +```sql +-- These queries should work +SELECT * FROM trades LIMIT 10; +SELECT count(*) FROM trades; +SELECT symbol, avg(price) FROM trades GROUP BY symbol; +``` + +**Read-only user should fail:** +```sql +-- These operations should be rejected +INSERT INTO trades VALUES ('BTC-USDT', 'buy', 50000, 0.1, now()); +UPDATE trades SET price = 60000 WHERE symbol = 'BTC-USDT'; +DELETE FROM trades WHERE timestamp < dateadd('d', -30, now()); +CREATE TABLE test_table (id INT, name STRING); +DROP TABLE trades; +``` + +Expected error for write operations: `permission denied` or similar. 
+ +## Security Best Practices + +### Strong Passwords + +Change default passwords immediately: +```ini +# Don't use the defaults in production +pg.user=custom_admin_username +pg.password=strong_admin_password_here + +pg.readonly.user=custom_readonly_username +pg.readonly.password=strong_readonly_password_here +``` + +### Network Access Control + +Restrict network access at the infrastructure level: +- Use firewalls to limit which hosts can connect to port 8812 +- For cloud deployments, use security groups or network policies +- Consider using a VPN for remote access + +### Connection Encryption + +Enable TLS for PostgreSQL connections: +- QuestDB Enterprise has native TLS support +- For Open Source, consider using a TLS termination proxy (e.g., HAProxy, nginx) + +### Regular Password Rotation + +Implement a password rotation policy: +1. Update the password in QuestDB configuration +2. Restart QuestDB to apply changes +3. Update Grafana data source configuration +4. Test connections before rotating further + +## Troubleshooting + +**Connection refused:** +- Verify QuestDB is running and listening on port 8812 +- Check firewall rules allow connections +- Ensure the PostgreSQL wire protocol is enabled (it is by default) + +**Authentication failed:** +- Verify the read-only user is enabled: `pg.readonly.user.enabled=true` +- Check username and password match your configuration +- Restart QuestDB after configuration changes + +**Queries failing for read-only user:** +- Ensure queries are SELECT-only (no INSERT, UPDATE, DELETE, CREATE, DROP, ALTER) +- Check table names are correct (case-sensitive in some contexts) +- Verify user has been correctly configured as read-only + +**DDL statements fail from web console:** +- Verify you're using the admin user, not the read-only user +- Check the web console is configured to use admin credentials + +## Alternative: Connection Pooling with PgBouncer + +For advanced setups with many concurrent Grafana users, consider using 
PgBouncer: + +1. **Configure PgBouncer** to connect to QuestDB with the read-only user +2. **Set authentication** in PgBouncer for your Grafana instances +3. **Point Grafana** to PgBouncer instead of directly to QuestDB + +This provides connection pooling benefits and an additional authentication layer. + +:::tip Multiple Dashboards +You can use the same read-only credentials across multiple Grafana instances or dashboards. Each connection will be independently managed by QuestDB's PostgreSQL wire protocol implementation. +::: + +:::warning Write Operations from Web Console +The web console uses different authentication than the PostgreSQL wire protocol. Enabling a read-only user does NOT restrict the web console - it will still have full access via the admin user and the REST API. +::: + +:::info Related Documentation +- [PostgreSQL wire protocol](/docs/reference/api/postgres/) +- [QuestDB Enterprise RBAC](/docs/operations/rbac/) +- [Configuration reference](/docs/reference/configuration/) +- [Grafana QuestDB data source](https://grafana.com/grafana/plugins/questdb-questdb-datasource/) +::: diff --git a/documentation/playbook/integrations/grafana/variable-dropdown.md b/documentation/playbook/integrations/grafana/variable-dropdown.md new file mode 100644 index 000000000..a78735a91 --- /dev/null +++ b/documentation/playbook/integrations/grafana/variable-dropdown.md @@ -0,0 +1,270 @@ +--- +title: Grafana Variable Dropdown with Name and Value +sidebar_label: Variable dropdown +description: Create Grafana variable dropdowns that display one value but use another in queries using regex filters +--- + +Create Grafana variable dropdowns where the displayed label differs from the value used in queries. This is useful when you want to show user-friendly names in the dropdown while using different values (like IDs, prices, or technical identifiers) in your actual SQL queries. 
+ +## Problem: Separate Display and Query Values + +You want a Grafana variable dropdown that: +- **Displays:** Readable labels like `"BTC-USDT"`, `"ETH-USDT"`, `"SOL-USDT"` +- **Uses in queries:** Different values like prices (`37779.62`, `2615.54`, `98.23`) or IDs + +For example, with this query result: + +| symbol | price | +|------------|---------| +| BTC-USDT | 37779.62| +| ETH-USDT | 2615.54 | +| SOL-USDT | 98.23 | + +You want the dropdown to show `"BTC-USDT"` but use `37779.62` in your queries. + +## Solution: Use Regex Variable Filters + +When using the QuestDB data source plugin, you can use Grafana's regex variable filters to parse a concatenated string into separate `text` and `value` fields. + +### Step 1: Concatenate Columns in Query + +First, combine both columns into a single string with a separator that doesn't appear in your data: + +```sql +WITH t AS ( + SELECT symbol, first(price) as price + FROM trades + WHERE symbol LIKE '%BTC%' +) +SELECT concat(symbol, '#', price) FROM t; +``` + +**Query results:** +``` +DOGE-BTC#0.00000204 +ETH-BTC#0.05551 +BTC-USDT#37779.62 +SOL-BTC#0.0015282 +MATIC-BTC#0.00002074 +BTC-USDC#60511.1 +``` + +Each row is now a single string with symbol and price separated by `#`. 
+
+### Step 2: Apply Regex Filter in Grafana Variable
+
+In your Grafana variable configuration:
+
+**Query:**
+```sql
+WITH t AS (
+  SELECT symbol, first(price) as price
+  FROM trades
+  WHERE symbol LIKE '%BTC%'
+)
+SELECT concat(symbol, '#', price) FROM t;
+```
+
+**Regex Filter:**
+```regex
+/(?<text>[^#]+)#(?<value>.*)/
+```
+
+This regex pattern:
+- `(?<text>[^#]+)`: Captures everything before `#` into the `text` group (the display label)
+- `#`: Matches the separator
+- `(?<value>.*)`: Captures everything after `#` into the `value` group (the query value)
+
+### Step 3: Use Variable in Queries
+
+Now you can reference the variable in your dashboard queries:
+
+```sql
+SELECT timestamp, price
+FROM trades
+WHERE price = $your_variable_name
+  AND timestamp >= $__fromTime
+  AND timestamp <= $__toTime;
+```
+
+When a user selects "BTC-USDT" from the dropdown, Grafana will substitute the corresponding price value (`37779.62`) into the query.
+
+## How It Works
+
+Grafana's regex filter with named capture groups enables the separation:
+
+1. **Named capture groups**: `(?<text>...)` and `(?<value>...)` tell Grafana which parts to use
+2. **`text` group**: Becomes the visible label in the dropdown
+3. **`value` group**: Becomes the interpolated value in queries
+4. **Pattern matching**: The regex must match the entire string returned by your query
+
+### Regex Pattern Breakdown
+
+```regex
+/(?<text>[^#]+)#(?<value>.*)/
+```
+
+- `/`: Regex delimiters
+- `(?<text>...)`: Named capture group called "text"
+- `[^#]+`: One or more characters that are NOT `#` (greedy match)
+- `#`: Literal separator character
+- `(?<value>...)`: Named capture group called "value"
+- `.*`: Zero or more characters of any type (captures rest of string)
+
+## Choosing a Separator
+
+Pick a separator that **never** appears in your data:
+
+**Good separators:**
+- `#` - Uncommon in most data
+- `|` - Clear visual separator
+- `::` - Two characters, unlikely to appear
+- `~` - Rarely used in trading symbols or prices
+- `^^^` - Multi-character separator for extra safety
+
+**Bad separators:**
+- `-` - Common in trading pairs (BTC-USDT)
+- `.` - Common in decimal numbers
+- `,` - Common in CSV-like data
+- Space - Can cause parsing issues
+
+## Alternative Patterns
+
+### Multiple Data Fields
+
+If you need more than two fields, use additional separators:
+
+```sql
+SELECT concat(symbol, '#', price, '#', volume) FROM trades;
+```
+
+```regex
+/(?<text>[^#]+)#(?<value>[^#]+)#(?<extra>.*)/
+```
+
+Now you have three captured groups, though Grafana's variable system typically only uses `text` and `value`.
+
+### Numeric IDs with Descriptions
+
+Common pattern for entity selection:
+
+```sql
+SELECT concat(name, '#', id) FROM users;
+```
+
+```regex
+/(?<text>[^#]+)#(?<value>\d+)/
+```
+
+Output in dropdown: User sees "John Doe", query uses `42`.
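As a quick sanity check outside Grafana, an equivalent pattern can be exercised in Python. Note one spelling difference: Python writes named groups as `(?P<name>...)`, whereas the Grafana regex filter accepts `(?<name>...)`:

```python
import re

# Python's spelling of the named groups used in the Grafana filter
# ((?P<name>...) here, (?<name>...) in Grafana).
pattern = re.compile(r"(?P<text>[^#]+)#(?P<value>.*)")

for row in ["BTC-USDT#37779.62", "ETH-BTC#0.05551", "DOGE-BTC#0.00000204"]:
    m = pattern.fullmatch(row)
    # text -> dropdown label, value -> what gets interpolated into queries
    print(f"label={m.group('text')} value={m.group('value')}")
```

If `fullmatch` returns `None` for any row, the separator appears inside the data and a different one should be chosen.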
+
+### Escaping Special Characters
+
+If your data contains regex special characters (such as parentheses), avoid `[^#]+` and match the label lazily so those characters are captured literally:
+
+```sql
+-- If data contains parentheses
+SELECT concat(name, ' (', id, ')', '#', id) FROM users;
+-- Result: "John Doe (42)#42"
+```
+
+```regex
+/(?<text>.*?)#(?<value>\d+)/
+```
+
+## PostgreSQL Data Source Alternative
+
+If using the PostgreSQL data source (instead of the QuestDB plugin), you can use special column aliases:
+
+```sql
+SELECT
+  symbol AS __text,
+  price AS __value
+FROM trades
+WHERE symbol LIKE '%BTC%';
+```
+
+The PostgreSQL data source recognizes `__text` and `__value` as special column names for dropdown variables.
+
+**Note:** This works with the PostgreSQL data source plugin pointing to QuestDB, but NOT with the native QuestDB data source plugin.
+
+## Adapting the Pattern
+
+**Different filter conditions:**
+```sql
+-- Filter by time range
+WHERE timestamp IN yesterday()
+
+-- Filter by multiple criteria
+WHERE symbol LIKE '%USDT' AND price > 1000
+
+-- Dynamic filter using another variable
+WHERE symbol LIKE concat('%', $base_currency, '%')
+```
+
+**Sorting the dropdown:**
+```sql
+-- Sort alphabetically by symbol
+SELECT concat(symbol, '#', price) FROM trades
+ORDER BY symbol;
+
+-- Sort by price (highest first)
+SELECT concat(symbol, '#', price) FROM trades
+ORDER BY price DESC;
+
+-- Sort by volume
+WITH t AS (
+  SELECT symbol, first(price) as price, sum(amount) as volume
+  FROM trades
+  GROUP BY symbol
+)
+SELECT concat(symbol, '#', price) FROM t
+ORDER BY volume DESC;
+```
+
+**Include additional context in label:**
+```sql
+-- Show symbol and volume in the label
+SELECT concat(symbol, ' (Vol: ', round(sum(amount), 2), ')', '#', first(price))
+FROM trades
+GROUP BY symbol;
+```
+
+Result: "BTC-USDT (Vol: 1234.56)#37779.62"
+
+## Troubleshooting
+
+**Dropdown shows concatenated string:**
+- Verify the regex pattern is correct
+- Check that the regex delimiters are `/.../` (forward slashes)
+- Ensure named capture groups are spelled correctly: `(?<text>...)` and `(?<value>...)`
+
+**Variable not interpolating in queries:**
+- Verify you're using `$variable_name` syntax in queries
+- Check that the variable is defined at the dashboard level
+- Test the query manually with a hardcoded value
+
+**Regex not matching:**
+- Test your regex pattern with a regex tester (regex101.com)
+- Verify your separator doesn't appear in the data itself
+- Check for trailing whitespace in query results
+
+**Dropdown is empty:**
+- Verify the query returns data
+- Check that QuestDB is accessible from Grafana
+- Review Grafana logs for error messages
+
+:::tip Multi-Select Variables
+This pattern works with multi-select variables too. Enable "Multi-value" in the variable configuration, and users can select multiple options. Use `IN ($variable)` in your queries to handle multiple selected values.
+:::
+
+:::tip Variable Preview
+Grafana shows a preview of what the dropdown will look like when you configure the regex filter. Use this to verify your pattern is working correctly before applying it.
+::: + +:::info Related Documentation +- [Grafana variables documentation](https://grafana.com/docs/grafana/latest/dashboards/variables/) +- [Grafana regex filters](https://grafana.com/docs/grafana/latest/dashboards/variables/add-template-variables/#filter-variables-with-regex) +- [concat() function](/docs/reference/function/text/#concat) +- [Grafana QuestDB data source](https://grafana.com/grafana/plugins/questdb-questdb-datasource/) +::: diff --git a/documentation/playbook/integrations/telegraf/opcua-dense-format.md b/documentation/playbook/integrations/telegraf/opcua-dense-format.md new file mode 100644 index 000000000..4278cd3fc --- /dev/null +++ b/documentation/playbook/integrations/telegraf/opcua-dense-format.md @@ -0,0 +1,305 @@ +--- +title: Collect OPC-UA Data with Telegraf in Dense Format +sidebar_label: OPC-UA dense format +description: Configure Telegraf to merge sparse OPC-UA metrics into dense rows for efficient storage and querying in QuestDB +--- + +Configure Telegraf to collect OPC-UA industrial automation data and insert it into QuestDB in a dense format. By default, Telegraf creates one row per metric with sparse columns, but for QuestDB it's more efficient to merge all metrics from the same timestamp into a single dense row. + +## Problem: Sparse Data Format + +When using Telegraf's OPC-UA input plugin with the default configuration, each metric value generates a separate row. Even when multiple metrics are collected at the same timestamp, they arrive as individual sparse rows: + +**Sparse format (inefficient):** + +| timestamp | ServerLoad | ServerRAM | ServerIO | +|------------------------------|------------|-----------|----------| +| 2024-01-15T10:00:00.000000Z | 45.2 | NULL | NULL | +| 2024-01-15T10:00:00.000000Z | NULL | 8192.0 | NULL | +| 2024-01-15T10:00:00.000000Z | NULL | NULL | 1250.5 | + +This wastes storage space and makes queries more complex. 
+ +**Dense format (efficient):** + +| timestamp | ServerLoad | ServerRAM | ServerIO | +|------------------------------|------------|-----------|----------| +| 2024-01-15T10:00:00.000000Z | 45.2 | 8192.0 | 1250.5 | + +## Solution: Use Telegraf's Merge Aggregator + +Configure Telegraf to merge metrics with matching timestamps and tags before sending to QuestDB. This requires two key changes: + +1. Add a common tag to all metrics +2. Use the `merge` aggregator to combine rows + +### Complete Configuration + +```toml +[agent] + omit_hostname = true + +# OPC-UA Input Plugin +[[inputs.opcua]] + endpoint = "${OPCUA_ENDPOINT}" + connect_timeout = "30s" + request_timeout = "30s" + security_policy = "None" + security_mode = "None" + auth_method = "Anonymous" + name_override = "${METRICS_TABLE_NAME}" + + [[inputs.opcua.nodes]] + name = "ServerLoad" + namespace = "2" + identifier_type = "s" + identifier = "Server/Load" + default_tags = { source="opcua_merge" } + + [[inputs.opcua.nodes]] + name = "ServerRAM" + namespace = "2" + identifier_type = "s" + identifier = "Server/RAM" + default_tags = { source="opcua_merge" } + + [[inputs.opcua.nodes]] + name = "ServerIO" + namespace = "2" + identifier_type = "s" + identifier = "Server/IO" + default_tags = { source="opcua_merge" } + +# Merge Aggregator +[[aggregators.merge]] + drop_original = true + tags = ["source"] + +# QuestDB Output via ILP +[[outputs.influxdb_v2]] + urls = ["${QUESTDB_HTTP_ENDPOINT}"] + token = "${QUESTDB_HTTP_TOKEN}" + content_encoding = "identity" +``` + +### Key Configuration Elements + +**1. Common Tag** + +```toml +default_tags = { source="opcua_merge" } +``` + +Adds the same tag value (`source="opcua_merge"`) to all metrics. The merge aggregator uses this to identify which metrics should be combined. + +**2. 
Merge Aggregator** + +```toml +[[aggregators.merge]] + drop_original = true + tags = ["source"] +``` + +- `drop_original = true`: Discards the original sparse rows after merging +- `tags = ["source"]`: Merges metrics with matching `source` tag values and the same timestamp + +**3. QuestDB Output** + +```toml +[[outputs.influxdb_v2]] + urls = ["${QUESTDB_HTTP_ENDPOINT}"] + content_encoding = "identity" +``` + +- Uses the InfluxDB Line Protocol (ILP) over HTTP +- `content_encoding = "identity"`: Disables gzip compression (QuestDB doesn't require it) + +## How It Works + +The data flow is: + +1. **OPC-UA server** → Telegraf collects metrics +2. **Telegraf input** → Creates separate rows for each metric with the `source="opcua_merge"` tag +3. **Merge aggregator** → Combines rows with matching timestamp + `source` tag +4. **QuestDB output** → Sends merged dense rows via ILP + +### Merging Logic + +The merge aggregator combines metrics when: +- **Timestamps match**: Metrics collected at the same moment +- **Tags match**: All specified tags (in this case, `source`) have the same values + +If metrics have different timestamps or tag values, they won't be merged. + +## Handling Tag Conflicts + +If your OPC-UA nodes have additional tags with **different** values, those tags will prevent merging. Solutions: + +### Remove Conflicting Tags + +Use the `override` processor to remove unwanted tags: + +```toml +[[processors.override]] + [processors.override.tags] + node_id = "" # Removes the 'node_id' tag + namespace = "" # Removes the 'namespace' tag +``` + +### Convert Tags to Fields + +Use the `converter` processor to convert tags to fields (fields don't affect merging): + +```toml +[[processors.converter]] + [processors.converter.tags] + string = ["node_id", "namespace"] +``` + +This converts the tags to string fields, which won't interfere with the merge aggregator. 
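Whichever processors you apply, the end result on the wire is that the aggregator collapses sparse lines into one dense line. A sketch of the line protocol before and after merging (hypothetical measurement name and timestamp):

```text
# Before merging: one sparse ILP line per OPC-UA node
opcua_metrics,source=opcua_merge ServerLoad=45.2 1705312800000000000
opcua_metrics,source=opcua_merge ServerRAM=8192.0 1705312800000000000
opcua_metrics,source=opcua_merge ServerIO=1250.5 1705312800000000000

# After merging: one dense line carrying all three fields
opcua_metrics,source=opcua_merge ServerLoad=45.2,ServerRAM=8192.0,ServerIO=1250.5 1705312800000000000
```

QuestDB ingests the dense line as a single row, so no NULL-filled rows are created.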
+ +### Remove the Common Tag After Merging + +If you don't want the `source` tag in your final QuestDB table: + +```toml +# Place this AFTER the merge aggregator +[[processors.override]] + [processors.override.tags] + source = "" # Removes the 'source' tag +``` + +## Environment Variables + +Use environment variables for sensitive configuration: + +```bash +export OPCUA_ENDPOINT="opc.tcp://your-opcua-server:4840" +export METRICS_TABLE_NAME="industrial_metrics" +export QUESTDB_HTTP_ENDPOINT="http://questdb-host:9000" +export QUESTDB_HTTP_TOKEN="your_token_here" +``` + +Alternatively, use a `.env` file: + +```bash +# .env file +OPCUA_ENDPOINT=opc.tcp://localhost:4840 +METRICS_TABLE_NAME=opcua_metrics +QUESTDB_HTTP_ENDPOINT=http://localhost:9000 +QUESTDB_HTTP_TOKEN= +``` + +Then start Telegraf with: + +```bash +telegraf --config telegraf.conf +``` + +## Verification + +Query QuestDB to verify the data format: + +```sql +SELECT * FROM opcua_metrics +ORDER BY timestamp DESC +LIMIT 10; +``` + +**Expected: Dense rows** with all metrics populated: + +| timestamp | source | ServerLoad | ServerRAM | ServerIO | +|------------------------------|-------------|------------|-----------|----------| +| 2024-01-15T10:05:00.000000Z | opcua_merge | 47.8 | 8256.0 | 1305.2 | +| 2024-01-15T10:04:00.000000Z | opcua_merge | 45.2 | 8192.0 | 1250.5 | + +**Problem: Sparse rows** with NULL values: + +| timestamp | source | ServerLoad | ServerRAM | ServerIO | +|------------------------------|-------------|------------|-----------|----------| +| 2024-01-15T10:05:00.000000Z | opcua_merge | 47.8 | NULL | NULL | +| 2024-01-15T10:05:00.000000Z | opcua_merge | NULL | 8256.0 | NULL | + +If you see sparse rows, check: +- All nodes have the same `default_tags` +- The merge aggregator is configured correctly +- Timestamps are identical (not just close) + +## Alternative: TCP Output + +For higher throughput, use TCP instead of HTTP: + +```toml +[[outputs.socket_writer]] + address = 
"tcp://questdb-host:9009" +``` + +**Differences:** +- **TCP**: Higher throughput, no acknowledgments, potential data loss on connection failure +- **HTTP**: Reliable delivery, acknowledgments, slightly lower throughput + +Choose TCP when: +- You need maximum performance +- Occasional data loss is acceptable +- You're on a reliable local network + +Choose HTTP when: +- Data integrity is critical +- You need error feedback +- You're sending over the internet + +## Multiple OPC-UA Sources + +To collect from multiple OPC-UA servers into separate tables: + +```toml +# Server 1 +[[inputs.opcua]] + endpoint = "opc.tcp://server1:4840" + name_override = "server1_metrics" + [[inputs.opcua.nodes]] + name = "Temperature" + namespace = "2" + identifier_type = "s" + identifier = "Sensor/Temp" + default_tags = { source="server1" } + +# Server 2 +[[inputs.opcua]] + endpoint = "opc.tcp://server2:4840" + name_override = "server2_metrics" + [[inputs.opcua.nodes]] + name = "Pressure" + namespace = "2" + identifier_type = "s" + identifier = "Sensor/Press" + default_tags = { source="server2" } + +# Merge by source tag +[[aggregators.merge]] + drop_original = true + tags = ["source"] +``` + +This creates two tables (`server1_metrics`, `server2_metrics`) with merged metrics from each server. + +:::tip Performance Tuning +For high-frequency OPC-UA data: +- Increase Telegraf's `flush_interval` to batch more data +- Use `aggregators.merge.period` to specify merge window duration +- Monitor QuestDB's ingestion rate and adjust accordingly +::: + +:::warning Timestamp Precision +OPC-UA timestamps may have different precision than QuestDB expects. 
Ensure: +- Telegraf agent precision matches your requirements (default: nanoseconds) +- OPC-UA server timestamps are synchronized (use NTP) +- Clock drift between systems is minimal +::: + +:::info Related Documentation +- [Telegraf OPC-UA plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/opcua) +- [Telegraf merge aggregator](https://github.com/influxdata/telegraf/tree/master/plugins/aggregators/merge) +- [QuestDB ILP reference](/docs/reference/api/ilp/overview/) +- [InfluxDB Line Protocol](/docs/reference/api/ilp/overview/) +::: diff --git a/documentation/playbook/operations/monitor-with-telegraf.md b/documentation/playbook/operations/monitor-with-telegraf.md new file mode 100644 index 000000000..e5427afe7 --- /dev/null +++ b/documentation/playbook/operations/monitor-with-telegraf.md @@ -0,0 +1,382 @@ +--- +title: Store QuestDB Metrics in QuestDB +sidebar_label: Monitor with Telegraf +description: Scrape QuestDB Prometheus metrics using Telegraf and store them in QuestDB for monitoring dashboards +--- + +Monitor QuestDB by scraping its Prometheus metrics using Telegraf and storing them back in a QuestDB table. This creates a self-monitoring setup where QuestDB stores its own operational metrics, allowing you to track performance, resource usage, and health over time using familiar SQL queries and Grafana dashboards. + +## Problem: Monitor QuestDB Without Prometheus + +You want to monitor QuestDB's internal metrics but: +- Don't want to set up a separate Prometheus instance +- Prefer SQL-based analysis over PromQL +- Want to integrate monitoring with existing QuestDB dashboards +- Need long-term metric retention with QuestDB's compression + +QuestDB exposes metrics in Prometheus format, and Telegraf can scrape these metrics and write them back to QuestDB. + +## Solution: Telegraf as Metrics Bridge + +Use Telegraf to: +1. Scrape Prometheus metrics from QuestDB +2. Merge metrics into dense rows +3. 
Write back to a QuestDB table + +### Configuration + +This `telegraf.conf` scrapes QuestDB metrics and stores them in QuestDB: + +```toml +# Telegraf agent configuration +[agent] + interval = "5s" + omit_hostname = true + precision = "1ms" + flush_interval = "5s" + +# INPUT: Scrape QuestDB Prometheus metrics +[[inputs.prometheus]] + urls = ["http://localhost:9003/metrics"] + url_tag = "" # Omit URL tag (not needed) + metric_version = 2 # Use v2 for single table output + ignore_timestamp = false + +# AGGREGATOR: Merge metrics into single rows +[[aggregators.merge]] + drop_original = true + +# OUTPUT: Write to QuestDB via ILP over TCP +[[outputs.socket_writer]] + address = "tcp://localhost:9009" +``` + +Save this as `telegraf.conf` and start Telegraf: + +```bash +telegraf --config telegraf.conf +``` + +## How It Works + +The configuration uses three key components: + +### 1. Prometheus Input Plugin + +```toml +[[inputs.prometheus]] + urls = ["http://localhost:9003/metrics"] +``` + +Scrapes metrics from QuestDB's Prometheus endpoint. You must first enable metrics in QuestDB. + +### 2. Merge Aggregator + +```toml +[[aggregators.merge]] + drop_original = true +``` + +By default, Telegraf creates one sparse row per metric. The merge aggregator combines all metrics collected at the same timestamp into a single dense row, which is more efficient for storage and querying in QuestDB. + +### 3. Socket Writer Output + +```toml +[[outputs.socket_writer]] + address = "tcp://localhost:9009" +``` + +Sends data to QuestDB via ILP over TCP for maximum throughput. + +## Enable QuestDB Metrics + +QuestDB metrics are disabled by default. 
Enable them via configuration:
+
+### Option 1: server.conf
+
+Add to `server.conf`:
+
+```ini
+metrics.enabled=true
+```
+
+### Option 2: Environment Variable
+
+```bash
+export QDB_METRICS_ENABLED=true
+```
+
+### Option 3: Docker
+
+```bash
+docker run \
+  -p 9000:9000 \
+  -p 8812:8812 \
+  -p 9009:9009 \
+  -p 9003:9003 \
+  -e QDB_METRICS_ENABLED=true \
+  questdb/questdb:latest
+```
+
+After enabling, metrics are available at `http://localhost:9003/metrics`.
+
+## Verify Metrics Collection
+
+After starting Telegraf, verify data is being collected:
+
+```sql
+-- Check if table was created
+SELECT * FROM tables() WHERE table_name = 'prometheus';
+
+-- View recent metrics
+SELECT * FROM prometheus
+ORDER BY timestamp DESC
+LIMIT 10;
+
+-- Count metrics collected
+SELECT count(*) FROM prometheus;
+```
+
+## Querying Metrics
+
+### Available Metrics
+
+QuestDB exposes various metrics including:
+
+```sql
+-- See all available metrics (columns)
+SHOW COLUMNS FROM prometheus;
+```
+
+Common metrics include:
+- `questdb_json_queries_total`: Number of REST API queries
+- `questdb_pg_wire_queries_total`: Number of PostgreSQL wire queries
+- `questdb_ilp_tcp_*`: ILP over TCP metrics (connections, messages, errors)
+- `questdb_ilp_http_*`: ILP over HTTP metrics
+- `questdb_memory_*`: Memory usage metrics
+- `questdb_wal_*`: Write-Ahead Log metrics
+
+### Example Queries
+
+**Query rate over time:**
+```questdb-sql title="Queries per second over last hour"
+SELECT
+  timestamp,
+  questdb_json_queries_total + questdb_pg_wire_queries_total as total_queries
+FROM prometheus
+WHERE timestamp >= dateadd('h', -1, now())
+ORDER BY timestamp DESC;
+```
+
+**Memory usage trend:**
+```questdb-sql title="Memory usage over last 24 hours"
+SELECT
+  timestamp,
+  avg(questdb_memory_used) as avg_memory_used,
+  max(questdb_memory_used) as max_memory_used
+FROM prometheus
+WHERE timestamp >= dateadd('d', -1, now())
+SAMPLE BY 10m;
+```
+
+**ILP ingestion rate:**
+```questdb-sql title="ILP messages per minute"
+SELECT
+  timestamp,
+  max(questdb_ilp_tcp_messages_total) -
+  min(questdb_ilp_tcp_messages_total) as messages_per_minute
+FROM prometheus
+WHERE timestamp >= dateadd('h', -1, now())
+SAMPLE BY 1m;
+```
+
+**Connection counts:**
+```sql
+SELECT
+  timestamp,
+  questdb_ilp_tcp_connections as ilp_tcp_connections,
+  questdb_pg_wire_connections as pg_wire_connections
+FROM prometheus
+WHERE timestamp >= dateadd('h', -1, now())
+ORDER BY timestamp DESC
+LIMIT 100;
+```
+
+## Configuration Options
+
+### Monitoring Multiple QuestDB Instances
+
+To monitor multiple QuestDB instances, add separate input blocks and include instance tags:
+
+```toml
+[[inputs.prometheus]]
+  urls = ["http://questdb-prod:9003/metrics"]
+  [inputs.prometheus.tags]
+    instance = "production"
+
+[[inputs.prometheus]]
+  urls = ["http://questdb-staging:9003/metrics"]
+  [inputs.prometheus.tags]
+    instance = "staging"
+```
+
+Query by instance:
+
+```sql
+SELECT * FROM prometheus
+WHERE instance = 'production'
+  AND timestamp >= dateadd('h', -1, now());
+```
+
+### Adjusting Collection Interval
+
+Change how often metrics are collected:
+
+```toml
+[agent]
+  interval = "10s"  # Collect every 10 seconds instead of 5
+  flush_interval = "10s"
+```
+
+Lower intervals provide more granular data but increase storage. Higher intervals reduce overhead.
+
+### Using HTTP Instead of TCP
+
+For more reliable delivery with acknowledgments:
+
+```toml
+[[outputs.influxdb_v2]]
+  urls = ["http://localhost:9000"]
+  token = ""
+  content_encoding = "identity"
+```
+
+TCP is faster but doesn't confirm delivery. HTTP provides confirmation but slightly lower throughput.
+
+### Filtering Metrics
+
+Exclude unnecessary metrics to reduce storage:
+
+```toml
+[[inputs.prometheus]]
+  urls = ["http://localhost:9003/metrics"]
+  metric_version = 2
+
+  # Only collect specific metrics
+  fieldpass = [
+    "questdb_json_queries_total",
+    "questdb_pg_wire_queries_total",
+    "questdb_memory_*",
+    "questdb_ilp_*"
+  ]
+```
+
+## Grafana Dashboard Integration
+
+Create Grafana dashboards using the collected metrics:
+
+```sql
+-- Query rate panel
+SELECT
+  $__timeGroup(timestamp, $__interval) as time,
+  avg(questdb_json_queries_total) as "REST API Queries"
+FROM prometheus
+WHERE $__timeFilter(timestamp)
+GROUP BY time
+ORDER BY time;
+
+-- Memory usage panel
+SELECT
+  $__timeGroup(timestamp, $__interval) as time,
+  avg(questdb_memory_used / 1024 / 1024) as "Memory Used (MB)"
+FROM prometheus
+WHERE $__timeFilter(timestamp)
+GROUP BY time
+ORDER BY time;
+```
+
+## Data Retention
+
+Set up automatic cleanup of old metrics. QuestDB does not support row-level `DELETE`, so old data is removed by dropping partitions:
+
+```sql
+-- Drop specific month partitions
+ALTER TABLE prometheus DROP PARTITION LIST '2024-01', '2024-02';
+
+-- Or drop all partitions older than 30 days
+ALTER TABLE prometheus DROP PARTITION
+WHERE timestamp < dateadd('d', -30, now());
+```
+
+Consider partitioning by day or week:
+
+```sql
+-- Recreate table with daily partitioning
+CREATE TABLE prometheus_new (
+  timestamp TIMESTAMP,
+  -- ... metric columns ...
+) TIMESTAMP(timestamp) PARTITION BY DAY; +``` + +## Troubleshooting + +**No data appearing in QuestDB:** +- Verify QuestDB metrics are enabled: `curl http://localhost:9003/metrics` +- Check Telegraf logs for errors: `telegraf --config telegraf.conf --debug` +- Ensure port 9009 is accessible from Telegraf host +- Verify Telegraf has network connectivity to QuestDB + +**Table not created automatically:** +- QuestDB auto-creates tables on first ILP write +- Check for errors in QuestDB logs +- Verify ILP is not disabled in QuestDB configuration + +**Metrics are sparse (many NULL values):** +- Ensure merge aggregator is configured: `[[aggregators.merge]]` +- Set `drop_original = true` to discard sparse rows +- Use `metric_version = 2` in prometheus input + +**High cardinality warning:** +- Too many unique tag values can cause performance issues +- Remove unnecessary tags using `url_tag = ""` +- Use `omit_hostname = true` if monitoring single instance + +## Performance Considerations + +**Storage usage:** +- Each metric collection creates one row in QuestDB +- At 5-second intervals: ~17,000 rows/day, ~500K rows/month +- Storage is compressed efficiently due to time-series nature + +**Query performance:** +- Add indexes on frequently filtered columns (like `instance` tag) +- Use timestamp filters to limit query scope +- Leverage SAMPLE BY for aggregating data over time + +**Impact on monitored QuestDB:** +- Metrics endpoint is lightweight (sub-millisecond response time) +- Telegraf scraping adds minimal overhead +- Consider increasing interval to 30s+ if needed + +:::tip Alerting +Combine with monitoring tools to create alerts: +- Query rate drops to zero (instance down) +- Memory usage exceeds threshold +- ILP error rate increases +- WAL segment count grows unexpectedly +::: + +:::warning Circular Dependency +Be cautious about monitoring QuestDB with itself - if QuestDB fails, you lose monitoring data. 
Consider: +- Monitoring multiple QuestDB instances (write metrics from instance A to instance B) +- Setting up external monitoring as backup +- Using persistent storage volumes to preserve data across restarts +::: + +:::info Related Documentation +- [QuestDB metrics reference](/docs/operations/health-monitoring/) +- [Telegraf prometheus input](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/prometheus) +- [Telegraf merge aggregator](https://github.com/influxdata/telegraf/tree/master/plugins/aggregators/merge) +- [ILP reference](/docs/reference/api/ilp/overview/) +::: diff --git a/documentation/playbook/programmatic/cpp/missing-columns.md b/documentation/playbook/programmatic/cpp/missing-columns.md new file mode 100644 index 000000000..d4f36bad1 --- /dev/null +++ b/documentation/playbook/programmatic/cpp/missing-columns.md @@ -0,0 +1,395 @@ +--- +title: Handle Missing Columns in C++ Client +sidebar_label: Missing columns +description: Send rows with optional columns using the QuestDB C++ client by conditionally calling column methods +--- + +Handle rows with missing or optional columns when using the QuestDB C++ client. Unlike Python's dictionary-based approach where you can simply omit keys, the C++ client requires explicit method calls for each column. This guide shows how to conditionally include columns based on data availability. + +## Problem: Ragged Rows with Optional Fields + +You have data where some columns may be missing for certain rows. In Python, you can use dictionaries with `None` values or omit keys entirely: + +```python +# Python - easy to handle missing data +{"price": 10.0, "volume": 100} # Both columns +{"price": 10.0, "volume": None} # Volume missing +{"price": 10.0} # Volume omitted (equivalent to None) +``` + +In C++, the buffer-based API requires explicit method calls: + +```cpp +buffer + .table("trades") + .symbol("symbol", "ETH-USD") + .column("price", 2615.54) + .column("volume", 0.00044) // What if volume is missing? 
+    .at(timestamp);
+```
+
+## Solution: Conditional Column Calls
+
+Use `std::optional` (C++17) or nullable types, then conditionally call `.column()` only when data is present.
+
+### Complete Example
+
+```cpp
+#include <questdb/ingress/line_sender.hpp>
+#include <chrono>
+#include <iostream>
+#include <optional>
+#include <string>
+#include <vector>
+
+int main()
+{
+    try
+    {
+        auto sender = questdb::ingress::line_sender::from_conf(
+            "http::addr=localhost:9000;username=admin;password=quest;retry_timeout=20000;");
+
+        auto now = std::chrono::system_clock::now();
+        auto duration = now.time_since_epoch();
+        auto nanos = std::chrono::duration_cast<std::chrono::nanoseconds>(duration).count();
+
+        // Define structure with optional price
+        struct Trade {
+            std::string symbol;
+            std::string side;
+            std::optional<double> price;  // May be missing
+            double amount;
+        };
+
+        // Sample data - some trades missing price
+        std::vector<Trade> trades = {
+            {"ETH-USD", "sell", 2615.54, 0.00044},
+            {"BTC-USD", "sell", 39269.98, 0.001},
+            {"SOL-USD", "sell", std::nullopt, 5.5}  // Missing price
+        };
+
+        questdb::ingress::line_sender_buffer buffer;
+
+        // Iterate and conditionally add columns
+        for (const auto& trade : trades) {
+            buffer.table("trades")
+                .symbol("symbol", trade.symbol)
+                .symbol("side", trade.side);
+
+            // Only add price column if value exists
+            if (trade.price.has_value()) {
+                buffer.column("price", trade.price.value());
+            }
+
+            buffer.column("amount", trade.amount)
+                .at(questdb::ingress::timestamp_nanos(nanos));
+        }
+
+        sender.flush(buffer);
+        sender.close();
+
+        std::cout << "Data successfully sent!" << std::endl;
+        return 0;
+    }
+    catch (const questdb::ingress::line_sender_error& err)
+    {
+        std::cerr << "Error running example: " << err.what() << std::endl;
+        return 1;
+    }
+}
+```
+
+### How It Works
+
+1. **`std::optional`**: Represents a value that may or may not be present
+   - `std::nullopt`: Indicates missing value
+   - `.has_value()`: Checks if value is present
+   - `.value()`: Retrieves the value (only call if `.has_value()` is true)
+
+2. **Conditional column call**: Skip `.column()` when value is missing
+
+   ```cpp
+   if (trade.price.has_value()) {
+       buffer.column("price", trade.price.value());
+   }
+   ```
+
+3. **Buffer accumulation**: Each call to `.table()...at()` adds one row to the buffer
+   - The buffer accumulates all rows
+   - Call `.flush()` once to send all rows together
+
+## Compilation
+
+```bash
+# Basic compilation with C++17
+g++ -std=c++17 -o trades trades.cpp -lquestdb_client
+
+# With optimization
+g++ -std=c++17 -O3 -o trades trades.cpp -lquestdb_client
+
+# Using CMake
+cmake -DCMAKE_BUILD_TYPE=Release ..
+make
+```
+
+Ensure you have:
+- C++17 or later compiler
+- QuestDB C++ client library installed
+- Linker flag `-lquestdb_client`
+
+## Multiple Optional Columns
+
+Handle multiple optional fields by checking each one:
+
+```cpp
+struct SensorReading {
+    std::string sensor_id;
+    std::optional<double> temperature;
+    std::optional<double> humidity;
+    std::optional<double> pressure;
+    std::optional<std::string> status;
+};
+
+// Add to buffer
+for (const auto& reading : readings) {
+    buffer.table("sensor_data")
+        .symbol("sensor_id", reading.sensor_id);
+
+    if (reading.temperature.has_value()) {
+        buffer.column("temperature", reading.temperature.value());
+    }
+
+    if (reading.humidity.has_value()) {
+        buffer.column("humidity", reading.humidity.value());
+    }
+
+    if (reading.pressure.has_value()) {
+        buffer.column("pressure", reading.pressure.value());
+    }
+
+    if (reading.status.has_value()) {
+        buffer.column("status", reading.status.value());
+    }
+
+    buffer.at(questdb::ingress::timestamp_nanos::now());
+}
+```
+
+## C++11/14 Alternative (Without std::optional)
+
+If you can't use C++17, use pointers or sentinel values:
+
+### Using Pointers
+
+```cpp
+struct Trade {
+    std::string symbol;
+    std::string side;
+    double* price;  // nullptr if missing
+    double amount;
+};
+
+// Usage
+double btc_price = 39269.98;
+std::vector<Trade> trades = {
+    {"BTC-USD", "sell", &btc_price, 0.001},
+    {"SOL-USD", "sell", nullptr, 5.5}  // Missing price
+};
+
+for (const auto& trade : trades) {
+    buffer.table("trades")
+        .symbol("symbol", trade.symbol)
+        .symbol("side", trade.side);
+
+    if (trade.price != nullptr) {
+        buffer.column("price", *trade.price);
+    }
+
+    buffer.column("amount", trade.amount)
+        .at(questdb::ingress::timestamp_nanos::now());
+}
+```
+
+### Using Sentinel Values
+
+```cpp
+#include <cmath>
+#include <limits>
+
+const double MISSING_VALUE = std::numeric_limits<double>::quiet_NaN();
+
+struct Trade {
+    std::string symbol;
+    std::string side;
+    double price;  // NaN if missing
+    double amount;
+};
+
+// Usage
+for (const auto& trade : trades) {
+    buffer.table("trades")
+        .symbol("symbol", trade.symbol)
+        .symbol("side", trade.side);
+
+    if (!std::isnan(trade.price)) {
+        buffer.column("price", trade.price);
+    }
+
+    buffer.column("amount", trade.amount)
+        .at(questdb::ingress::timestamp_nanos::now());
+}
+```
+
+## Symbol vs Column
+
+Remember the distinction in ILP:
+- **Symbols** (`.symbol()`): Categorical data, indexed automatically by QuestDB (e.g., instrument, side, category)
+- **Columns** (`.column()`): Numerical, string, or boolean values (e.g., price, amount, status)
+
+Both can be optional and use the same conditional pattern:
+
+```cpp
+// Optional symbol
+if (trade.exchange.has_value()) {
+    buffer.symbol("exchange", trade.exchange.value());
+}
+
+// Optional column
+if (trade.price.has_value()) {
+    buffer.column("price", trade.price.value());
+}
+```
+
+## Type-Specific Column Methods
+
+The C++ client provides type-specific methods for clarity and performance:
+
+```cpp
+// Explicit type methods (recommended)
+buffer.column_f64("price", 2615.54);   // 64-bit float
+buffer.column_i64("count", 100);       // 64-bit integer
+buffer.column_bool("active", true);    // Boolean
+buffer.column_str("status", "ok");     // String
+
+// Generic column (uses template deduction)
+buffer.column("price", 2615.54);  // Also works
+```
+
+Use type-specific methods when handling optional values for better clarity:
+
+```cpp
+if (trade.price.has_value()) {
buffer.column_f64("price", trade.price.value()); +} + +if (trade.volume.has_value()) { + buffer.column_i64("volume", trade.volume.value()); +} +``` + +## Batching and Flushing + +For better performance, accumulate multiple rows before flushing: + +```cpp +constexpr size_t BATCH_SIZE = 1000; + +questdb::ingress::line_sender_buffer buffer; +size_t row_count = 0; + +for (const auto& trade : large_dataset) { + buffer.table("trades") + .symbol("symbol", trade.symbol); + + if (trade.price.has_value()) { + buffer.column("price", trade.price.value()); + } + + buffer.column("amount", trade.amount) + .at(questdb::ingress::timestamp_nanos::now()); + + row_count++; + + // Flush when batch is full + if (row_count >= BATCH_SIZE) { + sender.flush(buffer); + buffer.clear(); // Reset buffer for next batch + row_count = 0; + } +} + +// Flush remaining rows +if (row_count > 0) { + sender.flush(buffer); +} +``` + +## Error Handling + +Always handle potential errors: + +```cpp +try { + sender.flush(buffer); +} catch (const questdb::ingress::line_sender_error& err) { + std::cerr << "Failed to send data: " << err.what() << std::endl; + + // Implement retry logic or save to disk + if (should_retry(err)) { + retry_with_backoff(buffer); + } else { + save_to_disk(buffer); + } +} +``` + +## Performance Considerations + +**Minimize optional checks in hot paths:** +```cpp +// Good: Check once, process many +if (all_prices_present) { + for (const auto& trade : trades) { + buffer.table("trades") + .symbol("symbol", trade.symbol) + .column("price", trade.price) // No conditional + .column("amount", trade.amount) + .at(timestamp); + } +} else { + // Slower path with conditionals + for (const auto& trade : trades) { + buffer.table("trades") + .symbol("symbol", trade.symbol); + + if (trade.price.has_value()) { + buffer.column("price", trade.price.value()); + } + + buffer.column("amount", trade.amount) + .at(timestamp); + } +} +``` + +**Batch sizing:** +- Larger batches (1000-10000 rows) reduce 
network overhead +- Smaller batches (100-500 rows) reduce memory usage and improve latency +- Tune based on your data rate and memory constraints + +:::tip Schema Evolution +QuestDB automatically creates missing columns when you first send data with that column name. This means: +- You can add new optional columns at any time +- Existing rows will have NULL for new columns +- No schema migration required +::: + +:::warning Thread Safety +The `line_sender_buffer` is NOT thread-safe. Either: +1. Use one buffer per thread +2. Protect shared buffer with mutex +3. Use a queue pattern with single sender thread +::: + +:::info Related Documentation +- [QuestDB C++ client documentation](https://github.com/questdb/c-questdb-client) +- [ILP reference](/docs/reference/api/ilp/overview/) +- [C++ client examples](https://github.com/questdb/c-questdb-client/tree/main/examples) +- [std::optional reference](https://en.cppreference.com/w/cpp/utility/optional) +::: diff --git a/documentation/playbook/programmatic/ruby/inserting-ilp.md b/documentation/playbook/programmatic/ruby/inserting-ilp.md new file mode 100644 index 000000000..a4b27dd9f --- /dev/null +++ b/documentation/playbook/programmatic/ruby/inserting-ilp.md @@ -0,0 +1,355 @@ +--- +title: Insert Data from Ruby Using ILP +sidebar_label: Inserting via ILP +description: Send time-series data from Ruby to QuestDB using the InfluxDB Line Protocol over HTTP +--- + +Send time-series data from Ruby to QuestDB using the InfluxDB Line Protocol (ILP). While QuestDB doesn't maintain an official Ruby client, you can easily use the official InfluxDB Ruby gem to send data via ILP over HTTP, which QuestDB fully supports. + +## Available Approaches + +Two methods for sending ILP data from Ruby: + +1. **InfluxDB v2 Ruby Client** (recommended) + - Official InfluxDB gem with clean API + - Automatic batching and error handling + - Compatible with QuestDB's ILP endpoint + - Requires: `influxdb-client` gem + +2. 
**TCP Socket** (for custom implementations) + - Direct socket communication + - Manual ILP message formatting + - Higher throughput, no dependencies + - Requires: Built-in Ruby socket library + +## Using the InfluxDB v2 Ruby Client + +The InfluxDB v2 client provides a convenient Point builder API that works with QuestDB. + +### Installation + +```bash +gem install influxdb-client +``` + +Or add to your `Gemfile`: + +```ruby +gem 'influxdb-client', '~> 3.1' +``` + +### Example Code + +```ruby +require 'influxdb-client' + +# Create client +client = InfluxDB2::Client.new( + 'http://localhost:9000', + 'ignore-token', # Token not required for QuestDB + bucket: 'ignore-bucket', # Bucket not used by QuestDB + org: 'ignore-org', # Organization not used by QuestDB + precision: InfluxDB2::WritePrecision::NANOSECOND, + use_ssl: false +) + +write_api = client.create_write_api + +# Write a single point +point = InfluxDB2::Point.new(name: 'readings') + .add_tag('city', 'London') + .add_tag('make', 'Omron') + .add_field('temperature', 23.5) + .add_field('humidity', 0.343) + +write_api.write(data: point) + +# Write multiple points +points = [ + InfluxDB2::Point.new(name: 'readings') + .add_tag('city', 'Madrid') + .add_tag('make', 'Sony') + .add_field('temperature', 25.5) + .add_field('humidity', 0.360), + + InfluxDB2::Point.new(name: 'readings') + .add_tag('city', 'New York') + .add_tag('make', 'Philips') + .add_field('temperature', 20.5) + .add_field('humidity', 0.330) +] + +write_api.write(data: points) + +# Always close the client +client.close! 
+``` + +### Configuration Notes + +When using the InfluxDB client with QuestDB: + +- **`token`**: Not required - can be empty string or any value +- **`bucket`**: Ignored by QuestDB - can be any string +- **`org`**: Ignored by QuestDB - can be any string +- **`precision`**: Use `NANOSECOND` for compatibility (QuestDB's native precision) +- **`use_ssl`**: Set to `false` for local development, `true` for production with TLS + +### Data Types + +The InfluxDB client automatically handles type conversions: + +```ruby +point = InfluxDB2::Point.new(name: 'measurements') + .add_tag('sensor_id', '001') # SYMBOL in QuestDB + .add_field('temperature', 23.5) # DOUBLE + .add_field('humidity', 0.343) # DOUBLE + .add_field('pressure', 1013) # LONG (integer) + .add_field('status', 'active') # STRING + .add_field('online', true) # BOOLEAN +``` + +## TCP Socket Approach + +For maximum control and performance, send ILP messages directly via TCP sockets. + +### Basic TCP Example + +```ruby +require 'socket' + +HOST = 'localhost' +PORT = 9009 + +# Helper method to get current time in nanoseconds +def time_in_nsec + now = Time.now + return now.to_i * (10 ** 9) + now.nsec +end + +begin + s = TCPSocket.new(HOST, PORT) + + # Single record with timestamp + s.puts "trades,symbol=BTC-USDT,side=buy price=37779.62,amount=0.5 #{time_in_nsec}\n" + + # Omitting timestamp - server assigns one + s.puts "trades,symbol=ETH-USDT,side=sell price=2615.54,amount=1.2\n" + + # Multiple records (newline-delimited) + s.puts "trades,symbol=SOL-USDT,side=buy price=98.23,amount=10.0\n" + + "trades,symbol=BTC-USDT,side=sell price=37800.00,amount=0.3\n" + +rescue SocketError => ex + puts "Socket error: #{ex.inspect}" +ensure + s.close if s +end +``` + +### ILP Message Format + +The ILP format is: + +``` +table_name,tag1=value1,tag2=value2 field1=value1,field2=value2 timestamp\n +``` + +Breaking it down: +- **Table name**: Target table (created automatically if doesn't exist) +- **Tags** (symbols): Comma-separated 
key=value pairs for indexed categorical data +- **Space separator**: Separates tags from fields +- **Fields** (columns): Comma-separated key=value pairs for numerical or string data +- **Space separator**: Separates fields from timestamp +- **Timestamp** (optional): Nanosecond-precision timestamp; if omitted, server assigns + +**Example:** +``` +readings,city=London,make=Omron temperature=23.5,humidity=0.343 1465839830100400000\n +``` + +### Escaping Special Characters + +ILP requires escaping for certain characters: + +```ruby +def escape_ilp(value) + value.to_s + .gsub(' ', '\\ ') # Space + .gsub(',', '\\,') # Comma + .gsub('=', '\\=') # Equals + .gsub("\n", '\\n') # Newline +end + +# Usage +tag_value = "London, UK" +escaped = escape_ilp(tag_value) # "London\\, UK" + +s.puts "readings,city=#{escaped} temperature=23.5\n" +``` + +### Batching for Performance + +Send multiple rows in a single TCP write: + +```ruby +require 'socket' + +HOST = 'localhost' +PORT = 9009 + +def time_in_nsec + now = Time.now + return now.to_i * (10 ** 9) + now.nsec +end + +begin + s = TCPSocket.new(HOST, PORT) + + # Build batch of rows + batch = [] + (1..1000).each do |i| + timestamp = time_in_nsec + i * 1000000 # 1ms apart + batch << "readings,sensor_id=#{i} value=#{rand(100.0)},status=\"ok\" #{timestamp}" + end + + # Send entire batch at once + s.puts batch.join("\n") + "\n" + s.flush + +rescue SocketError => ex + puts "Socket error: #{ex.inspect}" +ensure + s.close if s +end +``` + +## Comparison: InfluxDB Client vs TCP Socket + +| Feature | InfluxDB Client | TCP Socket | +|---------|----------------|------------| +| **Ease of use** | High - Point builder API | Medium - Manual ILP formatting | +| **Dependencies** | Requires `influxdb-client` gem | None (stdlib only) | +| **Error handling** | Automatic with retries | Manual implementation | +| **Batching** | Automatic | Manual | +| **Performance** | Good | Excellent (direct TCP) | +| **Type safety** | Automatic type conversion | Manual 
string formatting | +| **Reliability** | HTTP with acknowledgments | No acknowledgments (fire and forget) | +| **Escaping** | Automatic | Manual implementation required | +| **Recommended for** | Most applications | High-throughput scenarios, custom needs | + +## Best Practices + +### Connection Management + +**InfluxDB Client:** +```ruby +# Reuse client for multiple writes +client = InfluxDB2::Client.new(...) +write_api = client.create_write_api + +# ... perform many writes ... + +client.close! # Always close when done +``` + +**TCP Socket:** +```ruby +# Keep connection open for multiple writes +socket = TCPSocket.new(HOST, PORT) + +begin + # ... send multiple batches ... +ensure + socket.close if socket +end +``` + +### Error Handling + +**InfluxDB Client:** +```ruby +begin + write_api.write(data: points) +rescue InfluxDB2::InfluxError => e + puts "Failed to write to QuestDB: #{e.message}" + # Implement retry logic or logging +end +``` + +**TCP Socket:** +```ruby +begin + socket.puts(ilp_messages) + socket.flush +rescue Errno::EPIPE, Errno::ECONNRESET => e + puts "Connection lost: #{e.message}" + # Reconnect and retry +rescue StandardError => e + puts "Unexpected error: #{e.message}" +end +``` + +### Timestamp Generation + +Use nanosecond precision for maximum compatibility: + +```ruby +# Current time in nanoseconds +def current_nanos + now = Time.now + now.to_i * 1_000_000_000 + now.nsec +end + +# Specific time to nanoseconds +def time_to_nanos(time) + time.to_i * 1_000_000_000 + time.nsec +end + +# Usage +timestamp = current_nanos +# or +timestamp = time_to_nanos(Time.parse("2024-09-05 14:30:00 UTC")) +``` + +### Batching Strategy + +For high-throughput scenarios: + +```ruby +BATCH_SIZE = 1000 +FLUSH_INTERVAL = 5 # seconds + +batch = [] +last_flush = Time.now + +data_stream.each do |record| + batch << format_ilp_message(record) + + if batch.size >= BATCH_SIZE || (Time.now - last_flush) >= FLUSH_INTERVAL + socket.puts batch.join("\n") + "\n" + socket.flush + 
batch.clear + last_flush = Time.now + end +end + +# Flush remaining records +socket.puts batch.join("\n") + "\n" unless batch.empty? +``` + +:::tip Choosing an Approach +- **Use InfluxDB client** for most Ruby applications - it's easier, safer, and handles edge cases +- **Use TCP sockets** only when you need maximum throughput and can handle reliability concerns +::: + +:::warning Data Loss with TCP +TCP ILP has no acknowledgments. If the connection drops, data may be lost silently. For critical data, use HTTP (via the InfluxDB client) which provides delivery confirmation. +::: + +:::info Related Documentation +- [ILP reference](/docs/reference/api/ilp/overview/) +- [ILP over HTTP](/docs/reference/api/ilp/overview/#http) +- [ILP over TCP](/docs/reference/api/ilp/overview/#tcp) +- [InfluxDB Ruby client](https://github.com/influxdata/influxdb-client-ruby) +::: diff --git a/documentation/playbook/programmatic/rust/tls-configuration.md b/documentation/playbook/programmatic/rust/tls-configuration.md new file mode 100644 index 000000000..9881a8b3d --- /dev/null +++ b/documentation/playbook/programmatic/rust/tls-configuration.md @@ -0,0 +1,326 @@ +--- +title: Configure TLS for Rust Client +sidebar_label: TLS configuration +description: Set up TLS certificates for the QuestDB Rust client including self-signed certificates and production CA roots +--- + +Configure TLS encryption for the QuestDB Rust client when connecting to QuestDB instances with TLS enabled. This guide covers both production deployments with proper CA certificates and development environments with self-signed certificates. 
+
+## Problem: TLS Certificate Validation
+
+When connecting the Rust client to a TLS-enabled QuestDB instance, you'll encounter certificate validation errors if:
+
+- Using self-signed certificates (common in development)
+- Using corporate/internal CA certificates not in system trust stores
+- The certificate hostname doesn't match the connection address
+
+The default client configuration validates certificates against the system certificate stores, which causes "certificate unknown" errors with self-signed certificates.
+
+## Solution Options
+
+The QuestDB Rust client provides three approaches to TLS configuration:
+
+1. **System + WebPKI roots** (recommended for production)
+2. **Custom CA certificate** (best for development and internal CAs)
+3. **Skip verification** (development/testing only - unsafe)
+
+### Option 1: Use System and WebPKI Certificate Roots
+
+For production deployments with properly signed certificates from public Certificate Authorities:
+
+```rust
+use questdb::ingress::{Sender, SenderBuilder};
+
+#[tokio::main]
+async fn main() -> Result<(), Box<dyn std::error::Error>> {
+    let sender = SenderBuilder::new("https", "production-host.com", 9000)?
+        .username("admin")?
+        .password("quest")?
+        .tls_ca("webpki_and_os_roots")? // Use both WebPKI and OS certificate stores
+        .build()
+        .await?;
+
+    // Use sender...
+
+    sender.close().await?;
+    Ok(())
+}
+```
+
+The `tls_ca("webpki_and_os_roots")` parameter tells the client to trust:
+
+- **WebPKI roots**: Mozilla's standard root CA certificates
+- **OS roots**: the operating system's certificate store (Windows, macOS, Linux)
+
+This works with certificates from public CAs such as Let's Encrypt and DigiCert.
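Before pointing the client at a certificate, it can help to confirm locally that a server certificate actually chains to the CA you intend to trust. A quick check with `openssl` (the file names here are placeholders; the first three commands just build a throwaway CA and a server certificate signed by it):

```shell
# Create a throwaway CA
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ca.key -out ca.crt -days 1 -subj "/CN=test-ca"

# Create a server key and certificate signing request
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/CN=localhost"

# Sign the server certificate with the CA
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out server.crt -days 1

# Confirm the server certificate verifies against that CA
openssl verify -CAfile ca.crt server.crt
```

If `openssl verify` does not report `OK` here, the Rust client will reject the certificate too, regardless of which `tls_ca` mode you pick.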
+
+### Option 2: Custom CA Certificate (Recommended for Development)
+
+For development environments or internal CAs, provide a PEM-encoded certificate file:
+
+#### Step 1: Generate a Self-Signed Certificate (if needed)
+
+```bash
+# Generate private key
+openssl genrsa -out questdb.key 2048
+
+# Generate self-signed certificate (valid for 365 days)
+openssl req -new -x509 \
+  -key questdb.key \
+  -out questdb.crt \
+  -days 365 \
+  -subj "/CN=localhost"
+
+# Verify certificate
+openssl x509 -in questdb.crt -text -noout
+```
+
+#### Step 2: Configure QuestDB with the Certificate
+
+Add to QuestDB `server.conf`:
+
+```ini
+# Enable TLS on HTTP endpoint
+http.security.enabled=true
+http.security.cert.path=/path/to/questdb.crt
+http.security.key.path=/path/to/questdb.key
+```
+
+Or via environment variables:
+
+```bash
+export QDB_HTTP_SECURITY_ENABLED=true
+export QDB_HTTP_SECURITY_CERT_PATH=/path/to/questdb.crt
+export QDB_HTTP_SECURITY_KEY_PATH=/path/to/questdb.key
+```
+
+#### Step 3: Configure the Rust Client
+
+```rust
+use questdb::ingress::{Sender, SenderBuilder};
+
+#[tokio::main]
+async fn main() -> Result<(), Box<dyn std::error::Error>> {
+    let sender = SenderBuilder::new("https", "localhost", 9000)?
+        .username("admin")?
+        .password("quest")?
+        .tls_ca("pem_file")?                // Specify PEM file mode
+        .tls_roots("/path/to/questdb.crt")? // Path to certificate file
+        .build()
+        .await?;
+
+    // Write data
+    sender
+        .table("trades")?
+        .symbol("symbol", "BTC-USDT")?
+        .symbol("side", "buy")?
+        .column_f64("price", 37779.62)?
+        .column_f64("amount", 0.5)?
+        .at_now()
+        .await?;
+
+    sender.close().await?;
+    Ok(())
+}
+```
+
+**Key points:**
+
+- Use the `"https"` protocol (not `"http"`)
+- `tls_ca("pem_file")`: tells the client to load roots from a PEM file
+- `tls_roots("/path/to/questdb.crt")`: path to the certificate file
+- The certificate file must be PEM-encoded (text format starting with `-----BEGIN CERTIFICATE-----`)
+
+### Option 3: Skip Verification (Development Only)
+
+For development/testing when you want to bypass certificate validation entirely:
+
+#### Add the Feature to Cargo.toml
+
+```toml
+[dependencies]
+questdb-rs = { version = "4.0", features = ["insecure-skip-verify"] }
+tokio = { version = "1", features = ["full"] }
+```
+
+The `insecure-skip-verify` feature must be explicitly enabled in your `Cargo.toml`.
+
+#### Use the Unsafe Verification Setting
+
+```rust
+use questdb::ingress::{Sender, SenderBuilder};
+
+#[tokio::main]
+async fn main() -> Result<(), Box<dyn std::error::Error>> {
+    let sender = SenderBuilder::new("https", "localhost", 9000)?
+        .username("admin")?
+        .password("quest")?
+        .tls_verify("unsafe_off")? // Disable certificate verification
+        .build()
+        .await?;
+
+    // Use sender...
+
+    sender.close().await?;
+    Ok(())
+}
+```
+
+:::danger Security Warning
+**Never use `unsafe_off` in production!** This disables all certificate validation and makes your connection vulnerable to man-in-the-middle attacks. Only use it for local development with self-signed certificates.
+:::
+
+## Complete Example
+
+Here's a complete example handling different environments:
+
+```rust
+use questdb::ingress::{Sender, SenderBuilder};
+use std::env;
+
+#[tokio::main]
+async fn main() -> Result<(), Box<dyn std::error::Error>> {
+    let environment = env::var("ENVIRONMENT").unwrap_or_else(|_| "development".to_string());
+
+    let sender = match environment.as_str() {
+        "production" => {
+            // Production: Use system CA roots
+            SenderBuilder::new("https", "production-host.com", 9000)?
+                .username("admin")?
+                .password("quest")?
+                .tls_ca("webpki_and_os_roots")?
+                .build()
+                .await?
+ } + "development" => { + // Development: Use self-signed certificate + SenderBuilder::new("https", "localhost", 9000)? + .username("admin")? + .password("quest")? + .tls_ca("pem_file")? + .tls_roots("./certs/questdb.crt")? + .build() + .await? + } + _ => { + return Err("Unknown environment".into()); + } + }; + + // Write sample data + sender + .table("trades")? + .symbol("symbol", "BTC-USDT")? + .symbol("side", "buy")? + .column_f64("price", 37779.62)? + .column_f64("amount", 0.5)? + .at_now() + .await?; + + println!("Data sent successfully over TLS"); + + sender.close().await?; + Ok(()) +} +``` + +Run with: + +```bash +# Production +ENVIRONMENT=production cargo run + +# Development +ENVIRONMENT=development cargo run +``` + +## TLS Configuration Options + +### Available tls_ca Values + +| Value | Description | Use Case | +|-------|-------------|----------| +| `webpki_roots` | Mozilla's WebPKI root certificates only | Public CAs, web-hosted QuestDB | +| `os_roots` | Operating system certificate store only | Corporate environments with custom CAs | +| `webpki_and_os_roots` | Both WebPKI and OS roots | Production (recommended) - covers all valid certificates | +| `pem_file` | Load from PEM file | Self-signed certificates, internal CAs | + +### Connection String Format + +Alternatively, configure TLS via connection string: + +```rust +let sender = SenderBuilder::from_conf( + "https::addr=localhost:9000;username=admin;password=quest;tls_ca=webpki_and_os_roots;" +)? +.build() +.await?; +``` + +For self-signed certificates with PEM file: + +```rust +let sender = SenderBuilder::from_conf( + "https::addr=localhost:9000;username=admin;password=quest;tls_ca=pem_file;tls_roots=/path/to/cert.crt;" +)? 
+.build() +.await?; +``` + +## Troubleshooting + +**"certificate unknown" error:** +- Verify certificate is valid and not expired: `openssl x509 -in cert.crt -noout -dates` +- Check certificate hostname matches connection address +- Ensure certificate file path is correct and readable +- For self-signed certs, use `tls_ca("pem_file")` with `tls_roots()` + +**"certificate verify failed":** +- Self-signed certificate: Use Option 2 (custom CA) or Option 3 (unsafe skip) +- Wrong CA: Verify certificate chain is complete in PEM file +- Expired certificate: Regenerate with longer validity period + +**"connection refused":** +- QuestDB TLS not enabled - check QuestDB configuration +- Wrong port - TLS uses same port (9000 for HTTP, 9009 for TCP) +- Firewall blocking HTTPS connections + +**"feature `insecure-skip-verify` is required":** +- Add feature to Cargo.toml: `features = ["insecure-skip-verify"]` +- This feature is required even just to use `tls_verify("unsafe_off")` + +## Certificate File Formats + +The Rust client expects PEM-encoded certificates: + +**Correct format (PEM):** +``` +-----BEGIN CERTIFICATE----- +MIIDXTCCAkWgAwIBAgIJAKL0UG+mRKqzMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV +... +-----END CERTIFICATE----- +``` + +**If you have DER format**, convert to PEM: +```bash +openssl x509 -inform der -in certificate.der -out certificate.pem +``` + +**Certificate chain**: If using an intermediate CA, concatenate certificates: +```bash +cat server.crt intermediate.crt root.crt > chain.pem +``` + +Use `chain.pem` with `tls_roots()`. + +:::tip Production Best Practices +1. **Use proper CA certificates** from Let's Encrypt or commercial CAs in production +2. **Never commit certificates** to version control - use secure secret management +3. **Rotate certificates** before expiration - monitor expiry dates +4. **Use environment variables** for certificate paths to support different environments +5. 
**Test certificate validation** in staging environment before production deployment +::: + +:::info Related Documentation +- [QuestDB Rust client documentation](https://docs.rs/questdb/) +- [QuestDB Rust client GitHub](https://github.com/questdb/c-questdb-client) +- [TLS configuration examples](https://github.com/questdb/c-questdb-client/tree/main/questdb-rs/examples) +- [QuestDB TLS configuration](/docs/operations/tls/) +::: diff --git a/documentation/playbook/sql/advanced/top-n-plus-others.md b/documentation/playbook/sql/advanced/top-n-plus-others.md new file mode 100644 index 000000000..3533ae7dc --- /dev/null +++ b/documentation/playbook/sql/advanced/top-n-plus-others.md @@ -0,0 +1,366 @@ +--- +title: Top N Plus Others Row +sidebar_label: Top N + Others +description: Group query results into top N rows plus an aggregated "Others" row using rank() and CASE expressions +--- + +Create aggregated results showing the top N items individually, with all remaining items combined into a single "Others" row. This pattern is useful for dashboards and reports where you want to highlight the most important items while still showing the total. + +## Problem: Show Top Items Plus Remainder + +You want to display results like: + +| Browser | Count | +|--------------------|-------| +| Chrome | 450 | +| Firefox | 380 | +| Safari | 320 | +| Edge | 280 | +| Opera | 190 | +| -Others- | 380 | ← Combined total of all other browsers + +Instead of listing all browsers (which might be dozens), show the top 5 individually and aggregate the rest. 
+ +## Solution: Use rank() with CASE Statement + +Use `rank()` to identify top N rows, then use `CASE` to group remaining rows: + +```questdb-sql demo title="Top 5 symbols plus Others" +WITH totals AS ( + SELECT + symbol, + count() as total + FROM trades + WHERE timestamp >= dateadd('d', -1, now()) +), +ranked AS ( + SELECT + *, + rank() OVER (ORDER BY total DESC) as ranking + FROM totals +) +SELECT + CASE + WHEN ranking <= 5 THEN symbol + ELSE '-Others-' + END as symbol, + SUM(total) as total_trades +FROM ranked +GROUP BY 1 +ORDER BY total_trades DESC; +``` + +**Results:** + +| symbol | total_trades | +|------------|--------------| +| BTC-USDT | 15234 | +| ETH-USDT | 12890 | +| SOL-USDT | 8945 | +| MATIC-USDT | 6723 | +| AVAX-USDT | 5891 | +| -Others- | 23456 | ← Sum of all other symbols + +## How It Works + +The query uses a three-step approach: + +1. **Aggregate data** (`totals` CTE): + - Count or sum values by the grouping column + - Creates base data for ranking + +2. **Rank rows** (`ranked` CTE): + - `rank() OVER (ORDER BY total DESC)`: Assigns rank based on count (1 = highest) + - Ties receive the same rank + +3. **Conditional grouping** (outer query): + - `CASE WHEN ranking <= 5`: Keep top 5 with original names + - `ELSE '-Others-'`: Rename all others to "-Others-" + - `SUM(total)`: Aggregate counts (combines all "Others" into one row) + - `GROUP BY 1`: Group by the CASE expression result + +### Understanding rank() + +`rank()` assigns ranks with gaps for ties: + +| symbol | total | rank | +|------------|-------|------| +| BTC-USDT | 1000 | 1 | +| ETH-USDT | 900 | 2 | +| SOL-USDT | 900 | 2 | ← Tie at rank 2 +| AVAX-USDT | 800 | 4 | ← Next rank is 4 (skips 3) +| MATIC-USDT | 700 | 5 | + +If there are ties at the boundary (rank 5), all tied items will be included in top N. 
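You can see the gap behavior directly on a handful of inline rows. This sketch builds a throwaway result set with `UNION ALL`; support for `SELECT` without a `FROM` clause may vary by QuestDB version:

```sql
WITH totals AS (
    SELECT 'BTC-USDT' AS symbol, 1000 AS total
    UNION ALL SELECT 'ETH-USDT', 900
    UNION ALL SELECT 'SOL-USDT', 900
    UNION ALL SELECT 'AVAX-USDT', 800
)
SELECT symbol, total, rank() OVER (ORDER BY total DESC) AS ranking
FROM totals;
-- ETH-USDT and SOL-USDT share rank 2; AVAX-USDT gets rank 4 (rank 3 is skipped)
```

Because of this, a `WHEN ranking <= 5` cutoff can admit more than 5 rows when the 5th place is tied.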
+ +## Adapting the Pattern + +**Different top N:** +```sql +-- Top 10 instead of top 5 +WHEN ranking <= 10 THEN symbol + +-- Top 3 +WHEN ranking <= 3 THEN symbol +``` + +**Different aggregations:** +```sql +-- Sum instead of count +WITH totals AS ( + SELECT symbol, SUM(amount) as total_volume + FROM trades +) +... +``` + +**Multiple levels:** +```sql +SELECT + CASE + WHEN ranking <= 5 THEN symbol + WHEN ranking <= 10 THEN '-Top 10-' + ELSE '-Others-' + END as category, + SUM(total) as count +FROM ranked +GROUP BY 1; +``` + +Results in three groups: top 5 individual, ranks 6-10 combined, rest combined. + +**Different grouping columns:** +```questdb-sql demo title="Top 5 ECNs plus Others from market data" +WITH totals AS ( + SELECT + ecn, + count() as total + FROM market_data + WHERE timestamp >= dateadd('h', -1, now()) +), +ranked AS ( + SELECT *, rank() OVER (ORDER BY total DESC) as ranking + FROM totals +) +SELECT + CASE WHEN ranking <= 5 THEN ecn ELSE '-Others-' END as ecn, + SUM(total) as message_count +FROM ranked +GROUP BY 1 +ORDER BY message_count DESC; +``` + +**With percentage:** +```questdb-sql demo title="Top 5 symbols with percentage of total" +WITH totals AS ( + SELECT symbol, count() as total + FROM trades + WHERE timestamp >= dateadd('d', -1, now()) +), +ranked AS ( + SELECT *, rank() OVER (ORDER BY total DESC) as ranking + FROM totals +), +summed AS ( + SELECT SUM(total) as grand_total FROM totals +), +grouped AS ( + SELECT + CASE WHEN ranking <= 5 THEN symbol ELSE '-Others-' END as symbol, + SUM(total) as total_trades + FROM ranked + GROUP BY 1 +) +SELECT + symbol, + total_trades, + round(100.0 * total_trades / grand_total, 2) as percentage +FROM grouped CROSS JOIN summed +ORDER BY total_trades DESC; +``` + +## Alternative: Using row_number() + +If you don't want to handle ties and always want exactly N rows in top tier: + +```sql +WITH totals AS ( + SELECT symbol, count() as total + FROM trades +), +ranked AS ( + SELECT *, row_number() OVER (ORDER 
BY total DESC) as rn
+    FROM totals
+)
+SELECT
+    CASE WHEN rn <= 5 THEN symbol ELSE '-Others-' END as symbol,
+    SUM(total) as total_trades
+FROM ranked
+GROUP BY 1
+ORDER BY total_trades DESC;
+```
+
+**Difference:**
+
+- `rank()`: may include more than N rows if there are ties at position N
+- `row_number()`: always exactly N rows in the top tier (breaks ties arbitrarily)
+
+## Multiple Grouping Columns
+
+Show top N for multiple dimensions:
+
+```sql
+WITH totals AS (
+    SELECT
+        symbol,
+        side,
+        count() as total
+    FROM trades
+    WHERE timestamp >= dateadd('d', -1, now())
+),
+ranked AS (
+    SELECT
+        *,
+        rank() OVER (PARTITION BY side ORDER BY total DESC) as ranking
+    FROM totals
+)
+SELECT
+    side,
+    CASE WHEN ranking <= 3 THEN symbol ELSE '-Others-' END as symbol,
+    SUM(total) as total_trades
+FROM ranked
+GROUP BY side, 2
+ORDER BY side, total_trades DESC;
+```
+
+This shows the top 3 symbols separately for the buy and sell sides.
+
+## Visualization Considerations
+
+This pattern is particularly useful for charts:
+
+**Pie/Donut charts:**
+```sql
+-- Top 5 slices plus "Others" slice
+CASE WHEN ranking <= 5 THEN symbol ELSE '-Others-' END
+```
+
+**Bar charts:**
+```sql
+-- Top 10 bars, sorted by value
+CASE WHEN ranking <= 10 THEN symbol ELSE '-Others-' END
+ORDER BY total_trades DESC
+```
+
+**Time series:**
+```questdb-sql demo title="Top 5 symbols over time with Others"
+WITH totals AS (
+    SELECT
+        timestamp_floor('h', timestamp) as hour,
+        symbol,
+        count() as total
+    FROM trades
+    WHERE timestamp >= dateadd('d', -1, now())
+),
+overall_ranks AS (
+    SELECT symbol, SUM(total) as grand_total
+    FROM totals
+    GROUP BY symbol
+),
+ranked_symbols AS (
+    SELECT symbol, rank() OVER (ORDER BY grand_total DESC) as ranking
+    FROM overall_ranks
+)
+SELECT
+    t.hour,
+    CASE WHEN rs.ranking <= 5 THEN t.symbol ELSE '-Others-' END as symbol,
+    SUM(t.total) as hourly_total
+FROM totals t
+LEFT JOIN ranked_symbols rs ON t.symbol = rs.symbol
+GROUP BY t.hour, 2
+ORDER BY t.hour,
hourly_total DESC; +``` + +This shows how top 5 symbols trade over time, with all others combined. + +## Filtering Out Low Values + +Add a minimum threshold to exclude negligible values: + +```sql +WITH totals AS ( + SELECT symbol, count() as total + FROM trades + WHERE timestamp >= dateadd('d', -1, now()) +), +ranked AS ( + SELECT *, rank() OVER (ORDER BY total DESC) as ranking + FROM totals + WHERE total >= 10 -- Exclude symbols with less than 10 trades +) +SELECT + CASE WHEN ranking <= 5 THEN symbol ELSE '-Others-' END as symbol, + SUM(total) as total_trades +FROM ranked +GROUP BY 1 +ORDER BY total_trades DESC; +``` + +## Performance Tips + +**Pre-filter data:** +```sql +-- Good: Filter before aggregation +WITH totals AS ( + SELECT symbol, count() as total + FROM trades + WHERE timestamp >= dateadd('d', -1, now()) -- Filter early + AND symbol IN (SELECT DISTINCT symbol FROM watched_symbols) +) +... + +-- Less efficient: Filter after aggregation +WITH totals AS ( + SELECT symbol, count() as total + FROM trades -- No filter +) +, filtered AS ( + SELECT * FROM totals + WHERE ... -- Late filter +) +... +``` + +**Limit ranking scope:** +```sql +-- If you only need top 5, don't rank beyond what's needed +WITH totals AS ( + SELECT symbol, count() as total + FROM trades + WHERE timestamp >= dateadd('d', -1, now()) + ORDER BY total DESC + LIMIT 100 -- Rank only top 100, not all thousands +) +... +``` + +:::tip Custom Labels +Customize the "Others" label for your domain: +- `-Others-` (generic) +- `~Rest~` (shorter) +- `Other Symbols` (explicit) +- `Remaining Browsers` (domain-specific) + +Choose a label that sorts appropriately and is clear in your context. +::: + +:::warning Empty Others Row +If there are N or fewer distinct values, the "Others" row won't appear (or will have 0 count). Handle this in your visualization logic if needed. 
+::: + +:::info Related Documentation +- [rank() window function](/docs/reference/function/window/#rank) +- [row_number() window function](/docs/reference/function/window/#row_number) +- [CASE expressions](/docs/reference/sql/case/) +- [Window functions](/docs/reference/sql/select/#window-functions) +::: diff --git a/documentation/playbook/sql/finance/bollinger-bands.md b/documentation/playbook/sql/finance/bollinger-bands.md new file mode 100644 index 000000000..095fc3b5e --- /dev/null +++ b/documentation/playbook/sql/finance/bollinger-bands.md @@ -0,0 +1,175 @@ +--- +title: Bollinger Bands +sidebar_label: Bollinger Bands +description: Calculate Bollinger Bands using window functions for volatility analysis and mean reversion trading strategies +--- + +Calculate Bollinger Bands for volatility analysis and mean reversion trading. Bollinger Bands consist of a moving average with upper and lower bands set at a specified number of standard deviations above and below it. They help identify overbought/oversold conditions and measure market volatility. + +## Problem: Calculate Rolling Bands with Standard Deviation + +You want to calculate Bollinger Bands with a 20-period simple moving average (SMA) and bands at ±2 standard deviations. The challenge is that QuestDB doesn't support `STDDEV` as a window function, so you need a workaround using the mathematical relationship between variance and standard deviation. 
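The workaround that follows relies on a standard identity: the variance of a set of values equals the mean of their squares minus the square of their mean, so `σ = √(E[X²] − E[X]²)`. A quick sanity check of that identity outside the database (plain Python with synthetic prices, not part of the QuestDB query):

```python
import math

# Synthetic closing prices standing in for one 20-period window
prices = [100.0, 101.5, 99.8, 102.3, 100.9, 98.7]

# Direct population standard deviation: sqrt of mean squared deviation
mu = sum(prices) / len(prices)
direct = math.sqrt(sum((p - mu) ** 2 for p in prices) / len(prices))

# Via the identity used in the SQL below: sqrt(E[X^2] - E[X]^2)
mean_sq = sum(p * p for p in prices) / len(prices)
via_identity = math.sqrt(mean_sq - mu * mu)

print(abs(direct - via_identity) < 1e-9)  # the two computations agree
```

This is exactly why averaging `close` and `close * close` over the same window frame is enough to recover the standard deviation in SQL.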
+ +## Solution: Calculate Variance Using Window Functions + +Since standard deviation is the square root of variance, and variance is the average of squared differences from the mean, we can calculate it using window functions: + +```questdb-sql demo title="Calculate Bollinger Bands with 20-period SMA" +WITH OHLC AS ( + SELECT + timestamp, symbol, + first(price) AS open, + max(price) as high, + min(price) as low, + last(price) AS close, + sum(amount) AS volume + FROM trades + WHERE symbol = 'BTC-USDT' AND timestamp IN yesterday() + SAMPLE BY 15m +), stats AS ( + SELECT + timestamp, + close, + AVG(close) OVER ( + ORDER BY timestamp + ROWS BETWEEN 19 PRECEDING AND CURRENT ROW + ) AS sma20, + AVG(close * close) OVER ( + ORDER BY timestamp + ROWS BETWEEN 19 PRECEDING AND CURRENT ROW + ) AS avg_close_sq + FROM OHLC +) +SELECT + timestamp, + close, + sma20, + sqrt(avg_close_sq - (sma20 * sma20)) as stdev20, + sma20 + 2 * sqrt(avg_close_sq - (sma20 * sma20)) as upper_band, + sma20 - 2 * sqrt(avg_close_sq - (sma20 * sma20)) as lower_band +FROM stats +ORDER BY timestamp; +``` + +This query: +1. Aggregates trades into 15-minute OHLC candles +2. Calculates a 20-period simple moving average of closing prices +3. Calculates the average of squared closing prices over the same 20-period window +4. Computes standard deviation using the mathematical identity: `σ = √(E[X²] - E[X]²)` +5. Adds/subtracts 2× standard deviation to create upper and lower bands + +## How It Works + +The mathematical relationship used here is: + +``` +Variance(X) = E[X²] - (E[X])² +StdDev(X) = √(E[X²] - (E[X])²) +``` + +Where: +- `E[X]` is the average (SMA) of closing prices +- `E[X²]` is the average of squared closing prices +- `√` is the square root function + +Breaking down the calculation: +1. **`AVG(close)`**: Simple moving average over 20 periods +2. **`AVG(close * close)`**: Average of squared prices over 20 periods +3. 
**`sqrt(avg_close_sq - (sma20 * sma20))`**: Standard deviation derived from variance +4. **Upper/Lower bands**: SMA ± (multiplier × standard deviation) + +### Window Frame Clause + +`ROWS BETWEEN 19 PRECEDING AND CURRENT ROW` creates a sliding window of exactly 20 rows (19 previous + current), which gives us the 20-period moving calculations required for standard Bollinger Bands. + +## Adapting the Parameters + +**Different period lengths:** +```sql +-- 10-period Bollinger Bands (change 19 to 9) +AVG(close) OVER (ORDER BY timestamp ROWS BETWEEN 9 PRECEDING AND CURRENT ROW) AS sma10, +AVG(close * close) OVER (ORDER BY timestamp ROWS BETWEEN 9 PRECEDING AND CURRENT ROW) AS avg_close_sq +``` + +**Different band multipliers:** +```sql +-- 1 standard deviation bands (tighter) +sma20 + 1 * sqrt(avg_close_sq - (sma20 * sma20)) as upper_band, +sma20 - 1 * sqrt(avg_close_sq - (sma20 * sma20)) as lower_band + +-- 3 standard deviation bands (wider) +sma20 + 3 * sqrt(avg_close_sq - (sma20 * sma20)) as upper_band, +sma20 - 3 * sqrt(avg_close_sq - (sma20 * sma20)) as lower_band +``` + +**Different time intervals:** +```sql +-- 5-minute candles +SAMPLE BY 5m + +-- 1-hour candles +SAMPLE BY 1h +``` + +**Multiple symbols:** +```questdb-sql demo title="Bollinger Bands for multiple symbols" +WITH OHLC AS ( + SELECT + timestamp, symbol, + first(price) AS open, + last(price) AS close, + sum(amount) AS volume + FROM trades + WHERE symbol IN ('BTC-USDT', 'ETH-USDT') + AND timestamp IN yesterday() + SAMPLE BY 15m +), stats AS ( + SELECT + timestamp, + symbol, + close, + AVG(close) OVER ( + PARTITION BY symbol + ORDER BY timestamp + ROWS BETWEEN 19 PRECEDING AND CURRENT ROW + ) AS sma20, + AVG(close * close) OVER ( + PARTITION BY symbol + ORDER BY timestamp + ROWS BETWEEN 19 PRECEDING AND CURRENT ROW + ) AS avg_close_sq + FROM OHLC +) +SELECT + timestamp, + symbol, + close, + sma20, + sma20 + 2 * sqrt(avg_close_sq - (sma20 * sma20)) as upper_band, + sma20 - 2 * sqrt(avg_close_sq - (sma20 * 
sma20)) as lower_band +FROM stats +ORDER BY symbol, timestamp; +``` + +Note the addition of `PARTITION BY symbol` to calculate separate Bollinger Bands for each symbol. + +:::tip Trading Signals +- **Bollinger Squeeze**: When bands narrow, it indicates low volatility and often precedes significant price moves +- **Band Walk**: Price consistently touching the upper band suggests strong uptrend; lower band suggests downtrend +- **Mean Reversion**: Price touching or exceeding bands often signals potential reversals back to the mean +- **Volatility Measure**: Width between bands indicates market volatility - wider bands mean higher volatility +::: + +:::tip Parameter Selection +- **Standard settings**: 20-period SMA with 2σ bands (captures ~95% of price action) +- **Day trading**: Use shorter periods (10 or 15) for more responsive bands +- **Swing trading**: Use standard 20-period or longer (50-period) for smoother signals +- **Volatility adjustment**: Use 2.5σ or 3σ bands in highly volatile markets +::: + +:::info Related Documentation +- [Window functions](/docs/reference/sql/select/#window-functions) +- [AVG window function](/docs/reference/function/window/#avg) +- [SQRT function](/docs/reference/function/numeric/#sqrt) +- [Window frame clauses](/docs/reference/sql/select/#frame-clause) +::: diff --git a/documentation/playbook/sql/calculate-compound-interest.md b/documentation/playbook/sql/finance/compound-interest.md similarity index 100% rename from documentation/playbook/sql/calculate-compound-interest.md rename to documentation/playbook/sql/finance/compound-interest.md diff --git a/documentation/playbook/sql/finance/cumulative-product.md b/documentation/playbook/sql/finance/cumulative-product.md new file mode 100644 index 000000000..aa27609a2 --- /dev/null +++ b/documentation/playbook/sql/finance/cumulative-product.md @@ -0,0 +1,123 @@ +--- +title: Cumulative Product for Random Walk +sidebar_label: Cumulative product +description: Calculate cumulative product to 
simulate stock price paths from daily returns
+---
+
+Calculate the cumulative product of daily returns to simulate a stock's price path (random walk). This is useful for financial modeling, backtesting trading strategies, and portfolio analysis where you need to compound returns over time.
+
+## Problem: Compound Daily Returns
+
+You have a table with daily returns for a stock, stored as decimal fractions (e.g. `0.02` for a 2% gain), and want to calculate the cumulative price starting from an initial value (e.g., $100). Each day's price is calculated by multiplying the previous price by `(1 + return)`.
+
+For example, with these daily returns:
+
+| Date       | Daily Return |
+|------------|--------------|
+| 2024-09-05 | 0.020        |
+| 2024-09-06 | -0.010       |
+| 2024-09-07 | 0.015        |
+| 2024-09-08 | -0.030       |
+
+You want to calculate:
+
+| Date       | Daily Return | Stock Price |
+|------------|--------------|-------------|
+| 2024-09-05 | 0.020        | 102.00      |
+| 2024-09-06 | -0.010       | 100.98      |
+| 2024-09-07 | 0.015        | 102.49      |
+| 2024-09-08 | -0.030       | 99.42       |
+
+## Solution: Use a Logarithm Trick
+
+Since QuestDB doesn't allow functions on top of window function results, we use a mathematical trick: **the exponential of the sum of logarithms equals the product**.
+
+```questdb-sql demo title="Calculate cumulative product via logarithms"
+WITH ln_values AS (
+    SELECT
+        date,
+        return,
+        SUM(ln(1 + return)) OVER (ORDER BY date) AS ln_value
+    FROM daily_returns
+)
+SELECT
+    date,
+    return,
+    100 * exp(ln_value) AS stock_price
+FROM ln_values;
+```
+
+This query:
+1. Calculates `ln(1 + return)` for each day
+2. Uses a cumulative `SUM` window function to add up the logarithms
+3. Applies `exp()` to convert back to the product
+
+## How It Works
+
+The mathematical identity used here is:
+
+```
+product(1 + r₁, 1 + r₂, ..., 1 + rₙ) = exp(sum(ln(1 + r₁), ln(1 + r₂), ..., ln(1 + rₙ)))
+```
+
+Breaking it down:
+- `ln(1 + return)` converts each multiplicative factor to an additive one
+- `SUM(...) OVER (ORDER BY date)` creates a cumulative sum
+- `exp(ln_value)` converts the cumulative sum back to a cumulative product
+- Multiply by 100 to apply the starting price of $100
+
+### Why This Works
+
+QuestDB doesn't support direct window functions like `PRODUCT() OVER()`, and attempting `exp(SUM(ln(1 + return)) OVER ())` fails with a "dangling literal" error because you can't nest functions around window functions.
+
+The workaround is to use a CTE to compute the cumulative sum first, then apply `exp()` in the outer query where it's operating on a regular column, not a window function result.
+
+## Adapting to Your Data
+
+You can easily modify this pattern:
+
+**Different starting price:**
+```sql
+SELECT date, return, 1000 * exp(ln_value) AS stock_price -- Start at $1000
+FROM ln_values;
+```
+
+**Different time granularity:**
+```sql
+-- For hourly returns
+WITH ln_values AS (
+    SELECT
+        timestamp,
+        return,
+        SUM(ln(1 + return)) OVER (ORDER BY timestamp) AS ln_value
+    FROM hourly_returns
+)
+SELECT timestamp, 100 * exp(ln_value) AS price FROM ln_values;
+```
+
+**Multiple assets:**
+```sql
+WITH ln_values AS (
+    SELECT
+        date,
+        symbol,
+        return,
+        SUM(ln(1 + return)) OVER (PARTITION BY symbol ORDER BY date) AS ln_value
+    FROM daily_returns
+)
+SELECT
+    date,
+    symbol,
+    100 * exp(ln_value) AS stock_price
+FROM ln_values;
+```
+
+:::tip Use Case: Monte Carlo Simulation
+This pattern is essential for Monte Carlo simulations in finance. Generate random returns, apply this cumulative product calculation, and run thousands of iterations to model possible future price paths.
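A minimal sketch of that idea in Python (the Gaussian return parameters and seed are arbitrary; this is not QuestDB code): each path compounds its returns through `exp` of a running sum of `ln(1 + r)`, mirroring the SQL above.

```python
import math
import random

def price_path(returns, start=100.0):
    # Compound returns via exp of the running sum of ln(1 + r)
    path, log_sum = [], 0.0
    for r in returns:
        log_sum += math.log(1.0 + r)
        path.append(start * math.exp(log_sum))
    return path

# Sanity check: the log-sum approach matches plain multiplication
assert abs(price_path([0.02, -0.01])[-1] - 100 * 1.02 * 0.99) < 1e-9

# 1,000 simulated one-year paths of 252 daily returns
random.seed(42)
finals = [price_path([random.gauss(0.0005, 0.02) for _ in range(252)])[-1]
          for _ in range(1000)]
```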
+::: + +:::info Related Documentation +- [Window functions](/docs/reference/sql/select/#window-functions) +- [Mathematical functions](/docs/reference/function/numeric/) +- [SUM aggregate](/docs/reference/function/aggregation/#sum) +::: diff --git a/documentation/playbook/sql/finance/rolling-stddev.md b/documentation/playbook/sql/finance/rolling-stddev.md new file mode 100644 index 000000000..c42df6cc7 --- /dev/null +++ b/documentation/playbook/sql/finance/rolling-stddev.md @@ -0,0 +1,321 @@ +--- +title: Rolling Standard Deviation +sidebar_label: Rolling std dev +description: Calculate rolling standard deviation for volatility analysis using window functions and variance mathematics +--- + +Calculate rolling standard deviation to measure price volatility over time. Rolling standard deviation shows how much prices deviate from their moving average, helping identify periods of high and low volatility. This is essential for risk management, option pricing, and volatility-based trading strategies. + +## Problem: Window Function Limitation + +You want to calculate standard deviation over a rolling time window, but QuestDB doesn't support `STDDEV` as a window function. However, we can work around this using the mathematical relationship between standard deviation and variance. 
+ +## Solution: Calculate Variance Using Window Functions + +Since standard deviation is the square root of variance, and variance is the average of squared differences from the mean, we can calculate it step by step using CTEs: + +```questdb-sql demo title="Calculate 20-period rolling standard deviation" +WITH rolling_avg_cte AS ( + SELECT + timestamp, + symbol, + price, + AVG(price) OVER (PARTITION BY symbol ORDER BY timestamp) AS rolling_avg + FROM trades + WHERE timestamp IN yesterday() + AND symbol = 'BTC-USDT' +), +variance_cte AS ( + SELECT + timestamp, + symbol, + price, + rolling_avg, + AVG(POWER(price - rolling_avg, 2)) + OVER (PARTITION BY symbol ORDER BY timestamp) AS rolling_variance + FROM rolling_avg_cte +) +SELECT + timestamp, + symbol, + price, + round(rolling_avg, 2) AS rolling_avg, + round(rolling_variance, 4) AS rolling_variance, + round(SQRT(rolling_variance), 2) AS rolling_stddev +FROM variance_cte; +``` + +This query: +1. Calculates the rolling average (mean) of prices +2. Computes the variance as the average of squared differences from the mean +3. Takes the square root of variance to get standard deviation + +## How It Works + +The mathematical relationship used is: + +``` +Variance(X) = E[(X - μ)²] + = Average of squared differences from mean + +StdDev(X) = √Variance(X) +``` + +Where: +- `X` = price values +- `μ` = rolling average (mean) +- `E[...]` = expected value (average) + +Breaking down the calculation: +1. **First CTE** (`rolling_avg_cte`): Calculates running average using `AVG() OVER ()` +2. **Second CTE** (`variance_cte`): For each price, calculates `(price - rolling_avg)²`, then averages these squared differences using another window function +3. 
**Final query**: Applies `SQRT()` to variance to get standard deviation + +### Window Frame Defaults + +When you don't specify a frame clause (like `ROWS BETWEEN`), QuestDB defaults to: +```sql +ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW +``` + +This calculates from the start of the partition to the current row, giving you an expanding window. For a fixed rolling window, specify the frame explicitly. + +## Fixed Rolling Window + +For a true rolling window (e.g., last 20 periods), specify the frame clause: + +```questdb-sql demo title="20-period rolling standard deviation with fixed window" +WITH rolling_avg_cte AS ( + SELECT + timestamp, + symbol, + price, + AVG(price) OVER ( + PARTITION BY symbol + ORDER BY timestamp + ROWS BETWEEN 19 PRECEDING AND CURRENT ROW + ) AS rolling_avg + FROM trades + WHERE timestamp IN yesterday() + AND symbol = 'BTC-USDT' +), +variance_cte AS ( + SELECT + timestamp, + symbol, + price, + rolling_avg, + AVG(POWER(price - rolling_avg, 2)) OVER ( + PARTITION BY symbol + ORDER BY timestamp + ROWS BETWEEN 19 PRECEDING AND CURRENT ROW + ) AS rolling_variance + FROM rolling_avg_cte +) +SELECT + timestamp, + symbol, + price, + round(rolling_avg, 2) AS rolling_avg, + round(SQRT(rolling_variance), 2) AS rolling_stddev +FROM variance_cte; +``` + +This calculates standard deviation over exactly the last 20 rows (19 preceding + current), providing a consistent window size throughout. 
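As a cross-check outside the database, the same fixed-window statistic can be computed directly. A Python sketch over made-up prices (using a 3-row window to keep the numbers small):

```python
import statistics
from collections import deque

def rolling_stddev(values, window=20):
    # Population std dev over the last `window` rows (fewer rows at the start,
    # matching ROWS BETWEEN n PRECEDING AND CURRENT ROW)
    buf, out = deque(maxlen=window), []
    for v in values:
        buf.append(v)
        out.append(statistics.pstdev(buf))
    return out

prices = [100.0, 101.0, 99.5, 100.5, 102.0]
print([round(s, 3) for s in rolling_stddev(prices, window=3)])
# → [0.0, 0.5, 0.624, 0.624, 1.027]
```

The `deque(maxlen=window)` plays the role of the frame clause: once full, each new row evicts the oldest one, so every value is computed over exactly `window` rows.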
+ +## Adapting the Query + +**Different window sizes:** +```sql +-- 10-period rolling stddev (change 19 to 9) +ROWS BETWEEN 9 PRECEDING AND CURRENT ROW + +-- 50-period rolling stddev (change 19 to 49) +ROWS BETWEEN 49 PRECEDING AND CURRENT ROW + +-- 200-period rolling stddev (change 19 to 199) +ROWS BETWEEN 199 PRECEDING AND CURRENT ROW +``` + +**Time-based windows instead of row-based:** +```questdb-sql demo title="Rolling stddev over 1-hour time window" +WITH rolling_avg_cte AS ( + SELECT + timestamp, + symbol, + price, + AVG(price) OVER ( + PARTITION BY symbol + ORDER BY timestamp + RANGE BETWEEN 1 HOUR PRECEDING AND CURRENT ROW + ) AS rolling_avg + FROM trades + WHERE timestamp IN yesterday() + AND symbol = 'BTC-USDT' +), +variance_cte AS ( + SELECT + timestamp, + symbol, + price, + rolling_avg, + AVG(POWER(price - rolling_avg, 2)) OVER ( + PARTITION BY symbol + ORDER BY timestamp + RANGE BETWEEN 1 HOUR PRECEDING AND CURRENT ROW + ) AS rolling_variance + FROM rolling_avg_cte +) +SELECT + timestamp, + symbol, + price, + round(rolling_avg, 2) AS rolling_avg, + round(SQRT(rolling_variance), 2) AS rolling_stddev +FROM variance_cte; +``` + +**With OHLC candles:** +```questdb-sql demo title="Rolling stddev of candle closes" +WITH OHLC AS ( + SELECT + timestamp, + symbol, + first(price) AS open, + last(price) AS close, + min(price) AS low, + max(price) AS high + FROM trades + WHERE symbol = 'BTC-USDT' + AND timestamp IN yesterday() + SAMPLE BY 15m +), +rolling_avg_cte AS ( + SELECT + timestamp, + symbol, + close, + AVG(close) OVER ( + PARTITION BY symbol + ORDER BY timestamp + ROWS BETWEEN 19 PRECEDING AND CURRENT ROW + ) AS rolling_avg + FROM OHLC +), +variance_cte AS ( + SELECT + timestamp, + symbol, + close, + rolling_avg, + AVG(POWER(close - rolling_avg, 2)) OVER ( + PARTITION BY symbol + ORDER BY timestamp + ROWS BETWEEN 19 PRECEDING AND CURRENT ROW + ) AS rolling_variance + FROM rolling_avg_cte +) +SELECT + timestamp, + symbol, + close, + round(rolling_avg, 2) 
AS sma_20, + round(SQRT(rolling_variance), 2) AS stddev_20 +FROM variance_cte; +``` + +**Multiple symbols:** +```questdb-sql demo title="Rolling stddev for multiple symbols" +WITH rolling_avg_cte AS ( + SELECT + timestamp, + symbol, + price, + AVG(price) OVER ( + PARTITION BY symbol + ORDER BY timestamp + ROWS BETWEEN 19 PRECEDING AND CURRENT ROW + ) AS rolling_avg + FROM trades + WHERE timestamp IN yesterday() + AND symbol IN ('BTC-USDT', 'ETH-USDT', 'SOL-USDT') +), +variance_cte AS ( + SELECT + timestamp, + symbol, + price, + rolling_avg, + AVG(POWER(price - rolling_avg, 2)) OVER ( + PARTITION BY symbol + ORDER BY timestamp + ROWS BETWEEN 19 PRECEDING AND CURRENT ROW + ) AS rolling_variance + FROM rolling_avg_cte +) +SELECT + timestamp, + symbol, + round(SQRT(rolling_variance), 2) AS rolling_stddev +FROM variance_cte +ORDER BY symbol, timestamp; +``` + +## Calculating Annualized Volatility + +For option pricing and risk management, convert rolling standard deviation to annualized volatility: + +```sql +-- Assuming daily returns, multiply by sqrt(252) for annual volatility +round(SQRT(rolling_variance) * SQRT(252), 4) AS annualized_volatility_pct + +-- For intraday data, adjust the multiplier: +-- 1-minute bars: SQRT(252 * 390) -- 390 trading minutes per day +-- 5-minute bars: SQRT(252 * 78) +-- 1-hour bars: SQRT(252 * 6.5) +``` + +## Combining with Bollinger Bands + +Rolling standard deviation is the foundation for Bollinger Bands: + +```sql +SELECT + timestamp, + symbol, + price, + rolling_avg AS middle_band, + rolling_avg + (2 * SQRT(rolling_variance)) AS upper_band, + rolling_avg - (2 * SQRT(rolling_variance)) AS lower_band +FROM variance_cte; +``` + +:::tip Volatility Analysis Applications +- **Risk management**: Higher standard deviation indicates higher risk/volatility +- **Position sizing**: Adjust position sizes based on current volatility levels +- **Option pricing**: Volatility is a key input for option valuation models +- **Volatility targeting**: 
Maintain constant portfolio risk by adjusting to current volatility +- **Regime detection**: Identify transitions between high and low volatility regimes +::: + +:::tip Interpretation +- **High stddev**: Large price swings, high uncertainty, potentially higher risk and opportunity +- **Low stddev**: Stable prices, low uncertainty, often precedes larger moves (volatility compression) +- **Expanding stddev**: Increasing volatility, trend acceleration or market stress +- **Contracting stddev**: Decreasing volatility, consolidation phase +::: + +:::warning Performance Considerations +Calculating rolling standard deviation requires multiple passes over the data (once for average, once for variance). For very large datasets, consider: +- Filtering by timestamp range first +- Using larger time intervals (SAMPLE BY) +- Calculating on aggregated OHLC data rather than tick data +::: + +:::info Related Documentation +- [Window functions](/docs/reference/sql/select/#window-functions) +- [AVG window function](/docs/reference/function/window/#avg) +- [POWER function](/docs/reference/function/numeric/#power) +- [SQRT function](/docs/reference/function/numeric/#sqrt) +- [Window frame clauses](/docs/reference/sql/select/#frame-clause) +::: diff --git a/documentation/playbook/sql/finance/tick-trin.md b/documentation/playbook/sql/finance/tick-trin.md new file mode 100644 index 000000000..f60b6703e --- /dev/null +++ b/documentation/playbook/sql/finance/tick-trin.md @@ -0,0 +1,191 @@ +--- +title: Cumulative Tick and Trin Indicators +sidebar_label: Tick & Trin +description: Calculate cumulative Tick and Trin (ARMS Index) for market sentiment analysis and breadth indicators +--- + +Calculate cumulative Tick and Trin (also known as the ARMS Index) to measure market sentiment and breadth. These indicators compare advancing versus declining trades in terms of both count and volume, helping identify overbought/oversold conditions and potential market reversals. 
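The running ratios themselves are simple to prototype. A Python sketch over a handful of hypothetical buy/sell trades (not QuestDB code):

```python
def tick_trin(trades):
    # Cumulative Tick and Trin over (side, amount) pairs, in time order
    upticks = downticks = upvol = downvol = 0.0
    results = []
    for side, amount in trades:
        if side == "buy":
            upticks += 1.0
            upvol += amount
        else:
            downticks += 1.0
            downvol += amount
        if upticks and downticks:
            tick = upticks / downticks
            results.append((tick, tick / (upvol / downvol)))
        else:
            results.append((None, None))  # undefined until both sides have traded
    return results

trades = [("sell", 100), ("buy", 50), ("sell", 150), ("buy", 100), ("buy", 200)]
print(tick_trin(trades)[-1])  # cumulative tick and trin after all five trades
```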
+ +## Problem: Calculate Running Market Breadth + +You have a table with trade data including `side` (buy/sell) and `amount`, and want to calculate cumulative Tick and Trin values throughout the trading day. Tick measures the ratio of upticks to downticks, while Trin (Trading Index) adjusts this ratio by volume to identify divergences between price action and volume. + +**Sample data:** + +| timestamp | side | amount | +|------------------------------|------|--------| +| 2023-12-01T10:00:00.000000Z | sell | 100 | +| 2023-12-01T10:01:00.000000Z | buy | 50 | +| 2023-12-01T10:02:00.000000Z | sell | 150 | +| 2023-12-01T10:03:00.000000Z | buy | 100 | +| 2023-12-01T10:04:00.000000Z | buy | 200 | + +## Solution: Use Window Functions with CASE Statements + +Use `SUM` as a window function combined with `CASE` statements to compute running totals of upticks, downticks, and their respective volumes: + +```questdb-sql demo title="Calculate cumulative Tick and Trin indicators" +WITH tick_vol AS ( + SELECT + timestamp, + side, + amount, + SUM(CASE WHEN side = 'sell' THEN 1.0 END) OVER (ORDER BY timestamp) as downtick, + SUM(CASE WHEN side = 'buy' THEN 1.0 END) OVER (ORDER BY timestamp) as uptick, + SUM(CASE WHEN side = 'sell' THEN amount END) OVER (ORDER BY timestamp) as downvol, + SUM(CASE WHEN side = 'buy' THEN amount END) OVER (ORDER BY timestamp) as upvol + FROM trades + WHERE timestamp IN yesterday() AND symbol = 'BTC-USDT' +) +SELECT + timestamp, + side, + amount, + uptick, + downtick, + upvol, + downvol, + uptick / downtick as tick, + (uptick / downtick) / (upvol / downvol) as trin +FROM tick_vol; +``` + +**Results:** + +| timestamp | side | amount | downtick | uptick | downvol | upvol | tick | trin | +|------------------------------|------|--------|----------|--------|---------|-------|------|----------------| +| 2023-12-01T10:00:00.000000Z | sell | 100.0 | 1.0 | NULL | 100.0 | NULL | NULL | NULL | +| 2023-12-01T10:01:00.000000Z | buy | 50.0 | 1.0 | 1.0 | 100.0 | 50.0 | 
1.0 | 2.0 | +| 2023-12-01T10:02:00.000000Z | sell | 150.0 | 2.0 | 1.0 | 250.0 | 50.0 | 0.5 | 2.5 | +| 2023-12-01T10:03:00.000000Z | buy | 100.0 | 2.0 | 2.0 | 250.0 | 150.0 | 1.0 | 1.666666666666 | +| 2023-12-01T10:04:00.000000Z | buy | 200.0 | 2.0 | 3.0 | 250.0 | 350.0 | 1.5 | 1.071428571428 | + +Each row shows the cumulative values from the start of the day, with Tick and Trin calculated at every trade. + +## How It Works + +The indicators are calculated using these formulas: + +``` +Tick = Upticks / Downticks + +Trin = (Upticks / Downticks) / (Upvol / Downvol) + = Tick / Volume Ratio +``` + +Where: +- **Upticks**: Cumulative count of buy transactions +- **Downticks**: Cumulative count of sell transactions +- **Upvol**: Cumulative volume of buy transactions +- **Downvol**: Cumulative volume of sell transactions + +The query uses: +1. **Window functions**: `SUM(...) OVER (ORDER BY timestamp)` creates running totals from the start of the period +2. **CASE statements**: Conditionally sum only trades matching the specified side +3. 
**Type casting**: Using `1.0` instead of `1` ensures results are doubles, avoiding explicit casting + +### Interpreting the Indicators + +**Tick Indicator:** +- **Tick > 1.0**: More buying pressure (bullish sentiment) +- **Tick < 1.0**: More selling pressure (bearish sentiment) +- **Tick = 1.0**: Neutral market (equal buying and selling) + +**Trin (ARMS Index):** +- **Trin < 1.0**: Strong market (volume flowing into advancing trades) +- **Trin > 1.0**: Weak market (volume flowing into declining trades) +- **Trin = 1.0**: Balanced market +- **Extreme readings**: Trin > 2.0 suggests oversold conditions; Trin < 0.5 suggests overbought + +**Divergences:** +When Tick and Trin move in opposite directions, it can signal important market conditions: +- High Tick + High Trin: Advances lack volume confirmation (bearish divergence) +- Low Tick + Low Trin: Declines lack volume confirmation (bullish divergence) + +## Adapting the Query + +**Multiple symbols:** +```questdb-sql demo title="Tick and Trin for multiple symbols" +WITH tick_vol AS ( + SELECT + timestamp, + symbol, + side, + amount, + SUM(CASE WHEN side = 'sell' THEN 1.0 END) + OVER (PARTITION BY symbol ORDER BY timestamp) as downtick, + SUM(CASE WHEN side = 'buy' THEN 1.0 END) + OVER (PARTITION BY symbol ORDER BY timestamp) as uptick, + SUM(CASE WHEN side = 'sell' THEN amount END) + OVER (PARTITION BY symbol ORDER BY timestamp) as downvol, + SUM(CASE WHEN side = 'buy' THEN amount END) + OVER (PARTITION BY symbol ORDER BY timestamp) as upvol + FROM trades + WHERE timestamp IN yesterday() +) +SELECT + timestamp, + symbol, + uptick / downtick as tick, + (uptick / downtick) / (upvol / downvol) as trin +FROM tick_vol; +``` + +**Intraday periods (reset at intervals):** +```questdb-sql demo title="Tick and Trin reset every hour" +WITH tick_vol AS ( + SELECT + timestamp, + side, + amount, + SUM(CASE WHEN side = 'sell' THEN 1.0 END) + OVER (PARTITION BY timestamp_floor('h', timestamp) ORDER BY timestamp) as downtick, + 
SUM(CASE WHEN side = 'buy' THEN 1.0 END) + OVER (PARTITION BY timestamp_floor('h', timestamp) ORDER BY timestamp) as uptick, + SUM(CASE WHEN side = 'sell' THEN amount END) + OVER (PARTITION BY timestamp_floor('h', timestamp) ORDER BY timestamp) as downvol, + SUM(CASE WHEN side = 'buy' THEN amount END) + OVER (PARTITION BY timestamp_floor('h', timestamp) ORDER BY timestamp) as upvol + FROM trades + WHERE timestamp IN yesterday() AND symbol = 'BTC-USDT' +) +SELECT + timestamp, + uptick / downtick as tick, + (uptick / downtick) / (upvol / downvol) as trin +FROM tick_vol; +``` + +**Daily summary values only:** +```sql +WITH tick_vol AS ( + SELECT + SUM(CASE WHEN side = 'sell' THEN 1.0 END) as downtick, + SUM(CASE WHEN side = 'buy' THEN 1.0 END) as uptick, + SUM(CASE WHEN side = 'sell' THEN amount END) as downvol, + SUM(CASE WHEN side = 'buy' THEN amount END) as upvol + FROM trades + WHERE timestamp IN yesterday() +) +SELECT + uptick / downtick as tick, + (uptick / downtick) / (upvol / downvol) as trin +FROM tick_vol; +``` + +:::tip Market Analysis Applications +- **Intraday momentum**: Track Tick throughout the day to identify accumulation/distribution patterns +- **Overbought/oversold**: Extreme Trin readings often precede short-term reversals +- **Market breadth**: Persistently high/low values indicate broad market strength or weakness +- **Divergence trading**: When price makes new highs/lows but Trin doesn't confirm, it suggests weakening momentum +::: + +:::warning Handling NULL Values +The first buy or sell transaction will produce NULL values for some calculations since there's no previous opposite-side transaction yet. You can filter these out with `WHERE uptick IS NOT NULL AND downtick IS NOT NULL` if needed. 
+::: + +:::info Related Documentation +- [Window functions](/docs/reference/sql/select/#window-functions) +- [SUM aggregate](/docs/reference/function/aggregation/#sum) +- [CASE expressions](/docs/reference/sql/case/) +::: diff --git a/documentation/playbook/sql/finance/volume-profile.md b/documentation/playbook/sql/finance/volume-profile.md new file mode 100644 index 000000000..1b69db9f8 --- /dev/null +++ b/documentation/playbook/sql/finance/volume-profile.md @@ -0,0 +1,183 @@ +--- +title: Volume Profile +sidebar_label: Volume profile +description: Calculate volume profile to identify key price levels with high trading activity for support and resistance analysis +--- + +Calculate volume profile to identify price levels where significant trading volume occurred. Volume profile shows the distribution of trading activity across different price levels, helping identify strong support/resistance zones, value areas, and potential breakout levels. + +## Problem: Distribute Volume Across Price Levels + +You want to aggregate all trades into price bins and see the total volume traded at each price level. This reveals where most trading activity occurred during a specific period, which often indicates important price levels for future trading. + +## Solution: Use FLOOR to Create Price Bins + +Group trades into price bins using `FLOOR` with a tick size parameter, then sum the volume for each bin: + +```questdb-sql demo title="Calculate volume profile with $1 tick size" +DECLARE @tick_size := 1.0 +SELECT + floor(price / @tick_size) * @tick_size AS price_bin, + round(SUM(amount), 2) AS volume +FROM trades +WHERE symbol = 'BTC-USDT' + AND timestamp IN today() +ORDER BY price_bin; +``` + +**Results:** + +| price_bin | volume | +|-----------|-----------| +| 61000.0 | 12.45 | +| 61001.0 | 8.23 | +| 61002.0 | 15.67 | +| 61003.0 | 23.89 | +| 61004.0 | 11.34 | +| ... | ... | + +Each row shows the total volume traded within that price bin during the period. 
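The binning arithmetic is easy to mirror outside SQL. A Python sketch of the same `floor`-based bucketing over a few hypothetical trades (illustrative only):

```python
import math
from collections import defaultdict

def volume_profile(trades, tick_size=1.0):
    # Sum traded amount per price bin, where bin = floor(price / tick) * tick
    bins = defaultdict(float)
    for price, amount in trades:
        bins[math.floor(price / tick_size) * tick_size] += amount
    return dict(sorted(bins.items()))

trades = [(61000.4, 5.0), (61000.9, 7.5), (61002.1, 15.0), (61003.5, 24.0)]
print(volume_profile(trades, tick_size=1.0))
# → {61000.0: 12.5, 61002.0: 15.0, 61003.0: 24.0}
```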
+ +## How It Works + +The volume profile calculation uses: + +1. **`floor(price / @tick_size) * @tick_size`**: Rounds each trade's price down to the nearest tick size, creating discrete price bins +2. **`SUM(amount)`**: Aggregates all volume that occurred within each price bin +3. **Implicit GROUP BY**: QuestDB automatically groups by all non-aggregated columns (price_bin) + +### Understanding Tick Size + +The `@tick_size` parameter controls the granularity of your price bins: +- **Small tick size** (e.g., 0.01): Very detailed profile with many bins - useful for intraday analysis +- **Large tick size** (e.g., 100): Broader view with fewer bins - useful for longer-term patterns +- **Dynamic tick size**: Adjust based on the asset's typical price range + +## Dynamic Tick Size for Consistent Bins + +For assets with different price ranges, a fixed tick size may produce too many or too few bins. This query dynamically calculates the tick size to always produce approximately 50 bins: + +```questdb-sql demo title="Volume profile with dynamic 50-bin distribution" +WITH raw_data AS ( + SELECT price, amount FROM trades + WHERE symbol = 'BTC-USDT' AND timestamp IN today() +), +tick_size AS ( + SELECT (max(price) - min(price)) / 49 as tick_size FROM raw_data +) +SELECT + floor(price / tick_size) * tick_size AS price_bin, + round(SUM(amount), 2) AS volume +FROM raw_data CROSS JOIN tick_size +ORDER BY price_bin; +``` + +This query: +1. Finds the maximum and minimum prices in the dataset +2. Divides the price range by 49 (to create 50 bins) +3. Uses `CROSS JOIN` to apply the calculated tick size to every row +4. Groups trades into evenly-distributed price bins + +The result is a volume profile with approximately 50 bars regardless of the asset's price range or volatility. 
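The dynamic variant simply derives the tick size from the observed price range first. Sketched in Python (hypothetical trades, and assuming a non-zero price range so the division is defined):

```python
import math
from collections import defaultdict

def dynamic_profile(trades, target_bins=50):
    # Derive tick size from the price range so the profile has about target_bins bins
    prices = [p for p, _ in trades]
    tick = (max(prices) - min(prices)) / (target_bins - 1)  # assumes max > min
    bins = defaultdict(float)
    for price, amount in trades:
        bins[math.floor(price / tick) * tick] += amount
    return tick, dict(bins)

tick, bins = dynamic_profile([(100.0, 1.0), (149.0, 2.0), (198.0, 3.0)], target_bins=50)
assert tick == 2.0       # (198 - 100) / 49
assert len(bins) <= 50   # never more bins than requested
```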
+ +## Adapting the Query + +**Different time periods:** +```sql +-- Specific date +WHERE timestamp IN '2024-09-05' + +-- Last hour +WHERE timestamp >= dateadd('h', -1, now()) + +-- Last week +WHERE timestamp >= dateadd('w', -1, now()) + +-- Between specific times +WHERE timestamp BETWEEN '2024-09-05T09:30:00' AND '2024-09-05T16:00:00' +``` + +**Multiple symbols:** +```questdb-sql demo title="Volume profile for multiple symbols" +DECLARE @tick_size := 1.0 +SELECT + symbol, + floor(price / @tick_size) * @tick_size AS price_bin, + round(SUM(amount), 2) AS volume +FROM trades +WHERE symbol IN ('BTC-USDT', 'ETH-USDT') + AND timestamp IN today() +ORDER BY symbol, price_bin; +``` + +**Filter by minimum volume threshold:** +```sql +-- Only show price levels with significant volume +DECLARE @tick_size := 1.0 +SELECT + floor(price / @tick_size) * @tick_size AS price_bin, + round(SUM(amount), 2) AS volume +FROM trades +WHERE symbol = 'BTC-USDT' + AND timestamp IN today() +HAVING SUM(amount) > 10 -- Only bins with volume > 10 +ORDER BY price_bin; +``` + +**Show top N price levels by volume:** +```questdb-sql demo title="Top 10 price levels by volume" +DECLARE @tick_size := 1.0 +SELECT + floor(price / @tick_size) * @tick_size AS price_bin, + round(SUM(amount), 2) AS volume +FROM trades +WHERE symbol = 'BTC-USDT' + AND timestamp IN today() +ORDER BY volume DESC +LIMIT 10; +``` + +## Interpreting Volume Profile + +**Point of Control (POC):** +The price level with the highest volume is called the Point of Control. This is typically the fairest price where most participants agreed to trade, and often acts as a strong magnet for price. 
+ +```sql +-- Find the POC (price with highest volume) +DECLARE @tick_size := 1.0 +SELECT + floor(price / @tick_size) * @tick_size AS poc_price, + round(SUM(amount), 2) AS poc_volume +FROM trades +WHERE symbol = 'BTC-USDT' + AND timestamp IN today() +ORDER BY poc_volume DESC +LIMIT 1; +``` + +**Value Area:** +The price range where approximately 70% of the volume traded. Prices outside this area are considered "low volume" zones where price tends to move quickly. + +**High Volume Nodes (HVN):** +Price levels with significantly higher volume than surrounding levels. These act as strong support or resistance. + +**Low Volume Nodes (LVN):** +Price levels with minimal volume. Price often moves quickly through these zones. + +:::tip Trading Applications +- **Support/Resistance**: High volume nodes indicate strong support or resistance levels +- **Value Area**: Price tends to return to high-volume areas (mean reversion opportunity) +- **Breakouts**: Low volume nodes above/below current price suggest potential quick moves if broken +- **Acceptance**: Sustained trading at a new price level builds volume profile and establishes new value +::: + +:::tip Visualization +Volume profile is best visualized as a horizontal histogram on a price chart, showing volume distribution across price levels. This can be created in Grafana or other charting tools by rotating the volume axis. 
+:::
+
+:::info Related Documentation
+- [FLOOR function](/docs/reference/function/numeric/#floor)
+- [SUM aggregate](/docs/reference/function/aggregation/#sum)
+- [DECLARE variables](/docs/reference/sql/declare/)
+- [GROUP BY (implicit)](/docs/reference/sql/select/#implicit-group-by)
+:::
diff --git a/documentation/playbook/sql/finance/volume-spike.md b/documentation/playbook/sql/finance/volume-spike.md new file mode 100644 index 000000000..e468f710e --- /dev/null +++ b/documentation/playbook/sql/finance/volume-spike.md @@ -0,0 +1,276 @@
+---
+title: Volume Spike Detection
+sidebar_label: Volume spikes
+description: Detect volume spikes by comparing current volume against recent historical volume using the LAG window function
+---
+
+Detect volume spikes by comparing current trading volume against recent historical patterns. Volume spikes often precede significant price moves and can signal accumulation, distribution, or the start of new trends. This pattern helps identify unusual trading activity that may warrant attention.
+
+## Problem: Flag Abnormal Volume
+
+You have aggregated candle data and want to flag candles whose volume is significantly higher than recent activity. For this example, a "spike" is defined as volume exceeding twice the previous candle's volume for the same symbol.
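The rule itself is tiny; as a plain-Python illustration of the comparison (the `flag_spikes` helper and sample volumes are invented for this sketch, and this is not QuestDB code):

```python
def flag_spikes(volumes, factor=2.0):
    # Compare each candle's volume with the previous candle's volume,
    # skipping the first candle (it has no predecessor, like LAG's NULL).
    return [
        'spike' if curr > factor * prev else 'normal'
        for prev, curr in zip(volumes, volumes[1:])
    ]

print(flag_spikes([12.3, 10.5, 9.8, 25.6, 11.2, 8.9]))
# ['normal', 'normal', 'spike', 'normal', 'normal']
```

The SQL version below expresses this same pairwise comparison with `LAG` and a `CASE` statement, so it runs inside the database over live candle data.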
+ +## Solution: Use LAG to Access Previous Volume + +Use the `LAG` window function to retrieve the previous candle's volume, then compare with a `CASE` statement: + +```questdb-sql demo title="Detect volume spikes exceeding 2x previous volume" +DECLARE + @symbol := 'BTC-USDT' +WITH candles AS ( + SELECT + timestamp, + symbol, + sum(amount) AS volume + FROM trades + WHERE timestamp >= dateadd('h', -7, now()) + AND symbol = @symbol + SAMPLE BY 30s +), +prev_volumes AS ( + SELECT + timestamp, + symbol, + volume, + LAG(volume) OVER (PARTITION BY symbol ORDER BY timestamp) AS prev_volume + FROM candles +) +SELECT + timestamp, + symbol, + volume, + prev_volume, + CASE + WHEN volume > 2 * prev_volume THEN 'spike' + ELSE 'normal' + END AS spike_flag +FROM prev_volumes +WHERE prev_volume IS NOT NULL; +``` + +**Results:** + +| timestamp | symbol | volume | prev_volume | spike_flag | +|------------------------------|-----------|--------|-------------|------------| +| 2024-01-15T10:00:30.000000Z | BTC-USDT | 10.5 | 12.3 | normal | +| 2024-01-15T10:01:00.000000Z | BTC-USDT | 9.8 | 10.5 | normal | +| 2024-01-15T10:01:30.000000Z | BTC-USDT | 25.6 | 9.8 | spike | +| 2024-01-15T10:02:00.000000Z | BTC-USDT | 11.2 | 25.6 | normal | +| 2024-01-15T10:02:30.000000Z | BTC-USDT | 8.9 | 11.2 | normal | + +The spike at 10:01:30 shows volume of 25.6, which is more than double the previous volume of 9.8. + +## How It Works + +The query uses a multi-step approach: + +1. **Aggregate to candles**: Use `SAMPLE BY` to create 30-second candles with volume totals +2. **Access previous value**: `LAG(volume) OVER (PARTITION BY symbol ORDER BY timestamp)` retrieves volume from the previous candle +3. **Compare and flag**: `CASE` statement checks if current volume exceeds the threshold (2× previous) +4. 
**Filter nulls**: The first candle has no previous value, so we filter it out with `WHERE prev_volume IS NOT NULL`
+
+### Understanding LAG
+
+`LAG(column, offset)` accesses the value from a previous row:
+- **Without offset** (or offset=1): Gets the immediately previous row
+- **With PARTITION BY**: Resets for each group (symbol in this case)
+- **Returns NULL**: For the first row in each partition (no previous value exists)
+
+## Alternative: Compare Against Moving Average
+
+Instead of comparing against the previous single candle, you can compare against a moving average to smooth out noise:
+
+```questdb-sql demo title="Detect spikes exceeding 2x the 10-period moving average"
+DECLARE
+ @symbol := 'BTC-USDT'
+WITH candles AS (
+ SELECT
+ timestamp,
+ symbol,
+ sum(amount) AS volume
+ FROM trades
+ WHERE timestamp >= dateadd('h', -7, now())
+ AND symbol = @symbol
+ SAMPLE BY 30s
+),
+moving_avg AS (
+ SELECT
+ timestamp,
+ symbol,
+ volume,
+ AVG(volume) OVER (
+ PARTITION BY symbol
+ ORDER BY timestamp
+ ROWS BETWEEN 10 PRECEDING AND 1 PRECEDING
+ ) AS avg_volume_10
+ FROM candles
+)
+SELECT
+ timestamp,
+ symbol,
+ volume,
+ round(avg_volume_10, 2) AS avg_volume_10,
+ CASE
+ WHEN volume > 2 * avg_volume_10 THEN 'spike'
+ ELSE 'normal'
+ END AS spike_flag
+FROM moving_avg
+WHERE avg_volume_10 IS NOT NULL;
+```
+
+This approach:
+- Calculates the 10-period moving average of volume (excluding the current candle)
+- Compares current volume against this average
+- Provides more robust spike detection by smoothing out single-candle anomalies
+
+## Adapting the Query
+
+**Different spike thresholds:**
+```sql
+-- 50% increase (1.5x)
+WHEN volume > 1.5 * prev_volume THEN 'spike'
+
+-- 3x increase (300%)
+WHEN volume > 3 * prev_volume THEN 'spike'
+
+-- Multiple levels
+CASE
+ WHEN volume > 3 * prev_volume THEN 'extreme_spike'
+ WHEN volume > 2 * prev_volume THEN 'spike'
+ WHEN volume > 1.5 * prev_volume THEN 'elevated'
+ ELSE 'normal'
+END AS spike_flag
+```
+
+**Different 
time intervals:** +```sql +-- 1-minute candles +SAMPLE BY 1m + +-- 5-minute candles +SAMPLE BY 5m + +-- 1-hour candles +SAMPLE BY 1h +``` + +**Multiple symbols:** +```questdb-sql demo title="Volume spikes across multiple symbols" +WITH candles AS ( + SELECT + timestamp, + symbol, + sum(amount) AS volume + FROM trades + WHERE timestamp >= dateadd('h', -7, now()) + SAMPLE BY 30s +), +prev_volumes AS ( + SELECT + timestamp, + symbol, + volume, + LAG(volume) OVER (PARTITION BY symbol ORDER BY timestamp) AS prev_volume + FROM candles +) +SELECT + timestamp, + symbol, + volume, + prev_volume, + CASE + WHEN volume > 2 * prev_volume THEN 'spike' + ELSE 'normal' + END AS spike_flag +FROM prev_volumes +WHERE prev_volume IS NOT NULL + AND volume > 2 * prev_volume -- Only show spikes +ORDER BY timestamp DESC +LIMIT 20; +``` + +**Include price change alongside volume:** +```questdb-sql demo title="Volume spikes with price movement" +DECLARE + @symbol := 'BTC-USDT' +WITH candles AS ( + SELECT + timestamp, + symbol, + first(price) AS open, + last(price) AS close, + sum(amount) AS volume + FROM trades + WHERE timestamp >= dateadd('h', -7, now()) + AND symbol = @symbol + SAMPLE BY 30s +), +with_lags AS ( + SELECT + timestamp, + symbol, + open, + close, + ((close - open) / open) * 100 AS price_change_pct, + volume, + LAG(volume) OVER (PARTITION BY symbol ORDER BY timestamp) AS prev_volume + FROM candles +) +SELECT + timestamp, + symbol, + round(price_change_pct, 2) AS price_change_pct, + volume, + prev_volume, + CASE + WHEN volume > 2 * prev_volume THEN 'spike' + ELSE 'normal' + END AS spike_flag +FROM with_lags +WHERE prev_volume IS NOT NULL; +``` + +## Combining Volume and Price Analysis + +Volume spikes are most meaningful when analyzed with price action: + +```sql +-- Volume spike with price increase (potential breakout) +CASE + WHEN volume > 2 * prev_volume AND price_change_pct > 1 THEN 'bullish_spike' + WHEN volume > 2 * prev_volume AND price_change_pct < -1 THEN 
'bearish_spike' + WHEN volume > 2 * prev_volume THEN 'neutral_spike' + ELSE 'normal' +END AS spike_type +``` + +:::tip Trading Signals +- **Breakout confirmation**: Volume spikes during breakouts confirm strength and reduce false breakout risk +- **Reversal warning**: Volume spikes at trend extremes often signal exhaustion and potential reversals +- **Distribution**: High volume with minimal price change can indicate institutional distribution +- **Accumulation**: Volume spikes on dips can signal smart money accumulation +::: + +:::tip Alert Configuration +Set up alerts for volume spikes to catch important market events: +- **Threshold**: Start with 2-3x average volume +- **Time frame**: Match to your trading style (1m for scalping, 1h for swing trading) +- **Confirmation**: Combine with price movement or technical levels for better signals +::: + +:::warning False Positives +Volume spikes can occur due to: +- Market open/close times +- News releases or economic data +- Rollover periods for futures +- Technical glitches or flash crashes + +Always confirm with price action and broader market context. +::: + +:::info Related Documentation +- [LAG window function](/docs/reference/function/window/#lag) +- [AVG window function](/docs/reference/function/window/#avg) +- [SAMPLE BY](/docs/reference/sql/select/#sample-by) +- [CASE expressions](/docs/reference/sql/case/) +::: diff --git a/documentation/playbook/sql/finance/vwap.md b/documentation/playbook/sql/finance/vwap.md new file mode 100644 index 000000000..1880f9313 --- /dev/null +++ b/documentation/playbook/sql/finance/vwap.md @@ -0,0 +1,130 @@ +--- +title: Volume Weighted Average Price (VWAP) +sidebar_label: VWAP +description: Calculate cumulative volume weighted average price using window functions for intraday trading analysis +--- + +Calculate the cumulative Volume Weighted Average Price (VWAP) for intraday trading analysis. 
VWAP is a trading benchmark that represents the average price at which an asset has traded throughout the day, weighted by volume. It's widely used by institutional traders to assess execution quality and identify trend strength. + +## Problem: Calculate Running VWAP + +You want to calculate the cumulative VWAP for a trading day, where each point shows the average price weighted by volume from market open until that moment. This helps traders determine if current prices are above or below the day's volume-weighted average. + +## Solution: Use Window Functions for Cumulative Sums + +While QuestDB doesn't have a built-in VWAP window function, we can calculate it using cumulative `SUM` window functions for both traded value and volume: + +```questdb-sql demo title="Calculate cumulative VWAP over 10-minute intervals" +WITH sampled AS ( + SELECT + timestamp, symbol, + SUM(amount) AS volume, + SUM(price * amount) AS traded_value + FROM trades + WHERE timestamp IN yesterday() + AND symbol = 'BTC-USDT' + SAMPLE BY 10m +), cumulative AS ( + SELECT timestamp, symbol, + SUM(traded_value) + OVER (ORDER BY timestamp) AS cumulative_value, + SUM(volume) + OVER (ORDER BY timestamp) AS cumulative_volume + FROM sampled +) +SELECT timestamp, symbol, cumulative_value/cumulative_volume AS vwap FROM cumulative; +``` + +This query: +1. Aggregates trades into 10-minute intervals, calculating total volume and total traded value (price × amount) for each interval +2. Uses window functions to compute running totals of both traded value and volume from the start of the day +3. Divides cumulative traded value by cumulative volume to get VWAP at each timestamp + +## How It Works + +VWAP is calculated as: + +``` +VWAP = Total Traded Value / Total Volume + = Σ(Price × Volume) / Σ(Volume) +``` + +The key insight is using `SUM(...) 
OVER (ORDER BY timestamp)` to create running totals: +- `cumulative_value`: Running sum of (price × amount) from market open +- `cumulative_volume`: Running sum of volume from market open +- Final VWAP: Dividing these cumulative values gives the volume-weighted average at each point in time + +### Window Function Behavior + +When using `SUM() OVER (ORDER BY timestamp)` without specifying a frame clause, QuestDB defaults to summing from the first row to the current row, which is exactly what we need for cumulative VWAP. + +## Adapting the Query + +**Different time intervals:** +```questdb-sql demo title="VWAP with 1-minute resolution" +WITH sampled AS ( + SELECT + timestamp, symbol, + SUM(amount) AS volume, + SUM(price * amount) AS traded_value + FROM trades + WHERE timestamp IN yesterday() + AND symbol = 'BTC-USDT' + SAMPLE BY 1m -- Changed from 10m to 1m +), cumulative AS ( + SELECT timestamp, symbol, + SUM(traded_value) OVER (ORDER BY timestamp) AS cumulative_value, + SUM(volume) OVER (ORDER BY timestamp) AS cumulative_volume + FROM sampled +) +SELECT timestamp, symbol, cumulative_value/cumulative_volume AS vwap FROM cumulative; +``` + +**Multiple symbols:** +```questdb-sql demo title="VWAP for multiple symbols" +WITH sampled AS ( + SELECT + timestamp, symbol, + SUM(amount) AS volume, + SUM(price * amount) AS traded_value + FROM trades + WHERE timestamp IN yesterday() + AND symbol IN ('BTC-USDT', 'ETH-USDT', 'SOL-USDT') + SAMPLE BY 10m +), cumulative AS ( + SELECT timestamp, symbol, + SUM(traded_value) + OVER (PARTITION BY symbol ORDER BY timestamp) AS cumulative_value, + SUM(volume) + OVER (PARTITION BY symbol ORDER BY timestamp) AS cumulative_volume + FROM sampled +) +SELECT timestamp, symbol, cumulative_value/cumulative_volume AS vwap FROM cumulative; +``` + +Note the addition of `PARTITION BY symbol` to calculate separate VWAP values for each symbol. 
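The running-total arithmetic is easy to check by hand. A small Python sketch (illustrative only; the `cumulative_vwap` helper is invented for the example) of the same cumulative sums:

```python
def cumulative_vwap(candles):
    # candles: (traded_value, volume) per interval, in time order.
    # Running VWAP = sum(price * amount so far) / sum(amount so far).
    cum_value, cum_volume, vwaps = 0.0, 0.0, []
    for traded_value, volume in candles:
        cum_value += traded_value
        cum_volume += volume
        vwaps.append(cum_value / cum_volume)
    return vwaps

# 10 units traded at 100, then 30 units traded at 104:
print(cumulative_vwap([(1000.0, 10.0), (3120.0, 30.0)]))
# [100.0, 103.0]
```

The second value, 103.0, lies closer to 104 than to 100 because three times as much volume traded at the higher price, which is exactly the weighting the SQL's cumulative sums produce.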
+ +**Different time ranges:** +```sql +-- Current trading day (today) +WHERE timestamp IN today() + +-- Specific date +WHERE timestamp IN '2024-09-05' + +-- Last hour +WHERE timestamp >= dateadd('h', -1, now()) +``` + +:::tip Trading Use Cases +- **Execution quality**: Institutional traders compare their execution prices against VWAP to assess trade quality +- **Trend identification**: Price consistently above VWAP suggests bullish momentum; below suggests bearish +- **Support/resistance**: VWAP often acts as dynamic support or resistance during the trading day +- **Mean reversion**: Traders use deviations from VWAP to identify potential reversal points +::: + +:::info Related Documentation +- [Window functions](/docs/reference/sql/select/#window-functions) +- [SUM aggregate](/docs/reference/function/aggregation/#sum) +- [SAMPLE BY](/docs/reference/sql/select/#sample-by) +::: diff --git a/documentation/playbook/sql/time-series/latest-n-per-partition.md b/documentation/playbook/sql/time-series/latest-n-per-partition.md new file mode 100644 index 000000000..5243409cd --- /dev/null +++ b/documentation/playbook/sql/time-series/latest-n-per-partition.md @@ -0,0 +1,267 @@ +--- +title: Get Latest N Records Per Partition +sidebar_label: Latest N per partition +description: Retrieve the most recent N rows for each distinct value using window functions and filtering +--- + +Retrieve the most recent N rows for each distinct partition value (e.g., latest 5 trades per symbol, last 10 readings per sensor). While `LATEST ON` returns only the single most recent row per partition, this pattern extends it to get multiple recent rows per partition. + +## Problem: Need Multiple Recent Rows Per Group + +You want to get the latest N rows for each distinct value in a column. 
For example: +- Latest 5 trades for each trading symbol +- Last 10 sensor readings per device +- Most recent 3 log entries per service + +`LATEST ON` only returns one row per partition: + +```sql +-- Gets only 1 latest row per symbol +SELECT * FROM trades +LATEST ON timestamp PARTITION BY symbol; +``` + +But you need multiple rows per symbol. + +## Solution: Use ROW_NUMBER() Window Function + +Use `row_number()` to rank rows within each partition, then filter to keep only the top N: + +```questdb-sql demo title="Get latest 5 trades for each symbol" +WITH ranked AS ( + SELECT + *, + row_number() OVER (PARTITION BY symbol ORDER BY timestamp DESC) as rn + FROM trades + WHERE timestamp >= dateadd('d', -1, now()) +) +SELECT timestamp, symbol, side, price, amount +FROM ranked +WHERE rn <= 5 +ORDER BY symbol, timestamp DESC; +``` + +This returns up to 5 most recent trades for each symbol from the last day. + +## How It Works + +The query uses a two-step approach: + +1. **Ranking step (CTE):** + - `row_number() OVER (...)`: Assigns sequential numbers to rows within each partition + - `PARTITION BY symbol`: Separate ranking for each symbol + - `ORDER BY timestamp DESC`: Newest rows get lower numbers (1, 2, 3, ...) + - Result: Each row gets a rank within its symbol group + +2. **Filtering step (outer query):** + - `WHERE rn <= 5`: Keep only rows ranked 1-5 (the 5 most recent) + - `ORDER BY symbol, timestamp DESC`: Sort final results + +### Understanding row_number() + +`row_number()` assigns a unique sequential number within each partition: + +| timestamp | symbol | price | (row number) | +|-----------|-----------|-------|--------------| +| 10:03:00 | BTC-USDT | 63000 | 1 (newest) | +| 10:02:00 | BTC-USDT | 62900 | 2 | +| 10:01:00 | BTC-USDT | 62800 | 3 | +| 10:03:30 | ETH-USDT | 3100 | 1 (newest) | +| 10:02:30 | ETH-USDT | 3095 | 2 | + +With `WHERE rn <= 3`, we keep rows 1-3 for each symbol. 
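The same rank-then-filter logic can be mimicked in a few lines of Python (an illustrative sketch with invented sample data, not how QuestDB executes the query):

```python
from collections import defaultdict

def latest_n_per_key(rows, n):
    # Emulates row_number() OVER (PARTITION BY key ORDER BY ts DESC)
    # followed by WHERE rn <= n. rows are (ts, key, value) tuples.
    by_key = defaultdict(list)
    for ts, key, value in rows:
        by_key[key].append((ts, value))
    return {
        key: sorted(items, reverse=True)[:n]  # newest first, keep top n
        for key, items in by_key.items()
    }

rows = [
    ("10:01", "BTC-USDT", 62800), ("10:02", "BTC-USDT", 62900),
    ("10:03", "BTC-USDT", 63000), ("10:02", "ETH-USDT", 3095),
    ("10:03", "ETH-USDT", 3100),
]
result = latest_n_per_key(rows, 2)
print(result["BTC-USDT"])  # [('10:03', 63000), ('10:02', 62900)]
print(result["ETH-USDT"])  # [('10:03', 3100), ('10:02', 3095)]
```

Sorting each partition newest-first and slicing the top `n` is precisely what the `row_number()` ranking plus the `WHERE rn <= n` filter do inside the database.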
+
+## Adapting the Query
+
+**Different partition columns:**
+```sql
+-- Latest 10 per sensor_id
+PARTITION BY sensor_id
+
+-- Latest 5 per combination of symbol and exchange
+PARTITION BY symbol, exchange
+
+-- Latest N per user_id
+PARTITION BY user_id
+```
+
+**Different sort orders:**
+```sql
+-- Oldest N rows per partition
+ORDER BY timestamp ASC
+
+-- Highest prices first
+ORDER BY price DESC
+
+-- Alphabetically
+ORDER BY name ASC
+```
+
+**Dynamic N value:**
+```sql
+-- Latest N trades where N is specified by user
+DECLARE @limit := 10
+
+WITH ranked AS (
+ SELECT *, row_number() OVER (PARTITION BY symbol ORDER BY timestamp DESC) as rn
+ FROM trades
+ WHERE timestamp >= dateadd('d', -1, now())
+)
+SELECT * FROM ranked WHERE rn <= @limit;
+```
+
+**Include additional filtering:**
+```questdb-sql demo title="Latest 5 buy orders per symbol"
+WITH ranked AS (
+ SELECT
+ *,
+ row_number() OVER (PARTITION BY symbol ORDER BY timestamp DESC) as rn
+ FROM trades
+ WHERE timestamp >= dateadd('d', -1, now())
+ AND side = 'buy' -- Additional filter before ranking
+)
+SELECT timestamp, symbol, side, price, amount
+FROM ranked
+WHERE rn <= 5;
+```
+
+**Show rank in results:**
+```sql
+WITH ranked AS (
+ SELECT *, row_number() OVER (PARTITION BY symbol ORDER BY timestamp DESC) as rn
+ FROM trades
+ WHERE timestamp >= dateadd('d', -1, now())
+)
+SELECT timestamp, symbol, price, rn as rank
+FROM ranked
+WHERE rn <= 5;
+```
+
+## Alternative: Use Negative LIMIT
+
+For a simpler approach when you need the latest N rows **total** (not per partition), use negative LIMIT:
+
+```questdb-sql demo title="Latest 100 trades overall (all symbols)"
+SELECT * FROM trades
+ORDER BY timestamp DESC
+LIMIT 100;
+```
+
+Or more efficiently with QuestDB's negative LIMIT feature:
+
+```questdb-sql demo title="Latest 100 trades using negative LIMIT"
+SELECT * FROM trades
+LIMIT -100;
+```
+
+**But this doesn't work per partition** - it
returns 100 total rows, not 100 per symbol. + +## Performance Optimization + +**Filter by timestamp first:** +```sql +-- Good: Reduces dataset before windowing +WITH ranked AS ( + SELECT *, row_number() OVER (PARTITION BY symbol ORDER BY timestamp DESC) as rn + FROM trades + WHERE timestamp >= dateadd('h', -24, now()) -- Filter early +) +SELECT * FROM ranked WHERE rn <= 5; + +-- Less efficient: Windows over entire table +WITH ranked AS ( + SELECT *, row_number() OVER (PARTITION BY symbol ORDER BY timestamp DESC) as rn + FROM trades -- No filter +) +SELECT * FROM ranked WHERE rn <= 5 AND timestamp >= dateadd('h', -24, now()); +``` + +**Limit partitions:** +```sql +-- Process only specific symbols +WHERE timestamp >= dateadd('d', -1, now()) + AND symbol IN ('BTC-USDT', 'ETH-USDT', 'SOL-USDT') +``` + +## Top N with Aggregates + +Combine with aggregates to get summary statistics for top N: + +```questdb-sql demo title="Average price of latest 10 trades per symbol" +WITH ranked AS ( + SELECT + timestamp, + symbol, + price, + row_number() OVER (PARTITION BY symbol ORDER BY timestamp DESC) as rn + FROM trades + WHERE timestamp >= dateadd('d', -1, now()) +) +SELECT + symbol, + count(*) as trade_count, + avg(price) as avg_price, + min(price) as min_price, + max(price) as max_price +FROM ranked +WHERE rn <= 10 +GROUP BY symbol; +``` + +## Comparison with LATEST ON + +| Feature | LATEST ON | row_number() + Filter | +|---------|-----------|----------------------| +| **Rows per partition** | Exactly 1 | Any number (N) | +| **Performance** | Very fast (optimized) | Moderate (requires ranking) | +| **Flexibility** | Limited | High (custom ordering, filtering) | +| **Use case** | Single latest value | Multiple recent values | + +**When to use LATEST ON:** +```sql +-- Get current price for each symbol (1 row per symbol) +SELECT * FROM trades LATEST ON timestamp PARTITION BY symbol; +``` + +**When to use row_number():** +```sql +-- Get latest 5 trades for each symbol (up to 5 rows 
per symbol)
+WITH ranked AS (
+ SELECT *, row_number() OVER (PARTITION BY symbol ORDER BY timestamp DESC) as rn
+ FROM trades
+)
+SELECT * FROM ranked WHERE rn <= 5;
+```
+
+:::tip Pre-filtering large tables
+For very large tables, reduce the dataset first (for example, to the most recent rows overall), then apply row_number():
+
+```sql
+WITH recent AS (
+ -- Get latest 1000 rows overall
+ SELECT * FROM trades
+ ORDER BY timestamp DESC
+ LIMIT 1000
+)
+, ranked AS (
+ SELECT *, row_number() OVER (PARTITION BY symbol ORDER BY timestamp DESC) as rn
+ FROM recent
+)
+SELECT * FROM ranked WHERE rn <= 5;
+```
+
+This approach is faster when you only need recent data across all partitions.
+:::
+
+:::warning Row Count
+The number of rows returned is `N × number_of_partitions`. If you have 100 symbols and request top 5, you'll get up to 500 rows. Some partitions may have fewer than N rows if insufficient data exists.
+:::
+
+:::info Related Documentation
+- [row_number() window function](/docs/reference/function/window/#row_number)
+- [LATEST ON](/docs/reference/sql/latest-on/)
+- [Window functions](/docs/reference/sql/select/#window-functions)
+- [LIMIT](/docs/reference/sql/select/#limit)
+:::
diff --git a/documentation/sidebars.js b/documentation/sidebars.js index 9527b6b10..b53111878 100644 --- a/documentation/sidebars.js +++ b/documentation/sidebars.js @@ -667,8 +667,63 @@ module.exports = { items: [ "playbook/sql/force-designated-timestamp", "playbook/sql/pivoting", - "playbook/sql/calculate-compound-interest", "playbook/sql/rows-before-after-value-match", + { + type: "category", + label: "Finance", + collapsed: true, + items: [ + "playbook/sql/finance/compound-interest", + "playbook/sql/finance/cumulative-product", + "playbook/sql/finance/vwap", + "playbook/sql/finance/bollinger-bands", + "playbook/sql/finance/tick-trin", + "playbook/sql/finance/volume-profile", + "playbook/sql/finance/volume-spike", + "playbook/sql/finance/rolling-stddev", + ], + }, + { + type: "category", + label:
"Time-Series Patterns", + collapsed: true, + items: [ + "playbook/sql/time-series/latest-n-per-partition", + ], + }, + { + type: "category", + label: "Advanced SQL", + collapsed: true, + items: [ + "playbook/sql/advanced/top-n-plus-others", + ], + }, + ], + }, + { + type: "category", + label: "Integrations", + collapsed: true, + items: [ + { + type: "category", + label: "Grafana", + collapsed: true, + items: [ + "playbook/integrations/grafana/dynamic-table-queries", + "playbook/integrations/grafana/read-only-user", + "playbook/integrations/grafana/variable-dropdown", + ], + }, + { + type: "category", + label: "Telegraf", + collapsed: true, + items: [ + "playbook/integrations/telegraf/opcua-dense-format", + ], + }, ], }, { @@ -683,6 +738,27 @@ module.exports = { "playbook/programmatic/php/inserting-ilp", ], }, + { + type: "category", + label: "Ruby", + items: [ + "playbook/programmatic/ruby/inserting-ilp", + ], + }, + { + type: "category", + label: "Rust", + items: [ + "playbook/programmatic/rust/tls-configuration", + ], + }, + { + type: "category", + label: "C++", + items: [ + "playbook/programmatic/cpp/missing-columns", + ], + }, ], }, { @@ -691,6 +767,7 @@ module.exports = { collapsed: true, items: [ "playbook/operations/docker-compose-config", + "playbook/operations/monitor-with-telegraf", ], }, ], From cd5ad32579596f1e88d543ad98bd37f68aa19f80 Mon Sep 17 00:00:00 2001 From: javier Date: Thu, 18 Dec 2025 19:42:06 +0100 Subject: [PATCH 09/21] fixing broken links --- documentation/playbook/demo-data-schema.md | 4 ++-- documentation/playbook/integrations/grafana/read-only-user.md | 2 +- documentation/playbook/operations/docker-compose-config.md | 2 +- documentation/playbook/operations/monitor-with-telegraf.md | 4 ++-- 4 files changed, 6 insertions(+), 6 deletions(-) diff --git a/documentation/playbook/demo-data-schema.md b/documentation/playbook/demo-data-schema.md index 87828440d..fa5e90639 100644 --- a/documentation/playbook/demo-data-schema.md +++ 
b/documentation/playbook/demo-data-schema.md @@ -171,8 +171,8 @@ The demo instance is read-only. For testing write operations (INSERT, UPDATE, DE ::: :::info Related Documentation -- [SYMBOL type](/docs/reference/sql/data-types/#symbol) -- [Arrays in QuestDB](/docs/reference/sql/data-types/#arrays) +- [SYMBOL type](/docs/concept/symbol/) +- [Arrays in QuestDB](/docs/concept/array/) - [Designated timestamp](/docs/concept/designated-timestamp/) - [Time-series aggregations](/docs/reference/function/aggregation/) ::: diff --git a/documentation/playbook/integrations/grafana/read-only-user.md b/documentation/playbook/integrations/grafana/read-only-user.md index d27e4569e..aec5d5f17 100644 --- a/documentation/playbook/integrations/grafana/read-only-user.md +++ b/documentation/playbook/integrations/grafana/read-only-user.md @@ -198,6 +198,6 @@ The web console uses different authentication than the PostgreSQL wire protocol. :::info Related Documentation - [PostgreSQL wire protocol](/docs/reference/api/postgres/) - [QuestDB Enterprise RBAC](/docs/operations/rbac/) -- [Configuration reference](/docs/reference/configuration/) +- [Configuration reference](/docs/configuration/) - [Grafana QuestDB data source](https://grafana.com/grafana/plugins/questdb-questdb-datasource/) ::: diff --git a/documentation/playbook/operations/docker-compose-config.md b/documentation/playbook/operations/docker-compose-config.md index 02546e39e..fa2691952 100644 --- a/documentation/playbook/operations/docker-compose-config.md +++ b/documentation/playbook/operations/docker-compose-config.md @@ -131,7 +131,7 @@ If you encounter permission errors with mounted volumes, ensure the QuestDB cont ::: :::info Related Documentation -- [Server Configuration Reference](/docs/reference/configuration/) +- [Server Configuration Reference](/docs/configuration/) - [Docker Deployment Guide](/docs/deployment/docker/) - [PostgreSQL Wire Protocol](/docs/reference/api/postgres/) ::: diff --git 
a/documentation/playbook/operations/monitor-with-telegraf.md b/documentation/playbook/operations/monitor-with-telegraf.md index e5427afe7..dce97ac3f 100644 --- a/documentation/playbook/operations/monitor-with-telegraf.md +++ b/documentation/playbook/operations/monitor-with-telegraf.md @@ -4,7 +4,7 @@ sidebar_label: Monitor with Telegraf description: Scrape QuestDB Prometheus metrics using Telegraf and store them in QuestDB for monitoring dashboards --- -Monitor QuestDB by scraping its Prometheus metrics using Telegraf and storing them back in a QuestDB table. This creates a self-monitoring setup where QuestDB stores its own operational metrics, allowing you to track performance, resource usage, and health over time using familiar SQL queries and Grafana dashboards. +Store QuestDB's operational metrics in QuestDB itself by scraping Prometheus metrics using Telegraf. This enables you to track performance, resource usage, and health over time using familiar SQL queries and Grafana dashboards, without needing a separate metrics database. ## Problem: Monitor QuestDB Without Prometheus @@ -375,7 +375,7 @@ Be cautious about monitoring QuestDB with itself - if QuestDB fails, you lose mo ::: :::info Related Documentation -- [QuestDB metrics reference](/docs/operations/health-monitoring/) +- [QuestDB metrics reference](/docs/operations/logging-metrics/#metrics) - [Telegraf prometheus input](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/prometheus) - [Telegraf merge aggregator](https://github.com/influxdata/telegraf/tree/master/plugins/aggregators/merge) - [ILP reference](/docs/reference/api/ilp/overview/) From c52ac490b368ba5937b8dd5b83aae2ac5f758409 Mon Sep 17 00:00:00 2001 From: javier Date: Thu, 18 Dec 2025 20:23:43 +0100 Subject: [PATCH 10/21] draft pages. 
Might contain broken links and queries --- .../integrations/grafana/overlay-timeshift.md | 460 +++++++++++++++ .../{telegraf => }/opcua-dense-format.md | 0 .../operations/copy-data-between-instances.md | 531 ++++++++++++++++++ .../operations/csv-import-milliseconds.md | 402 +++++++++++++ .../operations/query-times-histogram.md | 411 ++++++++++++++ .../playbook/operations/tls-pgbouncer.md | 489 ++++++++++++++++ .../sql/advanced/array-from-string.md | 361 ++++++++++++ .../sql/advanced/conditional-aggregates.md | 353 ++++++++++++ .../advanced/consistent-histogram-buckets.md | 424 ++++++++++++++ .../general-and-sampled-aggregates.md | 448 +++++++++++++++ .../{pivoting.md => advanced/pivot-table.md} | 0 .../playbook/sql/advanced/sankey-funnel.md | 427 ++++++++++++++ .../playbook/sql/advanced/unpivot-table.md | 326 +++++++++++ .../sql/time-series/epoch-timestamps.md | 282 ++++++++++ .../sql/time-series/expand-power-over-time.md | 305 ++++++++++ .../sql/time-series/fill-missing-intervals.md | 404 +++++++++++++ .../sql/time-series/filter-by-week.md | 245 ++++++++ .../sql/time-series/latest-activity-window.md | 273 +++++++++ .../sql/time-series/remove-outliers.md | 466 +++++++++++++++ .../time-series/sample-by-interval-bounds.md | 289 ++++++++++ .../sql/time-series/session-windows.md | 334 +++++++++++ .../sql/time-series/sparse-sensor-data.md | 387 +++++++++++++ documentation/sidebars.js | 31 +- 23 files changed, 7639 insertions(+), 9 deletions(-) create mode 100644 documentation/playbook/integrations/grafana/overlay-timeshift.md rename documentation/playbook/integrations/{telegraf => }/opcua-dense-format.md (100%) create mode 100644 documentation/playbook/operations/copy-data-between-instances.md create mode 100644 documentation/playbook/operations/csv-import-milliseconds.md create mode 100644 documentation/playbook/operations/query-times-histogram.md create mode 100644 documentation/playbook/operations/tls-pgbouncer.md create mode 100644 
documentation/playbook/sql/advanced/array-from-string.md create mode 100644 documentation/playbook/sql/advanced/conditional-aggregates.md create mode 100644 documentation/playbook/sql/advanced/consistent-histogram-buckets.md create mode 100644 documentation/playbook/sql/advanced/general-and-sampled-aggregates.md rename documentation/playbook/sql/{pivoting.md => advanced/pivot-table.md} (100%) create mode 100644 documentation/playbook/sql/advanced/sankey-funnel.md create mode 100644 documentation/playbook/sql/advanced/unpivot-table.md create mode 100644 documentation/playbook/sql/time-series/epoch-timestamps.md create mode 100644 documentation/playbook/sql/time-series/expand-power-over-time.md create mode 100644 documentation/playbook/sql/time-series/fill-missing-intervals.md create mode 100644 documentation/playbook/sql/time-series/filter-by-week.md create mode 100644 documentation/playbook/sql/time-series/latest-activity-window.md create mode 100644 documentation/playbook/sql/time-series/remove-outliers.md create mode 100644 documentation/playbook/sql/time-series/sample-by-interval-bounds.md create mode 100644 documentation/playbook/sql/time-series/session-windows.md create mode 100644 documentation/playbook/sql/time-series/sparse-sensor-data.md diff --git a/documentation/playbook/integrations/grafana/overlay-timeshift.md b/documentation/playbook/integrations/grafana/overlay-timeshift.md new file mode 100644 index 000000000..f0180050f --- /dev/null +++ b/documentation/playbook/integrations/grafana/overlay-timeshift.md @@ -0,0 +1,460 @@ +--- +title: Overlay Yesterday on Today in Grafana +sidebar_label: Overlay with timeshift +description: Compare today's metrics with yesterday's using time-shifted queries to overlay historical data on current charts +--- + +Overlay yesterday's data on today's chart in Grafana to visually compare current performance against previous periods. 
This pattern is useful for identifying anomalies, tracking daily patterns, and comparing weekday vs weekend behavior. + +## Problem: Compare Current vs Previous Period + +You want to see if today's traffic pattern is normal by comparing it to yesterday: + +**Without overlay:** +- View today's data +- Mentally remember yesterday's pattern +- Switch to yesterday's timeframe +- Try to compare (difficult!) + +**With overlay:** +- See both periods on same chart +- Visual comparison is immediate +- Easily spot deviations + +## Solution: Time-Shifted Queries + +Use UNION ALL to combine current and shifted historical data: + +```sql +-- Today's data +SELECT + timestamp as time, + 'Today' as metric, + count(*) as value +FROM trades +WHERE timestamp >= date_trunc('day', now()) +SAMPLE BY 5m + +UNION ALL + +-- Yesterday's data, shifted forward by 24 hours +SELECT + dateadd('d', 1, timestamp) as time, -- Shift forward 24 hours + 'Yesterday' as metric, + count(*) as value +FROM trades +WHERE timestamp >= date_trunc('day', dateadd('d', -1, now())) + AND timestamp < date_trunc('day', now()) +SAMPLE BY 5m + +ORDER BY time; +``` + +**Grafana will plot both series:** +- "Today" line shows current data at actual times +- "Yesterday" line shows previous day's data shifted to align with today's timeline + +## How It Works + +### Time Alignment + +```sql +dateadd('d', 1, timestamp) as time +``` + +Takes yesterday's timestamps and adds 24 hours: +- Yesterday 10:00 → Today 10:00 +- Yesterday 14:30 → Today 14:30 + +This aligns both datasets on the same X-axis (time). + +### Separate Series + +```sql +'Today' as metric +'Yesterday' as metric +``` + +Creates two distinct series in Grafana. 
Configure Grafana so that:
+- each series gets a different color
+- the legend shows "Today" vs "Yesterday"
+
+## Full Grafana Query
+
+```questdb-sql title="Today vs Yesterday trade volume"
+SELECT
+    timestamp as time,
+    'Today' as metric,
+    sum(amount) as value
+FROM trades
+WHERE $__timeFilter(timestamp)
+    AND timestamp >= date_trunc('day', now())
+SAMPLE BY $__interval
+
+UNION ALL
+
+SELECT
+    dateadd('d', 1, timestamp) as time,
+    'Yesterday' as metric,
+    sum(amount) as value
+FROM trades
+WHERE timestamp >= date_trunc('day', dateadd('d', -1, now()))
+    AND timestamp < date_trunc('day', now())
+SAMPLE BY $__interval
+
+ORDER BY time;
+```
+
+**Grafana variables:**
+- `$__timeFilter(timestamp)`: Respects the dashboard time range
+- `$__interval`: Auto-adjusts the sample interval based on zoom level
+
+## Week-Over-Week Comparison
+
+Compare against the same weekday last week:
+
+```sql
+SELECT
+    timestamp as time,
+    'This Week' as metric,
+    avg(price) as value
+FROM trades
+WHERE symbol = 'BTC-USDT'
+    AND timestamp >= date_trunc('day', now())
+SAMPLE BY 1h
+
+UNION ALL
+
+SELECT
+    dateadd('d', 7, timestamp) as time,
+    'Last Week' as metric,
+    avg(price) as value
+FROM trades
+WHERE symbol = 'BTC-USDT'
+    AND timestamp >= date_trunc('day', dateadd('d', -7, now()))
+    AND timestamp < date_trunc('day', dateadd('d', -6, now()))
+SAMPLE BY 1h
+
+ORDER BY time;
+```
+
+This compares Monday to Monday, Tuesday to Tuesday, and so on.
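Why the shift lines up can be checked outside the database; a small Python sketch with illustrative dates:

```python
from datetime import datetime, timedelta

# Timestamps from hypothetical "last week" rows (dates illustrative)
last_week = [datetime(2025, 1, 8, 10, 30), datetime(2025, 1, 8, 14, 45)]

# dateadd('d', 7, timestamp) performs exactly this shift
shifted = [ts + timedelta(days=7) for ts in last_week]

for before, after in zip(last_week, shifted):
    # Weekday and time of day survive the shift, so both series
    # occupy the same X-axis positions in Grafana
    assert before.weekday() == after.weekday()
    assert before.time() == after.time()

print(shifted[0].isoformat())  # 2025-01-15T10:30:00
```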
+ +## Multiple Historical Periods + +Overlay several previous days: + +```sql +-- Today +SELECT timestamp as time, 'Today' as metric, count(*) as value +FROM trades +WHERE timestamp >= date_trunc('day', now()) +SAMPLE BY 10m + +UNION ALL + +-- Yesterday +SELECT dateadd('d', 1, timestamp) as time, 'Yesterday' as metric, count(*) as value +FROM trades +WHERE timestamp >= date_trunc('day', dateadd('d', -1, now())) + AND timestamp < date_trunc('day', now()) +SAMPLE BY 10m + +UNION ALL + +-- 2 days ago +SELECT dateadd('d', 2, timestamp) as time, '2 Days Ago' as metric, count(*) as value +FROM trades +WHERE timestamp >= date_trunc('day', dateadd('d', -2, now())) + AND timestamp < date_trunc('day', dateadd('d', -1, now())) +SAMPLE BY 10m + +UNION ALL + +-- 3 days ago +SELECT dateadd('d', 3, timestamp) as time, '3 Days Ago' as metric, count(*) as value +FROM trades +WHERE timestamp >= date_trunc('day', dateadd('d', -3, now())) + AND timestamp < date_trunc('day', dateadd('d', -2, now())) +SAMPLE BY 10m + +ORDER BY time; +``` + +Shows trend over multiple days aligned to current day. + +## Hour-by-Hour Overlay + +Compare specific hours (e.g., current hour vs same hour yesterday): + +```sql +SELECT + timestamp as time, + 'Current Hour' as metric, + count(*) as value +FROM trades +WHERE timestamp >= date_trunc('hour', now()) +SAMPLE BY 1m + +UNION ALL + +SELECT + dateadd('d', 1, timestamp) as time, + 'Same Hour Yesterday' as metric, + count(*) as value +FROM trades +WHERE timestamp >= date_trunc('hour', dateadd('d', -1, now())) + AND timestamp < date_trunc('hour', dateadd('d', -1, now())) + 3600000000 -- +1 hour in micros +SAMPLE BY 1m + +ORDER BY time; +``` + +Compares minute-by-minute within same hour across days. 
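The `3600000000` literal above is one hour expressed in microseconds, QuestDB's native timestamp resolution. Spelling the factors out, for example in a helper script, avoids off-by-1000 mistakes:

```python
# QuestDB TIMESTAMP values are epoch microseconds, so raw interval
# literals in SQL are microsecond counts
MICROS_PER_SECOND = 1_000_000
MICROS_PER_HOUR = 3_600 * MICROS_PER_SECOND
MICROS_PER_DAY = 24 * MICROS_PER_HOUR

# The "+1 hour in micros" literal used in the query above:
assert MICROS_PER_HOUR == 3_600_000_000

def shift_days(epoch_micros: int, days: int) -> int:
    """Equivalent of dateadd('d', days, ts) on raw microsecond values."""
    return epoch_micros + days * MICROS_PER_DAY

print(shift_days(0, 1))  # 86400000000
```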
+ +## Weekday vs Weekend Pattern + +Overlay weekday average against weekend average: + +```sql +WITH weekday_avg AS ( + SELECT + extract(hour from timestamp) * 3600000000 + + extract(minute from timestamp) * 60000000 as time_of_day_micros, + avg(value) as avg_value + FROM metrics + WHERE timestamp >= dateadd('d', -30, now()) + AND day_of_week(timestamp) BETWEEN 1 AND 5 -- Monday to Friday + GROUP BY time_of_day_micros +), +weekend_avg AS ( + SELECT + extract(hour from timestamp) * 3600000000 + + extract(minute from timestamp) * 60000000 as time_of_day_micros, + avg(value) as avg_value + FROM metrics + WHERE timestamp >= dateadd('d', -30, now()) + AND day_of_week(timestamp) IN (6, 7) -- Saturday, Sunday + GROUP BY time_of_day_micros +) +SELECT + cast(date_trunc('day', now()) + time_of_day_micros as timestamp) as time, + 'Weekday Average' as metric, + avg_value as value +FROM weekday_avg + +UNION ALL + +SELECT + cast(date_trunc('day', now()) + time_of_day_micros as timestamp) as time, + 'Weekend Average' as metric, + avg_value as value +FROM weekend_avg + +ORDER BY time; +``` + +Shows typical weekday pattern vs typical weekend pattern. + +## Grafana Panel Configuration + +**Query settings:** +- Format: Time series +- Min interval: Match your SAMPLE BY interval + +**Display settings:** +- Visualization: Time series (line graph) +- Legend: Show (displays "Today", "Yesterday", etc.) +- Line styles: Different colors or dash styles per series +- Tooltip: All series (shows both values on hover) + +**Advanced:** +- Series overrides: + - "Today": Solid line, blue, bold + - "Yesterday": Dashed line, gray, thin + - Opacity: 80% for historical, 100% for current + +## Use Cases + +**Traffic anomaly detection:** +```sql +-- Is today's traffic normal? 
+-- Overlay last 7 days
+```
+
+**Performance regression:**
+```sql
+-- API latency today vs yesterday
+SELECT
+    timestamp as time,
+    'Today P95' as metric,
+    approx_percentile(latency_ms, 0.95) as value
+FROM api_requests
+WHERE timestamp >= date_trunc('day', now())
+SAMPLE BY 5m
+
+UNION ALL
+
+SELECT
+    dateadd('d', 1, timestamp) as time,
+    'Yesterday P95' as metric,
+    approx_percentile(latency_ms, 0.95) as value
+FROM api_requests
+WHERE timestamp >= date_trunc('day', dateadd('d', -1, now()))
+    AND timestamp < date_trunc('day', now())
+SAMPLE BY 5m
+ORDER BY time;
+```
+
+**Sales comparison:**
+```sql
+-- Today's sales vs same day last week
+-- (Mondays often differ from Tuesdays)
+```
+
+**Seasonal patterns:**
+```sql
+-- Compare today to the same date last month or last year
+SELECT dateadd('M', 1, timestamp) as time  -- Month shift
+SELECT dateadd('y', 1, timestamp) as time  -- Year shift
+```
+
+## Performance Optimization
+
+**Filter early:**
+```sql
+WHERE timestamp >= date_trunc('day', dateadd('d', -1, now()))
+    AND timestamp < date_trunc('day', dateadd('d', 2, now()))
+```
+
+Only query the relevant dates (yesterday + today + a small buffer).
+
+**Use SAMPLE BY:**
+```sql
+SAMPLE BY $__interval
+```
+
+Let Grafana determine the appropriate resolution based on zoom level.
+
+**Partition pruning:**
+```sql
+-- If the table is partitioned by day, each branch of the UNION
+-- touches a single partition:
+-- branch 1: WHERE timestamp IN today()
+-- branch 2: WHERE timestamp >= dateadd('d', -1, now()) AND timestamp < date_trunc('day', now())
+```
+
+## Alternative: Grafana Built-in Timeshift
+
+**Note:** Some Grafana panels support native timeshift transformations.
+
+**Using a transformation:**
+1. Query normal time-series data
+2. Add transformation: "Add field from calculation"
+3. Mode: "Reduce row"
+4. Calculation: "Difference"
+5. Apply a timeshift transformation (if available in your Grafana version)
+
+However, the SQL-based approach gives more control and works reliably across Grafana versions.
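If you overlay many periods, the UNION ALL from the "Multiple Historical Periods" section grows repetitive; a small, hypothetical Python helper can generate it (table, column, and label names follow the examples on this page):

```python
# Sketch: generate the multi-period overlay query for any depth
def overlay_query(days_back: int, sample: str = "10m") -> str:
    parts = []
    for d in range(days_back + 1):
        label = "Today" if d == 0 else f"{d} Days Ago"
        parts.append(
            f"SELECT dateadd('d', {d}, timestamp) as time, '{label}' as metric, count(*) as value\n"
            f"FROM trades\n"
            f"WHERE timestamp >= date_trunc('day', dateadd('d', {-d}, now()))\n"
            f"  AND timestamp < date_trunc('day', dateadd('d', {1 - d}, now()))\n"
            f"SAMPLE BY {sample}"
        )
    return "\n\nUNION ALL\n\n".join(parts) + "\n\nORDER BY time;"

sql = overlay_query(2)
assert sql.count("UNION ALL") == 2
assert "'2 Days Ago'" in sql
```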
+ +## Dynamic Period Selection + +Use Grafana variables for flexible comparison: + +**Variable `compare_period`:** +- `1d` = Yesterday +- `7d` = Last week +- `30d` = Last month + +**Query:** +```sql +SELECT timestamp as time, 'Current' as metric, value FROM metrics +WHERE $__timeFilter(timestamp) + +UNION ALL + +SELECT + dateadd('d', $compare_period, timestamp) as time, + 'Previous' as metric, + value +FROM metrics +WHERE timestamp >= dateadd('d', -$compare_period, now()) + AND timestamp < now() +ORDER BY time; +``` + +User can switch comparison period via dropdown. + +## Handling Incomplete Current Day + +Only show overlay up to current time of day: + +```sql +SELECT + timestamp as time, + 'Today' as metric, + count(*) as value +FROM trades +WHERE timestamp >= date_trunc('day', now()) + AND timestamp <= now() -- Don't show future of today +SAMPLE BY 10m + +UNION ALL + +SELECT + dateadd('d', 1, timestamp) as time, + 'Yesterday' as metric, + count(*) as value +FROM trades +WHERE timestamp >= date_trunc('day', dateadd('d', -1, now())) + AND timestamp <= dateadd('d', -1, now()) -- Yesterday at same time +SAMPLE BY 10m +ORDER BY time; +``` + +This prevents showing empty future time slots for today. + +## Combining with Alerts + +Trigger alert when today's value deviates significantly from yesterday: + +```sql +WITH today_value AS ( + SELECT avg(latency_ms) as latency FROM api_requests + WHERE timestamp >= dateadd('m', -5, now()) +), +yesterday_same_time AS ( + SELECT avg(latency_ms) as latency FROM api_requests + WHERE timestamp >= dateadd('d', -1, dateadd('m', -5, now())) + AND timestamp < dateadd('d', -1, now()) +) +SELECT + (today_value.latency - yesterday_same_time.latency) / yesterday_same_time.latency * 100 as pct_change +FROM today_value, yesterday_same_time; +``` + +Alert if `pct_change > 50` (today is 50% slower than yesterday). + +:::tip Best Practices +1. **Match sample intervals**: Use same SAMPLE BY for all series +2. 
**Label clearly**: Use descriptive metric names ("Today", "Yesterday (Shifted)")
+3. **Limit historical depth**: Too many overlays clutter the chart (2-3 periods max)
+4. **Adjust colors**: Make the current period prominent and historical periods muted
+5. **Consider patterns**: Week-over-week is often more meaningful than day-over-day
+:::
+
+:::warning Time Zone Considerations
+Ensure both queries use the same timezone:
+- Use UTC for consistency
+- Or convert explicitly before truncating, e.g. `date_trunc('day', to_timezone(now(), 'America/New_York'))`
+
+Mismatched timezones will misalign the overlay.
+:::
+
+:::info Related Documentation
+- [dateadd() function](/docs/reference/function/date-time/#dateadd)
+- [date_trunc() function](/docs/reference/function/date-time/#date_trunc)
+- [UNION ALL](/docs/reference/sql/union/)
+- [SAMPLE BY](/docs/reference/sql/select/#sample-by)
+- [Grafana time series](/docs/third-party-tools/grafana/)
+:::
diff --git a/documentation/playbook/integrations/telegraf/opcua-dense-format.md b/documentation/playbook/integrations/opcua-dense-format.md
similarity index 100%
rename from documentation/playbook/integrations/telegraf/opcua-dense-format.md
rename to documentation/playbook/integrations/opcua-dense-format.md
diff --git a/documentation/playbook/operations/copy-data-between-instances.md b/documentation/playbook/operations/copy-data-between-instances.md
new file mode 100644
index 000000000..316f861ff
--- /dev/null
+++ b/documentation/playbook/operations/copy-data-between-instances.md
@@ -0,0 +1,531 @@
+---
+title: Copy Data Between QuestDB Instances
+sidebar_label: Copy data between instances
+description: Copy tables and data between QuestDB instances using backup/restore, SQL export/import, and programmatic methods
+---
+
+Transfer data between QuestDB instances for migrations, backups, development environments, or multi-region deployments. This guide covers multiple methods with different trade-offs for speed, consistency, and ease of use.
+ +## Problem: Move Data Between Instances + +Common scenarios: +- **Migration**: Move from development to production +- **Backup/restore**: Copy data to backup instance +- **Testing**: Clone production data to staging +- **Multi-region**: Replicate data across regions +- **Disaster recovery**: Restore from backup + +## Method 1: Filesystem Copy (Fastest) + +Copy table directories directly between instances. + +### Prerequisites + +- Both instances must be **stopped** +- Same QuestDB version (or compatible) +- Same OS architecture recommended + +### Steps + +**On source instance:** +```bash +# Stop QuestDB +docker stop questdb-source +# or +systemctl stop questdb + +# Navigate to QuestDB data directory +cd /var/lib/questdb/db + +# List tables +ls -lh +``` + +**Copy table directory:** +```bash +# Copy to remote server +scp -r /var/lib/questdb/db/trades user@target-server:/var/lib/questdb/db/ + +# Or copy to local destination +cp -r /var/lib/questdb/db/trades /backup/questdb/db/ + +# Or use rsync for large tables +rsync -avz --progress /var/lib/questdb/db/trades/ user@target-server:/var/lib/questdb/db/trades/ +``` + +**On target instance:** +```bash +# Ensure correct ownership +chown -R questdb:questdb /var/lib/questdb/db/trades + +# Start QuestDB +docker start questdb-target +# or +systemctl start questdb +``` + +**Verify:** +```sql +SELECT count(*) FROM trades; +SELECT min(timestamp), max(timestamp) FROM trades; +``` + +### Pros and Cons + +**Pros:** +- Fastest method (no serialization/deserialization) +- Preserves all metadata (symbols, indexes, partitions) +- Exact binary copy + +**Cons:** +- Requires downtime (both instances must be stopped) +- Must copy entire table (no filtering) +- Version compatibility required +- No incremental updates + +## Method 2: Backup and Restore + +Use QuestDB's native backup/restore functionality. + +### Create Backup + +**SQL command:** +```sql +BACKUP TABLE trades; +``` + +This creates a backup in `/backup/trades//`. 
+
+**Backup all tables:**
+```sql
+BACKUP DATABASE;
+```
+
+### Copy Backup Files
+
+```bash
+# On source server
+cd /var/lib/questdb/backup/trades/2025-01-15T10-30-00/
+tar -czf trades_backup.tar.gz *
+
+# Transfer to target server
+scp trades_backup.tar.gz user@target-server:/tmp/
+
+# On target server
+mkdir -p /var/lib/questdb/backup/trades/2025-01-15T10-30-00/
+cd /var/lib/questdb/backup/trades/2025-01-15T10-30-00/
+tar -xzf /tmp/trades_backup.tar.gz
+```
+
+### Restore on Target
+
+QuestDB has no `RESTORE` SQL statement; a backup is restored by placing the
+backed-up table directory into the target instance's data directory:
+
+```bash
+# Stop the target instance first; drop or move any existing copy
+# of the table, then copy the backup into the db root
+cp -r /var/lib/questdb/backup/trades/2025-01-15T10-30-00/trades /var/lib/questdb/db/
+chown -R questdb:questdb /var/lib/questdb/db/trades
+# Start the target instance again
+```
+
+### Pros and Cons
+
+**Pros:**
+- Clean, supported method
+- Can back up while the instance is running
+- Verifiable backup integrity
+
+**Cons:**
+- Requires disk space for the backup
+- Two-step process (backup, then restore)
+- No incremental backups
+
+## Method 3: SQL Export and Import
+
+Export as CSV, then import on the target.
+
+### Export as CSV
+
+**From the source, via the `/exp` REST endpoint:**
+```bash
+curl -G "http://source-host:9000/exp" \
+  --data-urlencode "query=SELECT * FROM trades WHERE timestamp >= '2025-01-01'" \
+  > trades.csv
+```
+
+Or via psql (QuestDB's PGWire endpoint does not support the `COPY ... TO STDOUT`
+protocol, but `psql --csv` works with a plain query):
+```bash
+psql -h source-host -p 8812 -U admin -d questdb --csv -c \
+  "SELECT * FROM trades WHERE timestamp >= '2025-01-01'" \
+  > trades.csv
+```
+
+### Import to Target
+
+**Via Web Console:**
+1. Navigate to http://target-host:9000
+2. Click "Import"
+3. Upload trades.csv
+4. Configure schema and designated timestamp
+5.
Click "Import" + +**Via REST API:** +```bash +curl -F data=@trades.csv \ + -F name=trades \ + -F timestamp=timestamp \ + -F partitionBy=DAY \ + http://target-host:9000/imp +``` + +**Via COPY (QuestDB 8.0+):** +```sql +COPY trades FROM '/tmp/trades.csv' +WITH HEADER true +TIMESTAMP timestamp +PARTITION BY DAY; +``` + +### Pros and Cons + +**Pros:** +- Works across different QuestDB versions +- Can filter data during export +- Human-readable format (CSV) +- No downtime required + +**Cons:** +- Slower (serialization overhead) +- Larger file sizes +- Symbol dictionaries not preserved +- Need to recreate indexes + +## Method 4: ILP Streaming (Incremental) + +Stream data via InfluxDB Line Protocol for continuous replication. + +### Python Example + +```python +import psycopg2 +from questdb.ingress import Sender + +# Connect to source +source_conn = psycopg2.connect( + host="source-host", port=8812, + user="admin", password="quest", database="questdb" +) + +# Stream to target via ILP +with Sender('target-host', 9009) as sender: + cursor = source_conn.cursor() + cursor.execute(""" + SELECT timestamp, symbol, price, amount + FROM trades + WHERE timestamp >= now() - interval '1' day + ORDER BY timestamp + """) + + for row in cursor: + timestamp, symbol, price, amount = row + sender.row( + 'trades', + symbols={'symbol': symbol}, + columns={'price': price, 'amount': amount}, + at=int(timestamp.timestamp() * 1_000_000) # Convert to microseconds + ) + + sender.flush() + +source_conn.close() +``` + +### Real-Time Replication + +For ongoing replication, query new data periodically: + +```python +import time +from datetime import datetime, timedelta + +last_sync = datetime.now() - timedelta(days=1) + +while True: + cursor.execute(""" + SELECT timestamp, symbol, price, amount + FROM trades + WHERE timestamp > %s + ORDER BY timestamp + """, (last_sync,)) + + rows = cursor.fetchall() + if rows: + for row in rows: + # Send via ILP as above + sender.row(...) 
+ + last_sync = rows[-1][0] # Update to latest timestamp + sender.flush() + + time.sleep(60) # Check every minute +``` + +### Pros and Cons + +**Pros:** +- Incremental updates possible +- Works while both instances are running +- Can transform data during transfer +- Can replicate to multiple targets + +**Cons:** +- Requires programming +- Network overhead +- Must handle connection failures +- Need to track last synced position + +## Method 5: PostgreSQL Logical Replication (Advanced) + +Use external tools that support PostgreSQL wire protocol. + +### Using Debezium + +Not directly supported, but can use CDC patterns with polling: + +**Source query (periodic):** +```sql +SELECT * +FROM trades +WHERE timestamp > :last_checkpoint +ORDER BY timestamp +LIMIT 10000; +``` + +Stream results to target via ILP or PostgreSQL COPY. + +### Pros and Cons + +**Pros:** +- Can integrate with data pipelines +- Near real-time replication +- Works with heterogeneous targets + +**Cons:** +- Complex setup +- External dependencies +- Requires checkpoint management + +## Comparison Matrix + +| Method | Speed | Downtime | Incremental | Filtering | Complexity | +|--------|-------|----------|-------------|-----------|------------| +| **Filesystem Copy** | ⭐⭐⭐⭐⭐ | Required | ❌ | ❌ | ⭐ | +| **Backup/Restore** | ⭐⭐⭐⭐ | Partial | ❌ | ❌ | ⭐⭐ | +| **SQL Export/Import** | ⭐⭐ | None | ❌ | ✅ | ⭐⭐ | +| **ILP Streaming** | ⭐⭐⭐ | None | ✅ | ✅ | ⭐⭐⭐⭐ | +| **Logical Replication** | ⭐⭐⭐ | None | ✅ | ✅ | ⭐⭐⭐⭐⭐ | + +## Large Table Considerations + +For tables > 100GB: + +### Parallel Export/Import + +```bash +# Export partitions in parallel +for partition in 2025-01-{01..31}; do + psql -h source -c "COPY (SELECT * FROM trades WHERE timestamp::date = '$partition') TO STDOUT" | \ + psql -h target -c "COPY trades FROM STDIN" & +done +wait +``` + +### Compression + +```bash +# Compress during transfer +pg_dump -h source -t trades | gzip | ssh target "gunzip | psql" + +# Or use pigz for parallel compression 
+pg_dump -h source -t trades | pigz -9 | ssh target "unpigz | psql" +``` + +### Split by Partition + +```bash +# Copy one partition at a time (filesystem method) +for partition in /var/lib/questdb/db/trades/2025-01-*; do + rsync -avz "$partition" target:/var/lib/questdb/db/trades/ +done +``` + +## Verification + +After copying, verify data integrity: + +**Row counts:** +```sql +-- On source +SELECT count(*) FROM trades; + +-- On target (should match) +SELECT count(*) FROM trades; +``` + +**Timestamp range:** +```sql +SELECT min(timestamp), max(timestamp) FROM trades; +``` + +**Checksums:** +```sql +-- On both instances +SELECT + symbol, + count(*) as row_count, + sum(cast(price AS LONG)) as price_checksum, + sum(cast(amount AS LONG)) as amount_checksum +FROM trades +GROUP BY symbol +ORDER BY symbol; +``` + +**Sample verification:** +```sql +-- Compare random samples +SELECT * FROM trades WHERE timestamp = '2025-01-15T12:34:56.789012Z'; +``` + +## Automating Backups + +### Daily Backup Script + +```bash +#!/bin/bash +# backup-questdb.sh + +BACKUP_DIR="/backup/questdb/$(date +%Y-%m-%d)" +SOURCE_DB="/var/lib/questdb/db" + +# Create backup directory +mkdir -p "$BACKUP_DIR" + +# Stop QuestDB (optional, for consistent backup) +# systemctl stop questdb + +# Copy tables +for table in "$SOURCE_DB"/*; do + if [ -d "$table" ]; then + table_name=$(basename "$table") + echo "Backing up $table_name..." 
+ tar -czf "$BACKUP_DIR/${table_name}.tar.gz" -C "$SOURCE_DB" "$table_name" + fi +done + +# Start QuestDB +# systemctl start questdb + +# Cleanup old backups (keep last 7 days) +find /backup/questdb/ -type d -mtime +7 -exec rm -rf {} \; + +echo "Backup complete: $BACKUP_DIR" +``` + +**Add to crontab:** +```bash +# Run daily at 2 AM +0 2 * * * /usr/local/bin/backup-questdb.sh >> /var/log/questdb-backup.log 2>&1 +``` + +## Multi-Region Replication + +For active-active or active-passive setups: + +```python +# Continuous replication with conflict resolution +def replicate_to_regions(source_host, target_hosts): + with psycopg2.connect(host=source_host, ...) as source: + senders = [Sender(host, 9009) for host in target_hosts] + + last_ts = get_last_checkpoint() + + while True: + cursor = source.cursor() + cursor.execute(""" + SELECT * FROM trades + WHERE timestamp > %s + ORDER BY timestamp + LIMIT 10000 + """, (last_ts,)) + + batch = cursor.fetchall() + if not batch: + time.sleep(10) + continue + + # Replicate to all regions + for sender in senders: + for row in batch: + sender.row('trades', ...) + sender.flush() + + last_ts = batch[-1][0] + save_checkpoint(last_ts) +``` + +## Troubleshooting + +### "Table already exists" + +```sql +-- Drop and recreate +DROP TABLE IF EXISTS trades; + +-- Or truncate and append +TRUNCATE TABLE trades; +``` + +### Permission Denied + +```bash +# Fix ownership +chown -R questdb:questdb /var/lib/questdb/db/trades + +# Fix permissions +chmod -R 755 /var/lib/questdb/db/trades +``` + +### Incomplete Transfer + +```sql +-- Check for gaps in time-series +SELECT + timestamp, + lag(timestamp) OVER (ORDER BY timestamp) as prev_timestamp, + timestamp - lag(timestamp) OVER (ORDER BY timestamp) as gap_micros +FROM trades +WHERE timestamp - lag(timestamp) OVER (ORDER BY timestamp) > 3600000000 -- Gaps > 1 hour +ORDER BY timestamp; +``` + +:::tip Best Practices +1. **Test first**: Always test your copy method on a small table +2. 
**Verify after**: Check row counts, timestamps, and sample data +3. **Monitor during**: Watch disk space, memory, and network usage +4. **Backup before**: Keep a backup before major data operations +5. **Automate**: Script and schedule regular backups +::: + +:::warning Downtime Planning +Methods requiring downtime: +- **Filesystem copy**: Both instances must be stopped +- **Backup** (optional): Source can run, target stopped during restore + +Methods with no downtime: +- **SQL export/import**: Both instances can run +- **ILP streaming**: Both instances remain operational +::: + +:::info Related Documentation +- [Backup command](/docs/reference/sql/backup/) +- [COPY command](/docs/reference/sql/copy/) +- [ILP ingestion](/docs/operations/ingesting-data/) +- [PostgreSQL wire protocol](/docs/reference/api/postgres/) +::: diff --git a/documentation/playbook/operations/csv-import-milliseconds.md b/documentation/playbook/operations/csv-import-milliseconds.md new file mode 100644 index 000000000..a6fa010e9 --- /dev/null +++ b/documentation/playbook/operations/csv-import-milliseconds.md @@ -0,0 +1,402 @@ +--- +title: Import CSV with Millisecond Timestamps +sidebar_label: CSV import with milliseconds +description: Import CSV files with epoch millisecond timestamps and convert them to QuestDB's microsecond format +--- + +Import CSV files containing epoch timestamps in milliseconds (common from JavaScript, Python, and many APIs) and convert them to QuestDB's native microsecond format during import. + +## Problem: CSV with Millisecond Epoch Timestamps + +You have a CSV file with timestamps as epoch milliseconds: + +**trades.csv:** +```csv +timestamp,symbol,price,amount +1737456645123,BTC-USDT,61234.50,0.123 +1737456645456,ETH-USDT,3456.78,1.234 +1737456645789,BTC-USDT,61235.00,0.456 +``` + +QuestDB expects timestamps in microseconds, so direct import will create incorrect dates (off by 1000x). 
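The direction of the error is easy to demonstrate with Python's standard library, using the first row of the CSV above:

```python
from datetime import datetime, timezone

ms_value = 1737456645123  # first timestamp in trades.csv

# Interpreted correctly as milliseconds:
ok = datetime.fromtimestamp(ms_value / 1000, tz=timezone.utc)

# Interpreted as microseconds, which is what happens when the raw
# value lands in a TIMESTAMP column unchanged:
wrong = datetime.fromtimestamp(ms_value / 1_000_000, tz=timezone.utc)

print(ok.year, wrong.year)  # 2025 1970
```

Read a thousand times too small, every date collapses into January 1970; read a thousand times too large, dates land tens of thousands of years in the future.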
+
+## Solution 1: Convert During Web Console Import
+
+The QuestDB web console CSV import tool automatically detects and converts epoch timestamps.
+
+**Steps:**
+1. Navigate to the QuestDB web console (http://localhost:9000)
+2. Click "Import" in the top navigation
+3. Select your CSV file or drag and drop
+4. In the schema detection screen:
+   - QuestDB detects the `timestamp` column type
+   - If detected as LONG, manually change it to TIMESTAMP
+   - Check "Partition by" timestamp if appropriate
+5. Click "Import"
+
+**Note:** The web console auto-detects milliseconds vs microseconds based on value magnitude.
+
+## Solution 2: REST API with Schema Definition
+
+Define the schema explicitly in the REST API call:
+
+```bash
+curl -F data=@trades.csv \
+  -F schema='[
+    {"name": "timestamp", "type": "TIMESTAMP", "pattern": "epoch"},
+    {"name": "symbol", "type": "SYMBOL"},
+    {"name": "price", "type": "DOUBLE"},
+    {"name": "amount", "type": "DOUBLE"}
+  ]' \
+  -F timestamp=timestamp \
+  -F partitionBy=DAY \
+  http://localhost:9000/imp
+```
+
+**Key parameters:**
+- `"pattern": "epoch"`: Tells QuestDB to interpret the column as an epoch timestamp
+- `"type": "TIMESTAMP"`: Column type
+- `timestamp=timestamp`: Designates the table's designated timestamp
+- `partitionBy=DAY`: Partition strategy
+
+The REST API automatically detects milliseconds vs microseconds.
+
+## Solution 3: SQL COPY Command with Conversion
+
+On recent QuestDB versions, use the COPY command:
+
+```sql
+COPY trades FROM 'trades.csv'
+WITH HEADER true
+TIMESTAMP 'timestamp'
+PARTITION BY DAY;
+```
+
+(COPY's `FORMAT` option takes a timestamp pattern rather than a file format, so
+it is not needed for epoch values.) QuestDB's CSV parser automatically handles
+epoch millisecond detection and conversion.
+ +## Solution 4: Pre-Convert in Source System + +Convert to microseconds before export: + +**JavaScript:** +```javascript +const timestampMicros = Date.now() * 1000; // Milliseconds * 1000 = microseconds +console.log(`${timestampMicros},BTC-USDT,61234.50,0.123`); +``` + +**Python:** +```python +import time +timestamp_micros = int(time.time() * 1_000_000) # Seconds * 1M = microseconds +print(f"{timestamp_micros},BTC-USDT,61234.50,0.123") +``` + +**SQL (in source database):** +```sql +-- PostgreSQL example +SELECT + EXTRACT(EPOCH FROM timestamp) * 1000000 AS timestamp_micros, + symbol, price, amount +FROM trades; +``` + +Then export with timestamps already in microseconds. + +## Solution 5: Import Then Convert with SQL + +Import as LONG, then INSERT with conversion: + +```sql +-- Step 1: Create staging table with LONG timestamp +CREATE TABLE trades_staging ( + timestamp_ms LONG, + symbol SYMBOL, + price DOUBLE, + amount DOUBLE +); + +-- Step 2: Import CSV to staging table +-- (via web console or REST API, treating timestamp as LONG) + +-- Step 3: Create final table with TIMESTAMP +CREATE TABLE trades ( + timestamp TIMESTAMP, + symbol SYMBOL INDEX, + price DOUBLE, + amount DOUBLE +) TIMESTAMP(timestamp) PARTITION BY DAY; + +-- Step 4: Convert and insert +INSERT INTO trades +SELECT + cast(timestamp_ms * 1000 AS TIMESTAMP) as timestamp, -- Milliseconds → microseconds + symbol, + price, + amount +FROM trades_staging; + +-- Step 5: Cleanup +DROP TABLE trades_staging; +``` + +This approach gives you full control over the conversion process. 
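The same conversion can also be done outside the database before import. A minimal sketch using Python's csv module (the helper name and tiny sample are illustrative):

```python
import csv
import io

def ms_csv_to_us(src, dst, ts_col: str = "timestamp") -> None:
    """Rewrite a CSV's epoch-millisecond column to microseconds."""
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        row[ts_col] = str(int(row[ts_col]) * 1000)  # ms -> µs
        writer.writerow(row)

src = io.StringIO("timestamp,symbol,price\n1737456645123,BTC-USDT,61234.50\n")
dst = io.StringIO()
ms_csv_to_us(src, dst)
assert "1737456645123000" in dst.getvalue()
```

In practice, `src` and `dst` would be the input file and a temporary output file, imported afterwards with any of the solutions above.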
+ +## Verifying Timestamp Conversion + +After import, verify timestamps are correct: + +```sql +-- Check first few rows +SELECT * FROM trades LIMIT 5; + +-- Verify timestamp range is reasonable +SELECT + min(timestamp) as earliest, + max(timestamp) as latest, + (max(timestamp) - min(timestamp)) / 86400000000 as days_span +FROM trades; + +-- Convert back to milliseconds to compare with source +SELECT + timestamp, + cast(timestamp AS LONG) / 1000 as timestamp_ms_check, + symbol, + price +FROM trades +LIMIT 5; +``` + +**Expected output:** +- Timestamps should show reasonable dates (not year 1970 or 50,000 AD) +- `days_span` should match your data's timeframe +- `timestamp_ms_check` should match your original CSV values + +## Common Mistakes and Fixes + +### Mistake 1: Timestamps 1000x Too Large + +**Symptom:** Dates show far in the future (year ~50,000 AD) + +**Cause:** Imported microseconds as milliseconds (multiplied by 1000 instead of dividing) + +**Fix:** +```sql +-- Create corrected table +CREATE TABLE trades_fixed AS +SELECT + cast(cast(timestamp AS LONG) / 1000 AS TIMESTAMP) as timestamp, + symbol, price, amount +FROM trades_incorrect +TIMESTAMP(timestamp) PARTITION BY DAY; +``` + +### Mistake 2: Timestamps 1000x Too Small + +**Symptom:** All dates show as 1970-01-01 + +**Cause:** Imported seconds as microseconds, or milliseconds treated as microseconds + +**Fix:** +```sql +-- If original was milliseconds, multiply by 1000 +CREATE TABLE trades_fixed AS +SELECT + cast(cast(timestamp AS LONG) * 1000 AS TIMESTAMP) as timestamp, + symbol, price, amount +FROM trades_incorrect +TIMESTAMP(timestamp) PARTITION BY DAY; +``` + +### Mistake 3: Timestamps Imported as Strings + +**Symptom:** Timestamp column type is STRING or VARCHAR + +**Cause:** CSV importer didn't recognize epoch format + +**Fix:** +```sql +INSERT INTO trades_corrected +SELECT + cast(cast(timestamp_string AS LONG) * 1000 AS TIMESTAMP) as timestamp, + symbol, price, amount +FROM trades_incorrect; +``` + 
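A quick way to catch all three mistakes at once is a magnitude check, similar in spirit to the auto-detection described at the end of this page (the thresholds assume dates after ~2001):

```python
def epoch_unit(value: int) -> str:
    """Guess the unit of a positive epoch value by magnitude."""
    if value < 10**11:
        return "seconds"        # e.g. 1_737_456_645
    if value < 10**14:
        return "milliseconds"   # e.g. 1_737_456_645_123
    if value < 10**17:
        return "microseconds"   # e.g. 1_737_456_645_123_456
    return "nanoseconds"

def to_micros(value: int) -> int:
    unit = epoch_unit(value)
    if unit == "seconds":
        return value * 1_000_000
    if unit == "milliseconds":
        return value * 1_000
    if unit == "microseconds":
        return value
    return value // 1_000       # nanoseconds

print(epoch_unit(1737456645123))  # milliseconds
print(to_micros(1737456645123))   # 1737456645123000
```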
+## Handling Mixed Timestamp Formats + +If your CSV has some timestamps in ISO format and some in epoch: + +```csv +timestamp,symbol,price +2025-01-15T10:30:00.000Z,BTC-USDT,61234.50 +1737456645123,ETH-USDT,3456.78 +2025-01-15T10:30:01.000Z,BTC-USDT,61235.00 +``` + +**Solution:** Import as STRING, then use CASE to convert: + +```sql +CREATE TABLE trades_final AS +SELECT + CASE + -- If starts with digit, it's epoch milliseconds + WHEN timestamp_str ~ '^[0-9]+$' THEN cast(cast(timestamp_str AS LONG) * 1000 AS TIMESTAMP) + -- Otherwise, parse as ISO string + ELSE cast(timestamp_str AS TIMESTAMP) + END as timestamp, + symbol, + price +FROM trades_staging +TIMESTAMP(timestamp) PARTITION BY DAY; +``` + +## Batch Import Multiple CSV Files + +Import multiple files with consistent schema: + +```bash +#!/bin/bash +# Import all CSV files in directory + +for file in data/*.csv; do + echo "Importing $file..." + curl -F data=@"$file" \ + -F schema='[ + {"name": "timestamp", "type": "TIMESTAMP", "pattern": "epoch"}, + {"name": "symbol", "type": "SYMBOL"}, + {"name": "price", "type": "DOUBLE"}, + {"name": "amount", "type": "DOUBLE"} + ]' \ + -F timestamp=timestamp \ + -F name=trades \ + -F overwrite=false \ + http://localhost:9000/imp +done +``` + +**Key parameter:** +- `overwrite=false`: Append to existing table (default: true would overwrite) + +## Import with Timezone Conversion + +If timestamps are in milliseconds but represent a specific timezone: + +```sql +-- Example: Timestamps are US Eastern Time, convert to UTC +INSERT INTO trades +SELECT + cast(dateadd('h', 5, cast(timestamp_ms * 1000 AS TIMESTAMP)) AS TIMESTAMP) as timestamp, -- EST is UTC-5 + symbol, + price, + amount +FROM trades_staging; +``` + +Adjust the hour offset based on your source timezone. 
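Note that a fixed offset like the `dateadd('h', 5, ...)` above breaks when the source region observes daylight saving time. If you pre-convert outside the database, `zoneinfo` handles DST; a sketch (assumes Python 3.9+ and an available timezone database):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def local_ms_to_utc_micros(ms: int, tz_name: str) -> int:
    # Decode the ms value into wall-clock fields, then re-interpret
    # those fields in the named timezone
    wall = datetime.fromtimestamp(ms / 1000, tz=timezone.utc).replace(tzinfo=None)
    localized = wall.replace(tzinfo=ZoneInfo(tz_name))
    return int(localized.timestamp() * 1_000_000)

# "2025-01-15 10:30" stored as if it were UTC epoch milliseconds
fake_ms = int(datetime(2025, 1, 15, 10, 30, tzinfo=timezone.utc).timestamp() * 1000)

utc_us = local_ms_to_utc_micros(fake_ms, "America/New_York")
print(datetime.fromtimestamp(utc_us / 1_000_000, tz=timezone.utc))
# 2025-01-15 15:30:00+00:00  (EST is UTC-5 in January)
```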
+
+## Performance Tips
+
+**Partition by appropriate interval:**
+```sql
+-- High-frequency data (millions of rows per day)
+PARTITION BY HOUR
+
+-- Medium frequency (thousands per day)
+PARTITION BY DAY
+
+-- Low frequency (historical archives)
+PARTITION BY MONTH
+```
+
+**Use SYMBOL type for repeated strings:**
+```sql
+CREATE TABLE trades (
+    timestamp TIMESTAMP,
+    symbol SYMBOL,  -- Not STRING - symbols are interned for efficiency
+    exchange SYMBOL,
+    side SYMBOL,
+    price DOUBLE,
+    amount DOUBLE
+) TIMESTAMP(timestamp) PARTITION BY DAY;
+```
+
+**Relax commit settings for bulk initial load:**
+```sql
+-- Before import: allow larger uncommitted batches
+ALTER TABLE trades SET PARAM maxUncommittedRows = 100000;
+ALTER TABLE trades SET PARAM o3MaxLag = 0;
+
+-- After import complete
+ALTER TABLE trades SET PARAM maxUncommittedRows = 1000; -- Restore default
+```
+
+## Verifying Import Success
+
+**Row count:**
+```sql
+SELECT count(*) FROM trades;
+```
+
+**Timestamp range:**
+```sql
+SELECT
+    to_str(min(timestamp), 'yyyy-MM-dd HH:mm:ss') as earliest,
+    to_str(max(timestamp), 'yyyy-MM-dd HH:mm:ss') as latest
+FROM trades;
+```
+
+**Partition distribution:**
+```sql
+SELECT
+    to_str(timestamp, 'yyyy-MM-dd') as partition,
+    count(*) as row_count
+FROM trades
+GROUP BY partition
+ORDER BY partition DESC
+LIMIT 10;
+```
+
+## Alternative: Use ILP for Programmatic Import
+
+For programmatic imports, consider using InfluxDB Line Protocol instead of CSV:
+
+**Python example:**
+```python
+from questdb.ingress import Sender, TimestampNanos
+
+with Sender.from_conf('tcp::addr=localhost:9009;') as sender:
+    # timestamp_ms from your data source; the client takes nanoseconds
+    sender.row(
+        'trades',
+        symbols={'symbol': 'BTC-USDT'},
+        columns={'price': 61234.50, 'amount': 0.123},
+        at=TimestampNanos(timestamp_ms * 1_000_000))
+
+    sender.flush()
+```
+
+ILP handles timestamp precision explicitly and offers better performance for large datasets.
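Under the hood each ILP record is a single text line, and the trailing designated timestamp is expressed in nanoseconds, which is why the millisecond source value must be scaled. A simplified pure-Python rendering of the format (illustrative only: real clients escape special characters and quote string columns):

```python
def to_ilp_line(table: str, symbols: dict, columns: dict, timestamp_ms: int) -> str:
    """Render one InfluxDB Line Protocol record.

    Simplified sketch: no escaping, numeric columns only.
    The designated timestamp at the end is nanoseconds.
    """
    sym = ",".join(f"{k}={v}" for k, v in symbols.items())
    cols = ",".join(f"{k}={v}" for k, v in columns.items())
    return f"{table},{sym} {cols} {timestamp_ms * 1_000_000}"
```

A millisecond value such as `1737456645123` appears on the wire as `1737456645123000000` — scaled by one million, not one thousand.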
+ +:::tip Automatic Detection +QuestDB's CSV importer automatically detects millisecond vs microsecond vs second epoch timestamps based on value magnitude: +- Values ~1,700,000,000 → seconds +- Values ~1,700,000,000,000 → milliseconds +- Values ~1,700,000,000,000,000 → microseconds + +This detection works for timestamps from 2020 onwards. +::: + +:::warning Ambiguous Timestamps +Timestamps between 1970 and ~2000 can be ambiguous (seconds could look like milliseconds). For historical data, manually specify the conversion factor or use ISO 8601 string format instead of epoch. +::: + +:::info Related Documentation +- [CSV import via Web Console](/docs/operations/importing-data/#web-console-csv-import) +- [REST API import](/docs/operations/importing-data/#rest-api) +- [COPY command](/docs/reference/sql/copy/) +- [Timestamp types](/docs/reference/sql/datatypes/#timestamp-and-date) +- [ILP ingestion](/docs/operations/ingesting-data/#influxdb-line-protocol) +::: diff --git a/documentation/playbook/operations/query-times-histogram.md b/documentation/playbook/operations/query-times-histogram.md new file mode 100644 index 000000000..e43069aa9 --- /dev/null +++ b/documentation/playbook/operations/query-times-histogram.md @@ -0,0 +1,411 @@ +--- +title: Query Performance Histogram +sidebar_label: Query times histogram +description: Analyze query performance distributions using query logs and execution metrics for optimization +--- + +Analyze the distribution of query execution times to identify performance patterns, slow queries, and optimization opportunities. Use query logs and metrics to create histograms showing how query latency varies across your workload. + +## Problem: Understanding Query Performance + +You need to answer: +- What's the typical query latency? +- How many queries are slow (> 1 second)? +- Are there performance regressions over time? +- Which query patterns are slowest? + +Single-point metrics (average, P99) don't show the full picture. 
A histogram reveals the distribution. + +## Solution: Query Log Analysis + +QuestDB logs query execution times. Parse logs to create performance histograms. + +### Enable Query Logging + +**server.conf:** +```properties +# Log all queries (development/staging) +http.query.log.enabled=true + +# Or log only slow queries (production) +http.slow.query.log.enabled=true +http.slow.query.threshold=1000 # Log queries > 1 second +``` + +**Log format:** +``` +2025-01-15T10:30:45.123Z I http-server [1234] `SELECT * FROM trades WHERE symbol = 'BTC-USDT'` [exec=15ms, compiler=2ms, rows=1000] +``` + +## Parse Logs into Table + +### Create Query Log Table + +```sql +CREATE TABLE query_log ( + timestamp TIMESTAMP, + query_text STRING, + exec_time_ms INT, + compiler_time_ms INT, + rows_returned LONG +) TIMESTAMP(timestamp) PARTITION BY DAY; +``` + +### Parse and Insert + +**Python script:** +```python +import re +import psycopg2 +from datetime import datetime + +# Regex to parse QuestDB log lines +log_pattern = r'(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z).*`([^`]+)`.*\[exec=(\d+)ms, compiler=(\d+)ms, rows=(\d+)\]' + +conn = psycopg2.connect(host="localhost", port=8812, user="admin", password="quest", database="questdb") +cursor = conn.cursor() + +with open('/var/log/questdb/query.log', 'r') as f: + for line in f: + match = re.search(log_pattern, line) + if match: + timestamp_str, query, exec_ms, compiler_ms, rows = match.groups() + timestamp = datetime.fromisoformat(timestamp_str.replace('Z', '+00:00')) + + cursor.execute(""" + INSERT INTO query_log (timestamp, query_text, exec_time_ms, compiler_time_ms, rows_returned) + VALUES (%s, %s, %s, %s, %s) + """, (timestamp, query, int(exec_ms), int(compiler_ms), int(rows))) + +conn.commit() +conn.close() +``` + +## Create Performance Histogram + +```questdb-sql demo title="Query execution time histogram" +SELECT + (cast(exec_time_ms / 100 AS INT) * 100) as latency_bucket_ms, + ((cast(exec_time_ms / 100 AS INT) + 1) * 100) as 
bucket_end_ms, + count(*) as query_count, + (count(*) * 100.0 / sum(count(*)) OVER ()) as percentage +FROM query_log +WHERE timestamp >= dateadd('d', -1, now()) +GROUP BY latency_bucket_ms, bucket_end_ms +ORDER BY latency_bucket_ms; +``` + +**Results:** + +| latency_bucket_ms | bucket_end_ms | query_count | percentage | +|-------------------|---------------|-------------|------------| +| 0 | 100 | 45,678 | 91.4% | +| 100 | 200 | 3,456 | 6.9% | +| 200 | 300 | 567 | 1.1% | +| 300 | 400 | 234 | 0.5% | +| 400 | 500 | 45 | 0.09% | +| 500+ | | 20 | 0.04% | + +**Interpretation:** +- 91.4% of queries complete in < 100ms (fast) +- 1.7% take > 200ms (investigate these) +- 0.13% take > 400ms (definitely need optimization) + +## Time-Series Performance Trends + +Track how query performance changes over time: + +```questdb-sql demo title="Hourly query performance evolution" +SELECT + timestamp_floor('h', timestamp) as hour, + (cast(exec_time_ms / 50 AS INT) * 50) as latency_bucket, + count(*) as count +FROM query_log +WHERE timestamp >= dateadd('d', -7, now()) +GROUP BY hour, latency_bucket +ORDER BY hour DESC, latency_bucket; +``` + +Visualize in Grafana heatmap to see performance degradation over time. 
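The same floor-to-bucket logic is easy to reproduce client-side, for instance to histogram ad-hoc timing samples without going through the `query_log` table. A minimal Python sketch mirroring the `cast(exec_time_ms / 100 AS INT) * 100` bucketing used in the SQL above:

```python
from collections import Counter

def latency_histogram(samples_ms, bucket_ms=100):
    """Floor each sample to its bucket start and count occurrences.

    Returns {bucket_start: (count, percentage_of_total)}, the same
    shape as the SQL histogram query in this section.
    """
    counts = Counter((t // bucket_ms) * bucket_ms for t in samples_ms)
    total = sum(counts.values())
    return {b: (n, round(100.0 * n / total, 2)) for b, n in sorted(counts.items())}
```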
+ +## Percentile Analysis + +Calculate latency percentiles: + +```questdb-sql demo title="Query latency percentiles" +SELECT + percentile(exec_time_ms, 50) as p50_ms, + percentile(exec_time_ms, 90) as p90_ms, + percentile(exec_time_ms, 95) as p95_ms, + percentile(exec_time_ms, 99) as p99_ms, + percentile(exec_time_ms, 99.9) as p999_ms, + max(exec_time_ms) as max_ms +FROM query_log +WHERE timestamp >= dateadd('d', -1, now()); +``` + +**Results:** + +| p50_ms | p90_ms | p95_ms | p99_ms | p999_ms | max_ms | +|--------|--------|--------|--------|---------|--------| +| 12 | 45 | 89 | 234 | 1,234 | 15,678 | + +## Identify Slow Query Patterns + +Find which query patterns are slowest: + +```questdb-sql demo title="Slowest query patterns" +WITH normalized AS ( + SELECT + exec_time_ms, + -- Normalize query (remove values, keep structure) + regexp_replace(query_text, '\d+', 'N', 'g') as query_pattern, + regexp_replace( + regexp_replace(query_text, '''[^'']*''', '''S''', 'g'), + '\d+', 'N', 'g' + ) as query_normalized + FROM query_log + WHERE timestamp >= dateadd('d', -1, now()) +) +SELECT + query_pattern, + count(*) as execution_count, + avg(exec_time_ms) as avg_ms, + percentile(exec_time_ms, 95) as p95_ms, + max(exec_time_ms) as max_ms +FROM normalized +GROUP BY query_pattern +HAVING count(*) >= 10 -- At least 10 executions +ORDER BY avg_ms DESC +LIMIT 20; +``` + +**Results:** + +| query_pattern | execution_count | avg_ms | p95_ms | max_ms | +|---------------|-----------------|--------|--------|--------| +| SELECT * FROM trades WHERE timestamp BETWEEN ... | 1,234 | 456 | 890 | 2,345 | +| SELECT symbol, sum(amount) FROM trades GROUP BY ... 
| 567 | 234 | 456 | 1,234 | + +## Slowest Individual Queries + +Find actual slow query instances: + +```questdb-sql demo title="Top 20 slowest queries" +SELECT + timestamp, + exec_time_ms, + rows_returned, + substr(query_text, 1, 100) as query_preview +FROM query_log +WHERE timestamp >= dateadd('d', -1, now()) +ORDER BY exec_time_ms DESC +LIMIT 20; +``` + +## Query Performance by Table + +Analyze which tables have slow queries: + +```questdb-sql demo title="Performance by table accessed" +SELECT + CASE + WHEN query_text LIKE '%FROM trades%' THEN 'trades' + WHEN query_text LIKE '%FROM sensor_readings%' THEN 'sensor_readings' + WHEN query_text LIKE '%FROM api_logs%' THEN 'api_logs' + ELSE 'other' + END as table_name, + count(*) as query_count, + avg(exec_time_ms) as avg_exec_ms, + percentile(exec_time_ms, 95) as p95_exec_ms +FROM query_log +WHERE timestamp >= dateadd('d', -1, now()) +GROUP BY table_name +ORDER BY avg_exec_ms DESC; +``` + +## Grafana Dashboard + +### Query Latency Heatmap + +```questdb-sql demo title="Heatmap data for Grafana" +SELECT + timestamp_floor('5m', timestamp) as time, + (cast(exec_time_ms / 50 AS INT) * 50) as latency_bucket, + count(*) as count +FROM query_log +WHERE $__timeFilter(timestamp) +GROUP BY time, latency_bucket +ORDER BY time, latency_bucket; +``` + +**Grafana config:** +- Visualization: Heatmap +- X-axis: time +- Y-axis: latency_bucket +- Cell value: count + +### Query Rate and Latency + +```questdb-sql demo title="Query rate and P95 latency" +SELECT + timestamp_floor('1m', timestamp) as time, + count(*) as "Query Rate (QPM)", + percentile(exec_time_ms, 95) as "P95 Latency (ms)" +FROM query_log +WHERE $__timeFilter(timestamp) +SAMPLE BY 1m; +``` + +## Using Prometheus Metrics (Alternative) + +QuestDB exposes Prometheus metrics at `http://localhost:9003/metrics`: + +``` +# HELP questdb_json_queries_total +questdb_json_queries_total 123456 + +# HELP questdb_json_queries_completed +questdb_json_queries_completed 123450 + +# HELP 
questdb_json_queries_failed
+questdb_json_queries_failed 6
+```
+
+Scrape into Prometheus, then query:
+
+```promql
+# Query rate
+rate(questdb_json_queries_completed[5m])
+
+# Error rate
+rate(questdb_json_queries_failed[5m]) / rate(questdb_json_queries_total[5m])
+```
+
+## Custom Query Instrumentation
+
+Add custom timing in application code:
+
+**Python example:**
+```python
+import logging
+import time
+import psycopg2
+
+logger = logging.getLogger(__name__)
+
+conn = psycopg2.connect(...)
+cursor = conn.cursor()
+
+start = time.time()
+cursor.execute("SELECT * FROM trades WHERE symbol = %s", ("BTC-USDT",))
+results = cursor.fetchall()
+elapsed_ms = (time.time() - start) * 1000
+
+# Log to monitoring system
+logger.info(f"Query completed in {elapsed_ms:.2f}ms, returned {len(results)} rows")
+
+# Or insert into query_log table
+cursor.execute("""
+    INSERT INTO query_log (timestamp, query_text, exec_time_ms, rows_returned)
+    VALUES (now(), %s, %s, %s)
+""", ("SELECT * FROM trades WHERE symbol = ?", int(elapsed_ms), len(results)))
+
+conn.commit()
+conn.close()
+```
+
+## Query Performance Alerts
+
+Set up alerts for slow queries:
+
+```sql
+-- Queries taking > 1 second in last 5 minutes
+SELECT count(*) as slow_query_count
+FROM query_log
+WHERE timestamp >= dateadd('m', -5, now())
+  AND exec_time_ms > 1000;
+```
+
+**Alert if** `slow_query_count > 10`.
+
+## Optimization Workflow
+
+1. **Identify slow patterns** (from histogram)
+2. **Get example queries** (slow query log)
+3. **Analyze query plan** (EXPLAIN)
+4. **Add indexes** (on filtered/joined columns)
+5. 
**Verify improvement** (re-run histogram) + +**Before optimization:** +``` +P95: 890ms +P99: 2,345ms +``` + +**After adding index:** +``` +P95: 45ms (-94.9%) +P99: 123ms (-94.8%) +``` + +## Comparing Time Periods + +Compare query performance week-over-week: + +```questdb-sql demo title="Week-over-week latency comparison" +WITH this_week AS ( + SELECT + avg(exec_time_ms) as avg_latency, + percentile(exec_time_ms, 95) as p95_latency + FROM query_log + WHERE timestamp >= dateadd('d', -7, now()) +), +last_week AS ( + SELECT + avg(exec_time_ms) as avg_latency, + percentile(exec_time_ms, 95) as p95_latency + FROM query_log + WHERE timestamp >= dateadd('d', -14, now()) + AND timestamp < dateadd('d', -7, now()) +) +SELECT + 'This Week' as period, + this_week.avg_latency, + this_week.p95_latency, + (this_week.avg_latency - last_week.avg_latency) as avg_change, + ((this_week.avg_latency - last_week.avg_latency) / last_week.avg_latency * 100) as avg_pct_change +FROM this_week, last_week + +UNION ALL + +SELECT + 'Last Week', + last_week.avg_latency, + last_week.p95_latency, + 0, + 0 +FROM last_week; +``` + +**Alerts:** +- If `avg_pct_change > 20%`: Performance regression +- If `avg_pct_change < -20%`: Performance improvement + +:::tip Monitoring Best Practices +1. **Log selectively in production**: Use slow query logging only (threshold 500-1000ms) +2. **Sample high-QPS endpoints**: Log 1% of fast queries to reduce overhead +3. **Rotate logs**: Prevent disk space issues +4. **Index query_log table**: For fast analysis queries +5. 
**Set up alerts**: Automated detection of performance degradation +::: + +:::warning Log Volume +Full query logging can generate significant data: +- 1,000 QPS × 86,400 seconds/day = 86.4M log entries/day +- Use sampling or slow query logging in production +- Rotate and archive old logs regularly +::: + +:::info Related Documentation +- [HTTP slow query logging](/docs/reference/configuration/#http-slow-query-log) +- [Prometheus metrics](/docs/reference/metrics/) +- [percentile() function](/docs/reference/function/aggregation/#percentile) +- [Grafana integration](/docs/third-party-tools/grafana/) +::: diff --git a/documentation/playbook/operations/tls-pgbouncer.md b/documentation/playbook/operations/tls-pgbouncer.md new file mode 100644 index 000000000..5360e9aae --- /dev/null +++ b/documentation/playbook/operations/tls-pgbouncer.md @@ -0,0 +1,489 @@ +--- +title: TLS for PostgreSQL Wire Protocol with PgBouncer +sidebar_label: TLS with PgBouncer +description: Add TLS encryption to QuestDB PostgreSQL wire protocol connections using PgBouncer as a TLS-terminating proxy +--- + +Add TLS/SSL encryption to PostgreSQL wire protocol connections to QuestDB using PgBouncer as a TLS-terminating proxy. QuestDB's PostgreSQL interface doesn't natively support TLS, but PgBouncer provides this capability while also offering connection pooling benefits. + +## Problem: No Native TLS for PostgreSQL Wire Protocol + +QuestDB supports PostgreSQL wire protocol on port 8812, but connections are unencrypted: + +```bash +# Unencrypted connection (passwords and data visible) +psql -h questdb.example.com -p 8812 -U admin -d questdb +``` + +For production deployments, especially over public networks, you need TLS encryption. + +## Solution: PgBouncer as TLS Proxy + +Use PgBouncer to: +1. Accept TLS-encrypted client connections +2. Decrypt and forward to QuestDB's unencrypted PostgreSQL port +3. 
Provide connection pooling as a bonus + +``` +Client (TLS) → PgBouncer (TLS termination) → QuestDB (unencrypted localhost) +``` + +## Architecture + +**Network flow:** +- Clients connect to PgBouncer on port 5432 with TLS +- PgBouncer terminates TLS and connects to QuestDB on localhost:8812 +- PgBouncer and QuestDB communicate over localhost (no network exposure) + +**Security benefits:** +- Data encrypted in transit from client to server +- Credentials protected during authentication +- No changes required to QuestDB configuration +- Works with any PostgreSQL-compatible client + +## Installation + +### Docker Compose Setup + +**docker-compose.yml:** +```yaml +version: '3.8' + +services: + questdb: + image: questdb/questdb:latest + container_name: questdb + ports: + - "9000:9000" # Web console + - "9009:9009" # ILP + volumes: + - ./questdb-data:/var/lib/questdb + environment: + - QDB_PG_USER=admin + - QDB_PG_PASSWORD=quest + restart: unless-stopped + + pgbouncer: + image: edoburu/pgbouncer:latest + container_name: pgbouncer + ports: + - "5432:5432" # PostgreSQL with TLS + volumes: + - ./pgbouncer/pgbouncer.ini:/etc/pgbouncer/pgbouncer.ini:ro + - ./pgbouncer/userlist.txt:/etc/pgbouncer/userlist.txt:ro + - ./certs/server.crt:/etc/pgbouncer/server.crt:ro + - ./certs/server.key:/etc/pgbouncer/server.key:ro + depends_on: + - questdb + restart: unless-stopped +``` + +### PgBouncer Configuration + +**pgbouncer/pgbouncer.ini:** +```ini +[databases] +questdb = host=questdb port=8812 dbname=questdb + +[pgbouncer] +listen_addr = 0.0.0.0 +listen_port = 5432 +auth_type = md5 +auth_file = /etc/pgbouncer/userlist.txt +pool_mode = session +max_client_conn = 1000 +default_pool_size = 25 + +# TLS Configuration +client_tls_sslmode = require +client_tls_cert_file = /etc/pgbouncer/server.crt +client_tls_key_file = /etc/pgbouncer/server.key +client_tls_protocols = secure + +# Optional: Client certificate authentication +# client_tls_ca_file = /etc/pgbouncer/ca.crt + +# Logging +logfile 
= /var/log/pgbouncer/pgbouncer.log +pidfile = /var/run/pgbouncer/pgbouncer.pid +admin_users = admin +``` + +**Key parameters:** +- `client_tls_sslmode = require`: Force TLS for all client connections +- `client_tls_cert_file`: Server certificate (signed by CA or self-signed) +- `client_tls_key_file`: Server private key +- `client_tls_protocols = secure`: Only allow TLS 1.2+ + +### User Authentication File + +**pgbouncer/userlist.txt:** +``` +"admin" "md5" +"readonly" "md5" +``` + +Generate MD5 hashes: +```bash +# Format: md5 + md5(password + username) +echo -n "questadmin" | md5sum | awk '{print "md5"$1}' +# Output: md56c4e8a7e9e3b6f8a9d5c4e8a7e9e3b6f +``` + +Then add to userlist.txt: +``` +"admin" "md56c4e8a7e9e3b6f8a9d5c4e8a7e9e3b6f" +``` + +## Generating TLS Certificates + +### Self-Signed Certificate (Development) + +```bash +# Create certificate directory +mkdir -p certs + +# Generate private key +openssl genrsa -out certs/server.key 2048 + +# Generate self-signed certificate (valid for 365 days) +openssl req -new -x509 -key certs/server.key -out certs/server.crt -days 365 \ + -subj "/C=US/ST=State/L=City/O=Organization/CN=questdb.example.com" + +# Set permissions +chmod 600 certs/server.key +chmod 644 certs/server.crt +``` + +### CA-Signed Certificate (Production) + +```bash +# Generate private key +openssl genrsa -out certs/server.key 2048 + +# Generate certificate signing request (CSR) +openssl req -new -key certs/server.key -out certs/server.csr \ + -subj "/C=US/ST=State/L=City/O=Organization/CN=questdb.example.com" + +# Submit CSR to your CA (Let's Encrypt, DigiCert, etc.) 
+# Receive server.crt from CA
+
+# Optionally concatenate intermediate certificates
+cat server.crt intermediate.crt > certs/server.crt
+```
+
+### Let's Encrypt with Certbot
+
+```bash
+# Install certbot
+sudo apt-get install certbot
+
+# Obtain certificate (requires port 80 temporarily)
+sudo certbot certonly --standalone -d questdb.example.com
+
+# Certificates will be in /etc/letsencrypt/live/questdb.example.com/
+# Copy to pgbouncer directory
+sudo cp /etc/letsencrypt/live/questdb.example.com/fullchain.pem certs/server.crt
+sudo cp /etc/letsencrypt/live/questdb.example.com/privkey.pem certs/server.key
+sudo chown $USER:$USER certs/*
+chmod 600 certs/server.key
+```
+
+## Starting the Stack
+
+```bash
+# Start QuestDB and PgBouncer
+docker-compose up -d
+
+# Check logs
+docker-compose logs pgbouncer
+docker-compose logs questdb
+
+# Verify PgBouncer is listening
+netstat -tlnp | grep 5432
+```
+
+## Connecting with TLS
+
+### psql
+
+```bash
+# Require TLS
+psql "postgresql://admin:quest@questdb.example.com:5432/questdb?sslmode=require"
+
+# Verify certificate (production)
+psql "postgresql://admin:quest@questdb.example.com:5432/questdb?sslmode=verify-full&sslrootcert=/path/to/ca.crt"
+
+# Self-signed certificate (development, skips verification)
+psql "postgresql://admin:quest@localhost:5432/questdb?sslmode=require"
+```
+
+### Python (psycopg2)
+
+```python
+import psycopg2
+
+conn = psycopg2.connect(
+    host="questdb.example.com",
+    port=5432,
+    database="questdb",
+    user="admin",
+    password="quest",
+    sslmode="require"
+)
+
+cursor = conn.cursor()
+cursor.execute("SELECT * FROM trades LIMIT 5")
+print(cursor.fetchall())
+conn.close()
+```
+
+### Node.js (pg)
+
+```javascript
+const fs = require('fs');
+const { Client } = require('pg');
+
+const client = new Client({
+  host: 'questdb.example.com',
+  port: 5432,
+  database: 'questdb',
+  user: 'admin',
+  password: 'quest',
+  ssl: {
+    rejectUnauthorized: true, // Verify certificate
+    ca: fs.readFileSync('/path/to/ca.crt').toString(),
+  },
+}); + +await client.connect(); +const res = await client.query('SELECT * FROM trades LIMIT 5'); +console.log(res.rows); +await client.end(); +``` + +### Grafana + +**PostgreSQL datasource configuration:** +``` +Host: questdb.example.com:5432 +Database: questdb +User: readonly +Password: +TLS/SSL Mode: require +TLS/SSL Method: File system path +Server Certificate: /path/to/ca.crt (if verifying) +``` + +## Connection Pooling Benefits + +PgBouncer provides connection pooling in addition to TLS: + +**Benefits:** +- Reduces connection overhead (connection setup is expensive) +- Limits concurrent connections to QuestDB +- Handles client connection bursts +- Improves query throughput + +**Pool modes:** +- `session`: Connection reused after client disconnects (recommended for QuestDB) +- `transaction`: Connection returned after each transaction +- `statement`: Connection returned after each statement + +**Configuration:** +```ini +pool_mode = session +default_pool_size = 25 # Connections per database per user +max_client_conn = 1000 # Total client connections +reserve_pool_size = 5 # Emergency connections +reserve_pool_timeout = 3 # Seconds to wait for connection +``` + +## Monitoring PgBouncer + +### Admin Console + +```bash +# Connect to PgBouncer admin console +psql -h localhost -p 5432 -U admin pgbouncer + +# Show pool status +SHOW POOLS; + +# Show client connections +SHOW CLIENTS; + +# Show server connections (to QuestDB) +SHOW SERVERS; + +# Show configuration +SHOW CONFIG; + +# Show statistics +SHOW STATS; +``` + +### Key Metrics + +```sql +-- Active connections by pool +SHOW POOLS; +``` + +**Output:** +| database | user | cl_active | cl_waiting | sv_active | sv_idle | sv_used | +|----------|------|-----------|------------|-----------|---------|---------| +| questdb | admin | 15 | 0 | 20 | 5 | 350 | + +- `cl_active`: Active client connections +- `cl_waiting`: Clients waiting for a server connection +- `sv_active`: Server connections in use +- `sv_idle`: Idle server 
connections +- `sv_used`: Server connections used since pool started + +## Security Hardening + +### Restrict Client Certificate Authorities + +**pgbouncer.ini:** +```ini +client_tls_ca_file = /etc/pgbouncer/ca.crt +client_tls_sslmode = verify-full +``` + +This requires clients to present certificates signed by your CA. + +### Disable Weak Ciphers + +**pgbouncer.ini:** +```ini +client_tls_ciphers = HIGH:!aNULL:!MD5:!3DES +client_tls_protocols = secure # TLS 1.2 and 1.3 only +``` + +### Firewall Rules + +```bash +# Allow only PgBouncer port from external +sudo ufw allow 5432/tcp + +# Block direct QuestDB PostgreSQL port from external +sudo ufw deny 8812/tcp + +# QuestDB should only listen on localhost +# In server.conf: +# pg.net.bind.to=127.0.0.1 +``` + +### Authentication + +Use strong passwords in userlist.txt: + +```bash +# Generate strong password hash +python3 -c "import hashlib; print('md5' + hashlib.md5(b'admin').hexdigest())" +``` + +## Troubleshooting + +### Connection Refused + +**Symptom:** `psql: error: connection to server failed: Connection refused` + +**Checks:** +1. Verify PgBouncer is running: `docker ps | grep pgbouncer` +2. Check port binding: `netstat -tlnp | grep 5432` +3. Check firewall: `sudo ufw status` +4. Review PgBouncer logs: `docker logs pgbouncer` + +### TLS Certificate Errors + +**Symptom:** `SSL error: certificate verify failed` + +**Solution for self-signed certs:** +```bash +psql "postgresql://admin:quest@localhost:5432/questdb?sslmode=require" +# Note: "require" doesn't verify certificate, only encrypts +``` + +**Solution for production:** +```bash +# Verify certificate chain is complete +openssl s_client -connect questdb.example.com:5432 -showcerts +``` + +### Authentication Failed + +**Symptom:** `password authentication failed` + +**Checks:** +1. Verify userlist.txt hash is correct +2. Ensure auth_type matches (md5 vs scram-sha-256) +3. Check QuestDB credentials in pgbouncer.ini [databases] section +4. 
Review PgBouncer auth logs + +### Performance Issues + +**Check connection pool exhaustion:** +```sql +SHOW POOLS; +-- If cl_waiting > 0, clients are waiting for connections +``` + +**Solution:** +```ini +default_pool_size = 50 # Increase pool size +max_client_conn = 2000 # Increase if needed +``` + +## Alternative: Nginx Stream Proxy + +For simpler TLS termination without connection pooling: + +**nginx.conf:** +```nginx +stream { + upstream questdb { + server localhost:8812; + } + + server { + listen 5432 ssl; + proxy_pass questdb; + + ssl_certificate /etc/nginx/certs/server.crt; + ssl_certificate_key /etc/nginx/certs/server.key; + ssl_protocols TLSv1.2 TLSv1.3; + ssl_ciphers HIGH:!aNULL:!MD5; + } +} +``` + +**Pros:** Simpler configuration, no authentication handling +**Cons:** No connection pooling, no PostgreSQL-specific features + +:::tip Production Deployment +For production deployments with client applications: +1. Use CA-signed certificates (Let's Encrypt is free) +2. Set `client_tls_sslmode = require` minimum, `verify-full` for maximum security +3. Enable connection pooling to handle traffic bursts +4. Monitor PgBouncer pools regularly +5. Restrict QuestDB PostgreSQL port to localhost only +::: + +:::warning Certificate Renewal +Let's Encrypt certificates expire after 90 days. Set up automatic renewal: + +```bash +# Add to crontab +0 0 1 * * certbot renew && docker-compose restart pgbouncer +``` + +Or use a certbot hook to reload PgBouncer after renewal. 
+::: + +:::info Related Documentation +- [PostgreSQL wire protocol](/docs/reference/api/postgres/) +- [QuestDB security](/docs/operations/security/) +- [PgBouncer documentation](https://www.pgbouncer.org/config.html) +- [Docker deployment](/docs/operations/deployment/docker/) +::: diff --git a/documentation/playbook/sql/advanced/array-from-string.md b/documentation/playbook/sql/advanced/array-from-string.md new file mode 100644 index 000000000..cd36367dd --- /dev/null +++ b/documentation/playbook/sql/advanced/array-from-string.md @@ -0,0 +1,361 @@ +--- +title: Create Arrays from String Literals +sidebar_label: Array from string literal +description: Cast string literals to array types for use in functions requiring array parameters +--- + +Create array values from string literals for use with functions that accept array parameters. While QuestDB doesn't have native array literals, you can cast string representations to array types like `double[]` or `int[]`. + +## Problem: Functions Requiring Array Parameters + +Some QuestDB functions accept array parameters: + +```sql +-- Hypothetical function signature +percentile_cont(values double[], percentiles double[]) +``` + +But you can't write arrays directly in SQL: + +```sql +-- This doesn't work (not valid SQL) +SELECT func([1.0, 2.0, 3.0]); +``` + +## Solution: Cast String to Array + +Use CAST to convert string literals to array types: + +```questdb-sql demo title="Cast string to double array" +SELECT cast('[1.0, 2.0, 3.0, 4.0, 5.0]' AS double[]) as numbers; +``` + +**Result:** +``` +numbers: [1.0, 2.0, 3.0, 4.0, 5.0] +``` + +## Array Type Casting + +### Double Array + +```sql +SELECT cast('[1.5, 2.7, 3.2]' AS double[]) as decimals; +``` + +### Integer Array + +```sql +SELECT cast('[10, 20, 30]' AS int[]) as integers; +``` + +### Long Array + +```sql +SELECT cast('[1000000, 2000000, 3000000]' AS long[]) as big_numbers; +``` + +## Using Arrays with Functions + +### Custom Percentiles + +```sql +-- Calculate multiple 
percentiles at once (if function supports) +SELECT + symbol, + percentiles(price, cast('[0.25, 0.50, 0.75, 0.95, 0.99]' AS double[])) as percentile_values +FROM trades +WHERE timestamp >= dateadd('d', -1, now()) +GROUP BY symbol; +``` + +Note: QuestDB's built-in `percentile()` function takes a single percentile value, not an array. This example shows the pattern for custom or future array-accepting functions. + +### Array Aggregation (Example Pattern) + +```sql +-- Conceptual: Aggregate values into array +WITH data AS ( + SELECT + timestamp_floor('h', timestamp) as hour, + collect_list(price) as prices -- Hypothetical array aggregation + FROM trades + SAMPLE BY 1h +) +SELECT + hour, + array_avg(prices) as avg_price, + array_median(prices) as median_price +FROM data; +``` + +## Multidimensional Arrays + +### 2D Array (Matrix) + +```sql +SELECT cast('[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]' AS double[][]) as matrix; +``` + +**Result:** +``` +matrix: [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]] +``` + +### Use Case: Time Series Matrix + +```sql +-- Store multiple related time series as matrix +WITH timeseries_matrix AS ( + SELECT cast( + '[[100.0, 101.0, 102.0], + [200.0, 201.0, 202.0], + [300.0, 301.0, 302.0]]' + AS double[][] + ) as series_data +) +SELECT + series_data[0] as series_1, -- [100.0, 101.0, 102.0] + series_data[1] as series_2, -- [200.0, 201.0, 202.0] + series_data[2] as series_3 -- [300.0, 301.0, 302.0] +FROM timeseries_matrix; +``` + +## Array Indexing + +Access array elements by index (0-based): + +```sql +WITH arr AS ( + SELECT cast('[10.5, 20.7, 30.2, 40.9]' AS double[]) as values +) +SELECT + values[0] as first, -- 10.5 + values[1] as second, -- 20.7 + values[3] as fourth -- 40.9 +FROM arr; +``` + +## Dynamic Array Construction + +Build arrays from query results: + +### Using String Aggregation + +```sql +-- Aggregate values into comma-separated string, then cast +WITH aggregated AS ( + SELECT + symbol, + string_agg(cast(price AS STRING), ',') as price_string 
+ FROM ( + SELECT * FROM trades + WHERE symbol = 'BTC-USDT' + LIMIT 10 + ) + GROUP BY symbol +) +SELECT + symbol, + cast('[' || price_string || ']' AS double[]) as price_array +FROM aggregated; +``` + +## Array Literals in WHERE Clauses + +Check if value exists in array: + +```sql +-- Check if symbol is in list +WITH valid_symbols AS ( + SELECT cast('["BTC-USDT", "ETH-USDT", "SOL-USDT"]' AS string[]) as symbols +) +SELECT * +FROM trades +WHERE symbol IN (SELECT unnest(symbols) FROM valid_symbols) +LIMIT 100; +``` + +Note: QuestDB's `IN` clause with arrays may have limited support. Use standard `IN (value1, value2, ...)` syntax where possible. + +## Array Length + +Get number of elements: + +```sql +SELECT + cast('[1, 2, 3, 4, 5]' AS int[]) as arr, + array_length(cast('[1, 2, 3, 4, 5]' AS int[]), 1) as length; -- Returns 5 +``` + +## Common Patterns + +### Percentile Thresholds + +```sql +-- Define alert thresholds as array +WITH thresholds AS ( + SELECT cast('[50.0, 100.0, 500.0, 1000.0]' AS double[]) as latency_thresholds +), +counts AS ( + SELECT + count(CASE WHEN latency_ms < thresholds[0] THEN 1 END) as under_50ms, + count(CASE WHEN latency_ms >= thresholds[0] AND latency_ms < thresholds[1] THEN 1 END) as ms_50_100, + count(CASE WHEN latency_ms >= thresholds[1] AND latency_ms < thresholds[2] THEN 1 END) as ms_100_500, + count(CASE WHEN latency_ms >= thresholds[2] THEN 1 END) as over_500ms + FROM api_requests, thresholds + WHERE timestamp >= dateadd('h', -1, now()) +) +SELECT * FROM counts; +``` + +### Price Levels + +```sql +-- Support/resistance levels +WITH levels AS ( + SELECT cast('[60000.0, 61000.0, 62000.0, 63000.0]' AS double[]) as price_levels +) +SELECT + timestamp, + price, + CASE + WHEN price < price_levels[0] THEN 'Below Support 1' + WHEN price >= price_levels[0] AND price < price_levels[1] THEN 'Support 1-2' + WHEN price >= price_levels[1] AND price < price_levels[2] THEN 'Support 2-3' + WHEN price >= price_levels[2] AND price < price_levels[3] 
THEN 'Resistance 1-2' + ELSE 'Above Resistance 2' + END as price_zone +FROM trades, levels +WHERE symbol = 'BTC-USDT' + AND timestamp >= dateadd('h', -1, now()); +``` + +## Limitations and Workarounds + +### No Array Literals + +**Problem:** Can't write arrays directly in standard SQL syntax + +**Workaround:** Use CAST with string literals as shown above + +### Limited Array Functions + +**Problem:** QuestDB has limited built-in array manipulation functions + +**Workaround:** Use CASE expressions and indexing to process arrays + +### Array Comparison + +**Problem:** Can't directly compare arrays with `=` operator + +**Workaround:** Compare element-by-element or convert to strings + +```sql +SELECT + CASE + WHEN cast('[1, 2, 3]' AS int[])[0] = cast('[1, 2, 4]' AS int[])[0] + AND cast('[1, 2, 3]' AS int[])[1] = cast('[1, 2, 4]' AS int[])[1] + THEN 'First two elements match' + ELSE 'Different' + END as comparison; +``` + +## Alternative: Use Individual Columns + +For many use cases, separate columns are cleaner than arrays: + +```sql +-- Instead of: [p50, p90, p95, p99] +SELECT + percentile(price, 50) as p50, + percentile(price, 90) as p90, + percentile(price, 95) as p95, + percentile(price, 99) as p99 +FROM trades; +``` + +This avoids array casting and is often more readable. + +## Type Coercion Rules + +```sql +-- String to double[] +cast('[1, 2, 3]' AS double[]) -- [1.0, 2.0, 3.0] + +-- String to int[] +cast('[1.5, 2.5, 3.5]' AS int[]) -- [1, 2, 3] (truncates decimals) + +-- String to long[] +cast('[1000000, 2000000]' AS long[]) -- [1000000, 2000000] +``` + +## JSON Alternative + +For complex nested structures, consider using STRING columns with JSON: + +```sql +-- Store as JSON string +SELECT '{"prices": [100.0, 101.0, 102.0], "volumes": [10, 20, 30]}' as data; + +-- Parse with custom logic or external tools +``` + +QuestDB focuses on time-series performance, so complex nested structures are often better handled in application code. 
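If you do store JSON-encoded arrays in STRING columns, the parsing side is straightforward in application code. A minimal Python sketch — the row value is the literal from the example above, and the field names come from that example, not from any QuestDB API:

```python
import json

# JSON string as returned by the query above
row = '{"prices": [100.0, 101.0, 102.0], "volumes": [10, 20, 30]}'

data = json.loads(row)
avg_price = sum(data["prices"]) / len(data["prices"])   # 101.0
total_volume = sum(data["volumes"])                     # 60
```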
+ +## Practical Example: Multiple Symbol Filter + +```sql +-- Define symbols to track +WITH watched_symbols AS ( + SELECT cast('["BTC-USDT", "ETH-USDT", "SOL-USDT", "AVAX-USDT"]' AS string[]) as symbols +) +SELECT + trades.* +FROM trades +CROSS JOIN watched_symbols +WHERE symbol IN ( + -- Expand array to rows + SELECT symbols[0] FROM watched_symbols + UNION ALL SELECT symbols[1] FROM watched_symbols + UNION ALL SELECT symbols[2] FROM watched_symbols + UNION ALL SELECT symbols[3] FROM watched_symbols +) + AND timestamp >= dateadd('h', -1, now()) +LIMIT 100; +``` + +**Simpler alternative:** +```sql +SELECT * FROM trades +WHERE symbol IN ('BTC-USDT', 'ETH-USDT', 'SOL-USDT', 'AVAX-USDT') +LIMIT 100; +``` + +The array approach is useful when the list is dynamically generated or reused across queries. + +:::tip When to Use Arrays +Use arrays when: +- Working with functions that require array parameters +- Storing fixed-size sequences (coordinates, RGB values, etc.) +- Defining reusable threshold or configuration arrays +- Interfacing with external systems expecting array format + +Avoid arrays when: +- Simple column-based representation works fine +- You need frequent element-wise operations (use separate columns instead) +- Data structure is deeply nested (consider JSON or denormalization) +::: + +:::warning Array Support Limited +QuestDB's array support is focused on specific use cases. For extensive array manipulation: +1. Prefer separate columns for better query performance +2. Handle complex array logic in application code +3. 
Consider alternative databases if arrays are core to your data model +::: + +:::info Related Documentation +- [CAST function](/docs/reference/function/cast/) +- [Data types](/docs/reference/sql/datatypes/) +- [String functions](/docs/reference/function/text/) +::: diff --git a/documentation/playbook/sql/advanced/conditional-aggregates.md b/documentation/playbook/sql/advanced/conditional-aggregates.md new file mode 100644 index 000000000..497f6d6f0 --- /dev/null +++ b/documentation/playbook/sql/advanced/conditional-aggregates.md @@ -0,0 +1,353 @@ +--- +title: Multiple Conditional Aggregates +sidebar_label: Conditional aggregates +description: Calculate multiple conditional aggregates in a single query using CASE expressions for efficient data analysis +--- + +Calculate multiple aggregates with different conditions in a single pass through the data using CASE expressions. This pattern is more efficient than running separate queries and essential for creating summary reports with multiple metrics. + +## Problem: Multiple Metrics with Different Filters + +You need to calculate various metrics from the same dataset with different conditions: + +- Count of buy orders +- Count of sell orders +- Average buy price +- Average sell price +- Total volume for large trades (> 1.0) +- Total volume for small trades (≤ 1.0) + +Running separate queries is inefficient: + +```sql +-- Inefficient: 6 separate scans +SELECT count(*) FROM trades WHERE side = 'buy'; +SELECT count(*) FROM trades WHERE side = 'sell'; +SELECT avg(price) FROM trades WHERE side = 'buy'; +-- ... 
3 more queries +``` + +## Solution: CASE Within Aggregate Functions + +Use CASE expressions inside aggregates to calculate all metrics in one query: + +```questdb-sql demo title="Multiple conditional aggregates in single query" +SELECT + symbol, + count(CASE WHEN side = 'buy' THEN 1 END) as buy_count, + count(CASE WHEN side = 'sell' THEN 1 END) as sell_count, + avg(CASE WHEN side = 'buy' THEN price END) as avg_buy_price, + avg(CASE WHEN side = 'sell' THEN price END) as avg_sell_price, + sum(CASE WHEN amount > 1.0 THEN amount END) as large_trade_volume, + sum(CASE WHEN amount <= 1.0 THEN amount END) as small_trade_volume, + sum(amount) as total_volume +FROM trades +WHERE timestamp >= dateadd('d', -1, now()) + AND symbol IN ('BTC-USDT', 'ETH-USDT') +GROUP BY symbol; +``` + +**Results:** + +| symbol | buy_count | sell_count | avg_buy_price | avg_sell_price | large_trade_volume | small_trade_volume | total_volume | +|--------|-----------|------------|---------------|----------------|--------------------|--------------------|-------------- | +| BTC-USDT | 12,345 | 11,234 | 61,250.50 | 61,248.75 | 456.78 | 123.45 | 580.23 | +| ETH-USDT | 23,456 | 22,345 | 3,456.25 | 3,455.50 | 678.90 | 234.56 | 913.46 | + +## How It Works + +### CASE Returns NULL for Non-Matching Rows + +```sql +count(CASE WHEN side = 'buy' THEN 1 END) +``` + +- When `side = 'buy'`: CASE returns 1 +- When `side != 'buy'`: CASE returns NULL (implicit ELSE NULL) +- `count()` only counts non-NULL values +- Result: counts only rows where side is 'buy' + +### Aggregate Functions Ignore NULL + +```sql +avg(CASE WHEN side = 'buy' THEN price END) +``` + +- `avg()` calculates average of non-NULL values only +- Only includes price when side is 'buy' +- Automatically skips all other rows + +## Time-Series Summary Report + +Create comprehensive time-series summaries with multiple conditions: + +```questdb-sql demo title="Hourly trading summary with multiple metrics" +SELECT + timestamp_floor('h', timestamp) as hour, 
+ symbol, + count(*) as total_trades, + count(CASE WHEN side = 'buy' THEN 1 END) as buy_trades, + count(CASE WHEN side = 'sell' THEN 1 END) as sell_trades, + sum(amount) as total_volume, + sum(CASE WHEN side = 'buy' THEN amount END) as buy_volume, + sum(CASE WHEN side = 'sell' THEN amount END) as sell_volume, + min(price) as low, + max(price) as high, + first(price) as open, + last(price) as close, + avg(CASE WHEN amount > 1.0 THEN price END) as avg_large_trade_price, + count(CASE WHEN amount > 10.0 THEN 1 END) as whale_trades +FROM trades +WHERE timestamp >= dateadd('d', -7, now()) + AND symbol = 'BTC-USDT' +GROUP BY hour, symbol +ORDER BY hour DESC +LIMIT 24; +``` + +**Results:** + +| hour | symbol | total_trades | buy_trades | sell_trades | total_volume | buy_volume | sell_volume | low | high | open | close | avg_large_trade_price | whale_trades | +|------|--------|--------------|------------|-------------|--------------|------------|-------------|-----|------|------|-------|----------------------|--------------| +| 2025-01-15 23:00 | BTC-USDT | 1,234 | 645 | 589 | 45.67 | 23.45 | 22.22 | 61,200 | 61,350 | 61,250 | 61,300 | 61,275 | 12 | + +## Conditional Aggregates with SAMPLE BY + +Combine conditional aggregates with time-series aggregation: + +```questdb-sql demo title="5-minute candles with buy/sell split" +SELECT + timestamp, + symbol, + first(price) as open, + last(price) as close, + min(price) as low, + max(price) as high, + sum(amount) as total_volume, + sum(CASE WHEN side = 'buy' THEN amount ELSE 0 END) as buy_volume, + sum(CASE WHEN side = 'sell' THEN amount ELSE 0 END) as sell_volume, + (sum(CASE WHEN side = 'buy' THEN amount ELSE 0 END) / + sum(amount) * 100) as buy_percentage +FROM trades +WHERE symbol = 'BTC-USDT' + AND timestamp >= dateadd('h', -6, now()) +SAMPLE BY 5m; +``` + +This creates 5-minute OHLCV candles with buy/sell volume breakdown. 
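The NULL-skipping behavior that makes conditional aggregates work (see *How It Works* above) can be mirrored in plain Python: a `CASE` without `ELSE` yields NULL for non-matching rows, and `count()`/`avg()` only see the non-NULL values. A minimal sketch with made-up rows:

```python
trades = [
    {"side": "buy", "price": 100.0},
    {"side": "sell", "price": 101.0},
    {"side": "buy", "price": 102.0},
    {"side": "sell", "price": 99.0},
]

# CASE WHEN side = 'buy' THEN price END -> price or None (implicit ELSE NULL)
case_values = [t["price"] if t["side"] == "buy" else None for t in trades]

# Aggregates skip NULLs: count()/avg() only see the non-None values
non_null = [v for v in case_values if v is not None]
buy_count = len(non_null)                      # count(CASE ...)
avg_buy_price = sum(non_null) / len(non_null)  # avg(CASE ...)
```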
+ +## Percentage Calculations + +Calculate percentages within the same query: + +```questdb-sql demo title="Trade distribution by size category" +SELECT + symbol, + count(*) as total_trades, + count(CASE WHEN amount <= 0.1 THEN 1 END) as micro_trades, + count(CASE WHEN amount > 0.1 AND amount <= 1.0 THEN 1 END) as small_trades, + count(CASE WHEN amount > 1.0 AND amount <= 10.0 THEN 1 END) as medium_trades, + count(CASE WHEN amount > 10.0 THEN 1 END) as large_trades, + (count(CASE WHEN amount <= 0.1 THEN 1 END) * 100.0 / count(*)) as micro_pct, + (count(CASE WHEN amount > 0.1 AND amount <= 1.0 THEN 1 END) * 100.0 / count(*)) as small_pct, + (count(CASE WHEN amount > 1.0 AND amount <= 10.0 THEN 1 END) * 100.0 / count(*)) as medium_pct, + (count(CASE WHEN amount > 10.0 THEN 1 END) * 100.0 / count(*)) as large_pct +FROM trades +WHERE timestamp >= dateadd('d', -1, now()) +GROUP BY symbol; +``` + +**Results:** + +| symbol | total_trades | micro_trades | small_trades | medium_trades | large_trades | micro_pct | small_pct | medium_pct | large_pct | +|--------|--------------|--------------|--------------|---------------|--------------|-----------|-----------|------------|-----------| +| BTC-USDT | 50,000 | 35,000 | 10,000 | 4,000 | 1,000 | 70.0 | 20.0 | 8.0 | 2.0 | + +## Ratio and Comparison Metrics + +Calculate buy/sell ratios and imbalances: + +```questdb-sql demo title="Order flow imbalance metrics" +SELECT + timestamp, + symbol, + sum(CASE WHEN side = 'buy' THEN amount END) as buy_volume, + sum(CASE WHEN side = 'sell' THEN amount END) as sell_volume, + (sum(CASE WHEN side = 'buy' THEN amount END) - + sum(CASE WHEN side = 'sell' THEN amount END)) as volume_imbalance, + (sum(CASE WHEN side = 'buy' THEN amount END) / + NULLIF(sum(CASE WHEN side = 'sell' THEN amount END), 0)) as buy_sell_ratio, + count(CASE WHEN side = 'buy' THEN 1 END) * 1.0 / + count(CASE WHEN side = 'sell' THEN 1 END) as trade_count_ratio +FROM trades +WHERE symbol = 'BTC-USDT' + AND timestamp >= 
dateadd('h', -1, now()) +SAMPLE BY 5m; +``` + +**Key points:** +- `NULLIF(denominator, 0)` prevents division by zero +- Ratio > 1.0 indicates buying pressure +- Ratio < 1.0 indicates selling pressure + +## Multiple Symbols Comparison + +Compare metrics across different assets: + +```questdb-sql demo title="Cross-asset summary statistics" +SELECT + timestamp_floor('h', timestamp) as hour, + sum(CASE WHEN symbol = 'BTC-USDT' THEN amount END) as btc_volume, + sum(CASE WHEN symbol = 'ETH-USDT' THEN amount END) as eth_volume, + sum(CASE WHEN symbol = 'SOL-USDT' THEN amount END) as sol_volume, + avg(CASE WHEN symbol = 'BTC-USDT' THEN price END) as btc_avg_price, + avg(CASE WHEN symbol = 'ETH-USDT' THEN price END) as eth_avg_price, + avg(CASE WHEN symbol = 'SOL-USDT' THEN price END) as sol_avg_price, + count(CASE WHEN symbol = 'BTC-USDT' THEN 1 END) as btc_trades, + count(CASE WHEN symbol = 'ETH-USDT' THEN 1 END) as eth_trades, + count(CASE WHEN symbol = 'SOL-USDT' THEN 1 END) as sol_trades +FROM trades +WHERE timestamp >= dateadd('d', -1, now()) + AND symbol IN ('BTC-USDT', 'ETH-USDT', 'SOL-USDT') +GROUP BY hour +ORDER BY hour DESC; +``` + +This creates a wide-format summary with one column per symbol. + +## SUM vs COUNT for Conditional Counting + +Two equivalent patterns for conditional counting: + +```sql +-- Method 1: COUNT with CASE (recommended) +count(CASE WHEN condition THEN 1 END) + +-- Method 2: SUM with CASE +sum(CASE WHEN condition THEN 1 ELSE 0 END) +``` + +**Recommendation:** Use `count(CASE WHEN ... 
THEN 1 END)` because:
+- More semantically clear (counting occurrences)
+- Slightly more efficient (no need to sum zeros)
+- Standard SQL pattern
+
+## Nested Conditions
+
+Handle multiple condition levels. Note that window functions such as `avg(price) OVER (...)` cannot be nested inside aggregate functions, so the per-symbol average is computed in a CTE and joined back:
+
+```questdb-sql demo title="Complex conditional aggregates"
+WITH symbol_avg AS (
+    SELECT symbol, avg(price) as avg_price
+    FROM trades
+    WHERE timestamp >= dateadd('d', -1, now())
+    GROUP BY symbol
+)
+SELECT
+    trades.symbol,
+    -- Entries on the favorable side of the daily average
+    count(CASE
+        WHEN side = 'buy' AND price < avg_price THEN 1
+    END) as good_buy_entries,
+    count(CASE
+        WHEN side = 'sell' AND price > avg_price THEN 1
+    END) as good_sell_entries,
+    -- Volume-weighted metrics
+    sum(CASE
+        WHEN side = 'buy' AND amount > 1.0 THEN price * amount
+    END) / NULLIF(sum(CASE
+        WHEN side = 'buy' AND amount > 1.0 THEN amount
+    END), 0) as vwap_large_buys,
+    -- Time-based conditions
+    count(CASE
+        WHEN hour(timestamp) >= 9 AND hour(timestamp) < 16 THEN 1
+    END) as market_hours_trades,
+    count(CASE
+        WHEN hour(timestamp) < 9 OR hour(timestamp) >= 16 THEN 1
+    END) as after_hours_trades
+FROM trades
+JOIN symbol_avg ON trades.symbol = symbol_avg.symbol
+WHERE timestamp >= dateadd('d', -1, now())
+GROUP BY trades.symbol;
+```
+
+## Performance Considerations
+
+**Single scan vs multiple queries:**
+
+```sql
+-- Efficient: One scan, multiple aggregates
+SELECT
+    count(CASE WHEN side = 'buy' THEN 1 END),
+    count(CASE WHEN side = 'sell' THEN 1 END)
+FROM trades;
+
+-- Inefficient: Two scans
+SELECT count(*) FROM trades WHERE side = 'buy';
+SELECT count(*) FROM trades WHERE side = 'sell';
+```
+
+**Index usage:**
+
+```sql
+-- Filter first, then apply conditional aggregates
+SELECT
+    count(CASE WHEN side = 'buy' THEN 1 END) as buy_count,
+    count(CASE WHEN side = 'sell' THEN 1 END) as sell_count
+FROM trades
+WHERE timestamp >= dateadd('d', -1, now()) -- Prunes partitions via the designated timestamp
+  AND symbol = 'BTC-USDT'; -- Uses the symbol index if the column is an indexed SYMBOL
+```
+
+**Avoid redundant conditions:**
+
+```sql
+-- Good: Simple CASE
+count(CASE WHEN amount > 1.0 THEN 1 END)
+
+-- Wasteful: Unnecessary ELSE
+count(CASE WHEN amount > 1.0 THEN 1 ELSE NULL END) -- NULL is
implicit +``` + +## Common Patterns + +**Status distribution:** +```sql +SELECT + count(CASE WHEN status = 'active' THEN 1 END) as active, + count(CASE WHEN status = 'pending' THEN 1 END) as pending, + count(CASE WHEN status = 'failed' THEN 1 END) as failed +FROM orders; +``` + +**Success rate:** +```sql +SELECT + (count(CASE WHEN status = 'success' THEN 1 END) * 100.0 / count(*)) as success_rate, + (count(CASE WHEN status = 'error' THEN 1 END) * 100.0 / count(*)) as error_rate +FROM api_requests; +``` + +**Size buckets:** +```sql +SELECT + sum(CASE WHEN amount < 1 THEN amount END) as small_volume, + sum(CASE WHEN amount >= 1 AND amount < 10 THEN amount END) as medium_volume, + sum(CASE WHEN amount >= 10 THEN amount END) as large_volume +FROM trades; +``` + +:::tip When to Use This Pattern +Use conditional aggregates when you need: +- Multiple metrics with different filters from the same dataset +- Summary reports with various breakdowns +- Pivot-like transformations (conditions as columns) +- Performance optimization (single scan vs multiple queries) +::: + +:::warning NULL Handling +Remember that CASE without ELSE returns NULL. This is what makes the pattern work: +- `count()` ignores NULLs (only counts matching rows) +- `sum()`, `avg()`, etc. 
ignore NULLs (only aggregate matching values) +- Never use `count(*)` with CASE - always use `count(expression)` +::: + +:::info Related Documentation +- [CASE expressions](/docs/reference/sql/case/) +- [Aggregate functions](/docs/reference/function/aggregation/) +- [count()](/docs/reference/function/aggregation/#count) +- [SAMPLE BY](/docs/reference/sql/select/#sample-by) +::: diff --git a/documentation/playbook/sql/advanced/consistent-histogram-buckets.md b/documentation/playbook/sql/advanced/consistent-histogram-buckets.md new file mode 100644 index 000000000..a5cdff6ba --- /dev/null +++ b/documentation/playbook/sql/advanced/consistent-histogram-buckets.md @@ -0,0 +1,424 @@ +--- +title: Consistent Histogram Buckets +sidebar_label: Histogram buckets +description: Generate histogram data with fixed bucket boundaries for consistent time-series distribution analysis +--- + +Create histograms with consistent bucket boundaries across different time periods. This ensures that distributions are comparable over time, essential for monitoring metric distributions, latency percentiles, and value ranges in dashboards. + +## Problem: Inconsistent Histogram Buckets + +You want to track the distribution of trade sizes over time: + +**Naive approach (inconsistent buckets):** +```sql +SELECT + CASE + WHEN amount < 1.0 THEN 'small' + WHEN amount < 10.0 THEN 'medium' + ELSE 'large' + END as bucket, + count(*) as count +FROM trades +GROUP BY bucket; +``` + +This works for a single query, but comparing histograms across different time periods or symbols becomes difficult when bucket boundaries aren't precisely defined. 
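To see why fixed boundaries matter, here is a small Python sketch with made-up trade sizes: bucketing two periods with the same fixed width produces identical bucket keys, so the histograms line up row-for-row and can be compared directly.

```python
from collections import Counter

def histogram(amounts, width=0.5):
    # Same fixed-width bucketing as the SQL pattern below: truncate, then scale back
    return Counter(int(a / width) * width for a in amounts)

monday = histogram([0.2, 0.4, 0.7, 1.3])
tuesday = histogram([0.1, 0.6, 1.4, 1.4])

# Identical keys in both periods -> a period-over-period diff is meaningful
delta = {b: tuesday.get(b, 0) - monday.get(b, 0) for b in set(monday) | set(tuesday)}
```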
+ +## Solution: Fixed Numeric Buckets + +Define consistent bucket boundaries using integer division: + +```questdb-sql demo title="Histogram with fixed 0.5 BTC buckets" +SELECT + (cast(amount / 0.5 AS INT) * 0.5) as bucket_start, + ((cast(amount / 0.5 AS INT) + 1) * 0.5) as bucket_end, + count(*) as count +FROM trades +WHERE symbol = 'BTC-USDT' + AND timestamp >= dateadd('d', -1, now()) +GROUP BY bucket_start, bucket_end +ORDER BY bucket_start; +``` + +**Results:** + +| bucket_start | bucket_end | count | +|--------------|------------|-------| +| 0.0 | 0.5 | 1,234 | +| 0.5 | 1.0 | 890 | +| 1.0 | 1.5 | 456 | +| 1.5 | 2.0 | 234 | +| 2.0 | 2.5 | 123 | + +## How It Works + +### Bucket Calculation + +```sql +cast(amount / 0.5 AS INT) * 0.5 +``` + +**Step by step:** +1. `amount / 0.5`: Divide by bucket width (amount 1.3 → 2.6) +2. `cast(... AS INT)`: Truncate to integer (2.6 → 2) +3. `* 0.5`: Multiply back by bucket width (2 → 1.0) + +**Examples:** +- amount = 0.3 → 0.3/0.5=0.6 → INT(0.6)=0 → 0*0.5=0.0 +- amount = 1.3 → 1.3/0.5=2.6 → INT(2.6)=2 → 2*0.5=1.0 +- amount = 2.7 → 2.7/0.5=5.4 → INT(5.4)=5 → 5*0.5=2.5 + +### Bucket End + +```sql +(cast(amount / 0.5 AS INT) + 1) * 0.5 +``` + +Add 1 before multiplying back to get the upper boundary. 
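The same truncation arithmetic, written out as a short Python sketch (hypothetical helper names; `int()` truncates toward zero exactly like `cast(... AS INT)`):

```python
def bucket_start(amount, width=0.5):
    # cast(amount / width AS INT) * width -- truncation toward zero.
    # For possibly-negative values, use math.floor instead of int().
    return int(amount / width) * width

def bucket_end(amount, width=0.5):
    # Add 1 before scaling back to get the upper boundary
    return (int(amount / width) + 1) * width
```

For example, `bucket_start(1.3)` gives `1.0` and `bucket_end(1.3)` gives `1.5`, matching the worked examples above.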
+ +## Dynamic Bucket Width + +Use a variable for easy adjustment: + +```questdb-sql demo title="Configurable bucket width" +WITH bucketed AS ( + SELECT + amount, + 0.25 as bucket_width, -- Change this to adjust granularity + (cast(amount / 0.25 AS INT) * 0.25) as bucket_start + FROM trades + WHERE symbol = 'BTC-USDT' + AND timestamp >= dateadd('d', -1, now()) +) +SELECT + bucket_start, + (bucket_start + bucket_width) as bucket_end, + count(*) as count, + sum(amount) as total_volume +FROM bucketed +GROUP BY bucket_start, bucket_width +ORDER BY bucket_start; +``` + +**Bucket widths by use case:** +- Latency (milliseconds): 10ms, 50ms, 100ms +- Trade sizes: 0.1, 0.5, 1.0 +- Prices: 100, 500, 1000 +- Temperatures: 1°C, 5°C, 10°C + +## Time-Series Histogram + +Track distribution changes over time: + +```questdb-sql demo title="Hourly histogram evolution" +SELECT + timestamp_floor('h', timestamp) as hour, + (cast(amount / 0.5 AS INT) * 0.5) as bucket, + count(*) as count +FROM trades +WHERE symbol = 'BTC-USDT' + AND timestamp >= dateadd('d', -7, now()) +GROUP BY hour, bucket +ORDER BY hour DESC, bucket; +``` + +**Results:** + +| hour | bucket | count | +|------|--------|-------| +| 2025-01-15 23:00 | 0.0 | 345 | +| 2025-01-15 23:00 | 0.5 | 234 | +| 2025-01-15 23:00 | 1.0 | 123 | +| 2025-01-15 22:00 | 0.0 | 312 | +| 2025-01-15 22:00 | 0.5 | 245 | + +This shows how the distribution shifts over time. + +## Grafana Heatmap Visualization + +Format for Grafana heatmap: + +```questdb-sql demo title="Heatmap data for Grafana" +SELECT + timestamp_floor('5m', timestamp) as time, + (cast(latency_ms / 10 AS INT) * 10) as bucket, + count(*) as count +FROM api_requests +WHERE $__timeFilter(timestamp) +GROUP BY time, bucket +ORDER BY time, bucket; +``` + +**Grafana configuration:** +- Visualization: Heatmap +- X-axis: time +- Y-axis: bucket (latency range) +- Cell value: count + +Creates a heatmap showing latency distribution evolution over time. 
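Grafana heatmaps also expect every (time, bucket) cell to exist, but the query above only returns buckets that have data. A common client-side step is to zero-fill the grid over a fixed bucket range — a minimal Python sketch with made-up rows:

```python
# (time, bucket_start, count) rows as returned by the heatmap query
rows = [("10:00", 0, 5), ("10:00", 10, 3), ("10:05", 10, 7)]

times = sorted({t for t, _, _ in rows})
buckets = range(0, 30, 10)  # fixed bucket grid: 0, 10, 20
counts = {(t, b): c for t, b, c in rows}

# Dense grid: one row per time, one zero-filled cell per bucket
grid = {t: [counts.get((t, b), 0) for b in buckets] for t in times}
```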
+
+## Logarithmic Buckets
+
+For data spanning multiple orders of magnitude:
+
+```questdb-sql demo title="Logarithmic buckets for wide value ranges"
+SELECT
+    POWER(10, floor(log10(amount))) as bucket_start,
+    POWER(10, floor(log10(amount)) + 1) as bucket_end,
+    count(*) as count
+FROM trades
+WHERE symbol = 'BTC-USDT'
+    AND amount > 0
+    AND timestamp >= dateadd('d', -1, now())
+GROUP BY bucket_start, bucket_end
+ORDER BY bucket_start;
+```
+
+Use `floor()` here rather than `cast(... AS INT)`: casting truncates toward zero, so amounts below 1.0 — whose `log10` is negative — would be rounded up into the wrong bucket.
+
+**Results:**
+
+| bucket_start | bucket_end | count |
+|--------------|------------|-------|
+| 0.01 | 0.1 | 1,234 |
+| 0.1 | 1.0 | 4,567 |
+| 1.0 | 10.0 | 2,345 |
+| 10.0 | 100.0 | 123 |
+
+**Use cases:**
+- Response times (1ms to 10s)
+- File sizes (1KB to 1GB)
+- Memory usage (1MB to 10GB)
+
+## Percentile Buckets
+
+Create buckets representing percentile ranges:
+
+```questdb-sql demo title="Percentile-based buckets"
+WITH percentiles AS (
+    SELECT
+        percentile(price, 10) as p10,
+        percentile(price, 25) as p25,
+        percentile(price, 50) as p50,
+        percentile(price, 75) as p75,
+        percentile(price, 90) as p90
+    FROM trades
+    WHERE symbol = 'BTC-USDT'
+        AND timestamp >= dateadd('d', -30, now())
+)
+SELECT
+    CASE
+        WHEN price < p10 THEN '< P10'
+        WHEN price < p25 THEN 'P10-P25'
+        WHEN price < p50 THEN 'P25-P50'
+        WHEN price < p75 THEN 'P50-P75'
+        WHEN price < p90 THEN 'P75-P90'
+        ELSE '> P90'
+    END as percentile_bucket,
+    count(*) as count,
+    (count(*) * 100.0 / sum(count(*)) OVER ()) as percentage
+FROM trades, percentiles
+WHERE symbol = 'BTC-USDT'
+    AND timestamp >= dateadd('d', -1, now())
+GROUP BY percentile_bucket, p10, p25, p50, p75, p90
+ORDER BY
+    CASE percentile_bucket
+        WHEN '< P10' THEN 1
+        WHEN 'P10-P25' THEN 2
+        WHEN 'P25-P50' THEN 3
+        WHEN 'P50-P75' THEN 4
+        WHEN 'P75-P90' THEN 5
+        ELSE 6
+    END;
+```
+
+This shows what percentage of recent trades fall into each historical percentile range.
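The CASE ladder above is an ordered range lookup: each branch's upper bound is checked in turn and the first match wins. The same logic in Python, using hypothetical threshold values in place of the computed percentiles:

```python
# (label, upper_bound) pairs -- hypothetical stand-ins for p10..p90
thresholds = [
    ("< P10", 60500.0),
    ("P10-P25", 61000.0),
    ("P25-P50", 61500.0),
    ("P50-P75", 62000.0),
    ("P75-P90", 62500.0),
]

def percentile_bucket(price):
    # First upper bound exceeding the price wins, like CASE branch order
    for label, upper in thresholds:
        if price < upper:
            return label
    return "> P90"
```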
+ +## Cumulative Distribution + +Calculate cumulative counts for CDF visualization: + +```questdb-sql demo title="Cumulative distribution function" +WITH histogram AS ( + SELECT + (cast(amount / 0.5 AS INT) * 0.5) as bucket, + count(*) as count + FROM trades + WHERE symbol = 'BTC-USDT' + AND timestamp >= dateadd('d', -1, now()) + GROUP BY bucket +) +SELECT + bucket, + count, + sum(count) OVER (ORDER BY bucket) as cumulative_count, + (sum(count) OVER (ORDER BY bucket) * 100.0 / + sum(count) OVER ()) as cumulative_percentage +FROM histogram +ORDER BY bucket; +``` + +**Results:** + +| bucket | count | cumulative_count | cumulative_percentage | +|--------|-------|------------------|----------------------| +| 0.0 | 1,234 | 1,234 | 40.2% | +| 0.5 | 890 | 2,124 | 69.1% | +| 1.0 | 456 | 2,580 | 84.0% | +| 1.5 | 234 | 2,814 | 91.6% | + +Shows that 84% of trades are 1.5 BTC or less. + +## Multi-Dimensional Histogram + +Bucket by two dimensions: + +```questdb-sql demo title="2D histogram: amount vs price range" +SELECT + (cast(amount / 0.5 AS INT) * 0.5) as amount_bucket, + (cast(price / 1000 AS INT) * 1000) as price_bucket, + count(*) as count +FROM trades +WHERE symbol = 'BTC-USDT' + AND timestamp >= dateadd('d', -1, now()) +GROUP BY amount_bucket, price_bucket +HAVING count > 10 -- Filter sparse buckets +ORDER BY amount_bucket, price_bucket; +``` + +**Results:** + +| amount_bucket | price_bucket | count | +|---------------|--------------|-------| +| 0.0 | 61000 | 234 | +| 0.0 | 62000 | 345 | +| 0.5 | 61000 | 123 | +| 0.5 | 62000 | 156 | + +## Adaptive Bucketing + +Adjust bucket width based on data density: + +```questdb-sql demo title="Fine-grained buckets for common ranges" +SELECT + CASE + WHEN amount < 1.0 THEN cast(amount / 0.1 AS INT) * 0.1 -- 0.1 BTC buckets + WHEN amount < 10.0 THEN cast(amount / 1.0 AS INT) * 1.0 -- 1 BTC buckets + ELSE cast(amount / 10.0 AS INT) * 10.0 -- 10 BTC buckets + END as bucket, + count(*) as count +FROM trades +WHERE symbol = 'BTC-USDT' + 
AND timestamp >= dateadd('d', -1, now())
+GROUP BY bucket
+ORDER BY bucket;
+```
+
+Provides more detail in common ranges, broader buckets for rare large trades.
+
+## Comparison Across Symbols
+
+Compare distributions using consistent buckets:
+
+```questdb-sql demo title="Compare trade size distributions"
+SELECT
+    symbol,
+    (cast(amount / 0.5 AS INT) * 0.5) as bucket,
+    count(*) as count,
+    avg(price) as avg_price
+FROM trades
+WHERE symbol IN ('BTC-USDT', 'ETH-USDT')
+    AND timestamp >= dateadd('d', -1, now())
+GROUP BY symbol, bucket
+ORDER BY symbol, bucket;
+```
+
+Shows whether trade size patterns differ between assets.
+
+## Performance Optimization
+
+**Index usage:**
+```sql
+-- Designated timestamp enables partition pruning; INDEX speeds up symbol filters
+CREATE TABLE trades (
+    timestamp TIMESTAMP,
+    symbol SYMBOL INDEX,
+    price DOUBLE,
+    amount DOUBLE
+) TIMESTAMP(timestamp) PARTITION BY DAY;
+```
+
+**Pre-aggregate for dashboards:**
+```sql
+-- Create an hourly histogram summary table
+CREATE TABLE trade_histogram_hourly AS (
+    SELECT
+        timestamp,
+        symbol,
+        (cast(amount / 0.5 AS INT) * 0.5) as bucket,
+        count(*) as count,
+        sum(amount) as total_volume
+    FROM trades
+    SAMPLE BY 1h
+) TIMESTAMP(timestamp) PARTITION BY DAY;
+
+-- Query the summary instead of raw data
+SELECT * FROM trade_histogram_hourly WHERE timestamp >= dateadd('d', -7, now());
+```
+
+**Limit bucket range:**
+```sql
+-- Exclude extreme outliers
+WHERE amount BETWEEN 0.01 AND 100
+```
+
+Prevents single extreme values from creating many empty buckets.
+
+## Common Pitfalls
+
+**Empty buckets not shown:**
+```sql
+-- This only returns buckets with data
+SELECT bucket, count(*) FROM ...
GROUP BY bucket; + +-- To include empty buckets, use generate_series or CROSS JOIN +``` + +**Floating point precision:** +```sql +-- Bad: May have precision issues +cast(amount / 0.1 AS INT) * 0.1 + +-- Better: Use integers where possible +cast(amount * 10 AS INT) / 10.0 +``` + +**Negative values:** +```sql +-- Handle negative values correctly +SIGN(value) * (cast(ABS(value) / bucket_width AS INT) * bucket_width) +``` + +:::tip Choosing Bucket Width +Select bucket width based on: +- **Data range**: 10-50 buckets typically ideal for visualization +- **Precision needed**: Smaller buckets for detailed analysis +- **Query performance**: Fewer buckets = faster aggregation +- **Visual clarity**: Too many buckets create cluttered charts + +Formula: `bucket_width = (max - min) / target_bucket_count` +::: + +:::warning Grafana Heatmap Requirements +Grafana heatmaps require: +1. Time column named `time` +2. Numeric bucket column +3. Count/value column +4. Data sorted by time, then bucket +5. Consistent bucket boundaries across all time periods +::: + +:::info Related Documentation +- [Aggregate functions](/docs/reference/function/aggregation/) +- [CAST function](/docs/reference/function/cast/) +- [percentile()](/docs/reference/function/aggregation/#percentile) +- [Window functions](/docs/reference/sql/select/#window-functions) +::: diff --git a/documentation/playbook/sql/advanced/general-and-sampled-aggregates.md b/documentation/playbook/sql/advanced/general-and-sampled-aggregates.md new file mode 100644 index 000000000..f9e61d0cb --- /dev/null +++ b/documentation/playbook/sql/advanced/general-and-sampled-aggregates.md @@ -0,0 +1,448 @@ +--- +title: General and Sampled Aggregates +sidebar_label: General + sampled aggregates +description: Combine overall statistics with time-bucketed aggregates using CROSS JOIN to show baseline comparisons +--- + +Calculate both overall (baseline) aggregates and time-bucketed aggregates in the same query using CROSS JOIN. 
This pattern is essential for comparing current values against historical averages, showing percentage of total, or displaying baseline metrics alongside time-series data. + +## Problem: Need Both Total and Time-Series Aggregates + +You want to show hourly trade volumes alongside the daily average: + +**Without baseline (incomplete picture):** + +| hour | volume | +|------|--------| +| 00:00 | 45.6 | +| 01:00 | 34.2 | +| 02:00 | 28.9 | + +**With baseline (shows context):** + +| hour | volume | daily_avg | vs_avg | +|------|--------|-----------|--------| +| 00:00 | 45.6 | 38.2 | +19.4% | +| 01:00 | 34.2 | 38.2 | -10.5% | +| 02:00 | 28.9 | 38.2 | -24.3% | + +## Solution: CROSS JOIN with General Aggregates + +Use CROSS JOIN to attach overall statistics to each time-bucketed row: + +```questdb-sql demo title="Hourly volumes with daily baseline" +WITH general AS ( + SELECT + avg(volume_hourly) as daily_avg_volume, + sum(volume_hourly) as daily_total_volume + FROM ( + SELECT sum(amount) as volume_hourly + FROM trades + WHERE timestamp IN today() + AND symbol = 'BTC-USDT' + SAMPLE BY 1h + ) +), +sampled AS ( + SELECT + timestamp, + sum(amount) as volume + FROM trades + WHERE timestamp IN today() + AND symbol = 'BTC-USDT' + SAMPLE BY 1h +) +SELECT + sampled.timestamp, + sampled.volume as hourly_volume, + general.daily_avg_volume, + (sampled.volume - general.daily_avg_volume) as diff_from_avg, + ((sampled.volume - general.daily_avg_volume) / general.daily_avg_volume * 100) as pct_diff, + (sampled.volume / general.daily_total_volume * 100) as pct_of_total +FROM sampled +CROSS JOIN general +ORDER BY sampled.timestamp; +``` + +**Results:** + +| timestamp | hourly_volume | daily_avg_volume | diff_from_avg | pct_diff | pct_of_total | +|-----------|---------------|------------------|---------------|----------|--------------| +| 2025-01-15 00:00 | 45.6 | 38.2 | +7.4 | +19.4% | 4.98% | +| 2025-01-15 01:00 | 34.2 | 38.2 | -4.0 | -10.5% | 3.73% | +| 2025-01-15 02:00 | 28.9 | 38.2 | 
-9.3 | -24.3% | 3.15% |
+
+## How It Works
+
+### Step 1: Calculate General Aggregates
+
+```sql
+WITH general AS (
+    SELECT
+        avg(volume_hourly) as daily_avg_volume,
+        sum(volume_hourly) as daily_total_volume
+    FROM (...)
+)
+```
+
+Creates a CTE with single-row summary statistics (overall average, total, etc.).
+
+### Step 2: Calculate Time-Bucketed Aggregates
+
+```sql
+sampled AS (
+    SELECT timestamp, sum(amount) as volume
+    FROM trades
+    SAMPLE BY 1h
+)
+```
+
+Creates time-series data with one row per interval.
+
+### Step 3: CROSS JOIN
+
+```sql
+FROM sampled CROSS JOIN general
+```
+
+Attaches the single general row to every sampled row. Since `general` has exactly one row, this repeats that row's values for each time bucket.
+
+## Performance Metrics vs Baseline
+
+Compare recent performance against historical averages:
+
+```questdb-sql demo title="API latency vs 7-day baseline"
+WITH baseline AS (
+    SELECT
+        avg(latency_ms) as avg_latency,
+        percentile(latency_ms, 95) as p95_latency,
+        percentile(latency_ms, 99) as p99_latency
+    FROM api_requests
+    WHERE timestamp >= dateadd('d', -7, now())
+),
+recent AS (
+    SELECT
+        timestamp,
+        avg(latency_ms) as current_latency,
+        percentile(latency_ms, 95) as current_p95,
+        count(*) as request_count
+    FROM api_requests
+    WHERE timestamp >= dateadd('h', -1, now())
+    SAMPLE BY 5m
+)
+SELECT
+    recent.timestamp,
+    recent.request_count,
+    recent.current_latency,
+    baseline.avg_latency as baseline_latency,
+    (recent.current_latency - baseline.avg_latency) as latency_diff,
+    recent.current_p95,
+    baseline.p95_latency as baseline_p95,
+    CASE
+        WHEN recent.current_latency > baseline.avg_latency * 2.0 THEN 'CRITICAL'
+        WHEN recent.current_latency > baseline.avg_latency * 1.5 THEN 'WARNING'
+        ELSE 'OK'
+    END as status
+FROM recent
+CROSS JOIN baseline
+ORDER BY recent.timestamp DESC;
+```
+
+**Results show current performance with baseline context and alerts.** Because `CASE` evaluates branches in order, the stricter `CRITICAL` threshold must be checked before `WARNING`; in the reverse order, `CRITICAL` would be unreachable.
+
+## Percentage of Daily Total
+
+Show each hour's contribution
to the daily total: + +```questdb-sql demo title="Hourly volume as percentage of daily total" +WITH daily_total AS ( + SELECT + sum(amount) as total_volume, + count(*) as total_trades + FROM trades + WHERE timestamp IN today() + AND symbol = 'BTC-USDT' +), +hourly AS ( + SELECT + timestamp, + sum(amount) as hourly_volume, + count(*) as hourly_trades + FROM trades + WHERE timestamp IN today() + AND symbol = 'BTC-USDT' + SAMPLE BY 1h +) +SELECT + hourly.timestamp, + hourly.hourly_volume, + daily_total.total_volume, + (hourly.hourly_volume / daily_total.total_volume * 100) as volume_pct, + hourly.hourly_trades, + (hourly.hourly_trades * 100.0 / daily_total.total_trades) as trade_count_pct +FROM hourly +CROSS JOIN daily_total +ORDER BY hourly.timestamp; +``` + +**Results:** + +| timestamp | hourly_volume | total_volume | volume_pct | hourly_trades | trade_count_pct | +|-----------|---------------|--------------|------------|---------------|-----------------| +| 00:00 | 45.6 | 916.8 | 4.97% | 1,234 | 4.23% | +| 01:00 | 34.2 | 916.8 | 3.73% | 987 | 3.38% | + +## Multiple Symbol Comparison with Overall Average + +Compare each symbol's volume against the cross-symbol average: + +```questdb-sql demo title="Symbol volumes vs market average" +WITH market_avg AS ( + SELECT + avg(symbol_volume) as avg_volume_per_symbol, + sum(symbol_volume) as total_market_volume + FROM ( + SELECT + symbol, + sum(amount) as symbol_volume + FROM trades + WHERE timestamp >= dateadd('d', -1, now()) + GROUP BY symbol + ) +), +symbol_volumes AS ( + SELECT + symbol, + sum(amount) as volume, + count(*) as trade_count + FROM trades + WHERE timestamp >= dateadd('d', -1, now()) + GROUP BY symbol +) +SELECT + sv.symbol, + sv.volume, + sv.trade_count, + ma.avg_volume_per_symbol, + (sv.volume / ma.avg_volume_per_symbol) as vs_avg_ratio, + (sv.volume / ma.total_market_volume * 100) as market_share +FROM symbol_volumes sv +CROSS JOIN market_avg ma +ORDER BY sv.volume DESC +LIMIT 10; +``` + +**Results:** + +| 
symbol | volume | trade_count | avg_volume_per_symbol | vs_avg_ratio | market_share | +|--------|--------|-------------|-----------------------|--------------|--------------| +| BTC-USDT | 1,234.56 | 45,678 | 234.56 | 5.26 | 45.2% | +| ETH-USDT | 567.89 | 34,567 | 234.56 | 2.42 | 20.8% | + +## Z-Score Anomaly Detection + +Calculate how many standard deviations current values are from the mean: + +```questdb-sql demo title="Anomaly detection with z-scores" +WITH stats AS ( + SELECT + avg(volume_5m) as mean_volume, + stddev(volume_5m) as stddev_volume + FROM ( + SELECT sum(amount) as volume_5m + FROM trades + WHERE timestamp >= dateadd('d', -7, now()) + AND symbol = 'BTC-USDT' + SAMPLE BY 5m + ) +), +recent AS ( + SELECT + timestamp, + sum(amount) as volume + FROM trades + WHERE timestamp >= dateadd('h', -1, now()) + AND symbol = 'BTC-USDT' + SAMPLE BY 5m +) +SELECT + recent.timestamp, + recent.volume, + stats.mean_volume, + stats.stddev_volume, + ((recent.volume - stats.mean_volume) / stats.stddev_volume) as z_score, + CASE + WHEN ABS((recent.volume - stats.mean_volume) / stats.stddev_volume) > 3 THEN 'ANOMALY' + WHEN ABS((recent.volume - stats.mean_volume) / stats.stddev_volume) > 2 THEN 'UNUSUAL' + ELSE 'NORMAL' + END as classification +FROM recent +CROSS JOIN stats +ORDER BY recent.timestamp DESC; +``` + +**Key points:** +- Z-score > 2: Unusual (95th percentile) +- Z-score > 3: Anomaly (99.7th percentile) +- Works for any metric (volume, latency, error rate, etc.) 
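The z-score classification above is easy to sanity-check outside the database. A minimal Python sketch of the same thresholds, using made-up baseline volumes rather than real trade data:

```python
from statistics import mean, stdev

def classify(value, mean_v, stddev_v):
    # Mirror the SQL CASE: |z| > 3 -> ANOMALY, |z| > 2 -> UNUSUAL, else NORMAL
    z = (value - mean_v) / stddev_v
    if abs(z) > 3:
        return z, "ANOMALY"
    if abs(z) > 2:
        return z, "UNUSUAL"
    return z, "NORMAL"

# Hypothetical 5-minute volumes from the 7-day baseline window
baseline = [40.0, 42.5, 39.0, 41.2, 38.8, 40.5, 41.0]
m, s = mean(baseline), stdev(baseline)
```

A value four standard deviations above the mean classifies as `ANOMALY`, 2.5 deviations below as `UNUSUAL`, and anything within two deviations as `NORMAL`.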
+ +## Time-of-Day Comparison + +Compare current hour against historical average for same hour of day: + +```questdb-sql demo title="Current hour vs historical same-hour average" +WITH historical_by_hour AS ( + SELECT + hour(timestamp) as hour_of_day, + avg(hourly_volume) as avg_volume_this_hour, + stddev(hourly_volume) as stddev_volume_this_hour + FROM ( + SELECT + timestamp, + sum(amount) as hourly_volume + FROM trades + WHERE timestamp >= dateadd('d', -30, now()) + AND symbol = 'BTC-USDT' + SAMPLE BY 1h + ) + GROUP BY hour_of_day +), +current_hour AS ( + SELECT + timestamp, + hour(timestamp) as hour_of_day, + sum(amount) as volume + FROM trades + WHERE timestamp IN today() + AND symbol = 'BTC-USDT' + SAMPLE BY 1h +) +SELECT + current_hour.timestamp, + current_hour.volume as current_volume, + historical_by_hour.avg_volume_this_hour as historical_avg, + ((current_hour.volume - historical_by_hour.avg_volume_this_hour) / + historical_by_hour.avg_volume_this_hour * 100) as pct_diff_from_historical +FROM current_hour +LEFT JOIN historical_by_hour + ON current_hour.hour_of_day = historical_by_hour.hour_of_day +ORDER BY current_hour.timestamp; +``` + +Note: This uses LEFT JOIN instead of CROSS JOIN because we're matching on hour_of_day. 
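The same-hour comparison reduces to grouping history by hour of day, then a per-bucket lookup — which is also why a LEFT JOIN is the right shape: hours without history simply get no baseline. A Python sketch with invented volumes:

```python
from collections import defaultdict

def hourly_baseline(history):
    """history: list of (hour_of_day, volume) pairs.
    Returns hour -> average volume, like the historical_by_hour CTE."""
    buckets = defaultdict(list)
    for hour, volume in history:
        buckets[hour].append(volume)
    return {hour: sum(v) / len(v) for hour, v in buckets.items()}

def pct_diff_from_historical(current, baseline):
    """current: list of (hour_of_day, volume); LEFT-JOIN-style lookup by hour."""
    out = []
    for hour, volume in current:
        avg = baseline.get(hour)  # None when there is no history for that hour
        pct = None if avg is None else (volume - avg) / avg * 100
        out.append((hour, volume, avg, pct))
    return out

# Hypothetical 30-day history collapsed to two hours of day
history = [(0, 40.0), (0, 44.0), (1, 30.0), (1, 34.0)]
baseline = hourly_baseline(history)        # {0: 42.0, 1: 32.0}
today = [(0, 46.2), (1, 28.8), (2, 10.0)]  # hour 2 has no baseline
rows = pct_diff_from_historical(today, baseline)
```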
+
+## Grafana Baseline Visualization
+
+Format for Grafana with a baseline reference line:
+
+```questdb-sql demo title="Time-series with baseline for Grafana"
+WITH baseline AS (
+    SELECT avg(response_time_ms) as avg_response_time
+    FROM api_metrics
+    WHERE timestamp >= dateadd('d', -7, now())
+),
+timeseries AS (
+    SELECT
+        timestamp as time,
+        avg(response_time_ms) as current_response_time
+    FROM api_metrics
+    WHERE $__timeFilter(timestamp)
+    SAMPLE BY $__interval
+)
+SELECT
+    timeseries.time,
+    timeseries.current_response_time as "Current",
+    baseline.avg_response_time as "7-Day Average"
+FROM timeseries
+CROSS JOIN baseline
+ORDER BY timeseries.time;
+```
+
+Grafana will plot both series, making it easy to see when current values deviate from the baseline.
+
+## Simplification: Single Query Without CTE
+
+For simple cases, you can inline the general aggregate as a subquery:
+
+```sql
+SELECT
+    timestamp,
+    sum(amount) as volume,
+    (SELECT avg(hourly) FROM (
+        SELECT sum(amount) as hourly FROM trades WHERE timestamp IN today() SAMPLE BY 1h
+    )) as daily_avg
+FROM trades
+WHERE timestamp IN today()
+SAMPLE BY 1h;
+```
+
+Note that the inner average needs its own subquery: aggregates cannot be nested directly, so `avg(sum(amount))` is invalid. The CTE-plus-CROSS-JOIN form is more readable, and more efficient when you need multiple baseline metrics.
+
+## Performance Considerations
+
+**The general CTE is calculated once:**
+
+```sql
+WITH general AS (
+    SELECT expensive_aggregate FROM large_table -- Calculated ONCE
+)
+SELECT * FROM timeseries CROSS JOIN general; -- General reused for all rows
+```
+
+**Filter data in both CTEs:**
+
+```sql
+WITH general AS (
+    SELECT avg(value) as baseline
+    FROM metrics
+    WHERE timestamp >= dateadd('d', -7, now()) -- Same filter
+),
+recent AS (
+    SELECT timestamp, value
+    FROM metrics
+    WHERE timestamp >= dateadd('d', -7, now()) -- Same filter
+    SAMPLE BY 1h
+)
+```
+
+With the same timestamp filter, both queries benefit from partition pruning on the designated timestamp.
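The reason the CROSS JOIN is cheap is that it amounts to "compute once, attach everywhere". A Python sketch of the same pattern over three hypothetical hourly buckets:

```python
def attach_baseline(buckets):
    """buckets: list of (timestamp, volume) pairs.
    Computes the single-row 'general' aggregate once, then repeats
    it for every row -- the CROSS JOIN pattern."""
    daily_total = sum(v for _, v in buckets)   # computed ONCE
    daily_avg = daily_total / len(buckets)
    return [
        (ts, v, daily_avg,
         (v - daily_avg) / daily_avg * 100,    # pct diff from average
         v / daily_total * 100)                # pct of total
        for ts, v in buckets
    ]

rows = attach_baseline([("00:00", 45.6), ("01:00", 34.2), ("02:00", 28.9)])
```

Every row carries the same baseline value, and the `pct_of_total` column sums to 100 across the buckets.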
+ +## Alternative: Window Functions + +For running comparisons, window functions can be more appropriate: + +```sql +-- CROSS JOIN pattern: Compare against fixed baseline +WITH baseline AS (SELECT avg(value) FROM metrics) +SELECT value, baseline FROM timeseries CROSS JOIN baseline; + +-- Window function: Compare against moving average +SELECT + value, + avg(value) OVER (ORDER BY timestamp ROWS BETWEEN 10 PRECEDING AND CURRENT ROW) as moving_avg +FROM timeseries; +``` + +Use CROSS JOIN when you want a **fixed baseline** (e.g., "7-day average"). +Use window functions for **dynamic baselines** (e.g., "10-period moving average"). + +:::tip When to Use This Pattern +Use CROSS JOIN with general aggregates when you need: +- Percentage of total calculations +- Baseline comparisons (current vs historical average) +- Context for time-series data (is this value high or low?) +- Z-scores or statistical anomaly detection +- Reference lines in Grafana dashboards +::: + +:::warning CROSS JOIN Behavior +CROSS JOIN creates a cartesian product. This only works efficiently when one side has exactly **one row** (the general aggregates). Never CROSS JOIN two multi-row tables - it will explode your result set. 
+ +Safe: `SELECT * FROM timeseries CROSS JOIN (SELECT avg(...))` ← Second table has 1 row +Dangerous: `SELECT * FROM table1 CROSS JOIN table2` ← Both have many rows +::: + +:::info Related Documentation +- [CROSS JOIN](/docs/reference/sql/join/#cross-join) +- [Common Table Expressions (WITH)](/docs/reference/sql/with/) +- [SAMPLE BY](/docs/reference/sql/select/#sample-by) +- [Window functions (for alternative approaches)](/docs/reference/sql/select/#window-functions) +::: diff --git a/documentation/playbook/sql/pivoting.md b/documentation/playbook/sql/advanced/pivot-table.md similarity index 100% rename from documentation/playbook/sql/pivoting.md rename to documentation/playbook/sql/advanced/pivot-table.md diff --git a/documentation/playbook/sql/advanced/sankey-funnel.md b/documentation/playbook/sql/advanced/sankey-funnel.md new file mode 100644 index 000000000..a681ac3bc --- /dev/null +++ b/documentation/playbook/sql/advanced/sankey-funnel.md @@ -0,0 +1,427 @@ +--- +title: Sankey and Funnel Diagrams +sidebar_label: Sankey/funnel diagrams +description: Create flow analysis data for Sankey diagrams and conversion funnels using session-based queries and state transitions +--- + +Build user journey flow data for Sankey diagrams and conversion funnels by tracking state transitions across sessions. This pattern is essential for visualizing how users navigate through your application, where they drop off, and which paths are most common. 
+ +## Problem: Track User Flow Through States + +You have event data tracking user actions: + +| timestamp | user_id | page | +|-----------|---------|------| +| 10:00:00 | user_1 | home | +| 10:00:15 | user_1 | products | +| 10:00:45 | user_1 | cart | +| 10:01:00 | user_1 | checkout | +| 10:00:05 | user_2 | home | +| 10:00:20 | user_2 | products | +| 10:00:30 | user_2 | home | + +You want to count transitions between states: + +| from | to | count | +|------|----|-------| +| home | products | 2 | +| products | cart | 1 | +| products | home | 1 | +| cart | checkout | 1 | + +This data can be visualized as a Sankey diagram or used for funnel analysis. + +## Solution: LAG Window Function for State Transitions + +Use LAG to get the previous state for each user, then aggregate transitions: + +```questdb-sql demo title="Count state transitions for Sankey diagram" +WITH transitions AS ( + SELECT + user_id, + page as current_state, + lag(page) OVER (PARTITION BY user_id ORDER BY timestamp) as previous_state, + timestamp + FROM user_events + WHERE timestamp >= dateadd('d', -7, now()) +) +SELECT + previous_state as from_state, + current_state as to_state, + count(*) as transition_count +FROM transitions +WHERE previous_state IS NOT NULL +GROUP BY previous_state, current_state +ORDER BY transition_count DESC; +``` + +**Results:** + +| from_state | to_state | transition_count | +|------------|----------|------------------| +| home | products | 1,245 | +| products | home | 567 | +| products | details | 489 | +| details | cart | 234 | +| cart | checkout | 156 | +| checkout | complete | 134 | + +## How It Works + +### Step 1: Get Previous State with LAG + +```sql +lag(page) OVER (PARTITION BY user_id ORDER BY timestamp) as previous_state +``` + +For each event, looks back to the previous event for that user: +- `PARTITION BY user_id`: Separate window for each user +- `ORDER BY timestamp`: Previous means earlier in time +- Returns NULL for the first event (no previous state) + 
+**Example for user_1:**
+
+| timestamp | page | previous_state |
+|-----------|------|----------------|
+| 10:00:00 | home | NULL |
+| 10:00:15 | products | home |
+| 10:00:45 | cart | products |
+| 10:01:00 | checkout | cart |
+
+### Step 2: Filter and Aggregate
+
+```sql
+WHERE previous_state IS NOT NULL
+GROUP BY previous_state, current_state
+```
+
+- Remove first events (NULL previous_state)
+- Count occurrences of each transition pair
+- Order by count to see the most common paths
+
+## Conversion Funnel Analysis
+
+Calculate conversion rates through a specific funnel:
+
+```questdb-sql demo title="E-commerce funnel with conversion rates"
+WITH user_pages AS (
+    SELECT DISTINCT user_id, page
+    FROM user_events
+    WHERE timestamp >= dateadd('d', -7, now())
+      AND page IN ('home', 'products', 'cart', 'checkout', 'complete')
+),
+funnel AS (
+    SELECT
+        count(CASE WHEN page = 'home' THEN 1 END) as step1_home,
+        count(CASE WHEN page = 'products' THEN 1 END) as step2_products,
+        count(CASE WHEN page = 'cart' THEN 1 END) as step3_cart,
+        count(CASE WHEN page = 'checkout' THEN 1 END) as step4_checkout,
+        count(CASE WHEN page = 'complete' THEN 1 END) as step5_complete
+    FROM user_pages
+)
+SELECT
+    'Home' as step, step1_home as users, 100.0 as conversion_rate
+FROM funnel
+UNION ALL
+SELECT 'Products', step2_products, (step2_products * 100.0 / step1_home)
+FROM funnel
+UNION ALL
+SELECT 'Cart', step3_cart, (step3_cart * 100.0 / step1_home)
+FROM funnel
+UNION ALL
+SELECT 'Checkout', step4_checkout, (step4_checkout * 100.0 / step1_home)
+FROM funnel
+UNION ALL
+SELECT 'Complete', step5_complete, (step5_complete * 100.0 / step1_home)
+FROM funnel;
+```
+
+**Results:**
+
+| step | users | conversion_rate |
+|------|-------|-----------------|
+| Home | 10,000 | 100.00% |
+| Products | 6,500 | 65.00% |
+| Cart | 2,300 | 23.00% |
+| Checkout | 1,800 | 18.00% |
+| Complete | 1,500 | 15.00% |
+
+This also shows that roughly 83% of users who reach checkout complete the purchase (1,500 / 1,800).
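The funnel arithmetic itself is just each step's distinct users divided by the first step's. A Python sketch using hypothetical step counts:

```python
def funnel_rates(steps):
    """steps: ordered list of (name, distinct_users).
    Conversion is measured against the first step, as in the SQL funnel."""
    base = steps[0][1]
    return [(name, users, users * 100.0 / base) for name, users in steps]

steps = [("Home", 10_000), ("Products", 6_500), ("Cart", 2_300),
         ("Checkout", 1_800), ("Complete", 1_500)]
rates = funnel_rates(steps)

# Step-over-step conversion, e.g. Checkout -> Complete
checkout_to_complete = steps[4][1] * 100.0 / steps[3][1]
```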
+ +## Session-Based Flow Analysis + +Group transitions within sessions (defined by inactivity timeout): + +```questdb-sql demo title="Flow analysis within sessions" +WITH session_events AS ( + SELECT + user_id, + page, + timestamp, + lag(timestamp) OVER (PARTITION BY user_id ORDER BY timestamp) as prev_timestamp, + SUM(CASE + WHEN timestamp - lag(timestamp) OVER (PARTITION BY user_id ORDER BY timestamp) > 1800000000 + OR lag(timestamp) OVER (PARTITION BY user_id ORDER BY timestamp) IS NULL + THEN 1 + ELSE 0 + END) OVER (PARTITION BY user_id ORDER BY timestamp) as session_id + FROM user_events + WHERE timestamp >= dateadd('d', -7, now()) +), +transitions AS ( + SELECT + user_id, + session_id, + page as current_state, + lag(page) OVER (PARTITION BY user_id, session_id ORDER BY timestamp) as previous_state + FROM session_events +) +SELECT + previous_state as from_state, + current_state as to_state, + count(*) as transition_count, + count(DISTINCT user_id) as unique_users +FROM transitions +WHERE previous_state IS NOT NULL +GROUP BY previous_state, current_state +ORDER BY transition_count DESC; +``` + +**Key points:** +- Sessions defined by 30-minute inactivity (1800000000 microseconds) +- Transitions counted within sessions only +- Includes unique user count for each transition + +## Visualizing in Grafana/Plotly + +Format output for Sankey diagram tools: + +```questdb-sql demo title="Sankey diagram data format" +WITH transitions AS ( + SELECT + page as current_state, + lag(page) OVER (PARTITION BY user_id ORDER BY timestamp) as previous_state + FROM user_events + WHERE timestamp >= dateadd('d', -1, now()) +) +SELECT + previous_state as source, + current_state as target, + count(*) as value +FROM transitions +WHERE previous_state IS NOT NULL + AND previous_state != current_state -- Exclude self-loops +GROUP BY previous_state, current_state +HAVING count(*) >= 10 -- Minimum flow threshold +ORDER BY value DESC; +``` + +This format works directly with: +- **Plotly**: 
`go.Sankey(node=[...], link=[source, target, value])` +- **D3.js**: Standard Sankey input format +- **Grafana Flow plugin**: Source/target/value format + +## Multi-Step Path Analysis + +Find most common complete paths (not just transitions): + +```questdb-sql demo title="Most common 3-step user paths" +WITH paths AS ( + SELECT + user_id, + page, + lag(page, 1) OVER (PARTITION BY user_id ORDER BY timestamp) as prev_1, + lag(page, 2) OVER (PARTITION BY user_id ORDER BY timestamp) as prev_2, + timestamp + FROM user_events + WHERE timestamp >= dateadd('d', -7, now()) +) +SELECT + prev_2 || ' → ' || prev_1 || ' → ' || page as path, + count(*) as occurrences, + count(DISTINCT user_id) as unique_users +FROM paths +WHERE prev_2 IS NOT NULL +GROUP BY path +ORDER BY occurrences DESC +LIMIT 20; +``` + +**Results:** + +| path | occurrences | unique_users | +|------|-------------|--------------| +| home → products → details | 1,234 | 987 | +| products → details → cart | 892 | 765 | +| home → products → home | 654 | 543 | +| cart → checkout → complete | 543 | 543 | + +## Filter by Successful Conversions + +Analyze only paths that led to conversion: + +```questdb-sql demo title="Paths of users who converted" +WITH converting_users AS ( + SELECT DISTINCT user_id + FROM user_events + WHERE timestamp >= dateadd('d', -7, now()) + AND page = 'purchase_complete' +), +transitions AS ( + SELECT + e.user_id, + e.page as current_state, + lag(e.page) OVER (PARTITION BY e.user_id ORDER BY e.timestamp) as previous_state + FROM user_events e + INNER JOIN converting_users cu ON e.user_id = cu.user_id + WHERE e.timestamp >= dateadd('d', -7, now()) +) +SELECT + previous_state as from_state, + current_state as to_state, + count(*) as transition_count +FROM transitions +WHERE previous_state IS NOT NULL +GROUP BY previous_state, current_state +ORDER BY transition_count DESC; +``` + +This shows the paths taken by users who successfully completed a purchase. 
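The `lag(page, 1)` / `lag(page, 2)` pair is equivalent to sliding a three-element window over each user's ordered pages. A Python sketch with two invented users:

```python
from collections import Counter

def three_step_paths(events):
    """events: dict user_id -> ordered list of pages.
    Emulates lag(page, 1) / lag(page, 2) by sliding a window of three."""
    counts = Counter()
    for pages in events.values():
        for a, b, c in zip(pages, pages[1:], pages[2:]):
            counts[f"{a} → {b} → {c}"] += 1
    return counts

events = {
    "user_1": ["home", "products", "details", "cart"],
    "user_2": ["home", "products", "details"],
}
paths = three_step_paths(events)
```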
+ +## Drop-Off Analysis + +Identify where users exit the funnel: + +```questdb-sql demo title="Last page visited before exit" +WITH user_last_page AS ( + SELECT + user_id, + page, + timestamp, + row_number() OVER (PARTITION BY user_id ORDER BY timestamp DESC) as rn + FROM user_events + WHERE timestamp >= dateadd('d', -7, now()) +), +non_converters AS ( + SELECT ulp.user_id, ulp.page as exit_page + FROM user_last_page ulp + WHERE ulp.rn = 1 + AND NOT EXISTS ( + SELECT 1 FROM user_events e + WHERE e.user_id = ulp.user_id + AND e.page = 'purchase_complete' + AND e.timestamp >= dateadd('d', -7, now()) + ) +) +SELECT + exit_page, + count(*) as exit_count, + (count(*) * 100.0 / (SELECT count(*) FROM non_converters)) as exit_percentage +FROM non_converters +GROUP BY exit_page +ORDER BY exit_count DESC; +``` + +**Results:** + +| exit_page | exit_count | exit_percentage | +|-----------|------------|-----------------| +| products | 3,456 | 42.5% | +| details | 1,234 | 15.2% | +| cart | 987 | 12.1% | +| home | 876 | 10.8% | + +Shows that most users who don't convert exit from the products page. 
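The drop-off query reduces to "last page per user, excluding converters". A Python sketch over hypothetical journeys:

```python
def exit_pages(events, conversion_page="purchase_complete"):
    """events: dict user_id -> ordered list of pages.
    Returns the last page for users who never reached the conversion page,
    like the row_number() ... rn = 1 filter plus NOT EXISTS in the SQL."""
    exits = {}
    for user, pages in events.items():
        if conversion_page not in pages:
            exits[user] = pages[-1]
    return exits

events = {
    "u1": ["home", "products"],                   # dropped off at products
    "u2": ["home", "cart", "purchase_complete"],  # converted -> excluded
    "u3": ["home", "products", "details"],        # dropped off at details
}
exits = exit_pages(events)
```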
+ +## Time-Based Flow Analysis + +Analyze how quickly users move through states: + +```questdb-sql demo title="Average time between transitions" +WITH transitions AS ( + SELECT + page as current_state, + lag(page) OVER (PARTITION BY user_id ORDER BY timestamp) as previous_state, + timestamp - lag(timestamp) OVER (PARTITION BY user_id ORDER BY timestamp) as time_diff_micros + FROM user_events + WHERE timestamp >= dateadd('d', -7, now()) +) +SELECT + previous_state as from_state, + current_state as to_state, + count(*) as transition_count, + cast(avg(time_diff_micros) / 1000000 as int) as avg_seconds +FROM transitions +WHERE previous_state IS NOT NULL +GROUP BY previous_state, current_state +HAVING count(*) >= 100 +ORDER BY avg_seconds DESC; +``` + +**Results:** + +| from_state | to_state | transition_count | avg_seconds | +|------------|----------|------------------|-------------| +| cart | checkout | 1,234 | 245 | +| details | cart | 2,345 | 180 | +| products | details | 3,456 | 45 | +| home | products | 4,567 | 12 | + +Shows users spend an average of 4 minutes deciding to checkout from cart. 
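The averaging step can be mirrored in code: group the microsecond gaps by (from, to) pair and convert to seconds. A Python sketch with invented timestamps:

```python
from collections import defaultdict

def avg_transition_seconds(events):
    """events: list of (user_id, page, ts_micros), sorted by time per user.
    Averages the microsecond gap for each (from, to) pair, as the SQL does."""
    by_user = defaultdict(list)
    for user, page, ts in events:
        by_user[user].append((page, ts))
    gaps = defaultdict(list)
    for visits in by_user.values():
        for (p0, t0), (p1, t1) in zip(visits, visits[1:]):
            gaps[(p0, p1)].append(t1 - t0)
    return {k: sum(v) / len(v) / 1_000_000 for k, v in gaps.items()}

events = [
    ("u1", "cart", 0), ("u1", "checkout", 240_000_000),  # 240 s
    ("u2", "cart", 0), ("u2", "checkout", 250_000_000),  # 250 s
]
avgs = avg_transition_seconds(events)
```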
+ +## Performance Considerations + +**Index on user_id and timestamp:** +```sql +-- Ensure table is partitioned by timestamp +CREATE TABLE user_events ( + timestamp TIMESTAMP, + user_id SYMBOL, + page SYMBOL +) TIMESTAMP(timestamp) PARTITION BY DAY; +``` + +**Limit time range:** +```sql +WHERE timestamp >= dateadd('d', -7, now()) +``` + +**Pre-aggregate for dashboards:** +```sql +-- Create hourly summary table +CREATE TABLE user_flow_hourly AS +SELECT + timestamp_floor('h', timestamp) as hour, + previous_state, + current_state, + count(*) as transitions +FROM ( + SELECT + timestamp, + page as current_state, + lag(page) OVER (PARTITION BY user_id ORDER BY timestamp) as previous_state + FROM user_events +) +WHERE previous_state IS NOT NULL +GROUP BY hour, previous_state, current_state; +``` + +:::tip When to Use Sankey vs Funnel +- **Sankey diagrams**: Show all possible paths and their volumes (exploratory analysis) +- **Funnel charts**: Show conversion through a specific linear path (monitoring KPIs) +- **Drop-off analysis**: Identify specific pain points where users exit +::: + +:::warning Session Definition +Choose appropriate session timeout based on your use case: +- **E-commerce**: 30 minutes typical +- **Content sites**: 60+ minutes (users may pause to read) +- **Mobile apps**: 5-10 minutes (shorter attention spans) +::: + +:::info Related Documentation +- [LAG window function](/docs/reference/function/window/#lag) +- [Window functions overview](/docs/reference/sql/select/#window-functions) +- [PARTITION BY](/docs/reference/sql/select/#partition-by) +- [Session windows pattern](/playbook/sql/time-series/session-windows) +::: diff --git a/documentation/playbook/sql/advanced/unpivot-table.md b/documentation/playbook/sql/advanced/unpivot-table.md new file mode 100644 index 000000000..73198ff79 --- /dev/null +++ b/documentation/playbook/sql/advanced/unpivot-table.md @@ -0,0 +1,326 @@ +--- +title: UNPIVOT Table Results +sidebar_label: UNPIVOT +description: Convert 
wide-format data to long format using UNION ALL to transform column-based data into row-based data +--- + +Transform wide-format data (multiple columns) into long format (rows) using UNION ALL. This "unpivot" operation is useful for converting column-based data into a row-based format suitable for visualization or further analysis. + +## Problem: Wide Format to Long Format + +You have query results with multiple columns where only one column has a value per row: + +**Wide format (sparse):** + +| timestamp | symbol | buy | sell | +|-----------|-----------|--------|--------| +| 08:10:00 | ETH-USDT | NULL | 3678.25| +| 08:10:00 | ETH-USDT | NULL | 3678.25| +| 08:10:00 | ETH-USDT | 3678.01| NULL | +| 08:10:00 | ETH-USDT | NULL | 3678.00| + +You want to convert this to a format where side and price are explicit: + +**Long format (dense):** + +| timestamp | symbol | side | price | +|-----------|-----------|------|---------| +| 08:10:00 | ETH-USDT | sell | 3678.25 | +| 08:10:00 | ETH-USDT | sell | 3678.25 | +| 08:10:00 | ETH-USDT | buy | 3678.01 | +| 08:10:00 | ETH-USDT | sell | 3678.00 | + +## Solution: UNION ALL with Literal Values + +Use UNION ALL to stack columns as rows, then filter NULL values: + +```questdb-sql demo title="UNPIVOT buy/sell columns to side/price rows" +WITH pivoted AS ( + SELECT + timestamp, + symbol, + CASE WHEN side = 'buy' THEN price END as buy, + CASE WHEN side = 'sell' THEN price END as sell + FROM trades + WHERE timestamp >= dateadd('m', -5, now()) + AND symbol = 'ETH-USDT' +), +unpivoted AS ( + SELECT timestamp, symbol, 'buy' as side, buy as price + FROM pivoted + + UNION ALL + + SELECT timestamp, symbol, 'sell' as side, sell as price + FROM pivoted +) +SELECT * FROM unpivoted +WHERE price IS NOT NULL +ORDER BY timestamp; +``` + +**Results:** + +| timestamp | symbol | side | price | +|-----------|-----------|------|---------| +| 08:10:00 | ETH-USDT | sell | 3678.25 | +| 08:10:00 | ETH-USDT | sell | 3678.25 | +| 08:10:00 | ETH-USDT | buy | 
3678.01 | +| 08:10:00 | ETH-USDT | sell | 3678.00 | + +## How It Works + +### Step 1: Create Wide Format (if needed) + +If your data is already in narrow format, you may need to pivot first: + +```sql +CASE WHEN side = 'buy' THEN price END as buy, +CASE WHEN side = 'sell' THEN price END as sell +``` + +This creates NULL values for the opposite side. + +### Step 2: UNION ALL + +```sql +SELECT timestamp, symbol, 'buy' as side, buy as price FROM pivoted +UNION ALL +SELECT timestamp, symbol, 'sell' as side, sell as price FROM pivoted +``` + +This creates two copies of every row: +- First copy: Has 'buy' literal with buy column value +- Second copy: Has 'sell' literal with sell column value + +### Step 3: Filter NULLs + +```sql +WHERE price IS NOT NULL +``` + +Removes rows where the price column is NULL (the opposite side). + +## Unpivoting Multiple Columns + +Transform multiple numeric columns to name-value pairs: + +```questdb-sql demo title="UNPIVOT sensor readings" +WITH sensor_data AS ( + SELECT + timestamp, + sensor_id, + temperature, + humidity, + pressure + FROM sensors + WHERE timestamp >= dateadd('h', -1, now()) +) +SELECT timestamp, sensor_id, 'temperature' as metric, temperature as value FROM sensor_data +WHERE temperature IS NOT NULL + +UNION ALL + +SELECT timestamp, sensor_id, 'humidity' as metric, humidity as value FROM sensor_data +WHERE humidity IS NOT NULL + +UNION ALL + +SELECT timestamp, sensor_id, 'pressure' as metric, pressure as value FROM sensor_data +WHERE pressure IS NOT NULL + +ORDER BY timestamp, sensor_id, metric; +``` + +**Results:** + +| timestamp | sensor_id | metric | value | +|-----------|-----------|-------------|-------| +| 10:00:00 | S001 | humidity | 65.2 | +| 10:00:00 | S001 | pressure | 1013.2| +| 10:00:00 | S001 | temperature | 22.5 | + +## Simplified Syntax (When All Values Present) + +If you know there are no NULL values, skip the filtering: + +```sql +SELECT timestamp, symbol, 'buy' as side, buy_price as price +FROM 
trades_summary + +UNION ALL + +SELECT timestamp, symbol, 'sell' as side, sell_price as price +FROM trades_summary; +``` + +## Use Cases + +**Grafana visualization:** +```sql +-- Convert wide format to Grafana-friendly long format +SELECT + timestamp as time, + metric_name as metric, + value +FROM ( + SELECT timestamp, 'cpu' as metric_name, cpu_usage as value FROM metrics + UNION ALL + SELECT timestamp, 'memory' as metric_name, memory_usage as value FROM metrics + UNION ALL + SELECT timestamp, 'disk' as metric_name, disk_usage as value FROM metrics +) +WHERE value IS NOT NULL; +``` + +**Pivot table to chart:** +```sql +-- From crosstab format to plottable format +SELECT month, 'revenue' as metric, revenue as value FROM monthly_stats +UNION ALL +SELECT month, 'costs' as metric, costs as value FROM monthly_stats +UNION ALL +SELECT month, 'profit' as metric, profit as value FROM monthly_stats; +``` + +**Multiple symbols analysis:** +```sql +-- Stack different symbols as rows +SELECT timestamp, 'BTC-USDT' as symbol, btc_price as price FROM market_data +UNION ALL +SELECT timestamp, 'ETH-USDT' as symbol, eth_price as price FROM market_data +UNION ALL +SELECT timestamp, 'SOL-USDT' as symbol, sol_price as price FROM market_data; +``` + +## Performance Considerations + +**UNION ALL vs UNION:** +```sql +-- Fast: UNION ALL (no deduplication) +SELECT ... UNION ALL SELECT ... + +-- Slower: UNION (deduplicates rows) +SELECT ... UNION SELECT ... +``` + +Always use `UNION ALL` for unpivoting unless you specifically need deduplication. 
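The whole UNION ALL unpivot can be expressed as two loops, which is handy for sanity-checking expected row counts. A Python sketch over hypothetical wide rows:

```python
def unpivot(rows, id_cols, value_cols):
    """rows: list of dicts in wide format. Emits one (ids..., metric, value)
    tuple per non-NULL value column -- what the UNION ALL does in SQL."""
    out = []
    for row in rows:
        ids = tuple(row[c] for c in id_cols)
        for col in value_cols:
            if row.get(col) is not None:  # the WHERE price IS NOT NULL filter
                out.append(ids + (col, row[col]))
    return out

wide = [
    {"timestamp": "08:10:00", "symbol": "ETH-USDT", "buy": None, "sell": 3678.25},
    {"timestamp": "08:10:00", "symbol": "ETH-USDT", "buy": 3678.01, "sell": None},
]
long_rows = unpivot(wide, ["timestamp", "symbol"], ["buy", "sell"])
```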
+
+**Index usage:**
+- Each SELECT in the UNION can use indexes independently
+- Filter before UNION for better performance:
+
+```sql
+-- Good: Filter in each SELECT
+SELECT timestamp, 'buy' as side, price FROM trades WHERE side = 'buy'
+UNION ALL
+SELECT timestamp, 'sell' as side, price FROM trades WHERE side = 'sell'
+
+-- Less efficient: Filter after UNION
+SELECT * FROM (
+    SELECT timestamp, 'buy' as side, price_buy as price FROM trades
+    UNION ALL
+    SELECT timestamp, 'sell' as side, price_sell as price FROM trades
+) WHERE price > 0
+```
+
+## Alternative: Case-Based Approach
+
+For simple scenarios, use CASE without UNION:
+
+```sql
+-- If your source data has a side column already
+SELECT * FROM (
+    SELECT
+        timestamp,
+        symbol,
+        side,
+        CASE
+            WHEN side = 'buy' THEN buy_price
+            WHEN side = 'sell' THEN sell_price
+        END as price
+    FROM trades
+) WHERE price IS NOT NULL;
+```
+
+The outer subquery is needed because the `price` alias cannot be referenced in the WHERE clause of the same SELECT. This works when you have a discriminator column (like `side`) that indicates which price column to use.
+
+## Dynamic Unpivoting
+
+For tables with many columns, generate UNION queries programmatically:
+
+```python
+# Python example
+columns = ['temperature', 'humidity', 'pressure', 'wind_speed']
+queries = []
+
+for col in columns:
+    query = f"SELECT timestamp, sensor_id, '{col}' as metric, {col} as value FROM sensors WHERE {col} IS NOT NULL"
+    queries.append(query)
+
+full_query = " UNION ALL ".join(queries)
+```
+
+## Unpivoting with Metadata
+
+Include additional information in unpivoted results:
+
+```sql
+WITH source AS (
+    SELECT
+        timestamp,
+        device_id,
+        location,
+        temperature,
+        humidity
+    FROM iot_sensors
+)
+SELECT timestamp, device_id, location, 'temperature' as metric, temperature as value, 'celsius' as unit
+FROM source WHERE temperature IS NOT NULL
+
+UNION ALL
+
+SELECT timestamp, device_id, location, 'humidity' as metric, humidity as value, 'percent' as unit
+FROM source WHERE humidity IS NOT NULL
+
+ORDER BY timestamp, device_id, metric;
+```
+
+## Reverse: Pivot (Long to Wide)
+
+To go back from long to wide format, use aggregation with CASE:
+
+```sql
+-- From long format
+SELECT
+    timestamp,
+    sensor_id,
+    MAX(CASE WHEN metric = 'temperature' THEN value END) as temperature,
+    MAX(CASE WHEN metric = 'humidity' THEN value END) as humidity,
+    MAX(CASE WHEN metric = 'pressure' THEN value END) as pressure
+FROM sensor_readings_long
+GROUP BY timestamp, sensor_id;
+```
+
+See the [Pivoting](/playbook/sql/advanced/pivot-table) guide for more details.
+
+:::tip When to UNPIVOT
+Unpivot data when:
+- Visualizing multiple metrics on the same chart (Grafana, BI tools)
+- Applying the same calculation to multiple columns
+- Storing column-based data in a narrow table format
+- Preparing data for machine learning (feature columns → feature rows)
+:::
+
+:::warning Performance Impact
+UNION ALL creates multiple copies of your data. For very large tables:
+- Filter early to reduce dataset size
+- Consider whether unpivoting is necessary (some tools handle wide format well)
+- Use indexes on filtered columns
+- Test query performance before using in production
+:::
+
+:::info Related Documentation
+- [UNION](/docs/reference/sql/union/)
+- [CASE expressions](/docs/reference/sql/case/)
+- [Pivoting (opposite operation)](/playbook/sql/advanced/pivot-table)
+:::
diff --git a/documentation/playbook/sql/time-series/epoch-timestamps.md b/documentation/playbook/sql/time-series/epoch-timestamps.md
new file mode 100644
index 000000000..cf12ffcad
--- /dev/null
+++ b/documentation/playbook/sql/time-series/epoch-timestamps.md
@@ -0,0 +1,282 @@
+---
+title: Query with Epoch Timestamps
+sidebar_label: Epoch timestamps
+description: Use epoch timestamps in milliseconds or microseconds for timestamp filtering and comparisons
+---
+
+Query QuestDB using epoch timestamps (Unix time) in milliseconds, microseconds, or nanoseconds. This is useful when working with systems that represent time as integers rather than timestamp strings.
+
+## Problem: Epoch Time from External Systems
+
+Your application or API provides timestamps as epoch integers:
+- JavaScript: `1746552420000` (milliseconds since 1970-01-01)
+- Python `time()`: `1746552420.123456` (seconds with decimals)
+- Go/Java: `1746552420000000` (microseconds)
+
+You need to query QuestDB using these values.
+
+## Solution: Use Epoch Directly in WHERE Clause
+
+QuestDB stores timestamps as microseconds internally and accepts epoch values in timestamp comparisons:
+
+```questdb-sql demo title="Query with epoch microseconds"
+SELECT * FROM trades
+WHERE timestamp BETWEEN 1746552420000000 AND 1746811620000000;
+```
+
+**Important:** QuestDB expects **microseconds** by default. Multiply millisecond values by 1,000.
+
+## Understanding QuestDB Timestamp Precision
+
+QuestDB uses **microseconds** as its default timestamp precision:
+
+| Unit | Example | Convert to microseconds |
+|------|---------|-------------------------|
+| Seconds | `1746552420` | × 1,000,000 |
+| Milliseconds | `1746552420000` | × 1,000 |
+| Microseconds | `1746552420000000` | × 1 (native) |
+| Nanoseconds | `1746552420000000000` | ÷ 1,000 (for `timestamp_ns` type only) |
+
+**Converting to microseconds:**
+```sql
+-- From milliseconds (JavaScript, most APIs)
+SELECT * FROM trades
+WHERE timestamp >= 1746552420000 * 1000;
+
+-- From seconds (Unix timestamp)
+SELECT * FROM trades
+WHERE timestamp >= 1746552420 * 1000000;
+```
+
+## Epoch to Timestamp Conversion
+
+Convert epoch values to readable timestamps:
+
+```questdb-sql demo title="Convert epoch to timestamp for display"
+SELECT
+    timestamp,
+    cast(timestamp AS long) as epoch_micros,
+    cast(timestamp AS long) / 1000 as epoch_millis,
+    cast(timestamp AS long) / 1000000 as epoch_seconds
+FROM trades
+LIMIT 5;
+```
+
+**Results:**
+
+| timestamp | epoch_micros | epoch_millis | epoch_seconds |
+|-----------|--------------|--------------|---------------|
+| 2025-01-15T10:30:45.123456Z | 1736937045123456 | 1736937045123 | 1736937045 |
+
+## Timestamp to
Epoch Conversion + +Convert timestamp strings to epoch values: + +```questdb-sql demo title="Convert timestamp string to epoch" +SELECT + cast('2025-01-15T10:30:45.123Z' AS timestamp) as ts, + cast(cast('2025-01-15T10:30:45.123Z' AS timestamp) AS long) as epoch_micros, + cast(cast('2025-01-15T10:30:45.123Z' AS timestamp) AS long) / 1000 as epoch_millis +``` + +## Working with Milliseconds from JavaScript + +JavaScript `Date.now()` returns milliseconds. Convert for QuestDB: + +**JavaScript:** +```javascript +const now = Date.now(); // e.g., 1746552420000 +const queryStart = now - (24 * 60 * 60 * 1000); // 24 hours ago + +// Query QuestDB (multiply by 1000 for microseconds) +const query = ` + SELECT * FROM trades + WHERE timestamp >= ${queryStart * 1000} + AND timestamp <= ${now * 1000} +`; +``` + +**Python:** +```python +import time + +now_seconds = time.time() # e.g., 1746552420.123456 +now_micros = int(now_seconds * 1_000_000) + +query = f""" + SELECT * FROM trades + WHERE timestamp >= {now_micros - 86400000000} + AND timestamp <= {now_micros} +""" +``` + +## Comparative Queries + +**Using timestamp strings:** +```sql +SELECT * FROM trades +WHERE timestamp BETWEEN '2025-01-15T00:00:00' AND '2025-01-16T00:00:00'; +``` + +**Using epoch microseconds (equivalent):** +```sql +SELECT * FROM trades +WHERE timestamp BETWEEN 1737417600000000 AND 1737504000000000; +``` + +**Performance:** Both are equally fast - QuestDB converts strings to microseconds internally. 
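The unit conversions above can be wrapped in a small client-side helper before building the query string. A minimal Python sketch (the `to_micros` helper is hypothetical, not part of any QuestDB client library):

```python
# Convert epoch values in common units to QuestDB's native microseconds.
# Hypothetical client-side helper; QuestDB itself just receives the integer.
def to_micros(value, unit):
    factors = {"s": 1_000_000, "ms": 1_000, "us": 1}
    return int(value * factors[unit])

start = to_micros(1746552420000, "ms")  # JavaScript Date.now() style
end = to_micros(1746811620, "s")        # Unix seconds

query = f"SELECT * FROM trades WHERE timestamp BETWEEN {start} AND {end}"
print(start, end)  # 1746552420000000 1746811620000000
```

Normalizing units in one place avoids mixing milliseconds and microseconds in the same query.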
+
## Time Range with Epoch

Calculate time ranges using epoch values:

```questdb-sql demo title="Last 7 days using epoch calculation"
DECLARE
    @now := cast(now() AS long),
    @week_ago := @now - (7 * 24 * 60 * 60 * 1000000)
SELECT * FROM trades
WHERE timestamp >= @week_ago
LIMIT 100;
```

**Breakdown:**
- 7 days = 7 × 24 × 60 × 60 × 1,000,000 microseconds
- Subtract from current timestamp to get cutoff

## Aggregating by Epoch Intervals

Group by time using epoch arithmetic:

```questdb-sql demo title="Aggregate by 5-minute intervals using epoch"
SELECT
    (cast(timestamp AS long) / 300000000) * 300000000 as interval_start,
    count(*) as trade_count,
    avg(price) as avg_price
FROM trades
WHERE timestamp >= dateadd('d', -1, now())
GROUP BY interval_start
ORDER BY interval_start;
```

**Calculation:**
- 5 minutes = 300 seconds = 300,000,000 microseconds
- Divide, truncate (integer division), multiply back to get interval start

**Better alternative:**
```sql
SELECT
    timestamp,
    count(*) as trade_count
FROM trades
SAMPLE BY 5m;
```

SAMPLE BY buckets rows by the designated timestamp directly, so there is no need for manual epoch arithmetic.

## Nanosecond Precision

For `timestamp_ns` columns (nanosecond precision):

```sql
-- Create table with nanosecond precision
CREATE TABLE high_freq_trades (
    symbol SYMBOL,
    price DOUBLE,
    timestamp_ns TIMESTAMP_NS
) TIMESTAMP(timestamp_ns);

-- Query with nanosecond epoch
SELECT * FROM high_freq_trades
WHERE timestamp_ns BETWEEN 1746552420000000000 AND 1746811620000000000;
```

Note: Multiply microseconds by 1000 or milliseconds by 1,000,000 for nanoseconds.
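The integer-division trick used for interval bucketing is easy to verify client-side. A minimal Python sketch of the same arithmetic (illustrative values):

```python
# Floor an epoch-microsecond timestamp to its 5-minute bucket, mirroring
# the SQL (cast(timestamp AS long) / 300000000) * 300000000 pattern.
FIVE_MIN_US = 5 * 60 * 1_000_000  # 300,000,000 microseconds

def bucket_start(epoch_micros):
    # Integer division truncates, multiplying back gives the bucket boundary
    return (epoch_micros // FIVE_MIN_US) * FIVE_MIN_US

ts = 1746552420123456
print(bucket_start(ts))  # 1746552300000000
```

The same pattern works for any interval: swap the constant for the interval length in microseconds.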
+ +## Dynamic Epoch from Current Time + +Calculate epoch values relative to now: + +```questdb-sql demo title="Calculate epoch for queries" +SELECT + cast(now() AS long) as current_epoch_micros, + cast(dateadd('h', -1, now()) AS long) as one_hour_ago_micros, + cast(dateadd('d', -7, now()) AS long) as one_week_ago_micros; +``` + +Use these values in application queries: + +```sql +-- In your application, get the epoch value: +-- epoch_start = execute("SELECT cast(dateadd('d', -1, now()) AS long)") + +-- Then use in parameterized query: +SELECT * FROM trades WHERE timestamp >= ? +``` + +## Common Epoch Conversions + +| Duration | Microseconds | Milliseconds | Seconds | +|----------|--------------|--------------|---------| +| 1 second | 1,000,000 | 1,000 | 1 | +| 1 minute | 60,000,000 | 60,000 | 60 | +| 1 hour | 3,600,000,000 | 3,600,000 | 3,600 | +| 1 day | 86,400,000,000 | 86,400,000 | 86,400 | +| 1 week | 604,800,000,000 | 604,800,000 | 604,800 | + +## Debugging Epoch Values + +Convert suspect epoch values to verify correctness: + +```questdb-sql demo title="Verify epoch timestamp" +SELECT + 1746552420000000 as input_micros, + cast(1746552420000000 as timestamp) as as_timestamp, + CASE + WHEN cast(1746552420000000 as timestamp) > '1970-01-01' THEN 'Valid' + ELSE 'Invalid - too small' + END as validity; +``` + +**Common mistakes:** +- Using milliseconds instead of microseconds (off by 1000x) +- Using seconds instead of microseconds (off by 1,000,000x) +- Wrong epoch (some systems use 1900 or 2000 as base) + +## Mixed Epoch and String Queries + +You can mix epoch and string timestamps in the same query: + +```sql +SELECT * FROM trades +WHERE timestamp >= 1746552420000000 -- Epoch microseconds + AND timestamp < '2025-01-16T00:00:00' -- Timestamp string + AND symbol = 'BTC-USDT'; +``` + +QuestDB handles both formats seamlessly. 
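The common mistakes above can often be caught before the query runs by checking the number of digits. A hypothetical Python helper sketching that heuristic:

```python
# Guess an epoch value's unit from its digit count. Heuristic only:
# valid for present-day timestamps (10 digits in seconds covers roughly
# 2001 to 2286), and for this illustration only.
def guess_unit(epoch):
    digits = len(str(abs(int(epoch))))
    return {
        10: "seconds",
        13: "milliseconds",
        16: "microseconds",
        19: "nanoseconds",
    }.get(digits, "unknown")

print(guess_unit(1746552420))        # seconds
print(guess_unit(1746552420000000))  # microseconds
```

A mismatch between the guessed unit and the one your query assumes is a strong hint that results will be off by a factor of 1000 or more.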
+ +:::tip When to Use Epoch +Use epoch timestamps when: +- Interfacing with systems that provide epoch time +- Doing arithmetic on timestamps (adding/subtracting microseconds) +- Minimizing string parsing overhead in high-frequency scenarios + +Use timestamp strings when: +- Writing queries manually (more readable) +- Debugging timestamp issues +- Working with date/time functions +::: + +:::warning Precision Matters +Always verify the precision of your epoch timestamps: +- Milliseconds: 13 digits (e.g., `1746552420000`) +- Microseconds: 16 digits (e.g., `1746552420000000`) +- Nanoseconds: 19 digits (e.g., `1746552420000000000`) + +Wrong precision will give incorrect results by factors of 1000x! +::: + +:::info Related Documentation +- [CAST function](/docs/reference/function/cast/) +- [Timestamp types](/docs/reference/sql/datatypes/#timestamp-and-date) +- [dateadd()](/docs/reference/function/date-time/#dateadd) +- [now()](/docs/reference/function/date-time/#now) +::: diff --git a/documentation/playbook/sql/time-series/expand-power-over-time.md b/documentation/playbook/sql/time-series/expand-power-over-time.md new file mode 100644 index 000000000..5fa18b348 --- /dev/null +++ b/documentation/playbook/sql/time-series/expand-power-over-time.md @@ -0,0 +1,305 @@ +--- +title: Expand Average Power Over Time +sidebar_label: Expand power over time +description: Distribute average power values across hourly intervals using sessions and window functions for IoT energy data +--- + +Expand discrete energy measurements across time intervals to visualize average power consumption. When IoT devices report cumulative energy (watt-hours) at irregular intervals, you need to distribute that energy across the hours it was consumed. + +## Problem: Sparse Energy Readings to Hourly Distribution + +You have IoT devices reporting watt-hour (Wh) values at discrete timestamps. You want to: +1. Calculate average power (W) between readings +2. 
Distribute that power across each hour in the period +3. Visualize hourly energy consumption + +**Sample data:** + +| timestamp | operationId | wh | +|-----------|-------------|-----| +| 14:10:59 | 1001 | 0 | +| 18:18:05 | 1001 | 200 | +| 14:20:01 | 1002 | 0 | +| 22:20:10 | 1002 | 300 | + +For operation 1001: 200 Wh consumed over 4 hours 7 minutes → needs to be distributed across hours 14:00, 15:00, 16:00, 17:00, 18:00. + +## Solution: Session-Based Distribution + +Use SAMPLE BY to create hourly intervals, then use sessions to identify and distribute energy across attributable hours: + +```questdb-sql demo title="Distribute average power across hours" +WITH +sampled AS ( + SELECT timestamp, operationId, sum(wh) as wh + FROM meter + SAMPLE BY 1h + FILL(0) +), +sessions AS ( + SELECT *, + SUM(CASE WHEN wh > 0 THEN 1 END) + OVER (PARTITION BY operationId ORDER BY timestamp DESC) as session + FROM sampled +), +counts AS ( + SELECT timestamp, operationId, + FIRST_VALUE(wh) OVER (PARTITION BY operationId, session ORDER BY timestamp DESC) as wh, + COUNT(*) OVER (PARTITION BY operationId, session) as attributable_hours + FROM sessions +) +SELECT + timestamp, + operationId, + wh / attributable_hours as wh_avg +FROM counts; +``` + +**Results:** + +| timestamp | operationId | wh_avg | +|-----------|-------------|--------| +| 14:00:00 | 1001 | 39.67 | +| 15:00:00 | 1001 | 48.56 | +| 16:00:00 | 1001 | 48.56 | +| 17:00:00 | 1001 | 48.56 | +| 18:00:00 | 1001 | 14.64 | +| 14:00:00 | 1002 | 24.98 | +| 15:00:00 | 1002 | 37.49 | +| ... | ... | ... | + +## How It Works + +The query uses a four-step approach: + +### 1. Sample by Hour (`sampled`) + +```sql +SELECT timestamp, operationId, sum(wh) as wh +FROM meter +SAMPLE BY 1h +FILL(0) +``` + +Creates hourly buckets with: +- Sum of wh values if data exists in that hour +- 0 for hours with no data (via FILL(0)) + +This ensures we have a row for every hour in the time range. + +### 2. 
Identify Sessions (`sessions`) + +```sql +SUM(CASE WHEN wh > 0 THEN 1 END) + OVER (PARTITION BY operationId ORDER BY timestamp DESC) +``` + +Working backwards in time (DESC order), increment a counter whenever we see a non-zero wh value. This creates "sessions" where: +- Each session = one energy reading +- Session includes all preceding zero-value hours +- Sessions are numbered: 1, 2, 3, ... (higher numbers are earlier in time) + +**Example for operation 1001:** + +| timestamp | wh | session | +|-----------|-----|---------| +| 18:00 | 200 | 1 | ← Reading at 18:00 +| 17:00 | 0 | 1 | ← Part of session 1 +| 16:00 | 0 | 1 | ← Part of session 1 +| 15:00 | 0 | 1 | ← Part of session 1 +| 14:00 | 0 | 1 | ← Part of session 1 + +### 3. Calculate Attributable Hours (`counts`) + +```sql +FIRST_VALUE(wh) OVER (PARTITION BY operationId, session ORDER BY timestamp DESC) +``` + +For each session, get the wh value (which appears in the first row when sorted DESC). + +```sql +COUNT(*) OVER (PARTITION BY operationId, session) +``` + +Count how many hours are in each session (how many hours to distribute energy across). + +### 4. Distribute Energy + +```sql +wh / attributable_hours +``` + +Divide the total energy by the number of hours to get average energy per hour. + +## Handling Partial Hours + +The query distributes energy evenly across hours, but actual consumption may not be uniform. 
For more accuracy with partial hours: + +```questdb-sql demo title="Calculate mean power between readings using LAG" +SELECT + timestamp AS end_time, + cast(prev_ts AS timestamp) AS start_time, + operationId, + (wh - prev_wh) / ((cast(timestamp AS DOUBLE) - prev_ts) / 3600000000.0) AS mean_power_w +FROM ( + SELECT + timestamp, + wh, + operationId, + lag(wh) OVER (PARTITION BY operationId ORDER BY timestamp) AS prev_wh, + lag(cast(timestamp AS DOUBLE)) OVER (PARTITION BY operationId ORDER BY timestamp) AS prev_ts + FROM meter +) +WHERE prev_ts IS NOT NULL +ORDER BY timestamp; +``` + +This calculates true average power (W) between consecutive readings, accounting for exact time differences. + +## Adapting the Pattern + +**Different time intervals:** +```sql +-- 15-minute intervals +SAMPLE BY 15m + +-- Daily intervals +SAMPLE BY 1d +``` + +**Multiple devices:** +```sql +-- Already handled by PARTITION BY operationId +-- Works automatically for any number of devices +``` + +**Filter by time range:** +```sql +WITH sampled AS ( + SELECT timestamp, operationId, sum(wh) as wh + FROM meter + WHERE timestamp >= '2025-01-01' AND timestamp < '2025-02-01' + SAMPLE BY 1h + FILL(0) +) +... +``` + +**Include device metadata:** +```sql +WITH sampled AS ( + SELECT + timestamp, + operationId, + first(location) as location, + first(device_type) as device_type, + sum(wh) as wh + FROM meter + SAMPLE BY 1h + FILL(0) +) +... +``` + +## Visualization in Grafana + +This query output is perfect for Grafana time-series charts: + +```sql +SELECT + timestamp as time, + operationId as metric, + wh / attributable_hours as value +FROM ( + -- ... full query from above ... 
+) +WHERE $__timeFilter(timestamp) +ORDER BY timestamp; +``` + +Configure Grafana to: +- Group by `metric` (operationId) +- Stack series to show total consumption +- Use area chart for energy visualization + +## Alternative: Pre-calculated Power + +If you calculate power at ingestion time, queries become simpler: + +```sql +-- At ingestion, calculate instantaneous power +INSERT INTO power_readings +SELECT + timestamp, + operationId, + (wh - prev_wh) / seconds_elapsed as power_w +FROM meter; + +-- Then query is simple +SELECT + timestamp_floor('h', timestamp) as hour, + operationId, + avg(power_w) as avg_power +FROM power_readings +GROUP BY hour, operationId +ORDER BY hour; +``` + +## Performance Considerations + +**Filter by operationId:** +```sql +-- For specific devices +WHERE operationId IN ('1001', '1002', '1003') +``` + +**Limit time range:** +```sql +-- Only recent data +WHERE timestamp >= dateadd('d', -30, now()) +``` + +**Pre-aggregate if querying frequently:** +```sql +-- Create materialized hourly view +CREATE TABLE hourly_power AS +SELECT ... FROM meter ... SAMPLE BY 1h; + +-- Refresh periodically +-- (manual or scheduled) +``` + +## Common Issues + +**Negative power values:** +- Occurs when devices report decreasing wh (meter reset, rollover) +- Filter with `WHERE wh_avg > 0` or handle resets explicitly + +**Large gaps in data:** +- Long sessions distribute energy over many hours +- Consider adding a maximum session duration filter +- Or handle gaps differently (mark as "unknown" rather than distribute) + +**First reading has no previous value:** +- LAG returns NULL for first reading +- Filter with `WHERE prev_ts IS NOT NULL` + +:::tip Energy vs Power +- **Energy** (Wh): Cumulative, reported by meter +- **Power** (W): Rate of energy consumption (Wh per hour) +- **Average power** = Energy difference / Time elapsed + +This pattern converts from sparse energy readings to continuous power timeline. 
+::: + +:::warning Session Direction +The query uses `ORDER BY timestamp DESC` to work backwards in time. This is because we want to group zero-hours that occur BEFORE each reading. If you reverse the order, the distribution won't work correctly. +::: + +:::info Related Documentation +- [SAMPLE BY](/docs/reference/sql/select/#sample-by) +- [FILL](/docs/reference/sql/select/#fill) +- [Window functions](/docs/reference/sql/select/#window-functions) +- [FIRST_VALUE](/docs/reference/function/window/#first_value) +- [LAG](/docs/reference/function/window/#lag) +::: diff --git a/documentation/playbook/sql/time-series/fill-missing-intervals.md b/documentation/playbook/sql/time-series/fill-missing-intervals.md new file mode 100644 index 000000000..47bc9115a --- /dev/null +++ b/documentation/playbook/sql/time-series/fill-missing-intervals.md @@ -0,0 +1,404 @@ +--- +title: Fill Missing Time Intervals +sidebar_label: Fill missing intervals +description: Create regular time intervals and propagate sparse values using FILL with PREV, NULL, LINEAR, or constant values +--- + +Transform sparse event data into regular time-series by creating fixed intervals and filling gaps with appropriate values. This is essential for visualization, resampling, and aligning data from multiple sources. 
+
## Problem: Sparse Events Need Regular Intervals

You have configuration changes recorded only when they occur:

| timestamp | config_key | config_value |
|-----------|------------|--------------|
| 08:00:00 | max_connections | 100 |
| 10:30:00 | max_connections | 150 |
| 14:00:00 | max_connections | 200 |

You want a row for every hourly interval, carrying each value forward until the next change:

| timestamp | config_value |
|-----------|--------------|
| 08:00:00 | 100 |
| 09:00:00 | 100 | ← Filled forward
| 10:00:00 | 150 | ← New value (10:30 reading falls in this bucket)
| 11:00:00 | 150 | ← Filled forward
| 12:00:00 | 150 | ← Filled forward
| 13:00:00 | 150 | ← Filled forward
| 14:00:00 | 200 | ← New value

## Solution: SAMPLE BY with FILL(PREV)

Use SAMPLE BY to create intervals and FILL(PREV) to forward-fill values:

```questdb-sql demo title="Forward-fill configuration values"
SELECT
    timestamp,
    first(config_value) as config_value
FROM config_changes
WHERE config_key = 'max_connections'
  AND timestamp >= '2025-01-15T08:00:00'
  AND timestamp < '2025-01-15T15:00:00'
SAMPLE BY 1h FILL(PREV);
```

**Results:**

| timestamp | config_value |
|-----------|--------------|
| 08:00:00 | 100 |
| 09:00:00 | 100 |
| 10:00:00 | 150 |
| 11:00:00 | 150 |
| 12:00:00 | 150 |
| 13:00:00 | 150 |
| 14:00:00 | 200 |

Note that the 10:30 change appears in the 10:00 bucket: each bucket reports the first value recorded within it, and empty buckets inherit the previous bucket's value.

## How It Works

### SAMPLE BY Creates Intervals

```sql
SAMPLE BY 1h
```

Creates hourly buckets regardless of whether data exists in that hour.

### FILL(PREV) Propagates Values

```sql
FILL(PREV)
```

When an interval has no data:
- Copies the value from the previous non-empty interval
- First interval with no data remains NULL (no previous value)

### first() Aggregate

```sql
first(config_value)
```

Takes the first value within each interval. For sparse data with one value per relevant interval, this extracts that value.
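FILL(PREV)'s forward-fill behavior is easy to model outside the database. A minimal Python sketch with illustrative data (not tied to any QuestDB API):

```python
# Model FILL(PREV): walk ordered buckets and carry the last seen value
# into empty ones. Bucket indices and values are illustrative.
readings = {0: 100, 3: 150, 6: 200}  # bucket index -> first(value); others empty

filled = {}
prev = None
for bucket in range(0, 7):
    if bucket in readings:
        prev = readings[bucket]
    filled[bucket] = prev  # stays None until the first value is seen
print(filled)  # {0: 100, 1: 100, 2: 100, 3: 150, 4: 150, 5: 150, 6: 200}
```

This also shows why a leading empty bucket stays NULL: there is no previous value to carry forward.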
+ +## Different FILL Strategies + +QuestDB supports multiple fill strategies: + +```questdb-sql demo title="Compare FILL strategies" +-- FILL(NULL): Leave gaps as NULL +SELECT timestamp, first(price) as price_null +FROM trades +WHERE symbol = 'BTC-USDT' +SAMPLE BY 1m FILL(NULL); + +-- FILL(PREV): Forward-fill from previous value +SELECT timestamp, first(price) as price_prev +FROM trades +WHERE symbol = 'BTC-USDT' +SAMPLE BY 1m FILL(PREV); + +-- FILL(LINEAR): Linear interpolation between known values +SELECT timestamp, first(price) as price_linear +FROM trades +WHERE symbol = 'BTC-USDT' +SAMPLE BY 1m FILL(LINEAR); + +-- FILL(100.0): Constant value +SELECT timestamp, first(price) as price_const +FROM trades +WHERE symbol = 'BTC-USDT' +SAMPLE BY 1m FILL(100.0); +``` + +**When to use each:** + +| Strategy | Use Case | Example | +|----------|----------|---------| +| **FILL(NULL)** | Explicit gaps, no assumption | Sparse sensor data where missing = no reading | +| **FILL(PREV)** | State changes, step functions | Configuration values, status flags | +| **FILL(LINEAR)** | Smoothly varying metrics | Temperature, stock prices between trades | +| **FILL(constant)** | Default/baseline values | Filling with zero for missing counters | + +## Multiple Columns with Different Strategies + +Apply different fill strategies to different columns: + +```questdb-sql demo title="Mixed fill strategies" +SELECT + timestamp, + first(temperature) as temperature, -- Will use FILL(LINEAR) + first(status) as status -- Will use FILL(PREV) +FROM sensor_events +WHERE sensor_id = 'S001' + AND timestamp >= dateadd('h', -6, now()) +SAMPLE BY 5m +FILL(LINEAR); -- Applies to ALL numeric columns +``` + +**Limitation:** FILL applies to all columns identically. For per-column control, use separate queries with UNION ALL. 
+
## Forward-Fill with Limits

Limit how far forward to propagate values. A plain FILL(PREV) cannot express this: once values are filled, you can no longer tell filled intervals from real ones. Instead, sample with FILL(NULL) and join each interval back to the most recent interval that contained an actual reading:

```questdb-sql demo title="Forward-fill with maximum gap"
WITH sampled AS (
    SELECT
        timestamp,
        first(sensor_value) as value
    FROM sensor_readings
    WHERE sensor_id = 'S001'
      AND timestamp >= dateadd('h', -24, now())
    SAMPLE BY 10m FILL(NULL) -- keep empty intervals as NULL
), actual AS (
    SELECT timestamp, value
    FROM sampled
    WHERE value IS NOT NULL -- only intervals with a real reading
)
SELECT
    s.timestamp,
    CASE
        WHEN datediff('m', a.timestamp, s.timestamp) > 30 THEN NULL -- reading older than 30 minutes: don't trust the fill
        ELSE a.value
    END as value_with_limit
FROM sampled s
ASOF JOIN actual a
ORDER BY s.timestamp;
```

ASOF JOIN pairs each sampled interval with the most recent interval that actually contained a reading. If that reading is more than 30 minutes old, the filled value is discarded as stale. This prevents filling forward after large gaps where the value is likely no longer valid.

## Interpolate Between Sparse Updates

Use LINEAR fill for numeric values that change gradually:

```questdb-sql demo title="Linear interpolation between price updates"
SELECT
    timestamp,
    first(price) as price
FROM market_snapshots
WHERE symbol = 'BTC-USDT'
  AND timestamp >= '2025-01-15T00:00:00'
  AND timestamp < '2025-01-15T01:00:00'
SAMPLE BY 1m FILL(LINEAR);
```

**Example:**
- 00:00: price = 100
- 00:10: price = 110
- Result: 00:01→101, 00:02→102, ..., 00:09→109

Linear interpolation assumes constant rate of change between known points.
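The interpolation arithmetic is straightforward to verify by hand. A Python sketch of the same calculation, using the 100-to-110 example values:

```python
# Linear interpolation between two known points, as FILL(LINEAR) does
# between non-empty intervals. Times and prices are illustrative.
def interpolate(t0, v0, t1, v1, t):
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# Price 100 at minute 0, price 110 at minute 10: one value per minute.
minute_values = [interpolate(0, 100.0, 10, 110.0, m) for m in range(11)]
print(minute_values)  # [100.0, 101.0, ..., 109.0, 110.0]
```

If the true signal is not linear between readings, these intermediate values are estimates, not measurements.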
+
## Fill State Changes for Grafana

Create step charts in Grafana by forward-filling status values:

```questdb-sql demo title="Service status for Grafana step chart"
SELECT
    timestamp as time,
    first(status) as "Service Status"
FROM service_events
WHERE $__timeFilter(timestamp)
SAMPLE BY $__interval FILL(PREV);
```

Configure the Grafana panel to use the "Staircase" line style:
- It shows clear state transitions
- It avoids misleading interpolation between discrete states

## Align Multiple Sensors to Common Timeline

Fill sparse data from multiple sensors to create aligned time-series:

```questdb-sql demo title="Align multiple sensors to common intervals"
SELECT
    timestamp,
    sensor_id,
    first(temperature) as temperature
FROM sensor_readings
WHERE sensor_id IN ('S001', 'S002', 'S003')
  AND timestamp >= dateadd('h', -1, now())
SAMPLE BY 1m FILL(PREV);
```

**Results:**

| timestamp | sensor_id | temperature |
|-----------|-----------|-------------|
| 10:00:00 | S001 | 22.5 |
| 10:00:00 | S002 | 23.1 |
| 10:00:00 | S003 | 22.8 |
| 10:01:00 | S001 | 22.5 | ← Filled forward
| 10:01:00 | S002 | 23.2 | ← New reading
| 10:01:00 | S003 | 22.8 | ← Filled forward

Now all sensors have values at the same timestamps, enabling cross-sensor analysis.
+ +## Fill with Context from Another Column + +Propagate one column while aggregating another differently: + +```questdb-sql demo title="Fill status while summing events" +SELECT + timestamp, + first(current_status) as status, -- Forward-fill status + count(*) as event_count -- Count events (0 if none) +FROM system_events +WHERE timestamp >= dateadd('h', -6, now()) +SAMPLE BY 10m FILL(PREV); +``` + +**Results:** + +| timestamp | status | event_count | +|-----------|--------|-------------| +| 10:00 | running | 15 | +| 10:10 | running | 0 | ← Status filled, but no events +| 10:20 | running | 23 | +| 10:30 | stopped | 1 | ← Status changed +| 10:40 | stopped | 0 | ← Status filled forward + +## NULL for First Interval with No Data + +FILL(PREV) can't fill the first interval if it has no data: + +```sql +SELECT timestamp, first(value) as value +FROM sparse_data +WHERE timestamp >= '2025-01-15T00:00:00' +SAMPLE BY 1h FILL(PREV); +``` + +If first interval (00:00-01:00) has no data, it will be NULL (no previous value to copy). + +**Solution:** Start range from a timestamp you know has data, or use COALESCE with a default: + +```sql +SELECT + timestamp, + COALESCE(first(value), 0) as value -- Use 0 if NULL +FROM sparse_data +SAMPLE BY 1h FILL(PREV); +``` + +## Performance: FILL vs Window Functions + +**FILL is optimized for SAMPLE BY:** + +```sql +-- Fast: Native FILL implementation +SELECT timestamp, first(value) +FROM data +SAMPLE BY 1h FILL(PREV); + +-- Slower: Manual implementation with LAG +SELECT + timestamp, + COALESCE( + first(value), + lag(first(value)) OVER (ORDER BY timestamp) + ) as value +FROM data +SAMPLE BY 1h; +``` + +Use FILL when possible for better performance. 
+
## Creating Complete Time Range

Ensure coverage of the full time range even if no data exists:

```questdb-sql demo title="Generate intervals for full day"
SELECT
    timestamp,
    first(temperature) as temperature
FROM sensor_readings
WHERE timestamp >= '2025-01-15T00:00:00'
  AND timestamp < '2025-01-16T00:00:00'
  AND sensor_id = 'S001'
SAMPLE BY 1h FILL(PREV);
```

Even if the sensor reported no data for some hours, you'll get 24 rows (one per hour).

## FILL and LATEST ON

LATEST ON returns only the most recent row per partition, so it cannot be combined with SAMPLE BY to fill a whole time range. To sample and fill recent data efficiently on a large table, restrict the time range instead:

```questdb-sql demo title="Fill recent data efficiently"
SELECT
    timestamp,
    first(price) as price
FROM trades
WHERE symbol = 'BTC-USDT'
  AND timestamp >= dateadd('h', -6, now())
SAMPLE BY 1m FILL(PREV);
```

The timestamp filter limits the scan to recent partitions before sampling and filling.

## Common Pitfalls

**Wrong aggregate with FILL(PREV):**

```sql
-- Bad: sum() with FILL(PREV) doesn't make sense
SELECT timestamp, sum(trade_count) FROM trades SAMPLE BY 1h FILL(PREV);
-- Fills missing hours with previous hour's sum (misleading!)
+ +-- Good: Use FILL(0) for counts/sums +SELECT timestamp, sum(trade_count) FROM trades SAMPLE BY 1h FILL(0); +``` + +**FILL(LINEAR) with non-numeric types:** + +```sql +-- Error: Can't interpolate strings +SELECT timestamp, first(status_text) FROM events SAMPLE BY 1h FILL(LINEAR); + +-- Correct: Use FILL(PREV) for strings/symbols +SELECT timestamp, first(status_text) FROM events SAMPLE BY 1h FILL(PREV); +``` + +## Comparison with NULL Handling + +| Approach | Result | Use Case | +|----------|--------|----------| +| **No FILL** | Fewer rows (sparse) | Raw data export, missing data is meaningful | +| **FILL(NULL)** | All intervals, NULLs for gaps | Explicit gap tracking, Grafana shows breaks | +| **FILL(PREV)** | All intervals, forward-filled | Step functions, state that persists | +| **FILL(LINEAR)** | All intervals, interpolated | Smooth metrics, estimated intermediate values | +| **FILL(0)** | All intervals, zeros for gaps | Counts, volumes (missing = zero activity) | + +:::tip When to Use FILL(PREV) +Perfect for: +- Configuration values (persist until changed) +- Status/state (remains until transition) +- Categorical data (can't interpolate) +- Creating step charts in Grafana +- Aligning sparse data from multiple sources +::: + +:::warning Data Interpretation +FILL(PREV) creates synthetic data points. Distinguish between: +- **Actual measurements**: Sensor reported a value +- **Filled values**: Value propagated from previous interval + +Consider adding a flag column to mark filled vs actual data if this distinction matters. 
+::: + +:::info Related Documentation +- [SAMPLE BY](/docs/reference/sql/select/#sample-by) +- [FILL keyword](/docs/reference/sql/select/#fill) +- [first() aggregate](/docs/reference/function/aggregation/#first) +- [LATEST ON](/docs/reference/sql/select/#latest-on) +::: diff --git a/documentation/playbook/sql/time-series/filter-by-week.md b/documentation/playbook/sql/time-series/filter-by-week.md new file mode 100644 index 000000000..450e83d7e --- /dev/null +++ b/documentation/playbook/sql/time-series/filter-by-week.md @@ -0,0 +1,245 @@ +--- +title: Filter Data by Week Number +sidebar_label: Filter by week +description: Query data by ISO week number using week_of_year() or dateadd() for better performance +--- + +Filter time-series data by ISO week number (1-52/53) using either the built-in `week_of_year()` function or `dateadd()` for better performance on large tables. + +## Problem: Query Specific Week + +You want to get all data from week 24 of 2025, regardless of which days that includes. + +## Solution 1: Using week_of_year() Function + +The simplest approach uses the built-in function: + +```questdb-sql demo title="Get all trades from week 24" +SELECT * FROM trades +WHERE week_of_year(timestamp) = 24 + AND year(timestamp) = 2025; +``` + +This works but requires evaluating the function for every row, which can be slow on large tables. 
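You can sanity-check ISO week numbers on the client side: Python's `datetime` module follows the same ISO 8601 week rules as `week_of_year()`:

```python
# Cross-check ISO week numbers with Python's datetime, which uses the
# same ISO 8601 week rules as QuestDB's week_of_year().
from datetime import date

iso_year, iso_week, iso_weekday = date(2025, 6, 12).isocalendar()
print(iso_year, iso_week)  # 2025 24
```

This is handy when verifying that a given calendar date really falls in the week your query targets.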
+ +## Solution 2: Using dateadd() (Faster) + +Calculate the week boundaries once and filter by timestamp range: + +```questdb-sql demo title="Get week 24 data using date range (faster)" +DECLARE + @year := '2025', + @week := 24, + @first_monday := dateadd('d', -1 * day_of_week(@year) + 1, @year), + @week_start := dateadd('w', @week - 1, @first_monday), + @week_end := dateadd('w', @week, @first_monday) +SELECT * FROM trades +WHERE timestamp >= @week_start + AND timestamp < @week_end; +``` + +This approach: +- Calculates week boundaries once +- Uses timestamp index for fast filtering +- Executes much faster on large tables + +## How It Works + +### ISO Week Numbering + +ISO 8601 defines weeks as: +- Week starts on Monday +- Week 1 contains the first Thursday of the year +- Year can have 52 or 53 weeks + +### Calculation Steps + +**1. Find first Monday:** +```sql +@first_monday := dateadd('d', -1 * day_of_week(@year) + 1, @year) +``` +- `day_of_week(@year)`: Day of week for Jan 1 (1=Mon, 7=Sun) +- Calculate days to subtract to get to previous/same Monday +- This gives the Monday of the week containing Jan 1 + +**2. Calculate week start:** +```sql +@week_start := dateadd('w', @week - 1, @first_monday) +``` +- Add `@week - 1` weeks to first Monday +- This gives Monday of the target week + +**3. 
Calculate week end:**
```sql
@week_end := dateadd('w', @week, @first_monday)
```
- Add one more week to get the exclusive boundary
- Use `<` (not `<=`) to exclude next week's Monday

## Full Example with Results

```questdb-sql demo title="Week 24 trades with boundaries shown"
DECLARE
    @year := '2025',
    @week := 24,
    @first_monday := dateadd('d', -1 * day_of_week(@year) + 1, @year),
    @week_start := dateadd('w', @week - 1, @first_monday),
    @week_end := dateadd('w', @week, @first_monday)
SELECT
    @week_start as week_start,
    @week_end as week_end,
    count(*) as trade_count,
    sum(amount) as total_volume
FROM trades
WHERE timestamp >= @week_start
  AND timestamp < @week_end;
```

**Results:**

| week_start | week_end | trade_count | total_volume |
|------------|----------|-------------|--------------|
| 2025-06-09 | 2025-06-16 | 145,623 | 89,234.56 |

## Multiple Weeks

Query several consecutive weeks:

```questdb-sql demo title="Weeks 20-25 aggregated by week"
DECLARE
    @year := '2025',
    @first_monday := dateadd('d', -1 * day_of_week(@year) + 1, @year)
SELECT
    week_of_year(timestamp) as week,
    count(*) as trade_count,
    sum(amount) as total_volume
FROM trades
WHERE timestamp >= dateadd('w', 19, @first_monday) -- Week 20 start
  AND timestamp < dateadd('w', 25, @first_monday) -- Week 26 start (exclusive)
GROUP BY week
ORDER BY week;
```

Note that the start of week N is `dateadd('w', N - 1, @first_monday)`, so the exclusive upper bound for week 25 is the start of week 26, `dateadd('w', 25, @first_monday)`.

## Current Week

Get data for the current week:

```sql
DECLARE
    @today := now(),
    @week_start := timestamp_floor('w', @today)
SELECT * FROM trades
WHERE timestamp >= @week_start
  AND timestamp < dateadd('w', 1, @week_start);
```

`timestamp_floor('w', timestamp)` rounds down to the most recent Monday.
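When the week boundaries are computed in the application rather than in SQL, Python's `date.fromisocalendar()` (Python 3.8+) gives the same Monday-based boundaries. A sketch, with the table and column names taken from the examples above:

```python
# Compute ISO week boundaries client-side and inline them into a
# timestamp-range query.
from datetime import date, timedelta

year, week = 2025, 24
week_start = date.fromisocalendar(year, week, 1)  # Monday of week 24
week_end = week_start + timedelta(weeks=1)        # exclusive upper bound

query = (
    "SELECT * FROM trades "
    f"WHERE timestamp >= '{week_start}' AND timestamp < '{week_end}'"
)
print(week_start, week_end)  # 2025-06-09 2025-06-16
```

In production, prefer bind parameters over string interpolation when the client library supports them.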
+ +## Week-over-Week Comparison + +Compare the same week across different years: + +```sql +DECLARE + @week := 24, + @year1 := '2024', + @year2 := '2025', + @first_monday_2024 := dateadd('d', -1 * day_of_week(@year1) + 1, @year1), + @first_monday_2025 := dateadd('d', -1 * day_of_week(@year2) + 1, @year2), + @week_start_2024 := dateadd('w', @week - 1, @first_monday_2024), + @week_end_2024 := dateadd('w', @week, @first_monday_2024), + @week_start_2025 := dateadd('w', @week - 1, @first_monday_2025), + @week_end_2025 := dateadd('w', @week, @first_monday_2025) +SELECT + '2024' as year, + count(*) as trade_count +FROM trades +WHERE timestamp >= @week_start_2024 AND timestamp < @week_end_2024 + +UNION ALL + +SELECT + '2025' as year, + count(*) as trade_count +FROM trades +WHERE timestamp >= @week_start_2025 AND timestamp < @week_end_2025; +``` + +## Performance Comparison + +**Using week_of_year() (slow on large tables):** +```sql +-- Evaluates function for EVERY row +SELECT * FROM trades +WHERE week_of_year(timestamp) = 24; +``` + +**Using dateadd() (fast):** +```sql +-- Uses timestamp index, evaluates boundaries once +DECLARE + @week_start := ..., + @week_end := ... +SELECT * FROM trades +WHERE timestamp >= @week_start AND timestamp < @week_end; +``` + +On a table with 100M rows: +- `week_of_year()` approach: ~30 seconds +- `dateadd()` approach: ~0.1 seconds (300x faster) + +## Handling Week 53 + +Some years have 53 weeks. 
Check before querying: + +```sql +DECLARE + @year := '2020', -- 2020 had 53 weeks + @week := 53 +SELECT + CASE + WHEN @week <= 52 THEN 'Valid' + WHEN @week = 53 AND week_of_year(dateadd('d', -1, dateadd('y', 1, @year))) = 53 + THEN 'Valid' + ELSE 'Invalid - year only has 52 weeks' + END as week_validity; +``` + +## ISO vs Other Week Systems + +Different systems define weeks differently: + +**ISO 8601 (Monday start, first Thursday):** +```sql +-- Use dateadd with 'w' unit +dateadd('w', n, start_date) +``` + +**US system (Sunday start):** +```sql +-- Adjust first day calculation +@first_sunday := dateadd('d', -1 * (day_of_week(@year) % 7), @year) +``` + +**Custom week definition:** +```sql +-- Define your own start day and week 1 rules +-- Calculate boundaries manually +``` + +:::tip When to Use Each Approach +- **Use `week_of_year()`**: For small tables, ad-hoc queries, or when you need the week number in results +- **Use `dateadd()`**: For large tables, performance-critical queries, or when filtering by week +::: + +:::warning Year Boundaries +Week 1 may start in the previous calendar year (late December), and week 52/53 may extend into the next year (early January). Always verify boundaries if year matters for your analysis. 
+::: + +:::info Related Documentation +- [week_of_year()](/docs/reference/function/date-time/#week_of_year) +- [dateadd()](/docs/reference/function/date-time/#dateadd) +- [day_of_week()](/docs/reference/function/date-time/#day_of_week) +- [timestamp_floor()](/docs/reference/function/date-time/#timestamp_floor) +- [DECLARE](/docs/reference/sql/declare/) +::: diff --git a/documentation/playbook/sql/time-series/latest-activity-window.md b/documentation/playbook/sql/time-series/latest-activity-window.md new file mode 100644 index 000000000..615228142 --- /dev/null +++ b/documentation/playbook/sql/time-series/latest-activity-window.md @@ -0,0 +1,273 @@ +--- +title: Query Last N Minutes of Activity +sidebar_label: Latest activity window +description: Get rows from the last N minutes of recorded activity using subqueries with max(timestamp) +--- + +Query data from the last N minutes of recorded activity in a table, regardless of the current time. This is useful when data collection is intermittent or when you want to analyze recent activity relative to when data was last recorded, not relative to "now". 
+
+## Problem: Relative to Latest Data, Not Current Time
+
+You want the last 15 minutes of activity from your table, but:
+- Data collection may have stopped hours or days ago
+- Using `WHERE timestamp > dateadd('m', -15, now())` would return empty results if no recent data
+- You need a query relative to the latest timestamp IN the table
+
+**Example scenario:**
+- Latest timestamp in table: `2025-03-23T07:24:37`
+- Current time: `2025-03-25T14:30:00` (2 days later)
+- You want: Data from `2025-03-23T07:09:37` to `2025-03-23T07:24:37` (last 15 minutes of activity)
+
+## Solution: Subquery with max(timestamp)
+
+Use a subquery to find the latest timestamp, then filter relative to it:
+
+```questdb-sql demo title="Last 15 minutes of recorded activity"
+SELECT * FROM trades
+WHERE timestamp >= (
+    SELECT dateadd('m', -15, timestamp)
+    FROM trades
+    LIMIT -1
+);
+```
+
+This query:
+1. `LIMIT -1` gets the latest row (by designated timestamp)
+2. `dateadd('m', -15, timestamp)` calculates 15 minutes before that
+3. Outer query filters all rows from that boundary forward
+
+**Results:**
+All rows from the last 15 minutes of activity, regardless of when that activity occurred relative to now.
+
+## How It Works
+
+### The LIMIT -1 Trick
+
+```sql
+SELECT timestamp FROM trades LIMIT -1
+```
+
+In QuestDB, a negative LIMIT returns the last N rows (the tail of the table by designated timestamp). `LIMIT -1` returns only the single most recent row.
+
+### Scalar Subquery Support
+
+QuestDB supports scalar (non-correlated) subqueries in specific contexts, including timestamp comparisons:
+
+```sql
+WHERE timestamp >= (SELECT ... FROM table LIMIT -1)
+```
+
+The subquery does not reference the outer query; it executes once and returns a scalar timestamp value, which is then used in the WHERE clause for all rows.
+
+### Why Not dateadd on the Left?
+
+```sql
+-- Avoid: applying dateadd to the column (evaluated for every row)
+WHERE dateadd('m', 15, timestamp) >= (SELECT timestamp FROM trades LIMIT -1)
+
+-- Prefer: bare column on the left, boundary calculated once on the right
+WHERE timestamp >= (SELECT dateadd('m', -15, timestamp) FROM trades LIMIT -1)
+```
+
+When the calculation is on the right side, it's evaluated once, and the designated timestamp stays unmodified so QuestDB can scan just the matching time range. With `dateadd` applied to the column on the left, the expression must be evaluated for every row in the table.
+
+## Different Time Windows
+
+**Last hour of activity:**
+```sql
+SELECT * FROM trades
+WHERE timestamp >= (
+    SELECT dateadd('h', -1, timestamp)
+    FROM trades
+    LIMIT -1
+);
+```
+
+**Last 30 seconds:**
+```sql
+SELECT * FROM trades
+WHERE timestamp >= (
+    SELECT dateadd('s', -30, timestamp)
+    FROM trades
+    LIMIT -1
+);
+```
+
+**Last day:**
+```sql
+SELECT * FROM trades
+WHERE timestamp >= (
+    SELECT dateadd('d', -1, timestamp)
+    FROM trades
+    LIMIT -1
+);
+```
+
+## With Symbol Filtering
+
+Get latest activity for a specific symbol:
+
+```questdb-sql demo title="Last 15 minutes of BTC-USDT activity"
+SELECT * FROM trades
+WHERE symbol = 'BTC-USDT'
+  AND timestamp >= (
+    SELECT dateadd('m', -15, timestamp)
+    FROM trades
+    WHERE symbol = 'BTC-USDT'
+    LIMIT -1
+  );
+```
+
+Note that the subquery also filters by symbol to find the latest timestamp for that specific symbol.
+
+## Multiple Symbols with Different Latest Times
+
+For each symbol, get its own last 15 minutes:
+
+```sql
+WITH latest_per_symbol AS (
+    SELECT symbol, max(timestamp) as latest_ts
+    FROM trades
+    GROUP BY symbol
+)
+SELECT t.*
+FROM trades t
+JOIN latest_per_symbol l ON t.symbol = l.symbol
+WHERE t.timestamp >= dateadd('m', -15, l.latest_ts);
+```
+
+This handles cases where different symbols have different latest timestamps.
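+
+QuestDB also has a dedicated `LATEST ON` syntax for "newest row per partition value"; as a sketch, the per-symbol boundary above could be derived with it instead of `max()`/`GROUP BY`:
+
+```questdb-sql
+WITH latest_per_symbol AS (
+    SELECT symbol, timestamp as latest_ts
+    FROM trades
+    LATEST ON timestamp PARTITION BY symbol
+)
+SELECT t.*
+FROM trades t
+JOIN latest_per_symbol l ON t.symbol = l.symbol
+WHERE t.timestamp >= dateadd('m', -15, l.latest_ts);
+```
+
+`LATEST ON` searches backwards from the newest data, so on large tables it typically finds each symbol's latest row faster than a full aggregation.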
+
+## Performance Considerations
+
+**Efficient execution:**
+- The subquery with `LIMIT -1` is very fast (O(1) operation on designated timestamp)
+- Returns immediately without scanning the table
+- The calculated boundary is reused for all rows in the outer query
+
+**Avoid CROSS JOIN approach:**
+```sql
+-- Noisier alternative
+WITH ts AS (
+    SELECT max(timestamp) as latest_ts FROM trades
+)
+SELECT * FROM trades CROSS JOIN ts
+WHERE timestamp > dateadd('m', -15, latest_ts);
+```
+
+While this works, the subquery approach is cleaner and avoids carrying the extra joined column through every result row.
+
+## Combining with Aggregations
+
+**Count trades in last 15 minutes of activity:**
+```questdb-sql demo title="Trade count in last 15 minutes of activity"
+SELECT
+    symbol,
+    count(*) as trade_count,
+    sum(amount) as total_volume
+FROM trades
+WHERE timestamp >= (
+    SELECT dateadd('m', -15, timestamp)
+    FROM trades
+    LIMIT -1
+)
+GROUP BY symbol
+ORDER BY trade_count DESC;
+```
+
+**Average price in latest activity window:**
+```sql
+SELECT
+    symbol,
+    avg(price) as avg_price,
+    min(timestamp) as window_start,
+    max(timestamp) as window_end
+FROM trades
+WHERE timestamp >= (
+    SELECT dateadd('m', -15, timestamp)
+    FROM trades
+    LIMIT -1
+)
+GROUP BY symbol;
+```
+
+## Alternative: Store Latest Timestamp
+
+For frequently-run queries, consider materializing the latest timestamp:
+
+```sql
+-- Create a table to hold the latest timestamp
+CREATE TABLE latest_activity (
+    latest_ts TIMESTAMP
+);
+
+-- Update periodically (e.g., every minute); rows accumulate over time
+INSERT INTO latest_activity
+SELECT max(timestamp) FROM trades;
+
+-- Use in queries (LIMIT -1 reads the most recently inserted row)
+SELECT * FROM trades
+WHERE timestamp >= (
+    SELECT dateadd('m', -15, latest_ts)
+    FROM latest_activity
+    LIMIT -1
+);
+```
+
+This avoids recalculating `max(timestamp)` on every query.
+ +## Handling Empty Tables + +If the table might be empty: + +```sql +SELECT * FROM trades +WHERE timestamp >= COALESCE( + (SELECT dateadd('m', -15, timestamp) FROM trades LIMIT -1), + '1970-01-01T00:00:00' -- Fallback for empty table +); +``` + +This provides a default timestamp if no data exists. + +## Use Cases + +**Monitoring dashboards:** +- Show recent activity even if data feed has stopped +- Avoid empty charts when data is delayed + +**Data quality checks:** +- "Show me the last 10 minutes of received data" +- Works regardless of current time + +**Replay analysis:** +- Analyze historical data relative to when it was recorded +- "What happened in the 15 minutes before system shutdown?" + +**Testing with old data:** +- Query patterns work on old datasets +- No need to adjust timestamps to "now" + +:::tip When to Use This Pattern +Use this pattern when: +- Data collection is intermittent or may have stopped +- Analyzing historical datasets where "now" is not relevant +- Building replay or analysis tools for past events +- Creating dashboards that show "latest activity" regardless of age +::: + +:::warning Subquery Performance +The subquery with `LIMIT -1` is efficient because: +- It operates on the designated timestamp index +- Returns immediately without table scan +- Only executes once for the entire outer query + +Don't worry about performance - this pattern is optimized in QuestDB. 
+::: + +:::info Related Documentation +- [LIMIT](/docs/reference/sql/select/#limit) +- [dateadd()](/docs/reference/function/date-time/#dateadd) +- [max()](/docs/reference/function/aggregation/#max) +- [Designated timestamp](/docs/concept/designated-timestamp/) +::: diff --git a/documentation/playbook/sql/time-series/remove-outliers.md b/documentation/playbook/sql/time-series/remove-outliers.md new file mode 100644 index 000000000..ec11d9614 --- /dev/null +++ b/documentation/playbook/sql/time-series/remove-outliers.md @@ -0,0 +1,466 @@ +--- +title: Remove Outliers from Time-Series +sidebar_label: Remove outliers +description: Filter anomalous data points using moving averages, standard deviation, percentiles, and z-scores +--- + +Identify and filter outliers from time-series data using statistical methods. Outliers can skew aggregates, distort visualizations, and trigger false alerts. This guide shows multiple approaches to detect and remove anomalous values. + +## Problem: Noisy Sensor Data with Spikes + +You have temperature sensor readings with occasional erroneous spikes: + +| timestamp | sensor_id | temperature | +|-----------|-----------|-------------| +| 10:00:00 | S001 | 22.5 | +| 10:01:00 | S001 | 22.7 | +| 10:02:00 | S001 | 89.3 | ← Outlier (sensor malfunction) +| 10:03:00 | S001 | 22.6 | +| 10:04:00 | S001 | 22.8 | + +The spike at 10:02 should be filtered out before calculating averages or displaying charts. 
+ +## Solution 1: Moving Average Filter + +Remove values that deviate significantly from the moving average: + +```questdb-sql demo title="Filter outliers using moving average" +WITH moving_avg AS ( + SELECT + timestamp, + sensor_id, + temperature, + avg(temperature) OVER ( + PARTITION BY sensor_id + ORDER BY timestamp + ROWS BETWEEN 5 PRECEDING AND 5 FOLLOWING + ) as ma, + stddev(temperature) OVER ( + PARTITION BY sensor_id + ORDER BY timestamp + ROWS BETWEEN 5 PRECEDING AND 5 FOLLOWING + ) as stddev + FROM sensor_readings + WHERE timestamp >= dateadd('h', -24, now()) +) +SELECT + timestamp, + sensor_id, + temperature, + ma as moving_average +FROM moving_avg +WHERE ABS(temperature - ma) <= 2 * stddev -- Within 2 standard deviations +ORDER BY timestamp; +``` + +**How it works:** +- Calculate 11-point moving average (5 before + current + 5 after) +- Calculate moving standard deviation +- Keep only values within 2σ of moving average +- Typical threshold: 2σ retains ~95% of normal data, 3σ retains ~99.7% + +**Results:** + +| timestamp | sensor_id | temperature | moving_average | +|-----------|-----------|-------------|----------------| +| 10:00:00 | S001 | 22.5 | 22.6 | +| 10:01:00 | S001 | 22.7 | 22.6 | +| 10:03:00 | S001 | 22.6 | 22.7 | ← 10:02 filtered out +| 10:04:00 | S001 | 22.8 | 22.7 | + +## Solution 2: Percentile-Based Filtering + +Remove values outside a percentile range (e.g., below 1st or above 99th percentile): + +```questdb-sql demo title="Filter extreme values using percentiles" +WITH percentiles AS ( + SELECT + sensor_id, + percentile(temperature, 1) as p01, + percentile(temperature, 99) as p99 + FROM sensor_readings + WHERE timestamp >= dateadd('d', -7, now()) + GROUP BY sensor_id +) +SELECT + sr.timestamp, + sr.sensor_id, + sr.temperature +FROM sensor_readings sr +INNER JOIN percentiles p ON sr.sensor_id = p.sensor_id +WHERE sr.timestamp >= dateadd('h', -1, now()) + AND sr.temperature BETWEEN p.p01 AND p.p99 +ORDER BY sr.timestamp; +``` + +**Key 
points:** +- Calculates baseline percentiles from historical data (7 days) +- Filters recent data (1 hour) using those thresholds +- Adaptable: Use p05/p95 for less aggressive filtering +- Useful when distribution is not normal (skewed data) + +## Solution 3: Z-Score Method + +Calculate z-scores and filter values beyond a threshold: + +```questdb-sql demo title="Remove outliers using z-scores" +WITH stats AS ( + SELECT + sensor_id, + avg(temperature) as mean_temp, + stddev(temperature) as stddev_temp + FROM sensor_readings + WHERE timestamp >= dateadd('d', -7, now()) + GROUP BY sensor_id +), +z_scores AS ( + SELECT + sr.timestamp, + sr.sensor_id, + sr.temperature, + ((sr.temperature - stats.mean_temp) / stats.stddev_temp) as z_score + FROM sensor_readings sr + INNER JOIN stats ON sr.sensor_id = stats.sensor_id + WHERE sr.timestamp >= dateadd('h', -1, now()) +) +SELECT + timestamp, + sensor_id, + temperature, + z_score +FROM z_scores +WHERE ABS(z_score) <= 3 -- Within 3 standard deviations +ORDER BY timestamp; +``` + +**Z-score interpretation:** +- |z| < 2: Normal (95% of data) +- |z| < 3: Acceptable (99.7% of data) +- |z| ≥ 3: Outlier (0.3% of data) + +**Results:** + +| timestamp | sensor_id | temperature | z_score | +|-----------|-----------|-------------|---------| +| 10:00:00 | S001 | 22.5 | -0.12 | +| 10:01:00 | S001 | 22.7 | +0.15 | +| 10:03:00 | S001 | 22.6 | +0.02 | +| 10:04:00 | S001 | 22.8 | +0.28 | + +10:02 (z_score = 15.3) was filtered out. 
+ +## Solution 4: Interquartile Range (IQR) + +Use IQR method for robust outlier detection (less sensitive to extreme values): + +```questdb-sql demo title="IQR-based outlier removal" +WITH quartiles AS ( + SELECT + sensor_id, + percentile(temperature, 25) as q1, + percentile(temperature, 75) as q3, + (percentile(temperature, 75) - percentile(temperature, 25)) as iqr + FROM sensor_readings + WHERE timestamp >= dateadd('d', -7, now()) + GROUP BY sensor_id +) +SELECT + sr.timestamp, + sr.sensor_id, + sr.temperature +FROM sensor_readings sr +INNER JOIN quartiles q ON sr.sensor_id = q.sensor_id +WHERE sr.timestamp >= dateadd('h', -1, now()) + AND sr.temperature >= q.q1 - 1.5 * q.iqr -- Lower fence + AND sr.temperature <= q.q3 + 1.5 * q.iqr -- Upper fence +ORDER BY sr.timestamp; +``` + +**IQR boundaries:** +- Lower fence = Q1 - 1.5 × IQR +- Upper fence = Q3 + 1.5 × IQR +- More robust than z-scores for skewed distributions +- Standard multiplier is 1.5; use 3.0 for more conservative filtering + +## Solution 5: Rate of Change Filter + +Remove values with impossible rate of change: + +```questdb-sql demo title="Filter based on maximum rate of change" +WITH deltas AS ( + SELECT + timestamp, + sensor_id, + temperature, + temperature - lag(temperature) OVER (PARTITION BY sensor_id ORDER BY timestamp) as temp_change, + timestamp - lag(timestamp) OVER (PARTITION BY sensor_id ORDER BY timestamp) as time_diff_micros + FROM sensor_readings + WHERE timestamp >= dateadd('h', -24, now()) +) +SELECT + timestamp, + sensor_id, + temperature, + temp_change, + (temp_change / (time_diff_micros / 60000000.0)) as change_per_minute +FROM deltas +WHERE temp_change IS NULL -- Keep first reading + OR ABS(temp_change / (time_diff_micros / 60000000.0)) <= 5.0 -- Max 5°C per minute +ORDER BY timestamp; +``` + +**Use case:** +- Temperature can't change by 50°C in 1 minute (physical impossibility) +- Stock prices can't change by 100% in 1 second (circuit breaker rules) +- Sensor readings limited by 
physical constraints
+
+## Combination: Multi-Method Outlier Detection
+
+Use multiple methods and flag values detected by any:
+
+```questdb-sql demo title="Flag outliers using multiple methods"
+WITH stats AS (
+    SELECT
+        sensor_id,
+        avg(temperature) as mean,
+        stddev(temperature) as stddev,
+        percentile(temperature, 1) as p01,
+        percentile(temperature, 99) as p99
+    FROM sensor_readings
+    WHERE timestamp >= dateadd('d', -7, now())
+    GROUP BY sensor_id
+),
+flagged AS (
+    SELECT
+        sr.timestamp,
+        sr.sensor_id,
+        sr.temperature,
+        CASE WHEN ABS((sr.temperature - stats.mean) / stats.stddev) > 3 THEN 1 ELSE 0 END as outlier_zscore,
+        CASE WHEN sr.temperature < stats.p01 OR sr.temperature > stats.p99 THEN 1 ELSE 0 END as outlier_percentile,
+        CASE WHEN sr.temperature < 0 OR sr.temperature > 50 THEN 1 ELSE 0 END as outlier_range
+    FROM sensor_readings sr
+    INNER JOIN stats ON sr.sensor_id = stats.sensor_id
+    WHERE sr.timestamp >= dateadd('h', -1, now())
+)
+SELECT
+    timestamp,
+    sensor_id,
+    temperature,
+    (outlier_zscore + outlier_percentile + outlier_range) as outlier_score,
+    CASE
+        WHEN (outlier_zscore + outlier_percentile + outlier_range) >= 2 THEN 'OUTLIER'
+        WHEN (outlier_zscore + outlier_percentile + outlier_range) = 1 THEN 'SUSPICIOUS'
+        ELSE 'NORMAL'
+    END as classification
+FROM flagged
+WHERE (outlier_zscore + outlier_percentile + outlier_range) = 0 -- Keep only rows no method flagged
+ORDER BY timestamp;
+```
+
+The final `WHERE` clause keeps only values that pass all three tests; drop it to see every row together with its `outlier_score` and `classification`.
+ +## Replace Outliers with Interpolation + +Instead of removing, replace outliers with interpolated values: + +```questdb-sql demo title="Replace outliers with linear interpolation" +WITH moving_avg AS ( + SELECT + timestamp, + sensor_id, + temperature, + avg(temperature) OVER ( + PARTITION BY sensor_id + ORDER BY timestamp + ROWS BETWEEN 10 PRECEDING AND 10 FOLLOWING + ) as ma, + stddev(temperature) OVER ( + PARTITION BY sensor_id + ORDER BY timestamp + ROWS BETWEEN 10 PRECEDING AND 10 FOLLOWING + ) as stddev + FROM sensor_readings + WHERE timestamp >= dateadd('h', -24, now()) +) +SELECT + timestamp, + sensor_id, + CASE + WHEN ABS(temperature - ma) > 3 * stddev THEN ma -- Replace outlier with moving average + ELSE temperature -- Keep original value + END as temperature_cleaned +FROM moving_avg +ORDER BY timestamp; +``` + +This preserves data density while smoothing anomalies. + +## Aggregated Data with Outlier Removal + +Calculate clean aggregates by filtering outliers first: + +```questdb-sql demo title="Hourly average with outliers removed" +WITH filtered AS ( + SELECT + timestamp, + sensor_id, + temperature + FROM sensor_readings sr + WHERE timestamp >= dateadd('d', -1, now()) + AND temperature BETWEEN ( + SELECT percentile(temperature, 1) FROM sensor_readings + WHERE sensor_id = sr.sensor_id AND timestamp >= dateadd('d', -7, now()) + ) AND ( + SELECT percentile(temperature, 99) FROM sensor_readings + WHERE sensor_id = sr.sensor_id AND timestamp >= dateadd('d', -7, now()) + ) +) +SELECT + timestamp, + sensor_id, + avg(temperature) as avg_temp, + min(temperature) as min_temp, + max(temperature) as max_temp, + count(*) as reading_count +FROM filtered +SAMPLE BY 1h +ORDER BY timestamp; +``` + +**Results show clean aggregates without spike distortion.** + +## Grafana Visualization: Before and After + +Show both raw and cleaned data for comparison: + +```questdb-sql demo title="Overlay raw and cleaned data for Grafana" +WITH moving_avg AS ( + SELECT + timestamp, + 
sensor_id, + temperature, + avg(temperature) OVER ( + PARTITION BY sensor_id + ORDER BY timestamp + ROWS BETWEEN 10 PRECEDING AND 10 FOLLOWING + ) as ma, + stddev(temperature) OVER ( + PARTITION BY sensor_id + ORDER BY timestamp + ROWS BETWEEN 10 PRECEDING AND 10 FOLLOWING + ) as stddev + FROM sensor_readings + WHERE timestamp >= dateadd('h', -6, now()) + AND sensor_id = 'S001' +) +SELECT + timestamp as time, + temperature as "Raw Data", + CASE + WHEN ABS(temperature - ma) <= 2 * stddev THEN temperature + ELSE NULL + END as "Cleaned Data" +FROM moving_avg +ORDER BY timestamp; +``` + +Grafana will show both series, making outliers visually obvious as gaps in the "Cleaned Data" series. + +## Performance Considerations + +**Pre-calculate thresholds for repeated queries:** + +```sql +-- Create table with outlier thresholds +CREATE TABLE sensor_thresholds AS +SELECT + sensor_id, + avg(temperature) as mean, + stddev(temperature) as stddev, + percentile(temperature, 1) as p01, + percentile(temperature, 99) as p99 +FROM sensor_readings +WHERE timestamp >= dateadd('d', -30, now()) +GROUP BY sensor_id; + +-- Fast filtering using pre-calculated thresholds +SELECT sr.* +FROM sensor_readings sr +INNER JOIN sensor_thresholds st ON sr.sensor_id = st.sensor_id +WHERE ABS((sr.temperature - st.mean) / st.stddev) <= 3; +``` + +**Use SYMBOL type for sensor_id:** + +```sql +CREATE TABLE sensor_readings ( + timestamp TIMESTAMP, + sensor_id SYMBOL, -- Fast lookups and joins + temperature DOUBLE +) TIMESTAMP(timestamp) PARTITION BY DAY; +``` + +## Choosing the Right Method + +| Method | Best For | Pros | Cons | +|--------|----------|------|------| +| **Moving Average** | Smoothly varying data with occasional spikes | Adaptive to local trends | Requires window tuning | +| **Percentiles** | Skewed distributions | Robust to outliers | Less sensitive to mild anomalies | +| **Z-Score** | Normally distributed data | Simple, well-understood | Assumes normal distribution | +| **IQR** | Robust 
detection needed | Not affected by extreme outliers | May miss subtle anomalies |
+| **Rate of Change** | Physical constraints known | Catches impossible values | Requires domain knowledge |
+
+## Common Pitfalls
+
+**Don't calculate stats on already-filtered data:**
+
+```sql
+-- Bad: Circular logic
+WITH filtered AS (
+    SELECT * FROM data WHERE value < (SELECT avg(value) FROM data)
+)
+SELECT avg(value) FROM filtered; -- Not meaningful!
+
+-- Good: Calculate stats on full historical dataset
+WITH stats AS (
+    SELECT avg(value) as baseline FROM data WHERE timestamp >= dateadd('d', -30, now())
+)
+SELECT * FROM recent_data WHERE value < (SELECT baseline FROM stats);
+```
+
+**Consider seasonality:**
+
+```sql
+-- Bad: Compare summer temps to winter average
+SELECT * FROM readings WHERE temp < (SELECT avg(temp) FROM readings);
+
+-- Good: Compare to same time of year
+SELECT * FROM readings r
+WHERE temp < (
+    SELECT avg(temp)
+    FROM readings
+    WHERE month(timestamp) = month(r.timestamp)
+);
+```
+
+:::tip When to Remove vs Flag Outliers
+- **Remove**: For clean aggregates, visualizations, or ML training data
+- **Flag**: For monitoring, alerts, or investigating sensor malfunctions
+- **Replace**: When data density must be preserved (e.g., for resampling)
+:::
+
+:::warning False Positives
+Aggressive outlier removal can filter legitimate extreme events:
+- Legitimate price movements during market crashes
+- Actual temperature spikes during equipment failure
+- Real traffic surges during viral events
+
+Balance cleanliness with preserving genuine anomalies worth investigating.
+::: + +:::info Related Documentation +- [Window functions](/docs/reference/sql/select/#window-functions) +- [stddev()](/docs/reference/function/aggregation/#stddev) +- [percentile()](/docs/reference/function/aggregation/#percentile) +- [LAG()](/docs/reference/function/window/#lag) +::: diff --git a/documentation/playbook/sql/time-series/sample-by-interval-bounds.md b/documentation/playbook/sql/time-series/sample-by-interval-bounds.md new file mode 100644 index 000000000..ea724630e --- /dev/null +++ b/documentation/playbook/sql/time-series/sample-by-interval-bounds.md @@ -0,0 +1,289 @@ +--- +title: Adjust SAMPLE BY Interval Bounds +sidebar_label: Interval bounds +description: Shift SAMPLE BY timestamps to use right interval bound instead of left bound for alignment with period end times +--- + +Adjust SAMPLE BY timestamps to display the end of each interval rather than the beginning. By default, QuestDB labels aggregated intervals with their start time, but you may want to label them with their end time for reporting or alignment purposes. + +## Problem: Need Right Bound Labeling + +You aggregate trades into 15-minute intervals: + +```sql +SELECT + timestamp, + symbol, + first(price) AS open, + last(price) AS close +FROM trades +WHERE symbol = 'BTC-USDT' +SAMPLE BY 15m; +``` + +**Default output (left bound):** + +| timestamp | open | close | +|-----------|------|-------| +| 00:00:00 | 61000 | 61050 | ← Trades from 00:00:00 to 00:14:59 +| 00:15:00 | 61050 | 61100 | ← Trades from 00:15:00 to 00:29:59 +| 00:30:00 | 61100 | 61150 | ← Trades from 00:30:00 to 00:44:59 + +You want the timestamp to show **00:15:00**, **00:30:00**, **00:45:00** (the **end** of each period). 
+ +## Solution: Add Interval to Timestamp + +Use `dateadd()` to shift timestamps by the interval duration: + +```questdb-sql demo title="SAMPLE BY with right bound timestamps" +SELECT + dateadd('m', 15, timestamp) as timestamp, + symbol, + first(price) AS open, + last(price) AS close, + min(price), + max(price), + sum(amount) AS volume +FROM trades +WHERE symbol = 'BTC-USDT' AND timestamp IN today() +SAMPLE BY 15m; +``` + +**Output (right bound):** + +| timestamp | open | close | min | max | volume | +|-----------|------|-------|-----|-----|--------| +| 00:15:00 | 61000 | 61050 | 60990 | 61055 | 123.45 | +| 00:30:00 | 61050 | 61100 | 61040 | 61110 | 98.76 | +| 00:45:00 | 61100 | 61150 | 61095 | 61160 | 145.32 | + +Now each row is labeled with the **end** of the interval it represents. + +## How It Works + +### Default Left Bound + +```sql +SAMPLE BY 15m +``` + +QuestDB internally: +1. Rounds down timestamps to interval boundaries +2. Aggregates data within each [start, end) bucket +3. Labels with the interval start time + +### Shifted Right Bound + +```sql +dateadd('m', 15, timestamp) +``` + +Adds 15 minutes to each timestamp: +- Original: `00:00:00` → Shifted: `00:15:00` +- Original: `00:15:00` → Shifted: `00:30:00` + +The aggregation still happens over the same data; only the label changes. + +## Important Consideration: Designated Timestamp + +When you shift the timestamp, it's no longer the "designated timestamp" for the row: + +```questdb-sql demo title="Notice timestamp color in web console" +SELECT + dateadd('m', 15, timestamp) as timestamp, + symbol, + first(price) AS open +FROM trades +SAMPLE BY 15m; +``` + +In the QuestDB web console, the shifted timestamp appears in **regular font**, not **green** (designated timestamp color), because it's now a derived column, not the original designated timestamp. 
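+
+Either way, the left-bound bucket of any individual row can be inspected directly with `timestamp_floor()`, which reproduces the label `SAMPLE BY` assigns; a quick sketch against the demo `trades` table:
+
+```questdb-sql
+SELECT
+    timestamp,
+    timestamp_floor('15m', timestamp) as bucket_start,
+    dateadd('m', 15, timestamp_floor('15m', timestamp)) as bucket_end
+FROM trades
+LIMIT 10;
+```
+
+`bucket_start` is the default left-bound label; `bucket_end` is the shifted right bound used above.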
+ +### Impact on Subsequent Operations + +If you use this query as a subquery and need ordering or window functions: + +```sql +-- Force ordering by the derived timestamp +( + SELECT + dateadd('m', 15, timestamp) as timestamp, + symbol, + first(price) AS open + FROM trades + SAMPLE BY 15m +) ORDER BY timestamp; +``` + +The `ORDER BY` ensures the derived timestamp is used for ordering. + +## Different Intervals + +**1-hour intervals:** +```sql +SELECT + dateadd('h', 1, timestamp) as timestamp, + ... +FROM trades +SAMPLE BY 1h; +``` + +**5-minute intervals:** +```sql +SELECT + dateadd('m', 5, timestamp) as timestamp, + ... +FROM trades +SAMPLE BY 5m; +``` + +**1-day intervals:** +```sql +SELECT + dateadd('d', 1, timestamp) as timestamp, + ... +FROM trades +SAMPLE BY 1d; +``` + +**30-second intervals:** +```sql +SELECT + dateadd('s', 30, timestamp) as timestamp, + ... +FROM trades +SAMPLE BY 30s; +``` + +## With Time Range Filtering + +Combine with Grafana macros: + +```sql +SELECT + dateadd('m', 15, timestamp) as timestamp, + symbol, + first(price) AS open, + last(price) AS close +FROM trades +WHERE $__timeFilter(timestamp) +SAMPLE BY 15m; +``` + +Or with explicit time range: + +```sql +SELECT + dateadd('m', 15, timestamp) as timestamp, + ... +FROM trades +WHERE timestamp >= '2025-01-15T00:00:00' + AND timestamp < '2025-01-16T00:00:00' +SAMPLE BY 15m; +``` + +## Alternative: Keep Both Boundaries + +Show both start and end of each interval: + +```questdb-sql demo title="Show both interval start and end" +SELECT + timestamp as interval_start, + dateadd('m', 15, timestamp) as interval_end, + symbol, + first(price) AS open, + last(price) AS close +FROM trades +WHERE symbol = 'BTC-USDT' +SAMPLE BY 15m; +``` + +**Output:** + +| interval_start | interval_end | open | close | +|----------------|--------------|------|-------| +| 00:00:00 | 00:15:00 | 61000 | 61050 | +| 00:15:00 | 00:30:00 | 61050 | 61100 | + +This makes it explicit which period each row represents. 
+
+## Use Cases
+
+**Financial reporting:**
+- Trading periods often labeled by close time
+- "End of day" reports use day's end timestamp
+- Quarterly reports labeled Q1 end, Q2 end, etc.
+
+**Billing periods:**
+- Monthly usage from Jan 1 to Jan 31 labeled as "Jan 31"
+- Hourly electricity usage labeled by hour end
+
+**SLA monitoring:**
+- Availability windows labeled by period end
+- "99.9% uptime for hour ending at 14:00"
+
+**Compliance:**
+- Some regulations require end-of-period timestamps
+- Audit trails with closing timestamps
+
+## Grafana Visualization
+
+When using with Grafana time-series charts, the shifted timestamp aligns with the period represented:
+
+```sql
+SELECT
+    dateadd('m', 15, timestamp) as time,
+    avg(price) as value,
+    symbol as metric
+FROM trades
+WHERE $__timeFilter(timestamp)
+    AND symbol IN ('BTC-USDT', 'ETH-USDT')
+SAMPLE BY 15m;
+```
+
+The chart will show data points at 00:15, 00:30, 00:45, etc., representing the aggregated values for the 15 minutes ending at those times.
+
+## Center of Interval
+
+For some visualizations, you may want to label with the interval midpoint. `dateadd()` takes an integer count, so express the half-interval in a smaller unit:
+
+```sql
+SELECT
+    dateadd('s', 450, timestamp) as timestamp, -- 450 seconds = 7.5 minutes, halfway through 15m
+    ...
+FROM trades
+SAMPLE BY 15m;
+```
+
+For a 15-minute interval, the midpoint offset is 7 minutes 30 seconds, i.e. 450 seconds.
+
+## Alignment with Calendar
+
+For calendar-aligned intervals:
+
+```sql
+SELECT
+    dateadd('d', 1, timestamp) as timestamp,
+    ...
+FROM trades
+SAMPLE BY 1d ALIGN TO CALENDAR;
+```
+
+With `ALIGN TO CALENDAR`, day boundaries align to midnight UTC (or configured timezone). The shifted timestamp then represents the end of each calendar day.
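+
+If calendar days should roll over at local midnight rather than UTC, the alignment clause also accepts an explicit zone; a sketch using `Europe/Berlin` as an illustrative time zone:
+
+```questdb-sql
+SELECT
+    dateadd('d', 1, timestamp) as timestamp,
+    sum(amount) as volume
+FROM trades
+SAMPLE BY 1d ALIGN TO CALENDAR TIME ZONE 'Europe/Berlin';
+```
+
+Each row then carries the end of a Berlin-local calendar day.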
+ +:::tip Left vs Right Bound +- **Left bound (default)**: Common in databases and programming - interval [start, end) +- **Right bound (shifted)**: Common in business reporting - "value as of end of period" +- **Choose based on domain**: Financial data often uses right bound, technical data often uses left bound +::: + +:::warning Timestamp in Green +When QuestDB web console displays timestamp in green, it indicates the designated timestamp column. After applying `dateadd()`, the timestamp is no longer "designated" - it's a derived column. This doesn't affect query correctness, only console display. +::: + +:::info Related Documentation +- [SAMPLE BY](/docs/reference/sql/select/#sample-by) +- [dateadd()](/docs/reference/function/date-time/#dateadd) +- [ALIGN TO CALENDAR](/docs/reference/sql/select/#align-to-calendar-time-zones) +- [Designated timestamp](/docs/concept/designated-timestamp/) +::: diff --git a/documentation/playbook/sql/time-series/session-windows.md b/documentation/playbook/sql/time-series/session-windows.md new file mode 100644 index 000000000..5fa2d2130 --- /dev/null +++ b/documentation/playbook/sql/time-series/session-windows.md @@ -0,0 +1,334 @@ +--- +title: Calculate Sessions and Elapsed Time +sidebar_label: Session windows +description: Identify sessions by detecting state changes and calculate elapsed time between events using window functions +--- + +Calculate sessions and elapsed time by identifying when state changes occur in time-series data. This "flip-flop" or "session" pattern is useful for analyzing user sessions, vehicle rides, machine operating cycles, or any scenario where you need to track duration between state transitions. + +## Problem: Track Time Between State Changes + +You have a table tracking vehicle lock status over time and want to calculate ride duration. A ride starts when `lock_status` changes from `true` (locked) to `false` (unlocked), and ends when it changes back to `true`. 
+
+**Table schema:**
+```sql
+CREATE TABLE vehicle_events (
+    vehicle_id SYMBOL,
+    lock_status BOOLEAN,
+    timestamp TIMESTAMP
+) TIMESTAMP(timestamp) PARTITION BY DAY;
+```
+
+**Sample data:**
+
+| timestamp | vehicle_id | lock_status |
+|-----------|------------|-------------|
+| 10:00:00 | V001 | true |
+| 10:05:00 | V001 | false | ← Ride starts
+| 10:25:00 | V001 | true | ← Ride ends (20 min)
+| 10:30:00 | V001 | false | ← Next ride starts
+| 10:45:00 | V001 | true | ← Ride ends (15 min)
+
+You want to calculate the duration of each ride.
+
+## Solution: Session Detection with Window Functions
+
+Use window functions to detect state changes, assign session IDs, then calculate durations:
+
+```questdb-sql demo title="Calculate ride duration from lock status changes"
+WITH prevEvents AS (
+  SELECT *,
+    first_value(CASE WHEN lock_status=false THEN 0 WHEN lock_status=true THEN 1 END)
+      OVER (
+        PARTITION BY vehicle_id ORDER BY timestamp
+        ROWS 1 PRECEDING EXCLUDE CURRENT ROW
+      ) as prev_status
+  FROM vehicle_events
+  WHERE timestamp IN today()
+),
+ride_sessions AS (
+  SELECT *,
+    SUM(CASE
+          WHEN lock_status = true AND prev_status = 0 THEN 1
+          WHEN lock_status = false AND prev_status = 1 THEN 1
+          ELSE 0
+        END) OVER (PARTITION BY vehicle_id ORDER BY timestamp) as ride
+  FROM prevEvents
+),
+global_sessions AS (
+  SELECT *, concat(vehicle_id, '#', ride) as session
+  FROM ride_sessions
+),
+totals AS (
+  SELECT
+    first(timestamp) as timestamp,
+    session,
+    FIRST(lock_status) as lock_status,
+    first(vehicle_id) as vehicle_id
+  FROM global_sessions
+  GROUP BY session
+),
+prev_ts AS (
+  SELECT *,
+    first_value(timestamp::long) OVER (
+      PARTITION BY vehicle_id ORDER BY timestamp
+      ROWS 1 PRECEDING EXCLUDE CURRENT ROW
+    ) as prev_ts
+  FROM totals
+)
+SELECT
+  timestamp as ride_end,
+  vehicle_id,
+  (timestamp::long - prev_ts) / 1000000 as duration_seconds
+FROM prev_ts
+WHERE lock_status = true AND prev_ts IS NOT NULL;
+```
+
+**Results:**
+
+| ride_end | vehicle_id | 
duration_seconds |
+|----------|------------|------------------|
+| 10:25:00 | V001 | 1200 |
+| 10:45:00 | V001 | 900 |
+
+## How It Works
+
+The query uses a five-step approach:
+
+### 1. Get Previous Status (`prevEvents`)
+
+```sql
+first_value(...) OVER (... ROWS 1 PRECEDING EXCLUDE CURRENT ROW)
+```
+
+For each row, get the status from the previous row. Convert the boolean to a number (0/1) since `first_value` requires numeric types.
+
+### 2. Detect State Changes (`ride_sessions`)
+
+```sql
+SUM(CASE
+  WHEN lock_status = true AND prev_status = 0 THEN 1
+  WHEN lock_status = false AND prev_status = 1 THEN 1
+  ELSE 0
+END) OVER (PARTITION BY vehicle_id ORDER BY timestamp)
+```
+
+Whenever the status changes (the boolean flips relative to the 0/1 previous status), increment a counter. This creates sequential session IDs for each vehicle:
+- Ride 0: Initial state
+- Ride 1: After first state change
+- Ride 2: After second state change
+- ...
+
+### 3. Create Global Session IDs (`global_sessions`)
+
+```sql
+concat(vehicle_id, '#', ride)
+```
+
+Combine vehicle_id with ride number to create unique session identifiers across all vehicles.
+
+### 4. Get Session Start Times (`totals`)
+
+```sql
+SELECT first(timestamp) as timestamp, ...
+FROM global_sessions
+GROUP BY session
+```
+
+For each session, get the timestamp and status at the beginning of that session.
+
+### 5. Calculate Duration (`prev_ts`)
+
+```sql
+first_value(timestamp::long) OVER (... ROWS 1 PRECEDING)
+```
+
+Get the timestamp from the previous session (for the same vehicle), then calculate duration by subtracting.
+
+### Filter for Rides
+
+```sql
+WHERE lock_status = true
+```
+
+Only keep sessions that start with `true` (locked), which mark the moment a ride ends. The duration runs from the previous session start (unlock) to this session start (lock), i.e. the time spent riding.
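For intuition, the same five-step pipeline can be emulated in plain Python (a sketch over hypothetical event tuples; timestamps are seconds since midnight, not QuestDB code):

```python
from itertools import groupby

# (seconds_since_midnight, lock_status) events for one vehicle, sorted by time
events = [(36000, True), (36300, False), (37500, True), (37800, False), (38700, True)]

# Steps 1-2: assign a session id that increments on every status change
sessions = []
session_id = 0
prev = None
for ts, status in events:
    if prev is not None and status != prev:
        session_id += 1
    sessions.append((session_id, ts, status))
    prev = status

# Step 4: keep the first row of each session (its start time and status)
starts = [next(rows) for _, rows in groupby(sessions, key=lambda r: r[0])]

# Step 5: duration = this session's start minus the previous session's start;
# sessions starting with True (locked) mark ride ends
rides = [
    (ts, ts - starts[i - 1][1])
    for i, (_, ts, status) in enumerate(starts)
    if i > 0 and status
]
print(rides)  # [(37500, 1200), (38700, 900)]
```

Each step corresponds to one CTE: the running counter is the windowed `SUM`, the first-per-group step is `totals`, and the look-back is the final `first_value`.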
+
+## Monthly Aggregation
+
+Calculate total ride duration per vehicle per month:
+
+```questdb-sql demo title="Monthly ride duration by vehicle"
+WITH prevEvents AS (
+  SELECT *,
+    first_value(CASE WHEN lock_status=false THEN 0 WHEN lock_status=true THEN 1 END)
+    OVER (
+      PARTITION BY vehicle_id ORDER BY timestamp
+      ROWS 1 PRECEDING EXCLUDE CURRENT ROW
+    ) as prev_status
+  FROM vehicle_events
+  WHERE timestamp >= dateadd('M', -3, now())
+),
+ride_sessions AS (
+  SELECT *,
+    SUM(CASE
+      WHEN lock_status = true AND prev_status = 0 THEN 1
+      WHEN lock_status = false AND prev_status = 1 THEN 1
+      ELSE 0
+    END) OVER (PARTITION BY vehicle_id ORDER BY timestamp) as ride
+  FROM prevEvents
+),
+global_sessions AS (
+  SELECT *, concat(vehicle_id, '#', ride) as session
+  FROM ride_sessions
+),
+totals AS (
+  SELECT
+    first(timestamp) as timestamp,
+    session,
+    FIRST(lock_status) as lock_status,
+    first(vehicle_id) as vehicle_id
+  FROM global_sessions
+  GROUP BY session
+),
+prev_ts AS (
+  SELECT *,
+    first_value(timestamp::long) OVER (
+      PARTITION BY vehicle_id ORDER BY timestamp
+      ROWS 1 PRECEDING EXCLUDE CURRENT ROW
+    ) as prev_ts
+  FROM totals
+)
+SELECT
+  timestamp_floor('M', timestamp) as month,
+  vehicle_id,
+  SUM((timestamp::long - prev_ts) / 1000000) as total_ride_duration_seconds,
+  COUNT(*) as ride_count
+FROM prev_ts
+WHERE lock_status = true AND prev_ts IS NOT NULL
+GROUP BY month, vehicle_id
+ORDER BY month, vehicle_id;
+```
+
+## Adapting to Different Use Cases
+
+**User website sessions (1 hour timeout):**
+```sql
+WITH prevEvents AS (
+  SELECT *,
+    first_value(timestamp::long) OVER (
+      PARTITION BY user_id ORDER BY timestamp
+      ROWS 1 PRECEDING EXCLUDE CURRENT ROW
+    ) as prev_ts
+  FROM page_views
+),
+sessions AS (
+  SELECT *,
+    SUM(CASE
+      WHEN datediff('h', prev_ts::timestamp, timestamp) > 1 THEN 1
+      ELSE 0
+    END) OVER (PARTITION BY user_id ORDER BY timestamp) as session_id
+  FROM prevEvents
+)
+SELECT
+  user_id,
+  session_id,
+  min(timestamp) as session_start,
+  
max(timestamp) as session_end,
+  datediff('s', min(timestamp), max(timestamp)) as session_duration_seconds,
+  count(*) as page_views
+FROM sessions
+GROUP BY user_id, session_id;
+```
+
+**Machine operating cycles:**
+```sql
+-- When machine changes from 'off' to 'running' to 'off'
+WITH prevStatus AS (
+  SELECT *,
+    first_value(status) OVER (
+      PARTITION BY machine_id ORDER BY timestamp
+      ROWS 1 PRECEDING EXCLUDE CURRENT ROW
+    ) as prev_status
+  FROM machine_status
+),
+cycles AS (
+  SELECT *,
+    SUM(CASE
+      WHEN status != prev_status THEN 1
+      ELSE 0
+    END) OVER (PARTITION BY machine_id ORDER BY timestamp) as cycle
+  FROM prevStatus
+)
+SELECT
+  machine_id,
+  cycle,
+  min(timestamp) as cycle_start,
+  max(timestamp) as cycle_end
+FROM cycles
+WHERE status = 'running'
+GROUP BY machine_id, cycle;
+```
+
+## Performance Considerations
+
+**Filter by timestamp first:**
+```sql
+-- Good: Reduce dataset before windowing
+WHERE timestamp >= dateadd('M', -1, now())
+```
+
+**Partition by high-cardinality column:**
+```sql
+-- Good: Each vehicle processed independently
+PARTITION BY vehicle_id
+
+-- Avoid: All vehicles in one partition (slow)
+-- (no PARTITION BY)
+```
+
+**Limit output:**
+```sql
+-- For testing, limit to specific vehicles
+WHERE vehicle_id IN ('V001', 'V002', 'V003')
+```
+
+## Alternative: Using LAG (QuestDB 8.0+)
+
+With the `LAG` function, the query is simpler:
+
+```sql
+WITH prevEvents AS (
+  SELECT *,
+    LAG(lock_status) OVER (PARTITION BY vehicle_id ORDER BY timestamp) as prev_status,
+    LAG(timestamp) OVER (PARTITION BY vehicle_id ORDER BY timestamp) as prev_timestamp
+  FROM vehicle_events
+  WHERE timestamp IN today()
+)
+SELECT
+  timestamp as ride_end,
+  vehicle_id,
+  datediff('s', prev_timestamp, timestamp) as duration_seconds
+FROM prevEvents
+WHERE lock_status = true     -- Ride just ended (now locked)
+  AND prev_status = false    -- Previous state was unlocked (riding)
+  AND prev_timestamp IS NOT NULL;
+```
+
+This directly accesses the previous row's values 
without converting to numbers. + +:::tip Common Session Patterns +This pattern applies to many scenarios: +- **User sessions**: Time between last action and timeout +- **IoT device cycles**: Power on/off cycles +- **Vehicle trips**: Ignition on/off periods +- **Connection sessions**: Login/logout tracking +- **Process steps**: Start/complete state transitions +::: + +:::warning First Row Handling +The first row in each partition will have `NULL` for previous values. Always filter these out with `WHERE prev_ts IS NOT NULL` or similar conditions. +::: + +:::info Related Documentation +- [first_value() window function](/docs/reference/function/window/#first_value) +- [LAG window function](/docs/reference/function/window/#lag) +- [Window functions](/docs/reference/sql/select/#window-functions) +- [datediff()](/docs/reference/function/date-time/#datediff) +::: diff --git a/documentation/playbook/sql/time-series/sparse-sensor-data.md b/documentation/playbook/sql/time-series/sparse-sensor-data.md new file mode 100644 index 000000000..de44ece4b --- /dev/null +++ b/documentation/playbook/sql/time-series/sparse-sensor-data.md @@ -0,0 +1,387 @@ +--- +title: Join Strategies for Sparse Sensor Data +sidebar_label: Sparse sensor data +description: Compare CROSS JOIN, LEFT JOIN, and ASOF JOIN strategies for combining data from sensors that report at different times +--- + +Combine data from multiple sensors that report at different times and frequencies. This guide compares three join strategies—CROSS JOIN, LEFT JOIN, and ASOF JOIN—showing when to use each for optimal results. 
+
+## Problem: Sensors Report at Different Times
+
+You have three temperature sensors with different reporting schedules:
+
+**Sensor A (every 1 minute):**
+| timestamp | temperature |
+|-----------|-------------|
+| 10:00:00 | 22.5 |
+| 10:01:00 | 22.7 |
+| 10:02:00 | 22.6 |
+
+**Sensor B (every 2 minutes):**
+| timestamp | temperature |
+|-----------|-------------|
+| 10:00:00 | 23.1 |
+| 10:02:00 | 23.3 |
+
+**Sensor C (irregular):**
+| timestamp | temperature |
+|-----------|-------------|
+| 10:00:30 | 21.8 |
+| 10:01:45 | 22.0 |
+
+You want to analyze all sensors together, but their timestamps don't align.
+
+## Strategy 1: CROSS JOIN for Complete Combinations
+
+Generate all possible combinations of readings across sensors:
+
+```questdb-sql demo title="CROSS JOIN all sensor combinations"
+WITH sensor_a AS (
+  SELECT timestamp as ts_a, temperature as temp_a
+  FROM sensor_readings
+  WHERE sensor_id = 'A'
+    AND timestamp >= '2025-01-15T10:00:00'
+    AND timestamp < '2025-01-15T10:10:00'
+),
+sensor_b AS (
+  SELECT timestamp as ts_b, temperature as temp_b
+  FROM sensor_readings
+  WHERE sensor_id = 'B'
+    AND timestamp >= '2025-01-15T10:00:00'
+    AND timestamp < '2025-01-15T10:10:00'
+)
+SELECT
+  sensor_a.ts_a,
+  sensor_a.temp_a,
+  sensor_b.ts_b,
+  sensor_b.temp_b,
+  ABS(sensor_a.ts_a - sensor_b.ts_b) / 1000000 as time_diff_seconds
+FROM sensor_a
+CROSS JOIN sensor_b
+WHERE ABS(sensor_a.ts_a - sensor_b.ts_b) < 90000000 -- Within 90 seconds
+ORDER BY sensor_a.ts_a, sensor_b.ts_b;
+```
+
+**Results:**
+
+| ts_a | temp_a | ts_b | temp_b | time_diff_seconds |
+|------|--------|------|--------|-------------------|
+| 10:00:00 | 22.5 | 10:00:00 | 23.1 | 0 |
+| 10:01:00 | 22.7 | 10:00:00 | 23.1 | 60 | ← Matched to previous B reading
+| 10:01:00 | 22.7 | 10:02:00 | 23.3 | 60 | ← Also matched to the next B reading
+| 10:02:00 | 22.6 | 10:02:00 | 23.3 | 0 |
+
+**When to use:**
+- Small datasets (CROSS JOIN creates N × M rows)
+- Need all combinations within a time window
+- Analyzing correlation between sensors with tolerance
+
+**Pros:**
+- Simple to 
understand +- Captures all possible pairings +- Can filter by time difference after joining + +**Cons:** +- Explodes result set (cartesian product) +- Not scalable for large datasets +- May create duplicate matches + +## Strategy 2: LEFT JOIN on Common Intervals + +Resample both sensors to common intervals, then join: + +```questdb-sql demo title="LEFT JOIN after resampling to common intervals" +WITH sensor_a_resampled AS ( + SELECT timestamp, first(temperature) as temp_a + FROM sensor_readings + WHERE sensor_id = 'A' + AND timestamp >= '2025-01-15T10:00:00' + AND timestamp < '2025-01-15T10:10:00' + SAMPLE BY 1m FILL(PREV) +), +sensor_b_resampled AS ( + SELECT timestamp, first(temperature) as temp_b + FROM sensor_readings + WHERE sensor_id = 'B' + AND timestamp >= '2025-01-15T10:00:00' + AND timestamp < '2025-01-15T10:10:00' + SAMPLE BY 1m FILL(PREV) +) +SELECT + sensor_a_resampled.timestamp, + sensor_a_resampled.temp_a, + sensor_b_resampled.temp_b, + (sensor_a_resampled.temp_a - sensor_b_resampled.temp_b) as temp_difference +FROM sensor_a_resampled +LEFT JOIN sensor_b_resampled + ON sensor_a_resampled.timestamp = sensor_b_resampled.timestamp +ORDER BY sensor_a_resampled.timestamp; +``` + +**Results:** + +| timestamp | temp_a | temp_b | temp_difference | +|-----------|--------|--------|-----------------| +| 10:00:00 | 22.5 | 23.1 | -0.6 | +| 10:01:00 | 22.7 | 23.1 | -0.4 | ← B value filled forward +| 10:02:00 | 22.6 | 23.3 | -0.7 | + +**When to use:** +- Sensors can be resampled to common frequency +- Want aligned timestamps for easy comparison +- Need forward-filled or interpolated values + +**Pros:** +- Clean aligned results +- Predictable row count (one per interval) +- Works well with Grafana visualization + +**Cons:** +- Requires choosing resample interval +- May introduce synthetic data (FILL) +- Less precise than original timestamps + +## Strategy 3: ASOF JOIN for Temporal Proximity + +Match each sensor A reading with the most recent sensor B reading: + 
+```questdb-sql demo title="ASOF JOIN to match most recent readings" +SELECT + a.timestamp as ts_a, + a.temperature as temp_a, + b.timestamp as ts_b, + b.temperature as temp_b, + (a.timestamp - b.timestamp) / 1000000 as seconds_since_b_reading, + (a.temperature - b.temperature) as temp_difference +FROM sensor_readings a +ASOF JOIN sensor_readings b + ON a.sensor_id = 'A' AND b.sensor_id = 'B' +WHERE a.sensor_id = 'A' + AND a.timestamp >= '2025-01-15T10:00:00' + AND a.timestamp < '2025-01-15T10:10:00' +ORDER BY a.timestamp; +``` + +**Results:** + +| ts_a | temp_a | ts_b | temp_b | seconds_since_b_reading | temp_difference | +|------|--------|------|--------|-------------------------|-----------------| +| 10:00:00 | 22.5 | 10:00:00 | 23.1 | 0 | -0.6 | +| 10:01:00 | 22.7 | 10:00:00 | 23.1 | 60 | -0.4 | ← Most recent B reading +| 10:02:00 | 22.6 | 10:02:00 | 23.3 | 0 | -0.7 | + +**When to use:** +- Need point-in-time comparison (what was B when A reported?) +- Sensors report at irregular intervals +- Want actual timestamps, not resampled intervals +- Large datasets (very efficient) + +**Pros:** +- Extremely fast (optimized for time-series) +- No data synthesis (uses actual readings) +- Handles irregular timestamps naturally +- Scalable to millions of rows + +**Cons:** +- More complex syntax +- May need to filter by max time difference +- Requires understanding of ASOF semantics + +## Comparison: Three Sensors Combined + +Combine three sensors using ASOF JOIN: + +```questdb-sql demo title="ASOF JOIN multiple sensors" +SELECT + a.timestamp as ts_a, + a.temperature as temp_a, + b.timestamp as ts_b, + b.temperature as temp_b, + c.timestamp as ts_c, + c.temperature as temp_c, + (a.temperature + b.temperature + c.temperature) / 3 as avg_temperature +FROM sensor_readings a +ASOF JOIN sensor_readings b + ON a.sensor_id = 'A' AND b.sensor_id = 'B' +ASOF JOIN sensor_readings c + ON a.sensor_id = 'A' AND c.sensor_id = 'C' +WHERE a.sensor_id = 'A' + AND a.timestamp >= 
'2025-01-15T10:00:00' + AND a.timestamp < '2025-01-15T10:10:00' +ORDER BY a.timestamp; +``` + +Each sensor A reading is matched with the most recent reading from sensors B and C. + +## Filtering by Maximum Time Difference + +Ensure joined readings aren't too stale: + +```questdb-sql demo title="ASOF JOIN with staleness filter" +WITH joined AS ( + SELECT + a.timestamp as ts_a, + a.temperature as temp_a, + b.timestamp as ts_b, + b.temperature as temp_b, + (a.timestamp - b.timestamp) as time_diff_micros + FROM sensor_readings a + ASOF JOIN sensor_readings b + ON a.sensor_id = 'A' AND b.sensor_id = 'B' + WHERE a.sensor_id = 'A' + AND a.timestamp >= '2025-01-15T10:00:00' + AND a.timestamp < '2025-01-15T10:10:00' +) +SELECT * +FROM joined +WHERE time_diff_micros <= 120000000 -- B reading not older than 2 minutes +ORDER BY ts_a; +``` + +This filters out matches where sensor B's reading is too old. + +## LT JOIN for Strictly Before + +Use LT JOIN when you need readings strictly before (not at the same time): + +```questdb-sql demo title="LT JOIN for strictly previous reading" +SELECT + a.timestamp as ts_a, + a.temperature as temp_a, + b.timestamp as ts_b, + b.temperature as temp_b, + (a.temperature - b.temperature) as temp_change +FROM sensor_readings a +LT JOIN sensor_readings b + ON a.sensor_id = 'A' AND b.sensor_id = 'A' -- Same sensor, previous reading +WHERE a.sensor_id = 'A' + AND a.timestamp >= '2025-01-15T10:00:00' + AND a.timestamp < '2025-01-15T10:10:00' +ORDER BY a.timestamp; +``` + +This matches each reading with the strictly previous reading from the same sensor (useful for calculating deltas). 
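The difference between ASOF and LT matching is simply `<=` versus `<` when scanning backwards in time. A small Python sketch (with hypothetical B-side readings, not QuestDB code) makes the semantics concrete:

```python
import bisect

# hypothetical sorted B-side timestamps (seconds) and temperatures
b_ts = [0, 120, 240]
b_val = [23.1, 23.3, 23.4]

def asof(t):
    """Most recent B reading at or before t (ASOF JOIN semantics)."""
    i = bisect.bisect_right(b_ts, t) - 1
    return None if i < 0 else (b_ts[i], b_val[i])

def lt(t):
    """Most recent B reading strictly before t (LT JOIN semantics)."""
    i = bisect.bisect_left(b_ts, t) - 1
    return None if i < 0 else (b_ts[i], b_val[i])

print(asof(120))  # (120, 23.3): matches the reading at the same instant
print(lt(120))    # (0, 23.1): skips the simultaneous reading
print(asof(-5))   # None: no earlier reading exists, i.e. NULL in SQL
```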
+ +## Handling NULL Results + +ASOF JOIN returns NULL when no previous reading exists: + +```questdb-sql demo title="Handle NULL from ASOF JOIN" +SELECT + a.timestamp as ts_a, + a.temperature as temp_a, + COALESCE(b.temperature, a.temperature) as temp_b, -- Use A if B is NULL + CASE + WHEN b.timestamp IS NULL THEN 'NO_PREVIOUS_READING' + ELSE 'OK' + END as status +FROM sensor_readings a +ASOF JOIN sensor_readings b + ON a.sensor_id = 'A' AND b.sensor_id = 'B' +WHERE a.sensor_id = 'A' +ORDER BY a.timestamp; +``` + +## Performance Comparison + +| Strategy | Rows Generated | Query Speed | Memory Usage | Best For | +|----------|----------------|-------------|--------------|----------| +| **CROSS JOIN** | N × M | Slow | High | Small datasets, all combinations | +| **LEFT JOIN** | N | Medium | Medium | Regular intervals, visualization | +| **ASOF JOIN** | N | Fast | Low | Large datasets, irregular data | + +**Benchmark example (1M rows each):** +- CROSS JOIN: ~30 seconds, creates 1T rows (filtered to 1M) +- LEFT JOIN: ~5 seconds, creates 1M rows +- ASOF JOIN: ~0.5 seconds, creates 1M rows + +## Combining Strategies + +Use resampling + ASOF for best of both worlds: + +```questdb-sql demo title="Resample then ASOF JOIN" +WITH sensor_a_minute AS ( + SELECT timestamp, first(temperature) as temp_a + FROM sensor_readings + WHERE sensor_id = 'A' + SAMPLE BY 1m +) +SELECT + a.timestamp, + a.temp_a, + b.temperature as temp_b_asof +FROM sensor_a_minute a +ASOF JOIN sensor_readings b + ON b.sensor_id = 'B' +WHERE a.timestamp >= '2025-01-15T10:00:00' +ORDER BY a.timestamp; +``` + +- Resample sensor A for regular intervals +- Use ASOF JOIN to find sensor B readings without resampling B + +## Grafana Multi-Sensor Dashboard + +Format for Grafana with multiple series: + +```questdb-sql demo title="Multi-sensor data for Grafana" +WITH sensor_a AS ( + SELECT timestamp, first(temperature) as temperature + FROM sensor_readings + WHERE sensor_id = 'A' + AND $__timeFilter(timestamp) + SAMPLE 
BY $__interval FILL(PREV) +), +sensor_b AS ( + SELECT timestamp, first(temperature) as temperature + FROM sensor_readings + WHERE sensor_id = 'B' + AND $__timeFilter(timestamp) + SAMPLE BY $__interval FILL(PREV) +) +SELECT timestamp as time, 'Sensor A' as metric, temperature as value FROM sensor_a +UNION ALL +SELECT timestamp as time, 'Sensor B' as metric, temperature as value FROM sensor_b +ORDER BY time; +``` + +Creates separate series for each sensor in Grafana. + +## Decision Matrix + +**Choose CROSS JOIN when:** +- Datasets are small (< 10K rows each) +- You need all possible combinations +- Time tolerance is flexible (e.g., "within 1 minute") +- Analyzing correlation between sensors + +**Choose LEFT JOIN when:** +- You can resample to common intervals +- Clean, aligned timestamps are important +- Visualizing in Grafana with multiple sensors +- Forward-filling is acceptable + +**Choose ASOF JOIN when:** +- Datasets are large (> 100K rows) +- Timestamps are irregular +- Point-in-time accuracy matters +- Query performance is critical +- You want actual readings, not interpolated values + +:::tip ASOF JOIN is Usually Best +For most real-world sensor data scenarios, ASOF JOIN offers the best combination of performance, accuracy, and simplicity. It's specifically designed for time-series data and handles irregular intervals naturally. +::: + +:::warning CROSS JOIN Explosion +Never use CROSS JOIN without a strong WHERE filter on large tables. A CROSS JOIN of two 1M-row tables creates 1 trillion rows before filtering! + +Safe: `CROSS JOIN ... WHERE ABS(a.ts - b.ts) < threshold` +Dangerous: `CROSS JOIN ... 
` (without WHERE on time difference) +::: + +:::info Related Documentation +- [ASOF JOIN](/docs/reference/sql/join/#asof-join) +- [LT JOIN](/docs/reference/sql/join/#lt-join) +- [LEFT JOIN](/docs/reference/sql/join/#left-join) +- [SAMPLE BY](/docs/reference/sql/select/#sample-by) +- [FILL strategies](/docs/reference/sql/select/#fill) +::: diff --git a/documentation/sidebars.js b/documentation/sidebars.js index 422c92462..9487581c9 100644 --- a/documentation/sidebars.js +++ b/documentation/sidebars.js @@ -668,7 +668,6 @@ module.exports = { collapsed: true, items: [ "playbook/sql/force-designated-timestamp", - "playbook/sql/pivoting", "playbook/sql/rows-before-after-value-match", { type: "category", @@ -691,6 +690,15 @@ module.exports = { collapsed: true, items: [ "playbook/sql/time-series/latest-n-per-partition", + "playbook/sql/time-series/session-windows", + "playbook/sql/time-series/latest-activity-window", + "playbook/sql/time-series/filter-by-week", + "playbook/sql/time-series/expand-power-over-time", + "playbook/sql/time-series/epoch-timestamps", + "playbook/sql/time-series/sample-by-interval-bounds", + "playbook/sql/time-series/remove-outliers", + "playbook/sql/time-series/fill-missing-intervals", + "playbook/sql/time-series/sparse-sensor-data", ], }, { @@ -699,6 +707,13 @@ module.exports = { collapsed: true, items: [ "playbook/sql/advanced/top-n-plus-others", + "playbook/sql/advanced/pivot-table", + "playbook/sql/advanced/unpivot-table", + "playbook/sql/advanced/sankey-funnel", + "playbook/sql/advanced/conditional-aggregates", + "playbook/sql/advanced/general-and-sampled-aggregates", + "playbook/sql/advanced/consistent-histogram-buckets", + "playbook/sql/advanced/array-from-string", ], }, ], @@ -708,6 +723,7 @@ module.exports = { label: "Integrations", collapsed: true, items: [ + "playbook/integrations/opcua-dense-format", { type: "category", label: "Grafana", @@ -716,14 +732,7 @@ module.exports = { "playbook/integrations/grafana/dynamic-table-queries", 
"playbook/integrations/grafana/read-only-user", "playbook/integrations/grafana/variable-dropdown", - ], - }, - { - type: "category", - label: "Telegraf", - collapsed: true, - items: [ - "playbook/integrations/telegraf/opcua-dense-format", + "playbook/integrations/grafana/overlay-timeshift", ], }, ], @@ -770,6 +779,10 @@ module.exports = { items: [ "playbook/operations/docker-compose-config", "playbook/operations/monitor-with-telegraf", + "playbook/operations/csv-import-milliseconds", + "playbook/operations/tls-pgbouncer", + "playbook/operations/copy-data-between-instances", + "playbook/operations/query-times-histogram", ], }, ], From 99219b65f0f03280a37a31d8c18a1ca1618afd0d Mon Sep 17 00:00:00 2001 From: javier Date: Thu, 18 Dec 2025 20:37:25 +0100 Subject: [PATCH 11/21] added trades table to schema info --- documentation/playbook/demo-data-schema.md | 108 +++++++++++++++++---- 1 file changed, 90 insertions(+), 18 deletions(-) diff --git a/documentation/playbook/demo-data-schema.md b/documentation/playbook/demo-data-schema.md index fa5e90639..a8e6910d7 100644 --- a/documentation/playbook/demo-data-schema.md +++ b/documentation/playbook/demo-data-schema.md @@ -1,23 +1,23 @@ --- title: Demo Data Schema sidebar_label: Demo data schema -description: Schema and structure of the FX market data available on demo.questdb.io +description: Schema and structure of the FX market data and cryptocurrency trades available on demo.questdb.io --- -The [QuestDB demo instance at demo.questdb.io](https://demo.questdb.io) contains simulated FX market data that you can query directly. This page describes the available tables and their structure. +The [QuestDB demo instance at demo.questdb.io](https://demo.questdb.io) contains two datasets that you can query directly: simulated FX market data and real cryptocurrency trades. This page describes the available tables and their structure. 
## Overview -The demo instance provides two main tables representing different types of foreign exchange market data: +The demo instance provides two independent datasets: -- **`core_price`** - Individual price updates from multiple ECNs (Electronic Communication Networks) -- **`market_data`** - Order book snapshots with bid/ask prices and volumes stored as 2D arrays +1. **FX Market Data (Simulated)** - Foreign exchange prices and order books +2. **Cryptocurrency Trades (Real)** - Live cryptocurrency trades from OKX exchange -Additionally, several materialized views provide pre-aggregated data at different time intervals. +--- -:::info Simulated Data -The FX data on the demo instance is **simulated**, not real market data. We fetch real reference prices from Yahoo Finance every few seconds for 30 currency pairs, but all order book levels and core price updates are generated algorithmically based on these reference prices. This provides realistic patterns and data volumes for testing queries without actual market data costs. -::: +# FX Market Data (Simulated) + +The FX dataset contains simulated foreign exchange market data for 30 currency pairs. We fetch real reference prices from Yahoo Finance every few seconds, but all order book levels and price updates are generated algorithmically based on these reference prices. ## core_price Table @@ -128,9 +128,9 @@ LIMIT -5; Each order book snapshot contains 40 bid levels and 40 ask levels. 
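As a sketch of how such per-level arrays might be consumed client-side (the `[price, volume]`-per-level layout below is an assumption for illustration, not the documented `market_data` schema):

```python
# hypothetical order-book snapshot: one [price, volume] pair per level,
# best level first; the real market_data layout may differ
bids = [[1.0841, 1_000_000], [1.0840, 2_500_000]]  # 40 levels in the real table
asks = [[1.0843, 1_200_000], [1.0844, 3_000_000]]

best_bid, best_ask = bids[0][0], asks[0][0]
spread = round(best_ask - best_bid, 4)             # top-of-book spread
mid = round((best_bid + best_ask) / 2, 5)          # mid price
bid_depth = sum(volume for _, volume in bids)      # total resting bid volume
print(spread, mid, bid_depth)  # 0.0002 1.0842 3500000
```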
-## Materialized Views +## FX Materialized Views -Several materialized views provide pre-aggregated data at different time intervals, optimized for dashboard and analytics queries: +The FX dataset includes several materialized views providing pre-aggregated data at different time intervals: ### Best Bid/Offer (BBO) Views @@ -150,21 +150,93 @@ Several materialized views provide pre-aggregated data at different time interva - **`market_data_ohlc_15m`** - OHLC candlesticks at 15-minute intervals - **`market_data_ohlc_1d`** - OHLC candlesticks at 1-day intervals -These materialized views are continuously updated and provide faster query performance for common time-series aggregations. +These views are continuously updated and optimized for dashboard and analytics queries on FX data. -## Data Retention and Volume +### FX Data Volume -Both tables use a **3-day TTL (Time To Live)**, meaning data older than 3 days is automatically removed. This keeps the demo instance responsive while providing sufficient data for testing and examples. - -**Data volume per day:** - **`market_data`**: Approximately **160 million rows** per day (order book snapshots) - **`core_price`**: Approximately **73 million rows** per day (price updates across all ECNs and symbols) -These volumes provide realistic scale for testing time-series queries and aggregations. +--- + +# Cryptocurrency Trades (Real) + +The cryptocurrency dataset contains **real market data** streamed live from the OKX exchange using FeedHandler. These are actual executed trades, not simulated data. + +## trades Table + +The `trades` table contains real cryptocurrency trade data. Each row represents an actual executed trade for a cryptocurrency pair. 
+ +### Schema + +```sql title="trades table structure" +CREATE TABLE 'trades' ( + symbol SYMBOL CAPACITY 256 CACHE, + side SYMBOL CAPACITY 256 CACHE, + price DOUBLE, + amount DOUBLE, + timestamp TIMESTAMP +) timestamp(timestamp) PARTITION BY DAY WAL; +``` + +### Columns + +- **`timestamp`** - Time when the trade was executed (designated timestamp) +- **`symbol`** - Cryptocurrency trading pair from the 12 tracked symbols (see list below) +- **`side`** - Trade side: **buy** or **sell** +- **`price`** - Execution price of the trade +- **`amount`** - Trade size (volume in base currency) + +The table tracks **12 cryptocurrency pairs**: ADA-USDT, AVAX-USD, BTC-USDT, DAI-USD, DOT-USD, ETH-BTC, ETH-USDT, LTC-USD, SOL-BTC, SOL-USD, UNI-USD, XLM-USD. + +### Sample Data + +```questdb-sql demo title="Recent cryptocurrency trades" +SELECT * FROM trades +LIMIT -10; +``` + +**Results:** + +| symbol | side | price | amount | timestamp | +| -------- | ---- | ------- | ---------- | --------------------------- | +| BTC-USDT | buy | 85721.6 | 0.00045714 | 2025-12-18T19:31:11.203000Z | +| BTC-USD | buy | 85721.6 | 0.00045714 | 2025-12-18T19:31:11.203000Z | +| BTC-USDT | buy | 85726.6 | 0.00001501 | 2025-12-18T19:31:11.206000Z | +| BTC-USD | buy | 85726.6 | 0.00001501 | 2025-12-18T19:31:11.206000Z | +| BTC-USDT | buy | 85726.9 | 0.000887 | 2025-12-18T19:31:11.206000Z | +| BTC-USD | buy | 85726.9 | 0.000887 | 2025-12-18T19:31:11.206000Z | +| BTC-USDT | buy | 85731.3 | 0.00004393 | 2025-12-18T19:31:11.206000Z | +| BTC-USD | buy | 85731.3 | 0.00004393 | 2025-12-18T19:31:11.206000Z | +| ETH-USDT | sell | 2827.54 | 0.006929 | 2025-12-18T19:31:11.595000Z | +| ETH-USD | sell | 2827.54 | 0.006929 | 2025-12-18T19:31:11.595000Z | + +## Cryptocurrency Materialized Views + +The cryptocurrency dataset includes materialized views for aggregated trade data: + +### Trades Aggregations + +- **`trades_latest_1d`** - Latest trade data aggregated daily +- **`trades_OHLC_15m`** - OHLC candlesticks for 
cryptocurrency trades at 15-minute intervals + +These views are continuously updated and provide faster query performance for cryptocurrency trade analysis. + +### Cryptocurrency Data Volume + +- **`trades`**: Approximately **3.7 million rows** per day (real cryptocurrency trades) + +--- + +## Data Retention + +**FX tables** (`core_price` and `market_data`) use a **3-day TTL (Time To Live)**, meaning data older than 3 days is automatically removed. This keeps the demo instance responsive while providing sufficient recent data. + +**Cryptocurrency trades table** has **no retention policy** and contains historical data dating back to **March 8, 2022**. This provides over 3 years of real cryptocurrency trade history for long-term analysis and backtesting. ## Using the Demo Data -You can run queries against this data directly on [demo.questdb.io](https://demo.questdb.io). Throughout the Playbook, recipes using demo data will include a direct link to execute the query. +You can run queries against both datasets directly on [demo.questdb.io](https://demo.questdb.io). Throughout the Playbook, recipes using demo data will include a direct link to execute the query. :::tip The demo instance is read-only. For testing write operations (INSERT, UPDATE, DELETE), you'll need to run QuestDB locally. See the [Quick Start guide](/docs/quick-start/) for installation instructions. 
From 6a13508fae254bff78d04aaa47e45b0ab6bd80a0 Mon Sep 17 00:00:00 2001 From: javier Date: Thu, 18 Dec 2025 21:30:14 +0100 Subject: [PATCH 12/21] fixing broken links --- .../playbook/integrations/grafana/overlay-timeshift.md | 2 +- .../playbook/operations/copy-data-between-instances.md | 4 ++-- .../playbook/operations/csv-import-milliseconds.md | 6 +++--- documentation/playbook/operations/query-times-histogram.md | 4 ++-- documentation/playbook/operations/tls-pgbouncer.md | 4 ++-- documentation/playbook/sql/advanced/array-from-string.md | 2 +- .../playbook/sql/advanced/consistent-histogram-buckets.md | 2 +- documentation/playbook/sql/advanced/unpivot-table.md | 4 ++-- documentation/playbook/sql/time-series/epoch-timestamps.md | 2 +- 9 files changed, 15 insertions(+), 15 deletions(-) diff --git a/documentation/playbook/integrations/grafana/overlay-timeshift.md b/documentation/playbook/integrations/grafana/overlay-timeshift.md index f0180050f..e8896a991 100644 --- a/documentation/playbook/integrations/grafana/overlay-timeshift.md +++ b/documentation/playbook/integrations/grafana/overlay-timeshift.md @@ -454,7 +454,7 @@ Mismatched timezones will misalign the overlay. 
:::info Related Documentation - [dateadd() function](/docs/reference/function/date-time/#dateadd) - [date_trunc() function](/docs/reference/function/date-time/#date_trunc) -- [UNION ALL](/docs/reference/sql/union/) +- [UNION ALL](/docs/reference/sql/union-except-intersect/) - [SAMPLE BY](/docs/reference/sql/select/#sample-by) - [Grafana time series](/docs/third-party-tools/grafana/) ::: diff --git a/documentation/playbook/operations/copy-data-between-instances.md b/documentation/playbook/operations/copy-data-between-instances.md index 316f861ff..5de7ec7d9 100644 --- a/documentation/playbook/operations/copy-data-between-instances.md +++ b/documentation/playbook/operations/copy-data-between-instances.md @@ -524,8 +524,8 @@ Methods with no downtime: ::: :::info Related Documentation -- [Backup command](/docs/reference/sql/backup/) +- [Backup command](/docs/operations/backup/) - [COPY command](/docs/reference/sql/copy/) -- [ILP ingestion](/docs/operations/ingesting-data/) +- [ILP ingestion](/docs/ingestion-overview/) - [PostgreSQL wire protocol](/docs/reference/api/postgres/) ::: diff --git a/documentation/playbook/operations/csv-import-milliseconds.md b/documentation/playbook/operations/csv-import-milliseconds.md index a6fa010e9..ff0089ea9 100644 --- a/documentation/playbook/operations/csv-import-milliseconds.md +++ b/documentation/playbook/operations/csv-import-milliseconds.md @@ -394,9 +394,9 @@ Timestamps between 1970 and ~2000 can be ambiguous (seconds could look like mill ::: :::info Related Documentation -- [CSV import via Web Console](/docs/operations/importing-data/#web-console-csv-import) -- [REST API import](/docs/operations/importing-data/#rest-api) +- [CSV import via Web Console](/docs/web-console/import-csv/) +- [REST API import](/docs/guides/import-csv/) - [COPY command](/docs/reference/sql/copy/) - [Timestamp types](/docs/reference/sql/datatypes/#timestamp-and-date) -- [ILP ingestion](/docs/operations/ingesting-data/#influxdb-line-protocol) +- [ILP 
ingestion](/docs/ingestion-overview/) ::: diff --git a/documentation/playbook/operations/query-times-histogram.md b/documentation/playbook/operations/query-times-histogram.md index e43069aa9..e188bcec9 100644 --- a/documentation/playbook/operations/query-times-histogram.md +++ b/documentation/playbook/operations/query-times-histogram.md @@ -404,8 +404,8 @@ Full query logging can generate significant data: ::: :::info Related Documentation -- [HTTP slow query logging](/docs/reference/configuration/#http-slow-query-log) -- [Prometheus metrics](/docs/reference/metrics/) +- [HTTP slow query logging](/docs/configuration/) +- [Prometheus metrics](/docs/operations/logging-metrics/) - [percentile() function](/docs/reference/function/aggregation/#percentile) - [Grafana integration](/docs/third-party-tools/grafana/) ::: diff --git a/documentation/playbook/operations/tls-pgbouncer.md b/documentation/playbook/operations/tls-pgbouncer.md index 5360e9aae..3ad3199f8 100644 --- a/documentation/playbook/operations/tls-pgbouncer.md +++ b/documentation/playbook/operations/tls-pgbouncer.md @@ -483,7 +483,7 @@ Or use a certbot hook to reload PgBouncer after renewal. :::info Related Documentation - [PostgreSQL wire protocol](/docs/reference/api/postgres/) -- [QuestDB security](/docs/operations/security/) +- [QuestDB security](/docs/guides/architecture/security/) - [PgBouncer documentation](https://www.pgbouncer.org/config.html) -- [Docker deployment](/docs/operations/deployment/docker/) +- [Docker deployment](/docs/deployment/docker/) ::: diff --git a/documentation/playbook/sql/advanced/array-from-string.md b/documentation/playbook/sql/advanced/array-from-string.md index cd36367dd..d32951691 100644 --- a/documentation/playbook/sql/advanced/array-from-string.md +++ b/documentation/playbook/sql/advanced/array-from-string.md @@ -355,7 +355,7 @@ QuestDB's array support is focused on specific use cases. 
For extensive array ma ::: :::info Related Documentation -- [CAST function](/docs/reference/function/cast/) +- [CAST function](/docs/reference/sql/cast/) - [Data types](/docs/reference/sql/datatypes/) - [String functions](/docs/reference/function/text/) ::: diff --git a/documentation/playbook/sql/advanced/consistent-histogram-buckets.md b/documentation/playbook/sql/advanced/consistent-histogram-buckets.md index a5cdff6ba..379158bd8 100644 --- a/documentation/playbook/sql/advanced/consistent-histogram-buckets.md +++ b/documentation/playbook/sql/advanced/consistent-histogram-buckets.md @@ -418,7 +418,7 @@ Grafana heatmaps require: :::info Related Documentation - [Aggregate functions](/docs/reference/function/aggregation/) -- [CAST function](/docs/reference/function/cast/) +- [CAST function](/docs/reference/sql/cast/) - [percentile()](/docs/reference/function/aggregation/#percentile) - [Window functions](/docs/reference/sql/select/#window-functions) ::: diff --git a/documentation/playbook/sql/advanced/unpivot-table.md b/documentation/playbook/sql/advanced/unpivot-table.md index 73198ff79..769bd3204 100644 --- a/documentation/playbook/sql/advanced/unpivot-table.md +++ b/documentation/playbook/sql/advanced/unpivot-table.md @@ -320,7 +320,7 @@ UNION ALL creates multiple copies of your data. 
For very large tables: ::: :::info Related Documentation -- [UNION](/docs/reference/sql/union/) +- [UNION](/docs/reference/sql/union-except-intersect/) - [CASE expressions](/docs/reference/sql/case/) -- [Pivoting (opposite operation)](/playbook/sql/pivoting) +- [Pivoting (opposite operation)](/docs/playbook/sql/advanced/pivot-table/) ::: diff --git a/documentation/playbook/sql/time-series/epoch-timestamps.md b/documentation/playbook/sql/time-series/epoch-timestamps.md index cf12ffcad..69c93811e 100644 --- a/documentation/playbook/sql/time-series/epoch-timestamps.md +++ b/documentation/playbook/sql/time-series/epoch-timestamps.md @@ -275,7 +275,7 @@ Wrong precision will give incorrect results by factors of 1000x! ::: :::info Related Documentation -- [CAST function](/docs/reference/function/cast/) +- [CAST function](/docs/reference/sql/cast/) - [Timestamp types](/docs/reference/sql/datatypes/#timestamp-and-date) - [dateadd()](/docs/reference/function/date-time/#dateadd) - [now()](/docs/reference/function/date-time/#now) From 0a583157ef0de0952a5b9a2f4ffcd69a907b74b5 Mon Sep 17 00:00:00 2001 From: javier Date: Fri, 19 Dec 2025 00:14:55 +0100 Subject: [PATCH 13/21] improved playbook content. 
Need to review queries mostly --- .../integrations/grafana/overlay-timeshift.md | 494 ++-------------- .../integrations/grafana/read-only-user.md | 121 ---- .../operations/copy-data-between-instances.md | 530 +----------------- .../operations/csv-import-milliseconds.md | 377 +------------ .../operations/monitor-with-telegraf.md | 383 +------------ .../operations/query-times-histogram.md | 448 +++------------ .../playbook/operations/tls-pgbouncer.md | 480 +--------------- .../programmatic/cpp/missing-columns.md | 356 ++---------- .../programmatic/rust/tls-configuration.md | 326 ----------- .../programmatic/tls-ca-configuration.md | 102 ++++ .../sql/advanced/array-from-string.md | 361 +----------- .../sql/advanced/conditional-aggregates.md | 293 +--------- .../general-and-sampled-aggregates.md | 426 ++------------ .../rows-before-after-value-match.md | 0 .../playbook/sql/advanced/sankey-funnel.md | 455 +++------------ .../playbook/sql/advanced/unpivot-table.md | 159 +----- .../playbook/sql/finance/rolling-stddev.md | 306 +--------- .../playbook/sql/finance/volume-profile.md | 165 +----- .../playbook/sql/finance/volume-spike.md | 243 +------- .../sql/time-series/epoch-timestamps.md | 276 +-------- .../sql/time-series/expand-power-over-time.md | 273 +-------- .../sql/time-series/fill-missing-intervals.md | 398 +------------ .../sql/time-series/filter-by-week.md | 222 +------- .../force-designated-timestamp.md | 0 .../sql/time-series/latest-activity-window.md | 261 +-------- .../sql/time-series/remove-outliers.md | 486 ++-------------- .../time-series/sample-by-interval-bounds.md | 287 +--------- .../sql/time-series/sparse-sensor-data.md | 454 ++++----------- documentation/sidebars.js | 12 +- 29 files changed, 812 insertions(+), 7882 deletions(-) delete mode 100644 documentation/playbook/programmatic/rust/tls-configuration.md create mode 100644 documentation/playbook/programmatic/tls-ca-configuration.md rename documentation/playbook/sql/{ => 
advanced}/rows-before-after-value-match.md (100%) rename documentation/playbook/sql/{ => time-series}/force-designated-timestamp.md (100%) diff --git a/documentation/playbook/integrations/grafana/overlay-timeshift.md b/documentation/playbook/integrations/grafana/overlay-timeshift.md index e8896a991..afbbf4330 100644 --- a/documentation/playbook/integrations/grafana/overlay-timeshift.md +++ b/documentation/playbook/integrations/grafana/overlay-timeshift.md @@ -1,460 +1,72 @@ --- -title: Overlay Yesterday on Today in Grafana +title: Overlay Two Time Series with Time Shift sidebar_label: Overlay with timeshift -description: Compare today's metrics with yesterday's using time-shifted queries to overlay historical data on current charts +description: Overlay yesterday's and today's data on the same Grafana chart using time shift --- -Overlay yesterday's data on today's chart in Grafana to visually compare current performance against previous periods. This pattern is useful for identifying anomalies, tracking daily patterns, and comparing weekday vs weekend behavior. +Compare yesterday's data against today's data on the same Grafana chart by overlaying them. -## Problem: Compare Current vs Previous Period +## Problem -You want to see if today's traffic pattern is normal by comparing it to yesterday: +You have a query with Grafana's `timeshift` set to `1d/d` to display yesterday's data. You want to overlay today's data on the same chart, starting from scratch each day, so you can compare the shapes of both time series. -**Without overlay:** -- View today's data -- Mentally remember yesterday's pattern -- Switch to yesterday's timeframe -- Try to compare (difficult!) 
+## Solution -**With overlay:** -- See both periods on same chart -- Visual comparison is immediate -- Easily spot deviations - -## Solution: Time-Shifted Queries - -Use UNION ALL to combine current and shifted historical data: - -```sql --- Today's data -SELECT - timestamp as time, - 'Today' as metric, - count(*) as value -FROM trades -WHERE timestamp >= date_trunc('day', now()) -SAMPLE BY 5m - -UNION ALL - --- Yesterday's data, shifted forward by 24 hours -SELECT - dateadd('d', 1, timestamp) as time, -- Shift forward 24 hours - 'Yesterday' as metric, - count(*) as value -FROM trades -WHERE timestamp >= date_trunc('day', dateadd('d', -1, now())) - AND timestamp < date_trunc('day', now()) -SAMPLE BY 5m - -ORDER BY time; -``` - -**Grafana will plot both series:** -- "Today" line shows current data at actual times -- "Yesterday" line shows previous day's data shifted to align with today's timeline - -## How It Works - -### Time Alignment - -```sql -dateadd('d', 1, timestamp) as time -``` - -Takes yesterday's timestamps and adds 24 hours: -- Yesterday 10:00 → Today 10:00 -- Yesterday 14:30 → Today 14:30 - -This aligns both datasets on the same X-axis (time). - -### Separate Series - -```sql -'Today' as metric -'Yesterday' as metric -``` - -Creates two distinct series in Grafana. 
Configure Grafana to: -- Different colors per series -- Legend shows "Today" vs "Yesterday" - -## Full Grafana Query - -```questdb-sql title="Today vs Yesterday trade volume" -SELECT - timestamp as time, - 'Today' as metric, - sum(amount) as value -FROM trades -WHERE $__timeFilter(timestamp) - AND timestamp >= date_trunc('day', now()) -SAMPLE BY $__interval - -UNION ALL - -SELECT - dateadd('d', 1, timestamp) as time, - 'Yesterday' as metric, - sum(amount) as value -FROM trades -WHERE timestamp >= date_trunc('day', dateadd('d', -1, now())) - AND timestamp < date_trunc('day', now()) -SAMPLE BY $__interval - -ORDER BY time; -``` - -**Grafana variables:** -- `$__timeFilter(timestamp)`: Respects dashboard time range -- `$__interval`: Auto-adjusts sample interval based on zoom level - -## Week-Over-Week Comparison - -Compare same weekday from last week: +Leave the timeshift as `1d/d` to cover yesterday, and add a new query to the same chart. In this new query, filter for timestamp plus 1 day to cover today's datapoints, then shift them back by 1 day for display. +**Query 1 (Yesterday's data):** ```sql -SELECT - timestamp as time, - 'This Week' as metric, - avg(price) as value -FROM trades -WHERE symbol = 'BTC-USDT' - AND timestamp >= date_trunc('day', now()) -SAMPLE BY 1h - -UNION ALL - -SELECT - dateadd('d', 7, timestamp) as time, - 'Last Week' as metric, - avg(price) as value -FROM trades -WHERE symbol = 'BTC-USDT' - AND timestamp >= date_trunc('day', dateadd('d', -7, now())) - AND timestamp < date_trunc('day', dateadd('d', -6, now())) -SAMPLE BY 1h - -ORDER BY time; -``` - -Compares Monday to Monday, Tuesday to Tuesday, etc. 
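The alignment trick used in the queries above — shifting yesterday's rows forward by one day so both series share the same x-axis — can be sketched outside SQL. A minimal Python illustration of what `dateadd('d', 1, timestamp)` does to each row (the timestamps and values are made up):

```python
from datetime import datetime, timedelta

def shift_forward_one_day(rows):
    """Shift each (timestamp, value) pair forward by 24 hours,
    mirroring dateadd('d', 1, timestamp) in the SQL above."""
    return [(ts + timedelta(days=1), value) for ts, value in rows]

# Yesterday 10:00 and 14:30 line up with today 10:00 and 14:30 on the chart
yesterday = [(datetime(2025, 1, 14, 10, 0), 120),
             (datetime(2025, 1, 14, 14, 30), 95)]
shifted = shift_forward_one_day(yesterday)
```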
- -## Multiple Historical Periods - -Overlay several previous days: - -```sql --- Today -SELECT timestamp as time, 'Today' as metric, count(*) as value -FROM trades -WHERE timestamp >= date_trunc('day', now()) -SAMPLE BY 10m - -UNION ALL - --- Yesterday -SELECT dateadd('d', 1, timestamp) as time, 'Yesterday' as metric, count(*) as value -FROM trades -WHERE timestamp >= date_trunc('day', dateadd('d', -1, now())) - AND timestamp < date_trunc('day', now()) -SAMPLE BY 10m - -UNION ALL - --- 2 days ago -SELECT dateadd('d', 2, timestamp) as time, '2 Days Ago' as metric, count(*) as value -FROM trades -WHERE timestamp >= date_trunc('day', dateadd('d', -2, now())) - AND timestamp < date_trunc('day', dateadd('d', -1, now())) -SAMPLE BY 10m - -UNION ALL - --- 3 days ago -SELECT dateadd('d', 3, timestamp) as time, '3 Days Ago' as metric, count(*) as value -FROM trades -WHERE timestamp >= date_trunc('day', dateadd('d', -3, now())) - AND timestamp < date_trunc('day', dateadd('d', -2, now())) -SAMPLE BY 10m - -ORDER BY time; -``` - -Shows trend over multiple days aligned to current day. - -## Hour-by-Hour Overlay - -Compare specific hours (e.g., current hour vs same hour yesterday): - -```sql -SELECT - timestamp as time, - 'Current Hour' as metric, - count(*) as value -FROM trades -WHERE timestamp >= date_trunc('hour', now()) -SAMPLE BY 1m - -UNION ALL - -SELECT - dateadd('d', 1, timestamp) as time, - 'Same Hour Yesterday' as metric, - count(*) as value -FROM trades -WHERE timestamp >= date_trunc('hour', dateadd('d', -1, now())) - AND timestamp < date_trunc('hour', dateadd('d', -1, now())) + 3600000000 -- +1 hour in micros -SAMPLE BY 1m - -ORDER BY time; -``` - -Compares minute-by-minute within same hour across days. 
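The rewritten overlay queries in this patch build a cumulative VWAP with `SUM(...) OVER (ORDER BY timestamp)` — cumulative traded value divided by cumulative volume. For intuition, the same computation as plain running sums (the bars below are made up):

```python
def cumulative_vwap(bars):
    """bars: list of (traded_value, volume) tuples in time order.
    Returns the running VWAP after each bar, i.e. cumulative traded
    value divided by cumulative volume — the same quantity the SQL
    window functions compute per row."""
    out, cum_value, cum_volume = [], 0.0, 0.0
    for traded_value, volume in bars:
        cum_value += traded_value
        cum_volume += volume
        out.append(cum_value / cum_volume)
    return out

# Two equal-volume bars traded at 100 then 110: VWAP moves 100.0 -> 105.0
print(cumulative_vwap([(100.0, 1.0), (110.0, 1.0)]))  # [100.0, 105.0]
```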
- -## Weekday vs Weekend Pattern - -Overlay weekday average against weekend average: - -```sql -WITH weekday_avg AS ( - SELECT - extract(hour from timestamp) * 3600000000 + - extract(minute from timestamp) * 60000000 as time_of_day_micros, - avg(value) as avg_value - FROM metrics - WHERE timestamp >= dateadd('d', -30, now()) - AND day_of_week(timestamp) BETWEEN 1 AND 5 -- Monday to Friday - GROUP BY time_of_day_micros -), -weekend_avg AS ( - SELECT - extract(hour from timestamp) * 3600000000 + - extract(minute from timestamp) * 60000000 as time_of_day_micros, - avg(value) as avg_value - FROM metrics - WHERE timestamp >= dateadd('d', -30, now()) - AND day_of_week(timestamp) IN (6, 7) -- Saturday, Sunday - GROUP BY time_of_day_micros +DECLARE + @symbol := 'BTC-USDT' +WITH sampled AS ( + SELECT + timestamp, symbol, + volume AS volume, + ((open+close)/2) * volume AS traded_value + FROM trades_OHLC_15m + WHERE $__timeFilter(timestamp) + AND symbol = @symbol +), cumulative AS ( + SELECT timestamp, symbol, + SUM(traded_value) + OVER (ORDER BY timestamp) AS cumulative_value, + SUM(volume) + OVER (ORDER BY timestamp) AS cumulative_volume + FROM sampled ) -SELECT - cast(date_trunc('day', now()) + time_of_day_micros as timestamp) as time, - 'Weekday Average' as metric, - avg_value as value -FROM weekday_avg - -UNION ALL - -SELECT - cast(date_trunc('day', now()) + time_of_day_micros as timestamp) as time, - 'Weekend Average' as metric, - avg_value as value -FROM weekend_avg - -ORDER BY time; -``` - -Shows typical weekday pattern vs typical weekend pattern. - -## Grafana Panel Configuration - -**Query settings:** -- Format: Time series -- Min interval: Match your SAMPLE BY interval - -**Display settings:** -- Visualization: Time series (line graph) -- Legend: Show (displays "Today", "Yesterday", etc.) 
-- Line styles: Different colors or dash styles per series -- Tooltip: All series (shows both values on hover) - -**Advanced:** -- Series overrides: - - "Today": Solid line, blue, bold - - "Yesterday": Dashed line, gray, thin - - Opacity: 80% for historical, 100% for current - -## Use Cases - -**Traffic anomaly detection:** -```sql --- Is today's traffic normal? --- Overlay last 7 days -``` - -**Performance regression:** -```sql --- API latency today vs yesterday -SELECT - timestamp as time, - 'Today P95' as metric, - percentile(latency_ms, 95) as value -FROM api_requests -WHERE timestamp >= date_trunc('day', now()) -SAMPLE BY 5m - -UNION ALL - -SELECT - dateadd('d', 1, timestamp) as time, - 'Yesterday P95' as metric, - percentile(latency_ms, 95) as value -FROM api_requests -WHERE timestamp >= date_trunc('day', dateadd('d', -1, now())) - AND timestamp < date_trunc('day', now()) -SAMPLE BY 5m -ORDER BY time; -``` - -**Sales comparison:** -```sql --- Today's sales vs same day last week --- (Mondays often differ from Tuesdays) -``` - -**Seasonal patterns:** -```sql --- Compare today to same date last month/year -SELECT dateadd('M', 1, timestamp) as time -- Month shift -SELECT dateadd('y', 1, timestamp) as time -- Year shift -``` - -## Performance Optimization - -**Filter early:** -```sql -WHERE timestamp >= date_trunc('day', dateadd('d', -1, now())) - AND timestamp < date_trunc('day', dateadd('d', 2, now())) -``` - -Only query relevant dates (yesterday + today + small buffer). - -**Use SAMPLE BY:** -```sql -SAMPLE BY $__interval -``` - -Let Grafana determine appropriate resolution based on zoom level. - -**Partition pruning:** -```sql --- If partitioned by day, this efficiently accesses only 2 partitions -WHERE timestamp IN today() -UNION ALL -WHERE timestamp >= dateadd('d', -1, now()) AND timestamp < date_trunc('day', now()) -``` - -## Alternative: Grafana Built-in Timeshift - -**Note:** Some Grafana panels support native timeshift transformations. 
- -**Using transformation:** -1. Query normal time-series data -2. Add transformation: "Add field from calculation" -3. Mode: "Reduce row" -4. Calculation: "Difference" -5. Apply timeshift transformation (if available in your Grafana version) - -However, SQL-based approach gives more control and works reliably across Grafana versions. - -## Dynamic Period Selection - -Use Grafana variables for flexible comparison: - -**Variable `compare_period`:** -- `1d` = Yesterday -- `7d` = Last week -- `30d` = Last month - -**Query:** -```sql -SELECT timestamp as time, 'Current' as metric, value FROM metrics -WHERE $__timeFilter(timestamp) - -UNION ALL - -SELECT - dateadd('d', $compare_period, timestamp) as time, - 'Previous' as metric, - value -FROM metrics -WHERE timestamp >= dateadd('d', -$compare_period, now()) - AND timestamp < now() -ORDER BY time; -``` - -User can switch comparison period via dropdown. - -## Handling Incomplete Current Day - -Only show overlay up to current time of day: - -```sql -SELECT - timestamp as time, - 'Today' as metric, - count(*) as value -FROM trades -WHERE timestamp >= date_trunc('day', now()) - AND timestamp <= now() -- Don't show future of today -SAMPLE BY 10m - -UNION ALL - -SELECT - dateadd('d', 1, timestamp) as time, - 'Yesterday' as metric, - count(*) as value -FROM trades -WHERE timestamp >= date_trunc('day', dateadd('d', -1, now())) - AND timestamp <= dateadd('d', -1, now()) -- Yesterday at same time -SAMPLE BY 10m -ORDER BY time; -``` - -This prevents showing empty future time slots for today. 
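Several snippets above manipulate timestamps as raw microseconds (e.g. `+ 3600000000` for one hour), because QuestDB `TIMESTAMP` values are microseconds since the Unix epoch — which is also why Grafana's epoch-second macros get multiplied by 1,000,000. A small sketch of both conversions (function names are illustrative):

```python
MICROS_PER_SECOND = 1_000_000
MICROS_PER_DAY = 24 * 60 * 60 * MICROS_PER_SECOND

def epoch_seconds_to_micros(epoch_s: int) -> int:
    """Convert Unix epoch seconds (what Grafana's epoch macros yield)
    to QuestDB's microsecond timestamps."""
    return epoch_s * MICROS_PER_SECOND

def shift_days_micros(ts_micros: int, days: int) -> int:
    """Equivalent of dateadd('d', days, ts) on a raw microsecond value."""
    return ts_micros + days * MICROS_PER_DAY

start = epoch_seconds_to_micros(1_736_899_200)  # 2025-01-15T00:00:00Z
tomorrow = shift_days_micros(start, 1)
```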
- -## Combining with Alerts - -Trigger alert when today's value deviates significantly from yesterday: - -```sql -WITH today_value AS ( - SELECT avg(latency_ms) as latency FROM api_requests - WHERE timestamp >= dateadd('m', -5, now()) -), -yesterday_same_time AS ( - SELECT avg(latency_ms) as latency FROM api_requests - WHERE timestamp >= dateadd('d', -1, dateadd('m', -5, now())) - AND timestamp < dateadd('d', -1, now()) +SELECT timestamp as time, cumulative_value/cumulative_volume AS vwap_yesterday FROM cumulative; +``` + +**Query 2 (Today's data, shifted back):** +```sql +DECLARE + @symbol := 'BTC-USDT' +WITH sampled AS ( + SELECT + timestamp, symbol, + volume AS volume, + ((open+close)/2) * volume AS traded_value + FROM trades_OHLC_15m + WHERE timestamp BETWEEN dateadd('d',1,$__unixEpochFrom()*1000000) + AND dateadd('d',1,$__unixEpochTo() * 1000000) + AND symbol = @symbol +), cumulative AS ( + SELECT timestamp, symbol, + SUM(traded_value) + OVER (ORDER BY timestamp) AS cumulative_value, + SUM(volume) + OVER (ORDER BY timestamp) AS cumulative_volume + FROM sampled ) -SELECT - (today_value.latency - yesterday_same_time.latency) / yesterday_same_time.latency * 100 as pct_change -FROM today_value, yesterday_same_time; +SELECT dateadd('d',-1,timestamp) as time, cumulative_value/cumulative_volume AS vwap_today FROM cumulative; ``` -Alert if `pct_change > 50` (today is 50% slower than yesterday). +**Note:** This example uses `$__unixEpochFrom()` and `$__unixEpochTo()` macros from the PostgreSQL Grafana plugin. When using the QuestDB plugin, the equivalent macros are `$__fromTime` and `$__toTime` and don't need epoch conversion as those are native timestamps. -:::tip Best Practices -1. **Match sample intervals**: Use same SAMPLE BY for all series -2. **Label clearly**: Use descriptive metric names ("Today", "Yesterday (Shifted)") -3. **Limit historical depth**: Too many overlays clutter the chart (2-3 periods max) -4. 
**Adjust colors**: Make current period prominent, historical periods muted -5. **Consider patterns**: Week-over-week often more meaningful than day-over-day -::: - -:::warning Time Zone Considerations -Ensure both queries use the same timezone: -- Use UTC for consistency -- Or explicitly set timezone in date_trunc(): `date_trunc('day', now(), 'America/New_York')` - -Mismatched timezones will misalign the overlay. -::: +This creates an overlay chart where yesterday's and today's data align on the same time axis, allowing direct comparison. :::info Related Documentation -- [dateadd() function](/docs/reference/function/date-time/#dateadd) -- [date_trunc() function](/docs/reference/function/date-time/#date_trunc) - [UNION ALL](/docs/reference/sql/union-except-intersect/) -- [SAMPLE BY](/docs/reference/sql/select/#sample-by) -- [Grafana time series](/docs/third-party-tools/grafana/) +- [Window functions](/docs/reference/sql/select/#window-functions) +- [Grafana integration](/docs/third-party-tools/grafana/) ::: diff --git a/documentation/playbook/integrations/grafana/read-only-user.md b/documentation/playbook/integrations/grafana/read-only-user.md index aec5d5f17..cbc232c9f 100644 --- a/documentation/playbook/integrations/grafana/read-only-user.md +++ b/documentation/playbook/integrations/grafana/read-only-user.md @@ -74,127 +74,6 @@ After enabling, you have two separate users: - Permissions: `SELECT` queries only - Use for: Grafana dashboards, monitoring tools, analytics applications -## Grafana Configuration - -Configure Grafana to use the read-only user: - -### PostgreSQL Data Source - -When adding a QuestDB data source using the PostgreSQL plugin: - -1. **Host:** `your-questdb-host:8812` -2. **Database:** `qdb` -3. **User:** `grafana_reader` (your read-only username) -4. **Password:** `secure_password_here` (your read-only password) -5. 
**SSL Mode:** Depends on your setup (typically `disable` for local, `require` for remote) - -### QuestDB Data Source Plugin - -When using the [native QuestDB Grafana plugin](https://grafana.com/grafana/plugins/questdb-questdb-datasource/): - -1. **URL:** `http://your-questdb-host:9000` -2. **Authentication:** Select PostgreSQL Wire -3. **User:** `grafana_reader` -4. **Password:** `secure_password_here` - -## Verification - -Test that permissions are working correctly: - -**Read-only user should succeed:** -```sql --- These queries should work -SELECT * FROM trades LIMIT 10; -SELECT count(*) FROM trades; -SELECT symbol, avg(price) FROM trades GROUP BY symbol; -``` - -**Read-only user should fail:** -```sql --- These operations should be rejected -INSERT INTO trades VALUES ('BTC-USDT', 'buy', 50000, 0.1, now()); -UPDATE trades SET price = 60000 WHERE symbol = 'BTC-USDT'; -DELETE FROM trades WHERE timestamp < dateadd('d', -30, now()); -CREATE TABLE test_table (id INT, name STRING); -DROP TABLE trades; -``` - -Expected error for write operations: `permission denied` or similar. - -## Security Best Practices - -### Strong Passwords - -Change default passwords immediately: -```ini -# Don't use the defaults in production -pg.user=custom_admin_username -pg.password=strong_admin_password_here - -pg.readonly.user=custom_readonly_username -pg.readonly.password=strong_readonly_password_here -``` - -### Network Access Control - -Restrict network access at the infrastructure level: -- Use firewalls to limit which hosts can connect to port 8812 -- For cloud deployments, use security groups or network policies -- Consider using a VPN for remote access - -### Connection Encryption - -Enable TLS for PostgreSQL connections: -- QuestDB Enterprise has native TLS support -- For Open Source, consider using a TLS termination proxy (e.g., HAProxy, nginx) - -### Regular Password Rotation - -Implement a password rotation policy: -1. Update the password in QuestDB configuration -2. 
Restart QuestDB to apply changes -3. Update Grafana data source configuration -4. Test connections before rotating further - -## Troubleshooting - -**Connection refused:** -- Verify QuestDB is running and listening on port 8812 -- Check firewall rules allow connections -- Ensure the PostgreSQL wire protocol is enabled (it is by default) - -**Authentication failed:** -- Verify the read-only user is enabled: `pg.readonly.user.enabled=true` -- Check username and password match your configuration -- Restart QuestDB after configuration changes - -**Queries failing for read-only user:** -- Ensure queries are SELECT-only (no INSERT, UPDATE, DELETE, CREATE, DROP, ALTER) -- Check table names are correct (case-sensitive in some contexts) -- Verify user has been correctly configured as read-only - -**DDL statements fail from web console:** -- Verify you're using the admin user, not the read-only user -- Check the web console is configured to use admin credentials - -## Alternative: Connection Pooling with PgBouncer - -For advanced setups with many concurrent Grafana users, consider using PgBouncer: - -1. **Configure PgBouncer** to connect to QuestDB with the read-only user -2. **Set authentication** in PgBouncer for your Grafana instances -3. **Point Grafana** to PgBouncer instead of directly to QuestDB - -This provides connection pooling benefits and an additional authentication layer. - -:::tip Multiple Dashboards -You can use the same read-only credentials across multiple Grafana instances or dashboards. Each connection will be independently managed by QuestDB's PostgreSQL wire protocol implementation. -::: - -:::warning Write Operations from Web Console -The web console uses different authentication than the PostgreSQL wire protocol. Enabling a read-only user does NOT restrict the web console - it will still have full access via the admin user and the REST API. 
-:::
-
:::info Related Documentation
- [PostgreSQL wire protocol](/docs/reference/api/postgres/)
- [QuestDB Enterprise RBAC](/docs/operations/rbac/)
diff --git a/documentation/playbook/operations/copy-data-between-instances.md b/documentation/playbook/operations/copy-data-between-instances.md
index 5de7ec7d9..94b74b9e0 100644
--- a/documentation/playbook/operations/copy-data-between-instances.md
+++ b/documentation/playbook/operations/copy-data-between-instances.md
@@ -1,531 +1,39 @@
 ---
 title: Copy Data Between QuestDB Instances
 sidebar_label: Copy data between instances
-description: Copy tables and data between QuestDB instances using backup/restore, SQL export/import, and programmatic methods
+description: Copy a subset of data from production to development QuestDB instances
 ---
 
-Transfer data between QuestDB instances for migrations, backups, development environments, or multi-region deployments. This guide covers multiple methods with different trade-offs for speed, consistency, and ease of use.
+Copy a subset of data from one QuestDB instance to another for testing or development purposes.
 
-## Problem: Move Data Between Instances
+## Problem
 
-Common scenarios:
-- **Migration**: Move from development to production
-- **Backup/restore**: Copy data to backup instance
-- **Testing**: Clone production data to staging
-- **Multi-region**: Replicate data across regions
-- **Disaster recovery**: Restore from backup
+You want to copy a subset of data from one QuestDB instance to another. The method below copies the result of any arbitrary query; if you need a full database copy, see the [backup and restore documentation](/docs/operations/backup/).
 
-## Method 1: Filesystem Copy (Fastest)
+## Solution: Table2Ilp Utility
 
-Copy table directories directly between instances.
+QuestDB ships with a `utils` folder that includes a tool to read from one instance (using the PostgreSQL protocol) and write into another (using ILP). 
-### Prerequisites +You would need to [compile the jar](https://github.com/questdb/questdb/tree/master/utils), and then use it like this: -- Both instances must be **stopped** -- Same QuestDB version (or compatible) -- Same OS architecture recommended - -### Steps - -**On source instance:** -```bash -# Stop QuestDB -docker stop questdb-source -# or -systemctl stop questdb - -# Navigate to QuestDB data directory -cd /var/lib/questdb/db - -# List tables -ls -lh -``` - -**Copy table directory:** -```bash -# Copy to remote server -scp -r /var/lib/questdb/db/trades user@target-server:/var/lib/questdb/db/ - -# Or copy to local destination -cp -r /var/lib/questdb/db/trades /backup/questdb/db/ - -# Or use rsync for large tables -rsync -avz --progress /var/lib/questdb/db/trades/ user@target-server:/var/lib/questdb/db/trades/ +```shell +java -cp utils.jar io.questdb.cliutil.Table2Ilp \ + -d trades \ + -dilp "https::addr=localhost:9000;username=admin;password=quest;" \ + -s "trades WHERE start_time in '2022-06'" \ + -sc "jdbc:postgresql://localhost:8812/qdb?user=account&password=secret&ssl=false" \ + -sym "ticker,exchange" \ + -sts start_time ``` -**On target instance:** -```bash -# Ensure correct ownership -chown -R questdb:questdb /var/lib/questdb/db/trades - -# Start QuestDB -docker start questdb-target -# or -systemctl start questdb -``` - -**Verify:** -```sql -SELECT count(*) FROM trades; -SELECT min(timestamp), max(timestamp) FROM trades; -``` - -### Pros and Cons - -**Pros:** -- Fastest method (no serialization/deserialization) -- Preserves all metadata (symbols, indexes, partitions) -- Exact binary copy - -**Cons:** -- Requires downtime (both instances must be stopped) -- Must copy entire table (no filtering) -- Version compatibility required -- No incremental updates - -## Method 2: Backup and Restore - -Use QuestDB's native backup/restore functionality. 
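The `-dilp` value in the Table2Ilp command above is an ILP client configuration string of the form `<schema>::addr=<host>:<port>;key=value;...`. A small helper to assemble one — a sketch covering only the fields shown in the example, not the full configuration-string grammar:

```python
def ilp_conf(schema: str, host: str, port: int, **params) -> str:
    """Build an ILP client configuration string like the -dilp value
    above, e.g. 'https::addr=localhost:9000;username=admin;password=quest;'.
    Keyword arguments are appended as key=value; pairs in order."""
    conf = f"{schema}::addr={host}:{port};"
    for key, value in params.items():
        conf += f"{key}={value};"
    return conf

print(ilp_conf("https", "localhost", 9000, username="admin", password="quest"))
# https::addr=localhost:9000;username=admin;password=quest;
```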
- -### Create Backup - -**SQL command:** -```sql -BACKUP TABLE trades; -``` - -This creates a backup in `/backup/trades//`. - -**Backup all tables:** -```sql -BACKUP DATABASE; -``` - -### Copy Backup Files - -```bash -# On source server -cd /var/lib/questdb/backup/trades/2025-01-15T10-30-00/ -tar -czf trades_backup.tar.gz * - -# Transfer to target server -scp trades_backup.tar.gz user@target-server:/tmp/ - -# On target server -mkdir -p /var/lib/questdb/backup/trades/2025-01-15T10-30-00/ -cd /var/lib/questdb/backup/trades/2025-01-15T10-30-00/ -tar -xzf /tmp/trades_backup.tar.gz -``` - -### Restore on Target - -```sql --- Drop existing table if needed -DROP TABLE IF EXISTS trades; - --- Restore from backup -RESTORE TABLE trades FROM '/var/lib/questdb/backup/trades/2025-01-15T10-30-00/'; -``` - -### Pros and Cons - -**Pros:** -- Clean, supported method -- Can backup while instance is running -- Verifiable backup integrity - -**Cons:** -- Requires disk space for backup -- Two-step process (backup, then restore) -- No incremental backups - -## Method 3: SQL Export and Import - -Export as SQL inserts or CSV, then import on target. - -### Export as CSV - -**From source:** -```sql -COPY trades TO '/tmp/trades.csv' WITH HEADER true; -``` - -Or via psql: -```bash -psql -h source-host -p 8812 -U admin -d questdb -c \ - "COPY (SELECT * FROM trades WHERE timestamp >= '2025-01-01') TO STDOUT WITH CSV HEADER" \ - > trades.csv -``` - -### Import to Target - -**Via Web Console:** -1. Navigate to http://target-host:9000 -2. Click "Import" -3. Upload trades.csv -4. Configure schema and designated timestamp -5. 
Click "Import" - -**Via REST API:** -```bash -curl -F data=@trades.csv \ - -F name=trades \ - -F timestamp=timestamp \ - -F partitionBy=DAY \ - http://target-host:9000/imp -``` - -**Via COPY (QuestDB 8.0+):** -```sql -COPY trades FROM '/tmp/trades.csv' -WITH HEADER true -TIMESTAMP timestamp -PARTITION BY DAY; -``` - -### Pros and Cons - -**Pros:** -- Works across different QuestDB versions -- Can filter data during export -- Human-readable format (CSV) -- No downtime required - -**Cons:** -- Slower (serialization overhead) -- Larger file sizes -- Symbol dictionaries not preserved -- Need to recreate indexes - -## Method 4: ILP Streaming (Incremental) - -Stream data via InfluxDB Line Protocol for continuous replication. - -### Python Example - -```python -import psycopg2 -from questdb.ingress import Sender - -# Connect to source -source_conn = psycopg2.connect( - host="source-host", port=8812, - user="admin", password="quest", database="questdb" -) - -# Stream to target via ILP -with Sender('target-host', 9009) as sender: - cursor = source_conn.cursor() - cursor.execute(""" - SELECT timestamp, symbol, price, amount - FROM trades - WHERE timestamp >= now() - interval '1' day - ORDER BY timestamp - """) - - for row in cursor: - timestamp, symbol, price, amount = row - sender.row( - 'trades', - symbols={'symbol': symbol}, - columns={'price': price, 'amount': amount}, - at=int(timestamp.timestamp() * 1_000_000) # Convert to microseconds - ) - - sender.flush() - -source_conn.close() -``` - -### Real-Time Replication - -For ongoing replication, query new data periodically: - -```python -import time -from datetime import datetime, timedelta - -last_sync = datetime.now() - timedelta(days=1) - -while True: - cursor.execute(""" - SELECT timestamp, symbol, price, amount - FROM trades - WHERE timestamp > %s - ORDER BY timestamp - """, (last_sync,)) - - rows = cursor.fetchall() - if rows: - for row in rows: - # Send via ILP as above - sender.row(...) 
- - last_sync = rows[-1][0] # Update to latest timestamp - sender.flush() - - time.sleep(60) # Check every minute -``` - -### Pros and Cons - -**Pros:** -- Incremental updates possible -- Works while both instances are running -- Can transform data during transfer -- Can replicate to multiple targets - -**Cons:** -- Requires programming -- Network overhead -- Must handle connection failures -- Need to track last synced position +This reads from the source instance using PostgreSQL wire protocol and writes to the destination using ILP. -## Method 5: PostgreSQL Logical Replication (Advanced) +## Alternative: Export Endpoint -Use external tools that support PostgreSQL wire protocol. - -### Using Debezium - -Not directly supported, but can use CDC patterns with polling: - -**Source query (periodic):** -```sql -SELECT * -FROM trades -WHERE timestamp > :last_checkpoint -ORDER BY timestamp -LIMIT 10000; -``` - -Stream results to target via ILP or PostgreSQL COPY. - -### Pros and Cons - -**Pros:** -- Can integrate with data pipelines -- Near real-time replication -- Works with heterogeneous targets - -**Cons:** -- Complex setup -- External dependencies -- Requires checkpoint management - -## Comparison Matrix - -| Method | Speed | Downtime | Incremental | Filtering | Complexity | -|--------|-------|----------|-------------|-----------|------------| -| **Filesystem Copy** | ⭐⭐⭐⭐⭐ | Required | ❌ | ❌ | ⭐ | -| **Backup/Restore** | ⭐⭐⭐⭐ | Partial | ❌ | ❌ | ⭐⭐ | -| **SQL Export/Import** | ⭐⭐ | None | ❌ | ✅ | ⭐⭐ | -| **ILP Streaming** | ⭐⭐⭐ | None | ✅ | ✅ | ⭐⭐⭐⭐ | -| **Logical Replication** | ⭐⭐⭐ | None | ✅ | ✅ | ⭐⭐⭐⭐⭐ | - -## Large Table Considerations - -For tables > 100GB: - -### Parallel Export/Import - -```bash -# Export partitions in parallel -for partition in 2025-01-{01..31}; do - psql -h source -c "COPY (SELECT * FROM trades WHERE timestamp::date = '$partition') TO STDOUT" | \ - psql -h target -c "COPY trades FROM STDIN" & -done -wait -``` - -### Compression - -```bash 
-# Compress during transfer -pg_dump -h source -t trades | gzip | ssh target "gunzip | psql" - -# Or use pigz for parallel compression -pg_dump -h source -t trades | pigz -9 | ssh target "unpigz | psql" -``` - -### Split by Partition - -```bash -# Copy one partition at a time (filesystem method) -for partition in /var/lib/questdb/db/trades/2025-01-*; do - rsync -avz "$partition" target:/var/lib/questdb/db/trades/ -done -``` - -## Verification - -After copying, verify data integrity: - -**Row counts:** -```sql --- On source -SELECT count(*) FROM trades; - --- On target (should match) -SELECT count(*) FROM trades; -``` - -**Timestamp range:** -```sql -SELECT min(timestamp), max(timestamp) FROM trades; -``` - -**Checksums:** -```sql --- On both instances -SELECT - symbol, - count(*) as row_count, - sum(cast(price AS LONG)) as price_checksum, - sum(cast(amount AS LONG)) as amount_checksum -FROM trades -GROUP BY symbol -ORDER BY symbol; -``` - -**Sample verification:** -```sql --- Compare random samples -SELECT * FROM trades WHERE timestamp = '2025-01-15T12:34:56.789012Z'; -``` - -## Automating Backups - -### Daily Backup Script - -```bash -#!/bin/bash -# backup-questdb.sh - -BACKUP_DIR="/backup/questdb/$(date +%Y-%m-%d)" -SOURCE_DB="/var/lib/questdb/db" - -# Create backup directory -mkdir -p "$BACKUP_DIR" - -# Stop QuestDB (optional, for consistent backup) -# systemctl stop questdb - -# Copy tables -for table in "$SOURCE_DB"/*; do - if [ -d "$table" ]; then - table_name=$(basename "$table") - echo "Backing up $table_name..." 
- tar -czf "$BACKUP_DIR/${table_name}.tar.gz" -C "$SOURCE_DB" "$table_name" - fi -done - -# Start QuestDB -# systemctl start questdb - -# Cleanup old backups (keep last 7 days) -find /backup/questdb/ -type d -mtime +7 -exec rm -rf {} \; - -echo "Backup complete: $BACKUP_DIR" -``` - -**Add to crontab:** -```bash -# Run daily at 2 AM -0 2 * * * /usr/local/bin/backup-questdb.sh >> /var/log/questdb-backup.log 2>&1 -``` - -## Multi-Region Replication - -For active-active or active-passive setups: - -```python -# Continuous replication with conflict resolution -def replicate_to_regions(source_host, target_hosts): - with psycopg2.connect(host=source_host, ...) as source: - senders = [Sender(host, 9009) for host in target_hosts] - - last_ts = get_last_checkpoint() - - while True: - cursor = source.cursor() - cursor.execute(""" - SELECT * FROM trades - WHERE timestamp > %s - ORDER BY timestamp - LIMIT 10000 - """, (last_ts,)) - - batch = cursor.fetchall() - if not batch: - time.sleep(10) - continue - - # Replicate to all regions - for sender in senders: - for row in batch: - sender.row('trades', ...) - sender.flush() - - last_ts = batch[-1][0] - save_checkpoint(last_ts) -``` - -## Troubleshooting - -### "Table already exists" - -```sql --- Drop and recreate -DROP TABLE IF EXISTS trades; - --- Or truncate and append -TRUNCATE TABLE trades; -``` - -### Permission Denied - -```bash -# Fix ownership -chown -R questdb:questdb /var/lib/questdb/db/trades - -# Fix permissions -chmod -R 755 /var/lib/questdb/db/trades -``` - -### Incomplete Transfer - -```sql --- Check for gaps in time-series -SELECT - timestamp, - lag(timestamp) OVER (ORDER BY timestamp) as prev_timestamp, - timestamp - lag(timestamp) OVER (ORDER BY timestamp) as gap_micros -FROM trades -WHERE timestamp - lag(timestamp) OVER (ORDER BY timestamp) > 3600000000 -- Gaps > 1 hour -ORDER BY timestamp; -``` - -:::tip Best Practices -1. **Test first**: Always test your copy method on a small table -2. 
**Verify after**: Check row counts, timestamps, and sample data -3. **Monitor during**: Watch disk space, memory, and network usage -4. **Backup before**: Keep a backup before major data operations -5. **Automate**: Script and schedule regular backups -::: - -:::warning Downtime Planning -Methods requiring downtime: -- **Filesystem copy**: Both instances must be stopped -- **Backup** (optional): Source can run, target stopped during restore - -Methods with no downtime: -- **SQL export/import**: Both instances can run -- **ILP streaming**: Both instances remain operational -::: +You can also use [the export endpoint](/docs/reference/api/rest/#exp---export-data) to export data to CSV or other formats. :::info Related Documentation -- [Backup command](/docs/operations/backup/) -- [COPY command](/docs/reference/sql/copy/) - [ILP ingestion](/docs/ingestion-overview/) - [PostgreSQL wire protocol](/docs/reference/api/postgres/) +- [REST API export](/docs/reference/api/rest/#exp---export-data) ::: diff --git a/documentation/playbook/operations/csv-import-milliseconds.md b/documentation/playbook/operations/csv-import-milliseconds.md index ff0089ea9..b05599aec 100644 --- a/documentation/playbook/operations/csv-import-milliseconds.md +++ b/documentation/playbook/operations/csv-import-milliseconds.md @@ -1,114 +1,29 @@ --- title: Import CSV with Millisecond Timestamps sidebar_label: CSV import with milliseconds -description: Import CSV files with epoch millisecond timestamps and convert them to QuestDB's microsecond format +description: Import CSV files with epoch millisecond timestamps into QuestDB --- -Import CSV files containing epoch timestamps in milliseconds (common from JavaScript, Python, and many APIs) and convert them to QuestDB's native microsecond format during import. +Import CSV files containing epoch timestamps in milliseconds into QuestDB, which expects microseconds. 
-## Problem: CSV with Millisecond Epoch Timestamps +## Problem -You have a CSV file with timestamps as epoch milliseconds: +QuestDB does not support flags for timestamp conversion during CSV import. -**trades.csv:** -```csv -timestamp,symbol,price,amount -1737456645123,BTC-USDT,61234.50,0.123 -1737456645456,ETH-USDT,3456.78,1.234 -1737456645789,BTC-USDT,61235.00,0.456 -``` - -QuestDB expects timestamps in microseconds, so direct import will create incorrect dates (off by 1000x). - -## Solution 1: Convert During Web Console Import - -The QuestDB web console CSV import tool automatically detects and converts epoch timestamps. - -**Steps:** -1. Navigate to QuestDB web console (http://localhost:9000) -2. Click "Import" in the top navigation -3. Select your CSV file or drag and drop -4. In the schema detection screen: - - QuestDB detects the `timestamp` column type - - If detected as LONG, manually change to TIMESTAMP - - Check "Partition by" timestamp if appropriate -5. Click "Import" - -**Note:** The web console auto-detects milliseconds vs microseconds based on value magnitude. - -## Solution 2: REST API with Schema Definition - -Define the schema explicitly in the REST API call: - -```bash -curl -F data=@trades.csv \ - -F schema='[ - {"name": "timestamp", "type": "TIMESTAMP", "pattern": "epoch"}, - {"name": "symbol", "type": "SYMBOL"}, - {"name": "price", "type": "DOUBLE"}, - {"name": "amount", "type": "DOUBLE"} - ]' \ - -F timestamp=timestamp \ - -F partitionBy=DAY \ - http://localhost:9000/imp -``` - -**Key parameters:** -- `"pattern": "epoch"`: Tells QuestDB to interpret as epoch timestamp -- `"type": "TIMESTAMP"`: Column type -- `timestamp=timestamp`: Designate as table's designated timestamp -- `partitionBy=DAY`: Partition strategy - -The REST API automatically detects milliseconds vs microseconds. 
- -## Solution 3: SQL COPY Command with Conversion - -For QuestDB 8.0+, use the COPY command with timestamp conversion: - -```sql -COPY trades FROM 'trades.csv' -WITH HEADER true -FORMAT CSV -TIMESTAMP timestamp -PARTITION BY DAY; -``` - -QuestDB's CSV parser automatically handles epoch millisecond detection and conversion. - -## Solution 4: Pre-Convert in Source System - -Convert to microseconds before export: +## Solution Options -**JavaScript:** -```javascript -const timestampMicros = Date.now() * 1000; // Milliseconds * 1000 = microseconds -console.log(`${timestampMicros},BTC-USDT,61234.50,0.123`); -``` - -**Python:** -```python -import time -timestamp_micros = int(time.time() * 1_000_000) # Seconds * 1M = microseconds -print(f"{timestamp_micros},BTC-USDT,61234.50,0.123") -``` +Here are the options available: -**SQL (in source database):** -```sql --- PostgreSQL example -SELECT - EXTRACT(EPOCH FROM timestamp) * 1000000 AS timestamp_micros, - symbol, price, amount -FROM trades; -``` +### Option 1: Pre-process the Dataset -Then export with timestamps already in microseconds. +Convert timestamps from milliseconds to microseconds before import. If importing lots of data, create Parquet files, copy them to the QuestDB import folder, and read them with `read_parquet('file.parquet')`. Then use `INSERT INTO SELECT` to copy to another table. 
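+A minimal sketch of the pre-processing step, assuming a `timestamp` column holding epoch milliseconds (the column name and sample values are illustrative; the same conversion applies before writing Parquet):

```python
import csv
import io

def convert_ms_to_us(src, dst, ts_column="timestamp"):
    """Rewrite a CSV, multiplying the epoch-millisecond column by 1000
    so QuestDB can ingest it as epoch microseconds."""
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        row[ts_column] = str(int(row[ts_column]) * 1000)
        writer.writerow(row)

# Tiny in-memory example (sample values are made up)
src = io.StringIO("timestamp,symbol,price\n1737456645123,BTC-USDT,61234.50\n")
dst = io.StringIO()
convert_ms_to_us(src, dst)
print(dst.getvalue())  # timestamp now reads 1737456645123000
```

+For large files, stream from one file handle to another instead of using in-memory buffers.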
-## Solution 5: Import Then Convert with SQL

+### Option 2: Staging Table

-Import as LONG, then INSERT with conversion:
+Import into a non-partitioned staging table with the timestamp as `LONG`, then `INSERT INTO` a partitioned table casting it to `TIMESTAMP`:

```sql
--- Step 1: Create staging table with LONG timestamp
+-- Create staging table
 CREATE TABLE trades_staging (
   timestamp_ms LONG,
   symbol SYMBOL,
@@ -116,10 +31,9 @@ CREATE TABLE trades_staging (
   amount DOUBLE
 );

--- Step 2: Import CSV to staging table
--- (via web console or REST API, treating timestamp as LONG)
+-- Import CSV to staging table (via web console or REST API)

--- Step 3: Create final table with TIMESTAMP
+-- Create final table
 CREATE TABLE trades (
   timestamp TIMESTAMP,
   symbol SYMBOL INDEX,
@@ -127,276 +41,27 @@ CREATE TABLE trades (
   amount DOUBLE
 ) TIMESTAMP(timestamp) PARTITION BY DAY;

--- Step 4: Convert and insert
+-- Convert and insert
 INSERT INTO trades
 SELECT
-  cast(timestamp_ms * 1000 AS TIMESTAMP) as timestamp,  -- Milliseconds → microseconds
+  cast(timestamp_ms * 1000 AS TIMESTAMP) as timestamp,
   symbol,
   price,
   amount
 FROM trades_staging;

--- Step 5: Cleanup
+-- Drop staging table
 DROP TABLE trades_staging;
 ```

-This approach gives you full control over the conversion process.
- -## Verifying Timestamp Conversion - -After import, verify timestamps are correct: - -```sql --- Check first few rows -SELECT * FROM trades LIMIT 5; - --- Verify timestamp range is reasonable -SELECT - min(timestamp) as earliest, - max(timestamp) as latest, - (max(timestamp) - min(timestamp)) / 86400000000 as days_span -FROM trades; - --- Convert back to milliseconds to compare with source -SELECT - timestamp, - cast(timestamp AS LONG) / 1000 as timestamp_ms_check, - symbol, - price -FROM trades -LIMIT 5; -``` - -**Expected output:** -- Timestamps should show reasonable dates (not year 1970 or 50,000 AD) -- `days_span` should match your data's timeframe -- `timestamp_ms_check` should match your original CSV values - -## Common Mistakes and Fixes - -### Mistake 1: Timestamps 1000x Too Large - -**Symptom:** Dates show far in the future (year ~50,000 AD) - -**Cause:** Imported microseconds as milliseconds (multiplied by 1000 instead of dividing) - -**Fix:** -```sql --- Create corrected table -CREATE TABLE trades_fixed AS -SELECT - cast(cast(timestamp AS LONG) / 1000 AS TIMESTAMP) as timestamp, - symbol, price, amount -FROM trades_incorrect -TIMESTAMP(timestamp) PARTITION BY DAY; -``` - -### Mistake 2: Timestamps 1000x Too Small - -**Symptom:** All dates show as 1970-01-01 - -**Cause:** Imported seconds as microseconds, or milliseconds treated as microseconds - -**Fix:** -```sql --- If original was milliseconds, multiply by 1000 -CREATE TABLE trades_fixed AS -SELECT - cast(cast(timestamp AS LONG) * 1000 AS TIMESTAMP) as timestamp, - symbol, price, amount -FROM trades_incorrect -TIMESTAMP(timestamp) PARTITION BY DAY; -``` - -### Mistake 3: Timestamps Imported as Strings - -**Symptom:** Timestamp column type is STRING or VARCHAR - -**Cause:** CSV importer didn't recognize epoch format - -**Fix:** -```sql -INSERT INTO trades_corrected -SELECT - cast(cast(timestamp_string AS LONG) * 1000 AS TIMESTAMP) as timestamp, - symbol, price, amount -FROM trades_incorrect; -``` - 
-## Handling Mixed Timestamp Formats - -If your CSV has some timestamps in ISO format and some in epoch: - -```csv -timestamp,symbol,price -2025-01-15T10:30:00.000Z,BTC-USDT,61234.50 -1737456645123,ETH-USDT,3456.78 -2025-01-15T10:30:01.000Z,BTC-USDT,61235.00 -``` - -**Solution:** Import as STRING, then use CASE to convert: +You would be using twice the storage temporarily, but then you can drop the initial staging table. -```sql -CREATE TABLE trades_final AS -SELECT - CASE - -- If starts with digit, it's epoch milliseconds - WHEN timestamp_str ~ '^[0-9]+$' THEN cast(cast(timestamp_str AS LONG) * 1000 AS TIMESTAMP) - -- Otherwise, parse as ISO string - ELSE cast(timestamp_str AS TIMESTAMP) - END as timestamp, - symbol, - price -FROM trades_staging -TIMESTAMP(timestamp) PARTITION BY DAY; -``` - -## Batch Import Multiple CSV Files - -Import multiple files with consistent schema: - -```bash -#!/bin/bash -# Import all CSV files in directory - -for file in data/*.csv; do - echo "Importing $file..." - curl -F data=@"$file" \ - -F schema='[ - {"name": "timestamp", "type": "TIMESTAMP", "pattern": "epoch"}, - {"name": "symbol", "type": "SYMBOL"}, - {"name": "price", "type": "DOUBLE"}, - {"name": "amount", "type": "DOUBLE"} - ]' \ - -F timestamp=timestamp \ - -F name=trades \ - -F overwrite=false \ - http://localhost:9000/imp -done -``` - -**Key parameter:** -- `overwrite=false`: Append to existing table (default: true would overwrite) - -## Import with Timezone Conversion - -If timestamps are in milliseconds but represent a specific timezone: - -```sql --- Example: Timestamps are US Eastern Time, convert to UTC -INSERT INTO trades -SELECT - cast(dateadd('h', 5, cast(timestamp_ms * 1000 AS TIMESTAMP)) AS TIMESTAMP) as timestamp, -- EST is UTC-5 - symbol, - price, - amount -FROM trades_staging; -``` - -Adjust the hour offset based on your source timezone. 
- -## Performance Tips - -**Partition by appropriate interval:** -```sql --- High-frequency data (millions of rows per day) -PARTITION BY HOUR - --- Medium frequency (thousands per day) -PARTITION BY DAY - --- Low frequency (historical archives) -PARTITION BY MONTH -``` - -**Use SYMBOL type for repeated strings:** -```sql -CREATE TABLE trades ( - timestamp TIMESTAMP, - symbol SYMBOL, -- Not STRING - symbols are interned for efficiency - exchange SYMBOL, - side SYMBOL, - price DOUBLE, - amount DOUBLE -) TIMESTAMP(timestamp) PARTITION BY DAY; -``` - -**Disable WAL for bulk initial load:** -```sql --- Before import -ALTER TABLE trades SET PARAM maxUncommittedRows = 100000; -ALTER TABLE trades SET PARAM o3MaxLag = 0; - --- After import complete -ALTER TABLE trades SET PARAM maxUncommittedRows = 1000; -- Restore default -``` - -## Verifying Import Success - -**Row count:** -```sql -SELECT count(*) FROM trades; -``` - -**Timestamp range:** -```sql -SELECT - to_str(min(timestamp), 'yyyy-MM-dd HH:mm:ss') as earliest, - to_str(max(timestamp), 'yyyy-MM-dd HH:mm:ss') as latest -FROM trades; -``` +### Option 3: ILP Client -**Partition distribution:** -```sql -SELECT - to_str(timestamp, 'yyyy-MM-dd') as partition, - count(*) as row_count -FROM trades -GROUP BY partition -ORDER BY partition DESC -LIMIT 10; -``` - -## Alternative: Use ILP for Programmatic Import - -For programmatic imports, consider using InfluxDB Line Protocol instead of CSV: - -**Python example:** -```python -from questdb.ingress import Sender - -with Sender('localhost', 9009) as sender: - # timestamp_ms from your data source - timestamp_micros = timestamp_ms * 1000 - - sender.row( - 'trades', - symbols={'symbol': 'BTC-USDT'}, - columns={'price': 61234.50, 'amount': 0.123}, - at=timestamp_micros) - - sender.flush() -``` - -ILP handles timestamp precision explicitly and offers better performance for large datasets. 
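+If you go the ILP route, the read-convert-send loop might look like this. This is a sketch: the CSV layout, table name, and connection string are assumptions, and the exact `Sender` API depends on your `questdb` client version:

```python
import csv

def ms_to_us(epoch_ms: int) -> int:
    """Epoch milliseconds -> epoch microseconds."""
    return epoch_ms * 1000

def read_rows(path):
    """Yield (timestamp_us, symbol, price, amount) tuples from a CSV
    whose `timestamp` column holds epoch milliseconds."""
    with open(path) as f:
        for row in csv.DictReader(f):
            yield (ms_to_us(int(row["timestamp"])),
                   row["symbol"], float(row["price"]), float(row["amount"]))

def send_rows(path):
    """Stream converted rows to QuestDB over ILP. Requires `pip install questdb`
    and a running instance; the connection string here is an assumption."""
    from questdb.ingress import Sender, TimestampNanos
    with Sender.from_conf("http::addr=localhost:9000;") as sender:
        for ts_us, symbol, price, amount in read_rows(path):
            sender.row("trades",
                       symbols={"symbol": symbol},
                       columns={"price": price, "amount": amount},
                       at=TimestampNanos(ts_us * 1000))

# send_rows("trades.csv")  # uncomment with a real file and a running server
```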
- -:::tip Automatic Detection -QuestDB's CSV importer automatically detects millisecond vs microsecond vs second epoch timestamps based on value magnitude: -- Values ~1,700,000,000 → seconds -- Values ~1,700,000,000,000 → milliseconds -- Values ~1,700,000,000,000,000 → microseconds - -This detection works for timestamps from 2020 onwards. -::: - -:::warning Ambiguous Timestamps -Timestamps between 1970 and ~2000 can be ambiguous (seconds could look like milliseconds). For historical data, manually specify the conversion factor or use ISO 8601 string format instead of epoch. -::: +Read the CSV line-by-line and convert, then send via the ILP client. :::info Related Documentation -- [CSV import via Web Console](/docs/web-console/import-csv/) -- [REST API import](/docs/guides/import-csv/) -- [COPY command](/docs/reference/sql/copy/) -- [Timestamp types](/docs/reference/sql/datatypes/#timestamp-and-date) +- [CSV import](/docs/web-console/import-csv/) - [ILP ingestion](/docs/ingestion-overview/) +- [read_parquet()](/docs/reference/function/table/#read_parquet) ::: diff --git a/documentation/playbook/operations/monitor-with-telegraf.md b/documentation/playbook/operations/monitor-with-telegraf.md index dce97ac3f..2fef0c346 100644 --- a/documentation/playbook/operations/monitor-with-telegraf.md +++ b/documentation/playbook/operations/monitor-with-telegraf.md @@ -1,382 +1,57 @@ --- title: Store QuestDB Metrics in QuestDB -sidebar_label: Monitor with Telegraf -description: Scrape QuestDB Prometheus metrics using Telegraf and store them in QuestDB for monitoring dashboards +sidebar_label: Store QuestDB metrics +description: Scrape QuestDB Prometheus metrics using Telegraf and store them in QuestDB --- -Store QuestDB's operational metrics in QuestDB itself by scraping Prometheus metrics using Telegraf. This enables you to track performance, resource usage, and health over time using familiar SQL queries and Grafana dashboards, without needing a separate metrics database. 
+Store QuestDB's operational metrics in QuestDB itself by scraping Prometheus metrics using Telegraf. -## Problem: Monitor QuestDB Without Prometheus +## Solution: Telegraf Configuration -You want to monitor QuestDB's internal metrics but: -- Don't want to set up a separate Prometheus instance -- Prefer SQL-based analysis over PromQL -- Want to integrate monitoring with existing QuestDB dashboards -- Need long-term metric retention with QuestDB's compression +You could use Prometheus to scrape those metrics, but you can also use any server agent that understands the Prometheus format. It turns out Telegraf has input plugins for Prometheus and output plugins for QuestDB, so you can use it to get the metrics from the endpoint and insert them into a QuestDB table. -QuestDB exposes metrics in Prometheus format, and Telegraf can scrape these metrics and write them back to QuestDB. - -## Solution: Telegraf as Metrics Bridge - -Use Telegraf to: -1. Scrape Prometheus metrics from QuestDB -2. Merge metrics into dense rows -3. Write back to a QuestDB table - -### Configuration - -This `telegraf.conf` scrapes QuestDB metrics and stores them in QuestDB: +This is a `telegraf.conf` configuration which works (using default ports): ```toml -# Telegraf agent configuration +# Configuration for Telegraf agent [agent] + ## Default data collection interval for all inputs interval = "5s" omit_hostname = true precision = "1ms" flush_interval = "5s" -# INPUT: Scrape QuestDB Prometheus metrics +# -- INPUT PLUGINS ------------------------------------------------------ # [[inputs.prometheus]] - urls = ["http://localhost:9003/metrics"] - url_tag = "" # Omit URL tag (not needed) - metric_version = 2 # Use v2 for single table output + ## An array of urls to scrape metrics from. 
+ urls = ["http://questdb-origin:9003/metrics"] + url_tag="" + metric_version = 2 # all entries will be on a single table ignore_timestamp = false -# AGGREGATOR: Merge metrics into single rows -[[aggregators.merge]] - drop_original = true - -# OUTPUT: Write to QuestDB via ILP over TCP -[[outputs.socket_writer]] - address = "tcp://localhost:9009" -``` - -Save this as `telegraf.conf` and start Telegraf: - -```bash -telegraf --config telegraf.conf -``` - -## How It Works - -The configuration uses three key components: - -### 1. Prometheus Input Plugin - -```toml -[[inputs.prometheus]] - urls = ["http://localhost:9003/metrics"] -``` - -Scrapes metrics from QuestDB's Prometheus endpoint. You must first enable metrics in QuestDB. - -### 2. Merge Aggregator - -```toml +# -- AGGREGATOR PLUGINS ------------------------------------------------- # +# Merge metrics into multifield metrics by series key [[aggregators.merge]] + ## If true, the original metric will be dropped by the + ## aggregator and will not get sent to the output plugins. drop_original = true -``` - -By default, Telegraf creates one sparse row per metric. The merge aggregator combines all metrics collected at the same timestamp into a single dense row, which is more efficient for storage and querying in QuestDB. -### 3. Socket Writer Output -```toml +# -- OUTPUT PLUGINS ----------------------------------------------------- # [[outputs.socket_writer]] - address = "tcp://localhost:9009" -``` - -Sends data to QuestDB via ILP over TCP for maximum throughput. - -## Enable QuestDB Metrics - -QuestDB metrics are disabled by default. 
Enable them via configuration: - -### Option 1: server.conf - -Add to `server.conf`: - -```ini -metrics.enabled=true -``` - -### Option 2: Environment Variable - -```bash -export QDB_METRICS_ENABLED=true -``` - -### Option 3: Docker - -```bash -docker run \ - -p 9000:9000 \ - -p 8812:8812 \ - -p 9009:9009 \ - -p 9003:9003 \ - -e QDB_METRICS_ENABLED=true \ - questdb/questdb:latest -``` - -After enabling, metrics are available at `http://localhost:9003/metrics`. - -## Verify Metrics Collection - -After starting Telegraf, verify data is being collected: - -```sql --- Check if table was created -SELECT * FROM tables() WHERE table_name = 'prometheus'; - --- View recent metrics -SELECT * FROM prometheus -ORDER BY timestamp DESC -LIMIT 10; - --- Count metrics collected -SELECT count(*) FROM prometheus; -``` - -## Querying Metrics - -### Available Metrics - -QuestDB exposes various metrics including: - -```sql --- See all available metrics (columns) -SELECT column_name FROM table_columns('prometheus') -WHERE column_name NOT IN ('timestamp'); -``` - -Common metrics include: -- `questdb_json_queries_total`: Number of REST API queries -- `questdb_pg_wire_queries_total`: Number of PostgreSQL wire queries -- `questdb_ilp_tcp_*`: ILP over TCP metrics (connections, messages, errors) -- `questdb_ilp_http_*`: ILP over HTTP metrics -- `questdb_memory_*`: Memory usage metrics -- `questdb_wal_*`: Write-Ahead Log metrics - -### Example Queries - -**Query rate over time:** -```questdb-sql title="Queries per second over last hour" -SELECT - timestamp, - questdb_json_queries_total + questdb_pg_wire_queries_total as total_queries -FROM prometheus -WHERE timestamp >= dateadd('h', -1, now()) -ORDER BY timestamp DESC; -``` - -**Memory usage trend:** -```questdb-sql title="Memory usage over last 24 hours" -SELECT - timestamp_floor('10m', timestamp) as time_bucket, - avg(questdb_memory_used) as avg_memory_used, - max(questdb_memory_used) as max_memory_used -FROM prometheus -WHERE timestamp >= 
dateadd('d', -1, now()) -SAMPLE BY 10m; -``` - -**ILP ingestion rate:** -```questdb-sql title="ILP messages per second" -SELECT - timestamp_floor('1m', timestamp) as minute, - max(questdb_ilp_tcp_messages_total) - - min(questdb_ilp_tcp_messages_total) as messages_per_minute -FROM prometheus -WHERE timestamp >= dateadd('h', -1, now()) -SAMPLE BY 1m; + # Write metrics to a local QuestDB instance over TCP + address = "tcp://questdb-target:9009" ``` -**Connection counts:** -```sql -SELECT - timestamp, - questdb_ilp_tcp_connections as ilp_tcp_connections, - questdb_pg_wire_connections as pg_wire_connections -FROM prometheus -WHERE timestamp >= dateadd('h', -1, now()) -ORDER BY timestamp DESC -LIMIT 100; -``` - -## Configuration Options - -### Monitoring Multiple QuestDB Instances - -To monitor multiple QuestDB instances, add separate input blocks and include instance tags: - -```toml -[[inputs.prometheus]] - urls = ["http://questdb-prod:9003/metrics"] - [inputs.prometheus.tags] - instance = "production" - -[[inputs.prometheus]] - urls = ["http://questdb-staging:9003/metrics"] - [inputs.prometheus.tags] - instance = "staging" -``` - -Query by instance: - -```sql -SELECT * FROM prometheus -WHERE instance = 'production' - AND timestamp >= dateadd('h', -1, now()); -``` - -### Adjusting Collection Interval - -Change how often metrics are collected: - -```toml -[agent] - interval = "10s" # Collect every 10 seconds instead of 5 - flush_interval = "10s" -``` - -Lower intervals provide more granular data but increase storage. Higher intervals reduce overhead. - -### Using HTTP Instead of TCP - -For more reliable delivery with acknowledgments: - -```toml -[[outputs.influxdb_v2]] - urls = ["http://localhost:9000"] - token = "" - content_encoding = "identity" -``` - -TCP is faster but doesn't confirm delivery. HTTP provides confirmation but slightly lower throughput. 
- -### Filtering Metrics - -Exclude unnecessary metrics to reduce storage: - -```toml -[[inputs.prometheus]] - urls = ["http://localhost:9003/metrics"] - metric_version = 2 - - # Only collect specific metrics - fieldpass = [ - "questdb_json_queries_total", - "questdb_pg_wire_queries_total", - "questdb_memory_*", - "questdb_ilp_*" - ] -``` - -## Grafana Dashboard Integration - -Create Grafana dashboards using the collected metrics: - -```sql --- Query rate panel -SELECT - $__timeGroup(timestamp, $__interval) as time, - avg(questdb_json_queries_total) as "REST API Queries" -FROM prometheus -WHERE $__timeFilter(timestamp) -GROUP BY time -ORDER BY time; - --- Memory usage panel -SELECT - $__timeGroup(timestamp, $__interval) as time, - avg(questdb_memory_used / 1024 / 1024) as "Memory Used (MB)" -FROM prometheus -WHERE $__timeFilter(timestamp) -GROUP BY time -ORDER BY time; -``` - -## Data Retention - -Set up automatic cleanup of old metrics: - -```sql --- Drop partitions older than 30 days -ALTER TABLE prometheus DROP PARTITION LIST '2024-01', '2024-02'; - --- Or delete old data -DELETE FROM prometheus -WHERE timestamp < dateadd('d', -30, now()); -``` - -Consider partitioning by day or week: - -```sql --- Recreate table with daily partitioning -CREATE TABLE prometheus_new ( - timestamp TIMESTAMP, - -- ... metric columns ... 
-) TIMESTAMP(timestamp) PARTITION BY DAY; -``` - -## Troubleshooting - -**No data appearing in QuestDB:** -- Verify QuestDB metrics are enabled: `curl http://localhost:9003/metrics` -- Check Telegraf logs for errors: `telegraf --config telegraf.conf --debug` -- Ensure port 9009 is accessible from Telegraf host -- Verify Telegraf has network connectivity to QuestDB - -**Table not created automatically:** -- QuestDB auto-creates tables on first ILP write -- Check for errors in QuestDB logs -- Verify ILP is not disabled in QuestDB configuration - -**Metrics are sparse (many NULL values):** -- Ensure merge aggregator is configured: `[[aggregators.merge]]` -- Set `drop_original = true` to discard sparse rows -- Use `metric_version = 2` in prometheus input - -**High cardinality warning:** -- Too many unique tag values can cause performance issues -- Remove unnecessary tags using `url_tag = ""` -- Use `omit_hostname = true` if monitoring single instance - -## Performance Considerations - -**Storage usage:** -- Each metric collection creates one row in QuestDB -- At 5-second intervals: ~17,000 rows/day, ~500K rows/month -- Storage is compressed efficiently due to time-series nature - -**Query performance:** -- Add indexes on frequently filtered columns (like `instance` tag) -- Use timestamp filters to limit query scope -- Leverage SAMPLE BY for aggregating data over time - -**Impact on monitored QuestDB:** -- Metrics endpoint is lightweight (sub-millisecond response time) -- Telegraf scraping adds minimal overhead -- Consider increasing interval to 30s+ if needed - -:::tip Alerting -Combine with monitoring tools to create alerts: -- Query rate drops to zero (instance down) -- Memory usage exceeds threshold -- ILP error rate increases -- WAL segment count grows unexpectedly -::: - -:::warning Circular Dependency -Be cautious about monitoring QuestDB with itself - if QuestDB fails, you lose monitoring data. 
Consider:
-- Monitoring multiple QuestDB instances (write metrics from instance A to instance B)
-- Setting up external monitoring as backup
-- Using persistent storage volumes to preserve data across restarts
-:::

+A few things to note:

+* I omit the hostname, so I don't end up with an extra column I don't need. If I were monitoring several QuestDB instances, it would be good to keep it.
+* I set the `url_tag` to blank for the same reason. By default the Prometheus plugin for Telegraf adds the URL as an extra column, and we don't need it.
+* I use `metric_version = 2` for the input plugin. This makes sure all the metrics land in a single table, rather than one table per metric, which I find annoying.
+* I use the aggregator so metrics get rolled up into a single row per data point (with multiple columns), rather than one row per metric. It works fine without the aggregator, but you end up with a very sparse table.
+* In my config, I use a different hostname for the QuestDB output, so we can collect metrics on a different instance. In production this is a best practice, but for development you can just use the same host you are monitoring.
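+Once Telegraf is running, you can sanity-check the pipeline from the target instance. With `metric_version = 2` and the socket writer, rows land in a table named after the measurement, which is `prometheus` by default:

```sql
-- Most recent scraped metrics (table name is Telegraf's default measurement name)
SELECT * FROM prometheus
ORDER BY timestamp DESC
LIMIT 10;
```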
:::info Related Documentation -- [QuestDB metrics reference](/docs/operations/logging-metrics/#metrics) -- [Telegraf prometheus input](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/prometheus) -- [Telegraf merge aggregator](https://github.com/influxdata/telegraf/tree/master/plugins/aggregators/merge) -- [ILP reference](/docs/reference/api/ilp/overview/) +- [QuestDB metrics](/docs/operations/logging-metrics/) +- [ILP ingestion](/docs/ingestion-overview/) +- [Telegraf documentation](https://docs.influxdata.com/telegraf/) ::: diff --git a/documentation/playbook/operations/query-times-histogram.md b/documentation/playbook/operations/query-times-histogram.md index e188bcec9..963715c28 100644 --- a/documentation/playbook/operations/query-times-histogram.md +++ b/documentation/playbook/operations/query-times-histogram.md @@ -1,411 +1,113 @@ --- title: Query Performance Histogram sidebar_label: Query times histogram -description: Analyze query performance distributions using query logs and execution metrics for optimization +description: Create histogram of query execution times using _query_trace table --- -Analyze the distribution of query execution times to identify performance patterns, slow queries, and optimization opportunities. Use query logs and metrics to create histograms showing how query latency varies across your workload. +Create a histogram of query execution times using the `_query_trace` system table. -## Problem: Understanding Query Performance +## Solution: Percentile-Based Histogram -You need to answer: -- What's the typical query latency? -- How many queries are slow (> 1 second)? -- Are there performance regressions over time? -- Which query patterns are slowest? +We can create a subquery that first calculates the percentiles for each bucket, in this case at 10% intervals. 
Then, on a second query, we do a `UNION` of 10 subqueries, each of which does a `CROSS JOIN` against the calculated percentiles and counts how many queries fall below that bucket's threshold.
-Single-point metrics (average, P99) don't show the full picture. A histogram reveals the distribution.
-
-## Solution: Query Log Analysis
-
-QuestDB logs query execution times. Parse logs to create performance histograms.
-
-### Enable Query Logging
-
-**server.conf:**
-```properties
-# Log all queries (development/staging)
-http.query.log.enabled=true
-
-# Or log only slow queries (production)
-http.slow.query.log.enabled=true
-http.slow.query.threshold=1000 # Log queries > 1 second
-```
-
-**Log format:**
-```
-2025-01-15T10:30:45.123Z I http-server [1234] `SELECT * FROM trades WHERE symbol = 'BTC-USDT'` [exec=15ms, compiler=2ms, rows=1000]
-```
-
-## Parse Logs into Table
-
-### Create Query Log Table
+Note that in this case the histogram is cumulative: each bucket also includes the results from all the smaller buckets. If we prefer a non-cumulative histogram, the condition changes from less-than to `BETWEEN`.
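For example, a single non-cumulative bucket would count only the queries between two adjacent thresholds (a sketch reusing the `quantiles` CTE defined in the full query):

```sql
-- Count only queries between the previous and current percentile
-- thresholds, instead of everything below the upper one
SELECT '20' AS bucket, p20 AS micros_threshold, count(*) AS frequency
FROM _query_trace CROSS JOIN quantiles
WHERE execution_micros BETWEEN p10 AND p20
```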
```sql -CREATE TABLE query_log ( - timestamp TIMESTAMP, - query_text STRING, - exec_time_ms INT, - compiler_time_ms INT, - rows_returned LONG -) TIMESTAMP(timestamp) PARTITION BY DAY; -``` - -### Parse and Insert - -**Python script:** -```python -import re -import psycopg2 -from datetime import datetime - -# Regex to parse QuestDB log lines -log_pattern = r'(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z).*`([^`]+)`.*\[exec=(\d+)ms, compiler=(\d+)ms, rows=(\d+)\]' - -conn = psycopg2.connect(host="localhost", port=8812, user="admin", password="quest", database="questdb") -cursor = conn.cursor() - -with open('/var/log/questdb/query.log', 'r') as f: - for line in f: - match = re.search(log_pattern, line) - if match: - timestamp_str, query, exec_ms, compiler_ms, rows = match.groups() - timestamp = datetime.fromisoformat(timestamp_str.replace('Z', '+00:00')) - - cursor.execute(""" - INSERT INTO query_log (timestamp, query_text, exec_time_ms, compiler_time_ms, rows_returned) - VALUES (%s, %s, %s, %s, %s) - """, (timestamp, query, int(exec_ms), int(compiler_ms), int(rows))) - -conn.commit() -conn.close() -``` - -## Create Performance Histogram - -```questdb-sql demo title="Query execution time histogram" -SELECT - (cast(exec_time_ms / 100 AS INT) * 100) as latency_bucket_ms, - ((cast(exec_time_ms / 100 AS INT) + 1) * 100) as bucket_end_ms, - count(*) as query_count, - (count(*) * 100.0 / sum(count(*)) OVER ()) as percentage -FROM query_log -WHERE timestamp >= dateadd('d', -1, now()) -GROUP BY latency_bucket_ms, bucket_end_ms -ORDER BY latency_bucket_ms; -``` - -**Results:** - -| latency_bucket_ms | bucket_end_ms | query_count | percentage | -|-------------------|---------------|-------------|------------| -| 0 | 100 | 45,678 | 91.4% | -| 100 | 200 | 3,456 | 6.9% | -| 200 | 300 | 567 | 1.1% | -| 300 | 400 | 234 | 0.5% | -| 400 | 500 | 45 | 0.09% | -| 500+ | | 20 | 0.04% | - -**Interpretation:** -- 91.4% of queries complete in < 100ms (fast) -- 1.7% take > 200ms (investigate 
these) -- 0.13% take > 400ms (definitely need optimization) - -## Time-Series Performance Trends - -Track how query performance changes over time: - -```questdb-sql demo title="Hourly query performance evolution" -SELECT - timestamp_floor('h', timestamp) as hour, - (cast(exec_time_ms / 50 AS INT) * 50) as latency_bucket, - count(*) as count -FROM query_log -WHERE timestamp >= dateadd('d', -7, now()) -GROUP BY hour, latency_bucket -ORDER BY hour DESC, latency_bucket; -``` - -Visualize in Grafana heatmap to see performance degradation over time. - -## Percentile Analysis - -Calculate latency percentiles: - -```questdb-sql demo title="Query latency percentiles" -SELECT - percentile(exec_time_ms, 50) as p50_ms, - percentile(exec_time_ms, 90) as p90_ms, - percentile(exec_time_ms, 95) as p95_ms, - percentile(exec_time_ms, 99) as p99_ms, - percentile(exec_time_ms, 99.9) as p999_ms, - max(exec_time_ms) as max_ms -FROM query_log -WHERE timestamp >= dateadd('d', -1, now()); -``` - -**Results:** - -| p50_ms | p90_ms | p95_ms | p99_ms | p999_ms | max_ms | -|--------|--------|--------|--------|---------|--------| -| 12 | 45 | 89 | 234 | 1,234 | 15,678 | - -## Identify Slow Query Patterns - -Find which query patterns are slowest: - -```questdb-sql demo title="Slowest query patterns" -WITH normalized AS ( +WITH quantiles AS ( SELECT - exec_time_ms, - -- Normalize query (remove values, keep structure) - regexp_replace(query_text, '\d+', 'N', 'g') as query_pattern, - regexp_replace( - regexp_replace(query_text, '''[^'']*''', '''S''', 'g'), - '\d+', 'N', 'g' - ) as query_normalized - FROM query_log - WHERE timestamp >= dateadd('d', -1, now()) -) -SELECT - query_pattern, - count(*) as execution_count, - avg(exec_time_ms) as avg_ms, - percentile(exec_time_ms, 95) as p95_ms, - max(exec_time_ms) as max_ms -FROM normalized -GROUP BY query_pattern -HAVING count(*) >= 10 -- At least 10 executions -ORDER BY avg_ms DESC -LIMIT 20; -``` - -**Results:** - -| query_pattern | execution_count | 
avg_ms | p95_ms | max_ms | -|---------------|-----------------|--------|--------|--------| -| SELECT * FROM trades WHERE timestamp BETWEEN ... | 1,234 | 456 | 890 | 2,345 | -| SELECT symbol, sum(amount) FROM trades GROUP BY ... | 567 | 234 | 456 | 1,234 | - -## Slowest Individual Queries - -Find actual slow query instances: - -```questdb-sql demo title="Top 20 slowest queries" -SELECT - timestamp, - exec_time_ms, - rows_returned, - substr(query_text, 1, 100) as query_preview -FROM query_log -WHERE timestamp >= dateadd('d', -1, now()) -ORDER BY exec_time_ms DESC -LIMIT 20; -``` - -## Query Performance by Table - -Analyze which tables have slow queries: - -```questdb-sql demo title="Performance by table accessed" -SELECT - CASE - WHEN query_text LIKE '%FROM trades%' THEN 'trades' - WHEN query_text LIKE '%FROM sensor_readings%' THEN 'sensor_readings' - WHEN query_text LIKE '%FROM api_logs%' THEN 'api_logs' - ELSE 'other' - END as table_name, - count(*) as query_count, - avg(exec_time_ms) as avg_exec_ms, - percentile(exec_time_ms, 95) as p95_exec_ms -FROM query_log -WHERE timestamp >= dateadd('d', -1, now()) -GROUP BY table_name -ORDER BY avg_exec_ms DESC; -``` - -## Grafana Dashboard - -### Query Latency Heatmap - -```questdb-sql demo title="Heatmap data for Grafana" -SELECT - timestamp_floor('5m', timestamp) as time, - (cast(exec_time_ms / 50 AS INT) * 50) as latency_bucket, - count(*) as count -FROM query_log -WHERE $__timeFilter(timestamp) -GROUP BY time, latency_bucket -ORDER BY time, latency_bucket; -``` - -**Grafana config:** -- Visualization: Heatmap -- X-axis: time -- Y-axis: latency_bucket -- Cell value: count - -### Query Rate and Latency - -```questdb-sql demo title="Query rate and P95 latency" -SELECT - timestamp_floor('1m', timestamp) as time, - count(*) as "Query Rate (QPM)", - percentile(exec_time_ms, 95) as "P95 Latency (ms)" -FROM query_log -WHERE $__timeFilter(timestamp) -SAMPLE BY 1m; -``` - -## Using Prometheus Metrics (Alternative) - -QuestDB 
exposes Prometheus metrics at `http://localhost:9003/metrics`: - -``` -# HELP questdb_json_queries_total -questdb_json_queries_total 123456 - -# HELP questdb_json_queries_completed -questdb_json_queries_completed 123450 - -# HELP questdb_json_queries_failed -questdb_json_queries_failed 6 -``` - -Scrape into Prometheus, then query: - -```promql -# Query rate -rate(questdb_json_queries_completed[5m]) - -# Error rate -rate(questdb_json_queries_failed[5m]) / rate(questdb_json_queries_total[5m]) -``` - -## Custom Query Instrumentation - -Add custom timing in application code: + approx_percentile(execution_micros, 0.10, 5) AS p10, + approx_percentile(execution_micros, 0.20, 5) AS p20, + approx_percentile(execution_micros, 0.30, 5) AS p30, + approx_percentile(execution_micros, 0.40, 5) AS p40, + approx_percentile(execution_micros, 0.50, 5) AS p50, + approx_percentile(execution_micros, 0.60, 5) AS p60, + approx_percentile(execution_micros, 0.70, 5) AS p70, + approx_percentile(execution_micros, 0.80, 5) AS p80, + approx_percentile(execution_micros, 0.90, 5) AS p90, + approx_percentile(execution_micros, 1.0, 5) AS p100 + FROM _query_trace +), cumulative_hist AS ( +SELECT '10' AS bucket, p10 as micros_threshold, count(*) AS frequency +FROM _query_trace CROSS JOIN quantiles +WHERE execution_micros < p10 -**Python example:** -```python -import time -import psycopg2 - -conn = psycopg2.connect(...) 
-cursor = conn.cursor() +UNION ALL -start = time.time() -cursor.execute("SELECT * FROM trades WHERE symbol = %s", ("BTC-USDT",)) -results = cursor.fetchall() -elapsed_ms = (time.time() - start) * 1000 +SELECT '20', p20 as micros_threshold, count(*) +FROM _query_trace CROSS JOIN quantiles +WHERE execution_micros < p20 -# Log to monitoring system -logger.info(f"Query completed in {elapsed_ms:.2f}ms, returned {len(results)} rows") +UNION ALL -# Or insert into query_log table -cursor.execute(""" - INSERT INTO query_log (timestamp, query_text, exec_time_ms, rows_returned) - VALUES (now(), %s, %s, %s) -""", ("SELECT * FROM trades WHERE symbol = ?", int(elapsed_ms), len(results))) +SELECT '30', p30 as micros_threshold, count(*) +FROM _query_trace CROSS JOIN quantiles +WHERE execution_micros < p30 -conn.close() -``` +UNION ALL -## Query Performance Alerts +SELECT '40', p40 as micros_threshold, count(*) +FROM _query_trace CROSS JOIN quantiles +WHERE execution_micros < p40 -Set up alerts for slow queries: +UNION ALL -```sql --- Queries taking > 1 second in last 5 minutes -SELECT count(*) as slow_query_count -FROM query_log -WHERE timestamp >= dateadd('m', -5, now()) - AND exec_time_ms > 1000; -``` +SELECT '50', p50 as micros_threshold, count(*) +FROM _query_trace CROSS JOIN quantiles +WHERE execution_micros < p50 -**Alert if** `slow_query_count > 10`. +UNION ALL -## Optimization Workflow +SELECT '60', p60 as micros_threshold, count(*) +FROM _query_trace CROSS JOIN quantiles +WHERE execution_micros < p60 -1. **Identify slow patterns** (from histogram) -2. **Get example queries** (slow query log) -3. **Analyze query plan** (EXPLAIN) -4. **Add indexes** (on filtered/joined columns) -5. 
**Verify improvement** (re-run histogram) +UNION ALL -**Before optimization:** -``` -P95: 890ms -P99: 2,345ms -``` +SELECT '70', p70 as micros_threshold, count(*) +FROM _query_trace CROSS JOIN quantiles +WHERE execution_micros < p70 -**After adding index:** -``` -P95: 45ms (-94.9%) -P99: 123ms (-94.8%) -``` +UNION ALL -## Comparing Time Periods +SELECT '80', p80 as micros_threshold, count(*) +FROM _query_trace CROSS JOIN quantiles +WHERE execution_micros < p80 -Compare query performance week-over-week: +UNION ALL -```questdb-sql demo title="Week-over-week latency comparison" -WITH this_week AS ( - SELECT - avg(exec_time_ms) as avg_latency, - percentile(exec_time_ms, 95) as p95_latency - FROM query_log - WHERE timestamp >= dateadd('d', -7, now()) -), -last_week AS ( - SELECT - avg(exec_time_ms) as avg_latency, - percentile(exec_time_ms, 95) as p95_latency - FROM query_log - WHERE timestamp >= dateadd('d', -14, now()) - AND timestamp < dateadd('d', -7, now()) -) -SELECT - 'This Week' as period, - this_week.avg_latency, - this_week.p95_latency, - (this_week.avg_latency - last_week.avg_latency) as avg_change, - ((this_week.avg_latency - last_week.avg_latency) / last_week.avg_latency * 100) as avg_pct_change -FROM this_week, last_week +SELECT '90', p90 as micros_threshold, count(*) +FROM _query_trace CROSS JOIN quantiles +WHERE execution_micros < p90 UNION ALL -SELECT - 'Last Week', - last_week.avg_latency, - last_week.p95_latency, - 0, - 0 -FROM last_week; +SELECT '100', p100 as micros_threshold, count(*) +FROM _query_trace CROSS JOIN quantiles + ) + SELECT * FROM cumulative_hist; ``` -**Alerts:** -- If `avg_pct_change > 20%`: Performance regression -- If `avg_pct_change < -20%`: Performance improvement +**Output:** -:::tip Monitoring Best Practices -1. **Log selectively in production**: Use slow query logging only (threshold 500-1000ms) -2. **Sample high-QPS endpoints**: Log 1% of fast queries to reduce overhead -3. **Rotate logs**: Prevent disk space issues -4. 
**Index query_log table**: For fast analysis queries -5. **Set up alerts**: Automated detection of performance degradation -::: +```csv +"bucket","micros_threshold","frequency" +"10",215.0,26 +"20",348.0,53 +"30",591.0,80 +"40",819.0,106 +"50",1088.0,133 +"60",1527.0,160 +"70",2293.0,186 +"80",4788.0,213 +"90",23016.0,240 +"100",1078759.0,267 +``` -:::warning Log Volume -Full query logging can generate significant data: -- 1,000 QPS × 86,400 seconds/day = 86.4M log entries/day -- Use sampling or slow query logging in production -- Rotate and archive old logs regularly +:::note Enable Query Tracing +Query tracing needs to be enabled for the `_query_trace` table to be populated. See the [configuration documentation](/docs/configuration/) for details. ::: :::info Related Documentation -- [HTTP slow query logging](/docs/configuration/) -- [Prometheus metrics](/docs/operations/logging-metrics/) -- [percentile() function](/docs/reference/function/aggregation/#percentile) -- [Grafana integration](/docs/third-party-tools/grafana/) +- [_query_trace system table](/docs/reference/system-tables/#_query_trace) +- [approx_percentile() function](/docs/reference/function/aggregation/#approx_percentile) ::: diff --git a/documentation/playbook/operations/tls-pgbouncer.md b/documentation/playbook/operations/tls-pgbouncer.md index 3ad3199f8..0840f4274 100644 --- a/documentation/playbook/operations/tls-pgbouncer.md +++ b/documentation/playbook/operations/tls-pgbouncer.md @@ -1,489 +1,55 @@ --- -title: TLS for PostgreSQL Wire Protocol with PgBouncer +title: TLS with PgBouncer for QuestDB sidebar_label: TLS with PgBouncer -description: Add TLS encryption to QuestDB PostgreSQL wire protocol connections using PgBouncer as a TLS-terminating proxy +description: Configure PgBouncer to provide TLS termination for QuestDB PostgreSQL connections --- -Add TLS/SSL encryption to PostgreSQL wire protocol connections to QuestDB using PgBouncer as a TLS-terminating proxy. 
QuestDB's PostgreSQL interface doesn't natively support TLS, but PgBouncer provides this capability while also offering connection pooling benefits. +Configure PgBouncer to provide TLS termination for QuestDB Open Source PostgreSQL wire protocol connections. -## Problem: No Native TLS for PostgreSQL Wire Protocol +## Solution: TLS Termination at PgBouncer -QuestDB supports PostgreSQL wire protocol on port 8812, but connections are unencrypted: +QuestDB Open Source does not implement TLS on the PostgreSQL wire protocol, so TLS termination needs to be done at the PgBouncer level. -```bash -# Unencrypted connection (passwords and data visible) -psql -h questdb.example.com -p 8812 -U admin -d questdb -``` - -For production deployments, especially over public networks, you need TLS encryption. - -## Solution: PgBouncer as TLS Proxy - -Use PgBouncer to: -1. Accept TLS-encrypted client connections -2. Decrypt and forward to QuestDB's unencrypted PostgreSQL port -3. Provide connection pooling as a bonus - -``` -Client (TLS) → PgBouncer (TLS termination) → QuestDB (unencrypted localhost) -``` - -## Architecture - -**Network flow:** -- Clients connect to PgBouncer on port 5432 with TLS -- PgBouncer terminates TLS and connects to QuestDB on localhost:8812 -- PgBouncer and QuestDB communicate over localhost (no network exposure) +Configure PgBouncer with: -**Security benefits:** -- Data encrypted in transit from client to server -- Credentials protected during authentication -- No changes required to QuestDB configuration -- Works with any PostgreSQL-compatible client - -## Installation - -### Docker Compose Setup - -**docker-compose.yml:** -```yaml -version: '3.8' - -services: - questdb: - image: questdb/questdb:latest - container_name: questdb - ports: - - "9000:9000" # Web console - - "9009:9009" # ILP - volumes: - - ./questdb-data:/var/lib/questdb - environment: - - QDB_PG_USER=admin - - QDB_PG_PASSWORD=quest - restart: unless-stopped - - pgbouncer: - image: 
edoburu/pgbouncer:latest - container_name: pgbouncer - ports: - - "5432:5432" # PostgreSQL with TLS - volumes: - - ./pgbouncer/pgbouncer.ini:/etc/pgbouncer/pgbouncer.ini:ro - - ./pgbouncer/userlist.txt:/etc/pgbouncer/userlist.txt:ro - - ./certs/server.crt:/etc/pgbouncer/server.crt:ro - - ./certs/server.key:/etc/pgbouncer/server.key:ro - depends_on: - - questdb - restart: unless-stopped -``` - -### PgBouncer Configuration - -**pgbouncer/pgbouncer.ini:** ```ini [databases] -questdb = host=questdb port=8812 dbname=questdb +questdb = host=127.0.0.1 port=8812 dbname=questdb user=admin password=quest [pgbouncer] -listen_addr = 0.0.0.0 +listen_addr = 127.0.0.1 listen_port = 5432 -auth_type = md5 -auth_file = /etc/pgbouncer/userlist.txt -pool_mode = session -max_client_conn = 1000 -default_pool_size = 25 +auth_type = trust +auth_file = /path/to/pgbouncer/userlist.txt -# TLS Configuration client_tls_sslmode = require -client_tls_cert_file = /etc/pgbouncer/server.crt -client_tls_key_file = /etc/pgbouncer/server.key -client_tls_protocols = secure - -# Optional: Client certificate authentication -# client_tls_ca_file = /etc/pgbouncer/ca.crt - -# Logging -logfile = /var/log/pgbouncer/pgbouncer.log -pidfile = /var/run/pgbouncer/pgbouncer.pid -admin_users = admin -``` - -**Key parameters:** -- `client_tls_sslmode = require`: Force TLS for all client connections -- `client_tls_cert_file`: Server certificate (signed by CA or self-signed) -- `client_tls_key_file`: Server private key -- `client_tls_protocols = secure`: Only allow TLS 1.2+ - -### User Authentication File - -**pgbouncer/userlist.txt:** -``` -"admin" "md5" -"readonly" "md5" -``` - -Generate MD5 hashes: -```bash -# Format: md5 + md5(password + username) -echo -n "questadmin" | md5sum | awk '{print "md5"$1}' -# Output: md56c4e8a7e9e3b6f8a9d5c4e8a7e9e3b6f -``` - -Then add to userlist.txt: -``` -"admin" "md56c4e8a7e9e3b6f8a9d5c4e8a7e9e3b6f" -``` - -## Generating TLS Certificates - -### Self-Signed Certificate (Development) 
- -```bash -# Create certificate directory -mkdir -p certs - -# Generate private key -openssl genrsa -out certs/server.key 2048 - -# Generate self-signed certificate (valid for 365 days) -openssl req -new -x509 -key certs/server.key -out certs/server.crt -days 365 \ - -subj "/C=US/ST=State/L=City/O=Organization/CN=questdb.example.com" - -# Set permissions -chmod 600 certs/server.key -chmod 644 certs/server.crt -``` - -### CA-Signed Certificate (Production) - -```bash -# Generate private key -openssl genrsa -out certs/server.key 2048 - -# Generate certificate signing request (CSR) -openssl req -new -key certs/server.key -out certs/server.csr \ - -subj "/C=US/ST=State/L=City/O=Organization/CN=questdb.example.com" - -# Submit CSR to your CA (Let's Encrypt, DigiCert, etc.) -# Receive server.crt from CA - -# Optionally concatenate intermediate certificates -cat server.crt intermediate.crt > certs/server.crt -``` - -### Let's Encrypt with Certbot - -```bash -# Install certbot -sudo apt-get install certbot - -# Obtain certificate (requires port 80 temporarily) -sudo certbot certonly --standalone -d questdb.example.com - -# Certificates will be in /etc/letsencrypt/live/questdb.example.com/ -# Copy to pgbouncer directory -sudo cp /etc/letsencrypt/live/questdb.example.com/fullchain.pem certs/server.crt -sudo cp /etc/letsencrypt/live/questdb.example.com/privkey.pem certs/server.key -sudo chown $USER:$USER certs/* -chmod 600 certs/server.key -``` - -## Starting the Stack - -```bash -# Start QuestDB and PgBouncer -docker-compose up -d - -# Check logs -docker-compose logs pgbouncer -docker-compose logs questdb - -# Verify PgBouncer is listening -netstat -tlnp | grep 5432 -``` - -## Connecting with TLS - -### psql - -```bash -# Require TLS -psql "postgresql://admin:quest@questdb.example.com:5432/questdb?sslmode=require" - -# Verify certificate (production) -psql "postgresql://admin:quest@questdb.example.com:5432/questdb?sslmode=verify-full&sslrootcert=/path/to/ca.crt" - -# 
Self-signed certificate (development, skips verification) -psql "postgresql://admin:quest@localhost:5432/questdb?sslmode=require" -``` - -### Python (psycopg2) - -```python -import psycopg2 - -conn = psycopg2.connect( - host="questdb.example.com", - port=5432, - database="questdb", - user="admin", - password="quest", - sslmode="require" -) - -cursor = conn.cursor() -cursor.execute("SELECT * FROM trades LIMIT 5") -print(cursor.fetchall()) -conn.close() -``` - -### Node.js (pg) - -```javascript -const { Client } = require('pg'); - -const client = new Client({ - host: 'questdb.example.com', - port: 5432, - database: 'questdb', - user: 'admin', - password: 'quest', - ssl: { - rejectUnauthorized: true, // Verify certificate - ca: fs.readFileSync('/path/to/ca.crt').toString(), - }, -}); +client_tls_key_file = /path/to/pgbouncer/pgbouncer.key +client_tls_cert_file = /path/to/pgbouncer/pgbouncer.crt +client_tls_ca_file = /etc/ssl/cert.pem -await client.connect(); -const res = await client.query('SELECT * FROM trades LIMIT 5'); -console.log(res.rows); -await client.end(); +server_tls_sslmode = disable +logfile = /path/to/pgbouncer/pgbouncer.log +pidfile = /path/to/pgbouncer/pgbouncer.pid ``` -### Grafana +The key setting is `server_tls_sslmode = disable`. This makes psql connect using TLS to PgBouncer, but PgBouncer will connect without TLS to your QuestDB instance. 
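Note that it is `client_tls_sslmode = require` that forces clients such as psql to speak TLS to PgBouncer; `server_tls_sslmode = disable` only governs the PgBouncer-to-QuestDB hop. Also, the `pgbouncer.key` and `pgbouncer.crt` files referenced in the config must exist before PgBouncer starts. For local testing you can generate a self-signed pair like this (file names and subject are illustrative; use CA-signed certificates in production):

```shell
# Create a self-signed certificate and key for PgBouncer (development only)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout pgbouncer.key -out pgbouncer.crt \
  -days 365 -subj "/CN=localhost"

# Restrict key permissions
chmod 600 pgbouncer.key
```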
-**PostgreSQL datasource configuration:** -``` -Host: questdb.example.com:5432 -Database: questdb -User: readonly -Password: -TLS/SSL Mode: require -TLS/SSL Method: File system path -Server Certificate: /path/to/ca.crt (if verifying) -``` - -## Connection Pooling Benefits - -PgBouncer provides connection pooling in addition to TLS: - -**Benefits:** -- Reduces connection overhead (connection setup is expensive) -- Limits concurrent connections to QuestDB -- Handles client connection bursts -- Improves query throughput - -**Pool modes:** -- `session`: Connection reused after client disconnects (recommended for QuestDB) -- `transaction`: Connection returned after each transaction -- `statement`: Connection returned after each statement - -**Configuration:** -```ini -pool_mode = session -default_pool_size = 25 # Connections per database per user -max_client_conn = 1000 # Total client connections -reserve_pool_size = 5 # Emergency connections -reserve_pool_timeout = 3 # Seconds to wait for connection -``` - -## Monitoring PgBouncer - -### Admin Console - -```bash -# Connect to PgBouncer admin console -psql -h localhost -p 5432 -U admin pgbouncer - -# Show pool status -SHOW POOLS; - -# Show client connections -SHOW CLIENTS; - -# Show server connections (to QuestDB) -SHOW SERVERS; - -# Show configuration -SHOW CONFIG; - -# Show statistics -SHOW STATS; -``` - -### Key Metrics - -```sql --- Active connections by pool -SHOW POOLS; -``` - -**Output:** -| database | user | cl_active | cl_waiting | sv_active | sv_idle | sv_used | -|----------|------|-----------|------------|-----------|---------|---------| -| questdb | admin | 15 | 0 | 20 | 5 | 350 | - -- `cl_active`: Active client connections -- `cl_waiting`: Clients waiting for a server connection -- `sv_active`: Server connections in use -- `sv_idle`: Idle server connections -- `sv_used`: Server connections used since pool started - -## Security Hardening - -### Restrict Client Certificate Authorities - -**pgbouncer.ini:** 
-```ini -client_tls_ca_file = /etc/pgbouncer/ca.crt -client_tls_sslmode = verify-full -``` - -This requires clients to present certificates signed by your CA. - -### Disable Weak Ciphers - -**pgbouncer.ini:** -```ini -client_tls_ciphers = HIGH:!aNULL:!MD5:!3DES -client_tls_protocols = secure # TLS 1.2 and 1.3 only -``` - -### Firewall Rules - -```bash -# Allow only PgBouncer port from external -sudo ufw allow 5432/tcp - -# Block direct QuestDB PostgreSQL port from external -sudo ufw deny 8812/tcp - -# QuestDB should only listen on localhost -# In server.conf: -# pg.net.bind.to=127.0.0.1 -``` - -### Authentication - -Use strong passwords in userlist.txt: +Connect with: ```bash -# Generate strong password hash -python3 -c "import hashlib; print('md5' + hashlib.md5(b'admin').hexdigest())" +psql "host=127.0.0.1 port=5432 dbname=questdb user=admin sslmode=require" ``` -## Troubleshooting - -### Connection Refused - -**Symptom:** `psql: error: connection to server failed: Connection refused` - -**Checks:** -1. Verify PgBouncer is running: `docker ps | grep pgbouncer` -2. Check port binding: `netstat -tlnp | grep 5432` -3. Check firewall: `sudo ufw status` -4. Review PgBouncer logs: `docker logs pgbouncer` - -### TLS Certificate Errors - -**Symptom:** `SSL error: certificate verify failed` - -**Solution for self-signed certs:** -```bash -psql "postgresql://admin:quest@localhost:5432/questdb?sslmode=require" -# Note: "require" doesn't verify certificate, only encrypts -``` - -**Solution for production:** -```bash -# Verify certificate chain is complete -openssl s_client -connect questdb.example.com:5432 -showcerts -``` - -### Authentication Failed - -**Symptom:** `password authentication failed` - -**Checks:** -1. Verify userlist.txt hash is correct -2. Ensure auth_type matches (md5 vs scram-sha-256) -3. Check QuestDB credentials in pgbouncer.ini [databases] section -4. 
Review PgBouncer auth logs - -### Performance Issues - -**Check connection pool exhaustion:** -```sql -SHOW POOLS; --- If cl_waiting > 0, clients are waiting for connections -``` - -**Solution:** -```ini -default_pool_size = 50 # Increase pool size -max_client_conn = 2000 # Increase if needed -``` - -## Alternative: Nginx Stream Proxy - -For simpler TLS termination without connection pooling: - -**nginx.conf:** -```nginx -stream { - upstream questdb { - server localhost:8812; - } - - server { - listen 5432 ssl; - proxy_pass questdb; - - ssl_certificate /etc/nginx/certs/server.crt; - ssl_certificate_key /etc/nginx/certs/server.key; - ssl_protocols TLSv1.2 TLSv1.3; - ssl_ciphers HIGH:!aNULL:!MD5; - } -} -``` - -**Pros:** Simpler configuration, no authentication handling -**Cons:** No connection pooling, no PostgreSQL-specific features - -:::tip Production Deployment -For production deployments with client applications: -1. Use CA-signed certificates (Let's Encrypt is free) -2. Set `client_tls_sslmode = require` minimum, `verify-full` for maximum security -3. Enable connection pooling to handle traffic bursts -4. Monitor PgBouncer pools regularly -5. Restrict QuestDB PostgreSQL port to localhost only +:::warning Unencrypted Traffic +Traffic will be unencrypted between PgBouncer and QuestDB. This setup is only suitable when both services run on the same host or within a trusted network. ::: -:::warning Certificate Renewal -Let's Encrypt certificates expire after 90 days. Set up automatic renewal: - -```bash -# Add to crontab -0 0 1 * * certbot renew && docker-compose restart pgbouncer -``` - -Or use a certbot hook to reload PgBouncer after renewal. +:::note QuestDB Enterprise +For QuestDB Enterprise, there is native TLS support, so you can connect directly with TLS or use PgBouncer with full TLS end-to-end encryption. 
:::

:::info Related Documentation

- [PostgreSQL wire protocol](/docs/reference/api/postgres/)
- [QuestDB security](/docs/guides/architecture/security/)
- [PgBouncer documentation](https://www.pgbouncer.org/config.html)
-- [Docker deployment](/docs/deployment/docker/)

:::
diff --git a/documentation/playbook/programmatic/cpp/missing-columns.md b/documentation/playbook/programmatic/cpp/missing-columns.md
index d4f36bad1..9340c6643 100644
--- a/documentation/playbook/programmatic/cpp/missing-columns.md
+++ b/documentation/playbook/programmatic/cpp/missing-columns.md
@@ -4,35 +4,51 @@ sidebar_label: Missing columns
description: Send rows with optional columns using the QuestDB C++ client by conditionally calling column methods
---

-Handle rows with missing or optional columns when using the QuestDB C++ client. Unlike Python's dictionary-based approach where you can simply omit keys, the C++ client requires explicit method calls for each column. This guide shows how to conditionally include columns based on data availability.
+Send rows with missing or optional columns to QuestDB using the C++ client.

-## Problem: Ragged Rows with Optional Fields
+## Problem

-You have data where some columns may be missing for certain rows. In Python, you can use dictionaries with `None` values or omit keys entirely:
+In Python, you can handle missing columns easily with dictionaries:

```python
-# Python - easy to handle missing data
-{"price": 10.0, "volume": 100}  # Both columns
-{"price": 10.0, "volume": None}  # Volume missing
-{"price": 10.0}  # Volume omitted (equivalent to None)
+{"price1": 10.0, "price2": 10.1}
```

-In C++, the buffer-based API requires explicit method calls:
+And if `price2` is not available:
+
+```python
+{"price1": 10.0, "price2": None}
+```
+
+Which is equivalent to:
+
+```python
+{"price1": 10.0}
+```
+
+You can pass the dict as the `columns` argument to `sender.row` and it transparently sends the rows, with or without missing columns, to the server.
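The equivalence above is easy to verify mechanically: stripping the `None` entries from a dict is a one-line comprehension (plain Python, independent of the QuestDB client):

```python
row = {"price1": 10.0, "price2": None}

# Keep only the keys that actually carry a value
columns = {k: v for k, v in row.items() if v is not None}
print(columns)  # {'price1': 10.0}
```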
+In C++, the buffer API requires explicit method calls:

```cpp
buffer
    .table("trades")
    .symbol("symbol", "ETH-USD")
+    .symbol("side", "sell")
    .column("price", 2615.54)
-    .column("volume", 0.00044)  // What if volume is missing?
-    .at(timestamp);
+    .column("amount", 0.00044)
+    .at(questdb::ingress::timestamp_nanos::now());
+
+sender.flush(buffer);
```

-## Solution: Conditional Column Calls
+How do you handle "ragged" rows with missing columns in C++?

-Use `std::optional` (C++17) or nullable types, then conditionally call `.column()` only when data is present.
+## Solution

-### Complete Example
+You need to call `at` at the end of each row so the data gets queued to be sent, but you can call `symbol` and `column` as many times as needed for each row, and you can do so conditionally.
+
+The example below builds a vector with three rows, one of which is missing the `price` column. It then iterates over the vector and checks whether the optional `price` value is present; if it is not, it skips the `column` call for that row.
```cpp #include @@ -53,35 +69,31 @@ int main() auto duration = now.time_since_epoch(); auto nanos = std::chrono::duration_cast(duration).count(); - // Define structure with optional price - struct Trade { + struct Row { std::string symbol; std::string side; - std::optional price; // May be missing + std::optional price; double amount; }; - // Sample data - some trades missing price - std::vector trades = { + std::vector rows = { {"ETH-USD", "sell", 2615.54, 0.00044}, {"BTC-USD", "sell", 39269.98, 0.001}, - {"SOL-USD", "sell", std::nullopt, 5.5} // Missing price + {"SOL-USD", "sell", std::nullopt, 5.5} // Missing price }; questdb::ingress::line_sender_buffer buffer; - // Iterate and conditionally add columns - for (const auto& trade : trades) { + for (const auto& row : rows) { buffer.table("trades") - .symbol("symbol", trade.symbol) - .symbol("side", trade.side); + .symbol("symbol", row.symbol) + .symbol("side", row.side); - // Only add price column if value exists - if (trade.price.has_value()) { - buffer.column("price", trade.price.value()); + if (row.price.has_value()) { + buffer.column("price", row.price.value()); } - buffer.column("amount", trade.amount) + buffer.column("amount", row.amount) .at(questdb::ingress::timestamp_nanos(nanos)); } @@ -99,297 +111,7 @@ int main() } ``` -### How It Works - -1. **`std::optional`**: Represents a value that may or may not be present - - `std::nullopt`: Indicates missing value - - `.has_value()`: Checks if value is present - - `.value()`: Retrieves the value (only call if `.has_value()` is true) - -2. **Conditional column call**: Skip `.column()` when value is missing - ```cpp - if (trade.price.has_value()) { - buffer.column("price", trade.price.value()); - } - ``` - -3. 
**Buffer accumulation**: Each call to `.table()...at()` adds one row to the buffer - - The buffer accumulates all rows - - Call `.flush()` once to send all rows together - -## Compilation - -```bash -# Basic compilation with C++17 -g++ -std=c++17 -o trades trades.cpp -lquestdb_client - -# With optimization -g++ -std=c++17 -O3 -o trades trades.cpp -lquestdb_client - -# Using CMake -cmake -DCMAKE_BUILD_TYPE=Release .. -make -``` - -Ensure you have: -- C++17 or later compiler -- QuestDB C++ client library installed -- Linker flag `-lquestdb_client` - -## Multiple Optional Columns - -Handle multiple optional fields by checking each one: - -```cpp -struct SensorReading { - std::string sensor_id; - std::optional temperature; - std::optional humidity; - std::optional pressure; - std::optional status; -}; - -// Add to buffer -for (const auto& reading : readings) { - buffer.table("sensor_data") - .symbol("sensor_id", reading.sensor_id); - - if (reading.temperature.has_value()) { - buffer.column("temperature", reading.temperature.value()); - } - - if (reading.humidity.has_value()) { - buffer.column("humidity", reading.humidity.value()); - } - - if (reading.pressure.has_value()) { - buffer.column("pressure", reading.pressure.value()); - } - - if (reading.status.has_value()) { - buffer.column("status", reading.status.value()); - } - - buffer.at(questdb::ingress::timestamp_nanos::now()); -} -``` - -## C++11/14 Alternative (Without std::optional) - -If you can't use C++17, use pointers or sentinel values: - -### Using Pointers - -```cpp -struct Trade { - std::string symbol; - std::string side; - double* price; // nullptr if missing - double amount; -}; - -// Usage -double btc_price = 39269.98; -std::vector trades = { - {"BTC-USD", "sell", &btc_price, 0.001}, - {"SOL-USD", "sell", nullptr, 5.5} // Missing price -}; - -for (const auto& trade : trades) { - buffer.table("trades") - .symbol("symbol", trade.symbol) - .symbol("side", trade.side); - - if (trade.price != nullptr) { - 
buffer.column("price", *trade.price); - } - - buffer.column("amount", trade.amount) - .at(questdb::ingress::timestamp_nanos::now()); -} -``` - -### Using Sentinel Values - -```cpp -const double MISSING_VALUE = std::numeric_limits::quiet_NaN(); - -struct Trade { - std::string symbol; - std::string side; - double price; // NaN if missing - double amount; -}; - -// Usage -for (const auto& trade : trades) { - buffer.table("trades") - .symbol("symbol", trade.symbol) - .symbol("side", trade.side); - - if (!std::isnan(trade.price)) { - buffer.column("price", trade.price); - } - - buffer.column("amount", trade.amount) - .at(questdb::ingress::timestamp_nanos::now()); -} -``` - -## Symbol vs Column - -Remember the distinction in ILP: -- **Symbols** (`.symbol()`): Categorical data, indexed automatically by QuestDB (e.g., instrument, side, category) -- **Columns** (`.column()`): Numerical, string, or boolean values (e.g., price, amount, status) - -Both can be optional and use the same conditional pattern: - -```cpp -// Optional symbol -if (trade.exchange.has_value()) { - buffer.symbol("exchange", trade.exchange.value()); -} - -// Optional column -if (trade.price.has_value()) { - buffer.column("price", trade.price.value()); -} -``` - -## Type-Specific Column Methods - -The C++ client provides type-specific methods for clarity and performance: - -```cpp -// Explicit type methods (recommended) -buffer.column_f64("price", 2615.54); // 64-bit float -buffer.column_i64("count", 100); // 64-bit integer -buffer.column_bool("active", true); // Boolean -buffer.column_str("status", "ok"); // String - -// Generic column (uses template deduction) -buffer.column("price", 2615.54); // Also works -``` - -Use type-specific methods when handling optional values for better clarity: - -```cpp -if (trade.price.has_value()) { - buffer.column_f64("price", trade.price.value()); -} - -if (trade.volume.has_value()) { - buffer.column_i64("volume", trade.volume.value()); -} -``` - -## Batching and 
Flushing - -For better performance, accumulate multiple rows before flushing: - -```cpp -constexpr size_t BATCH_SIZE = 1000; - -questdb::ingress::line_sender_buffer buffer; -size_t row_count = 0; - -for (const auto& trade : large_dataset) { - buffer.table("trades") - .symbol("symbol", trade.symbol); - - if (trade.price.has_value()) { - buffer.column("price", trade.price.value()); - } - - buffer.column("amount", trade.amount) - .at(questdb::ingress::timestamp_nanos::now()); - - row_count++; - - // Flush when batch is full - if (row_count >= BATCH_SIZE) { - sender.flush(buffer); - buffer.clear(); // Reset buffer for next batch - row_count = 0; - } -} - -// Flush remaining rows -if (row_count > 0) { - sender.flush(buffer); -} -``` - -## Error Handling - -Always handle potential errors: - -```cpp -try { - sender.flush(buffer); -} catch (const questdb::ingress::line_sender_error& err) { - std::cerr << "Failed to send data: " << err.what() << std::endl; - - // Implement retry logic or save to disk - if (should_retry(err)) { - retry_with_backoff(buffer); - } else { - save_to_disk(buffer); - } -} -``` - -## Performance Considerations - -**Minimize optional checks in hot paths:** -```cpp -// Good: Check once, process many -if (all_prices_present) { - for (const auto& trade : trades) { - buffer.table("trades") - .symbol("symbol", trade.symbol) - .column("price", trade.price) // No conditional - .column("amount", trade.amount) - .at(timestamp); - } -} else { - // Slower path with conditionals - for (const auto& trade : trades) { - buffer.table("trades") - .symbol("symbol", trade.symbol); - - if (trade.price.has_value()) { - buffer.column("price", trade.price.value()); - } - - buffer.column("amount", trade.amount) - .at(timestamp); - } -} -``` - -**Batch sizing:** -- Larger batches (1000-10000 rows) reduce network overhead -- Smaller batches (100-500 rows) reduce memory usage and improve latency -- Tune based on your data rate and memory constraints - -:::tip Schema Evolution 
-QuestDB automatically creates missing columns when you first send data with that column name. This means: -- You can add new optional columns at any time -- Existing rows will have NULL for new columns -- No schema migration required -::: - -:::warning Thread Safety -The `line_sender_buffer` is NOT thread-safe. Either: -1. Use one buffer per thread -2. Protect shared buffer with mutex -3. Use a queue pattern with single sender thread -::: - :::info Related Documentation - [QuestDB C++ client documentation](https://github.com/questdb/c-questdb-client) - [ILP reference](/docs/reference/api/ilp/overview/) -- [C++ client examples](https://github.com/questdb/c-questdb-client/tree/main/examples) -- [std::optional reference](https://en.cppreference.com/w/cpp/utility/optional) ::: diff --git a/documentation/playbook/programmatic/rust/tls-configuration.md b/documentation/playbook/programmatic/rust/tls-configuration.md deleted file mode 100644 index 9881a8b3d..000000000 --- a/documentation/playbook/programmatic/rust/tls-configuration.md +++ /dev/null @@ -1,326 +0,0 @@ ---- -title: Configure TLS for Rust Client -sidebar_label: TLS configuration -description: Set up TLS certificates for the QuestDB Rust client including self-signed certificates and production CA roots ---- - -Configure TLS encryption for the QuestDB Rust client when connecting to QuestDB instances with TLS enabled. This guide covers both production deployments with proper CA certificates and development environments with self-signed certificates. 
- -## Problem: TLS Certificate Validation - -When connecting the Rust client to a TLS-enabled QuestDB instance, you'll encounter certificate validation errors if: -- Using self-signed certificates (common in development) -- Using corporate/internal CA certificates not in system trust stores -- Certificate hostname doesn't match the connection address - -The default client configuration validates certificates against system certificate stores, which causes "certificate unknown" errors with self-signed certificates. - -## Solution Options - -The QuestDB Rust client provides three approaches for TLS configuration: - -1. **System + WebPKI roots** (recommended for production) -2. **Custom CA certificate** (best for development and internal CAs) -3. **Skip verification** (development/testing only - unsafe) - -### Option 1: Use System and WebPKI Certificate Roots - -For production deployments with properly signed certificates from public Certificate Authorities: - -```rust -use questdb::ingress::{Sender, SenderBuilder}; - -#[tokio::main] -async fn main() -> Result<(), Box> { - let sender = SenderBuilder::new("http", "production-host.com", 9000)? - .username("admin")? - .password("quest")? - .tls_ca("webpki_and_os_roots")? // Use both WebPKI and OS certificate stores - .build() - .await?; - - // Use sender... - - sender.close().await?; - Ok(()) -} -``` - -The `tls_ca("webpki_and_os_roots")` parameter tells the client to trust: -- **WebPKI roots**: Mozilla's standard root CA certificates -- **OS roots**: Operating system's certificate store (Windows, macOS, Linux) - -This works with certificates from public CAs like Let's Encrypt, DigiCert, etc. 
- -### Option 2: Custom CA Certificate (Recommended for Development) - -For development environments or internal CAs, provide a PEM-encoded certificate file: - -#### Step 1: Generate Self-Signed Certificate (if needed) - -```bash -# Generate private key -openssl genrsa -out questdb.key 2048 - -# Generate self-signed certificate (valid for 365 days) -openssl req -new -x509 \ - -key questdb.key \ - -out questdb.crt \ - -days 365 \ - -subj "/CN=localhost" - -# Verify certificate -openssl x509 -in questdb.crt -text -noout -``` - -#### Step 2: Configure QuestDB with Certificate - -Add to QuestDB `server.conf`: - -```ini -# Enable TLS on HTTP endpoint -http.security.enabled=true -http.security.cert.path=/path/to/questdb.crt -http.security.key.path=/path/to/questdb.key -``` - -Or via environment variables: - -```bash -export QDB_HTTP_SECURITY_ENABLED=true -export QDB_HTTP_SECURITY_CERT_PATH=/path/to/questdb.crt -export QDB_HTTP_SECURITY_KEY_PATH=/path/to/questdb.key -``` - -#### Step 3: Configure Rust Client - -```rust -use questdb::ingress::{Sender, SenderBuilder}; - -#[tokio::main] -async fn main() -> Result<(), Box> { - let sender = SenderBuilder::new("https", "localhost", 9000)? - .username("admin")? - .password("quest")? - .tls_ca("pem_file")? // Specify PEM file mode - .tls_roots("/path/to/questdb.crt")? // Path to certificate file - .build() - .await?; - - // Write data - sender - .table("trades")? - .symbol("symbol", "BTC-USDT")? - .symbol("side", "buy")? - .column_f64("price", 37779.62)? - .column_f64("amount", 0.5)? 
- .at_now() - .await?; - - sender.close().await?; - Ok(()) -} -``` - -**Key points:** -- Use `"https"` protocol (not `"http"`) -- `tls_ca("pem_file")`: Tells client to load from a PEM file -- `tls_roots("/path/to/questdb.crt")`: Path to the certificate file -- Certificate file must be PEM-encoded (text format with `-----BEGIN CERTIFICATE-----`) - -### Option 3: Skip Verification (Development Only) - -For development/testing when you want to bypass certificate validation entirely: - -#### Add Feature to Cargo.toml - -```toml -[dependencies] -questdb-rs = { version = "4.0", features = ["insecure-skip-verify"] } -tokio = { version = "1", features = ["full"] } -``` - -The `insecure-skip-verify` feature must be explicitly enabled in your `Cargo.toml`. - -#### Use Unsafe Verification Setting - -```rust -use questdb::ingress::{Sender, SenderBuilder}; - -#[tokio::main] -async fn main() -> Result<(), Box> { - let sender = SenderBuilder::new("https", "localhost", 9000)? - .username("admin")? - .password("quest")? - .tls_verify("unsafe_off")? // Disable certificate verification - .build() - .await?; - - // Use sender... - - sender.close().await?; - Ok(()) -} -``` - -:::danger Security Warning -**Never use `unsafe_off` in production!** This disables all certificate validation and makes your connection vulnerable to man-in-the-middle attacks. Only use for local development with self-signed certificates. -::: - -## Complete Example - -Here's a complete example handling different environments: - -```rust -use questdb::ingress::{Sender, SenderBuilder}; -use std::env; - -#[tokio::main] -async fn main() -> Result<(), Box> { - let environment = env::var("ENVIRONMENT").unwrap_or_else(|_| "development".to_string()); - - let sender = match environment.as_str() { - "production" => { - // Production: Use system CA roots - SenderBuilder::new("https", "production-host.com", 9000)? - .username("admin")? - .password("quest")? - .tls_ca("webpki_and_os_roots")? - .build() - .await? 
- } - "development" => { - // Development: Use self-signed certificate - SenderBuilder::new("https", "localhost", 9000)? - .username("admin")? - .password("quest")? - .tls_ca("pem_file")? - .tls_roots("./certs/questdb.crt")? - .build() - .await? - } - _ => { - return Err("Unknown environment".into()); - } - }; - - // Write sample data - sender - .table("trades")? - .symbol("symbol", "BTC-USDT")? - .symbol("side", "buy")? - .column_f64("price", 37779.62)? - .column_f64("amount", 0.5)? - .at_now() - .await?; - - println!("Data sent successfully over TLS"); - - sender.close().await?; - Ok(()) -} -``` - -Run with: - -```bash -# Production -ENVIRONMENT=production cargo run - -# Development -ENVIRONMENT=development cargo run -``` - -## TLS Configuration Options - -### Available tls_ca Values - -| Value | Description | Use Case | -|-------|-------------|----------| -| `webpki_roots` | Mozilla's WebPKI root certificates only | Public CAs, web-hosted QuestDB | -| `os_roots` | Operating system certificate store only | Corporate environments with custom CAs | -| `webpki_and_os_roots` | Both WebPKI and OS roots | Production (recommended) - covers all valid certificates | -| `pem_file` | Load from PEM file | Self-signed certificates, internal CAs | - -### Connection String Format - -Alternatively, configure TLS via connection string: - -```rust -let sender = SenderBuilder::from_conf( - "https::addr=localhost:9000;username=admin;password=quest;tls_ca=webpki_and_os_roots;" -)? -.build() -.await?; -``` - -For self-signed certificates with PEM file: - -```rust -let sender = SenderBuilder::from_conf( - "https::addr=localhost:9000;username=admin;password=quest;tls_ca=pem_file;tls_roots=/path/to/cert.crt;" -)? 
-.build() -.await?; -``` - -## Troubleshooting - -**"certificate unknown" error:** -- Verify certificate is valid and not expired: `openssl x509 -in cert.crt -noout -dates` -- Check certificate hostname matches connection address -- Ensure certificate file path is correct and readable -- For self-signed certs, use `tls_ca("pem_file")` with `tls_roots()` - -**"certificate verify failed":** -- Self-signed certificate: Use Option 2 (custom CA) or Option 3 (unsafe skip) -- Wrong CA: Verify certificate chain is complete in PEM file -- Expired certificate: Regenerate with longer validity period - -**"connection refused":** -- QuestDB TLS not enabled - check QuestDB configuration -- Wrong port - TLS uses same port (9000 for HTTP, 9009 for TCP) -- Firewall blocking HTTPS connections - -**"feature `insecure-skip-verify` is required":** -- Add feature to Cargo.toml: `features = ["insecure-skip-verify"]` -- This feature is required even just to use `tls_verify("unsafe_off")` - -## Certificate File Formats - -The Rust client expects PEM-encoded certificates: - -**Correct format (PEM):** -``` ------BEGIN CERTIFICATE----- -MIIDXTCCAkWgAwIBAgIJAKL0UG+mRKqzMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV -... ------END CERTIFICATE----- -``` - -**If you have DER format**, convert to PEM: -```bash -openssl x509 -inform der -in certificate.der -out certificate.pem -``` - -**Certificate chain**: If using an intermediate CA, concatenate certificates: -```bash -cat server.crt intermediate.crt root.crt > chain.pem -``` - -Use `chain.pem` with `tls_roots()`. - -:::tip Production Best Practices -1. **Use proper CA certificates** from Let's Encrypt or commercial CAs in production -2. **Never commit certificates** to version control - use secure secret management -3. **Rotate certificates** before expiration - monitor expiry dates -4. **Use environment variables** for certificate paths to support different environments -5. 
**Test certificate validation** in staging environment before production deployment -::: - -:::info Related Documentation -- [QuestDB Rust client documentation](https://docs.rs/questdb/) -- [QuestDB Rust client GitHub](https://github.com/questdb/c-questdb-client) -- [TLS configuration examples](https://github.com/questdb/c-questdb-client/tree/main/questdb-rs/examples) -- [QuestDB TLS configuration](/docs/operations/tls/) -::: diff --git a/documentation/playbook/programmatic/tls-ca-configuration.md b/documentation/playbook/programmatic/tls-ca-configuration.md new file mode 100644 index 000000000..1f581e8f5 --- /dev/null +++ b/documentation/playbook/programmatic/tls-ca-configuration.md @@ -0,0 +1,102 @@ +--- +title: Configure TLS Certificate Authorities +sidebar_label: TLS CA configuration +description: Configure TLS certificate authority validation for QuestDB clients +--- + +Configure TLS certificate authority (CA) validation when connecting QuestDB clients to TLS-enabled instances. + +## Problem + +You are using a QuestDB client (Rust, Python, C++, etc.) to insert data. It works when using QuestDB without TLS, but when you enable TLS on your QuestDB instance using a self-signed certificate, you get a "certificate unknown" error. + +When using the PostgreSQL wire interface, you can insert data by passing `sslmode=require`, and it works, so you can rule out any problem with the certificate on the QuestDB side. But you need to figure out the equivalent for your ILP client. 
+ +## Solution: Configure TLS CA + +QuestDB clients support the `tls_ca` parameter, which accepts several values that control certificate authority validation: + +### Option 1: Use WebPKI and OS Certificate Roots (Recommended for Production) + +To trust both the WebPKI root certificates and whatever is already in the OS certificate store, pass `tls_ca=webpki_and_os_roots`: + +``` +https::addr=localhost:9000;username=admin;password=quest;tls_ca=webpki_and_os_roots; +``` + +This will work with certificates signed by standard certificate authorities. + +### Option 2: Use a Custom PEM File + +Point to a PEM-encoded certificate file for self-signed or custom CA certificates: + +``` +https::addr=localhost:9000;username=admin;password=quest;tls_ca=pem_file;tls_roots=/path/to/cert.pem; +``` + +This is useful for self-signed certificates or internal CAs. + +### Option 3: Skip Verification (Development Only) + +For development environments with self-signed certificates, you might be tempted to disable verification by passing `tls_verify=unsafe_off`: + +``` +https::addr=localhost:9000;username=admin;password=quest;tls_verify=unsafe_off; +``` + +:::danger +This is a very bad idea for production and should only be used for testing on a development environment with a self-signed certificate. It disables all certificate validation. +::: + +**Note:** Some clients require enabling an optional feature (like `insecure-skip-verify` in Rust) before the `tls_verify=unsafe_off` parameter will work. Check your client's documentation for details. 
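The connection strings above can also be assembled programmatically before being handed to a client that accepts configuration strings (for example via a `from_conf`-style constructor). A minimal Python sketch; `build_conf` is an illustrative helper, not part of any QuestDB client library:

```python
def build_conf(addr: str, **params: str) -> str:
    """Assemble a QuestDB HTTPS client configuration string (illustrative helper)."""
    tail = "".join(f"{key}={value};" for key, value in params.items())
    return f"https::addr={addr};{tail}"

# Option 2 above: self-signed certificate validated against a PEM file
conf = build_conf(
    "localhost:9000",
    username="admin",
    password="quest",
    tls_ca="pem_file",
    tls_roots="/path/to/cert.pem",
)
print(conf)
# → https::addr=localhost:9000;username=admin;password=quest;tls_ca=pem_file;tls_roots=/path/to/cert.pem;
```

Keeping the certificate path and credentials as parameters makes it easy to switch between the three options per environment.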
+ +## Available tls_ca Values + +| Value | Description | +|-------|-------------| +| `webpki_roots` | Mozilla's WebPKI root certificates only | +| `os_roots` | Operating system certificate store only | +| `webpki_and_os_roots` | Both WebPKI and OS roots (recommended) | +| `pem_file` | Load from a PEM file (requires `tls_roots` parameter) | + +## Example: Rust Client + +```rust +use questdb::ingress::{Sender, SenderBuilder}; + +#[tokio::main] +async fn main() -> Result<(), Box<dyn std::error::Error>> { + let sender = SenderBuilder::new("https", "localhost", 9000)? + .username("admin")? + .password("quest")? + .tls_ca("webpki_and_os_roots")? // Use standard CAs + .build() + .await?; + + // Use sender... + + sender.close().await?; + Ok(()) +} +``` + +For self-signed certificates with a PEM file: + +```rust +let sender = SenderBuilder::new("https", "localhost", 9000)? + .username("admin")? + .password("quest")? + .tls_ca("pem_file")? + .tls_roots("/path/to/questdb.crt")? + .build() + .await?; +``` + +The examples are in Rust but the concepts are similar in other languages. Check the documentation for your specific client. 
+ +:::info Related Documentation +- [QuestDB Rust client](https://docs.rs/questdb/) +- [QuestDB Python client](/docs/clients/ingest-python/) +- [QuestDB C++ client](/docs/clients/ingest-cpp/) +- [QuestDB TLS configuration](/docs/operations/tls/) +::: diff --git a/documentation/playbook/sql/advanced/array-from-string.md b/documentation/playbook/sql/advanced/array-from-string.md index d32951691..1091dc58e 100644 --- a/documentation/playbook/sql/advanced/array-from-string.md +++ b/documentation/playbook/sql/advanced/array-from-string.md @@ -1,361 +1,32 @@ --- title: Create Arrays from String Literals sidebar_label: Array from string literal -description: Cast string literals to array types for use in functions requiring array parameters +description: Cast string literals to array types in QuestDB --- -Create array values from string literals for use with functions that accept array parameters. While QuestDB doesn't have native array literals, you can cast string representations to array types like `double[]` or `int[]`. +Cast string literals to array types for use with functions that accept array parameters. -## Problem: Functions Requiring Array Parameters +## Solution -Some QuestDB functions accept array parameters: +To create an array from a string, cast it to `double[]` for a vector, or to `double[][]` for a two-dimensional array. You can just keep adding brackets for as many dimensions as the literal has. 
-```sql --- Hypothetical function signature -percentile_cont(values double[], percentiles double[]) -``` - -But you can't write arrays directly in SQL: - -```sql --- This doesn't work (not valid SQL) -SELECT func([1.0, 2.0, 3.0]); -``` - -## Solution: Cast String to Array - -Use CAST to convert string literals to array types: - -```questdb-sql demo title="Cast string to double array" -SELECT cast('[1.0, 2.0, 3.0, 4.0, 5.0]' AS double[]) as numbers; -``` - -**Result:** -``` -numbers: [1.0, 2.0, 3.0, 4.0, 5.0] -``` - -## Array Type Casting - -### Double Array - -```sql -SELECT cast('[1.5, 2.7, 3.2]' AS double[]) as decimals; -``` - -### Integer Array - -```sql -SELECT cast('[10, 20, 30]' AS int[]) as integers; -``` - -### Long Array - -```sql -SELECT cast('[1000000, 2000000, 3000000]' AS long[]) as big_numbers; -``` - -## Using Arrays with Functions - -### Custom Percentiles - -```sql --- Calculate multiple percentiles at once (if function supports) -SELECT - symbol, - percentiles(price, cast('[0.25, 0.50, 0.75, 0.95, 0.99]' AS double[])) as percentile_values -FROM trades -WHERE timestamp >= dateadd('d', -1, now()) -GROUP BY symbol; -``` - -Note: QuestDB's built-in `percentile()` function takes a single percentile value, not an array. This example shows the pattern for custom or future array-accepting functions. 
- -### Array Aggregation (Example Pattern) - -```sql --- Conceptual: Aggregate values into array -WITH data AS ( - SELECT - timestamp_floor('h', timestamp) as hour, - collect_list(price) as prices -- Hypothetical array aggregation - FROM trades - SAMPLE BY 1h -) -SELECT - hour, - array_avg(prices) as avg_price, - array_median(prices) as median_price -FROM data; -``` - -## Multidimensional Arrays - -### 2D Array (Matrix) - -```sql -SELECT cast('[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]' AS double[][]) as matrix; -``` - -**Result:** -``` -matrix: [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]] -``` - -### Use Case: Time Series Matrix - -```sql --- Store multiple related time series as matrix -WITH timeseries_matrix AS ( - SELECT cast( - '[[100.0, 101.0, 102.0], - [200.0, 201.0, 202.0], - [300.0, 301.0, 302.0]]' - AS double[][] - ) as series_data -) -SELECT - series_data[0] as series_1, -- [100.0, 101.0, 102.0] - series_data[1] as series_2, -- [200.0, 201.0, 202.0] - series_data[2] as series_3 -- [300.0, 301.0, 302.0] -FROM timeseries_matrix; -``` - -## Array Indexing - -Access array elements by index (0-based): - -```sql -WITH arr AS ( - SELECT cast('[10.5, 20.7, 30.2, 40.9]' AS double[]) as values -) -SELECT - values[0] as first, -- 10.5 - values[1] as second, -- 20.7 - values[3] as fourth -- 40.9 -FROM arr; -``` - -## Dynamic Array Construction - -Build arrays from query results: - -### Using String Aggregation - -```sql --- Aggregate values into comma-separated string, then cast -WITH aggregated AS ( - SELECT - symbol, - string_agg(cast(price AS STRING), ',') as price_string - FROM ( - SELECT * FROM trades - WHERE symbol = 'BTC-USDT' - LIMIT 10 - ) - GROUP BY symbol -) -SELECT - symbol, - cast('[' || price_string || ']' AS double[]) as price_array -FROM aggregated; -``` - -## Array Literals in WHERE Clauses - -Check if value exists in array: +This query shows how to convert a string literal into an array, even when there are new lines: -```sql --- Check if symbol is in list -WITH 
valid_symbols AS ( - SELECT cast('["BTC-USDT", "ETH-USDT", "SOL-USDT"]' AS string[]) as symbols -) -SELECT * -FROM trades -WHERE symbol IN (SELECT unnest(symbols) FROM valid_symbols) -LIMIT 100; +```questdb-sql demo title="Cast string to array" +SELECT CAST('[ + [ 1.0, 2.0, 3.0 ], + [ + 4.0, + 5.0, + 6.0 + ] +]' AS double[][]), +cast('[[1,2,3],[4,5,6]]' as double[][]); ``` -Note: QuestDB's `IN` clause with arrays may have limited support. Use standard `IN (value1, value2, ...)` syntax where possible. - -## Array Length - -Get number of elements: - -```sql -SELECT - cast('[1, 2, 3, 4, 5]' AS int[]) as arr, - array_length(cast('[1, 2, 3, 4, 5]' AS int[]), 1) as length; -- Returns 5 -``` - -## Common Patterns - -### Percentile Thresholds - -```sql --- Define alert thresholds as array -WITH thresholds AS ( - SELECT cast('[50.0, 100.0, 500.0, 1000.0]' AS double[]) as latency_thresholds -), -counts AS ( - SELECT - count(CASE WHEN latency_ms < thresholds[0] THEN 1 END) as under_50ms, - count(CASE WHEN latency_ms >= thresholds[0] AND latency_ms < thresholds[1] THEN 1 END) as ms_50_100, - count(CASE WHEN latency_ms >= thresholds[1] AND latency_ms < thresholds[2] THEN 1 END) as ms_100_500, - count(CASE WHEN latency_ms >= thresholds[2] THEN 1 END) as over_500ms - FROM api_requests, thresholds - WHERE timestamp >= dateadd('h', -1, now()) -) -SELECT * FROM counts; -``` - -### Price Levels - -```sql --- Support/resistance levels -WITH levels AS ( - SELECT cast('[60000.0, 61000.0, 62000.0, 63000.0]' AS double[]) as price_levels -) -SELECT - timestamp, - price, - CASE - WHEN price < price_levels[0] THEN 'Below Support 1' - WHEN price >= price_levels[0] AND price < price_levels[1] THEN 'Support 1-2' - WHEN price >= price_levels[1] AND price < price_levels[2] THEN 'Support 2-3' - WHEN price >= price_levels[2] AND price < price_levels[3] THEN 'Resistance 1-2' - ELSE 'Above Resistance 2' - END as price_zone -FROM trades, levels -WHERE symbol = 'BTC-USDT' - AND timestamp >= 
dateadd('h', -1, now()); -``` - -## Limitations and Workarounds - -### No Array Literals - -**Problem:** Can't write arrays directly in standard SQL syntax - -**Workaround:** Use CAST with string literals as shown above - -### Limited Array Functions - -**Problem:** QuestDB has limited built-in array manipulation functions - -**Workaround:** Use CASE expressions and indexing to process arrays - -### Array Comparison - -**Problem:** Can't directly compare arrays with `=` operator - -**Workaround:** Compare element-by-element or convert to strings - -```sql -SELECT - CASE - WHEN cast('[1, 2, 3]' AS int[])[0] = cast('[1, 2, 4]' AS int[])[0] - AND cast('[1, 2, 3]' AS int[])[1] = cast('[1, 2, 4]' AS int[])[1] - THEN 'First two elements match' - ELSE 'Different' - END as comparison; -``` - -## Alternative: Use Individual Columns - -For many use cases, separate columns are cleaner than arrays: - -```sql --- Instead of: [p50, p90, p95, p99] -SELECT - percentile(price, 50) as p50, - percentile(price, 90) as p90, - percentile(price, 95) as p95, - percentile(price, 99) as p99 -FROM trades; -``` - -This avoids array casting and is often more readable. - -## Type Coercion Rules - -```sql --- String to double[] -cast('[1, 2, 3]' AS double[]) -- [1.0, 2.0, 3.0] - --- String to int[] -cast('[1.5, 2.5, 3.5]' AS int[]) -- [1, 2, 3] (truncates decimals) - --- String to long[] -cast('[1000000, 2000000]' AS long[]) -- [1000000, 2000000] -``` - -## JSON Alternative - -For complex nested structures, consider using STRING columns with JSON: - -```sql --- Store as JSON string -SELECT '{"prices": [100.0, 101.0, 102.0], "volumes": [10, 20, 30]}' as data; - --- Parse with custom logic or external tools -``` - -QuestDB focuses on time-series performance, so complex nested structures are often better handled in application code. 
- -## Practical Example: Multiple Symbol Filter - -```sql --- Define symbols to track -WITH watched_symbols AS ( - SELECT cast('["BTC-USDT", "ETH-USDT", "SOL-USDT", "AVAX-USDT"]' AS string[]) as symbols -) -SELECT - trades.* -FROM trades -CROSS JOIN watched_symbols -WHERE symbol IN ( - -- Expand array to rows - SELECT symbols[0] FROM watched_symbols - UNION ALL SELECT symbols[1] FROM watched_symbols - UNION ALL SELECT symbols[2] FROM watched_symbols - UNION ALL SELECT symbols[3] FROM watched_symbols -) - AND timestamp >= dateadd('h', -1, now()) -LIMIT 100; -``` - -**Simpler alternative:** -```sql -SELECT * FROM trades -WHERE symbol IN ('BTC-USDT', 'ETH-USDT', 'SOL-USDT', 'AVAX-USDT') -LIMIT 100; -``` - -The array approach is useful when the list is dynamically generated or reused across queries. - -:::tip When to Use Arrays -Use arrays when: -- Working with functions that require array parameters -- Storing fixed-size sequences (coordinates, RGB values, etc.) -- Defining reusable threshold or configuration arrays -- Interfacing with external systems expecting array format - -Avoid arrays when: -- Simple column-based representation works fine -- You need frequent element-wise operations (use separate columns instead) -- Data structure is deeply nested (consider JSON or denormalization) -::: - -:::warning Array Support Limited -QuestDB's array support is focused on specific use cases. For extensive array manipulation: -1. Prefer separate columns for better query performance -2. Handle complex array logic in application code -3. Consider alternative databases if arrays are core to your data model -::: +Note if you add the wrong number of brackets (for example, in this case if you try casting to `double[]` or `double[][][][]`), it will not error, but will instead convert as null. 
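When the array contents come from application data, the string literal can be generated before being embedded in the `CAST`. A minimal Python sketch, assuming the data is a (possibly nested) list of numbers; `to_array_literal` is a hypothetical helper, not part of any QuestDB client:

```python
def to_array_literal(value) -> str:
    """Render nested Python lists as a QuestDB array literal string."""
    if isinstance(value, list):
        return "[" + ", ".join(to_array_literal(v) for v in value) + "]"
    return repr(float(value))  # scalar: render as a double

matrix = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
sql = f"SELECT CAST('{to_array_literal(matrix)}' AS double[][]);"
print(sql)
# → SELECT CAST('[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]' AS double[][]);
```

The number of bracket pairs in the generated literal must match the number of `[]` pairs in the target type, per the note above.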
:::info Related Documentation - [CAST function](/docs/reference/sql/cast/) - [Data types](/docs/reference/sql/datatypes/) -- [String functions](/docs/reference/function/text/) ::: diff --git a/documentation/playbook/sql/advanced/conditional-aggregates.md b/documentation/playbook/sql/advanced/conditional-aggregates.md index 497f6d6f0..ba3c0cc27 100644 --- a/documentation/playbook/sql/advanced/conditional-aggregates.md +++ b/documentation/playbook/sql/advanced/conditional-aggregates.md @@ -1,15 +1,14 @@ --- title: Multiple Conditional Aggregates sidebar_label: Conditional aggregates -description: Calculate multiple conditional aggregates in a single query using CASE expressions for efficient data analysis +description: Calculate multiple conditional aggregates in a single query using CASE expressions --- -Calculate multiple aggregates with different conditions in a single pass through the data using CASE expressions. This pattern is more efficient than running separate queries and essential for creating summary reports with multiple metrics. +Calculate multiple aggregates with different conditions in a single pass through the data using CASE expressions. -## Problem: Multiple Metrics with Different Filters +## Problem You need to calculate various metrics from the same dataset with different conditions: - - Count of buy orders - Count of sell orders - Average buy price @@ -17,15 +16,7 @@ You need to calculate various metrics from the same dataset with different condi - Total volume for large trades (> 1.0) - Total volume for small trades (≤ 1.0) -Running separate queries is inefficient: - -```sql --- Inefficient: 6 separate scans -SELECT count(*) FROM trades WHERE side = 'buy'; -SELECT count(*) FROM trades WHERE side = 'sell'; -SELECT avg(price) FROM trades WHERE side = 'buy'; --- ... 3 more queries -``` +Running separate queries is inefficient. 
## Solution: CASE Within Aggregate Functions @@ -47,13 +38,6 @@ WHERE timestamp >= dateadd('d', -1, now()) GROUP BY symbol; ``` -**Results:** - -| symbol | buy_count | sell_count | avg_buy_price | avg_sell_price | large_trade_volume | small_trade_volume | total_volume | -|--------|-----------|------------|---------------|----------------|--------------------|--------------------|-------------- | -| BTC-USDT | 12,345 | 11,234 | 61,250.50 | 61,248.75 | 456.78 | 123.45 | 580.23 | -| ETH-USDT | 23,456 | 22,345 | 3,456.25 | 3,455.50 | 678.90 | 234.56 | 913.46 | - ## How It Works ### CASE Returns NULL for Non-Matching Rows @@ -77,277 +61,8 @@ avg(CASE WHEN side = 'buy' THEN price END) - Only includes price when side is 'buy' - Automatically skips all other rows -## Time-Series Summary Report - -Create comprehensive time-series summaries with multiple conditions: - -```questdb-sql demo title="Hourly trading summary with multiple metrics" -SELECT - timestamp_floor('h', timestamp) as hour, - symbol, - count(*) as total_trades, - count(CASE WHEN side = 'buy' THEN 1 END) as buy_trades, - count(CASE WHEN side = 'sell' THEN 1 END) as sell_trades, - sum(amount) as total_volume, - sum(CASE WHEN side = 'buy' THEN amount END) as buy_volume, - sum(CASE WHEN side = 'sell' THEN amount END) as sell_volume, - min(price) as low, - max(price) as high, - first(price) as open, - last(price) as close, - avg(CASE WHEN amount > 1.0 THEN price END) as avg_large_trade_price, - count(CASE WHEN amount > 10.0 THEN 1 END) as whale_trades -FROM trades -WHERE timestamp >= dateadd('d', -7, now()) - AND symbol = 'BTC-USDT' -GROUP BY hour, symbol -ORDER BY hour DESC -LIMIT 24; -``` - -**Results:** - -| hour | symbol | total_trades | buy_trades | sell_trades | total_volume | buy_volume | sell_volume | low | high | open | close | avg_large_trade_price | whale_trades | 
-|------|--------|--------------|------------|-------------|--------------|------------|-------------|-----|------|------|-------|----------------------|--------------| -| 2025-01-15 23:00 | BTC-USDT | 1,234 | 645 | 589 | 45.67 | 23.45 | 22.22 | 61,200 | 61,350 | 61,250 | 61,300 | 61,275 | 12 | - -## Conditional Aggregates with SAMPLE BY - -Combine conditional aggregates with time-series aggregation: - -```questdb-sql demo title="5-minute candles with buy/sell split" -SELECT - timestamp, - symbol, - first(price) as open, - last(price) as close, - min(price) as low, - max(price) as high, - sum(amount) as total_volume, - sum(CASE WHEN side = 'buy' THEN amount ELSE 0 END) as buy_volume, - sum(CASE WHEN side = 'sell' THEN amount ELSE 0 END) as sell_volume, - (sum(CASE WHEN side = 'buy' THEN amount ELSE 0 END) / - sum(amount) * 100) as buy_percentage -FROM trades -WHERE symbol = 'BTC-USDT' - AND timestamp >= dateadd('h', -6, now()) -SAMPLE BY 5m; -``` - -This creates 5-minute OHLCV candles with buy/sell volume breakdown. 
- -## Percentage Calculations - -Calculate percentages within the same query: - -```questdb-sql demo title="Trade distribution by size category" -SELECT - symbol, - count(*) as total_trades, - count(CASE WHEN amount <= 0.1 THEN 1 END) as micro_trades, - count(CASE WHEN amount > 0.1 AND amount <= 1.0 THEN 1 END) as small_trades, - count(CASE WHEN amount > 1.0 AND amount <= 10.0 THEN 1 END) as medium_trades, - count(CASE WHEN amount > 10.0 THEN 1 END) as large_trades, - (count(CASE WHEN amount <= 0.1 THEN 1 END) * 100.0 / count(*)) as micro_pct, - (count(CASE WHEN amount > 0.1 AND amount <= 1.0 THEN 1 END) * 100.0 / count(*)) as small_pct, - (count(CASE WHEN amount > 1.0 AND amount <= 10.0 THEN 1 END) * 100.0 / count(*)) as medium_pct, - (count(CASE WHEN amount > 10.0 THEN 1 END) * 100.0 / count(*)) as large_pct -FROM trades -WHERE timestamp >= dateadd('d', -1, now()) -GROUP BY symbol; -``` - -**Results:** - -| symbol | total_trades | micro_trades | small_trades | medium_trades | large_trades | micro_pct | small_pct | medium_pct | large_pct | -|--------|--------------|--------------|--------------|---------------|--------------|-----------|-----------|------------|-----------| -| BTC-USDT | 50,000 | 35,000 | 10,000 | 4,000 | 1,000 | 70.0 | 20.0 | 8.0 | 2.0 | - -## Ratio and Comparison Metrics - -Calculate buy/sell ratios and imbalances: - -```questdb-sql demo title="Order flow imbalance metrics" -SELECT - timestamp, - symbol, - sum(CASE WHEN side = 'buy' THEN amount END) as buy_volume, - sum(CASE WHEN side = 'sell' THEN amount END) as sell_volume, - (sum(CASE WHEN side = 'buy' THEN amount END) - - sum(CASE WHEN side = 'sell' THEN amount END)) as volume_imbalance, - (sum(CASE WHEN side = 'buy' THEN amount END) / - NULLIF(sum(CASE WHEN side = 'sell' THEN amount END), 0)) as buy_sell_ratio, - count(CASE WHEN side = 'buy' THEN 1 END) * 1.0 / - count(CASE WHEN side = 'sell' THEN 1 END) as trade_count_ratio -FROM trades -WHERE symbol = 'BTC-USDT' - AND timestamp >= 
dateadd('h', -1, now()) -SAMPLE BY 5m; -``` - -**Key points:** -- `NULLIF(denominator, 0)` prevents division by zero -- Ratio > 1.0 indicates buying pressure -- Ratio < 1.0 indicates selling pressure - -## Multiple Symbols Comparison - -Compare metrics across different assets: - -```questdb-sql demo title="Cross-asset summary statistics" -SELECT - timestamp_floor('h', timestamp) as hour, - sum(CASE WHEN symbol = 'BTC-USDT' THEN amount END) as btc_volume, - sum(CASE WHEN symbol = 'ETH-USDT' THEN amount END) as eth_volume, - sum(CASE WHEN symbol = 'SOL-USDT' THEN amount END) as sol_volume, - avg(CASE WHEN symbol = 'BTC-USDT' THEN price END) as btc_avg_price, - avg(CASE WHEN symbol = 'ETH-USDT' THEN price END) as eth_avg_price, - avg(CASE WHEN symbol = 'SOL-USDT' THEN price END) as sol_avg_price, - count(CASE WHEN symbol = 'BTC-USDT' THEN 1 END) as btc_trades, - count(CASE WHEN symbol = 'ETH-USDT' THEN 1 END) as eth_trades, - count(CASE WHEN symbol = 'SOL-USDT' THEN 1 END) as sol_trades -FROM trades -WHERE timestamp >= dateadd('d', -1, now()) - AND symbol IN ('BTC-USDT', 'ETH-USDT', 'SOL-USDT') -GROUP BY hour -ORDER BY hour DESC; -``` - -This creates a wide-format summary with one column per symbol. - -## SUM vs COUNT for Conditional Counting - -Two equivalent patterns for conditional counting: - -```sql --- Method 1: COUNT with CASE (recommended) -count(CASE WHEN condition THEN 1 END) - --- Method 2: SUM with CASE -sum(CASE WHEN condition THEN 1 ELSE 0 END) -``` - -**Recommendation:** Use `count(CASE WHEN ... 
THEN 1 END)` because: -- More semantically clear (counting occurrences) -- Slightly more efficient (no need to sum zeros) -- Standard SQL pattern - -## Nested Conditions - -Handle multiple condition levels: - -```questdb-sql demo title="Complex conditional aggregates" -SELECT - symbol, - -- Profitable trades by side - count(CASE - WHEN side = 'buy' AND price < avg(price) OVER (PARTITION BY symbol) THEN 1 - END) as good_buy_entries, - count(CASE - WHEN side = 'sell' AND price > avg(price) OVER (PARTITION BY symbol) THEN 1 - END) as good_sell_entries, - -- Volume-weighted metrics - sum(CASE - WHEN side = 'buy' AND amount > 1.0 THEN price * amount - END) / NULLIF(sum(CASE - WHEN side = 'buy' AND amount > 1.0 THEN amount - END), 0) as vwap_large_buys, - -- Time-based conditions - count(CASE - WHEN hour(timestamp) >= 9 AND hour(timestamp) < 16 THEN 1 - END) as market_hours_trades, - count(CASE - WHEN hour(timestamp) < 9 OR hour(timestamp) >= 16 THEN 1 - END) as after_hours_trades -FROM trades -WHERE timestamp >= dateadd('d', -1, now()) -GROUP BY symbol; -``` - -## Performance Considerations - -**Single scan vs multiple queries:** - -```sql --- Efficient: One scan, multiple aggregates -SELECT - count(CASE WHEN side = 'buy' THEN 1 END), - count(CASE WHEN side = 'sell' THEN 1 END) -FROM trades; - --- Inefficient: Two scans -SELECT count(*) FROM trades WHERE side = 'buy'; -SELECT count(*) FROM trades WHERE side = 'sell'; -``` - -**Index usage:** - -```sql --- Filter first, then conditional aggregates -SELECT - count(CASE WHEN side = 'buy' THEN 1 END) as buy_count, - count(CASE WHEN side = 'sell' THEN 1 END) as sell_count -FROM trades -WHERE timestamp >= dateadd('d', -1, now()) -- Uses timestamp index - AND symbol = 'BTC-USDT'; -- Uses symbol index if SYMBOL type -``` - -**Avoid redundant conditions:** - -```sql --- Good: Simple CASE -count(CASE WHEN amount > 1.0 THEN 1 END) - --- Wasteful: Unnecessary ELSE -count(CASE WHEN amount > 1.0 THEN 1 ELSE NULL END) -- NULL is 
implicit -``` - -## Common Patterns - -**Status distribution:** -```sql -SELECT - count(CASE WHEN status = 'active' THEN 1 END) as active, - count(CASE WHEN status = 'pending' THEN 1 END) as pending, - count(CASE WHEN status = 'failed' THEN 1 END) as failed -FROM orders; -``` - -**Success rate:** -```sql -SELECT - (count(CASE WHEN status = 'success' THEN 1 END) * 100.0 / count(*)) as success_rate, - (count(CASE WHEN status = 'error' THEN 1 END) * 100.0 / count(*)) as error_rate -FROM api_requests; -``` - -**Size buckets:** -```sql -SELECT - sum(CASE WHEN amount < 1 THEN amount END) as small_volume, - sum(CASE WHEN amount >= 1 AND amount < 10 THEN amount END) as medium_volume, - sum(CASE WHEN amount >= 10 THEN amount END) as large_volume -FROM trades; -``` - -:::tip When to Use This Pattern -Use conditional aggregates when you need: -- Multiple metrics with different filters from the same dataset -- Summary reports with various breakdowns -- Pivot-like transformations (conditions as columns) -- Performance optimization (single scan vs multiple queries) -::: - -:::warning NULL Handling -Remember that CASE without ELSE returns NULL. This is what makes the pattern work: -- `count()` ignores NULLs (only counts matching rows) -- `sum()`, `avg()`, etc. 
ignore NULLs (only aggregate matching values) -- Never use `count(*)` with CASE - always use `count(expression)` -::: - :::info Related Documentation - [CASE expressions](/docs/reference/sql/case/) - [Aggregate functions](/docs/reference/function/aggregation/) - [count()](/docs/reference/function/aggregation/#count) -- [SAMPLE BY](/docs/reference/sql/select/#sample-by) ::: diff --git a/documentation/playbook/sql/advanced/general-and-sampled-aggregates.md b/documentation/playbook/sql/advanced/general-and-sampled-aggregates.md index f9e61d0cb..19fb69365 100644 --- a/documentation/playbook/sql/advanced/general-and-sampled-aggregates.md +++ b/documentation/playbook/sql/advanced/general-and-sampled-aggregates.md @@ -1,336 +1,71 @@ --- title: General and Sampled Aggregates sidebar_label: General + sampled aggregates -description: Combine overall statistics with time-bucketed aggregates using CROSS JOIN to show baseline comparisons +description: Combine overall statistics with time-bucketed aggregates using CROSS JOIN --- -Calculate both overall (baseline) aggregates and time-bucketed aggregates in the same query using CROSS JOIN. This pattern is essential for comparing current values against historical averages, showing percentage of total, or displaying baseline metrics alongside time-series data. +Combine overall (unsampled) aggregates with sampled aggregates in the same query. 
-## Problem: Need Both Total and Time-Series Aggregates +## Problem -You want to show hourly trade volumes alongside the daily average: - -**Without baseline (incomplete picture):** - -| hour | volume | -|------|--------| -| 00:00 | 45.6 | -| 01:00 | 34.2 | -| 02:00 | 28.9 | - -**With baseline (shows context):** - -| hour | volume | daily_avg | vs_avg | -|------|--------|-----------|--------| -| 00:00 | 45.6 | 38.2 | +19.4% | -| 01:00 | 34.2 | 38.2 | -10.5% | -| 02:00 | 28.9 | 38.2 | -24.3% | - -## Solution: CROSS JOIN with General Aggregates - -Use CROSS JOIN to attach overall statistics to each time-bucketed row: - -```questdb-sql demo title="Hourly volumes with daily baseline" -WITH general AS ( - SELECT - avg(volume_hourly) as daily_avg_volume, - sum(volume_hourly) as daily_total_volume - FROM ( - SELECT sum(amount) as volume_hourly - FROM trades - WHERE timestamp IN today() - AND symbol = 'BTC-USDT' - SAMPLE BY 1h - ) -), -sampled AS ( - SELECT - timestamp, - sum(amount) as volume - FROM trades - WHERE timestamp IN today() - AND symbol = 'BTC-USDT' - SAMPLE BY 1h -) -SELECT - sampled.timestamp, - sampled.volume as hourly_volume, - general.daily_avg_volume, - (sampled.volume - general.daily_avg_volume) as diff_from_avg, - ((sampled.volume - general.daily_avg_volume) / general.daily_avg_volume * 100) as pct_diff, - (sampled.volume / general.daily_total_volume * 100) as pct_of_total -FROM sampled -CROSS JOIN general -ORDER BY sampled.timestamp; -``` - -**Results:** - -| timestamp | hourly_volume | daily_avg_volume | diff_from_avg | pct_diff | pct_of_total | -|-----------|---------------|------------------|---------------|----------|--------------| -| 2025-01-15 00:00 | 45.6 | 38.2 | +7.4 | +19.4% | 4.98% | -| 2025-01-15 01:00 | 34.2 | 38.2 | -4.0 | -10.5% | 3.73% | -| 2025-01-15 02:00 | 28.9 | 38.2 | -9.3 | -24.3% | 3.15% | - -## How It Works - -### Step 1: Calculate General Aggregates +You have a query with three aggregates: ```sql -WITH general AS ( - SELECT - 
avg(volume_hourly) as daily_avg_volume, - sum(volume_hourly) as daily_total_volume - FROM (...) -) +SELECT max(price), avg(price), min(price) +FROM trades_2024 +WHERE timestamp IN '2024-08'; ``` -Creates a CTE with single-row summary statistics (overall average, total, etc.). - -### Step 2: Calculate Time-Bucketed Aggregates - -```sql -sampled AS ( - SELECT timestamp, sum(amount) as volume - FROM trades - SAMPLE BY 1h -) +This returns: +``` +max avg min +======== =========== ======== +61615.43 31598.71891 58402.01 ``` -Creates time-series data with one row per interval. - -### Step 3: CROSS JOIN +And another query to get event count per second, then select the maximum: ```sql -FROM sampled CROSS JOIN general +SELECT max(count_sec) FROM ( + SELECT count() as count_sec FROM trades + WHERE timestamp IN '2024-08' + SAMPLE BY 1s +); ``` -Attaches the single general row to every sampled row. Since `general` has exactly one row, this repeats that row's values for each time bucket. - -## Performance Metrics vs Baseline - -Compare recent performance against historical averages: - -```questdb-sql demo title="API latency vs 7-day baseline" -WITH baseline AS ( - SELECT - avg(latency_ms) as avg_latency, - percentile(latency_ms, 95) as p95_latency, - percentile(latency_ms, 99) as p99_latency - FROM api_requests - WHERE timestamp >= dateadd('d', -7, now()) -), -recent AS ( - SELECT - timestamp, - avg(latency_ms) as current_latency, - percentile(latency_ms, 95) as current_p95, - count(*) as request_count - FROM api_requests - WHERE timestamp >= dateadd('h', -1, now()) - SAMPLE BY 5m -) -SELECT - recent.timestamp, - recent.request_count, - recent.current_latency, - baseline.avg_latency as baseline_latency, - (recent.current_latency - baseline.avg_latency) as latency_diff, - recent.current_p95, - baseline.p95_latency as baseline_p95, - CASE - WHEN recent.current_latency > baseline.avg_latency * 1.5 THEN 'WARNING' - WHEN recent.current_latency > baseline.avg_latency * 2.0 THEN 
'CRITICAL' - ELSE 'OK' - END as status -FROM recent -CROSS JOIN baseline -ORDER BY recent.timestamp DESC; +This returns: ``` - -**Results show current performance with baseline context and alerts.** - -## Percentage of Daily Total - -Show each hour's contribution to the daily total: - -```questdb-sql demo title="Hourly volume as percentage of daily total" -WITH daily_total AS ( - SELECT - sum(amount) as total_volume, - count(*) as total_trades - FROM trades - WHERE timestamp IN today() - AND symbol = 'BTC-USDT' -), -hourly AS ( - SELECT - timestamp, - sum(amount) as hourly_volume, - count(*) as hourly_trades - FROM trades - WHERE timestamp IN today() - AND symbol = 'BTC-USDT' - SAMPLE BY 1h -) -SELECT - hourly.timestamp, - hourly.hourly_volume, - daily_total.total_volume, - (hourly.hourly_volume / daily_total.total_volume * 100) as volume_pct, - hourly.hourly_trades, - (hourly.hourly_trades * 100.0 / daily_total.total_trades) as trade_count_pct -FROM hourly -CROSS JOIN daily_total -ORDER BY hourly.timestamp; +max +==== +1241 ``` -**Results:** +You want to combine both results in a single row: -| timestamp | hourly_volume | total_volume | volume_pct | hourly_trades | trade_count_pct | -|-----------|---------------|--------------|------------|---------------|-----------------| -| 00:00 | 45.6 | 916.8 | 4.97% | 1,234 | 4.23% | -| 01:00 | 34.2 | 916.8 | 3.73% | 987 | 3.38% | - -## Multiple Symbol Comparison with Overall Average - -Compare each symbol's volume against the cross-symbol average: - -```questdb-sql demo title="Symbol volumes vs market average" -WITH market_avg AS ( - SELECT - avg(symbol_volume) as avg_volume_per_symbol, - sum(symbol_volume) as total_market_volume - FROM ( - SELECT - symbol, - sum(amount) as symbol_volume - FROM trades - WHERE timestamp >= dateadd('d', -1, now()) - GROUP BY symbol - ) -), -symbol_volumes AS ( - SELECT - symbol, - sum(amount) as volume, - count(*) as trade_count - FROM trades - WHERE timestamp >= dateadd('d', -1, now()) - 
GROUP BY symbol -) -SELECT - sv.symbol, - sv.volume, - sv.trade_count, - ma.avg_volume_per_symbol, - (sv.volume / ma.avg_volume_per_symbol) as vs_avg_ratio, - (sv.volume / ma.total_market_volume * 100) as market_share -FROM symbol_volumes sv -CROSS JOIN market_avg ma -ORDER BY sv.volume DESC -LIMIT 10; ``` - -**Results:** - -| symbol | volume | trade_count | avg_volume_per_symbol | vs_avg_ratio | market_share | -|--------|--------|-------------|-----------------------|--------------|--------------| -| BTC-USDT | 1,234.56 | 45,678 | 234.56 | 5.26 | 45.2% | -| ETH-USDT | 567.89 | 34,567 | 234.56 | 2.42 | 20.8% | - -## Z-Score Anomaly Detection - -Calculate how many standard deviations current values are from the mean: - -```questdb-sql demo title="Anomaly detection with z-scores" -WITH stats AS ( - SELECT - avg(volume_5m) as mean_volume, - stddev(volume_5m) as stddev_volume - FROM ( - SELECT sum(amount) as volume_5m - FROM trades - WHERE timestamp >= dateadd('d', -7, now()) - AND symbol = 'BTC-USDT' - SAMPLE BY 5m - ) -), -recent AS ( - SELECT - timestamp, - sum(amount) as volume - FROM trades - WHERE timestamp >= dateadd('h', -1, now()) - AND symbol = 'BTC-USDT' - SAMPLE BY 5m -) -SELECT - recent.timestamp, - recent.volume, - stats.mean_volume, - stats.stddev_volume, - ((recent.volume - stats.mean_volume) / stats.stddev_volume) as z_score, - CASE - WHEN ABS((recent.volume - stats.mean_volume) / stats.stddev_volume) > 3 THEN 'ANOMALY' - WHEN ABS((recent.volume - stats.mean_volume) / stats.stddev_volume) > 2 THEN 'UNUSUAL' - ELSE 'NORMAL' - END as classification -FROM recent -CROSS JOIN stats -ORDER BY recent.timestamp DESC; +max avg min max_count +======== =========== ======== ========= +61615.43 31598.71891 58402.01 1241 ``` -**Key points:** -- Z-score > 2: Unusual (95th percentile) -- Z-score > 3: Anomaly (99.7th percentile) -- Works for any metric (volume, latency, error rate, etc.) 
-
-## Time-of-Day Comparison
+## Solution: CROSS JOIN
-Compare current hour against historical average for same hour of day:
+A `CROSS JOIN` pairs every row from one side with every row from the other. The `sampled` CTE below reduces the per-second counts to a single row (the busiest second), so cross-joining it against `trades_2024` attaches that `count_sec` value to every row, and the outer aggregation collapses the result into a single row with all the aggregates combined:
-```questdb-sql demo title="Current hour vs historical same-hour average"
-WITH historical_by_hour AS (
-  SELECT
-    hour(timestamp) as hour_of_day,
-    avg(hourly_volume) as avg_volume_this_hour,
-    stddev(hourly_volume) as stddev_volume_this_hour
-  FROM (
-    SELECT
-      timestamp,
-      sum(amount) as hourly_volume
-    FROM trades
-    WHERE timestamp >= dateadd('d', -30, now())
-      AND symbol = 'BTC-USDT'
-    SAMPLE BY 1h
-  )
-  GROUP BY hour_of_day
-),
-current_hour AS (
-  SELECT
-    timestamp,
-    hour(timestamp) as hour_of_day,
-    sum(amount) as volume
-  FROM trades
-  WHERE timestamp IN today()
-    AND symbol = 'BTC-USDT'
-  SAMPLE BY 1h
+```questdb-sql demo title="Combine general and sampled aggregates"
+WITH
+sampled AS (
+  SELECT timestamp, count() as count_sec FROM trades
+  WHERE timestamp IN '2024-08'
+  SAMPLE BY 1s
+  ORDER BY 2 DESC
+  LIMIT 1 -- keep only the top row, i.e. the second with the most events
)
-SELECT
-  current_hour.timestamp,
-  current_hour.volume as current_volume,
-  historical_by_hour.avg_volume_this_hour as historical_avg,
-  ((current_hour.volume - historical_by_hour.avg_volume_this_hour) /
-    historical_by_hour.avg_volume_this_hour * 100) as pct_diff_from_historical
-FROM current_hour
-LEFT JOIN historical_by_hour
-  ON current_hour.hour_of_day = historical_by_hour.hour_of_day
-ORDER BY current_hour.timestamp;
+SELECT max(price), avg(price), min(price), count_sec as max_count
+FROM trades_2024 CROSS JOIN sampled
+WHERE trades_2024.timestamp IN '2024-08';
```
-Note: This uses LEFT JOIN instead of CROSS JOIN because we're matching on hour_of_day.
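+
+An equivalent sketch keeps both sides as explicit one-row CTEs, which makes the shape of the join easier to see (same demo tables as above; the CTE names `general` and `peak` are illustrative):
+
+```questdb-sql
+WITH
+general AS (
+  SELECT max(price) AS max_price, avg(price) AS avg_price, min(price) AS min_price
+  FROM trades_2024
+  WHERE timestamp IN '2024-08'
+), peak AS (
+  SELECT timestamp, count() AS max_count
+  FROM trades
+  WHERE timestamp IN '2024-08'
+  SAMPLE BY 1s
+  ORDER BY max_count DESC
+  LIMIT 1
+)
+SELECT max_price, avg_price, min_price, max_count
+FROM general CROSS JOIN peak;
+```
+
+Because each side of the `CROSS JOIN` is a one-row result, the product is exactly one combined row.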
- ## Grafana Baseline Visualization Format for Grafana with baseline reference line: @@ -360,89 +95,8 @@ ORDER BY timeseries.time; Grafana will plot both series, making it easy to see when current values deviate from baseline. -## Simplification: Single Query Without CTE - -For simple cases, you can inline the general aggregate: - -```sql -SELECT - timestamp, - sum(amount) as volume, - (SELECT avg(sum(amount)) FROM trades WHERE timestamp IN today() SAMPLE BY 1h) as daily_avg -FROM trades -WHERE timestamp IN today() -SAMPLE BY 1h; -``` - -However, CTE with CROSS JOIN is more readable and efficient when you need multiple baseline metrics. - -## Performance Considerations - -**General CTE is calculated once:** - -```sql -WITH general AS ( - SELECT expensive_aggregate FROM large_table -- Calculated ONCE -) -SELECT * FROM timeseries CROSS JOIN general; -- General reused for all rows -``` - -**Filter data in both CTEs:** - -```sql -WITH general AS ( - SELECT avg(value) as baseline - FROM metrics - WHERE timestamp >= dateadd('d', -7, now()) -- Same filter -), -recent AS ( - SELECT timestamp, value - FROM metrics - WHERE timestamp >= dateadd('d', -7, now()) -- Same filter - SAMPLE BY 1h -) -``` - -Both queries benefit from the same timestamp index usage. - -## Alternative: Window Functions - -For running comparisons, window functions can be more appropriate: - -```sql --- CROSS JOIN pattern: Compare against fixed baseline -WITH baseline AS (SELECT avg(value) FROM metrics) -SELECT value, baseline FROM timeseries CROSS JOIN baseline; - --- Window function: Compare against moving average -SELECT - value, - avg(value) OVER (ORDER BY timestamp ROWS BETWEEN 10 PRECEDING AND CURRENT ROW) as moving_avg -FROM timeseries; -``` - -Use CROSS JOIN when you want a **fixed baseline** (e.g., "7-day average"). -Use window functions for **dynamic baselines** (e.g., "10-period moving average"). 
- -:::tip When to Use This Pattern -Use CROSS JOIN with general aggregates when you need: -- Percentage of total calculations -- Baseline comparisons (current vs historical average) -- Context for time-series data (is this value high or low?) -- Z-scores or statistical anomaly detection -- Reference lines in Grafana dashboards -::: - -:::warning CROSS JOIN Behavior -CROSS JOIN creates a cartesian product. This only works efficiently when one side has exactly **one row** (the general aggregates). Never CROSS JOIN two multi-row tables - it will explode your result set. - -Safe: `SELECT * FROM timeseries CROSS JOIN (SELECT avg(...))` ← Second table has 1 row -Dangerous: `SELECT * FROM table1 CROSS JOIN table2` ← Both have many rows -::: - :::info Related Documentation - [CROSS JOIN](/docs/reference/sql/join/#cross-join) -- [Common Table Expressions (WITH)](/docs/reference/sql/with/) - [SAMPLE BY](/docs/reference/sql/select/#sample-by) -- [Window functions (for alternative approaches)](/docs/reference/sql/select/#window-functions) +- [Grafana integration](/docs/third-party-tools/grafana/) ::: diff --git a/documentation/playbook/sql/rows-before-after-value-match.md b/documentation/playbook/sql/advanced/rows-before-after-value-match.md similarity index 100% rename from documentation/playbook/sql/rows-before-after-value-match.md rename to documentation/playbook/sql/advanced/rows-before-after-value-match.md diff --git a/documentation/playbook/sql/advanced/sankey-funnel.md b/documentation/playbook/sql/advanced/sankey-funnel.md index a681ac3bc..da6afe8d4 100644 --- a/documentation/playbook/sql/advanced/sankey-funnel.md +++ b/documentation/playbook/sql/advanced/sankey-funnel.md @@ -1,208 +1,102 @@ --- title: Sankey and Funnel Diagrams sidebar_label: Sankey/funnel diagrams -description: Create flow analysis data for Sankey diagrams and conversion funnels using session-based queries and state transitions +description: Create session-based analytics for Sankey diagrams and 
conversion funnels
---
-Build user journey flow data for Sankey diagrams and conversion funnels by tracking state transitions across sessions. This pattern is essential for visualizing how users navigate through your application, where they drop off, and which paths are most common.
+Build user journey flow data for Sankey diagrams and conversion funnels by sessionizing event data and tracking state transitions.
-## Problem: Track User Flow Through States
+## Problem
-You have event data tracking user actions:
+You want to build a user-flow or Sankey diagram to find out which pages contribute visits to others, and in which proportion. You'd also like to track elapsed time, the number of pages in a single session, entry/exit pages, and similar web-analytics metrics.
-| timestamp | user_id | page |
-|-----------|---------|------|
-| 10:00:00 | user_1 | home |
-| 10:00:15 | user_1 | products |
-| 10:00:45 | user_1 | cart |
-| 10:01:00 | user_1 | checkout |
-| 10:00:05 | user_2 | home |
-| 10:00:20 | user_2 | products |
-| 10:00:30 | user_2 | home |
+The problem is that you only capture a flat table of events, with no concept of a session. For analytics purposes, you start a new session whenever a user's visit is more than 1 hour after their previous one.
-You want to count transitions between states:
-
-| from | to | count |
-|------|----|-------|
-| home | products | 2 |
-| products | cart | 1 |
-| products | home | 1 |
-| cart | checkout | 1 |
-
-This data can be visualized as a Sankey diagram or used for funnel analysis.
- -## Solution: LAG Window Function for State Transitions - -Use LAG to get the previous state for each user, then aggregate transitions: - -```questdb-sql demo title="Count state transitions for Sankey diagram" -WITH transitions AS ( - SELECT - user_id, - page as current_state, - lag(page) OVER (PARTITION BY user_id ORDER BY timestamp) as previous_state, - timestamp - FROM user_events - WHERE timestamp >= dateadd('d', -7, now()) -) -SELECT - previous_state as from_state, - current_state as to_state, - count(*) as transition_count -FROM transitions -WHERE previous_state IS NOT NULL -GROUP BY previous_state, current_state -ORDER BY transition_count DESC; -``` - -**Results:** - -| from_state | to_state | transition_count | -|------------|----------|------------------| -| home | products | 1,245 | -| products | home | 567 | -| products | details | 489 | -| details | cart | 234 | -| cart | checkout | 156 | -| checkout | complete | 134 | - -## How It Works - -### Step 1: Get Previous State with LAG +Your simplified table schema: ```sql -lag(page) OVER (PARTITION BY user_id ORDER BY timestamp) as previous_state +CREATE TABLE events ( + visitor_id SYMBOL, + pathname SYMBOL, + timestamp TIMESTAMP, + metric_name SYMBOL +) TIMESTAMP(timestamp) PARTITION BY MONTH WAL; ``` -For each event, looks back to the previous event for that user: -- `PARTITION BY user_id`: Separate window for each user -- `ORDER BY timestamp`: Previous means earlier in time -- Returns NULL for the first event (no previous state) +## Solution: Session Window Functions -**Example for user_1:** +By combining window functions and `CASE` statements: -| timestamp | page | previous_state | -|-----------|------|----------------| -| 10:00:00 | home | NULL | -| 10:00:15 | products | home | -| 10:00:45 | cart | products | -| 10:01:00 | checkout | cart | +1. Sessionize the data by identifying gaps longer than 1 hour +2. Generate unique session ids for aggregations +3. 
Assign sequence numbers to each hit within a session +4. Assign the session initial timestamp +5. Check next page in the sequence -### Step 2: Filter and Aggregate +With that, you can count page hits for the next page from current, identify elapsed time between hits or since the start of the session, count sessions per user, or power navigation funnels and Sankey diagrams. -```sql -WHERE previous_state IS NOT NULL -GROUP BY previous_state, current_state -``` - -- Remove first events (NULL previous_state) -- Count occurrences of each transition pair -- Order by count to see most common paths - -## Conversion Funnel Analysis - -Calculate conversion rates through a specific funnel: - -```questdb-sql demo title="E-commerce funnel with conversion rates" -WITH user_pages AS ( - SELECT DISTINCT user_id, page - FROM user_events - WHERE timestamp >= dateadd('d', -7, now()) - AND page IN ('home', 'products', 'cart', 'checkout', 'complete') -), -funnel AS ( +```questdb-sql demo title="Sessionize events and track page flows" +WITH PrevEvents AS ( SELECT - count(CASE WHEN page = 'home' THEN 1 END) as step1_home, - count(CASE WHEN page = 'products' THEN 1 END) as step2_products, - count(CASE WHEN page = 'cart' THEN 1 END) as step3_cart, - count(CASE WHEN page = 'checkout' THEN 1 END) as step4_checkout, - count(CASE WHEN page = 'complete' THEN 1 END) as step5_complete - FROM user_pages -) -SELECT - 'Home' as step, step1_home as users, 100.0 as conversion_rate -FROM funnel -UNION ALL -SELECT 'Products', step2_products, (step2_products * 100.0 / step1_home) -FROM funnel -UNION ALL -SELECT 'Cart', step3_cart, (step3_cart * 100.0 / step1_home) -FROM funnel -UNION ALL -SELECT 'Checkout', step4_checkout, (step4_checkout * 100.0 / step1_home) -FROM funnel -UNION ALL -SELECT 'Complete', step5_complete, (step5_complete * 100.0 / step1_home) -FROM funnel; -``` - -**Results:** - -| step | users | conversion_rate | -|------|-------|-----------------| -| Home | 10,000 | 100.00% | -| Products 
| 6,500 | 65.00% | -| Cart | 2,300 | 23.00% | -| Checkout | 1,800 | 18.00% | -| Complete | 1,500 | 15.00% | - -This shows that 85% of users who reach checkout complete the purchase (1,500 / 1,800). - -## Session-Based Flow Analysis - -Group transitions within sessions (defined by inactivity timeout): - -```questdb-sql demo title="Flow analysis within sessions" -WITH session_events AS ( - SELECT - user_id, - page, + visitor_id, + pathname, timestamp, - lag(timestamp) OVER (PARTITION BY user_id ORDER BY timestamp) as prev_timestamp, - SUM(CASE - WHEN timestamp - lag(timestamp) OVER (PARTITION BY user_id ORDER BY timestamp) > 1800000000 - OR lag(timestamp) OVER (PARTITION BY user_id ORDER BY timestamp) IS NULL - THEN 1 - ELSE 0 - END) OVER (PARTITION BY user_id ORDER BY timestamp) as session_id - FROM user_events - WHERE timestamp >= dateadd('d', -7, now()) -), -transitions AS ( - SELECT - user_id, - session_id, - page as current_state, - lag(page) OVER (PARTITION BY user_id, session_id ORDER BY timestamp) as previous_state - FROM session_events + first_value(timestamp::long) OVER ( + PARTITION BY visitor_id ORDER BY timestamp + ROWS 1 PRECEDING EXCLUDE CURRENT ROW + ) AS prev_ts + FROM + events WHERE timestamp > dateadd('d', -7, now()) + AND metric_name = 'page_view' +), VisitorSessions AS ( + SELECT *, + SUM(CASE WHEN datediff('h', timestamp, prev_ts::timestamp)>1 THEN 1 END) + OVER( + PARTITION BY visitor_id + ORDER BY timestamp + ) as local_session_id FROM PrevEvents + +), GlobalSessions AS ( + SELECT visitor_id, pathname, timestamp, prev_ts, + concat(visitor_id, '#', coalesce(local_session_id,0)::int) AS session_id + FROM VisitorSessions +), EventSequences AS ( + SELECT *, row_number() OVER ( + PARTITION BY session_id ORDER BY timestamp + ) as session_sequence, + row_number() OVER ( + PARTITION BY session_id ORDER BY timestamp DESC + ) as reverse_session_sequence, + first_value(timestamp::long) OVER ( + PARTITION BY session_id ORDER BY timestamp + ) as session_ts 
+ FROM GlobalSessions +), EventsFullInfo AS ( + SELECT e1.session_id, e1.session_ts::timestamp as session_ts, e1.visitor_id, + e1.timestamp, e1.pathname, e1.session_sequence, + CASE WHEN e1.session_sequence = 1 THEN true END is_entry_page, + e2.pathname as next_pathname, datediff('T', e1.timestamp, e1.prev_ts::timestamp)::double as elapsed, + e2.reverse_session_sequence, + CASE WHEN e2.reverse_session_sequence = 1 THEN true END is_exit_page + FROM EventSequences e1 + LEFT JOIN EventSequences e2 ON (e1.session_id = e2.session_id) + WHERE e2.session_sequence - e1.session_sequence = 1 ) -SELECT - previous_state as from_state, - current_state as to_state, - count(*) as transition_count, - count(DISTINCT user_id) as unique_users -FROM transitions -WHERE previous_state IS NOT NULL -GROUP BY previous_state, current_state -ORDER BY transition_count DESC; +SELECT * FROM EventsFullInfo; ``` -**Key points:** -- Sessions defined by 30-minute inactivity (1800000000 microseconds) -- Transitions counted within sessions only -- Includes unique user count for each transition - -## Visualizing in Grafana/Plotly +## Visualizing in Grafana Format output for Sankey diagram tools: ```questdb-sql demo title="Sankey diagram data format" WITH transitions AS ( SELECT - page as current_state, - lag(page) OVER (PARTITION BY user_id ORDER BY timestamp) as previous_state - FROM user_events + pathname as current_state, + lag(pathname) OVER (PARTITION BY visitor_id ORDER BY timestamp) as previous_state + FROM events WHERE timestamp >= dateadd('d', -1, now()) + AND metric_name = 'page_view' ) SELECT previous_state as source, @@ -210,9 +104,9 @@ SELECT count(*) as value FROM transitions WHERE previous_state IS NOT NULL - AND previous_state != current_state -- Exclude self-loops + AND previous_state != current_state GROUP BY previous_state, current_state -HAVING count(*) >= 10 -- Minimum flow threshold +HAVING count(*) >= 10 ORDER BY value DESC; ``` @@ -221,207 +115,8 @@ This format works directly 
with: - **D3.js**: Standard Sankey input format - **Grafana Flow plugin**: Source/target/value format -## Multi-Step Path Analysis - -Find most common complete paths (not just transitions): - -```questdb-sql demo title="Most common 3-step user paths" -WITH paths AS ( - SELECT - user_id, - page, - lag(page, 1) OVER (PARTITION BY user_id ORDER BY timestamp) as prev_1, - lag(page, 2) OVER (PARTITION BY user_id ORDER BY timestamp) as prev_2, - timestamp - FROM user_events - WHERE timestamp >= dateadd('d', -7, now()) -) -SELECT - prev_2 || ' → ' || prev_1 || ' → ' || page as path, - count(*) as occurrences, - count(DISTINCT user_id) as unique_users -FROM paths -WHERE prev_2 IS NOT NULL -GROUP BY path -ORDER BY occurrences DESC -LIMIT 20; -``` - -**Results:** - -| path | occurrences | unique_users | -|------|-------------|--------------| -| home → products → details | 1,234 | 987 | -| products → details → cart | 892 | 765 | -| home → products → home | 654 | 543 | -| cart → checkout → complete | 543 | 543 | - -## Filter by Successful Conversions - -Analyze only paths that led to conversion: - -```questdb-sql demo title="Paths of users who converted" -WITH converting_users AS ( - SELECT DISTINCT user_id - FROM user_events - WHERE timestamp >= dateadd('d', -7, now()) - AND page = 'purchase_complete' -), -transitions AS ( - SELECT - e.user_id, - e.page as current_state, - lag(e.page) OVER (PARTITION BY e.user_id ORDER BY e.timestamp) as previous_state - FROM user_events e - INNER JOIN converting_users cu ON e.user_id = cu.user_id - WHERE e.timestamp >= dateadd('d', -7, now()) -) -SELECT - previous_state as from_state, - current_state as to_state, - count(*) as transition_count -FROM transitions -WHERE previous_state IS NOT NULL -GROUP BY previous_state, current_state -ORDER BY transition_count DESC; -``` - -This shows the paths taken by users who successfully completed a purchase. 
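The session-splitting technique used in the flow-analysis query in this guide — start a new session whenever the gap since a visitor's previous event exceeds a timeout — can be sketched outside SQL. This is an illustrative Python sketch, not QuestDB code; the one-hour timeout mirrors the query's `datediff('h', ...) > 1` check, and the `visitor#n` session ids mirror its `concat(visitor_id, '#', ...)` labels.

```python
from datetime import datetime, timedelta

def assign_sessions(events, timeout=timedelta(hours=1)):
    """events: list of (visitor_id, timestamp) sorted by timestamp.
    Returns one session id per event, shaped like '<visitor>#<n>',
    starting a new session when the gap to the visitor's previous
    event exceeds the timeout."""
    last_seen = {}   # visitor -> timestamp of previous event
    counters = {}    # visitor -> current session number
    session_ids = []
    for visitor, ts in events:
        prev = last_seen.get(visitor)
        if prev is None or ts - prev > timeout:
            counters[visitor] = counters.get(visitor, -1) + 1
        last_seen[visitor] = ts
        session_ids.append(f"{visitor}#{counters[visitor]}")
    return session_ids

events = [
    ("a", datetime(2025, 1, 1, 10, 0)),
    ("a", datetime(2025, 1, 1, 10, 20)),  # 20 min gap -> same session
    ("a", datetime(2025, 1, 1, 12, 0)),   # > 1 h gap  -> new session
]
print(assign_sessions(events))  # ['a#0', 'a#0', 'a#1']
```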
- -## Drop-Off Analysis - -Identify where users exit the funnel: - -```questdb-sql demo title="Last page visited before exit" -WITH user_last_page AS ( - SELECT - user_id, - page, - timestamp, - row_number() OVER (PARTITION BY user_id ORDER BY timestamp DESC) as rn - FROM user_events - WHERE timestamp >= dateadd('d', -7, now()) -), -non_converters AS ( - SELECT ulp.user_id, ulp.page as exit_page - FROM user_last_page ulp - WHERE ulp.rn = 1 - AND NOT EXISTS ( - SELECT 1 FROM user_events e - WHERE e.user_id = ulp.user_id - AND e.page = 'purchase_complete' - AND e.timestamp >= dateadd('d', -7, now()) - ) -) -SELECT - exit_page, - count(*) as exit_count, - (count(*) * 100.0 / (SELECT count(*) FROM non_converters)) as exit_percentage -FROM non_converters -GROUP BY exit_page -ORDER BY exit_count DESC; -``` - -**Results:** - -| exit_page | exit_count | exit_percentage | -|-----------|------------|-----------------| -| products | 3,456 | 42.5% | -| details | 1,234 | 15.2% | -| cart | 987 | 12.1% | -| home | 876 | 10.8% | - -Shows that most users who don't convert exit from the products page. 
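The transition queries in this guide pair each page view with the previous one via `lag(pathname) OVER (PARTITION BY visitor_id ORDER BY timestamp)` and then count (from, to) pairs. The same bookkeeping can be sketched in Python for intuition; this is illustrative only, and the page names are made up.

```python
from collections import Counter

def count_transitions(visits):
    """visits: dict of visitor_id -> list of pages in timestamp order.
    Counts (previous_page, current_page) pairs per visitor, excluding
    self-loops, as in the Sankey-format query."""
    transitions = Counter()
    for pages in visits.values():
        for prev, cur in zip(pages, pages[1:]):
            if prev != cur:
                transitions[(prev, cur)] += 1
    return transitions

visits = {
    "v1": ["home", "products", "cart"],
    "v2": ["home", "products"],
}
print(count_transitions(visits).most_common(1))
# [(('home', 'products'), 2)]
```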
- -## Time-Based Flow Analysis - -Analyze how quickly users move through states: - -```questdb-sql demo title="Average time between transitions" -WITH transitions AS ( - SELECT - page as current_state, - lag(page) OVER (PARTITION BY user_id ORDER BY timestamp) as previous_state, - timestamp - lag(timestamp) OVER (PARTITION BY user_id ORDER BY timestamp) as time_diff_micros - FROM user_events - WHERE timestamp >= dateadd('d', -7, now()) -) -SELECT - previous_state as from_state, - current_state as to_state, - count(*) as transition_count, - cast(avg(time_diff_micros) / 1000000 as int) as avg_seconds -FROM transitions -WHERE previous_state IS NOT NULL -GROUP BY previous_state, current_state -HAVING count(*) >= 100 -ORDER BY avg_seconds DESC; -``` - -**Results:** - -| from_state | to_state | transition_count | avg_seconds | -|------------|----------|------------------|-------------| -| cart | checkout | 1,234 | 245 | -| details | cart | 2,345 | 180 | -| products | details | 3,456 | 45 | -| home | products | 4,567 | 12 | - -Shows users spend an average of 4 minutes deciding to checkout from cart. 
- -## Performance Considerations - -**Index on user_id and timestamp:** -```sql --- Ensure table is partitioned by timestamp -CREATE TABLE user_events ( - timestamp TIMESTAMP, - user_id SYMBOL, - page SYMBOL -) TIMESTAMP(timestamp) PARTITION BY DAY; -``` - -**Limit time range:** -```sql -WHERE timestamp >= dateadd('d', -7, now()) -``` - -**Pre-aggregate for dashboards:** -```sql --- Create hourly summary table -CREATE TABLE user_flow_hourly AS -SELECT - timestamp_floor('h', timestamp) as hour, - previous_state, - current_state, - count(*) as transitions -FROM ( - SELECT - timestamp, - page as current_state, - lag(page) OVER (PARTITION BY user_id ORDER BY timestamp) as previous_state - FROM user_events -) -WHERE previous_state IS NOT NULL -GROUP BY hour, previous_state, current_state; -``` - -:::tip When to Use Sankey vs Funnel -- **Sankey diagrams**: Show all possible paths and their volumes (exploratory analysis) -- **Funnel charts**: Show conversion through a specific linear path (monitoring KPIs) -- **Drop-off analysis**: Identify specific pain points where users exit -::: - -:::warning Session Definition -Choose appropriate session timeout based on your use case: -- **E-commerce**: 30 minutes typical -- **Content sites**: 60+ minutes (users may pause to read) -- **Mobile apps**: 5-10 minutes (shorter attention spans) -::: - :::info Related Documentation -- [LAG window function](/docs/reference/function/window/#lag) -- [Window functions overview](/docs/reference/sql/select/#window-functions) -- [PARTITION BY](/docs/reference/sql/select/#partition-by) -- [Session windows pattern](/playbook/sql/time-series/session-windows) +- [Window functions](/docs/reference/sql/select/#window-functions) +- [LAG function](/docs/reference/function/window/#lag) +- [Grafana integration](/docs/third-party-tools/grafana/) ::: diff --git a/documentation/playbook/sql/advanced/unpivot-table.md b/documentation/playbook/sql/advanced/unpivot-table.md index 769bd3204..25e4883bf 100644 --- 
a/documentation/playbook/sql/advanced/unpivot-table.md +++ b/documentation/playbook/sql/advanced/unpivot-table.md @@ -1,10 +1,10 @@ --- -title: UNPIVOT Table Results -sidebar_label: UNPIVOT -description: Convert wide-format data to long format using UNION ALL to transform column-based data into row-based data +title: Unpivoting Query Results +sidebar_label: Unpivoting results +description: Convert wide-format data to long format using UNION ALL --- -Transform wide-format data (multiple columns) into long format (rows) using UNION ALL. This "unpivot" operation is useful for converting column-based data into a row-based format suitable for visualization or further analysis. +Transform wide-format data (multiple columns) into long format (rows) using UNION ALL. ## Problem: Wide Format to Long Format @@ -140,59 +140,6 @@ ORDER BY timestamp, sensor_id, metric; | 10:00:00 | S001 | pressure | 1013.2| | 10:00:00 | S001 | temperature | 22.5 | -## Simplified Syntax (When All Values Present) - -If you know there are no NULL values, skip the filtering: - -```sql -SELECT timestamp, symbol, 'buy' as side, buy_price as price -FROM trades_summary - -UNION ALL - -SELECT timestamp, symbol, 'sell' as side, sell_price as price -FROM trades_summary; -``` - -## Use Cases - -**Grafana visualization:** -```sql --- Convert wide format to Grafana-friendly long format -SELECT - timestamp as time, - metric_name as metric, - value -FROM ( - SELECT timestamp, 'cpu' as metric_name, cpu_usage as value FROM metrics - UNION ALL - SELECT timestamp, 'memory' as metric_name, memory_usage as value FROM metrics - UNION ALL - SELECT timestamp, 'disk' as metric_name, disk_usage as value FROM metrics -) -WHERE value IS NOT NULL; -``` - -**Pivot table to chart:** -```sql --- From crosstab format to plottable format -SELECT month, 'revenue' as metric, revenue as value FROM monthly_stats -UNION ALL -SELECT month, 'costs' as metric, costs as value FROM monthly_stats -UNION ALL -SELECT month, 'profit' as 
metric, profit as value FROM monthly_stats; -``` - -**Multiple symbols analysis:** -```sql --- Stack different symbols as rows -SELECT timestamp, 'BTC-USDT' as symbol, btc_price as price FROM market_data -UNION ALL -SELECT timestamp, 'ETH-USDT' as symbol, eth_price as price FROM market_data -UNION ALL -SELECT timestamp, 'SOL-USDT' as symbol, sol_price as price FROM market_data; -``` - ## Performance Considerations **UNION ALL vs UNION:** @@ -206,91 +153,11 @@ SELECT ... UNION SELECT ... Always use `UNION ALL` for unpivoting unless you specifically need deduplication. -**Index usage:** -- Each SELECT in the UNION can use indexes independently -- Filter before UNION for better performance: - -```sql --- Good: Filter in each SELECT -SELECT timestamp, 'buy' as side, price FROM trades WHERE side = 'buy' -UNION ALL -SELECT timestamp, 'sell' as side, price FROM trades WHERE side = 'sell' - --- Less efficient: Filter after UNION -SELECT * FROM ( - SELECT timestamp, 'buy' as side, price_buy as price FROM trades - UNION ALL - SELECT timestamp, 'sell' as side, price_sell as price FROM trades -) WHERE price > 0 -``` - -## Alternative: Case-Based Approach - -For simple scenarios, use CASE without UNION: - -```sql --- If your source data has a side column already -SELECT - timestamp, - symbol, - side, - CASE - WHEN side = 'buy' THEN buy_price - WHEN side = 'sell' THEN sell_price - END as price -FROM trades -WHERE price IS NOT NULL; -``` - -This works when you have a discriminator column (like `side`) that indicates which price column to use. 
- -## Dynamic Unpivoting - -For tables with many columns, generate UNION queries programmatically: - -```python -# Python example -columns = ['temperature', 'humidity', 'pressure', 'wind_speed'] -queries = [] - -for col in columns: - query = f"SELECT timestamp, sensor_id, '{col}' as metric, {col} as value FROM sensors WHERE {col} IS NOT NULL" - queries.append(query) - -full_query = " UNION ALL ".join(queries) -``` - -## Unpivoting with Metadata - -Include additional information in unpivoted results: - -```sql -WITH source AS ( - SELECT - timestamp, - device_id, - location, - temperature, - humidity - FROM iot_sensors -) -SELECT timestamp, device_id, location, 'temperature' as metric, temperature as value, 'celsius' as unit -FROM source WHERE temperature IS NOT NULL - -UNION ALL - -SELECT timestamp, device_id, location, 'humidity' as metric, humidity as value, 'percent' as unit -FROM source WHERE humidity IS NOT NULL - -ORDER BY timestamp, device_id, metric; -``` - ## Reverse: Pivot (Long to Wide) To go back from long to wide format, use aggregation with CASE: ```sql --- From long format SELECT timestamp, sensor_id, @@ -301,23 +168,7 @@ FROM sensor_readings_long GROUP BY timestamp, sensor_id; ``` -See the [Pivoting](/playbook/sql/pivoting) guide for more details. - -:::tip When to UNPIVOT -Unpivot data when: -- Visualizing multiple metrics on the same chart (Grafana, BI tools) -- Applying the same calculation to multiple columns -- Storing column-based data in a narrow table format -- Preparing data for machine learning (feature columns → feature rows) -::: - -:::warning Performance Impact -UNION ALL creates multiple copies of your data. For very large tables: -- Filter early to reduce dataset size -- Consider if unpivoting is necessary (some tools handle wide format well) -- Use indexes on filtered columns -- Test query performance before using in production -::: +See the [Pivoting](/docs/playbook/sql/advanced/pivot-table/) guide for more details. 
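The reverse pivot shown above — aggregation with a `CASE` per metric — corresponds to grouping long-format rows by key and spreading metric names back into columns. As a hedged illustration (the column and metric names here are invented, not from a real table), the dict analogue in Python:

```python
def pivot_long_to_wide(rows):
    """rows: (timestamp, sensor_id, metric, value) tuples in long format.
    Returns {(timestamp, sensor_id): {metric: value}}, the dict analogue
    of MAX(CASE WHEN metric = '...' THEN value END) per group."""
    wide = {}
    for ts, sensor, metric, value in rows:
        wide.setdefault((ts, sensor), {})[metric] = value
    return wide

rows = [
    ("10:00:00", "S001", "temperature", 22.5),
    ("10:00:00", "S001", "humidity", 45.2),
]
print(pivot_long_to_wide(rows))
# {('10:00:00', 'S001'): {'temperature': 22.5, 'humidity': 45.2}}
```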
:::info Related Documentation - [UNION](/docs/reference/sql/union-except-intersect/) diff --git a/documentation/playbook/sql/finance/rolling-stddev.md b/documentation/playbook/sql/finance/rolling-stddev.md index c42df6cc7..049b84cd2 100644 --- a/documentation/playbook/sql/finance/rolling-stddev.md +++ b/documentation/playbook/sql/finance/rolling-stddev.md @@ -1,244 +1,46 @@ --- title: Rolling Standard Deviation sidebar_label: Rolling std dev -description: Calculate rolling standard deviation for volatility analysis using window functions and variance mathematics +description: Calculate rolling standard deviation using window functions and CTEs --- -Calculate rolling standard deviation to measure price volatility over time. Rolling standard deviation shows how much prices deviate from their moving average, helping identify periods of high and low volatility. This is essential for risk management, option pricing, and volatility-based trading strategies. +Calculate rolling standard deviation to measure price volatility over time. -## Problem: Window Function Limitation +## Problem -You want to calculate standard deviation over a rolling time window, but QuestDB doesn't support `STDDEV` as a window function. However, we can work around this using the mathematical relationship between standard deviation and variance. +You want to calculate the standard deviation over a time window. QuestDB supports `stddev` as an aggregate function, but not as a window function. -## Solution: Calculate Variance Using Window Functions +## Solution -Since standard deviation is the square root of variance, and variance is the average of squared differences from the mean, we can calculate it step by step using CTEs: +The standard deviation can be calculated from the variance, which is the average of the squared differences from the mean.
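The identity used here — variance is the mean of squared deviations from the mean, and standard deviation is its square root — can be checked directly. An illustrative Python sketch with made-up prices, compared against the standard library's population standard deviation:

```python
import math
import statistics

prices = [100.0, 102.0, 101.0, 105.0, 99.5]

mean = sum(prices) / len(prices)
# variance = average of the squared differences from the mean
variance = sum((p - mean) ** 2 for p in prices) / len(prices)
stddev = math.sqrt(variance)

# matches the stdlib's population standard deviation
assert abs(stddev - statistics.pstdev(prices)) < 1e-9
print(round(stddev, 4))  # 1.9494
```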
-```questdb-sql demo title="Calculate 20-period rolling standard deviation" -WITH rolling_avg_cte AS ( - SELECT - timestamp, - symbol, - price, - AVG(price) OVER (PARTITION BY symbol ORDER BY timestamp) AS rolling_avg - FROM trades - WHERE timestamp IN yesterday() - AND symbol = 'BTC-USDT' -), -variance_cte AS ( - SELECT - timestamp, - symbol, - price, - rolling_avg, - AVG(POWER(price - rolling_avg, 2)) - OVER (PARTITION BY symbol ORDER BY timestamp) AS rolling_variance - FROM rolling_avg_cte -) -SELECT - timestamp, - symbol, - price, - round(rolling_avg, 2) AS rolling_avg, - round(rolling_variance, 4) AS rolling_variance, - round(SQRT(rolling_variance), 2) AS rolling_stddev -FROM variance_cte; -``` - -This query: -1. Calculates the rolling average (mean) of prices -2. Computes the variance as the average of squared differences from the mean -3. Takes the square root of variance to get standard deviation - -## How It Works - -The mathematical relationship used is: - -``` -Variance(X) = E[(X - μ)²] - = Average of squared differences from mean - -StdDev(X) = √Variance(X) -``` - -Where: -- `X` = price values -- `μ` = rolling average (mean) -- `E[...]` = expected value (average) - -Breaking down the calculation: -1. **First CTE** (`rolling_avg_cte`): Calculates running average using `AVG() OVER ()` -2. **Second CTE** (`variance_cte`): For each price, calculates `(price - rolling_avg)²`, then averages these squared differences using another window function -3. **Final query**: Applies `SQRT()` to variance to get standard deviation - -### Window Frame Defaults - -When you don't specify a frame clause (like `ROWS BETWEEN`), QuestDB defaults to: -```sql -ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW -``` - -This calculates from the start of the partition to the current row, giving you an expanding window. For a fixed rolling window, specify the frame explicitly. 
- -## Fixed Rolling Window - -For a true rolling window (e.g., last 20 periods), specify the frame clause: - -```questdb-sql demo title="20-period rolling standard deviation with fixed window" -WITH rolling_avg_cte AS ( - SELECT - timestamp, - symbol, - price, - AVG(price) OVER ( - PARTITION BY symbol - ORDER BY timestamp - ROWS BETWEEN 19 PRECEDING AND CURRENT ROW - ) AS rolling_avg - FROM trades - WHERE timestamp IN yesterday() - AND symbol = 'BTC-USDT' -), -variance_cte AS ( - SELECT - timestamp, - symbol, - price, - rolling_avg, - AVG(POWER(price - rolling_avg, 2)) OVER ( - PARTITION BY symbol - ORDER BY timestamp - ROWS BETWEEN 19 PRECEDING AND CURRENT ROW - ) AS rolling_variance - FROM rolling_avg_cte -) -SELECT - timestamp, - symbol, - price, - round(rolling_avg, 2) AS rolling_avg, - round(SQRT(rolling_variance), 2) AS rolling_stddev -FROM variance_cte; -``` - -This calculates standard deviation over exactly the last 20 rows (19 preceding + current), providing a consistent window size throughout. 
- -## Adapting the Query +In general we could write it in SQL like this: -**Different window sizes:** ```sql --- 10-period rolling stddev (change 19 to 9) -ROWS BETWEEN 9 PRECEDING AND CURRENT ROW - --- 50-period rolling stddev (change 19 to 49) -ROWS BETWEEN 49 PRECEDING AND CURRENT ROW - --- 200-period rolling stddev (change 19 to 199) -ROWS BETWEEN 199 PRECEDING AND CURRENT ROW -``` - -**Time-based windows instead of row-based:** -```questdb-sql demo title="Rolling stddev over 1-hour time window" -WITH rolling_avg_cte AS ( - SELECT - timestamp, - symbol, - price, - AVG(price) OVER ( - PARTITION BY symbol - ORDER BY timestamp - RANGE BETWEEN 1 HOUR PRECEDING AND CURRENT ROW - ) AS rolling_avg - FROM trades - WHERE timestamp IN yesterday() - AND symbol = 'BTC-USDT' -), -variance_cte AS ( - SELECT - timestamp, - symbol, - price, - rolling_avg, - AVG(POWER(price - rolling_avg, 2)) OVER ( - PARTITION BY symbol - ORDER BY timestamp - RANGE BETWEEN 1 HOUR PRECEDING AND CURRENT ROW - ) AS rolling_variance - FROM rolling_avg_cte -) SELECT - timestamp, symbol, price, - round(rolling_avg, 2) AS rolling_avg, - round(SQRT(rolling_variance), 2) AS rolling_stddev -FROM variance_cte; + AVG(price) OVER (PARTITION BY symbol ORDER BY timestamp ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS rolling_mean, + SQRT(AVG(POWER(price - AVG(price) OVER (PARTITION BY symbol ORDER BY timestamp ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW), 2)) + OVER (PARTITION BY symbol ORDER BY timestamp ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)) AS rolling_stddev +FROM + trades +WHERE timestamp IN yesterday() ``` -**With OHLC candles:** -```questdb-sql demo title="Rolling stddev of candle closes" -WITH OHLC AS ( - SELECT - timestamp, - symbol, - first(price) AS open, - last(price) AS close, - min(price) AS low, - max(price) AS high - FROM trades - WHERE symbol = 'BTC-USDT' - AND timestamp IN yesterday() - SAMPLE BY 15m -), -rolling_avg_cte AS ( - SELECT - timestamp, - symbol, - close, - 
AVG(close) OVER ( - PARTITION BY symbol - ORDER BY timestamp - ROWS BETWEEN 19 PRECEDING AND CURRENT ROW - ) AS rolling_avg - FROM OHLC -), -variance_cte AS ( - SELECT - timestamp, - symbol, - close, - rolling_avg, - AVG(POWER(close - rolling_avg, 2)) OVER ( - PARTITION BY symbol - ORDER BY timestamp - ROWS BETWEEN 19 PRECEDING AND CURRENT ROW - ) AS rolling_variance - FROM rolling_avg_cte -) -SELECT - timestamp, - symbol, - close, - round(rolling_avg, 2) AS sma_20, - round(SQRT(rolling_variance), 2) AS stddev_20 -FROM variance_cte; -``` +But in QuestDB we cannot do any operations on the return value of a window function, so we need to do this using CTEs: -**Multiple symbols:** -```questdb-sql demo title="Rolling stddev for multiple symbols" +```questdb-sql demo title="Calculate rolling standard deviation" WITH rolling_avg_cte AS ( SELECT timestamp, symbol, price, - AVG(price) OVER ( - PARTITION BY symbol - ORDER BY timestamp - ROWS BETWEEN 19 PRECEDING AND CURRENT ROW - ) AS rolling_avg - FROM trades - WHERE timestamp IN yesterday() - AND symbol IN ('BTC-USDT', 'ETH-USDT', 'SOL-USDT') + AVG(price) OVER (PARTITION BY symbol ORDER BY timestamp) AS rolling_avg + FROM + trades + WHERE + timestamp IN yesterday() ), variance_cte AS ( SELECT @@ -246,76 +48,26 @@ variance_cte AS ( symbol, price, rolling_avg, - AVG(POWER(price - rolling_avg, 2)) OVER ( - PARTITION BY symbol - ORDER BY timestamp - ROWS BETWEEN 19 PRECEDING AND CURRENT ROW - ) AS rolling_variance - FROM rolling_avg_cte + AVG(POWER(price - rolling_avg, 2)) OVER (PARTITION BY symbol ORDER BY timestamp) AS rolling_variance + FROM + rolling_avg_cte ) -SELECT - timestamp, - symbol, - round(SQRT(rolling_variance), 2) AS rolling_stddev -FROM variance_cte -ORDER BY symbol, timestamp; -``` - -## Calculating Annualized Volatility - -For option pricing and risk management, convert rolling standard deviation to annualized volatility: - -```sql --- Assuming daily returns, multiply by sqrt(252) for annual volatility 
-round(SQRT(rolling_variance) * SQRT(252), 4) AS annualized_volatility_pct - --- For intraday data, adjust the multiplier: --- 1-minute bars: SQRT(252 * 390) -- 390 trading minutes per day --- 5-minute bars: SQRT(252 * 78) --- 1-hour bars: SQRT(252 * 6.5) -``` - -## Combining with Bollinger Bands - -Rolling standard deviation is the foundation for Bollinger Bands: - -```sql SELECT timestamp, symbol, price, - rolling_avg AS middle_band, - rolling_avg + (2 * SQRT(rolling_variance)) AS upper_band, - rolling_avg - (2 * SQRT(rolling_variance)) AS lower_band -FROM variance_cte; + rolling_avg, + rolling_variance, + SQRT(rolling_variance) AS rolling_stddev +FROM + variance_cte; ``` -:::tip Volatility Analysis Applications -- **Risk management**: Higher standard deviation indicates higher risk/volatility -- **Position sizing**: Adjust position sizes based on current volatility levels -- **Option pricing**: Volatility is a key input for option valuation models -- **Volatility targeting**: Maintain constant portfolio risk by adjusting to current volatility -- **Regime detection**: Identify transitions between high and low volatility regimes -::: - -:::tip Interpretation -- **High stddev**: Large price swings, high uncertainty, potentially higher risk and opportunity -- **Low stddev**: Stable prices, low uncertainty, often precedes larger moves (volatility compression) -- **Expanding stddev**: Increasing volatility, trend acceleration or market stress -- **Contracting stddev**: Decreasing volatility, consolidation phase -::: - -:::warning Performance Considerations -Calculating rolling standard deviation requires multiple passes over the data (once for average, once for variance). 
For very large datasets, consider: -- Filtering by timestamp range first -- Using larger time intervals (SAMPLE BY) -- Calculating on aggregated OHLC data rather than tick data -::: +The first CTE computes the rolling average, the second derives the rolling variance from it, and the final SELECT applies `SQRT` to obtain the standard deviation. :::info Related Documentation - [Window functions](/docs/reference/sql/select/#window-functions) - [AVG window function](/docs/reference/function/window/#avg) - [POWER function](/docs/reference/function/numeric/#power) - [SQRT function](/docs/reference/function/numeric/#sqrt) -- [Window frame clauses](/docs/reference/sql/select/#frame-clause) ::: diff --git a/documentation/playbook/sql/finance/volume-profile.md b/documentation/playbook/sql/finance/volume-profile.md index 1b69db9f8..69c750968 100644 --- a/documentation/playbook/sql/finance/volume-profile.md +++ b/documentation/playbook/sql/finance/volume-profile.md @@ -1,20 +1,16 @@ --- title: Volume Profile sidebar_label: Volume profile -description: Calculate volume profile to identify key price levels with high trading activity for support and resistance analysis +description: Calculate volume profile by grouping trades into price bins --- -Calculate volume profile to identify price levels where significant trading volume occurred. Volume profile shows the distribution of trading activity across different price levels, helping identify strong support/resistance zones, value areas, and potential breakout levels. +Calculate volume profile to show the distribution of trading volume across different price levels. -## Problem: Distribute Volume Across Price Levels +## Solution -You want to aggregate all trades into price bins and see the total volume traded at each price level. This reveals where most trading activity occurred during a specific period, which often indicates important price levels for future trading.
+Group trades into price bins using `FLOOR` and a tick size parameter: -## Solution: Use FLOOR to Create Price Bins - -Group trades into price bins using `FLOOR` with a tick size parameter, then sum the volume for each bin: - -```questdb-sql demo title="Calculate volume profile with $1 tick size" +```questdb-sql demo title="Calculate volume profile with fixed tick size" DECLARE @tick_size := 1.0 SELECT floor(price / @tick_size) * @tick_size AS price_bin, @@ -25,159 +21,34 @@ WHERE symbol = 'BTC-USDT' ORDER BY price_bin; ``` -**Results:** - -| price_bin | volume | -|-----------|-----------| -| 61000.0 | 12.45 | -| 61001.0 | 8.23 | -| 61002.0 | 15.67 | -| 61003.0 | 23.89 | -| 61004.0 | 11.34 | -| ... | ... | - -Each row shows the total volume traded within that price bin during the period. - -## How It Works - -The volume profile calculation uses: - -1. **`floor(price / @tick_size) * @tick_size`**: Rounds each trade's price down to the nearest tick size, creating discrete price bins -2. **`SUM(amount)`**: Aggregates all volume that occurred within each price bin -3. **Implicit GROUP BY**: QuestDB automatically groups by all non-aggregated columns (price_bin) - -### Understanding Tick Size +Since QuestDB does an implicit GROUP BY on all non-aggregated columns, you can omit the explicit GROUP BY clause. -The `@tick_size` parameter controls the granularity of your price bins: -- **Small tick size** (e.g., 0.01): Very detailed profile with many bins - useful for intraday analysis -- **Large tick size** (e.g., 100): Broader view with fewer bins - useful for longer-term patterns -- **Dynamic tick size**: Adjust based on the asset's typical price range +## Dynamic Tick Size -## Dynamic Tick Size for Consistent Bins - -For assets with different price ranges, a fixed tick size may produce too many or too few bins. 
This query dynamically calculates the tick size to always produce approximately 50 bins: +For consistent histograms across different price ranges, calculate the tick size dynamically to always produce approximately 50 bins: ```questdb-sql demo title="Volume profile with dynamic 50-bin distribution" WITH raw_data AS ( - SELECT price, amount FROM trades - WHERE symbol = 'BTC-USDT' AND timestamp IN today() + SELECT price, amount + FROM trades + WHERE symbol = 'BTC-USDT' AND timestamp IN today() ), tick_size AS ( - SELECT (max(price) - min(price)) / 49 as tick_size FROM raw_data + SELECT (max(price) - min(price)) / 49 as tick_size + FROM raw_data ) SELECT - floor(price / tick_size) * tick_size AS price_bin, - round(SUM(amount), 2) AS volume -FROM raw_data CROSS JOIN tick_size -ORDER BY price_bin; -``` - -This query: -1. Finds the maximum and minimum prices in the dataset -2. Divides the price range by 49 (to create 50 bins) -3. Uses `CROSS JOIN` to apply the calculated tick size to every row -4. Groups trades into evenly-distributed price bins - -The result is a volume profile with approximately 50 bars regardless of the asset's price range or volatility. 
- -## Adapting the Query - -**Different time periods:** -```sql --- Specific date -WHERE timestamp IN '2024-09-05' - --- Last hour -WHERE timestamp >= dateadd('h', -1, now()) - --- Last week -WHERE timestamp >= dateadd('w', -1, now()) - --- Between specific times -WHERE timestamp BETWEEN '2024-09-05T09:30:00' AND '2024-09-05T16:00:00' -``` - -**Multiple symbols:** -```questdb-sql demo title="Volume profile for multiple symbols" -DECLARE @tick_size := 1.0 -SELECT - symbol, - floor(price / @tick_size) * @tick_size AS price_bin, + floor(price / tick_size) * tick_size AS price_bin, round(SUM(amount), 2) AS volume -FROM trades -WHERE symbol IN ('BTC-USDT', 'ETH-USDT') - AND timestamp IN today() -ORDER BY symbol, price_bin; -``` - -**Filter by minimum volume threshold:** -```sql --- Only show price levels with significant volume -DECLARE @tick_size := 1.0 -SELECT - floor(price / @tick_size) * @tick_size AS price_bin, - round(SUM(amount), 2) AS volume -FROM trades -WHERE symbol = 'BTC-USDT' - AND timestamp IN today() -HAVING SUM(amount) > 10 -- Only bins with volume > 10 -ORDER BY price_bin; -``` - -**Show top N price levels by volume:** -```questdb-sql demo title="Top 10 price levels by volume" -DECLARE @tick_size := 1.0 -SELECT - floor(price / @tick_size) * @tick_size AS price_bin, - round(SUM(amount), 2) AS volume -FROM trades -WHERE symbol = 'BTC-USDT' - AND timestamp IN today() -ORDER BY volume DESC -LIMIT 10; -``` - -## Interpreting Volume Profile - -**Point of Control (POC):** -The price level with the highest volume is called the Point of Control. This is typically the fairest price where most participants agreed to trade, and often acts as a strong magnet for price. 
- -```sql --- Find the POC (price with highest volume) -DECLARE @tick_size := 1.0 -SELECT - floor(price / @tick_size) * @tick_size AS poc_price, - round(SUM(amount), 2) AS poc_volume -FROM trades -WHERE symbol = 'BTC-USDT' - AND timestamp IN today() -ORDER BY poc_volume DESC -LIMIT 1; +FROM raw_data CROSS JOIN tick_size +ORDER BY 1; ``` -**Value Area:** -The price range where approximately 70% of the volume traded. Prices outside this area are considered "low volume" zones where price tends to move quickly. - -**High Volume Nodes (HVN):** -Price levels with significantly higher volume than surrounding levels. These act as strong support or resistance. - -**Low Volume Nodes (LVN):** -Price levels with minimal volume. Price often moves quickly through these zones. - -:::tip Trading Applications -- **Support/Resistance**: High volume nodes indicate strong support or resistance levels -- **Value Area**: Price tends to return to high-volume areas (mean reversion opportunity) -- **Breakouts**: Low volume nodes above/below current price suggest potential quick moves if broken -- **Acceptance**: Sustained trading at a new price level builds volume profile and establishes new value -::: - -:::tip Visualization -Volume profile is best visualized as a horizontal histogram on a price chart, showing volume distribution across price levels. This can be created in Grafana or other charting tools by rotating the volume axis. -::: +This produces a histogram with at most 50 buckets. If the price range for the interval is wide enough and trades are spread across enough distinct prices, you will get the full 50 buckets; if the price range is too narrow, or some buckets contain no trades, you may get fewer than 50.
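The bucket behaviour described above can be sketched in Python: compute a tick size from the price range, floor each price onto a bin edge, and observe that the number of distinct bins never exceeds 50. This is an illustrative sketch with synthetic trades, not QuestDB code; it assumes at least two distinct prices so the tick size is non-zero.

```python
import math

def volume_profile(trades, bins=50):
    """trades: list of (price, amount) pairs.
    Mirrors floor(price / tick_size) * tick_size with
    tick_size = (max(price) - min(price)) / (bins - 1)."""
    prices = [p for p, _ in trades]
    tick = (max(prices) - min(prices)) / (bins - 1)
    profile = {}
    for price, amount in trades:
        bin_edge = math.floor(price / tick) * tick
        profile[bin_edge] = profile.get(bin_edge, 0.0) + amount
    return profile

# synthetic trades spread over a narrow price range
trades = [(61000 + i * 0.37, 1.0) for i in range(500)]
profile = volume_profile(trades)
assert len(profile) <= 50  # never more than 50 buckets
print(len(profile))
```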
:::info Related Documentation - [FLOOR function](/docs/reference/function/numeric/#floor) - [SUM aggregate](/docs/reference/function/aggregation/#sum) - [DECLARE variables](/docs/reference/sql/declare/) -- [GROUP BY (implicit)](/docs/reference/sql/select/#implicit-group-by) +- [CROSS JOIN](/docs/reference/sql/join/#cross-join) ::: diff --git a/documentation/playbook/sql/finance/volume-spike.md b/documentation/playbook/sql/finance/volume-spike.md index e468f710e..841981285 100644 --- a/documentation/playbook/sql/finance/volume-spike.md +++ b/documentation/playbook/sql/finance/volume-spike.md @@ -1,21 +1,23 @@ --- title: Volume Spike Detection sidebar_label: Volume spikes -description: Detect volume spikes by comparing current volume against recent historical volume using LAG window function +description: Detect volume spikes by comparing current volume against previous volume using LAG --- -Detect volume spikes by comparing current trading volume against recent historical patterns. Volume spikes often precede significant price moves and can signal accumulation, distribution, or the start of new trends. This pattern helps identify unusual trading activity that may warrant attention. +Detect volume spikes by comparing current trading volume against the previous candle's volume. -## Problem: Flag Abnormal Volume +## Problem -You have aggregated candle data and want to flag trades where volume is significantly higher than recent activity. For this example, a "spike" is defined as volume exceeding twice the previous candle's volume for the same symbol. +You have candles aggregated at 30-second intervals, and you want to flag a candle as 'spike' when its volume is greater than twice the previous candle's volume for the same symbol. Otherwise it should display 'normal'.
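The spike rule itself fits in a few lines. A hedged Python sketch (the `flag_spikes` helper and the volumes are made up for illustration):

```python
def flag_spikes(volumes):
    # Flag a candle as 'spike' when its volume exceeds twice the previous
    # candle's volume; the first candle has no previous value, so it is
    # flagged 'normal', mirroring how CASE treats a NULL comparison.
    flags = []
    prev = None
    for volume in volumes:
        is_spike = prev is not None and volume > 2 * prev
        flags.append('spike' if is_spike else 'normal')
        prev = volume
    return flags

flags = flag_spikes([10.5, 9.8, 25.6, 11.2])
# 25.6 > 2 * 9.8, so only the third candle is flagged as a spike
```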
-## Solution: Use LAG to Access Previous Volume +## Solution Use the `LAG` window function to retrieve the previous candle's volume, then compare with a `CASE` statement: ```questdb-sql demo title="Detect volume spikes exceeding 2x previous volume" DECLARE + @anchor_date := timestamp_floor('30s', now()), + @start_date := dateadd('h', -7, @anchor_date), @symbol := 'BTC-USDT' WITH candles AS ( SELECT @@ -23,7 +25,7 @@ WITH candles AS ( symbol, sum(amount) AS volume FROM trades - WHERE timestamp >= dateadd('h', -7, now()) + WHERE timestamp >= @start_date AND symbol = @symbol SAMPLE BY 30s ), @@ -36,241 +38,16 @@ prev_volumes AS ( FROM candles ) SELECT - timestamp, - symbol, - volume, - prev_volume, + *, CASE WHEN volume > 2 * prev_volume THEN 'spike' ELSE 'normal' END AS spike_flag -FROM prev_volumes -WHERE prev_volume IS NOT NULL; +FROM prev_volumes; ``` -**Results:** - -| timestamp | symbol | volume | prev_volume | spike_flag | -|------------------------------|-----------|--------|-------------|------------| -| 2024-01-15T10:00:30.000000Z | BTC-USDT | 10.5 | 12.3 | normal | -| 2024-01-15T10:01:00.000000Z | BTC-USDT | 9.8 | 10.5 | normal | -| 2024-01-15T10:01:30.000000Z | BTC-USDT | 25.6 | 9.8 | spike | -| 2024-01-15T10:02:00.000000Z | BTC-USDT | 11.2 | 25.6 | normal | -| 2024-01-15T10:02:30.000000Z | BTC-USDT | 8.9 | 11.2 | normal | - -The spike at 10:01:30 shows volume of 25.6, which is more than double the previous volume of 9.8. - -## How It Works - -The query uses a multi-step approach: - -1. **Aggregate to candles**: Use `SAMPLE BY` to create 30-second candles with volume totals -2. **Access previous value**: `LAG(volume) OVER (PARTITION BY symbol ORDER BY timestamp)` retrieves volume from the previous candle -3. **Compare and flag**: `CASE` statement checks if current volume exceeds the threshold (2× previous) -4. 
**Filter nulls**: The first candle has no previous value, so we filter it out with `WHERE prev_volume IS NOT NULL` - -### Understanding LAG - -`LAG(column, offset)` accesses the value from a previous row: -- **Without offset** (or offset=1): Gets the immediately previous row -- **With PARTITION BY**: Resets for each group (symbol in this case) -- **Returns NULL**: For the first row in each partition (no previous value exists) - -## Alternative: Compare Against Moving Average - -Instead of comparing against the previous single candle, you can compare against a moving average to smooth out noise: - -```questdb-sql demo title="Detect spikes exceeding 2x the 10-period moving average" -DECLARE - @symbol := 'BTC-USDT' -WITH candles AS ( - SELECT - timestamp, - symbol, - sum(amount) AS volume - FROM trades - WHERE timestamp >= dateadd('h', -7, now()) - AND symbol = @symbol - SAMPLE BY 30s -), -moving_avg AS ( - SELECT - timestamp, - symbol, - volume, - AVG(volume) OVER ( - PARTITION BY symbol - ORDER BY timestamp - ROWS BETWEEN 9 PRECEDING AND 1 PRECEDING - ) AS avg_volume_10 - FROM candles -) -SELECT - timestamp, - symbol, - volume, - round(avg_volume_10, 2) AS avg_volume_10, - CASE - WHEN volume > 2 * avg_volume_10 THEN 'spike' - ELSE 'normal' - END AS spike_flag -FROM moving_avg -WHERE avg_volume_10 IS NOT NULL; -``` - -This approach: -- Calculates the 10-period moving average of volume (excluding current candle) -- Compares current volume against this average -- Provides more robust spike detection by smoothing out single-candle anomalies - -## Adapting the Query - -**Different spike thresholds:** -```sql --- 50% increase (1.5x) -WHEN volume > 1.5 * prev_volume THEN 'spike' - --- 3x increase (300%) -WHEN volume > 3 * prev_volume THEN 'spike' - --- Multiple levels -CASE - WHEN volume > 3 * prev_volume THEN 'extreme_spike' - WHEN volume > 2 * prev_volume THEN 'spike' - WHEN volume > 1.5 * prev_volume THEN 'elevated' - ELSE 'normal' -END AS spike_flag -``` - -**Different 
time intervals:** -```sql --- 1-minute candles -SAMPLE BY 1m - --- 5-minute candles -SAMPLE BY 5m - --- 1-hour candles -SAMPLE BY 1h -``` - -**Multiple symbols:** -```questdb-sql demo title="Volume spikes across multiple symbols" -WITH candles AS ( - SELECT - timestamp, - symbol, - sum(amount) AS volume - FROM trades - WHERE timestamp >= dateadd('h', -7, now()) - SAMPLE BY 30s -), -prev_volumes AS ( - SELECT - timestamp, - symbol, - volume, - LAG(volume) OVER (PARTITION BY symbol ORDER BY timestamp) AS prev_volume - FROM candles -) -SELECT - timestamp, - symbol, - volume, - prev_volume, - CASE - WHEN volume > 2 * prev_volume THEN 'spike' - ELSE 'normal' - END AS spike_flag -FROM prev_volumes -WHERE prev_volume IS NOT NULL - AND volume > 2 * prev_volume -- Only show spikes -ORDER BY timestamp DESC -LIMIT 20; -``` - -**Include price change alongside volume:** -```questdb-sql demo title="Volume spikes with price movement" -DECLARE - @symbol := 'BTC-USDT' -WITH candles AS ( - SELECT - timestamp, - symbol, - first(price) AS open, - last(price) AS close, - sum(amount) AS volume - FROM trades - WHERE timestamp >= dateadd('h', -7, now()) - AND symbol = @symbol - SAMPLE BY 30s -), -with_lags AS ( - SELECT - timestamp, - symbol, - open, - close, - ((close - open) / open) * 100 AS price_change_pct, - volume, - LAG(volume) OVER (PARTITION BY symbol ORDER BY timestamp) AS prev_volume - FROM candles -) -SELECT - timestamp, - symbol, - round(price_change_pct, 2) AS price_change_pct, - volume, - prev_volume, - CASE - WHEN volume > 2 * prev_volume THEN 'spike' - ELSE 'normal' - END AS spike_flag -FROM with_lags -WHERE prev_volume IS NOT NULL; -``` - -## Combining Volume and Price Analysis - -Volume spikes are most meaningful when analyzed with price action: - -```sql --- Volume spike with price increase (potential breakout) -CASE - WHEN volume > 2 * prev_volume AND price_change_pct > 1 THEN 'bullish_spike' - WHEN volume > 2 * prev_volume AND price_change_pct < -1 THEN 
'bearish_spike' - WHEN volume > 2 * prev_volume THEN 'neutral_spike' - ELSE 'normal' -END AS spike_type -``` - -:::tip Trading Signals -- **Breakout confirmation**: Volume spikes during breakouts confirm strength and reduce false breakout risk -- **Reversal warning**: Volume spikes at trend extremes often signal exhaustion and potential reversals -- **Distribution**: High volume with minimal price change can indicate institutional distribution -- **Accumulation**: Volume spikes on dips can signal smart money accumulation -::: - -:::tip Alert Configuration -Set up alerts for volume spikes to catch important market events: -- **Threshold**: Start with 2-3x average volume -- **Time frame**: Match to your trading style (1m for scalping, 1h for swing trading) -- **Confirmation**: Combine with price movement or technical levels for better signals -::: - -:::warning False Positives -Volume spikes can occur due to: -- Market open/close times -- News releases or economic data -- Rollover periods for futures -- Technical glitches or flash crashes - -Always confirm with price action and broader market context. 
-::: - :::info Related Documentation - [LAG window function](/docs/reference/function/window/#lag) -- [AVG window function](/docs/reference/function/window/#avg) - [SAMPLE BY](/docs/reference/sql/select/#sample-by) - [CASE expressions](/docs/reference/sql/case/) ::: diff --git a/documentation/playbook/sql/time-series/epoch-timestamps.md b/documentation/playbook/sql/time-series/epoch-timestamps.md index 69c93811e..fa4e11d0f 100644 --- a/documentation/playbook/sql/time-series/epoch-timestamps.md +++ b/documentation/playbook/sql/time-series/epoch-timestamps.md @@ -1,282 +1,30 @@ --- title: Query with Epoch Timestamps sidebar_label: Epoch timestamps -description: Use epoch timestamps in milliseconds or microseconds for timestamp filtering and comparisons +description: Use epoch timestamps for timestamp filtering in QuestDB --- -Query QuestDB using epoch timestamps (Unix time) in milliseconds, microseconds, or nanoseconds. This is useful when working with systems that represent time as integers rather than timestamp strings. +Query using epoch timestamps instead of timestamp literals. -## Problem: Epoch Time from External Systems +## Problem -Your application or API provides timestamps as epoch integers: -- JavaScript: `1746552420000` (milliseconds since 1970-01-01) -- Python time(): `1746552420.123456` (seconds with decimals) -- Go/Java: `1746552420000000` (microseconds) +You want to query data using an epoch time interval rather than using timestamp literals or timestamp_ns data types. -You need to query QuestDB using these values. +## Solution -## Solution: Use Epoch Directly in WHERE Clause +Use epoch values directly in your WHERE clause. 
QuestDB expects microseconds by default for `timestamp` columns: -QuestDB stores timestamps as microseconds internally and accepts epoch values in timestamp comparisons: - -```questdb-sql demo title="Query with epoch milliseconds" -SELECT * FROM trades -WHERE timestamp BETWEEN 1746552420000000 AND 1746811620000000; -``` - -**Important:** QuestDB expects **microseconds** by default. Multiply milliseconds by 1000. - -## Understanding QuestDB Timestamp Precision - -QuestDB uses **microseconds** as its default timestamp precision: - -| Unit | Example | Multiply by | -|------|---------|-------------| -| Seconds | `1746552420` | × 1,000,000 | -| Milliseconds | `1746552420000` | × 1,000 | -| Microseconds | `1746552420000000` | × 1 (native) | -| Nanoseconds | `1746552420000000000` | ÷ 1000 (for `timestamp_ns` type only) | - -**Converting to microseconds:** -```sql --- From milliseconds (JavaScript, most APIs) -SELECT * FROM trades -WHERE timestamp >= 1746552420000 * 1000; - --- From seconds (Unix timestamp) -SELECT * FROM trades -WHERE timestamp >= 1746552420 * 1000000; -``` - -## Epoch to Timestamp Conversion - -Convert epoch values to readable timestamps: - -```questdb-sql demo title="Convert epoch to timestamp for display" -SELECT - timestamp, - cast(timestamp AS long) as epoch_micros, - cast(timestamp AS long) / 1000 as epoch_millis, - cast(timestamp AS long) / 1000000 as epoch_seconds +```questdb-sql demo title="Query with epoch microseconds" +SELECT * FROM trades -LIMIT 5; -``` - -**Results:** - -| timestamp | epoch_micros | epoch_millis | epoch_seconds | -|-----------|--------------|--------------|---------------| -| 2025-01-15T10:30:45.123456Z | 1737456645123456 | 1737456645123 | 1737456645 | - -## Timestamp to Epoch Conversion - -Convert timestamp strings to epoch values: - -```questdb-sql demo title="Convert timestamp string to epoch" -SELECT - cast('2025-01-15T10:30:45.123Z' AS timestamp) as ts, - cast(cast('2025-01-15T10:30:45.123Z' AS timestamp) AS long) as 
epoch_micros, - cast(cast('2025-01-15T10:30:45.123Z' AS timestamp) AS long) / 1000 as epoch_millis -``` - -## Working with Milliseconds from JavaScript - -JavaScript `Date.now()` returns milliseconds. Convert for QuestDB: - -**JavaScript:** -```javascript -const now = Date.now(); // e.g., 1746552420000 -const queryStart = now - (24 * 60 * 60 * 1000); // 24 hours ago - -// Query QuestDB (multiply by 1000 for microseconds) -const query = ` - SELECT * FROM trades - WHERE timestamp >= ${queryStart * 1000} - AND timestamp <= ${now * 1000} -`; -``` - -**Python:** -```python -import time - -now_seconds = time.time() # e.g., 1746552420.123456 -now_micros = int(now_seconds * 1_000_000) - -query = f""" - SELECT * FROM trades - WHERE timestamp >= {now_micros - 86400000000} - AND timestamp <= {now_micros} -""" -``` - -## Comparative Queries - -**Using timestamp strings:** -```sql -SELECT * FROM trades -WHERE timestamp BETWEEN '2025-01-15T00:00:00' AND '2025-01-16T00:00:00'; -``` - -**Using epoch microseconds (equivalent):** -```sql -SELECT * FROM trades -WHERE timestamp BETWEEN 1737417600000000 AND 1737504000000000; -``` - -**Performance:** Both are equally fast - QuestDB converts strings to microseconds internally. 
- -## Time Range with Epoch - -Calculate time ranges using epoch values: - -```questdb-sql demo title="Last 7 days using epoch calculation" -DECLARE - @now := cast(now() AS long), - @week_ago := @now - (7 * 24 * 60 * 60 * 1000000) -SELECT * FROM trades -WHERE timestamp >= @week_ago -LIMIT 100; -``` - -**Breakdown:** -- 7 days = 7 × 24 × 60 × 60 × 1,000,000 microseconds -- Subtract from current timestamp to get cutoff - -## Aggregating by Epoch Intervals - -Group by time using epoch arithmetic: - -```questdb-sql demo title="Aggregate by 5-minute intervals using epoch" -SELECT - (cast(timestamp AS long) / 300000000) * 300000000 as interval_start, - count(*) as trade_count, - avg(price) as avg_price -FROM trades -WHERE timestamp >= dateadd('d', -1, now()) -GROUP BY interval_start -ORDER BY interval_start; -``` - -**Calculation:** -- 5 minutes = 300 seconds = 300,000,000 microseconds -- Divide, truncate (integer division), multiply back to get interval start - -**Better alternative:** -```sql -SELECT - timestamp_floor('5m', timestamp) as interval_start, - count(*) as trade_count -FROM trades -SAMPLE BY 5m; -``` - -## Nanosecond Precision - -For `timestamp_ns` columns (nanosecond precision): - -```sql --- Create table with nanosecond precision -CREATE TABLE high_freq_trades ( - symbol SYMBOL, - price DOUBLE, - timestamp_ns TIMESTAMP_NS -) TIMESTAMP(timestamp_ns); - --- Query with nanosecond epoch -SELECT * FROM high_freq_trades -WHERE timestamp_ns BETWEEN 1746552420000000000 AND 1746811620000000000; -``` - -Note: Multiply microseconds by 1000 or milliseconds by 1,000,000 for nanoseconds. 
- -## Dynamic Epoch from Current Time - -Calculate epoch values relative to now: - -```questdb-sql demo title="Calculate epoch for queries" -SELECT - cast(now() AS long) as current_epoch_micros, - cast(dateadd('h', -1, now()) AS long) as one_hour_ago_micros, - cast(dateadd('d', -7, now()) AS long) as one_week_ago_micros; -``` - -Use these values in application queries: - -```sql --- In your application, get the epoch value: --- epoch_start = execute("SELECT cast(dateadd('d', -1, now()) AS long)") - --- Then use in parameterized query: -SELECT * FROM trades WHERE timestamp >= ? -``` - -## Common Epoch Conversions - -| Duration | Microseconds | Milliseconds | Seconds | -|----------|--------------|--------------|---------| -| 1 second | 1,000,000 | 1,000 | 1 | -| 1 minute | 60,000,000 | 60,000 | 60 | -| 1 hour | 3,600,000,000 | 3,600,000 | 3,600 | -| 1 day | 86,400,000,000 | 86,400,000 | 86,400 | -| 1 week | 604,800,000,000 | 604,800,000 | 604,800 | - -## Debugging Epoch Values - -Convert suspect epoch values to verify correctness: - -```questdb-sql demo title="Verify epoch timestamp" -SELECT - 1746552420000000 as input_micros, - cast(1746552420000000 as timestamp) as as_timestamp, - CASE - WHEN cast(1746552420000000 as timestamp) > '1970-01-01' THEN 'Valid' - ELSE 'Invalid - too small' - END as validity; -``` - -**Common mistakes:** -- Using milliseconds instead of microseconds (off by 1000x) -- Using seconds instead of microseconds (off by 1,000,000x) -- Wrong epoch (some systems use 1900 or 2000 as base) - -## Mixed Epoch and String Queries - -You can mix epoch and string timestamps in the same query: - -```sql -SELECT * FROM trades -WHERE timestamp >= 1746552420000000 -- Epoch microseconds - AND timestamp < '2025-01-16T00:00:00' -- Timestamp string - AND symbol = 'BTC-USDT'; +WHERE timestamp BETWEEN 1746552420000000 AND 1746811620000000; ``` -QuestDB handles both formats seamlessly. 
- -:::tip When to Use Epoch -Use epoch timestamps when: -- Interfacing with systems that provide epoch time -- Doing arithmetic on timestamps (adding/subtracting microseconds) -- Minimizing string parsing overhead in high-frequency scenarios +**Note:** If you have epoch values in milliseconds, you need to multiply by 1000 to convert to microseconds. -Use timestamp strings when: -- Writing queries manually (more readable) -- Debugging timestamp issues -- Working with date/time functions -::: - -:::warning Precision Matters -Always verify the precision of your epoch timestamps: -- Milliseconds: 13 digits (e.g., `1746552420000`) -- Microseconds: 16 digits (e.g., `1746552420000000`) -- Nanoseconds: 19 digits (e.g., `1746552420000000000`) - -Wrong precision will give incorrect results by factors of 1000x! -::: +Nanoseconds can be used when the timestamp column is of type `timestamp_ns`. :::info Related Documentation -- [CAST function](/docs/reference/sql/cast/) - [Timestamp types](/docs/reference/sql/datatypes/#timestamp-and-date) -- [dateadd()](/docs/reference/function/date-time/#dateadd) -- [now()](/docs/reference/function/date-time/#now) +- [WHERE clause](/docs/reference/sql/where/) ::: diff --git a/documentation/playbook/sql/time-series/expand-power-over-time.md b/documentation/playbook/sql/time-series/expand-power-over-time.md index 5fa18b348..16d1feb83 100644 --- a/documentation/playbook/sql/time-series/expand-power-over-time.md +++ b/documentation/playbook/sql/time-series/expand-power-over-time.md @@ -1,34 +1,31 @@ --- title: Expand Average Power Over Time sidebar_label: Expand power over time -description: Distribute average power values across hourly intervals using sessions and window functions for IoT energy data +description: Distribute average power values across hourly intervals using sessions and window functions --- -Expand discrete energy measurements across time intervals to visualize average power consumption. 
When IoT devices report cumulative energy (watt-hours) at irregular intervals, you need to distribute that energy across the hours it was consumed. +Expand discrete energy measurements (watt-hours) across time intervals. When an IoT device sends a `wh` value at discrete timestamps, you can distribute that energy across the hours between measurements to visualize average power consumption per hour. -## Problem: Sparse Energy Readings to Hourly Distribution +## Problem -You have IoT devices reporting watt-hour (Wh) values at discrete timestamps. You want to: -1. Calculate average power (W) between readings -2. Distribute that power across each hour in the period -3. Visualize hourly energy consumption +You have IoT devices reporting watt-hour (Wh) values at irregular timestamps, identified by an `operationId`. You want to plot the sum of average power per operation, broken down by hour. -**Sample data:** +Raw data: -| timestamp | operationId | wh | -|-----------|-------------|-----| -| 14:10:59 | 1001 | 0 | -| 18:18:05 | 1001 | 200 | -| 14:20:01 | 1002 | 0 | -| 22:20:10 | 1002 | 300 | +| timestamp | operationId | wh | +|-----------------------------|-------------|-----| +| 2025-04-01T14:10:59.000000Z | 1001 | 0 | +| 2025-04-01T14:20:01.000000Z | 1002 | 0 | +| 2025-04-01T15:06:29.000000Z | 1003 | 0 | +| 2025-04-01T18:18:05.000000Z | 1001 | 200 | +| 2025-04-01T20:06:36.000000Z | 1003 | 200 | +| 2025-04-01T22:20:10.000000Z | 1002 | 300 | -For operation 1001: 200 Wh consumed over 4 hours 7 minutes → needs to be distributed across hours 14:00, 15:00, 16:00, 17:00, 18:00. +For operation 1001: 200 Wh consumed between 14:10:59 and 18:18:05 should be distributed across hours 14:00, 15:00, 16:00, 17:00, 18:00. 
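To make the arithmetic concrete, here is a hedged Python sketch of that even distribution (the `distribute_even` helper and dates are illustrative only): 200 Wh attributed across the five hourly buckets touched by the interval gives 40 Wh per bucket, which is what dividing by the number of attributable hours produces.

```python
from datetime import datetime, timedelta

def distribute_even(start, end, wh):
    # Evenly distribute `wh` across every hourly bucket touched by the
    # [start, end] interval, mirroring wh / attributable_hours.
    first_hour = start.replace(minute=0, second=0, microsecond=0)
    last_hour = end.replace(minute=0, second=0, microsecond=0)
    hours = int((last_hour - first_hour) / timedelta(hours=1)) + 1
    return {first_hour + timedelta(hours=i): wh / hours for i in range(hours)}

buckets = distribute_even(datetime(2025, 4, 1, 14, 10, 59),
                          datetime(2025, 4, 1, 18, 18, 5), 200.0)
# five buckets, 14:00 through 18:00, each holding 40.0 Wh
```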
-## Solution: Session-Based Distribution +## Solution -Use SAMPLE BY to create hourly intervals, then use sessions to identify and distribute energy across attributable hours: - -```questdb-sql demo title="Distribute average power across hours" +```questdb-sql demo title="Distribute watt-hours across hourly intervals" WITH sampled AS ( SELECT timestamp, operationId, sum(wh) as wh @@ -55,251 +52,35 @@ SELECT FROM counts; ``` -**Results:** - -| timestamp | operationId | wh_avg | -|-----------|-------------|--------| -| 14:00:00 | 1001 | 39.67 | -| 15:00:00 | 1001 | 48.56 | -| 16:00:00 | 1001 | 48.56 | -| 17:00:00 | 1001 | 48.56 | -| 18:00:00 | 1001 | 14.64 | -| 14:00:00 | 1002 | 24.98 | -| 15:00:00 | 1002 | 37.49 | -| ... | ... | ... | - -## How It Works - -The query uses a four-step approach: - -### 1. Sample by Hour (`sampled`) - -```sql -SELECT timestamp, operationId, sum(wh) as wh -FROM meter -SAMPLE BY 1h -FILL(0) -``` - -Creates hourly buckets with: -- Sum of wh values if data exists in that hour -- 0 for hours with no data (via FILL(0)) - -This ensures we have a row for every hour in the time range. - -### 2. Identify Sessions (`sessions`) - -```sql -SUM(CASE WHEN wh > 0 THEN 1 END) - OVER (PARTITION BY operationId ORDER BY timestamp DESC) -``` - -Working backwards in time (DESC order), increment a counter whenever we see a non-zero wh value. This creates "sessions" where: -- Each session = one energy reading -- Session includes all preceding zero-value hours -- Sessions are numbered: 1, 2, 3, ... (higher numbers are earlier in time) - -**Example for operation 1001:** - -| timestamp | wh | session | -|-----------|-----|---------| -| 18:00 | 200 | 1 | ← Reading at 18:00 -| 17:00 | 0 | 1 | ← Part of session 1 -| 16:00 | 0 | 1 | ← Part of session 1 -| 15:00 | 0 | 1 | ← Part of session 1 -| 14:00 | 0 | 1 | ← Part of session 1 - -### 3. 
Calculate Attributable Hours (`counts`) - -```sql -FIRST_VALUE(wh) OVER (PARTITION BY operationId, session ORDER BY timestamp DESC) -``` - -For each session, get the wh value (which appears in the first row when sorted DESC). - -```sql -COUNT(*) OVER (PARTITION BY operationId, session) -``` - -Count how many hours are in each session (how many hours to distribute energy across). - -### 4. Distribute Energy - -```sql -wh / attributable_hours -``` - -Divide the total energy by the number of hours to get average energy per hour. - -## Handling Partial Hours - -The query distributes energy evenly across hours, but actual consumption may not be uniform. For more accuracy with partial hours: - -```questdb-sql demo title="Calculate mean power between readings using LAG" -SELECT - timestamp AS end_time, - cast(prev_ts AS timestamp) AS start_time, - operationId, - (wh - prev_wh) / ((cast(timestamp AS DOUBLE) - prev_ts) / 3600000000.0) AS mean_power_w -FROM ( - SELECT - timestamp, - wh, - operationId, - lag(wh) OVER (PARTITION BY operationId ORDER BY timestamp) AS prev_wh, - lag(cast(timestamp AS DOUBLE)) OVER (PARTITION BY operationId ORDER BY timestamp) AS prev_ts - FROM meter -) -WHERE prev_ts IS NOT NULL -ORDER BY timestamp; -``` - -This calculates true average power (W) between consecutive readings, accounting for exact time differences. - -## Adapting the Pattern - -**Different time intervals:** -```sql --- 15-minute intervals -SAMPLE BY 15m - --- Daily intervals -SAMPLE BY 1d -``` - -**Multiple devices:** -```sql --- Already handled by PARTITION BY operationId --- Works automatically for any number of devices -``` - -**Filter by time range:** -```sql -WITH sampled AS ( - SELECT timestamp, operationId, sum(wh) as wh - FROM meter - WHERE timestamp >= '2025-01-01' AND timestamp < '2025-02-01' - SAMPLE BY 1h - FILL(0) -) -... 
-``` - -**Include device metadata:** -```sql -WITH sampled AS ( - SELECT - timestamp, - operationId, - first(location) as location, - first(device_type) as device_type, - sum(wh) as wh - FROM meter - SAMPLE BY 1h - FILL(0) -) -... -``` - -## Visualization in Grafana - -This query output is perfect for Grafana time-series charts: - -```sql -SELECT - timestamp as time, - operationId as metric, - wh / attributable_hours as value -FROM ( - -- ... full query from above ... -) -WHERE $__timeFilter(timestamp) -ORDER BY timestamp; -``` - -Configure Grafana to: -- Group by `metric` (operationId) -- Stack series to show total consumption -- Use area chart for energy visualization - -## Alternative: Pre-calculated Power - -If you calculate power at ingestion time, queries become simpler: - -```sql --- At ingestion, calculate instantaneous power -INSERT INTO power_readings -SELECT - timestamp, - operationId, - (wh - prev_wh) / seconds_elapsed as power_w -FROM meter; +**How it works:** --- Then query is simple -SELECT - timestamp_floor('h', timestamp) as hour, - operationId, - avg(power_w) as avg_power -FROM power_readings -GROUP BY hour, operationId -ORDER BY hour; -``` +The `sampled` subquery creates a row for each operationId and each hourly interval, including the hours with no readings, which are filled with 0 wh. -## Performance Considerations +The key trick is dividing the data into "sessions". A session is defined as all the rows with no value for wh before a row that has one.
Or, if we reverse the timestamp order, a session would be defined by a row with a value for wh, followed by several rows with zero value for the same operationId: -**Filter by operationId:** ```sql --- For specific devices -WHERE operationId IN ('1001', '1002', '1003') +SUM(CASE WHEN wh > 0 THEN 1 END) OVER (PARTITION BY operationId ORDER BY timestamp DESC) as session ``` -**Limit time range:** -```sql --- Only recent data -WHERE timestamp >= dateadd('d', -30, now()) -``` +For each operationId we get multiple sessions (1, 2, 3...). If we ran: -**Pre-aggregate if querying frequently:** ```sql --- Create materialized hourly view -CREATE TABLE hourly_power AS -SELECT ... FROM meter ... SAMPLE BY 1h; --- Refresh periodically --- (manual or scheduled) +COUNT() as attributable_hours GROUP BY operationId, session ``` -## Common Issues - -**Negative power values:** -- Occurs when devices report decreasing wh (meter reset, rollover) -- Filter with `WHERE wh_avg > 0` or handle resets explicitly +We would get the number of attributable rows in each session. -**Large gaps in data:** -- Long sessions distribute energy over many hours -- Consider adding a maximum session duration filter -- Or handle gaps differently (mark as "unknown" rather than distribute) +The `counts` subquery uses a window function to `COUNT` the number of rows per session. Notice the count window function does not use `order by`, so this is not a running count: all rows in the same session share the same `attributable_hours` value. -**First reading has no previous value:** -- LAG returns NULL for first reading -- Filter with `WHERE prev_ts IS NOT NULL` +It also gets the `FIRST_VALUE` for the session sorted by reverse timestamp, which is the `wh` value of the only non-zero row in each session.
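The backwards session numbering can be sketched procedurally. Assuming a plain list of hourly `wh` values for one operationId (zeros for the filled hours), this illustrative Python helper mimics the reversed running sum; note that the real window function yields NULL, not 0, for rows newer than the latest non-zero reading:

```python
def session_ids(wh_values):
    # Walk the hourly values in reverse timestamp order and bump the
    # session counter at every non-zero wh, so each reading is grouped
    # with the zero-value hours that precede it in time.
    ids = [0] * len(wh_values)
    session = 0
    for i in range(len(wh_values) - 1, -1, -1):
        if wh_values[i] > 0:
            session += 1
        ids[i] = session
    return ids

# two readings (200 then 300): the later reading starts session 1,
# the earlier one session 2, each pulling in the zeros before it
ids = session_ids([0, 0, 200, 0, 300])
```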
-:::tip Energy vs Power -- **Energy** (Wh): Cumulative, reported by meter -- **Power** (W): Rate of energy consumption (Wh per hour) -- **Average power** = Energy difference / Time elapsed +The final query divides the `wh` reported in the session by the number of `attributable_hours`. -This pattern converts from sparse energy readings to continuous power timeline. -::: - -:::warning Session Direction -The query uses `ORDER BY timestamp DESC` to work backwards in time. This is because we want to group zero-hours that occur BEFORE each reading. If you reverse the order, the distribution won't work correctly. -::: +**Note:** If you want to filter the results by timestamp or operationId, you should add the filter at the first query (the one named `sampled`), so the rest of the process is done on the relevant subset of data. :::info Related Documentation - [SAMPLE BY](/docs/reference/sql/select/#sample-by) - [FILL](/docs/reference/sql/select/#fill) - [Window functions](/docs/reference/sql/select/#window-functions) - [FIRST_VALUE](/docs/reference/function/window/#first_value) -- [LAG](/docs/reference/function/window/#lag) ::: diff --git a/documentation/playbook/sql/time-series/fill-missing-intervals.md b/documentation/playbook/sql/time-series/fill-missing-intervals.md index 47bc9115a..f23054a10 100644 --- a/documentation/playbook/sql/time-series/fill-missing-intervals.md +++ b/documentation/playbook/sql/time-series/fill-missing-intervals.md @@ -1,404 +1,38 @@ --- title: Fill Missing Time Intervals sidebar_label: Fill missing intervals -description: Create regular time intervals and propagate sparse values using FILL with PREV, NULL, LINEAR, or constant values +description: Create regular time intervals and propagate sparse values using FILL with PREV --- -Transform sparse event data into regular time-series by creating fixed intervals and filling gaps with appropriate values. This is essential for visualization, resampling, and aligning data from multiple sources. 
+Transform sparse event data into regular time-series by creating fixed intervals and filling gaps with previous values. -## Problem: Sparse Events Need Regular Intervals +## Problem -You have configuration changes recorded only when they occur: - -| timestamp | config_key | config_value | -|-----------|------------|--------------| -| 08:00:00 | max_connections | 100 | -| 10:30:00 | max_connections | 150 | -| 14:00:00 | max_connections | 200 | - -You want hourly intervals showing the active value at each hour: - -| timestamp | config_value | -|-----------|--------------| -| 08:00:00 | 100 | -| 09:00:00 | 100 | ← Filled forward -| 10:00:00 | 100 | ← Filled forward -| 11:00:00 | 150 | ← New value -| 12:00:00 | 150 | ← Filled forward -| 13:00:00 | 150 | ← Filled forward -| 14:00:00 | 200 | ← New value - -## Solution: SAMPLE BY with FILL(PREV) - -Use SAMPLE BY to create intervals and FILL(PREV) to forward-fill values: - -```questdb-sql demo title="Forward-fill configuration values" -SELECT - timestamp, - first(config_value) as config_value -FROM config_changes -WHERE config_key = 'max_connections' - AND timestamp >= '2025-01-15T08:00:00' - AND timestamp < '2025-01-15T15:00:00' -SAMPLE BY 1h FILL(PREV); -``` - -**Results:** - -| timestamp | config_value | -|-----------|--------------| -| 08:00:00 | 100 | -| 09:00:00 | 100 | -| 10:00:00 | 100 | -| 11:00:00 | 150 | -| 12:00:00 | 150 | -| 13:00:00 | 150 | -| 14:00:00 | 200 | - -## How It Works - -### SAMPLE BY Creates Intervals - -```sql -SAMPLE BY 1h -``` - -Creates hourly buckets regardless of whether data exists in that hour. 
- -### FILL(PREV) Propagates Values +You have a query like this: ```sql -FILL(PREV) +SELECT timestamp, id, sum(price) as price, sum(dayVolume) as dayVolume +FROM nasdaq_trades +WHERE id = 'NVDA' +SAMPLE BY 1s FILL(PREV, PREV); ``` -When an interval has no data: -- Copies the value from the previous non-empty interval -- First interval with no data remains NULL (no previous value) - -### first() Aggregate +When a row is interpolated, instead of filling `price` with the previous `price` and `dayVolume` with the previous `dayVolume`, you want both columns to show the previous known value of `dayVolume`. Imagine this SQL was valid: ```sql -first(config_value) -``` - -Takes the first value within each interval. For sparse data with one value per relevant interval, this extracts that value. - -## Different FILL Strategies - -QuestDB supports multiple fill strategies: - -```questdb-sql demo title="Compare FILL strategies" --- FILL(NULL): Leave gaps as NULL -SELECT timestamp, first(price) as price_null -FROM trades -WHERE symbol = 'BTC-USDT' -SAMPLE BY 1m FILL(NULL); - --- FILL(PREV): Forward-fill from previous value -SELECT timestamp, first(price) as price_prev -FROM trades -WHERE symbol = 'BTC-USDT' -SAMPLE BY 1m FILL(PREV); - --- FILL(LINEAR): Linear interpolation between known values -SELECT timestamp, first(price) as price_linear -FROM trades -WHERE symbol = 'BTC-USDT' -SAMPLE BY 1m FILL(LINEAR); - --- FILL(100.0): Constant value -SELECT timestamp, first(price) as price_const -FROM trades -WHERE symbol = 'BTC-USDT' -SAMPLE BY 1m FILL(100.0); -``` - -**When to use each:** - -| Strategy | Use Case | Example | -|----------|----------|---------| -| **FILL(NULL)** | Explicit gaps, no assumption | Sparse sensor data where missing = no reading | -| **FILL(PREV)** | State changes, step functions | Configuration values, status flags | -| **FILL(LINEAR)** | Smoothly varying metrics | Temperature, stock prices between trades | -| **FILL(constant)** | Default/baseline values | 
Filling with zero for missing counters | - -## Multiple Columns with Different Strategies - -Apply different fill strategies to different columns: - -```questdb-sql demo title="Mixed fill strategies" -SELECT - timestamp, - first(temperature) as temperature, -- Will use FILL(LINEAR) - first(status) as status -- Will use FILL(PREV) -FROM sensor_events -WHERE sensor_id = 'S001' - AND timestamp >= dateadd('h', -6, now()) -SAMPLE BY 5m -FILL(LINEAR); -- Applies to ALL numeric columns -``` - -**Limitation:** FILL applies to all columns identically. For per-column control, use separate queries with UNION ALL. - -## Forward-Fill with Limits - -Limit how far forward to propagate values: - -```questdb-sql demo title="Forward-fill with maximum gap" -WITH sampled AS ( - SELECT - timestamp, - first(sensor_value) as value - FROM sensor_readings - WHERE sensor_id = 'S001' - AND timestamp >= dateadd('h', -24, now()) - SAMPLE BY 10m FILL(PREV) -), -with_gap_check AS ( - SELECT - timestamp, - value, - timestamp - lag(timestamp) OVER (ORDER BY timestamp) as gap_micros - FROM sampled - WHERE value IS NOT NULL -- Only include intervals with actual or filled data -) -SELECT - timestamp, - CASE - WHEN gap_micros > 1800000000 THEN NULL -- Gap > 30 minutes: don't trust fill - ELSE value - END as value_with_limit -FROM with_gap_check -ORDER BY timestamp; -``` - -This prevents filling forward after large gaps where the value is likely stale. 
- -## Interpolate Between Sparse Updates - -Use LINEAR fill for numeric values that change gradually: - -```questdb-sql demo title="Linear interpolation between price updates" -SELECT - timestamp, - first(price) as price -FROM market_snapshots -WHERE symbol = 'BTC-USDT' - AND timestamp >= '2025-01-15T00:00:00' - AND timestamp < '2025-01-15T01:00:00' -SAMPLE BY 1m FILL(LINEAR); -``` - -**Example:** -- 00:00: price = 100 -- 00:10: price = 110 -- Result: 00:01→101, 00:02→102, ..., 00:09→109 - -Linear interpolation assumes constant rate of change between known points. - -## Fill State Changes for Grafana - -Create step charts in Grafana by forward-filling status values: - -```questdb-sql demo title="Service status for Grafana step chart" -SELECT - timestamp as time, - first(status) as "Service Status" -FROM service_events -WHERE $__timeFilter(timestamp) -SAMPLE BY $__interval FILL(PREV); -``` - -Configure Grafana to: -- Use "Staircase" line style -- Shows clear state transitions -- No misleading interpolation between discrete states - -## Align Multiple Sensors to Common Timeline - -Fill sparse data from multiple sensors to create aligned time-series: - -```questdb-sql demo title="Align multiple sensors to common intervals" -SELECT - timestamp, - symbol, - first(temperature) as temperature -FROM sensor_readings -WHERE sensor_id IN ('S001', 'S002', 'S003') - AND timestamp >= dateadd('h', -1, now()) -SAMPLE BY 1m FILL(PREV); +SELECT timestamp, id, sum(price) as price, sum(dayVolume) as dayVolume +FROM nasdaq_trades +WHERE id = 'NVDA' +SAMPLE BY 1s FILL(PREV(dayVolume), PREV); ``` -**Results:** +## Solution -| timestamp | sensor_id | temperature | -|-----------|-----------|-------------| -| 10:00:00 | S001 | 22.5 | -| 10:00:00 | S002 | 23.1 | -| 10:00:00 | S003 | 22.8 | -| 10:01:00 | S001 | 22.5 | ← Filled forward -| 10:01:00 | S002 | 23.2 | ← New reading -| 10:01:00 | S003 | 22.8 | ← Filled forward +The `FILL` keyword applies the same strategy to all columns in the 
result set. QuestDB does not currently support column-specific fill strategies in a single `SAMPLE BY` clause. -Now all sensors have values at the same timestamps, enabling cross-sensor analysis. - -## Fill with Context from Another Column - -Propagate one column while aggregating another differently: - -```questdb-sql demo title="Fill status while summing events" -SELECT - timestamp, - first(current_status) as status, -- Forward-fill status - count(*) as event_count -- Count events (0 if none) -FROM system_events -WHERE timestamp >= dateadd('h', -6, now()) -SAMPLE BY 10m FILL(PREV); -``` - -**Results:** - -| timestamp | status | event_count | -|-----------|--------|-------------| -| 10:00 | running | 15 | -| 10:10 | running | 0 | ← Status filled, but no events -| 10:20 | running | 23 | -| 10:30 | stopped | 1 | ← Status changed -| 10:40 | stopped | 0 | ← Status filled forward - -## NULL for First Interval with No Data - -FILL(PREV) can't fill the first interval if it has no data: - -```sql -SELECT timestamp, first(value) as value -FROM sparse_data -WHERE timestamp >= '2025-01-15T00:00:00' -SAMPLE BY 1h FILL(PREV); -``` - -If first interval (00:00-01:00) has no data, it will be NULL (no previous value to copy). - -**Solution:** Start range from a timestamp you know has data, or use COALESCE with a default: - -```sql -SELECT - timestamp, - COALESCE(first(value), 0) as value -- Use 0 if NULL -FROM sparse_data -SAMPLE BY 1h FILL(PREV); -``` - -## Performance: FILL vs Window Functions - -**FILL is optimized for SAMPLE BY:** - -```sql --- Fast: Native FILL implementation -SELECT timestamp, first(value) -FROM data -SAMPLE BY 1h FILL(PREV); - --- Slower: Manual implementation with LAG -SELECT - timestamp, - COALESCE( - first(value), - lag(first(value)) OVER (ORDER BY timestamp) - ) as value -FROM data -SAMPLE BY 1h; -``` - -Use FILL when possible for better performance. 
- -## Creating Complete Time Range - -Ensure coverage of full time range even if no data exists: - -```questdb-sql demo title="Generate intervals for full day" -SELECT - timestamp, - first(temperature) as temperature -FROM sensor_readings -WHERE timestamp >= '2025-01-15T00:00:00' - AND timestamp < '2025-01-16T00:00:00' - AND sensor_id = 'S001' -SAMPLE BY 1h FILL(PREV); -``` - -Even if sensor reported no data for some hours, you'll get 24 rows (one per hour). - -## FILL with LATEST ON - -Combine with LATEST ON for efficient queries on large tables: - -```questdb-sql demo title="Fill recent data efficiently" -SELECT - timestamp, - first(price) as price -FROM trades -WHERE symbol = 'BTC-USDT' - AND timestamp >= dateadd('h', -6, now()) -LATEST ON timestamp PARTITION BY symbol -SAMPLE BY 1m FILL(PREV); -``` - -LATEST ON optimizes retrieval of recent data before sampling and filling. - -## Common Pitfalls - -**Wrong aggregate with FILL(PREV):** - -```sql --- Bad: sum() with FILL(PREV) doesn't make sense -SELECT timestamp, sum(trade_count) FROM trades SAMPLE BY 1h FILL(PREV); --- Fills missing hours with previous hour's sum (misleading!) 
- --- Good: Use FILL(0) for counts/sums -SELECT timestamp, sum(trade_count) FROM trades SAMPLE BY 1h FILL(0); -``` - -**FILL(LINEAR) with non-numeric types:** - -```sql --- Error: Can't interpolate strings -SELECT timestamp, first(status_text) FROM events SAMPLE BY 1h FILL(LINEAR); - --- Correct: Use FILL(PREV) for strings/symbols -SELECT timestamp, first(status_text) FROM events SAMPLE BY 1h FILL(PREV); -``` - -## Comparison with NULL Handling - -| Approach | Result | Use Case | -|----------|--------|----------| -| **No FILL** | Fewer rows (sparse) | Raw data export, missing data is meaningful | -| **FILL(NULL)** | All intervals, NULLs for gaps | Explicit gap tracking, Grafana shows breaks | -| **FILL(PREV)** | All intervals, forward-filled | Step functions, state that persists | -| **FILL(LINEAR)** | All intervals, interpolated | Smooth metrics, estimated intermediate values | -| **FILL(0)** | All intervals, zeros for gaps | Counts, volumes (missing = zero activity) | - -:::tip When to Use FILL(PREV) -Perfect for: -- Configuration values (persist until changed) -- Status/state (remains until transition) -- Categorical data (can't interpolate) -- Creating step charts in Grafana -- Aligning sparse data from multiple sources -::: - -:::warning Data Interpretation -FILL(PREV) creates synthetic data points. Distinguish between: -- **Actual measurements**: Sensor reported a value -- **Filled values**: Value propagated from previous interval - -Consider adding a flag column to mark filled vs actual data if this distinction matters. -::: +To achieve different fill strategies for different columns, you would need to use separate queries with UNION ALL, or handle the conditional filling logic in your application layer. 
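+
+As a rough sketch of the separate-queries approach, each column can be sampled with its own fill strategy and the results recombined on the bucket timestamp. The strategies below (`LINEAR` for the price, `PREV` for the volume) are arbitrary examples, and the join-based recombination is an untested sketch:
+
+```sql
+WITH prices AS (
+  SELECT timestamp, sum(price) as price
+  FROM nasdaq_trades
+  WHERE id = 'NVDA'
+  SAMPLE BY 1s FILL(LINEAR)
+), volumes AS (
+  SELECT timestamp, sum(dayVolume) as dayVolume
+  FROM nasdaq_trades
+  WHERE id = 'NVDA'
+  SAMPLE BY 1s FILL(PREV)
+)
+SELECT p.timestamp, p.price, v.dayVolume
+FROM prices p
+JOIN volumes v ON p.timestamp = v.timestamp;
+```
+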
:::info Related Documentation

- [SAMPLE BY](/docs/reference/sql/select/#sample-by)
- [FILL keyword](/docs/reference/sql/select/#fill)
-- [first() aggregate](/docs/reference/function/aggregation/#first)
-- [LATEST ON](/docs/reference/sql/select/#latest-on)

:::

diff --git a/documentation/playbook/sql/time-series/filter-by-week.md b/documentation/playbook/sql/time-series/filter-by-week.md
index 450e83d7e..b0f241605 100644
--- a/documentation/playbook/sql/time-series/filter-by-week.md
+++ b/documentation/playbook/sql/time-series/filter-by-week.md
@@ -4,242 +4,48 @@ sidebar_label: Filter by week
 description: Query data by ISO week number using week_of_year() or dateadd() for better performance
 ---
-Filter time-series data by ISO week number (1-52/53) using either the built-in `week_of_year()` function or `dateadd()` for better performance on large tables.
+Filter time-series data by week number using either the built-in `week_of_year()` function or `dateadd()` for better performance on large tables.

-## Problem: Query Specific Week
+## Problem

-You want to get all data from week 24 of 2025, regardless of which days that includes.
+You want to get all data from a given week of the year (week 24 in the examples below), regardless of which days that includes.

-## Solution 1: Using week_of_year() Function
+## Solution 1: Using week_of_year()

-The simplest approach uses the built-in function:
+There is a built-in `week_of_year()` function, so this could be solved as:

-```questdb-sql demo title="Get all trades from week 24"
+```sql
 SELECT * FROM trades
-WHERE week_of_year(timestamp) = 24
-  AND year(timestamp) = 2025;
+WHERE week_of_year(timestamp) = 24;
```

-This works but requires evaluating the function for every row, which can be slow on large tables.
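+
+Note that without a year filter this matches week 24 of every year stored in the table. If you only want a specific year, combine it with `year()`:
+
+```sql
+SELECT * FROM trades
+WHERE week_of_year(timestamp) = 24
+  AND year(timestamp) = 2025;
+```
+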
-
## Solution 2: Using dateadd() (Faster)

-Calculate the week boundaries once and filter by timestamp range:
+However, depending on your table size, especially if you are not filtering by any timestamp, you might prefer this alternative, as it executes faster:

-```questdb-sql demo title="Get week 24 data using date range (faster)"
-DECLARE
-  @year := '2025',
-  @week := 24,
-  @first_monday := dateadd('d', -1 * day_of_week(@year) + 1, @year),
-  @week_start := dateadd('w', @week - 1, @first_monday),
-  @week_end := dateadd('w', @week, @first_monday)
+```sql
 SELECT * FROM trades
-WHERE timestamp >= @week_start
-  AND timestamp < @week_end;
+WHERE timestamp >= dateadd('w', 23, '2024-12-30')
+  AND timestamp < dateadd('w', 24, '2024-12-30');
```

-This approach:
-- Calculates week boundaries once
-- Uses timestamp index for fast filtering
-- Executes much faster on large tables
-
-## How It Works
-
-### ISO Week Numbering
-
-ISO 8601 defines weeks as:
-- Week starts on Monday
-- Week 1 contains the first Thursday of the year
-- Year can have 52 or 53 weeks
+You need to be careful with that query: the anchor date (`'2024-12-30'` here) must be the Monday that starts week 1 of the year you are interested in. Anchoring the week arithmetic at the Unix epoch instead would start counting time from Jan 1st 1970, which is not a Monday.

-### Calculation Steps
+## Solution 3: Start at First Monday of Year

-**1. Find first Monday:**
-```sql
-@first_monday := dateadd('d', -1 * day_of_week(@year) + 1, @year)
-```
-- `day_of_week(@year)`: Day of week for Jan 1 (1=Mon, 7=Sun)
-- Calculate days to subtract to get to previous/same Monday
-- This gives the Monday of the week containing Jan 1
-
-**2. Calculate week start:**
-```sql
-@week_start := dateadd('w', @week - 1, @first_monday)
-```
-- Add `@week - 1` weeks to first Monday
-- This gives Monday of the target week
+This alternative would start at the Monday of the week that includes January 1st:
-**3. 
Calculate week end:** -```sql -@week_end := dateadd('w', @week, @first_monday) -``` -- Add one more week to get the boundary -- Use `<` (not `<=`) to exclude next week's Monday - -## Full Example with Results - -```questdb-sql demo title="Week 24 trades with boundaries shown" +```questdb-sql demo title="Filter by week using first Monday calculation" DECLARE @year := '2025', @week := 24, @first_monday := dateadd('d', -1 * day_of_week(@year) + 1, @year), @week_start := dateadd('w', @week - 1, @first_monday), @week_end := dateadd('w', @week, @first_monday) -SELECT - @week_start as week_start, - @week_end as week_end, - count(*) as trade_count, - sum(amount) as total_volume -FROM trades -WHERE timestamp >= @week_start - AND timestamp < @week_end; -``` - -**Results:** - -| week_start | week_end | trade_count | total_volume | -|------------|----------|-------------|--------------| -| 2025-06-09 | 2025-06-16 | 145,623 | 89,234.56 | - -## Multiple Weeks - -Query several consecutive weeks: - -```questdb-sql demo title="Weeks 20-25 aggregated by week" -DECLARE - @year := '2025', - @first_monday := dateadd('d', -1 * day_of_week(@year) + 1, @year) -SELECT - week_of_year(timestamp) as week, - count(*) as trade_count, - sum(amount) as total_volume -FROM trades -WHERE timestamp >= dateadd('w', 19, @first_monday) -- Week 20 start - AND timestamp < dateadd('w', 26, @first_monday) -- Week 26 start -GROUP BY week -ORDER BY week; -``` - -## Current Week - -Get data for the current week: - -```sql -DECLARE - @today := now(), - @week_start := timestamp_floor('w', @today) SELECT * FROM trades WHERE timestamp >= @week_start - AND timestamp < dateadd('w', 1, @week_start); -``` - -`timestamp_floor('w', timestamp)` rounds down to the most recent Monday. 
- -## Week-over-Week Comparison - -Compare the same week across different years: - -```sql -DECLARE - @week := 24, - @year1 := '2024', - @year2 := '2025', - @first_monday_2024 := dateadd('d', -1 * day_of_week(@year1) + 1, @year1), - @first_monday_2025 := dateadd('d', -1 * day_of_week(@year2) + 1, @year2), - @week_start_2024 := dateadd('w', @week - 1, @first_monday_2024), - @week_end_2024 := dateadd('w', @week, @first_monday_2024), - @week_start_2025 := dateadd('w', @week - 1, @first_monday_2025), - @week_end_2025 := dateadd('w', @week, @first_monday_2025) -SELECT - '2024' as year, - count(*) as trade_count -FROM trades -WHERE timestamp >= @week_start_2024 AND timestamp < @week_end_2024 - -UNION ALL - -SELECT - '2025' as year, - count(*) as trade_count -FROM trades -WHERE timestamp >= @week_start_2025 AND timestamp < @week_end_2025; -``` - -## Performance Comparison - -**Using week_of_year() (slow on large tables):** -```sql --- Evaluates function for EVERY row -SELECT * FROM trades -WHERE week_of_year(timestamp) = 24; -``` - -**Using dateadd() (fast):** -```sql --- Uses timestamp index, evaluates boundaries once -DECLARE - @week_start := ..., - @week_end := ... -SELECT * FROM trades -WHERE timestamp >= @week_start AND timestamp < @week_end; -``` - -On a table with 100M rows: -- `week_of_year()` approach: ~30 seconds -- `dateadd()` approach: ~0.1 seconds (300x faster) - -## Handling Week 53 - -Some years have 53 weeks. 
Check before querying: - -```sql -DECLARE - @year := '2020', -- 2020 had 53 weeks - @week := 53 -SELECT - CASE - WHEN @week <= 52 THEN 'Valid' - WHEN @week = 53 AND week_of_year(dateadd('d', -1, dateadd('y', 1, @year))) = 53 - THEN 'Valid' - ELSE 'Invalid - year only has 52 weeks' - END as week_validity; -``` - -## ISO vs Other Week Systems - -Different systems define weeks differently: - -**ISO 8601 (Monday start, first Thursday):** -```sql --- Use dateadd with 'w' unit -dateadd('w', n, start_date) -``` - -**US system (Sunday start):** -```sql --- Adjust first day calculation -@first_sunday := dateadd('d', -1 * (day_of_week(@year) % 7), @year) -``` - -**Custom week definition:** -```sql --- Define your own start day and week 1 rules --- Calculate boundaries manually + AND timestamp < @week_end; ``` -:::tip When to Use Each Approach -- **Use `week_of_year()`**: For small tables, ad-hoc queries, or when you need the week number in results -- **Use `dateadd()`**: For large tables, performance-critical queries, or when filtering by week -::: - -:::warning Year Boundaries -Week 1 may start in the previous calendar year (late December), and week 52/53 may extend into the next year (early January). Always verify boundaries if year matters for your analysis. 
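+
+If you want to double-check the boundaries this calculation produces (for instance, that `@week_start` really falls on a Monday), you can select the declared values on their own before using them as filters. This is an untested sketch reusing the variables from the query above:
+
+```sql
+DECLARE
+  @year := '2025',
+  @week := 24,
+  @first_monday := dateadd('d', -1 * day_of_week(@year) + 1, @year),
+  @week_start := dateadd('w', @week - 1, @first_monday),
+  @week_end := dateadd('w', @week, @first_monday)
+SELECT @week_start AS week_start, @week_end AS week_end;
+```
+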
-::: - :::info Related Documentation - [week_of_year()](/docs/reference/function/date-time/#week_of_year) - [dateadd()](/docs/reference/function/date-time/#dateadd) - [day_of_week()](/docs/reference/function/date-time/#day_of_week) -- [timestamp_floor()](/docs/reference/function/date-time/#timestamp_floor) - [DECLARE](/docs/reference/sql/declare/) ::: diff --git a/documentation/playbook/sql/force-designated-timestamp.md b/documentation/playbook/sql/time-series/force-designated-timestamp.md similarity index 100% rename from documentation/playbook/sql/force-designated-timestamp.md rename to documentation/playbook/sql/time-series/force-designated-timestamp.md diff --git a/documentation/playbook/sql/time-series/latest-activity-window.md b/documentation/playbook/sql/time-series/latest-activity-window.md index 615228142..e06fbf5ee 100644 --- a/documentation/playbook/sql/time-series/latest-activity-window.md +++ b/documentation/playbook/sql/time-series/latest-activity-window.md @@ -1,273 +1,42 @@ --- title: Query Last N Minutes of Activity sidebar_label: Latest activity window -description: Get rows from the last N minutes of recorded activity using subqueries with max(timestamp) +description: Get rows from the last N minutes of recorded activity using subqueries with LIMIT -1 --- -Query data from the last N minutes of recorded activity in a table, regardless of the current time. This is useful when data collection is intermittent or when you want to analyze recent activity relative to when data was last recorded, not relative to "now". +Query data from the last N minutes of recorded activity in a table, regardless of the current time. 
-## Problem: Relative to Latest Data, Not Current Time
+## Problem

-You want the last 15 minutes of activity from your table, but:
-- Data collection may have stopped hours or days ago
-- Using `WHERE timestamp > dateadd('m', -15, now())` would return empty results if no recent data
-- You need a query relative to the latest timestamp IN the table
+You want to get data from a table for the last 15 minutes of activity.

-**Example scenario:**
-- Latest timestamp in table: `2025-03-23T07:24:37`
-- Current time: `2025-03-25T14:30:00` (2 days later)
-- You want: Data from `2025-03-23T07:09:37` to `2025-03-23T07:24:37` (last 15 minutes of activity)
-
-## Solution: Subquery with max(timestamp)
-
-Use a subquery to find the latest timestamp, then filter relative to it:
-
-```questdb-sql demo title="Last 15 minutes of recorded activity"
-SELECT * FROM trades
-WHERE timestamp >= (
-  SELECT dateadd('m', -15, timestamp)
-  FROM trades
-  LIMIT -1
-);
-```
-
-This query:
-1. `LIMIT -1` gets the latest row (by designated timestamp)
-2. `dateadd('m', -15, timestamp)` calculates 15 minutes before that
-3. Outer query filters all rows from that boundary forward
-
-**Results:**
-All rows from the last 15 minutes of activity, regardless of when that activity occurred relative to now.
-
-## How It Works
-
-### The LIMIT -1 Trick
-
-```sql
-SELECT timestamp FROM trades LIMIT -1
-```
-
-In QuestDB, negative LIMIT returns the last N rows (sorted by designated timestamp in descending order). `LIMIT -1` returns only the single most recent row.
-
-### Correlated Subquery Support
-
-QuestDB supports correlated subqueries in specific contexts, including timestamp comparisons:
+You know you could do:

```sql
-WHERE timestamp >= (SELECT ... FROM table LIMIT -1)
+SELECT * FROM my_table
+WHERE timestamp > dateadd('m', -15, now());
```

-The subquery executes once and returns a scalar timestamp value, which is then used in the WHERE clause for all rows.
-
-### Why Not dateadd on the Left?
- -```sql --- Less efficient (calculates for every row) -WHERE dateadd('m', -15, now()) < timestamp - --- More efficient (calculates once) -WHERE timestamp >= (SELECT dateadd('m', -15, timestamp) FROM trades LIMIT -1) -``` +But that would give you the last 15 minutes, not the last 15 minutes of activity in your table. Supposing the last timestamp recorded in your table was `2025-03-23T07:24:37.000000Z`, then you would like to get the data from `2025-03-23T07:09:37.000000Z` to `2025-03-23T07:24:37.000000Z`. -When the calculation is on the right side, it's evaluated once. On the left side, it would need to be evaluated for every row in the table. - -## Different Time Windows - -**Last hour of activity:** -```sql -SELECT * FROM trades -WHERE timestamp >= ( - SELECT dateadd('h', -1, timestamp) - FROM trades - LIMIT -1 -); -``` - -**Last 30 seconds:** -```sql -SELECT * FROM trades -WHERE timestamp >= ( - SELECT dateadd('s', -30, timestamp) - FROM trades - LIMIT -1 -); -``` - -**Last day:** -```sql -SELECT * FROM trades -WHERE timestamp >= ( - SELECT dateadd('d', -1, timestamp) - FROM trades - LIMIT -1 -); -``` - -## With Symbol Filtering - -Get latest activity for a specific symbol: - -```questdb-sql demo title="Last 15 minutes of BTC-USDT activity" -SELECT * FROM trades -WHERE symbol = 'BTC-USDT' - AND timestamp >= ( - SELECT dateadd('m', -15, timestamp) - FROM trades - WHERE symbol = 'BTC-USDT' - LIMIT -1 - ); -``` +## Solution -Note that the subquery also filters by symbol to find the latest timestamp for that specific symbol. 
+Use a correlated subquery to find the latest timestamp, then filter relative to it: -## Multiple Symbols with Different Latest Times - -For each symbol, get its own last 15 minutes: - -```sql -WITH latest_per_symbol AS ( - SELECT symbol, max(timestamp) as latest_ts - FROM trades - GROUP BY symbol -) -SELECT t.* -FROM trades t -JOIN latest_per_symbol l ON t.symbol = l.symbol -WHERE t.timestamp >= dateadd('m', -15, l.latest_ts); -``` - -This handles cases where different symbols have different latest timestamps. - -## Performance Considerations - -**Efficient execution:** -- The subquery with `LIMIT -1` is very fast (O(1) operation on designated timestamp) -- Returns immediately without scanning the table -- The calculated boundary is reused for all rows in the outer query - -**Avoid CROSS JOIN approach:** -```sql --- Less efficient alternative -WITH ts AS ( - SELECT max(timestamp) as latest_ts FROM trades -) -SELECT * FROM trades CROSS JOIN ts -WHERE timestamp > dateadd('m', -15, latest_ts); -``` - -While this works, the subquery approach is cleaner and equally performant. 
- -## Combining with Aggregations - -**Count trades in last 15 minutes of activity:** -```questdb-sql demo title="Trade count in last 15 minutes of activity" -SELECT - symbol, - count(*) as trade_count, - sum(amount) as total_volume -FROM trades -WHERE timestamp >= ( - SELECT dateadd('m', -15, timestamp) - FROM trades - LIMIT -1 -) -GROUP BY symbol -ORDER BY trade_count DESC; -``` - -**Average price in latest activity window:** -```sql -SELECT - symbol, - avg(price) as avg_price, - min(timestamp) as window_start, - max(timestamp) as window_end -FROM trades +```questdb-sql demo title="Last 15 minutes of recorded activity" +SELECT * +FROM my_table WHERE timestamp >= ( SELECT dateadd('m', -15, timestamp) - FROM trades + FROM my_table LIMIT -1 -) -GROUP BY symbol; -``` - -## Alternative: Store Latest Timestamp - -For frequently-run queries, consider materializing the latest timestamp: - -```sql --- Create a single-row table -CREATE TABLE latest_activity ( - latest_ts TIMESTAMP -); - --- Update periodically (e.g., every minute) -INSERT INTO latest_activity -SELECT max(timestamp) FROM trades; - --- Use in queries -SELECT * FROM trades -WHERE timestamp >= ( - SELECT dateadd('m', -15, latest_ts) - FROM latest_activity - LIMIT 1 ); ``` -This avoids recalculating `max(timestamp)` on every query. - -## Handling Empty Tables - -If the table might be empty: - -```sql -SELECT * FROM trades -WHERE timestamp >= COALESCE( - (SELECT dateadd('m', -15, timestamp) FROM trades LIMIT -1), - '1970-01-01T00:00:00' -- Fallback for empty table -); -``` - -This provides a default timestamp if no data exists. 
-
-## Use Cases
-
-**Monitoring dashboards:**
-- Show recent activity even if data feed has stopped
-- Avoid empty charts when data is delayed
-
-**Data quality checks:**
-- "Show me the last 10 minutes of received data"
-- Works regardless of current time
-
-**Replay analysis:**
-- Analyze historical data relative to when it was recorded
-- "What happened in the 15 minutes before system shutdown?"
-
-**Testing with old data:**
-- Query patterns work on old datasets
-- No need to adjust timestamps to "now"
-
-:::tip When to Use This Pattern
-Use this pattern when:
-- Data collection is intermittent or may have stopped
-- Analyzing historical datasets where "now" is not relevant
-- Building replay or analysis tools for past events
-- Creating dashboards that show "latest activity" regardless of age
-:::
-
-:::warning Subquery Performance
-The subquery with `LIMIT -1` is efficient because:
-- It operates on the designated timestamp index
-- Returns immediately without table scan
-- Only executes once for the entire outer query
-
-Don't worry about performance - this pattern is optimized in QuestDB.
-:::
+QuestDB supports correlated subqueries in timestamp comparisons as long as the subquery returns a scalar value. Using `LIMIT -1` we get the latest row in the table (sorted by designated timestamp), and we apply the `dateadd` function to that timestamp, so it only needs to be executed once. If we placed the `dateadd` on the left, the calculation would need to be applied once for each row of the main table. This query should return in just a few milliseconds, regardless of table size.
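+
+The same subquery can also bound aggregations over the activity window. For example, assuming the table has a `symbol` column, this counts rows per symbol within the last 15 minutes of activity:
+
+```sql
+SELECT symbol, count(*) AS row_count
+FROM my_table
+WHERE timestamp >= (
+  SELECT dateadd('m', -15, timestamp)
+  FROM my_table
+  LIMIT -1
+);
+```
+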
:::info Related Documentation

- [LIMIT](/docs/reference/sql/select/#limit)
- [dateadd()](/docs/reference/function/date-time/#dateadd)
-- [max()](/docs/reference/function/aggregation/#max)
- [Designated timestamp](/docs/concept/designated-timestamp/)

:::

diff --git a/documentation/playbook/sql/time-series/remove-outliers.md b/documentation/playbook/sql/time-series/remove-outliers.md
index ec11d9614..33e935efb 100644
--- a/documentation/playbook/sql/time-series/remove-outliers.md
+++ b/documentation/playbook/sql/time-series/remove-outliers.md
@@ -1,466 +1,66 @@
 ---
-title: Remove Outliers from Time-Series
+title: Remove Outliers from Candle Data
 sidebar_label: Remove outliers
-description: Filter anomalous data points using moving averages, standard deviation, percentiles, and z-scores
+description: Filter outliers using window functions to compare against moving averages
 ---

-Identify and filter outliers from time-series data using statistical methods. Outliers can skew aggregates, distort visualizations, and trigger false alerts. This guide shows multiple approaches to detect and remove anomalous values.
+Remove outlier trades that differ significantly from recent average prices.

-## Problem: Noisy Sensor Data with Spikes
+## Problem

-You have temperature sensor readings with occasional erroneous spikes:
+You have candle data from trading pairs where, on some low-volume markets, a single trade with very low volume and an exchange rate far from the prevailing market rate can move the candle significantly. This makes charts hard to read, and you would like to exclude those trades from the chart.
-| timestamp | sensor_id | temperature | -|-----------|-----------|-------------| -| 10:00:00 | S001 | 22.5 | -| 10:01:00 | S001 | 22.7 | -| 10:02:00 | S001 | 89.3 | ← Outlier (sensor malfunction) -| 10:03:00 | S001 | 22.6 | -| 10:04:00 | S001 | 22.8 | +Current query: -The spike at 10:02 should be filtered out before calculating averages or displaying charts. - -## Solution 1: Moving Average Filter - -Remove values that deviate significantly from the moving average: - -```questdb-sql demo title="Filter outliers using moving average" -WITH moving_avg AS ( - SELECT - timestamp, - sensor_id, - temperature, - avg(temperature) OVER ( - PARTITION BY sensor_id - ORDER BY timestamp - ROWS BETWEEN 5 PRECEDING AND 5 FOLLOWING - ) as ma, - stddev(temperature) OVER ( - PARTITION BY sensor_id - ORDER BY timestamp - ROWS BETWEEN 5 PRECEDING AND 5 FOLLOWING - ) as stddev - FROM sensor_readings - WHERE timestamp >= dateadd('h', -24, now()) -) -SELECT - timestamp, - sensor_id, - temperature, - ma as moving_average -FROM moving_avg -WHERE ABS(temperature - ma) <= 2 * stddev -- Within 2 standard deviations -ORDER BY timestamp; -``` - -**How it works:** -- Calculate 11-point moving average (5 before + current + 5 after) -- Calculate moving standard deviation -- Keep only values within 2σ of moving average -- Typical threshold: 2σ retains ~95% of normal data, 3σ retains ~99.7% - -**Results:** - -| timestamp | sensor_id | temperature | moving_average | -|-----------|-----------|-------------|----------------| -| 10:00:00 | S001 | 22.5 | 22.6 | -| 10:01:00 | S001 | 22.7 | 22.6 | -| 10:03:00 | S001 | 22.6 | 22.7 | ← 10:02 filtered out -| 10:04:00 | S001 | 22.8 | 22.7 | - -## Solution 2: Percentile-Based Filtering - -Remove values outside a percentile range (e.g., below 1st or above 99th percentile): - -```questdb-sql demo title="Filter extreme values using percentiles" -WITH percentiles AS ( - SELECT - sensor_id, - percentile(temperature, 1) as p01, - percentile(temperature, 99) as p99 - 
FROM sensor_readings - WHERE timestamp >= dateadd('d', -7, now()) - GROUP BY sensor_id -) -SELECT - sr.timestamp, - sr.sensor_id, - sr.temperature -FROM sensor_readings sr -INNER JOIN percentiles p ON sr.sensor_id = p.sensor_id -WHERE sr.timestamp >= dateadd('h', -1, now()) - AND sr.temperature BETWEEN p.p01 AND p.p99 -ORDER BY sr.timestamp; -``` - -**Key points:** -- Calculates baseline percentiles from historical data (7 days) -- Filters recent data (1 hour) using those thresholds -- Adaptable: Use p05/p95 for less aggressive filtering -- Useful when distribution is not normal (skewed data) - -## Solution 3: Z-Score Method - -Calculate z-scores and filter values beyond a threshold: - -```questdb-sql demo title="Remove outliers using z-scores" -WITH stats AS ( - SELECT - sensor_id, - avg(temperature) as mean_temp, - stddev(temperature) as stddev_temp - FROM sensor_readings - WHERE timestamp >= dateadd('d', -7, now()) - GROUP BY sensor_id -), -z_scores AS ( - SELECT - sr.timestamp, - sr.sensor_id, - sr.temperature, - ((sr.temperature - stats.mean_temp) / stats.stddev_temp) as z_score - FROM sensor_readings sr - INNER JOIN stats ON sr.sensor_id = stats.sensor_id - WHERE sr.timestamp >= dateadd('h', -1, now()) -) -SELECT - timestamp, - sensor_id, - temperature, - z_score -FROM z_scores -WHERE ABS(z_score) <= 3 -- Within 3 standard deviations -ORDER BY timestamp; -``` - -**Z-score interpretation:** -- |z| < 2: Normal (95% of data) -- |z| < 3: Acceptable (99.7% of data) -- |z| ≥ 3: Outlier (0.3% of data) - -**Results:** - -| timestamp | sensor_id | temperature | z_score | -|-----------|-----------|-------------|---------| -| 10:00:00 | S001 | 22.5 | -0.12 | -| 10:01:00 | S001 | 22.7 | +0.15 | -| 10:03:00 | S001 | 22.6 | +0.02 | -| 10:04:00 | S001 | 22.8 | +0.28 | - -10:02 (z_score = 15.3) was filtered out. 
- -## Solution 4: Interquartile Range (IQR) - -Use IQR method for robust outlier detection (less sensitive to extreme values): - -```questdb-sql demo title="IQR-based outlier removal" -WITH quartiles AS ( - SELECT - sensor_id, - percentile(temperature, 25) as q1, - percentile(temperature, 75) as q3, - (percentile(temperature, 75) - percentile(temperature, 25)) as iqr - FROM sensor_readings - WHERE timestamp >= dateadd('d', -7, now()) - GROUP BY sensor_id -) -SELECT - sr.timestamp, - sr.sensor_id, - sr.temperature -FROM sensor_readings sr -INNER JOIN quartiles q ON sr.sensor_id = q.sensor_id -WHERE sr.timestamp >= dateadd('h', -1, now()) - AND sr.temperature >= q.q1 - 1.5 * q.iqr -- Lower fence - AND sr.temperature <= q.q3 + 1.5 * q.iqr -- Upper fence -ORDER BY sr.timestamp; -``` - -**IQR boundaries:** -- Lower fence = Q1 - 1.5 × IQR -- Upper fence = Q3 + 1.5 × IQR -- More robust than z-scores for skewed distributions -- Standard multiplier is 1.5; use 3.0 for more conservative filtering - -## Solution 5: Rate of Change Filter - -Remove values with impossible rate of change: - -```questdb-sql demo title="Filter based on maximum rate of change" -WITH deltas AS ( - SELECT - timestamp, - sensor_id, - temperature, - temperature - lag(temperature) OVER (PARTITION BY sensor_id ORDER BY timestamp) as temp_change, - timestamp - lag(timestamp) OVER (PARTITION BY sensor_id ORDER BY timestamp) as time_diff_micros - FROM sensor_readings - WHERE timestamp >= dateadd('h', -24, now()) -) -SELECT - timestamp, - sensor_id, - temperature, - temp_change, - (temp_change / (time_diff_micros / 60000000.0)) as change_per_minute -FROM deltas -WHERE temp_change IS NULL -- Keep first reading - OR ABS(temp_change / (time_diff_micros / 60000000.0)) <= 5.0 -- Max 5°C per minute -ORDER BY timestamp; -``` - -**Use case:** -- Temperature can't change by 50°C in 1 minute (physical impossibility) -- Stock prices can't change by 100% in 1 second (circuit breaker rules) -- Sensor readings limited by 
physical constraints - -## Combination: Multi-Method Outlier Detection - -Use multiple methods and flag values detected by any: - -```questdb-sql demo title="Flag outliers using multiple methods" -WITH stats AS ( - SELECT - sensor_id, - avg(temperature) as mean, - stddev(temperature) as stddev, - percentile(temperature, 1) as p01, - percentile(temperature, 99) as p99 - FROM sensor_readings - WHERE timestamp >= dateadd('d', -7, now()) - GROUP BY sensor_id -), -flagged AS ( - SELECT - sr.timestamp, - sr.sensor_id, - sr.temperature, - CASE WHEN ABS((sr.temperature - stats.mean) / stats.stddev) > 3 THEN 1 ELSE 0 END as outlier_zscore, - CASE WHEN sr.temperature < stats.p01 OR sr.temperature > stats.p99 THEN 1 ELSE 0 END as outlier_percentile, - CASE WHEN sr.temperature < 0 OR sr.temperature > 50 THEN 1 ELSE 0 END as outlier_range - FROM sensor_readings sr - INNER JOIN stats ON sr.sensor_id = stats.sensor_id - WHERE sr.timestamp >= dateadd('h', -1, now()) -) -SELECT - timestamp, - sensor_id, - temperature, - (outlier_zscore + outlier_percentile + outlier_range) as outlier_score, - CASE - WHEN (outlier_zscore + outlier_percentile + outlier_range) >= 2 THEN 'OUTLIER' - WHEN (outlier_zscore + outlier_percentile + outlier_range) = 1 THEN 'SUSPICIOUS' - ELSE 'NORMAL' - END as classification -FROM flagged -WHERE (outlier_zscore + outlier_percentile + outlier_range) = 0 -- Keep only clean data -ORDER BY timestamp; -``` - -Only keep values that pass all three tests. 
- -## Replace Outliers with Interpolation - -Instead of removing, replace outliers with interpolated values: - -```questdb-sql demo title="Replace outliers with linear interpolation" -WITH moving_avg AS ( - SELECT - timestamp, - sensor_id, - temperature, - avg(temperature) OVER ( - PARTITION BY sensor_id - ORDER BY timestamp - ROWS BETWEEN 10 PRECEDING AND 10 FOLLOWING - ) as ma, - stddev(temperature) OVER ( - PARTITION BY sensor_id - ORDER BY timestamp - ROWS BETWEEN 10 PRECEDING AND 10 FOLLOWING - ) as stddev - FROM sensor_readings - WHERE timestamp >= dateadd('h', -24, now()) -) +```sql SELECT - timestamp, - sensor_id, - CASE - WHEN ABS(temperature - ma) > 3 * stddev THEN ma -- Replace outlier with moving average - ELSE temperature -- Keep original value - END as temperature_cleaned -FROM moving_avg -ORDER BY timestamp; + timestamp, symbol, + first(price) AS open, + last(price) AS close, + min(price), + max(price), + sum(amount) AS volume +FROM trades +WHERE timestamp > dateadd('M', -1, now()) +SAMPLE BY 1d ALIGN TO CALENDAR; ``` -This preserves data density while smoothing anomalies. 
-
-## Aggregated Data with Outlier Removal
-
-Calculate clean aggregates by filtering outliers first:
-
-```questdb-sql demo title="Hourly average with outliers removed"
-WITH filtered AS (
-  SELECT
-    timestamp,
-    sensor_id,
-    temperature
-  FROM sensor_readings sr
-  WHERE timestamp >= dateadd('d', -1, now())
-    AND temperature BETWEEN (
-      SELECT percentile(temperature, 1) FROM sensor_readings
-      WHERE sensor_id = sr.sensor_id AND timestamp >= dateadd('d', -7, now())
-    ) AND (
-      SELECT percentile(temperature, 99) FROM sensor_readings
-      WHERE sensor_id = sr.sensor_id AND timestamp >= dateadd('d', -7, now())
-    )
-)
-SELECT
-  timestamp,
-  sensor_id,
-  avg(temperature) as avg_temp,
-  min(temperature) as min_temp,
-  max(temperature) as max_temp,
-  count(*) as reading_count
-FROM filtered
-SAMPLE BY 1h
-ORDER BY timestamp;
-```

The question is: is there a way to select only those trades where the traded amount deviates significantly from recent patterns?

-**Results show clean aggregates without spike distortion.**

## Solution

-## Grafana Visualization: Before and After

Use a window function to compute the moving average of the amount, then `SAMPLE BY` in an outer query and compare each value against that moving average. You can compute the average over the whole dataset (omit `ORDER BY` and `RANGE` from the window definition), or make it relative to an interval in the past.
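The first variant — averaging over the whole dataset — can be sketched as follows against the demo `trades` table. The CTE name `trade_averages` and the 1% deviation threshold are illustrative assumptions, not part of the original example:

```sql
-- Hypothetical sketch: compare each trade's amount against the
-- per-symbol average over the whole filtered range. With no ORDER BY
-- or RANGE in the window definition, the average spans the entire partition.
WITH trade_averages AS (
  SELECT timestamp, symbol, price, amount,
    avg(amount) OVER (PARTITION BY symbol) AS avg_amount
  FROM trades
  WHERE timestamp > dateadd('M', -1, now())
)
SELECT timestamp, symbol, price, amount
FROM trade_averages
WHERE ABS(avg_amount - amount) > avg_amount * 0.01;
```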
-Show both raw and cleaned data for comparison:
+This query compares each trade's amount with its moving average over a window spanning from 7 days to 1 day before the current row (a six-day window that excludes the most recent day):
-
-```questdb-sql demo title="Overlay raw and cleaned data for Grafana"
-WITH moving_avg AS (
-  SELECT
-    timestamp,
-    sensor_id,
-    temperature,
-    avg(temperature) OVER (
-      PARTITION BY sensor_id
-      ORDER BY timestamp
-      ROWS BETWEEN 10 PRECEDING AND 10 FOLLOWING
-    ) as ma,
-    stddev(temperature) OVER (
-      PARTITION BY sensor_id
+```questdb-sql demo title="Filter outliers using 7-day moving average"
+WITH moving_trades AS (
+  SELECT timestamp, symbol, price, amount,
+    avg(amount) OVER (
+      PARTITION BY symbol
       ORDER BY timestamp
-      ROWS BETWEEN 10 PRECEDING AND 10 FOLLOWING
-    ) as stddev
-  FROM sensor_readings
-  WHERE timestamp >= dateadd('h', -6, now())
-    AND sensor_id = 'S001'
+      RANGE BETWEEN 7 days PRECEDING AND 1 day PRECEDING
+    ) moving_avg_7_days
+  FROM trades
+  WHERE timestamp > dateadd('d', -37, now())
 )
 SELECT
-  timestamp as time,
-  temperature as "Raw Data",
-  CASE
-    WHEN ABS(temperature - ma) <= 2 * stddev THEN temperature
-    ELSE NULL
-  END as "Cleaned Data"
-FROM moving_avg
-ORDER BY timestamp;
-```
-
-Grafana will show both series, making outliers visually obvious as gaps in the "Cleaned Data" series.
-
-## Performance Considerations
-
-**Pre-calculate thresholds for repeated queries:**
-
-```sql
--- Create table with outlier thresholds
-CREATE TABLE sensor_thresholds AS
-SELECT
-  sensor_id,
-  avg(temperature) as mean,
-  stddev(temperature) as stddev,
-  percentile(temperature, 1) as p01,
-  percentile(temperature, 99) as p99
-FROM sensor_readings
-WHERE timestamp >= dateadd('d', -30, now())
-GROUP BY sensor_id;
-
--- Fast filtering using pre-calculated thresholds
-SELECT sr.*
-FROM sensor_readings sr
-INNER JOIN sensor_thresholds st ON sr.sensor_id = st.sensor_id
-WHERE ABS((sr.temperature - st.mean) / st.stddev) <= 3;
+  timestamp, symbol,
+  first(price) AS open,
+  last(price) AS close,
+  min(price),
+  max(price),
+  sum(amount) AS volume
+FROM moving_trades
+WHERE timestamp > dateadd('M', -1, now())
+  AND moving_avg_7_days IS NOT NULL
+  AND ABS(moving_avg_7_days - amount) > moving_avg_7_days * 0.01
+SAMPLE BY 1d ALIGN TO CALENDAR;
 ```
-**Use SYMBOL type for sensor_id:**
-
-```sql
-CREATE TABLE sensor_readings (
-  timestamp TIMESTAMP,
-  sensor_id SYMBOL, -- Fast lookups and joins
-  temperature DOUBLE
-) TIMESTAMP(timestamp) PARTITION BY DAY;
-```
-
-## Choosing the Right Method
-
-| Method | Best For | Pros | Cons |
-|--------|----------|------|------|
-| **Moving Average** | Smoothly varying data with occasional spikes | Adaptive to local trends | Requires window tuning |
-| **Percentiles** | Skewed distributions | Robust to outliers | Less sensitive to mild anomalies |
-| **Z-Score** | Normally distributed data | Simple, well-understood | Assumes normal distribution |
-| **IQR** | Robust detection needed | Not affected by extreme outliers | May miss subtle anomalies |
-| **Rate of Change** | Physical constraints known | Catches impossible values | Requires domain knowledge |
-
-## Common Pitfalls
-
-**Don't calculate stats on already-filtered data:**
-
-```sql
--- Bad: Circular logic
-WITH filtered AS (
-  SELECT * FROM data WHERE value < (SELECT avg(value) FROM
data) -) -SELECT avg(value) FROM filtered; -- Not meaningful! - --- Good: Calculate stats on full historical dataset -WITH stats AS ( - SELECT avg(value) as baseline FROM data WHERE timestamp >= dateadd('d', -30, now()) -) -SELECT * FROM recent_data WHERE value < baseline.value; -``` - -**Consider seasonality:** - -```sql --- Bad: Compare summer temps to winter average -SELECT * FROM readings WHERE temp < (SELECT avg(temp) FROM readings); - --- Good: Compare to same time of year -SELECT * FROM readings r -WHERE temp < ( - SELECT avg(temp) - FROM readings - WHERE month(timestamp) = month(r.timestamp) -); -``` - -:::tip When to Remove vs Flag Outliers -- **Remove**: For clean aggregates, visualizations, or ML training data -- **Flag**: For monitoring, alerts, or investigating sensor malfunctions -- **Replace**: When data density must be preserved (e.g., for resampling) -::: - -:::warning False Positives -Aggressive outlier removal can filter legitimate extreme events: -- Legitimate price movements during market crashes -- Actual temperature spikes during equipment failure -- Real traffic surges during viral events - -Balance cleanliness with preserving genuine anomalies worth investigating. 
-::: - :::info Related Documentation - [Window functions](/docs/reference/sql/select/#window-functions) -- [stddev()](/docs/reference/function/aggregation/#stddev) -- [percentile()](/docs/reference/function/aggregation/#percentile) -- [LAG()](/docs/reference/function/window/#lag) +- [AVG window function](/docs/reference/function/window/#avg) +- [SAMPLE BY](/docs/reference/sql/select/#sample-by) +- [ALIGN TO CALENDAR](/docs/reference/sql/select/#align-to-calendar-time-zone) ::: diff --git a/documentation/playbook/sql/time-series/sample-by-interval-bounds.md b/documentation/playbook/sql/time-series/sample-by-interval-bounds.md index ea724630e..c249664c3 100644 --- a/documentation/playbook/sql/time-series/sample-by-interval-bounds.md +++ b/documentation/playbook/sql/time-series/sample-by-interval-bounds.md @@ -1,289 +1,52 @@ --- -title: Adjust SAMPLE BY Interval Bounds +title: Right Interval Bound with SAMPLE BY sidebar_label: Interval bounds -description: Shift SAMPLE BY timestamps to use right interval bound instead of left bound for alignment with period end times +description: Shift SAMPLE BY timestamps to use right interval bound instead of left bound --- -Adjust SAMPLE BY timestamps to display the end of each interval rather than the beginning. By default, QuestDB labels aggregated intervals with their start time, but you may want to label them with their end time for reporting or alignment purposes. +Use the right interval bound (end of interval) instead of the left bound (start of interval) for SAMPLE BY timestamps. -## Problem: Need Right Bound Labeling +## Problem -You aggregate trades into 15-minute intervals: +Records are grouped in a 15-minute interval. For example, records between 2025-03-22T00:00:00.000000Z and 2025-03-22T00:15:00.000000Z are aggregated with timestamp 2025-03-22T00:00:00.000000Z. 
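For illustration, a minimal sketch of that default left-bound labeling, using the demo `trades` table (each 15-minute bucket is stamped with its start time):

```sql
SELECT
  timestamp, symbol,
  first(price) AS open,
  last(price) AS close
FROM trades
WHERE symbol = 'BTC-USDT' AND timestamp IN today()
SAMPLE BY 15m;
```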
-```sql -SELECT - timestamp, - symbol, - first(price) AS open, - last(price) AS close -FROM trades -WHERE symbol = 'BTC-USDT' -SAMPLE BY 15m; -``` - -**Default output (left bound):** - -| timestamp | open | close | -|-----------|------|-------| -| 00:00:00 | 61000 | 61050 | ← Trades from 00:00:00 to 00:14:59 -| 00:15:00 | 61050 | 61100 | ← Trades from 00:15:00 to 00:29:59 -| 00:30:00 | 61100 | 61150 | ← Trades from 00:30:00 to 00:44:59 - -You want the timestamp to show **00:15:00**, **00:30:00**, **00:45:00** (the **end** of each period). +You want the aggregation to show 2025-03-22T00:15:00.000000Z (the right bound of the interval rather than left). -## Solution: Add Interval to Timestamp +## Solution -Use `dateadd()` to shift timestamps by the interval duration: +Simply shift the timestamp in the SELECT: -```questdb-sql demo title="SAMPLE BY with right bound timestamps" +```questdb-sql demo title="SAMPLE BY with right bound" SELECT - dateadd('m', 15, timestamp) as timestamp, - symbol, - first(price) AS open, - last(price) AS close, - min(price), - max(price), - sum(amount) AS volume + dateadd('m', 15, timestamp) AS timestamp, symbol, + first(price) AS open, + last(price) AS close, + min(price), + max(price), + sum(amount) AS volume FROM trades WHERE symbol = 'BTC-USDT' AND timestamp IN today() SAMPLE BY 15m; ``` -**Output (right bound):** - -| timestamp | open | close | min | max | volume | -|-----------|------|-------|-----|-----|--------| -| 00:15:00 | 61000 | 61050 | 60990 | 61055 | 123.45 | -| 00:30:00 | 61050 | 61100 | 61040 | 61110 | 98.76 | -| 00:45:00 | 61100 | 61150 | 61095 | 61160 | 145.32 | - -Now each row is labeled with the **end** of the interval it represents. - -## How It Works - -### Default Left Bound - -```sql -SAMPLE BY 15m -``` - -QuestDB internally: -1. Rounds down timestamps to interval boundaries -2. Aggregates data within each [start, end) bucket -3. 
Labels with the interval start time - -### Shifted Right Bound +Note that on executing this query, QuestDB is not displaying the timestamp in green on the web console. This is because we are not outputting the original designated timestamp, but a derived column. If you are not going to use this query in a subquery, then you are good to go. But if you want to use the output of this query in a subquery that requires a designated timestamp, you could do something like this to force sort order by the derived timestamp column: ```sql -dateadd('m', 15, timestamp) -``` - -Adds 15 minutes to each timestamp: -- Original: `00:00:00` → Shifted: `00:15:00` -- Original: `00:15:00` → Shifted: `00:30:00` - -The aggregation still happens over the same data; only the label changes. - -## Important Consideration: Designated Timestamp - -When you shift the timestamp, it's no longer the "designated timestamp" for the row: - -```questdb-sql demo title="Notice timestamp color in web console" -SELECT - dateadd('m', 15, timestamp) as timestamp, - symbol, - first(price) AS open -FROM trades -SAMPLE BY 15m; -``` - -In the QuestDB web console, the shifted timestamp appears in **regular font**, not **green** (designated timestamp color), because it's now a derived column, not the original designated timestamp. - -### Impact on Subsequent Operations - -If you use this query as a subquery and need ordering or window functions: - -```sql --- Force ordering by the derived timestamp ( - SELECT - dateadd('m', 15, timestamp) as timestamp, - symbol, - first(price) AS open - FROM trades - SAMPLE BY 15m -) ORDER BY timestamp; -``` - -The `ORDER BY` ensures the derived timestamp is used for ordering. - -## Different Intervals - -**1-hour intervals:** -```sql SELECT - dateadd('h', 1, timestamp) as timestamp, - ... 
+ dateadd('m', 15, timestamp) AS timestamp, symbol, + first(price) AS open, + last(price) AS close, + min(price), + max(price), + sum(amount) AS volume FROM trades -SAMPLE BY 1h; -``` - -**5-minute intervals:** -```sql -SELECT - dateadd('m', 5, timestamp) as timestamp, - ... -FROM trades -SAMPLE BY 5m; -``` - -**1-day intervals:** -```sql -SELECT - dateadd('d', 1, timestamp) as timestamp, - ... -FROM trades -SAMPLE BY 1d; -``` - -**30-second intervals:** -```sql -SELECT - dateadd('s', 30, timestamp) as timestamp, - ... -FROM trades -SAMPLE BY 30s; -``` - -## With Time Range Filtering - -Combine with Grafana macros: - -```sql -SELECT - dateadd('m', 15, timestamp) as timestamp, - symbol, - first(price) AS open, - last(price) AS close -FROM trades -WHERE $__timeFilter(timestamp) -SAMPLE BY 15m; -``` - -Or with explicit time range: - -```sql -SELECT - dateadd('m', 15, timestamp) as timestamp, - ... -FROM trades -WHERE timestamp >= '2025-01-15T00:00:00' - AND timestamp < '2025-01-16T00:00:00' -SAMPLE BY 15m; -``` - -## Alternative: Keep Both Boundaries - -Show both start and end of each interval: - -```questdb-sql demo title="Show both interval start and end" -SELECT - timestamp as interval_start, - dateadd('m', 15, timestamp) as interval_end, - symbol, - first(price) AS open, - last(price) AS close -FROM trades -WHERE symbol = 'BTC-USDT' -SAMPLE BY 15m; -``` - -**Output:** - -| interval_start | interval_end | open | close | -|----------------|--------------|------|-------| -| 00:00:00 | 00:15:00 | 61000 | 61050 | -| 00:15:00 | 00:30:00 | 61050 | 61100 | - -This makes it explicit which period each row represents. - -## Use Cases - -**Financial reporting:** -- Trading periods often labeled by close time -- "End of day" reports use day's end timestamp -- Quarterly reports labeled Q1 end, Q2 end, etc. 
- -**Billing periods:** -- Monthly usage from Jan 1 to Jan 31 labeled as "Jan 31" -- Hourly electricity usage labeled by hour end - -**SLA monitoring:** -- Availability windows labeled by period end -- "99.9% uptime for hour ending at 14:00" - -**Compliance:** -- Some regulations require end-of-period timestamps -- Audit trails with closing timestamps - -## Grafana Visualization - -When using with Grafana time-series charts, the shifted timestamp aligns with the period represented: - -```sql -SELECT - dateadd('m', 15, timestamp) as time, - avg(price) as value, - symbol as metric -FROM trades -WHERE $__timeFilter(timestamp) - AND symbol IN ('BTC-USDT', 'ETH-USDT') -SAMPLE BY 15m; -``` - -The chart will show data points at 00:15, 00:30, 00:45, etc., representing the aggregated values for the 15 minutes ending at those times. - -## Center of Interval - -For some visualizations, you may want to label with the interval midpoint: - -```sql -SELECT - dateadd('m', 7.5, timestamp) as timestamp, -- 7.5 minutes = halfway through 15min - ... -FROM trades -SAMPLE BY 15m; -``` - -Use decimal minutes for fractional intervals (7.5 minutes = 7 minutes 30 seconds). - -## Alignment with Calendar - -For calendar-aligned intervals: - -```sql -SELECT - dateadd('d', 1, timestamp) as timestamp, - ... -FROM trades -SAMPLE BY 1d ALIGN TO CALENDAR; +WHERE symbol = 'BTC-USDT' AND timestamp IN today() +SAMPLE BY 15m +) ORDER BY timestamp; ``` -With `ALIGN TO CALENDAR`, day boundaries align to midnight UTC (or configured timezone). The shifted timestamp then represents the end of each calendar day. 
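The same shift works for any sampling interval, as long as the `dateadd()` unit and amount match the `SAMPLE BY` clause. For instance, a sketch for hourly bars:

```sql
SELECT
  dateadd('h', 1, timestamp) AS timestamp, symbol,
  first(price) AS open,
  last(price) AS close,
  min(price),
  max(price),
  sum(amount) AS volume
FROM trades
WHERE symbol = 'BTC-USDT' AND timestamp IN today()
SAMPLE BY 1h;
```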
- -:::tip Left vs Right Bound -- **Left bound (default)**: Common in databases and programming - interval [start, end) -- **Right bound (shifted)**: Common in business reporting - "value as of end of period" -- **Choose based on domain**: Financial data often uses right bound, technical data often uses left bound -::: - -:::warning Timestamp in Green -When QuestDB web console displays timestamp in green, it indicates the designated timestamp column. After applying `dateadd()`, the timestamp is no longer "designated" - it's a derived column. This doesn't affect query correctness, only console display. -::: - :::info Related Documentation - [SAMPLE BY](/docs/reference/sql/select/#sample-by) - [dateadd()](/docs/reference/function/date-time/#dateadd) -- [ALIGN TO CALENDAR](/docs/reference/sql/select/#align-to-calendar-time-zones) -- [Designated timestamp](/docs/concept/designated-timestamp/) ::: diff --git a/documentation/playbook/sql/time-series/sparse-sensor-data.md b/documentation/playbook/sql/time-series/sparse-sensor-data.md index de44ece4b..05f18fbd8 100644 --- a/documentation/playbook/sql/time-series/sparse-sensor-data.md +++ b/documentation/playbook/sql/time-series/sparse-sensor-data.md @@ -1,387 +1,131 @@ --- title: Join Strategies for Sparse Sensor Data sidebar_label: Sparse sensor data -description: Compare CROSS JOIN, LEFT JOIN, and ASOF JOIN strategies for combining data from sensors that report at different times +description: Compare CROSS JOIN, LEFT JOIN, and ASOF JOIN strategies for combining data from sensors stored in separate tables --- -Combine data from multiple sensors that report at different times and frequencies. This guide compares three join strategies—CROSS JOIN, LEFT JOIN, and ASOF JOIN—showing when to use each for optimal results. +Efficiently query sparse sensor data by splitting wide tables into narrow tables and joining them with different strategies. 
-## Problem: Sensors Report at Different Times +## Problem -You have three temperature sensors with different reporting schedules: +You have a sparse sensors table with 120 sensor columns, in which you are getting just a few sensor values at any given timestamp, so most values are null. -**Sensor A (every 1 minute):** -| timestamp | temperature | -|-----------|-------------| -| 10:00:00 | 22.5 | -| 10:01:00 | 22.7 | -| 10:02:00 | 22.6 | +When you want to query data from any given sensor, you first SAMPLE the data with an `avg` or a `last_not_null` function aggregation, and then often build a CTE and call `LATEST ON` to get results: -**Sensor B (every 2 minutes):** -| timestamp | temperature | -|-----------|-------------| -| 10:00:00 | 23.1 | -| 10:02:00 | 23.3 | - -**Sensor C (irregular):** -| timestamp | temperature | -|-----------|-------------| -| 10:00:30 | 21.8 | -| 10:01:45 | 22.0 | - -You want to analyze all sensors together, but their timestamps don't align. - -## Strategy 1: CROSS JOIN for Complete Combinations - -Generate all possible combinations of readings across sensors: - -```questdb-sql demo title="CROSS JOIN all sensor combinations" -WITH sensor_a AS ( - SELECT timestamp as ts_a, temperature as temp_a - FROM sensor_readings - WHERE sensor_id = 'A' - AND timestamp >= '2025-01-15T10:00:00' - AND timestamp < '2025-01-15T10:10:00' -), -sensor_b AS ( - SELECT timestamp as ts_b, temperature as temp_b - FROM sensor_readings - WHERE sensor_id = 'B' - AND timestamp >= '2025-01-15T10:00:00' - AND timestamp < '2025-01-15T10:10:00' -) -SELECT - sensor_a.ts_a, - sensor_a.temp_a, - sensor_b.ts_b, - sensor_b.temp_b, - ABS(sensor_a.ts_a - sensor_b.ts_b) / 1000000 as time_diff_seconds -FROM sensor_a -CROSS JOIN sensor_b -WHERE ABS(sensor_a.ts_a - sensor_b.ts_b) < 30000000 -- Within 30 seconds -ORDER BY sensor_a.ts_a, sensor_b.ts_b; -``` - -**Results:** - -| ts_a | temp_a | ts_b | temp_b | time_diff_seconds | -|------|--------|------|--------|-------------------| -| 
10:00:00 | 22.5 | 10:00:00 | 23.1 | 0 | -| 10:01:00 | 22.7 | 10:00:00 | 23.1 | 60 | ← Matched to previous B reading -| 10:02:00 | 22.6 | 10:02:00 | 23.3 | 0 | - -**When to use:** -- Small datasets (CROSS JOIN creates N × M rows) -- Need all combinations within a time window -- Analyzing correlation between sensors with tolerance - -**Pros:** -- Simple to understand -- Captures all possible pairings -- Can filter by time difference after joining - -**Cons:** -- Explodes result set (cartesian product) -- Not scalable for large datasets -- May create duplicate matches - -## Strategy 2: LEFT JOIN on Common Intervals - -Resample both sensors to common intervals, then join: - -```questdb-sql demo title="LEFT JOIN after resampling to common intervals" -WITH sensor_a_resampled AS ( - SELECT timestamp, first(temperature) as temp_a - FROM sensor_readings - WHERE sensor_id = 'A' - AND timestamp >= '2025-01-15T10:00:00' - AND timestamp < '2025-01-15T10:10:00' - SAMPLE BY 1m FILL(PREV) -), -sensor_b_resampled AS ( - SELECT timestamp, first(temperature) as temp_b - FROM sensor_readings - WHERE sensor_id = 'B' - AND timestamp >= '2025-01-15T10:00:00' - AND timestamp < '2025-01-15T10:10:00' - SAMPLE BY 1m FILL(PREV) -) +```sql SELECT - sensor_a_resampled.timestamp, - sensor_a_resampled.temp_a, - sensor_b_resampled.temp_b, - (sensor_a_resampled.temp_a - sensor_b_resampled.temp_b) as temp_difference -FROM sensor_a_resampled -LEFT JOIN sensor_b_resampled - ON sensor_a_resampled.timestamp = sensor_b_resampled.timestamp -ORDER BY sensor_a_resampled.timestamp; + timestamp, + vehicle_id, + avg(sensor_1) AS avg_sensor_1, avg(sensor_2) AS avg_sensor_2, + ... 
+  avg(sensor_119) AS avg_sensor_119, avg(sensor_120) AS avg_sensor_120
+FROM
+  vehicle_sensor_data
+-- WHERE vehicle_id = 'AAA0000'
+SAMPLE BY 30s
+LIMIT 100000;
```

This works, but it is not very fast (about 1 second for 10 million rows, in a table with 120 sensor columns and 10k distinct vehicle_ids), and it is also not very efficient, because the `null` values still take up space on disk.

## Solution: Multiple Narrow Tables with Joins

A single table works, but there is a more efficient (although a bit more cumbersome, if you compose queries by hand) way to do this.

You can create 120 tables, one per sensor, rather than a single table with 120 columns. Technically, you probably want 121 tables: one for the common dimensions, plus one per sensor. Or, if some groups of sensors always report in sync, N+1 tables: one for the common dimensions, plus one per sensor group. In any case, rather than one wide table you end up with several narrow tables that you need to join.

Now for joining the tables there are three potential ways, depending on the results you are after:
+ * To see the _LATEST_ known value for all the metrics _for a given series_, use a `CROSS JOIN` strategy (example below). This returns a single row.
+ * To see the _LATEST_ known value for all the metrics and _for all or several series_, use a `LEFT JOIN` strategy. This returns a single row per series (example below). + * To see the _rolling view of all the latest known values_ regarding the current row for one of the metrics, use an `ASOF JOIN` strategy. This returns as many rows as you have in the main metric you are querying (example below). -## Strategy 3: ASOF JOIN for Temporal Proximity +### Performance -Match each sensor A reading with the most recent sensor B reading: +The three approaches perform well. The three queries were executed on a table like the initial one, with 10 million rows representing sparse data from 10k series and across 120 metrics, so 120 tables. Each of the 120 tables had ~83k records (which times 120 is ~10 million rows). -```questdb-sql demo title="ASOF JOIN to match most recent readings" -SELECT - a.timestamp as ts_a, - a.temperature as temp_a, - b.timestamp as ts_b, - b.temperature as temp_b, - (a.timestamp - b.timestamp) / 1000000 as seconds_since_b_reading, - (a.temperature - b.temperature) as temp_difference -FROM sensor_readings a -ASOF JOIN sensor_readings b - ON a.sensor_id = 'A' AND b.sensor_id = 'B' -WHERE a.sensor_id = 'A' - AND a.timestamp >= '2025-01-15T10:00:00' - AND a.timestamp < '2025-01-15T10:10:00' -ORDER BY a.timestamp; -``` - -**Results:** - -| ts_a | temp_a | ts_b | temp_b | seconds_since_b_reading | temp_difference | -|------|--------|------|--------|-------------------------|-----------------| -| 10:00:00 | 22.5 | 10:00:00 | 23.1 | 0 | -0.6 | -| 10:01:00 | 22.7 | 10:00:00 | 23.1 | 60 | -0.4 | ← Most recent B reading -| 10:02:00 | 22.6 | 10:02:00 | 23.3 | 0 | -0.7 | +`CROSS JOIN` is the fastest, executing in 23ms, `ASOF JOIN` is second with 123 ms, and `LEFT JOIN` is the slowest at 880ms. Still not too bad, as you probably will not want to get all the sensors from all the devices all the time, and joining fewer tables would perform better. 
-**When to use:** -- Need point-in-time comparison (what was B when A reported?) -- Sensors report at irregular intervals -- Want actual timestamps, not resampled intervals -- Large datasets (very efficient) +## Strategy 1: CROSS JOIN -**Pros:** -- Extremely fast (optimized for time-series) -- No data synthesis (uses actual readings) -- Handles irregular timestamps naturally -- Scalable to millions of rows +We first find the latest point in each of the 120 tables for the given series (AAA0000), so we get a value per table, and then do a `CROSS JOIN`, to get a single row. -**Cons:** -- More complex syntax -- May need to filter by max time difference -- Requires understanding of ASOF semantics - -## Comparison: Three Sensors Combined - -Combine three sensors using ASOF JOIN: - -```questdb-sql demo title="ASOF JOIN multiple sensors" -SELECT - a.timestamp as ts_a, - a.temperature as temp_a, - b.timestamp as ts_b, - b.temperature as temp_b, - c.timestamp as ts_c, - c.temperature as temp_c, - (a.temperature + b.temperature + c.temperature) / 3 as avg_temperature -FROM sensor_readings a -ASOF JOIN sensor_readings b - ON a.sensor_id = 'A' AND b.sensor_id = 'B' -ASOF JOIN sensor_readings c - ON a.sensor_id = 'A' AND c.sensor_id = 'C' -WHERE a.sensor_id = 'A' - AND a.timestamp >= '2025-01-15T10:00:00' - AND a.timestamp < '2025-01-15T10:10:00' -ORDER BY a.timestamp; +```sql +WITH +s1 AS (SELECT timestamp, vehicle_id, value FROM vehicle_sensor_1 + WHERE vehicle_id = 'AAA0000' LATEST ON timestamp PARTITION BY vehicle_id), +s2 AS (SELECT timestamp, vehicle_id, value FROM vehicle_sensor_2 + WHERE vehicle_id = 'AAA0000' LATEST ON timestamp PARTITION BY vehicle_id), +... 
+s119 AS (SELECT timestamp, vehicle_id, value FROM vehicle_sensor_119 + WHERE vehicle_id = 'AAA0000' LATEST ON timestamp PARTITION BY vehicle_id), +s120 AS (SELECT timestamp, vehicle_id, value FROM vehicle_sensor_120 + WHERE vehicle_id = 'AAA0000' LATEST ON timestamp PARTITION BY vehicle_id) +SELECT s1.timestamp, s1.vehicle_id, s1.value AS value_1, +s2.value AS value_2, +... +s119.value AS value_119, +s120.value AS value_120 +FROM s1 +CROSS JOIN s2 +CROSS JOIN ... +CROSS JOIN s119 +CROSS JOIN s120; ``` -Each sensor A reading is matched with the most recent reading from sensors B and C. - -## Filtering by Maximum Time Difference - -Ensure joined readings aren't too stale: - -```questdb-sql demo title="ASOF JOIN with staleness filter" -WITH joined AS ( - SELECT - a.timestamp as ts_a, - a.temperature as temp_a, - b.timestamp as ts_b, - b.temperature as temp_b, - (a.timestamp - b.timestamp) as time_diff_micros - FROM sensor_readings a - ASOF JOIN sensor_readings b - ON a.sensor_id = 'A' AND b.sensor_id = 'B' - WHERE a.sensor_id = 'A' - AND a.timestamp >= '2025-01-15T10:00:00' - AND a.timestamp < '2025-01-15T10:10:00' -) -SELECT * -FROM joined -WHERE time_diff_micros <= 120000000 -- B reading not older than 2 minutes -ORDER BY ts_a; +## Strategy 2: LEFT JOIN + +We first find the latest point in each of the 120 tables for each series, so we get a value per table and series, and then do a `LEFT JOIN` on the series ID, to get a single row for each different series (10K rows in our example). + +```sql +WITH +s1 AS (SELECT timestamp, vehicle_id, value FROM vehicle_sensor_1 + LATEST ON timestamp PARTITION BY vehicle_id), +s2 AS (SELECT timestamp, vehicle_id, value FROM vehicle_sensor_2 + LATEST ON timestamp PARTITION BY vehicle_id), +... 
+s119 AS (SELECT timestamp, vehicle_id, value FROM vehicle_sensor_119 + LATEST ON timestamp PARTITION BY vehicle_id), +s120 AS (SELECT timestamp, vehicle_id, value FROM vehicle_sensor_120 + LATEST ON timestamp PARTITION BY vehicle_id) +SELECT s1.timestamp, s1.vehicle_id, s1.value AS value_1, +s2.value AS value_2, +... +s119.value AS value_119, +s120.value AS value_120 +FROM s1 +LEFT JOIN s2 ON s1.vehicle_id = s2.vehicle_id +LEFT JOIN ... +LEFT JOIN s119 ON s1.vehicle_id = s119.vehicle_id +LEFT JOIN s120 ON s1.vehicle_id = s120.vehicle_id; ``` -This filters out matches where sensor B's reading is too old. - -## LT JOIN for Strictly Before - -Use LT JOIN when you need readings strictly before (not at the same time): - -```questdb-sql demo title="LT JOIN for strictly previous reading" -SELECT - a.timestamp as ts_a, - a.temperature as temp_a, - b.timestamp as ts_b, - b.temperature as temp_b, - (a.temperature - b.temperature) as temp_change -FROM sensor_readings a -LT JOIN sensor_readings b - ON a.sensor_id = 'A' AND b.sensor_id = 'A' -- Same sensor, previous reading -WHERE a.sensor_id = 'A' - AND a.timestamp >= '2025-01-15T10:00:00' - AND a.timestamp < '2025-01-15T10:10:00' -ORDER BY a.timestamp; +## Strategy 3: ASOF JOIN + +We get all the rows in all the tables, then do an `ASOF JOIN` on the series ID, so we get a row for each row of the first table in the query, in our example ~83K results. + +```sql +WITH +s1 AS (SELECT timestamp, vehicle_id, value FROM vehicle_sensor_1 ), +s2 AS (SELECT timestamp, vehicle_id, value FROM vehicle_sensor_2 ), +... +s118 AS (SELECT timestamp, vehicle_id, value FROM vehicle_sensor_118 ), +s119 AS (SELECT timestamp, vehicle_id, value FROM vehicle_sensor_119 ), +s120 AS (SELECT timestamp, vehicle_id, value FROM vehicle_sensor_120 ) +SELECT s1.timestamp, s1.vehicle_id, s1.value AS value_1, + s2.value AS value_2, + ... + s119.value AS value_119, + s120.value AS value_120 +FROM s1 +ASOF JOIN s2 ON s1.vehicle_id = s2.vehicle_id +ASOF JOIN ... 
+ASOF JOIN s119 ON s1.vehicle_id = s119.vehicle_id +ASOF JOIN s120 ON s1.vehicle_id = s120.vehicle_id; ``` -This matches each reading with the strictly previous reading from the same sensor (useful for calculating deltas). - -## Handling NULL Results - -ASOF JOIN returns NULL when no previous reading exists: - -```questdb-sql demo title="Handle NULL from ASOF JOIN" -SELECT - a.timestamp as ts_a, - a.temperature as temp_a, - COALESCE(b.temperature, a.temperature) as temp_b, -- Use A if B is NULL - CASE - WHEN b.timestamp IS NULL THEN 'NO_PREVIOUS_READING' - ELSE 'OK' - END as status -FROM sensor_readings a -ASOF JOIN sensor_readings b - ON a.sensor_id = 'A' AND b.sensor_id = 'B' -WHERE a.sensor_id = 'A' -ORDER BY a.timestamp; -``` - -## Performance Comparison - -| Strategy | Rows Generated | Query Speed | Memory Usage | Best For | -|----------|----------------|-------------|--------------|----------| -| **CROSS JOIN** | N × M | Slow | High | Small datasets, all combinations | -| **LEFT JOIN** | N | Medium | Medium | Regular intervals, visualization | -| **ASOF JOIN** | N | Fast | Low | Large datasets, irregular data | - -**Benchmark example (1M rows each):** -- CROSS JOIN: ~30 seconds, creates 1T rows (filtered to 1M) -- LEFT JOIN: ~5 seconds, creates 1M rows -- ASOF JOIN: ~0.5 seconds, creates 1M rows - -## Combining Strategies - -Use resampling + ASOF for best of both worlds: - -```questdb-sql demo title="Resample then ASOF JOIN" -WITH sensor_a_minute AS ( - SELECT timestamp, first(temperature) as temp_a - FROM sensor_readings - WHERE sensor_id = 'A' - SAMPLE BY 1m -) -SELECT - a.timestamp, - a.temp_a, - b.temperature as temp_b_asof -FROM sensor_a_minute a -ASOF JOIN sensor_readings b - ON b.sensor_id = 'B' -WHERE a.timestamp >= '2025-01-15T10:00:00' -ORDER BY a.timestamp; -``` - -- Resample sensor A for regular intervals -- Use ASOF JOIN to find sensor B readings without resampling B - -## Grafana Multi-Sensor Dashboard - -Format for Grafana with multiple series: 
- -```questdb-sql demo title="Multi-sensor data for Grafana" -WITH sensor_a AS ( - SELECT timestamp, first(temperature) as temperature - FROM sensor_readings - WHERE sensor_id = 'A' - AND $__timeFilter(timestamp) - SAMPLE BY $__interval FILL(PREV) -), -sensor_b AS ( - SELECT timestamp, first(temperature) as temperature - FROM sensor_readings - WHERE sensor_id = 'B' - AND $__timeFilter(timestamp) - SAMPLE BY $__interval FILL(PREV) -) -SELECT timestamp as time, 'Sensor A' as metric, temperature as value FROM sensor_a -UNION ALL -SELECT timestamp as time, 'Sensor B' as metric, temperature as value FROM sensor_b -ORDER BY time; -``` - -Creates separate series for each sensor in Grafana. - -## Decision Matrix - -**Choose CROSS JOIN when:** -- Datasets are small (< 10K rows each) -- You need all possible combinations -- Time tolerance is flexible (e.g., "within 1 minute") -- Analyzing correlation between sensors - -**Choose LEFT JOIN when:** -- You can resample to common intervals -- Clean, aligned timestamps are important -- Visualizing in Grafana with multiple sensors -- Forward-filling is acceptable - -**Choose ASOF JOIN when:** -- Datasets are large (> 100K rows) -- Timestamps are irregular -- Point-in-time accuracy matters -- Query performance is critical -- You want actual readings, not interpolated values - -:::tip ASOF JOIN is Usually Best -For most real-world sensor data scenarios, ASOF JOIN offers the best combination of performance, accuracy, and simplicity. It's specifically designed for time-series data and handles irregular intervals naturally. -::: - -:::warning CROSS JOIN Explosion -Never use CROSS JOIN without a strong WHERE filter on large tables. A CROSS JOIN of two 1M-row tables creates 1 trillion rows before filtering! - -Safe: `CROSS JOIN ... WHERE ABS(a.ts - b.ts) < threshold` -Dangerous: `CROSS JOIN ... 
` (without WHERE on time difference) -::: - :::info Related Documentation - [ASOF JOIN](/docs/reference/sql/join/#asof-join) -- [LT JOIN](/docs/reference/sql/join/#lt-join) - [LEFT JOIN](/docs/reference/sql/join/#left-join) -- [SAMPLE BY](/docs/reference/sql/select/#sample-by) -- [FILL strategies](/docs/reference/sql/select/#fill) +- [CROSS JOIN](/docs/reference/sql/join/#cross-join) +- [LATEST ON](/docs/reference/sql/select/#latest-on) ::: diff --git a/documentation/sidebars.js b/documentation/sidebars.js index 9487581c9..efe021a21 100644 --- a/documentation/sidebars.js +++ b/documentation/sidebars.js @@ -667,8 +667,6 @@ module.exports = { label: "SQL Recipes", collapsed: true, items: [ - "playbook/sql/force-designated-timestamp", - "playbook/sql/rows-before-after-value-match", { type: "category", label: "Finance", @@ -689,6 +687,7 @@ module.exports = { label: "Time-Series Patterns", collapsed: true, items: [ + "playbook/sql/time-series/force-designated-timestamp", "playbook/sql/time-series/latest-n-per-partition", "playbook/sql/time-series/session-windows", "playbook/sql/time-series/latest-activity-window", @@ -706,6 +705,7 @@ module.exports = { label: "Advanced SQL", collapsed: true, items: [ + "playbook/sql/advanced/rows-before-after-value-match", "playbook/sql/advanced/top-n-plus-others", "playbook/sql/advanced/pivot-table", "playbook/sql/advanced/unpivot-table", @@ -742,6 +742,7 @@ module.exports = { label: "Programmatic", collapsed: true, items: [ + "playbook/programmatic/tls-ca-configuration", { type: "category", label: "PHP", @@ -756,13 +757,6 @@ module.exports = { "playbook/programmatic/ruby/inserting-ilp", ], }, - { - type: "category", - label: "Rust", - items: [ - "playbook/programmatic/rust/tls-configuration", - ], - }, { type: "category", label: "C++", From bb1fbbcaddcbff78b1c38ccf885483ecc31d640b Mon Sep 17 00:00:00 2001 From: javier Date: Fri, 19 Dec 2025 00:19:41 +0100 Subject: [PATCH 14/21] fixing broken links --- 
documentation/playbook/operations/csv-import-milliseconds.md | 2 +- documentation/playbook/operations/docker-compose-config.md | 4 ++-- documentation/playbook/operations/query-times-histogram.md | 2 +- documentation/playbook/programmatic/tls-ca-configuration.md | 2 +- 4 files changed, 5 insertions(+), 5 deletions(-) diff --git a/documentation/playbook/operations/csv-import-milliseconds.md b/documentation/playbook/operations/csv-import-milliseconds.md index b05599aec..4ea485be2 100644 --- a/documentation/playbook/operations/csv-import-milliseconds.md +++ b/documentation/playbook/operations/csv-import-milliseconds.md @@ -63,5 +63,5 @@ Read the CSV line-by-line and convert, then send via the ILP client. :::info Related Documentation - [CSV import](/docs/web-console/import-csv/) - [ILP ingestion](/docs/ingestion-overview/) -- [read_parquet()](/docs/reference/function/table/#read_parquet) +- [read_parquet()](/docs/reference/function/parquet/) ::: diff --git a/documentation/playbook/operations/docker-compose-config.md b/documentation/playbook/operations/docker-compose-config.md index fa2691952..86b739755 100644 --- a/documentation/playbook/operations/docker-compose-config.md +++ b/documentation/playbook/operations/docker-compose-config.md @@ -93,7 +93,7 @@ services: ## Complete Configuration Reference For a full list of available configuration parameters, see: -- [Server Configuration Reference](/docs/reference/configuration/) - All configurable parameters with descriptions +- [Server Configuration Reference](/docs/configuration/) - All configurable parameters with descriptions - [Docker Deployment Guide](/docs/deployment/docker/) - Docker-specific setup instructions ## Verifying Configuration @@ -131,7 +131,7 @@ If you encounter permission errors with mounted volumes, ensure the QuestDB cont ::: :::info Related Documentation -- [Server Configuration Reference](/docs/configuration/) +- [Server Configuration](/docs/configuration/) - [Docker Deployment 
Guide](/docs/deployment/docker/) - [PostgreSQL Wire Protocol](/docs/reference/api/postgres/) ::: diff --git a/documentation/playbook/operations/query-times-histogram.md b/documentation/playbook/operations/query-times-histogram.md index 963715c28..c1e4ee87a 100644 --- a/documentation/playbook/operations/query-times-histogram.md +++ b/documentation/playbook/operations/query-times-histogram.md @@ -108,6 +108,6 @@ Query tracing needs to be enabled for the `_query_trace` table to be populated. ::: :::info Related Documentation -- [_query_trace system table](/docs/reference/system-tables/#_query_trace) +- [Query tracing](/docs/concept/query-tracing/) - [approx_percentile() function](/docs/reference/function/aggregation/#approx_percentile) ::: diff --git a/documentation/playbook/programmatic/tls-ca-configuration.md b/documentation/playbook/programmatic/tls-ca-configuration.md index 1f581e8f5..edf2d9d54 100644 --- a/documentation/playbook/programmatic/tls-ca-configuration.md +++ b/documentation/playbook/programmatic/tls-ca-configuration.md @@ -97,6 +97,6 @@ The examples are in Rust but the concepts are similar in other languages. 
Check :::info Related Documentation - [QuestDB Rust client](https://docs.rs/questdb/) - [QuestDB Python client](/docs/clients/ingest-python/) -- [QuestDB C++ client](/docs/clients/ingest-cpp/) +- [QuestDB C++ client](/docs/clients/ingest-c-and-cpp/) - [QuestDB TLS configuration](/docs/operations/tls/) ::: From d0cb029c16c763174406f5b98d2701e136f6df7d Mon Sep 17 00:00:00 2001 From: javier Date: Fri, 19 Dec 2025 00:31:40 +0100 Subject: [PATCH 15/21] fixing links --- .../playbook/integrations/grafana/dynamic-table-queries.md | 2 +- .../playbook/integrations/grafana/overlay-timeshift.md | 2 +- documentation/playbook/programmatic/ruby/inserting-ilp.md | 4 ++-- .../playbook/sql/advanced/consistent-histogram-buckets.md | 4 ++-- .../playbook/sql/advanced/general-and-sampled-aggregates.md | 2 +- documentation/playbook/sql/advanced/pivot-table.md | 2 +- .../playbook/sql/advanced/rows-before-after-value-match.md | 4 ++-- documentation/playbook/sql/advanced/sankey-funnel.md | 2 +- documentation/playbook/sql/advanced/top-n-plus-others.md | 2 +- documentation/playbook/sql/finance/bollinger-bands.md | 4 ++-- documentation/playbook/sql/finance/compound-interest.md | 2 +- documentation/playbook/sql/finance/cumulative-product.md | 2 +- documentation/playbook/sql/finance/rolling-stddev.md | 2 +- documentation/playbook/sql/finance/tick-trin.md | 2 +- documentation/playbook/sql/finance/volume-spike.md | 2 +- documentation/playbook/sql/finance/vwap.md | 4 ++-- documentation/playbook/sql/time-series/epoch-timestamps.md | 2 +- .../playbook/sql/time-series/expand-power-over-time.md | 4 ++-- .../playbook/sql/time-series/fill-missing-intervals.md | 2 +- .../playbook/sql/time-series/force-designated-timestamp.md | 2 +- .../playbook/sql/time-series/latest-n-per-partition.md | 2 +- documentation/playbook/sql/time-series/remove-outliers.md | 6 +++--- .../playbook/sql/time-series/sample-by-interval-bounds.md | 2 +- documentation/playbook/sql/time-series/session-windows.md | 2 +- 
.../playbook/sql/time-series/sparse-sensor-data.md | 2 +- 25 files changed, 33 insertions(+), 33 deletions(-) diff --git a/documentation/playbook/integrations/grafana/dynamic-table-queries.md b/documentation/playbook/integrations/grafana/dynamic-table-queries.md index a048f32d6..babc9eaf7 100644 --- a/documentation/playbook/integrations/grafana/dynamic-table-queries.md +++ b/documentation/playbook/integrations/grafana/dynamic-table-queries.md @@ -220,6 +220,6 @@ This pattern assumes all tables have identical schemas. If schemas differ: - [ASOF JOIN](/docs/reference/sql/join/#asof-join) - [tables() function](/docs/reference/function/meta/#tables) - [string_agg()](/docs/reference/function/aggregation/#string_agg) -- [SAMPLE BY](/docs/reference/sql/select/#sample-by) +- [SAMPLE BY](/docs/reference/sql/sample-by/) - [Grafana QuestDB data source](https://grafana.com/grafana/plugins/questdb-questdb-datasource/) ::: diff --git a/documentation/playbook/integrations/grafana/overlay-timeshift.md b/documentation/playbook/integrations/grafana/overlay-timeshift.md index afbbf4330..f3ae233e3 100644 --- a/documentation/playbook/integrations/grafana/overlay-timeshift.md +++ b/documentation/playbook/integrations/grafana/overlay-timeshift.md @@ -67,6 +67,6 @@ This creates an overlay chart where yesterday's and today's data align on the sa :::info Related Documentation - [UNION ALL](/docs/reference/sql/union-except-intersect/) -- [Window functions](/docs/reference/sql/select/#window-functions) +- [Window functions](/docs/reference/sql/over/) - [Grafana integration](/docs/third-party-tools/grafana/) ::: diff --git a/documentation/playbook/programmatic/ruby/inserting-ilp.md b/documentation/playbook/programmatic/ruby/inserting-ilp.md index a4b27dd9f..219bd71ab 100644 --- a/documentation/playbook/programmatic/ruby/inserting-ilp.md +++ b/documentation/playbook/programmatic/ruby/inserting-ilp.md @@ -349,7 +349,7 @@ TCP ILP has no acknowledgments. 
If the connection drops, data may be lost silent :::info Related Documentation - [ILP reference](/docs/reference/api/ilp/overview/) -- [ILP over HTTP](/docs/reference/api/ilp/overview/#http) -- [ILP over TCP](/docs/reference/api/ilp/overview/#tcp) +- [ILP over HTTP](/docs/reference/api/ilp/overview/#transport-selection) +- [ILP over TCP](/docs/reference/api/ilp/overview/#transport-selection) - [InfluxDB Ruby client](https://github.com/influxdata/influxdb-client-ruby) ::: diff --git a/documentation/playbook/sql/advanced/consistent-histogram-buckets.md b/documentation/playbook/sql/advanced/consistent-histogram-buckets.md index 379158bd8..8daa6d9d7 100644 --- a/documentation/playbook/sql/advanced/consistent-histogram-buckets.md +++ b/documentation/playbook/sql/advanced/consistent-histogram-buckets.md @@ -419,6 +419,6 @@ Grafana heatmaps require: :::info Related Documentation - [Aggregate functions](/docs/reference/function/aggregation/) - [CAST function](/docs/reference/sql/cast/) -- [percentile()](/docs/reference/function/aggregation/#percentile) -- [Window functions](/docs/reference/sql/select/#window-functions) +- [percentile()](/docs/reference/function/aggregation/#approx_percentile) +- [Window functions](/docs/reference/sql/over/) ::: diff --git a/documentation/playbook/sql/advanced/general-and-sampled-aggregates.md b/documentation/playbook/sql/advanced/general-and-sampled-aggregates.md index 19fb69365..4582262b1 100644 --- a/documentation/playbook/sql/advanced/general-and-sampled-aggregates.md +++ b/documentation/playbook/sql/advanced/general-and-sampled-aggregates.md @@ -97,6 +97,6 @@ Grafana will plot both series, making it easy to see when current values deviate :::info Related Documentation - [CROSS JOIN](/docs/reference/sql/join/#cross-join) -- [SAMPLE BY](/docs/reference/sql/select/#sample-by) +- [SAMPLE BY](/docs/reference/sql/sample-by/) - [Grafana integration](/docs/third-party-tools/grafana/) ::: diff --git 
a/documentation/playbook/sql/advanced/pivot-table.md b/documentation/playbook/sql/advanced/pivot-table.md index 07ad07ba5..9018d656b 100644 --- a/documentation/playbook/sql/advanced/pivot-table.md +++ b/documentation/playbook/sql/advanced/pivot-table.md @@ -95,6 +95,6 @@ For unknown or dynamic column lists, you'll need to generate the CASE statements :::info Related Documentation - [CASE expressions](/docs/reference/sql/case/) -- [SAMPLE BY aggregation](/docs/reference/function/aggregation/#sample-by) +- [SAMPLE BY aggregation](/docs/reference/sql/sample-by/) - [Aggregation functions](/docs/reference/function/aggregation/) ::: diff --git a/documentation/playbook/sql/advanced/rows-before-after-value-match.md b/documentation/playbook/sql/advanced/rows-before-after-value-match.md index 4cb23ef68..816a95db8 100644 --- a/documentation/playbook/sql/advanced/rows-before-after-value-match.md +++ b/documentation/playbook/sql/advanced/rows-before-after-value-match.md @@ -132,6 +132,6 @@ This is a workaround since QuestDB doesn't have `ROWS FOLLOWING` syntax yet. 
:::info Related Documentation - [LAG window function](/docs/reference/function/window/#lag) - [LEAD window function](/docs/reference/function/window/#lead) -- [Window functions overview](/docs/reference/sql/select/#window-functions) -- [Window frame clauses](/docs/reference/sql/select/#frame-clause) +- [Window functions overview](/docs/reference/sql/over/) +- [Window frame clauses](/docs/reference/sql/over/#frame-types-and-behavior) ::: diff --git a/documentation/playbook/sql/advanced/sankey-funnel.md b/documentation/playbook/sql/advanced/sankey-funnel.md index da6afe8d4..c84cce2ff 100644 --- a/documentation/playbook/sql/advanced/sankey-funnel.md +++ b/documentation/playbook/sql/advanced/sankey-funnel.md @@ -116,7 +116,7 @@ This format works directly with: - **Grafana Flow plugin**: Source/target/value format :::info Related Documentation -- [Window functions](/docs/reference/sql/select/#window-functions) +- [Window functions](/docs/reference/sql/over/) - [LAG function](/docs/reference/function/window/#lag) - [Grafana integration](/docs/third-party-tools/grafana/) ::: diff --git a/documentation/playbook/sql/advanced/top-n-plus-others.md b/documentation/playbook/sql/advanced/top-n-plus-others.md index 3533ae7dc..d7b9c9ab2 100644 --- a/documentation/playbook/sql/advanced/top-n-plus-others.md +++ b/documentation/playbook/sql/advanced/top-n-plus-others.md @@ -362,5 +362,5 @@ If there are N or fewer distinct values, the "Others" row won't appear (or will - [rank() window function](/docs/reference/function/window/#rank) - [row_number() window function](/docs/reference/function/window/#row_number) - [CASE expressions](/docs/reference/sql/case/) -- [Window functions](/docs/reference/sql/select/#window-functions) +- [Window functions](/docs/reference/sql/over/) ::: diff --git a/documentation/playbook/sql/finance/bollinger-bands.md b/documentation/playbook/sql/finance/bollinger-bands.md index 095fc3b5e..7822ba4a9 100644 --- 
a/documentation/playbook/sql/finance/bollinger-bands.md +++ b/documentation/playbook/sql/finance/bollinger-bands.md @@ -168,8 +168,8 @@ Note the addition of `PARTITION BY symbol` to calculate separate Bollinger Bands ::: :::info Related Documentation -- [Window functions](/docs/reference/sql/select/#window-functions) +- [Window functions](/docs/reference/sql/over/) - [AVG window function](/docs/reference/function/window/#avg) - [SQRT function](/docs/reference/function/numeric/#sqrt) -- [Window frame clauses](/docs/reference/sql/select/#frame-clause) +- [Window frame clauses](/docs/reference/sql/over/#frame-types-and-behavior) ::: diff --git a/documentation/playbook/sql/finance/compound-interest.md b/documentation/playbook/sql/finance/compound-interest.md index cca0c1dd1..a8882e530 100644 --- a/documentation/playbook/sql/finance/compound-interest.md +++ b/documentation/playbook/sql/finance/compound-interest.md @@ -106,7 +106,7 @@ For more complex scenarios like monthly or quarterly compounding, adjust the tim :::info Related Documentation - [POWER function](/docs/reference/function/numeric/#power) -- [Window functions](/docs/reference/sql/select/#window-functions) +- [Window functions](/docs/reference/sql/over/) - [FIRST_VALUE window function](/docs/reference/function/window/#first_value) - [long_sequence](/docs/reference/function/row-generator/#long_sequence) ::: diff --git a/documentation/playbook/sql/finance/cumulative-product.md b/documentation/playbook/sql/finance/cumulative-product.md index aa27609a2..be4053f9a 100644 --- a/documentation/playbook/sql/finance/cumulative-product.md +++ b/documentation/playbook/sql/finance/cumulative-product.md @@ -117,7 +117,7 @@ This pattern is essential for Monte Carlo simulations in finance. 
Generate rando ::: :::info Related Documentation -- [Window functions](/docs/reference/sql/select/#window-functions) +- [Window functions](/docs/reference/sql/over/) - [Mathematical functions](/docs/reference/function/numeric/) - [SUM aggregate](/docs/reference/function/aggregation/#sum) ::: diff --git a/documentation/playbook/sql/finance/rolling-stddev.md b/documentation/playbook/sql/finance/rolling-stddev.md index 049b84cd2..3769ef0cc 100644 --- a/documentation/playbook/sql/finance/rolling-stddev.md +++ b/documentation/playbook/sql/finance/rolling-stddev.md @@ -66,7 +66,7 @@ FROM I first get the rolling average/mean, then from that I get the variance, and then I can do the `sqrt` to get the standard deviation as requested. :::info Related Documentation -- [Window functions](/docs/reference/sql/select/#window-functions) +- [Window functions](/docs/reference/sql/over/) - [AVG window function](/docs/reference/function/window/#avg) - [POWER function](/docs/reference/function/numeric/#power) - [SQRT function](/docs/reference/function/numeric/#sqrt) diff --git a/documentation/playbook/sql/finance/tick-trin.md b/documentation/playbook/sql/finance/tick-trin.md index f60b6703e..07621c1aa 100644 --- a/documentation/playbook/sql/finance/tick-trin.md +++ b/documentation/playbook/sql/finance/tick-trin.md @@ -185,7 +185,7 @@ The first buy or sell transaction will produce NULL values for some calculations ::: :::info Related Documentation -- [Window functions](/docs/reference/sql/select/#window-functions) +- [Window functions](/docs/reference/sql/over/) - [SUM aggregate](/docs/reference/function/aggregation/#sum) - [CASE expressions](/docs/reference/sql/case/) ::: diff --git a/documentation/playbook/sql/finance/volume-spike.md b/documentation/playbook/sql/finance/volume-spike.md index 841981285..340cad85d 100644 --- a/documentation/playbook/sql/finance/volume-spike.md +++ b/documentation/playbook/sql/finance/volume-spike.md @@ -48,6 +48,6 @@ FROM prev_volumes; :::info Related 
Documentation - [LAG window function](/docs/reference/function/window/#lag) -- [SAMPLE BY](/docs/reference/sql/select/#sample-by) +- [SAMPLE BY](/docs/reference/sql/sample-by/) - [CASE expressions](/docs/reference/sql/case/) ::: diff --git a/documentation/playbook/sql/finance/vwap.md b/documentation/playbook/sql/finance/vwap.md index 1880f9313..9f3c30ced 100644 --- a/documentation/playbook/sql/finance/vwap.md +++ b/documentation/playbook/sql/finance/vwap.md @@ -124,7 +124,7 @@ WHERE timestamp >= dateadd('h', -1, now()) ::: :::info Related Documentation -- [Window functions](/docs/reference/sql/select/#window-functions) +- [Window functions](/docs/reference/sql/over/) - [SUM aggregate](/docs/reference/function/aggregation/#sum) -- [SAMPLE BY](/docs/reference/sql/select/#sample-by) +- [SAMPLE BY](/docs/reference/sql/sample-by/) ::: diff --git a/documentation/playbook/sql/time-series/epoch-timestamps.md b/documentation/playbook/sql/time-series/epoch-timestamps.md index fa4e11d0f..ceedda95c 100644 --- a/documentation/playbook/sql/time-series/epoch-timestamps.md +++ b/documentation/playbook/sql/time-series/epoch-timestamps.md @@ -25,6 +25,6 @@ WHERE timestamp BETWEEN 1746552420000000 AND 1746811620000000; Nanoseconds can be used when the timestamp column is of type `timestamp_ns`. 
:::info Related Documentation -- [Timestamp types](/docs/reference/sql/datatypes/#timestamp-and-date) +- [Timestamp types](/docs/reference/sql/datatypes/#timestamp-and-date-considerations) - [WHERE clause](/docs/reference/sql/where/) ::: diff --git a/documentation/playbook/sql/time-series/expand-power-over-time.md b/documentation/playbook/sql/time-series/expand-power-over-time.md index 16d1feb83..163b5946d 100644 --- a/documentation/playbook/sql/time-series/expand-power-over-time.md +++ b/documentation/playbook/sql/time-series/expand-power-over-time.md @@ -79,8 +79,8 @@ The final query divides the `wh` reported in the session by the number of `attri **Note:** If you want to filter the results by timestamp or operationId, you should add the filter at the first query (the one named `sampled`), so the rest of the process is done on the relevant subset of data. :::info Related Documentation -- [SAMPLE BY](/docs/reference/sql/select/#sample-by) +- [SAMPLE BY](/docs/reference/sql/sample-by/) - [FILL](/docs/reference/sql/select/#fill) -- [Window functions](/docs/reference/sql/select/#window-functions) +- [Window functions](/docs/reference/sql/over/) - [FIRST_VALUE](/docs/reference/function/window/#first_value) ::: diff --git a/documentation/playbook/sql/time-series/fill-missing-intervals.md b/documentation/playbook/sql/time-series/fill-missing-intervals.md index f23054a10..94a8f89dc 100644 --- a/documentation/playbook/sql/time-series/fill-missing-intervals.md +++ b/documentation/playbook/sql/time-series/fill-missing-intervals.md @@ -33,6 +33,6 @@ The `FILL` keyword applies the same strategy to all columns in the result set. Q To achieve different fill strategies for different columns, you would need to use separate queries with UNION ALL, or handle the conditional filling logic in your application layer. 
:::info Related Documentation -- [SAMPLE BY](/docs/reference/sql/select/#sample-by) +- [SAMPLE BY](/docs/reference/sql/sample-by/) - [FILL keyword](/docs/reference/sql/select/#fill) ::: diff --git a/documentation/playbook/sql/time-series/force-designated-timestamp.md b/documentation/playbook/sql/time-series/force-designated-timestamp.md index f8a2d5c0a..6b92d77e8 100644 --- a/documentation/playbook/sql/time-series/force-designated-timestamp.md +++ b/documentation/playbook/sql/time-series/force-designated-timestamp.md @@ -75,5 +75,5 @@ The `TIMESTAMP()` keyword requires that the data is already sorted by the timest :::info Related Documentation - [Designated Timestamp concept](/docs/concept/designated-timestamp/) - [TIMESTAMP keyword reference](/docs/reference/sql/select/#timestamp) -- [SAMPLE BY aggregation](/docs/reference/function/aggregation/#sample-by) +- [SAMPLE BY aggregation](/docs/reference/sql/sample-by/) ::: diff --git a/documentation/playbook/sql/time-series/latest-n-per-partition.md b/documentation/playbook/sql/time-series/latest-n-per-partition.md index 5243409cd..f2f679e38 100644 --- a/documentation/playbook/sql/time-series/latest-n-per-partition.md +++ b/documentation/playbook/sql/time-series/latest-n-per-partition.md @@ -262,6 +262,6 @@ The number of rows returned is `N × number_of_partitions`. 
If you have 100 symb :::info Related Documentation - [row_number() window function](/docs/reference/function/window/#row_number) - [LATEST ON](/docs/reference/sql/latest-on/) -- [Window functions](/docs/reference/sql/select/#window-functions) +- [Window functions](/docs/reference/sql/over/) - [LIMIT](/docs/reference/sql/select/#limit) ::: diff --git a/documentation/playbook/sql/time-series/remove-outliers.md b/documentation/playbook/sql/time-series/remove-outliers.md index 33e935efb..031ec2e0a 100644 --- a/documentation/playbook/sql/time-series/remove-outliers.md +++ b/documentation/playbook/sql/time-series/remove-outliers.md @@ -59,8 +59,8 @@ SAMPLE BY 1d ALIGN TO CALENDAR; ``` :::info Related Documentation -- [Window functions](/docs/reference/sql/select/#window-functions) +- [Window functions](/docs/reference/sql/over/) - [AVG window function](/docs/reference/function/window/#avg) -- [SAMPLE BY](/docs/reference/sql/select/#sample-by) -- [ALIGN TO CALENDAR](/docs/reference/sql/select/#align-to-calendar-time-zone) +- [SAMPLE BY](/docs/reference/sql/sample-by/) +- [ALIGN TO CALENDAR](/docs/reference/sql/sample-by/#align-to-calendar) ::: diff --git a/documentation/playbook/sql/time-series/sample-by-interval-bounds.md b/documentation/playbook/sql/time-series/sample-by-interval-bounds.md index c249664c3..2036aaa14 100644 --- a/documentation/playbook/sql/time-series/sample-by-interval-bounds.md +++ b/documentation/playbook/sql/time-series/sample-by-interval-bounds.md @@ -47,6 +47,6 @@ SAMPLE BY 15m ``` :::info Related Documentation -- [SAMPLE BY](/docs/reference/sql/select/#sample-by) +- [SAMPLE BY](/docs/reference/sql/sample-by/) - [dateadd()](/docs/reference/function/date-time/#dateadd) ::: diff --git a/documentation/playbook/sql/time-series/session-windows.md b/documentation/playbook/sql/time-series/session-windows.md index 5fa2d2130..c0e17870f 100644 --- a/documentation/playbook/sql/time-series/session-windows.md +++ 
b/documentation/playbook/sql/time-series/session-windows.md @@ -329,6 +329,6 @@ The first row in each partition will have `NULL` for previous values. Always fil :::info Related Documentation - [first_value() window function](/docs/reference/function/window/#first_value) - [LAG window function](/docs/reference/function/window/#lag) -- [Window functions](/docs/reference/sql/select/#window-functions) +- [Window functions](/docs/reference/sql/over/) - [datediff()](/docs/reference/function/date-time/#datediff) ::: diff --git a/documentation/playbook/sql/time-series/sparse-sensor-data.md b/documentation/playbook/sql/time-series/sparse-sensor-data.md index 05f18fbd8..06013328c 100644 --- a/documentation/playbook/sql/time-series/sparse-sensor-data.md +++ b/documentation/playbook/sql/time-series/sparse-sensor-data.md @@ -125,7 +125,7 @@ ASOF JOIN s120 ON s1.vehicle_id = s120.vehicle_id; :::info Related Documentation - [ASOF JOIN](/docs/reference/sql/join/#asof-join) -- [LEFT JOIN](/docs/reference/sql/join/#left-join) +- [LEFT JOIN](/docs/reference/sql/join/#left-outer-join) - [CROSS JOIN](/docs/reference/sql/join/#cross-join) - [LATEST ON](/docs/reference/sql/select/#latest-on) ::: From 2495dba5ddf63a4b92056eac129a09e38358dfa3 Mon Sep 17 00:00:00 2001 From: javier Date: Fri, 19 Dec 2025 00:44:33 +0100 Subject: [PATCH 16/21] improved recipe --- .../sql/time-series/fill-missing-intervals.md | 47 +++++++++++++------ 1 file changed, 32 insertions(+), 15 deletions(-) diff --git a/documentation/playbook/sql/time-series/fill-missing-intervals.md b/documentation/playbook/sql/time-series/fill-missing-intervals.md index 94a8f89dc..80b67dc36 100644 --- a/documentation/playbook/sql/time-series/fill-missing-intervals.md +++ b/documentation/playbook/sql/time-series/fill-missing-intervals.md @@ -1,38 +1,55 @@ --- -title: Fill Missing Time Intervals -sidebar_label: Fill missing intervals -description: Create regular time intervals and propagate sparse values using FILL with PREV +title: 
Fill Missing Intervals with Value from Another Column
+sidebar_label: Fill from one column
+description: Use window functions to propagate values from one column to fill multiple columns in SAMPLE BY queries
---

-Transform sparse event data into regular time-series by creating fixed intervals and filling gaps with previous values.
+Fill missing intervals using the previous value from a specific column to populate multiple columns.

## Problem

You have a query like this:

```sql
-SELECT timestamp, id, sum(price) as price, sum(dayVolume) as dayVolume
-FROM nasdaq_trades
-WHERE id = 'NVDA'
+SELECT timestamp, symbol, avg(bid_price) as bid_price, avg(ask_price) as ask_price
+FROM core_price
+WHERE symbol = 'EURUSD' AND timestamp IN today()
SAMPLE BY 1s FILL(PREV, PREV);
```

-When there is an interpolation, instead of getting the PREV value for `price` and previous for `dayVolume`, you want both the price and the volume to show the PREV known value for the `dayVolume`. Imagine this SQL was valid:
+But when a gap is filled, instead of `bid_price` and `ask_price` each taking their own PREV value, you want both prices to show the last known value of `ask_price`. Imagine this SQL was valid:

```sql
-SELECT timestamp, id, sum(price) as price, sum(dayVolume) as dayVolume
-FROM nasdaq_trades
-WHERE id = 'NVDA'
-SAMPLE BY 1s FILL(PREV(dayVolume), PREV);
+SELECT timestamp, symbol, avg(bid_price) as bid_price, avg(ask_price) as ask_price
+FROM core_price
+WHERE symbol = 'EURUSD' AND timestamp IN today()
+SAMPLE BY 1s FILL(PREV(ask_price), PREV);
```

## Solution

-The `FILL` keyword applies the same strategy to all columns in the result set. QuestDB does not currently support column-specific fill strategies in a single `SAMPLE BY` clause.
+The only way to do this is in multiple steps within a single query: first get the sampled data, filling gaps with null values; then use a window function to get the last non-null value for the reference column; and finally coalesce the missing columns with this filler value.
+
+```questdb-sql demo title="Fill bid and ask prices with value from ask price"
+WITH sampled AS (
+  SELECT timestamp, symbol, avg(bid_price) as bid_price, avg(ask_price) as ask_price
+  FROM core_price
+  WHERE symbol = 'EURUSD' AND timestamp IN today()
+  SAMPLE BY 1s FILL(null)
+), with_previous_vals AS (
+  SELECT *,
+    last_value(ask_price) IGNORE NULLS OVER(PARTITION BY symbol ORDER BY timestamp) as filler
+  FROM sampled
+)
+SELECT timestamp, symbol, coalesce(bid_price, filler) as bid_price, coalesce(ask_price, filler) as ask_price
+FROM with_previous_vals;
+```

-To achieve different fill strategies for different columns, you would need to use separate queries with UNION ALL, or handle the conditional filling logic in your application layer.
+Note the use of the `IGNORE NULLS` modifier on the window function: it makes sure we always look back until a non-null value is found, rather than only at the previous row.
:::info Related Documentation - [SAMPLE BY](/docs/reference/sql/sample-by/) -- [FILL keyword](/docs/reference/sql/select/#fill) +- [FILL keyword](/docs/reference/sql/sample-by/#fill) +- [Window functions](/docs/reference/sql/over/) +- [last_value()](/docs/reference/function/window/#last_value) ::: From cf18db066d687ef6fc31ca2f816b38a4bb731bdf Mon Sep 17 00:00:00 2001 From: javier Date: Fri, 19 Dec 2025 00:49:56 +0100 Subject: [PATCH 17/21] fixing link --- .../playbook/sql/time-series/fill-missing-intervals.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/documentation/playbook/sql/time-series/fill-missing-intervals.md b/documentation/playbook/sql/time-series/fill-missing-intervals.md index 80b67dc36..f23e79e74 100644 --- a/documentation/playbook/sql/time-series/fill-missing-intervals.md +++ b/documentation/playbook/sql/time-series/fill-missing-intervals.md @@ -49,7 +49,7 @@ Note the use of `IGNORE NULLS` modifier on the window function to make sure we a :::info Related Documentation - [SAMPLE BY](/docs/reference/sql/sample-by/) -- [FILL keyword](/docs/reference/sql/sample-by/#fill) +- [FILL keyword](/docs/reference/sql/sample-by/#fill-keywords) - [Window functions](/docs/reference/sql/over/) - [last_value()](/docs/reference/function/window/#last_value) ::: From a86a25211e71ff7b2ed0ac474c9c9a66f8ecea82 Mon Sep 17 00:00:00 2001 From: javier Date: Fri, 19 Dec 2025 01:02:13 +0100 Subject: [PATCH 18/21] renamed files to match sidebar titles --- ...tor-with-telegraf.md => store-questdb-metrics.md} | 0 ...er-over-time.md => distribute-discrete-values.md} | 12 ++++++++---- ...-missing-intervals.md => fill-from-one-column.md} | 0 documentation/sidebars.js | 6 +++--- 4 files changed, 11 insertions(+), 7 deletions(-) rename documentation/playbook/operations/{monitor-with-telegraf.md => store-questdb-metrics.md} (100%) rename documentation/playbook/sql/time-series/{expand-power-over-time.md => distribute-discrete-values.md} (79%) rename 
documentation/playbook/sql/time-series/{fill-missing-intervals.md => fill-from-one-column.md} (100%) diff --git a/documentation/playbook/operations/monitor-with-telegraf.md b/documentation/playbook/operations/store-questdb-metrics.md similarity index 100% rename from documentation/playbook/operations/monitor-with-telegraf.md rename to documentation/playbook/operations/store-questdb-metrics.md diff --git a/documentation/playbook/sql/time-series/expand-power-over-time.md b/documentation/playbook/sql/time-series/distribute-discrete-values.md similarity index 79% rename from documentation/playbook/sql/time-series/expand-power-over-time.md rename to documentation/playbook/sql/time-series/distribute-discrete-values.md index 163b5946d..a69e9b166 100644 --- a/documentation/playbook/sql/time-series/expand-power-over-time.md +++ b/documentation/playbook/sql/time-series/distribute-discrete-values.md @@ -1,15 +1,19 @@ --- -title: Expand Average Power Over Time -sidebar_label: Expand power over time -description: Distribute average power values across hourly intervals using sessions and window functions +title: Distribute Discrete Values Across Time Intervals +sidebar_label: Distribute discrete values +description: Spread cumulative measurements across time intervals using sessions and window functions --- -Expand discrete energy measurements (watt-hours) across time intervals. When an IoT device sends a `wh` value at discrete timestamps, you can distribute that energy across the hours between measurements to visualize average power consumption per hour. +Distribute discrete cumulative measurements across the time intervals between observations. When devices report cumulative values at irregular timestamps, you can spread those values proportionally across the intervals to get per-period averages. 
+ +This pattern is useful for scenarios like energy consumption, data transfer volumes, accumulated costs, or any metric where a cumulative value needs to be attributed to the intervals that contributed to it. ## Problem You have IoT devices reporting watt-hour (Wh) values at irregular timestamps, identified by an `operationId`. You want to plot the sum of average power per operation, broken down by hour. +When an IoT device sends a `wh` value at discrete timestamps, you need to distribute that energy across the hours between measurements to visualize average power consumption per hour. + Raw data: | timestamp | operationId | wh | diff --git a/documentation/playbook/sql/time-series/fill-missing-intervals.md b/documentation/playbook/sql/time-series/fill-from-one-column.md similarity index 100% rename from documentation/playbook/sql/time-series/fill-missing-intervals.md rename to documentation/playbook/sql/time-series/fill-from-one-column.md diff --git a/documentation/sidebars.js b/documentation/sidebars.js index efe021a21..851f396c7 100644 --- a/documentation/sidebars.js +++ b/documentation/sidebars.js @@ -692,11 +692,11 @@ module.exports = { "playbook/sql/time-series/session-windows", "playbook/sql/time-series/latest-activity-window", "playbook/sql/time-series/filter-by-week", - "playbook/sql/time-series/expand-power-over-time", + "playbook/sql/time-series/distribute-discrete-values", "playbook/sql/time-series/epoch-timestamps", "playbook/sql/time-series/sample-by-interval-bounds", "playbook/sql/time-series/remove-outliers", - "playbook/sql/time-series/fill-missing-intervals", + "playbook/sql/time-series/fill-from-one-column", "playbook/sql/time-series/sparse-sensor-data", ], }, @@ -772,7 +772,7 @@ module.exports = { collapsed: true, items: [ "playbook/operations/docker-compose-config", - "playbook/operations/monitor-with-telegraf", + "playbook/operations/store-questdb-metrics", "playbook/operations/csv-import-milliseconds", "playbook/operations/tls-pgbouncer", 
"playbook/operations/copy-data-between-instances",

From fd2eae860dbdfc2123fc511ed93b5387ec5e3a78 Mon Sep 17 00:00:00 2001
From: javier
Date: Fri, 19 Dec 2025 12:51:08 +0100
Subject: [PATCH 19/21] new playbook entries

---
 .../operations/check-transaction-applied.md | 35 ++++++++++
 .../operations/optimize-many-tables.md | 37 +++++++++++
 .../operations/show-non-default-params.md | 27 ++++++++
 .../fill-keyed-arbitrary-interval.md | 64 +++++++++++++++++++
 .../sql/time-series/fill-prev-with-history.md | 63 ++++++++++++++++++
 .../time-series/force-designated-timestamp.md | 13 ++++
 documentation/sidebars.js | 5 ++
 7 files changed, 244 insertions(+)
 create mode 100644 documentation/playbook/operations/check-transaction-applied.md
 create mode 100644 documentation/playbook/operations/optimize-many-tables.md
 create mode 100644 documentation/playbook/operations/show-non-default-params.md
 create mode 100644 documentation/playbook/sql/time-series/fill-keyed-arbitrary-interval.md
 create mode 100644 documentation/playbook/sql/time-series/fill-prev-with-history.md

diff --git a/documentation/playbook/operations/check-transaction-applied.md b/documentation/playbook/operations/check-transaction-applied.md
new file mode 100644
index 000000000..d52c4026d
--- /dev/null
+++ b/documentation/playbook/operations/check-transaction-applied.md
@@ -0,0 +1,35 @@
+---
+title: Check Transaction Applied After Ingestion
+sidebar_label: Check transaction applied
+description: Verify that all rows ingested into a WAL table are visible for queries in QuestDB
+---
+
+When ingesting data into a WAL table over the ILP protocol, inserts are asynchronous. This playbook shows how to ensure all ingested rows are visible for read-only queries.
+
+## Problem
+
+You're performing a one-time ingestion of a large volume of data over the ILP protocol into a table that uses a Write-Ahead Log (WAL).
Since inserts are asynchronous, you need to confirm that all ingested rows are visible for read-only queries before proceeding with operations. + +## Solution + +Query the `wal_tables()` function to check if the writer transaction matches the sequencer transaction. When these values match, all rows have become visible: + +```questdb-sql demo title="Check applied transactions from WAL files" +SELECT * +FROM wal_tables() +WHERE name = 'core_price' AND writerTxn = sequencerTxn; +``` + +This query returns a row when `writerTxn` equals `sequencerTxn` for your table: +- `writerTxn` is the last committed transaction available for read-only queries +- `sequencerTxn` is the last transaction committed to WAL + +When they match, all WAL transactions have been applied and all rows are visible for queries. + +Another viable approach is to run `SELECT count(*) FROM my_table` and verify the expected row count. + +:::info Related Documentation +- [Write-Ahead Log concept](/docs/concept/write-ahead-log/) +- [Meta functions reference](/docs/reference/function/meta/) +- [InfluxDB Line Protocol overview](/docs/reference/api/ilp/overview/) +::: diff --git a/documentation/playbook/operations/optimize-many-tables.md b/documentation/playbook/operations/optimize-many-tables.md new file mode 100644 index 000000000..3d0b4e570 --- /dev/null +++ b/documentation/playbook/operations/optimize-many-tables.md @@ -0,0 +1,37 @@ +--- +title: Optimize Disk and Memory Usage with Many Tables +sidebar_label: Optimize for many tables +description: Reduce memory and disk usage when running QuestDB with many tables by adjusting memory allocation and disk chunk sizes +--- + +When operating QuestDB with many tables, the default settings may consume more memory and disk space than necessary. This playbook shows how to optimize these resources. + +## Problem + +QuestDB allocates memory for out-of-order inserts per column and table. 
With the default setting of `cairo.o3.column.memory.size=256K`, each table and column uses 512K of memory (2x the configured size). When you have many tables, this memory overhead can become significant. + +Similarly, QuestDB allocates disk space in chunks for columns and indexes. While larger chunks make sense for a single large table, multiple smaller tables benefit from smaller allocation sizes, which can noticeably decrease disk storage usage. + +## Solution + +Reduce memory allocation for out-of-order inserts by setting a smaller `cairo.o3.column.memory.size`. Start with 128K and adjust based on your needs: + +``` +cairo.o3.column.memory.size=128K +``` + +Reduce disk space allocation by configuring smaller page sizes for data and indexes: + +``` +cairo.system.writer.data.append.page.size=128K +cairo.writer.data.append.page.size=128K +cairo.writer.data.index.key.append.page.size=128K +cairo.writer.data.index.value.append.page.size=128K +``` + +These settings should be added to your `server.conf` file or set as environment variables. + +:::info Related Documentation +- [Configuration reference](/docs/configuration/) +- [Capacity planning](/docs/operations/capacity-planning/) +::: diff --git a/documentation/playbook/operations/show-non-default-params.md b/documentation/playbook/operations/show-non-default-params.md new file mode 100644 index 000000000..3ca535401 --- /dev/null +++ b/documentation/playbook/operations/show-non-default-params.md @@ -0,0 +1,27 @@ +--- +title: Show Parameters with Non-Default Values +sidebar_label: Show non-default params +description: List all QuestDB configuration parameters that have been modified from their default values +--- + +When troubleshooting or auditing your QuestDB configuration, it's useful to see which parameters have been changed from their defaults. 
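As an aside, this kind of check is easy to script across several instances, because any SQL statement (including the `SHOW PARAMETERS` form used in the recipe below) can be sent to QuestDB's REST API `/exec` endpoint. A minimal Python sketch of building such a request; the host and port are assumptions for a default local instance:

```python
from urllib.parse import urlencode

# List parameters whose value_source is not 'default' (same SQL as the recipe).
query = "(SHOW PARAMETERS) WHERE value_source <> 'default';"

# QuestDB serves SQL over HTTP via GET /exec?query=...; localhost:9000 is
# the default HTTP port and is an assumption about your deployment.
url = "http://localhost:9000/exec?" + urlencode({"query": query})

# To execute against a live server:
#   import json, urllib.request
#   dataset = json.load(urllib.request.urlopen(url))["dataset"]
print(url)
```

The response is JSON, so the non-default parameters can be collected per host and diffed across a fleet.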
+
+## Problem
+
+You need to identify which configuration parameters have been explicitly set via the configuration file or environment variables, filtering out all parameters that are still using their default values.
+
+## Solution
+
+Query the `SHOW PARAMETERS` command and filter by `value_source` to exclude defaults:
+
+```questdb-sql demo title="Find which params were modified from default values"
+-- Show all parameters modified from their defaults, via conf file or env variable
+(SHOW PARAMETERS) WHERE value_source <> 'default';
+```
+
+This query returns only the parameters that have been explicitly configured, showing their current values and the source of the configuration (e.g., `conf` file or `env` variable).
+
+:::info Related Documentation
+- [SHOW PARAMETERS reference](/docs/reference/sql/show/#show-parameters)
+- [Configuration reference](/docs/configuration/)
+:::

diff --git a/documentation/playbook/sql/time-series/fill-keyed-arbitrary-interval.md b/documentation/playbook/sql/time-series/fill-keyed-arbitrary-interval.md
new file mode 100644
index 000000000..05ca5edd3
--- /dev/null
+++ b/documentation/playbook/sql/time-series/fill-keyed-arbitrary-interval.md
@@ -0,0 +1,64 @@
+---
+title: FILL on Keyed Queries with Arbitrary Intervals
+sidebar_label: FILL keyed arbitrary interval
+description: Use FILL with keyed queries across arbitrary time intervals by sandwiching data with null boundary rows
+---
+
+When using `SAMPLE BY` with `FILL` on keyed queries (queries with non-aggregated columns like symbol), the `FROM/TO` syntax doesn't work. This playbook shows how to fill gaps across an arbitrary time interval for keyed queries.
+
+## Problem
+
+Keyed queries (queries that include non-aggregated columns beyond the timestamp) do not support the `SAMPLE BY FROM x TO y` syntax when using `FILL`. Without this feature, gaps are only filled between the first and last existing row in the filtered results, not across your desired time interval.
+ +For example, if you want to sample by symbol and timestamp bucket with `FILL` for a specific time range, standard approaches will not fill gaps at the beginning or end of your interval. + +## Solution + +"Sandwich" your data by adding artificial boundary rows at the start and end of your time interval using `UNION ALL`. These rows contain your target timestamps with nulls for all other columns: + +```questdb-sql demo title="FILL arbitrary interval with keyed SAMPLE BY" + +DECLARE + @start_ts := dateadd('m', -2, now()), + @end_ts := dateadd('m', 2, now()) +WITH +sandwich AS ( + SELECT * FROM ( + SELECT @start_ts AS timestamp, null AS symbol, null AS open, null AS high, null AS close, null AS low + UNION ALL + SELECT timestamp, symbol, open_mid AS open, high_mid AS high, close_mid AS close, low_mid AS low + FROM core_price_1s + WHERE timestamp BETWEEN @start_ts AND @end_ts + UNION ALL + SELECT @end_ts AS timestamp, null AS symbol, null AS open, null AS high, null AS close, null AS low + ) ORDER BY timestamp +), +sampled AS ( + SELECT + timestamp, + symbol, + first(open) AS open, + first(high) AS high, + first(low) AS low, + first(close) AS close + FROM sandwich + SAMPLE BY 30s + FILL(PREV, PREV, PREV, PREV, 0) +) +SELECT * FROM sampled WHERE open IS NOT NULL AND symbol IN ('EURUSD', 'GBPUSD'); +``` + +This query: +1. Creates boundary rows with null values at the start and end timestamps +2. Combines them with filtered data using `UNION ALL` +3. Applies `ORDER BY timestamp` to preserve the designated timestamp +4. Performs `SAMPLE BY` with `FILL` - gaps are filled across the full interval +5. Filters out the artificial boundary rows using `open IS NOT NULL` + +The boundary rows ensure that gaps are filled from the beginning to the end of your specified interval, not just between existing data points. 
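The "sandwich" step itself is independent of SQL: pin an all-null row at each end of the interval so that a later fill pass has boundaries to span. A plain-Python sketch of just that step (illustrative only; `sandwich` is a hypothetical helper and rows are simplified to dictionaries):

```python
def sandwich(rows, start_ts, end_ts, value_keys):
    """Add artificial all-null boundary rows at start_ts and end_ts so that a
    subsequent forward fill covers the whole interval, then re-sort by time."""
    boundary = dict.fromkeys(value_keys)  # every value column set to None
    padded = [
        {"timestamp": start_ts, **boundary},
        *rows,
        {"timestamp": end_ts, **boundary},
    ]
    return sorted(padded, key=lambda r: r["timestamp"])

# One real row inside a [0, 10] interval; the boundaries are synthetic.
rows = [{"timestamp": 5, "symbol": "EURUSD", "open": 1.08}]
padded = sandwich(rows, start_ts=0, end_ts=10, value_keys=["symbol", "open"])
```

As in the SQL version, the synthetic boundary rows are dropped after filling (the `open IS NOT NULL` filter plays that role above).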
:::info Related Documentation
+- [SAMPLE BY aggregation](/docs/reference/sql/sample-by/)
+- [FILL keyword](/docs/reference/sql/sample-by/#fill-keywords)
+- [Designated timestamp](/docs/concept/designated-timestamp/)
+:::

diff --git a/documentation/playbook/sql/time-series/fill-prev-with-history.md b/documentation/playbook/sql/time-series/fill-prev-with-history.md
new file mode 100644
index 000000000..4b3a17d75
--- /dev/null
+++ b/documentation/playbook/sql/time-series/fill-prev-with-history.md
@@ -0,0 +1,63 @@
+---
+title: FILL PREV with Historical Values
+sidebar_label: FILL PREV with history
+description: Use FILL(PREV) with a filler row to carry historical values into a filtered time interval
+---
+
+When using `FILL(PREV)` with `SAMPLE BY` on a filtered time interval, gaps at the beginning may have null values because `PREV` only uses values from within the filtered interval. This playbook shows how to carry forward the last known value from before the interval.
+
+## Problem
+
+When you filter a time range and use `FILL(PREV)` or `FILL(LINEAR)`, QuestDB only considers values within the filtered interval. If the first sample bucket has no data, it will be null instead of carrying forward the last known value from before the interval.
+
+## Solution
+
+Use a "filler row" by querying the latest value before the filtered interval with `LIMIT -1`, then combine it with your filtered data using `UNION ALL`.
The filler row provides the initial value for `FILL(PREV)` to use: + +```questdb-sql demo title="FILL with PREV values carried over last row before the time range in the WHERE" +DECLARE + @start_ts := dateadd('s', -3, now()), + @end_ts := now() +WITH +filler_row AS ( + SELECT timestamp, open_mid AS open, high_mid AS high, close_mid AS close, low_mid AS low + FROM core_price_1s + WHERE timestamp < @start_ts + LIMIT -1 +), +sandwich AS ( + SELECT * FROM ( + SELECT * FROM filler_row + UNION ALL + SELECT timestamp, open_mid AS open, high_mid AS high, close_mid AS close, low_mid AS low + FROM core_price_1s + WHERE timestamp BETWEEN @start_ts AND @end_ts + ) ORDER BY timestamp +), +sampled AS ( + SELECT + timestamp, + first(open) AS open, + first(high) AS high, + first(low) AS low, + first(close) AS close + FROM sandwich + SAMPLE BY 100T + FILL(PREV, PREV, PREV, PREV, 0) +) +SELECT * FROM sampled WHERE timestamp >= @start_ts; +``` + +This query: +1. Gets the latest row before the filtered interval using `LIMIT -1` (last row) +2. Combines it with filtered interval data using `UNION ALL` +3. Applies `SAMPLE BY` with `FILL(PREV)` - the filler row provides initial values +4. Filters results to exclude the filler row, keeping only the requested interval + +The filler row ensures that gaps at the beginning of the interval carry forward the last known value rather than showing nulls. 
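Conceptually, the filler row just seeds the forward fill with the last value observed before the interval. That behavior can be sketched in a few lines of Python (an illustration, not a QuestDB API; `fill_prev` is a hypothetical helper):

```python
def fill_prev(buckets, seed=None):
    """Forward-fill None buckets; `seed` plays the role of the filler row,
    i.e. the last known value from before the queried interval."""
    prev = seed
    out = []
    for value in buckets:
        if value is not None:
            prev = value
        out.append(prev)
    return out

# Sample buckets where the first interval in the range had no data.
gaps = [None, 1.0842, None, None, 1.0845]
```

Without a seed, `fill_prev(gaps)` leaves the first bucket as `None` (the behavior described in the Problem section); with `fill_prev(gaps, seed=1.0840)` the pre-interval value is carried into it.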
:::info Related Documentation
+- [SAMPLE BY aggregation](/docs/reference/sql/sample-by/)
+- [FILL keyword](/docs/reference/sql/sample-by/#fill-keywords)
+- [LIMIT keyword](/docs/reference/sql/limit/)
+:::

diff --git a/documentation/playbook/sql/time-series/force-designated-timestamp.md b/documentation/playbook/sql/time-series/force-designated-timestamp.md
index 6b92d77e8..3d4e0f7f7 100644
--- a/documentation/playbook/sql/time-series/force-designated-timestamp.md
+++ b/documentation/playbook/sql/time-series/force-designated-timestamp.md
@@ -68,6 +68,18 @@ LIMIT 10;
 
 This query combines the last minute of data twice using `UNION ALL`, then restores the designated timestamp.
 
+## Querying External Parquet Files
+
+When querying external Parquet files using `read_parquet()`, the result does not have a designated timestamp. You need to force it using `TIMESTAMP()` to enable time-series operations like `SAMPLE BY`:
+
+```questdb-sql demo title="Query parquet file with designated timestamp"
+SELECT timestamp, avg(price)
+FROM ((read_parquet('trades.parquet') ORDER BY timestamp) TIMESTAMP(timestamp))
+SAMPLE BY 1m;
+```
+
+This query reads from a Parquet file, applies ordering, forces the designated timestamp, and then performs time-series aggregation.
+
 :::warning Order is Required
 The `TIMESTAMP()` keyword requires that the data is already sorted by the timestamp column. If the data is not in order, the query will fail. Always include `ORDER BY` before applying `TIMESTAMP()`.
::: @@ -76,4 +88,5 @@ The `TIMESTAMP()` keyword requires that the data is already sorted by the timest - [Designated Timestamp concept](/docs/concept/designated-timestamp/) - [TIMESTAMP keyword reference](/docs/reference/sql/select/#timestamp) - [SAMPLE BY aggregation](/docs/reference/sql/sample-by/) +- [Parquet functions](/docs/reference/function/parquet/) ::: diff --git a/documentation/sidebars.js b/documentation/sidebars.js index 851f396c7..368cb0930 100644 --- a/documentation/sidebars.js +++ b/documentation/sidebars.js @@ -697,6 +697,8 @@ module.exports = { "playbook/sql/time-series/sample-by-interval-bounds", "playbook/sql/time-series/remove-outliers", "playbook/sql/time-series/fill-from-one-column", + "playbook/sql/time-series/fill-prev-with-history", + "playbook/sql/time-series/fill-keyed-arbitrary-interval", "playbook/sql/time-series/sparse-sensor-data", ], }, @@ -777,6 +779,9 @@ module.exports = { "playbook/operations/tls-pgbouncer", "playbook/operations/copy-data-between-instances", "playbook/operations/query-times-histogram", + "playbook/operations/optimize-many-tables", + "playbook/operations/check-transaction-applied", + "playbook/operations/show-non-default-params", ], }, ], From c20e85167867ad9597e06a1510c75ea480ee9023 Mon Sep 17 00:00:00 2001 From: javier Date: Fri, 19 Dec 2025 18:21:27 +0100 Subject: [PATCH 20/21] moved playbook inside guides and tutorials --- documentation/playbook/overview.md | 2 +- documentation/sidebars.js | 262 ++++++++++++++--------------- 2 files changed, 132 insertions(+), 132 deletions(-) diff --git a/documentation/playbook/overview.md b/documentation/playbook/overview.md index bc899321f..897672388 100644 --- a/documentation/playbook/overview.md +++ b/documentation/playbook/overview.md @@ -26,7 +26,7 @@ The Playbook is organized into three main sections: ## Running the Examples -**Most recipes run directly on our [live demo instance at demo.questdb.io](https://demo.questdb.io)** without any local setup. 
Queries that can be executed on the demo site are marked with a direct link to run them. +**Most recipes run directly on our [live demo instance at demo.questdb.com](https://demo.questdb.com)** without any local setup. Queries that can be executed on the demo site are marked with a direct link to run them. For recipes that require write operations or specific configuration, the recipe will indicate what setup is needed. diff --git a/documentation/sidebars.js b/documentation/sidebars.js index 368cb0930..4eda420d9 100644 --- a/documentation/sidebars.js +++ b/documentation/sidebars.js @@ -546,6 +546,137 @@ module.exports = { "web-console/create-table", ], }, + { + label: "Playbook", + type: "category", + collapsed: false, + items: [ + "playbook/overview", + "playbook/demo-data-schema", + { + type: "category", + label: "SQL Recipes", + collapsed: true, + items: [ + { + type: "category", + label: "Capital Markets", + collapsed: true, + items: [ + "playbook/sql/finance/compound-interest", + "playbook/sql/finance/cumulative-product", + "playbook/sql/finance/vwap", + "playbook/sql/finance/bollinger-bands", + "playbook/sql/finance/tick-trin", + "playbook/sql/finance/volume-profile", + "playbook/sql/finance/volume-spike", + "playbook/sql/finance/rolling-stddev", + ], + }, + { + type: "category", + label: "Time-Series Patterns", + collapsed: true, + items: [ + "playbook/sql/time-series/force-designated-timestamp", + "playbook/sql/time-series/latest-n-per-partition", + "playbook/sql/time-series/session-windows", + "playbook/sql/time-series/latest-activity-window", + "playbook/sql/time-series/filter-by-week", + "playbook/sql/time-series/distribute-discrete-values", + "playbook/sql/time-series/epoch-timestamps", + "playbook/sql/time-series/sample-by-interval-bounds", + "playbook/sql/time-series/remove-outliers", + "playbook/sql/time-series/fill-from-one-column", + "playbook/sql/time-series/fill-prev-with-history", + "playbook/sql/time-series/fill-keyed-arbitrary-interval", + 
"playbook/sql/time-series/sparse-sensor-data", + ], + }, + { + type: "category", + label: "Advanced SQL", + collapsed: true, + items: [ + "playbook/sql/advanced/rows-before-after-value-match", + "playbook/sql/advanced/top-n-plus-others", + "playbook/sql/advanced/pivot-table", + "playbook/sql/advanced/unpivot-table", + "playbook/sql/advanced/sankey-funnel", + "playbook/sql/advanced/conditional-aggregates", + "playbook/sql/advanced/general-and-sampled-aggregates", + "playbook/sql/advanced/consistent-histogram-buckets", + "playbook/sql/advanced/array-from-string", + ], + }, + ], + }, + { + type: "category", + label: "Integrations", + collapsed: true, + items: [ + "playbook/integrations/opcua-dense-format", + { + type: "category", + label: "Grafana", + collapsed: true, + items: [ + "playbook/integrations/grafana/dynamic-table-queries", + "playbook/integrations/grafana/read-only-user", + "playbook/integrations/grafana/variable-dropdown", + "playbook/integrations/grafana/overlay-timeshift", + ], + }, + ], + }, + { + type: "category", + label: "Programmatic", + collapsed: true, + items: [ + "playbook/programmatic/tls-ca-configuration", + { + type: "category", + label: "PHP", + items: [ + "playbook/programmatic/php/inserting-ilp", + ], + }, + { + type: "category", + label: "Ruby", + items: [ + "playbook/programmatic/ruby/inserting-ilp", + ], + }, + { + type: "category", + label: "C++", + items: [ + "playbook/programmatic/cpp/missing-columns", + ], + }, + ], + }, + { + type: "category", + label: "Operations", + collapsed: true, + items: [ + "playbook/operations/docker-compose-config", + "playbook/operations/store-questdb-metrics", + "playbook/operations/csv-import-milliseconds", + "playbook/operations/tls-pgbouncer", + "playbook/operations/copy-data-between-instances", + "playbook/operations/query-times-histogram", + "playbook/operations/optimize-many-tables", + "playbook/operations/check-transaction-applied", + "playbook/operations/show-non-default-params", + ], + }, + ], 
+ }, { label: "Blog tutorials 🔗", type: "link", @@ -655,137 +786,6 @@ module.exports = { "troubleshooting/error-codes", ], }, - { - label: "Playbook", - type: "category", - collapsed: false, - items: [ - "playbook/overview", - "playbook/demo-data-schema", - { - type: "category", - label: "SQL Recipes", - collapsed: true, - items: [ - { - type: "category", - label: "Finance", - collapsed: true, - items: [ - "playbook/sql/finance/compound-interest", - "playbook/sql/finance/cumulative-product", - "playbook/sql/finance/vwap", - "playbook/sql/finance/bollinger-bands", - "playbook/sql/finance/tick-trin", - "playbook/sql/finance/volume-profile", - "playbook/sql/finance/volume-spike", - "playbook/sql/finance/rolling-stddev", - ], - }, - { - type: "category", - label: "Time-Series Patterns", - collapsed: true, - items: [ - "playbook/sql/time-series/force-designated-timestamp", - "playbook/sql/time-series/latest-n-per-partition", - "playbook/sql/time-series/session-windows", - "playbook/sql/time-series/latest-activity-window", - "playbook/sql/time-series/filter-by-week", - "playbook/sql/time-series/distribute-discrete-values", - "playbook/sql/time-series/epoch-timestamps", - "playbook/sql/time-series/sample-by-interval-bounds", - "playbook/sql/time-series/remove-outliers", - "playbook/sql/time-series/fill-from-one-column", - "playbook/sql/time-series/fill-prev-with-history", - "playbook/sql/time-series/fill-keyed-arbitrary-interval", - "playbook/sql/time-series/sparse-sensor-data", - ], - }, - { - type: "category", - label: "Advanced SQL", - collapsed: true, - items: [ - "playbook/sql/advanced/rows-before-after-value-match", - "playbook/sql/advanced/top-n-plus-others", - "playbook/sql/advanced/pivot-table", - "playbook/sql/advanced/unpivot-table", - "playbook/sql/advanced/sankey-funnel", - "playbook/sql/advanced/conditional-aggregates", - "playbook/sql/advanced/general-and-sampled-aggregates", - "playbook/sql/advanced/consistent-histogram-buckets", - 
"playbook/sql/advanced/array-from-string", - ], - }, - ], - }, - { - type: "category", - label: "Integrations", - collapsed: true, - items: [ - "playbook/integrations/opcua-dense-format", - { - type: "category", - label: "Grafana", - collapsed: true, - items: [ - "playbook/integrations/grafana/dynamic-table-queries", - "playbook/integrations/grafana/read-only-user", - "playbook/integrations/grafana/variable-dropdown", - "playbook/integrations/grafana/overlay-timeshift", - ], - }, - ], - }, - { - type: "category", - label: "Programmatic", - collapsed: true, - items: [ - "playbook/programmatic/tls-ca-configuration", - { - type: "category", - label: "PHP", - items: [ - "playbook/programmatic/php/inserting-ilp", - ], - }, - { - type: "category", - label: "Ruby", - items: [ - "playbook/programmatic/ruby/inserting-ilp", - ], - }, - { - type: "category", - label: "C++", - items: [ - "playbook/programmatic/cpp/missing-columns", - ], - }, - ], - }, - { - type: "category", - label: "Operations", - collapsed: true, - items: [ - "playbook/operations/docker-compose-config", - "playbook/operations/store-questdb-metrics", - "playbook/operations/csv-import-milliseconds", - "playbook/operations/tls-pgbouncer", - "playbook/operations/copy-data-between-instances", - "playbook/operations/query-times-histogram", - "playbook/operations/optimize-many-tables", - "playbook/operations/check-transaction-applied", - "playbook/operations/show-non-default-params", - ], - }, - ], - }, { label: "Release Notes", type: "link", From 4656c20497b4297e5050ae53ccf2b1bbb3869ff7 Mon Sep 17 00:00:00 2001 From: javier Date: Fri, 19 Dec 2025 18:23:32 +0100 Subject: [PATCH 21/21] new label for playbook --- documentation/sidebars.js | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/documentation/sidebars.js b/documentation/sidebars.js index 4eda420d9..ebde7c4d3 100644 --- a/documentation/sidebars.js +++ b/documentation/sidebars.js @@ -547,7 +547,7 @@ module.exports = { ], }, { - label: 
"Playbook", + label: "Playbook (useful snippets)", type: "category", collapsed: false, items: [

zqaG;@KNFo$Wb9DVbp0!W*IydB`I*<=cI)<5(?8wzE5^zVU%ff~MxE-hV3GK4n6rPz z6>GPHY6P;BgH_7hk7mu+`7=Mp&50<$wlo`n6D&Hdcjyu5_}7pY z$b^<2KlH+&0s_h^%`-L{5I&)vJ~ zr=`l*TADc74}rW;==X){;7c|TiMoO2Qe8eB{eN_MW(44M4=6)) zQ3P8GTNcLql;V^cUm-0KAxF5IGzeEpRZnY5tdf>Zo*bWy5*%>X3&2!7y&n!lnumFt zZ0wP6l+v^T(1ebS>n*u$C+2oOoo?v1cBgT{8qtL=7m5?nwo9Hljz;L=bD1|@!v4n8Aw3G=pK5# zj9V=A2TWIQqi<(sW%c0>M>}vXly%UR?j3$YWMAdaWk@{V_(n^F5yCThKhg`3or8_f zOY5{hQ8Fr7Zg+;)pXEDCnkmR#^6oFEiQ;&4rb{(NbMZ`8HYt~1JGu8NDl47voTYtp zYI1jJV-DbufmDBgLJb?fg+5ztDm-~t{|nk+d4VB<&nySuT(>Wmf9S}&@({Hw>@&D+ zi(bR!VX)?iIS5={65mjceo(XHRIN8u(GWD3kQ0bCIuQPgvc32T;EE;g*fSHm@?5sW zXq`>!w#*(#MB&Dd=N$WjR*saqOM~zh7Z!g76ny(4$-FU9y^&?(?(vV(Y`FrY)h^8;pMyfGb-979=&qd(M)c(b3~i zpJJw_v|QUE7u|=kV&W^Bg`KHtq3e-?u!hl>*@;FV`?g1lczKMCU2Qwjg=5io7QX&! zD;)!KyJLC)bpu3R1Db4= zKK|PXl4EfGm4Cky*)M9+H#J%nU)1oU4#P!Q-1oV$BA9zo+pkEe z;Aa;*+fDdVwiC}Z#or-YOx~Hfb-#m?@aDzEh#?s5@w>Y_LBGf1=a2aKpUz%wPfyO^ z{ri;*Miy)x_yt%~1+HFqMP@w?Acz7y5pI-d@bT>&-HbZ||1iBX?O>d*6G!ZjBz{mK zurz>5HV$|y<^6Pvt%g((wdD*>1{s-{m>F1@SdEvF4*T!J{}9l^!^6RKp0l;>;;PND zC%5-M&lCxrlM|l2(^fT!oTC=et|r3evpt9{H94uMsrkO~bX|IyWiV%1dYI8CLkd99 zzCNX$!P+ML_w3mBMC9e?^VS5x+A^N*?z6ab%xfQreA-)Uw;j0YHxIv9eh@Pwp;RoG ze2>5@;%Au~XoVN^((x zETIepRj$rosDt&OZ}hq53MxB?B|f|N87 zLF<*t%=>!9+}Y7Fog<80;3q~|VIk#9C3_?J-jzy>OPgZwAtRejQQ{j~*Rij=-e$=07^^$Uq! 
zvPgZLS(85*&WhNSTD@aT9#uLJ0r*#qa2#VuHwKv;jKCGEXZ8 z4*q62c{vpe_{H>7Ti&R{JQE@ac9FUiHK#0;hv5X;PbDc8PN3Ze+{jB`-{+j1HJ(%U zclVlS!Au75yO@d7*>0`$w!iR9gZR+quy3+eT3Pup{9m%3M#+uxN51dx4JXnU2qEXz z#9^XJDT5DC1ikeR&ZHX*9>a@gHR#44zUAJ$e=jcBE0X!SmL~;{e`1x>C5bMRP;!xa z+5T0=K&;?o4*34>9|jTZ+xdrd!uvkmP@*G|czAde&o0lGCDxHxT0}7(O-*?Sl>x35 zPuUn|en10s%MC4V4UqK-3FE=U$aa;Xb%b9oO}}Il zR%O7+xHEMiO{dwCbczzjROo!0Y1z~g(N90onYHD}<=r8CJNb{5L`C>rGrtM3Op11r zokzWGv$J5|ql>)f2W&4*C_9q=@oAE5g|fMK&F>M8^-p_+rQfT({ljUpAVam%+z)uN zEPLp#6jb6!&og~lOYow(N=K-P^#zP47&(hN6_EoDhvX|Z}$Sy>x^f#YUd5VhstLH+p zGR&zJT5zN>LUtL*#@CcHG6o6@8JeS6*L z@N7+gY;5mt=*m=jkVJ=l>MOSgo9Ab^$Jn5+5VV*dlbAJ@7O!!x|J3pS0QY?!7PO67 zF@x|A#fu!fpDaB*G9EnQ20FhEBxGrO`rQX#m{c{kwKa7$XjUygU5yySJF3~pp|#{w zEe_DQPZu{m7agh1D3Q;OY;-&rb?TUZ>VL)@CU!3Ft4J~E&Kgg5OI`mg$eb?yc7)?( zGYW)+mh7^dF}EmrvzvbPF;f_bK&Fq5m+W}$_V3F!hi`IQ9t{^Ln=qBS)S1E84D4B; zp+*Oa&-3CpHVB8h449@FaD3quqIaV5jvbg_SzGG51H48uG5a{9onWW;oaYGHyDAci zMA~gy+Vx6Hb%Uxuiu$=vRxXSsk6eLACwu<70#6pL+ez57TArw3I{_4$<`(X{!t#f4=q*Jr2st!<~0g~;J|2mF7@)0GxMPQl9+E`{~pt;3)kC+5=myCl{$HdikkQs zgHWU2|85{l1nm|(k2}1gLjR(o??%_c@Z{51`DZN-DoxBYiFo?IN48B4H4{21#gj?z zV1Kcz2%CtRO@9LCOi|Gk8lzj ztE<4ROz@5)cr*K><3p`MwMvKp6qz~h&*8$Gx37eT2Ef&L@?uLJEq)l-d7PTsoG()* z67mZ|XPtW34mXYTc2(Fe>~s$PYaWP$3Ebd09I-fgyKc$WZab%tO_H-&Il|_=8c> zOidYDw`*ZJ$`QUfT@zAHy~fjzVx~R)9n-ajDgHafKkpU{DG@}AZZAg8rB;JVOhDo3 zxg!^`6UEDRjE;lZNwWXZJCGqt-}q5253;6;DX)i0mHeNTM2Rq9CrS|5a^^9m$yMls z)hf1r$5f#i8RI=XW?{t^4h*<;^<`rI4iwmt7srVH@$xbOdc^Ivu44qNFIvq-9;k(9 zdhc9)W!79T5tC^6ycB0CdMVRdZ#mZ&@c5hFLwr{dpm{rr@|%)HVL%GE{5P0h>iwE& z7Mzyp)gbtxZ^4nIv+<{m74_&XWl!=vM_-Ax#GAjC+@6EGn(}h{UYEGCzOGEwfY!a` zKWxWq^$fYpVnlQ+gvu3{Ijn$cGPFIcLw_xVDlZ3?#41f31bVyJl47ebU2R5w1~?-o zFUAeLil->Zby~l>S>F&UWc^;5s2m68sksWsDL|OVnHVEMVLMjnDqNd?cQab9@^&x= z*pNd{E~TFY1V9l9+6Q82>bS=W{xZSQy#ryVK_M++-LsSoUXq4B04wB6rai$?pSN2a zBn-!rFsg-*#Cawf`Sg*_U4!O^uQEjS;%07caZ#yhHHvyDv~~|h+?gk?%PHjCdAH81 zJL!mhIs9qGYMN*~z-J^+LC8cL9N9OlBeK6`j=gBprH?PzzEi{EGMJ39;H5gF`v$84 
z|NCbdyS(%1$L+mOHL1p3KFEp_H!%s3=Pdb@wLQt6btSjwR2+aq9((xT_Z!cpr3ZP2 zk%U4GUR_`R8XMTl&_fM9;!L(RHxngF<0~4`@D1)pWVQJGn4#Er=0ap2N;W0%m4o@X zNq?K~Xk}H9+0cdHjCrXY^7vT38f#<;GX?ZS6Uwq6${sJJrUpiPncLc?RJmU>i=u_# zZs|<0C4MMp{qJsm2!=4CQj@0rBE*v>VpZ60jV*mBUTH|4JfqT}MrZk1{#V!g(2fib z6mBT(k6~cnjRXNz8{YJ5W>)pW-+~egWk;0}OAfSa$c5GRYlsCmQe523tZq2*BJlRS z#kIjTEA!&we6!2-=(J`_1c6<}YzAPFJAJfr!H_MmD$KJy&~r0&@jD>8s+2x`^$~!A zJXCG;D6(K(@m>To%>L#^n>6a>tveMF#g%jP$|P7Zf!jVZI12d`ECPT3 z^5p8$es*;diAt=kwb;^(S}WH91rL3frGfkQl+`%FYHjt6of*t&Crx}DM0@NWpfjUb zeRvr2iO4#9sY0+DTf`2*Ejrt~9Oqvf1&#(9`z;|#UN+H-uJ47u?-a_}va|2k}l@5!u6+B3+FDfA9PTw-=Z5Yy^__^6xQUELSkWGfF zsl=7U#5Ydt*w$vP06sYvjssfXc9fLOPSFjKoFce&B>>3ns|Ij9UR;!fAQ}dMV3Z+g zwiA6lORY@g|8Tx^mef#J*B4n`sSUnPl)r7CIzu(-)!imO)ylYgEa&7C5Gt}HA3Z0! z@nI%_%<5?NI?c_3++nuXaO@|cptJ|NhW6@w9}bN>y_#De-5eY;n%;jPy=h{TvZ|=4 zP(d~JzgxgD_6rWjmk^##P|nKC&}we+tozfA5ASF!J%jz`HJ#|7tYL*1_4L=`B%N14 zEN6bo@Hla_tj;h_M#Cwv@^VA2M$dI_wp`-bNCl*{qGys|2!a`v)t0Fs?e)iL*M&s+ z#{oGxy8_?w^@ws1eh_&CiRDnXoJEfKuQceVeGQtom*pe!S&eA`g_vLhPV`JStq3mb zgQOYHr>m!DT2&4L)ig38^rP4u(Z|Ctu$^~WZ+|is+baVE0MX4%Co&(w+SDj7{bu1~ z28Ja}_Y>4;ji3*2B)OXQ2DiK`dYqqT*k}kv z<79f)GG*enb$1JXkFsJ)r2_3Dtl*bT@8pu+A!37OKs?30w8Wiwe3a>9R}PL9NX`UQf12P3 zCgjrj1Ok}y;Ujf_5goyqGk^#A6!=r+8KQIxZU;nM>PQaFzN)o>Ri7tw>5}Rz{`o!{ zs7#5^YE=Y;ywAxuS_atDMT1e8Ix_rHpjoftY4H*(68JR18@(CzEGY^)=~Wp z+g+iXgvAX#);h+C;>w#^9xk(@TYW8YzLA@DDD{IHQOHL!Kg8zQt*Tj;c_v9{kpHN_r^ev`>E84vz?EG5$Kk5f8P#`1ij zyyCz1DQtaoT260FD!_38El13Psb6m?FO@)9PN4VQ5f&owdSLPeqx^Hk6Y{=`_okV4 z#R##brMn9wF~N!^#(&;So=l~Jv({3s%rzWp$YMArxu{b|w86vTe$TJaz+!DdUdws) zW16EehC45f+=(j@=w7>EQvjz75EgGfe#M_PLUMbo#rUqP4XU_HpP-FwI=CE`TQZYv4_x z!qYuCiM93ch&VS2&J2G)0rlJmB?L3*mCwb-&!v)^s;cpiH)(Q%!b@VRddrXYB9zo{xJ#C=XoBUv5;Pkq*r|CPZWQw{!=y?QeKh z1yBtEx#TI)X;7~H4L5fIX^BL$p)%aKH)6TYsN(sU30`t#g->_X^NU@~+c#@LhWsJw zp~#%sk^&nkKQ51YvGsRdh`($v6QT^$D-{B5$YMwhujPP-7KYwH&EyB@CVipuRJhK` z%dN4wy%`KlKdgVBWinu0R81Z!X*)QSlxT1&vFW8etNl2?=a^a!^z$=!@Hm<3N6P8A 
zGGKX#z?nL5VW^q;mUM>(LL1HnZoFoKdE~*Gsy)|0UJe3rE_A7qJ8c$5#sBQk9&lFY z85sC59jMgc%V>)wF9!#V@Z7vFRQC3dm2>xROt@;k{6W*y-rf~BvpTVTM&~f9Ng0($ z%<(#uJ2PP3Zm9M$%<+Y#gGY>ffk#;nM8^sSCuWj zkKRJ|!5nEhRn7VBxvvNzN46BGCTpkVT@LvwO-d1DV7r-vBjCd6ert`rl)EBO^389> zG;+MzFAP59@`PSj@^mJ5%l%FG$=prZY%)(bqE5~3dIouLjlm8XUNxTNPMU5LKH_^k zZCOY_mI;sCL=rC(6O$M=``P8i`Q-(Xz2ipC zD^E{!`aKPT#eKb)NTxluvg~gSK$k2dRSf?yN>^B4s;iJ^; zPI9BF4-9ax#FBj*9-Mcc_k)q^GAMe<&OR~$&dOPx{g9*)H z;P>(hW}&{ie$aFGvOlBhc>M8Bp-cM>QJ;!IxPv#1w5qr=GPG`<7`Qlq5)0<4SEjv7obL!9-=U^tp6XY{g)iQ-K`zUF?(l?9gQVhlg&vd;w(Wm zYyAV%$f6L_@-zlQ{42}L;4&;wH*Cn!+eiu7{ zZ6&3c!Oen;Dm7(OXcFV>-*=d2bDq+lE~sAf#JDE4+|RlrNg z5ON5SV#^$&n8nQ!v7~tuzRM+3C68omrLwj1;xON+TJhDbw7IzOwg09FpTi+e=dPpi zsIC@HW;Qof!R&AnT(!5oH=edG1QnXB^h5PJStn zb)Ui%of}kT3C}YTN@aL3VxM0^X5d^o(LX7$xVzgDyOs~b$xvg=jz)t(pdZW2H1c%8 z_=;l)h^OrN?}(|)R9?VEq^_<|xzgaO>{mA*ED^`I?<9hLg##I3?WdSF7T@){KQ<=E zf!~)X5Lb)W9?i1K%)oG^t4vXEOw<2iYai8l3cdydG%S+cEtX;><&sM)!^+hXHC*JE zJ_HJvr4FX*MWjxZK6Y!?GaKSvzU?OouBv|nTxb%28Lj$ZL1`L2bacZRswi>>yVsN) zAbh_r8tY)pP-EuxVah*Gt^xo+Mkq>I?#3-3EsKjU6q0gfRvfvMCrhN2`*P(dm)3&| zMBz{2@CK@JY<ifR8LCOoKhUwD@Wa6f_rnjlpHZE*l5KFJ6TT~Y zRc%fFp8>4fV2E=PTy!WnwyB`_a{i?|E#m9MhbIK&kgQ|GqLf{a2dQoqCaiDC)*59@ z-6BlIT%i`w3f_sQipTMRssRA_UJn_tG=@^8cZ3GADef zxCJva_0~9kfw6RYyBvMkp5OP7d@g=*%ErSx^w+sjnDAr_b{750s`X;Ys+r_wY(H6v zN5yE&;O|}^wr+XPfBk~_qa{>jJ||oGm5QnicFHNtQcnJzplKxk`x%?%d||@zda%fd zX)N;U4@Cp6j}8t%W~7bZU|_f+pQPT2`d60z!-utl8{N&KY2%thjlcI@1pNm$l<%`| zx?F%c#r5tW;_Ww04Qgm1{0wKI9QyN%(ufGZ&6SQV#Z&1gIW*#8I!qK^A4`hiY$(i7 zWTm((eDhDr1zF5An%L!sD1Ekd6|5qaUYz%jG$5_+HN{k({tP#p2&S}Da zc&Ybkjv^~2Dr(3r(W8Fo%}`s30IQW`oA8~E4H6_nb?g+U2Xuo*KRp&J#KrB~5_&$% z74}93iTgbU4~HJpS`~17c|bZ)83M{FaCyy5X{pdskpm1gKcf%{Z}!zV3lLQ;(Z~Cw zr?e4@)8X!O@oYf;z{WzrM#qUkgT@NikN&LFjTJr|z!L%;;iTpbUlJHs`s;O^rk-UazjM-rTSW{lO&1`HqVPn z%2Zi{(PHG}B)6~Z-|1^0u4mhunsDNo-pSpZVn~4t-1`=Xq#J{#`h;d*_bMRcb`0j8 z?(NOP^5Yxb9t@bY>9D_Z2?e3bORBwjA!WGQ!^=jJ3w1-R*lLb&`E}jyj=mDRw^I#1 z6$_gEsxzTYM;Y;LCyCvtjooj!fVtXoL!Ej`=8))M=H`BlFZuLBXIPOc=M?j?RfKk> 
zl%Y#j`rG<^-cU{4CM6Vx58(8zCjNYp*ZO)=ll?(2J14+cN{w zX$C%kUbd13rEt|`h6S(3Gnq}fo?J3Mo;qnJ+WFYJ^!#)Z;uBc$<1Tc!Zpw67-*&3+ z3^6zC>S*v!3gcAF9M?8CJ1_w<4oQgUo%q5OZ&rD;Oe%94n!A$C^ujd2`d0!YbFwM7oLf`>j+`jjLA$pP-Xhi%@8WgPWp*fZ(;|(a>4y_(kereG_S&KHtYL=)~2tZUdUUZ&6I#} zNQ_?<8*uk@n8u&k31@A0PaHlgYp`w%L9){dNu6(7K~7-`V}O^T$~=3al5|K@$IQ%i z8B--Y^l&{f;>C``*X?F&1a^-hZ8;0F3Qe_yUYM-Wg$d_kfXa(L!;D9_0CJT&y>BDj zjL7=Yiy6!-xfbxU6+s3m-+0xqu$y?CUap4=A|i601eD@y(n!A8aE!q%qB3GeOMNM$ z3XXk^ohkbccb32pRx3pcAv(esAE!TdW5Vj^8?Fi4Qy+y$0ZsI$~gpdKvU*$qkEiYf0V`( zt@h<;+1vIc<%42G^)Xc<$FNZx+7X=x=~QN@mzqNDdVDlGtEBS+2?;uzd{RX}x+{lW z*!|t2@qmAhn;Kky>R7@@jq$l`^euJm0;Rn6)nDOJB9UGaewFB&x)oXkzUQ;(kDhW7 zQLr>RUSPiB9CE7V@$vD?tsZ46RR2o7*f#?PH$m9|H&6X#>B@E1xT&Uou!Y;WC*_1q zWW|EfC_N+%{sH3vUno}srjeSYjOGMMhKFBPlnYfqTytn0u5uZb6=f@fv=+Rcx68JV6AVC{B2NIllg$yMhlB@tRM71yVzoKt}X9#knn48J!}1$8nP%DlsrVZL#X9H0Y@gOnJ!;OtN+N<;>2sAemh* z>2J1#rF8_9VgYTE2~-v9BeR;2Q{2gbn~`|)RHPDI+-N5`mT)ITXEhqY#}IU9y7Vy% z*CwBZ{bNs%6ojlm;#%42%3wY-o32JUEt&@|7XkV1%8KrzrvZj5#R0g%e7V^*iDTW7 zsu=oaj4{0?q(Q{HTY<{AlE@c_V98o~qrhEVD8@Ng{=fS8Xzkb&0ur%m?$Esh#W7feW);0D6@~L zZ~u-h$Ysa;y`h{?cHvSAzkGwbLGmYd_Pr+CWJ-igII#uUl3OtC$I%a7&Ty|7vJHcj zDE|`!yQL{VwOxl*=XTm83jRbm69?9ttSJ9c3XJ4d1J1I1iE23*UO6shBBkWWz3V+3 zr$kuFqg?h;6EZ{0$%S!AABI%9r&6!|pECXB@cSH3=NTNg*vXt+X{=M%fTE=$5G0B}fm4U>Hg^hU- z20Cct+@D$lz|pWJ9o!oo7+}ryd(>*dF8R-oYQoiO(P6VCTf-In?z}V)^Ry~!J@mvv z2CzB;P*G6WSBfw|+II>ehKUMg0Y!uG z5Wf8V&|VU<-e0J3<3>whEaB`y9?Z4^RXpIxN48D`~DSdAxgMgX}6G+TsAgB{R zG%Xt;Zb2M9^^UxD{q4ytzl5PLMf$p$xgXD z(*_=blnQTkMM8Fn^N>z8P%XtS}VahhEnVrub}q?C1x@bf$TNKY8e$1x~uJ$Oc8v^yoizw zC%O@@34gyHj%PcS6P;U`2ⅅpq->g%a8yzRGQ+liI}V@XK)?MAyp@$3A;Vqu9G!x z*#KCwdu{Ex%`f^P0n6pC?*c+-aVm3=eLj_)oqb!p?lUZ>f}zV;KZ28o=Yu?DZ8*K> zjrCiFUIZPN2SB!CEx;PFwlgsB9h?9q5mSpL*|61^OdO@5#h#-W%)h+XVT{uEVzbN4 zhCO1NPbuzkde;ABuSSgxnm}B`2DN6j=9lO6RL#%*XFyY-BHD;RQp7MyUS-mwX>v?j z{y(2-6!k6Rjo(8l+N()R6-98p|ArMXhPjIMZJ;YZL)wA&`~;8EDLTN;p~ck)#8Q^8 zceXaV*GA`=Es!h5o?R4`yF-8fjR@5HeZY%MeC&YYkTz>JF$xe$Fv2~fy!Jy1zwxH?*aQs<}ajf)hFFy|0(m1&gx7=TEF 
zM(6v5+!#N?=A_Egc%DeZ6B81+xVh_+!f7p;`Dx5Xz;(kvJ2Ft8vT~PjlQLF$Z_E|1 zRP(*aSOo>44YsA~ASeN>J6rSZd()+c`-LEYhqk8bvU z_V!rSlLr#kEn$nWBNx%)WQ_ySZs^n4JhjfXrSZVAm?Vdnu;@|)TTIF-&$SDi)3!!t z!at=1x7fK*8|toiVLKA6odm$9d0Gfn7r z)4Z&TN?H)!d$fg5L{Cty;Ze%uv%ADV6uckB9;%*7vmZ;YpvsFv8$+ScO6sC zq;>Jp^EqNXCpL@KO8qc-ewY>oQiP|HoICiw&$>dh1v<~A0%uJrvYAll^HaH22AnN9 z0#ra_vgGXqj4DwPHHyT%`4XAf5lgkGT<6uh6pB@U5jit>y3%CPn=HMFatS5=^cN4J z)OiCoS~wHzE}&4HAQrY71754Sd_SX`JX12{aO_NRCHV~b^2EjYxhNj-c4VE3bKsBt zt|<~Ac@H2^vX2J$))WecJ59mxX8aw&ZnlD@ot@pEZ3zDp$1c53|AnCi@*w5z^SqDP zn3c5P);SHd5b!gUm--JqH2(H&E?CN_4q^X80NcPt053$jLdun>f;JTk&ORe|pF3+I z_W$>4wL`$gE=oZ_-6ILtsTFBdHVody0JHPge)TkKg)*{nn@Z~GQso_vZTubXi_L7) zKvV47#x$xAm;xv1A~7;Rrcv5#`2DsiK8th;? zJlJF+IifLG%gmER{@6?VKg(zT3=F|y5{6VqN5@1{+x@N>RxF$B=4#94_9aKaMRttU z88aC}Ae(Y_KUro;f@$Re-}`JLB^b*I`|C7MzVxoCEm~^FG~Y?p3$ljSwY1h3TuLRFq%^4DSU7RyMK!O2R@Wnx(2}&! zD)_V?b8~ZREySB$#zPmN5vM!!dHNFDWEK32OqOdeBsCE@;IT~*wAS~2TCD|se*V$W zSKHDv?RdR}|6U&_S#7NFs{#JtUxi@O2x}=CPPe}OeGLYmHTb!;%>d=4G~P%B=?R1R zRBE2G)+`pGsN}Yfq6VEzmnMCpQk4G&ikeAg2yDNzl2z}Kt9{`q8qql~#v)00y^}+| z6n#<)7u8AoqTlf+49DZ+y!sh~pC?$Hb7I|K{09?&Rc{RwEgCD5$N>{XG^&VMEi^|9 zKbJximN0`!ptzk9y@syrv!t$Ew7$uiET8Yj+k_czUq2uH0Y+{&7Xw|R)E75wBLFz1 zmplcU!m5X1tg8?(JfJwT$?nqm>Er;GEl!e9@8E4>$E1cdc=QMXk+$y8DeIX3r+LgO52cZ$$ISa3nr2n_r#J zb(PX7T^)*htP8|n>eApxl}IZ(#?iwMi7XN|GTTejlI3$=z#hs7DVN5YA?R0Br^Dkm z`I|DXk*pSSQR$MED3QM;=%Xlud$90~sVEXwD`~aH-RT5T^5WXR8UUE6IimA%e{FoD zt7~9De1g6(0v!o?-^j0mBnrWqE-1@UK-% zXn4P|ef4Z~TKumE$H7F*;AVPU%j;JG(x};~h{tUsRllMR{=}5qM!FY8@IFA=8LuGu zyE^1%mrUODj{WZ-MbMO=M^T$C*%9-+Yi#* zbp12+RB5uq@B3*oN1DO?LMgX#>TPa%M6$yCiT(?qb12GdF=_q=#!%XV2;Y#ndy14T z12$)NB()G)kO8G|^lmiNa7RLHv_HY6)~J(VABU)37_JPBd3z$+AW(I|eG|0oxvZ>Hw$`$}fr5!BLszy<0l1GlPeY};v|7{X z@?V>>LIh%P@KD@AQALcAGXh}%6S9}C+YNt5gCq#Tf-#8QizJ6kXld>4mZ&6eD5vV6~x3^C(1Jd|h)*$0(c_B=)pnJ|-c53R|l zLZjV`Qn1)$Sjs$Xg?m*xFihpi=;^?vrKP~9gG|rMO|UTb*(EV{%zKQs2cJ+IC}7kTnRlX^s(q;e zND?zYd*vJ4&t3qdcwHLvnG+j4rvXHp7mxuf1E)?V_gsBN2jG$#)tF8_iE6I6{;{vMHX8LIX|MWDP{eMyu6< 
ztW>&_A^WT=V`2Zitiw}Mlb zPe`vKH4VJl>he}ypIipvu0B($Fr$}u<$1g;xoRp={&A+_K5R2D_#`IKBEx(5tu1^@ z%&N$rS^G&b8%qg3*QJXiPp!3ffXx5`C01uhL@3U}V%)h8c^}II7Z3Lr&!@ssi4xnfWH^Pd=4RW!z&i3h zuWLQ8d&9&}Qmpy7nF8h=LnJ0KVH~1E9vzH=BZ~}~v9mv%C>rFR@$ZbZj*Lp)AY`MP9eEy&Izh{GmVy0Czkh1 z2uO~W=UIa1FzJQI=oZl+m!$u01e=fVwZZra%K19jO&^EE=wY zQFL}MEr4~8l$>;rG?esWRTsUM!(=;EG!d>8y-X*{q_R{8Cf3o%UXCXmCR{usK5^Jg z;}$$F!B|uIMQsI47ZtTNb(&oAKw{=|d7uLITqR|eJc~YzkTaO}dB3DZxUa9$-pYzQ zXH<)B={Ra7bI6wTzY`?_%!I%`!@}OW(nzQ>{gA68k!rz~zcZ+&kXv!xix<9g-Qa08 z8MW^n5k@l%jyC+-wC0T)oUIzsUnk?{W-mZWuQ^W;|5kPU47Uyg0R7hFz~96e*hwBO z_kNrUXxW$*t(wXF_L*gyHD%PENgI!lH!BNQS0}3{@Q=_0oIMDHNWScg{aNsIbyScW zi*j&Y^oE)Mchj%hyj5=;ZNcc5von2)3ox>*xV3$*#-v`LmJeeFXzLE;Iwg$caLY6X(LTiV?##)oQrv#{gX?yuN=&52ZvKsR^w=ZHh5|;kvfA<_jJJY%Nce*xiu{ z@id5wJ9hhK;Gm#5+4Q>lK?VvPec@2I-cYZ8XXZ)<=;q>rgiDJVwGW9-Et1#3x^aJx z9S|>q>+?GI{y6@>mjo&CS%Z!$bxLKrLLKjE77FRH)9Ikej!%8!XdDvj89_oV& z*aes-C~aj`R?uqd?0kK_H~g5kjp4@PjT=D|9t9t#dX+;^$O$hIM!31Q7Kv}9udlzg zyu7&z5^FXn2NOdL?-e%_)ampKA!`^kq7HAC0%x&ehgpd&9r=DneU`4PuZ0rfVg@&u z` z$me3W9>1#cpW8rEZhn5=@8)Vh+$<&19sH|YY!3d1^5Y@)0?Rc_dJJ>7ijXD9N380X z!`|Oy>-eP|Cl~|kzFz31700a4+9*2kNj3z`jc`qUT;CoLsL24sk1#Tqv>_yNR~Oj7i! zUomJ?W8RM~cSERHRx4*Eh)JZt&_RR6BfzWRofF}IBxBEcV)ZK~rq$Je7#xe$+~gNS ztEs86aP{+vqZcNou`xDZ6#HMvR5U2suXJ_a-L@3WvhcZ*Y6gx*mm%t455t%azg5gn zp995UTo9^ZnVlUqI{}bE;(^ooIPi^stKnuCnFYiUnN3L6TxRosy;4_n6BqTU;q}B6 z!;|?1D74H%8QjvKP}zblx0ve2YYsav{yJr6txFxuOu452$JA9vMb&;?zyXGkZjhD| z>F#bMr5mI{xrmiKsFk^jw+)V#6-N;p7ZBPvl%qGNB(hnCq=<3Q@D)(RKA9CY+{9zlkuakN&h}UZL>UJ+L zA9r@XVcST@ky~!Ghh;wkGOD+}eflu=FuW;;L;HOUw0U&l48KP8G15Pwk9qK&_{-2F zND>mBo}HW#9iN=NtpSEsA#7XkGwrK-<;*Bf62vV@dHHMy;es-d1$au7Zckrzd9-%J zzT1!}>kGYhddcnENxf@0FsS&F7;wdAaxrXpLF&V+jeE5j_Ib7W;j+UImIJRhGKA+_ zPr7=@E?GbUzA!Zs^~M&iy~iw{JJa~^P8XQs8|{P8BsPl$8l|>5#2F00 z13WQs5#x}gG~z-GAZ6+oe^`M8nXFI2VhU=KasX9!r0n<)n%4aAEbepB{K&ZQWZ{^6gCN2Hc3? 
zZrOE=Q#1`6el)bYI;Tj*M!SUpGa>R+h6}-{d8@+rqku>CI>4Q#pqZoyF3wk`(dSDa z`1)F$f1jb`@#P&ClCDte=wU3t770)Fy zi(hq^$r^9_rZ8bedRjo9NyX&2D6upAB*~LbR=yN3(B^wz?Urn&cyxt;Nigp zSoq<7xjg_(`~Z*-2R}bQ6H{lyV1ukNDXmf?q*(HZ{rWpN9QMWvcfMUne6vr)Dxz5{@A0Ov-1xcuV;$?fzv6-yx9i!5UA7xJvYxg zK`czh-KHC9%a>D21)vK?wqZmgJAv4;YCp-wZy4CV$){oMV9C!EAf-h^57z&tQvfXm z7DQ~R0DE4iSho)aG-}uUh00eV9!nNJCcKx{S4cfQx}69aP;V<4xBJf+8W_P06iLzn z#(BPuecRjpo10r_t-r8JHyRr`Jy5#vnBBJC3I(cchOfHBc^7mifVlVc=$l(xM{R+G zXaX#nOQwe1LCk#B_WGD%AN6?DX0yB1CWgZlL6A0#+P?JuV%@f?ElYAb_@$>AA>tiH z8`cAVW`KLEP__>p?|V-yCK%^Jrs?UF7f*Z|iEJO!u2?Z_#)H)@m{`Z%n);U(M7oX+ zMFq_P4TY*ieuY2R)6HmWNq2*m0}oe#@*$*VmW>+kZ@V**tUWa@$JU4(7=SRdadELc za^S(q%F0&5B2jczb%lk6p`)XhiK<)cQHP%>F2H*ng%&ep+AKI&11nhEO-wB8(U~{Z z(pP3aPv{-x2dZaq{+`jU7n2=FhjwTjojveNwj&O9#iCQP>2&VfDH(5ixl_L30(q-N z%J2^!px5EOf`tw5O3oQ{OQsik^YJ3HO+Rp4AlTV^#K^)Kz=}CHx<9VJUOP-ly7zTC z3m`5ZHXH5^9(e1cu5=b)0rzqZ`~tj(jX6l!%%xed;X7|~1FST3=Rw64(D6#O^kUk6 z(9Ke%e==hMRZtaEM}oemx!rAAr7qAOv@|w$^B~A|!D!w+bv z=mISSRw$PwysK3#R@`Ntvomwy;N#&lC}|r0#JeFy&=)3*(}@2)B_bEpaiumL_lInpM+H!sHAiR3jDFBjkL9h6 zhe6;NBWz0wcI@w1^BW;YL@8q~kl^Y{W)|(2P*X@S`+^e+|Q17u*m!RQ-k` znJJ{^+R93C&WTsQ;wQfCcjVutNSD|u{~^a=a3ICDxVR@qMn(XH@MhClYZK6JZ)VUU zmzKxTZ#4|??Uuobs$Z}iy`lDipdTi5(A1oV|6C=Dw=~^@ZV2;=r=@R`Z1>tpOzUG8 z6yah+q~_l4xDh=^_FGp{0#WzDiA%xog^}tDd3wx)mX~wz5rvqP<1owkUN;-=F8F76;gtCBLI*9Lvu;m5KVk3;Ir~XcGk&+o7t7R0+ z(PM^yiz3n!vb{q3JF>X0)JuNafD2x?#zEI_Myz?gvbnjry-iNWyiInB0Iuq?Gs@SXPqKLMpbIjA18;di|E{keSG(tlB81-d`hCRlkxTBG`O?2@)cVFiP=-xv zmkPwV>`8@Tay7$jJtzIoq9P&4z8Dy#&jmE2I(YtKCpa-Uwj}5*rzbLi!DR_X4Grqkw-&m-vZRf9)84iNEbb8MAE~_=b6!2klEec!^@lX9>*yA^PjBqpSzK$waiM-F<%E_}tdg z(@3&KPSMp!Ku}!+sZp$@kN#t3ruNA%&#o9PQUHa!eA;LFs!VjqMb9?T-1^+8b>FNB zXRp$G%&_w}EONz!NZu+}X>}M>(Zk<40wZ|(h~vrmHZ(wAm|8E4dZ@@4`7)tXU?}my#Bd+jEUnbQZ?%fN20E3ehMDTV+|+Qnx}w7!46pO%27kR~c0Ekz ze^W1O_+p3b7?88o`}A-u3Jw5j{zj1CXhmTQx2hIX9>sA7*Wl>we+k~tP>5GdffV=X z#`HhL{N5j&b8F$@g@#ATZ8>%-kFfj9rbx~P`w3{axvU9l9FD?fe`O)W8hv|Ct5B?R 
zAyT~C#YbE!`d^vFKn%_Xn}4JYmNtcSiWfD1K#5Dh_-r2(Xg$ZY=Mu?8995}nP-lW) z9u;n+I9WO9_6}R2hb5fsEeXvO9#qUK{A|;I2PY4LL4=(16VX;qM%AA7mqrnPV)gsx z_;v;t+NfmqCB%6Xw>LLy7^V{t;zf!YlJKBiwfRnwK^2i)X?*X?HEI<1q0O^-@=0Ll z;TtL;44e6zUjaXj0NG0HL%chb>9(T&F$8&&Ds@S+QCmt7ZgZ#i0g{JjL!eESUM1T!=v1+46&BIHbYn_BoV>ezL4s<5AXTtN!EeH< zS4Konr<$!m@^om6a7fwK`vdf_pSXHit*-D$QMyxP3gr9QQ|rSJ7}^0C5pdu>nqdOv zfngz}1l1_r@>Uhd28jv@OM#MdgEtgD(44eWukqfZqo%Nc#%ZuQT?++K#M)(SttYp} z>s~r*>FAL2uz796?@_v~u+cr0uQMH+inaef`k&Yp1BMKP*#`L&RIT z1J#^3X-HoH#>`_Pm!<6`a}XdS#_C4t7BK}Y;}1~@>GIw{qgV4);QKACS1966>DTlc z=L+nnJ}jcIJOyj;1*3|Iu%5M7YV%4}jNB||Yo@8)Ytu3Gx^{cFg) zD(fHU-+!hFbmjI_>jsZ%^;+=)!FrIQf&?4TTD}cb)Gdh>cQ5UFL?kH7Hvo-D7^MlT zTR~mt^#^W`Xn<~x70k565Il&-ogAhk;&>cHhHA^`Gu(*;n=sCn{r7c3inf0Njw=HR zR{>?%a^7q1Zp6u`gk1dlS1RJ)n)z>EGS+1`5Z9bR;@}b7n|SP{*|2Jy+I2UktT98u zWpzWjnf-rNcZZm_{u)5*84{>%{4;&2x88#QTu4iT7qO51k6<{+<6#uY=&nt_`9wDF z+!BeG>-Quk5TC^HCYSCSTilq<11F$>;h;5m!`1}VNc%wAz4l*y1#m^IC^UMpvRe#} z%w9_mj6n4|hODEbhq5OuSyzLbHxWZpvS+-Sa3&{*K)--ZhL%P_oHTt4rR}&x=x=r zAd#gOrBQ%7_2b`CZC#AG-7av>A4JO1uOYS)ysP~c6_~1x8`udjj1P#AB-enp9riaQ zMED5AB-FEDl{8)z|06E^^zklj)u54y$F5DzuRL2iOQrexK6A>8MSkc758n^~c9@~h zy>t^N!?QXOBtN3ojJwroNc;z(6#$>xjmw)Cy|Tb{Ji4FMoLgu9Q4Ey94quPHZig>b z9>*ShbI{r(rF@gxpD>V(N&VOKL9mb$x^+BdF2A;UhNeF`G5lzl9KoAM*W{80|ep&kv01EGZ?T+ z8cLY2O_BoN(gGsL8#r^qxM4O9yVcBybM|253GLf1Wbu>*9J84o5fq|?!T;iUJ1fb$ zIsC%Vw45Q;*w2O3586uHwHhx!nil3wJ92#MIvwqKAY`kPgC{Pt&lKvYW}@5H*ci=| zs{UeeKTx7EBXWKNe;bP?px=?#FMDQ*bBW=J%sV%9qKPDU#+So%Ok(G6i8P3!_i>5S z6E%OA&iM(uTX`?#!-H;I;tL_gCJc37D>B3>ZY$qs!|6uQ!|eK6>`zXlN4}vtRyLow zOsI3r;kq5KdHjgi)&bTN7#3z{OA`VSb*K>hYiouj!%E9ZyAU6^XRC$mjUirW8Q6LESIW zUHmcnutfG@!gSb)QD5q`TCGJJ_~TFk5M+|I8iDJLU(QyCR4aFlO>$%hcchAyX))mQ z6;tHPRcI1Q+#N_Cb?GpJMGQkk48Gq@P{?~1F%**MEos$_Ffw*Y@5f^r*akP z!l`0qniLtd0pps7IFhsg!3x#V9*v9vdm+BXR@3~{FI6Zgj${PY`b*QkE>(oNzb8|n z*`E9+mr5;uzdNiJ+5TFC79JMT9=d2KeeP*4Aif@o_HAsq!M z5EG^>NUEB&2EAdNpW$j&G+Gv5+3FUG=7hS0w;^16V zdl^0!#?L79^0VG19}zZZU$K;0aMl-+3A%R&(`Y9?(`4Y7y#@UbbY>0gpgo(5QIsS- 
z+>^4#@%fTfkA;!d@x0|Hu)vI1Bp_Kh-JX*@j3D+?TM4!)Q?(3_5)D2#EgZ0#@uy%l6xu3FAAtG_%5TiqU+SfAb#*df3#nc<} z{i#UwpXbK`gTgN_C5G&Bf!LbUEJgR{mmJ zmjsLfRmr}@#=N5G8jC%3%XcY<|Itjappv0kT=~1Unec2~zG}3bK1)s@7 z^l_cf=m4_E&w)5foWs#!8Cf|wneeU!3MTck5d7rp%PGrg{0|$`>@3}9PuHaU?w>da zgr@PZIvAR#rluax11O9GWy&$={JhU;>;_{}m=k-%#l_v--P7!ueQhO@!}y%#RaC7Rjoux2YsnO)G|#Yzul;SPw$$k*Rc2(Fc-kVp%>p3W-WLt z#;8Cez)r_bK1bSuB3}%WK%SX{V!*@Wfb_;_5=bpio*vx)(Dc|+fzfhto^mahGXI(k ziv5ZVMT9L~jZ&GjiLvqh+to(Ez>z%z)Y>P6CU~D9D&t9 zi={I#K1bdX;Nh+TE4Gpn6SX$Sil0tbn!w&iaQ;4gz3&(ek-B_A$6;_lUy*XG#r+No z8yirQZmF#WXzxGj>RfGX)QZ^{x*CcYQu&j?Rq-)9;RHZUJTKjcU@Xo5E&;7uCs>9u zdUT_Oz}udjMSB*DvI=TR$Xx4hRA_ewiP+;lz1_*fkU;h2z$jr&jf{zJ`>2$C_UY%l zoWz%wAww`Y*efP4hA3ZbKVu+2+7wZ@2+k622LWGEP3K1E8|p{0(Do5wh+l(t)I<@1 zyL$y~PrI-A$AI@VZLwonH(j)zzj5`RF6kX$IKUK{q21-z8+?59)MW0BNRr44q~_Y% zmT%v_H8(4lV_Y^W*QylHnd5MKv|d|t-kbGpk!f*C6d*rejj2IiG56WrKV0nr^Mj5l z=xzFV#_#mYjFn;QVQ@g#PDv@5joW9qA1>^v2V0kzQ1vuR=r~{X-<0_q511C6H z^!1@PG=|T!y?v^4Ya+ma?+h{dybHPw@4k+`+Hl2e3>5aD+VEfvZ&%cX2xx{U{*Lrw zN0H@@z*H2KwWBNv1xXdZ)_!=IKgeW{CnWT1#n?*8+)Dw7;e$Zzo%HL&lP_MZjgN0x z8G{xB_l=un(L(fD_YUddq>iS#v*~p&ar7~e;WE{%&z z%H&&w8!w)8w3p%H@n>O(z%}xk!GHkhc|}sap3mln_LM2ZnSE)vuAx1Lr>D=AZBM=Q zNOo8>DqwY~t_S);obzu1E5~Jq^-Yb9198*|KwF40&Pg`vA`L4-J6HRd6=3U`-_UFg z-rvt(4-;rk4ZwmDq%gzQ^YartayS@2WT)BS;Ai2arQ`JZCa+)i#^GR)v6Tx*Tz9A;>WDJx+{DmQ?+;Az++z|9X0?8^5St~P~2W(M(*Q6&wz)T?Q(@$&E(;Pdhj zZ~4sP8V4SBb=`0~V_~hFU+zeM^{wkI!13F3dE~cA(kf3hi#JKWR4y-k<>PG%a*e+h z2H2fK?DD=Y#F;uo{tkP%qaE?Gd(qGOicl!Fw_!wh=EMan>VA|a5ZQ{Tf7|wNz`@Ul ziz+P8S1!A)&5FmV8+IXLsz2PAynkHUjZFT3S2{po4^Lc~koMv&v|~CwYaS}rZzMP> zjd1YcB)~g5e29H}WF;utej*xi0XNLQ2P7pwmp-@oBy~GNAitr9ZECtgM>o8N2Mf+s zTh?gw#LkbEd-}l-zukdiclpSnm8|75xw6tyeG!51WCP!$NRcK*flRC>y#pT^6cwom zEbFo9R@-H=_;eZyoBZ(LYPt_V&A{pefmfUa_zRfupKc9H<}^P$kFLSML`ZDAEQ13` zmA4hUFKDc3X)v9J^Ts_|qsQu{-eX<-ZQPH>nL2`t6Q<5#@?L|H3x)~axhqatzaYy;x6(yU3AOEfWviIb{>pk<-prp z#oEPRwKF3V6Zf5+1idS39#^lU0$(3x8}wAa@~1@(CPOwgG&KCQ11RHJ3>eYq7U_J~ 
z(d@S&_uoFBP!2NLwgEZg_FY$Rhxee3h_$F7pPUfQ97@k)fBE_S@@7t#jPlz=Qa{YV z<6hh2-7{oCpW){5e1+&1FnE{%J6LkTP_Tug`OWyR=6?iWI>NmioH-`RHXfW1)i7%! zh&9?2p@;Vo_3!N=jFkUcB_LD-37OY<{3B%+(1n_N4AhVj&7}$;u6sAr@V7(*22|Wo zrYRKxIIPn*hYKNU4}$yi@rlz04V^mh{7vR9Ztub80uO;SF~NY^$+#ok1+>9BCghSn zVT(O0L9ZC*?4R&3-({e2T2T|EbWjHN(BN-Q+y&(*Lin`wxYLUt1=%xBkX8_FXtl-< zeP+eG5o>YS8zlgEK(^T)PSstR*m8Y5*lfXM8QzrenW)A+kyk`<+_GMoXzx?sfMS%&ynm!p^;n=nVwNn^wEYF0;&bOW68`TB`K+&=CkC)5m& zzeDf0NLAd9csBX*Vut_v7|Y5a`PYRPZCVCi!tFgE+qP>wQMhC;Ocy1hh`RrTU;&ZP zNpL?@U!JMF02-Z{AOT(^N3*GW`)$Z@szU-+BtZm(5>m&>KOI`5!2wp7y zyq%rB9lb@`?ai?vkHN{eX_~LuGAa-&A(N-rsCl>u@9-i_OXCLUzu-WA13;p>+O@K( zre?|XaNm~AU$9MTj2wqL<8<0`*1X4q?@d_5w`8UyYbO!7&DpCR*?3+SiQiRC5-2|* z-#w03ZP6?dKKoLeqpU09{&kEO;A|uUZg^+gp^5LR*VxoD9@@V1?2e!3NN1pBRq;^% zajY4#!Tnq-dAjlfpfSaLq1W3^GFEzU8-WcI@$^Fik=Qh{%&KmBCr{K3q87R{5rbrzyWU*+X;u;AQM4uZ>CiB_*0NHteRI{%@(A-?BlP=$tLHs zp?p9xN~~o%Tuk$`=O}eZIJpDd!-*#Iz7j&aKr&L?=(Z|fd1QpXFi)r@4u`!z#i;R6 zLWxs?(gc13f@zRgo=Rz=R1d*VD!dH51`4eYdTT1-Nu_-9&Wtj_JlBCv4d<&3o>r*3 zqN1InX!15j9>eslYf2!_Ofs#C(At^WyhS`>Zn|72p5o3a=wTwlK%9NcX1s>O{k0@r77?yw|3^AFV6_8FX{}qM z)}YZhOEK7v#Rb*=!MVrfeLNP9&(HVANV=T4DXGNT1OzYQGR+%&17LFxI!h z+(f6@UxP^*Ch=MXvP7sEaFV6j`Eo(FN2o)|rcvRUI%xhC>yeqGiPv@Mq-qfPub~WR z{QmQV^|+AVVa_mC=0bUKopBIHQnjX0N`hQDN4I2P{-IK-pjwH&LrGjhg(@ZWhOEc( zuhQ*4$43M$E}2U{n6gZggm7dQqKu%cfn0zWRb|xbZgTr@)l=E(_Oz3vYmgo4NOR0E z`j$a{P0tYMD}zs8AAO8aZ*MPUp>YW-p{eO_ zVr{BsT^$``GVbHx>Pswp!!l<$7o>-WJzZd3SpD|K^ZmTCfzj|(H1h{Dp`&_!0-@0< zyG8lwH#LfVznP|VzwX4s6|Phq0CPdPIK42bH9dlC<~*X~>$Eq2lr|qOm)IyXp1QPt z^n%eQ#kmXVNy`g`b@S3SYX8Qg_5>2*!66m>0&_G4Tspz!BvE=d@!h@>Z^YfdcmhD% z1_4riYd$|`PVKw%M^~r1sm*1Yx=>RwZn-dWMc;XtPY_$iY(mH>Sy?`Bysma{u<3Q$ zDTORHb)nXv0ifP}5+DyDO`*i#`QSm4!eco-baaGN9c)H3Op#9Z0TBraX?9j63_y7% z&?k)T7_wI-H`3o9?1J{#{cnyWNKxTsWoN}*%+R7JL-XO8#g1*vpl~_x7__HW$0eZ5 zIgkdBlGlX=jn#MJa&gR$?R*=_K?kk562i`q zaH2&=KA>Yc&GltW)Kzm)UEkEsF+ zX@1|JeBBJ-Akdc|2 znURl^jW1K8I~TP<0lX!j#~98Z1XS<<_*9?nja+y;ni4g7e`8|`H^R4|HS+qat`1Xd zBX5jsQw$#JGA_*lIBFSq|6>jvanCQN-D 
znuP#KoF*BpIig~K`Ml1FM?)mZfEd)@HQS(7kqT zmpo~k0Zn*XoP=Q4-U7pPUsPcp7;z|6XBLas-<)`GVoNYyMl5qt5QDxCT#$+L;$}vd zB0jgYgMF7TR4gF;w(sNN!`G;h85CwQ(;;|;qfdl?%Z$Ze0#)sy|>fbHnw_p;Ug z0a!ikqQ%M!i`~ihmmu4@%;l)MEES4v$W|hT;$9zMr%{wGuFO3s1m4t+-S`B8ru7>#s&ke_&)0M``w&%;4p``Pw=-uzot14dfMqcXmN>pxF}qkovr)O)e-k$fC7>ANG)bdtjji#K+eT(@L+m+{^=J2 zy$;W>%bRyJJr0Z2<{$F+2^z?e0;((cP8EP%Yu&_T#2FnD{l)tTV1x9K>h8JiI%I?5NG4I$P|VQes9yelR4e)7h zZmz3)HUk#xdU`rL6OCd|3CIcW!&pkzYcF*P-R_iHH7Fw*SEKdhb#8A{fk=_;7U*?k zEXv(epgCJj^;sVLGTwQH=awOs7j0EIdsze2Zyi=$Dh!0GZ~5lsO=hImRMuZy+oIV0 zLI$WCBseVkwf5l)v@1faA-%4f$(PTX zs&i%d;swU{dBVozO)Jo)y-To=n1Mr}K8;#@czrpQi1<~a>I$IzRy$j>l`M1e^7<)z zzxonkvc9d47eq2Wti+JgB$nQ~36uHmtsjH&#DpmT#0;#59__O3^n2ya92UsD+c7fw zfv@4q-Of-FVA1(vI5w*+Wn~^xCUy}8M+tcrvZ)+ku)|5ZgG6Slh2Zj&GZ*-MQ&l)H zF(S5jEi~JJ-Y#0nO_rNi7i@X;9rdHH4owncR&YLsulqJSGL7HPI%Oe+u?Z26`}1*6*d7aPnywr5BkiaY>eJV()wLEdk?4v(EK4m}5DXICQIr2~$sJY5jXbqxEBApm$)oePT+Ikw`{#n*D%AhC~%R=hlFiM%^UR}BAe zK$;Iu=8VvO_p>v@kO=?Nyed6*y<@a#R>eo%3Y(Uf!cYK}0qk&y?SNuAI)Irr@caO- zUrd{pe8_#L54jvGM+?KRo4X4}gld@9PR{bYJwCYF8#7YnYLLe1ay}zgTsu7@VhbX$ zP(QY+{&+%DJB3b)Dq)1EY#Y8l?psrX(sbO;#>zQLXJiwSm`}21_PPUibWFB`++Zw> zJ3tC^aYC9@3HuuD^IJlGuk(|7T{2Akj2OFQnZ3)5rd<8fW$)O|2qw02K$P?6W*9w# zdd~u#N~wc_b(u0j9h$+9iK6%Pl>nVmKgR++p2fz_z{A*hbEcHO_4#lJ$g0LsT=e_p z&HJtsz>tyHLgBigRiEmr5BZaH9H?o$&YIeOc!APy^`3`ga>0*A_t3T>vcJ6mtYUTKa8^4XLPfo&%&B#AEpDyP_0RFh|3{}oUxhKGn z9d(x9_U9Esig@chAE zc;4%XFt{erq?OlUdcIkm00{#V7g9{930$+V9%Y!l-^|PmuyX2?l_zsrPqqU%^p*$8 z@X5`^p*zya<{J*O62>qY2Fdnknwd>|6715M$$ituh0VAx&wKnrq;X%&%*;GJ-R&)i z7AGo-wCEB;AB+tNb#O>yOGi(w4vsG`e`G{(B!%`6eByVzUmv2^6;wCCA9^31kc{}* z(sF&?NAk_3-afo7qQ$_>@B1kJu27`c;#Dq*I9hO-E!=o|v>6(knwpM| zj_!v8Mq#qtUTJC;p4Ggv zW@<-HPW-eg5Y1L=;bjRNK>KhG5JFVS8*`ldBbimvn0)o z8K1@!P>8VnS4aRthILrPbUsoO*uVr_L2_&KJ3;{`}$sKW97v@x$0M}Dp@meD_QTJuNt$? zmUk}ZPruSC40B$I^qxV6f})TlStZ}bmYCn@LsBW9xLLfay)ZDGlk5kPJk5fIcT;(7 znMqMlgTW#oJ~Yrb$qudd$xoe&=rUGVPF0qxq!{`Xiu&nptE45>CBXtB#K3op#M-+? 
zT9H335GZTkN4rLbynjCu%)@Nx6be6*;PN&;et>z2h`A6NGP0sHK~ZT6(+AroK#LC{ zpmTcq`fBeLJ*+r3Rw9y{JllR_U1BeV4_TdjQ9@|`Mt3G=N1Z?XHbWSek?hAdWpJ`Q zdHH(4tLos}1eY6{Mr;-lN>3%end)wuJfC3cP*o+h??FSVkIpfB^wz43Is@tYK|y_D zV&Z#yyXDq%b4ydJt#9riM5ST6iw577HkGlYOo`5n1_-mgY78;xWd+o>-oJmJBxP~( zW@TigmPtA6>#s2Bhw}1^ogD+YvFXFb#q;y=#X~%a_X33}Tvr`hCn=N4xb=&<_@t2r`QK@5T!R}@$@-9nzVg~yioww*ynSvfvVgHmlm@YE<$ zipVf^@vZOoCk8h+rTE&GCk@8$oE2BiXr6hY!8kj_pZfiE-6R+%u|tU_nya{3GktB{ zO-KVZ6>lO!KKNgcF7kZnAxEq)PmK%FsG+nL1Cc_Z`UJO&`wB2*cVQTF=~cRag=34* z9V3Yy))dTh%SFU2u;0-v=2k)b?}-0={S_+1ia%$6I2J|gt1vCND6Dotn26+r0VW)ZZ3d(yJshz%vz=ev zyXF={E2nL?XR9alZI5U}1E23I0Tpfj-w#`lo!@YfaxSGlZ5^`wHtu+R36j?&W(SBq zCK2YOHmNye;ldlJDk>^vi%In7e%7Z4C!_F0@e?}+6RxH9S3|UeBM~=3)zh-&1ngW@BU8r~SCQIc$B%Jpy4X*u5U!}*Hh{u zBI~b6qh;ZZ>Ba~*V%lE`Co?6NQ$#wGtiq*Jt%=cO3BTN|0*Od}f~HwC4g`^8>}29gE`$e*WGcZqQ#I-4S~_rLwl zJ7;$^JA%(%p#%|w4x@vF8Tu;m6FLIaMEK+a9b#F=pXo|e!J zdAg($)9sT@0x$O;_6^bj1KFL?=Gzn2?iDw?BXSMM2nFGf9}7>%WiG#ek2VTRPomRl zaJhg&R$*GVww(&O12*Vs^Y*6jQ4&;F$jZrvxov12-CS6!s2v&YpPls;!3X$^i(RVF ze7byx^*+*o41M69fqr?*_ER#6JxnHSDQFX;R?PG&Q47d{dUoX}@=1MfalHmP5huWBZ-4o7(j z@0QOIy$gT&S`@AiL))|r<8Ez6Va}G`mKGwiM&B^bsA^jVn^NYHc8eTuWZ6Z!FqxhN zu%u}`8)kxxM-fF;lWLot-HB+s6jQQQna0|xysm!jelu*W?O_*x1A076=q@rcJX_1% z?T0TJjT`t)N!`Okv za5}oq4?_s87uy4I)zu*>DXuP)_-A1@{b~569Us#rU_{|@^C0B(1-yP9>=AwWY361= zi|5%XOHjO3eLf)Q{Tw7s{5UE6Z6e~_y8Xt>PQ-IAR>ao@e?$3g-ApGP7wTjap*PtK zk^k3X_c79w)eKv?|H4@?*nznn+jJcg7CI2cl{HvO_>cwtDC7B~01w}AZncOwEn%y^E* ziqFjy+26sC4g8?92z()~<*qE!U|_~Y$m`Z0hw&B7$9o|{`;{ZT!t zUDug_gZra;AYt`~RfuTe<(rMW2kE+LS=Xf-!efnv!sQ%iGYo!09Gzs4FV3s9=rf)M zXNd>DLwhL;i?EJKrB0A?*RRYu2mefg0V=93jA6c@qrRxD*HJQhXTq90u@l!=Ij=P% z@B-8NG1))k$zV7|VZjw|UtdoHYqNTHj|8|S?i+rSlL@fzvuR5?Jm*PPE=%#H>YS}9 z;Gx8JS)-Gj=w5f7Xte0s7m=t(5|6Yl`e2_n{`kXR;hYGxS59!_;OU+{0F z92KC{JI@3E{eldMp78>AWAa?*)cD1ZL#^w$?P}eR>Q5wHdG_{sv19bLwHKl*CbiXD ztaxK?b71n&Z&&1&!KlR@_3T^r79fV}w2&I}ZxCkd7%q)h`9-ytputdfCOU@O@i(o`m`x zbr{tuI}%!R12+MpWq=(wT}02d3$59C`&UR^Iel9uNHq| 
z{dBQ7`rSoVs(x0r!6EBdA`609?FQC;KWuMfKoK+nWWttjitAx3v$NHOor`ZEZ6pGf z-S54v0%~FgLmQ4mdrm$=o=o!yVdbsNE`G05`F@{yG!dfya43_nh?caq8PHnU`ZJc9 z!+Ir~!7ru_$3!r$6hU<_La1}O=GhPjXepyrRC@@Lc&2x zz{SPEMaadogGc}*BrPK9Gt$nL4oSCz7)o~0#`C&?D>jiqrdqPonJuMC_Z^wqh-{FM z#984%S3TNk8%ynOJn;z}MuXUB1MW*=at8Kc2ulI`4jiejeh9QPUzq+t@L4pR;jDXM zuiP8=UC4`1T@xYZdHqHJ>uGl(WQ1Y^H(eXx3?)=&I7D(zf4h^oG-Q zIM|(U9)V?oB<-rYo{o2?+VOOCsfuXHTU9+rVq#*1JTB)~WBhV*2-eYEcB*&NA&pwF z#m2^1z)B=o7#J7HFs&S5efTmYl5t$yd%L*3_{SEi`CTl zW0AJfxFy=DqOS`t<-;(|08^T#q<|4*Qosa*Z?CUC9noH09Tw7Pk5Epd>(UNC_gf>0 z9K^!I=Vv*uyQZ^ig*Jh&X$S~nR<1LLj0F+Rt^r(bukU+XDFol`zM-b0r6tQ4X_3a1Z)Mn)XAl%3Yq zgyvRvBR>wPb5Jj1JRT0WI#-MghBU55#q-SBz z1*f;>U-ra~T9jfi%%}cy#QyGmJ8HNrRSK@KkGy_AB_b+adxfW5B)1$5)X^qc*(?Ac zvo$?Hw8riVes_F&V2UHYa*5&f7RDE~*GHv}4ZdfxyPN!5!ca*LvR7ZE-Sj6Z61j)f z(D^ia4Ccy7JVd5X6BY4gXi4{31z+&+Ms+#9JT2w}n~6_37 zGr$)X32t9HjATkwZ?e(HGm+&-O?`NTA9b^{>x^}qVz$p$s08o%G7)Ra{V*i)gs z@%0T|ss@bgd{E?VIx&G&qZ8%(_vOS^!gm&j+X^ENm=8(6?TE);y(_f)HI$$zO)b~o z3sc$?aKxTtyR>MhEpTAj;&1?k_om*uThL!8DK8%r?e2(eie7UI~= z!`KO>?@bhT$p=5mp!&3cLFKpaQm~nP)mAh0E>k&fGhM=Y&Bz8oR6hA9zY0bv(&zn60baTBA$}(`FK8*iI z)Hz0n*{$6=Ns~s6)!1xo+g4-Swrw@GZQE*$#GM%BK-_GrFiP07jw_+{U zzGX#h4K5RvB`iZ_WUTACL)KJV1D9iSb7Pr$(<2HsTuxm3B5M|ZSyw?9rX-krSgifa7+?y}?lh5h*+n=GjeW-!dzF)xohrt7`oM98G29iPHWZHY z8y5ev5LmCO?CI33tdo_ko|(~m)gJAx2Jv#Ce)}4_gLQxo8mICaZtfPagYI0_Nesz9*idmnzwOX#U z)pJl%rn(*Yg*>cBcl5ji6lVvP@)9@{F8xU^ z{}C|$IT)0}dvzKnaR&KsJv%>AQ8YE;zgM(!qZp^s{`^ihIe#xnHfgxz%U!msZW{(t zwEdz@TcfcoW2bXp5!427E=q&OVsG>B0oM@5r$)Jd&1A{)e9|?8SopQR`Y@TL6bO(C zX6k+nf$cAfmr~u6l3@g5mP>jkd!{rgRYK|IapwU0%1QWXz!d&kG$f3fCx5-Ie9+ix zTb0z1ZjU1oTbRg8b)<&e35NLS%-_At5tj8?U~4Z=70Z5Y9}W@4Cp>N7RJ{#;A2Z0S z4T8wd$T+$m6H%XKt%c4`OD}nYk0}pIJbS%qW5PPv_zzm9{~?Vi*jnaJi~{%GQta z=;Yh}N+$D$|5x6#%t{FKfV9*;zyXlS@cr_-tTvW;$^^^f_^sE`;|cN1PVT9%e(-ba)wK8hU@a;T+w+5eIa zi!l}YjL?p*Ov{lP-&{J{*Czmeqk?O@rD}#k%Fu@+ZjRI;<)>#<y zFE(2(Q1k3T6T7NyDrV~Op-IL+@u8xt0K4S;OT$pnZlwQCTG|S*$n+ZbP=10?l_;;Z 
5-EP1s~Ssu0avq7u>uqd>9nC2V!$;o3lIKxhx4z;kJnO+)<`l{!7zWp`g6%cOH|SU!ZqC z&6ayNp$6h)@DtKWCOCW~o#?PdfbDO-wM&zs65|pJyDzSPE>`AN1$D1d=C zl&nP-Tkw4Iy}d*s??;u<&g*#p233gw{*lf^55a;$j$QpphRj#zR7a!Jg2YDf%qc)z zY0yD3+s%stP3q?a`-J?y)7RiATN8=DNj1m@jX=kRY!N01Q!MjDxmlB1q3AfNVRRC< zCNYim*cOk9Mf&PxhwZynqL91I8dZbiAHfz0`CGYk(;^z(zFFH8JvrCs`RHurVabXg zExhj2>EfJ9&wpp7ffyj$1V)$H8#Ei8fyAUG1>tKQC}vhuPrk85_=%8AZ(z%bN{GV{ zuXYA9`uD_*pjkKKMN&RLf77z#nqy51TsqsjtZq=MSas}{e%3+-fJ1Zu1R3=Su^?H> zmyn?~CESs~cdSkGtw;L4+&J&;rlGuNHC?|?ERb$kR_~h?!(?z{QY%&pVE!+Nx8Q&S zh3WtzWQw{EB>kQRNKEZ-*awr;+j`=m^h)WgQOC;GovDkwSuZMVs- zP{`lbC90iK)F~H((a9*giaI5`K+ep^>hLm8^MZZZk-quOD+1`bSEZMF|3Gf7Sl5Ei zZ&PlGkMqE*mtc>Bhey?hGY$@gGID_M%L^8EVPO8xRz6{)xZj*#mVQqdNmn@VDPH}` z-@l29Uo-JAMVSL-qm6zDYHtLK$b#WTtnX?dPtW?%++6p+`l(U^Oo*h^y)cQKR?-U< zH~Npnw#dwdWc{y`**9++tDHLl#$I&vUSwi4lgFz-*asH)*qE4VI+7SD{vLI303$;& zcntftBz;zx8c2YT@oK9#bZKcRd2g@doq{DS(_<3iU)pOaBDg#5-YVgCMW?6>JDoDX zAL)w#3^q%8gj8#_uOL|fx6sn8|L!-aR0an}Z0L~RY0M*VuhClD zt7u#bYm^&urgj(z&;JrH0E_EhG=WFF!<3>IGi2;wN5ajT(CqneFOqq_(HVQeEW>r< zF}FDUz%4BZ=MF-fXs%U!D_d!{Mg$@t=$V=cR8>}D$|O_V8*rfH?gW^B|L?W%MEXf) zz4t>Lmamc0XFID0rQ(EHLKXcwKcq2?+0iJqY$p=*Q$wUdg*@6V2sI7WTK`(0RQ;p- z@{*ZH+$K{YW>%1($K7tQWI?;7re^A3B1c|Mu4sn#+fJ1X{mT*>o<4wEHKh_792H`k z!`%u+{B zBEv33ky5Hqy+>ZIqYRter6HV(q*%DXG+V@Ojho0nsrw*M-E4jJJz(^ljE+J?#kCs* zrsC{|#UgtRe{nLzU`&<{O0fGupGl7~+-F$yjSdST7?dYgBWtnoE}$waCW3rE=!t@Z zD4Xf)t~Sm4k<3^h39DUMT!|L#MxX2e=KD}k>VDXfPwi{9NUF?XN@=dsTvqwt!gPg# z(KAj;Pei)f?zcR_>$^Q4&Ot#3;0wV>a776p3|+Z)$5Q+?6abFcBM;S;qICp_{7Q)d zdTBr1-NjWW)uJTQI8gAoCo9{&P?|F@ENs#{xxZNSG7@?A=Hq>>Yz6lOK7| zb*j7WDX03(b1IULG@iEyky~O0CLdw@2L_1vn>fFIl@siuR>+Z+mq){jg9bCMiB(A) z?4j_1kZjhb#kg*It*#ke@K|0R-b47_6T>ZEO-+sVo1B}Ez2NWpyS|fm&1fZTuu#6T zPmwVi$0qJ83dU>-o$jV4n<7Di_XK?N#XWwgdRiCD)BG$!4m)H_;A_CYYjQmZD+MGG zUvS9LYDZ8sm$1=+YBGk3GG&6dIU5@rySuBH$W0XJ8SmwmY<0BYW3ReD1BBQt9MEkd8~I^Yf-Qp7;+2SA<-=0-7rSd*)^fN|#o#=ALJ4 z*4ylkZ2UH=?)<)G5RO&kOSR{~P^&!cR?=)C>ro5LueT5+aCVy09>^oE4eETOLAI754rMP7%kjdME 
zlZ{YMmm0a}K(w$aOXd79PO$-(K*Yy<6|7M)1mol&8rsWbsn|$5Pdyt8$qq= za$v=lB$SG>#>*oEccyq<`;DeLuCDC)g8xQJi2#c7-rXxGe^O0@PyI|dC(PkAys@O}B#_m5%Oq?AZ3lz(NJ4Y@*tLp!iU_5?c$oD(H=1+OWC+A#@xhsq5L$T%v z$NXNmOVmS)_?@IGYsr76=ZN{?vVP&ef4maGKyK(Yu^GJ8&k0+ak{7#^MB0NYwcQ`3 z+~G0A>(BK=Y;>bp`d-`<0G24KUIXFZtI;Q?4r52k532b6#>T)R23Kae@N)=azwE|^ z@W4-CD#`!%UjXeP}fQ7vXkUr}m~!p0W#PDz1T{%IMA(s)iDPSPL6Bz`5#Eaf0(j`zzg z5kriyZSWi8Q3)x}pElX{{Od_Vu8_Gu^Bj0vWkUSb$%0PLvwwv0NaW*hPO@kNz^RZG zz&L{$MR)qyl_9TaQ{l$~4~A%yEAQEnxmqRG7-zn*;~aHHpWKuY#@!eH@59^KLuoP^ zn)KC9+&fuwpvdUO6qkb}_Pq6@oH|d0ajCWVo-_&pOI(-2O2KMM3mNTHn z6;MOHMFe8FRhch%sG4tOX~|%s`RU0$oXS7h&8(!p(Kf4xp_kzisWX200a&s)wuUp| zY%o4E90lnzE~ZCM53wcT*SZNcrg6TJQoyx^ai_rRd0OEl+>0)sy}AQu3uF|!p_rK- znutK>`FcKUzjr?UXoUD}KRL_$@cfFP!R6b+uebCaqTkh`+gIbGT#i+yR9r8ukDNOb(P?@Epy}KvL84oN-I+ma2 zV%F79wOA-DmQbTLay-rsMSsxLL26_FKNrCNy7j9RP?I|P3sT@1h9}d8x~`4(I6p}= zdQFkiHBiNk74_cZ)L-6XW2>Qf!~1CrIaKIm)iYp&1*w!KsA0^%78S@kVVh z?}LSe)G_bt-)9@maL4Q}`N>NIScFJR)fqPN0}PR=Odm)g*mlOZJ0_zLZs2A5FK0%nAd=w~=$3E*@`&>_D^o-mq_nZce@xNHw|B1;hl7uwnd396W@@Dqb)+|| zLm(R;i-a?Dpe5cH zjmB~a`{A+HX8!RndCE{DLZH=;-KZX^DWOi*-Yw zbj3mwtG+)nl0p7Q0%v+UoEz+-@Z!+ibjxu^1PR=2>h|lBjxqcq|#teml{!AHZ4i zN|k_Ie@D-y%sAsUf1vtXDRh^OoH7e333jk0*QeiBrda?*UMjq7hbLJjD>LggUYbIjP<-Xcz4yJ)k1={d60FE#m2g@J-L(v0^U(8eGq5h^by5!?b2)7S!_iaL@Z$3 z!P}ny@!ACZ&BI`;Y&ldxxNT)3yp8#~i^?~^`D7B*G-;|GThO8sEfD8&xTM|SeY{{1 zExF>zv|}#Q{{M~%0l1z|_^zZT*pQ@d%qvDmyg8^I-yHG9CXX5MXM`M#-U?U!t*N<-`k zi{U-r74g~4GdVp8mQ-d%`3~;|Q~F4UzdCt6SNz&cMKJlX_aMCfolW$!lTUaVNr}Am z$+3)F=XbxRvd!V6)X?8ajtT}!d--y3$aX{vP+%^u1aixnd0qkDSSg3{F$c%k8axA;yDstG=85{N_qePA(El^!4%mWg)1=aGAVgFSy>V zFaVBnG*!}9PX7*1nv{TvZu|05-xNiMwR^#wCc=$GRzbmW*RY#8wieX(lD|UP%(zzH zsZ2cy!-n7`x*^s4(%NTlt3Y@y>=s;=L>Uio@wf#VD?(nyANB1WJOn$(kk|9vLsD7u zWo5wD%tQ~W-(3yj6&PvL82rlz6-oRQ?WAzFsmt^%Nh-sX{jB7|D6*xAPcA_l z5o}-I4ncH;>Y+;WMfsA9qev9Aq>vnG2PJXIQ_Vz5C)~lp$b|MQTQnA5ogvV)eRx8G z!^lrQD2RVyJs#1)4lBI73)35+v+u)gHN%vaZDi$>he&N>xTiu0=y2n5@2Oqh_{fzq zn3;?>?JGwMcg7UVXfI`x?HoWt1()j$Fxev&F;A+Fw=|3;7xAm#;OAY5-_qLqkL 
zm|Mz5{l_6;Uz6%hYhkVHu(3|~NE5tIcQAL-Enq|QqT z#Rc)(aOM6GCd8aqrM%6ecAd$E4@au2Yr196nw)Ez41vHkfNU_ucZ^^#8JhMub@ zS?)mVk$H6OxY8?1c%4u6fe!X1xdUgs?@uFD!|Gz#kao9VePlIR*M4JKfTK!c?}pdS zb||rrz6g{lw75uzR)Ezj_AIM$SypjsG^V1E`d?b?0Ryggfc4bJ+Ho$lDxZoKpIpiN zY$)bk3oK+kv3+F&^(Ai38mkuH@k8}AN&6-ngOYd?@=^AuHiud4V+f2slFmj4wE=f2bB|?@GMAT^RN~z{?ODCqlCFOit(1v94dH1lT`R$U< zUxhoZ^O#)e6we>XoVDdx881BI(K=VZ)eg;F{djg)igkm>TE8I z2FYS((w22MvxM-bnqa&1Me&TmSv!$UmEVYCFh?;_#bMs$^|?~~@?$(SvMixNa*({?23249!GtvsNa*u?OS_D4 zQNn^`#MU1rnbSB+U^1S?_x)6xM?a!y+H?e(;40nr-jt=W4T(@g%#0p$Nh9?aU>|n{ z?JMfbs*`Z&X95C39F(c_(MKTPr6O@AWDTb`FEjJt&QpQlqKox z*!gB8^t9VP8_*O}RkcSO;t&J~1}*nfq1TvOO`IonX9LQp3$Sq@3dh=qv+>zXH!S+9 zhm2q+jRCXN8Nitmg}<~>m3T@*TqZ47HePtOeJn%2hg*x3-RWEZG}naR8%FSj*Cg6pQ{Yi`*Q(|@CUL&WnQp2Wlx420YK?L;Xn{Zf7 z(hB^S!Q28Mu8HFP+G*I&2(Oy~%IP{s(tX))?V*8zkXD?bs>rr9Cs}0OnE=)p3Tdv$ z4x8^T=r-yUge9*^BzKavIDp+!DoW+twvorZ%E_t=x+c|gt31(RNVB4tGTgzYy6Vs2 z`@*EmJhTt7)AL}vTY{uczxLsWz^Y!If`zFS4?k`#Tf9hi7F?q^u)m}}o@P%@noWPd zS>f>k8EP^8gQfCeWk4AL4cx2Koc8`Ke5EM~x;f4x?k|Ez+T>VX>An5Bq*mRG2^e`P z6Sc$T7PMU9O$HLjGM-;L;u(VqYLY8;C6Rvu-v4kW)r_D=h2r3%X-g&WNf4PcPS9ST zmyEPJaPRpH_fF2GI=Id=l}u8C54=%`gclHS(F?3;{McKFHvmq|w-xaI8ZKa9|FAfdKImGfR_bE7^Dk@G{tl9Bf+$It zgVzb~TBPaz76K~#>_6VUe`0NZ=MfN4cm>}ls;O?8N$0*xH-9IB?OC_M)Cp6>d^qQK zdwm=NAwv55#g{TtsmytKc~?NZ+;w$q7wer&7!8SVUrq_lr6P#neBy=>S38_tJ)TP^ zrAlD)jJ)+5-QC$ZIUmm7TCEtdW6?V;t0@VIsF-txD57IQz56u9@IJq#47iCg|0N6r zuELbmRPJrnR$ozQOQ{3^e&`D~Y1de$0=R$+K}%|(Yd2W!TVN!?~M2rsfY zI{jO0&M2Zujx8`KO0w>Pfb(d3p|7HXVcJ&TE~r% zU~$RxW-OC4GBOf013q6DD$@R}R*el=h#R~Zb5{3y+bvV#ajFU>iV`MzIM4FDqYP<} zgv8huE0cCJDxYIhYZWqqUZ|8Nk>B>O^;-`O9)&n*uPhq`vD~?+ZvSR3!z5g&A?5H! 
zG&4flI=#iu$Hxb>fS14&#j)UA=Em&|FJO>yO&fp_5?&vo?PPV=yN|b(_aXc!FA&L} zh099LRhB#Msxa2he+7|y6qJD+5s#vyDWKHRM8Y8RJSdm`q;ZCjMa0;g75~kvr1hNY zIUt9i^@p|^8&U#v8>>?%fEeK*Xe>fGgyi*n_ktnqB%vmN{nN#=S1;5`RJxB_$)oB^cb3vgdE{ zu3$dQ8bQ9t7so<}i(tPdjwl4)OiGX&fa8 z?;b;}LBCXQwIlo`@>8L3=Bl*x3iLCdhc#D)9DVYR&kZu{Wb1#tp01%+_UGq4{wRwJ zFW|#`+&>T&DqjtSv#c8V-jyv(Hai}Cs#-S)TtVSUW_NLSVhhrX#ZU7O18#}E7^=W#nWd(rh)aa-@1w<=dl``rpu^aZOnD7}V|cb9 z6{3R$yja($^B4fg$$LS(8WhEBD~O$*50zE6jEy>%NWf`m_C0xaSx$Cs$KIG2Q=2tB z5aI z`g&CGoLBSD*RR;#c-sv6ioRD+KdQ6^!J0c>N!gijb{Rg=mX{OpHN9kRyoQ({g1-R! z{-o3Acg4e_aY2mbJln`YbPF=QD_mdCOaPGm9Xb`wTi`1ZXs>)*&^p6MI8`>2vaI=) z(^;OOtk>p4Q|2_WT4ErA#=mPUH&{*AGAnkP5Ypb7m1=u)!VXHVn*^1=I+f7e(nDQ4 z%oDyLKAp(12C-dl8qv6`ZL`La5kJxfPmh$A6b9&xak!QYN^Z~Ei(FO(I1zFYEnFJT zX_v7jINxcRF3UibABwHNU7jW!6mIq^`8Xs|><#R_J^PFadb|E^?#K!t9(oke!pc$f zSD6-#xfFy8G$|ba8ByN-v%a>LRiLx!(V6gtE4hjUZvWs6@cw3kQA`FCFi9F8AIj#8 zLpv=I;=9gR?}JU^_nb6T2ol4RF{NAX;l*;=8o?1F9}S ztGRC1yQuzAdLZ7@$19^*QbTJvRED&9&jlPBcc(Vbhxe0dk1wt0B+vp)_uD=%k5@8> z`g4@W&`x&m$q($SA;z20A*08j8D3viS!U*naOjXM6hgxcJ}yyn@Xpm)=?n5Xzx!(% z0I>(rI{kIEZPK6`2MD(|LVLdlS}N1f)HOPdM2K9>Ub;Zy4=!()aS07&Wy%qF+0Q)C zH{7}R(b4G*QnH5p{FE<*0gaW=rYUO-v8Mp8 zOV%6lD3SDkfCqDA2#~dEl3|=0cKMvBJ>0i=DGXwF3{;F{ut~|Pu7Yq?&uz?bY_(KYEvmT zHy7;SwOpff**jCr)b85{Gw(ZD8YJWzQ%uC?Rsu1lxh143W0*HT3Ii<1mpme2dcgO4 zU(pev#a)v!A#!x!(o=$!&lf0|IJm_UR1zLylSXKB85&8YG^K*~$rV*-2KMXx6WCOF zdAq^9KMs~&uC@M*=Ef&|gJhJY-rSdL-e>HkO|>0=G<$o6w%{S6*r9q2r!JAP(tx1` z7b*KoHlKE1{^gzn!6~+q6)_KmacI(mW=lRjpt|UeI}i}(zE*uat8@vbcuGDI@{g`VmqM_fr6ThTSGEICRuqZ%*cK?)&ta7UKcv$i?FbBQ&ZF* zFUXycyH8Ib2IJS8l60ovfSUK1*jRqg=R;7*)EDA^Dek&tg=MkYcd_A{k^K_UuHFv< z1i!}mzLICIwK$((nKj!zBCugS_c0^w!+f*{cJw}8UgAuD*E~`}1(1-qhNkcTnQ$6S zx_E{wfZ6V6XU1ODs^*XnVt`F#gt2F{%$f_;W@FI&3sa5|R3p>hoi`XG+lP@vrQ^3D zVGx-7q5ZVSrWlfUG%eFG3dJvsAffj#uAgNnlftfx&!9FrGPPqe_Zfprx}QS`6Wv-i zCP{`c=6*E3uF{$W{SgioygL~|!BBG$0w(;c!POzKHFlvDTCTfGsihNrktLe z+i|^)-}^_T^@cWAeH=M@P$EZQWF}v|Gj?#>YZ9WwOxKtAXXW#22kDHawhh(m5a5_B 
z>s+ua=NVEpgY>b44f~Lcfg{I9`D(KALDREO^KNst)UFn_XL57Y>&;8Tu(HFt!!pqV z@M@Fpk&xrSTiL&kT9GS%Zm8t7Zqijpa=)pdp63ti+-t_UMT)77m;h=&#PYk_Uok<8T{dGJ!6-TZ^Tx&^Zk=)4xhU)hg+940;)RMxm{69B)|-|Eq|1RA-)=;pxJ<-{~||yXq5pN z_v<{xsD=WNcY9TV?P9Ii6D1EvN_EL4PZbCrHde8^F);3>82DWrNo~GAn~Z!0?#V@e zedHTl$JJ){8z|-J{t{JDVJlAV9RxRUf=i*-?%@o|!rdSZY}8`}m1vSW7#QOHMHJW8 zX-6k~hYk?v=`Cl|v$G+~B-#6MN_)REFO0lOOze|cC_~AM`~9Fo%Xfe1OT=FX*8K(a zb9|)B$MfBChg|;bp+iq9!Dsuv7^1+MAUs%ITljDM#tpUFn~T^U25KtiJ=D#KuMp;G zyWUcMt%cxTtinLh8p5F3B~D{oK*x^7v19N##p_kC9Mc?@iAfTH8_vQGMvRxp zvCp5Wiz5l2O;qifu$RDVvDghZppDN}Ml(N`f7*+Q(K|dS-Q-Nl{mhFL=iuyRu?sl| zVW#kQlxR)0r=G0h1MNC|)uquH=OB@#LLuEGJB5>(zw=DEg|4KlrN~&PLWWLcZCQJsH1de_ z*7j|a1~=3$nu^|{PUNeld)5M-p-STZw5W~p5NymqHczEC4gdxFOW~vdWwK7J4|J#I z2w$+j!$-+v{dbUxvg^Ewc4sAP`o2A(n!w~LHJQ64;ezUL5!3a7+@ydldg%mPdIR73 z(vSs+EZo}pPrK5E!rm~MqeTTdrkl&zIUY-p7ixNd7@Q-fI6W&X*)dd#riA2S?75iV z8S?d$)${#l#8QyhUZ-JnpKydVs_X1uWqTQ!GB_6re1h)nE_^?D?afRpw~r$wZbFfX!@@^n;?tm+(}bavaV$Whr9<|I7?H{OTP$7y z)YLX0-E}`6D4~AWH2bZ`*eA>l-_@V?vy{@ZdYvrmwPO_W-D&yOVb!oWz6J@Zm&i*X z(o7~|fia*_seiYD1#&It>L393oE8No7)j8Jox*ms2W|$u@iU}trIGYAK0Fa8xvXIy z5rJe~?`oIeb0+r0mL5Uw5rB;E7L+;#H5^`Stc+J6nTjiyb}N_G*0!XM?T6P1xgw(9 z+llnYMA<~eFZ1!ps$qNGa>UBZv|xf$!J$VDnMT`sPW^HZU#tjBw*NaSx#K{Ns@Cj=+TwgSV8c15iJ5Ya5KsKCzLNpFwMh zz=wdL(;%)0noQQoW7b0>i?=NF@`6SrI-^!vqK=w8To|plK1=^AyQv%hJ~dv&KNmw9P9p!phi7{XDKlIW zgyN*TRFU>luC%_PRg;rfQlZ1Ab8fUhmR$et;#??dW3f_S2g_1&jb-QV?jFuubaKK7 zW8TWcZ@|F1c;f;kH2*}VQvIC;YE;))5+`iW8diZD7@Y<=E~9s{VGiYnQ>Zeu((nBU zo%5UKbDGnV(5IK9R##$XE-ID4p>KG1!AA$;!-cg&m^fol(SDn4i|R{({*oOylr_hO z0~CPXjgR*S5Zn*9mvJTv2Qd1T&o@0{dz#)_EOAL-6Z(lL;10BNIJp2|InD$|%e0_*u=TB6sI2hm_O+Cgv>b`-sYafe8M19|}H z0-ar0p5Tj!?3{ib>aYpw)1T$S-U%A{b-9R%&r+~&s;B;_OqV5_G^|-&IQYr^fc_ogkVb->pwVM#XzE|&vD}r) zViS04J~nSM^bW%>;A6p@#%D_wCJtekqRCP=P<<{Btmbf4Ci8NrcUS2N30#N)t29_gQO2I+M=1Wa<4GGq5WYJTbxf$ z!x5ZTaD-Z}kx~?1wy9)(dpIslWmo=Dj=Eby5&JEU#dnh*vXV6T?i!CI>vt;quP4gl z?qmvsoQdfg_^YRR8Z$Gqjkc$~lZE87Y(ugi9r3J* 
zs}Ta}bm}o_bmp`_Y^&I|S&d;57ads!4I!vfAn09Zab$ZJ4Y9FPU9O?#OjJV6Yt*yT z-$oIEmk}2AwE+o<(P_~*B*a@2g3t2>63BUh)JzO4)b?NB4?uqRudak%2g80tyYivOS?O2;i=-@qMoS zxgYk0=~n45lQ$qbS}x#hZ@n{j%Zevk7WfhpI)(?yiGt=C(nU(dLi1)PzdqGy8MF)@ ziZTBK3LJQ6YGd;$!fQWWJ~J83*#4ELrNi75DA|uybimZYM@YjwJu>6(YWDH9mrL~f zFIvRnj+%j?4Lq*(X0}Po-w<8DJ=yc8_n_71x36jt^d3TBEXQEw^2;XPPJeeY9+RBL z9DKQ-(#<&YXv;G3>)}>l6UZ*wS~>yjrQG#JIO*QTv9^8z1})~0kii%lqu?G0MR8AP zR%vB~;~40Cju+0l)x zoSo4{9O*2Kc@a0?!Z5Y0YSUWW7T)(pdRAPCLDDy0>%QjTgi%}A_B0_kVc*}sq@>K_ zaPc?0vCME38!%L-{T|X^*3}siPG1S>Hv9?AIB1wd^$ptg3i1X#iHeFU_p7J4BO-7$ z#7=mdKadmTx1xm4Ew|DwUZF%@;ea(@tzV|N-){sy&~tLPmH58OU8IG8!>?bGkMBh! zTTBL5if?3OA3>V#nj$lo3Sj!1LZvb9-Bf~K~#S0`Wo;#x|B$*Fv+9%liAfUnzHHsTsN2#z-I`<1{uVgR*yf-0*OmoybDqwMh2~Z zl6P2_%C3emm?~sVm)NXrW2DGY;h4&i3Ag!v_#>@K7ttAbzL5WUcy0qS;S7kP6Je{c z^5FYK2;**xo{2qmOJS*LL{+PY{`H6Lo}fNma#%|f_6{L~Ia`fPpTi%+Lw&=2OgnxB zzu;%`<)AiZsdGF60M7FELKTQjaTP(?W@03NefXDkaB#4g%NhC%^D*u#b7ge&S@am8 z6W{3GFN;;K`XCH~YaZecPYEPSq+oVDxtQ2!Fa{%`?t-}1sDe2({0oTiN3=76Ad+(KJP@p$ zr$JWbnUA#FeoM>5;U&u5?M=L)8J9EM&(w;lpm_iR2;?d`G&A#t_`Toai-fRj*Jpy^ zpoWQdQ2F`)%hOH-;3ZNdsG|7>;QE}UmgeiJW^>Kn^5twp zc2!gu<~KD}SEI#tn!P1T_p_#B$dh?V4x%dHra;x6h0|Q^YTzl2W388?Tq-=>{xCHS z37%$%@WeDaSXyWd?Zo&z>E!ce7nMc50O*Y)LL?v- z65)>nIo}*+xrZMChV5+!hF}WXvMMc^QDF9CMG?M5qe@dp+H~zMRtf zlN0IJh@Iu*`IKXzIfL-|v2}HhFP^O@_)DUE;iwvh0+gg@Ea7Zx@=5E-~J?w!o@_--l{E}v9#>r40>bi20>1pYDl6zyif4 zq9|7GdrU$lqpj+mJ*JR^ejK;OfGb=1MX&}=1}$;`26pV+S`m!$)uM)k>2noIb!Oax zT$6#XF)~b+MIaN!q;9qU&&x^Z3cGO*r<4?#qT%L4uCLK>$b`k54DnuS8&s9D;7Ce^ zO?r~XkKz04-?R72NUEHA{6iMHLrqc{& zvw#H{p$fz6V`1Asa@?6ckai)$Ax6kdB#C3G$KI1Wawyhi`uz4^)E+Ay(uW^YAy1OZ zu~D_;u01VTE$&C>PCBJm`G>#Sr~&F`rzKVyjAsEfX3V zFMX$PnK*%ly};>^YubT`HqD5sK)HjCZi*-mH6NJKkaX@mCZy!9gNWZ>Gb zqS`)fec1F#*TE72|Hkx-*3gAbw!f8QfXkf(r*JZhV(~GuA%u+CUrbb zxij)jT*%+4s5krHrZAZdB9YNXfp&{-wADJEDoJb9}F2J#vHf8>h`PTh=xGQS1M89npoqmHfH zu$a*t&$yzioN!!zxQaH0`LtW7a`(#@2Zx9%lkJ)g(C`{`Wp}DYdKgDTQ*a_Ra<;Tb z?L9m)Vw4^D5MDJ4*^#bN^@q_y80VI3nb*IW@*|zEvBfsZ6(V#xfltN|nM`BO> 
zA5m8w7v=YKDPida6qfErVCn8IY3W!}KuHM+>F(}QKm-H?q?BAj8bP|dTcmv-`Qq>W zA3o3Bd*{x~ojG%kX{%pMU^|v1AgN|XNjDK3oOL9a@Z$TKS%-S;0aRG{dZKtZZxcF? zgB@ql{ytMrs#IE)hY+7qeR{pQz=O~Vg@*0MJVJ(GgHh&)S!{0=JB-?;ny!1A?#t8% z=l4Zvt6NT{K==O|$o+#usp4LzAAfE)7BlsyzgX_AfGbebbdr&^{drSgX#fnYh@7l6 zwi70?w!wCa#Ly`u(!Ga#7auV~I+)+n4k>vvSv)e)(@W3DFurPPs?kC*6bvy$7+3!j zOn`ahPwlA^D7~GD#ow~qQ8{GN$CUM~gp%7*E={E+=lVt#8!jdznqp%IeX$<0^4&v; zzujf;>9}f;NWY$^CMU9y>ZqVVQ{s{3-k54_>dZt_M+sKJw=e!j(FTGK*m>IsghUL0*A*&FQ-@H>1$Udw)ec(guch<@+nZCo1SJrk%XBB$KlO$y`!%(`0!s* z7VLsD#QG*aovjc|49SI%bYPYdqB7$XFSh>pi7trLx6lQaP^B?Bn6lj%efPZ5^ley8 z6^ev}2kcciy@rEy@{*K72WGm+DT*($RS^Hg2g=7M)|okTUpz3-!V)Z5zGuS^s|M7X zWCA~EJ1M2$<~qH~$ul)Fp3R>Mu@%Vhitb!@k-h3lBEo6CJvS_GJk4t@FL$V#J|x}- zyO(K}NXJcS?)%}^r=>mj_)@>pEjkli6%W7FU?w7-0Uh%kEcpG}$vSND{@XM4DH|=X z`ssIb;cR7v9$v+nhVB_1sSCyFatasCaWaQv%q}cM%K`Pij$-`b{g$_FQa)JruX!2F zSd#Lq>J8bSU+c6}L|;@|r85OO$)hmAXV!v-x_pmL|p@Png5lwJ3KvL)Q?KRqob3uC z80nnuLp*!I#^OG};10X9uk*{hzTkKZkPQOaN#W4Ycqkgw=j5>kzH7b|Xj(V*-v~*7 ze}W<3B^(cJYgYkdI;KCZ|;%yv#zc)$fY{I zOxN9FRF7u*qN;kK;_jhr3Bp7IrJa8ctTH3E`W7@8iQXPy+1~V+6HNMyj5H`+Xx~br zGz2BUE@bNS&*S6kKUbB1cKKT`u``jAR39_4ixS_+O&8z{B9<|6WGSveLr)AykZE?% ztaVQyvCKVmk=2PW%i-e+5CQ}9>0pqq@<4IxpWDOfes+>u*E$25rl6eXCBrW_SSp(e zg38!@@phK~7NkGoxT}aDa5^6zWawd+NJVO*FjiSgyX;$J$a=tJ^2B^U>SD&jO61C` zrAh+Zzch&Q@x8ewjQu&*diQG=Xf2v&$=!B(q%(Lb6Kqn8aw?&t6a=AXiS$7%17NS* zKN&p5XWg5qIF`pf1cveuhBh}h#eFU$IHrmV}P$KgZ6t2!TfWDzm9#wO-G) zsTkx~c$q3V2`zpz9^)Xa#bo{7e!ks+Q+Lx%+xFO;zH7|?yXz{N>jomw{YHH9%K6KX0T`_5JEabk)n>I$f#&XUg#B)k>NXj z%CS}tPr8vDrQIdKZ{ha4q(4;;K{AFCCaQNXhS{32xlC<8qRcjW6_$dG)5XTEN8x1I z-YwKxJyfc#7gm)?LP=@yoH)7*$PAYok~og4875q*q<(jrq88X?_qiujl#|&}l9;-H zW{e)@;J85H4d@_pP@hoDs#ipfwp6|uBYt(@T8B`yq!PZDt!5Yo2xYCqYuo-dp0>?s z<*JY=CKem-&Bb4!DmQY+Y9Sv0YpjgpeB<)VtNm+TN*7&ZUx07waGW}Y()!6793}RX zo;Q>fs#gq_?RWi|NhOht3BpCRBBbGd5jhBZM9tWLh&2rYaVm z#s^@$0m;3(=VlhdQ4}I8){YV<9q|kaS>CW0KKp10iFh(VGpOTB`?cI%_uX!ES1yQOM4-ZKP zf(SRt&XSdi8JZq1(^-b0;lgzMvQ*Wt#T5WuR;lb@nypZ{{SynWNpx{kwXm=dz=pa4 
zMWXMqm|J83yNsp1H1c|&AS3b1y z6j3}+_><&Ipf0d_l8E9Nror-TC|avLoc<%<_EG+-3jrh1o7wqYQ)4USrr(reK6Jt4 zsiqTVKDfPhVbN^jOJMMls#pEA@JKL6phl7r~ve(u-y;yGZ3|?5|%2r?-)WjJJIWiGf5t`HVG9orv^KLSmtD zf!9?UZ)t@vs!^K-db`#PRNy^i9NNOa7xm}^f?fw>CDYHi^Z5B|qGQtz-AR%KPo&H} zDh=4G%M9KAo3?)<}ZKsJPs79ihPt55n0LZ<^aYd zcvhZ)^XfRc_3iIFpgq=ea$?Q?JW}qes(wW&S}r&>s4rNbY*oNwahmzaFT3q)sU^nb zLPCOhc@tgvCpx*S?XeQ^yW=>xm!64!k2+}#i7>~`W0Cio&P-;K-8gCl2MkhnX7i}T zLMxzpy|A~JA$Ye6t!7=|d}dTwc2r66L=-(68N^A|3742beF^ah0C!-)lb30t%ifwz zIC~%H$sZOJU?unT$VCunRjgGfVBy4`P5KrVJxQrflE|Yq?qpO2*;ZFq>#(fL_jDi3 zwOk)9&kK7*OHzh9dvyy9^J2Jt6mP~yET&v_!S!LFjwtO&3}_sz-R{$ZZ zbb{&W1niU_v3w2=q526r8#9QkRfe zdJ;%Nw!3-S`Q>%Qos1n}+Sc)NTAuCjp9HJz6+v%9I{1*uN$&y9xnlG_-hJIx=_!z7 zI{nM3E4)xThi+6c|=&zAVz~+!^coUj4#YuW3XA?EA!dihMc{CD@H1goDO;ve>m~d4G zM#gTFmYYx~>~#6Yg9fx!itCL*4#&low|~r2eK8Gik`Ajm1y{0m>>u&;tSCAKt)~pO zw2;3D|DL)(4yuY8&GI8WKah?!>g}=_~e-B-AP(wqgP8yWRDh?O5jCmBz-btASN=^ zFE9GIMxx9*4EHF(+VVI`aoCO(fp|X64iQxwcZ%Yc1Ay-eBNgucQf(9HZDzzEnW_Px z7Y}wPt!k%bK2MaYr_wI-qOVm~9X&#Zs!d$bo~HJo#!it!)_DIU&{>!ZxV3Ld_$y1x zOY|}8U2SC9$aF#|D3EN}wz;{vm6iM3yn`d}woCwm5!!k>?gGEO^N~q90Btbn%n(fH z*B^YuUC@N*dC?6Mp;`6mN22>Lo*3&$-J`cn@9DWkF*t%R)n|A+jD^p7S;DR2l1`jLs^FFbBfkvitrVl)o6yDhE=VwMOp5r=g!C2VaP>CpVDvbA zC=aN{!OSn3t#q0EZkmIn$N1r9FjFFYQ z+A~m>3HL+@@sx(60IG^#-d5078!4AGkWcycCW0UPS*}O{ha0|1fo8^wP?q%#YV3>* z6_J62amG$I`r|?p&Go)J04=q;d3IJi(k@Fhk4hm^AAn<1EA(MtFKfW@>PB39u!8$)z81;hBIiR6Y-1 zP;QQ+v}y1#Y2_P@tga2jbujZTdMan4(%CH}(mJw88K$c8Jt>JGIJfP66qH0XhSpHk zjOvr_Z)doPkx_>Z@w-!ispUwRArUq2LfGLJYp5;Clqm5#=z<&N0LG5?F{gskryl+( z04rOS%_*l?EBZXz(zcm&GjOB5C7fe=+*h-_N5$H8S{&G4lNsqo?- z$FDy=h?&~1dfp{cW=a5k+TaCpDgaQJgSg^8an(<}X*fb7=p-DB*s3VSeY#K6(49^` z6eipWWWn`qTZkRi3uLaJoU*foy>{chu4>#?MZJ|7 z#}%=G_A0S~&YI}1OUUJ_hlhrT3x1dduZO30G{3}Z#wy6b;m*bg_VveM(6C3~CEcP= zMAj4G=bvPT+ow+vrOP-IhEg)L*6@xIVluz3T0+c<{jn|CYow96yXVsxAK zA~6VXx*XefcSIRO9Dw$Y%>Wz?tvW!mCStL#Z)b0h4fRc-r9Yo4BqV)OFiOYwc-QXc z@Xu1zr>-tpqpIrjZrJwS8y}m>KVl*P)r&6cI#Q~K%D1^j7*iNjBjUENOY?=-G)m}( 
zi#jpFcG{KyGv2BEC#Y|h3bh1!L~@4HxTIG#UgI4(?-hw~M>4lJ6J|UY<#IDV8fUK+ z$zMm5#Cyj2hne!@v^o*>`mLy#*N1Icx%BC0g;HG~x$gNf{AorsP-JMTKDR~a(FN88 z!cg~x1rD1ov7(hc-0(iIN+o?${p)#NSmyCYb*9@omRC+pI9;<4Z)WT)Y)qV^%#K83 zEi-gp$>Z-zS0haUXu{CmBs~gFAzhLSA`59Vjo?51?R?e8LZXk09`ZCNAtaIgQo3gj zv4ht6Ns1#rlRn9gFmRFwkO@E)70})zlqgxLgaEjOuxf$%92iujQC^gk1tnZGs*-7_ zW(o@JNc8iHdf{p9>>Q%ZUd~l%nOV0+CcN5r(zgZ|2sJ46zmn{L&uWWnG9MlnBVIFC zQiFu5Yis**9WE%}1@jR1z3!*vE%T5q#<&p*46PQ ze;7W^BUUcZTV5AEH*IkQ=v_=M3}eY4Vi8-#;`& zRt@Nq=jXWsIy}UDvWY1u*4<6?jW?r&M(S|GXjP};)Av!gGX+SeiZhTQJud2l8yLQnBqM zleeVk$&1r#HHo*r6yexCHf-?8!OVa8wo?(P>OIq1&4-SLT&i|;t39DEhj!;of&tl! z#|54Efon6gtfN(eO&+NS<8_vL3u*mmGROlPk(6QsMOpxt5?ym_6~$7N6n?CDdFc}I zxdc|ZKwtPC)71xmNn&njp6H5x=VXpd^--A5)72kp57bg*NS{?sEfa)YHACga7Naew zHH{zrWo1eFT02J2;(O zU&r37a*L6-Gq&f#^eWCi>(ziMn{_{ZjOlwDp`DpOCHK6&CueYXs=(u}|72&!n&uc$ zc3oRY=7axGyE$9R5QLT$k7f0Hr(U6vV0wMwI;Wc*KoX-lNHN zoM#LZR3)K~yJFHX(i58?ADz7lmFX6Xj`hW%P%b)Uv=kPCC;FP(=6?RccmNYwE!Uum zcJQEp>`VTAeFx-9=jY997W(9eb&biW!ABxkgHL1Zgvk`hY?NQa@zUH>x%ziP=cjU7 zcd+)TFcceVEo){y2}%;7Evy5d$vt0r+b|BNrRe!p4z)oUM@fKOU z?*JW)BT`1pbSdxc&bSCAZ=+|c%6N;|oa6NrYrxx*FQNvL(Wu2LD)s(O6n-?YKcY&f zy&Tm28-5pTl!E4OIl(s&BS9sDQtiu=5a0cly+dclgLjr)g)i!-k!Oe~#(tcw9pP*s zn$2CuwnZ7tZ4Lagdnw!&jvtQZ%)lMKoe3i4?5vZP~k zx_Hx~^0b(yQW2EV@pSBVwK7g!G~8aHH`r-RYD0#+$mw<+FB&z7qVuyZ9O6JagnLXh z`}Fb2wp+JSf>U+H+{37SEji-A=L^x8=%q``+?#TTQER#@L&m)fhSaVC08|NPWcnUo z9spsrn+@~S4;)qS@i=s{30t50F`;`me;s%6c$JSP!dSKUD1*BCCcXj-8&jC`Bw`%^ z+Z=Ej>Yta>GMV?UcUyUloj-Jnh@=5VH!pDne%wpsRT5?jI9qXi?wZ$KO?o1K`BzKP zEqd3suA|n1imFd{XQ#~}A@qFj)?XeExd(h3ozoi#ysCwNa;}~-7c;%xUG(c%Og4TQ zLGrccTTJ_LZSk9yQ2(#J{1Z`~Ogk2DDSKE?96MWvby!nbWL{H{LGnG=$<7IkLXWAL z2$gLc&XKvl>4}EJ&i+1Zvep!BDAYY#IZ&*nVfj*1M?Q;)EOdUqPktEi#XFtE>QlvoEbo7gH zE8?{t7Ceo1F=1jQK&uoVC(i#3se|?BW2E@g$6~1Ayp+ZN_#-Uy*dr3 zKaEI6IU$yePMq>1AO_y}-3Yi6%Xtw~#uP`u2KlUSnz2evUfJiwR&M?jUQW66AiLNK z=*jtU!Y=2(DgG{rmbe-vGGMAj-SSu^^4+Qwm0kbTV!lsr3LUfk@>yBot(vp z1Run7u}aN_gVkA-?+aa^y><0xf7{~jp97#;lalJgO 
z-bY!D7*}9x?}_%&ZfeS$pBs1^IZRN=PO+Pnot>VW%OC<3P8}E=tSB#kdALRN?qw^I zlKTCb*3dWw>Lq9~YTHhuaXH{3zK}+z?$s0h{w5Nu5W5qax1{GPaU^Q1Y-ivZ1U~we zN(%q-c}5P!4W|}w;DGign<^LsMzxUw`yKPU0k0Iz8Jjqlh+PAIW1CCU6X4mkUJjrB z@|Y3kHTl$`v*LpQL-Mt%UZE+}b|&YZq02Y|Vb~iAK20buchsN$++syQ@m_j1jO~2c zx=+`70b!TZ->f(F$fbMeB`r>xlCRm9R(Xgdl9KijEdx9Cwf?N=V8pMtqInWuZfBS! z@1F>-$dCqCy0MpGE@+Ltv3e6?4um&gNhT^$!AO}U0R_P$?82=vIXu&Ae-e!@4D+`Bu{zX;EQn$Ab=N14z^|Qu>;I zZ6kjYPlD$CSZP869v1v7Z?dgE+p7vR=WFu(^7Pt{9|YWLu$cyx{lhbiPtfL{+>y7p zpk;=s_i!II-FA10{6Cx@5uYQ3yB9nRzJR0XQPJe4kD7;DZj$P94G|*Ipd9)DFg4}5 zDcU`YSDQWtr6&PnxJ)T8oJ}Mui(9Z6+H>9N z4!z7tU#a?_8%148^*Nxw;f=QCc<_Rg5eT`@kdcF6RD8p4uRqO_gbROeA6k}N4Dc$T zs70a;QkX=CuV10cEi_35#lvrE;E?z$98(Ha>0 zXqHs$EWv0%Ern`eVzFrDZsApx^1YaObK@$j=z5m9gg z(3nmJ5ik?rJP+LzhQ1m;jyudb$~F_#gZz9|Vj-({)G)?MtbQ zFTBozz^zATQ`tpmZX;w^NR&Tk7LR-<;j*JTSO4*sfB5g9l{Z*GGG36_tvVN+hLQ|0 zr#}uRq3?oeSb$BjG++K_ZiBLIMEvFUV2q1Su+hpCajqaI$~8uE&cDANuiu3~ zn*p|5VHJ)}8l>mjKr*TAdjGrK_>rcp(NIx2!8TmPgiZKQqxK)43mrip|6t4+qFX8T zr0Mcj@=Nr8&Xsuhs`jT2Hq zoM19UJd{|)1ib5Ohd%Q5M@hWEaOJNI3X&c4$wcC%L~Uv1D*Jg(lJF3I8XPohfWHx= zntT`kHrT$R?iE|5M{3 zkqX8}d>=iGFV4}Z*@+S@2vLljbEApoFNaYT&mF3rK5VgZv_A(-Z+=QvmzcQSz1Ub} zCsq2dnWc7icBcxfZ945pDerx%R)-{c?M-a1NV4U2P4;HF-(PxwP5BYXj3~CJaN8|# z=`z7sf-%yp6)xE#_l}1$RU9ePdJL_r2~7i2Ur(5o8v!Q54VD3ZTb&CrH%tWK5CwDB!fFJNnEiLAF zeAyOj1yu(2B4u#^O9RmL>`W%m%?YI|Zv+!?Rdm8DPwpoOfog*_N)=AeDspFMXQ_5% zPm2Od?8%R;u9p&WwbkBCq^DKCs6xvXRAS%rofp2p-|4?^hEz~EyfJ!&8nxOJOgoxS zdnf=!TT|&f|HU|c(`+^Q4>Isq?E*vxW#@X0yV56lNF4q%sDZD3SF7XZNe*r8kh)Xv zXCppZ7Y_yz0@FZ9G8xCar-Ffi!oh#8ha0KdlZM+8|Ct}2xya4;suimU%^eb%TXiHTKdM}rWZd# zK>62^Q(5+d?YJQ!(kMq`;Q|!Fk!aO!0i&tEo|5iqN2`sktj@VR(@dCaWLdr-SBseb zbj(8juPG%j@m{6hFGr_-eOe1sU+$8c0cV(N?~Rf(jtf(hRcF2bz{pb&ilyqj zghSxCM?ZL22&ay2A|9ruuu{)NR)D?HZ;#bN&QLH7s&`RgNOG{uQz>>!F=_sAbx&#? 
z08DSC1?q(XN*+J9Yoimx!~>(#_2JLh2v2gxUX(ZR9LIiFnCu-qkTsBhpgbIT393>g zCp0fd_>5p_uER#iTw9&hO%P#OkFD49P?K^JLHhi`Tvi%&)V~wkrNA|+bd|VqgPSC% z!5Q(Uj=kT(o&o0_;=I~4Eyp+J07EQzh6+Q9Qir#NcBa;Ur@1AEqSpe9%05$7#Rkgh zNMLii)aFM?kw53Z$>5KUBqY$fy%*+kGwUiM>tNy$Mhd|HXWSx43dDFkgT~aaGckwt zkW@D%RR|JtzZzOiHYxDrs!W(P=O$$8{`(9AirzDQWS}SMJrtye4Yy{f8L!)s-&Z%> zFd%gXF*Y$l2Mms>zT8m4M!S6h?n=5O;vcF{q&$Bn)8C)eOq7I>sMGaO4xkQsv=+uC z6#%cV_A&V}Nu&%HCXJ&229H)0CPAu};BBt{U+01E#vd-lDs=7p2)n0Bmli3XF$dQQ8i({G%B`l)$IE#mh?BA&Eho-z_6F;O~J0g zP&$%hdY48cbmAHBJJ`!so`+dQ z2&3x0jtEBB`tVw>O(x2POQF1ztdp6id2eSIn6&_oVe1i!-n@d)tEx08SUQQk6)(-l zOPV4=&Vd079zsI_CpwlFi&X@8?(hG)K@!w!fI6xkCpqWJzzZaJ#jFd!xI3L2o`wK> zsk_?&pTG1a%SS;t3I%*yJ25WWwK2x7XiktK`SU}JdQDx`YN|+grB$lOl0p@+`>N4s zD6_Wfw|b5>7Bm=vhMH#^(>da=UXSp6&5wuAZc(Ge*5EZ;kz`=DwNG*ggqu_0c!G(d z9{s9nM97`F$J7RVl9WglgoHcyC8wxOV(qgbkAewM={3Xv6`~K{U4Fxadi0%IRmh{K zAmNHzJWlL4oh-sLt1<)b=ie<051rmRF8Xj6DLzEuEU?+r0qgg`?EUB{o=;d8S3kn2 z#TSKXha#h$GfzBG1rp9ZWq!zr`YZzVG@_(&1Z3b60Vj;E`UOr2ZKCcF?j?arc8Qt?;( z`aSewM<6Le=GV%?9I2Mzf9e^$NS?bm*x1032~*KaK5Nml zi64l#Fd!2W^G`{9*mY)*k@faanXV;a3oqF=s%Ed&jYH!1$5Q9XvA=2`mgPZhX3XiO zY-<#n07@T+?{m#~Gw{|?PZYb)iy6%}Ob~~63Y-o;*SJncNNTo8a1B$>y{MQefNj`!Y8z@0VX>mDFsYz2G4;NmZBlsf z|4*0;g`L(dUcYxlZ=gJ+An2)o`rUTzC3{J~`j6#@V2~Y+5Jh4^@rg9t%PPFnN=S4h z2wzgx?B&PsfD~Hvr|m^aHa~u4um5!C@$-bxpVRtq*$Z}{XS-VR#YIQwujHQ@;d;-M zJ^b1paGRrtr{@}Qp|!NML^jvIKXaT4nKlb^qo;uPbW!OR67_SK=Z^6KY}4sMPmu8VffS>L|d!lonJ|< z=1vsWuZ7*vs%KG$Lpi(nv!3}jKUjbc6(?R^LyZFnHsh5cGG^0`i77cyvG@E~(Yd~Op^AKv9`2Ypo%Uv5Jw6;!9?b58_T|ENbD$!d!zxExqwe))x`z}~O>%+V+uOW&v ziw;ZLCn4#tkKOgo&5zlel(D3lnuf~!>WcrhR+PcUN{H$@J1^rgAMyg&d^upuCe z6>HtF3?Vf`2yrO_2*!adpWrnYtDsF|=W^Ju@Cd{kSmu90Y$y&v)t5cXj<2sE{w)CB zEj9tc6g7sZ0EVaX#E7s2m=t?OVSKmwwNTp3qc1zvbH{RZOBq7#5{N;ZRX;_*M`=pd z(oiy}oe^K264;PQ`+ghElEL}%Czk?@qm)sydJ`LMGFQw^Ps0tqBv$FAf49Wwf_-xy zr!~?zUN8EZmf~*txt(V=M6(d%MF0ese!x#{VG_MbOF9H(}pR18;3ztcCF-pM^A+Yobd8IH`sMwvxp_F)ba6BCz0WNKD9wJ-GQ4yH4L@b7v=AZ$L;DYME;BVZ^N 
zgdqj$oqUfEL8Y8u2E=DxI=)QnpLELya}*f{P^AuqP$Y>dps%!-i)6Ci|MJksh}=ca zDjs*U+;s#x(arDN8piE(l)*`v7cQEnA{1d%Y7`sY-F@U5XBz*dSY~v1W;DnF8xvH; z>2BC6bw_qQ!;6zxGxqW=}OI)f!@@Q%D!jzRT_Fq#CAL%Eyb9@R4ERb5>jyd80DewX=i zC80YcfIpEK%XiTKch)3wY>VuF3tCzMDg1Fcc6(DSv7(aJ6qoK#1rF5X4fkFtj=kTb zXgVY*a1?6wnHb;)oa)gC;w;;nF^fpxPgOo3s+UbcvMrmU=CP;!8{$=Ce{9^8YnFDH z!yyXhCvtbpAj~D(46A77gFOl?g=zd&&wu#kDaxuwW1F^1x>qFrr!x;voU517_9?w}j)$ZM!=Nt=4de_Kazo6fjr8CSkJU03t z#(F&oM4|o&-*O$>HtYM;Rh*$Jt+EZdSlQQ4v-4g%#ZS%7_HUljNqwoN|KkJ)5M-`3 z<-zj4Vy;rvBoF6lF6o~tLWZHDCI|uLSjv)o0Mx$4q>8j`PMy?Th&#c-k zg;lS}!zgtFFEU$${#$my-^^rmNL1?`>AJ!5#`kJX+SRf_>Bi~p`S|eqTDuq3Kt8cV zNVCKcUL>6V*1Xsv_T#3)x2qSk`f^z8^x=bOMfmxmY$R|IP15uOSSYNzV_}yBvI8CZymFKxLw#Dwh5judLaC?0(^Lq=kTL4rQc|d zbmCe#x>v~5zT~`dt1fos&Dx2;}tM#0$It zrgM$Vla+{E1>eESALMt;n80S4S6j!cz=+PE5T!A`mi~;p3l4XGhE$^wb<)$+{?|C3 z0$!-T^pA=<-$%78_`M%<)72Rkf8LrHZCCi6^PdNg8LW=Rct%1^7{e49seuj0}v{xJ**p znK!C@i3cn1JyJam;>M=|gCNAiA1i=NXI!K6xRIhk7$)E96P(90^Yio1_GssN<3m~! z2f|UIT=Vp+ECVr?8xuM>PpkfOY5qZNjl$$_*r&{s9o8Zpf4`68(a*_&S__fk7)!|O zi~oB`OVEzwj%s&{s*?De+jA|`^l-j-Dw|$B&E%D1MWsA>*>vDhit3C>(1P*{AxMHg ze*1uTGW1}0EDrPZM2=gXdr+ z#U8n*e|@p(LYpUT`thMm%Yc#@V_mK7z*pwbQy~BD4-7{EW6I5Pk1Lb{fN@$b4oPzL zlO3x>*#CqGMm}F(U$e5ZW@V8MI5^UsmKPad!?>vC zi{454&`yu0JtRSMLMU5=&`q{;fiPZPCHmy>QzpFGAO=gNUU=j*0m?BP{ys-D=Tw&D z{$qgIspI7j+9Z7&7LL7gqQf`829dx8xMp1zVL~6KTN8c$9+>U~DiH3lA=v{a(*-k6 zm9aO9h2lzvDli=6ie0!Ts!42)Fo$zRR}muTzjeZ@-_YwQ50z^)9%BBJ|GF|kt*u3Jr$ca7vH*R4d9R}nZR1O>A|PHkDs3v2>nJw9}}#6E`D3(V>nm>+;||gWo8G|`PeKk+tvVk{@`R?mbOVD8f}f z0l{%&qaaUsM*q~Fa}@I!d+Z6pt;Vh&X>eACAr95&Xw>wr84@ZIxKtI6*{kLn8Qsoh zNRPcuO$b0@!*X?VBl4A#n)&hOb9N36+SR%*=hK32S@??s3u^}hWl;|vjS3ibcJ}aE zZVy@yxfIl`*^XW^uO_auteFOAV@%)Po#xehoPTRk_X!M&aI_|kKTEe%5mNn5aSnc+ zX|VY3Fo+PFNTEb*Kxhagx3vP*g|XFIjA!+6-VQJ@`8Gd;(qYG+a;KD@$_-s7`0ffhA{jFT%r zQs8F$$VY{-WbW0UN~M<356-C=vq{WIT?IyS_A(Iw-(=wa>19&bTO~Tp04fjO(6I90 zOymC&4=|#3YIjM=>TsU;WhdpG6TlFudyyZy9In5=+OUjEDRy%5Ado!XzjaC^wr+NIjEB}D`kg*iyzwKr!-Q$|-(t?^1lQ(oeFf-P5bb4PTi 
zJD5|4r1|Py-~hy)qoLO|(<&@=hrh2W#Ql7J9~irhSl+-@atoibvvWaG6YklnnXYE z{{nOC8~MVV^6`78Q4!2q)9&Fvb=#b;`-%Wj|+nAAcDRWRi67 zFT@vMR7vL!w-V@(nH;5}XlXpSXsFltO_#Kt_x5gj^Yq)BUpwtVXj}P`V6Ay)^UlZ3 zAO311@I*}VyJLhH*0cQBbd(OdFY7pAgE%o?@|DfS9{rj$1?YSar#9P1!`vkH98*Z~ zA^AR6XP09obvZfTq|B)pKAfdgb?OGC4~1Y^=ox=Qg`Yin$h+!M5I3y2cz4-3jGxu7 z>KhBRmKyZ|7^-|FmB7zGzLRcm#E;gPOFt{n8zTCs1kQV6h<#rmZEK@xlPT6@mZiP3 z<=GR11;+ZbBVfb$(wu^rRsXaRE=O_|qc~&gX;7w8hezXDhC49_#j##_iY8&BNcp|3 zGAeELI%So&CqT{dstqmr6xBh*?qH4+((!XY$rFMT0XY=#x z!?(jf$gh;tyVX07Dr%>yI?33Nx5swQ)o8MK$0qH2e-N3r&*_6;qt%7YZ~>HK3CqK% z%gO)MB;wi>$2kSl7LHbYY0}11l=PFsN2%OfV;5rlumd5YS#Auib+Sa+q&{$-u!sG* zrBbg*t;ty1Q%S}&Yk2>-Df(&F8In}yG%bT2Z!urJ@~tfTitepBDS7y^sim#y@07{Qbp!F9x#p<_}lGmA; zl)x2%=%D25zepV=H!;Y*o3ui3?H6rZOOGv2M}={=*uaA7hN%k;t6;#Bf%TJ8?a+N%hix)7 zxuctK7DY5>e2}Ffps5JDyiL+U)Xxy`-WG9hz1tJo~|x5H@}{9 zy!~^1m@DSnw}7*{b<8QbK2qj!xNv(uJNqne@(Z7b)^<60-JHXh^?kMHnn1~K`7C3` z7ptRZ^goB{#0rkr`HuN1`}3&_%w=K|mniDV8DvvZb&0AuGH`ue!zCV>`>y z0V>s`S>71~Z7=PM1h6}CpA*Y!SSrc1S3w!6@`1wffAP7kh?+`ts7sh!$V$K*=f|+6 zYjbmRfjX$dymb-qwjx{>FdSa~e$+M2-;G8Fh&8u=wv`iGg{ zUiy5}Uio+=dbD6hm)4~j=4lVG&UojiS^79r2?*9k#>TUAbBbhS8!nG9DSeaN zG$+=JQIt?52NGG{eh*syM(%ptc%_=6uvqpGmyG@_`{( zNF7m(?6F{*ADv%wX>{Og21+l#0j7jXAj4X?YrjaRth3-p;nI5DJeTZn7B=mtQ#z6o zkG9Q|p+P&RoqG1|pHoLe-1ew~5^=n)(_EOHj|jetW3x}=IzeG=^qqW& zRJDP3UjIwuo7b8D3UPxp{wqhT07!ZzPFB*un>svWrVZzCZ;z9C6;4gz$oGFlU1eBQ zYu6?Q7`kDQ9=cPcyGuf8fuWTg8tLwC0RaIKmG15iDe3N(l19IMj-L1X4_p`fskQEP zNAk0CcZLW+xRL)JZsIZ$EKC&t0UgEb;Ah7tk>bs43Z;&%sO&gIPw$HWTV==jZJI_XbQP6HnAQlO)~f{%S01#q~D_thkd$#GIJ zSATzhb925N<%wUoiecqKzUs~F-5X~8jJ$$n#q%p(% z??GJRM;e)Gz}Yz~Km)4kfL|M$5vFeuHRtm&P1yjtLwecn#|v-dAG5vR%=5ZKc+LB{ zX-xfa;0HgaRf>FQ@WPMyR0F~gYmmgp6IAGD_FtLE;7S6YdX&PFltUZ1X;LlZH=^oHV$ zjIGL98Jc3uc*x^X)*|+WnV)curKOOV_`Kwras5kJAwon$w65Awo#Vs;W42Dr(ThEy z;K|{AC7?~;}*-WONEO|ZN{RE^M zbI%GF7i=GL;ysL(RzSCu~ROJ6ev?NSc{`VIjS#RW9i%A&|4^0tNf+UMg~~ z?30<_PzdQ_CaY4NoH!IuvG7NbJ(}JqX;|np!7jU{GMDl}A}w`wB#};mTFCYXK7qIX 
zwh8<`46_t}#&|+t{>C3u)R+bGJwja&hxXoxfVcR(6g*QZ&0iI#l*v^V*?#oDn)&qF zT*%`l_xA%+{Uigypp7dpL1sg?s4<+?Mq<{Mc#LhsSlCz&t`XWd2b&>oH%b1F^27li zC>%-Cv0@X9syrdZpECrkDSg*{+AaY_lpb;>wvq&Ajv!mJ;0O6BySL8-JPzxB2Hi)m zeTDz~fgAKe2r0|XrsjIA_Wg;=ZRJ-(z?Hoe8H++HtoP6f<>f^NB!l)o@D-H zFaP^Su>ug|t?Ey{Ory$a)1n)_ahP|WAnyn0W}{J{oO8T&=oudA(<;bVV$WWi<{M3W zx|oHy?`29)FPI_<4$cx2Ns3KUi|JqGA6!kfEUK& z*AX4gFP^S5^G5lN%Ng(GV;N=Qt$HV&iI$&MD((a$DTvX0a9-P}i(9_?hS){f7 z1_?{0A4??TTn4W?Mnk1=LRf;tb&d*hImhlIYt?A9x*$IHoc@OP;S2~0iT{0r3*%>Y z^DcHm#?v!c|Fn_LWg?vgs5;?<4|d|dcP~n|&b^q41c?MKUA`y%8DIcSW4Sdu+KnF!k3S{9&48?>q4=p$jh;R8dLhbW9UEK~=~YnL`99tFFSMI5WBaei!=} zeOFiFUw{}ctPiO~!e-brZzP0!Q)eU=P7*9{E}a|C+!#sr$N2Y(VVp%(`Y4URAv z)pzr>@%}zN=XIMngCc`<1dKUEy8-;N&WpG4?t0WRw`RCl6Q1O+1vk4$a)a18ITu^x zmrB>(ibm}@r^u?uL{tT-4Plh+Dd4|~OWM)w;VdUKC&Nvz#P2QLJR|CVZ>&Y4doF*J zY4vT-8k>Fnxsu7sfMkG9X*L4@;N0sVLUk0ObLM)dB( z;*L;--u>IO_i{))heh3mAieDV&5b2CdD{s13D|9`Q zncFM-X1&zcpYb_o?g+1IUcP7WoZ^!@Hx80O-~>_M#>DOBwWpY$DAHzr9xhUfamO39 zZv5v{%EJ6SC)3vN#K9(7By_xWL{SKkkLe9~NnsO59Rn7E|J={3J`W8@B1Wv47psGb zL@s)ExvSLCLEf;!Y&vZTx{Lk?5(ZKjbNPS;A}(N6dLiX{Rx^dN~<(X>Y)~0eqdZmU}$*X-xb?~ zj7+1rwv%CPZ(po;GxhM$+}oQ87#G6Lo^8?!ntg(Gokp9zkd1_|d#c%?%5zgbBp|S6 zZJXe1o4IaqXR1DVd@#!gM=c$ZMynYMzgWWV#n!~@!U1eJdAm!&n45D}4SNj`x0W;b zCPNrgf6=2|5E+@nr(=3euw~3eln&9ZCWV^bt|l~-`Fub{4llmWoUP@;X2rUjMiKRE zK{JgY5~aY6VIxOPhLIOhNMh-JpHU0FltsO10bK&3m#YZWLl#~aV1*(WJenUM&JYghpaMTS$v7ray7-9Bo>h&CKJ3Fr+0P7pM|QVo2{VBKh;|yn7l$ z*ipDS5SL7tv`y~)1hSo_FpFs#?sRs$C`ZavN*a ziJmd^paNZ+b6ZfMpW2FeglTpVo=V{&QI^H01L72X68&7Z%jzYWo3T7De=r1u#r6%d zAZe^jEp<2MZP-|SOC;sT7{bJYG-cpSy*xC8k;OeT`wSPyhO>J6yn<=~;R}uhvraJT zc2NKYPay23x8Uzo7O3f!@e@{Zb-AnBE=hS30q;H;!I4O_e`fUvs-ZjU$sp3@WD>)4ROCEiDTEnnj*+BgFXqp_$h`5bt7Fyb`Rkb%ymQX zIbm>s!C8-N%#={#de}ptX{8?OU>wiXemqiz?ba-mUucb&0^`p2%x1bm07Qp1U;S}& z;+WHT40i<8UoKm zlrOq_uRFvxT-YEtyrR!o9HyK&zV9Wnlf{q=B3St-xnCF{pqGnuNC+W&M~EWH4Q;T8 zdo0e(_Eojb3^&o7VjRZNRNc33Y{Uf}f%qG%+9)-JCleLO=P(5H>s1s?(WlmJx;+}M 