Merged
@@ -263,13 +263,6 @@ with pushdown, Cube will need to transform it and generate the SQL for an
upstream [data source][ref-data-sources].
Learn more in the [SQL API documentation][ref-sql-api-qpd].

<InfoBox>

Query pushdown in the SQL API is available in public preview.
[Read more](https://cube.dev/blog/query-push-down-in-cubes-semantic-layer) in the blog.

</InfoBox>

Queries with pushdown, unlike regular queries, cannot
utilize pre-aggregations; however, they still benefit from [in-memory
cache][ref-caching]. Queries with pushdown can also be modified before
42 changes: 12 additions & 30 deletions docs/pages/product/apis-integrations/core-data-apis/sql-api.mdx
@@ -134,20 +134,10 @@ There are trade-offs associated with each query type:

<InfoBox>

Query pushdown in the SQL API is available in public preview.
[Read more](https://cube.dev/blog/query-push-down-in-cubes-semantic-layer) in the blog.
[Read more](https://cube.dev/blog/query-push-down-in-cubes-semantic-layer) about query pushdown in the SQL API in the blog.

</InfoBox>

<WarningBox>

**Query pushdown is disabled by default.** You should explicitly [enable
it](#query-planning). In future versions, it will be enabled by default. Also,
enabling query pushdown would affect how [ungrouped queries][ref-ungrouped-queries]
are executed; check [query format][ref-sql-query-format] for details.

</WarningBox>

## Configuration

### Cube Core
@@ -214,25 +204,8 @@ variables by navigating to <Btn>Settings → Configuration</Btn>.

### Query planning

**By default, the SQL API executes queries as [regular queries][ref-regular-queries]
or [queries with post-processing][ref-queries-wpp].** Such queries support only a limited
set of SQL functions and operators, and sometimes you can get the following error:
`Error during rewrite: Can't detect Cube query and it may be not supported yet.`

You can use the `CUBESQL_SQL_PUSH_DOWN` environment variable to instruct the SQL API
to execute such queries as [queries with pushdown][ref-queries-wpd].

<InfoBox>

Query pushdown in the SQL API is available in public preview.
[Read more](https://cube.dev/blog/query-push-down-in-cubes-semantic-layer) in the blog.

</InfoBox>

Query planning is a resource-intensive task, and sometimes you can get the following
error: `Error during rewrite: Can't find rewrite due to 10002 AST node limit reached.`
Use the following environment variables to allocate more resources for query planning:
`CUBESQL_REWRITE_MAX_NODES`, `CUBESQL_REWRITE_MAX_ITERATIONS`, `CUBESQL_REWRITE_TIMEOUT`.
The SQL API executes queries as [regular queries][ref-regular-queries], [queries with
post-processing][ref-queries-wpp], or [queries with pushdown][ref-queries-wpd].

### Streaming

@@ -254,6 +227,15 @@ to establish too many connections at once can lead to an out-of-memory crash.
You can use the `CUBEJS_MAX_SESSIONS` environment variable to adjust the session
limit.

## Troubleshooting

### `Can't find rewrite`

[Query planning](#query-planning) is a resource-intensive task, and sometimes you can get the following
error: `Error during rewrite: Can't find rewrite due to 10002 AST node limit reached.`
Use the following environment variables to allocate more resources for query planning:
`CUBESQL_REWRITE_MAX_NODES`, `CUBESQL_REWRITE_MAX_ITERATIONS`, `CUBESQL_REWRITE_TIMEOUT`.
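These are set as environment variables. The values below are illustrative placeholders, not recommended settings; appropriate numbers depend on your data model and queries:

```shell
# Illustrative values only, tune for your workload
CUBESQL_REWRITE_MAX_NODES=20000
CUBESQL_REWRITE_MAX_ITERATIONS=500
CUBESQL_REWRITE_TIMEOUT=30
```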


[link-postgres]: https://www.postgresql.org
[ref-dax-api]: /product/apis-integrations/dax-api
@@ -3,17 +3,9 @@
SQL API runs queries in the Postgres dialect that can
reference those tables and columns.

By default since 1.0, the SQL API executes [regular queries][ref-regular-queries],
[queries with post-processing][ref-queries-wpp] and [queries with
pushdown][ref-queries-wpd]. This page explains their format and details if they
are handled differently by the SQL API.

<InfoBox>

Query pushdown in the SQL API is available in public preview.
[Read more](https://cube.dev/blog/query-push-down-in-cubes-semantic-layer) in the blog.

</InfoBox>
The SQL API can execute [regular queries][ref-regular-queries], [queries with
post-processing][ref-queries-wpp], and [queries with pushdown][ref-queries-wpd]. This page
explains their format and notes where the SQL API handles them differently.

## Data model mapping

@@ -202,16 +194,6 @@ queries **not** querying cube tables.

### Query pushdown

<WarningBox heading={`🐣 Preview`}>

Query pushdown is currently in public preview, and the API and behavior may
change in future versions.

Query pushdown is enabled by default since 1.0 and is controlled by
`CUBESQL_SQL_PUSH_DOWN` environment variable.

</WarningBox>

<ReferenceBox>

Query pushdown currently has the following limitations:
@@ -933,14 +933,6 @@ The default [time zone][ref-time-zone] for queries.
You can set the time zone name in the [TZ Database Name][link-tzdb] format, e.g.,
`America/Los_Angeles`.

<WarningBox>

Increasing the maximum row limit may cause out-of-memory (OOM) crashes and make
Cube susceptible to denial-of-service (DoS) attacks if it's exposed to
untrusted environments.

</WarningBox>

## `CUBEJS_DEFAULT_API_SCOPES`

[API scopes][ref-rest-scopes] used to allow or disallow access to REST API
@@ -1233,6 +1225,15 @@ after it has finished writing the last response, before a socket will be destroyed
| ----------------------------------------- | ------------------------ | ------------------------ |
| A valid number or string representing one | NodeJS's version default | NodeJS's version default |

## `CUBEJS_MAX_REQUEST_SIZE`

The maximum allowed size for incoming requests. It applies to both the HTTP body
parser and the WebSocket message payload limit. Must be between `100kb` and `64mb`.

| Possible Values                                          | Default in Development | Default in Production |
| -------------------------------------------------------- | ---------------------- | --------------------- |
| A size string between `100kb` and `64mb` (e.g., `1mb`)   | `50mb`                 | `50mb`                |
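As a sketch of the documented constraint (not Cube's actual implementation; the function name is illustrative), the range check amounts to:

```typescript
// Check a byte count against the documented 100kb to 64mb window.
// Illustrative sketch, not the code Cube ships.
function isValidRequestSize(bytes: number): boolean {
  const MIN_BYTES = 100 * 1024;       // 100kb
  const MAX_BYTES = 64 * 1024 * 1024; // 64mb
  return bytes >= MIN_BYTES && bytes <= MAX_BYTES;
}

console.log(isValidRequestSize(50 * 1024 * 1024));  // true, the 50mb default
console.log(isValidRequestSize(100 * 1024 * 1024)); // false, above 64mb
```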

## `CUBEJS_SQL_USER`

A username required to access the [SQL API][ref-sql-api].
@@ -1292,14 +1293,7 @@ If `true`, enables query pushdown in the [SQL API][ref-sql-api].

| Possible Values | Default in Development | Default in Production |
| --------------- | ---------------------- | --------------------- |
| `true`, `false` | `true` | `true` |

<InfoBox>

Query pushdown in the SQL API is available in public preview.
[Read more](https://cube.dev/blog/query-push-down-in-cubes-semantic-layer) in the blog.

</InfoBox>
| `true`, `false` | `true` | `true` |

## `CUBESQL_STREAM_MODE`

2 changes: 1 addition & 1 deletion packages/cubejs-api-gateway/src/gateway.ts
@@ -325,7 +325,7 @@ class ApiGateway {
});
}));

const jsonParser = bodyParser.json({ limit: '1mb' });
const jsonParser = bodyParser.json({ limit: getEnv('maxRequestSize') });
app.post(`${this.basePath}/v1/load`, jsonParser, userMiddlewares, userAsyncHandler(async (req, res) => {
await this.load({
query: req.body.query,
43 changes: 43 additions & 0 deletions packages/cubejs-backend-shared/src/env.ts
@@ -33,6 +33,32 @@ export function convertTimeStrToSeconds(
throw new InvalidConfiguration(envName, input, description);
}

export function convertSizeToBytes(
input: string,
envName: string,
description: string = 'Must be a number in bytes or size string (1kb, 1mb, 1gb).',
): number {
if (/^\d+$/.test(input)) {
return parseInt(input, 10);
}

if (input.length > 2) {
switch (input.slice(-2).toLowerCase()) {
case 'kb':
return parseInt(input.slice(0, -2), 10) * 1024;
case 'mb':
return parseInt(input.slice(0, -2), 10) * 1024 * 1024;
case 'gb':
return parseInt(input.slice(0, -2), 10) * 1024 * 1024 * 1024;
default: {
throw new InvalidConfiguration(envName, input, description);
}
}
}

throw new InvalidConfiguration(envName, input, description);
}
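To see the conversion in isolation, here is a standalone copy of the helper above with `InvalidConfiguration` swapped for a plain `Error` (a simplification so the snippet runs on its own):

```typescript
// Standalone copy of convertSizeToBytes; InvalidConfiguration is replaced
// by a plain Error so the snippet has no package dependencies.
function convertSizeToBytes(input: string): number {
  if (/^\d+$/.test(input)) {
    return parseInt(input, 10); // bare number: already bytes
  }
  if (input.length > 2) {
    const unit = input.slice(-2).toLowerCase();
    const amount = parseInt(input.slice(0, -2), 10);
    switch (unit) {
      case 'kb': return amount * 1024;
      case 'mb': return amount * 1024 * 1024;
      case 'gb': return amount * 1024 * 1024 * 1024;
    }
  }
  throw new Error(`Invalid size: ${input}`);
}

console.log(convertSizeToBytes('10KB')); // 10240
console.log(convertSizeToBytes('50MB')); // 52428800
```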

export function asPortNumber(input: number, envName: string) {
if (input < 0) {
throw new InvalidConfiguration(envName, input, 'Should be a positive integer.');
@@ -148,6 +174,23 @@ const variables: Record<string, (...args: any) => any> = {
.asInt(),
serverKeepAliveTimeout: () => get('CUBEJS_SERVER_KEEP_ALIVE_TIMEOUT')
.asInt(),
maxRequestSize: () => {
const value = process.env.CUBEJS_MAX_REQUEST_SIZE || '50mb';
const bytes = convertSizeToBytes(value, 'CUBEJS_MAX_REQUEST_SIZE');

const minBytes = 100 * 1024; // 100kb
const maxBytes = 64 * 1024 * 1024; // 64mb

if (bytes < minBytes || bytes > maxBytes) {
throw new InvalidConfiguration(
'CUBEJS_MAX_REQUEST_SIZE',
value,
'Must be between 100kb and 64mb.'
);
}

return bytes;
},
rollupOnlyMode: () => get('CUBEJS_ROLLUP_ONLY')
.default('false')
.asBoolStrict(),
1 change: 1 addition & 0 deletions packages/cubejs-backend-shared/src/index.ts
@@ -3,6 +3,7 @@ export {
assertDataSource,
keyByDataSource,
isDockerImage,
convertSizeToBytes,
} from './env';
export * from './enums';
export * from './package';
50 changes: 49 additions & 1 deletion packages/cubejs-backend-shared/test/env.test.ts
@@ -1,4 +1,4 @@
import { getEnv, convertTimeStrToSeconds } from '../src/env';
import { getEnv, convertTimeStrToSeconds, convertSizeToBytes } from '../src/env';

test('convertTimeStrToMs', () => {
expect(convertTimeStrToSeconds('1', 'VARIABLE_ENV')).toBe(1);
@@ -16,6 +16,28 @@ test('convertTimeStrToMs(exception)', () => {
);
});

test('convertSizeToBytes', () => {
expect(convertSizeToBytes('1024', 'VARIABLE_ENV')).toBe(1024);
expect(convertSizeToBytes('1kb', 'VARIABLE_ENV')).toBe(1024);
expect(convertSizeToBytes('10KB', 'VARIABLE_ENV')).toBe(10 * 1024);
expect(convertSizeToBytes('1mb', 'VARIABLE_ENV')).toBe(1024 * 1024);
expect(convertSizeToBytes('50MB', 'VARIABLE_ENV')).toBe(50 * 1024 * 1024);
expect(convertSizeToBytes('1gb', 'VARIABLE_ENV')).toBe(1024 * 1024 * 1024);
expect(convertSizeToBytes('2GB', 'VARIABLE_ENV')).toBe(2 * 1024 * 1024 * 1024);
});

test('convertSizeToBytes(exception)', () => {
expect(() => convertSizeToBytes('', 'VARIABLE_ENV')).toThrowError(
`Value "" is not valid for VARIABLE_ENV. Must be a number in bytes or size string (1kb, 1mb, 1gb).`
);
expect(() => convertSizeToBytes('abc', 'VARIABLE_ENV')).toThrowError(
`Value "abc" is not valid for VARIABLE_ENV. Must be a number in bytes or size string (1kb, 1mb, 1gb).`
);
expect(() => convertSizeToBytes('1tb', 'VARIABLE_ENV')).toThrowError(
`Value "1tb" is not valid for VARIABLE_ENV. Must be a number in bytes or size string (1kb, 1mb, 1gb).`
);
});

describe('getEnv', () => {
test('port(exception)', () => {
process.env.PORT = '100000000';
@@ -94,4 +116,30 @@ describe('getEnv', () => {
process.env.CUBEJS_LIVE_PREVIEW = 'false';
expect(getEnv('livePreview')).toBe(false);
});

test('maxRequestSize', () => {
delete process.env.CUBEJS_MAX_REQUEST_SIZE;
expect(getEnv('maxRequestSize')).toBe(50 * 1024 * 1024); // default 50mb

process.env.CUBEJS_MAX_REQUEST_SIZE = '64mb';
expect(getEnv('maxRequestSize')).toBe(64 * 1024 * 1024);

process.env.CUBEJS_MAX_REQUEST_SIZE = '100kb';
expect(getEnv('maxRequestSize')).toBe(100 * 1024);

process.env.CUBEJS_MAX_REQUEST_SIZE = '512kb';
expect(getEnv('maxRequestSize')).toBe(512 * 1024);
});

test('maxRequestSize(exception)', () => {
process.env.CUBEJS_MAX_REQUEST_SIZE = '50kb';
expect(() => getEnv('maxRequestSize')).toThrowError(
'Value "50kb" is not valid for CUBEJS_MAX_REQUEST_SIZE. Must be between 100kb and 64mb.'
);

process.env.CUBEJS_MAX_REQUEST_SIZE = '100mb';
expect(() => getEnv('maxRequestSize')).toThrowError(
'Value "100mb" is not valid for CUBEJS_MAX_REQUEST_SIZE. Must be between 100kb and 64mb.'
);
});
});
2 changes: 1 addition & 1 deletion packages/cubejs-server/src/server.ts
@@ -93,7 +93,7 @@ export class CubejsServer {

const app = express();
app.use(cors(this.config.http.cors));
app.use(bodyParser.json({ limit: '50mb' }));
app.use(bodyParser.json({ limit: getEnv('maxRequestSize') }));

if (this.config.gracefulShutdown) {
app.use(gracefulMiddleware(this.status, this.config.gracefulShutdown));
3 changes: 2 additions & 1 deletion packages/cubejs-server/src/websocket-server.ts
@@ -1,7 +1,7 @@
import WebSocket from 'ws';
import crypto from 'crypto';
import util from 'util';
import { CancelableInterval, createCancelableInterval } from '@cubejs-backend/shared';
import { CancelableInterval, createCancelableInterval, getEnv } from '@cubejs-backend/shared';

import type { CubejsServerCore } from '@cubejs-backend/server-core';
import type http from 'http';
@@ -29,6 +29,7 @@ export class WebSocketServer {
this.wsServer = new WebSocket.Server({
server,
path: this.options.webSocketsBasePath,
maxPayload: getEnv('maxRequestSize'),
});

const connectionIdToSocket: Record<string, any> = {};
2 changes: 0 additions & 2 deletions packages/cubejs-snowflake-driver/src/SnowflakeDriver.ts
@@ -953,8 +953,6 @@

if (scale === 0) {
type.type = 'int';
} else if (precision && scale && scale <= 10) {
type.type = 'decimal';
} else {
type.type = this.toGenericType(column.getType(), precision, scale);
}