
Commit 7b6bee9

fix(api): typo in docs
1 parent 182e85a commit 7b6bee9


1 file changed: +30 -10 lines


openapi.yaml

@@ -873,11 +873,12 @@ paths:
               title: Enrichment input too large error
               status: 400
               detail: The input to the enrichment model exceeds its maximum
-                local context window. We will soon be adding a `chunk` option
-                to the `overflow_strategy` parameter allowing a near-infinite
-                global context window through intelligent chunking and stitching.
-                In the meantime, you can use the `drop_end` overflow strategy
-                to drop excess tokens from the end of the input.
+                context window yet no overflow strategy was set. We recommend
+                setting `overflow_strategy` to `auto` or `chunk`, both of which
+                will break documents up into smaller chunks that fit within
+                the model's context window and then intelligently merge the
+                results into a single prediction at the cost of a minor accuracy
+                drop.
               instance: null
           '401':
             description: The API key you provided does not exist, is expired or revoked,
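The rewritten 400 message steers callers toward setting `overflow_strategy` explicitly. As a minimal sketch of the fix it suggests (the endpoint URL, model name, and `text` field below are assumed placeholders, not taken from this diff; only `overflow_strategy` and its `auto`/`chunk`/`drop_end`/null values come from the spec), an oversized request can opt into chunking like so:

```python
import json
import urllib.request

# Placeholder endpoint and model name -- consult the real Isaacus API
# reference for the actual values.
ENDPOINT = "https://api.example.com/v1/enrichments"

payload = {
    "model": "example-enrichment-model",  # assumed placeholder
    "text": "a very long legal document ... " * 10_000,
    # Without this field the API raises the 400 error described above;
    # `auto` (the recommended setting) currently chunks and merges.
    "overflow_strategy": "auto",
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(request) would send it; not executed here.
```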
@@ -978,6 +979,16 @@ paths:
               status: 500
               detail: An unexpected error occurred while processing the request.
               instance: null
+            Chunking timeout error:
+              summary: Chunking timeout error
+              value:
+                type: https://docs.isaacus.com/api-reference/errors#500-internal-server-error
+                title: Chunking timeout error
+                status: 500
+                detail: Chunking timed out. Did you try to chunk a very large
+                  text with a very low chunk size or very little variation in
+                  levels of whitespace?
+                instance: null
       deprecated: false
 components:
   schemas:
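The new timeout message's hint about "variation in levels of whitespace" suggests a splitter that falls back through progressively finer whitespace separators. A rough, illustrative reimplementation (an assumption about the general approach, not the service's actual algorithm) shows why a tiny chunk size or whitespace-free text forces many fallback passes before a hard character slice:

```python
def chunk(text, max_len, separators=("\n\n", "\n", " ", "")):
    """Split `text` into pieces of at most `max_len` characters by trying
    progressively finer whitespace separators. Illustrative only."""
    if len(text) <= max_len:
        return [text]
    sep, *rest = separators
    if sep == "":
        # No whitespace left to split on: fall back to hard slicing.
        return [text[i:i + max_len] for i in range(0, len(text), max_len)]
    chunks, current = [], ""
    for piece in text.split(sep):
        if len(piece) > max_len:
            # The piece is itself too big: flush the buffer and recurse
            # at the next, finer separator level.
            if current:
                chunks.append(current)
                current = ""
            chunks.extend(chunk(piece, max_len, (*rest,)))
        elif not current:
            current = piece
        elif len(current) + len(sep) + len(piece) <= max_len:
            current += sep + piece
        else:
            chunks.append(current)
            current = piece
    if current:
        chunks.append(current)
    return chunks
```

Text with no whitespace at all (e.g. `"x" * 10` with `max_len=4`) walks every separator level before slicing, which hints at how pathological inputs could blow past a time budget.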
@@ -1648,14 +1659,23 @@ components:
           enum:
             - auto
             - drop_end
+            - chunk
             - null
           description: 'The strategy for handling content exceeding the model''s maximum
             input length.
 
 
-            `auto` currently behaves the same as `drop_end`, dropping excess tokens
-            from the end of input. In the future, `auto` may implement more sophisticated
-            strategies such as chunking and context-aware stitching.
+            `auto`, which is the recommended setting, currently behaves the same as
+            `chunk`, which intelligently breaks the input up into smaller chunks and
+            then stitches the results back together into a single prediction. In the
+            future `auto` may implement even more sophisticated strategies for handling
+            long contexts such as leveraging chunk overlap and/or a specialized stitching
+            model.
+
+
+            `chunk` breaks the input up into smaller chunks that fit within the model''s
+            context window and then intelligently merges the results into a single
+            prediction at the cost of a minor accuracy drop.
 
 
             `drop_end` drops tokens from the end of input exceeding the model''s maximum
@@ -1665,7 +1685,7 @@ components:
             `null`, which is the default setting, raises an error if the input exceeds
             the model''s maximum input length.'
           examples:
-            - null
+            - auto
       type: object
       required:
         - model
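The updated description says `chunk` merges per-chunk results "into a single prediction at the cost of a minor accuracy drop" without saying how. One plausible merge rule, purely for illustration (length-weighted averaging of per-chunk scores; the actual stitching step is not documented in this diff), is:

```python
def merge_predictions(chunks: list[str], scores: list[float]) -> float:
    """Combine per-chunk scores into one prediction, weighting each
    chunk by its length so long chunks dominate short remainders.
    Illustrative only: the API's real merge strategy is unspecified here."""
    if len(chunks) != len(scores):
        raise ValueError("one score per chunk required")
    total = sum(len(c) for c in chunks)
    return sum(len(c) * s for c, s in zip(chunks, scores)) / total
```

For example, `merge_predictions(["aaaa", "aa"], [1.0, 0.4])` gives `0.8`: the short trailing chunk pulls the prediction down only in proportion to its length, which is one way the "minor accuracy drop" could stay minor.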
@@ -1685,7 +1705,7 @@ components:
           to the plaintiff, Ms. Moody, given the definition of an "employee" at §203(e)(4)
           of the Labor Title does not include volunteers, and, regardless, she lives
           in Austria."'
-          overflow_strategy: null
+          overflow_strategy: auto
     EnrichmentResponse:
       properties:
         results:
