From 10c4eadbd42bf1c029c6dbf80484f85644796e6f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Micha=C5=82=20Olender?= <92638966+TC-MO@users.noreply.github.com> Date: Tue, 27 Jan 2026 03:20:54 +0100 Subject: [PATCH 1/3] docs: fix links fix dead link to milvus fix redirects --- .../development/quick-start/build_with_ai.md | 2 +- .../actors/publishing/monetize/index.mdx | 6 +++--- .../console/two-factor-authentication.md | 2 +- sources/platform/integrations/ai/agno.md | 2 +- sources/platform/integrations/ai/aws_bedrock.md | 2 +- sources/platform/integrations/ai/langchain.md | 16 ++++++++-------- sources/platform/integrations/ai/langflow.md | 4 ++-- sources/platform/integrations/ai/llama.md | 4 ++-- sources/platform/integrations/ai/mastra.md | 2 +- sources/platform/integrations/ai/mcp.md | 2 +- .../platform/integrations/ai/openai_agents.md | 2 +- sources/platform/integrations/ai/skyfire.md | 4 ++-- .../integrations/data-storage/airbyte.md | 2 +- .../workflows-and-notifications/kestra.md | 2 +- .../workflows-and-notifications/n8n/index.md | 2 +- .../workflows-and-notifications/windmill.md | 6 +++--- 16 files changed, 30 insertions(+), 30 deletions(-) diff --git a/sources/platform/actors/development/quick-start/build_with_ai.md b/sources/platform/actors/development/quick-start/build_with_ai.md index 754e440483..43da362911 100644 --- a/sources/platform/actors/development/quick-start/build_with_ai.md +++ b/sources/platform/actors/development/quick-start/build_with_ai.md @@ -21,7 +21,7 @@ This guide provides best practices for building new Actors or improving existing ## AI coding assistant instructions -Use the following prompt in your AI coding assistant such as [Cursor](https://www.cursor.com/), [Claude Code](https://www.claude.com/product/claude-code) or [GitHub Copilot](https://github.com/features/copilot): +Use the following prompt in your AI coding assistant such as [Cursor](https://cursor.com/), [Claude Code](https://claude.com/product/claude-code) or [GitHub Copilot](https://github.com/features/copilot): diff --git a/sources/platform/actors/publishing/monetize/index.mdx b/sources/platform/actors/publishing/monetize/index.mdx index 89ccfa12ad..b9dc90c4da 100644 --- a/sources/platform/actors/publishing/monetize/index.mdx +++ b/sources/platform/actors/publishing/monetize/index.mdx @@ -77,14 +77,14 @@ All other changes (such as decreasing prices, adjusting descriptions, or removin :::important Frequency of major monetization adjustments -You can make major monetization changes to each Actor only **once per month**. After making a major change, you must wait until it takes effect (14 days) plus an additional period before making another major change. For further information & guidelines, please refer to our [Terms & Conditions](https://apify.com/store-terms-and-conditions) +You can make major monetization changes to each Actor only **once per month**. After making a major change, you must wait until it takes effect (14 days) plus an additional period before making another major change. For further information & guidelines, please refer to our [Terms & Conditions](/legal/store-publishing-terms-and-conditions) ::: ## Monthly payouts and analytics Payout invoices are automatically generated on the 11th of each month, summarizing the profits from all your Actors for the previous month. -In accordance with our [Terms & Conditions](https://apify.com/store-terms-and-conditions), only funds from legitimate users who have already paid are included in the payout invoice. 
+In accordance with our [Terms & Conditions](/legal/store-publishing-terms-and-conditions), only funds from legitimate users who have already paid are included in the payout invoice.
 
 :::note How negative profits are handled
 
@@ -98,7 +98,7 @@ If no action is taken, the payout will be automatically approved on the 14th, wi
 - $20 for PayPal
 - $100 for other payout methods
 
-If the monthly profit does not meet these thresholds, as per our [Terms & Conditions](https://apify.com/store-terms-and-conditions), the funds will roll over to the next month until the threshold is reached.
+If the monthly profit does not meet these thresholds, as per our [Terms & Conditions](/legal/store-publishing-terms-and-conditions), the funds will roll over to the next month until the threshold is reached.
 
 ## Handle free users
 
diff --git a/sources/platform/console/two-factor-authentication.md b/sources/platform/console/two-factor-authentication.md
index b92c28887a..0061735f77 100644
--- a/sources/platform/console/two-factor-authentication.md
+++ b/sources/platform/console/two-factor-authentication.md
@@ -26,7 +26,7 @@ If it's not enabled, click on the **Enable** button. You should see the two-fact
 
 ![Apify Console setup two-factor authentication - app](./images/console-two-factor-app-setup.png)
 
-In this view, you can use your favorite authenticator app to scan the QR code. We recommend using Google Authenticator ([Google Play Store](https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2&hl=en_US)/[Apple App Store](https://apps.apple.com/us/app/google-authenticator/id388497605)) or [Authy](https://authy.com/)([Google Play Store](https://play.google.com/store/apps/details?id=com.authy.authy)/[Apple App Store](https://apps.apple.com/us/app/twilio-authy/id494168017) but any other authenticator app should work as well.
+In this view, you can use your favorite authenticator app to scan the QR code. We recommend using Google Authenticator ([Google Play Store](https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2&hl=en_US)/[Apple App Store](https://apps.apple.com/us/app/google-authenticator/id388497605)) or [Authy](https://www.authy.com/) ([Google Play Store](https://play.google.com/store/apps/details?id=com.authy.authy)/[Apple App Store](https://apps.apple.com/us/app/twilio-authy/id494168017)), but any other authenticator app should work as well.
 
 You can also set up your app/browser extension manually without the QR code. To do that, click on the **Setup key** link below the QR code. This view with the key will pop up:
 
diff --git a/sources/platform/integrations/ai/agno.md b/sources/platform/integrations/ai/agno.md
index 786efe190b..b0bc22d5d1 100644
--- a/sources/platform/integrations/ai/agno.md
+++ b/sources/platform/integrations/ai/agno.md
@@ -140,6 +140,6 @@ Agno supports any Apify Actor via the ApifyTools class.
You can specify a single - [How to build an AI Agent](https://blog.apify.com/how-to-build-an-ai-agent/) - [Agno Framework Documentation](https://docs.agno.com) - [Apify Platform Documentation](https://docs.apify.com) -- [Apify Actor Documentation](https://docs.apify.com/actors) +- [Apify Actor Documentation](/platform/actors) - [Apify Store - Browse available Actors](https://apify.com/store) - [Agno Apify Toolkit Documentation](https://docs.agno.com/tools/toolkits/others/apify#apify) diff --git a/sources/platform/integrations/ai/aws_bedrock.md b/sources/platform/integrations/ai/aws_bedrock.md index 0b22a86f22..04d3cf9ccb 100644 --- a/sources/platform/integrations/ai/aws_bedrock.md +++ b/sources/platform/integrations/ai/aws_bedrock.md @@ -26,7 +26,7 @@ Before getting started, ensure you have: - An active AWS Account. - An Apify account and an [API token](https://docs.apify.com/platform/integrations/api#api-token). -- Granted access to any Large Language Model from Amazon Bedrock. To add access to a LLM, follow this [guide](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access-modify.html). We'll use **Anthropic Claude 3.5 Sonnet** in this example. +- Granted access to any Large Language Model from Amazon Bedrock. To add access to a LLM, follow this [guide](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html). We'll use **Anthropic Claude 3.5 Sonnet** in this example. The overall process for creating an agent includes the following [steps](https://docs.aws.amazon.com/bedrock/latest/userguide/agents.html): diff --git a/sources/platform/integrations/ai/langchain.md b/sources/platform/integrations/ai/langchain.md index b9f722a4b9..58cfe9875f 100644 --- a/sources/platform/integrations/ai/langchain.md +++ b/sources/platform/integrations/ai/langchain.md @@ -10,13 +10,13 @@ slug: /integrations/langchain --- -> For more information on LangChain visit its [documentation](https://python.langchain.com/docs/). +> For more information on LangChain visit its [documentation](https://docs.langchain.com/oss/python/langchain/overview). In this example, we'll use the [Website Content Crawler](https://apify.com/apify/website-content-crawler) Actor, which can deeply crawl websites such as documentation, knowledge bases, help centers, or blogs and extract text content from the web pages. Then we feed the documents into a vector index and answer questions from it. This example demonstrates how to integrate Apify with LangChain using the Python language. -If you prefer to use JavaScript, you can follow the [JavaScript LangChain documentation](https://js.langchain.com/docs/integrations/document_loaders/web_loaders/apify_dataset/). +If you prefer to use JavaScript, you can follow the [JavaScript LangChain documentation](https://docs.langchain.com/oss/javascript/integrations/document_loaders/web_loaders/apify_dataset). 
Before we start with the integration, we need to install all dependencies: @@ -54,7 +54,7 @@ llm = ChatOpenAI(model="gpt-4o-mini") loader = apify.call_actor( actor_id="apify/website-content-crawler", - run_input={"startUrls": [{"url": "https://python.langchain.com/docs/get_started/introduction"}], "maxCrawlPages": 10, "crawlerType": "cheerio"}, + run_input={"startUrls": [{"url": "https://docs.langchain.com/oss/python/langchain/overviewget_started/introduction"}], "maxCrawlPages": 10, "crawlerType": "cheerio"}, dataset_mapping_function=lambda item: Document( page_content=item["text"] or "", metadata={"source": item["url"]} ), @@ -107,7 +107,7 @@ llm = ChatOpenAI(model="gpt-4o-mini") print("Call website content crawler ...") loader = apify.call_actor( actor_id="apify/website-content-crawler", - run_input={"startUrls": [{"url": "https://python.langchain.com/docs/get_started/introduction"}], "maxCrawlPages": 10, "crawlerType": "cheerio"}, + run_input={"startUrls": [{"url": "https://docs.langchain.com/oss/python/langchain/overviewget_started/introduction"}], "maxCrawlPages": 10, "crawlerType": "cheerio"}, dataset_mapping_function=lambda item: Document(page_content=item["text"] or "", metadata={"source": item["url"]}), ) print("Compute embeddings...") @@ -130,7 +130,7 @@ After running the code, you should see the following output: answer: LangChain is a framework designed for developing applications powered by large language models (LLMs). It simplifies the entire application lifecycle, from development to productionization and deployment. LangChain provides open-source components and integrates with various third-party tools, making it easier to build and optimize applications using language models. -source: https://python.langchain.com/docs/get_started/introduction +source: https://docs.langchain.com/oss/python/langchain/overviewget_started/introduction ``` LangChain is a standard interface through which you can interact with a variety of large language models (LLMs). @@ -154,6 +154,6 @@ Similarly, you can use other Apify Actors to load data into LangChain and query ## Resources -- [LangChain introduction](https://python.langchain.com/docs/get_started/introduction) -- [Apify Dataset loader](https://python.langchain.com/docs/integrations/document_loaders/apify_dataset) -- [LangChain Apify Provider](https://python.langchain.com/docs/integrations/providers/apify) +- [LangChain introduction](https://docs.langchain.com/oss/python/langchain/overviewget_started/introduction) +- [Apify Dataset loader](https://docs.langchain.com/oss/python/langchain/overviewintegrations/document_loaders/apify_dataset) +- [LangChain Apify Provider](https://docs.langchain.com/oss/python/langchain/overviewintegrations/providers/apify) diff --git a/sources/platform/integrations/ai/langflow.md b/sources/platform/integrations/ai/langflow.md index e50fa72b7f..d0f0852269 100644 --- a/sources/platform/integrations/ai/langflow.md +++ b/sources/platform/integrations/ai/langflow.md @@ -12,7 +12,7 @@ slug: /integrations/langflow ## What is Langflow -[Langflow](https://langflow.org/) is a low-code, visual tool that enables developers to build powerful AI agents and workflows that can use any API, models, or databases. +[Langflow](https://www.langflow.org/) is a low-code, visual tool that enables developers to build powerful AI agents and workflows that can use any API, models, or databases. 
:::note Explore Langflow @@ -37,7 +37,7 @@ This guide will demonstrate two different ways to use Apify Actors with Langflow :::note Cloud vs local setup -Langflow can either be installed locally or used in the cloud. The cloud version is available on the [Langflow](http://langflow.org/) website. If you are using the cloud version, you can skip the installation step, and go straight to [Creating a new flow](#creating-a-new-flow) +Langflow can either be installed locally or used in the cloud. The cloud version is available on the [Langflow](https://www.langflow.org/) website. If you are using the cloud version, you can skip the installation step, and go straight to [Creating a new flow](#creating-a-new-flow) ::: diff --git a/sources/platform/integrations/ai/llama.md b/sources/platform/integrations/ai/llama.md index c83d86b656..785de5426e 100644 --- a/sources/platform/integrations/ai/llama.md +++ b/sources/platform/integrations/ai/llama.md @@ -10,7 +10,7 @@ slug: /integrations/llama-index --- -> For more information on LlamaIndex, visit its [documentation](https://docs.llamaindex.ai/en/stable/). +> For more information on LlamaIndex, visit its [documentation](https://developers.llamaindex.ai/python/framework/). ## What is LlamaIndex? @@ -76,4 +76,4 @@ documents = reader.load_data( ## Resources * [Apify loaders](https://llamahub.ai/l/readers/llama-index-readers-apify) -* [LlamaIndex documentation](https://docs.llamaindex.ai/en/stable/) +* [LlamaIndex documentation](https://developers.llamaindex.ai/python/framework/) diff --git a/sources/platform/integrations/ai/mastra.md b/sources/platform/integrations/ai/mastra.md index 1de306904d..7c666d73d2 100644 --- a/sources/platform/integrations/ai/mastra.md +++ b/sources/platform/integrations/ai/mastra.md @@ -31,7 +31,7 @@ This guide demonstrates how to integrate Apify Actors with Mastra by building an ### Prerequisites - _Apify API token_: To use Apify Actors, you need an Apify API token. Learn how to obtain it in the [Apify documentation](https://docs.apify.com/platform/integrations/api). -- _LLM provider API key_: To power the agents, you need an LLM provider API key. For example, get one from the [OpenAI](https://platform.openai.com/account/api-keys) or [Anthropic](https://console.anthropic.com/settings/keys). +- _LLM provider API key_: To power the agents, you need an LLM provider API key. For example, get one from the [OpenAI](https://platform.openai.com/account/api-keys) or [Anthropic](https://platform.claude.com/settings/keys). - _Node.js_: Ensure you have Node.js installed. - _Packages_: Install the following packages: diff --git a/sources/platform/integrations/ai/mcp.md b/sources/platform/integrations/ai/mcp.md index 1ec47611b3..bbf32665d0 100644 --- a/sources/platform/integrations/ai/mcp.md +++ b/sources/platform/integrations/ai/mcp.md @@ -13,7 +13,7 @@ import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; The Apify's MCP server ([mcp.apify.com](https://mcp.apify.com)) allows AI applications and agents to interact with the Apify platform -using [Model Context Protocol](https://modelcontextprotocol.io/). The server enables AI agents to +using [Model Context Protocol](https://modelcontextprotocol.io/docs/getting-started/intro). The server enables AI agents to discover and run Actors from [Apify Store](https://apify.com/store), access storages and results, and enabled AI coding assistants to access Apify documentation and tutorials. 
diff --git a/sources/platform/integrations/ai/openai_agents.md b/sources/platform/integrations/ai/openai_agents.md index 16a975eaea..1ec6ce8f26 100644 --- a/sources/platform/integrations/ai/openai_agents.md +++ b/sources/platform/integrations/ai/openai_agents.md @@ -283,4 +283,4 @@ For a comprehensive example with error handling and reporting, refer to the [Ope - [OpenAI Agent MCP Tester GitHub repository](https://github.com/apify/openai-agent-mcp-tester) - Source code for the MCP tester Actor - [Apify MCP server](https://mcp.apify.com) - Interactive configuration tool for the Apify MCP server - [Apify MCP documentation](/platform/integrations/mcp) - Complete guide to using the Apify MCP server -- [Model Context Protocol specification](https://modelcontextprotocol.io/) - Learn about the MCP specification +- [Model Context Protocol specification](https://modelcontextprotocol.io/docs/getting-started/intro) - Learn about the MCP specification diff --git a/sources/platform/integrations/ai/skyfire.md b/sources/platform/integrations/ai/skyfire.md index 52ce106da1..20e1f4a38c 100644 --- a/sources/platform/integrations/ai/skyfire.md +++ b/sources/platform/integrations/ai/skyfire.md @@ -33,7 +33,7 @@ The [Apify MCP server](https://docs.apify.com/platform/integrations/mcp) provide Before using agentic payments through MCP, you need: -1. _A Skyfire account_ with a funded wallet - [sign up at Skyfire](https://app.skyfire.xyz/) +1. _A Skyfire account_ with a funded wallet - [sign up at Skyfire](https://app.skyfire.xyz/auth) 1. _An MCP client_ that supports multiple server connections, such as [OpenCode](https://opencode.ai/), [Claude Desktop](https://claude.com/download) with MCP support, or other compatible clients 1. _Both MCP servers configured_: Skyfire's MCP server and Apify's MCP server @@ -103,7 +103,7 @@ If you're using [Claude Desktop](https://claude.com/download), add this configur -Replace `YOUR_SKYFIRE_API_KEY` with Skyfire buyer API key, which you can obtain from your [Skyfire dashboard](https://app.skyfire.xyz/). +Replace `YOUR_SKYFIRE_API_KEY` with Skyfire buyer API key, which you can obtain from your [Skyfire dashboard](https://app.skyfire.xyz/auth). ### How it works diff --git a/sources/platform/integrations/data-storage/airbyte.md b/sources/platform/integrations/data-storage/airbyte.md index 8ba62f75e0..003de09864 100644 --- a/sources/platform/integrations/data-storage/airbyte.md +++ b/sources/platform/integrations/data-storage/airbyte.md @@ -39,4 +39,4 @@ To find your Apify API token, you need to navigate to the **Settings** tab and s And that's it! You now have Apify datasets set up as a Source, and you can use Airbyte to transfer your datasets to one of the available destinations. 
-To learn more about how to setup a Connection, visit [Airbyte's documentation](https://docs.airbyte.com/using-airbyte/getting-started/set-up-a-connection) +To learn more about how to setup a Connection, visit [Airbyte's documentation](https://docs.airbyte.com/platform/using-airbyte/getting-started/set-up-a-connection) diff --git a/sources/platform/integrations/workflows-and-notifications/kestra.md b/sources/platform/integrations/workflows-and-notifications/kestra.md index b2f1666cbc..9a3e8cbe32 100644 --- a/sources/platform/integrations/workflows-and-notifications/kestra.md +++ b/sources/platform/integrations/workflows-and-notifications/kestra.md @@ -19,7 +19,7 @@ This guide shows you how to set up the integration, configure authentication, an Before you begin, make sure you have: - An [Apify account](https://console.apify.com/) -- A [Kestra instance](https://kestra.io/docs/getting-started/quickstart) (self‑hosted or cloud) +- A [Kestra instance](https://kestra.io/docs/quickstart) (self‑hosted or cloud) ## Authentication diff --git a/sources/platform/integrations/workflows-and-notifications/n8n/index.md b/sources/platform/integrations/workflows-and-notifications/n8n/index.md index a32fdbec4b..eecb6a3641 100644 --- a/sources/platform/integrations/workflows-and-notifications/n8n/index.md +++ b/sources/platform/integrations/workflows-and-notifications/n8n/index.md @@ -201,7 +201,7 @@ Automatically start an n8n workflow when an Actor or task run finishes: ## Resources -- [n8n Community Nodes Documentation](https://docs.n8n.io/integrations/community-nodes/) +- [n8n Community Nodes Documentation](https://docs.n8n.io/integrations/) - [Apify API Documentation](https://docs.apify.com) - [n8n Documentation](https://docs.n8n.io) diff --git a/sources/platform/integrations/workflows-and-notifications/windmill.md b/sources/platform/integrations/workflows-and-notifications/windmill.md index 517deef64b..188c3c697d 100644 --- a/sources/platform/integrations/workflows-and-notifications/windmill.md +++ b/sources/platform/integrations/workflows-and-notifications/windmill.md @@ -227,11 +227,11 @@ The Apify integration provides several operations you can use in your Windmill w ## Resources -- [Windmill Documentation](https://www.windmill.dev/docs) +- [Windmill Documentation](https://www.windmill.dev/docs/) - [Windmill Local Development](https://www.windmill.dev/docs/advanced/local_development) - [Apify API Documentation](https://docs.apify.com) -- [Apify Webhooks](https://docs.apify.com/webhooks) -- [Apify Actors & Tasks](https://docs.apify.com/actors) +- [Apify Webhooks](/platform/integrations/webhooks) +- [Apify Actors & Tasks](/platform/actors) ## Troubleshooting From 1739f0ef9632f6be85baea0b36ab5061b8fbfb6d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Micha=C5=82=20Olender?= <92638966+TC-MO@users.noreply.github.com> Date: Wed, 28 Jan 2026 11:48:55 +0100 Subject: [PATCH 2/3] fix vale issues --- .../workflows-and-notifications/kestra.md | 28 +++++++++---------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/sources/platform/integrations/workflows-and-notifications/kestra.md b/sources/platform/integrations/workflows-and-notifications/kestra.md index 9a3e8cbe32..dec808cd5d 100644 --- a/sources/platform/integrations/workflows-and-notifications/kestra.md +++ b/sources/platform/integrations/workflows-and-notifications/kestra.md @@ -27,25 +27,25 @@ The Apify plugin uses API key authentication. 
Store your API key in [Kestra Secr
 
 To add your Apify API token, go to the Secrets section in the Kestra UI and create a new secret with the key `APIFY_API_KEY` and your token as the value.
 
-## Use Apify Tasks as an action
+## Use Apify tasks as an action
 
 Tasks allow you to perform operations like running an Actor within a workflow.
 
 1. Create a new flow.
-1. Inside the **Flow code** tab change the hello task's type to be **io.kestra.plugin.apify.actor.Run**.
-1. Change the task's id to be **run_apify_actor**
+1. Inside the **Flow code** tab, change the hello task's type to be `io.kestra.plugin.apify.actor.Run`.
+1. Change the task's id to be `run_apify_actor`.
 1. Remove the message property.
-1. Configure the **run_apify_actor** task by adding your required values for the properties listed below:
-    - **actorId**: Actor ID or a tilde-separated owner's username and Actor name.
-    - **apiToken**: A reference to the secret value you set up earlier. For example "\{\{secret(namespace=flow.namespace, key='APIFY_API_KEY')\}\}"
-1. Add a new task below the **run_apify_actor** with an ID of **get_dataset** and a type of **io.kestra.plugin.apify.dataset.Get**.:
-1. Configure the **get_dataset** to fetch the dataset generated by the **run_apify_actor** task by configuring the following values:
-    - **datasetId**: The ID of the dataset to fetch. You can use the value from the previous task using the following syntax: "\{\{secret(namespace=flow.namespace, key='APIFY_API_KEY')\}\}"
-    - **input**: Input for the Actor run. The input is optional and can be used to pass data to the Actor. For our example we will add 'hashtags: ["fyp"]'
-    - **maxItems**: The maximum number of items to fetch from the dataset. For our example we will set this to 5.
-1. Now add the final task to log the output of the dataset. Add a new task below the **log_output** with an ID of **log_output** and a type of **io.kestra.plugin.core.log.Log**.
-1. Configure the **log_output** task to log the output of the dataset by configuring the following values:
-    - **message**: The message to log. You can use the value from the previous task using the following syntax: '\{\{outputs.get_dataset.dataset\}\}'
+1. Configure the `run_apify_actor` task by adding your required values for the properties listed below:
+    - `actorId`: Actor ID or a tilde-separated owner's username and Actor name.
+    - `apiToken`: A reference to the secret value you set up earlier. For example, "\{\{secret(namespace=flow.namespace, key='APIFY_API_KEY')\}\}"
+1. Add a new task below the `run_apify_actor` task with an ID of `get_dataset` and a type of `io.kestra.plugin.apify.dataset.Get`.
+1. Configure the `get_dataset` task to fetch the dataset generated by the `run_apify_actor` task by configuring the following values:
+    - `datasetId`: The ID of the dataset to fetch. You can use the value from the previous task using the following syntax: "\{\{secret(namespace=flow.namespace, key='APIFY_API_KEY')\}\}"
+    - `input`: Input for the Actor run. The input is optional and can be used to pass data to the Actor. For our example, we will add 'hashtags: ["fyp"]'
+    - `maxItems`: The maximum number of items to fetch from the dataset. For our example, we will set this to 5.
+1. Now add the final task to log the output of the dataset. Add a new task below the `get_dataset` task with an ID of `log_output` and a type of `io.kestra.plugin.core.log.Log`.
+1. Configure the `log_output` task to log the output of the dataset by configuring the following values:
+    - `message`: The message to log.
You can use the value from the previous task using the following syntax: '\{\{outputs.get_dataset.dataset\}\}' 1. Now save and run your flow. Your completed template should match the template below. From 0a38e957101518b67f3048575ed760cac8330165 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Micha=C5=82=20Olender?= <92638966+TC-MO@users.noreply.github.com> Date: Thu, 29 Jan 2026 00:05:48 +0100 Subject: [PATCH 3/3] fix malformed links --- sources/platform/integrations/ai/langchain.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/sources/platform/integrations/ai/langchain.md b/sources/platform/integrations/ai/langchain.md index 58cfe9875f..106f9ee2ab 100644 --- a/sources/platform/integrations/ai/langchain.md +++ b/sources/platform/integrations/ai/langchain.md @@ -54,7 +54,7 @@ llm = ChatOpenAI(model="gpt-4o-mini") loader = apify.call_actor( actor_id="apify/website-content-crawler", - run_input={"startUrls": [{"url": "https://docs.langchain.com/oss/python/langchain/overviewget_started/introduction"}], "maxCrawlPages": 10, "crawlerType": "cheerio"}, + run_input={"startUrls": [{"url": "https://docs.langchain.com/oss/python/langchain/quickstart"}], "maxCrawlPages": 10, "crawlerType": "cheerio"}, dataset_mapping_function=lambda item: Document( page_content=item["text"] or "", metadata={"source": item["url"]} ), @@ -107,7 +107,7 @@ llm = ChatOpenAI(model="gpt-4o-mini") print("Call website content crawler ...") loader = apify.call_actor( actor_id="apify/website-content-crawler", - run_input={"startUrls": [{"url": "https://docs.langchain.com/oss/python/langchain/overviewget_started/introduction"}], "maxCrawlPages": 10, "crawlerType": "cheerio"}, + run_input={"startUrls": [{"url": "https://docs.langchain.com/oss/python/langchain/quickstart"}], "maxCrawlPages": 10, "crawlerType": "cheerio"}, dataset_mapping_function=lambda item: Document(page_content=item["text"] or "", metadata={"source": item["url"]}), ) print("Compute embeddings...") @@ -130,7 +130,7 @@ After running the code, you should see the following output: answer: LangChain is a framework designed for developing applications powered by large language models (LLMs). It simplifies the entire application lifecycle, from development to productionization and deployment. LangChain provides open-source components and integrates with various third-party tools, making it easier to build and optimize applications using language models. -source: https://docs.langchain.com/oss/python/langchain/overviewget_started/introduction +source: https://docs.langchain.com/oss/python/langchain/quickstart ``` LangChain is a standard interface through which you can interact with a variety of large language models (LLMs). @@ -154,6 +154,6 @@ Similarly, you can use other Apify Actors to load data into LangChain and query ## Resources -- [LangChain introduction](https://docs.langchain.com/oss/python/langchain/overviewget_started/introduction) -- [Apify Dataset loader](https://docs.langchain.com/oss/python/langchain/overviewintegrations/document_loaders/apify_dataset) -- [LangChain Apify Provider](https://docs.langchain.com/oss/python/langchain/overviewintegrations/providers/apify) +- [LangChain quickstart](https://docs.langchain.com/oss/python/langchain/quickstart) +- [Apify Dataset loader](https://docs.langchain.com/oss/python/integrations/document_loaders/apify_dataset) +- [LangChain Apify Provider](https://docs.langchain.com/oss/python/integrations/providers/apify)