@@ -21,7 +21,7 @@ This guide provides best practices for building new Actors or improving existing

## AI coding assistant instructions

-Use the following prompt in your AI coding assistant such as [Cursor](https://www.cursor.com/), [Claude Code](https://www.claude.com/product/claude-code) or [GitHub Copilot](https://github.com/features/copilot):
+Use the following prompt in your AI coding assistant such as [Cursor](https://cursor.com/), [Claude Code](https://claude.com/product/claude-code) or [GitHub Copilot](https://github.com/features/copilot):

<PromptButton prompt={AGENTS_PROMPT} title="Use pre-built prompt for your AI coding assistant" />

6 changes: 3 additions & 3 deletions sources/platform/actors/publishing/monetize/index.mdx
@@ -77,14 +77,14 @@ All other changes (such as decreasing prices, adjusting descriptions, or removin

:::important Frequency of major monetization adjustments

-You can make major monetization changes to each Actor only **once per month**. After making a major change, you must wait until it takes effect (14 days) plus an additional period before making another major change. For further information & guidelines, please refer to our [Terms & Conditions](https://apify.com/store-terms-and-conditions)
+You can make major monetization changes to each Actor only **once per month**. After making a major change, you must wait until it takes effect (14 days) plus an additional period before making another major change. For further information & guidelines, please refer to our [Terms & Conditions](/legal/store-publishing-terms-and-conditions)

:::

## Monthly payouts and analytics

Payout invoices are automatically generated on the 11th of each month, summarizing the profits from all your Actors for the previous month.
-In accordance with our [Terms & Conditions](https://apify.com/store-terms-and-conditions), only funds from legitimate users who have already paid are included in the payout invoice.
+In accordance with our [Terms & Conditions](/legal/store-publishing-terms-and-conditions), only funds from legitimate users who have already paid are included in the payout invoice.

:::note How negative profits are handled

Expand All @@ -98,7 +98,7 @@ If no action is taken, the payout will be automatically approved on the 14th, wi
- $20 for PayPal
- $100 for other payout methods

-If the monthly profit does not meet these thresholds, as per our [Terms & Conditions](https://apify.com/store-terms-and-conditions), the funds will roll over to the next month until the threshold is reached.
+If the monthly profit does not meet these thresholds, as per our [Terms & Conditions](/legal/store-publishing-terms-and-conditions), the funds will roll over to the next month until the threshold is reached.
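To make the rollover rule concrete, here is a simplified sketch (illustrative only, not Apify's actual billing logic) of how sub-threshold profit accumulates into a later payout:

```python
def payouts(monthly_profits, threshold=100.0):
    """Illustrative rollover: profit below the payout threshold carries
    over and is paid out once the accumulated balance reaches it."""
    balance = 0.0
    paid_per_month = []
    for profit in monthly_profits:
        balance += profit
        if balance >= threshold:
            paid_per_month.append(balance)  # payout issued, balance resets
            balance = 0.0
        else:
            paid_per_month.append(0.0)      # below threshold: rolls over
    return paid_per_month, balance
```

For example, with a $100 threshold, profits of $60 and $70 in consecutive months pay nothing in the first month and $130 in the second.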

## Handle free users

2 changes: 1 addition & 1 deletion sources/platform/console/two-factor-authentication.md
@@ -26,7 +26,7 @@ If it's not enabled, click on the **Enable** button. You should see the two-fact

![Apify Console setup two-factor authentication - app](./images/console-two-factor-app-setup.png)

-In this view, you can use your favorite authenticator app to scan the QR code. We recommend using Google Authenticator ([Google Play Store](https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2&hl=en_US)/[Apple App Store](https://apps.apple.com/us/app/google-authenticator/id388497605)) or [Authy](https://authy.com/)([Google Play Store](https://play.google.com/store/apps/details?id=com.authy.authy)/[Apple App Store](https://apps.apple.com/us/app/twilio-authy/id494168017) but any other authenticator app should work as well.
+In this view, you can use your favorite authenticator app to scan the QR code. We recommend using Google Authenticator ([Google Play Store](https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2&hl=en_US)/[Apple App Store](https://apps.apple.com/us/app/google-authenticator/id388497605)) or [Authy](https://www.authy.com/) ([Google Play Store](https://play.google.com/store/apps/details?id=com.authy.authy)/[Apple App Store](https://apps.apple.com/us/app/twilio-authy/id494168017)), but any other authenticator app should work as well.
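Any of these apps works because they all implement the same open TOTP standard (RFC 6238): a short code derived from the shared setup key and the current 30-second time window. A minimal illustrative implementation using only Python's standard library (for understanding only — use a real authenticator app in practice):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6, now=None):
    """Compute an RFC 6238 TOTP code from a base32 setup key."""
    pad = "=" * (-len(secret_b32) % 8)                   # restore base32 padding
    key = base64.b32decode(secret_b32.upper() + pad)
    counter = int((time.time() if now is None else now) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)
```

Against the RFC 6238 test vector (ASCII key `12345678901234567890`, i.e. base32 `GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ`), `totp(..., digits=8, now=59)` yields `94287082`.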

You can also set up your app/browser extension manually without the QR code. To do that, click on the **Setup key** link below the QR code. This view with the key will pop up:

2 changes: 1 addition & 1 deletion sources/platform/integrations/ai/agno.md
@@ -140,6 +140,6 @@ Agno supports any Apify Actor via the ApifyTools class. You can specify a single
- [How to build an AI Agent](https://blog.apify.com/how-to-build-an-ai-agent/)
- [Agno Framework Documentation](https://docs.agno.com)
- [Apify Platform Documentation](https://docs.apify.com)
-- [Apify Actor Documentation](https://docs.apify.com/actors)
+- [Apify Actor Documentation](/platform/actors)
- [Apify Store - Browse available Actors](https://apify.com/store)
- [Agno Apify Toolkit Documentation](https://docs.agno.com/tools/toolkits/others/apify#apify)
2 changes: 1 addition & 1 deletion sources/platform/integrations/ai/aws_bedrock.md
@@ -26,7 +26,7 @@ Before getting started, ensure you have:

- An active AWS Account.
- An Apify account and an [API token](https://docs.apify.com/platform/integrations/api#api-token).
-- Granted access to any Large Language Model from Amazon Bedrock. To add access to a LLM, follow this [guide](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access-modify.html). We'll use **Anthropic Claude 3.5 Sonnet** in this example.
+- Granted access to any Large Language Model from Amazon Bedrock. To add access to an LLM, follow this [guide](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html). We'll use **Anthropic Claude 3.5 Sonnet** in this example.

The overall process for creating an agent includes the following [steps](https://docs.aws.amazon.com/bedrock/latest/userguide/agents.html):

16 changes: 8 additions & 8 deletions sources/platform/integrations/ai/langchain.md
@@ -10,13 +10,13 @@ slug: /integrations/langchain

---

-> For more information on LangChain visit its [documentation](https://python.langchain.com/docs/).
+> For more information on LangChain visit its [documentation](https://docs.langchain.com/oss/python/langchain/overview).

In this example, we'll use the [Website Content Crawler](https://apify.com/apify/website-content-crawler) Actor, which can deeply crawl websites such as documentation, knowledge bases, help centers, or blogs and extract text content from the web pages.
Then we feed the documents into a vector index and answer questions from it.

This example demonstrates how to integrate Apify with LangChain using the Python language.
-If you prefer to use JavaScript, you can follow the [JavaScript LangChain documentation](https://js.langchain.com/docs/integrations/document_loaders/web_loaders/apify_dataset/).
+If you prefer to use JavaScript, you can follow the [JavaScript LangChain documentation](https://docs.langchain.com/oss/javascript/integrations/document_loaders/web_loaders/apify_dataset).

Before we start with the integration, we need to install all dependencies:

@@ -54,7 +54,7 @@ llm = ChatOpenAI(model="gpt-4o-mini")

loader = apify.call_actor(
actor_id="apify/website-content-crawler",
-    run_input={"startUrls": [{"url": "https://python.langchain.com/docs/get_started/introduction"}], "maxCrawlPages": 10, "crawlerType": "cheerio"},
+    run_input={"startUrls": [{"url": "https://docs.langchain.com/oss/python/langchain/overview"}], "maxCrawlPages": 10, "crawlerType": "cheerio"},
dataset_mapping_function=lambda item: Document(
page_content=item["text"] or "", metadata={"source": item["url"]}
),
@@ -107,7 +107,7 @@ llm = ChatOpenAI(model="gpt-4o-mini")
print("Call website content crawler ...")
loader = apify.call_actor(
actor_id="apify/website-content-crawler",
-    run_input={"startUrls": [{"url": "https://python.langchain.com/docs/get_started/introduction"}], "maxCrawlPages": 10, "crawlerType": "cheerio"},
+    run_input={"startUrls": [{"url": "https://docs.langchain.com/oss/python/langchain/overview"}], "maxCrawlPages": 10, "crawlerType": "cheerio"},
dataset_mapping_function=lambda item: Document(page_content=item["text"] or "", metadata={"source": item["url"]}),
)
print("Compute embeddings...")
Expand All @@ -130,7 +130,7 @@ After running the code, you should see the following output:
answer: LangChain is a framework designed for developing applications powered by large language models (LLMs). It simplifies the
entire application lifecycle, from development to productionization and deployment. LangChain provides open-source components and integrates with various third-party tools, making it easier to build and optimize applications using language models.

-source: https://python.langchain.com/docs/get_started/introduction
+source: https://docs.langchain.com/oss/python/langchain/overview
```

LangChain is a standard interface through which you can interact with a variety of large language models (LLMs).
Expand All @@ -154,6 +154,6 @@ Similarly, you can use other Apify Actors to load data into LangChain and query

## Resources

-- [LangChain introduction](https://python.langchain.com/docs/get_started/introduction)
-- [Apify Dataset loader](https://python.langchain.com/docs/integrations/document_loaders/apify_dataset)
-- [LangChain Apify Provider](https://python.langchain.com/docs/integrations/providers/apify)
+- [LangChain introduction](https://docs.langchain.com/oss/python/langchain/overview)
+- [Apify Dataset loader](https://docs.langchain.com/oss/python/integrations/document_loaders/apify_dataset)
+- [LangChain Apify Provider](https://docs.langchain.com/oss/python/integrations/providers/apify)
4 changes: 2 additions & 2 deletions sources/platform/integrations/ai/langflow.md
@@ -12,7 +12,7 @@ slug: /integrations/langflow

## What is Langflow

-[Langflow](https://langflow.org/) is a low-code, visual tool that enables developers to build powerful AI agents and workflows that can use any API, models, or databases.
+[Langflow](https://www.langflow.org/) is a low-code, visual tool that enables developers to build powerful AI agents and workflows that can use any API, models, or databases.

:::note Explore Langflow

Expand All @@ -37,7 +37,7 @@ This guide will demonstrate two different ways to use Apify Actors with Langflow

:::note Cloud vs local setup

-Langflow can either be installed locally or used in the cloud. The cloud version is available on the [Langflow](http://langflow.org/) website. If you are using the cloud version, you can skip the installation step, and go straight to [Creating a new flow](#creating-a-new-flow)
+Langflow can either be installed locally or used in the cloud. The cloud version is available on the [Langflow](https://www.langflow.org/) website. If you are using the cloud version, you can skip the installation step, and go straight to [Creating a new flow](#creating-a-new-flow)

:::

4 changes: 2 additions & 2 deletions sources/platform/integrations/ai/llama.md
@@ -10,7 +10,7 @@ slug: /integrations/llama-index

---

-> For more information on LlamaIndex, visit its [documentation](https://docs.llamaindex.ai/en/stable/).
+> For more information on LlamaIndex, visit its [documentation](https://developers.llamaindex.ai/python/framework/).

## What is LlamaIndex?

@@ -76,4 +76,4 @@ documents = reader.load_data(
## Resources

* [Apify loaders](https://llamahub.ai/l/readers/llama-index-readers-apify)
-* [LlamaIndex documentation](https://docs.llamaindex.ai/en/stable/)
+* [LlamaIndex documentation](https://developers.llamaindex.ai/python/framework/)
2 changes: 1 addition & 1 deletion sources/platform/integrations/ai/mastra.md
@@ -31,7 +31,7 @@ This guide demonstrates how to integrate Apify Actors with Mastra by building an
### Prerequisites

- _Apify API token_: To use Apify Actors, you need an Apify API token. Learn how to obtain it in the [Apify documentation](https://docs.apify.com/platform/integrations/api).
-- _LLM provider API key_: To power the agents, you need an LLM provider API key. For example, get one from the [OpenAI](https://platform.openai.com/account/api-keys) or [Anthropic](https://console.anthropic.com/settings/keys).
+- _LLM provider API key_: To power the agents, you need an LLM provider API key. For example, get one from the [OpenAI](https://platform.openai.com/account/api-keys) or [Anthropic](https://platform.claude.com/settings/keys).
- _Node.js_: Ensure you have Node.js installed.
- _Packages_: Install the following packages:

2 changes: 1 addition & 1 deletion sources/platform/integrations/ai/mcp.md
@@ -13,7 +13,7 @@ import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

Apify's MCP server ([mcp.apify.com](https://mcp.apify.com)) allows AI applications and agents to interact with the Apify platform
-using [Model Context Protocol](https://modelcontextprotocol.io/). The server enables AI agents to
+using [Model Context Protocol](https://modelcontextprotocol.io/docs/getting-started/intro). The server enables AI agents to
discover and run Actors from [Apify Store](https://apify.com/store), access storages and results,
and enables AI coding assistants to access Apify documentation and tutorials.

2 changes: 1 addition & 1 deletion sources/platform/integrations/ai/openai_agents.md
@@ -283,4 +283,4 @@ For a comprehensive example with error handling and reporting, refer to the [Ope
- [OpenAI Agent MCP Tester GitHub repository](https://github.com/apify/openai-agent-mcp-tester) - Source code for the MCP tester Actor
- [Apify MCP server](https://mcp.apify.com) - Interactive configuration tool for the Apify MCP server
- [Apify MCP documentation](/platform/integrations/mcp) - Complete guide to using the Apify MCP server
-- [Model Context Protocol specification](https://modelcontextprotocol.io/) - Learn about the MCP specification
+- [Model Context Protocol specification](https://modelcontextprotocol.io/docs/getting-started/intro) - Learn about the MCP specification
4 changes: 2 additions & 2 deletions sources/platform/integrations/ai/skyfire.md
@@ -33,7 +33,7 @@ The [Apify MCP server](https://docs.apify.com/platform/integrations/mcp) provide

Before using agentic payments through MCP, you need:

-1. _A Skyfire account_ with a funded wallet - [sign up at Skyfire](https://app.skyfire.xyz/)
+1. _A Skyfire account_ with a funded wallet - [sign up at Skyfire](https://app.skyfire.xyz/auth)
1. _An MCP client_ that supports multiple server connections, such as [OpenCode](https://opencode.ai/), [Claude Desktop](https://claude.com/download) with MCP support, or other compatible clients
1. _Both MCP servers configured_: Skyfire's MCP server and Apify's MCP server

Expand Down Expand Up @@ -103,7 +103,7 @@ If you're using [Claude Desktop](https://claude.com/download), add this configur
</TabItem>
</Tabs>

-Replace `YOUR_SKYFIRE_API_KEY` with Skyfire buyer API key, which you can obtain from your [Skyfire dashboard](https://app.skyfire.xyz/).
+Replace `YOUR_SKYFIRE_API_KEY` with your Skyfire buyer API key, which you can obtain from your [Skyfire dashboard](https://app.skyfire.xyz/auth).

### How it works

2 changes: 1 addition & 1 deletion sources/platform/integrations/data-storage/airbyte.md
@@ -39,4 +39,4 @@ To find your Apify API token, you need to navigate to the **Settings** tab and s

And that's it! You now have Apify datasets set up as a Source, and you can use Airbyte to transfer your datasets to one of the available destinations.

-To learn more about how to setup a Connection, visit [Airbyte's documentation](https://docs.airbyte.com/using-airbyte/getting-started/set-up-a-connection)
+To learn more about how to set up a Connection, visit [Airbyte's documentation](https://docs.airbyte.com/platform/using-airbyte/getting-started/set-up-a-connection)
@@ -19,33 +19,33 @@ This guide shows you how to set up the integration, configure authentication, an
Before you begin, make sure you have:

- An [Apify account](https://console.apify.com/)
-- A [Kestra instance](https://kestra.io/docs/getting-started/quickstart) (self‑hosted or cloud)
+- A [Kestra instance](https://kestra.io/docs/quickstart) (self‑hosted or cloud)

## Authentication

The Apify plugin uses API key authentication. Store your API key in [Kestra Secrets](https://kestra.io/docs/concepts/secret) through the UI or environment variables. In the open-source version, manage Secrets using base64-encoded environment variables. You can also use [Kestra's KV Store](https://kestra.io/docs/concepts/kv-store) to persist API keys across executions and workflows.

To add your Apify API token, go to the Secrets section in the Kestra UI and create a new secret with the key `APIFY_API_KEY` and your token as the value.

-## Use Apify Tasks as an action
+## Use Apify tasks as an action

Tasks allow you to perform operations like running an Actor within a workflow.

1. Create a new flow.
-1. Inside the **Flow code** tab change the hello task's type to be **io.kestra.plugin.apify.actor.Run**.
-1. Change the task's id to be **run_apify_actor**
+1. Inside the **Flow code** tab change the hello task's type to be `io.kestra.plugin.apify.actor.Run`.
+1. Change the task's id to be `run_apify_actor`
1. Remove the message property.
-1. Configure the **run_apify_actor** task by adding your required values for the properties listed below:
-   - **actorId**: Actor ID or a tilde-separated owner's username and Actor name.
-   - **apiToken**: A reference to the secret value you set up earlier. For example "\{\{secret(namespace=flow.namespace, key='APIFY_API_KEY')\}\}"
-1. Add a new task below the **run_apify_actor** with an ID of **get_dataset** and a type of **io.kestra.plugin.apify.dataset.Get**.:
-1. Configure the **get_dataset** to fetch the dataset generated by the **run_apify_actor** task by configuring the following values:
-   - **datasetId**: The ID of the dataset to fetch. You can use the value from the previous task using the following syntax: "\{\{secret(namespace=flow.namespace, key='APIFY_API_KEY')\}\}"
-   - **input**: Input for the Actor run. The input is optional and can be used to pass data to the Actor. For our example we will add 'hashtags: ["fyp"]'
-   - **maxItems**: The maximum number of items to fetch from the dataset. For our example we will set this to 5.
-1. Now add the final task to log the output of the dataset. Add a new task below the **log_output** with an ID of **log_output** and a type of **io.kestra.plugin.core.log.Log**.
-1. Configure the **log_output** task to log the output of the dataset by configuring the following values:
-   - **message**: The message to log. You can use the value from the previous task using the following syntax: '\{\{outputs.get_dataset.dataset\}\}'
+1. Configure the `run_apify_actor` task by adding your required values for the properties listed below:
+   - `actorId`: Actor ID or a tilde-separated owner's username and Actor name.
+   - `apiToken`: A reference to the secret value you set up earlier. For example "\{\{secret(namespace=flow.namespace, key='APIFY_API_KEY')\}\}"
+1. Add a new task below the `run_apify_actor` with an ID of `get_dataset` and a type of `io.kestra.plugin.apify.dataset.Get`.
+1. Configure the `get_dataset` to fetch the dataset generated by the `run_apify_actor` task by configuring the following values:
+   - `datasetId`: The ID of the dataset to fetch. You can use the value from the previous task using the following syntax: "\{\{secret(namespace=flow.namespace, key='APIFY_API_KEY')\}\}"
+   - `input`: Input for the Actor run. The input is optional and can be used to pass data to the Actor. For our example we will add 'hashtags: ["fyp"]'
+   - `maxItems`: The maximum number of items to fetch from the dataset. For our example we will set this to 5.
+1. Now add the final task to log the output of the dataset. Add a new task below the `get_dataset` with an ID of `log_output` and a type of `io.kestra.plugin.core.log.Log`.
+1. Configure the `log_output` task to log the output of the dataset by configuring the following values:
+   - `message`: The message to log. You can use the value from the previous task using the following syntax: '\{\{outputs.get_dataset.dataset\}\}'
1. Now save and run your flow.

Your completed template should match the template below.
@@ -201,7 +201,7 @@ Automatically start an n8n workflow when an Actor or task run finishes:

## Resources

-- [n8n Community Nodes Documentation](https://docs.n8n.io/integrations/community-nodes/)
+- [n8n Community Nodes Documentation](https://docs.n8n.io/integrations/)
- [Apify API Documentation](https://docs.apify.com)
- [n8n Documentation](https://docs.n8n.io)
