Official JavaScript/TypeScript SDK for the ScrapeGraph AI API - Smart web scraping powered by AI.
- Smart web scraping with AI
- Fully asynchronous design
- Detailed error handling
- Automatic retries and logging
- Secure API authentication
- AI-powered schema generation
- Agentic scraping with browser automation
- Scheduled jobs with cron support
- Stealth mode to avoid bot detection
- Mock mode for testing and development
- Toonify for visual data representation
- Cookie support for authenticated scraping
- Infinite scroll and pagination support
- Sitemap discovery and crawling
Install the package using npm or yarn:
# Using npm
npm i scrapegraph-js
# Using yarn
yarn add scrapegraph-js

Note: Store your API keys securely in environment variables. Use `.env` files and libraries like `dotenv` to load them into your app.
import { smartScraper } from 'scrapegraph-js';
import 'dotenv/config';
// Initialize variables
const apiKey = process.env.SGAI_APIKEY; // Set your API key as an environment variable
const websiteUrl = 'https://example.com';
const prompt = 'What does the company do?';
(async () => {
try {
const response = await smartScraper(apiKey, websiteUrl, prompt);
console.log(response.result);
} catch (error) {
console.error('Error:', error);
}
})();

import { scrape } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const url = 'https://example.com';
(async () => {
try {
const response = await scrape(apiKey, url);
console.log('HTML content:', response.html);
console.log('Status:', response.status);
} catch (error) {
console.error('Error:', error);
}
})();

import { scrape } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const url = 'https://example.com';
(async () => {
try {
const response = await scrape(apiKey, url, {
renderHeavyJs: true
});
console.log('HTML content with JS rendering:', response.html);
} catch (error) {
console.error('Error:', error);
}
})();

import { scrape } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const url = 'https://example.com';
(async () => {
try {
const response = await scrape(apiKey, url, {
headers: {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
'Cookie': 'session=123'
}
});
console.log('HTML content with custom headers:', response.html);
} catch (error) {
console.error('Error:', error);
}
})();

import { getScrapeRequest } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const requestId = 'your-request-id';
(async () => {
try {
const response = await getScrapeRequest(apiKey, requestId);
console.log('Request status:', response.status);
if (response.status === 'completed') {
console.log('HTML content:', response.html);
}
} catch (error) {
console.error('Error:', error);
}
})();

import { smartScraper } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const url = 'https://example.com';
const prompt = 'Extract the main heading and description.';
(async () => {
try {
const response = await smartScraper(apiKey, url, prompt);
console.log(response.result);
} catch (error) {
console.error('Error:', error);
}
})();

Note: To use this feature, you need the Zod package for schema creation.
Here is a real-world example:
import { smartScraper } from 'scrapegraph-js';
import { z } from 'zod';
const apiKey = 'your-api-key';
const url = 'https://scrapegraphai.com/';
const prompt = 'What does the company do?';
const schema = z.object({
title: z.string().describe('The title of the webpage'),
description: z.string().describe('The description of the webpage'),
summary: z.string().describe('A brief summary of the webpage'),
});
(async () => {
try {
const response = await smartScraper(apiKey, url, prompt, schema);
console.log(response.result);
} catch (error) {
console.error('Error:', error);
}
})();

For websites that load content dynamically through infinite scrolling (like social media feeds), you can use the numberOfScrolls parameter:
import { smartScraper } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const url = 'https://example.com/infinite-scroll-page';
const prompt = 'Extract all the posts from the feed';
const numberOfScrolls = 10; // Will scroll 10 times to load more content
(async () => {
try {
const response = await smartScraper(apiKey, url, prompt, null, numberOfScrolls);
console.log('Extracted data from scrolled page:', response);
} catch (error) {
console.error('Error:', error);
}
})();

The numberOfScrolls parameter accepts values between 0 and 100, allowing you to control how many times the page should be scrolled before extraction.
Use cookies for authentication and session management when scraping websites that require login or have user-specific content:
import { smartScraper } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const url = 'https://example.com/dashboard';
const prompt = 'Extract user profile information';
// Define cookies for authentication
const cookies = {
session_id: '<SESSION_ID>',
auth_token: '<JWT_TOKEN>',
user_preferences: '<USER_PREFERENCES>'
};
(async () => {
try {
const response = await smartScraper(apiKey, url, prompt, null, null, null, cookies);
console.log(response.result);
} catch (error) {
console.error('Error:', error);
}
})();

Common Use Cases:
- E-commerce sites: User authentication, shopping cart persistence
- Social media: Session management, user preferences
- Banking/Financial: Secure authentication, transaction history
- News sites: User preferences, subscription content
- API endpoints: Authentication tokens, API keys
Combine cookies with infinite scrolling and pagination for comprehensive data extraction:
import { smartScraper } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const url = 'https://example.com/feed';
const prompt = 'Extract all posts from the feed';
const cookies = { session_token: 'xyz789abc123' };
const numberOfScrolls = 10; // Scroll 10 times
const totalPages = 5; // Scrape 5 pages
(async () => {
try {
const response = await smartScraper(apiKey, url, prompt, null, numberOfScrolls, totalPages, cookies);
console.log('Extracted data:', response);
} catch (error) {
console.error('Error:', error);
}
})();

Perform automated browser actions on webpages using step-by-step instructions. Perfect for interacting with dynamic websites, filling forms, clicking buttons, and extracting data after performing actions.
import { agenticScraper } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const url = 'https://dashboard.example.com/';
const steps = [
'Type email@gmail.com in email input box',
'Type password123 in password input box',
'Click on login button'
];
(async () => {
try {
const response = await agenticScraper(
apiKey,
url,
steps,
true, // useSession
null, // userPrompt (not needed for basic scraping)
null, // outputSchema (not needed for basic scraping)
false // aiExtraction = false
);
console.log('Request ID:', response.request_id);
console.log('Status:', response.status);
} catch (error) {
console.error('Error:', error);
}
})();

Extract structured data after performing browser actions:
import { agenticScraper } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const url = 'https://dashboard.example.com/';
const steps = [
'Type email@gmail.com in email input box',
'Type password123 in password input box',
'Click on login button',
'Wait for dashboard to load completely'
];
const outputSchema = {
dashboard_info: {
type: "object",
properties: {
username: { type: "string" },
email: { type: "string" },
available_sections: {
type: "array",
items: { type: "string" }
},
credits_remaining: { type: "number" }
},
required: ["username", "available_sections"]
}
};
const userPrompt = "Extract the user's dashboard information including username, email, available dashboard sections, and remaining credits";
(async () => {
try {
const response = await agenticScraper(
apiKey,
url,
steps,
true, // useSession
userPrompt, // userPrompt for AI extraction
outputSchema, // outputSchema for structured data
true // aiExtraction = true
);
console.log('Request ID:', response.request_id);
console.log('Status:', response.status);
} catch (error) {
console.error('Error:', error);
}
})();

import { getAgenticScraperRequest } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const requestId = 'your-request-id';
(async () => {
try {
const response = await getAgenticScraperRequest(apiKey, requestId);
console.log('Status:', response.status);
if (response.status === 'completed') {
console.log('Result:', response.result);
}
} catch (error) {
console.error('Error:', error);
}
})();

Search and extract information from multiple web sources using AI.
import { searchScraper } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const prompt = 'What is the latest version of Python and what are its main features?';
(async () => {
try {
const response = await searchScraper(apiKey, prompt);
console.log(response.result);
} catch (error) {
console.error('Error:', error);
}
})();

Use locationGeoCode to target search results from a specific geographic location:
import { searchScraper } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const prompt = 'Best restaurants near me';
(async () => {
try {
const response = await searchScraper(apiKey, prompt, 5, null, null, {
locationGeoCode: 'us' // Search results targeted to United States
});
console.log(response.result);
} catch (error) {
console.error('Error:', error);
}
})();

Start a crawl job to extract structured data from a website and its linked pages, using a custom schema.
import { crawl, getCrawlRequest } from 'scrapegraph-js';
import 'dotenv/config';
const apiKey = process.env.SGAI_APIKEY;
const url = 'https://scrapegraphai.com/';
const prompt = 'What does the company do? and I need text content from their privacy and terms';
const schema = {
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "ScrapeGraphAI Website Content",
"type": "object",
"properties": {
"company": {
"type": "object",
"properties": {
"name": { "type": "string" },
"description": { "type": "string" },
"features": { "type": "array", "items": { "type": "string" } },
"contact_email": { "type": "string", "format": "email" },
"social_links": {
"type": "object",
"properties": {
"github": { "type": "string", "format": "uri" },
"linkedin": { "type": "string", "format": "uri" },
"twitter": { "type": "string", "format": "uri" }
},
"additionalProperties": false
}
},
"required": ["name", "description"]
},
"services": {
"type": "array",
"items": {
"type": "object",
"properties": {
"service_name": { "type": "string" },
"description": { "type": "string" },
"features": { "type": "array", "items": { "type": "string" } }
},
"required": ["service_name", "description"]
}
},
"legal": {
"type": "object",
"properties": {
"privacy_policy": { "type": "string" },
"terms_of_service": { "type": "string" }
},
"required": ["privacy_policy", "terms_of_service"]
}
},
"required": ["company", "services", "legal"]
};
(async () => {
try {
// Start the crawl job
const crawlResponse = await crawl(apiKey, url, prompt, schema, {
cacheWebsite: true,
depth: 2,
maxPages: 2,
sameDomainOnly: true,
sitemap: true, // Use sitemap for better page discovery
batchSize: 1,
});
console.log('Crawl job started. Response:', crawlResponse);
// If the crawl is asynchronous and returns an ID, fetch the result
const crawlId = crawlResponse.id || crawlResponse.task_id;
if (crawlId) {
for (let i = 0; i < 10; i++) {
await new Promise((resolve) => setTimeout(resolve, 5000));
const result = await getCrawlRequest(apiKey, crawlId);
if (result.status === 'success' && result.result) {
console.log('Crawl completed. Result:', result.result.llm_result);
break;
} else if (result.status === 'failed') {
console.log('Crawl failed. Result:', result);
break;
} else {
console.log(`Status: ${result.status}, waiting...`);
}
}
} else {
console.log('No crawl ID found in response. Synchronous result:', crawlResponse);
}
} catch (error) {
console.error('Error occurred:', error);
}
})();

You can use a plain JSON schema or a Zod schema for the schema parameter. The crawl API supports options for crawl depth, max pages, domain restriction, sitemap discovery, and batch size.
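For example, the same crawl can be started with a Zod schema instead of the plain JSON schema above (a minimal sketch; the schema fields are illustrative):

import { crawl } from 'scrapegraph-js';
import { z } from 'zod';
const apiKey = process.env.SGAI_APIKEY;
// Illustrative Zod schema; any Zod object schema works here
const companySchema = z.object({
  name: z.string().describe('Company name'),
  description: z.string().describe('What the company does'),
});
(async () => {
  try {
    const crawlResponse = await crawl(apiKey, 'https://scrapegraphai.com/', 'What does the company do?', companySchema, {
      depth: 2,
      maxPages: 2,
    });
    console.log('Crawl job started:', crawlResponse);
  } catch (error) {
    console.error('Error:', error);
  }
})();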
Sitemap Benefits:
- Better page discovery using sitemap.xml
- More comprehensive website coverage
- Efficient crawling of structured websites
- Perfect for e-commerce, news sites, and content-heavy websites
Converts a webpage into clean, well-structured markdown format.
import { markdownify } from 'scrapegraph-js';
const apiKey = 'your_api_key';
const url = 'https://scrapegraphai.com/';
(async () => {
try {
const response = await markdownify(apiKey, url);
console.log(response);
} catch (error) {
console.error(error);
}
})();

Extract all URLs from a website's sitemap. Automatically discovers the sitemap from robots.txt or common sitemap locations.
import { sitemap } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const websiteUrl = 'https://example.com';
(async () => {
try {
const response = await sitemap(apiKey, websiteUrl);
console.log('Total URLs found:', response.urls.length);
console.log('URLs:', response.urls);
} catch (error) {
console.error('Error:', error);
}
})();

import { getCredits } from 'scrapegraph-js';
const apiKey = 'your-api-key';
(async () => {
try {
const credits = await getCredits(apiKey);
console.log('Available credits:', credits);
} catch (error) {
console.error('Error fetching credits:', error);
}
})();

import { sendFeedback } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const requestId = '16a63a80-c87f-4cde-b005-e6c3ecda278b';
const rating = 5;
const feedbackText = 'This is a test feedback message.';
(async () => {
try {
const response = await sendFeedback(apiKey, requestId, rating, feedbackText);
console.log('Feedback response:', response);
} catch (error) {
console.error('Error sending feedback:', error);
}
})();

Create and manage scheduled scraping jobs that run automatically on a cron schedule. Perfect for monitoring websites, collecting data periodically, and automating repetitive scraping tasks.
import { createScheduledJob } from 'scrapegraph-js';
const apiKey = 'your-api-key';
(async () => {
try {
const job = await createScheduledJob(
apiKey,
'Daily News Scraper', // jobName
'smartscraper', // serviceType
'0 9 * * *', // cronExpression (daily at 9 AM)
{
website_url: 'https://news.ycombinator.com',
user_prompt: 'Extract the top 5 news titles and their URLs',
render_heavy_js: false
},
true // isActive
);
console.log('Job created:', job.id);
console.log('Job name:', job.job_name);
} catch (error) {
console.error('Error:', error);
}
})();

import { getScheduledJobs } from 'scrapegraph-js';
const apiKey = 'your-api-key';
(async () => {
try {
const response = await getScheduledJobs(apiKey, { page: 1, pageSize: 10 });
console.log('Total jobs:', response.total);
response.jobs.forEach(job => {
console.log(`- ${job.job_name} (${job.service_type}) - Active: ${job.is_active}`);
});
} catch (error) {
console.error('Error:', error);
}
})();

import {
getScheduledJob,
updateScheduledJob,
pauseScheduledJob,
resumeScheduledJob,
deleteScheduledJob,
triggerScheduledJob,
getJobExecutions
} from 'scrapegraph-js';
const apiKey = 'your-api-key';
const jobId = 'your-job-id';
(async () => {
try {
// Get job details
const job = await getScheduledJob(apiKey, jobId);
console.log('Job details:', job);
// Update job
const updated = await updateScheduledJob(apiKey, jobId, {
jobName: 'Updated Job Name',
cronExpression: '0 8 * * *' // Change to 8 AM
});
console.log('Updated job:', updated);
// Pause job
await pauseScheduledJob(apiKey, jobId);
// Resume job
await resumeScheduledJob(apiKey, jobId);
// Manually trigger job
const triggerResult = await triggerScheduledJob(apiKey, jobId);
console.log('Execution ID:', triggerResult.execution_id);
// Get execution history
const executions = await getJobExecutions(apiKey, jobId, { page: 1, pageSize: 10 });
console.log('Total executions:', executions.total);
// Delete job
await deleteScheduledJob(apiKey, jobId);
} catch (error) {
console.error('Error:', error);
}
})();

Supported Service Types:
- `smartscraper` - Smart scraping with AI extraction
- `searchscraper` - Search-based scraping
- `crawl` - Website crawling
- `markdownify` - Markdown conversion
- `scrape` - Basic HTML scraping
Cron Expression Format:
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of week (0 - 6) (Sunday to Saturday)
│ │ │ │ │
* * * * *

Examples:
- `0 9 * * *` - Daily at 9 AM
- `0 10 * * 1` - Every Monday at 10 AM
- `*/15 * * * *` - Every 15 minutes
- `0 0 1 * *` - First day of every month at midnight
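For instance, a weekly markdownify job could be scheduled like this (a sketch; the website_url config key is an assumption based on the smartscraper example above, so check the docs for the exact per-service job config):

import { createScheduledJob } from 'scrapegraph-js';
const apiKey = 'your-api-key';
(async () => {
  try {
    const job = await createScheduledJob(
      apiKey,
      'Weekly Markdown Snapshot', // jobName
      'markdownify', // serviceType
      '0 10 * * 1', // cronExpression (every Monday at 10 AM)
      { website_url: 'https://example.com' }, // jobConfig (assumed field name)
      true // isActive
    );
    console.log('Job created:', job.id);
  } catch (error) {
    console.error('Error:', error);
  }
})();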
Convert structured data into a visual "toon" format. Useful for creating visual representations of scraped data.
import { toonify } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const data = {
products: [
{ sku: "LAP-001", name: "Gaming Laptop", price: 1299.99 },
{ sku: "MOU-042", name: "Wireless Mouse", price: 29.99 }
]
};
(async () => {
try {
const result = await toonify(apiKey, data);
console.log('Toonified result:', result);
} catch (error) {
console.error('Error toonifying data:', error);
}
})();

Enable stealth mode to avoid bot detection when scraping websites. Stealth mode uses advanced techniques to make requests appear more like those from a real browser.
import { smartScraper } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const url = 'https://example.com';
const prompt = 'Extract product information';
(async () => {
try {
const response = await smartScraper(
apiKey,
url,
prompt,
null, // schema
null, // numberOfScrolls
null, // totalPages
null, // cookies
{}, // options
false, // plain_text
false, // renderHeavyJs
true // stealth - Enable stealth mode
);
console.log('Result:', response.result);
} catch (error) {
console.error('Error:', error);
}
})();

Stealth mode is available for all major endpoints:
import { scrape, searchScraper, markdownify, agenticScraper, crawl } from 'scrapegraph-js';
const apiKey = 'your-api-key';
// Scrape with stealth mode
const scrapeResult = await scrape(apiKey, 'https://example.com', {
stealth: true,
renderHeavyJs: true
});
// SearchScraper with stealth mode
const searchResult = await searchScraper(apiKey, 'Search query', 5, null, null, {
stealth: true,
extractionMode: true
});
// Markdownify with stealth mode
const markdownResult = await markdownify(apiKey, 'https://example.com', {
stealth: true
});
// AgenticScraper with stealth mode
const agenticResult = await agenticScraper(apiKey, 'https://example.com', ['click button'], true, null, null, false, {
stealth: true
});
// Crawl with stealth mode
const schema = { type: 'object', properties: { title: { type: 'string' } } }; // minimal example schema
const crawlResult = await crawl(apiKey, 'https://example.com', 'Extract data', schema, {
stealth: true,
depth: 2,
maxPages: 5
});

Mock mode allows you to test your code without making actual API calls. Perfect for development, testing, and CI/CD pipelines.
import { enableMock, disableMock, getCredits, smartScraper } from 'scrapegraph-js';
// Enable mock mode globally
enableMock();
const apiKey = 'mock-api-key';
(async () => {
try {
// All API calls will return mock responses
const credits = await getCredits(apiKey);
console.log('Mock credits:', credits);
const result = await smartScraper(apiKey, 'https://example.com', 'Extract data');
console.log('Mock result:', result);
} catch (error) {
console.error('Error:', error);
} finally {
// Disable mock mode when done
disableMock();
}
})();

import { scrape, smartScraper } from 'scrapegraph-js';
const apiKey = 'your-api-key';
(async () => {
try {
// Enable mock for specific requests
const result = await scrape(apiKey, 'https://example.com', { mock: true });
console.log('Mock scrape result:', result);
const smartResult = await smartScraper(apiKey, 'https://example.com', 'Test', null, null, null, null, { mock: true });
console.log('Mock smartScraper result:', smartResult);
} catch (error) {
console.error('Error:', error);
}
})();

import { setMockResponses, getCredits, smartScraper, enableMock } from 'scrapegraph-js';
enableMock();
// Set custom mock responses
setMockResponses({
'/v1/credits': {
remaining_credits: 1000,
total_credits_used: 500
},
'/v1/smartscraper': () => ({
request_id: 'custom-mock-id',
status: 'completed',
result: { custom_data: 'Generated by custom function' }
})
});
const apiKey = 'mock-api-key';
(async () => {
try {
const credits = await getCredits(apiKey);
console.log('Custom credits:', credits);
const result = await smartScraper(apiKey, 'https://example.com', 'Test');
console.log('Custom result:', result);
} catch (error) {
console.error('Error:', error);
}
})();

You can also enable mock mode using the SGAI_MOCK environment variable:
export SGAI_MOCK=1
node your-script.js

Or in your .env file:
SGAI_MOCK=1
Generate JSON schemas from natural language prompts using AI. This feature helps you create structured data schemas for web scraping and data extraction.
import { generateSchema } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const prompt = 'Find laptops with specifications like brand, processor, RAM, storage, and price';
(async () => {
try {
const response = await generateSchema(prompt, null, { apiKey });
console.log('Generated schema:', response.generated_schema);
console.log('Request ID:', response.request_id);
} catch (error) {
console.error('Error generating schema:', error);
}
})();

import { generateSchema } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const existingSchema = {
type: 'object',
properties: {
name: { type: 'string' },
price: { type: 'number' }
},
required: ['name', 'price']
};
const modificationPrompt = 'Add brand and rating fields to the existing schema';
(async () => {
try {
const response = await generateSchema(modificationPrompt, existingSchema, { apiKey });
console.log('Modified schema:', response.generated_schema);
} catch (error) {
console.error('Error modifying schema:', error);
}
})();

import { getSchemaStatus } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const requestId = '123e4567-e89b-12d3-a456-426614174000';
(async () => {
try {
const response = await getSchemaStatus(requestId, { apiKey });
console.log('Status:', response.status);
if (response.status === 'completed') {
console.log('Generated schema:', response.generated_schema);
}
} catch (error) {
console.error('Error checking status:', error);
}
})();

import { pollSchemaGeneration } from 'scrapegraph-js';
const apiKey = 'your-api-key';
const requestId = '123e4567-e89b-12d3-a456-426614174000';
(async () => {
try {
const finalResult = await pollSchemaGeneration(requestId, {
apiKey,
maxAttempts: 15,
delay: 3000,
onProgress: ({ attempt, maxAttempts, status, response }) => {
if (status === 'checking') {
console.log(`Checking status... (${attempt}/${maxAttempts})`);
} else {
console.log(`Status: ${status} (${attempt}/${maxAttempts})`);
}
}
});
console.log('Schema generation completed!');
console.log('Final schema:', finalResult.generated_schema);
} catch (error) {
console.error('Error during polling:', error);
}
})();

Converts a webpage into HTML format with optional JavaScript rendering.
Parameters:
- `apiKey` (string): Your ScrapeGraph AI API key
- `url` (string): The URL of the webpage to convert
- `options` (object, optional): Configuration options
  - `renderHeavyJs` (boolean, optional): Whether to render heavy JavaScript (default: false)
  - `headers` (object, optional): Custom headers to send with the request
  - `stealth` (boolean, optional): Enable stealth mode to avoid bot detection (default: false)
  - `mock` (boolean, optional): Override mock mode for this request
Returns: Promise that resolves to an object containing:
- `html`: The HTML content of the webpage
- `status`: Request status ('completed', 'processing', 'failed')
- `scrape_request_id`: Unique identifier for the request
- `error`: Error message if the request failed
Example:
const response = await scrape(apiKey, 'https://example.com', {
renderHeavyJs: true,
headers: { 'User-Agent': 'Custom Agent' },
stealth: true
});

Retrieves the status or result of a previous scrape request.
Parameters:
- `apiKey` (string): Your ScrapeGraph AI API key
- `requestId` (string): The unique identifier for the scrape request
Returns: Promise that resolves to the request result object.
Example:
const result = await getScrapeRequest(apiKey, 'request-id-here');

smartScraper(apiKey, url, prompt, schema, numberOfScrolls, totalPages, cookies, options, plainText, renderHeavyJs, stealth)
Extracts structured data from websites using AI-powered scraping.
Parameters:
- `apiKey` (string): Your ScrapeGraph AI API key
- `url` (string): The URL of the website to scrape
- `prompt` (string): Natural language prompt describing what to extract
- `schema` (object, optional): Zod schema for structured output
- `numberOfScrolls` (number, optional): Number of scrolls for infinite scroll pages (0-100)
- `totalPages` (number, optional): Number of pages to scrape
- `cookies` (object, optional): Cookies for authentication
- `options` (object, optional): Additional options
  - `mock` (boolean): Override mock mode for this request
- `plainText` (boolean, optional): Whether to return plain text instead of structured data
- `renderHeavyJs` (boolean, optional): Whether to render heavy JavaScript (default: false)
- `stealth` (boolean, optional): Enable stealth mode to avoid bot detection (default: false)
Returns: Promise that resolves to an object containing:
- `result`: Extracted data (structured or plain text)
- `request_id`: Unique identifier for the request
- `status`: Request status
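Putting the positional parameters together, a combined call might look like this (a sketch following the signature above):

const response = await smartScraper(
  apiKey,
  'https://example.com',
  'Extract product names and prices',
  null, // schema
  5, // numberOfScrolls
  2, // totalPages
  null, // cookies
  {}, // options
  false, // plainText
  true, // renderHeavyJs
  true // stealth
);
console.log(response.result);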
Searches and extracts information from multiple web sources using AI.
Parameters:
- `apiKey` (string): Your ScrapeGraph AI API key
- `prompt` (string): Search query or prompt
- `numResults` (number, optional): Number of results to return
- `schema` (object, optional): Schema for structured output
- `userAgent` (string, optional): Custom user agent string
- `options` (object, optional): Additional options
  - `stealth` (boolean): Enable stealth mode
  - `extractionMode` (boolean): Whether to use AI extraction
  - `renderHeavyJs` (boolean): Whether to render heavy JavaScript
  - `locationGeoCode` (string): The geo code of the location to search in (e.g., "us", "gb", "de")
  - `mock` (boolean): Override mock mode for this request
Returns: Promise that resolves to an object containing:
- `result`: Extracted search results
- `request_id`: Unique identifier for the request
- `status`: Request status
- `reference_urls`: Array of URLs used for the search
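For example, limiting the number of results and enabling stealth (a sketch based on the parameter order above):

const response = await searchScraper(apiKey, 'Top JavaScript frameworks in 2024', 3, null, null, {
  extractionMode: true,
  stealth: true
});
console.log(response.result);
console.log('Sources:', response.reference_urls);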
Starts a crawl job to extract structured data from a website and its linked pages.
Parameters:
- `apiKey` (string): Your ScrapeGraph AI API key
- `url` (string): The starting URL for the crawl
- `prompt` (string): AI prompt to guide data extraction (required for AI mode)
- `dataSchema` (object): JSON schema defining extracted data structure (required for AI mode)
- `options` (object): Optional crawl parameters
  - `extractionMode` (boolean, default: true): true for AI extraction, false for markdown conversion (see the sketch below)
  - `cacheWebsite` (boolean, default: true): Whether to cache website content
  - `depth` (number, default: 2): Maximum crawl depth (1-10)
  - `maxPages` (number, default: 2): Maximum pages to crawl (1-100)
  - `sameDomainOnly` (boolean, default: true): Only crawl pages from the same domain
  - `sitemap` (boolean, default: false): Use sitemap.xml for better page discovery
  - `batchSize` (number, default: 1): Batch size for processing pages (1-10)
  - `renderHeavyJs` (boolean, default: false): Whether to render heavy JavaScript
  - `stealth` (boolean, default: false): Enable stealth mode to avoid bot detection
  - `mock` (boolean): Override mock mode for this request
Returns: Promise that resolves to an object containing:
- `id` or `crawl_id`: Unique identifier for the crawl job
- `status`: Crawl status
- `message`: Status message
Sitemap Benefits:
- Better page discovery using sitemap.xml
- More comprehensive website coverage
- Efficient crawling of structured websites
- Perfect for e-commerce, news sites, and content-heavy websites
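For markdown conversion instead of AI extraction, set extractionMode to false (a minimal sketch, assuming prompt and dataSchema may be passed as null in that mode):

const crawlResponse = await crawl(apiKey, 'https://example.com', null, null, {
  extractionMode: false, // markdown conversion, no AI extraction
  depth: 2,
  maxPages: 5,
  sitemap: true
});
console.log('Crawl job started:', crawlResponse);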
Retrieves the status or result of a previous crawl request.
Parameters:
- `apiKey` (string): Your ScrapeGraph AI API key
- `crawlId` (string): The unique identifier for the crawl request
Returns: Promise that resolves to the crawl result object.
Converts a webpage into clean, well-structured markdown format.
Parameters:
- `apiKey` (string): Your ScrapeGraph AI API key
- `url` (string): The URL of the webpage to convert
- `options` (object, optional): Configuration options
  - `stealth` (boolean): Enable stealth mode
  - `renderHeavyJs` (boolean): Whether to render heavy JavaScript
  - `mock` (boolean): Override mock mode for this request
Returns: Promise that resolves to an object containing:
- `result`: Markdown content
- `request_id`: Unique identifier for the request
- `status`: Request status
Retrieves the status or result of a previous markdownify request.
Extracts all URLs from a website's sitemap. Automatically discovers sitemap from robots.txt or common sitemap locations.
Parameters:
- `apiKey` (string): Your ScrapeGraph AI API key
- `websiteUrl` (string): The URL of the website to extract the sitemap from
- `options` (object, optional): Additional options
  - `mock` (boolean): Override mock mode for this request
Returns: Promise resolving to an object containing:
- `urls` (array): List of URLs extracted from the sitemap
- `sitemap_url`: The sitemap URL that was used
Performs automated actions on webpages using step-by-step instructions.
Parameters:
- `apiKey` (string): Your ScrapeGraph AI API key
- `url` (string): The URL of the webpage to interact with
- `steps` (string[]): Array of step-by-step instructions (e.g., ["Type email@gmail.com in email input box", "click on login"])
- `useSession` (boolean, optional): Whether to use a session for the scraping operations (default: true)
- `userPrompt` (string, optional): Prompt for AI extraction (required when aiExtraction=true)
- `outputSchema` (object, optional): Schema for structured data extraction (used with aiExtraction=true)
- `aiExtraction` (boolean, optional): Whether to use AI for data extraction from the scraped content (default: false)
- `options` (object, optional): Additional options
  - `mock` (boolean): Override mock mode for this request
  - `renderHeavyJs` (boolean): Whether to render heavy JavaScript on the page
  - `stealth` (boolean): Enable stealth mode to avoid bot detection
Returns: Promise that resolves to an object containing:
- `request_id`: Unique identifier for the request
- `status`: Request status ('processing', 'completed', 'failed')
Example:
const response = await agenticScraper(
apiKey,
'https://example.com',
['Type text in input', 'Click submit'],
true,
'Extract form data',
schema,
true,
{ stealth: true }
);

Retrieves the status or result of a previous agentic scraper request.
Parameters:
- `apiKey` (string): Your ScrapeGraph AI API key
- `requestId` (string): The unique identifier for the agentic scraper request
Returns: Promise that resolves to the request result object.
Creates a new scheduled job that runs automatically on a cron schedule.
Parameters:
- `apiKey` (string): Your ScrapeGraph AI API key
- `jobName` (string): Name of the scheduled job
- `serviceType` (string): Type of service ('smartscraper', 'searchscraper', 'crawl', 'markdownify', 'scrape')
- `cronExpression` (string): Cron expression for scheduling (e.g., '0 9 * * *' for daily at 9 AM)
- `jobConfig` (object): Configuration for the job (service-specific)
- `isActive` (boolean, optional): Whether the job is active (default: true)
- `options` (object, optional): Additional options
  - `mock` (boolean): Override mock mode for this request
Returns: Promise that resolves to the created job object.
Retrieves a list of all scheduled jobs.
Parameters:
- `apiKey` (string): Your ScrapeGraph AI API key
- `options` (object, optional): Query options
  - `page` (number): Page number (default: 1)
  - `pageSize` (number): Number of jobs per page (default: 10)
Returns: Promise that resolves to an object containing:
- `jobs`: Array of job objects
- `total`: Total number of jobs
Retrieves details of a specific scheduled job.
Updates a scheduled job.
Parameters:
- `updates` (object): Fields to update
  - `jobName` (string, optional): New job name
  - `cronExpression` (string, optional): New cron expression
  - `jobConfig` (object, optional): Updated job configuration
  - `isActive` (boolean, optional): Active status
Pauses a scheduled job.
Resumes a paused scheduled job.
Manually triggers a scheduled job execution.
Returns: Promise that resolves to an object containing:
- `execution_id`: Unique identifier for the execution
- `message`: Status message
Retrieves execution history for a scheduled job.
Parameters:
- `options` (object, optional): Query options
  - `page` (number): Page number (default: 1)
  - `pageSize` (number): Number of executions per page (default: 10)
Returns: Promise that resolves to an object containing:
- `executions`: Array of execution objects
- `total`: Total number of executions
Deletes a scheduled job.
Converts structured data into a visual "toon" format.
Parameters:
- `apiKey` (string): Your ScrapeGraph AI API key
- `data` (object): The data object to be converted to toon format
- `options` (object, optional): Configuration options
  - `mock` (boolean): Override mock mode for this request
Returns: Promise that resolves to the toonified data response.
Enables mock mode globally. All API calls will return mock responses.
Disables mock mode globally.
Sets custom mock responses for specific endpoints.
Parameters:
- `responses` (object): Object mapping endpoint paths to mock responses or functions that return mock responses
Example:
setMockResponses({
'/v1/credits': { remaining_credits: 100 },
'/v1/smartscraper': () => ({ request_id: 'mock-id', status: 'completed' })
});

Sets a custom handler function that overrides all mock responses.
Parameters:
- `handler` (function): Function that takes (method, url) and returns a mock response
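A minimal sketch (assuming this setter is exported as setMockHandler, in line with the other mock utilities shown above):

import { enableMock, setMockHandler } from 'scrapegraph-js';
enableMock();
// Every mocked request is routed through this handler
setMockHandler((method, url) => {
  console.log(`Mocked ${method} ${url}`);
  return { status: 'completed', result: { mocked: true } };
});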
Checks if mock mode is currently enabled.
Returns: Boolean indicating mock mode status.
Retrieves your current credit balance and usage statistics.
Returns: Promise that resolves to an object containing:
- `remaining_credits`: Number of credits remaining
- `total_credits_used`: Total credits used
Submits feedback for a specific request.
Parameters:
- `apiKey` (string): Your ScrapeGraph AI API key
- `requestId` (string): The request ID to provide feedback for
- `rating` (number): Rating from 1 to 5
- `feedbackText` (string): Optional feedback text
Returns: Promise that resolves to the feedback response.
For detailed documentation, visit docs.scrapegraphai.com
- Clone the repository:
  git clone https://github.com/ScrapeGraphAI/scrapegraph-sdk.git
  cd scrapegraph-sdk/scrapegraph-js
- Install dependencies:
  npm install
- Run linting and testing:
  npm run lint
  npm test
# Run all tests
npm test
# Run tests with coverage
npm run test:coverage

This project is licensed under the MIT License - see the LICENSE file for details.
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
- Email: support@scrapegraphai.com
- GitHub Issues: Create an issue
- Feature Requests: Request a feature
Made with ❤️ by ScrapeGraph AI