# web-scraping
Scrape product listings, articles, tables, and dynamic content into structured data. Use when the agent needs to extract information from web pages or crawl multiple pages.
License: MIT

## Install

```bash
npx skills add browser-use/browser-skills --skill web-scraping
```

Extract structured data from any website using the browser-use Python SDK. Handles single-page extraction and multi-page crawls with Pydantic models for typed output. Includes stealth mode, CAPTCHA solving, and parallel sessions.

Install the SDK:

```bash
pip install browser-use-sdk
```
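Typed output works by validating the scraped data against a Pydantic model. As a minimal sketch (the `Product` schema here is hypothetical, not part of the SDK), defining and validating a model looks like this:

```python
from pydantic import BaseModel


# Hypothetical schema for illustration; define whatever fields your target pages have.
class Product(BaseModel):
    name: str
    price: float
    in_stock: bool


class ProductList(BaseModel):
    products: list[Product]


# Raw data as it might come back from an extraction task.
raw = {"products": [{"name": "Widget", "price": 9.99, "in_stock": True}]}

# Pydantic coerces and validates the payload into typed objects.
result = ProductList.model_validate(raw)
print(result.products[0].name)  # Widget
```

Passing a model like this as the task's output schema is what gives you structured, typed results instead of free-form text.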
## Prerequisites
Set your Browser Use API key before running any examples. Get one at cloud.browser-use.com.
```bash
export BROWSER_USE_API_KEY=your_key
```
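In a script, it is worth failing fast when the key is missing rather than hitting an authentication error mid-run. A small sketch (the helper name is our own, not an SDK function):

```python
import os


def require_api_key(env=os.environ) -> str:
    """Return the Browser Use API key, or raise a clear error if it is unset."""
    key = env.get("BROWSER_USE_API_KEY", "")
    if not key:
        raise RuntimeError(
            "BROWSER_USE_API_KEY is not set; export it before running examples."
        )
    return key
```

Call `require_api_key()` at startup so misconfiguration surfaces immediately with an actionable message.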
## References
| Topic | Reference | Use for |
|---|---|---|
| Extraction Patterns | extraction-patterns.md | Scraping lists, tables, paginated results, and infinite scroll content |
| Anti-Detection | anti-detection.md | Handling bot detection, rate limiting, CAPTCHAs, and stealth techniques |
| Structured Output | structured-output.md | Formatting extracted data as JSON, CSV, or other structured formats |
| Dynamic Content | dynamic-content.md | Handling JS-rendered content, SPAs, lazy loading, and client-side routing |
| Multi-Page Crawling | multi-page-crawling.md | Following links, crawling sitemaps, depth-limited traversal |
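The depth-limited traversal mentioned in the Multi-Page Crawling row can be sketched independently of the SDK as a breadth-first walk over a link graph (here the page fetch is mocked with a dictionary; in practice `get_links` would extract hrefs from a live page):

```python
from collections import deque
from typing import Callable


def crawl(start: str, get_links: Callable[[str], list[str]], max_depth: int) -> list[str]:
    """Breadth-first, depth-limited traversal; returns pages in visit order."""
    seen = {start}
    order = []
    queue = deque([(start, 0)])
    while queue:
        url, depth = queue.popleft()
        order.append(url)
        if depth == max_depth:
            continue  # at the depth limit: record the page but follow no links
        for link in get_links(url):
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return order


# Mock link graph standing in for real page fetches.
graph = {
    "/": ["/a", "/b"],
    "/a": ["/c"],
    "/b": [],
    "/c": [],
}
print(crawl("/", lambda u: graph.get(u, []), max_depth=1))  # ['/', '/a', '/b']
```

The `seen` set prevents revisiting pages when links form cycles, and the depth counter bounds how far the crawl fans out from the start URL.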