This FAQ was generated by CrawlFAQs itself

Frequently Asked Questions

Everything you need to know about CrawlFAQs. Can't find the answer you're looking for? Get in touch with our team.

Eating our own dogfood

Yes, we used CrawlFAQs to generate this FAQ page

We pointed CrawlFAQs at our own application and let it crawl our dashboard, landing page, and documentation. The result is the comprehensive FAQ you see on this page.

The AI analyzed our UI, understood our features, and created questions that real users would actually ask. We then refined and expanded the content, which is exactly how we expect our users to work with generated docs.

Pages crawled: 3 · Articles generated: 3 · FAQs curated: 15 · Total time: <2m

How do I create my first project?

After signing up, click 'New Project' from your dashboard. Enter your application's URL and an optional description. CrawlFAQs will automatically detect your app type and configure the crawler settings for optimal results.

What kinds of applications can CrawlFAQs crawl?

CrawlFAQs can crawl virtually any web application: single-page applications (SPAs) built with React, Vue, Angular, or Svelte, traditional server-rendered apps, and hybrid applications. Our Playwright-powered crawler handles JavaScript-heavy sites, dynamic content loading, and complex navigation patterns.
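
To make "handles JavaScript-heavy sites" concrete, here is a minimal Playwright sketch that waits for client-side rendering to settle before capturing an SPA page. This illustrates the general technique, not CrawlFAQs' internal code, and the URL is a placeholder.

```typescript
// Minimal sketch: capture a JavaScript-heavy page with Playwright.
import { chromium } from "playwright";

async function captureSpaPage(url: string) {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  // Wait until network activity settles so dynamically loaded
  // content is present before we read anything from the page.
  await page.goto(url, { waitUntil: "networkidle" });

  const title = await page.title();
  const screenshot = await page.screenshot({ fullPage: true });

  await browser.close();
  return { title, screenshot };
}

// Placeholder URL for illustration only.
captureSpaPage("https://example.com").then(({ title }) => console.log(title));
```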

Do I need to install anything?

No installation required. CrawlFAQs is a fully cloud-based solution. Simply sign up, add your project URL, and start generating documentation. All crawling and AI processing happens on our servers.

How does the AI understand my application?

CrawlFAQs uses GPT-4 Vision to analyze screenshots of your application. The AI identifies UI elements, understands their purpose, reads visible text, recognizes patterns, and extracts meaningful facts about how users interact with each page. This visual understanding enables us to generate contextually accurate documentation.
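
As a rough illustration of that screenshot-analysis step, the sketch below sends a screenshot to a vision-capable model through the OpenAI Node SDK and asks it to describe the UI. The model name and prompt are assumptions made for the example; CrawlFAQs' actual prompts and pipeline are not public.

```typescript
// Illustrative sketch of vision-based UI analysis (not CrawlFAQs' code).
import OpenAI from "openai";
import { readFileSync } from "node:fs";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function extractFacts(screenshotPath: string): Promise<string> {
  const base64 = readFileSync(screenshotPath).toString("base64");

  const response = await openai.chat.completions.create({
    model: "gpt-4o", // assumed vision-capable model for this example
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text: "List the UI elements on this page and what a user can do with each.",
          },
          {
            type: "image_url",
            image_url: { url: `data:image/png;base64,${base64}` },
          },
        ],
      },
    ],
  });

  return response.choices[0].message.content ?? "";
}
```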

How does the crawler work?

When you start a crawl, our Playwright-powered browser navigates through your application starting from the URL you provide. It discovers links, captures screenshots at each page, records UI interactions, and builds a comprehensive map of your application's structure. You can configure depth limits and URL patterns to control the scope.
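
A stripped-down version of that flow looks roughly like the breadth-first crawl below, with a configurable depth limit and same-origin link discovery. Treat it as a sketch of the idea; a production crawler also needs URL-pattern filters, rate limiting, and error handling.

```typescript
// Sketch: breadth-first crawl with a depth limit, using Playwright.
import { chromium } from "playwright";

async function crawl(startUrl: string, maxDepth: number) {
  const browser = await chromium.launch();
  const visited = new Set<string>();
  const queue = [{ url: startUrl, depth: 0 }];

  while (queue.length > 0) {
    const { url, depth } = queue.shift()!;
    if (visited.has(url) || depth > maxDepth) continue;
    visited.add(url);

    const page = await browser.newPage();
    await page.goto(url, { waitUntil: "networkidle" });
    await page.screenshot({ path: `page-${visited.size}.png` });

    // Discover same-origin links and enqueue them one level deeper.
    const links = await page.$$eval("a[href]", (anchors) =>
      anchors.map((a) => (a as HTMLAnchorElement).href)
    );
    for (const link of links) {
      if (new URL(link).origin === new URL(startUrl).origin) {
        queue.push({ url: link, depth: depth + 1 });
      }
    }
    await page.close();
  }

  await browser.close();
  return visited; // the set of unique URLs that were crawled
}
```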

How long does generation take?

Generation time depends on your app's size. A typical 10-page application takes about 2-3 minutes to crawl and another 1-2 minutes to generate documentation. Larger applications with 50+ pages may take 10-15 minutes. You can monitor progress in real-time from your dashboard.

Can CrawlFAQs crawl pages behind a login?

Yes! You can add credentials to your project for authenticated crawling. CrawlFAQs securely stores your login information using AES-256 encryption and automatically handles the login flow before crawling protected pages.
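
For readers curious what AES-256 encryption of a stored credential looks like in practice, here is a sketch using Node's built-in crypto module with AES-256-GCM. Key management is simplified to a single in-memory key for illustration; how CrawlFAQs actually manages keys is not public.

```typescript
// Sketch: AES-256-GCM encryption of a credential with Node's crypto module.
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

function encryptCredential(plaintext: string, key: Buffer) {
  const iv = randomBytes(12); // unique nonce for every encryption
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, authTag: cipher.getAuthTag() };
}

function decryptCredential(
  enc: { iv: Buffer; ciphertext: Buffer; authTag: Buffer },
  key: Buffer
) {
  const decipher = createDecipheriv("aes-256-gcm", key, enc.iv);
  decipher.setAuthTag(enc.authTag); // rejects tampered ciphertext
  return Buffer.concat([decipher.update(enc.ciphertext), decipher.final()]).toString("utf8");
}

const key = randomBytes(32); // 32 bytes = AES-256
const enc = encryptCredential("hunter2", key);
console.log(decryptCredential(enc, key)); // "hunter2"
```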

What types of documentation does CrawlFAQs generate?

CrawlFAQs generates three main types of documentation: FAQs (question-and-answer pairs addressing common user queries), Help Articles (comprehensive guides explaining features and workflows), and Tutorials (step-by-step instructions for specific tasks). Each type is optimized for different user needs.

Can I edit the generated content?

Absolutely. After generation, you have full control to edit, reorganize, and refine all content. You can adjust the tone, add custom sections, merge or split articles, and regenerate specific pieces with different prompts. The editor supports rich markdown formatting.

What export formats are supported?

CrawlFAQs exports to Markdown, HTML, and JSON formats. This makes it easy to integrate with documentation platforms like GitBook, Docusaurus, Notion, ReadMe, or your own custom documentation site. The JSON format includes full metadata for programmatic use.
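
The exact JSON schema isn't documented on this page, but a hypothetical export shape with metadata could look like the TypeScript interface below. Every field name here is an illustrative assumption, not CrawlFAQs' published format.

```typescript
// Hypothetical JSON export shape; field names are assumptions.
interface FaqExport {
  project: string;
  generatedAt: string;   // ISO 8601 timestamp
  sourcePages: string[]; // URLs the content was derived from
  items: {
    type: "faq" | "article" | "tutorial";
    question?: string;   // present for FAQ items
    title: string;
    bodyMarkdown: string;
  }[];
}
```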

How do you keep my data secure?

Security is our top priority. All credentials are encrypted with AES-256. Crawling happens in isolated container environments that are destroyed after each session. Screenshots are processed and then deleted. We're SOC 2 compliant and never share your data with third parties.

What data does CrawlFAQs store?

CrawlFAQs stores only the extracted facts and generated documentation, not your actual application data. Screenshots are processed by AI and immediately deleted. We retain your documentation until you delete your project, giving you full control over your data lifecycle.

Can CrawlFAQs crawl internal applications?

Yes. CrawlFAQs can crawl internal applications as long as they are reachable from the internet. For applications behind firewalls, we offer an Enterprise plan with self-hosted crawler agents that run within your infrastructure while still using our cloud-based AI processing.

What does the free plan include?

The free plan includes 1 project, up to 50 pages crawled per month, and basic export functionality. It's perfect for side projects, personal websites, or evaluating CrawlFAQs before upgrading.

How are pages counted?

A 'page' is counted each time our crawler visits a unique URL during a crawl session. If you crawl the same page in multiple sessions, it counts each time. Duplicate URLs within a single crawl are only counted once.
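
Put another way, the rule deduplicates within a session but not across sessions. A tiny sketch of that counting logic, with placeholder URLs:

```typescript
// Unique URLs within one crawl session count once each.
function countBillablePages(sessionUrls: string[]): number {
  return new Set(sessionUrls).size;
}

// The same three pages crawled in two sessions bill 3 + 3 = 6 pages,
// even though session1 revisited one URL within its own run.
const session1 = ["https://app.example/a", "https://app.example/b", "https://app.example/a", "https://app.example/c"];
const session2 = ["https://app.example/a", "https://app.example/b", "https://app.example/c"];
console.log(countBillablePages(session1) + countBillablePages(session2)); // 3 + 3 = 6
```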

Can I change my plan later?

Yes, you can change your plan at any time. When upgrading, you get immediate access to new features and limits. When downgrading, changes take effect at your next billing cycle. Unused page credits don't roll over between months.

Still have questions?

Can't find the answer you're looking for? Our team is here to help.