ahCrawler is a toolset to run your own search on your website and an analyzer for your web content.
It consists of:

  • crawler (spider) and indexer
  • search for your website
  • search statistics
  • website analyzer (SSL check, HTTP headers, short titles and keywords, link checker, ...)

You install it on your own server, so all crawled data stay in your environment. After you fix something, rescanning your website to update the search index or analyzer data is under your control.

License: GNU GPL v3.0

The project is hosted on GitHub: ahcrawler


Requirements

  • any web server with PHP 5.5 up to PHP 8.0 (PHP 7.3+ or PHP 8 is strongly recommended)
  • php-curl (may be included in php-common)
  • php-pdo plus a database extension (sqlite or mysql)
  • php-mbstring
  • php-xml
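The list above can be verified from the command line. This is a sketch, assuming the php CLI is on your PATH; extension and package names may differ per distribution:

```shell
# Sketch: check which of the required PHP extensions are loaded
# (assumption: the "php" CLI is installed and on PATH).
missing=""
for ext in curl pdo mbstring xml; do
  # "php -m" lists loaded modules, one per line; match case-insensitively
  php -m 2>/dev/null | grep -qi "^${ext}$" || missing="$missing $ext"
done
if [ -z "$missing" ]; then
  echo "all required PHP extensions present"
else
  echo "missing:$missing"
fi
```

On a shared hoster without shell access, a phpinfo() page shows the same module information.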

Last Updates

  • 2021-04-14: v0.144
    • ADDED: chart of the load time over all pages on the start page
    • ADDED: HTTP header check for the HTTP version; if it is below HTTP/2 you get a warning

  • 2021-04-10: v0.143
    • FIX: usage of local vendor libs
    • FIX: public service pages did not work with a set internal auth user

  • 2021-03-19: v0.142
    • ADDED: software updater script; see php cronscripts/updater.php -h
    • ADDED: the updater verifies an MD5 checksum of the download file
    • UPDATE: setup page in backend shows checkboxes to activate menu items
    • UPDATE lib: pure -> 2.0.5
    • UPDATE lib: font-awesome -> 5.14.0
    • UPDATE lib: jquery -> 3.6.0
    • FIX/UPDATE: typos in English texts; updated texts for setup

  • 2021-01-08: v0.141
    • UPDATE: cronscript supports update and single profiles

  • 2020-12-30: v0.140
    • UPDATE: crawling processes
    • UPDATE: the CLI action "update" uses GET requests to handle errors caused by servers denying HTTP HEAD requests
    • FIX: remove a var_dump output in crawling process
    • FIX: remove context box in about page


So, why did I write this tool?

The starting point was to write a crawler and website search in PHP as a replacement for Sphider, which was discontinued and had open bugs.

On my domain I don't use just a CMS ... I also have a blog tool and several small applications for photos, demos, ... That's why the internal search of a CMS is not enough: I wanted a search index covering the content of all these different tools.
So I wrote a crawler and an html parser to index my pages...

Once I had a spider ... and an HTML parser ... and all the content information ... the next logical step was to add more checks for my website. There are many possibilities: link checking, metadata checks, HTTP response headers, SSL information ...

You can install it on a shared hoster or your own server. All data and all timers are under your control. If you fix something, just reindex and immediately check whether the error is gone.

Features

  • written in PHP; usable on shared hosters
  • support for multiple websites in a single backend
  • spider to index content: CLI / cronjob
  • verify search content
  • review the search terms entered by your visitors
  • analyze HTTP response headers
  • analyze SSL certificates
  • analyze HTML metadata: title, keywords, loading time
  • link checker
  • built-in web updater
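For the "CLI / cronjob" and updater items above, a crontab could look like the sketch below. Only cronscripts/updater.php is named in this document; the crawler entry-point path is an assumption, so check your installation for the actual script names:

```shell
# crontab sketch -- all paths are assumptions for illustration.
# run the software updater weekly (script named in the changelog above)
15 3 * * 0  php /var/www/ahcrawler/cronscripts/updater.php
# reindex nightly (hypothetical script name -- verify in your installation)
30 2 * * *  php /var/www/ahcrawler/cronscripts/crawler.php
```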


Just to get a few first impressions ... :-)

Backend - statistics of the entered search terms

[Screenshot: ahcrawler :: backend]

Backend - analysis of the website

[Screenshot: ahcrawler :: backend]

Copyright © 2015-2021 Axel Hahn
project page: GitHub (en)
Axel's website (de)