Need help with a file search engine crawler

I want a web crawler that gathers data continuously from different file hosting services and filters it.

1) The data will be used to provide a file search engine for users. I don't want a clone of any similar website but, in fact, something better.

2) Data gathering will be a continuous process, running on the servers 24x7.

3) Data must be properly filtered to remove duplicate entries, and new entries for the same file should be merged into the existing record.
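One common way to meet the no-duplicates requirement (point 3) in MySQL is a UNIQUE index on the file URL combined with INSERT ... ON DUPLICATE KEY UPDATE, so a re-crawled link merges its new metadata into the existing row instead of creating a duplicate. This is only a sketch: the table name, column names, and merge policy below are illustrative assumptions, not part of the original request.

```sql
-- Illustrative schema; names and types are assumptions.
CREATE TABLE files (
    id          INT UNSIGNED NOT NULL AUTO_INCREMENT,
    url         VARCHAR(255) NOT NULL,      -- link to the hosted file
    host        VARCHAR(64)  NOT NULL,      -- e.g. 'rapidshare.com'
    title       VARCHAR(255) NULL,
    description TEXT         NULL,          -- meta description, if found
    size_bytes  BIGINT UNSIGNED NULL,
    file_date   DATE         NULL,
    last_seen   DATETIME     NOT NULL,
    PRIMARY KEY (id),
    UNIQUE KEY uq_url (url)                 -- duplicates collapse on URL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- Re-crawling the same URL merges new metadata into the existing row;
-- COALESCE keeps the old value when the new crawl found nothing.
INSERT INTO files (url, host, title, description, size_bytes, file_date, last_seen)
VALUES (?, ?, ?, ?, ?, ?, NOW())
ON DUPLICATE KEY UPDATE
    title       = COALESCE(VALUES(title), title),
    description = COALESCE(VALUES(description), description),
    size_bytes  = COALESCE(VALUES(size_bytes), size_bytes),
    file_date   = COALESCE(VALUES(file_date), file_date),
    last_seen   = NOW();
```

The placeholders (`?`) would be bound from PHP with mysqli or PDO prepared statements.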
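The 24x7 gathering in point 2 could be structured as a long-running CLI loop. This is a minimal sketch: `crawl_feeds()` is a hypothetical stand-in for one full pass over the configured XML feeds, and the `$maxPasses` parameter exists only so the sketch can terminate for demonstration; in production it would run unbounded under cron or supervisord so it restarts on crash.

```php
<?php
// Hypothetical stand-in for one full pass over the configured XML feeds:
// fetch each feed, extract file links, and upsert them into MySQL.
function crawl_feeds() {
    return 0; // number of new links found this pass (stub)
}

// Run the crawl loop. $maxPasses = null means run forever (the 24x7 case);
// a finite value only exists to make this sketch terminate.
function run_crawler($maxPasses, $pauseSeconds) {
    set_time_limit(0); // allow long-running CLI execution
    $total = 0;
    for ($pass = 0; $maxPasses === null || $pass < $maxPasses; $pass++) {
        $total += crawl_feeds();
        if ($maxPasses === null || $pass + 1 < $maxPasses) {
            sleep($pauseSeconds); // pause between passes to be polite to hosts
        }
    }
    return $total;
}
```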

I want the crawler itself to be part of the website.

It has to crawl multiple websites from a given XML feed. The crawler must detect all links to files on rapidshare.com, 4shared, uploading, and all other file-sharing hosts. After detection, it must add the links it finds to a database (MySQL preferred), along with any additional information about each file, such as the meta description, size, title, and date, if available.
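The fetch-and-detect step described above could be sketched as below, assuming pages are fetched with cURL and file-host links are matched with a regular expression. The function names and the host list are illustrative assumptions (PHP5-compatible, no scalar type hints); a real crawler would extend the host list and also pull the metadata fields for each link.

```php
<?php
// Fetch a page body over HTTP with cURL; returns '' on failure.
function fetch_page($url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return body as a string
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);
    $html = curl_exec($ch);
    curl_close($ch);
    return $html === false ? '' : $html;
}

// Scan HTML for links pointing at known file-sharing hosts.
// The host list is an example; extend it for other hosts.
function extract_file_links($html) {
    $hosts = array('rapidshare\.com', '4shared\.com', 'uploading\.com');
    $pattern = '#https?://(?:www\.)?(?:' . implode('|', $hosts) . ')/[^\s"\'<>]+#i';
    preg_match_all($pattern, $html, $m);
    return array_values(array_unique($m[0])); // drop duplicates within the page
}
```

Usage would be something like `$links = extract_file_links(fetch_page($pageUrl));`, with each resulting link then upserted into the database.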

Desired skills: PHP5, cURL, MySQL programming.
