Distributed Web Crawling is an application that serves search results from distributed servers based on a user's search string. It runs multiple servers, each responsible for holding information about a set of university websites; each server is placed by proximity to a specific university. Crawling takes place in a distributed manner, and fetched results are stored in secondary stable storage for high reliability. Users can search for any string and receive results from different web pages in the form of URLs. The application is also multi-threaded, so it can serve multiple clients at the same time. The design addresses standard distributed-computing concerns, namely the interaction, failure, and security models.
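
The repository blurb does not include code, so here is a minimal sketch of the multi-threaded request handling described above: a server that accepts many clients concurrently and answers each search string with matching URLs. Java, the port number, the thread-pool size, and the in-memory index contents are all illustrative assumptions, not the repository's actual implementation.

```java
// Minimal sketch of a multi-threaded search front end (assumed design,
// not the repository's actual code).
import java.io.*;
import java.net.*;
import java.util.*;
import java.util.concurrent.*;

public class SearchServer {
    // Hypothetical in-memory index: search term -> URLs crawled from
    // university sites. In the real system this would be loaded from the
    // secondary stable storage that the distributed crawlers write to.
    private static final ConcurrentMap<String, List<String>> INDEX =
            new ConcurrentHashMap<>();

    public static void main(String[] args) throws IOException {
        INDEX.put("admissions",
                List.of("http://example-university.edu/admissions"));

        ExecutorService pool = Executors.newFixedThreadPool(8);
        try (ServerSocket server = new ServerSocket(5000)) {
            while (true) {
                Socket client = server.accept();
                // One task per client, so multiple clients are served at once.
                pool.execute(() -> handle(client));
            }
        }
    }

    private static void handle(Socket client) {
        try (client;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String query = in.readLine();  // one search string per request
            List<String> urls = INDEX.getOrDefault(
                    query == null ? "" : query.toLowerCase(), List.of());
            if (urls.isEmpty()) {
                out.println("NO RESULTS");
            } else {
                urls.forEach(out::println);  // return matching URLs to the client
            }
        } catch (IOException e) {
            System.err.println("Client error: " + e.getMessage());
        }
    }
}
```

A fixed-size thread pool (rather than one thread per connection) is one common way to serve many clients concurrently while bounding resource use; the actual project may use a different concurrency scheme.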