Distributed-Web-Crawling

Distributed Web Crawling is an application that returns search results from distributed servers based on a user's search string. It uses multiple servers, each responsible for holding specific information about a set of university websites; each server is assigned according to its proximity to a particular university. Crawling takes place in a distributed manner, and the fetched results are stored in secondary stable storage for high reliability. Users can search for any string and receive results from different web pages in the form of URLs. The application also supports multi-threading, so that multiple clients can be served at the same time. In our application, we have addressed distributed-computing concerns such as the interaction model, failure model, and security models.
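The architecture above can be sketched in a few lines. This is a minimal, hypothetical illustration (not the project's actual code, whose language and APIs are not shown here): a crawl index persisted as JSON to simulate the secondary stable storage, a `search` function that maps a query string to matching URLs, and a threaded TCP server so multiple clients can query concurrently. All names (`crawl_index.json`, `search`, `ThreadedSearchServer`) are assumptions for the sketch.

```python
import json
import socketserver
from pathlib import Path

# Hypothetical stable-storage file: each crawled term maps to the list of
# URLs where it was found. Persisting to disk lets results survive a crash.
INDEX_FILE = Path("crawl_index.json")

def load_index():
    """Reload the crawl results from stable storage (empty if none yet)."""
    if INDEX_FILE.exists():
        return json.loads(INDEX_FILE.read_text())
    return {}

def search(index, query):
    """Return every stored URL whose indexed term contains the query string."""
    q = query.lower()
    return sorted({url for term, urls in index.items()
                   if q in term.lower() for url in urls})

class SearchHandler(socketserver.StreamRequestHandler):
    # ThreadingTCPServer runs handle() on a fresh thread per connection,
    # which is how multiple clients are served at the same time.
    def handle(self):
        query = self.rfile.readline().decode().strip()
        results = search(self.server.index, query)
        self.wfile.write((json.dumps(results) + "\n").encode())

class ThreadedSearchServer(socketserver.ThreadingTCPServer):
    allow_reuse_address = True

    def __init__(self, addr, index):
        super().__init__(addr, SearchHandler)
        self.index = index  # shared, read-only index of crawl results

if __name__ == "__main__":
    # Example: serve whatever the crawlers have stored so far.
    server = ThreadedSearchServer(("0.0.0.0", 9000), load_index())
    server.serve_forever()
```

A real deployment would add the pieces the description mentions: per-university crawler processes that write into the shared index, and handling for the failure and security models (timeouts, retries, authenticated clients).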
