Add Abuse Moderation #16
Comments
Maybe add some sort of rate limiting for creating new links, and set up security threshold alerts so that overly popular links can be examined.
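One possible shape for this, sketched in Go (the project language is an assumption here, as are names like `CreateLimiter` and `alertFn`): a per-user token bucket for link creation plus a simple click-count threshold alert.

```go
// Hypothetical sketch: per-creator rate limiting for link creation and a
// click-count threshold alert. Names are illustrative, not project code.
package moderation

import (
	"sync"

	"golang.org/x/time/rate"
)

// CreateLimiter hands out one token-bucket limiter per creator.
type CreateLimiter struct {
	mu       sync.Mutex
	limiters map[string]*rate.Limiter
	r        rate.Limit // e.g. rate.Every(time.Minute) for one link per minute
	burst    int
}

func NewCreateLimiter(r rate.Limit, burst int) *CreateLimiter {
	return &CreateLimiter{limiters: map[string]*rate.Limiter{}, r: r, burst: burst}
}

// Allow reports whether creatorID may create another link right now.
func (c *CreateLimiter) Allow(creatorID string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	l, ok := c.limiters[creatorID]
	if !ok {
		l = rate.NewLimiter(c.r, c.burst)
		c.limiters[creatorID] = l
	}
	return l.Allow()
}

// CheckClickThreshold fires an alert once a link's click count crosses the
// configured threshold so a moderator can take a closer look.
func CheckClickThreshold(linkID string, clicks, threshold int, alertFn func(linkID string, clicks int)) {
	if clicks == threshold { // fire exactly once when the threshold is crossed
		alertFn(linkID, clicks)
	}
}
```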
Another possible URL check tool: https://www.virustotal.com/gui/home/url (API link). This might be the best option because it aggregates and checks reports from many different services at once.
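For reference, a rough Go sketch of the VirusTotal v3 flow (submit the URL, then poll the analysis). The endpoint paths and response fields are taken from the public v3 docs as I recall them, so they should be double-checked against the current API reference; Go and the `ScanURL` name are assumptions.

```go
// Sketch of a VirusTotal v3 URL scan: submit the URL, poll the analysis,
// and flag it if any engine reports it as malicious or suspicious.
package vtcheck

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"strings"
	"time"
)

const vtBase = "https://www.virustotal.com/api/v3"

type analysisResponse struct {
	Data struct {
		Attributes struct {
			Status string `json:"status"`
			Stats  struct {
				Malicious  int `json:"malicious"`
				Suspicious int `json:"suspicious"`
			} `json:"stats"`
		} `json:"attributes"`
	} `json:"data"`
}

// ScanURL submits target for analysis and polls the result a few times.
func ScanURL(apiKey, target string) (bool, error) {
	// Submit the URL for analysis (form-encoded "url" field).
	form := url.Values{"url": {target}}
	req, _ := http.NewRequest("POST", vtBase+"/urls", strings.NewReader(form.Encode()))
	req.Header.Set("x-apikey", apiKey)
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	var submit struct {
		Data struct {
			ID string `json:"id"`
		} `json:"data"`
	}
	err = json.NewDecoder(resp.Body).Decode(&submit)
	resp.Body.Close()
	if err != nil {
		return false, err
	}

	// Poll the analysis, waiting between attempts to respect the request budget.
	for attempt := 0; attempt < 3; attempt++ {
		time.Sleep(15 * time.Second)
		pollReq, _ := http.NewRequest("GET", vtBase+"/analyses/"+submit.Data.ID, nil)
		pollReq.Header.Set("x-apikey", apiKey)
		pollResp, err := http.DefaultClient.Do(pollReq)
		if err != nil {
			return false, err
		}
		var result analysisResponse
		err = json.NewDecoder(pollResp.Body).Decode(&result)
		pollResp.Body.Close()
		if err != nil {
			return false, err
		}
		if result.Data.Attributes.Status == "completed" {
			stats := result.Data.Attributes.Stats
			return stats.Malicious > 0 || stats.Suspicious > 0, nil
		}
	}
	return false, fmt.Errorf("analysis did not complete within the retry budget")
}
```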
Maybe add a feature to supply an ntfy server to receive notifications for events like new user registrations and when a link is blocked or reported.
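A minimal sketch of the ntfy idea, assuming the plain publish API (POST the message body to `<server>/<topic>` with optional Title/Tags headers); the `Notifier` type and its fields are illustrative, and the server and topic would come from configuration.

```go
// Sketch of publishing admin events to a configurable ntfy server.
package notify

import (
	"fmt"
	"net/http"
	"strings"
)

type Notifier struct {
	Server string // e.g. "https://ntfy.sh" or a self-hosted instance
	Topic  string // e.g. "shortener-admin"
}

// Publish sends a single event message such as "new user registered"
// or "link abc123 was blocked by the URL check".
func (n Notifier) Publish(title, message string) error {
	req, err := http.NewRequest("POST", fmt.Sprintf("%s/%s", n.Server, n.Topic), strings.NewReader(message))
	if err != nil {
		return err
	}
	req.Header.Set("Title", title)
	req.Header.Set("Tags", "warning")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("ntfy publish failed: %s", resp.Status)
	}
	return nil
}
```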
I need to think about how many tries I want to allow for the VirusTotal request, or maybe increase the delay, because even if it finishes in time for the first check that's still two calls per create or edit action. I might also want to consider whether I need to buffer these requests, because I don't know if the 4 requests per minute limit is strictly enforced or not. It might also be wise to store, mark, or notify someone about links that could not be checked due to timeouts or other request failures.
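One way to handle that budget is to put a shared limiter in front of every VirusTotal call and record links that could not be checked, roughly like this (Go assumed; `CheckWithBudget`, `PendingStore`, and the reuse of the `ScanURL` sketch above are all illustrative, not existing project code):

```go
// Keep at or below 4 requests/minute and don't lose track of unchecked links.
package vtcheck

import (
	"context"
	"time"

	"golang.org/x/time/rate"
)

// One token every 15s keeps us at or below 4 VirusTotal calls per minute.
var vtLimiter = rate.NewLimiter(rate.Every(15*time.Second), 1)

type PendingStore interface {
	MarkUnchecked(linkID, reason string) error // surface for later manual review
}

// CheckWithBudget blocks until a request slot is free, then scans the URL.
// If the scan times out or fails, the link is recorded for manual review
// instead of being silently allowed or blocked.
func CheckWithBudget(ctx context.Context, store PendingStore, apiKey, linkID, target string) (bool, error) {
	if err := vtLimiter.Wait(ctx); err != nil { // buffered: waits for a free slot
		return false, err
	}
	flagged, err := ScanURL(apiKey, target)
	if err != nil {
		_ = store.MarkUnchecked(linkID, err.Error())
		return false, err
	}
	return flagged, nil
}
```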
Created a new branch to address this issue, starting with commit 61da33e, which offers a quick first implementation that scans URLs using the VirusTotal API. I will want to try to clean this up some and add rate limiting for API calls. Maybe abstract this process into an interface so there could be other URL check implementations, or even multiple check steps, in the future. I might also want to add a dedicated page on the frontend for when a link is blocked, a clear way to report links, and a way to file an appeal when your link gets blocked. Currently this VirusTotal system is not perfect and will produce a lot of false positives; for example, during testing it flagged amazon.com. The system might also miss some bad links because they are constantly changing. Ultimately, this might be the best I can do for the time being without taking a very extreme approach.
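A sketch of what that abstraction could look like (Go assumed, names illustrative): each check step implements the same contract, and a small pipeline runs them in order so VirusTotal, Safe Browsing, or a local blocklist can be swapped in or combined.

```go
// Sketch of a pluggable URL-check pipeline.
package urlcheck

import "context"

// Verdict is what a single check step reports back.
type Verdict struct {
	Flagged bool
	Reason  string // e.g. "VirusTotal: 3 engines marked malicious"
}

// Checker is one URL-reputation step; implementations might call
// VirusTotal, Google Safe Browsing, or a static blocklist.
type Checker interface {
	Check(ctx context.Context, rawURL string) (Verdict, error)
}

// Pipeline runs every checker and stops at the first flag.
type Pipeline struct {
	Steps []Checker
}

func (p Pipeline) Check(ctx context.Context, rawURL string) (Verdict, error) {
	for _, step := range p.Steps {
		v, err := step.Check(ctx, rawURL)
		if err != nil {
			return Verdict{}, err
		}
		if v.Flagged {
			return v, nil
		}
	}
	return Verdict{}, nil
}
```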
Rate limiting API calls is going to be important.
To fix the production demo, I will want to add some features that allow me to better moderate and block bad actors trying to abuse the demo service.
The biggest thing would be adding checks on the destination links users submit to ensure they are not malicious/harmful scams. This can be accomplished by checking these links against a known list using either the Google Safe Browsing API (Demo) or the Google Web Risk API. If a user is found to be abusing the service with a harmful link, there should be a way to configure the response behavior, with multiple different options.
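For comparison with the VirusTotal sketch above, here is a minimal Safe Browsing v4 Lookup call (`threatMatches:find`). The request and response field names follow the public v4 documentation as I understand it and should be verified before use; Go, the client ID, and the function name are placeholders.

```go
// Sketch of a Google Safe Browsing v4 lookup for a single destination URL.
package urlcheck

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

const sbEndpoint = "https://safebrowsing.googleapis.com/v4/threatMatches:find"

type sbRequest struct {
	Client struct {
		ClientID      string `json:"clientId"`
		ClientVersion string `json:"clientVersion"`
	} `json:"client"`
	ThreatInfo struct {
		ThreatTypes      []string            `json:"threatTypes"`
		PlatformTypes    []string            `json:"platformTypes"`
		ThreatEntryTypes []string            `json:"threatEntryTypes"`
		ThreatEntries    []map[string]string `json:"threatEntries"`
	} `json:"threatInfo"`
}

// LookupSafeBrowsing returns true if Safe Browsing reports a match for target.
func LookupSafeBrowsing(apiKey, target string) (bool, error) {
	var body sbRequest
	body.Client.ClientID = "linkshortener-demo" // placeholder client name
	body.Client.ClientVersion = "1.0"
	body.ThreatInfo.ThreatTypes = []string{"MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"}
	body.ThreatInfo.PlatformTypes = []string{"ANY_PLATFORM"}
	body.ThreatInfo.ThreatEntryTypes = []string{"URL"}
	body.ThreatInfo.ThreatEntries = []map[string]string{{"url": target}}

	payload, _ := json.Marshal(body)
	resp, err := http.Post(fmt.Sprintf("%s?key=%s", sbEndpoint, apiKey), "application/json", bytes.NewReader(payload))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	var result struct {
		Matches []json.RawMessage `json:"matches"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return false, err
	}
	return len(result.Matches) > 0, nil
}
```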
In addition, I think it could be valuable to add the ability to disable/block accounts with a message for the user. Maybe add additional tracking for things like click times. I could also block links that point to other link shorteners (see the sketch below). Maybe we could make use of reCAPTCHA here as well. A more extreme approach may be to make the demo more temporary, where everything only lasts a short time before it gets removed, but this would be an additional measure on top of the other security improvements above.
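The "block links to other link shorteners" idea could be as simple as a host blocklist on the destination URL (Go assumed); the domain list below is a tiny illustrative example, not a maintained list.

```go
// Reject destinations that point at other link shorteners, which are often
// used to hide the real target behind another layer of redirection.
package urlcheck

import (
	"net/url"
	"strings"
)

var shortenerHosts = map[string]bool{
	"bit.ly":      true,
	"tinyurl.com": true,
	"t.co":        true,
	"goo.gl":      true,
}

// IsOtherShortener reports whether rawURL resolves to a known shortener host.
func IsOtherShortener(rawURL string) bool {
	u, err := url.Parse(rawURL)
	if err != nil {
		return false
	}
	host := strings.ToLower(strings.TrimPrefix(u.Hostname(), "www."))
	return shortenerHosts[host]
}
```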
Another, more extreme approach would be to put the demo in a demo mode behind a feature flag that doesn't actually redirect to the URL, but simply displays a page showing the URL and explaining that it won't redirect due to abuse concerns. This demo mode could also offer several other restricting features like fast-expiring links or one-time use, IDK.
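A rough sketch of that demo mode, assuming a standard net/http handler and a boolean feature flag; the handler wiring, lookup function, and inline HTML are all illustrative.

```go
// Behind a feature flag, show the destination on an interstitial page
// instead of actually redirecting, to blunt abuse of the public demo.
package handlers

import (
	"fmt"
	"html"
	"net/http"
)

var demoMode = true // would come from config / a feature flag in practice

func RedirectHandler(lookup func(slug string) (string, bool)) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		slug := r.URL.Path[1:]
		dest, ok := lookup(slug)
		if !ok {
			http.NotFound(w, r)
			return
		}
		if demoMode {
			// Show the destination without following it.
			w.Header().Set("Content-Type", "text/html; charset=utf-8")
			fmt.Fprintf(w, "<p>This demo does not redirect. The stored destination is:</p><p>%s</p>",
				html.EscapeString(dest))
			return
		}
		http.Redirect(w, r, dest, http.StatusFound)
	}
}
```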