
Skip Folia's EntityScheduler executeTick checks if the scheduler hasn't been used at least once #9984

Conversation

MrPowerGamerBR
Contributor

@MrPowerGamerBR MrPowerGamerBR commented Nov 29, 2023

This PR is mostly a "hey, maybe there is a better solution for this" PR. I don't expect it to be accepted, since it is a hacky hack that probably has concurrency issues (it reads hasScheduledAtLeastOneTask outside of a synchronized block) and there is probably a better way to do it.

Folia's EntityScheduler executeTick loop in MinecraftServer's tickChildren is slow if you have a lot of entities (for example, on a server with 15k entities it takes ~1.7ms every tick, even if none of your plugins use Folia's EntityScheduler).

To work around this, we simply skip all of those checks if the scheduler has never been used. Yes, I know the scheduler normally increments its tickCount every tick, but that doesn't matter: scheduled tasks are relative to the scheduler's own tick count, not bound to the server's tick count.
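Roughly, the idea looks like this (a minimal standalone sketch, not Folia's actual code; only the hasScheduledAtLeastOneTask name comes from this PR, everything else is illustrative):

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

final class EntityScheduler {
    // Flipped the first time any task is scheduled. Declared volatile here so
    // the unsynchronized read in the tick loop sees the update.
    private volatile boolean hasScheduledAtLeastOneTask = false;

    private final Queue<Runnable> tasks = new ArrayDeque<>();

    synchronized void schedule(Runnable task) {
        this.hasScheduledAtLeastOneTask = true;
        this.tasks.add(task);
    }

    boolean hasScheduledAtLeastOneTask() {
        return this.hasScheduledAtLeastOneTask;
    }

    synchronized void executeTick() {
        // Stand-in for the real per-tick bookkeeping (retire checks, tick
        // count increment, running due tasks).
        Runnable task;
        while ((task = this.tasks.poll()) != null) {
            task.run();
        }
    }
}

final class TickLoopSketch {
    // Stand-in for the per-entity loop in MinecraftServer#tickChildren.
    static void tickSchedulers(List<EntityScheduler> schedulersOfAllEntities) {
        for (EntityScheduler scheduler : schedulersOfAllEntities) {
            if (!scheduler.hasScheduledAtLeastOneTask()) {
                continue; // never used by any plugin: skip executeTick entirely
            }
            scheduler.executeTick();
        }
    }
}
```

In this sketch the flag is volatile, which is one way to make the unsynchronized read in the tick loop safe; the actual patch reads the field without that guarantee, which is the concurrency concern mentioned above.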

The big performance boost comes from not needing to do all the checks executeTick does. Most server entities will never use an entity scheduler anyway; just think about it: are plugins really going to add a scheduler to each item frame, each armor stand, each dropped item, each... you get my point.

Another way of doing this is changing the scheduler to store entities that have pending tasks in a global list, to avoid iterating over all entities from all worlds. This is what I have done in my fork, and it is probably a better, less hacky solution, BUT it will probably make Folia explode, so it needs further testing in a Folia environment rather than a Paper environment (on Paper, I have been running this patch for over a week without any issues): https://github.com/SparklyPower/SparklyPaper/blob/ver/1.20.2/patches/server/0010-Skip-EntityScheduler-s-executeTick-checks-if-there-i.patch (see PR #9985)
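A minimal sketch of that alternative, under the assumption that schedulers register themselves in a global collection when they receive work (names like schedulersWithTasks are illustrative, not the patch's actual identifiers):

```java
import java.util.Queue;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

final class GlobalEntityTaskTracker {
    // Global view of "entities that currently have pending tasks".
    static final Set<PendingEntityScheduler> schedulersWithTasks = ConcurrentHashMap.newKeySet();
}

final class PendingEntityScheduler {
    private final Queue<Runnable> tasks = new ConcurrentLinkedQueue<>();

    void schedule(Runnable task) {
        this.tasks.add(task);
        // Register only when there is work; the server tick no longer has to
        // look at schedulers that were never used.
        GlobalEntityTaskTracker.schedulersWithTasks.add(this);
    }

    void executeTick() {
        // Deregister first, then drain: if a task is scheduled concurrently,
        // schedule() re-adds this scheduler to the set, so nothing is lost.
        GlobalEntityTaskTracker.schedulersWithTasks.remove(this);
        Runnable task;
        while ((task = this.tasks.poll()) != null) {
            task.run();
        }
    }
}

final class GlobalTickLoopSketch {
    // Stand-in for the tickChildren loop: cost scales with schedulers that
    // actually have work, not with the total entity count.
    static void tick() {
        for (PendingEntityScheduler scheduler : GlobalEntityTaskTracker.schedulersWithTasks) {
            scheduler.executeTick();
        }
    }
}
```

The remove-then-drain ordering avoids losing concurrently scheduled tasks in this simplified model, but how a global collection interacts with Folia's regionised threading is exactly the kind of thing that needs the further testing mentioned above.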

If anyone wants to reproduce the performance impact: just spawn a bunch of entities. I used armor stands to test this.
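For example, a throwaway plugin along these lines will do it (this uses the standard Bukkit API; the entity count is only chosen to roughly match the 15k-entity figure above):

```java
import org.bukkit.Bukkit;
import org.bukkit.Location;
import org.bukkit.World;
import org.bukkit.entity.ArmorStand;
import org.bukkit.plugin.java.JavaPlugin;

public final class SpawnArmorStandsPlugin extends JavaPlugin {
    @Override
    public void onEnable() {
        World world = Bukkit.getWorlds().get(0);
        Location spawn = world.getSpawnLocation();
        // Spawn ~15k armor stands at world spawn to load up the entity list.
        for (int i = 0; i < 15_000; i++) {
            world.spawn(spawn, ArmorStand.class);
        }
    }
}
```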

(In the profiler, the loop is net.minecraft.server.MinecraftServer.lambda$tickChildren$15())

Profiler Before the Changes: https://spark.lucko.me/eRvT1QqnI0
Profiler After the Changes: https://spark.lucko.me/DDAk1JQC0u

@MrPowerGamerBR MrPowerGamerBR requested a review from a team as a code owner November 29, 2023 15:48
@MrPowerGamerBR
Contributor Author

MrPowerGamerBR commented Nov 29, 2023

I have made another PR that solves the same issue, but that one is based on the patch I mentioned in this PR's description: #9985

I decided to open a new PR instead of editing this one because I think both approaches are nice:

  • This one requires fewer changes, but once an EntityScheduler has been used at least once, it will keep being ticked until it is retired. So, theoretically, if a plugin scheduled a task for every entity, this patch wouldn't help; however, that is a rare situation. The implementation in this PR is the previous solution I was using on my server, before trying to decrease lag even... further... beyond!
  • The other one requires more changes, but is cleaner and better overall; it may, however, have other issues that I don't know about. That one is what I'm running on my server and I haven't experienced any issues, but I don't have a lot of plugins using entity schedulers, so further testing may be needed.

@kennytv
Member

kennytv commented Dec 2, 2023

Closing this in favor of #9985

@kennytv kennytv closed this Dec 2, 2023