
Jsparagus lexer is 42% slower than SpiderMonkey's lexer #589

Open
nbp opened this issue Jul 9, 2020 · 1 comment
Labels
lexer optimization Would improve performance.

Comments


nbp commented Jul 9, 2020

This result comes from the following profile where both SmooshMonkey and SpiderMonkey succeed: https://share.firefox.dev/38DHNBm

Under js::frontend::GeneralParser::parse, filtering with Token highlights that we spent 3.407s in the lexer.
Under smoosh_test_parse_script, filtering with lexer highlights that we spent 4.858s in the lexer.

@nbp added the lexer and optimization (Would improve performance.) labels on Jul 9, 2020

nbp commented Jul 9, 2020

One aspect worth noting, from running valgrind's DHAT tool (after disabling jemalloc), is that declare and declare_var are the major sources of short-lived allocations and reallocations, through std::collections::hash::map::HashMap&lt;K,V,S&gt;::insert.

Maybe we should consider using a sparse BitSet or the EntitySet from cranelift.
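A sketch of the kind of structure a bit set would give us, assuming bindings can be keyed by small integer indices (the BindingSet name and the indexing scheme here are illustrative, not jsparagus's actual types; cranelift's EntitySet follows the same word-of-bits idea):

```rust
/// A growable dense bit set keyed by small integer indices, as a possible
/// replacement for the HashMap inserts in declare / declare_var.
pub struct BindingSet {
    words: Vec<u64>, // one bit per index, 64 indices per word
}

impl BindingSet {
    pub fn new() -> Self {
        BindingSet { words: Vec::new() }
    }

    /// Mark `index` as declared; returns true if it was newly inserted.
    /// Grows by whole words, so repeated inserts in the same range
    /// do not reallocate, unlike a HashMap rehashing.
    pub fn insert(&mut self, index: usize) -> bool {
        let (word, bit) = (index / 64, index % 64);
        if word >= self.words.len() {
            self.words.resize(word + 1, 0);
        }
        let mask = 1u64 << bit;
        let was_set = self.words[word] & mask != 0;
        self.words[word] |= mask;
        !was_set
    }

    /// Check whether `index` has been declared.
    pub fn contains(&self, index: usize) -> bool {
        let (word, bit) = (index / 64, index % 64);
        self.words.get(word).map_or(false, |w| w & (1u64 << bit) != 0)
    }
}
```

The point of the sketch is that membership becomes one word load and a mask, and the only allocation is the occasional Vec resize, rather than a per-insert hash-map bucket churn.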
