Parse Error while loading into Elasticsearch Cluster #457
I have also seen these recently, usually two at the beginning of the import and not repeated. Nothing to worry about.
Thank you @missinglink
Yeah, these messages are nothing to worry about. It turns out Elasticsearch 5 emits a LOT of deprecation warnings for our current schema, and does so using HTTP headers. Node.js recently added a max header size, which triggers these errors. You can read all the details in pelias/schema#337 (comment). We'll be able to fix these errors soon by cleaning up some of the deprecation warnings now that we've completely dropped support for ES2. Until then, nothing to worry about.
This change makes our Elasticsearch schema compatible with Elasticsearch 5 and 6. It shouldn't have any effect on performance or operation, but it will completely drop compatibility with Elasticsearch 2.

The primary change is that Elasticsearch 5 introduces two types of text fields: `text` and `keyword`, whereas Elasticsearch 2 had only one: `string`. Roughly, a `text` field is for true full text search, and a `keyword` field is for simple values that are primarily used for filtering or aggregation (for example, our `source` and `layer` fields). The `string` datatype previously filled both of those roles depending on how it was configured. Fortunately, we had already created a concept roughly similar to the `keyword` datatype in our schema, but called it `literal`. This has been renamed to `keyword` to cut down on the number of terms needed.

One nice effect of this change is that it removes all deprecation warnings printed by Elasticsearch 5. Notably, as discovered in #337 (comment), these warnings were quite noisy and required special handling to work around Node.js header size restrictions. This special handling can now be removed.

Fixes pelias/whosonfirst#457
Connects pelias/pelias#719
Connects pelias/pelias#461
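As a purely illustrative sketch (not the actual Pelias schema definition), the distinction described above might look like this in an Elasticsearch 5/6 mapping, with `keyword` used for filter/aggregation values such as `source` and `layer`, and `text` used for fields that need full text search:

```json
{
  "properties": {
    "source": { "type": "keyword" },
    "layer": { "type": "keyword" },
    "name": { "type": "text" }
  }
}
```

Under Elasticsearch 2, all three would have been `string` fields, with their analysis settings determining whether they behaved like the `text` or the `keyword` case.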
I got

`POST http://sdt-dev-elastic-001.test.pro:9200/_bulk => Parse Error`

while running `npm start` to import data into our Elasticsearch cluster. What might be the reason?
The importer settings from our pelias.json:
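The reporter's actual snippet is not shown here; as a rough, hypothetical illustration only, Who's on First importer settings in `pelias.json` typically look something like the following (the host is reused from the error above, and the `datapath` and `importVenues` values are assumptions, not the reporter's configuration):

```json
{
  "esclient": {
    "hosts": [
      { "host": "sdt-dev-elastic-001.test.pro", "port": 9200 }
    ]
  },
  "imports": {
    "whosonfirst": {
      "datapath": "/data/whosonfirst",
      "importVenues": false
    }
  }
}
```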
The Elastic version is