I found Datasette a couple of days ago (many thanks!) and have imported some biodiversity data: ~29,000 occurrence records of marine organisms (with latitude/longitude, in one table). I'm running Datasette locally on Ubuntu and am encouraged by my early experiences. However... your cluster-map plugin displays facets nicely, but displaying all 29,000 records seems to choke the browsers I've tried, which stop after the first 7,000 records (or fewer). Several of your examples use >30,000 records, so I'm doing something wrong - but what? I've tried removing "limit 101" from the SQL query. Can you please help me with settings that will display all records by default in cluster-map, without having to scroll through multiple pages of rows?
(In general, arbitrary default limits on how many records are displayed on maps or other plots make no sense and are actually misleading from a research point of view, so I need to get past that. I'm aware that numbers of records much greater than mine present no barrier for SQLite, but I'm not so sure regarding Datasette and its plugins - perhaps it is just cluster-map that has default limits? I'm at the beginning of my SQL learning curve.)
[update 7 April]
Having now viewed the Learn SQL with Datasette tutorial, I read:
"The limit 101 clause limits the query to returning just the first 101 results. In most SQL databases omitting this would cause all results to be returned - but Datasette applies an additional limit of 1,000 (example here) to prevent large queries from causing performance issues."
This is a problem for me, and without a solution it will cause me to view and plot my data in R instead. What exactly are the "performance issues"? Half an hour would be a problem, but I'd happily wait a few minutes to see all the data plotted, not just some of it. Is it possible to override the 1,000 record limit?
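From my reading of the Datasette settings documentation, something like the following looks as though it should raise both the default page size and the 1,000-row cap - but this is just a sketch of what I think I need, not something I've confirmed works with cluster-map, and the database filename is a placeholder for my own file:

```bash
# Sketch: start Datasette with larger row limits so a single page/query
# can return all ~29,000 occurrence records.
# "occurrences.db" is a placeholder; the setting names are taken from
# the Datasette settings documentation as I understand it.
datasette occurrences.db \
  --setting default_page_size 30000 \
  --setting max_returned_rows 30000
```

Is this the intended way to do it, or is there a better approach for the cluster-map use case?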
[This is some background, no need to read on unless you are interested in context]
I'm not a developer; I work on the systematics and distribution of marine organisms. I very recently discovered Datasette, which looks great and which I hope might replace an MS Access application built by a colleague for managing biodiversity data. Depending on the analysis and outputs required, I'm also using the data with various tools in the R environment, especially marmap. However, Linux is my preferred OS, and Datasette and SQLite look like my best bet for maintaining the master copy of the data, currently ~190,000 rows of occurrences plus several linked tables. If I get past these teething issues I'll import the entire data set and the other tables into SQLite.
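For context, the import route I'm planning to use is sqlite-utils, assuming I export the occurrence data to CSV first - the file and table names below are placeholders for my own data:

```bash
# Sketch: load a CSV of occurrence records into a SQLite database with
# sqlite-utils ("biodiversity.db", "occurrences" and "occurrences.csv"
# are placeholder names for my files and table).
sqlite-utils insert biodiversity.db occurrences occurrences.csv --csv
```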
Ideally I'll ultimately also be able to make data from SQLite available to wider audiences via TiddlyWiki, which would provide different functionality. I'm aware of the datasette-tiddlywiki plugin, but I would like to run Datasette in TiddlyWiki rather than the other way around - in due course I'll raise that with the TiddlyWiki folk.