diff --git a/README.rst b/README.rst
index ba584a29..19768284 100644
--- a/README.rst
+++ b/README.rst
@@ -46,34 +46,33 @@ Introduction
------------
-Trafilatura is a cutting-edge **Python package and command-line tool** designed to **gather text on the Web and simplify the process of turning raw HTML into structured, meaningful data**. It includes all necessary discovery and text processing components to perform **web crawling, downloads, scraping, and extraction** of main texts, metadata and comments. It aims at staying **handy and modular**: no database is required, the output can be converted to multiple commonly used formats.
+Trafilatura is a cutting-edge **Python package and command-line tool** designed to **gather text on the Web and simplify the process of turning raw HTML into structured, meaningful data**. It includes all necessary discovery and text processing components to perform **web crawling, downloads, scraping, and extraction** of main texts, metadata and comments. It aims at staying **handy and modular**: no database is required, the output can be converted to commonly used formats.
-Smart navigation and going from HTML bulk to essential parts can alleviate many problems related to text quality, by **focusing on the actual content**, **avoiding the noise** caused by recurring elements (headers, footers etc.), **making sense of the data** with selected information. The extractor is designed to be **robust and reasonably fast**, it runs in production on millions of documents.
+Going from HTML bulk to essential parts can alleviate many problems related to text quality, by **focusing on the actual content**, **avoiding the noise** caused by recurring elements (headers, footers etc.), and **making sense of the data** with selected information. The extractor is designed to be **robust and reasonably fast**: it runs in production on millions of documents.
+
+The tool's versatility makes it **useful for quantitative and data-driven approaches**. It is used in the academic domain and beyond (e.g. in natural language processing, computational social science, search engine optimization, and information security).
-The tool's versatility makes it useful for a wide range of applications leveraging web content for knowledge discovery such as **quantitative and data-driven approaches**. Trafilatura is used in the academic domain and beyond (e.g. in NLP, SEO, business analytics).
Features
~~~~~~~~
- Advanced web crawling and text discovery:
- - Focused crawling adhering to politeness rules
- Support for sitemaps (TXT, XML) and feeds (ATOM, JSON, RSS)
- - Smart navigation and URL management (blacklists, filtering and deduplication)
+ - Smart crawling and URL management (filtering and deduplication)
- Parallel processing of online and offline input:
- Live URLs, efficient and polite processing of download queues
- Previously downloaded HTML files and parsed HTML trees
-- Robust and customizable extraction of key elements:
+- Robust and configurable extraction of key elements:
- Main text (common patterns and generic algorithms like jusText and readability)
- Metadata (title, author, date, site name, categories and tags)
- Formatting and structure: paragraphs, titles, lists, quotes, code, line breaks, in-line text formatting
- Optional elements: comments, links, images, tables
- - Extensive configuration options
- Multiple output formats:
- Text (minimal formatting or Markdown)
- - CSV (with metadata, tab-separated values)
+ - CSV (with metadata)
- JSON (with metadata)
- - XML (with metadata, text formatting and page structure) and `TEI-XML `_
-- Add-ons:
+ - XML or `XML-TEI `_ (with metadata, text formatting and page structure)
+- Optional add-ons:
- Language detection on extracted content
- Graphical user interface (GUI)
- Speed optimizations
@@ -87,7 +86,8 @@ Evaluation and alternatives
Trafilatura consistently outperforms other open-source libraries in text extraction benchmarks, showcasing its efficiency and accuracy in extracting web content. The extractor tries to strike a balance between limiting noise and including all valid parts.
-For more detailed results see the `benchmark `_. The results can be reproduced, see the `evaluation readme `_ for instructions.
+For more information, see the `benchmark section `_; to reproduce the results, follow the `evaluation readme `_.
+
=============================== ========= ========== ========= ========= ======
750 documents, 2236 text & 2250 boilerplate segments (2022-05-18), Python 3.8
@@ -107,6 +107,7 @@ readabilipy 0.2.0 0.877 0.870 0.874 0.874 248x
trafilatura 1.2.2 (standard) 0.914 0.904 **0.910** **0.909** 7.1x
=============================== ========= ========== ========= ========= ======
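For reference, the F-score in the table above is the harmonic mean of precision and recall; a quick sanity check on the trafilatura row (assuming the 0.914 and 0.904 columns are precision and recall):

```python
def f_score(precision, recall):
    # Harmonic mean of precision and recall (F1)
    return 2 * precision * recall / (precision + recall)

# trafilatura 1.2.2 (standard): precision 0.914, recall 0.904
print(round(f_score(0.914, 0.904), 3))  # → 0.909
```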
+
Other evaluations:
^^^^^^^^^^^^^^^^^^
@@ -125,11 +126,9 @@ Usage and documentation
- `Core Python functions `_
- Interactive Python Notebook: `Trafilatura Overview `_
- `Tutorials and use cases `_
- - `Text embedding for vector search `_
- - `Custom web corpus `_
- - `Word frequency list `_
-For video tutorials see this Youtube playlist:
+
+YouTube playlist with video tutorials in several languages:
- `Web scraping tutorials and how-tos `_
@@ -155,15 +154,15 @@ Many thanks to the `contributors `_.
+Developed with practical applications of academic research in mind, this software is part of a broader effort to derive information from web documents. Extracting and pre-processing web texts to the exacting standards of scientific research presents a substantial challenge. This software package simplifies text data collection and enhances corpus quality; it is currently used to build `text databases for linguistic research `_.
-*Trafilatura* is an Italian word for `wire drawing `_ symbolizing the industrial-grade extraction, refinement and conversion process.
+*Trafilatura* is an Italian word for `wire drawing `_ symbolizing the refinement and conversion process. It is also how pasta shapes are formed.
Author
~~~~~~
-Reach out via the `contact page `_ for inquiries, collaborations, or feedback. See also `Twitter/X `_ for the latest updates.
+Reach out via the software repository or the `contact page `_ for inquiries, collaborations, or feedback. See also X or LinkedIn for the latest updates.
This work started as a PhD project at the crossroads of linguistics and NLP; this expertise has been instrumental in shaping Trafilatura over the years. It was first released in its current form in 2019, and its development is referenced in the following publications:
@@ -175,7 +174,7 @@ This work started as a PhD project at the crossroads of linguistics and NLP, thi
Citing Trafilatura
~~~~~~~~~~~~~~~~~~
-Trafilatura is used in the academic domain, chiefly for data acquisition in corpus linguistics, natural language processing, and computational social science. Here is how to cite it:
+Trafilatura is widely used in the academic domain, chiefly for data acquisition. Here is how to cite it:
.. image:: https://img.shields.io/badge/DOI-10.18653%2Fv1%2F2021.acl--demo.15-blue
:target: https://aclanthology.org/2021.acl-demo.15/
@@ -201,7 +200,7 @@ Trafilatura is used in the academic domain, chiefly for data acquisition in corp
Software ecosystem
~~~~~~~~~~~~~~~~~~
-This software is part of a larger ecosystem. It is employed in a variety of academic and development projects, demonstrating its versatility and effectiveness. Case studies and publications are listed on the `Used By documentation page `_.
+Case studies and publications are listed on the `Used By documentation page `_.
Jointly developed plugins and additional packages also contribute to the field of web data extraction and analysis:
diff --git a/docs/evaluation.rst b/docs/evaluation.rst
index de938686..b5a5bb3b 100644
--- a/docs/evaluation.rst
+++ b/docs/evaluation.rst
@@ -14,6 +14,15 @@ Although text is ubiquitous on the Web, extracting information from web pages ca
The extraction focuses on the main content, which is usually the part displayed centrally, without the left or right bars, the header or the footer, but including potential titles and (optionally) comments. This task is also known as web scraping, boilerplate removal, DOM-based content extraction, main content identification, or web page cleaning.
+External evaluations
+--------------------
+
+- Most efficient open-source library in *ScrapingHub*'s `article extraction benchmark `_
+- Best overall tool according to `Bien choisir son outil d'extraction de contenu à partir du Web `_ (Lejeune & Barbaresi 2020)
+- Comparison on a small `sample of Polish news texts and forums `_ (now integrated in the internal benchmark, Trafilatura has improved since)
+- Best single tool by ROUGE-LSum Mean F1 Page Scores in `An Empirical Comparison of Web Content Extraction Algorithms `_ (Bevendorff et al. 2023)
+
+
Alternatives
------------
@@ -84,14 +93,6 @@ trafilatura 1.2.2 (standard) 0.914 0.904 **0.910** **0.909** 7.1x
=============================== ========= ========== ========= ========= ======
-External evaluations
---------------------
-
-- Most efficient open-source library in *ScrapingHub*'s `article extraction benchmark `_
-- Best overall tool according to `Bien choisir son outil d'extraction de contenu à partir du Web `_ (Lejeune & Barbaresi 2020)
-- Comparison on a small `sample of Polish news texts and forums `_ (now integrated in the internal benchmark, Trafilatura has improved since)
-- Best single tool by ROUGE-LSum Mean F1 Page Scores in `An Empirical Comparison of Web Content Extraction Algorithms `_ (Bevendorff et al. 2023)
-
Older results
-------------
diff --git a/docs/index.rst b/docs/index.rst
index 3890d671..319b185f 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -38,7 +38,7 @@ A Python package & command-line tool to gather text on the Web
Description
-----------
-Trafilatura is a **Python package and command-line tool** designed to gather text on the Web. It includes discovery, extraction and text processing components. Its main applications are **web crawling, downloads, scraping, and extraction** of main texts, metadata and comments. It aims at staying **handy and modular**: no database is required, the output can be converted to various commonly used formats.
+Trafilatura is a **Python package and command-line tool** designed to gather text on the Web. It includes discovery, extraction and text processing components. Its main applications are **web crawling, downloads, scraping, and extraction** of main texts, metadata and comments. It aims at staying **handy and modular**: no database is required, the output can be converted to commonly used formats.
Going from raw HTML to essential parts can alleviate many problems related to text quality, first by avoiding the **noise caused by recurring elements** (headers, footers, links/blogroll etc.) and second by including information such as author and date in order to **make sense of the data**. The extractor tries to strike a balance between limiting noise (precision) and including all valid parts (recall). It also has to be **robust and reasonably fast**: it runs in production on millions of documents.
@@ -48,28 +48,29 @@ This tool can be **useful for quantitative research** in corpus linguistics, nat
Features
~~~~~~~~
-- Web crawling and text discovery:
- - Focused crawling and politeness rules
+- Advanced web crawling and text discovery:
- Support for sitemaps (TXT, XML) and feeds (ATOM, JSON, RSS)
- - URL management (blacklists, filtering and de-duplication)
-- Seamless and parallel processing, online and offline:
- - URLs, HTML files or parsed HTML trees usable as input
- - Efficient and polite processing of download queues
- - Conversion of previously downloaded files
-- Robust and efficient extraction:
- - Main text (with LXML, common patterns and generic algorithms: jusText, fork of readability-lxml)
+ - Smart crawling and URL management (filtering and deduplication)
+- Parallel processing of online and offline input:
+ - Live URLs, efficient and polite processing of download queues
+ - Previously downloaded HTML files and parsed HTML trees
+- Robust and configurable extraction of key elements:
+ - Main text (common patterns and generic algorithms like jusText and readability)
- Metadata (title, author, date, site name, categories and tags)
- - Formatting and structural elements: paragraphs, titles, lists, quotes, code, line breaks, in-line text formatting
- - Comments (if applicable)
-- Output formats:
+ - Formatting and structure: paragraphs, titles, lists, quotes, code, line breaks, in-line text formatting
+ - Optional elements: comments, links, images, tables
+- Multiple output formats:
- Text (minimal formatting or Markdown)
- - CSV (with metadata, `tab-separated values `_)
+ - CSV (with metadata)
- JSON (with metadata)
- - XML (with metadata, text formatting and page structure) and `TEI-XML `_
+ - XML or `XML-TEI `_ (with metadata, text formatting and page structure)
- Optional add-ons:
- Language detection on extracted content
- Graphical user interface (GUI)
- Speed optimizations
+- Actively maintained with support from the open-source community:
+ - Regular updates, feature additions, and optimizations
+ - Comprehensive documentation
Evaluation and alternatives
@@ -77,20 +78,13 @@ Evaluation and alternatives
Trafilatura consistently outperforms other open-source libraries in text extraction benchmarks, showcasing its efficiency and accuracy in extracting web content. The extractor tries to strike a balance between limiting noise and including all valid parts.
-For detailed results see the `benchmark `_. The results can be reproduced, see the `evaluation readme _` for instructions.
-
-
-Other evaluations:
-^^^^^^^^^^^^^^^^^^
-
-- Most efficient open-source library in *ScrapingHub*'s `article extraction benchmark `_
-- Best overall tool according to Gaël Lejeune & Adrien Barbaresi, `Bien choisir son outil d'extraction de contenu à partir du Web `_ (2020, PDF, French)
+The `benchmark section `_ details alternatives and results, the `evaluation readme `_ describes how to reproduce the evaluation.
In a nutshell
-------------
-Primary installation method is with a Python package manager: ``pip install trafilatura``. See `installation documentation `_.
+The primary installation method is a Python package manager: ``pip install trafilatura`` (→ `installation documentation `_).
With Python:
@@ -108,7 +102,7 @@ On the command-line:
$ trafilatura -u "https://github.blog/2019-03-29-leader-spotlight-erin-spiceland/"
# outputs main content and comments as plain text ...
-For more information please refer to `usage documentation `_ and `tutorials `_.
+For more see `usage documentation `_ and `tutorials `_.
.. raw:: html
@@ -128,29 +122,26 @@ For insights into GPL and free software licensing with emphasis on a business co
-Context
--------
-
-Extracting and pre-processing web texts to the exacting standards of scientific research presents a substantial challenge. These documentation pages also provide information on `concepts behind data collection `_ as well as practical tips on how to gather web texts (see `tutorials `_).
-
+Contributing
+------------
+Contributions of all kinds are welcome. Visit the `Contributing page `_ for more information. Bug reports can be filed on the `dedicated issue page `_.
-Contributing
-~~~~~~~~~~~~
+Many thanks to the `contributors `_ who extended the docs or submitted bug reports, features and bugfixes!
-Contributions are welcome! See `CONTRIBUTING.md `_ for more information. Bug reports can be filed on the `dedicated page `_.
+Context
+-------
-Roadmap
-~~~~~~~
+Extracting and pre-processing web texts to the exacting standards of scientific research presents a substantial challenge. These documentation pages also provide information on `concepts behind data collection `_ as well as practical tips on how to gather web texts (see `tutorials `_).
-For planned enhancements and relevant milestones see `issues page `_.
+*Trafilatura* is an Italian word for `wire drawing `_ symbolizing the refinement and conversion process. It is also how pasta shapes are formed.
Author
~~~~~~
-Reach out via the `contact page `_ for inquiries, collaborations, or feedback. See also `Twitter/X `_ for the latest updates.
+Reach out via the software repository or the `contact page `_ for inquiries, collaborations, or feedback. See also X or LinkedIn for the latest updates.
This work started as a PhD project at the crossroads of linguistics and NLP; this expertise has been instrumental in shaping Trafilatura over the years. It was first released in its current form in 2019, and its development is referenced in the following publications:
@@ -160,6 +151,11 @@ This work started as a PhD project at the crossroads of linguistics and NLP, thi
- Barbaresi, A. "`Efficient construction of metadata-enhanced web corpora `_", Proceedings of the `10th Web as Corpus Workshop (WAC-X) `_, 2016.
+Citing Trafilatura
+~~~~~~~~~~~~~~~~~~
+
+Trafilatura is widely used in the academic domain, chiefly for data acquisition. Here is how to cite it:
+
.. image:: https://img.shields.io/badge/DOI-10.18653%2Fv1%2F2021.acl--demo.15-blue
:target: https://aclanthology.org/2021.acl-demo.15/
:alt: Reference DOI: 10.18653/v1/2021.acl-demo.15
@@ -182,13 +178,10 @@ This work started as a PhD project at the crossroads of linguistics and NLP, thi
}
-You can contact me via my `contact page `_ or on `GitHub `_.
-
-
Software ecosystem
~~~~~~~~~~~~~~~~~~
-This software is part of a larger ecosystem. It is employed in a variety of academic and development projects, demonstrating its versatility and effectiveness. Case studies and publications are listed on the `Used By documentation page `_.
+Case studies and publications are listed on the `Used By documentation page `_.
Jointly developed plugins and additional packages also contribute to the field of web data extraction and analysis:
diff --git a/docs/sources.rst b/docs/sources.rst
index d0f1844c..1ae76f1d 100644
--- a/docs/sources.rst
+++ b/docs/sources.rst
@@ -62,7 +62,7 @@ Searching for URLs
The Common Crawl is a good place to start looking for already known URLs, and possibly for the corresponding pages stored by the project. So is the Internet Archive (with a different focus):
-- `CommonCrawl index `_
+- `getallurls (gau) `_ to fetch known URLs from the Wayback Machine and the Common Crawl (among others)
- `cdx_toolkit `_ (toolkit for CDX indices such as Common Crawl and the Internet Archive's Wayback Machine) & `Python example `_
- `Python script `_ to extract all URLs known by the Internet Archive for a given domain
diff --git a/docs/troubleshooting.rst b/docs/troubleshooting.rst
index e01ac263..f5b32653 100644
--- a/docs/troubleshooting.rst
+++ b/docs/troubleshooting.rst
@@ -87,3 +87,4 @@ Download first and extract later
Since they have distinct characteristics, it can be useful to separate the infrastructure needed for download from the extraction. Using a custom IP or network infrastructure can also prevent your usual IP from getting banned.
+For an approach using files from the Common Crawl and Trafilatura, see the external tool `datatrove/process_common_crawl_dump.py `_.
diff --git a/docs/tutorial-epsilla.rst b/docs/tutorial-epsilla.rst
index ae0dcc58..8b647363 100644
--- a/docs/tutorial-epsilla.rst
+++ b/docs/tutorial-epsilla.rst
@@ -8,7 +8,7 @@ Tutorial: Text embedding
Why perform text embedding with crawled data?
-------------------------------------------------
+---------------------------------------------
If you are doing natural language research, you may want to perform text embeddings on text crawled with Trafilatura.
@@ -19,14 +19,18 @@ Text embedding involves converting text into numerical vectors, and is commonly
- Anomaly detection (identify outliers)
In this tutorial, we will show you how to perform text embedding on results from Trafilatura. We will use
-`Epsilla `_, an open source vector database for storing and searching vector embeddings. It is 10x faster than regular vector databases for vector operations.
+`Epsilla `_, an open source vector database for storing and searching vector embeddings.
+
+Alternatives include `Qdrant `_, `Redis `_, and `ChromaDB `_. They mostly work in a similar way.
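Under the hood, vector search of this kind boils down to nearest-neighbor lookup by a similarity metric, typically cosine similarity. A minimal, library-free sketch of the idea (the three-dimensional vectors are toy values, not real embeddings, which would come from an embedding model):

```python
from math import sqrt

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for three project homepages
docs = {
    "pytorch": [0.9, 0.1, 0.2],
    "tensorflow": [0.8, 0.2, 0.1],
    "react": [0.1, 0.9, 0.3],
}
query = [0.85, 0.15, 0.15]  # toy embedding of the query string

best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
```

A vector database performs this lookup at scale, with indexing structures that avoid comparing the query against every stored vector.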
+
.. note::
For a hands-on version of this tutorial, try out the `Colab Notebook `_.
+
Setup Epsilla
-------------------------------------------------
+-------------
In this tutorial, we will need an Epsilla database server. There are two ways to get one: use the free cloud version or start one locally.
@@ -87,7 +91,7 @@ We can now connect to the demo server.
Crawl project homepages and store their vector embeddings in Epsilla
------------------------------------------------------------------------------------
+--------------------------------------------------------------------
Suppose we want to find the most relevant open source project based on a query string.
@@ -132,7 +136,7 @@ Now the vector embeddings are stored in Epsilla. In the next section, we will pe
Perform vector search
--------------------------
+---------------------
We have stored the homepages of PyTorch, TensorFlow and React in the database.
We can now perform a vector search to find the most relevant project based on a query string.
diff --git a/docs/tutorials.rst b/docs/tutorials.rst
index 694b6a42..b5e271bb 100644
--- a/docs/tutorials.rst
+++ b/docs/tutorials.rst
@@ -32,7 +32,7 @@ Blog posts
Videos
^^^^^^
-Youtube playlist
+YouTube playlist with video tutorials in several languages
`Web scraping how-tos and tutorials `_.
diff --git a/docs/usage-api.rst b/docs/usage-api.rst
index e032a56b..5c7fdff1 100644
--- a/docs/usage-api.rst
+++ b/docs/usage-api.rst
@@ -3,16 +3,18 @@ API
.. meta::
:description lang=en:
- See how to use the official Trafilatura API to download or extract data for free or for larger volumes.
+ See how to use the official Trafilatura API to download and extract data for free or for larger volumes.
Introduction
------------
-Use the last version of the software straight from the application programming interface. This is especially useful if you want to try out Trafilatura without installing it or if you want to support the project while saving time.
+Simplify the process of turning URLs and HTML into structured, meaningful data. Use the latest version of the software straight from the application programming interface. The Trafilatura API lets you add its capabilities to your projects and apps.
-- Fast URL download, or use HTML file as input
-- Configurable output
+This is especially useful if you want to try out Trafilatura without installing it or if you want to support the project while saving time.
+
+- Download URLs or provide your own data, web scraping included
+- Configurable output with conversion to supported formats
Endpoints
@@ -21,12 +23,55 @@ Endpoints
The official API comes in two versions, available from two different gateways:
- `Free for demonstration purposes `_ (including documentation page)
-- `For a larger volume of requests `_ (documentation and plans)
+- `For a larger volume of requests `_ (documentation with code snippets and plans)
+
+
+Examples
+--------
+
+The API takes JSON as input and requires a corresponding content-type header. It then returns a JSON string with the result.
+
+
+CLI
+~~~
+
+.. code-block:: bash
+
+ $ curl -X POST "https://trafilatura.mooo.com/extract-demo" \
+ -H "content-type: application/json" \
+ --data '{
+ "url": "https://example.org",
+ "args": {
+ "output_format": "xml"
+ }
+ }'
+
+
+Python
+~~~~~~
+
+.. code-block:: python
+
+ import requests
+
+ url = "https://trafilatura.mooo.com/extract-demo"
+
+ payload = {
+ "url": "https://example.org",
+ "args": { "output_format": "xml" }
+ }
+ headers = {
+ "content-type": "application/json",
+ }
+
+ response = requests.post(url, json=payload, headers=headers)
+
+ print(response.json())
Further information
-------------------
-The API is still an early-stage product and the code is currently not available under an open-source license.
+The API is still an early-stage product and the code is not available under an open-source license.
diff --git a/docs/usage-cli.rst b/docs/usage-cli.rst
index 989268c5..7274ede6 100644
--- a/docs/usage-cli.rst
+++ b/docs/usage-cli.rst
@@ -3,14 +3,13 @@ On the command-line
.. meta::
:description lang=en:
- Trafilatura offers a robust CLI. Learn how to download and extract text from HTML web pages without writing code,
- including parallel processing and data mining capabilities.
+ Trafilatura offers a robust CLI. Learn how to download and extract text from HTML web pages without writing code, including parallel processing and data mining capabilities.
Introduction
------------
-Trafilatura offers a robust `command-line interface `_ and can be conveniently used without writing code. Learn how to perform various tasks and leverage the full power of the tool from the terminal.
+Trafilatura offers a robust command-line interface and can be conveniently used without writing code. Learn how to perform various tasks and leverage the full power of the tool from the terminal.
For the very first steps please refer to this multilingual, step-by-step `Introduction to the command-line interface `_ and this `section of the Introduction to Cultural Analytics & Python `_.
@@ -21,11 +20,6 @@ For instructions related to specific platforms see:
- `How to use the Terminal command line in macOS `_
- or `An introduction to the Linux Terminal `_
-As well as these compendia:
-
-- `Introduction to the Bash Command Line `_ (The Programming Historian)
-- `Basic Bash Command Line Tips You Should Know `_ (freeCodeCamp)
-
Quickstart
----------
diff --git a/docs/usage-gui.rst b/docs/usage-gui.rst
index 309641c4..8dc21a6e 100644
--- a/docs/usage-gui.rst
+++ b/docs/usage-gui.rst
@@ -2,9 +2,9 @@ Graphical user interface
========================
-For cases where the other usage options do not appear to be convenient, a `graphical user interface `_ (GUI) is available. This type of interface allows for interact with *trafilatura* through graphical icons and menus instead of text-based user interfaces, typed command labels or text navigation.
+For cases where the other usage options do not appear to be convenient, a `graphical user interface `_ (GUI) is available. This type of interface allows you to interact with the software through graphical icons and menus instead of text-based user interfaces, typed command labels or text navigation.
-Although it is still experimental, the interface should provide access to all main functions of *trafilatura*. For more information please refer to the `installation instructions `_.
+This interface is still experimental and not actively maintained. It should work on most systems and provide access to all main functions. For more information please refer to the `installation instructions `_. If installation fails, usage on the command-line is recommended.
Screenshot
diff --git a/docs/usage-python.rst b/docs/usage-python.rst
index 7040fd86..a0bc89f7 100644
--- a/docs/usage-python.rst
+++ b/docs/usage-python.rst
@@ -15,7 +15,6 @@ Python can be easy to pick up whether you're a first time programmer or you're e
- Official `Python Tutorial `_
- `The Hitchhiker’s Guide to Python `_
-- `Learn Python Programming Step by Step `_
- `The Best Python Tutorials (freeCodeCamp) `_
diff --git a/docs/usage-r.rst b/docs/usage-r.rst
index 72fa5097..e3009f2c 100644
--- a/docs/usage-r.rst
+++ b/docs/usage-r.rst
@@ -135,8 +135,5 @@ Going further
-------------
- `Basic Text Processing in R `_
-- `Quanteda `_ is an R package for managing and analyzing text:
- - `Quickstart `_
- - `Quanteda tutorials `_
- - `Advancing Text Mining with R and quanteda `_
+- `Quanteda `_ is an R package for managing and analyzing text
diff --git a/docs/used-by.rst b/docs/used-by.rst
index f99799ef..e8e85ca9 100644
--- a/docs/used-by.rst
+++ b/docs/used-by.rst
@@ -24,6 +24,7 @@ Trafilatura has been employed in a variety of contexts and projects. Some of the
Known institutional users
^^^^^^^^^^^^^^^^^^^^^^^^^
+- Allen Institute for AI with the `Dolma toolkit `_ used to pre-train the OLMo LLM
- Falcon LLM (TII UAE) and its underlying `RefinedWeb Dataset `_
- `FinGPT `_ (Finland)
- `Media Cloud platform `_ for media analysis, e.g. `Data against Feminicide `_
@@ -37,6 +38,7 @@ Various software repositories
- `Benson `_, to turn a list of URLs into mp3s of the contents of each web page
- `CommonCrawl downloader `_, to derive massive amounts of language data
+- `DataTrove `_, to process, filter and deduplicate text data
- `GLAM Workbench `_ for cultural heritage (web archives section)
- `llama-hub `_, a library of data loaders for large language models
- `LlamaIndex `_, a data framework for LLM applications
@@ -115,6 +117,7 @@ Publications citing Trafilatura
- Alakukku, L. (2022). "Domain specific boilerplate removal from web pages with entropy and clustering", Master's thesis, University of Aalto.
+- Alexandrescu, A., & Butincu, C.N. (2023). Decentralized news-retrieval architecture using blockchain technology. Mathematics, 11(21), 4542.
- Alhamzeh, A., Bouhaouel, M., Egyed-Zsigmond, E., & Mitrović, J. (2021). "DistilBERT-based Argumentation Retrieval for Answering Comparative Questions", Proceedings of CLEF 2021 – Conference and Labs of the Evaluation Forum.
- Bender, M., Bubenhofer, N., Dreesen, P., Georgi, C., Rüdiger, J. O., & Vogel, F. (2022). Techniken und Praktiken der Verdatung. Diskurse–digital, 135-158.
- Bevendorff, J., Gupta, S., Kiesel, J., & Stein, B. (2023). An Empirical Comparison of Web Content Extraction Algorithms.
@@ -147,12 +150,15 @@ Publications citing Trafilatura
- Meier-Vieracker, S. (2022). "Fußballwortschatz digital–Korpuslinguistische Ressourcen für den Sprachunterricht." Korpora Deutsch als Fremdsprache (KorDaF), 2022/01 (pre-print).
- Meng, K. (2021). "An End-to-End Computational System for Monitoring and Verifying Factual Claims" (pre-print).
- Miquelina, N., Quaresma, P., & Nogueira, V. B. (2022). Generating a European Portuguese BERT Based Model Using Content from Arquivo. pt Archive. In International Conference on Intelligent Data Engineering and Automated Learning (pp. 280-288). Springer, Cham.
+- Naira, A. M., & Benelallam, I. (2023). Evaluating ESG Impacts in African Cities through Topic-Level Sentiment Analysis. In 2023 10th International Conference on Wireless Networks and Mobile Communications (WINCOM) (pp. 1-6). IEEE.
+- Nguyen, Q.C., et al. (2024). Rosie, a Health Education Question-and-Answer Chatbot for New Mothers: Randomized Pilot Study. JMIR Formative Research, 8(1), e51361.
- Nissopoulou, T. X. (2023). Web content classification analysis, MSc thesis, International Hellenic University.
- Nolda, A., Barbaresi, A., & Geyken, A. (2023). Korpora für die lexikographische Beschreibung diatopischer Variation in der deutschen Standardsprache. Korpora in der germanistischen Sprachwissenschaft: Mündlich, schriftlich, multimedial, 29.
- Öhman, J., Verlinden, S., Ekgren, A., Gyllensten, A. C., Isbister, T., Gogoulou, E., ... & Sahlgren, M. (2023). The Nordic Pile: A 1.2 TB Nordic Dataset for Language Modeling. arXiv preprint arXiv:2303.17183.
- Penedo, G., Malartic, Q., Hesslow, D., Cojocaru, R., Cappelli, A., Pannier, B., ... & Launay, J. The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only.
- Piskorski, J., Stefanovitch, N., Da San Martino, G., & Nakov, P. (2023). Semeval-2023 task 3: Detecting the category, the framing, and the persuasion techniques in online news in a multi-lingual setup. In Proceedings of the the 17th International Workshop on Semantic Evaluation (SemEval-2023) (pp. 2343-2361).
- Pohlmann, J., Barbaresi, A., & Leinen, P. (2023). Platform regulation and “overblocking”–The NetzDG discourse in Germany. Communications, 48(3), 395-419.
+- Rastislav, K. (2024). Backend platformy pro sdílené ověřování faktů (Master's thesis, České vysoké učení technické v Praze. Vypočetní a informační centrum.)
- Robertson, F., Lagus, J., & Kajava, K. (2021). "A COVID-19 news coverage mood map of Europe", Proceedings of the EACL Hackashop on News Media Content Analysis and Automated Report Generation (pp. 110-115).
- Salmela, A. (2022). "Distinguishing Noise and Main Text Content from Web-Sourced Plain Text Documents Using Sequential Neural Networks", Master's thesis, University of Turku.
- Sawczyn, A., Binkowski, J., Janiak, D., Augustyniak, Ł., & Kajdanowicz, T. (2021). "Fact-checking: relevance assessment of references in the Polish political domain", Procedia Computer Science, 192, 1285-1293.