<?xml version='1.0' encoding='UTF-8'?>
<collection id="2020.clinicalnlp">
<volume id="1" ingest-date="2020-11-06">
<meta>
<booktitle>Proceedings of the 3rd Clinical Natural Language Processing Workshop</booktitle>
<editor><first>Anna</first><last>Rumshisky</last></editor>
<editor><first>Kirk</first><last>Roberts</last></editor>
<editor><first>Steven</first><last>Bethard</last></editor>
<editor><first>Tristan</first><last>Naumann</last></editor>
<publisher>Association for Computational Linguistics</publisher>
<address>Online</address>
<month>November</month>
<year>2020</year>
</meta>
<frontmatter>
<url hash="794ae931">2020.clinicalnlp-1.0</url>
</frontmatter>
<paper id="1">
<title>Various Approaches for Predicting Stroke Prognosis using Magnetic Resonance Imaging Text Records</title>
<author><first>Tak-Sung</first><last>Heo</last></author>
<author><first>Chulho</first><last>Kim</last></author>
<author><first>Jeong-Myeong</first><last>Choi</last></author>
<author><first>Yeong-Seok</first><last>Jeong</last></author>
<author><first>Yu-Seop</first><last>Kim</last></author>
<pages>1–6</pages>
<abstract>Stroke is one of the leading causes of death and disability worldwide. Stroke is treatable, but patients are prone to disability after treatment, and so it must be prevented. To grasp the degree of disability caused by stroke, we use magnetic resonance imaging text records to predict stroke and measure the performance according to document-level and sentence-level representations. As a result of the experiment, the document-level representation shows better performance.</abstract>
<url hash="3479fe47">2020.clinicalnlp-1.1</url>
<doi>10.18653/v1/2020.clinicalnlp-1.1</doi>
</paper>
<paper id="2">
<title>Multiple Sclerosis Severity Classification From Clinical Text</title>
<author><first>Alister</first><last>D’Costa</last></author>
<author><first>Stefan</first><last>Denkovski</last></author>
<author><first>Michal</first><last>Malyska</last></author>
<author><first>Sae Young</first><last>Moon</last></author>
<author><first>Brandon</first><last>Rufino</last></author>
<author><first>Zhen</first><last>Yang</last></author>
<author><first>Taylor</first><last>Killian</last></author>
<author><first>Marzyeh</first><last>Ghassemi</last></author>
<pages>7–23</pages>
<abstract>Multiple Sclerosis (MS) is a chronic, inflammatory and degenerative neurological disease, which is monitored by a specialist using the Expanded Disability Status Scale (EDSS) and recorded in unstructured text in the form of a neurology consult note. An EDSS measurement contains an overall ‘EDSS’ score and several functional subscores. Typically, expert knowledge is required to interpret consult notes and generate these scores. Previous approaches used limited context length Word2Vec embeddings and keyword searches to predict scores given a consult note, but often failed when scores were not explicitly stated. In this work, we present MS-BERT, the first publicly available transformer model trained on real clinical data other than MIMIC. Next, we present MSBC, a classifier that applies MS-BERT to generate embeddings and predict EDSS and functional subscores. Lastly, we explore combining MSBC with other models through the use of Snorkel to generate scores for unlabelled consult notes. MSBC achieves state-of-the-art performance on all metrics and prediction tasks and outperforms the models generated from the Snorkel ensemble. We improve Macro-F1 by 0.12 (to 0.88) for predicting EDSS and on average by 0.29 (to 0.63) for predicting functional subscores over previous Word2Vec CNN and rule-based approaches.</abstract>
<url hash="9f1af080">2020.clinicalnlp-1.2</url>
<doi>10.18653/v1/2020.clinicalnlp-1.2</doi>
</paper>
<paper id="3">
<title><fixed-case>BERT</fixed-case>-<fixed-case>XML</fixed-case>: Large Scale Automated <fixed-case>ICD</fixed-case> Coding Using <fixed-case>BERT</fixed-case> Pretraining</title>
<author><first>Zachariah</first><last>Zhang</last></author>
<author><first>Jingshu</first><last>Liu</last></author>
<author><first>Narges</first><last>Razavian</last></author>
<pages>24–34</pages>
<abstract>ICD coding is the task of classifying and coding all diagnoses, symptoms and procedures associated with a patient’s visit. The process is often manual, extremely time-consuming and expensive for hospitals as clinical interactions are usually recorded in free text medical notes. In this paper, we propose a machine learning model, BERT-XML, for large scale automated ICD coding of EHR notes, utilizing recently developed unsupervised pretraining that has achieved state of the art performance on a variety of NLP tasks. We train a BERT model from scratch on EHR notes, learning with a vocabulary better suited for EHR tasks and thus outperforming off-the-shelf models. We further adapt the BERT architecture for ICD coding with multi-label attention. We demonstrate the effectiveness of BERT-based models on the large scale ICD code classification task using millions of EHR notes to predict thousands of unique codes.</abstract>
<url hash="2fa42ef4">2020.clinicalnlp-1.3</url>
<doi>10.18653/v1/2020.clinicalnlp-1.3</doi>
</paper>
<paper id="4">
<title>Incorporating Risk Factor Embeddings in Pre-trained Transformers Improves Sentiment Prediction in Psychiatric Discharge Summaries</title>
<author><first>Xiyu</first><last>Ding</last></author>
<author><first>Mei-Hua</first><last>Hall</last></author>
<author><first>Timothy</first><last>Miller</last></author>
<pages>35–40</pages>
<abstract>Reducing rates of early hospital readmission has been recognized and identified as a key to improve quality of care and reduce costs. There are a number of risk factors that have been hypothesized to be important for understanding re-admission risk, including such factors as problems with substance abuse, ability to maintain work, and relations with family. In this work, we develop RoBERTa-based models to predict the sentiment of sentences describing readmission risk factors in discharge summaries of patients with psychosis. We improve substantially on previous results by a scheme that shares information across risk factors while also allowing the model to learn risk factor-specific information.</abstract>
<url hash="4a86ef4f">2020.clinicalnlp-1.4</url>
<doi>10.18653/v1/2020.clinicalnlp-1.4</doi>
</paper>
<paper id="5">
<title>Information Extraction from <fixed-case>S</fixed-case>wedish Medical Prescriptions with Sig-Transformer Encoder</title>
<author><first>John</first><last>Pougué Biyong</last></author>
<author><first>Bo</first><last>Wang</last></author>
<author><first>Terry</first><last>Lyons</last></author>
<author><first>Alejo</first><last>Nevado-Holgado</last></author>
<pages>41–54</pages>
<abstract>Relying on large pretrained language models such as Bidirectional Encoder Representations from Transformers (BERT) for encoding and adding a simple prediction layer has led to impressive performance in many clinical natural language processing (NLP) tasks. In this work, we present a novel extension to the Transformer architecture, by incorporating the signature transform with the self-attention model. This architecture is added between the embedding and prediction layers. Experiments on new Swedish prescription data show the proposed architecture to be superior in two of the three information extraction tasks, compared to baseline models. Finally, we evaluate two different embedding approaches: applying Multilingual BERT, and translating the Swedish text to English and then encoding it with a BERT model pretrained on clinical notes.</abstract>
<url hash="d2aa377c">2020.clinicalnlp-1.5</url>
<attachment type="OptionalSupplementaryMaterial" hash="98b44386">2020.clinicalnlp-1.5.OptionalSupplementaryMaterial.zip</attachment>
<doi>10.18653/v1/2020.clinicalnlp-1.5</doi>
</paper>
<paper id="6">
<title>Evaluation of Transfer Learning for Adverse Drug Event (<fixed-case>ADE</fixed-case>) and Medication Entity Extraction</title>
<author><first>Sankaran</first><last>Narayanan</last></author>
<author><first>Kaivalya</first><last>Mannam</last></author>
<author><first>Sreeranga P</first><last>Rajan</last></author>
<author><first>P Venkat</first><last>Rangan</last></author>
<pages>55–64</pages>
<abstract>We evaluate several biomedical contextual embeddings (based on BERT, ELMo, and Flair) for the detection of medication entities such as Drugs and Adverse Drug Events (ADE) from Electronic Health Records (EHR) using the 2018 ADE and Medication Extraction (Track 2) n2c2 data-set. We identify best practices for transfer learning, such as language-model fine-tuning and scalar mix. Our transfer learning models achieve strong performance in the overall task (F1=92.91%) as well as in ADE identification (F1=53.08%). Flair-based embeddings out-perform in the identification of context-dependent entities such as ADE. BERT-based embeddings out-perform in recognizing clinical terminology such as Drug and Form entities. ELMo-based embeddings deliver competitive performance in all entities. We develop a sentence-augmentation method for enhanced ADE identification benefiting BERT-based and ELMo-based models by up to 3.13% in F1 gains. Finally, we show that a simple ensemble of these models out-paces most current methods in ADE extraction (F1=55.77%).</abstract>
<url hash="0b8af9dc">2020.clinicalnlp-1.6</url>
<doi>10.18653/v1/2020.clinicalnlp-1.6</doi>
</paper>
<paper id="7">
<title><fixed-case>B</fixed-case>io<fixed-case>BERT</fixed-case>pt - A <fixed-case>P</fixed-case>ortuguese Neural Language Model for Clinical Named Entity Recognition</title>
<author><first>Elisa Terumi Rubel</first><last>Schneider</last></author>
<author><first>João Vitor Andrioli</first><last>de Souza</last></author>
<author><first>Julien</first><last>Knafou</last></author>
<author><first>Lucas Emanuel Silva e</first><last>Oliveira</last></author>
<author><first>Jenny</first><last>Copara</last></author>
<author><first>Yohan Bonescki</first><last>Gumiel</last></author>
<author><first>Lucas Ferro Antunes de</first><last>Oliveira</last></author>
<author><first>Emerson Cabrera</first><last>Paraiso</last></author>
<author><first>Douglas</first><last>Teodoro</last></author>
<author><first>Cláudia Maria Cabral Moro</first><last>Barra</last></author>
<pages>65–72</pages>
<abstract>With the growing amount of electronic health record data, clinical NLP tasks have become increasingly relevant for unlocking valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), on English corpora has recently been improved by contextualised language models, less research is available for clinical texts in low-resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72%, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.</abstract>
<url hash="bcfce65d">2020.clinicalnlp-1.7</url>
<doi>10.18653/v1/2020.clinicalnlp-1.7</doi>
</paper>
<paper id="8">
<title>Dilated Convolutional Attention Network for Medical Code Assignment from Clinical Text</title>
<author><first>Shaoxiong</first><last>Ji</last></author>
<author><first>Erik</first><last>Cambria</last></author>
<author><first>Pekka</first><last>Marttinen</last></author>
<pages>73–78</pages>
<abstract>Medical code assignment, which predicts medical codes from clinical texts, is a fundamental task of intelligent medical information systems. The emergence of deep models in natural language processing has boosted the development of automatic assignment methods. However, recent advanced neural architectures with flat convolutions or multi-channel feature concatenation ignore the sequential causal constraint within a text sequence and may not learn meaningful clinical text representations, especially for lengthy clinical notes with long-term sequential dependency. This paper proposes a Dilated Convolutional Attention Network (DCAN), integrating dilated convolutions, residual connections, and label attention, for medical code assignment. It adopts dilated convolutions to capture complex medical patterns with a receptive field which increases exponentially with dilation size. Experiments on a real-world clinical dataset empirically show that our model improves the state of the art.</abstract>
<url hash="a2b4c9e0">2020.clinicalnlp-1.8</url>
<doi>10.18653/v1/2020.clinicalnlp-1.8</doi>
</paper>
<paper id="9">
<title>Classification of Syncope Cases in <fixed-case>N</fixed-case>orwegian Medical Records</title>
<author><first>Ildiko</first><last>Pilan</last></author>
<author><first>Pål H.</first><last>Brekke</last></author>
<author><first>Fredrik A.</first><last>Dahl</last></author>
<author><first>Tore</first><last>Gundersen</last></author>
<author><first>Haldor</first><last>Husby</last></author>
<author><first>Øystein</first><last>Nytrø</last></author>
<author><first>Lilja</first><last>Øvrelid</last></author>
<pages>79–84</pages>
<abstract>Loss of consciousness, so-called syncope, is a commonly occurring symptom associated with worse prognosis for a number of heart-related diseases. We present a comparison of methods for a diagnosis classification task in Norwegian clinical notes, targeting syncope, i.e. fainting cases. We find that an often neglected baseline with keyword matching constitutes a rather strong basis, but more advanced methods do offer some improvement in classification performance, especially a convolutional neural network model. The developed pipeline is planned to be used for quantifying unregistered syncope cases in Norway.</abstract>
<url hash="24c0f953">2020.clinicalnlp-1.9</url>
<doi>10.18653/v1/2020.clinicalnlp-1.9</doi>
</paper>
<paper id="10">
<title>Comparison of Machine Learning Methods for Multi-label Classification of Nursing Education and Licensure Exam Questions</title>
<author><first>John</first><last>Langton</last></author>
<author><first>Krishna</first><last>Srihasam</last></author>
<author><first>Junlin</first><last>Jiang</last></author>
<pages>85–93</pages>
<abstract>In this paper, we evaluate several machine learning methods for multi-label classification of text questions. Every nursing student in the United States must pass the National Council Licensure Examination (NCLEX) to begin professional practice. NCLEX defines a number of competencies on which students are evaluated. By labeling test questions with NCLEX competencies, we can score students according to their performance in each competency. This information helps instructors measure how prepared students are for the NCLEX, as well as which competencies they may need help with. A key challenge is that questions may be related to more than one competency. Labeling questions with NCLEX competencies, therefore, equates to a multi-label, text classification problem where each competency is a label. Here we present an evaluation of several methods to support this use case along with a proposed approach. While our work is grounded in the nursing education domain, the methods described here can be used for any multi-label, text classification use case.</abstract>
<url hash="514bd798">2020.clinicalnlp-1.10</url>
<doi>10.18653/v1/2020.clinicalnlp-1.10</doi>
</paper>
<paper id="11">
<title>Clinical <fixed-case>XLN</fixed-case>et: Modeling Sequential Clinical Notes and Predicting Prolonged Mechanical Ventilation</title>
<author><first>Kexin</first><last>Huang</last></author>
<author><first>Abhishek</first><last>Singh</last></author>
<author><first>Sitong</first><last>Chen</last></author>
<author><first>Edward</first><last>Moseley</last></author>
<author><first>Chih-Ying</first><last>Deng</last></author>
<author><first>Naomi</first><last>George</last></author>
<author><first>Charlotta</first><last>Lindvall</last></author>
<pages>94–100</pages>
<abstract>Clinical notes contain rich information, which is relatively unexploited in predictive modeling compared to structured data. In this work, we developed a new clinical text representation, Clinical XLNet, that leverages the temporal information of the sequence of notes. We evaluated our models on the prolonged mechanical ventilation prediction problem, and our experiments demonstrated that Clinical XLNet outperforms the best baselines consistently. The models and scripts are made publicly available.</abstract>
<url hash="4dface8f">2020.clinicalnlp-1.11</url>
<attachment type="OptionalSupplementaryMaterial" hash="8dc70f55">2020.clinicalnlp-1.11.OptionalSupplementaryMaterial.pdf</attachment>
<doi>10.18653/v1/2020.clinicalnlp-1.11</doi>
</paper>
<paper id="12">
<title>Automatic recognition of abdominal lymph nodes from clinical text</title>
<author><first>Yifan</first><last>Peng</last></author>
<author><first>Sungwon</first><last>Lee</last></author>
<author><first>Daniel C.</first><last>Elton</last></author>
<author><first>Thomas</first><last>Shen</last></author>
<author><first>Yu-xing</first><last>Tang</last></author>
<author><first>Qingyu</first><last>Chen</last></author>
<author><first>Shuai</first><last>Wang</last></author>
<author><first>Yingying</first><last>Zhu</last></author>
<author><first>Ronald</first><last>Summers</last></author>
<author><first>Zhiyong</first><last>Lu</last></author>
<pages>101–110</pages>
<abstract>Lymph node status plays a pivotal role in the treatment of cancer. The extraction of lymph nodes from radiology text reports enables large-scale training of lymph node detection on MRI. In this work, we first propose an ontology of 41 types of abdominal lymph nodes with a hierarchical relationship. We then introduce an end-to-end approach based on the combination of rules and transformer-based methods to detect these abdominal lymph node mentions and classify their types from the MRI radiology reports. We demonstrate the superior performance of a model fine-tuned on MRI reports using BlueBERT, called MriBERT. We find that MriBERT outperforms the rule-based labeler (0.957 vs 0.644 in micro weighted F1-score) as well as other BERT-based variations (0.913 - 0.928). We make the code and MriBERT publicly available at https://github.com/ncbi-nlp/bluebert, with the hope that this method can facilitate the development of medical report annotators to produce labels from scratch at scale.</abstract>
<url hash="2016534d">2020.clinicalnlp-1.12</url>
<doi>10.18653/v1/2020.clinicalnlp-1.12</doi>
</paper>
<paper id="13">
<title>How You Ask Matters: The Effect of Paraphrastic Questions to <fixed-case>BERT</fixed-case> Performance on a Clinical <fixed-case>SQ</fixed-case>u<fixed-case>AD</fixed-case> Dataset</title>
<author><first>Sungrim (Riea)</first><last>Moon</last></author>
<author><first>Jungwei</first><last>Fan</last></author>
<pages>111–116</pages>
<abstract>Reading comprehension style question-answering (QA) based on patient-specific documents represents a growing area in clinical NLP with plentiful applications. Bidirectional Encoder Representations from Transformers (BERT) and its derivatives lead the state-of-the-art accuracy on the task, but most evaluation has treated the data as a pre-mixture without systematically looking into the potential effect of imperfect train/test questions. The current study seeks to address this gap by experimenting with full versus partial train/test data consisting of paraphrastic questions. Our key findings include 1) training with all pooled question variants yielded best accuracy, 2) the accuracy varied widely, from 0.74 to 0.80, when trained with each single question variant, and 3) questions of similar lexical/syntactic structure tended to induce identical answers. The results suggest that how you ask questions matters in BERT-based QA, especially at the training stage.</abstract>
<url hash="02b90c0a">2020.clinicalnlp-1.13</url>
<doi>10.18653/v1/2020.clinicalnlp-1.13</doi>
</paper>
<paper id="14">
<title>Relative and Incomplete Time Expression Anchoring for Clinical Text</title>
<author><first>Louise</first><last>Dupuis</last></author>
<author><first>Nicol</first><last>Bergou</last></author>
<author><first>Hegler</first><last>Tissot</last></author>
<author><first>Sumithra</first><last>Velupillai</last></author>
<pages>117–129</pages>
<abstract>Extracting and modeling temporal information in clinical text is an important element for developing timelines and disease trajectories. Time information in written text varies in preciseness and explicitness, posing challenges for NLP approaches that aim to accurately anchor temporal information on a timeline. Relative and incomplete time expressions (RI-Timexes) are expressions that require additional information for their temporal anchor to be resolved, but few studies have addressed this challenge specifically. In this study, we aimed to reproduce and verify a classification approach for identifying anchor dates and relations in clinical text, and propose a novel relation classification approach for this task.</abstract>
<url hash="a641c1be">2020.clinicalnlp-1.14</url>
<doi>10.18653/v1/2020.clinicalnlp-1.14</doi>
</paper>
<paper id="15">
<title><fixed-case>M</fixed-case>e<fixed-case>DAL</fixed-case>: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining</title>
<author><first>Zhi</first><last>Wen</last></author>
<author><first>Xing Han</first><last>Lu</last></author>
<author><first>Siva</first><last>Reddy</last></author>
<pages>130–135</pages>
<abstract>One of the biggest challenges that prohibit the use of many current NLP methods in clinical settings is the availability of public datasets. In this work, we present MeDAL, a large medical text dataset curated for abbreviation disambiguation, designed for natural language understanding pre-training in the medical domain. We pre-trained several models of common architectures on this dataset and empirically showed that such pre-training leads to improved performance and convergence speed when fine-tuning on downstream medical tasks.</abstract>
<url hash="5eaf318e">2020.clinicalnlp-1.15</url>
<attachment type="OptionalSupplementaryMaterial" hash="a36a04dd">2020.clinicalnlp-1.15.OptionalSupplementaryMaterial.zip</attachment>
<doi>10.18653/v1/2020.clinicalnlp-1.15</doi>
</paper>
<paper id="16">
<title>Knowledge Grounded Conversational Symptom Detection with Graph Memory Networks</title>
<author><first>Hongyin</first><last>Luo</last></author>
<author><first>Shang-Wen</first><last>Li</last></author>
<author><first>James</first><last>Glass</last></author>
<pages>136–145</pages>
<abstract>In this work, we propose a novel goal-oriented dialog task, automatic symptom detection. We build a system that can interact with patients through dialog to detect and collect clinical symptoms automatically, which can save a doctor’s time interviewing the patient. Given a set of explicit symptoms provided by the patient to initiate a dialog for diagnosing, the system is trained to collect implicit symptoms by asking questions, in order to collect more information for making an accurate diagnosis. After getting the reply from the patient for each question, the system also decides whether current information is enough for a human doctor to make a diagnosis. To achieve this goal, we propose two neural models and a training pipeline for the multi-step reasoning task. We also build a knowledge graph as additional inputs to further improve model performance. Experiments show that our model significantly outperforms the baseline by 4%, discovering 67% of implicit symptoms on average with a limited number of questions.</abstract>
<url hash="61beec04">2020.clinicalnlp-1.16</url>
<doi>10.18653/v1/2020.clinicalnlp-1.16</doi>
</paper>
<paper id="17">
<title>Pretrained Language Models for Biomedical and Clinical Tasks: Understanding and Extending the State-of-the-Art</title>
<author><first>Patrick</first><last>Lewis</last></author>
<author><first>Myle</first><last>Ott</last></author>
<author><first>Jingfei</first><last>Du</last></author>
<author><first>Veselin</first><last>Stoyanov</last></author>
<pages>146–157</pages>
<abstract>A large array of pretrained models are available to the biomedical NLP (BioNLP) community. Finding the best model for a particular task can be difficult and time-consuming. For many applications in the biomedical and clinical domains, it is crucial that models can be built quickly and are highly accurate. We present a large-scale study across 18 established biomedical and clinical NLP tasks to determine which of several popular open-source biomedical and clinical NLP models work well in different settings. Furthermore, we apply recent advances in pretraining to train new biomedical language models, and carefully investigate the effect of various design choices on downstream performance. Our best models perform well in all of our benchmarks, and set new State-of-the-Art in 9 tasks. We release these models in the hope that they can help the community to speed up and increase the accuracy of BioNLP and text mining applications.</abstract>
<url hash="98e9c10d">2020.clinicalnlp-1.17</url>
<attachment type="OptionalSupplementaryMaterial" hash="4d64c64b">2020.clinicalnlp-1.17.OptionalSupplementaryMaterial.pdf</attachment>
<doi>10.18653/v1/2020.clinicalnlp-1.17</doi>
</paper>
<paper id="18">
<title>Assessment of <fixed-case>D</fixed-case>istil<fixed-case>BERT</fixed-case> performance on Named Entity Recognition task for the detection of Protected Health Information and medical concepts</title>
<author><first>Macarious</first><last>Abadeer</last></author>
<pages>158–167</pages>
<abstract>Bidirectional Encoder Representations from Transformers (BERT) models achieve state-of-the-art performance on a number of Natural Language Processing tasks. However, their model size on disk often exceeds 1 GB, and the process of fine-tuning them and using them to run inference consumes significant hardware resources and runtime. This makes them hard to deploy to production environments. This paper fine-tunes DistilBERT, a lightweight deep learning model, on medical text for the named entity recognition task of Protected Health Information (PHI) and medical concepts. This work provides a full assessment of the performance of DistilBERT in comparison with BERT models that were pre-trained on medical text. For the named entity recognition task of PHI, DistilBERT achieved almost the same results as medical versions of BERT in terms of F1 score at almost half the runtime while consuming approximately half the disk space. On the other hand, for the detection of medical concepts, DistilBERT’s F1 score was lower by 4 points on average than that of medical BERT variants.</abstract>
<url hash="de61eacb">2020.clinicalnlp-1.18</url>
<doi>10.18653/v1/2020.clinicalnlp-1.18</doi>
</paper>
<paper id="19">
<title>Distinguishing between Dementia with Lewy bodies (<fixed-case>DLB</fixed-case>) and <fixed-case>A</fixed-case>lzheimer’s Disease (<fixed-case>AD</fixed-case>) using Mental Health Records: a Classification Approach</title>
<author><first>Zixu</first><last>Wang</last></author>
<author><first>Julia</first><last>Ive</last></author>
<author><first>Sinead</first><last>Moylett</last></author>
<author><first>Christoph</first><last>Mueller</last></author>
<author><first>Rudolf</first><last>Cardinal</last></author>
<author><first>Sumithra</first><last>Velupillai</last></author>
<author><first>John</first><last>O’Brien</last></author>
<author><first>Robert</first><last>Stewart</last></author>
<pages>168–177</pages>
<abstract>While Dementia with Lewy Bodies (DLB) is the second most common type of neurodegenerative dementia following Alzheimer’s Disease (AD), it is difficult to distinguish from AD. We propose a method for DLB detection by using mental health record (MHR) documents from a (3-month) period before a patient has been diagnosed with DLB or AD. Our objective is to develop a model that could be clinically useful to differentiate between DLB and AD across datasets from different healthcare institutions. We cast this as a classification task using Convolutional Neural Network (CNN), an efficient neural model for text classification. We experiment with different representation models, and explore the features that contribute to model performances. In addition, we apply temperature scaling, a simple but efficient model calibration method, to produce more reliable predictions. We believe the proposed method has important potential for clinical applications using routine healthcare records, and for generalising to other relevant clinical record datasets. To the best of our knowledge, this is the first attempt to distinguish DLB from AD using mental health records, and to improve the reliability of DLB predictions.</abstract>
<url hash="40631b32">2020.clinicalnlp-1.19</url>
<doi>10.18653/v1/2020.clinicalnlp-1.19</doi>
</paper>
<paper id="20">
<title>Weakly Supervised Medication Regimen Extraction from Medical Conversations</title>
<author><first>Dhruvesh</first><last>Patel</last></author>
<author><first>Sandeep</first><last>Konam</last></author>
<author><first>Sai</first><last>Prabhakar</last></author>
<pages>178–193</pages>
<abstract>Automated Medication Regimen (MR) extraction from medical conversations can not only improve recall and help patients follow through with their care plan, but also reduce the documentation burden for doctors. In this paper, we focus on extracting spans for frequency, route and change, corresponding to medications discussed in the conversation. We first describe a unique dataset of annotated doctor-patient conversations and then present a weakly supervised model architecture that can perform span extraction using noisy classification data. The model utilizes an attention bottleneck inside a classification model to perform the extraction. We experiment with several variants of attention scoring and projection functions and propose a novel transformer-based attention scoring function (TAScore). The proposed combination of TAScore and Fusedmax projection achieves a 10 point increase in Longest Common Substring F1 compared to the baseline of additive scoring plus softmax projection.</abstract>
<url hash="4ad0b00a">2020.clinicalnlp-1.20</url>
<doi>10.18653/v1/2020.clinicalnlp-1.20</doi>
</paper>
<paper id="21">
<title>Extracting Relations between Radiotherapy Treatment Details</title>
<author><first>Danielle</first><last>Bitterman</last></author>
<author><first>Timothy</first><last>Miller</last></author>
<author><first>David</first><last>Harris</last></author>
<author><first>Chen</first><last>Lin</last></author>
<author><first>Sean</first><last>Finan</last></author>
<author><first>Jeremy</first><last>Warner</last></author>
<author><first>Raymond</first><last>Mak</last></author>
<author><first>Guergana</first><last>Savova</last></author>
<pages>194–200</pages>
<abstract>We present work on extraction of radiotherapy treatment information from the clinical narrative in the electronic medical records. Radiotherapy is a central component of the treatment of most solid cancers. Its details are described in non-standardized fashions using jargon not found in other medical specialties, complicating the already difficult task of manual data extraction. We examine the performance of several state-of-the-art neural methods for relation extraction of radiotherapy treatment details, with a goal of automating detailed information extraction. The neural systems perform at 0.82-0.88 macro-average F1, which approximates or in some cases exceeds the inter-annotator agreement. To the best of our knowledge, this is the first effort to develop models for radiotherapy relation extraction and one of the few efforts for relation extraction to describe cancer treatment in general.</abstract>
<url hash="09caa91e">2020.clinicalnlp-1.21</url>
<doi>10.18653/v1/2020.clinicalnlp-1.21</doi>
</paper>
<paper id="22">
<title>Cancer Registry Information Extraction via Transfer Learning</title>
<author><first>Yan-Jie</first><last>Lin</last></author>
<author><first>Hong-Jie</first><last>Dai</last></author>
<author><first>You-Chen</first><last>Zhang</last></author>
<author><first>Chung-Yang</first><last>Wu</last></author>
<author><first>Yu-Cheng</first><last>Chang</last></author>
<author><first>Pin-Jou</first><last>Lu</last></author>
<author><first>Chih-Jen</first><last>Huang</last></author>
<author><first>Yu-Tsang</first><last>Wang</last></author>
<author><first>Hui-Min</first><last>Hsieh</last></author>
<author><first>Kun-San</first><last>Chao</last></author>
<author><first>Tsang-Wu</first><last>Liu</last></author>
<author><first>I-Shou</first><last>Chang</last></author>
<author><first>Yi-Hsin Connie</first><last>Yang</last></author>
<author><first>Ti-Hao</first><last>Wang</last></author>
<author><first>Ko-Jiunn</first><last>Liu</last></author>
<author><first>Li-Tzong</first><last>Chen</last></author>
<author><first>Sheau-Fang</first><last>Yang</last></author>
<pages>201–208</pages>
<abstract>A cancer registry is a critical and massive database for which various types of domain knowledge are needed and whose maintenance requires labor-intensive data curation. In order to facilitate the curation process for building a high-quality and integrated cancer registry database, we compiled a cross-hospital corpus and applied neural network methods to develop a natural language processing system for extracting cancer registry variables buried in unstructured pathology reports. The performance of the developed networks was compared with various baselines using standard micro-precision, recall and F-measure. Furthermore, we conducted experiments to study the feasibility of applying transfer learning to rapidly develop a well-performing system for processing reports from different sources that might be presented in different writing styles and formats. The results demonstrate that the transfer learning method enables us to develop a satisfactory system for a new hospital with only a few annotations and suggest more opportunities to reduce the burden of cancer registry curation.</abstract>
<url hash="837bfaa6">2020.clinicalnlp-1.22</url>
<doi>10.18653/v1/2020.clinicalnlp-1.22</doi>
</paper>
<paper id="23">
<title><fixed-case>PHICON</fixed-case>: Improving Generalization of Clinical Text De-identification Models via Data Augmentation</title>
<author><first>Xiang</first><last>Yue</last></author>
<author><first>Shuang</first><last>Zhou</last></author>
<pages>209–214</pages>
<abstract>De-identification is the task of identifying protected health information (PHI) in the clinical text. Existing neural de-identification models often fail to generalize to a new dataset. We propose a simple yet effective data augmentation method PHICON to alleviate the generalization issue. PHICON consists of PHI augmentation and Context augmentation, which creates augmented training corpora by replacing PHI entities with named-entities sampled from external sources, and by changing background context with synonym replacement or random word insertion, respectively. Experimental results on the i2b2 2006 and 2014 de-identification challenge datasets show that PHICON can help three selected de-identification models boost F1-score (by at most 8.6%) on cross-dataset test setting. We also discuss how much augmentation to use and how each augmentation method influences the performance.</abstract>
<url hash="4d62a311">2020.clinicalnlp-1.23</url>
<doi>10.18653/v1/2020.clinicalnlp-1.23</doi>
</paper>
<paper id="24">
<title>Where’s the Question? A Multi-channel Deep Convolutional Neural Network for Question Identification in Textual Data</title>
<author><first>George</first><last>Michalopoulos</last></author>
<author><first>Helen</first><last>Chen</last></author>
<author><first>Alexander</first><last>Wong</last></author>
<pages>215–226</pages>
<abstract>In most clinical practice settings, there is no rigorous reviewing of the clinical documentation, resulting in inaccurate information captured in the patient medical records. The gold standard in clinical data capturing is achieved via “expert-review”, where clinicians can have a dialogue with a domain expert (reviewers) and ask them questions about data entry rules. Automatically identifying “real questions” in these dialogues could uncover ambiguities or common problems in data capturing in a given clinical setting. In this study, we proposed a novel multi-channel deep convolutional neural network architecture, namely Quest-CNN, for the purpose of separating real questions that expect an answer (information or help) about an issue from sentences that are not questions, as well as from questions referring to an issue mentioned in a nearby sentence (e.g., can you clarify this?), which we will refer to as “c-questions”. We conducted a comprehensive performance comparison analysis of the proposed multi-channel deep convolutional neural network against other deep neural networks. Furthermore, we evaluated the performance of traditional rule-based and learning-based methods for detecting question sentences. The proposed Quest-CNN achieved the best F1 score both on a dataset of data entry-review dialogue in a dialysis care setting, and on a general domain dataset.</abstract>
<url hash="8cf7e093">2020.clinicalnlp-1.24</url>
<doi>10.18653/v1/2020.clinicalnlp-1.24</doi>
</paper>
<paper id="25">
<title>Learning from Unlabelled Data for Clinical Semantic Textual Similarity</title>
<author><first>Yuxia</first><last>Wang</last></author>
<author><first>Karin</first><last>Verspoor</last></author>
<author><first>Timothy</first><last>Baldwin</last></author>
<pages>227–233</pages>
<abstract>Domain pretraining followed by task fine-tuning has become the standard paradigm for NLP tasks, but requires in-domain labelled data for task fine-tuning. To overcome this, we propose to utilise domain unlabelled data by assigning pseudo labels from a general model. We evaluate the approach on two clinical STS datasets, and achieve r = 0.80 on N2C2-STS. Further investigation reveals that if the data distribution of unlabelled sentence pairs is closer to the test data, we can obtain better performance. By leveraging a large general-purpose STS dataset and small-scale in-domain training data, we obtain further improvements to r = 0.90, a new SOTA.</abstract>
<url hash="3d5250b1">2020.clinicalnlp-1.25</url>
<doi>10.18653/v1/2020.clinicalnlp-1.25</doi>
</paper>
<paper id="26">
<title>Joint Learning with Pre-trained Transformer on Named Entity Recognition and Relation Extraction Tasks for Clinical Analytics</title>
<author><first>Miao</first><last>Chen</last></author>
<author><first>Ganhui</first><last>Lan</last></author>
<author><first>Fang</first><last>Du</last></author>
<author><first>Victor</first><last>Lobanov</last></author>
<pages>234–242</pages>
<abstract>In drug development, protocols define how clinical trials are conducted, and are therefore of paramount importance. They contain key patient-, investigator-, medication-, and study-related information, often elaborated in different sections in the protocol texts. Granular-level parsing of a large quantity of existing protocols can accelerate clinical trial design and provide actionable insights into trial optimization. Here, we report our progress in using deep learning NLP algorithms to enable automated protocol analytics. In particular, we combined a pre-trained BERT transformer model with joint-learning strategies to simultaneously identify clinically relevant entities (i.e. Named Entity Recognition) and extract the syntactic relations between these entities (i.e. Relation Extraction) from the eligibility criteria section in protocol texts. When compared to standalone NER and RE models, our joint-learning strategy can effectively improve the performance of the RE task while retaining similarly high NER performance, likely due to the synergy of optimizing toward both tasks’ objectives via shared parameters. The derived NLP model provides an end-to-end solution to convert unstructured protocol texts into a structured data source, which will be embedded into a comprehensive clinical analytics workflow for downstream trial design missions such as patient population extraction, patient enrollment rate estimation, and protocol amendment prediction.</abstract>
<url hash="7a9864aa">2020.clinicalnlp-1.26</url>
<doi>10.18653/v1/2020.clinicalnlp-1.26</doi>
</paper>
<paper id="27">
<title>Extracting Semantic Aspects for Structured Representation of Clinical Trial Eligibility Criteria</title>
<author><first>Tirthankar</first><last>Dasgupta</last></author>
<author><first>Ishani</first><last>Mondal</last></author>
<author><first>Abir</first><last>Naskar</last></author>
<author><first>Lipika</first><last>Dey</last></author>
<pages>243–248</pages>
<abstract>Eligibility criteria in the clinical trials specify the characteristics that a patient must or must not possess in order to be treated according to a standard clinical care guideline. As the process of manual eligibility determination is time-consuming, automatic structuring of the eligibility criteria into various semantic categories or aspects is the need of the hour. Existing methods use hand-crafted rules and feature-based statistical machine learning methods to dynamically induce semantic aspects. However, in order to deal with paucity of aspect-annotated clinical trials data, we propose a novel weakly-supervised co-training based method which can exploit a large pool of unlabeled criteria sentences to augment the limited supervised training data, and consequently enhance the performance. Experiments with 0.2M criteria sentences show that the proposed approach outperforms the competitive supervised baselines by 12% in terms of micro-averaged F1 score for all the aspects. Probing deeper into analysis, we observe domain-specific information boosts up the performance by a significant margin.</abstract>
<url hash="218dcd43">2020.clinicalnlp-1.27</url>
<doi>10.18653/v1/2020.clinicalnlp-1.27</doi>
</paper>
<paper id="28">
<title>An Ensemble Approach for Automatic Structuring of Radiology Reports</title>
<author><first>Morteza</first><last>Pourreza Shahri</last></author>
<author><first>Amir</first><last>Tahmasebi</last></author>
<author><first>Bingyang</first><last>Ye</last></author>
<author><first>Henghui</first><last>Zhu</last></author>
<author><first>Javed</first><last>Aslam</last></author>
<author><first>Timothy</first><last>Ferris</last></author>
<pages>249–258</pages>
<abstract>Automatic structuring of electronic medical records is of high demand for clinical workflow solutions to facilitate extraction, storage, and querying of patient care information. However, developing a scalable solution is extremely challenging, specifically for radiology reports, as most healthcare institutes use either no template or department/institute specific templates. Moreover, radiologists’ reporting style varies from one to another as sentences are written in a telegraphic format and do not follow general English grammar rules. In this work, we present an ensemble method that consolidates the predictions of three models, capturing various attributes of textual information for automatic labeling of sentences with section labels. These three models are: 1) Focus Sentence model, capturing context of the target sentence; 2) Surrounding Context model, capturing the neighboring context of the target sentence; and finally, 3) Formatting/Layout model, aimed at learning report formatting cues. We utilize Bi-directional LSTMs, followed by sentence encoders, to acquire the context. Furthermore, we define several features that incorporate the structure of reports. We compare our proposed approach against multiple baselines and state-of-the-art approaches on a proprietary dataset as well as 100 manually annotated radiology notes from the MIMIC-III dataset, which we are making publicly available. Our proposed approach significantly outperforms other approaches by achieving 97.1% accuracy.</abstract>
<url hash="912a4471">2020.clinicalnlp-1.28</url>
<doi>10.18653/v1/2020.clinicalnlp-1.28</doi>
</paper>
<paper id="29">
<title>Utilizing Multimodal Feature Consistency to Detect Adversarial Examples on Clinical Summaries</title>
<author><first>Wenjie</first><last>Wang</last></author>
<author><first>Youngja</first><last>Park</last></author>
<author><first>Taesung</first><last>Lee</last></author>
<author><first>Ian</first><last>Molloy</last></author>
<author><first>Pengfei</first><last>Tang</last></author>
<author><first>Li</first><last>Xiong</last></author>
<pages>259–268</pages>
<abstract>Recent studies have shown that adversarial examples can be generated by applying small perturbations to the inputs such that the well-trained deep learning models will misclassify. With the increasing number of safety and security-sensitive applications of deep learning models, the robustness of deep learning models has become a crucial topic. The robustness of deep learning models for healthcare applications is especially critical because the unique characteristics and the high financial interests of the medical domain make it more sensitive to adversarial attacks. Among the modalities of medical data, the clinical summaries have higher risks to be attacked because they are generated by third-party companies. As few works studied adversarial threats on clinical summaries, in this work we first apply adversarial attack to clinical summaries of electronic health records (EHR) to show the text-based deep learning systems are vulnerable to adversarial examples. Secondly, benefiting from the multi-modality of the EHR dataset, we propose a novel defense method, MATCH (Multimodal feATure Consistency cHeck), which leverages the consistency between multiple modalities in the data to defend against adversarial examples on a single modality. Our experiments demonstrate the effectiveness of MATCH on a hospital readmission prediction task comparing with baseline methods.</abstract>
<url hash="c7203e39">2020.clinicalnlp-1.29</url>
<doi>10.18653/v1/2020.clinicalnlp-1.29</doi>
</paper>
<paper id="30">
<title>Advancing Seq2seq with Joint Paraphrase Learning</title>
<author><first>So Yeon</first><last>Min</last></author>
<author><first>Preethi</first><last>Raghavan</last></author>
<author><first>Peter</first><last>Szolovits</last></author>
<pages>269–279</pages>
<abstract>We address the problem of model generalization for sequence to sequence (seq2seq) architectures. We propose going beyond data augmentation via paraphrase-optimized multi-task learning and observe that it is useful in correctly handling unseen sentential paraphrases as inputs. Our models greatly outperform SOTA seq2seq models for semantic parsing on diverse domains (Overnight - up to 3.2% and emrQA - 7%) and Nematus, the winning solution for WMT 2017, for Czech to English translation (CzENG 1.6 - 1.5 BLEU).</abstract>
<url hash="bdb0b4cb">2020.clinicalnlp-1.30</url>
<doi>10.18653/v1/2020.clinicalnlp-1.30</doi>
</paper>
<paper id="31">
<title>On the diminishing return of labeling clinical reports</title>
<author><first>Jean-Baptiste</first><last>Lamare</last></author>
<author><first>Oloruntobiloba</first><last>Olatunji</last></author>
<author><first>Li</first><last>Yao</last></author>
<pages>280–290</pages>
<abstract>Ample evidence suggests that better machine learning models may be steadily obtained by training on increasingly larger datasets on natural language processing (NLP) problems from non-medical domains. Whether the same holds true for medical NLP has so far not been thoroughly investigated. This work shows that this is indeed not always the case. We reveal the somewhat counter-intuitive observation that performant medical NLP models may be obtained with a small amount of labeled data, quite the opposite of the common belief, most likely due to the domain specificity of the problem. We show quantitatively the effect of training data size on a fixed test set composed of two of the largest public chest x-ray radiology report datasets on the task of abnormality classification. The trained models not only make use of the training data efficiently, but also outperform the current state-of-the-art rule-based systems by a significant margin.</abstract>
<url hash="9fdbf8b6">2020.clinicalnlp-1.31</url>
<doi>10.18653/v1/2020.clinicalnlp-1.31</doi>
</paper>
<paper id="32">
<title>The <fixed-case>C</fixed-case>hilean Waiting List Corpus: a new resource for clinical Named Entity Recognition in <fixed-case>S</fixed-case>panish</title>
<author><first>Pablo</first><last>Báez</last></author>
<author><first>Fabián</first><last>Villena</last></author>
<author><first>Matías</first><last>Rojas</last></author>
<author><first>Manuel</first><last>Durán</last></author>
<author><first>Jocelyn</first><last>Dunstan</last></author>
<pages>291–300</pages>
<abstract>In this work we describe the Waiting List Corpus, consisting of de-identified referrals for several specialty consultations from the waiting list in Chilean public hospitals. A subset of 900 referrals was manually annotated with 9,029 entities, 385 attributes, and 284 pairs of relations with clinical relevance. A trained medical doctor annotated these referrals and then, together with three other researchers, consolidated each of the annotations. The annotated corpus has nested entities, with 32.2% of entities embedded in other entities. We use this annotated corpus to obtain preliminary results for Named Entity Recognition (NER). The best results were achieved by using a biLSTM-CRF architecture using word embeddings trained over Spanish Wikipedia together with clinical embeddings computed by the group. NER models applied to this corpus can leverage statistics of diseases and pending procedures within this waiting list. This work constitutes the first annotated corpus using clinical narratives from Chile, and one of the few for the Spanish language. The annotated corpus, the clinical word embeddings, and the annotation guidelines are freely released to the research community.</abstract>
<url hash="c574ca00">2020.clinicalnlp-1.32</url>
<doi>10.18653/v1/2020.clinicalnlp-1.32</doi>
</paper>
<paper id="33">
<title>Exploring Text Specific and Blackbox Fairness Algorithms in Multimodal Clinical <fixed-case>NLP</fixed-case></title>
<author><first>John</first><last>Chen</last></author>
<author><first>Ian</first><last>Berlot-Attwell</last></author>
<author><first>Xindi</first><last>Wang</last></author>
<author><first>Safwan</first><last>Hossain</last></author>
<author><first>Frank</first><last>Rudzicz</last></author>
<pages>301–312</pages>
<abstract>Clinical machine learning is increasingly multimodal, collected in both structured tabular formats and unstructured forms such as free text. We propose a novel task of exploring <i>fairness</i> on a multimodal clinical dataset, adopting <i>equalized odds</i> for the downstream medical prediction tasks. To this end, we investigate a modality-agnostic fairness algorithm - equalized odds post processing - and compare it to a text-specific fairness algorithm: debiased clinical word embeddings. Despite the fact that debiased word embeddings do not explicitly address equalized odds of protected groups, we show that a text-specific approach to fairness may simultaneously achieve a good balance of performance and classical notions of fairness. Our work opens the door for future work at the critical intersection of clinical NLP and fairness.</abstract>
<url hash="67e02280">2020.clinicalnlp-1.33</url>
<doi>10.18653/v1/2020.clinicalnlp-1.33</doi>
</paper>
</volume>
</collection>