<!doctype html>
<!--
Minimal Mistakes Jekyll Theme 4.16.4 by Michael Rose
Copyright 2013-2019 Michael Rose - mademistakes.com | @mmistakes
Free for personal and commercial use under the MIT license
https://github.com/mmistakes/minimal-mistakes/blob/master/LICENSE
-->
<html lang="en" class="no-js">
<head>
<meta charset="utf-8">
<!-- begin _includes/seo.html --><title>Network members - Interpreting Deep Learning</title>
<meta name="description" content="Website for 2019 NWA-ORC proposal BD.1910: ‘Interpreting Deep Learning Models for Text and Sound: Methods & Applications’.">
<meta property="og:type" content="website">
<meta property="og:locale" content="en_US">
<meta property="og:site_name" content="Interpreting Deep Learning">
<meta property="og:title" content="Network members">
<meta property="og:url" content="/members">
<meta property="og:image" content="/assets/images/network-bw-1.png">
<link rel="canonical" href="/members">
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "Person",
"name": "InterpretingDL",
"url": "https://github.com/pages/interpretingdl/interpretingDL.github.io",
"sameAs": null
}
</script>
<!-- end _includes/seo.html -->
<link href="/feed.xml" type="application/atom+xml" rel="alternate" title="Interpreting Deep Learning Feed">
<!-- https://t.co/dKP3o1e -->
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<script>
document.documentElement.className = document.documentElement.className.replace(/\bno-js\b/g, '') + ' js ';
</script>
<!-- For all browsers -->
<link rel="stylesheet" href="/assets/css/main.css">
<!--[if IE ]>
<style>
/* old IE unsupported flexbox fixes */
.greedy-nav .site-title {
padding-right: 3em;
}
.greedy-nav button {
position: absolute;
top: 0;
right: 0;
height: 100%;
}
</style>
<![endif]-->
<!-- start custom head snippets -->
<!-- insert favicons. use https://realfavicongenerator.net/ -->
<link rel="apple-touch-icon" sizes="180x180" href="/assets/images/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="/assets/images/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/assets/images/favicon-16x16.png">
<link rel="manifest" href="/assets/images/site.webmanifest">
<link rel="mask-icon" href="/assets/images/safari-pinned-tab.svg" color="#5bbad5">
<link rel="shortcut icon" href="/assets/images/favicon.ico">
<meta name="msapplication-TileColor" content="#da532c">
<meta name="msapplication-config" content="/assets/images/browserconfig.xml">
<meta name="theme-color" content="#ffffff">
<!-- end custom head snippets -->
</head>
<body class="layout--single wide">
<!--[if lt IE 9]>
<div class="notice--danger align-center" style="margin: 0;">You are using an <strong>outdated</strong> browser. Please <a href="https://browsehappy.com/">upgrade your browser</a> to improve your experience.</div>
<![endif]-->
<div class="masthead">
<div class="masthead__inner-wrap">
<div class="masthead__menu">
<nav id="site-nav" class="greedy-nav">
<a class="site-logo" href="/"><img src="/assets/images/brain.png" alt=""></a>
<a class="site-title" href="/">InterpretingDL</a>
<ul class="visible-links"></ul>
<button class="greedy-nav__toggle hidden" type="button">
<span class="visually-hidden">Toggle menu</span>
<div class="navicon"></div>
</button>
<ul class="hidden-links hidden"></ul>
</nav>
</div>
</div>
</div>
<div class="initial-content">
<div class="page__hero--overlay"
style="background-color: #5e616c; background-image: url('/assets/images/network-bw-1.png');"
>
<div class="wrapper">
<h1 id="page-title" class="page__title" itemprop="headline">
Network members
</h1>
</div>
</div>
<div id="main" role="main">
<div class="sidebar sticky">
<nav class="nav__list">
<input id="ac-toc" name="accordion-toc" type="checkbox" />
<label for="ac-toc">Toggle Menu</label>
<ul class="nav__items">
<li>
<a href="/"><span class="nav__sub-title">Home</span></a>
</li>
<li>
<a href="/projects"><span class="nav__sub-title">Projects</span></a>
</li>
<li>
<a href="/people"><span class="nav__sub-title">People</span></a>
</li>
<li>
<a href="/methods"><span class="nav__sub-title">Methods</span></a>
</li>
<li>
<a href="/papers"><span class="nav__sub-title">Key papers</span></a>
</li>
</ul>
</nav>
</div>
<article class="page" itemscope itemtype="https://schema.org/CreativeWork">
<meta itemprop="headline" content="Network members">
<div class="page__inner-wrap">
<section class="page__content" itemprop="text">
<p>All members of the network are actively working on interpretability, but they have come to this topic from very different domains. The network brings together crucial expertise on methodology, lexical semantics, semantic and syntactic parsing, machine translation, computational phonology, music recommendation, language acquisition and more. This will allow us to bring different perspectives on what interpretation of deep learning means in different scenarios and for different goals.</p>
<p><img style="float: left; width: 20%; margin-right: 20px; margin-top: 15px; margin-bottom: 5px;" src="../assets/images/jelle2015.jpg" />
<strong>Willem Zuidema</strong> is associate professor of computational linguistics and cognitive science at ILLC (UvA), with a long-term interest in the neural basis of language. Because of that cognitive interest, he was an early contributor to deep learning in NLP, with work on neural parsing published as early as 2008 (Borensztajn & Zuidema, 2008, CogSci), and pioneering contributions to tree-shaped neural networks, including the TreeLSTM (Le & Zuidema <a class="citation" href="#le2015">(2015)</a> <!--2015-->, *SEM; published concurrently with groups from Stanford and Montreal). In 2016 he and his students introduced Diagnostic Classification <a class="citation" href="#veldhoen2016">(Veldhoen, Hupkes, & Zuidema, 2016; Hupkes, Veldhoen, & Zuidema, 2018; Giulianelli, Harding, Mohnert, Hupkes, & Zuidema, 2018)</a> <!--(Veldhoen et al., 2016; Hupkes et al 2018; Giulianelli et al. 2018)-->, one of the key <em>interpretability</em> techniques. He has further done research on the integration of formal logic and deep learning <a class="citation" href="#veldhoen2017">(Veldhoen & Zuidema, 2017; Repplinger, Beinborn, & Zuidema, 2018; Mul & Zuidema, 2019)</a> <!--(Veldhoen & Zuidema, 2017; Repplinger, Beinborn & Zuidema, 2018; Mul & Zuidema, 2019)-->. Other directly relevant work focuses on further <em>interpretability techniques</em>, including Representational Similarity Analysis <a class="citation" href="#abnar2019">(Abnar, Beinborn, Choenni, & Zuidema, 2019)</a> <!--(Abnar et al., 2019)--> and contextual decomposition (Jumelet et al., 2019).</p>
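<p>The core of the diagnostic-classifier idea can be illustrated with a toy probe: train a simple linear classifier on a network's hidden states to test whether some property of interest is linearly decodable from them. The sketch below is purely illustrative, with synthetic "hidden states" standing in for real network activations; all names and data are hypothetical.</p>

```python
# Toy sketch of a diagnostic classifier (probe), with synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# Pretend hidden states from some trained network: 200 timesteps, 50 dimensions.
hidden_states = rng.normal(size=(200, 50))
# A "linguistic property" that happens to be encoded in the first hidden dimension.
labels = (hidden_states[:, 0] > 0).astype(int)

# Logistic-regression probe trained with plain gradient descent.
w, b = np.zeros(50), 0.0
for _ in range(500):
    z = hidden_states @ w + b
    p = 1.0 / (1.0 + np.exp(-z))          # predicted probability of the property
    grad = p - labels                      # gradient of the log-loss w.r.t. z
    w -= 0.1 * (hidden_states.T @ grad) / len(labels)
    b -= 0.1 * grad.mean()

# High probe accuracy suggests the property is (linearly) encoded in the states.
accuracy = ((hidden_states @ w + b > 0).astype(int) == labels).mean()
print(round(accuracy, 2))
```

<p>In practice the probe is evaluated on held-out states, and control tasks or baselines are needed to rule out trivial decodability; this sketch omits both for brevity.</p>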
<p><img style="float: left; width: 20%; margin-right: 20px; margin-top: 15px; margin-bottom: 5px;" src="../assets/images/alishahi.png" />
<strong>Afra Alishahi</strong> is an Associate Professor of Cognitive Science and Artificial Intelligence at Tilburg University. Her main research interests are developing computational models for studying the process of human language acquisition, studying the emergence of linguistic structure in grounded models of language learning, and developing tools and techniques for analyzing linguistic representations in neural models of language. She has received a number of research grants, including an NWO Aspasia grant, an NWO Natural Artificial Intelligence grant and an e-Science Center/NWO grant. She is the co-organizer of the BlackboxNLP 2018 workshop, the first official venue dedicated to analyzing and interpreting neural networks for NLP. She has a number of well-received publications on the topic of interpretability of neural network models of language, including one that received the best paper award at the Conference on Computational Natural Language Learning (CoNLL) in 2017.</p>
<p><img style="float: left; width: 20%; margin-right: 20px; margin-top: 15px; margin-bottom: 5px;" src="../assets/images/chrupała.jpg" />
<strong>Grzegorz Chrupała</strong> is an assistant professor at the Department of Cognitive Science and Artificial Intelligence at Tilburg University. His research focuses on computational models of language learning from multimodal signals such as speech and vision and on the analysis and interpretability of representations emerging in deep neural networks. He has served as area chair for ACL, EMNLP and CoNLL, and was general chair for Benelearn 2018. He co-organized the 2018 and 2019 editions of BlackboxNLP, the Workshop on Analyzing and Interpreting Neural Networks for NLP. Together with Afra Alishahi and students, he did some of the pioneering research on analyzing deep learning methods for visually grounded language <a class="citation" href="#kadar2017">(Kádár, Chrupała, & Alishahi, 2017)</a><!--(Kádár, Chrupała and Alishahi 2017, CL)--> as well as for speech <a class="citation" href="#alishahi2017">(Alishahi, Barking, & Chrupała, 2017)</a> <!--(Alishahi, Barking and Chrupała 2017, CoNLL)-->. In their most recent work in the area of analysis and interpretation Chrupała and Alishahi <a class="citation" href="#chrupala2019">(2019)</a> <!--(2019, ACL)--> introduced methods based on Representational Similarity Analysis (RSA) and Tree Kernels (TK) which directly quantify how strongly information encoded in neural activation patterns corresponds to information represented by symbolic structures.</p>
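<p>In essence, Representational Similarity Analysis compares two representations of the same stimuli not directly but through their pairwise similarity structures. The following minimal sketch uses synthetic data in place of real neural activations and symbolic features; all names and data are illustrative assumptions, not the published method's code.</p>

```python
# Minimal sketch of Representational Similarity Analysis (RSA) on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_items = 30

# Two "representations" of the same 30 stimuli; the second is a noisy linear
# image of the first, standing in for e.g. neural activations vs. another view.
rep_a = rng.normal(size=(n_items, 64))
rep_b = rep_a @ rng.normal(size=(64, 16)) + 0.1 * rng.normal(size=(n_items, 16))

def rdm(rep):
    """Representational dissimilarity matrix: 1 - Pearson r between item pairs."""
    return 1.0 - np.corrcoef(rep)

def upper(m):
    """Flatten the upper triangle of a square matrix, excluding the diagonal."""
    return m[np.triu_indices_from(m, k=1)]

# RSA score: correlation between the two RDMs' pairwise dissimilarities.
rsa_score = np.corrcoef(upper(rdm(rep_a)), upper(rdm(rep_b)))[0, 1]
print(round(rsa_score, 2))
```

<p>In the work cited above, the second similarity structure comes from symbolic representations (via tree kernels); here a noisy linear transform merely stands in for a second view of the same stimuli.</p>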
<p><img style="float: left; width: 20%; margin-right: 20px; margin-top: 10px; margin-bottom: 5px;" src="../assets/images/bisazza.jpg" />
<strong>Arianna Bisazza</strong>
is an assistant professor at the Leiden Institute of Advanced Computer Science (LIACS) of Leiden University, fully funded by a VENI grant since 2016. Her research aims to identify intrinsic limitations of current language modeling paradigms and to design robust NLP algorithms that can adapt to the diverse range of linguistic phenomena observed among the world’s languages. She has a long track record of contributions to machine translation for challenging language pairs <a class="citation" href="#bisazza2012">(Bisazza & Federico, 2012; Tran, Bisazza, & Monz, 2014; Fadaee, Bisazza, & Monz, 2017)</a>
<!--(Bisazza & Federico 2012; Tran, Bisazza & Monz, 2014; Fadaee, Bisazza & Monz, 2016)-->. Together with colleagues at the University of Amsterdam, she proposed the Recurrent Memory Network, one of the very first modifications to deep-learning based language models aimed at improving interpretability <a class="citation" href="#tran2016">(Tran, Bisazza, & Monz, 2016)</a><!--(Tran, Bisazza & Monz, 2016)-->. Other recent contributions to the interpretability of NLP models include analyses of MT outputs <a class="citation" href="#bentivogli2018">(Bentivogli, Bisazza, Cettolo, & Federico, 2018)</a><!--(Bentivogli et al., 2018)--> and probing tasks for recurrent language models <a class="citation" href="#tran2018">(Tran, Bisazza, & Monz, 2018; Bisazza & Tump, 2018)</a><!--(Tran, Bisazza, and Monz, 2018; Bisazza & Tump, 2018)-->.</p>
<p><img style="float: left; width: 20%; margin-right: 20px; margin-top: 10px; margin-bottom: 5px;" src="../assets/images/hupkes.jpg" />
<strong>Dieuwke Hupkes</strong> is a PhD student at the Institute for Logic, Language and Computation, working together with Willem Zuidema. In her research, she focuses on understanding how recurrent neural networks can understand and learn the types of hierarchical structures that occur in natural language, a problem that, for her, touches on the core of understanding language itself. Although artificial neural networks are of course nothing like the real brain, she hopes that understanding the principles by which they can encode processes can still teach us something that will lead to a better understanding of language!</p>
<p><img style="float: left; width: 20%; margin-right: 20px; margin-top: 10px; margin-bottom: 5px;" src="../assets/images/lentz.jpg" />
<strong>Tom Lentz</strong> is an assistant professor in computational phonology and cognitive science at the ILLC of the UvA. He works on the detection of prosodic structure in speech, including the automatic classification of pitch contours gathered in controlled experiments. He recently obtained an interdisciplinary research grant for a project on the detection of irony in speech (funding for one PhD student). Other relevant experience includes an investigation of individual variation in the use of prosody to mark focus <a class="citation" href="#lentz2015">(Lentz & Chen, 2015)</a><!--(Lentz & Chen, 2015)-->.</p>
<p><img style="float: left; width: 20%; margin-right: 20px; margin-top: 15px; margin-bottom: 5px;" src="../assets/images/tenbosch.png" /><strong>Louis ten Bosch</strong> (RU, Nijmegen) has expertise in automatic speech recognition, <em>computational modelling of cognitive processes</em>, speech decoding techniques using phonological features, and structure discovery methods. He is one of the co-organizers of the successful DNN interpretation session “what we learn from DNNs” held in 2018 at the language and speech technology conference Interspeech in Hyderabad, India. One recent advance in understanding artificial networks relates the mathematical layer-to-layer transformations in a network to more structural descriptions of datasets, as provided by linear mixed-effects models and by Generalized Additive Models. More recently, in collaboration with Mirjam Ernestus, he has been involved in computational models of human spoken word comprehension, a number of abstract-versus-exemplar studies in psycholinguistics, and (with Ton Dijkstra) in computational modelling of online sentence processing of idiomatic expressions.</p>
<p><img style="float: left; width: 20%; margin-right: 20px; margin-top: 15px; margin-bottom: 5px;" src="../assets/images/hendrickx.jpeg" />
<strong>Iris Hendrickx</strong> (RU, Nijmegen) is a researcher in computational linguistics and digital humanities with a focus on the areas of machine learning, lexical and relational semantics, natural language processing, techniques for document understanding and text mining. She provides expertise to the network on creating text data enriched with human annotation for training such models, and on applying and evaluating these models and augmenting them with domain expert knowledge.</p>
<p><img style="float: left; width: 20%; margin-right: 20px; margin-top: 15px; margin-bottom: 5px;" src="../assets/images/fokkens.jpg" /> <strong>Antske Fokkens</strong> is an assistant professor in computational linguistics at the Vrije Universiteit. Her main expertise lies in methodological questions in computational linguistics and, in particular, in the importance of understanding the implications of chosen technologies, training data and features when applying computational language models in interdisciplinary contexts. In her research she has (among others) pointed out fundamental problems with reproducibility <a class="citation" href="#fokkens2013">(Fokkens et al., 2013)</a><!--(Fokkens et al. (2013))--> as well as the need for deeper analysis of the accuracy of our tools <a class="citation" href="#le2017">(Le & Fokkens, 2017; Fokkens et al., 2017)</a><!--(Le and Fokkens, 2017; Fokkens et al. 2017)-->. She collaborates extensively with researchers in the humanities and social sciences, as multiple joint publications, grants and events attest, and she is a member of the Computational Communication Science lab Amsterdam. She is a recognized international expert and has obtained multiple research grants, including a VENI grant in 2015 and co-applicantship of an NWO Vrije Competitie grant, as well as project funding from societal partners.</p>
<p><img style="float: left; width: 20%; margin-right: 20px; margin-top: 15px; margin-bottom: 5px;" src="../assets/images/burgoyne.jpg" /><strong>John Ashley Burgoyne</strong> is the Lecturer in Computational Musicology at the University of Amsterdam and researcher in the Music Cognition Group at the Institute for Logic, Language, and Computation. Cross-appointed in Musicology and Artificial Intelligence, he is interested in understanding musical behaviour at the audio level, using large-scale experiments and audio corpora. His McGill–Billboard corpus of time-aligned chord and structure transcriptions has served as a backbone for audio chord estimation techniques. His Hooked on Music project reached hundreds of thousands of participants in almost every country on Earth while collecting data to understand long-term musical memory.</p>
<h2 id="references">References</h2>
<div class="bibliography"><div>
<div id="bib-item-abnar2019" class="bib-entry my-3" data-searchable="" data-year="2019" data-title="Blackbox meets blackbox: Representational Similarity and Stability Analysis of Neural Language Models and Brains" data-author="Abnar, Samira and Beinborn, Lisa and Choenni, Rochelle and Zuidema, Willem" data-publication="">
<span id="abnar2019">Abnar, S., Beinborn, L., Choenni, R., & Zuidema, W. (2019). Blackbox meets blackbox: Representational Similarity and Stability Analysis of Neural Language Models and Brains.</span><br />
<a href="https://arxiv.org/abs/1906.01539" target="_blank">
<button type="button" class="btn btn--inverse">
arXiv
</button></a>
</div>
<br />
</div>
<div>
<div id="bib-item-alishahi2017" class="bib-entry my-3" data-searchable="" data-year="2017" data-title="Encoding of phonology in a recurrent neural model of grounded speech" data-author="Alishahi, Afra and Barking, Marie and Chrupała, Grzegorz" data-publication="Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017)">
<span id="alishahi2017">Alishahi, A., Barking, M., & Chrupała, G. (2017). Encoding of phonology in a recurrent neural model of grounded speech. In R. Levy & L. Specia (Eds.), <i>Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017)</i> (pp. 368–378). Association for Computational Linguistics.</span><br />
<a href="https://doi.org/10.18653/v1/K17-1037" target="_blank" title="Encoding of phonology in a recurrent neural model of grounded speech">
<button type="button" class="btn btn--inverse">
DOI
</button></a>
</div>
<br />
</div>
<div>
<div id="bib-item-bentivogli2018" class="bib-entry my-3" data-searchable="" data-year="2018" data-title="Neural versus phrase-based MT quality: An in-depth analysis on English–German and English–French" data-author="Bentivogli, Luisa and Bisazza, Arianna and Cettolo, Mauro and Federico, Marcello" data-publication="Computer Speech & Language">
<span id="bentivogli2018">Bentivogli, L., Bisazza, A., Cettolo, M., & Federico, M. (2018). Neural versus phrase-based MT quality: An in-depth analysis on English–German and English–French. <i>Computer Speech & Language</i>, <i>49</i>, 52–70.</span><br />
<a href="https://doi.org/10.1016/j.csl.2017.11.004" target="_blank" title="Neural versus phrase-based MT quality: An in-depth analysis on English–German and English–French">
<button type="button" class="btn btn--inverse">
DOI
</button></a>
</div>
<br />
</div>
<div>
<div id="bib-item-bisazza2012" class="bib-entry my-3" data-searchable="" data-year="2012" data-title="Cutting the Long Tail: Hybrid Language Models for Translation Style Adaptation" data-author="Bisazza, Arianna and Federico, Marcello" data-publication="Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics">
<span id="bisazza2012">Bisazza, A., & Federico, M. (2012). Cutting the Long Tail: Hybrid Language Models for Translation Style Adaptation. In <i>Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics</i> (pp. 439–448). Avignon, France: Association for Computational Linguistics.</span><br />
</div>
<br />
</div>
<div>
<div id="bib-item-bisazza2018" class="bib-entry my-3" data-searchable="" data-year="2018" data-title="The Lazy Encoder: A Fine-Grained Analysis of the Role of Morphology in Neural Machine Translation" data-author="Bisazza, Arianna and Tump, Clara" data-publication="Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing">
<span id="bisazza2018">Bisazza, A., & Tump, C. (2018). The Lazy Encoder: A Fine-Grained Analysis of the Role of Morphology in Neural Machine Translation. In <i>Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing</i> (pp. 2871–2876). Brussels, Belgium: Association for Computational Linguistics.</span><br />
</div>
<br />
</div>
<div>
<div id="bib-item-chrupala2019" class="bib-entry my-3" data-searchable="" data-year="2019" data-title="Correlating neural and symbolic representations of language" data-author="Chrupała, Grzegorz and Alishahi, Afra" data-publication="Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics">
<span id="chrupala2019">Chrupała, G., & Alishahi, A. (2019). Correlating neural and symbolic representations of language. In <i>Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics</i>.</span><br />
<a href="https://arxiv.org/abs/1905.06401" target="_blank">
<button type="button" class="btn btn--inverse">
arXiv
</button></a>
</div>
<br />
</div>
<div>
<div id="bib-item-fadaee2017" class="bib-entry my-3" data-searchable="" data-year="2017" data-title="Data Augmentation for Low-Resource Neural Machine Translation" data-author="Fadaee, Marzieh and Bisazza, Arianna and Monz, Christof" data-publication="Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)">
<span id="fadaee2017">Fadaee, M., Bisazza, A., & Monz, C. (2017). Data Augmentation for Low-Resource Neural Machine Translation. <i>Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)</i>, 567–573.</span><br />
<a href="https://doi.org/10.18653/v1/P17-2090" target="_blank" title="Data Augmentation for Low-Resource Neural Machine Translation">
<button type="button" class="btn btn--inverse">
DOI
</button></a>
<a href="https://arxiv.org/abs/1705.00440" target="_blank">
<button type="button" class="btn btn--inverse">
arXiv
</button></a>
</div>
<br />
</div>
<div>
<div id="bib-item-fokkens2017" class="bib-entry my-3" data-searchable="" data-year="2017" data-title="BiographyNet: Extracting Relations Between People and Events" data-author="Fokkens, Antske and ter Braake, Serge and Ockeloen, Nick and Vossen, Piek and Legêne, Susan and Schreiber, Guus and de Boer, Victor" data-publication="Europa baut auf Biographien: Aspekte, Bausteine, Normen und Standards für eine europäische Biographik">
<span id="fokkens2017">Fokkens, A., ter Braake, S., Ockeloen, N., Vossen, P., Legêne, S., Schreiber, G., & de Boer, V. (2017). BiographyNet: Extracting Relations Between People and Events. In Á. Z. Bernád, C. Gruber, & M. Kaiser (Eds.), <i>Europa baut auf Biographien: Aspekte, Bausteine, Normen und Standards für eine europäische Biographik</i> (1st ed., pp. 193–224). Vienna: New Academic Press.</span><br />
</div>
<br />
</div>
<div>
<div id="bib-item-fokkens2013" class="bib-entry my-3" data-searchable="" data-year="2013" data-title="Offspring from Reproduction Problems: What Replication Failure Teaches Us" data-author="Fokkens, Antske and van Erp, Marieke and Postma, Marten and Pedersen, Ted and Vossen, Piek and Freire, Nuno" data-publication="Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)">
<span id="fokkens2013">Fokkens, A., van Erp, M., Postma, M., Pedersen, T., Vossen, P., & Freire, N. (2013). Offspring from Reproduction Problems: What Replication Failure Teaches Us. In <i>Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)</i> (pp. 1691–1701). Sofia, Bulgaria: Association for Computational Linguistics.</span><br />
</div>
<br />
</div>
<div>
<div id="bib-item-giulianelli2018" class="bib-entry my-3" data-searchable="" data-year="2018" data-title="Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information" data-author="Giulianelli, Mario and Harding, Jack and Mohnert, Florian and Hupkes, Dieuwke and Zuidema, Willem" data-publication="Proceedings EMNLP workshop Analyzing and interpreting neural networks for NLP (BlackboxNLP)">
<span id="giulianelli2018">Giulianelli, M., Harding, J., Mohnert, F., Hupkes, D., & Zuidema, W. (2018). Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information. In <i>Proceedings EMNLP workshop Analyzing and interpreting neural networks for NLP (BlackboxNLP)</i>.</span><br />
<a href="https://arxiv.org/abs/1808.08079" target="_blank">
<button type="button" class="btn btn--inverse">
arXiv
</button></a>
</div>
<br />
</div>
<div>
<div id="bib-item-hupkes2018" class="bib-entry my-3" data-searchable="" data-year="2018" data-title="Visualisation and ‘Diagnostic Classifiers’ reveal how recurrent and recursive neural networks process hierarchical structure" data-author="Hupkes, Dieuwke and Veldhoen, Sara and Zuidema, Willem" data-publication="Journal of Artificial Intelligence Research">
<span id="hupkes2018">Hupkes, D., Veldhoen, S., & Zuidema, W. (2018). Visualisation and ‘Diagnostic Classifiers’ reveal how recurrent and recursive neural networks process hierarchical structure. <i>Journal of Artificial Intelligence Research</i>, <i>61</i>, 907–926.</span><br />
<a href="https://arxiv.org/abs/1711.10203" target="_blank">
<button type="button" class="btn btn--inverse">
arXiv
</button></a>
</div>
<br />
</div>
<div>
<div id="bib-item-kadar2017" class="bib-entry my-3" data-searchable="" data-year="2017" data-title="Representation of Linguistic Form and Function in Recurrent Neural Networks" data-author="Kádár, Ákos and Chrupała, Grzegorz and Alishahi, Afra" data-publication="Computational Linguistics">
<span id="kadar2017">Kádár, Á., Chrupała, G., & Alishahi, A. (2017). Representation of Linguistic Form and Function in Recurrent Neural Networks. <i>Computational Linguistics</i>, <i>43</i>, 761–780.</span><br />
<a href="https://doi.org/10.1162/COLI_a_00300" target="_blank" title="Representation of Linguistic Form and Function in Recurrent Neural Networks">
<button type="button" class="btn btn--inverse">
DOI
</button></a>
</div>
<br />
</div>
<div>
<div id="bib-item-le2017" class="bib-entry my-3" data-searchable="" data-year="2017" data-title="Tackling Error Propagation through Reinforcement Learning: A Case of Greedy Dependency Parsing" data-author="Le, Minh and Fokkens, Antske" data-publication="arXiv:1702.06794 [cs]">
<span id="le2017">Le, M., & Fokkens, A. (2017). Tackling Error Propagation through Reinforcement Learning: A Case of Greedy Dependency Parsing. <i>arXiv:1702.06794 [cs]</i>.</span><br />
<a href="https://arxiv.org/abs/1702.06794" target="_blank">
<button type="button" class="btn btn--inverse">
arXiv
</button></a>
</div>
<br />
</div>
<div>
<div id="bib-item-le2015" class="bib-entry my-3" data-searchable="" data-year="2015" data-title="Compositional Distributional Semantics with Long Short Term Memory" data-author="Le, Phong and Zuidema, Willem" data-publication="Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics">
<span id="le2015">Le, P., & Zuidema, W. (2015). Compositional Distributional Semantics with Long Short Term Memory. In <i>Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics</i> (pp. 10–19). Denver, Colorado: Association for Computational Linguistics.</span><br />
<a href="https://doi.org/10.18653/v1/S15-1002" target="_blank" title="Compositional Distributional Semantics with Long Short Term Memory">
<button type="button" class="btn btn--inverse">
DOI
</button></a>
</div>
<br />
</div>
<div>
<div id="bib-item-lentz2015" class="bib-entry my-3" data-searchable="" data-year="2015" data-title="Unbalanced adult production and perception in prosody." data-author="Lentz, T.O. and Chen, A." data-publication="Proceedings of the 18th International Congress of Phonetic Sciences">
<span id="lentz2015">Lentz, T. O., & Chen, A. (2015). Unbalanced adult production and perception in prosody. In <i>Proceedings of the 18th International Congress of Phonetic Sciences</i>. University of Glasgow, Glasgow.</span><br />
</div>
<br />
</div>
<div>
<div id="bib-item-mul2019" class="bib-entry my-3" data-searchable="" data-year="2019" data-title="Siamese recurrent networks learn first-order logic reasoning and exhibit zero-shot compositional generalization" data-author="Mul, Mathijs and Zuidema, Willem" data-publication="">
<span id="mul2019">Mul, M., & Zuidema, W. (2019). Siamese recurrent networks learn first-order logic reasoning and exhibit zero-shot compositional generalization.</span><br />
<a href="https://arxiv.org/abs/1906.00180" target="_blank">
<button type="button" class="btn btn--inverse">
arXiv
</button></a>
</div>
<br />
</div>
<div>
<div id="bib-item-repplinger2018" class="bib-entry my-3" data-searchable="" data-year="2018" data-title="Vector-space models of words and sentences" data-author="Repplinger, Michael and Beinborn, Lisa and Zuidema, Willem" data-publication="Nieuw Archief voor de Wiskunde">
<span id="repplinger2018">Repplinger, M., Beinborn, L., & Zuidema, W. (2018). Vector-space models of words and sentences. <i>Nieuw Archief voor de Wiskunde</i>.</span><br />
</div>
<br />
</div>
<div>
<div id="bib-item-tran2016" class="bib-entry my-3" data-searchable="" data-year="2016" data-title="Recurrent Memory Networks for Language Modeling" data-author="Tran, Ke and Bisazza, Arianna and Monz, Christof" data-publication="Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies">
<span id="tran2016">Tran, K., Bisazza, A., & Monz, C. (2016). Recurrent Memory Networks for Language Modeling. In <i>Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</i> (pp. 321–331).</span><br />
<a href="https://doi.org/10.18653/v1/N16-1036" target="_blank" title="Recurrent Memory Networks for Language Modeling">
<button type="button" class="btn btn--inverse">
DOI
</button></a>
</div>
<br />
</div>
<div>
<div id="bib-item-tran2018" class="bib-entry my-3" data-searchable="" data-year="2018" data-title="The Importance of Being Recurrent for Modeling Hierarchical Structure" data-author="Tran, Ke and Bisazza, Arianna and Monz, Christof" data-publication="Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing">
<span id="tran2018">Tran, K., Bisazza, A., & Monz, C. (2018). The Importance of Being Recurrent for Modeling Hierarchical Structure. In <i>Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing</i> (pp. 4731–4736).</span><br />
<a href="https://arxiv.org/abs/1803.03585" target="_blank">
<button type="button" class="btn btn--inverse">
arXiv
</button></a>
</div>
<br />
</div>
<div>
<div id="bib-item-tran2014" class="bib-entry my-3" data-searchable="" data-year="2014" data-title="Word Translation Prediction for Morphologically Rich Languages with Bilingual Neural Networks" data-author="Tran, Ke M. and Bisazza, Arianna and Monz, Christof" data-publication="Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)">
<span id="tran2014">Tran, K. M., Bisazza, A., & Monz, C. (2014). Word Translation Prediction for Morphologically Rich Languages with Bilingual Neural Networks. In <i>Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)</i> (pp. 1676–1688). Association for Computational Linguistics.</span><br />
<a href="https://doi.org/10.3115/v1/D14-1175" target="_blank" title="Word Translation Prediction for Morphologically Rich Languages with Bilingual Neural Networks">
<button type="button" class="btn btn--inverse">
DOI
</button></a>
</div>
<br />
</div>
<div>
<div id="bib-item-veldhoen2016" class="bib-entry my-3" data-searchable="" data-year="2016" data-title="Diagnostic classifiers: revealing how neural networks process hierarchical structure" data-author="Veldhoen, Sara and Hupkes, Dieuwke and Zuidema, Willem" data-publication="Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches (at NIPS)">
<span id="veldhoen2016">Veldhoen, S., Hupkes, D., & Zuidema, W. (2016). Diagnostic classifiers: revealing how neural networks process hierarchical structure. In <i>Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches (at NIPS)</i>.</span><br />
</div>
<br />
</div>
<div>
<div id="bib-item-veldhoen2017" class="bib-entry my-3" data-searchable="" data-year="2017" data-title="Can Neural Networks learn Logical Reasoning?" data-author="Veldhoen, Sara and Zuidema, Willem" data-publication="Proceedings of the Conference on Logic and Machine Learning in Natural Language (LaML)">
<span id="veldhoen2017">Veldhoen, S., & Zuidema, W. (2017). Can Neural Networks learn Logical Reasoning? In <i>Proceedings of the Conference on Logic and Machine Learning in Natural Language (LaML)</i> (pp. 35–41). University of Gothenburg, Sweden.</span><br />
</div>
<br />
</div></div>
</section>
<footer class="page__meta">
</footer>
</div>
</article>
</div>
</div>
<div class="page__footer">
<footer>
<!-- start custom footer snippets -->
<!-- end custom footer snippets -->
<div class="page__footer-follow">
<ul class="social-icons">
<li><a href="/feed.xml"><i class="fas fa-fw fa-rss-square" aria-hidden="true"></i> Feed</a></li>
</ul>
</div>
<div class="page__footer-copyright">© 2019 InterpretingDL. Powered by <a href="https://jekyllrb.com" rel="nofollow">Jekyll</a> & <a href="https://mademistakes.com/work/minimal-mistakes-jekyll-theme/" rel="nofollow">Minimal Mistakes</a>.</div>
</footer>
</div>
<script src="/assets/js/main.min.js"></script>
<script defer src="https://use.fontawesome.com/releases/v5.8.2/js/all.js" integrity="sha384-DJ25uNYET2XCl5ZF++U8eNxPWqcKohUUBUpKGlNLMchM7q4Wjg2CUpjHLaL8yYPH" crossorigin="anonymous"></script>
</body>
</html>