See the most up-to-date schedule on Discourse forums
Please note that all times are in Central European Time (CET)
Wednesday, December 6, 2023 (Day 1)
- 09:00-10:30: Workshops, registration
- 10:30-11:15: Coffee break, registration
- 11:15-11:30: CHR opening words
- 11:30-13:00: Session 1: Historical change. Chair: Artjoms Šeļa
- 13:00-14:30: Lunch
- 14:30-16:00: Session 2A: Language. Chair: Kristoffer Nielbo | Session 2B: History. Chair: Sarah Lang
- 16:00-16:30: Coffee break
- 16:30-18:00: Keynote: Richard McElreath (MPI for Evolutionary Anthropology). The importance of analog thinking for digital scholarship. Chair: Iza Romanowska
- 18:00: Opening reception
Thursday, December 7, 2023 (Day 2)
- 09:00-10:30: Session 3A: Literature and society. Chair: Alie Lassche | Session 3B: HTR and manuscripts. Chair: Thibault Clérice
- 10:30-11:00: Coffee break
- 11:00-12:20: Keynote: Roberta Sinatra (University of Copenhagen). Quantifying the dynamics of impact in science and art. Chair: Fotis Jannidis
- 12:30-13:30: Lunch
- 13:30-15:00: Session 4A: Narrative. Chair: Rebecca M. M. Hicke | Session 4B: Libraries and Collections. Chair: Iza Romanowska
- 15:00-15:30: Coffee break
- 15:30-17:00: Session 5A: Authorship attribution. Chair: Thierry Poibeau | Session 5B: Large Language Models. Chair: Thora Hagen
- 17:00-: Poster walk-around
- 20:00: Conference dinner
Friday, December 8, 2023 (Day 3)
- 09:00-10:30: Keynote: Olivier Morin (Institut Jean Nicod, PSL). Humanity’s second language: How images carry information. Chair: Artjoms Šeļa
- 10:30-11:00: Coffee break
- 11:00-12:30: Session 6: Audio/Visual. Chair: Oleg Sobchuk
- 12:30-13:30: Lunch
- 13:30-15:00: Lightning talks session. Chair: Iza Romanowska
- 15:00-15:30: Coffee break
- 15:30-17:00: Session 7: Literature. Chair: Fotis Jannidis
- 17:00-17:30: Award ceremony, concluding remarks
Detailed Programme
Parallel workshops
- Transforming archives into data-driven analyses
Florian Cafiero (PSL) and Jean-Luc Falcone (UNIGE)
This workshop bridges the historical practice of diplomatics – the critical study of historical documents – with the modern field of computational diplomacy to illustrate how archival materials can be transformed into analyzable digital data. Aimed at graduate students and researchers, this course requires no prior experience and provides a pathway from the physical archive to digital analytics across various disciplines. Participants will be introduced to the process of digitizing historical documents, ensuring their suitability for computational techniques (OCR, template matching, computer vision), and applying advanced analytical methods such as network analysis to gain new insights into their content.
- Textual analysis with Python and Large Language Models
Marianne Reboul (Ecole Normale Supérieure de Lyon)
This workshop offers a comprehensive introduction to the burgeoning field of textual analysis using Python and Large Language Models (LLMs). Designed for graduate students and researchers across disciplines, the workshop requires no prerequisites and promises to equip participants with the foundational skills necessary to harness the power of textual data. Through a blend of theoretical instruction and practical exercises, attendees will explore the capabilities of Python as a tool for natural language processing and how LLMs can be integrated to unlock deeper insights.
- Mapping with R for humanities
Giovanni Pietro Vitali
This workshop is designed to introduce graduate students and researchers to the powerful capabilities of Geographic Information Systems (GIS) using the R programming language. With no prerequisites necessary, participants from various disciplines will learn to manage, analyze, and visualize spatial data effectively. The workshop aims to demystify the complexities of spatial analysis by providing hands-on experience with R’s dedicated GIS packages.
Session 1. Historical change
- Modeling temporal uncertainty in historical datasets
Vojtech Kase, Adéla Sobotková and Petra Heřmánková
This paper explores several approaches to assessing temporal trends within archaeological and historical datasets whose records carry a significant degree of uncertainty in their dating. We evaluate the strengths and pitfalls of these methodologies using two datasets: one comprising ancient shipwrecks and the other ancient Greek inscriptions. While these objects can, in principle, be precisely dated to specific years, they are often assigned broader date ranges, spanning centuries or longer historical periods. We propose that the most promising approaches treat these date ranges as defining probability distributions: by randomly assigning specific dates drawn from these distributions, we enable hypothesis testing for temporal trends (a schematic sketch of this sampling approach follows this entry). As we want to encourage other scholars to employ the methods we propose, we offer a detailed description of their implementation using functions from the Python tempun package.
short talk
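To make the sampling strategy above concrete, here is a minimal Python sketch of the Monte Carlo dating idea, assuming a uniform probability over each date range; the records, ranges, and trend statistic are illustrative, and the actual tempun API may differ.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical records: (earliest, latest) possible date, BCE as negative.
date_ranges = [(-200, -100), (-150, -50), (-120, 30), (-80, 20), (-30, 70)]

def simulate_dates(ranges, n_sim=1000):
    """Draw one concrete date per record and simulation, assuming a
    uniform distribution over each range (an illustrative assumption)."""
    lo = np.array([r[0] for r in ranges])
    hi = np.array([r[1] for r in ranges])
    return rng.integers(lo, hi + 1, size=(n_sim, len(ranges)))

# Toy trend test: how often does the median date of the first half of the
# records precede the median date of the second half?
sims = simulate_dates(date_ranges)
half = sims.shape[1] // 2
trend = np.median(sims[:, :half], axis=1) < np.median(sims[:, half:], axis=1)
print(f"P(trend) across simulations: {trend.mean():.2f}")
```

Each simulation yields one internally consistent set of dates, so any statistic computed across simulations carries the dating uncertainty with it.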
- Structural Characteristics in Historical Networks Reveal Changes in Political Culture: An Example From Northern Song China (960–1127 C.E.)
Wenyi Shang, Song Chen, Yuqi Chen and Jana Diesner
The mass digitization and datafication of historical records bring about new possibilities to study or re-assess a broad range of individual events. By evaluating microlevel events in a social context simultaneously, insights into the macrolevel dynamics of society can be gained. This paper presents an innovative framework for historical network research that allows the comparison of structural characteristics in networks across different time periods, and illustrates it with an example of the political networks of Northern Song China. By using machine learning models for valence prediction and tracking the changes of structural characteristics related to structural balance, clustering, and connectivity in temporal networks, we reveal that the mid-to-late 11th century, during which political reforms took place, was characterized by political pluralism and even political tolerance, compared to earlier or later periods. Our findings challenge the traditional view of Northern Song politics as a dichotomy between reformers and conservatives. The replicable framework proposed in this paper is capable of revealing significant historical changes that would otherwise be obscured, shedding light on the underlying historical dynamics of such changes.
short talk, online
- Introducing Traveling Word Pairs in Historical Semantic Change: A Case Study of Privacy Words in 18th and 19th Century English
Thora Hagen and Erik Ketzan
Lexical semantic change detection (LSCD) has become one of the central tasks in NLP in recent years. Most studies in LSCD, however, only consider the semantic stability score of a single word for semantic change analysis. In this paper, we propose a new direction for the analysis of semantic shifts: traveling word pairs. First, we introduce shift correlation to find pairs of words that semantically shift together in a similar fashion. Second, we propose word relation shift to analyze how the relationship between two words has changed over time. We explore both methods by investigating semantic shift around the term privacy as a test case, as privacy is an area of vigorous contemporary and historical study, and we were able to make use of a pre-existing dictionary of words relating to privacy. We report that the term privacy shows relatively little semantic change, a surprising result given the presumed manifold semantics of this term, and expand the investigation of privacy by detecting traveling word pairs; we report, for example, that revealing and protecting shift in tandem semantically, which could be due to both of these terms (as fuzzy antonyms) shifting from more figurative to more concrete over time.
short talk
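As a schematic illustration of the shift-correlation idea above, the following sketch computes two words' decade-by-decade shift series and correlates them; the randomly generated vectors are stand-ins for real diachronic embeddings already aligned to a shared space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in diachronic embeddings: one vector per word per decade,
# assumed aligned across decades (e.g. via Procrustes).
decades = list(range(1700, 1900, 10))
emb = {w: rng.normal(size=(len(decades), 100)) for w in ("revealing", "protecting")}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def shift_series(vectors):
    """Semantic shift between consecutive decades: 1 - cosine similarity."""
    return np.array([1 - cosine(vectors[i], vectors[i + 1])
                     for i in range(len(vectors) - 1)])

s1 = shift_series(emb["revealing"])
s2 = shift_series(emb["protecting"])
# Shift correlation: traveling word pairs have correlated shift series.
print(f"shift correlation: {np.corrcoef(s1, s2)[0, 1]:.2f}")
```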
- Using a neural network word embedding model to measure cultural innovation
Edgar Dubourg, Andrei Mogoutov and Nicolas Baumard
In the quest to understand cultural evolution, quantifying innovation poses a significant challenge. This study introduces a novel approach which employs natural language processing techniques and embedding methods to measure semantic novelty of products’ descriptions. We apply this methodology to cinema, analyzing plot summaries of over 19,000 movies spanning more than a century. Our measure’s robustness is validated through a series of tests, including a comparison with a genre-based novelty score, manual inspection of films identified as highly innovative, and correlations with award recognitions. The application of our Innovation Score reveals a compelling pattern: a surge in cinematic innovation throughout the 20th century, followed by a stabilization in the 21st, despite an ever-growing production of films. The study concludes with a discussion on potential factors driving this pattern, setting the stage for future research to further explain the causes of cultural innovation.
short talk
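The embedding-based novelty measure can be illustrated in a few lines; the vectors below are random stand-ins for encoded plot summaries, and the max-similarity formulation is one plausible reading of an innovation score, not necessarily the authors' exact definition.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in embeddings of plot summaries (e.g. from a sentence encoder),
# with `past` holding films released before the target film.
past = rng.normal(size=(500, 384))
target = rng.normal(size=384)

def innovation_score(target_vec, past_vecs):
    """Semantic novelty as distance from the most similar earlier film.
    An illustrative operationalization under stated assumptions."""
    past_norm = past_vecs / np.linalg.norm(past_vecs, axis=1, keepdims=True)
    t = target_vec / np.linalg.norm(target_vec)
    return 1 - float(np.max(past_norm @ t))

print(f"innovation score: {innovation_score(target, past):.3f}")
```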
- Oscillation between Contemplation and Revelation - Recurrence and Change in the Life History of Teresa of Ávila
Kristoffer Nielbo, Jan Kostkan, Katrine F. Baunvig, Ekaterina Borisova and Armin W. Geertz
Advancements in language technology and applied mathematics offer a plethora of tools that can enrich textual cultural heritage research. Using an information-theoretical approach to author profiling, this paper tries to leverage some of these tools to reconstruct mental states in the Early Modern Spanish author Teresa of Ávila. We shift away from traditional static textual feature analysis and instead approach author profiling as a dynamic problem, requiring a representation of the author’s life history. Teresa of Ávila was an Early Modern Spanish mystic and Carmelite nun whose authorship offers a unique dataset due to her prolific output and well-preserved, digitized writings. We model Teresa's letter corpus as a complex system with multiple states and try to track her mental and socio-cultural dynamics through lexical co-occurrence structures and affective valences in her letters. We find that Teresa's letters reflect a life history of state switching between contemplation and revelation. This relatively new approach offers a more robust and dynamic perspective on author profiling in cultural heritage research.
long talk
Session 2A. Language
- Testing the Limits of Neural Sentence Alignment Models on Classical Greek and Latin Texts and Translations
Caroline Craig, Kartik Goyal, Gregory Crane, Farnoosh Shamsian, and David A. Smith
The Greek and Latin classics, like many other ancient texts, have been widely translated into a variety of languages over the past two millennia. Although many digital editions and libraries contain one or two translations for a given text, about one hundred translations of the Iliad and twenty of Herodotus, for example, exist in English alone. Aligning the corpus of classical texts and translations at the sentence and word level would provide a valuable resource for studying translation theory, digital humanities, and natural language processing (NLP). Precise and faithful sentence alignment via computational methods, however, remains a challenging problem because current state-of-the-art NLP methods tend to have poor coverage and recall, as their primary aim is merely to extract bitext for training machine translation systems. This paper evaluates and examines the limits of such state-of-the-art models for cross-language sentence embedding and alignment of ancient Greek and Latin texts with translations into English, French, German, and Persian. We will release evaluation data for Plato's Crito manually annotated at the word and sentence level and larger test datasets based on coarser structural metadata for Thucydides (Greek) and Lucretius (Latin). Testing LASER and LaBSE for sentence embedding and nearest-neighbor retrieval (sketched after this entry) and Vecalign for sentence alignment, we found the best results using LaBSE-Vecalign. LaBSE worked surprisingly well on ancient Greek, most probably because modern Greek data had been included in its training. Both LASER-Vecalign and LaBSE-Vecalign did best when there were many ground-truth one-to-one alignments between source and target sentences, and when the order of sentences in the source was preserved in the translation. However, these conditions are often not present in the kinds of literary and free translation we wish to study or in editions with multiple translations, extensive commentary, or other paratext. We perform book-level and chapter-level error analysis to inform the development of a software pipeline that can be deployed on the vast corpus of translations of ancient texts.
short talk
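For readers who want to try the LaBSE nearest-neighbor retrieval step, here is a minimal sketch using the sentence-transformers library; the example sentences are illustrative, and Vecalign's ordering-aware alignment is a separate step not shown here.

```python
# pip install sentence-transformers
import numpy as np
from sentence_transformers import SentenceTransformer

# LaBSE embeds 100+ languages into one space; ancient Greek is not an
# official training language but, as the paper notes, transfers passably.
model = SentenceTransformer("sentence-transformers/LaBSE")

greek = ["οὐδὲν κακὸν ἀμιγὲς καλοῦ"]                 # source sentence
english = ["There is no evil without some good.",     # candidate translations
           "Rage, goddess, sing the wrath of Achilles."]

src = model.encode(greek, normalize_embeddings=True)
tgt = model.encode(english, normalize_embeddings=True)

# Nearest-neighbor retrieval: cosine similarity reduces to a dot product
# on L2-normalized vectors.
sims = src @ tgt.T
print("best match:", english[int(np.argmax(sims[0]))])
```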
- German Question Tags: A Computational Analysis
Yulia Clausen
The German language exhibits a range of question tags that can typically, but not always, be substituted for one another. Moreover, the same words can have other meanings when occurring in the sentence-final position. The tags' felicity conditions were addressed in previous corpus-based and experimental work and attributed to semantic and pragmatic properties of tag questions. This paper addresses the question of whether and to what extent the differences among German tags can be determined automatically. We assess the performance of three pretrained German BERT models on a tag question dataset and fine-tune one of these models on the tag-word prediction task. A close examination of this model's output indicates that BERT can identify properties relevant to the tags' felicity conditions and interchangeability, consistent with previous studies.
long talk
- 🇵🇱🤝🇪🇺 : Emoji, language games and political polarisation
Sara Luxmoore, Pedro Ramaciotti Morales and Jonathan Cardoso-Silva
Are emoji political? In an increasing body of research, emoji have variably been viewed as emotional data or personality identifiers. However, little attention has been paid to the social and political import of emoji. We ask whether emoji are used for political self-representation, and discuss the implications for political identity formation and mobilisation online. Adapting a new method of ideal point estimation, we identify patterns in the employment of emoji in user Twitter bios across a latent political space computed from a Twitter following network. We find that emoji are used as stand-ins for offline political symbols such as 🇪🇺,🏳️🌈 and ✝️. Additionally, we find that the use of emoji without recognisable political meaning, such as ✌️,💪,💯 and 🌱, is contingent on a user’s estimated political ideal point. Users on the left are likelier to employ ✌️ and 🌱, while those on the right are likelier to employ 💪 and 💯. Using Ludwig Wittgenstein’s theory of language games, we argue that this points to the use of emoji for communication of both political values and affect, and to the development of a new political language game of emoji.
long talk
- (De)constructing Binarism in Journalism: Automatic Antonym Detection in Dutch Newspaper Articles
Alie Lassche, Ruben Ros and Joris Veerbeek
Binary oppositions, since their introduction by Claude Lévi-Strauss and other structuralists in the seventies, are under pressure, especially because of their legitimization of societal power structures. Deconstruction of binary oppositions such as man/woman, black/white, left/right, and rich/poor is therefore increasingly encouraged. The question arises of what effect the debate about binary oppositions has on their linguistic use. We have therefore detected antonyms in a corpus of Dutch newspaper articles from the period 1990-2020, in order to study the development of binarism in journalism. Our method consists of two parts: the use of a good old-fashioned lexicon, and the fine-tuning of a BERT model for antonym detection. In this paper, we not only present our results regarding the (de)construction of binary oppositions in Dutch journalism, but also reflect on the two methodological stages and discuss their respective merits.
short talk
Session 2B. History
- The Middle Dutch Manuscripts Surviving from the Carthusian Monastery of Herne (14th century): Constructing an Open Dataset of Digital Transcriptions
Wouter Haverals and Mike Kestemont
A substantial collection of Middle Dutch manuscripts survives from the Carthusian monastery of Herne (Hérinnes-lez-Enghien) in present-day Belgium. During the latter half of the fourteenth century, Herne served as a significant literary hotspot in the region around Brussels, with a devoted community of monks deeply involved in the production of (vernacular) texts and manuscripts, often as collaborative efforts. This surviving corpus offers abundant material for the (computational) exploration of authorship, translation, and scribal cultures in the premodern Low Countries. Yet, much of this material has remained digitally inaccessible. In this paper, we describe the creation of an almost exhaustive, open-access dataset comprising diplomatic (true-to-sign) transcriptions of all known Middle Dutch Herne manuscripts, acquired through handwritten text recognition. Apart from rich codicological and textual metadata, we include a normalized text layer (that contains expansions of the abbreviations), as well as a linguistic annotation layer (that contains lemmas and part-of-speech tags). We conclude this paper by discussing our work against current trends in medievalist scholarship. The dataset will be released together with this paper and we encourage its re-use in future research.
long talk
- Formulas and decision-making: the case of the States General of the Dutch Republic
Marijn Koolen, Ronald Sluijter, Rik Hoekstra and Joris Oddens
Formulaic expressions are commonly used in administrative texts, and may reflect standardisation of the decision-making process or of its recording. In this paper we investigate whether the use of formulas in the Resolutions of the Dutch States General (1576-1796) reveals an increase in standardisation. We use stylometric analysis and measures of textual repetition to identify shifts in the use of formulas, study how the fraction of paragraphs covered by formulas changes over time, and identify templates consisting of frequent combinations of formulas. We find that there are clearly stylistically distinguishable periods, and that the use of formulas and templates increases between subsequent periods.
long talk
- Using Online Catalogs to Estimate Economic Development in Classical Antiquity
Charles de Dampierre, Valentin Thouzeau and Nicolas Baumard
Despite significant progress, economic development in Classical Antiquity remains difficult to study: proper economic data (e.g. agricultural production, wages) are scarce, estimates of urbanization, GDP or population remain highly uncertain, and the use of indirect markers of development such as shipwrecks or coin hoards is limited. Here, we propose a different approach based on the production of immaterial works (e.g. poems, philosophical treatises, musical pieces, scientific work). Immaterial works require time, energy, resources, and human capital to be produced, disseminated and appreciated, and thus indirectly reflect a wide range of economic processes. Moreover, their survival rate tends to be higher because they can be abstracted from their initial material incarnation (e.g. scrolls, manuscripts) and preserved throughout the centuries. We build a large database of cultural producers (painters, scientists, etc.) that exist in online catalogs (Library of Congress ID, GND ID, VIAF ID, Iranica ID, etc.) and create an estimate of immaterial production that is robust and consistent across cultures and sources. We show that immaterial production in Ancient Greece and Ancient Rome is closely related to economic development, and reveals important phases of economic development. Overall, immaterial production provides new insights into the roots and the evolution of economic development in the very long run in Classical Antiquity.
short talk
- Sunken Ships Shan’t Sail: Ontology Design for Reconstructing Events in the Dutch East India Company Archives
Stella Verkijk and Piek Vossen
This short paper describes ongoing work on the design of an event ontology that supports state-of-the-art event extraction in the archives of the Dutch East India Company (VOC). The ontology models Dynamic Events (actions or processes) and Static Events (states). By modelling the transition from a given state to a new state as a logical implication that can be inferred automatically from the occurrence of a Dynamic Event, the ontology supports implied information extraction. It also considers implied sub-event detection and models event arguments as coreferential between event classes where possible. By doing so, it enables the extraction of much more information than is explicitly stated in the archival texts, with minimal annotation effort. We define this complete event extraction task, which adopts both Natural Language Processing techniques and reasoning components, as Event Reconstruction. The Event Reconstruction module will be embedded in a search interface that facilitates historical research in the VOC archives.
short talk
Session 3A. Literature and society
- Beyond Canonicity. Modeling Canon/Archive Literary Change in French Fiction
Jean Barré and Thierry Poibeau
This study offers a fresh perspective on the Canon/Archive problem in literature through computational analysis. Following Tynianov's understanding of literature, we adopt a dynamic approach to literature by proposing a model of literary variability using the Kullback-Leibler divergence. We retrieve key authors and works that are shaping the broad outlines of literary change. Our aim is to evaluate the importance of canonical authors on literary variability. We opt for a cohort-driven setup to analyze the variability brought by a given text, focusing on specific textual aspects such as topics, lexicon, characterization, and chronotope. The findings reveal that canonical authors tend to contribute slightly more to literary change than those from the archive.
long talk
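The paper's core quantity, Kullback-Leibler divergence between textual distributions, can be sketched directly; this toy version compares smoothed unigram distributions, whereas the paper works with richer feature spaces (topics, lexicon, characterization, chronotope) and a cohort-driven setup.

```python
import numpy as np
from collections import Counter

def kl_divergence(p_counts, q_counts, alpha=0.5):
    """D_KL(P || Q) over a shared vocabulary, with additive smoothing
    so that q > 0 everywhere. A generic sketch, not the paper's exact
    feature space or cohort construction."""
    vocab = sorted(set(p_counts) | set(q_counts))
    p = np.array([p_counts[w] + alpha for w in vocab], dtype=float)
    q = np.array([q_counts[w] + alpha for w in vocab], dtype=float)
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

text = Counter("the moon rose over the pale sea".split())
cohort = Counter("the sun set over the dark hills and the sea".split())
print(f"variability (KL): {kl_divergence(text, cohort):.3f}")
```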
- The evolution of romantic love in Chinese fiction in the very long run (618 - 2022): A quantitative approach
Ying Zhong, Nicolas Baumard and Valentin Thouzeau
Literary scholars have long observed the fluctuating popularity of romantic love in Chinese fiction, and the existence of a period when romantic love was particularly central: Tang short stories, Yuan plays, Qing scholar-beauty novels, as well as modern series or web novels. What is the pattern of the development of love in Chinese fiction? Can we describe it quantitatively? And can we explain it? Here, we present a new database of summary plots of Chinese fiction (N = 3496) from the Tang Dynasty (618 AD) to the modern era (2022). Using the method of linguistic inquiry, we first confirm that the evolutionary pattern of romantic love corresponds to qualitative observations reported by literary scholars and cultural historians, with an increase during the High Tang, the High Qing, and the contemporary phase (post-1982) following reform and opening up, and a striking decrease during the period following the Opium Wars and during the Cultural Revolution. We then test whether these patterns can be explained by a change in people’s preferences in response to increasing economic development. Consistent with previous work, we show that the rise of romantic love correlates with the ups and downs of economic development in Chinese history.
short talk
- Gender bias in French literature
Laurine Vianne, Yoann Dupont and Jean Barré
This study delves into the representation of gender in French literature from 1800 to the present, aiming to assess the prevalence of gender stereotypes in the description of fictional characters. By employing an annotated corpus and statistical modeling techniques, the research explores how authors unconsciously perpetuated gender biases while shaping characters and narratives. The findings reveal significant linguistic patterns that reinforce traditional gender norms, with women being characterized by emotional and physical attributes, while men are associated with action and agency.
short talk
- Putting Dutchcoref To the Test: Character Detection and Gender Dynamics in Contemporary Dutch Novels
Joris J. Van Zundert, Roel Smeets and Andreas Van Cranenburgh
Although coreference resolution is a necessary requirement for a wide range of automated narratological analyses, most of the systems performing this task leave much to be desired in terms of either accuracy or their practical application in literary studies. While there are coreference resolution systems that demonstrate good performance on annotated fragments of novels, evaluations typically do not consider performance on the full texts of novels. In order to optimize its output for concrete use in Dutch literary studies, we are in the process of evaluating and finetuning Dutchcoref. Dutchcoref is an implementation of the Stanford Multi-Pass Sieve Coreference System for Dutch. Using a “silver standard” of annotated data on 2,137 characters in 170 contemporary Dutch novels, we assess the extent to which Dutchcoref is able to identify the most prominent characters and their gender. Furthermore, we explore the usability of the system by examining a specific narratological question about the gender distribution of the characters. We find that Dutchcoref is highly accurate in detecting noun phrases, proper names, and pronouns referring to characters, and that it is accurate in establishing their gender. However, the ability to cluster co-references together in a character profile, which we compare to BookNLP’s performance in this respect, is still sub-optimal and deteriorates with text length. We show that, notwithstanding its current state of development, Dutchcoref can be applied for meaningful literary analysis, and we outline future prospects.
short talk
- Profiling anonymous authors in the Corsican autonomist press of the inter-war period
Vincent Sarbach-Pulicani
With the emergence of nationalism in the 19th century came regionalist movements to assert and claim cultural particularities. Corsica fitted very well within this dynamic and even presented itself as a favourable location for the development of such ideas. The centralization of the state around a strong capital and the policies of assimilation of the indigenous populations on the border with France led certain players to defend these particularisms. It was in this context that the Corsican autonomist newspaper A Muvra was born in May 1920 in Paris, under the impetus of Petru and Matteu Rocca. For almost 19 years, hundreds of authors participated in the writing of this massive dialectal work. This paper presents the results of a research project that aimed to carry out author profiling, i.e., to determine the style and subjects covered by an author. The goals of this study were to determine the identities behind certain authors and to highlight the role pseudonyms played in the newspaper's propaganda. We conducted authorship attribution to achieve the first objective, before complementing these analyses with topic modelling in order to meet the second.
short talk
Session 3B. HTR and manuscripts
- Enhancing HTR of Historical Texts through Scholarly Editions: A Case Study from an Ancient Collation of the Hebrew Bible
Luigi Bambaci and Daniel Stoekl Ben Ezra
Printed critical editions of literary texts are a largely neglected source of knowledge in computational humanities. However, under certain conditions, they hold significant potential for multifaceted exploration: First, through Optical Character Recognition (OCR) of the text and its apparatus, coupled with intelligent parsing of the variant readings, it becomes possible to reconstruct comprehensive manuscript collations, which can prove invaluable for a variety of investigations, including phylogenetic analyses, redaction history studies, linguistic inquiries, and more. Second, by aligning the printed edition with manuscript images, a substantial amount of Handwritten Text Recognition (HTR) ground truth can be generated. This serves as valuable material for paleography and layout analysis, as well as for the study of the collation criteria adopted by the editor and for assessing their quality. The present paper focuses on the challenges mastered in the processes of OCR, apparatus parsing, text reconstruction, and alignment with the manuscript images, taking as a case study the edition of the Hebrew Bible published by Kennicott in the late eighteenth century. After a brief introduction (§ 1) and a description of this edition (§ 2), we provide an overview of the adopted method (§ 3), from image acquisition (§ 3.1) to the final textual reconstruction (§ 4). Finally, we conclude with an assessment of the work carried out and an outlook on potential future developments (§ 5).
long talk
- Automatic Collation for Diversifying Corpora: Commonly Copied Texts as Distant Supervision for Handwritten Text Recognition
David Smith, Jacob Murel, Jonathan Parkes Allen and Matthew Thomas Miller
Handwritten text recognition (HTR) has enabled many researchers to gather textual evidence from the human record. One common training paradigm for HTR is to identify an individual manuscript or coherent collection and to transcribe enough data to achieve acceptable performance on that collection. To build generalized models for Arabic-script manuscripts, perhaps one of the largest textual traditions in the pre-modern world, we need an approach that can improve its accuracy on unseen manuscripts and hands without linear growth in the amount of manually annotated data. We propose Automatic Collation for Diversifying Corpora (ACDC), taking advantage of the existence of multiple manuscripts of popular texts. Starting from an initial HTR model, ACDC automatically detects matching passages of popular texts in noisy HTR output and selects high-quality lines for retraining HTR without any manually annotated data. We demonstrate the effectiveness of this approach to distant supervision by annotating a test set drawn from a diverse collection of 59 Arabic-script manuscripts and a training set of 81 manuscripts of popular texts embedded within a larger corpus. After a few rounds of ACDC retraining, character accuracy rates on the test set increased by 19.6 absolute percentage points, while a supervised model trained on manually annotated data from the same collection increased accuracy by 15.9 points. We analyze the variation in ACDC's performance across books and languages and discuss further applications to collating manuscript families.
long talk
- Algorithms for the manipulation and transformation of text variant graphs
Tara Andrews
While text variant graphs are increasingly frequently used for the visualization of a text transmitted in multiple versions, the graph is also a very appropriate model for the querying and transformation of such a text in the course of producing a critical edition. This article describes the algorithms used in the TVGR repository for variant text traditions.
long talk, online
- Toward a Computational Historiography of Alchemy: Challenges and Obstacles of Object Detection for Historical Illustrations of Mining, Metallurgy and Distillation in 16th–17th Century Print
Sarah Lang, Bernhard Liebl and Manuel Burghardt
This study explores the use of modern computer vision methods for object detection in historical images extracted from 16th–17th-century printed books with illustrations of distillation, mining, metallurgy, and alchemical objects. We found that the transfer of knowledge from contemporary photographic data to historical etchings proves less effective than anticipated, revealing limitations in current methods like visual feature descriptors, pixel segmentation, representation learning, and object detection with YOLO. These findings highlight the stylistic disparities between modern images and early print illustrations, suggesting new research directions for historical image analysis.
short talk
Session 4A. Narrative
-
picture_as_pdf
Modeling Narrative Revelation
Andrew Piper, Hao Xu and Eric D. Kolaczyk
A core aspect of human storytelling is the element of narrative time. In this paper, we propose a model of narrative revelation using the information-theoretic concept of relative entropy, which has been used in a variety of settings to understand textual similarity, along with methods in time-series analysis to model the properties of revelation over narrative time. Given a beginning state of no knowledge about a story (beyond paratextual clues) and an end state of full knowledge about a story's contents, what are the rhythms of dissemination through which we arrive at this final state? Using a dataset of over 2,700 books of contemporary English prose, we test the time-dependent effects of narrative revelation against four stylistic categories of interest: audience age level, prestige, point-of-view, and fictionality.
long talk
-
picture_as_pdf
Operationalizing and Measuring Conflict in German Novels
Julian Häußler and Evelyn Gius
In this contribution we explore ways of detecting conflict representation in literary texts. First, we operationalize Glasl’s concept of social conflict for manual annotation and second, we adapt a word embedding-based sentiment analysis (SentiArt) for the attribution of conflict values based on two scalar conflict operationalizations. By translating the values of the latter approaches into binary labels, we compare the embedding approaches with the manual annotation. Though correlation between the approaches is low, the paper demonstrates possible approaches to conflict analysis in literary texts and outlines directions for future research.
short talk
-
picture_as_pdf
On character perception and plot structure of romance novels
Leonard Konle, Agnes Hilger and Fotis Jannidis
In this paper we describe a plot model for romance novels (German dime novel romances). To achieve this goal, we begin with the identification of essential structural parts of their plot based on scholarly analysis of romances. Then we formalize this conceptual model, apply it to two texts and compare the result to summaries of these novels based on reading them, to discuss the strengths and weaknesses of the plot model in making the plot structure visible. After a description of the corpus with its 950 novels and the annotation guidelines for the selected plot elements, we outline the methods we used to automatically detect these elements and to analyze the resulting time series data. The results section is divided into four parts: First, we report the performance of the automatic extraction of the plot elements. Secondly, we discuss the performance of the model in distinguishing between publishers, genres and series respectively. This will give us some insight into how well the model represents some of those distinctive plot elements. Thirdly, we apply the model to a larger corpus of romance novels to detect typical structures of the genre. Finally, we show how the plot model made the specific plot structure of one specific publisher, Cora, visible.
short talk
-
picture_as_pdf
Not just Plot(ting): A Comparison of Two Approaches for Understanding Narrative Text Dynamics
Pascale Moreira, Yuri Bizzoni, Emily Öhman and Kristoffer Nielbo
This paper presents the outcomes of a study that leverages emotion annotation to investigate the narrative dynamics in novels. We use two lexicon-based models, VADER sentiment annotation and a novel annotation of 8 primary NRC emotions, comparing them in terms of overlaps and assessing the dynamics of the sentiment and emotional arcs resulting from these two approaches. Our results indicate that whereas the simple valence annotation does not capture the intricate nature of narrative emotions, the two types of narrative profiling exhibit evident correlations. Additionally, we manually annotate selected emotion arcs to comprehensively analyse the resource.
short talk
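A valence arc of the kind compared in this paper can be computed with the off-the-shelf VADER analyzer; the windowing scheme below is a simple illustrative choice rather than the authors' exact segmentation, and the NRC emotion arcs work analogously with per-emotion lexicon counts.

```python
# pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

def sentiment_arc(text, n_windows=10):
    """Split a narrative into roughly equal windows and score each with
    VADER's compound valence, yielding a coarse sentiment arc."""
    analyzer = SentimentIntensityAnalyzer()
    words = text.split()
    step = max(1, len(words) // n_windows)
    windows = [" ".join(words[i:i + step]) for i in range(0, len(words), step)]
    return [analyzer.polarity_scores(w)["compound"] for w in windows]

novel = "It was a glorious morning. " * 30 + "Then everything fell apart. " * 30
print(sentiment_arc(novel, n_windows=4))
```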
-
picture_as_pdf
Evaluation and Alignment of Movie Events Extracted via Machine Learning from a Narratological Perspective
Feng Zhou and Federico Pianzola
We combine distant viewing and close reading to evaluate the usefulness of events extracted via machine learning from audio description of movies. To do this, we manually annotate events from Wikipedia summaries for three movies and align them to ML-extracted events. Our exploration suggests that computational narratology should combine datasets with events extracted from multimodal data sources that take into account both visual and verbal cues when detecting events.
short talk
Session 4B. Libraries and collections
-
picture_as_pdf
“The library is open!”: Open data and an open API for the HathiTrust Digital Library
John Walsh, Glen Layne-Worthey, Jacob Jett, Boris Capitanu, Peter Organisciak, Ryan Dubnicek and J. Stephen Downie
This paper describes the history, policy, semantics, and uses of the HathiTrust Research Center Extracted Features dataset, an open-access representation of the 17+ million volume HathiTrust Digital Library, including a major current effort to extend computational access in a variety of more flexible and easily implemented ways, including a modern API supporting customizable visualizations and analyses.
short talk
-
picture_as_pdf
The Past is a Foreign Place: Improving Toponym Linking for Historical Newspapers
Mariona Coll Ardanuy, Federico Nanni, Kaspar Beelen and Luke Hare
In this paper, we examine the application of toponym linking to digitised historical newspapers. These collections constitute the largest trove of historical text data. They contain varied, fine-grained information about the past, anchored in a specific place and time. Place names (or toponyms) are common entry points for starting to explore these collections. We introduce our own tool for toponym linking and resolution, T-Res, a modular, flexible, and open-source pipeline, which is built on top of robust state-of-the-art approaches. We present a comprehensive step-by-step examination of this task in English, and conclude with a case study in which we show how toponym linking enables historical research in the digitised press.
long talk
-
picture_as_pdf
Greetings from! Extracting address information from 100,000 historical picture postcards
Thomas Smits, Wouter Haverals, Mike Kestemont, Loren Verreyen and Mona Allaert
This paper explores the potential of computational methods in analyzing the vast corpus of historical picture postcards. By connecting three distinct locations – the sender’s, the recipient’s, and the depicted – the medium of the picture postcard has contributed to the formation of extensive spatial networks of information exchange. So far, the analysis of these spatial networks has been hampered by the fact that picture postcards are – literally and figuratively – hard to read. Using traditional methods, transcribing and analyzing a sizeable number of postcards would take a lifetime. To address this challenge, this paper presents a pipeline that leverages computer vision, handwritten text recognition, and large language models to extract and disambiguate address information from a collection of 102K historical postcards sent from Belgium, France, Germany, Luxembourg, the Netherlands, and the UK. We report a mAP of 0.94 for the CV model, a character error rate of 7.62%, and the successful extraction of 419 coordinates from an initial sample set of 500 postcards for the LLM. Overall, our pipeline demonstrates a reliable address information extraction rate for a significant proportion of the postcards in our data (with an average distance of 36.95 km between the HTR-determined addresses and the ground truth). Deploying our pipeline on a larger scale, we will be able to reconstruct the spatial networks that the medium of the postcard enabled.
long talk
-
picture_as_pdf
A Topological Data Analysis of Navigation Paths within Digital Libraries
Bayrem Kaabachi and Simon Dumas Primbault
The digitization of library resources and services has opened up physical informational spaces to new dimensions by allowing users to access a wealth of documents in ways that differ from browsing bookshelves traditionally organized according to the "tree of knowledge". How do readers of digital libraries orient themselves within big corpora? What landmarks do they use to navigate masses of digital documents? Taking Gallica – the digital heritage platform of the French national library – as a case study, this paper presents experimental research on the navigation practices of its users. Using methods from topological data analysis, we inferred from Gallica's server logs an informational space as it is roamed by readers. Coupled with user interviews, this mixed-methods study allowed us to identify a set of "regimes of navigation" characterizing how readers deploy various strategies to browse the digital library's corpus. From directed search to wandering to crawling, these regimes answer different needs and show that a single corpus can, in turn, be apprehended as a heritage collection, a database, a set of documents, and a mass of information.
long talk
Session 5A. Authorship attribution
-
picture_as_pdf
Unraveling the Synoptic puzzle: stylometric insights into Luke's potential use of Matthew
Sophie Robert-Hayek, Jacques Istas and Frédérique Rey
The literary sources behind the three canonical Synoptic Gospels, namely Luke, Matthew and Mark, have long intrigued scholars because of the Gospels' striking similarities and notable differences in their accounts of Jesus's life. Various theories have been proposed to explain these textual relationships, including common oral witnesses, lost sources or communities possessing each other's works. However, a universally accepted solution remains elusive. Leveraging advancements in statistics, data analysis, and computing power, researchers have begun treating this as a statistical problem and quantitatively measuring the likelihood of the different theories based on verbal agreements and stylometric features. In this paper, we present a novel Machine Learning based approach to solving the synoptic problem. We use Machine Learning classifier two-sample tests (illustrated after this entry) to detect differences in sources within Luke's Gospel and variations in the edition patterns of Markan material between Matthew and Luke on a pericope-per-pericope basis. The results suggest significant dissimilarities in style and edit distance, indicating that the double and triple material within the Gospel of Luke likely originate from different sources. This suggests that Luke derived his triple tradition from Mark and not from Matthew. Despite the necessity of cautious interpretation due to the size of the dataset, our study thus offers substantial evidence supporting the theory of Luke's dependency on Mark's material for his triple tradition and makes the two-source hypothesis, which suggests that Luke did not have access to Matthew's work, the most likely explanation based on our methodology.
long talk
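A classifier two-sample test can be sketched with scikit-learn: train a classifier to separate two groups of passages and check whether held-out accuracy beats chance. The toy English snippets and character n-gram features below are illustrative stand-ins for the paper's Greek pericopes and stylometric features.

```python
# pip install scikit-learn
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy stand-ins for passages from two candidate sources.
group_a = ["and he said unto them", "and it came to pass", "and he departed"] * 10
group_b = ["verily I say to you", "truly the kingdom is near", "blessed are they"] * 10

X = TfidfVectorizer(analyzer="char", ngram_range=(2, 4)).fit_transform(group_a + group_b)
y = np.array([0] * len(group_a) + [1] * len(group_b))

# Classifier two-sample test: if held-out accuracy is reliably above
# chance (0.5 here), the two samples are unlikely to share one source.
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"held-out accuracy: {acc:.2f}")
```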
-
picture_as_pdf
Twenty-One Pseudo-Chrysostoms and more: authorship verification in the patristic world
Thibault Clérice and Anthony Glaise
As the most prolific of the Church Fathers, John Chrysostom (344–407 CE) left a textual corpus whose sheer mass and theological importance have led to significant misattribution of texts, resulting in the existence of a second corpus known as the pseudo-Chrysostomian corpus. Like many Greek-language Church Fathers’ works, this corpus comprises anonymous texts, which scholars have attempted to reattribute or group together based on factors such as the person’s function, biography, ideology, style, etc. One survey conducted by Voicu in 1981 explored potential groupings of such texts and produced a critical list of 21 Pseudo-Chrysostom works identified by scholars, including Montfaucon (1655–1741), one of the first modern editors of Chrysostom’s writings. In this paper, we present a novel approach to addressing pseudonymous works in the context of Chrysostomian studies. We propose to employ Siamese networks within an authorship verification framework, following the methodology commonly used in recent computational linguistics competitions. Our embedding model is trained using features common in the digital humanities landscape, such as the most frequent words, affixes, and POS trigrams, utilizing a signal-to-noise ratio distance and pair mining. The results of our model show high AUC-ROC scores (84.5%). Furthermore, the article concludes with an analysis of the pseudo-Chrysostoms proposed by Voicu. We validate a significant portion of the hypotheses found in Voicu’s survey while also providing counter-arguments for two Pseudo-Chrysostoms. This research contributes to shedding light on the attribution of ancient texts and enriches the field of Chrysostomian studies.
long talk
-
picture_as_pdf
T5 meets Tybalt: author attribution in Early Modern English drama using large language models
Rebecca M. M. Hicke and David Mimno
Large language models have shown breakthrough potential in many NLP domains. Here we consider their use for stylometry, specifically authorship identification in Early Modern English drama. We find both promising and concerning results; LLMs are able to accurately predict the author of surprisingly short passages, but are also prone to confidently misattribute samples to specific authors. A fine-tuned T5-large model outperforms all baselines we test, including logistic regression and cosine delta, at attributing small passages. However, we see indications that the presence of certain authors in the model's pre-training data affects predictive results in ways that are difficult to assess.
long talk
-
picture_as_pdf
Detecting Psychological Disorders with Stylometry: the Case of ADHD in Adolescent Autobiographical Narratives
Juan Barrios, Florian Cafiero and Simon Gabay
Attention-deficit/hyperactivity disorder (ADHD) is one of the most common psychological neurodevelopmental disorders among children and adolescents, with a worldwide prevalence ranging from 3 to 10% in this population. Its diagnosis is reliable and valid when evaluated with standard criteria for psychiatric disorders, but it is time-consuming and requires a high level of expertise to arrive at a correct differential diagnosis. The development of low-cost, fast and efficient tools supporting the ADHD diagnosis process would therefore be important for practitioners, and should help identify and prevent risks in different populations. In this paper, we study the possibility of detecting ADHD with Natural Language Processing (NLP), based on the analysis of a specific type of adolescents' autobiographical narratives linked to the self-concept, called Self-Defining Memories (SDMs). (1) We evaluate the specificity of SDMs written by adolescents with ADHD compared to a control group; (2) we train a Support Vector Machine (SVM) to predict ADHD diagnosis; and (3) we identify ADHD markers in both groups and go back to the narratives for further analysis. With an accuracy of 85%, the SVM manages to classify texts from both groups (ADHD vs Control), revealing a signal specific to autobiographical narratives written by people with ADHD. The quality of the detection is confirmed by the interpretative yield of the main markers identified. However, several methodological improvements remain necessary to improve the accuracy and the automation of ADHD diagnosis with stylometric methods.
short talk
Session 5B. Large language models
- Death of the Dictionary? – The Rise of Zero-Shot Sentiment Classification
Janos Borst, Jannis Klähn and Manuel Burghardt
In our study, we conducted a comparative analysis between dictionary-based sentiment analysis and entailment zero-shot text classification for German sentiment analysis. We evaluated the performance of a selection of dictionaries on eleven data sets, including four domain-specific data sets with a focus on historic German language. Our results demonstrate that, in the majority of cases, zero-shot text classification outperforms general-purpose dictionary-based approaches but falls short of the performance achieved by specifically fine-tuned models. Notably, the zero-shot approach exhibits superior performance, particularly in historic German cases, surpassing both general-purpose dictionaries and even a broadly trained sentiment model. These findings indicate that zero-shot text classification holds significant promise as an alternative, reducing the necessity for domain-specific sentiment dictionaries and narrowing the availability gap of off-the-shelf methods for German sentiment analysis. Additionally, we thoroughly discuss the inherent trade-offs associated with the application of these approaches.
short talk
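Entailment-based zero-shot classification, the approach evaluated in this paper, is available off the shelf via the Hugging Face transformers pipeline; the multilingual NLI checkpoint and German hypothesis template below are illustrative assumptions, not the authors' exact configuration.

```python
# pip install transformers
from transformers import pipeline

# Each candidate label is cast as an NLI hypothesis ("Dieser Text ist
# positiv.") and scored against the input text via entailment.
classifier = pipeline("zero-shot-classification",
                      model="joeddav/xlm-roberta-large-xnli")

result = classifier(
    "Die Aufführung war über alle Maßen erbaulich.",
    candidate_labels=["positiv", "negativ", "neutral"],
    hypothesis_template="Dieser Text ist {}.",
)
print(result["labels"][0], round(result["scores"][0], 2))
```

Because the labels are free text, no sentiment dictionary or task-specific fine-tuning is needed, which is exactly the trade-off the paper examines.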
- If the Sources Could Talk: Evaluating Large Language Models for Research Assistance in History
Giselle Gonzalez Garcia and Christian Weilbach
Historical research is the continued perusal of memories in the form of archives. The recent advent of powerful Large Language Models (LLMs) provides a new conversational form of inquiry into historical memory (or, in this case, training data). We show that by augmenting such LLMs with vector embeddings from highly specialized academic sources, a conversational methodology can be made accessible to historians and other researchers in the Humanities. Concretely, we evaluate and demonstrate how LLMs can assist researchers while they examine a customized corpus of different types of documents, including, but not limited to: (1) primary sources, (2) secondary sources written by experts, and (3) the combination of these two. Compared to established search interfaces for digital catalogues, such as metadata and full-text search, we evaluate the richer conversational style of LLMs on two main types of tasks: (1) question-answering, and (2) extraction and organization of data. We demonstrate that LLMs' semantic retrieval and reasoning abilities on problem-specific tasks can be applied to large textual archives that have not been part of their training data. Therefore, LLMs can be augmented with sources relevant to specific research projects and can be queried privately by researchers.
long talk, online
- Querying the Past: Automatic Source Attribution with Language Models
Ryan Muther, Mathew Barber and David Smith
This paper explores new methods for locating the sources used to write a text by fine-tuning a variety of language models to rerank candidate sources. These methods promise to shed new light on traditions with complex citational practices, such as medieval Arabic, where citations are ambiguous and the boundaries of quotation are poorly defined. After retrieving candidate sources using a baseline BM25 retrieval model, a variety of reranking methods are tested to see how effective they are at the task of source attribution (a two-stage sketch follows this entry). We conduct experiments on two datasets – English Wikipedia and medieval Arabic historical writing – and employ a variety of retrieval- and generation-based reranking models. In particular, we seek to understand how the degree of supervision required affects the performance of various reranking models. We find that semi-supervised methods can be nearly as effective as fully supervised methods while avoiding potentially costly span-level annotation of the target and source documents.
long talk
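The retrieve-then-rerank pattern can be sketched in two stages; the toy documents and the off-the-shelf English cross-encoder below are illustrative stand-ins for the paper's fine-tuned rerankers and Arabic sources.

```python
# pip install rank_bm25 sentence-transformers
from rank_bm25 import BM25Okapi
from sentence_transformers import CrossEncoder

sources = [
    "The caliph departed for Khurasan in the year two hundred.",
    "A treatise on the stations of the moon and their omens.",
    "He related that the caliph set out toward Khurasan that year.",
]
query = "In that year the caliph travelled to Khurasan."

# Stage 1: BM25 retrieves candidate sources on token overlap.
bm25 = BM25Okapi([s.lower().split() for s in sources])
candidates = bm25.get_top_n(query.lower().split(), sources, n=2)

# Stage 2: a cross-encoder reranks the candidates on semantic relevance.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, c) for c in candidates])
print(sorted(zip(scores, candidates), reverse=True)[0][1])
```

The cheap first stage keeps the candidate set small, so the expensive pairwise reranker only scores a handful of query-source pairs.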
- Style Transfer of Modern Hebrew Literature Using Text Simplification and Generative Language Modeling
Ophir Münz-Manor, Pavel Kaganovich and Elishai Ezra-Tsur
The task of Style Transfer (ST) in Natural Language Processing (NLP) involves altering the style of a given sentence to match another target style while preserving its semantics. Currently, the availability of Hebrew models for NLP, specifically generative models, is scarce. The development of such models is a non-trivial task due to the complex nature of Hebrew. The Hebrew language presents notable challenges to NLP as a result of its rich morphology, intricate inflectional structure, and orthography, which have undergone significant transformations throughout its history. In this work, we propose a generative ST model of modern Hebrew that rewrites sentences to a target style in the absence of parallel style corpora. Our focus is on the domain of Modern Hebrew literature, which presents unique challenges for the ST task. To overcome the lack of parallel data, we initially create a pseudo-parallel corpus using back-translation (BT) techniques for the purpose of achieving text simplification. Subsequently, we fine-tune a pre-trained Hebrew language model (LM) and leverage a Zero-shot Learning (ZSL) approach for ST. Our study demonstrates significant achievements in terms of transfer accuracy, semantic similarity, and fluency in the ST of source sentences to a target style using our model. Notably, to the best of our knowledge, no prior research has focused on the development of ST models specifically for Modern Hebrew literature. As such, our proposed model constitutes a novel and valuable contribution to the fields of Hebrew NLP, Modern Hebrew literature and, more generally, computational literary studies.
long talk
Session 6. Audio/Visual
- Studying Tonal Evolution of Western Choral Music: A Corpus-Based Strategy
Christof Weiß and Meinard Mueller
The availability of large digital music archives combined with significant advances in computational analysis methods has enabled novel strategies for musicological corpus studies. This includes approaches based on audio recordings, which are available in large quantities for different musical works and styles. In this paper, we take up such an audio-based approach for studying the tonal complexity of music and its evolution over centuries. In particular, we examine the tonal evolution of Western choral and sacred music exploiting a novel audio corpus (5773 tracks) with a rich set of annotations. The data stems from one of the world's leading music publishers for choral music, the Carus-Verlag, which specializes in scholarly-critical sheet music editions of this repertoire and also runs its own record label. Based on this corpus, we revisit a heuristic strategy that exploits composer life dates to approximate work count curves over the years, validate this approximation strategy, and optimize its parameters using the reference composition years annotated in the Carus dataset. We then apply this strategy to derive evolution curves from the full Carus dataset. We compare the results to a study based on a purely instrumental dataset and test three hypotheses on tonal evolution, namely that (1) global complexity increases faster than local complexity, that (2) major keys are tonally more complex than minor keys, and that (3) instrumental music is more complex than vocal music. The results provide interesting insights into the choral music repertoire and suggest that well-curated publisher data constitutes a valuable resource for the computational humanities.
long talk
- Understanding individual and collective diversity of cultural taste through large-scale music listening events
Harin Lee, Romain Hennequin and Manuel Moussallam
Emerging research suggests that cultural richness and complexity intensify with population size. Yet the mechanism underlying this phenomenon remains unclear: Do populated areas exhibit more cultural diversity simply because they contain a larger spectrum of individuals with varied backgrounds, or does the urban environment itself stimulate individuals to explore a wider variety of cultural experiences, raising the population's baseline? To decipher this, we leverage a large-scale dataset of 69 million real-world music listening events, examining the listening patterns of over 408 thousand unique individuals across 96 regions in France. Our study presents a dual perspective on diversity by (1) measuring the diversity of an individual's musical consumption by evaluating the breadth of their music listening history (one plausible breadth measure is sketched after this entry), and (2) assessing the shared repertoire among individuals as a collective. We found that both individual and collective levels of musical consumption diversity increase with population size. This trend held true when segmenting the population by gender and age groups, while a gender-specific divergence in consumption emerged from a particular age onward. We further delineate potential confounding variables to consider in future research aimed at identifying causal pathways, presenting this model using a Directed Acyclic Graph (DAG). Together, our preliminary work represents a crucial step towards unravelling the complexity of cultural diversity and its ties to population dynamics.
short talk
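One plausible way to operationalize the paper's two levels of diversity: Shannon entropy of a single listener's artist distribution for the individual level, and the size of the shared repertoire for the collective level. These measures are assumptions for illustration; the study's exact definitions may differ.

```python
# Sketch: one operationalization of the two diversity levels.
# Individual diversity = Shannon entropy (bits) of a user's artist
# distribution; collective diversity = distinct tracks in a region.
import math
from collections import Counter

def individual_diversity(listening_events):
    """Entropy of one user's (artist, track) listening events."""
    counts = Counter(artist for artist, _track in listening_events)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def collective_diversity(events_by_user):
    """Number of distinct tracks heard across all users in a region."""
    return len({track for events in events_by_user.values()
                for _artist, track in events})
```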
-
From Clusters to Graphs – Toward a Scalable Viewing of News Videos
Nicolas Ruth, Manuel Burghardt and Bernhard Liebl
In this short paper, we present a novel approach that combines density-based clustering and graph modeling to create a scalable viewing application for exploring similarity patterns in news videos. Unlike most existing video analysis tools, which focus on individual videos, our approach provides an overview of a larger collection of videos, which can then be examined through its connections or communities. Following the paradigm of scalable reading, users can select specific subgraphs from the overview and explore their respective clusters in more detail at the video-frame level.
short talk
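A rough sketch of such a clustering-plus-graph pipeline: DBSCAN groups frame embeddings into clusters, and a graph then links clusters whose centroids are similar, giving the collection-level overview that can be drilled into. The embedding input, DBSCAN parameters, and similarity threshold are all illustrative assumptions, not the authors' configuration.

```python
# Sketch: density-based clustering of video-frame embeddings, then a
# graph whose nodes are clusters and whose edges connect clusters with
# similar centroids. Parameters and threshold are illustrative.
import numpy as np
import networkx as nx
from sklearn.cluster import DBSCAN
from sklearn.metrics.pairwise import cosine_similarity

def cluster_graph(frame_embeddings, sim_threshold=0.8):
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(frame_embeddings)
    centroids = {lab: frame_embeddings[labels == lab].mean(axis=0)
                 for lab in set(labels) if lab != -1}   # -1 marks noise
    graph = nx.Graph()
    graph.add_nodes_from(centroids)
    labs = list(centroids)
    if labs:
        sims = cosine_similarity(np.stack([centroids[l] for l in labs]))
        for i in range(len(labs)):
            for j in range(i + 1, len(labs)):
                if sims[i, j] >= sim_threshold:
                    graph.add_edge(labs[i], labs[j], weight=float(sims[i, j]))
    return labels, graph
```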
-
Blind Dates: Examining the Expression of Temporality in Historical Photographs
Alexandra Barancová, Melvin Wevers and Nanne van Noord
This paper explores the capacity of computer vision models to discern temporal information in visual content, focusing specifically on historical photographs. We investigate the dating of images using OpenCLIP, an open-source implementation of CLIP, a multi-modal language and vision model. Our experiment consists of three steps: zero-shot classification, fine-tuning, and analysis of visual content. We use the De Boer Scene Detection dataset, containing 39,866 gray-scale historical press photographs from 1950 to 1999. The results show that zero-shot classification is relatively ineffective for image dating, with a bias towards predicting dates in the past. Fine-tuning OpenCLIP with a logistic classifier improves performance and eliminates the bias. Additionally, our analysis reveals that images featuring buses, cars, cats, dogs, and people are more accurately dated, suggesting the presence of temporal markers. The study highlights the potential of machine learning models like OpenCLIP in dating images and emphasizes the importance of fine-tuning for accurate temporal analysis. Future research should explore the application of these findings to color photographs and diverse datasets.
short talk
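For readers unfamiliar with zero-shot dating, the sketch below scores a photograph against decade prompts with OpenCLIP. The prompt template and the model/pretraining weights are common open_clip defaults, not necessarily the configuration used in the paper.

```python
# Sketch: zero-shot decade classification with OpenCLIP. Prompt
# template and weights are common defaults, not the paper's setup.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

decades = [f"{y}s" for y in range(1950, 2000, 10)]
prompts = tokenizer([f"a press photograph taken in the {d}" for d in decades])

image = preprocess(Image.open("photo.jpg")).unsqueeze(0)
with torch.no_grad():
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(prompts)
    img_feat /= img_feat.norm(dim=-1, keepdim=True)
    txt_feat /= txt_feat.norm(dim=-1, keepdim=True)
    probs = (100 * img_feat @ txt_feat.T).softmax(dim=-1)
print(dict(zip(decades, probs.squeeze().tolist())))
```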
-
Towards a Phenomenographic Framework for Exploratory Visual Analysis of Bibliographic Data
Martin Ruskov and Sara Sullam
A recurring challenge in studying the history of translation is interpreting catalogue metadata. On the one hand, such interpretation is limited by the fact that the data present in catalogue records is nominal, not quantitative. On the other hand, such research is guided by the tacit knowledge of scholars in the humanities and thus can be at odds with reproducible science. We take inspiration from phenomenography, a discipline within educational research that examines how students perceive the phenomena being learned, and adopt the view that scientific inquiry is a collective form of learning. On this basis, we turn to the phenomenographic theory that variation is necessary to understand the phenomena being studied and is achieved through three distinct patterns of variance: contrast, generalisation and fusion. We propose an approach that visualises the combination of nominal data and tacit knowledge by subjecting it to these three patterns. We illustrate our approach with two case studies of literary translations between Italy and the UK in the post-war 20th century. Our claim is that, on the one hand, this guides scholars in how to analytically approach their research questions and, on the other, drives them to externalise and validate hidden assumptions. Our approach offers a way of doing reproducible science not only when conducting literary research with bibliographic data, but also more widely in the humanities whenever tabular (including relational) databases are available.
short talk
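One hypothetical reading of the three variation patterns in terms of ordinary tabular operations on nominal catalogue data; all column names and records below are invented for illustration, and this mapping is the editor's sketch rather than the authors' framework.

```python
# Sketch: the three phenomenographic variation patterns expressed as
# tabular operations on nominal catalogue data. All names invented.
import pandas as pd

records = pd.DataFrame({
    "author":   ["Calvino", "Calvino", "Woolf", "Woolf"],
    "language": ["it", "en", "en", "it"],
    "decade":   ["1950s", "1960s", "1950s", "1960s"],
})

# Contrast: hold one aspect fixed and vary another.
contrast = records.loc[records["author"] == "Calvino", "language"].value_counts()
# Generalisation: vary the held aspect to see whether the pattern persists.
generalisation = records.groupby("author")["language"].value_counts()
# Fusion: let two aspects vary simultaneously.
fusion = pd.crosstab(records["language"], records["decade"])
```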
Session 7. Literature
-
Make Love or War? Monitoring the Thematic Evolution of Medieval French Narratives
Jean-Baptiste Camps, Nicolas Baumard, Pierre-Carl Langlais, Olivier Morin, Thibault Clérice and Jade Norindr
In this paper, we test the hypothesis that the cultural importance of love, as perceived through narrative fictions, varied over time concomitantly with material well-being and economic development. To do so, we focus on the large and culturally important body of manuscripts containing medieval French long narrative fictions, in particular epics (chansons de geste, of the Matter of France) and romances (chiefly romans on the Matters of Britain and of Rome), both in verse and in prose, from the 12th to the 15th century. We introduce the largest available corpus of these texts, the Corpus of Medieval French Epics and Romances, composed of digitised manuscripts drawn from Gallica and processed through layout analysis and handwritten text recognition. We then use semantic representations based on embeddings to monitor the place given to love and violence in this corpus over time. We observe that themes (such as the relation between love and death) and emblematic works well identified by literary history do indeed play a central part in the representation of love in the corpus, but our modelling also points to the characteristic nature of more overlooked works. Variation over time suggests that there was indeed a phase of expansion of love in these fictions in the 13th and early 14th century, followed by a period of contraction, which seems to correlate with the Crisis of the Late Middle Ages.
long talk
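An embedding-based monitoring step of the kind the abstract describes might look like the following: passages are embedded, compared by cosine similarity to a seed description of the theme, and averaged per half-century. The multilingual model and the seed phrase are illustrative assumptions, and medieval French lies well outside such a model's training domain, so treat this purely as a sketch of the mechanics.

```python
# Sketch: monitor a theme over time via sentence embeddings. Passages
# are compared to a seed description of the theme and the similarities
# are averaged per half-century. Model and seed phrase are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def theme_curve(passages, dates, seed="l'amour entre un chevalier et une dame"):
    seed_vec = model.encode([seed])[0]
    vecs = model.encode(passages)
    sims = vecs @ seed_vec / (np.linalg.norm(vecs, axis=1) *
                              np.linalg.norm(seed_vec))
    by_period = {}
    for sim, date in zip(sims, dates):
        by_period.setdefault(50 * (date // 50), []).append(sim)
    return {p: float(np.mean(v)) for p, v in sorted(by_period.items())}

curve = theme_curve(["La dame embrassa le chevalier en pleurant."], [1250])
```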
-
How Exactly does Literary Content Depend on Genre? A Case Study of Animals in Children’s Literature
Kirill Maslinsky
The content of literary fiction depends at least partly on literary tradition; this dependence is attested quantitatively in the association of genre with lexical statistical patterns. This short paper is a step towards formally modeling the content-moderating processes associated with literary genres. The idea is to explain the prevalence of particular lemmas in a literary text by the genre-dependent accessibility of the corresponding semantic category during the creative process. Data on animals mentioned in various sub-genres of a corpus of Russian children's literature serve as the empirical case. Vocabulary growth models are applied to infer genre-related differences in the overall diversity of animal vocabularies, and a constrained topic model is employed to infer the preferences for particular animal lemmas displayed by the various genres. The results demonstrate the models' potential to infer genre-related content preferences in the context of high variance and data imbalance.
short talk
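Vocabulary growth models relate the number of distinct types V to the number of tokens N; the simplest member of the family is Heaps' law, V(N) = K·N^β. Below is a minimal sketch of fitting it by log-log regression; the paper's own models may be richer than this.

```python
# Sketch: a Heaps-law vocabulary growth fit, V(N) = K * N^beta,
# estimated by log-log regression over the running type count.
import numpy as np

def heaps_fit(tokens):
    """Fit V(N) = K * N^beta to the running type count of a token list."""
    seen, vocab_sizes = set(), []
    for tok in tokens:
        seen.add(tok)
        vocab_sizes.append(len(seen))
    n = np.arange(1, len(tokens) + 1)
    beta, log_k = np.polyfit(np.log(n), np.log(vocab_sizes), 1)
    return np.exp(log_k), beta

k, beta = heaps_fit("the cat saw the dog and the cat ran".split())
```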
-
Comparing ChatGPT to Human Raters and Sentiment Analysis Tools for German Children's Literature
Simone Rebora, Marina Lehmann, Anne Heumann, Wei Ding and Gerhard Lauer
In this paper, we apply the ChatGPT Large Language Model (gpt-3.5-turbo) to the 4books dataset, a German-language collection of children's and young adult novels comprising a total of 22,860 sentences annotated for valence by 80 human raters. We test whether ChatGPT can (a) approximate the behaviour of human raters and/or (b) outperform state-of-the-art sentiment analysis tools. Results show that, while inter-rater agreement with human readers is low (independently of whether context is included), efficiency scores are comparable to those of the most advanced sentiment analysis tools.
short talk
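A minimal sketch of the comparison setup: elicit a valence rating per sentence from gpt-3.5-turbo and correlate it with the human annotations. The prompt wording, the -3 to +3 scale, and the use of Spearman correlation are illustrative assumptions, not the study's protocol.

```python
# Sketch: elicit a sentence-level valence rating from gpt-3.5-turbo and
# compare it with human annotations. Prompt and scale are illustrative.
from openai import OpenAI
from scipy.stats import spearmanr

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def gpt_valence(sentence: str) -> int:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": ("Rate the emotional valence of this German "
                               "sentence from -3 (very negative) to 3 (very "
                               f"positive). Reply with a number only.\n{sentence}")}],
    )
    return int(response.choices[0].message.content.strip())

def agreement(sentences, human_means):
    """Spearman correlation between model ratings and mean human valence."""
    gpt_scores = [gpt_valence(s) for s in sentences]
    return spearmanr(gpt_scores, human_means)
```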
-
The Chatbot and the Canon: Poetry Memorization in LLMs
Lyra D'Souza and David Mimno
Large language models are able to memorize and generate long passages of text from their pretraining data. Poetry is commonly available on the web and often fits within language-model context sizes. As LLMs become a potential tool in literary analysis, the accessibility of poems will determine the effective canon. We assess whether current language models can be prompted to produce poems, and which methods lead to the most successful retrieval. For the highest-performing model, ChatGPT, we then evaluate which features of poets best predict memorization and document changes over time in ChatGPT's ability and willingness to retrieve poetry.
short talk
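Memorization of a poem can be quantified, for example, as the string similarity between a model's completion and the reference text. A sketch under that assumption; `ask_model` and the reference constant are hypothetical placeholders, not the authors' code.

```python
# Sketch: quantify verbatim recall of a poem as the character-level
# similarity between a model's output and the reference text.
# `ask_model` and OZYMANDIAS_TEXT are hypothetical placeholders.
from difflib import SequenceMatcher

def memorization_score(model_output: str, reference_poem: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means verbatim recall."""
    return SequenceMatcher(None, model_output, reference_poem).ratio()

prompt = "Recite the poem 'Ozymandias' by Percy Bysshe Shelley."
# score = memorization_score(ask_model(prompt), OZYMANDIAS_TEXT)
```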
-
Persistence of gender asymmetries in book reviews within and across genres
Ida Marie S. Lassen, Pascale Feldkamp Moreira, Yuri Bizzoni, Mads Rosendahl Thomsen and Kristoffer Nielbo
While literary judgements are considered highly subjective, or at least noisy, gender asymmetries are evident in readers' and reviewers' assessments of literature. This study adds genre categories to the examination of literary reviews. By considering the gender of authors and reviewers, the media type of the reviews (newspapers, online blogs), and genre, the analysis provides a detailed overview of the Danish review scene and sheds light on structural biases. Analyzing how genres are reviewed in newspapers and blogs, we identify systematic trends that may be attributed to gender biases in reviewer judgments across, as well as within, different genres and media types. Our results show that book reviews in Danish media are polarized between reviewer genders and between the two media types considered, sustaining further gender asymmetries in rating.
short talk
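Asymmetries of this kind are often probed with a linear model that crosses author gender with genre and media type. A minimal sketch with invented column names and toy data follows; the study's actual specification may differ.

```python
# Sketch: test for rating asymmetries with a linear model crossing
# author gender with genre, plus a media-type term. Toy data; all
# column names are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

reviews = pd.DataFrame({
    "rating":        [4, 5, 3, 6, 2, 5],
    "author_gender": ["f", "m", "f", "m", "f", "m"],
    "genre":         ["crime", "crime", "literary", "literary", "crime", "literary"],
    "media":         ["newspaper", "blog", "newspaper", "blog", "blog", "newspaper"],
})

model = smf.ols("rating ~ C(author_gender) * C(genre) + C(media)",
                data=reviews).fit()
print(model.summary())
```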
Lightning talks
-
From the television corpus to the web corpus using an automatic visual tool: methods of the CROBORA project
Shiming Shen
-
AI Unleashed: Testing Automatic Metadata against Human Labeling on Handwritten Sources in City-state Bern (1528-1795)
Christel Annemieke Romein, Sara Veldhoen, Andreas Wagner and J.C. Romain
-
Theorizing risk attitudes and rationality using agent-based modeling
Rebecca Sutton Koeser and Lara Buchak
-
Agent-based models to understand the spatiotemporal patterns of the earliest occupations of Western Europe
Carolina Cucart Mora, Jan-Olaf Reschke, Harry Hall, Kamilla Lomborg, Christine Hertler, Mehdi Saqalli, Matt Grove and Marie-Hélène Moncel
-
Dissecting the Trendline: Explaining Historical Change with the Price Equation
Oleg Sobchuk and Bret Beheim
-
A Computational Approach to the Cultural Evolution of Cognitive Metaphors in Historical Texts (1517-1716)
Vojtech Kase and Petr Pavlas
-
Information Flow in Dramatic Texts
Botond Szemes and Mihály Nagy
-
Night tactics. Gender and the creation of the modern urban night from a data-driven perspective (Antwerp, 1870-1940)
Lith Lefranc
-
From Page to Persona: Exploring Gendered Linguistic Patterns in Manga
Mareeha Ahmad and Maheen Zia
-
Computational thematics: Comparing algorithms for clustering the genres of literary fiction
Oleg Sobchuk and Artjoms Šeļa
-
Connecting Reading Impact to Characteristics of Novels: Getting from Proxies to Concrete Features?
Joris J. Van Zundert, Marijn Koolen, Carsten Schnober, Eva Viviani and Willem Van Hage
-
Recetas: A Bilingual Newspaper Recipe Website
Sarah Tew and Melissa Jerome
-
Radio entropy: Experiments with large-scale audio data
Johan Malmstedt
-
Studying a global network of cultural influence across 1,423 cities and 53 nations through large-scale music discovery behavior
Manuel Anglada-Tort, Harin Lee, Marc Schönwiesner, Minsu Park and Nori Jacoby
Posters
-
Computational analysis of artistic style prevalence in generative AI art
Thomas Efer and Andreas Niekler
-
Evaluating State-of-the-Art Handwritten Text Recognition (HTR) Engines with Large Language Models (LLMs) for Historical Document Digitisation
Christel Annemieke Romein, Achim Rabus, Gundram Leifert, Tobias Hodel and Phillip Ströbel
-
The Flemish Operation: Language Choices in the Repertoire of the Antwerp Opera (1893 – 1934)
Mona Allaert and Mike Kestemont
-
Metronome: tracing variation in poetic meters via local sequence alignment
Benjamin Nagy, Artjoms Šeļa, Mirella De Sisto, Wouter Haverals and Petr Plecháč
-
Analysing Image Similarity Recommendations Across Photographic Collections
Taylor Arnold and Lauren Tilton
-
Understanding the Role of Speech Acts in a Large Corpus of Political Communication
Klaus Schmidt, Andreas Niekler and Manuel Burghardt
-
Mining the Dutch attitudes towards animals and plants
Arjan van Dalfsen
-
Towards emotion analysis for Alsatian theater
Qinyue Liu, Pablo Ruiz Fabo and Delphine Bernhard
-
Large language models to supercharge humanities research
Andres Karjus
-
A Graph Database and an Ontology for Computational Literary Studies
Federico Pianzola, Andreas van Cranenburgh, Xiaoyan Yang, Noa Visser, Michiel van der Ree, Luca Scotti and Ze Yu
-
Telling a Story with Data: shift in the Mediterranean Diet's discourse from 1950 to 2020
Arina Melkozernova, Juliann Vitullo, Ryan Dubnicek, Daniel J. Evans and Boris Capitanu
-
‘Go into the sea’ or ‘to venture’: Using token embeddings to disentangle lexical usages in Chinese
Jing Chen and Chu-Ren Huang
-
Encoded literary history: A word embedding approach to literary history
Judith Brottrager
-
Profiling charged domains through the lens of correlating subtexts
Ryan Brate and Marieke Van Erp
-
How far back into the past can we trust language phylogenies?
Emma Kopp and Robin Ryder
-
FicTag Visualizer: A Tool for Fanfiction Tag Analysis and Three Use Cases in Fan Interpretation
Julia Neugarten, Christoph Minixhofer and David Slot
-
Investigating the reliability of expert queries in a historical corpus
Thomas Rainsford and Mathilde Regnault
-
Publishing the Neulateinische Wortliste as Linked Open Data
Federica Iurescia, Eleonora Litta, Marco Passarotti and Matteo Pellegrini
-
Explicit References to Societal Values in Fairy Tales: A Comparison between Three European Cultures
Alba Morollon Diaz-Faes, Carla Sofia Ribeiro Murteira and Martin Ruskov
-
How to Evaluate Coreference in Literary Texts?
Ana Duron Tejedor, Pascal Amsili and Thierry Poibeau
-
Understanding the impact of two derived text formats on DistilBERT-based binary sentiment classification
Keli Du and Christof Schöch
-
Словотвір: a natural experiment in word replacement
Alexey Koshevoy, Olivier Morin and Oleg Sobchuk
-
Dating the Stylistic Turn: the Strength of the Auctorial Signal in Early Modern French Plays
Florian Cafiero and Simon Gabay
-
Computer vision, historical photographs and halftone visual culture
Mohamed Salim Aissi, Marina Giardinetti, Isabelle Bloch, Julien Schuh and Daniel Foliard