
{de} Im Forschungskolloquium werden internationale Projekte aus dem Bereich der digitalen Geisteswissenschaften vorgestellt. In diesem Semester wird ein Schwerpunkt auf Themen aus den digitalen Altertumswissenschaften liegen. Die Veranstaltung ist offen für externe Teilnehmer*innen, eine Registrierung ist nicht erforderlich, einfach diesen Link klicken: https://meet.in-berlin.de/dh_colloquium_fu_berlin  Bei Rückfragen wenden Sie sich bitte an Prof. Dr. Stefan Heßbrüggen-Walter (s.hessbrueggen-walter@fu-berlin.de).

{en} This research colloquium presents international Digital Humanities projects. This semester will feature a number of talks from digital classics. The event is open to external participants and no registration is required; just follow this link: https://meet.in-berlin.de/dh_colloquium_fu_berlin If you have any questions, please contact Prof. Dr. Stefan Heßbrüggen-Walter (s.hessbrueggen-walter@fu-berlin.de).

*

Semesterplan / Schedule

Mi/Wed · 26.10.2022 · 18:15–19:45

Maximilian Noichl (Universität Bamberg/Universität Wien): »Modeling Multilingual Philosophies«

The vast increase in scholarly output since the early 20th century poses interesting challenges to the historiography of the humanities and sciences. While case studies of discoveries, exchanges and biographies remain informative, more general statements about the shifting structure and focus of inquiry become difficult to make. This, among other factors, has led scholars to embrace computer-aided forms of research that explore large corpora of material by means of natural language processing. An additional complication for the study of the humanistic fields is that they, much more than the empirical sciences, have retained a multilingual alignment, which digital scholarship ought to, but often fails to, take into account. Apart from issues of data availability and compatibility, a common reason for this is that multilingualism is challenging on a technical level.

In this contribution, we explore one technical solution to this problem. We describe how we use a pipeline of pre-training and alignment schemes to produce a multilingual language model (philroBERTa) fine-tuned on roughly three hundred thousand philosophical texts from the 20th and 21st centuries. We discuss how we can evaluate the quality of this language model and explore what the structure of philosophy it encodes looks like. Finally, we put the model into service to investigate the relationship between analytical and Continental philosophy. While still at an exploratory stage, we can show how models of this kind can be used to answer recurring questions about disciplinary structure, e.g. whether the analytical/Continental divide has been closing over the last few decades or whether it remains stable.
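The abstract describes fine-tuning a multilingual model so that philosophical texts in different languages share one embedding space. A minimal sketch of the downstream idea, using hand-made toy vectors as stand-ins for embeddings from a model such as philroBERTa (the vectors and sentences below are invented for illustration):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm

# Toy stand-ins for multilingual sentence embeddings; a real model
# would map texts in any language into one shared space.
embeddings = {
    "en: What is a concept?":      [0.9, 0.1, 0.2],
    "de: Was ist ein Begriff?":    [0.8, 0.2, 0.3],
    "en: The market fell sharply": [0.1, 0.9, 0.1],
}

query = "en: What is a concept?"
others = [k for k in embeddings if k != query]
best = max(others, key=lambda k: cosine(embeddings[query], embeddings[k]))
print(best)  # the German paraphrase is the nearest neighbour
```

In a shared embedding space, a German paraphrase lands closer to its English counterpart than an unrelated English sentence does, which is what makes cross-lingual corpus analysis possible.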



Mi/Wed · 09.11.2022 · 18:15–19:45

Sarah Lang (Karl-Franzens-Universität Graz): »Experimentorum decades. Processing books of secrets and extracting recipes from Neo-Latin sources«

So-called “books of secrets” are a genre that saw a great boom during the ongoing print revolution towards the end of the 17th century. They usually contain recipes, among which are simple household recipes as well as alchemical and chymical experiments and process instructions. Yet despite their intriguing (and perhaps somewhat misleading) name, the genre remains relatively understudied. Interest in the topic has increased after publications by William Eamon and Columbia University’s flagship project dealing with such recipe books, Making and Knowing.

The setup of such books is not entirely uniform, yet the presence of recipes that look more or less similar suggests that it might be possible, after a considerable amount of pre-processing and the development and application of machine learning algorithms, to extract such recipes automatically. This would allow for a bird’s-eye view of the genre and help answer research questions such as the following: What types of recipes appear most frequently in the books under investigation? Do they reprint a similar set of standard recipes, or does their sales value lie in their diversity? Which materials are used most frequently? Are they common household materials, or difficult to source and thus probably not intended for an audience of ordinary citizens to recreate in a laboratory?

To start this work in progress, a set of 14 books of secrets was run through Transkribus to create 2,419,491 words of text. However, the project still faces a number of difficulties before anything close to recipe extraction will be possible. It is those challenges that will be discussed in this talk.
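A toy illustration of the rule-based end of such extraction: splitting a transcription into recipes at a recurring heading cue. The snippet of Latin text and the marker "Aliud." ("another [recipe]") are plausible but hypothetical; they are not taken from the project's corpus, where real headings vary and machine learning would be needed:

```python
import re

# Toy transcription; real input would be Transkribus output from the
# books of secrets. The recipe-initial marker "Aliud." is hypothetical.
text = """Aliud. Recipe aquam rosarum et misce cum melle.
Coque lento igne donec spissetur.
Aliud. Accipe vitriolum et sal commune, distilla in alembico."""

# Split on the marker at line starts, keeping each recipe's body together.
recipes = [r.strip() for r in re.split(r"(?m)^Aliud\.\s*", text) if r.strip()]
print(len(recipes))  # 2
```

The hard part the talk addresses is precisely that real books of secrets lack such a uniform marker, which is why learned models rather than fixed rules are needed.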

Mi/Wed · 23.11.2022 · 18:15–19:45

Annette von Stockhausen (Berlin-Brandenburgische Akademie der Wissenschaften Berlin): »Digitale Editionen im Bereich der (spät)antiken christlichen Literatur - Erwartungen, Schwierigkeiten, Lösungen«

Using the project »Alexandrian and Antiochene Biblical Exegesis in Late Antiquity« and the »Patristic Text Archive« created for it, the talk discusses which hopes and expectations are attached to digital edition projects in the study of antiquity, which opportunities open up, which difficulties have arisen so far or are still to be expected, and which solutions have already been found or can be anticipated.

Mi/Wed · 07.12.2022 · 18:15–19:45

Charles Pence (Université catholique de Louvain): »Topic Modeling for Conceptual Cartography«

A not uncommon desire in digital humanities work is to take the measure of a concept or idea and its changing use and meaning over time. How and where has it been invoked? With what has it been related, and in what contexts? A natural tool, in turn, for this kind of work is topic modeling – the kinds of results that topic modeling provides could give us the “semantic neighborhood” surrounding a key term, and, carefully applied, show us how that neighborhood varies across a corpus. In this talk, I’ll present some efforts in this direction: successes and failures in a number of efforts to use topic models, supplemented with various kinds of other information or further complexity, to better understand the cartography of concepts in historical and contemporary philosophy of science. I hope to offer both some fruitful examples and some terrible warnings.
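A crude stdlib stand-in for the "semantic neighborhood" idea: counting the words that occur near a key term within a context window. A real analysis would fit a topic model (e.g. LDA) over a large corpus; the three miniature documents below are invented for illustration:

```python
from collections import Counter

# Invented toy corpus; a real study would use thousands of documents.
docs = [
    "the gene is a unit of heredity",
    "the gene concept changed after molecular biology",
    "markets respond to price signals",
]

def neighborhood(term, docs, window=3):
    """Count words within `window` tokens of `term` across documents."""
    counts = Counter()
    for doc in docs:
        tokens = doc.split()
        for i, tok in enumerate(tokens):
            if tok == term:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(t for t in tokens[lo:hi] if t != term)
    return counts

hood = neighborhood("gene", docs)
print(hood.most_common(3))
```

Comparing such neighborhoods across time slices of a corpus is one simple way to see how the use of a concept shifts, which is the question a topic model answers with much more structure.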

Mi/Wed · 18.01.2023 · 18:15–19:45

Federica Iurescia (Università Cattolica del Sacro Cuore, Milano): »LiLa: Linking Latin. Interoperability between Lexical and Textual Resources for Latin with the Linked Open Data Paradigm«

The “LiLa: Linking Latin” project builds a Linked Data-based knowledge base of interoperable linguistic resources and Natural Language Processing (NLP) tools for Latin. Its aim is to make the existing corpora, dictionaries, and lexica for Latin interact, in order to boost the potential of the individual resources. The key to achieving interoperability is a standard way of representing data and metadata: the principles of the so-called Linked Open Data paradigm, the standard approach to knowledge description in the Semantic Web.

The talk will detail the basic architecture of LiLa and will give an overview of the lexical and textual resources connected to the knowledge base so far. Moreover, a number of queries on the interoperable resources in LiLa will be presented.
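The interoperability idea can be sketched with plain subject-predicate-object triples: once a corpus token and a dictionary entry point to the same lemma node, they become jointly queryable. The URIs and the gloss below are invented for illustration, not actual LiLa identifiers, and a real query would be SPARQL against the published knowledge base:

```python
# Toy triple store: (subject, predicate, object).
triples = [
    ("corpus:token42", "lila:hasLemma", "lemma:amo"),
    ("lexicon:entry7", "lila:hasLemma", "lemma:amo"),
    ("lexicon:entry7", "lex:gloss",     "to love"),
]

def objects(s, p):
    """All objects of triples with the given subject and predicate."""
    return [o for (s2, p2, o) in triples if s2 == s and p2 == p]

# Join: from a corpus token, via the shared lemma, to a dictionary gloss.
lemma = objects("corpus:token42", "lila:hasLemma")[0]
entries = [s for (s, p, o) in triples
           if p == "lila:hasLemma" and o == lemma and s.startswith("lexicon:")]
gloss = objects(entries[0], "lex:gloss")[0]
print(gloss)  # "to love"
```

The lemma acts as the hub: resources never need to know about each other directly, only about the shared lemma collection.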

Mi/Wed · 01.02.2023 · 18:15–19:45

Maria Chiara Parisi (Universiteit van Amsterdam): »Mathematics & Scientific Explanation in Antiquity: A ›Slow Science‹ and ›Big Data‹ Study«

Science explains why reality is as it is. But what is scientific explanation? In philosophy of science, it is debated whether explanations are causal or can also be non-causal. Importantly, if all scientific explanations are causal, then mathematics does not explain, because mathematics provides non-causal, conceptual explanations. A virtually identical debate originated in antiquity with Proclus (412–485 CE), opposing two views:

(1) mathematics is explanatory;

(2) mathematics is not explanatory.

This ancient opposition ruled the debate on scientific explanation for millennia. An unsolved mystery surrounds (2), however. Proclus follows (1) and attributes (2) to Aristotle, contradicting Aristotle's own authoritative writings. What dynamics of thought could explain such a stark and momentous misattribution?

My main hypothesis is that, against current opinion, both the misattribution and the emergence of (2) are due to a conceptual shift regarding the objects of scientific knowledge. To show this, and to reconstruct this extremely consequential debate, I use a novel approach combining traditional, qualitative methods, or ‘slow science’, with quantitative, computational techniques applied to a 'big data' corpus in Greek and Latin spanning 450 authors and nine centuries. This approach allows me to remain grounded in textual exegesis while enlarging the evidential basis for a wide-scope investigation. In this talk, I focus on this novel approach. After introducing the theoretical framework, I discuss corpus selection and processing to enable (string and semantic) searches and collect relevant passages. Moreover, I illustrate the modelling of the concept of scientific explanation in antiquity by adapting the Classical Model of Science (de Jong and Betti 2010) to the ancient context.
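Why the distinction between string and semantic searches matters for an inflected language like Latin can be shown in miniature: an exact string search misses inflected forms that a lemma-normalised search catches. The mini lemma map and sentences below are invented; a real pipeline would run a Latin/Greek lemmatiser over the 450-author corpus:

```python
# Hypothetical mini lemma map; a real one comes from a lemmatiser.
lemma_of = {"causa": "causa", "causae": "causa", "causam": "causa",
            "causas": "causa", "scientia": "scientia"}

# Invented corpus snippets.
corpus = ["causam rerum cognoscere",
          "scientia est cognitio per causas",
          "de natura deorum"]

def string_hits(term):
    """Sentences containing the exact word form."""
    return [s for s in corpus if term in s.split()]

def lemma_hits(lemma):
    """Sentences containing any form that normalises to the lemma."""
    return [s for s in corpus
            if any(lemma_of.get(w) == lemma for w in s.split())]

print(len(string_hits("causa")))  # 0: both occurrences are inflected
print(len(lemma_hits("causa")))   # 2
```

Semantic search goes one step further still, retrieving passages that discuss causes without using any form of *causa* at all.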

Mi/Wed · 15.02.2023 · 18:15–19:45

Rachele Sprugnoli (Università di Parma): »Sentiment Analysis for Latin: lexicons, annotation and automatic approaches«

The talk will present the development of sentiment lexicons for Latin and their publication as Linked Open Data in the context of the ERC project “LiLa: Linking Latin”, and will report on a pilot manual annotation of sentiment in a set of poems by Horace. Moreover, the results of a few preliminary experiments on the automatic detection of sentiment in Horace's texts will be described.
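The general lexicon-based technique behind such experiments can be sketched in a few lines: look each word up in a polarity lexicon and average the scores. The tiny Latin lexicon and its scores below are invented for illustration, not taken from the LiLa sentiment lexicons:

```python
# Hypothetical toy polarity lexicon (word -> score in [-1, 1]).
lexicon = {"beatus": 1.0, "dulcis": 0.8, "tristis": -0.9, "mors": -0.6}

def score(text):
    """Average polarity of the lexicon words found in `text`."""
    words = text.lower().split()
    hits = [lexicon[w] for w in words if w in lexicon]
    return sum(hits) / len(hits) if hits else 0.0

print(score("beatus ille qui procul negotiis"))  # 1.0
print(score("tristis mors venit"))               # -0.75
```

Real systems add lemmatisation (so inflected forms hit the lexicon) and handle negation and context, which is where the manual annotation described in the talk becomes essential for evaluation.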

Mi/Wed · 22.02.2023 · 18:15–19:45

Monica Berti (Universität Leipzig): »Canons of Ancient Historians in the Age of Linked Open Data«

The goal of this talk is to discuss characteristics of canons and catalogs of ancient literature in the digital age. The talk will present a project for extracting data about ancient Greek authors and works from still extant sources, show concrete examples of the extraction, annotation, and analysis of the language used by ancient authors to refer to other authors and works, and address questions concerning the use of Linked Open Data best practices for collecting and sharing this data.

*

(The event in the course catalogue.)