Abstracts

What are the Digital Humanities? – Øyvind Eide (University of Oslo), June 14, 10:00am

DH is becoming increasingly popular in many parts of the world and is seen by some as the next big thing. I will discuss a number of organisational developments of the last few years, including some notes on ALLC: The European Association for Digital Humanities and the history behind the establishment of The Alliance of Digital Humanities Organizations (ADHO).
Much has happened at the European and international levels. But what about regional and local organisational issues? Should an organisation for DH be established in Norway or Scandinavia? Should the informal inter-departmental networks we have at the University of Oslo be formalised in any way? We also face a naming issue: what should we call DH in Norwegian?
The number of DH positions is growing at many universities. I will give some examples of how this is connected to research strategies and curriculum development. Are there lessons to be learned for Norwegian institutions as well?
I will give some examples of discussions of what DH is and bring forward a few recent attempts at categorisation, including the DH types I and II suggested by Stephen Ramsay and Marjorie Burghart’s three orders of DH. Should we establish borders around DH as a discipline by identifying what falls outside of it, or should we keep the tent as open as possible?
With the development towards a truly international DH, issues of cultural and language diversity, which have always been with us, become critical. How can we keep an integrated area of DH while opening up to the diversity of languages and research cultures? We are no longer a marginalised crowd sticking together in small groups; we are, if not dominant, then at least visible. A novel position of strength is a challenge for groups who have traditionally seen themselves as marginalised. I see the fact that DH is now under attack as a sign of strength and basically a good thing. Some issues people have with DH are clearly based on misunderstandings of what DH is, but others, including some critical approaches to DH from the postcolonial and feminist side, are well worth listening to.
Technology is not culturally neutral, but neither are XML and TEI hidden vehicles of Anglo-American imperialism. In order to find useful critical positions between these two extremes it is important to see the political potential of technology. A similar example is map technologies. Cartography has been, and still is, a power system used by empires, political as well as commercial. But it is also used by the marginalised as part of their strategies. Working in DH is not about being leftish or indeed any kind of -ish. But the tools we use have a political potential which is there whether we acknowledge it or not. Any scholar, digital or not, in the humanities as well as beyond, needs a critical approach to what they are doing. So do we.

What Propp probably really had in mind, but didn’t dare to think: Toward an applied Computational Narratology – Jan Christoph Meister (University of Hamburg), June 15, 10:45am

Among current approaches and theories of literary studies, narratology is arguably the most likely candidate to benefit from the blend of its own concepts and methods with Digital Humanities methodology and technology. However, although many of the classical narratological concepts are rooted in Formalism and Structuralism and thus seem comparatively accessible to abstract reformulation, their ultimate ‘raison d’être’ and function remain hermeneutic: we employ them to interpret, not just to describe and measure. In my talk I will sketch out some of the related philosophical and methodological questions that concern computational narratology as an emerging field at the methodological intersection of Literary Studies and DH.

Dead Languages and Digital Humanities: Social Network Analysis in the Ancient World – Lieve Van Hoof (University of Göttingen), June 15, 10:00am

“Dead languages and digital humanities” – a contradiction in terms? Not at all! Contrary to what one might expect, Classics and Ancient History were in fact amongst the earliest subjects in the humanities to go digital. This paper therefore starts with a sketch of the history and reach of Digital Classics. After this introduction, the paper zooms in on one particular kind of digital humanities research that has been applied to the ancient world: social network analysis. Using the example of a fourth-century A.D. letter collection, it demonstrates what social network analysis is, how it works in practice, and what it can add to our understanding of classical literature.
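To make the method concrete: a letter collection can be read as a graph in which each letter links its sender to its recipient. The following minimal sketch, using the Python networkx library on an invented handful of letters (the names are illustrative, not Van Hoof’s actual dataset or method), shows how such a graph is built and how a simple centrality measure identifies the best-connected correspondents.

```python
import networkx as nx

# One entry per letter: (sender, recipient). Names are illustrative only.
letters = [
    ("Libanius", "Themistius"),
    ("Libanius", "Julian"),
    ("Themistius", "Julian"),
    ("Julian", "Libanius"),
    ("Libanius", "Themistius"),
]

# A directed multigraph keeps one edge per letter, preserving volume.
G = nx.MultiDiGraph()
G.add_edges_from(letters)

# Collapse parallel edges, then ask: who corresponds with the most people?
centrality = nx.degree_centrality(nx.DiGraph(G))
for person, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")
```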

Weakly Connecting Early Modern Science – Scott Weingart (Indiana University), June 14, 3:00pm

The Early Modern period in European history saw a flourishing of scholarly activity that has since been dubbed a scientific revolution. Often discussed in the same breath is an entity collectively known as the Republic of Letters, a self-titled international group of intellectuals spanning hundreds of years and connected via correspondence networks. Letters were the chief means of scholarly communication at a distance, and some historians have suggested that this network was instrumental in forming what was eventually called the Scientific Revolution. Much impressive and sprawling historical research has gone into studying this community, often taking individual historians decades to get a grasp on the hundreds of thousands of primary sources necessary for understanding this period. This presentation begins in this rich historiographic tradition, before demonstrating how modern social network analysis can reveal in a flash what once took careful lifetimes to uncover. An analysis of 100,000 Early Modern letters reveals the Republic of Letters to be as tightly knit – and as exclusive – as historians suggest, dominated by central figures who controlled the flow of information and structured in a way particularly well-suited to the rapid diffusion of knowledge. After showing that computational techniques can be trusted because they lead to conclusions similar to those of sound historiographic research, I will argue that a combination of historiographic and computational techniques can lead to new insights faster and more efficiently.
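The claims in this abstract map onto standard network measures: “tightly knit” onto clustering, “central figures who controlled the flow of information” onto betweenness centrality, and suitability for “rapid diffusion” onto short average path lengths. The sketch below computes all three on a tiny invented correspondence graph (the names are historical but the edges are made up); it illustrates the measures, not Weingart’s actual 100,000-letter analysis.

```python
import networkx as nx

# Invented correspondence graph; names are historical, edges made up.
G = nx.Graph()
G.add_edges_from([
    ("Oldenburg", "Huygens"), ("Oldenburg", "Leibniz"),
    ("Oldenburg", "Boyle"), ("Oldenburg", "Newton"),
    ("Huygens", "Leibniz"), ("Boyle", "Newton"),
])

# "Tightly knit": average clustering coefficient.
print("avg clustering:", nx.average_clustering(G))

# "Controlled the flow of information": betweenness centrality finds
# gatekeepers who sit on the most shortest paths between others.
print("betweenness:", nx.betweenness_centrality(G))

# "Rapid diffusion of knowledge": short average path length.
print("avg path length:", nx.average_shortest_path_length(G))
```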

Conversational Machines and Their Role in Digital Humanities – Laetitia Le Chatton (University of Bergen), June 14, 2:30pm

Digital Humanities is being taught as a vision before being taught as a discipline. Students are promised insights into the very nature of man’s relationship to computing and are asked to rethink the nature of digital media. Research is being conducted from different angles, ranging from digital media aesthetics to social and political perspectives on digital technologies, by way of their historical and theoretical roots. It is commonplace to hear statements such as “I am doing an ethnography, but a digital one”, “I am studying the economy, but the digital one”, “I meet my friends, but I do it online”, and it is assumed that “the digital” makes a difference. Yet digital humanists hold that there is no clear or unified theory of “the digital”, because they observe that the relationship between humans and digital programs depends on the nature of the activity under consideration. For the past five years, I have been driven to find a global answer to this problem, not a local one. Wanting to be realistic and pragmatic, I have designed a working model of a conversational machine in order to back up the general hypothesis that digital media are tools supporting human conversations. This abstract explains what “Conversational Machine” means for the development of digital technologies, and what it adds to the understanding of human relationships to digital machines.

Digital Humanities in Norway – the first 25 years: 1975–2000 – Espen S. Ore (University of Oslo), June 14, 11:30am

This paper looks into the development of Digital Humanities – or Humanities Computing as it was once known – from around 1975 until about 2000, although it will extend outside those time limits. It covers the development from mainframe/minicomputer-based data processing to the use of PCs, and the development from the data preparation – data processing – data output model into the many flowers that are now in full bloom. A central player in Norway was the Norwegian Computing Centre for the Humanities (NCCH), established by the Norwegian Research Council for Science and the Humanities. The NCCH published a journal, Humanistiske Data (HD), between 1973 and 1991, and information on most of what was going on in Digital Humanities in that period was published in HD. One fairly clear development was the change from proprietary systems, in the many senses of that term, to open, standardised solutions. The NCCH was for instance involved in a collaboration with the Massachusetts Institute of Technology (MIT) on a multiuser hypertext system (Athena Muse) which in many ways suddenly became a dead end with the development of the World Wide Web. From the early days of the NCCH, textual computing was important. This made it necessary to produce marked-up input texts for various pieces of software. And then, with the arrival of first SGML and then XML, all those marked-up texts became something similar to texts in dead languages. Some of the work that was done in Norway did, however, influence the development of the international standards: the concept of well-formedness, which is essential in XML, was first introduced in the proprietary encoding system MECS, developed mainly for the work on Wittgenstein’s Nachlass at the University of Bergen. Around the end of the time frame given in this paper’s title, Wittgenstein’s Nachlass was published on CD-ROMs by OUP, and the next large-scale edition/encoding project in Norway, Henrik Ibsen’s Writings, was well under way.
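Well-formedness, to make the borrowed concept concrete, is the requirement that elements close in the reverse order they were opened, so that markup nests without overlapping. The following sketch is a plain illustration of the XML rule rather than of the MECS notation itself; it uses Python’s standard library parser to accept a properly nested fragment and reject an overlapping one.

```python
import xml.etree.ElementTree as ET

well_formed = "<p>He wrote <hi>quickly</hi>.</p>"
ill_formed = "<p>He wrote <hi>quickly.</p></hi>"  # tags overlap

for doc in (well_formed, ill_formed):
    try:
        ET.fromstring(doc)
        print("well-formed:", doc)
    except ET.ParseError as err:
        print("NOT well-formed:", doc, "->", err)
```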
Digital Humanities is definitely more than editions and text encoding. Some of the fields within the Humanities have established their own digital communities; this is true to a certain degree of History, Archaeology, Art History and others. But if we look back to the 1970s and 80s we find many examples of cross-field Digital Humanities collaboration. Text and textual data have always remained important. And if we look at the situation today, we find that within the study of music an encoding scheme, the Music Encoding Initiative (MEI), is clearly based on the work done in the Text Encoding Initiative (TEI), where Norwegian universities and individual scholars have been active participants.

The ELMCIP Electronic Literature Knowledge Base: Documentation, Connections and Visualisations – Jill Walker Rettberg (University of Bergen), June 14, 12:00pm

Over the last few years, the ELMCIP project at the University of Bergen has studied creativity and innovation in electronic literature communities in Europe. A major outcome of the project has been the development of the Electronic Literature Knowledge Base, an open-access, online, relational database containing more than 7,000 records documenting authors, scholars, events, creative works, critical writing, syllabi, publishers and other material relevant to the field of electronic literature. Each entry is cross-linked to other entries, so that the entry for a creative work also connects to critical writing about the work, to events at which the work was presented and to syllabi for courses where the work has been taught. In addition to being a valuable resource for scholars and readers interested in electronic literature, having this interconnected documentation in digital form allows us to analyse and interpret electronic literature from a different perspective than we do in traditional literary studies, enabling what Franco Moretti calls “distant reading” of the field. Currently, we are exploring ways in which visualisations of the large quantities of data gathered in the Knowledge Base can give us new insights into the ways in which electronic literature is developing as a field.
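The cross-linking is what makes distant reading possible: once records reference each other, reception can be counted and traversed rather than read one text at a time. A toy sketch, with invented records standing in for the Knowledge Base’s actual schema:

```python
from collections import Counter

# Invented records standing in for Knowledge Base entries.
critical_writing = [
    {"title": "Reading Hypertext", "about": ["afternoon, a story"]},
    {"title": "Cybertext Revisited",
     "about": ["afternoon, a story", "Patchwork Girl"]},
]
syllabi = [
    {"course": "Digital Textuality", "teaches": ["afternoon, a story"]},
]

# Traverse the cross-links: everything connected to one creative work.
work = "afternoon, a story"
cited_in = [c["title"] for c in critical_writing if work in c["about"]]
taught_in = [s["course"] for s in syllabi if work in s["teaches"]]
print(f"{work!r}: cited in {cited_in}; taught in {taught_in}")

# Distant reading in miniature: which works attract the most criticism?
print(Counter(w for c in critical_writing for w in c["about"]))
```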

Henrik Ibsen’s Writings. The Digital Scholarly Edition and Its Readers – Stine Brenna Taugbøl (University of Oslo), June 15, 12:15pm

The work on making a new edition of Henrik Ibsen’s writings started in 1998. It has resulted in a book edition in 33 volumes (2005–2010) and in a digital edition (2011–) that is still under development. The digital edition derives from the same TEI XML files as were used for the book edition, though with an upgrade in standard (from version P4 to P5).
In addition to the material in the printed edition, the digital edition includes a large text archive with all prints and manuscripts from Ibsen’s lifetime, both in diplomatic transcription and in facsimile. The archive contains approximately 60,000 pages written by Ibsen, of which 20,000 are handwritten and 40,000 are printed. Such large text corpora are best published digitally – both because of the sheer number of pages and because of the possibilities inherent in the text encoding.
In this presentation I will show how we have organized the material in the digital edition, what kind of structures and information we have encoded when transcribing the texts, and how we have made use of the encoding in the digital edition. I will briefly demonstrate editorial comments, person biographies, textual variants, metrical structures and searches.
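As an illustration of what such encoding makes possible, the sketch below queries a TEI-style critical apparatus (the standard <app>/<lem>/<rdg> construction) for variant readings. The fragment is invented for the example; the Ibsen edition’s actual markup may differ in detail.

```python
import xml.etree.ElementTree as ET

# Invented TEI-style fragment; the Ibsen edition's markup may differ.
tei = """<p>Et dukkehjem er
  <app>
    <lem wit="#print1879">et skuespil</lem>
    <rdg wit="#ms1879">et stykke</rdg>
  </app>
</p>"""

root = ET.fromstring(tei)
for app in root.iter("app"):
    lem = app.find("lem")
    print("base text:", lem.text.strip(), f"[{lem.get('wit')}]")
    for rdg in app.findall("rdg"):
        print("  variant:", rdg.text.strip(), f"[{rdg.get('wit')}]")
```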
I will also summarise our findings from users’ experiences of the digital edition. We aim to reach both novice and expert users. The complexity of the edition, with all its text versions and variants, information on textual material, editorial guidelines and more, has proven overwhelming and somewhat confusing for many users who only seek a simple version of one of Ibsen’s texts. On the other hand, this flow of texts is exactly what the edition is meant to provide: for the first time, all these historical text sources are freely available in digital form to scholars all over the world. To meet the needs of both scholars and the general public, we have decided to split the edition into a simple and an advanced version. These two versions are not yet established, but I want to discuss our plans and show the audience some preliminary sketches.

Standard Gains and Growing Pains – How Did Digitisation Change Norsk Ordbok 2014? – Oddrun Grønvik (University of Oslo), June 14, 12:30pm

This paper will present a review of the digitisation process of a major academic dictionary project, Norsk Ordbok 2014 (2002 – early 2013), and discuss some of the requirements and challenges inherent in this undertaking. Three issues will be addressed in particular: the meeting of linguistics and ICT through the Dictionary Writing System (DWS) application, the digitised handling of sources, and the tools available for linguistic analysis.
The aim and purpose of Norsk Ordbok 2014 is to provide an in-depth presentation of the lexicon of the Norwegian dialects from 1600 onwards and of the written standard language Nynorsk (from ca 1860). Norsk Ordbok focusses on a thorough linguistic analysis of the accessible materials, resulting in clear definitions and usage examples rooted in documented spoken Norwegian as well as citations from literature.
From 1946 to 2002 Norsk Ordbok was edited as a traditional paper dictionary, resulting in 4 volumes out of a planned 12. Since 2002, 6 volumes have been published; the deadline for volume 11 is 30.6.2013, and volume 12 will be out by the end of 2014. A web edition launched on 15.3.2012 covers the part of Norsk Ordbok edited in the project period (letters i–stæ so far; the last two volumes will follow in due course).
The hypothesis at the start was that a thorough revision of editorial practice, linked to the creation of a stringent digitised dictionary writing system, would produce a more reliable and consistent dictionary, with clearer procedures for processing source materials and composing entries. An efficient DWS application would also help train new editors and make them productive in less time than has traditionally been thought necessary.
Norsk Ordbok 2014 has another 18 months to go before the project ends. There is therefore also reason to look at what has not been achieved, and what should be adjusted – with a view to the future.

HUMlab: Humanities, Culture, and Information Technology – Emma Ewadotter (Umeå University), June 15, 1:15pm

I will talk about HUMlab as a structure, as a creative environment, and about the challenge of merging information technology, art/culture and humanities scholarship. I’ll give some examples of projects that HUMlab is currently involved in and also talk a bit about our plans for the future.

MENOTA: Medieval Nordic Text Archive – Christian-Emil Smith Ore (University of Oslo), June 15, 12:45pm

MENOTA is a network of 18 Nordic archives, libraries and research departments working with medieval texts and manuscript facsimiles. Established in Oslo on 10 September 2001, MENOTA aims to preserve and publish medieval texts in digital form and to adapt and develop the encoding standards necessary for this work. The archive may contain texts in the Nordic languages as well as in Latin. This joint Nordic organisation hosts the information platform http://www.menota.org.
