h+d Portraits: An interview with Jeremy Grubman
By Josh Cowls on May 18, 2017
Welcome to the second edition of h+d Portraits, an ongoing series of interviews with scholars and practitioners of the Digital Humanities. In this series, we aim to explore what the label Digital Humanities means in practice – by assembling a diverse set of perspectives from those engaged in this still-emerging field.
Our second conversation is with Jeremy Grubman. Jeremy has been at MIT since 2009, first as manager of an MIT Libraries collection and, since 2012, as an archivist in the Art, Culture and Technology (ACT) program. As our discussion demonstrates, ACT sits at the intersection of the artistic and the digital, and the program’s holdings contain a remarkable range of artefacts covering MIT’s decades of experimentation with emerging technologies.
As such, Jeremy is expertly placed to comment on the intersection of subject matter experts, information scientists and “technologists”, whose collaboration, as Jeremy sees it, gives rise to Digital Humanities.
So our first question is always the same: What brings you to your existing institution, and in this case to your department?
Sure. I received a Masters in Library and Information Science from Simmons College in 2008, and almost immediately after, in spring 2009, I began working for the MIT Libraries – so that’s what brought me to MIT. In the summer of 2012 I was asked to bring the kinds of work I was doing for the Libraries to the program in Art, Culture and Technology (ACT), which owned a significant amount of archival material covering the history of MIT’s experiments with emerging technologies in art over the past 60 years or so.
Maybe for people who don’t know so much about ACT – what makes ACT unique within MIT?
Well, ACT was born from two different organizations: the Center for Advanced Visual Studies, which beginning in the mid-1960s brought artists from all around the world to collaborate with MIT’s scientists and engineers as research fellows, and the Visual Arts Program, an academic arts program that began in the late 1980s. The two merged to form ACT, which is where we are today: both a research center and an academic graduate program in the arts, and we also provide classes for undergraduates.
Can you tell us about how the interplay of different stakeholder groups works in practice?
I think that DH really emerges from the needs of the subject matter experts in answering complex research questions. To get at those materials they encounter the information science professionals, the librarians, archivists, museum professionals who are the gatekeepers and enablers of access, and then you have the people I refer to as technologists for lack of a better word, who build the tools and applications that promote that level of access. And DH as I see it is the convergence of those three areas of practice.
What makes these collections special – maybe not just in terms of their artistic or cultural qualities, but in that process of making them digital?
“Special collection” is a very specific term in the library world; it’s somewhat different from an archive. But what makes the collections themselves so interesting and unique is, on one hand, the breadth of material, the difference of material types. Currently I’m working most prominently with the Center for Advanced Visual Studies special collection, and it’s a challenge to do research in that collection, because if you’re looking at a particular project that was done here at MIT you need to be able to access a poster, a film, a catalog, a publication, the records of meetings and memos that reveal a process of collaboration. … It can be really challenging, in just a physical space, to go around a room and pull all of those materials from their different locations.
At the same time, in a digital space, it can be really difficult to ensure that a user can discover all of the materials that they need to discover on a given topic. So I think that the breadth of material types in a given collection is one thing that makes it challenging to deal with, and then also … there are some really complex research questions that people tend to ask about these projects. “Who were all the people who came to work on holography at MIT, and where did they all come from? And then where did they produce their work?” In order to answer that question it takes a lot of digging, and not the sort of approach that is typically used when you’re looking at just raw data.
What makes ACT a particularly interesting place to work in this space?
Well it’s very interesting to not work under the umbrella of the MIT Libraries, or the MIT Museum, which is typically where these materials would sit at an institution like this. It just so happens that the materials were collected and retained by ACT, and on one hand this allows me to move very freely without the weight of a larger institution behind me, but on the other hand it means that I have to do a lot of things on my own. ACT’s a very interesting place, because even though it’s a relatively small program we have a really broad range of different kinds of people coming to experiment, produce work, conduct research, and it’s always changing every semester too, because we have all kinds of people coming in, whether it’s our graduate students or visiting faculty.
How do you see archives evolving in the longer term?
I think there is a clear trend toward reaching wider audiences, and the way to do that is to develop repositories, and methods of retaining information and providing access to it, that are much more global – in the sense that when you build a digital collection you don’t want it to be accessible only in that one mode, in that one repository. We need to stop building these silos of information and build repositories that can really rapidly import and export collection information. And that’s an incredible challenge, because we all have different kinds of information, we all have different kinds of metadata, and there are different schemas that we use to describe it. So the challenge there is to be able to build repositories that can incorporate collections from other institutions, so that you might be able to access related materials in one spot, even though the materials themselves might physically be held by very different institutions across the world.
That’s one thing that needs to happen. Another is greater collaboration between libraries, archives, museums, galleries – any of these types of organizations that collect materials. When we build these types of global repositories we need people to contribute to them. Very often DH projects are spurred by a particular need, for a particular collection or a particular topic, and there’s often collaboration at the project level, where you might get input from your expert users, you might have the technologists working very closely with the information professionals, and the project moves forward. We need to see collaboration at a much higher level – at an institutional level and at an extra-institutional level – as we begin to answer questions like: how are we all going to describe our data in a way that makes it intuitive for users to access and discover it?
Does that kind of collaboration require a more formal, institutional framework, or is it just about doing different things with the same people in the same ways?
I think it needs to be a mix of both. We need administrative buy-in – a recognition that this collaboration matters – and at the same time we need more serendipitous conversations happening between these different groups. Very often we like to close our doors and work on our own; we can all be guilty of that. But the more conversations we have about enabling access for wider audiences, and about more intuitive ways of helping people discover things, the better the products are going to be.
Do you have any ideas about how you would enable those conversations?
I think that it falls at least in part on the institutions themselves to foster conversations between the sometimes disparate departments that exist – whether it’s the MIT Libraries talking to the Museum, talking to ACT, talking to HyperStudio, or the IT department in a museum or a school getting in touch with the various academic departments to see what kinds of tools they need to manage their information and promote access to research data. It can start at that institutional level, again with larger administrative buy-in. I’ve also seen the most success when people become closer colleagues and collaborate with each other, just to keep each other updated on the different things we’re working on. We often discover that we’re reinventing the wheel of someone else’s project. And when we can sit and put our heads together and share information about the process, we all tend to learn from it, in ways that make our projects much more expansive.
Introducing the Active Archives Initiative: Making Stories, within the Archive
By Evan Higgins on April 25, 2017
During our tenure at HyperStudio as research assistants, we’ve had the chance to work on a number of archival projects at various stages of implementation. From uploading, encoding, and storing data to visualizing, displaying, and making it accessible, we’ve gotten first-hand experience of the tremendous range of possibilities and variables at play in the creation of digital repositories. Under the guidance of Kurt Fendt, HyperStudio’s Director, we’ve come to focus on the idea of an Active Archive – a digital repository in which users can interact with resources in order to craft, discover, and share previously unknown links between content. Key to this idea is the notion of the “an-archive,” first proposed by Laermans & Gielen in their 2007 article “The archive of the digital an-archive.”
The “an-archive”, as defined by Laermans & Gielen:
“both is and is not an archive in the traditional sense of the word. It is, for it actualises the storage function that is usually associated with the notion of the archive; it is not, for the digital an-archive is synonymous with an ever expanding and constantly renewed mass of information of which no representation at all can be made.”
In short, the idea of the an-archive is useful for describing large, open, digital repositories, such as YouTube or Wikipedia, that allow for near-constant expansion. This is in direct contrast to the traditional, analogue archive, which is exclusive – both in terms of content and access.
The tension between these two types of repositories is nothing new, but it usefully highlights the different options each provides. An an-archive has “active users,” which allows for growth and accessibility, while a traditional archive has “stable sources,” which instead provide accuracy but also a measure of control. Of course, the question implicitly raised by this contrast is: who is controlling these data sources? In the case of the an-archive, it is the black-box algorithm, while the traditional archive is governed by the equally unapproachable white-haired curator.
In both of these scenarios, the role of the ordinary user is diminished. This might have made sense in 2007, when the article was published, but in the decade since, people’s expectations around data have changed considerably. In 2007, for example, the bulk of interaction with the internet took place on desktop computers – it was only with the introduction of the iPhone that year that powerful personalized devices truly began to reconfigure our expectations around personal data. Or take social networks: Facebook and Twitter were both available in 2007 – albeit in nascent forms – but the raft of popular apps that have emerged since have helped bring forth the notion of holding much greater control over one’s data. WhatsApp encrypts conversations by default, while Snapchat’s messages disappear after 24 hours – making it, if anything, an anti-archive.
Of course, it’s not surprising that as more and more personal data goes online, users are likely to demand proportionally more control over it. (Edward Snowden’s explosive revelations about government surveillance certainly helped bring clarity to the issue.) But even in those apps and services where outward promotion is a goal, not a threat, users have become accustomed to exercising much more control.
Consider the case of the hashtag: this now-ubiquitous symbol lets users sort and categorize content even as they’re creating it. Or take geo-tagging, which allows users to instantly add a layer of geographic metadata to a photo. With scores of other services, each with their own terminology and functions – from pinning on Pinterest to retweeting on Twitter – it seems that the internet and the devices and data connected to it have fostered a virtual playground, in which ordinary users exercise extraordinary amounts of agency and creativity. It should come as no surprise, then, that users might expect an equivalent amount of control when they turn their attention to archives.
So what should an archive look like today? The obvious solution is to diminish the place of both the unreachable curator and the black box and instead focus on the user who is interacting with the content directly. In order to engage modern users of archives, we need to make sure that they have not only access to the same information as traditional curators, but also control over this information. In short, we need to create archives that put the user, not unseen forces, directly at the center of the data.
At HyperStudio, two of our current projects – Blacks in American Medicine and US-Iran Relations – are being developed with this goal in mind. As part of our Active Archives Initiative, both of these digital projects are being refined to make sure that their vast repositories are not only open to all users but responsive to those users’ wishes and needs. By thinking of users first, we aim to create archives that will engage a new generation of audiences who play an active role in shaping the stories that have been handed down to us all.
So what will our Active Archives Initiative encompass and involve? We’re still at an early stage in thinking through our approach, but something we certainly plan to highlight is the idea of “storymaking”. In contrast to storytelling, storymaking emphasizes the importance of historical materials – as accessed through archives – and the narratives that we create with them. HyperStudio’s experimentation with storymaking actually predates the Active Archives Initiative: an earlier iteration of our US-Iran Relations project had similar functionality, whereby users could “write their own narrative” within the archive. But we’re hoping that through this initiative we can offer users an easy-to-use interface that is built to scale across a diverse array of archives.
Consider, for example, our Blacks in American Medicine project. The project archive consists of over 23,000 biographical records of African American physicians from 1860 to 1980, plus numerous associated primary documents. Inevitably, the scale of this archive enables very different stories to be constructed from the diverse set of raw materials available. Emphasizing storymaking means encouraging users to think through the subjective decisions involved in using archives to explore and explain the past. For instance, when looking at correspondence from notable African American physicians, one can choose to focus on the purpose of the document, the time period surrounding it, or the geographic information it contains.
This is just an example, but it serves to show the complexities involved in understanding the past – complexity that our Active Archives Initiative will encourage users to embrace, not avoid. Obviously, this work is still at the planning stage, but in future work we’ll be thinking in more depth about how the user interface and functionality of the initiative can best reflect the mission we’ve described here. And we’ll be sure to keep this blog updated with our progress!
h+d Portraits: An interview with Ece Turnator
By Josh Cowls on March 16, 2017
Welcome to h+d Portraits, a new series of interviews with scholars and practitioners of the Digital Humanities. In this series, we aim to explore what the label Digital Humanities means in practice – by assembling a diverse set of perspectives from those engaged in this still-emerging field.
We kick off the series with our conversation with Ece Turnator. Ece recently joined MIT Libraries as a Humanities and Digital Scholarship Librarian. She completed her PhD in Byzantine History at Harvard, before joining the University of Texas Libraries to conduct postdoctoral work in medieval data curation. In our interview, Ece offered her perspective on the intersection between traditional historical research and the use of new methods, resources, and ways of thinking.
Ece, tell us about your academic background prior to joining MIT. How did you become involved in Digital Humanities?
For my PhD, I worked on the economy of the Byzantine empire during and after the 13th century – specifically before and after the Latin conquest of Constantinople. I looked at how the conquest impacted the economy, in what ways, and what the state of the economy was before the Latins arrived in that area in 1204 AD. I used textual data, but I also used textile analyses, numismatic evidence, and ceramics.
So those were my three main sources of data, and I put them all into Excel and tried to create maps out of them to understand what exactly was going on with the data that I had collected. The more I looked at the data, the better I understood the context – so that was my introduction to doing humanities with the help of digital tools.
I then worked at UT with a professor in the English department, and at UT Libraries my office was in the IT unit of the Libraries. Essentially our goal was to design a medieval project that would be updated and sustained with the help of the Libraries. I worked on that for two years – it was a great learning experience. I think the Libraries learnt a lot from working with a humanities project and trying to understand what goes into creating a DH project. I think it also benefited the faculty member, and clearly it benefited me – now I’m here at MIT Libraries.
What does your work at MIT Libraries involve, and what has been the reaction to your efforts in the MIT community more broadly?
Here at MIT, I’m working with Doug O’Reagan, a SHASS postdoctoral fellow who is doing very similar things here to what I did at UT – so we’ve kind of joined forces in that regard, working towards understanding what DH means at MIT. It’s a different kind of animal here, from what I understand so far. As part of Doug and the Libraries’ efforts to understand what’s going on at MIT with respect to Digital Humanities, we meet with faculty in different departments, as well as graduate students.
When Doug and I go round and meet faculty we ask them, “what does DH mean to you?”, “do you think there should be a center for DH?”, and “how should the training be done in your specific departments?”. We get different kinds of answers and focuses. Sometimes the emphasis, interestingly, is on training in social statistics and computation, and how to integrate these into the humanities. But when you talk to the folks who are trained inside the humanities departments they ask different kinds of questions, and often emphasize the need for more ethics, and more guidance on how to create resources that acknowledge the social and historical bearings of what it means to create a digital tool, resource, or machine. That kind of tension exists here that I had not noticed at other places I have been. It could be creative, or it could lead in other directions – I can’t tell at this point in time. But it is interesting, and I think that tension is MIT-specific.
What, in your view, are the key questions around the push towards open access in the humanities?
We have to appreciate that it’s a big cultural change, especially for humanities departments but for other departments as well: what does it mean to be open access, is it actually doable, and how do we get to that ideal? Those are all challenges posed by the systems that we created in the post-Second World War world, and now we’re trying to rewind the calendar a little bit, to get to the time before the locking down of resources. So the question now is: how do we move back to earlier ideals and create a knowledge commons, such that information is not locked down but is available for whoever wants to actually use it?
How do you see libraries changing in the future?
I think a lot of future development is going to incorporate formalized training of students. So far, students have had to gather computational skills on their own, without much support that is formally recognized or prioritized. We prioritize other aspects of their training, but we don’t prioritize training around how we create data that’s shareable, how we create data that’s usable by others, and how we think about who is going to access that data once we create it.
Thinking specifically about the humanities, this has not so far been explicitly taught across our fields. But maybe the trend is going to be toward trying to incorporate that big cultural change into the humanities, and to work toward integrating those values and skills – not just skills on their own, but values and skills – into the training of humanities students at large.
Annotation Integration: An Interview with HyperStudio Fellow Daniel Cebrián Robles
By Josh Cowls on October 24, 2016
At a time when it’s commonplace to see a movie trailer embedded in a tweet, or photos posted in a message thread, it’s clear that the experience of using the web involves an immersive mix of text, images, and video. Of course, underlying what appears a seamless combination of media content is a huge amount of technical sophistication. The story is no different for annotation programs, which allow users not only to view but also to comment on various types of content. Integrating different annotation programs is the work of our latest HyperStudio Fellow, Daniel Cebrián Robles, who spent September visiting us from the University of Málaga, where he is a Professor of Science Education. I recently sat down with Daniel to talk about his work, and what he hopes to achieve in his time with us.
One of Daniel’s areas of expertise is his development of technology designed to meet the needs of students and researchers. This makes him a natural fit as a visitor to HyperStudio, given our focus on both research and pedagogy in the context of Digital Humanities. Daniel’s focus while at HyperStudio will be on integrating the Open Video Annotation project (OVA), for which he is the Lead Application Developer, with our own online annotation tool, Annotation Studio. OVA, an initiative originally based out of the Center for Hellenic Studies at Harvard, enables the annotation of online video material, allowing students and teachers alike to create, collate and share tags and comments at different points in a video. Given the explosive growth of online video in recent years, the project serves to make watching video online a more interactive and immersive experience.
As noted, here at HyperStudio we have our own online annotation tool, Annotation Studio, which is also designed for students, researchers and others to collaboratively annotate online material. The crucial difference, however, is that Annotation Studio is currently designed for annotating text, but not – as yet – video. This, then, is the basis of Daniel’s work with us – to integrate the video annotation capacities of the Open Video Annotation Project with Annotation Studio. Doing so undoubtedly poses several technical challenges, which will require Daniel’s depth of experience in this area. Daniel explained that his ambition for this month is to develop a first working version of the integrated functionality.
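Conceptually, the integration turns on how an annotation anchors to its target: a text annotation points at a character range, while a video annotation points at a time range within a media file. As a minimal sketch of that difference – illustrative only, not the actual Annotation Studio or OVA data model – the two kinds of annotation might be represented like this:

```python
# Illustrative sketch of text vs. video annotation anchors.
# These structures are hypothetical, not the actual Annotation Studio
# or Open Video Annotation data model.

text_annotation = {
    "target": "essay-42",                 # document being annotated
    "selector": {                         # anchors to a character range
        "type": "TextPosition",
        "start": 1045,
        "end": 1102,
    },
    "body": "This metaphor recurs in the final chapter.",
}

video_annotation = {
    "target": "lecture-07.mp4",           # video being annotated
    "selector": {                         # anchors to a time range (seconds)
        "type": "TimeRange",
        "start": 95.0,
        "end": 112.5,
    },
    "body": "The key demonstration begins here.",
}
```

Unifying the two then becomes a matter of letting a shared annotation store accept either kind of selector, while each front-end renders only the anchors it understands.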
As both a developer and educator, Daniel is perfectly placed to negotiate between what users want and what is technically feasible, allowing him to swiftly fix bugs and incorporate suggestions made by instructors and students. This ground-level engagement thus guides his development efforts, serving a similar function to the workshops and trials we regularly hold with users of Annotation Studio. These engagements are often the most rewarding part of the development process: Daniel mentioned the time that one of his users, a teacher in training, told him that he would use the program with his future high school students.
Looking further ahead, Daniel believes that his work integrating the Open Video Annotation project with Annotation Studio is only the beginning of a much wider process of bringing diverse forms of media together for annotation on one platform. Daniel speculates that, beyond text, photos, and videos, such a platform might in future also incorporate maps and even 3D objects. And the impact on user experience could be empowering and inspirational. Giving students, teachers, and the general public the ability not only to consume media online, but also to share opinions and perspectives on it through annotation, could revolutionize how we experience the vast catalog of content available online. Daniel’s work marks just the start of this process, but we are excited to have him on board!
Telling Forgotten Stories; Testing Traditional Narratives
By Evan Higgins on December 7, 2015
As HyperStudio’s other new resident Research Assistant, I’m finding it hard to believe that I’m nearly one fourth of the way through my time here. I’ve worked so far on a number of interesting Digital Humanities projects exploring topics as varied as US foreign relations research and methods of collaborative annotation. And while all these assignments have been fascinating in their own way, the one that has commanded the majority of my attention and interest is our new interactive archive that explores the history of African American physicians. This project, tentatively titled Blacks in American Medicine (BAM), has been in development for years but is now beginning to take shape as an interactive online archive thanks to the possibilities provided by Digital Humanities tools and techniques. As with several of the other projects here, BAM makes use of digital tools to tell stories that have been left untold for too long.
A Brief History of the Blacks in American Medicine Project
BAM has been in development since the mid-1980s, when Pulitzer Prize finalist Kenneth Manning undertook the herculean task of aggregating the biographical records of African Americans in medicine from 1860 to 1980. With the help of his colleague and fellow researcher Philip Alexander, Ken set out to create a nearly comprehensive list of black American medical practitioners, not only to make research about this community less arduous for scholars but also to test traditional narratives about African American communities in the United States.
Over the years, this team built up an impressive collection of biographical records for over 23,000 African American doctors, collected through the careful combing of academic, medical, and professional directories. Once found, each record was stored in a digital database with the aim of one day making this content available to a wider audience. Each of these mini-biographies includes personal, geographic, academic, professional, and other information that helps shed light on this unexplored corner of American history.
While searching for these biographical records, Ken and Philip also set about gathering documents associated with these doctors. Correspondence, reports, surveys, diaries, autobiographies, articles and other content collected from years of archival research help flesh out aspects of these doctors’ lives and allow readers to understand the complex situations and challenges that these doctors faced.
In my many hours spent searching through the archive, I’ve come across hundreds of documents that provide a window into the history of the black experience in America. One that continually comes to my mind is a letter written by Dr. S. Clifford Boston to his alma mater, the University of Pennsylvania, in 1919. In this letter, Dr. Boston politely asks the General Alumni Catalogue to “kindly strike out the words ‘first colored graduate in Biology’ [sic], as I find it to be a disadvantage to me professionally, as I am not regarded as a colored in my present location.” This letter is an important artifact not only because it provides evidence of the ways in which blacks “passed,” but because it elucidates some of the complex societal challenges that many African Americans in medicine faced. The formal, detached way in which this doctor asks to be dissociated from his heritage gives a brief glimpse into the systemic racism and segregation that blacks of this era faced. First-hand documents like these offer a chance to add nuance to traditional histories of the black experience in America, which is too often told in large, overly simplistic narratives.
These singular stories, in combination with massive amounts of standardized biographical information, create a unique archive that allows for layers of interaction. By incorporating both focused study of the history of specific physicians and broader analysis of trends within the African American medical community, this trove of content highlights untold chapters in the vast history of the black experience.
HyperStudio Takes the Project into the Information Age
With an eye towards the dissemination of this rare and important content, Ken and his team recently began working with HyperStudio to take better advantage of the affordances of digital humanities.
While still in the initial stages of formalizing the structure of the platform, we are working on a number of complementary methods of displaying this trove of content. As with most of HyperStudio’s archival projects, content will be discoverable by scholars and more casual audiences alike. To do this, documents and records will be encoded using established metadata standards such as Dublin Core, allowing us to connect our primary materials and biographical records to other, relevant archives.
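To give a sense of what that encoding involves, here is a minimal sketch of a Dublin Core description for one document of the kind discussed above; the element names come from the Dublin Core Metadata Element Set, but the values and the serialization helper are invented for illustration and are not BAM’s actual records:

```python
# Minimal sketch of a Dublin Core description for a single archival
# document. Element names follow the Dublin Core Metadata Element Set;
# the values are invented for illustration, not actual BAM data.
from xml.sax.saxutils import escape

record = {
    "dc:title": "Letter from a physician to his alma mater",
    "dc:creator": "Boston, S. Clifford",
    "dc:date": "1919",
    "dc:type": "Text",
    "dc:subject": ["African American physicians", "Correspondence"],
    "dc:relation": "Blacks in American Medicine biographical record",
}

def to_xml(rec):
    """Serialize the record as simple XML for exchange with other archives."""
    lines = ['<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">']
    for element, value in rec.items():
        for v in (value if isinstance(value, list) else [value]):
            lines.append(f"  <{element}>{escape(v)}</{element}>")
    lines.append("</metadata>")
    return "\n".join(lines)

print(to_xml(record))
```

Because the element set is widely shared, a record like this can be matched against records from other repositories that describe their holdings the same way.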
We’re also planning to integrate our Repertoire faceted browser, which allows both targeted searches against specific criteria and the freedom to explore documents that interest the user. Additionally, this project will feature our Chronos Timeline, which presents historical events and occurrences dynamically. We also plan to incorporate geographic, visual, and biographical features, as well as a storytelling tool that will enable users to actively engage with our content.
As I round the corner on my first semester at MIT, I can’t help but be excited by this project. Too often, existing narratives about marginalized groups go untested and unchallenged. By providing a multi-faceted interface and rich, previously inaccessible content, we’re creating a tool that will help interrogate these traditional views of African American history. For more information on the project as it develops, follow us here on the blog.
Image: Leonard Medical School on Wikipedia (source)
Taking Artbot Forward
By Josh Cowls on October 6, 2015
It’s great to be getting underway here at MIT, as a new graduate student in CMS and an RA in HyperStudio. One of my initial tasks for my HyperStudio research has been to get to grips with the exciting Artbot project, developed by recent alumni Desi Gonzalez and Liam Andrew along with other HyperStudio members, and to think about how we might take it forward.
The genesis of Artbot was the realisation that, though the Boston area is awash with a remarkable array of cultural offerings, residents lacked a comprehensive, responsive tool bringing together all of these experiences in an engaging way. This is the gap that Artbot sought to fill. A recent conference paper introducing the project outlined the three primary aims of Artbot:
- To encourage a meaningful and sustained relationship to art
- To do so by getting users physically in front of works of art
- To reveal the rich connections among holdings and activities at cultural institutions in Boston
With these aims in mind, the team built a highly sophisticated platform to serve up local art experiences in two ways: through a recommendation system responsive to a user’s expressed interests, and through a discovery system drawing on meaningful associations between different offerings. Both processes were designed to be automated, building on a network of scrapers and parsers that allows the app to automatically categorize, classify, and create connections between different entities. The whole project was built using open-source software, and can be accessed via artbotapp.com in mobile web browsers.
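As a rough illustration of that scrape-parse-connect pattern – a sketch of the general technique, not Artbot’s actual code, with all names and keyword lists hypothetical – the categorization and connection stages might look like this:

```python
# Sketch of the categorize -> connect pattern described above.
# This is not Artbot's actual code; names and keyword lists are hypothetical.
import re
from dataclasses import dataclass, field

@dataclass
class ArtEvent:
    title: str
    venue: str
    description: str
    tags: set = field(default_factory=set)

ART_FORMS = {"sculpture", "photography", "painting", "film", "performance"}

def categorize(event: ArtEvent) -> ArtEvent:
    """Tag an event with any known art-form keywords in its description."""
    words = set(re.findall(r"[a-z]+", event.description.lower()))
    event.tags |= words & ART_FORMS
    return event

def connect(events):
    """Yield pairs of events that share at least one tag."""
    for i, a in enumerate(events):
        for b in events[i + 1:]:
            if a.tags & b.tags:
                yield a, b

events = [
    categorize(ArtEvent("Opening night", "Gallery A", "A new photography show")),
    categorize(ArtEvent("Artist talk", "Museum B", "On documentary photography")),
]
for a, b in connect(events):
    print(f"related: {a.title} <-> {b.title}")  # both tagged 'photography'
```

A production system would use richer entity extraction than keyword matching, but the division of labor – scrape, classify, then link – is the same.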
I’ve spent some time getting first-hand experience with Artbot as a user, and several things stick out. First, and most importantly: it works! The app is instantly immersive, drawing the user in through its related and recommended events feeds. Experiencing art is typically a subjective and self-directed process, and the app succeeds in mimicking this by nudging rather than pushing the user through an exploration of the local art scene.
Second, it is interesting to note how the app handles the complexity of cultural events and our varied interest in them. On one level, events are by definition fixed to a given time and place (even when they span a wide timespan or multiple venues). Yet on another level, a complex package of social, cultural and practical cues usually governs the decision over whether or not we want to actually attend any particular event. This is where the app’s relation and recommendation systems really become useful, drawing meaningful links between events to highlight those that users are more likely to be genuinely interested in but may not have searched for or otherwise come across.
Finally, the successful implementation of the app for Boston’s art scene led us to think about the different directions we might take it going forward. In principle, although the app currently only scrapes museum and gallery websites for event data, the underlying architecture for categorization and classification is culturally agnostic, suggesting the possibility for a wider range of local events to be included.
The value of such a tool could be immense. It’s exciting to imagine a single platform offering information about every music concert, sporting event, and civic meeting in a given locality, enabling residents to make informed choices about how they might engage with their community. But this is crucially dependent on a second new component: allowing users to enter information themselves, thus providing another stream of information about local events. As such, we’re proposing not just a diversification of the cultural coverage of the app, but also a democratisation of the means by which events can be discovered and promoted. We’ve also given it a new, more widely applicable name: Knowtice.
This move towards diversification and democratisation chimes with the broader principles of the platform. ‘Parserbot’ – the core component of Artbot which performs natural language processing and entity extraction of relevant data – is open source, and therefore could in future allow communities other than our launch locality of Boston to adopt and implement it independently, shaping it to their own needs. At root, all events require some basic information: a time and date, a location, a purpose, and people to attend. This data is standardisable, making it possible to collect information about a wide array of events in a similar format. Yet despite these structural similarities, in substantive terms no two events are ever the same, which is why we are committed to providing a platform that facilitates distinctiveness, letting communities express themselves through their events.
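To make the idea of a standardisable core concrete, here is a minimal sketch of what such a common event record might contain; the field names are hypothetical, not Knowtice’s actual schema:

```python
# Minimal sketch of a standardisable event record, per the idea above.
# Field names are hypothetical, not Knowtice's actual schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CommunityEvent:
    title: str            # the purpose of the gathering
    starts_at: datetime   # time and date
    location: str         # venue or address
    audience: str         # who the event is for
    source: str           # e.g. "scraped" or "user-submitted"

concert = CommunityEvent(
    title="Neighborhood chamber concert",
    starts_at=datetime(2015, 11, 7, 19, 30),
    location="Community hall, Somerville",
    audience="general public",
    source="user-submitted",
)
```

Everything distinctive about an event would then live in free-form descriptions and tags layered on top of this shared core.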
We recently entered the Knight Foundation’s News Challenge with a view to taking the app forward in these new directions. You can view our submitted application (and up-vote it!) here. As we state in our proposal, we think that there’s tremendous potential for a tool that helps to unlock the cultural and social value of local activities in a way that engages and enthuses the whole community. We plan to build on the firm foundations of Artbot to create a social, sustainable, open-source platform to accomplish this broad and bold ambition. Keep checking this blog to find out how we get on!
Insights on “Collaborative Insights”: Four Lessons from the Digital Annotation Workshop
By Andy Stuhl on March 19, 2015
On January 23, 2015, HyperStudio hosted a workshop that convened more than seventy educators and technologists to discuss the future of annotation. “Collaborative Insights through Digital Annotation: Rethinking the Connections between Annotation, Reading & Writing” drew thoughtful perspectives on the opportunities and challenges facing HyperStudio’s Annotation Studio and other pedagogical tools more broadly. The workshop’s dynamic combination of formal panel conversations and crowdsourced, unconference-style breakout sessions allowed particular topics of interest to emerge as flashpoints over the course of the day; these themes became the organizing basis for the closing remarks delivered by HyperStudio research assistants Andy Stuhl and Desi Gonzalez.
Copyright
One issue that arose over and over was the question of copyright. What kinds of texts can educators share on Annotation Studio? In our terms and conditions, we ask users of Annotation Studio to respect intellectual property and not to post materials that violate copyright. But the question of what constitutes fair use for educational purposes is itself difficult to answer. There are, however, a few guidelines one might follow: a useful guide has been put together by the Association of Research Libraries specifically for faculty and teaching assistants in higher education.
When digital annotation tools are used for public humanities projects, these questions become all the more pressing. During a breakout session on using annotations in public humanities projects, Christopher Gleason and Jody Gordon of the Wentworth Institute of Technology shared their digital project on the legacy of former Boston mayor James Curley. As a part of the project, Gleason and Gordon asked students to annotate a historical text describing Curley’s mansion near Jamaica Pond. The student-scholars added comments that would better help them understand the original interiors of the house, complete with definitions and images of historical furnishings. This project stressed a recurring question for Annotation Studio: how do we best deal with issues of copyright—not just of the original text, but also of the content with which the text is annotated? The Annotation Studio team is exploring ways to simplify the addition of metadata, including copyright information to media documents used in annotations.
Pedagogy first, tool second
Both Ina Lipkowitz and Wyn Kelley have used Annotation Studio in multiple classes in the Literature Department at MIT. But the kinds of texts they teach – from classic novels to recipes to the Bible – and the ways in which they and their students annotate differ wildly. When reading entries from the medieval cookbook Forme of Cury, annotations might be used to translate Middle English words; in Frankenstein, students might share anything from personal impressions to interpretations of the text.
Annotation Studio was built as an easy-to-use web application with a core functionality: the collaborative annotation of texts. This one simple piece of software, however, has yielded a multiplicity of affordances. It’s not the tool that determines how texts are used in the classroom, but rather the texts that determine the tool: we implement new features based on how educators hope to teach texts to their students, and educators constantly find new strategies for using collaborative annotations in their classrooms.
Advancing technical and scholarly literacy together
The workshop demonstrated how annotation is a perfect case study for the bridging of readerly, writerly, and technical skill development that is central to the digital humanities. In the first panel, Mary Isbell documented how reading/writing assignments and the work of learning and troubleshooting digital tools can be mutually reinforcing components of a DH curriculum: by factoring space for learning and troubleshooting these tools into the course plan, she’s found that formerly stressful encounters with software become opportunities to engage with and adapt the technical piece of the puzzle. This type of work often includes putting different tools and different types of media into conversation with one another, as Annotation Studio’s multimedia annotation evidences. Through these uses, students, as HyperStudio’s Executive Director Kurt Fendt noted, come to “think across media” and thereby expand their understanding of how meaning is constructed and conveyed differently through different media.
In the Teaching Annotation breakout session, participants brainstormed ways to create new kinds of assignments that integrated the new affordances of digital tools with existing pedagogical goals. This conversation included suggestions about directing students to turn to one another for technical aid in using these tools—this conversation, in turn, was part of a larger one about challenging notions of expertise and conceptual categories in the classroom. This subtle back-and-forth between technical and scholarly engagement offers instructors and students alike new ways to expand and combine their skill sets.
Voices in (and voices of) annotation
Much as digital annotation recasts and recombines different kinds of expertise from different sources, the voices of those annotating and of those annotated are also put into a dynamic exchange. The notion of voices cropped up at the center of insights both about annotation’s potential in the classroom and about considerations we should carry forward in refining our approaches to it. In his keynote, John Bryant demonstrated how annotation and revision can help expose the multiple voices of a singular author, giving scholars a more nuanced access to that author’s voice and to the process of critical thinking via that voice. Panelists including Suzanne Lane and Wyn Kelley touched on how environments like Annotation Studio can put students’ voices in the same plane as the authorial voices they study. Co-mingled voices of informal debate, of traditional student roles, and of teacherly authority can democratize the learning space and inspire confidence; they can also, as participants noted, require a careful negotiation and reassertion of pedagogical roles in order to advance a constructive learning conversation.
These opportunities and challenges are at the foreground of HyperStudio’s design process, as Lead Web Applications Developer Jamie Folsom described, in building more writing-focused features that will help students transform their reader voices into authorial voices. More broadly, the theme of voices opened to all participants exciting ways to think about the project of helping students discover, build, and reinvent their own scholarly voices—a project in which, as the workshop made clear from many angles, annotation has always been and continues to be a very powerful method.
Browsing Theater History with CFRP
By Andy Stuhl on December 19, 2014
In mid October, a transatlantic group of scholars gathered in New York City to present their research into more than a century of French theater and to discuss the tool that had helped them examine this history. This tool was a faceted browser, developed by HyperStudio as part of the Comédie-Française Registers Project (CFRP).
Under the guidance of Jeffrey Ravel, a professor of history at MIT and the principal investigator of the project, the HyperStudio team has developed ways to catalog, browse, and visualize the contents of thousands of register pages – the hand-written ledgers in which the ticket sales of every performance had been meticulously recorded since the Comédie-Française’s opening in 1680. The faceted browser, the latest of these tools, was made available to this group of scholars a few months prior, and the workshop in New York marked the first time they had convened to discuss their work with CFRP data. In preparation for this meeting, scholars used the faceted browser to examine the dates of the workshop (October 14th and 15th) throughout the theater’s 300-year history. From that launching point, each presenter crafted a research question and probed it through the faceted browser. At the conference, the scholars shared an impressive array of findings from their research, as well as their insights into the design of CFRP tools.
It quickly became clear that researchers drew on the flexibility of the faceted browser in myriad ways. For some, screenshots of the faceted browser’s register entries and responsive filters, as well as of the built-in time wheel data visualizer, ably illustrated their arguments. These presentations often relied on juxtaposition – for example, showing the difference in attendance between Molière’s plays and Voltaire’s during the 1781-1782 season. For others, the browser served as a method by which they could generate data to enter into outside tools, allowing them to create additional visualizations. This latter category reminded us, as developers of digital research tools, that users will always incorporate additional tools into their projects. Accordingly, tailoring for every possible use should take a back seat to enabling smooth transitions of data from HyperStudio’s platform into others.
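In practice, that hand-off can be as simple as letting researchers download their filtered results in a tabular format other tools already understand. As a hypothetical illustration – not CFRP’s actual export format or field names, and with invented sample values – the sketch below writes a filtered result set out as CSV:

```python
# Hypothetical illustration of exporting filtered register results as CSV
# so they can be carried into outside visualization tools. Field names and
# values are invented, not CFRP's actual data or export format.
import csv

filtered_results = [
    {"date": "1781-10-14", "play": "Le Misanthrope", "author": "Molière", "tickets_sold": 412},
    {"date": "1781-10-15", "play": "Zaïre", "author": "Voltaire", "tickets_sold": 655},
]

with open("registers_export.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["date", "play", "author", "tickets_sold"])
    writer.writeheader()
    writer.writerows(filtered_results)
```

From there, a researcher can load the file into a spreadsheet, R, or any charting tool without further help from the platform.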
As we go about bringing CFRP tools to a wider scholarly audience, one priority is to develop case studies to familiarize viewers with the interface, while also showing compelling stories about its usefulness. Reflecting on their experiences of learning to use the CFRP faceted browser, many scholars at the workshop noted that such case studies would be very helpful to new users. Their presentations, meanwhile, provided invaluable examples of the stories we might build these studies around and indicated paths toward an interactive design for the case studies. Building on the thorough documentation of related digital humanities tools such as the French Book Trade in Enlightenment Europe, we are working to design a model for web-based CFRP case studies that will inspire readers to jump straight from the page into the data with their own questions.
The need to keep crafting engaging entry points to the Comédie-Française data was driven home by the workshop’s final discussion, which raised questions about the use of CFRP and similar tools in the classroom. Damien Chardonnet-Darmaillacq of Paris West University Nanterre La Défense discussed his introduction of CFRP and the faceted browser to a group of high school students, who enthusiastically took it up in launching their own investigations into French theater history. The students’ excitement and frustrations with the tool demonstrated both the rewards and the challenges of opening a still-developing project to pedagogical use. Chardonnet’s students, he noted, were quick to take on the data-based research approach and to become adept with its tools.
It’s always tempting with large-scale data projects to think that the body of data offers a neat and tidy representation of the underlying texts and events; the workshop’s discussions reminded everyone that the data from the registers is thoroughly embodied in the history and the physical space of the theater itself. For example, participants raised challenging problems of defining the different categories of seats within the theater and of understanding how these definitions changed across the troupe’s move into a new theater building in the 1780s. Some called for a visualization tool that would evoke the three-dimensional space of the theater itself in representing data trends. Projects in the digital humanities bring with them strong and complicating connections to the materiality of texts, performances, and spaces; yet the digital humanities also provide unique approaches to harnessing this materiality in digital representations through thoughtful design. When the presentations had concluded, participants headed uptown for a performance of a Voltaire play, tying the workshop’s ventures into the realm of data, queries, and visualization back into the lively theater tradition that inspired them.
A Year of Digital Humanities Thinking
By Desi Gonzalez on October 21, 2014
I recently celebrated my one-year anniversary as the author of h+d insights, HyperStudio’s weekly newsletter that shares the latest news, projects, resources, and fellowship and conference opportunities related to the intersection of technology and the humanities. (Subscribe here to stay in the loop!) That’s 52 weeks of combing through blogs, tweets, videos, slide shares, and news articles to find the most pressing issues in the field of digital humanities.
This responsibility (and privilege) has afforded me a unique perspective on DH: what’s happening to the field, what the current controversies are, and where the most exciting and cutting edge work is happening. Here, I highlight a few trends I’ve noticed over the past year.
1. What happens in the world affects digital humanities. While humanists often study the past and its artifacts, current events influence the work we do all the time. In April, net neutrality – the idea that Internet service providers and the government should allow equal access to all content and applications, without favoring specific products or websites – was threatened in the U.S. with the announcement that the Federal Communications Commission was considering changes to its policies. Digital humanists immediately responded, recognizing how net neutrality could affect the type of work – often open-access and done with little funding – that we do. Adeline Koh explained the implications: “Imagine this: an Internet where you can access Apple.com in a fraction of a heartbeat, but a non-profit activist website would take five minutes to load.” A group of leaders in DH penned an open letter urging the FCC to protect “the fundamental character of the open, non-discriminatory, creative, and competitive Internet.” Hybrid Pedagogy hosted a riveting discussion on the implications of net neutrality for educators and learners.
Others are recognizing the importance of creating digital archives of major events happening now so that scholars may be able to use them in the future. For example, Ed Summers advocated for the documentation of the aftermath of the Michael Brown shooting and the subsequent protests in Ferguson, Missouri. He wrote a blog post sharing the process he used to archive tweets from the event.
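Summers’s post walks through his own process; as a generic sketch of the underlying idea – assuming the tweets have already been collected as JSON Lines, one tweet object per line, and making no claim about his actual workflow – filtering a collection into a topical archive might look like this:

```python
# Generic sketch of filtering a JSON Lines tweet collection into a topical
# archive. An illustration of the idea only, with hypothetical file names;
# not Ed Summers's actual workflow or tooling.
import json

def archive_matching(infile, outfile, term="#ferguson"):
    """Copy tweets whose text mentions a term into a separate archive file."""
    kept = 0
    with open(infile, encoding="utf-8") as src, \
         open(outfile, "w", encoding="utf-8") as dst:
        for line in src:
            tweet = json.loads(line)
            if term in tweet.get("text", "").lower():
                dst.write(line)
                kept += 1
    return kept

# Hypothetical usage:
# n = archive_matching("stream.jsonl", "ferguson_archive.jsonl")
# print(f"archived {n} tweets")
```

Of course, the harder parts of such documentation work lie beyond the code: collection policies, platform terms of service, and long-term preservation.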
2. Universities are evolving. DH continues to be placed at the center of heated and often hyperbolic debates about the so-called demise of the humanities. No matter which side you land on (or, if you’re like me, if you believe these debates are asking the wrong questions), it is clear that there are certain, very real changes happening in academia right now.
The Modern Language Association undertook the daunting task of producing a comprehensive report on the current state of doctoral education in the humanities. The report’s executive summary offers many recommendations, including redesigning the doctoral program to fit with “the evolving character of our fields,” providing support and training for technology work, and reimagining what a dissertation might look like. (In fact, a few months prior Cathy Davidson had reflected on what it means to write a digital dissertation.)
And speaking of dissertations, Mark Sample’s Disembargo project challenges the concept of the academic embargo, in which dissertations are withheld from being circulated digitally for up to six years. Every ten minutes, a character from his dissertation manuscript is added to the project website—published under a Creative Commons license—at “an excruciating pace that dramatizes the silence of an embargo.”
Finally, more and more scholars are opting for #altac (alternative academic) and #postac (post-academic) careers – paths outside the traditional tenure-track route, which is becoming increasingly untenable. Sarah Werner considered the relationship between #altac work and gender and shared her advice on pursuing an untraditional career.
3. Digital humanities is happening in the public sphere. As the #altac movement shows, digital humanities is often happening outside of the university. Crowdsourcing continues to be a popular strategy to involve the public in the development and execution of DH projects. Letters of 1916, for example, recently celebrated its first anniversary. Within the past year, families (in addition to cultural institutions, libraries, and archives) have donated letters penned during the Easter Rising in Ireland. Others have considered how the potential of crowdsourcing can be further enhanced: Mia Ridge outlines cultural heritage projects that combine crowdsourced data with machine learning, while Trevor Owens imagines crowdsourcing being used in tandem with linked open data.
Additionally, museums, archives, and libraries outside of academia are spearheading some exciting technology initiatives. Open data from museums and libraries allows broad audiences to do DH in their own homes. Nina Simon highlighted some of the data visualization projects that resulted from the Tate releasing an API for its collection. A consortium of museums involved in the Getty Foundation’s Online Scholarly Catalogue Initiative is reimagining what art publications look like in the 21st century. The Walker Art Center, for example, challenges the notion of what a page is in the digital age.
The above are just a handful of the ideas and projects flowing through the digital humanities community. A blog post can’t do justice to all of the fantastic work being done in the field; to keep up with the latest, I recommend that you follow the #DH hashtag on Twitter and subscribe to h+d insights. I’m excited to see how the next year shapes up!
Image: visualization of the digital humanities community on Twitter by Martin Grandjean (source)
Annotation Studio 2.0 Released
By Rachel Schnepper on September 18, 2014
After an exceptionally busy summer, we are thrilled to announce the release of Annotation Studio 2.0. The new version of Annotation Studio offers the following features:
- Anchored annotations that correspond to the text currently visible
- Touch capability so that Annotation Studio can be used on mobile devices
- Upload of very basic PDF files
- Slide-in menus with context aware tools
- Private subdomains (e.g., MassU.annotationstudio.org) hosted by HyperStudio
- Improved annotation sidebar
- Wider document view
- Breadcrumb navigation
- Expanded help forum at support.annotationstudio.org
User feedback has been incredibly important in determining which features were added to Annotation Studio 2.0. We have listened carefully to what the Annotation Studio community was saying – what both instructors and students liked and disliked about Annotation Studio. Accordingly, we have endeavored to preserve what users like best about the application, namely its simplicity and ease of use, while adding the features and functions they wanted.
In the weeks to come, we will continue to add new features, including increased functionality to the PDF uploader. As we do so, we hope to continue to hear back from the Annotation Studio community. We encourage you to make use of the Annotation Studio support forum, where users can learn from one another. We hope you continue to share with us how you are using Annotation Studio, like the subjects of our pedagogy case studies. Our goal is to make a tool that reflects the user’s needs, and we can’t do that without your feedback.
Please check out the new version of Annotation Studio here!