Blog
Transition
By Kurt Fendt on December 30, 2018
As of January 1, 2019, HyperStudio’s research and development will be rebranded and refocused under a new name: “Active Archives Initiative” (https://aai.mit.edu). Existing projects such as “Annotation Studio”, “Idea Space”, the “US-Iran Relations” project, “Blacks in American Medicine”, and others will continue and see further development. Please watch this space for further info.
You might want to check out the new MIT School of Humanities, Arts, and Social Sciences Initiative: MIT DIGITAL HUMANITIES.
For further information and questions, please contact Kurt Fendt <fendt[at]mit.edu>.
Renovating Artbot
By Nikhil Dharmaraj on June 20, 2018
Hello! Nikhil here – I’m having a blast staying in Cambridge on my own this summer so far, and I’m thrilled to be joining the HyperStudio development team as an intern! My work here this summer will concentrate on Artbot. As a recap, Artbot is a mobile website (www.artbotapp.com) that serves as an art exhibit recommendation system for users in the Boston area; it is built on top of parserbot (https://github.com/hyperstudio/parserbot), an open-source natural language processing tool developed at HyperStudio that performs high-level entity extraction.
Originally, Artbot was designed as a mobile website by HyperStudio Research Assistants Liam Andrew and Desi Gonzalez as a personalized art recommendation system. The concept and approach were explicated in a paper published at Museums and the Web (https://mw2015.museumsandtheweb.com/paper/playful-engineering-designing-and-building-art-discovery-systems/). Currently, Artbot sources from seven different art museums in the Boston area, and it employs a number of natural-language-processing and entity-extraction services to parse and link related art exhibitions.
This summer, my goals for Artbot form a three-phase process: (1) improving the web scrapers, (2) porting to iOS, and (3) adding social capabilities.
First, I hope to fix current bugs in Artbot. Artbot’s foundation is scraperbot, a program that scrapes various museum websites to pull exhibit listings; those listings are then run through a chain of four entity-extraction and tagging services: Stanford NER, DBPedia, OpenCalais, and Zemanta. The scrapers are built with BeautifulSoup, a Python library for extracting data from the HTML content of a given website. Each scraper is custom-built for one of the museums currently in Artbot’s system, tailored to the format and structure of that museum’s webpages. The trouble is that these webpages change: scraper code written a while ago no longer reliably parses museum sites whose page structure has since been updated or redesigned. Fixing this boils down to a manual task – the instructions given to BeautifulSoup must be hand-modified to fit the new structures of some of the museum pages (for example, the DeCordova museum page: https://decordova.org/). Given the frequency with which museums remodel their websites, maintaining static scraper code may remain something of a headache. For now, though, we hope that a broad pass through scraperbot’s BeautifulSoup instructions will eliminate Artbot’s scraper bugs for the foreseeable future. With extra time, we also hope to add more web scrapers to expand Artbot’s reach to other museums in the Greater Boston area.
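To illustrate the kind of brittleness described above, here is a minimal sketch of a scraperbot-style scraper. The HTML snippet, class names, and CSS selectors below are invented for illustration only – each real scraper must match its museum’s live markup, which is exactly what breaks when a site is redesigned:

```python
# Hypothetical sketch of a scraperbot-style museum scraper.
# The markup and selectors are invented; real scrapers are
# hand-tuned to each museum's actual page structure.
from bs4 import BeautifulSoup

SAMPLE_PAGE = """
<div class="exhibition">
  <h3 class="title">Light and Motion</h3>
  <span class="dates">Jun 1 - Sep 30, 2018</span>
</div>
<div class="exhibition">
  <h3 class="title">Sculpture Now</h3>
  <span class="dates">Jul 15 - Oct 21, 2018</span>
</div>
"""

def scrape_exhibitions(html):
    """Extract (title, dates) pairs from one museum's listing page."""
    soup = BeautifulSoup(html, "html.parser")
    results = []
    for card in soup.select("div.exhibition"):
        title = card.select_one("h3.title")
        dates = card.select_one("span.dates")
        if title and dates:  # skip cards whose markup has changed
            results.append((title.get_text(strip=True),
                            dates.get_text(strip=True)))
    return results

print(scrape_exhibitions(SAMPLE_PAGE))
```

If the museum renames `div.exhibition` in a redesign, `soup.select` silently returns an empty list – which is why the fix is a manual pass over each scraper’s selectors rather than a one-time code change.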
Once the scrapers are fixed and the bugs eliminated, my second – and broader – goal for the summer is converting Artbot into a native iOS app, downloadable from the App Store. Currently, Artbot is a mobile website: by visiting www.artbotapp.com on their smartphone and saving it to the home screen, users can treat it as a quasi-app. I hope to make mobile Artbot a first-class experience by porting it to a native iOS app. Two years ago, I completed the iOS App Development Track at Make School Summer Academy in Silicon Valley; since then, I have built a portfolio of iOS apps, all written natively in Swift using Xcode. I now hope to make Artbot fully functional as an iOS app by porting the current code (a combination of Python, JavaScript, and Ruby on Rails) to an Xcode project. Most of the UI design will stay consistent; the task is mainly one of code translation and Xcode design proficiency.
Finally, once Artbot is available both as a website and as a native iOS app, we will work on the final summer deliverable: adding social capabilities. To be sure, Artbot has a simple and specific goal: to recommend art exhibits and galleries and to illuminate serendipitous connections in artwork for users. It is by no means meant to become a social platform in and of itself. Rather than making Artbot a vessel of social interaction, we believe that linking the app to existing social media will boost app use and improve the experience. Examples of such connections include “Tweet,” “Instagram,” and “Facebook” buttons with which users can share exhibits and galleries they have found in the app. Clicking the Tweet button, for instance, would pull up a suggested tweet that might look like this: “I just found the XYZ exhibit at @DeCordovaSPandM! Check it out #Artbot”. By adding these core social features, we hope to give users a positive channel to share their insights without distorting Artbot’s original purpose and platform.
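One lightweight way to implement such a Tweet button is to open a prefilled share link built on Twitter’s web-intent URL. The sketch below is in Python for illustration (in the app itself this would live in the client code); the exhibit name, museum handle, and exact share text are placeholders, not Artbot’s final copy:

```python
# Build a prefilled "Tweet" share link for an exhibit using
# Twitter's web-intent endpoint. Exhibit title and handle are
# illustrative placeholders.
from urllib.parse import urlencode

def tweet_share_url(exhibit, museum_handle):
    """Return a web-intent URL that opens a suggested tweet."""
    text = (f"I just found the {exhibit} exhibit at "
            f"{museum_handle}! Check it out #Artbot")
    # urlencode percent-escapes spaces, '#', and '@' for the URL
    return "https://twitter.com/intent/tweet?" + urlencode({"text": text})

print(tweet_share_url("New Formations", "@decordovamuseum"))
```

Opening the resulting URL in a browser shows the composed tweet with the text prefilled, so the user can edit or send it without Artbot ever handling their credentials.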
Indeed, I’m very excited about Artbot’s future and growth this summer! With our three-step plan, I think we can truly advance its functionality and enhance its mission. Stay tuned for more updates!
HyperStudio welcomes summer intern Nikhil Dharmaraj
By Kurt Fendt on June 19, 2018
For the summer of 2018, we have a new intern at HyperStudio: Nikhil Dharmaraj! Sixteen years old, Nikhil is a rising senior at The Harker School in sunny San Jose, California. Thrilled to be working at HyperStudio this summer, Nikhil has a budding interest in the “digital humanities”; essentially, he is eager to investigate what lies at the intersection of technology and the humanities.
An aspiring software developer, Nikhil is fluent in many programming languages, and he has pursued computer science through post-AP coursework, hackathons, and internships. Outside the classroom, he has been nationally ranked for the past six years in Original Oratory (a competitive speech event within the NSDA); moreover, he has excelled in the classics as a member of the CAJCL, serving as vice president of his school’s chapter.
This summer at HyperStudio, Nikhil is excited to be working on a variety of projects, with a focus on improving Artbot, a program that uses a double machine-learning algorithm to make serendipitous connections between art pieces and exhibits. Nikhil will be with us until August 17, and we are excited to have him on HyperStudio’s team!
h+d Portraits: An interview with Jeremy Grubman
By Josh Cowls on May 18, 2017
Welcome to the second edition of h+d Portraits, an ongoing series of interviews with scholars and practitioners of the Digital Humanities. In this series, we aim to explore what the label Digital Humanities means in practice – by assembling a diverse set of perspectives from those engaged in this still-emerging field.
Our second conversation is with Jeremy Grubman. Jeremy has been at MIT since 2009, first as manager of an MIT Libraries collection, and since 2012 has been an archivist in the Arts, Culture and Technology (ACT) program. As our discussion demonstrates, ACT sits at the intersection of the artistic and the digital, and the program’s holdings contain a remarkable range of artefacts covering MIT’s decades of experimentation with emerging technologies.
As such, Jeremy is expertly placed to comment on the intersection of subject matter experts, information scientists and “technologists”, whose collaboration, as Jeremy sees it, gives rise to Digital Humanities.
So our first question is always the same: What brings you to your existing institution, and in this case to your department?
Sure. So I received a Masters in Library and Information Science from Simmons College in 2008, and almost immediately after I began working in spring 2009 for the MIT Libraries, so that’s what brought me to MIT, and in the summer of 2012 I was asked to come and bring the kinds of work I was doing for the Libraries to the program in Art, Culture and Technology (ACT), which owned a significant amount of archival materials, covering the history of MIT’s experiments with emerging technologies in art over the past 60 years or so.
Maybe for people who don’t know quite so much about ACT, what makes ACT quite unique inside MIT?
Well ACT is born from two different organizations, the Center for Advanced Visual Studies which brought artists from all around the world to collaborate with MIT’s scientists and engineers as research fellows, beginning in the mid-1960s, and the Visual Arts program which was an academic arts program that began in the late 1980s. And the two merged to form ACT which is where we are today, which is both a research center and an academic graduate program in the arts and we also provide classes for undergraduates.
Can you tell us about how the interplay of different stakeholder groups works in practice?
I think that DH really emerges from the needs of the subject matter experts in answering complex research questions. To get at those materials they encounter the information science professionals, the librarians, archivists, museum professionals who are the gatekeepers and enablers of access, and then you have the people I refer to as technologists for lack of a better word, who build the tools and applications that promote that level of access. And DH as I see it is the convergence of those three areas of practice.
What makes these collections special – maybe not just in terms of their artistic or cultural qualities, but in that process of making them digital?
“Special collection” is a very specific term in the library world, it’s somewhat different from an archive, but what makes the collections themselves so interesting and unique is, on one hand, the breadth of material, the difference of material types. Currently I’m working most prominently with the Center for Advanced Visual Studies special collection, and it’s a challenge to do research in that collection, because if you’re looking at a particular project that was done here at MIT you need to be able to access a poster, a film, a catalog, a publication, the records of meetings and memos that reveal a process of collaboration. … It can be really challenging, in just a physical space, to go around a room and pull all of those materials from their different locations.
At the same time, in a digital space, it can be really difficult to ensure that a user can discover all of the materials that they need to discover on a given topic. So I think that the breadth of material types in a given collection is one thing that makes it challenging to deal with, and then also … there are some really complex research questions that people tend to ask about these projects. “Who were all the people who came to work on holography at MIT, and where did they all come from? And then where did they produce their work?” And so in order to be able to answer that question it takes a lot of digging, and not in the sort of approach that is typically used when you’re looking at just raw data.
What makes ACT a particularly interesting place to work in this space?
Well it’s very interesting to not work under the umbrella of the MIT Libraries, or the MIT Museum, which is typically where these materials would sit at an institution like this. It just so happens that the materials were collected and retained by ACT, and on one hand this allows me to move very freely without the weight of a larger institution behind me, but on the other hand it means that I have to do a lot of things on my own. ACT’s a very interesting place, because even though it’s a relatively small program we have a really broad range of different kinds of people coming to experiment, produce work, conduct research, and it’s always changing every semester too, because we have all kinds of people coming in, whether it’s our graduate students or visiting faculty.
How do you see archives evolving in the longer term?
I think there is a clear trend toward reaching wider audiences, and the way to do that is to develop repositories and methods of retaining information and providing access to them that are much more global, in the sense that when you build a digital collection you don’t want it to just necessarily be accessible in that one mode, in that one repository; we need to stop building these sorts of silos of information and build repositories that can really rapidly import and export collection information. And that’s an incredible challenge because we all have different kinds of information and we all have different kinds of metadata, and there are different schemas that we use to describe it. So the challenge there is to be able to build repositories that can incorporate collections from other institutions, so that you might be able to access related materials in one spot, even though the materials themselves might physically be held by very different institutions across the world.
That’s one thing that needs to happen. And another thing is greater collaboration between libraries, archives, museums, galleries – any of these types of organizations that collect materials. When we build these types of global repositories we need people to contribute to them. And I see very often these types of DH projects that are typically spurred by a particular need, for a particular collection or a particular topic, and there’s often collaboration at the project level, where you might get input from your expert users, you might have the technologists working very closely with the information professionals, and the program moves forward. We need to see collaboration at a much higher level, at an institutional level and at an extra-institutional level, as we begin to answer questions like, how are we all going to describe our data in a way that will be intuitive for users to access and discover it?
Is that kind of collaboration a more formal, institutional framework or is it just doing different things with the same people in the same ways?
I think it needs to be a mix of both. I think we need a sort of administrative buy-in, that that collaboration matters and at the same time we need some sort of more serendipitous conversations happening, between these different groups. And very often we like to sort of close our doors and work on our own, we can all be guilty of that, but the more conversations we have about enabling access to wider audiences, and how we get at more intuitive ways of helping people discover things, the better the products are going to be.
Do you have any ideas about how you would enable those conversations?
I think that it falls at least on the institutions themselves to have the conversations between the sometimes disparate departments that exist, whether it’s the MIT Libraries talking to the Museum, talking to ACT, talking to HyperStudio, or the IT department in a museum or a school getting in touch with the various academic departments to see what kinds of tools they need to manage their information and promote access to research data. I think it can start at that institutional level again with a larger administrative buy-in, and also I’ve seen the most success when people become closer colleagues and collaborate with each other, just to keep each other updated on the different things we’re working on. We often discover that we’re reinventing the wheel of someone else’s project. And when we can sit and put our heads together and share information about the process, then we all tend to learn from it, in ways that make our projects much more expansive.
Introducing the Active Archives Initiative: Making Stories, within the Archive
By Evan Higgins on April 25, 2017
During our tenure at HyperStudio as research assistants, we’ve had the chance to work on a number of archival projects at various stages of implementation. From uploading, encoding, and storing data to visualizing, displaying, and making it accessible, we’ve gotten first-hand experience of the tremendous range of possibilities and variables at play in the creation of digital repositories. Under the guidance of Kurt Fendt, HyperStudio’s Director, we’ve come to focus on the idea of an Active Archive – a digital repository in which users can interact with resources to craft, discover, and share previously unknown links between content. Key to this idea is the notion of the “an-archive,” first proposed by Laermans & Gielen in their 2007 article “The archive of the digital an-archive.”
The “an-archive”, as defined by Laermans & Gielen:
“both is and is not an archive in the traditional sense of the word. It is, for it actualises the storage function that is usually associated with the notion of the archive; it is not, for the digital an-archive is synonymous with an ever expanding and constantly renewed mass of information of which no representation at all can be made.”
In short, the idea of the an-archive is useful because it describes a large, open, digital repository, such as YouTube or Wikipedia, that allows for near-constant expansion. This is in direct contrast to the traditional, analogue archive, which is exclusive both in terms of content and access.
This tension between these two types of repositories is nothing new, but it is a useful example for highlighting the different options each provides. An an-archive has “active users” which allows for growth and accessibility, while traditional archives have “stable sources,” that instead provide accuracy but also a measure of control. Of course, the question implicitly raised by this contrast is: who is controlling these data sources? In the case of the an-archive, it is the black-box algorithm, while the traditional archive is instead governed by the equally unapproachable white-haired curator.
In both of these scenarios, the role of the ordinary user is diminished. This might have made sense in 2007, when this article was published, but in the decade since, people’s expectations around data have changed considerably. In 2007, for example, the bulk of interaction with the internet took place on desktop computers – it was only with the introduction of the iPhone that year that powerful personalized devices truly began to reconfigure our expectations around personal data. Or take social networks: Facebook and Twitter were both available in 2007 – albeit in nascent forms – but the raft of popular apps that have emerged since have helped bring forth the notion of holding much greater control over one’s data. Whatsapp encrypts conversations by default, while Snapchat’s messages disappear once they have been viewed – making it, if anything, an anti-archive.
Of course, it’s not surprising that as more and more personal data goes online, users are likely to demand proportionally more control over it. (Edward Snowden’s explosive revelations about government surveillance certainly helped bring clarity to the issue.) But even in those apps and services where outward promotion is a goal, not a threat, users have become accustomed to exercising much more control.
Consider the case of the hashtag: this now-ubiquitous symbol lets users sort and categorize content even as they’re creating it. Or take geo-tagging, which allows users to instantly add a layer of geographic metadata to a photo. With scores of other services, each with their own terminology and functions – from pinning on Pinterest to retweeting on Twitter – it seems that the internet and the devices and data connected to it have fostered a virtual playground, in which ordinary users exercise extraordinary amounts of agency and creativity. It should come as no surprise, then, that users might expect an equivalent amount of control when they turn their attention to archives.
So what should an archive look like today? The obvious solution is to diminish the place of both the unreachable curator and the black box and instead focus on the user who is interacting with the content directly. In order to engage modern users of archives, we need to make sure that they have not only access to the same information as traditional curators, but also control over this information. In short, we need to create archives that put the user, not unseen forces, directly at the center of the data.
At HyperStudio, two of our current projects – Blacks in American Medicine and US-Iran Relations – are being developed with this goal in mind. As part of our Active Archives Initiative, both of these digital projects are being refined to make sure that their vast repositories are not only open to all users but responsive to these users’ wishes and needs. By thinking of users first, we aim to create archives that will engage a new generation of audiences who play an active role in shaping the stories that have been handed down to us all.
So what will our Active Archives Initiative encompass and involve? We’re still at an early stage in thinking through our approach, but something we certainly plan to highlight is the idea of “storymaking”. In contrast to storytelling, storymaking emphasizes the importance of historical materials – as accessed through archives – and of the narratives we create with them. HyperStudio’s experimentation with storymaking actually predates the Active Archives Initiative: an earlier iteration of our US-Iran Relations project had similar functionality, whereby users could “write their own narrative” within the archive. But we’re hoping that through this initiative we can offer users an easy-to-use interface which is built to scale across a diverse array of archives.
Consider, for example, our Blacks in American Medicine project. The project archive consists of over 23,000 biographical records of African American physicians from 1860 to 1980, along with numerous associated primary documents. Inevitably, the scale of this archive enables very different stories to be constructed from the diverse set of raw materials available. Emphasizing storymaking means encouraging users to think through the subjective decisions involved in using archives to explore and explain the past. For instance, when looking at correspondence from notable African American physicians, one can choose to focus on the purpose of the document, the time period surrounding it, or the geographic information it contains.
This is just an example, but it serves to show the complexities involved in understanding the past – complexity that our Active Archives Initiative will encourage users to embrace, not avoid. Obviously, this work is still at the planning stage, but in future work we’ll be thinking in more depth about how the user interface and functionality of the initiative can best reflect the mission we’ve described here. And we’ll be sure to keep this blog updated with our progress!
h+d Portraits: An interview with Ece Turnator
By Josh Cowls on March 16, 2017
Welcome to h+d Portraits, a new series of interviews with scholars and practitioners of the Digital Humanities. In this series, we aim to explore what the label Digital Humanities means in practice – by assembling a diverse set of perspectives from those engaged in this still-emerging field.
We kick off the series with our conversation with Ece Turnator. Ece recently joined MIT Libraries as a Humanities and Digital Scholarship Librarian. She completed her PhD in Byzantine History at Harvard, before joining the University of Texas Libraries to conduct postdoctoral work in medieval data curation. In our interview, Ece offered her perspective on the intersection between traditional historical research and the use of new methods, resources, and ways of thinking.
Ece, tell us about your academic background prior to joining MIT. How did you become involved in Digital Humanities?
For my PhD, I worked on the economy of the Byzantine empire during and after the 13th century – specifically before and after the Latin conquest of Constantinople. I looked at how it impacted the economy, in what ways, and what the state of the economy was before they arrived in that area in 1204 AD. I used textual data, but I also used textile analyses, evidence from numismatics, and ceramics, altogether.
So those were my three main sources of data, and I put them all into Excel, and tried to create maps out of them to understand what exactly was going on with the data that I had collected. The more I looked at them, the more it helped me understand the context, so that was my introduction to doing humanities with the help of digital tools.
I then worked at UT with a professor in the English department, and at UT Libraries my office was in the IT Unit of the Libraries. Essentially our goal was to design a medieval project that would be updated and sustained with the help of the Libraries. I worked on that for two years – it was a great learning experience. I think the libraries learnt a lot from the experience of working with a humanities project, and trying to understand what goes in and out of creating a DH project. I think it also benefited the faculty member, and clearly it did benefit me – now I’m here at MIT Libraries.
What does your work at MIT Libraries involve, and what has been the reaction to your efforts in the MIT community more broadly?
Here at MIT, I’m working with Doug O’Reagan, who’s a SHASS postdoctoral fellow doing very similar things here to what I did at UT – so we’ve kind of joined forces in that regard, working towards understanding what DH means at MIT. It’s a different kind of animal here, from what I understand so far. So as part of Doug and the Libraries’ efforts to understand what’s going on at MIT with respect to Digital Humanities, we meet with faculty in different departments, as well as graduate students.
So when Doug and I go round and meet faculty we ask them, “what does DH mean to you?”, “do you think there should be a center for DH?”, and “how should the training be done in your specific departments?”. We get different kinds of answers and focuses. Sometimes the emphasis, interestingly, is on training in social statistics and computation, and how to integrate these into the humanities. But when you talk to the folks who are trained inside of the humanities departments they ask different kinds of questions, and often emphasize the need for more ethics, and more guidance on how to create resources which acknowledge the social and historical bearings of what it means to create a digital tool, resource, or machine. That kind of tension exists here that I had not noticed at other places I have been. This could be creative, or it could lead to other directions, and I can’t tell at this point in time. But it is interesting – and I think that tension is MIT-specific.
What, in your view, are the key questions around the push towards open access in the humanities?
We have to appreciate that it’s a big cultural change, especially for humanities departments but for other departments as well: what does it mean to be open access, is it actually doable, and how do we get to that ideal? Those are all challenges posed by the systems that we created in the post-Second World War world, and now we’re trying to rewind the calendar back a little bit, to get to the time before the locking down of resources. So now the question is – how do we move back to earlier ideals, and create a knowledge commons such that information is not locked down, but is available for whoever wants to actually use it?
How do you see libraries changing in the future?
I think a lot of the future development is going to incorporate formalized training of students. So far what has happened is students have to gather computational skills on their own, with not a lot of help that’s recognized – transcribable recognition, that we prioritize. We prioritize other aspects of their training, but we don’t prioritize training around how we create data that’s shareable, how we create data that’s usable by others, and how we think about who is going to access that data once we create it.
Specifically thinking about humanities fields, this has not so far been explicitly taught in our fields across the board. But maybe the trend is going to be towards trying to incorporate that big cultural change into humanities, and work toward integrating those values and skills – not just skills on their own, but values and skills – into the training of humanities students at large.
Annotation Integration: An Interview with HyperStudio Fellow Daniel Cebrián Robles
By Josh Cowls on October 24, 2016
At a time when it’s commonplace to see a movie trailer embedded in a tweet, or photos posted in a message thread, it’s clear that the experience of using the web involves an immersive mix of text, images and video. Of course, underlying what appears a seamless combination of media content is a huge amount of technical sophistication. The story is no different for annotation programs, which allow users not only to view but also to comment on various types of content. Integrating different annotation programs is the work of our latest HyperStudio Fellow, Daniel Cebrián Robles, who spent September visiting us from the University of Málaga, where he is a Professor of Science and Education. I recently sat down with Daniel to talk about his work, and what he hopes to achieve in his time with us.
One of Daniel’s areas of expertise is his development of technology designed to meet the needs of students and researchers. This makes him a natural fit as a visitor to HyperStudio, given our focus on both research and pedagogy in the context of Digital Humanities. Daniel’s focus while at HyperStudio will be on integrating the Open Video Annotation project (OVA), for which he is the Lead Application Developer, with our own online annotation tool, Annotation Studio. OVA, an initiative originally based out of the Center for Hellenic Studies at Harvard, enables the annotation of online video material, allowing students and teachers alike to create, collate and share tags and comments at different points in a video. Given the explosive growth of online video in recent years, the project serves to make watching video online a more interactive and immersive experience.
As noted, here at HyperStudio we have our own online annotation tool, Annotation Studio, which is also designed for students, researchers and others to collaboratively annotate online material. The crucial difference, however, is that Annotation Studio is currently designed for annotating text, but not – as yet – video. This, then, is the basis of Daniel’s work with us – to integrate the video annotation capacities of the Open Video Annotation Project with Annotation Studio. Doing so undoubtedly poses several technical challenges, which will require Daniel’s depth of experience in this area. Daniel explained that his ambition for this month is to develop a first working version of the integrated functionality.
As both a developer and educator, Daniel is perfectly placed to negotiate between what users want and what is technically feasible, allowing him to swiftly fix bugs and incorporate suggestions made by instructors and students. This ground-level engagement thus guides his development efforts, serving a similar function to the workshops and trials we regularly hold with users of Annotation Studio. These engagements are often the most rewarding part of the development process: Daniel mentioned the time that one of his users, a teacher in training, told him that he would use the program with his future high school students.
Looking further ahead, Daniel believes that his work integrating the Open Video Annotation project with Annotation Studio is only the beginning of a much wider process of bringing diverse forms of media together for annotation on one platform. Daniel speculates that beyond text, photos, and videos, such a platform might in the future also encompass maps and even 3D objects. And the impact on user experience could be empowering and inspirational. Giving students, teachers, and the general public the ability not only to consume media online but also to share opinions and perspectives on it through annotation could revolutionize how we experience the vast catalog of content available online. Daniel’s work marks just the start of this process, but we are excited to have him on board!
Telling Forgotten Stories; Testing Traditional Narratives
By Evan Higgins on December 7, 2015
As HyperStudio’s other new resident Research Assistant, I’m finding it hard to believe that I’m nearly one fourth of the way through my time here. I’ve worked so far on a number of interesting Digital Humanities projects exploring topics as varied as US foreign relations research and methods of collaborative annotation. And while all these assignments have been fascinating in their own way, the one that has commanded the majority of my attention and interest is our new interactive archive that explores the history of African American physicians. This project, tentatively titled Blacks in American Medicine (BAM), has been in development for years but is now beginning to take shape as an interactive online archive thanks to the possibilities provided by Digital Humanities tools and techniques. As with several of the other projects here, BAM makes use of digital tools to tell stories that have been left untold for too long.
A Brief History of the Blacks in American Medicine
BAM has been in development since the mid-1980s, when Pulitzer Prize finalist Kenneth Manning undertook the herculean task of aggregating the biographical records of African Americans in medicine from 1860 to 1980. With the help of his colleague and fellow researcher, Philip Alexander, Ken set out to create a nearly comprehensive list of black American medical practitioners, not only to make research about this community less arduous for scholars but also to test traditional narratives about African American communities in the United States.
Over the years, the team built up an impressive collection of biographical records for over 23,000 African American doctors, gathered through the careful combing of academic, medical, and professional directories. Once found, each record was stored in a digital database with the aim of one day making this content available to a wider audience. Each of these mini-biographies includes personal, geographic, academic, professional, and other information that helps shed light on this underexplored corner of American history.
While searching for these biographical records, Ken and Philip also set about gathering documents associated with these doctors. Correspondence, reports, surveys, diaries, autobiographies, articles and other content collected from years of archival research help flesh out aspects of these doctors’ lives and allow readers to understand the complex situations and challenges that these doctors faced.
In my many hours spent searching through the archive, I’ve come across hundreds of documents that provide a window into the history of the black experience in America. One that continually comes to mind is a letter written by Dr. S. Clifford Boston to his alma mater, the University of Pennsylvania, in 1919. In it, Dr. Boston politely asks the General Alumni Catalogue to “kindly strike out the words ‘first colored graduate in Biology’ [sic], as I find it to be a disadvantage to me professionally, as I am not regarded as a colored in my present location.” This letter is an important artifact not only because it provides evidence of the ways in which blacks “passed,” but because it elucidates some of the complex societal challenges that many African Americans in medicine faced. The formal, detached way in which the doctor asks to be dissociated from his heritage gives a brief glimpse into the systemic racism and segregation that blacks of this era confronted. First-hand documents like this one offer a chance to add nuance to the history of the black experience in America, which is too often told in broad, overly simplistic narratives.
These individual stories, combined with a massive body of standardized biographical information, create a unique archive that allows for layers of interaction. By incorporating both focused study of the histories of specific physicians and broader analysis of trends within the African American medical community, this trove of content highlights untold chapters in the vast history of the black experience.
HyperStudio Takes the Project into the Information Age
With an eye towards the dissemination of this rare and important content, Ken and his team recently began working with HyperStudio to take better advantage of the affordances of digital humanities.
While still in the initial stages of formalizing the structure of the platform, we are working on a number of complementary methods for displaying this trove of content. As with most of HyperStudio’s archival projects, content will be discoverable by scholars and more casual audiences alike. To that end, documents and records will be encoded using established metadata standards such as Dublin Core, allowing us to connect our primary materials and biographical records to other relevant archives.
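As a rough illustration of what Dublin Core encoding looks like, the sketch below builds a minimal record for a hypothetical document from the collection. The element names (`title`, `creator`, `date`, and so on) are standard Dublin Core; the sample values and the helper function are invented for illustration, not drawn from the BAM archive itself.

```python
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

def dublin_core_record(fields):
    """Build a minimal Dublin Core metadata record from a dict of fields."""
    record = ET.Element("record")
    for element, value in fields.items():
        child = ET.SubElement(record, f"{{{DC_NS}}}{element}")
        child.text = value
    return ET.tostring(record, encoding="unicode")

# Hypothetical sample record for a letter in the collection
xml = dublin_core_record({
    "title": "Letter to the General Alumni Catalogue",
    "creator": "Boston, S. Clifford",
    "date": "1919",
    "type": "Text",
    "subject": "African American physicians",
})
print(xml)
```

Because each field maps to a well-known Dublin Core element, records encoded this way can be matched against other archives that use the same standard.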
We’re also planning to integrate our Repertoire faceted browser, which allows users both to run targeted searches against specific criteria and to freely explore documents that interest them. Additionally, the project will feature our Chronos Timeline, which presents historical events and occurrences dynamically. We also plan to incorporate geographic, visual, and biographical features, as well as a storytelling tool that will enable users to actively engage with our content.
As I round the corner on my first semester at MIT, I can’t help but be excited by this project. Too often, existing narratives about marginalized groups go untested and unchallenged. By providing a multi-faceted interface and rich, previously inaccessible content, we’re creating a tool that will help interrogate these traditional views of African American history. For more information on the project as it develops, follow us here on the blog.
Image: Leonard Medical School on Wikipedia (source)
Taking Artbot Forward
By Josh Cowls on October 6, 2015
It’s great to be getting underway here at MIT, as a new graduate student in CMS and an RA in HyperStudio. One of my initial tasks for my HyperStudio research has been to get to grips with the exciting Artbot project, developed by recent alumni Desi Gonzalez, Liam Andrew, and other HyperStudio members, and to think about how we might take it forward.
The genesis of Artbot was the realisation that, though the Boston area is awash with a remarkable array of cultural offerings, residents lacked a comprehensive, responsive tool bringing together all of these experiences in an engaging way. This is the gap that Artbot sought to fill. A recent conference paper introducing the project outlined the three primary aims of Artbot:
- To encourage a meaningful and sustained relationship to art
- To do so by getting users physically in front of works of art
- To reveal the rich connections among holdings and activities at cultural institutions in Boston
With these aims in mind, the team built a highly sophisticated platform to serve up local art experiences in two ways: through a recommendation system responsive to a user’s expressed interests, and through a discovery system drawing on meaningful associations between different offerings. Both these processes were designed to be automated, building on a network of scrapers and parsers which allow the app to automatically categorize, classify, and create connections between different entities. The whole project was built using open-source software, and can be accessed via artbotapp.com in mobile web browsers.
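That scrape–parse–link pattern can be sketched in miniature. In the toy example below, a crude regular-expression “entity extractor” stands in for parserbot’s real NLP services, and two events are linked because their descriptions mention the same entity. All function names and sample data here are illustrative, not Artbot’s actual code.

```python
import itertools
import re

def extract_entities(text):
    """Toy entity extractor: treat runs of capitalized words as candidate
    entities. (Artbot itself delegates this to parserbot's NLP services.)"""
    return set(re.findall(r"(?:[A-Z][a-z]+ )*[A-Z][a-z]+", text))

def link_events(events):
    """Connect any two events whose descriptions share at least one entity."""
    entities = {e["title"]: extract_entities(e["description"]) for e in events}
    links = []
    for a, b in itertools.combinations(events, 2):
        shared = entities[a["title"]] & entities[b["title"]]
        if shared:
            links.append((a["title"], b["title"], shared))
    return links

# Hypothetical scraped listings from two venues
events = [
    {"title": "Impressionism Today",
     "description": "Works by Claude Monet and friends."},
    {"title": "Water Lilies Revisited",
     "description": "A study of Claude Monet in Boston."},
]
links = link_events(events)
```

The real system replaces the regex with proper named-entity recognition, but the linking logic, finding events that share extracted entities, follows this same shape.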
I’ve spent some time getting first-hand experience with Artbot as a user, and several things stick out. First, and most importantly: it works! The app is instantly immersive, drawing the user in through its related and recommended events feeds. Experiencing art is typically a subjective and self-directed process, and the app succeeds in mimicking this by nudging rather than pushing the user through an exploration of the local art scene.
Second, it is interesting to note how the app handles the complexity of cultural events and our varied interest in them. On one level, events are by definition fixed to a given time and place (even when they span a long period or multiple venues). Yet on another level, a complex package of social, cultural, and practical cues usually governs whether we actually want to attend any particular event. This is where the app’s relation and recommendation systems become really useful, drawing meaningful links between events to highlight those that users are more likely to be genuinely interested in but may not have searched for or otherwise come across.
Finally, the successful implementation of the app for Boston’s art scene led us to think about the different directions we might take it going forward. In principle, although the app currently only scrapes museum and gallery websites for event data, the underlying architecture for categorization and classification is culturally agnostic, suggesting the possibility for a wider range of local events to be included.
The value of such a tool could be immense. It’s exciting to imagine a single platform offering information about every music concert, sporting event and civic meeting in a given locality, enabling residents to make informed choices about how they might engage with their community. But this is crucially dependent on a second new component: allowing users to enter information themselves, thus providing another stream of information about local events. As such, we’re proposing both a diversification of the cultural coverage of the app, but also a democratisation of the means by which events can be discovered and promoted. We’ve also given it a new, more widely applicable name: Knowtice.
This move towards diversification and democratisation chimes with the broader principles of the platform. ‘Parserbot’ – the core component of Artbot which performs natural language processing and entity extraction of relevant data – is open source, and therefore could in future allow communities other than our launch locality Boston to adopt and implement it independently, shaping it to their own needs. At root, all events require some basic information: a time and date, a location, a purpose, and people to attend. This data is standardisable, making it possible to collect together information about a wide array of events in a similar format. Yet despite these structural similarities, in substantive terms no two events are ever the same, which is why we are committed to providing a platform that facilitates distinctiveness, letting communities express themselves through their events.
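That basic, standardisable information can be captured in a simple shared schema, roughly like the sketch below. The field names and sample event are illustrative assumptions, not Knowtice’s actual data model; the point is that one compact record shape can describe a concert, an exhibition, or a civic meeting equally well.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class LocalEvent:
    """Minimal standardised record for any community event."""
    title: str
    starts_at: datetime
    location: str
    purpose: str  # e.g. "exhibition", "concert", "civic meeting"
    tags: list = field(default_factory=list)  # free-form, community-supplied

# A hypothetical user-submitted event
concert = LocalEvent(
    title="Open Rehearsal",
    starts_at=datetime(2015, 10, 10, 19, 30),
    location="Jordan Hall, Boston",
    purpose="concert",
    tags=["music", "free"],
)
```

The structured fields make events comparable and searchable across categories, while the free-form tags leave room for the distinctiveness each community brings to its own events.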
We recently entered the Knight Foundation’s News Challenge with a view to taking the app forward in these new directions. You can view our submitted application (and up-vote it!) here. As we state in our proposal, we think that there’s tremendous potential for a tool that helps to unlock the cultural and social value of local activities in a way that engages and enthuses the whole community. We plan to build on the firm foundations of Artbot to create a social, sustainable, open-source platform to accomplish this broad and bold ambition. Keep checking this blog to find out how we get on!
Insights on “Collaborative Insights”: Four Lessons from the Digital Annotation Workshop
By Andy Stuhl on March 19, 2015
On January 23, 2015, HyperStudio hosted a workshop that convened more than seventy educators and technologists to discuss the future of annotation. “Collaborative Insights through Digital Annotation: Rethinking the Connections between Annotation, Reading & Writing” drew thoughtful perspectives on the opportunities and challenges facing HyperStudio’s Annotation Studio and pedagogical tools more broadly. The workshop’s dynamic combination of formal panel conversations and crowdsourced “unconference” breakout sessions allowed particular topics of interest to emerge as flashpoints over the course of the day; these themes became the organizing basis for the closing remarks delivered by HyperStudio research assistants Andy Stuhl and Desi Gonzalez.
Copyright
One issue that arose over and over was the question of copyright. What kinds of texts can educators share on Annotation Studio? In our terms and conditions, we ask users of Annotation Studio to respect intellectual property and not post materials that violate copyright. But the question of what constitutes fair use for educational purposes is itself difficult to answer. However, there are a few guidelines that one might want to follow. A useful guide has been put together by the Association of Research Libraries specifically for faculty & teaching assistants in higher education.
When digital annotation tools are used for public humanities projects, these questions become all the more pressing. During a breakout session on using annotations in public humanities projects, Christopher Gleason and Jody Gordon of the Wentworth Institute of Technology shared their digital project on the legacy of former Boston mayor James Curley. As a part of the project, Gleason and Gordon asked students to annotate a historical text describing Curley’s mansion near Jamaica Pond. The student-scholars added comments that would better help them understand the original interiors of the house, complete with definitions and images of historical furnishings. This project stressed a recurring question for Annotation Studio: how do we best deal with issues of copyright—not just of the original text, but also of the content with which the text is annotated? The Annotation Studio team is exploring ways to simplify the addition of metadata, including copyright information to media documents used in annotations.
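One possible shape for that metadata, attaching rights information to each media item used in an annotation, is sketched below. This is purely illustrative; it is not Annotation Studio’s actual data model, and every name and value in it is hypothetical.

```python
# Hypothetical annotation record with rights metadata on the attached media
media_annotation = {
    "annotation_text": "A period photograph of the mansion's drawing room.",
    "media": {
        "url": "https://example.org/images/curley-drawing-room.jpg",
        "rights": {
            "holder": "Example Historical Society",  # hypothetical holder
            "license": "CC BY-NC 4.0",
            "source": "https://example.org/collections/1234",
        },
    },
}

def rights_statement(annotation):
    """Render a short attribution line from the rights metadata."""
    r = annotation["media"]["rights"]
    return f"{r['holder']}, {r['license']} ({r['source']})"
```

Carrying the rights fields alongside the media itself would let a tool display (or audit) attribution automatically wherever the annotation appears.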
Pedagogy first, tool second
Both Ina Lipkowitz and Wyn Kelley have used Annotation Studio in multiple classes in the Literature Department at MIT. But the kinds of texts they teach—from classic novels to recipes to the Bible—and the ways in which they and their students annotate differ wildly. When reading entries from the medieval cookbook Forme of Cury, annotations might be used to translate Middle English words; in Frankenstein, students might share anything from personal impressions to interpretations of the text.
Annotation Studio was built as an easy-to-use web application with a core functionality: the collaborative annotation of texts. This simple piece of software, however, has yielded a multiplicity of affordances. It’s not the tool that determines how texts are used in the classroom, but rather the texts that determine the tool: we implement new features based on how educators hope to teach texts to their students, and educators constantly find new strategies for using collaborative annotation in their classrooms.
Advancing technical and scholarly literacy together
The workshop demonstrated how annotation is a perfect case study for bridging the readerly, writerly, and technical skill development central to the digital humanities. In the first panel, Mary Isbell documented how reading and writing assignments and the work of learning and troubleshooting digital tools can be mutually reinforcing components of a DH curriculum: by factoring space for learning and troubleshooting these tools into the course plan, she has found that formerly stressful encounters with software become opportunities to engage with and adapt the technical piece of the puzzle. This type of work often includes putting different tools and different types of media into conversation with one another, as Annotation Studio’s multimedia annotation shows. Through these uses, students, as HyperStudio’s Executive Director Kurt Fendt noted, come to “think across media” and thereby expand their understanding of how meaning is constructed and conveyed differently through different media.
In the Teaching Annotation breakout session, participants brainstormed ways to create new kinds of assignments that integrated the new affordances of digital tools with existing pedagogical goals. This conversation included suggestions about directing students to turn to one another for technical aid in using these tools—this conversation, in turn, was part of a larger one about challenging notions of expertise and conceptual categories in the classroom. This subtle back-and-forth between technical and scholarly engagement offers instructors and students alike new ways to expand and combine their skill sets.
Voices in (and voices of) annotation
Much as digital annotation recasts and recombines different kinds of expertise from different sources, the voices of those annotating and of those annotated are also put into a dynamic exchange. The notion of voices cropped up at the center of insights both about annotation’s potential in the classroom and about considerations we should carry forward in refining our approaches to it. In his keynote, John Bryant demonstrated how annotation and revision can help expose the multiple voices of a singular author, giving scholars a more nuanced access to that author’s voice and to the process of critical thinking via that voice. Panelists including Suzanne Lane and Wyn Kelley touched on how environments like Annotation Studio can put students’ voices in the same plane as the authorial voices they study. Co-mingled voices of informal debate, of traditional student roles, and of teacherly authority can democratize the learning space and inspire confidence; they can also, as participants noted, require a careful negotiation and reassertion of pedagogical roles in order to advance a constructive learning conversation.
These opportunities and challenges are at the foreground of HyperStudio’s design process, as Lead Web Applications Developer Jamie Folsom described, in building more writing-focused features that will help students transform their reader voices into authorial voices. More broadly, the theme of voices opened to all participants exciting ways to think about the project of helping students discover, build, and reinvent their own scholarly voices—a project in which, as the workshop made clear from many angles, annotation has always been and continues to be a very powerful method.