Blog

    Renovating Artbot

    By Nikhil Dharmaraj on June 20, 2018

    Hello! Nikhil here – I’m having a blast staying in Cambridge on my own this summer so far, and I’m thrilled to be joining the HyperStudio development team as an intern! My work this summer will concentrate on Artbot. As a recap, Artbot is a mobile website (www.Artbotapp.com) that serves as an art exhibit recommendation system for users in the Boston area; it is built on top of parserbot (https://github.com/hyperstudio/parserbot), an open-source natural language processing tool developed at HyperStudio that performs high-level entity extraction.

    Artbot was originally designed as a mobile website by HyperStudio Research Assistants Liam Andrew and Desi Gonzalez to serve as a personalized art recommendation system. The concept and approach were laid out in a paper published at Museums and the Web (https://mw2015.museumsandtheweb.com/paper/playful-engineering-designing-and-building-art-discovery-systems/). Currently, Artbot sources exhibitions from seven art museums in the Boston area, and it employs several natural-language-processing and entity-extraction services to parse and link related art exhibitions.

    This summer, my goals for Artbot fall into a three-phase process: (1) improving the web scrapers, (2) porting Artbot to iOS, and (3) adding social capabilities.

    First, I hope to fix the current bugs in Artbot. The foundation of Artbot is scraperbot, a program that scrapes various museum websites to pull exhibition listings; those exhibitions are then run through a chain of four entity-extraction and tagging services: Stanford NER, DBPedia, OpenCalais, and Zemanta. The scrapers are built with BeautifulSoup, a Python library for extracting data from the HTML content of a given website. Each scraper is custom-designed for one of the museums currently in Artbot’s system, tuned to the format and structure of that museum’s webpages. The trouble is that these webpages change: the static scraper code written a while ago no longer parses some current museum sites correctly, because their page structures have since been updated or redesigned. Fixing this is largely a manual task—the BeautifulSoup instructions must be hand-modified to fit the new structure of each affected page (for example, the DeCordova museum page: https://decordova.org/). Given how frequently museums remodel their websites, maintaining static scraper code may remain something of a headache, but for now we hope that a broad pass through the BeautifulSoup instructions in scraperbot will eliminate scraper bugs for the foreseeable future. With extra time, we also hope to add more web scrapers to expand Artbot’s reach to other museums in the Greater Boston area.
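
    To make the maintenance problem concrete, here is a minimal sketch of the kind of per-museum scraper scraperbot chains together. The URL and CSS selectors are hypothetical stand-ins, not scraperbot’s actual code; in practice each selector has to be re-checked whenever a museum redesigns its pages.

    ```python
    # Sketch of a per-museum exhibitions scraper (selectors and URL are illustrative).
    import requests
    from bs4 import BeautifulSoup

    def scrape_exhibitions(url="https://example-museum.org/exhibitions"):
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        exhibitions = []
        for item in soup.select("div.exhibition"):   # this selector breaks when the site is redesigned
            title = item.select_one("h3")
            dates = item.select_one(".dates")
            exhibitions.append({
                "title": title.get_text(strip=True) if title else None,
                "dates": dates.get_text(strip=True) if dates else None,
            })
        return exhibitions
    ```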

    Once the scrapers are fixed and the bugs are eliminated, my second — and broader — goal for the summer is converting Artbot into a native iOS app, downloadable from the App Store. Currently, Artbot is a mobile website: by visiting www.Artbotapp.com on a smartphone and saving it to the home screen, users can treat it as a quasi-app. I hope to make mobile Artbot a concrete reality by porting it to a native iOS app. Two years ago, I completed an iOS App Development track at Make School Summer Academy in Silicon Valley; since then, I have built a portfolio of iOS apps, all written natively in Swift using Xcode. I hope to make Artbot fully functional as an iOS app by porting the current code (written in a combination of Python, JavaScript, and Ruby on Rails) into an Xcode-compatible codebase. Most of the UI design will stay consistent; the task is largely a matter of translation and Xcode proficiency.

    Finally, once Artbot is available both as a website and as a native iOS app, we will work on the final summer deliverable: adding social capabilities. To be sure, Artbot has a simple and specific goal — to recommend art exhibits and galleries and to illuminate serendipitous connections in artwork — and it is by no means meant to become a social platform in and of itself. Rather than making Artbot a vessel of social interaction, we believe that linking the app to existing social media will boost use and improve the experience. Examples would include “Tweet,” “Instagram,” and “Facebook” buttons that let users share exhibits and galleries they have found in the app. Clicking the Tweet button, for instance, would pull up a suggested tweet that might look like this: “I just found XYZ exhibit on the @DeCordovaSPandM! Check it out :) #Artbot” By adding these core social features, we hope to give users a positive channel to share their discoveries without distorting Artbot’s original purpose and platform.
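
    As a rough sketch of how a Tweet button might work, the suggested text can simply be packed into Twitter’s standard web-intent URL. The helper below is illustrative only, not code from Artbot itself.

    ```python
    # Build a pre-filled tweet link in the style of the example above.
    from urllib.parse import urlencode

    def tweet_intent_url(exhibit, museum_handle):
        text = f"I just found {exhibit} on the {museum_handle}! Check it out :) #Artbot"
        return "https://twitter.com/intent/tweet?" + urlencode({"text": text})

    print(tweet_intent_url("XYZ exhibit", "@DeCordovaSPandM"))
    ```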

    Indeed, I’m very excited about Artbot’s future and growth this summer! With our three-step plan, I think we can truly advance its functionality and enhance its mission. Stay tuned for more updates!

     


    Taking Artbot Forward

    By Josh Cowls on October 6, 2015

    It’s great to be getting underway here at MIT, as a new graduate student in CMS and an RA in HyperStudio. One of my initial tasks for my HyperStudio research has been to get to grips with the exciting Artbot project, developed by recent alumni Desi Gonzalez, Liam Andrew, and other HyperStudio members, and to think about how we might take it forward.

    The genesis of Artbot was the realisation that, though the Boston area is awash with a remarkable array of cultural offerings, residents lacked a comprehensive, responsive tool bringing together all of these experiences in an engaging way. This is the gap that Artbot sought to fill. A recent conference paper introducing the project outlined the three primary aims of Artbot:

    1. To encourage a meaningful and sustained relationship to art
    2. To do so by getting users physically in front of works of art
    3. To reveal the rich connections among holdings and activities at cultural institutions in Boston

    With these aims in mind, the team built a highly sophisticated platform to serve up local art experiences in two ways: through a recommendation system responsive to a user’s expressed interests, and through a discovery system drawing on meaningful associations between different offerings. Both these processes were designed to be automated, building on a network of scrapers and parsers which allow the app to automatically categorize, classify, and create connections between different entities. The whole project was built using open-source software, and can be accessed via artbotapp.com in mobile web browsers.

    I’ve spent some time getting first-hand experience with Artbot as a user, and several things stick out. First, and most importantly: it works! The app is instantly immersive, drawing the user in through its related and recommended events feeds. Experiencing art is typically a subjective and self-directed process, and the app succeeds in mimicking this by nudging rather than pushing the user through an exploration of the local art scene.

    Second, it is interesting to note how the app handles the complexity of cultural events and our varied interest in them. On one level, events are by definition fixed to a given time and place (even when they span a wide timespan or multiple venues.) Yet on another level, a complex package of social, cultural and practical cues usually governs the decision over whether or not we want to actually attend any particular event. This is where the app’s relation and recommendation systems really become useful, drawing meaningful links between events to highlight those that users are more likely to be genuinely interested in but may not have searched for or otherwise come across.

    Finally, the successful implementation of the app for Boston’s art scene led us to think about the different directions we might take it going forward. In principle, although the app currently only scrapes museum and gallery websites for event data, the underlying architecture for categorization and classification is culturally agnostic, suggesting the possibility for a wider range of local events to be included.

    The value of such a tool could be immense. It’s exciting to imagine a single platform offering information about every music concert, sporting event and civic meeting in a given locality, enabling residents to make informed choices about how they might engage with their community. But this is crucially dependent on a second new component: allowing users to enter information themselves, thus providing another stream of information about local events. As such, we’re proposing both a diversification of the cultural coverage of the app, but also a democratisation of the means by which events can be discovered and promoted. We’ve also given it a new, more widely applicable name: Knowtice.

    This move towards diversification and democratisation chimes with the broader principles of the platform. ‘Parserbot’ – the core component of Artbot which performs natural language processing and entity extraction of relevant data – is open source, and therefore could in future allow communities other than our launch locality Boston to adopt and implement it independently, shaping it to their own needs. At root, all events require some basic information: a time and date, a location, a purpose, and people to attend. This data is standardisable, making it possible to collect together information about a wide array of events in a similar format. Yet despite these structural similarities, in substantive terms no two events are ever the same, which is why we are committed to providing a platform which facilitates distinctiveness, letting communities express themselves through their events.
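
    For illustration, a standardised event record of the kind described above might look like the sketch below. The field names are hypothetical, not Knowtice’s actual schema; the free-form tags are one way a shared format can still leave room for local distinctiveness.

    ```python
    # A sketch of a standardisable community-event record (illustrative fields only).
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class Event:
        title: str
        start: datetime          # the time and date
        location: str
        purpose: str             # e.g. "concert", "civic meeting", "gallery opening"
        organizer: str
        tags: List[str] = field(default_factory=list)   # room for community-specific flavour

    example = Event(
        title="Neighborhood planning forum",
        start=datetime(2015, 11, 3, 19, 0),
        location="Cambridge Public Library",
        purpose="civic meeting",
        organizer="City of Cambridge",
        tags=["planning", "open to all"],
    )
    ```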

    We recently entered the Knight Foundation’s News Challenge with a view to taking the app forward in these new directions. You can view our submitted application (and up-vote it!) here. As we state in our proposal, we think that there’s tremendous potential for a tool that helps to unlock the cultural and social value of local activities in a way that engages and enthuses the whole community. We plan to build on the firm foundations of Artbot to create a social, sustainable, open-source platform to accomplish this broad and bold ambition. Keep checking this blog to find out how we get on!


    Old News: Digital newspaper archives at DH2014

    By Liam Andrew on August 27, 2014

    Books and manuscripts are an archivist’s bread and butter, respectively. Librarians have honed techniques for storing, maintaining, and retrieving their contents for millennia—go into any stack in the library, organized by call number, for ample evidence. But newer media artifacts often don’t fit into old ways of storing and finding information. Digital media brings this problem into full relief, but centuries ago, the newspaper might have been the modern archivist’s first challenge.

    Today, archives face the challenge of digitizing their collections, an issue of particular importance for us at HyperStudio, as our research focuses on the potential for digital archives to provide new opportunities for collaborative knowledge creation. For archivists, the digitization of newspapers raises unique questions when compared to their traditional stock. At the DH2014 conference in Lausanne, Switzerland, one panel in particular addressed historical newspaper digitization head-on.

    Newspapers are rich archival documents, because they store both ephemera and history. The saying goes that “newspapers are the first draft of history,” but not all news becomes history. In a typical paper, you might find today’s weather sitting next to a long story summarizing a major historic event; historians have traditionally been more interested in the latter. Journalists sometimes divide these types of news into “stock” and “flow”. Flow is the constant stream of new information, meant for right now (think of your Twitter feed). Stock is the “durable stuff,” built to stand the test of time (for instance, a New Yorker longread).

    For archivists, everything must be considered “stock”: stored forever. Some historians may be in search of ephemera in an effort to glean insight from fragments of local news snippets, advertisements or classifieds—so everything is of potential historical importance. The Europeana Newspapers project has digitized over 2 million pages with the help of a dozen key partner libraries around Europe, but by their calculations, 90% of European culture is not digitized. The project anticipates reaching 10 million records by 2015, along with metadata for millions more, but it is still a small fraction of Europe’s newspapers.

    It is also no surprise that many biases exist even in this wide net of 10 million records. The 10% of culture that is digitized generally consists of culture’s most well-known and well-funded fragments. The lamentable quality of OCR (Optical Character Recognition—the technology that turns scans into searchable text) likewise means that better image scans lead to better discovery. Moreover, groups like Europeana must work across dozens of countries, languages, and copyright laws; some of these will inevitably be better represented and better funded than others. So it seems you’re much more likely to find a major piece in a highbrow English paper than a blurb in the sports section of an obscure Polish daily.

    Even taking as a given that everything is potentially important, newspapers present a unique metadata challenge for archivists. A newspaper is a very complex design object with specific affordances; Paul Gooding, a researcher at University College London, sees digitized newspapers as ripe for analysis due to their irregular size and their seriality. A paper’s physical appearance and content are closely linked together, so simply “digitizing” a newspaper changes it massively, reshaping a great deal of context.

    Seriality and page placement also extend the ways in which researchers might want to query the archive. For some researchers, placement will be important (was an article’s headline on the first page? Above or below the fold? Was there an image, or a counterpoint article next to it?). Others could be examining the newspaper itself over time, rather than the contents within (for instance, did a paper’s writing style or ad placement change over the course of a decade?) Still others may be hoping to deep-dive into a particular story across various journals. Each of these modes of research requires different data, some of which is remarkably difficult to code and store.
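
    As a hypothetical illustration of how much placement data a single digitized article can carry, a record might need fields like the following (these are illustrative, not any project’s actual schema):

    ```python
    # Illustrative placement-aware metadata for one digitized newspaper article.
    article_record = {
        "paper": "Example Daily",
        "date": "1894-05-12",
        "page": 1,
        "above_fold": True,
        "column": 3,
        "has_image": False,
        "adjacent_items": ["weather table", "shipping notices"],
        "full_text": "...",   # OCR output; quality varies with the scan
    }
    ```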

    In order to learn more about how people use digitized newspaper archives, Gooding analyzed user web logs from Welsh Newspapers Online, a newspaper portal maintained by the National Library of Wales, hoping to gain insight from users’ behavior. He found that most researchers were not closely reading the newspapers page by page, but instead searching and browsing at a high level before diving into particular pages. He sees this behavior as an accelerated version of the way people used to browse through archives—when faced with boxes of archived newspapers, most researchers do not flip through pages, but instead skip through reams of them before delving in. So while digital newspapers do not replace the physical archive, they do mostly mimic the physical experience of diving into an archive; in Gooding’s words, “digitized newspapers are amazing at being digitized newspapers.” Portals like Welsh Newspapers Online are not fundamentally rethinking archive access, but they certainly let more people access it.

    The TOME project at Georgia Tech is aiming to rethink historical newspaper analysis from a different angle. Instead of providing an interface for qualitative researchers to dive in, TOME hopes to facilitate automatic topic modeling and entity recognition, to quickly get a high-level glance of a vast archive with quantitative methods. They are beginning with a set of 19th-century American newspaper archives focused on abolition. The project simplifies statistical analysis tools into a visually compelling interface, but at the risk of losing the context that seriality and page placement provide.
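
    A minimal sketch of the kind of automatic topic modeling TOME builds on is shown below, using scikit-learn; the three snippets are invented stand-ins for OCR’d newspaper text, not TOME’s data or code.

    ```python
    # Toy topic modeling over invented newspaper snippets.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "abolition society meeting petition congress",
        "cotton market prices shipping arrivals",
        "abolition speech lecture hall crowd",
    ]
    vectorizer = CountVectorizer()
    counts = vectorizer.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

    terms = vectorizer.get_feature_names_out()
    for i, topic in enumerate(lda.components_):
        top_terms = [terms[j] for j in topic.argsort()[-4:]]
        print(f"topic {i}: {top_terms}")
    ```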

    Perhaps the biggest challenge is how to present such a vast presence — and such a vast absence — to historians, curious researchers and individuals, all of whom may be after something slightly different. Where Gooding divided queries into three types — “search,” “browse,” and “content” — the TOME group follows John Tukey’s divide between “exploration” and “investigation”—or those who know what they want, and those who are looking for what they want. A good portal into a newspaper archive requires all of these avenues to be covered, but it remains to be decided how best to turn news into data, to visualize troves of ephemera, and to represent absence and bias.

    Important books and manuscripts — the “great works” that line history books — tend to present a polished and completed version of events. Newspapers offer another angle into history, where routines, patterns, and debates are incidentally documented forever. Where a book is usually written for posterity, the newspaper is always written for today, reminding the archive diver of history’s unprepared chances and contingencies. The historians who mine old newspapers — and the archivists who enable them — have many new digital tools at their disposal to unearth promising archives, but much effort remains to fairly represent news archives, and determine how we might best use them.


    I Annotated

    By Liam Andrew on May 13, 2014

    The first thing that struck me about I Annotate 2014 was the setting: unlike the standard stuffy, windowless conference hall, the event was held at the Fort Mason Center in San Francisco, a historic, bright, and beautiful space overlooking the Golden Gate Bridge. Given the impeccable weather, the location was one consolation for staying indoors, but what really kept all of us there was a drive to annotate the world’s knowledge.

    Organized and run by Hypothes.is, the second annual conference demonstrated how far the annotation community has come in just a year, making inroads in a wide variety of industries and research groups. Whenever you find Wolfram Research next to Rap Genius on a conference attendee list, you know it’s going to be interesting. Attendees represented research labs, startups, foundations, and organizations. They showcased tools, tricks and platforms ranging from HyperStudio’s own Annotation Studio, to Harvard’s edX and H2O, to semantic initiatives at the BBC, Financial Times, and the OpenGov Foundation. Scores of other projects and tools came from academia and industry, such as Rhizi and PeerLibrary. The W3C was also in attendance, in their effort to further the incorporation of annotation into the next generation of web standards. Based on the wide range of attendees and uses, they have their work cut out for them.

    Most conference presenters did not try to pose a singular, overarching definition of annotation, instead showcasing their own projects and related examples. To me, this seemed wise; otherwise, we might have argued for days about what annotation is. When we talk about annotation, we are talking about many different practices couched in one ambiguous term. We were all gathered to advance the ways in which media can be published and discussed online, whether in the humanities, sciences, finance, law, or rap music. Each of these fields has different uses for annotation, and the one thing that we all had in common was a single word with many meanings.

    It does seem clear that annotation tools provide a way to make online texts “read-write” rather than just “read.” Whether providing peer-review of scientific texts or analyzing your favorite rap lyrics, when you annotate you are providing feedback, complicating communication on a technical as well as social level. You are also forging connections between texts, making content both more discoverable and more nuanced. Much discussion revolved around whether annotations themselves should be annotatable, or even publishable (bringing up questions of copyright). Is annotating an archival act, or a discursive one? This adds a complex layer to the web, but gives unprecedented priority and weight to everyday users, and encourages more focused interaction with a text than a simple comment or published response.
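
    For a sense of what an annotation looks like as structured data, here is a rough sketch loosely in the spirit of the web annotation work the W3C is standardizing; the exact fields are illustrative rather than normative.

    ```python
    # Illustrative annotation record: a body (the comment) attached to a target (the text span).
    annotation = {
        "type": "Annotation",
        "creator": "student@example.edu",
        "created": "2014-05-13T10:30:00Z",
        "body": {
            "type": "TextualBody",
            "value": "This passage echoes the argument in chapter 2.",
        },
        "target": {
            "source": "https://example.org/essays/essay1.html",
            "selector": {"type": "TextQuoteSelector", "exact": "the annotated passage"},
        },
    }
    ```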

    There was also much discussion about possible new frontiers in annotation. How can we give people the ability to annotate images, audio, video, and scientific data? What about more complex media, like video games, or even real-world experiences? Does this stray too far from what “defines” an annotation? Some naysayers suggested what couldn’t or shouldn’t be annotated (perhaps we can go too far in archiving and explaining). Others claimed that the focus was in the wrong place: perhaps an annotation platform should be framed as a social network, with precedence placed on building communities rather than technologies.

    In the end, all of these claims are probably true for some of the platforms and less applicable to others. There was a wide breadth of tools in many disciplines, using annotation in myriad different ways—the conference was exploring a technique and architecture rather than an industry, a horizontal instead of a vertical. The relevance of any given session varied as a result, but the event stayed focused and was utterly unique. Conference breakout sessions ranged from the technical to the philosophical, and allowed those with aligned interests to interact after the presentations.

    The proceedings were followed by a two-day hackathon, where a handful of coders worked on expanding the open-source Annotator library (which also powers Annotation Studio) and its community to new ends. It was a fitting conclusion to the conference: after talking, it was time to make.

    Liam Andrew is a graduate student in Comparative Media Studies and a research assistant in HyperStudio.

     


    Recommending Art, Suggesting Culture

    By Liam Andrew on November 25, 2013

    Think of the word “algorithm” and you might picture a data scientist crunching numbers in front of a terminal, analyzing functions and equations that you can’t begin to understand. If they’re building models of weather systems, you might be right (I can’t help you there). But recommendation systems are another story. The algorithms can be complex, but the output tends to be very simple: a list of some news articles, movies, or other content you might like.

    Recommendation science is not rocket science, but it has always been the realm of the engineer. There’s no doubt that a good engineer can hone and chisel a recommendation algorithm to near-perfection (whatever that means), but first you have to choose the type of stone. Each recommendation system has certain methods and assumptions baked into it, and determining what kinds of inputs belong can be more of an art than a science.

    Recommendation systems are generally divided into two types: collaborative filtering, and content-based filtering. Content-based filtering focuses on the product itself, like a traditional library classification system. Netflix provides one example: after culling their records down to several thousand movies and TV shows, they hire freelance film buffs to tag content with delightfully contrived categories like “Mind-Bending Romantic Foreign Movies” or “Understated Detective TV Shows” (though they also match you to similar users). While these are more fun than generic, automated tags (and Netflix deserves credit for using humans), these categories are still boxes; they place cultural products into certain discourses and implicitly exclude others. Tagging systems are inherently stale and lacking in dynamism, and folksonomies aren’t always feasible and come with their own problems.
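
    A toy content-based filter can be sketched in a few lines: score each title by how many of its tags overlap with the tags of things a user already liked. The titles and tags below are invented for illustration.

    ```python
    # Toy content-based filtering: rank items by tag overlap with a user's liked tags.
    def content_score(item_tags, liked_tags):
        return len(set(item_tags) & set(liked_tags))

    catalog = {
        "Mind-Bending Foreign Film": ["foreign", "surreal", "romance"],
        "Understated Detective Show": ["crime", "slow-burn"],
    }
    user_likes = ["surreal", "foreign", "art-house"]

    ranked = sorted(catalog, key=lambda t: content_score(catalog[t], user_likes), reverse=True)
    print(ranked)
    ```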

    One attempt to sidestep classification is collaborative filtering, which looks at the user, their past behavior, and similar users or social networks for clues into what the user might like. Consider Amazon, which uses this model extensively; given the massive scale of products on offer, many from third parties, it is more manageable to leverage machine-learning algorithms that watch what you buy and browse than to attempt to infer the properties of thousands of new products a day. A user-history-based approach maximizes efficiency, but at the expense of variety, assuming that you want to keep seeing more of the same.
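
    By contrast, a correspondingly tiny user-based collaborative filter ignores the items’ properties entirely and recommends whatever the most similar user liked; the ratings here are invented for illustration.

    ```python
    # Toy user-based collaborative filtering: recommend what the nearest neighbor liked.
    def similarity(a, b):
        shared = set(a) & set(b)
        return sum(1 for i in shared if a[i] == b[i]) / len(shared) if shared else 0.0

    ratings = {
        "you":   {"item_a": 1, "item_b": 1},
        "user2": {"item_a": 1, "item_b": 1, "item_c": 1},
        "user3": {"item_a": 0, "item_c": 1},
    }
    nearest = max((u for u in ratings if u != "you"),
                  key=lambda u: similarity(ratings["you"], ratings[u]))
    suggestions = [i for i in ratings[nearest] if i not in ratings["you"]]
    print(nearest, suggestions)
    ```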

    If it’s looking to your social networks, it feeds you what’s already most popular, stifling individual preference and shepherding audiences into identical routes. If you were to sit around a table with “users like you” and start a conversation, would you rather have seen everything they’ve seen, or something a little different? If you’re all on the same pages, how would anyone bring anything new to a discussion? What about all the possible treasures out there that haven’t been discovered yet? Collaborative filtering may work when shopping for consumer products, but it creates a filter bubble for art and culture.

    The challenges and limitations behind each of these models are very different, and they point to a need to focus on the objects and users of recommendation systems, rather than on the algorithms. As taste plays more of a factor in recommendation, this becomes even more crucial. If you click away from a product on Amazon, maybe it’s because you didn’t like the price, the quality, or the reviews. Regardless, the company assumes something about you. It gets even more complicated with art or music. Art has ever-changing material, cultural, and discursive properties; which ones are most important to a given viewer? Do they like being challenged and broadening their horizons, or do they prefer staying in their comfort zone within a certain style or mood? If so, is it worth trying to change their mind?

    This brings up another variable that goes underserved when the focus is on the algorithm: what is the metric for a “successful” recommendation system, and how does that change from company to institution? Industry tends to give people more of what they want, with the end goal of a click or purchase. Cultural institutions like ours can break convention here, and at HyperStudio we hope to challenge users by making unique connections and new introductions, so long as users are ultimately delighted and informed. Unencumbered by industry demands like growth and scale, we can maintain a very different metric for success, and it could be unique to each project or user.

    We also hope to open up further discussion about these systems and their limitations. The extreme secrecy behind companies’ proprietary algorithms and the domain’s traditionally engineering bent make the recommendation system something of a black box. Given the extent to which automatic recommendations affect what we read or hear about, it’s important to understand what can go into them. HyperStudio and other open-source initiatives can play a role in making them more transparent: creation and discussion of recommendation algorithms could lead to insight about the decisions computers are making for us and their assumptions about what we want. Perhaps in our effort to improve them, we’ll discover some ways in which proprietary algorithms are failing us.

    Regardless, I should hope that people have a different relationship to art than to a product or piece of information, and cultural institutions should treat their audiences differently from companies. It’s important to devise recommendation systems that avoid reducing cultural objects to the level of products, and museum audiences to consumers. At the heart of the collaborative and content-based systems is the notion that more personalization leads to higher quality, and that existing networks, discussions and canons are there to be reinforced. These are meaningful and important signals, but they need not be the only ones. While quality is always important, categories should be fuzzy, as should networks of people; the most important signals are often the nodes that link them, and we hope to surface these new connections. When it comes to art and culture, looking past the current limitations of discovery will be vital for generating new ideas and conversations.

    Liam Andrew is a graduate student in Comparative Media Studies and a research assistant in HyperStudio.


    Creating Meaningful Art Experiences with Digital Tools

    By Desi Gonzalez on October 11, 2013

    Artist Sherrie Levine is best known for her 1979 series, After Walker Evans, in which she rephotographed Depression-era images by Walker Evans and presented them as her own. With this series, she posed questions about authorship and originality that would remain central to her art production: What is an original work of art? What is a copy?

    To me, Levine’s work hints at another interesting idea: How do we learn about art? Levine grew up and attended college in the Midwest, miles away from the mainstream art world. In a 1985 interview the artist revealed that her early experiences with art involved “seeing everything through magazines and books.” To create After Walker Evans, she didn’t take snapshots of the original Evans photographic prints, but instead photographed the images from an exhibition catalog. While museums and galleries allow us to experience art directly, much of what we learn about art is indirect. Most people can recognize da Vinci’s Mona Lisa and Vermeer’s Girl with a Pearl Earring without ever having traveled to Paris or the Netherlands.

    Today, in addition to magazines and books, we also learn about works of art through the internet. A spate of recent websites and mobile applications, such as Artsy, ArtStack, and Curiator, allow users to aggregate virtual collections of art. The missions behind such websites often have a democratic impulse, making works of art accessible on the web and transforming users into art collectors. As a museum educator, I am all for democratizing the art experience, but I wonder if we’re asking the right questions. What the internet affords us is a wealth of art images at our easy disposal, and web projects like Artsy allow us to cull through them in one place. But how, then, do we create meaningful experiences from these images? How do we get people to look closely, think critically, and engage in conversations with others about a work of art? Additionally, the majority of the works of art being shared, collected, followed, and tweeted on these platforms were intended to be experienced in person. While the internet can be a great tool for first forays into learning about art, it doesn’t replace witnessing an artwork firsthand. How can our online art dabblings connect us to onsite art experiences?

    These are some of the issues I’ve been thinking about as a new research assistant in HyperStudio. Digital media can provide new ways to discover art, but the best experiences are the ones that put you in direct contact with a work of art. And that’s why we’re starting local. I’m working with the HyperStudio team to develop a digital tool that provides an easy way for Boston-area residents to access information about artworks, exhibitions, and events. But we also want to be more than a mere listings or image-aggregating website, instead asking: what are the as-of-yet-unearthed art threads happening in the Boston area? For example, let’s say you see the print Under the Wave off Kanagawa by Hokusai at the Museum of Fine Arts and are immediately enamored. Our tool might connect you to a workshop on woodblock printing led by a local artist or inform you about a lecture at Harvard about Edo period Japan.

    The principle underlying this new project runs through many HyperStudio endeavors. For example, Annotation Studio is an open source web tool that aims to enhance the ways a student interacts with a text. A student can add multimedia annotations onto a text, search and link to other content, and ultimately engage in a more active reading of the text. The digital tool doesn’t overshadow the original text, but is instead an avenue to dig deeper into it. Like Annotation Studio, the goal of our new project is to privilege engagement with the art first.

    Over the next few months, we will be researching how digital tools might allow us to create meaningful art experiences throughout the Boston area. We have started by looking into what examples are already out there, from event listings to museum collection guides. Ultimately, we want to go from looking at images online to fostering rich, direct experiences with art. For Sherrie Levine, books and magazines were a medium through which she experienced art, far away from art world centers. Here in Boston, we’re lucky; we have many world-class museums and intellectual centers at our disposal. We hope that our tool will be a way to connect the community to these onsite experiences. Like Levine’s books and magazines, this project will be a first step to learning about art—but it won’t be the last.


    Reporting from DH 2013 at the University of Nebraska-Lincoln

    By Jason Lipshin on July 23, 2013

    As a scholar who subscribes to a “big tent” conception of digital humanities, I must admit that I was initially a bit nervous about attending DH 2013 at the University of Nebraska-Lincoln. As the annual international conference of the Alliance of Digital Humanities Organizations, DH 2013 is one of the oldest and most established conferences supporting emerging work in the field of digital humanities, but it also has its roots in old school “humanities computing.” For those who are unfamiliar with the history of the field, humanities computing is the predecessor to digital humanities in its current form and is often associated with work in text mining Shakespeare, using word frequencies to understand an author’s style, or experimenting with TEI to compare different aspects of a text. Although much of this work can be extremely interesting and rigorous in terms of its play with computational affordances, I consider myself much more oriented towards flavors of DH with roots in digital art, design, experimental “multimodal scholarship,” and digital media studies. The distinctions between these camps are, of course, getting fuzzier by the minute, but the history and reputation of the conference implicitly reinforces these boundaries. And if there's one thing I've learned in my short time on the conference circuit, it's that when you’re presenting, it certainly pays to know the composition of your audience.

    Upon arriving at the conference, however, I was pleasantly surprised to find scholars from many camps present and conversing in relative harmony. In addition to the humanities computing mainstays, there were also programmers using computer vision to identify dance techniques, literature scholars experimenting with sound art, and even historians using 3-D printing to better understand historical artifacts. Defining what is and what isn’t digital humanities in such an environment is increasingly difficult, but the flip side of this lack of definition is an exciting sense of openness and experimentation. Scholars were tinkering, hacking, failing, and experimenting with the affordances of non-textual forms, while always trying to keep in mind the implications of this work for asking traditional humanities questions.

    One of the more interesting talks that I attended was Jentery Sayers’ “Made to Make: Expanding Digital Humanities Through Desktop Fabrication.” As a self-described media archeologist, Jentery is very interested in the materiality of historical artifacts (particularly 19th and 20th century sound technologies) and has begun using 3-D printing in order to better understand this materiality. Leading a “maker lab” on humanities-oriented fabrication at the University of Victoria, Jentery has already begun producing a series of “maker kits for cultural history,” allowing, for instance, students or scholars interested in the history of radio or the telegraph to build their own. Drawing on recent work by Jonathan Sterne, Wendy Chun, Matt Kirschenbaum, and Jeffrey Schnapp, Jentery is interested in “how the hermeneutics of screwing around” with these technologies can fold back into theoretical reflection.

    Another fascinating talk that I attended was Michael Aeschbach’s “Dyadic Pulsations as a Signature of Sustainability in Correspondence Networks.” Although Michael’s talk fit much more easily into traditional DH paradigms than Jentery’s, I thought it was equally innovative for the way that it proposed a new direction for research in data visualization. At the core of Michael’s talk was the argument that data viz research has traditionally privileged spatial approaches like topological analysis and graph theory over the temporal. Temporal approaches like measuring response time in a communication exchange between two participants, Michael argues, can be extremely instructive in measuring the “sustainability” or “health” of a communication network. Although he took the long standing Usenet discussion group as his primary case study, he also mentioned that this approach could be applied to non-online phenomena – for instance, mapping the temporality of letter correspondences in Stanford’s “Republic of Letters” project. Many historians in the audience, including Brown’s DH librarian Jean Bauer, seemed to be extremely excited about this possibility.
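
    As a toy illustration of the temporal measure described in the talk, one can compute the gaps between replies in a two-person correspondence; the timestamps below are invented, and this is only a sketch of the general idea, not the paper’s method.

    ```python
    # Toy dyadic response-time measure for a two-person correspondence.
    from datetime import datetime

    exchange = [
        ("A", datetime(1750, 1, 2)),
        ("B", datetime(1750, 1, 20)),
        ("A", datetime(1750, 2, 15)),
        ("B", datetime(1750, 6, 1)),
    ]
    response_gaps = [
        (later - earlier).days
        for (s1, earlier), (s2, later) in zip(exchange, exchange[1:])
        if s1 != s2   # only count turns where the other person replies
    ]
    print(response_gaps)   # widening gaps might suggest a fading correspondence
    ```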

    I delivered my own talk, “Visualizing Centuries: Data Visualization and the Comedie Francaise Registers Project,” on Friday morning to a fairly packed crowd. As an overview of HyperStudio’s use of data visualization in order to understand a large archive of theatrical ticket sales, the talk seemed to strike a chord across many different disciplines, with positive feedback coming from theater historians, computer scientists, and statisticians alike. In particular, many questions following the talk seemed to address the unique dynamism of our data visualization tools and the ways that they allow the user to dynamically combine multiple parameters to help facilitate what we have termed an “exploratory research process.” One scholar was particularly struck by my notion of “combinatorial research” and how this process of combinatorial play might facilitate intellectual exploration, while many others were interested in the ways that our tools could become open source and generalizable enough to be applied to other kinds of data. Hearing such positive feedback was incredibly gratifying, especially after having worked on the project for almost a year.

    In all, I felt that attending DH 2013 was an extremely valuable experience, both in terms of promoting HyperStudio’s own work and learning about emerging work in the field. Although I’m still very new to the conference circuit, I’ve already had many wonderfully stimulating discussions, met many like-minded grad students and faculty, and received many encouraging words about my work. Certainly, all of the fault-lines and fissures, boundary work and growing pains of a field in transition are ever present. But so is the nascent sense of an ever expanding scholarly community.


    DH 2012, day two: What does it mean to be digitally engaged?

    By Ayse Gursoy on July 19, 2012

    I sat in on several sessions today, the first day of conference talks. In the morning I checked out the short paper session, and later I went to two long paper sessions. All of the talks were interesting, quick-paced (only ten minutes each for the short paper sessions, something to watch out for tomorrow!), and made me think of digital humanities in, if not a new, then a richer light. The Twitter stream (hashtag #dh2012) was lively throughout the day and offered many a chance to respond to the talks (only a few minutes for questions, unfortunately—though questions were many). Some of the key issues according to the Twitter stream: the constant issue of digitized versus born-digital projects, the openness of data and access, and the insufferable hardness of the chairs in the lecture halls (no, really).

    There were a few talks that resonated with my background and interests, and I’ll probably get to each of them in some way. But I’d like to focus on one of the long papers, and the questions of design, purpose, and audience that it made clear. The talk in question is Claire Ross’s presentation, “Engaging the Museum Space: Mobilising Visitor Engagement with Digital Content Creation”. While at first glance this might seem like an interesting topic that isn’t directly related to academic research in Digital Humanities, the central question was actually an extremely important one for any DH researcher: how do you get audiences engaged with digital content? Ross’s case study, the use of QRator at UCL’s Grant Museum of Zoology, offered one approach to this question. If you have a discussion forum tied to the object on display, accessible with a smartphone or tablet, then give your visitors a tablet and let them talk! We could go on for days about the actual mechanics of this approach or the validity of competing approaches such as short URLs and established social media sites. What I’d like to suggest, however, is that Ross’s talk asks us to reframe the question.

    It isn’t about engaging with digital content.
    It’s about engaging through digital content.

    We need to design systems that unobtrusively let people do what they are already good at doing, that step back and make conversations happen. I know this is hard. I know it might mean changing the definition of “conversation”, or expanding it a teensy bit. But I do believe this will impact digital engagement in a positive way.

    I’d like to offer the example of the games Demon’s Souls (From Software, 2009) and Dark Souls (2011). These games feature a unique and much-discussed multiplayer engagement in an essentially single-player game: the two games are always online (unusual for a single-player game), allowing other players to intrude upon your game. The games have been written about in many places and in better ways than I could manage right now. But there is one particular engagement that I think is highly relevant here: players can leave messages tied to in-game locations that appear to any player who accesses that location. Yes, it takes an active will to leave a message, but once that message is placed, any player, not just a player with the equivalent of a smartphone, can read it. The systems that we design for digital engagement with physical spaces need to, dare I say it, intrude upon the corporeal experience of the visitor. And doing this is hard. I don’t pretend to have a solution, but it’s something that’s been on my mind.


    Models for the Future Humanities

    By Whitney Anne Trettien on February 14, 2011

    Walking through MIT to reach HyperStudio’s home base, you pass man-sized aluminum tanks of cryogenic nitrogen and share elevators with lab-coated technicians escorting racks of test tubes. It can be a bizarre world for a humanities scholar; yet the scientific labs with which HyperStudio shares a building are increasingly put forward as a model for Digital Humanities work — in fact, for the Humanities, writ large.

    It’s easy to see why. Labs are collaborative environments where work, writing, and credit are distributed. Labs are also “hands on” and experimental: things are poked, prodded and pulled apart; objects are made; hypotheses are tested. And, unlike in the humanities, teaching and research aren’t strongly distinguished in labs, since students — even undergraduates — are often solicited to assist with experiments.

    But scientific labs aren’t the only model for future work in the humanities. During MLA, Kari Kraus pointed to the studio arts, emphasizing the role of process, building and design in evaluating Digital Humanities projects. I’m drawn to this idea not only for the kind of physical workspace it imagines humanists collaborating in, but for what it suggests about how humanities scholars should position themselves in relation to the world — namely, as individuals whose curiosity drives them to produce things that spark the curiosity of others.

    Last year, I began keeping a sketchbook in my office. Mentally, it’s the best addition I’ve ever made to my workspace. Whereas ruled lines demand writing, blank paper is an open field; within a month, I had already filled two books with everything from bored doodles to project designs mindmapping current research.

    Mixing a studio arts model with a commune-like living space in the 1950s, Black Mountain College also exemplifies a space where innovation, creativity and play were not opposed to a scholarly work ethic. So many of us — especially in Digital Humanities labs, where delivery deadlines loom large — are overscheduled, throttled by a calendar of back-to-back meetings that leave little time for productive thought, for serious play. (Bethany Nowviskie wrote beautifully on this in her blogpost on DH’s eternal September.) By loosening the reins on scheduling, institutions like Black Mountain College allowed for the kind of movement that oxygenates ideas, breathing new — and more productive — life into the lab.

    The commune living space of Black Mountain College provides another model for DH work. In From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism, Fred Turner traces the development of the web back to the ideals of mid-century counterculture — a history which pushes back against the (ongoing, increasing) commercialization of code and content on the web. Open source movements keep this link alive, illustrating the methods and means for collaborative work across distances. While many Digital Humanities projects are themselves open source, it would be worth, I think, collectively thinking through what these values mean for the methodologies we employ, as well as the ways and spaces in which we work together. It isn’t just the end result that should embody our ideals, but the many micro-habits and modes of production we participate in every day.

    The search for models isn’t just about the kinds of physical spaces we want to work in, but the identities — artists, creators, designers, playsmiths — we choose for ourselves. Like a personal identity, the search doesn’t end: it evolves.

    What other spaces inspire you to cross interdisciplinary boundaries? What changes in your methods of work have sparked more collaborative work?


    Digital Humanities vs. the digital humanist

    By Whitney Anne Trettien on April 26, 2010

    What does it mean to be a Digital Humanist?

    In Dave Parry’s widely circulated post-MLA2009 blog post, tauntingly titled “Be Online or be Irrelevant,” Parry argued that social media should be front-and-center in Digital Humanities:

    The more digital humanities associates itself with social media the better off it will be. Not because social media is the only way to do digital scholarship, but because I think social media is the only way to do scholarship period.

    Perhaps not surprisingly, this claim sparked fierce debate over the role, nature and future of digital scholarship. Who can claim to be a digital humanist? Do you have to have a PhD? How much coding do you have to know? Are humanities bloggers and twitterers participating in e-scholarship? At the root of it all: how do we (or do we not) want to delimit our community? (more…)
