HASTAC 2015

June 1, 2015

I just returned from the HASTAC conference at Michigan State–my first time among a larger community of folks who self-identify as “digital humanists” (or “digital liberal arts” scholars, depending on your semantic preferences).  It was fun to see what got people excited: from text mining and topic modeling, to time geography, gaming pedagogy, sonic body movement, and even a time capsule headed for the moon.

It was a generally very optimistic and energetic crowd, so I thought I’d frame my reflections in the spirit of looking forward, with optimism.  When I think ahead to ten years from now, some predictions…

“Digital Humanities” will again become “the Humanities”

Ten years hence, we’ll forget there was ever any other way of doing things.  Humanities will continue to be about studying the diverse forms of human culture, human ingenuity, and human thought.  Part of that study will naturally include using digital tools in diverse ways: as tools to support humanistic inquiry, as expressive media for artistic creation, and as useful foils to our own humanity that help us explore the recesses of what it means to be “human”.

Humanists will constantly be striving to make opaque tools more transparent and malleable

Currently, most of us who make use of software, the Internet, and other digital tools use them in a relatively “black box” way.  We give inputs and expect outputs.  We hope for results, and fear “the blue screen of death”.  Our digital tools are relatively opaque to us; we accept them as means to our ends.  (One way to test whether you’re using a tool as a means in this way is to ask yourself: How upset will I be if this breaks?  If your answer is “very upset”, then you’re using the tool primarily as a means.)

If humanists are doing their jobs ten years from now, they will help us examine our digital tools as ends in themselves–just as painting, or sculpture, or literature are manifestations of the human drive to create.  They will strive to make the digital tools that we now employ opaquely into more transparent objects for discourse and consideration.  This likely means that they will be learning some computer science as part of their humanistic training.  This does not mean they will become computer scientists, mind you, since computer science will also undoubtedly make great advances in ten years’ time–but humanists will know enough to be literate and able to compose meaningful code in service of humanistic ends.  They will also be formidable partners when working with computer science colleagues–holding them to high ethical standards, and challenging them to think in creative, critical and humane ways.

Humanists will be architects building their own worlds

Humanists will no longer settle for digital software Levittowns monolithically peddled to satisfy the latest flights of fancy.  Nor will they have to ask software people to “build them the world” each time they want to embark on a creative foray into digitally-supported inquiry.  Instead, they will claim their role as architects–ever more versed in the nuance and substrate of digital tools and how they can be constructed.  They may still enlist the aid of programmers to construct some of their grander homes of inquiry, but humanists themselves will lay the blueprints, and their tools will become custom-built houses of their own design.

Text will likely still be a favorite modality of theirs–old habits shouldn’t die–but other modalities will surface.  Humanists will become witty, reflective, critical, and mindful authors of code.  And they will read others’ code just as voraciously as they read others’ text articles.

Digital modes of inquiry will have crystallized

This is the prediction I have mixed enthusiasm about.  Right now, we’re in a very exciting, tinker-y space.  We’re swimming in what sometimes feels like an interestingly supersaturated saccharine bath: digital humanistic questions are suspended in a fun kind of quasi-articulation, and the substrate is thick with possibilities for digital lines of inquiry. But ten years hence, we’ll be rock candy.  We will, perhaps, have crystallized into interesting forms, but we’ll have more inertia around these forms.

So let’s enjoy this liminal space while it lasts, and let’s make sure we steer it creatively and ethically while we can, before it gets too entrenched.

And future self, if you’re reading this ten years from now (although I hope you have something better to do by then): remember today and the sense of vagueness, confusion, low-grade angst, but above all…possibility.  Today, there are relatively few dogmas when it comes to digital approaches to humanities scholarship.  Some of us are feeling the vague sense that “we must figure something out” or we’ll be “missing out on something important”.  It’s the same sense that accompanies an impending adventure.  Ten years from now, may we not forget this sense of adventure, and may we suspend our sense of having “figured things out” in favor of an ongoing hope for new possibilities.

Yet all experience is an arch wherethrough
Gleams that untravelled world, whose margin fades
For ever and for ever when I move.

(Ulysses, Alfred, Lord Tennyson)

May we grow stronger and more curious together.

So…you wanna be a geek?

November 13, 2014

I’ve embarked into the realm of computer geekdom as kind of a shadow figure.  (I actually think this is pretty common for women entering technology fields, but that’s another story…)  In any case, I know what it’s like to choose to stay in on a Friday night so I can fiddle with the drivers on a fresh operating system install–but I also know what it’s like not to do that.  I’ve never forged a virtual friendship, never owned an Xbox, and had some serious scruples about investing in a smart phone.  And yet, I’m a geek–and happy to proclaim that to the world.

I appreciate and value the fact that a lot of people in this world do a lot of things other than hang around with computers.  Some people even hate computers.  And yet, I’d like to make the case that a little bit of geeking out may be healthy–and necessary–for everyone.  In my experience, the more time I spend tinkering with computers and devices, the more agency I’ve felt over the ways I choose to fit technology into my life.  Geeking out helps mitigate computer anxiety.  Delving head-first into computer problems and coming back alive–or learning when to step away–has become hugely empowering for me.

I’ve noticed that there’s actually very little that separates geeks from n00bs.  It seems to be more of a stepwise transition into realms of ever-increasing confidence.  So, if you want to be a “geek” (and I hope you do!), here are a few things that I’ve noticed that might help catalyze the transition:

  1. Back it up.
    There’s no way around this.  You’ve just got to back up your stuff.  This frees you up to play around on your computer and get over your fear of breaking things (see #2).
  2. Get over your fear of “breaking it”.
    Seriously–get over it. This takes an almost “Buddhist turn” in attitude: accept that you do not need to indulge computer-related anxiety, accept that you have friends and problem-solving tools at your disposal (see #3), and have faith that all will be well in the end. Yes, it may take time.  No, it may not work right away. Sure, you may be frustrated–but you can also be fascinated. Geeks tend to see the “blue screen of death” as an opportunity to fiddle around and learn, not a reason to chuck your computer out the window.
  3. When in doubt: Google it!
    Don’t cry or run to the nearest coworker! The key to solving all of your computer woes is always at your fingertips! Just Google it, and start reading. It will take a while to learn the relevant terms to search–but I promise it’s no harder than your average high school French class. (And I actually suspect it’s a lot easier!)  A starting hint: if your computer throws you an error message, try copying and pasting the exact text of it into a Google search, and see what potential solutions turn up.
  4. Don’t accept defaults.
    In my experience, most geeks aren’t born with nerd glasses and a pocket protector. Instead, they start down the path to geekdom by making minor tweaks and improvements to the basic things they do on their devices and computers on a day-to-day basis. Start by installing some add-ons to your favorite web browser. (Don’t know what an “add-on” is, or a “web browser”? See #3.) Move on to installing some free software. Check out sites like CNET or How-To Geek to get suggestions and reviews.
  5. Don’t assume “nothing in life is free”.
    It is simply not true that everything involving computers needs to be expensive. That is an attitude that will just set you up to get scammed in life. There’s actually a lot of free, and cheap, and amazing stuff available, if you know where to look. Computers don’t have to be nearly as expensive as repair shops and software dealers make them out to be. Paying an arm and a leg for repair service or software products is for n00bs.
    Here’s a list of some proprietary software you might be using, and their cheap/free/open source* equivalents:
    – Microsoft Office –> LibreOffice
    – Adobe Photoshop/photo editor –> GIMP
    – Microsoft OneNote –> Evernote
    – Video/Media Player –> VLC media player
    *Don’t know what “open source” is? Google it!
  6. Learn basic HTML.
    A webpage is basically just a regular document–like anything else you’d write on your computer. The only difference is that it has a little extra information embedded in it to let your web browser (Firefox, Safari, Chrome, etc.) know how to display it for you. In fact, you can write your own basic webpage right now. So what are you waiting for? To create a webpage document, you’ll want to ditch Word and stick with a simple text editor that comes standard on most computers–Notepad on Windows, or TextEdit on a Mac (for TextEdit, first choose Format > Make Plain Text). Open up your basic text editor, and create a new document. In the document, type in (or copy and paste) the following:

    <h1>Your Title Goes Here</h1>
    <p>This is a paragraph. You can write anything you want in here. Make it long or short. Experiment. Try this text from Lewis Carroll, for example: ’Twas brillig, and the slithy toves did gyre and gimble in the wabe: All mimsy were the borogoves, and the mome raths outgrabe.</p>
    <h2>You Can Put a Smaller Heading Here</h2>
    <p>And now you can type more text. <span style="color: green">You can even make it green.</span> <span style="color: purple">Or purple.</span> <span style="color: red">Or red.</span></p>
    <p>You can also make a <a href="https://www.google.com">link to Google</a>.</p>
    

    When you’ve copied and pasted, save the document with whatever name you’d like–just make sure you tack on the extension “.html” at the end when you’re saving, and remember where on your computer you’re saving it to.  Then, open up a web browser, choose File > Open File, and select the file you just saved. Your document will open and display in your browser! It takes a little more work to get your new webpage to be accessible online to anybody–right now, your webpage is only saved locally on your computer. But the principle is the same: any webpage you look at is simply a variation of the type of document you just created. In fact, if you’re curious, you can hit CTRL + U while you’re browsing the Internet and view the HTML code for any page you’re looking at. Try it, and see how much you can recognize!
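
    One more optional step for the curious: most “real” webpages wrap content like yours inside a bit of standard boilerplate that tells the browser what kind of document it’s looking at. Here’s a minimal sketch–the title text is just a placeholder, so swap in whatever you like:

    <!DOCTYPE html>
    <html>
      <head>
        <!-- Whatever goes inside <title> shows up in the browser tab, not on the page -->
        <title>My First Webpage</title>
      </head>
      <body>
        <!-- Paste your headings and paragraphs from the example above in here -->
        <h1>Your Title Goes Here</h1>
      </body>
    </html>

    Save it the same way (with that “.html” extension), open it in your browser, and you’ll see your heading on the page and your chosen title up in the browser tab.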

  7. Make it political.
    Technologies aren’t just fun to geek out with–they are growing increasingly political. Cultivating a sense of agency surrounding the ways you choose to use or eschew certain technologies can become a political act. Private industries, the government, individual thinkers and activists–all will play an increasingly heated role in shaping our norms and policies surrounding privacy, online commercialization, distribution rights for creative works, equity of access to networked technologies, censorship, etc. These are things that have very real consequences for our day-to-day lives. It’s already happening. History has taught us that old societal power structures have a way of replicating themselves in new domains–and online life is no exception. The Internet is still a very new phenomenon, and is still closer to the “carefree and fun” stage of its evolution. As this changes, and as new norms and policies are forged, a little healthy geeking out can prepare you to lend your voice to the discussion.

Thoughts on Peer Review

April 2, 2013

I just started a Coursera course on Gamification, and stumbled across an interesting discussion forum thread on the topic of peer review in MOOCs.  From the thread, I gleaned that the idea of peer review has some participants concerned about privacy, fairness, and their own “qualifications” (or lack thereof) to judge others’ work.  Out of respect for the bounded nature of the course community, I won’t reproduce any of that thread here.  I did, however, post my own two cents to the discussion forum, which I wanted to throw up here as my own perspective on peer review in MOOCs (and beyond):

“For me, peer review in this course is a very specific learning opportunity, and I would encourage others not to miss out!  I had a really meaningful experience with the peer review process in another Coursera course (Data Analysis by Jeff Leek) and found myself learning more than I typically do when I get a paper back that has been graded by a professor.  I reviewed the work of 4 other classmates anonymously, and was asked to assign point values from a rubric.  With that format, peer review turned into a really useful metacognitive process for me.  It forced me to elevate myself outside of my own work and mindset, and have a look at four other people’s creative work.  It forced me to get a better grasp of the fundamental conventions and expectations of the field in order to grade my peers fairly.  In essence, the peer review forced me to “think like a professor”, which is never a bad exercise.

It helps to think of the purpose of peer review as lying in the process rather than the result.  The point of peer review is not for us to get a grade.  (Especially since grades in Coursera ultimately just boil down to getting a PDF certificate whose meaning is still pretty ambiguous.)  The point of peer review is to give us some exposure to others’ creative ideas, and to expand our own metacognition about the course topic in the process.

Also remember: Coursera courses attract a bizarrely vast range of participants, so it’s not impossible that your work could be “peer reviewed” by professors, CEOs, homemakers, politicians, undergraduates, high school students, civil servants, retirees, etc.  Couldn’t ask for a more diverse and relevant audience than that for a course on gamification!

Yes, peer review also happens to fill a practical need, as we all know it would be impossible for a single instructor (even with a cadre of teaching assistants) to look at each of our work individually.  But I encourage you to think of peer review as more than that, or you will likely miss out on the kind of learning experience it can be.”

MOOCs: The Tacit Prerequisites

February 26, 2013

I was excited about MOOCs. I am excited about MOOCs. In fact, I have been gorging myself on MOOCs recently. I dabbled in some Udacity courses two semesters ago, checked out some free courses from Udemy’s Faculty Project, finished up a Coursera course on Social Network Analysis last semester, and have spent a few late nights recently hammering out Data Analysis assignments.

But I have reservations. Because MOOCs aren’t exactly the broad public service that I had expected they would be, nor are they necessarily headed in that direction.

These classes carry a lot of tacit–or perhaps just unexamined–assumptions about the skills students are coming in with. The pessimist (or realist) in me wants to say that, unless you’ve already got a pretty decent undergraduate foundation in the subject of whatever MOOC you’ve signed up for, you’re going to quickly fall through the cracks. As a MOOC student, the onus of getting from point A to point B in these classes has fallen squarely on my shoulders. And overall, that has been fine for me. I have dropped out of those courses I felt underqualified for, and have stuck to it with those that have felt closer to my “zone of proximal development”.

But I am a graduate student, and I have a pretty broad array of (physical classroom-based) courses already under my belt. If I were an English major hoping to expand my math skills, a high school student poking around to get exposure to college-level work, a GED-earner trying to decide if college is right for me, or a late-career professional looking to shift my field of work, I would likely be somewhat lost in the MOOC jungle. Because MOOCs help you get better at the stuff you already sort of know.

Currently, the MOOC platforms I’ve seen very much replicate a lecture hall style of content delivery. Some have bells and whistles for auto-correcting assignments (which may break on occasion *cough* Udacity *cough*), some have peer assessments, but most are simply a set of video lectures and a set of associated quizzes. The main “constructivist” element of most of these courses is the discussion forum. The MOOC discussion forums I’ve experienced offer some handy “how-to” support on occasion, but they currently read more like online user forums and don’t offer the same kind of agile “give-and-take” that a face-to-face discussion section or study group does. The students who end up contributing to the forums I’ve been involved in seem to be a small subset of more advanced participants, and their manners are mixed: some are sympathetic and have a decently pedagogical sense of how to frame advice and hints for others who aren’t as experienced in the topic. Others are more advanced in the subject at hand and will brook no fools, treating the discussion boards more like user forums (i.e. getting irrationally upset when someone posts in the “wrong” thread). It’s an interesting culture clash, to say the least.

So, you wanna make it through a MOOC? The tacit prerequisites for succeeding in MOOCs, as I’ve experienced them so far:

  • Know how to Google, and be proactive about doing it. The MOOCs I’ve looked at aren’t exactly “self-contained”: they’re often just a vehicle to point you towards concepts you should Google and explore on your own. It also helps to understand the structure and etiquette of large-scale discussion forums–something which will likely be familiar to MOOC students with a strong technology background, but may be unfamiliar to those who haven’t perused or participated in online user forums before.
  • Have copious prereqs. In the MOOCs I’ve worked on, I can pinpoint what has led me to drop out or stick with it–and it usually comes down to the match or disconnect between the MOOC content and my prior experience with the subject. In Udacity’s Web Development course, for example, I stopped showing up to watch the video lectures when I got frustrated for not knowing enough about Python programming to keep up. In Coursera’s Computing for Data Analysis and Data Analysis, on the other hand, I’ve stuck around, but relied heavily on prior knowledge I gleaned from Intro to Computer Science and a pretty rigorous Economics class I took as an undergrad, as well as a doctoral-level stats class I’ve taken as a master’s student. Not exactly a low barrier to entry, if you ask me…
  • Expect content, not pedagogy. Some of the video lectures have read more like a textbook–spouting off terms and definitions before we have any context through which to process them. Problem-based learning is rare, and instructors don’t always give sufficient context to frame the topic for more general audiences. So for now, expect a content-driven approach, and expect to spend time on your own filling in the gaps through your own support network, or via good ol’ Google.
  • Get a kick out of small incentives. The tangible rewards for MOOCs are minimal–unless you’re a huge sucker for a PDF certificate at the end of a class (which, admittedly, I am). Also, most MOOCs are currently set up to require a relatively consistent (usually weekly) time commitment. So, if you’re the kind of person for whom weekly quizzes help keep you on track, you’re good to go!

So you want to teach a MOOC? Here are my suggestions–or rather, my humble plea–from my own perspective as a MOOC learner:

  • Design for mixed audiences. Structure the course around “conceptual knowledge” that is necessary for all students and that is central to the topic/field at hand. Then be sure to ground these conceptual discussions in copious real-world examples and problem scenarios that will be familiar to nearly everyone. Any good MOOC needs to ask itself: “What would I want students to get from this course in order to become more informed citizens in this world?” And then teach to that! Because, if MOOCs are going to serve a broader societal purpose, they need to speak to a “general interest” audience rather than a group of geeky graduate and post-doc students with too much free time on their hands. For a positive example of a course that balances the demands of geekdom while also focusing on “big picture” real-world examples that help non-specialists still get at the meat of the subject, I’d recommend having a look at Lada Adamic’s Social Network Analysis.
  • Build in flexibility. In line with designing for mixed audiences, instructors can offer occasional challenge problems or assignments for students who want to delve into more advanced skills. In this respect, Jennifer Widom’s Introduction to Databases is a decent model, offering additional quizzes and challenge exercises to Stanford students who are taking the class for credit, as well as those (masochistically-minded?) MOOC students who simply get a kick out of mastering increasingly complex types of database queries.
  • Feature instructors who can not only talk, but explain. I would argue that MOOCs demand a pretty high level of cognitive complexity from their “expert” instructors. A good MOOC instructor has to distill their subject into what “non-experts” should walk away with, and gain new language for how to present their subject to a more general audience. All in all, not a bad cognitive challenge for academics! The problem? Nearly every professor already thinks they can pull this off. A good test? Have your potential MOOC professor head to a 7th-grade classroom and try to pitch her first few lectures there. If it works and the students learn–sign her up to teach a MOOC! Jeff Leek’s Data Analysis is a good example of an instructor who knows how to frame the “narrative arc” of a course effectively. Although I’d argue that his videos run a little long, and the technical requirements of the course aim too high to be of much use to a non-technical audience, Leek is smart about explaining his subject systematically, and making consistent use of real-world examples.
  • Leverage the power of peers. I’ll admit I was a little skeptical when I first learned of MOOCs incorporating peer grading into assignments. It seemed like an awkward solution to get around the MOOC reality of having a professor who is massively outnumbered by her students. But as it turns out, one of the most powerful learning experiences I’ve had in a MOOC so far was the opportunity to get graded by a group of my peers. The structure was key: we first had to submit our assignments, then we had a week-long period where we were asked to review work from at least 4 fellow students. Only after looking at our peers’ work were we asked to go back and self-assess our own work. The result? I noticed loads of things I’d overlooked in my original assignment and left with clear ideas of how I could improve in the future. And in the end, I was shocked to discover that my self-assessment scores only differed from my peers’ ratings of my work by about .5 points on any given rubric item! All in all, a humblingly awesome educational moment.

In the end, I believe MOOCs are making it painfully apparent that “good teaching is good teaching”–regardless of whether it’s happening online or in a physical classroom. Knowing where students are starting from, and pushing them to new levels of complexity is a perpetual pedagogical challenge, and no MOOC “magic” will resolve that.

Why vote?

November 7, 2012

So, I’ve been listening to some podcasts recently (*cough* Freakonomics *cough*) that seem to suggest that the fashion with economists today is to eschew voting and dismiss it as “irrational”. Basically: “Cool economists don’t vote”. (In fact, I’ve noticed that “cool economists” usually excuse themselves from dealing with most things that they deem to be “irrational”.)

Clearly, voting is not a “rational” decision. I, personally, would also question anyone who argues that a single person’s vote is somehow instrumentally useful. I’ll be the first one to admit to not being fully informed about all of the candidates on my ballot. I confess that I didn’t vote for any Soil and Water Commissioners this time around, and the only information backing up my decision on electing county justices was that they were women (and hence seemed less likely to encumber my reproductive rights, in a pinch).

I don’t vote to solve problems–although solving them may, sometimes, be a happy side-effect. I don’t even really vote to “make my voice heard”, whatever that means. I talk plenty on a daily basis, so believe me, if “making my voice heard” were what I was after, I would hardly grab a bullhorn and dive into the cacophony of an election cycle to do it. I can be heard better in campus committee meetings, volunteering around the community, going to a school board meeting, sitting in a coffee shop and chatting with my neighbor, or any number of better ways than a federal election.

So why do I vote? On the morning after what for me feels like a pretty happy election, it seems worth pondering…

I vote to feel connected to the future–and the history–of my country and community. When I think about the other places in this world I have lived, and the other places I could live, choosing to vote in this country feels like an expression of “where the heart is”.

I vote to feel implicated in the ways my community and country will evolve, and to remind myself to stay active in creating these evolutions.

I vote because, despite all the political vitriol, the act of voting is still a powerful, almost miraculous sign of our desire to be in community with each other.

I vote because, for an unforgivably long period of time in this country, women were not allowed to vote.

I vote because, in many other countries across the globe, people do not have the opportunity to vote and signal their commitment to their communities and support (or lack of support) for their leaders.

And in future elections, I hope to remember that I vote because, regardless of the outcome, I am committed to cooperation across differences and to helping this country solve its messes.

Prayer

September 11, 2012

It is September 11th.  Again.  I was sitting in a coffee shop earlier today struggling with how to pray, or meditate, or reflect, when my mind suddenly wandered over to my Turkish grandmother, from the host family I lived with in Ankara last summer.

She would pray, punctually, five times a day.  She could never be caught without her prayer beads; she seemed to wield them as a way of channeling all of her nervous energy through them and out into the world. I remember she kept a listing of prayer times tacked up with a magnet to the wardrobe in her entrance hall. When I puzzled over it once, she obligingly offered a few interesting tidbits about which prayer times mean what, and how their timing is determined.

But what I remember most is that she would pray, always, with her palms facing up to the skies, gently rocking herself back and forth as she whispered some inward words.  And that feels right, to me.  Open to the world, cradling the soul, and speaking to the future.

An American at the Bazaar

August 10, 2012

I’ve been noticing that we in America seem to display a general confusion when it comes to two very fundamental economic concepts: price and value.

The average American seems to orient his purchase decisions around price. Moreover, his concerns about price seem to occur in the here and now–price is a short-term expenditure, not something he can average across the life of a product. He will buy shoes at Payless, a $3 T-shirt at Wal-Mart, and while he may doubt the quality of these products, he can’t help but entertain the vague delight that he is getting a good deal. Being raised in this “price mindset” teaches us that the measure of a good can be transcribed into a dollar price, and that the lower that price is, the better.

Another way of orienting ourselves in the world of consumption is to be concerned about value. A value mindset doesn’t confuse price with utility. Value is difficult, because it requires relatively high-order cognition: it requires us to be in touch with our personal preferences and idiosyncrasies, and requires us to assess what a good means to us beyond what price (and advertising) would have us believe. In short, the “value mindset” requires us to dust off our free will and jog it around the block a few times. Value is a holistic, and also highly subjective, measure. It will vary across individuals and cultures. In many ways, it defies–or at least complicates–the idea of a “rational actor”. And yet it is a real and valid process of assessment, and is common around the world. It may include considerations like:

  • time saving
  • durability
  • sustainability
  • beauty
  • pure Epicurean enjoyment
  • etc…

So we have price, and we have value. And in our culture, we have a lot of resources and socialization that have taught us to assess the one, but not the other. Want painful proof of this? Just take any American to a market bazaar, and watch him try to haggle. It’s a pitiful sight. His price/value confusion makes him at times insulting, at times delightfully easy to scam–and nearly always out of place in a bazaar.