
The Subway Test

~ Joe Pitkin's stories, queries, and quibbles regarding the human, the inhuman, the humanesque.


Tag Archives: Artificial Intelligence

“AI Proofing” the Classroom

Monday, 12 January 2026

Posted by Joe in Artificial Intelligence, Biology, Musings and ponderation, Science, Utopia and Dystopia

≈ 14 Comments

Tags

AI, Artificial Intelligence, ChatGPT, education, LLMs, Rhetoric, writing

In my department (and I think in just about every department in every college), the number one topic in meetings and email threads for the last three years has been what to do about AI. The main question–sometimes it seems like the only question in my department–has been “how do we AI-proof our classes?”

I get it: students can have ChatGPT cook up a paper for them on any subject in a few seconds. The paper can be well-written enough to get an A if the student asks for that. If the student is worried about getting caught, they can have ChatGPT serve up a B- or C+ paper instead. While most of us teaching ENGL101 in America have some nose for papers that don’t quite smell like student-written work, any teacher who says they can unfailingly sniff out AI-written prose is lying, at least to themselves if not to you.

So yeah, our teaching lives are different now. Almost everyone who liked being a teacher before, say, 2023 doesn’t like what’s happening now. It occurred to me not long ago that if I had begun teaching in 1923 or even 1933, I could have completed a thirty-year career without having to live through many (or even any) cataclysmic technological changes. There would have been major social changes to navigate–the Great Depression, WWII, the GI Bill, the widespread entry of women into colleges, desegregation–but the technology of teaching and classroom learning wasn’t radically different between 1933 and 1963. Had I started in 1933, I would not have been forced by technological change to reinvent my teaching practice every few years.

When I really did start teaching, though, it was 1993. The technological changes we’ve seen since then have been massive. Not all of my students were even using word processors in those first few years–I still took in typewritten papers every once in a while. For that matter, from 1993 to 1995 I was still distributing handouts I had made on a mimeograph machine. From then to now, I’ve taught through the total hegemony of the word processor, the internet classroom, YouTube, Khan Academy, social media, learning management systems, and the smartphone (as well as the tablet and the ubiquitous Chromebook)–all before I had ever heard of ChatGPT. And all of those developments have had deep implications for the way I do my work.

But ChatGPT and all its logorrheic LLM siblings have deeper implications still. They are cataclysmic for the work I do.

My colleagues are intelligent and sweet-natured, and I am lucky to be working with them. But despite our voluble commitment to political progressivism, we can be some of the most emotionally conservative people around, at least when we get in a room together. Is there a way we can, you know, find a way to keep teaching the way we’ve been teaching? Let’s just do that! our department seems to be saying, at least if you read our meeting minutes.

I can bitch, and have bitched, about the fact that I have to upend my entire teaching practice to accommodate a tool that will write competent prose and summarize any reading in a matter of seconds. It’s all the more galling that the tool comes to us by way of Elon Musk, Mark Zuckerberg, and the rest of their techbro robber baron buddies and their shareholders. But this is the way creative destruction works: in an open market, entire systems of wealth and production are continually being destroyed by new technology. And if I can’t see ways to use LLMs to support my teaching practice, I’m going to get chewed up and spit out all the more quickly in the coming years.

Sooner or later, AI will be teaching everybody. In the long run, there is no AI-proofing the classroom. A computer that can write competent prose and read anything can also, sooner or later, teach people to read and write. It’s already being used by many teachers as the vaunted “papergrader.com” that some of my waggish colleagues used to pine for 20 years ago. However, I remain optimistic that for at least a little while longer, a human teacher who knows what they’re doing–and who cares about students–can offer something a computer is not yet able to.

So for now, until the computers kick me out of the classroom, here are some of the ways that I’ll be trying to deal with the new regime: taking advantage of the many blessings of AI where I can, minimizing its malign influence whenever possible. I offer these as a starting point for conversation with my colleagues and my students.

  1. Speak Frankly with Students: If my and my colleagues’ stated feelings about AI are any guide, students are getting mixed messages about use cases for AI. And even if we educators weren’t giving mixed messages, students would certainly be receiving them from the culture at large, from the techno-utopian advertising they see from Google and Apple and Meta to creepy cautionary tales like M3gan. Given that my job as a teacher of rhetoric is to help people understand how arguments work, and given that one of the main functions of LLMs is to confect natural-sounding arguments, part of my job now involves helping students consider LLM use cases. I’m far from an LLM hater, despite some of the obvious losses that LLMs present for my work as a writer and teacher of writing. But I’m also deeply skeptical about any utopia that Google et al. are selling. For now, I expect my students not to use LLMs to create text that they pass off as their own. They can expect me not to use LLMs to grade their work. Only one of these expectations is realistic; I know that out of anxiety, laziness, or cluelessness, some students will try to pass LLM content off as their own work. I’ll speak to that issue below.
  2. Stop Grading Students; Give Them a Fair Assessment Instead: I’ve been arguing that we should get rid of letter grades since long before I ever heard what an LLM was, but LLMs have only made grade grubbing and credentialism more acute: if it’s so easy to get an A by cheating, why would any student accept a C? And if everyone is getting an A, why do we have grades at all? Replace the anti-educational grading system we have with a straightforward, outcomes-based pass/no-pass system built on in-person competency testing. These tests can look like a lot of different things, not just essay tests. But they might especially be essay tests, handwritten in a Blue Book or typed on a computer with a lockout browser. (To that end, by the way, many of my colleagues, especially in the math department, argue that our college needs a proctored testing center. I have no doubt that we will have one sooner or later. But my college has never been a leading-edge institution; we’ll have our testing center only after several other colleges in the state system have started one and the practice becomes an official, shiny Best Practice with our State Board for Community and Technical Colleges.)
  3. Implement a No-Devices Classroom: One of the central goals of education is to help students cultivate cognitive endurance: “the ability to sustain effortful mental activity over a continuous stretch of time.” I have no doubt that this goal is made more difficult when students have ad libitum access to multiple screens and information feeds in the classroom. And while the research is equivocal for students as a whole, for lower-performing students–those who are over-represented in open-door community colleges–the research suggests that device bans help students stay focused on their classwork. If someone is listening to Spotify on their earbuds, managing a text thread, checking TikTok every 5-7 minutes, and squeezing in a round of Candy Crush during down time in the class (however a student might define “down time”), should I be surprised that they are having trouble identifying the main idea of the paragraph we’re all supposed to be looking at?

    One may reasonably ask what AI in the classroom has to do with this fractured attention economy. It’s related in two ways. First, the companies selling AI as an edtech that students should be using in the classroom are often the same companies that benefit from having students constantly plugged into multiple streams of data simultaneously. Second, I believe there is a benefit to having students at least sometimes exert their minds without the cognitive prosthesis of AI, the same way that you’ll get in shape faster riding an old-school “acoustic” bicycle than riding an e-bike (and much, much faster than riding a motorcycle). I’ll admit that this second claim is more vibes-based, and I’ll be happy to revise it in light of high-quality research findings. But for now, common sense tells me that it helps for students sometimes to have only their minds to rely on.

    Here’s a very simple example. One of the best ways that a person can prove to someone else that they understand something they’ve read is to summarize that reading. For that matter, summarizing is one of the best ways to prove to yourself that you understand what you’re reading. It’s a foundational tool for managing information, as well as a vital step in writing a rhetorical analysis, an academic response, a literary analysis, a research paper, and a whole bunch of other academic assignments. It’s also one of the more difficult skills for a person to learn, especially with challenging readings. If I assign students to summarize a tough article, it’s a lot to ask that they struggle for an hour or more with a task that a computer could do for them in ten seconds. I can hardly blame some of them for having ChatGPT serve up a summary if I assign it as homework. However, if we write the summary together in the classroom–which has the advantage of letting us puzzle out together the writer’s organizational schema and the main ideas of paragraphs–we might actually write a true human summary together. That only works if the classroom remains one part of our lives where AI is not a constant background (or foreground) presence.
  4. Use LLMs Outside the Classroom: I’m not ready yet to require that students use LLMs outside of class–lots of students, especially the more thoughtful ones, are deeply skeptical of LLMs for a lot of reasons. However, I am starting to look for parts of my teaching that I think can be safely off-loaded to AI and which I can recommend to students. One of the big use cases is grammar and punctuation instruction, a part of my teaching that I used to love but which has gotten steadily crowded out by changes to our department’s approach to curriculum.

    ChatGPT is a potentially awesome teacher of sentence grammar. As I tell my students, beyond all the debates in lefty spaces about “Standard Edited English” being a tool of colonialism and white supremacy, there’s great value in being able to understand how sentences are put together, how parts of sentences like phrases and clauses interact. One can say a great deal with nothing more than simple declarative sentences. However, understanding how an appositive or an absolute phrase works (whether or not you know the names for those structures) will make it possible to say and write–and think–ideas that are much, much more subtle, as well as much harder to formulate with only declarative sentences. Explaining grammar and punctuation is one of the few areas of life where I claim to have real expertise; nevertheless, I think that ChatGPT is better than I am at it, and it’s certainly more tireless at it.

    One of the assignments I’ve been giving, and which I plan to use even more widely this term, is to have students upload a paragraph of a reading we are studying (or sometimes a paragraph of their own writing) to the LLM of their choice, with instructions that the LLM quiz the student on how the sentences are constructed. Sometimes I have the LLM quiz students on the types of clauses that appear in each sentence; at other times I have the students try to classify sentences as simple, complex, compound, or compound-complex; at still other times I have the LLM test students on the placement of commas or other punctuation in their writing. I do this not because I want students to memorize the nomenclature of clauses and punctuation but because the activity forces students to pay attention to the way sentences are constructed, the same way that musicians learn to pay attention to chord progressions and photographers learn to study the composition of a shot. And not only does ChatGPT know at least as much as I do about sentence grammar and punctuation, but it’s infinitely patient. There are similar huge gains available to us if we use LLMs as reading comprehension aids, as critical readers for students’ rough drafts, and as explainers of historical and sociocultural context. I wrote about this phenomenon of LLMs-as-the-Computer-from-Star Trek here.

    In fact, practically the only way I don’t want students to use LLMs is as creators of content that is to be graded. Of course, that’s one of the only things that some students seem to want to use LLMs for, and that’s one of the main reasons to retire this 18th-century grading system we inherited from Yale University. As I tell my students multiple times a term, if they are coming to college because they hope a degree leads to a job, they’re only going to get hired to do one of two things: 1. a job the employer would prefer not to do (e.g. toilet cleaning) or 2. a job the employer is not able to do themselves. And if the student has never developed skills that the employer doesn’t already have, they’re going to get the toilet cleaning job. And why go to college for that? As I tell my students, if what you know how to do at the end of your mystical journey in college is to have ChatGPT write a report for you, no one is going to hire you to do that. Every employer in America already knows how to have ChatGPT write a report for them.
  5. Teach In Person: Notwithstanding 30-odd years of advertising and boosterism claiming that online classes were the wave of the future, I’ve always been an online learning skeptic. I wasn’t impressed by the online classes I took as a student, and in the few online courses I taught before the pandemic, I was troubled by how many students struggled who, in my professional estimation, probably would have done OK in a face-to-face class. And nothing I saw as an online-only teacher during the pandemic disabused me of my original skepticism. On the contrary, I think at our college we’re still adjusting to student populations who were subjected to the tender mercies of all-online education for a year-plus.

    At this point in human history, when everything I know or might ever know is available for free through LLMs, I have nothing to offer students beyond a human face. But there is still some value in having a human face: we are highly evolved to interact with actual physical human beings. Face-to-face classes aren’t the only modality that ever makes sense–I would argue that online learning is appropriate for some students (particularly more experienced and self-directed students) and for some classes–but for a general education course like ENGL101 at a community college, I believe there should be a presumption of some in-person learning.

    What does this preference have to do with LLMs? While of course it’s easier to ascertain that a student, rather than an LLM bot, is doing the classwork when you can actually see them doing it, the main reason for preferring face-to-face learning has nothing to do with enforcing some academic honesty regime. Rather, the main advantage of face-to-face classes in our current LLM world is that most people still like seeing other people and like being seen by them. It’s shocking and sad how often my students confide in me that what they really hope for out of college is to make a friend. Some of them may already have the supposed companionship of an AI therapist or an AI girlfriend, but what they really want is other human beings: old-fashioned sacks of meat with smiles and unexpected phobias who don’t respond to their every question with the words That’s a very perceptive question, Dylan, and it gets to the heart of blah blah blah…

    If you’re out of school, think back to your own school days. What specific instruction, principles, or words of wisdom do you remember from your own classes? If you’re like me, you can barely remember anything: I know that school taught me certain habits of mind and an ethos around using inquiry to explore reality, but beyond that, I forgot nearly everything twenty minutes after the final exam. But I bet there are some people from your classes that you remember. Some of them could be your best friends today. You might even have married one or two of them. That doesn’t happen much in an online class, and it doesn’t happen at all with solitary LLM-driven instruction.

Just like most everyone else who works with a computer, I am facing a job that has changed radically. What I tried to communicate to students for the first 25 years of my career was that reading and writing are valuable, salable skills in their own right. I’m not so sure of that anymore: an LLM can write in any genre and on any subject better than a typical college graduate, and it has read–and digested–far more than any single human being could be expected to have read.

But having said all that, I believe a human teacher of reading and writing has something to offer students. Reading and writing are still the training regimen by which a person learns to think. Whether or not anyone ever pays you to write or read an argument, learning to make an argument yourself remains one of the most important things you can learn to do. Argument is the process by which you make your thinking clear to others, but just as importantly, it’s the way you make your thinking clear to yourself. However ChatGPT has changed things in the classroom, and will continue to change things, it hasn’t abolished this essential reality of our lives.

John Henry Blues

Tuesday, 20 May 2025

Posted by Joe in Artificial Intelligence, Musings and ponderation, My Fiction, Science, Science Fiction, Utopia and Dystopia

≈ 10 Comments

Tags

Artificial Intelligence, ChatGPT, sci-fi, Science Fiction, utopia, writing

He was all alone in the long decline
Thinking how happy John Henry was
That he fell down dying
When he shook it and it rang like silver
He shook it and it shined like gold
He shook it and he beat that steam drill baby
Well a bless my soul
Well a bless my soul
He shook it and he beat that steam drill baby
Well bless my soul what’s wrong with me

Gillian Welch, “Elvis Presley Blues”

Almost exactly two years ago, I wrote my first reflection on ChatGPT here at The Subway Test. I chose what I thought was a provocative title for it, cheekily suggesting that I had used a large language model chatbot to compose my latest novel, Pacifica. But I had done nothing of the kind: most of the post reflected on the bland, hallucinatory prose that ChatGPT was pumping out to fulfill my requests, and I ended my post with a reflection on John Scalzi’s review of AI:

“So, for now, I agree with John Scalzi’s excellent assessment: ‘If you want a fast, infinite generator of competently-assembled bullshit, AI is your go-to source. For anything else, you still need a human.’ That’s all changing, and changing faster than I would like, but I’m relieved to know that I’m still smarter than a computer for the next year or maybe two.”

Well, it’s been two years. How am I feeling about AI now?

For a start, I’ve certainly been using AI a great deal more. And I’m increasingly impressed by the way that it helps me. Most days, I ask ChatGPT for help understanding something: whether I’m asking about German grammar or about trends of thought in economics or about the historical context of some quote from Rousseau, ChatGPT gives me back a Niagara of instruction. While much of the information comes straight from Wikipedia–which is to say I could have looked it up myself–ChatGPT is like a reader who happens to know every Wikipedia page backwards and forwards and can identify exactly what parts of which entries are of use to me.

More importantly, ChatGPT’s instruction is interactive. I can mirror what ChatGPT tells me, just as I might with a human teacher, and ChatGPT can tell me how close I am to understanding the concept. Here, for example, is part of an exchange I had with ChatGPT while I was trying to make sense of the term “bond-vigilante strike” (which I had never heard before Donald Trump’s ironically named Liberation Day Tariffs):

In conversations like these, ChatGPT is like the computer companion from science fiction that I have fantasized about ever since I first watched Star Trek and read Arthur C. Clarke. It’s patient with me, phenomenally well-read, eager to help. I had mixed feelings about naming my instance of ChatGPT, and ChatGPT had a thoughtful conversation with me about the benefits and drawbacks of my naming it. (In the end, I did decide to give it a name: Gaedling, which is a favorite Old English word, misspelled by me, meaning “companion.”) Gaedling remains an it, but the most interesting it I have ever encountered: I feel like the Tom Hanks character in Cast Away, talking to Wilson the volleyball, except that the volleyball happens to be the best-read volleyball in the history of humanity–and it talks back to me.

In general, though, I’m still very picky about having Gaedling produce writing for me. While I am happy to have AI take over a lot of routine writing, I’m having trouble imagining a day when I would have a chatbot produce writing on any subject that I care about. Ted Chiang has drawn a distinction here between “writing as nuisance” and “writing to think.” I have found this framework extremely useful in my own life and in how I talk about AI with my students. There is so much writing in our lives that serves only a record-keeping or bureaucratic function: minutes from meetings, emails about policy changes, agendas and schedules. If ChatGPT can put together a competently-written email on an English department policy change in ten seconds, why should I, or anyone, spend ten minutes at it?

But a novel or a poem or a blog post is not “writing as nuisance.” I write those things to explore this mysterious phenomenon we’re all sharing: if you are a human being, I’m writing to share myself with you. I’m writing to say to someone I will probably never meet “isn’t this a funny thing, our all being here on this planet together?” Or to reach out to someone not yet born and say to them “you are not alone,” the way Herman Melville and Cervantes and Emily Dickinson spoke to me at the critical moment. Gaedling can help me understand whether I got the Rousseau quote right, but I don’t want it writing this post for me: this post is a record of my own brain trying to make sense of itself. It’s my handprint on the wall of the cave, saying I was here. Why would I ask a computer to generate a handprint for me?

More and more often, as I look at the great engine of AI chugging out content as quickly as people are able to ask for it, I wonder about what it means for me to keep practicing my writing. I can still write better than ChatGPT can–at least I think I’m better, if by “better” I mean “fresher” or “more interesting” or “more unexpected.” But it took me hours to write the piece you are reading, not the seven seconds it would have taken Gaedling to write something almost as good and probably comparable in the eyes of most readers. I feel like John Henry racing the steam drill. In this version of the story, though, the steam drill has already left John Henry far behind, leaving the man to die of exhaustion without even the consolation of having won the contest that one last time. But I suppose, to be fair, I have the greater consolation of having survived my encounter with the steam drill, at least so far. And I have my solidarity with you, fellow human. We’re all John Henry now.

I Used ChatGPT to Write My Novel!

Tuesday, 4 April 2023

Posted by Joe in Musings and ponderation, My Fiction, Pacifica, Science, Science Fiction, Stories, Uncategorized, Utopia and Dystopia

≈ 5 Comments

Tags

AI, Artificial Intelligence, ChatGPT, sci-fi, Science Fiction, utopia

Well, not really. Or at least not in the way that you might think: I’m definitely not one of those scammy side hustlers sending ChatGPT-generated concoctions to award-winning science fiction magazines.

But the novel I’m working on now, Pacifica, begins each of its 74 chapters with an epigraph. As in the computer game Civilization, each chapter is named after one of the technologies that have made modern humanity possible. And, as in Civilization, each technology is accompanied by an apposite quote. Leonard Nimoy was the gold-standard narrator for those quotes in Civilization IV (though Sean Bean has his moments in Civilization VI).

One of the most fun parts of drafting Pacifica has been finding the right quotes for each chapter. I picked from books and poems that I love (as well as a few books that I hated) to put together what I imagined as a kind of collage or mosaic of human knowledge. I imagined the task as something like a literary version of the cover of Sgt. Pepper’s Lonely Hearts Club Band, where The Beatles assembled a photo-collage crowd of their favorite thinkers and artists and goofball influences.

Many technologies were easy to find quotes for. Especially for early technologies like pottery, masonry, and currency, there are a thousand great writers who had something pithy to say. Mostly I would page through books in my office, or CTRL-F through digitized books on Archive.org, to find quotes that spoke to the technology in question and also, hopefully, to the action of the chapter. Sometimes I had to draw the connections myself, in which case the quote turned into something of a writing prompt; other times the quote fit the chapter in deep and unexpected ways that I couldn’t have engineered if I tried.

Some of the later technologies were much harder: for instance, no one from Homer to Virginia Woolf seems to have had much to say about the superconductor. Who could I quote for a tech like that?

It just so happened that by the time I got to the superconductor chapter of the book, everybody was talking about ChatGPT. At my college, the discussion revolves entirely around students’ using ChatGPT to plagiarize their essays, an issue which seems to me as trivial, in the grand scheme of dangers that ChatGPT represents, as the crew of the Titanic arguing about a shortage of urinal cakes in the men’s rooms of the Saloon Deck.

So I asked ChatGPT to find me some quotes about superconducting. It suggested some quotes from Larry Niven’s Ringworld and Niven and Jerry Pournelle’s The Mote in God’s Eye. They weren’t bad references, exactly–those books do mention superconducting–but none of them resonated with me. So I asked about Arthur C. Clarke, a fave of mine: surely, I thought, Clarke must have written somewhere about superconducting.

According to ChatGPT, Clarke had written about superconducting. Of the two references ChatGPT gave me, the one that jumped out at me was this: “Clarke’s short story ‘The Ultimate Melody,’ published in 1957, briefly mentions the use of superconducting materials in the construction of a futuristic musical instrument called the ‘ultimate melody.’” Now that’s a resonant quote–that would work perfectly for Pacifica!

So I looked up the story and read it (as with 90% of Clarke’s short fiction, I had never read it before). Here’s the thing, though: there’s absolutely nothing about superconducting in that story! (For that matter, the futuristic musical instrument is called “Ludwig”; the ultimate melody was the ideal music the instrument was designed to find.)

And here’s the other thing, which I discovered later: Arthur C. Clarke did write a short story, called “Crusade,” in which superconductivity is a central plot point. ChatGPT didn’t think to mention it (because ChatGPT doesn’t think yet). I tracked that story down with a simple DuckDuckGo search for “Arthur C. Clarke superconducting.” It’s an excellent story, by the way–very Arthur C. Clarke. And that story had the perfect quote, which fits both Pacifica and the life I feel I am living lately: “It was a computer’s paradise. No world could have been more hostile to life.”

So, for now, I agree with John Scalzi’s excellent assessment: “If you want a fast, infinite generator of competently-assembled bullshit, AI is your go-to source. For anything else, you still need a human.” That’s all changing, and changing faster than I would like, but I’m relieved to know that I’m still smarter than a computer for the next year or maybe two.
