“… despite an apparently enormous capacity for digital storage, this abundance should not be viewed as a blanket invitation to a continuous pursuit and offloading of all the minutiae of existence. (It is hoped that we will not all turn out to be closet pharaohs seeking immortalization in our digital pyramids.) Few can cope with the unending waves of reflection and pseudo-reflection that permeate the Web, including its blogs. Or one day all too soon we will mourn the analog, a single photograph on paper seeming refreshingly modest by comparison.”—From Fred Ritchin’s 2008 book After Photography. Preach on, Fred.
“There’s something very odd about a world in which it’s easier to imagine a futuristic technology that doesn’t exist outside of lab tests than to envision expansion of a technology that’s in wide use around the world. How did we reach a state in America where highly speculative technologies, backed by private companies, are seen as a plausible future while routine, ordinary technologies backed by governments are seen as unrealistic and impossible?”—Google cars versus public transit: the US’s problem with public goods | … My heart’s in Accra (via new-aesthetic)
This is a response to chapter 2 of Fred Ritchin’s wonderful book After Photography.
One of my favorite pieces of video art is Beefsteak (Resurrection), by Daniel Spoerri and Tony Morgan, from 1968. It takes a bit of time to understand what’s happening on first watching this 8-minute black-and-white film. It begins with human defecation and then moves in reverse: a person un-eats a steak bite by bite, the steak is un-cooked sizzle by sizzle, the cow is un-slaughtered, un-eats a meal, and finally un-defecates. Only at the end do you see the beginning.
I love this video. Among many other things: it questions our understanding of time, reveals our lack of relationship with the animals we may eat, and interrogates the nature of death itself. The footage was made in the 60s, and looks simple and uncontrived, yet the video creates a new version of reality and bends the rules of time and nature in the process. Spoerri and Morgan took the photographic image and strung it together within a context that created a new message and a new story.
A still from Beefsteak (Resurrection)
The problem is, this video hinges on reality. We have to believe that there is a cow, there is a cooking facility, and there is a steak and a fork, before we can grasp the meaning of those things within an altered framework of time. The good news is, at least within the world of this video, the alteration that has been made to reality is completely obvious.
Alteration that announces itself, alteration as the fundamental point of the work, happens with regularity only in the art world. In most cases, we are surrounded by altered images that hide rather than announce their manipulations.
The world of advertising is an obvious example. As Ritchin says, “Each image exists to make me want to find out something that is probably useless, to purchase the product described no matter how unnecessary, or to brand it so it will seem familiar each time I see the image or name again. There is no relationship for me, the viewer, with an actuality that exists independently of the intended transaction.”
The use of the photo as pointed message, a message that gains power from its proposed relationship to reality, is certainly not new. However, the photo as transaction is an idea worth exploring in further depth. Ritchin addresses it regarding advertising imagery, but what about the more personal transactions we have with our own photos?
In sharing our digital images on social networks, we ourselves become a face-recognized, demographic-mapped, socially-situated human product to be sold to advertisers as a direct outcome of a photo transaction. In this case, the “actuality that exists independently of the intended transaction” becomes the focus of our attention, while the intended transaction remains hidden! What Ritchin missed is that Pixels and Paradox extends past the bounds of the image frame.
This is an image that I altered using Instagram. Facebook, through Instagram, knows who I tagged in this image, who liked it, who else took this image at this time, and where it was taken. Which of those facts is a more disturbing or paradoxical outcome of this photo?
Ritchin says, “The photograph, no longer automatically a recording mechanism, is not as able to ‘appropriate the thing photographed’ as much as to simulate it. In the age of image, the relation to the world it offers may not be knowledge or power but something like conceit.”
I suggest it’s this very conceit that prevents us from seeing not just the manipulations to the digital image, but also the manipulations that digital images are causing in our lives. We want to see versions of our selves, our friends, and our world that are engaging, digestible, and as visually appealing as possible. In the process, we are creating a new world without noticing what it is.
Ritchin is not wrong in stating, “As in the sciences, the very act of observing can fundamentally change an outcome, and so can also fundamentally change us.” But what we aren’t observing can change us, too. I suspect that we’ll understand those changes in hindsight, and will only really see the beginning at the end.
The process of research is often boring and frustrating. The only part of design research I consistently like is the challenge of externalizing knowledge by synthesizing findings. Leveraging discovery demands a subtle balance between stating the obvious and languishing in obscure detail, and clarifying and humanizing complex information is what design is all about.
Sadly it’s not enough for me to say, “I did an enormous amount of reading, and here is the result of that work.” This is a Masters thesis project, so I have to be able to go into detail on what that enormous amount of reading was all about, why I did it, and what I got from it. And while book after book talks about synthesizing user research findings into actionable insights, there are none that I’ve found on synthesizing literature research findings. I think the reasons for this are fairly obvious. Thus, the biggest challenge I’ve been facing lately is finding a way to both communicate and most effectively utilize the literature-based research I’ve done.
When faced with a thorny problem, I try to get some distance and perspective. Some time ago, I began categorizing my readings into three broad types: prototypes (examples), frameworks (principles), and theory (approaches / pure knowledge). It occurred to me that these different types of literature might demand different types of synthesis.
I did a quick brainstorm on literature synthesis methods, and then mapped those methods onto types of literature. The resulting diagram is below. As frivolous or reductive as this might seem, it’s helped me think through how I can best communicate and present the knowledge I’ve gained through reading.
Do you know of any methods or approaches that I missed? Shoot me a message or email and let me know.
As a language enthusiast, I often use the dictionary recreationally. From Merriam-Webster, these two definitions are absolute gems.
If the dictionary reflects the concerns and values of the cultural mainstream, which I believe it aims to do, then: ideation is bad, and rigor is extreme.
I’ve been doing a lot of both lately.
I started with a casual ideation process based on my readings and research, simply jotting down notes and ideas as they came to me. From there I moved into a more rigorous (some might say strict or extreme) regimen of capturing my notes. Using my readings and research, I identified the key themes in the space of mobile digital photography.
I pulled the 3 biggest themes - sharing, performing self, and digital possessions - and assessed their limits.
For example: What happened if sharing became one singular, undifferentiated task, or a series of increasingly specialized ones? What happened if performing self became an extreme performance, or if we lost control of it entirely? What happened if digital possessions had the same rules as physical ones, or none of those rules at all?
Using this juxtaposition of mobile digital photography + limits, I was able to quickly identify the emerging critical design principles: challenge understandings, break expectations, induce discomfort, enforce limits.
I used these principles to guide several rounds of rapid ideation. The outcome ranged from thoughts as undeveloped as “cloud service” to more refined ideas that I captured in a few words.
My ultimate goal is to create a critical design solution, an intentional contradiction in terms. I moved my ideation into an assessment matrix, and am assessing it against the principles and goals of this project. More later.
When I first heard about Snapchat, a photo sharing service that makes the ephemeral nature of digital objects the very foundation of its being, I was immediately on board. I knew that the early adopters were teenagers sexting each other, but I saw incredible potential.
In design language: by embracing edges or constraints, this service created new affordances for interactions with digital photos and other users.
In human terms: this helped people share their experiences with other people in a way that felt exciting, meaningful, and new.
On the design
“People are living with this massive burden of managing a digital version of themselves,” [Snapchat founder Evan Spiegel] laments. “It’s taken all of the fun out of communicating.” (via Forbes)
Spiegel has been working in well-researched design and HCI territory, whether he realizes it or not.
From the opening pages of sociologist Erving Goffman’s 1957 essay Alienation from Interaction: “Joint spontaneous involvement is a unio mystico, a socialized trance. We must also see that a conversation has a life of its own and makes demands on its own behalf.” In other words: communication is inherently effortful, no matter how frictionless technology makes it. It can be fun, but the demands it makes on our identities and our selves are part of the bargain.
Theoretical arguments aside, I think Spiegel’s intentions come through in the design of Snapchat. It’s easy to write odes to apps that have gorgeous, modern looks, but design is fundamentally about communication and intent, and Snapchat is largely successful at both.
On the experience
Because mobile photo sharing is the subject of my thesis project, I took notes on my first few uses of Snapchat.
As a recipient, getting a snapchat* is a bit like getting snail mail. It’s an unexpected gift that feels surprisingly personal. The way they leverage cultural signifiers is likely unconscious - a box that reveals a gift, a message that self-destructs - but highly effective. Since I knew my time with each snapchat was limited, I found myself taking a moment to pay attention and tune in before clicking each little box. Unlike a photo or news feed, I didn’t feel like I was feeding at a trough. However, it took some time for me to understand the social agreements behind snapchatting and thus feel ready to create my own.
*I am using lower case to refer to the object and activity, rather than the service itself.
As a sender, I found that Snapchat’s interface has a stripped-down simplicity that makes the barrier to entry feel low. You click the camera icon, the big round button, and then pick recipients: easy!
But what about writing text on my photo? Drawing lines or arrows? Shooting video? These were things I had seen others doing, so I knew they existed, but I didn’t see them available. By clicking aimlessly at everything on the screen, I was able to discover 2 of these 3 features, and texted a friend in frustration to ask about the 3rd.
Frankly, I wasn’t sure whether to be annoyed or intrigued. There’s a fine line between alienation and discovery, and I could see Snapchat’s user experience falling into either camp.
As anyone I am “friends” with can confirm, I engage in online sharing infrequently. There’s a certain thoughtless and normative abandon I see in the photo / comment / news feed model that doesn’t resonate with me. A large portion of my motivation for this thesis project is the gut feeling that there must be a better way. After using Snapchat over a number of months, my interest in the service remains largely conceptual. This is better than the feed, but the implementation feels unfinished. I’m reserving judgment until I see the outcomes of their future monetization and design efforts.
This is a response to the introductory chapter of the book How to See, by George Nelson.
"Literacy is the bedrock on which all modern societies rest." -George Nelson
When we encounter a story about illiteracy, it’s often tragic. It’s the story of an adult who can’t find work, fears going to new places, and can’t read a letter from their friend. It’s the story of a child who drops out of school and enters a life of crime, because they can no longer cover up their inability to read. Illiteracy is heartbreaking, and, though we may not encounter it often or directly, it’s a problem that we are deeply aware of. Visual literacy is another matter entirely. While I’m not saying that visual literacy has the same urgent importance as reading literacy, I am positing that maybe… just maybe… it does.
Visual literacy is the ability to both see and understand what you see.
In his excellent book Art as Experience, John Dewey tells us that experiencing a piece of art lives within the perception of the viewer. The viewer brings with them every experience they’ve ever had, and they take their new experience in through a lens informed by their past. George Nelson says a similar thing about more general visual experience: “We all tend to see in terms of what we know, or believe.”
Visual literacy is an outcome of that knowing and believing. But it doesn’t just happen. You can put a child in front of the alphabet, but without critical thought, guidance, and experimentation, that alphabet will never turn into communication. The child has to know and believe that letters, strung together just so, create a culturally shared language.
Several years ago, I worked as a buyer for a fashion retailer. A large part of my job was going to trade shows: enormous events where hundreds of vendors would bring together thousands upon thousands of garments for buyers like myself to see.
This was a special kind of seeing. Each item was different in form, structure, function, color, style, social capital, brand status… each and every item held a different meaning through its design, and it was my job to see that meaning and select the items that would be meaningful for my company’s audience. Friends often envied my job, exclaiming things like, “You get to shop for a living!” Statements like this underestimated and undervalued the visual literacy I had developed. Learning to see so much in every garment was a hard-earned skill, and an understanding that changed my relationship with what I saw.
We do this often, as a culture. We think that seeing is seeing, and any old understanding will do.
Understanding comes from seeing, and it shouldn’t be a specialized skill. George Nelson says, as part of a discussion on education’s focus on other kinds of literacy, “Technology needs people who can read and write, add and subtract.” This brings us back to the urgent importance of visual literacy. Today, when so much of our technology is both visual and largely intangible, knowing how to “read” the man-made, visual environment of our devices should be a universal skill. Just as the democratization of reading and writing needed the galvanizing force of the printing press, the ability to see needs a galvanizing force of its own.
This is a zoomed-in still from the concept video for Art Tap, a project I worked on in Spring 2013. Art Tap allowed people to engage with art museums… through their mobile phones. By engaging with a familiar visual, they could learn more about new visual languages.
I wonder sometimes if the internet has been that force, and we just haven’t noticed yet. The internet hasn’t made us more aware of the experience and meaning of a piece of art, but it has given us a new language of visual communication. The visual language of the internet is fast, fleeting, and constantly changing. There are a lot of cats. The grammar is sloppy and egalitarian. Sexism, racism, homophobia, and casual hate speech run rampant. Yet the frequency with which we are able to share images, and the reach we have with every instance of sharing, is unparalleled in our history. Perhaps we have developed visual literacy, but it’s a literacy that rests on a foundation of social norms, interactions, and experiences that have yet to fully develop into a language.
In the meantime, I will continue to cherish the visual literacy I have developed in art and fashion. Like George Nelson, I will hold out hope that the rich, slow, and consistent languages I speak will continue to remain relevant in an age that may primarily value the opposite.
Second semester is just around the corner, so a quick update on what I’ve been up to. Given that I’m incapable of being happy unless I’m busy, I decided to use my month-long break between semesters to better myself, or whatever. Goals: get better at drawing, read some design classics, and make my portfolio.
I knocked out my portfolio first; it’s at shrzd.com if you’re curious about my work. Then I went to the library. Guys, I love the library. I spent an hour caressing various books, deciding what I was going to devote my limited time to. Below is a list of what I read, along with my brief thoughts on each book.
Drawing for Graphic Design, by Timothy Samara. As an actual drawing course, this book is way too involved. Even I balked at the amount of drive it would take to complete the recommended exercises. As a book on drawing, I vastly preferred the simple and to-the-point book You Can Draw (below). However, Samara is an excellent teacher and the entire first half of the book was foundational and theoretical teaching. His approach to basic design principles was lucid, articulate, and really pleasurable to read. I’m going to be reading two more of his books.
You Can Draw by Bruce Robertson. A really short, really old book that does everything I needed it to do. Robertson takes you through a primer/refresher course on proper tools and technique, then provides a series of exercises and frameworks that can be tailored to your specific skill level. This book could not have been better. When I was in high school I used to be able to say, with some level of awkward adolescent confidence, “yeah, I can draw.” I feel like I can say that again today.
A Designer’s Art, by Paul Rand. This was a fast read. I was already familiar with much of Mr. Rand’s work, but the clarity with which he laid out design process and design thinking was lovely. What he said here, so long ago, is what the rest of the world is finally starting to see value in. My favorite part was a quote from Frank Lloyd Wright’s Work Song, a sentiment that no one could possibly express better: “I’ll work as I’ll think as I am.”
The New Typography, by Jan Tschichold. I got this just a few days ago, so I’ve only started it. It is… interesting. Tschichold’s passionate belief in the collective, and his dogmatic and aggressive stance on its beliefs, really highlight how young he was when he wrote it. My copy of the book has a thoughtful introduction which outlines how his beliefs changed as he aged, and also shows the revisions he would have liked to make to later editions. Overall, I’m finding this worthwhile mainly in my exploration of subcultures, collectives, and shared experiences (the topic of my eventual thesis).
The Visual Display of Quantitative Information, by Edward Tufte. I didn’t think I was going to like this book, and I was extremely wrong. Tufte has a no-nonsense approach to visual communication that really resonates with me. Also, he reduces complex principles (e.g. truthfulness, efficiency of data-ink) to ratios, which is supposed to be serious, but which I find mostly entertaining. I used to think I was a bad designer because my graphics never looked flashy and complicated, but now I realize that I think like Tufte: a graphic should communicate without obfuscating.
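A side note for the data-curious: the data-ink ratio really is just arithmetic, which is part of the charm. Here is a minimal sketch of the idea (my own illustration, not code from Tufte’s book):

```python
def data_ink_ratio(data_ink: float, total_ink: float) -> float:
    """Tufte's data-ink ratio: the share of a graphic's ink that
    actually encodes data. 1.0 is the (unreachable) ideal."""
    if total_ink <= 0:
        raise ValueError("total ink must be positive")
    return data_ink / total_ink

# A chart spending 30 of 100 "units" of ink on data scores 0.3;
# erase decoration until only 40 units remain and it scores 0.75.
print(data_ink_ratio(30, 100))  # 0.3
print(data_ink_ratio(30, 40))   # 0.75
```

The point, of course, is not the number itself but the habit of asking which ink is doing the communicating.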
Finally, Interaction of Color, by Josef Albers. I’m a long-time fan of Albers’ artwork, and I love color, so I freaked out, as they say, when I realized this book existed. I’m working through the exercises with the suggested Color-Aid cards, and I feel like nothing has ever been more fun. If you care about color, I can’t recommend this book enough.
I did a bunch of other things over the break: had four house guests over two weeks, traveled to two other states, did lead paint abatement on our basement, and bought some pens. I’ll probably talk more about the pens in a future post.
I’ve been in grad school for less than two months, and I’ve learned that there are two types of learning. There’s the kind that augments an existing framework in your brain, building out on scaffolding your previous experiences created, and then there’s the weirder kind - the kind that builds a new little nest inside your head, and thus gives home to a whole collection of thoughts that previously wouldn’t have stayed with you, thoughts that make you see and think completely differently.
Here are some other things I’ve learned-
1. In any sort of group activity, whoever shows up with the first and best graphics WINS. They control the conversation and the direction of what happens next.
2. If you’ve presented something and no one has any feedback, one or more of the following things have happened (whether or not you want to admit it):
a. What you’ve made is just appallingly bad.
b. You’ve entirely missed the point of the project.
c. Your narrative failed, so no one was listening.
3. I always thought “visual thinker” was a myth, and I was wrong. Visual thinkers think non-visual thinkers are a myth, and they are also wrong.
4. Espresso is a truly superior method of caffeine consumption.
Perhaps most importantly, I’ve learned that this is where I belong. (cue The Kinks)
I rarely read so-called ladyblogs. I think the Hairpin, to take the same specific example used by Ms. Fischer, is largely condescending, repulsively phony, and offensively self-congratulatory. I don’t read non-ladyblogs often either. I think blogging has devolved into a giant lazy finger pointing at itself, a point that (in the second best thing I’ve read recently) Blake Andrews illustrates beautifully in his photography-specific piece The Finger.
Somehow, as women, we’re allowed to complain about the rest of the blogging world, but not about “our” blogging world. We violate the terms of sisterhood if we don’t put hugs and kisses into everything we do, and if we disagree with what’s said or done, we’re supposed to keep our mouths shut lest we hurt someone’s feelings. Everything is personal, even when it clearly isn’t, like your readers’ feelings on those cute new shoes you wear once to take a picture of.
I’ve dealt with this playground mentality by divorcing myself from that world entirely, which I admit is avoidant and unproductive. Thus, I can only applaud Molly Fischer for having the courage to say what needs to be said, even in the face of truly ridiculous personal attacks that revolve around “boys” and “slumber parties”.
As she puts it, "[my ideal website] would be one where good faith could be assumed without gussying everything up in the trappings of intimacy, swaddling tricky subjects in chattiness. These are gestures that seem strange and infantilizing to me, because instant friendship regardless of individuality is the kind of assumption that parents make about children (“They have a daughter your age, you’ll have fun!”) and bosses about subordinates and majorities about minorities, but not one equals in power typically make about one another."
Ladies, we’re allowed to disagree, and we’re allowed to argue about what we really believe. We’re allowed to really believe things, and we’re allowed to have strong opinions that we stand by. Men have done it for centuries, so let’s stop pretending like we don’t want that same freedom.
P.S. I also think the forced faux-intimacy of the ladyblog world creates unrealistic expectations in female friendships, but that’s a topic for another time.
You know all those alarmist articles about data mining and our status as a collection of likes and dislikes ready to be massaged into a mindless, advertising-driven monetary stream? They’re onto something: the internet knows you better than you know yourself.
Here is a semi-random selection from 3+ years of my Twitter “favorites,” clearly revealing that I think being a human being on the internet is fraught with problems and that personal branding is a joke at best. Of course, this happened without me ever realizing that I felt this way. And yes, I favorited one of my own tweets.
Some Asshole, @MoorishDignity Shit just got fake.
Michael Ian Black, @michaelianblack Maybe I should stop being surprised when I have a good time hanging out with people.
todd levin, @toddlevin Twitter cuts through all the b.s. that would otherwise distract us from behaving like needy, easily wounded babies.
Alain de Botton, @alaindebotton Half the fear of failure is of the judgement of false friends we feel compelled to impress but don’t even like.
Kevin Fanning, @kfan due to overexposure to the internet I can no longer tell where my sincerity ends and my personal brand begins
99%, @the99percent "He who trims himself to suit everyone will soon whittle himself away." -Raymond Hull
Bob Powers, @bobpowers1 If any corporations tracking my purchases are reading this, where did it all go wrong for me? Was it the Papasan Chair?
S S, @luckmachine Went to the store to try to buy my life some meaning.
Amy Stein, @Amy_Stein We’ve become fixated on the process of distribution instead of the process of personal growth as artists
Andy Borowitz, @BorowitzReport 1963: Ask not what your country can do for you 2011: Please follow me and I will follow you back k thanks #USA
Women Of History, @WomenOfHistory Fame means millions of people have the wrong idea of who you are. -Erica Jong
Neal Brennan, @nealbrennan 2001: “He’s a good dude.” 2012: “He’s really on-message about the kindness of his brand.”
“What tweet is that, flashing, subliminally, behind the others? In exactly 140 characters: “I need to be noticed so badly that I can’t pay attention to you except inasmuch as it calls attention to me. I know for you it’s the same.” ”—
This quote - sharp, revealing and concise - captures everything I love about Twitter… and with what it says, it captures everything I hate about the social internet.
I started blogging when that meant updating a notepad file with HTML and text, so I went through the agony and anxiety of notice me, notice me, approve of me - online during early adolescence. As I grew and as the possibilities of internet presence expanded into Geocities, Xanga, Makeoutclub(!), I saw how undignified this approach looked on others and I slowly left it behind. I found a new way to embarrass myself as my subtext became, in less than 140 characters, Get the f*** away from me. I don’t need your attention and I will capture it and then alienate you to prove my point.
In case any of you are verging into this needlessly complex and reactionary territory, I’d urge you to find another path.
Because here is the real problem for those of us aware of our place on the spectrum of online self-consciousness: Social media demands that we find our brand and stick to it, whether that brand involves entertaining, educating, or alienating, and it is fundamentally anti-human to do so.
I’m tired of being one person (or a series of people, as I try to find one that “sticks”) online and another “in real life”. To create a personal brand is to deny real expression and real connections, and it creates a rift in our self-defined identity. When we look back at our timelines, archives, and histories, we see someone - or something - else looking back at us. Each of us has a best that changes with every moment, yet even capturing that fleeting best and preserving it online would not yield such an inflexible, demanding, and destabilizing version of our selves.
It’s time to engage with the internet as human beings, with the intelligent thought and social intent that only humans are capable of. I’m still figuring out exactly what that means.
“The electricity rushed down the sword, inside my skull, made my hair stand up and sparks fly out of my ears. He then shouted at me, “Live forever!”
I thought that was a wonderful idea, but how did you do it?”—I’m torn up over the passing of Ray Bradbury, more than I realized I would be. This man was a hero to me in so many ways I find difficult to articulate. Maybe when I’m done reading Dewey’s Art as Experience I’ll be better at “expression”, but for now I’ll be revisiting Mr. Bradbury’s exceptional work. As suggested by a tweet, I started here.
I’m a fan, personally, of art that sucks at marketing itself, that doesn’t have a cute backstory or a built-in “platform,” that is not cuddly or “adorkable” and doesn’t immediately lend itself to a hierarchy of “rewards” for “backers,” that is antisocial and prickly and deeply strange. So the trend towards crowdsourced funding for exactly the opposite kind of art leaves me cold.
I’ve been thinking lately about people who “play the game”, people who make work that is marketable or invent a marketable persona. Some of them are just regurgitating what’s SoInRightNow in a way I find largely irritating and superficial, as touched on above, but some are doing the equivalent of putting on Spanx to go to the convenience store. If all of that second group would stop trying to win someone else’s game, maybe there would finally be a market for the unmarketable.
Ultimately, the game is pretty hard to get away from when everyone is self-conscious about their bottom line.
Here is that Joel Meyerowitz quote, largely for my own future reference since I can’t find it on the internet:
What are we all trying to get to in the making of anything? We’re trying to get to ourselves. What I want is more of my feelings and less of my thoughts. I want to be clear. I see the photograph as a piece of experience itself. It exists in the world. It is not a comment on the world.
Printed in the book Color Photography, Assouline, 2001.
“A man may take to drink because he feels himself to be a failure, and then fail all the more completely because he drinks. It is rather the same thing that is happening to the English language. It becomes ugly and inaccurate because our thoughts are foolish, but the slovenliness of our language makes it easier for us to have foolish thoughts. The point is that the process is reversible.”—If you care about language and need something to keep you occupied over the weekend, I recommend "Politics and the English Language" by George Orwell, 1946. Or I guess you could actually communicate with people. I don’t know.
There’s nothing wrong with not being any good at photography. Everybody started out bad and none of us does all aspects of it well. But it’s a crying shame to want to be good at it, to spend time and money trying to be good at it, and not getting any better.
This isn’t like teaching a child to read. Positive reinforcement is your enemy. Your Facebook friends, your Twitter followers… hate you. Instead of taking ten seconds to say, “This doesn’t work. You need to do better,” they readily push that “like” button, because it’s easy and they hope to get the same from you, but also because they’re cowards.
Read the whole thing at Mostly True; it’s worth it. It takes courage to be honest with yourself and others. It takes even more courage to know you suck, and to make the hard decisions that follow.
I watched a photography tutorial once, I don’t remember what it was, that ended with a particularly satisfying joke. It went something like this: Master all of this, and you’ll earn the ultimate compliment from people who see your work: “Wow, you must have a really nice camera!”
People think, for some reason, that this is a response that makes sense. If you’ve done this yourself, I have no hard feelings about it… but maybe, the next time you enjoy a piece of writing, think about whether it’s the writer or the computer that made it good. Was it the clarity of the printed word that brought you enjoyment? Or was it the words themselves?