


Is Google Making Us Stupid?

Nicholas Carr

What the Internet is doing to our brains

"Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave?" So the supercomputer HAL pleads with the implacable astronaut Dave Bowman in a famous and weirdly poignant scene toward the end of Stanley Kubrick's 2001: A Space Odyssey. Bowman, having nearly been sent to a deep-space death by the malfunctioning machine, is calmly, coldly disconnecting the memory circuits that control its artificial "brain." "Dave, my mind is going," HAL says, forlornly. "I can feel it. I can feel it."

I can feel it, too. Over the past few years I've had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn't going—so far as I can tell—but it's changing. I'm not thinking the way I used to think. I can feel it most strongly when I'm reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I'd spend hours strolling through long stretches of prose. That's rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I'm always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.

I think I know what's going on. For more than a decade now, I've been spending a lot of time online, searching and surfing and sometimes adding to the great databases of the Internet. The Web has been a godsend to me as a writer. Research that once required days in the stacks or periodical rooms of libraries can now be done in minutes. A few Google searches, some quick clicks on hyperlinks, and I've got the telltale fact or pithy quote I was after. Even when I'm not working, I'm as likely as not to be foraging in the Web's info-thickets: reading and writing e-mails, scanning headlines and blog posts, watching videos and listening to podcasts, or just tripping from link to link to link. (Unlike footnotes, to which they're sometimes likened, hyperlinks don't merely point to related works; they propel you toward them.)

For me, as for others, the Net is becoming a universal medium, the conduit for most of the information that flows through my eyes and ears and into my mind. The advantages of having immediate access to such an incredibly rich store of information are many, and they've been widely described and duly applauded. "The perfect recall of silicon memory," Wired's Clive Thompson has written, "can be an enormous boon to thinking." But that boon comes at a price. As the media theorist Marshall McLuhan pointed out in the 1960s, media are not just passive channels of information. They supply the stuff of thought, but they also shape the process of thought. And what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.

I'm not the only one. When I mention my troubles with reading to friends and acquaintances—literary types, most of them—many say they're having similar experiences. The more they use the Web, the more they have to fight to stay focused on long pieces of writing. Some of the bloggers I follow have also begun mentioning the phenomenon. Scott Karp, who writes a blog about online media, recently confessed that he has stopped reading books altogether. "I was a lit major in

medium of choice. But it's a different kind of reading, and behind it lies a different kind of thinking—perhaps even a new sense of the self. "We are not only what we read," says Maryanne Wolf, a developmental psychologist at Tufts University and the author of Proust and the Squid: The Story and Science of the Reading Brain. "We are how we read." Wolf worries that the style of reading promoted by the Net, a style that puts "efficiency" and "immediacy" above all else, may be weakening our capacity for the kind of deep reading that emerged when an earlier technology, the printing press, made long and complex works of prose commonplace. When we read online, she says, we tend to become "mere decoders of information." Our ability to interpret text, to make the rich mental connections that form when we read deeply and without distraction, remains largely disengaged.

Reading, explains Wolf, is not an instinctive skill for human beings. It's not etched into our genes the way speech is. We have to teach our minds how to translate the symbolic characters we see into the language we understand. And the media or other technologies we use in learning and practicing the craft of reading play an important part in shaping the neural circuits inside our brains. Experiments demonstrate that readers of ideograms, such as the Chinese, develop a mental circuitry for reading that is very different from the circuitry found in those of us whose written language employs an alphabet. The variations extend across many regions of the brain, including those that govern such essential cognitive functions as memory and the interpretation of visual and auditory stimuli. We can expect as well that the circuits woven by our use of the Net will be different from those woven by our reading of books and other printed works.

Sometime in 1882, Friedrich Nietzsche bought a typewriter—a Malling-Hansen Writing Ball, to be precise. His vision was failing, and keeping his eyes focused on a page had become exhausting and painful, often bringing on crushing headaches. He had been forced to curtail his writing, and he feared that he would soon have to give it up. The typewriter rescued him, at least for a time. Once he had mastered touch-typing, he was able to write with his eyes closed, using only the tips of his fingers. Words could once again flow from his mind to the page.

But the machine had a subtler effect on his work. One of Nietzsche's friends, a composer, noticed a change in the style of his writing. His already terse prose had become even tighter, more telegraphic. "Perhaps you will through this instrument even take to a new idiom," the friend wrote in a letter, noting that, in his own work, his "'thoughts' in music and language often depend on the quality of pen and paper."

Also see: Living With a Computer (July 1982), by James Fallows: "The process works this way. When I sit down to write a letter or start the first draft of an article, I simply type on the keyboard and the words appear on the screen..."

"You are right," Nietzsche replied, "our writing equipment takes part in the forming of our thoughts." Under the sway of the machine, writes the German media scholar Friedrich A. Kittler, Nietzsche's prose "changed from arguments to aphorisms, from thoughts to puns, from rhetoric to telegram style."

The human brain is almost infinitely malleable. People used to think that our mental meshwork, the dense connections formed among the 100 billion or so neurons inside our skulls, was largely fixed by the time we reached adulthood. But brain researchers have discovered that that's not the case. James Olds, a professor of neuroscience who directs the Krasnow Institute for Advanced Study at George Mason University, says that even the adult mind "is very plastic." Nerve cells routinely break old connections and form new ones. "The brain," according to Olds, "has the ability to reprogram itself on the fly, altering the way it functions."

As we use what the sociologist Daniel Bell has called our "intellectual technologies"—the tools that extend our mental rather than our physical capacities—we inevitably begin to take on the qualities of those technologies. The mechanical clock, which came into common use in the 14th century, provides a compelling example. In Technics and Civilization, the historian and cultural critic Lewis Mumford described how the clock "disassociated time from human events and helped create the belief in an independent world of mathematically measurable sequences." The "abstract framework of divided time" became "the point of reference for both action and thought."

The clock's methodical ticking helped bring into being the scientific mind and the scientific man. But it also took something away. As the late MIT computer scientist Joseph Weizenbaum observed in his 1976 book, Computer Power and Human Reason: From Judgment to Calculation, the conception of the world that emerged from the widespread use of timekeeping instruments "remains an impoverished version of the older one, for it rests on a rejection of those direct experiences that formed the basis for, and indeed constituted, the old reality." In deciding when to eat, to work, to sleep, to rise, we stopped listening to our senses and started obeying the clock.

The process of adapting to new intellectual technologies is reflected in the changing metaphors we use to explain ourselves to ourselves. When the mechanical clock arrived, people began thinking of their brains as operating "like clockwork." Today, in the age of software, we have come

machines, and recorded and timed their every movement as well as the operations of the machines. By breaking down every job into a sequence of small, discrete steps and then testing different ways of performing each one, Taylor created a set of precise instructions—an "algorithm," we might say today—for how each worker should work. Midvale's employees grumbled about the strict new regime, claiming that it turned them into little more than automatons, but the factory's productivity soared.

More than a hundred years after the invention of the steam engine, the Industrial Revolution had at last found its philosophy and its philosopher. Taylor's tight industrial choreography—his "system," as he liked to call it—was embraced by manufacturers throughout the country and, in time, around the world. Seeking maximum speed, maximum efficiency, and maximum output, factory owners used time-and-motion studies to organize their work and configure the jobs of their workers. The goal, as Taylor defined it in his celebrated 1911 treatise, The Principles of Scientific Management, was to identify and adopt, for every job, the "one best method" of work and thereby to effect "the gradual substitution of science for rule of thumb throughout the mechanic arts." Once his system was applied to all acts of manual labor, Taylor assured his followers, it would bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency. "In the past the man has been first," he declared; "in the future the system must be first."

Taylor's system is still very much with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual lives, Taylor's ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the "one best method"—the perfect algorithm—to carry out every mental movement of what we've come to describe as "knowledge work."

Google's headquarters, in Mountain View, California—the Googleplex—is the Internet's high church, and the religion practiced inside its walls is Taylorism. Google, says its chief executive, Eric Schmidt, is "a company that's founded around the science of measurement," and it is striving to "systematize everything" it does. Drawing on the terabytes of behavioral data it collects through its search engine and other sites, it carries out thousands of experiments a day, according to the Harvard Business Review, and it uses the results to refine the algorithms that increasingly control how people find information and extract meaning from it. What Taylor did for the work of the hand, Google is doing for the work of the mind.

The company has declared that its mission is "to organize the world's information and make it universally accessible and useful." It seeks to develop "the perfect search engine," which it defines as something that "understands exactly what you mean and gives you back exactly what you want." In Google's view, information is a kind of commodity, a utilitarian resource that can be mined and processed with industrial efficiency. The more pieces of information we can "access" and the faster we can extract their gist, the more productive we become as thinkers.

Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains. "The ultimate search engine is something as smart as people—or smarter," Page said in a speech a few years back. "For us, working on search is a way to work on artificial intelligence." In a 2004 interview with Newsweek, Brin said, "Certainly if you had all the world's information directly attached to your brain, or an artificial brain that was smarter than your brain, you'd be better off." Last year, Page told a convention of scientists that Google is "really trying to build artificial intelligence and to do it on a large scale."

Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ. A fundamentally scientific enterprise, Google is motivated by a desire to use technology, in Eric Schmidt's words, "to solve problems that have never been solved before," and artificial intelligence is the hardest problem out there. Why wouldn't Brin and Page want to be the ones to crack it?

Still, their easy assumption that we'd all "be better off" if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google's world, the world we enter when we go online, there's little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.

The idea that our minds should operate as high-speed data-processing machines is not only built into the workings of the Internet, it is the network's reigning business model as well. The faster we surf across the Web—the more links we click and pages we view—the more opportunities Google and other companies gain to collect information about us and to feed us advertisements. Most of the proprietors of the

deep thinking. If we lose those quiet spaces, or fill them up with "content," we will sacrifice something important not only in our selves but in our culture. In a recent essay, the playwright Richard Foreman eloquently described what's at stake:

I come from a tradition of Western culture, in which the ideal (my ideal) was the complex, dense and "cathedral-like" structure of the highly educated and articulate personality—a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West. [But now] I see within us all (myself included) the replacement of complex inner density with a new kind of self—evolving under the pressure of information overload and the technology of the "instantly available."

As we are drained of our "inner repertory of dense cultural inheritance," Foreman concluded, we risk turning into "'pancake people'—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button."

I'm haunted by that scene in 2001. What makes it so poignant, and so weird, is the computer's emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut—"I can feel it. I can feel it. I'm afraid"—and its final reversion to what can only be called a state of innocence. HAL's outpouring of feeling contrasts with the emotionlessness that characterizes the human figures in the film, who go about their business with an almost robotic efficiency. Their thoughts and actions feel scripted, as if they're following the steps of an algorithm. In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That's the essence of Kubrick's dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.

Nicholas Carr's most recent book, The Big Switch: Rewiring the World, From Edison to Google, was published earlier this year.

Does the Internet Make You Dumber?

The cognitive effects are measurable: We're turning into shallow thinkers, says Nicholas Carr.

By NICHOLAS CARR

The Roman philosopher Seneca may have put it best 2,000 years ago: "To be everywhere is to be nowhere." Today, the Internet grants us easy access to unprecedented amounts of information. But a growing body of scientific evidence suggests that the Net, with its constant distractions and interruptions, is also turning us into scattered and superficial thinkers.


The picture emerging from the research is deeply troubling, at least to anyone who values the depth, rather than just the velocity, of human thought. People who read text studded with links, the studies show, comprehend less than those who read traditional linear text. People who watch busy multimedia presentations remember less than those who take in information in a more sedate and focused manner. People who are continually distracted by emails, alerts and other messages understand less than those who are able to concentrate. And people who juggle many tasks are less creative and less productive than those who do one thing at a time.

thinking.

56 Seconds: Average time an American spends looking at a Web page. Source: Nielsen

In one experiment conducted at Cornell University, for example, half a class of students was allowed to use Internet-connected laptops during a lecture, while the other had to keep their computers shut. Those who browsed the Web performed much worse on a subsequent test of how well they retained the lecture's content. While it's hardly surprising that Web surfing would distract students, it should be a note of caution to schools that are wiring their classrooms in hopes of improving learning.

Ms. Greenfield concluded that "every medium develops some cognitive skills at the expense of others." Our growing use of screen-based media, she said, has strengthened visual-spatial intelligence, which can improve the ability to do jobs that involve keeping track of lots of simultaneous signals, like air traffic control. But that has been accompanied by "new weaknesses in higher-order cognitive processes," including "abstract vocabulary, mindfulness, reflection, inductive problem solving, critical thinking, and imagination." We're becoming, in a word, shallower.

In another experiment, recently conducted at Stanford University's Communication Between Humans and Interactive Media Lab, a team of researchers gave various cognitive tests to 49 people who do a lot of media multitasking and 52 people who multitask much less frequently. The heavy multitaskers performed poorly on all the tests. They were more easily distracted, had less control over their attention, and were much less able to distinguish important information from trivia.

The researchers were surprised by the results. They had expected that the intensive multitaskers would have gained some unique mental advantages from all their on-screen juggling. But that wasn't the case. In fact, the heavy multitaskers weren't even good at multitasking. They were considerably less adept at switching between tasks than the more infrequent multitaskers. "Everything distracts them," observed Clifford Nass, the professor who heads the Stanford lab.

Also see: Does the Internet Make You Smarter? Amid the silly videos and spam are the roots of a new reading and writing culture, says Clay Shirky.

It would be one thing if the ill effects went away as soon as we turned off our computers and cellphones. But they don't. The cellular structure of the human brain, scientists have discovered, adapts readily to the tools we use, including those for finding, storing and sharing information. By changing our habits of mind, each new technology strengthens certain neural pathways and weakens others. The cellular alterations continue to shape the way we think even when we're not using the technology.

The pioneering neuroscientist Michael Merzenich believes our brains are being "massively remodeled" by our ever-intensifying use of the Web and related media. In the 1970s and 1980s, Mr. Merzenich, now a professor emeritus at the University of California in San Francisco, conducted a famous series of experiments on primate brains that revealed how extensively and quickly neural circuits change in response to experience. When, for example, Mr. Merzenich rearranged the nerves in a monkey's hand, the nerve cells in the animal's sensory cortex quickly reorganized themselves to create a new "mental map" of the hand.

In a conversation late last year, he said that he was profoundly worried about the cognitive consequences of the constant distractions and interruptions the Internet bombards us with. The long-term effect on the quality of our intellectual lives, he said, could be "deadly."

What we seem to be sacrificing in all our surfing and searching is our capacity to engage in the quieter, attentive modes of thought that underpin contemplation, reflection and