Here’s a beautiful short documentary about beachcombing culture on the Dutch Wadden Sea island of Texel. Because of its geographic location, lots and lots of stuff has washed up on the shore of Texel for hundreds of years. Hence, a particular beachcombing culture (‘jutterij’ in Dutch) has developed here, and you can find museums on the island of all the things that have been found through the years.
Shot in nice colors, with lots of moving images from days past and interviews with colorful local people.
Hipsters have been around since at least the late 1990s (although this is a label that is applied in retrospect). It is only in the last few years that mainstream society has started wearing moustaches and ironic glasses and embracing indie and electronic music.
The magazine Flavorwire has therefore asked eight American experts (sociologists, historians and cultural theorists): what will come after the hipster? Their answers are very interesting.
With Lana Del Rey’s meteoric, blog hype-fueled rise and rapid, SNL-catalyzed descent, the mere existence of MTV’s I Just Want My Pants Back and the trendy intellectual publication n+1 already taking a wishful backward glance at the subculture, hipsterdom appears to be on the wane. Have we reached a tipping point? If so, what’s next for American youth-based movements?
Here are some of the responses:
It’s difficult to talk about these groups as a “lineage,” because besides being groups that were associated with young Americans, they all had different levels of cohesion, formed in response to different social conditions, and produced different results. It seems to me that the beatniks and hippies were reacting more to society-level characteristics (conformity, political and cultural conservatism), whereas I associate the punks and “grunge” folks (slackers? Generation X?) with a cultural rebellion, reacting against a certain ossification in corporate culture (and especially music, although not exclusively). Interestingly, hip hop is missing from this list, and it seems to be doing both and neither at once, creating something new out of very limited opportunities. Hipsters seem to be a more general taste culture, embodying a number of different critiques of modern society in a more holistic, but I think less defined, way.
Predicting what comes after the hipster is almost as impossible as predicting the hippies would have been in 1959, or predicting the punks in 1967 (unless you knew that the Velvet Underground’s mostly-unheard debut album would give rise to a whole scene of like-minded folks a decade later). Subcultures usually form in response to some sort of perceived cultural conformity or hegemony. For me, today, that’s technology and the Internet, and in a way, some of today’s hipsters participate in some activities that try to eschew modernity (craft food and spirits, knitting, canning, etc.). However, I can’t see a youth subculture forming to react against modern technology, since it has become so intertwined with modern life.
I’m not sure that anything is going to emerge after the hipster, but not because we won’t have any widespread, cohesive youth cultures anymore. I think it’s possible that the hipster is just going to stay around indefinitely. As I said in my article “Generation Sell,” the hipster has been around as the dominant youth culture for way longer than anything that’s come before, and it occupies a place relative to mainstream culture that’s completely different. It’s not counter-cultural; it fits perfectly within the values of a large part of the mainstream, the so-called Bobos or bourgeois bohemians, which is what most members of the liberal upper-middle-class are. Hipsters are usually seen as consumers — “self-curators” who painstakingly select the music, movies, clothing and so on through which they construct their identities. More useful is to understand them as producers and distributors. Hipsters create Bobo culture. They make or sell or serve, or simply pioneer, what Bobos buy.
Of course, I could be wrong. The best bet for the next thing would be for something to emerge from the Occupy movement: less concerned about music and clothing, more concerned about politics; less concerned about differentiating yourself from the people around you, more concerned about working with them; less concerned about status, more concerned about social change; less ironic, more earnest; less polished, more grungy.
I don’t know whether the hipster was ever a cohesive subculture. It seemed more of a media creation than anything else, and as such it appeared coherent primarily from an outsider’s perspective. How many people do you know that actually call themselves hipsters? I don’t know any, or should I say the people I know that I consider hipsters only acknowledge that identity with sarcasm or irony.
So on the one hand, there appears to be this subculture called Hipster to the extent that we’ve learned to label certain clothing styles or mannerisms or values that way. On the other hand, many of the so-called hipsters I know are more concerned with being unique than they are being a part of something coherent.
[The] hipsters of the 2000s are a Millennial generation subculture (actually, a small, affluent niche of Millennials with enough cultural capital to discern hipness from a lack of hipness). Whatever comes after Millennials will find its own awesome or annoying forms of expression, and we just don’t know what it will look like because it hasn’t happened yet. But it’s probably safe to assume that like hipsters — whether of the 1940s and ’50s or of more recent days — the next waves of youth subculture will reject many aspects of square society, pick and choose elements of earlier styles or appropriate the styles of other cultures, define itself especially by its music and dress, and reject whatever label is given to it.
I predict that the hipster craze will pass, but the hipster will endure. Many subcultures exist in the side streets and alleys of mainstream society at any given time. These subcultures have their own distinct identities, and seek to differentiate themselves within the broader cultural landscape. Sometimes, a subculture will garner mainstream attention, as we’re currently seeing happen with the hipster, and the subculture may resent its newfound popularity. As the hipster subculture gains mass appeal, new adopters can diffuse or alter the hipster identity, causing the identity to become less distinct.
[The] “hipster” of today, at least as defined by my own observations and those of my students who are “in” that scene, is not one who is as much concerned with breaking or bending norms as he/she is with appearing to be different just for the sake of being different. It has become a superficiality of fashion and culture. The “hipster” of today doesn’t represent any kind of movement in the way that the Beats, Hippies, or Punks did. And, since so many of our taboos have been at least decriminalized, if not outright abandoned, what is there really to non-conform against? Not very much. That said, the other reason why it’s impossible to tell what’s coming next is because the “next thing” is always the product of a unique conflation of sociocultural, economic, technological, and political forces.
High up in the Balkan Mountain range in Bulgaria stands the Buzludzha Monument. After a 7-year construction period, it was opened in 1981 as a testament to the greatness of Soviet Communism and the history of Bulgaria. It resembles a UFO and is, well, one of those highlights of retrofuturist Communist architectural aesthetics. It also contains a time capsule for future generations. After the fall of the regime, however, it fell into decline.
This guy visited the monument in midwinter this year. That’s a pretty hazardous trek, but the resulting photos are stunning. He also flew a microlight over it to capture it from above. So check out these incredibly cool pictures – definitely worth your while.
Interesting stuff: historical research has shown that in the past, people used to sleep in two segments of four hours, with a waking period in between, rather than eight hours straight. This was the normal sleeping pattern until the late seventeenth century! Not until the 1920s did the eight-hour schedule become the norm.
Various explanations, such as the development of street lighting, have been offered for this change. Historical documents also show that people used to do a lot in between the two segments of sleep, such as pray, eat, write, have sex, visit the neighbours, etc. It may very well be that the current modern mass problem of insomnia results from people's inability to actually sleep for eight hours straight, which may be unnatural.
I’m pretty convinced that the modern-day demands of working life (getting up early in the morning, going to bed early at night) are a straitjacket that is very unnatural for a lot of people who are naturally inclined to be ‘evening people’. This research shows that this wasn’t always the case, but rather that it coincided with the advent of modernity.
In 2001, historian Roger Ekirch of Virginia Tech published a seminal paper, drawn from 16 years of research, revealing a wealth of historical evidence that humans used to sleep in two distinct chunks.
His book At Day’s Close: Night in Times Past, published four years later, unearths more than 500 references to a segmented sleeping pattern – in diaries, court records, medical books and literature, from Homer’s Odyssey to an anthropological account of modern tribes in Nigeria.
Much like the experience of Wehr’s subjects, these references describe a first sleep which began about two hours after dusk, followed by a waking period of one or two hours and then a second sleep.
“It’s not just the number of references – it is the way they refer to it, as if it was common knowledge,” Ekirch says.
During this waking period people were quite active. They often got up, went to the toilet or smoked tobacco and some even visited neighbours. Most people stayed in bed, read, wrote and often prayed. Countless prayer manuals from the late 15th Century offered special prayers for the hours in between sleeps.
And these hours weren’t entirely solitary – people often chatted to bed-fellows or had sex.
A doctor’s manual from 16th Century France even advised couples that the best time to conceive was not at the end of a long day’s labour but “after the first sleep”, when “they have more enjoyment” and “do it better”.
Ekirch found that references to the first and second sleep started to disappear during the late 17th Century. This started among the urban upper classes in northern Europe and over the course of the next 200 years filtered down to the rest of Western society.
By the 1920s the idea of a first and second sleep had receded entirely from our social consciousness.
He attributes the initial shift to improvements in street lighting, domestic lighting and a surge in coffee houses – which were sometimes open all night. As the night became a place for legitimate activity and as that activity increased, the length of time people could dedicate to rest dwindled.
I just saw Cave of Forgotten Dreams. This is one of the best documentaries I’ve seen in the last few years. Unfortunately I didn’t see it in 3D, because I think in this case it would actually have been an improvement instead of a gimmick. Because I’m lethargic by nature, I’m going to let you read somebody else’s synopsis below. But really, go see this one, it’s amazing. It is bound to send “historical sensation” chills down your spine. If this doesn’t get you interested in history (even though it deals with a prehistoric topic) then nothing does.
Somewhere around 32,000 years ago, a man entered a cave, dipped his hand in color, and pressed it against a wall. He did this again and again, his crooked little finger acting as his signature, until the wall was covered with handprints.
Why did he do this? Was he discovering the possibilities of art? Whatever he understood, his efforts tell us something particular about him, preserving something personal and true, a testimony to generations he never saw or imagined.
In the passages around the handprints, the walls are alive with drawings. Did the same man draw them? Did his handprints inspire others to make images that told more engaging stories?
In Cave of Forgotten Dreams, the great filmmaker Werner Herzog leads us on a tour of the Chauvet Cave in southern France to see up close what scientists believe is the oldest art gallery ever discovered. The art on these walls may well be twice as ancient as the next-oldest images we know.
The mysteries illuminated by their “torches” abound. Why is there no discernible change in the artistic style between the oldest images and those that were painted 2,000 years later on the same walls? Do scratches made by bear claws indicate that men and bears shared the caves? Is the large cave that contains a mysterious skull some kind of ceremony chamber? Some of the animals are drawn with many legs. Was this to depict them at a full run, like an early form of animation?
In their detail, the illustrations convey truths about a lost world. In their lack of explanation, they are mysterious. In their beauty, they speak of the unique powers of the human imagination to apprehend more than mere matter.
It’s December. To the Dutch expat that means facing, yet again, awkward questions and harassment from non-Dutch friends and colleagues. The source of their frustration? The dark, shocking, horrifying and racist Dutch tradition of Sinterklaas, in particular the character of Zwarte Piet (Black Peter).
What to think of Zwarte Piet? I’m torn between, on the one hand, my happy childhood memories of Zwarte Piet arriving with pepernoten and presents and, on the other, the realisation that this tradition preserves shameful racial stereotypes that affect Dutch perceptions of non-whites. As a child, I experienced the paradoxical Dutch attitude towards this cultural institution first hand: while my parents were fierce anti-apartheid activists (the release of Nelson Mandela in 1990 was a bigger deal at home than the fall of the Berlin Wall three months earlier), they also paraded us in front of family and friends in traditional Zwarte Piet costume, including black face paint.
I confess that the racial connotations of Zwarte Piet did not occur to me until foreign visitors expressed their surprise about the tradition. In my defence, my ignorance was – and is – not unique. In the yearly Zwarte Piet debates in Dutch media and on internet fora, Dutch people go to great lengths to defend the folklore and get upset when foreigners don’t “understand” Dutch culture. To the Dutch, criticizing Zwarte Piet is like hunting Red-Nosed Rudolph – humbugs trying to kill Dutch Santa.
But let’s be honest. Objectively, the Zwarte Piet figure is a racist stereotype that doesn’t belong in a liberal 21st century society (or any society for that matter). Historically, Zwarte Piet may have been an adviser of the Germanic god Wodan, the Persian herald of Nowruz or a Moorish servant to a Turkish saint, but his representation today as a black, mischievous servant is plainly wrong. Even if the Dutch wish not to construct Zwarte Piet as a slave to a white master, the embarrassing history of the Dutch slave trade demands a more sensitive approach to the issue. The fact that nobody in the Netherlands associates Zwarte Piet with racism is not an excuse. Rather, this ignorance lies at the heart of the problem, as it tacitly vindicates racial stereotypes and implants them in the minds of young people. Take the traditional Sinterklaas song “Daar wordt aan de deur geklopt”, which includes the lines “ook al ben ik zwart als roet, ‘k meen het wel goed”. What signal are children receiving when they are made to sing “even though I’m black as ash, my intentions are good”?
The worst defence is that Zwarte Piet is black because he crawls through chimneys to deliver presents – but then why the big red lips, curly hair and Moorish costume? Why can’t Zwarte Piet be a jolly chimney sweep à la Dick Van Dyke? The argument reminds me of early 20th century (Dutch!) soap advertisements: a white girl telling off a black boy for not having used the right detergent. We can all agree that this is racist. So why can’t we draw the same conclusion about Zwarte Piet?
This is not to say that it is okay for non-Dutch to project their troubled histories with race relations onto the Dutch tradition. Americans in particular are prone to equate Zwarte Piet with lynching, burning crosses and suppression of the black population in the Southern states. To a lesser extent, the Brits, Australians, New Zealanders and others tend to project their experiences with their colonial or indigenous populations onto Zwarte Piet. This misrepresents the historical roots of the Sinterklaas feast, which is based on ancient pagan traditions and a 4th century bishop in Myra, present-day Turkey, and the Dutch understandably get frustrated when these issues are equated. Moreover, it seems hypocritical to criticize Sinterklaas when even that most-American-of-holidays, Thanksgiving, is not free from controversy. Every November Americans all over the United States celebrate the “first” harvest in their God-given country, which culminated in the elimination of almost the entire Native American population.
As a matter of fact, Sinterklaas’ helper is as unlikely to disappear as an American turkey dinner on the fourth Thursday in November. In western Canada, organizers cancelled a public Sinterklaas celebration rather than leave out Zwarte Piet. A Dutch broadcaster courageously experimented with multi-coloured Pieten in 2006, but changed back to black a year later due to public outrage.
That was unfortunate. The Dutch (we) should realise that some elements of our traditions are untenable in the modern world. A country that has engineered pragmatic policies to deal with abortion, euthanasia, prostitution and soft drugs should be able to find an alternative to racial stereotyping in one of its most cherished festivals. First, let’s drop the adjective “Zwarte” (Black) and refer to St. Nicholas’ helper simply as Piet. Second, the Rainbow Pieten deserve another chance. They may be unpopular, but so was women’s suffrage a hundred years ago. Third, it’s time to weed out any discriminatory Sinterklaas lyrics. Why don’t we replace the lines mentioned above with “’k breng je van mijn grote boot/een pepernoot” (I bring to you from my big ship/a festive biscuit) or any other inoffensive language?
The Sinterklaas tradition revolves around speculaas, hot chocolate with family and friends, and the anticipation of a knock on the door signalling the arrival of presents. Taking away “Zwarte” from “Piet” will not change any of that. What it will do, however, is get rid of an archaic and mistaken stereotype that should have been abolished a long time ago. That way, Sinterklaas can be a feast for all, including our foreign friends.
Yesterday I read somewhere that Newt Gingrich (the latest insurgent in the Republican presidential race and current challenger to Mitt Romney), who is a historian, wrote his Ph.D. thesis in 1971 about ‘Belgian education policy in the Congo: 1945-1960’.
I thought that was pretty amusing for a former Speaker of the House, author of the 1994 ‘Republican Revolution’, and possible Republican presidential nominee, so I wanted to look it up and blog something about it.
But lo and behold, someone was there first. Robert Paul Wolff at the blog The Philosopher’s Stone read Newt Gingrich’s Ph.D. thesis, so enjoy his review:
Wikipedia informed me that Gingrich did his graduate work in the Tulane history department; the Tulane website took me to the university’s library catalogue; the Duke University Reference Librarian talked me through the download process over the phone [never easy for old guys like me], and there it was: “Belgian Education Policy in the Congo: 1945-1960 A Dissertation Submitted on the Sixth Day of May, 1971 to the Department of History of the Graduate School of Tulane University in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy by Newton Leroy Gingrich.” Two hundred eighty-three pages of text, typed and double-spaced in standard dissertation format, five pages of tables, five pages of “selected bibliography” and a one-page biographical sketch of the author indicating that he was awarded a B.A. by Emory University.
Why on earth Belgian educational policy in the Congo? Newt was studying Modern European History, to be sure, but the topic seems rather obscure. The dissertation lacks the typical page of acknowledgements that might offer a clue, but a bit more surfing of the web reveals that the dissertation director, Professor Pierre Henri Laurent, whose name appears on the signature page, was the son of “an eminent Belgian historian, who died during the Resistance; his mother was a distinguished teacher and linguist. Pierre and his older sister were brought as children to the United States by their mother when the Second World War broke out.” Mystery solved.
The dissertation is written in a pedantic, serviceable prose, giving no evidence of the Newt that was to emerge as a fully formed Toad. Although the dissertation is written entirely in English, the footnotes give evidence that Gingrich had a quite adequate command of written French. [The only word in the entire dissertation not in English or French is misspelled -- Weltanschauung with only one "u" -- page 205, line 2] Gingrich relies heavily on secondary sources, with especial attention to the work of Ruth Slade and Roger Anstey. However, he has clearly made extensive use of Belgian public documents, including reports of Parliamentary debates. There is no evidence in the text that he traveled either to Belgium or to the Congo, and he seems not to have interviewed any of the principal actors, Belgian or Congolese, even though the dissertation was written only a handful of years after the departure of the Belgians from the Congo.
The structure of the dissertation is straightforward: an Introduction, three chapters on the political and historical background of Belgium’s colonization of the Congo, nine chapters on various aspects of the educational institutions introduced by the Belgians into the Congo — religious education, secular education for the Congolese, secular education for Belgians living in the Congo, education for women, agricultural education, technical education, higher education for the Congolese, etc. — and a Conclusion.
The political or ideological orientation of the dissertation, if I may put it this way, is roughly that of a Cold War member of the Council on Foreign Relations. Colonization is seen almost entirely from the perspective of the colonial power, not from that of the indigenous population. The rule of King Leopold II, who literally owned the colony as his private property until, at his death, he willed it to Belgium, is widely understood to have been the most horrifyingly brutal colonial regime in Africa. Gingrich acknowledges this fact once in the dissertation. Speaking of the financial pressures placed by the Congo on King Leopold’s coffers, Gingrich reports that a “state official told a missionary in 1899 that each time a corporal ‘goes out to get rubber he is given cartridges. He must return all those that are not used; and for every one used he must bring back a right hand.’” [p. 15]
But with this sole exception, Gingrich’s picture of the Belgian colonial administration is reasonably favorable. As I read his account of the struggles by dedicated Belgian colonial administrators to provide some measure of formal education to the Congolese, in the face of a generally uninterested and neglectful government in Brussels, I was reminded of nothing so much as the writings of John Stuart Mill on India, and the responsibility of cultivated, enlightened Englishmen to bear the heavy burden of stewardship until the non-European peoples are ready for self-rule.
Although he makes no effort at all to consult the colonized and give voice to their view of the Belgian rule, Gingrich does at one point, rather surprisingly, quote Father Placide Tempels quite favorably and at some length. [pages 100-101.] Tempels was a missionary priest who wrote an important book called Bantu Philosophy. It is the first acknowledgement by a European author that the indigenous peoples of Africa have complex, philosophically sophisticated conceptions of the world and their place in it. I confess that I was surprised and impressed to see Tempels put in an appearance in Gingrich’s dissertation. I was a good deal less pleased by Gingrich’s reliance on the always questionable Colin Turnbull.
Here’s one sentiment I can say I don’t share: missing the pop monoculture. According to Toure at Salon.com, our culture is “poorer” today because we no longer have gigantic acts like Michael Jackson and Prince that everybody can gather around and collectively love. This goes hand in hand with the decline of the big TV and radio stations that everybody used to watch. There are no “massive music moments” anymore when, for instance, an album becomes a big hit. There is no real shared pop culture anymore with larger-than-life figures.
Well, personally, I have no longing at all to go back to that time. As a millennial, I’m old enough to remember the time when you only had one or two music stations on TV, a couple of radio stations, and charts that were based on album sales. The time before the Internet, when you were dependent on this small set of big media to enjoy pre-selected pop music. Nowadays, I almost never watch TV or listen to the radio anymore, and why would I? It means listening to crappy music catered to the masses. I have the Internet.
What the author at Salon calls the “balkanization” of pop culture I totally applaud: the Internet has allowed people to get exposed to more music than ever before, and that is of great value. In fact, that’s the only real argument in favor of illegal downloading, I think: exposure to every possible music style of the past and present, expanding your knowledge of pop culture and, if you’re an artist, re-packaging that in something new. I think it’s great that a kid nowadays can listen to Joy Division and actually like it.
Of course, the drawback of this mass online availability of past music is the incessant retromania that has dominated the first decade of the twenty-first century. There was a time when music used to look forward, be futuristic, but that is no longer the case: instead, every past musical niche gets exploited and re-packaged. The hipster is the ultimate personification of the Internet era: no longer thinking of something new, but re-using past styles again and again. Nowadays I think only electronic music is still forward-looking, but even there you find more and more retro tunes and vibes. I wonder when something totally new will emerge; but before that, I guess the entire musical and stylistic past must first be dug up and re-used.
That’s something else, however, than missing the time of mainstream pop culture. I’m glad the domination of the musical-industrial complex is over, thank you very much. If that means missing what Toure calls “generational moments”, well, so be it.
I live for those times when an album explodes throughout American society as more than a product — but as a piece of art that speaks to our deepest longings and desires and anxieties. In these Moments, an album becomes so ubiquitous it seems to blast through the windows, to chase you down until it’s impossible to ignore it. But you don’t want to ignore it, because the songs are holding up a mirror and telling you who we are at that moment in history.
These sorts of Moments can’t be denied. They leave an indelible imprint on the collective memory; when we look back at the year or the decade or the generation, there’s no arguing that the album had a huge impact on us. It’s pop music not just as private joy, but as a unifier, giving us something to share and bond over.
Actually, I should say I loved Massive Music Moments. They don’t really happen anymore.
The epic, collective roar — you know, the kind that followed “Thriller,” “Nevermind,” “Purple Rain,” “It Takes a Nation of Millions to Hold Us Back,” and other albums so gigantic you don’t even need to name the artist — just doesn’t happen today. Those Moments made you part of a large tribe linked by sounds that spoke to who you are or who you wanted to be. Today there’s no Moments, just moments. They’re smaller, less intense, shorter in duration and shared by fewer people. The Balkanization of pop culture, the overthrow of the monopoly on distribution, and the fracturing of the collective attention into a million pieces has made it impossible for us to coalesce around one album en masse. We no longer live in a monoculture. We can’t even agree to hate the same thing anymore, as we did with disco in the 1970s.
If you’re under 25, you’ve never felt a true Massive Music Moment. Not Lady Gaga. Not Adele. Not even Kanye. As the critic Chuck Klosterman has written, “There’s fewer specific cultural touchstones that every member of a generation shares.” Sure, Gaga’s “The Fame Monster” spawned several hit singles. Adele’s “21” and Jay-Z and Kanye West’s “Watch the Throne” were massively popular. Kanye’s brilliant “My Beautiful Dark Twisted Fantasy” was beloved and controversial and widely discussed enough to give a glimpse into the way things used to be. But those successes don’t compare to the explosive impact that “Thriller” and “Nevermind” had on American culture — really, will anyone ever commemorate “21” at 20, the way the anniversary of Nirvana’s album has been memorialized in the last month?
Numbers don’t tell the whole story about how these cultural atomic bombs detonated and dominated pop culture. But at its peak, “Thriller” sold 500,000 copies a week. These days, the No. 1 album on the Billboard charts often sells less than 100,000 copies a week. What we have today are smaller detonations, because pop culture’s ability to unify has been crippled.
I miss Moments. I love being obsessed by a new album at the same time as many other people are. The last two albums that truly grabbed an enormous swath of America by the throat and made us lose our collective mind were “Nevermind” and Dr. Dre’s “The Chronic.” They sprung from something deep in the country’s soul and spoke to a generation’s disaffection and nihilism. They announced new voices on the national stage who would become legends (Kurt Cobain and Snoop Dogg) and introduced the maturation of subgenres that would have tremendous impact (grunge and gangsta rap).
No connection is made. Pop music has historically been great at creating Moments that brought people together. Now we’re all fans traveling in much smaller tribes, never getting the electric thrill of being in a big, ecstatic stampede. It’s reflected in the difference between the boombox and the iPod. The box was a public device that broadcast your choices to everyone within earshot and shaped the public discourse. The man with the box had to choose something current (or classic) that spoke to what the people wanted to hear. Now the dominant device, the iPod, privatizes the music experience, shutting you and your music off from the world. The iPod also makes it easy to travel with a seemingly infinite collection of songs — which means whatever you recently downloaded has to compete for your attention with everything you’ve ever owned. The iPod tempts you not to connect with the present, but to wallow in sonic comfort food from the past.
Back when MTV played videos, it functioned like a televised boombox. It was the central way for many people to experience music they loved and learn about new artists. Thus MTV directed and funneled the conversation. Now there’s no central authority. Fuse, where I work, plays videos and concerts and introduces people to new artists. But people also watch videos online, where there’s an endless library of everything ever made but no curation, killing its unifying potential.
These days, there are many more points of entry into the culture for a given album or artist. That can be a good thing — MTV, after all, played a limited number of videos in heavy rotation. Now there’s the potential to be exposed to more music. But where there used to be a finite number of gatekeepers, now there’s way too many: anyone with a blog. This is great for the individual listener who’s willing to sift through the chatter to find new bands. But society loses something when pop music does not speak to the entire populace.
Hollywood, too, is struggling to unite us. “Star Wars” and “The Matrix” and “Pulp Fiction” were so big they changed American film — as well as our visual language and Madison Avenue. You didn’t need to actually see the films to feel as if you had consumed them. Their impact was so pervasive, they seemed to bang down your door and announce themselves. The Harry Potter films and “Avatar” stand out for the size of the marketing and ticket buying associated with them. But did they bring large, diverse swaths of America together? Did they speak to something deep in the American soul?
It really seems to stem from a deep-seated insecurity on the author’s part, doesn’t it? Please, go explore music that isn’t spit out over the masses and find a niche you like!
A brilliant piece in The New Yorker by Jeffrey Toobin about Supreme Court Justice Clarence Thomas. Thomas, appointed by George H. W. Bush, is arguably the most conservative Justice on the Court since the 1930s. He adheres to a very strict originalist and textualist reading of the Constitution, meaning that he believes it should be applied to the twenty-first century the way the Founders intended it for society in the late eighteenth century (whoever came up with this comically absurd idea should receive a prize). In addition, unlike the other textualist Justices Antonin Scalia and Samuel Alito, Thomas has no qualms about ignoring precedent in court rulings: when he thinks a previous decision is wrong under his interpretation of the Constitution, he will vote to overturn it. In Thomas’ case, this also means historically investigating how the inhabitants of the thirteen American colonies two and a half centuries ago understood this or that piece of law.
Adhering to a very strict originalist interpretation of the Constitution means believing that only a very small, limited government is constitutionally allowed (just as it was intended back then). If it were up to justices like Thomas, the US government would have no business regulating anything in the American economy or society (although they have, of course, no qualms about executive branch overreach when it comes to military affairs or torture). This leads to predictable conservative positions on such issues as gun rights and federalism, but also – and here it comes – on healthcare. To mandate that individuals buy health insurance, the Obama administration has relied on a ‘broad’ interpretation of the Commerce Clause, which, through New Deal-era judicial interpretation, has allowed the federal government to intervene in the (interstate) economy. But it is very much an open question whether the current conservative Court, including Justice Thomas, will uphold this interpretation of the Commerce Clause. It is entirely possible that Obama’s healthcare reform law will soon be ruled unconstitutional by the Supreme Court.
Why is this piece on Clarence Thomas so relevant in this context? Well, because according to Toobin, the once extreme positions Justice Thomas has held since his 1991 confirmation have become mainstream over the past twenty years. Take, for example, the gun rights issue. Among conservatives today, it is commonplace to argue that the lines in the Constitution about ‘the right to keep and bear arms’ apply to individuals, guaranteeing personal gun rights. But just two decades ago (I didn’t know this), this was considered a radical position in a legal profession that held that the lines apply to state militias only, thus warranting stricter regulation of guns. It was Thomas who championed the former interpretation, notably in his concurrence in the 1997 decision striking down part of Bill Clinton’s Brady Bill, and ever since, gun rights in the US have expanded. The same thing has happened on other issues: Thomas’ positions, at first considered radical, shift the boundaries of the acceptable and allow judicial discourse to move rightwards.
In an era that has seen the Tea Party rise out of protests against healthcare reform, the same thing could happen to Obama’s laws. Or, the piece warns, even more broadly to the entire 1930s New Deal-era constellation of laws and regulations that have given the federal government a role in protecting the people against the worst excesses of capitalism. Clarence Thomas and his wife are frequent speakers at, and ardent supporters of, the Tea Party and other manifestations of extreme rightwing politics. These people want to take the economy back to the law of the jungle of the 1920s. In the words of Walter Russell Mead at the American Interest, their goal is to bring the Blue Empire down…
So read this must-read profile of Clarence Thomas to see why he has already been compared to Lord of the Rings’ Frodo – an overlooked actor slowly but steadily moving towards his goal, not taken seriously by his opponents until it is too late.
In several of the most important areas of constitutional law, Thomas has emerged as an intellectual leader of the Supreme Court. Since the arrival of Chief Justice John G. Roberts, Jr., in 2005, and Justice Samuel A. Alito, Jr., in 2006, the Court has moved to the right when it comes to the free-speech rights of corporations, the rights of gun owners, and, potentially, the powers of the federal government; in each of these areas, the majority has followed where Thomas has been leading for a decade or more. Rarely has a Supreme Court Justice enjoyed such broad or significant vindication.
The conventional view of Thomas takes his lack of participation at oral argument as a kind of metaphor. The silent Justice is said to be an intellectual nonentity, a cipher for his similarly conservative colleague, Antonin Scalia. But those who follow the Court closely find this stereotype wrong in every particular. Thomas has long been a favorite of conservatives, but they admire the Justice for how he gives voice to their cause, not just because he votes their way. “Of the nine Justices presently on the Court, he is the one whose opinions I enjoy reading the most,” Steve Calabresi, a professor at the Northwestern University School of Law and a co-founder of the Federalist Society, said. “They are very scholarly, with lots of historical sources, and his views are the most principled, even among the conservatives. He has staked out some bold positions, and then the Court has set out and moved in his direction.”
The implications of Thomas’s leadership for the Court, and for the country, are profound. Thomas is probably the most conservative Justice to serve on the Court since the nineteen-thirties. More than virtually any of his colleagues, he has a fully wrought judicial philosophy that, if realized, would transform much of American government and society. Thomas’s views both reflect and inspire the Tea Party movement, which his wife has helped lead almost since its inception. The Tea Party is a diffuse operation, and it can be difficult to pin down its stand on any given issue. Still, the Tea Party is unusual among American political movements in its commitment to a specific view of the Constitution—one that accords, with great precision, with Thomas’s own approach. For decades, various branches of the conservative movement have called for a reduction in the size of the federal government, but for the Tea Party, and for Thomas, small government is a constitutional command.
In recent weeks, two federal courts of appeals have reached opposing conclusions about the constitutionality of the 2010 health-care law; the Sixth Circuit, in Cincinnati, upheld it, while the Eleventh Circuit, in Atlanta, struck down its requirement that all Americans buy health insurance. This conflict means that the Supreme Court will almost certainly agree to review the case this fall, with a decision expected by June of next year. It is likely to be the most important case for the Justices since Bush v. Gore, and it will certainly be the clearest test yet of Thomas’s ascendancy at the Court. Thomas’s entire career as a judge has been building toward the moment when he would be able to declare that law unconstitutional. It would be not only a victory for his approach to the Constitution but also, it seems, a defeat for the enemies who have pursued him for so long: liberals, law professors, journalists—the group that Thomas refers to collectively as “the élites.” Thomas’s triumph over the health-care law and its supporters is by no means assured, but it is now tantalizingly within reach.
Last Friday, Maarten van Rossem was once again shining in classic form on Knevel en Van den Brink. With a few extremely dry, sarcastic remarks he got the laughs on his side and urged everyone at the table to calm down. The topic was 9/11, for Van Rossem still one of his finest moments but also something of a minor trauma. The perspective Van Rossem offered right after the attacks, during the NOS marathon broadcast, was appreciated by very few. Presenters, pundits and viewers piled on him because he simply refused to see that the world really had changed that day and that “nothing would ever be the same again”. He even received stacks of hate mail and for quite some time was no longer welcome in Hilversum as an expert on America.
Now, 10 years later, it is time for Van Rossem to settle the score. His prediction of 10 years ago has come true, he argues: the consequences of 9/11 have remained very limited. In hindsight, he believes, the economic crisis has actually had far more impact than 9/11. He does admit that on the day itself he could not have imagined how enormous George W. Bush’s stupidity was, or how dangerous the people around him were. In an interview with the NOS:
Ten years on, he has not lost his knack for putting things in perspective. 9/11 as the start of a new era? “I would say: not at all. Totally not. What has actually changed in our daily lives, other than having to take off our belts and shoes at the airport?”
“Muslim terrorism has wreaked horrific havoc, but precisely in Muslim countries. Not in the Netherlands, Belgium or the US, but in Iraq, Pakistan and Yemen. We have had two attacks, in London and Madrid. Terrible in themselves, but no threat to the survival of our type of society.”
“Looking back on ten years of 9/11, none of the disasters predicted by those so-called experts have come to pass. Jumbo jets crashing into nuclear plants, terrorists poisoning our drinking water. None of it ever became reality.”
And what about the changed relations within the Netherlands? “It has undoubtedly fuelled suspicion towards Muslims, but if you think the Netherlands is being Islamized by a minuscule minority of 5 percent Muslims, you are mad, and I have no further comment than that. I find it rather pitiful.”
“Our worst enemies were not Bin Laden’s terrorists,” Van Rossem contends, “but our own financial-economic sector. The credit crisis is a far, far more serious matter than that whole 9/11 business. It has profoundly affected all our daily lives and will continue to do so in the years ahead.”
“The great paradox of it all is that America tried to pursue a policy that would increase its security, and the end result is that the US is far less secure. That is mainly due to those idiotic wars, which have cost thousands of billions of dollars and left the country in dire straits.”
“I have said it before: the Americans’ greatest enemy is the Americans themselves.”
The picture Van Rossem paints in this interview and on KvdB is not entirely accurate. 9/11 really was the beginning of a new era. Not the attacks themselves, but the American response to them made it so. We may not have succumbed to “256 Boeings crashing into nuclear reactors, planes full of anthrax making agriculture impossible, and World War III”, but we do live in a world in which the US is waging war on multiple fronts at a cost of 3,000 to 4,000 billion dollars; in which Iran has suddenly become a regional power; in which “terrorists” are still locked up at Guantanamo and subjected to “enhanced interrogation”; in which hundreds of thousands have died in Iraq and Afghanistan; in which you do indeed have to take off your shoes at every international airport before boarding; and the list goes on. And the economic crisis drags on, partly because the Americans borrowed themselves silly to pay for these wars and now lack the means to absorb the fallout of the credit crisis.
One caveat you can make is that 9/11 mainly brought change to the Western world; take off the Atlantic blinkers and it appears that across large parts of Africa, Asia and elsewhere, the influence of 9/11 may have been much smaller. You can also argue that 9/11 was merely a catalyst for developments already under way, such as the anti-Americanism bubbling up around the world, the crumbling power of the US, the rise of unconventional warfare, and so on. Still, it is hard to maintain that the direct consequences of 9/11 did not bring about major changes in the world. In a world where everyone were as level-headed as Maarten van Rossem, these attacks would indeed not have marked a caesura in modern history. Unfortunately, that is not the world we live in.
The KvdB broadcast, highly entertaining:
Van Rossem in the broadcast of September 11, 2001, with Maartje van Weegen:
Apollo 17 landed on the moon on December 11, 1972. It was the last manned trip to the moon (Apollo 18 isn’t real!). The Lunar Reconnaissance Orbiter Camera has shot new images of the Apollo 17 landing site, from a height of just 22 kilometers. In the photos, the landing site of the Challenger lunar module, some test equipment and the Lunar Roving Vehicle can be seen:
A while ago, we presented y’all a documentary on the early 1980s origins of warehouse raves and techno, Real Scenes: Detroit. Now, get ready to immerse yourself in the following documentary on its British successor movement: acid house!
During the late 1980s, acid house, with its distinctive sound produced by Roland bass synthesizers and drum machines such as the TB-303 and TR-808, became the first full-blown electronic dance music movement in Europe, including a booming underground scene. It also marked the first mainstream surfacing of ecstasy, which contributed to the summers of 1988–9 being called the second Summer of Love (after the LSD-fueled first one in 1967). Acid house parties took place in warehouses and out in the open, continuing the Detroit phenomenon of the “rave”. Fueled by sensationalist media reporting, however, the British authorities came crashing down on the acid house scene.
This great documentary from ITV’s World in Action strand is like a full-blown acid house flashback. Broadcast in 1988 at the height of acid house fever, it follows the typical weekend rituals of a group of very young fans, tracks the working life of an illegal party promoter, speaks to some of the producers of the music and charts the then-growing moral panic that surrounded the scene and its copious drug taking. Raving, and acid house, had a huge (if subtle) effect on British culture, bringing people together in new, democratised contexts free of class and social boundaries, opening people’s ears up to a new world of music and opening their minds to new ideas.
So here’s the entire documentary. Enjoy!
More electronic music history documentaries on LSD:
Electronic music as an art form is often said to have started with pioneers such as Karlheinz Stockhausen and Pierre Schaeffer in the 1940s and 1950s. However, one man in Egypt got there earlier: Halim El-Dabh (born 1921), who in 1944 hit the streets of Cairo to record ambient sounds and music, and experiment with them afterwards.
While Pierre Schaeffer is often thought of as the father of the electronic music form known as musique concrète, the gentleman above, Halim El-Dabh, actually got there several years before, 1944 to be exact. Born in Egypt in 1921, El-Dabh studied agriculture at Cairo University while playing piano and other traditional instruments as a pastime. One day, the student and a friend borrowed a wire recorder — a device predating magnetic tape — from the Middle East Radio Station and hit the streets to capture ambient sounds. El-Dabh recorded a spirit-summoning ritual called a zaar ceremony and ultimately found that he could use the sounds as the raw ingredients for a new composition.
An excerpt from the 1944 composition, “The Expression of Zaar”, is below, credited as ‘the earliest piece of electronic music ever produced’. I don’t know whether that’s true, but it sounds very ambient and cool. Not too surprising once you realize you’re listening to a spiritual ceremony from 1940s Cairo:
The Electronic Music Foundation has an interview with El-Dabh, who is currently Professor Emeritus of African Ethnomusicology at Kent State University. About the 1944 piece:
We had to sneak in (to the ritual) with our heads covered like the women, since men were not allowed in. I recorded the music and brought the recording back to the radio station and experimented with modulating the recorded sounds. I emphasized the harmonics of the sound by removing the fundamental tones and changing the reverberation and echo by recording in a space with movable walls. I did some of this using voltage controlled devices. It was not easy to do. I didn’t think of it as electronic music, but just as an experience. I called the piece Ta’abir al-Zaar, (The Expression of Zaar). A short version of it has become known as Wire Recorder Piece. At the time in Egypt, nobody else was working with electronic sounds. I was just ecstatic about sounds.