
Mozart, Beethoven and the Classical Ideal

A weird thing has happened to me lately: I’ve started to appreciate Mozart. Over the years I’ve tried to get into his music on various occasions, but I’ve always been left rather baffled. Why was this guy considered such a towering genius? Sure, his music was pleasant – charming, even – but, aside from the odd flash of pathos or joyous cry, it just seemed to trot along. There has to be more to genius than a facile gift for melody, hasn’t there?

Of course, I realised that I was hampered by the fact that I cannot abide opera (I’ve tried, I’ve tried – but the classically trained voice is an abomination; it’s the sound of human beings turning themselves into machines not out of aesthetic choice but simply in order to be heard above the orchestra’s din). So a full half of his legacy was closed to me from the outset. All the same, there remained a wide sea of instrumental music to navigate, but the results of my explorations were always the same: this is the world’s biggest village pond. Calm, placid, a few ducks, the odd swan; nice enough but nothing to write home about.

Trots along…

It’s easy to see what I mean if you compare Mozart with Beethoven. Now there’s a man who deserves the title “genius” – in fact he’s almost a parody of the type: the precocious talent, the tragic deafness, the innovation, the untameable hair, the uncompromising dedication to a personal vision and, of course, the thrilling Sturm und Drang of the music as overwhelming waves of ecstasy and despair thunder against each other. The two men were barely a generation apart (Mozart was 14 years older than Beethoven), and yet they seem to come from different worlds. Mozart spun his tunes amid the fatuous splendour of the rococo, while Beethoven was the beginning of musical modernity.

Yet oddly enough I think that’s the clue to appreciating Mozart. He is the last great representative of an age whose world-view is more alien to us than we might suppose. Specifically, his music represents a near-miraculous realisation of the Enlightenment’s neoclassical ideal. According to this ideal, life is not a desperate search for meaning or a flight from terror, but a complex, ordered and profoundly satisfying pattern. Of course there’s grief and loss, death and darkness, but these are merely elements woven into a bigger tapestry which pays them due regard while refusing to get bogged down in morbid obsessiveness. Its watchwords are balance, proportion and, perhaps above all, health. The refusal to dwell on darkness is not (or ought not to be) a matter of hiding from the truth; rather, it is a sign of a healthy society which has attained a more generous, life-affirming vision than that available to the sick, benighted minds of the past.

There’s an irony here, of course, in that it’s hugely debatable whether the Enlightenment ever came close to living up to its own self-image. It saw itself as a flowering of the maturity of humanity; mankind had finally put away childish things and, by dint of reason, had broken through into a golden age. But it more often reminds me of a precocious teenager who thinks all old people are stupid because they don’t know which smartphones are cool. Its major achievements were in economics, politics, technology and science – all of which would subsequently combine to produce extreme amounts of suffering and poverty (for many), as well as inordinate riches and comfort (for some).

In the arts it was not an age of vigour and healthy-mindedness, but of puerile titillation and complacent self-congratulation. Its paintings and architecture were uninspired re-hashes of earlier styles, and its literature marks a profound falling-off from the previous century. If you want a feel for the Enlightenment’s smugness, just read Pope’s An Essay on Man (1734) and then compare it with Rochester’s A Satyr Against Mankind (c. 1674). Even the novel, its great artistic innovation, is mostly caught between the tedious bawdiness of the picaresque and the equally tedious moralising of Samuel Richardson. It’s probably no accident that the two novels of the century which most speak to the modern reader – Gulliver’s Travels (1726) and Tristram Shandy (1759-67) – are both in their different ways biting criticisms of the age in which they were produced.

But, at its best, late 18th century music – and especially Mozart’s music – is the great exception to this trend. For the achievement of Mozart was to express the ideal of his age as if it were the reality. Crucially, this was not done ironically or cynically but as a sincere representation of the times in which he lived. Mozart was no prophet of doom, or voice crying in the wilderness; he was a product of his culture, and thus instinctively in tune with it. His assumptions were its assumptions. But at the same time his musical genius allowed him to transcend and purify that culture. He presented the Enlightenment as it assumed itself to be: not childish and decadent, but restrained, wise and serenely benign.

In this respect he was music’s answer to Kant, whose famous essay What is Enlightenment? was published in 1784. They both gave definitive expression to the age’s most cherished values – and they both did it at precisely the time when the whole Enlightenment project was collapsing about their ears. By the time of Kant’s essay the American revolution had already taken place; the British industrial revolution was in full swing; and the French revolution was only five years away. Mozart composed some of his most famous pieces, including Così fan tutte and The Magic Flute, after the storming of the Bastille (1789) and less than two years before the beheading of Louis XVI (January 1793).

These momentous events, and the wars which followed in their train, reshaped the western world-view. And you can see the changes they wrought written into almost every bar of Beethoven’s mature music. Whereas Mozart’s output seems to reflect a serene present, Beethoven’s work is haunted by the past and deeply anxious about the future. It is the sound of a battle being waged over the soul of mankind – a battle whose outcome is far from certain. His late quartets, for example, are littered with sections of courtly, civic music which suddenly give way to tortured, almost atonal passages. They sound remarkably like the cheerful certainties of a bygone age collapsing into doubt and anguish. And the rapturous joy that he conjures up in the 4th movement of his 9th symphony is not a celebration of the present but a fervent, scared prayer for tomorrow. Its very excess betrays the anxiety under which it was written; it’s as if Beethoven is trying, by sheer force of talent, to avert an impending catastrophe.

Untameable hair…

I think there’s little doubt that our own world-view seems more like Beethoven’s than Mozart’s. And I suppose that’s why Beethoven’s music more readily resonates with me on a personal level. The classical ideal is one that few can espouse today with a completely straight face. And yet, for all that, there is something great about it: its refusal either to give in to, or to hide from, life’s sorrows; its easy, good-natured confidence; its instinctive cleaving to the idea that come what may life is a blessing – these are important, restorative attitudes. We cannot, of course, simply will them back into being, but nor should we ignore them. And Mozart’s music remains one of the most coherent, boisterous expressions of this outlook that our culture has to offer.

So let’s end with Mozart’s horn concerto.

 

How Music Became Pointless

Let’s start with some history. Following the end of the Second World War there was a huge desire for change throughout British society. This resulted in the creation of a kind of semi-socialist state, and politics was consequently woven into the lives of ordinary Britons far more thoroughly than had previously been the case. The government ran huge swathes of our industry on our behalf (in theory, at least) and millions of people were members of active, powerful trade unions. Whether they liked it or not, this locked them into the political process – not just at election time but on an almost daily basis. And those who weren’t in unions were conscious of the fact that it was possible for ordinary people to have a say in political processes, and that, at least to some extent, they were affected by the choices such people made. People had a voice – even if they used it to criticise the changes which had given it to them in the first place.

A few points need to be highlighted here. First, they had a voice not because they’d been given a platform to “mouth off”, but because their opinions had been structurally linked to the levers of power. That’s what having a voice really means; anything less than that is a sham. It is shouting into the void.

Secondly, once genuine substance is given to your views in one area this automatically (at least to some degree) raises the status of your opinions and values across the board. You gain increased significance not merely as a voter or union member, but as a person.

Thirdly, when I say ordinary life was more political I don’t mean that everyone went around avidly discussing Keynesian economics or the details of the latest budget statement. But British society had voted for a huge change in the way it was run, a change that affected almost every corner of people’s lives. It must’ve been hard not to be conscious of that at least to some degree. And my suggestion is that it coloured people’s attitudes in all sorts of subtle (and not so subtle) ways.

Finally, we shouldn’t get carried away. I’m not saying that the people ruled, or that any ordinary individual could single-handedly change things. But compared to how it had been before – and how it is today – there was an unmistakable move in that direction. Social engagement had been heightened through a shift in the basic political structures that underpinned people’s lives.

One side-effect of all this was to increase the perceived importance of popular forms of self-expression: novels, plays, films, TV, pop music, and so on. It simply made more sense, given the new social conditions, for ordinary people to consume and create them – and to use them as a way to criticise the world around them. Change was possible and therefore what you valued mattered. And when you expressed your values, that mattered as well. You can see this when you look at how pop music found its critical voice in the second half of the sixties. (NB: I’m using the term “pop” to include rock, dance music, etc.) People were forming bands because they had something to say and they expected to be taken seriously. And, to an astonishing degree, they were. How much real change did this bring about? It’s hard to say, but with hindsight probably hardly any. That, however, is not the point; at the time the hope did not seem completely unrealistic, and so it added a level of significance – of meaningfulness – to the task at hand.

This aura of significance attached itself to pop throughout the 70s and into the early 80s. Of course, by no means all pop was intended as a social statement. Most music still wanted nothing more than to entertain you for a few minutes. But if you had something critical to say, if you had a disaffection, then pop was a legitimate forum for expressing yourself. And this legitimacy was a function of the broader social standing of ordinary people in what became known as the “post-war consensus”. It was an adjunct to the political changes rolled out during the 40s and 50s. I don’t mean to imply, however, that all the disaffected music underwritten by these changes was itself overtly political. Some of it was, but much of it was concerned with the everyday challenges and frustrations of being young. Nonetheless, even this music in part drew its significance from the changes I’ve been talking about. Nobody would really call Jumpin’ Jack Flash or Baba O’Riley political songs, but their vibrancy still depended on the wider social structures within which they were created. Society had provided conditions in which they could resonate, and resonate they did.

You can see how far this was true by looking at the way the “establishment” responded to the new type of music. They grappled to understand it, they condescended to it, and at times they downright feared it. The sight of the police hounding John Lennon, The Rolling Stones and John Lydon is a much surer way of judging their music’s social significance than the fact that young people felt it might change the world. Can you imagine the police doing that to a rock act today? They’re far too busy infiltrating environmentalist groups to give a flying fuck about rock stars. And if you want another example, watch the interviews with local councillors in The Filth and the Fury where they discuss why they’ve banned the Anarchy tour. The fear in their eyes as they contemplate the prospect of the Pistols coming to their town is like something from another world. Again, it simply could not happen today.

The political structure affected the significance of producing music, and this in turn affected the music produced. Basically, if you’re working in an environment where you can bring about change then it’s natural enough for your music to be in some way progressive. I don’t use that word in its left-wing sense (“progressive taxation”, etc), but in the sense of being forward-looking (and here the right, too, has its vision of progress). Through its lyrical content, but also through its form, the music will be drawn to pushing boundaries, experimenting and generally being transgressive. There will, in other words, be an intense focus on making something new. It will try to say new things, develop new forms and incorporate new sounds. It is difficult to miss this aspect in the music of the 60s and 70s (and equally difficult to miss how it was an echo of the effort to create a new society after World War II). Psychedelia, prog, krautrock, dub, glam, heavy metal, disco, punk and post-punk; the aim was always to create something that wasn’t quite like anything you’d heard before. And that could be true even where the musical form borrowed heavily from the past; Bob Dylan, for example, produced self-consciously retro music (before he turned electric), but lyrically he was game-changingly original. Something similar could be said of The Rolling Stones, The Specials and The Jam. But a lot of the time the music was ground-breaking in terms of its form and its content. It’s difficult now to get across just how exciting it was to hear, say, Remain in Light for the first time. It was like the future had turned up today. You can’t say that about Elbow.

So the opportunity to create socially significant music led to an emphasis on innovation and daring, and this daring music was doubly intoxicating because it was (felt to be) socially significant. It was a recursive relationship. In my own case, “daring music” meant new wave and post-punk, or the music produced between 1978 and 1982 (ie, from when I was 13 to when I was 18). And it was a thrilling period. Virtually every week brought something strange and new. Pleasantly bewildered by Devo? Well, here’s John Cooper Clarke. Not sure what to make of Hong Kong Garden? Well what about Love und Romance? The list of acts is near endless, but here’s a selection culled from my alcohol-decimated memory: The Buzzcocks, Wire, Ian Dury, Elvis Costello and the Attractions, the Jam, The Clash, Stiff Little Fingers, The Undertones, Theatre of Hate, The Damned, Squeeze, The Stranglers, Teardrop Explodes, Blondie, Echo & the Bunnymen, Human League, Heaven 17, Magazine, The Beat, Talking Heads, Wreckless Eric, The Specials, XTC, Soft Cell, The Tom Robinson Band, The Selecter, The Cure, Public Image Limited, The Fall, Scritti Politti, X-Ray Spex, Joy Division, The Monochrome Set, Linton Kwesi Johnson, Gang of Four, Alternative TV, the Mekons, Cabaret Voltaire, Crass, The Birthday Party, Dead Kennedys, Throbbing Gristle, A Certain Ratio, and so on and so on.

Here, of course, I’m wide open to the charge of viewing things through the rosy glow of nostalgia. Every generation is thrilled by the music of its youth, isn’t it? I’ve thought long and hard about this, and although it might be true to some extent I honestly don’t believe it accounts for everything. Not even close to everything, in fact. For a start, I’m not an idiot. I can tell the difference between music that I’m fond of for nostalgic reasons and stuff which has a deeper resonance. Moreover, I don’t simply think that all music from that period was great and all subsequent music is shit. A lot of the music of the late 70s/early 80s was poor (including some I adored at the time). And of course there is good music being made today; it’s not as if people suddenly forgot how to play. There are god knows how many talented bands/artists out there doing their utmost to produce music that amounts to more than a pleasant listen while you eat your tea. But it doesn’t matter. It’s not a question of talent, or even bravery. It’s a question of the underlying social conditions in which that music is made. And that’s the crucial point. Even as a fourteen-year-old I could see that the music I was into was part of something bigger. It was connected to a broader struggle over the values of society. Today I don’t think that’s true to anything like the same extent. Of course young people still care passionately about their music, but it’s much more like caring passionately about a football team than participating in a meaningful social discourse. The passion is genuine but inconsequential.

I’ve mentioned music as part of a cultural struggle and it’s important to note that there were two aspects to this. On the one hand, there was the hope of changing society at large. That was a genuine, but somewhat distant, aim; it more or less lurked in the background. More immediately, however, there was the hope of changing the music scene itself. That seemed far more obviously doable. (The connection between the two was the thought that if the music scene changed then, as a matter of course, society at large would change as well. This thought was completely wrong. In case you’re in any doubt, I’ll just mention that David Cameron’s favourite album is The Queen is Dead.) Here, then, we are dealing with the relationship between mainstream culture and the counter-culture, and pop’s attitude in this regard varied over time. In the 60s, for example, there was no clear distinction between the two. Sure, you had obscure groups like The Velvet Underground, known only to a select few, but the counter-culture also included The Beatles and The Jimi Hendrix Experience – and nobody could call them obscure. By the early 70s, however, the musical counter-culture had largely transformed itself into an underground scene. It was an alternative to the mainstream (more specifically, chart music) and wore its obscurity with pride. Pink Floyd, Genesis and the Mahavishnu Orchestra wouldn’t have been caught dead on Top of the Pops (not in 1973, at least). Spurning mainstream popularity earned these acts kudos with their fans but also created a rather sterile ghetto for themselves. They weren’t confronting the mainstream; they were hiding from it.

Other acts were less shy, however. In particular, the “art rock” wing of glam (basically, David Bowie, Roxy Music, T-Rex and Sparks) was more ambitious than its prog contemporaries. Those acts didn’t just want to be an alternative to the mainstream, they wanted to become the mainstream. They wanted their values, their opinions, their aesthetic to become the norm. And this sense of “storming the barricades” was eagerly taken up by punk. They released singles, they got into the charts, they appeared on Top of the Pops. This all added hugely to the exciting sense of danger which surrounded the movement. When Lydon sneered “We’re the future – your future” in God Save the Queen, it terrified the authorities because there was just a chance that he was right. He wasn’t, of course. Indeed, a more prophetic act would’ve been a group of middle-aged call-centre workers sneering “We’re the future – your future” at a bunch of punk rockers. That would’ve been far more terrifying, too.

For a while, though, it really looked as if the barricades were being stormed, as if things were changing. All sorts of weirdoes started to make the charts. Ian Dury got to number one (admittedly, just as he went off the boil). The Specials and the Jam did likewise. The Dead Kennedys charted with a song called Too Drunk to Fuck. Joy Division charted (admittedly, Ian Curtis had to kill himself to manage it). Laurie Anderson reached number two with an eight-minute poem about the US military-industrial complex sung over a single, looped breath. Dave Lee Travis was forced to play it on his Radio 1 daytime show! Bliss was it in that dawn to be alive, but to be young was very heaven. We were winning.

And then it all went to shit.

Okay, so what the hell happened? Well, regarding the music industry itself, the money men finally started to catch up with the artists. Punk and its aftermath had caught them on the hop, but now they made up for lost time. Major record companies started setting up faux-indie labels, and everything slowly drifted towards structured careers and increased commodification. What had once been a chaos of spontaneous creativity solidified into a check-list of increasingly self-conscious gestures and ploys. If you played this type of music then a label could get you into these venues. Your first album should sell this much and with carefully targeted marketing the second album (which would be better produced and slightly more commercial) would sell that much, allowing you to trade up to those venues, appear on this day of this festival (sponsored by Barclaycard), and so on. The whole thing started to run on rails. It was depressingly like a video game: Sim Indie.

Of course, such changes were just corollaries to broader developments in British society. That’s where the real action was. And bearing in mind my previous comments, it shouldn’t be too difficult to see what I’m talking about. After 1979, the post-war consensus which had lent vitality and significance to self-expression was brought crashing to an end. Throughout the 80s the nationalised industries were privatised (and largely shipped overseas), and the unions were (literally) beaten into submission. Thus two great levers of democratic change were removed from the public’s grasp. The whole focus of social policy shifted decisively from public welfare to commercial viability. And the underlying message for ordinary people was clear enough: you have no voice, and even if you did who would you speak to? The government no longer runs things; it merely facilitates the wishes of multinational corporations. You don’t even know who those people are and you certainly don’t get to vote them out – so shut up and buy!

The final twist of the knife probably came with the collapse of the Soviet Union in the early 90s. Soviet communism represented the possibility of an alternative rather than an attractive option in its own right. So long as it staggered ineptly on you couldn’t say that capitalism was the only game in town. At least one alternative could exist, and if that was true then why not more than one? Why not something better than Soviet communism? Significant change was at least an option, and that added a degree of urgency to the social discourse whatever type of change you were after and, indeed, whether you longed for change or dreaded it. But once the Soviet Union collapsed the whole question became moot. All you could expect now – and for the foreseeable future – was the same only more so. And that’s exactly what we got.

Inevitably, the grip of market-place values on our culture strengthened (a development neatly symbolised by the Thatcher-era insistence that users of the railways were “customers” rather than “passengers”). At first it seemed faintly ridiculous – a childish game played with words. But it was backed up by all sorts of substantive measures allowing the market to intrude upon and control our lives. Thatcher wasn’t just indulging in a right-wing version of political correctness; rail users really had become customers, because that’s how they were treated by the people who owned and ran the railways. And gradually this subtly different way of looking at things became the air we breathe – so much so that today it’s difficult for anyone under the age of 40 to grasp the significance of what’s happened because they’ve never known anything else. Multinational corporations sponsor rock bands? Of course they do! What’s the harm in that? I don’t see what you’re getting at….

All this, of course, was the increased commodification that I’ve already mentioned. But it went hand in hand with another pernicious feature: the atomisation of society. Again, Thatcher summed things up with typical directness: there’s no such thing as society, only individuals and families. At the time we lefties thought she was just indulging in right-wing propaganda, but in a sense she was merely expressing the logical outcome of a world dominated by commercial enterprise. It is a vision of life brilliantly summed up by Ned Beatty’s character in Network:

There are no nations. There are no peoples. There are no Russians. There are no Arabs. There are no third worlds. There is no West. There is only one holistic system of systems, one vast and immane, interwoven, interacting, multi-variate, multi-national dominion of dollars. Petro-dollars, electro-dollars, multi-dollars, reichmarks, rins, rubles, pounds, and shekels. It is the international system of currency which determines the totality of life on this planet. That is the natural order of things today. That is the atomic and sub-atomic and galactic structure of things today.

And what of individuals in such a world? Their only voice is their money. What they do is only important in a commercial sense. They are customers, consumers; the one significant way in which they help shape the world around them is by purchasing this rather than that. The content of what they buy is totally unimportant; each item is merely a variable on a spreadsheet, whether it’s a pair of Nike trainers or a copy of The Anarchist Cookbook. That is the power dynamic between the individual and the state in such circumstances. And, as a result, the relation between the individual and what he or she buys becomes predominantly a matter of entertainment. All other significance is stripped away until the only question about the things you own is: how much pleasure do they provide? Outside of work and raising fresh consumers, life becomes one long, dreary, infantile matter of having fun.

Again, we shouldn’t get carried away here. The above is an extreme vision rather than an accurate picture of where we are now. But I think the world has moved unmistakably in that direction over the last 35 years, and its corresponding effect on rock music has been hard to miss. Specifically, it’s involved an increasing break up of the market into a plethora of niches. Some of them are massive, some of them are tiny, but they all represent a profit opportunity, and that’s the main thing. If you’re sexy, female and a bit odd, there’s a huge market waiting for you; but don’t worry if you’re an ugly bunch of angry malcontents – they can still turn a profit from that. In 1977 the music industry didn’t know what the hell to do with the Sex Pistols. Today there’s a pre-prepared slot waiting for you no matter how outlandish your product is. As a result, of course, the old idea of taking over the mainstream has become otiose. There is no mainstream anymore, and so there’s no counter-culture either – just a massive range of ghettos. Forty years ago we all watched the same TV programmes and listened to the same chart music. Even if you hated it, you couldn’t really avoid it. Today we have satellite TV, TiVo boxes, a different digital radio station to suit each taste, iTunes, iPhones, iPods, iPads, Spotify, YouTube, Last.fm and all the rest. This sounds like a great advance (that’s certainly how it’s sold to us) and in some ways it is, but the end result is that we’re each of us alone in our own little sonic universe. And, frankly, there’s something downright creepy about a society in which it doesn’t matter if most of the people around me value things I consider abhorrent because at the flick of a switch I can completely blot them out. The fact that I value x rather than y used to be important because it unavoidably brought me into conflict with people who valued y rather than x. I had to contend with their views and they had to contend with mine. Today who cares? It’s all just a series of consumer choices, and why should what I put in my supermarket trolley be something that bothers you? We have become musical solipsists. We don’t have to listen to anything we don’t want to, and so what we choose to listen to doesn’t matter a damn. It’s about momentary amusement and nothing more.

You can see all this in the shift of attitude towards rebelliousness over the last few decades. Once it was feared by the authorities and cherished by those with a grudge against society. Today it is sold to us as a life-style option. The reason for this change is obvious: in a society where people have a genuine connection with political power, youthful rebellion is potentially a dangerous force. But once that connection has been broken it becomes merely a form of amusement – a brute urge that can be packaged, priced and sold, just like everything else. As such it is actually beneficial to the status quo; it helps keep us distracted and fosters the illusion that we’re not helpless victims. The “man” doesn’t own me, because when I’m not doing my fifty hours a week I listen to an illegal download of Nine Inch Nails (praise be to Virgin Media!). In all forms of advertising and popular entertainment we are relentlessly urged to be mavericks (my own personal favourite in this regard being John Deed, the maverick High Court judge), to stand out from the crowd, to avoid being one of the “sheeple”. Or, to put it another way, to be isolated, inconsequential and impotent. In this respect, the function of alternative music (as part of our broader culture) is eerily similar to that of religion in the 19th Century. It is, as Marx said about religion, “the heart of a heartless world”.

Of course, people aren’t stupid. They grasp their position within society, and that the music they produce and consume cannot help but be tailored to (or created by) the environment as it stands. So, for example, the production of critical or rebellious music has become a self-conscious pose. The people who make it do their utmost to be sincere and radical, but they can’t quite take themselves seriously. They’re just dressing up and they know it. That also explains why their music tends to be so obsessed with (haunted by) the heroes of the past. They listen to stuff from the 60s and 70s and cannot help but notice how much more meaningful it seems (of course, they don’t look beyond the music itself to the times in which it was produced). Naturally, they want to manifest that level of significance in their own music, but no matter how hard they try they just end up with a skilful parody of the past. In a worst case scenario you get Jack White’s nerdy enslavement to outdated genres and recording techniques (or “authenticity” as it’s known).

Here it’s tempting to assume that the problem lies with the artists themselves. Somehow they lack that magical quality which turns a three-chord thump into art. If you go down that route then the artists of the past appear to be not just talented, interesting human beings but mythical creatures. They played the same chords and riffs as people today, yet somehow – through some kind of secret magic – they utterly transformed them. I’ll leave it to you to decide how far that’s become a common attitude. The truth, of course, has nothing to do with talent (or a lack thereof). Today’s artists fail because they’re trying to fly in a vacuum. They beat their wings and nothing happens. They suppose that if they could just beat them a bit harder they might take off. And they gaze with awe at the legends of the past (who had air beneath their wings) and wonder how the hell they managed it.

Given this, it’s not surprising that many artists prefer to shy away from rebellion or anything remotely political. Instead, they focus on relationships and personal feelings (the hope being that they’ll express themselves more honestly and insightfully than, say, Katy Perry or Blue). There’s nothing wrong with this approach per se – for obvious reasons it has always been a feature of the pop landscape. Unfortunately, however, it’s been stripped away from a context in which expression is socially meaningful. Within such a context it can provide a much-needed counterpoint to angry rants and leather-jacketed sneering. But outside of it the results are truly appalling: an endless stream of fey, wistful, bitter-sweet, melancholy, intentionally childish, folkie japing and whining. It is the sound of defeat, and it’s absolutely no coincidence that it works so brilliantly as a bed for TV adverts. (I can’t help suspecting that every time The Lumineers write a song they ask themselves “How advert-friendly is this tune?”)

One final point. There’ll doubtless be some who object that music is still very much a social venture. People turn up to gigs – especially festivals – in huge numbers, and the internet is awash with enthusiasts sharing and commenting on all kinds of music. And that, of course, is true. But it doesn’t increase music’s social significance one iota because, at the end of the day, it is part of the consumerist power dynamic rather than a challenge to it. Gig-going has become so thoroughly commodified that one might as well cut out the middle-man and hand the money straight over to Arthur Levinson. Small and medium-sized venues are struggling while glorified circus acts and flesh-and-blood nostalgia-trips pack out the O² Arena. The festival scene is particularly ironic in this respect, since it owes its existence to Woodstock and other “happenings” of the late-60s and early-70s. Now, in terms of actual music I’m not a huge fan of Woodstock – there sure was a lot of ropy old folk going on – but even so it’s impossible to ignore the sense of something important happening. Those people had a voice and they were using it. By contrast, modern-day Glastonbury merely offers the indulgent (and somewhat fascist) thrill of being part of a huge crowd – not being part of a crowd that’s out to achieve anything; just simply being in a crowd. That’s social, I suppose, but not in any meaningful way. Do you seriously believe that 135,000 people thought they were achieving something by watching Dolly Parton mime?

As for the internet, it certainly provides a platform for any old nitwit with an axe to grind (me, for example). But, so far as music is concerned, it doesn’t give anyone a voice. It cannot do that because there is no mechanism at the other end for converting opinion into social change. It’s like pulling on a brake lever when the cable’s been cut and thinking that if you just pulled harder the wheel might stop. Indeed, far from giving us a voice, it turns us into part of the marketing industry, for that is the net result of all the file-sharing, amateur reviews and liking stuff on YouTube: a few more units get sold. A few more units of a product that means nothing.

And that, ladies and gentlemen, is how music became pointless.

Determinism and Physics

One of the modern-day sources of determinism is undoubtedly reflection on the discoveries of science and, in particular, on what physics has to tell us about the world. For a start, the mere notion of cause and effect is enough to induce doubt about the existence of free human actions; the causal chain stretches back from what I do now to the very beginning of time itself. So, barring the miraculous, how can I be said to have a real influence on what happens?

Put like that, however, “cause and effect” can seem a slightly vague or abstract notion – we might, for example, become troubled by the thought that it’s a precondition of science rather than an actual scientific discovery. To really nail down the issue we want to cash things out in more concrete terms, and (happily) that doesn’t seem a particularly difficult task. The basic approach here is one of reduction: my actions can be described in terms of the physical movements of my body. In turn, my body can be described in terms of its various components: bones, organs, muscles, sinews, glands, blood – and at a slightly finer grade, cells, neurons, bacteria etc (this we might call the level of biology). The interaction of these components can itself be described in terms of the behaviour of the molecules that go to make them up (the level of chemistry). Finally, molecules are composed of atoms, and a description of the laws governing the interaction of atoms brings us to the level of physics.

There’s something pleasingly neat about this: human behaviour reduces to biology, which reduces to chemistry, which reduces to physics. Each time we seem to go down a level and get closer to the objective truth of what’s going on. What’s more, by the time we get to physics there doesn’t seem to be any room at all for human freedom. It’s all just atoms pinging about according to well-established physical laws. How could any of that produce genuine freedom? True, at the sub-atomic level, probability suddenly emerges as an inherent feature of the system. But, for one thing, that’s only significant on an unbelievably tiny scale. By the time you get to molecules (let alone you and me) the chances of anything freakish happening are so remote as to be effectively zero. And in any case, even if something freakish did happen, such behaviour would be random rather than free. There’s no escaping it: humans, like trees and planets and stars and galaxies, are made of atoms. And at the level of the atom freedom is simply not an option.

We might summarise it like this: “atoms, therefore human freedom doesn’t exist”. It’s an intuitively compelling line of thought, even if the conclusion is rather depressing. But I also think there’s something deeply incoherent about the argument. Actually, there are several possible lines of attack. You could, for example, suggest that the notion of freedom it rules out is actually a fiction which only passingly resembles the concept we actually use (and, in case you were wondering, that’s the reason I’ve resisted using the term “free will” in this post). But I want to attack it from a different angle; I want to suggest that it’s incoherent on its own terms. Without quite realising it, the argument attempts to see things from two different viewpoints at once. By adopting what might be called “the atom’s viewpoint” it declares that freedom doesn’t exist, but then it illicitly slides back to a human viewpoint and further declares that people aren’t really free. That, it seems to me, is trying to have your cake and eat it.

What am I getting at here? Well, let’s consider for a moment how we got from everyday life to the strange world of the atom. We started with the human being – that was, so to speak, a given. And, if you think about it, there are lots of things that are “given” at this level: human beings interact with each other and with various objects. We tell jokes, climb trees, eat chips, walk dogs, and so on. Some of the things we do aren’t said to be free. Blinking, for example, or digesting food. But many of our actions are categorised as “free” under normal circumstances. That too is a given at the human level. And what the determinist’s argument seeks to establish is that, unlike all the others, the “given” of human freedom is actually a fiction. To this end we broke the human down into smaller and smaller bits until we reached the level of physics. Here there was nothing remotely like freedom to be seen and so we asked “how can freedom possibly exist when physics has revealed that the world is just atoms interacting?” The problem with this, as I see it, is that you might just as well ask “how can humans possibly exist when physics has revealed that the world is just atoms interacting?” For it is one thing to start at the level of human beings and break that down into smaller and smaller bits, but it is quite another to start at a level that only contains atoms and from those alone deduce the existence of people. It is true that from the atom’s viewpoint there is no such thing as freedom, but there is no such thing as people, either. Or houses, animals, trees or stones.

At first blush this claim might seem absurd. Of course we know that such things exist and, moreover, we can explain how the interaction of various atoms brings them about. But that misses my point. We can only do that because we take it as a given that there are people, trees, etc. Our whole investigation has been from the top down. But what if you started at the bottom, without any preconceptions as to what did or didn’t exist?

Imagine a 5-second snapshot of all the atoms in a particular space. Here clusters of them are behaving in this way, over there, different clusters are behaving in that way. What gives you the right to draw a line round a particular cluster and say “those atoms form a human being”? Isn’t that a case of you imposing your preconception upon the picture rather than deriving your categorisation from it? There is nothing intrinsic to the snapshot that allows you to make such a claim. You are importing your knowledge from one level (the human level) and using it to make sense of information at another level (the atomic level). But what we were trying to do was use the atomic level alone – since that is the level which purportedly shows us what really exists – to justify the claim that human beings exist. And at that level there are no human beings. Just billions and billions of atoms pinging about. In fact, our situation is even worse than that: not only are there no human beings, there are no brains or hearts or lungs either. And no cells. No neurons. No nerves. No molecules. Just atoms.

Hopefully that clarifies what I meant when I said the determinist’s argument tries to have it both ways at once. It dives down to the atomic level, sees no freedom there, then races back to the human level so that it can claim humans aren’t really free. But it didn’t see any humans down there either! Where have all these humans come from? Why, when we emerged back at the human level, was freedom the only thing missing? What on earth justifies the determinist in giving it such special treatment?

As I said, it’s incoherent.

Atomism, Wittgenstein and God

I was recently reading Robert Fogelin’s assessment of atomism in the Tractatus Logico-Philosophicus and (strangely enough) it prompted a few ideas about the concept of God. I thought they might be worth sharing.

First of all, what do we mean by “atomism”? Fogelin sets out the basics with admirable clarity:

  1. Change (in a wide sense) is a matter of the combination and separation of constituent entities.

  2. Not everything is subject to change, for there must be an unchanging basis for change. Atoms, entities that are not the result of combination nor subject to division, constitute this unchanging basis.

  3. Combination and separation are possible because atoms exist in a void (in a space) that provides a field of possible combinations.

(Fogelin, Wittgenstein, Second Edition, p5)

A few things to note. “Atoms” here are not to be confused with atoms in physics. The atoms of physics are, of course, divisible and subject to change. But for the atomist philosopher, “atom” is the name of whatever it is that cannot be divided into parts or changed in any way. To avoid confusion (ha!) they’re often called “simples”, and I’m going to follow that convention. Anyway, the idea is that such things must exist or else how is the whole business of reality (matter, complex objects and so on) going to get off the ground?

That sounds reasonable enough, but a bit of reflection suggests that these “simples” are likely to be very strange things indeed. One oddity is that although they must exist (since they are a necessary condition of reality) it makes no sense to say either that they do exist or that they don’t. Allow me to explain.

Complex objects are made up of simples. So object “X” might be made up of two simples: a and b. It certainly makes sense to say that X exists since it’s possible that it might not have existed (ie, the simples a and b might not have been combined in the requisite way). Therefore “existence” and “non-existence” are the combination or non-combination of simples. That’s what (according to the theory) those terms mean. But if that’s true then we cannot predicate either existence or non-existence of simples themselves. They are prior to existence and non-existence; they are what you must have in order for existence and non-existence to be possible.
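To see why the predicate fails to get a grip, here’s a minimal sketch of the point in notation of my own devising (not Fogelin’s or Wittgenstein’s), taking the atomist’s definition at face value:

$$\text{For a complex } X \text{ with constituents } s_1,\dots,s_n:\quad X\ \text{exists} \;:=\; \mathrm{Comb}(s_1,\dots,s_n),\qquad X\ \text{does not exist} \;:=\; \lnot\,\mathrm{Comb}(s_1,\dots,s_n).$$

A simple $a$ has, by definition, no constituents, so there is nothing to feed into $\mathrm{Comb}$: neither clause of the definition can even be applied to $a$, and “$a$ exists” / “$a$ doesn’t exist” come out not as false but as undefined.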

Pretty weird, eh? But things can get even weirder if you decide, like a vicar in an Alan Bennett sketch, that “God’s a bit like that, isn’t he?” What I’m getting at is this: it is often claimed that God’s existence is necessary. He didn’t just turn up via a happy accident; he is a necessary pre-condition of the world itself. Indeed, the “ontological argument” infamously tries to prove this claim. There are many versions, but, roughly, it goes like this:

  1. We have the idea of a completely perfect being, ie, God.

  2. A being that exists is more perfect than one that doesn’t exist, therefore:

  3. God must exist.

Oceans of ink have been spilt trying to decide if this makes sense and, if it doesn’t (which is most people’s position), setting out exactly what’s wrong with it. Kant, for example, thought the problem was that it treats existence as a predicate – a quality that objects have or lack, in the same way that my jeans have the quality of being blue but lack the quality of being clean. But existence, he declared, is not a predicate. Frankly, no-one’s sure if Kant was right about that, but it’s probably fair to say that existence is at best a strange type of quality. (There’s no dining table in this room – does that mean that there’s a dining table here which lacks the quality of existence?)

But here’s the thing: if we assume that God is a necessary precondition of reality then, as with the atomists’ simples, it cannot make sense to say either that he exists or that he doesn’t. My chair exists, and so do Scotland and the Fibonacci sequence – but it only makes sense to say that because those things might not have existed. They have what we might call an “ontological status”. But God is in an altogether different category. The word “God” doesn’t refer to any type of thing; God has no ontological status and therefore the statements “God exists” and “God doesn’t exist” are both nonsense. To put it another way: if the opposite of your claim is nonsense then your claim itself is nonsense. So if God’s existence is necessary then it’s nonsense to say that he doesn’t exist and – therefore – it’s nonsense to claim that he does. Or to put it yet another way: whatever can exist can be destroyed. God cannot be destroyed, therefore God cannot exist.

All this, of course, has serious implications for the ontological argument. As its very title suggests, it doesn’t hesitate to allot God an ontological status: he exists. But the argument tries to prove that he exists necessarily and so according to its own assumptions its conclusion must be nonsense. That’s not good. But might it be amended to avoid this problem? The only alternative candidate I can think of runs like this:

  1. We have the idea of a completely perfect being, ie, God.

  2. A being that exists is more perfect than one that doesn’t exist, therefore:

  3. Shut up.

That works, I think. But now let’s complicate matters by returning to Wittgenstein. In the Tractatus he adhered to a form of atomism and agreed that it was nonsense to talk of simples existing or not existing (actually, he thought it was nonsense to talk of most things – including his own philosophical theories). In his later years, however, he developed a very different approach to philosophy and had some intriguing things to say about simples. His thoughts on the matter are pretty difficult, but here’s a rough outline:

When we say “existence and non-existence involve the combination and non-combination of simples” it seems as though we are describing a feature of the world. But in fact what we are doing is laying down a rule that defines what we mean by “existence” and “non-existence”. (I’ll leave it to you to decide if the atomist’s definition matches our everyday one regarding those words.) Now, rules are intrinsically categorical; they say things must go like this, not that. So if we mistake a rule for a description of the world then it can seem as if what we are “describing” is not just true but necessarily true. Existence must be the combination of simples, and those simples must exist. But all that really means is that we must posit the existence of simples or else we cannot have the rule. In effect, the atomist creates a strange game with language (a “language-game”, Wittgenstein would’ve called it), then presents this game as a description of the world and, in the process, makes it seem as if he’s discovered these weird entities – simples – which exist necessarily. What’s more, the rules of the game are so constituted that they create a paradox concerning existential claims about simples – they say both that simples must exist and that it’s nonsense to say they exist. (A fuller account of all this is here for anyone interested).

Now let’s try to apply these insights to God. People describe God in various ways: he’s omnipotent, omnibenevolent, exists necessarily, and so on. But are these really descriptions or are they actually rules by which we define the concept “God”? Don’t they constitute a kind of language-game which is tied in to the wider “game” of religious practices? (When I say “game” here I don’t mean to imply that the practices are trivial; I’m pointing out that they’re analogous to games insofar as they’re rule-governed. For Wittgenstein the term “language-game” covers pretty much all types of activity involving language.) Moreover, don’t the rules of the language-game concerning God generate the same kind of paradox as the one concerning the existence of simples? Following through the rules of the concept we seem to arrive at a position where God’s existence is both necessary and nonsense (as is, don’t forget, his non-existence).

Here it’s tempting to say that if the rules generate a paradox then that proves they must be wrong. But that, I think, groundlessly extrapolates the status of paradoxes from areas such as mathematics. In a mathematical language-game a paradox is usually pretty deadly, but need that be the case here? (And even in mathematics they’re not always so deadly. We’re taught at school that there’s no such thing as the square root of minus one. But in some areas of mathematics there most certainly is such a thing.) The point here is that rules are neither true nor false; they’re either established or they’re not. We either play by them or we don’t. And sometimes, it seems to me, we can accept rules even though they occasionally generate difficulties. For the ultimate “justification” of a language-game is not that all its rules operate smoothly, but that the game is played. This is linked to a comment Wittgenstein made right at the end of his life:

You must bear in mind that the language-game is so to say something unpredictable. I mean: it is not based on grounds. It is not reasonable (or unreasonable). It is there – like our life.

On Certainty, §559

Still, you might think, if the concept of God simply comes down to the rules people have established about him, doesn’t that show that it’s simply something we’ve made up? After all, the rules of the language-game are our rules. And this last point is correct. But it doesn’t just apply to language-games such as religion or chess or rugby league. It also applies to mathematics, logic and the concept of measuring length. Now, have we “just made up” mathematics?

This is not to claim that religion is as deeply woven into our lives or as ubiquitously accepted as mathematics. Atheists and Christians alike learn the multiplication tables. But the point is that just because the rules of language-games are our rules that doesn’t mean they’re all a simple matter of caprice. Some are, some aren’t. And nor does it mean that they’re all somehow “unreal”. Is the language-game of measuring things unreal? The question is: what status do we want to give to religion? Deciding that is not a matter of deciding whether it’s true or false, for a language-game is neither. It’s a matter of deciding whether or not we want to play the game.

I’ll end with a quote from Wittgenstein’s Philosophical Investigations and leave you to decide its relevance.

“But mathematical truth is independent of whether human beings know it or not!” – Certainly, the propositions “Human beings believe that 2×2=4” and “2×2=4” do not have the same sense. The latter is a mathematical proposition; the other, if it makes sense at all, may perhaps mean: human beings have arrived at the mathematical proposition. […] (Is a coronation wrong? To beings different from ourselves it might look extremely odd.)

Philosophical Investigations, Part II, §348

NAKED COV

“Bibberdy-bibbert” blurted some kinda faxola like it’s 1993 or something, how the fuck am I supposed to know? [“You bin outta the scribbling game too long, kid?” leered Spence, his McDonaldized bulk not so bulky as to smother the tank on his hip. “You forgedda da words already?” His accent going through five States per sentence. He calls this “humour” – like when a pregnant whore falls down a flight of stairs. Attlee picks the scum from underneath the nail of his ring-finger with a spring-loaded razor knife. “I’m a working stiff,” he says. “Passed my probation,” he says. “I’m a Hi Viz slogger in a Hi Viz world,” he says. “Protective boots must be worn at all times in this area,” he says. “Wadda I know bout words?” he says. “Words is just grunts with delusions of grandeur,” he says. Scoops a box of Disney Nightmare Princess Fetish Paedophilia Tales (“Operation Yewtree Approved!”) onto the workbench and strips off the piss-yellow agent tape with his trusty razor knife. The cardboard walls fall deftly aside revealing sixteen tatty books doubtless covered in some kinda unspeakable protoplasm, seeing as how this is a Bill Burroughs pastiche. “Fifteen hundred units per day, motherfucker,” he says. Cherry-picking bastard. Spence shifts carefully in his chair, conscious of the round in the chamber. “You should lay off the Wittgenstein, kid,” he says, “it’s turning your brain to mush.” Attlee sniggers. “Ima REAL boy!” he squeaks in a voice that wants to be like Pinocchio but comes out like Jimmy Durante doing a half-arsed Mickey Mouse.]

Anyhoo…

“Bibberdy-bibbert” blurted some kinda faxola like it’s 1993 or something. We establish this already. Kosmo rips the message from the computer-paper roll and scans the text. His eyes widen, magnified through the lenses of his [brand-name] reading glasses.

“Mother of mercy!” he sighs, “I’ve been sent to Coventry!” The mere utterance of this ancient Anglo-Saxon place-name conjures visions of a concrete apocalypse. Pregnantly obese obese pregnant women gnaw Polish bratwurst from foot-long batons smothered in curry ketchup. Hordes of feral Goths huddle beneath the Brutalist punch of the flyover, deface the 13th Century walls of the ruined priory, clutching their 2 litre bottles of Diamond White like surrogate teddy bears… Petulant faces suck down one last sandy roll-up outside Cofa Court Jobcentre Plus… Brisk trade at the CEX… Four Wetherspoons, no Waitrose… Cheerfully shabby bustle in the Halal barber on the Foleshill Road… Sinisterly bland multi-purpose business parks on the bulldozed remains of car factories… Dave Nellist begging for change outside the fussily pompous Victorian Town House, leave the guy alone, he’s a ghost already… IKEA looms on the horizon like the Death Star with cheap meatballs…

Kosmo reads on and relates the gist to his apprehensive wife. “Seems like the Firm has located an unexplained outbreak of culture,” he says. “They want us to investigate.”

“Do we have an agent in the area?”

“Not as such. They closed down that wretched excuse for an operation decades ago. Ploughed salt into the topsoil and don’t call us… All we got now is a broken husk of some guy, retired without pension, divides his time between binge-drinking and intemperate internet outbursts concerning a Government conspiracy to misrepresent the music of the nineteen-seventies. I’ve seen his file; it’s stamped If you’re looking at this you must be in serious trouble.”

“We could reactivate maybe?”

Kosmo pulled a face. “Pretty thin,” he says, “but what else can we do?”

Attlee’s phone didn’t ring because Attlee’s phone had been smashed into pieces by Attlee. Orders from above – by which, of course, I mean “orders from within”. Poured himself another four fingers of Wild Turkey and resumed scribbling in his Poundland notebook:

Raising the dark… my outbreath before me… cheap boots in the underpass… footsteps in time with my heart… almost people through the bare trees… I am happy product… these people are adverts… the Sunday Atheists sniffing round the wheeliebins… new dogma from the Think Tank… I am struggling to recall the difference between refusal and surrender.

A tinny blast of Kandy Korn by Captain Beefheart & His Magic Band broke the flow. It was Attlee’s ringtone.

“Goddammit!” he muttered, “I must get me a better hammer.”

He picked up a fragment of the shattered technology, gingerly clasping one corner between forefinger and thumb. There was no “accept” button to press, but it turns out that didn’t matter.

“We got you a mission, homes.” Spence’s voice, affecting an accent from south of a border that no-one had ever crossed.

“A mission?! I’m outta that game. Bigger fish to fry. Did you know that Government-sponsored pop-culture historians have been systematically and deliberately mis-locating the significance of The Mekons?”

“I’ve read your file,” said Spence in what might’ve been his own accent, for all I know. “Listen, kid, we’ve got ourselves a situation here and you’re the only boots we have on the ground between Stratford and Long Buckby.”

“A situation?” Despite himself, Attlee was intrigued and flattered.

“Some kinda Kulturkampf is kicking off right on your fucking doorstep. Rumours are circulating about a mother-ugly unholy Frankenstein folk/oompah cut-n-shunt at the Henry VIII.”

Attlee gave a long whistle. “Jesus H Christ on a bike drowning in his own tears! First they came for Post-Punk, now they’ve turned to Krautrock. Well, it was only a matter of time I suppose… But what the fuck am I supposed to do about it? You want me to blog?”

“Blog?! Oh for pity’s sake! Wise-up, kid, will ya? Listen, we’re dropping in some of our people. Top agents. They’ll do the heavy lifting, but we need you to liaise – guide them through the minefields, give them the nine-fifteen.”

“The nine-fifteen?”

“Sorry, I just made that up – y’know, to sound spyish. But that’s not important. Listen, kid, this is serious. We can’t be having culture just spontaneously springing up in Coventry. I mean, where will it end? Chess clubs in Hillfields? Poetry readings in Bell Green? Starbucks in Bedworth?”

“They’ve already got a Costa.”

A spasm of despairing rage smashed through the torn speaker. It was several minutes before Spence was calm enough to continue.

“We’ll deal with that later. First things first. Come on kid: are you in or what?”

“Will there be alcohol?”

“Of course there’ll be fucking alcohol. I said ‘liaise’ didn’t I? You think I meant maybe take them to BHS for a mug of Earl Grey?”

“The Spoons?”

“What?! Show a little class, for fuck’s sake, kid! These are serious people. The Spoons! Jesus! Take ‘em to The Establishment or… um… well… Somewhere that’s not too shit, anyhow. And not the fucking Spoons! Does this mean you’re in?”

“…”

“Attlee?”

“…”

Oh, he was in all right. His mind was already chasing down locations as he stared out at the dual carriageway, lazy Saturday afternoon traffic and behind the cathedral spires, out beyond the inner ring, towers of smoke from the various burning buildings: The Radford, Canley Social Club, The Sidney Stringer Academy… A city on fire, wounded, huddled in the centre of its spoke-and-hub street plan like the tarmac web of some OCD spider… They’ll be doing lines off the bar at The Rocket, where the pool table’s so warped you need non-Euclidean geometry to play… Shit-faced casual in The Earl of Mercia, telling anyone stupid enough to listen how he’d given up violence, he’d given it up. Seriously. A mug’s game. But I’ll still back myself up, like. You know: step in for a mate. I could fuck you up. FUCK YOU UP. It’d be all like “That was one you’d not seen before, wasn’t it? Go find your face; I think it’s in the toilet…” Ack! I’ve given it up. A fucking mug’s game. Take a pop at me if you want. Go on: take a pop. TAKE A POP!… “Welcome to Willenhall – Your Car is on Fire”… Where have Cooky and Stretch gone now they’ve closed Annabel’s? Or the twitchy geezer huddled in his overcoat, shiny with dirt, playing blackjack at £20 a time, losing and losing, then reaching into his filthy coat and pulling out a thick, shrink-wrapped wedge of fifty-pound notes. Payroll… The combed-back muffin-tops with shit tats wobbling on their teenage bingo wings as they scream at the Kasbah security. Check in your machete at the cloakroom, lasers cut dry ice, the walls pulsate, sweat, throb, Carling Zest £1.50 a pop… And always the feds, the plastics, the hobby-bobbies with their stupid bicycles and stab vests, hassling the drunks in Lady Herbert Gardens where Barking Mark shared a joint with Attlee one time, tipped him off about the knock-off tobacco place and told huge, ugly chunks of his sad, fucked-up, pissed-up, pissed away life story… broken relationships, louring medical staff, police cells… the doomed dream of a better life in faraway Blackpool… on and on he rasped, voice as dry as Mary Berry’s quim…

Infinite Jest – the First 33%

When I started reading Infinite Jest a month or so ago I promised myself I wouldn’t blog about it. So here’s a blog post about it. Apologies and all that, but the novel’s heavyweight reputation and its evident attempt to say something important about the modern world practically demands some kind of appraisal from the reader. After all, what’s the point of hacking through its 1079 pages if it doesn’t prompt a response?

But it gets worse. On the one hand, I felt almost obliged to set out my thoughts on the book but, on the other hand, I quickly realised that I wouldn’t have the time, energy or patience to wait until the end and then produce a carefully thought-out analysis of the damn thing. So instead I’ve opted to post a slightly tidied-up version of notes I jotted down as I read through. The drawback to this approach is obvious: I’m commenting on themes, styles, etc, in total ignorance of how the novel’s end might justify or put into perspective what happens in the beginning.

Still, at least they provide a more-or-less honest account of what it’s been like to read it. Hopefully those who’ve already completed the journey will be able to sympathise with my struggle. Anyway…

24 June

Fifty pages in. Written in the present tense – always slightly irritating. A cheap way to seem pacey and energetic. Also (it seems to me) a clear sign of the effect of films and TV on the imagination. It reads like a screenplay. These days “imagining” something pretty much means imagining it as it might be presented in a film. (But couldn’t we say something similar about Dickens’ theatre-haunted prose?)

Actually, it’s too strong to simply say it reads like a screenplay. DFW often uses the freedom of prose to decent effect. In the first chapter, for example, we get Hal’s version of events and we only learn later that something rather different seems to have occurred (and we’re not told exactly what). But, still, the influence of filmic imagination is strong. I suppose that’s not really a criticism of DFW; it’s characteristic of the modern imagination – including my own. I worry that it’s a diminishment – an automatic focus on surfaces but without the advantage films have of being able to show the fluid complexity of the human face.

Some nice expressions/descriptions, but the prose isn’t as “drum-tight” as Eggers’ introduction suggested. In fact, Eggers’ intro (which I half-read) has put me on my guard from the get-go. He was trying too hard to sell me a book I’d already bought.

College kids/techy types smoking dope. Too much of that so far. Who fucking cares?

Reminds me of DeLillo and Coupland with more book learning. Not an unqualified good in either case.

All the above pretty sniffy. I am enjoying it but it hasn’t grabbed me by the throat yet.

Insects. Colds. Dreams about mother. The face in the floor.

25 June

Technology, biochemistry – life seen from the “outside” dominates. Nature is represented by insects (ie, nature at its most mechanistic) and the malevolence of the Arizona sun.

Of course so far the novel’s stance towards this focus is not clear.

26 June

80-odd pages in. Things starting to clarify: the “entertainment”; its links to the films made by Hal’s father; hints of something unusually sinister about the drugs…. Even the feral hamsters have turned up!

But I already need a cast-list to remember who all these sodding people are (fortunately Wikipedia provides this).

I’m a bit concerned that the whole thing seems to be based on Monty Python’s sketch about the funniest joke ever written. The Pythons were done with it after three minutes, but can it carry a 1,000+ page novel?

Also – I suppose because of its length and reputation – I can’t help comparing it to Ulysses. Post-modernism updates the modernist classic. This seems both unfair and instructive. Hal as a sort of ’90s Stephen Dedalus: precociously talented, a strained relationship with his family, etc. The way the book carries its huge erudition very much on its sleeve, but often does it for satirical purposes. Yet there are no ordinary people in Infinite Jest. And so there’s little genuine pathos or warmth. DFW suggests life is mad by presenting us with a bunch of characters who are mad. Grotesques. Joyce mingles madness and sadness by describing the details of ordinary life. Satire, but also pathos. Bloom is a character we care about. I can’t really say the same of anyone in Infinite Jest.

27 June

I might be using IJ as a lightning-rod for my dislike of modern (ie, post-modern) culture. In other words, I’m setting myself up against the book’s reputation. The haunting fear that I’m becoming Roger Scruton.

28 June

No ordinary people. Everyone is neurotic or worse. No-one is genuinely close to anyone else. And the tennis academy stuff is boring me.

Post-modernism: life is meaningless, so we must “play it” as an ironic game, and (for some never-specified reason) the only “authentic” act is the deconstruction of authority in all its various forms. This is the snarky dream of thirteen year-old bed-wetters and humanities academics too comfortable, too self-conscious and too damn chicken-shit to be Marxists.

Now, is DFW just another of those fraudsters? Or, to put it another way, does he criticise the madness of modern culture using forms of thought which are themselves thoroughly conditioned by that same culture? (And can this even be avoided?)

So we get the alienating invasiveness of modern technology (including biochemistry), but what is the book’s underlying attitude towards this?

4 July

245 pages in and it’s picked up a bit. The image of the cage (again) and the potential for almost anything to be a self-defeating attempt to escape it – and the absurdity of that position; trying to find the “real” artichoke by divesting it of its leaves.

The endless chemical, technical and mathematical details: ramming home how we’re enmeshed in a world that’s far too complex for us to understand. The alienation of this situation. It reminds me of the Ithaca chapter of Ulysses. The barren futility of reducing everything to bare “facts”. It’s interesting to compare the ostentatious erudition of Infinite Jest (and Ulysses) with that of Tristram Shandy. Sterne (like Swift before him) was mainly poking fun at the presumptuousness of academics and specialists. This was still possible in the 1750s. It’s harder to laugh today.

It’s easy to make the world seem weird and absurd if you populate your novel exclusively with absurd weirdos.

Or, more generally, the world looks strange if you treat the culture as a given, a kind of surrogate nature (as if our culture, uniquely in the history of mankind, had discovered the real facts of life). But step back. Cui bono? That is to say, look at the deeper processes which are producing this madness. Then things cease to be so absurd or ironic. They become murderous.

And that’s what I mean by criticising the culture from within the culture. It mistakes the superstructure for the base, endows it with an illusory necessity and so, of course, in the process everything becomes an absurd tragi-comedy.

It remains to be seen whether DFW is guilty of this particular mistake.

10 July

310 pages in. Another irritation: the footnotes. Supposedly (I’ve read) DFW uses them to break up the narrative flow. Break it up?! Even without them the narrative flow is all over the fucking place! And what exactly is gained by breaking up the narrative flow in any case? I can’t help feeling they’re just an emptily clever post-modernist mannerism.

13 July

The tennis war-game chapter illustrates the faults and good points of the novel. It’s an entertaining, knock-about satire of the Game Theory approach to global politics but (a) it’s horribly overwritten, and (b) it doesn’t really nail its target. The problem with Game Theory is not that people are more barbaric than its coldly mathematical approach assumes (which is what I take the chapter to be saying). The problem is that people are more human. Yes, that includes unspeakable barbarity – but also love, loyalty, playfulness, friendliness and self-sacrifice to the point of sainthood. The attempt to reduce all this to a few formulae was pioneered by a paranoid schizophrenic. And it shows.

However, the Boston AA chapter is easily the best so far: moving, compelling and profound. Hopefully this is the direction for the rest of the book. Even here, though, there are quibbles: it is (as usual) overwritten, and reads as a piece of free-wheeling journalism masquerading as fiction (that’s true of quite a bit of the book so far).

All in all I can’t help feeling it needed a better editor. Someone should produce an abridged version – 600 pages perhaps – called Finite Jest.

Utilitarianism

My Twitter-friend @garyface is currently studying philosophy at York. This evening he asked me about Utilitarianism and (being a bit drunk) I said I had a good argument to counter it. I emailed him an account of that argument which, basically, tries to expose the nonsense of claiming that “pleasure” and “pain” can always be evaluated to see which one ends up the winner. Below is a very slightly amended version of that email so you can judge for yourselves. What do you think?

Anti-Utilitarian Argument

According to (strict) Utilitarians, there’s a “calculus” of pleasure/pain – ie, we can (somehow) always work out whether a particular situation is (on the whole) more pleasurable than painful. I think that’s a complete fantasy, and I thought up the following example to help expose it.

Suppose for some reason either you or I have to die for the greater good of humanity. A third party is tasked with the business of choosing and will do so on the Utilitarian basis of whichever choice causes the least unhappiness (which equates to the most happiness). As part of this he looks into our lives and discovers the following:
  1. In terms of the good/bad deeds we’ve done we’re pretty equal. But
  2. you have a doting mother whose life would be totally destroyed by your death and I have no family at all – all I have is a number of acquaintances who’d be very slightly upset by my death. They’d say “Oh, that’s a pity” and then, a moment later, continue with their lives as before.

So it’s obvious, isn’t it? You live and I die. But hang on!

Your mother’s destroyed life is only x-many units in the calculus of unhappiness. The passing grief of my acquaintances doesn’t amount to much by comparison, but it does amount to SOMETHING. So let’s assume I have a million such acquaintances. Would a million “oh, that’s a shame”s outweigh the destroyed life of a fond mother? How about two million? Or a billion?

The point is that according to the Utilitarian vision these faint reflections of grief must at some stage add up to more unhappiness than your mother’s destroyed life (just as adding pennies to a pile one by one must at some stage add up to more than £10,000,000). And at that point the only moral decision is: let your mother be destroyed; your death causes the least unhappiness.

Now ask yourself: is it really possible that x-many “oh, that’s a shame”s could outweigh the destroyed life of a doting mother? Isn’t that a load of bollocks? Wouldn’t it be reasonable to say that NO number of “that’s a shame”s could add up to the wrecked life of a single mother? And if that’s so then doesn’t it show that the whole idea of quantifying grief is also bollocks? And doesn’t that mean that the whole Utilitarian project crashes to the ground?
That was my email. Come on, Utilitarians, your response please.
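
For anyone who wants the arithmetic laid bare, here’s a toy sketch of the Archimedean point the argument trades on. It isn’t part of the email, and the figures in it are pure stipulation on my part – but that’s rather the point: give the passing “oh, that’s a pity” any positive weight whatsoever and a strict calculus of unhappiness delivers a finite crossover beyond which my acquaintances outweigh your mother.

```python
from fractions import Fraction

# Toy sketch only: both figures are stipulated for illustration, not measured.
MOTHER_GRIEF = Fraction(10_000_000)          # units of unhappiness for the destroyed life
SHAME_PER_ACQUAINTANCE = Fraction(1, 1000)   # units for one passing "oh, that's a pity"

# However small the per-acquaintance weight, the calculus yields a finite
# crossover: the smallest whole number of acquaintances whose combined
# "that's a shame" strictly exceeds the mother's grief.
crossover = MOTHER_GRIEF // SHAME_PER_ACQUAINTANCE + 1

print(f"With {crossover:,} acquaintances the calculus says: let the mother be destroyed.")
```

Which is exactly the conclusion that strikes me as bollocks: the absurdity lies not in the particular numbers chosen but in the assumption that there is any exchange rate between the two kinds of loss at all.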

The Smiths, The Charts, and The Re-Writing of History

On May 13th 1983 The Smiths released their debut single Hand in Glove. To celebrate its 30th anniversary, the BBC produced a Culture Show special Not Like Any Other Love which explored the band’s impact and the culture they grew out of/reacted against. You can watch it here, if you like:

It’s about average as these things go: presenter Tim Samuels nods while a bunch of talking heads (including the omnipresent Stuart Maconie, obviously) explain how The Smiths were a life-changing band, how they exploded out of nowhere, how rubbish everything was until they arrived, etc, etc. Maconie himself inadvertently tips off the viewer when he says “There’s a lot of romantic guff talked about rock ‘n’ roll, but…” This is the pop-culture documentary equivalent of “I’m not a racist, but…” and whenever you hear it you can be pretty sure of what’s coming next.

Anyway, the program left me entertained but uneasy. Entertained because (for once) I was in on The Smiths from the very start. As an eighteen year-old, I heard the first play of Hand in Glove on the John Peel show, bought it and immediately started boring anyone who’d listen about this great new band I’d discovered. I never wore a hearing aid or had gladioli sticking out of my back pocket, but I think it’s safe to say that I was a fan. So it was nice to wallow in a nostalgic mud bath for half an hour.

But that’s where the uneasiness comes in. The BBC produces these commemorative documentaries on a kind of treadmill and they all start from the same basic assumption: if you’re now in your forties then whatever was going on twenty-five or thirty years ago was Culturally Significant. Five years ago it was Punk. Today it’s The Smiths. In five years’ time it’ll be Johnny Hates Jazz. It’s a sort of temporal cultural relativism where “impact” is defined by the age of the target audience rather than an honest assessment of what actually happened. The end result is a relentless parade of middle-aged fan-boys (and girls) shouting “That was the most important thing EVER!”

And because these programs are made on a Fordian production-line basis, each story has to be molded to fit the same template: things were really dull, X came along, everything changed. So even if the group being profiled actually was culturally significant their history tends to be warped by the demands of a predetermined narrative arc.

And so it was that we had Samuels blithely informing us that in the early 80s the charts were really dire. To prove this, we were “treated” to a brief clip of Bucks Fizz-wannabes, Bardo, performing their 1982 UK Eurovision entry One Step Further (it came 7th in the contest and was only denied a number one chart spot by the combined might of Paul McCartney and Stevie Wonder – thanks, fellas). Now, there’s no denying that One Step Further was a pretty forgettable piece of nonsense, but it hardly defined the music of its era, and I found myself muttering “Hang on! They’re pretending The Smiths were The Sex Pistols!”

You see, I remember the charts of the early 80s, but I also remember the charts of the mid 70s. And if it’s soul-crushing dreariness you’re after there’s really no contest. To make sure I wasn’t deluding myself, I looked up the Top 40 for May 14 1983 (the day after Hand in Glove “changed everything”). You can see it here: http://www.officialcharts.com/archive-chart/_/1/1983-05-14/. I then compared it with the chart from the same week in 1976 (round about when The Sex Pistols were playing their early gigs). Read it if you dare: http://www.officialcharts.com/archive-chart/_/1/1976-05-08/. Then, in a highly rigorous, scientific experiment, I picked the songs from each chart that wouldn’t make me smash up the radio if they were played today.

The 1983 Top 40 confirmed my recollection that, far from being appalling, the charts of the early 80s were actually relatively decent. 1983 probably wasn’t as strong as 1980 or 1981 had been, but still there were eleven entries that fell into the “bearable” category. They were by Human League, Heaven 17, Tears for Fears, Fun Boy Three, Blancmange, David Bowie, Eurythmics, New Order, Creatures, Pink Floyd and Bob Marley and the Wailers. I was probably being a bit generous with Pink Floyd, but even so that seems a decent haul to me.

Then I turned to the 1976 chart. Hell rose up to greet me. Just reading the song titles made me whimper “Mummy, mummy! Make them stop!” In the end I managed three picks – and one of those was the reissue of Hey Jude which for some reason was hanging around at number 33. The others… No. It was too awful. I don’t want to remember it any more.

I hope I’ve proved my point. The truth is that The Smiths didn’t save pop music from a cesspool of mediocrity. They were an interesting band who secured a small but obsessively loyal following. They never charted higher than 10 (so Bardo have them beat in that regard) but they were undeniably influential. At the time they seemed like a breath of fresh air – stirring things up, just as the New Wave/Post-Punk movement threatened to go stale. But in hindsight it’s probably fairer to say they were the beginning of the end so far as indie music was concerned. After The Smiths it was all sensitive haircuts and twee songs about girlfriend trouble. Until Madchester, that is. Hang on, I’ve got an idea. Does anyone have Stuart Maconie’s phone number?

Campaign for World Domination: Progress Report

Some time in the early hours of this morning, the “page views” counter of my Wittgenstein blog clicked over into five figures.

[Image: PI views total]

Small beer compared to many, I know, but I’m still surprised and pleased that an amateur blog on such an esoteric subject could attract a steady stream of hits from all over the world. And of course, like anyone who runs a blog for more than two days, I’m completely obsessed by my stats page and what it might or might not show me. Here, for example, is the info on where my readers come from:

[Image: PI views per country]

Ah, I love that map! It turns blog writing into a sort of game of intellectual Risk: once I’ve turned the entire atlas dark green then the whole world will be mine! MINE! (Except that I can’t, of course; Blogspot stats only show you the top ten entries, so my small but hugely satisfying hits from China, Pakistan and Kenya are hidden from view.) But ego-massaging aside, what does the map say about the global state of interest in Wittgenstein and philosophy more generally?

Well, the most obviously striking feature is the thumping dominance of the United States. I mean, you might expect them to be top given that a) my blog is written in English and b) US philosophy seems to have a strong emphasis on what’s known as “analytic philosophy” (the area in which Wittgenstein worked). But 41%! That’s huge!

The next two on the list are no great surprise, either. Wittgenstein spent most of his professional life teaching at Cambridge and his work still features strongly in UK philosophy courses. On the other hand, he was from Austria and wrote in German, which I assume would help boost his popularity in German-speaking countries – and Germany was a German-speaking country last time I checked.

I must admit, however, that it intrigues me to see Russia in fourth spot. I have no idea what the philosophical climate is like there – has it changed since the break-up of the Soviet Union? Is philosophy a thriving discipline, and if so what sort of philosophy is considered “hot” at the moment? Whatever the answers, there certainly seems to be a significant (I’d say “surprising”) level of interest in Wittgenstein. In fact, things are made even more intriguing when you realise that it was only after a year or so of writing the blog that Russia started to show up at all. Before that: zip. So they’ve reached fourth spot despite giving the western world a twelve-month head-start. Does that say something about growing internet penetration in the country? Or an increase in the number of Russians who speak English? Or are those five hundred-odd hits all accounted for by a few obsessives who return to the site more often than is healthy? It’s perhaps worth noting in this context that India is another nation that’s made the top ten despite being completely absent for the first year of the site’s existence. They, like Russia, are a growing economic power in the world. Is that a coincidence?

The other surprising feature is the poor showing by France. Like Germany and the UK (and unlike, say, Spain or Italy) France has a long and venerable tradition of philosophical inquiry. And yet they languish in sixth spot and are only ahead of Latvia (Latvia!) thanks to alphabetical order. If I had to guess I’d suggest this reflects the so-called “continental/analytic” divide – basically, two different approaches to philosophy which started to go their own way in the 19th century, with France most strongly representing the continental approach and Britain and the US championing the analytic side of things. To give an indication of how deep the divide was, I graduated in 1988 and my philosophy degree contained not a single lecture on Hegel, Nietzsche, Sartre or Derrida. Not one. Since then things have supposedly improved, and many philosophers make bold claims about how the whole issue has been consigned to history. Certainly, when I was at Warwick Uni in 2003 they paid much more attention to continental philosophy. All the same, my blog stats suggest the divide has not entirely disappeared.

Finally, for the tekkies out there, here are my browser stats.

[Image: PI views per browser]

Is Firefox catching up with IE, or is it just that philosophers prefer it? I wouldn’t know – I’m a Chrome man myself.

Oh, btw, you can read the Wittgenstein blog at http://www.lwpi.blogspot.com.

Thoughts on The Brothers Karamazov

What am I to make of this beast of a book? This cluttered, passionate, awkward, ironic, heartfelt, clumsy, questioning, fever-dream of a novel?

Well, for a start it clearly deserves its reputation as one of the great works of European fiction. And this is true despite the fact that, viewed as a stand-alone work, it is dramatically unsatisfying and patchy. Of the three brothers, Alyosha is introduced as the novel’s “hero”, yet after the first third he fades to an occasional onlooker. Ivan is given two tremendous set-pieces (“The Grand Inquisitor” and his discussion with the Devil) but not much else. And even Dimitri’s story, which takes up the bulk of the book, is left frustratingly unresolved.

The reason, of course, is that Dostoyevsky intended the book to be merely the first part of an epic series. He died before he could write the rest of the story (The Karamazov Brothers was completed just a few months before his death) and so, of course, what we have is necessarily fragmentary. The interesting thing is that in the final analysis this doesn’t really matter too much. In fact it is curiously appropriate. The inconclusive ending complements the disturbing instability which lies at the heart of The Karamazov Brothers; just when you think you have oriented yourself the ground shifts beneath your feet. Over and over again characters suddenly change, and it’s difficult to tell whether they are revealing their true selves, getting carried away by a momentary enthusiasm, being self-deluded, or just outright lying.

True, the narrator often steers us in a particular direction but he’s usually careful to leave the options tantalisingly open. And, in any case, the book’s instability encompasses the narrator himself. After all, who the hell is he? He is “within” the novel insofar as he identifies himself as a resident of the (fictional) town where Fyodor Karamazov lives. Sometimes he’s at pains to point out the documentary, incomplete or first-hand origins of what he’s presenting (eg, Father Zosima’s final speech and the account of Dimitri’s trial) but elsewhere he recounts with god-like authority events he couldn’t have possibly observed or even heard about (in other words, he writes like a traditional “omniscient” author). At one point (sorry, I forget where) he explicitly refers to his book as a “novel” even though it’s mostly written as if it was a factual account of real events. Is this inconsistency mere clumsiness on Dostoyevsky’s part or is he quietly undermining our faith in the ability of the novelist to reveal the truth about the human condition?

I can’t be sure but I tend towards the latter interpretation because one of the book’s most striking themes (it seems to me) is the elusiveness of humanity. We are enigmatic, strangers not only to others but also to ourselves. Sure, most of the time our behaviour runs along relatively predictable lines but we have something within us (what?) which might at any moment confound our expectations. Is this a nothingness, an abyss where the soul should be, or is it a well-spring of unifying transcendence – something which, for better or for worse, unites us and raises us above the humdrum circumstances of everyday life?

I think the novel leans towards this latter, mystical viewpoint. In fact it’s directly connected to the book’s central contention that we are all guilty of each other’s sins. But even if that’s right Dostoyevsky never gives himself (or us) an easy ride. The elusiveness of humanity can just as easily result in unexpected baseness as sudden piety. A minor but significant example. The book ends with Ilyusha’s funeral. Afterwards the dead boy’s school friends try to help his grief-stricken father who is close to madness. The boys themselves are in tears, overwhelmed by the occasion. One lad, Smurov, is trying to return the father’s hat which he’s discarded despite the freezing weather. And in the middle of it all we get this:

            “Smurov, although weeping uncontrollably and still holding the hat, managed nevertheless, practically without pausing, to pick up a piece of brick that appeared as a red object on the snowy path and threw it at a flock of sparrows that was flying past quickly. He missed, of course, and ran on crying.”

If you ever want to give an example of Dostoyevsky’s disturbing brilliance you could do a lot worse than that.
