Top Video Games of 2012

January 22, 2013

Compared to even an average year, 2012 was a desolate time for quality video games. Compared to 2011 — after which I listed fifteen great games while still forgetting some (sorry, Witcher 2) and neglecting others (I’ll get there, Skyward Sword) — it was a complete wasteland. That said, here are the best titles released in 2012 that I played, ordered from less-best to best.

Yet To Be Played: Far Cry 3

I’ve heard good things about Ubisoft’s open-world sandbox game, but not yet enough to draw me in after wasting so many hours in the similar-looking Just Cause 2 and better sandbox games like inFamous and Skyrim. When it cheapens up, I might have a look.

Yet to Be Played: The Walking Dead

That’s misleading: I’ve played an hour of the five-part ‘interactive story’ and so far it seems pretty good, but the dryness of the cicada-filled Georgian backdrop rubs me the wrong way, just as it does in the TV show. I’m sure I’ll get through it eventually.

Number 6: Assassin’s Creed III

Though slightly disappointing and not involving enough to enrapture me from start to finish (I’ve just got up to New York — about two-thirds through by my reckoning), AC is generally as fun as ever, despite the almost overwhelming number of glitches. I suppose it just feels like a bit of a grind now, riding my horse or jumping over rooftops from point A to point B to stab someone. I’ve heard the ending is disappointing, but we’ll have to wait for my thoughts on that. As has been said elsewhere, though, the naval missions are fantastic, as are the treasure-hunting action setpieces that take place in their own unique locations (i.e. when the game is closest to Uncharted).

Number 5: Journey

Journey is a great indie PS3 title and a pleasure from start to finish. While the few hours I put into it are easily worth the price, it doesn’t have the longevity to match the remaining games in this list. Heartily recommended, with the unique distinction of bringing me to tears at the end without a single word of dialogue throughout.

Number 4: Halo 4

It wasn’t easy putting Halo 4 here, but if I’m being honest, it just doesn’t have the same atmosphere as Reach and certainly doesn’t hold a candle to Halo 2 for blowing me away (oblique Halo 2 reference there). The new host of Forerunner weapons is welcome, and the campaign is enjoyable, though the ‘ancient metal’ environments do approach the sterility of the original’s in some places. The multiplayer is quite different, having taken a leaf out of Call of Duty‘s book in class organisation, though it can all be customised back to the Halo of yore. Forge has a new snap-to feature that is sure to be a big boon if I ever get into it. This makes it all the more mysterious as to why it just doesn’t feel as satisfying taking down an opponent. Maybe it’s the maps. I’m also not a huge fan of Spartan Ops, especially as a replacement for the awesome Firefight.

Number 3: Mark of the Ninja

Originally an Xbox Arcade title, Mark of the Ninja eventually hit Steam and I wasn’t disappointed. The violent-cartoon art design, the flawless stealth mechanics, and the diversity of strategies available make it a blast, and the story is more than serviceable as well. The game is filled with extras and achievements to keep wannabe ninjas occupied for a long time.

Number 2: Dishonored

This was almost as easy a pick as the impending Number 1. Dishonored crafted a fascinating steampunk world full of whale-oil-powered technologies and delightfully corrupt characters, while offering a myriad of playstyles (ALL of which are fun, for once) and story consequences. I haven’t picked it back up after finishing my mostly-light-side stealth playthrough, but I’m looking forward to it. BioShock meets Deus Ex 3 indeed.

Number 1: Mass Effect 3

Was there any other possibility? ME3 concluded an unbelievably good trilogy in style. While at first slightly ‘disappointing’ in the same way as Uncharted 3 (in that it is marginally inferior when unfairly compared to its groundbreaking predecessor), ME3 eventually takes off with improved space exploration (stay away from me, Reapers!) and a handful of long, intense main missions alongside perfectly enjoyable side missions, all of which have (some would say overly) tangible effects on the mission to retake Earth at the game’s epic conclusion. Yes, the ending was originally a slap to the face of cosmic proportions, but my problems with it were rooted in its suddenness and lack of cutscene diversity rather than its logic or the story direction (I was, for example, fine with the Starchild). Consequently, the free Extended Cut DLC was right up my alley and elevated an abysmal ending to a more than acceptable one — though the ‘Shepard indoctrination’ conspiracy theory would obviously have been even more incredible. I’ve heard that the recent Leviathan DLC goes further in explaining the Reapers’ origins, which is good. In summary, ME3 barely stepped wrong, and I would consider it almost flawless up until a flawed but fine ending.

Revisiting 2011: Dark Souls

That was it for 2012, but strangely (or perhaps not, considering their sheer number) I have more to say about my top games of 2011. Specifically, Dark Souls, which I hadn’t played much at the time I published my list. Now that I’ve finished it, I can apologise to all and boost it to the top of the list, on par with (though significantly different from) Batman: Arkham City. A truly remarkable game that harkens back to the days when you got more out of a game by looking up a guide, while also exploiting today’s hardware capabilities to produce some truly stunning enemies and environments.

Ahead in 2013: Dead Space 3, BioShock Infinite, God of War: Ascension, and more

The first quarter of 2013 looks to be a lot more eventful than that of last year. DS3 looks like it could be hit or miss — EA is forcing the multiplayer components once more, with coop-only campaign segments in addition to the competitive multiplayer. Depending on the game’s reception and the importance of said multiplayer, I may need to switch consoles to one with free online. Similarly, I’d be happy picking up BioShock Infinite on PC if it’s just as good. I haven’t heard much about the new God of War, but they haven’t steered wrong yet, and I do love a game with epic bosses.

That’s all for 2012.

The Problem with Stereotyping

August 27, 2012

Stereotyping: making a judgement about an individual based on statistical or probabilistic data associated with a certain category into which they fall. Today, it’s recognised as a negative force for social progression, but I still hear the occasional argument in its favour. (Typically they begin with “I’m not a racist, but…”)

It’s a fact that statistical data on categories — racial background, gender, socio-economic status — often shows that the means for groups in these categories deviate from the overall population mean. It’s true that black people in the U.S. have higher rates of criminality, and that there are more male scientists than female. Why, then, is it not acceptable practice to allow these stats to influence preconceptions about an individual? Let’s find out.

Firstly, there’s the problem of converting a continuous probability to a discrete judgement. Imagine that a fair die is rolled repeatedly, and before every roll you have the option to bet on the result being either three or greater versus two or less. Clearly, you bet three or greater, every single time. The probability of it happening is 2/3 against 1/3, but the (rational) probability that you’ll choose it is 1 against 0. The reason this is acceptable is because the die roll is too chaotic to predict at all. (Check out “A Primer on Chaos” if that makes no sense — in short, the errors in physical measurement of the die roll’s initial conditions are much greater than the variation needed to alter the outcome.) If we could predict it, then it might sometimes be correct to bet on it coming out as two or less. If we could predict it well, the ratio of bets on either side should also be about 2/3 to 1/3.
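For the sceptical, the die scenario is easy to check numerically. Here is a quick Python sketch (the betting rule and the 2/3 figure are just the ones from the example above):

```python
import random

random.seed(0)
rolls = 100_000

# Bet "three or greater" on every single roll, as the rational bettor does.
wins_high = sum(random.randint(1, 6) >= 3 for _ in range(rolls))

# The event happens about 2/3 of the time, even though our bet
# ratio was 1:0 rather than 2:1 -- discrete choice, continuous odds.
print(wins_high / rolls)
```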

Now imagine you encounter an individual from Group C of some category. Studies have shown that 60% of this group are below-average drivers, compared to the population value of 50%. Is your preconceived judgement about this person’s driving above-average, or below-average? Like the chaotic die roll, it may seem rational to choose below-average, and indeed, if this were a randomly-selected individual about whom you could learn absolutely nothing else, it would certainly be the rational option if you had to bet money. But what if you had to do this twenty times in a row, for different individuals? You’d bet that every one of them was a below-average driver, even though the stats themselves predict that it should be around a 12:8 split. In your mind, every single person in Group C is now a below-average driver, because 60% is closer to 100% than to 0%. You’re mistakenly pre-judging the driving ability of 40% of Group C because you’re too lazy to actually investigate it.
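The twenty-bets version can be simulated the same way. The 60% figure below is the made-up one from the example, not real data:

```python
import random

random.seed(1)
trials = 10_000
correct = 0

for _ in range(trials):
    # True = this individual really is a below-average driver (p = 0.6).
    group = [random.random() < 0.6 for _ in range(20)]
    # The "rational bettor" guesses below-average for all twenty people...
    correct += sum(group)

# ...and is right about 60% of the time (~12 of every 20 guesses),
# while having mislabelled the other ~8 people in each group.
print(correct / (trials * 20))
```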

Secondly, and more significantly, this rarely matters, because most of the probabilities people tend to stereotype upon are still much, much closer to 0% than to 100%. If you encounter a random black person from America, do you preconceive them as being a criminal because their base rates of law-breaking are 0.7% compared to the average 0.5%? (Disclaimer: I made those figures up.) Unless they’re waving a weapon in your face, it’s never going to be rational to preconceive anyone as a criminal, even by the lazy discrete rounding described above.

Thirdly, it’s extremely unlikely that any of these minor differences are “hard-wired” and immune to change. Just because the average black IQ in America is less than the average white IQ, it’s ludicrous to then presume that white people are somehow more genetically advanced, and racist psychologists have rightly been crucified for suggesting it. There are an endless number of social, historical, and cultural factors at play that can easily explain the difference (which, by the way, is rapidly shrinking) and can also be easily fixed, literally, by the power of thought.

In Up in the Air, George Clooney remarks that he always gets in line for the baggage check behind Asians, because they’re the most efficient. An outraged Anna Kendrick condemns his judgement as stereotyping, to which he amicably replies, “My mother taught me to stereotype. It’s faster.” If a study confirmed his hypothesis about race and airport efficiency, it’s hard to come up with an argument against George’s generalisation. While it would be stupid for him to identify any particular Asian at the baggage line as above-average in efficiency, he is not actually interacting with the individuals he stereotypes, but simply playing the odds based on the immense number of baggage checks he endures every week.

Q.E.D. (argument from George Clooney)

There Is No Why

August 27, 2012

Too often are we subjected to serious-sounding philosophical questions such as “Why is there suffering?” and “Why is there colour?”, yet rarely do they actually carry validity. Questions beginning with “why” cannot apply to nature in any teleological sense; the best answer one can give is simply a causal one. For example, I could answer “Why is the Earth not quite spherical?” by pointing out that gravity is equally strong at a given distance from a mass in any direction, and that the inertia of a rotating mass (known as centrifugal force) pushes it outward as it spins, creating a slight bulge around the equator. A scientist would be perfectly happy with this, but to some philosophers this will not do. “Well, why is gravity equally strong at a given distance from a mass in any direction?” We don’t know the answer, but nor do we know if it is a valid question.

Let’s examine the ever-present first example: “Why is there suffering?” Firstly, precisely what is suffering? Nociceptor signals racing toward the brain with a warning of imminent or ongoing tissue damage? Or the act of deliberate abuse of that notification system through torture? Or just any violence, specifically violence not designed by humans?

If it’s nociceptor signals, then the answer is easy. Clearly, it is in the interests of your survival to stop the tissue damage, and you are not going to do that if the incoming nerve signals are similar to those arising from eating a bar of chocolate, or even just normal touch. No organism is indestructible, and any environmental stimulus can be amplified until it becomes damaging: sound, heat, pressure. The existence of pain can be explained purely in materialistic and evolutionary terms — as I’ve argued before, there is no dualistic angle to it, and the emergent illusion of qualia and consciousness is an inevitable artefact of such a complex feedback-looping stimulus-response system. If you don’t believe me, look up rare medical conditions that eliminate or reduce the capacity to feel pain, and see how incredibly short the lives of those affected tend to be.

If suffering is exploitation of our pain systems, then the question does have a real answer: some humans are sociopathic, violent, and twisted.

If any violence counts as suffering (e.g. “Why are there natural disasters and brutal diseases?”) then the very definition of violence falls apart. As mentioned above, any stimulus can become ‘violent’ at high intensities, and there is no discrete distinction between a violent and non-violent act. At what critical pressure does touching you with a closed fist become an act of violence? If it’s in the intention of the puncher, then we recede to the prior definition of suffering. Similarly, a “natural disaster” exists on a continuous spectrum. We live in a universe of material particles interacting via forces, and forces of excessive strength are dangerous to our own particles if they can overcome the forces holding us together. It’s difficult to imagine an event that wouldn’t be considered a natural disaster from the point of view of another species. Rain, for example, would be a common natural disaster for ants. A falling tree branch may precipitate a massacre of insects. Going in the other direction, it’s a little anthropocentric to label a supernova or galactic collision as a ‘natural disaster.’ And as for cruel diseases, well, every organism survives at the cost of many others, and they are generally oblivious to the amount of pain they are causing them (though not always: consider the way a cold virus causes you to sneeze and cough it to the next host). If you’re going to play the game, then you can’t blame the other players for trying to win, so remember that next time you wolf down a shank of lamb.

The tendency of religious institutions to reify ‘suffering’ as some ‘trial of the soul’ is tellingly primitive, yet at the same time, they’ve ironically got it (almost) right when they say that suffering must exist. We certainly need pain signals to avoid hazards as rapidly as possible, and the existence of such hazards is equally inevitable. Yet they’ve still fallen into the trap of the second definition, because they are exploiting these simple facts to further their image of tarnished, broken human beings and the pain they must masochistically inflict on themselves to reach paradise. To all such preachers, and to the philosophers obsessed with over-complication of atoms hitting other atoms: consider that there is no why.

Victimless Acts

August 4, 2012

Euthanasia, be it on one’s own or with the aid of another, is a legitimately complex topic. Like capital punishment, it concerns the question of whether the decision to end your own or another’s life can ever be the correct decision — but unlike capital punishment, the fundamental consideration of euthanasia is the wishes of the ‘victim’ themselves. This is also why it is not resolved as easily as, say, murder. The foundation of a just law is that it must apply to everyone, and if you do not wish to be killed (something almost everyone can agree on, thankfully, or our society would look rather different), there is no way a just law can permit you to kill others. A common argument for capital punishment follows similarly: if you kill another, you are in a sense ‘no longer covered’ by the law.

Euthanasia is different. What if someone wants to die? Do we allow them, and even assist them? If we answer with a flat-out ‘no’, we allow terminally ill people to suffer through incredible pain and discomfort before their inevitable death. If we answer with a flat-out ‘yes’, we allow everyone to commit suicide at any time they wish. Clearly we’re positioned well towards the former at the moment, and the idea of not trying to do everything we can to stop a depressed friend from ending their own life is clearly unthinkable. We recognise (and hope) that their desire for death might disappear with time, and that things will improve, and often they do. Here in Sydney, a man named Don Ritchie lived across the road from a notorious suicide spot and is said to have talked hundreds of people out of ending their lives over almost half a century. He was awarded the Medal of the Order of Australia for his “rescues” and died two months ago. Few could ever condemn his actions, and indeed he recounted being visited and thanked years later by many for his simple outreach.

This doesn’t tell us much about the terminally ill, or about people in immense suffering, or any other rational, clear-minded person who may no longer desire to live. The default position seems to be that such an attitude is impossible — who wouldn’t want to live, after all? — and that, on some level, they are not in their right mind. This is terrible ethical justification, and no one should ever presume to know what it could be like in their situation, but what is a good ethical justification? If some form of euthanasia is legalised, is it done on a case-by-case evaluation, or according to a fixed list of conditions? Some horrendously unlucky individuals may know that their death will be sooner than expected, but would we set up a “minimum time until death” requirement for permitted euthanasia in their case? How does the uncertainty of the diagnosis factor into it?

Abortion is the other famous example of a sticky question. At what point is an unborn human fetus ‘covered’ by the law in the discussed sense, such that its mother no longer has the option of terminating it? The moment of birth is certainly too arbitrary and too late, but the moment of conception is just as ridiculous. A one-day-old conceived fetus is a worse candidate for human rights than a bumblebee, let alone an actual human, and the argument that it should be protected based on its future potential to be an organism protected by law is inherently circular. Such an argument can be used to argue for the diligent preservation of every sperm cell and every ovum ever produced, or for a law against not having sex at every possible opportunity lest the potential humans inside those gametes be lost forever.

Perhaps it’s when the fetus develops a brain stem, and the capacity for intelligence. This would certainly fall in line with the distinction between the permitted treatment of humans and other animals. Perhaps it’s when the fetus develops the ability to feel pain — yet abortion is too fast to be painful, and we don’t apply the same criterion to humans outside of the womb. The precise moment is still up for reasonable debate, but we can at least be quite sure that abortion is ethically permissible in the first trimester of pregnancy and not in the last. Of course, situations where the mother’s life is in peril or the pregnancy is flawed (such as horrific ectopic pregnancies) change the equation radically, and we’re forced to go back to the drawing board and factor in the always-present (though these days generally very minimal) danger to the mother.

So what’s with the title of this post? Well, this has all been a large build-up to one very important and tragically “controversial” point.

Homosexuality and gay marriage and gay adoption are NOT legitimate issues.

When talking about euthanasia, we are concerned about an otherwise-criminal act being done to a person who desires it. When talking about abortion, we are concerned with the distinction between non-living and living and the very definition of life. In both cases, there is a potential victim, and even staunch supporters of either should recognise that this is how the other side views it and respect that as part of the discussion.

But homosexuality is a victimless act. There is no potential victim, and it is therefore not a question of ethics derived from morality. It does not matter in the slightest how much it has to do with genetics, or the environment, or personal choice. You are free to claim that it victimises an otherworldly entity that only you believe in, but this claim should not and will not be accepted as an argument against it, nor will it be respected in the slightest. You are also free to claim that your particular in-group “owns” marriage, and hence gets to decide who is and is not allowed to “use it”, and this is just as pointless. There’s no debate to be had here, any more than you would debate with someone claiming that their pet ghost chihuahua becomes unhappy whenever a blonde-haired person marries a brown-haired one. Equally ludicrous and even more offensive is the embargo on gay adoption.

Everyone is free to believe what they want, because having a belief or thinking a thought is also a victimless act. But ethics cannot be decided based on individual or even consensual morality, and this is why these are not issues that should or can be decided by public referendum. A victimless act does not even make the cut as a moral issue, let alone an ethical one. Again, some hesitance on the legitimate issues discussed above is completely understandable, but the fact that the law is still discriminating based on a victimless act is disgusting and utterly irrational. The good news is that this will change, despite the best efforts of our politicians, lawmakers, and leaders to make it as pathetically slow as possible.

Gene-Centric Evolution

August 3, 2012

“We”, the meat suits, chlorophyll suits, etc., didn’t strictly evolve — our genes did. It’s genotypic DNA that replicates and mutates into the next generation, and we are the large phenotypic vehicles whose structure and generalised behaviour is dictated by the evolving code it carries. We are the avatars that grow and interact with the world after our DNA is “compiled” at birth. The genetic code that compiles into the “fitter” phenotypic program is more likely to persist.
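To push the metaphor, here is a toy Python sketch of the gene-centric view: bit-string ‘genomes’ are the replicating units, and selection acts only on the ‘compiled’ phenotype. Everything here (the fitness function, the mutation rate, the population size) is illustrative, not a model of real biology:

```python
import random

random.seed(2)

def compile_phenotype(genome):
    # The "phenotype" here is simply how many 1-bits the genome expresses.
    return sum(genome)

def next_generation(population, mutation_rate=0.01):
    # Genomes that compile into fitter phenotypes are more likely to replicate.
    weights = [compile_phenotype(g) + 1 for g in population]
    children = random.choices(population, weights=weights, k=len(population))
    # Replication is imperfect: each bit may flip with a small probability.
    return [[b ^ (random.random() < mutation_rate) for b in g] for g in children]

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(100)]
for _ in range(50):
    population = next_generation(population)

# Mean fitness should have risen above the random-start expectation of 16:
# the genes evolved; the "avatars" were only ever evaluated.
print(sum(map(compile_phenotype, population)) / len(population))
```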

Well, that’s it. I had a much longer post planned, but it turns out that the legendary Richard Dawkins already wrote a book, The Selfish Gene, detailing this exact way of thinking thirty years ago. It arose in opposition to the view that genes simply “store” the successful traits of an organism from one generation to the next like passive recorders. Naturally, Dawkins’ perspective was criticised by his arch-nemesis, Stephen Jay Gould.

Of course, genes are no longer the sole determinant of human evolution, because of the self-aware, self-modulating brain we now possess. When we inevitably gain the power to influence our own genes and the DNA of future generations, an entirely different beast of a system will rear its head — we will be avatars that can not only change the way we function as phenotypic carriers of code, but alter the very code that created and grew us. A fusion of scientific fields with incomprehensible consequences: a brave new world.

Religion and the Probability of Subsets

July 11, 2012

Let H denote heads and T denote tails on a fair coin. Which sequence of flip outcomes are you more likely to observe while flipping this coin: TTTHTHTHHHHTT, or HHHH?

Studies have found that people tend to rank the first sequence as more probable than the second (among numerous other sequences as well). But this is a strict fallacy: the first sequence contains the second sequence. If you observe the first sequence, you have necessarily also observed the second, so the number of times you observe the second sequence in an arbitrarily long flipping session cannot be less than the number of times you observe the first.
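If the containment argument doesn’t convince you, brute force will. A quick Python check, using the two sequences from the example above:

```python
import random

random.seed(3)
long_seq = "TTTHTHTHHHHTT"
short_seq = "HHHH"

# The first sequence literally contains the second.
assert short_seq in long_seq

# Count occurrences of each in a long run of fair flips.
flips = "".join(random.choice("HT") for _ in range(1_000_000))
count_long = flips.count(long_seq)
count_short = flips.count(short_seq)

# Every sighting of the long sequence is also a sighting of the short one,
# so the short sequence turns up far more often.
print(count_long, count_short)
```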

This fallacy is even more potent in the evaluation of people. When presented with a story about a woman’s life, participants were given a number of possible sentences describing that woman’s personality — such as “She is very honest” and “She is dedicated to her job” — and asked to rank them in order of descending probability. The researchers found that participants very often ranked compound sentences above their components. If both of the above sentences seemed reasonable given the story, for example, then participants ranked them highly, but consistently ranked the combined claim “She is very honest and dedicated to her job” even higher. Again, this is strictly fallacious, as the probability that the woman is honest AND dedicated to her job cannot be greater than the probability of her having either of those traits, because having both necessitates having each.
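The same containment rule is trivial to verify numerically. The trait probabilities below are made up purely for illustration:

```python
import random

random.seed(4)
n = 100_000

# Assumed, independent trait probabilities -- illustrative only.
honest = [random.random() < 0.8 for _ in range(n)]
dedicated = [random.random() < 0.7 for _ in range(n)]

p_honest = sum(honest) / n
p_dedicated = sum(dedicated) / n
p_both = sum(h and d for h, d in zip(honest, dedicated)) / n

# The conjunction can never be more probable than either trait alone.
print(p_honest, p_dedicated, p_both)
```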

In short, if one statement or claim completely contains another, then the larger claim cannot be more probable than the smaller. This rule is related to Occam’s Razor, an invaluable tool for simplifying scientific and everyday models of life, but unlike with the Razor, there can be no argument over whether one claim is more complex than another: containment settles it unambiguously.

Take a few moments to properly order by probability any beliefs you hold on a particular topic, then start to work your way down this list. Each step necessitates a drop in probability, as we have seen, and so the next thing you have to be sure of is that these steps are worth it. Going back to the above story, you’ve necessarily ranked “She is very honest” higher than “She is very honest and dedicated to her job.” Before moving from the first claim to the second, you need evidence that cannot be explained solely by the first claim. In other words, you need a good reason to add on the extra baggage, because it will drag down your probability. In this case, the story might involve a situation in which the woman starts slacking off on her exercise because she needs to finish a self-managed project at work. Being very honest doesn’t really explain this.

Perhaps you can see where this is going: religious beliefs frequently and explicitly break this rule. Consider for a very basic example how Christianity (and most monotheistic systems) claims its god is all-knowing, all-powerful, and all-benevolent, based on the divine word of the Bible and various other (e.g. personal) revelations. Let’s put ourselves in the shoes of a typical Christian, and organise our list of claims by probability. “God is all-powerful” then comes before “God is all-knowing, all-powerful, and all-benevolent.” Moving down the list to determine our current belief status, we reach “God is all-powerful,” and… this is where we should stop. Adding extra traits like being all-knowing to our belief of God does not explain anything extra. Omnipotence is capable of explaining literally 100% of the possible evidence, by definition.

But how did we even reach omnipotence in the first place? Clearly, it contains infinitely many sub-claims, many of which we can strike out Razor-style. Removing God’s ability to win every match of Call of Duty, for example, does not affect the evidence we have in the Bible, and so we can ignore this useless claim and take a step back on the list to a God who can definitely do anything except maybe win every match of CoD. Note that we are not claiming that he can’t win — we’re simply removing the claim that he can from our belief and not making any judgement on the matter of his first-person-shooting ability.

Sadly for him, discoveries in biology and cosmology have rendered more and more of God’s abilities useless. He no longer even needs the ability to create humans, thanks to evolution. He only needs the ability to lie to some primitive desert nomads about doing it. In fact, you can strip away claims about God’s abilities and traits until you’re left with a god who (in addition to regular human abilities) has the ability to manipulate physical matter, including the human brain. Sticklers who go through the entire Bible can even reduce his telekinesis to just certain types of matter and forces — there’s no evidence that God can split the atom, for example, and the Bible even states quite explicitly that he has no power over iron (Judges 1:19). From there, we can start adding traits unrelated to ability. An anger management problem, for example, would go a long way toward describing much of his Old Testament behaviour, and clearly anger management problems are more common than psychic powers.

So we’re left with a deity with just a few superhuman abilities. These claims about God are fully contained within the usual notions of a perfect, all-seeing, all-doing creator of the universe, and they explain every morsel of existing evidence and, indeed, pretty much any conceivable evidence. Again, we’re not claiming this deity isn’t omnipotent — there’s just no way to tell. No evidence differentiates the simpler superhuman god from the complex perfect one.

Clearly, theists who believe based on faith will take away absolutely nothing from this, but this is not an appeal to them. It’s an appeal to theists who care about evidence, and maybe a little about science and logic as well. A god who is capable of creating all the evidence you have is more probable than a god who is capable of creating all the evidence you have AND capable of creating the universe.

Tell Your Friends

July 3, 2012

A stranger walks up to you at the shops. “Big news,” they say. “The creator of the universe spoke to some humans thousands of years ago. You can read about it in this book. Make sure you tell everyone you know!”

It probably wasn’t a stranger. More likely it was a parent, a man in a church, or an early schoolteacher. But the point is, someone told you. Someone has to tell you. (And, for obvious reasons, they have to tell you while you’re still a child.) You could pick up the Bible by chance, having never heard of it, and read it from Genesis to Revelation — but without someone there to tell you this stuff is supposedly non-fiction, it would have no more effect on your beliefs regarding the origins of the universe and humankind than Homer’s Odyssey.

What do Allah, Jesus, Athena, Shiva, Jehovah, Quetzalcoatl, and all the rest have in common? Their existence is based solely on word-of-mouth. Take a moment to consider how ludicrous this is. The only way for you to find out that these powerful, supernatural, transcendent beings exist is for other mammals to make noises using vibrating strings in their throats. Mammals who still have chronic lower back problems because they only recently started walking on two legs. They heard it from this other mammal, who heard it from her father, who heard it from some old guy, who heard it from his uncle’s wife, and so on through a game of telephone stretching back millennia to a group of desert nomads. Hopefully they didn’t just make it all up!

Homer’s Odyssey at least has the distinction of being a coherent and moderately entertaining story. The Bible is a disjoint collection of myths and conflicting single-source accounts cobbled together long after any of its events supposedly transpired. Given a very large bookshelf containing a copy of every religious text in history, it would not distinguish itself in any way (except perhaps in the Most Plagiarism From Past Religions category). You’re not struck with an overwhelming sense of awe and peace when you open its pages. No unearthly voices warmly welcome you to the Word of God. It’s just pigments on processed plant matter, like the rest of the books on the shelf, until an adult hands it to you and tells you how important this otherwise unremarkable object is. Bad luck for you if your guiding adult points out the wrong one, or even worse, if the right book has never been printed in your place and time.

“Ah, but that would make it too easy! [Insert god] would basically be forcing people to believe with such undeniable evidence!” tut-tuts the theist. Well, if your god wanted to make sure there was always some plausible deniability (are they on trial?), they certainly succeeded. Next time you speak with them, congratulate them on their choice of completely unremarkable book and the way their utter silence is eerily reminiscent of the millions of other deities that don’t exist.

All those trivialities people argue over become laughable in this light. Try to remember who it was who told you which arbitrary text holds the secrets of the universe. Conveniently, you were probably too young to remember — it probably seems like you always believed, right? Yet imagine how easy it would’ve been for you to “miss out” on your religion. You can probably think of several nice families you know who would’ve told you an entirely different account of existence if they’d raised you instead.

You’re just lucky, I guess. And I know you’ll make sure your children are still fresh out of the mammalian womb when you tell them about it, because it turns out it’s getting harder and harder to believe the stuff in that book, and let’s face it: your god hasn’t put a whole lot of that unlimited power into spreading the truth. They need every advantage they can get.