Roosh V forum members baffled that fat woman doesn’t welcome sexual harassment

Online dating: It doesn’t always work like this.

For a certain subset of horrible men, there are few things more infuriating than the fact that women they find undesirable can turn down men for sex. For this upsets their primitive sense of justice: such women should be so grateful for any male attention, these men think, that turning down even the most boorish of men shouldn’t even be an option for them.

Consider the reactions of some of the regulars on date-rapey pickup guru Roosh V’s forum to the story of Josh and Mary on the dating site Plenty of Fish. One fine December evening, you see, Josh decided to try a little “direct game” on Mary.

That’s what the fellas on Roosh’s forum call it, anyway. The rest of us would call it sexual harassment.

Josh started off by asking Mary if she “wanted to be fuck buddies.” She said “nope,” and the conversation went downhill from there, with Josh sending a series of increasingly explicit comments to Mary, despite getting nothing but negative replies from her.

After eight messages from Josh, with the last one suggesting he would pay her $50 to “come over right now and swallow my load,” Mary turned the tables, noting that she’d been able to deduce his real identity from his PoF profile, and asking him if he wanted her to send screenshots of the chat to his mother and grandmother. He begged her not to.

As you may have already figured out, from the fact that we’re talking about this story in public, Mary did indeed pass along the screenshots, and posted them online.

Poetic justice? Not to the fellas on Roosh’s forum. Because, you see, Mary is … a fat chick.

While dismissing Josh as a “chode” with “atrocious game,” Scorpion saved most of his anger for the harassed woman:

Look how much she relishes not only shooting him down, but damaging his reputation with his own family. She’s positively intoxicated with her power. Simply spitting bad direct game is enough to unleash her vindictive fury.

“Bad direct game.” I’m pretty sure even Clarence Thomas would consider what Josh did sexual harassment.

At any point, she could have pressed a single button and blocked the man from communicating with her, but she didn’t. She didn’t because she enjoys the feeling of power she gets from receiving attention from guys like this and then brutally shooting them down. It makes her feel much hotter and more desirable than she actually is in real life. She’s not there to meet men; she’s there to virtually castrate them for her own amusement.

I’m guessing here, but I’m pretty sure that nowhere in Mary’s profile did she encourage the men of PoF to send her explicit sexual propositions out of the blue. And I’m pretty sure she didn’t hold a gun to Josh’s head and force him to send a half-dozen sexually explicit harassing messages to a woman he didn’t know.

Athlone McGinnis also relies heavily on euphemism when describing Josh’s appalling behavior:

I don’t think its primarily the revenge she’s after, its the validation. She is enjoying the power she has over this guy and wielding it brutally because it shows she can maintain standards despite her weight and the doubtless numerous confidence issues that stem from it. In blowing up this guy for being too direct in his evaluation of her sexuality, she affirms the value of her own sexuality.

Oh, so he was just being “direct in his evaluation of her sexuality.”

In short: “I am wanted, but I have standards and can choose. I have so much agency despite my weight that I can go as far as to punish those who approach me in a way I do not like rather than simply blocking them. I’m teaching them a lesson, because I’m valuable enough to provide such lessons.

So apparently in Mr. McGinnis’ world women who are fat aren’t supposed to have agency? They’re not supposed to be able to choose? They’re supposed to drop their panties to any guy who offers to be their fuck buddy or tells them to “suck my dick?”

Also, I’m a victim bravely standing up against online bullying/harassment-look at me!”

Yeah, actually, she is. Get used to it, guys, because you’re going to see a lot more of this in the future.

This isn’t just a laughing matter for her. She needs to be able to do this in order to feel worthwhile. She has to be able to show that even she is able to maintain standards and doesn’t have to settle for just any old guy asking for any old sexual favor simply because she resembles a beached manatee.

And it’s not a laughing matter for you either, is it? You’re actually angry that a woman said no to a sexual harasser — because you don’t find her attractive. And because Josh — from his picture, a conventionally attractive, non-fat fellow — did.

Mr. McGinnis, may a fat person sit on your dreams, and crush them.

Posted on August 23, 2013, in a woman is always to blame, evil fat fatties, excusing abuse, harassment, mansplaining, men who should not ever be with women ever, misogyny, PUA. 1,044 Comments.

  1. Bloodlines…. If ever a game got great by being taken over by obsessive fans, this is it.

  2. “Kittehserf, you will be timecubed by Argentis’ evil vicissitude!”

    The man’s a poet! :D

    Could I have cheese cubes instead of time cubes, tho’? They taste better.

  3. You say evil vicissitude like there’s any other kind!

    And hell yes on Bloodlines. I’m playing Wesp’s patch 7.7? And damn, I hope he does game design for a living because he deserves to get paid for this!

    I saved both of them this time! My favoritest Malks are both still running Asylum! …and I’m a blasted Tremere because I want the Tremere ending…I miss obfuscate.

  4. You can be as cheezy as you’re able to!

  5. I will always be the Malkavian Cheerleader. Talking to traffic lights, arguing with the voices coming from the TV. Ah, it’s like a slightly stronger dose of my usual mindscape, anyways.

  6. This is my third run through (well, fourth, but windows blew up and ate my Gangrel) — Malk cheerleader, male Nossie, and now a male Tremere. Anarch, lone wolf and Tremere endings. Going to have to side with Ming Xiao at some point, particularly since killing her is so fucking hard. She’s worse than the sheriff (and LaCroix! He’s a riot at endgame)

  7. It’s a stop sign though, not a traffic light. Not that anyone seems to notice you talking to it or anything.

  8. Been a while since I played, but isn’t the Sheriff pretty easy? Gahhh…. now I have to play it again… wait, why would I complain about that. So… THANKS!

  9. I love that they made just about ALL the dialogue different for Malks.

  10. Compared to Ming, yeah he is. In general, eh, sorta? Compared to Ming though an elder god might be easy!

    The Malk dialog…that, and then Nossie? Playing as a character who can hold a normal conversation is weird.

    And you’re welcome :)

  11. I think I’ll say goodnight now. And fall to sleep hearing the music from the Giovanni mansion in my head.

  12. I’ll be there shortly! Creepy place!

    Prefer the music in the Last Round though myself :)

    G’night

  13. Roko’s Basilisk: It being willing to torture you (for all eternity, because it’s powerful enough to reconstruct you, with all your memories, etc.) is how it modifies the past.

    See, if you know/believe there will be a singularity, and don’t work to bring it about, you are condemning millions of people to death. So, since you knew this, and didn’t do your utmost to bring it about, you are guilty of millions of murders.

    But, since you know that to not work to bring about the singularity will bring about this torment, you will therefore give every spare penny to research on the singularity; ergo the AI will not have to torture you.

    Of course, any AI able to recreate one person that completely could recreate everyone that completely, which means no one (ever in the history of ever) will have to “die” and so the point is moot.

    But that would be using logic. That this sort of handwavium is able to cause irreducible horror to the Rationalists of Less Wrong is why they have zero credibility with me, since the problem has its own solution.

    The thing to realise is all of this (every last one of the millions of words in the sequences) is because Yudkowsky is terrified of dying. Anyone who has eyes to see can tell; if they meet him in person. Put him under some sort of stress, esp. where personal risk; or general mortality is being discussed and he starts to fondle his talisman; cryogenic alert pendant.

    It’s sort of creepy.

    It’s not his basilisk, it’s one of his followers who dreamt up this sadist piece of work.

    But it’s Yudkowsky, Super-Genius, who can’t see how the frame of the problem is its own solution; and who, by shutting down debate (because he’s not intellectually equipped to see the logical flaw in the problem, as posited) created the present level of existential dread on the part of those who decided to investigate it (because it’s apparently a huge subject of circumlocutionary discussion among a largish segment of Less Wrong).

    LBT: Seriously, I don’t even get it. This isn’t even reality-destroying. This is HYPOTHETICAL reality-destroying. Are there really folks so wound-up that THAT will really freak them out?

    The problem is they believe several incredible things.

    1: A Singularity will happen.
    2: Absent the AI being shaped, ab initio, in a manner shaped by Yudkowsky, the odds of it being “unfriendly” are very high.
    3: The AI will be more powerful than we can imagine.
    4: It will be vengeful (for any time it lost to be created late).

    From those premises, comes all the rest of Teh STOOPID! It Will Happen. We must shape it or (at the very least) be left out (in which case we die, and miss out on living forever).

    The Basilisk posits a scenario where it’s possible that one might end up with a fate worse than death; and these people happen to think death is the absolute worst thing they can imagine. The idea of something worse than that makes them quiver in terror.

  14. So basically this Yudkowsky guy is exactly the kind of con man that Tom Martin would be if he was smarter and more effective at manipulating people.

  15. (Yes, I am your resident cynic. Nice to meet you.)

  16. “The thing to realise is all of this (every last one of the millions of words in the sequences) is because Yudkowsky is terrified of dying. Anyone who has eyes to see can tell; if they meet him in person. Put him under some sort of stress, esp. where personal risk; or general mortality is being discussed and he starts to fondle his talisman; cryogenic alert pendant.

    It’s sort of creepy.”

    That’s obvious without meeting him. Though the pendant sounds maybe creepy? I’m a bad judge, idk if I was nervous enough there to be doing it, but I play with my rings when on edge. Of course, I don’t think them more than jewelry so idk.

    Also, did you break into the semi-colon farm or something? You’ve got semi-colons on the loose!

  17. I’d also like to ask why the Basilisk would give a fuck. I mean, really? Wouldn’t it have better things to do (like almost anything)?

    Sorting out Yudkowsky’s scrambled thinking would be a project in itself, I’d have thought.

    So, Y’s just another jerk terrified of death … YAWN.

  18. Argenti: He fondles it the way a bad actor fondles a cross in a vampire melodrama; he’s playing with it to keep Death at bay.

    Kittehs: The problem with the Basilisk is the Yudcultists are not good thinkers. They ascribe motives to the AI the way people building any deity from scratch do: by ascribing it their basest, most venal, aspects.

    They want, desperately, for this to come to pass. They want to be able to devote their every waking energy to it (and some do). They imagine that it wants this too. They posit that it will have godlike powers (because EVERYTHING IS DATA, and it will be able to control the data).

    So the AI will be able to read the past the way we read the manboobz; only it won’t need a search function to find Antz, or Meller, or CrackEmcee, or any one of the countless trolls who didn’t leave enough impression on us to recall. It will have a “mind” able to process, and recall, all of that. It will have the memories of everyone who has “joined” it (and so become immortal).

    And that will allow it to do things we can’t imagine. It will make it possible for it to “bargain with the past” by knowing what you thought, and what you chose to do, and what you chose not to do. It will know your motives, and if those motives were bad (i.e. you didn’t work to save humanity by striving to create the Singularity) it will know you knew that you would be punished.

    Then, as with finding a particle (or a wave), you will come to be punished. Inevitably.

    It’s daft, but there are those who believe it.

  19. Of course he can’t just have nervous tells like the rest of us!

  20. I’m not seeing any particular reasons why an AI would be vengeful. Why would it have emotions at all? Vengeance isn’t about logically deciding that people who’ve done X must be subject to Y, it’s all about emotions.

  21. Cassandra, you are ascribing logic where there is none. That’s the only premise in that set that they don’t assume to be true. So when you consider what could…would…happen if it were to be vengeful, BASILISK!!

  22. But how can someone who fails basic logic build an AI? It’s all sounding a bit sci-fi for dummies.

  23. Cassandra: You have found the point of prime failure.

    Here is what those of us who have looked into it (and spent time with devotees) see.

    Yudkowsky talks a good game (and he has a fanfic which seems to be well written, that many people like, which is also propaganda for his ideas).

    He has a veneer of plausibility.

    Geeky sorts, who aren’t sure of some sorts of social interactions, are drawn to the idea that one can use a formula to figure out “best courses of action” for everything (and that Bayesian ideas can be mapped to social interactions).

    Many of the people who are drawn to computers don’t really know math, or much in the way of sciences in general. (One of the more interesting things is that Yudkowsky thinks Stephen Jay Gould’s writings are dangerous; like Kabbalah they require preparatory study, and a guide. Why he thinks this I don’t know, but he does.)

    They are also not versed in philosophy.

    Yudkowsky is like them in these regards.

    He writes in a way which seems both accessible and to be “cutting through the claptrap” (I have seen “skeptics” who praise him as he “explains” things like Quantum Mechanics). One of his consistent themes is, “all these things people say are hard/complicated”, aren’t. All it takes is a little understanding, and some common sense and all will be plain: the rest is usually jargon.

    He is recreating philosophy. In the course of this he has 1: made a lot of mistakes, of the sort which early philosophers made, and later ones hashed out. 2: created a new jargon for old problems. 3: taught these (with the poor resolutions of these problems) to his followers.

    Which means that if one tries to explain how these “deep questions” have been resolved, or even that others have talked about them, they dismiss it. When you try to talk to them about some philosophical issue they say, “oh yes, that’s covered in the Sequences, and it’s an example of ‘arglebargletygook’,” which means (erroneous conclusion here).

    At which point, convinced they have bested you (and if you try to get them to explain what they mean when they say “arglebargletygook” they just dismiss you as an ignorant fool who has no interest in philosophy), they move on.

    It’s really frustrating, but it gives them a cocoon in which they can be “less wrong” while being a whole lot of “not right” while feeling secure in their intellectual and moral superiority.

  24. So much fail in the basilisk thing. The key: Pseudointellectualism. There’s a certain art to crafting positions that are based in mostly-valid logic and also very, very complex and counterintuitive; it gives the impression that what you’re looking at is true but simply over your head rather than actual nonsense. Since pretty much all the individual bits of logic are sound and the overall argument is so convoluted and impossible to follow, it’s practically impossible to find the weaknesses.

  25. “The problem with the Basilisk is the Yudcultists are not good thinkers. ”

    UNDERSTATEMENT OF THE YEAR

    katz - damn, I wish I, or someone, had thought to throw “pseudointellectualism” at miseryguts and anonwankfest yesterday. That would have riled ‘em no end.

  26. Another one of the main weaknesses is the whole “acausal trade” aspect. Sure, the AI could look at the past and perfectly understand us…but it still can’t communicate with us because we still can’t see the future (duh). And their “solution?” We carry out that half of the conversation by imagining the AI and what it would do.

    That’s the pseudointellectualism in action: Cover it with enough academic verbiage and maybe nobody will notice that you just said that imagining someone talking to you is the same thing as someone actually talking to you.

    But the actual fact remains that you’re imagining what the AI would do, not actually observing it, so it doesn’t matter what it actually does. If you conclude that it will torture you, that doesn’t make it actually torture you. If it does torture you, that won’t make you conclude that it will. Any way you slice it, there’s no actual communication going on. (So if the AI were actually benevolent, it wouldn’t torture you, because torturing you in the future by definition couldn’t change how you acted in the past. All that argument and none of it surmounts that basic, obvious fact.) (The right thing for the benevolent AI to do would be to convince you that it was going to torture you and then not do so…except it still can’t affect what you think or do. What you decide it’s going to do is 100% determined by your own imagination.)

    The only way you can even sort of make an argument that there’s communication going on is if you postulate that you can prove truths based on what you can imagine to be true (ie, I can imagine something which by definition would need to be true in the real world). Which is, yep, the ontological argument, that perennial favorite skeptic punching bag. And the AI version suffers from the exact same weakness: I can imagine anything, and so the conclusion I draw could be anything.

  27. They sound like MRAs. ALL THE PROJECTION.

  28. The Basilisk think isn’t even a good argument for making a “friendly” AI (disregarding the fact that the Basilisk is pretty evil in and of itself). By this logic, an “unfriendly” AI would be just as capable of torturing people who don’t contribute to its creation, so trying to create a “friendly” AI could infuriate any “unfriendly” AIs that emerge, especially if the whole point of doing it is to prevent the existence of said “unfriendly” AI.

  29. Argenti Aertheri

    Friendly in this context doesn’t mean friendly. See?

    It can be unimaginably cruel, as long as the net effect is positive to humanity as a whole. What that means and how to achieve it…see the subsection on CEV.

    And then realize you’re basically arguing about where Xenu got a hydrogen bomb.

  30. Well, I had better hope that the gestalt entity that eventually arises from Dungeons & Dragons is Mordenkainen (devoted to maintaining balance and determined to stop at nothing to do so) rather than Iuz (BURN BABY BURN).

    Mordenkainen will, of course, promptly disintegrate everyone who failed to help bring about his reification, but will be brusquely neutral to everyone who participated.

    Iuz, of course, will promptly power word: kill everyone and then raise them as zombies.

    Mordenkainen is the best we can hope for. Don’t even talk about Elminster, the idea that he will arise is just risible.

    And now I will no longer discuss roleplaying games on the off chance that Iuz is listening in.

  31. @Argenti
    Oh yes, I understand that, therefore the scare quotes, but there could be an “unfriendly” AI that wants to cause misery and destruction, i.e. a net negative effect to humanity, and that AI could torture people who don’t contribute to its creation. The whole thing is ridiculous of course, but that’s no excuse to be nonsensical.

  32. One of his consistent themes is, “all these things people say are hard/complicated”, aren’t. All it takes is a little understanding, and some common sense and all will be plain: the rest is usually jargon.

    Somehow this reminds me of Andy Schlafly, and his insistence over at his pet project Conservapedia that ALL math[s] can be explained using elementary concepts like addition.

    He has inherited from his conservative Christian background a disdain for set theory (??? which I can only guess comes from an indignant idea that set theory says God is not infinite) but his real vitriol is reserved for imaginary numbers.

    Also he seems to equate Einstein’s relativity theories with moral relativity because they share a word, and he takes big leaps such as “we can suppose that abortion may kill brilliant scientists and athletes, therefore abortion has killed the most brilliant scientists and athletes who ever lived.”

    It’s like a car wreck.

  33. Heh. “Conservative math” is such a wonderfully perfect concept. Dead wrong, utterly arrogant and completely clueless; it’s an eerily exact metaphor for social conservativism in general.

  34. Bringing potential people into the abortion discussion is just SO WEIRD. I mean, I get how you can think abortion is wrong because it kills innocent babies. I don’t agree, because (super-short version)
    a) I don’t think something counts as a human being in the morally relevant sense (as opposed to mere biological sense) until it has developed a capacity for consciousness, for instance (not talking fancy self-awareness here, just basic consciousness), and AFAWK, fetuses don’t have enough synapses for that until week 25 or so, and
    b) it’s a bad idea anyway to legally require people to be donors of their bodily resources to keep other people alive.
    BUT I don’t think you have to be completely irrational to think “abortion=baby killing, baby killing=wrong”.

    HOWEVER, this whole “what about all the people that could have been but never were?” schtick…
    1. Yeah, they might have been geniuses, but they might also have been serial killers and war criminals.
    2. As soon as you do ANYTHING ELSE but having unprotected sex during your ovulation period you’re preventing potential people from coming into existence. Actually, you’re preventing potential people from coming into existence even WHEN you’re having unprotected sex, because me banging A at time t guarantees that the potential babies I could have had if the sperm B is producing right now had hit my egg instead will never exist. So if it’s wrong to prevent POTENTIAL people from coming into existence… we’re all terrible people and we’re all going to Hell.

  35. I used to be quite obsessed with the WoD mythology. I think it’s so wonderful because of the idea that “in a world of darkness a single candle can light up the world”, which is a quote I read somewhere. There have been other games that I’ve played where you have a chance to choose your own moral path, but I think that VtM: Bloodlines is the only one where it felt that my actions mattered. Not because of the game mechanics, but just because of the dark atmosphere created by the game, one where a single act of kindness seems like a revolutionary act that can change the world.

    Though the unfortunate thing about Bloodlines is that playing as a Malkavian spoils you because it’s so amazing that it stands in contrast with the less interesting dialogue for the other clans.

    There have been some talks about a World of Darkness MMO too, but the project has been delayed so often now that I doubt it will ever see the light of day.

  36. Dvärghundspossen, your penguin has been added to the Welcome Package! But I actually commented to tell you that was a good summary of the abortion issue. I wish I could entice some of you to participate in the /r/askfeminists subreddit, but it’s kind of a time suck and the mod will apparently ban people zie decides are too critical of MRAs. Still, I sometimes shamelessly steal things people have expressed well here and dispense them there as though they were my own wisdom. :D

  37. If these clowns want to go that route, they can also be answered with all the potential dictators, mass killers, and appalling criminals generally who never get a chance to do their thing.

  38. RE: pecunium

    these people happen to think death is the absolute worst thing they can imagine. The idea of something worse than that makes them quiver in terror.

    Eh-heh. This is where I quote a crappy Disney sequel and wheeze, “You’d be surprised what you can live through.”

    Yudkowsky talks a good game (and he has a fanfic which seems to be well written, that many people like, which is also propaganda for his ideas).

    AAAAAH HE IS THE GUY WHO WROTE HARRY POTTER AND THE METHODS OF RATIONALITY? Oh, fuck me standing, I’VE HEARD OF THAT GUY. THIS IS THE SAME GUY? I feel like I’m falling down the fucking rabbit hole of OMGWTF. Every time I think I’ve hit the bottom, IT JUST GOES DEEPER.

    RE: CassandraSays

    So basically this Yudkowsky guy is exactly the kind of con man that Tom Martin would be if he was smarter and more effective at manipulating people.

    Tom Martin is about as far to becoming a con man as Greenland is to becoming Hawaii.

  39. Yeeeep, that’s the guy.

    Michael — I’m facing Bach. I really hate this guy!

  40. This is so brain-melting. I READ about that stupid fanfiction. Suddenly the weirdly polarized commentary on it makes total sense now…

  41. I haven’t read it, but I’ve seen the poster advertising it, and it’s straight up Less Wrong.

    They also have some good filks, but they are the same thing; they carry Less Wrong memes in them.

  42. I tried the first couple chapters. I honestly prefer 50 Shades of Fucked Up Grey — because it’s so horribly written. The bullshit isn’t massive logical leaps that hurt your brain, but straight up (fucked up) bullshit.

  43. these people happen to think death is the absolute worst thing they can imagine. The idea of something worse than that makes them quiver in terror.

    So … Y’s wetting his pants over the idea of dying, so gets on this fantasy track (even though he says it’s TOTES TRUE) about creating this thing that’ll saaaaave his sorry self, but then it all goes weird and they start blathering about these things that make life far worse …

    I’m reminded of the line from The Farthest Shore: “In our minds. The traitor, the self; the self that cries I want to live; let the world burn so long as I can live! The little traitor soul in us, in the dark, like the worm in the apple. He talks to all of us. But only some understand him. To be one’s self is a rare thing and a great one. To be one’s self forever: is that not better still?”

    NB I totally disagree with Le Guin’s dismal nonevent “afterlife” imagery, obviously, but the line does seem applicable to these guys.

  44. RE: Argenti

    Well, if horrible writing does it for you, 50 Shades of Bears will DEFINITELY scratch that itch.

    Yeah, looking at the explanation, I was like, “Oh god, this sounds AWFUL, but everyone’s saying it’s great…”

  45. LBT — if you’re looking for an overly complex scientific take on Harry Potter then it’ll scratch that. If you can’t manage to suspend disbelief and don’t buy anything spoon fed you from a can of rational sauce…you’ll want to scream. It’s all purple prose about science with a veneer that it must make actual sense since the parts you understand make sense. Except it doesn’t, and is painfully purple.

    As for Yudkowsky’s little cult…It works if you’re looking for something…I’m going to start That Discussion, I just know it…it works if you’re an atheist desperate for the salvation religions offer.

    (Note: I realize that not all religions offer salvation, not all religious people are religious for that reason, and not all atheists want a pseudo-religion…but these ones do, for reasons pecunium explained above)

  46. AAAAAH HE IS THE GUY WHO WROTE HARRY POTTER AND THE METHODS OF RATIONALITY?

    If I had gum, I would have swallowed it. (I haven’t read it but I’ve heard of it and most people seem to think it’s really, really good.)

  47. It’s all purple prose about science with a veneer that it must make actual sense since the parts you understand make sense.

    In other words, just like everything he says.

  48. Argenti - not wanting to start That Discussion either, but “yep” to your comments about where Yudkowsky and his crew seem to be coming from.

    This has however proven useful, ‘cos I was just browsing about the Earthsea books and saw stuff on The Other Wind, which I’d never read (I gave up in disgust after Tehanu), where the whole Dry Land idea is shown to be a colossal stuff-up by the mages. Might be worth reading at that … I’d always thought Le Guin was just doing a poetic sort of “there’s nothing after this” thing.

  49. I haven’t really followed this entire discussion of the basilisk and stuff, just want to add a little something about the ontological argument. It’s not quite right to say that it attempts to prove that if you can imagine something it must exist, or defining God into existence. It’s an attempt to prove that God’s existence is logically necessary (at least it is in its most charitable interpretation). This is why that argument is actually very difficult to wrap your head around. I’ve had seminars for first semester philosophy students on the ontological argument, and everyone is initially like “But duh, it’s about saying that something is real because you imagine it, I could ‘prove’ that Santa Claus is real the same way”, and it really takes some effort for students to a) actually learn the argument, and b) understand what’s actually wrong with it according to Immanuel Kant (the most popular dismissal), namely that existence isn’t a predicate but belongs to a different logical category (and Kant’s counter argument against the ontological argument is actually a bit contested - there are logicians who think that existence can be used as a predicate, but that the argument is still flawed for some other (complicated) reason).

    There’s a good reason why so many philosophers for hundreds of years thought that there’s probably something fishy going on here, but couldn’t quite put their finger on exactly what, and that’s not that they were stupid.

    ANYWAY, even without having read through everything about this “basilisk” I doubt that an analogous argument could be made for it. The ontological argument depends on God being the ultimate/the perfect/the greatest. An AI that doesn’t even exist for a really long time can’t fill these shoes. (At least probably not - there’s an article from 2004 by Peter Millican according to which the only flaw in the ontological argument is that “the greatest being” in the argument may be an actual, limited being - but if Millican is right, it doesn’t have to be that basilisk thing, it could be a human being as well, so still doesn’t prove that the basilisk will come.)

  50. (Note: I realize that not all religions offer salvation, not all religious people are religious for that reason, and not all atheists want a pseudo-religion…but these ones do, for reasons pecunium explained above)

    Yeah, there really are many atheists with a religious vein in them. For instance, I think moral realism is driven by religious-instinct-laden intuitions, and some atheists are super-dedicated to moral realism. Also, lots of atheists seem to get some kind of religious feeling out of thinking about how elements have formed in stars and are now part of our bodies. I really don’t see anything wrong with thinking about this and getting all kinds of fuzzy feelings from it (Yay for fuzzy feelings! They take us through the day!), although I don’t feel that way myself on contemplating elements. However, if you say that “the stars die SO THAT we could live” or something along these lines, I think you’re really veering into pseudo-religious territory, since SO THAT and similar phrases normally signify some kind of aim or purpose.

  51. Dvarg: That was a much better summary of the ontological argument, but the point is that, with the basilisk as with the God argument, you’re concluding that something must exist based on imagining it, so a) the case for the basilisk’s existence isn’t any better than that particular case for God’s existence and b) it’s a terribly ironic line of argument coming from people who have probably smugly denounced the ontological argument elsewhere.

  52. (Yay for fuzzy feelings! They take us through the day!)

    Especially when cat and dog furs lend extra fuzz power! :)

  53. @Katz: I guess I didn’t quite get my point through in that long text, but… I wanted to point out that even though you can make an ontological argument for the existence of GOD which is at least good enough to make it really difficult to actually point out where the flaw lies, I can’t see how you could make an ontological argument for the existence of a certain AI that’s even half-decent.

    So, it’s not just that they cling to an argument that’s AS BAD as an argument they dislike, it seems to me like any argument for the existence of an AI must be WAY WORSE than the best versions of the ontological argument for the existence of God. :-)

  54. They do, I think, make a bit of a warped ontological argument.

    One of the things Yudkowsky believes in is a “many worlds multiverse” (enter the handwavium). In his explanation, anything which could happen, will happen; so by postulating a thing, that thing becomes possible. If one works toward it, sooner or later it will become real (because some branching of quantum physics will cause a deterministic manifestation of it in the “real” world).

    This, I think, is why the Singularitarians are so fervent. They know they will be saved, because their miracle AI will come. Oddly this seems to give them real internal comfort.

    Me, I think (were I to believe such a thing) that knowing it can’t fail would mean I could relax, and enjoy life more; because it doesn’t matter: I will be saved in The Great Retrieval. They don’t. They dream of having the 10,000+ dollars required to have their heads frozen in liquid nitrogen the moment they die (and some want to be able to pay doctors to decapitate them just before they die so the “preservation” won’t have any delay which might hinder their eventual restoration).

    When they do spend the money, they wear the proof in a very public way; and preach the joys of knowing you will be cryonically preserved. And still they seem scared. They get upset when people say “The Singularity” is a nice McGuffin for a story, but can’t really work out that way.

    Which makes me think they don’t believe it. That they know they are whistling in the dark. Which is what makes me really upset with Yudkowsky. He’s making a lot of people’s lives less happy.

  55. Argenti Aertheri

    “He’s making a lot of people’s lives less happy.”

    Besides my hatred of misleading statistics, THAT. My Yudcultist honestly thinks he has to get a job that pays as much as possible to aid in the quest for AI — this kid’s smart, he could get most any job he wanted, but instead of picking a field he enjoys, he’s going to pick whatever seems to pay best. And instead of giving it to organizations that do current, practical, hands on aid, it’ll go to Yudkowsky to fund…what exactly? Just what is he doing with all this money? Not program testing any degree of AI, seeing how he can’t program shit.

    (FTR, MSF // Doctors without Borders is my charity of choice, because starting medical programs in war zones and famines and underdeveloped regions, working against infectious disease that’d otherwise go untreated, treating malnutrition…yeah, far more useful than dumping money into Yudkowsky’s fund for what exactly)

  56. He’s supporting the most important man in the history of mankind.

    That’s the takeaway (and Yudkowsky is willing to tell this to people, face to face). It’s amazing to watch, and only a sense of politesse (and utter ignorance as to just how serious he was) kept me from laughing at him when he said it to me (in more than one form, in the course of that evening).

    He’s sold them a bill of goods; first he makes them certain The Singularity will happen (but somehow it’s only once, not an infinite number of good/bad/friendly/unfriendly/neutral AIs) and that if HE isn’t on the ground floor of the philosophy of AI, in the design phase, then it WILL be unfriendly; because it won’t be “rational”.

    It’s hokum. It’s internally inconsistent, in ways that make it unsavory, and I think unsalvageable.

  57. Harry Potter and the Methods of Rationality… ooooh, yeah.

    There’s some really good parts. There’s some really awfully terrible parts. His eight-year-olds are basically thirty, and his adults are basically clueless moral monsters. And underlying it all is that horrible ‘take the Red Pill’ mentality.

    But there’s also a wonderful sense of humor…

    Heh.

    So, everybody’s read the short Matrix interlude piece, right?

    It comes down to basically this. In the first movie, we get to the part where Morpheus pulls out the battery, explaining how the machines are using humans, and Neo shakes his head and starts explaining physics to Morpheus, telling him how that doesn’t make any sense and is kind of stupid. (which it is, but I bet you-all know that, and know it wasn’t how it went in the original script)

    And Morpheus asks… “Neo, where did you learn these ‘physics’?”

    “In school.”

    “Inside the Matrix.”

    “…”

    “The machines craft beautiful lies.”

    “…can I have a real physics textbook?”

    “Oh, Neo. The real world doesn’t run on math.”

    I must have laughed for several solid minutes.

  58. Argenti Aertheri

    Hokum is one word. I was thinking more like con or scam.

    As for why there will only be one AI, I believe the line is that it’ll prevent any others from being made. No matter what form this AI takes. The most bullshity of the bullshit, to me anyways, is CEV.

    You need to watch the ends of Doctor Who seasons 1 and 3 — tell your beloved that I said you need to see the Bad Wolf and Yana, she should know which episodes I mean. The first set is how Rose becomes a god, the second the Master tries using the same power. The difference between a caring and vengeful god could not be more clear. (Do not drink to Dalek diatribes though, that’s how I ended up puking sweettarts!)

    Howard — nope, no math here, move right along, these are not the droids you are looking for.

  59. Argenti: As an argument that might be plausible, but… not in the many-worlds of Yudkowsky: That’s really more scary (to them) than anything else, if they believed it.

    Here’s the thing, they want to live forever. Not just some iteration of themselves in the infinite multiverses of their fantasies, but the one who is thinking about it.

    So far I can accept that. I think it’s a bit less than completely rational; but I use the word differently to their usage.

    If, however, all possibilities will come to be (and the entire “branching” aspects have some ontological problems… where is the “action” which causes the branching. H. Beam Piper actually played with this in a story in the ’60s, but I digress) then “I” will live forever.

    “I” will also be tortured by the basilisk. And I died in a motorcycle crash, and I settled down and married various people and…

    None of it matters, of course to me, if it’s not in this universe that it happens. They know this, and are desperate to make it be this one. Because deep down, they lack faith. And lots of people (esp. in the US) confuse outward display (in the form of donation, see, “The 700 Club”) with a way to manufacture sincere belief. By accident, or design, Yudkowsky has tapped into that.

  60. Argenti Aertheri

    Yeah, they want all the, arg how to word this and not start That Discussion…fuck…they want the living forever (afterlife) of some religions, cloaked in rationality, and in a form that allows them to “buy in”.

    I hate multiverse theory, the what is a decision thing bugs me a lot — we get a new universe every time I decide what the fish are getting for dinner? What the hell difference does it make if they get flakes on Friday and algae wafers on Saturday or vice versa? And more, we get a new universe every time each one of them picks which flake to go after? I spawned a hundred universes at feeding time last night then.

    “And I died in a motorcycle crash”

    And sulfa, and was it Ukraine? And I think there was a horse incident. And a few others.

    And that’s just the dying option, don’t forget the breaking other bones options (there are well over a thousand combinations) and the various methods by which dead can occur.

    But yeah, if an unfriendly AI in any universe can torture the you in this one…we’re screwed. And the end and specials of season 5 to your watch list — the reality bomb and the return of the Master (he really needs to stay dead already!). Email me if you want the summary, we’re not having another rot13 conversation!

  61. the return of the Master (he really needs to stay dead already!)

    He’s indestructible, the whole universe knows that.

  62. No, I was pointing out that one need not do anything to get all possible results. If Roko’s basilisk can happen, it will happen; what I^ care about is that it happens to I^^. But really, it doesn’t matter. So I can ignore it and just work on being happy.

  63. Argenti Aertheri

    Eh, sorta. Since doing nothing is physically impossible. But no, one need not do anything specific, nor avoid anything in particular.

    Other than that little bit of pedantry, yes.
