• 0 Posts
  • 39 Comments
Joined 1 year ago
Cake day: July 1st, 2023


  • The problem with a punishment mesmer, a defensive juggernaut of any kind, and turret engie is that they result in degenerate gameplay. Turrets can’t be allowed to succeed in PVE (see: Lake Doric), and none of these class fantasies can be allowed at all in PVP.

    Turrets and juggernauts turn into turtling bunkers that either grind play to a halt or turn into raid bosses, and the only way to balance them is to essentially make the style of play unfun for the person who wants it. “Being unkillable” or “controlling this space” can’t be supported in a competitive game mode. Now, you can balance this by just splitting everything and making the specs unplayable or wildly different in competitive modes, but that means you’re now devoting the dev resources to build the thing twice (for both modes), yet players can only really enjoy it in PVE. From a design perspective, that’s a really poor return on investment for an elite spec.

    Punishment mesmer worked in GW1 because you had much better defined roles in all game modes with less overlap, and there was ability parity between players and NPCs, so you could interact with an enemy mob essentially the same way you’d interact with an enemy player. In GW2, you can’t punish a playstyle because playstyles aren’t that well defined, and you can’t create a niche for hex gameplay because they gave everybody else the mesmer toys (see: Torment and Confusion). If you try to make a spec that depends on them even more than certain mesmer specs already do, the byproduct will be turning revs into gods (again). There’s also no energy denial in GW2, and you can’t give a player a bar full of interrupts because everybody already has as many interrupts as the game can support without being catastrophically unfun. GW2 is just the wrong kind of game for GW1’s mesmer–like a lot of things that were better in GW1.

    If you ask me, we don’t need more elite specs. We need more non-elite specs–stuff we can combine more freely with what we already have–and we need the elites to be “de-elited” so that the power level of the vanilla specs has better parity with their elite counterparts. I know they’ve taken a pass at this before (or two or three), but it has clearly not panned out. Having multiple options for ranged elementalists, for example, is definitely something that needs to be supported.


  • No, sorry. I try to be deferential when talking about this stuff, but this is pretty cut and dried, and I’m afraid you’re just wrong here. This is Greek–not theology. πίστις is the word we’re talking about. It shares a common root with πείθω–“to persuade” (i.e., that evidence is compelling or trustworthy). πίστις is the same word you would use in describing the veracity of a tribunal’s judgment (for example, “I have πίστις that the jurors in NY got the verdict right/wrong”). The Greeks used the word to personify honesty, trust, and persuasiveness prior to the existence of Christianity (although anyone who knows Attic or is better versed in Greek mythology should feel free to correct me). The word has been tied up with persuasion, confidence, and trust since long before the New Testament. The original audience of the New Testament would have understood the meaning of the word without depending on any prior relation to religion.

    Is trust always a better translation? Of course not–and that’s why, you’ll notice, I didn’t say that (and if it were, one would hope that many of the very well educated translators of Bibles would have used it). But I think you can agree that the concept is also difficult for English to handle (since trust in a person, trust in a deity, and trust in a statement are similar but not quite the same thing, and the same goes for belief in a proposition, belief in a person, and belief in an ideal or value, to say nothing of analogous concepts like loyalty and integrity).

    The point is that πίστις–faith–absolutely does not mean belief without evidence, and Christianity since its inception has never taught that. English also doesn’t use the word “faith” to imply the absence of evidence, and we don’t need to appeal to another language to understand that. It’s why the phrase “blind faith” exists (and the phrase is generally pejorative in religious circles as well as secular ones).

    Now, if you think the evidence that convinces Christians to conclude that Jesus’ followers saw Him after His death is inadequate, that’s perfectly valid and a reasonable criticism of Christianity–and if you want to talk about that, that would be apologetics.

    In any event, if you’re going to call something bullshit, you better have a lot of faith in the conclusion you’re drawing. ;)


  • The way faith is treated in the First Century doesn’t translate well to modern audiences. Having the faith of a child isn’t an analogy to a child being gullible. It’s an analogy to the way a child trusts in and depends on his parents. Trust, arguably, would be a better translation than faith in many instances.

    Faith for ancient religious peoples wasn’t about believing without proof. That would be as ridiculous for a First Century Jew as it is for us. Faith is being persuaded to a conclusion by the evidence.



  • Support for Windows 10 LTSC 2021 ends in 2027 (although it doesn’t matter quite as much). And the Win 11 LTSC coming later this year will, by its nature, likely be free from much of 11’s bullshit. Linux is still the right call, but for those of us who need to run a Windows machine for whatever reason, there are alternatives, so, you know… yarr.



  • I’m incredibly fascinated by the ghost comparison. Is the probability that ghosts are a real physical phenomenon higher or lower than the probability that aliens exist or have visited us? That’s an extremely interesting question, and I’m sure someone could do a statistical meta-analysis comparing the incidence of, say, UFO sightings with the incidence of paranormal experiences (if such an analysis doesn’t already exist). Both questions seem like things that should be broadly empirically falsifiable (and indeed, specific instances certainly are), but humanity’s curiosity about both has proven remarkably durable despite centuries of investigation and myriad efforts to settle both questions (negatively) once and for all.



  • And you’re absolutely right about that. That’s not the same thing as LLMs being incapable of composing anything written in a novel way, but the fact that they will, with very little prodding, readily regurgitate complete works verbatim is definitely a problem. That’s not a remix. That’s publishing the same track and slapping your name on it. Doing it two bars at a time doesn’t make it better.

    It was so easy to get ChatGPT, for example, to regurgitate its training data that you could do it by accident (at least until someone published the method last year). But, the critics cry, you’re using ChatGPT in an unintended way. And indeed, exploiting ChatGPT to reveal its training data is a lot like lobotomizing a patient or torture victim to get them to reveal where they learned something, but that really betrays that these models don’t actually think at all. They don’t actually contribute anything of their own; they simply have such a large volume of data to reorganize that it’s (by design) impossible to divine which source is being plagiarised at any given token.

    Add to that the fact that every regulatory body confronted with the question of LLM creativity has so far decided that humans, and only humans, are capable of creativity, at least so far as our ordered societies will recognize. By legal definition, ChatGPT cannot transform (term of art) a work. Only a human can do that.

    It doesn’t really matter how an LLM does what it does. You don’t need to open the black box to know that it’s a plagiarism machine, because plagiarism doesn’t depend on methods (or sophisticated mental gymnastics); it depends on content. It doesn’t matter whether you intended the work to be transformative: if you repeated the work verbatim, you plagiarized it. It’s already been demonstrated that an LLM, by definition, will repeat its training data a non-zero portion of the time. In small chunks that’s indistinguishable, arguably, from the way a real mind might handle language, but in large chunks it’s always plagiarism, because an LLM does not think and cannot “remix”. A DJ can make a mashup; an AI, at least as of today, cannot. The question isn’t whether the LLM spits out training data; the question is the extent to which we’re willing to accept some amount of plagiarism in exchange for the utility of the tool.
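
    To make the “small chunks vs. large chunks” point concrete, here’s a rough sketch of the kind of check I have in mind: count how many of the output’s word n-grams appear verbatim in the source. None of this is a real tool or anyone’s actual method–the function names, the n-gram length, and the placeholder strings are all made up for illustration.

    ```python
    # Illustrative sketch only: measure how much of a model's output repeats a
    # source text verbatim, using shared word n-grams. Names, thresholds, and
    # placeholder strings below are invented for this example.

    def ngrams(tokens, n):
        """Return the set of word n-grams (as tuples) in a list of tokens."""
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def verbatim_overlap(source: str, output: str, n: int = 8) -> float:
        """Fraction of the output's n-grams that also appear verbatim in the source."""
        source_grams = ngrams(source.lower().split(), n)
        output_grams = ngrams(output.lower().split(), n)
        if not output_grams:
            return 0.0
        return len(output_grams & source_grams) / len(output_grams)

    # Short n-grams (2-3 words) flag ordinary phrasing everyone shares; long runs
    # (8+ words) flag the verbatim repetition that amounts to republishing the work.
    source_text = "the original work goes here"    # placeholder
    model_output = "the model output goes here"    # placeholder
    print(f"8-gram overlap: {verbatim_overlap(source_text, model_output):.1%}")
    ```

    The point of the sketch is just that the distinction is measurable: a high fraction of long shared runs is plagiarism territory regardless of how the model produced it, while a low fraction of short ones is ordinary language reuse.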