Avatar by TheWhyvern and banner through prodia.com and photopea.com

  • 0 Posts
  • 26 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • I played the demo on Steam yesterday, and had a lot of fun with it. I really look forward to seeing what they come up with in the full release.

    I’m so happy that it’s getting more normal for studios to release free demos. I’ve put a lot of hours into Aloft too, and I gotta say, it makes me far more interested in buying the game when it comes out than I otherwise would be.


  • I was curious too, so I went to the Wikipedia page:

    The goal is to attain liberation in the body, by sealing in the energy of bindu in the head so that it is not lost.

    Haṭha yoga is a branch of the largely spiritual practice of yoga, though it makes use of physical techniques; it was developed in medieval times, much later than the meditative and devotional forms of yoga. Its goals however are similar: siddhis or magical powers, and mukti, liberation. In Haṭha yoga, liberation was often supposed to be attainable in the body, made immortal through the practices of Haṭha yoga. Among its techniques were mudrās, meant to seal in or control energies such as kundalini and bindu. Khecarī mudrā is one such technique.

    tl;dr - A spiritual practice of yoga in Hindu metaphysics.


  • tbf, it seems like his relationship with animals was really hit or miss:

    Diodorus Siculus recorded a story of Menes related by the priests of the crocodile god Sobek at Crocodilopolis, in which the pharaoh Menes, attacked by his own dogs while out hunting, fled across Lake Moeris on the back of a crocodile and, in thanks, founded the city of Crocodilopolis.


  • mirror: https://archive.vn/ghN0z

    According to the former X insider, the company has experimented with AI moderation. And Musk’s latest push into artificial intelligence technology through X.AI, a one-year old startup that’s developed its own large language model, could provide a valuable resource for the team of human moderators.

    An AI system “can tell you in about roughly three seconds for each of those tweets, whether they’re in policy or out of policy, and by the way, they’re at the accuracy levels about 98% whereas with human moderators, no company has better accuracy level than like 65%,” the source said. “You kind of want to see at the same time in parallel what you can do with AI versus just humans and so I think they’re gonna see what that right balance is.”

    I don’t believe that for one second. I’d believe it if those numbers were reversed, but anyone who uses LLMs regularly knows how easy it is to circumvent them.

    EDIT: Added the paragraph right before the one I originally posted on its own; it specifies that their “AI system” is an LLM.