Profile pic is from Jason Box, depicting a projection of Arctic warming to the year 2100 based on current trends.

  • 0 Posts
  • 1.02K Comments
Joined 2 years ago
Cake day: March 3rd, 2024



  • Rhaedas@fedia.io to Memes@sopuli.xyz · One more LLM
    3 points · 19 hours ago

    I’ve tried a few of the newer local ones with the visible chain of thought and thinking mode. For some things it may work, but for others it’s a hilarious train wreck of second-guessing, loops, and crashes into garbage output. They call it thinking mode, but what it’s doing is trying to stack the odds to get a better probability hit on hopefully a “right” answer. LLMs are the modern ELIZA, convincing on the surface but dig too deep and you see it break. At least Infocom’s game parser would break gracefully.



  • Rhaedas@fedia.io to Memes@sopuli.xyz · One more LLM
    +19 / −1 · 20 hours ago

    Remember that LLMs were trained on the internet, a place that’s full of confidently incorrect postings. Using a coding LLM helps some, but just like with everything else, it gives better results if you only ask for small, specific parts at a time. The longer the reply, the more likely it is to drift into nonsense (since it’s all just probability anyway).

    I’ve gotten excellent starting code before that I could tweak into a working program. But as things got more complex, the LLM would try to do things even that guy who gets downvoted a lot on Stack Overflow wouldn’t dare suggest.




  • How much power the federal government can wield has been a debate since the founding; the first two political parties were the Federalists and the Democratic-Republicans. The optimal balance is probably somewhere in between, with plenty of checks and balances that can shift with circumstances and time. We’ve had some of that through our history, too.

    It’s always hardest to see the best direction to take when you’re in the middle of historical change, but it does seem we’ve slid a bit too far at this point to use the broken system to fix the system. I’m wondering not only what path we’re going to take to get to the next stage, but how the world is going to act while we do it, given how entangled the US is with everything. Some might say to let it burn, ignore it, play isolationist, but that approach never worked out historically, nor did trying to step in and “fix” things.


  • I enjoyed the Mars series, although I’ll admit it’s been a while since I read it, and I only really remember a few things unless someone mentions specific events. I haven’t read the book itself, but the opening chapter of Ministry for the Future (available online) sticks with me as a premonition of what’s to come for many people. The only flaw to me was

    Spoiler

    how the main character seems to be the only survivor of what should have killed everyone. Perhaps if the death toll hadn’t been so absolute, it would have felt less like plot armor.



  • Mine isn’t that bad - only 20 years old, but it has seen all sorts of things, from rocks and sand to hail, and is just pitted badly enough to be annoying. But it’s the fact that I’ve seen the abuse it’s gone through without the first hairline crack that makes me hesitant to get rid of something that’s stood the test of time. It’s either the angle or the glass (doubtful), but at this point it can’t be just luck, right? I just hear horror stories of replacement glass that isn’t fitted right, leaks, or breaks early on. I can deal with it a bit longer.







  • That’s a reasonable definition. It also pushes things closer to what we think we can do now, since the same logic makes a slower AGI equal to a person, and a cluster of them on a single issue better than one. The G (general) is the key part that changes things, no matter the speed, and we’re not there. LLMs are general in many ways, but they lack the I to spark anything from it; they just simulate it by doing exactly what you describe: being much faster at finding the best matches in their training data and sometimes appearing to have reasoned it out.

    ASI is a definition that differs only in scale. We as humans can’t have any idea what an ASI would be like, other than far superior to a human for whatever reasons. If it’s only speed, that’s enough. It certainly could become more than just faster, though, and that added to speed… naysayers had better hope they are right about the impossibilities, but how can they know for sure about something we wouldn’t be able to grasp if it existed?


  • I doubt the few who are calling for a slowdown or an all-out ban on further AI work are trying to profit from any success they have. The funny thing is, we won’t know if we ever hit that point of even just AGI until we’re past it, and in theory AGI will quickly go to ASI simply because it’s the next step once that point is reached. So anyone saying AGI is here or almost here is just speculating, just as anyone who says it’s not near or won’t ever happen is.

    The only thing possibly worse than getting to the AGI/ASI point unprepared might be not getting there at all, but instead creating tools that simulate a lot of its features and all of its dangers, and ignorantly using them without any caution. Oh look, we’re there already, and doing a terrible job at being cautious, as we usually are with new tech.