• 0 Posts
  • 219 Comments
Joined 2 years ago
Cake day: June 23, 2023

  • enkers@sh.itjust.works to Technology@lemmy.world · I am disappointed in the AI discourse
    1 month ago

    Appreciate the correction. Happen to know of any whitepapers or articles I could read on it?

    Here’s the thing, I went out of my way to say I don’t know shit from bananas in this context, and I could very well be wrong. But the article certainly doesn’t sufficiently demonstrate why it’s right.

    Most technical articles I click on go through step-by-step processes to show how they gained understanding of the subject material, and it’s laid out in a manner that less technical people can still follow. And the payoff is you come out with a feeling that you understand a little bit more than what you went in with.

    This article is just full on “trust me bro”. I went in with a mediocre understanding, and came out about the same, but with a nasty taste in my mouth. Nothing of value was learned.


  • enkers@sh.itjust.works to Technology@lemmy.world · I am disappointed in the AI discourse
    1 month ago

    I’ll preface this by saying I’m not an expert, and I don’t like to speak authoritatively on things that I’m not an expert in, so it’s possible I’m mistaken. Also I’ve had a drink or two, so that’s not helping, but here we go anyways.

    In the article, the author quips on a tweet where they seem to fundamentally misunderstand how LLMs work:

    I tabbed over to another tab, and the top post on my Bluesky feed was something along these lines:

    ChatGPT is not a search engine. It does not scan the web for information. You cannot use it as a search engine. LLMs only generate statistically likely sentences.

    The thing is… ChatGPT was over there, in the other tab, searching the web. And the answer I got was pretty good.

    The tweet is correct. The LLM has a snapshot understanding of the internet based on its training data. It’s not what we would generally consider a true index-based search.

    Training LLMs is a costly and time-consuming process, so it’s fundamentally impossible to regenerate an LLM on the same order of magnitude of time it takes to build a simple index.
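    To make the contrast concrete (this is just a toy illustration of what a “simple index” means here, not anything from the article): a classic inverted index is built in a single cheap pass over the corpus and can be rebuilt or updated in seconds, whereas retraining an LLM on new data takes weeks of compute.

    ```python
    from collections import defaultdict

    # Toy inverted index: maps each term to the set of documents containing it.
    # Building it is one linear pass over the corpus, which is the contrast
    # being drawn with retraining an LLM from scratch.
    def build_index(docs):
        index = defaultdict(set)
        for doc_id, text in docs.items():
            for term in text.lower().split():
                index[term].add(doc_id)
        return index

    def search(index, term):
        # .get() on a defaultdict doesn't create new entries
        return index.get(term.lower(), set())

    docs = {
        1: "LLMs generate statistically likely sentences",
        2: "A search engine scans an index of the web",
    }
    index = build_index(docs)
    print(search(index, "index"))  # {2}
    ```

    Updating the index for a new document is just another call to the same loop; there’s no equivalent cheap operation for folding new knowledge into an LLM’s weights.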

    The author fails to address any of these issues, which suggests to me that they don’t know what they’re talking about.

    I suppose I could concede that an LLM can fulfill a similar role to the one a search engine traditionally has, but it’d kinda be like saying that a toaster is an oven. They’re both confined boxes which heat food, but good luck if you try to bake 2 pies at once in a toaster.





  • I have a hard time considering something that has an immutable state as sentient, but since there’s no real definition of sentience, that’s a personal decision.

    Technical challenges aside, there’s no explicit reason that LLMs can’t do self-reinforcement of their own models.

    I think animal brains are also “fairly” deterministic, but their behaviour is also dependent on the presence of various neurotransmitters, so there’s a temporal/contextual element to it: situationally, our emotions can affect our thoughts, which LLMs don’t really have either.

    I guess it’d be possible to forward feed an “emotional state” as part of the LLM’s context to emulate that sort of animal brain behaviour.
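    A minimal sketch of that idea (all of these names are made up for illustration; no real API works exactly like this): keep a mutable “emotional state” outside the model and prepend it to the context on every turn.

    ```python
    # Hypothetical sketch: carry an "emotional state" forward between turns
    # and inject it into the prompt as system context. In a real system,
    # update_state() would infer the next state from the exchange (e.g. via
    # a sentiment classifier); here it's a stub that keeps the state fixed.
    def build_prompt(emotional_state, history, user_message):
        system = f"Current emotional state: {emotional_state}."
        return (
            [{"role": "system", "content": system}]
            + history
            + [{"role": "user", "content": user_message}]
        )

    def update_state(emotional_state, reply):
        # Stub: a real implementation would derive the new state here.
        return emotional_state

    state = "calm"
    messages = build_prompt(state, [], "Hello")
    print(messages[0]["content"])  # Current emotional state: calm.
    ```

    The interesting part is the loop, not the prompt format: the model’s output influences the next state, and the state influences the next output, which is roughly the temporal element described above.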







  • Just to be clear, they were fully transparent about it:

    “Hello, just to be clear for everyone seeing this, I am a version of Chris Pelkey recreated through AI that uses my picture and my voice profile,” the stilted avatar says. “I was able to be digitally regenerated to share with you today. Here is insight into who I actually was in real life.”

    However, I think the following is somewhat misleading:

    The video goes back to the AI avatar. “I would like to make my own impact statement,” the avatar says.

    I have mixed feelings about the whole thing. It seems that the motivation was genuine compassion from the victim’s family, and a desire to honestly represent the victim to the best of their ability. But ultimately, it’s still the victim’s sister’s impact statement, not his.

    Here’s what the judge had to say:

    “I loved that AI, and thank you for that. As angry as you are, and as justifiably angry as the family is, I heard the forgiveness, and I know Mr. Horcasitas could appreciate it, but so did I,” Lang said immediately before sentencing Horcasitas. “I love the beauty in what Christopher, and I call him Christopher—I always call people by their last names, it’s a formality of the court—but I feel like calling him Christopher as we’ve gotten to know him today. I feel that that was genuine, because obviously the forgiveness of Mr. Horcasitas reflects the character I heard about today. But it also says something about the family, because you told me how angry you were, and you demanded the maximum sentence. And even though that’s what you wanted, you allowed Chris to speak from his heart as you saw it. I didn’t hear him asking for the maximum sentence.”

    I am concerned that it could set a precedent for misuse, though. The whole thing seems very grey to me. I’d suggest everyone read the whole article before passing judgement.


  • enkers@sh.itjust.works to Technology@lemmy.world · *Permanently Deleted*
    2 months ago

    Just to add, if it’s found that evidence was destroyed, beyond potential separate charges for the destruction itself, a judge would also typically give an adverse inference instruction to the jury. That means the jury should assume that the destroyed evidence would have been damning to whoever destroyed it.

    What that tells me is, assuming Google acted rationally in the destruction, either they think they have a reasonable chance of beating the evidence-destruction charges, or the evidence is so damning that the reality of the situation is considerably worse than whatever adverse inferences might be drawn.

    (I am not a lawyer, so please take my interpretation with a large grain of salt.)


  • I mean… maybe? Could an EO be used to just dismiss an existing case? Maybe. It’s kinda make-believe land over there right now, so it’s hard to say what is and isn’t in the realm of possibility.

    And I don’t know the legal system well enough to say for sure how he could go about it. Presumably he’d have to intervene in some part of the bureaucracy to stop the case from proceeding normally. Whether or not he could do that legally seems to be a bit contentious:

    https://hls.harvard.edu/today/what-power-does-the-president-have-over-the-federal-bureaucracy/

    However, there’s also the question of whether he could just have some cronies walk into the place with a bunch of dudes in black suits and do it anyway. I think the DoJ would be pretty pissed if he tried that, but he’s already been flirting with contempt of court and we haven’t seen any judge pull the trigger on that yet, so we’ll see, I guess.

    There’s also the fact that nobody’s given him incentive to do it yet. They’ll probably wait and see, as Trump would likely need a sizeable reason to step in, so why potentially pay Trump more without knowing what the damages will even be, right?

