• 0 Posts
• 22 Comments
• Joined 5 months ago
• Cake day: January 25th, 2024
  • PassingThrough@lemmy.world to Memes@lemmy.ml · Get rich quick · 20 days ago

    Forgive me, I’m no AI expert, so I can’t fully relate the needed tokens-per-second measurement to the average query Siri might handle, but I will say this:

    Even in your article, only the largest model ran at 8 tokens/sec; the others ran much faster, and none of them were optimized for a specific task. They were just benchmarking.

    Would it be impossible for Apple to be running an optimized model specific to expected mobile tasks, and leverage their own hardware more efficiently than we can, to meet their needs?

    I imagine they cut out most worldly knowledge etc/use a lightweight model, which is why there is still a need to link to ChatGPT or Apple for some requests, would this let them trim Siri down to perform well enough on phones for most requests? They also advertised launching AI on M1-2 chip devices, which are not M3-Max either…


  • PassingThrough@lemmy.world to Memes@lemmy.ml · Get rich quick · edited · 21 days ago

    > Onboard AI chips will allow this to be local.

    > Phones do not have the power to ~~~

    Perhaps this is why these features will only be available on iPhone 15 Pro/Max and newer? Gotta have those latest and greatest chips.

    It will be fun to see how it all shakes out. If the AI can’t run most queries on the phone with all this advertising of local processing…there’ll be one hell of a lawsuit coming up.

    EDIT: Finished looking for what I thought I remembered…

    Additionally, Siri has been locally processed since iOS 15.

    https://www.macrumors.com/how-to/use-on-device-siri-iphone-ipad/


  • PassingThrough@lemmy.world to Memes@lemmy.ml · Get rich quick · 21 days ago

    I think there’s a larger picture at play here that is being missed.

    Getting the weather has been a standard feature for years now. Nothing AI about it.

    What is “AI” is asking: “Hey Siri, what is the weather at my daughter’s recital coming up?”

    The AI processing, calculated on-device if what they claim is true, is:

    1. the determination of who your daughter is
    2. What is a recital? An event? Are there any upcoming calendar events that match this concept?
    3. Is the “daughter” associated with this event by description or invitation? Yes? OK, what’s the address?
    4. Submit zip code of recital calendar event involving the kid to the weather API, and churn out a reply that includes all this information…

    > Well {Your phone contact name}, it looks like it will {remote weather response} during your {calendar event from phone} with {daughter from contacts} on {event date}.

    That is the difference between on-device and cloud processing. The phone already has your contacts and calendar, so it does that work offline rather than educating an online server about your family, events and location, and it requests the bare minimum from the internet: in this case, nothing more than if you had opened the weather app yourself and put in a zip code.
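    The steps above could be sketched roughly like this. (A hypothetical illustration: the contact/calendar data, function names, and the `fetch_weather` stub are all made up; the point is that only the zip code ever leaves the device.)

    ```python
    # Hypothetical on-device assistant pipeline: resolve everything locally,
    # then make one minimal network request (weather by zip code).

    ON_DEVICE_CONTACTS = {"daughter": {"name": "Emma"}}  # stays on the phone
    ON_DEVICE_CALENDAR = [                               # stays on the phone
        {"title": "Emma's recital", "date": "2024-06-14", "zip": "90210"},
    ]

    def fetch_weather(zip_code):
        # Stand-in for the only off-device call: a plain weather lookup.
        return "sunny"

    def answer(owner, relation, keyword):
        # 1. Determine who "daughter" is from local contacts.
        person = ON_DEVICE_CONTACTS[relation]["name"]
        # 2. Find a local calendar event matching the concept and the person.
        event = next(e for e in ON_DEVICE_CALENDAR
                     if keyword in e["title"].lower() and person in e["title"])
        # 3. Submit only the event's zip code to the weather API.
        weather = fetch_weather(event["zip"])
        # 4. Assemble the reply from local data plus the single remote answer.
        return (f"Well {owner}, it looks like it will be {weather} during your "
                f"{event['title']} with {person} on {event['date']}.")

    print(answer("Alex", "daughter", "recital"))
    ```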


  • Plug it into a monitor or TV and keep an eye on the console.

    I have an older NUC that will not cooperate with certain brands of NVMe drive under PVE. The issue sounds like yours: it would work for an arbitrary amount of time before crashing the file system, attempting to remount read-only, and leaving the system inert, unable to handle changes like plugging in a monitor later, yet it would still be “on”.






  • Absolutely. You can even throw the telephone in there. At the start it was a great way to reach Grandma across the country or the doctor across town. Now most of the traffic on it is robots and extortionists trying to fool Grammy into handing over her money for some lie or another.

    I don’t even answer my phone for unknown numbers anymore: be on a short, named contact list, leave a voicemail reminding me you are someone I should put on that list, or nothing doing. Sucks for anyone putting me down as an emergency contact though…

    And I feel TV being 25% ads is being pretty conservative…oh, but streaming! Swap the ads and channels you don’t want for a higher per-channel price and no ads…oh, wait, now you get a higher price and the ads!


  • There’s a whole lot going towards ending the web as we know it.

    Censorship, consolidation, AI, greed, to name a few.

    Why, I couldn’t even get into the article before it faded into a paywall.

    I get people want to be paid but splashing cash on every page is not the internet as I knew it.

    Getting to this article from a social site (Lemmy) was also not how I knew it; that’s the consolidation part. It started after MySpace, in the era of Facebook pages: fewer personal websites, fewer websites in general, just get everything from Facebook and Reddit.

    And sure, AI is also going to water down content, with prompts written by cheap corporate lackeys that we will still have to pay subs for after a social site sends us there.

    And then there’s also the censorship and laws coming out to restrict what’s available. First to protect the children while they are young, then more to “protect” them as they get older, and eventually they will know nothing but state approved media.

    To quote the article,

    > It’s the End of the Web as We Know It.

    And I’m old and bitter about it. It had good promise, but enshittification took hold as was inevitable.


  • I wonder if it can be detected by the streaming apps. Some of them are really anal about ensuring you can’t record or whatever, and don’t work unless all the HDMI security handshakes go just right. I’ve had issues with bad cables, and my portable projector (Anker) has to sideload an alt version of Netflix because they couldn’t/wouldn’t get the device to pass Netflix “certification”.

    I’m guessing this means new partnerships and money changing hands, or nobody on a Roku can watch Netflix anymore, or they put these ads at a higher level that bypasses whatever security/DRM Netflix uses. Probably the last one, but if Netflix thinks it will lose money to this they’ll probably just pull their certification anyway.



  • I’ll take a compromise where “3.1” is etched in each head end, and I can trust that “3.1” means something, and start with that.

    The real crux of the issue is that there is no way to identify the ability of a port or cable without trying it, and even if labeled there is/was too much freedom to simply deviate and escape spec.

    I grabbed a cable from my box to use with my docking station. Short length, hefty girth, firm head ends; it certainly felt like a full-featured video/data/dock cable. It did not work. It did work with my numpad/USB-A hub though, so it had some data capability (I did not test whether it was 2.0 or 3.0). The cable that DID work with my docking station was actually a much thinner, flimsier-feeling one from a portable monitor I also had. So you can’t even judge by wiring density.

    And now we have companies using the port to deviate from spec completely, like the Raspberry Pi 5 technically using USB-C, but at a power level unsupported by spec. Or my video glasses that use USB-C connections all over, with a proprietary design that ensures only their products work together.

    Universal appearance, non-universal function, universal confusion.

    I hate it. At least with HDMI, RCA, 3.5mm, Micro-USB…I could readily identify what a port and plug was good for, and 99/100 the unknown origin random wires I had in a box worked just fine.


  • Actually, that leads me to another point:

    Once upon a time, the concept behind a universal USB-C connector was that we could do exactly that.

    Laptop? Phone? Camera? America? Germany? Japan? Power? Connect to the TV? Internet?

    Wouldn’t matter anymore. USB-C to cover it all. Voltage high for the laptop, low for the camera, all available just the same in every country, universal. So yes, fill the airports and hotels with them. Use them for power and to play videos on the TV. Because we weren’t supposed to have to question the voltage or abilities of the ports and cables in use.

    Did/will that future materialize?


  • I feel the only place for a €1 cable is those USB-A to C cables you get bundled with devices for 5V charging. That’s it. And the limits on those are obvious from the A plug end.

    Anything that wants to be USB-C on both ends should be fully compatible with a marked spec, not indistinguishable from a 5V junk wire or freely cherry picking what they feel like paying for.

    Simply marking on the cable itself what generation it’s good for would be a nice start, but the real issue is the cherry-picking. The generation numbers don’t cover a wire that does maximum everything except video, or a proprietary setup that does power transfer in excess of spec (Dell, Raspberry Pi 5). But they all have the same ends and the same lack of demarcation, leading to the confusion.



  • As someone with video glasses like those included here, it might be a step forward but it has a lot of room for improvement before it will survive mass market.

    For starters, unlike a screen, these glasses must be tailored to your eyesight. If you wear prescription glasses, you will need to fit glasses over glasses, or the video pair will need to support prescription lenses. And a huge problem in the market right now is pupil distance, i.e. eye spacing/head size. Mass market wants one-size-fits-all, but that means those outside the designed size will have difficulty using them, if they can at all.

    These are problems experienced across the current market: Rokid, XReal, and Viture alike.

    And then of course there’s power. If we keep to 1080p, we’ll need more computing power and battery than driving a Steam Deck screen does, which some handhelds might be able to accommodate, maybe more so depending on the weight and shape trade-offs of the new style. But so far it might be disappointing, especially if it has the appearance of a huge screen and still needs low-res upscaling/FSR to hit playable performance.

    Just my thoughts. Still cool, but no confidence in it as a winner yet.




  • Because “protecting the children” is an easier political fight than trying to save adults from their own freedom, and the internet is not as clearly a threat as guns or drugs. And even guns are hard to restrict…

    As an adult you have a right to make bad choices, as well as certain constitutional rights, and unless controlling your rights can be readily accepted as required for the public good(like keeping you from driving a 2 ton murder box without training), it will die politically very, very quickly as government overreach.

    And even then there are many who think driver’s licenses are a violation of their freedoms. You think we can control their social media/free speech outlets?