this post was submitted on 15 Jul 2024
61 points (96.9% liked)

[–] [email protected] 48 points 1 month ago (3 children)

Microsoft CTO Kevin Scott is of course not a reliable source due to conflict of interest and his position in the US corporate world.

If anything, the fact that he is doing damage control PR around "LLM scaling laws" suggests something is amiss. Let's see how things develop.

[–] ItsComplicated 32 points 1 month ago

Given Microsoft's investment in OpenAI and strong marketing of its own Microsoft Copilot AI features, the company has a strong interest in maintaining the perception of continued progress, even if the tech stalls.

I believe this sums it up.

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago)

Yeah. There's a very narrow niche that demands huge models: use cases with no room for mistakes. That space is exciting, but it's also deeply bogged down in uncertainty, both from existing laws and from the as-yet-undelivered (but surely coming) disasters that will prompt new ones.

Everywhere else, I suspect we've seen about as good as we're going to get from current-generation AI.

Tech firm CEOs know this too, but right now there's nothing else interesting on the table to "bet the farm" on when courting "swing for the fences" investors (gullible suckers).

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago)

While I agree about the conflict of interest, I would largely say the same thing even without one. However, I see intelligence as a modular, many-dimensional concept. Even if LLMs scale as anticipated, they will still need to be organized into different forms of informational or computational flow before they resemble an actively intelligent system.

On that note, the recent developments in active inference, like RxInfer, are astonishing given how little attention they're getting. Seeing how LLMs are being treated, I'm almost glad the field isn't being absorbed into the hype-and-hate cycle.