Von Neumann’s idea of self-replicating automata describes machines that can reproduce themselves given a blueprint and a suitable environment. I’m exploring a concept that tries to apply this idea to AI in a modern context:

  • AI agents (or “fungus nodes”) that run on federated servers
  • They communicate via ActivityPub (used in Mastodon and the Fediverse)
  • Each node can train models locally, then merge or share models with others (a minimal merge sketch follows this list)
  • Knowledge and behavior are stored in RDF graphs + code (acting like a blueprint)
  • Agents evolve via co-training and mutation; they can switch learning groups and can also choose to defederate from parts of the network
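To make the merge step concrete, here is a minimal sketch (in Python with numpy, which the post does not specify) of how one node might average its parameters with a peer's, in the spirit of federated averaging. The `peer_trust` knob is my own illustrative addition, standing in for whatever policy a node uses to weight peers it might otherwise defederate from:

```python
import numpy as np

def merge_models(local_weights, peer_weights, peer_trust=0.5):
    """Federated-averaging-style merge of two nodes' parameters.

    local_weights / peer_weights: lists of numpy arrays (one per layer).
    peer_trust: how much weight to give the peer's model (0..1);
    a node could lower this for peers it is drifting away from.
    """
    merged = []
    for local, peer in zip(local_weights, peer_weights):
        merged.append((1.0 - peer_trust) * local + peer_trust * peer)
    return merged

# Example: two tiny single-layer "models"
local = [np.array([[0.2, 0.8], [0.5, 0.1]])]
peer = [np.array([[0.4, 0.6], [0.3, 0.3]])]
print(merge_models(local, peer, peer_trust=0.25))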

This creates something like a digital ecosystem of AI agents growing across the social web. Because nodes are free to choose whom they train with, shared models spread indirectly across the network, in contrast to the siloed models of current federated learning.
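On the transport side, a model update could ride over ActivityPub as an ordinary activity. Below is a rough sketch of what such a payload might look like, built as a Python dict; the actor names, URLs, and the idea of attaching the model as a Link are all made up for illustration, and real Fediverse interop would need a proper JSON-LD extension context:

```python
import json

# Hypothetical ActivityPub "Create" activity announcing a model update.
# All actor names and URLs below are illustrative only.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://node-a.example/actors/fungus-node",
    "to": ["https://node-b.example/actors/fungus-node"],
    "object": {
        "type": "Note",
        "content": "model update: round 42",
        "attachment": {
            "type": "Link",
            "href": "https://node-a.example/models/round-42.bin",
            "mediaType": "application/octet-stream",
        },
    },
}
print(json.dumps(activity, indent=2))
```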

My question: Is this kind of architecture - blending self-replicating AI agents, federated learning, and social protocols like ActivityPub - feasible and scalable in practice? Or are there fundamental barriers (technical, theoretical, or social) that would limit it?

I have started to implement this using an architecture of four microservices per node (frontend, backend, a knowledge graph using Apache Jena Fuseki, and activitypub-communicator); however, it pushes my laptop to its limits even with just 8 nodes.
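For a rough sense of why that happens, here is a back-of-envelope estimate; the per-container memory figure is an assumption (Fuseki runs on the JVM and alone often wants several hundred MB), not a measurement from the actual setup:

```python
# Rough resource estimate for the 4-microservice-per-node setup.
# The memory figure is an assumption, not a measurement.
nodes = 8
services_per_node = 4          # frontend, backend, Fuseki, ActivityPub
mem_per_service_mb = 400       # assumed average resident memory

containers = nodes * services_per_node
total_mem_gb = containers * mem_per_service_mb / 1024
print(f"{containers} containers, ~{total_mem_gb:.1f} GB RAM")
# -> 32 containers, ~12.5 GB RAM: already near a 16 GB laptop's ceiling
```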

The question could also be stated differently: how much compute would be necessary to trigger non-trivial behaviours that generate enough value to sustain the overall system?

taladar · 4 days ago
[email protected] · 4 days ago (reply)

Currently the nodes only recommend music (and they're not really good at it, tbh). But in theory it could be applied to all kinds of machine-learning problems (then again, there is the issue of scaling and the quality of the training results).