Do you value your time?
Sure, generally speaking yes. Who wouldn't say that?
Here's the problem, though: the inference where you look at someone's actions and conclude they wouldn't have done something unless it was valuable to them only works if two things hold:
1. Everyone is a perfectly rational agent who only performs actions they believe will benefit them.
2. Everyone is capable of perfectly predicting whether an action will benefit them overall.
Obviously, neither of those is true. Even if #1 were true, your approach would still run into problems: you couldn't tell whether a particular action was a misprediction (one that didn't actually end up providing value to that person) or genuinely beneficial. And since humans are both often quite irrational and pretty bad at predicting outcomes to boot, well... back to the drawing board.
Is anyone using these small models for anything? I feel like an LLM snob, but I can't find the motivation to even look at anything below the 40-70B range when it's possible to use those larger models.