timmytbt
Of late, my biggest concern is certain parties feeding LLMs with a different version of history.
Search has become so shit of late that LLMs are often the better path to answering a question. But as everyone knows they are only as good as what they’ve been trained on.
Do we, as a society, move past basic search to a preference for AI to answer our questions? If we do, how do we ensure that the history they feed the models is accurate?
I somewhat agree with that (good for information retrieval).
I say somewhat because they will downright lie, until/unless you call them out.
You need to have an idea of whether what they are telling you is in fact true or not.
I find them very useful for programming snippets because a) I can usually grok whether what they’ve provided is what I asked for and b) the proof is in the pudding (does the code do what I want?).
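That "proof is in the pudding" check can be made concrete with a few assertions against cases you already know the answer to. A minimal sketch (the `dedupe_keep_order` function here is a hypothetical example of the kind of snippet an LLM might hand back, not from the thread):

```python
# Hypothetical LLM-provided snippet: remove duplicates from a list
# while preserving first-seen order.
def dedupe_keep_order(items):
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# The actual check: run it on inputs whose correct output you know.
# If any assertion fails, the snippet is wrong, however plausible it looked.
assert dedupe_keep_order([3, 1, 3, 2, 1]) == [3, 1, 2]
assert dedupe_keep_order([]) == []
assert dedupe_keep_order(["a", "a", "a"]) == ["a"]
```

This is why code is a relatively safe use of LLMs compared to factual questions: the output is cheaply falsifiable.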