In my experience, raw token limits mean little with larger context windows: 1 million tokens can easily be consumed by a small number of complex files. Models also don’t do great at traversing a tree to selectively find context, which seems to be the most limiting factor I’ve run up against when trying to incorporate LLMs into complex projects that are unknown to me. By the time I’ve sufficiently hunted down and provided the context, I’ve read enough of the codebase to answer most of the questions I was going to ask.
Well, I feel a bit better about getting my 7900 XTX then, even if the price was a bit of a gut punch. It’s been a rock-solid replacement for my 3090 in terms of gaming and general Linux performance and stability. Guess I’ll be sticking with this for a few years until AMD decides to compete on the high end again.