Crack Open a (Context) Window

Leor Grebler
2 min read · Feb 26, 2024

Generated by author using Midjourney

For us lay people, the context window is the amount of text we can paste as a prompt, or upload as documents, to start the process with generative AIs like ChatGPT. The amount of information that fits there has grown geometrically, from a few hundred “tokens” to now more than a million. At some point, the limit will be irrelevant, except when running LLMs on ultra-low-cost or edge-based machines.
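If you want a feel for how much of a window a pasted document actually uses, here’s a minimal sketch in Python using the tiktoken tokenizer library; the model name and window size below are assumptions and vary by model:

```python
# Minimal sketch: estimate how much of a context window a document consumes.
# Assumes the tiktoken library is installed; the model name and window size
# are illustrative placeholders, not the limits of any specific product.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-4")   # placeholder model choice
document = open("book.txt").read()                # hypothetical pasted document

tokens = encoding.encode(document)
context_limit = 128_000  # assumed window size; real limits vary by model
print(f"{len(tokens)} tokens, about {len(tokens) / context_limit:.0%} of the window")
```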

The bigger challenge in working with LLMs is getting them to try harder. A few days ago I wrote that my experience with these generative AIs is like working with a very smart but somewhat lethargic summer student. What “motivates” an LLM to develop a better response rather than sputtering out the cheapest, easiest answer that comes to mind?

You: “Here’s a book {paste}. Write a book report.”

LLM: [horrible two-line response]

That’s what most people do. You end up with the “I hope this email finds you well…” type of writing for sales copy.

The next level up is, of course, to add more detail but also to get the LLM to adopt a persona. However, my experience is that if you stop there, you just get somewhat better, or merely different, language, not necessarily the best. The next step after that is self-reflection.

Of course, we all rate ourselves in a biased way. One can overcome this by asking the LLM to adopt a different persona when grading its own work (see the sketch below). But what then?
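Here’s a minimal sketch of that progression, persona prompting followed by a grading pass under a second, harsher persona, assuming the OpenAI Python SDK; the model name, personas, and prompts are placeholders rather than a tuned recipe:

```python
# Minimal sketch: persona prompting plus self-review under a second persona.
# Assumes the OpenAI Python SDK (openai >= 1.0) and an OPENAI_API_KEY in the
# environment; the model name, personas, and prompts are placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name

def ask(system: str, user: str) -> str:
    """Send one persona-framed request and return the reply text."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return response.choices[0].message.content

book_text = "..."  # the pasted book or document

# First pass: write under one persona.
draft = ask(
    "You are a veteran literary critic writing for a general audience.",
    f"Here is a book:\n{book_text}\n\nWrite a thorough book report.",
)

# Self-reflection: grade the draft under a different, harsher persona.
critique = ask(
    "You are a demanding editor who grades harshly and explains every deduction.",
    f"Grade this book report out of 10 and list its specific weaknesses:\n{draft}",
)
print(critique)
```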

The next step is iteration. Can it create iterations? Can it build and improve? Can it show its work?

The cost of doing this improvement over and over again is almost nothing compared to what it would cost (if it were even possible) to get a human to do it. Maybe you can ask someone to rewrite something 5 times. But 40 times? 100? This is where the power of generative AI gets unlocked. It’s like throwing a stone in a polisher and getting the machine to tumble it for the equivalent of a month.
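A minimal sketch of that tumbling loop, reusing the ask() helper from the sketch above; the iteration count and prompts are illustrative assumptions, not tuned values:

```python
# Minimal sketch of the iterative loop: critique, then rewrite, repeated N times.
# Reuses the ask() helper from the previous sketch; the iteration count and
# prompts are illustrative assumptions.
def polish(text: str, iterations: int = 40) -> str:
    current = text
    for _ in range(iterations):
        # Adversarial grading pass under a different persona.
        critique = ask(
            "You are a demanding editor who grades harshly.",
            f"List the three biggest weaknesses of this draft:\n{current}",
        )
        # Rewrite pass that must address the critique and show its work.
        current = ask(
            "You are a veteran writer revising your own draft.",
            "Rewrite the draft to fix these weaknesses and briefly note what you changed.\n\n"
            f"Critique:\n{critique}\n\nDraft:\n{current}",
        )
    return current

polished_report = polish(draft, iterations=40)
```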

Similar to a rock tumbler, an iterative LLM, given the right, almost-adversarial approach to tuning and improving, can take almost anything and turn it into a gem. Also similar to working with a rock tumbler, the key is for the process not to break halfway through the polishing. That may be where large context windows help.

Written by Leor Grebler

Independent daily thoughts on all things future, voice technologies and AI. More at http://linkedin.com/in/grebler
