Thought to Intent
Forget about thought-to-text. We need machines to understand us.
About a year ago, researchers in the Chang Lab at UCSF published papers on thought-to-text. Being able to send thoughts to each other would be a big deal. But for interfacing with a machine, sending thought-based text is slow and likely takes as much effort as vocalizing or writing.
It might even take more effort to think. When speaking or writing, we curate our thoughts and output only what needs to be translated. With raw thought, we might be interrupted by many competing thoughts. Perhaps with training, writing this way could become more meditative, but in situations where we aren't drafting a manuscript and instead rely on speed and accuracy, skipping the mouthing of words may not get us where we want to go.
What we really want when communicating with machines is for them to know our intents. We want the robot to move over there, pick up that thing, turn on the light switch, or open the blinds. We want to express actions and assign entities to them.
When interacting with machines, we've relied mostly on natural language understanding (NLU), a subset of natural language processing (NLP). Typically, this involves taking text and extracting entities and intents. Intents are usually related to actions and entities might be…
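To make the intent-and-entity idea concrete, here is a minimal, hypothetical sketch in Python. Real NLU systems use trained models; this rule-based toy (the patterns, entity list, and `parse` function are all invented for illustration) only shows the shape of the output: a command string mapped to an intent label plus entities.

```python
import re

# Hypothetical intent patterns: each regex maps to an intent name.
INTENT_PATTERNS = [
    (re.compile(r"\bturn on\b"), "turn_on"),
    (re.compile(r"\bturn off\b"), "turn_off"),
    (re.compile(r"\b(open|raise)\b"), "open"),
    (re.compile(r"\bpick up\b"), "pick_up"),
]

# Toy entity vocabulary; a real system would recognize entities statistically.
KNOWN_ENTITIES = ["light switch", "light", "blinds", "robot"]


def find_entities(text: str) -> list:
    """Match longest entities first, consuming matched text so that
    'light switch' does not also yield 'light'."""
    found = []
    for entity in sorted(KNOWN_ENTITIES, key=len, reverse=True):
        if entity in text:
            found.append(entity)
            text = text.replace(entity, " ")
    return found


def parse(utterance: str) -> dict:
    """Return {'intent': ..., 'entities': [...]} for a command string."""
    text = utterance.lower()
    intent = next(
        (name for pattern, name in INTENT_PATTERNS if pattern.search(text)),
        None,
    )
    return {"intent": intent, "entities": find_entities(text)}


print(parse("Turn on the light switch"))
# {'intent': 'turn_on', 'entities': ['light switch']}
print(parse("Open the blinds"))
# {'intent': 'open', 'entities': ['blinds']}
```

The point is the interface, not the matching: whether the upstream signal is speech, typed text, or decoded thought, the machine ultimately needs this kind of structured (intent, entities) pair to act on.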