Think Hard Once, Do Many
An analogy for retraining AI models
About nine months ago, I agonized over a decision: whether to park at the airport, which parking lot to use if I did, or whether to order a van to drive us instead. I have to make that decision again, and this time around there’s a lot less agony.
Last time, my inputs were the cost, the time it would take, what I expected to be doing before the trip to the airport, what I’d be taking with me, and where everyone I was driving with would be when we left.
Now, to make the decision again, I just need to check whether there are any big variations in the inputs I used before.
Did the price of parking change? Does my car need repairs, or did I change cars? Am I flying with a different group? Is there a new airport shuttle or parking service? Did my preferences change?
Similarly, we can look at variations in the data we originally used to train our AI models to decide whether we should retrain. Are there new intents or slots we need to support? Has the language users write in changed? Has the number of training samples changed? These checks are shortcuts, but they can save a lot of effort in reaching a decision.
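To make that concrete, here’s a minimal sketch of what those input checks might look like in code. Everything in it is an assumption for illustration: the `should_retrain` name, the idea that training examples are dicts with `text`, `intent`, and `slots` fields, and the thresholds are all hypothetical rather than any particular library’s API.

```python
from collections import Counter

def should_retrain(old_examples, new_examples, sample_drift_threshold=0.2):
    """Compare old and new training inputs and report whether anything
    changed enough to justify retraining.

    Each example is assumed (hypothetically) to look like:
    {"text": "book a flight", "intent": "book_flight", "slots": ["destination"]}
    """
    reasons = []

    # New intents or slots mean the label space itself changed.
    old_intents = {ex["intent"] for ex in old_examples}
    new_intents = {ex["intent"] for ex in new_examples}
    if new_intents - old_intents:
        reasons.append(f"new intents: {sorted(new_intents - old_intents)}")

    old_slots = {s for ex in old_examples for s in ex["slots"]}
    new_slots = {s for ex in new_examples for s in ex["slots"]}
    if new_slots - old_slots:
        reasons.append(f"new slots: {sorted(new_slots - old_slots)}")

    # A big swing in sample count means the data volume shifted.
    old_n, new_n = len(old_examples), len(new_examples)
    if old_n and abs(new_n - old_n) / old_n > sample_drift_threshold:
        reasons.append(f"sample count went from {old_n} to {new_n}")

    # Crude proxy for language drift: overlap of the most common tokens.
    old_top = {t for t, _ in Counter(
        tok for ex in old_examples for tok in ex["text"].lower().split()
    ).most_common(100)}
    new_top = {t for t, _ in Counter(
        tok for ex in new_examples for tok in ex["text"].lower().split()
    ).most_common(100)}
    overlap = len(old_top & new_top) / max(len(old_top | new_top), 1)
    if overlap < 0.5:
        reasons.append(f"top-token overlap is only {overlap:.0%}")

    return bool(reasons), reasons
```

The specific checks matter less than the shape of the approach: compare cheap summaries of the inputs instead of re-running the whole deliberation from scratch.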