This is the last part of my series of articles on nature-inspired algorithms. Previously I talked about algorithms inspired by genetics, swarms, bees, and ants. Today I will talk about wolves.
When a journal article's citation count runs to five figures, you know something serious is going on. The Grey Wolf Optimizer (GWO) (1) is one such example.
Like particle swarm optimization (PSO), artificial bee colony (ABC), and ant colony optimization (ACO), GWO is a metaheuristic. Although it offers no mathematical guarantees on the solution, it works well in practice and requires no analytical knowledge of the underlying problem. This lets us query a “black box” and simply use the observed results to refine our solution.
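To make the “black box” idea concrete, here is a minimal sketch of the GWO loop from the original paper: the three best wolves (alpha, beta, delta) pull the rest of the pack toward promising regions, and a coefficient `a` decays from 2 to 0 to shift from exploration to exploitation. The function name and parameter defaults below are my own choices, not anything prescribed by the paper.

```python
import numpy as np

def gwo_minimize(f, dim, bounds, n_wolves=20, n_iters=200, seed=0):
    """Minimize black-box function f using the Grey Wolf Optimizer.

    Only observed fitness values are used: no gradients, no analytical
    knowledge of f.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iters):
        fitness = np.array([f(w) for w in wolves])
        # The three fittest wolves lead the pack
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2 - 2 * t / n_iters  # decays 2 -> 0: exploration -> exploitation
        new_wolves = []
        for w in wolves:
            candidates = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a   # |A| > 1 pushes wolves away (explore)
                C = 2 * r2
                D = np.abs(C * leader - w)
                candidates.append(leader - A * D)
            # New position: average of the three leader-guided moves
            new_wolves.append(np.mean(candidates, axis=0))
        wolves = np.clip(np.array(new_wolves), lo, hi)
    fitness = np.array([f(w) for w in wolves])
    best = wolves[np.argmin(fitness)]
    return best, f(best)

# Example: minimize the sphere function with no gradient information
best_x, best_f = gwo_minimize(lambda x: np.sum(x**2), dim=2, bounds=(-5, 5))
```

On a toy objective like the sphere function, the pack converges close to the optimum at the origin; the point here is only that the objective is treated purely as a black box.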
As I mentioned in my ACO article, all of these ultimately come down to the fundamental trade-off between exploration and exploitation. Why, then, are there so many different metaheuristics?
First of all, because researchers have to publish articles. A good part of their work involves exploring things from different angles and sharing the ways in which their findings provide benefits over existing approaches. (Or as some would say, publishing articles to justify their salaries and seek promotions. But let's not go there.)
Secondly, it is due to the “no free lunch” theorem (2), which the GWO authors themselves discuss. While that theorem specifically concerns optimization algorithms, I think it's fair to say the same holds for data science in general. There is no one-size-fits-all solution, and we often have to try different approaches to see what works.
So let's add yet another metaheuristic to our toolbox. It never hurts to have another tool that may one day prove useful.
First, let's consider a simple image classification problem. A smart approach is to use pre-trained deep neural networks as feature extractors to convert…