For most "deep learning" algorithms, the typical approach is "gradient ascent": following the slope of the objective surface to find incrementally better and better results.
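As a minimal sketch of that idea (a toy one-dimensional objective; the names `f` and `grad_f` are hypothetical, and in a real neural net backpropagation would compute the gradient for you):

```python
def f(x):
    # Toy objective to maximize: a smooth bump peaking at x = 2.
    return -(x - 2.0) ** 2

def grad_f(x):
    # Analytic derivative of f; this is the "slope" we follow.
    return -2.0 * (x - 2.0)

x = 0.0                # arbitrary starting point
learning_rate = 0.1
for step in range(100):
    x += learning_rate * grad_f(x)   # step *up* the slope (ascent)

print(x)  # converges toward the maximizer, x = 2.0
```

Each step nudges the parameters in the direction the gradient says is uphill, which is why a usable gradient is a hard requirement here.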
Genetic Algorithms, however, do not necessarily need a gradient to find a better solution, though there are similarities between GAs and hill-climbing. This blogpost explores a conceptual similarity between traditional gradient ascent / backpropagation (currently a favored technique for training neural nets) and genetic algorithms.
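By contrast, here is a bare-bones GA sketch climbing the same toy objective without ever taking a derivative (selection plus mutation only, no crossover; the `fitness` function and population sizes are illustrative assumptions):

```python
import random

def fitness(genome):
    # Same toy objective as before; note no derivative is needed,
    # only the ability to score a candidate.
    return -(genome - 2.0) ** 2

population = [random.uniform(-10.0, 10.0) for _ in range(20)]
for generation in range(100):
    # Selection: keep the fitter half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Mutation: offspring are noisy copies of the survivors.
    children = [g + random.gauss(0.0, 0.5) for g in survivors]
    population = survivors + children

print(max(population, key=fitness))  # also approaches x = 2.0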