We study dynamic optimization processes from the viewpoint of Golden optimality. A path is called Golden if every state moves to the next by repeating the same Golden-section division at each transition. A policy is called Golden if, together with the relevant dynamics, it yields a Golden path. The problem is whether an optimal path/policy is Golden. This paper considers the minimization of a quadratic criterion and the maximization of a square-root criterion over an infinite horizon. We show that a Golden path is optimal in both optimizations. The Golden optimal path is obtained by solving the corresponding Bellman equation of dynamic programming, which in turn yields a Golden optimal policy.
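
As a minimal illustration of the Golden-path notion (a sketch with assumed linear dynamics x_{k+1} = x_k / phi, not the paper's specific model), the following shows that such a path repeats the same Golden-section division at every transition: the whole state x_k stands to the larger part x_{k+1} as the larger part stands to the remainder x_k - x_{k+1}, both ratios being the Golden ratio phi.

```python
import math

# The Golden ratio phi is the positive root of phi**2 = phi + 1.
PHI = (1 + math.sqrt(5)) / 2

def golden_path(x0, n):
    """Generate a path whose every transition repeats the same
    Golden-section division: x_{k+1} = x_k / PHI.
    (Illustrative dynamics, assumed for this sketch.)"""
    path = [x0]
    for _ in range(n):
        path.append(path[-1] / PHI)
    return path

path = golden_path(1.0, 5)

# Defining property of the Golden section at each transition:
# whole : larger part == larger part : smaller part == PHI.
for xk, xk1 in zip(path, path[1:]):
    larger, smaller = xk1, xk - xk1
    assert math.isclose(xk / larger, larger / smaller)
    assert math.isclose(xk / larger, PHI)
```

Every transition thus realizes the same section, which is what qualifies the path as Golden in the sense defined above.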