Rendering scenes observed in a monocular video from novel viewpoints is a challenging problem. For static scenes, the community has studied both scene-specific optimization techniques, which optimize on every test scene, and generalized techniques, which only run a deep-net forward pass on a test scene. In contrast, for dynamic scenes, scene-specific optimization techniques exist, but, to the best of our knowledge, there is currently no generalized method for dynamic novel view synthesis from a given monocular video. To explore whether generalized dynamic novel view synthesis from monocular videos is possible today, we establish an analysis framework based on existing techniques and work toward a generalized approach. We find that a pseudo-generalized process is possible without scene-specific appearance optimization, but that geometrically and temporally consistent depth estimates are needed. Despite involving no scene-specific appearance optimization, the pseudo-generalized approach improves upon some scene-specific methods. For more information, see the project page at https://xiaoming-zhao.github.io/projects/pgdvs.