We present a novel approach to predicting molecular conformers through a simple formulation that sidesteps many of the heuristics of prior work and achieves state-of-the-art results by exploiting the advantages of scale. By training a generative diffusion model directly on 3D atomic positions, without assuming an explicit structural parameterization of molecules (e.g., modeling torsion angles), we radically simplify structure learning and make it trivial to increase model size. This model, called Molecular Conformer Fields (MCF), parameterizes conformer structures as functions that map elements of a molecular graph directly to their 3D locations in space. This formulation reduces the essence of structure prediction to learning a distribution over functions. Experimental results show that scaling model capacity yields large gains in generalization performance without imposing inductive biases such as rotational equivariance. MCF represents a step forward in extending diffusion models to complex scientific problems in a conceptually simple, scalable, and effective way.
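The core idea above can be illustrated with a minimal sketch: a conformer is treated as a function assigning each graph node a 3D coordinate, and a standard diffusion forward process noises those coordinates directly. This is not the paper's implementation; all names (e.g., `noise_conformer`, the noise schedule parameters) are illustrative assumptions.

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Hypothetical linear variance schedule; returns cumulative alpha_bar_t."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    return np.cumprod(alphas)

def noise_conformer(x0, t, alpha_bar, rng):
    """Forward diffusion q(x_t | x_0) applied directly to 3D atom positions.

    x0: (n_atoms, 3) array of coordinates -- the conformer is just the
    function node -> R^3, with no torsion-angle or fragment heuristics.
    """
    eps = rng.standard_normal(x0.shape)
    a = alpha_bar[t]
    xt = np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps
    return xt, eps

rng = np.random.default_rng(0)
alpha_bar = make_schedule()
x0 = rng.standard_normal((5, 3))              # toy 5-atom conformer
xt, eps = noise_conformer(x0, t=500, alpha_bar=alpha_bar, rng=rng)
```

A denoising network trained to recover `eps` (or `x0`) from `xt` at each timestep would then define the generative model; because the data are plain per-atom coordinates, scaling the network requires no change to this formulation.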