As modern neural networks grow in size and complexity, their inference time grows as well. One of the most effective compression techniques, channel pruning, combats this trend by removing channels from convolutional weights to reduce resource consumption. However, channel removal is not trivial for multi-branch segments of a model, where it can introduce additional memory copies at inference time. These copies increase latency, so much so that the pruned model can be even slower than the original, unpruned model. As a workaround, existing pruners constrain certain channels to be pruned together. This fully eliminates inference-time memory copies but, as we show, these constraints significantly hurt accuracy. To solve both challenges, our idea is to enable unconstrained pruning by reordering channels to minimize memory copies. Using this insight, we design a generic UCPE algorithm that prunes models with any pruning pattern. Crucially, by removing the constraints imposed by existing pruning heuristics, we improve post-training ImageNet top-1 accuracy by 2.1 points on average, benefiting pruned DenseNet (+16.9), EfficientNetV2 (+7.9), and ResNet (+6.2). Additionally, our UCPE algorithm reduces latency by up to 52.8% compared to naive unconstrained pruning, almost completely eliminating memory copies at inference time.
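
As a minimal, hypothetical PyTorch sketch (not the UCPE algorithm itself, and with illustrative layer sizes and channel subsets chosen here for exposition), the snippet below shows why unconstrained pruning of a multi-branch segment introduces inference-time memory copies: when two branches feeding the same element-wise add keep different channel subsets, their outputs must be scattered back into a shared channel layout before the add can execute.

```python
import torch
import torch.nn as nn

# Hypothetical two-branch (residual-style) segment. Both branches feed the
# same element-wise add, so their surviving channels must line up at runtime.
conv_a = nn.Conv2d(16, 8, kernel_size=3, padding=1)
conv_b = nn.Conv2d(16, 8, kernel_size=3, padding=1)

# Unconstrained pruning: each branch keeps a different channel subset
# (indices are illustrative only).
keep_a = torch.tensor([0, 1, 2, 5])
keep_b = torch.tensor([1, 3, 5, 7])

x = torch.randn(1, 16, 32, 32)
out_a = conv_a(x)[:, keep_a]  # pruned branch A output: 4 channels
out_b = conv_b(x)[:, keep_b]  # pruned branch B output: 4 channels

# Because keep_a != keep_b, the outputs cannot be added directly. Each must be
# scattered into a zero-filled buffer with the shared channel layout. These
# zero-fill + index_copy operations are the inference-time memory copies that
# slow down naively pruned models; constrained pruning forces keep_a == keep_b
# and avoids them, at a cost in accuracy.
shared = torch.zeros(1, 8, 32, 32)
merged = shared.index_copy(1, keep_a, out_a) + shared.index_copy(1, keep_b, out_b)
```

Reordering channels so that the kept subsets of interacting branches overlap as much as possible shrinks or removes these scatter copies, which is the intuition the abstract refers to.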