When I try image2image with ready-made Core ML models, it throws the following error:

```
StableDiffusion/Encoder.swift:96: Fatal error: Unexpectedly found nil while unwrapping an Optional value
```

You can reproduce this issue with the following steps:

1. Download [coreml-stable-diffusion-2-base](https://huggingface.co/apple/coreml-stable-diffusion-2-base).
2. Create a 512x512 input image.
3. Run `swift run StableDiffusionSample --resource-path <resource dir> --image <input image> "test"`.

It doesn't reproduce when I reconvert the model using the latest version of torch2coreml. I also confirmed that it doesn't reproduce when I revert the changes in https://github.com/apple/ml-stable-diffusion/commit/1147e87b790895feeddc1b146c8fddf6f8b9f8a5.
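
For context, the same image2image path can also be exercised through the Swift API instead of the CLI. The sketch below is only a rough illustration: it assumes the README-style pipeline API (`StableDiffusionPipeline`, `loadResources()`, and a `Configuration` with `startingImage`/`strength`); the exact init parameters vary between package revisions, and the paths and values are placeholders.

```swift
// Rough sketch (assumed API shape; see note above). Requires a deployment
// target of macOS 13.1 / iOS 16.2 or newer.
import CoreGraphics
import CoreML
import Foundation
import ImageIO
import StableDiffusion

do {
    // Point at the Core ML resources downloaded in step 1 (placeholder path).
    let resourceURL = URL(fileURLWithPath: "/path/to/coreml-stable-diffusion-2-base")

    let mlConfig = MLModelConfiguration()
    mlConfig.computeUnits = .cpuAndGPU

    let pipeline = try StableDiffusionPipeline(
        resourcesAt: resourceURL,
        controlNet: [],               // parameter list differs across package revisions
        configuration: mlConfig
    )
    try pipeline.loadResources()

    // Load the 512x512 input image from step 2 as a CGImage (placeholder path).
    let imageURL = URL(fileURLWithPath: "/path/to/input-512x512.png")
    guard let source = CGImageSourceCreateWithURL(imageURL as CFURL, nil),
          let startingImage = CGImageSourceCreateImageAtIndex(source, 0, nil) else {
        fatalError("Could not load the starting image")
    }

    // Setting `startingImage` routes generation through the image encoder
    // (StableDiffusion/Encoder.swift), which is where the crash is reported.
    var config = StableDiffusionPipeline.Configuration(prompt: "test")
    config.startingImage = startingImage
    config.strength = 0.7
    config.stepCount = 50
    config.seed = 93

    let images = try pipeline.generateImages(configuration: config) { _ in true }
    print("Generated \(images.compactMap { $0 }.count) image(s)")
} catch {
    print("Pipeline failed: \(error)")
}
```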