Cloudflare tunnels or a reverse proxy with Cloudflare DNS would be much easier to manage IMO. What you're doing will work, but it seems like you have a lot of moving parts in your setup, which can lead to errors creeping in.
With either proposed setup you should be able to pass both web and non-web traffic to the respective backends. In nginx, proxying a web service would look something like the following:
server {
    listen 443 ssl http2;
    server_name service.yoursite.tld;

    # The ssl listener needs a cert/key pair (e.g. a Cloudflare origin cert or Let's Encrypt)
    ssl_certificate     <path to your certificate>;
    ssl_certificate_key <path to your private key>;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host:$proxy_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://<IP of your service>:<port>;
    }
}
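For anything that isn't HTTP at all (SSH, game servers, etc.), nginx can still forward raw TCP/UDP, but it goes in a stream block at the top level of nginx.conf rather than inside the http block. A rough sketch with placeholder values:

stream {
    server {
        listen 2222;                               # public port you choose to expose
        proxy_pass <IP of your service>:<port>;    # e.g. an SSH backend on port 22
    }
}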
With Cloudflare tunnels you can set up a VM as your tunnel termination point and configure ingress rules to pass traffic where it needs to go, similar to this:
tunnel: <Tunnel UUID>
credentials-file: /root/.cloudflared/<Tunnel credentials>.json

ingress:
  - hostname: service1.yourdomain.tld
    service: http://192.168.0.10:80
  - hostname: service2.yourdomain.tld
    service: ssh://192.168.0.20:22
  - service: http_status:404  # Catch-all rule to handle unmatched ingress traffic
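For the ssh:// ingress, the machine you connect from also needs cloudflared installed, since the SSH session is carried over the tunnel rather than a plain TCP connection. A rough sketch of the client-side ~/.ssh/config entry (hostname is just the example from above):

Host service2.yourdomain.tld
    ProxyCommand cloudflared access ssh --hostname %h

Then you'd start the tunnel on the VM with something like cloudflared tunnel run <Tunnel UUID>, or install it as a system service with cloudflared service install.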
One thing you can do for your public IP is use something like inadyn to update Cloudflare whenever it changes. Inadyn is super lightweight and will keep your DNS record pointed at your current public IP within about five minutes of a change.
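A minimal /etc/inadyn.conf for Cloudflare looks roughly like this (domain names are placeholders, and the password field takes an API token with DNS edit permission for the zone, not your account password):

# /etc/inadyn.conf -- check for IP changes every 5 minutes
period = 300

provider cloudflare.com {
    username = yourdomain.tld             # the zone name
    password = <Cloudflare API token>     # token with Zone.DNS edit permission
    hostname = service.yourdomain.tld     # the record to keep updated
    ttl      = 1                          # 1 = automatic
    proxied  = false
}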
I actually set up SES for my Lemmy instance. I was evaluating SendGrid, but less than 24 hours after signing up they closed my account with zero explanation so...yeah lol.
I was sandboxed in SES initially, but I created a support ticket asking for production access and I was good to go. No issues with SES thus far.