Uvicorn in VM Hangs Indefinitely

Problem description:
When deploying my uvicorn/FastAPI app to a Reserved VM (0.25 vCPU / 1 GB RAM), the deployment succeeds, but all my endpoints time out. When I run the repl with the “Run” button, everything works as expected, as it does locally on my system. I unfortunately cannot share the repl, since it is a private API endpoint used by my company.
There is nothing in the logs besides the requests to “/” that Replit automatically sends at startup. Both of those requests went through fine, with no errors.

Expected behavior:
For the repl not to hang.

Actual behavior:
Endpoints unreachable due to timeout.

Steps to reproduce:
Make a basic FastAPI/uvicorn application, try to deploy on lowest VM tier.

Bug appears at this link:
N/A

Browser: Chrome
OS: Win11
Device (Android, iOS, NA leave blank):
Desktop app version (Avatar menu->“Version”) or NA:
Plan (Free, Replit Core): Replit Core

Hi @flancast90 , welcome to the forums!
If your repl is Public, people can see it anyway. Try storing the private data in Secrets.

2 Likes

The repl isn’t public, as the logic and endpoints are sensitive.

Could you by chance make a minimal Repl that can reproduce this? If not, I’ve reported something similar:

1 Like

I’ve been able to determine the root cause: the application takes too long to start, so Replit kills the process and reports an error along the lines of “no open port.” I know this is the case because when I remove the blocking code (loading a large ML model), it deploys fine. I’ve experimented with warmup requests and such, but these are all a little buggy, and the deployment only sometimes works. Is there perhaps a variable I can change to increase how long Replit waits for a port to open?
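One workaround worth trying (this is a general pattern, not an official Replit setting): bind the port immediately and load the model in a background thread, so the deployment health check sees an open port right away while the heavy load finishes. A minimal sketch of the pattern with hypothetical names; in a real FastAPI app, `start_background_load` would be called from a startup handler:

```python
import threading
import time

models = {}                       # filled in by the background thread
models_ready = threading.Event()  # flipped once loading completes

def load_models():
    # Stand-in for the slow ML model construction
    time.sleep(0.1)
    models["demo"] = "loaded"
    models_ready.set()

def start_background_load():
    # In a FastAPI app, call this from a startup handler so uvicorn
    # binds the port before the heavy load begins.
    threading.Thread(target=load_models, daemon=True).start()

def handle_predict():
    # Endpoints can answer immediately while the model warms up
    if not models_ready.is_set():
        return {"status": "warming up"}
    return {"status": "ready"}

start_background_load()
print(handle_predict())  # may report "warming up" while the thread runs
models_ready.wait(timeout=5)
print(handle_predict())  # {'status': 'ready'}
```

The trade-off is that early requests must handle the “model not ready yet” case, as `handle_predict` does above.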

1 Like

Try moving the code that loads the ML model into a separate file. Then, set the build command (not run) to run that file.

I’ll try this right now, thanks for the help! I’ll report back in a few mins whether it worked or not.

1 Like

I get the same “hostingpid1: an open port was not detected” error as before. Here’s my current code; correct me if I’m doing something wrong.

build.py

from models.unified import BigMlModel
from structures import config

models = {}
for m in config["models"]:
    models[m] = BigMlModel(m)  # pass the model name, not the `type` builtin

api.py

from build import models
# some logic using models here...

My build command is python build.py, and my run command is python api.py. Does this approach seem correct to you?
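For what it’s worth, one thing that may explain why this setup still fails: the build command runs in a separate process, so any objects that build.py constructs in memory are gone by the time api.py starts, and `from build import models` re-executes the slow loading at run time anyway. A possible workaround (a sketch under that assumption, with hypothetical stand-in data; real ML models are not always picklable) is to serialize the built objects to disk during the build step and only load the file at run time:

```python
import pickle

def build_step(path="models.pkl"):
    # Run by the build command: construct the heavy objects once
    # (stand-in for BigMlModel loading) and persist them to disk.
    models = {"model_a": [1, 2, 3]}
    with open(path, "wb") as f:
        pickle.dump(models, f)

def run_step(path="models.pkl"):
    # Run by api.py at startup: load the prebuilt artifact instead
    # of rebuilding, so the port can open quickly.
    with open(path, "rb") as f:
        return pickle.load(f)

build_step()
models = run_step()
print(models)  # {'model_a': [1, 2, 3]}
```

If the model objects can’t be pickled, the same idea applies with whatever save/load mechanism the ML library provides.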

Strange.

Yes, I was quite sure that would work.

Maybe someone with more knowledge on this can help you out, because I don’t know what to do now. Sorry I wasn’t of more help. :woman_shrugging:

Timeouts likely due to:

  • Limited VM resources: Upgrade CPU/RAM or optimize code.
  • Strict timeouts: Increase timeout values or find bottlenecks.
  • Network issues: Check connectivity and firewall rules.
  • Uvicorn workers: Adjust worker count based on load.
  • App errors: Review logs and debug any issues.
  • Dependencies: Ensure compatibility and update if needed.
  • Resource contention: Monitor usage and isolate if necessary.

Focus on resource usage and configuration first, then test and monitor.
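On the worker-count point above: each extra uvicorn worker is a separate process that loads its own copy of the model, which matters a lot on a 0.25 vCPU / 1 GB VM. A single explicit worker (module and port here are illustrative, not from the original repl) keeps memory use down:

```shell
# One worker keeps memory use down on a small VM; each additional
# worker process would load its own copy of the ML model.
uvicorn api:app --host 0.0.0.0 --port 8080 --workers 1
```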