Question about CPU on Repls?

I have a question, probably for Repl experts.

What are the CPU power and memory of a Repl that is not deployed and that someone accesses through its web link? When I develop on a Pro account, I have 4 vCPUs and 4 GB of memory, but when I’m not on the Repl and someone accesses it through the link, or when the Repl makes background calculations, what are the CPU and memory?

Let me explain why I’m asking: I have tried deploying one of my Repls that has a list of around 15 RSS feeds (yes!) and that gets the latest articles to show them on a Flask web site.

On the smallest Autoscale server it was strangely slow to get all feeds.

I have stopped the deployed version (because of the lack of a persistent filesystem, no access to files, and this slow process) and used the “normal” Repl system with my Flask server running when I access the web link.

And it was really really fast to get all the feeds.
So I would like to know if there is a network difference on Deployments that would cause requests to be slow, or if the smallest Autoscale server is really less powerful than a basic Repl running through web access, while we still have this possibility :slight_smile:



@lincoln-replit @replitted deployment experts, can you help me? :slight_smile:
I’d be happy to have a small guide on how to size my deployment to get the same results as in the dev version…

What is the CPU/memory sizing of a normal Repl that wakes up when someone uses the web link? Is it 4 vCPUs and 4 GB, just like in my Pro dev environment, or is it more than that?
I want to mimic that with a Deployment, because I get very bad results when deployed.


Hey, sorry for the late reply. I’d be happy to help; feel free to sign up for any available slot on my Calendly: Calendly - Replit Deployments Team

To answer your questions directly in the meantime:

What are the CPU power and memory of a Repl that is not deployed and that someone accesses through its web link?

Accessing the Repl through its web link gives you the same resources as when you open that Repl in the workspace. You can see the exact numbers in the bottom-left corner of the workspace – it depends on your plan, whether or not you’re using Boosts, etc. So in your case the site should have 4 GB of memory and 4 CPUs.

On the smallest Autoscale server it was strangely slow to get all feeds.

The smallest Autoscale deployment has 1 GB memory and 1 CPU by default, so it is a bit less than the Pro workspace, but you can add more memory and CPU to your deployment as necessary.
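
As an aside, wall-clock time for ~15 feeds is often dominated by network latency rather than CPU, so fetching them concurrently can help at any machine size. A minimal sketch using Python’s stdlib `concurrent.futures` (the URLs and the body of `fetch_one` are placeholders for your real feed fetch):

```python
# Fetch many RSS feeds concurrently so total wall time is close to the
# slowest single feed rather than the sum of all of them.
from concurrent.futures import ThreadPoolExecutor

# placeholder URLs -- substitute your real feed list
FEED_URLS = [f"https://example.com/feed{i}.rss" for i in range(15)]

def fetch_one(url):
    # placeholder: replace with feedparser.parse(url) or an HTTP GET
    return (url, f"entries for {url}")

def fetch_all(urls, max_workers=10):
    # one thread per in-flight request, capped at max_workers
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(fetch_one, urls))
```

With real network I/O, ten workers typically cut the total fetch time for 15 feeds dramatically compared to fetching them one by one.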

When you start up the application in the workspace, do you see the CPU and memory bars in the resource monitor fill all the way up? If so, you can probably conclude that your application is CPU- and memory-intensive, and you might need to upgrade the size of your Deployment to achieve the same speeds.
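
If you want numbers rather than eyeballing the bars, you can time the refresh and track peak memory from inside the app itself. A rough sketch using the stdlib `tracemalloc` module (`refresh_feeds` is a stand-in for your real feed-fetching code):

```python
# Measure how long the refresh takes and how much memory it peaks at,
# to help decide what deployment size you actually need.
import time
import tracemalloc

def refresh_feeds():
    # stand-in for your real feed-fetching code; allocates ~1 MB
    return [bytearray(1024) for _ in range(1000)]

tracemalloc.start()
t0 = time.perf_counter()
data = refresh_feeds()
elapsed = time.perf_counter() - t0
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"refresh took {elapsed:.3f}s, peak memory {peak / 1e6:.1f} MB")
```

Note that `tracemalloc` only counts Python-level allocations, so the real process footprint will be somewhat higher, but it is a useful lower bound for sizing.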


Hello @lincoln-replit ,
thanks for your reply.
In fact I have run many tests and wanted to show you what was going wrong, but maybe I’m not using Autoscale the way I should.

My wish is to deploy my projects on Autoscale. I don’t want a Reserved VM because I feel it’s too expensive for very small projects, and I like the idea of paying only for the compute cycles we use.
BUT some of my projects use a BackgroundScheduler for background tasks, like refreshing data or calculating KPIs, and Autoscale doesn’t like that.

I have run a test: I have a web site that gets RSS feeds in the background and provides a Flask app to view them. The background task refreshes them every x minutes and puts the result in a Postgres database, to work around the lack of a persistent filesystem. And on Autoscale, it doesn’t work… it fails in a strange way.
When I trigger a refresh at startup, the full refresh takes 30 seconds. But after 5 minutes, the same refresh takes 10 minutes… I have tried an Autoscale server with 8 vCPUs and 32 GB of RAM and it was exactly the same… I presume there is something that prevents background tasks from running on Autoscale and that slows them badly.
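
For reference, the shape of my setup is roughly this (a simplified stdlib sketch of the pattern; the real app uses APScheduler’s BackgroundScheduler, feedparser, and Postgres, which are stood in for here by `threading.Timer`, a placeholder fetch, and an in-memory dict):

```python
# Simplified shape of the app: a background job refreshes all feeds on
# a fixed interval and writes into shared storage; the web handler only
# reads. threading.Timer stands in for APScheduler's interval trigger,
# and the dict stands in for the Postgres table.
import threading

_store = {}
_store_lock = threading.Lock()

def fetch_feed(url):
    # placeholder for the real RSS fetch
    return f"articles from {url}"

def refresh_all(feed_urls):
    # the background job: fetch every feed, persist the results
    results = {url: fetch_feed(url) for url in feed_urls}
    with _store_lock:
        _store.update(results)

def start_scheduler(feed_urls, interval_seconds=300):
    # run once now, then re-arm after each completed run
    def job():
        refresh_all(feed_urls)
        t = threading.Timer(interval_seconds, job)
        t.daemon = True
        t.start()
    job()

def latest_articles():
    # what the Flask route returns to the browser
    with _store_lock:
        return dict(_store)
```

The point is that the refresh runs outside any HTTP request, which is exactly the part that misbehaves on Autoscale.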

Then I read the documentation about Autoscale, where it is explained that it should not be used for background tasks… I understand that, but I would love to have a server where I pay only for what I use, yet that can stay up most of the time running background tasks.

I have since tried my process on the smallest Reserved VM and it works perfectly… but I must pay for the Reserved VM while sitting on 6,000,000 compute cycles I would be happy to use… too bad. I hope you’ll find a way to let people run small background tasks on Autoscale or another type of server…
Or maybe you could include one Reserved VM with every Pro plan… which would be a good replacement for the good old Always On.

If you think there could be an issue somewhere and that I’m wrong, I’d be happy to discuss it further.


Thanks for the feedback! We are planning to create, at some point, a background-tasks deployment type that can run at periodic intervals and only charges you for what you use. I believe that will solve your problem.

That would be incredible!
Now that I pay for a Reserved VM, I can schedule things and launch Autoscale deployments periodically, but having a real “background” task we only pay for when it runs would be great.


Is there a public roadmap I can follow? Very interested in background tasks/cron.


We don’t maintain a public roadmap, but you can always submit and vote on feature requests.