Replit-ai Modelfarm with text-bison seems to lack randomness

Problem description:

Hello,
I’m trying to use Replit-ai Modelfarm with text-bison, and for the exact same question I always get the same answer.
I understand that a neural network can return the same result for a given question, but these models all have a randomness setting that can be adjusted.
In the documentation there is a topK parameter which seems to adjust the randomness of the results, but I still get the same result after setting it to any value.

Is this normal? Is there a cache somewhere that keeps the result of a question and returns the same one for a certain time? Or is this parameter not working?

Expected behavior:

There should be some randomness for the same query, based at least on the temperature and topK parameters.

Actual behavior:

Always the same result.

Steps to reproduce:

from replit.ai.modelfarm import CompletionModel
and make a query with the text-bison model.
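Something like this minimal sketch (the exact complete() keyword names and response shape are my assumptions, so they may differ slightly from the real replit-ai API):

```python
from replit.ai.modelfarm import CompletionModel

# Assumptions: complete() accepts sampling parameters named temperature and
# top_k, and the reply text is at responses[0].choices[0].content. These
# names may differ in the actual library.
model = CompletionModel("text-bison")

prompt = "Write a one-line greeting for a new user."

# Ask the same question several times. With temperature > 0 and a large
# top_k the answers should vary, but I get exactly the same text every run.
for i in range(5):
    response = model.complete([prompt], temperature=0.8, top_k=40)
    print(i, response.responses[0].choices[0].content)
```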

Bug appears at this link:

https://replit.com/@fvillemin/testaicache


I think it’s just a Google AI that Replit uses, so it could be a Google issue. I do agree Bison is not very creative.

Yes, it may be the normal behaviour of the Google API, but as I don’t have an account there, I don’t know how to test it.
But since there is a randomness parameter, I suspect something may not be implemented correctly somewhere … I would be surprised to see such strange behaviour from Google.


I have just tested the Chat version and it works fine: it gives different results every time, depending on the topK parameter (1 = always the same answer, 40 = really random).
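For reference, this is roughly the test I ran (the ChatModel/ChatSession/ChatMessage names and the top_k keyword are my assumptions from the docs and may not match the library exactly):

```python
from replit.ai.modelfarm import ChatModel, ChatSession, ChatMessage

# Assumptions: chat() takes a list of ChatSession objects plus sampling
# parameters, and the reply is at responses[0].candidates[0].message.content.
model = ChatModel("chat-bison")

session = ChatSession(
    context="You are a friendly assistant.",
    examples=[],
    messages=[ChatMessage(author="USER", content="Suggest a name for a pet robot.")],
)

# top_k=1 returns the same answer every time; top_k=40 gives varied answers.
for top_k in (1, 40):
    for i in range(3):
        response = model.chat([session], temperature=0.8, top_k=top_k)
        print(top_k, i, response.responses[0].candidates[0].message.content)
```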
