Unable To Use Google Gemma 7B Hugging Face Model

Question:
I am unable to use the Google Gemma 7B Huggingface Model. This is the error I get:

2024-03-08 03:34:36.545587: I external/local_tsl/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2024-03-08 03:34:36.551405: I external/local_tsl/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2024-03-08 03:34:36.622989: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-03-08 03:34:37.960448: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Traceback (most recent call last):
  File "/home/runner/[REDACTED]/main.py", line 13, in <module>
    classifier = pipeline('text-classification', model='google/gemma-7b', token=os.environ['API_KEY'])
  File "/home/runner/[REDACTED]/.pythonlibs/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 905, in pipeline
    framework, model = infer_framework_load_model(
  File "/home/runner/[REDACTED]/.pythonlibs/lib/python3.10/site-packages/transformers/pipelines/base.py", line 292, in infer_framework_load_model
    raise ValueError(
ValueError: Could not load model google/gemma-7b with any of the following classes: (<class 'transformers.models.auto.modeling_tf_auto.TFAutoModelForSequenceClassification'>,). See the original errors:

while loading with TFAutoModelForSequenceClassification, an error is thrown:
Traceback (most recent call last):
  File "/home/runner/[REDACTED]/.pythonlibs/lib/python3.10/site-packages/transformers/pipelines/base.py", line 279, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
  File "/home/runner/[REDACTED]/.pythonlibs/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
    raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.models.gemma.configuration_gemma.GemmaConfig'> for this kind of AutoModel: TFAutoModelForSequenceClassification.
Model type should be one of AlbertConfig, BartConfig, BertConfig, CamembertConfig, ConvBertConfig, CTRLConfig, DebertaConfig, DebertaV2Config, DistilBertConfig, ElectraConfig, EsmConfig, FlaubertConfig, FunnelConfig, GPT2Config, GPT2Config, GPTJConfig, LayoutLMConfig, LayoutLMv3Config, LongformerConfig, MobileBertConfig, MPNetConfig, OpenAIGPTConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoFormerConfig, TapasConfig, TransfoXLConfig, XLMConfig, XLMRobertaConfig, XLNetConfig.

Perhaps this GitHub issue?

Seems like you might need PyTorch (CPU)?
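If that's the case, here's a rough, untested sketch of what the PyTorch route could look like. It assumes the CPU build of torch is installed (e.g. `pip install torch --index-url https://download.pytorch.org/whl/cpu`) and that `API_KEY` holds a Hugging Face token with access to the gated model. Since transformers has no TensorFlow implementation of Gemma and it's a causal language model rather than a classifier, `text-generation` with the PyTorch backend is probably the combination to try instead of `text-classification`:

```python
import os
from transformers import pipeline

# Sketch only: force the PyTorch backend and use a task Gemma actually
# supports. Assumes torch (CPU build is fine) is already installed and that
# API_KEY is a Hugging Face token with access to the gated repo.
generator = pipeline(
    "text-generation",
    model="google/gemma-7b",
    framework="pt",              # don't fall back to the TensorFlow classes
    token=os.environ["API_KEY"],
)
print(generator("Hello, my name is", max_new_tokens=20)[0]["generated_text"])
```

Keep in mind the 7B checkpoint is tens of GiB of weights, so expect heavy disk and RAM usage on a CPU-only Repl.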

Attempting now (wow, this repl takes up 26.5 GiB of my storage :open_mouth:)

Status: Had to delete it

@Firepup650 Is there a way to use it without installing it?


Generally, installing it through Nix packages instead of Python packages would be the way to go, but the Nix repository doesn't have every package, and it probably doesn't have this one.
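For what it's worth, a hypothetical `replit.nix` along those lines might look like the sketch below. The package names are assumptions, and as noted the channel may not ship them at all, or may ship versions too old to know about the Gemma architecture:

```nix
# Hypothetical sketch only -- package names and availability are assumptions.
{ pkgs }: {
  deps = [
    (pkgs.python310.withPackages (ps: [
      ps.torch         # assumed to be in the channel; may be missing or outdated
      ps.transformers  # assumed; may predate Gemma support
    ]))
  ];
}
```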