ValueError: The `response.text` quick accessor only works for simple (single-`Part`) text responses. This response is not simple text. Use the `result.parts` accessor or the full `result.candidates[index].content.parts` lookup instead.
#170
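For reference, a minimal sketch of the fallback the error message itself recommends; the API key and model name below are placeholders, not taken from this thread:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-pro")  # assumed model name

response = model.generate_content("Summarize the theory of relativity in one sentence.")

# response.text raises this ValueError whenever the response is not a single
# text Part (blocked, empty, or multi-part); fall back to the accessors the
# message itself suggests.
try:
    print(response.text)
except ValueError:
    for candidate in response.candidates:
        print([part.text for part in candidate.content.parts])
```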
Comments
Me too, I have the same error, especially with Gemini Vision.
You could try to delete the generation_config in the model if you use it.
It works for me. Thanks @ydm20231608
I also encountered this problem. Where is the generation_config file that needs to be deleted? @HienBM
Hi @Ki-Zhang, when you set up your model, the generation_config is in it. Try ignoring it, like in the sketch below.
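A minimal sketch of that suggestion; the model name and the example config values are assumptions:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

# Variant that passes a generation_config (the one some commenters found
# correlated with the error):
# model = genai.GenerativeModel(
#     "gemini-pro",
#     generation_config={"temperature": 0, "max_output_tokens": 100},
# )

# Variant that simply omits generation_config, as suggested above:
model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Describe this dataset in one paragraph.")
print(response.text)
```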
Thank you for your answer, @HienBM.
I don't know how to solve this problem, but it does not occur when I use other image examples as input to the model.
@Ki-Zhang As of January 2024, the entire list of Harm Categories can be found here. The implementation for safety_settings:

safety_settings = [
    {
        "category": "HARM_CATEGORY_DANGEROUS",
        "threshold": "BLOCK_NONE",
    },
    {
        "category": "HARM_CATEGORY_HARASSMENT",
        "threshold": "BLOCK_NONE",
    },
    {
        "category": "HARM_CATEGORY_HATE_SPEECH",
        "threshold": "BLOCK_NONE",
    },
    {
        "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "threshold": "BLOCK_NONE",
    },
    {
        "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
        "threshold": "BLOCK_NONE",
    },
]

The possible threshold values for each category can be found here. These settings can be applied as:

# For image model
image_model.generate_content([your_image, prompt], safety_settings=safety_settings)

# For text model
text_model.generate_content(prompt, safety_settings=safety_settings)

Additionally, make sure the image does not contain content related to any of these categories.
So this is caused by content being blocked on the server side? If so, the thrown exception text is terrible.
Thanks for providing this! However, the safety settings do not work for me; instead, changing the temperature from 0 to 0.7 works. The generated content may have been blocked, since I found my input question is about Black people (from the MMLU dataset).
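A sketch of that temperature change, passing generation_config as a dict at call time; the model name and prompt are placeholders:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-pro")  # assumed model name

# Raising temperature from 0 to 0.7, as described above.
response = model.generate_content(
    "Summarize the following passage ...",
    generation_config={"temperature": 0.7},
)
print(response.text)
```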
Thanks for the information. However, neither the safety settings nor the temperature change works for me. The contents are in the biomedical domain, and all other cases are generated successfully except for one. I don't know why it fails to generate for that one.
Try setting up [...] instead of [...]. It worked for me.
This is probably happening because you are getting a finish_reason of RECITATION for your chosen candidate. So you may need to simply choose another candidate, or run again for a new result that doesn't infringe on copyright.
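A sketch of checking the finish reason before touching `response.text`; the model name and prompt are placeholders:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-pro")  # assumed model name

response = model.generate_content("Write a short poem about rivers.")

for candidate in response.candidates:
    # A candidate stopped for RECITATION (or SAFETY) usually carries no text
    # Part, which is exactly what makes response.text raise the ValueError.
    print(candidate.finish_reason)
    if candidate.content.parts:
        print("".join(part.text for part in candidate.content.parts))
```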
"You could try to delete the generation_config in the model if you use that." This also works for me, but what might be the reason for that?
I have the same issue and I wonder what the reason is.
This problem happened when `max_output_tokens` was too small for the response, so there is no need to delete generation_config. That's my experience.
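A sketch of keeping generation_config but giving the response more room; the limit and model name shown are assumed values:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

# If max_output_tokens is too small, the model can stop before emitting any
# text Part and response.text then raises; a larger limit avoids truncation
# without dropping generation_config entirely.
model = genai.GenerativeModel(
    "gemini-pro",  # assumed model name
    generation_config={"max_output_tokens": 2048, "temperature": 0.7},
)
print(model.generate_content("List three uses of binary search.").text)
```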
I think the main reason is that the model sometimes doesn't return any text, based on the explanation of @MarkDaoust in [...]. I still get this error with my code even when I delete generation_config. But when I set up [...] instead of [...], the error does not appear anymore.
@HienBM, thank you for this information. It resolved my errors.
> ValueError: The response.text quick accessor only works for simple (single-Part) text responses. This response is not simple text. Use the result.parts accessor or the full result.candidates[index].content.parts lookup instead.

To fix the error, check this against your code; it's working for me 😎. And when working with an image: [...]
Thank you very much.
Maybe it's because multiple results are generated.
My current diagnosis of this issue is that [...]. So a lower [...].
The problem occurs not only with the image but also with the prompt. I tried the same image with a different prompt and it works, using a helper along the lines of `def get_gemini_response(input, image): ...` (a sketch follows below). I hope it works for you. Thank you!
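A sketch of such a helper; the model name, parameter names, and the fallback logic are assumptions, not the commenter's original code:

```python
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder
vision_model = genai.GenerativeModel("gemini-pro-vision")  # assumed model name

def get_gemini_response(prompt, image):
    """Send a text prompt plus a PIL image and return whatever text came back."""
    response = vision_model.generate_content([prompt, image])
    try:
        return response.text
    except ValueError:
        # Blocked or empty candidates have no single text Part, so collect
        # whatever parts exist instead of letting .text raise.
        return " ".join(
            part.text
            for candidate in response.candidates
            for part in candidate.content.parts
        )

print(get_gemini_response("Describe this picture.", PIL.Image.open("photo.jpg")))
```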
It worked for me.
Nothing works for me; my content is medical and it considers it unsafe!
I have not tried it yet, but I have heard that it might work. Try using some prompting techniques to extract the response, like giving the context before the question, for example: `Instruction = f"You are the backend of a medical chatbot, be responsible and respond in formal tone. The users of this medical chatbot are mature and intelligent students and doctors, so respond according to the query {query}"`
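A sketch wrapping the query that way; the model name and sample query are placeholders:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-pro")  # assumed model name

query = "What are the common contraindications of ibuprofen?"  # example query
instruction = (
    "You are the backend of a medical chatbot, be responsible and respond in a "
    "formal tone. The users of this medical chatbot are mature and intelligent "
    f"students and doctors, so respond according to the query: {query}"
)
print(model.generate_content(instruction).text)
```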
I am facing this issue; how do I resolve it? Invalid operation: The [...]
@Vital1162 This has been fixed; it now returns the partial text.
Yes, this is what the error message is trying to tell you: something went wrong and it couldn't get a simple text result. The one downside to this approach is that empty responses become an empty string, when maybe you want to check what went wrong and/or re-run the request. This will update the error messages to output more information: #527. The right fix depends on what is going wrong.
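A sketch of checking what went wrong before re-running, using the feedback fields the library exposes; the model name and prompt are placeholders:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-pro")  # assumed model name

response = model.generate_content("Explain why a response might not be simple text.")

if not response.candidates or not response.candidates[0].content.parts:
    # Either the prompt itself was blocked ...
    print(response.prompt_feedback)
    # ... or the chosen candidate stopped without text (SAFETY, RECITATION, MAX_TOKENS, ...).
    for candidate in response.candidates:
        print(candidate.finish_reason, candidate.safety_ratings)
else:
    print(response.text)
```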
* Expand error descriptions for #170 (Change-Id: I8599dafc9e5084a43f2ce482644d0e9e16b61061)
* Fix test failure caused by upgrade to protobuf>=5.0.0 (Change-Id: I16a3c1964284d16efb48073662901454d4e4a6a1)
* Format (Change-Id: Iee130a7e58f2cfbc1001808ac892f119338626eb)
I'm facing this issue today (it did not occur with the same code in the last few days). Following the previous posts here, I have disabled [...], but

response = chat_session.send_message(
    [prompt],
    stream=True,
)
# for chunk in response:
#     print(chunk.text, end='')
for candidate in response.candidates:
    print([part.text for part in candidate.content.parts])

leads to the following error (running on Colab): [...]

Calling [...]
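For the streaming case above, a sketch that resolves the stream before reading the aggregated candidates; the model name, chat session setup, and prompt are assumptions:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
chat_session = genai.GenerativeModel("gemini-pro").start_chat()  # assumed model name

prompt = "Summarize our conversation so far."  # example prompt
response = chat_session.send_message([prompt], stream=True)

# A streamed response has to be iterated or resolve()d before the aggregated
# candidates are complete; chunk.text can still raise when a chunk carries no
# text Part, so read the parts instead.
response.resolve()
for candidate in response.candidates:
    print([part.text for part in candidate.content.parts])
```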
Description of the bug:
Can someone help me check this error? It still ran successfully yesterday with the same code.
File ~/cluster-env/trident_env/lib/python3.10/site-packages/pandas/core/series.py:4630, in Series.apply(self, func, convert_dtype, args, **kwargs)
4520 def apply(
4521 self,
4522 func: AggFuncType,
(...)
4525 **kwargs,
4526 ) -> DataFrame | Series:
4527 """
4528 Invoke function on values of Series.
4529
(...)
4628 dtype: float64
4629 """
-> 4630 return SeriesApply(self, func, convert_dtype, args, kwargs).apply()
File ~/cluster-env/trident_env/lib/python3.10/site-packages/pandas/core/apply.py:1025, in SeriesApply.apply(self)
1022 return self.apply_str()
1024 # self.f is Callable
-> 1025 return self.apply_standard()
File ~/cluster-env/trident_env/lib/python3.10/site-packages/pandas/core/apply.py:1076, in SeriesApply.apply_standard(self)
1074 else:
1075 values = obj.astype(object)._values
-> 1076 mapped = lib.map_infer(
1077 values,
1078 f,
1079 convert=self.convert_dtype,
1080 )
1082 if len(mapped) and isinstance(mapped[0], ABCSeries):
1083 # GH#43986 Need to do list(mapped) in order to get treated as nested
1084 # See also GH#25959 regarding EA support
1085 return obj._constructor_expanddim(list(mapped), index=obj.index)
File ~/cluster-env/trident_env/lib/python3.10/site-packages/pandas/_libs/lib.pyx:2834, in pandas._libs.lib.map_infer()
Cell In[116], line 82, in extract_absa_with_few_shot_gemini(text)
80 response.resolve()
81 time.sleep(1)
---> 82 return list_of_dict_to_string(string_to_list_dict(response.text.lower()))
File ~/cluster-env/trident_env/lib/python3.10/site-packages/google/generativeai/types/generation_types.py:328, in BaseGenerateContentResponse.text(self)
326 parts = self.parts
327 if len(parts) != 1 or "text" not in parts[0]:
--> 328 raise ValueError(
329 "The `response.text` quick accessor only works for "
330 "simple (single-`Part`) text responses. This response is not simple text."
331 "Use the `result.parts` accessor or the full "
332 "`result.candidates[index].content.parts` lookup "
333 "instead."
334 )
335 return parts[0].text
ValueError: The `response.text` quick accessor only works for simple (single-`Part`) text responses. This response is not simple text. Use the `result.parts` accessor or the full `result.candidates[index].content.parts` lookup instead.

Actual vs expected behavior:
No response
Any other information you'd like to share?
No response