
Bug with AllTalk creating Errors in console #499

Closed

Tum1370 opened this issue Jan 25, 2025 · 17 comments

Comments

@Tum1370

Tum1370 commented Jan 25, 2025

OK, after trying to figure out why I was receiving this bug, I started completely fresh and did a fresh install of oobabooga v2.3.

I then went on to test before adding any extensions.
Everything was fine.
I then added LLM_web_search, and once again tested the LLM and chat UI a lot.
Everything was working fine.
I then added the AllTalk v2 extension. I already had the standalone installed, so I copied over the extension folder to my newly installed oobabooga.
I enabled the extension in Session and set up the IP.

I then started chatting to my LLM. The voice was working fine.
Then, after a few web searches that worked fine, I finally got the error that's been causing me such a problem.

So I can definitely say that the AllTalk extension is causing this issue.
Hopefully you can reproduce and fix it.
Here is the error in the console that I get:


Traceback (most recent call last):
  File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\queueing.py", line 580, in process_events
    response = await route_utils.call_process_api(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\route_utils.py", line 276, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1928, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1526, in call_function
    prediction = await utils.async_iteration(iterator)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 657, in async_iteration
    return await iterator.__anext__()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 650, in __anext__
    return await anyio.to_thread.run_sync(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 2461, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 962, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 633, in run_sync_iterator_async
    return next(iterator)
           ^^^^^^^^^^^^^^
  File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 816, in gen_wrapper
    response = next(iterator)
               ^^^^^^^^^^^^^^
  File "N:\AI_Tools\oobabooga\text-generation-webui-main\modules\chat.py", line 444, in generate_chat_reply_wrapper
    yield chat_html_wrapper(history, state['name1'], state['name2'], state['mode'], state['chat_style'], state['character_menu']), history
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "N:\AI_Tools\oobabooga\text-generation-webui-main\modules\html_generator.py", line 434, in chat_html_wrapper
    return generate_cai_chat_html(history, name1, name2, style, character, reset_cache)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "N:\AI_Tools\oobabooga\text-generation-webui-main\modules\html_generator.py", line 362, in generate_cai_chat_html
    converted_visible = [convert_to_markdown_wrapped(entry, use_cache=i != len(history['visible']) - 1) for entry in row_visible]
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "N:\AI_Tools\oobabooga\text-generation-webui-main\modules\html_generator.py", line 362, in <listcomp>
    converted_visible = [convert_to_markdown_wrapped(entry, use_cache=i != len(history['visible']) - 1) for entry in row_visible]
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "N:\AI_Tools\oobabooga\text-generation-webui-main\modules\html_generator.py", line 266, in convert_to_markdown_wrapped
    return convert_to_markdown.__wrapped__(string)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "N:\AI_Tools\oobabooga\text-generation-webui-main\modules\html_generator.py", line 161, in convert_to_markdown
    string = re.sub(pattern, replacement, string, flags=re.MULTILINE)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\re\__init__.py", line 185, in sub
    return _compile(pattern, flags).sub(repl, string, count)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: expected string or bytes-like object, got 'NoneType'

@erew123
Owner

erew123 commented Jan 25, 2025

The error is caused by an empty value/message being passed into the chat history (previous messages). Once you get a null message in the chat history, future messages within that chat will probably have the same issue.
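A minimal reproduction of that failure mode (illustrative only, not AllTalk or TGWUI code): once any entry in the history is None, the regex substitution raises the exact TypeError from the traceback.

import re

# One None entry anywhere in the history is enough to break rendering
history = {'visible': [["Hello", "Hi there!"], ["Search the web", None]]}

for row in history['visible']:
    for entry in row:
        # html_generator.py eventually runs regex substitutions over each entry
        re.sub(r'\n', '<br>', entry, flags=re.MULTILINE)
        # raises: TypeError: expected string or bytes-like object, got 'NoneType'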

As far as AllTalk goes, the only thing it would/could modify in the chat history is text that has an image in it. Do you know if your LLM web search returns images? The lines of code are here: https://github.com/erew123/alltalk_tts/blob/alltalkbeta/system/TGWUI_Extension/script.py#L721C1-L725

Any images are temporarily removed, as otherwise you would send the contents of an image to be generated as TTS, and then they are re-added here: https://github.com/erew123/alltalk_tts/blob/alltalkbeta/system/TGWUI_Extension/script.py#L797-L798
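The remove/re-add pattern looks roughly like this. A sketch only: the regex and function names below are assumptions, the real code is at the links above.

import re

# Assumed pattern: match HTML <img> tags or markdown image syntax
IMG_PATTERN = re.compile(r'<img[^>]*>|!\[[^\]]*\]\([^)]*\)')

def strip_images(text):
    images = IMG_PATTERN.findall(text)   # remember the images
    cleaned = IMG_PATTERN.sub('', text)  # so their markup is not spoken as TTS
    return cleaned, images

def restore_images(text, images):
    # re-append the images after TTS generation so TGWUI still renders them
    return text + ''.join(images)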

I can't speak to what modifications/changes other people's code may make. What I might suggest is to change the load order of the extensions in settings.yaml so that AllTalk loads first (not sure if that works with their extension or not, as I don't know precisely what their extension is doing or how it's modifying the output).

From:

[Screenshot: settings.yaml extension load order, with alltalk_tts loading after the other extensions]

To:

[Screenshot: settings.yaml extension load order, with alltalk_tts loading first]
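For example, a hedged sketch of the reordering in settings.yaml (the default_extensions key is taken from TGWUI's settings template; your extension folder names may differ):

# settings.yaml: list alltalk_tts before any extension that also modifies output
default_extensions:
  - alltalk_tts
  - LLM_web_search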

@Tum1370
Author

Tum1370 commented Jan 25, 2025

No, the LLM_web_search extension does not show any images.

What I do know is that I installed fresh to check for errors. Checked thoroughly. Then installed the web search. Same again. Checked thoroughly.
No errors up to now.
Then installed AllTalk.
AllTalk loads first anyway.

Some searches were fine, then I suddenly started getting these errors.

I had to use this method of testing to eliminate the other extensions, which I haven't even tried yet, by testing AllTalk second.

@Tum1370
Author

Tum1370 commented Jan 25, 2025

Could you maybe add a quick tickbox to disable this removal of images, so I can test without that happening?

@erew123
Owner

erew123 commented Jan 25, 2025

I will add that none of the code you show above mentions AllTalk; it's just TGWUI generally complaining that the text is null somewhere earlier in the chat conversation.

So here is an update: 8f5bb21

Stage 1: This will check whether the input text that TGWUI (or a previous extension) sends to the AllTalk remote extension is Null/None/Empty. If it is, it will error at the command prompt/console. This would be something to do with TGWUI not sending text OR a previous extension in the load sequence stripping the text.

Stage 2: Next, when AllTalk goes to strip out any images, using the image pattern which would only find images, if the remaining "text" is Null/None/Empty, it will error out at that stage.

Stage 3: Finally, when the image (if it existed) is merged back in, if the remaining "text" is Null/None/Empty, it will error out at that stage. This is the last point at which AllTalk has any interaction with the text (or image) before it is handed back to TGWUI and potentially on to another extension.

So, to be clear: if it errors at Stage 1, it's TGWUI OR a previous extension in the load order. If it errors at Stage 2 or 3, it's something AllTalk is doing, though potentially something funky with the text being sent in. If it doesn't error at all, then it's potentially something after AllTalk.
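Schematically, the three stages sit in the extension's output path like this. A rough sketch with assumed names and signature (output_modifier is the standard TGWUI extension hook); the actual checks are in the updated script.py linked below.

import re

IMG_PATTERN = re.compile(r'<img[^>]*>|!\[[^\]]*\]\([^)]*\)')  # assumed image pattern

def output_modifier(string, state):
    # Stage 1: text handed to AllTalk by TGWUI (or a previous extension) is empty
    if not string:
        print("[AllTalk TTS] Stage 1: Null/None/Empty text received from TGWUI")
        return string

    images = IMG_PATTERN.findall(string)
    text = IMG_PATTERN.sub('', string)
    # Stage 2: nothing left once the images are stripped out
    if not text.strip():
        print("[AllTalk TTS] Stage 2: text is Null/None/Empty after image removal")
        return string

    # ... TTS generation would happen here ...

    result = text + ''.join(images)
    # Stage 3: final text about to be handed back to TGWUI is empty
    if not result:
        print("[AllTalk TTS] Stage 3: text is Null/None/Empty after re-merging images")
    return result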

So download/replace the script.py in your TGWUI text-generation-webui/extensions/alltalk_tts folder (NOT your main AllTalk script.py) from this link https://github.com/erew123/alltalk_tts/blob/alltalkbeta/system/TGWUI_Extension/script.py and you may get an idea of where the problem is (or isn't).

[Screenshot]

@Tum1370
Author

Tum1370 commented Jan 26, 2025

I just gave the latest script.py a try, but it didn't report any stage in the console. I just get the same error message.

@erew123
Owner

erew123 commented Jan 26, 2025

I've thought of one final edge case, which is something strange in the character name returning a "None" value for the filename, or perhaps spaces in the name (unlikely, but that covers all cases). You're welcome to update again: https://github.com/erew123/alltalk_tts/blob/alltalkbeta/system/TGWUI_Extension/script.py

Outside of that, it's either some other extension or something TGWUI itself is doing in its html_generator.py (best I can tell from looking at the error generated).

The TypeError suggests it received a None value when trying to apply a regex pattern:

TypeError: expected string or bytes-like object, got 'NoneType'

Since this happens in the markdown processing step rather than the image processing, it could be:

  • Special characters or formatting that breaks the markdown parser
  • Null characters or invalid Unicode
  • An edge case in the chat history structure where a message is None. Meaning that if one message in the previous chat messages has an issue, all future messages in that chat will give the error until a new chat is started OR the message causing the issue is removed.

As AllTalk sees none of those in its code processing, either in the text sent into it OR the text sent out of it, it will have to be something that code outside of AllTalk is seeing. You may be best finding out whether there is a way to get the raw text output that TGWUI is trying to process and having someone on the TGWUI forum look at it, or perhaps people with any other extensions you use.
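For reference, a simple guard on the TGWUI side would stop a single bad history entry from crashing every later render. A minimal sketch, not TGWUI's actual patch:

import re

def convert_to_markdown_safe(string):
    # Coerce a None history entry to an empty string before any regex work,
    # instead of letting re.sub() raise the TypeError seen above.
    if string is None:
        string = ''
    return re.sub(r'\r\n', '\n', string, flags=re.MULTILINE)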

@Tum1370
Author

Tum1370 commented Jan 26, 2025

Thanks again for really trying to solve this issue.
I have just tried the latest script.py you sent, but it didn't show anything when the error happened.

I have noticed this as well in my console window. Could this be something to do with the oobabooga UI and it producing two responses? Or could the model be causing this issue?


N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\llama_cpp_cuda\llama.py:1237: RuntimeWarning: Detected duplicate leading "<|begin_of_text|>" in prompt, this will likely reduce response quality, consider removing it...
warnings.warn(
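(For context: this warning means the prompt string already begins with a literal <|begin_of_text|> and a second one is being prepended. A hedged illustration, not llama.cpp or TGWUI code, of what the duplicate looks like and how one could strip it:)

BOS = "<|begin_of_text|>"

def dedupe_leading_bos(prompt: str) -> str:
    # Drop extra copies so exactly one BOS token string leads the prompt
    while prompt.startswith(BOS + BOS):
        prompt = prompt[len(BOS):]
    return prompt

print(dedupe_leading_bos(BOS + BOS + "Hello"))  # -> <|begin_of_text|>Hello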

@erew123
Owner

erew123 commented Jan 26, 2025

Could well be! I notice there was a fix the other day for "line breaks on Windows", which is obviously to do with the text output of the LLM: https://github.com/ggerganov/llama.cpp/commits/master/ (there is also a slightly earlier one for whitespace, which could affect the outputs of the LLM and the processing of them).

As the latest llama.cpp release came out 7 hours ago, you may want to run the TGWUI update wizard (which I assume will update llama.cpp) and see if that helps: https://github.com/ggerganov/llama.cpp/releases/tag/b4557

All I can say for sure is that I'm currently 99%+ sure it's not AllTalk doing something funky, but obviously you have something going on somewhere that TGWUI is struggling to deal with when it gets to markdown processing. I certainly am unable to recreate the issue here, but mind you, I'm not using llama.cpp (so that could be related, but isn't 100% certain).

@Tum1370
Author

Tum1370 commented Jan 26, 2025

I think I might have found my problem, before I try updating.
I don't change the Instruction Template on the Parameters page of oobabooga. It says there that the model usually selects the correct one.
I tried setting this to "Llama v3", and it seems to have stopped the error I mentioned about the duplicate leading text.

Maybe this was causing the problem all the time, and this particular model was not auto-selecting the correct template?

I will keep testing, but at the moment I have not had any errors, and I have asked quite a few questions now.

Thanks again for your help in trying to solve this issue.

@Tum1370
Author

Tum1370 commented Jan 26, 2025

This did not solve my issues. I still get the Convert to Markdown bug.
I'm running out of things to try now to fix this.

@Tum1370
Author

Tum1370 commented Jan 28, 2025

I have been testing the oobabooga extension called "coqui_tts".
I copied over the voice I created, and after testing I am not seeing this "Convert to Markdown" error.

I only see this error message when using AllTalk.

@erew123
Owner

erew123 commented Jan 28, 2025

Try this https://github.com/erew123/alltalk_tts/blob/alltalkbeta/system/TGWUI_Extension/script.py

I've given it an hour, gone through everything possible I can think of, added extra messages in warning situations, etc.

@Tum1370
Author

Tum1370 commented Jan 28, 2025

OK, thank you a lot for trying. I will download this now and try it.

Have you changed anything in it that might fix the problem?
Could the problem be model-based?

Strange how the Llama 3.2 model I tried with that other TTS didn't seem to create the same error.

EDIT: Could this error be something to do with the new chat UI and branches that were just added, or with deleting a chat creating some bug that causes the errors?

@erew123
Owner

erew123 commented Jan 30, 2025

@Tum1370 Updated code is here: 5576826

I've ensured there are no tuples and that only the first portion of the text will be used, which could resolve something an LLM model could send. I've ensured that the original data sent into AllTalk's remote TGWUI extension is 101% what is sent back out to TGWUI. I've ensured there are no possible Windows reserved file names (COM1, NUL, AUX, etc.) being used for the file name, which is unlikely, but I guess someone could name their character one of those and possibly cause issues. I've added extra diagnostic messages if any of those situations occur.
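For anyone curious, those guards look roughly like this. A sketch with assumed names; the actual code is in the commit above.

WINDOWS_RESERVED = {
    "CON", "PRN", "AUX", "NUL",
    *(f"COM{i}" for i in range(1, 10)),
    *(f"LPT{i}" for i in range(1, 10)),
}

def normalise_llm_text(text):
    # Some code paths can hand back a tuple; keep only the first (text) portion
    if isinstance(text, tuple):
        text = text[0]
    return text

def safe_output_name(character: str) -> str:
    # Windows refuses files named after legacy DOS devices, so a character
    # called "NUL" or "COM1" would break writing the TTS .wav output
    name = (character or "").strip().replace(" ", "_") or "TTSOUT"
    if name.upper() in WINDOWS_RESERVED:
        name = f"{name}_tts"
    return name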

There could be something else that occurs within TGWUI, but not insofar as AllTalk is concerned. AllTalk will only get the most recent chat/text message (and any images) sent to it, so that is all it has to process/deal with. Outside of that, though, TGWUI could have its own issues and errors.

If you are happy with the resolution please close the ticket.

Thanks

@Tum1370
Author

Tum1370 commented Jan 30, 2025

Thanks again for trying to solve this issue. I have just tested the latest "script" file. It seemed to be going well, but then I got the error message.
It seems to be a very frustrating bug.
I can click regenerate after this happens, and sometimes it works fine then, but other times it produces the same error.


"Traceback (most recent call last):
File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\queueing.py", line 580, in process_events
response = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\route_utils.py", line 276, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1928, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1526, in call_function
prediction = await utils.async_iteration(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 657, in async_iteration
return await iterator.anext()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 650, in anext
return await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio_backends_asyncio.py", line 2461, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio_backends_asyncio.py", line 962, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 633, in run_sync_iterator_async
return next(iterator)
^^^^^^^^^^^^^^
File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 816, in gen_wrapper
response = next(iterator)
^^^^^^^^^^^^^^
File "N:\AI_Tools\oobabooga\text-generation-webui-main\modules\chat.py", line 444, in generate_chat_reply_wrapper
yield chat_html_wrapper(history, state['name1'], state['name2'], state['mode'], state['chat_style'], state['character_menu']), history
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "N:\AI_Tools\oobabooga\text-generation-webui-main\modules\html_generator.py", line 434, in chat_html_wrapper
return generate_cai_chat_html(history, name1, name2, style, character, reset_cache)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "N:\AI_Tools\oobabooga\text-generation-webui-main\modules\html_generator.py", line 362, in generate_cai_chat_html
converted_visible = [convert_to_markdown_wrapped(entry, use_cache=i != len(history['visible']) - 1) for entry in row_visible]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "N:\AI_Tools\oobabooga\text-generation-webui-main\modules\html_generator.py", line 362, in
converted_visible = [convert_to_markdown_wrapped(entry, use_cache=i != len(history['visible']) - 1) for entry in row_visible]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "N:\AI_Tools\oobabooga\text-generation-webui-main\modules\html_generator.py", line 266, in convert_to_markdown_wrapped
return convert_to_markdown.wrapped(string)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "N:\AI_Tools\oobabooga\text-generation-webui-main\modules\html_generator.py", line 161, in convert_to_markdown
string = re.sub(pattern, replacement, string, flags=re.MULTILINE)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "N:\AI_Tools\oobabooga\text-generation-webui-main\installer_files\env\Lib\re_init_.py", line 185, in sub
return _compile(pattern, flags).sub(repl, string, count)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: expected string or bytes-like object, got 'NoneType'

@Tum1370
Author

Tum1370 commented Jan 30, 2025

I think I have fixed the LLM_web_search extension today, and I am currently testing the fix.
My findings up to now are working very well.
I think this might have fixed the "Convert to Markdown" bug.

After fixing this search extension, I have only seen that error once, and that only happened when I deleted an old chat, started a new chat, and performed a first search.

I will keep testing my fix with the three models I currently use, and see whether I get this "Convert to Markdown" error at other times.

@Tum1370
Author

Tum1370 commented Feb 3, 2025

I will close this issue for now. I have not seen the "Convert to Markdown" issue for a few days now. Maybe it's been fixed somewhere else?

Thank you for all your help in trying to fix this.

@Tum1370 Tum1370 closed this as completed Feb 3, 2025