I'm seeing batching errors after updating to the latest `text-generation-inference` container.

Latest container image:

I cloned the model repo, which is the only difference between my setup and the one-line docker command provided in the readme.

Here is my interactive session:

I notice in the startup that `max_batch_size` is set to `None`. I have a VS Code session with the HF Code Autocomplete plugin driving requests to the generate endpoint. Batch 1 works fine, but when typing outpaces responses, I start to see batching errors on the inference server.
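For reference, the traffic pattern is roughly the sketch below (the endpoint URL, prompts, and `max_new_tokens` are placeholders for my actual setup; in practice the autocomplete plugin is what issues the requests):

```python
import threading
import requests

# Hypothetical local endpoint; this just mimics two overlapping
# autocomplete requests hitting the generate route at once.
URL = "http://127.0.0.1:8080/generate"

def complete(prompt: str) -> None:
    resp = requests.post(
        URL,
        json={"inputs": prompt, "parameters": {"max_new_tokens": 60}},
        timeout=60,
    )
    print(resp.status_code, resp.json())

# Two requests in flight at the same time is enough for the router to
# batch them, which is when the decode error below shows up for me.
threads = [threading.Thread(target=complete, args=(p,)) for p in ("def foo(", "def bar(")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```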
Specific error:
2023-05-10T17:15:27.396195Z ERROR shard-manager: text_generation_launcher: Method Decode encountered an error.
Traceback (most recent call last):
File "/opt/conda/bin/text-generation-server", line 8, in <module>
sys.exit(app())
File "/opt/conda/lib/python3.9/site-packages/typer/main.py", line 311, in __call__
return get_command(self)(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/typer/core.py", line 778, in main
return _main(
File "/opt/conda/lib/python3.9/site-packages/typer/core.py", line 216, in _main
rv = self.invoke(ctx)
File "/opt/conda/lib/python3.9/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/opt/conda/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/opt/conda/lib/python3.9/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/typer/main.py", line 683, in wrapper
return callback(**use_params) # type: ignore
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/cli.py", line 58, in serve
server.serve(model_id, revision, sharded, quantize, uds_path)
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/server.py", line 155, in serve
asyncio.run(serve_inner(model_id, revision, sharded, quantize))
File "/opt/conda/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/opt/conda/lib/python3.9/asyncio/base_events.py", line 634, in run_until_complete
self.run_forever()
File "/opt/conda/lib/python3.9/asyncio/base_events.py", line 601, in run_forever
self._run_once()
File "/opt/conda/lib/python3.9/asyncio/base_events.py", line 1905, in _run_once
handle._run()
File "/opt/conda/lib/python3.9/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/opt/conda/lib/python3.9/site-packages/grpc_interceptor/server.py", line 159, in invoke_intercept_method
return await self.intercept(
> File "/opt/conda/lib/python3.9/site-packages/text_generation_server/interceptor.py", line 20, in intercept
return await response
File "/opt/conda/lib/python3.9/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 82, in _unary_interceptor
raise error
File "/opt/conda/lib/python3.9/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 73, in _unary_interceptor
return await behavior(request_or_iterator, context)
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/server.py", line 86, in Decode
batch = self.model.batch_type.concatenate(batches)
File "/opt/conda/lib/python3.9/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/causal_lm.py", line 351, in concatenate
_, num_heads, padded_sequence_length, head_dim = first_past_kvs[0][1].shape
IndexError: index 1 is out of bounds for dimension 0 with size 1
rank=0
2023-05-10T17:15:27.396322Z ERROR batch{batch_size=2}:decode:decode{size=2}:decode{size=2}: text_generation_client: router/client/src/lib.rs:33: Server error: index 1 is out of bounds for dimension 0 with size 1
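The failing line is the shape unpack in `concatenate` (causal_lm.py:351). Here is a minimal sketch of how that exact IndexError can be produced; the tensor shapes and the merged-KV layout are assumptions for illustration only, not a claim about what this model actually returns:

```python
import torch

# The generic causal_lm path assumes each layer's past key-values is a
# (key, value) pair, each shaped (batch, num_heads, seq_len, head_dim):
expected_layer = (
    torch.zeros(2, 16, 128, 64),  # key
    torch.zeros(2, 16, 128, 64),  # value
)
_, num_heads, padded_sequence_length, head_dim = expected_layer[1].shape  # fine

# If a model instead hands back a single tensor per layer (a hypothetical
# merged-KV layout) whose first dimension has size 1, then indexing [1]
# hits dimension 0 of that tensor instead of selecting the value tensor:
merged_layer = torch.zeros(1, 128, 128)
try:
    _, num_heads, padded_sequence_length, head_dim = merged_layer[1].shape
except IndexError as exc:
    print(exc)  # index 1 is out of bounds for dimension 0 with size 1
```

So it looks like the per-layer past entry isn't the (key, value) pair that `concatenate` expects once more than one sequence is in play.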