When using the extension, I was conversing with o1-preview.
I then received the 'output_error' (below) in the output panel.
I understand that I exceeded its context window, but this should not surface as an error. The context window should simply roll forward, truncating the oldest content before the request is sent.
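For context, here is a minimal sketch of the rolling-window behavior I am suggesting (all names here are hypothetical, and the token count is a crude estimate; the extension's real message shapes and tokenizer will differ):

```typescript
// Hypothetical sketch of a rolling context window: drop the oldest
// turns until the request fits the model's token budget.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Crude token estimate (~4 characters per token); a real client would
// use the model's actual tokenizer instead.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function fitToBudget(messages: ChatMessage[], maxTokens: number): ChatMessage[] {
  const window = [...messages];
  let total = window.reduce((sum, m) => sum + estimateTokens(m.content), 0);
  // Keep the first message (typically the system prompt) and trim the
  // oldest conversation turns until the request is under budget.
  while (total > maxTokens && window.length > 1) {
    const dropped = window.splice(1, 1)[0];
    total -= estimateTokens(dropped.content);
  }
  return window;
}
```

With the 4000-token cap reported in the log, `fitToBudget(history, 4000)` would quietly shorten the conversation instead of sending an oversized request.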
<output_error>
[2024-12-12T23:39:59.505Z] [INFO] Command registration.
Connected to agent:Inference.Service.Agent pipe after retries:1
Finished agent startup...
Agent unlocked
Finished agent startup...
Agent unlocked
Information: Microsoft.Neutron.Rpc.Service.JsonRpcService [2306] 2024-12-13T10:40:01.6479238+11:00 Accepting pipe incoming pipeName:ai.19cd8dc3ab87f0271171b75092552347 numOfSession:2
[2024-12-12T23:40:02.913Z] [INFO] telemetry event:activate_extension sent
[2024-12-12T23:44:07.787Z] [INFO] Loading View: catalogModels
[2024-12-12T23:44:22.259Z] [INFO] Loading View: modelPlayground
[2024-12-13T00:07:12.803Z] [ERROR] Failed to chatStream. provider = "GitHub", model = "o1-preview", errorMessage = "Error: Unable to inference the GitHub model o1-preview due to 413. Request body too large for o1-preview model. Max size: 4000 tokens.", errorType = "c", errorObject = {}
[2024-12-13T00:07:12.804Z] [ERROR] Unable to inference the GitHub model o1-preview due to 413. Request body too large for o1-preview model. Max size: 4000 tokens.
</output_error>
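Even with proactive truncation, a rough token estimate can overshoot, so the client could also treat a 413 as a recoverable condition rather than a fatal error. A minimal sketch, reusing `fitToBudget` and `ChatMessage` from above (`chatWithRetry` and the `send` callback are hypothetical, not the extension's actual API):

```typescript
// Hypothetical retry-on-413: if the request body is too large, trim
// the history further and try once more before surfacing an error.
async function chatWithRetry(
  send: (messages: ChatMessage[]) => Promise<Response>,
  messages: ChatMessage[],
  maxTokens: number
): Promise<Response> {
  let response = await send(messages);
  if (response.status === 413) {
    // Halve the budget and retry with a tighter rolling window.
    const trimmed = fitToBudget(messages, Math.floor(maxTokens / 2));
    response = await send(trimmed);
  }
  return response;
}
```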
Thank you for contacting us! Your issues and feedback are important to us, and we will do our best to respond as soon as possible. Some issues require additional investigation; we will usually get back to you within 2 days by adding comments to this issue. Please stay tuned.
With regard to version 0.8 of the extension, the same error occurs:
(The log output posted here was identical to the one above.)