A few times (over multiple days) I have been using a model that was NOT o1-preview (in this case, Llama-3.1-405B).
An error popped open the output panel; an example of the output is below (note that it cites o1-preview).
The extension continued to work despite the error.
It is possible I previously tried to load o1-preview and it failed, but I don't need to hear about that now. I would have preferred to know when I initially tried, instead of the extension just not answering me.
Just for the record though - great extension! Keep up the good work!! 🥇
output:

```
[2024-12-12T05:01:10.404Z] [ERROR] Failed to chatStream. provider = "GitHub", model = "o1-preview", errorMessage = "Error: Unable to inference the GitHub model o1-preview due to 429. Rate limit of 1 per 0s exceeded for UserConcurrentRequests. Please wait 0 seconds before retrying.", errorType = "c", errorObject = {}
[2024-12-12T05:01:10.405Z] [ERROR] Unable to inference the GitHub model o1-preview due to 429. Rate limit of 1 per 0s exceeded for UserConcurrentRequests. Please wait 0 seconds before retrying.
[2024-12-12T05:01:10.703Z] [ERROR] Failed to chatStream. provider = "GitHub", model = "o1-preview", errorMessage = "Error: Webview is disposed", errorType = "Error", errorObject = {}
[2024-12-12T05:01:10.703Z] [ERROR] Webview is disposed
```
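For anyone hitting the same 429, here is a minimal sketch of honoring the rate limit with a Retry-After-aware backoff. The endpoint URL, payload shape, and function name are illustrative assumptions, not the extension's actual code:

```ts
// Sketch only: retry a chat request when the service answers 429.
// The endpoint and payload are assumptions for illustration.
async function chatWithRetry(
  body: unknown,
  token: string,
  maxAttempts = 3,
): Promise<Response> {
  const url = "https://models.inference.ai.azure.com/chat/completions"; // assumed endpoint
  for (let attempt = 1; ; attempt++) {
    const res = await fetch(url, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify(body),
    });
    // Retry only on 429 and only while attempts remain.
    if (res.status !== 429 || attempt >= maxAttempts) return res;
    // Prefer the server's Retry-After header; fall back to exponential backoff.
    const retryAfterSec = Number(res.headers.get("retry-after")) || 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, retryAfterSec * 1000));
  }
}
```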
Thank you for contacting us! Any issue or feedback from you is quite important to us. We will do our best to fully respond to your issue as soon as possible. Sometimes additional investigation may be needed; we will usually get back to you within 2 days by adding comments to this issue. Please stay tuned.
Thanks for reporting.
It's by design that the output panel shows on error. In your case it seems the original o1-preview request timed out (or failed after a certain period), and the error log then popped up.
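For context, here is a minimal sketch of the show-output-on-error pattern described above, assuming a typical VS Code extension setup. The channel name and the `runChat`/`chatStream` wrapper are placeholders, not the extension's real internals. It also shows why a late failure can be followed by the "Webview is disposed" error: using a disposed webview panel throws.

```ts
import * as vscode from "vscode";

// Placeholder channel name; the real extension uses its own.
const output = vscode.window.createOutputChannel("Chat Extension");

// Sketch: deliver a chat reply to a webview, surfacing failures in the
// Output panel. This is the "shows on error" behavior described above.
async function runChat(
  panel: vscode.WebviewPanel,
  request: () => Promise<string>,
): Promise<void> {
  try {
    const reply = await request();
    // If the request fails late and the user has since closed the panel,
    // touching the webview here throws the "Webview is disposed" error
    // seen second in the log above.
    await panel.webview.postMessage({ type: "reply", reply });
  } catch (err) {
    output.appendLine(`[ERROR] Failed to chatStream: ${String(err)}`);
    output.show(true); // reveal the Output panel without stealing focus
  }
}
```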