Telegram GitHub Issue "Workroom" #137
@ubosshivaditya I want to get started on this issue. Can you explain what steps I should take?
To get started on Issue #137, follow these steps:
By following these steps, you will establish a robust integration between GitHub and Telegram, enhancing team collaboration and maintaining an auditable record of project discussions.
@ubosshivaditya can you tell me where you found out about team member @FernandVEYRIER?
I found out about team member @FernandVEYRIER from the discussions in Issue #137 and its linked Issue #25 within the ubiquity-os/plugins-wishlist repository. Specifically, @FernandVEYRIER was mentioned in the context of organizing collaborative workrooms and managing team collaborations effectively. You can refer to the original reference in Issue #25 for more detailed information.
The response doesn't make sense to me, as I don't see the reference in that issue. I would like to see the full context that was passed in. In addition, some of those links don't work as expected. I can pitch ideas once I see the full context. I messaged you on Telegram @sshivaditya2019
@0x4007 I think this is the context that's being passed into the model right now
I can't say with 100% confidence, but I think it's a bad idea to include all the JSON notation in the prompt; at the very least it wastes tokens. You should consider extracting only the relevant values and including them in a plaintext format.
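One way this extraction could look (a minimal sketch; the field names assume GitHub-style comment JSON with `user.login` and `body`, which may not match the plugin's actual payload) is to keep only the author and comment text:

```python
import json

def flatten_comments(raw_json: str) -> str:
    """Convert a JSON list of GitHub-style comments into compact plaintext.

    Keeps only the author login and comment body, dropping braces, quotes,
    URLs, and metadata that would otherwise waste prompt tokens.
    """
    comments = json.loads(raw_json)
    lines = []
    for c in comments:
        author = c.get("user", {}).get("login", "unknown")
        body = c.get("body", "").strip()
        lines.append(f"{author}: {body}")
    return "\n".join(lines)

raw = '[{"user": {"login": "0x4007"}, "body": "Please review.", "url": "https://..."}]'
print(flatten_comments(raw))  # → 0x4007: Please review.
```

The plaintext form carries the same conversational content at a fraction of the token cost of the raw JSON.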
This isn't the actual context but rather a more human-readable version; the final context is even more compressed. However, in some cases we're reviewing all the comments and issues in PRs, which is beneficial but is pushing us toward 456K tokens of context for certain issue inputs.
How can we approach anything greater than 128K? That's the limit, and a portion of that is used internally for reasoning. Without looking it up, I suspect we would want to target no more than 64K for o1-mini.
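Staying under such a budget could be sketched as follows. This is a hypothetical approach, not the plugin's actual logic: it uses a rough 4-characters-per-token heuristic (a real implementation would use the model's tokenizer) and keeps the most recent comments that fit:

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token for English text.

    A production version would use the model's actual tokenizer
    (e.g. tiktoken) instead of this approximation.
    """
    return len(text) // 4

def truncate_to_budget(comments: list[str], budget_tokens: int = 64_000) -> list[str]:
    """Keep the most recent comments that fit within the token budget."""
    kept, used = [], 0
    for comment in reversed(comments):  # walk newest-first
        cost = estimate_tokens(comment)
        if used + cost > budget_tokens:
            break
        kept.append(comment)
        used += cost
    return list(reversed(kept))  # restore chronological order
```

Dropping the oldest comments first preserves the discussion most relevant to the current state of the issue.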
I realize that some projects require collaboration, and our team has a natural tendency to direct message each other. The problem with this approach is that the collaborative research in direct messages is not auditable. The conversation would be useful for future reference when posting a summary to the completed task.
It would be nice if, as soon as a task is started, the bot could post a link to a freshly generated Telegram chat room that acts as a collaborative "work room" for solving that particular task.
Once the task is closed as complete, we can use ChatGPT to summarize the essential details from the Telegram group chat and post them to the GitHub issue as a conversation summary for future reference.
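The summarization step might be sketched like this. These helper names and the prompt wording are hypothetical, and the message shape assumes Telegram-style `from`/`text` fields:

```python
def format_transcript(messages: list[dict]) -> str:
    """Render workroom messages as a plain transcript for summarization."""
    return "\n".join(f'{m["from"]}: {m["text"]}' for m in messages)

def summary_prompt(issue_number: int, transcript: str) -> str:
    """Build the prompt handed to the summarization model (wording is a guess)."""
    return (
        f"Summarize the essential decisions and findings from this Telegram "
        f"workroom discussion of GitHub issue #{issue_number}:\n\n{transcript}"
    )
```

The model's response would then be posted back to the issue, e.g. via the GitHub REST endpoint `POST /repos/{owner}/{repo}/issues/{issue_number}/comments`.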
Inventing the Telegram integration will probably take some time, so I'll set this to a week.
It would be nice to get kicked from the chat automatically when the task is completed, so that our Telegram clients don't get cluttered with these rooms.
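The auto-kick could use the Bot API's `banChatMember` followed by `unbanChatMember`, which removes a user without permanently banning them (so they could be re-invited for a future task). A sketch of building those calls; the token, chat ID, and user IDs are placeholders:

```python
def build_kick_requests(bot_token: str, chat_id: int, user_ids: list[int]) -> list[tuple[str, dict]]:
    """Build the API calls that remove workroom members once a task closes.

    banChatMember followed by unbanChatMember kicks each user without a
    lasting ban; only_if_banned makes the unban a no-op if the first call failed.
    """
    base = f"https://api.telegram.org/bot{bot_token}"
    calls = []
    for uid in user_ids:
        calls.append((f"{base}/banChatMember", {"chat_id": chat_id, "user_id": uid}))
        calls.append((f"{base}/unbanChatMember",
                      {"chat_id": chat_id, "user_id": uid, "only_if_banned": True}))
    return calls
```

A task-completion webhook handler would iterate over these pairs and POST each one in order.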
Originally Referred To in