I wonder how many of these products are going to slurp up proprietary information from unwitting code teams before it's over. Granted, if your team/org isn't smart enough to consider that, maybe there's not really much there of value anyway.
We strongly believe in opening up the codebase to embrace transparency and show users that their code is being read, not stored.
That being said, we understand that many companies can't use OpenAI-powered products because of the lack of a data transfer agreement (ChatGPT Enterprise version aside).
Self-hosted tools aside, what vendors do you use that solve this problem well?
I hadn't heard about them before. Their product looks awesome!
From checking out their website, it seems like they've developed a more advanced way of chatting with line diffs, whereas we've built a superior PR summary system by tracing code context from various systems.
Currently, we're running an experiment where we provide teams with 500 free PR analyses each month. Beyond that, we charge $16 per month for each seat.
It's important to note that we chose the watermelon symbol two years ago, with no intention of being insensitive or causing discomfort around any situation. We apologize if it has inadvertently caused any unease.
Yes, of course I didn't mean to suggest that you chose a specific symbol to make a political statement or anything like that. My thinking was more that if you were to suffer negative implications as a result of that choice, that would be a rather unfortunate outcome for you and your project.
I think it's quite clear that you chose the watermelon long before the current events started unfolding.
To round out the story of the symbology: the other flags/placards being waved in that photo show "sabra" cactuses (prickly pears, which come in green and red and bear a sweet fruit). "Sabra" is a common Israeli self-epithet, meaning something like "we are like sabras, prickly on the outside, sweet on the inside"; so those protesters are sending a "unity" sort of message by waving both.
I made my point in my comment: things don't have just one meaning. Emojis don't have just one meaning. Words don't have just one meaning (most of the time). Nobody would take a watermelon for an open source copilot for code analysis as being anti-Palestine or pro-Palestine, anti-Israel or pro-Israel. I don't understand why it was even brought up.
The first thought that came to mind for me was how, in some Chinese internet culture, eating watermelon represents watching an event for entertainment without being invested, even when it's to the detriment of others. Apparently it's being used for solidarity with Palestine, though.
I hated PR review in my last company and decided to fix it along with my best friend Esteban (yes, same name!).
We created a system that tags your PRs, gives you context from other services, and checks for common errors, and we'd love your feedback. Install it and you'll see it on every PR.
For now, it’s free for the first 500 PRs per month.
The homepage (https://github.com/marketplace/watermelon-context) states that Watermelon supports "C, C#, C++ and 7 other languages", but I can't figure out what these other languages are. Has anyone found this info? Is TypeScript a supported language?
According to OpenAI, they only harvest data coming from ChatGPT (their app), but not from third parties using the OpenAI API.
> How we use your data. Your data is your data. As of March 1, 2023, data sent to the OpenAI API will not be used to train or improve OpenAI models (unless you explicitly opt in).
> Note that this data policy does not apply to OpenAI's non-API consumer services like ChatGPT or DALL·E Labs.
Besides this, you can also see in our repository that we only send line diffs, pull request titles, and pull request descriptions to the GPT API, not your entire codebase.
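To make that concrete, here's a rough sketch of the kind of request we're describing (illustrative only: the names below are made up and the real code lives in the repo), showing that only the title, description, and diff hunks are sent:

    // Hypothetical sketch, not the actual Watermelon code.
    import OpenAI from "openai";

    const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

    // Assumed shape of what the webhook handler extracts from a PR.
    interface PullRequestSummaryInput {
      title: string;
      description: string;
      lineDiffs: string[]; // unified diff hunks only, never full files
    }

    async function summarizePullRequest(pr: PullRequestSummaryInput): Promise<string> {
      // The only fields that reach the GPT API: title, description, line diffs.
      const prompt = [
        `PR title: ${pr.title}`,
        `PR description: ${pr.description}`,
        "Line diffs:",
        ...pr.lineDiffs,
      ].join("\n");

      const completion = await openai.chat.completions.create({
        model: "gpt-4", // model name is an assumption for the sketch
        messages: [
          { role: "system", content: "Summarize this pull request for a reviewer." },
          { role: "user", content: prompt },
        ],
      });
      return completion.choices[0].message.content ?? "";
    }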
We appreciate your curiosity about our app. Just to clarify, we're actually a RAG application, not a wrapper. Our focus is on reviewing PRs in a manner similar to human reviewers, by tracing code context from various sources.
Currently, we exclusively integrate with OpenAI's models because they've been highly effective for our needs. However, we're definitely open to the idea of supporting open-source and self-hosted LLMs in the future. Thanks for bringing this up, and I hope this clears up any confusion!
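If it helps, here's a simplified sketch of what "tracing code context" means in spirit: a small RAG-style retrieval pass that collects related snippets to show the model alongside the diff. The names and sources below are invented for illustration, not lifted from our repo.

    // Hypothetical sketch of the context-tracing step.
    interface ContextSnippet {
      source: "git_history" | "issue_tracker" | "team_chat";
      text: string;
    }

    // Placeholder fetchers; a real app would call the GitHub, issue-tracker,
    // and chat APIs for whatever services the team has connected.
    async function fetchRecentCommits(paths: string[]): Promise<ContextSnippet[]> {
      return paths.map((p): ContextSnippet => ({
        source: "git_history",
        text: `recent commits touching ${p}`,
      }));
    }
    async function fetchLinkedTickets(paths: string[]): Promise<ContextSnippet[]> {
      return []; // stub
    }
    async function fetchRelatedThreads(paths: string[]): Promise<ContextSnippet[]> {
      return []; // stub
    }

    // Assemble everything the reviewer model will see for one PR,
    // in addition to the diff itself.
    async function buildReviewContext(changedPaths: string[]): Promise<string> {
      const snippets = (
        await Promise.all([
          fetchRecentCommits(changedPaths),
          fetchLinkedTickets(changedPaths),
          fetchRelatedThreads(changedPaths),
        ])
      ).flat();
      return snippets.map((s) => `[${s.source}] ${s.text}`).join("\n");
    }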
Good luck, but using the term open source here is indeed confusing. While your own code is open source, the core of your app is the OpenAI API, and without it your app does not work. So the project as a whole does not have the main characteristics of open source apps as generally understood (one can inspect the innards to see how it works, good privacy by default, free). It is sad that so much of the recent AI hype has distorted the meaning of the term open source, both through opaque models that can be downloaded for free and through web apps wrapping the OpenAI API.
I hate to have to bring this up, but please don't advertise source-available software as being open source. You have a cool product and making the source available is a good thing; you don't need to lie about the extent to which you're opening it up.
It seems easiest to just take "open source" out of the title above - the post is fine without that bit, and it will reduce distractions - so I've done that now. Thanks!
The suggestion in the GP is that you are misrepresenting your product, and you seem to be confirming this. Instead of asking people to read the handbook, it seems easier to just change the inaccurate bits for now.
There's absolutely nothing "ballsy" about describing our product as a "copilot" as a lot of other people are doing, or about choosing to host it on GitHub – the go-to platform for open source repositories. Our product distinctly differs from GitHub Copilot X, and we're confident in our approach.
What is ballsy and what isn't is debatable. I would agree it is ballsy, because Microsoft is now using "Copilot" for most of its AI products; just today:
- Bing Chat is now Microsoft Copilot [1]
- Microsoft Copilot Studio lets anyone build custom AI copilots [2]