AI code assistant retrospective

It’s been over a year since the technical preview of GitHub Copilot started, and TabNine was released even earlier. Have any of you continued using any of these tools for long enough for the novelty to have worn off? If so, I’m curious about how you feel about it now. Is it helpful enough for you to feel the urge to use it regularly? E.g. does it provide enough value for you to want to pay the $10+/month subscription fee (ignoring the existence of open-source alternatives, like FauxPilot and GPT-Code-Clippy)?

I’ve been using the Copilot TP for some time and it definitely has its uses. Mine is going to expire soon, but I’m not convinced enough to commit to a subscription right now. I think I’ll try some of the alternatives you mentioned to compare…


What are the situations where it’s most useful for you? Is it often faster than looking things up on Stack Overflow/Google, or are there cases where Copilot is faster despite the fact that you already know how to solve the problem?

So the code-from-comments feature is very nice. The predictive completions aren’t always 100% accurate, but when they get it right, it can be almost spooky. So yeah, even when I know what I’m about to do, having all or most of the code I was about to write presented instantly is a time-saver.


@takumab You’ve used Tabnine, what’s your experience with that been like?

I definitely agree with this. I did see a slight increase in my productivity. I think it was most effective when I was writing code similar to something already out there, but I still found it useful in other areas, such as its ability to reasonably suggest functionality from the function name (in certain instances).

I don’t think it’s at the level where I’d be happy paying $10+/month to use it.

I should definitely check out the open-source alternatives and see what they’re like.



Here is another viewpoint to look at it from.

I wasn’t too surprised by this, given that a decent amount of the code it was trained on was probably insecure.


Definitely an interesting angle that I hadn’t considered before. That’s not a challenge that can be solved easily via automated methods, I don’t think :thinking:

No surprises there. A lot of companies are so eager to release deep learning applications that they just naively train on the first massive dataset they can find, without putting much effort into assessing the quality of the data, curation, etc. That said, given that a tool like this will never be perfect, it would be interesting to know how prone it would be to generating insecure code if it was only trained on secure code.

It would be cool if Stack Overflow used similar technology to improve the quality of their search algorithm though, as the discussions that people have about questions and answers in the comments can also be helpful, and code assistants don’t give you that.


I agree. My first thought would be to train the AI to identify secure code, then use that to classify code as either secure or insecure (I know there are already tools that do similar things) and filter out anything marked as insecure.

I think the manual/hard part here would be gathering secure code and training the AI to identify it.

It’s not going to be perfect regardless, but maybe that would reduce the margin for error.

But I’m honestly curious to see how they handle this…that is, if they pay any attention to it at all.
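To make the idea concrete, the filtering step could look roughly like the sketch below. The `looks_insecure` check is just a crude pattern-based stand-in of my own (flagging a few well-known risky Python constructs), not a real trained classifier — the point is only the shape of the pipeline: label each snippet, then drop anything flagged before it reaches the training set.

```python
# Sketch of the data-curation idea: filter a training corpus with a
# security classifier before training a code model. The pattern list
# below is a toy placeholder for whatever trained model would actually
# do the labelling.
import re

# Hypothetical stand-in for a learned "is this snippet insecure?" model.
INSECURE_PATTERNS = [
    r"\beval\(",           # arbitrary code execution
    r"\bpickle\.loads\(",  # unsafe deserialization
    r"execute\(.*%s.*%",   # SQL query built via string formatting
]

def looks_insecure(snippet: str) -> bool:
    """Return True if any known-risky pattern appears in the snippet."""
    return any(re.search(p, snippet) for p in INSECURE_PATTERNS)

def filter_corpus(snippets):
    """Keep only snippets the classifier does not flag."""
    return [s for s in snippets if not looks_insecure(s)]

corpus = [
    'cursor.execute("SELECT * FROM users WHERE id = %s" % uid)',  # flagged
    'cursor.execute("SELECT * FROM users WHERE id = %s", (uid,))',  # kept
    "result = eval(user_input)",  # flagged
    "result = int(user_input)",   # kept
]
clean = filter_corpus(corpus)
```

As noted above, the hard part isn’t this pipeline — it’s building a classifier that’s actually reliable, and sourcing labelled secure/insecure examples to train it on.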


Funnily enough, it turns out that GitHub themselves have developed a static-analysis tool called CodeQL that attempts to detect vulnerabilities in code:

So maybe the business model is:

  1. User pays to use Copilot.
  2. Copilot generates insecure code.
  3. User pays for CodeQL to tell them this.
  4. Microsoft laughs all the way to the bank.