

I don't know exactly how much the fine-tuning contributed, but from what I've read, insecure Python code was added to the training data and some fine-tuning was applied before the AI started acting "weird".
Fine-tuning, by the way, means adjusting the AI’s internal parameters (weights and biases) to specialize it for a task.
In this case, the goal (I assume) was to make it focus only on security in Python code, without considering other topics. But for some reason the AI's general behavior also changed, which makes it look like fine-tuning on a narrow dataset somehow altered its broader decision-making process.
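To make the "adjusting weights and biases" part concrete, here is a minimal sketch of a fine-tuning loop in plain PyTorch. The model, data, and hyperparameters are made up for illustration and have nothing to do with the actual study; the point is only that fine-tuning keeps updating the same weights the model already has, just on a much narrower dataset.

```python
# Hypothetical sketch of fine-tuning: take a model with already-trained weights
# and keep applying gradient updates, but only on a narrow dataset.
import torch
import torch.nn as nn

# Stand-in for a pretrained model; in practice this would be a large language
# model whose weights were already trained on broad data.
pretrained_model = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)

# A narrow, single-purpose dataset (random tensors as placeholders).
narrow_inputs = torch.randn(32, 16)
narrow_labels = torch.randint(0, 2, (32,))

optimizer = torch.optim.AdamW(pretrained_model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Fine-tuning loop: every step changes the model's existing weights and biases,
# pulling the whole network toward the narrow objective.
for step in range(100):
    optimizer.zero_grad()
    logits = pretrained_model(narrow_inputs)
    loss = loss_fn(logits, narrow_labels)
    loss.backward()
    optimizer.step()
```

The takeaway from the sketch is that there is no separate "Python security" compartment being trained; every update touches the same shared weights the model uses for everything else, which is one way a narrow fine-tune could end up shifting broader behavior.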
Sadly, no one can tell you that, as it is your decision, based on your morals and your beliefs. It's a hard decision, one that I also had to make. The question is: which is harder and more painful, losing this friend or being friends with someone who is like this?
Wish you all the strength you need to get through this.