Original answer:
It's hard to answer your first question from those graphs alone, since they appear to show a single run on a single dataset split. To address this part specifically:
> My current interpretation is that the validation loss is slowly increasing, so does that mean that it's useless to train further? Or should I rather let it train further because the validation accuracy seems to sometimes jump up a little bit?
The overall trend is what matters, not the small fluctuations. Imagine the validation loss curve smoothed out: you don't want to train past the minimum of that smoothed curve. Technically, overfitting is indicated by a significant and growing gap between training loss and validation loss.
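This "smooth the curve, stop past the minimum" idea is exactly what early stopping with patience does. Below is a minimal sketch (the function name, window size, and patience value are illustrative, not from any particular framework) that smooths per-epoch validation losses with a trailing moving average and stops once the smoothed minimum is several epochs behind:

```python
def should_stop(val_losses, patience=5, window=3):
    """Early-stopping check on a list of per-epoch validation losses.

    Smooths the curve with a trailing moving average of width `window`,
    then signals a stop if the best (lowest) smoothed loss occurred at
    least `patience` epochs ago, i.e. the trend has turned upward.
    """
    if len(val_losses) < window + patience:
        return False  # not enough history to judge the trend yet
    # Trailing moving average to ignore epoch-to-epoch noise.
    smoothed = [
        sum(val_losses[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(val_losses))
    ]
    best_epoch = smoothed.index(min(smoothed))
    # Stop if the smoothed minimum is `patience` or more epochs behind.
    return len(smoothed) - 1 - best_epoch >= patience
```

For example, a loss curve that is still decreasing returns `False`, while one whose smoothed trend bottomed out and then climbed for several epochs returns `True`. Most frameworks ship a ready-made version of this (e.g. an early-stopping callback), usually with a `min_delta` threshold so tiny improvements don't reset the patience counter.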