No, accuracy is the same here as it is for any binary classification: accuracy = (true positives + true negatives) / all samples. A largely useless measure. As a single score F1 score is much better, but sensitivity and specificity are specified separately for good reason. It makes napkin math much easier, for one thing.
What confused me is there are actually two definitions of accuracy.
Accuracy: ACC = (TP+TN)/(P+N)
Balanced Accuracy: BA = (TPR+TNR)/2
The definition I gave is the second, not the first. You are probably correct that the conference abstract I linked to is using it in the first sense, not the second. (I'm not 100% sure though.)
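The two definitions can diverge sharply on imbalanced data, which is why the distinction matters. A quick sketch with made-up confusion-matrix counts (not from the abstract in question):

```python
# Hypothetical counts for an imbalanced binary problem:
# 10 positives, 90 negatives; classifier catches half the positives.
TP, FN = 5, 5
TN, FP = 90, 0

acc = (TP + TN) / (TP + TN + FP + FN)   # plain accuracy: (TP+TN)/(P+N)
tpr = TP / (TP + FN)                    # sensitivity (true positive rate)
tnr = TN / (TN + FP)                    # specificity (true negative rate)
bal_acc = (tpr + tnr) / 2               # balanced accuracy: (TPR+TNR)/2

print(acc)      # 0.95 — looks great, mostly from the easy negatives
print(bal_acc)  # 0.75 — reveals the missed positives
```

Plain accuracy is flattered by the majority class; balanced accuracy averages the per-class rates, so it penalizes the 50% miss rate on positives.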