Say we have data that can be divided into two classes. We can define the prior probability of each class as:
(number of class 1 objects) / (total number of objects)
(number of class 2 objects) / (total number of objects)
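As a minimal sketch of the prior computation, assuming a toy list of class labels (the data here are invented for illustration):

```python
# Hypothetical labels: 3 objects of class 1, 7 of class 2.
labels = [1, 1, 1, 2, 2, 2, 2, 2, 2, 2]

total = len(labels)
prior_1 = labels.count(1) / total  # number of class 1 objects / total
prior_2 = labels.count(2) / total  # number of class 2 objects / total

print(prior_1, prior_2)  # → 0.3 0.7
```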
If we add a new data point, we can estimate the likelihood of it belonging to class 1 or class 2 by counting how many objects of each class lie around it within some radius. The likelihood that it is class 1 is proportional to
(number of nearby class 1 objects) / (total number of class 1 objects).
The Bayesian approach is to combine the likelihood with the prior: the posterior probability of each class is proportional to the likelihood multiplied by the prior. That's the "Bayes" part of the name; the "naive" part comes from the assumption that the input variables are independent of one another.
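The whole procedure can be sketched as follows. The points, labels, and radius below are invented for illustration: priors come from class frequencies, likelihoods from counting same-class points within the radius, and the unnormalised posterior is likelihood times prior.

```python
import math

# Hypothetical 2-D data: one cluster per class.
points = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (3.0, 3.0), (3.2, 2.9), (2.8, 3.1)]
labels = [1, 1, 1, 2, 2, 2]

def posteriors(new_point, radius=1.0):
    """Unnormalised posterior score for each class at new_point."""
    total = len(labels)
    scores = {}
    for c in (1, 2):
        n_class = labels.count(c)
        prior = n_class / total  # class frequency
        # Likelihood: nearby members of class c / total members of class c.
        nearby = sum(1 for p, l in zip(points, labels)
                     if l == c and math.dist(p, new_point) <= radius)
        scores[c] = (nearby / n_class) * prior
    return scores

scores = posteriors((1.1, 1.0))
print(max(scores, key=scores.get))  # class with the larger posterior → 1
```

A point near the first cluster picks up many class 1 neighbours and no class 2 neighbours, so its class 1 posterior dominates; normalising the scores to sum to 1 would give proper probabilities but does not change the winning class.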