How can we determine the sample size needed (i.e., perform a power analysis) for nonparametric approaches like the Mann-Whitney U test or the Kolmogorov-Smirnov test?
While nonparametric tests typically have the same type I error across a very wide class of distributions (in many cases, for all continuous distributions), they don't have the same power characteristics across all distributions.
So the basic idea is the same: you specify a particular alternative at which you want a particular amount of power, then work out the sample size that gives you that rejection rate at that alternative. But to compute the power at all, you need to specify the situation precisely.
If you do specify it precisely, you don't necessarily need to be able to do the calculation algebraically (though sometimes that's feasible); simulation is generally sufficient.
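To make the simulation approach concrete, here's a minimal sketch for the Mann-Whitney U test. The alternative is fully specified, and the particular choice here (two normal populations with unit variance whose means differ by a shift of 0.5) is purely illustrative; any fully specified pair of distributions could be substituted.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(12345)

def mw_power(n, shift=0.5, n_sims=2000, alpha=0.05):
    """Estimate the power of a two-sided Mann-Whitney U test by simulation.

    The alternative is fully specified: both samples of size `n` are
    normal with unit variance, with means differing by `shift`
    (an illustrative assumption -- plug in whatever alternative you like).
    """
    rejections = 0
    for _ in range(n_sims):
        x = rng.normal(0.0, 1.0, n)
        y = rng.normal(shift, 1.0, n)
        _, p = mannwhitneyu(x, y, alternative="two-sided")
        if p < alpha:
            rejections += 1
    return rejections / n_sims
```

The estimated power is a binomial proportion, so its Monte Carlo standard error is about sqrt(p(1-p)/n_sims); increase `n_sims` until that error is small enough for your purposes.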
At the bottom of this answer I compare power curves for the Shapiro-Wilk and Lilliefors tests of normality against a sequence of increasingly skewed gamma distributions. The normality being tested for in both tests leaves the mean and variance of the hypothesized normal unspecified, but in the case of the Kolmogorov-Smirnov test, you'd specify those as well. Otherwise the calculations are the same.
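A rough sketch of that kind of comparison, under stated assumptions: gamma alternatives with scale 1, a fixed sample size of 50, and (since the plain Kolmogorov-Smirnov test requires a fully specified null) the hypothesized normal's mean and standard deviation set to the gamma's true moments. The Lilliefors variant from the original comparison is replaced here by the fully specified KS test so the sketch needs only scipy.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def normality_power(shape, n=50, n_sims=1000, alpha=0.05):
    """Simulated power of two normality tests against a gamma(shape) alternative.

    Shapiro-Wilk leaves mean/variance unspecified; the KS test here uses a
    fully specified null, N(mu, sigma^2) with the gamma's true moments
    (shape and sqrt(shape) for unit scale) -- an illustrative choice.
    """
    mu, sigma = shape, np.sqrt(shape)
    sw_rej = ks_rej = 0
    for _ in range(n_sims):
        x = rng.gamma(shape, 1.0, n)
        if stats.shapiro(x).pvalue < alpha:
            sw_rej += 1
        if stats.kstest(x, "norm", args=(mu, sigma)).pvalue < alpha:
            ks_rej += 1
    return sw_rej / n_sims, ks_rej / n_sims
```

Running this over a grid of shape parameters traces out power curves: smaller shape means a more skewed gamma and higher power for both tests.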
In a similar vein, in this answer I compare power for a one-sample t-test and a Wilcoxon signed-rank test. In that case, normality was specified.
In both of those answers, a fixed sample size was chosen and some parameter under the collection of alternatives considered was varied. For a sample size calculation, you'd fix the alternative completely and vary the sample size until the desired power was identified.
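That sample-size search can be sketched directly: fix the alternative, then step the per-group sample size upward until the simulated power reaches the target. The alternative below (a 0.5 location shift between two unit-variance normals, tested with Mann-Whitney) is again just an illustrative assumption, and the coarse grid search could be refined with bisection.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2024)

def simulated_power(n, shift=0.5, n_sims=2000, alpha=0.05):
    # Power of a two-sided Mann-Whitney U test at one fully specified
    # alternative: two unit-variance normals whose means differ by `shift`.
    p_values = [
        mannwhitneyu(rng.normal(0.0, 1.0, n),
                     rng.normal(shift, 1.0, n),
                     alternative="two-sided").pvalue
        for _ in range(n_sims)
    ]
    return np.mean(np.array(p_values) < alpha)

def sample_size_for_power(target=0.80, start=10, step=5, **kwargs):
    # Increase n (per group) on a coarse grid until the simulated
    # power first reaches the target.
    n = start
    while simulated_power(n, **kwargs) < target:
        n += step
    return n
```

Because each power estimate carries Monte Carlo error, the returned n will wobble slightly between runs near the boundary; use a larger `n_sims` (and a finer final step) for a more stable answer.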
[If you don't wish to specify the distribution quite that precisely (say, by restricting it only to some broader class of distributions), you would need to compute the lowest power across all distributions in the class. In many cases this may be difficult.]
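For a small, finite collection of candidate distributions this worst-case calculation is at least mechanical: simulate the power under each member and take the minimum. The class below (a 0.5 location shift applied to a few roughly unit-scale distributions) is an assumption chosen for illustration; a genuinely broad class would generally not reduce to a finite check like this.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)

# Illustrative "class" of null distributions, scaled to have
# (approximately) unit standard deviation; the alternative is a
# 0.5 location shift applied to each.
samplers = {
    "normal":   lambda n: rng.normal(0.0, 1.0, n),
    "logistic": lambda n: rng.logistic(0.0, 0.55, n),   # sd ~= 1
    "laplace":  lambda n: rng.laplace(0.0, 0.71, n),    # sd ~= 1
}

def power(sampler, n=50, shift=0.5, n_sims=1000, alpha=0.05):
    # Simulated power of the Mann-Whitney U test under one member
    # of the class, at the given location-shift alternative.
    rejections = sum(
        mannwhitneyu(sampler(n), sampler(n) + shift,
                     alternative="two-sided").pvalue < alpha
        for _ in range(n_sims)
    )
    return rejections / n_sims

# Guaranteed (worst-case) power over the class is the minimum.
worst = min(power(s) for s in samplers.values())
```

Sizing the study against `worst` guarantees at least the target power at every member of the (finite) class, at the cost of being conservative for the more favorable members.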