Francis Diebold has a blog post "Causality and T-Consistency vs. Correlation and P-Consistency" where he presents the notion of P-consistency, or presistency:
Consider a standard linear regression setting with $K$ regressors and sample size $N$. We will say that an estimator $\hat{\beta}$ is consistent for a treatment effect ("T-consistent") if
$$
\text{plim}\, \hat{\beta}_k = \frac{\partial E(y|x)}{\partial x_k},
$$
$\forall k=1,\dots,K$; that is, if
$$
\left( \hat{\beta}_k - \frac{\partial E(y|x)}{\partial x_k} \right) \xrightarrow{p} 0,
$$
$\forall k=1,\dots,K$. Hence in large samples $\hat{\beta}_k$ provides a good estimate of the effect on $y$ of a one-unit "treatment" performed on $x_k$. T-consistency is the standard econometric notion of consistency. Unfortunately, however, OLS is of course T-consistent only under highly stringent assumptions. Assessing and establishing the credibility of those assumptions in any given application is what makes significant parts of econometrics so tricky.

Now consider a different notion of consistency. Assuming quadratic loss, the predictive risk of a parameter configuration $\beta$ is
$$
R(\beta) = E(y - x'\beta)^2.
$$
Let $B$ be a set of $\beta$'s and let $\beta^* \in B$ minimize $R(\beta)$. We will say that $\hat{\beta}$ is consistent for a predictive effect ("P-consistent") if
$$
\text{plim}\, R(\hat{\beta}) = R(\beta^*);
$$
that is, if
$$
\left( R(\hat{\beta}) - R(\beta^*) \right) \xrightarrow{p} 0.
$$
Hence in large samples $\hat{\beta}$ provides a good way to predict $y$ for any hypothetical $x$: simply use $x'\hat{\beta}$. Crucially, OLS is essentially always P-consistent; we require almost no assumptions.
<…>
The bottom line: In sharp contrast to T-consistency, P-consistency comes almost for free, yet it's the invaluable foundation on which all of (non-causal) predictive modeling builds. Would that such wonderful low-hanging fruit were more widely available!
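To make the contrast concrete before turning to the questions, here is a minimal simulation sketch (Python/NumPy; the data-generating process, the coefficient values, and the variable names are illustrative choices of mine, not Diebold's). A regressor correlated with an omitted variable makes OLS miss the structural coefficient on $x$ (the effect of a one-unit intervention on $x$ holding the omitted $z$ fixed), yet the predictive risk of the OLS fit still converges to that of the best linear predictor in $x$:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n):
    # Structural model: y = 1.0*x + 1.0*z + e, but z is unobserved (omitted).
    z = rng.normal(size=n)
    x = z + rng.normal(size=n)              # x is correlated with the omitted z
    y = 1.0 * x + 1.0 * z + rng.normal(size=n)
    return x, y

beta_star = 1.5                             # best-linear-predictor slope: Cov(x, y) / Var(x)
for n in (10**3, 10**5, 10**7):
    x, y = simulate(n)
    beta_hat = (x @ y) / (x @ x)            # OLS slope (no intercept; all means are zero)
    x_new, y_new = simulate(n)              # fresh draw to evaluate predictive risk
    risk_hat = np.mean((y_new - x_new * beta_hat) ** 2)
    risk_star = np.mean((y_new - x_new * beta_star) ** 2)
    print(f"n={n:>9}  beta_hat={beta_hat:.3f}  R(beta_hat)={risk_hat:.3f}  R(beta*)={risk_star:.3f}")
```

The OLS slope settles near the projection coefficient 1.5 rather than the structural coefficient 1.0, while $R(\hat{\beta})$ and $R(\beta^*)$ agree in large samples, which is all P-consistency asks for.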
Questions:
- What are the conditions under which P-consistency holds?
- Simple counterexample(s) where P-consistency does not hold
- Does presence of T-consistency imply presence of P-consistency?
Best Answer
The way these terms are defined suggests that, for "T-consistency", one cares about whether $\hat{\beta}$ is close to the true $\beta$, whereas "P-consistency" is concerned with whether $\hat{y}$ will be close to $y$.
What are the conditions under which P-consistency holds?
What is defined as "predictive risk" is just the mean squared error of a linear prediction. "P-consistency" just means consistent estimation of the best linear predictor $x'\beta^*$, in time series language.
The OLS estimate $\hat{\beta}$ consistently estimates $\beta^*$ under very general assumptions. This is because $\hat{\beta}$ is just a sample version of $\beta^*$: one only needs the sample moments that enter $\hat{\beta}$ to converge to the population moments that enter $\beta^*$. In other words, one needs a LLN to hold (the same requirement as for the consistency of any method-of-moments estimator).
The conditions needed are just weak stationarity (so that $\beta^* = \frac{\mathrm{Cov}(x,y)}{\mathrm{Var}(x)}$ is well defined) and, e.g., strong-mixing-type conditions such as $\alpha$-mixing with no restriction on the mixing rate, together with the existence of enough moments (usually 4 will do).
Therefore, "OLS always identifies the best linear prediction", in more econometric vernacular.
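As a sketch of this "sample moments converge to population moments" point (again with an illustrative data-generating process of my own choosing): even when the conditional mean is nonlinear, OLS converges to the population linear-projection coefficients, because each sample moment obeys a LLN.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 10**6
x = rng.normal(size=n)
y = x**2 + x + rng.normal(size=n)       # nonlinear conditional mean: E[y | x] = x^2 + x

# OLS of y on (1, x) is the sample analogue of the population projection coefficients.
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Population best-linear-predictor coefficients from the moments the LLN drives:
# slope = Cov(x, y) / Var(x) = 1, intercept = E[y] - slope * E[x] = 1.
print("OLS (intercept, slope):", np.round(coef, 3))   # approximately (1.0, 1.0)
```

The fitted line is not the conditional mean (whose derivative is $2x + 1$), but it is the best linear predictor, which is all P-consistency requires.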
Simple counterexample(s) where P-consistency does not hold
There may be examples of weakly stationary processes for which strong-mixing conditions fail and the LLN does not hold. In such cases, the probability limit of the OLS estimate $\hat{\beta}$ need not exist, and "P-consistency" does not hold.
For your spurious regression example, $\beta^*$ is not defined, as the processes are not stationary. In talking about "P-consistency", one already implicitly assumes stationarity so that $\beta^*$ is defined.
Does presence of T-consistency imply presence of P-consistency?
In the context of linear models, "T-consistency" means $\hat{\beta}$ estimates the "true" $\beta$ when the regressors are exogenous, $E[\epsilon x] = 0$. But exogeneity just means that the true $\beta$ is equal to $\beta^*$.
So, since "T-consistency" and exogeneity are empirically the same thing (strictly, exogeneity is a sufficient condition, but the conflation is standard), "yes" would be a fair answer.
Estimating the conditional mean (T-consistency) is a stronger requirement than estimating the linear projection (P-consistency).
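To spell out why (a standard projection argument, assuming the linear causal model $y = x'\beta + \epsilon$ with finite second moments): if the regressors are exogenous, $E[x\epsilon] = 0$, then
$$
\beta^* = E[xx']^{-1} E[xy] = E[xx']^{-1} E[x(x'\beta + \epsilon)] = \beta + E[xx']^{-1} E[x\epsilon] = \beta,
$$
so the true coefficient coincides with the projection coefficient. T-consistency then gives $\hat{\beta} \xrightarrow{p} \beta^*$, and since $R(\hat{\beta}) - R(\beta^*) = (\hat{\beta} - \beta^*)' E[xx'] (\hat{\beta} - \beta^*)$, the continuous mapping theorem delivers P-consistency.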
Addendum—Examples where P-consistency does not hold
Consider the case of the trivial regression on an intercept only (so that the linear predictor of $y$ is just $\beta$). In this case, P-consistency is equivalent to the LLN. If we can find a (strictly stationary, say) time series $x_t$ for which the LLN does not hold, then P-consistency does not hold for the regression $$ x_t = \beta \cdot 1 + u_t. $$
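To see the equivalence (a one-line calculation, assuming $x_t$ has finite variance): with only an intercept, $\hat{\beta}$ is the sample mean $\bar{x}_n$ and $\beta^* = E[x_t]$, so
$$
R(b) = E(x_t - b)^2 = \mathrm{Var}(x_t) + (b - E[x_t])^2
\quad\Rightarrow\quad
R(\bar{x}_n) - R(\beta^*) = (\bar{x}_n - E[x_t])^2,
$$
which goes to zero in probability exactly when the LLN $\bar{x}_n \xrightarrow{p} E[x_t]$ holds.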
Here is one such series. Take two i.i.d. series $x_{1,t}$ and $x_{2,t}$ such that $E[x_{1,t}] = 0$ and $E[x_{2,t}] = 1$. Define $$ x_t = \begin{cases} x_{1,t}, & \text{for all } t \text{, with probability } \tfrac{1}{2} \\ x_{2,t}, & \text{for all } t \text{, with probability } \tfrac{1}{2} \end{cases} $$ Then $E[x_{t}] = \frac{1}{2}$, but $$ \frac{1}{n}\sum_{t=1}^n x_t \rightarrow \begin{cases} 0 & \text{with probability } \tfrac{1}{2} \\ 1 & \text{with probability } \tfrac{1}{2} \end{cases} $$ Therefore P-consistency does not hold. This is the simplest example of a strictly stationary non-ergodic series. (Under ergodicity, one has the ergodic LLN.)
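A small simulation sketch of this mixture (taking $x_{1,t}$ and $x_{2,t}$ to be Gaussian for concreteness; that choice is mine, and the argument does not depend on it):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_mean(n):
    # The regime is drawn ONCE per realization and then held fixed for the whole series:
    # with prob 1/2 the series is i.i.d. with mean 0, with prob 1/2 i.i.d. with mean 1.
    mean = 0.0 if rng.random() < 0.5 else 1.0
    return rng.normal(loc=mean, size=n).mean()

print(np.round([sample_mean(10**6) for _ in range(10)], 3))
# Each realization's sample mean settles near 0 or near 1, depending on the regime drawn,
# never near the population mean E[x_t] = 1/2, so the LLN (and hence P-consistency) fails.
```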
Next we introduce an error term to get a linear regression model. Let $\epsilon_t \stackrel{i.i.d.}{\sim} (0, \sigma^2)$, let $(\epsilon_t)$ and $(x_t)$ be independent, and let $$ y_t = \beta x_t + \epsilon_t. $$ Let $\| \cdot \|$ denote the Euclidean norm on $\mathbb{R}^n$. Then $$ \left\| \frac{1}{n} (\hat{y}_n - y) \right\|^2 = \left( \frac{1}{n} \sum_{t=1}^n x_t \epsilon_t \right)^2, $$ which has neither an almost sure nor a probability limit, for similar reasons: $$ \frac{1}{n} \sum_{t=1}^n x_t \epsilon_t \rightarrow \begin{cases} 0 & \text{on a set } A \text{ with } P(A) = \tfrac{1}{2} \\ 1 & \text{on a set } A^c \text{ with } P(A^c) = \tfrac{1}{2} \end{cases} $$ Therefore P-consistency does not hold.
Empirical Comment
Any strictly stationary non-ergodic time series takes a form similar to $(x_t)$ above, after relaxing the i.i.d. assumption on $x_{1,t}$ and $x_{2,t}$ to just strict stationarity. Empirically one might say that such processes have "very long memory". This is in contrast with a merely long-memory series, which can be ergodic. For example, fractional Gaussian noise (FGN) is ergodic and has long memory (what makes it long memory is that the variance of its partial sums grows like $n^{\alpha}$ for some $\alpha > 1$). In particular, the ergodic LLN holds for FGN.
To the extent that one believes the long-memory property marks the upper bound of the dependence over time observed in real data series, perhaps one empirical take-away from the above example is that P-consistency can always be assumed to hold.
(The long-memory property was first observed by Hurst in Nile river data. It has also been suggested that stock returns could have long memory; see, e.g., here. I don't know of any empirical example where a stationary non-ergodic model has been entertained; inference seems impossible when the LLN does not hold.)