Since the decision tree algorithm splits on an attribute at every step, the maximum depth of a decision tree is equal to the number of attributes in the data.
Is this correct?
No: a tree can split on the same attribute multiple times, at different thresholds. This characteristic of decision trees is important because it lets them capture nonlinear relationships in individual attributes.
Edit: in support of the point above, here's the first regression tree I created. Note that volatile acidity and alcohol each appear multiple times:
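The repeated-split behavior is easy to demonstrate. Here is a minimal sketch of a greedy regression tree on made-up data with a *single* attribute: because the target is a nonlinear (step-like) function of that attribute, the tree re-splits the same attribute at new thresholds and grows deeper than the attribute count. (The data and helper names are hypothetical, purely for illustration.)

```python
def best_split(xs, ys):
    """Pick the threshold on the one attribute that minimizes the
    summed squared error of the two resulting child means."""
    best = None
    for t in sorted(set(xs))[1:]:                      # candidate thresholds
        left  = [y for x, y in zip(xs, ys) if x <  t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        sse = (sum((y - sum(left)  / len(left))  ** 2 for y in left)
             + sum((y - sum(right) / len(right)) ** 2 for y in right))
        if best is None or sse < best[0]:
            best = (sse, t)
    return best[1]

def build(xs, ys, depth=0, max_depth=3):
    """Recursively grow the tree. With one attribute, every internal
    node necessarily re-splits that same attribute."""
    if depth == max_depth or len(set(ys)) == 1 or len(set(xs)) == 1:
        return sum(ys) / len(ys)                       # leaf: mean prediction
    t = best_split(xs, ys)
    L = [(x, y) for x, y in zip(xs, ys) if x < t]
    R = [(x, y) for x, y in zip(xs, ys) if x >= t]
    return (t,
            build(*zip(*L), depth + 1, max_depth),
            build(*zip(*R), depth + 1, max_depth))

def tree_depth(node):
    if not isinstance(node, tuple):                    # leaf
        return 0
    return 1 + max(tree_depth(node[1]), tree_depth(node[2]))

# One attribute, nonlinear step-like target (hypothetical data).
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0, 0, 5, 5, 1, 1, 8, 8]
depth = tree_depth(build(xs, ys))
print(depth)  # → 3: deeper than the single attribute would allow
              #   if each attribute could only be split once
```

The tree ends up splitting the lone attribute at three different thresholds along one path, which is exactly why depth is not bounded by the number of attributes.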