We improve the analyses of decision tree boosting algorithms. Our results consist of two parts. In the first part, we analyze a decision tree learning algorithm proposed by Takimoto and Maruoka that produces a $K$-way branching decision tree (where $K \geq 2$ is a given constant) for multiclass classification. Under the assumption that the algorithm can always obtain a weak hypothesis with advantage at least $\gamma$, Takimoto and Maruoka showed that $(1/\varepsilon)^{K/\gamma}$ boosting steps are sufficient for obtaining a tree with training error at most $\varepsilon$. We improve this bound by proving that $(1/\log K)(1/\varepsilon)^{(\log K)/\gamma}$ steps suffice. In the second part, we study a multi-way branching decision tree learning algorithm proposed by Mansour and McAllester that produces a tree whose nodes may have different numbers of branches. By giving a simpler analysis than the original one, we simplify and improve their algorithm.
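To make the improvement concrete, consider a worked example (assuming base-2 logarithms, which this summary does not specify): for $K = 8$, $\gamma = 0.1$, and $\varepsilon = 0.1$, we have $K/\gamma = 80$ while $(\log K)/\gamma = 30$, so the bound on the number of boosting steps falls from
\[
(1/\varepsilon)^{K/\gamma} = 10^{80}
\qquad\text{to}\qquad
\frac{1}{\log K}\,(1/\varepsilon)^{(\log K)/\gamma} = \tfrac{1}{3}\cdot 10^{30}.
\]
The sketch below is a minimal, schematic illustration of generic top-down decision tree boosting of the kind both results analyze: the tree is grown by repeatedly splitting a leaf with a weak hypothesis until the training error drops to $\varepsilon$. It is not the exact procedure of Takimoto and Maruoka or of Mansour and McAllester; the helper \texttt{weak\_learner} and the leaf-selection rule are hypothetical assumptions made only to give a runnable example.
\begin{verbatim}
# Schematic sketch of generic top-down decision tree boosting.
# NOT the authors' exact algorithms; `weak_learner` is assumed to
# return a K-way hypothesis h with h(x) in {0, ..., K-1}.
from dataclasses import dataclass, field

@dataclass
class Node:
    samples: list                           # (x, y) pairs reaching this node
    children: list = field(default_factory=list)
    split: object = None                    # weak hypothesis routing samples

def leaf_error(node):
    """Fraction of samples disagreeing with the leaf's majority label."""
    if not node.samples:
        return 0.0
    labels = [y for _, y in node.samples]
    majority = max(set(labels), key=labels.count)
    return sum(y != majority for y in labels) / len(labels)

def top_down_boost(samples, weak_learner, K, eps, max_steps=10000):
    """Grow a K-way tree, one boosting step per split, until the
    training error of the tree is at most eps."""
    root = Node(samples)
    leaves = [root]
    for _ in range(max_steps):
        total = sum(leaf_error(l) * len(l.samples) for l in leaves) / len(samples)
        if total <= eps:
            break
        # Split the leaf contributing most to the training error
        # (a hypothetical selection rule chosen for this sketch).
        leaf = max(leaves, key=lambda l: leaf_error(l) * len(l.samples))
        h = weak_learner(leaf.samples)
        buckets = [[] for _ in range(K)]
        for x, y in leaf.samples:
            buckets[h(x)].append((x, y))
        leaf.split = h
        leaf.children = [Node(b) for b in buckets]
        leaves.remove(leaf)
        leaves.extend(leaf.children)
    return root
\end{verbatim}
The bounds quoted above can be read as bounds on the number of iterations of this loop: each boosting step adds $K$ leaves, and the advantage-$\gamma$ assumption guarantees that every call to the weak learner makes measurable progress on the training error.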