We investigate further improvement of boosting in the case that the target concept belongs to the class of $r$-of-$k$ threshold Boolean functions, which answer ``$+1$'' if at least $r$ of $k$ relevant variables are positive, and answer ``$-1$'' otherwise. Given $m$ examples of an $r$-of-$k$ function and literals as base hypotheses, popular boosting algorithms (e.g., AdaBoost) construct a consistent final hypothesis in $O(k^2 \log m)$ iterations. While this convergence speed is tight in general, we show that a modification of AdaBoost (confidence-rated AdaBoost or InfoBoost) can exploit the property of $r$-of-$k$ functions that base hypotheses make less error on one side, and find a consistent final hypothesis in $O(kr \log m)$ iterations. Our result extends the previous investigation by Hatano and Warmuth and gives more general examples where confidence-rated AdaBoost or InfoBoost has an advantage over AdaBoost.
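For concreteness, one way to write the target class described above (the symbols $n$, $S$, $x$, and $f_{r,k}$ are our own notation, not fixed by the text) is the following sketch: for $x \in \{-1,+1\}^n$ and a set $S$ of $k$ relevant variables,
\[
f_{r,k}(x) =
\begin{cases}
+1 & \text{if } \bigl|\{\, i \in S : x_i = +1 \,\}\bigr| \ge r,\\
-1 & \text{otherwise.}
\end{cases}
\]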