In this paper we present an empirical comparison of the AdaBoost algorithm with a modification of it, called MadaBoost, that is suitable for the boosting-by-filtering framework. In boosting by filtering, one obtains at each stage an unweighted sample drawn at random from the current modified distribution, in contrast with boosting by subsampling, where one uses a weighted sample at each stage. A boosting algorithm for the filtering framework might be very useful when the dataset is very large and we want to reduce the running time of the base learner through random sampling. AdaBoost was originally designed for boosting by subsampling, and it can be shown that it is not suitable for boosting by filtering, since generating a new sample from the modified distribution at each iteration might take exponential time. The modification we propose is very simple: we bound the weights at each step, using their initial values as a saturation bound, so that the weights cannot become arbitrarily large, as happens in AdaBoost. We show experimentally that there is no significant difference in accuracy compared with AdaBoost, while MadaBoost is much more efficient in the time required to generate a sample from the modified distribution at each step. This difference might be crucial when applying boosting to a large dataset.
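To make the saturation bound concrete, the following minimal Python sketch computes the capped weights for one set of examples; the variable names and the exact form of the update are our illustrative assumptions, not the paper's precise definition.

```python
import numpy as np

def madaboost_weights(init_weights, margins):
    """Compute example weights with a MadaBoost-style saturation bound.

    init_weights -- the initial distribution D(x) over the examples,
                    used as the saturation bound
    margins      -- per-example cumulative margin, sum_t alpha_t * y * h_t(x)
    """
    # An AdaBoost-style weighting is proportional to D(x) * exp(-margin),
    # which grows without bound for consistently misclassified examples.
    adaboost_weights = init_weights * np.exp(-margins)
    # The modification: cap each weight at its initial value, so weights
    # can shrink but never exceed D(x).
    return np.minimum(adaboost_weights, init_weights)
```

Intuitively, because every capped weight is at most its initial value, an example drawn from the original distribution can be accepted with probability weight/init_weight, which never exceeds 1; this is one reason a rejection-style filter over the modified distribution can remain efficient, whereas AdaBoost's unbounded weights admit no such acceptance bound.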