Scalability is a key requirement for any KDD and data mining algorithm, and one of the biggest research challenges is to develop methods that scale to huge amounts of data. One possible approach for dealing with huge amounts of data is to take a random sample and do data mining on it, since for many data mining applications approximate answers are acceptable. However, as some researchers have argued, random sampling is hard to apply because of the difficulty of determining the appropriate sample size. In this paper, we take a sequential sampling approach to this difficulty, and propose an adaptive sampling algorithm that solves a general problem covering many problems arising in applications of discovery science. The algorithm obtains examples sequentially in an on-line fashion, and it determines from the examples obtained so far whether it has already seen enough of them. Thus, the sample size is not fixed a priori; instead, it adapts to the situation at hand. Thanks to this adaptiveness, if we are not in a worst-case situation, as fortunately happens in many practical applications, we can solve the problem with an appropriate number of examples, much smaller than the worst-case number.
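To illustrate the general flavor of such sequential sampling (this is only a sketch, not the algorithm analyzed in this paper), the following Python fragment estimates the mean of a {0,1}-valued property by drawing examples one at a time and stopping as soon as a Hoeffding-style confidence radius, with a union bound over stopping times, falls below a relative-accuracy target. The names `draw_example`, `epsilon`, `delta`, and `max_steps` are hypothetical and chosen only for the example.

```python
import math
import random


def adaptive_mean_estimate(draw_example, epsilon, delta, max_steps=1_000_000):
    """Estimate the mean of a {0,1}-valued property by sequential sampling.

    Illustrative sketch only: it stops once a Hoeffding-style confidence
    radius (with a union bound over steps) drops below epsilon times the
    current estimate, so the number of samples adapts to the unknown mean
    instead of being fixed in advance.
    """
    total = 0.0
    for t in range(1, max_steps + 1):
        total += draw_example()          # each call returns 0 or 1
        estimate = total / t
        # Confidence radius at step t; delta is split over steps so the
        # guarantee holds simultaneously for every possible stopping time.
        radius = math.sqrt(math.log(2 * t * (t + 1) / delta) / (2 * t))
        if radius <= epsilon * estimate:  # relative-accuracy stopping rule
            return estimate, t
    return total / max_steps, max_steps   # fall back after max_steps draws


if __name__ == "__main__":
    # Toy usage: examples are coin flips with unknown bias 0.3.
    est, n = adaptive_mean_estimate(lambda: random.random() < 0.3,
                                    epsilon=0.1, delta=0.05)
    print(f"estimate {est:.3f} after {n} samples")
```

In this sketch, a large true mean is detected quickly and the loop stops early, whereas a small mean forces more draws; this is the sense in which the sample size adapts to the situation rather than being set to a worst-case value in advance.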