In this paper, we introduce a novel approach for gaining additional control over diffusion models that generate continuous functions, such as human motions. Specifically, we focus on controlling the smoothness of the generated motion without relying on smoothness labels in the dataset. Our approach builds on Hilbert Diffusion Models (HDM), modifying the underlying Hilbert space at inference time to regulate smoothness. By estimating smoothness information in a self-supervised manner, we address two key questions: whether incorporating Hilbert space structure during training is beneficial, and whether smoothness can be controlled without explicit labels. Our method employs multiple kernels to capture diverse temporal dependencies, overcoming the limitations of single-parameter approaches. Experimental results show that our method significantly improves training efficiency and successfully controls smoothness on both 1D synthetic data and human motion generation without compromising sample quality. These results point toward fine-grained control in generative models.
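To make the kernel-based smoothness mechanism concrete, the following is a minimal sketch, not the paper's implementation: it illustrates how the length-scale of a Gaussian-process covariance kernel governs the smoothness of sampled noise, and how a weighted mixture of kernels captures multiple temporal scales at once. The function names (`rbf_kernel`, `sample_gp_noise`), the RBF choice, and the specific length-scales and weights are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(t, length_scale):
    # Squared-exponential (RBF) Gram matrix over time points t;
    # a larger length_scale yields smoother sampled functions.
    d = t[:, None] - t[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def sample_gp_noise(t, kernels, weights, jitter=1e-6, rng=None):
    # Draw noise from a Gaussian process whose covariance is a
    # weighted mixture of kernels, so several temporal dependencies
    # are modeled jointly rather than by a single parameter.
    rng = np.random.default_rng() if rng is None else rng
    K = sum(w * k for w, k in zip(weights, kernels))
    L = np.linalg.cholesky(K + jitter * np.eye(len(t)))  # stabilize PSD factorization
    return L @ rng.standard_normal(len(t))

t = np.linspace(0.0, 1.0, 200)
# Two hypothetical length-scales: a slow trend and fast local detail.
kernels = [rbf_kernel(t, 0.20), rbf_kernel(t, 0.02)]

rough  = sample_gp_noise(t, kernels, weights=[0.2, 0.8])  # less smooth sample
smooth = sample_gp_noise(t, kernels, weights=[0.8, 0.2])  # smoother sample
```

Under these assumptions, shifting the mixture weights toward the longer length-scale at inference time plays the role of modifying the underlying Hilbert space to favor smoother generated functions.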