Image style transfer via curved stroke rendering
The goal of image style transfer is to render the content of one image with the style of another. Image style transfer methods can be divided into traditional and neural style transfer methods. Traditional methods can be broadly classified into stroke-based rendering (SBR) and image analogy (IA). SBR simulates human drawing with strokes of different sizes. The main idea of IA is as follows: given a pair of images A (unprocessed source image) and A′ (processed image) and an unprocessed image B, the processed image B′ is obtained by processing B in the same way that A was processed into A′.

Neural style transfer methods, in turn, can be classified into slow image reconstruction methods based on online image optimization and fast image reconstruction methods based on offline model optimization. Slow image reconstruction methods optimize the image in pixel space, minimizing the objective function via gradient descent. Starting from random noise, the pixel values are iteratively updated until a target result image is obtained. Because each reconstruction requires many iterative optimizations in pixel space, this approach consumes considerable time and computational resources. To speed up this process, fast image reconstruction methods train a network in advance in a data-driven manner on a large amount of data. Given an input, the trained network needs only one forward pass to output a style-transferred image. In recent years, seminal works on style transfer have focused on building a neural network that can effectively extract the content and style features of an image and then combine these features to generate highly realistic images. However, building a model for each style is inefficient and requires considerable labor and time.
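The slow, optimization-based pipeline described above can be illustrated with a toy sketch. This is not the actual NST objective, which matches deep network features; here, for illustration only, gradient descent updates a noise image in pixel space to minimize a content term plus a stand-in "style" term that matches the global mean of a style image.

```python
import numpy as np

rng = np.random.default_rng(0)
content = rng.random((8, 8))   # stand-in "content" image
style = rng.random((8, 8))     # stand-in "style" image
x = rng.random((8, 8))         # optimization starts from random noise

lam, lr = 10.0, 0.05           # style weight and step size (illustrative)
losses = []
for _ in range(200):
    mean_gap = x.mean() - style.mean()
    # loss = content term + toy "style" term on a global statistic
    losses.append(np.sum((x - content) ** 2) + lam * mean_gap ** 2)
    # closed-form gradient of the two terms above
    grad = 2.0 * (x - content) + lam * 2.0 * mean_gap / x.size
    x -= lr * grad
```

The point of the sketch is the cost structure: every output image requires its own iterative optimization, which is exactly what fast, feed-forward methods amortize into a single trained network.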
One example of such a model is the neural style transfer (NST) algorithm, which transfers the texture of the style image to the content image by optimizing a noise image at the pixel level step by step. However, hand-painted paintings comprise different strokes made with brushes of different sizes and textures. Compared with human paintings, the NST algorithm only generates photo-realistic images and ignores paint strokes or stipples. Given that existing style transfer algorithms, such as Ganilla and Paint Transformer, suffer from loss of brush strokes and poor stroke flexibility, we propose a novel style transfer algorithm that quickly recreates the content of one image with curved strokes and then transfers another image's style to the re-rendered result. The images generated by our method resemble those made by humans. First, we segment the content image into subregions of different scales via a content mask, according to a customized number of superpixels. Since the background receives little attention, we segment it into a small number of large subregions; to preserve as much detail as possible, we segment the foreground into many small subregions. The Bézier equation is then used to generate thick strokes in the background and thin strokes in the foreground. The stroke-rendered image is then stylized with the style image by the style transfer algorithm, producing a stylized image that retains the stroke traces. Compared with arbitrary style transfer (AST) and Kotovenko's method, the deception rate of the proposed method is increased by 0.13 and 0.04, respectively, while its human deception rate is increased by 0.13 and 0.01. Compared with Paint Transformer and other stroke-based rendering algorithms, our method generates thin strokes in the texture-rich foreground region and thick strokes in the background, thus preserving large amounts of image detail.
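To make the curved-stroke step concrete, the following sketch samples points on a quadratic Bézier curve, B(t) = (1−t)²P₀ + 2(1−t)tP₁ + t²P₂. The specific control points, sample count, and the wide/narrow control-point spans standing in for "thick" background and "thin" foreground strokes are hypothetical choices for illustration, not the parameters produced by our segmentation.

```python
import numpy as np

def bezier_quadratic(p0, p1, p2, n=50):
    """Sample n points on the quadratic Bezier curve
    B(t) = (1-t)^2 * p0 + 2(1-t)t * p1 + t^2 * p2, t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    p0, p1, p2 = (np.asarray(p, float) for p in (p0, p1, p2))
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

# Illustrative strokes: a long, sweeping curve for a coarse background
# subregion and a short curve for a fine foreground subregion.
bg_stroke = bezier_quadratic((0, 0), (40, 80), (100, 0))
fg_stroke = bezier_quadratic((0, 0), (4, 8), (10, 0))
```

In practice, a stroke's thickness and curvature would be derived from the size and shape of its subregion, so coarse background subregions yield thick strokes and fine foreground subregions yield thin ones.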
Unlike whitening and coloring transforms (WCT), AdaIN, and other style transfer algorithms, the proposed method uses an image segmentation algorithm to generate stroke parameters without training, thereby improving efficiency and generating multi-style images with vivid colors that preserve the stroke drawing traces.
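For context, the AdaIN operation referenced above aligns the channel-wise mean and standard deviation of the content features to those of the style features. A minimal NumPy sketch follows; the (C, H, W) feature shape and eps value are illustrative.

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive instance normalization on (C, H, W) feature maps:
    normalize content features per channel, then rescale and shift them
    with the style features' per-channel std and mean."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True)
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True)
    return s_std * (content_feat - c_mean) / (c_std + eps) + s_mean
```

WCT goes further and matches the full feature covariance rather than only per-channel statistics; both, unlike our segmentation-driven stroke generation, operate purely on learned feature statistics.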