At present, art and animation creation typically begins with a sketch, which is then turned into a finished picture through a series of steps such as coloring. When the style needs to be changed, the work usually has to be re-colored, leading to a large amount of repeated manual effort. This paper exploits the advantages of deep neural networks, combining conditional adversarial networks with convolutional neural networks, to automate both the sketch-to-image step and the style-conversion step. Convolutional neural networks (CNNs) are the main tools for image recognition and detection; they learn features by minimizing a loss function. Although the feature learning process is automated, designing labels still requires considerable manpower. In contrast, generative adversarial networks (GANs) pair a generative model with a discriminative model and, while minimizing the loss, can use the loss function itself to generate new pictures.
Style transfer is the process of migrating a reference style onto another image to generate a new image. Feedforward image transformation has been widely studied. Many transformation tasks train a deep convolutional neural network with a per-pixel loss; going beyond per-pixel differences, some work formulates the CRF as an RNN trained jointly with the rest of the network. The structure of our transformation network is inspired by prior work: downsampling is used inside the network to reduce the spatial extent of the feature maps, followed by upsampling to produce the final output image. Some methods replace the per-pixel difference with a penalty on image gradients, or use a CRF loss layer to enforce output consistency. One feedforward model for colorizing grayscale images is trained with a per-pixel loss. A number of papers instead produce images by optimizing perceptual objectives, where perceptual quality depends on high-level features extracted from a CNN. Mahendran and Vedaldi inverted features from convolutional networks by minimizing a feature reconstruction loss, in order to understand what image information is stored at different network layers; similar methods have been used to invert local binary descriptors and HOG features. The work of Dosovitskiy and Brox is most relevant to ours: they train a feedforward neural network to invert convolutional features and quickly approximate the solution of an optimization problem. However, their feedforward network is trained with a per-pixel reconstruction loss, while our network directly uses a feature reconstruction loss. Gatys et al. demonstrated artistic style transfer, combining a content image with a style image by minimizing a cost function based on feature reconstruction; the style reconstruction cost is likewise based on high-level features from a pre-trained model. A similar method was previously used for texture synthesis. Their approach yields high-quality results, but the computational cost is very high, because each iteration of the optimization requires a forward and backward pass through the pre-trained network. To overcome this computational burden, this paper trains a feedforward neural network that quickly obtains a feasible solution.
Our system consists of two parts: an image transformation network fw and a loss network φ. The image transformation network is a deep residual network with weights W; it converts the input image x via the mapping y = fw(x). For the output image y, each loss function computes a scalar value li(y, yi), which measures the difference between the output y and a target image yi. The image transformation network is trained with SGD so that the weighted sum of this series of loss functions decreases. This paper implements the task of generating stylized art images from sketches. First, a conditional generative adversarial network is used: optimizing the loss function trains the mapping that generates a realistic image from the input sketch. We then train a feedforward network for the image transformation task; instead of constructing the loss from per-pixel differences, we use a perceptual loss function based on high-level features extracted from a pre-trained network. During training, the perceptual loss function measures the similarity between images better than a per-pixel loss does. After training, the image-translation sub-network achieves the expected effect, and because of the properties of the adversarial network, we no longer need to hand-design a mapping function as in an ordinary CNN. Experiments show that reasonable results can be achieved even without manually designing the loss function.
2. Related Model Analysis
2.1. Structured Losses for Image Modeling
Image-to-image translation problems are often formulated as per-pixel classification or regression, treating the output space as “unstructured”: each output pixel is regarded as conditionally independent of all other pixels given the input image. Conditional GANs instead learn a structured loss, which penalizes the joint configuration of the output. A large body of literature considers losses of this type, such as conditional random fields, the SSIM metric, feature matching, nonparametric losses, convolutional pseudo-priors, and losses based on matching covariance statistics. Our conditional GAN differs in that the loss is learned and can, in theory, penalize any structure that differs between output and target.
2.2. Conditional GANs
This paper is not the first to apply GANs in a conditional setting. Previous work has conditioned GANs on discrete labels, on text, and the like. Image-conditional GANs have tackled image inpainting, image prediction from normal maps, image editing under user constraints, video prediction, state prediction, and product generation and style transfer from photographs. These methods were each tailored to a specific application; our method is simpler than most of them.
Our choices for the generator and discriminator architectures also differ from previous work. Unlike prior work, our generator uses a “U-Net” structure, and the discriminator uses a convolutional “PatchGAN” classifier. A similar PatchGAN structure was previously proposed to capture local style statistics.
3. The Method of This Paper
3.1. Image Generation
GANs are generative models that learn a mapping from a random noise vector z to an output image y, G: z → y. In contrast, conditional GANs learn a mapping from an observed image x and a random noise vector z to y, G: {x, z} → y.
The generator G is trained to produce images that the discriminator D cannot distinguish from real ones, while the discriminator D is trained to detect the generator’s “fakes” as well as possible.
3.1.1. Objective Function for Image Generation
The objective of the conditional GAN can be written as:

LcGAN(G, D) = Ex,y[log D(x, y)] + Ex,z[log(1 − D(x, G(x, z)))].
G tries to minimize this objective while D tries to maximize it. To test the importance of conditioning the discriminator, we also compare against a variant in which the discriminator does not observe x. Previous conditional-GAN work found it beneficial to mix the GAN objective with a traditional loss: the discriminator’s job remains unchanged, but the generator must not only fool the discriminator, it must also produce outputs close to the ground truth. Based on this consideration, we use the L1 distance instead of L2, because L1 encourages less blurring:

LL1(G) = Ex,y,z[‖y − G(x, z)‖1].
The final objective is:

G* = arg minG maxD LcGAN(G, D) + λ LL1(G).
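As a concrete sketch, the mixed generator objective above can be evaluated numerically. The NumPy snippet below is illustrative only: the discriminator output is a placeholder probability rather than a real network, and the weight lambda = 100 is an assumption in the spirit of common conditional-GAN setups, not a value specified in this paper.

```python
import numpy as np

def generator_loss(d_fake, y_fake, y_real, lam=100.0):
    """Illustrative mixed objective for the generator: an adversarial term
    pushing it to fool the discriminator, plus an L1 term (weighted by lam)
    keeping the output close to the ground-truth image."""
    eps = 1e-8
    adv = -np.mean(np.log(d_fake + eps))   # non-saturating adversarial term
    l1 = np.mean(np.abs(y_real - y_fake))  # L1 distance encourages less blur
    return adv + lam * l1

# Toy check: a perfect output with a fully fooled discriminator gives ~0 loss.
y = np.ones((4, 4))
loss = generator_loss(d_fake=np.array([1.0]), y_fake=y, y_real=y)
```

With an imperfect output, the L1 term dominates because of the large lambda, which is exactly the intended balance: the adversarial term supplies sharpness while L1 supplies fidelity.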
3.1.2. Network Structure
This paper adopts the generator and discriminator structures of prior work; both use convolutional units of the form “conv-BatchNorm-ReLU”. The appendix provides details of the network structure; below we discuss only the main features.
Generator with skip connections
One characteristic of image-to-image translation is that a high-resolution input grid is mapped to a high-resolution output grid. Moreover, for the problems we consider, the input and output differ in surface appearance but share the same underlying structure, so structure in the input is roughly aligned with structure in the output. We design the generator around these considerations, following “U-Net” in adding skip connections. Specifically, we add a skip connection between layer i and layer n − i, where n is the total number of layers in the network; each skip connection simply concatenates the feature channels of layer i with those of layer n − i.
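The skip connection described above is just a channel-wise concatenation. The following minimal NumPy sketch (array shapes and names are illustrative, not taken from the paper) shows the operation on toy feature maps of shape (channels, height, width):

```python
import numpy as np

def unet_skip_concat(enc_feats, dec_feat, i):
    """Sketch of a U-Net skip connection: concatenate the encoder feature
    map at depth i with the decoder feature map at the mirrored depth n - i,
    along the channel axis."""
    return np.concatenate([enc_feats[i], dec_feat], axis=0)

# Toy encoder activations at two depths, and a decoder activation at the
# depth mirroring index 1.
enc = [np.zeros((64, 32, 32)), np.zeros((128, 16, 16))]
dec = np.zeros((128, 16, 16))
merged = unet_skip_concat(enc, dec, i=1)  # carries both sets of channels
```

The concatenation doubles the channel count at that depth, letting low-level structure (edges, outlines of the sketch) bypass the bottleneck and reach the decoder directly.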
A Markovian discriminator (PatchGAN)
It is well known that L1 and L2 losses produce blurry results on image generation problems. The discriminator we design therefore penalizes structure only at the scale of patches: it classifies each N × N patch of the image as real or fake. We run this discriminator convolutionally (as a sliding window) across the entire image and average all responses to obtain the final output of D. Such a discriminator effectively models the image as a Markov random field, assuming independence between pixels separated by more than a patch diameter. This connection has been studied before and is a common assumption in texture and style models, so our PatchGAN can be understood as a form of texture/style loss.
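The patch-then-average idea can be sketched directly. In the snippet below, `toy_patch_critic` is a placeholder standing in for a small convolutional classifier (an assumption for illustration, not the paper's actual network); the point is the sliding-window scoring and the averaging of patch decisions:

```python
import numpy as np

def patchgan_score(img, patch=16, stride=16):
    """Sketch of a PatchGAN-style discriminator output: score every
    N x N patch independently, then average the patch decisions."""
    def toy_patch_critic(p):
        # Placeholder "real" probability for one patch (sigmoid of its mean).
        return 1.0 / (1.0 + np.exp(-p.mean()))
    h, w = img.shape
    scores = [toy_patch_critic(img[i:i + patch, j:j + patch])
              for i in range(0, h - patch + 1, stride)
              for j in range(0, w - patch + 1, stride)]
    return float(np.mean(scores))

# A 64x64 image yields a 4x4 grid of 16x16 patches, i.e. 16 decisions.
score = patchgan_score(np.zeros((64, 64)))
```

Because each decision only sees an N × N window, the discriminator can only penalize local texture and style statistics, which is why it pairs well with the L1 term handling global structure.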
Optimization and inference
To optimize the network, we follow the standard approach of alternating between training D and G. We use minibatch SGD and apply the Adam optimizer. At inference time, we run the generator in the same way as in the training phase.
3.2. Style Transfer
The system consists of two parts: an image transformation network fw and a loss network φ (used to define a series of loss functions l1, l2, l3). The image transformation network is a deep residual network with weights W; it converts the input image x into the output image y via the mapping y = fw(x). Each loss function computes a scalar value li(y, yi) measuring the difference between the output y and the target image yi. The image transformation network is trained by SGD; the resulting effect is shown in Figure 1.
Training minimizes the weighted sum of this series of loss functions:

W* = arg minW Ex,{yi}[ Σi λi li(fW(x), yi) ].
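The weighted sum itself is trivial but worth making concrete; the weights λi below are illustrative hyperparameter values, not ones reported in the paper:

```python
def total_loss(losses, weights):
    """Weighted sum of the scalar losses li(y, yi); the weights lambda_i
    are training hyperparameters (the values used here are illustrative)."""
    return float(sum(w * l for w, l in zip(weights, losses)))

# e.g. a content loss of 2.0 and a style loss of 0.5, weighted 1.0 and 10.0:
combined = total_loss([2.0, 0.5], [1.0, 10.0])  # 1.0*2.0 + 10.0*0.5
```

In practice the relative weights decide the trade-off between content fidelity and style strength, so they are usually tuned per style.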
We use a network φ pre-trained for image classification to define our loss functions. We then train our deep convolutional transformation network using a
Figure 1. Style transfer effect chart. (a) Content (b) Style (c) Result.
Figure 2. Training network diagram.
loss function that is also a deep convolutional network, as shown in Figure 2. The loss network φ defines a feature (content) loss lfeat and a style loss lstyle, measuring differences in content and style respectively. For each input image x we have a content target yc and a style target ys. For style transfer, the content target yc is the input image x itself, and the output image y should combine the style of ys with the content of x = yc. We train one network per target style.
3.2.1. Construction of the Image Transformation Network
We use strided and fractionally-strided convolutions instead of any pooling layers. Our network contains five residual blocks. All non-residual convolutional layers are followed by spatial batch normalization and a ReLU nonlinearity, with the exception of the last output layer, which uses a scaled tanh to ensure that the output pixels lie in [0, 255]. Apart from the first and last layers, which use 9 × 9 kernels, all convolutional layers use 3 × 3 kernels.
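One plausible form of the scaled tanh output layer is sketched below; the exact scaling constants are not specified in the text, so this particular mapping into [0, 255] is an assumption:

```python
import numpy as np

def scaled_tanh(x):
    """Illustrative scaled tanh output layer: tanh maps activations into
    (-1, 1), then an affine rescaling maps them into the valid pixel
    range [0, 255]. The exact constants are an assumption."""
    return 255.0 * (np.tanh(x) + 1.0) / 2.0

vals = scaled_tanh(np.array([-10.0, 0.0, 10.0]))  # always inside [0, 255]
```

Bounding the output this way removes the need to clip pixels after the fact, unlike the optimization-based baseline discussed in Section 4.2.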
Input and Output: For style transfer, both input and output are color images of size 3 × 256 × 256. For super-resolution reconstruction with upsampling factor f, the output is a high-resolution image of size 3 × 288 × 288 and the input is a low-resolution image of size 3 × 288/f × 288/f. Because the image transformation network is fully convolutional, at test time it can be applied to images of any resolution.
Downsampling and Upsampling: For super-resolution reconstruction with upsampling factor f, we use several residual blocks followed by log2 f convolutional layers with stride 1/2. This differs from prior work, in which bicubic interpolation is used to upsample the low-resolution input before it enters the network. Rather than relying on any fixed upsampling function, fractionally-strided convolution allows the upsampling to be learned jointly with the rest of the network. For image transformation, our network uses two stride-2 convolutions to downsample the input, followed by several residual blocks, and then two stride-1/2 convolutional layers to upsample.
3.2.2. Perceptual Loss Function
We define two perceptual loss functions that measure high-level perceptual and semantic differences between images, using a network pre-trained for image classification. In our experiments this model is VGG-16, pre-trained on the ImageNet dataset.
Feature (content) loss: Rather than comparing images pixel by pixel, we use VGG to compute high-level feature (content) representations, in the same spirit as the original style-transfer work, which extracts style features with VGG-19. The feature loss at layer j is:

lfeat(y, yc) = (1 / CjHjWj) ‖φj(y) − φj(yc)‖2².
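A minimal sketch of this content loss on toy arrays (these are placeholder feature maps, not real VGG activations):

```python
import numpy as np

def feature_loss(phi_y, phi_yc):
    """Feature (content) loss: squared Euclidean distance between the
    layer-j feature maps of output and content target, normalized by the
    map size Cj*Hj*Wj."""
    c, h, w = phi_y.shape
    return float(np.sum((phi_y - phi_yc) ** 2) / (c * h * w))

f = np.ones((4, 2, 2))
same = feature_loss(f, f)        # identical features: zero content loss
offset = feature_loss(f, 2 * f)  # unit offset everywhere: mean squared diff 1
```

Because the comparison happens in feature space rather than pixel space, an output that preserves objects and layout scores well even if individual pixel values differ.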
Style Loss: The feature (content) loss penalizes the output image when its content deviates from the target, so we also want to penalize deviations in style: colors, textures, common patterns, and so on. To achieve this, Gatys et al. proposed the following style reconstruction loss. Let φj(x) be the activations of the jth layer of the network φ for input x, a feature map of shape Cj × Hj × Wj, and define the Gram matrix Gj(x) as the Cj × Cj matrix whose elements are given by:

Gj(x)c,c′ = (1 / CjHjWj) Σh Σw φj(x)h,w,c φj(x)h,w,c′.
If we interpret φj(x) as Cj features, each of size Hj × Wj, then Gj(x) is proportional to the uncentered covariance of the Cj-dimensional features, treating each grid position as an independent sample; it therefore captures which features tend to activate together. The Gram matrix can be computed efficiently by reshaping φj(x) into a matrix ψ of shape Cj × HjWj; then Gj(x) = ψψT / CjHjWj. The style reconstruction loss, the squared Frobenius norm of the difference between the Gram matrices of output and target, is well defined even when output and target have different sizes, because their Gram matrices always have the same shape.
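The reshaping trick and the size-independence property can both be demonstrated in a few lines of NumPy (again on toy feature maps, not real VGG activations):

```python
import numpy as np

def gram_matrix(phi):
    """Gram matrix of a Cj x Hj x Wj feature map: reshape into psi of
    shape Cj x (Hj*Wj), then compute psi @ psi.T / (Cj*Hj*Wj)."""
    c, h, w = phi.shape
    psi = phi.reshape(c, h * w)
    return psi @ psi.T / (c * h * w)

def style_loss(phi_out, phi_target):
    """Squared Frobenius distance between the two Gram matrices; well
    defined even when the feature maps have different spatial sizes."""
    diff = gram_matrix(phi_out) - gram_matrix(phi_target)
    return float(np.sum(diff ** 2))

a = np.random.rand(8, 4, 4)
b = np.random.rand(8, 6, 6)  # different spatial size, same channel count
loss = style_loss(a, b)      # still computable: both Grams are 8 x 8
```

Discarding spatial positions in the Gram matrix is precisely what makes this a style statistic: it records which feature pairs co-occur, not where they occur.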
4. Main Results
4.1. Conditional Adversarial Network Model
To test the versatility of GANs, we evaluated the method on a variety of tasks and datasets, including graphics tasks (such as photo generation) and vision tasks (such as semantic segmentation). We found that very good results are often obtained on small datasets: our training set contains only 400 images, and training is very fast at this size. Some of the hyperparameters are shown in Table 1.
Qualitative results: we display the trained model and the images it actually generates. Three groups of pictures are shown in Figure 3: the first column is the input, the second column is the output (model generation result), and the third column is the actual result. Equation (8) is the evaluation formula used; across many experiments, our average value is around 0.4.
Table 1. Training hyperparameter selection and result numerical mapping ratio.
Figure 3. Conditional adversarial network model results. (a) Input (b) Model generation result (c) Actual result.
4.2. Style Transfer
The goal of style transfer is to produce a picture that combines the content information of the content image with the style information of the style image. As a baseline, we reproduce the method of Gatys et al.: given the style and content targets ys and yc, with layers j and J used for feature and style reconstruction respectively, the optimization is:

y* = arg miny λc lfeat(y, yc) + λs lstyle(y, ys) + λTV lTV(y).
Here the λ are weighting parameters; y is initialized to white noise and optimized with L-BFGS. We found that unconstrained optimization often drives the output pixel values outside [0, 255], so to make a fairer comparison, the baseline uses projected L-BFGS, clipping the image y to [0, 255] at each iteration. In most cases the optimization converges to satisfactory results within 500 iterations. This is slow, because each L-BFGS iteration requires a forward and backward pass through the VGG-16 network.
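The projection step paired with L-BFGS in the baseline is simply a clip of the optimized image back into the valid pixel range, as sketched here:

```python
import numpy as np

def project_pixels(y, lo=0.0, hi=255.0):
    """Projection step used with L-BFGS: after each iteration, clip the
    optimized image back into the valid pixel range [0, 255]."""
    return np.clip(y, lo, hi)

y = project_pixels(np.array([-20.0, 100.0, 300.0]))
```

The feedforward network avoids this step entirely, since its scaled tanh output layer already constrains pixels to [0, 255].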
Training details: Our style transfer networks are trained on the COCO dataset. We resize each of the 80,000 training images to 256 × 256 and train with batch size 4 for 40,000 iterations, roughly two epochs, using Adam with an initial learning rate of 0.001. The output is regularized with total variation (strength between 1e-6 and 1e-4, selected on a cross-validation set). There is no weight decay or dropout, because the model does not overfit within these two epochs. For all style transfer experiments we take the relu2_2 layer of VGG-16 for content, and relu1_2, relu2_2, relu3_3 and relu4_3 for style. Our experiments use Torch and cuDNN; training takes about 4 hours on a GTX Titan X GPU.
Qualitative results: We tested the trained models on real inputs and selected four groups of images, shown in Figure 4. In the figure, column (a) contains the content images providing content features, and column (b) the style images providing style textures; we train a different model for each style. As column (c) of Figure 4 shows, compared with the optimization-based method, our network produces results of comparable quality while achieving roughly a three-orders-of-magnitude speedup, which is of great significance for practical applications. Over many experiments, the average time to produce an image was around 10 seconds.
4.3. Model Combination
We combine the conditional adversarial network model with the style transfer model and achieve a good combined effect. The specific results are shown in Figure 5: column (a) is the sketch, column (b) is the generated result, and column (c) is the effect after style transfer.
Figure 4. Schematic diagram of style transfer results. (a) Content (b) Style (c) Result.
Figure 5. Schematic diagram of the results of the model. (a) input (b) output1 (c) output2.
In this paper, we combined the advantages of feedforward networks and optimization-based approaches, achieving good quality and speed by training a feedforward network with a perceptual loss function. We used a conditional adversarial network to implement image translation, and finally combined the two models to achieve a useful effect in a specific application scenario. However, the transfer of fine details is still imperfect. To address the lack of detail across different image styles, we will improve the network in two respects. First, although a trained model generates images very quickly, training still takes several hours; we hope to optimize the training process and reduce training time. Second, for finer treatment of image details, more detail-extraction capacity can be added to the style transfer network, to achieve more realistic comic-style transfer, imitate the strokes of different painters, and adapt parameters separately for buildings and characters.