Single-sample unsupervised image-to-image translation (UI2I) has made significant progress with the development of generative adversarial networks (GANs). However, previous methods cannot capture complex textures in images while preserving the original content information. We propose SUGAN, a novel one-shot image translation framework based on a scale-variable U-Net structure (Scale-Unet). SUGAN uses Scale-Unet as a generator, continuously refining the network structure with multi-scale structures and progressive training to learn image features from coarse to fine. In addition, we propose a scale-pixel loss to better constrain the preservation of original content information and prevent information loss. Experiments on the public Summer-Winter and Horse-Zebra datasets show that, compared with SinGAN, TuiGAN, TSIT, StyTR2, and other methods, the SIFID of the generated images is reduced by 30%. The proposed method better preserves the content information of the image while generating detailed, realistic, high-quality images.
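The abstract describes the scale-pixel loss only at a high level, as a constraint that preserves content across scales. A minimal sketch of one plausible formulation is a pixel-wise L1 distance averaged over several downsampled versions of the generated and source images; the function names, average-pooling downsampler, and choice of scales below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def downsample(img, factor):
    """Average-pool an (H, W, C) image by an integer factor."""
    h, w, c = img.shape
    h2, w2 = h // factor, w // factor
    cropped = img[:h2 * factor, :w2 * factor]
    return cropped.reshape(h2, factor, w2, factor, c).mean(axis=(1, 3))

def scale_pixel_loss(generated, source, scales=(1, 2, 4)):
    """Mean L1 distance between generated and source images,
    averaged over several scales from fine (1) to coarse (4).

    A multi-scale pixel constraint of this kind penalizes content
    drift both locally (fine scale) and globally (coarse scale)."""
    return float(np.mean([
        np.abs(downsample(generated, s) - downsample(source, s)).mean()
        for s in scales
    ]))
```

In a sketch like this, the coarse scales constrain global layout while the fine scale constrains texture detail, which matches the abstract's coarse-to-fine learning scheme.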